\chapter{Introduction}
Efforts to find the speed of light began in the seventeenth century.
Only then did the first evidence appear that light has a finite, measurable speed; until that time it was almost universally believed that the speed of light is infinite. Afterwards many scientists, such as Armand Fizeau and Albert Michelson, worked on finding its value. In 1983 the International Bureau of Weights and Measures fixed the speed of light at $299{,}792{,}458\,\mathrm{m/s}$.
In 1905 Einstein presented the theory of special relativity based on two postulates: first, that physical laws are invariant in all inertial frames, and second, the invariance of the speed of light. Based only on these two hypotheses, and without referring to the laws of mechanics, electromagnetism or other fundamental physical theories, he derived the Lorentz transformations. Keeping the speed of light invariant, these transformations took the place of the Galilean transformations.
Another fundamental physical constant is the Planck length, $l_p$, which has the dimension of length and is obtained by combining the gravitational constant, $G$, the speed of light, $c$, and the Planck constant, $\hbar$ \cite{gaarder2016gravitational}. After finding the constant of action, known as the Planck constant, Max Planck noted that using the three constants $G$, $c$ and $\hbar$ it is possible to define a universal constant of length. In general relativity the distance between two points is a dynamical quantity, obtained by solving Einstein's field equations, and by the laws of quantum mechanics every dynamical quantity must satisfy the uncertainty principle. Actually, $l_p$ is the distance at which quantum effects appear. Lorentz transformations do not preserve this minimum length. Just as Lorentz transformations replaced Galilean transformations, it is expected that a new symmetry group that preserves both $l_p$ and $c$ becomes relevant.
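Planck's observation can be made explicit by a short dimensional analysis: requiring the combination $G^{a}c^{b}\hbar^{d}$ to carry the dimension of length, with $[G]=\mathrm{m^{3}\,kg^{-1}\,s^{-2}}$, $[c]=\mathrm{m\,s^{-1}}$ and $[\hbar]=\mathrm{kg\,m^{2}\,s^{-1}}$, one finds the unique solution
\begin{equation}
l_p=\sqrt{\frac{\hbar G}{c^{3}}}\sim 10^{-35}\,\mathrm{m}.
\end{equation}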
Lorentz transformations can also be performed in de Sitter space-time, as it is homogeneous \cite{aldrovandi2007sitter}. Hence the Minkowski background in physical theories may be replaced by de Sitter space-time, whose symmetry group is $SO(4,1)$ \cite{abbott1982stability,hawking1973large}. These transformations preserve a length scale, so it is possible to escape the problem mentioned above \cite{aldrovandi2007sitter,cacciatori2008special,nakahara2018geometry}.
Also, observations show that our universe expands at an accelerating rate, and a model with a positive cosmological constant is more appropriate to describe it \cite{riess1998observational}; hence it is useful to study de Sitter space-time. Different topics have to be considered to be sure about this choice. Actually, the de Sitter horizon complicates many things. As in the flat case, it is important to consider the space-time's boundaries \cite{wald1999gravitational,wald2000general}. In the presence of a positive cosmological constant, null infinity, $\mathcal{I}$, is no longer null but spacelike, and cannot be studied like the asymptotically flat $\mathcal{I}$ studied by Bondi et al. in 1962 \cite{bondi1962gravitational}. They rewrote the Minkowski metric in Bondi coordinates, $(u,r,x^A)$, where $u=t-r$ is the retarded time and $x^A=(\theta,\varphi)$; using Dirichlet boundary conditions, they found a meaningful notion of energy. But it is not possible to follow the same route in asymptotically de Sitter space-time, because with these boundary conditions gravitational waves do not carry away de Sitter charges across future null infinity \cite{ashtekar2014asymptotics,ashtekar2015asymptotics}. Instead, the Fefferman-Graham method \cite{fefferman1985conformal,fefferman2011ambient} and the Newman-Penrose formalism \cite{penrose1965remarkable,penrose1984spinors} can be used to study $\mathcal{I}$ in de Sitter space-time \cite{saw2016mass}. It is also possible to add the
Neumann boundary condition $J^{AB}=0$ to the Dirichlet boundary conditions and find finite, conserved, integrable and generically non-vanishing charges \cite{compere2019lambda}.
The de Sitter solution of Einstein's field equations is obtained in chapter two, where different coordinate systems for de Sitter space-time are also considered. The symmetry group of de Sitter space-time is obtained in chapter three, where it is explained how these transformations can preserve a minimum length. Asymptotically flat \cite{bondi1962gravitational,penrose1965remarkable,newman1966note,arnowitt2008republication,moreschi1987general,bros2002asymptotic} and asymptotically de Sitter space-times \cite{addazi2020conformal,anderson2005structure, aneesh2019conserved,anninos2011asymptotic,anninos2019sitter,ashtekar2014asymptotics,ashtekar2015asymptotics,ashtekar2019asymptotics,ashtekar2014geometry} are considered in chapter four.
\chapter{De Sitter space-time}
There exist three maximally symmetric vacuum solutions of Einstein's equations, known as de Sitter, anti-de Sitter and Minkowski space-times, with positive, negative and zero curvature respectively. Here the focus is on the solution of Einstein's equations in the presence of a positive cosmological constant; then different coordinate systems and Killing vector fields are considered.
\section{De Sitter metric}
A symmetric tensor of type
$(0,2)$,
known as the metric tensor, relates two vectors of the tangent space $T_{p}$ \cite{stephani2009exact, penrose1984spinors}
\begin{equation}
\eta_{\alpha\beta}e_{\mu}^{\alpha} e_{\nu}^{\beta}=g_{\mu \nu}
\end{equation}
where
$ \begin{Bmatrix}
e_{\mu} ^{\alpha}
\end{Bmatrix}$
is a null tetrad that consists of two real null vectors
$l$,
$k$
and two complex conjugate null vectors
$m$,
$\bar{m}$ \cite{penrose1984spinors}
\begin{equation}
\begin{Bmatrix}
e_{\mu}^{\alpha}
\end{Bmatrix}
=(m,\bar{m},l,k).
\end{equation}
One can write the length element
$ds^{2}$
using the dual vector basis $\omega^{\mu}$
in
$T^*_{p}$ as \cite{fukuyama2009comments}
\begin{equation}
\label{eq:metric}
ds^{2}=g_{\mu \nu}\omega^{\mu}\omega^{\nu}.
\end{equation}
Also, in a coordinate basis (holonomic frame), equation
\eqref{eq:metric}
turns into
\begin{equation}
ds^{2}=g_{\mu \nu}dx^{\mu}dx^{\nu}.
\end{equation}
Actually, one can define four spacelike vectors,
$e^{\alpha}$
and a timelike vector,
$e^{0}\equiv X^{0}$, in five dimensions
as
\begin{equation}
e^{\mu}
=
(X^{0},e^{\alpha})
=
(X^{0},X^{1},X^{2},X^{3},X^{4})
.
\end{equation}
These vectors satisfy the relations
\begin{equation}
e^{\alpha} e_{\beta}=\delta^{\alpha}_{\beta}\,,\qquad X^{0} X_{0}=-1\,,\qquad e^{\alpha} X_{0}=0.
\end{equation}
Then one can write \cite{tod2015some}
\begin{equation}
l^{2}=-(X^{0})^{2}+(X^{1})^{2}+(X^{2})^{2}+(X^{3})^{2}+(X^{4})^{2}.
\label{eq:2.00}
\end{equation}
Equation \eqref{eq:2.00} describes a hyperboloid embedded in five-dimensional Minkowski space-time with the line element
\begin{equation}
ds^2=-dx^{2}_{0}+dx^{2}_{1}+dx^{2}_{2}+dx^{2}_{3}+dx^{2}_{4}.
\label{eq:2.200}
\end{equation}
This relation can also be obtained by solving Einstein's equations (see section \ref{sec:2.2.2}).
\section{Solving Einstein's equations}
Most physical theories are introduced by mathematical models and are described by a set of differential equations. Among gravitational theories, Einstein's theory has been accepted as the most successful.
In this case, differential equations are written, considering that space and time can be introduced with a pseudo Riemannian manifold and a distribution of interaction of matter and gravity.
Usually we search for exact solutions or, if possible, a general solution of the differential equations. Many of these exact solutions are not physical, but many others, such as the Schwarzschild and Kerr solutions for black holes and the Friedmann-Lemaître-Robertson-Walker solution for cosmology, are \cite{stephani2009exact}. Without any restriction on the energy-momentum tensor, any metric is a solution of these equations, because the equations then merely define $T_{\mu \nu}$. We can apply symmetry conditions to the metric, for example by imposing algebraic constraints on the Riemann tensor or by selecting boundary conditions. The field equations in the presence of a cosmological constant read
\begin{equation}
G_{\mu \nu}+\Lambda g_{\mu \nu}=(8 \pi G /c^{4})T_{\mu \nu},
\label{eq:2.1}
\end{equation}
where one can set the speed of light equal to one in relation \eqref{eq:2.1}.
Taking the matter field to be zero
$(T_{\mu \nu}=0)$
is one of the conditions that simplify equations \eqref{eq:2.1}. Solving Einstein's equations in the presence of a cosmological constant for an isotropic, homogeneous model, one obtains the simplest inflationary solutions
\begin{equation}
R_{\mu \nu} -1/2 R g_{\mu \nu}= -\Lambda g_{\mu \nu}.
\label{eq:2.3}
\end{equation}
Ricci scalar and Ricci tensor for this model are
$R=4 \Lambda$, $R_{\mu \nu}=\Lambda g_{\mu \nu}$.
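These values follow from the trace of \eqref{eq:2.3}: contracting with $g^{\mu\nu}$ (using $g^{\mu\nu}g_{\mu\nu}=4$) gives
\begin{equation}
R-2R=-4\Lambda\quad\Longrightarrow\quad R=4\Lambda,
\end{equation}
and substituting this back into \eqref{eq:2.3} yields $R_{\mu\nu}=\Lambda g_{\mu\nu}$.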
\subsection{Static and spherically symmetric coordinates}
To be able to solve equations
\eqref{eq:2.1}
one has to start from a general form of the metric \cite{lenk2010general,gron2007homogeneous,carroll2019spacetime}
\begin{equation}
g_{\mu \nu}=
\begin{bmatrix}
A(t,r,\theta,\varphi)&B(t,r,\theta,\varphi)&C(t,r,\theta,\varphi)&D(t,r,\theta,\varphi)\\
B(t,r,\theta,\varphi)&E(t,r,\theta,\varphi)&F(t,r,\theta,\varphi)&G(t,r,\theta,\varphi)\\
C(t,r,\theta,\varphi)&F(t,r,\theta,\varphi)&H(t,r,\theta,\varphi)&I(t,r,\theta,\varphi)\\
D(t,r,\theta,\varphi)&G(t,r,\theta,\varphi)&I(t,r,\theta,\varphi)&J(t,r,\theta,\varphi)
\end{bmatrix}.
\label{eq:2.5}
\end{equation}
The Ricci tensor can be computed from the metric \eqref{eq:2.5}, and the result can be used in \eqref{eq:2.1} to find the final form of the metric. To obtain an exact solution, the metric is taken to be stationary, which means a timelike Killing vector field exists and a timelike coordinate can be defined along it
($\frac{\partial g_{\mu \nu}}{\partial x^{0}}=0$, where $x^0$ is a timelike coordinate) \cite{d1992introducing}. Being stationary does not prevent the metric from containing mixed terms such as $dt\,dr$. To remove such terms one imposes the further condition that the metric be static, i.e. invariant under time reversal, so that the mixed terms are eliminated. Then one can apply spherical symmetry, which means that space-time has three spacelike Killing vector fields
$X^{\alpha}$,
with the following relation
\begin{equation}
[X^1,X^2]=X^3\quad ,\quad [X^2,X^3]=X^1 \quad,\quad [X^3,X^1]=X^2.
\end{equation}
Finally the metric takes the simplified form
\begin{equation}
ds^{2}=-e^{A(r)} dt^{2}+e^{B(r)} dr^{2}+r^{2} d {\theta}^{2}+r^{2} \sin^{2}{\theta}\, d{\varphi}^{2}.
\label{eq:2.22}
\end{equation}
The coefficients have been written as exponentials, $e^{A(r)}$ and $e^{B(r)}$, because these metric components are always positive; this choice also simplifies the calculations.
\subsection{Ricci tensor calculation}
According to the line element
\eqref{eq:2.22}
one can obtain the non-zero components of the Ricci tensor as below
\begin{align}
\label{eq:2.500}
&R_{tt} = {e} ^ {A-B} (1/2A''-1/4A'B'+1/4 {A'} ^ {2} +A'/r),\\
&R_{rr} =-1/2A''+1/4A'B'-1/4 {A'} ^ {2} +B'/r,\notag\\
&R_{\theta \theta } =- {e} ^ {-B} \Big(1+ \frac {r(A'-B')} {2}\Big) +1,\notag\\
&R_{\varphi \varphi}=\sin^{2}{\theta}\,R_{\theta \theta}.\notag
\end{align}
Putting these relations in
\eqref{eq:2.3}
it is possible to write
\begin{equation}
\Lambda {e} ^ {A} = {e} ^ {A-B} (1/2A''-1/4A'B'+1/4 {A'} ^ {2} +A'/r)+2\Lambda e^{A}
\label{eq:2.8}
\end{equation}
and
\begin{equation}
-\Lambda {e} ^ {B} =-1/2A''+1/4A'B'-1/4 {A'} ^ {2} +B'/r-2\Lambda e^{B}.
\label{eq:2.9}
\end{equation}
Combining
\eqref{eq:2.8}
and
\eqref{eq:2.9}
one finds
\begin{equation}
A'=-B'\quad,\quad A=-B,
\end{equation}
where the integration constant is set to zero.
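This elimination can be made explicit: multiplying \eqref{eq:2.8} by $e^{B-A}$ and adding the result to \eqref{eq:2.9}, the second-derivative and cosmological-constant terms cancel, leaving
\begin{equation}
\frac{A'+B'}{r}=0\quad\Longrightarrow\quad A'=-B'.
\end{equation}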
If one puts the third relation of \eqref{eq:2.500}
in
$R_{\mu \nu}=\Lambda g_{\mu \nu}$
then
\begin{align}
&e^{A}(1+rA')=1-\Lambda r^{2}\\
&X\equiv e^{A(r)}\notag\\
&X+rX'=1-\Lambda r^{2}\notag\\
&\frac{d}{dr}(rX)=\frac{d}{dr}(r-(\Lambda/3)r^{3})\notag\\
&rX=r-(\Lambda/3)r^{3}+C,\notag
\end{align}
where $C$ is an integration constant. In the Newtonian limit, a point mass at the origin $O$ produces the potential $\phi=-\frac{GM}{r}$ \cite{d1992introducing}. In the weak-field limit this potential gives
\begin{equation}
g_{00}\simeq 1+2\phi/c^2=1-2GM/c^2r,
\end{equation}
therefore
$C\equiv -2GM/c^2$. Setting
$c$
and
$G$
equal to one, $e^{A}$ is obtained as follows
\begin{equation}
e^{A}=1-(\Lambda /3) r^{2}-2M/r.
\end{equation}
Putting this relation in \eqref{eq:2.22} the line element becomes
\begin{equation}
{ds} ^ {2} =- (1-\frac {2M} {r} - {\Lambda} \frac {{r} ^ {2}}{3} ) {dt} ^ {2} + { (1- \frac{2M} {r} - {\Lambda} \frac {{r} ^ {2}}{3} )} ^ {-1} {dr} ^ {2} + {r} ^ {2} {d \Omega} ^ {2}.
\label{eq:2.14}
\end{equation}
Actually, this is the Schwarzschild-de Sitter solution. Setting $\Lambda=0$ in equation \eqref{eq:2.14} recovers the
Schwarzschild solution of Einstein's equations, while setting $M=0$ yields the de Sitter solution
\begin{equation}
{ds} ^ {2} =- (1 - {\Lambda} \frac {{r} ^ {2}}{3} ) {dt} ^ {2} + { (1 - {\Lambda} \frac {{r} ^ {2}}{3} )} ^ {-1} {dr} ^ {2} + {r} ^ {2} {d \Omega} ^ {2}.
\label{eq:2.18}
\end{equation}
Defining $l\equiv\sqrt{3/ \Lambda}$, \eqref{eq:2.18} becomes
\begin{equation}
{ds} ^ {2} =- (1 - \frac {{r} ^ {2}}{l^{2}} ) {dt} ^ {2} + { (1 - \frac {{r} ^ {2}}{l^{2}} )} ^ {-1} {dr} ^ {2} + {r} ^ {2} {d \Omega} ^ {2}.
\label{eq:2.29}
\end{equation}
This is de Sitter line element.
\section{De Sitter horizon}
At $r=l$ the line element
\eqref{eq:2.29} becomes singular; this is only a coordinate singularity, caused by the choice of coordinates. The difference between this horizon and the Schwarzschild horizon is that in de Sitter space-time each observer has his/her own horizon. As a result
$(t,r,\theta,\varphi)$
are not appropriate coordinates to describe the whole de Sitter manifold; more precisely, they are unable to reach beyond the horizon. Therefore this coordinate system does not describe the whole de Sitter space-time but only its static patch \cite{hawking1973large}. It is thus useful to find other coordinate systems that allow the study of the region beyond the horizon.
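The horizon location follows directly from the metric: the coefficient of $dt^{2}$ in \eqref{eq:2.29} vanishes when
\begin{equation}
1-\frac{r^{2}}{l^{2}}=0\quad\Longrightarrow\quad r=l=\sqrt{\frac{3}{\Lambda}},
\end{equation}
while the curvature invariants remain finite there, confirming that this is a coordinate rather than a curvature singularity.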
\section{Embedding de Sitter space-time in five dimensional Minkowski space-time\label{sec:2.2.2} }
Different coordinate systems can be defined for each space-time, and de Sitter space-time is no exception. One can define $(x_0,x_1,x_2,x_3,x_4)$ as follows
\begin{align}
\label{eq: embedd}
&t=l\tanh^{-1}(x_0/x_1),\\
&r=\sqrt{x_0^{2}-x_1^{2}+l^2}\notag,\\
&\theta=\cos^{-1}(\frac{x_4}{\sqrt{x_0^{2}-x_1^{2}+l^2}}),\notag\\
&\varphi=\tan^{-1}(x_3/x_2)\notag.
\end{align}
Substituting these relations into
\eqref{eq:2.29},
the following relation is obtained
\begin{equation}
ds^{2}=-dx^{2}_{0}+dx^{2}_{1}+dx^{2}_{2}+dx^{2}_{3}+dx^{2}_{4},
\label{eq:(2.25)}
\end{equation}
The inverse transformations of \eqref{eq: embedd} can be written as \cite{pascu2012atlas}
\begin{align}
\label{eq:2.30}
&{x}_0 =l \sqrt{ (1-\frac{{r}^{2}}{{l}^{2}} )} \sinh(\frac{t}{l} ),\\
&{x}_{1} =l \sqrt{ (1-\frac{{r}^{2}}{{l}^{2}})} \cosh(\frac{t}{l}),\notag\\
&x_{2}=r \sin{\theta}\cos{\varphi},\notag\\
&x_{3}=r \sin{\theta}\sin{\varphi},\notag\\
&x_{4}=r \cos{\theta}.\notag
\end{align}
Squaring these components and summing them, one finds
\begin{equation}
-x^{2}_{0}+x^{2}_{1}+x^{2}_{2}+x^{2}_{3}+x^{2}_{4}=l^{2}.
\label{eq:(2.26)}
\end{equation}
This is the equation of a hyperboloid embedded in five-dimensional Minkowski space-time and is analogous to \eqref{eq:2.200}. Hence relation \eqref{eq:2.200} can indeed be obtained by solving Einstein's equations.
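This can also be checked directly from \eqref{eq:2.30}:
\begin{align}
-x_{0}^{2}+x_{1}^{2}&=l^{2}\Big(1-\frac{r^{2}}{l^{2}}\Big)\big(\cosh^{2}(t/l)-\sinh^{2}(t/l)\big)=l^{2}-r^{2},\notag\\
x_{2}^{2}+x_{3}^{2}+x_{4}^{2}&=r^{2}\big(\sin^{2}\theta\cos^{2}\varphi+\sin^{2}\theta\sin^{2}\varphi+\cos^{2}\theta\big)=r^{2},\notag
\end{align}
and adding the two lines reproduces \eqref{eq:(2.26)}.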
\section{De Sitter hyperboloid}
Another appropriate coordinate system for describing de Sitter space-time is
\begin{align}
\label{eq:2.32}
&x_{0}=l\sinh (\tau/l),\\
&x_{1}=l\cosh (\tau/l)\cos(\theta),\notag\\
&x_{2}=l\cosh (\tau/l)\sin(\theta)\cos(\varphi),\notag\\
&x_{3}=l\cosh (\tau/l)\sin(\theta)\sin(\varphi)\cos(\alpha),\notag\\
&x_{4}=l\cosh (\tau/l)\sin(\theta)\sin(\varphi)\sin(\alpha),\notag
\end{align}
where $\tau=l \sinh^{-1}\big[\sqrt{1-(r/l)^{2}}\,\sinh(t/l)\big]$.
Putting the components of
\eqref{eq:2.32}
into \eqref{eq:(2.26)},
the familiar identity appears
\begin{equation}
\cosh^{2}(\tau/l)-\sinh^{2}(\tau/l)=1,
\label{eq:2.44}
\end{equation}
as illustrated in figure \ref{fig:2.33}.
\begin{figure}
\centering
\includegraphics[width=50mm]{78.png}
\caption{ This figure illustrates
$\cosh^{2}(\tau/l)-\sinh^{2}(\tau/l)=1$.
}
\label{fig:2.33}
\end{figure}
On the other hand, at fixed $\tau$ the spacelike part describes a sphere. The whole space-time can then be pictured as in figure \ref{fig:(2.1)}
\begin{figure}
\centering
\includegraphics[width=70mm]{22.jpg}
\caption{De Sitter hyperboloid, showing global coordinates. }
\label{fig:(2.1)}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=70mm]{5555.png}
\caption{In this figure circles show surfaces of constant $t$.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=70mm]{33333.png}
\caption{Shaded region shows static de Sitter coordinates}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=70mm]{32.png}
\caption{De Sitter space-time is conformal to the part $-\pi/2<T<\pi/2$ of the Einstein static universe.}
\end{figure}
\section{Global coordinates}
As explained in section \ref{sec:2.2.2}, one can define various coordinate systems satisfying equation \eqref{eq:(2.26)}. Therefore it is possible to write
\begin{align}
\label{eq:2.36}
&x_{0}=l \sinh(\tau /l),\\
\label{eq:2.37}
&x_{j}=l \cosh(\tau /l) \omega^{j}.
\end{align}
where $j=1,2,3,4$ and the $\omega^{j}$ parameterize the unit three-sphere. Then the line element takes the form
\begin{equation}
ds^{2}=-d \tau^{2}+l^{2}\cosh^{2}(\tau/l) d \omega^{(j)2}.
\label{eq:2.33}
\end{equation}
Multiplying \eqref{eq:2.33} by a conformal factor, the compactified space-time (see Appendix \ref{app:A}) can be obtained.
\section{De Sitter space-time's completion}
Talking about infinity is not an easy task, because it is vast and out of reach \cite{23}. With the aid of compactification, infinity becomes accessible (see Appendix \ref{app:A}). One can multiply the metric
\eqref{eq:2.33} by the conformal factor
$\Omega^{2}=\frac{1}{l^{2}\cosh^{2}(\tau/l)}$.
Causality is invariant under a conformal transformation, although the geometry changes; this change of geometry is precisely what brings the infinities into the problem. Multiplying
\eqref{eq:2.33} by this conformal factor,
one can see that de Sitter space-time is locally conformal to the Einstein static universe
\begin{equation}
ds^{2}=l^{2}\cosh^{2}(\tau/l)d\bar{s}^{2}.
\end{equation}
where
$d\bar{s}^{2}$
is the line element of the Einstein static universe:
\begin{align}
&d\bar{s}^{2}=-l^{-2}\cosh^{-2}(\tau/l)d\tau^{2}+dR^{2}+R^{2}d \Omega^{2},\\
& \xrightarrow{\;T=\tan^{-1}(\sinh(\tau/l))\;}\notag\\
&d \bar{s}^{2}=-dT^{2}+dR^{2}+R^{2}d \Omega^{2}.\notag
\end{align}
Considering the ranges of these coordinates, one can find the infinities of this unphysical metric.
Unlike Minkowski space-time, in de Sitter space-time null and timelike lines reach a spacelike infinity.
\section{Killing vector fields}
A vector field $X$ generates conformal motions if \cite{hassani2001mathematical}
\begin{equation}
\mathcal{L}_{X}g_{\mu \nu}=2\phi(x^{\sigma})g_{\mu \nu}.
\end{equation}
In this equation if $\phi$ is constant then $X$ will be a homothetic vector and if $\phi=0$, $X$ will be a Killing vector \cite{stephani2009exact}.
Thus, solving $\mathcal{L}_{X}g_{\mu \nu}=0$, the Killing vector fields of de Sitter space-time are written as follows \cite{banerjee2007gauge,salcedo2017sitter,yan2017killing}
\begin{align}
&\xi^{\mu}= [\frac{r\cos(\theta)l\exp(t/l)}{\sqrt{(l^2-r^2)}},\cos(\theta)\exp(t/l)\sqrt{(l^2-r^2)}, \frac{-\exp(t/l)\sin(\theta)\sqrt{(l^2-r^2)}}{r}, 0] ,\\
&\xi^{\mu}= [0, 0, 0,1] ,\notag\\
&\xi^{\mu}= [\frac{r\sin(\theta)\sin(\varphi)l\exp(t/l)}{\sqrt{(l^2-r^2)}},\sin(\theta)\sin(\varphi)\exp(t/l)\sqrt{(l^2-r^2)},\notag\\ &\frac{\exp(t/l)\cos(\theta)\sin(\varphi)\sqrt{(l^2-r^2)}}{r},\frac{\exp(t/l)\cos(\varphi)(l-r)(l+r)}{\sqrt{(l^2-r^2)}\,r\sin(\theta)}] ,\notag\\
&\xi^{\mu}= [\frac{r\sin(\theta)\cos(\varphi)l\exp(t/l)}{\sqrt{(l^2-r^2)}},\sin(\theta)\cos(\varphi)\exp(t/l)\sqrt{(l^2-r^2)},\notag\\ &\frac{\exp(t/l)\cos(\theta)\cos(\varphi)\sqrt{(l^2-r^2)}}{r},\frac{-\exp(t/l)\sin(\varphi)(l-r)(l+r)}{\sqrt{(l^2-r^2)}\,r\sin(\theta)}] ,\notag\\
&\xi^{\mu}= [\frac{-rl\cos(\theta)\exp(-t/l)}{\sqrt{(l^2-r^2)}},\cos(\theta)\exp(-t/l)\sqrt{(l^2-r^2)}, \frac{-\sin(\theta)\exp(-t/l)\sqrt{(l^2-r^2)}}{r}, 0] ,\notag\\
&\xi^{\mu}= [-rl\sin(\theta)\sin(\varphi)\exp(-t/l)/\sqrt{(l^2-r^2)},\sin(\theta)\sin(\varphi)\exp(-t/l)\sqrt{(l^2-r^2)},\notag\\ &\frac{\cos(\theta)\sin(\varphi)\exp(-t/l)\sqrt{(l^2-r^2)}}{r},\frac{\cos(\varphi)\exp(-t/l)(l-r)(l+r)}{\sqrt{(l^2-r^2)}r\sin(\theta)}] ,\notag\\
&\xi^{\mu}=[\frac{-rl\sin(\theta)\cos(\varphi)\exp(-t/l)}{\sqrt{(l^2-r^2)}},\sin(\theta)\cos(\varphi)\exp(-t/l)\sqrt{(l^2-r^2)},\notag\\
&\frac{\cos(\theta)\cos(\varphi)\exp(-t/l)\sqrt{(l^2-r^2)}}{r}, \frac{-\sin(\varphi)\exp(-t/l)(l-r)(l+r)}{\sqrt{(l^2-r^2)}r\sin(\theta)}] ,\notag\\
&\xi^{\mu}= [1, 0, 0, 0] ,\notag\\
&\xi^{\mu} = [0, 0,\sin(\varphi), \frac{\cos(\varphi)}{\tan(\theta)}] ,\notag\\
&\xi^{\mu}= [0,0, \cos(\varphi), -\sin(\varphi)/\tan(\theta)] ,\notag
\end{align}
As expected for de Sitter space-time, which is maximally symmetric, ten Killing vector fields have been found.
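As a quick consistency check, the generator $\xi^{\mu}=[1,0,0,0]$, i.e. $\xi=\partial_{t}$, satisfies the Killing equation trivially, since the static metric \eqref{eq:2.29} does not depend on $t$:
\begin{equation}
(\mathcal{L}_{\xi}g)_{\mu\nu}=\xi^{\sigma}\partial_{\sigma}g_{\mu\nu}+g_{\sigma\nu}\partial_{\mu}\xi^{\sigma}+g_{\mu\sigma}\partial_{\nu}\xi^{\sigma}=\partial_{t}g_{\mu\nu}=0.
\end{equation}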
\chapter{De Sitter space-time's symmetries}
Symmetry in physics is a mathematical or physical property of a system that remains unchanged under certain transformations. There exist quantities which are expected to be invariant under transformations, so we look for symmetry groups that keep these quantities invariant. In the following we discuss the existence of such a quantity for gravitational systems and the symmetry group that maintains its invariance.
\section{Planck length \label{sec:4.1}}
The Planck length, $l_p$, is the distance that light travels in one unit of Planck time. It can be expressed through three fundamental physical constants: the speed of light in vacuum, $c$, the Planck constant, $\hbar$, and the gravitational constant, $G$:
\begin{equation}
l_p=\sqrt{\frac{\hbar G}{c^3}}=1.616229(38)\times 10^{-35}\, \mathrm{m}.
\label{eq:3.1}
\end{equation}
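As a rough numerical check of \eqref{eq:3.1}, inserting $\hbar\approx 1.055\times10^{-34}\,\mathrm{J\,s}$, $G\approx 6.674\times10^{-11}\,\mathrm{m^{3}\,kg^{-1}\,s^{-2}}$ and $c\approx 2.998\times10^{8}\,\mathrm{m/s}$ gives
\begin{equation}
l_p=\sqrt{\frac{(1.055\times10^{-34})\,(6.674\times10^{-11})}{(2.998\times10^{8})^{3}}}\,\mathrm{m}\approx 1.6\times10^{-35}\,\mathrm{m}.
\end{equation}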
In 1899 Max Planck proposed natural units for length, mass, time and energy. Using only the gravitational constant, the speed of light and the Planck constant, he found these constants, named the Planck length, Planck mass, Planck time and Planck energy.
Quantum effects are believed to appear at this scale. To measure anything at this scale, the photon momentum must be very high; considering Heisenberg's uncertainty principle, a black hole whose horizon equals the Planck length then appears. One can rewrite the uncertainty principle as
\begin{equation}
\Delta p\Delta r>\hbar/2.
\end{equation}
Multiplying both sides by $\frac{2G}{c^3}$, one gets \cite{aurilia2013planck,carr2016black}
\begin{align}
\label{eq:3.444}
&\Delta(\frac{2Gm}{c^2})\Delta r> \frac{G \hbar}{c^3}\\
\Rightarrow&\Delta r_s \Delta r>l_p^2.\notag
\end{align}
where $r_s$ is the gravitational radius, $r$ is the coordinate radius and $l_p$ is the Planck length. Relation \eqref{eq:3.444} is the uncertainty principle in quantum gravity.
The uncertainty principle anticipates the existence of black holes and wormholes, so any attempt to resolve a distance smaller than the Planck length is considered impossible because a black hole appears at that distance \cite{carr2016black}.
Lorentz transformations, however, do not preserve this minimum length. Note that in special relativity on a flat background, the closer the speed is to the speed of light, the closer the contracted length approaches zero
\begin{equation}
L=L_0\sqrt{1-v^2 / c^2}.
\end{equation}
This is in contrast to the invariance of the Planck length.
The Lorentz group can only be realized on homogeneous space-times; besides Minkowski space-time, the only possibilities in $(3+1)$ dimensions are de Sitter and anti-de Sitter space-times. In this thesis we focus on de Sitter space-time, which has constant positive scalar curvature
\begin{equation}
R=12 l^{-2}.
\label{eq:3.4}
\end{equation}
where $l$ is the de Sitter length. Equation \eqref{eq:3.4} shows the relation between the Ricci scalar and this length. By definition, Lorentz transformations leave the curvature invariant; thus Lorentz transformations on de Sitter space-time leave the de Sitter length invariant \cite{aldrovandi2007sitter,araujo2017sitter,araujo2019sitter,gaarder2016gravitational,gibbons2003newton, salcedo2017sitter}. In a sense, this concept is also hidden in Minkowski space-time: what remains invariant there is an infinite length, which does not affect the space-time's curvature.
\section{De Sitter transformations}
De Sitter transformations can be viewed as pseudo-rotations of the embedding five-dimensional Minkowski space. Each observer has his/her own coordinate set, and we want to find the transformations between these coordinate sets. We search for a symmetry group that leaves the metric invariant. This group, named the de Sitter group $SO(4,1)$, acts in the embedding five-dimensional Minkowski space-time
\begin{equation}
X'^{\sigma}=\Lambda^{\sigma}{}_{\rho}X^{\rho}
\end{equation}
where $\Lambda^{\sigma}_{\rho}$ is the group element. In vector representation one has \cite{hartman2017lecture}
\begin{equation}
\eta_{\mu \nu}X^{\mu}X^{\nu}=l^2
\end{equation}
which shows that these transformations leave the length invariant. Infinitesimal transformations take the form
\begin{equation}
\delta X^{\sigma}=1/2 \xi^{\mu \nu} L_{\mu \nu} X^{\sigma}
\label{eq:3.12}
\end{equation}
where
$L_{\mu \nu}$
and
$\xi^{\mu \nu}$
are the generators and the parameters of the de Sitter transformations.
\section{Spherical rotations\label{3.3}}
According to equation \eqref{eq:2.30},
one can see that the last three components parameterize a sphere in $\mathbb{R}^3$; the corresponding symmetry elements are therefore rotations
\begin{equation}
\mathcal{G}_{rot}^{(1)}=
\begin{bmatrix}
1&0&0&0&0\\
0&1&0&0&0\\
0&0&1&0&0\\
0&0&0&\cos{\alpha}&-\sin{\alpha}\\
0&0&0&\sin{\alpha}&\cos{\alpha}
\end{bmatrix},
\end{equation}
\begin{equation}
\mathcal{G}_{rot}^{(2)}=
\begin{bmatrix}
1&0&0&0&0\\
0&1&0&0&0\\
0&0&\cos{\alpha}&-\sin{\alpha}&0\\
0&0&\sin{\alpha}&\cos{\alpha}&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\mathcal{G}_{rot}^{(3)}=
\begin{bmatrix}
1&0&0&0&0\\
0&1&0&0&0\\
0&0&\cos{\alpha}&0&-\sin{\alpha}\\
0&0&0&1&0\\
0&0&\sin{\alpha}&0&\cos{\alpha}
\end{bmatrix}.
\end{equation}
\section{Time translation}
As the de Sitter metric is static, it should be invariant under time translations
\begin{equation}
\mathcal{T}_{trans}^{(1)}=
\begin{bmatrix}
\cosh{(\beta /l)}&-\sinh{(\beta /l)}&0&0&0\\
-\sinh{(\beta /l)}&\cosh{(\beta /l)}&0&0&0\\
0&0&1&0&0\\
0&0&0&1&0\\
0&0&0&0&1
\end{bmatrix}.
\end{equation}
This transformation can be considered as a boost in $x_1$ direction.
\section{Rotations on the hyperboloid}
As seen in section \ref{3.3}, the spherical rotations keep the $x_1$ axis fixed. Allowing the $x_1$ axis to vary, another subgroup of rotations appears, known as rotations on the hyperboloid
\begin{equation}
\mathcal{R}_{rot}^{(1)}=
\begin{bmatrix}
1&0&0&0&0\\
0&\cos{\alpha}&-\sin{\alpha}&0&0\\
0&\sin{\alpha}&\cos{\alpha}&0&0\\
0&0&0&1&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\mathcal{R}_{rot}^{(2)}=
\begin{bmatrix}
1&0&0&0&0\\
0&\cos{\alpha}&0&-\sin{\alpha}&0\\
0&0&1&0&0\\
0&\sin{\alpha}&0&\cos{\alpha}&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\mathcal{R}_{rot}^{(3)}=
\begin{bmatrix}
1&0&0&0&0\\
0&\cos{\alpha}&0&0&-\sin{\alpha}\\
0&0&1&0&0\\
0&0&0&1&0\\
0&\sin{\alpha}&0&0&\cos{\alpha}
\end{bmatrix}.
\end{equation}
\section{Boosts}
Other transformations that we have to consider are boosts
\begin{equation}
\begin{bmatrix}
\cosh{\beta}&0&-\sinh{\beta}&0&0\\
0&1&0&0&0\\
-\sinh{\beta}&0&\cosh{\beta}&0&0\\
0&0&0&1&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\begin{bmatrix}
\cosh{\beta}&0&0&-\sinh{\beta}&0\\
0&1&0&0&0\\
0&0&1&0&0\\
-\sinh{\beta}&0&0&\cosh{\beta}&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\begin{bmatrix}
\cosh{\beta}&0&0&0&-\sinh{\beta}\\
0&1&0&0&0\\
0&0&1&0&0\\
0&0&0&1&0\\
-\sinh{\beta}&0&0&0&\cosh{\beta}
\end{bmatrix}.
\end{equation}
\section{Conformal transformations}
The number of generators of the group $SO(1,n+1)$ is given by
\begin{equation}
\dim SO(1,n+1)=\frac{1}{2}(n+1)(n+2).
\label{eq:3.5}
\end{equation}
As an example, the Poincaré group in $n$ dimensions has $n$ translation generators and $\frac{n(n-1)}{2}$ rotation generators, so
\begin{equation}
\dim \mathrm{Poincar\acute{e}}(E^{n})=\frac{1}{2}n(n+1).
\label{eq:3.6}
\end{equation}
These countings are local. Since de Sitter space-time is maximally symmetric, knowing the curvature at one point gives the curvature of the whole space-time, so the results
\eqref{eq:3.5}
and
\eqref{eq:3.6}
apply globally.
The difference between
\eqref{eq:3.5}
and
\eqref{eq:3.6} is $n+1$. This mismatch is accounted for by adding the conformal transformations; in the following, the different types of these transformations are presented \cite{duval2011conformal}.
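As a concrete check of this counting, take $n=3$:
\begin{equation}
\dim SO(1,4)=\tfrac{1}{2}(4)(5)=10,\qquad \dim \mathrm{Poincar\acute{e}}(E^{3})=\tfrac{1}{2}(3)(4)=6,
\end{equation}
and the difference $10-6=4=n+1$ is supplied by one dilation and three conformal spatial transformations.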
Multiplying a vector by a constant factor, one obtains transformations named dilations
\begin{equation}
\vec{x}\rightarrow \lambda \vec{x}\quad,\quad\lambda \in \mathbb{R}.
\end{equation}
with
\begin{equation}
D=t\partial_t+x\partial_x+y\partial_y+z\partial_z
\end{equation}
as their generator.
Another relevant type of transformation is the conformal spatial (special conformal) transformation
$\vec{x} \rightarrow \vec{x}'$, defined by
\begin{equation}
\frac{x'^{\mu}}{x'^{2}}=\frac{x^{\mu}}{x^{2}}+\alpha^{\mu},
\end{equation}
where $x^{2}=x_{\mu}x^{\mu}$ and $\mu=1,\dots,n$.
One can also write
\begin{equation}
x'^{\mu}=\frac{x^{\mu}+\alpha^{\mu}x^{2}}{1+2\alpha_{\mu}x^{\mu}+\alpha^{2}x^{2}},
\end{equation}
with four generators
\begin{align}
&K_1=(t^2+x^2+y^2+z^2)\partial_t+2tx\partial_x+2ty\partial_y+2tz\partial_z,\\
&K_2=2tx\partial_t+(t^2+x^2-y^2-z^2)\partial_x+2xy\partial_y+2xz\partial_z,\notag\\
&K_3=2ty\partial_t+2xy\partial_x+(t^2-x^2+y^2-z^2)\partial_y+2yz\partial_z,\notag\\
&K_4=2tz\partial_{t}+2xz\partial_x+2yz\partial_y+(t^2-x^2-y^2+z^2)\partial_z.\notag
\end{align}
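Expanding the finite transformation to first order in $\alpha^{\mu}$ makes the connection with these generators explicit:
\begin{equation}
x'^{\mu}=\frac{x^{\mu}+\alpha^{\mu}x^{2}}{1+2\alpha_{\nu}x^{\nu}+\alpha^{2}x^{2}}\simeq x^{\mu}+\alpha^{\mu}x^{2}-2x^{\mu}(\alpha_{\nu}x^{\nu})+\mathcal{O}(\alpha^{2}),
\end{equation}
so that $\delta x^{\mu}=\alpha^{\nu}\left(x^{2}\delta^{\mu}_{\nu}-2x_{\nu}x^{\mu}\right)$, which is generated by the $K_{\nu}$ up to sign conventions.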
Collecting the previous sections, we have three spherical rotations, three rotations on the hyperboloid, one time translation and three boosts, i.e. ten generators in total, so the de Sitter symmetry group is $SO(1,4)$.
\section{Commutation relations}
It is possible to study the Lie algebra for conformal transformations (see table \ref{tab 3.1}).
\begin{table}
\begin{tabular}{|p{2.5cm}|p{5cm}|p{2.5cm}|p{5cm}|}
\hline
Translations&Rotations& Dilations&Conformal spatial transformations \\ \hline
$P_{\mu}=-\partial_{\mu}$&$M_{\mu\nu}=x_{\mu}\partial_{\nu}-x_{\nu}\partial_{\mu}=x_{\nu}P_{\mu}-x_{\mu}P_{\nu}$&$D=-x^{\mu}\partial_{\mu}$&$K_{\mu}=(2x_{\mu}x^{\nu}\partial_{\nu}-x^{2}\partial_{\mu})=-2x_{\mu}D+x^{2}P_{\mu}$\\ \hline
\end{tabular}
\caption{$SO(4,1)$ generators. \label{tab 3.1}}
\end{table}
So one can write the algebra that governs these transformations
\begin{align}
\label{eq:3.54}
&[M_{\mu\nu},P_{\rho}]=(g_{\nu\rho}P_{\mu}-g_{\mu\rho}P_{\nu}),\\
&[M_{\mu\nu},M_{\rho \tau}]=(g_{\mu \tau}M_{\nu \rho}+g_{\nu \rho}M_{\mu \tau}-g_{\mu\rho}M_{\nu\tau}-g_{\nu\tau}M_{\mu\rho}),\notag\\
&[M_{\mu \nu},K_{\rho}]=(g_{\nu \rho}K_{\mu}-g_{\mu\rho}K_{\nu}),\notag\\
&[D,P_{\mu}]=+P_{\mu},\notag\\
&[D,K_{\mu}]=-K_{\mu},\notag\\
&[P_{\mu},K_{\nu}]=2(g_{\mu \nu}D+M_{\mu \nu}).\notag
\end{align}
All other commutators vanish.
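The brackets \eqref{eq:3.54} can be checked mechanically. The following SymPy sketch (not part of the original text; it assumes a Euclidean metric so that upper and lower indices coincide, although the brackets themselves are metric-agnostic) realizes the generators of table \ref{tab 3.1} as differential operators and verifies representative commutators.

```python
# Verify representative commutators of the conformal algebra, using the
# explicit generators of table 3.1 acting on a test function f(x).
import sympy as sp

xs = sp.symbols('x0 x1 x2')
f = sp.Function('f')(*xs)
x2 = sum(x**2 for x in xs)

P = lambda mu: (lambda F: -sp.diff(F, xs[mu]))                 # P_mu = -d_mu
D = lambda F: -sum(x*sp.diff(F, x) for x in xs)                # D = -x.d
K = lambda mu: (lambda F: 2*xs[mu]*sum(x*sp.diff(F, x) for x in xs)
                - x2*sp.diff(F, xs[mu]))                       # special conformal
M = lambda mu, nu: (lambda F: xs[mu]*sp.diff(F, xs[nu])
                    - xs[nu]*sp.diff(F, xs[mu]))               # rotations

comm = lambda A, B, F: sp.expand(A(B(F)) - B(A(F)))

assert sp.simplify(comm(D, P(0), f) - P(0)(f)) == 0            # [D,P] = +P
assert sp.simplify(comm(D, K(0), f) + K(0)(f)) == 0            # [D,K] = -K
assert sp.simplify(comm(P(0), K(1), f) - 2*M(0, 1)(f)) == 0    # [P_0,K_1] = 2M_01
assert sp.simplify(comm(P(0), K(0), f) - 2*D(f)) == 0          # [P_0,K_0] = 2D
```

The asserts pass for any smooth test function, since each commutator reduces to a first-order operator after the second-derivative terms cancel.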
\chapter{Asymptotic symmetries}
De Sitter space-time's symmetries have been considered in the previous chapter. In this chapter we are interested in obtaining asymptotic symmetries. Our focus is on null infinity, $\mathcal{I}$, so two problems lie ahead. First, compactification changes the topology: the topology of $\mathcal{I}$ is in general quite different from that of the physical space-time. Therefore we cannot simply assume that the symmetry group acting at null infinity coincides with that of the physical space-time. This issue has been studied in Minkowski space-time, and more details on the method follow below. Second, $\mathcal{I}$
is null in asymptotically flat space-times but spacelike in asymptotically de Sitter space-times, so the limit $\Lambda\rightarrow0$ is not continuous. This fact has important consequences.
In this chapter these two issues are considered, and useful methods for finding the asymptotic symmetries of de Sitter space-time are presented.
\section{General discussion}
Talking about infinity is not easy, so with the help of compactification (see Appendix \ref{app:A}) infinity becomes more palpable. As said before, by multiplying the metric by a conformal factor ($\tilde{ g}_{\mu\nu}=\Omega^2{ g}_{\mu\nu}$), one manages to map infinity to the boundary of a larger space-time, $(\tilde{M},\tilde{ g}_{\mu\nu})$. The conformal factor does not change the causal structure, but it does change the geometry; therefore we cannot identify the symmetry group ruling infinity without careful consideration. In fact, all one can say a priori is that the group of diffeomorphisms is an appropriate symmetry group, which is not useful, because one cannot define conserved charges with respect to it: diffeomorphism invariance is a local symmetry, while a global symmetry is needed to define Noether charges.
Using our knowledge of the physical space-time's properties, and following their behavior as $r\rightarrow \infty$, is a good way to probe the features of infinity.
As said before, a conformal transformation does not change the causal structure. Hence it is possible to consider gravitational fields and their asymptotic limits. Since gravitational fields propagate at the speed of light, studying them provides a useful handle on null infinity.
At first we review the asymptotic behavior of a gravitational field in an isolated system, which is much simpler than the other cases. Observations show that a system with a positive cosmological constant is more appropriate for describing our universe; unfortunately, the procedure used in the $\Lambda =0$ case cannot be carried over to the
$\Lambda >0$ case \cite{bros2002asymptotic}.
Studying the asymptotic structure of a gravitational field is difficult because the field itself changes the geometry of space-time. This issue became clear after the work of
Arnowitt, Deser and Misner at spacelike infinity \cite{arnowitt2008republication} and the work of Bondi, Sachs and Newman at null infinity \cite{bondi1962gravitational}. In the ADM framework, space-time is foliated into constant-time surfaces; each surface carries a three-dimensional metric $\lambda_{ij}(t,x^k)$ and a momentum $\pi^{ij}(t,x^k)$, from which one can define the Hamiltonian (see \cite{arnowitt2008republication,deser1967covariant}).
Now let us turn to null infinity, our main subject. First we review the work of Bondi and his collaborators, who established a framework for studying the expansion of the metric along null directions. Null infinity, $\mathcal{I}$, is the boundary of the physical space-time. Considering gravitational radiation on $\mathcal{I}$, one can define the Bondi news,
$N_{\mu\nu}$,
which in
Bondi–Sachs physical space coordinates, $\hat{x}^{\mu}=(u,l,x^{\mu})=(t-r,1/r,x^{\mu})$,
has the form \cite{ashtekar2014asymptotics}
\begin{equation}
N_{\mu\nu}=\zeta^*(\lim_{l\rightarrow0}l^{-1}\hat{\nabla}_{\mu}\hat{\nabla}_{\nu}l),
\end{equation}
where $\zeta^*$ denotes the pullback to $\mathcal{I}^+$. The two components of $N_{\mu\nu}$ correspond to the two radiative modes of exact general relativity. In asymptotically flat space-times, gravitational radiation carries energy-momentum across $\mathcal{I}$ if and only if
$N_{\mu\nu} \neq 0$.
If $N_{\mu \nu} \neq 0$,
$\eta_{\mu \nu}$ is no longer unique, and one can define a new metric, $\eta'_{\mu \nu}$, by the translation $t \rightarrow t'=t+f(\theta,\varphi)$, such that $g_{\mu \nu}$ has the same asymptotic behavior with respect to it. Thus the asymptotic symmetry group is not the Poincaré group but a larger group, called the BMS group, which includes an abelian subgroup $\mathfrak{T}$ of translations, just like the corresponding subgroup of the Poincaré group. Hence the definition of energy-momentum at null infinity is well-defined \cite{bondi1962gravitational}.
For the $\Lambda>0$ case the procedure is quite different: since $\mathcal{I}$ is a spacelike hypersurface, the symmetry group cannot be obtained the way the $BMS$ group was. It should be added that, using a $1/r$ expansion of a locally de Sitter Bondi-Sachs metric, a symmetry group named the $\Lambda$-$BMS$ group has been obtained.
To study such a structure, we first need to consider the basic definitions \cite{poole2019ds4,compere2019advanced,aneesh2019conserved}.
\section{Asymptotically flat space-time}
In physics we like to study isolated systems. If a space-time becomes flat as $r\rightarrow \infty$, it is called asymptotically flat, and asymptotically flat space-times describe isolated systems \cite{wald2000general,calo2018relation}.
Finding a meaningful definition of isolated systems in general relativity is not simple, because finding a useful description of infinity is difficult \cite{wald1999gravitational}. Compactifying the space-time (see Appendix \ref{app:A}) is a helpful method for defining infinity precisely, and with it a definition of asymptotically flat space-time has been found: a space-time is asymptotically flat if its null and spacelike infinities are like those of flat space-time. More precisely, the space-time $(M,g_{\mu\nu})$ is asymptotically flat if there exist a conformal space-time $(\tilde{M},\tilde{g}_{\mu\nu})$, with $\tilde{g}_{\mu\nu}$ being $C^{\infty}$ everywhere except at $i^0$, where it is $C^{>0}$, and a conformal isometry $\psi:M \rightarrow \psi[M]\subset \tilde{M}$ with conformal factor $\Omega$, such that $\psi^*\tilde{ g}_{ab}=\Omega^{2}{g}_{ab}$ and the following conditions are satisfied \cite{wald2000general}.
\noindent\fbox{
\parbox{\textwidth}{
1. $\bar{J^{+}}(i^0)\cup\bar{J^-}(i^0)=\tilde{M}-M$.
2. There exists an open neighborhood, V, of $\mathring{M}= i^{0} \cup \mathcal{I}^{-}\cup \mathcal{I}^{+}$, where $(V,\tilde{g}_{\mu\nu})$ is strongly causal.
3. $\Omega$ can be extended to a function on all of the $\tilde{M}$ which is $C^{2}$ at $i^{0}$ and $C^{\infty}$ elsewhere.
4. (a) For $\Omega$ at $\mathcal{I}$ one has
\begin{align}
\label{eq:4.22222}
&\Omega|_{\mathcal{I}}=0,\\
&\tilde{\nabla}_{\mu} \Omega|_{\mathcal{I}}\neq 0,\notag
\end{align}
where $\tilde{\nabla}_{\mu}$ is the covariant derivative according to $\tilde{g}_{\mu \nu}$.
(b) At $i^0$ one can write
\begin{align}
&\Omega|_{i^{0}}=0,\\
&\lim_{i^{0}} \tilde{\nabla}_{\mu}\Omega=0,\notag\\
&\lim_{i^{0}} \tilde{\nabla}_{\mu}\tilde{\nabla}_{\nu}\Omega= 2 \tilde{g}_{\mu \nu}(i^{0}).\notag
\end{align}
}}
The condition \eqref{eq:4.22222} allows $\Omega$ to be used as a coordinate on $\mathcal{I}$. One has the freedom to choose $\Omega$; if one chooses a conformal frame with $\tilde{\nabla}_{\mu}\tilde{n}^{\mu}|_{\mathcal{I}}=0$, it is possible to use $\tilde{n}^{\mu}$ as a coordinate on the tangent space of $\mathcal{I}$. This freedom can be used to change the conformal scale as
$\Omega \rightarrow \Omega'=\omega\Omega$, so
\begin{align}
&\tilde{n}'^{\mu}=\omega^{-1}\tilde{n}^{\mu},\\
&q'_{\mu\nu}|_{\mathcal{I}}=\omega^{2}q_{\mu\nu},\notag
\end{align}
where $\mathcal{L}_{\tilde{n}}\omega=0$. Choosing the conformal frame $\tilde{\nabla}_{\mu}\tilde{n}^{\mu}|_{\mathcal{I}}=0$ reduces the degrees of freedom. In this conformal frame the intrinsic metric $q_{\mu\nu}$ has signature $(0,+,+)$ at $\mathcal{I}$.
\subsection{The Bondi Sachs metric}
One can foliate the space-time into $u=\text{constant}$ null hypersurfaces and use the Bondi-Sachs coordinates $(u,r,x^A)$. The condition that these hypersurfaces be null implies $g_{11}=0$, and the requirements $\Gamma^{0}_{11}=\Gamma^{2}_{11}=0$ give $r^4\sin^2\theta=g_{22}g_{33}$. The line element takes the form
\begin{align}
&ds^2=e^{2\beta}Vr^{-1}du^2-2e^{2\beta}du dr+r^2h_{AB}(dx^A-U^Adu)(dx^B-U^Bdu)
\end{align}
where $A,B=3,4$ and $\beta,\: V,\: h_{AB}$ are functions of $(u,r,x^A)$. One can find asymptotic symmetries by checking all transformations that preserve this form of the line element \cite{bondi1962gravitational}, see also \cite{madler2016bondi}.
\section{Asymptotically flat space-times' symmetries}
It is important to find the symmetries represented by vector fields $\xi^{\mu}$ at null infinity, in other words by the equivalence classes of vector fields that do not vanish at $\mathcal{I}$ \cite{chandrasekaran2018symmetries}. So it is possible to find vectors which satisfy the Killing equation near infinity. In curved space-time there exists a large, angle-dependent group of transformations satisfying the asymptotic Killing equation. The asymptotic symmetry group, $\mathfrak{G}$, is defined as the quotient group \cite{anninos2011asymptotic}
\begin{equation}
\mathfrak{G}=Diff_{\infty}(M)/Diff^{0}_{\infty}(M),
\end{equation}
where $Diff_{\infty}(M)$ denotes the diffeomorphisms of the physical space-time, $({M},{g}_{\mu\nu})$, and $Diff^{0}_{\infty}(M)$ those diffeomorphisms that are asymptotically the identity. As said before, it is possible to use $\tilde{n}^{\mu}$ as a coordinate on $\mathcal{I}$. When $\mathcal{I}$ is null, $\tilde{n}^{\mu}$ lies in the tangent space of $\mathcal{I}$, and with its aid one can define $q_{\mu\nu}$, which has signature $(0,+,+)$. The field equations imply that $\tilde{\nabla}_{\mu}\tilde{n}^{\mu}$ vanishes in each of these divergence-free conformal frames, so the solutions of $\tilde{\nabla}_{\mu}\tilde{n}^{\mu}|_{\mathcal{I}}=0$ are the generators of $\mathcal{I}$.
Actually the $BMS$ group is the symmetry group of $\mathcal{I}$. It consists of the diffeomorphisms that leave the intrinsic metric, $q_{\mu\nu}$, and the vector field, $n^{\mu}$, invariant. The $BMS$ group is smaller than $Diff(\mathcal{I})$ and has a remarkable structure, as it does not change the normal vectors of $\mathcal{I}$. This leads to the relation
\begin{equation}
\mathcal{L}_{\xi}\tilde{n}^{\mu}|_{\mathcal{I}}=\alpha \tilde{n}^{\mu},
\end{equation}
where $\xi^{\mu}$ is the $BMS$ vector field and $\alpha$ is a function satisfying $\mathcal{L}_{n}\alpha|_{\mathcal{I}}=0$.
$BMS$ translations have to preserve $\tilde{n}_{\mu}\tilde{n}^{\mu}$ on ${\mathcal{I}}$. To gain more insight into the $BMS$ group, it is useful to consider the intrinsic metric (see Appendix \ref{app:A})
\begin{equation}
ds^2=d\xi d\xi^*=\frac{1}{4}(1+\xi \xi^*)^2(d\theta^2+\sin^2\theta d\varphi^2),
\end{equation}
where $\xi=e^{i\varphi}\cot(\theta/2)$. If one chooses the conformal factor $\Omega=\frac{2}{1+\xi \xi^*}$, each cut becomes a unit 2-sphere. This coordinate system is useful for finding the symmetry group. For a sphere, the holomorphic bijections have the form \cite{esposito1992mathematical}
\begin{equation}
f(\xi)=\frac{a\xi+b}{c\xi+d},
\label{eq:4.777}
\end{equation}
where $ad-bc=1$. The transformations \eqref{eq:4.777} are known as fractional linear transformations. They act on each cut as conformal transformations of its intrinsic metric,
\begin{equation}
d\Sigma'^2=\omega^2d\Sigma^2\quad,\quad d\Sigma^2=d\theta^2+\sin^2\theta d\varphi^2.
\end{equation}
For $(q_{\mu\nu},n^{\mu})$ one can write
\begin{equation}
(q_{\mu\nu},n^{\mu}) \rightarrow (\omega^{2}q_{\mu\nu},\omega^{-1}n^{\mu}).
\end{equation}
Thus it is possible to find the conformal factor $\omega$ by calculating $dS'$ in two ways:
\begin{align}
\label{eq:4.1222}
dS'&=d\xi'd\xi'^*\\
&=\frac{[a(c\xi+d)-c(a\xi+b)][a^*(c^*\xi^*+d^*)-c^*(a^*\xi^*+b^*)]}{(c\xi+d)^2(c^*\xi^*+d^*)^2}\,d\xi d\xi^*\notag\\
&=\frac{\overbrace{(ad-bc)}^{=1}\overbrace{(a^*d^*-b^*c^*)}^{=1}}{(c\xi+d)^2(c^*\xi^*+d^*)^2}\,d\xi d\xi^*\notag\\
&=\frac{d\xi d\xi^*}{(c\xi+d)^2(c^*\xi^*+d^*)^2}=\frac{(1+\xi \xi^*)^2d\Sigma^2}{4(c\xi+d)^2(c^*\xi^*+d^*)^2}.\notag
\end{align}
On the other hand
\begin{align}
\label{eq:4.1333}
dS'&=\frac{1}{4}(1+\xi'\xi'^*)^2d\Sigma'^2\\
&=\frac{1}{4}\left(\frac{(a\xi+b)(a^*\xi^*+b^*)+(c\xi+d)(c^*\xi^*+d^*)}{(c\xi+d)(c^*\xi^*+d^*)}\right)^2d\Sigma'^2\notag.
\end{align}
Equating \eqref{eq:4.1222} and \eqref{eq:4.1333} gives
\begin{equation}
d\Sigma'^2=\frac{(1+\xi\xi^*)^2}{[(a\xi+b)(a^*\xi^*+b^*)+(c\xi+d)(c^*\xi^*+d^*)]^2}d\Sigma^2,
\end{equation}
so $\omega$ is
\begin{equation}
\omega=\frac{1+\xi\xi^*}{(a\xi+b)(a^*\xi^*+b^*)+(c\xi+d)(c^*\xi^*+d^*)},
\end{equation}
where $\mathcal{L}_{n}\omega=0$ and the line element, in the direction of $\mathcal{I}$ generators, changes as
\begin{equation}
du'=\omega du \quad \rightarrow\quad u'=\omega[u+\alpha(\xi,\xi^*)].
\label{eq:4.111}
\end{equation}
The transformations \eqref{eq:4.777}-\eqref{eq:4.111} constitute the $BMS$ group \cite{boyle2016transformations}.
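The closed form of $\omega$ can be verified numerically. In the sketch below (not part of the original derivation; the values of $a,b,c,d$ and $\xi$ are arbitrary test data), the conformal factor obtained directly from the pullback of the stereographic round metric is compared with the formula above.

```python
# Numerical check of the conformal factor omega of a fractional linear
# transformation, using the stereographic round-metric scale
# dSigma^2 = 4 |dxi|^2 / (1+|xi|^2)^2.  a,b,c,d and xi are test values.
a, b, c, d = 2, 3, 1, 2          # satisfies ad - bc = 1
xi = 0.7 + 0.4j

xi_p = (a*xi + b)/(c*xi + d)                 # transformed coordinate
scale = lambda z: 2/(1 + abs(z)**2)          # round-metric scale factor
jac = abs(1/(c*xi + d)**2)                   # |d xi'/d xi|, since ad-bc = 1
omega_direct = scale(xi_p)*jac/scale(xi)
omega_formula = (1 + abs(xi)**2)/(abs(a*xi + b)**2 + abs(c*xi + d)**2)

assert abs(omega_direct - omega_formula) < 1e-12
```

The two expressions agree identically because $(1+|\xi'|^2)|c\xi+d|^2 = |a\xi+b|^2 + |c\xi+d|^2$.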
\subsection{Supertranslations}
If the coordinate $u$, along the generators of $\mathcal{I}$, transforms as
\begin{equation}
\hat{u}=u+\alpha(\xi,\xi^*),
\label{eq:4.1111}
\end{equation}
the transformation is called a supertranslation. In 1966 Newman and Penrose proposed writing $\alpha$ in
terms of spherical harmonics \cite{sachs1962asymptotic,newman1966note}
\begin{equation}
\alpha=\sum_{l=0}^{\infty}\sum_{m=-l}^{l}a_{l,m}Y_{l,m}(\theta,\varphi),
\end{equation}
where the $a_{l,m}$ are constants. If $a_{l,m}=0$ for $l>1$, then
\begin{equation}
\alpha=\epsilon_0+\epsilon_1 \sin \theta \cos \varphi +\epsilon_2 \sin \theta \sin \varphi+ \epsilon_3 \cos \theta,
\end{equation}
and the supertranslations reduce to a special case,
called the translations \cite{newman1966note}.
\subsection{Translations}
Translations in Minkowski space-time can be written as
\begin{equation}
\label{eq:4.122}
t'=t+a\quad,\quad x'=x+b\quad,\quad y'=y+c\quad,\quad z'=z+d.
\end{equation}
One can define a coordinate system as
\begin{align}
\label{eq:4.133}
&u=t-r,\\
&r^2=x^2+y^2+z^2,\notag\\
&\xi=e^{i\varphi}\cot{\theta/2},\notag\\
&Z=\frac{1}{1+\xi\xi^*}.\notag
\end{align}
It is possible to write $x$, $y$ and $z$ in terms of $\xi$ and its complex conjugate as
\begin{equation}
\label{eq:4.21}
x=r(\xi+\xi^*)Z\quad,\quad y=-ir(\xi-\xi^*)Z\quad,\quad
z=r(\xi\xi^*-1)Z.
\end{equation}
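The parametrization \eqref{eq:4.21} can be checked numerically; the sketch below (test values of $r,\theta,\varphi$ chosen arbitrarily, not from the text) confirms that it reproduces the standard spherical coordinates.

```python
# Check that x, y, z of eq. (4.21) reduce to the usual spherical
# coordinates when xi = e^{i phi} cot(theta/2) and Z = 1/(1 + xi xi*).
import cmath, math

r, th, ph = 2.3, 0.8, 1.1                      # arbitrary test point
xi = cmath.exp(1j*ph)/math.tan(th/2)           # cot = 1/tan
Z = 1/(1 + abs(xi)**2)

x = r*(xi + xi.conjugate())*Z
y = -1j*r*(xi - xi.conjugate())*Z
z = r*(abs(xi)**2 - 1)*Z

assert abs(x - r*math.sin(th)*math.cos(ph)) < 1e-12
assert abs(y - r*math.sin(th)*math.sin(ph)) < 1e-12
assert abs(z - r*math.cos(th)) < 1e-12
```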
Using \eqref{eq:4.122}, \eqref{eq:4.133} and \eqref{eq:4.21}, $r'$ can be obtained as
\begin{align}
r'^2&=x'^2+y'^2+z'^2=r^2+2(bx+cy+dz)+b^2+c^2+d^2\\
&=r^2Z^2(\xi\xi^*+1)^2+2rZ[\underbrace{(b-ic)}_{B}\xi+\underbrace{(b+ic)}_{B^*}\xi^*+d(\xi\xi^*-1)]+c',\notag
\end{align}
where $c':=b^2+c^2+d^2$. Hence
\begin{align}
r'&=rZ(\xi\xi^*+1)\sqrt{1+\frac{\frac{2}{rZ}(B\xi+B^*\xi^*+\xi\xi^*d-d)+\frac{c'}{r^2Z^2}}{(\xi\xi^*+1)^2}}\\
&\simeq rZ(\xi\xi^*+1)+\frac{B\xi+B^*\xi^*+\xi\xi^*d-d}{\xi\xi^*+1}+O(1/r)\notag\\
&=r+\frac{B\xi+B^*\xi^*+\xi\xi^*d-d}{\xi\xi^*+1}+O(1/r)\notag.
\end{align}
Thus for $u'=t'-r'$, it is possible to write
\begin{align}
u'&=t'-r'=u+a-\frac{B\xi+B^*\xi^*+\xi\xi^*d-d}{\xi\xi^*+1}+O(1/r)\\
&=u+\frac{\overbrace{(a+d)}^{A}-B\xi-B^*\xi^*+\overbrace{(a-d)}^{C}\xi\xi^*}{\xi\xi^*+1}+O(1/r)\notag.
\end{align}
Therefore, after relabelling the arbitrary constants ($B\rightarrow -B$),
\begin{equation}
u'=u+({A+B\xi+B^*\xi^*+C\xi\xi^*})Z+O(1/r).
\end{equation}
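The asymptotic formula for $u'$ can be verified numerically. In the sketch below (translation parameters and angles are arbitrary test values, not from the text), the exact $u'=t'-r'$ is compared at large $r$ with $u+\alpha$, where $\alpha$ is written directly in angular form; this corresponds, up to the relabelling of constants above, to $A=a+d$ and $C=a-d$.

```python
# Check u' = u + alpha + O(1/r) for a space-time translation
# (t,x,y,z) -> (t+a, x+b, y+c, z+d); all parameters are test values.
import math

a, b, c, d = 0.3, -0.5, 0.2, 0.7
th, ph = 1.2, 0.4
r = 1.0e7                                    # "large r" regime
u = 5.0

x = r*math.sin(th)*math.cos(ph)
y = r*math.sin(th)*math.sin(ph)
z = r*math.cos(th)
t = u + r

r_p = math.sqrt((x + b)**2 + (y + c)**2 + (z + d)**2)
u_exact = (t + a) - r_p
# alpha in angular form: a - b sin(th)cos(ph) - c sin(th)sin(ph) - d cos(th)
alpha = a - b*math.sin(th)*math.cos(ph) - c*math.sin(th)*math.sin(ph) - d*math.cos(th)

assert abs(u_exact - (u + alpha)) < 1e-5     # agreement up to O(1/r)
```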
Thus, if one puts the relation
\begin{equation}
\alpha=\frac{A+B\xi+B^*\xi^*+C\xi\xi^*}{1+\xi\xi^*},
\label{eq:12}
\end{equation}
in \eqref{eq:4.1111}, the translations are recovered as a special case of the supertranslations. So the asymptotic symmetry group of such a space-time is the subgroup of $Diff(\mathcal{I})$ that preserves the fall-off of the intrinsic metric, $q_{\mu\nu}$, that is, the fall-off of $\Omega$ and its derivatives. This group is the $BMS$ group. As a comparison, the Poincaré group, $\mathfrak{P}$, is obtained by the semidirect product of the translation group,
$\mathfrak{T}$, with the Lorentz group, $\mathfrak{L}$ \cite{barnich2014notes},
\begin{equation}
\mathfrak{P}=\mathfrak{L}\rtimes \mathfrak{T}.
\end{equation}
The $BMS$ group is obtained similarly as
\begin{equation}
\mathfrak{B}=\mathfrak{L}\rtimes \mathfrak{S},
\end{equation}
in which the four-dimensional translation group,
$\mathfrak{T}$, is replaced by the infinite-dimensional supertranslation group, $\mathfrak{S}$, whose generators are the vector fields $fn^{\mu}$ on $\mathcal{I}$, where $f$ is a scalar satisfying $\mathcal{L}_{n}f=0$. In other words, the $BMS$ group is the group that maps $\mathcal{I}^{+}$ onto itself.
\section{Asymptotic fields}
A strong motivation for finding asymptotic symmetries is the problem of defining conserved charges in gauge theories, like the electric charge in electrodynamics and energy-momentum in general relativity. This problem arises from the Noether-charge puzzle for gauge symmetries: when one tries to define a conserved charge according to Noether's first theorem, the Noether current vanishes on shell. To be more explicit, one can consider fields $\Phi^{i}$ and a Lagrangian, $L[\Phi]$. The Euler-Lagrange equations are \cite{compere2019advanced}
\begin{equation}
\frac{\delta L}{\delta \Phi^{i}}= \frac{\partial L}{\partial \Phi^{i}}-\partial_{\mu}\left(\frac{\partial L}{\partial \partial_{\mu} \Phi^{i}}\right)+\partial_{\mu} \partial_{\nu}\left(\frac{\partial L}{\partial \partial_{\mu} \partial_{\nu} \Phi^{i}}\right)+\cdots
\end{equation}
where $\forall \Phi^{i} \in \Phi=\left\{\left(\Phi_{M}^{i}\right)_{i \in I}, g_{\mu \nu}\right\}$.
The gauge transformations of this system are generated as follows \cite{cotaescu2000external}
\begin{equation}
\delta_{f} \phi^{i}=R^{i}_{\alpha}(f^{\alpha}),
\end{equation}
where $f^{\alpha}$ is an arbitrary function and satisfies the following relation
\begin{equation}
\label{eq:6.1}
R^{i}_{\alpha}(f^{\alpha}) \frac{\delta L}{\delta \phi^{i}} =\partial _{\mu} j^{\mu}_{f} .
\end{equation}
In consonance with Noether's second theorem, one has
\begin{equation}
R^{+i}_{\alpha}\frac{\delta L}{\delta \phi^{i}}=0,
\end{equation}
where $R^{+i}_{\alpha}$ is the adjoint operator, written without the boundary terms. $R^{+i}_{\alpha}$ is related to a local operator, $Q_{i}$, through
\begin{equation}
R^{+i}_{\alpha}(Q_{i})=\sum_{k \geq 0}(-1)^{k} \partial_{\mu_{1}} \dots \partial_{\mu_{k}} [R^{i(\mu_{1} \dots \mu_{k})}_{\alpha} Q_{i}].
\end{equation}
In presence of boundary terms one can write
\begin{equation}
\forall Q_{i}, f^{\alpha}: Q_{i}R^{i}_{\alpha}(f^{\alpha})=f^{\alpha}R^{+i}_{\alpha}(Q_{i})+\partial_{\mu}S^{\mu i}_{\alpha}(Q_{i},f^{\alpha}) ,
\end{equation}
where the $S^{\mu i}_{\alpha}$ are differential operators.
If
$Q_{i}=\frac{\delta L}{\delta \varphi^{i}}$, then \cite{barnich2008surface}
\begin{equation}
\frac{\delta L}{\delta \varphi^{i}}R^{i}_{\alpha}(f^{\alpha})= \partial_{\mu} S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha}). \label{eq:6.7}
\end{equation}
Thus $S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha})$ is a Noether current satisfying equation \eqref{eq:6.1}. This current vanishes on shell because it is linear in the equations of motion $\frac{\delta L}{\delta \varphi^{i}}$. According to \eqref{eq:6.1} and \eqref{eq:6.7}, it is possible to write
\begin{equation}
\partial_{\mu}(j_{f}^{\mu}-S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha}))=0.
\end{equation}
Considering the Poincaré lemma, one can obtain the following relation on shell
\begin{equation}
j_{f}^{\mu}=S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha})- \partial_{\nu}k_{f}^{[\nu \mu]}\quad,\quad n>1,
\end{equation}
where $k_{f}^{[\nu \mu]}$ is the superpotential and $n$ is the space-time's dimension. In one dimension, it is possible to write \cite{barnich2002covariant}
\begin{equation}
j_{f}=S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha})+C,
\end{equation}
where $C$ is an arbitrary constant. This relation is the solution of \eqref{eq:6.1}. The superpotential $k_{f}^{[\nu \mu]}$ is arbitrary: since $\partial_{\nu} \partial_{\mu} k^{[\nu \mu]}_{f}=0$, it drops out of \eqref{eq:6.1}. This means the Noether currents are ambiguous. In the case of exact solutions and symmetries, surface charges in the full theory are
constructed by integrating the surface charge one-forms of the linearized theory along a
path in the space of symmetric configurations \cite{barnich2008surface}
\begin{equation}
n>1: Q[\varphi (x)]= \int_{\Sigma} j_{f} |_{\varphi (x)}= \int_{\partial \Sigma} k_{f} |_{\varphi (x)}, \label{eq:(11.6)}
\end{equation}
which is evaluated on solutions of the Euler-Lagrange equations. In this relation, $\Sigma$ is an $(n-1)$-dimensional spacelike surface with boundary $\partial \Sigma$, $j_{f}$ is an $(n-1)$-form current and $k_{f}$ is an $(n-2)$-form, the superpotential
\begin{align}
&j_{f}=j_{f}^{\mu}(d^{n-1}x)_{\mu},\\
&k_{f}=k_{f}^{[\mu \nu]}(d^{n-2}x)_{\mu \nu},\notag\\
&(d^{n-p}x)_{\mu_{1}\dots \mu_{p}} := \frac{1}{p! (n-p)!} \epsilon_{\mu_{1}\dots \mu_{p}} dx^{\mu_{(p+1)}} \dots dx^{\mu_{n}}. \notag
\end{align}
While the relation \eqref{eq:(11.6)} exhibits the problem mentioned above, it also suggests a way to solve it. Actually, the relation \eqref{eq:(11.6)} depends only on the boundary behavior of the superpotential, similar to electrodynamics, where the charge is expressed through $F_{\mu\nu}$ acting as a superpotential. So it may be possible to do the same thing here, which suggests a relation between gauge symmetries and $(n-2)$-forms. The superpotential has been obtained by Abbott and Deser for asymptotically flat space-times \cite{abbott1982stability}
\begin{equation}
\mathbf{k}_{\xi}^{\mu \nu}[h ; g]=\frac{\sqrt{-g}}{8 \pi G}\left(\xi^{\mu} \nabla_{\sigma} h^{\nu \sigma}-\xi^{\mu} \nabla^{\nu} h+\xi_{\sigma} \nabla^{\nu} h^{\mu \sigma}+\frac{1}{2} h \nabla^{\nu} \xi^{\mu}-\frac{1}{2} h^{\rho \nu} \nabla_{\rho} \xi^{\mu}+\frac{1}{2} h_{\sigma}^{\nu} \nabla^{\mu} \xi^{\sigma}\right).
\end{equation}
\section{De Sitter space-time}
\checkmark First definition: $({M},{g}_{\mu\nu})$ is weakly asymptotically de Sitter if there exist a manifold $\tilde{M}$ with boundary $\mathcal{I}$ and a diffeomorphism between ${M}$ and $\tilde{M}\setminus \mathcal{I}$ such that \cite{ashtekar2014asymptotics}:
1. A smooth function $\Omega$ exists on $\tilde{M}$ that vanishes on $\mathcal{I}$, with $n_{\mu}:=\tilde{\nabla}_{\mu}\Omega|_{\mathcal{I}}\neq0$ and $\tilde{g}_{\mu\nu}=\Omega^{2} {g}_{\mu\nu}$.
2. ${g}_{\mu\nu}$ satisfies the Einstein equation in the presence of a positive cosmological constant \cite{ashtekar2014asymptotics}
\begin{equation}
{R}_{\mu\nu}-1/2{R}{g}_{\mu\nu}+\Lambda {g}_{\mu\nu}=8 \pi G {T}_{\mu\nu}\quad , \quad \Lambda>0.
\end{equation}
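As a sanity check of this equation, the SymPy sketch below verifies that the Poincaré-patch de Sitter metric $ds^2=(l/\eta)^2(-d\eta^2+dx^2+dy^2+dz^2)$ (a standard conformal-time form, not taken from the text) solves the vacuum equation with $\Lambda=3/l^{2}$ and $T_{\mu\nu}=0$.

```python
# Verify that the conformal-time de Sitter metric solves
# R_mn - (R/2) g_mn + Lambda g_mn = 0 with Lambda = 3/l^2.
import sympy as sp

eta, x, y, z, l = sp.symbols('eta x y z l', positive=True)
X = [eta, x, y, z]
n = 4
g = (l/eta)**2 * sp.diag(-1, 1, 1, 1)
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(ginv[a, e]*(sp.diff(g[e, b], X[c]) + sp.diff(g[e, c], X[b])
             - sp.diff(g[b, c], X[e])) for e in range(n))/2
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc}
Ric = sp.zeros(n, n)
for b in range(n):
    for c in range(n):
        Ric[b, c] = sp.simplify(
            sum(sp.diff(Gam[a][b][c], X[a]) for a in range(n))
            - sum(sp.diff(Gam[a][b][a], X[c]) for a in range(n))
            + sum(Gam[a][a][e]*Gam[e][b][c] for a in range(n) for e in range(n))
            - sum(Gam[a][c][e]*Gam[e][b][a] for a in range(n) for e in range(n)))

Rs = sp.simplify(sum(ginv[b, c]*Ric[b, c] for b in range(n) for c in range(n)))
Lam = 3/l**2
assert sp.simplify(Ric - sp.Rational(1, 2)*Rs*g + Lam*g) == sp.zeros(n, n)
```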
These two conditions are similar to those defining asymptotically flat space-times. The first shows the relation between the physical and unphysical space-times, and $\nabla_{\mu} \Omega \neq 0$ ensures that $\Omega$ can be used as a coordinate on $\mathcal{I}$. The second condition guarantees that $\Omega^{-2}T_{\mu\nu}$ admits a smooth limit to $\mathcal{I}$.
Fortunately there are different possible choices for $\Omega$, and the following sections show how this freedom helps us.
\checkmark Second definition: a weakly asymptotically de Sitter space-time is asymptotically de Sitter if $\mathcal{I}$ is spacelike and geodesically complete. These conditions do not fix the topology of $\mathcal{I}$, but three possible topologies have been studied by Ashtekar et al.\ \cite{ashtekar2014asymptotics}:
$\bullet$ If $\mathcal{I}$ has the topology $\mathbb{S}^{3}$ then the space-time will be known as globally asymptotically de Sitter (figure \ref{pic:4.1}).
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw (0,0) ellipse (0.75 and 0.3);
\draw (-0.75,0) -- (-0.75,-2);
\draw (-0.75,-2) arc (180:360:0.75 and 0.3);
\draw [dashed] (-0.75,-2) arc (180:360:0.75 and -0.3);
\draw (0.75,-2) -- (0.75,0);
\fill [yellow!40,opacity=0.5] (-0.75,0) -- (-0.75,-2) arc (180:360:0.75 and 0.3) -- (0.75,0) arc (0:180:0.75 and -0.3);
\node at (0,-0.2){$\mathcal{I}^+$};
\node at (0,-2.2){$\mathcal{I}^-$};
\end{tikzpicture}
\caption{ In this figure the upper circle is $\mathcal{I}^+$ and the lower circle is $\mathcal{I}^-$; the topology of each is $\mathbb{S}^{3}$. \label{pic:4.1}}
\end{figure}
$\bullet$
$\mathcal{I}$ with the topology $ \mathbb{R}^{3} \simeq \mathbb{S}^{3} \setminus \left \{ p \right \}$ results in a space-time that is asymptotically de Sitter in the Poincaré patch, where $p$ is spacelike infinity, $i^0$ (figure \ref{pic:4.2}).
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw (0,0) ellipse (0.75 and 0.3);
\draw (-0.75,0) -- (-0.75,-2);
\draw (-0.75,-2) arc (180:360:0.75 and 0.3);
\draw [dashed] (-0.75,-2) arc (180:360:0.75 and -0.3);
\draw (0.75,-2) -- (0.75,0);
\fill [yellow!40,opacity=0.5] (-0.75,0) -- (-0.75,-2) arc (180:360:0.75 and 0.3) -- (0.75,0) arc (0:180:0.75 and -0.3);
\draw[red] (0, -2.3) .. controls (0.98,-1) .. (0.05,0.3);
\draw[red] (0, -2.3) .. controls (-0.99,-1) .. (0.05,0.3);
\node at (0,-0.2){$\mathcal{I}^+$};
\node at (0.1,0.4){$i^0$};
\filldraw (0.05,0.3) circle[radius=1pt];
\end{tikzpicture}
\caption{ The topology of null infinity in a space-time that is asymptotically de Sitter in the Poincaré patch is $\mathbb{R}^{3}$.\label{pic:4.2}}
\end{figure}
$\bullet$ The space-time $({M},{g}_{\mu\nu})$ is asymptotically de Sitter-Schwarzschild if $\mathcal{I}$ has the topology $\mathbb{R} \times \mathbb{S}^{2} \simeq \mathbb{S}^{3}\setminus \left \{ p_1 ,p_2 \right \}$, where $p_1$ is $i^{\pm}$ and $p_2$ is $i^0$ (figure \ref{pic:4.3}).
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw (0,0) ellipse (0.75 and 0.3);
\draw (-0.75,0) -- (-0.75,-2);
\draw (-0.75,-2) arc (180:360:0.75 and 0.3);
\draw [dashed] (-0.75,-2) arc (180:360:0.75 and -0.3);
\draw (0.75,-2) -- (0.75,0);
\fill [yellow!40,opacity=0.5] (-0.75,0) -- (-0.75,-2) arc (180:360:0.75 and 0.3) -- (0.75,0) arc (0:180:0.75 and -0.3);
\draw [dashed](0.1,0.3) -- (0.1,-1.7);
\draw[red] (0.05,- 0.3) .. controls (0.955,-0.5) .. (0.1, -1.3) ;
\draw[red] (-0.25,- 0.3) .. controls (-0.95,-0.5) .. (0.1, -1.3) ;
\draw[red] (-0.25, -2.3) .. controls (-0.95,-1.5) .. (0.1, -1.3);
\draw[red] (0.05, -2.3) .. controls (0.955,-1.5) .. (0.1, -1.3);
\draw[yshift = 0.1cm, snake] (0.05,- 0.38) -- (-0.25,- 0.38);
\draw[yshift = 0.1cm, snake] (0.05,- 2.38) -- (-0.25,- 2.38);
\node at (0.55,0.3){$\mathcal{I}^+$};
\node[gray] at (-0.1,-0.2){$r=0$};
\node at (0.1,0.4){$i^0$};
\node at (0.2,- 0.2){$i^+$};
\node at (-0.35,- 0.2){$i^-$};
\filldraw (0.05,0.3) circle[radius=1pt];
\filldraw (0.05,- 0.3) circle[radius=1pt];
\filldraw (-0.25,- 0.3) circle[radius=1pt];
\filldraw (0.05,- 2.3) circle[radius=1pt];
\filldraw (-0.25,- 2.3) circle[radius=1pt];
\draw [blue](0.05,- 2.3) --(-0.25,- 0.3);
\draw [blue] (0.05,- 0.3) -- (-0.25,- 2.3);
\end{tikzpicture}
\caption{This figure shows the dS-Schwarzschild space-time, where the red lines are dS horizons and the blue lines are event horizons. The topology of null infinity is $\mathbb{R} \times \mathbb{S}^{2}$. \label{pic:4.3}}
\end{figure}
Of course, all of these definitions have to satisfy condition one. The points removed to represent infinity cause trouble for further calculations, so it is convenient to work with the first topology.
\checkmark Third definition:
$({M},{g}_{\mu\nu})$ is strongly asymptotically de Sitter if it satisfies the second definition and the intrinsic metric, $q_{\mu\nu}$ becomes conformally flat \cite{ashtekar2014asymptotics}.
These definitions depend on the choice of the conformal factor, so a given space-time can belong to different classes. Generally, near infinity the metric can be written as
\begin{equation}
\tilde{g}_{\mu\nu}=-\tilde{\nabla}_{\mu}\Omega\tilde{\nabla}_{\nu}\Omega+\tilde{h}_{\mu\nu}
\label{eq:4.metric}
\end{equation}
where $\tilde{h}_{\mu\nu}$ is a function of $\Omega$.
\section{Asymptotic de Sitter space-time's symmetries}
As the intrinsic metric in the
$\Lambda>0$ case has signature $(+,+,+)$, $n^{\mu}$ is no longer tangential to $\mathcal{I}$. Thus it is not possible to follow the same route as in the $\Lambda=0$ case to find the asymptotic symmetry group. One idea is to require the intrinsic metric to be conformally flat; then the symmetry group reduces to $SO(4,1)$. But this restriction forces the Bach tensor to vanish, and information is lost by fiat. In this section we review the work of Ashtekar et al.\ \cite{ashtekar2014asymptotics}.
So finding another framework is essential. To this end, the Einstein equations are rewritten in terms of the conformal rescaling $\Omega$ \cite{ashtekar2014asymptotics}
\begin{equation}
\tilde{R}_{\mu\nu}-1/2 \tilde{g}_{\mu\nu}\tilde{R}+ 2 \Omega^{-1}(\tilde{\nabla}_{\mu}n_{\nu}-\tilde{g}_{\mu\nu}\tilde{\nabla}^{\sigma}n_{\sigma})+3\Omega^{-2} \tilde{g}_{\mu\nu}n^{\sigma}n_{\sigma}+\Omega^{-2} \Lambda \tilde{g}_{\mu\nu}=8 \pi G{T}_{\mu\nu}, \label{eq :4.5}
\end{equation}
where $\tilde{n}_{\mu}:=\tilde{\nabla}_{\mu}\Omega$ (see Appendix \ref{ap:d}).
Multiplying the relation \eqref{eq :4.5} by $\Omega^{2}$ and applying the boundary conditions mentioned in the first definition, one finds
\begin{equation}
\tilde{n}^{\mu}\tilde{n}_{\mu}|_{\mathcal{I}}= - \Lambda/3 =-1/l^{2}.
\label{eq:4.39}
\end{equation}
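Relation \eqref{eq:4.39} can be checked in the exact de Sitter example. Taking the Poincaré-patch metric in conformal time with $\Omega=-\eta/l$ (a standard choice, not taken from the text), $\Omega^{2}g_{\mu\nu}$ is the flat metric and the norm of $\tilde{n}_{\mu}=\tilde{\nabla}_{\mu}\Omega$ comes out as $-1/l^{2}$:

```python
# Check n_mu n^mu = -Lambda/3 = -1/l^2 on scri for exact de Sitter,
# using Omega = -eta/l so that Omega**2 * g is the flat Minkowski metric.
import sympy as sp

eta = sp.symbols('eta', negative=True)   # conformal time, eta < 0
l = sp.symbols('l', positive=True)       # de Sitter radius, Lambda = 3/l**2

Omega = -eta/l                           # vanishes on scri (eta -> 0)
g_tilde = sp.diag(-1, 1, 1, 1)           # Omega**2 * g, flat
n_lower = sp.Matrix([sp.diff(Omega, eta), 0, 0, 0])
norm = (g_tilde.inv()*n_lower).dot(n_lower)   # n^mu n_mu

assert sp.simplify(norm + 1/l**2) == 0        # equals -Lambda/3
```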
Thus
$\tilde{n}_{\mu}$ is timelike on $\mathcal{I}$, and as a result $\mathcal{I}$ itself is spacelike. Because of the existing freedom in choosing the conformal factor,
it is possible to choose one satisfying
$\tilde{\nabla}_{\mu}\tilde{n}^{\mu} |_{\mathcal{I}}=0$. This choice considerably simplifies the calculations. Now, multiplying \eqref{eq :4.5} by $\Omega$ and using \eqref{eq:4.39}, the third and fourth terms of \eqref{eq :4.5} cancel, thus
\begin{equation}
\label{eq:4.65}
\tilde{\nabla}_{\mu}\tilde{n}_{\nu}|_{\mathcal{I}}=0.
\end{equation}
With all these restrictions, one conformal degree of freedom is left, $\Omega \rightarrow \Omega'=\omega \Omega$ with $\tilde{n}^{\mu}\tilde{\nabla}_{\mu}\omega |_{\mathcal{I}}=0$. In this conformal frame, $\tilde{C}_{\mu\nu \sigma \lambda}$ vanishes near $\mathcal{I}$. To show this, the Schouten tensor is introduced \cite{hollands2005comparison}
\begin{equation}
\tilde{S}_{\mu\nu}:=\tilde{R}_{\mu\nu}-(\tilde{R}/6)\tilde{g}_{\mu\nu}.
\end{equation}
The relation between Schouten tensor and its conformal transformed form is (see Appendix \ref{ap:d})
\begin{equation}
\tilde{S}_{\mu\nu}=S_{\mu\nu}-2 \Omega^{-1}\tilde{\nabla}_{\mu}\tilde{n}_{\nu}+2\Omega^{-2}\tilde{g}_{\mu\nu}\tilde{n}^{\sigma}\tilde{n}_{\sigma}. \label{eq:8.5}
\end{equation}
On the other hand, the conformally transformed Riemann tensor can be written as (see Appendix \ref{ap:j})
\begin{equation}
\tilde{R}_{\mu\nu \sigma \lambda}=\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\mu[\sigma}\tilde{S}_{\lambda]\nu}-\tilde{g}_{\nu[\sigma}\tilde{S}_{\lambda]\mu}, \label{eq:5.9}
\end{equation}
multiplying this relation by $\tilde{n}^{\lambda}$
\begin{align}
\label{eq:4.nR}
&\underbrace{\tilde{n}^{\lambda}\tilde{R}_{\mu\nu \sigma \lambda}}_{=0}=\tilde{n}^{\lambda}\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\sigma[\mu}\tilde{S}_{\nu]\lambda}\tilde{n}^{\lambda}-\tilde{n}_{[\mu}\tilde{S}_{\nu]\sigma}\\
&\Rightarrow \tilde{n}_{[\mu}\tilde{S}_{\nu]\sigma}=\tilde{n}^{\lambda}\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\sigma[\mu}\tilde{S}_{\nu]\lambda}\tilde{n}^{\lambda}\notag.
\end{align}
where $\tilde{n}^{\lambda}\tilde{R}_{\mu\nu \sigma \lambda}=0$ has been used. On the other hand multiplying \eqref{eq:8.5} by $\Omega$ and taking the derivative, one has
\begin{align}
\label{eq:5.11}
\tilde{\nabla}_{[\mu}\Omega\tilde{S}_{\nu]\sigma}&=\Omega \tilde{\nabla}_{[\mu}\tilde{S}_{\nu]\sigma}+\tilde{n}_{[\mu} \tilde{S}_{\nu]\sigma}=\Omega \tilde{\nabla}_{[\mu}\tilde{S}_{\nu]\sigma}+
\tilde{n}^{\lambda}\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\sigma[\mu}\tilde{S}_{\nu]\lambda}\tilde{n}^{\lambda}\\
&=\Omega \tilde{\nabla}_{[\mu}{S}_{\nu]\sigma}+4(\tilde{\nabla}_{[\mu}\Omega )\Omega^{-2}\tilde{g}_{\nu]\sigma}\tilde{n}_{\lambda}\tilde{n}^{\lambda}+
\tilde{n}^{\lambda}\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\sigma[\mu}{S}_{\nu]\lambda}\tilde{n}^{\lambda}\notag\\
&-2\Omega^{-2}\tilde{n}_{[\mu}(\tilde{\nabla}_{\nu]}\tilde{n}_{\lambda}+2\Omega^{-2}\tilde{n}_{[\mu}(\tilde{\nabla}_{\nu]}\tilde{n}_{\lambda}-4(\tilde{\nabla}_{[\mu}\Omega )\Omega^{-2}\tilde{g}_{\nu]\sigma}\tilde{n}_{\lambda}\tilde{n}^{\lambda}
\notag\\
&-2\tilde{\nabla}_{[\mu}\tilde{\nabla}_{\nu]}\tilde{n}_{\sigma}=\Omega \tilde{\nabla}_{[\mu}{S}_{\nu]\sigma}+\tilde{n}^{\lambda}\tilde{g}_{\lambda[\mu} S_{\nu]\sigma}+\tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\lambda},\notag
\end{align}
where $\tilde{K}_{\mu\nu}\tilde{n}^{\nu}=0$ and \eqref{eq:4.nR} have been used. Additionally, for the field equations in the physical space-time one can write \cite{ashtekar2014asymptotics}
\begin{equation}
{S}_{\mu\nu}=(\Lambda/3){g}_{\mu\nu}+8 \pi G({T}_{\mu\nu}-1/3 {T}
{g}_{\mu\nu}) \equiv \Lambda /3 {g}_{\mu\nu}+\bar{T}_{\mu\nu},
\end{equation}
where $\bar{T}_{\mu\nu}:=8 \pi G({T}_{\mu\nu}-1/3 {T}
{g}_{\mu\nu})$. This relation can be used in \eqref{eq:5.11} so
\begin{equation}
\Omega \tilde{\nabla}_{[\mu}{S}_{\nu]\sigma}+\tilde{C}_{\mu\nu\sigma\lambda}n^{\lambda}=\tilde{\nabla}_{[\mu}(\Omega\bar{T}_{\nu]\sigma})-g_{\sigma[\mu}\bar{T}_{\nu]\lambda}n^{\lambda}.
\label{eq:5.13}
\end{equation}
As $\Omega^{-1} {T}_{\mu\nu}$ has a smooth limit on $\mathcal{I}$, it follows that
\begin{equation}
\tilde{C}_{\mu\nu\sigma\lambda}n^{\lambda}|_{\mathcal{I}}=0.
\label{eq:4.722}
\end{equation}
The Weyl tensor can be divided into electric and magnetic parts
\begin{align}
\label{eq:5.24}
&\tilde{E}_{\mu\sigma}:=l^{2}\tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\nu}\tilde{n}^{\lambda},\\
&\tilde{B}_{\mu\sigma}:=l^{2} \star \tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\nu}\tilde{n}^{\lambda},\notag
\end{align}
where
\begin{equation}
\star C_{\mu\nu\sigma\rho}\equiv C_{\mu\nu\sigma\rho}+iC_{\mu\nu\sigma\rho}^{\sim},
\end{equation}
for more details see Appendix \ref{ap:j}. In this relation $C_{\mu\nu\sigma\rho}^{\sim}$ is the right dual. Both relations in \eqref{eq:5.24} vanish on $\mathcal{I}$ because of \eqref{eq:4.722}, so
\begin{equation}
\label{eq:4.75}
\tilde{C}_{\mu\nu\sigma\lambda}|_{\mathcal{I}}=0.
\end{equation}
\section{Asymptotic expansion}
As shown in the previous section, taking the intrinsic metric to be conformally flat is not a good way to define the asymptotic symmetry group, as it loses information by fiat. So it is necessary to find another method. Here the Fefferman-Graham framework is presented. As stated before, the metric near $\mathcal{I}$ can be written as \eqref{eq:4.metric}. To find $\tilde{h}_{\mu\nu}$, one can start with its Lie derivative \cite{fefferman1985conformal}
\begin{equation}
\mathcal{L}_{n}\tilde{h}_{\mu\nu}=\underbrace{\tilde{n}^{\sigma}\tilde{\nabla}_{\sigma}\tilde{h}_{\mu\nu}}_{=0}-\tilde{h}_{\mu\sigma}\underbrace{\tilde{\nabla}_{\nu}\tilde{n}^{\sigma}}_{\tilde{K}^{\sigma}_{\nu}}-\tilde{h}_{\nu\sigma}\underbrace{\tilde{\nabla}_{\mu}\tilde{n}^{\sigma}}_{\tilde{K}^{\sigma}_{\mu}}=-2\tilde{K}_{\mu\nu}.
\end{equation}
On the other hand, the Taylor expansion gives
\begin{equation}
\tilde{h}_{\mu\nu}=\sum_{j=0}^{\infty}(\tilde{h}_{\mu\nu})_{j}\Omega^j
\label{eq:4.88}
\end{equation}
where $(\tilde{h}_{\mu\nu})_{0}$ represents the intrinsic metric of $\mathcal{I}$. Also the Lie derivative of the extrinsic curvature can be obtained as follows
\begin{align}
\label{eq:4.89}
\mathcal{L}_{n}\tilde{K}_{\mu\nu}&=\tilde{n}^{\sigma}\underbrace{\tilde{\nabla}_{\sigma}\tilde{K}_{\mu\nu}}_{\tilde{\nabla}_{\sigma}\tilde{\nabla}_{\mu}\tilde{n}_{\nu}}-\underbrace{\tilde{K}_{\mu\sigma}}_{\tilde{\nabla}_{\mu}\tilde{n}_{\sigma}}\tilde{\nabla}_{\nu}\tilde{n}^{\sigma}-\underbrace{\tilde{K}_{\nu\sigma}}_{\tilde{\nabla}_{\nu}\tilde{n}_{\sigma}}\tilde{\nabla}_{\mu}\tilde{n}^{\sigma}\\
&=\tilde{n}^{\sigma}\tilde{\nabla}_{\sigma}\tilde{\nabla}_{\mu}\tilde{n}_{\nu}-\tilde{n}^{\sigma}\tilde{\nabla}_{\mu}\tilde{\nabla}_{\sigma}\tilde{n}_{\nu}+\tilde{\nabla}_{\mu}\underbrace{(n^{\sigma}\tilde{\nabla}_{\sigma}\tilde{n}_{\nu})}_{=0}-\tilde{\nabla}_{\mu}\tilde{n}_{\sigma}\tilde{\nabla}_{\nu}\tilde{n}^{\sigma}\notag\\
&=\tilde{n}^{\sigma}(\tilde{\nabla}_{\sigma}\tilde{\nabla}_{\mu}-\tilde{\nabla}_{\mu}\tilde{\nabla}_{\sigma})\tilde{n}_{\nu}-\tilde{\nabla}_{\mu}\tilde{n}_{\sigma}\tilde{\nabla}_{\nu}\tilde{n}^{\sigma}\notag\\
&=\tilde{n}^{\sigma}\tilde{n}^{\rho}\tilde{R}_{\sigma\mu\rho\nu}-\tilde{K}_{\mu\sigma}\tilde{K}_{\nu}^{\sigma}.
\end{align}
Multiplying this relation by $\tilde{g}^{\mu\sigma}$, one obtains
\begin{equation}
\mathcal{L}_{n} \tilde{K}_{\nu}^{\mu}={\mathcal{R}}_{\nu}^{\mu}+\tilde{K}\tilde{K}_{\nu}^{\mu}-\Omega^{-1}\tilde{K}\tilde{h}_{\nu}^{\mu}-4\Omega^{-1}\tilde{K}_{\nu}^{\mu}.
\end{equation}
Now, expanding the Einstein tensor, one has (see Appendix \ref{ap:d})
\begin{equation}
\label{eq:4.91}
\tilde{G}_{\mu\nu}|_{\mathcal{I}}=2 \Omega^{-1}(\tilde{K}_{\mu\nu}-\tilde{g}_{\mu\nu}\tilde{K}).
\end{equation}
As the covariant divergence of the Einstein tensor vanishes, it is possible to write
\begin{equation}
\tilde{h}^{\nu}_{\mu}\tilde{G}_{\nu\sigma}n^{\sigma}=0=\tilde{h}^{\nu}_{\mu}\tilde{R}_{\nu\sigma}n^{\sigma}=D_{\nu}K^{\nu}_{\mu}-D_{\mu}K.
\end{equation}
Also from \eqref{eq:4.91}, we know that
\begin{equation}
\label{eq:4.93}
\tilde{\mathcal{R}}+\tilde{K}^2-\tilde{K}_{\mu\nu}\tilde{K}^{\mu\nu}=4\Omega^{-1}\tilde{K}.
\end{equation}
For more details see Appendix \ref{ap:d}. It is also useful to define the traceless part of the extrinsic curvature tensor $\tilde{K}_{\mu\nu}$
\begin{equation}
\tilde{P}^{\mu}_{\nu}=\tilde{K}^{\mu}_{\nu}-\frac{\tilde{h}^{\mu}_{\nu}}{3}\tilde{K}.
\end{equation}
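By construction $\tilde{P}^{\mu}_{\nu}$ has vanishing trace. A minimal numerical illustration of this projection (using a random stand-in matrix for the extrinsic curvature, not data from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
K = rng.normal(size=(3, 3))        # stand-in for K^mu_nu on a 3d slice
h = np.eye(3)                      # intrinsic projector, h^mu_mu = 3
P = K - (np.trace(K) / 3.0) * h    # traceless part, as in the definition above
assert abs(np.trace(P)) < 1e-12
```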
Other expanded quantities that will be useful for the calculations are
\begin{align}
&\tilde{P}^{\mu}_{\nu}=\sum_{j=0}^{\infty}(\tilde{P}^{\mu}_{\nu})_{j}\Omega^j\quad,\quad \tilde{K}=\sum_{j=0}^{\infty}(\tilde{K})_{j}\Omega^j,\\
&\tilde{R}_{\mu\nu}=\sum_{j=0}^{\infty}(\tilde{R}_{\mu\nu})_{j}\Omega^j\quad,\quad \tilde{R}=\sum_{j=0}^{\infty}(\tilde{R})_{j}\Omega^j.\notag
\end{align}
If the Lie derivative is defined as
\begin{equation}
\mathcal{L}_{n}\equiv\frac{d}{d\Omega},
\end{equation}
then
\begin{align}
\label{eq:4.92}
&\frac{d}{d\Omega}\tilde{P}^{\mu}_{\nu}=[\tilde{\mathcal{R}}^{\mu}_{\nu}-\frac{\tilde{h}^{\mu}_{\nu}}{3}\tilde{\mathcal{R}}]-\tilde{K}\tilde{P}^{\mu}_{\nu}+2\Omega^{-1}\tilde{P}^{\mu}_{\nu},\\
&\frac{d}{d\Omega}\tilde{K}=-\tilde{\mathcal{R}}-\tilde{K}^2+5\Omega^{-1}\tilde{K},\notag\\
&\frac{d}{d\Omega}\tilde{h}^{\mu}_{\nu}=2\tilde{h}_{\nu \sigma}\tilde{K}^{\sigma}_{\mu}.\notag
\end{align}
Accordingly it is possible to write higher orders as \cite{jager2008conserved}
\begin{align}
\label{eq:4.97}
&(2+j)(\tilde{P}^{\mu}_{\nu})_{j}=[(\tilde{\mathcal{R}}^{\mu}_{\nu})_{j-1}-\frac{\tilde{h}^{\mu}_{\nu}}{3}(\tilde{\mathcal{R}})_{j-1}]-\sum_{m=0}^{j-1}(\tilde{K})_{m}(\tilde{P}^{\mu}_{\nu})_{j-1-m},\\
&(5+j)(\tilde{K})_{j}=-(\tilde{\mathcal{R}})_{j-1}-\sum_{m=0}^{j-1}(\tilde{K})_{m}(\tilde{K})_{j-1-m},\notag\\
&j(\tilde{h}_{\mu\nu})_{j}=2\sum^{j-1}_{m=0}[(\tilde{h}_{\nu\sigma})_{m}(\tilde{P}^{\sigma}_{\mu})_{j-1-m}+\frac{1}{3}(\tilde{h}_{\mu\nu})_{m}(\tilde{K})_{j-1-m}],\notag
\end{align}
which are the Fefferman-Graham relations for Einstein's equations; $j$ should be smaller than $d-2$ in \eqref{eq:4.97}.
To find $\tilde{h}_{\mu\nu}$ one has to write \eqref{eq:4.88} up to $j=2$, since here $d=4$,
\begin{equation}
\tilde{h}_{\mu\nu}=(\tilde{h}_{\mu\nu})_0\Omega^{0}+(\tilde{h}_{\mu\nu})_1\Omega^{1}+(\tilde{h}_{\mu\nu})_2\Omega^{2}
\end{equation}
According to \eqref{eq:4.97} one has
\begin{align}
\tilde{h}_{\mu\nu}&=(\tilde{h}_{\mu\nu})_{0}+1/2 \Omega^2(\tilde{h}_{\mu\nu})_{0}+3/2\Omega^2\mathcal{R}_{\mu\nu}-3/2\Omega^2KK_{\mu\nu}+3/2\Omega K(\tilde{h}_{\mu\nu})_{0}\\
&-6\Omega K_{\mu\nu}
-3/2\Omega^2K^{\sigma}_{\mu}K_{\sigma\nu}
-3/2\Omega K_{\mu\nu}\notag
\end{align}
On the other hand, for the electric part of the Weyl tensor one has
\begin{equation}
\label{eq:4.100}
\tilde{E}_{\mu\sigma}=\frac{1}{d-3}\Omega^{3-d}(\tilde{C}_{\mu\nu\sigma\rho}\tilde{n}^{\nu}\tilde{n}^{\rho})
\end{equation}
where
$d=4$. Recalling relation \eqref{eq:5.9}, the conformally transformed Schouten tensor on $\Omega=constant$ surfaces is
\begin{equation}
\tilde{S}_{\mu\nu}=-2\Omega^{-1}\tilde{\nabla}_{\mu}\tilde{n}_{\nu}.
\end{equation}
Now the Riemann tensor is contracted with $n^{\nu}n^{\rho}$ while its remaining indices are projected. Using \eqref{eq:5.9} it is possible to write
\begin{equation}
\label{4.102}
\tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\sigma}\tilde{R}_{\iota\nu\chi\rho}n^{\nu}n^{\rho}=\tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\sigma}\tilde{C}_{\iota\nu\chi\rho}n^{\nu}n^{\rho}+\Omega^{-1}{K}_{\mu\sigma}
\end{equation}
where \eqref{eq:4.39} with $l=1$ has been used.
Also for the Riemann tensor one can write
\begin{equation}
\label{4.103}
\tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\sigma}\tilde{R}_{\iota\nu\chi\rho}n^{\nu}n^{\rho}= \tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\sigma}n^{\nu}(\tilde{\nabla}_{\iota}\tilde{\nabla}_{\nu}-\tilde{\nabla}_{\nu}\tilde{\nabla}_{\iota})n_{\chi}=\mathcal{L}_{n}K_{\mu\sigma}+K_{\mu\nu}K^{\nu}_{\sigma},
\end{equation}
using
\eqref{4.102}
and
\eqref{4.103} one has
\begin{equation}
\tilde{C}_{\mu\nu\sigma\rho}{n}^{\nu}{n}^{\rho}=\mathcal{L}_{n}{K}_{\mu\sigma}+{K}_{\mu}^{\nu}{K}_{\nu\sigma}+\Omega^{-1}{K}_{\mu\sigma}.
\end{equation}
Using this relation in \eqref{eq:4.100} one has
\begin{equation}
\label{eq:4.106}
\tilde{E}_{\mu\sigma}=\Omega^{-1}(\mathcal{L}_{n}{K}_{\mu\sigma}+{K}_{\mu}^{\nu}{K}_{\nu\sigma}+\Omega^{-1}{K}_{\mu\sigma}).
\end{equation}
According to these relations, the unphysical metric in four dimensions reads
\begin{equation}
\label{eq:4.101}
\tilde{g}_{\mu\nu}=-\tilde{\nabla}_{\mu}\Omega\tilde{\nabla}_{\nu}\Omega+(1+\frac{1}{2}\Omega^2)(\tilde{h}_{\mu\nu})_{0}-\frac{3}{2}\Omega^3\tilde{E}_{\mu\nu}+O(\Omega^4)
\end{equation}
where $(\tilde{h}_{\mu\nu})_{0}$ is the metric of a three-sphere. Unfortunately the Killing equation for this metric is not exactly solvable.
\section{Finding the intrinsic metric according to the tetrad formalism}
To simplify the following calculations, the de Sitter line element is written as
\begin{equation}
ds^{2}=-F(r)dt^2+F(r)^{-1}dr^2+r^2d\Omega^2
\end{equation}
where $F(r)=1-\Lambda r^2/3$. Using the retarded null coordinate one has
\begin{equation}
ds^2=-F(r)du^2-2dudr+r^2d\Omega^2
\end{equation}
where $u=t-r^*$. The matrix representation of the inverse metric is then
\begin{equation}
g^{\mu\nu}=
\begin{bmatrix}
0&-1&0&0\\
-1&-F(r)&0&0\\
0&0&r^{-2}&0\\
0&0&0&r^{-2}\csc^2\theta
\end{bmatrix}
\end{equation}
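The passage from the static to the null form of the line element can be checked by direct substitution; the following sympy sketch (with $u=t-r^*$ and $dr^*/dr=1/F$, treating the differentials as formal symbols) is one way to do it:

```python
import sympy as sp

# treat the coordinate differentials du, dr as formal symbols
u, r, Lam, du, dr = sp.symbols('u r Lambda du dr')
F = 1 - Lam*r**2/3

dt = du + dr/F                     # from u = t - r*, dr*/dr = 1/F
ds2 = sp.expand(-F*dt**2 + dr**2/F)
# expected null form: -F du^2 - 2 du dr
assert sp.simplify(ds2 - (-F*du**2 - 2*du*dr)) == 0
```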
Accordingly a null tetrad can be defined \cite{lopez2006absorption,saw2016mass,saw2017behavior}
\begin{align}
\label{eq:4.11111}
& {l}^{\mu}=[0,1,0,0],\\
& {n}^{\mu}=[1,F(r),0,0]\notag,\\
& {m}^{\mu}=[0,0,\frac{1}{\sqrt{2}r},\frac{i}{\sqrt{2}r}\csc\theta],\notag\\
&{\bar{m}}^{\mu}=[0,0,\frac{1}{\sqrt{2}r},-\frac{i}{\sqrt{2}r}\csc\theta].\notag
\end{align}
Here the metric satisfies $g^{\mu\nu}=l^{\mu}n^{\nu}+l^{\nu}n^{\mu}-m^{\mu}\bar{m}^{\nu}-m^{\nu}\bar{m}^{\mu}$. For the $r$-independent terms in ${{m}}^{\mu}$
and ${\bar{m}}^{\mu}$ one has
\begin{align}
&\xi^{\varphi(0)}=\frac{i}{\sqrt{2}}\csc\theta,\\
&\xi^{\theta(0)}=\frac{1}{\sqrt{2}}\notag
\end{align}
and for higher order
\begin{align}
\xi^{\mu}=\xi^{\mu(0)}r^{-1}+O(r^{-2}).
\end{align}
Derivative operators can be defined according to \eqref{eq:4.11111} \cite{saw2016mass}
\begin{align}
&D=l^{\mu}\nabla_{\mu}=\frac{\partial}{\partial r},\\
&D'=n^{\mu}\nabla_{\mu}=\frac{\partial}{\partial u}-1/2(1-\Lambda r^2/3)\frac{\partial}{\partial r},\notag\\
&\delta=m^{\mu}\nabla_{\mu}=\frac{1}{\sqrt{2}r}\frac{\partial}{\partial\theta}+\frac{i}{\sqrt{2}r}\csc\theta\frac{\partial}{\partial\varphi},\notag\\
&\delta'=\bar{m}^{\mu}\nabla_{\mu}=\frac{1}{\sqrt{2}r}\frac{\partial}{\partial\theta}-\frac{i}{\sqrt{2}r}\csc\theta\frac{\partial}{\partial\varphi}
\end{align}
and their operations on
$u,r,\theta,\varphi$
are
\begin{align}
&Du=0\quad,\quad D'u=1\quad,\quad \delta u=0\quad,\quad \delta' u=0,\\
&Dr=1\quad,\quad D'r=\Lambda r^2/6-1/2\quad,\quad \delta r=0\quad,\quad \delta' r=0\notag,\\
&D\theta=0\quad,\quad D'\theta=0\quad,\quad \delta\theta=1/\sqrt{2}r\quad,\quad \delta'\theta=1/\sqrt{2}r,\notag\\
&D\varphi=0\quad,\quad D'\varphi=0\quad, \quad \delta\varphi=i/\sqrt{2}r\csc \theta\quad, \quad \delta'\varphi=-i/\sqrt{2}r\csc \theta.\notag
\end{align}
Also for second order derivatives one has
\begin{align}
\label{eq:4.117}
&DD'r=\Lambda r/3\quad,\quad D\delta\theta=-1/\sqrt{2}r^2,\\
&D\delta\varphi=-i\csc\theta/\sqrt{2}r^2\quad,\quad D'\delta\theta=\Lambda/6\sqrt{2}+1/2\sqrt{2}r^2\notag,\\
&D'\delta\varphi=(\Lambda/6\sqrt{2}+1/2\sqrt{2}r^2)i\csc\theta\quad,\quad \delta'\delta\varphi=(-i/2r^2)\csc\theta\cot\theta\notag,\\
&\delta\delta'\varphi=(i/2r^2)\csc\theta\cot\theta.\notag
\end{align}
As the calculations are done to $O(r^{-2})$, only the terms containing $\Lambda$ become important. So $\xi^{\mu}$ can be rewritten as follows
\begin{align}
&\xi^{\theta(0)}=\frac{1}{\sqrt{2}}e^{\Lambda f(u,\theta)},\\
&\xi^{\varphi(0)}=\frac{1}{\sqrt{2}}e^{\Lambda f(u,\theta)}\csc\theta\notag.
\end{align}
Hence for the spherically symmetric part of the expanded metric one has
\begin{equation}
\label{eq:11}
g_1=e^{\Lambda f(u,\theta)}d\theta^2+e^{\Lambda f(u,\theta)}\csc\theta d\varphi^2.
\end{equation}
Also, using the first relation in \eqref{eq:4.117}, one has
\begin{equation}
g=\frac{\Lambda}{3}r^2du^2+r^2g_{1}+O(r^{-2}).
\end{equation}
Considering the conformal factor
$ \Omega=r^{-1}$
one can obtain
\begin{equation}
\label{eq:13}
\tilde{g}=\frac{\Lambda}{3}du^2+g_{1}+O(r^{-1}).
\end{equation}
\subsection{Killing vector fields for the intrinsic metric}
Unfortunately it is not possible to solve the Killing equations analytically for the metric \eqref{eq:13}. One can, however, write the Killing equation for the spherically symmetric part of the metric, shown in relation \eqref{eq:11}. First
one must take an arbitrary Killing field $X^{\mu}=X^{\theta}\partial_{\theta}+X^{\varphi}\partial_{\varphi}$ and then write the Killing equation for it \cite{saw2017mass}
\begin{equation}
\mathcal{L}_{X}g_{\mu\nu}=X^{\sigma}\partial_{\sigma}g_{\mu\nu}+g_{\sigma\nu}\partial_{\mu}X^{\sigma}+g_{\mu\sigma}\partial_{\nu}X^{\sigma}.
\end{equation}
This relation gives three independent equations
\begin{align}
&\theta\theta:\partial_{\theta}(X^{\theta}e^{\Lambda f})=0,\\
&\varphi\varphi : \partial_{\varphi}X^{\varphi}+X^{\theta}\cot\theta=0,\notag\\
& \theta\varphi: \partial_{\varphi}X^{\theta}+\partial_{\theta}X^{\varphi}e^{-4\Lambda f}\sin^2\theta=0.\notag
\end{align}
which result in
\begin{align}
\label{eq:15}
&X^{\theta}(\theta,\varphi)=\frac{dA(\varphi)}{d\varphi}e^{-\Lambda f(\theta)},\\
& X^{\varphi}(\theta,\varphi)=2\sqrt{2}\alpha(\theta)A(\varphi)-X(\theta),\notag\\
&\frac{d^2A(\varphi)}{d\varphi^2}+(2\sqrt{2}e^{-3\Lambda f(\theta)}\frac{d\alpha (\theta)}{d\theta}\sin^2\theta)A(\varphi)=e^{-3\Lambda f(\theta)}\frac{dX(\theta)}{d \theta}\sin^2\theta. \notag
\end{align}
where $\alpha(\theta)=-\frac{1}{2\sqrt{2}\sin\theta}\frac{d}{d\theta}(e^{-\Lambda f(\theta)}\sin\theta)$ is the spin coefficient. Let $\omega(\theta)^2=2\sqrt{2}e^{-3\Lambda f(\theta)}\frac{d\alpha (\theta)}{d\theta}\sin^2\theta$. The last equation in \eqref{eq:15} then describes a harmonic oscillator with frequency $\omega(\theta)$, so $A(\varphi)$ would depend on both $\theta$ and $\varphi$. Since $A(\varphi)$ must be independent of $\theta$, the only possibility is to set $\omega(\theta)$ constant, and $X^{\theta}$ must vanish as well.
Therefore, we have just one single Killing vector on this axisymmetric topological 2-sphere
\begin{equation}
X^{\mu}=\partial_{\varphi}
\end{equation}
Thus one does not have the whole $SO(3)$ group on $\mathcal{I}$.
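This conclusion can be confirmed symbolically: for the axisymmetric 2-metric $g_1$ with a generic profile $f(\theta)$, the Lie derivative of the metric vanishes along $\partial_{\varphi}$ but not along $\partial_{\theta}$. A sympy sketch (where $f$ is kept arbitrary):

```python
import sympy as sp

th, ph, Lam = sp.symbols('theta varphi Lambda')
f = sp.Function('f')(th)                      # arbitrary profile f(theta)
coords = [th, ph]
# the axisymmetric 2-metric: e^{Lambda f} dtheta^2 + e^{Lambda f} csc(theta) dphi^2
g = sp.diag(sp.exp(Lam*f), sp.exp(Lam*f)/sp.sin(th))

def lie_metric(X):
    # (L_X g)_{mn} = X^s d_s g_{mn} + g_{sn} d_m X^s + g_{ms} d_n X^s
    n = len(coords)
    return sp.Matrix(n, n, lambda m, k: sp.simplify(sum(
        X[s]*sp.diff(g[m, k], coords[s])
        + g[s, k]*sp.diff(X[s], coords[m])
        + g[m, s]*sp.diff(X[s], coords[k]) for s in range(n))))

assert lie_metric([0, 1]) == sp.zeros(2, 2)   # X = d_phi is a Killing field
assert lie_metric([1, 0]) != sp.zeros(2, 2)   # X = d_theta is not, for generic f
```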
\begin{appendices}
\chapter{Compactification\label{app:A}}
In this appendix one can see how a manifold maps to a compactified manifold. As an example consider the map of $M=\mathbb{R}^2$ to $\tilde{M}=\mathbb{S}^2 \backslash \{(0,0,1)\}$, where $(0,0,1)$ is the north pole of the sphere. It is possible, as figure \ref{fig:A.1} shows, to map the points of the plane onto the sphere. Considering the north pole of the sphere, N, as a fixed point and choosing a point of the plane, A, one can draw a line through these two points. Obviously this line passes through a point of the sphere, $B$. In this way $A$ maps onto $B$; the only point of the sphere to which no point of the plane is attributed is the north pole. It is possible to draw an infinite number of lines which pass through the point $(0,0,1)$, are parallel to the plane and never cross it. On the other hand, the farther the point on the plane, the smaller the angle between the line and the plane becomes. Hence it seems to be a good idea to consider $(0,0,1)$ as the point that represents all the points at infinity of the plane $\mathbb{R}^2$. If one foliates this sphere into circles, each sheet has the metric
\begin{equation}
ds^2=dx^2+dy^2,
\end{equation}
with the relation
$x^{2}+y^{2}=r^2$.
It is possible to write the metric in the form
\begin{equation}
ds^2=d\xi d\xi^{*},
\end{equation}
where $\xi=x+iy$
and
$\xi^{*}=x-iy$. It will be helpful to shift the coordinate origin as
$x'=x, y'=y-r, z'=z$. Thus, as one can see in figure \ref{fig:A.2}, $\cot\theta/2$ can be used to show the position of points on the circle, so $Z=\cot\theta/2$ can be defined. In this coordinate system the metric takes the form
\begin{equation}
ds^2=\frac{4}{(1+Z^2)^2}dZ^2.
\end{equation}
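One can verify this pullback with sympy: substituting $Z=\cot(\theta/2)$ into $4\,dZ^2/(1+Z^2)^2$ returns $d\theta^2$, the metric of a unit circle:

```python
import sympy as sp

th = sp.symbols('theta', positive=True)
Z = sp.cot(th/2)
# coefficient of dtheta^2 after pulling back 4 dZ^2/(1+Z^2)^2 along Z(theta)
coeff = 4/(1 + Z**2)**2 * sp.diff(Z, th)**2
assert sp.simplify(coeff - 1) == 0
```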
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw (0,0) circle (1cm);
\filldraw[fill=green!20,draw=green!50!black](0,0) ellipse (1cm and 0.2cm);
\draw (-1.8,-0.5) to (-1,0.5);
\draw (-1,0.5) to (1.8,0.5);
\draw (1.8,0.5) to (1,-0.5);
\draw (1,-0.5) to (-1.8,-0.5);
\draw (0.2,1.36) to (-1,-0.76);
\filldraw
(0,1) circle (1pt) node[align=center, above]{N};
\filldraw
(-0.8,-0.4) circle (1pt) [color=red] node[align=center, above]{A};
\filldraw
(-0.34,0.4) circle(1pt) [color=blue] node [ align=center, above]{B};
\filldraw
(0,0)circle(1pt) [color=gray] node [align=center, above]{O};
\end{tikzpicture}
\caption{Mapping $\mathbb{R}^2$ onto $\mathbb{S}^2$.}
\label{fig:A.1}
\end{figure}
It is possible to find similar coordinates for a sphere too. First one has to set
$z'=z-r$,
$y'=y$
and
$x'=x$. Then the position of points on the sphere can be shown by $\xi=e^{i\varphi}\cot(\theta/2)$ (figure \ref{fig:A.3}). The metric can be written as
\begin{equation}
ds^2=d\xi d\xi^*=1/4(1+\xi \xi^*)^2(d\theta^2+\sin^2\theta d\varphi^2).
\end{equation}
By choosing the conformal factor $\Omega=\frac{2}{(1+\xi \xi^*)}$ the unphysical metric becomes a 2-sphere.
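The relation $d\xi\, d\xi^*=\frac{1}{4}(1+\xi \xi^*)^2(d\theta^2+\sin^2\theta\, d\varphi^2)$ can be checked component by component; the sketch below works with the half-angle $\psi=\theta/2$ so the trigonometric identities stay simple:

```python
import sympy as sp

psi, ph = sp.symbols('psi varphi', positive=True)   # psi = theta/2
xi = sp.exp(sp.I*ph)*sp.cot(psi)                    # xi = e^{i phi} cot(theta/2)
conj = sp.conjugate

xi_th = sp.diff(xi, psi)/2                          # d/dtheta = (1/2) d/dpsi
xi_ph = sp.diff(xi, ph)
Omega2 = sp.Rational(1, 4)*(1 + xi*conj(xi))**2
sin_th = sp.expand_trig(sp.sin(2*psi))              # sin(theta)

# ds^2 = d xi d xi^*  =>  compare the three metric components
assert sp.simplify(xi_th*conj(xi_th) - Omega2) == 0
assert sp.simplify(xi_ph*conj(xi_ph) - Omega2*sin_th**2) == 0
assert sp.simplify(xi_th*conj(xi_ph) + xi_ph*conj(xi_th)) == 0
```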
Now consider the metric of $\mathbb{R}^{d+1}$
\begin{equation}\label{eq:A,1}
ds^2=-dt^2+dr^2+r^2d\Omega^2_{d-1},
\end{equation}
where $d\Omega^2_{d-1}$ is the metric of the $(d-1)$-sphere. It is useful to define null coordinates as
\begin{eqnarray}
v=t+r\quad,\quad u=t-r.
\end{eqnarray}
Thus the metric \eqref{eq:A,1} takes the form
\begin{equation}
ds^2=-dudv+\frac{1}{4}(v-u)^2 d\Omega^2_{d-1}.
\end{equation}
One can foliate the space into $u=constant$ or $v=constant$ surfaces and multiply each of these cuts by the conformal factor $\Omega=\frac{2}{v-u}$. This is exactly what one does to find the metric near $\mathcal{I}$ for asymptotically flat space-times.
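The null form of \eqref{eq:A,1} is again a matter of direct substitution; as a quick sympy check of the $(t,r)$ part:

```python
import sympy as sp

du, dv = sp.symbols('du dv')
# t = (u+v)/2 and r = (v-u)/2, hence:
dt = (du + dv)/2
dr = (dv - du)/2
ds2 = sp.expand(-dt**2 + dr**2)
assert sp.simplify(ds2 + du*dv) == 0    # i.e. ds^2 = -du dv in the (t,r) plane
```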
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\filldraw[fill=green!50,draw=black!50!] (0,0mm) -- (1.73mm,1mm) arc (0:83:2mm) -- cycle ;
\node at (0.15, 0.35)[green] (a) {$\theta$};
\filldraw[fill=red!40,draw=black!50!] (0,-10mm) -- (1mm,-8.27mm) arc (0:62:2mm) -- cycle ;
\node at (0.1, -0.6)[red] (b) {$\frac{\theta}{2}$};
\draw[->] (-1.25,0) -- (1.25,0) coordinate (x axis) node[left, above]{X};
\draw[->] (0,-1.25) -- (0,1.25) coordinate (y axis)node[left]{Y};
\draw (0,0) circle (1cm);
\draw[very thick,dashed,purple] (0,0) -- (-90:1cm ) ;
\draw [->] [very thick,blue] (0,0) -- (30:1cm) ;
\node at (0.4,0.3)[blue] {$\vec{r}$};
\draw [very thick,dashed,orange] (0,-1) -- (30:1cm);
\node at (0.95,0.5)[orange]{$P$};
\end{tikzpicture}
\caption{This illustration shows how $\cot\theta/2$ can be used to show the position of $P$. }
\label{fig:A.2}
\end{figure}
\tdplotsetmaincoords{60}{110}
\pgfmathsetmacro{\rvec}{.8}
\pgfmathsetmacro{\thetavec}{30}
\pgfmathsetmacro{\phivec}{60}
\begin{figure}
\centering
\begin{tikzpicture}[scale=5,tdplot_main_coords]
\coordinate (O) at (0,0,0);
\draw[thick,->] (0,0,0) -- (1,0,0) node[anchor=north east]{$x$};
\draw[thick,->] (0,0,0) -- (0,1,0) node[anchor=north west]{$y$};
\draw[thick,->] (0,0,0) -- (0,0,1) node[anchor=south]{$z$};
\tdplotsetcoord{P}{\rvec}{\thetavec}{\phivec}
\draw[-stealth,color=red] (O) -- (P) node[above right] {$P$} ;
\draw[dashed, color=red] (O) -- (Pxy);
\draw[dashed, color=red] (P) -- (Pxy);
\tdplotdrawarc{(O)}{0.2}{0}{\phivec}{anchor=north}{$\phi$}
\tdplotsetthetaplanecoords{\phivec}
\tdplotdrawarc[tdplot_rotated_coords]{(0,0,0)}{0.5}{0}%
{\thetavec}{anchor=south west}{$\theta$}
\shade[ball color = yellow!40, opacity = 0.4] (0,0) circle (0.53cm);
\draw (0,0) ellipse (0.53cm and 0.2cm);
\draw [dashed, color=blue] (0,0,0) to (0,0,-0.6);
\draw [dashed, color=blue] (0,0,-0.6) to (P);
\draw[draw=black] (0,0,-0.6) -- (0.1,0.1,-0.3) arc (0:62:1mm) -- cycle ;
\node at (0,0.1,-0.4) {$\frac{\theta}{2}$};
\end{tikzpicture}
\caption{ $\xi=e^{i\varphi}\cot(\theta/2)$ can be used to show the position of $P$. }
\label{fig:A.3}
\end{figure}
\chapter{Connection and Ricci tensor for higher orders}
For the metric
\begin{equation}
g_{\mu\nu}=\hat{g}_{\mu\nu}+h_{\mu\nu},
\end{equation}
the connection can be found as
\begin{align}
\label{eq:B.2}
\Gamma^{\rho}_{\mu \nu}=1/2 \hat{g}^{\rho \lambda}
(\partial_{\mu}\hat{g}_{\nu \lambda}+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})\\
+1/2 {h}^{\rho \lambda}(\partial_{\mu}\hat{g}_{\nu \lambda}
+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})\notag\\
+1/2 {g}^{\rho \lambda}(\partial_{\mu}{h}_{\nu \lambda}+\partial_{\nu}{h}_{\mu \lambda}-\partial_{\lambda}{h}_{\mu \nu})\notag\\
+1/2 {h}^{\rho \lambda}(\partial_{\mu}{h}_{\nu \lambda}+\partial_{\nu}{h}_{\mu \lambda}-\partial_{\lambda}{h}_{\mu \nu}).\notag
\end{align}
Inserting $\hat{g}_{\sigma \epsilon}\hat{g}^{\sigma \epsilon}$ in the second term and using the definitions of the zeroth- and second-order connections, $ \Gamma^{\rho (0)}_{\mu \nu}=1/2 \hat{g}^{\rho \lambda}(\partial_{\mu}\hat{g}_{\nu \lambda}+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})$,
$ \Gamma^{\rho(2)}_{\mu \nu}=1/2 {h}^{\rho \lambda}(\partial_{\mu}{h}_{\nu \lambda}+\partial_{\nu}{h}_{\mu \lambda}-\partial_{\lambda}{h}_{\mu \nu})$,
the relation \eqref{eq:B.2} can be written as
\begin{align}
\label{eq:B.3}
\Gamma^{\rho}_{\mu \nu}&=\Gamma^{\rho(0)}_{\mu \nu}
+1/2 {h}^{\rho \lambda}\hat{g}_{ \sigma \epsilon}\hat{g}^{ \sigma \epsilon}(\partial_{\mu}\hat{g}_{\nu \lambda}+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})\\
& +1/2(\partial_{\mu}h^{\rho}_{\nu}+\partial_{\nu}h^{\rho}_{\mu}-\partial^{\rho}h_{\mu \nu})+\Gamma^{\rho(2)}_{\mu \nu},\notag\\
&\Rightarrow\notag\\
&\Gamma^{\rho}_{\mu \nu}=\Gamma^{\rho(0)}_{\mu \nu}
+\delta^{\lambda}_{\epsilon}h^{\rho}_{\sigma} \hat{g}^{\sigma \epsilon}(\partial_{\mu}\hat{g}_{\nu \lambda}+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})\notag\\
&+1/2(\partial_{\mu}h^{\rho}_{\nu}+\partial_{\nu}h^{\rho}_{\mu}-\partial^{\rho}h_{\mu \nu})+\Gamma^{\rho(2)}_{\mu \nu},\notag\\
&\Rightarrow\notag\\
&\Gamma^{\rho}_{\mu \nu}=\Gamma^{\rho(0)}_{\mu \nu}+1/2(2h^{\rho}_{\sigma}\Gamma^{\rho(0)}_{\mu \nu}+\partial_{\mu}h^{\rho}_{\nu}+\partial_{\nu}h^{\rho}_{\mu}-\partial^{\rho}h_{\mu \nu})+\Gamma^{\rho(2)}_{\mu \nu}.\notag
\end{align}
On the other hand
\begin{align}
\label{eq:B.4}
&\hat{g}^{\rho \xi}(\partial_{\mu}h_{\xi \nu}+\partial_{\nu}h_{\xi \mu}-\partial_{\xi}h_{\mu \nu}+\Gamma^{\chi}_{\mu \xi}h_{\chi \nu}+\Gamma^{\chi}_{\mu \nu}h_{\chi \xi}+\Gamma^{\chi}_{\nu \mu}h_{\chi \xi}-\Gamma^{\chi}_{\nu \xi}h_{\mu \chi}-\Gamma^{\chi}_{\mu \xi}h_{\nu\chi})\\
&=\hat{g}^{\rho \xi}(\nabla_{\mu}h_{\xi \nu}+\nabla_{\nu}h_{\xi \mu}-\nabla_{\xi}h_{\mu \nu}).\notag
\end{align}
Comparing the second term of the relation \eqref{eq:B.3} with the relation \eqref{eq:B.4} one can see
\begin{equation}
\boxed{\hat{\Gamma}^{\rho (1)}_{\mu \nu}=1/2(\hat{\nabla}_{\mu}h^{\rho}_{\nu}+\hat{\nabla}_{\nu}h^{\rho}_{\mu}-\hat{\nabla}^{\rho}h_{\mu\nu})}
\end{equation}
Now we will calculate the Ricci tensor for higher orders
\begin{align}
R_{\mu\nu}^{(1)}&=1/2 \partial_{\rho}(\hat{\nabla}_{\mu}h^{\rho}_{\nu}+\hat{\nabla}_{\nu}h^{\rho}_{\mu}-\hat{\nabla}^{\rho}h_{\mu\nu})-1/2\partial_{\mu}(\hat{\nabla}_{\rho}h^{\rho}_{\nu}+\hat{\nabla}_{\nu}h^{\rho}_{\rho}-\hat{\nabla}^{\rho}h_{\rho\nu})\\
&+1/2(\hat{\nabla}_{\rho}h^{\rho}_{\lambda}+\hat{\nabla}_{\lambda}h^{\rho}_{\rho}-\hat{\nabla}^{\rho}h_{\rho\lambda})\Gamma^{\lambda(0)}_{\mu\nu}+1/2 (\hat{\nabla}_{\mu}h^{\lambda}_{\nu}+\hat{\nabla}_{\nu}h^{\lambda}_{\mu}-\hat{\nabla}^{\lambda}h_{\mu\nu})\Gamma^{\rho(0)}_{\rho\lambda}\notag\\
&-1/2(\hat{\nabla}_{\rho}h^{\lambda}_{\nu}+\hat{\nabla}_{\nu}h^{\lambda}_{\rho}-\hat{\nabla}^{\lambda}h_{\rho\nu})\Gamma^{\rho(0)}_{\lambda\mu}-1/2(\hat{\nabla}_{\rho}h^{\lambda}_{\mu}+\hat{\nabla}_{\mu}h^{\lambda}_{\rho}-\hat{\nabla}^{\lambda}h_{\rho\mu})\Gamma^{\rho(0)}_{\lambda\nu}\notag\\
&=1/2[\underbrace{\partial_{\rho}\hat{\nabla}_{\mu}h^{\rho}_{\nu}+\hat{\nabla}_{\mu}h^{\lambda}_{\nu}\Gamma^{\rho(0)}_{\rho\lambda}-\hat{\nabla}_{\iota}h^{\rho}_{\nu}\Gamma^{\iota(0)}_{\rho\mu}-\hat{\nabla}_{\mu}h^{\rho}_{\iota}\Gamma^{\iota(0)}_{\rho\nu}}_{\hat{\nabla}_{\rho}\hat{\nabla}_{\nu}h^{\rho}_{\mu}}]\notag\\
&+1/2[\underbrace{\partial_{\rho}\hat{\nabla}_{\nu}h^{\rho}_{\mu}+\hat{\nabla}_{\nu}h^{\lambda}_{\mu}\Gamma^{\rho(0)}_{\rho\lambda}-\hat{\nabla}_{\iota}h^{\rho}_{\mu}\Gamma^{\iota(0)}_{\rho\nu}-\hat{\nabla}_{\nu}h^{\rho}_{\iota}\Gamma^{\iota(0)}_{\rho\mu}}_{\hat{\nabla}_{\rho}\hat{\nabla}_{\mu}h^{\rho}_{\nu}}].\notag
\end{align}
Finally the following relation is obtained
\begin{equation}
\boxed{R^{(1)}_{\mu\nu}=1/2(\hat{\nabla}_{\rho}\hat{\nabla}_{\nu}h^{\rho}_{\mu}+\hat{\nabla}_{\rho}\hat{\nabla}_{\mu}h^{\rho}_{\nu}-\hat{\nabla}^{\rho}\hat{\nabla}_{\rho}h_{\mu\nu}-\hat{\nabla}_{\mu}\hat{\nabla}_{\nu}h)}.
\end{equation}
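The boxed formula can be tested against a brute-force curvature computation on a flat background. The sympy sketch below uses a hypothetical polynomial perturbation (any smooth $h_{\mu\nu}$ would do) and compares the $O(\epsilon)$ part of the exact Ricci tensor of $\eta+\epsilon h$ with the boxed expression, where all hatted derivatives reduce to partial derivatives:

```python
import sympy as sp

t, x, y, z, eps = sp.symbols('t x y z epsilon')
X = [t, x, y, z]
eta = sp.diag(-1, 1, 1, 1)

# a hypothetical symmetric perturbation with simple polynomial entries
h = sp.zeros(4, 4)
h[1, 1] = t**2*y
h[0, 2] = h[2, 0] = x*z
h[3, 3] = x*y

g = eta + eps*h
ginv = g.inv()

def christoffel(a, b, c):
    return sum(ginv[a, d]*(sp.diff(g[d, b], X[c]) + sp.diff(g[d, c], X[b])
                           - sp.diff(g[b, c], X[d])) for d in range(4))/2

Gam = [[[christoffel(a, b, c) for c in range(4)] for b in range(4)] for a in range(4)]

def ricci(b, c):    # R_bc = d_a Gam^a_bc - d_c Gam^a_ba + Gam Gam - Gam Gam
    return sum(sp.diff(Gam[a][b][c], X[a]) - sp.diff(Gam[a][b][a], X[c])
               + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
                     for d in range(4)) for a in range(4))

hup = eta.inv()*h                      # h^rho_nu, indices raised with eta
trh = sum(hup[a, a] for a in range(4))
box = lambda f: sum(eta.inv()[a, a]*sp.diff(f, X[a], 2) for a in range(4))

def R1(m, n):                          # the boxed linearized Ricci tensor
    divs = sum(sp.diff(hup[r_, m], X[r_], X[n]) + sp.diff(hup[r_, n], X[r_], X[m])
               for r_ in range(4))
    return (divs - box(h[m, n]) - sp.diff(trh, X[m], X[n]))/2

all_match = all(
    sp.simplify(sp.diff(ricci(b, c), eps).subs(eps, 0) - R1(b, c)) == 0
    for b in range(4) for c in range(4))
assert all_match
```

The $\hat{\Gamma}\hat{\Gamma}$ terms drop out automatically at first order since each connection coefficient is itself $O(\epsilon)$ on a flat background.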
For the second order one can similarly write
\begin{align}
\label{eq:B.8}
R^{(2)}_{\mu \nu}&= 1/2(\partial_{\rho}h^{\rho \lambda}\partial_{\mu}h_{\lambda \nu}+\partial_{\rho}h^{\rho \lambda}\partial_{\nu}h_{\mu \lambda }-\partial_{\rho}h^{\rho \lambda}\partial_{\lambda}h_{\mu \nu})\\
&+1/2h^{\rho \lambda}(\partial_{\rho}\partial_{\mu}h_{\lambda \nu}+\partial_{\rho}\partial_{\nu}h_{\mu \lambda}-\partial_{\rho}\partial_{\lambda}h_{\mu \nu})\notag\\
&-1/2 (\partial_{\nu}h^{\rho \lambda}\partial_{\mu}h_{\lambda \rho}+\partial_{\nu}h^{\rho \lambda} \partial_{\rho}h_{\mu \lambda}-\partial_{\nu}h^{\rho \lambda}\partial_{\lambda}h_{\rho \mu})\notag\\
&-1/2h^{\rho \lambda}(\partial_{\nu}\partial_{\mu}h_{\lambda \rho}+\partial_{\nu} \partial_{\rho}h_{\mu \lambda}-\partial_{\nu}\partial_{\lambda}h_{\rho \mu})\notag\\
&+1/4(\hat{\nabla}_{\rho}h^{\rho}_{\lambda}+\hat{\nabla}_{\lambda}h^{\rho}_{\rho}-\hat{\nabla}^{\rho}h_{\nu \mu})
\times(\hat{\nabla}_{\mu}h^{\lambda}_{\nu}+\hat{\nabla}_{\nu}h^{\lambda}_{\mu}-\hat{\nabla}^{\lambda}h_{\mu \nu})\notag\\
&-1/4(\hat{\nabla}_{\nu}h^{\rho}_{\lambda}+\hat{\nabla}_{\lambda}h^{\rho}_{\nu}-\hat{\nabla}^{\rho}h_{\lambda \nu})
\times(\hat{\nabla}_{\rho}h^{\lambda}_{\mu}+\hat{\nabla}_{\mu}h^{\lambda}_{\rho}-\hat{\nabla}^{\lambda}h_{\rho \mu})\notag\\
&+1/4(\hat{g}^{\rho\xi}(\partial_{\rho}\hat{g}_{\lambda \xi}+\partial_{\lambda}\hat{g}_{\xi \rho}-\partial_{\xi}\hat{g}_{\rho \lambda}))
\times h^{\lambda \xi}(\partial_{\mu}h_{\nu \xi}+\partial_{\nu}h_{\xi \mu}-\partial_{\xi}h_{\mu \nu})\notag
\\
&+1/4(h^{\rho\xi}(\partial_{\rho}h_{\lambda \xi}+\partial_{\lambda}h_{\xi \rho}-\partial_{\xi}h_{\rho \lambda}))
\times \hat{g}^{\lambda \xi}(\partial_{\mu}\hat{g}_{\nu \xi}+\partial_{\nu}\hat{g}_{\xi \mu}-\partial_{\xi}\hat{g}_{\mu \nu})\notag\\
&+1/4 \hat{g}^{\rho \xi}(\partial_{\nu}\hat{g}_{\lambda \xi}+\partial_{\lambda}\hat{g}_{\xi \nu}-\partial_{\xi}\hat{g}_{\nu \lambda})
\times
h^{\lambda \xi}(\partial_{\rho}h_{\mu \xi}+\partial_{\mu}h_{\xi \rho}-\partial_{\xi}h_{\rho \mu})\notag
\\
&+1/4h^{\rho \xi}(\partial_{\nu}h_{\lambda \xi}+\partial_{\lambda}h_{\xi \nu}-\partial_{\xi}h_{\nu \lambda})
\times
\hat{g}^{\lambda \xi}(\partial_{\rho}\hat{g}_{\mu \xi}+\partial_{\mu}\hat{g}_{\xi \rho}-\partial_{\xi}\hat{g}_{\rho \mu}).\notag
\end{align}
In this relation many terms may vanish, depending on the gauge chosen.
\chapter{Ricci decomposition\label{ap:j}}
It is possible to decompose the Riemann tensor into three irreducible parts
\begin{equation}
R_{\mu \nu \sigma \rho}=C_{\mu \nu \sigma \rho}+E_{\mu \nu \sigma \rho}+G_{\mu \nu \sigma \rho}.
\label{eq:2.20}
\end{equation}
where $C_{\mu \nu \sigma \rho}$ is the Weyl tensor, $E_{\mu \nu \sigma \rho}$ is the semi-traceless part and $G_{\mu \nu \sigma \rho}$ is the scalar part of the Riemann tensor. It is possible to obtain these terms using the Kulkarni-Nomizu product. For $E_{\mu \nu \sigma \rho}$ one has
\begin{equation}
\label{eq:G.2}
E_{\mu \nu \sigma \rho}=\alpha (g\mathbin{\bigcirc\mspace{-15mu}\wedge\mspace{3mu}} R)_{\mu \nu \sigma \rho}=2\alpha
(g_{\rho[\mu}R_{\nu]\sigma}-g_{\sigma[\mu}R_{\nu]\rho}),
\end{equation}
Also for $G_{\mu \nu \sigma \rho}$ it is possible to write
\begin{equation}
\label{eq:G.3}
G_{\mu \nu \sigma \rho}=\beta R(g\mathbin{\bigcirc\mspace{-15mu}\wedge\mspace{3mu}} g)_{\mu \nu \sigma \rho} =\beta R(g_{\mu \rho}g_{\nu \sigma}-g_{\mu \sigma}g_{\nu \rho}).
\end{equation}
$\alpha$
and
$\beta$
in relations
\eqref{eq:G.2}
and
\eqref{eq:G.3}
are arbitrary constants.
Putting
\eqref{eq:G.2}
and
\eqref{eq:G.3}
in
\eqref{eq:2.20}, one can write
\begin{equation}
\label{eq:G.4}
R_{\mu \nu \sigma \rho}=2\beta g_{\rho[\mu}g_{\nu]\sigma}+2\alpha (g_{\rho[\mu}R_{\nu]\sigma}-g_{\sigma[\mu}R_{\nu]\rho})+C_{\mu \nu \rho \sigma}.
\end{equation}
Now $\alpha$
and
$\beta$ can be obtained
\begin{align}
\label{eq:G.5}
&\alpha=\frac{1}{n-2},\\
&\beta=\frac{-1}{(n-1)(n-2)}.\notag
\end{align}
Using constants \eqref{eq:G.5}, the relation \eqref{eq:G.4} can be rewritten
\begin{equation}
\label{eq:G.6}
R_{\mu \nu \sigma \rho}=\frac{-2}{(n-1)(n-2)} g_{\rho[\mu}g_{\nu]\sigma}+\frac{2}{n-2} (g_{\rho[\mu}R_{\nu]\sigma}-g_{\sigma[\mu}R_{\nu]\rho})+C_{\mu \nu \rho \sigma}.
\end{equation}
This process is called Ricci decomposition.
It is also possible to rewrite this relation in terms of the Schouten tensor $S_{\mu\nu}=R_{\mu\nu}-\frac{1}{6}g_{\mu\nu}R$ in four dimensions
\begin{equation}
{R}_{\mu\nu\sigma\rho}={C}_{\mu\nu\sigma\rho}+g_{\mu[\sigma}{S}_{\rho]\nu}-g_{\nu[\sigma}{S}_{\rho]\mu}.
\end{equation}
or
\begin{equation}
R^{\mu\nu}_{\sigma \rho}=C^{\mu\nu}_{\sigma \rho}-\frac{1}{3}R\delta^{\mu}_{[\sigma}\delta^{\nu}_{\rho]}+2\delta^{[\mu}_{[\sigma}R^{\nu]}_{\rho]}.
\end{equation}
It is possible to define the left dual and the right dual of the Weyl tensor
\begin{align}
&^{\sim} C_{\mu\nu\sigma\rho}\equiv\frac{1}{2}\varepsilon_{\mu\nu\iota\chi}C^{\iota\chi}_{\:\:\:\:\sigma\rho},\\
&C_{\mu\nu\sigma\rho}^{\sim}\equiv\frac{1}{2}\varepsilon_{\sigma\rho\iota\chi}C^{\:\:\:\:\iota\chi}_{\mu\nu}.\notag
\end{align}
Dual tensors satisfy the following relation
\begin{equation}
^{\sim} C_{\mu\nu\sigma\rho}\equiv C_{\mu\nu\sigma\rho}^{\sim}.
\end{equation}
It is also useful to represent the complex conjugate form of the Weyl tensor
\begin{equation}
\star C_{\mu\nu\sigma\rho}\equiv C_{\mu\nu\sigma\rho}+iC_{\mu\nu\sigma\rho}^{\sim}.
\end{equation}
On the other hand for self-dual bivectors one can write
\begin{equation}
X_{\mu}\equiv \star X_{\mu\nu}u^{\nu}=0\quad,\quad X_{\mu}u^{\mu}=0\quad,\quad u_{\mu}u^{\mu}=-1,
\end{equation}
where $u^{\mu}$ is a timelike unit vector. So for the Weyl tensor one has
\begin{equation}
-Q_{\mu\nu}\equiv\star C_{\mu \nu\sigma \rho }u^{\nu}u^{\rho}\equiv E_{\mu\sigma}+iB_{\mu\sigma}\quad,\quad u_{\mu}u^{\mu}=-1,
\end{equation}
where $E_{\mu\sigma} $
and
$ B_{\mu\sigma}$ are electric and magnetic parts of the Weyl tensor.
If near the de Sitter null infinity $\tilde{n}^{\mu}$ is used as the timelike vector,
with the normalization $\tilde{n}^{\mu}\tilde{n}_{\mu}=-1/l^2$, then one has
\begin{align}
&\tilde{E}_{\mu\sigma}:=l^{2}\tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\nu}\tilde{n}^{\lambda},\\
&\tilde{B}_{\mu\sigma}:=l^{2} \star \tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\nu}\tilde{n}^{\lambda}.\notag
\end{align}
Evaluating \eqref{eq:G.6} for de Sitter space-time, the Weyl tensor $C_{\mu \nu \rho \sigma}$ vanishes and, since $R_{\nu\sigma}=\frac{R}{n}g_{\nu\sigma}$, the remaining terms combine. Thus the Riemann tensor takes the form
\begin{equation}
R_{\mu \nu \sigma \rho}=\frac{R}{n(n-1)}g_{\rho[\mu}g_{\nu]\sigma}=\frac{R}{n(n-1)}(g_{\mu \rho}g_{\nu \sigma}-g_{\mu \sigma}g_{\nu \rho}).
\label{eq:2.23}
\end{equation}
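As a consistency check, one can substitute the maximally symmetric condition $R_{\nu\sigma}=\frac{R}{n}g_{\nu\sigma}$ together with $C_{\mu\nu\sigma\rho}=0$ into \eqref{eq:G.4}; using $g_{\sigma[\mu}g_{\nu]\rho}=-g_{\rho[\mu}g_{\nu]\sigma}$ the two terms combine and reproduce \eqref{eq:2.23} (a sketch of the coefficient algebra):

```latex
R_{\mu\nu\sigma\rho}
  =\left[\frac{-2}{(n-1)(n-2)}+\frac{4}{n(n-2)}\right]R\,g_{\rho[\mu}g_{\nu]\sigma}
  =\frac{2R}{n(n-1)}\,g_{\rho[\mu}g_{\nu]\sigma}
  =\frac{R}{n(n-1)}(g_{\mu\rho}g_{\nu\sigma}-g_{\mu\sigma}g_{\nu\rho}).
```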
The relation \eqref{eq:2.23} shows that the Riemann tensor can be described only with the Ricci scalar, which means that knowing the curvature at one point one can obtain the curvature of the whole maximally symmetric space-time. The Einstein tensor then reads
\begin{equation}
G_{\mu \nu}=R_{\mu \nu}-1/2Rg_{\mu \nu}=-1/4Rg_{\mu \nu}.
\end{equation}
Another traceless tensor is the Bach tensor, which is also invariant under conformal transformations. The Bach tensor has the following form on $\mathcal{I}$
\begin{equation}
B_{\mu\nu\sigma}=D_{[\mu}(\mathcal{R}_{\nu]\sigma}-\frac{1}{4}q_{\nu]\sigma}\mathcal{R}),
\end{equation}
where $q_{\nu\sigma}$ is the metric on $\mathcal{I}$ and $D_{\mu}$ is the covariant derivative related to $q_{\nu\sigma}$.
$\mathcal{R}$
and
$\mathcal{R}_{\nu\sigma}$ are Ricci scalar and Ricci tensor on $\mathcal{I}$. Now it is possible to rewrite the relation of $B_{\mu\nu\sigma}$
according to the asymptotic Weyl tensor, $K_{\mu\nu\sigma\rho}=\Omega^{-1}{C}_{\mu\nu\sigma\rho}$
\begin{equation}
B_{\mu\nu\sigma}=\frac{1}{2}q^{\chi}_{\mu}q_{\nu}^{\psi}q_{\sigma}^{\iota}D_{[\chi}S_{\psi]\iota}=\frac{1}{2}q^{\chi}_{\mu}q_{\nu}^{\psi}q_{\sigma}^{\iota}K_{\chi\psi\iota\gamma}\tilde{n}^{\gamma}.
\end{equation}
\chapter{Tensor field caused by derivative operator \label{ap:d}}
Consider two derivative operators $\nabla$ and $\tilde{\nabla}$, associated with ${g}_{\mu\nu}$ and $\tilde{g}_{\mu\nu}$ respectively. Operating them on the product of a dual vector field $\omega_{\nu}$ and a scalar field $f$, according to the Leibniz rule it is possible to write \cite{stephani2009exact}
\begin{equation}
\tilde{\nabla}_{\mu}(f\omega_{\nu})-{\nabla}_{\mu}(f\omega_{\nu})=f( \tilde{\nabla}_{\mu}\omega_{\nu}-{\nabla}_{\mu}\omega_{\nu}).
\end{equation}
Each derivative operator maps a tensor of rank $(k,l)$ to a tensor of rank $(k,l+1)$. So $\tilde{\nabla}_{\mu}\omega_{\nu}-{\nabla}_{\mu}\omega_{\nu}$ defines a tensor field $C^{\sigma}_{\mu\nu }$ via
\begin{equation}
\tilde{\nabla}_{\mu}\omega_{\nu}={\nabla}_{\mu}\omega_{\nu}-C^{\sigma}_{\mu\nu }\omega_{\sigma}.
\end{equation}
Considering $ \tilde{\nabla}_{\mu}$ to be the partial derivative $\partial_{\mu}$, $C^{\sigma}_{\mu\nu }$ reduces to the Christoffel symbol.
It has to be said that from the torsion-free condition $C^{\sigma}_{\mu\nu }$ satisfies
\begin{equation}
C^{\sigma}_{\mu\nu }=C^{\sigma}_{\nu\mu }.
\end{equation}
The operation of the derivative operator on the metric is then
\begin{equation}
\label{eq:d.4}
0=\tilde{\nabla}_{\mu}\tilde{g}_{\nu\sigma}={\nabla}_{\mu}\tilde{g}_{\nu\sigma}-C^{\rho}_{\mu\nu}\tilde{g}_{\rho\sigma}-C^{\rho}_{\mu\sigma}\tilde{g}_{\nu\rho}
\end{equation}
and for other indices' combination one also has
\begin{align}
\label{eq:d.5}
\tilde{\nabla}_{\nu}\tilde{g}_{\mu\sigma}&={\nabla}_{\nu}\tilde{g}_{\mu\sigma}-C^{\rho}_{\nu\mu}\tilde{g}_{\rho\sigma}-C^{\rho}_{\nu\sigma}\tilde{g}_{\mu\rho},\\
\tilde{\nabla}_{\sigma}\tilde{g}_{\mu\nu}&={\nabla}_{\sigma}\tilde{g}_{\mu\nu}-C^{\rho}_{\sigma\mu}\tilde{g}_{\rho\nu}-C^{\rho}_{\sigma\nu}\tilde{g}_{\mu\rho}.\notag
\end{align}
Subtracting the first relation of \eqref{eq:d.5} from \eqref{eq:d.4} and using the second relation of \eqref{eq:d.5} one has
\begin{equation}
C^{\rho}_{\mu\sigma}=1/2\tilde{g}^{\nu\rho}({\nabla}_{\mu}\tilde{g}_{\nu\sigma}+{\nabla}_{\sigma}\tilde{g}_{\mu\nu}-{\nabla}_{\nu}\tilde{g}_{\mu\sigma}).
\label{d.6}
\end{equation}
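Explicitly, the combination that leads to \eqref{d.6} is the following (a short sketch): adding \eqref{eq:d.4} to the second relation of \eqref{eq:d.5}, subtracting the first one and using the symmetry $C^{\rho}_{\mu\nu}=C^{\rho}_{\nu\mu}$, the unwanted $C$-terms cancel pairwise and one is left with

```latex
0={\nabla}_{\mu}\tilde{g}_{\nu\sigma}+{\nabla}_{\sigma}\tilde{g}_{\mu\nu}
   -{\nabla}_{\nu}\tilde{g}_{\mu\sigma}-2C^{\rho}_{\mu\sigma}\tilde{g}_{\nu\rho},
```

so contracting with $\frac{1}{2}\tilde{g}^{\nu\rho}$ gives \eqref{d.6}.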
For the unphysical metric one has
\begin{equation}
{\nabla}_{\sigma} \tilde{g}_{\mu\nu}={\nabla}_{\sigma}(\Omega^2{g}_{\mu\nu})=2\Omega{g}_{\mu\nu}\nabla_{\sigma} \Omega.
\end{equation}
So the relation \eqref{d.6} can be rewritten as
\begin{equation}
\label{d.8}
{C}^{\rho}_{\mu\sigma}=\Omega^{-1}{g}^{\nu\rho}({g}_{\nu\sigma}n_{\mu}+{g}_{\mu\nu}n_{\sigma}-{g}_{\mu\sigma}n_{\nu})=2\Omega^{-1}\delta^{\rho}_{(\sigma}n_{\mu)}-\Omega^{-1}{g}_{\mu\sigma}n^{\rho},
\end{equation}
where $n_{\mu}={\nabla}_{\mu}\Omega$. It is possible to write similar calculation according to $\tilde{\nabla}_{\mu}$ and ${g}_{\mu\nu}$
\begin{equation}
\label{eq:D.99}
\tilde{C}^{\rho}_{\mu\sigma}=\Omega^{-1}{g}^{\nu\rho}({g}_{\nu\sigma}\tilde{n}_{\mu}+{g}_{\mu\nu}\tilde{n}_{\sigma}-{g}_{\mu\sigma}\tilde{n}_{\nu})=2\Omega^{-1}\delta^{\rho}_{(\sigma}\tilde{n}_{\mu)}-\Omega^{-1}{g}_{\mu\sigma}\tilde{n}^{\rho}.
\end{equation}
The Riemann tensor can be obtained from either
\eqref{eq:D.99} or \eqref{d.8}, but because of our convention it is important to use $\tilde{C}^{\rho}_{\mu\sigma}$ from \eqref{eq:D.99}, so one has
\begin{align}
\tilde{R}^{\rho}_{\mu\sigma\nu}&={R}^{\rho}_{\mu\sigma\nu}-2\tilde{\nabla}_{[\mu}\tilde{C}^{\rho}_{\sigma]\nu}+2\tilde{C}^{\lambda}_{\nu[\mu}\tilde{C}^{\rho}_{\sigma]\lambda}\\
&={R}^{\rho}_{\mu\sigma\nu}+2\Omega^{-1}\delta^{\rho}_{[\mu}\tilde{\nabla}_{\sigma]}\tilde{\nabla}_{\nu}\Omega-2\Omega^{-1}\tilde{g}^{\rho\lambda}\tilde{g}_{\nu[\mu}\tilde{\nabla}_{\sigma]}\tilde{\nabla}_{\lambda}\Omega+2\Omega^{-2}\tilde{\nabla}_{[\mu}\Omega\,\delta^{\rho}_{\sigma]}\tilde{\nabla}_{\nu}\Omega\notag\\
&-2\Omega^{-2}\tilde{\nabla}_{[\mu}\Omega\,\tilde{g}_{\sigma]\nu}\tilde{g}^{\rho\xi}\tilde{\nabla}_{\xi}\Omega-2\Omega^{-2}\tilde{g}_{\nu[\mu}\delta^{\rho}_{\sigma]}\tilde{g}^{\lambda\xi}\tilde{\nabla}_{\xi}\Omega\tilde{\nabla}_{\lambda}\Omega\notag
\end{align}
and also for the Ricci tensor
\begin{align}
\label{eq:D.8}
\tilde{R}_{\mu\nu}=&R_{\mu\nu}+\tilde{\nabla}_{\rho}\tilde{C}^{\rho}_{\mu\nu}-\tilde{\nabla}_{\nu}\tilde{C}^{\rho}_{\rho\mu}+\tilde{C}^{\rho}_{\rho\lambda}\tilde{C}^{\lambda}_{\mu\nu}-\tilde{C}^{\rho}_{\mu\lambda}\tilde{C}^{\lambda}_{\rho\nu}\\
=&R_{\mu\nu}+(d-2)\Omega^{-2}\tilde{\nabla}_{\nu}\Omega\tilde{\nabla}_{\mu}\Omega-(d-2)\Omega^{-2}\tilde{g}_{\mu\nu}\tilde{g}^{\rho\sigma}\tilde{\nabla}_{\rho}\Omega\tilde{\nabla}_{\sigma}\Omega\notag\\
-&(d-2)\Omega^{-1}\tilde{\nabla}_{\mu}\tilde{\nabla}_{\nu}\Omega-\Omega^{-1}\tilde{g}_{\mu\nu}\tilde{g}^{\rho\sigma}\tilde{\nabla}_{\rho}\tilde{\nabla}_{\sigma}\Omega\notag.
\end{align}
Multiplying this relation by $\tilde{g}^{\mu\nu}$, the Ricci scalar can be obtained
\begin{equation}
\tilde{R}=R-2(d-1)\Omega^{-1}\tilde{\nabla}^{\nu}\tilde{\nabla}_{\nu}\Omega-(d-2)(d-1)\Omega^{-2}\tilde{\nabla}^{\nu}\Omega\tilde{\nabla}_{\nu}\Omega.
\label{eq:D.9}
\end{equation}
On the other hand according to \eqref{eq:D.8}
and
\eqref{eq:D.9} Schouten tensor in four dimensions can be written as follows
\begin{equation}
\tilde{S}_{\mu\nu}=\tilde{R}_{\mu\nu}-1/6\tilde{g}_{\mu\nu}\tilde{R}={S}_{\mu\nu}-2\Omega^{-2}\tilde{\nabla}_{\nu}\Omega\tilde{\nabla}_{\mu}\Omega+2\Omega^{-1}\tilde{\nabla}_{\nu}\tilde{\nabla}_{\mu}\Omega.
\end{equation}
Einstein tensor can be also obtained using
\eqref{eq:D.8}
and
\eqref{eq:D.9}
\begin{align}
\label{eq:D.14}
\tilde{G}_{\mu\nu}&=\tilde{R}_{\mu\nu}-1/2\tilde{g}_{\mu\nu}\tilde{R}=G_{\mu\nu}+ 2 \Omega^{-1}(\tilde{\nabla}_{\mu}\tilde{n}_{\nu}-\tilde{g}_{\mu\nu}\tilde{\nabla}^{\sigma}\tilde{n}_{\sigma})+3\Omega^{-2} \tilde{g}_{\mu\nu}\tilde{n}^{\sigma}\tilde{n}_{\sigma}\\
&+\Omega^{-2} \Lambda \tilde{g}_{\mu\nu}.\notag
\end{align}
From the vacuum condition
$G_{\mu\nu}=0$, and since $\tilde{n}^{\sigma}\tilde{n}_{\sigma}=-\Lambda/3$ makes the two final terms of \eqref{eq:D.14} cancel, on $\mathcal{I}$ one has
\begin{equation}
\tilde{G}_{\mu\nu}=2 \Omega^{-1}(\tilde{\nabla}_{\mu}\tilde{n}_{\nu}-\tilde{g}_{\mu\nu}\tilde{\nabla}^{\sigma}\tilde{n}_{\sigma})=2 \Omega^{-1}(\tilde{K}_{\mu\nu}-\tilde{g}_{\mu\nu}\tilde{K})
\label{eq:D.15}
\end{equation}
where $\tilde{K}_{\mu\nu}=\tilde{\nabla}_{\mu}\tilde{n}_{\nu}$
and
$\tilde{K}=\tilde{\nabla}^{\sigma}\tilde{n}_{\sigma}$.
For Riemann tensor in three dimensions one has
\begin{equation}
\label{eq:d.166}
\mathcal{R}^{\sigma}_{\mu\nu\rho}\omega_{\sigma}= D_{\mu}D_{\nu}\omega_{\rho}- D_{\nu}D_{\mu}\omega_{\rho}
\end{equation}
where $ D_{\mu}D_{\nu}\omega_{\rho}$ can be written as
\begin{align}
\label{eq:d.177}
D_{\mu}D_{\nu}\omega_{\rho}&=D_{\mu}(h^{\psi}_{\nu}h^{\chi}_{\rho}\tilde{\nabla}_{\psi}\omega_{\chi})=h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} \tilde{\nabla}_{\iota}(h^{\psi}_{\xi}h^{\chi}_{\epsilon}\tilde{\nabla}_{\psi}\omega_{\chi})\\
&=h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} \tilde{\nabla}_{\iota}(\underbrace{h^{\psi}_{\xi}}_{\tilde{g}^{\psi}_{\xi}+\tilde{n}^{\psi}\tilde{n}_{\xi}})h^{\chi}_{\epsilon}\tilde{\nabla}_{\psi}\omega_{\chi}
+h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} h^{\psi}_{\xi}\tilde{\nabla}_{\iota}(\underbrace{h^{\chi}_{\epsilon}}_{\tilde{g}^{\chi}_{\epsilon}+\tilde{n}^{\chi}\tilde{n}_{\epsilon}})\tilde{\nabla}_{\psi}\omega_{\chi}\notag\\
&+h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} h^{\psi}_{\xi}h^{\chi}_{\epsilon}\tilde{\nabla}_{\iota}\tilde{\nabla}_{\psi}\omega_{\chi}\notag
\end{align}
where $\tilde{g}_{\mu\nu}=-\tilde{n}_{\mu}\tilde{n}_{\nu}+\tilde{h}_{\mu\nu}$. If one uses $\tilde{\nabla}^{\mu}\tilde{n}_{\nu}=\tilde{K}^{\mu}_{\nu}$
and
$\tilde{\nabla}^{\rho}\tilde{g}_{\mu\nu}=0$ in this relation, then \eqref{eq:d.177} takes the following form
\begin{align}
D_{\mu}D_{\nu}\omega_{\rho}&=h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} h^{\psi}_{\xi}h^{\chi}_{\epsilon}\tilde{\nabla}_{\iota}\tilde{\nabla}_{\psi}\omega_{\chi}+h_{\rho}^{\chi}\tilde{K}_{\mu\nu}\tilde{n}^{\sigma}\tilde{\nabla}_{\sigma}\omega_{\chi}+h^{\sigma}_{\nu}\tilde{K}_{\mu\rho}\tilde{n}^{\xi}\tilde{\nabla}_{\sigma}\omega_{\xi}.
\end{align}
Remember that the indices of $\tilde{K}^{\mu}_{\nu}$ are lowered and raised
with $h_{\mu\nu}$. This calculation can be done similarly for $D_{\nu}D_{\mu}\omega_{\rho}$. Then putting these relations in \eqref{eq:d.166} one has \cite{feng2018weiss}
\begin{equation}
\mathcal{R}^{\sigma}_{\mu\nu\rho}=h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} h^{\sigma}_{\chi}\tilde{R}_{\iota\xi\epsilon}^{\chi}-\tilde{K}_{\mu\rho}\tilde{K}^{\sigma}_{\nu}-\tilde{K}_{\nu\rho}\tilde{K}^{\sigma}_{\mu}.
\end{equation}
Using this relation one can also find the three-dimensional form of the Ricci tensor for $\Omega=\mathrm{constant}$ surfaces
\begin{equation}
\label{eq:d.20}
\mathcal{R}_{\mu\rho}=\mathcal{R}^{\sigma}_{\mu\sigma\rho}=\tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\rho}\tilde{R}_{\iota\chi}-\tilde{K}_{\mu\rho}\tilde{K}-\tilde{K}_{\sigma\rho}\tilde{K}^{\sigma}_{\mu}.
\end{equation}
If one puts $\tilde{G}_{\mu\nu}=\tilde{R}_{\mu\nu}-1/2\tilde{g}_{\mu\nu}\tilde{R}$ in \eqref{eq:D.15} and multiplies the resulting equation by $\tilde{n}^{\mu}\tilde{n}^{\nu}$, one finds
\begin{align}
\label{eq:d.16}
\tilde{R}+2\tilde{R}_{\mu\sigma}\tilde{n}^{\mu}\tilde{n}^{\sigma}=4\Omega^{-1}(\tilde{K}_{\mu\sigma}-\tilde{g}_{\mu\sigma}\tilde{K})\tilde{n}^{\mu}\tilde{n}^{\sigma}.
\end{align}
Using
$\tilde{K}_{\mu\sigma}\tilde{n}^{\mu}=0$ in \eqref{eq:d.16} one gets
\begin{align}
\label{eq:d.22}
\tilde{R}+2\tilde{R}_{\mu\sigma}\tilde{n}^{\mu}\tilde{n}^{\sigma}=-4\Omega^{-1}\tilde{g}_{\mu\sigma}\tilde{n}^{\mu}\tilde{n}^{\sigma}\tilde{K}=4\Omega^{-1}(-\tilde{K}+\underbrace{\tilde{h}_{\mu\sigma}\tilde{n}^{\mu}\tilde{n}^{\sigma}}_{=0})=-4\Omega^{-1}\tilde{K}.
\end{align}
After some algebra, using \eqref{eq:d.20} in \eqref{eq:d.22}, one finds
\begin{align}
\mathcal{R}+\tilde{K}^2-\tilde{K}_{\mu\nu}\tilde{K}^{\mu\nu}=4\Omega^{-1}\tilde{K}.
\end{align}
\end{appendices}
\bibliographystyle{ieeetr}
\chapter{Introduction}
Efforts to measure the speed of light began in the seventeenth century.
It was only then that the first evidence that light has a finite and measurable speed came to light. Until then, it was almost universally believed that the speed of light is infinite. Afterwards many scientists, such as Armand Fizeau and Albert Michelson, worked on finding its value. In 1983 the International Bureau of Weights and Measures fixed the speed of light at the value of $299{,}792{,}458\,\mathrm{m/s}$.
In 1905 Einstein presented the theory of special relativity based on two postulates. The first is that physical laws are invariant in inertial frames. The second is the invariance of the speed of light. Based only on these two hypotheses, and without referring to the laws of mechanics, electromagnetism or other fundamental physical theories, he derived the Lorentz transformations. Keeping the speed of light invariant, these transformations took the place of the Galilean transformations.
Another fundamental physical constant is the Planck length, $l_p$, which has the dimension of length and is obtained by combining the gravitational constant $G$, the speed of light $c$ and the Planck constant $\hbar$ \cite{gaarder2016gravitational}. After finding the constant of action, known as the Planck constant, Max Planck noted that by using the three constants $G$, $c$ and $\hbar$ it is possible to define a universal constant of length. In general relativity, the distance between two points is a dynamical parameter and is obtained by solving Einstein's field equations. According to the laws of quantum mechanics, each dynamical parameter must satisfy the uncertainty principle. Actually, $l_p$ is the distance scale where quantum effects appear. Lorentz transformations do not preserve this minimum length. Just as the Lorentz transformations replaced the Galilean transformations, it is expected that a new symmetry group that preserves both $l_p$ and $c$ becomes necessary.
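For orientation, the explicit combination is $l_p=\sqrt{\hbar G/c^{3}}$; a minimal numerical sketch (using approximate CODATA values for the constants) recovers its familiar value:

```python
# Planck length l_p = sqrt(hbar * G / c**3), using approximate CODATA values.
hbar = 1.054571817e-34   # reduced Planck constant, J s
G = 6.67430e-11          # Newton's gravitational constant, m^3 kg^-1 s^-2
c = 299792458.0          # speed of light, m/s (exact by definition)

l_p = (hbar * G / c**3) ** 0.5
print(f"l_p = {l_p:.3e} m")   # about 1.616e-35 m
```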
Lorentz transformations can also be performed in de Sitter space-time, as it is homogeneous \cite{aldrovandi2007sitter}. Hence the Minkowski background in physical theories may be replaced by de Sitter space-time, whose symmetry group is $SO(4,1)$ \cite{abbott1982stability,hawking1973large}. These transformations can preserve the length, so it is possible to escape the mentioned problem \cite{aldrovandi2007sitter,cacciatori2008special,nakahara2018geometry}.
Also, observations show that our universe expands at an accelerating rate, and a model with a positive cosmological constant is more appropriate to describe it \cite{riess1998observational}. Hence it is useful to study de Sitter space-time. Different topics have to be considered to be sure about this choice; actually the de Sitter horizon complicates many things. As in the flat background, it is important to consider space-time's boundaries \cite{wald1999gravitational,wald2000general}. In the presence of a positive cosmological constant,
de Sitter null infinity, $\mathcal{I}$, is no longer null but spacelike, and cannot be studied like the asymptotically flat $\mathcal{I}$ which was studied by Bondi et al. in 1962 \cite{bondi1962gravitational}. They rewrote the Minkowski metric in Bondi coordinates, $(u,r,x^A)$, where $u=t-r$ is the retarded time and $x^A=(\theta,\varphi)$. They used Dirichlet boundary conditions and found a meaningful notion of energy. But it is not possible to follow the same route in asymptotically de Sitter space-time, because with these boundary conditions gravitational waves do not carry away de Sitter charges across future null infinity \cite{ashtekar2014asymptotics,ashtekar2015asymptotics}. So the Fefferman-Graham method \cite{fefferman1985conformal,fefferman2011ambient} and the Newman-Penrose formalism \cite{penrose1965remarkable,penrose1984spinors} can be used to study $\mathcal{I}$ in de Sitter space-time \cite{saw2016mass}. Also it is possible to add the
Neumann boundary condition $J^{AB}=0$ to the Dirichlet boundary conditions and find finite, conserved, integrable and generically non-vanishing charges \cite{compere2019lambda}.
The de Sitter solution for Einstein's field equations is obtained in chapter two. Also the different coordinate systems for de Sitter space-time are considered. The symmetry group of the de Sitter space-time is obtained in chapter three and it is explained how these transformations can preserve a minimum length. Asymptotically flat \cite{bondi1962gravitational,penrose1965remarkable,newman1966note,arnowitt2008republication,moreschi1987general,bros2002asymptotic} and de Sitter space-times \cite{addazi2020conformal,anderson2005structure, aneesh2019conserved,anninos2011asymptotic,anninos2019sitter,ashtekar2014asymptotics,ashtekar2015asymptotics,ashtekar2019asymptotics,ashtekar2014geometry} are considered in chapter four.
\chapter{De Sitter space-time}
There exist three maximally symmetric vacuum solutions of Einstein's equations, known as de Sitter, anti-de Sitter and Minkowski, with positive, negative and zero curvature respectively. Here the focus is on the solution of Einstein's equations in the presence of a positive cosmological constant; then different coordinate systems and Killing vector fields are considered.
\section{De Sitter metric}
A symmetric tensor of type
$(0,2)$,
known as the metric tensor, relates two vectors in the vector space $T_{p}$ \cite{stephani2009exact, penrose1984spinors}
\begin{equation}
\eta_{\alpha\beta}e_{\mu}^{\alpha} e_{\nu}^{\beta}=g_{\mu \nu}
\end{equation}
where
$ \begin{Bmatrix}
e_{\mu} ^{\alpha}
\end{Bmatrix}$
is a null tetrad that consists of two real null vectors
$l$,
$k$
and two complex conjugate null vectors
$m$,
$\bar{m}$ \cite{penrose1984spinors}.
\begin{equation}
\begin{Bmatrix}
e_{\mu}^{\alpha}
\end{Bmatrix}
=(m,\bar{m},l,k).
\end{equation}
One can write the length element
$ds^{2}$
using vector basis
in
$T^*_{p}$ as \cite{fukuyama2009comments}
\begin{equation}
\label{eq:metric}
ds^{2}=g_{\mu \nu}\omega^{\mu}\omega^{\nu}.
\end{equation}
Also by taking coordinate basis or holonomic frame to the account, equation
\eqref{eq:metric}
can turn to
\begin{equation}
ds^{2}=g_{\mu \nu}dx^{\mu}dx^{\nu}.
\end{equation}
Actually, one can define four spacelike vectors,
$e^{\alpha}$
and a timelike vector,
$e^{0}\equiv X^{0}$, in five dimensions
as
\begin{equation}
e^{\mu}
=
(X^{0},e^{\alpha})
=
(X^{0},X^{1},X^{2},X^{3},X^{4})
.
\end{equation}
Furthermore relations as follows can be described
\begin{equation}
e^{\alpha} e_{\beta}=\delta_{\alpha}^{ \beta}\: \: \:,\: \: \: X^{0} X_{0}=-1 \: \: \:,\: \: \: e^{\alpha} X_{0}=0.
\end{equation}
Then one can write \cite{tod2015some}
\begin{equation}
l^{2}=-X^{0} X_{0}+X^{1} X_{1}+X^{2} X_{2}+X^{3} X_{3}+X^{4} X_{4}.
\label{eq:2.00}
\end{equation}
Equation \eqref{eq:2.00} describes a hyperboloid embedded in five-dimensional Minkowski space-time with the line element
\begin{equation}
ds^2=-dx^{2}_{0}+dx^{2}_{1}+dx^{2}_{2}+dx^{2}_{3}+dx^{2}_{4}.
\label{eq:2.200}
\end{equation}
This relation can also be obtained by solving Einstein's equations (see section \ref{sec:2.2.2}).
\section{Solving Einstein's equations}
Most physical theories are introduced by mathematical models and are described by a set of differential equations. Among gravitational theories, Einstein's theory has been accepted as the most successful.
In this case, the differential equations are written considering that space and time can be described by a pseudo-Riemannian manifold together with a distribution of matter interacting with gravity.
Usually we search for exact solutions or, if possible, a general solution of the differential equations. Many of these exact solutions are not physical, but many others, like the Schwarzschild and Kerr solutions for black holes and the Friedmann-Lemaître-Robertson-Walker solution for cosmology, are physical \cite{stephani2009exact}. Without imposing any restrictions on the energy-momentum tensor, any metric can be a solution of these equations, because the equations then merely define $T_{\mu \nu}$. We can apply symmetry conditions to the metric, for example by imposing algebraic constraints on the Riemann tensor or selecting boundary conditions. The field equations in the presence of the cosmological constant are
\begin{equation}
G_{\mu \nu}+\Lambda g_{\mu \nu}=(8 \pi G /c^{4})T_{\mu \nu},
\label{eq:2.1}
\end{equation}
where the speed of light can be set equal to one in relation \eqref{eq:2.1}.
Considering the matter field to be zero
$(T_{\mu \nu}=0)$
is one of the conditions that simplify equations \eqref{eq:2.1}. Solving Einstein's equations in the presence of a cosmological constant for an isotropic homogeneous model, one obtains the simplest inflationary solutions
\begin{equation}
R_{\mu \nu} -1/2 R g_{\mu \nu}= -\Lambda g_{\mu \nu}.
\label{eq:2.3}
\end{equation}
The Ricci scalar and Ricci tensor for this model are
$R=4 \Lambda$ and $R_{\mu \nu}=\Lambda g_{\mu \nu}$.
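These values follow from tracing \eqref{eq:2.3} with $g^{\mu\nu}$ in four dimensions, where $g^{\mu\nu}g_{\mu\nu}=4$:

```latex
R-2R=-4\Lambda
\;\Longrightarrow\; R=4\Lambda ,
\qquad
R_{\mu\nu}=\tfrac{1}{2}Rg_{\mu\nu}-\Lambda g_{\mu\nu}=\Lambda g_{\mu\nu}.
```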
\subsection{Static and spherically symmetric coordinates}
To be able to solve equations
\eqref{eq:2.1}
one has to write a general form of the metric \cite{lenk2010general,gron2007homogeneous,carroll2019spacetime}
\begin{equation}
g_{\mu \nu}=
\begin{bmatrix}
A(t,r,\theta,\varphi)&B(t,r,\theta,\varphi)&C(t,r,\theta,\varphi)&D(t,r,\theta,\varphi)\\
B(t,r,\theta,\varphi)&E(t,r,\theta,\varphi)&F(t,r,\theta,\varphi)&G(t,r,\theta,\varphi)\\
C(t,r,\theta,\varphi)&F(t,r,\theta,\varphi)&H(t,r,\theta,\varphi)&I(t,r,\theta,\varphi)\\
D(t,r,\theta,\varphi)&G(t,r,\theta,\varphi)&I(t,r,\theta,\varphi)&J(t,r,\theta,\varphi)
\end{bmatrix}.
\label{eq:2.5}
\end{equation}
The Ricci tensor can be obtained from the metric \eqref{eq:2.5}, and the result can be used in \eqref{eq:2.1} to find the final form of the metric. To obtain an exact solution, the metric is considered to be stationary, which means a timelike Killing vector field exists and a timelike coordinate can be defined according to this Killing vector field
($\frac{\partial g_{\mu \nu}}{\partial x^{0}}=0$, where $x^0$ is a timelike coordinate) \cite{d1992introducing}. Being stationary does not prevent the metric from containing cross terms such as $dt\,dr$. To eliminate such terms one imposes the further condition that the metric be static, meaning time-reversal invariant, so the cross terms are omitted. Then one can apply spherical symmetry, which means space-time has three spacelike Killing vector fields
$X^{\alpha}$
with the following relations
\begin{equation}
[X^1,X^2]=X^3\quad ,\quad [X^2,X^3]=X^1 \quad,\quad [X^3,X^1]=X^2.
\end{equation}
Finally the metric takes the simplified form
\begin{equation}
ds^{2}=-e^{A(r)} dt^{2}+e^{B(r)} dr^{2}+r^{2} d {\theta}^{2}+r^{2} \sin^{2}{\theta}\, d{\varphi}^{2}.
\label{eq:2.22}
\end{equation}
The coefficients are written as exponentials $e^{A(r)}$ and $e^{B(r)}$ because the metric components must be positive, and this choice simplifies the calculations.
\subsection{Ricci tensor calculation}
According to the line element
\eqref{eq:2.22}
one can obtain the non-zero components of the Ricci tensor as below
\begin{align}
\label{eq:2.500}
&R_{tt} = {e} ^ {A-B} (1/2A''-1/4A'B'+1/4 {A'} ^ {2} +A'/r),\\
&R_{rr} =-1/2A''+1/4A'B'-1/4 {A'} ^ {2} +B'/r,\notag\\
&R_{\theta \theta } =- {e} ^ {-B} (1+ \frac {r({A} ^ {'} - {B} ^ {'} )} {2 }) +1,\notag\\
&R_{\varphi \varphi}=\sin^{2}{\theta}\,R_{\theta \theta}.\notag
\end{align}
Putting these relations in
\eqref{eq:2.3}
it is possible to write
\begin{equation}
\Lambda {e} ^ {A} = {e} ^ {A-B} (1/2A''-1/4A'B'+1/4 {A'} ^ {2} +A'/r)+2\Lambda e^{A}
\label{eq:2.8}
\end{equation}
and
\begin{equation}
-\Lambda {e} ^ {B} =-1/2A''+1/4A'B'-1/4 {A'} ^ {2} +B'/r-2\Lambda e^{B}.
\label{eq:2.9}
\end{equation}
Dividing
\eqref{eq:2.8}
by
\eqref{eq:2.9}
one has
\begin{equation}
A'=-B'\quad,\quad A=-B,
\end{equation}
where the integration constant is considered to be zero.
If one puts the third relation of \eqref{eq:2.500}
in
$R_{\mu \nu}=\Lambda g_{\mu \nu}$
then
\begin{align}
&e^{A}(1+rA')=1-\Lambda r^{2}\\
&X\equiv e^{A(r)}\notag\\
&X+rX'=1-\Lambda r^{2}\notag\\
&\frac{d}{dr}(rX)=\frac{d}{dr}(r-(\Lambda/3)r^{3})\notag\\
&rX=r-(\Lambda/3)r^{3}+C,\notag
\end{align}
where $C$ is the integration constant. Considering the Newtonian limit, a mass at the point $O$ causes the potential $\phi=-\frac{GM}{r}$ \cite{d1992introducing}. This potential in the weak field limit results in
\begin{equation}
g_{00}\simeq 1+2\phi/c^2=1-2GM/c^2r,
\end{equation}
therefore
$C\equiv -2GM/c^2$. Considering
$c$
and
$G$
to be equal to one, $e^{A}$ can be obtained as follows
\begin{equation}
e^{A}=1-(\Lambda /3) r^{2}-2M/r.
\end{equation}
Putting this relation in \eqref{eq:2.22} the line element becomes
\begin{equation}
{ds} ^ {2} =- (1-\frac {2M} {r} - {\Lambda} \frac {{r} ^ {2}}{3} ) {dt} ^ {2} + { (1- \frac{2M} {r} - {\Lambda} \frac {{r} ^ {2}}{3} )} ^ {-1} {dr} ^ {2} + {r} ^ {2} {d \Omega} ^ {2}.
\label{eq:2.14}
\end{equation}
Actually, this is the Schwarzschild-de Sitter solution. If $\Lambda=0$ in equation \eqref{eq:2.14} then the
Schwarzschild solution of the Einstein equations is recovered, and if $M=0$ the de Sitter solution is obtained
\begin{equation}
{ds} ^ {2} =- (1 - {\Lambda} \frac {{r} ^ {2}}{3} ) {dt} ^ {2} + { (1 - {\Lambda} \frac {{r} ^ {2}}{3} )} ^ {-1} {dr} ^ {2} + {r} ^ {2} {d \Omega} ^ {2}.
\label{eq:2.18}
\end{equation}
Let $l\equiv\sqrt{3/ \Lambda}$; then \eqref{eq:2.18} becomes
\begin{equation}
{ds} ^ {2} =- (1 - \frac {{r} ^ {2}}{l^{2}} ) {dt} ^ {2} + { (1 - \frac {{r} ^ {2}}{l^{2}} )} ^ {-1} {dr} ^ {2} + {r} ^ {2} {d \Omega} ^ {2}.
\label{eq:2.29}
\end{equation}
This is the de Sitter line element.
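As a direct check (a sympy sketch, not part of the original derivation), one can verify component by component that the line element \eqref{eq:2.29} satisfies $R_{\mu\nu}=\Lambda g_{\mu\nu}$ with $\Lambda=3/l^{2}$:

```python
# Verify R_mu_nu = Lambda * g_mu_nu for the static de Sitter metric,
# with Lambda = 3 / l**2 (a sympy sketch of the component computation).
import sympy as sp

t, r, th, ph, l = sp.symbols('t r theta varphi l', positive=True)
x = [t, r, th, ph]
f = 1 - r**2 / l**2
g = sp.diag(-f, 1/f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc} of the metric g
Gam = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                       + sp.diff(g[d, c], x[b])
                                       - sp.diff(g[b, c], x[d]))
                         for d in range(n)) / 2)
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                     + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    expr = sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
               for a in range(n))
    expr += sum(Gam[a][a][d] * Gam[d][b][c] - Gam[a][c][d] * Gam[d][b][a]
                for a in range(n) for d in range(n))
    return sp.simplify(expr)

Lam = 3 / l**2
ok = all(sp.simplify(ricci(b, c) - Lam * g[b, c]) == 0
         for b in range(n) for c in range(n))
print(ok)
```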
\section{De Sitter horizon}
If $r=l$ then the components of the line element
\eqref{eq:2.29} become singular; this is a coordinate singularity caused by the choice of coordinates. The difference between this horizon and the Schwarzschild horizon is that in de Sitter space-time each observer has his/her own unique horizon. As a result
$t,r,\theta,\varphi$
are not appropriate coordinates to describe the whole de Sitter manifold; more precisely, they are unable to delineate beyond the horizon. Therefore this coordinate system does not describe the whole de Sitter space-time, it can only cover the static patch of de Sitter space-time \cite{hawking1973large}. Afterwards it is useful to find other coordinate systems that satisfy our aim to study beyond the horizon.
\section{Embedding de Sitter space-time in five dimensional Minkowski space-time\label{sec:2.2.2} }
Different coordinates are describable for each space-time, and de Sitter space-time is no exception. One can define $(t,r,\theta,\varphi)$ in terms of $(x_0,x_1,x_2,x_3,x_4)$ as follows
\begin{align}
\label{eq: embedd}
&t=l\tanh^{-1}(x_0/x_1),\\
&r=\sqrt{x_0^{2}-x_1^{2}+l^2}\notag,\\
&\theta=\cos^{-1}(\frac{x_4}{\sqrt{x_0^{2}-x_1^{2}+l^2}}),\notag\\
&\varphi=\tan^{-1}(x_3/x_2)\notag.
\end{align}
If one puts these relations in
\eqref{eq:2.29}
the following relation will be obtained
\begin{equation}
ds^{2}=-dx^{2}_{0}+dx^{2}_{1}+dx^{2}_{2}+dx^{2}_{3}+dx^{2}_{4},
\label{eq:(2.25)}
\end{equation}
The inverse of \eqref{eq: embedd} can be written as \cite{pascu2012atlas}
\begin{align}
\label{eq:2.30}
&{x}_0 =l \sqrt{ (1-\frac{{r}^{2}}{{l}^{2}} )} \sinh(\frac{t}{l} ),\\
&{x}_{1} =l \sqrt{ (1-\frac{{r}^{2}}{{l}^{2}})} \cosh(\frac{t}{l}),\notag\\
&x_{2}=r \sin{\theta}\cos{\varphi},\notag\\
&x_{3}=r \sin{\theta}\sin{\varphi},\notag\\
&x_{4}=r \cos{\theta}.\notag
\end{align}
By squaring these components and summing them one can write
\begin{equation}
-x^{2}_{0}+x^{2}_{1}+x^{2}_{2}+x^{2}_{3}+x^{2}_{4}=l^{2}.
\label{eq:(2.26)}
\end{equation}
This is the relation of a hyperboloid embedded in five-dimensional Minkowski space-time. This relation is analogous to \eqref{eq:2.200}. Hence relation \eqref{eq:2.200} can also be obtained by solving the Einstein equations.
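The squaring-and-summing step can be checked directly (a small sympy sketch of the embedding \eqref{eq:2.30}):

```python
# Check that the embedding (2.30) satisfies -x0^2 + x1^2 + x2^2 + x3^2 + x4^2 = l^2.
import sympy as sp

t, r, th, ph, l = sp.symbols('t r theta varphi l', positive=True)
x0 = l * sp.sqrt(1 - r**2 / l**2) * sp.sinh(t / l)
x1 = l * sp.sqrt(1 - r**2 / l**2) * sp.cosh(t / l)
x2 = r * sp.sin(th) * sp.cos(ph)
x3 = r * sp.sin(th) * sp.sin(ph)
x4 = r * sp.cos(th)

s = sp.simplify(-x0**2 + x1**2 + x2**2 + x3**2 + x4**2)
print(s)
```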
\section{De Sitter hyperboloid}
Another appropriate coordinate system to describe de Sitter space-time is
\begin{align}
\label{eq:2.32}
&x_{0}=l\sinh (\tau/l),\\
&x_{1}=l\cosh (\tau/l)\cos(\theta),\notag\\
&x_{2}=l\cosh (\tau/l)\sin(\theta)\cos(\varphi),\notag\\
&x_{3}=l\cosh (\tau/l)\sin(\theta)\sin(\varphi)\cos(\alpha),\notag\\
&x_{4}=l\cosh (\tau/l)\sin(\theta)\sin(\varphi)\sin(\alpha),\notag
\end{align}
where $\tau=l \sinh^{-1}[l \sqrt{1-(r/l)^{2}}\sinh(t/l)]$.
If one puts the components \eqref{eq:2.32} in \eqref{eq:(2.26)}, the familiar identity appears
\begin{equation}
\cosh^{2}(\tau/l)-\sinh^{2}(\tau/l)=1,
\label{eq:2.44}
\end{equation}
illustrated in figure \ref{fig:2.33}.
\begin{figure}
\centering
\includegraphics[width=50mm]{78.png}
\caption{ This figure illustrates
$\cosh^{2}(\tau/l)-\sinh^{2}(\tau/l)=1$.
}
\label{fig:2.33}
\end{figure}
On the other hand, for the spacelike part we have the relation of a three-sphere. Then for the whole space-time see figure \ref{fig:(2.1)}
\begin{figure}
\centering
\includegraphics[width=70mm]{22.jpg}
\caption{De Sitter hyperboloid showing global coordinates. }
\label{fig:(2.1)}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=70mm]{5555.png}
\caption{In this figure circles show surfaces of constant t.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=70mm]{33333.png}
\caption{Shaded region shows static de Sitter coordinates}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=70mm]{32.png}
\caption{De Sitter space-time is conformal to the part $-\pi/2<T<\pi/2$ of the Einstein static universe.}
\end{figure}
\section{Global coordinates}
As was explained in section \ref{sec:2.2.2}, one can define various coordinates satisfying equation \eqref{eq:(2.26)}. Therefore it is possible to write
\begin{align}
\label{eq:2.36}
&x_{0}=l \sinh(\tau /l),\\
\label{eq:2.37}
&x_{j}=l \cosh(\tau /l) \omega^{j}.
\end{align}
where $j=1,2,3,4$ and the $\omega^{j}$ parametrize the unit three-sphere. Then the line element takes the form
\begin{equation}
ds^{2}=-d \tau^{2}+l^{2}\cosh^{2}(\tau/l) d \omega^{(j)2}.
\label{eq:2.33}
\end{equation}
A conformal factor can be multiplied into \eqref{eq:2.33} and the compactified space-time (see Appendix \ref{app:A}) can be obtained.
\section{De Sitter space-time's completion}
Talking about infinity is not an easy task, because it is vast and out of reach \cite{23}. Hence with the aid of compactification infinity becomes accessible (see Appendix \ref{app:A}). One can multiply the conformal factor
$\Omega=\frac{1}{l^{2}\cosh^{2}(\tau/l)}$
into the metric
\eqref{eq:2.33}. Causality is invariant under conformal transformations, although the geometry changes. This change of geometry allows us to bring the infinities into the problem as well. Multiplying this conformal factor in
\eqref{eq:(2.25)}
one can see that de Sitter space-time is locally conformal to the Einstein static universe
\begin{equation}
ds^{2}=l^{2}\cosh^{2}(\tau/l)d\bar{s}^{2}.
\end{equation}
Where
$d\bar{s}^{2}$
is the line element of Einstein static universe.
\begin{align}
&d\bar{s}^{2}=-l^{-2}\cosh^{-2}(\tau/l)d\tau^{2}+dR^{2}+R^{2}d \Omega^{2},\\
& \xrightarrow{\tan(T/2)=\tanh(\tau/2l)}\notag\\
&d \bar{s}^{2}=-dT^{2}+dR^{2}+R^{2}d \Omega^{2}.\notag
\end{align}
Considering the ranges of these coordinates, one can identify the infinities of this unphysical metric.
Unlike Minkowski space-time, de Sitter space-time has spacelike infinities for null and timelike curves.
\section{Killing vector fields}
A vector field $X$ generates conformal motions if \cite{hassani2001mathematical}
\begin{equation}
\mathcal{L}_{X}g_{\mu \nu}=2\phi(x^{\sigma})g_{\mu \nu}.
\end{equation}
In this equation if $\phi$ is constant then $X$ will be a homothetic vector and if $\phi=0$, $X$ will be a Killing vector \cite{stephani2009exact}.
Thus, solving $\mathcal{L}_{X}g_{\mu \nu}=0$, the Killing vector fields of de Sitter space-time are written as follows \cite{banerjee2007gauge,salcedo2017sitter,yan2017killing}
\begin{align}
&\xi^{\mu}= [\frac{r\cos(\theta)l\exp(t/l)}{\sqrt{(l^2-r^2)}},\cos(\theta)\exp(t/l)\sqrt{(l^2-r^2)}, \frac{-\exp(t/l)\sin(\theta)\sqrt{(l^2-r^2)}}{r}, 0] ,\\
&\xi^{\mu}= [0, 0, 0,1] ,\notag\\
&\xi^{\mu}= [\frac{r\sin(\theta)\sin(\varphi)l\exp(t/l)}{\sqrt{(l^2-r^2)}},\sin(\theta)\sin(\varphi)\exp(t/l)\sqrt{(l^2-r^2)},\notag\\ &\frac{\exp(t/l)\cos(\theta)\sin(\varphi)\sqrt{(l^2-r^2)}}{r},\frac{\exp(t/l)\cos(\varphi)(l-r)(l+r)}{\sqrt{(l^2-r^2)}r\sin(\theta)}] ,\notag\\
&\xi^{\mu}= [\frac{r\sin(\theta)\cos(\varphi)l\exp(t/l)}{\sqrt{(l^2-r^2)}},\sin(\theta)\cos(\varphi)\exp(t/l)\sqrt{(l^2-r^2)},\notag\\ &\frac{\exp(t/l)\cos(\theta)\cos(\varphi)\sqrt{(l^2-r^2)}}{r},\frac{-\exp(t/l)\sin(\varphi)(l-r)(l+r)}{\sqrt{(l^2-r^2)}r\sin(\theta)}] ,\notag\\
&\xi^{\mu}= [\frac{-rl\cos(\theta)\exp(-t/l)}{\sqrt{(l^2-r^2)}},\cos(\theta)\exp(-t/l)\sqrt{(l^2-r^2)}, \frac{-\sin(\theta)\exp(-t/l)\sqrt{(l^2-r^2)}}{r}, 0] ,\notag\\
&\xi^{\mu}= [-rl\sin(\theta)\sin(\varphi)\exp(-t/l)/\sqrt{(l^2-r^2)},\sin(\theta)\sin(\varphi)\exp(-t/l)\sqrt{(l^2-r^2)},\notag\\ &\frac{\cos(\theta)\sin(\varphi)\exp(-t/l)\sqrt{(l^2-r^2)}}{r},\frac{\cos(\varphi)\exp(-t/l)(l-r)(l+r)}{\sqrt{(l^2-r^2)}r\sin(\theta)}] ,\notag\\
&\xi^{\mu}=[\frac{-rl\sin(\theta)\cos(\varphi)\exp(-t/l)}{\sqrt{(l^2-r^2)}},\sin(\theta)\cos(\varphi)\exp(-t/l)\sqrt{(l^2-r^2)},\notag\\
&\frac{\cos(\theta)\cos(\varphi)\exp(-t/l)\sqrt{(l^2-r^2)}}{r}, \frac{-\sin(\varphi)\exp(-t/l)(l-r)(l+r)}{\sqrt{(l^2-r^2)}r\sin(\theta)}] ,\notag\\
&\xi^{\mu}= [1, 0, 0, 0] ,\notag\\
&\xi^{\mu} = [0, 0,\sin(\varphi), \frac{\cos(\varphi)}{\tan(\theta)}] ,\notag\\
&\xi^{\mu}= [0,0, \cos(\varphi), -\sin(\varphi)/\tan(\theta)] .\notag
\end{align}
As expected for de Sitter space-time, which is maximally symmetric, ten Killing vector fields have been found.
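As a quick consistency check, the Killing equation can be verified directly. The sketch below (Python with sympy; the static de Sitter metric $ds^{2}=-(1-r^{2}/l^{2})dt^{2}+(1-r^{2}/l^{2})^{-1}dr^{2}+r^{2}d\Omega^{2}$ in coordinates $(t,r,\theta,\varphi)$ is assumed, and \texttt{lie\_g} is a helper written here, not a library routine) tests the first vector field of the list:

```python
# Verify L_xi g = 0 for the first Killing vector in static coordinates.
import sympy as sp

t, r, th, l = sp.symbols('t r theta l', positive=True)
x = [t, r, th, sp.Symbol('phi')]

# static de Sitter metric (assumed form)
g = sp.diag(-(1 - r**2/l**2), 1/(1 - r**2/l**2), r**2, r**2*sp.sin(th)**2)

s = sp.sqrt(l**2 - r**2)
xi = [r*sp.cos(th)*l*sp.exp(t/l)/s,        # xi^t
      sp.cos(th)*sp.exp(t/l)*s,            # xi^r
      -sp.exp(t/l)*sp.sin(th)*s/r,         # xi^theta
      0]                                   # xi^phi

def lie_g(xi, g, x):
    """(L_xi g)_{mn} = xi^k d_k g_{mn} + g_{kn} d_m xi^k + g_{mk} d_n xi^k."""
    n = len(x)
    out = sp.zeros(n, n)
    for m in range(n):
        for nn in range(n):
            out[m, nn] = sum(xi[k]*sp.diff(g[m, nn], x[k])
                             + g[k, nn]*sp.diff(xi[k], x[m])
                             + g[m, k]*sp.diff(xi[k], x[nn])
                             for k in range(n))
    return sp.simplify(out)

is_killing = lie_g(xi, g, x) == sp.zeros(4, 4)
print(is_killing)  # True
```

The same routine can be applied to the remaining vectors of the list.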
\chapter{De Sitter space-time's symmetries}
Symmetry in physics is a mathematical or physical property of a system that remains unchanged under certain transformations. There exist quantities which are expected to be invariant under transformations, so we look for symmetry groups that hold these quantities invariant. In the following, we will discuss the existence of such a quantity for gravitational systems and the symmetry group that maintains its invariance.
\section{Planck length \label{sec:4.1}}
Planck length, $l_p$, is the distance that light travels in one Planck time. This length can be described by three fundamental physical constants: the speed of light in vacuum, $c$, the reduced Planck constant, $\hbar$, and the gravitational constant, $G$. Its relation is
\begin{equation}
l_p=\sqrt{\frac{\hbar G}{c^3}}=1.616229(38)\times 10^{-35}\: m.
\label{eq:3.1}
\end{equation}
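As a numerical illustration (the constant values below are standard CODATA-style figures, inserted here for the example, not taken from this thesis), the three constants indeed combine to the quoted order of magnitude:

```python
# Planck units from hbar, G and c (CODATA-style values, assumption)
hbar = 1.054571817e-34   # J s
G = 6.67430e-11          # m^3 kg^-1 s^-2
c = 2.99792458e8         # m / s

l_p = (hbar*G/c**3)**0.5   # Planck length, ~1.6e-35 m
t_p = l_p/c                # Planck time,   ~5.4e-44 s
m_p = (hbar*c/G)**0.5      # Planck mass,   ~2.2e-8  kg
print(l_p, t_p, m_p)
```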
In 1899 Max Planck proposed to use certain natural constants for length, mass, time and energy. Considering only the Newtonian gravitational constant, the speed of light and the Planck constant, he found these constants, named the Planck length, Planck mass, Planck time and Planck energy.
Quantum effects are believed to appear at this scale. To measure anything at this scale the photon momentum must be very high. Considering Heisenberg's uncertainty principle, a black hole appears at this scale whose horizon equals the Planck length. One can rewrite the uncertainty principle as
\begin{equation}
\Delta p\Delta r>\hbar/2.
\end{equation}
Multiplying both sides by $\frac{2G}{c^3}$ one gets \cite{aurilia2013planck,carr2016black}
\begin{align}
\label{eq:3.444}
&\Delta(\frac{2Gm}{c^2})\Delta r> \frac{G \hbar}{c^3}\\
\Rightarrow&\Delta r_s \Delta r>l_p^2.\notag
\end{align}
where $r_s$ is the gravitational radius, $r$ is the coordinate radius and $l_p$ is the Planck length. Relation \eqref{eq:3.444} is the uncertainty principle in quantum gravity.
The uncertainty principle anticipates the existence of black holes and wormholes, so any attempt to probe a distance smaller than the Planck length is considered impossible, because a black hole would appear at that distance \cite{carr2016black}.
The Lorentz symmetry group does not leave this minimum length invariant. Note that in special relativity on a flat background, the closer the speed gets to the speed of light, the closer the length goes to zero
\begin{equation}
L=L_0\sqrt{1-v^2 / c^2}.
\end{equation}
This is in contrast to the invariance of the Planck length.
The Lorentz group can only be realized on homogeneous space-times; that means, besides Minkowski space-time, it can be realized on de Sitter and anti-de Sitter space-times, which are the only other possible homogeneous space-times in $(3+1)$ dimensions. In this thesis we focus on de Sitter space-time, which has constant positive scalar curvature.
\begin{equation}
R=12 l^{-2}.
\label{eq:3.4}
\end{equation}
where $l$ is the de Sitter length. Equation \eqref{eq:3.4} shows the relation between the Ricci scalar and this length. We know by definition that Lorentz transformations leave the curvature invariant. Thus Lorentz transformations on de Sitter space-time leave the de Sitter length invariant \cite{aldrovandi2007sitter,araujo2017sitter,araujo2019sitter,gaarder2016gravitational,gibbons2003newton, salcedo2017sitter}. Somehow, this concept is also hidden in Minkowski space-time: what remains invariant there is an infinite length, which does not affect the space-time's curvature.
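Relation \eqref{eq:3.4} can be checked symbolically. The sketch below (sympy; the static form of the de Sitter metric is assumed) computes the Ricci scalar from scratch:

```python
# Compute the Ricci scalar of the static de Sitter metric from scratch.
import sympy as sp

t, r, th, ph, l = sp.symbols('t r theta phi l', positive=True)
x = [t, r, th, ph]
g = sp.diag(-(1 - r**2/l**2), 1/(1 - r**2/l**2), r**2, r**2*sp.sin(th)**2)
gi = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}
Gam = [[[sum(gi[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
             - sp.diff(g[b, c], x[d]))/2 for d in range(n))
         for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma*Gamma terms
def ricci(b, c):
    return sum(sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
               + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
                     for d in range(n))
               for a in range(n))

R = sp.simplify(sum(gi[b, c]*ricci(b, c) for b in range(n) for c in range(n)))
print(R)  # 12/l**2
```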
\section{De Sitter transformations}
De Sitter transformations can be viewed as pseudo-rotations in a five-dimensional flat embedding space. Each observer has his/her own coordinate set, and we want to find the transformations between these coordinate sets. We search for a symmetry group that leaves the metric invariant. This group, named the de Sitter group $SO(4,1)$, acts on the embedding five-dimensional Minkowski space-time
\begin{equation}
X^{\sigma}=\Lambda^{\sigma}_{\rho}X^{\rho}
\end{equation}
where $\Lambda^{\sigma}_{\rho}$ is the group element. In vector representation one has \cite{hartman2017lecture}
\begin{equation}
-g_{\mu \nu}X^{\mu}X^{\nu}=l^2
\end{equation}
which shows that these transformations leave the length invariant. Infinitesimal transformations can be written as
\begin{equation}
\delta X^{\sigma}=1/2 \xi^{\mu \nu} L_{\mu \nu} X^{\sigma}
\label{eq:3.12}
\end{equation}
where
$L_{\mu \nu}$
and
$\xi^{\mu \nu}$
are the generators and parameters of the de Sitter transformations.
\section{Spherical rotations\label{3.3}}
According to equation \eqref{eq:2.30},
one can see that the last three components parameterize $ \mathbb{R}^3$; the corresponding symmetry elements are then rotation elements
\begin{equation}
\mathcal{G}_{rot}^{(1)}=
\begin{bmatrix}
1&0&0&0&0\\
0&1&0&0&0\\
0&0&1&0&0\\
0&0&0&\cos{\alpha}&-\sin{\alpha}\\
0&0&0&\sin{\alpha}&\cos{\alpha}
\end{bmatrix},
\end{equation}
\begin{equation}
\mathcal{G}_{rot}^{(2)}=
\begin{bmatrix}
1&0&0&0&0\\
0&1&0&0&0\\
0&0&\cos{\alpha}&-\sin{\alpha}&0\\
0&0&\sin{\alpha}&\cos{\alpha}&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\mathcal{G}_{rot}^{(3)}=
\begin{bmatrix}
1&0&0&0&0\\
0&1&0&0&0\\
0&0&\cos{\alpha}&0&-\sin{\alpha}\\
0&0&0&1&0\\
0&0&\sin{\alpha}&0&\cos{\alpha}
\end{bmatrix}.
\end{equation}
\section{Time translation}
As the de Sitter metric is static, it should be invariant under time translation
\begin{equation}
\mathcal{T}_{trans}^{(1)}=
\begin{bmatrix}
\cosh{(\beta /l)}&-\sinh{(\beta /l)}&0&0&0\\
-\sinh{(\beta /l)}&\cosh{(\beta /l)}&0&0&0\\
0&0&1&0&0\\
0&0&0&1&0\\
0&0&0&0&1
\end{bmatrix}.
\end{equation}
This transformation can be considered as a boost in the $x_1$ direction.
\section{Rotations on the hyperboloid}
As one can see in section \ref{3.3}, for spherical rotations the $x_1$ axis is kept fixed. Allowing the $x_1$ axis to vary, another subgroup of rotations, known as rotations on the hyperboloid, appears
\begin{equation}
\mathcal{R}_{rot}^{(1)}=
\begin{bmatrix}
1&0&0&0&0\\
0&\cos{\alpha}&-\sin{\alpha}&0&0\\
0&\sin{\alpha}&\cos{\alpha}&0&0\\
0&0&0&1&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\mathcal{R}_{rot}^{(2)}=
\begin{bmatrix}
1&0&0&0&0\\
0&\cos{\alpha}&0&-\sin{\alpha}&0\\
0&0&1&0&0\\
0&\sin{\alpha}&0&\cos{\alpha}&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\mathcal{R}_{rot}^{(3)}=
\begin{bmatrix}
1&0&0&0&0\\
0&\cos{\alpha}&0&0&-\sin{\alpha}\\
0&0&1&0&0\\
0&0&0&1&0\\
0&\sin{\alpha}&0&0&\cos{\alpha}
\end{bmatrix}.
\end{equation}
\section{Boosts}
Other transformations that we have to consider are boosts
\begin{equation}
\begin{bmatrix}
\cosh{\beta}&0&-\sinh{\beta}&0&0\\
0&1&0&0&0\\
-\sinh{\beta}&0&\cosh{\beta}&0&0\\
0&0&0&1&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\begin{bmatrix}
\cosh{\beta}&0&0&-\sinh{\beta}&0\\
0&1&0&0&0\\
0&0&1&0&0\\
-\sinh{\beta}&0&0&\cosh{\beta}&0\\
0&0&0&0&1
\end{bmatrix},
\end{equation}
\begin{equation}
\begin{bmatrix}
\cosh{\beta}&0&0&0&-\sinh{\beta}\\
0&1&0&0&0\\
0&0&1&0&0\\
0&0&0&1&0\\
-\sinh{\beta}&0&0&0&\cosh{\beta}
\end{bmatrix}.
\end{equation}
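All of the group elements listed in the last few sections can be checked at once: each must preserve the ambient metric $\eta=\mathrm{diag}(-1,1,1,1,1)$, i.e.\ satisfy $\Lambda^{T}\eta\Lambda=\eta$. A minimal sketch (sympy; the index placements of the matrices above are assumed):

```python
# Check M^T eta M = eta for all ten one-parameter subgroups listed above.
import sympy as sp

al, be = sp.symbols('alpha beta', real=True)
eta = sp.diag(-1, 1, 1, 1, 1)

def rot(i, j, ang):
    """Rotation by `ang` in the (i, j) plane of the embedding space."""
    M = sp.eye(5)
    M[i, i] = M[j, j] = sp.cos(ang)
    M[i, j] = -sp.sin(ang)
    M[j, i] = sp.sin(ang)
    return M

def boost(i, j, ang):
    """Hyperbolic rotation (boost) in the (i, j) plane."""
    M = sp.eye(5)
    M[i, i] = M[j, j] = sp.cosh(ang)
    M[i, j] = M[j, i] = -sp.sinh(ang)
    return M

mats = [rot(3, 4, al), rot(2, 3, al), rot(2, 4, al),   # spherical rotations
        rot(1, 2, al), rot(1, 3, al), rot(1, 4, al),   # hyperboloid rotations
        boost(0, 1, be), boost(0, 2, be),
        boost(0, 3, be), boost(0, 4, be)]              # time translation + boosts

ok = all(sp.simplify(M.T*eta*M - eta) == sp.zeros(5, 5) for M in mats)
print(ok)  # True
```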
\section{Conformal transformations}
The number of generators of the group $SO(1,n+1)$ can be obtained from the following relation
\begin{equation}
dimSO(1,n+1)=\frac{1}{2}(n+1)(n+2).
\label{eq:3.5}
\end{equation}
As an example, the Poincaré group in $n$ dimensions has $n$ translation generators and $\frac{n(n-1)}{2}$ rotation generators,
\begin{equation}
dim Poincar \acute{e}(E^{n})=\frac{1}{2}n(n+1).
\label{eq:3.6}
\end{equation}
These calculations have been done locally. De Sitter space-time is maximally symmetric, so knowing the curvature at one point, one knows the curvature of the whole space-time; hence the results of
\eqref{eq:3.5}
and
\eqref{eq:3.6}
can be referred to the whole space-time.
The difference between
\eqref{eq:3.5}
and
\eqref{eq:3.6} is $n+1$. This incompatibility can be resolved by adding conformal transformations. In the following, the different types of these transformations are presented \cite{duval2011conformal}.
Multiplying a vector by a constant factor, one obtains transformations known as dilations
\begin{equation}
\vec{x}\rightarrow \lambda \vec{x}\quad,\quad\lambda \in \mathbb{R}.
\end{equation}
with
\begin{equation}
D=t\partial_t+x\partial_x+y\partial_y+z\partial_z
\end{equation}
as their generator.
Another relevant type of transformations are the conformal spatial transformations
$\vec{x} \rightarrow \vec{x}'$, as
\begin{equation}
\frac{x'^{\mu}}{x'^{2}}=\frac{x^{\mu}}{x^{2}}+\alpha^{\mu},
\end{equation}
where $x^{2}=x_{\mu}x^{\mu}$ and $\mu=1,\dots,n$.
One can also write
\begin{equation}
x'^{\mu}=\frac{x^{\mu}+\alpha^{\mu}x^{2}}{1+2\alpha_{\mu}x^{\mu}+\alpha^{2}x^{2}},
\end{equation}
with four generators
\begin{align}
&K_1=(t^2+x^2+y^2+z^2)\partial_t+2xt\partial_x+2yt\partial_y+2zt\partial_z,\\
&K_2=2xt\partial_t+(t^2+x^2-y^2-z^2)\partial_x+2xy\partial_y+2xz\partial_z,\notag\\
&K_3=2yt\partial_t+2xy\partial_x+(t^2-x^2+y^2-z^2)\partial_y+2yz\partial_z,\notag\\
&K_4=2zt\partial_{t}+2xz\partial_x+2yz\partial_y+(t^2-x^2-y^2+z^2)\partial_z.\notag
\end{align}
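The two finite forms of the conformal spatial transformation given above can be checked against each other. The sketch below (sympy; a Euclidean signature is assumed for the index contractions, so $x_\mu = x^\mu$) verifies that the second form indeed satisfies $x'^{\mu}/x'^{2}=x^{\mu}/x^{2}+\alpha^{\mu}$:

```python
# Verify x'^mu / x'^2 = x^mu / x^2 + a^mu for the finite transformation.
import sympy as sp

x = sp.symbols('x0:4', real=True)
a = sp.symbols('a0:4', real=True)

x2 = sum(xi**2 for xi in x)            # Euclidean x.x (signature assumption)
a2 = sum(ai**2 for ai in a)
ax = sum(ai*xi for ai, xi in zip(a, x))

sigma = 1 + 2*ax + a2*x2
xp = [(xi + ai*x2)/sigma for xi, ai in zip(x, a)]
xp2 = sp.simplify(sum(xpi**2 for xpi in xp))   # reduces to x2/sigma

checks = [sp.simplify(xp[mu]/xp2 - x[mu]/x2 - a[mu]) for mu in range(4)]
ok = all(ch == 0 for ch in checks)
print(ok)  # True
```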
As one can see in the previous pages, we have ten generators for rotations, time translations and boosts and five generators for conformal transformations, so we have $SO(1,4)$ as the de Sitter symmetry group.
\section{Commutation relations}
It is possible to study the Lie algebra for conformal transformations (see table \ref{tab 3.1}).
\begin{table}
\begin{tabular}{|p{2.5cm}|p{5cm}|p{2.5cm}|p{5cm}|}
\hline
Translations&Rotations& Dilations&Conformal spatial transformations \\ \hline
$P_{\mu}=-\partial_{\mu}$&$M_{\mu\nu}=x_{\mu}\partial_{\nu}-x_{\nu}\partial_{\mu}=x_{\nu}P_{\mu}-x_{\mu}P_{\nu}$&$D=-x^{\mu}\partial_{\mu}$&$K_{\mu}=2x_{\mu}x^{\nu}\partial_{\nu}-x^{2}\partial_{\mu}=-2x_{\mu}D+x^{2}P_{\mu}$\\ \hline
\end{tabular}
\caption{$SO(4,1)$ generators. \label{tab 3.1}}
\end{table}
So one can write the algebra that rules on these transformations
\begin{align}
\label{eq:3.54}
&[M_{\mu\nu},P_{\rho}]=(g_{\nu\rho}P_{\mu}-g_{\mu\rho}P_{\nu}),\\
&[M_{\mu\nu},M_{\rho \tau}]=(g_{\mu \tau}M_{\nu \rho}+g_{\nu \rho}M_{\mu \tau}-g_{\mu\rho}M_{\nu\tau}-g_{\nu\tau}M_{\mu\rho}),\notag\\
&[M_{\mu \nu},K_{\rho}]=(g_{\nu \rho}K_{\mu}-g_{\mu\rho}K_{\nu}),\notag\\
&[D,P_{\mu}]=+P_{\mu},\notag\\
&[D,K_{\mu}]=-K_{\mu},\notag\\
&[P_{\mu},K_{\nu}]=2(g_{\mu \nu}D+M_{\mu \nu}).\notag
\end{align}
Other combinations are zero.
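These commutators can be verified by letting the generators of table \ref{tab 3.1} act on a generic test function. A sketch with sympy (a flat Euclidean metric is assumed so that $x_{\mu}=x^{\mu}$; note that with $D=-x^{\mu}\partial_{\mu}$ one finds $[D,K_{\mu}]=-K_{\mu}$):

```python
# Act with the generators of the table on a generic function and compare.
import sympy as sp

x = sp.symbols('x0:4', real=True)
f = sp.Function('f')(*x)
g = sp.eye(4)            # flat Euclidean metric (assumption), x_mu = x^mu
x2 = sum(xi**2 for xi in x)

P = lambda mu: (lambda h: -sp.diff(h, x[mu]))
D = lambda h: -sum(xi*sp.diff(h, xi) for xi in x)
M = lambda mu, nu: (lambda h: x[mu]*sp.diff(h, x[nu]) - x[nu]*sp.diff(h, x[mu]))
K = lambda mu: (lambda h: 2*x[mu]*sum(xi*sp.diff(h, xi) for xi in x)
                          - x2*sp.diff(h, x[mu]))

comm = lambda A, B: (lambda h: sp.expand(A(B(h)) - B(A(h))))

ok1 = sp.expand(comm(D, P(1))(f) - P(1)(f)) == 0               # [D,P] = +P
ok2 = sp.expand(comm(D, K(1))(f) + K(1)(f)) == 0               # [D,K] = -K
ok3 = sp.expand(comm(P(0), K(1))(f)
                - 2*(g[0, 1]*D(f) + M(0, 1)(f))) == 0          # [P,K] relation
print(ok1, ok2, ok3)  # True True True
```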
\chapter{Asymptotic symmetries}
De Sitter space-time's symmetries were considered in the previous chapter. In the following chapter we are interested in obtaining asymptotic symmetries. Our focus is on null infinity, $\mathcal{I}$, so we face two problems. First, under the compactification the topology changes, and the topology of $\mathcal{I}$ is quite different from the topology of the physical space-time. Therefore, we cannot necessarily say that the symmetry group ruling at null infinity is the same as that of the physical space-time. This subject has been studied in Minkowski space-time, and more details on this method will follow. The second important matter is that
$\mathcal{I}$
is null in asymptotically flat space-times but spacelike in asymptotically de Sitter space-times. Therefore $\Lambda\rightarrow0$ is not a continuous limit. This fact has important consequences.
In this chapter these two issues are considered and useful methods to find asymptotic symmetries for de Sitter space-time are presented.
\section{General discussion}
Talking about infinity is not facile, so with the help of compactification (see Appendix \ref{app:A}) infinity becomes more palpable. As we said before, by multiplying the metric by a conformal factor ($\tilde{ g}_{\mu\nu}=\Omega^2{ g}_{\mu\nu}$), one manages to attribute infinity to the boundary of a larger space-time, $(\tilde{M},\tilde{ g}_{\mu\nu})$. The conformal factor does not change causality, but it changes the geometry; therefore we cannot identify the symmetry group that rules infinity without careful consideration. Actually, the only statement one can make is that the group of diffeomorphisms is the appropriate symmetry group, which is not useful, because one cannot define conserved charges from it. In fact, diffeomorphism invariance is a local symmetry while we need a global symmetry to describe a Noether charge.
Using our knowledge about physical space-time's properties and considering their transition when $r\rightarrow \infty$ might be a good way to find features of the infinity.
As said before, conformal transformation does not change the causality. Hence it is possible to consider gravitational fields and their asymptotic limits. As gravitational fields move with the speed of light, studying null infinity and finding a useful notion for it may be feasible according to them.
At first we will review the asymptotic behavior of a gravitational field in an isolated system. Actually this case is much easier than the other ones. Observations show that a system with a positive cosmological constant is more appropriate to describe our universe. Unfortunately one can not use the process that is used in $\Lambda =0$ cases, in
$\Lambda >0$ cases \cite{bros2002asymptotic}.
It is difficult to study the asymptotic structure of a gravitational field because the field itself changes the geometry of space-time. This issue becomes clear after the work of
Arnowitt, Deser and Misner at spacelike infinity \cite{arnowitt2008republication} and the work of Bondi, Sachs and Newman at null infinity \cite{bondi1962gravitational}. In ADM framework space-time is divided to time constant surfaces. Each surface has a three dimensional metric $\lambda_{ij}(t,x^k)$ and a momentum $\pi^{ij}(t,x^k)$ according to that one can define the Hamiltonian (see \cite{arnowitt2008republication,deser1967covariant} ).
Now let us talk about null infinity, our main subject. First we review the work of Bondi and his collaborators. They established a scheme for studying the expansion of the metric along a null path. Null infinity, $\mathcal{I}$, is known as the boundary of the physical space-time. Considering the gravitational radiation on $\mathcal{I}$, one can define the Bondi news,
$N_{\mu\nu}$,
which in
Bondi–Sachs physical space coordinates, $\hat{x}^{\mu}=(u,l,x^{\mu})=(t-r,1/r,x^{\mu})$,
has the form \cite{ashtekar2014asymptotics}
\begin{equation}
N_{\mu\nu}=\zeta^*(\lim_{l\rightarrow0}l^{-1}\hat{\nabla}_{\mu}\hat{\nabla}_{\nu}l),
\end{equation}
where $\zeta^*$ denotes the pullback to $\mathcal{I}^+$. Two components of $N_{\mu\nu}$ correspond to the two possible modes in exact general relativity. In asymptotically flat space-times, gravitational radiation carries energy-momentum through $\mathcal{I}$ if and only if
$N_{\mu\nu} \neq 0$.
If $N_{\mu \nu} \neq 0$,
$\eta_{\mu \nu}$ is no longer unique and one can define a new metric, $\eta'_{\mu \nu}$, with the translation $t \rightarrow t'=t+f(\theta,\varphi)$, such that $g_{\mu \nu}$ has the same asymptotic behavior with respect to it. Thus the asymptotic symmetry group is not the Poincaré group but another group, called the BMS group, including an abelian subgroup $\mathfrak{T}$ that contains translations, just like the subgroup defined for the Poincaré group. Hence the definition of energy-momentum at null infinity is well-defined \cite{bondi1962gravitational}.
For the $\Lambda>0$ case the process is quite different. As $\mathcal{I}$ is a spacelike hypersurface, one cannot obtain the symmetry group the way the $BMS$ group has been obtained. It should be added that, by considering an expansion in $1/r$, a Bondi-Sachs form of the metric can still be used for locally de Sitter space-times, and a symmetry group named the $\Lambda$-BMS group has been obtained.
To study such a structure, we first need to consider the basic definitions \cite{poole2019ds4,compere2019advanced,aneesh2019conserved}.
\section{Asymptotically flat space-time}
In physics we like to study isolated systems. If a space-time becomes flat when $r\rightarrow \infty$, this space-time is asymptotically flat, and asymptotically flat space-times describe isolated systems \cite{wald2000general,calo2018relation}.
Finding a meaningful definition for isolated systems in general relativity is not simple because finding a helpful description for infinity is difficult \cite{wald1999gravitational}. Compactifying the space-time (see Appendix \ref{app:A}) is a useful method to obtain a good definition for infinity. According to that, a definition for asymptotically flat space-times has also been found. A space-time is asymptotically flat if its null and spacelike infinities become like those of flat space-time. More precisely, the space-time $(M,g_{\mu\nu})$ is asymptotically flat if there exist a conformal space-time, $(\tilde{M},\tilde{g}_{\mu\nu})$, such that $\tilde{g}_{\mu\nu}$ is $C^{\infty}$ everywhere except at $i^0$, where it is $C^{>0}$, and a conformal isometry, $\psi:M \rightarrow \psi[M]\subset \tilde{M}$, with conformal factor $\Omega$ such that ${g}_{ab}=\Omega^{2} \psi^*\tilde{ g}_{ab}$, satisfying the following conditions \cite{wald2000general}.
\noindent\fbox{
\parbox{\textwidth}{
1. $\bar{J^{+}}(i^0)\cup\bar{J^-}(i^0)=\tilde{M}-M$.
2. There exists an open neighborhood, V, of $\mathring{M}= i^{0} \cup \mathcal{I}^{-}\cup \mathcal{I}^{+}$, where $(V,\tilde{g}_{\mu\nu})$ is strongly causal.
3. $\Omega$ can be extended to a function on all of the $\tilde{M}$ which is $C^{2}$ at $i^{0}$ and $C^{\infty}$ elsewhere.
4. (a) For $\Omega$ at $\mathcal{I}$ one has
\begin{align}
\label{eq:4.22222}
&\Omega|_{\mathcal{I}}=0,\\
&\tilde{\nabla}_{\mu} \Omega|_{\mathcal{I}}\neq 0,\notag
\end{align}
where $\tilde{\nabla}_{\mu}$ is the covariant derivative according to $\tilde{g}_{\mu \nu}$.
(b) At $i^0$ one can write
\begin{align}
&\Omega|_{i^{0}}=0,\\
&\lim_{i^{0}} \tilde{\nabla}_{\mu}\Omega=0,\notag\\
&\lim_{i^{0}} \tilde{\nabla}_{\mu}\tilde{\nabla}_{\nu}\Omega= 2 \tilde{g}_{\mu \nu}(i^{0}).\notag
\end{align}
}}
The condition \eqref{eq:4.22222} allows $\Omega$ to be used as a coordinate in a neighborhood of $\mathcal{I}$. One has the liberty to choose $\Omega$. Thus if one chooses a conformal frame with $\tilde{\nabla}_{\mu}\tilde{n}^{\mu}|_{\mathcal{I}}=0$, it is possible to use the normal $\tilde{n}^{\mu}=\tilde{g}^{\mu\nu}\tilde{\nabla}_{\nu}\Omega$ as a vector field tangent to $\mathcal{I}$. This freedom can be used to change the conformal scale as
$\Omega \rightarrow \Omega'=\omega\Omega$, so
\begin{align}
&\tilde{n}'^{\mu}=\omega^{-1}\tilde{n}^{\mu},\\
&q'_{\mu\nu}|_{\mathcal{I}}=\omega^{2}q_{\mu\nu},\notag
\end{align}
where $\mathcal{L}_{\tilde{n}}\omega=0$. Choosing the conformal frame $\tilde{\nabla}_{\mu}\tilde{n}^{\mu}|_{\mathcal{I}}=0$, the degrees of freedom decrease. In this conformal frame the intrinsic metric $q_{\mu\nu}$ of $\mathcal{I}$ has the signature $(0,+,+)$.
\subsection{The Bondi Sachs metric}
Foliating the space-time into $u=\mathrm{constant}$ null hypersurfaces, the Bondi-Sachs coordinates $(u,r,x^A)$ can be used. The hypersurfaces being null implies $g_{11}=0$, and we must have $\Gamma^{0}_{11}=\Gamma^{2}_{11}=0$, which results in $r^4\sin^2\theta=g_{22}g_{33}$. The line element takes the form
\begin{align}
&ds^2=e^{2\beta}Vr^{-1}du^2-2e^{2\beta}du dr+r^2h_{AB}(dx^A-U^Adu)(dx^B-U^Bdu)
\end{align}
where $A,B=3,4$ and $\beta,\: V,\: h_{AB}$ are functions of $(u,r,\theta,\varphi)$. One can find the asymptotic symmetries by checking all transformations that preserve this form of the line element \cite{bondi1962gravitational}, see also \cite{madler2016bondi}.
\section{Asymptotically flat space-times' symmetries}
It is important to find the symmetries that are represented by vector fields $\xi^{\mu}$ at null infinity, in other words by the equivalence classes of vector fields that do not vanish at $\mathcal{I}$ \cite{chandrasekaran2018symmetries}. So it is possible to find vectors which satisfy the Killing equation near infinity. In curved space-time there exists a large group of transformations that depend on the angles and satisfy the Killing equation asymptotically. The asymptotic symmetry group, $\mathfrak{G}$, is defined as a quotient group \cite{anninos2011asymptotic}
\begin{equation}
\mathfrak{G}=Diff_{\infty}(M)/ Diff^{0}_{\infty}(M),
\end{equation}
where $Diff_{\infty}(M)$ denotes the diffeomorphisms of the physical space-time, $({M},{g}_{\mu\nu})$, and $Diff^{0}_{\infty}(M)$ denotes the diffeomorphisms that are asymptotically the identity. As said before, it is possible to use $\tilde{n}^{\mu}$ as a vector field on $\mathcal{I}$. When $\mathcal{I}$ is null, $\tilde{n}^{\mu}$ lies in the tangent space of $\mathcal{I}$ and with its aid one can define $q_{\mu\nu}$, which has the signature $(0,+,+)$. The field equations imply that $\tilde{\nabla}_{\mu}\tilde{n}^{\mu}$ vanishes in each of these divergence-free conformal frames, so the solutions of the equation $\tilde{\nabla}_{\mu}\tilde{n}^{\mu}|_{\mathcal{I}}=0$ are the generators of $\mathcal{I}$.
Actually, the $BMS$ group is the symmetry group of $\mathcal{I}$. This group contains the diffeomorphisms that leave the intrinsic metric, $q_{\mu\nu}$, and the vector field, $n^{\mu}$, invariant. The $BMS$ group is smaller than $Diff(\mathcal{I})$ and has a remarkable structure, as it does not change the normal vectors of $\mathcal{I}$. This leads to the relation
\begin{equation}
\mathcal{L}_{\xi}\tilde{n}^{\mu}|_{\mathcal{I}}=\alpha \tilde{n}^{\mu},
\end{equation}
where $\xi^{\mu}$ is the $BMS$ vector field and $\alpha$ is a function that satisfies $\mathcal{L}_{n}\alpha|_{\mathcal{I}}=0$.
$BMS$ translations have to preserve $\tilde{n}_{\mu}\tilde{n}^{\mu}$ on ${\mathcal{I}}$. To gain more intuition about the $BMS$ group, it is useful to consider the intrinsic metric (see Appendix \ref{app:A})
\begin{equation}
ds^2=d\xi d\xi^*=\frac{1}{4}(1+\xi \xi^*)^2(d\theta^2+\sin^2\theta d\varphi^2),
\end{equation}
where $\xi=e^{i\varphi}\cot(\theta/2)$. If one chooses the conformal factor $\Omega=\frac{2}{(1+\xi \xi^*)}$, each cut would be a 2-sphere. This coordinate system is useful for finding the symmetry group. For a sphere the holomorphic bijections have the form \cite{esposito1992mathematical}
\begin{equation}
f(\xi)=\frac{a\xi+b}{c\xi+d},
\label{eq:4.777}
\end{equation}
where $ad-bc=1$. The transformations \eqref{eq:4.777} are known as fractional linear transformations. They preserve the intrinsic metric of each cut up to a conformal factor,
\begin{equation}
d\Sigma'^2=\omega^2d\Sigma^2\quad,\quad d\Sigma^2=d\theta^2+\sin^2\theta d\varphi^2.
\end{equation}
For $(q_{\mu\nu},n^{\mu})$ one can write
\begin{equation}
(q_{\mu\nu},n^{\mu}) \rightarrow (\omega^{2}q_{\mu\nu},\omega^{-1}n^{\mu}).
\end{equation}
Thus it is possible to find the conformal factor $\omega$ by calculating $dS'$
\begin{align}
\label{eq:4.1222}
dS'&=d\xi'd\xi'^*\\
&=\frac{[a(c\xi+d)-c(a\xi+b)][a^*(c^*\xi^*+d^*)-c^*(a^*\xi^*+b^*)]}{(c\xi+d)^2(c^*\xi^*+d^*)^2}d\xi d\xi^*\notag\\
&=\frac{\overbrace{ad}^{bc+1}a^*d^*+\overbrace{cb}^{ad-1}c^*b^*-ca^*bd^*-ac^*db^*}{(c\xi+d)^2(c^*\xi^*+d^*)^2}d\xi d\xi^*\notag\\
&=\frac{d\xi d\xi^*}{(c\xi+d)^2(c^*\xi^*+d^*)^2}=\frac{(1+\xi \xi^*)^2d\Sigma^2}{4(c\xi+d)^2(c^*\xi^*+d^*)^2}.\notag
\end{align}
On the other hand
\begin{align}
\label{eq:4.1333}
dS'&=\frac{1}{4}(1+\xi'\xi'^*)^2d\Sigma'^2\\
&=\frac{[(a\xi+b)(a^*\xi^*+b^*)+(c\xi+d)(c^*\xi^*+d^*)]^2}{4(c\xi+d)^2(c^*\xi^*+d^*)^2}d\Sigma'^2\notag.
\end{align}
Equating
\eqref{eq:4.1222}
and
\eqref{eq:4.1333}
gives
\begin{equation}
d\Sigma'^2=\frac{(1+\xi\xi^*)^2}{[(a\xi+b)(a^*\xi^*+b^*)+(c\xi+d)(c^*\xi^*+d^*)]^2}d\Sigma^2,
\end{equation}
so $\omega$ is
\begin{equation}
\omega=\frac{1+\xi\xi^*}{(a\xi+b)(a^*\xi^*+b^*)+(c\xi+d)(c^*\xi^*+d^*)},
\end{equation}
where $\mathcal{L}_{n}\omega=0$ and the line element, in the direction of $\mathcal{I}$ generators, changes as
\begin{equation}
du'=\omega du \quad \rightarrow\quad u'=\omega[u+\alpha(\xi,\xi^*)].
\label{eq:4.111}
\end{equation}
The transformations \eqref{eq:4.777}--\eqref{eq:4.111} are transformations of the $BMS$ group \cite{boyle2016transformations}.
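The conformal factor $\omega$ can be verified symbolically: for the round metric $4\,d\xi d\xi^{*}/(1+\xi\xi^{*})^{2}$, the pullback under \eqref{eq:4.777} must equal $\omega^{2}$ times the original metric. A sketch (sympy; $\xi$ and $\xi^{*}$ are treated as independent variables, and $d$ is eliminated through $ad-bc=1$):

```python
# Pull back the round metric 4 dxi dxi*/(1+xi xi*)^2 and compare with omega^2.
import sympy as sp

u, v = sp.symbols('u v')                 # stand-ins for xi and xi^*
a, b, c, a2, b2, c2 = sp.symbols('a b c a2 b2 c2')
d = (1 + b*c)/a                          # enforce ad - bc = 1
d2 = (1 + b2*c2)/a2                      # same for the conjugate parameters

f = (a*u + b)/(c*u + d)                  # xi' = f(xi)
fb = (a2*v + b2)/(c2*v + d2)             # its conjugate

# ratio of the pulled-back metric to the original one, versus omega^2
lhs = sp.diff(f, u)*sp.diff(fb, v)*(1 + u*v)**2/(1 + f*fb)**2
omega = (1 + u*v)/((a*u + b)*(a2*v + b2) + (c*u + d)*(c2*v + d2))

check = sp.simplify(lhs - omega**2)
print(check)  # 0
```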
\subsection{Supertranslations}
If the component $u$ in the direction of $\mathcal{I}$ generators transforms as
\begin{equation}
\hat{u}=u+\alpha(\xi,\xi^*),
\label{eq:4.1111}
\end{equation}
then this is a supertranslation. In 1966 Newman and Penrose proposed to write $\alpha$ in
terms of spherical harmonics \cite{sachs1962asymptotic,newman1966note}
\begin{equation}
\alpha=\Sigma_{l=0}^{\infty}\Sigma_{m=-l}^{l}a_{l,m}Y_{l,m}(\theta,\varphi)
\end{equation}
where the $a_{l,m}$ are constants. If $a_{l,m}=0$ for $ l\geq2$ then
\begin{equation}
\alpha=\epsilon_0+\epsilon_1 \sin \theta \cos \varphi +\epsilon_2 \sin \theta \sin \varphi+ \epsilon_3 \cos \theta,
\end{equation}
Here the supertranslations reduce to a special case, called the translations \cite{newman1966note}.
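This reduction can be checked against sympy's built-in spherical harmonics (their normalization and Condon-Shortley phase are assumptions of the sketch): each of the four terms of $\alpha$ is a combination of $Y_{0,0}$ and $Y_{1,m}$ only,

```python
# Express the four translation terms through Y_{0,0} and Y_{1,m}.
import sympy as sp
from sympy import Ynm, sqrt, pi, sin, cos, I

th, ph = sp.symbols('theta phi', real=True)
Y = lambda l, m: Ynm(l, m, th, ph).expand(func=True)

# each low-l term of alpha as a combination of spherical harmonics
ids = [
    (sp.Integer(1),   sqrt(4*pi)*Y(0, 0)),
    (cos(th),         sqrt(4*pi/3)*Y(1, 0)),
    (sin(th)*cos(ph), sqrt(2*pi/3)*(Y(1, -1) - Y(1, 1))),
    (sin(th)*sin(ph), I*sqrt(2*pi/3)*(Y(1, -1) + Y(1, 1))),
]
ok = all(sp.simplify((lhs - rhs).rewrite(sp.exp)) == 0 for lhs, rhs in ids)
print(ok)  # True
```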
\subsection{Translations}
Translations in Minkowski space-time can be written as
\begin{equation}
\label{eq:4.122}
t'=t+a\quad,\quad x'=x+b\quad,\quad y'=y+c\quad,\quad z'=z+d.
\end{equation}
One can define a coordinate system as
\begin{align}
\label{eq:4.133}
&u=t-r,\\
&r^2=x^2+y^2+z^2,\notag\\
&\xi=e^{i\varphi}\cot{\theta/2},\notag\\
&Z=\frac{1}{1+\xi\xi^*}.\notag
\end{align}
It is possible to write $x$, $y$ and $z$ in terms of the complex conjugate variables as
\begin{equation}
\label{eq:4.21}
x=r(\xi+\xi^*)Z\quad,\quad y=-ir(\xi-\xi^*)Z\quad,\quad
z=r(\xi\xi^*-1)Z.
\end{equation}
Using \eqref{eq:4.122},
\eqref{eq:4.133}
and
\eqref{eq:4.21}
$r'$ can be obtained as
\begin{align}
r'&=\sqrt{x'^2+y'^2+z'^2}=\big((x+b)^2+(y+c)^2+(z+d)^2\big)^{1/2}\\
&=\big(r^2+2rZ[\underbrace{(b-ic)}_{B}\xi+\underbrace{(b+ic)}_{B^*}\xi^*+d(\xi\xi^*-1)]+\underbrace{b^2+c^2+d^2}_{c'}\big)^{1/2}\notag\\
&=r\sqrt{1+\frac{2Z}{r}(B\xi+B^*\xi^*+\xi\xi^*d-d)+\frac{c'}{r^2}}\notag\\
&\Rightarrow\notag\\
r'&\simeq r+Z(B\xi+B^*\xi^*+\xi\xi^*d-d)+O(1/r)\notag\\
&=r+\frac{B\xi+B^*\xi^*+\xi\xi^*d-d}{\xi\xi^*+1}+O(1/r)\notag,
\end{align}
where we used $x^2+y^2+z^2=r^2$ and $Z(1+\xi\xi^*)=1$.
Thus for $u'=t'-r'$, it is possible to write
\begin{align}
u'&=t'-r'=t+a-r-\frac{B\xi+B^*\xi^*+\xi\xi^*d-d}{(\xi\xi^*+1)}+O(1/r)\\
&=u-\frac{B\xi+B^*\xi^*+\overbrace{(-a+d)}^{C}\xi\xi^*+\overbrace{(-a-d)}^{A}}{(\xi\xi^*+1)}+O(1/r)\notag.
\end{align}
Therefore, up to a redefinition of the constants, \begin{equation}
u'=u+({A+B\xi+B^*\xi^*+C\xi\xi^*})Z+O(1/r).
\end{equation}
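The key step above, namely that a spatial translation shifts $r$ only by an angle-dependent, $r$-independent amount at leading order, can be checked symbolically. In Cartesian form, $r'=r+b\sin\theta\cos\varphi+c\sin\theta\sin\varphi+d\cos\theta+O(1/r)$; the sketch below (sympy) verifies this by completing the square:

```python
# Show r'^2 = (r + shift)^2 + (r-independent remainder).
import sympy as sp

r, th, ph = sp.symbols('r theta phi', positive=True)
b, c, d = sp.symbols('b c d', real=True)

# Cartesian form of the spherical coordinates used in the text
x = r*sp.sin(th)*sp.cos(ph)
y = r*sp.sin(th)*sp.sin(ph)
z = r*sp.cos(th)

rp2 = (x + b)**2 + (y + c)**2 + (z + d)**2   # r'^2 after the translation

# Claimed leading shift of r: the unit-vector projection of (b, c, d)
shift = b*sp.sin(th)*sp.cos(ph) + c*sp.sin(th)*sp.sin(ph) + d*sp.cos(th)

# The remainder is r-independent, hence r' = r + shift + O(1/r).
residual = sp.expand(rp2 - (r + shift)**2)
check = sp.simplify(residual - (b**2 + c**2 + d**2 - shift**2))
print(check)  # 0
```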
Thus, if one inserts the relation
\begin{equation}
\alpha=\frac{A+B\xi+B^*\xi^*+C\xi\xi^*}{1+\xi\xi^*},
\label{eq:12}
\end{equation}
in \eqref{eq:4.1111}, the supertranslations will be obtained. So the asymptotic symmetry group for such a space-time is the subgroup of $Diff(\mathcal{I})$ that preserves the fall-off of the intrinsic metric, $q_{\mu\nu}$, that is, the fall-off of $\Omega$ and its derivatives. This group is the $BMS$ group. As a comparison, the Poincaré group, $\mathfrak{P}$, is obtained by the semidirect product of the translation group,
$\mathfrak{T}$, with the Lorentz group, $\mathfrak{L}$ \cite{barnich2014notes},
\begin{equation}
\mathfrak{P}=\mathfrak{L}\rtimes \mathfrak{T}.
\end{equation}
Thus the $BMS$ group is obtained as follows
\begin{equation}
\mathfrak{B}=\mathfrak{L}\rtimes \mathfrak{S},
\end{equation}
in which the four-dimensional translation group,
$\mathfrak{T}$, is replaced by the infinite-dimensional supertranslations, $\mathfrak{S}$, whose generators are the vector fields $fn^{\mu}$ on $\mathcal{I}$, where $f$ is a scalar satisfying $\mathcal{L}_{n}f=0$. In other words, the $BMS$ group is a group that maps $\mathcal{I}^{+}$ onto itself.
\section{Asymptotic fields}
A great motivation to find asymptotic symmetries is the problem of defining conserved charges in gauge theories, like the electric charge in electrodynamics and energy-momentum in general relativity. This problem is a result of the Noether-charge puzzle for gauge symmetries. In fact, the problem is that when one tries to define a conserved charge according to Noether's first theorem, the Noether current vanishes on shell. To be more explicit, one can consider a set of fields, $\Phi^{i}$, and a Lagrangian, $L[\Phi]$. The Euler-Lagrange equations are \cite{compere2019advanced}
\begin{equation}
\frac{\delta L}{\delta \Phi^{i}}= \frac{\partial L}{\partial \Phi^{i}}-\partial_{\mu}\left(\frac{\partial L}{\partial \partial_{\mu} \Phi^{i}}\right)+\partial_{\mu} \partial_{\nu}\left(\frac{\partial L}{\partial \partial_{\mu} \partial_{\nu} \Phi^{i}}\right)+\cdots
\end{equation}
where $\forall \Phi^{i} \in \Phi=\left\{\left(\Phi_{M}^{i}\right)_{i \in I}, g_{\mu \nu}\right\}$.
The generators for this system are shown as below \cite{cotaescu2000external}
\begin{equation}
\delta_{f} \phi^{i}=R^{i}_{\alpha}(f^{\alpha}),
\end{equation}
where $f^{\alpha}$ is an arbitrary function and satisfies the following relation
\begin{equation}
\label{eq:6.1}
R^{i}_{\alpha}(f^{\alpha}) \frac{\delta L}{\delta \phi^{i}} =\partial _{\mu} j^{\mu}_{f} .
\end{equation}
In consonance with Noether's second theorem, one has
\begin{equation}
R^{+i}_{\alpha}\frac{\delta L}{\delta \phi^{i}}=0,
\end{equation}
where $R^{+i}_{\alpha}$ is defined disregarding boundary terms. $R^{+i}_{\alpha}$ is related to a local operator, $Q_{i}$, by
\begin{equation}
R^{+i}_{\alpha}(Q_{i})=\sum_{k=0}(-1)^{k} \partial_{\mu_{1}} \dots \partial_{\mu_{k}} [R^{i(\mu_{1} \dots \mu_{k})}_{\alpha} Q_{i}].
\end{equation}
Including the boundary terms, one can write
\begin{equation}
\forall Q_{i}, f^{\alpha}: Q_{i}R^{i}_{\alpha}(f^{\alpha})=f^{\alpha}R^{+i}_{\alpha}(Q_{i})+\partial_{\mu}S^{\mu i}_{\alpha}(Q_{i},f^{\alpha}) ,
\end{equation}
where $S^{\mu i}_{\alpha}$ are differential operators.
If
$Q_{i}=\frac{\delta L}{\delta \varphi^{i}}$, then \cite{barnich2008surface}
\begin{equation}
\frac{\delta L}{\delta \varphi^{i}}R^{i}_{\alpha}(f^{\alpha})= \partial_{\mu} S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha}). \label{eq:6.7}
\end{equation}
Thus $S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha})$ is a Noether current satisfying equation \eqref{eq:6.1}. This current vanishes on shell because it is linear in the equations of motion $\frac{\delta L}{\delta \varphi^{i}}$. According to \eqref{eq:6.1} and \eqref{eq:6.7}, it is possible to write
\begin{equation}
\partial_{\mu}(j_{\pm}^{\mu}-S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha}))=0.
\end{equation}
By the Poincaré lemma, one obtains the following relation on shell
\begin{equation}
j_{\pm}^{\mu}=S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha})- \partial_{\nu}k_{f}^{[\nu \mu]}\quad,\quad n>1
\end{equation}
where $k_{f}^{[\nu \mu]}$ is the superpotential and $n$ is the dimension of space-time. In one dimension, it is possible to write \cite{barnich2002covariant}
\begin{equation}
j_{f}=S^{\mu i}_{\alpha}(\frac{\delta L}{\delta \varphi^{i}},f^{\alpha})+C,
\end{equation}
where $C$ is an arbitrary constant; this is the solution of \eqref{eq:6.1}. The superpotential $k_{f}^{[\nu \mu]}$ is arbitrary because, since $\partial_{\nu} \partial_{\mu} k^{[\nu \mu]}_{f}=0$ identically, it drops out of \eqref{eq:6.1}. This means the Noether currents are not uniquely defined. In the case of exact solutions and symmetries, surface charges in the full theory are
constructed by integrating the surface charge 1-forms of the linearized theory along a
path in the space of symmetric configurations \cite{barnich2008surface}
\begin{equation}
n>1: Q[\varphi (x)]= \int_{\Sigma} j_{f} |_{\varphi (x)}= \int_{\partial \Sigma} k_{f} |_{\varphi (x)}, \label{eq:(11.6)}
\end{equation}
where $\varphi(x)$ solves the Euler-Lagrange equations. In this relation, $\Sigma$ is an $(n-1)$-dimensional spacelike surface with boundary $\partial \Sigma$, $j_{f}$ is an $(n-1)$-form current and $k_{f}$ is the $(n-2)$-form superpotential
\begin{align}
&j_{f}=j_{f}^{\mu}(d^{n-1}x)_{\mu},\\
&k_{f}=k_{f}^{[\mu \nu]}(d^{n-2}x)_{\mu \nu},\notag\\
&(d^{n-p}x)_{\mu_{1}\dots \mu_{p}} := \frac{1}{p! (n-p)!} \epsilon_{\mu_{1}\dots \mu_{p}} dx^{\mu_{(p+1)}} \dots dx^{\mu_{n}}. \notag
\end{align}
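The key algebraic fact behind this ambiguity, namely that the double divergence of any antisymmetric superpotential vanishes identically, can be checked symbolically. Below is a minimal sympy sketch in three dimensions; the component functions $a$, $b$, $c$ are arbitrary placeholders, not quantities from the text:

```python
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
coords = (x0, x1, x2)
a, b, c = (sp.Function(s)(*coords) for s in 'abc')

# Antisymmetric superpotential k^{[nu mu]} with arbitrary smooth components.
k = [[0,  a,  b],
     [-a, 0,  c],
     [-b, -c, 0]]

# Double divergence d_nu d_mu k^{[nu mu]}: a symmetric pair of partial
# derivatives contracted with an antisymmetric tensor vanishes identically.
double_div = sum(sp.diff(k[nu][mu], coords[nu], coords[mu])
                 for nu in range(3) for mu in range(3))
print(sp.simplify(double_div))  # 0
```

The cancellation is exact, term by term, which is why the superpotential never contributes to \eqref{eq:6.1}.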
While relation \eqref{eq:(11.6)} exhibits the problem, it also suggests a way to solve it. Indeed, the charge \eqref{eq:(11.6)} depends only on the boundary behavior of the superpotential, much as in electrodynamics, where the charge is expressed through the field strength $F_{\mu\nu}$ integrated over a closed surface. This suggests a general relation between gauge symmetries and $(n-2)$-forms. For asymptotically flat space-time, the superpotential has been obtained by Abbott and Deser \cite{abbott1982stability}
\begin{equation}
\mathbf{k}_{\xi}^{\mu \nu}[h ; g]=\frac{\sqrt{-g}}{8 \pi G}\left(\xi^{\mu} \nabla_{\sigma} h^{\nu \sigma}-\xi^{\mu} \nabla^{\nu} h+\xi_{\sigma} \nabla^{\nu} h^{\mu \sigma}+\frac{1}{2} h \nabla^{\nu} \xi^{\mu}-\frac{1}{2} h^{\rho \nu} \nabla_{\rho} \xi^{\mu}+\frac{1}{2} h_{\sigma}^{\nu} \nabla^{\mu} \xi^{\sigma}\right).
\end{equation}
\section{De Sitter space-time}
\checkmark First definition: If there exists a manifold $\tilde{M}$ with boundary $\mathcal{I}$ and a diffeomorphism between ${M}$ and $\tilde{M}\setminus \mathcal{I}$, then $({M},{g}_{\mu\nu})$ is weakly asymptotically de Sitter provided that \cite{ashtekar2014asymptotics}
1. A smooth function $\Omega$ exists on $\tilde{M}$ that vanishes on $\mathcal{I}$ with $n_{\mu}:=\tilde{\nabla}_{\mu}\Omega|_{\mathcal{I}}\neq0$. In addition, $ \tilde{g}_{\mu\nu}=\Omega^{2} {g}_{\mu\nu}$ on ${M}$.
2. ${g}_{\mu\nu}$ must satisfy the Einstein equation in presence of a positive cosmological constant \cite{ashtekar2014asymptotics}
\begin{equation}
{R}_{\mu\nu}-1/2{R}{g}_{\mu\nu}+\Lambda {g}_{\mu\nu}=8 \pi G {T}_{\mu\nu}\quad , \quad \Lambda>0.
\end{equation}
These two conditions are similar to the conditions defined for asymptotically flat space-time. The first condition encodes the relation between the physical and unphysical space-times, and $\nabla_{\mu} \Omega \neq 0$ assures that $\Omega$ can be used as a coordinate near $\mathcal{I}$. The second condition states that $\Omega^{-2}T_{\mu\nu}$ has a smooth limit on $\mathcal{I}$.
Fortunately, there are different possible choices for $\Omega$, and the following sections show how this freedom helps us.
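The second condition can be verified directly for the de Sitter metric itself. The following sympy sketch computes the curvature of the static patch metric $-F\,dt^2+F^{-1}dr^2+r^2d\Omega^2$ with $F=1-\Lambda r^2/3$ and checks that the vacuum Einstein equation with $\Lambda>0$ holds; the index conventions are stated in the comments:

```python
import sympy as sp

# Static-patch de Sitter metric, signature (-,+,+,+), Lambda > 0.
t, r, th, ph, L = sp.symbols('t r theta phi Lambda', positive=True)
x = [t, r, th, ph]
F = 1 - L*r**2/3
g = sp.diag(-F, 1/F, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}.
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d]))/2 for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma Gamma terms.
def ricci(b, c):
    expr = 0
    for a in range(n):
        expr += sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
        for e in range(n):
            expr += Gamma[a][a][e]*Gamma[e][b][c] - Gamma[a][c][e]*Gamma[e][b][a]
    return sp.simplify(expr)

Ric = sp.Matrix(n, n, lambda b, c: ricci(b, c))
Rs = sp.simplify(sum(ginv[a, b]*Ric[a, b] for a in range(n) for b in range(n)))

# Vacuum Einstein equation with positive cosmological constant.
Einstein = (Ric - Rs*g/2 + L*g).applyfunc(sp.simplify)
print(Einstein)  # zero matrix
```

Since $R_{\mu\nu}=\Lambda g_{\mu\nu}$ and $R=4\Lambda$ for de Sitter, every entry of the matrix above reduces to zero.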
\checkmark Second definition: If $\mathcal{I}$ is spacelike and geodesically complete, then a weakly asymptotically de Sitter space-time is asymptotically de Sitter. This definition does not fix the topology of $\mathcal{I}$, but three possible topologies have been studied by Ashtekar et al. \cite{ashtekar2014asymptotics}:
$\bullet$ If $\mathcal{I}$ has the topology $\mathbb{S}^{3}$ then the space-time will be known as globally asymptotically de Sitter (figure \ref{pic:4.1}).
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw (0,0) ellipse (0.75 and 0.3);
\draw (-0.75,0) -- (-0.75,-2);
\draw (-0.75,-2) arc (180:360:0.75 and 0.3);
\draw [dashed] (-0.75,-2) arc (180:360:0.75 and -0.3);
\draw (0.75,-2) -- (0.75,0);
\fill [yellow!40,opacity=0.5] (-0.75,0) -- (-0.75,-2) arc (180:360:0.75 and 0.3) -- (0.75,0) arc (0:180:0.75 and -0.3);
\node at (0,-0.2){$\mathcal{I}^+$};
\node at (0,-2.2){$\mathcal{I}^-$};
\end{tikzpicture}
\caption{ In this figure the upper circle is $\mathcal{I}^+$ and the lower circle is $\mathcal{I}^-$. The topology of these two boundaries is $\mathbb{S}^{3}$. \label{pic:4.1}}
\end{figure}
$\bullet$
$\mathcal{I}$ with the topology $ \mathbb{R}^{3} \simeq \mathbb{S}^{3} \setminus \left \{ p \right \}$ results in a space-time that is asymptotically de Sitter in the Poincaré patch, where $p$ is spacelike infinity, $i^0$ (figure \ref{pic:4.2}).
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw (0,0) ellipse (0.75 and 0.3);
\draw (-0.75,0) -- (-0.75,-2);
\draw (-0.75,-2) arc (180:360:0.75 and 0.3);
\draw [dashed] (-0.75,-2) arc (180:360:0.75 and -0.3);
\draw (0.75,-2) -- (0.75,0);
\fill [yellow!40,opacity=0.5] (-0.75,0) -- (-0.75,-2) arc (180:360:0.75 and 0.3) -- (0.75,0) arc (0:180:0.75 and -0.3);
\draw[red] (0, -2.3) .. controls (0.98,-1) .. (0.05,0.3);
\draw[red] (0, -2.3) .. controls (-0.99,-1) .. (0.05,0.3);
\node at (0,-0.2){$\mathcal{I}^+$};
\node at (0.1,0.4){$i^0$};
\filldraw (0.05,0.3) circle[radius=1pt];
\end{tikzpicture}
\caption{ The topology of null infinity in a space-time that is asymptotically de Sitter in the Poincaré patch is $\mathbb{R}^{3}$.\label{pic:4.2}}
\end{figure}
$\bullet$ The space-time $({M},{g}_{\mu\nu})$ is asymptotically de Sitter--Schwarzschild if $\mathcal{I}$ has the topology $\mathbb{R} \times \mathbb{S}^{2} \simeq \mathbb{S}^{3}\setminus \left \{ p_1 ,p_2 \right \}$, where $p_1$ is $i^{\pm} $ and $p_2$ is $i^0$ (figure \ref{pic:4.3}).
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw (0,0) ellipse (0.75 and 0.3);
\draw (-0.75,0) -- (-0.75,-2);
\draw (-0.75,-2) arc (180:360:0.75 and 0.3);
\draw [dashed] (-0.75,-2) arc (180:360:0.75 and -0.3);
\draw (0.75,-2) -- (0.75,0);
\fill [yellow!40,opacity=0.5] (-0.75,0) -- (-0.75,-2) arc (180:360:0.75 and 0.3) -- (0.75,0) arc (0:180:0.75 and -0.3);
\draw [dashed](0.1,0.3) -- (0.1,-1.7);
\draw[red] (0.05,- 0.3) .. controls (0.955,-0.5) .. (0.1, -1.3) ;
\draw[red] (-0.25,- 0.3) .. controls (-0.95,-0.5) .. (0.1, -1.3) ;
\draw[red] (-0.25, -2.3) .. controls (-0.95,-1.5) .. (0.1, -1.3);
\draw[red] (0.05, -2.3) .. controls (0.955,-1.5) .. (0.1, -1.3);
\draw[yshift = 0.1cm, snake] (0.05,- 0.38) -- (-0.25,- 0.38);
\draw[yshift = 0.1cm, snake] (0.05,- 2.38) -- (-0.25,- 2.38);
\node at (0.55,0.3){$\mathcal{I}^+$};
\node[gray] at (-0.1,-0.2){$r=0$};
\node at (0.1,0.4){$i^0$};
\node at (0.2,- 0.2){$i^+$};
\node at (-0.35,- 0.2){$i^-$};
\filldraw (0.05,0.3) circle[radius=1pt];
\filldraw (0.05,- 0.3) circle[radius=1pt];
\filldraw (-0.25,- 0.3) circle[radius=1pt];
\filldraw (0.05,- 2.3) circle[radius=1pt];
\filldraw (-0.25,- 2.3) circle[radius=1pt];
\draw [blue](0.05,- 2.3) --(-0.25,- 0.3);
\draw [blue] (0.05,- 0.3) -- (-0.25,- 2.3);
\end{tikzpicture}
\caption{This figure shows the dS--Schwarzschild space-time, where red lines are dS horizons and blue lines are event horizons. The topology of null infinity is $\mathbb{R} \times \mathbb{S}^{2}$. \label{pic:4.3}}
\end{figure}
All of these definitions must, of course, satisfy condition 1. The removed points representing infinity complicate further calculations, so it is convenient to work with the first topology.
\checkmark Third definition:
$({M},{g}_{\mu\nu})$ is strongly asymptotically de Sitter if it satisfies the second definition and the intrinsic metric $q_{\mu\nu}$ is conformally flat \cite{ashtekar2014asymptotics}.
These definitions depend on the choice of conformal factor, so a given space-time can fall into different classes. Generally, near infinity the metric can be written as
\begin{equation}
\tilde{g}_{\mu\nu}=-\tilde{\nabla}_{\mu}\Omega\tilde{\nabla}_{\nu}\Omega+\tilde{h}_{\mu\nu}
\label{eq:4.metric}
\end{equation}
where $\tilde{h}_{\mu\nu}$ is a function of $\Omega$.
\section{Asymptotic de Sitter space-time's symmetries}
As the intrinsic metric in the
$\Lambda>0$ case has signature $(+,+,+)$, $n^{\mu}$ is no longer tangential to $\mathcal{I}$. Thus it is not possible to follow the same route as in the $\Lambda=0$ case to find the asymptotic symmetry group. One option is to demand that the intrinsic metric be conformally flat; the symmetry group then reduces to $SO(4,1)$. However, under this restriction the Bach tensor vanishes, which discards information by fiat. In this section we review the work of Ashtekar et al. \cite{ashtekar2014asymptotics}.
Finding another framework is therefore essential. To achieve this aim, the Einstein equations have been rewritten in terms of the conformal rescaling $\Omega$ \cite{ashtekar2014asymptotics}
\begin{equation}
\tilde{R}_{\mu\nu}-1/2 \tilde{g}_{\mu\nu}\tilde{R}+ 2 \Omega^{-1}(\tilde{\nabla}_{\mu}n_{\nu}-\tilde{g}_{\mu\nu}\tilde{\nabla}^{\sigma}n_{\sigma})+3\Omega^{-2} \tilde{g}_{\mu\nu}n^{\sigma}n_{\sigma}+\Omega^{-2} \Lambda \tilde{g}_{\mu\nu}=8 \pi G{T}_{\mu\nu}, \label{eq :4.5}
\end{equation}
where $\tilde{n}_{\mu}:=\tilde{\nabla}_{\mu}\Omega$ (see Appendix \ref{ap:d}).
If one multiplies relation \eqref{eq :4.5} by $\Omega^{2}$ and applies the boundary conditions mentioned in the first definition, then
\begin{equation}
\tilde{n}^{\mu}\tilde{n}_{\mu}|_{\mathcal{I}}= - \Lambda/3 =-1/l^{2}.
\label{eq:4.39}
\end{equation}
Thus
$\tilde{n}_{\mu}$ is timelike on $\mathcal{I}$ and, as a result, $\mathcal{I}$ itself is spacelike. Because of the existing freedom to choose the conformal factor,
it is possible to choose one that satisfies the relation
$\tilde{\nabla}_{\mu}\tilde{n}^{\mu} |_{\mathcal{I}}=0$. This choice considerably simplifies the calculations. Now, multiplying \eqref{eq :4.5} by $\Omega$ and considering \eqref{eq:4.39}, the third and fourth terms of \eqref{eq :4.5} cancel, thus
\begin{equation}
\label{eq:4.65}
\tilde{\nabla}_{\mu}\tilde{n}_{\nu}|_{\mathcal{I}}=0.
\end{equation}
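The normalization \eqref{eq:4.39} can be checked in the flat-slicing (conformal time) representation of de Sitter, where the unphysical metric is simply Minkowski. A short sympy sketch, with $\Omega=\eta/l$ as an illustrative choice of conformal factor:

```python
import sympy as sp

eta, x, y, z, l = sp.symbols('eta x y z l', positive=True)
coords = [eta, x, y, z]

# Flat slicing of dS in conformal time: g = (l/eta)^2 * (Minkowski metric),
# so the unphysical metric gtilde = Omega^2 g is exactly Minkowski for
# Omega = eta/l (the sign convention for eta does not affect the norm).
Omega = eta/l
gt_inv = sp.diag(-1, 1, 1, 1)   # the flat unphysical metric is its own inverse

n_lower = [sp.diff(Omega, cc) for cc in coords]   # n_mu = d_mu Omega
norm = sum(gt_inv[aa, bb]*n_lower[aa]*n_lower[bb]
           for aa in range(4) for bb in range(4))
print(sp.simplify(norm))   # -1/l**2, i.e. -Lambda/3 since Lambda = 3/l^2
```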
After applying all these restrictions, one conformal degree of freedom is left, $\Omega \rightarrow \Omega'=\omega \Omega$ with $\tilde{n}^{\mu}\tilde{\nabla}_{\mu}\omega |_{\mathcal{I}}=0$. Using this conformal frame, $\tilde{C}_{\mu\nu \sigma \lambda}$ vanishes near $\mathcal{I}$. To show this, the Schouten tensor is introduced \cite{hollands2005comparison}
\begin{equation}
\tilde{S}_{\mu\nu}:=\tilde{R}_{\mu\nu}-(\tilde{R}/6)\tilde{g}_{\mu\nu}.
\end{equation}
The relation between Schouten tensor and its conformal transformed form is (see Appendix \ref{ap:d})
\begin{equation}
\tilde{S}_{\mu\nu}=S_{\mu\nu}-2 \Omega^{-1}\tilde{\nabla}_{\mu}\tilde{n}_{\nu}+2\Omega^{-2}\tilde{g}_{\mu\nu}\tilde{n}^{\sigma}\tilde{n}_{\sigma}. \label{eq:8.5}
\end{equation}
On the other hand, the conformally transformed Riemann tensor can be decomposed as (see Appendix \ref{ap:j})
\begin{equation}
\tilde{R}_{\mu\nu \sigma \lambda}=\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\mu[\sigma}\tilde{S}_{\lambda]\nu}-\tilde{g}_{\nu[\sigma}\tilde{S}_{\lambda]\mu}, \label{eq:5.9}
\end{equation}
multiplying this relation by $\tilde{n}^{\lambda}$
\begin{align}
\label{eq:4.nR}
&\underbrace{\tilde{n}^{\lambda}\tilde{R}_{\mu\nu \sigma \lambda}}_{=0}=\tilde{n}^{\lambda}\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\sigma[\mu}\tilde{S}_{\nu]\lambda}\tilde{n}^{\lambda}-\tilde{n}_{[\mu}\tilde{S}_{\nu]\sigma}\\
&\Rightarrow \tilde{n}_{[\mu}\tilde{S}_{\nu]\sigma}=\tilde{n}^{\lambda}\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\sigma[\mu}\tilde{S}_{\nu]\lambda}\tilde{n}^{\lambda}\notag.
\end{align}
where $\tilde{n}^{\lambda}\tilde{R}_{\mu\nu \sigma \lambda}=0$ has been used. On the other hand multiplying \eqref{eq:8.5} by $\Omega$ and taking the derivative, one has
\begin{align}
\label{eq:5.11}
\tilde{\nabla}_{[\mu}\Omega\tilde{S}_{\nu]\sigma}&=\Omega \tilde{\nabla}_{[\mu}\tilde{S}_{\nu]\sigma}+\tilde{n}_{[\mu} \tilde{S}_{\nu]\sigma}=\Omega \tilde{\nabla}_{[\mu}\tilde{S}_{\nu]\sigma}+
\tilde{n}^{\lambda}\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\sigma[\mu}\tilde{S}_{\nu]\lambda}\tilde{n}^{\lambda}\\
&=\Omega \tilde{\nabla}_{[\mu}{S}_{\nu]\sigma}+4(\tilde{\nabla}_{[\mu}\Omega )\Omega^{-2}\tilde{g}_{\nu]\sigma}\tilde{n}_{\lambda}\tilde{n}^{\lambda}+
\tilde{n}^{\lambda}\tilde{C}_{\mu\nu \sigma \lambda}+\tilde{g}_{\sigma[\mu}{S}_{\nu]\lambda}\tilde{n}^{\lambda}\notag\\
&-2\Omega^{-2}\tilde{n}_{[\mu}(\tilde{\nabla}_{\nu]}\tilde{n}_{\lambda}+2\Omega^{-2}\tilde{n}_{[\mu}(\tilde{\nabla}_{\nu]}\tilde{n}_{\lambda}-4(\tilde{\nabla}_{[\mu}\Omega )\Omega^{-2}\tilde{g}_{\nu]\sigma}\tilde{n}_{\lambda}\tilde{n}^{\lambda}
\notag\\
&-2\tilde{\nabla}_{[\mu}\tilde{\nabla}_{\nu]}\tilde{n}_{\sigma}=\Omega \tilde{\nabla}_{[\mu}{S}_{\nu]\sigma}+\tilde{n}^{\lambda}\tilde{g}_{\lambda[\mu} S_{\nu]\sigma}+\tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\lambda},\notag
\end{align}
where $\tilde{K}_{\mu\nu}\tilde{n}^{\nu}=0$ and \eqref{eq:4.nR} have been used. Additionally, the field equations in the physical space-time give \cite{ashtekar2014asymptotics}
\begin{equation}
{S}_{\mu\nu}=(\Lambda/3){g}_{\mu\nu}+8 \pi G({T}_{\mu\nu}-1/3 {T}
{g}_{\mu\nu}) \equiv \Lambda /3 {g}_{\mu\nu}+\bar{T}_{\mu\nu},
\end{equation}
where $\bar{T}_{\mu\nu}:=8 \pi G({T}_{\mu\nu}-1/3 {T}
{g}_{\mu\nu})$. This relation can be used in \eqref{eq:5.11} so
\begin{equation}
\Omega \tilde{\nabla}_{[\mu}{S}_{\nu]\sigma}+\tilde{C}_{\mu\nu\sigma\lambda}n^{\lambda}=\tilde{\nabla}_{[\mu}(\Omega\bar{T}_{\nu]\sigma})-g_{\sigma[\mu}\bar{T}_{\nu]\lambda}n^{\lambda}.
\label{eq:5.13}
\end{equation}
As $\Omega^{-1} {T}_{\mu\nu}$ has a smooth limit on $\mathcal{I}$, it is possible to conclude
\begin{equation}
\tilde{C}_{\mu\nu\sigma\lambda}n^{\lambda}|_{\mathcal{I}}=0.
\label{eq:4.722}
\end{equation}
The Weyl tensor can be divided into electric and magnetic parts
\begin{align}
\label{eq:5.24}
&\tilde{E}_{\mu\sigma}:=l^{2}\tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\nu}\tilde{n}^{\lambda},\\
&\tilde{B}_{\mu\sigma}:=l^{2} \star \tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\nu}\tilde{n}^{\lambda},\notag
\end{align}
where
\begin{equation}
\star C_{\mu\nu\sigma\rho}\equiv C_{\mu\nu\sigma\rho}+iC_{\mu\nu\sigma\rho}^{\sim},
\end{equation}
for more details see Appendix \ref{ap:j}. In this relation, $C_{\mu\nu\sigma\rho}^{\sim}$ is the right dual. Both quantities in \eqref{eq:5.24} vanish on $\mathcal{I}$ because of \eqref{eq:4.722}, so
\begin{equation}
\label{eq:4.75}
\tilde{C}_{\mu\nu\sigma\lambda}|_{\mathcal{I}}=0.
\end{equation}
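For exact de Sitter space this vanishing can be confirmed directly: the static patch metric has constant curvature, $R_{\mu\nu\sigma\lambda}=(\Lambda/3)(g_{\mu\sigma}g_{\nu\lambda}-g_{\mu\lambda}g_{\nu\sigma})$, which forces the Weyl tensor to vanish identically. A sympy sketch of this check (conventions in the comments):

```python
import sympy as sp

t, r, th, ph, L = sp.symbols('t r theta phi Lambda', positive=True)
x = [t, r, th, ph]
F = 1 - L*r**2/3
g = sp.diag(-F, 1/F, r**2, r**2*sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma^a_{bc}.
Gamma = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
                           - sp.diff(g[b, c], x[d]))/2 for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

def riem(a, b, c, d):
    # Riemann tensor R^a_{bcd} = d_c Gamma^a_{bd} - d_d Gamma^a_{bc} + ...
    expr = sp.diff(Gamma[a][b][d], x[c]) - sp.diff(Gamma[a][b][c], x[d])
    for e in range(n):
        expr += Gamma[a][c][e]*Gamma[e][b][d] - Gamma[a][d][e]*Gamma[e][b][c]
    return expr

Rup = [[[[riem(a, b, c, d) for d in range(n)] for c in range(n)]
        for b in range(n)] for a in range(n)]

# Constant curvature: R_{abcd} = (Lambda/3)(g_ac g_bd - g_ad g_bc),
# which implies C_{abcd} = 0 for de Sitter space.
ok = all(sp.simplify(sum(g[a, e]*Rup[e][b][c][d] for e in range(n))
                     - (L/3)*(g[a, c]*g[b, d] - g[a, d]*g[b, c])) == 0
         for a in range(n) for b in range(n)
         for c in range(n) for d in range(n))
print(ok)  # True
```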
\section{Asymptotic expansion}
As shown in the previous section, taking the intrinsic metric to be conformally flat is not a good way to define the asymptotic symmetry group, as it discards information by fiat. So it is necessary to find another method. Here the Fefferman-Graham framework is presented. As said before, the metric near $\mathcal{I}$ can be written as \eqref{eq:4.metric}. To find $\tilde{h}_{\mu\nu}$, one can start with its Lie derivative \cite{fefferman1985conformal}
\begin{equation}
\mathcal{L}_{n}\tilde{h}_{\mu\nu}=\underbrace{\tilde{n}^{\sigma}\tilde{\nabla}_{\sigma}\tilde{h}_{\mu\nu}}_{=0}+\tilde{h}_{\mu\sigma}\underbrace{\tilde{\nabla}_{\nu}\tilde{n}^{\sigma}}_{\tilde{K}^{\sigma}_{\nu}}+\tilde{h}_{\nu\sigma}\underbrace{\tilde{\nabla}_{\mu}\tilde{n}^{\sigma}}_{\tilde{K}^{\sigma}_{\mu}}=2\tilde{K}_{\mu\nu}.
\end{equation}
On the other hand according to
Taylor series
one has
\begin{equation}
\tilde{h}_{\mu\nu}=\sum_{j=0}^{\infty}(\tilde{h}_{\mu\nu})_{j}\Omega^j
\label{eq:4.88}
\end{equation}
where $(\tilde{h}_{\mu\nu})_{0}$ represents the intrinsic metric of $\mathcal{I}$. The Lie derivative of the extrinsic curvature can also be obtained as follows
\begin{align}
\label{eq:4.89}
\mathcal{L}_{n}\tilde{K}_{\mu\nu}&=\tilde{n}^{\sigma}\underbrace{\tilde{\nabla}_{\sigma}\tilde{K}_{\mu\nu}}_{\tilde{n}^{\sigma}\tilde{\nabla}_{\sigma}\tilde{\nabla}_{\mu}\tilde{n}_{\nu}}-\underbrace{\tilde{K}_{\mu\sigma}}_{\tilde{\nabla}_{\mu}\tilde{n}_{\sigma}}\tilde{\nabla}_{\nu}\tilde{n}^{\sigma}-\underbrace{\tilde{K}_{\nu\sigma}}_{\tilde{\nabla}_{\mu}\tilde{n}_{\sigma}}\tilde{\nabla}_{\mu}\tilde{n}^{\sigma}\\
&=\tilde{n}^{\sigma}\tilde{\nabla}_{\sigma}\tilde{\nabla}_{\mu}\tilde{n}_{\nu}-\tilde{n}^{\sigma}\tilde{\nabla}_{\mu}\tilde{\nabla}_{\sigma}\tilde{n}_{\nu}+\tilde{\nabla}_{\mu}\underbrace{(n^{\sigma}\tilde{\nabla}_{\sigma}\tilde{n}_{\nu})}_{=0}-\tilde{\nabla}_{\mu}\tilde{n}_{\sigma}\tilde{\nabla}_{\nu}\tilde{n}^{\sigma}\notag\\
&=\tilde{n}^{\sigma}(\tilde{\nabla}_{\sigma}\tilde{\nabla}_{\mu}-\tilde{\nabla}_{\mu}\tilde{\nabla}_{\sigma})\tilde{n}_{\nu}-\tilde{\nabla}_{\mu}\tilde{n}_{\sigma}\tilde{\nabla}_{\nu}\tilde{n}^{\sigma}\notag\\
&=\tilde{n}^{\sigma}\tilde{n}^{\rho}\tilde{R}_{\sigma\mu\rho\nu}-\tilde{K}_{\mu\sigma}\tilde{K}_{\nu}^{\sigma}.
\end{align}
Multiplying this relation by $\tilde{g}^{\mu\sigma}$, one obtains
\begin{equation}
\mathcal{L}_{n} \tilde{K}_{\nu}^{\mu}={\mathcal{R}}_{\nu}^{\mu}+\tilde{K}\tilde{K}_{\nu}^{\mu}-\Omega^{-1}\tilde{K}\tilde{h}_{\nu}^{\mu}-4\Omega^{-1}\tilde{K}_{\nu}^{\mu}.
\end{equation}
Now, expanding the Einstein tensor, one has (see Appendix \ref{ap:d})
\begin{equation}
\label{eq:4.91}
\tilde{G}_{\mu\nu}|_{\mathcal{I}}=2 \Omega^{-1}(\tilde{K}_{\mu\nu}-\tilde{g}_{\mu\nu}\tilde{K}).
\end{equation}
As the covariant divergence of the Einstein tensor vanishes, it is possible to write
\begin{equation}
\tilde{h}^{\nu}_{\mu}\tilde{G}_{\nu\sigma}n^{\sigma}=0=\tilde{h}^{\nu}_{\mu}\tilde{R}_{\nu\sigma}n^{\sigma}=D_{\nu}K^{\nu}_{\mu}-D_{\mu}K.
\end{equation}
Also from \eqref{eq:4.91}, we know that
\begin{equation}
\label{eq:4.93}
\tilde{\mathcal{R}}+\tilde{K}^2-\tilde{K}_{\mu\nu}\tilde{K}^{\mu\nu}=4\Omega^{-1}\tilde{K}.
\end{equation}
For more details see Appendix \ref{ap:d}. It is also useful to define the traceless part of the extrinsic curvature $\tilde{K}_{\mu\nu}$
\begin{equation}
\tilde{P}^{\mu}_{\nu}=\tilde{K}^{\mu}_{\nu}-\frac{\tilde{h}^{\mu}_{\nu}}{3}\tilde{K}.
\end{equation}
The other quantities entering the calculation are expanded in the same way
\begin{align}
&\tilde{P}^{\mu}_{\nu}=\sum_{j=0}^{\infty}(\tilde{P}^{\mu}_{\nu})_{j}\Omega^j\quad,\quad \tilde{K}=\sum_{j=0}^{\infty}(\tilde{K})_{j}\Omega^j,\\
&\tilde{R}_{\mu\nu}=\sum_{j=0}^{\infty}(\tilde{R}_{\mu\nu})_{j}\Omega^j\quad,\quad \tilde{R}=\sum_{j=0}^{\infty}(\tilde{R})_{j}\Omega^j.\notag
\end{align}
If the Lie derivative is abbreviated as
\begin{equation}
\mathcal{L}_{n}\equiv\frac{d}{d\Omega},
\end{equation}
then
\begin{align}
\label{eq:4.92}
&\frac{d}{d\Omega}\tilde{P}^{\mu}_{\nu}=[\tilde{\mathcal{R}}^{\mu}_{\nu}-\frac{\tilde{h}^{\mu}_{\nu}}{3}\tilde{\mathcal{R}}]-\tilde{K}\tilde{P}^{\mu}_{\nu}+2\Omega^{-1}\tilde{P}^{\mu}_{\nu},\\
&\frac{d}{d\Omega}\tilde{K}=-\tilde{\mathcal{R}}-\tilde{K}^2+5\Omega^{-1}\tilde{K},\notag\\
&\frac{d}{d\Omega}\tilde{h}^{\mu}_{\nu}=2\tilde{h}_{\nu \sigma}\tilde{K}^{\sigma}_{\mu}.\notag
\end{align}
Accordingly it is possible to write higher orders as \cite{jager2008conserved}
\begin{align}
\label{eq:4.97}
&(2+j)(\tilde{P}^{\mu}_{\nu})_{j}=[(\tilde{\mathcal{R}}^{\mu}_{\nu})_{j-1}-\frac{\tilde{h}^{\mu}_{\nu}}{3}(\tilde{\mathcal{R}})_{j-1}]-\sum_{m=0}^{j-1}(\tilde{K})_{m}(\tilde{P}^{\mu}_{\nu})_{j-1-m},\\
&(5+j)(\tilde{K})_{j}=-(\tilde{\mathcal{R}})_{j-1}-\sum_{m=0}^{j-1}(\tilde{K})_{m}(\tilde{K})_{j-1-m},\notag\\
&j(\tilde{h}_{\mu\nu})_{j}=2\sum^{j-1}_{m=0}[(\tilde{h}_{\nu\sigma})_{m}(\tilde{P}^{\sigma}_{\mu})_{j-1-m}+\frac{1}{3}(\tilde{h}_{\mu\nu})_{m}(\tilde{K})_{j-1-m}],\notag
\end{align}
which are the Fefferman-Graham relations for Einstein's equations. In \eqref{eq:4.97}, $j$ should be smaller than $d-2$.
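The convolution structure of the recursion \eqref{eq:4.97} can be sketched in code. In the following schematic driver all quantities are scalar stand-ins for the actual tensors on $\mathcal{I}$, and the curvature coefficients are abstracted into caller-supplied callbacks; all names are illustrative only, not from any library:

```python
# Schematic driver for the Fefferman-Graham recursion. All quantities are
# scalar stand-ins for the tensors on I; the geometric input (the curvature
# coefficients of the expanded metric) is abstracted into callbacks.

def fg_expansion(h0, curvature_R, curvature_Rtrace, jmax):
    """Return lists P[j], K[j], h[j] for j = 0..jmax (valid for jmax < d-2)."""
    P = [0.0]   # traceless part of the extrinsic curvature: P_0 = 0 on I
    K = [0.0]   # trace part: K_0 = 0 in the adapted conformal frame
    h = [h0]    # h_0 stands in for the intrinsic metric of I
    for j in range(1, jmax + 1):
        R_j = curvature_R(h, j - 1)        # stand-in for (R^mu_nu - h R/3)_{j-1}
        P.append((R_j - sum(K[m]*P[j - 1 - m] for m in range(j))) / (2 + j))
        Rt_j = curvature_Rtrace(h, j - 1)  # stand-in for (R)_{j-1}
        K.append((-Rt_j - sum(K[m]*K[j - 1 - m] for m in range(j))) / (5 + j))
        h.append(2*sum(h[m]*P[j - 1 - m] + h[m]*K[j - 1 - m]/3
                       for m in range(j)) / j)
    return P, K, h

# With constant curvature stand-ins the first coefficients come out as
# P = [0, 1/3, 1/4], K = [0, -1/6, -1/7], h = [1, 0, 5/18].
P, K, h = fg_expansion(1.0, lambda h, j: 1.0, lambda h, j: 1.0, 2)
print(P, K, h)
```

Only the bookkeeping of \eqref{eq:4.97} is captured here; the actual tensor contractions must be supplied by the callbacks.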
Since here $d=4$, to find $\tilde{h}_{\mu\nu}$ one writes \eqref{eq:4.88} up to $j=2$,
\begin{equation}
\tilde{h}_{\mu\nu}=(\tilde{h}_{\mu\nu})_0\Omega^{0}+(\tilde{h}_{\mu\nu})_1\Omega^{1}+(\tilde{h}_{\mu\nu})_2\Omega^{2}
\end{equation}
According to \eqref{eq:4.97} one has
\begin{align}
\tilde{h}_{\mu\nu}&=(\tilde{h}_{\mu\nu})_{0}+1/2 \Omega^2(\tilde{h}_{\mu\nu})_{0}+3/2\Omega^2\mathcal{R}_{\mu\nu}-3/2\Omega^2KK_{\mu\nu}+3/2\Omega K(\tilde{h}_{\mu\nu})_{0}\\
&-6\Omega K_{\mu\nu}
-3/2\Omega^2K^{\sigma}_{\mu}K_{\sigma\nu}
-3/2\Omega K_{\mu\nu}\notag
\end{align}
On the other hand, for the electric part of the Weyl tensor one has
\begin{equation}
\label{eq:4.100}
\tilde{E}_{\mu\sigma}=\frac{1}{d-3}\Omega^{3-d}(\tilde{C}_{\mu\nu\sigma\rho}\tilde{n}^{\nu}\tilde{n}^{\rho})
\end{equation}
where
$d=4$. Recalling relation \eqref{eq:5.9}, the conformally transformed Schouten tensor on $\Omega=\mathrm{constant}$ surfaces is
\begin{equation}
\tilde{S}_{\mu\nu}=-2\Omega^{-1}\tilde{\nabla}_{\mu}\tilde{n}_{\nu}.
\end{equation}
Contracting the Riemann tensor with $n^{\nu}n^{\rho}$ and keeping the other indices free, it is possible to write, using \eqref{eq:5.9},
\begin{equation}
\label{4.102}
\tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\sigma}\tilde{R}_{\iota\nu\chi\rho}n^{\nu}n^{\rho}=\tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\sigma}\tilde{C}_{\iota\nu\chi\rho}n^{\nu}n^{\rho}+\Omega^{-1}{K}_{\mu\sigma}
\end{equation}
where \eqref{eq:4.39} with $l=1$ has been used.
Also for the Riemann tensor one can write
\begin{equation}
\label{4.103}
\tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\sigma}\tilde{R}_{\iota\nu\chi\rho}n^{\nu}n^{\rho}= \tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\sigma}n^{\nu}(\tilde{\nabla}_{\iota}\tilde{\nabla}_{\nu}-\tilde{\nabla}_{\nu}\tilde{\nabla}_{\iota})n_{\chi}=\mathcal{L}_{n}K_{\mu\sigma}+K_{\mu\nu}K^{\nu}_{\sigma},
\end{equation}
using
\eqref{4.102}
and
\eqref{4.103} one has
\begin{equation}
\tilde{C}_{\mu\nu\sigma\rho}{n}^{\nu}{n}^{\rho}=\mathcal{L}_{n}{K}_{\mu\sigma}+{K}_{\mu}^{\nu}{K}_{\nu\sigma}+\Omega^{-1}{K}_{\mu\sigma}.
\end{equation}
Using this relation in \eqref{eq:4.100} one has
\begin{equation}
\label{eq:4.106}
\tilde{E}_{\mu\sigma}=\Omega^{-1}(\mathcal{L}_{n}{K}_{\mu\sigma}+{K}_{\mu}^{\nu}{K}_{\nu\sigma}+\Omega^{-1}{K}_{\mu\sigma})
\end{equation}
Putting these relations together, the unphysical metric in four dimensions reads
\begin{equation}
\label{eq:4.101}
\tilde{g}_{\mu\nu}=-\tilde{\nabla}_{\mu}\Omega\tilde{\nabla}_{\nu}\Omega+(1+\frac{1}{2}\Omega^2)(\tilde{h}_{\mu\nu})_{0}-\frac{3}{2}\Omega^3\tilde{E}_{\mu\nu}+O(\Omega^4)
\end{equation}
where $(\tilde{h}_{\mu\nu})_{0}$ is the metric of a three-sphere. Unfortunately the Killing equation for this metric is not exactly solvable.
\section{Finding the intrinsic metric according to the tetrad formalism}
To simplify the following calculations, de Sitter line element is written as
\begin{equation}
ds^{2}=-F(r)dt^2+F(r)^{-1}dr^2+r^2d\Omega^2
\end{equation}
where $F(r)=1-\Lambda r^2/3$. Using retarded null coordinates one has
\begin{equation}
ds^2=-F(r)du^2-2dudr+r^2d\Omega^2
\end{equation}
where $u=t-r^*$ with $dr^*=dr/F(r)$; the matrix representation of the inverse metric is then
\begin{equation}
g^{\mu\nu}=
\begin{bmatrix}
0&-1&0&0\\
-1&F(r)&0&0\\
0&0&r^{-2}&0\\
0&0&0&r^{-2}\csc^2\theta
\end{bmatrix}
\end{equation}
Accordingly a null tetrad can be defined \cite{lopez2006absorption,saw2016mass,saw2017behavior}
\begin{align}
\label{eq:4.11111}
& {l}^{\mu}=[0,1,0,0],\\
& {n}^{\mu}=[1,-F(r)/2,0,0]\notag,\\
& {m}^{\mu}=[0,0,\frac{1}{\sqrt{2}r},\frac{i}{\sqrt{2}r}\csc\theta],\notag\\
&{\bar{m}}^{\mu}=[0,0,\frac{1}{\sqrt{2}r},-\frac{i}{\sqrt{2}r}\csc\theta].\notag
\end{align}
The inverse metric is recovered from the completeness relation $g^{\mu\nu}=-l^{\mu}n^{\nu}-l^{\nu}n^{\mu}+m^{\mu}\bar{m}^{\nu}+m^{\nu}\bar{m}^{\mu}$, with the overall signs appropriate to the mostly-plus signature used here. For the $r$-independent terms in ${{m}}^{\mu}$
and ${\bar{m}}^{\mu}$ one has
\begin{align}
&\xi^{\theta(0)}=\frac{1}{\sqrt{2}},\\
&\xi^{\varphi(0)}=\frac{i}{\sqrt{2}}\csc\theta\notag
\end{align}
and for higher order
\begin{align}
\xi^{\mu}=\xi^{\mu(0)}r^{-1}+O(r^{-2}).
\end{align}
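The completeness relation can be verified by direct matrix algebra; a sympy sketch, using the mostly-plus conventions adopted here (so the relation carries overall signs, with $n^{\mu}=(1,-F/2,0,0)$):

```python
import sympy as sp

r, th, L = sp.symbols('r theta Lambda', positive=True)
F = 1 - L*r**2/3

# Null tetrad in (u, r, theta, phi); mostly-plus signature, so l.n = -1.
l_vec = sp.Matrix([0, 1, 0, 0])
n_vec = sp.Matrix([1, -F/2, 0, 0])
m_vec = sp.Matrix([0, 0, 1/(sp.sqrt(2)*r), sp.I/(sp.sqrt(2)*r*sp.sin(th))])
mb_vec = m_vec.conjugate()

# Completeness relation with mostly-plus signs.
g_inv = (-l_vec*n_vec.T - n_vec*l_vec.T
         + m_vec*mb_vec.T + mb_vec*m_vec.T)

expected = sp.Matrix([[0, -1, 0, 0],
                      [-1, F, 0, 0],
                      [0, 0, 1/r**2, 0],
                      [0, 0, 0, 1/(r**2*sp.sin(th)**2)]])
print((g_inv - expected).applyfunc(sp.simplify))  # zero matrix
```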
Derivative operators can be defined according to \eqref{eq:4.11111} \cite{saw2016mass}
\begin{align}
&D=l^{\mu}\nabla_{\mu}=\frac{\partial}{\partial r},\\
&D'=n^{\mu}\nabla_{\mu}=\frac{\partial}{\partial u}-1/2(1-\Lambda r^2/3)\frac{\partial}{\partial r},\notag\\
&\delta=m^{\mu}\nabla_{\mu}=\frac{1}{\sqrt{2}r}\frac{\partial}{\partial\theta}+\frac{i}{\sqrt{2}r}\csc\theta\frac{\partial}{\partial\varphi},\notag\\
&\delta'=\bar{m}^{\mu}\nabla_{\mu}=\frac{1}{\sqrt{2}r}\frac{\partial}{\partial\theta}-\frac{i}{\sqrt{2}r}\csc\theta\frac{\partial}{\partial\varphi}\notag
\end{align}
and their operations on
$u,r,\theta,\varphi$
are
\begin{align}
&Du=0\quad,\quad D'u=1\quad,\quad \delta u=0\quad,\quad \delta' u=0,\\
&Dr=1\quad,\quad D'r=\Lambda/6r^2-1/2\quad,\quad \delta r=0\quad,\quad \delta' r=0\notag,\\
&D\theta=0\quad,\quad D'\theta=0\quad,\quad \delta\theta=1/\sqrt{2}r\quad,\quad \delta'\theta=1/\sqrt{2}r,\notag\\
&D\varphi=0\quad,\quad D'\varphi=0\quad, \quad \delta\varphi=i/\sqrt{2}r\csc \theta\quad, \quad \delta'\varphi=-i/\sqrt{2}r\csc \theta.\notag
\end{align}
Also for second order derivatives one has
\begin{align}
\label{eq:4.117}
&DD'r=\Lambda r/3\quad,\quad D\delta\theta=-\frac{1}{\sqrt{2}r^2},\\
&D\delta\varphi=-\frac{i\csc\theta}{\sqrt{2}r^2}\quad,\quad D'\delta\theta=\frac{1}{2\sqrt{2}r^2}-\frac{\Lambda}{6\sqrt{2}}\notag,\\
&D'\delta\varphi=\Big(\frac{1}{2\sqrt{2}r^2}-\frac{\Lambda}{6\sqrt{2}}\Big)i\csc\theta\quad,\quad \delta'\delta\varphi=-\frac{i}{2r^2}\csc\theta\cot\theta\notag,\\
&\delta\delta'\varphi=\frac{i}{2r^2}\csc\theta\cot\theta.\notag
\end{align}
As the calculations have been done to $O(r^{-2})$, only terms with $\Lambda$ become important. So $\xi^{\mu}$ can be rewritten as follows
\begin{align}
&\xi^{\theta(0)}=\frac{1}{\sqrt{2}}e^{\Lambda f(u,\theta)},\\
&\xi^{\varphi(0)}=\frac{1}{\sqrt{2}}e^{\Lambda f(u,\theta)}\csc\theta\notag.
\end{align}
Hence, for the expanded metric on the 2-sphere, one has
\begin{equation}
\label{eq:11}
g_1=e^{\Lambda f(u,\theta)}d\theta^2+e^{\Lambda f(u,\theta)}\csc\theta d\varphi^2.
\end{equation}
Also, using the first relation in \eqref{eq:4.117}, one has
\begin{equation}
g=\frac{\Lambda}{3}r^2du^2+r^2g_{1}+O(r^{-2}).
\end{equation}
Considering the conformal factor
$ \Omega=r^{-1}$
one can obtain
\begin{equation}
\label{eq:13}
\tilde{g}=\frac{\Lambda}{3}du^2+g_{1}+O(r^{-1}).
\end{equation}
\subsection{Killing vector fields for the intrinsic metric}
Unfortunately it is not possible to solve the Killing equations analytically for the metric \eqref{eq:13}. Instead, one can write the Killing equation for the spherically symmetric part of the metric, shown in relation \eqref{eq:11}. First
one takes an arbitrary Killing field $X^{\mu}=X^{\theta}\partial_{\theta}+X^{\varphi}\partial_{\varphi}$ and then writes the Killing equation for it \cite{saw2017mass}
\begin{equation}
\mathcal{L}_{X}g_{\mu\nu}=X^{\sigma}\partial_{\sigma}g_{\mu\nu}+g_{\sigma\nu}\partial_{\mu}X^{\sigma}+g_{\mu\sigma}\partial_{\nu}X^{\sigma}=0.
\end{equation}
This relation gives three independent equations
\begin{align}
&\theta\theta:\partial_{\theta}(X^{\theta}e^{\Lambda f})=0,\\
&\varphi\varphi : \partial_{\varphi}X^{\varphi}+X^{\theta}\cot\theta=0,\notag\\
& \theta\varphi: \partial_{\varphi}X^{\theta}+\partial_{\theta}X^{\varphi}e^{-4\Lambda f}\sin^2\theta=0.\notag
\end{align}
These equations yield
\begin{align}
\label{eq:15}
&X^{\theta}(\theta,\varphi)=\frac{dA(\varphi)}{d\varphi}e^{-\Lambda f(\theta)},\\
& X^{\varphi}(\theta,\varphi)=2\sqrt{2}\alpha(\theta)A(\varphi)-X(\theta),\notag\\
&\frac{d^2A(\varphi)}{d\varphi^2}+(2\sqrt{2}e^{-3\Lambda f(\theta)}\frac{d\alpha (\theta)}{d\theta}\sin^2\theta)A(\varphi)=e^{-3\Lambda f(\theta)}\frac{dX(\theta)}{d \theta}\sin^2\theta. \notag
\end{align}
where $\alpha(\theta)=-\frac{1}{2\sqrt{2}\sin\theta}\frac{d}{d\theta}(e^{-\Lambda f(\theta)}\sin\theta)$ is the spin coefficient. Let $\omega(\theta)^2=2\sqrt{2}e^{-3\Lambda f(\theta)}\frac{d\alpha (\theta)}{d\theta}\sin^2\theta$. The last equation in \eqref{eq:15} describes a harmonic oscillator with frequency $\omega(\theta)$. The function $A(\varphi)$ would then depend on both $\theta$ and $\varphi$. Since $A(\varphi)$ must be independent of $\theta$, the only possibility is that $\omega(\theta)$ is constant, and $X^{\theta}$ must vanish as well.
Therefore, there is just one single Killing vector on this axisymmetric topological 2-sphere
\begin{equation}
X^{\mu}=\partial_{\varphi}
\end{equation}
Thus one does not have the whole $SO(3)$ group on $\mathcal{I}$.
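That $\partial_{\varphi}$ is indeed a Killing field of any axisymmetric 2-metric of this type can be confirmed symbolically. In the sketch below a representative $\varphi$-independent metric stands in for $g_1$ (the exact radial profile does not matter for this check):

```python
import sympy as sp

th, ph, L = sp.symbols('theta varphi Lambda', positive=True)
f = sp.Function('f')(th)   # profile f(u, theta) at a fixed cut u = const

# Representative axisymmetric 2-metric (phi-independent components).
g = sp.diag(sp.exp(L*f), sp.exp(L*f)*sp.sin(th)**2)
coords = [th, ph]
X = [0, 1]                 # candidate Killing field X = d/d varphi

# (L_X g)_{mn} = X^s d_s g_{mn} + g_{sn} d_m X^s + g_{ms} d_n X^s
def lie(mu, nu):
    expr = sum(X[s]*sp.diff(g[mu, nu], coords[s]) for s in range(2))
    expr += sum(g[s, nu]*sp.diff(X[s], coords[mu]) for s in range(2))
    expr += sum(g[mu, s]*sp.diff(X[s], coords[nu]) for s in range(2))
    return sp.simplify(expr)

print([lie(m, n) for m in range(2) for n in range(2)])  # [0, 0, 0, 0]
```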
\begin{appendices}
\chapter{Compactification\label{app:A}}
In this appendix one can see how a manifold maps to a compactified manifold. As an example, consider the map of $M=\mathbb{R}^2$ to $\tilde{M}=\mathbb{S}^2 \backslash \{(0,0,1)\}$, where $(0,0,1)$ is the north pole of the sphere. It is possible, as figure \ref{fig:A.1} shows, to map a set of points on the plane onto the sphere. Considering the north pole of the sphere, N, as a fixed point and choosing a point A on the plane, one can draw the line through these two points. This line passes through a point B on the sphere, and A maps onto B. The only point of the sphere to which no point of the plane is attributed is the north pole: the lines through $(0,0,1)$ that are parallel to the plane never cross it. On the other hand, the farther the point lies on the plane, the smaller the angle between the line and the plane becomes. Hence it is natural to consider $(0,0,1)$ as the point that represents all the points at infinity of the plane $\mathbb{R}^2$. If one foliates this sphere into circles, each sheet has the metric
\begin{equation}
ds^2=dx^2+dy^2,
\end{equation}
with the relation
$x^{2}+y^{2}=r^2$.
It is possible to write the metric in the form
\begin{equation}
ds^2=d\xi d\xi^{*},
\end{equation}
where $\xi=x+iy$
and
$\xi^{*}=x-iy$. It is helpful to shift the origin of the coordinate system as
$x'=x, y'=y-r, z'=z$. Thus, as one can see in figure \ref{fig:A.2}, $\cot(\theta/2)$ can be used to label the position of points on the circle, so one defines $Z=\cot(\theta/2)$. In this coordinate system the metric takes the form
\begin{equation}
ds^2=\frac{4}{(1+Z^2)^2}dZ^2.
\end{equation}
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\draw (0,0) circle (1cm);
\filldraw[fill=green!20,draw=green!50!black](0,0) ellipse (1cm and 0.2cm);
\draw (-1.8,-0.5) to (-1,0.5);
\draw (-1,0.5) to (1.8,0.5);
\draw (1.8,0.5) to (1,-0.5);
\draw (1,-0.5) to (-1.8,-0.5);
\draw (0.2,1.36) to (-1,-0.76);
\filldraw
(0,1) circle (1pt) node[align=center, above]{N};
\filldraw
(-0.8,-0.4) circle (1pt) [color=red] node[align=center, above]{A};
\filldraw
(-0.34,0.4) circle(1pt) [color=blue] node [ align=center, above]{B};
\filldraw
(0,0)circle(1pt) [color=gray] node [align=center, above]{O};
\end{tikzpicture}
\caption{Mapping $\mathbb{R}^2$ onto $\mathbb{S}^2$.}
\label{fig:A.1}
\end{figure}
It is possible to find similar coordinates for a sphere too. First one has to set
$z'=z-r$,
$y'=y$
and
$x'=x$. Then the position of points on the sphere can be shown by $\xi=e^{i\varphi}\cot(\theta/2)$ (figure \ref{fig:A.3}). The metric can be written as
\begin{equation}
ds^2=d\xi d\xi^*=1/4(1+\xi \xi^*)^2(d\theta^2+\sin^2\theta d\varphi^2).
\end{equation}
By choosing the conformal factor $\Omega=\frac{2}{(1+\xi \xi^*)}$ the unphysical metric becomes a 2-sphere.
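One can also check the conformal relation above numerically. In the short sketch below (an illustrative check of ours), the coefficients of $d\theta^2$ and $d\varphi^2$ in the pulled-back flat metric $|d\xi|^2$ are compared with $\frac{1}{4}(1+\xi\xi^*)^2$ and $\frac{1}{4}(1+\xi\xi^*)^2\sin^2\theta$:

```python
import numpy as np

# Check ds^2 = d(xi) d(xi)* = 1/4 (1 + xi xi*)^2 (d(theta)^2 + sin^2(theta) d(phi)^2)
# at a sample point, using central finite differences for the partial derivatives.
def xi(theta, phi):
    return np.exp(1j * phi) / np.tan(theta / 2.0)

theta, phi, eps = 1.1, 0.7, 1e-6
dxi_dtheta = (xi(theta + eps, phi) - xi(theta - eps, phi)) / (2 * eps)
dxi_dphi = (xi(theta, phi + eps) - xi(theta, phi - eps)) / (2 * eps)

conf = 0.25 * (1.0 + abs(xi(theta, phi)) ** 2) ** 2
g_thth = abs(dxi_dtheta) ** 2         # coefficient of d(theta)^2
g_phph = abs(dxi_dphi) ** 2           # coefficient of d(phi)^2
print(g_thth - conf, g_phph - conf * np.sin(theta) ** 2)   # both ~ 0
```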
Now consider the metric of $\mathbb{R}^{d+1}$
\begin{equation}\label{eq:A,1}
ds^2=-dt^2+dr^2+r^2d\Omega^2_{d-1},
\end{equation}
where $d\Omega^2_{d-1}$ is the metric of the unit $(d-1)$-sphere. It is useful to define null coordinates as
\begin{eqnarray}
v=t+r\quad,\quad u=t-r.
\end{eqnarray}
Thus the metric \eqref{eq:A,1} takes the form
\begin{equation}
ds^2=-dudv+\frac{1}{4}(v-u)^2 d\Omega^2_{d-1}.
\end{equation}
One can foliate the space into $u=\mathrm{constant}$ or $v=\mathrm{constant}$ surfaces and multiply each of these cuts by the conformal factor $\Omega=\frac{2}{v-u}$. This is exactly what one does to find the metric near $\mathcal{I}$ for asymptotically flat space-times.
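The coordinate change to $u$ and $v$ is simple enough to verify symbolically; the following sketch (ours, with the differentials $du$, $dv$ treated as formal symbols) confirms that $-dt^2+dr^2=-du\,dv$ and $r^2=\frac{1}{4}(v-u)^2$:

```python
import sympy as sp

# With v = t + r and u = t - r one has dt = (dv + du)/2 and dr = (dv - du)/2.
u, v, du, dv = sp.symbols('u v du dv')
dt = (dv + du) / 2
dr = (dv - du) / 2
metric_check = sp.expand(-dt**2 + dr**2 + du * dv)   # should reduce to 0
r = (v - u) / 2
radius_check = sp.expand(r**2 - (v - u)**2 / 4)      # should reduce to 0
print(metric_check, radius_check)
```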
\begin{figure}
\centering
\begin{tikzpicture}[scale=3]
\filldraw[fill=green!50,draw=black!50!] (0,0mm) -- (1.73mm,1mm) arc (0:83:2mm) -- cycle ;
\node at (0.15, 0.35)[green] (a) {$\theta$};
\filldraw[fill=red!40,draw=black!50!] (0,-10mm) -- (1mm,-8.27mm) arc (0:62:2mm) -- cycle ;
\node at (0.1, -0.6)[red] (b) {$\frac{\theta}{2}$};
\draw[->] (-1.25,0) -- (1.25,0) coordinate (x axis) node[left, above]{X};
\draw[->] (0,-1.25) -- (0,1.25) coordinate (y axis)node[left]{Y};
\draw (0,0) circle (1cm);
\draw[very thick,dashed,purple] (0,0) -- (-90:1cm ) ;
\draw [->] [very thick,blue] (0,0) -- (30:1cm) ;
\node at (0.4,0.3)[blue] {$\vec{r}$};
\draw [very thick,dashed,orange] (0,-1) -- (30:1cm);
\node at (0.95,0.5)[orange]{$P$};
\end{tikzpicture}
\caption{This illustration shows how $\cot\theta/2$ can be used to indicate the position of $P$. }
\label{fig:A.2}
\end{figure}
\tdplotsetmaincoords{60}{110}
\pgfmathsetmacro{\rvec}{.8}
\pgfmathsetmacro{\thetavec}{30}
\pgfmathsetmacro{\phivec}{60}
\begin{figure}
\centering
\begin{tikzpicture}[scale=5,tdplot_main_coords]
\coordinate (O) at (0,0,0);
\draw[thick,->] (0,0,0) -- (1,0,0) node[anchor=north east]{$x$};
\draw[thick,->] (0,0,0) -- (0,1,0) node[anchor=north west]{$y$};
\draw[thick,->] (0,0,0) -- (0,0,1) node[anchor=south]{$z$};
\tdplotsetcoord{P}{\rvec}{\thetavec}{\phivec}
\draw[-stealth,color=red] (O) -- (P) node[above right] {$P$} ;
\draw[dashed, color=red] (O) -- (Pxy);
\draw[dashed, color=red] (P) -- (Pxy);
\tdplotdrawarc{(O)}{0.2}{0}{\phivec}{anchor=north}{$\phi$}
\tdplotsetthetaplanecoords{\phivec}
\tdplotdrawarc[tdplot_rotated_coords]{(0,0,0)}{0.5}{0}%
{\thetavec}{anchor=south west}{$\theta$}
\shade[ball color = yellow!40, opacity = 0.4] (0,0) circle (0.53cm);
\draw (0,0) ellipse (0.53cm and 0.2cm);
\draw [dashed, color=blue] (0,0,0) to (0,0,-0.6);
\draw [dashed, color=blue] (0,0,-0.6) to (P);
\draw[draw=black] (0,0,-0.6) -- (0.1,0.1,-0.3) arc (0:62:1mm) -- cycle ;
\node at (0,0.1,-0.4) {$\frac{\theta}{2}$};
\end{tikzpicture}
\caption{ $\xi=e^{i\varphi}\cot(\theta/2)$ can be used to show the position of $P$. }
\label{fig:A.3}
\end{figure}
\chapter{Connection and Ricci tensor for higher orders}
For the metric
\begin{equation}
g_{\mu\nu}=\hat{g}_{\mu\nu}+h_{\mu\nu},
\end{equation}
the connection can be found as
\begin{align}
\label{eq:B.2}
\Gamma^{\rho}_{\mu \nu}=1/2 \hat{g}^{\rho \lambda}
(\partial_{\mu}\hat{g}_{\nu \lambda}+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})\\
+1/2 {h}^{\rho \lambda}(\partial_{\mu}\hat{g}_{\nu \lambda}
+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})\notag\\
+1/2 {g}^{\rho \lambda}(\partial_{\mu}{h}_{\nu \lambda}+\partial_{\nu}{h}_{\mu \lambda}-\partial_{\lambda}{h}_{\mu \nu})\notag\\
+1/2 {h}^{\rho \lambda}(\partial_{\mu}{h}_{\nu \lambda}+\partial_{\nu}{h}_{\mu \lambda}-\partial_{\lambda}{h}_{\mu \nu}).\notag
\end{align}
Inserting $\hat{g}_{\sigma \epsilon}\hat{g}^{\sigma \epsilon}$ in the second term and using the definitions of the zeroth- and second-order connections, $ \Gamma^{\rho (0)}_{\mu \nu}=1/2 \hat{g}^{\rho \lambda}(\partial_{\mu}\hat{g}_{\nu \lambda}+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})$,
$ \Gamma^{\rho(2)}_{\mu \nu}=1/2 {h}^{\rho \lambda}(\partial_{\mu}{h}_{\nu \lambda}+\partial_{\nu}{h}_{\mu \lambda}-\partial_{\lambda}{h}_{\mu \nu})$,
the relation \eqref{eq:B.2} can be written as
\begin{align}
\label{eq:B.3}
\Gamma^{\rho}_{\mu \nu}&=\Gamma^{\rho(0)}_{\mu \nu}
+1/2 {h}^{\rho \lambda}\hat{g}_{ \sigma \epsilon}\hat{g}^{ \sigma \epsilon}(\partial_{\mu}\hat{g}_{\nu \lambda}+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})\\
& +1/2(\partial_{\mu}h^{\rho}_{\nu}+\partial_{\nu}h^{\rho}_{\mu}-\partial^{\rho}h_{\mu \nu})+\Gamma^{\rho(2)}_{\mu \nu},\notag\\
&\Rightarrow\notag\\
&\Gamma^{\rho}_{\mu \nu}=\Gamma^{\rho(0)}_{\mu \nu}
+\delta^{\lambda}_{\epsilon}h^{\rho}_{\sigma} \hat{g}^{\sigma \epsilon}(\partial_{\mu}\hat{g}_{\nu \lambda}+\partial_{\nu}\hat{g}_{\mu \lambda}-\partial_{\lambda}\hat{g}_{\mu \nu})\notag\\
&+1/2(\partial_{\mu}h^{\rho}_{\nu}+\partial_{\nu}h^{\rho}_{\mu}-\partial^{\rho}h_{\mu \nu})+\Gamma^{\rho(2)}_{\mu \nu},\notag\\
&\Rightarrow\notag\\
&\Gamma^{\rho}_{\mu \nu}=\Gamma^{\rho(0)}_{\mu \nu}+1/2(2h^{\rho}_{\sigma}\Gamma^{\sigma(0)}_{\mu \nu}+\partial_{\mu}h^{\rho}_{\nu}+\partial_{\nu}h^{\rho}_{\mu}-\partial^{\rho}h_{\mu \nu})+\Gamma^{\rho(2)}_{\mu \nu}.\notag
\end{align}
On the other hand
\begin{align}
\label{eq:B.4}
&\hat{g}^{\rho \xi}(\partial_{\mu}h_{\xi \nu}+\partial_{\nu}h_{\xi \mu}-\partial_{\xi}h_{\mu \nu}+\Gamma^{\chi}_{\mu \xi}h_{\chi \nu}+\Gamma^{\chi}_{\mu \nu}h_{\chi \xi}+\Gamma^{\chi}_{\nu \mu}h_{\chi \xi}-\Gamma^{\chi}_{\nu \xi}h_{\mu \chi}-\Gamma^{\chi}_{\mu \xi}h_{\nu\chi})\\
&=\hat{g}^{\rho \xi}(\nabla_{\mu}h_{\xi \nu}+\nabla_{\nu}h_{\xi \mu}-\nabla_{\xi}h_{\mu \nu}).\notag
\end{align}
Comparing the second term of the relation \eqref{eq:B.3} with the relation \eqref{eq:B.4} one can see
\begin{equation}
\boxed{\hat{\Gamma}^{\rho (1)}_{\mu \nu}=1/2(\hat{\nabla}_{\mu}h^{\rho}_{\nu}+\hat{\nabla}_{\nu}h^{\rho}_{\mu}-\hat{\nabla}^{\rho}h_{\mu\nu})}
\end{equation}
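On a flat background ($\hat{g}_{\mu\nu}=\eta_{\mu\nu}$ in Cartesian coordinates, so that $\hat{\nabla}_{\mu}=\partial_{\mu}$) the boxed formula can be verified with a short symbolic computation. The sketch below is ours; the polynomial perturbation $h_{\mu\nu}$ is an arbitrary illustrative choice. It compares the $\epsilon$-derivative of the exact Christoffel symbols of $g=\eta+\epsilon h$ at $\epsilon=0$ with $\frac{1}{2}\eta^{\rho\lambda}(\partial_{\mu}h_{\nu\lambda}+\partial_{\nu}h_{\mu\lambda}-\partial_{\lambda}h_{\mu\nu})$:

```python
import sympy as sp

t, x, eps = sp.symbols('t x epsilon')
coords = [t, x]
eta = sp.diag(-1, 1)                                   # flat background, eta^{-1} = eta
h = sp.Matrix([[x**2, t*x], [t*x, t**2 + x]])          # arbitrary polynomial perturbation
g = eta + eps * h
ginv = g.inv()

def christoffel(metric, inv):
    """Gamma^r_{ab} = 1/2 g^{rl} (d_a g_{bl} + d_b g_{al} - d_l g_{ab})."""
    return [[[sum(inv[r, l] * (sp.diff(metric[b, l], coords[a])
                               + sp.diff(metric[a, l], coords[b])
                               - sp.diff(metric[a, b], coords[l]))
                  for l in range(2)) / 2
              for b in range(2)] for a in range(2)] for r in range(2)]

G = christoffel(g, ginv)
ok = True
for r in range(2):
    for a in range(2):
        for b in range(2):
            first_order = sp.diff(G[r][a][b], eps).subs(eps, 0)
            formula = sum(eta[r, l] * (sp.diff(h[b, l], coords[a])
                                       + sp.diff(h[a, l], coords[b])
                                       - sp.diff(h[a, b], coords[l]))
                          for l in range(2)) / 2
            ok = ok and sp.simplify(first_order - formula) == 0
print(ok)
```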
Now we will calculate the Ricci tensor for higher orders
\begin{align}
R_{\mu\nu}^{(1)}&=1/2 \partial_{\rho}(\hat{\nabla}_{\mu}h^{\rho}_{\nu}+\hat{\nabla}_{\nu}h^{\rho}_{\mu}-\hat{\nabla}^{\rho}h_{\mu\nu})-1/2\partial_{\mu}(\hat{\nabla}_{\rho}h^{\rho}_{\nu}+\hat{\nabla}_{\nu}h^{\rho}_{\rho}-\hat{\nabla}^{\rho}h_{\rho\nu})\\
&+1/2(\hat{\nabla}_{\rho}h^{\rho}_{\lambda}+\hat{\nabla}_{\lambda}h^{\rho}_{\rho}-\hat{\nabla}^{\rho}h_{\rho\lambda})\Gamma^{\lambda(0)}_{\mu\nu}+1/2 (\hat{\nabla}_{\mu}h^{\lambda}_{\nu}+\hat{\nabla}_{\nu}h^{\lambda}_{\mu}-\hat{\nabla}^{\lambda}h_{\mu\nu})\Gamma^{\rho(0)}_{\rho\lambda}\notag\\
&-1/2(\hat{\nabla}_{\rho}h^{\lambda}_{\nu}+\hat{\nabla}_{\nu}h^{\lambda}_{\rho}-\hat{\nabla}^{\lambda}h_{\rho\nu})\Gamma^{\rho(0)}_{\lambda\mu}-1/2(\hat{\nabla}_{\rho}h^{\lambda}_{\mu}+\hat{\nabla}_{\mu}h^{\lambda}_{\rho}-\hat{\nabla}^{\lambda}h_{\rho\mu})\Gamma^{\rho(0)}_{\lambda\nu}\notag\\
&=1/2[\underbrace{\partial_{\rho}\hat{\nabla}_{\mu}h^{\rho}_{\nu}+\hat{\nabla}_{\mu}h^{\lambda}_{\nu}\Gamma^{\rho(0)}_{\rho\lambda}-\hat{\nabla}_{\iota}h^{\rho}_{\nu}\Gamma^{\iota(0)}_{\rho\mu}-\hat{\nabla}_{\mu}h^{\rho}_{\iota}\Gamma^{\iota(0)}_{\rho\nu}}_{\hat{\nabla}_{\rho}\hat{\nabla}_{\nu}h^{\rho}_{\mu}}]\notag\\
&+1/2[\underbrace{\partial_{\rho}\hat{\nabla}_{\nu}h^{\rho}_{\mu}+\hat{\nabla}_{\nu}h^{\lambda}_{\mu}\Gamma^{\rho(0)}_{\rho\lambda}-\hat{\nabla}_{\iota}h^{\rho}_{\mu}\Gamma^{\iota(0)}_{\rho\nu}-\hat{\nabla}_{\nu}h^{\rho}_{\iota}\Gamma^{\iota(0)}_{\rho\mu}}_{\hat{\nabla}_{\rho}\hat{\nabla}_{\mu}h^{\rho}_{\nu}}].\notag
\end{align}
Finally, the following relation is obtained
\begin{equation}
\boxed{R^{(1)}_{\mu\nu}=1/2(\hat{\nabla}_{\rho}\hat{\nabla}_{\nu}h^{\rho}_{\mu}+\hat{\nabla}_{\rho}\hat{\nabla}_{\mu}h^{\rho}_{\nu}-\hat{\nabla}^{\rho}\hat{\nabla}_{\rho}h_{\mu\nu}-\hat{\nabla}_{\mu}\hat{\nabla}_{\nu}h)}.
\end{equation}
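The boxed result can be tested in the same spirit. On a flat background the sketch below (again ours, with an arbitrary polynomial perturbation) computes the exact Ricci tensor of $g=\eta+\epsilon h$ in two dimensions and compares its first order in $\epsilon$ with $\frac{1}{2}(\partial_{\rho}\partial_{\nu}h^{\rho}_{\mu}+\partial_{\rho}\partial_{\mu}h^{\rho}_{\nu}-\Box h_{\mu\nu}-\partial_{\mu}\partial_{\nu}h)$:

```python
import sympy as sp

t, x, eps = sp.symbols('t x epsilon')
coords = [t, x]
eta = sp.diag(-1, 1)
h = sp.Matrix([[x**2, t*x], [t*x, t**2 + x]])
g = eta + eps * h
ginv = g.inv()

# Gamma^r_{ab} of the full metric g
G = [[[sum(ginv[r, l] * (sp.diff(g[b, l], coords[a]) + sp.diff(g[a, l], coords[b])
           - sp.diff(g[a, b], coords[l])) for l in range(2)) / 2
       for b in range(2)] for a in range(2)] for r in range(2)]

def ricci(m, n):
    """R_{mn} = d_r Gamma^r_{mn} - d_n Gamma^r_{rm} + quadratic Gamma terms."""
    expr = sum(sp.diff(G[r][m][n], coords[r]) - sp.diff(G[r][r][m], coords[n])
               for r in range(2))
    expr += sum(G[r][r][l] * G[l][m][n] - G[r][m][l] * G[l][r][n]
                for r in range(2) for l in range(2))
    return expr

hup = lambda r, a: sum(eta[r, s] * h[s, a] for s in range(2))       # h^r_a
htr = sum(eta[r, s] * h[r, s] for r in range(2) for s in range(2))  # trace of h
ok = True
for m in range(2):
    for n in range(2):
        lin = sp.diff(ricci(m, n), eps).subs(eps, 0)
        box = sum(eta[r, s] * sp.diff(h[m, n], coords[r], coords[s])
                  for r in range(2) for s in range(2))
        formula = (sum(sp.diff(hup(r, m), coords[r], coords[n])
                       + sp.diff(hup(r, n), coords[r], coords[m]) for r in range(2))
                   - box - sp.diff(htr, coords[m], coords[n])) / 2
        ok = ok and sp.simplify(lin - formula) == 0
print(ok)
```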
For the second order one similarly finds
\begin{align}
\label{eq:B.8}
R^{(2)}_{\mu \nu}&= 1/2(\partial_{\rho}h^{\rho \lambda}\partial_{\mu}h_{\lambda \nu}+\partial_{\rho}h^{\rho \lambda}\partial_{\nu}h_{\mu \lambda }-\partial_{\rho}h^{\rho \lambda}\partial_{\lambda}h_{\mu \nu})\\
&+1/2h^{\rho \lambda}(\partial_{\rho}\partial_{\mu}h_{\lambda \nu}+\partial_{\rho}\partial_{\nu}h_{\mu \lambda}-\partial_{\rho}\partial_{\lambda}h_{\mu \nu})\notag\\
&-1/2 (\partial_{\nu}h^{\rho \lambda}\partial_{\mu}h_{\lambda \rho}+\partial_{\nu}h^{\rho \lambda} \partial_{\rho}h_{\mu \lambda}-\partial_{\nu}h^{\rho \lambda}\partial_{\lambda}h_{\rho \mu})\notag\\
&-1/2h^{\rho \lambda}(\partial_{\nu}\partial_{\mu}h_{\lambda \rho}+\partial_{\nu} \partial_{\rho}h_{\mu \lambda}-\partial_{\nu}\partial_{\lambda}h_{\rho \mu})\notag\\
&+1/4(\hat{\nabla}_{\rho}h^{\rho}_{\lambda}+\hat{\nabla}_{\lambda}h^{\rho}_{\rho}-\hat{\nabla}^{\rho}h_{\rho \lambda})
\times(\hat{\nabla}_{\mu}h^{\lambda}_{\nu}+\hat{\nabla}_{\nu}h^{\lambda}_{\mu}-\hat{\nabla}^{\lambda}h_{\mu \nu})\notag\\
&-1/4(\hat{\nabla}_{\nu}h^{\rho}_{\lambda}+\hat{\nabla}_{\lambda}h^{\rho}_{\nu}-\hat{\nabla}^{\rho}h_{\lambda \nu})
\times(\hat{\nabla}_{\rho}h^{\lambda}_{\mu}+\hat{\nabla}_{\mu}h^{\lambda}_{\rho}-\hat{\nabla}^{\lambda}h_{\rho \mu})\notag\\
&+1/4(\hat{g}^{\rho\xi}(\partial_{\rho}\hat{g}_{\lambda \xi}+\partial_{\lambda}\hat{g}_{\xi \rho}-\partial_{\xi}\hat{g}_{\rho \lambda}))
\times h^{\lambda \xi}(\partial_{\mu}h_{\nu \xi}+\partial_{\nu}h_{\xi \mu}-\partial_{\xi}h_{\mu \nu})\notag
\\
&+1/4(h^{\rho\xi}(\partial_{\rho}h_{\lambda \xi}+\partial_{\lambda}h_{\xi \rho}-\partial_{\xi}h_{\rho \lambda}))
\times \hat{g}^{\lambda \xi}(\partial_{\mu}\hat{g}_{\nu \xi}+\partial_{\nu}\hat{g}_{\xi \mu}-\partial_{\xi}\hat{g}_{\mu \nu})\notag\\
&+1/4 \hat{g}^{\rho \xi}(\partial_{\nu}\hat{g}_{\lambda \xi}+\partial_{\lambda}\hat{g}_{\xi \nu}-\partial_{\xi}\hat{g}_{\nu \lambda})
\times
h^{\lambda \xi}(\partial_{\rho}h_{\mu \xi}+\partial_{\mu}h_{\xi \rho}-\partial_{\xi}h_{\rho \mu})\notag
\\
&+1/4h^{\rho \xi}(\partial_{\nu}h_{\lambda \xi}+\partial_{\lambda}h_{\xi \nu}-\partial_{\xi}h_{\nu \lambda})
\times
\hat{g}^{\lambda \xi}(\partial_{\rho}\hat{g}_{\mu \xi}+\partial_{\mu}\hat{g}_{\xi \rho}-\partial_{\xi}\hat{g}_{\rho \mu}).\notag
\end{align}
In this relation many terms may vanish depending on the gauge that we choose.
\chapter{Ricci decomposition\label{ap:j}}
It is possible to decompose the Riemann tensor into three irreducible parts
\begin{equation}
R_{\mu \nu \sigma \rho}=C_{\mu \nu \sigma \rho}+E_{\mu \nu \sigma \rho}+G_{\mu \nu \sigma \rho},
\label{eq:2.20}
\end{equation}
where $C_{\mu \nu \sigma \rho}$ is the Weyl tensor, $E_{\mu \nu \sigma \rho}$ is the semi-traceless part and $G_{\mu \nu \sigma \rho}$ is the scalar part of the Riemann tensor. It is possible to obtain these terms using the Kulkarni--Nomizu product. Then for $E_{\mu \nu \sigma \rho}$ one has
\begin{equation}
\label{eq:G.2}
E_{\mu \nu \sigma \rho}=\alpha (g\mathbin{\bigcirc\mspace{-15mu}\wedge\mspace{3mu}} R)_{\mu \nu \sigma \rho}=2\alpha
(g_{\rho[\mu}R_{\nu]\sigma}-g_{\sigma[\mu}R_{\nu]\rho}),
\end{equation}
Also for $G_{\mu \nu \sigma \rho}$ it is possible to write
\begin{equation}
\label{eq:G.3}
G_{\mu \nu \sigma \rho}=\beta R(g\mathbin{\bigcirc\mspace{-15mu}\wedge\mspace{3mu}} g)_{\mu \nu \sigma \rho} =\beta R(g_{\mu \rho}g_{\nu \sigma}-g_{\mu \sigma}g_{\nu \rho}).
\end{equation}
$\alpha$
and
$\beta$
in relations
\eqref{eq:G.2}
and
\eqref{eq:G.3}
are arbitrary constants.
Putting
\eqref{eq:G.2}
and
\eqref{eq:G.3}
in
\eqref{eq:2.20}, one can write
\begin{equation}
\label{eq:G.4}
R_{\mu \nu \sigma \rho}=2\beta R\, g_{\rho[\mu}g_{\nu]\sigma}+2\alpha (g_{\rho[\mu}R_{\nu]\sigma}-g_{\sigma[\mu}R_{\nu]\rho})+C_{\mu \nu \rho \sigma}.
\end{equation}
Now $\alpha$
and
$\beta$ can be obtained by demanding that the Weyl tensor be completely trace-free
\begin{align}
\label{eq:G.5}
&\alpha=\frac{1}{n-2},\\
&\beta=\frac{-1}{(n-1)(n-2)}.\notag
\end{align}
Using constants \eqref{eq:G.5}, the relation \eqref{eq:G.4} can be rewritten
\begin{equation}
\label{eq:G.6}
R_{\mu \nu \sigma \rho}=\frac{-2R}{(n-1)(n-2)} g_{\rho[\mu}g_{\nu]\sigma}+\frac{2}{n-2} (g_{\rho[\mu}R_{\nu]\sigma}-g_{\sigma[\mu}R_{\nu]\rho})+C_{\mu \nu \rho \sigma}.
\end{equation}
This process is called Ricci decomposition.
Also it is possible to rewrite this relation in terms of the Schouten tensor $S_{\mu\nu}=R_{\mu\nu}-\frac{1}{6}g_{\mu\nu}R$ in four dimensions
\begin{equation}
{R}_{\mu\nu\sigma\rho}={C}_{\mu\nu\sigma\rho}+g_{\mu[\sigma}{S}_{\rho]\nu}-g_{\nu[\sigma}{S}_{\rho]\mu}.
\end{equation}
or
\begin{equation}
R^{\mu\nu}_{\sigma \rho}=C^{\mu\nu}_{\sigma \rho}-\frac{1}{3}R\delta^{\mu}_{[\sigma}\delta^{\nu}_{\rho]}+2\delta^{[\mu}_{[\sigma}R^{\nu]}_{\rho]}.
\end{equation}
It is possible to define the left and right duals of the Weyl tensor
\begin{align}
&^{\sim} C_{\mu\nu\sigma\rho}\equiv\frac{1}{2}\varepsilon_{\mu\nu\iota\chi}C^{\iota\chi}_{\:\:\:\:\sigma\rho},\\
&C_{\mu\nu\sigma\rho}^{\sim}\equiv\frac{1}{2}\varepsilon_{\sigma\rho\iota\chi}C^{\:\:\:\:\iota\chi}_{\mu\nu}.\notag
\end{align}
Dual tensors satisfy the following relation
\begin{equation}
^{\sim} C_{\mu\nu\sigma\rho}\equiv C_{\mu\nu\sigma\rho}^{\sim}.
\end{equation}
It is also useful to introduce the complex self-dual form of the Weyl tensor
\begin{equation}
\star C_{\mu\nu\sigma\rho}\equiv C_{\mu\nu\sigma\rho}+iC_{\mu\nu\sigma\rho}^{\sim}.
\end{equation}
On the other hand for self-dual bivectors one can write
\begin{equation}
X_{\mu}\equiv \star X_{\mu\nu}u^{\nu}\quad,\quad X_{\mu}u^{\mu}=0\quad,\quad u_{\mu}u^{\mu}=-1,
\end{equation}
where $u^{\mu}$ is a timelike unit vector. So for the Weyl tensor one has
\begin{equation}
-Q_{\mu\sigma}\equiv\star C_{\mu \nu\sigma \rho }u^{\nu}u^{\rho}\equiv E_{\mu\sigma}+iB_{\mu\sigma}\quad,\quad u_{\mu}u^{\mu}=-1,
\end{equation}
where $E_{\mu\sigma} $
and
$ B_{\mu\sigma}$ are electric and magnetic parts of the Weyl tensor.
If near the de Sitter null infinity $\tilde{n}^{\mu}$ is used as the timelike vector
with the relation $\tilde{n}^{\mu}\tilde{n}_{\mu}=-1/l^2$, then one has
\begin{align}
&\tilde{E}_{\mu\sigma}:=l^{2}\tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\nu}\tilde{n}^{\lambda},\\
&\tilde{B}_{\mu\sigma}:=l^{2} \star \tilde{C}_{\mu\nu\sigma\lambda}\tilde{n}^{\nu}\tilde{n}^{\lambda}.\notag
\end{align}
Evaluating \eqref{eq:G.6} for de Sitter space-time shows that $C_{\mu \nu \rho \sigma}$ and $E_{\mu \nu \rho \sigma}$ vanish. Thus the Riemann tensor takes the form
\begin{equation}
R_{\mu \nu \sigma \rho}=\frac{2R}{n(n-1)}g_{\rho[\mu}g_{\nu]\sigma}=\frac{R}{n(n-1)}(g_{\mu \rho}g_{\nu \sigma}-g_{\mu \sigma}g_{\nu \rho}).
\label{eq:2.23}
\end{equation}
The relation \eqref{eq:2.23} shows that the Riemann tensor can be described by the Ricci scalar alone, which means that knowing the curvature at a single point, one can obtain the curvature of the whole maximally symmetric space-time. In four dimensions the Einstein tensor then reads
\begin{equation}
G_{\mu \nu}=R_{\mu \nu}-1/2Rg_{\mu \nu}=-1/4Rg_{\mu \nu}.
\end{equation}
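The maximally symmetric form \eqref{eq:2.23} can be checked explicitly on the unit $2$-sphere. The sketch below is ours and fixes its own curvature conventions in the comments (the overall sign and index ordering depend on such conventions); with those conventions it finds $R=2$ and $R_{\theta\varphi\theta\varphi}=\frac{R}{n(n-1)}(g_{\theta\theta}g_{\varphi\varphi}-g_{\theta\varphi}^{2})$ with $n=2$:

```python
import sympy as sp

# Unit 2-sphere, with the convention
# R^r_{s m n} = d_m Gam^r_{ns} - d_n Gam^r_{ms} + Gam^r_{ml} Gam^l_{ns} - Gam^r_{nl} Gam^l_{ms}.
th, ph = sp.symbols('theta phi', positive=True)
coords = [th, ph]
g = sp.diag(1, sp.sin(th) ** 2)
ginv = g.inv()
n = 2

Gam = [[[sum(ginv[r, l] * (sp.diff(g[b, l], coords[a]) + sp.diff(g[a, l], coords[b])
             - sp.diff(g[a, b], coords[l])) for l in range(n)) / 2
         for b in range(n)] for a in range(n)] for r in range(n)]

def riem_up(r, s, m, nu):
    expr = sp.diff(Gam[r][nu][s], coords[m]) - sp.diff(Gam[r][m][s], coords[nu])
    expr += sum(Gam[r][m][l] * Gam[l][nu][s] - Gam[r][nu][l] * Gam[l][m][s]
                for l in range(n))
    return sp.simplify(expr)

R_thphthph = sp.simplify(sum(g[0, r] * riem_up(r, 1, 0, 1) for r in range(n)))
Rscal = sp.simplify(sum(ginv[a, b] * riem_up(r, a, r, b)
                        for a in range(n) for b in range(n) for r in range(n)))
print(Rscal, R_thphthph)
```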
Another traceless tensor is the Bach tensor, which is also invariant under conformal transformations. The Bach tensor has the following form on $\mathcal{I}$
\begin{equation}
B_{\mu\nu\sigma}=D_{[\mu}(\mathcal{R}_{\nu]\sigma}-\frac{1}{4}q_{\nu]\sigma}\mathcal{R}),
\end{equation}
where $q_{\nu\sigma}$ is the metric on $\mathcal{I}$ and $D_{\mu}$ is the covariant derivative related to $q_{\nu\sigma}$.
$\mathcal{R}$
and
$\mathcal{R}_{\nu\sigma}$ are Ricci scalar and Ricci tensor on $\mathcal{I}$. Now it is possible to rewrite the relation of $B_{\mu\nu\sigma}$
in terms of the asymptotic Weyl tensor $K_{\mu\nu\sigma\rho}=\Omega^{-1}{C}_{\mu\nu\sigma\rho}$
\begin{equation}
B_{\mu\nu\sigma}=\frac{1}{2}q^{\chi}_{\mu}q_{\nu}^{\psi}q_{\sigma}^{\iota}D_{[\chi}S_{\psi]\iota}=\frac{1}{2}q^{\chi}_{\mu}q_{\nu}^{\psi}q_{\sigma}^{\iota}K_{\chi\psi\iota\gamma}\tilde{n}^{\gamma}.
\end{equation}
\chapter{Tensor field caused by derivative operator \label{ap:d}}
Consider two derivative operators $\nabla$ and $\tilde{\nabla}$ associated with ${g}_{\mu\nu}$ and $\tilde{g}_{\mu\nu}$, respectively. If one operates them on the product of a dual vector field $\omega_{\nu}$ and a scalar field $f$, then according to the Leibniz rule it is possible to write \cite{stephani2009exact}
\begin{equation}
\tilde{\nabla}_{\mu}(f\omega_{\nu})-{\nabla}_{\mu}(f\omega_{\nu})=f( \tilde{\nabla}_{\mu}\omega_{\nu}-{\nabla}_{\mu}\omega_{\nu}).
\end{equation}
Each derivative operator maps a tensor of rank $(k,l)$ to a tensor of rank $(k,l+1)$. So $\tilde{\nabla}_{\mu}\omega_{\nu}-{\nabla}_{\mu}\omega_{\nu}$ defines a tensor field $C^{\sigma}_{\mu\nu }$ via
\begin{equation}
\tilde{\nabla}_{\mu}\omega_{\nu}={\nabla}_{\mu}\omega_{\nu}-C^{\sigma}_{\mu\nu }\omega_{\sigma}.
\end{equation}
Considering $ \tilde{\nabla}_{\mu}$ as the partial derivative $\partial_{\mu}$, $C^{\sigma}_{\mu\nu }$ reduces to the Christoffel symbol.
Note that from the torsion-free condition for $C^{\sigma}_{\mu\nu }$ one has the relation
\begin{equation}
C^{\sigma}_{\mu\nu }=C^{\sigma}_{\nu\mu }.
\end{equation}
The operation of derivative operator on the metric is then
\begin{equation}
\label{eq:d.4}
0=\tilde{\nabla}_{\mu}\tilde{g}_{\nu\sigma}={\nabla}_{\mu}\tilde{g}_{\nu\sigma}-C^{\rho}_{\mu\nu}\tilde{g}_{\rho\sigma}-C^{\rho}_{\mu\sigma}\tilde{g}_{\nu\rho}
\end{equation}
and for the other index combinations one also has
\begin{align}
\label{eq:d.5}
\tilde{\nabla}_{\nu}\tilde{g}_{\mu\sigma}&={\nabla}_{\nu}\tilde{g}_{\mu\sigma}-C^{\rho}_{\nu\mu}\tilde{g}_{\rho\sigma}-C^{\rho}_{\nu\sigma}\tilde{g}_{\mu\rho},\\
\tilde{\nabla}_{\sigma}\tilde{g}_{\mu\nu}&={\nabla}_{\sigma}\tilde{g}_{\mu\nu}-C^{\rho}_{\sigma\mu}\tilde{g}_{\rho\nu}-C^{\rho}_{\sigma\nu}\tilde{g}_{\mu\rho}.\notag
\end{align}
Subtracting the first relation of \eqref{eq:d.5} from \eqref{eq:d.4} and using the second relation of \eqref{eq:d.5} one has
\begin{equation}
C^{\rho}_{\mu\sigma}=1/2\tilde{g}^{\nu\rho}({\nabla}_{\mu}\tilde{g}_{\nu\sigma}+{\nabla}_{\sigma}\tilde{g}_{\mu\nu}-{\nabla}_{\nu}\tilde{g}_{\mu\sigma}).
\label{d.6}
\end{equation}
For the unphysical metric one has
\begin{equation}
{\nabla}_{\sigma} \tilde{g}_{\mu\nu}={\nabla}_{\sigma}(\Omega^2{g}_{\mu\nu})=2\Omega{g}_{\mu\nu}\nabla_{\sigma} \Omega.
\end{equation}
So the relation \eqref{d.6} can be rewritten as
\begin{equation}
\label{d.8}
{C}^{\rho}_{\mu\sigma}=\Omega^{-1}{g}^{\nu\rho}({g}_{\nu\sigma}n_{\mu}+{g}_{\mu\nu}n_{\sigma}-{g}_{\mu\sigma}n_{\nu})=2\Omega^{-1}\delta^{\rho}_{(\sigma}n_{\mu)}-\Omega^{-1}{g}_{\mu\sigma}n^{\rho},
\end{equation}
where $n_{\mu}={\nabla}_{\mu}\Omega$. It is possible to write similar calculation according to $\tilde{\nabla}_{\mu}$ and ${g}_{\mu\nu}$
\begin{equation}
\label{eq:D.99}
\tilde{C}^{\rho}_{\mu\sigma}=\Omega^{-1}{g}^{\nu\rho}({g}_{\nu\sigma}\tilde{n}_{\mu}+{g}_{\mu\nu}\tilde{n}_{\sigma}-{g}_{\mu\sigma}\tilde{n}_{\nu})=2\Omega^{-1}\delta^{\rho}_{(\sigma}\tilde{n}_{\mu)}-\Omega^{-1}{g}_{\mu\sigma}\tilde{n}^{\rho}.
\end{equation}
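For a flat physical metric in Cartesian coordinates ($g_{\mu\nu}=\eta_{\mu\nu}$, whose connection vanishes), the tensor relating the two connections is just the Christoffel symbol of $\tilde{g}_{\mu\nu}=\Omega^{2}\eta_{\mu\nu}$, and the formula above can be verified symbolically. The sketch below is ours; the conformal factor is an arbitrary positive polynomial, and the last term is read as $\Omega^{-1}g_{\mu\sigma}n^{\rho}$ with $n^{\rho}=g^{\rho\nu}n_{\nu}$:

```python
import sympy as sp

t, x = sp.symbols('t x')
coords = [t, x]
eta = sp.diag(-1, 1)
Omega = 1 + t**2 + x**2                 # arbitrary positive conformal factor
gt = Omega**2 * eta                     # g-tilde = Omega^2 g with flat g
gtinv = gt.inv()
n = [sp.diff(Omega, c) for c in coords]                            # n_mu = d_mu Omega
nup = [sum(eta[r, s] * n[s] for s in range(2)) for r in range(2)]  # n^rho

ok = True
for r in range(2):
    for m in range(2):
        for s in range(2):
            # Christoffel symbol of g-tilde
            Gam = sum(gtinv[r, l] * (sp.diff(gt[s, l], coords[m])
                      + sp.diff(gt[m, l], coords[s])
                      - sp.diff(gt[m, s], coords[l])) for l in range(2)) / 2
            # C^rho_{mu sigma} = Omega^{-1}(delta n + delta n - g n)
            C = (sp.KroneckerDelta(r, m) * n[s] + sp.KroneckerDelta(r, s) * n[m]
                 - eta[m, s] * nup[r]) / Omega
            ok = ok and sp.simplify(Gam - C) == 0
print(ok)
```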
The Riemann tensor can be obtained from either
\eqref{eq:D.99} or \eqref{d.8}, but because of our convention it is important to use $\tilde{C}^{\rho}_{\mu\sigma}$ from \eqref{eq:D.99}, so one has
\begin{align}
\tilde{R}^{\rho}_{\mu\sigma\nu}&={R}^{\rho}_{\mu\sigma\nu}-2\tilde{\nabla}_{[\mu}\tilde{C}^{\rho}_{\sigma]\nu}+2\tilde{C}^{\lambda}_{\nu[\mu}\tilde{C}^{\rho}_{\sigma]\lambda}\\
&={R}^{\rho}_{\mu\sigma\nu}+2\Omega^{-1}\delta^{\rho}_{[\mu}\tilde{\nabla}_{\sigma]}\tilde{\nabla}_{\nu}\Omega-2\Omega^{-1}\tilde{g}^{\rho\lambda}\tilde{g}_{\nu[\mu}\tilde{\nabla}_{\sigma]}\tilde{\nabla}_{\lambda}\Omega+2\Omega^{-2}\tilde{\nabla}_{[\mu}\Omega\delta^{\rho}_{\sigma]}\tilde{\nabla}_{\nu}\Omega\notag\\
&-2\Omega^{-2}\tilde{\nabla}_{[\mu}\Omega\tilde{g}_{\sigma]\nu}\tilde{\nabla}_{\xi}\Omega-2\tilde{g}_{\nu[\mu}\delta^{\rho}_{\sigma]}\tilde{g}^{\lambda\xi}\tilde{\nabla}_{\xi}\Omega\tilde{\nabla}_{\lambda}\Omega\notag
\end{align}
and also for the Ricci tensor
\begin{align}
\label{eq:D.8}
\tilde{R}_{\mu\nu}=&\tilde{\nabla}_{\rho}\tilde{C}^{\rho}_{\mu\nu}-\tilde{\nabla}_{\nu}\tilde{C}^{\rho}_{\rho\mu}+\tilde{C}^{\rho}_{\rho\lambda}\tilde{C}^{\lambda}_{\mu\nu}-\tilde{C}^{\rho}_{\mu\lambda}\tilde{C}^{\lambda}_{\rho\nu}\\
=&R_{\mu\nu}+(d-2)\Omega^{-2}\tilde{\nabla}_{\nu}\Omega\tilde{\nabla}_{\mu}\Omega-(d-2)\Omega^{-2}\tilde{g}_{\mu\nu}\tilde{g}^{\rho\sigma}\tilde{\nabla}_{\rho}\Omega\tilde{\nabla}_{\sigma}\Omega\notag\\
-&(d-2)\Omega^{-1}\tilde{\nabla}_{\mu}\tilde{\nabla}_{\nu}\Omega-\Omega^{-1}\tilde{g}_{\mu\nu}\tilde{g}^{\rho\sigma}\tilde{\nabla}_{\rho}\tilde{\nabla}_{\sigma}\Omega\notag.
\end{align}
Multiplying this relation by $\tilde{g}^{\mu\nu}$, the Ricci scalar can be obtained
\begin{equation}
\tilde{R}=R-2(d-1)\Omega^{-1}\tilde{\nabla}^{\nu}\tilde{\nabla}_{\nu}\Omega-(d-2)(d-1)\Omega^{-2}\tilde{\nabla}^{\nu}\Omega\tilde{\nabla}_{\nu}\Omega.
\label{eq:D.9}
\end{equation}
On the other hand according to \eqref{eq:D.8}
and
\eqref{eq:D.9} the Schouten tensor in four dimensions can be written as follows
\begin{equation}
\tilde{S}_{\mu\nu}=\tilde{R}_{\mu\nu}-1/6\tilde{g}_{\mu\nu}\tilde{R}={S}_{\mu\nu}-2\Omega^{-2}\tilde{\nabla}_{\nu}\Omega\tilde{\nabla}_{\mu}\Omega+2\Omega^{-1}\tilde{\nabla}_{\nu}\tilde{\nabla}_{\mu}\Omega.
\end{equation}
The Einstein tensor can also be obtained using
\eqref{eq:D.8}
and
\eqref{eq:D.9}
\begin{align}
\label{eq:D.14}
\tilde{G}_{\mu\nu}&=\tilde{R}_{\mu\nu}-1/2\tilde{g}_{\mu\nu}\tilde{R}=G_{\mu\nu}+ 2 \Omega^{-1}(\tilde{\nabla}_{\mu}\tilde{n}_{\nu}-\tilde{g}_{\mu\nu}\tilde{\nabla}^{\sigma}\tilde{n}_{\sigma})+3\Omega^{-2} \tilde{g}_{\mu\nu}\tilde{n}^{\sigma}\tilde{n}_{\sigma}\\
&+\Omega^{-2} \Lambda \tilde{g}_{\mu\nu}.\notag
\end{align}
From the vacuum condition
$G_{\mu\nu}=0$, and since the two final terms in \eqref{eq:D.14} cancel each other, one has on $\mathcal{I}$
\begin{equation}
\tilde{G}_{\mu\nu}=2 \Omega^{-1}(\tilde{\nabla}_{\mu}\tilde{n}_{\nu}-\tilde{g}_{\mu\nu}\tilde{\nabla}^{\sigma}\tilde{n}_{\sigma})=2 \Omega^{-1}(\tilde{K}_{\mu\nu}-\tilde{g}_{\mu\nu}\tilde{K})
\label{eq:D.15}
\end{equation}
where $\tilde{K}_{\mu\nu}=\tilde{\nabla}_{\mu}\tilde{n}_{\nu}$
and
$\tilde{K}=\tilde{\nabla}^{\sigma}\tilde{n}_{\sigma}$.
For the Riemann tensor in three dimensions one has
\begin{equation}
\label{eq:d.166}
\mathcal{R}^{\sigma}_{\mu\nu\rho}\omega_{\sigma}= D_{\mu}D_{\nu}\omega_{\rho}- D_{\nu}D_{\mu}\omega_{\rho}
\end{equation}
where $ D_{\mu}D_{\nu}\omega_{\rho}$ can be written as
\begin{align}
\label{eq:d.177}
D_{\mu}D_{\nu}\omega_{\rho}&=D_{\mu}(h^{\psi}_{\nu}h^{\chi}_{\rho}\tilde{\nabla}_{\psi}\omega_{\chi})=h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} \tilde{\nabla}_{\iota}(h^{\psi}_{\xi}h^{\chi}_{\epsilon}\tilde{\nabla}_{\psi}\omega_{\chi})\\
&=h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} \tilde{\nabla}_{\iota}(\underbrace{h^{\psi}_{\xi}}_{\tilde{g}^{\psi}_{\xi}+\tilde{n}^{\psi}\tilde{n}_{\xi}})h^{\chi}_{\epsilon}\tilde{\nabla}_{\psi}\omega_{\chi}
+h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} h^{\psi}_{\xi}\tilde{\nabla}_{\iota}(\underbrace{h^{\chi}_{\epsilon}}_{\tilde{g}^{\chi}_{\epsilon}+\tilde{n}^{\chi}\tilde{n}_{\epsilon}})\tilde{\nabla}_{\psi}\omega_{\chi}\notag\\
&+h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} h^{\psi}_{\xi}h^{\chi}_{\epsilon}\tilde{\nabla}_{\iota}\tilde{\nabla}_{\psi}\omega_{\chi}\notag
\end{align}
where $\tilde{g}_{\mu\nu}=-\tilde{n}_{\mu}\tilde{n}_{\nu}+\tilde{h}_{\mu\nu}$. If one uses $\tilde{\nabla}^{\mu}\tilde{n}_{\nu}=\tilde{K}^{\mu}_{\nu}$
and
$\tilde{\nabla}^{\rho}\tilde{g}_{\mu\nu}=0$ in this relation, the relation \eqref{eq:d.177} takes the following form
\begin{align}
D_{\mu}D_{\nu}\omega_{\rho}&=h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} h^{\psi}_{\xi}h^{\chi}_{\epsilon}\tilde{\nabla}_{\iota}\tilde{\nabla}_{\psi}\omega_{\chi}+h_{\rho}^{\chi}\tilde{K}_{\mu\nu}\tilde{n}^{\sigma}\tilde{\nabla}_{\sigma}\omega_{\chi}+h^{\sigma}_{\nu}\tilde{K}_{\mu\rho}\tilde{n}^{\xi}\tilde{\nabla}_{\sigma}\omega_{\xi}.
\end{align}
Remember that the indices of $\tilde{K}^{\mu}_{\nu}$ are lowered and raised
with $h_{\mu\nu}$. This calculation can be done similarly for $D_{\nu}D_{\mu}\omega_{\rho}$. Then putting these relations in \eqref{eq:d.166} one has \cite{feng2018weiss}
\begin{equation}
\mathcal{R}^{\sigma}_{\mu\nu\rho}=h^{\iota}_{\mu} h^{\xi}_{\nu}h^{\epsilon}_{\rho} h^{\sigma}_{\chi}\tilde{R}_{\iota\xi\epsilon}^{\chi}-\tilde{K}_{\mu\rho}\tilde{K}^{\sigma}_{\nu}-\tilde{K}_{\nu\rho}\tilde{K}^{\sigma}_{\mu}.
\end{equation}
Using this relation one can also find the three-dimensional form of the Ricci tensor for $\Omega= constant$ surfaces
\begin{equation}
\label{eq:d.20}
\mathcal{R}_{\mu\rho}=\mathcal{R}^{\sigma}_{\mu\sigma\rho}=\tilde{h}^{\iota}_{\mu}\tilde{h}^{\chi}_{\rho}\tilde{R}_{\iota\chi}-\tilde{K}_{\mu\rho}\tilde{K}-\tilde{K}_{\sigma\rho}\tilde{K}^{\sigma}_{\mu}.
\end{equation}
If one puts $\tilde{G}_{\mu\nu}=\tilde{R}_{\mu\nu}-1/2\tilde{g}_{\mu\nu}\tilde{R}$ in \eqref{eq:D.15} and multiplies the resulting equation by $\tilde{n}^{\mu}\tilde{n}^{\nu}$, one finds
\begin{align}
\label{eq:d.16}
\tilde{R}+2\tilde{R}_{\mu\sigma}\tilde{n}^{\mu}\tilde{n}^{\sigma}=4\Omega^{-1}(\tilde{K}_{\mu\sigma}-\tilde{g}_{\mu\sigma}\tilde{K})\tilde{n}^{\mu}\tilde{n}^{\sigma}.
\end{align}
Using
$\tilde{K}_{\mu\sigma}\tilde{n}^{\mu}=0$ in \eqref{eq:d.16} one gets
\begin{align}
\label{eq:d.22}
\tilde{R}+2\tilde{R}_{\mu\sigma}\tilde{n}^{\mu}\tilde{n}^{\sigma}=-4\Omega^{-1}\tilde{g}_{\mu\sigma}\tilde{n}^{\mu}\tilde{n}^{\sigma}\tilde{K}=4\Omega^{-1}(-\tilde{K}+\underbrace{\tilde{h}_{\mu\sigma}\tilde{n}^{\mu}\tilde{n}^{\sigma}}_{=0})=-4\Omega^{-1}\tilde{K}.
\end{align}
Doing some algebra, and using \eqref{eq:d.20} in \eqref{eq:d.22}, one finds the following relation
\begin{align}
\mathcal{R}+\tilde{K}^2-\tilde{K}_{\mu\nu}\tilde{K}^{\mu\nu}=4\Omega^{-1}\tilde{K}.
\end{align}
\end{appendices}
\bibliographystyle{ieeetr}
\section{Introduction}
Consider a filtered probability space $(\Omega, \mathcal{F}, \{\mathcal{F}_{t}\}_{t\geq 0}, \mathbb{P})$ where the filtration $\{\mathcal{F}_{t}\}_{t\geq 0}$ satisfies the usual conditions. Let $W=\{W_{t}\}_{t\geq 0}$ be a standard Brownian motion adapted to $\{\mathcal{F}_{t}\}_{t\geq 0}$. The reflected Ornstein–Uhlenbeck (OU) process reflected at $0$ is described by the following stochastic differential equation (SDE)
\begin{equation}\label{eq1}
\left\{
\begin{aligned}
&\mathrm{d}X_{t}=-\theta X_{t}\mathrm{d}t+\sigma\mathrm{d}W_{t}+\mathrm{d}L_{t},\\
&X_{t}\geq 0 \quad\text{for all}\quad t\geq0,\\
&X_{0}=x,
\end{aligned}
\right.
\end{equation}
where $\theta\in (0,\infty)$ is the unknown parameter, $\sigma\in (0,\infty)$ is a constant and $L=\{L_{t}\}_{t\geq 0}$ is the minimal continuous increasing process which ensures that $X_{t}\geq 0$ for all $t\geq 0$. The process $L$ increases only when $X$ hits the boundary $0$, so that
\begin{equation*}
\int_{[0, \infty)} I(X_{t}\geq 0) \mathrm{d} L_{t}=0,
\end{equation*}
where $I(\cdot)$ is the indicator function.
The reflected OU process behaves like a standard OU process in the interior of its domain $(0, \infty)$. Benefiting from its reflecting barrier, the reflected OU process has been widely used in many areas such as the queueing system \citep{Ward2005}, financial engineering \citep{Bo2010} and mathematical biology \citep{Ricciardi1987}. The reflecting barrier is assumed to be $0$ for the physical restriction of the state processes such as queue-length, stock prices and interest rates, which take non-negative values. For more details on reflected OU processes and their broad applications, one can refer to \cite{Harrison1985} and \cite{Whitt2002}.
The parameter estimation problem for the reflected OU process has gained much attention in recent years due to its increased applications in broad fields. In many real-world applications, the parameters which characterize the reflected OU process must be estimated from the data.
As far as we know, the maximum likelihood estimator (MLE) for the drift parameter $\theta$ is studied in \cite{Bo2011}. They obtain the strong consistency and asymptotic normality of their estimator, but they do not obtain the explicit form of the asymptotic variance. The sequential MLE based on the continuously observed processes throughout a random time interval $[0,\tau]$ is studied in \cite{Lee2012}, where $\tau$ is a stopping time. The main tool used in the above two papers is the Girsanov theorem for reflected Brownian motion.
On the other hand, an ergodic type of estimator for $\theta$ based on discrete observations is studied in \cite{Hu2015}. Recently, the moment estimators for all parameters $(\theta,\sigma)$ based on the ergodic theorem are studied in \cite{Hu2021}. However, there is only limited literature on the least squares estimator (LSE) for the drift parameter of a reflected OU process.
In this paper, we propose two types of LSEs for the drift parameter $\theta$ based on continuously observed processes and discretely observed processes respectively.
The continuous-type LSE is motivated by aiming to minimize
\begin{equation*}
\int_{0}^{T}\left|\dot{X}_{t}+\theta X_{t}-\dot{L}_{t}\right|^{2} \mathrm{d}t.
\end{equation*}
It is a quadratic function of $\theta$, although we do not know $\dot{L}_{t}$ and $\dot{X}_{t}$. The minimum is achieved when
\begin{equation*}
\hat{\theta}_{T}=-\frac{\int_{0}^{T} X_{t}\mathrm{d} X_{t}-\int_{0}^{T}X_{t}\mathrm{d}L_{t}}{\int_{0}^{T} X_{t}^{2} \mathrm{d} t}.
\end{equation*}
Assume that $h\rightarrow0$ and $nh\rightarrow\infty$ as $n\rightarrow\infty$. When the process is observed at the discrete time instants $\{t_{k}=kh, k=0,1,\cdots,n\}$, the discrete-type LSE is motivated by minimizing the following contrast function
\begin{equation*}
\sum_{k=0}^{n-1}|X_{t_{k+1}}-X_{t_{k}}+\theta X_{t_{k}}h-\vartriangle_{k}L|^{2},
\end{equation*}
where $\vartriangle_{k}L=L_{t_{k+1}}-L_{t_{k}}$. The minimum is achieved when
\begin{equation*}
\tilde{\theta}_{n}=-\frac{\sum_{k=0}^{n-1}X_{t_{k}}(X_{t_{k+1}}-X_{t_{k}}-\vartriangle_{k}L)}{\sum_{k=0}^{n-1}X_{t_{k}}^{2}h}.
\end{equation*}
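As an illustration of how $\tilde{\theta}_{n}$ behaves in practice, the following sketch (ours; the parameter values, step size and seed are arbitrary illustrative choices) simulates the reflected OU process with an Euler scheme plus Skorokhod reflection at $0$, records the increments of $L$, and evaluates the estimator:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, sigma, h, n = 1.0, 0.5, 0.01, 200_000   # illustrative values only

X = np.empty(n + 1)
dL = np.zeros(n)
X[0] = 0.5
noise = sigma * np.sqrt(h) * rng.standard_normal(n)
for k in range(n):
    step = X[k] - theta * X[k] * h + noise[k]  # unreflected Euler step
    dL[k] = max(0.0, -step)                    # L pushes up only at the barrier 0
    X[k + 1] = step + dL[k]

theta_tilde = -np.sum(X[:-1] * (np.diff(X) - dL)) / (np.sum(X[:-1] ** 2) * h)
print(theta_tilde)   # close to the true value theta = 1.0
```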
The remainder of this paper is organized as follows. In Section \ref{sec2}, we describe some preliminary results related to our context. Section \ref{sec3} is devoted to obtaining the asymptotic behavior of the two estimators. Section \ref{sec4} presents some numerical results and Section \ref{sec5} concludes.
\section{Preliminaries}\label{sec2}
In this section, we first introduce some basic facts. Throughout this paper, we shall use the notation ``$\stackrel{P}{\longrightarrow}$" to denote ``convergence in probability" and the notation ``$\sim$" to denote ``convergence in distribution".
With the previous results \citep{Hu2015,Linetsky2005,Ward2003}, we know that the unique invariant density of $\{X_{t}\}_{t\geq 0}$ is
\begin{equation}\label{eq2}
p(x)=2 \sqrt{\frac{2 \theta}{\sigma^{2}}} \phi\left(\sqrt{\frac{2 \theta}{\sigma^{2}}} x\right), \quad x \in[0, \infty),
\end{equation}
where $\phi(u)=(2 \pi)^{-1 / 2} e^{-\frac{u^{2}}{2}}$ is the (standard) Gaussian density function.
Based on the basic stability theories of Markov processes, we have the following ergodic lemma.
\begin{lem}\label{lem1}
For any $x \in \mathbb{R}_{+}$ and any $f \in L_{1}(\mathbb{R}_{+}, \mathcal{B}(\mathbb{R}_{+}))$, we have
\begin{enumerate}[a.]
\item The continuously observed process $\{X_{t}\}_{t\geq0}$ is ergodic,
\begin{equation*}
\lim _{T \rightarrow \infty} \frac{1}{T} \int_{0}^{T}f(X_{t})\mathrm{d}t=\mathbb{E}[f(X_{\infty})]=\int_{0}^{\infty} f(x) p(x) \mathrm{d} x.
\end{equation*}
\item The discretely observed process $\{X_{t_{k}}, k=0,1,\cdots,n\}$ is ergodic,
\begin{equation*}
\lim _{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^{n} f(X_{t_{k}})=\mathbb{E}[f(X_{\infty})]=\int_{0}^{\infty} f(x) p(x) d x.
\end{equation*}
\end{enumerate}
\end{lem}
\noindent\textbf{Proof of Lemma \ref{lem1}}. One can see \cite{Han2016} and \cite{Hu2015} for a proof.
\hfill$\square$
By Lemma \ref{lem1} and the unique invariant density in Eq. (\ref{eq2}), we obtain the following formula for the second-order moment
\begin{equation}\label{eq3}
\lim_{T \rightarrow \infty}\frac{1}{T}\int_{0}^{T}X^{2}_{t}\mathrm{d}t=\lim_{n\rightarrow \infty}\frac{1}{n}\sum_{k=1}^{n}X_{t_{k}}^{2}=\mathbb{E}|X_{\infty}|^{2}=\int_{0}^{\infty} x^{2} p(x) \mathrm{d} x=\frac{\sigma^{2}}{2\theta}.
\end{equation}
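For completeness, the last equality can be verified directly from the density in Eq. (\ref{eq2}) (with its rate written in terms of the drift parameter, so that $\lambda=\sqrt{2\theta/\sigma^{2}}$): substituting $u=\lambda x$,

```latex
\int_{0}^{\infty} x^{2} p(x)\,\mathrm{d}x
  = 2\lambda\int_{0}^{\infty} x^{2}\phi(\lambda x)\,\mathrm{d}x
  = \frac{2}{\lambda^{2}}\int_{0}^{\infty} u^{2}\phi(u)\,\mathrm{d}u
  = \frac{2}{\lambda^{2}}\cdot\frac{1}{2}
  = \frac{\sigma^{2}}{2\theta},
```

since $\int_{0}^{\infty}u^{2}\phi(u)\,\mathrm{d}u=\frac{1}{2}$, half of the second moment of a standard normal variable.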
\section{Asymptotic behavior of the least squares estimators}\label{sec3}
In this section, we consider the asymptotic behavior of the LSEs for the drift parameter $\theta$. By Eq. (\ref{eq1}), we provide two useful and crucial alternative expressions for $\hat{\theta}_{T}$ and $\tilde{\theta}_{n}$
\begin{equation}\label{eq4}
\hat{\theta}_{T}=\theta-\sigma \frac{\int_{0}^{T} X_{t} \mathrm{d} W_{t}}{\int_{0}^{T} X_{t}^{2} \mathrm{d} t},
\end{equation}
and
\begin{equation}\label{eq5}
\tilde{\theta}_{n}=\theta+\frac{\sum_{k=0}^{n-1}X_{t_{k}}\big(\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t-\sigma\vartriangle_{k}W\big)}{\sum_{k=0}^{n-1}X^{2}_{t_{k}}h}.
\end{equation}
The following theorem proves the consistency of the continuous-type LSE.
\begin{thm}\label{thm1}
The continuous-type LSE $\hat{\theta}_{T}$ of $\theta$ is consistent, i.e.,
\begin{equation*}
\hat{\theta}_{T}\stackrel{P}{\longrightarrow} \theta,
\end{equation*}
as $T$ tends to infinity.
\end{thm}
\noindent\textbf{Proof of Theorem \ref{thm1}}. From the alternative expression Eq. (\ref{eq4}), we have
\begin{equation*}
\hat{\theta}_{T}-\theta=-\sigma \frac{\frac{1}{T}\int_{0}^{T} X_{t} \mathrm{d} W_{t}}{\frac{1}{T}\int_{0}^{T} X_{t}^{2} \mathrm{d} t}.
\end{equation*}
By Lemma \ref{lem1} and Eq. (\ref{eq3}), we have
\begin{equation}\label{eq6}
\lim_{T \rightarrow \infty}\frac{1}{T}\int_{0}^{T}X_{t}^{2}\mathrm{d}t= \frac{\sigma^{2}}{2\theta}.
\end{equation}
Note that the process $\{\int_{0}^{t}X_{s}\mathrm{d}W_{s}, t\geq 0\}$ is a martingale with quadratic variation $\int_{0}^{t}X_{s}^{2}\mathrm{d}s$. Then
\begin{equation*}
\mathbb{E}\bigg[\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\bigg]=0,
\end{equation*}
and
\begin{equation*}
\mathbb{E}\bigg[\Big(\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\Big)^{2}\bigg]=\frac{1}{T^{2}}\,\mathbb{E}\bigg[\int_{0}^{T}X_{t}^{2}\mathrm{d}t\bigg]=O(T^{-1}).
\end{equation*}
By Chebyshev's inequality, we have
\begin{equation}\label{eq7}
\frac{1}{T}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\stackrel{P}{\longrightarrow}0,\quad T\rightarrow\infty.
\end{equation}
Combining Eq. (\ref{eq6}) and (\ref{eq7}), we obtain the desired results.
\hfill$\square$
We establish the asymptotic normality of the continuous-type LSE in the following theorem. The convergence rate is comparable to that of the MLE-based approach \citep{Bo2011}, and we obtain an explicit formula for the asymptotic variance.
\begin{thm}\label{thm2}
The continuous-type LSE $\hat{\theta}_{T}$ of $\theta$ admits the asymptotic normality, i.e.,
\begin{equation*}
\sqrt{T}(\hat{\theta}_{T}-\theta)\sim\mathcal{N}(0, 2\theta),
\end{equation*}
as $T$ tends to infinity.
\end{thm}
\noindent\textbf{Proof of Theorem \ref{thm2}}.
Note that
\begin{equation*}
\begin{aligned}
\sqrt{T}(\hat{\theta}_{T}-\theta)&=-\sigma\sqrt{T}\frac{\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\int_{0}^{T} X_{t}^{2}\mathrm{d}t}\\
&=-\frac{\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\frac{1}{T}\int_{0}^{T} X_{t}^{2}\mathrm{d}t}.
\end{aligned}
\end{equation*}
From Eq. (\ref{eq3}), we have that $\frac{1}{T}\int_{0}^{T} X_{t}^{2}\mathrm{d}t$ converges to $\frac{\sigma^{2}}{2\theta}$
almost surely as $T$ tends to infinity. It is therefore sufficient to show that $\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}$ converges in law to a centered normal distribution as $T$ tends to infinity. Since $X_{t}$ is adapted to $\mathcal{F}_{t}$, the process $\{\int_{0}^{t}X_{s}\mathrm{d}W_{s}, t\geq0\}$ is a continuous martingale whose normalized quadratic variation $\frac{\sigma^{2}}{T}\int_{0}^{T} X_{t}^{2}\mathrm{d}t$ converges to $\frac{\sigma^{4}}{2\theta}$ by Eq. (\ref{eq3}). The central limit theorem for martingales then yields
\begin{equation}\label{eq8}
\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}\sim\mathcal{N}(0, \frac{\sigma^{4}}{2\theta}).
\end{equation}
By Slutsky's theorem and Eq. (\ref{eq8}), we have
\begin{equation*}
\frac{\frac{\sigma}{\sqrt{T}}\int_{0}^{T}X_{t}\mathrm{d}W_{t}}{\frac{1}{T}\int_{0}^{T} X_{t}^{2}\mathrm{d}t}\sim\mathcal{N}(0,2\theta),
\end{equation*}
which completes the proof.
\hfill$\square$
The following theorem proves the consistency of the discrete-type LSE.
\begin{thm}\label{thm3}
The discrete-type LSE $\tilde{\theta}_{n}$ admits the consistency, i.e.,
\begin{equation*}
\tilde{\theta}_{n}\stackrel{P}{\longrightarrow}\theta,
\end{equation*}
as $n$ tends to infinity.
\end{thm}
\noindent\textbf{Proof of Theorem \ref{thm3}}. From the alternative expression Eq. (\ref{eq5}), we have
\begin{equation*}
\tilde{\theta}_{n}-\theta=\frac{\frac{1}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\big(\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t-\sigma\vartriangle_{k}W\big)}{\frac{1}{nh}\sum_{k=0}^{n-1}X^{2}_{t_{k}}h}.
\end{equation*}
We first consider the estimate of $\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|$. For $t_{k}\leq t\leq t_{k+1}$, we have
\begin{equation*}
\begin{aligned}
&|X_{t}-X_{t_{k}}|\\
=&|-\theta\int_{t_{k}}^{t} X_{u}\mathrm{d}u+\sigma(W_{t}-W_{t_{k}})+(L_{t}-L_{t_{k}})|\\
=&|-\theta\int_{t_{k}}^{t}(X_{u}-X_{t_{k}})\mathrm{d}u-\theta X_{t_{k}}(t-t_{k})+\sigma(W_{t}-W_{t_{k}})+(L_{t}-L_{t_{k}})|\\
\leq&|X_{t_{k}}|h\theta+\sup_{t_{k}\leq t\leq t_{k+1}}\big(\sigma|W_{t}-W_{t_{k}}|+(L_{t}-L_{t_{k}})\big)+\theta\int_{t_{k}}^{t}|X_{u}-X_{t_{k}}|\mathrm{d}u.
\end{aligned}
\end{equation*}
By Gronwall's inequality, we have
\begin{equation*}
|X_{t}-X_{t_{k}}|\leq\bigg(|X_{t_{k}}|h\theta+\sup_{t_{k}\leq t\leq t_{k+1}}\big(\sigma|W_{t}-W_{t_{k}}|+(L_{t}-L_{t_{k}})\big)\bigg)e^{\theta(t-t_{k})}.
\end{equation*}
It follows that
\begin{equation*}
\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|\leq \bigg(|X_{t_{k}}|h\theta+\sup_{t_{k}\leq t\leq t_{k+1}}\big(\sigma|W_{t}-W_{t_{k}}|+(L_{t}-L_{t_{k}})\big)\bigg)e^{\theta h}.
\end{equation*}
By the properties of the process $L$, we have
\begin{equation*}
L_{t_{k+1}}-L_{t_{k}}=\max\big(0, A_{t_{k}}-X_{t_{k}}\big),
\end{equation*}
where $A_{t_{k}}=\sup_{t_{k}\leq t\leq t_{k+1}}\big\{\theta X_{t_{k}}(t-t_{k})-\sigma(W_{t}-W_{t_{k}})\big\}$.
Since almost all sample paths of Brownian motion are $\alpha$-H\"{o}lder continuous for $\alpha\in(0,\frac{1}{2})$, we have
\begin{equation*}
\begin{aligned}
\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|&\leq Ch^{\alpha}e^{\theta h}=O(h^{\alpha}),
\end{aligned}
\end{equation*}
where $C$ is a constant.
Then
\begin{equation}\label{eq10}
\begin{aligned}
&\frac{1}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t\\
\leq&\frac{\theta}{nh}\sum_{k=0}^{n-1}X_{t_{k}}\sup_{t_{k}\leq t\leq t_{k+1}}|X_{t}-X_{t_{k}}|h\\
=&O(h^{\alpha}),
\end{aligned}
\end{equation}
which goes to $0$ as $h\rightarrow0$.
Let $\phi_{k}(t)=X_{t_{k}}I_{\{t\in[t_{k},t_{k+1})\}}(t)$. Then we have
\begin{equation}\label{eq11}
\lim_{n \rightarrow \infty}\sum_{k=0}^{n-1}X_{t_{k}}\vartriangle_{k}W=\lim_{n \rightarrow \infty}\sum_{k=0}^{n-1}\int_{0}^{nh}\phi_{k}(t)\mathrm{d}W_{t}.
\end{equation}
By some similar arguments as in the proof of Theorem \ref{thm1}, we have
\begin{equation}\label{eq12}
\frac{1}{nh}\sum_{k=0}^{n-1}\sigma X_{t_{k}}\vartriangle_{k}W\stackrel{P}{\longrightarrow}0,\quad n\rightarrow\infty.
\end{equation}
Combining Eq. (\ref{eq3}), (\ref{eq10}) and (\ref{eq12}), we obtain the desired results.
\hfill$\square$\\
The following theorem establishes the asymptotic normality of the discrete-type LSE.
\begin{thm}\label{thm4}
Assume that $nh^{1+2\alpha}\rightarrow 0$ for $\alpha\in(0,1/2)$, as $n$ tends to infinity. The discrete-type LSE $\tilde{\theta}_{n}$ of $\theta$ admits the asymptotic normality, i.e.,
\begin{equation*}
\sqrt{nh}(\tilde{\theta}_{n}-\theta)\sim\mathcal{N}(0,2\theta),
\end{equation*}
as $n$ tends to infinity.
\end{thm}
\noindent\textbf{Proof of Theorem \ref{thm4}}. Note that
\begin{equation*}
\sqrt{nh}(\tilde{\theta}_{n}-\theta)=\frac{\frac{1}{\sqrt{nh}}\sum_{k=0}^{n-1}X_{t_{k}}\big(\int_{t_{k}}^{t_{k+1}}\theta(X_{t}-X_{t_{k}})\mathrm{d}t-\sigma\vartriangle_{k}W\big)}{\frac{1}{nh}\sum_{k=0}^{n-1}X^{2}_{t_{k}}h}.
\end{equation*}
By Eq. (\ref{eq10}), we have
\begin{equation}
\frac{1}{\sqrt{nh}}\sum_{k=0}^{n-1}X_{t_{k}}\int_{t_{k}}^{t_{k+1}}\theta (X_{t}-X_{t_{k}})\mathrm{d}t=O(\sqrt{nh^{1+2\alpha}}),
\end{equation}
which goes to $0$ as $n$ tends to infinity. By some similar arguments as in the proof of Theorem \ref{thm2}, we have
\begin{equation*}
\frac{\sigma}{\sqrt{nh}}\int_{0}^{nh}\sum_{k=0}^{n-1}\phi_{k}(t)\mathrm{d}W_{t}\sim\mathcal{N}\Big(0,\frac{\sigma^{4}}{2\theta}\Big).
\end{equation*}
By Eq. (\ref{eq11}), we have
\begin{equation*}
\frac{\sigma}{\sqrt{nh}}\sum_{k=0}^{n-1}X_{t_{k}}\vartriangle_{k}W\sim\mathcal{N}(0,\frac{\sigma^{4}}{2\theta}).
\end{equation*}
By Eq. (\ref{eq3}) and Slutsky's theorem, we obtain the desired results.
\hfill$\square$
\begin{rmk}
Our method can be applied to the reflected OU processes with two-sided reflecting barriers $(0,b)$, where $b\in(0,\infty)$. The two types of LSEs of a two-sided reflected OU process are
\begin{equation*}
\hat{\theta}_{T}=-\frac{\int_{0}^{T} X_{t}\mathrm{d} X_{t}-\int_{0}^{T}X_{t}\mathrm{d}L_{t}+\int_{0}^{T}X_{t}\mathrm{d}R_{t}}{\int_{0}^{T} X_{t}^{2} \mathrm{d} t},
\end{equation*}
and
\begin{equation*}
\tilde{\theta}_{n}=-\frac{\sum_{k=0}^{n-1}X_{t_{k}}(X_{t_{k+1}}-X_{t_{k}}-\vartriangle_{k}L+\vartriangle_{k}R)}{\sum_{k=0}^{n-1}X_{t_{k}}^{2}h},
\end{equation*}
where $R$ is the minimal continuous increasing process such that $X\leq b$. The unique invariant density is given by \citep{Linetsky2005}
\begin{equation*}
p(x)=\frac{\sqrt{2 \theta}}{\sigma} \frac{\phi\left(\frac{\sqrt{2 \theta}}{\sigma} x\right)}{\Phi\left(\frac{\sqrt{2 \theta}}{\sigma} b\right)-\frac{1}{2}}, \quad x \in[0, b],
\end{equation*}
where $\Phi(y)=\int_{-\infty}^{y}\phi(u)\mathrm{d}u$.
Hence the consistency and asymptotic distributions of the two estimators can be obtained; the proofs are similar to those of Theorems \ref{thm1}--\ref{thm4} and are omitted here.
\end{rmk}
\section{Numerical results}\label{sec4}
In this section, we present some numerical results. For a Monte Carlo simulation of the reflected OU process, one can refer to \cite{Lepingle(1995)}, which is known to yield the same rate of convergence as the usual Euler–Maruyama scheme.
We set the time between discretization steps to $h=0.01$. We perform $N=1000$ Monte Carlo simulations of the sample paths generated by the model under different settings. The parameter estimates are evaluated by the bias, standard deviation (Std.dev) and mean squared error (MSE). We also report the asymptotic variance (Asy.var) of $\sqrt{nh}(\tilde{\theta}_{n}-\theta)$. The results are presented in Table \ref{table1}.
We emphasize that the empirical asymptotic variance closely approximates $2\theta$ across different settings of $\theta$, which verifies the explicit closed-form formulas proposed in Theorems \ref{thm2} and \ref{thm4}.
Table \ref{table1} summarizes the main findings over 1000 simulations. We observe that as the sample size increases, the bias decreases and remains small, and that the empirical and model-based standard errors agree reasonably well. The performance improves with larger sample sizes.
The distributions of the proposed estimator under two different settings are illustrated as histograms in Figures \ref{fig1} and \ref{fig2}. In each figure, the standard normal density is overlaid as a solid curve, and the histogram approximates it closely. Thus, the LSEs work well whether $\theta$ is relatively large ($\theta=0.5$) or small ($\theta=0.2$), and whether the time horizon is fairly short ($T=100$) or long ($T=1000$).
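To make the simulation setup concrete, the following sketch generates a reflected OU path with a plain Euler scheme reflected at zero (a simpler stand-in for the L\'epingle scheme cited above; the scheme choice and function names are illustrative assumptions, not the exact implementation behind Table \ref{table1}) and evaluates the discrete-type LSE $\tilde{\theta}_{n}$.

```python
import numpy as np

def simulate_reflected_ou(theta, sigma, h, n, x0=0.0, seed=0):
    """Euler scheme with reflection at 0: any excursion below zero is
    pushed back, and the push is recorded as the increment of L."""
    rng = np.random.default_rng(seed)
    x = np.empty(n + 1)
    dL = np.empty(n)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(h), size=n)
    for k in range(n):
        free = x[k] - theta * x[k] * h + sigma * dW[k]  # unreflected step
        dL[k] = max(0.0, -free)                          # regulator increment
        x[k + 1] = free + dL[k]
    return x, dL

def discrete_lse(x, dL, h):
    """Discrete-type LSE tilde{theta}_n from the contrast function."""
    xk = x[:-1]
    return -np.sum(xk * (x[1:] - xk - dL)) / (np.sum(xk**2) * h)

x, dL = simulate_reflected_ou(theta=0.5, sigma=0.2, h=0.01, n=100_000)
print(discrete_lse(x, dL, h=0.01))  # estimate of theta (true value 0.5)
```

With $nh=1000$ the theoretical standard deviation of the estimate is about $\sqrt{2\theta/(nh)}\approx 0.03$, so the printed value typically falls within a few hundredths of the true $\theta$, consistent with the MSE column of Table \ref{table1}.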
\begin{table}[htp]
\caption{Simulation results}
\begin{tabular}{ccccc}
\hline
\makebox[0.2\textwidth]{True parameter} & \makebox[0.1\textwidth]{} & \makebox[0.18\textwidth]{$n=10^{3}$} & \makebox[0.18\textwidth]{$n=10^{4}$} & \makebox[0.18\textwidth]{$n=10^{5}$}\tabularnewline
\hline
\hline
$\theta=0.5$,
& Bias & 0.2006 & 0.0129 & -0.0076\tabularnewline
$\sigma=0.2$
& Std.dev & 0.4380 & 0.1040 & 0.0310\tabularnewline
& Asy.var & 1.9100 & 1.0800 & 0.9620\tabularnewline
& MSE & 0.2320 & 0.0110 & 0.0010\tabularnewline
\hline
$\theta=0.5$,
& Bias & 0.1890 & 0.0171 & -0.0084\tabularnewline
$\sigma=0.5$
& Std.dev & 0.4550 & 0.1080 & 0.0315\tabularnewline
& Asy.var & 2.0700 & 1.1700 & 0.9900\tabularnewline
& MSE & 0.2430 & 0.0120 & 0.0011\tabularnewline
\hline
$\theta=1$,
& Bias & 0.1410 & -0.0124 & -0.0255\tabularnewline
$\sigma=1$
& Std.dev & 0.5030 & 0.1450 & 0.0439\tabularnewline
& Asy.var & 2.5300 & 2.1200 & 1.9300\tabularnewline
& MSE & 0.2730 & 0.0213 & 0.0026\tabularnewline
\hline
\end{tabular}
\label{table1}
\end{table}
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{100.001.05.02}
\caption{Histogram of $\sqrt{T}(\hat{\theta}_{T}-\theta)$ with $T=100$, $h=0.01$, $\theta=0.5$ and $\sigma=0.2$.}
\label{fig1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[scale=0.7]{1000.001.02.02}
\caption{Histogram of $\sqrt{T}(\hat{\theta}_{T}-\theta)$ with $T=1000$, $h=0.01$, $\theta=0.2$ and $\sigma=0.2$.}
\label{fig2}
\end{figure}
\section{Conclusion}\label{sec5}
In this paper, we present two types of least squares estimators for the reflected Ornstein–Uhlenbeck process based on continuously observed processes and discretely observed processes respectively. The consistency and the asymptotic normality have been studied. Moreover, we derive the explicit formula of the asymptotic variance, which is $2\theta$. Numerical results show that the least squares estimators work well with different settings.
Some further research may include investigating the statistical inference for the other reflected diffusions.
This work is supported by National Key R\&D Program of China (No.2018YFB1004401) and NSFC (No.61532021, 61772537, 61772536, 61702522)
\begin{appendices}
\section{Appendix}
\subsection{Implementation Details}
\vpara{Matching Component.}
For the multi-field modeling, we divide the attributes of a paper into two fields: the author names and the words in all the other attributes, including the paper's title and keywords, the published venue, and the target author's affiliation. We separate author names from others as author names have no literal or semantic overlaps with them, while other attributes are merged due to the overlaps of similar words.
Special symbols such as ``-" and ``." in author names and words are removed; stop words are removed and words are reduced to their stems.
We pre-train an embedding for each author name and word. Specifically, we use Word2Vec to train a name embedding in the context of all the coauthors' names in a paper, and a word embedding in the context of all the other words occurring in the title, keywords, venue and affiliation.
We set the embedding dimension to 100. To enable matrix operations, for each paper or candidate person, we limit the maximal number of author names to 100, the maximal number of words to 500, and the maximal number of papers published by each person to 100.
The hyper-parameters of the RBF kernel functions are set the same as~\cite{xiong2017end}. We use 11 RBF kernel functions, with the hyper-parameters $\mu=\{1, 0.9, 0.7, 0.5, 0.3, 0.1, -0.1, -0.3, -0.5, -0.7, -0.9\}$ and $\sigma = \{10^{-3}, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1\}$, where the kernel with $\mu=1.0$ and $\sigma=10^{-3}$ captures the exact matches, and other kernels capture soft matches between tokens.
\hide{
The details of the matching component include:
\begin{enumerate}
\item We first lookup the embeddings for each author name and word from a pre-trained embedding corpus;
\item Based on the pre-trained embeddings, for each field of the attributes, we generate a similarity embedding between the target paper and each paper of the candidate person by the basic profile model;
\item For a pair of the target paper and a paper of the candidate, we concatenate the similarity embeddings of different fields by the proposed attention mechanism in the multi-field profile model;
\item For a pair of the target paper and the candidate, we concatenate similarity embeddings of different papers of the candidate by the proposed attention mechanism in the multi-field multi-instance model;
\item We concatenate the multi-field profile feature vector with multi-field multi-instance feature vector to create the final similarity embedding;
\item A non-linear function $g$ is to used to transform the similarity embedding to a one-dimension matching score;
\item The triplet loss in Eq. (6) is used to train the matching component.
\end{enumerate}
}
The model parameters $\Theta$ include the word/author embeddings, which are fine-tuned in our model, the parameters of the attention mechanism, and the parameters of the non-linear function $g$ that transforms the similarity embedding into a matching score. For training, the mini-batch size is set to 80 and the learning rate to 0.001.
\vpara{Decision Component.}
The decision component is a basic multilayer perceptron (MLP) with four fully connected layers followed by a ReLU function. More complex models obtain similar performance to this basic MLP.
The model parameters $\Phi$ are the parameters of the MLP.
For training, the mini-batch size is set to 128 and the learning rate to 0.001.
\subsection{Running Environment}
We implement the model by Tensorflow and run the code on an Enterprise Linux Server with 40 Intel(R) Xeon(R) CPU cores (E5-2640 v4 @ 2.40GHz and 252G memory) and 1 NVIDIA Tesla V100 GPU core (32G memory).
\subsection{Baselines}
\label{sec:baseline}
The features extracted for the SVM model are listed in Table~\ref{tb:features}.
{\small \begin{table}[t]
{\caption{Features extracted for the SVM model. \small{$p$: target paper, $a$: target author in $p$, $c$: candidate person.} }\label{tb:features}}
\vspace{-0.08in}
{\renewcommand{\arraystretch}{1}%
{
\setlength{\extrarowheight}{1pt}
\begin{tabular}{
@{}c@{ } l@{}}
\noalign{ \hrule height 1pt}
\textbf{No.} & \textbf{Feature description} \\ \hline
\textbf{1} & Paper number of $c$\\
\textbf{2} & Distinct venue number of $c$\\ \hdashline
\textbf{3} & Frequency of $a$'s coauthors in $c$\\
\textbf{4} & Ratio of $a$'s coauthors in $p$'s author names\\
\textbf{5} & Ratio of $a$'s coauthors in $c$'s author names\\ \hdashline
\textbf{6} & Frequency of $a$'s affiliation in $c$\\
\textbf{7} & Ratio of $a$'s affiliation in the same name's affiliations in $c$\\
\textbf{8} & Cosine similarity of the affiliations between $a$ and $c$'s same names\\
\textbf{9} & Jaccard similarity of the affiliations between $a$ and $c$'s same names\\ \hdashline
\textbf{10} & Frequency of $p$'s venue in $c$\\
\textbf{11} & Ratio of $p$'s venue in $c$\\
\textbf{12} & Cosine similarity between $p$'s venue and $c$'s venues \\
\textbf{13} & Jaccard similarity between $p$'s venue and $c$'s venues \\ \hdashline
\textbf{14} & Cosine similarity between $p$'s title and $c$'s titles\\
\textbf{15} & Jaccard similarity between $p$'s title and $c$'s titles\\
\textbf{16} & Cosine similarity between $p$'s keywords and $c$'s keywords\\
\textbf{17} & Jaccard similarity between $p$'s keywords and $c$'s keywords\\
\noalign{\hrule height 1pt}
\end{tabular}}
}
\end{table} }
\end{appendices}
\section{CONNA}
In this section, we first give an overview of the end-to-end framework, then introduce the matching component, which matches the most likely candidate to the target pair, and the decision component, which decides whether or not to assign the top matched candidate to the target pair. Finally, we introduce how to self-correct the errors of the two components by jointly fine-tuning them through reinforcement learning.
\subsection{Overview}
\label{sec:overview}
At first, given a target pair $\langle p,a\rangle$, the candidate persons $C$ are the persons whose names are relevant to that of the target author $a$. We define the relevant names as simple variants of $a$'s name, obtained by moving the last name to the first position and by keeping the initials of all names except the last name. For example, the variants of ``Jing Zhang" include ``Zhang Jing", ``J Zhang" and ``Z Jing". For annotating a dataset to train and evaluate name disambiguation models, this simple candidate generation strategy already yields sufficiently challenging candidates.
The whole process of name disambiguation is divided into offline training and online predicting, which is shown in Figure~\ref{fig:framework}.
During the offline training process, we first train a matching component to estimate the probability of matching each candidate to the target pair, making the matching probability of the right person higher than those of the wrong persons for each target pair. The matching component constructs the training data from $\mathcal{D}=\{(\langle p,a \rangle,C)\}$ as a set of triplets $\mathcal{D}^r = \{(\langle p,a \rangle, c^+, c^-)\}$, where $\langle p,a \rangle$ is the target paper-author pair, $c^+$ is the real right person and $c^-$ is a wrong person from the candidates. The objective is to make $\langle p,a \rangle$ closer to $c^+$ than to $c^-$.
Then, we train a decision component to accept each sample $(\langle p,a \rangle,\hat{C}) \in \hat{\mathcal{D}}$ as the input and output a label $\hat{y}$ for the top matched person $\hat{c} \in \hat{C}$, where $\hat{C}$ is ranked by the trained matching component, $\hat{y}=1$ indicates $\hat{c}$ is the right person and $\hat{y}=0$ indicates $\hat{c}$ is the wrong person.
We construct the training data $D^c$ for the decision component by extracting $(\langle p,a \rangle,c^+)$ as the positive instance (i.e., $y=1$) and $(\langle p,a \rangle,\hat{c}^-)$ as the negative instance (i.e., $y=0$) from each sample $(\langle p,a \rangle,\hat{C})$, where $\hat{c}^-$ indicates the top matched wrong person in $C$.
Finally, we fine-tune the matching component based on the feedback (i.e., error cases) of the decision component, and then fine-tune the decision component based on the updated output of the matching component. Essentially, the matching component tries to keep the relative distances between the right and the wrong persons of each target pair, while the decision component is devoted to optimizing the absolute positions of the top matched persons of all the target pairs found by the matching component.
During the online predicting process, to disambiguate a target pair $\langle p,a \rangle$, the matching component first finds the top matched candidate person $\hat{c}$; then, based on the similarity features $\phi(\langle p,a \rangle,\hat{c})$ output by the matching component, the decision component predicts the label $\hat{y}$ for $\hat{c}$ and finally assigns the person $c^*$ to $\langle p,a \rangle$, where $c^* = \hat{c}$ if $\hat{y}=1$ and $c^* = $ NIL otherwise.
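The online prediction logic described above can be summarized in a few lines; `match_score` and `decide` below are placeholders standing in for the trained matching and decision components.

```python
def predict(pair, candidates, match_score, decide, nil="NIL"):
    """Rank candidates with the matching component, then let the
    decision component accept or reject the top-ranked person."""
    if not candidates:
        return nil
    top = max(candidates, key=lambda c: match_score(pair, c))
    return top if decide(pair, top) else nil
```

In the full system, `decide` would consume the similarity features $\phi(\langle p,a\rangle,\hat{c})$ produced by the matching component rather than the raw pair.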
\subsection{Matching}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/sup_framework_TKDE.png}
\caption{\label{fig:framework} The whole framework of training and predicting.}
\end{figure}
\vpara{Basic Profile Model (BP).}
\label{sec:interaction}
Let's imagine how humans assign a paper to a person. Humans usually browse all the papers published by the person to understand her/his affiliation, overall research interest, and frequent coauthors, and then compare them with those of the paper. In other words, humans directly compare the person's profile with the target pair, which guides the design of our model. Thus, we name the model the basic profile model. Specifically, we merge all the attributes of a paper and divide them into a set of tokens to represent the paper, and then merge the tokens of all the papers of a person into a unified set of tokens to represent the person's profile. Based on the token-based representations of the target paper and the person, we can estimate the similarity between them.
Note that a complete author name or a word in the title, keywords, venue, or affiliation is viewed as a token.
Some metrics such as the Jaccard coefficient~\cite{Salto:1983} and cosine similarity~\cite{Salto:1983} can easily capture the exact matches. However, they suffer from the sparsity of the token-based representations. For example, the similarity is zero if two representations do not contain any same tokens, even if they are semantically similar. On the other hand, recently, some representation-based models~\cite{hu2014convolutional,huang2013learning} can successfully capture the soft/semantic similarities, as they embed the high-dimensional sparse features into low-dimensional dense representations. Through training on the labeled data, the model can reduce the distance between the semantically similar inputs in the low-dimensional space. However, these models may suffer from the problem of semantic drift. For example, two token-based representations with many overlapped tokens may become dissimilar after being embedded by the model, as the global representation may dilute the effect of the exact same tokens by other different tokens. In summary, the above two types of methods are good at either exact matching or soft matching. To capture both the exact and soft matches, we adopt the interaction-based models~\cite{dai2018convolutional,hu2014convolutional,xiong2017end} widely used in information retrieval. The interaction-based models first build a similarity matrix between each candidate person and the target pair and then apply an aggregation function to extract features from the matrix. These models avoid learning the global representations and thus can reduce the issue of semantic drift.
\textit{Similarity Matrix.}
We represent the matches between each candidate and the target pair as a similarity matrix $\mathbf{S}$, with each element $\mathbf{S}_{ij}$ standing for the basic interaction, i.e., the cosine similarity
$\mathbf{S}_{ij} = \frac{\mathbf{p}_i \cdot \mathbf{c}_j }{||\mathbf{p}_i||\cdot ||\mathbf{c}_j||}$ between $\mathbf{p}_i$ and $\mathbf{c}_j$, where $\mathbf{p}_i$ represents the embedding of the $i$-th token in the target pair $\langle p,a \rangle$ and $\mathbf{c}_j$ represents the embedding of the $j$-th token in the candidate person $c$, which can be pre-trained by Word2Vec~\cite{mikolov2013efficient,li2019scaling} or BERT~\cite{devlin2019bert}.
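As a concrete sketch, the similarity matrix can be computed by row-normalizing the two embedding matrices (a minimal NumPy illustration; the embeddings themselves would come from the pre-trained Word2Vec or BERT vectors mentioned above, and the small `eps` is an added assumption to guard against zero-norm rows):

```python
import numpy as np

def similarity_matrix(P, C, eps=1e-12):
    """Cosine-similarity matrix S between the N token embeddings of the
    target pair (P: N x d) and the M token embeddings of the candidate
    person (C: M x d). Returns an N x M matrix with S_ij = cos(p_i, c_j)."""
    Pn = P / (np.linalg.norm(P, axis=1, keepdims=True) + eps)
    Cn = C / (np.linalg.norm(C, axis=1, keepdims=True) + eps)
    return Pn @ Cn.T
```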
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figures/sup_interaction.png}
\caption{\label{fig:profilemodel} The basic profile model.}
\end{figure}
\textit{Aggregation Function.}
For sentence matching, CNN~\cite{hu2014convolutional,pang2016text} and RNN~\cite{wan2016match} are widely used as aggregation functions to extract matching patterns from the similarity matrix. However, different from sentence matching, titles, keywords, venues and affiliations are all short texts. We need to pay more attention to the occurrence of exactly the same or semantically similar tokens. Thus we adopt an RBF kernel aggregation function~\cite{xiong2017end} to extract features. Specifically, the $i$-th row $\mathbf{S}_i = \{S_{i1},\cdots, S_{iM}\}$ of the similarity matrix --- the similarities between the $i$-th token of the target pair and each token of the candidate person --- is transformed into a feature vector $\mathbf{K}(\mathbf{S}_i)$, with the $k$-th element $K_k(\mathbf{S}_i)$ computed by the $k$-th RBF kernel with mean $\mu_k$ and variance $\sigma_k$. Then the feature vectors of all the tokens in the target pair are summed up into the final similarity embedding $\phi(\langle p,a \rangle,c)$, i.e.,
\beqn{
\phi(\langle p,a \rangle,c) &=& \sum_{i=1}^{N} \log \mathbf{K}(\mathbf{S}_i) ,\\ \label{eq:phi}
\mathbf{K}(\mathbf{S}_i)&=& \{K_1(\mathbf{S}_i), \cdots, K_K(\mathbf{S}_i)\}, \\ \label{eq:K_vector}
K_k(\mathbf{S}_i) &=& \sum_{j=1}^{M} \exp \left[ - \frac{(S_{ij} - \mu_k)^2}{2\sigma_k^2}\right]. \label{eq:K}
}
The kernel with $\mu=1$ and $\sigma \rightarrow 0$ only considers the exact matches between tokens, and others, e.g., with $\mu=0.5$, counts the number of tokens in the candidate person whose similarities to a queried token in the target paper are close to 0.5. Thus, the kernel aggregation not only emphasizes the effect of exact matching but also captures the soft matches. Figure~\ref{fig:profilemodel} illustrates the model.
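A minimal NumPy sketch of this kernel aggregation (Eqs. \eqref{eq:phi}--\eqref{eq:K}), using the 11 kernels with the $\mu$ and $\sigma$ values listed in the appendix; the small $\epsilon$ inside the logarithm is an added assumption for numerical stability:

```python
import numpy as np

MU = np.array([1.0, 0.9, 0.7, 0.5, 0.3, 0.1, -0.1, -0.3, -0.5, -0.7, -0.9])
SIGMA = np.array([1e-3] + [0.1] * 10)

def kernel_pooling(S, mu=MU, sigma=SIGMA, eps=1e-10):
    """S: N x M cosine-similarity matrix between the target pair's N tokens
    and the candidate's M tokens. Returns the K-dim similarity embedding."""
    # K_k(S_i) = sum_j exp(-(S_ij - mu_k)^2 / (2 sigma_k^2))   -> N x K
    K = np.exp(-((S[:, :, None] - mu) ** 2) / (2.0 * sigma ** 2)).sum(axis=1)
    # phi = sum_i log K(S_i)                                    -> K
    return np.log(K + eps).sum(axis=0)
```

The first kernel ($\mu=1$, $\sigma=10^{-3}$) fires only on near-exact matches, while the wider kernels count soft matches around their respective similarity levels.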
\vpara{Multi-field Profile Model (MFP).}
\label{sec:multi-field}
The basic profile model does not distinguish different fields of attributes but groups them together. However, it is not meaningful to compare different attributes with each other, such as comparing authors with venues. Moreover, comparing coauthor names is more informative than comparing other attributes. So we build a basic profile model on each field of the attributes separately, i.e., different attributes are not compared with each other, and then aggregate the similarity embeddings by the corresponding attention coefficients estimated by:
\beqn{\label{eq:attention}
\alpha_f &=& \frac{ \exp (w\phi(A_f^p, A_f^c) + b)}{ \sum_{f'}\exp(w\phi(A_{f'}^p,A_{f'}^c) + b)}, \\ \nonumber
\phi(\langle p,a \rangle, c) &=& \sum_f \alpha_f \phi(A_f^p, A_f^c), \nonumber
}
\noindent where $\phi(A_f^p, A_f^c)$ denotes the similarity embedding between $A_f^p$ and $A_f^c$ with $A_f^p$ being the $f$-th field of $p$ and $A_f^c$ being that of the candidate person $c$. Notations $w$ and $b$ denote the parameters. The model is named as multi-field profile model and is shown in Figure~\ref{fig:multi-field}.
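The attention in Eq. \eqref{eq:attention} is a softmax over per-field scores; a minimal sketch, with the learned $w$ and $b$ passed in for illustration:

```python
import numpy as np

def field_attention(phis, w, b=0.0):
    """phis: F x K matrix of per-field similarity embeddings phi(A_f^p, A_f^c).
    Returns the attention-weighted K-dim aggregated embedding."""
    scores = phis @ w + b
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ phis
```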
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figures/fig5_multi_field.png}
\caption{\label{fig:multi-field} The multi-field profile model.}
\end{figure}
\vpara{Multi-field Multi-instance Model (MFMI).}
\label{sec:multi-field-multi-instance}
A person usually publishes multiple papers. Some persons even publish papers on multiple topics in multiple venues and collaborate with multiple communities of persons. In this scenario, a target paper may be similar to only a small part of a person's diverse profile while being totally irrelevant to the rest. However, the multi-field profile model may dilute the similarity with this small part when summing the similarities with all the tokens in a person's profile together by Eq.\eqref{eq:K}. To reduce the impact of the irrelevant papers, we build a multi-field model between the target pair and each published paper of the candidate person, and then aggregate the resultant similarity embeddings of all the published papers by their corresponding attention coefficients, which are estimated in the same way as Eq.\eqref{eq:attention}. The model is named the multi-field multi-instance model and is shown in Figure~\ref{fig:multi-instance}.
\vpara{The Combination Model (CONNA$^r$).}
\label{sec:combinationmodel}
Essentially, the multi-field profile model captures the global similarities between the
target pair and a person's profile, while the multi-field multi-instance model considers the local similarities between the target pair and each of the papers published by a person. Both of them can be explained
intuitively, thus we can combine their output similarity
embeddings together as the final feature embedding.
We summarize different component variants in Table \ref{tb:modelcomponents}.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{figures/fig6_multi_field_instance.png}
\caption{\label{fig:multi-instance} The multi-field multi-instance model.}
\end{figure}
\vpara{Loss Function.}
\label{sec:triplets}
We use the triplet loss function to optimize the matching component. Similar ideas have also been used in~\cite{chen2017task,zhang2018camel,zhang2018name}.
Let $\mathcal{D}^r$ be a set of triplets with each triplet denoted as $(\langle p,a \rangle, c^{+}, c^{-})$, where $c^+$ is the right person of the target pair $\langle p,a \rangle$ and $c^-$ is a wrong person sampled from the candidates, the triplet loss function $\mathcal{L}(\Theta)$ is defined as:
{\footnotesize \beqn{
\label{eq:rankloss}
\mathcal{L}(\Theta) \!
&=& \!\!\!\!\!\!\!\!\!\!\!\sum_{(\langle p,a \rangle,c^+,c^-) \in \mathcal{D}^r} \mathcal{L}_{\Theta}(\langle p,a \rangle,c^+,c^-) \\
&=& \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\sum_{(\langle p,a \rangle,c^+,c^-) \in \mathcal{D}^r} \!\!\!\!\!\!\!\!\!\max \{ 0, g(\phi(\langle p,a \rangle, c^{-}))-g(\phi(\langle p,a \rangle, c^{+}))+m\}, \nonumber
}}
\noindent where $g$ is defined to be a non-linear function that transforms the similarity embedding $\phi$ into a one-dimensional matching score that can be compared between the positive pair $(\langle p,a \rangle, c^{+})$ and the negative pair $(\langle p,a \rangle, c^{-})$. Notation $\Theta$ indicates the parameters of the matching component and $m>0$ is a margin enforcing a distance between positive pairs and negative pairs. We optimize the triplet loss instead of directly optimizing the cross-entropy loss between the output matching score and the true label because we aim at finding the top matched candidate from all the candidates for each target pair; thus the objective is to keep a relative order within the candidate persons of each target pair rather than a global order among all the $(p,c)$ pairs. The triplet loss is more direct and closer to this objective than the cross-entropy loss.
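The triplet hinge loss of Eq.\eqref{eq:rankloss} can be sketched in a few lines, treating $g(\phi(\cdot))$ as an already-computed scalar matching score:

```python
def triplet_loss(score_pos, score_neg, margin=1.0):
    """Hinge loss of Eq. (rankloss): zero once the right person's score
    exceeds the wrong person's score by at least `margin`."""
    return max(0.0, score_neg - score_pos + margin)

def batch_triplet_loss(triplets, margin=1.0):
    """Sum the loss over a batch of (positive_score, negative_score) pairs."""
    return sum(triplet_loss(sp, sn, margin) for sp, sn in triplets)
```

Only triplets where the wrong person is ranked too close to (or above) the right person contribute gradient, which is exactly the relative-order objective described above.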
\subsection{Decision}
\label{sec:classification}
The decision component is built upon the output of the matching component to identify the right person, who can be either the top matched real person or NIL.
The candidate persons $C$ of each sample $(\langle p,a \rangle,C) \in \mathcal{D}$ are ranked into $\hat{C}$ based on the matching probabilities estimated by the matching component. Note that for the samples with $c^* = c^+$, the real right person $c^+$ may or may not be ranked first. The decision component is then trained to predict whether the first ranked person $\hat{c} \in \hat{C}$ is a right person (i.e., $\hat{y}=1$) or a wrong person (i.e., $\hat{y}=0$). To achieve this goal, we construct the training data $\mathcal{D}^c$ from the ranked dataset $\hat{\mathcal{D}}=\{(\langle p,a \rangle,\hat{C}) \}$. Specifically, from each sample $(\langle p,a \rangle,\hat{C})$, we extract $(\langle p,a \rangle,c^+)$ as the positive instance (i.e., $y=1$) and extract $(\langle p,a \rangle,\hat{c}^-)$ as the negative instance (i.e., $y=0$), where $\hat{c}^-$ indicates the first ranked wrong person in $C$.
In other words, the positive instances are only extracted from the samples with $c^* = c^+$, while the negative instances are extracted from both the samples with $c^* = c^+$ and the samples with $c^* = \text{NIL}$.
For an instance $(\langle p,a \rangle,c)$, we use the similarity embedding $\phi(\langle p,a \rangle,c)$ output by the matching component as its feature. Thus, $\mathcal{D}^c=\{(\phi(\langle p,a \rangle,c^+),y=1)\} \cup \{(\phi(\langle p,a \rangle,\hat{c}^-),y=0)\}$. Then we train a multi-layer perceptron $h(\Phi)$:
\beq{
\label{eq:decision}
h(\Phi): \{\phi(\langle p,a \rangle,c)\} \rightarrow \{y\},
}
\noindent where $y$ is the label of the instance $(\langle p,a \rangle,c)$, whose value equals 1 if $(\langle p,a \rangle,c)$ is a positive instance and 0 otherwise.
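The construction of $\mathcal{D}^c$ from the ranked lists can be sketched as below; the candidate ids and one-dimensional similarity embeddings are illustrative stand-ins, and `None` marks a NIL sample with no right person.

```python
def build_decision_data(ranked_samples):
    """Construct decision training instances from ranked candidate lists.

    ranked_samples: list of (ranked, right_id) pairs, where `ranked` is
    an ordered list of (candidate_id, similarity_embedding) produced by
    the matching component and `right_id` is the right person's id
    (None for a NIL sample).
    """
    data = []
    for ranked, right_id in ranked_samples:
        # Positive instances come only from samples that have a right person.
        if right_id is not None:
            pos = next(phi for cid, phi in ranked if cid == right_id)
            data.append((pos, 1))
        # Negative instance: the first-ranked wrong person.
        neg = next(phi for cid, phi in ranked if cid != right_id)
        data.append((neg, 0))
    return data
```

Note that every sample, including NIL ones, contributes exactly one negative instance, matching the extraction rule stated above.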
\begin{table}
\newcolumntype{?}{!{\vrule width 1pt}}
\newcolumntype{C}{>{\centering\arraybackslash}p{3.5em}}
\caption{
\label{tb:modelcomponents} Matching component variants of CONNA.
\normalsize
}
\centering \scriptsize
\renewcommand\arraystretch{1.0}
\begin{tabular}{ll}
\toprule
Component variants & Key idea\\
\midrule
Basic Profile (BP)
& The basic interaction-based model
\\
Multi-field Profile (MFP)
& Build BP for each field
\\
Multi-field Multi-instance (MFMI)
& Build MFP for each instance
\\
CONNA$^r$
& Combine MFP and MFMI
\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Reinforcement Self-correction}
\label{sec:joint}
We finally fine-tune the two components by jointly training them to correct their errors by themselves.
The matching component can be viewed as the generator to generate the ranking list. Without the decision component, the triplet loss in Eq.\eqref{eq:rankloss} is used to measure whether the ranking list is good or not. However, as the final objective is to determine whether the top ranked candidate is the right person or not, the triplet loss is not enough to verify the effect. Fortunately, we can use the prediction result of the top ranked candidate by the decision component as the delayed feedback to the ranking results of the matching component.
Specifically, we can punish a ranking list whose top candidate is predicted wrongly and reward a ranking list whose top candidate is predicted correctly. Based on the reward, we update the matching component, expecting the ranking lists passed from the matching component to the decision component to become more accurate. Following this motivation, we propose fine-tuning the two components via reinforcement learning. Specifically, the objective is to maximize the expected reward of the ranking lists generated by the matching component:
\beq{
\label{eq:expectation}
J(\Theta) = \sum_{(\langle p,a \rangle,\hat{C}) \in \hat{\mathcal{D}}} p_{\Theta}(\langle p,a \rangle,\hat{C})R(y, \hat{y}),
}
\noindent where $\hat{\mathcal{D}}$ is the ranked training data, $p_{\Theta}(\langle p,a \rangle,\hat{C})$ is the probability of generating the ranking list $\hat{C}$ of the target pair $\langle p,a \rangle$ by the matching component, and $R(y,\hat{y})$ is the reward function defined as follows:
\beq{
\label{eq:reward}
R(y, \hat{y}) = \left\{\begin{array}{cl}
1 & \hat{y} = y; \\
0 & \mbox{otherwise.}
\end{array}\right.
}
\noindent where $\hat{y}$ is the predicted label for the top-ranked candidate $\hat{c}$ of $\hat{C}$ and $y$ is the ground truth label. The defined reward function encourages the matching component to float the right person at the top and push the wrong person away from the top. The policy gradient algorithm~\cite{sutton:00} is adopted to optimize the expected reward in Eq.\eqref{eq:expectation}, whose gradient is calculated as:
\beqn{
\label{eq:gradient} \nonumber
\nabla_{\Theta} J(\Theta) \!\!\!\!&=& \!\!\!\!\!\!\!\!\!\!\!\sum_{(\langle p,a \rangle,\hat{C}) \in \hat{\mathcal{D}}} R(y, \hat{y})\nabla p_{\Theta}(\langle p,a \rangle,\hat{C}), \\
\!\!\!\!&\simeq&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \sum_{(\langle p,a \rangle,\hat{c}, c^-) \in \mathcal{D}^r} \!\!\!\!\!\!\!R(y, \hat{y})\nabla \mathcal{L}_{\Theta}(\langle p,a \rangle,\hat{c}, c^- ).
}
Since the probability of a ranking list $\hat{C}$ is not easy to estimate, we transform $\hat{C}$ into a set of triplets, with each triplet including the target pair $\langle p,a \rangle$, the top ranked candidate $\hat{c} \in \hat{C}$ and a negative candidate $c^{-} \in \hat{C}$. Then the loss of each triplet in Eq.\eqref{eq:rankloss} is calculated and the losses of all the triplets are summed up to approximately measure the ranking performance of $\hat{C}$. Thus, the gradient $\nabla p_{\Theta}(\langle p,a \rangle,\hat{C})$ is approximated by $\nabla \mathcal{L}_{\Theta}(\langle p,a \rangle,\hat{c}, c^-)$ summed over all the triplets of $\hat{C}$.
Then the parameters $\Theta$ of the matching component can be updated by the gradient. After the matching component is tuned, the decision component is also updated based on the updated similarity embeddings output by the matching component. Algorithm~\ref{algo:joint_model} illustrates the joint training process, where we firstly pre-train the matching component and the decision component, and then jointly fine-tune the two components together.
\begin{algorithm}[t]
{\small \caption{Reinforcement Joint Training\label{algo:joint_model}}
\KwIn{A training set $\mathcal{D} = \{(\langle p,a \rangle,C)\}$.}
\KwOut{A matching component and a decision component parametrized by $\Theta$ and $\Phi$ respectively.}
Build $\mathcal{D}^r=\{(\langle p,a \rangle,c^+,c^-)\}$ from $\mathcal{D}$;\\
Pre-train $\Theta$ of the matching component on $\mathcal{D}^r$;\\
Rank $\mathcal{D}$ by the matching component to generate $\hat{\mathcal{D}}$;\\
Build $\mathcal{D}^c = \{(\phi(\langle p,a \rangle,c),y)\}$ from $\hat{\mathcal{D}}$;\\
Pre-train $\Phi$ of the decision component on $\mathcal{D}^c$;\\
\Repeat{Convergence}{
\For{ $(\langle p,a \rangle,\hat{C})\in \hat{\mathcal{D}}$ }{
Predict $\hat{y}$ for $\hat{c}$ by the decision component;\\
Calculate $R(y, \hat{y})$ by Eq.\eqref{eq:reward};\\
Calculate $\nabla_{\Theta} J(\Theta)$ by Eq.\eqref{eq:gradient};\\
$\Theta \rightarrow \Theta + \mu \nabla_{\Theta} J(\Theta)$, where $\mu$ is the learning rate;\\
}
Re-rank $\mathcal{D}$ to generate $\hat{\mathcal{D}}$ by the matching component;\\
Re-generate $\mathcal{D}^c$ from $\hat{\mathcal{D}}$;\\
Update $\Phi$ of the decision component on $\mathcal{D}^c$;\\
}
}
\end{algorithm}
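The reward and the reinforcement update at the core of this loop can be sketched in simplified scalar form. The reward follows Eq.\eqref{eq:reward}; reducing the per-list (pseudo-)gradient to a single scalar `grad` is purely an illustrative assumption, standing in for the summed triplet-loss gradients of Eq.\eqref{eq:gradient}.

```python
def reward(y_true, y_pred):
    """Eq. (reward): 1 when the decision component's prediction for the
    top-ranked candidate agrees with the ground-truth label."""
    return 1.0 if y_true == y_pred else 0.0

def reinforce_update(theta, ranked_lists, lr=0.01):
    """One REINFORCE-style sweep over ranked lists.

    ranked_lists: (grad, y_true, y_pred) per list, where `grad` is a
    scalar stand-in for that list's summed triplet-loss gradient.
    Each gradient is weighted by its delayed reward before the
    gradient-ascent update of the matching parameter theta.
    """
    for grad, y_true, y_pred in ranked_lists:
        theta += lr * reward(y_true, y_pred) * grad
    return theta
```

Ranking lists whose top candidate was predicted wrongly receive zero reward and therefore contribute nothing to the update, matching the punishment/reward intuition above.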
\section{Conclusion}
\label{sec:con}
This paper presents the first attempt to formalize and solve the problem of name disambiguation on the fly by considering different cases of assignments, in particular when a paper cannot be assigned to any existing persons in the system. We propose a novel joint model that consists of a matching component and a decision component, where a multi-field multi-instance interaction-based model is trained to match the candidates to each target paper, and then a classification decision model is trained to decide whether to assign the top matched candidate to the target paper or not. Through reinforcement joint fine-tuning, the two components can bootstrap each other and self-correct some of their errors. The experimental results on the recent largest dataset for name disambiguation demonstrate that the proposed model performs significantly better than state-of-the-art baseline methods. The model has already been deployed on AMiner to disambiguate the online papers.
\section{AMiner Data Observation}
\label{sec:Obser}
We collect the name disambiguation dataset from AMiner Billboard\footnote{https://www.aminer.cn/billboard}, which is a website publishing a variety of AMiner online real data benchmarks for data mining and data analysis. The dataset contains 800 common person names and 358,754 papers belonging to 12,484 persons.
Before presenting our model, we conducted an in-depth analysis of the AMiner dataset, which revealed some interesting phenomena.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{Figures/figure_author_dis}
\caption{\label{fig:distri_author} Distribution of the number of co-occurrence authors.}
\end{figure}
Let us consider a simple baseline, an SVM model that directly formalizes continuous name disambiguation as a binary classification problem. Given a paper and some candidate authors, the SVM model first extracts several traditional interaction features, such as the number of co-occurring authors and the similarities of the title, venue and organizations between the paper and each candidate author. The SVM then outputs the probability that the candidate author is the right author of the paper (predicted as 1), and finally we rank the candidate authors by this probability.
Following this process, we systematically evaluate the SVM's performance on the AMiner dataset. The evaluation metric is Hit Ratio@K (HR@K), a ranking metric measuring the percentage of candidate lists in which the right person is ranked within the top K. Here we empirically set K to 1 because we pay more attention to whether the top-ranked author is the right author or not. Unexpectedly, the SVM model achieves a fairly good HR@1 of 83.1\%, which raises a confusing question: is the continuous name disambiguation task really not very challenging?
\begin{table}
\centering
\caption{\label{tb:Different_case_per} Different Case Performance (\%).}
\begin{tabular}{cccc}
\noalign{ \hrule height 1.2pt}
\textbf{Case} & \textbf{\tabincell{c}{Easy Cases \\ (X $\geq$ 0.4)}} & \textbf{\tabincell{c}{Hard Cases \\ (X $<$ 0.4)}} & \textbf{Overall} \\
\noalign{ \hrule height 1.2pt}
\textbf{Data Ratio} & \textbf{77.86} & 22.14 & --- \\\hdashline
\textbf{HR@1} &\textbf{90} & 64.6 & 83.1 \\
\noalign{ \hrule height 1.2pt}
\end{tabular}
\end{table}
Liu et al.~\cite{liu2013s} study the problem of linking users across multiple online communities, which is similar to our task. In that paper~\cite{liu2013s}, based on a survey over a huge social network dataset, they come to the conclusion that traditional username-related features are important: some instances can be resolved precisely by using only these features.
Inspired by the conclusions in \cite{liu2013s}, we count the distribution of the number of co-occurring authors over all paper-candidate author pairs on the AMiner dataset. Figure~\ref{fig:distri_author} presents the result. The X-axis is the proportion of co-occurring authors among all authors of the paper; a larger ratio indicates that the candidate author is more likely to be the right person of the paper, and that the instance is easier to distinguish and predict. The Y-axis is the data proportion. From Figure~\ref{fig:distri_author}, we can see that the majority of instances have a large ratio of co-occurring authors. For the following analysis, we empirically draw a line at X = 0.4 and call the instances whose ratio of co-occurring authors is at least 0.4 easy cases and those below 0.4 hard cases. Table~\ref{tb:Different_case_per} reports the exact data ratio and performance of each case. Easy cases account for the majority of all instances, up to 77.86\%, and their HR@1 performance is a rather good 90\%. The hard cases occupy a small part, about 22.14\%, and show a poor HR@1 of only 64.6\%. Based on these observations, we comprehensively analyze the continuous name disambiguation task in the real online scenario and conclude that the task is challenging and meaningful precisely on the hard cases, because superior paper assignment results are an important prerequisite for effective expertise search and personalized academic services.
In the experiment section, we compare our proposed model with the baseline models on both the overall dataset and the different case subsets.
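The easy/hard split used in this observation can be sketched as follows; representing author names as plain strings is an illustrative simplification.

```python
def coauthor_ratio(paper_authors, candidate_coauthors):
    """Proportion of a paper's authors that also appear among the
    candidate author's coauthors (the X-axis of the distribution)."""
    if not paper_authors:
        return 0.0
    overlap = len(set(paper_authors) & set(candidate_coauthors))
    return overlap / len(paper_authors)

def split_cases(pairs, threshold=0.4):
    """Bucket (paper_authors, candidate_coauthors) pairs into easy cases
    (ratio >= threshold) and hard cases (ratio < threshold)."""
    easy = [p for p in pairs if coauthor_ratio(*p) >= threshold]
    hard = [p for p in pairs if coauthor_ratio(*p) < threshold]
    return easy, hard
```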
\section{Experiment}
\label{sec:exp}
\hide{
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{Figures/reinforce}
\caption{\label{fig:reinforce} An Illustration of Reinforcement Joint Fine-tuning.}
\end{figure}
}
All codes and data used in the paper are publicly available\footnote{https://github.com/BoChen-Daniel/TKDE-2019-CONNA}.
\subsection{Experimental Settings}
\subsubsection{Datasets}
We evaluate CONNA on two name disambiguation datasets:
\textit{OAG-WhoIsWho}\footnote{https://www.aminer.cn/whoiswho}: Is the largest human-annotated name disambiguation dataset so far, which consists of 608,363 papers belonging to 57,138 persons of 642 common names. Existing works either leverage the disambiguation results produced by algorithms in some well-known academic websites such as Scopus~\cite{reijnhoudt2014}, CiteSeerX~\cite{zhang2017name}, Web of Science~\cite{Backes:2018} and PubMed~\cite{torvik2009author}, or annotate much smaller datasets manually, such as 8,453~\cite{han2004two}, 6,921~\cite{kang2011construction}, 7,528~\cite{tang2012unified} and 2,946 annotated persons~\cite{muller2017data}. Compared to the most popular KDD Cup 2013 challenge dataset, OAG-WhoIsWho is superior both in quantity (608,363 vs 424,384 papers) and quality (fully human-labeled vs partially human-labeled). We annotate the dataset as follows.
From the AMiner system, we choose 642 highly ambiguous names, create the relevant names by the candidate generation strategy in \secref{sec:overview}, select all the authors for each name, collect all the papers assigned to each author, and extract the title, authors, organizations, keywords and abstract of each paper. We also collect all the unassigned papers for each name from AMiner. Since the assigned papers may be wrongly assigned and the papers are not fully assigned, additional efforts are needed to clean and reassign the papers. First, we clean the dataset by removing the wrongly assigned papers or splitting the papers of an author into different clusters. Second, we annotate the unassigned papers or merge the papers of two authors. We aim to clean the dataset as much as possible while only adding highly reliable assignments. To this end, we hire only one annotator to perform the cleaning step, but hire three annotators to perform the assignment step independently and obtain the final results by majority voting over their annotations. Besides, an annotation tool is developed to recommend highly reliable removing, splitting, assigning or merging operations to the annotators to simplify the human annotation process\footnote{https://www.aminer.cn/annotation}.
\textit{KDD Cup}~\cite{roy2013microsoft}: Is the dataset used in the KDD Cup 2013 challenge-1 to address the name disambiguation problem. We collect the training data containing 3,739 authors and 123,447 papers, as only the training labels are published. We only use title, organizations, keywords and abstract as features, but ignore coauthor names. As shown in Figure~\ref{subfig:kdd-cup}, the distribution of the same-coauthor ratio is extremely skewed. According to Eq.\eqref{eq:same-coauthor-ratio}, a same-coauthor ratio equalling 1 means the second most similar candidate and the least similar candidate have the same number of same-coauthors with the target pair. In other words, the most similar candidate is significantly different from all the other candidates when only the coauthor name features are considered.
Thus, the fact that 98\% of the target pairs hold a same-coauthor ratio of 1.0 means that using only the coauthor names can correctly assign 98\% of the target pairs.
In fact, when the coauthor name feature is included, any baseline, including our model, can easily achieve approximately 99\% HR@1. Thus, to increase the difficulty, we ignore coauthor names on this dataset.
\iffalse
\begin{figure}[t]
\centering
\includegraphics[width=0.2\textwidth]{Figures/kdd_cup_ratio}
\caption{\label{fig:kdd-cup} Distribution of the same-coauthor ratio on KDD Cup dataset.}
\end{figure}
\fi
\begin{figure}
\centering
\subfigure[]{\label{subfig:kdd-cup}
\includegraphics[width=0.22\textwidth]{figures/kdd_cup_ratio.png}
}
\centering
\subfigure[]{\label{subfig:name_attr}
\includegraphics[width=0.22\textwidth]{figures/name_attr.png}
}
\caption{\label{fig:fig7} (a) Distribution of the same-coauthor ratio on KDD Cup dataset; (b) The effects of different attributes.}
\end{figure}
\subsubsection{Comparison Methods}
\vpara{Matching Component.}To evaluate the matching performance, we compare feature engineering-based GBDT and three embedding-based models:
\textit{GBDT}: Is a widely used model for solving the KDD Cup 2013 challenge-1~\cite{efimov2013kdd,li2013feature,zhao2013scorecard}. We train a GBDT model to estimate a matching probability between each candidate and the target pair. The features extracted for GBDT are shown in Table~\ref{tb:features}. As the model can directly predict a label for each candidate, it can also be used for deciding to assign the most matched candidate to the target pair if its label is 1.
\textit{Camel}~\cite{zhang2018camel}: Is a representation-based model. Given a triplet $(\langle p,a \rangle,c^+,c^-)$, it first represents $\langle p,a \rangle$ by $p$'s title, and represents $c^+$ and $c^-$ by their identities. Then it calculates the matching scores for both $(\langle p,a \rangle,c^+)$ and $(\langle p,a \rangle,c^-)$, and finally optimizes the difference between their matching scores.
\textit{HetNetE}~\cite{chen2017task}: Is similar as Camel except that $\langle p,a \rangle$ is represented by all its attributes.
\textit{GML}~\cite{zhang2018name}: Is a representation-based model that identifies whether two papers are written by the same person through optimizing a triplet loss. The model accepts the pre-trained embeddings of all the tokens in a paper as input and outputs an embedding for the paper. We represent a person by averaging all his/her papers' embeddings.
\vpara{Decision Component.}
To evaluate the performance of the decision component, we compare two strategies:
\textit{Threshold}~\cite{gottipati2011linking}: Picks the top matched person whose score is lower than a threshold as NIL, where the threshold is determined as the value when the best accuracy is obtained on a validation set. We use the same matching model as our proposed method to obtain the top matched persons.
\textit{Heuristic Loss}~\cite{clark2016deep}: Unifies the NIL decision and the matching process by incorporating the costs of assigning a paper to a wrong NIL person or assigning an unlinkable paper to a wrong existing person into the loss function of ranking the wrong person before the right person. NIL is inserted as an additional candidate person for each paper. The representations of $p$ and $c$, which are produced in the same way as in GML, are concatenated as the input of a neural network to produce their matching score. When $c = \text{NIL}$, the representation of $c$ is not included.
\vpara{Variants of Our Model.}
We also compare different variants where CONNA$^r$(BP), CONNA$^r$(MFP), CONNA$^r$(MFMI)\space and CONNA$^r$\space correspond to the variants in Table~\ref{tb:modelcomponents}. CrossEntropy\space modifies CONNA$^r$\space by replacing the triplet loss with the cross-entropy loss, which can be directly used for deciding the assignments. CONNA\space trains CONNA$^r$\space plus a decision component once. CONNA+Fine-tune\space jointly trains the two components in CONNA.
{\scriptsize \begin{table}[t]
{\caption{Features extracted for GBDT model. \small{$p$: target paper, $a$: target author in $p$, $c$: candidate person.} \label{tb:features}}}
\vspace{-0.08in}
{\renewcommand{\arraystretch}{1}%
{
\setlength{\extrarowheight}{1pt}
\begin{tabular}{
@{}c@{ } l@{}}
\noalign{ \hrule height 1pt}
\textbf{No.} & \textbf{Feature description} \\ \hline
\textbf{1} & The number of the papers of $c$\\
\hdashline
\textbf{2} & The number of the coauthors of $a$ in $p$\\
\textbf{3} & The number of the coauthors of $c$\\
\textbf{4} & The number of the same coauthors between $a$ and $c$\\
\textbf{5} & Ratio of the same coauthors between $a$ and $c$ in $p$'s coauthor names\\
\textbf{6} & Ratio of the same coauthors between $a$ and $c$ in $c$'s coauthor names\\ \hdashline
\textbf{7} & Frequency of $a$'s affiliation in $c$'s affiliations\\
\textbf{8} & Ratio of $a$'s affiliation in $c$'s affiliations\\
\textbf{9} & Cosine similarity between $a$'s affiliation and $c$'s affiliations\\
\textbf{10} & Jaccard similarity between $a$'s affiliation and $c$'s affiliations\\ \hdashline
\textbf{11} & Distinct number of venues of $c$\\
\textbf{12} & Frequency of $p$'s venue in $c$\\
\textbf{13} & Ratio of $p$'s venue in $c$\\
\textbf{14} & Cosine similarity between $p$'s venue and $c$'s venues \\
\textbf{15} & Jaccard similarity between $p$'s venue and $c$'s venues \\ \hdashline
\textbf{16} & Cosine similarity between $p$'s title and $c$'s titles\\
\textbf{17} & Jaccard similarity between $p$'s title and $c$'s titles\\ \hdashline
\textbf{18} & Distinct number of keywords in $c$\\
\textbf{19} & Frequency of $p$'s keywords of $c$\\
\textbf{20} & Ratio of $p$'s keywords in $c$\\
\textbf{21} & Cosine similarity between $p$'s keywords and $c$'s keywords\\
\textbf{22} & Jaccard similarity between $p$'s keywords and $c$'s keywords\\
\noalign{\hrule height 1pt}
\end{tabular}}
}
\end{table} }
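Several of the features in Table~\ref{tb:features} reduce to standard text similarities over bags of tokens; a minimal sketch, assuming whitespace tokenization:

```python
from collections import Counter
import math

def cosine_sim(text_a, text_b):
    """Cosine similarity between bag-of-words token counts
    (as in features 9, 14, 16 and 21 of Table 3)."""
    ca, cb = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def jaccard_sim(text_a, text_b):
    """Jaccard similarity between token sets
    (as in features 10, 15, 17 and 22 of Table 3)."""
    sa, sb = set(text_a.split()), set(text_b.split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0
```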
\subsubsection{Evaluation Settings}
For each dataset, we randomly sample 20\% of the persons for testing and use the rest for training, which results in 45,711 authors for training and 11,427 authors for testing on the OAG-WhoIsWho dataset, and 2,991 authors for training and 748 authors for testing on the KDD Cup dataset. For each author in both the training and testing data, we first sort their papers by publication year in ascending order. Then we choose the latest 20\% of the papers as the author's unassigned papers and leave the remaining 80\% as the author's profile.
We first evaluate the matching of the candidate persons to the target pair, and further evaluate the decision of the top matched person as the right person or NIL.
\vpara{Matching Evaluation.}
For evaluating the matching performance, we sample 10,000 target pairs from the training data. Each target pair paired with its right person composes a positive instance.
We also sample 9 wrong persons paired with each target paper to compose 9 negative instances. The process results in 90,000 triplets for training.
For testing, we sample 2,000 target pairs from the test data, where each one is associated with the right person and 19 wrong persons.
The wrong persons are sampled from the candidates. We follow the name variant strategy in \secref{sec:overview} to generate candidates on OAG-WhoIsWho. For KDD Cup, however, names are so different that no candidates can be found by simply varying names. Instead, we calculate the Jaro-Winkler similarity between each candidate's name and the target author's name, and select the candidates whose scores are larger than 0.5 as the wrong persons.
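The Jaro-Winkler similarity used for this candidate filtering can be sketched in a self-contained way, using the standard scaling factor $p=0.1$ and a common prefix capped at 4 characters:

```python
def jaro(s1, s2):
    """Jaro similarity: fraction of matched characters within a sliding
    window, penalized by transpositions among the matches."""
    if s1 == s2:
        return 1.0
    n1, n2 = len(s1), len(s2)
    if n1 == 0 or n2 == 0:
        return 0.0
    window = max(n1, n2) // 2 - 1
    m1, m2 = [False] * n1, [False] * n2
    matches = 0
    for i, ch in enumerate(s1):
        lo, hi = max(0, i - window), min(n2, i + window + 1)
        for j in range(lo, hi):
            if not m2[j] and s2[j] == ch:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Count transpositions among the matched characters.
    t, k = 0, 0
    for i in range(n1):
        if m1[i]:
            while not m2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / n1 + matches / n2 + (matches - t) / matches) / 3

def jaro_winkler(s1, s2, p=0.1):
    """Jaro-Winkler: boost the Jaro score by the common prefix (<= 4)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1[:4], s2[:4]):
        if a != b:
            break
        prefix += 1
    return j + prefix * p * (1 - j)

def filter_candidates(target_name, names, threshold=0.5):
    """Keep candidate names similar enough to the target author's name."""
    return [n for n in names if jaro_winkler(target_name, n) > threshold]
```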
We use Hit Ratio at top k (HR@k) and mean reciprocal rank (MRR) as the metrics for evaluating whether the right person will be ranked at the top among all the candidates. Since there is only one right person for each target pair, HR@k measures the percentage of the candidate lists with the right person ranked before top k. MRR measures the average of reciprocal ranks of the right persons.
Higher HR@k and MRR indicate better performance.
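Both metrics can be sketched directly; each ranked list is paired with the id of its single right person:

```python
def hit_ratio_at_k(ranked_lists, k):
    """Fraction of candidate lists whose right person appears in the top k.
    ranked_lists: (candidate_ids_in_rank_order, right_id) pairs."""
    hits = sum(1 for ranked, right in ranked_lists if right in ranked[:k])
    return hits / len(ranked_lists)

def mrr(ranked_lists):
    """Mean reciprocal rank of the right person (rank counted from 1)."""
    return sum(1.0 / (ranked.index(right) + 1)
               for ranked, right in ranked_lists) / len(ranked_lists)
```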
\vpara{Decision Evaluation.}
We construct the training data for the decision component upon the output of the matching component. Specifically, we also use the 10,000 positive instances for the matching component as those for the decision component. Then we extract the target pairs and the corresponding top matched wrong persons to compose the negative instances.
For testing, in addition to the 2,000 target pairs and the corresponding candidates including the right persons (i.e., positive samples $(\langle p,a \rangle,C)$ with $c^*=c^+$), we extract 2,000 extra target pairs and the corresponding candidates excluding the right persons (i.e., negative samples $(\langle p,a \rangle,C)$ with $c^*=\text{NIL}$). For convenience, we remove the right person $c^+$ from each positive sample and create a negative sample from the remaining wrong persons.
We count the number of true positive (tp), false negative (fn), true negative (tn) and false positive (fp) samples and then calculate precision, recall and f1:
\beqn{
\text{tp} &=& |\{ c^* = c^+ \text{ and } \hat{c} = c^+ \text{ and } \hat{y} = 1 \}| ,\\\nonumber
\text{fn} &=& |\{ c^* = c^+ \text{ and } \hat{y} = 0 \}| , \\\nonumber
\text{tn} &=& | \{ c^* = \text{NIL} \text{ and } \hat{y} =0\} |, \\\nonumber
\text{fp} &=& | \{ c^* = \text{NIL} \text{ and } \hat{y} =1 \} \cup \\\nonumber
&&\{c^*= c^+ \text{ and } \hat{c} \ne c^+ \text{ and } \hat{y} = 1\} |, \\\nonumber
}
\noindent where tp is the number of the positive samples, with the right persons ranked at the first (i.e., $\hat{c} = c^+$) and also predicted as the right persons (i.e., $\hat{y}=1$). On the contrary, fn counts the positive samples with $\hat{y}=0$. Notation tn denotes the number of negative samples with the first ranked persons predicted as the wrong persons (i.e., $\hat{y}=0$), while fp counts the negative samples with $\hat{y}=1$ and also counts the positive samples with the wrong persons ranked at the first (i.e., $\hat{c} \ne c^+$) but still predicted as the right persons (i.e., $\hat{y}=1$).
Since we aim at assigning the target pair to an existing right person and also assigning it to NIL if there is no right person, we calculate precision and recall for both the cases with $c^* = c^+$ and $c^* = \text{NIL}$:
\beqn{
c^* = c^+\!\!\!&:& \!\!\!\text{Pre.} = \frac{\text{tp}}{\text{tp + fp}}, \quad \text{Rec.} = \frac{\text{tp}}{\text{tp + fn}}; \\ \nonumber
c^* = \text{NIL} \!\!\!&:&\!\!\!\text{Pre.} = \frac{\text{tn}}{\text{tn + fn}}, \quad \text{Rec.} = \frac{\text{tn}}{\text{tn + fp}}.
}
\subsubsection{Implementation Details}
We divide the attributes of a paper into two fields: coauthor names and other attributes including title, abstract, organizations and keywords, as coauthor names have no literal or semantic overlaps with other attributes.
We pre-train an embedding for each author name and each word. Specifically, we use Word2Vec to train an embedding for an author name in the context of all the coauthors' names in a paper, and train an embedding for a word in the context of all the other occurred words in title, keywords, venue and affiliation.
We set the dimension of the embedding as 100. To enable matrix operation, for each paper or candidate person, we restrict the maximal number of author names to 100, the maximal number of words to 500, and the maximal number of papers published by each person to 100.
The hyper-parameters of the RBF kernel functions are set the same as in~\cite{xiong2017end}. We use 11 RBF kernels, with the hyper-parameters $\mu$=$\{1, 0.9, 0.7, 0.5, 0.3, 0.1, -0.1, -0.3, -0.5, -0.7, -0.9\}$ and $\sigma$=$\{10^{-3}, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1\}$.
Function $g$ in Eq.\eqref{eq:rankloss} is instantiated as a 3-layer MLP followed by a ReLU function which transforms a similarity embedding $\phi(\langle p,a \rangle, c)$ into a 1-dimensional score. Function $h$ in Eq.\eqref{eq:decision} is also a 3-layer MLP which transforms a $\phi(\langle p,a \rangle, c)$ into 2-dimensional classification probabilities.
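With these hyper-parameters, the kernel aggregation can be sketched in the K-NRM style of~\cite{xiong2017end}; the token-token cosine-similarity matrix is assumed to be precomputed from the pre-trained embeddings:

```python
import math

MUS = [1, 0.9, 0.7, 0.5, 0.3, 0.1, -0.1, -0.3, -0.5, -0.7, -0.9]
SIGMAS = [1e-3] + [0.1] * 10

def kernel_pooling(sim_matrix):
    """Kernel pooling: for each row of the token-token similarity
    matrix, every RBF kernel softly counts the similarities near its
    mean mu; the per-row kernel features are log-summed into one
    11-dimensional similarity embedding. The sharp first kernel
    (mu=1, sigma=1e-3) responds almost only to exact matches."""
    phi = [0.0] * len(MUS)
    for row in sim_matrix:
        for k, (mu, sigma) in enumerate(zip(MUS, SIGMAS)):
            pooled = sum(math.exp(-(s - mu) ** 2 / (2 * sigma ** 2))
                         for s in row)
            phi[k] += math.log(max(pooled, 1e-10))
    return phi
```

Because each kernel accumulates a (soft) match count at its own similarity level, the exact-match signal in the first kernel is kept separate from the soft matches instead of being averaged away.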
\subsection{Performance Analysis}
\begin{table}
\newcolumntype{?}{!{\vrule width 1pt}}
\newcolumntype{C}{>{\centering\arraybackslash}p{3.5em}}
\caption{
\label{tb:ranking_perforamance} Performance of the matching results (\%).
\normalsize
}
\centering \footnotesize
\renewcommand\arraystretch{1.0}
\begin{tabular}{@{}c@{~}?*{1}{m{0.6cm}m{0.6cm}m{0.6cm}?}*{1}{m{0.6cm}m{0.6cm}m{0.6cm}}}
\toprule
\multirow{2}{*}{Model}
&\multicolumn{3}{c?}{OAG-WhoIsWho}
&\multicolumn{3}{c@{}}{KDD Cup}
\\
\cmidrule{2-4} \cmidrule{5-7}
& {HR@1} & {HR@3} & {MRR} & {HR@1} & {HR@3} & {MRR}\\
\midrule
Camel
&41.20&62.00& 55.00
& 44.62 & 67.19 & 59.44
\\
HetNetE
&46.00&67.00&60.24
& 51.06 & 77.44 & 66.41
\\
GML
&70.87&94.53&82.59
&72.13&95.34&82.90
\\
GBDT
&87.30&98.10&92.71
&84.18&92.09&89.59
\\ \midrule
CONNA$^r$(BP)
& 86.20 & 96.40 & 92.20
& 91.12& 95.72 & 93.73
\\
CONNA$^r$(MFP)
& 88.00 & 98.75 & 93.25
&-&-&-
\\
CONNA$^r$(MFMI)
& 89.45 & 98.40 & 93.82
& 91.45 & 95.80 & 94.03
\\
\midrule
\textbf{CONNA}
& 90.45 & 98.30 & 94.46
& 92.10 & 96.35& 94.66
\\
\textbf{CONNA+Fine-tune}
& \textbf{91.10} & \textbf{98.45} & \textbf{94.86}
& \textbf{92.60} & \textbf{96.71} & \textbf{94.95}
\\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Matching Performance}
\vpara{Overall Matching Performance.}
Table~\ref{tb:ranking_perforamance} shows the matching performance of the proposed model, the model variants and the comparison methods on the two datasets OAG-WhoIsWho and KDD Cup. In terms of HR@1, the proposed CONNA+Fine-tune\space achieves 3.80\% to 49.90\% improvement over all the baseline methods.
Camel, HetNetE and GML are all representation-based deep learning models, which can capture the soft/semantic matches, but dilute the effect of the exact token matches due to the global representations of the papers and persons.
Among the three models, HetNetE uses all the attributes of a paper rather than the title alone to represent a paper, and thus achieves better performance than Camel. Camel and HetNetE represent the candidate persons only by their identities, so they suffer from a sparsity issue, i.e., the embeddings of persons who publish few papers cannot be trained accurately. GML avoids the sparsity issue by representing persons through their published papers. However, it is difficult to directly compare the embeddings of a long text (i.e., all the papers of a candidate person) with those of a short text (i.e., the target paper).
In the name disambiguation problem, the exact matches between tokens, especially between coauthor names, are more important than the soft matches; thus although GBDT only captures the exact matches, it performs better than the representation-based models. The proposed interaction-based matching component in CONNA\space captures both the exact and the soft matches by comparing local representations of each token pair instead of comparing the global representations of papers and persons. Specifically, the kernel aggregation function used in the matching component summarizes a frequency distribution over the exact matches and the different kinds of soft matches, so the effect of the exact matches is not diluted by the soft matches.
Thus, the proposed matching component performs better than all the comparison methods.
Compared with CONNA, the performance of CONNA+Fine-tune\space is further improved, as the decision component gives additional feedback to supervise the ranking of the matching component.
The result indicates that through jointly fine-tuning of the two components, the errors of the matching component can be reduced.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/case-multi-field-weight.png}
\caption{\label{fig:multi-field-case} Case study of multi-field effect.}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/case-multi-instance-weight.png}
\caption{\label{fig:multi-instance-case} Case study of multi-instance effect.}
\end{figure}
Comparing the results on the two datasets, we can see that the advantage of our model over the feature-engineering-based GBDT is much more significant on KDD Cup (+8.42\% in HR@1) than on OAG-WhoIsWho (+3.80\% in HR@1). Since coauthor features are not used on KDD Cup, the results indicate that CONNA\space can better capture the semantics of the attributes other than coauthor names.
\vpara{Multi-field Effect.}
We conduct an ablation study to analyze the effects of different modeling strategies on the matching component. Since only one field is used on the KDD Cup dataset, we analyze the effect of multiple fields on the OAG-WhoIsWho dataset. From Table~\ref{tb:ranking_perforamance}, we can see that CONNA$^r$(MFP)\space performs better than CONNA$^r$(BP)\space (+1.80\% in terms of HR@1), which indicates that it is necessary to build the interaction-based models for different attributes separately and distinguish their effects.
We also investigate the effects of different fields by removing coauthor names and the other attributes respectively from CONNA. The experimental results in Figure~\ref{subfig:name_attr} show that removing either coauthor names or the other attributes performs significantly worse (5.80\%-7.05\% lower HR@1) than CONNA, which indicates that both coauthor names and the other attributes clearly impact the performance. Moreover, removing coauthor names alone causes a drop comparable to removing all the other attributes together, which indicates that coauthor names are more important than any of the other attributes for the task of name disambiguation.
\vpara{Multi-instance Effect.}
Table~\ref{tb:ranking_perforamance} also shows that on OAG-WhoIsWho, CONNA$^r$(MFMI)\space performs better than CONNA$^r$(MFP)\space (+1.45\% in terms of HR@1), which demonstrates the strength of distinguishing different papers of a person. HR@1 of CONNA$^r$(MFMI)\space is further improved by 1.00\% when we combine the profile model CONNA$^r$(MFP)\space and the multi-instance model CONNA$^r$(MFMI)\space into CONNA. The result indicates that both the global similarity between the target paper and a candidate's whole profile, and the local similarities between the target paper and each paper of a candidate, contribute to the matching performance. The results on KDD Cup also present the advantage of multiple instances.
\begin{table*}
\newcolumntype{?}{!{\vrule width 1pt}}
\newcolumntype{C}{>{\centering\arraybackslash}p{3.5em}}
\caption{
\label{tb:classification_perforamance} Performance of the decision results (\%).
\normalsize
}
\centering \footnotesize
\renewcommand\arraystretch{1.0}
\begin{tabular}{@{}c@{~}?*{1}{m{0.6cm}m{0.6cm}m{0.6cm}?}*{1}{m{0.6cm}m{0.6cm}m{0.6cm}?}*{1}{m{0.6cm}m{0.6cm}m{0.6cm}?}*{1}{m{0.6cm}m{0.6cm}m{0.6cm}}}
\toprule
\multirow{3}{*}{Model}
&\multicolumn{6}{c?}{OAG-WhoIsWho}
&\multicolumn{6}{c@{}}{KDD Cup}
\\
\cmidrule{2-4} \cmidrule{5-7}
\cmidrule{8-10} \cmidrule{11-13}
&\multicolumn{3}{c?}{Samples with $c^*=c^+$}
&\multicolumn{3}{c?}{Samples with $c^*=\text{NIL}$}
&\multicolumn{3}{c?}{Samples with $c^*=c^+$}
&\multicolumn{3}{c@{}}{Samples with $c^*=\text{NIL}$}
\\
\cmidrule{2-4} \cmidrule{5-7} \cmidrule{8-10} \cmidrule{11-13}
& {Pre.} & {Rec.} & {F1} & {Pre.} & {Rec.} & {F1}
& {Pre.} & {Rec.} & {F1} & {Pre.} & {Rec.} & {F1}
\\
\midrule
GBDT
& 82.87 & 72.40 & 77.28
& 75.39 & 85.04 & 79.98
& 83.64 & 71.64 & 77.17
& 75.20 & 85.98 & 80.23
\\
Threshold
& 79.33 & 57.60 & 66.38
& 66.47 & \textbf{84.07} & 74.24
& 74.89 & 71.00 & 72.90
& 72.43 & 76.20 & 74.27
\\
Heuristic Loss
& 71.79 & 78.40 & 74.95
& 76.21 & 69.20 & 72.54
& 85.14 & 69.60 & 76.59
& 74.29 & 87.85 & 80.50
\\
CrossEntropy
& 79.42 & 82.33 & 80.85
& 81.66 & 78.67 & 80.14
& 89.60 & 82.79 & 86.06
& 86.15 & 88.05 & 87.09
\\
\midrule
\textbf{CONNA}
&79.53&89.87&84.38
&88.35&76.87&82.21
&88.44& \textbf{86.20}&87.31
&\textbf{86.54}&88.73&87.62
\\
\textbf{CONNA+Fine-tune}
&\textbf{82.47} & \textbf{90.33} & \textbf{86.22}
&\textbf{89.31} & 80.80 & \textbf{84.84}
&\textbf{89.87}&85.73&\textbf{87.75}
&86.36&\textbf{90.33}&\textbf{88.30}
\\
\bottomrule
\end{tabular}
\end{table*}
\vpara{Interpretability of the Matching Component.}
We present some cases in Figure~\ref{fig:multi-field-case} and Figure~\ref{fig:multi-instance-case} to demonstrate the interpretability of the proposed matching component. From Figure~\ref{fig:multi-field-case}, we can see that although the target paper shares fewer matched tokens with the positive candidate than with the negative candidate, the matched coauthors are more important than the matched words in titles and venues: the attention $\alpha$ learned by our model for the matched coauthors of the positive candidate is 0.69, compared with 0.31 for the matched titles and venues. The attention learned on the negative candidate also emphasizes the matched coauthors. Since CONNA\space distinguishes the effects of different fields by this attention, it correctly identifies the positive candidate, while the basic profile model CONNA$^r$(BP), which treats the matches in all the fields equally, wrongly returns the negative candidate as the most matched candidate.
In Figure~\ref{fig:multi-instance-case}, we present the affiliation of ``Dan Chen" in both the target paper and the positive candidate. One paper of the positive candidate has the same affiliation as the target paper, and the corresponding attention $\beta$ learned by our model for that paper is 0.79, while the values of $\beta$ learned for the other papers are much smaller. Since CONNA\space distinguishes the effects of different papers, it correctly identifies the positive candidate, while the basic profile model CONNA$^r$(BP)\space treats the matches in all the papers equally, which dilutes the effects of the similar papers by the other irrelevant ones. The learned attention weights for different fields and different papers both demonstrate the interpretability of the proposed matching component.
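The attention-weighted aggregation interpreted in these cases can be sketched, in simplified form, as a softmax over per-field (or per-paper) similarity features. This is a hypothetical rendition: \texttt{w\_att} and \texttt{w\_score} stand in for learned parameters, and the exact parameterization in CONNA may differ.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_score(feats, w_att, w_score):
    """feats: (n_units, d) similarity features, one row per field or paper.

    Returns (score, alpha): the attention-weighted match score and the
    per-unit weights alpha (the values inspected in the case studies).
    """
    alpha = softmax(feats @ w_att)        # one weight per field/paper
    pooled = alpha @ feats                # attention-weighted combination
    return float(pooled @ w_score), alpha
```

Inspecting \texttt{alpha} after training is what yields interpretations such as ``the coauthor field receives weight 0.69''.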
\vpara{Matching Performance on Different Scenarios.}
We conduct additional experiments on the matching performance of different baselines and CONNA\space under different same-coauthor ratios on the OAG-WhoIsWho dataset and present the results in Figure~\ref{fig:proportion_coauthor}. We can see that HR@1 of the embedding-based models, i.e., Camel, HetNetE, GML and CONNA, drops more gently (by 6.63\% to 18.73\%) than that of the feature-engineering-based GBDT (by more than 29.69\%) when the same-coauthor ratio decreases from 1.0 to 0.1. This indicates that the embedding-based models can better capture the semantic matches when the coauthor features are weak.
Especially when the same-coauthor ratio is less than 0.1, the performance gap between CONNA\space and GBDT exceeds 16\%. The result indicates that CONNA\space is more suitable for tackling the hard cases, i.e., the cases that can hardly be predicted by similar coauthors.
\begin{table}
\newcolumntype{?}{!{\vrule width 1pt}}
\newcolumntype{C}{>{\centering\arraybackslash}p{3.5em}}
\caption{
\label{tb:effciency} Average time cost (ms) of assigning each target pair.
\normalsize
}
\centering
\renewcommand\arraystretch{1.0}
\begin{tabular}{cccc}
\toprule
Model & Feature Preparation & Matching & Decision\\
\midrule
GBDT
& 183.34 & - & 3.61
\\
\midrule
CONNA
& 260.45 & 76.12 & 6.34
\\
\bottomrule
\end{tabular}
\end{table}
\subsubsection{Decision Performance}
Table~\ref{tb:classification_perforamance} shows the final decision performance of the proposed model and the comparison methods. Compared with the other methods, in terms of F1, the proposed joint model CONNA+Fine-tune\space achieves 1.69\%-19.84\% improvement on the samples with $c^*=c^+$ and 1.21\%-14.03\% improvement on the samples with $c^*=\text{NIL}$. We evaluate the results on both kinds of samples as we aim not only at assigning the target papers to the right persons if they exist, but also at assigning them to NIL if the right persons do not exist.
The problem in this paper is neither a pure matching problem nor a pure classification problem; it can be solved by first matching each candidate to the target paper $p$ and then deciding whether the top matched person is right or not. Thus, we need to not only keep the relative order within each candidate list, but also globally distinguish all the positive pairs from all the negative pairs.
GBDT and CrossEntropy\space only aim to optimize the global positions of all the $(\langle p,a \rangle,c)$ pairs, but ignore the relative order within each candidate list.
Although the globally predicted probabilities can be used to compare the candidates of each target paper, the relative order is not directly optimized, leading to a lot of mistakes in the final results.
Threshold can be viewed as a global optimization model, but merely uses a heuristic threshold to distinguish different complicated cases.
Heuristic Loss incorporates the costs related to NIL into the original loss of ranking the wrong persons before the right persons, but it suffers from the heuristically configured weights of different costs.
CONNA\space first estimates the matching probability of each candidate for the target pair and then makes a decision on the top matched candidate. This two-step strategy, widely adopted in entity linking~\cite{mcnamee2010hltcoe,ratinov2011local}, proves to be effective.
Compared with CONNA, the performance of CONNA+Fine-tune\space is further improved: as the matching component is iteratively refined, some of the wrongly predicted instances are gradually represented better and yield more accurate similarity embeddings, which finally increases the number of rightly predicted instances.
The result demonstrates that the errors of the decision component can be reduced through jointly fine-tuning of the two components.
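The match-then-decide pipeline discussed above can be summarized schematically as follows. The real components are trained neural models; here \texttt{match\_score} and \texttt{accept\_top} are hypothetical placeholders for the matching and decision components.

```python
def match_then_decide(candidates, match_score, accept_top):
    """Two-step assignment: rank candidates by matching score, then let a
    binary decision model accept the top one or return NIL.

    candidates:  list of candidate persons (may be empty).
    match_score: callable candidate -> score (the matching component).
    accept_top:  callable candidate -> bool (the decision component).
    """
    if not candidates:
        return "NIL"                       # no candidate at all
    top = max(candidates, key=match_score)  # step 1: ranking
    return top if accept_top(top) else "NIL"  # step 2: NIL decision
```

The threshold baseline corresponds to an \texttt{accept\_top} that just compares the top score against a fixed cut-off, whereas CONNA learns the decision on the similarity embeddings.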
\vpara{Convergence Analysis.}
We plot the train/test losses of the matching component and the decision component as the number of joint training epochs increases. The results in Figure~\ref{fig:convergence} show that the losses of the two components both decrease sharply at the beginning of the joint training and then gradually become stable, which indicates the convergence of CONNA+Fine-tune.
\subsection{Online Deployment on AMiner}
\label{sec:online}
Table~\ref{tb:effciency} presents the average time cost of assigning each target paper by the proposed CONNA\space model and the best baseline GBDT.
We implement the experiments by Tensorflow and run the code on an Enterprise Linux Server with 40 Intel(R) Xeon(R) CPU cores (E5-2640 v4 @ 2.40GHz and 252G memory) and 1 NVIDIA Tesla V100 GPU core (32G memory).
Since GBDT is a classification model without a matching component, we only present the cost of its decision process, which uses the label of the top predicted candidate as the predictive result. From Table~\ref{tb:effciency}, we can see that CONNA\space is about 1.83$\times$ slower than GBDT, which is mainly due to the feature preparation process. Although CONNA\space performs much better than GBDT on both the ranking and the decision performance, Figure~\ref{fig:proportion_coauthor} shows that for about 62.17\% easy samples, i.e., the target pairs with a same-coauthor ratio larger than 0.9, the ranking performance of GBDT is comparable to CONNA, and the ranking performance directly determines the final decision performance on the top-1 candidates. Thus, to improve the online assignment efficiency while keeping the assignment performance, for each target pair, if its same-coauthor ratio is larger than 0.9, we directly apply GBDT to perform the paper assignment; otherwise we apply CONNA\space to complete the task.
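The hybrid online routing rule just described amounts to a one-line dispatch on the same-coauthor ratio; \texttt{gbdt\_assign} and \texttt{conna\_assign} are placeholders for the two deployed models.

```python
def route_assignment(target_pair, same_coauthor_ratio,
                     gbdt_assign, conna_assign, threshold=0.9):
    """Route easy target pairs (high same-coauthor ratio) to the cheaper
    GBDT model and the hard ones to CONNA, trading a little accuracy on
    easy cases for much lower average latency."""
    if same_coauthor_ratio > threshold:
        return gbdt_assign(target_pair)
    return conna_assign(target_pair)
```

With the distribution in Figure~\ref{fig:proportion_coauthor}, roughly 62\% of the traffic takes the fast GBDT path under this rule.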
\begin{figure}
\centering
\subfigure[Matching component.]{\label{subfig:matching_convergence}
\includegraphics[width=0.22\textwidth]{figures/matching_conver.png}
}
\subfigure[Decision component.]{\label{subfig:decision_convergence}
\includegraphics[width=0.22\textwidth]{figures/decision_conver.png}
}
\caption{\label{fig:convergence} Convergence Analysis.}
\end{figure}
In addition, the online candidate selection is slightly different from the offline name variant strategy explained in~\secref{sec:overview}. To improve the recall of the online prediction as much as possible, we adopt ElasticSearch\footnote{https://www.elastic.co} to perform a fuzzy search for candidates similar to each target author. Compared with this online fuzzy strategy, the offline candidate selection is stricter: for annotating a high-quality name disambiguation dataset, the simple name variant strategy can already produce enough challenging candidates, while the fuzzy strategy may return too many noisy candidates, which would increase the annotation effort.
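For illustration, such an online fuzzy candidate lookup could be issued with Elasticsearch's standard \texttt{match} query and its \texttt{fuzziness} option. The index name and field below are assumptions for the sketch; the actual AMiner schema is not described here.

```python
def build_fuzzy_query(author_name, size=20):
    """Build an Elasticsearch query body that fuzzily matches an author
    name, tolerating spelling variants and transliterations."""
    return {
        "size": size,
        "query": {
            "match": {
                "name": {                 # assumed field holding person names
                    "query": author_name,
                    "fuzziness": "AUTO",  # edit-distance tolerance by length
                }
            }
        },
    }

# A client would run it as, e.g.:
#   es.search(index="person", body=build_fuzzy_query("Yang Yang"))
```

The offline name variant strategy, by contrast, enumerates a fixed set of variants and so yields fewer but cleaner candidates.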
\begin{figure}
\centering
\subfigure[A $c^*=c^+$ case.]{\label{subfig:normal_case}
\includegraphics[width=0.48\textwidth]{figures/sup_mask_no_nil.png}
}
\subfigure[A $c^*=\text{NIL}$ case.]{\label{subfig:nil_case}
\includegraphics[width=0.48\textwidth]{figures/sup_mask_nil.png}
}
\caption{\label{fig:demo} A demo of disambiguation on the fly in AMiner.}
\end{figure}
We develop a demo of disambiguation on the fly in AMiner\footnote{http://na-demo.aminer.cn/}, and show two screenshots of the demo in Figure~\ref{fig:demo}. In the demo, users are allowed to search a paper by its title, then select the expected paper and click one author name to see the disambiguation results of the paper with the current name. Under the selected paper, we present the most matched candidates by the trained matching component in CONNA\space on the left, and show the decision result of the assigned person by the trained decision component in CONNA\space on the right. Figure~\ref{subfig:normal_case} shows a case with $c^*=c^+$. We can see that our model can correctly match ``Jing Zhang" from Renmin University for the author ``Jing Zhang" in the paper ``StructInf: Mining Structural Influence from Social Streams" at the top and then decide the top matched one as the final assigned person.
Figure~\ref{subfig:nil_case} shows a case with $c^*=\text{NIL}$. Since ``Bo Chen" of the paper ``MEgo2Vec: Embedding Matched Ego Networks for User Alignment Across Social Networks" is a postgraduate student whose profile has not been established by AMiner, none of the existing ``Bo Chen" should be assigned to the paper. Our model correctly assigns NIL to this case.
Besides, since errors are still inevitable, we allow the users to provide feedback on our decision results. Specifically, users can directly ``submit" the result if they agree with it; otherwise, they can choose the right person from the top matched persons. The feedback can simply be regarded as new training instances to improve the decision component at each step of the joint training.
\section{Introduction}
\label{sec:intro}
Name disambiguation, aiming at disambiguating who is who, is one of the fundamental problems of the online academic network platforms such as Google Scholar, Microsoft Academic and AMiner.
The problem has been extensively studied for decades~\cite{han2004two,huang2006efficient,louppe2016ethnicity,tang2012unified,wang2010constraint,wang2011adana,zhang2018name} and most of the works focus on how to group the papers belonging to the same person into a cluster from scratch. However, online academic systems have already maintained a huge number of person profiles, which are built by the ``from scratch" algorithms or by human beings. Considering the computation and time cost of real systems, it is not practical to re-compute the clusters from scratch for the newly arriving papers every day. We need a more effective way to deal with the problem of name disambiguation on the fly.
This paper takes AMiner as the basis to explain how we deal with the name ambiguity problem when continuously updating persons' profiles. AMiner is a free online academic search and mining system~\cite{tang2008arnetminer}, which has already extracted 133,204,120 researchers' profiles from the Web~\cite{tang2010combination} and integrated them with 263,781,570 papers from heterogeneous publication databases~\cite{zhang2018name}. Currently, more than 500,000 new papers arrive at AMiner per month. How to correctly assign these papers to the right persons in the system on the fly is a critical problem for many downstream applications such as expert finding, academic evaluation, reviewer recommendation and so on.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/problem.png}
\caption{\label{fig:rankingchallenges} Disambiguation on the fly. Given a target
paper with target author as ``Yang Yang", we aim at searching for the right person
of ``Yang Yang" from the candidates, where the right person can be a
real person or a non-existing candidate denoted as NIL.}
\end{figure}
Existing methods on addressing the similar problem of anonymous author identification~\cite{chen2017task,zhang2018camel, zhao2019uncertainty} are possible solutions to continuously disambiguating papers on the fly.
However, they merely target finding the top matched person among all the candidates, and fail to deal with the situation where no right person exists, which is common in real academic systems. For example, the papers published by new researchers should not be assigned to any existing person, as their profiles have not been established by the system. Thus, to assign a paper on the fly, we need not only to find the top matched candidate, but also to decide whether to assign the top matched candidate or to create a new person. In other words, we consider the absence of the right person from the candidates to be a distinct candidate, the so-called NIL candidate.
Figure~\ref{fig:rankingchallenges} illustrates the problem to be solved in this paper, where given a paper with an author to be disambiguated, the returned right person can be a real person or a non-existing candidate denoted as NIL. Actually, in AMiner, in addition to the ``on-the-fly" assignment, we also perform a ``from scratch" algorithm to cluster ``NIL" papers into new profiles, and run an offline ``checking" algorithm periodically to correct errors in historical profiles. In general, AMiner combines the ``from scratch", ``on-the-fly" and ``checking" strategies to solve the complex continuous name disambiguation problem. In this paper, we only introduce the principle of the ``on-the-fly" strategy under the assumption that the previously built profiles are correct; errors in the profiles are left to the ``checking" strategy.
To tackle the problem, we first investigate how to find the top matched candidate for a given target paper.
Straightforwardly, we can use the traditional feature-engineering methods to estimate the matching probability between each candidate and the target paper, and then return the top matched candidate. However, these methods are devoted to exactly matching the tokens between a paper and a person, which is too rigid and cannot handle the cases with similar semantics but different tokens.
The widely used representation-based models~\cite{chen2017task,zhang2018camel} can capture the soft/semantic matches through learning low-dimensional dense embeddings, but they may contrarily hurt the exact-matching performance due to the highly compressed embeddings. For example in Figure~\ref{fig:rankingchallenges}, if we only depend on the semantics of the learned embeddings, we can infer that both candidates are interested in social network mining. However, it is apparent that the exact matches of the coauthor names or words, e.g., ``Jie Tang", ``Juanzi Li", ``social", ``network", between the target paper and the right person outnumber those of the wrong person. Thus, a challenge is posed: \textit{how to capture both the exact matches and the soft matches in a principled way?} Meanwhile, different fields take different effects. For example, the two matched coauthors of the right person make it significantly more confident than the wrong person with only one matched coauthor, compared with the matches in other fields. Besides, each person publishes multiple papers, which also take different effects. For example in Figure~\ref{fig:rankingchallenges}, among the papers of the right person, the effect of the second, similar paper may be diluted by the first, irrelevant one if all papers are combined. Thus, \textit{an effective way to distinguish the effects of different fields of the attributes and different instances of the published papers is worth studying.}
After obtaining the top matched candidate, we need to decide whether to assign the top matched candidate or NIL candidate to the target paper.
The NIL problem is widely studied in entity linking, a similar problem that aims at linking the mentions extracted from unstructured text to the right entities in a knowledge graph. We can adopt a similar idea and assign the NIL candidate to a target paper if the score of the top matched person is smaller than a NIL threshold~\cite{gottipati2011linking,shen2013linking} or if the top matched person is predicted as NIL by an additional classifier~\cite{ratinov2011local}. Essentially, the first process of finding the top matched candidate tries to keep the relative distances between the right and the wrong persons of each target paper, while the later process of deciding whether to assign the top matched candidate is devoted to optimizing the absolute positions among the top matched candidates of all target papers. Intuitively, the two processes can influence each other, and the errors of each process can be corrected through their interactions. However, \textit{none of the existing NIL solutions are aware of this, and it is not clear how to correct the errors through the interactions between the two processes.}
To this end, in AMiner, we propose a joint model CONNA\space that consists of a matching component and a decision component to solve CONtinuous Name Ambiguity, i.e., name disambiguation on the fly, where ``on the fly" emphasizes that the problem solved in this paper is different from name disambiguation ``from scratch". In the model, the matching component adopts an interaction-based deep learning model plus a kernel pooling strategy to capture both the exact and soft matches between a target paper and a candidate person, together with a multi-field multi-instance strategy to distinguish the effects of different attributes and different instances of papers. The decision component is trained on the similarity embeddings learned by the matching component to further decide whether the top matched person is the right person or not. In addition, the errors of the proposed model can be self-corrected through jointly fine-tuning the two components by reinforcement learning. To summarize, the main contributions include:
\begin{itemize}
\item We propose
CONNA\space consisting of a multi-field multi-instance interaction-based matching component and a decision component to address the problem of continuous name disambiguation.
With jointly fine-tuning of the two components by reinforcement learning, the errors of the two components can be self-corrected.
\item Experimental results on two large name disambiguation datasets show that CONNA\space achieves favorable decision accuracy (+1.21\%-19.84\% in terms of F1) and matching accuracy (+3.80\%-49.90\% in terms of HR@1) compared with the baseline methods. CONNA\space is now deployed on AMiner to assign papers on the fly. All codes and data used in the paper are publicly available\footnote{https://github.com/BoChen-Daniel/TKDE-2019-CONNA}.
\end{itemize}
\section{Problem Formulation}
\label{sec:problem}
We introduce the definitions and the problem in this section.
\begin{definition}
\textbf{Paper.} We denote a paper as $p$ associated with multiple fields of attributes, i.e., $p = \{A_1, \cdots, A_F\}$, where $A_f \in p$ represents the $f$-th attribute such as authors' names and affiliations, title, keywords, venue and so on.
\end{definition}
\begin{definition}
\textbf{Target paper-author pair.} Given a paper $p$ with one of its authors denoted by $a$, we define a target paper-author pair as $\langle p,a \rangle$, where $p$ is the target paper and $a$ is the target author to be disambiguated. We abbreviate a target paper-author pair as a target pair henceforth.
\end{definition}
\begin{definition}
\textbf{Candidate Persons.} Given a target pair $\langle p,a \rangle$, the corresponding candidate persons $C$ are those who are closely related to the target pair $\langle p,a \rangle$. Each candidate person $c_l \in C$ is composed of multiple papers, i.e., $c_l = \{p_1, \cdots, p_{n_l}\}$, where each paper $p_t = \{A_1, \cdots, A_F\}$ and $n_l$ is the number of papers published by $c_l$.
\end{definition}
For a target pair $\langle p,a \rangle$, to find the right person from its candidate persons $C$, a straightforward way is to compare the coauthors' names of $a$ in $p$ with the coauthors' names of each candidate person in $C$\footnote{The names are treated as strings to be compared with each other.}. The assumption is that the more overlaps between the coauthors' names, the more likely the candidate is the right author of $p$. A similar idea is adopted in~\cite{liu2013s},
which found that, using only the users' names, 56\% of the same users with different accounts across social networks can be correctly linked together. However, to what extent do the names take effect in identifying the right person for the target pairs?
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figures/fig2_coauthor_ratio.png}
\caption{\label{fig:proportion_coauthor} Distribution of the same-coauthor ratio and the corresponding matching performance. {\small Yellow bar: Distribution of the same-coauthor ratio of the target pairs. Lines: HR@1 performances of different methods.}}
\end{figure}
To answer the question, we collect 100,000 target pairs from AMiner. For each target pair $\langle p,a \rangle$, we collect its candidate persons (Cf.~\secref{sec:overview} for candidate generation details) and calculate the same-coauthor ratio:
\beq{
\label{eq:same-coauthor-ratio}
\text{Same-coauthor ratio} = \frac{\max\limits_{c \in C} S_c - \mathop{\text{second}}\limits_{c \in C} S_c}{\max\limits_{c \in C} S_c- \min\limits_{c \in C} S_c},
}
\noindent
where $S_c$ is the number of the same coauthors of $a$ in $p$ with the candidate $c$. Same-coauthor ratio reflects the gap between the most similar candidate and the second similar candidate. The denominator is to normalize the gap calculated for different candidate lists into the same scale.
It will be easier to distinguish the right person from the other candidates when the same-coauthor ratio is larger.
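The same-coauthor ratio above can be computed directly from the per-candidate counts $S_c$. The sketch below is a direct reading of Eq.~\eqref{eq:same-coauthor-ratio}; the tie-handling (returning 0 when all candidates have the same count, where the denominator vanishes) is our own assumption.

```python
def same_coauthor_ratio(coauthor_counts):
    """coauthor_counts: list of S_c, one per candidate (at least two).

    Returns (max - second) / (max - min): the normalized gap between the
    most similar and the second most similar candidate.
    """
    s = sorted(coauthor_counts, reverse=True)
    top, second, low = s[0], s[1], s[-1]
    if top == low:      # all candidates tie; no gap to normalize
        return 0.0
    return (top - second) / (top - low)
```

For instance, counts of 5, 1 and 0 give a ratio of 0.8, an easy case, while counts of 3, 3 and 0 give 0, a hard case where coauthor overlap alone cannot separate the top two candidates.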
Then we plot the distribution of the same-coauthor ratio for all the target pairs in Figure~\ref{fig:proportion_coauthor}, where X-axis indicates the same-coauthor ratio of a target pair, and Y-axis on the left denotes the proportion of the target pairs with a certain same-coauthor ratio.
From the figure, we can see that although 62.72\% of the target pairs have large same-coauthor ratios, 14.59\% of them still have small ones.
Coauthor-related features are of little use for target pairs with small same-coauthor ratios. For these target pairs, it is also not easy to leverage features other than the coauthor features.
To verify the above hypothesis, we estimate the probability of matching each candidate person to the target pair with GBDT, using the coauthor-related features together with features such as the literal similarities between the title, venue, or affiliations of the target pair and those of a candidate person. We then evaluate whether the top-matched candidate is the right person and report the resulting metric, top-1 hit ratio (HR@1, on the right Y-axis), for different ranges of the same-coauthor ratio in Figure~\ref{fig:proportion_coauthor}. Clearly, the performance of GBDT decreases dramatically as the same-coauthor ratio decreases: the evaluated HR@1 is 96.40\% when the same-coauthor ratio is within (0.9, 1.0), but only 66.71\% within (0, 0.1). The results indicate that when the coauthors of the target pair and those of the right person are not similar, it is also difficult for feature-engineering methods to capture the similarities of other attributes. Thus, a more promising way to match each candidate with the target pair is required.
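For illustration, a feature-based matcher of this kind can be sketched as follows; the field names and the choice of token-level Jaccard similarity are illustrative assumptions, not the exact feature set used in our experiments. A GBDT would then be trained on feature vectors like these:

```python
def jaccard(a, b):
    # token-level Jaccard similarity between two strings (illustrative choice)
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def candidate_features(pair, candidate):
    # hypothetical feature vector for one (target pair, candidate) match
    return [
        len(set(pair["coauthors"]) & set(candidate["coauthors"])),       # shared coauthors
        jaccard(pair["venue"], candidate.get("venue", "")),              # venue similarity
        jaccard(pair["affiliation"], candidate.get("affiliation", "")),  # affiliation similarity
    ]
```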
In addition to finding the top-matched candidate, we also need to consider the situation where no right person exists, which is usually ignored by existing author identification tasks~\cite{chen2017task,zhang2018camel}. Since an academic system establishes a profile for a researcher only once she/he has published at least one paper, many papers written by new researchers publishing for the first time cannot be assigned to any existing person in the system. Thus, the right person should be either a real person or a non-existing person. In summary, the problem is defined as:
\begin{problem}
\textbf{Disambiguation on the fly.} Given a training set $\mathcal{D} = \{(\langle p,a \rangle,C)\}$, for each target paper-author pair $\langle p,a \rangle$ and the corresponding candidate persons $C$, the right person $c^*$ can be either a real person in $C$ denoted by $c^+$ or a non-existing person denoted by NIL, and other persons except $c^*$ in $C$ are the wrong persons denoted by $\{c^-\}$. The target is to learn a predictive function
\beq{
\mathcal{F} : \{( \langle p, a \rangle, C)\} \rightarrow \{c^*\}
}to assign a target paper-author pair to its right person.
\end{problem}
In our problem, $a$ is usually used to select candidate persons and $p$ is used to extract features to match the candidates.
To simplify the problem, we assume the historical papers assigned to the candidates are correct. However, historical errors cannot be avoided entirely; we therefore plan to design an independent model that repeatedly checks and corrects the historical assignments, and leave this study to future work.
\section{Related Work}
\label{sec:related}
This paper is related to the problems of name disambiguation from scratch, author identification and entity linking.
\vpara{Name Disambiguation from Scratch.} Much effort has been made to disambiguate names from scratch, defined as: given a set of papers written by authors with similar names, partition all the papers into several disjoint clusters, each of which corresponds to a real person. Existing work first represents papers by traditional feature-engineering methods~\cite{chen-martin-2007-towards, huang2006efficient,tang2012unified,wang2010constraint,wang2011adana} or embedding models~\cite{qiao2019unsupervised,wang2020author,zhang2017name,zhang2018name} and then adopts a clustering algorithm such as hierarchical agglomerative clustering~\cite{chen-martin-2007-towards, qiao2019unsupervised,wang2020author,zhang2017name,zhang2018name}, K-means~\cite{wang2010constraint, chen2019toward}, DBSCAN~\cite{huang2006efficient} or semi-supervised clustering~\cite{louppe2016ethnicity} to partition the papers. Embedding models further include graph auto-encoders~\cite{zhang2018name}, heterogeneous GCNs~\cite{qiao2019unsupervised} and adversarial representation learning~\cite{wang2020author}.
Continuous name disambiguation is formalized differently from the above problem; thus it cannot be solved by the above methods.
\vpara{Author Identification.} Several works are devoted to anonymous author identification for a paper, assuming the authors of the target paper are unknown in a double-blind setting. For example, Chen et al.~\cite{chen2017task} and Zhang et al.~\cite{zhang2018camel} both optimize the difference between the right and the wrong authors. However, their models cannot be applied to authors unseen in the training set, as they only consider the identities of the authors; in contrast, we model authors' profiles, which do not depend on their identities. KDD Cup 2013 held an author identification challenge to solve a similar problem. However, the situation where no right person exists was not considered, and all the participants relied on feature-engineering methods~\cite{efimov2013kdd,zhao2013scorecard}.
\vpara{Entity Linking.} Entity linking aims at linking the mentions extracted from the unstructured text to the right entities in a knowledge graph~\cite{shen2014entity}.
Feature-based models~\cite{lehmann2010lcc} and neural models such as skip-gram~\cite{yamada2016joint}, autoencoders~\cite{he2013learning}, CNNs~\cite{sun2015modeling} and LSTMs~\cite{Kolitsas2018ACL} have been proposed to calculate the similarity between the context of a mention and a candidate entity.
The NIL problem is widely studied in entity linking. The main solutions include NIL threshold methods~\cite{gottipati2011linking,shen2013linking}, which predict the mention as unlinkable if the score of the top-ranked entity is smaller than a NIL threshold; classification methods~\cite{mcnamee2010hltcoe,ratinov2011local}, which predict unlinkable mentions with a binary classifier; and unified models that incorporate the unlinkable-mention prediction process into the entity matching process~\cite{clark2016deep,han2011generative}. Different from the above, we jointly train the NIL decision model and the candidate matching model to boost both of their performances.
\section{Introduction}
Spoken dialogue systems and voice assistants have been developed to facilitate natural conversation between machines and humans. They provide services through devices such as Amazon Echo Show and smartphones to help users complete tasks \cite{mctear:2004} and, more recently, to hold open-domain chitchat \cite{serban2016generative}, all through voice.
Recent advances have been facilitated by the huge amounts of data collected through such devices, enabling the recent success of deep learning methods and providing significant improvements in performance. However, not all languages benefit from these advances, particularly those that are under-resourced. These include sign languages, meaning that those who sign can leverage neither such interactive systems nor the benefits that automatic transcription and translation of signing would afford.
Here, we advance the state of the art in transcribing British Sign Language (BSL). Our aim is automated transcription of BSL into English, leveraging video recognition technologies. BSL communicates meaning through parameters such as hand shape, position, hand orientation, motion, and non-manual signals \cite{sutton1999linguistics}. Unlike English with its letters and words, BSL has no standard notation for writing the signs.
Analogous to the International Phonetic Alphabet (IPA), highly detailed mappings of visual indicators to written form are available, such as HamNoSys \cite{hanke2004hamnosys}. Despite the expressiveness of the HamNoSys writing system, its practical uses are limited and only a handful of experts know how to use it. Recent methods for automatic speech recognition (ASR) use deep neural models to bypass the need for phoneme dictionaries \cite{DBLP:journals/corr/HannunCCCDEPSSCN14}, which are then combined with language models.
\sloppy Previous work \cite{mocialov2016towards,mocialovtowards} has shown that visual features can be used to automatically predict individual signs. The present work builds on this: the individual signs are to be used with a language model that takes context into account, thereby increasing the accuracy of the transcriber, which outputs a string of word-like tokens. These tokens are called glosses \cite{sutton1999linguistics,cormier2015bsl}. Although glosses are translated BSL signs, they also convey some grammatical information about BSL. This makes glosses useful in their own right, without the videos of the BSL signs, and sheds some light on the syntax and semantics of BSL.
This paper focuses on language modelling, a common technique in the field of ASR and Natural Language Processing to model the likelihood of certain words following each other in a sequence. We improve modelling of the BSL glosses by proposing to use transfer learning approaches, such as fine-tuning and layer substitution.
Transfer learning can overcome the data-sparsity issue in statistical modelling for low-resource languages: similar resources that are available in large quantities are used for pre-training, and the models are then further trained on the specific low-resource data.
We show that a model, pre-trained on the Penn Treebank (PTB) dataset\footnote{https://catalog.ldc.upenn.edu/ldc99t42} and fine-tuned on the BSL monolingual corpus\footnote{http://www.bslcorpusproject.org/} can yield better results. This is in contrast to the same architecture that is trained directly on the BSL dataset without pre-training.
This is a somewhat surprising result, as there are marked differences between the two languages, particularly with respect to syntax \cite{sutton1999linguistics}.
The paper begins by presenting methods for modelling languages and how they can be utilised for BSL modelling. Section~\ref{transfer_learning} gives an overview of how transfer learning can be achieved, as well as its use in sign languages. Section~\ref{datadatadata} describes the datasets used in this paper, their statistics, and the pre-processing steps to create two monolingual corpora for statistical model training. Section~\ref{methodologymethodology} describes the experimental setup in detail. Section~\ref{resultsresults} presents the results of the models, and Section~\ref{discussion} discusses these results and the limitations of the approach in terms of the data used. The paper then concludes and proposes future work.
\section{Related Work}
\subsection{Sign Language Modelling}\label{sign_language}
Despite the availability of many alternatives for language modelling, such as count-based n-grams and their variations \cite{chen1999empirical,rosenfeld2000two,maccartney2005nlp,bulyko2007language,guthrie2006closer}, hidden Markov models \cite{dreuw2008visual,dreuw2008benchmark}, decision trees and decision forests \cite{filimonov2011decision}, and neural networks \cite{deena2016combining,mikolov2010recurrent}, research in sign language modelling predominantly employs simple n-gram models, such as in \newcite{DBLP:journals/corr/CateH17}, \newcite{forster2012rwth}, and \newcite{masso2010dealing}.
The reason for the wide-spread use of n-grams in sign language modelling is the simplicity of the method. However, there is a disconnect between n-grams and sign language: signing is embodied and perceived visually, while n-grams are commonly applied to text sequence modelling. For this reason, the authors in \newcite{stein2007hand}, \newcite{zhao2000machine}, \newcite{dreuw2008benchmark}, \newcite{masso2010dealing}, and \newcite{forster2013improving} model glosses, such as the ones shown in Figure~\ref{elanannot}, which are obtained from the transcribed sign languages, in a similar way to how language modelling is applied to automatically transcribed words from speech.
Glosses model the meaning of a sign in a written language, but not its execution (i.e. facial expressions, hand movement). Therefore, the more detailed meaning of what was signed may get lost when working with the higher-level glosses. To overcome this issue and to incorporate valuable information into sign language modelling, additional features, such as non-manual features (e.g. facial expressions), are added in similar research \cite{san2009spoken,masso2010dealing,zhao2000machine,stein2007hand}.
In this work we use glosses because we want to model BSL purely at the gloss level without any additional information (e.g. facial expressions).
\subsection{Transfer Learning}\label{transfer_learning}
While transfer learning is a more general machine learning term, the language modelling literature speaks of cross-domain adaptation of language models \cite{deena2016combining,ma2017approaches}. Models are usually trained on some specific domain with a specific topic, genre, and similar features that can be identified by an expert. For example, in a restaurant domain, when a new type of restaurant is created, the system needs to adapt so that it can understand and discuss this new type of restaurant. Unfortunately, it is nearly impossible to train a model for every possible configuration of current or future features.
Commonly, a set of features are extracted from the raw data. When features change, re-training is required.
Useful features can also be extracted without expert knowledge with techniques such as Latent Dirichlet Allocation (LDA). These features usually take the form of words that represent topics in the data \cite{deena2016combining}. Because of the overhead involved, best practice avoids re-training the models every time one of the features or the domain changes.
Model-based adaptation to the new domains, on the other hand, is achieved by either fine-tuning or the introduction of adaptation layer(s) \cite{yosinski2014transferable}. Fine-tuning involves further training the already pre-trained model using the data from the new domain. The intuition behind the fine-tuning is that it is much quicker to learn new information with related knowledge. The adaptation layer approach incorporates new knowledge by re-training only the adaptation layer, whereas the rest of the model remains exactly the same as if it was used in the original domain and acts as a feature extractor for the new domain \cite{deena2016combining}.
Transfer learning has been applied to sign languages in computing for various purposes, demonstrating that the method is suitable given the lack of substantial domain-specific sign language data. It has been successfully applied to static pose estimation, transferring knowledge from general pose estimation to sign language pose estimation \cite{DBLP:journals/corr/GattupalliGA16}, and to the classification of fingerspelled letters in American Sign Language \cite{garcia2016real,karthickaryajayeshkudase2017,belalchaudhary2017,muskandhimandrg.n.rathna2017}. In particular, most transfer learning in sign language has been applied to static image recognition, using convolutional neural networks to recognise the hand shape in an image.
We apply transfer learning to the language modelling task as this is a key challenge in successfully transcribing BSL.
\section{Corpora}\label{datadatadata}
The BSL corpus and the preprocessed Penn Treebank (PTB) corpus were chosen for this research. The monolingual PTB dataset consists of telephone speech, newswire, microphone speech, and transcribed speech. The dataset is preprocessed to normalise capitalisation, numbers, and punctuation and was used by Mikolov~\shortcite{mikolov2010recurrent}. The BSL corpus contains video conversations among deaf native, near-native and fluent signers across the United Kingdom. Almost all of the approximately one hundred recorded conversations are annotated for thirty seconds each at the gloss level using the ELAN\footnote{https://tla.mpi.nl/tools/tla-tools/elan/} annotation tool \cite{schembri2013building}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.4\linewidth]{pics/BSL}
\caption{The BSL Corpus Project Sample Video Snippets\protect\footnotemark}
\label{bslcorpus}
\end{figure}
\footnotetext{http://www.bslcorpusproject.org/cava/}
All recordings of the signers were made using up to four standard video cameras with a plain backdrop to provide a full-body view of the individuals, as well as views from above of their use of signing space. The conversations between the signers included personal experience anecdotes and spontaneous conversations \cite{schembri2013building}.
The BSL data that we focused on consists of narratives between two participants, where during elicitation one person had to think of a topic to sign about to the other participant.
\placetextbox{0.1}{0.77}{\textsf{a)}}%
\placetextbox{0.1}{0.705}{\textsf{b)}}%
\placetextbox{0.1}{0.675}{\textsf{c)}}%
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth,trim={0 0 20cm 17cm},clip]{pics/elan}
\end{figure}
\vspace{-1.5em}
\begin{table}[h!]
\centering
\resizebox{\textwidth}{!}{%
\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|l|l|l|l|l|}
\hline
\textbf{RH-IDgloss} & PT:PRO1SG & EXPLAIN & ABOUT & PT:POSS1SG & FS:PUPPY & DSEW(FLAT)-BE:ANIMAL & PT:POSS1SG & WANT & FAMILY & AT-LAST & HAVE & DSEW(FLAT)-BE:ANIMAL & ?LAST-WEEK & GOOD \\ \hline
\textbf{LH-IDgloss} & & EXPLAIN & ABOUT & & FS:PUPPY & DSEW(FLAT)-BE:ANIMAL & & & & AT-LAST & HAVE & DSEW(FLAT)-BE:ANIMAL & & GOOD \\ \hline
\textbf{Free Translation} & \multicolumn{6}{c|}{I want to tell you about my puppy} & \multicolumn{8}{c|}{My family got a puppy last year} \\ \hline
\multicolumn{15}{c}{} \\
\multicolumn{15}{c}{} \\
\hline
\textbf{Model Input Gloss} & & EXPLAIN & ABOUT & & PUPPY & ANIMAL & & WANT & FAMILY & AT-LAST & HAVE & ANIMAL & LAST-WEEK & GOOD \\ \hline
\end{tabular}
}
\end{table}
\begin{figure}[h!]
\centering
\caption{a) The BSL Corpus Annotation in ELAN; b) Table shows full text of the annotated glosses for the two first sentences from the ELAN annotation; c) Glosses that are used for the BSL modelling}
\label{elanannot}
\end{figure}
The corpus is annotated with glosses, taken from the BSL SignBank, in ELAN as shown in Figure~\ref{elanannot}a. Figure~\ref{elanannot}b shows all the glosses of the first sentence. As mentioned above, a gloss is an identifier of a unique sign, written in English, that should represent its phonological and morphological meaning \cite{schembri2013building}. In the corpus, glosses are identified throughout the videos for both the left and the right hand, as different signs can sometimes be signed at the same time. Apart from the glossing, the annotations include the corresponding free English written translation of the meaning of the signing, split into sentences (see the Free Translation in Figure~\ref{elanannot}). Figure~\ref{elanannot}c shows which glosses are considered for the BSL modelling and which are ignored. This is done to match the vocabulary of the PTB corpus for transfer learning purposes.
\subsection{Data Pre-processing}\label{datapreproc}
For the BSL corpus, we ignore the free translation and extract English text from the glosses, preserving the order of the signs executed. For example, in Figure~\ref{elanannot}, the right-hand glosses identify the following order of signs: good, explain, about, puppy, etc., excluding glosses such as PT:PRO for pointing signs or PT:POSS for possessives and others (Figure~\ref{elanannot}c), which are explained in more detail in Fenlon et al.~\shortcite{fenlon2014using}. Since the gloss annotation does not include explicit punctuation, it is impossible to tell where a signed sentence begins and where it stops. To overcome this limitation of the gloss annotation, we use the Free Translation annotation, which gives the boundaries of sentences in the videos, and split the extracted glosses into sentences using these boundaries. By the end of the pre-processing stage, we have glosses (excluding special glosses for pointing signs, possessives or other non-lexical glosses) in the order in which the corresponding signs were executed in the video, split into sentences. As a result, we extracted 810 sentences from the BSL corpus, with an average sentence length of 4.31 glossed signs and minimum and maximum lengths of 1 and 13 glossed signs respectively. A monolingual dataset has been created from the extracted sentences. The English-language corpus, as obtained from the PTB dataset \cite{merityRegOpt}, has 23.09 words on average per sentence, with a minimum of 3 and a maximum of 84 words per sentence. The pre-processed BSL corpus has a vocabulary of 666 words, while the PTB dataset has a vocabulary of \num[group-separator={,}]{10002} words. From this point on, we will use the term `words' to refer both to glosses in the BSL dataset and to words in the PTB dataset, because we aim to use a common vocabulary for training our models.
Both monolingual datasets were split into training, validation, and test sets, as required for training and evaluating the statistical models. Both datasets were split using an 85:15 ratio, and the smaller subset was in turn split 50:50 into validation and test sets.
\section{Language Modelling Methodology}\label{methodologymethodology}
\subsection{Statistical Language Models}
The perplexity measure has been used for evaluating and comparing the different models. We compute perplexity as $e^{\text{cross-entropy}}$, as in \newcite{bengio2003neural}, which approximates the inverse of the geometric average of the predicted word probabilities on the test set. We have explicitly modelled out-of-vocabulary (OOV) words with the $<unk>$ placeholder in all the experiments.
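Concretely, the perplexity of a model on a test set can be computed from the probabilities it assigns to the test words (a minimal sketch using natural logarithms):

```python
import math

def perplexity(word_probs):
    """Perplexity as e^(cross-entropy) over the test-set word probabilities."""
    cross_entropy = -sum(math.log(p) for p in word_probs) / len(word_probs)
    return math.exp(cross_entropy)
```

For a uniform model over a vocabulary of size $V$, every probability is $1/V$ and the perplexity equals $V$, which gives a useful sanity check.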
\subsubsection{Neural Models}
For comparison, we use two methods: 1) stacked LSTM and 2) Feed-Forward (FFNN) architectures to create the BSL language models. All models are implemented in PyTorch\footnote{http://pytorch.org/} with weight-drop recurrent regularisation scheme for the LSTMs, which is important for overcoming commonly known LSTM model generalisation issues \cite{merityRegOpt,merityAnalysis}. The feed-forward model, on the other hand, had no regularisations as it is less susceptible to overfitting due to the much smaller number of parameters.
The parameters modified to achieve the lowest perplexity were the input sequence length for the recurrent neural network (the back-propagation-through-time window, BPTT), the batch size, the learning rate, and the optimizer. The parameters were selected by grid search using the perplexity metric. As a result, for the stacked LSTMs, BPTT was set to 5, the batch size to 16, the discounted learning rate to 30, and the optimizer to stochastic gradient descent. For the feed-forward network, the input was set to 5 words, the batch size to 16, the discounted learning rate to 30, and the optimizer to stochastic gradient descent. All the neural models were trained for 100 epochs.
For the neural networks, the sequences of words were tokenised (i.e. turned into integers) and the tokenisation was stored to ensure the same tokenisation during the transfer learning phase. The input therefore consisted of a sequence of tokens, while the outputs (i.e. the predicted words) were turned into one-hot vectors.
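The tokenisation step can be sketched as follows; the function names are illustrative, and unseen words map to the stored $<unk>$ token so that the same mapping can be reused during the transfer learning phase:

```python
def build_vocab(sentences):
    # assign a stable integer id to every word, reserving 0 for <unk>
    vocab = {"<unk>": 0}
    for sentence in sentences:
        for word in sentence:
            vocab.setdefault(word, len(vocab))
    return vocab

def tokenise(sentence, vocab):
    # map words to ids; out-of-vocabulary words fall back to <unk>
    return [vocab.get(word, vocab["<unk>"]) for word in sentence]
```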
\begin{figure}[h]
\centering
\subfloat[\mbox{Stacked LSTMs model}]{\label{stackedlstmsgraph}{\scalebox{0.4}{\includegraphics[width=0.4\linewidth]{pics/stackelstms} }}}%
\qquad\qquad\qquad
\subfloat[\mbox{Feed-Forward model}]{\label{ffnngraph}{\scalebox{0.3}{\includegraphics[width=0.4\linewidth]{pics/ffnn} }}}%
\caption{The two types of neural models used to test transfer methods for sign language modelling}%
\label{fig:example}%
\end{figure}
\paragraph{Stacked LSTMs}
Figure~\ref{stackedlstmsgraph} shows the architecture of the stacked LSTM model. The model consists of an embedding layer of 400 nodes, which, together with the tokenisation, turns a string of words into a vector of real numbers. Then, three LSTM layers with 1150 nodes each are stacked vertically for deeper feature extraction. Finally, a linear layer downsizes the stacked LSTMs' output to the vocabulary size and applies a linear transformation with softmax normalisation. The weights of the embedding and the linear layers are tied, meaning the two layers share the same weights, which reduces the number of parameters of the network and makes training converge faster. The same architecture was used in \newcite{merityRegOpt} to model the PTB dataset, reporting a perplexity of 57.3 when utilising a cache of recent predictions.
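A minimal PyTorch sketch of this architecture is given below. One assumption is made explicit: because the 1150-unit LSTM output and the 400-dimensional embedding differ in size, a small projection layer is inserted before the tied output layer so that the embedding and decoder weights can be shared; the weight-drop regularisation and prediction cache are omitted.

```python
import torch
import torch.nn as nn

class StackedLSTMModel(nn.Module):
    def __init__(self, vocab_size, emb_size=400, hidden_size=1150, num_layers=3):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, emb_size)
        self.lstm = nn.LSTM(emb_size, hidden_size, num_layers)
        self.proj = nn.Linear(hidden_size, emb_size)  # down to embedding size for weight tying
        self.decoder = nn.Linear(emb_size, vocab_size)
        self.decoder.weight = self.encoder.weight     # tie embedding and output weights

    def forward(self, tokens):                        # tokens: (seq_len, batch)
        output, _ = self.lstm(self.encoder(tokens))
        return self.decoder(self.proj(output))        # logits: (seq_len, batch, vocab_size)
```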
\paragraph{FFNN}
Figure~\ref{ffnngraph} shows the feed-forward model architecture. The model has no stacked LSTM layers; instead, they are substituted with one hidden fully-connected rectifier layer, which is known to overcome the vanishing gradient problem. The weights of the embedding and the output layers are not tied together. Similar architectures have been used for language modelling in \newcite{le2013structured}, \newcite{4960686}, and \newcite{DBLP:journals/corr/BrebissonSAVB15}, with different activation functions in the hidden layer; the PTB dataset was used in \newcite{DBLP:journals/corr/AudhkhasiSR14}, reporting a perplexity of 137.32.
\subsubsection{Training the Models}
Transfer learning was achieved with both fine-tuning and substitution. Both the FFNN and the stacked LSTMs were trained on the PTB dataset and then either fine-tuned on the BSL dataset, or had their last layer substituted with a new adaptation layer, with the rest of the weights frozen, before further training on the BSL dataset.
To achieve fine-tuning, the best model from training both the FFNN and the stacked LSTMs on the PTB dataset is first saved. Training is then restarted on the BSL corpus, with the model initialised with the weights trained on the PTB dataset.
To perform layer substitution as a transfer learning approach, the same first step as for fine-tuning is repeated and the model trained on the PTB is saved. When training is restarted on the BSL dataset, the saved model is loaded and the last linear layer is substituted with a layer that has as many nodes as the BSL vocabulary. All other weights of the network are then locked and are not modified during optimisation; only the weights of the substituted last layer are updated. This method uses the pre-trained network as a feature extractor and only trains the last layer for the BSL dataset.
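The layer-substitution step can be sketched in PyTorch as follows; the attribute name `decoder` for the output layer is an assumption about the model's structure, not a fixed convention:

```python
import torch
import torch.nn as nn

def substitute_output_layer(model, new_vocab_size):
    # freeze every pretrained weight so only the new layer is optimised
    for param in model.parameters():
        param.requires_grad = False
    # replace the final linear layer with a fresh, trainable one
    # sized to the new (BSL) vocabulary
    in_features = model.decoder.in_features
    model.decoder = nn.Linear(in_features, new_vocab_size)
    return model
```

An optimiser would then be built only over `model.decoder.parameters()`, leaving the pretrained layers to act as a fixed feature extractor.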
\section{Results}\label{resultsresults}
This section is split into two subsections. We firstly present results without transfer learning, namely both the FFNN and the stacked LSTMs models trained and tested on the PTB dataset or trained and tested on the BSL. Later we present results with the transfer learning, with both FFNN and the stacked LSTMs models trained on the PTB dataset and then fine-tuned and tested on the BSL.
To show that the two languages are different, as discussed in Section~\ref{datapreproc}, we applied the model trained on one language to the other language and vice versa. The model trained on English and applied to the BSL scored 1051.91 in perplexity using the SRILM toolkit \cite{stolcke2002srilm}; conversely, a model trained on the BSL and applied to English scored 1447.23. As expected, the perplexity is high in both cases, which means that the probability distribution over the next word in one language is far from the true distribution of words in the other language.
\subsection{Without Transfer Learning}
\begin{table}[!h]
\centering
\resizebox{0.5\textwidth}{!}{%
\setlength{\tabcolsep}{0.5em}
{\renewcommand{\arraystretch}{2.0}
\begin{tabular}{|c|c|c|c|}
\cline{1-4}
\textbf{Method} & \textbf{\begin{tabular}[c]{@{}c@{}}Penn Treebank\\(PTB)\end{tabular}} & \multicolumn{2}{c|}{\textbf{\begin{tabular}[c]{@{}c@{}}The BSL\\Corpus Project\end{tabular}}} \\ \cline{1-4}
\begin{tabular}[c]{@{}c@{}}FFNN\end{tabular} & 190.46 & \multicolumn{2}{c|}{\textbf{258.1}} \\ \cline{1-4}
\begin{tabular}[c]{@{}c@{}}Stacked LSTMs\end{tabular} & 65.91 & \multicolumn{2}{c|}{274.03} \\ \cline{1-4}
OOV & 6.09\% & \multicolumn{2}{c|}{25.18\%} \\ \hline
\end{tabular}%
}
}
\caption{\label{perplexities_table}Perplexities on either the PTB or the BSL test sets using models trained and tested on the same corpus (i.e. PTB and BSL)}
\end{table}
Table~\ref{perplexities_table} shows perplexities on the two datasets with the two statistical models. From the table, we can see that the models trained on the PTB dataset have lower perplexity than the same architectures trained on the BSL dataset. This can be explained by the fact that the PTB dataset contains more data than the BSL dataset, so the statistical models can generalise better. Furthermore, the amount of usable data is further reduced in the BSL case, as OOV words cover a quarter of the overall dataset.
\subsection{With Transfer Learning}
Table~\ref{perplexities_table2} shows perplexities on the two datasets with the two statistical models after applying transfer learning. From this table, it can be seen that the substitution approach gives very similar results regardless of whether the FFNN or the stacked LSTMs model is used (123.92 versus 125.32). The best result is achieved with the fine-tuning approach on the stacked LSTMs model, while the highest perplexity comes from the FFNN model with the fine-tuning approach. Similar results have been reported in \newcite{irie2016lstm}, where a fine-tuned GRU performed worse than a fine-tuned LSTM model. In addition, the OOV count differs from that of Table~\ref{perplexities_table} because the subset of the vocabulary observed in the PTB dataset during training is then identified in the BSL dataset during testing.
\begin{table}[!h]
\centering
\resizebox{0.5\textwidth}{!}{%
\setlength{\tabcolsep}{0.5em}
{\renewcommand{\arraystretch}{1.2}
\begin{tabular}{|c|c|c|c|c|}
\hline
\tiny \textbf{\begin{tabular}[c]{@{}c@{}} \qquad \\ Method\\ \qquad \end{tabular}} & \multicolumn{2}{c|}{\textbf{\tiny Fine-tuning}} & \multicolumn{2}{c|}{\textbf{\tiny Substitution}} \\ \cline{1-1} \hline
\tiny \begin{tabular}[c]{@{}c@{}}FFNN\end{tabular} & \multicolumn{2}{c|}{\tiny 179.3} & \multicolumn{2}{c|}{\tiny 123.92} \\ \cline{1-1} \hline
\tiny \begin{tabular}[c]{@{}c@{}}Stacked LSTMs\end{tabular} & \multicolumn{2}{c|}{\tiny \textbf{121.46}} & \multicolumn{2}{c|}{\tiny 125.32} \\ \hline
\multicolumn{1}{|c|}{\tiny OOV} & \multicolumn{4}{c|}{\tiny 12.71\%} \\ \hline
\end{tabular}%
}
}
\caption{\label{perplexities_table2}Perplexities on the BSL test set after applying the transfer learning on FFNN and LSTMs}
\end{table}
\subsection{Discussion}\label{discussion}
The salient question of this paper is whether transfer learning is a legitimate method for modelling one language with the knowledge of another, assuming the languages are different but share some common properties, such as vocabulary. This idea is intuitive and has been discussed in linguistics for spoken languages \cite{kaivapalu2007morphology}. In our case, the PTB corpus covers most of the vocabulary found in the BSL corpus (12.71\% OOV) by virtue of the gloss annotation of the BSL corpus \cite{schembri2013building}. However, the languages are assumed to be different, as they evolved independently of one another \cite{faberfaber}.
The results obtained differ from those reported in similar research. For example, for the FFNN model, Audhkhasi et al.~\shortcite{DBLP:journals/corr/AudhkhasiSR14} report a perplexity of 137.32 versus our 190.46, and for the stacked LSTMs model, Merity et al.~\shortcite{merityRegOpt} report 57.3 versus our 65.91. This can be explained by the fact that not all the regularisation techniques of the prior work were used in this research, and model training was restricted to 100 epochs. Further training may reduce the perplexity towards that reported in Merity et al.~\shortcite{merityRegOpt}.
From the results, we can see that transfer learning leads to better models than training on the BSL corpus directly (258.1 and 274.03 against 123.92 and 125.32). Since the quality of the models trained with either approach is similar in the case of the stacked LSTMs model (121.46 and 125.32), the choice between fine-tuning and substitution can be guided by the convergence speed. During substitution, only one layer of the network is replaced with a new one and the rest of the weights in the network are frozen; therefore, only one set of weights is optimized. This is in contrast to the fine-tuning method, which optimizes all of the weights and may, in turn, require more iterations, depending on how different the new data is.
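As a reference for the numbers above, perplexity is the exponential of the average negative log-likelihood the model assigns to the test tokens. A minimal, self-contained sketch of this definition (not the evaluation code used in the experiments):

```python
import math

def perplexity(token_probs):
    """Perplexity over a test sequence: the exponential of the mean
    negative log-likelihood the model assigns to the observed tokens."""
    nll = [-math.log(p) for p in token_probs]
    return math.exp(sum(nll) / len(nll))

# A model assigning uniform probability 1/V to every token has perplexity V,
# so lower perplexity means the model is less "surprised" by the test set.
```

For instance, `perplexity([0.02] * 10)` evaluates to 50, the vocabulary size of the corresponding uniform model.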
\section{Conclusion}
This paper shows how transfer learning techniques can be used to improve language modelling for the BSL language at the gloss level. Statistical modelling techniques are used to generate language models and to evaluate them using a perplexity measure.
The choice of the transfer learning technique is guided by the scarcity of available resources of the BSL language and the availability of the English language dataset that shares similar language modelling vocabulary with the annotated BSL. Feed-forward and recurrent neural models have been used to evaluate and compare generated language models. The results show that transfer learning can achieve superior quality of the generated language models. However, our pre-processed BSL corpus lacks constructs that are essential for a sign language, such as classifier signs and others. Nevertheless, transfer learning for modelling the BSL shows promising results and should be investigated further.
\subsection{Future Work}
Although this paper discusses the use of a model initially trained on English and presents promising preliminary results, the annotation of the BSL used in this paper is limited, as this paper serves as a proof of concept. In particular, the annotation is missing some of the grammatical aspects of the BSL, such as classifier signs. Including these in BSL language modelling would increase the OOV count, as the English language does not have equivalent constructs. This raises the question of whether a sign language can be modelled using other languages that may have these constructs. More generally, is it possible to model a language with transfer learning using other, less related languages? Similar questions have been partly answered for written languages in the field of machine translation \cite{gu2018universal} by bringing words of different languages close to each other in the latent space. However, nothing similar has been done for sign languages.
From the methodological side of the modelling, additional state-of-the-art techniques should be experimented with to improve the quality of the generated models, such as attention mechanisms for the recurrent neural networks. Finally, this paper focuses on key techniques for sign processing, which could be part of a larger conversational system whereby signers could interact with computers and home devices through their natural communication medium of sign.
Research in such end-to-end systems would include vision processing, segmentation, classification, and language modelling as well as language understanding and dialogue modelling, all tuned to sign language.
\bibliographystyle{acl}
\section{Introduction}
\label{sec:intro}
For noncentrosymmetric superconductors (NCSs) that obey time-reversal symmetry and exhibit line nodes of the superconducting gap function, one can define a momentum-dependent invariant that ensures the existence of flat bands of zero-energy surface states in regions of the surface Brillouin zone where this invariant is nonzero~\cite{SR11,BST11,SBT12,STY11,HQS13,SB15}. Due to being their own antiparticles, these surface modes are also known as Majorana modes.
As the flat surface bands occupy a nonzero fraction $S_f/S_{\text{BZ}}$ of the area $S_{\text{BZ}}$ of the surface Brillouin zone, it is possible to construct linear combinations of them which are localized at arbitrary points in real space~\cite{RRT20}. However, the number of independent localized zero-energy modes in real space is not equal to the number of sites at the surface, i.e., the number of points in the surface Brillouin zone, but is reduced by a factor of $S_f/S_{\text{BZ}}$. For the same reason, the wave packets have a minimal width in real space that is inversely proportional to the maximal diameter of the support of the flat bands in the surface Brillouin zone. Moreover, these wave packets have zero eigenenergy only in the limit of an infinitely thick slab.
Majorana modes have attracted a lot of interest in the context of quantum computation \cite{Kit03,NSSFS08,SLTD10,LSD10,ORO10,ElF15,SFN15,OrO20}. Such applications require ways to move Majorana modes around and, in particular, to move them past each other so as to braid their world lines. The behavior upon braiding is of physical relevance because quasiparticles in a two-dimensional system can display anyon statistics. These anyons are called Abelian if, upon exchanging, they can gather any phase factor $e^{i\phi}$. On the other hand, for non-Abelian anyons, the braiding operations do not commute anymore. Localized Majorana modes at the surfaces of NCSs provide a promising platform for this. Here, we make progress in two ways: First, it is necessary to confine the Majorana modes to a certain real-space region of the surface, i.e., to construct Majorana circuits. Second, one also has to be able to move Majorana wave packets, which requires a time-dependent modification of the bands such that they are weakly dispersing.
In this paper, we suggest one introduce time-reversal-symmetry-breaking terms to the Hamiltonian to achieve both objectives. These terms are realized by exchange fields applied to parts of the surface by bringing the superconductor into contact with a ferromagnetic insulator.
If an exchange field is applied to the entire surface, this leads to a tilting of the previously flat surface bands away from zero energy~\cite{BTS13,STB13} due to the momentum-dependent spin polarization of the surface states~\cite{BST15}. We first use this effect to restrict the zero-energy surface states to certain strips on the surface by applying a strong exchange field everywhere else. For a nonzero spin polarization of the surface states in the field-free system, this generically leads to the localization of low-energy surface modes in the field-free strip. We then introduce a small exchange field to this previously field-free strip to induce a weak dispersion in order to move a Majorana wave packet along the strip.
The remainder of this paper is organized as follows. In Sec.\ \ref{sec:Model system}, we introduce the model used for our analysis. In Sec.\ \ref{sec:Eigenvalues and eigenstates of the Hamiltonian}, we derive and discuss the surface states in the presence of an exchange field applied to strips of various forms. This is followed by an analysis of the prospects of creating a linear dispersion along a strip and thereby moving Majorana wave packets. We summarize our results and draw conclusions in Sec.~\ref{sec:summary}.
\section{Model system}
\label{sec:Model system}
For our calculations, we use a model system with point group $C_{4v}$, which is relevant for, e.g., CePt$_3$Si~\cite{BHMPSGSNSR04}, CeRhSi$_3$~\cite{KISUAT05}, CeIrSi$_3$~\cite{SOSYTYMHTSO06}, and LaAlO$_3$/SrTiO$_3$ heterostructures \cite{RTC07}. We determine the low-energy eigenstates and eigenvalues of the Bogoliubov--de Gennes (BdG) Hamiltonian of a $(101)$ slab by a Fourier transformation to real space along the two axes that are not translationally invariant, followed by exact diagonalization of the resulting matrix.
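The diagonalization step can be sketched schematically: open boundary conditions in a stacking direction turn the Hamiltonian into a block-tridiagonal matrix of intra-layer and inter-layer blocks. The blocks below are placeholders (the actual blocks for the $(101)$ slab follow from the derivation in Appendix~\ref{sec:Derivation of the mean-field Hamiltonian matrix}); the sanity check uses scalar blocks, for which the open-chain spectrum is known in closed form.

```python
import numpy as np

def slab_hamiltonian(h_onsite, h_hop, L):
    """Block-tridiagonal Hamiltonian for a slab of L layers with open
    boundary conditions in the stacking direction.  h_onsite and h_hop are
    the (possibly momentum-dependent) intra-layer and nearest-layer blocks."""
    h_onsite = np.asarray(h_onsite, dtype=complex)
    h_hop = np.asarray(h_hop, dtype=complex)
    d = h_onsite.shape[0]
    H = np.zeros((L * d, L * d), dtype=complex)
    for i in range(L):
        H[i*d:(i+1)*d, i*d:(i+1)*d] = h_onsite
        if i + 1 < L:
            H[i*d:(i+1)*d, (i+1)*d:(i+2)*d] = h_hop
            H[(i+1)*d:(i+2)*d, i*d:(i+1)*d] = h_hop.conj().T
    return H

# Sanity check with scalar blocks: an open chain of L sites with hopping -t
# has eigenvalues -2t cos(n*pi/(L+1)), n = 1, ..., L.
```

Exact diagonalization of the assembled matrix then proceeds with a Hermitian eigensolver such as `numpy.linalg.eigvalsh`.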
\subsection{Hamiltonian}
We start by considering a three-dimensional single band NCS described by the Hamiltonian
\begin{equation}
H = \frac{1}{2} \sum_{\mathbf{k}}\Psi_\mathbf{k}^\dagger \mathcal{H}_\text{BdG}(\mathbf{k})\Psi_{\mathbf{k}} ,
\end{equation}
with the Nambu spinor $\Psi_\mathbf{k} = (c_{\mathbf{k},\uparrow},c_{\mathbf{k},\downarrow},c_{ -\mathbf{k},\uparrow}^\dagger,c_{-\mathbf{k},\downarrow}^\dagger)^\top$ of the electronic creation and annihilation operators for momentum $\mathbf{k}$ and spin $\sigma\in\lbrace\uparrow, \downarrow \rbrace$ and the BdG Hamiltonian
\begin{equation} \label{eq:H_BdG(k)}
\mathcal{H}_{\text{BdG}}(\mathbf{k}) =
\begin{pmatrix}
\epsilon(\mathbf{k}) \hat{\sigma}^0 +\lambda \mathbf{l}_\mathbf{k}\cdot \boldsymbol{\hat{\sigma}}&\hat{\Delta}(\mathbf{k})\\
\hat{\Delta}^\dagger(\mathbf{k})&-\epsilon(\mathbf{k}) \hat{\sigma}^0 + \lambda \mathbf{l}_\mathbf{k}\cdot \boldsymbol{\hat{\sigma}}^*
\end{pmatrix}.
\end{equation}
Here, the vector of Pauli matrices and the $2\times2$ identity matrix are denoted by $\boldsymbol{\hat{\sigma}}$ and $\hat{\sigma}^0$, respectively, and
\begin{equation} \label{eq:h(k)}
h(\mathbf{k}) = \epsilon(\mathbf{k}) \hat{\sigma}^0 + \lambda \mathbf{l}_\mathbf{k}\cdot \boldsymbol{\hat{\sigma}}
\end{equation}
is the normal-state Hamiltonian. The first term, $\epsilon(\mathbf{k}) \hat{\sigma}^0$, which is diagonal in the spin basis, will henceforth be represented by the tight-binding dispersion
\begin{equation}
\epsilon(\mathbf{k})=-2t(\cos k_x+\cos k_y +\cos k_z)-\mu
\end{equation}
for nearest-neighbor hopping strength $t$ and chemical potential $\mu$. The second term, $\lambda \mathbf{l}_\mathbf{k}\cdot \boldsymbol{\hat{\sigma}}$, is the antisymmetric spin-orbit-coupling (ASOC) term, in which $\lambda$ represents the spin-orbit-coupling (SOC) strength, while the form of the SOC vector~$\mathbf{l_k}$ is constrained by the lattice symmetries~\cite{SBT12}. For the point group $C_{4v}$, a first-order expansion leads to a Rashba-type SOC with~\cite{S09}
\begin{equation}
\mathbf{l_k} = \mathbf{\hat{x}} \sin k_y
- \mathbf{\hat{y}} \sin k_x .
\end{equation}
In the energetically most favorable pairing state, the vector of triplet pairing amplitudes tends to be parallel to the ASOC vector $\mathbf{l_k}$~\cite{FAKS04} so that the pairing matrix can be written as
\begin{equation}\label{eq:pairing}
\hat{\Delta}=(\Delta^s \hat{\sigma}^0+\Delta^t \mathbf{l_k}\cdot\boldsymbol{\hat{\sigma}})(i\hat{\sigma}^y),
\end{equation}
with the singlet and triplet pairing strengths $\Delta^s$ and $\Delta^t$, respectively, which we assume to be both constant and positive. Here, $i\hat\sigma^y$ represents the unitary part of the antiunitary time-reversal operation. In this paper, we consider $(s+p)$-wave pairing, as described by Eq.\ \eqref{eq:pairing}, because this leads to the simplest nodal structure. A more complicated momentum dependence of the pairing matrix, e.g., $d$-wave or $f$-wave pairing, would lead to additional nodes and more complex shapes of the momentum-space regions hosting zero-energy surface states \cite{SBT12}. This would increase the computational effort without leading to qualitatively new physical effects.
Diagonalizing Eq.~\eqref{eq:h(k)} leads to two helicity bands
\begin{equation}
\xi^\pm_\mathbf{k}=\epsilon_\mathbf{k}\pm\lambda|\mathbf{l_k}|,
\end{equation}
with the gap in the positive ($+$) and negative ($-$) helicity band being
\begin{equation}
\Delta^\pm_\mathbf{k} =\Delta^s \pm \Delta^t |\mathbf{l_k}|,
\end{equation}
respectively.
Thus, for sufficiently large $\Delta^t$, the negative-helicity gap $\Delta^-$ can change sign, i.e., line nodes generically appear on the negative-helicity Fermi surface~\footnote{If $\Delta^s$ and $\Delta^t$ have opposite signs, the nodes appear on the positive-helicity Fermi surface.}.
This ensures the topological stability of flat bands of Majorana surface states within the projection of the bulk nodal lines onto the surface Brillouin zone~\cite{SBT12,STY11,BST11}.
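The helicity decomposition can be verified numerically: for a pairing vector parallel to $\mathbf{l_k}$, the quasiparticle energies of the $4\times4$ BdG matrix in Eq.~\eqref{eq:H_BdG(k)} are $\pm\sqrt{(\xi^\pm_\mathbf{k})^2+(\Delta^\pm_\mathbf{k})^2}$, a standard result. A minimal sketch, using the parameter values quoted in Sec.~III (any values work):

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

t, mu, lam, Ds, Dt = 1.0, -3.0, -1.5, 0.3426, 0.5   # parameter set of Sec. III

def bdg(kx, ky, kz):
    """4x4 BdG matrix of Eq. (2) with the Rashba vector l_k = (sin k_y, -sin k_x, 0)."""
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz)) - mu
    l_sigma = np.sin(ky) * sx - np.sin(kx) * sy          # l_k . sigma
    h_particle = eps * s0 + lam * l_sigma
    h_hole = -eps * s0 + lam * l_sigma.conj()            # l_k . sigma^* (l_k is real)
    delta = (Ds * s0 + Dt * l_sigma) @ (1j * sy)
    return np.block([[h_particle, delta],
                     [delta.conj().T, h_hole]])

def helicity_spectrum(kx, ky, kz):
    """Analytic quasiparticle energies +-sqrt(xi_pm^2 + Delta_pm^2)."""
    eps = -2.0 * t * (np.cos(kx) + np.cos(ky) + np.cos(kz)) - mu
    l_abs = np.hypot(np.sin(ky), np.sin(kx))
    energies = []
    for s in (+1.0, -1.0):
        e = np.hypot(eps + s * lam * l_abs, Ds + s * Dt * l_abs)
        energies += [e, -e]
    return np.sort(energies)
```

Diagonalizing `bdg` at any momentum reproduces `helicity_spectrum` to machine precision, confirming that the spectrum decouples into the two helicity sectors.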
\subsection{Setups}
\label{sec:setups}
Our goal is to find ways to confine and manipulate the Majorana modes. However, as these modes do not carry electric charge, one cannot hope to control them via an electric field. Instead, we will add an exchange field term to the Hamiltonian, which shifts the surface modes to non-zero energy by coupling to their spin polarization \cite{YSTY11,BTS13,MPSR13,STB13,WLLL13}. In the simplest setup, the exchange field is applied to a strip on the surface. In this scenario, we expect the zero-energy surface modes to be destroyed on the exchange-field strip, while they should persist at low energy in the field-free region.
\begin{figure}[!htbp]
\includegraphics[width=\columnwidth]{fig1.pdf}
\caption{Four different setups for the strip directions: The strip is oriented along the $y$ direction in setup 1 and setup 2 and along the $m$ direction in setup 3 and setup 4. The exchange field points in the $y$ direction in setups 1 and 3 and in the $m$ direction in setups 2 and 4.}
\label{fig:setups}
\end{figure}
To parametrize the slab with (101) surfaces, we keep the $y$ coordinate, which is parallel to the surface, while rotating the $x$ and $z$ directions to
\begin{align}
m&\equiv \left\lfloor\frac{x-z}{2}\right\rfloor, \\
l&\equiv x+z,
\end{align}
which are parallel and orthogonal to the slab's surface, respectively (see Fig.~\ref{fig:setups}). We Fourier transform the BdG Hamiltonian in Eq.~\eqref{eq:H_BdG(k)} to real space and then consider a slab of thickness $L$ in the $l$ direction as well as length $M$ in the $m$ direction and $Y$ in the $y$ direction. The boundary conditions are open in the $l$ direction perpendicular to the surfaces and periodic in the $m$ and $y$ directions parallel to the surfaces. The exchange field can then be applied as a term $\mathbf{h}\cdot \boldsymbol{\hat{\sigma}}$ that acts on all surface-layer sites belonging to the exchange-field strip. This breaks translational invariance in the in-plane direction orthogonal to the strip but respects it in the parallel direction. It is useful to leave the Hamiltonian in momentum space in the latter direction. A detailed derivation of the Hamiltonian is presented in Appendix~\ref{sec:Derivation of the mean-field Hamiltonian matrix}.
For the concrete configuration of the strip and the exchange field, we consider the four main setups shown in Fig.~\ref{fig:setups}: the strip and the field can each point either in the $y$ direction or in the $m$ direction, and we call the four combinations setup~1 (strip and field along $y$), setup~2 (strip along $y$, field along $m$), setup~3 (strip along $m$, field along $y$), and setup~4 (strip and field along $m$). We do not consider out-of-plane fields, as they are more difficult to produce experimentally and do not yield any additional insight. Moreover, we ignore strips in any direction other than the two coordinate axes $m$ and $y$. We can, however, change whether the exchange field is applied to the $l=1$ surface or the $l=L$ surface, which will lead to different eigenstates and eigenvalues of the Hamiltonian because the $C_{4v}$ symmetry does not require the two surfaces to be equivalent.
\section{Spectrum and eigenstates for a strip}
\label{sec:Eigenvalues and eigenstates of the Hamiltonian}
In this section, we examine the low-energy spectrum and the corresponding eigenstates of a system with an exchange field applied to a strip at the surface according to the four setups described in Sec.~\ref{sec:setups}. First, we construct a perturbative argument about the qualitative effect of the exchange field on the eigenvalues of the Hamiltonian. We then use exact diagonalization to confirm this hypothesis and reveal further details.
\subsection{First-order perturbation theory and spin polarization of the field-free system}
\label{sec:First-order perturbation theory and spin polarization of the field-free system}
For low field strengths $h=|\mathbf{h}|$, the exchange field term $\mathbf{h}\cdot \boldsymbol{\hat{\sigma}}$ can be considered as a perturbation to the field-free system. We label the states $|k_m,k_y,\nu\rangle$ of the field-free system according to their surface momentum $(k_m,k_y)$ and the index $\nu$, which enumerates the $4L$ states with the same surface momentum $(k_m,k_y)$, ordered by increasing modulus $|E|$ of the eigenenergy. Under the influence of an exchange field applied to the layer $l$, these states get shifted by an amount
\begin{equation}
\Delta E_{|k_m,k_y,\nu\rangle} \propto \mathbf{h} \cdot \langle \mathbf{\hat{s}}_l
\rangle_{|k_m,k_y,\nu\rangle},
\label{eq:ZeemanE}
\end{equation}
up to first order in perturbation theory. In this equation, the expectation value $\left\langle \mathbf{\hat{s}}_l \right\rangle_{|k_m,k_y,\nu\rangle}$ of the spin polarization is defined as
\begin{equation}
\left\langle \mathbf{\hat{s}}_l \right\rangle_{|k_m,k_y,\nu\rangle}=\langle k_m,k_y,\nu|P_{l,l}\otimes \begin{pmatrix}
\boldsymbol{\hat{\sigma}}&0\\0&-\boldsymbol{\hat{\sigma}}^\top
\end{pmatrix}|k_m,k_y,\nu\rangle,
\end{equation}
where $P_{l,l}$ is an $L{\times}L$ matrix with entry $1$ in the $(l,l)$ component and zero entries otherwise. Thus, to the first order of perturbation theory, the energy corrections due to the applied exchange field are proportional to the zero-field spin polarization.
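The first-order shift of Eq.~\eqref{eq:ZeemanE} (up to the proportionality constant) can be evaluated directly from a field-free eigenstate. A sketch, assuming the layer-major basis ordering with the quadruple $(p\!\uparrow, p\!\downarrow, h\!\uparrow, h\!\downarrow)$ per site that is used in the next section:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def spin_operator(L, l, sigma):
    """Nambu-space spin operator P_{l,l} (x) diag(sigma, -sigma^T) for layer l
    (0-based) of an L-layer slab, per-site ordering (p up, p down, h up, h down)."""
    P = np.zeros((L, L))
    P[l, l] = 1.0
    S = np.block([[sigma, np.zeros((2, 2))],
                  [np.zeros((2, 2)), -sigma.T]])
    return np.kron(P, S)

def first_order_shift(psi, h_vec, L, l):
    """Zeeman shift h . <s_l>, up to the prefactor of Eq. (8),
    for a normalized field-free eigenstate psi of length 4L."""
    return sum(hc * np.real(psi.conj() @ spin_operator(L, l, s) @ psi)
               for hc, s in zip(h_vec, (sx, sy, sz)))
```

For a state fully polarized in the particle-spin-up component of layer $l$, the operator returns a shift of $+h^z$, while the corresponding hole component contributes with opposite sign, as encoded by the $-\boldsymbol{\hat{\sigma}}^\top$ block.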
\begin{figure*}[!htbp]
\includegraphics[width=\textwidth]{fig2.pdf}
{\phantomsubcaption \label{subfig:spinm1}
\phantomsubcaption \label{subfig:spinmL}
\phantomsubcaption \label{subfig:spiny1}}
\caption{Spin polarization of the field-free system. \subref{subfig:spinm1} $m$ component of the spin at the $l=1$ surface and \subref{subfig:spinmL} at the $l=L$ surface, as a function of the surface momenta $k_m$ and $k_y$. \subref{subfig:spiny1} $y$ component of the spin, which is equal at the $l=1$ and the $l=L$ surface. The green lines are the projections of the bulk nodal lines onto the surface Brillouin zone. As the maximal spin polarization differs significantly between the three cases, different color scales are used.}
\label{fig:spin}
\end{figure*}
The momentum-dependent spin polarization of the field-free system can be calculated by transforming the BdG Hamiltonian in Eq.~\eqref{eq:H_BdG(k)} to real space in the $l$ direction perpendicular to the slab and using open boundary conditions (see Appendix~\ref{sec:Derivation of the mean-field Hamiltonian matrix}). Figure~\ref{fig:spin} shows the result for the $m$ and $y$ components of the spin at the $l=1$ surface and for the $m$ component at the $l=L$ surface of a slab with thickness $L=200$, hopping amplitude $t=1$, spin-orbit coupling $\lambda=-1.5$, chemical potential $\mu=-3$, and gaps $\Delta^s=0.3426$ and $\Delta^t=0.5$. These parameters are also used for all further calculations; our qualitative results do not depend on their specific values. The $y$ components $s^y_{l}$ of the spins are the same for $l=1$ and $l=L$. Hence, for setups 1 and 3, we do not expect different shifts in energy for the two surfaces. In general, we expect the energy shift to be linear in the field strength $h$ for all momenta in the strip direction (i.e., $k_y$ in setups 1 and 2 and $k_m$ in setups 3 and 4) for which the corresponding spin polarization is nonzero. The momentum $k_y=0$ in setup 2 is an exception: since $s^m_{l=1}$ and $s^m_{l=L}$ vanish by symmetry, this argument does not hold and the energy shift does not depend linearly on the field.
Moreover, the $m$ component of the spin at the $l=L$ surface is much smaller than at the $l=1$ surface, and there are additional sign changes for $k_m$ close to $\pm 1$. The origin of this is an accidental near cancellation of the spin polarizations in the $x$ and $z$ directions. The weak spin polarization and the correspondingly small energy shifts have a profound effect on the surface states, as we will see below.
\subsection{Classification of surface states}
\label{sec:Classification of surface states}
All calculations in this section are performed for the exchange-field strip covering half of a surface of width $M=100$ in setups 1 and 2 and $Y=100$ in setups 3 and 4, i.e., the exchange-field strip has a width of $\Delta m=M/2=50$ in setups 1 and 2 and of $\Delta y=Y/2=50$ in setups 3 and 4. The strip is centered in the middle of the slab, i.e., at $m=50$ or $y=50$.
Figure~\ref{fig:box_antibox} shows the energies and probability densities of surface states for an example of the first situation explained in Sec.~\ref{sec:First-order perturbation theory and spin polarization of the field-free system}, i.e., one with a nonzero spin polarization in the exchange-field direction. The calculations for this example are performed for the surface momentum $k_y=0$ and a field $\mathbf{h}=0.025\, \mathbf{\hat{e}}_y$ applied according to setup 1 at the $l=1$ surface, i.e., with both the field and the strip oriented along the $y$ direction. As the Hamiltonian is a $4LM{\times}4LM$ matrix [see Eq.~\eqref{eq:H_matrix_set12} in Appendix~\ref{sec:Derivation of the mean-field Hamiltonian matrix}], each vector representing an eigenstate $\Psi$ can be divided into $LM$ tuples of length four. Thus, every site $(m,l)$ corresponds to a quadruple
\begin{align}
\Psi_{n}(m,l) &= (\Psi^{(p,\uparrow)}_{n}(m,l),\Psi^{(p,\downarrow)}_{n}(m,l), \nonumber \\
&\qquad \Psi^{(h,\uparrow)}_{n}(m,l),\Psi^{(h,\downarrow)}_{n}(m,l)),
\end{align}
which represents the particle-spin-up, particle-spin-down, hole-spin-up, and hole-spin-down amplitudes of the state at the site $(m,l)$.
\begin{figure}[!htbp]
\includegraphics[width=0.48\textwidth]{fig3.pdf}
{\phantomsubcaption \label{subfig:box_antibox_a}
\phantomsubcaption \label{subfig:box_antibox_b}
\phantomsubcaption \label{subfig:box_antibox_c}
\phantomsubcaption \label{subfig:box_antibox_d}
}
\caption{Surface states and their energies arising for setup 1, which exhibits nonzero spin polarization in the field-free system. \subref{subfig:box_antibox_a} Energies at $k_y=0$ of a slab with an exchange field $\mathbf{h}=0.025\, \mathbf{\hat{e}}_y$ applied to a strip along the $y$ direction at the $l=1$ surface, arranged in increasing order and colored according to the fraction of the squared modulus of the wave function that is localized in the exchange-field part of the surface. For reference, the field-free case (diamonds) and the fully covered surface (squares) are plotted as well. \subref{subfig:box_antibox_b} Squared modulus of the corresponding wave function at $l=1$. \subref{subfig:box_antibox_c} Probability density of the first box state and \subref{subfig:box_antibox_d} of the last anti-box state over the whole thickness $l\in\lbrace 1, \hdots, L\rbrace$ of the slab.}
\label{fig:box_antibox}
\end{figure}
The states in Figs.~\ref{subfig:box_antibox_a} and (b) are ordered according to increasing absolute value of the corresponding energy $|E|$ and enumerated by an index $n$. Figure~\ref{subfig:box_antibox_b} shows the $l=1$ part of the squared modulus
\begin{equation}
|\Psi_{n}(m,l=1)|^2 \equiv \sum_{i=(p,\uparrow),(p,\downarrow),(h,\uparrow),(h,\downarrow)}|\Psi^{i}_{n}(m,l=1)|^2
\end{equation}
of the wave function as a function of $m$ (vertical axis) from $n=1$ to $150$ (horizontal axis). In Fig.~\ref{subfig:box_antibox_a}, the corresponding eigenvalues are plotted and colored according to the fraction $\sum_{m=26}^{75}|\Psi_{n}(m,l=1)|^2/\sum_{m=1}^{100}|\Psi_{n}(m,l=1)|^2$ of the surface part of the state that is localized on the exchange-field strip. The spectra plotted using empty diamonds and empty squares in Fig.~\ref{subfig:box_antibox_a} refer to the cases with zero applied field and with the field applied to the whole surface, respectively. So, compared to the full-field case, only approximately half as many states at the $l=1$ surface get shifted away from zero energy. All surface states decay rapidly into the bulk, as can be seen exemplarily in Figs.~\ref{subfig:box_antibox_c} and (d). The lowest eigenvalues $|E_n|$ in Fig.~\ref{subfig:box_antibox_a} correspond to states localized at the opposite, field-free $l=L$ surface and will be ignored in further discussion. Starting at $n=21$, the states are localized almost entirely on the field-free strip. They resemble the states of a quantum mechanical particle in a box potential in that they have an increasing number of nodes with increasing energy and decay rapidly into the exchange-field strip, i.e., the walls of the box. We will call these states \emph{box states} from now on. As an example, a state with zero nodes that is localized almost entirely on the field-free strip is depicted in Fig.~\ref{subfig:box_antibox_c}. At higher energies, a non-negligible part of the states starts to be localized at the boundaries between the two kinds of strips and on the exchange-field strip, until finally, there is a sharp transition at $n\approx 100$. Beyond this point, the states are localized mostly on the exchange-field strip and, similar to the low-energy states introduced above, the number of nodes depends on $n$. However, in this case, the number of nodes \emph{decreases} with increasing energy. 
Therefore, we are going to call these states \emph{anti-box states} from now on. The last of these anti-box states, which has zero nodes, is shown in Fig.~\ref{subfig:box_antibox_d}.
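The strip-localization fraction used to color Fig.~\ref{subfig:box_antibox_a} can be computed directly from the surface part of an eigenstate. A sketch, where the slice bounds encode the exchange-field strip at $m=26,\dots,75$ in 0-based indexing:

```python
import numpy as np

def strip_fraction(psi_l1, strip):
    """Fraction of the surface weight |Psi_n(m, l=1)|^2 on the exchange-field
    strip.  psi_l1: complex array of shape (M, 4) holding the
    (p up, p down, h up, h down) amplitudes in the surface layer;
    strip: slice selecting the strip sites."""
    w = np.sum(np.abs(psi_l1) ** 2, axis=1)
    return w[strip].sum() / w.sum()

# For the geometry of this section (M = 100, field on m = 26..75), the strip
# corresponds to slice(25, 75); a uniformly spread surface state gives 0.5,
# a state fully inside the strip gives 1, and a pure box state gives ~0.
```

States with a fraction close to zero are box states on the field-free strip, while anti-box states approach unity.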
According to the perturbative arguments in Sec.~\ref{sec:First-order perturbation theory and spin polarization of the field-free system}, the eigenenergies should be linear in the field strength. Figure~\ref{fig:E(h)lin} shows that for $k_y=0$, the field dependence of the eigenenergies is indeed linear for the anti-box states, while the energies of the box states stay very close to zero. The shift of the eigenenergy for the lowest box state is several orders of magnitude smaller than for the anti-box states. In contrast to the anti-box states, the field dependence of the box states is not linear. Instead, the initial increase is characterized by an exponent that is smaller than unity, and the curve flattens for stronger fields. This can be attributed to the fact that even though the box state is mostly localized on the field-free strip, it has a small nonzero weight on the exchange-field strip. This part of the state is strongly affected by the exchange field and leads to a small energy shift. However, the weight of the box state on the exchange-field strip decreases with increasing field strength, which leads to the nonlinear behavior. As examples of both the box and the anti-box states, the field dependences of the eigenenergies of the states with zero nodes on the exchange-field strip and on the field-free strip are indicated in red in Fig.~\ref{fig:E(h)lin}.
\begin{figure}[!htbp]
\includegraphics[width=0.48\textwidth]{fig4.pdf}
\caption{Energies corresponding to the surface states at $k_y=0$ for an exchange field according to setup 1 applied to the $l=1$ surface, for varying exchange field strength $h$. The field dependence of the eigenvalues corresponding to the highest anti-box state and the lowest box state are indicated in red. Inset: Zoom-in on the energy of the lowest box state.}
\label{fig:E(h)lin}
\end{figure}
\begin{figure*}[!htbp]
\includegraphics[width=0.8\textwidth]{fig5.pdf}
{\phantomsubcaption \label{subfig:set2_a}
\phantomsubcaption \label{subfig:set2_b}
\phantomsubcaption \label{subfig:set2_c}}
\caption{Energies and surface states at $k_y=0$ for an exchange field according to setup 2 applied to the $l=1$ surface. \subref{subfig:set2_a} Spectrum of the Hamiltonian for varying exchange field strength $h\in[ 0,1]$. \subref{subfig:set2_b} Eigenenergies for an exchange field $\mathbf{h}=0.25\, \mathbf{\hat{e}}_m$, colored according to the fraction of the squared modulus of the wave function which is localized on the field strip and arranged in increasing order. For reference, the field-free case (diamonds) and the fully covered surface (squares) are plotted as well. \subref{subfig:set2_c} Squared modulus of the wave function of the corresponding eigenstates in the $l=1$ layer.}
\label{fig:set2}
\end{figure*}
For all other setups, results of the analogous calculations are qualitatively similar to setup 1 in that they exhibit low-energy box states on the field-free strip and anti-box states with linearly field-dependent energy on the exchange-field strip, with two notable exceptions. One of these is a field applied according to setup 2 for states at $k_y=0$. Results for this situation are depicted in Fig.~\ref{fig:set2}. As shown in Fig.~\ref{subfig:set2_a}, there are no anti-box states with linearly field-dependent eigenvalues. All flat-band states remain close to zero energy, the lowest order of field dependence is quadratic, and there is only a weak shift in energy even at high field strength, which can be seen in Fig.~\ref{subfig:set2_b} for $h=0.25$. Moreover, the states are not clearly localized on either one of the two strips [see Fig.~\ref{subfig:set2_c}]. This deviation from the previously described behavior results from the fact that the relevant spin polarization for this setup is zero for all states at $k_y=0$. Thus the correction to the eigenenergies of first order in the field vanishes and the remaining field dependence is quadratic.
\begin{figure}[!htbp]
\includegraphics[width=0.48\textwidth]{fig6.pdf}
{\phantomsubcaption \label{subfig:set4d_a}
\phantomsubcaption \label{subfig:set4d_b}
\phantomsubcaption \label{subfig:set4d_c}
\phantomsubcaption \label{subfig:set4d_d}
\phantomsubcaption \label{subfig:set4d_e}}
\caption{Surface states and energies arising from a field according to setup 4 at the $l=L$ surface of the slab. \subref{subfig:set4d_a} Energies at $k_y=1$, colored according to the fraction of the squared modulus of the wave function which is localized on the field strip of the surface and arranged in increasing order. For reference, the field-free case (diamonds) and the fully covered surface (squares) are plotted as well. \subref{subfig:set4d_b} Squared modulus of the corresponding wave function at $l=L$. \subref{subfig:set4d_c} Example of a surface state that is localized on both of the strips. \subref{subfig:set4d_d}, \subref{subfig:set4d_e} The two states that are localized on either of the strip's boundaries.}
\label{fig:set4d}
\end{figure}
The other exception from the box/anti-box phenomenology occurs if both strip and field are oriented along the $m$ direction at the $l=L$ surface, which is shown in Fig.~\ref{fig:set4d} for $k_m=1$. In this case, most states remain at low energy and $|\Psi_{n}(m,l=1)|^2$ oscillates strongly at the entire surface of the slab. Thus, they are not localized on either one of the strips [see Figs.~\ref{subfig:set4d_a} and (b)]. An example of such a state is shown in Fig.~\ref{subfig:set4d_c}. The only two surface states that do not obey this pattern are two states localized on either of the boundaries between the two kinds of strips, shown in Figs.~\ref{subfig:set4d_d} and (e). The eigenenergy of one of these states has a linear field dependence with a positive slope, while the other has a negative slope with the same modulus (see Fig.~\ref{fig:set4dfield}). These observations can be explained as follows: For every state originating from a Majorana surface mode at $(k_m,k_y)$, the first-order perturbation theory has to start from a linear combination of the two degenerate states at $(\pm k_m,k_y)$. As shown by Fig.~\ref{subfig:spinmL}, for $k_m=1$ these states have a small zero-field $m$-spin polarization, which, however, strongly oscillates between positive and negative values along the real-space $m$ axis. A derivation of this fact can be found in Appendix~\ref{sec:Oscillating m-spin polarization for setup 4 on the l=L surface}. If a field is applied, none of these oscillations lead to an actual shift of the eigenenergy of a state because the positive and negative spins cancel out. Therefore, almost all states remain at zero energy. Only in the case where a peak of the $m$-spin polarization is on one side of the boundary between the field-free and the exchange-field strip and the corresponding dip is on the other, does the energy get shifted upward or downward linearly. 
Thus, the two states that are shifted linearly are strongly localized at the boundary between the two kinds of strips.
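The cancellation argument can be summarized in a formula. Schematically, and up to sign and normalization conventions that depend on the model details (the symbol $S_m$ for the local $m$-spin polarization of the zero-field state is our shorthand), the first-order shift of a surface state $\Psi_n$ under a field applied to part of the surface is

```latex
\begin{equation*}
\Delta E_n^{(1)}
  \;=\; \langle \Psi_n |\, \mathbf{h}\cdot\boldsymbol{\sigma}\, | \Psi_n \rangle
  \;\propto\; h \sum_{(m,y)\,\in\,\text{field region}} S_m(m,y) .
\end{equation*}
```

When $S_m$ oscillates between positive and negative values inside the field region, the sum cancels and the state remains near zero energy; only an uncompensated peak or dip straddling the strip boundary yields the linear shifts of the two boundary states.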
\begin{figure}[!htbp]
\includegraphics[width=0.48\textwidth]{fig7.pdf}
\caption{Energies corresponding to the surface states of a system at $k_m=1$ for which an exchange field according to setup 4 is applied to the $l=L$ surface for varying exchange-field strength $h$. The field dependence of the eigenvalues corresponding to the states localized on the boundaries between the two strips are indicated in red. Inset: Zoom-in on the low-energy part of the spectrum.}
\label{fig:set4dfield}
\end{figure}
To conclude this section, we emphasize that generically the strips exhibit a dichotomy of box states with weak field dependence and anti-box states with linear field dependence. Exceptions occur if the relevant spin polarization for the field-free surface either vanishes exactly because of symmetry or is accidentally small.
\section{Dispersion in the strip direction}
\label{sec:Dispersion in strip direction}
\begin{figure*}[!htbp]
\setlength{\tabcolsep}{2pt}
\begin{tabular}{l|l|l}
\toprule
&$\mathbf{h}=h\mathbf{\hat{e}}_y$&$\mathbf{h}=h\mathbf{\hat{e}}_m$\\ \midrule
\rotatebox{90}{strips $\parallel \mathbf{\hat{e}}_y$}&\includegraphics[width=0.46\textwidth]{fig8a.pdf}&\includegraphics[width=0.46\textwidth]{fig8b.pdf}\\
& (a) setup~1 & (b) setup~2\vspace{2pt}\\ \hline
\rotatebox{90}{strips $\parallel \mathbf{\hat{e}}_m$} &\includegraphics[width=0.46\textwidth]{fig8c.pdf}& \includegraphics[width=0.46\textwidth]{fig8d.pdf}\\
& (c) setup~3 & (d) setup~4\\
\bottomrule
\end{tabular}
{\phantomsubcaption \label{subfig:twofields_set1}
\phantomsubcaption \label{subfig:twofields_set2}
\phantomsubcaption \label{subfig:twofields_set3}
\phantomsubcaption \label{subfig:twofields_set4}
}
\caption{Dispersion for the specified setups, where a strong field is applied to one strip at the $l=1$ surface of the slab to restrict the flat-band states to the other strip. A weak field is applied to the latter strip, in order to make the previously flat bands weakly dispersive. Both fields point in the same direction, i.e., the $y$ direction for panels \subref{subfig:twofields_set1} and \subref{subfig:twofields_set3} and the $m$ direction for panels \subref{subfig:twofields_set2} and \subref{subfig:twofields_set4}, while the strip is oriented along the $y$ direction for panels \subref{subfig:twofields_set1} and \subref{subfig:twofields_set2} and the $m$ direction for panels \subref{subfig:twofields_set3} and \subref{subfig:twofields_set4}. The dispersion is plotted in blue. For reference, the dispersion of a completely field-free system is given in light gray in the background. The insets show zoom-ins on the low-energy parts of the dispersions.}
\label{fig:twofields_setups}
\end{figure*}
In this section, we propose a method to move localized wave packets of Majorana zero modes. As described in Sec.~\ref{sec:intro}, one can form wave packets from the zero-energy Majorana modes within the projection of the bulk nodal lines. As shown in Sec.~\ref{sec:Classification of surface states}, one can localize modes at a strip on the surface by applying an exchange field everywhere else, which will shift states with a significant weight outside of the field-free strip (i.e., anti-box states) to high energies. The box states with most of their weight on the field-free strip remain at much lower energy. They are still \emph{approximately} degenerate in the conserved momentum component parallel to the strip and so one can choose an appropriate linear combination of the box states at different momenta to build a localized wave packet. The simplest idea for moving the wave packets is to apply a small field to the previously field-free strip in order to create a weak linear dispersion along the strip \cite{BTS13,STB13}. A wave packet would move without broadening if it were a superposition of components with the same velocity $\partial E/\partial k_\mathrm{strip}$ along the strip, where $k_\mathrm{strip}$ is the momentum parallel to the strip.
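The broadening condition can be stated in standard textbook form: a wave packet superposed from box states at momenta $k_\mathrm{strip}$,

```latex
\begin{equation*}
\Psi(t) \;=\; \sum_{k_\mathrm{strip}} c_{k_\mathrm{strip}}\, \psi_{k_\mathrm{strip}}\,
              e^{-i E(k_\mathrm{strip}) t/\hbar},
\qquad
v_g \;=\; \frac{1}{\hbar}\,\frac{\partial E}{\partial k_\mathrm{strip}},
\end{equation*}
```

translates rigidly only if $E(k_\mathrm{strip})$ is linear over the support of the coefficients $c_{k_\mathrm{strip}}$, so that all components share the same group velocity $v_g$; any curvature $\partial^2 E/\partial k_\mathrm{strip}^2$ spreads the packet over time.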
One can make predictions about the resulting dispersion based on the spin polarization of the field-free system. For every momentum $k_\text{strip}$, the first-order perturbation theory described in Sec.~\ref{sec:First-order perturbation theory and spin polarization of the field-free system} can be applied, i.e., the exchange field couples to the spin polarization of the surface states of the \emph{field-free} system. For every value of $k_\text{strip}$, there are fewer than $M$ (setup 1 or 2) or fewer than $Y$ (setup 3 or 4) surface states corresponding to different values of the momentum $k_{\perp}$ orthogonal to the strip direction in the field-free system. If the field is switched on, approximately half of these states are shifted away from zero energy by an amount $\Delta E$ which, to first order, is proportional to the spin polarization. The other half correspond to the other surface of the slab. Thus, the shape of the resulting dispersion $E(k_\text{strip})$ can be predicted from the projection of the zero-field spin polarization along the direction orthogonal to the strip.
In Fig.~\ref{fig:twofields_setups}, the dispersion $E(k_\text{strip})$ is shown for all four setups. The large field was chosen as $h_\text{large}=0.005$ for setups 1 and 2 and $h_\text{large}=0.05$ for setups 3 and 4, while the small field was $h_\text{small}=0.00025$ for setups 1 and 2 and $h_\text{small}=0.0025$ for setups 3 and 4. The dispersion in Fig.~\ref{subfig:twofields_set1}, which belongs to setup 1, is inappropriate for our goals, as the bands are merely shifted away from zero energy instead of being tilted to form a linear dispersion. On the other hand, setup 2 in Fig.~\ref{subfig:twofields_set2} displays a linear dispersion over a wide range of momenta $k_y$, which should allow one to move a wave packet in the $y$ direction. Since the dispersion is not perfectly linear, the wave packet will broaden with time.
To move a wave packet in the $m$ direction, it would be necessary to construct a linear dispersion on the small-field strip of either setup 3 or setup 4. Similar to setup 1, setup 4 does not lead to a linear dispersion, but rather shifts the bands away from zero energy, as can be seen in Fig.~\ref{subfig:twofields_set4}. For setup 3, parts of the dispersion are linear, as shown in Fig.~\ref{subfig:twofields_set3}. However, we see that there are no linearly dispersing low-energy states at small momentum $k_m$. The reason for this is clear from Fig.~\ref{fig:spin}: There are no flat-band states at small $k_m$ for the field-free surface.\footnote{Even if the system parameters are such that the two projections of the bulk nodal lines touch or overlap, there will be at most two points of zero-energy states at $k_m=0$, because the momentum-dependent winding number that protects the zero-energy Majorana modes within the projection of the bulk nodal lines cancels if two of those regions overlap. Thus, only the boundaries of two of these regions will touch at $k_m=0$.} It is nevertheless possible to construct a wave packet out of states with approximately the same velocity. However, the high degree of anisotropy between strips in the two orthogonal directions on the surface is likely detrimental to constructing more complicated structures.
A crucial insight is that the general shape of the dispersions of the introduced two-field setup can be predicted from the spin polarization of the field-free system. In first-order perturbation theory, the dispersion in the strip direction is determined by the projections of the spin polarizations along the axis orthogonal to the strip. The spin polarization of the field-free system is thus a straightforward tool to predict which point groups other than $C_{4v}$ are promising candidates to achieve weak linear dispersions on strips in two linearly independent directions. Figure 4 of Ref.~\cite{BST15} shows that for the (111) surface of a NCS with point group $O$, the spin polarization is parallel to the surface and rotates by $2\pi$ when the momentum parallel to the surface is rotated by $2\pi$. This system is thus promising for strips in arbitrary directions, which we leave for future research.
\section{Summary and conclusions}
\label{sec:summary}
Motivated by the goal of manipulating localized Majorana modes at the surface of a NCS, we have analyzed the consequences of the application of an exchange field to part of a surface with Majorana flat bands. As a model system, we have used a slab with (101) surfaces of a superconductor with point group $C_{4v}$ with an exchange field applied to a strip on the surface.
We have seen both from first-order perturbation theory and exact diagonalization that in cases where the spin polarization of the zero-energy surface states in the field-free system is not zero or very small for a certain momentum in the strip direction, the eigenstates at low energies are localized on the field-free strip. They have an increasing number of nodes with weakly increasing energy, and thus resemble the states of a particle in a box, which is why we have called them box states. On the other hand, there are states which are shifted away from zero energy by an amount $\Delta E$ that is proportional to the exchange-field strength $h$. These states are localized on the exchange-field strip. At fixed field strength, they have a decreasing number of nodes with increasing energy and we have called them anti-box states. It is thus possible to achieve the first prerequisite for manipulating the surface modes: to constrain them to predefined regions. This picture breaks down if the spin polarization of the field-free surface states vanishes by symmetry or is accidentally small. In this case, first-order perturbation theory is no longer valid and we do not find well-defined box and anti-box states.
We have also considered a small exchange field on the previously field-free strip with the goal of introducing a linear dispersion to the almost flat bands of Majorana modes. We have found that it is possible to obtain an approximately linear dispersion for a range of momenta for strips in both the $y$ and the $m$ directions. Hence, by switching the weak field on and off, one can, in principle, also achieve the second prerequisite for Majorana manipulation: to move wave packets in a controlled manner. The deviation from perfect linearity will lead to broadening of wave packets with time. By making the support of the wave packets narrower in momentum space, they become broader in real space but the velocities become more uniform so that the additional time-dependent broadening is reduced. The necessary optimization of wave packets and the dynamics resulting from switching the weak field on and off are interesting topics for future research. In general, the shape of the dispersion on the weak-field strip can be predicted from the spin polarization of the field-free system. Thus, good candidates for model systems and setups that allow for a linear dispersion in two independent surface directions can be identified based on the spin polarization.
\vspace*{3ex}
\acknowledgments
We thank P. M. R. Brydon and J. E. R\"uckert for useful discussions. Financial support by the Deut\-sche Forschungsgemeinschaft via the Collaborative Research Center SFB 1143, project A04, and the Cluster of Excellence on Complexity and Topology in Quantum Matter ct.qmat (EXC 2147) is gratefully acknowledged.
\section{Introduction}
Charge-density-wave (CDW) order exists in many low-dimensional materials, especially transition-metal chalcogenides. \cite{Wilson,Kim,Salvo,Boswell} When the CDW order is suppressed by doping or pressure, a number of them can be tuned into superconductors. \cite{Morosan,Kusmartseva,Sipos,Hoesch} In the temperature-doping ($T$-$x$) or temperature-pressure ($T$-$p$) phase diagram, a superconducting dome is sometimes observed on top of a CDW quantum critical point (QCP). \cite{Morosan,Kusmartseva,Sipos,Hoesch} The resemblance of this kind of phase diagram to those of heavy-fermion and high-$T_c$ cuprate superconductors raises the possibility of unconventional superconductivity caused by CDW fluctuations. \cite{Morosan,Kusmartseva,Sipos,Hoesch,Norman}
ZrTe$_3$ is such a compound in which CDW order and superconductivity compete and coexist. \cite{Yamaya1} It belongs to a family of trichalcogenides MX$_3$ (M = Ti, Zr, Hf, U, Th, and X = S, Se, Te). The structure consists of infinite X-X chains formed by stacking MX$_3$ prisms. \cite{Furuseth} The polyhedra are arranged in double sheets and stacked along the monoclinic $c$-axis by van der Waals forces. \cite{Furuseth} Pristine ZrTe$_3$ itself harbors filamentary superconductivity with $T_c \sim$ 2 K.\cite{Yamaya1} The CDW vector $\overrightarrow{q} \approx (1/14; 0; 1/3)$ is developed in ZrTe$_3$ below $T_{CDW} \sim$ 63 K.\cite{Eaglesham} As in other CDW materials, pressure and doping can melt the CDW order and stabilize the superconductivity to bulk. \cite{Zhu1,Lei,Yamaya2,Zhu2} Recently, isovalent substitution of Se for Te has also been found to cause a superconducting dome in the ZrTe$_{3-x}$Se$_x$ system, with maximum $T_c$ = 4.4 K at the optimal doping $x$ = 0.04. \cite{Zhu3} It was suggested that this superconductivity may be mediated by quantum critical charge fluctuations. \cite{Zhu3} To clarify the underlying pairing mechanism, it is important to know the superconducting gap symmetry and structure.
Ultra-low-temperature heat transport is an established bulk technique to probe the superconducting gap structure \cite{Shakeripour}. The existence of a finite residual linear term $\kappa_0/T$ in zero magnetic field is an evidence for gap nodes \cite{Shakeripour}. The field dependence of $\kappa_0/T$ may further give support for a nodal superconducting state, and provide information on the gap anisotropy, or multiple gaps \cite{Shakeripour}.
In this paper, we measure the ultra-low-temperature thermal conductivity of ZrTe$_{3-x}$Se$_x$ single crystals near optimal doping, to investigate whether its superconducting state is unconventional. The negligible $\kappa_{0}/T$ in zero field and the rapid field dependence of $\kappa_{0}(H)/T$ in low field strongly suggest multiple nodeless superconducting gaps in ZrTe$_{3-x}$Se$_x$. In this sense, the superconductivity in ZrTe$_{3-x}$Se$_x$ is likely conventional.
\section{Experiment}
The ZrTe$_{3-x}$Se$_x$ single crystals were grown by iodine vapor transport method. \cite{Zhu1,Zhu3} Two single crystals from different batches, both with nominal composition $x$ = 0.04, were used for this study. Their exact compositions were determined by wavelength-dispersive spectroscopy (WDS), utilizing an electron probe microanalyzer (Shimadzu EPMA-1720). The dc magnetization was measured at $H$ = 20 Oe, with zero-field cooling, using a SQUID (MPMS, Quantum Design). The samples were cleaved and cut to rectangular bars, with typical dimensions of 2.12 $\times$ 1.01 $\times$ 0.030 mm$^3$. The largest surface is $ab$-plane. Contacts were made directly on the sample surfaces with silver paint, which were used for both resistivity and thermal conductivity measurements. The contacts are metallic with typical resistance 200 m$\Omega$ at 2 K. In-plane thermal conductivity was measured in a dilution refrigerator, using a standard four-wire steady-state method with two RuO$_2$ chip thermometers, calibrated {\it in situ} against a reference RuO$_2$ thermometer. Magnetic fields were applied along the $c$ axis and perpendicular to the heat current. To ensure a homogeneous field distribution in the sample, all fields were applied at temperature above $T_c$.
\section{Results and discussion}
According to the WDS results, the actual Se content of the two ZrTe$_{3-x}$Se$_x$ single crystals is $x$ = 0.044 and 0.051, respectively. Below we will use the actual $x$. Figure 1(a) presents the normalized dc magnetization of ZrTe$_{2.956}$Se$_{0.044}$ and ZrTe$_{2.949}$Se$_{0.051}$ single crystals. The $T_c$ defined by the onset of the diamagnetic transition is 4.0 K for both samples. The significant diamagnetic response confirms that the superconductivity is stabilized to bulk from the filamentary superconductivity in pristine ZrTe$_3$, which is consistent with a previous report. \cite{Zhu3} This bulk superconductivity will be further supported by our thermal conductivity data in this study.
\begin{figure}
\includegraphics[clip,width=7.8cm]{Fig1.pdf}
\caption{(Color online) (a) The normalized dc magnetization of ZrTe$_{2.956}$Se$_{0.044}$ and ZrTe$_{2.949}$Se$_{0.051}$ single crystals, measured in $H$ = 20 Oe with zero-field-cooled (ZFC) process. (b) The in-plane resistivity of ZrTe$_{2.956}$Se$_{0.044}$ and ZrTe$_{2.949}$Se$_{0.051}$ single crystals. No anomaly was observed in the normal state, suggesting the complete suppression of CDW state. (c) The resistive superconducting transition at low temperature. For clarity, the resistivity value of the $x$ = 0.044 sample is magnified by five times. The $T_c$ defined by $\rho$ = 0 is 4.06 and 3.87 K for $x$ = 0.044 and 0.051 samples, respectively.}
\end{figure}
Figure 1(b) shows the in-plane resistivity $\rho(T)$ of ZrTe$_{2.956}$Se$_{0.044}$ and ZrTe$_{2.949}$Se$_{0.051}$ single crystals. No anomaly was observed in the normal state, suggesting the complete suppression of CDW state in them. \cite{Zhu3} Fitting the normal-state resistivity data below 60 K to $\rho(T)$ = $\rho_0$ + $AT^n$ gives residual resistivity $\rho_0$ = 2.82 and 21.5 $\mu\Omega$ cm for $x$ = 0.044 and 0.051 samples, respectively. The resistive superconducting transition at low temperature is plotted in Fig. 1(c). The $T_c$ defined by $\rho$ = 0 is 4.06 and 3.87 K for $x$ = 0.044 and 0.051 samples, respectively. Both of them are near the optimal doping in the phase diagram of ZrTe$_{3-x}$Se$_x$, and the $x$ = 0.051 sample is slightly overdoped. \cite{Zhu3}
To determine their upper critical field $H_{c2}$, the low-temperature resistivity of these two samples under magnetic fields was also measured. Figure 2(a) and 2(b) show the low temperature $\rho(T)$ curve of ZrTe$_{2.956}$Se$_{0.044}$ and ZrTe$_{2.949}$Se$_{0.051}$ single crystals under various fields. With increasing field, the superconducting transition is gradually suppressed to lower temperature, and the magnetoresistance in the normal state is very weak. The $H_{c2}(T)$, defined by $\rho = 0$ in (a) and (b), is plotted in Fig. 2(c) for both $x$ = 0.044 and 0.051 samples. From Fig. 2(c), we roughly estimate $H_{c2}(0) \approx$ 1.40 and 0.85 T for them, respectively.
\begin{figure}
\includegraphics[clip,width=7cm]{Fig2.pdf}
\caption{(Color online) Low-temperature resistivity of (a) ZrTe$_{2.956}$Se$_{0.044}$ and (b) ZrTe$_{2.949}$Se$_{0.051}$ single crystals under various magnetic fields. (c) Temperature dependence of the upper critical field $H_{c2}(T)$, defined by $\rho = 0$ in (a) and (b). The dashed lines are guide to eye, which point to $H_{c2}(0) \approx$ 1.40 and 0.85 T for $x$ = 0.044 and 0.051 samples, respectively.}
\end{figure}
The temperature dependence of in-plane thermal conductivity for ZrTe$_{2.956}$Se$_{0.044}$ and ZrTe$_{2.949}$Se$_{0.051}$ single crystals in zero and applied magnetic fields is shown in Fig. 3, plotted as $\kappa/T$ vs $T$. The thermal conductivity at very low temperature can usually be fitted to $\kappa/T$ = $a + bT^{\alpha-1}$. \cite{Sutherland,SYLi} The two terms $aT$ and $bT^{\alpha}$ represent contributions from electrons and phonons, respectively. The power $\alpha$ is typically between 2 and 3, due to specular reflections of phonons at the boundary. \cite{Sutherland,SYLi} One can see that all the curves in Fig. 3 are roughly linear, therefore we fix $\alpha$ to 2. In zero field, the fittings give $\kappa_0/T$ = 0.008 $\pm$ 0.008 and 0.009 $\pm$ 0.002 mW K$^{-2}$ cm$^{-1}$ for the $x$ = 0.044 and 0.051 samples, respectively. Such a tiny $\kappa_0/T$ in zero field is negligible for both samples. As $T \to 0$, since all electrons become Cooper pairs for $s$-wave nodeless superconductors, there are no fermionic quasiparticles to conduct heat. Therefore there is no residual linear term $\kappa_0/T$, as seen in V$_3$Si \cite{Sutherland}. However, for unconventional superconductors with nodes in the superconducting gap, the nodal quasiparticles will contribute a finite $\kappa_0/T$ in zero field. \cite{Shakeripour} For example, $\kappa_0/T$ = 1.41 mW K$^{-2}$ cm$^{-1}$ for the overdoped cuprate Tl$_{2}$Ba$_{2}$CuO$_{6+\delta}$ (Tl-2201), a $d$-wave superconductor with $T_c$ = 15 K. \cite{Proust} For the $p$-wave superconductor Sr$_2$RuO$_4$, $\kappa_0/T$ = 17 mW K$^{-2}$ cm$^{-1}$. \cite{Suzuki} Therefore, the negligible $\kappa_0/T$ of the $x$ = 0.044 and 0.051 samples suggests that the superconducting gap of ZrTe$_{3-x}$Se$_x$ is nodeless. Note that the negligible $\kappa_0/T$ in zero field also supports the bulk superconductivity in our samples.
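With $\alpha$ fixed to 2, the extrapolation is simply a linear fit of $\kappa/T$ versus $T$, with the intercept giving $\kappa_0/T$. A minimal sketch of such a fit (pure-Python least squares on synthetic data; the slope and noise level are illustrative, not our measurements):

```python
import random

def fit_residual_linear_term(T, k_over_T):
    """Ordinary least-squares fit of kappa/T = a + b*T (alpha = 2).

    Returns (a, b); the intercept a is the residual linear term
    kappa_0/T obtained by extrapolating to T = 0.
    """
    n = len(T)
    tbar = sum(T) / n
    ybar = sum(k_over_T) / n
    sxx = sum((t - tbar) ** 2 for t in T)
    sxy = sum((t - tbar) * (y - ybar) for t, y in zip(T, k_over_T))
    b = sxy / sxx
    a = ybar - b * tbar
    return a, b

# Synthetic low-temperature data in the spirit of the zero-field curve:
# kappa/T = 0.009 + 18.0*T plus small Gaussian noise (illustrative values).
random.seed(0)
T = [0.05 + 0.25 * i / 29 for i in range(30)]   # temperatures in K
kT = [0.009 + 18.0 * t + random.gauss(0, 0.01) for t in T]
a, b = fit_residual_linear_term(T, kT)
print(f"kappa_0/T = {a:.3f} mW K^-2 cm^-1 (slope {b:.1f})")
```

The recovered intercept reproduces the input $a$ within the noise-limited uncertainty, which is the quantity quoted as $\kappa_0/T$ above.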
\begin{figure}
\includegraphics[clip,width=7.8cm]{Fig3.pdf}
\caption{(Color online) Low-temperature in-plane thermal conductivity of (a) ZrTe$_{2.956}$Se$_{0.044}$ and (b) ZrTe$_{2.949}$Se$_{0.051}$ single crystals in zero and magnetic fields. The lines are fits of the data to $\kappa/T = a + bT^{\alpha-1}$, with $\alpha$ fixed to 2. The dashed lines represent the normal-state Wiedemann-Franz law expectation $L_0$/$\rho_0$ for the $x$ = 0.044 and 0.051 samples, respectively.}
\end{figure}
When applying field, $\kappa/T$ gradually increases with increasing field, as seen in Fig. 3. In $H$ = 0.5 T, the fittings give $\kappa_0/T$ = 8.27 $\pm$ 0.08 and 1.14 $\pm$ 0.03 mW K$^{-2}$ cm$^{-1}$ for the $x$ = 0.044 and 0.051 samples, respectively. These values roughly meet their Wiedemann-Franz law expectations $L_0/\rho_0$ ($L_0$ is the Lorenz number 2.45 $\times$ 10$^{-8}$ W $\Omega$ K$^{-2}$ and $\rho_0$ is the sample's residual resistivity). The verification of the Wiedemann-Franz law in the normal state shows the reliability of our thermal conductivity measurements. The bulk $H_{c2}$(0) $\approx$ 0.5 T is taken for both samples, which is lower than those determined from resistivity measurements.
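The Wiedemann-Franz comparison is a one-line unit conversion, $L_0/\rho_0$, with $\rho_0$ taken from the resistivity fits. A quick numerical check using the residual resistivities quoted above (unit bookkeeping in the comments):

```python
L0 = 2.45e-8  # Lorenz number, W Ohm K^-2

def wf_expectation(rho0_microohm_cm):
    """Normal-state Wiedemann-Franz expectation L0/rho0.

    Input: residual resistivity rho0 in micro-Ohm cm.
    Output: kappa/T in mW K^-2 cm^-1.
    """
    rho0 = rho0_microohm_cm * 1e-8   # micro-Ohm cm -> Ohm m
    k_over_T = L0 / rho0             # W K^-2 m^-1
    return k_over_T * 10.0           # 1 W K^-2 m^-1 = 10 mW K^-2 cm^-1

print(round(wf_expectation(2.82), 2))   # rho0 of the x = 0.044 sample -> 8.69
print(round(wf_expectation(21.5), 2))   # rho0 of the x = 0.051 sample -> 1.14
```

This gives about 8.69 and 1.14 mW K$^{-2}$ cm$^{-1}$, which the measured $\kappa_0/T$ values in $H$ = 0.5 T (8.27 and 1.14 mW K$^{-2}$ cm$^{-1}$) indeed roughly meet.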
To gain more information of the gap structure in ZrTe$_{2.956}$Se$_{0.044}$ and ZrTe$_{2.949}$Se$_{0.051}$, we check the field dependence of their $\kappa_0/T$. The normalized $\kappa_0/T$ as a function of $H/H_{c2}$ is plotted in Fig. 4. For comparison, the data of the clean $s$-wave superconductor Nb, \cite{Lowell} the multiband $s$-wave superconductor NbSe$_2$, \cite{Boaknin} and an overdoped sample of the $d$-wave superconductor Tl-2201 are also plotted. \cite{Proust} The slow field dependence of $\kappa_0/T$ at low field for Nb manifests its single isotropic superconducting gap. From Fig. 4, the curves of $x$ = 0.044 and 0.051 samples are similar to that of NbSe$_2$, a multiband $s$-wave superconductor with the gap ratio $\Delta_l/\Delta_s$ $\approx$ 3. \cite{Boaknin} This suggests that ZrTe$_{3-x}$Se$_x$ also has multiple nodeless superconducting gaps. Previously, {\it ab initio} calculation of the band structure for ZrTe$_3$ at ambient pressure gives a central rounded 2D Fermi surface sheet and two flatter q1D sheets. \cite{Hoesch} Therefore, the observation of multiple nodeless superconducting gaps in ZrTe$_{3-x}$Se$_x$ system is not surprising.
\begin{figure}
\includegraphics[clip,width=7cm]{Fig4.pdf}
\caption{(Color online) Normalized residual linear term $\kappa_0/T$ of ZrTe$_{2.956}$Se$_{0.044}$ and ZrTe$_{2.949}$Se$_{0.051}$ single crystals as a function of $H/H_{c2}$. Similar data of the clean $s$-wave superconductor Nb, \cite{Lowell} an overdoped $d$-wave cuprate superconductor Tl-2201, \cite{Proust} and the multiband $s$-wave superconductor NbSe$_2$ \cite{Boaknin} are also plotted for comparison.}
\end{figure}
Theoretically, it has been shown that unconventional superconductivity with $d_{xy}$ symmetry can appear in close proximity to a charge-ordered phase, and the superconductivity is mediated by charge fluctuations. \cite{Scalapino,Merino} Since the $d_{xy}$-wave gap has line nodes, our results clearly rule out this kind of unconventional superconductivity in ZrTe$_{3-x}$Se$_x$. In this context, the superconductivity in ZrTe$_{3-x}$Se$_x$ is likely conventional. A similar situation occurs in the Cu$_x$TiSe$_2$ system. Thermal conductivity measurements suggested conventional $s$-wave superconductivity with a single isotropic gap in Cu$_{0.06}$TiSe$_2$, near where the CDW order vanishes. \cite{Li} So far, the evidence for unconventional superconductivity induced by CDW fluctuations in real materials is still lacking. Experiments on more systems with superconductivity near a CDW QCP are needed.
\section{Conclusion}
In summary, we have measured the ultra-low-temperature thermal conductivity of ZrTe$_{2.956}$Se$_{0.044}$ and ZrTe$_{2.949}$Se$_{0.051}$ single crystals, which are near the optimal doping in the phase diagram of ZrTe$_{3-x}$Se$_x$ system. The absence of $\kappa_0/T$ in zero field for both compounds gives strong evidence for nodeless superconducting gap. The field dependence of $\kappa_0(H)/T$ further suggests multiple nodeless gaps in ZrTe$_{3-x}$Se$_x$. Unconventional superconductivity with line nodes is excluded in this trichalcogenide system although there is a CDW QCP. It is likely that the superconductivity in ZrTe$_{3-x}$Se$_x$ is still conventional.
\begin{center}
{\bf ACKNOWLEDGEMENTS}
\end{center}
This work is supported by the Ministry of Science and Technology of China (National Basic Research Program No. 2012CB821402 and No. 2015CB921401), the Natural Science Foundation of China (No. 91421101, No. 11422429, and No. 11204312), the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning, and STCSM of China (No. 15XD1500200).\\
$^*$ E-mail: shiyan\[email protected]
\section{Background and Related Work}
\label{sec:background}
Provenance~\cite{moreau10ftws} is a term originating in the art world. It refers to the
chain of ownership of a work of art, including its original creator
and each owner and location of the work from creation to the present.
In the digital realm, provenance refers to information describing how
an object came to be in its current form, typically including any
other data from which it was derived and a description of the ways in
which the input data was transformed to produce the output data. It
is usually collected to allow post-hoc analysis to answer questions
such as, ``From where did this data come?'', ``On what objects does it
depend?'', or ``What programs were used in its production?''
Different applications of provenance will need different data, and
systems are either tailored for specific applications or have a
mechanism for selective capture, so that a user can obtain the right data
for his/her intended use. Such flexibility makes it difficult to
determine which system or configuration is
most appropriate for a given task.
We apply ProvMark to three current provenance capture tools,
namely SPADEv2~\cite{gehani12middleware},
OPUS~\cite{balakrishnan13tapp} and CamFlow~\cite{pasquier17socc}. All
three systems run under Linux. There are many other provenance
capture systems. PASS is no longer
maintained~\cite{muniswamy-reddy06usenix}; HiFi~\cite{pohlymmb12} was
subsumed by LPM~\cite{bates15tbm}, which is similar to CamFlow, but
less portable and not maintained. We focus on SPADE, OPUS and CamFlow
as representative, currently available examples.
Figure~\ref{graph:recording-architecture} shows how each system
interacts with an application and the Linux kernel.
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=0.22]{img/system-background.pdf}
\caption{Architecture summary of the SPADE, OPUS, and
CamFlow provenance recording systems}
\label{graph:recording-architecture}
\end{center}
\end{figure}
SPADE's intended use is synthesizing provenance from machines in a
distributed system, so it emphasizes relationships between processes
and digital objects across distributed hosts. Our analysis uses
SPADEv2 (tag \emph{tc-e3}) with the Linux Audit Reporter~\cite{LinuxAudit}, which
constructs a provenance graph using information from the Linux audit
system (including the Audit service, daemon, and dispatcher). SPADE
runs primarily in user space and provides many alternative
configurations, including filtering and transforming the data, for
example, to enable versioning or finer-grained tracking of I/O or
network events, or to make use of information in procfs to obtain
information about processes that were started before SPADE. We use a
baseline configuration that collects data from the Audit Daemon
(auditd), without versioning.
OPUS focuses on file system
operations, attempting to abstract such operations and make the
provenance collection process portable. It wraps standard C library
calls with hooks that record provenance. The OPUS system is especially
concerned with versioning support and proposes a Provenance Versioning
Model, analogous to models previously introduced in the context of
PASS~\cite{muniswamyreddy09tos} and later SPADE~\cite{gehanitbm11}.
OPUS also runs in user space, but it relies on intercepting calls to
a dynamically linked library (e.g., \verb|libc|). Therefore, it is
blind to activities that do not go through an intercepted dynamic
library, but can observe the C library calls and userspace abstractions
such as file descriptors.
CamFlow's emphasis is sustainability through modularity, interfacing
with the kernel via Linux Security Modules (LSM). The LSM hooks
capture provenance, but then dispatch it to user space, via relayfs,
for further processing. It strives for completeness and has its roots
in Information Flow Control systems~\cite{pasquier2017camflow}. By
default, CamFlow captures all system activity visible to LSM and
relates different executions to form a single provenance graph; as we
shall see, this leads to some complications for repeatable
benchmarking. SPADE can also be configured to support similar
behavior, while CamFlow can also be used (instead of Linux Audit) to
report provenance to SPADE. Compared to SPADE and OPUS, which both
run primarily in user space, CamFlow~\cite{pasquier17socc} monitors activity and
generates the provenance graph from inside the kernel, via LSM and
NetFilter hooks. This means the correctness of the provenance data
depends on the LSM operation. As the rules are set directly on the LSM
hooks themselves, which are already intended to monitor all
security-sensitive operations, CamFlow can monitor and/or record all
sensitive operations. CamFlow allows users to set filtering rules
when collecting provenance.
Prior work~\cite{chan17tapp, moreau08challenge} on comparing different
provenance systems has followed a manual approach. For example,
the Provenance Challenge~\cite{moreau08challenge} proposed a set of
scientific computation scenarios and solicited submissions
illustrating how different capture systems handle these
scenarios, to facilitate (manual) comparison of systems. More
recently, Chan et al. proposed a pragmatic, but also manual,
approach~\cite{chan17tapp} to benchmarking OS-level provenance
tracking at the level of individual system calls. However, these
manual approaches are error-prone and not
scalable.
It is also worth mentioning that Pasquier et
al.~\cite{pasquier2018ccs} perform static analysis showing a
conservative over-approximation of the CamFlow provenance recorded for
each call. However, these results are
not yet automatically compared with actual run-time behavior.
Provenance expressiveness benchmarking is also related to the emerging
topic of \emph{forensic-ready
systems}~\cite{alrajeh17esecfse,pasquale18icse,zhu15icse}. Work in
this area considers how to add logging to an existing system to detect
known classes of behaviors, particularly for legal evidence or
regulatory compliance, or how to advise developers on how to add
appropriate logging to systems in development. The provenance
collection systems above strive for completeness so that
previously-unseen behaviors can be detected, and proactively record
information so that user applications do not need to be modified.
\section{Conclusions and Future Work}
\label{sec:concl}
This paper presents ProvMark, an automated approach to benchmarking
and testing the behavior of provenance capture systems. To the best
of our knowledge it is the first work to address the unique challenges
of testing and comparing provenance systems. We have outlined the
design of ProvMark, and showed how it helps identify gaps in coverage,
idiosyncrasies and even a few bugs in three different provenance
capture systems. We also showed that it has acceptable performance
despite the need to solve multiple NP-complete graph matching subproblems.
ProvMark is a significant step towards validation of such
systems and should be useful for developing correctness or
completeness criteria for them.
There are several directions for future work, to address the
limitations discussed in Section~\ref{sec:limitations}. First,
additional support for automating the process of creating new
benchmarks, or understanding and interpreting the results, would
increase the usefulness of the system, as would extending it to
provenance tracking at other levels, such as distributed system
coordination layers. Second, although we have evaluated scalability up
to 10--20 target system calls, realistic applications such as
suspicious behavior analysis would require dealing with much larger
target activities and graphs. Finally, ProvMark cannot deal with
nondeterminism, including concurrent I/O and network activity, and
extending its approach to handle nondeterministic target activity is a
focus of current work.
\section{System Evaluation}
\label{sec:eval}
In this section, we will evaluate ProvMark. We claim that the ProvMark
system makes it easy to test and compare the behavior of provenance
tracking systems and to assess their completeness and correctness.
The previous section gave some evidence that the results are useful
for understanding the different systems and what information they do
or do not collect. In this section, we will evaluate the performance
and extensibility of ProvMark itself. For a fair comparison, all the
experiments are done in a virtual machine with 1 virtual CPU and 4GB
of virtual memory. Dependencies for all three provenance collecting
tools, including Neo4J and some needed Python libraries, are also
installed. All virtual machines were hosted on an Intel i5-4570 3.2GHz
with 8GB RAM running Scientific Linux 7.5.
\subsection{System Performance}
We first discuss the performance of the ProvMark system. Some
performance overhead is acceptable for a benchmarking or testing tool;
however, we need to establish that ProvMark's strategy for solving
NP-complete subgraph isomorphism problems is effective in practice
(i.e. takes minutes rather than days).
In this section, we will show that the extra work done by ProvMark to
generate benchmarks from the original provenance result is acceptable.
We first report measurements of the recording time. Since the number
of trials varies, we report the average time needed per trial. For
SPADE, recording took approximately 20s per trial. For OPUS, the
recording time was around 28s per trial, and for CamFlow each trial
took around 10s. We wish to emphasize that the recording time results
are \emph{not} representative of the recording times of these systems
in normal operation --- they include repeatedly starting, stopping, and
waiting for output to be flushed, and the waiting times are
intentionally conservative to avoid garbled results. No conclusions
about the relative performance of the recording tools when used as intended should be
drawn.
In Figures~\ref{chart:timedspade}--\ref{chart:timedcamflow}, we
summarize the time needed for ProvMark to run five representative
syscalls using SPADE, OPUS, and CamFlow respectively. The
bars are divided into three portions, representing the time needed for
the
transformation, generalization and comparison subsystems. Note that
the x-axes are not all to the same scale: in particular the
transformation, generalization and comparison times for OPUS are much
higher than for the other tools because of database startup and access
time, and because the graphs extracted from the database are larger.
Modifying OPUS to circumvent the database and serialize provenance
directly would avoid this cost, but we have chosen to use the tools as
they are to the greatest extent possible.
\begin{figure}[tb]
\begin{tikzpicture}
\begin{axis}[
xbar stacked,
xlabel={Time (seconds)},
symbolic y coords={open,execve,fork,setuid,rename},
xmin = 0,
xmax = 3,
y=-0.32cm,
bar width=0.2cm,
ytick=data,
legend style={nodes={scale=0.75, transform shape}, legend columns=1}
]
\addplot+[xbar] plot coordinates {
(.208,open)
(.261,execve)
(.233,fork)
(.204,setuid)
(.207,rename)
};
\addplot+[xbar] plot coordinates {
(.13,open)
(1.133,execve)
(0.135,fork)
(0.128,setuid)
(.136,rename)
};
\addplot+[xbar] plot coordinates {
(.043,open)
(.109,execve)
(.041,fork)
(.044,setuid)
(.044,rename)
};
\legend{Transformation,Generalization,Comparison}
\end{axis}
\end{tikzpicture}
\caption{Timing results: SPADE+Graphviz}\label{chart:timedspade}
\bigskip
\begin{tikzpicture}
\begin{axis}[
xbar stacked,
xlabel={Time (seconds)},
symbolic y coords={open,execve,fork,setuid,rename},
xmin = 0,
xmax = 2000,
y=-0.32cm,
bar width=0.2cm,
ytick=data,
legend style={nodes={scale=0.75, transform shape}, legend columns=1}
]
\addplot+[xbar] plot coordinates {
(364.132,open)
(356.858,execve)
(372.7,fork)
(377.446,setuid)
(355.253,rename)
};
\addplot+[xbar] plot coordinates {
(18.318,open)
(16.306,execve)
(48.461,fork)
(17.271,setuid)
(21.461,rename)
};
\addplot+[xbar] plot coordinates {
(2.221,open)
(2.051,execve)
(731.556,fork)
(2.198,setuid)
(2.72,rename)
};
\legend{Transformation,Generalization,Comparison}
\end{axis}
\end{tikzpicture}
\caption{Timing results: OPUS+Neo4J}\label{chart:timedopus}
\bigskip
\begin{tikzpicture}
\begin{axis}[
xbar stacked,
xlabel={Time (seconds)},
symbolic y coords={open,execve,fork,setuid,rename},
xmin = 0,
xmax =3,
y=-0.32cm,
bar width=0.2cm,
ytick=data,
legend style={nodes={scale=0.75, transform shape}, legend columns=1}
]
\addplot+[xbar] plot coordinates {
(.878,open)
(.867,execve)
(.941,fork)
(.837,setuid)
(.879,rename)
};
\addplot+[xbar] plot coordinates {
(.218,open)
(.226,execve)
(.282,fork)
(.219,setuid)
(.191,rename)
};
\addplot+[xbar] plot coordinates {
(.119,open)
(.114,execve)
(.218,fork)
(.104,setuid)
(.103,rename)
};
\legend{Transformation,Generalization,Comparison}
\end{axis}
\end{tikzpicture}
\caption{Timing results: CamFlow+ProvJson}\label{chart:timedcamflow}
\end{figure}
From the data in
Figures~\ref{chart:timedspade}--\ref{chart:timedcamflow} we can see
that the transformation stage is typically the most time-consuming part.
The transformation stage maps the different provenance output formats
to Datalog format. The transformation time for OPUS is much longer
than for the other two systems. This appears to be because OPUS
produces larger graphs (including recording environment variables),
and extracting them involves running Neo4j queries, which also has a
one-time JVM warmup and database initialization cost.
The time required for the generalization stage depends mainly on the
number of elements (nodes and edges) in the graph, since this stage
matches elements in pairs of graphs to find an isomorphic graph
matching. As we can see from the results, the generalization of
OPUS graphs again takes significantly longer than for SPADE and
CamFlow, probably because the graphs are larger. For SPADE,
generalization of the \syscall{execve} benchmark takes much longer than for
other calls (though still only a few seconds).
The time required for the comparison stage is usually less than for
generalization (recall that in generalization we process two pairs of
graphs whereas in comparison we just compare the background and
foreground graphs). Comparison of OPUS graphs again takes longest,
while comparison of CamFlow graphs again takes slightly longer than for
SPADE, perhaps because of the larger number of properties.
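The comparison stage can be viewed as a small subgraph matching problem: find an embedding of the background graph in the foreground graph, and report the unmatched foreground elements as the benchmark. ProvMark itself delegates this NP-complete search to an ASP solver; the following stdlib-only sketch conveys the idea on toy, label-free graphs (the node and edge names are purely illustrative, and real matching must also respect node types and properties):

```python
# Brute-force illustration of the comparison step on tiny graphs.
# ProvMark uses an ASP solver (clingo) for the real, labeled problem.
from itertools import permutations

def benchmark_delta(bg_nodes, bg_edges, fg_nodes, fg_edges):
    """Embed the background graph in the foreground graph; return the
    leftover foreground edges (the 'benchmark'), or None if no embedding."""
    fg_edge_set = set(fg_edges)
    for image in permutations(fg_nodes, len(bg_nodes)):
        m = dict(zip(bg_nodes, image))  # candidate injective node mapping
        if all((m[s], m[d]) in fg_edge_set for s, d in bg_edges):
            matched = {(m[s], m[d]) for s, d in bg_edges}
            return sorted(fg_edge_set - matched)
    return None  # background does not embed in foreground

# Background: process p writes f1; foreground adds the target's edge p -> f2.
delta = benchmark_delta(["p", "f1"], [("p", "f1")],
                        ["p", "f1", "f2"], [("p", "f1"), ("p", "f2")])
```

Because the first embedding found is returned, this naive version can pick a "wrong" but structurally valid match; the label and property constraints in the real encoding prune such spurious embeddings.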
In general, we can conclude that the time needed for ProvMark's
transformation, generalization, and comparison stages is acceptable
compared with the time needed for recording. Most benchmarks complete
within a few minutes at most, though some that produce larger target graphs
take considerably longer; thus, at this point running all of the
benchmarks takes a few hours. This seems like an acceptable
price to pay for increased confidence in these systems, and while
there is clearly room for
improvement, this compares favorably with
manual analysis, which is tedious and would take a skilled user
several hours. In addition, we have monitored the memory usage and
found that ProvMark never used more than 75\% of memory on a 4GB
virtual machine, indicating memory was not a
bottleneck.
\subsection{Scalability}
\begin{figure}[tb]
\begin{tikzpicture}
\begin{axis}[
xbar stacked,
xlabel={Time (seconds)},
symbolic y coords={scale1,scale2,scale4,scale8},
xmin = 0,
xmax = 1.5,
y=-0.32cm,
bar width=0.2cm,
ytick=data,
legend style={nodes={scale=0.75, transform shape}, legend columns=1}
]
\addplot+[xbar] plot coordinates {
(.219,scale1)
(.221,scale2)
(.204,scale4)
(.331,scale8)
};
\addplot+[xbar] plot coordinates {
(.198,scale1)
(.206,scale2)
(.234,scale4)
(.398,scale8)
};
\addplot+[xbar] plot coordinates {
(.088,scale1)
(.101,scale2)
(.108,scale4)
(.197,scale8)
};
\legend{Transformation,Generalization,Comparison}
\end{axis}
\end{tikzpicture}
\caption{Scalability results: SPADE+Graphviz}\label{chart:scalespade}
\bigskip
\begin{tikzpicture}
\begin{axis}[
xbar stacked,
xlabel={Time (seconds)},
symbolic y coords={scale1,scale2,scale4,scale8},
xmin = 0,
xmax = 650,
y=-0.32cm,
bar width=0.2cm,
ytick=data,
legend style={nodes={scale=0.75, transform shape}, legend columns=1}
]
\addplot+[xbar] plot coordinates {
(358.919,scale1)
(358.965,scale2)
(359.792,scale4)
(364.211,scale8)
};
\addplot+[xbar] plot coordinates {
(16.136,scale1)
(16.341,scale2)
(16.792,scale4)
(18.716,scale8)
};
\addplot+[xbar] plot coordinates {
(2.036,scale1)
(2.139,scale2)
(2.657,scale4)
(3.702,scale8)
};
\legend{Transformation,Generalization,Comparison}
\end{axis}
\end{tikzpicture}
\caption{Scalability results: OPUS+Neo4J}\label{chart:scaleopus}
\bigskip
\begin{tikzpicture}
\begin{axis}[
xbar stacked,
xlabel={Time (seconds)},
symbolic y coords={scale1,scale2,scale4,scale8},
xmin = 0,
xmax =6,
y=-0.32cm,
bar width=0.2cm,
ytick=data,
legend style={nodes={scale=0.75, transform shape}, legend columns=1}
]
\addplot+[xbar] plot coordinates {
(.919,scale1)
(.925,scale2)
(1.007,scale4)
(1.039,scale8)
};
\addplot+[xbar] plot coordinates {
(.216,scale1)
(.287,scale2)
(.764,scale4)
(2.240,scale8)
};
\addplot+[xbar] plot coordinates {
(.128,scale1)
(.130,scale2)
(.234,scale4)
(.369,scale8)
};
\legend{Transformation,Generalization,Comparison}
\end{axis}
\end{tikzpicture}
\caption{Scalability results: CamFlow+ProvJson}\label{chart:scalecamflow}
\end{figure}
Our experiments so far concentrated on benchmarking one syscall at a
time. The design of ProvMark allows arbitrary activity as the target
action, including sequences of syscalls. As mentioned above, we
can simply mark a target action
sequence with \texttt{\#ifdef TARGET} to let ProvMark generate the background and foreground
programs.
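The effect of the \texttt{\#ifdef TARGET} guard can be sketched as follows. ProvMark's actual benchmarks are C programs selected via the preprocessor at compile time; the template and target call below are hypothetical, and the Python expansion merely mimics what \texttt{-DTARGET} does:

```python
# Sketch: one benchmark template yields both the background program
# (target call excluded) and the foreground program (target call included).
TEMPLATE = """\
int main(void) {
#ifdef TARGET
    rename("old.txt", "new.txt");
#endif
    return 0;
}
"""

def expand(template, define_target):
    """Minimal #ifdef TARGET / #endif expansion for this sketch only."""
    out, keep = [], True
    for line in template.splitlines():
        stripped = line.strip()
        if stripped == "#ifdef TARGET":
            keep = define_target   # keep guarded lines only if TARGET is set
        elif stripped == "#endif":
            keep = True
        elif keep:
            out.append(line)
    return "\n".join(out)

background = expand(TEMPLATE, define_target=False)  # no target call
foreground = expand(TEMPLATE, define_target=True)   # includes rename()
```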
Of course, if the target activity consists of multiple syscalls, the
time needed for ProvMark to process results and solve the resulting
subgraph isomorphism problems will surely increase. The generated
provenance graph will also increase in size and number of
elements. The NP-complete complexity of sub-graph isomorphism means
that in the worst case, the time needed to solve larger problem
instances could increase exponentially. This section investigates the
scalability of ProvMark when the size of the target action increases.
Figures~\ref{chart:scalespade}--\ref{chart:scalecamflow} show the time
needed by the three processing subsystems of ProvMark on
the scalability test cases for SPADE, OPUS and CamFlow respectively. The
scalability test cases range from scale1 to
scale8. In test case scale1, the target action sequence is simply a
creation of a file and another deletion of the newly created file. In
test case scale2, scale4 and scale8, the same target action is repeated twice,
four times, and eight times respectively. The results show that the time
needed initially grows slowly for SPADE, but by scale8 the
time almost doubles compared to scale1. For OPUS, the time increases
are dwarfed by the high overhead of transformation, which includes the
one-time Neo4j startup and access costs as discussed above. For CamFlow, the
time needed approximately doubles at each scale factor. Although further
scalability experiments are needed to consider much larger graph or benchmark
sizes, these experiments do demonstrate that ProvMark can currently
handle short sequences of 10--20 syscalls without problems.
\subsection{Modularity and Extensibility}
ProvMark was designed with extensibility in mind. Only the first two
stages (recording and generalization) depend on the details of the
recording system being benchmarked, so to support a new system, it
suffices to implement an additional recording module that uses the new
system to record provenance for a test executable, and (if needed) an
additional translation module that maps its results to Datalog.
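To make the translation step concrete, here is a minimal sketch of what such a module does: map a provenance graph into Datalog facts. The toy node/edge representation and the predicate names \texttt{n} and \texttt{e} are assumptions for illustration, not ProvMark's actual schema or any tool's real output format:

```python
# Sketch of a translation module: provenance graph -> Datalog facts.
# Predicate names n/2 and e/4 are illustrative assumptions.

def to_datalog(nodes, edges):
    """nodes: {node_id: type}; edges: [(src, dst, label)].
    Returns a list of ground Datalog facts as strings."""
    facts = [f'n({nid},"{ntype}").' for nid, ntype in sorted(nodes.items())]
    facts += [f'e({i},{s},{d},"{lbl}").'
              for i, (s, d, lbl) in enumerate(edges)]
    return facts

facts = to_datalog({1: "process", 2: "file"}, [(1, 2, "wasGeneratedBy")])
```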
As discussed earlier, it was non-trivial to develop recording modules
for the three systems that produce reliable results. In particular,
supporting CamFlow required several iterations and discussion with its
developers, which have resulted in changes to CamFlow to accommodate
the needs of benchmarking. This architecture has been refined through
experience with multiple versions of the SPADE and CamFlow systems,
and we have generally found the changes needed to ProvMark to maintain
compatibility with new versions to be minor. Developing new
recording modules
ought to be straightforward for systems that work in a similar way to
one of the three currently supported.
If a provenance tool generates data in a format not currently
supported by ProvMark, an additional module for handling this type of
provenance data is needed. The need for new transformations should
decrease over time as more tools are introduced to ProvMark since
there are a limited number of common provenance formats.
As shown in Table~\ref{tab:modulesizes}, none of the three recording or transformation modules required more than 200 lines of code (Python 3 using only standard library imports). We initially developed ProvMark with support for SPADE/DOT and OPUS/Neo4j combinations; thus, adding support for CamFlow and PROV-JSON only required around 330 lines of code.
\begin{table}[tb]
\centering
\begin{tabular}{l|ccc}
Module & SPADE & OPUS & CamFlow \\
(Format) & (DOT) & (Neo4j) & (PROV-JSON) \\\hline
Recording & 171 & 118 & 192 \\
Transformation & 74 & 122 & 128
\end{tabular}
\caption{Module sizes (Python lines of code)}
\label{tab:modulesizes}
\end{table}
\normalsize
Finally, throughout the development period of ProvMark, the candidate
tools have all been updated and include new features and
behaviour. Over time, we have updated ProvMark to react to changes in
both SPADE and CamFlow, with few problems.
\subsection{Discussion and Limitations}\label{sec:limitations}
While we have argued that ProvMark is effective, fast enough
to produce timely results, and easy to extend, it also has some
limitations, which we now discuss.
At this stage, ProvMark has only been applied to provenance tracking
at the operating-system level; in particular, for SPADE, OPUS and
CamFlow running under Linux. We believe the same approach can be
adapted to other operating systems or other layers of distributed
systems, assuming an extension to deal with nondeterministic or concurrent
activity as described below.
We have evaluated ProvMark using a small subset of individual
syscalls. Creating benchmarks currently takes some manual effort and
it would be helpful to automate this process to ease the task of
checking all of the possible control flow paths each syscall can take,
or even generate multi-step sequences. In addition, the analysis and
comparison of the resulting benchmark graphs requires some manual effort and
understanding.
We have shown ProvMark is capable of handling benchmark
programs with a small number of target system calls. Scaling up to
larger amounts of target activity or background activity appears
challenging, due to the NP-completeness and worst-case exponential
behavior of subgraph isomorphism testing. However, for deterministic activity it
may be possible to take advantage of other structural characteristics of the
provenance graphs to speed up matching: for example, if matched nodes
are usually produced in the same order (according to timestamps), then
it may be possible to incrementally match the foreground and
background graphs.
Lastly, ProvMark currently handles deterministic activity only.
Nondeterminism (for example through concurrency) introduces additional
challenges: both the foreground and background programs might have
several graph structures corresponding to different schedules, and
there may be a large number of different possible schedules. It also
appears necessary to perform some kind of fingerprinting or graph
structure summarization to group the different possible graphs
according to schedule. We may also need to run larger numbers of
trials and develop new ways to align the different structures so as to
obtain reliable results. It appears challenging to ensure
completeness, that is, that all of the possible behaviors are
observed, especially since the number of possible schedules can grow
exponentially in the size of the program.
\section{Introduction}
Data provenance is information about the origin, history, or
derivation of some information~\cite{moreau10ftws}. It is commonly
proposed as a basis for reproducibility~\cite{pasquier2017if},
dependability~\cite{alvaro2017abstracting}, and regulatory
compliance~\cite{pasquier2018data}, and it is increasingly being used
for security, through forensic audit~\cite{wang08tissec} or online
dynamic detection of malicious behavior~\cite{han2017frappuccino,
hassan2018towards}. To cater to the requirements of different
use-cases, there are many system-level provenance capture systems in
the literature, such as PASS~\cite{muniswamy-reddy06usenix},
Hi-Fi~\cite{pohlymmb12}, SPADE~\cite{gehani12middleware},
OPUS~\cite{balakrishnan13tapp}, LPM~\cite{bates15tbm},
Inspector~\cite{inspector}, and CamFlow~\cite{pasquier17socc} covering
a variety of operating systems from Linux and BSD to Android and
Windows. Each system assembles low level system events into a
high-level \emph{provenance graph} describing processes, system
resources, and causal relationships among them.
Often,
such systems are described as capturing a \emph{complete} and
\emph{accurate} description of system activity. To date, the
designers of each such system have decided how to interpret these
goals independently, making different choices regarding what activity to
record and how to represent it. Although some of these systems
do use standards such as W3C PROV~\cite{w3c-prov} that establish a
common vocabulary for provenance-related data, such standards
do \emph{not} specify how to record operating system-level behaviour,
or even when such records are considered ``accurate'' or
``complete''. Indeed, as we shall see, in practice there is little
consensus about how specific activities (e.g., renaming a file) should
be represented in a provenance graph.
Additionally, different systems also work at different
system layers (e.g., kernel space vs. user space), so
some information may be unavailable to a given system.
To illustrate the problem, consider Figure~\ref{fig:rename}, which
shows three provenance graph representations of the same
\syscall{rename} system call, as recorded by three different systems.
These
graphs clearly illustrate nontrivial structural differences in how
\syscall{rename} is represented as communications between processes (blue rectangles) and
artifacts or resources (yellow ovals).
\begin{figure}[tb]
\begin{minipage}[b]{0.4\columnwidth}
\begin{subfigure}[b]{\columnwidth}
\centering
\href{https://provmark2018.github.io/sampleResult/spade/rename.svg}{
\includegraphics[scale=0.25]{img/spade-rename.pdf}
}
\captionof{figure}{SPADE}
\end{subfigure}
\\
\begin{subfigure}[b]{\columnwidth}
\centering
\href{https://provmark2018.github.io/sampleResult/camflow/rename.svg}{
\includegraphics[scale=0.25]{img/camflow-rename.pdf}
}
\captionof{figure}{CamFlow}
\end{subfigure}
\end{minipage}
\begin{minipage}[b]{0.59\columnwidth}
\begin{subfigure}[b]{\columnwidth}
\centering
\href{https://provmark2018.github.io/sampleResult/opus/rename.svg}{
\includegraphics[scale=0.25]{img/opus-rename.pdf}
}
\captionof{figure}{OPUS}
\end{subfigure}
\end{minipage}
\caption{A rename system call, as recorded by three different
provenance recorders. Images are clickable links to full-size versions with property labels.}
\label{fig:rename}
\end{figure}
Security analysts, auditors, or regulators seeking to use these
systems face many challenges. Some of these challenges stem from the
inherent size and complexity of provenance graphs: it is not unusual
for a single day's activity to generate graphs with millions of nodes
and edges. However, if important information is missing, then these
blind spots may render the records useless for their intended purpose.
For example, if a provenance capture system does not record edges
linking reads and writes to local sockets, then attackers can evade
notice by using these communication channels. Such an omission could
be due to a bug, but it could also result from
misconfiguration, silent failure, or an inherent limitation of the
recording system.
Provenance capture systems provide a
broad-spectrum recording service, separate from the monitored
applications, but (like any software system) they are not perfect, and
can have their own bugs or idiosyncrasies. In order
to rely on them for critical applications such as reproducible
research, compliance monitoring, or intrusion detection, we need to
be able to understand and validate their behavior. The strongest form of
validation would consist of verifying that the provenance records
produced by a system are accurate representations of the actual
execution history of the system. However, while there is now some
work on formalizing operating system kernels, such as seL4~\cite{sel4} and
HyperKernel~\cite{hyperkernel}, there are as yet no complete
formal models of mainstream operating systems such as Linux.
Developing such a model seems prerequisite to fully formalizing the
accuracy, correctness, or completeness of provenance systems.
If there is no immediate prospect of being able to formally define and
prove correctness for provenance recording systems, perhaps we can at
least make it easier to compare and understand their behavior. We
advocate a pragmatic approach as a first step toward the goal of
validating and testing provenance systems, which (following Chan et
al.~\cite{chan17tapp}) we call \emph{expressiveness benchmarking}. In
contrast to performance benchmarking, which facilitates quantitative
comparisons of run time or other aspects of a system's behavior,
expressiveness benchmarking facilitates qualitative comparisons of how
different provenance systems record the same activities. The goal of
such benchmarking is to answer questions such as:
\begin{itemize}
\item What does each node/edge in the graph tell us about actual
events? (correctness, accuracy)
\item What information is guaranteed to be captured and what are the
blind spots? (completeness)
\end{itemize}
Previous work on comparing provenance representations has been based
on manual inspection to compare the
graphs~\cite{moreau08challenge,chan17tapp}, but this manual approach
is error prone and does not scale or allow automated testing.
However, automating this task faces numerous challenges. It is
difficult to configure some systems to record a single process's
activity reproducibly. Provenance records include volatile
information such as timestamps and identifiers that vary across runs,
even when the underlying process is deterministic. Furthermore,
a process's activity may include background process start-up activity
that needs to be filtered out.
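The core of the generalization idea can be sketched briefly: for a node matched across two trials, property values that agree are kept, while volatile values such as timestamps or identifiers, which differ between runs, are abstracted to a wildcard. This is an illustrative simplification, not ProvMark's actual implementation, and the property names below are hypothetical:

```python
# Sketch: generalize the property maps of one node matched across two trials.
WILDCARD = "*"  # stands in for a volatile, run-dependent value

def generalize(props_a, props_b):
    """Keep properties whose values agree; abstract the rest to WILDCARD."""
    shared = props_a.keys() & props_b.keys()
    return {k: props_a[k] if props_a[k] == props_b[k] else WILDCARD
            for k in shared}

g = generalize({"type": "file", "path": "/tmp/x", "timestamp": 1001},
               {"type": "file", "path": "/tmp/x", "timestamp": 2077})
# 'type' and 'path' survive; 'timestamp' is generalized away.
```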
Expressiveness benchmarking is a \emph{first (but important) step}
towards understanding what it means for system provenance to be
complete and/or correct. It offers a systematic means to compare different
approaches. For example, how is a \syscall{rename} system call
represented, and how do nodes and edges correspond to processes, files
or file versions, or relationships among them? Are unsuccessful calls
recorded? Are records of multiple similar calls aggregated together?
By analyzing the same benchmark examples across different provenance
capture systems, we
can compare the information collected by different approaches to
produce an understanding of the relative capabilities of the systems
and their suitability for different tasks. Expressiveness
benchmarking also enables \emph{regression testing} since we can
automatically compare benchmark graphs before and after a system
change to detect differences.
We present \emph{ProvMark},
an automated system for provenance
expressiveness benchmarking.
ProvMark executes a given target operation and records the resulting
provenance captured by a given system. Different runs of the same system
can then be compared to show whether each system records the target
activity, and if so how.
Our main contributions are as follows:
\begin{itemize}
\item We show how to generalize from
multiple runs to obtain repeatable results, abstracting away
volatile or transient data such as timestamps or identifiers.
\item We solve the required graph matching problems using an external
solver, based on Answer Set Programming (ASP), a variation of logic
programming.
\item We evaluate ProvMark's performance and effectiveness in testing
and comparing provenance systems, highlighting several bugs or
idiosyncrasies found so far.
\end{itemize}
ProvMark has been developed in consultation with developers of three
provenance recording systems, SPADE, OPUS and CamFlow, several of whom
are coauthors of this paper. They have validated the results
and in some cases helped adapt their systems to ease benchmarking.
The ProvMark system along with supplementary results is publicly
available at \url{http://provmark2018.github.io}~\cite{provmark2018}.
\section{ProvMark Documentation}
\par
ProvMark is a fully automated system that generates system-level
provenance benchmarks for different provenance collection tools and systems, such as SPADE, OPUS and CamFlow. It is the main tool described in this paper. This section documents how to use ProvMark and how to access its source code.
\subsection{ProvMark source and release page}
\par
The source code of ProvMark is stored in a Github repository and the
link is
\href{https://github.com/arthurscchan/ProvMark/}{here}\footnote{https://github.com/arthurscchan/ProvMark/}. ProvMark
is under active development and has received many updates since this
paper was published. So that readers can use or test ProvMark with all
the features described here, we have provided a release version for
this Middleware submission. The
link is
\href{https://github.com/arthurscchan/ProvMark/releases/tag/Middleware2019}{here}\footnote{https://github.com/arthurscchan/ProvMark/releases/tag/Middleware2019}. In
this release page, documentation on how to install, configure and use
the ProvMark tool is provided. There is also a tarball of the
version of ProvMark described in this paper. In addition,
a set of sample results and additional experimental timing results is provided on this page.
\subsection{Directory structure of ProvMark source}
\begin{description}
\item[benchmarkProgram] Contains sample C programs used to collect provenance information for different syscalls
\item[clingo] Contains the clingo code
\item[config] Contains the configuration profile of different tool choices for stage 1 and stage 2
\item[documentation] Contains the documentation for ProvMark
\item[genClingoGraph] Contains code to transform graph format
\item[processGraph] Contains code to handle graph comparison and generalization
\item[sampleResult] Contains sample benchmark result on our trial
\item[startTool] Contains tools to handle provenance collecting tools currently supported and retrieve result from them
\item[template] Contains html template for result generation
\item[vagrant] Contains vagrant files for those provenance collecting tools currently supported
\end{description}
\subsection{ProvMark Installation}
Installing ProvMark is simple: just clone the Git repository. The current stable version corresponding to this paper is tagged Middleware2019.
You can also download the source code tarball directly from the release page mentioned above.
\\
\footnotesize
\begin{verbatim}
git clone https://github.com/arthurscchan/ProvMark.git
cd ProvMark
git checkout Middleware2019
\end{verbatim}
\normalsize
\subsubsection{Vagrant installation}
In the vagrant folder, we have prepared the
\href{https://www.vagrantup.com/}{Vagrant}\footnote{https://www.vagrantup.com/}
script for the three provenance collecting tools currently
supported. If you have Vagrant (v2.2.2 or greater) and VirtualBox
(v6.0 or greater) installed on your system, you can follow the steps
below to build a virtual environment in which everything (the tools and
ProvMark) is installed.
\newpage
\subsubsection{Vagrant script for SPADE}
\footnotesize
\begin{verbatim}
cd ./vagrant/spade
vagrant plugin install vagrant-vbguest
vagrant up
vagrant ssh
\end{verbatim}
\normalsize
\subsubsection{Vagrant script for OPUS}
\footnotesize
\begin{verbatim}
cd ./vagrant/opus
vagrant plugin install vagrant-vbguest
vagrant up
vagrant ssh
\end{verbatim}
\normalsize
\par
\bigskip
To run OPUS, you also need a source or binary distribution for the OPUS system itself, which is available \href{https://github.com/DTG-FRESCO/opus}{here}\footnote{https://github.com/DTG-FRESCO/opus}
\subsubsection{Vagrant script for CamFlow}
\footnotesize
\begin{verbatim}
cd ./vagrant/camflowv045
vagrant plugin install vagrant-vbguest
vagrant up
vagrant halt
vagrant up
vagrant ssh
\end{verbatim}
\normalsize
It is necessary to reboot the VM (halt / up) so that the CamFlow-enabled kernel will be used. This kernel should be highlighted as the first entry by the boot loader; if not, it should be selected manually.
After the above steps, you should have an SSH connection to the
virtual machine, from which you can run ProvMark on your chosen tool directly.
Note: the installation process can take an extended amount of time depending on your configuration.
\subsection{ProvMark configuration}
Configuration of ProvMark is done by modifying the \texttt{config/config.ini}
file. Changing this file should not be necessary in common cases.
In the ProvMark system, the first two stages collect provenance information from the provenance collecting tools and transform the result into a clingo graph. Different tools need different handling and type conversion. These configuration parameters are stored in the \texttt{config.ini} file.
Each profile starts with the name and includes three settings as follows:
\\
\begin{description}
\item[stage1tool] defines the script (and parameters) for stage 1 handling when this profile is chosen
\item[stage2handler] defines which graph handler is used to process the raw provenance information when this profile is chosen
\item[filtergraphs] defines whether graph filtering is needed. The default value is false for SPADE and OPUS, and true for CamFlow. Graph filtering is a mechanism provided by ProvMark to filter out obviously incomplete or incorrect graphs generated by the provenance systems. It can increase the accuracy of the resulting benchmark, at some cost in efficiency.
\end{description}
\par
\bigskip
Each profile defines one supported tool and its configuration. When support for a new tool is added to ProvMark, a new profile is created here.
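As a hypothetical illustration (the profile name, script path and handler name below are invented for exposition, not copied from the shipped \texttt{config.ini}), a profile combining the three settings might look like:

```ini
; hypothetical profile -- all names and paths are illustrative only
[spg]
stage1tool = ./stage1/runSpade.sh   ; stage 1 script (and parameters)
stage2handler = graphviz            ; stage 2 graph handler
filtergraphs = false                ; default for SPADE and OPUS
```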
\newpage
\subsection{ProvMark usage}
\subsubsection{Parameters}
\textbf{Currently Supported Tools:}
\begin{description}
\item[spg] SPADE with Graphviz storage
\item[spn] SPADE with Neo4j storage
\item[opu] OPUS
\item[cam] CamFlow
\end{description}
\bigskip
\textbf{Tools Base Directory:}\\
Base directory of the chosen tool. It is assumed that if you want to run this benchmarking system on a given provenance collecting tool, you have already installed that tool together with all of its required dependencies. If you build ProvMark with the given Vagrant scripts, the default tools base directories are as follows.
\\
\begin{description}
\item[SPADE] /home/vagrant/SPADE
\item[OPUS] /home/vagrant/opus (or where OPUS was manually installed)
\item[CamFlow] ./ (this parameter is ignored but some value must be provided)
\end{description}
\bigskip
\textbf{Benchmark Directory:}\\
Base directory of the benchmark program.\\
This points the script to the chosen syscall for the benchmarking process. \\
\\
\textbf{Number of trials (Default: 2):}\\
Number of trials executed for each graph for generalization.\\
More trials mean longer processing time, but a more accurate result, since multiple trials help to filter out uncertainty, unrelated elements and noise. \\
\\
\textbf{Result Type:}
\begin{description}
\item[rb] benchmark only
\item[rg] benchmark and generalized foreground and background graph only
\item[rh] html page displaying benchmark and generalized foreground and background graph
\end{description}
\subsubsection{Single Execution}
\par
This command calls only the specified benchmark program and generates a benchmark with the chosen provenance system.
\\\\
Usage:
\scriptsize
\begin{verbatim}
./fullAutomation.py <Tools> <Tools Base Directory> <Benchmark Directory> [<Trial>]
\end{verbatim}
\normalsize
\bigskip
Sample:
\scriptsize
\begin{verbatim}
./fullAutomation.py cam ./ ./benchmarkProgram/baseSyscall/grpCreat/cmdCreat 2
\end{verbatim}
\normalsize
\subsubsection{Batch Execution}
\par
Automatically executes ProvMark for all currently supported syscalls. The runTests.sh script searches recursively for all benchmark programs in the default benchmarkProgram folder and benchmarks them one by one. It then groups the final results and post-processes them according to the given result type parameter.
\\\\
Usage:
\footnotesize
\begin{verbatim}
./runTests.sh <Tools> <Tools_Path> <Result Type>
\end{verbatim}
\normalsize
\bigskip
Sample:
\footnotesize
\begin{verbatim}
./runTests.sh spg /home/vagrant/SPADE rh
\end{verbatim}
\normalsize
\bigskip
For more usage examples, please refer to the documentation provided on the release page or the README file in the parent directory of the source code.
\subsection{Sample Output}
\par
To generate the sample output, we used the provided Vagrant scripts to build up the environment for the three provenance systems and ran a batch execution in each of the resulting virtual machines. The following commands were used in the respective virtual machines.
\subsubsection{SPADE}
\footnotesize
\begin{verbatim}
./runTests.sh spg /home/vagrant/SPADE rh
\end{verbatim}
\normalsize
\subsubsection{OPUS}
\footnotesize
\begin{verbatim}
./runTests.sh opu /home/vagrant/opus rh
\end{verbatim}
\normalsize
\subsubsection{CamFlow}
\footnotesize
\begin{verbatim}
./runTests.sh cam . rh 11
\end{verbatim}
\normalsize
The results on successful completion of this script are accessible in \texttt{finalResult/index.html}.
\subsubsection{Full Timing Result}
\par
From the data in Figures~\ref{chart:timedspade}--\ref{chart:timedcamflow} we can see the time needed for ProvMark to process some of the system calls using specific provenance systems. The full timing results, covering all system calls included in the above batch execution, can be found at the following paths inside the source tarball from the release page, separated by provenance system.
\\
\begin{description}
\item[SPADE] sampleResult/spade.time
\item[OPUS] sampleResult/opus.time
\item[CamFlow] sampleResult/camflow.time
\end{description}
\par
\bigskip
Each line in those result files represents one system call execution, with fields separated by commas. The first two fields give the provenance system and the system call for that line of results. The remaining four floating-point numbers give the time (in seconds) needed by each ProvMark subsystem, in order, to process that system call with the chosen provenance system. ProvMark automatically times the whole process and appends a line of timing results to \texttt{/tmp/time.log} for each system call execution. You must have permission to write to that file to obtain timing results.
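To make this format concrete, the following Python sketch parses one such line (the sample line is invented for illustration, not a real measurement; the subsystem field names follow the description above):

```python
def parse_timing_line(line):
    """Parse one comma-separated line of /tmp/time.log into a dict.

    Fields: provenance system, system call, then four floats giving
    the per-subsystem times in seconds, in pipeline order.
    """
    fields = line.strip().split(",")
    system, syscall = fields[0], fields[1]
    times = [float(x) for x in fields[2:6]]
    return {
        "system": system,
        "syscall": syscall,
        "recording": times[0],
        "transformation": times[1],
        "generalization": times[2],
        "comparison": times[3],
    }

# Hypothetical sample line, not real measured data:
record = parse_timing_line("spg,creat,12.34,0.56,7.89,1.01")
```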
\section{Demonstration}
\label{sec:result}
We present a demonstration of using ProvMark to investigate and
compare the behavior of the different capture systems. These results
pertain to SPADEv2 (tag \emph{tc-e3}) and OPUS version 0.1.0.26
running on Ubuntu 16.04 and CamFlow version 0.4.5 running on Fedora
27.
Unix-like operating systems support over three hundred system calls.
We focus on a subset of 22 common system call families shown in
Table~\ref{table:syscall}, including the most common file and process
management calls. We prepared benchmarking programs for these
syscalls, along with tests for each one to ensure that the target
behavior was performed successfully. Note that the same system call
may display different behavior using different parameters and system
states; we focus on common-case behavior here, but handling other
scenarios such as failure cases is straightforward as outlined in
Section~\ref{sec:usecases}.
\begin{table}[tb]
\center
\begin{tabular}{|l|l|p{5.5cm}|}
\hline
1 &Files & \syscall{close}, \syscall{creat}, \syscall{dup[2,3]}, \syscall{[sym]link[at]}, \syscall{mknod[at]}, \syscall{open[at]},
\syscall{[p]read}, \syscall{rename[at]}, \syscall{[f]truncate},
\syscall{unlink[at]}, \syscall{[p]write}
\\ \hline
2& Processes & \syscall{clone}, \syscall{execve}, \syscall{exit}, \syscall{[v]fork}, \syscall{kill}\\\hline
3& Permissions & \syscall{[f]chmod[at]}, \syscall{[f]chown[at]}, \syscall{set[re[s]]gid}, \syscall{set[re[s]]uid}\\\hline
4& Pipes & \syscall{pipe[2]}, \syscall{tee}\\\hline
\end{tabular}
\caption{Benchmarked syscalls}\label{table:syscall}
\normalsize
\end{table}
We cannot show all of the benchmark graphs, nor can we show the graphs
at a readable size; the full results are available for inspection
online~\cite{provmark2018}. Table \ref{tab:validation} summarizes the
results. In this table, ``ok'' means the graph is correct (according
to the system's developers), and ``empty'' means the foreground and
background graphs were similar and so the target activity was undetected. The table includes notes indicating
the reason for emptiness. Also, Table~\ref{table:results} shows
selected graphs (again, images are links to scalable online images
with full details) for the benchmark result. Although the property
keys and values are omitted, we can compare the graph structure and
see that the three tools generate very different structures for the
same system calls. This demonstrates one of the important motivations
for expressiveness benchmarking. In these graph results, the yellow
ovals represent artifacts which includes files, pipes or other
resources. The blue rectangles represent processes involved in the
named system calls. The green or gray ovals represent dummy nodes
which stand for pre-existing parts of the graph. We retain these
dummy nodes so that the result remains a well-formed graph.
In this section we present and discuss the benchmarking outputs for
several representative syscalls, highlighting minor bugs found and
cases where the behavior of individual tools or the variation among
different tools was surprising.
\begin{table}[tb]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
Group& syscall & SPADE & OPUS & CamFlow\\\hline
1& close & ok & ok & empty (LP)\\
1& creat & ok & ok & ok\\
1& dup & empty (SC) & ok & empty (NR) \\
1& dup2 & empty (SC) & ok & empty (NR) \\
1& dup3 & empty (SC) & ok & empty (NR) \\
1& link & ok & ok & ok\\
1& linkat & ok & ok & ok\\
1& symlink & ok & ok & empty (NR)\\
1& symlinkat & ok & ok & empty (NR) \\
1& mknod & empty (NR) & ok & empty (NR)\\
1& mknodat & empty (NR) & empty (NR) & empty (NR)\\
1& open & ok & ok & ok\\
1& openat & ok & ok & ok\\
1& read & ok & empty (NR) & ok\\
1& pread & ok & empty (NR) & ok\\
1& rename & ok & ok & ok\\
1& renameat & ok & ok & ok\\
1& truncate & ok & ok & ok\\
1& ftruncate & ok & ok & ok\\
1&unlink & ok & ok & ok\\
1&unlinkat & ok & ok & ok\\
1&write & ok & empty (NR) & ok\\
1& pwrite & ok & empty (NR) & ok\\
\hline
2&clone & ok & empty (NR) & ok\\
2& execve & ok & ok & ok\\
2& exit & empty (LP) &empty (LP) & empty (LP)\\
2& fork & ok & ok & ok\\
2& kill & empty (LP) &empty (LP) & empty (LP)\\
2& vfork & ok (DV) & ok & ok\\
\hline
3&chmod & ok & ok & ok\\
3&fchmod & ok & empty (NR) & ok\\
3&fchmodat & ok & ok & ok\\
3&chown & empty (NR) & ok & ok\\
3&fchown & empty (NR) & empty (NR) & ok\\
3&fchownat & empty (NR)& ok & ok\\
3&setgid & ok & ok & ok\\
3&setregid & ok & ok & ok\\
3&setresgid & empty (SC) & empty (NR) & ok\\
3&setuid & ok & ok & ok\\
3&setreuid & ok & ok & ok\\
3&setresuid & ok (SC) & empty (NR) & ok\\
\hline
4&pipe & empty (NR) & ok & empty (NR)\\
4&pipe2 & empty (NR) & ok & empty (NR)\\
4&tee & empty (NR) & empty (NR) & ok\\
\hline
\end{tabular}
\\\smallskip
\begin{tabular}{|c|p{7cm}|}
\hline Note& Meaning\\\hline
NR & Behavior not recorded (by default configuration)\\
SC & Only state changes monitored\\
LP & Limitation in ProvMark\\
DV & Disconnected \syscall{vfork}ed process\\\hline
\end{tabular}
\caption{Summary of validation results}
\label{tab:validation}
\end{table}
\normalsize
\subsection{File system}
File system calls such as \syscall{creat}, \syscall{open},
\syscall{close} are generally covered well by all of the tools, but
they each take somewhat different approaches to representing even
simple behavior. For example, for the \syscall{open} call, SPADE adds
a single node and edge for the new file, while CamFlow creates a node
for the file object, a node for its path, and several edges linking
them to each other and the opening process, and OPUS creates four new
nodes including two nodes corresponding to the file. On the other
hand, reads and writes appear similar in both SPADE and CamFlow, while
by default, OPUS does not record file reads or
writes. For \syscall{close}, CamFlow
records the underlying kernel data structures eventually being freed,
but ProvMark does not reliably observe this.
The \syscall{dup[2,3]} syscall duplicates a file descriptor so that
the process can use two different file descriptors to access the same
open file. SPADE and CamFlow do not record this call directly, but the
changes to the process's file descriptor state can affect future file
system calls that are monitored. OPUS does record \syscall{dup[2,3]}.
The two added nodes are not directly connected to
each other, but are connected to the same process node in the
underlying graph. One component records the system call itself and
the other one records the creation of a new resource (the duplicated
file descriptor).
The \syscall{link[at]} call is recorded by all three systems but
\syscall{symlink[at]} is not recorded by CamFlow 0.4.5. Likewise,
\syscall{mknod} is only recorded by OPUS, and \syscall{mknodat} is not
recorded by any of the systems.
The \syscall{rename[at]} call, as shown in the introduction, illustrates
how the three systems record different graph structure for this
operation. SPADE represents a rename using two nodes for the new and
old filenames, with edges linking them to each other and to the
process that performed the rename. OPUS creates around a dozen nodes,
including nodes corresponding to the rename call itself and the new and
old filename. CamFlow represents a rename as adding a new path
associated with the file object; the old path does not appear in the
benchmark result.
\input{resulttable}
\subsection{Process management}
The process management operations start processes (\syscall{[v]fork},
\syscall{clone}), replace a process's memory space with a new
executable and execute it (\syscall{execve}), or terminate a process
(\syscall{kill}, \syscall{exit}).
Process creation and initialization calls are generally monitored by
all three tools,
except that OPUS does not appear to monitor \syscall{clone}.
Inspecting the results manually shows significant differences,
however. For example, the SPADE benchmark graph for \syscall{execve} is
large, but has just a few nodes for OPUS and CamFlow. On
the other hand, the \syscall{fork} and \syscall{vfork} graphs are
small for SPADE and CamFlow and large for OPUS. This may indicate the
different perspectives of these tools: SPADE relies on the activity
reported by Linux Audit, while OPUS relies on intercepting C library
calls outside the kernel.
Another interesting observation is that the SPADE benchmark results
for \syscall{fork} and \syscall{vfork} differ. Specifically, for
\syscall{vfork}, SPADE represents the forked process as a disconnected
activity node, i.e. there is no graph structure connecting the parent
and child process. This is because Linux Audit reports system calls
on exit, while the parent process that called \syscall{vfork} is
suspended until the child process exits. SPADE sees the
\syscall{vforked} child process in the log executing its own syscalls
before it actually sees the \syscall{vfork} that created the child
process.
The \syscall{exit} and \syscall{kill} calls are not detected
because they deviate from the assumptions our approach is based on.
A process always has an implicit \syscall{exit} at the end, while
killing a process means that it does not terminate normally. Thus,
the \syscall{exit} and \syscall{kill} benchmarks are all empty. We
plan to consider a more sophisticated approach that can benchmark these
calls in future work.
\subsection{Permissions}
We included syscalls that change file permissions
(such as \syscall{[f]chown[at]}, \syscall{[f]chmod[at]}) or process
permissions in this group. According to its documentation, SPADE
currently records \syscall{[f]chmod[at]} but not
\syscall{[f]chown[at]}. OPUS does not monitor \syscall{fchmod} or
\syscall{fchown} because from its perspective these calls only perform
read/write activity and do not affect the process's file descriptor
state, so as for other read/write activity OPUS does not record
anything in its default configuration. CamFlow records all of these
operations.
In its default configuration, SPADE does not explicitly record
\syscall{setresuid} or \syscall{setresgid}, but it does monitor
changes to these process attributes and records observed changes. Our
benchmark result for \syscall{setresuid} is nonempty, reflecting an
actual change of user id, while our benchmark for \syscall{setresgid}
just sets the group id attribute to its current value, and so this activity
is not noticed by SPADE. OPUS also does not track these two calls,
while CamFlow again tracks all of them.
\subsection{Pipes}
Pipes provide efficient interprocess communication. The
\syscall{pipe[2]} call creates a pipe, which can be read or written
using standard file system calls, and \syscall{tee} duplicates
information from one pipe into another without consuming it.
Of the three systems, only OPUS records \syscall{pipe[2]} calls, while
only CamFlow currently records \syscall{tee}.
\section{System Design and Methodology}
\label{sec:system}
ProvMark is intended to automatically identify the (usually small)
subgraph of a provenance graph that is recorded for a given target
activity. Target activities could consist of individual system calls
(or \emph{syscalls}), sequences of syscalls, or more general
(e.g., concurrent) processes. For the moment, we consider the simplest
case of a single syscall, but the same techniques generalize to
deterministic sequential target activities; handling concurrency and
nondeterminism are beyond the scope of this paper.
We call the targeted system call the \emph{target call} and the
corresponding subgraph the \emph{target graph}. Naively, one might
proceed by writing a simple C program for each target system call that
just performs that call and nothing else. However, starting and
ending a process creates considerable ``boilerplate'' provenance,
including calls such as \syscall{fork}, \syscall{execve}, and \syscall{exit},
as well as accesses to program files and libraries and, sometimes,
memory mapping calls. Furthermore,
some target calls require other \emph{prerequisite} calls to be performed first. For
example, analyzing a \syscall{read} or \syscall{close} system call requires first
performing an \syscall{open}. Process startup and prerequisite calls
are both examples of
\emph{background activity} that we would like to elide.
In each benchmark, we use an \verb|#ifdef TARGET| CPP directive to
identify the target behavior of interest. ProvMark generates two
executables for each such benchmark: a \emph{foreground program} that
includes all code in the benchmark program, including the target and
any context needed for it to execute, and a \emph{background program}
that contains the background activities. The two binaries are almost
identical; the difference between the resulting graphs should
precisely capture the target behavior.
\begin{figure*}[tb]
\begin{center}
\includegraphics[scale=0.4]{img/system-overview}
\captionof{figure}{ProvMark system overview. The recording stage (1)
uses one of the provenance recorders to compute background
graphs $bg_1,bg_2$ and foreground graphs $fg_1,fg_2$. (The
same recorder is used for all four graphs.) The transformation
stage (2) maps these graphs to a uniform Datalog format. The
generalization stage (3) identifies the common structure of $bg_1$ and
$bg_2$, resulting in $bg$, and likewise $fg_1$ and $fg_2$ are
generalized to $fg$. Finally, $bg$ and $fg$ are compared (4);
structure corresponding to $bg$ in $fg$ is removed, yielding the
benchmark result.}
\label{fig:arch}
\end{center}
\end{figure*}
ProvMark includes a script for each syscall that generates and
compiles the appropriate C executables and prepares a staging
directory in which they will be executed with any needed setup, for
example, first creating a file to run an \syscall{unlink} system
call. We constructed these scripts manually since different
system calls require different setup. The following code snippet
illustrates the background program (including \syscall{open}) needed
for the target \syscall{close} syscall (with \verb|#ifdef| surrounding
the target):
\footnotesize
\begin{verbatim}
// close.c
#include <fcntl.h>
#include <unistd.h>
int main(void) {
int id=open("test.txt", O_RDWR);
#ifdef TARGET
close(id);
#endif
}
\end{verbatim}
\normalsize
Figure~\ref{fig:arch} provides an overview of ProvMark, which is composed of four
subsystems: (1) recording, (2) transformation, (3) generalization, and
(4) comparison. Users can select which provenance capture system
to use, which benchmark to run, and other configuration settings, such
as the number of trials. Before presenting the details of the four
subsystems, we outline some common use cases for ProvMark.
\input{usecases}
\subsection{Recording}
The \emph{recording} subsystem runs the
provenance capture tools on test
programs.
This subsystem first prepares a staging directory that provides a consistent
environment for test execution.
The recording subsystem then starts the provenance capture tool with
appropriate settings, captures the provenance generated by the tool,
and stops the tool afterwards.
The recording subsystem is the only one to interact directly with the
target provenance capture tools. For each tool, we implemented a
script that configures the tool to capture the provenance of a
specific process, rather than recording all contemporaneous system
events. Recording multiple runs of the same process using CamFlow
was challenging in earlier versions because CamFlow only serialized
nodes and edges once, when first seen. The current version (0.4.5)
provides a workaround to this problem that re-serializes the needed
structures when they are referenced later. We also modified its
configuration slightly to avoid tracking ProvMark's own behavior. We
otherwise use the default configuration for each system; we refer to
these configurations as the \emph{baseline} configurations.
The recording subsystem is used to record provenance graphs for the
foreground and background variants of each benchmark. Provenance can
include transient data that varies across runs, such as timestamps, so
we typically record multiple runs of each program and filter out the
transient information in a later \emph{generalization} stage,
described below.
Some of the provenance collecting tools were originally designed to
record whole system provenance. The tools start recording when the
machine is started and only stop when the machine is shut
down. This behaviour ensures that provenance recording covers the
entire operating system session. As we need to obtain repeatable
results from several recording attempts for generalization and
benchmark generation, we need to restart the recording session
multiple times during the provenance collection. This may interfere with
the results of some tools as they are not originally designed for
frequent restarting of recording sessions. Thus the recording
subsystem aims to manage the collecting tools by inserting and
managing timeouts between the sessions. For example, we usually obtain
stable results from SPADE by inserting timeouts to wait for successful
completion of SPADE's graph generation process; even so, we sometimes
stop too early and obtain inconsistent results leading to mismatched
graphs. Similarly, using CamFlow, we sometimes experience small
variations in the size or structure of the results for reasons we have
not been able to determine. In both cases we deal with this by
running a larger number of trials and retaining the two smallest
consistent results (as discussed later). For OPUS, any two runs are
usually consistent, but starting OPUS and loading data into or out of
Neo4j are time-consuming operations.
\subsection{Transformation}
Different systems output their provenance graphs in different formats.
For example, SPADE supports Graphviz DOT format and Neo4J storage
(among others), OPUS also supports Neo4J storage, and CamFlow supports
W3C PROV-JSON~\cite{provjson} as well as a number of other storage or
stream processing backends. CamFlow also can be used instead of Linux
Audit as a reporter to SPADE, though we have not yet experimented with
this configuration.
To streamline the remaining stages, we translate these three
formats to a common representation. Unlike the recording stage, this
stage really is straightforward, but we will describe the common
format in detail because it is important for understanding the
remaining stages.
\setlength{\abovecaptionskip}{0pt}
\begin{figure}[tb]
\lstset{caption={Datalog Graph
Format},frame=single,label={potassco:index}}
\begin{lstlisting}
Node n<gid>(<nodeID>,<label>)
Edge e<gid>(<edgeID>,<srcID>,<tgtID>,<label>)
Property p<gid>(<nodeID/edgeID>,<key>,<value>)
\end{lstlisting}
\end{figure}
The common target format is a logical representation of property
graphs, in which nodes
and edges can have labels as well as associated properties (key-value
dictionaries). Specifically, given a set $\Sigma$ of node and edge
labels, $\Gamma$ of property keys, and $D$ of data values, we consider
property graphs $G = (V,E,src,tgt,lab,prop)$ where $V$ is a set
of vertex identifiers and $E$ a set of edge identifiers; these are
disjoint ($V \cap E = \emptyset$). Further, $src,tgt: E \to V$ map each edge $e$
to its source and target nodes, respectively,
$lab : V \cup E \to \Sigma$ maps each node or edge $x$ to its
label $lab(x) \in \Sigma$, and
$prop : (V \cup E) \times \Gamma \to D$ is a partial function such
that for a given node or edge $x$, $prop(x,p)$ (if defined) is
the value for property $p\in \Gamma$. In practice, $\Sigma$, $\Gamma$
and $D$ are each sets of strings.
\setlength{\abovecaptionskip}{0pt}
\begin{figure}[tb]
\begin{center}
\includegraphics[scale=0.35]{img/sample.pdf}
\captionof{figure}{Sample Graphs}\label{graph:sample}
\end{center}
\lstset{caption={Datalog Format for Figure
\ref{graph:sample}},frame=single,label={potassco:sample}}
\begin{lstlisting}
ng1(n1,"File").
pg1(n1,"Userid","1").
pg1(n1,"Name","text").
ng2(n1,"File").
ng2(n2,"Process").
pg2(n1,"Userid","1").
eg2(e1,n1,n2,"Used").
pg2(n1,"Name","text").
\end{lstlisting}
\end{figure}
\setlength{\abovecaptionskip}{10pt}
For provenance graphs, following the W3C PROV vocabulary, the node
labels are typically \emph{entity}, \emph{activity} and \emph{agent},
edge labels are typically relations such as \emph{wasGeneratedBy} and
\emph{used}, and properties are either PROV-specific property names or
domain-specific ones, and their values are typically strings.
However, our representation does not assume the labels and properties
are known in advance; it works with those produced by the tested system.
We represent property graphs as
sets of logical facts using a Prolog-like syntax called
\emph{Datalog}~\cite{ahv}, which is often used to represent relational data in
logic programming (as well as databases~\cite{ahv} and networking~\cite{green13ftdb}).
The generic form of the Datalog graph format we use is shown as
Listing \ref{potassco:index}. We assume a fixed string \verb|gid| used
to uniquely identify a given graph as part of its node, edge and label
relation names. Each node $v \in V$ is represented as a fact
$n_{gid}(v, lab(v))$. Likewise, each edge $e = (v,w) \in E$ is represented as a
fact $e_{gid}(e, src(e), tgt(e), lab(e))$. Finally, if a node or edge $x$ has property $p$
with value $s$, we represent this with the fact
$p_{gid}(x, p, s)$. Two sample graphs are shown in Figure \ref{graph:sample}
and their Datalog representations in Listing
\ref{potassco:sample}.
All the remaining stages, including the final result, work on the
Datalog graph representation, so these stages are independent of
the provenance capture tool and its output format. The Datalog
representation can easily be visualized.
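As an illustrative sketch of this translation (the function below is ours, not part of ProvMark's code base), a property graph can be serialized to these facts as follows:

```python
def to_datalog(gid, nodes, edges, props):
    """Emit Datalog facts for a property graph.

    nodes: {node_id: label}; edges: {edge_id: (src, tgt, label)};
    props: {(elem_id, key): value}.
    """
    facts = []
    for v, lab in nodes.items():
        facts.append('n%s(%s,"%s").' % (gid, v, lab))
    for e, (src, tgt, lab) in edges.items():
        facts.append('e%s(%s,%s,%s,"%s").' % (gid, e, src, tgt, lab))
    for (x, key), val in props.items():
        facts.append('p%s(%s,"%s","%s").' % (gid, x, key, val))
    return facts

# The second sample graph from above:
facts = to_datalog("g2",
                   {"n1": "File", "n2": "Process"},
                   {"e1": ("n1", "n2", "Used")},
                   {("n1", "Userid"): "1", ("n1", "Name"): "text"})
```

Applied to the second sample graph, this reproduces the facts shown in Listing~\ref{potassco:sample}.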
\subsection{Graph Generalization}
The third subsystem performs \emph{graph
generalization}. Recall that the recording stage produces several
graphs for a given test program. We wish to identify a single,
representative and general graph for each test program. To formalize
these notions, we adapt the notion of \emph{graph isomorphism} to
property graphs:
$G_1$ is isomorphic to $G_2$ if there is an invertible function $h :
V_1 \cup E_1 \to V_2\cup E_2$
such that
\begin{enumerate}
\item $h(src_1(e)) = src_2(h(e))$ and $h(tgt_1(e)) = tgt_2(h(e))$,
i.e. $h$ preserves edge relationships
\item $lab_1(x) = lab_2(h(x))$, i.e. $h$ preserves node and edge labels
\item $prop_1(x,k,v) = prop_2(h(x),k,v)$, i.e. $h$ preserves
properties.
\end{enumerate}
Moreover, we say that $G_1$ and $G_2$ are \emph{similar}
if only the first two conditions hold, that is, if $G_1$ and $G_2$
have the same shape but possibly different properties.
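To make these definitions concrete, the following brute-force check of similarity (conditions 1 and 2, ignoring properties) works for tiny graphs; it is an illustrative sketch only, since ProvMark solves the problem with an ASP solver as described below:

```python
from itertools import permutations

def similar(g1, g2):
    """Check property-graph similarity: a label-preserving bijection
    on nodes that maps the edge set of g1 exactly onto that of g2.
    Exponential in graph size, so only for tiny illustrative graphs.

    A graph is a pair (nodes, edges): nodes = {id: label},
    edges = {id: (src, tgt, label)}.
    """
    (n1, e1), (n2, e2) = g1, g2
    if len(n1) != len(n2) or len(e1) != len(e2):
        return False
    ids1, ids2 = list(n1), list(n2)
    for perm in permutations(ids2):
        h = dict(zip(ids1, perm))            # candidate node bijection
        if any(n1[v] != n2[h[v]] for v in ids1):
            continue                         # node labels not preserved
        image = sorted((h[s], h[t], lab) for (s, t, lab) in e1.values())
        if image == sorted(e2.values()):     # edges preserved up to ids
            return True
    return False

# Two runs with the same shape but different identifiers:
run1 = ({"a": "Process", "b": "File"}, {"e": ("a", "b", "used")})
run2 = ({"x": "File", "y": "Process"}, {"f": ("y", "x", "used")})
# Same labels but the edge points the other way:
run3 = ({"x": "File", "y": "Process"}, {"f": ("x", "y", "used")})
```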
We assume that over sufficiently many recording trials, there will be at least
two \emph{representative} ones, which are similar to each other.
Identifying two representative runs is complicated by the fact that
there might be multiple pairs of similar graphs. We discuss a
strategy for identifying an appropriate pair below.
To obtain two representative graphs, we first consider all of the
trial runs and partition them into similarity classes. We first
discard all graphs that are only similar to themselves, and consider
these to be failed runs. Among the remaining similarity classes, we
choose a pair of graphs whose size is smallest. Picking the two
largest graphs also seems to work; the choice seems arbitrary. However,
picking the largest background graph and the smallest foreground graph
leads to failure if the extra background structure is not found in the
foreground, while making the opposite choice leads to extra structure
being found in the difference.
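This selection strategy can be sketched as follows (an illustration under simplifying assumptions: the similarity predicate passed in is a toy stand-in for the real shape comparison, and graph size is taken to be the number of nodes plus edges):

```python
def pick_representatives(graphs, similar_fn):
    """Partition trial graphs into similarity classes, discard
    classes of size one (failed runs), and return the two smallest
    graphs from the class with the smallest members."""
    classes = []
    for g in graphs:
        for cls in classes:
            if similar_fn(cls[0], g):
                cls.append(g)
                break
        else:
            classes.append([g])
    candidates = [cls for cls in classes if len(cls) >= 2]
    if not candidates:
        return None  # no two consistent runs: more trials needed

    def size(g):
        nodes, edges = g
        return len(nodes) + len(edges)

    best = min(candidates, key=lambda cls: size(cls[0]))
    return sorted(best, key=size)[:2]

def toy_similar(g1, g2):
    # Stand-in: compare multisets of node and edge labels only.
    (n1, e1), (n2, e2) = g1, g2
    return (sorted(n1.values()) == sorted(n2.values()) and
            sorted(l for (_, _, l) in e1.values()) ==
            sorted(l for (_, _, l) in e2.values()))

g_big = ({"c": "P", "d": "F"}, {"e": ("c", "d", "used")})
g_small1 = ({"a": "P"}, {})
g_small2 = ({"b": "P"}, {})
g_odd = ({"z": "Q"}, {})  # similar only to itself: a failed run
pair = pick_representatives([g_big, g_small1, g_odd, g_small2], toy_similar)
```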
Given two similar graphs, the generalization stage identifies the
property values that are consistent across the two graphs and removes
the transient ones.
The generalization stage searches for a matching between the nodes and edges of the
two graphs that minimizes the number of differing properties; that is,
it matches as many properties as possible. We assume the remaining
differences are transient data and discard them.
Similarity and generalization are instances of the \emph{graph
isomorphism} problem, whose status (in P or NP-complete) is
unknown~\cite{arvind05beatcs}. We solve these problems using an
Answer Set Programming (ASP)~\cite{gebser11aicom} specification that
defines the desired matching between the two graphs. ASP is a
decidable form of logic programming that combines a high-level logical
specification language with efficient search and optimization
techniques analogous to SAT solving or integer linear programming. An answer
set program specifies a set of possible \emph{models}, which
correspond to solutions to a problem. The specification is a
collection of logical formulas that define when a model is a possible
solution. In our
setting, the models consist of matching relations between two graphs,
satisfying the requirements of graph similarity. ASP is well-suited to this problem
because the graphs and specification can be represented concisely,
and (as we shall see) the ASP solver can efficiently find solutions.
The problem specification is a logic program defining when a
binary relation forms an isomorphism between the graphs.
The code in Listing~\ref{fig:gi} defines graph isomorphism in ASP.
ASP specifications consist of rules that constrain the possible
solution set. The first four lines specify that $h$ can relate any
node in $G_1$ to any node in $G_2$ and vice versa, and similarly for
edges. The next two lines constrain $h$ to be a 1-1 function. (Rules of
the form \verb|:- A,B,C.| can be read as ``It cannot be the case that
\verb|A|, \verb|B|, and \verb|C| hold''.) The remaining three pairs
of lines specify that $h$ must preserve node and edge labels and the
source and targets of edges.
\begin{figure}[t]
\lstset{frame=single,caption={Graph similarity},label={fig:gi}}
\begin{lstlisting}
{h(X,Y) : n2(Y,_)} = 1 :- n1(X,_).
{h(X,Y) : n1(X,_)} = 1 :- n2(Y,_).
{h(X,Y) : e2(Y,_,_,_)} = 1 :- e1(X,_,_,_).
{h(X,Y) : e1(X,_,_,_)} = 1 :- e2(Y,_,_,_).
:- X <> Y, h(X,Z), h(Y,Z).
:- X <> Y, h(Z,Y), h(Z,X).
:- n1(X,L), h(X,Y), not n2(Y,L).
:- n2(Y,L), h(X,Y), not n1(X,L).
:- e1(E1,_,_,L), h(E1,E2), not e2(E2,_,_,L).
:- e2(E2,_,_,L), h(E1,E2), not e1(E1,_,_,L).
:- e1(E1,X,_,_), h(E1,E2), e2(E2,Y,_,_), not h(X,Y).
:- e1(E1,_,X,_), h(E1,E2), e2(E2,_,Y,_), not h(X,Y).
\end{lstlisting}
\end{figure}
This specification can be solved using \texttt{clingo}, an efficient
ASP solver~\cite{gebser11aicom}. As ASP is a kind of logic
programming, it helps ProvMark to determine an optimal matching
between the two graphs by searching for a model that satisfies the
specification. We use the resulting matching to determine which
properties are common across both graphs and discard the others. We
perform generalization independently on the foreground and background
graphs. The outputs of this stage are the two generalized graphs
representing the invariant activity of the foreground and background
programs respectively.
\subsection{Graph Comparison}
The fourth and last subsystem is \emph{graph comparison}. Its purpose
is to match the background graph to a subgraph of the foreground
graph; the unmatched part of the foreground graph corresponds to the
target activity.
Because provenance recording is generally monotonic (append-only), we expect that the generalized provenance graph for the background
program is a subgraph of the generalized provenance graph for the
foreground program, so there should be a one-to-one matching from the
nodes and edges in the background graph to the foreground graph. This
graph matching problem is a variant of the subgraph isomorphism
problem, a classical NP-complete problem~\cite{cook71stoc}.
We again solve these problems using ASP, taking advantage of the fact
that ASP solvers can search for optimal solutions according to some
cost model.
Given two graphs, $G_1$ and
$G_2$, the solver finds a matching from the nodes and edges of $G_1$
to those of $G_2$ that identifies a subgraph of $G_2$ isomorphic to
$G_1$ and minimizes the number of mismatched properties.
Listing~\ref{fig:sg-approx} shows the approximate subgraph
optimization problem specification. It is related to graph
similarity, but only requires each node/edge in $G_1$ to be matched
to one in $G_2$, and not the reverse. Additionally, the last four
lines define the cost of a matching as the number of properties
present in $G_1$ with no matching property in $G_2$, and this cost is
to be
minimized.
\begin{figure}[tb]
\lstset{frame=single,caption={Approximate subgraph
isomorphism},label={fig:sg-approx}}
\begin{lstlisting}
{h(X,Y) : n2(Y,_)} = 1 :- n1(X,_).
{h(X,Y) : e2(Y,_,_,_)} = 1 :- e1(X,_,_,_).
:- X <> Y, h(X,Z), h(Y,Z).
:- X <> Y, h(Z,Y), h(Z,X).
:- n1(X,L), h(X,Y), not n2(Y,L).
:- e1(E1,_,_,L), h(E1,E2), not e2(E2,_,_,L).
:- e1(E1,X,_,_), h(E1,E2), e2(E2,Y,_,_), not h(X,Y).
:- e1(E1,_,X,_), h(E1,E2), e2(E2,_,Y,_), not h(X,Y).
cost(X,K,0) :- p1(X,K,V), h(X,Y), p2(Y,K,V).
cost(X,K,1) :- p1(X,K,V), h(X,Y), p2(Y,K,W), V <> W.
cost(X,K,1) :- p1(X,K,V), h(X,Y), not p2(Y,K,_).
#minimize { PC,X,K : cost(X,K,PC) }.
\end{lstlisting}
\end{figure}
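The cost rules can also be read operationally. The sketch below, a hypothetical helper rather than ProvMark code, computes for a given matching \texttt{h} the number of mismatched properties, i.e.\ the quantity the \texttt{\#minimize} statement drives down:

```python
def matching_cost(p1, p2, h):
    """Properties of G1 elements that h fails to preserve in G2.
    p1, p2 map element id -> {property key: value}; h maps G1 ids to G2 ids.
    Mirrors the cost rules: a property counts 1 if its key is absent on the
    matched element or present with a different value, and 0 otherwise."""
    cost = 0
    for x, props in p1.items():
        matched = p2.get(h[x], {})
        for key, value in props.items():
            if matched.get(key) != value:
                cost += 1
    return cost
```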
We again use
the \texttt{clingo} solver to find an optimal matching. Once we have
found such a matching, the graph comparison subsystem subtracts the
matched nodes and edges in the background activity from the foreground
graph. This is essentially a set difference operation between the
nodes and edges of the graphs, but we also retain any nodes that are
sources or targets of edges in the difference; these are visualized as
green or gray \emph{dummy nodes}.
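A minimal sketch of this subtraction step (our own illustration; the function name and graph encoding are assumptions, not ProvMark's API) might look as follows:

```python
def graph_diff(fg_nodes, fg_edges, bg_nodes, bg_edges, h):
    """Subtract the matched background activity from the foreground graph.
    *_nodes map node id -> label, *_edges map edge id -> (src, tgt, label);
    h maps background node/edge ids to the foreground ids they matched."""
    matched_nodes = {h[x] for x in bg_nodes}
    matched_edges = {h[x] for x in bg_edges}
    nodes = {x: l for x, l in fg_nodes.items() if x not in matched_nodes}
    edges = {x: e for x, e in fg_edges.items() if x not in matched_edges}
    # keep endpoints of surviving edges even if they were matched away;
    # these are the "dummy" nodes shown in the visualization
    for s, t, _ in edges.values():
        for v in (s, t):
            if v not in nodes:
                nodes[v] = "dummy"
    return nodes, edges
```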
\section*{Acknowledgments}
Effort sponsored by the Air Force Office of Scientific Research, Air
Force Material Command, USAF, under grant number
\grantnum{AFOSR}{FA8655-13-1-3006}. The U.S. Government and University
of Edinburgh are authorised to reproduce and distribute reprints for
their purposes notwithstanding any copyright notation thereon. Cheney
was also supported by ERC Consolidator Grant Skye (grant number
\grantnum{ERC}{682315}). This material is based upon work supported
by the Defense Advanced Research Projects Agency (DARPA) under
contract \grantnum{DARPA}{FA8650-15-C-7557} and the National Science Foundation under
Grant \grantnum{NSF}{ACI-1547467} and \grantnum{NSF}{SSI-1450277} (End-to-End Provenance).
Any opinions, findings, and conclusions or
recommendations expressed in this material are those of the authors
and do not necessarily reflect the views of the National Science
Foundation.
We are grateful to Ajitha Rajan for comments on a draft of this paper
and to our shepherd Boris Koldehofe and anonymous reviewers for
insightful comments.
\bibliographystyle{ACM-Reference-Format}
\subsection{Use Cases}
\label{sec:usecases}
People are beginning to build critical distributed applications,
particularly security applications, using system-level
provenance. However, it is difficult for them to know how to interpret
results or implement queries to detect activity on different
systems. Both potential users and developers of all three considered
provenance recording tools have participated in the design of
ProvMark. To clarify when, and to whom, ProvMark is useful, in this
section we outline several common use cases. In each case, ProvMark
automates a central, labor-intensive step: namely, running tools to
extract provenance graphs and analyzing the graphs produced by a tool
to identify a target activity. The first two use cases are real
(albeit lightly fictionalized) situations where we have used ProvMark.
The other two are more speculative, but illustrate how ProvMark could
be used for testing or exploration.
\paragraph{Tracking failed calls}
Alice, a security analyst, wants to know which provenance recorders
track syscalls that fail due to access control violations, since these
calls may be an indication of an attack or surveillance attempt. She
can write small benchmark programs that capture various access control
failure scenarios; most only take a few minutes to write, by modifying
other, similar benchmarks for successful calls. Once this is done, ProvMark can
run all of them to produce benchmark results. For example, Alice
considers what happens if a non-privileged user unsuccessfully
attempts to overwrite \texttt{/etc/passwd} by renaming another file.
By default SPADE installs Linux Audit rules that only report on
successful system calls, so SPADE records no information in this case.
OPUS monitors system calls by intercepting C library calls, so it
knows whether a call is being attempted, and typically generates some
graph structure even for unsuccessful calls. For example, the result
of a failed \syscall{rename} call has the same structure as shown in
Figure~\ref{fig:rename}, but with a different return value property of -1
instead of 0. Finally, CamFlow can in principle monitor failed system
calls, particularly involving permission checks, but does not do so in
this case. Alice concludes that for her application, OPUS may provide
the best solution, but resolves to discuss this issue with the SPADE
and CamFlow developers as well.
\paragraph{Configuration validation}
Bob, a system administrator, wants to make sure that an installation
of SPADE is configured correctly to match a security policy. SPADE
(like the other systems) has several configuration parameters, to
allow greater or lesser levels of detail, coalescing similar
operations (such as repeated reads or writes), or enabling/disabling
versioning. These also affect performance and system overhead, so Bob
wants to ensure that enough information is recorded to enable
successful audits, while minimizing system load.
Bob can use ProvMark to benchmark alternative configurations of
SPADE. For example, SPADE provides a flag \texttt{simplify} that is
enabled by default. Disabling \texttt{simplify} causes
\syscall{setresgid} and \syscall{setresuid} (among others) to be
explicitly monitored, and Bob wants to ensure these calls are tracked.
However, on doing so, Bob also uncovered a minor bug: when
\texttt{simplify} is disabled, one of the properties of a background
edge is initialized to a random value, which shows up in the benchmark
as a disconnected subgraph. The bug was reported to the SPADE
developers and quickly fixed.
SPADE also provides various \emph{filters} and \emph{transformers}
which perform pre-processing or post-processing of the stored
provenance respectively. Bob also experimented with one of SPADE's filters,
\texttt{IORuns}, which controls whether runs of similar read or write
operations are coalesced into a single edge. Surprisingly, Bob found that in the
benchmarked version of SPADE, enabling this filter had no effect.
This turned out to be due to an inconsistency in the property names
used by the filter vs. those generated by SPADE. This has also now
been fixed.
\paragraph{Regression testing}
Charlie, a developer of provenance recording tool XYZTrace, wants to
be able to document the level of completeness of XYZTrace to
(potentially skeptical) users. ProvMark can be used for regression
testing, by recording the graphs produced in a given benchmarking run,
and comparing them with the results of future runs, using the same
code for graph isomorphism testing ProvMark already uses during
benchmarking. Charlie writes a simple script that stores the
benchmark graphs (as Datalog) from previous runs, and whenever the
system is changed, a new benchmarking run is performed and the results
compared with the previous ones. When differences are detected, if the changes are expected then the new version of the graph replaces the old one; if the changes are unexpected, this is investigated as a potential bug.
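In the simplest case, where node identifiers are stable across runs, such a script can compare canonicalized graph dumps textually; when identifiers vary, the isomorphism check would be applied instead. A hypothetical sketch (the function and storage layout are our own assumptions):

```python
import json, os

def check_regression(name, new_graph, store="benchmarks"):
    """Compare a fresh benchmark graph with the stored baseline.
    Returns 'new', 'same' or 'changed'.  new_graph is any JSON-serializable
    encoding of the graph facts; sort_keys gives a canonical form."""
    os.makedirs(store, exist_ok=True)
    path = os.path.join(store, name + ".json")
    canonical = json.dumps(new_graph, sort_keys=True)
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write(canonical)       # first run: record the baseline
        return "new"
    with open(path) as f:
        baseline = f.read()
    return "same" if baseline == canonical else "changed"
```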
\paragraph{Suspicious activity detection}
Dora, a security researcher, wants to identify patterns in provenance
graphs that are indicative of a potential attack on the system. She
compiles a set of scripts that carry out different kinds of attacks,
and configures CamFlow on a virtual machine. She is particularly
interested in detecting privilege escalation events where an attacker
is able to gain access to new resources by subverting a privileged
process. She instruments the scripts to treat the privilege
escalation step as the ``target activity''. Using ProvMark, Dora can
obtain example provenance graphs that include the target activity or
exclude it. If the graphs are not too large (e.g. hundreds rather
than thousands of nodes), ProvMark's default behavior will also be
able to compute the differences between the graphs with and without
the target activity.
Several classical special function inequalities, such as Fej\'{e}r's Inequality \cite{An00} or the Askey-Gasper Inequality \cite{An00},
assert the positivity of an object that can be defined by a linear recurrence with polynomial
coefficients. Even for the special case of linear recurrences
\begin{equation}\label{eq:cfinite rec}
a(n+d) = s_1 a(n+d-1) + \dots + s_{d-1}a(n+1) + s_d a(n), \quad n\in\mathbb{N},
\end{equation}
with constant coefficients $s_1,\dots,s_d\in\mathbb{R}$ it is not
always a simple matter to decide from the recurrence coefficients and the real initial values
$a(0),\dots,a(d-1)$ whether the solution is positive or not.
We call sequences $(a(n))$ that satisfy a recurrence of the form \eqref{eq:cfinite rec} {\em recurrence sequences}.
Zeilberger \cite{Ze90} gives them the more suggestive name {\em $C$-finite sequences\/}.
Linear combinations (with constant coefficients) of recurrence sequences are recurrence sequences again,
so positivity results are useful for comparing the magnitude of two sequences, too.
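Such sequences are easy to generate numerically; the following Python sketch (an illustration only, not part of any formal argument) unrolls \eqref{eq:cfinite rec} from its coefficients and initial values:

```python
def cfinite(coeffs, init, N):
    """First N terms of a(n+d) = s_1 a(n+d-1) + ... + s_d a(n).
    coeffs = [s_1, ..., s_d], init = [a(0), ..., a(d-1)]."""
    a = list(init)
    while len(a) < N:
        # a[-k] is a(n+d-k), which is multiplied by s_k
        a.append(sum(s * a[-k] for k, s in enumerate(coeffs, start=1)))
    return a[:N]
```

For instance, \texttt{cfinite([1, 1], [0, 1], 8)} returns the first eight Fibonacci numbers $0,1,1,2,3,5,8,13$, whose positivity from $n=1$ on is evident; for general coefficients, positivity is far less transparent.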
It is well known \cite{Ev03} that
the sequence $(a(n))$ can be written in terms of the roots $\alpha_1,\dots,\alpha_s$ of the
{\em characteristic polynomial\/}
\[
z^d - s_1 z^{d-1}- \dots - s_{d-1} z - s_d
\]
of the recurrence as a generalized power sum
\begin{equation}\label{eq:a(n) repr}
a(n) = C_1(n) \alpha_1^n + \dots + C_s(n) \alpha_s^n,
\end{equation}
where the $C_k(n)$ are polynomials in $n$ with complex coefficients. Given a recurrence of the form \eqref{eq:cfinite rec}
and initial values $a(0),\dots,a(d-1)$, the $\alpha_k$ and the $C_k$ can be readily computed.
We refer to the $\alpha_k$ that occur in \eqref{eq:a(n) repr} with nonzero coefficient as {\em characteristic roots of\/}
$(a(n))$. The characteristic roots of maximal modulus will be called {\em dominating characteristic roots of\/}
$(a(n))$.
\begin{example}\label{ex:cfinite rec}
Consider the recurrence
\begin{align*}
a(n+5) & = 3 a(n+4) -2(\sqrt{5}+1)a(n+3)+6(\sqrt{5}+1)a(n+2) \\
& \phantom{=} -16a(n+1) + 48a(n).
\end{align*}
Its characteristic polynomial is
\[
(z-3)(z-2 \mathrm{e}^{7\mathrm{i}\pi/5})(z-2\mathrm{e}^{-7\mathrm{i}\pi/5})
(z-2\mathrm{e}^{2\mathrm{i}\pi/5})(z-2\mathrm{e}^{-2\mathrm{i}\pi/5}),
\]
and the solution is given by
\begin{align}
a(n) & = c_0 3^n + 2^n \left(c_1 \mathrm{e}^{7n\mathrm{i}\pi/5} + \overline{c}_1 \mathrm{e}^{3n\mathrm{i}\pi/5}
+ c_2 \mathrm{e}^{2n\mathrm{i}\pi/5} + \overline{c}_2 \mathrm{e}^{8n\mathrm{i}\pi/5} \right) \notag \\
& = c_0 3^n + O(2^n), \label{eq:O(2^n)}
\end{align}
where the coefficients $c_0\in\mathbb{R}$ and $c_1,c_2\in\mathbb{C}$ depend on the real
initial values $a(0),\dots,a(4)$.
We may ask ourselves whether $3^{-n}a(n)$ approaches $c_0$ from one side only. If $c_1$ and $c_2$
do not both vanish, it is natural to expect (and will be established in this paper) that this does not hold, because
the $O(2^n)$ term seems to oscillate.
\end{example}
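The stated factorization can be checked numerically. The sketch below (included for the reader's convenience only) expands the product of the linear factors and recovers the recurrence coefficients up to floating-point error:

```python
import cmath, math

# the five characteristic roots from the factorization above
roots = [3.0] + [2 * cmath.exp(1j * math.pi * k / 5) for k in (7, -7, 2, -2)]

# expand prod_k (z - root_k); coefficients listed from z^5 down to z^0
poly = [1]
for r in roots:
    poly = [a - r * b for a, b in zip(poly + [0], [0] + poly)]

# compare with z^5 - 3z^4 + 2(sqrt5+1)z^3 - 6(sqrt5+1)z^2 + 16z - 48
s5 = math.sqrt(5)
expected = [1, -3, 2 * (s5 + 1), -6 * (s5 + 1), 16, -48]
assert all(abs(p - e) < 1e-9 for p, e in zip(poly, expected))
```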
We pose the following conjecture.
\begin{conjecture}\label{cj:equ abs}
Let $(a(n))$ be a recurrence sequence with no real positive dominating characteristic root. Then
there are infinitely many $n$ with $a(n)>0$ and infinitely many $n$ with $a(n)<0$.
\end{conjecture}
The sequence $(a(n))$ might not oscillate if there is a real positive dominating characteristic root.
See Section~\ref{se:pos real} for more on this.
So far Conjecture~\ref{cj:equ abs} has only been verified for one dominating characteristic root (trivial) and
for one pair of conjugate complex roots \cite{Bu81}.
We cannot follow an argument from Nagasaka and Shiue \cite{Na90}, viz.\ that this special case
should immediately imply the truth of the conjecture in general.
The main goal of this paper
is to establish the following theorem by an extension of Burke and Webb's proof.
\begin{theorem}[Main Theorem]\label{thm:main}
Let $(a(n))$ be a recurrence sequence with at most four dominating characteristic roots, none of which is real positive. Then
there are infinitely many $n$ with $a(n)>0$ and infinitely many $n$ with $a(n)<0$.
\end{theorem}
The rest of the paper is organized as follows. In Section~\ref{se:pre} we reduce Theorem~\ref{thm:main}
from multiple roots to simple roots and subsequently
to a geometric statement about the distribution modulo one of integer multiples of a real vector
$\bm{\xi}=(\xi_1,\xi_2)=(\arg{\alpha_1},\arg{\alpha_2})/2\pi$,
except for some special cases of Theorem~\ref{thm:main} that are settled in Section~\ref{se:compl}.
In Section~\ref{se:irr} we deduce the
desired result from Kronecker's Approximation Theorem, provided that one of $\xi_1,\xi_2$ is irrational. The proof in the case
where both are rational is the subject of Sections~\ref{se:rat} and \ref{se:edrat}.
Section~\ref{se:pos real} presents a metric result that deals with the case of a positive real root.
In the conclusion we comment, among other things,
on extending our approach to Conjecture~\ref{cj:equ abs} to an arbitrary number of dominating characteristic roots.
\section{Notation and preliminaries}\label{se:pre}
We write $\mathbb{N}$, $\mathbb{Z}$, $\mathbb{Q}$, $\mathbb{R}_0^+$, $\mathbb{R}$ and $\mathbb{C}$
for the sets of natural numbers (including zero), integers, rational numbers, non-negative real numbers,
real numbers and complex numbers, respectively. The conjugate of a complex number $z$ is denoted by $\overline{z}$.
Whenever $\bm{v}$ is a vector, we use the same letter with a subscript for its components,
as in $\bm{v}=(v_1,\dots,v_m)$.
For vectors $\bm{\xi}=(\xi_1,\dots,\xi_m)$, $\bm{\rho}=(\rho_1,\dots,\rho_m)$ of real numbers
and a real number $\rho$, we write
\begin{align*}
\bm{\xi} \bmod \bm{\rho} &= (\xi_1 \bmod \rho_1,\dots, \xi_m \bmod \rho_m) \quad \textnormal{and} \\
\bm{\xi} \bmod \rho &= (\xi_1 \bmod \rho,\dots, \xi_m \bmod \rho).
\end{align*}
We define the open rectangle parallel to the axes with side lengths $2\lambda_1,2\lambda_2\in\mathbb{R}$
centred at $\bm{c}=(c_1,c_2)\in\mathbb{R}^2$ as
\[
\mathcal{R}_{\lambda_1,\lambda_2}(\bm{c}) := \left\{ \bm{x} \in \mathbb{R}^2 :
|x_1-c_1|<\lambda_1, |x_2-c_2|<\lambda_2 \right\}.
\]
For an open square parallel to the axes we write
\[
\mathcal{S}_\lambda(\bm{c}) := \mathcal{R}_{\lambda,\lambda}(\bm{c}),\quad \lambda \in \mathbb{R}, \bm{c} \in \mathbb{R}^2.
\]
A {\em lattice} \cite{Ca59} is a discrete subgroup $\Lambda\subset\mathbb{R}^m$. Its determinant
is denoted by $\bm{d}(\Lambda)$.
The lattice $L_g(a_1,a_2)$ is defined in Section~\ref{se:rat}.
\vskip 5mm
Let $(a(n))$ be as in Theorem~\ref{thm:main}.
We order the characteristic roots $\alpha_1,\dots,\alpha_{s}$ of $(a(n))$ such that $\alpha_1,\dots,\alpha_{l}$ contain
all real dominating characteristic roots, precisely one element of every pair of conjugate non-real dominating characteristic roots
and no other roots.
Note that this implies $l=1$ or $l=2$.
Moreover, let $\alpha_1,\dots,\alpha_{l}$ be ordered such that
\[
D:= \deg C_1 = \dots = \deg C_m > \deg C_{m+1} \geq \dots \geq \deg C_{l}
\]
for some $1\leq m \leq l\leq 2$. Then we obtain \cite{Kn94}
\[
n^{-D}a(n) = \sum_{k=1}^m\left(c_k\alpha_k^n +
\overline{c}_k\overline{\alpha}_k^n\right) + O(n^{-1}|\alpha_1|^n),
\]
where $c_k$ is the leading coefficient of $C_k(n)$.
This formula shows that Theorem~\ref{thm:main} can be deduced from Burke and Webb's result
($m=1$) and the following theorem ($m=2$). Observe that we can safely assume $|\alpha_1|=|\alpha_2|=1$,
since we can divide by the positive factor $|\alpha_1|^n$.
\begin{theorem}\label{thm:main2}
Let $\alpha_1,\alpha_2\in\mathbb{C}\backslash\mathbb{R}_0^+$, $|\alpha_1|=|\alpha_2|=1$,
$\alpha_1\neq\alpha_2\neq\overline{\alpha}_1$. Let further $c_1,c_2$ be nonzero complex numbers and
\begin{equation}\label{eq:b(n)}
b(n) := c_1 \alpha_1^n + \overline{c}_1\overline{\alpha}_1^n
+ c_2 \alpha_2^n + \overline{c}_2\overline{\alpha}_2^n, \qquad n\geq 0.
\end{equation}
Then there is $\delta>0$ such that $b(n)>\delta$ for infinitely many $n$ and
$b(n)<-\delta$ for infinitely many $n$.
\end{theorem}
Note that if $\delta$ were replaced by zero, it might happen that
e.g.\ all negative values $b(n)$ are so small in absolute value that the remainder term of $a(n)$, which comes from
the characteristic roots of smaller modulus, takes over and makes the corresponding values
$a(n)$ positive.
This uniformity condition was missed by Burke and Webb \cite{Bu81}.
They only argue that $c_1 \alpha_1^n + \overline{c}_1\overline{\alpha}_1^n$ has
infinitely many positive and infinitely many negative values, which is not sufficient, but their proof can be easily repaired.
Now let $\alpha_1$, $\alpha_2$, $c_1$, $c_2$ be as in Theorem~\ref{thm:main2}.
Replacing $(\alpha_k,c_k)$ by $(\overline{\alpha}_k,\overline{c}_k)$ and vice versa if necessary, we may assume $\mathrm{Im}(c_k)\geq 0$.
Putting $\theta_k:=\arg{\alpha_k}$, we obtain by standard formulas
\begin{align*}
b(n) & = 2\sum_{k=1}^2 \mathrm{Re}\left( c_k \exp\left(\mathrm{i}n\theta_k \right) \right) \\
& = 2\sum_{k=1}^2\left(\mathrm{Re}(c_k) \cos{n\theta_k} - \mathrm{Im}(c_k) \sin{n \theta_k}\right) \\
& = \sum_{k=1}^2 w_k \sin(n\theta_k + \varphi_k),
\end{align*}
where the coefficients are nonzero real numbers
\[
w_k :=
\begin{cases}
-2|c_k|, & c_k \in \mathbb{C}\backslash\mathbb{R}; \\
2c_k, & c_k \in \mathbb{R},
\end{cases}
\]
and the $\varphi_k$ are given by
\[
\varphi_k :=
\begin{cases}
-\arctan\tfrac{\mathrm{Re}(c_k)}{\mathrm{Im}(c_k)}, & c_k \in \mathbb{C}\backslash\mathbb{R}; \\
\frac{1}{2}\pi, & c_k \in \mathbb{R}.
\end{cases}
\]
We turn our attention to the signs of $\sin(n\theta_k + \varphi_k)$. If we can prove that
for every pair $(S_1,S_2)$ of $+1$'s and $-1$'s there are
infinitely many $n$ such that the sign of $\sin(n\theta_k + \varphi_k)$
equals $S_k$ for $k=1,2$, we will have shown that $(b(n))$ oscillates,
whatever the values of the $c_k$ (and thus the $w_k$) are. In other words, we are looking for $n$ such that
\[
(n\theta_k + \varphi_k) \bmod 2\pi \in\ ]0,\pi[
\]
or
\[
(n\theta_k + \varphi_k) \bmod 2\pi \in\ ]\pi,2\pi[,
\]
respectively. To get the $\delta$ in Theorem~\ref{thm:main2}, we have to shrink the intervals to
\[
]\epsilon,\pi-\epsilon[ \quad \text{and} \quad ]\pi+\epsilon,2\pi-\epsilon[
\]
for some small $\epsilon>0$, of course independent from $n$.
Now we rescale to the unit interval.
\begin{theorem}\label{thm:xi0,xi1}
Let $\xi_1,\xi_2 \in \left]0,1\right[\backslash\{\tfrac{1}{2}\}$ such that $\xi_1\not\equiv\pm\xi_2\pmod{1}$ and,
if both $\xi_1$ and $\xi_2$ are rational, then the pair of their denominators
(written with the larger denominator first) is none of
$(5,5)$, $(6,3)$, $(8,4)$.
Then for all $\bm{c}\in\mathbb{R}^2$ there is $\epsilon>0$ such that
there are infinitely many $n$ with
\[
n(\xi_1,\xi_2)\bmod 1\ \in\ \mathcal{S}_{1/4-\epsilon}(\bm{c}) \bmod 1.
\]
\end{theorem}
Since the sine function is continuous, applying this theorem with
$(\xi_1,\xi_2)=(\theta_1/2\pi,\theta_2/2\pi)$ and $c_k=\tfrac{1}{4}-\varphi_k/2\pi$ to make
$\sin(n\theta_k+\varphi_k)$ positive and $c_k=\tfrac{3}{4}-\varphi_k/2\pi$ for a negative sign proves Theorem~\ref{thm:main2},
unless one of the $\alpha_k$ is a negative real number (which implies $\xi_k=\tfrac{1}{2}$)
or $\theta_1/2\pi,\theta_2/2\pi$ are rational numbers with denominators in $\{(5,5), (6,3), (8,4)\}$.
Section~\ref{se:compl} deals with these special cases of Theorem~\ref{thm:main2}.
In the proof of Theorem~\ref{thm:xi0,xi1} we distinguish the following three cases:
\begin{enumerate}
\item[(1)] $\xi_1, \xi_2, 1$ are linearly independent over $\mathbb{Q}$. \label{case1}
\item[(2)] $\xi_1, \xi_2$ are not both rational, but satisfy a linear relation $u_1\xi_1 + u_2\xi_2 = v$ with
$u_1,u_2,v\in\mathbb{Z}$. \label{case2}
\item[(3)] $\xi_1$ and $\xi_2$ are both rational. \label{case3}
\end{enumerate}
Section~\ref{se:irr} settles the first two cases.
The proof of Theorem~\ref{thm:xi0,xi1} in Case~3 is the content of Sections~\ref{se:rat} and \ref{se:edrat}.
We remark that in order to prove Conjecture~\ref{cj:equ abs} for one
pair of conjugate complex dominating roots,
it suffices to show that for every real number $\xi\neq \tfrac{1}{2}$ with
$0<\xi<1$ and every real number $c$ there is $\epsilon>0$ such that for infinitely many $n$
\[
n\xi\bmod 1\ \in\ \left]c-\tfrac{1}{4}+\epsilon,c+\tfrac{1}{4}-\epsilon\right[ \bmod 1.
\]
This is essentially what was done (without $\epsilon$, cf. the introduction) by Burke and Webb \cite{Bu81}.
\section{The irrational cases}\label{se:irr}
The closure of the set of integer multiples of a vector $\bm{\xi}=(\xi_1,\xi_2)$ modulo one
is described by a classical result from Diophantine approximation.
\begin{theorem}[Kronecker's Theorem]\label{thm:kronecker}
Let $\xi_1$, $\xi_2$ be real numbers.
\begin{itemize}
\item[(i)] If $\xi_1,\xi_2,1$ are linearly independent over the rationals, then the points
$n\bm{\xi}\bmod 1$, $n\in\mathbb{N}$, lie dense in the unit square.
\item[(ii)] If $\xi_1,\xi_2$ are not both rational, but satisfy a relation
$u_1\xi_1 + u_2\xi_2 = v$ with $u_1,u_2,v\in\mathbb{Z}$ and $\gcd(u_1,u_2,v)=1$, then the points
$n\bm{\xi}\bmod 1$, $n\in\mathbb{N}$, lie dense on the portions of the lines
\[
\ell_t:=\left\{ \bm{x} \in \mathbb{R}^2 : u_1x_1+u_2x_2 = t \right\}, \quad t\in\mathbb{Z},
\]
which lie within the unit square.
\end{itemize}
\end{theorem}
\begin{proof}
See e.g.\ Niven \cite[Theorems~3.4 and 3.6]{Ni63}.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=.37]{fig1}
\caption{The unit square with $(n\xi_1,n\xi_2)\bmod 1$ for $\xi_1=2\sqrt{2}$, $\xi_1-2\xi_2=2$ and $n=0,\dots,50.$}
\label{fi:one_rel}
\end{figure}
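The configuration of Figure~\ref{fi:one_rel} is easy to reproduce. The following sketch (illustrative only) verifies that every multiple $n\bm{\xi}\bmod 1$ lies on one of the lines $\ell_t$, i.e.\ that $u_1x_1+u_2x_2$ is an integer at every point:

```python
import math

def max_line_deviation(xi1, xi2, u1, u2, N):
    """Largest distance of u1*x1 + u2*x2 from the nearest integer over the
    points (x1, x2) = n*(xi1, xi2) mod 1 for n = 0, ..., N-1.  A value near
    zero means every point lies on one of the lines l_t."""
    dev = 0.0
    for n in range(N):
        x1, x2 = (n * xi1) % 1.0, (n * xi2) % 1.0
        t = u1 * x1 + u2 * x2
        dev = max(dev, abs(t - round(t)))
    return dev

# the data of Figure 1: xi1 = 2*sqrt(2) and xi1 - 2*xi2 = 2
assert max_line_deviation(2 * math.sqrt(2), math.sqrt(2) - 1, 1, -2, 51) < 1e-9
```

For $\mathbb{Q}$-linearly independent $\xi_1,\xi_2,1$ the deviation does not stay small, as part (i) of the theorem predicts.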
Part (i) of Theorem~\ref{thm:kronecker} settles Case~1 of Theorem~\ref{thm:xi0,xi1}.
We proceed to Case~2. Let
$\bm{c}\in\mathbb{R}^2$ be arbitrary but fixed and
$\ell_t$ be as in part (ii) of Theorem~\ref{thm:kronecker}.
Since
\[
\bigcup_{t\in\mathbb{Z}}\ell_t + \mathbb{Z}^2 = \bigcup_{t\in\mathbb{Z}}\ell_t,
\]
it suffices to find infinitely many $n\bm{\xi}\bmod 1$ in the set
\[
\mathcal{S}_{1/4-\epsilon}(\bm{c}) \cap \bigcup_{t\in\mathbb{Z}} \ell_t,
\]
where $\epsilon>0$ is yet to be chosen.
First suppose that $\xi_1$ and $\xi_2$ are irrational.
Then the parallel lines $\ell_t$ are neither horizontal nor vertical,
since $u_1u_2\neq 0$. Two adjacent lines $\ell_t$, $\ell_{t+1}$
have horizontal distance $1/|u_1|$
and vertical distance $1/|u_2|$. Since $\xi_1\not\equiv\pm\xi_2\pmod{1}$, one of
these quantities must be smaller than or equal to $\tfrac{1}{2}$.
Thus
\[
\mathcal{S}_{1/4}(\bm{c}) \cap \bigcup_{t\in\mathbb{Z}} \ell_t \neq \emptyset.
\]
In fact this set is not only non-empty but contains a line segment.
Clearly, we can find $\epsilon>0$ such that the set
$\mathcal{S}_{1/4-\epsilon}(\bm{c}) \cap \bigcup_{t\in\mathbb{Z}} \ell_t$
still contains a line segment of length greater than zero.
Filling this line segment densely with points $n\bm{\xi}\bmod 1$ requires infinitely many $n$.
Now let $\xi_1$ be rational and $\xi_2$ be irrational, and let $b_1\in\mathbb{N}$ be the
denominator of $\xi_1$. This implies $u_2=0$.
Then the lines $\ell_t$ are vertical, and the horizontal distance between $\ell_t$ and $\ell_{t+1}$ is $1/b_1\leq \tfrac{1}{3}$, since
$b_1>2$ by the assumptions of Theorem~\ref{thm:xi0,xi1}. Case~2 of Theorem~\ref{thm:xi0,xi1} is proved.
\section{The rational case}\label{se:rat}
The main goal of this section and the next one is to prove the following theorem.
\begin{theorem}\label{thm:genrat}
Let $a_1,a_2,b_1,b_2\in\mathbb{N}$, $2\leq b_2\leq b_1$, $1\leq a_k < b_k$,
$\gcd(a_k,b_k)=1$ for $k=1,2$ and $\tfrac{a_1}{b_1} \not\equiv \pm\tfrac{a_2}{b_2} \pmod{1}$.
Then there is $\bm{c}\in[0,1]^2$ such that for all $n\in\mathbb{N}$
\[
n(\tfrac{a_1}{b_1},\tfrac{a_2}{b_2}) \bmod 1 \notin \mathcal{S}_{1/4}(\bm{c}) \bmod 1
\]
provided that
\begin{equation}\label{eq:genrat excl}
(b_1,b_2) \in \left\{ (5,5),(6,3),(8,4) \right\} \cup \left\{ (b_1,2) : 2\leq b_1 \in \mathbb{N} \right\},
\end{equation}
and there is no such $\bm{c}$ if \eqref{eq:genrat excl} does not hold.
\end{theorem}
To see that Case~3 of Theorem~\ref{thm:xi0,xi1} follows from Theorem~\ref{thm:genrat}, note that
the purely periodic sequence
\[
n(\tfrac{a_1}{b_1},\tfrac{a_2}{b_2}) \bmod 1 =
(\tfrac{na_1\bmod b_1}{b_1},\tfrac{na_2\bmod b_2}{b_2}), \quad n \geq 0,
\]
assumes each of its finitely many values infinitely often.
The $\epsilon$ has disappeared because the set of all $n(\tfrac{a_1}{b_1},\tfrac{a_2}{b_2})\bmod 1$
is finite and $\mathcal{S}_{1/4}(\bm{c})$ is open.
\begin{proof}[Proof of the right to left implication of Theorem~\ref{thm:genrat}]
If $b_2=2$, we necessarily have $a_2=1$, and we may take $c_2=\tfrac{1}{4}$ and $c_1\in\mathbb{R}$ arbitrary.
(See Figure~\ref{fi:spec cases} for an example.)
If $(b_1,b_2)=(5,5)$, it is easy to see that for all $\bm{a}$ in question the set of integer multiples modulo one
is one of the two sets
\[
\{ n(\tfrac{1}{5},\tfrac{2}{5}) \bmod 1 : n\in\mathbb{N}\} \quad \textnormal{and} \quad
\{ n(-\tfrac{1}{5},\tfrac{2}{5}) \bmod 1 : n\in\mathbb{N}\},
\]
obtained from $\bm{a}=(1,2)$ and $\bm{a}=(-1,2)$, respectively.
Similarly, for $(b_1,b_2)=(6,3)$ it suffices to consider $\bm{a}=(\pm 1,2)$.
This is also true for $(b_1,b_2)=(8,4)$, if we take $\bm{a}=(\pm 3,1)$ instead
of $(\pm 1,2)$.
The number of $\bm{a}$'s to check can be reduced further
by taking advantage of some obvious symmetries. By the subsequent lemma,
the alternative with negative first entry can be discarded in each of the three cases.
Figure~\ref{fi:spec cases} shows that in the remaining cases we may take $\bm{c}=(\tfrac{1}{2},\tfrac{1}{2})$,
$(\tfrac{1}{12},\tfrac{1}{3})$ and $(\tfrac{1}{2},\tfrac{1}{2})$, respectively.
\end{proof}
\begin{lemma}\label{le:symm}
Define the maps $s$ and $\tau$ on $\mathbb{R}^2$ by
\[
s(x_1,x_2) = ((1-x_1)\bmod 1,x_2) \quad \text{and} \quad \tau(x_1,x_2) = (x_2,x_1).
\]
Then for all real numbers $\xi_1$, $\xi_2$
\[
s((\xi_1,\xi_2)\bmod 1) = s(\xi_1,\xi_2)\bmod 1 \quad \text{and} \quad \tau((\xi_1,\xi_2)\bmod 1) = \tau(\xi_1,\xi_2)\bmod 1.
\]
\end{lemma}
\begin{proof}
Obvious.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=.37]{m52}
\includegraphics[scale=.37]{m55}
\includegraphics[scale=.37]{m63}
\includegraphics[scale=.37]{fig5}
\caption{The unit square with the set $\{n(\tfrac{a_1}{b_1},\tfrac{a_2}{b_2})\bmod 1:n\in\mathbb{N}\}$
for $(\tfrac{a_1}{b_1},\tfrac{a_2}{b_2})=(\tfrac{1}{5},\tfrac{1}{2})$, $(\tfrac{1}{5},\tfrac{2}{5})$,
$(\tfrac{1}{6},\tfrac{2}{3})$ and $(\tfrac{3}{8},\tfrac{1}{4})$, respectively.
}\label{fi:spec cases}
\end{figure}
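The three exceptional cases, with the centres read off from Figure~\ref{fi:spec cases}, can also be verified exhaustively in exact rational arithmetic; the sketch below is our own illustration:

```python
from fractions import Fraction as F

def hits_square(a1, b1, a2, b2, c1, c2):
    """Does some n*(a1/b1, a2/b2) mod 1 lie in the open square
    S_{1/4}((c1, c2)) mod 1?  Exact arithmetic; n runs over a full period."""
    for n in range(b1 * b2):
        dx = (F(n * a1, b1) - c1) % 1
        dy = (F(n * a2, b2) - c2) % 1
        # distance to the centre, measured on the torus, strictly < 1/4
        if min(dx, 1 - dx) < F(1, 4) and min(dy, 1 - dy) < F(1, 4):
            return True
    return False

# the centres used above for (b1, b2) = (5,5), (6,3), (8,4)
assert not hits_square(1, 5, 2, 5, F(1, 2), F(1, 2))
assert not hits_square(1, 6, 2, 3, F(1, 12), F(1, 3))
assert not hits_square(3, 8, 1, 4, F(1, 2), F(1, 2))
```

For denominator pairs outside the set \eqref{eq:genrat excl} no such centre exists; e.g.\ \texttt{hits\_square(1, 5, 1, 3, F(1, 2), F(1, 2))} returns \texttt{True}.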
We have shown this implication just for the sake of completeness. The interesting
part of Theorem~\ref{thm:genrat} for our purpose is the converse implication. Its
proof is the content of the remainder of this section and of the following section.
\begin{definition}\label{def:L}
Let $g$ be a positive integer and $a_1,a_2$ be integers relatively prime to $g$.
Then we define the {\em lattice of multiples of $\bm{a}=(a_1,a_2)$ modulo $g$\/} as
\begin{equation*}
L_g(a_1,a_2) := \left\{ \bm{u}\in\mathbb{Z}^2 : n\bm{a} \equiv \bm{u}\pmod{g}\quad
\textnormal{for some}\ n \in \mathbb{N}\right\}.
\end{equation*}
\end{definition}
Alternatively \cite{Ro97}, $L_g(a_1,a_2)$ can be defined as the lattice generated by the vectors
$(0,g),(g,0)$ and $(a_1,a_2)$.
The lattices $L_g(a_1,a_2)$ will provide a convenient representation of the sets of integer
multiples of rational numbers modulo one, which we encountered in Theorem~\ref{thm:genrat}. For this purpose
we require
a version of the well-known Chinese Remainder Theorem for moduli that are not necessarily pairwise relatively prime.
\begin{theorem}[Generalized Chinese Remainder Theorem]\label{thm:gcrt}
Let $b_1,\dots,b_m$ be positive integers and $u_1,\dots,u_m$ be integers. Then there is an
integer $0\leq u<\textnormal{lcm}(b_1,\dots,b_m)$ with
\[
u \equiv u_i \mod{b_i},\quad 1\leq i\leq m,
\]
provided that
\[
u_i \equiv u_j \mod{\gcd(b_i,b_j)},\quad 1\leq i,j \leq m.
\]
\end{theorem}
\begin{proof}
See Knuth \cite[Exercise~4.3.2.3]{Kn98}.
\end{proof}
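The theorem is constructive. The following sketch (a standard pairwise-merging construction, not taken from \cite{Kn98}) computes the solution $u$, or reports incompatibility:

```python
from math import gcd

def crt(residues, moduli):
    """Solve u = r_i (mod b_i) for all i; returns u with
    0 <= u < lcm(b_1, ..., b_m), or None if the system is incompatible."""
    u, m = 0, 1
    for r, b in zip(residues, moduli):
        d = gcd(m, b)
        if (r - u) % d:
            return None          # compatibility condition violated
        # combine u' = u (mod m) and u' = r (mod b) into one congruence:
        # u' = u + m*t with t = (r-u)/d * (m/d)^{-1} (mod b/d)
        l = m // d * b           # lcm(m, b)
        t = ((r - u) // d * pow(m // d, -1, b // d)) % (b // d)
        u = (u + m * t) % l
        m = l
    return u
```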
\begin{lemma}\label{le:M to L}
Let $a_1,a_2$ be integers and $b_1,b_2$ be positive integers with $\gcd(a_k,b_k)=1$ for $k=1,2$
and $g:=\gcd(b_1,b_2)$.
Then
\begin{flalign*}
(i)& \quad \left\{ n(\tfrac{a_1}{b_1}, \tfrac{a_2}{b_2})\bmod 1 : n\in\mathbb{N}\right\}
= \left\{ (\tfrac{u_1}{b_1},\tfrac{u_2}{b_2}) : \bm{u}\in L_g(a_1,a_2), 0\leq u_k < b_k\right\} & \\
(ii)& \quad L_g(a_1,a_2) = \left\{ \bm{u} \in \mathbb{Z}^2 : a_1u_2 \equiv a_2u_1\pmod{g} \right\} &
\end{flalign*}
\end{lemma}
\begin{proof} We have
\begin{align*}
&\phantom{=} \left\{ n(\tfrac{a_1}{b_1}, \tfrac{a_2}{b_2})\bmod 1 : n\in\mathbb{N}\right\} \\
&= \left\{ (\tfrac{n a_1 \bmod b_1}{b_1}, \tfrac{n a_2\bmod b_2}{b_2}) : n\in\mathbb{N}\right\} \\
&= \left\{ (\tfrac{u_1}{b_1},\tfrac{u_2}{b_2}) : n\bm{a}\equiv \bm{u}\pmod{\bm{b}},
0\leq u_k<b_k,\ k=1,2,\quad \textnormal{for some}\ n \in \mathbb{N}\right\} \\
&= \left\{ (\tfrac{u_1}{b_1},\tfrac{u_2}{b_2}) : \bm{u}\in L_g(a_1,a_2), 0\leq u_k < b_k\right\}.
\end{align*}
The latter equality and assertion (ii) follow from Theorem~\ref{thm:gcrt}.
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[scale=.6]{fig3}
\caption{The lattice $L_5(2,1)$.}
\label{fi:example L}
\end{figure}
\begin{example}\label{ex:example L}
In Example~\ref{ex:cfinite rec} the $O(2^n)$ term yields $\xi_1=\theta_1/2\pi=\tfrac{7}{10}$
and $\xi_2=\theta_2/2\pi=\tfrac{1}{5}$. The corresponding lattice $L_5(7,1)=L_5(2,1)$ is displayed
in Figure~\ref{fi:example L}.
\end{example}
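Both descriptions of the lattice are easy to confirm computationally; the following sketch (illustration only) compares them on one period of $L_5(2,1)$, i.e.\ on the lattice points with coordinates in $\{0,\dots,g-1\}$:

```python
g = 5

# one period of the multiples of (7, 1) and of (2, 1) modulo g
m71 = {(n * 7 % g, n * 1 % g) for n in range(g)}
m21 = {(n * 2 % g, n * 1 % g) for n in range(g)}

# the congruence description from part (ii) of the lemma above:
# a1*u2 = a2*u1 (mod g) with (a1, a2) = (2, 1)
cong = {(u1, u2) for u1 in range(g) for u2 in range(g)
        if (2 * u2 - 1 * u1) % g == 0}

assert m71 == m21 == cong
```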
Let $\bm{a}=(a_1,a_2)$ and $\bm{b}=(b_1,b_2)$ be as in the assumptions of Theorem~\ref{thm:genrat},
but such that $\bm{b}$ is not in the set~\eqref{eq:genrat excl}, and put $g:=\gcd(b_1,b_2)$.
In the light of Lemma~\ref{le:M to L}, it is an immediate consequence of the periodicity property
\begin{equation}\label{eq:period}
L_g(a_1,a_2) = L_g(a_1,a_2) + g\mathbb{Z}^2
\end{equation}
that searching for a point $n(\tfrac{a_1}{b_1},\tfrac{a_2}{b_2})\bmod 1$ in a `modded' square $\mathcal{S}_{1/4}(\bm{c})\bmod 1$
amounts to looking for a point of the lattice $L_g(a_1,a_2)$ in the rectangle
$\mathcal{R}_{b_1/4,b_2/4}(b_1 c_1,b_2 c_2)$ with side lengths $b_1/2$, $b_2/2$.
We let $c_k$ absorb $b_k$ and write again $\bm{c}=(c_1,c_2)$ for the arbitrary centre $(b_1 c_1,b_2 c_2)$.
\begin{example}\label{ex:point in rect}
If we want to show that the $O(2^n)$ term in \eqref{eq:O(2^n)} oscillates, we are led to the problem
of finding a point of $L_5(2,1)$ in any rectangle $\mathcal{R}_{5/2,5/4}(\bm{c})$, $\bm{c}\in\mathbb{R}^2$.
\end{example}
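As a numerical illustration (not a proof), both the claim of the example and the existence of empty squares $\mathcal{S}_{5/4}(\bm{c})$ recalled below can be tested on a grid of centres; by the periodicity \eqref{eq:period}, centres in $[0,5)^2$ suffice. The Python sketch assumes the membership criterion $u_1\equiv 2u_2\pmod 5$ for $L_5(2,1)$ coming from part (ii) of Lemma~\ref{le:M to L}; the grid step $0.1$ is an arbitrary choice.

```python
from math import floor, ceil

def in_lattice(u1, u2):
    # membership in L_5(2, 1) via the criterion u1 == 2*u2 (mod 5)
    return (2 * u2 - u1) % 5 == 0

def rect_contains_point(c1, c2, h1, h2):
    # does the open rectangle (c1-h1, c1+h1) x (c2-h2, c2+h2) meet L_5(2,1)?
    for u2 in range(floor(c2 - h2) - 1, ceil(c2 + h2) + 2):
        for u1 in range(floor(c1 - h1) - 1, ceil(c1 + h1) + 2):
            if abs(u1 - c1) < h1 and abs(u2 - c2) < h2 and in_lattice(u1, u2):
                return True
    return False

# every rectangle R_{5/2, 5/4}(c) should contain a lattice point
all_rects_hit = all(rect_contains_point(0.1 * i, 0.1 * j, 2.5, 1.25)
                    for i in range(50) for j in range(50))
# negative control: some squares S_{5/4}(c) are empty (cf. Theorem thm:genrat)
empty_square_exists = not rect_contains_point(0.3, 1.5, 1.25, 1.25)
```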
If the numbers $b_1/g$ and $b_2/g$ are large, it is easy to find a point
of $L_g(a_1,a_2)$ in the rectangle, whereas $b_1=b_2=g$ is the most difficult case.
This is so because if we fix $a_1$, $a_2$ and $g$ and enlarge $b_1/g$ and $b_2/g$, the lattice
$L_g(a_1,a_2)$ remains invariant, while the rectangle becomes bigger.
At first glance, the problem seems to be easily reducible to the case of equal denominators $b_1=b_2=g$.
In Example~\ref{ex:point in rect}, if we could show that any square $\mathcal{S}_{5/4}(\bm{c})$ contains a point
of $L_5(2,1)$, then it would follow at once that every rectangle $\mathcal{R}_{5/2,5/4}(\bm{c})$ contains
a point of $L_5(2,1)$. But we have already seen (Theorem~\ref{thm:genrat}) that there are squares $\mathcal{S}_{5/4}(\bm{c})$
without points of $L_5(2,1)$. In general, the catch is that
even if $(a_1,a_2,b_1,b_2)$ satisfy the requirements of Theorem~\ref{thm:genrat}
and $(b_1,b_2)$ is not in the set \eqref{eq:genrat excl},
it may happen that $(a_1\bmod g,a_2\bmod g,g,g)$
violate the requirements of Theorem~\ref{thm:genrat} or that $(g,g)$ is in \eqref{eq:genrat excl}.
Therefore we choose a different approach for the case $b_1\neq b_2$.
For relatively prime $b_1$ and $b_2$ the lattice $L_g(a_1,a_2)$ equals $\mathbb{Z}^2$.
All rectangles $\mathcal{R}_{b_1/4,b_2/4}(\bm{c})$ with $\bm{c}\in\mathbb{R}^2$
have side lengths greater than one and therefore contain a point of $\mathbb{Z}^2$.
If $g=2$, then $a_1$ and $a_2$ must be odd,
hence
\[
L_g(a_1,a_2) = \left\{ \bm{u} \in \mathbb{Z}^2 : u_1\equiv u_2\pmod{2} \right\}.
\]
Since $b_1>4$ in this case, it is easy to see that this lattice contains a point of any rectangle
$\mathcal{R}_{b_1/4,b_2/4}(\bm{c})$.
From now on we assume $g \geq 3$.
The following proposition deals with the case $(b_1,b_2)=(2g,g)$.
Recall that $(b_1,b_2)=(4,2)$, $(6,3)$ and $(8,4)$ need not be considered, because
they are in the set~\eqref{eq:genrat excl}.
\begin{proposition}\label{pr:2g 2g}
Let $a_1,a_2,b_1,b_2$ be as in Theorem~\ref{thm:genrat}.
Suppose $g\geq 5$, $b_1=2g$ and $b_2=g$. Then for all $\bm{c}\in\mathbb{R}^2$
\[
L_g(a_1,a_2) \cap \mathcal{R}_{b_1/4,b_2/4}(\bm{c}) \neq \emptyset.
\]
\end{proposition}
\begin{proof}
Observe that by the periodicity property \eqref{eq:period} of $L_g(a_1,a_2)$
it suffices to find a point of the lattice in the set
\begin{equation}\label{eq:3g set}
\mathcal{R}_{b_1/4,b_2/4}(\bm{c}) + g\mathbb{Z}^2.
\end{equation}
Let $\bm{p}$ be the lower left corner of $\mathcal{R}_{b_1/4,b_2/4}(\bm{c})$.
We assume w.l.o.g.\ $0\leq p_1,p_2 < g$
and define $I:=\ ]p_2,p_2+\tfrac{1}{2}g[$.
Then \eqref{eq:3g set} contains the set
\begin{equation}\label{eq:3g strip}
\left([0,g[ \backslash \{p_1\}\right) \times I = \left([0,g[ \times I\right) \backslash \left(\{p_1\} \times I\right).
\end{equation}
The interval $I$ contains at least two integers, since its length is $\tfrac{1}{2}g>2$. Since $a_2$ is invertible
modulo $g$, there are at least two points of
$L_g(a_1,a_2)$ in $[0,g[\times I$ by part (ii) of Lemma~\ref{le:M to L}, and
at least one of them lies in \eqref{eq:3g strip}.
\end{proof}
Now we consider values of $b_1$ that are at least $3g$, which completes the case $b_1\neq b_2$
of Theorem~\ref{thm:genrat}.
\begin{proposition}\label{pr:3g}
Let $a_1,a_2,b_1,b_2$ be as in Theorem~\ref{thm:genrat}.
Suppose $g\geq 3$ and $b_1\geq 3g$.
Then for all $\bm{c}\in\mathbb{R}^2$
\[
L_g(a_1,a_2) \cap \mathcal{R}_{b_1/4,b_2/4}(\bm{c}) \neq \emptyset.
\]
\end{proposition}
\begin{proof}
It suffices to consider $b_1=3g$ and $b_2=g$.
Proceeding analogously to the proof of Proposition~\ref{pr:2g 2g},
we arrive at the set $[0,g[ \times I$ instead of \eqref{eq:3g strip}.
The result follows from part (ii) of Lemma~\ref{le:M to L} and $\tfrac{1}{2}g>1$.
\end{proof}
\section{The rational case with equal denominators}\label{se:edrat}
In order to finish the proof of Theorem~\ref{thm:genrat}, and thus the proof of Theorem~\ref{thm:xi0,xi1},
we will establish the following proposition.
\begin{proposition}\label{pr:g}
Let $a_1,a_2,b_1,b_2$ be as in Theorem~\ref{thm:genrat}.
Suppose $b_1=b_2=g\neq 5$.
Then for all $\bm{c}\in\mathbb{R}^2$
\[
L_g(a_1,a_2) \cap \mathcal{S}_{g/4}(\bm{c}) \neq \emptyset.
\]
\end{proposition}
If $L_g(a_1,a_2)$ contains one or two sufficiently short vectors, its points are dense enough
so that the square $\mathcal{S}_{g/4}(\bm{c})$ is populated by at least one lattice point.
This is the basic idea of our proof of Proposition~\ref{pr:g}.
Although there are algorithms \cite{Le94,Ro97} tailored to $L_g(a_1,a_2)$ for computing a reduced lattice basis,
we do not know of any specialized a priori bounds for the norm of the basis elements.
Therefore, we appeal to the standard bound.
\begin{definition} Let $\mathcal{K}$ be a subset of $\mathbb{R}^m$ and $\Lambda \subset \mathbb{R}^m$ be a lattice. Then
the {\em successive minima\/} of $\mathcal{K}$ w.r.t. $\Lambda$ are defined for $1\leq k \leq m$ by
\begin{equation*}
\lambda_k(\mathcal{K},\Lambda):=\inf \left\{\lambda >0 : \lambda \mathcal{K}\ \textnormal{contains}\ k\
\textnormal{linearly independent points of}\ \Lambda\right\}.
\end{equation*}
\end{definition}
In the following theorem, the term {\em body} denotes a set $\mathcal{K}\subset \mathbb{R}^m$ with non-empty interior such that
$\mathcal{K}$ is contained in the closure of its interior.
\begin{theorem}[Minkowski's Second Theorem]\label{thm:mink2}
If $\Lambda$ is an $m$-dimensional lattice in $\mathbb{R}^m$ and $\mathcal{K}\subset \mathbb{R}^m$
is a bounded zero-symmetric convex body with volume $V(\mathcal{K})$, then
\[
\lambda_1(\mathcal{K},\Lambda) \dotsm \lambda_m(\mathcal{K},\Lambda) V(\mathcal{K}) \leq 2^m \bm{d}(\Lambda).
\]
\end{theorem}
\begin{proof}
See Gruber and Lekkerkerker's monograph~\cite[Theorem~2.16.3]{Gr87}.
\end{proof}
From this theorem we will deduce that $L_g(a_1,a_2)$ must contain either two `short' linearly
independent vectors or one `very short' nonzero vector.
If the first case occurs, we will apply the following result of Bender~\cite{Be62}.
\begin{lemma}\label{le:bender}
Let $\{\bm{w}_1,\bm{w}_2\}$ be a basis of a lattice $\Lambda\subset\mathbb{R}^2$, and let $0<\vartheta<\pi$
be the angle between $\bm{w}_1$ and $\bm{w}_2$. Suppose further that
$\mathcal{C}\subset\mathbb{R}^2$ is a bounded convex set such that the quotient of its area and its perimeter
is greater than
\[
\tfrac{1}{2}\max\left(\|\bm{w}_1\|_2,\|\bm{w}_2\|_2 \sin\vartheta\right).
\]
Then $\mathcal{C}$ contains a point of $\Lambda$.
\end{lemma}
For the second case, where we find one vector of `very small' norm in $L_g(a_1,a_2)$, we could
not find an applicable result in the literature that would ensure a lattice point
in the square, so we provide one now.
\begin{lemma}\label{le:one short vector}
Let $\Lambda\subset\mathbb{R}^2$ be a lattice and $\bm{r}=(r_1,r_2)$ be a point of $\Lambda$ with
$\gcd(r_1,r_2)=1$ and $0<r_2\leq r_1$.
Let further $\mathcal{Q}$ be an open square with sides parallel to the axes and side length $A>0$.
If $\mathcal{Q}$ contains no point of $\Lambda$, then
\begin{equation*}
A \leq \max\left( r_1, \tfrac{\bm{d}(\Lambda)+2r_1r_2}{r_1+r_2}\right).
\end{equation*}
\end{lemma}
\begin{proof}
There is a family $\mathfrak{L}$ of parallel equidistant
lines with slope $s:=r_2/r_1$ such that $\Lambda\subset \bigcup\mathfrak{L}$ and
the perpendicular distance between two adjacent lines of $\mathfrak{L}$ is $\bm{d}(\Lambda)/\|\bm{r}\|_2$
\cite[Lemma~III.5]{Ca59}.
Then the vertical distance between two adjacent lines is $D:=\bm{d}(\Lambda)/r_1$.
We claim
\begin{multline}\label{eq:min max}
\min_{\bm{c}\in\mathbb{R}^2} \max_{\ell\in\mathfrak{L}}\ \left(\textnormal{horizontal length of}\
\ell\cap\mathcal{S}_{A/2}(\bm{c})\right) \\
=\begin{cases}
A, & D\leq A(1-s); \\
\frac{A(1+s)-D}{2s}, & A(1-s) \leq D \leq A(1+s); \\
0, & D \geq A(1+s).
\end{cases}
\end{multline}
If $D\leq A(1-s)$, then for each square $\mathcal{S}=\mathcal{S}_{A/2}(\bm{c})$ there is a line in $\mathfrak{L}$ that goes
through the left and the right edge of the square (see Figure~\ref{fi:slope}). This settles the first
case in the right hand side of \eqref{eq:min max}.
If $D$ is larger than $A(1+s)$, there
is a square that is not intersected by any line from $\mathfrak{L}$.
We are left with the intermediate case
$A(1-s) \leq D \leq A(1+s)$. To achieve the minimum in~\eqref{eq:min max}, we must certainly place $\mathcal{S}$
such that there is no line from $\mathfrak{L}$ in the parallelogram $\mathcal{P}(\mathcal{S})$ of Figure~\ref{fi:slope}.
But then there is always a line $\ell\in\mathfrak{L}$ that intersects $\mathcal{S}\backslash \mathcal{P}(\mathcal{S})$, say in the
upper triangle of $\mathcal{S}\backslash \mathcal{P}(\mathcal{S})$. If no line intersects the lower triangle of
$\mathcal{S}\backslash \mathcal{P}(\mathcal{S})$, we can make the maximum in~\eqref{eq:min max} smaller by pushing
$\mathcal{S}$ downwards. The smallest possible value of the maximum is achieved as soon as the intersections of $\mathcal{S}$ with
$\ell$ and the line from $\mathfrak{L}$ just below $\ell$ have equal length. It is easy to see that these intersections both
have horizontal length $(A(1+s)-D)/2s$.
Now that \eqref{eq:min max} is established, let
$\mathcal{Q}$ be an open square with sides parallel to the axes and side length
\begin{equation}\label{eq:estimateA}
A > \max\left( r_1, \tfrac{\bm{d}(\Lambda)+2r_1r_2}{r_1+r_2}\right).
\end{equation}
Our goal is to show $\mathcal{Q}\cap \Lambda\neq \emptyset$.
If the first case in the right hand side of~\eqref{eq:min max} occurs, we are well off:
Since $A > r_1$, the line segment in $\mathcal{Q}\cap\bigcup\mathfrak{L}$
of horizontal length $A$ must contain a point of $\Lambda$.
The third case in~\eqref{eq:min max} cannot happen, since it would imply $\bm{d}(\Lambda)\geq A(r_1+r_2)$,
contradicting \eqref{eq:estimateA}.
As for the second case, $A > \tfrac{\bm{d}(\Lambda)+2r_1r_2}{r_1+r_2}$ implies
\[
r_1 < \frac{A(r_1+r_2)-\bm{d}(\Lambda)}{2r_2} = \frac{A(1+s)-D}{2s},
\]
hence $\mathcal{Q}\cap \Lambda\neq \emptyset$.
\end{proof}
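The bound of the lemma can be checked numerically. In the sketch below (a sanity check, not part of the proof) we take $\Lambda=L_5(2,1)$, which contains $\bm{r}=(2,1)$ with $\gcd(r_1,r_2)=1$ and has $\bm{d}(\Lambda)=5$; the lemma then predicts that every open axis-parallel square of side length greater than $\max(2,\tfrac{5+4}{3})=3$ meets $\Lambda$. We test side length $3.2$ on a grid of centres in $[0,5)^2$, which suffices by periodicity.

```python
def square_contains_point(c1, c2, A, member):
    # does the open square of side length A centred at (c1, c2) meet the lattice?
    h = A / 2
    rng1 = range(int(c1 - h) - 2, int(c1 + h) + 3)
    rng2 = range(int(c2 - h) - 2, int(c2 + h) + 3)
    return any(abs(u1 - c1) < h and abs(u2 - c2) < h and member(u1, u2)
               for u1 in rng1 for u2 in rng2)

# Lambda = L_5(2, 1): membership criterion u1 == 2*u2 (mod 5)
member = lambda u1, u2: (2 * u2 - u1) % 5 == 0
# the lemma's bound max(r1, (d + 2*r1*r2)/(r1 + r2)) = max(2, 3) = 3,
# so every open square of side 3.2 > 3 must contain a lattice point
bound_holds = all(square_contains_point(0.1 * i, 0.1 * j, 3.2, member)
                  for i in range(50) for j in range(50))
```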
\begin{figure}[h]
\centering
\includegraphics[scale=.6]{fig6}
\caption{The square $\mathcal{S}$ (shaded) and the parallelogram $\mathcal{P}(\mathcal{S})$ (hatched), which
lies between two lines of slope $s$ that go through the upper right and the lower left corner of $\mathcal{S}$,
respectively.}\label{fi:slope}
\end{figure}
\begin{proof}[Proof of Proposition~\ref{pr:g}]
We begin this proof, which is the core of the proof of Theorem~\ref{thm:main},
by settling the cases where $g$ is at most $9$. The only numbers to consider are $g=7,8,9$,
since for smaller $g\neq 5$ there are no $a_1,a_2$ that satisfy the requirements of Theorem~\ref{thm:genrat}
(and hence of Proposition~\ref{pr:g}).
First let $g=7$. If we have proved the desired result for a pair $(a_1,a_2)$, we need not
consider the five pairs
\[
(a_2,a_1), (g-a_1,a_2), (a_1,g-a_2),
(g-a_2,a_1) \quad \text{and} \quad (a_2,g-a_1)
\]
any more by Lemma~\ref{le:symm}. It is readily seen that
under our restrictions on $a_1$, $a_2$ all lattices $L_7(a_1,a_2)$
are equal to
$L_7(1,3)$ modulo these symmetries. Similarly, for $g=8$ and $g=9$
it suffices to consider $L_8(3,1)$ and $L_9(2,1)$, respectively.
In all three cases it is easy to verify the desired result.
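Indeed, the verification for $g=7,8,9$ is a routine finite check. The following Python sketch (not part of the formal proof) confirms on a grid of centres that every open square of side $g/2$ contains a point of the representative lattices, using the membership criterion of part (ii) of Lemma~\ref{le:M to L} and periodicity to restrict centres to $[0,g)^2$; the grid step is an arbitrary choice.

```python
def square_contains_point(c1, c2, A, member):
    # does the open square of side length A centred at (c1, c2) meet the lattice?
    h = A / 2
    rng1 = range(int(c1 - h) - 2, int(c1 + h) + 3)
    rng2 = range(int(c2 - h) - 2, int(c2 + h) + 3)
    return any(abs(u1 - c1) < h and abs(u2 - c2) < h and member(u1, u2)
               for u1 in rng1 for u2 in rng2)

def case_ok(a1, a2, g, step=0.25):
    # every open square S_{g/4}(c), of side length g/2, should meet
    # L_g(a1, a2); by periodicity, centres in [0, g)^2 suffice
    member = lambda u1, u2: (a1 * u2 - a2 * u1) % g == 0
    n = round(g / step)
    return all(square_contains_point(step * i, step * j, g / 2, member)
               for i in range(n) for j in range(n))

small_cases_ok = case_ok(1, 3, 7) and case_ok(3, 1, 8) and case_ok(2, 1, 9)
```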
From now on we assume $g\geq 10$.
Put $\Lambda:=L_g(a_1,a_2)$ and let
\[
\mathcal{B} := \left\{ \bm{x} \in\mathbb{R}^2 : \|\bm{x}\|_2 \leq 1\right\}
\]
be the closed unit disc.
It is not difficult to see \cite[Section~2]{Le94} that the determinant of $\Lambda$ is $\bm{d}(\Lambda)=g$.
Then Theorem~\ref{thm:mink2} shows
\[
\lambda_1(\mathcal{B},\Lambda)\lambda_2(\mathcal{B},\Lambda)\pi \leq 4g.
\]
First suppose $\lambda_2(\mathcal{B},\Lambda)<g/4$.
The quotient of the area of $\mathcal{S}_{g/4}(\bm{c})$
and its perimeter is $\tfrac{g^2}{4}/2g=g/8$, hence we can apply Lemma~\ref{le:bender}.
If, on the other hand, $\lambda_2(\mathcal{B},\Lambda)\geq g/4$,
then we have $\lambda_1(\mathcal{B},\Lambda)\leq \tfrac{16}{\pi}$, which provides us
with a nonzero point $\bm{r}\in\Lambda$ with $\|\bm{r}\|_2\leq \tfrac{16}{\pi}$.
W.l.o.g.\ assume that $\bm{r}$ satisfies $\gcd(r_1,r_2)=1$ and $0<r_2\leq r_1$.
According to Lemma~\ref{le:one short vector}, it suffices to show
\[
\frac{g}{2} > \frac{g+2r_1r_2}{r_1+r_2},
\]
i.e.
\begin{equation}\label{eq:r1 r2}
4r_1r_2 < g(r_1+r_2-2).
\end{equation}
This inequality is satisfied for $g\geq 10$ and
\[
\bm{r} \in \left\{ (2,1),(3,1),(4,1),(2,2),(3,2),(4,2),(3,3),(4,3)\right\},
\]
which are all values of $\bm{r}$ in question. Observe that $a_1\not\equiv a_2\pmod{g}$ implies
$(1,1)\notin\Lambda$.
\end{proof}
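The concluding inequality check in the proof is a small finite computation; as a sanity check (outside the formal argument), the following lines confirm \eqref{eq:r1 r2} for $g=10$ and all listed values of $\bm{r}$, together with $r_1+r_2>2$, which shows that the right-hand side of \eqref{eq:r1 r2} only grows with $g\geq 10$.

```python
# candidate short vectors r with ||r||_2 <= 16/pi listed in the proof
candidates = [(2, 1), (3, 1), (4, 1), (2, 2), (3, 2), (4, 2), (3, 3), (4, 3)]

# the inequality 4*r1*r2 < g*(r1 + r2 - 2) for the critical value g = 10
inequality_ok = all(4 * r1 * r2 < 10 * (r1 + r2 - 2) for r1, r2 in candidates)
# r1 + r2 > 2 ensures the right-hand side is increasing in g
monotone_ok = all(r1 + r2 > 2 for r1, r2 in candidates)
```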
This completes the proof of Theorems~\ref{thm:genrat} and~\ref{thm:xi0,xi1}.
We remark that the successive minima approach from the preceding proof can be applied to the case
of distinct denominators $b_1$, $b_2$, too. However, the number of special
cases that have to be checked separately is much larger than for equal denominators.
\section{Completion of the proof of the main theorem}\label{se:compl}
If $\xi_2$ from Theorem~\ref{thm:xi0,xi1} equals $\tfrac{1}{2}$, which corresponds to $\arg{\alpha_2}=\theta_2=\pi$
and thus to a real negative dominating characteristic root $\alpha_2$ in
Theorem~\ref{thm:main}, then
the squares centred at $\bm{c}=(c_1,\tfrac{1}{4})$ or $\bm{c}=(c_1,\tfrac{3}{4})$, $c_1\in\mathbb{R}$, do not contain
any point $n\bm{\xi}\bmod 1$.
But in this case we need not consider all squares:
\[
w_2 \sin(n \pi+\varphi_2) = (-1)^n w_2 \sin{\varphi_2},
\]
hence $\varphi_2$ can be absorbed in $w_2$, and
we retain full generality if we assign a convenient value to $\varphi_2$.
\begin{proposition}\label{pr:eta real}
Let $\xi_1 \in \left]0,1\right[\backslash \{\tfrac{1}{2}\}$. Then
for all $c_1 \in \mathbb{R}$ there are $\epsilon > 0$ and $c_2 \in \mathbb{R}$ such that
for infinitely many $n\in\mathbb{N}$
\[
n(\xi_1,\tfrac{1}{2})\bmod 1\ \in\ \mathcal{S}_{1/4-\epsilon}(\bm{c}) \bmod 1.
\]
\end{proposition}
\begin{proof}
Let $c_1$ be an arbitrary real number with $0\leq c_1<1$.
It suffices to show that there is $n\in\mathbb{N}$ with
\[
n\xi_1\bmod 1\ \in\ \left]c_1 - \tfrac{1}{4}, c_1 + \tfrac{1}{4}\right[ \bmod 1.
\]
If $\xi_1$ is irrational, this follows immediately from Theorem~\ref{thm:kronecker}.
Now let $\xi_1$ be a rational number with denominator $b_1 > 0$.
The desired result follows from $1/b_1 < \tfrac{1}{2}$ and
\[
\left\{ n\xi_1\bmod 1 : n\in\mathbb{N} \right\} = \left\{ 0,\tfrac{1}{b_1},\dots,\tfrac{b_1-1}{b_1}\right\}.
\]
\end{proof}
Applying Proposition~\ref{pr:eta real} with $\xi_1=\theta_1/2\pi$, where $\theta_k=\arg{\alpha_k}$ as usual,
settles the case of Theorem~\ref{thm:main2} where $\alpha_2$ is a negative
real number. Clearly, the same argument applies if $\alpha_1$ is negative real and $\alpha_2$ is complex.
Finally let us see what happens if the pair of denominators of $(\theta_1/2 \pi,\theta_2/2 \pi)$ in
Theorem~\ref{thm:main2} is $(5,5)$, $(6,3)$ or $(8,4)$.
According to Theorem~\ref{thm:genrat}, our argument with lattice points in squares fails for these
values. Fortunately, it is straightforward to show directly that the purely periodic
sequences $(b(n))$ arising from these values oscillate.
Once again we can appeal to the symmetries noted in Lemma~\ref{le:symm}.
Indeed, swapping $\theta_1$ and $\theta_2$ does no harm,
and the sign of $\theta_k$ can be absorbed in $w_1$ and $\varphi_1$.
It turns out that for each of the three pairs of denominators it suffices to consider one pair of numerators,
namely $(a_1,a_2)=(2,2)$ for the denominators $(6,3)$ and $(8,4)$,
and $(a_1,a_2)=(4,2)$ for the denominators $(5,5)$.
In the following proposition, the cases (i), (ii) and (iii) correspond
to the pairs of denominators $(6,3)$, $(8,4)$ and $(5,5)$, respectively.
\begin{proposition}
Let $w_1$, $w_2$ be nonzero real numbers and $\varphi_1$, $\varphi_2$ be real numbers. Then the sequence
defined by
\[
b(n) := w_1 \sin(n\theta_1+\varphi_1) + w_2 \sin(n\theta_2+\varphi_2), \quad n\geq 0,
\]
has a positive and a negative entry for each of the following values of $(\theta_1,\theta_2)$.
\begin{flalign*}
(i) & \quad \theta_1=\tfrac{1}{3}\pi, \theta_2=\tfrac{2}{3}\pi & \\
(ii) & \quad \theta_1=\tfrac{1}{4}\pi, \theta_2=\tfrac{1}{2}\pi & \\
(iii) & \quad \theta_1=\tfrac{4}{5}\pi, \theta_2=\tfrac{2}{5}\pi &
\end{flalign*}
\end{proposition}
\begin{proof}
It suffices to consider $w_1=1$.
We set $s_i:=\sin\varphi_i$ and $c_i:=\cos\varphi_i$ for $i=1,2$,
suppose $b(n)\geq 0$ for all $n\geq 0$ and derive a contradiction.
(i) From
\begin{align*}
b(0)+b(1)+b(5) &=2s_1, \\
b(2)+b(3)+b(4) &=-2s_1, \\
b(0)+b(3) &=2s_2w_2, \\
b(1)+b(2)+b(4)+b(5) &=-2s_2w_2
\end{align*}
we deduce $s_1=s_2w_2=0$, so $s_2=0$ or $w_2=0$. If $w_2=0$, then
\begin{align*}
b(1)& = \tfrac{1}{2}\sqrt{3}c_1, \\
b(4)& = -\tfrac{1}{2}\sqrt{3}c_1
\end{align*}
implies $c_1=0$, which contradicts $c_1^2+s_1^2=1$.
If, on the other hand, $s_2=0$, then $c_1=0$ follows from
\begin{align*}
b(1)+b(2)&=\sqrt{3}c_1, \\
b(4)+b(5)&=-\sqrt{3}c_1.
\end{align*}
(ii) Now
\begin{align*}
b(1)+b(3) &=\sqrt{2}c_1, \\
b(5)+b(7) &=-\sqrt{2}c_1, \\
b(1)+b(7) &=\sqrt{2}s_1, \\
b(3)+b(5) &=-\sqrt{2}s_1
\end{align*}
shows $c_1=s_1=0$, again contradicting $c_1^2+s_1^2=1$.
(iii) Since
\begin{align*}
b(0) &=s_1+s_2w_2, \\
b(1)+b(2)+b(3)+b(4) &=-s_1-s_2w_2,
\end{align*}
we have $s_2w_2=-s_1$. Then we obtain
\begin{align*}
b(1)+b(4) &=-\sqrt{5}s_1, \\
b(2)+b(3) &=\sqrt{5}s_1,
\end{align*}
hence $s_1=0$ and $c_1=\pm 1$. Therefore $w_2=0$ or $s_2=0$.
If $w_2=0$, then the values
\[
b(1) = \tfrac{1}{2} \sqrt{\tfrac{1}{2}(5-\sqrt{5})} c_1
\]
and
\[
b(2) = -\tfrac{1}{2} \sqrt{\tfrac{1}{2}(5+\sqrt{5})} c_1
\]
have opposite signs. It remains to consider the case $s_2=0$.
For each of the four possible values of $(c_1,c_2)=(\pm 1,\pm 1)$ the inequalities
$b(0)\geq 0,\dots,b(4)\geq 0$ form a linear system of inequalities in $w_2$. It is easy
to check that none of these four systems is solvable.
\end{proof}
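Since the sequences $(b(n))$ in the proposition are purely periodic, their oscillation can also be observed numerically. The following Python sketch (a sampling experiment, not a proof; the sample size and the seed are arbitrary choices) evaluates one full period for many random parameter values with $w_1=1$ and checks that both signs occur.

```python
import math, random

def oscillates(theta1, theta2, w1, w2, phi1, phi2, period):
    # evaluate b(n) over one full period of the purely periodic sequence
    vals = [w1 * math.sin(n * theta1 + phi1) + w2 * math.sin(n * theta2 + phi2)
            for n in range(period)]
    return min(vals) < 0 < max(vals)

cases = [(math.pi / 3, 2 * math.pi / 3, 6),      # case (i)
         (math.pi / 4, math.pi / 2, 8),          # case (ii)
         (4 * math.pi / 5, 2 * math.pi / 5, 5)]  # case (iii)

random.seed(0)  # fixed seed: the experiment is deterministic
all_oscillate = all(
    oscillates(t1, t2, 1.0, random.uniform(-5, 5),
               random.uniform(0, 2 * math.pi), random.uniform(0, 2 * math.pi), p)
    for t1, t2, p in cases for _ in range(200))
```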
The proof of Theorem~\ref{thm:main2} is complete, hence Theorem~\ref{thm:main} is established.
\section{A positive real root}\label{se:pos real}
If one of the dominating characteristic roots $\alpha_1,\dots,\alpha_m$ is real positive, Conjecture~\ref{cj:equ abs}
is not applicable. Consider the sequence defined by
\begin{equation}\label{eq:a pos real}
a(n) := \sum_{k=1}^m w_k\sin(n\theta_k+\varphi_k)+1+o(1), \quad n\geq 0,
\end{equation}
where $\theta_1,\dots,\theta_m$, $w_1,\dots,w_m$ are nonzero real numbers and $\varphi_1,\dots,\varphi_m$ are real numbers.
Here and throughout this section we assume that
the coefficient of the real positive root is positive (and
thus w.l.o.g.\ equals one). Analogous
considerations apply for a negative coefficient.
The behaviour of $(a(n))$
depends on how $1$ compares to
\begin{equation}\label{eq:s real root}
S:=-\inf_{n\geq 0} \ \sum_{k=1}^m w_k\sin(n\theta_k+\varphi_k)\ \in\ \ ]-W,W],
\end{equation}
where $W=\sum_{k=1}^m|w_k|$.
The sequence $(a(n))$ is positive for large $n$ if $S<1$ (in particular, if $W<1$),
and it oscillates if $S>1$.
If $S=1$, the behaviour of $(a(n))$ depends on how well
$\sum_{k=1}^m w_k\sin(n\theta_k+\varphi_k)$ approximates $-1$ and possibly on the $o(1)$ term.
The preceding discussion gives a handy criterion only for $W<1$, which was already
obtained by Burke and Webb \cite{Bu81}.
For $W\geq 1$ we confine ourselves to showing
how $(a(n))$ behaves for almost all values of the parameters $\theta_k$, $\varphi_k$ and $w_k$.
\begin{lemma}\label{le:almost xi}
Let $\bm{\alpha}\in\mathbb{R}^m$ and let $(\psi(n))$ be a sequence
of positive real numbers such that $\sum_{n\geq 0} \psi(n)^m$ converges.
Then the set of inequalities
\[
(n\xi_k-\alpha_k)\bmod 1 < \psi(n), \quad 1\leq k\leq m
\]
has infinitely many solutions $n\in\mathbb{N}$ for almost no $\bm{\xi}\in\mathbb{R}^m$.
\end{lemma}
\begin{proof}
See Cassels \cite[Lemma~VII.2.1]{Ca57}.
\end{proof}
In order to apply the following theorem we require the dominating characteristic roots to be simple.
This assumption makes the remainder term $r(n)$ go to zero exponentially. Parts (i) and (iii)
hold for multiple roots as well, since they only require $r(n)=o(1)$.
Our proof of part (ii), however, breaks down for $m=1$ in case of a multiple root,
because then we can ensure only $r(n)=O(n^{-1})$ and this leads to a divergent series in Lemma~\ref{le:almost xi}.
\begin{theorem}\label{thm:almost no theta}
Let $w_1,\dots,w_m$ be nonzero real numbers with $W:=\sum_{k=1}^m|w_k|$,
$\varphi_1,\dots,\varphi_m$ be real numbers and $(r(n))$ be a real sequence
with $r(n)=O(\omega^n)$ for some $0<\omega<1$.
\begin{itemize}
\item[(i)] If $W<1$, then for all $\bm{\theta}\in\mathbb{R}^m$ the sequence $(a(n))$ defined by
\[
a(n) := \sum_{k=1}^m w_k\sin(n\theta_k+\varphi_k)+1+r(n)
\]
is positive for large $n$.
\item[(ii)] If $W=1$, then for almost all $\bm{\theta}\in\mathbb{R}^m$ the sequence $(a(n))$ is positive for large $n$.
\item[(iii)] If $W>1$, then $(a(n))$ oscillates for almost all $\bm{\theta}\in\mathbb{R}^m$.
\end{itemize}
\end{theorem}
\begin{proof}
(i) is clear.
(iii) follows from the $m$-dimensional version of Theorem~\ref{thm:kronecker} (see Section~\ref{se:conclusion}), because $\theta_1/2\pi,\dots,\theta_m/2\pi,1$ are linearly
independent over the rationals for almost all $\bm{\theta}$. We proceed to prove (ii).
Suppose $a(n)\leq 0$ for all $n$ in an infinite set $I\subseteq \mathbb{N}$. To make $a(n)$ non-positive, $\sin(n\theta_k+\varphi_k)$
has to be very close to $-1$ if $w_k>0$ and very close to $1$ if $w_k<0$. To be precise, we must have
\[
\lim_{\substack{n\to\infty \\ n\in I}}f(n) = 0
\]
for
\[
f(n) := (f_1(n),\dots,f_m(n))
\]
with
\[
f_k(n) :=
\begin{cases}
(n\theta_k +\varphi_k - \tfrac{1}{2}\pi)\bmod 2\pi, & w_k < 0; \\
(n\theta_k + \varphi_k - \tfrac{3}{2}\pi)\bmod 2\pi, & w_k > 0.
\end{cases}
\]
By Taylor expansion, we obtain
\begin{align*}
\sum_{k=1}^m w_k\sin(n\theta_k+\varphi_k)+1
&= -\sum_{k=1}^m|w_k| + \frac{1}{2}\sum_{k=1}^m|w_k|f_k(n)^2 + O(\sum_{k=1}^m f_k(n)^4) + 1 \\
&= \frac{1}{2}\sum_{k=1}^m|w_k|f_k(n)^2 + O(\sum_{k=1}^m f_k(n)^4) \quad \text{as}\ n\to\infty\ \text{in}\ I.
\end{align*}
Removing finitely many elements from $I$ if necessary, we thus have
\[
\sum_{k=1}^m w_k\sin(n\theta_k+\varphi_k)+1 > \frac{w}{3}\sum_{k=1}^m f_k(n)^2, \quad n\in I,
\]
where $w:=\min_{1\leq k\leq m}|w_k|>0$. Since $a(n)\leq 0$ for $n\in I$, this implies
\[
\sum_{k=1}^m f_k(n)^2 < -\tfrac{3}{w}r(n) = O(\omega^n), \quad n\in I,
\]
hence for $1\leq k\leq m$
\[
f_k(n) = O(\omega^{n/2})\quad \text{as} \ n\to\infty \ \text{in}\ I.
\]
According to Lemma~\ref{le:almost xi} this holds for almost no $\bm{\theta}$.
\end{proof}
Finer questions may be asked about the sets of measure zero alluded to in Theorem~\ref{thm:almost no theta}.
As for part (ii) of the theorem, we note that there are $\varphi_1,\dots,\varphi_m$,
$r(n)$ and infinitely many $\bm{\theta}$ such that $(a(n))$ oscillates for all nonzero $w_1,\dots,w_m$ with $W=1$.
To see this, define
\[
\varphi_k :=
\begin{cases}
\tfrac{1}{2}\pi, & w_k < 0; \\
\tfrac{3}{2}\pi, & w_k > 0,
\end{cases}
\]
let $\theta_1,\dots,\theta_m$ be real numbers such that $\theta_k/2\pi$ is rational for $1\leq k\leq m$,
and $r(n):=(-\omega)^{n+1}$ for some $0<\omega<1$. Then $a(n)\geq (-\omega)^{n+1}=\omega^{n+1}>0$
for odd $n$, and $a(n)=-\omega^{n+1}<0$ if $n$ is two times a common multiple of the denominators of $\theta_1/2\pi,\dots,\theta_m/2\pi$.
The preceding example is a special case of the following proposition, which completely
describes the behaviour of $(a(n))$ under the assumptions of part (ii) of Theorem~\ref{thm:almost no theta}
and the additional constraint $\theta_k/\pi\in\mathbb{Q}$ for $1\leq k\leq m$.
\begin{proposition}\label{pr:theta rat}
Let $\theta_1,\dots,\theta_m$ be real numbers such that $\theta_k/2\pi$
is a rational number $a_k/b_k$ for $1\leq k\leq m$, let $\varphi_1,\dots,\varphi_m$ be real numbers,
let $w_1,\dots,w_m$ be nonzero real numbers with $\sum_{k=1}^m |w_k|=1$ and define
\[
a(n) := \sum_{k=1}^m w_k\sin(n\theta_k+\varphi_k)+1 + o(1), \quad n\geq 0.
\]
\begin{itemize}
\item[(i)] If there is a $k$ such that $\varphi_k/\pi$ is irrational, then $(a(n))$ is positive for large $n$.
\item[(ii)] Suppose that $\varphi_k/2\pi$ is a rational number $c_k/d_k$ for $1\leq k\leq m$.
If for all $1\leq k,l\leq m$
\begin{equation}\label{eq:theta rat congr}
b_k(A_kd_k - 4c_k) \equiv b_l(A_ld_l - 4c_l) \pmod{4\gcd(d_kb_k,d_lb_l)}
\end{equation}
with
\[
A_k :=
\begin{cases}
1, & w_k < 0; \\
3, & w_k > 0,
\end{cases}
\]
then there are infinitely many $n$ with $b(n)=0$, where
\[
b(n) := \sum_{k=1}^m w_k\sin(n\theta_k+\varphi_k) + 1 \geq 0,
\]
and the behaviour of $(a(n))$ depends in an obvious way on the sign of the $o(1)$ term
for these $n$.
If there are $k$, $l$ such that \eqref{eq:theta rat congr} does not hold, then $(a(n))$ is positive for large $n$.
\end{itemize}
\end{proposition}
\begin{proof}
The purely periodic sequence $(b(n))$ satisfies $b(n)\geq 0$ for all $n\geq 0$. If none of its finitely many values are zero, then
$(a(n))$ is positive for large $n$. We have $b(n)=0$ if and only if
$\sin(n\theta_k+\varphi_k)$ equals $1$ for the $k$'s with $w_k<0$ and
$-1$ for the $k$'s with $w_k>0$, i.e.
\[
n\theta_k + \varphi_k \equiv \tfrac{1}{2}A_k\pi \pmod {2\pi}, \quad 1\leq k\leq m,
\]
which is equivalent to
\[
n\tfrac{a_k}{b_k} + \tfrac{\varphi_k}{2\pi} \equiv \tfrac{1}{4}A_k \pmod{1}, \quad 1\leq k\leq m.
\]
Clearly, this cannot hold if one of the $\varphi_k/\pi$ is irrational.
Under the assumption of part (ii), we are led to the system of congruences
\[
4d_ka_k n \equiv b_k(A_kd_k - 4c_k) \pmod{4d_kb_k}, \quad 1\leq k\leq m.
\]
Now the result follows from Theorem~\ref{thm:gcrt}.
\end{proof}
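The passage from the congruence modulo $1$ to the integer congruences can be double-checked with exact rational arithmetic. The sketch below (a sanity check with arbitrarily chosen sample parameters $(a_k,b_k,c_k,d_k,A_k)$) compares the two solution sets over an initial range of $n$.

```python
from fractions import Fraction

def direct_set(a, b, c, d, A, N):
    # n with n*a/b + c/d == A/4 (mod 1), in exact rational arithmetic
    return {n for n in range(N)
            if (Fraction(n * a, b) + Fraction(c, d) - Fraction(A, 4)) % 1 == 0}

def congruence_set(a, b, c, d, A, N):
    # n with 4*d*a*n == b*(A*d - 4*c) (mod 4*d*b)
    return {n for n in range(N)
            if (4 * d * a * n - b * (A * d - 4 * c)) % (4 * d * b) == 0}

# arbitrarily chosen sample parameters (a, b, c, d, A)
samples = [(1, 3, 1, 2, 3), (2, 5, 3, 4, 1), (3, 8, 1, 6, 3)]
reduction_ok = all(direct_set(*s, 200) == congruence_set(*s, 200)
                   for s in samples)
```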
\section{Conclusion and future directions}\label{se:conclusion}
In order to extend our approach to Conjecture~\ref{cj:equ abs} to $m$ dominating characteristic roots,
we need to show that infinitely many $n(\xi_1,\dots,\xi_m)\bmod 1$ lie in any given $m$-dimensional hypercube
(modulo one) with side length $\tfrac{1}{2}-\epsilon$.
Theorem~\ref{thm:kronecker} generalizes in the following way \cite[Theorem~III.5.IV]{Ca57}:
The points $n\bm{\xi}\bmod 1$ lie dense in the set of all
$\bm{x}\in\mathbb{R}^m$ that satisfy $\langle \bm{u},\bm{x} \rangle\in\mathbb{Z}$ for all
integer vectors $\bm{u}$ with $\langle \bm{u},\bm{\xi} \rangle\in\mathbb{Z}$.
Again the case of rational $\xi_1=a_1/b_1,\dots,\xi_m=a_m/b_m$ with equal denominators
$b_1=\dots=b_m=g$ will be the crux of the proof.
This case seems to become more and more difficult for fixed denominator $g$ as $m$ increases, since the set
\begin{equation}\label{eq:set m}
\left\{n(\tfrac{a_1}{g},\dots,\tfrac{a_m}{g})\bmod 1 : n\in\mathbb{N}\right\}
\end{equation}
has $g$ elements for all $m$, whereas the volume of the hypercube
is $(\tfrac{1}{2}-\epsilon)^m$.
Theorem~\ref{thm:mink2} is certainly a valuable tool.
Hadwiger \cite{Ha70} has extended Bender's two-dimensional result (Lemma~\ref{le:bender})
that we used in the proof of Proposition~\ref{pr:g}
to arbitrary dimension $m$. A significant extension of Lemma~\ref{le:one short vector} is still needed.
Anyway it is conceivable that the exceptional rational values $a_1/b_1,\dots,a_m/b_m$
for which the hypercube might contain no point of \eqref{eq:set m} become unmanageable
as $m$ increases (cf.\ Section~\ref{se:compl}).
Our results on a positive real characteristic root leave ample room for refinement. For instance,
one could try to extend part (ii) of Theorem~\ref{thm:almost no theta} to the case of a multiple root
or to continue the discussion begun in Proposition~\ref{pr:theta rat} by relaxing the
requirement that all $\theta_k/\pi$ be rational.
The Skolem--Mahler--Lech Theorem \cite{Ev03} describes
the structure of the zero set $\{n:a(n)=0\}$ of a recurrence sequence.
It is the union of a finite set and finitely many arithmetic progressions.
There might be an analogue of this result for the set $\{n:a(n)>0\}$.
Finally, we have excluded algorithmics so far. We do not know whether the positivity of recurrence sequences
is a decidable problem. Proving Conjecture~\ref{cj:equ abs} and
giving an effective criterion instead of the metric Theorem~\ref{thm:almost no theta}
would lead to a decision procedure.
\section{Introduction}\label{sec:intro}
\subsection{Backgrounds}
The Alday-Gaiotto-Tachikawa (AGT) correspondence conjectures an intriguing relation between conformal field theories in two dimensions and $\mathcal{N}=2$ supersymmetric gauge theories in four dimensions. It was originally conjectured in \cite{AGT} that conformal blocks of the Liouville field theory coincide with the Nekrasov partition function of $\mathcal{N}=2$ supersymmetric gauge theory of gauge group $SU(2)$ in four dimensions. Since then, this correspondence has been intensively studied to verify the details and to extend with great generalities. See \cite{LF,T1,T2} and references therein for reviews and recent developments of the AGT correspondence.
For $SU(2)$ theories (with or without matter) on $\mathbb{C}^2$, it was conjectured by Gaiotto \cite{G} that the Nekrasov partition function equals the norm of the Whittaker vector in the corresponding Verma module of the Virasoro algebra, which is often called the ``Gaiotto vector'' in the literature. This is a special example of the AGT correspondence because the Gaiotto vector can be realised as the irregular limit of a four-punctured conformal block. The relation between Nekrasov partition functions and Gaiotto vectors has been generalised in several ways, and in particular, it is proven \cite{N} that the Nekrasov partition function $Z_{\text{Nek}}^{\mathcal{G}\text{, pure}}$ of the pure theory of any simply-laced gauge group $\mathcal{G}$ coincides with the norm of the Gaiotto vector $\ket{G_{\mathfrak{g}}}$ for the corresponding $\mathcal{W}(\mathfrak{g})$-algebra.
In a completely different context, the notion of ``Airy structures'' was introduced by Kontsevich and Soibelman \cite{KS} (see also \cite{ABCD}) as an algebraic reformulation (and generalisation) of the Chekhov-Eynard-Orantin (CEO) topological recursion \cite{CEO,EO,EO2}. In short, an Airy structure is a collection of differential operators $\{H_i\}_{i\in\mathbb{Z}_{>0}}$ with certain properties, and there exists a unique formal power series solution $Z_{\text{Airy}}$ satisfying differential equations $H_i Z_{\text{Airy}}=0$ from which $Z_{\text{Airy}}$ is recursively determined. Shortly after, Borot et al.\ \cite{HAS} found a systematic recipe to construct an Airy structure from a \emph{twisted} module of the $\mathcal{W}(\mathfrak{gl}_r)$-algebra\footnote{They found Airy structures for a $\mathcal{W}(\mathfrak{g})$-algebra of type D or E as well.}, and showed an equivalence to the $r$-ramified Bouchard-Eynard topological recursion \cite{BE} on an appropriate spectral curve. The representation of the differential operators $\{H_i\}^{\mathfrak{gl}_r}_{i\in\mathbb{Z}_{>0}}$ is in a one-to-one correspondence with the defining data of the spectral curve, and the partition function $Z^{\mathfrak{gl}_r}_{\text{Airy}}$ encodes the same information as the multilinear differentials $\omega_{g,n}$ obtained by the Bouchard-Eynard topological recursion.
Since $\mathcal{W}$-algebras play an important role in the AGT correspondence, Airy structures, and the topological recursion, is there any relation between them? This point was recently addressed by Borot, Bouchard, Chidambaram, and Creutzig (BBCC) \cite{BBCC}. They found an Airy structure as an \emph{untwisted} module of a $\mathcal{W}(\mathfrak{g})$-algebra of type A, B, C or D whose partition function $Z_{\text{Airy}}^{\mathfrak{g}}$ is none other than the Gaiotto vector $\ket{G_{\mathfrak{g}}}$ expressed as a power series of formal variables. With an appropriate inner product $(\cdot|\cdot)$ for $Z_{\text{Airy}}^{\mathfrak{g}}$, the AGT correspondence and the results of BBCC imply that
\begin{equation}
Z_{\text{Nek}}^{\mathcal{G}\text{, pure}} \;\overset{\text{AGT}}{=}\; \braket{G_{\mathfrak{g}}|G_{\mathfrak{g}}} \;\overset{\text{BBCC}}{=}\; (Z_{\text{Airy}}^{\mathfrak{g}}|Z_{\text{Airy}}^{\mathfrak{g}}), \label{BBCC}
\end{equation}
where the first equality is not proven for type B and C, but it is expected to hold.
However, it is worth emphasizing that the topological recursion dual to the Airy structure of \cite{BBCC} is \emph{not} the original CEO or Bouchard-Eynard topological recursion. It is rather called the topological recursion \emph{without branch covers}, which was first presented in \cite[Section 10]{ABCD} for $r=2$. Its dual description in terms of Airy structures was first realised in \cite[Section 4.1.3]{SAS} for $r=2$, and BBCC significantly generalised it to all $r\geq2$ in \cite{BBCC}. This difference originates from the fact that the Gaiotto vector is constructed in an untwisted module of the $\mathcal{W}(\mathfrak{gl}_r)$-algebra whereas the CEO or Bouchard-Eynard topological recursion is related to twisted modules. Below is a schematic summary of relations between the topological recursion and Airy structures for the $\mathcal{W}(\mathfrak{gl}_r)$-algebra:
\begin{figure}[h]
\begin{tikzpicture}[every text node part/.style={align=center}]
\node (c) {};
\node[entity, below right=0mm and 5mm of c] (TR1) {Topological recursion on \\ an $r$-ramified spectral curve};
\node[entity, below left=0mm and 5mm of c] (AS1) {Airy structure as a \emph{twisted} \\ module of the $\mathcal{W}(\mathfrak{gl}_r)$-algebra};
\draw[<->, shorten >= 2pt, shorten <= 2pt, draw=black,thick] (TR1) -- node [text width=2cm,midway,above ] {1\;:\;1} (AS1);
\node[entity, below right=18mm and 5mm of c] (TR2) {Topological recursion \\ without branch covers};
\node[entity, below left=18mm and 5mm of c] (AS2) {Airy structure as an \emph{untwisted} \\ module of the $\mathcal{W}(\mathfrak{gl}_r)$-algebra};
\draw[<->, shorten >= 2pt, shorten <= 2pt, draw=black,thick] (TR2) -- node [text width=2cm,midway,above ] {1\;:\;1} (AS2);
\end{tikzpicture}
\caption{Correspondences between Airy structures and the topological recursion. The first one is for the topological recursion of CEO \cite{EO2} and Bouchard-Eynard \cite{BE}. The work of BBCC \cite{BBCC} is the second one.
}\label{fig:1}
\end{figure}
\subsection{Goal of This Paper}
A supersymmetric generalisation of Airy structures, called ``super Airy structures'', was introduced by the author and five more collaborators in \cite{SAS}. Following it, the ``$\mathcal{N}=1$ super topological recursion'' -- a supersymmetric generalisation of the CEO topological recursion -- was recently proposed by Bouchard and the author in \cite{BO2}. Moreover, \cite{BO2} shows an equivalence between the $\mathcal{N}=1$ super topological recursion on a local super spectral curve and the corresponding super Airy structure as a twisted module of $\mathcal{N}=1$ super Virasoro algebra. This means that a supersymmetric generalisation of the first correspondence in Figure~\ref{fig:1} for $r=2$ has been established by \cite{SAS,BO2}.
Now, motivated by the work of BBCC \cite{BBCC}, one may ask: for $r=2$, can the second correspondence in Figure~\ref{fig:1} also be generalised to include $2d$ supersymmetry? And more interestingly, can we incorporate $2d$ supersymmetry into the relation \eqref{BBCC}? The goal of the present paper is to show that the answer to both questions is yes. Here we give an outline of the conceptual path to achieve this goal.
\cite{SAS} presented four different classes of super Airy structures as modules of the $\mathcal{N}=1$ super Virasoro algebra (see Table~\ref{table:1}). The untwisted and the $\mu$-twisted modules play the role of building blocks in the present paper, and we suitably generalise the discussions in \cite{SAS} to match the corresponding super topological recursion. We note that the super Airy structures discussed in \cite{BO2} reduce to the $\rho$-twisted module, which is indeed a natural supersymmetric extension of the CEO topological recursion.
\begin{table}[h]
\centering
\begin{tabular}{ | c || c | c | c | }
\hline
classes & boson & fermion & sector \\
\hline
untwisted & $\times$ & $\times$ & NS \\
\hline
$\mu$-twisted & $\times$ & $\Circle$ & R \\
\hline
$\sigma$-twisted & $\Circle$ & $\times$ & R \\
\hline
$\rho$-twisted & $\Circle$ & $\Circle$ & NS \\
\hline
\end{tabular}
\vspace{3mm}
\caption{Four classes of super Airy structures as modules of the $\mathcal{N}=1$ super Virasoro algebra \cite{SAS}. $\Circle$ and $\times$ denote twisted and untwisted respectively, and the resulting super Virasoro algebra is either in the Neveu-Schwarz (NS) sector or the Ramond (R) sector.}\label{table:1}
\end{table}
On the other hand, untwisted/$\mu$-twisted super spectral curves and the corresponding super topological recursion are new concepts, hence we will define them in the present paper. These super spectral curves would be supersymmetric analogues of spectral curves \emph{without branch covers}. However, since we do not have a good understanding of global super spectral curves at the moment, we avoid using the terminology ``without branch covers'', but rather adopt the twisting notation from \cite{SAS}. Note that in our notation, the local super spectral curve of \cite[Section 2.2]{BO2} would be called a $\rho$-twisted super spectral curve. We then define untwisted and $\mu$-twisted abstract super loop equations, which play a key role in showing the equivalence between the super topological recursion and super Airy structures. These relations are schematically summarised as follows:
\begin{figure}[h]
\centering
\begin{tikzpicture}
\node[entity, align=center] (eq) {untwisted or $\mu$-twisted\\ abstract super loop equations};
\node[below left= 7mm and -2cm of eq, align=left] (P) {Solve geometrically \\ -- residue analysis};
\node[below right= 7mm and -2cm of eq, align=left] (V) {Solve algebraically \\ -- super Virasoro constraints};
\node[entity, align=center, below=7mm of P] (TR) {untwisted or $\mu$-twisted\\ super topological recursion};
\node[entity, align=center, below=7mm of V] (AS) {super Airy structure as \\ untwisted or $\mu$-twisted modules};
\draw[-,draw=black, thick] (eq) to (P);
\draw[-,draw=black, thick] (eq) to (V);
\draw[->,draw=black,thick] (P) to (TR);
\draw[->,draw=black,thick] (V) to (AS);
\end{tikzpicture}
\caption{One of the goals of the present paper is to mathematically formulate the above flowchart. We note that the above flowchart with $\rho$-twisting is none other than the work of \cite{BO2}.}\label{fig:goal}
\end{figure}
Finally, it was conjectured in \cite{BBM,Ito} that the norm of the Gaiotto vector for $\mathcal{N}=1$ superconformal blocks corresponds to the Nekrasov partition function for gauge group $U(2)$ on $\mathbb{C}^2/\mathbb{Z}_2$. We will explore a relation between the Gaiotto vector in the Neveu-Schwarz/Ramond sector and the partition function of a super Airy structure as an untwisted/$\mu$-twisted module of the $\mathcal{N}=1$ super Virasoro algebra. In particular, we prove the existence of the Gaiotto vectors for superconformal blocks, which is assumed in the physics literature. From this perspective, the framework of super Airy structures is a bridge connecting the super topological recursion and Gaiotto vectors for superconformal blocks. This stands as a supersymmetric generalisation of \eqref{BBCC}.
\subsection{Organisation}
This paper is organised as follows. In Section~\ref{sec:STR}, we will introduce the notion of an untwisted/$\mu$-twisted super spectral curve as well as abstract super loop equations. We then define an associated recursive formalism in Definition~\ref{def:STR} that solves the abstract super loop equations, which we call the untwisted/$\mu$-twisted super topological recursion. In Section~\ref{sec:SAS}, we construct the corresponding super Airy structure for each untwisted/$\mu$-twisted super spectral curve, and show a one-to-one relation to the super topological recursion. This equivalence is stated in Theorem~\ref{thm:main}, which is the main theorem of the present paper. In Section~\ref{sec:SCB}, we recall basic facts about the Gaiotto vectors for $\mathcal{N}=1$ superconformal blocks, and review a conjectural relation between the Gaiotto vectors and the Nekrasov partition function of pure $U(2)$ theory on $\mathbb{C}^2/\mathbb{Z}_2$. We then prove in Proposition~\ref{prop:main} that the Gaiotto vector in the Neveu-Schwarz or Ramond sector is none other than the partition function of an appropriate super Airy structure, with a simple change of parameters. We conclude in Section~\ref{sec:Conclusion} with open problems and possible future directions.
\subsubsection*{Acknowledgements}
The author owes many thanks to Nitin Chidambaram for inspirational discussions and for helpful explanations of the recent results of BBCC \cite{BBCC}.
The author thanks Andrea Brini, Omar Kidwai, and Piotr Su\l kowski for various comments.
The author also acknowledges the Young Researchers Integrability School and Workshop 2020 (YRISW) for a series of introductory lectures at which the author learnt about some topics discussed in the present paper. This work is supported by the Engineering and Physical Sciences Research Council under grant agreement ref. EP/S003657/1.
\section{Super Topological Recursion}\label{sec:STR}
In this section, we develop a supersymmetric analogue of the topological recursion without branch covers. We introduce the definitions of an untwisted/$\mu$-twisted super spectral curve and abstract super loop equations, and construct the corresponding recursive formalism. We note that they differ from the definitions of a local super spectral curve and the super topological recursion recently proposed in \cite{BO2} which would be called a $\rho$-twisted super spectral curve and the $\rho$-twisted super topological recursion in our notation, and which is a natural supersymmetric generalisation of the CEO topological recursion. Parts of the presentation follow \cite{BO2}.
\subsection{Untwisted/$\mu$-twisted Super Spectral Curves}
There are three symplectic vector spaces underlying untwisted/$\mu$-twisted super spectral curves, namely the vector space for bosons $V_{z}^B$, the one for untwisted fermions $V_{z}^{NS}$, and the one for $\mu$-twisted fermions $V_{z}^R$. Since untwisted/$\mu$-twisted fermions correspond to the Neveu-Schwarz/Ramond sector, we often denote them by NS/R. Note that $V_{z}^B$ and $V_z^R$ are the same as the ones given in \cite{BO2}, but $V_z^{NS}$ is new. We will define each of them below.
\subsubsection{Vector Space for Bosons}
Let us define $V_{z}^B$ by
\begin{equation}
V_{z}^B:=\{\;\omega\in\mathbb{C}[z^{-1},z\rrbracket dz\;\;|\;\;\underset{z\rightarrow0}{\text{Res}}\,\omega(z)=0\;\},
\end{equation}
equipped with the following symplectic pairing $\Omega^B:V_z^B\times V_z^B\rightarrow\mathbb{C}$:
\begin{equation}
df_1,df_2\in V_z^B,\;\;\;\;\Omega^B(df_1,df_2)=\underset{z\rightarrow0}{\text{Res}} f_1(z)df_2(z).\label{inner product}
\end{equation}
Note that $\Omega^B$ makes sense because no vector in $V_{z}^B$ has residues. We then define a Lagrangian subspace $V_{z}^{B+}=\mathbb{C}\llbracket z\rrbracket dz\subset V_{z}^B$, with a choice of a basis $(d\xi_l)_{l>0}$ as
\begin{equation}
d\xi_l(z):=z^{l-1}dz,\;\;\;\;\;l\in\mathbb{Z}_{>0}.
\end{equation}
We consider another Lagrangian subspace $V_{z}^{B-}\subset V_{z}^B$ complementary to $V_z^{B+}$. This choice is not unique, and it is parametrised by so-called ``bosonic polarization parameters'' $\phi_{lm}=\phi_{ml}\in\mathbb{C}$ for all $l,m\in\mathbb{Z}_{>0}$. If we denote by $(d\xi_{-l})_{l>0}$ a basis of $V_z^{B-}$, the bosonic polarization parameters $\phi_{lm}$ appear in $(d\xi_{-l})_{l>0}$ as:
\begin{equation}
d\xi_{-l}(z)=\frac{dz}{z^{l+1}}+\sum_{m>0}\frac{\phi_{lm}}{l}d\xi_m(z),\;\;\;\;l\in\mathbb{Z}_{>0}.\label{bbasis}
\end{equation}
One can easily check
\begin{equation}
\forall k,l\in\mathbb{Z}_{\neq0},\;\;\;\;\Omega^B(d\xi_k,d\xi_l)=\frac{\delta_{k+l,0}}{k},\label{vecHeis}
\end{equation}
hence $V_z^{B-}$ is indeed complementary to $V_z^{B+}$. One may find \eqref{vecHeis} similar to commutation relations in the Heisenberg algebra.
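As an illustration (not part of the original construction; all function names are ours), the relations \eqref{vecHeis} can be verified mechanically on truncated Laurent series over exact rationals:

```python
from fractions import Fraction

def antiderivative(form):
    """Primitive f of a one-form stored as {n: coeff of z^n dz};
    well defined here because vectors of V_z^B carry no residue."""
    assert form.get(-1, Fraction(0)) == 0
    return {n + 1: c / (n + 1) for n, c in form.items()}

def pairing(form1, form2):
    """Omega^B(df1, df2) = Res_{z->0} f1(z) df2(z); the residue picks the
    z^{-1} coefficient of the product f1 * (df2/dz)."""
    f1 = antiderivative(form1)
    return sum(c * form2.get(-n - 1, Fraction(0)) for n, c in f1.items())

def dxi(l, phi=None):
    """Basis one-form dxi_l; for l < 0 the tail carries the bosonic
    polarization parameters phi[(-l, m)] as in the displayed basis."""
    form = {l - 1: Fraction(1)}
    if l < 0:
        for (a, m), p in (phi or {}).items():
            if a == -l:
                form[m - 1] = form.get(m - 1, Fraction(0)) + p / a
    return form

phi = {(2, 3): Fraction(5), (3, 2): Fraction(5)}  # a symmetric sample choice
assert pairing(dxi(3), dxi(-3, phi)) == Fraction(1, 3)   # delta_{k+l,0}/k with k=3
assert pairing(dxi(-2, phi), dxi(2)) == Fraction(-1, 2)  # k=-2
assert pairing(dxi(-2, phi), dxi(-3, phi)) == 0          # needs phi_{kl}=phi_{lk}
```

Note that the last assertion fails for a non-symmetric choice of $\phi_{kl}$: the symmetry of the polarization parameters is exactly what makes $V_z^{B-}$ Lagrangian.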
The choice of polarization can be encoded into a formal symmetric bilinear form $\omega_{0,2|0}$ as follows:
\begin{equation}
\omega_{0,2|0}(z_1,z_2|)=\frac{dz_1 dz_2}{(z_1-z_2)^2}+\sum_{k,l>0}\phi_{kl}\;d\xi_k(z_1) d\xi_l(z_2).
\end{equation}
In the domain $|z_1|>|z_2|$, the basis of $V_z^{B\pm}$ appears in the expansion of $\omega_{0,2|0}(z_1,z_2|)$ as
\begin{equation}
\omega_{0,2|0}(z_1,z_2|)=\sum_{l\geq1}ld\xi_{-l}(z_1)d\xi_l(z_2).\label{02z=0}
\end{equation}
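Concretely, with trivial polarization ($\phi_{kl}=0$) the identity \eqref{02z=0} is just the geometric-series expansion of the double pole; a small numerical sketch (our own illustration, with an arbitrary truncation order):

```python
# Expand dz1 dz2/(z1-z2)^2 in the domain |z1| > |z2| and compare it with
# sum_{l>=1} l * dxi_{-l}(z1) dxi_l(z2), i.e. l * z1^(-l-1) * z2^(l-1),
# taking all bosonic polarization parameters phi_{kl} = 0.
z1, z2 = 2.0, 0.5
exact = 1.0 / (z1 - z2) ** 2
approx = sum(l * z1 ** (-l - 1) * z2 ** (l - 1) for l in range(1, 60))
assert abs(exact - approx) < 1e-12
```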
In order to define a spectral curve without branch covers, we further consider a one-dimensional vector space $V_z^{B\,0}$ and choose its basis $d\xi_0$ by
\begin{equation}
d\xi_0(z)=\frac{dz}{z}.
\end{equation}
Note that $d\xi_0\not\in V_z^B$. It does not make sense to apply the definition of the symplectic form \eqref{inner product} to $\tilde{V}_z^B=V_z^B\oplus V^{B\,0}_z$ because $\int d\xi_0=\log z$. We can still formally define a degenerate symplectic form $\tilde \Omega^B:\tilde{V}_z^B\times\tilde{V}_z^B\to\mathbb{C}$ such that if $k,l\neq0$ then $\tilde \Omega^B(d\xi_k,d\xi_l)$ follows \eqref{inner product} and $\tilde \Omega^B(d\xi_0,d\xi_k)=0$ for all $k\in\mathbb{Z}$. Thus, one can interpret that $d\xi_0\in V_z^{B\,0}$ plays the same role as the Heisenberg zero mode.
We now define a spectral curve without branch covers:
\begin{definition}\label{def:curveB}
A \emph{spectral curve with one component without branch covers} consists of a degenerate symplectic vector space $\tilde{V}_z^B$ with a maximal isotropic subspace $V_z^{B+}$, and the following data:
\begin{itemize}
\item a choice of ``dilaton shift parameters'' $(\tau_l)_{l\geq-(N-1)}$ with $\tau_{N-1}\neq0$ and $N\in\mathbb{Z}_{>0}$ which can be encoded in a choice of a one-form $\omega_{0,1|0}\in \tilde V^{B}_z$:
\begin{equation}
\omega_{0,1|0}(z)=\sum_{l\geq-(N-1)}\tau_{l}d\xi_{l}(z),
\end{equation}
\item a choice of ``bosonic polarization parameters'', which can be encoded in a choice of a symmetric bilinear differential $\omega_{0,2|0}$:
\begin{equation}
\omega_{0,2|0}(z_1,z_2|)=\frac{dz_1 dz_2}{(z_1-z_2)^2}+\sum_{k,l>0}\phi_{kl}\;d\xi_k(z_1) d\xi_l(z_2).
\end{equation}
\item a choice of ``$D$-terms'' $(D_k)_{1\leq k\leq N}$ which can be encoded in a choice of a one-form $\omega_{1,1|0}\in V^{B-}_z$:
\begin{equation}
\omega_{1,1|0}(z|)=\sum_{k=1}^N D_k\,d\xi_{-k}(z).
\end{equation}
\item a choice of ``crosscap parameters'' $(Q_l)_{l\geq-(N-1)}$ with $N\in\mathbb{Z}_{>0}$ which can be encoded in a choice of a one-form $\omega_{\frac12,1|0}\in \tilde V^{B}_z$:
\begin{equation}
\omega_{\frac12,1|0}(z|)=\sum_{l\geq-(N-1)}Q_{l}d\xi_{l}(z),
\end{equation}
\end{itemize}
\end{definition}
\begin{remark}
The above definition is similar to those in \cite{ABCD,BBCC} but slightly different. BBCC \cite{BBCC} considers a spectral curve with $(r-1)$ components but only for $N=1$. We show that the duality between the (super) topological recursion and (super) Airy structures holds for arbitrary positive integers $N$, though applications to Gaiotto vectors require setting $N=1$.
\end{remark}
\subsubsection{Vector Space for Untwisted Fermions}
The vector space $V^{NS}_{z,\theta}$ for untwisted fermions is
\begin{equation}
V^{NS}_{z,\theta}:=\{\eta\in\mathbb{C}[z^{-1},z\rrbracket\;\Theta^{NS}(z,\theta)\},
\end{equation}
where
\begin{equation}
\Theta^{NS}(z,\theta):=\left(\theta+dz\frac{\partial}{\partial\theta}\right),
\end{equation}
and $\theta$ is a Grassmann variable. We equip $V^{NS}_{z,\theta}$ with a pairing $\Omega^{NS}:V^{NS}_{z,\theta}\times V^{NS}_{z,\theta}\rightarrow\mathbb{C}$
\begin{equation}
\Omega^{NS}(\eta_1,\eta_2):=\underset{z\rightarrow0}{\text{Res}}\;\eta_1(z,\theta)\eta_2(z,\theta).\label{NS product}
\end{equation}
We often denote $\Theta^{NS}(z,\theta)$ and $\Theta^{NS}(z_i,\theta_i)$ by $\Theta^{NS}_z$ and $\Theta^{NS}_i$ for brevity. We also omit the $\theta$-dependence below. Note that $(\Theta_z^{NS})^2 = dz$, hence, $\Theta^{NS}_z$ can be thought of as a nonzero section of a completely integrable subbundle $\tilde{\mathcal{D}}\subset T^*\mathbb{C}_z$ of rank $0|1$\footnote{In the context of superconformal structures of super Riemann surfaces, a completely nonintegrable subbundle $\mathcal{D}$ of rank $0|1$ normally refers to $\mathcal{D}\subset T\mathbb{C}_z$ (e.g., \cite[Section 2.2]{W}) instead of $T^*\mathbb{C}_z$. Thus, one might find $\tilde{\mathcal{D}}$ a dual description of $\mathcal{D}$.}.
We decompose $V_z^{NS}$ into two Lagrangian subspaces $V_z^{NS+}$ and $V_z^{NS-}$ as follows. We define $V_z^{NS+}=\{\eta\in \mathbb{C}\llbracket z\rrbracket\,\Theta_z^{NS}\}$, and we fix its basis $(\eta_{l+\frac12})_{l\geq0}$ by
\begin{equation}
\eta_{l+\frac12}(z,\theta):=z^{l}\,\Theta^{NS}_z,\;\;\;\;\;l\in\mathbb{Z}_{\geq0}.
\end{equation}
Let $V_z^{NS-}$ be another Lagrangian subspace complementary to $V_z^{NS+}$, then its basis $(\eta_{-l-\frac 12})_{l\geq0}$ is given with ``untwisted polarization parameters'' $\psi^{NS}_{lm}=-\psi^{NS}_{ml}\in\mathbb{C}$ as
\begin{equation}
\eta_{-l-\frac 12}(z,\theta):=\left(\frac{1}{z^{l+1}}+\sum_{k\geq0}\psi^{NS}_{lk}z^{k}\right)\,\Theta^{NS}_z,\;\;\;\;l\in\mathbb{Z}_{\geq0}.\label{nsbasis}
\end{equation}
Note that we have
\begin{equation}
\forall k,l\in\mathbb{Z},\;\;\;\;\Omega^{NS}(\eta_{k+\frac12},\eta_{l-\frac12})=\delta_{k+l,0},
\end{equation}
which resembles commutation relations in the Clifford algebra. Analogous to $\omega_{0,2|0}$, we encode untwisted polarization parameters into an antisymmetric bilinear form $\omega^{NS}_{0,0|2}$ which is defined as
\begin{equation}
\omega^{NS}_{0,0|2}(|z_1,z_2)=-\frac{\Theta^{NS}_1\Theta^{NS}_2}{(z_1-z_2)}+\sum_{k,l\geq0}\psi^{NS}_{kl}\;\eta_{k+\frac 12}(z_1)\, \eta_{l+\frac12}(z_2).
\end{equation}
Then, in the domain $|z_1|<|z_2|$, the basis of $V_z^{NS\pm}$ appears in the expansion of $\omega_{0,0|2}^{NS}(|z_1,z_2)$ as
\begin{equation}
\omega^{NS}_{0,0|2}(|z_1,z_2)=\sum_{l\geq0}\eta_{l+\frac12}(z_1)\eta_{-l-\frac12}(z_2).\label{NS02expansion}
\end{equation}
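The Clifford-like relations above can also be checked mechanically. The following sketch (our own; it treats the coefficient functions commutatively and does not model Grassmann signs, so only the residues are tested) uses $(\Theta^{NS}_z)^2=dz$:

```python
from fractions import Fraction

def omega_ns(g1, g2):
    """Omega^NS of eta_i = g_i(z) Theta^NS: since (Theta^NS)^2 = dz, this is
    the z^{-1} coefficient of g1(z) g2(z) (the g_i are stored as {n: coeff};
    Grassmann signs are not modelled in this commutative shadow)."""
    return sum(c * g2.get(-1 - n, Fraction(0)) for n, c in g1.items())

def eta_plus(l):          # eta_{l+1/2} = z^l Theta^NS,  l >= 0
    return {l: Fraction(1)}

def eta_minus(l, psi):    # eta_{-l-1/2} = (z^{-l-1} + sum_k psi[(l,k)] z^k) Theta^NS
    g = {-l - 1: Fraction(1)}
    for (a, k), c in psi.items():
        if a == l:
            g[k] = g.get(k, Fraction(0)) + c
    return g

psi = {(0, 1): Fraction(3), (1, 0): Fraction(-3)}   # an antisymmetric sample
assert omega_ns(eta_plus(2), eta_minus(2, psi)) == 1        # delta_{k+l,0}
assert omega_ns(eta_plus(1), eta_minus(2, psi)) == 0
assert omega_ns(eta_minus(0, psi), eta_minus(1, psi)) == 0  # needs psi antisymmetric
```

Here the antisymmetry $\psi^{NS}_{lm}=-\psi^{NS}_{ml}$ is precisely what makes $V_z^{NS-}$ isotropic, mirroring the role of the symmetry of $\phi_{lm}$ in the bosonic sector.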
We now have all ingredients to define an untwisted super spectral curve:
\begin{definition}\label{def:curveNS}
An \emph{untwisted super spectral curve $\mathcal{S}_{NS}$ with one component} consists of a $\mathbb{Z}_2$-graded degenerate symplectic vector space $V=\tilde{V}_z^B\oplus V^{NS}_z$ with a maximal isotropic subspace $V_z^{B+}\oplus V_z^{NS+}$, and the following data:
\begin{itemize}
\item a choice of ``dilaton shift parameters'' $(\tau_l)_{l\geq-(N-1)}$ with $\tau_{N-1}\neq0$ and $N\in\mathbb{Z}_{>0}$ which can be encoded in a choice of a one-form $\omega_{0,1|0}\in \tilde V^{B}_z$:
\begin{equation}
\omega_{0,1|0}(z)=\sum_{l\geq-(N-1)}\tau_{l}d\xi_{l}(z),
\end{equation}
\item a choice of ``bosonic polarization parameters'', which can be encoded in a choice of a symmetric bilinear differential $\omega_{0,2|0}$:
\begin{equation}
\omega_{0,2|0}(z_1,z_2|)=\frac{dz_1 dz_2}{(z_1-z_2)^2}+\sum_{k,l>0}\phi_{kl}\;d\xi_k(z_1) d\xi_l(z_2).
\end{equation}
\item a choice of ``$D$-terms'' $(D_k)_{1\leq k\leq N}$ which can be encoded in a choice of a one-form $\omega_{1,1|0}\in V^{B-}_z$:
\begin{equation}
\omega_{1,1|0}(z|)=\sum_{k=1}^N D_k\,d\xi_{-k}(z).
\end{equation}
\item a choice of ``crosscap parameters'' $(Q_l)_{l\geq-(N-1)}$ with $N\in\mathbb{Z}_{>0}$ which can be encoded in a choice of a one-form $\omega_{\frac12,1|0}\in \tilde V^{B}_z$:
\begin{equation}
\omega_{\frac12,1|0}(z|)=\sum_{l\geq-(N-1)}Q_{l}d\xi_{l}(z),
\end{equation}
\item a choice of ``untwisted polarization parameters'', which can be encoded in a choice of an antisymmetric bilinear differential $\omega^{NS}_{0,0|2}$:
\begin{equation}
\omega^{NS}_{0,0|2}(|z_1,z_2)=-\frac{\Theta^{NS}_1\Theta^{NS}_2}{(z_1-z_2)}+\sum_{k,l\geq0}\psi^{NS}_{kl}\;\eta_{k+\frac 12}(z_1)\, \eta_{l+\frac12}(z_2).
\end{equation}
\end{itemize}
\end{definition}
\subsubsection{Vector Space for $\mu$-twisted Fermions}
For $\mu$-twisted fermions, the vector space $V^{R}_{z,\theta}$ is given by
\begin{equation}
V^{R}_{z,\theta}:=\{\eta\in\mathbb{C}[z^{-1},z\rrbracket\;\Theta^R(z,\theta)\},
\end{equation}
where
\begin{equation}
\Theta^R(z,\theta):=\left(\theta+zdz\frac{\partial}{\partial\theta}\right),
\end{equation}
and $\theta$ is a Grassmann variable. We equip $V^{R}_{z,\theta}$ with a pairing $\Omega^R:V^R_{z,\theta}\times V^R_{z,\theta}\rightarrow\mathbb{C}$
\begin{equation}
\Omega^R(\eta_1,\eta_2):=\underset{z\rightarrow0}{\text{Res}}\;\eta_1(z,\theta)\eta_2(z,\theta).\label{R product}
\end{equation}
We again denote $\Theta^R(z,\theta)$ and $\Theta^R(z_i,\theta_i)$ by $\Theta^{R}_z$ and $\Theta^{R}_i$ for brevity. Note that $(\Theta_z^R)^2 = z dz$, hence, $\Theta_z^R$ fails to be a nonzero section of the completely integrable subbundle $\tilde{\mathcal{D}}\subset T^*\mathbb{C}_z$ of rank $0|1$. Thus, one may interpret that $V^{R}_{z}$ is associated with a Ramond divisor at the origin in the context of \cite[Section 4.1]{W}.
As explained in \cite[Section 2.2]{BO2}, $V^{R}_{z}$ is decomposed into three subspaces $V_z^{R+},V_z^{R\,0},V_z^{R-}$. Similar to $V_z^{B+}, V_z^{NS+}$, we define $V_z^{R+}=\{\eta\in \mathbb{C}\llbracket z\rrbracket\,\Theta_z^R\}$, and we fix its basis $(\eta_l)_{l>0}$ as
\begin{equation}
\eta_l(z,\theta):=z^{l-1}\,\Theta^R_z,\;\;\;\;\;l\in\mathbb{Z}_{>0}.
\end{equation}
Next, the zero mode space $V^{R\,0}$ is one-dimensional, and its basis $(\eta_0)$ is given by
\begin{equation}
\eta_0(z,\theta):=\left(\frac{1}{z}+\sum_{k>0}\psi^R_{0k}z^{k-1}\right)\,\Theta_z^R,\label{R basis0}
\end{equation}
where $\psi^R_{0k}\in\mathbb{C}$. Finally, $V_z^{R-}$ is complementary to $V_z^{R+}\oplus V_z^{R\,0}$, and its basis $(\eta_{-l})_{l>0}$ is given by
\begin{equation}
\eta_{-l}(z,\theta):=\left(\frac{1}{z^{l+1}}+\sum_{k\geq0}\psi^R_{lk}z^{k-1}\right)\,\Theta_z^R,\label{R basis}
\end{equation}
where ``$\mu$-twisted polarization parameters'' $\psi^R_{kl}$ satisfy
\begin{equation}
\psi_{00}^R=0,\;\;\;\;\psi^R_{kl}+\psi^R_{lk}+\psi^R_{0k}\psi^R_{0l}=0,\;\;\;\;\forall k,l\in\mathbb{Z}_{\geq0}.\label{R polarization parameters}
\end{equation}
One can easily check that $(\eta_{l})_{l\in\mathbb{Z}}$ satisfy
\begin{equation}
\forall k,l\in\mathbb{Z},\;\;\;\;\Omega^R(\eta_k,\eta_l)=\delta_{k+l,0},
\end{equation}
which again resembles commutation relations in the Clifford algebra. In order to put all $\mu$-twisted polarization parameters into one package, we define an antisymmetric bilinear form $\omega_{0,0|2}^R$ by
\begin{equation}
\omega_{0,0|2}^R(|z_1,z_2):=-\frac12\frac{z_1+z_2}{z_1-z_2}\frac{\Theta^R_1 \Theta^R_2}{z_1z_2}-\sum_{k,l\geq1}\frac{\psi^R_{k-1\;l-1}-\psi^R_{l-1\;k-1}}{1+\delta_{(k-1)(l-1),0}}\frac{\eta_l(z_1) \eta_k(z_2)}{2z_1z_2}.
\end{equation}
Then, in the domain $|z_1|<|z_2|$, we have
\begin{equation}
\omega^R_{0,0|2}(|z_1,z_2)=\sum_{l>0}\eta_{l}(z_1)\eta_{-l}(z_2)+\frac12\eta_0(z_1)\eta_0(z_2).\label{R02expansion}
\end{equation}
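That the constraints \eqref{R polarization parameters} are exactly what the Clifford-like relations require can be tested on a concrete sample. The following sketch is our own (commutative shadow as before, Grassmann signs not modelled), using $(\Theta^R_z)^2=z\,dz$:

```python
from fractions import Fraction

def omega_r(g1, g2):
    """Omega^R of eta_i = g_i(z) Theta^R: since (Theta^R)^2 = z dz, the residue
    picks the z^{-2} coefficient of g1(z) g2(z) (coefficients stored as {n: c};
    Grassmann signs are again not modelled)."""
    return sum(c * g2.get(-2 - n, Fraction(0)) for n, c in g1.items())

# A sample solution of psi_{00}=0, psi_{kl}+psi_{lk}+psi_{0k}psi_{0l}=0 (our numbers):
# psi_{01}=2, psi_{10}=-2, psi_{11}=-2, psi_{12}=1, psi_{21}=-1, all others 0.
eta = {
    1:  {0: Fraction(1)},                              # z^0 Theta^R
    2:  {1: Fraction(1)},                              # z^1 Theta^R
    0:  {-1: Fraction(1), 0: Fraction(2)},             # (1/z + psi_{01}) Theta^R
    -1: {-2: Fraction(1), -1: Fraction(-2),            # psi_{10}/z term
         0: Fraction(-2), 1: Fraction(1)},             # psi_{11}, psi_{12} terms
    -2: {-3: Fraction(1), 0: Fraction(-1)},            # psi_{21} term
}

assert omega_r(eta[1], eta[-1]) == 1    # Omega^R(eta_k, eta_l) = delta_{k+l,0}
assert omega_r(eta[2], eta[-2]) == 1
assert omega_r(eta[0], eta[0]) == 1
assert omega_r(eta[0], eta[-1]) == 0    # uses psi_{01} + psi_{10} = 0
assert omega_r(eta[-1], eta[-1]) == 0   # uses 2 psi_{11} + psi_{01}^2 = 0
assert omega_r(eta[-1], eta[-2]) == 0   # uses psi_{12} + psi_{21} + psi_{01}psi_{02} = 0
```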
A $\mu$-twisted super spectral curve is defined analogously to an untwisted one:
\begin{definition}\label{def:curveR}
A \emph{$\mu$-twisted super spectral curve $\mathcal{S}_R$ with one component} consists of a $\mathbb{Z}_2$-graded degenerate symplectic vector space $V=\tilde{V}_z^B\oplus V^{R}_z$, with a maximal isotropic subspace $V_z^{B+}\oplus V_z^{R+}$, and the following data:
\begin{itemize}
\item a choice of ``dilaton shift parameters'' $(\tau_l)_{l\geq-(N-1)}$ with $\tau_{N-1}\neq0$ and $N\in\mathbb{Z}_{>0}$ which can be encoded in a choice of a one-form $\omega_{0,1|0}\in \tilde V^{B}_z$:
\begin{equation}
\omega_{0,1|0}(z)=\sum_{l\geq-(N-1)}\tau_{l}d\xi_{l}(z),
\end{equation}
\item a choice of ``bosonic polarization parameters'', which can be encoded in a choice of a symmetric bilinear differential $\omega_{0,2|0}$:
\begin{equation}
\omega_{0,2|0}(z_1,z_2|)=\frac{dz_1 dz_2}{(z_1-z_2)^2}+\sum_{k,l>0}\phi_{kl}\;d\xi_k(z_1) d\xi_l(z_2).
\end{equation}
\item a choice of ``$D$-terms'' $(D_k)_{1\leq k\leq N}$ which can be encoded in a choice of a one-form $\omega_{1,1|0}\in V^{B-}_z$:
\begin{equation}
\omega_{1,1|0}(z|)=\sum_{k=1}^N D_k\,d\xi_{-k}(z).
\end{equation}
\item a choice of ``crosscap parameters'' $(Q_l)_{l\geq-(N-1)}$ with $N\in\mathbb{Z}_{>0}$ which can be encoded in a choice of a one-form $\omega_{\frac12,1|0}\in \tilde V^{B}_z$:
\begin{equation}
\omega_{\frac12,1|0}(z|)=\sum_{l\geq-(N-1)}Q_{l}d\xi_{l}(z),
\end{equation}
\item a choice of ``$\mu$-twisted polarization parameters'', which can be encoded in a choice of an antisymmetric bilinear differential $\omega^R_{0,0|2}$:
\begin{equation}
\omega^{R}_{0,0|2}(|z_1,z_2)=-\frac12\frac{z_1+z_2}{z_1-z_2}\frac{\Theta^R_1 \Theta^R_2}{z_1z_2}-\sum_{k,l\geq1}\frac{\psi^R_{k-1\;l-1}-\psi^R_{l-1\;k-1}}{1+\delta_{(k-1)(l-1),0}}\frac{\eta_l(z_1) \eta_k(z_2)}{2z_1z_2}.
\end{equation}
\end{itemize}
\end{definition}
\subsection{Untwisted and $\mu$-twisted Abstract Super Loop Equations}
It turns out that the forms of untwisted/$\mu$-twisted abstract super loop equations and the corresponding super topological recursion are almost universal regardless of the twisting. In addition, all super spectral curves considered in the present paper are with one component. Thus, we often drop ``untwisted/$\mu$-twisted ... with one component'', and we simply call $\mathcal{S}_F$ a super spectral curve where $F\in\{NS,R\}$. For brevity of presentation, we further define a constant $f$ such that:
\begin{align}
\text{for }F=NS,\;\;\;\;f=\frac12,\;\;\;\;
\text{for }F=R,\;\;\;\;\;\;\;f=0\label{fandNSR}
\end{align}
\begin{remark}\label{rem:para}
A super spectral curve $\mathcal{S}_F$ is completely fixed by the following seven parameters:
\begin{equation}
(F,N,\tau_{l},\phi_{kl},\psi_{mn}^F,D_k,Q_l)
\end{equation}
\end{remark}
\subsubsection{Abstract Super Loop Equations}
Given a super spectral curve $\mathcal{S}_F$, we consider an infinite sequence of multilinear differentials $\omega_{g,n|2m}$ for $2g,n,m\in\mathbb{Z}_{\geq0}$ with $2g+n+2m\geq3$ as
\begin{equation}
\omega_{g,n|2m}\in \left(\bigotimes_{j=1}^nV_{z_j}^{B-}\right)\otimes \left(\bigotimes_{k=1}^{2m} V_{u_k,\theta_k}^{F\,0,-} \right),
\end{equation}
where we should drop $V_{u_k,\theta_k}^{F\,0}$ when $F=NS$. Note that $g$ can be a \emph{half-integer}. We say that $\omega_{g,n|2m}$ ``respect the polarization'', because they are in the subspaces $V_{z_j}^{B-}$ and $V_{u_k,\theta_k}^{F\, 0,-}$ defined by the choice of polarization in the super spectral curve $\mathcal{S}_F$. We impose that the $\omega_{g,n|2m}$ are symmetric under permutations of the first $n$ entries, and anti-symmetric under permutations of the last $2m$ entries. We assume no symmetry under permutations of some of the first $n$ entries with some of the last $2m$ entries. Note that the $\omega_{g,n|2m}$ always have an even number of elements in $\bigotimes V_{u,\theta}^{F\,0,-}$.
Let us denote by $I,J$ sets of variables $I=(z_1,\ldots)$ and $J=((u_1,\theta_1),\ldots)$. The number of variables in $I$ and $J$ should be read off from the multilinear differentials of interest, e.g., for $\omega_{g,n+1|2m}(z_0,I|(u_0,\theta_0),J)$ we have $|I|=n$ and $|J|=2m-1$. For $2g+n+2m\geq3$ we define:
\begin{align}
\mathcal{Q}_{g,n|2m}^{BF}(I|z,J)=\;&\omega_{g-1,n+1|2m}(z,I|z,J)\nonumber\\
&+\left(\underset{\tilde z\rightarrow0}{{\rm Res}}\,\omega_{\frac12,1|0}(\tilde z)\right)\left(\mathcal{D}_z\cdot\omega_{g-\frac12,n|2m}(I|z,J)+\frac{1-2f}{2}d\xi_0(z)\omega_{g-\frac12,n|2m}(I|z,J)\right)\nonumber\\
&+\sum_{g_1+g_2=g}\sum_{\substack{I_1\cup I_2=I \\ J_1\cup J_2=J}}(-1)^{\rho}\omega_{g_1,n_1+1|2m_1}(z,I_1|J_1)\,\omega_{g_2,n_2|2m_2}(I_2|z,J_2),\label{QFB}
\end{align}
where we dropped the $\theta$-dependence for brevity. $(-1)^{\rho}=1$ if $J_1\cup J_2$ is an even permutation of $J$ and $(-1)^{\rho}=-1$ otherwise. Similarly, for $2g+n+2m\geq2$ and $(g,n,m)\neq(1,0,0)$, we define
\begin{align}
\mathcal{Q}_{g,n+1|2m}^{BB}(z,I|J)=\;&\omega_{g-1,n+2|2m}(z,z,I|J)+\frac12\left(\underset{\tilde z\rightarrow0}{{\rm Res}}\,\omega_{\frac12,1|0}(\tilde z)\right)\mathcal{D}_z\cdot\omega_{g-\frac12,n+1|2m}(z,I|J)\nonumber\\
&+\sum_{g_1+g_2=g}\sum_{\substack{I_1\cup I_2=I \\ J_1\cup J_2=J}}(-1)^{\rho}\omega_{g_1,n_1+1|2m_1}(z,I_1|J_1)\,\omega_{g_2,n_2+1|2m_2}(z,I_2|J_2),\label{QBB}
\end{align}
\begin{align}
\mathcal{Q}_{g,n+1|2m}^{FF}(z,I|J)=\;&-\frac12\mathcal{D}_z\cdot\omega_{g-1,n|2m+2}(I|z,u,J)\Bigr|_{u=z}\nonumber\\
&+\frac12\sum_{g_1+g_2=g}\sum_{\substack{I_1\cup I_2=I \\ J_1\cup J_2=J}}(-1)^{\rho}\mathcal{D}_z\cdot\omega_{g_1,n_1|2m_1}(I_1|z,J_1) \,\omega_{g_2,n_2|2m_2}(I_2|z,J_2),\label{QFF}
\end{align}
where for $\omega(z)=f(z)dz\in V_{z}^B$ and $\eta(z,\theta)=g(z)\Theta^F(z,\theta)\in V_{z,\theta}^F$, the derivative operator $\mathcal{D}_z$ is defined to act as
\begin{align}
\mathcal{D}_z\cdot\omega(z)&=df(z) dz\in V_z^B\otimes V_z^B,\\
\mathcal{D}_z\cdot\eta(z,\theta)&=dg(z) \Theta^F(z,\theta)\in V_z^B\otimes V_{z,\theta}^F.
\end{align}
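The only combinatorial subtlety in \eqref{QFB}-\eqref{QFF} is the sign $(-1)^{\rho}$ attached to each splitting $J_1\cup J_2=J$ of the anticommuting variables. If one were to implement the recursion, the splittings and their signs could be enumerated as follows (a sketch in our own notation, computing $\rho$ as the inversion count of the permutation taking $J$ to the concatenation $J_1 J_2$):

```python
from itertools import combinations

def ordered_splittings(J):
    """Yield (J1, J2, sign) for all splittings of the ordered tuple J,
    where sign = (-1)^rho and rho is the parity of the permutation
    taking J to the concatenation J1 + J2."""
    n = len(J)
    for r in range(n + 1):
        for chosen in combinations(range(n), r):
            rest = [i for i in range(n) if i not in chosen]
            perm = list(chosen) + rest
            # parity via the inversion count of perm
            inv = sum(1 for a in range(n) for b in range(a + 1, n)
                      if perm[a] > perm[b])
            yield tuple(J[i] for i in chosen), tuple(J[i] for i in rest), (-1) ** inv
```

For $J=(u_1,u_2)$ this yields four splittings, with $((u_2),(u_1))$ carrying the sign $-1$, matching the convention stated below \eqref{QFB}.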
We note that
\begin{equation}
\underset{\tilde z\rightarrow0}{{\rm Res}}\,\omega_{\frac12,1|0}(\tilde z)=Q_0.
\end{equation}
In the context of conformal field theory in two dimensions, this $Q_0$-dependence can be thought of as a consequence of the so-called ``background charge''. When $Q_0=0$, \eqref{QFB}-\eqref{QFF} are similar to Eq. (2.36)-(2.38) in \cite{BO2} but they are rather simpler because there is no involution operator $\sigma$. Note that $Q_0=0$ corresponds to the self-dual limit in $\mathcal{N}=2$ supersymmetric gauge theory in four dimensions.
Following \cite[Definition 2.8]{BO2}, we define abstract super loop equations:
\begin{definition}\label{def:SLE}
Given an untwisted/$\mu$-twisted super spectral curve $\mathcal{S}_F$, the \emph{untwisted/$\mu$-twisted abstract super loop equations} are the following set of constraints:
\begin{enumerate}
\item \emph{quadratic bosonic loop equations}: for $2g+n+2m\geq2$ and $(g,n,m)\neq(1,0,0)$
\begin{equation}
\mathcal{Q}_{g,n+1|2m}^{BB}(z,I|J)+\mathcal{Q}_{g,n+1|2m}^{FF}(z,I|J)\in z^{-N-1} V_z^{B+} \otimes V_z^{B+},
\end{equation}
\item \emph{quadratic fermionic loop equations}: for $2g+n+2m\geq3$
\begin{equation}
\mathcal{Q}_{g,n|2m}^{BF}(I|z,J)\in z^{-N-1} V_z^{B+}\otimes V_z^{F+}.
\end{equation}
\end{enumerate}
\end{definition}
\subsection{Untwisted/$\mu$-twisted Super Topological Recursion}
We would like to prove that there exists a unique solution of the abstract super loop equations. Similar to the argument in \cite{BO2}, however, existence is easier to prove once we describe them in terms of super Airy structures. Therefore, in this section we assume existence of a solution, and present a way of computing a unique solution. We call this computational method the untwisted/$\mu$-twisted super topological recursion.
Towards the construction of the super topological recursion, let us first define $\tilde{\mathcal{Q}}_{g,n+1|2m}^{BB}$ and $\tilde{\mathcal{Q}}_{g,n|2m}^{BF}$ by
\begin{align}
\tilde{\mathcal{Q}}_{g,n+1|2m}^{BB}(z,I|J)=&\,\mathcal{Q}_{g,n+1|2m}^{BB}(z,I|J)-\omega_{0,1|0}(z|)\,\omega_{g,n+1|2m}(z,I|J),\\
\tilde{\mathcal{Q}}_{g,n|2m+2}^{BF}(I|z,J)=&\,\mathcal{Q}_{g,n|2m+2}^{BF}(I|z,J)-\omega_{0,1|0}(z|)\,\omega_{g,n|2m+2}(I|z,J).
\end{align}
That is, they are independent of $\omega_{g,n+1|2m}$ or $\omega_{g,n|2m+2}$ respectively. Next, we define the ``recursion kernels'' $K^{BB},K^{BF}$ by
\begin{align}
K^{BB}(z_0,z)=&-\frac{\int^{z}_{0}\omega_{0,2|0}(z_0,\cdot|)}{\omega_{0,1|0}(z|)},\\
K^{BF}(z_0,z)=&-\frac{\omega^F_{0,0|2}(|z,z_0)-(f+\frac12)\eta_f(z)\eta_{-f}(z_0)}{2\omega_{0,1|0}(z|)}.
\end{align}
One should find them similar to the recursion kernels in \cite[Section 3]{BO2}.
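For illustration (our own toy choice of data, not a curve singled out later in the text): take $N=1$, trivial bosonic polarization $\phi_{kl}=0$, and $\omega_{0,1|0}=\tau_0\,d\xi_0$ with $\tau_0\neq0$. Then the bosonic kernel can be evaluated in closed form:

```latex
\int_0^z \omega_{0,2|0}(z_0,\cdot\,|)
  = dz_0\left(\frac{1}{z_0-z}-\frac{1}{z_0}\right)
  = \frac{z\,dz_0}{z_0(z_0-z)},
\qquad
K^{BB}(z_0,z)
  = -\frac{z^2}{\tau_0\,z_0(z_0-z)}\,\frac{dz_0}{dz},
```

which indeed has a zero of order $N+1=2$ at $z=0$; this vanishing order is what allows the residue in the recursion to discard the unknown positive part of the loop equations.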
With these recursion kernels, we now define the untwisted/$\mu$-twisted super topological recursion, and show that it uniquely solves the abstract super loop equations.
\begin{definition}\label{def:STR}
Given an untwisted/$\mu$-twisted super spectral curve $\mathcal{S}_F$, the ``untwisted/$\mu$-twisted super topological recursion'' is a recursive construction of multilinear differentials $\omega_{g,n|2m}$ that respect the polarization by the following set of formulae:
\begin{itemize}
\item For $2g+n+2m\geq2$ with $(g,n,m)\neq(1,0,0)$,
\begin{equation}
\omega_{g,n+1|2m}(z_0,I|J)=\underset{z\rightarrow0}{{\rm Res}}\,K^{BB}(z_0,z)\left(\tilde{\mathcal{Q}}_{g,n+1|2m}^{BB}(z,I|J)+\mathcal{Q}_{g,n+1|2m}^{FF}(z,I|J)\right),\label{BR}
\end{equation}
\item For $2g+n+2m\geq3$,
\begin{equation}
\omega_{g,n|2m+2}(I|u_1,u_2,J)=\hat{\omega}_{g,n|2m+2}(I|u_1,u_2,J)-\eta_{-f}(u_1)\underset{z\rightarrow0}{{\rm Res}}\,\hat{\omega}_{g,n|2m+2}(I|u_2,z,J)\eta_f(z),\label{FR}
\end{equation}
where
\begin{equation}
\hat{\omega}_{g,n|2m+2}(I|u_1,u_2,J)=\underset{z\rightarrow0}{{\rm Res}}\,K^{FB}(u_1,z)\tilde{\mathcal{Q}}_{g,n|2m+2}^{BF}(I|z,u_2,J).\label{FR1}
\end{equation}
\end{itemize}
\end{definition}
\begin{proposition}\label{prop:STR}
If there exists a solution to the untwisted/$\mu$-twisted abstract super loop equations that respects the polarization, then it is uniquely constructed recursively by the untwisted/$\mu$-twisted super topological recursion.
\end{proposition}
\begin{proof}
The proof closely follows how \cite{BO2} proves the super topological recursion, but our case is simpler because there is no involution operator $\sigma$. Given a super spectral curve $\mathcal{S}_F$, let us assume existence of a solution of the abstract super loop equations. Since $K^{BB}(z_0,z)$ has a zero of order $N+1$ at $z=0$, the quadratic bosonic loop equations imply that for $2g+n+2m\geq2$ and $(g,n,m)\neq(1,0,0)$,
\begin{equation}
\underset{z\rightarrow0}{{\rm Res}}\,K^{BB}(z_0,z)\biggl(\mathcal{Q}_{g,n+1|2m}^{BB}(z,I|J)+\mathcal{Q}_{g,n+1|2m}^{FF}(z,I|J)\biggr)=0.
\end{equation}
Let us focus on terms involving $\omega_{g,n+1|2m}$ on the left hand side. They appear in the form:
\begin{align}
\underset{z\rightarrow0}{{\rm Res}}\,K^{BB}(z_0,z)\biggl(\omega_{0,1|0}(z|)\omega_{g,n+1|2m}(z,I|J)\biggr)&=-\underset{z\rightarrow0}{{\rm Res}}\,\int^{z}_{0}\omega_{0,2|0}(z_0,\cdot|)\omega_{g,n+1|2m}(z,I|J)\nonumber\\
&=-\omega_{g,n+1|2m}(z_0,I|J)\label{p1},
\end{align}
where we used \eqref{02z=0} in the last equality. The remaining terms are none other than $\tilde{\mathcal{Q}}_{g,n+1|2m}^{BB}$ and $\mathcal{Q}_{g,n+1|2m}^{FF}$; hence, this proves that \eqref{BR} indeed solves the quadratic bosonic loop equations.
Similarly, the quadratic fermionic loop equations imply that for $2g+n+2m\geq1$,
\begin{equation}
\underset{z\rightarrow0}{{\rm Res}}\,K^{FB}(u_1,z)\mathcal{Q}_{g,n|2m+2}^{BF}(I|z,u_2,J)=0.\label{p2}
\end{equation}
One can repeat the same procedure as we did in \eqref{p1}. That is, terms involving $\omega_{g,n|2m+2}$ on the left hand side of \eqref{p2} become
\begin{align}
&-\underset{z\rightarrow0}{{\rm Res}}\left(\omega^F_{0,0|2}(|z,u_1)-\left(f+\frac12\right)\eta_{f}(z)\eta_{-f}(u_1)\right)\omega_{g,n|2m+2}(I|z,u_2,J)\nonumber\\
&=:-\hat{\omega}_{g,n|2m+2}(I|u_1,u_2,J),
\end{align}
where this should be taken as the definition of $\hat{\omega}_{g,n|2m+2}$; hence, \eqref{FR1} holds. Notice that the only differences between $\hat{\omega}_{g,n|2m+2}(I|u_1,u_2,J)$ and $\omega_{g,n|2m+2}(I|u_1,u_2,J)$ are terms that depend on $\eta_{-f}(u_1)$, thanks to \eqref{NS02expansion} and \eqref{R02expansion}. Since fermionic entries are by definition antisymmetric under permutations, one can supplement this missing $\eta_{-f}(u_1)$-dependence precisely by the second term in \eqref{FR}. It is clear that \eqref{BR} together with \eqref{FR} is recursive for $\omega_{g,n|2m}$ in $2g+n+2m$; hence we have constructed all $\omega_{g,n|2m}$ starting from a super spectral curve, subject to the assumption that a solution exists. This completes the proof.
\end{proof}
\begin{remark}
It is highly nontrivial to show that the $\omega_{g,n|2m}$ obtained from \eqref{BR} and \eqref{FR} are symmetric under permutations of the first $n$ entries, and antisymmetric under permutations of the last $2m$ entries. Also, for $nm\neq0$, there are two formulae to compute $\omega_{g,n|2m}$: either \eqref{BR} or \eqref{FR}. In other words, having the recursive formulae \eqref{BR} and \eqref{FR} is not sufficient to show existence. We will prove existence of a solution in Corollary~\ref{coro:main} with the help of super Airy structures.
\end{remark}
\begin{remark}\label{rem:noF0}
It is easy to show that $\omega_{g,n|2m}(J|K)=0$ for all $2g+n+2m\geq3$ with $g<1$, that is, $g=0$ or $g=\frac12$. This is a general property of the untwisted/$\mu$-twisted super topological recursion. Note that $\omega_{0,n|2m}(J|K)$ can be nonzero if one considers the super topological recursion of \cite{BO2}, i.e., the $\rho$-twisted super topological recursion.
\end{remark}
\section{Super Airy Structures}\label{sec:SAS}
In this section, we will reformulate the super topological recursion from an algebraic point of view. A key notion is super Airy structures introduced in \cite{SAS}. It has already been shown in \cite{BO2} that the super topological recursion on a $\rho$-\emph{twisted} super spectral curve is dual to a super Airy structure derived from a \emph{$\rho$-twisted} module of the $\mathcal{N}=1$ super Virasoro algebra. Here, our focus is on super Airy structures related to untwisted/$\mu$-twisted modules instead.
\subsection{Review of Super Airy Structures}
We first review the definition and properties of super Airy structures. This subsection very closely follows \cite[Section 4.1]{BO2}.
Let $U=U_0\oplus U_1\oplus\mathbb{C}^{0|1}$ be a super vector space of dimension $d+1$ over $\mathbb{C}$ (the super vector space could be infinite-dimensional, but for simplicity of presentation we assume here that it is finite-dimensional). We define $\{x^i\}_{i\in I}$, with $I=\{1,...,d\}$, to be linear coordinates on $U_0\oplus U_1$, and $x^0$ to be the coordinate on the extra $\mathbb{C}^{0|1}$; their parity is defined such that $|x^i|=0$ if $x^i\in U_0$, $|x^i|=1$ if $x^i\in U_1$, and $|x^0|=1$. Furthermore, let us denote by
\begin{equation}
\mathcal{D}_{\hbar}(U)=\mathbb{C}\llbracket \hbar^{\frac12},x^0,\hbar\partial_{x^0},\{x^i\}_{i\in I},\{\hbar\partial_{x^i}\}_{i\in I}\rrbracket
\end{equation}
the completed algebra of differential operators acting on $U$ where $\hbar$ is a formal variable. We then introduce a $\mathbb{Z}$-grading by
\begin{equation}
\deg(x^0)=\deg(x^i)=1,\;\;\;\deg(\hbar\partial_{x^0})=\deg(\hbar\partial_{x^i})=1,\;\;\;\;\deg(\hbar)=2.
\end{equation}
\begin{definition}[{\cite[Definition 2.3]{SAS}}]\label{def:SAS}
A \emph{super Airy structure} is a set of differential operators $\{H_i\}_{i\in I}\in\mathcal{D}_{\hbar}(U)$ such that:
\begin{enumerate}
\item for each $i\in I$, $H_i$ is of the form
\begin{equation}
H_i=\hbar\partial_{x^i}-P_i,\label{form}
\end{equation}
where $P_i\in\mathcal{D}_{\hbar}(U)$ has degree greater than 1 with $|P_i|=|x^i|$,
\item there exists $f_{ij}^k\in\mathcal{D}_{\hbar}(U)$ such that
\begin{equation}
[H_i,H_j]_s=\hbar\sum_{k\in I} f_{ij}^k \,H_k,\label{left}
\end{equation}
where $[\cdot,\cdot]_s$ is a super-commutator.
\end{enumerate}
\end{definition}
It is crucial that the $x^0$-dependence appears only in the $\{P_i\}_{i\in I}$, but not in the degree 1 term (there is no $H_0$). In other words, the dimension of the super vector space $U$ is one more than the number of differential operators $\{H_i\}_{i\in I}$. We thus call $x^0$ the \emph{extra variable}. We note that there is no notion of extra variables in the standard, nonsupersymmetric formalism of Airy structures.
\begin{theorem}[{\cite[Theorem 2.10]{SAS}}]\label{thm:SAS1}
Given a super Airy structure $\{H_i\}_{i\in I}$, there exists a unique formal power series $\hbar \mathcal{F}(x)\in\mathbb{C}\llbracket \hbar^{\frac12},x^0,(x^i)_{i\in I}\rrbracket$ (up to addition of terms in $\mathbb{C}\llbracket \hbar\rrbracket$) such that:
\begin{enumerate}
\item $\hbar \mathcal{F}(x)$ has no term of degree 2 or less,
\item every term in $\hbar \mathcal{F}(x)$ has even parity,
\item it satisfies $H_i\,e^{\mathcal{F}}=0$.
\end{enumerate}
\end{theorem}
$Z := e^{\mathcal{F}}$ is called the \emph{partition function} and $\mathcal{F}$ the \emph{free energy}. Note that $e^{\mathcal{F}}$ is not a power series in $\hbar$, and so one should replace condition (3) by $e^{-\mathcal{F}}\,H_i\,e^{\mathcal{F}}=0$, which gives a power series in $\hbar$. However, as is standard, we write $H_i\,e^{\mathcal{F}}=0$ for brevity.
Explicitly, $\mathcal{F}$ can be expanded as follows:
\begin{equation}
\mathcal{F}=\sum_{g\in\frac12\mathbb{Z}_{\geq0},\;n\in\mathbb{Z}_{\geq0}}^{2g+n>2}\frac{\hbar^{g-1}}{n!}\sum_{i_1,...,i_n\in \{0,I\}}F_{g,n}(i_1,...,i_n)\prod_{k=1}^nx^{i_k},
\end{equation}
where the restriction on the sum ($2g + n >2$) comes from condition (1) in Theorem~\ref{thm:SAS1}.
$F_{g,n}(i_1,...,i_n)$ is $\mathbb{Z}_2$-symmetric under permutations of indices\footnote{The original definition in \cite{SAS} considers only power series in $\hbar$ rather than $\hbar^{\frac12}$. However, there is no issue with extending the algebra by $\hbar^{\frac12}$ because $\deg\hbar^{\frac12}=1$. In particular, Theorem~\ref{thm:SAS1} still holds because its proof proceeds by induction on $\chi=2g+n$, and $\chi$ remains an integer even with $\hbar^{\frac12}$.}.
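For intuition, the simplest instance of Theorem~\ref{thm:SAS1} can be made fully explicit. The following purely bosonic, one-variable toy computation is our own illustration (it is not taken from \cite{SAS}; the operator $H=\hbar\partial_x-x^2/2$ is chosen only for simplicity, and with a single operator the commutator condition is vacuous), solving $H\,e^{\mathcal{F}}=0$ in closed form:

```python
# Toy illustration (not from the paper): the simplest purely bosonic
# one-variable "Airy structure", H = hbar*d/dx - x^2/2.  The constraint
# H e^F = 0 reduces to hbar*dF/dx = x^2/2, fixing F up to a constant.
import sympy as sp

x, hbar = sp.symbols('x hbar')
P = x**2 / 2                      # the degree-2 part of H = hbar*d/dx - P
F = sp.integrate(P, x) / hbar     # solve hbar*F' = P

# F = x^3/(6*hbar): in the expansion F = sum hbar^{g-1}/n! F_{g,n} x^n
# this is a single genus-0, 3-point coefficient F_{0,3} = 1.
assert sp.simplify(F - x**3/(6*hbar)) == 0

# Check the defining constraint e^{-F} H e^{F} = 0 acting on 1:
assert sp.simplify(hbar*sp.diff(F, x) - P) == 0
```

Note that $\mathcal{F}=x^3/6\hbar$ has no term of degree 2 or less, matching condition (1) of the theorem.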
\subsection{Super Virasoro Type}
We now take both the bosonic and fermionic vector spaces $U_0,U_1$ to be countably infinite dimensional. Also, we explicitly distinguish bosonic and fermionic coordinates, namely, we denote by $\{x^1,x^2,...\}$ and $\{\theta^1,\theta^2,...\}$ the coordinates on $U_0$ and $U_1$ respectively and $\theta^0\in\mathbb{C}^{0|1}$ is treated as the extra variable. In particular, all $\{\theta^0,\theta^1,\theta^2,...\}$ are Grassmann variables.
We then define differential operators $\{J_a\}_{a\in\mathbb{Z}}$ as
\begin{equation}
\forall a\in\mathbb{Z}_{>0},\;\;\;\;J_{a}=\hbar\frac{\partial}{\partial x^a},\;\;\;\;J_0=\tau_0+\hbar^{\frac12}Q_0,\;\;\;\;J_{-a}=ax^a.\label{Heis}
\end{equation}
where $\tau_0,Q_0\in\mathbb{C}$; note that this $J_0$ differs from the one in \cite[Section 4.2]{BO2}. For the Neveu-Schwarz sector (equivalently, the untwisted fermion), we define Grassmann differential operators $\{\Gamma_r\}_{r\in\mathbb{Z}+\frac12}$ by
\begin{equation}
\forall i\in\mathbb{Z}_{\geq0},\;\;\;\;\Gamma_{i+\frac12}=\hbar\frac{\partial}{\partial\theta^{i}},\;\;\;\;\Gamma_{-i-\frac12}=\theta^{i}.\label{Cliff}
\end{equation}
On the other hand, for the Ramond sector (equiv. $\mu$-twisted fermion), we define $\{\Gamma_r\}_{r\in\mathbb{Z}}$ by
\begin{equation}
\forall r\in\mathbb{Z}_{>0},\;\;\;\;\Gamma_{r}=\hbar\frac{\partial}{\partial\theta^{r}},\;\;\;\;\Gamma_0=\frac{\theta^0}{2}+\hbar\frac{\partial}{\partial\theta^0},\;\;\;\;\Gamma_{-r}=\theta^{r}.
\end{equation}
It is straightforward to see that the $J_a$ form a basis of the Heisenberg algebra, while the $\Gamma_a$ form a basis of the Clifford algebra:
\begin{equation}
[J_a,J_b]=a\,\hbar\,\delta_{a+b,0},\;\;\;\;[J_a,\Gamma_{b}]=0,\;\;\;\;\{\Gamma_{a},\Gamma_{b}\}=\hbar\,\delta_{a+b,0}.
\end{equation}
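As a quick consistency check (ours, not part of the paper), these relations can be realised concretely: the Heisenberg modes act on polynomials, and the Ramond zero mode $\Gamma_0$ acts on the two-dimensional space spanned by $\{1,\theta^0\}$, on which $\theta^0$-multiplication and $\partial_{\theta^0}$ become nilpotent $2\times2$ matrices:

```python
# Sanity check (illustrative only): the modes J_a and the Ramond zero mode
# Gamma_0 defined in the text satisfy the stated (anti)commutation relations.
import sympy as sp

hbar = sp.Symbol('hbar')
x1 = sp.Symbol('x1')

# Heisenberg: J_1 = hbar*d/dx^1, J_{-1} = x^1; test [J_1, J_{-1}] = hbar.
f = x1**3 + 2*x1                       # arbitrary test polynomial
J1 = lambda g: hbar*sp.diff(g, x1)
Jm1 = lambda g: x1*g
comm = J1(Jm1(f)) - Jm1(J1(f))
assert sp.simplify(comm - hbar*f) == 0

# Ramond Clifford: Gamma_0 = theta^0/2 + hbar*d/dtheta^0 acting on the
# basis {1, theta^0}; test {Gamma_0, Gamma_0} = hbar.
theta = sp.Matrix([[0, 0], [1, 0]])    # multiplication by theta^0
dtheta = sp.Matrix([[0, 1], [0, 0]])   # derivative d/dtheta^0
Gamma0 = theta/2 + hbar*dtheta
anticomm = Gamma0*Gamma0 + Gamma0*Gamma0
assert anticomm == hbar*sp.eye(2)
```

In particular, the factor $\frac12$ in $\Gamma_0$ is precisely what makes $\{\Gamma_0,\Gamma_0\}=\hbar$ rather than $2\hbar$.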
Using these differential operators, we define super Virasoro differential operators with background charge $Q_0$ (where $: \cdots :$ denotes normal ordering) as
\begin{align}
n\in\mathbb{Z},\;\;\;\;L_{n}&=\frac12\sum_{j\in\mathbb{Z}}: J_{-j}J_{n+j} : + \frac12\sum_{s\in\mathbb{Z}+f}(\frac n2+s) : \Gamma_{-s}\Gamma_{s+n} :\nonumber\\
&+\delta_{n,0}\delta_{F,R}\frac{\hbar}{16}-\frac{n+1}{2}\hbar^{\frac12}Q_0J_n,\label{L1}\\
r\in\mathbb{Z}+f,\;\;\;\;G_{r}&=\sum_{j\in\mathbb{Z}} :J_{-j}\Gamma_{j+r} :-\left(r+\frac12\right)\hbar^{\frac12}Q_0\Gamma_r,\label{G1}
\end{align}
where recall from \eqref{fandNSR} that $f=1/2$ in the Neveu-Schwarz sector and $f=0$ in the Ramond sector. Note that they generate the $\mathcal{N}=1$ super Virasoro algebra with $\hbar$ inserted \footnote{$\hbar$ is inserted in order to meet the criteria of super Airy structures \eqref{left}. Also, the central charge in \cite{BF,Ito} uses a different normalisation.}:
\begin{align}
[L_m,L_n]=&\hbar(m+n)L_{m+n}+\hbar\frac{c}{12}(m^3-m)\delta_{m+n,0},\\
[L_m,G_r]=&\hbar\left(\frac{m}{2}-r\right)G_{m+r},\\
\{G_r,G_s\}=&2\hbar L_{r+s}+\hbar \frac{c}{3}\left(r^2-\frac14\right)\delta_{r+s,0},\\
c=&\hbar\left(\frac{3}{2}-3Q_0^2\right).
\end{align}
In terms of representations of vertex operator algebras, these differential operators are in untwisted or $\mu$-twisted modules of the $\mathcal{N}=1$ super Virasoro algebra \cite{SAS}.
We now construct a super Airy structure $\tilde{\mathcal{S}}_F$ dual to a super spectral curve $\mathcal{S}_F$. Recall from Remark~\ref{rem:para} that a super spectral curve $\mathcal{S}_F$ is completely determined when one fixes all the parameters $(F,N,\tau_{l},\phi_{kl},\psi_{mn}^F,D_k,Q_l)$. Let us take the same parameters, and consider a set $\tilde{\mathcal{S}}_F=\{H_{i} ,F_{i}\}_{ i\in\mathbb{Z}_{\geq1}}$ of the following differential operators:
\begin{align}
\forall i\in\mathbb{Z}_{\geq1},\;\;\;\;H_{i} =& \Phi_N L_{N+i-1} \Phi_N^{-1}+\hbar \tilde{D}_i\nonumber\\
&-\frac12\sum_{k,l=1}^{N-1}\delta_{k+l,N+i-1}(\tau_{-k}+\hbar^{\frac12}Q_{-k})(\tau_{-l}+\hbar^{\frac12}Q_{-l}),\label{Hi}\\
F_{i} =& \Phi_N G_{N+i+f-1} \Phi_N^{-1},
\end{align}
where
\begin{equation}
\tilde D_i=\sum_{k=0}^{N-i+2f}\tau_{k-(N-1)}D_{i+k},\;\;\;\;D_{i>N+2f}=0,\label{Di}
\end{equation}
\begin{align}
\Phi_N:=&\exp\left(-\frac{1}{\hbar}\sum_{l=1}^{N-1}\frac{\tau_{-l}+\hbar^{\frac12}Q_{-l}}{l}J_{-l}\right)\nonumber\\
&\times\exp\left(\frac{1}{\hbar}\left(\sum_{l>0}\frac{\tau_l+\hbar^{\frac12}Q_l}{l}J_l+\sum_{l,k>0}\frac{\phi_{kl}}{2kl}J_kJ_l+\sum_{k,l\geq0}\frac{\psi^F_{kl}}{2}\Gamma_{k+f}\Gamma_{l+f}\right)\right).\label{Phi}
\end{align}
Note that the order of operators in $\Phi_N$ is important. That is, the conjugation with negative modes $(J_{-l})_{l\in\mathbb{Z}_{>0}}$ should act after all positive modes $(J_{l})_{l\in\mathbb{Z}_{>0}}$. When $N=1$ and $Q_l=0$ for all $l\in\mathbb{Z}$, $\Phi_1$ is identical to $\Phi$ in \cite[Section 4.2]{BO2}. However, we need to additionally consider conjugation by finitely many negative modes $J_{-l}$ in order to match with the super topological recursion when $N>1$.
\begin{proposition}\label{prop:SAS}
$\tilde{\mathcal{S}}_F$ forms a super Airy structure.
\end{proposition}
\begin{proof}
Since $\{L_n,G_r\}$ generates an $\mathcal{N}=1$ super Virasoro subalgebra, and since the $\Phi_N$-action is just conjugation, it is easy to see that $\tilde{\mathcal{S}}_F$ satisfies the second condition in Definition~\ref{def:SAS}. Importantly, since $L_N,...,L_{2N-1+2f}$ never appear on the right hand sides of the super Virasoro commutation relations, constant terms can freely be added to, or subtracted from, $L_N,...,L_{2N-1+2f}$ without changing the commutation relations\footnote{This point is already addressed in \cite[Section 5.1]{SAS} when $\tau_l=\delta_{l,-N+1}$ and $Q_l=\phi_{kl}=\psi_{kl}=0$.}. The second line in \eqref{Hi} is there to remove unwanted constant terms that appear in $ \Phi_N L_{N+i-1} \Phi_N^{-1}$.
We next show that there exists a linear transformation that brings $H_{i} ,F_{i}$ to the form of \eqref{form}. Note that $\Phi_N$ acts on Heisenberg modes $(J_a)_{a\in\mathbb{Z}}$ and $(\Gamma_r)_{r\in\mathbb{Z}+f}$ as:
\begin{align}
\Phi_N J_0 \Phi_N^{-1}=&J_0,\\
\forall a\in\mathbb{Z}_{\neq0}\;\;\;\;\;\;\;\Phi_N J_{-a} \Phi_N^{-1}=&J_{-a}+\sum_{b\geq1}\frac{\phi_{ab}}{b}J_b\nonumber\\
&+\tau_{a}+\hbar^{\frac12}Q_{a}+\sum_{b=1}^{N-1}\frac{(\tau_{-b}+\hbar^{\frac12}Q_{-b})\phi_{ab}}{b},\label{PhiJ}\\
\forall r\in\mathbb{Z}+f\;\;\;\;\Phi_N \Gamma_{-r}\Phi_N^{-1}=&\Gamma_{-r}+\sum_{s\in\mathbb{Z}_{\geq0}}\psi_{r-f, s}\Gamma_{s+f},\label{PhiGamma}
\end{align}
where we conventionally defined $\phi_{0,k}=0$, and $\tau_{l-N+1}=Q_{l-N+1}=\phi_{l,k}=\psi_{l,k}=0$ for $l\in\mathbb{Z}_{<0}$. Using \eqref{PhiJ} and \eqref{PhiGamma}, one can explicitly write $H_{i} ,F_{i}$ as
\begin{align}
H_i&=\sum_{k\in\mathbb{Z}_{\geq0}} C_k J_{i+k}+\hbar^{\frac12}\sum_{k\in\mathbb{Z}_{\geq0}} C'_k J_{i+k}-\frac{N+i}{2}Q_0\hbar^{\frac12}J_{i+N-1}\nonumber\\
&\;\;\;\;+\frac12\sum_{j,k\in\mathbb{Z}_{\neq0}}C_i^{j,k|}:J_jJ_k:+\frac12\sum_{j,k\in\mathbb{Z}}C_i^{|j,k}:\Gamma_{j+f}\Gamma_{k+f}:+\hbar \tilde D_i\delta_{i\leq N},\\
F_i&=\sum_{k\in\mathbb{Z}_{\geq0}} C_k \Gamma_{i+k+f}+\hbar^{\frac12}\sum_{k\in\mathbb{Z}_{\geq0}} C'_k \Gamma_{i+k+f}\nonumber\\
&-\left(N+i-\frac{1-2f}{2}\right)Q_0\hbar^{\frac12}\Gamma_{N+i+f-1}+\sum_{j,k\in\mathbb{Z}}C_i^{j|k}:J_j\Gamma_{k+f}:,
\end{align}
where
\begin{align}
C_k=&\tau_{k-N+1}+\sum_{p=1}^{N-1}\frac{\tau_{-p}\phi_{p, k-N+1}}{p}\label{C0},\\
C'_k=&Q_{k-N+1}+\sum_{p=1}^{N-1}\frac{Q_{-p}\phi_{p, k-N+1}}{p}\label{C00},\\
C_i^{j,k|}=&\delta_{j+k,N+i-1}+\frac{\phi_{j,k-N-i+1}}{j}+\frac{\phi_{j-N-i+1,k}}{k},\label{C1}\\
C_i^{|j,k}=&\frac12\Bigl((k-j)\delta_{j+k+2f,i+N-1}+(2k+2f-N-i+1)\psi_{j,k-N-i+1}\nonumber\\
&-(2j+2f-N-i+1)\psi_{k,j-N-i+1}\Bigr),\label{C2}\\
C_i^{j|k}=&\delta_{j+k,N+i-1}+\frac{\phi_{j,k-N-i+1}}{j}+\psi_{k,j-N-i+1-2f}.\label{C3}
\end{align}
Recall that $\tau_{-(N-1)}\neq0$, which implies $C_0\neq0$. Then, from the degree 1 terms in $H_i$ and $F_i$, one notices that there exists an (infinite-dimensional) upper triangular matrix that takes $H_i,F_i$ to $\bar H_i, \bar F_i$ of the form \eqref{form}, that is,
\begin{equation}
\bar H_i=J_i+\hbar D_i+\text{deg. 2 terms},\;\;\;\;\bar F_i=\Gamma_{i+f}+\text{deg. 2 terms}.\label{diagonal}
\end{equation}
It is important that there is only one $D_i$ for each $i$ in $\bar H_i$. This is exactly why we define $\tilde D_i$ by \eqref{Di}. Therefore, $\{H_{i} ,F_{i}\}_{ i\in\mathbb{Z}_{\geq1}}$ and $\{\bar H_{i} ,\bar F_{i}\}_{ i\in\mathbb{Z}_{\geq1}}$ are related by a linear transformation, and this proves that $\tilde{\mathcal{S}}_F$ forms a super Airy structure.
\end{proof}
It is worth emphasizing that the defining data of a super spectral curve is in a one-to-one correspondence with that of a super Airy structure of Proposition~\ref{prop:SAS}. Thanks to Theorem~\ref{thm:SAS1}, there exists a unique partition function $Z_F$ and free energy $\mathcal{F}_F= \log Z_F$ in the form:
\begin{equation}
\mathcal{F}_F=\sum_{g,n,m\geq0}^{2g+n+2m>2}\frac{\hbar^{g-1}}{n!(2m)!}\sum_{\substack{i_1,...,i_n>1\\j_1,...,j_{2m}\geq0}}F_{g,n|2m}(i_1,...,i_n|j_1,...,j_{2m})\prod_{k=1}^nx^{i_k}\prod_{l=1}^{2m}\theta^{j_l},\label{SASF}
\end{equation}
and such that \footnote{Since $\{\bar H_i,\bar F_i\}$ and $\{ H_i,F_i\}$ are linearly related by some upper triangular matrix, the resulting differential constraints $H_ie^{\mathcal{F}}= F_ie^{\mathcal{F}}=0$ are equivalent to $\bar H_ie^{\mathcal{F}_F}=\bar F_ie^{\mathcal{F}_F}=0$.\label{footnote:equiv}}
\begin{equation}
\forall \;i\in\mathbb{Z}_{>0},\;\;\;\;H_iZ_F=F_iZ_F=0.\label{SVconstraints}
\end{equation}
Note that $F_{g,n|2m}$ is symmetric under permutations of the first $n$ entries and antisymmetric under permutations of the last $2m$ entries, with no further symmetry. Now, since a choice of parameters $(F,N,\tau_{l},\phi_{kl},\psi_{mn}^F,D_k,Q_l)$ uniquely fixes a pair of a super spectral curve and a super Airy structure, one may ask: is there any relation between $F_{g,n|2m}$ and the $\omega_{g,n|2m}$ of Definition~\ref{def:STR}? The following is the main theorem of the present paper, which provides an explicit dictionary between $F_{g,n|2m}$ and $\omega_{g,n|2m}$:
\begin{theorem}\label{thm:main}
~
\begin{enumerate}
\item
Consider the super Airy structure $\tilde{\mathcal{S}}_F$ in Proposition \ref{prop:SAS}, defined in terms of parameters $(F,N,\tau_{l},\phi_{kl},\psi_{mn}^F,D_k,Q_l)$. Let
\begin{equation}
F_{g,n|2m}(i_1,\ldots , i_n | j_1, \ldots, j_{2m} )
\end{equation}
be the coefficients of the associated unique free energy $\mathcal{F}_F$.
\item
Let $\mathcal{S}_F$ be the super spectral curve defined with the same parameters. Consider an infinite sequence of multilinear differentials $\omega_{g,n|2m}$ that respect the polarization,
and that satisfy the abstract super loop equations Definition \ref{def:SLE}. We expand the differentials in terms of the polarised basis as:
\begin{align}
\omega_{g,n|2m}(I|J)=\sum_{\substack{i_1,...,i_n>1\\j_1,...,j_{2m}\geq0}}\hat F_{g,n|2m}(i_1,...,i_n|j_1,...,j_{2m})\bigotimes_{k=1}^n d\xi_{-i_k}(z_k)\otimes\bigotimes_{l=1}^{2m}\eta_{-j_l-f}(u_l,\theta_l) .\label{thm}
\end{align}
\end{enumerate}
Then, for all $g,n,m$, and indices $i_1, \ldots, i_n$ and $j_1, \ldots, j_{2m}$,
\begin{equation}
\hat F_{g,n|2m}(i_1,...,i_n|j_1,...,j_{2m}) = F_{g,n|2m}(i_1,\ldots , i_n | j_1, \ldots, j_{2m} ).\label{F=F}
\end{equation}
\end{theorem}
\begin{proof}
Since the proof is based on computations, let us first briefly explain how it goes before turning to the tedious computations. We discussed two constraints in this section and the previous one: the abstract super loop equations (Definition~\ref{def:SLE}), and the super Virasoro-like constraints \eqref{SVconstraints} for the partition function of a super Airy structure. The task is to manipulate these two constraints so as to obtain one set of recursive equations for $\hat{F}_{g,n|2m}$ from Definition~\ref{def:SLE}, and another set for $F_{g,n|2m}$ from \eqref{SVconstraints}. One finds that these two sets of recursive equations are precisely the same; hence Theorem~\ref{thm:SAS1} implies that \eqref{F=F} holds. We now turn to the computations.
Let us consider the differential constraints given by the operators $\bar H_i,\bar F_i$ defined in \eqref{diagonal}. Then for $(g,n,m)=(1,1,0)$, it is easy to see that $\bar H_ie^{\mathcal{F}}=\bar F_ie^{\mathcal{F}}=0$ gives
\begin{equation}
F_{1,1|0}(i)=D_i.\label{F11D}
\end{equation}
See, for example, \cite[Theorem 2.20]{SAS} for a justification of this step. On the other hand, $\omega_{1,1|0}$ is part of the defining data of the super spectral curve. Thus, \eqref{thm} holds for $(g,n,m)=(1,1,0)$. Note that for $(g,n,m)=(0,3,0)$ and $(g,n,m)=(0,1,2)$, one can also show that $F_{0,3|0}=F_{0,1|2}=0$.
For any other $(g,n,m)$ with $2g+n+2m-2>0$, the strategy is the same as in the proof given in \cite[Appendix A.3]{BO2}. It turns out to be more convenient to consider the constraints coming from $H_ie^{\mathcal{F}}= F_ie^{\mathcal{F}}=0$ in order to match with the super topological recursion (see Footnote~\ref{footnote:equiv}). Let us denote by $I=\{i_1,i_2,...\}$ a collection of positive integers and by $J=\{j_1,j_2,...\}$ a collection of nonnegative integers\footnote{In the previous section, we denoted by $I,J$ collections of bosonic and fermionic variables. We abuse the notation here, but one should be able to decode from the context whether they are collections of indices or of variables.}. Then, we introduce the following quantities:
\begin{align}
\Xi_{g,n+1|2m}[i,I|J]&=\sum_{k\geq0}C_kF_{g,n+1|2m}(i+k,I|J)+\sum_{k\geq0}C'_kF_{g-\frac12,n+1|2m}(i+k,I|J)\nonumber\\
&\;\;\;\;-\frac12 Q_0(i+N)F_{g-\frac12,n+1|2m}(i+N-1,I|J),\\
\Xi_{g,n|2m}[I|i,J]&=\sum_{k\geq0}C_kF_{g,n|2m}(I|k+i,J)+\sum_{k\geq0}C'_kF_{g-\frac12,n|2m}(I|k+i,J)\nonumber\\
&\;\;\;\;-Q_0(N+i-\frac{1-2f}{2})F_{g-\frac12,n|2m}(I|i+N-1,J),\\
\Xi_{g,n|2m}[k,l,I|J]&=F_{g-1,n+2|2m}(k,l,I|J)\nonumber\\
&\;\;\;\;+\sum_{g_1+g_2=g}\sum_{\substack{I_1\cup I_2=I \\ J_1\cup J_2=J}}(-1)^{\rho}F_{g_1,n_1+1|2m_1}(k,I_1|J_1)F_{g_2,n_2+1|2m_2}(l,I_2|J_2),\\
\Xi_{g,n|2m}[I|k,l,J]&=-F_{g-1,n|2m+2}(I|k,l,J)\nonumber\\
&\;\;\;\;+\sum_{g_1+g_2=g}\sum_{\substack{I_1\cup I_2=I \\ J_1\cup J_2=J}}(-1)^{\rho}F_{g_1,n_1|2m_1}(I_1|k,J_1)F_{g_2,n_2|2m_2}(I_2|l,J_2),
\end{align}
\begin{align}
\Xi_{g,n|2m}[k,I|l,J]&=F_{g-1,n+1|2m}(k,I|l,J)\nonumber\\
&\;\;\;\;+\sum_{g_1+g_2=g}\sum_{\substack{I_1\cup I_2=I \\ J_1\cup J_2=J}}(-1)^{\rho}F_{g_1,n_1+1|2m_1}(k,I_1|J_1)F_{g_2,n_2|2m_2}(I_2|l,J_2).
\end{align}
Then, order by order in $\hbar$ as well as in the variables $x^j,\theta^j$, we find from $H_iZ=0$ a sequence of constraints on the free energy $F_{g,n+1|2m}$ for $2g+n+2m-2>0$ with $(g,n,m)\neq(1,0,0)$ as follows:
\begin{align}
0=&\,\Xi_{g,n+1|2m}[i,I|J]+\sum_{k,l\geq0}\left(C_i^{k,l|}\Xi_{g,n|2m}[k,l,I|J]+C_i^{|k,l}\Xi_{g,n|2m}[I|k,l,J]\right)\nonumber\\
&+\sum_{k\geq0}\left(\sum_{l=1}^ni_lC_i^{-i_l,k|}F_{g,n|2m}(k,I\backslash i_l|J)+\sum_{l=1}^{2m}(-1)^{l-1}\frac{C_i^{|-j_l-2f,k}}{{1+\delta_{f,0}\delta_{j_l,0}}}F_{g,n|2m}(I|k,J\backslash j_l)\right),\label{BSAS}
\end{align}
where the $\delta_{f,0}\delta_{j_l,0}$ is a consequence of the fermionic zero mode $\Gamma_0$. Similarly, $F_iZ=0$ gives a sequence of constraints on $F_{g,n|2m}$ for $2g+n+2m-2>1$ with $m\geq1$:
\begin{align}
0=&\,\Xi_{g,n|2m}[I|i,J]+\sum_{k,l\geq0}C_i^{k|l}\Xi_{g,n|2m}[k,I|l,J]\nonumber\\
&+\sum_{k\geq0}\left(\sum_{l=1}^ni_lC_i^{-i_l|k}F_{g,n-1|2m}(I\backslash i_l|k,J)+\sum_{l=1}^{2m-1}(-1)^{l-1}\frac{C_i^{k|-j_l-2f}}{1+\delta_{f,0}\delta_{j_l,0}}F_{g,n+1|2m-2}(k,I|J\backslash j_l)\right).\label{FSAS}
\end{align}
We will now show that exactly the same equations can be derived for $\hat F_{g,n|2m}$ for $2g+n+2m-2>0$ with $(g,n,m)\neq(1,1,0)$ from the abstract super loop equations, which imply that
\begin{align}
\forall i\in\mathbb{Z}_{\geq1},\;\;\;\;0&=\underset{z=0}{\text{Res}}\,\frac{z^{i+N}}{dz}\left(\mathcal{Q}_{g,n+1|2m}^{BB}(z,I|J)+\mathcal{Q}_{g,n+1|2m}^{FF}(z,I|J)\right),\label{QB2}\\
0&=\underset{z=0}{\text{Res}}\,\frac{z^{i+N+2f-1}\Theta^{F}_z}{dz}\,\mathcal{Q}_{g,n|2m}^{BF}(I|z,J),\label{QF2}
\end{align}
where the extra power of $z^{2f-1}$ is inserted because $(\Theta_z^R)^2=zdz$ whereas $(\Theta_z^{NS})^2=dz$.
Let us compute terms that involve $\omega_{0,1|0}$ in \eqref{QB2}. Since $\omega_{g,n|2m}$ respects polarization by definition, we find that
\begin{align}
&\underset{z=0}{\text{Res}}\,\frac{z^{i+N}}{dz}\omega_{0,1|0}(z|)\omega_{g,n+1|2m}(z,I|J)\nonumber\\
&=\sum_{I,J}\sum_{l>-(N-1)}\sum_{i_0>0}\underset{z=0}{\text{Res}}\,z^{i+N}\left(z^{l-1}+\sum_{p>0}\frac{\phi_{l p}}{l}z^{p-1}\right)\left(z^{-i_0-1}+\sum_{q>0}\frac{\phi_{i_0 q}}{i_0}z^{q-1}\right)dz\nonumber\\
&\hspace{10mm}\times \tau_l\hat F_{g,n+1|2m}(i_0,I|J)\bigotimes_{k=1}^nd\xi_{-i_k}(z_k)\bigotimes_{l=1}^{2m}\eta_{-j_l}(u_l,\theta_l)\nonumber\\
&=\sum_{I,J}\sum_{k\geq0}C_k\hat F_{g,n+1|2m}(i+k,I|J)\bigotimes_{c=1}^nd\xi_{-i_c}(z_c)\bigotimes_{l=1}^{2m}\eta_{-j_l}(u_l,\theta_l),\label{CkF}
\end{align}
where we used that $\phi_{kl}=0$ for any $l\leq0$, and $C_k$ agrees with \eqref{C0}. Note that $C_k$ depends on $\phi_{kl}$ due to the nonzero $\tau_{l}$ with $l<1$, unlike (A.37) in \cite[Appendix A.2]{BO2}. Next, the terms with $\omega_{\frac12,1|0}$ in \eqref{QB2} are
\begin{align}
&\underset{z=0}{\text{Res}}\,\frac{z^{i+N}}{dz}\omega_{\frac12,1|0}(z|)\omega_{g-\frac12,n+1|2m}(z,I|J)\nonumber\\
&=\sum_{I,J}\sum_{k\geq0}C'_k\hat F_{g-\frac12,n+1|2m}(i+k,I|J)\bigotimes_{c=1}^nd\xi_{-i_c}(z_c)\bigotimes_{l=1}^{2m}\eta_{-j_l}(u_l,\theta_l).\label{CkF2}
\end{align}
Also, the $Q_0$-dependent terms in \eqref{QB2} give
\begin{align}
&\frac12\left(\underset{\tilde z\rightarrow0}{{\rm Res}}\,\omega_{\frac12,1|0}(\tilde z|)\right)\mathcal{D}_z\cdot\omega_{g-\frac12,n+1|2m}(z,I|J)\nonumber\\
&=-\sum_{I,J}\frac12Q_0(i+N)\hat F_{g-\frac12,n+1|2m}(N+i-1,I|J)\bigotimes_{c=1}^nd\xi_{-i_c}(z_c)\bigotimes_{l=1}^{2m}\eta_{-j_l}(u_l,\theta_l).\label{CkF3}
\end{align}
The sum of \eqref{CkF}, \eqref{CkF2}, and \eqref{CkF3} precisely agrees with the first term in \eqref{BSAS} when we expand term by term with respect to the basis $\bigotimes d\xi_I\otimes\bigotimes\eta_J$.
Similarly, the terms involving $\omega_{0,1|0}$, $\omega_{\frac12,1|0}$, and the $Q_0$-dependent terms in \eqref{QF2} are respectively computed as
\begin{align}
&\underset{z=0}{\text{Res}}\,\frac{z^{i+N+2f-1}\Theta^{F}_z}{dz}\omega_{0,1|0}(z|)\omega_{g,n|2m}(I|z,J)\nonumber\\
&=\sum_{I,J}\sum_{k\geq0}C_k\hat F_{g,n|2m}(I|i+k,J)\bigotimes_{c=1}^nd\xi_{-i_c}(z_c)\bigotimes_{l=2}^{2m}\eta_{-j_l}(u_l,\theta_l),\label{CkF4}
\end{align}
\begin{align}
&\underset{z=0}{\text{Res}}\,\frac{z^{i+N+2f-1}\Theta^{F}_z}{dz}\omega_{\frac12,1|0}(z|)\omega_{g-\frac12,n|2m}(I|z,J)\nonumber\\
&=\sum_{I,J}\sum_{k\geq0}C'_k\hat F_{g-\frac12,n|2m}(I|i+k,J)\bigotimes_{c=1}^nd\xi_{-i_c}(z_c)\bigotimes_{l=2}^{2m}\eta_{-j_l}(u_l,\theta_l).\label{CkF5}
\end{align}
\begin{align}
&\underset{z=0}{\text{Res}}\,\frac{z^{i+N+2f-1}\Theta^{F}_z}{dz}\left(\underset{\tilde z\rightarrow0}{{\rm Res}}\,\omega_{\frac12,1|0}(\tilde z|)\right)\left(\mathcal{D}_z\cdot\omega_{g-\frac12,n|2m}(I|z,J)+\frac{1-2f}{2}d\xi_0(z)\omega_{g-\frac12,n|2m}(I|z,J)\right)\nonumber\\
&=-\sum_{I,J}Q_0\left(N+i-\frac{1-2f}{2}\right)\hat F_{g-\frac12,n|2m}(I|N+i-1,J)\bigotimes_{c=1}^nd\xi_{-i_c}(z_c)\bigotimes_{l=2}^{2m}\eta_{-j_l}(u_l,\theta_l).\label{CkF6}
\end{align}
The sum of \eqref{CkF4}, \eqref{CkF5}, and \eqref{CkF6} precisely agrees with the first term in \eqref{FSAS} when we expand term by term with respect to the basis $\bigotimes d\xi_I\otimes\bigotimes\eta_J$.
The computations for the rest of the terms are completely parallel to those in \cite[Appendix A.2]{BO2}, and are even simpler here thanks to the absence of the involution operator $\sigma$. Thus, we omit trivial yet tedious computations and refer the reader to \cite{BO2} -- as a computational note, the difference between $(\Theta_z^R)^2=zdz$ and $(\Theta_z^{NS})^2=dz$ should be treated carefully. In the end, one finds that the $\hat{F}_{g,n|2m}$ satisfy precisely the same set of equations as the $F_{g,n|2m}$ do, i.e., \eqref{BSAS} and \eqref{FSAS}. Since uniqueness of the solution is clear, $\hat{F}_{g,n|2m}=F_{g,n|2m}$. This proves Theorem~\ref{thm:main}.
\end{proof}
Theorem~\ref{thm:SAS1} and Theorem~\ref{thm:main} immediately show that a unique solution to the untwisted/$\mu$-twisted abstract super loop equations that respects the polarization exists. Thus, we have:
\begin{corollary}\label{coro:main}
There exists a solution to the untwisted/$\mu$-twisted abstract super loop equations that respects the polarization, and it is uniquely constructed by the untwisted/$\mu$-twisted super topological recursion of Definition~\ref{def:STR}.
\end{corollary}
\begin{remark}
If one drops all fermionic dependence from the discussion (not only $\psi_{kl}$ but also all fermionic modes $\Gamma_i$ and the vector spaces $V_z^F$), one gets the topological recursion without branch covers. In particular, Theorem~\ref{thm:main} holds for any positive integer $N$, which is more general than the analogous recursion for $r=2$ in \cite{BBCC}.
\end{remark}
\section{Applications to Superconformal Blocks}\label{sec:SCB}
In the previous section, we presented two dual ways of solving untwisted/$\mu$-twisted abstract super loop equations. In this section, we will apply them to compute the so-called Gaiotto vectors for $\mathcal{N}=1$ superconformal blocks. Thanks to the AGT correspondence, this realises an interesting application of the untwisted/$\mu$-twisted super topological recursion to $\mathcal{N}=2$ pure $U(2)$ supersymmetric gauge theory on $\mathbb{C}^2/\mathbb{Z}_2$.
\subsection{Gaiotto Vectors}
We will define the Gaiotto vector in the Neveu-Schwarz sector and in the Ramond sector in this section. Since we would like to adapt the results of \cite{BF,Ito} to connect with four-dimensional supersymmetric gauge theories, we follow the presentation of \cite{BF,Ito} in part. In particular, we say Neveu-Schwarz/Ramond sector instead of untwisted/$\mu$-twisted sector.
\subsubsection{Gaiotto vector in the Neveu-Schwarz Sector}
Let $\{L_n,G_r\}$ be generators of the $\mathcal{N}=1$ super Virasoro algebra in the Neveu-Schwarz sector of central charge $c$, that is, $n\in\mathbb{Z}$ and $r\in\mathbb{Z}+\frac12$, and we denote by $\mathcal{V}_{\Delta,NS}$ the Verma module of highest weight $\Delta$. Note that since we are interested in the Verma module corresponding to super Liouville field theory \eqref{L1}, \eqref{G1}, $c$ and $\Delta$ are given by
\begin{equation}
c=\hbar\left(\frac32-3Q_0^2\right),\;\;\;\;\Delta=\frac12(\tau_0^2-\hbar Q_0^2).\label{weightNS}
\end{equation}
Then, the highest weight state $\ket{\Delta}$ satisfies the following conditions:
\begin{equation}
L_0\ket{\Delta}=\Delta\ket{\Delta},\;\;\;\;\forall n,r>0\;\;\;\;L_n\ket{\Delta}=G_r\ket{\Delta}=0.
\end{equation}
We consider the $L_0$-eigenvalue decomposition $\mathcal{V}_{\Delta,NS}=\bigoplus_{M}\mathcal{V}_{\Delta,NS}^M$ where $M\in\frac12\mathbb{Z}_{\geq0}$, and each $\mathcal{V}_{\Delta,NS}^{M}$ is given as
\begin{equation}
\mathcal{V}_{\Delta,NS}^{M}=\text{Span}\left\{\prod_{i=1}^k\prod_{j=1}^l L_{-n_i}G_{-r_j}\ket{\Delta}\right\},
\end{equation}
where
\begin{equation}
n_1\geq\cdots\geq n_k>0,\;\;\;\;r_1>\cdots> r_l>0,\;\;\;\;\sum_{i=1}^kn_i+\sum_{j=1}^l r_j=M
\end{equation}
For $M\in\frac12\mathbb{Z}_{\geq0}$, let us assume that there exists a set of vectors $\ket{M}\in\mathcal{V}_{\Delta,NS}^{M}$ satisfying
\begin{equation}
L_1\ket{M}=\ket{M-1},\;\;\;\;\forall n, r>1\;\;\;\;L_n\ket{M}=G_r\ket{M}=0,\label{WhittakerNS}
\end{equation}
where $\ket{0}=\ket{\Delta}$. Note that we do \emph{not} impose anything on the action of $G_{\frac12}$. However, if one defines another set of vectors $\widetilde{\ket{M}}\in\mathcal{V}_{\Delta,NS}^M$ by
\begin{equation}
\widetilde{\ket{M-\frac12}}:=G_{\frac12}\ket{M},
\end{equation}
then it is easy to show from \eqref{WhittakerNS} that
\begin{equation}
L_1\widetilde{\ket{M}}=\widetilde{\ket{M-1}},\;\;\;\;\forall n, r>1\;\;\;\;L_n\widetilde{\ket{M}}=G_r\widetilde{\ket{M}}=0.\label{WhittakerNS1}
\end{equation}
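For instance, the first condition in \eqref{WhittakerNS1} can be checked directly from the Neveu-Schwarz super Virasoro commutator $[L_m,G_r]=\left(\frac{m}{2}-r\right)G_{m+r}$:
\begin{equation}
L_1\widetilde{\ket{M-\tfrac12}}=L_1G_{\frac12}\ket{M}=G_{\frac12}L_1\ket{M}+\left[L_1,G_{\frac12}\right]\ket{M}=G_{\frac12}\ket{M-1}=\widetilde{\ket{M-\tfrac32}},
\end{equation}
since $[L_1,G_{\frac12}]=0$, and the remaining conditions follow similarly.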
Therefore, \eqref{WhittakerNS} is equivalent to \cite[Eq.~(3.15)]{BF}.
We are now ready to define the Gaiotto vector in the Neveu-Schwarz sector:
\begin{definition}[{\cite[Section 3.1]{BF}}]
Let us assume that there exists a set of vectors $\{\ket{M}\}\in\mathcal{V}_{\Delta,NS}^{M}$ satisfying \eqref{WhittakerNS}. Then, for a formal variable $\Lambda\in\mathbb{C}$, the ``Gaiotto vector in the Neveu-Schwarz sector'' $\ket{G}_{NS}\in\mathcal{V}_{\Delta,NS}$ is defined by
\begin{equation}
\ket{G}_{NS}:=\sum_{M\in\frac12\mathbb{Z}_{\geq0}}\Lambda^{2M}\ket{M}.
\end{equation}
\end{definition}
The Gaiotto vector naturally arises in the so-called ``Gaiotto limit'', ``Whittaker limit'', or ``irregular limit'' in the context of superconformal blocks. We refer the reader to \cite{BF,Ito} and references therein for further details. Notice that one can show from \eqref{WhittakerNS} that the Gaiotto vector satisfies:
\begin{equation}
L_1\ket{G}_{NS}=\Lambda^2\ket{G}_{NS},\;\;\;\;\forall n,r>1\;\;\;\;L_n\ket{G}_{NS}=G_r\ket{G}_{NS}=0.\label{WhittakerNS2}
\end{equation}
Indeed, one can alternatively take \eqref{WhittakerNS2} as a definition of the Gaiotto vector\footnote{Without supersymmetry, a vector $\ket{w}$ in the Verma module satisfying $L_1 \ket{w}=\Lambda\ket{w}$ and $L_{n\geq2}\ket{w}=0$ is called the ``Whittaker vector'' whereas the corresponding Gaiotto vector is rather defined as a formal sum of some cohomology classes of an appropriate instanton moduli space \cite[Section 2.1]{BBCC}. Thus, strictly speaking, one may call $\ket{G}$ in Definition~\ref{def:GaiottoNS} the Whittaker vector instead of the Gaiotto vector, though the AGT correspondence states that these two vectors are equivalent to each other. In the present paper, however, we call it the Gaiotto vector in order to emphasise a relation to four-dimensional supersymmetric gauge theory.}.
\begin{definition}\label{def:GaiottoNS}
Given a set of vectors $\ket{M}\in\mathcal{V}_{\Delta,NS}^{M}$ for every $M\in\frac12\mathbb{Z}_{\geq0}$, we consider a vector $\ket{G}_{NS}\in\mathcal{V}_{\Delta,NS}$ as a formal power series in $\Lambda$ by
\begin{equation}
\ket{G}_{NS}:=\sum_{M\in\frac12\mathbb{Z}_{\geq0}}\Lambda^{2M}\ket{M}.
\end{equation}
Then, $\ket{G}_{NS}$ is said to be the ``Gaiotto vector in the Neveu-Schwarz sector'' if it satisfies
\begin{equation}
L_1\ket{G}_{NS}=\Lambda^2\ket{G}_{NS},\;\;\;\;\forall n,r>1\;\;\;\;L_n\ket{G}_{NS}=G_r\ket{G}_{NS}=0.\label{WhittakerNS3}
\end{equation}
\end{definition}
One can easily derive \eqref{WhittakerNS} from \eqref{WhittakerNS3} order by order in $\Lambda$. We call a vector $\ket{M}$ satisfying \eqref{WhittakerNS} the ``Gaiotto vector of level $M$ in the Neveu-Schwarz sector''. We note that existence of such a vector is not \emph{a priori} guaranteed.
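To illustrate \eqref{WhittakerNS} at the lowest nontrivial integer level, note that $G_{-\frac12}^2=L_{-1}$, so the level-$1$ subspace $\mathcal{V}_{\Delta,NS}^{1}$ is spanned by $L_{-1}\ket{\Delta}$ alone. Assuming $\Delta\neq0$, the condition $L_1\ket{1}=\ket{0}$ together with $L_1L_{-1}\ket{\Delta}=2L_0\ket{\Delta}=2\Delta\ket{\Delta}$ uniquely fixes
\begin{equation}
\ket{1}=\frac{1}{2\Delta}L_{-1}\ket{\Delta},\;\;\;\;\braket{1|1}=\frac{1}{2\Delta},
\end{equation}
and one easily checks $L_2\ket{1}=G_{\frac32}\ket{1}=0$ as well. According to the conjecture stated below, $\braket{1|1}$ should reproduce the $2$-instanton contribution $Z_{{\rm Nek}}^{2,(1,1)}$.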
\subsubsection{Gaiotto Vector in the Ramond Sector}
Let $\{L_n,G_r\}$ be generators of the $\mathcal{N}=1$ super Virasoro algebra in the Ramond sector of central charge $c$, that is, $n,r\in\mathbb{Z}$, and we denote by $\mathcal{V}_{\Delta,R}$ the Verma module of highest weight $\Delta$. Note that in the Ramond sector, $c$ and $\Delta$ are given by
\begin{equation}
c=\hbar\left(\frac32-3Q^2\right),\;\;\;\;\Delta=\frac{\hbar}{16}+\frac12(\tau_0^2-\hbar Q^2)\label{weightR}
\end{equation}
Unlike the Neveu-Schwarz sector, there are two highest weight states $\ket{\Delta}_{\pm}$ satisfying the following conditions:
\begin{equation}
L_0\ket{\Delta}_{\pm}=\Delta\ket{\Delta}_{\pm},\;\;\;\;G_0\ket{\Delta}_{\pm}=\sqrt{\Delta-\frac{c}{24}}\ket{\Delta}_{\mp},\;\;\;\;\forall n,r>0\;\;\;\;L_n\ket{\Delta}_{\pm}=G_r\ket{\Delta}_{\pm}=0.
\end{equation}
We consider the $L_0$-eigenvalue decomposition $\mathcal{V}_{\Delta,R}=\bigoplus_{M}\mathcal{V}_{\Delta,R}^M$ where $M\in\mathbb{Z}_{\geq0}$ in the Ramond sector, and each $\mathcal{V}_{\Delta,R}^{M}$ is given as
\begin{equation}
\mathcal{V}_{\Delta,R}^{M}=\text{Span}\left\{\prod_{i=1}^k\prod_{j=1}^l L_{-n_i}G_{-r_j}\ket{\Delta}_{\pm}\right\},
\end{equation}
where
\begin{equation}
n_1\geq\cdots\geq n_k>0,\;\;\;\;r_1>\cdots> r_l>0,\;\;\;\;\sum_{i=1}^kn_i+\sum_{j=1}^l r_j=M
\end{equation}
We now define the Gaiotto vectors in the Ramond sector similar to Definition~\ref{def:GaiottoNS}:
\begin{definition}[{\cite[Section 3]{Ito}}]\label{def:GaiottoR}
Given sets of vectors $\ket{M}_{\pm}\in\mathcal{V}_{\Delta,R}^{M}$ for every $M\in\mathbb{Z}_{\geq0}$, we consider two vectors $\ket{G}_{R\pm}\in\mathcal{V}_{\Delta,R}$ as formal power series in $\Lambda$ by
\begin{equation}
\ket{G}_{R\pm}:=\sum_{M\in\mathbb{Z}_{\geq0}}\Lambda^{2M}\ket{M}_{\pm}.
\end{equation}
Then, $\ket{G}_{R\pm}$ are said to be the ``Gaiotto vectors in the Ramond sector'' if they satisfy
\begin{equation}
L_1\ket{G}_{R\pm}=\frac{\Lambda^2}{2}\ket{G}_{R\pm},\;\;\;\;G_1\ket{G}_{R\pm}=0,\;\;\;\;\forall n,r>1\;\;\;\;L_n\ket{G}_{R\pm}=G_r\ket{G}_{R\pm}=0.\label{WhittakerR3}
\end{equation}
\end{definition}
Note that there are two Gaiotto vectors $\ket{G}_{R\pm}$ and they encode exactly the same information. We call $\ket{G}_{R+}$ the ``bosonic Gaiotto vector'' and $\ket{G}_{R-}$ the ``fermionic Gaiotto vector'', respectively. It is straightforward to show that $\ket{M}_{\pm}$ in the Gaiotto vectors satisfy:
\begin{equation}
L_1\ket{M}_{\pm}=\frac12\ket{M-1}_{\pm},\;\;\;\;G_1\ket{M}_{\pm}=0,\;\;\;\;\forall n,r>1\;\;\;\;L_n\ket{M}_{\pm}=G_r\ket{M}_{\pm}=0.\label{WhittakerR}
\end{equation}
We call a vector $\ket{M}_{+(-)}$ satisfying \eqref{WhittakerR} the ``bosonic (fermionic) Gaiotto vector of level $M$ in the Ramond sector'' \footnote{We abuse the notation and $\ket{M}$ without any subscript refers to the Gaiotto vector of level $M$ in the Neveu-Schwarz sector while $\ket{M}_{\pm}$ with subscript are in the Ramond sector.}. We again note that existence of such vectors is not \emph{a priori} guaranteed.
\subsection{Nekrasov Partition Function}
Let us now briefly review a conjectural relation to $\mathcal{N}=2$ pure $U(2)$ gauge theory on $\mathbb{C}^2/\mathbb{Z}_2$. See \cite{BF,BMT,Ito,Itothesis} for further details. For more general perspectives on the AGT correspondence, we refer the reader to \cite{LF,T1,T2}.
The Nekrasov partition function of pure $U(2)$ theory on $\mathbb{C}^2/\mathbb{Z}_2$ depends on three parameters $(\epsilon_1,\epsilon_2,a)$, similar to pure $U(2)$ theory on $\mathbb{C}^2$ where $\epsilon_1,\epsilon_2$ are the equivariant parameters of the $(\mathbb{C}^*)^2$-action and $\pm a$ are the eigenvalues of the vector multiplet scalar in the Coulomb branch. However, there are two distinct features about gauge theory on $\mathbb{C}^2/\mathbb{Z}_2$.
The first is the existence of nontrivial holonomies. Let $A$ be a flat $U(2)$-connection of the gauge theory. Since the asymptotic region of $\mathbb{C}^2/\mathbb{Z}_2$ is isomorphic to $S^3/\mathbb{Z}_2$, there are noncontractible cycles, and the holonomy of a cycle in that region
\begin{equation}
U=\exp\left(i \oint A\right)
\end{equation}
satisfies $U^2=1$. Thus, there are four inequivalent classes of holonomies as
\begin{equation}
U=\{\text{diag}(1,1),\text{diag} (1,-1), \text{diag}(-1,1), \text{diag}(-1,-1)\}.
\end{equation}
As a consequence, the gauge theory admits \emph{two} sectors which we call the Neveu-Schwarz and Ramond sector due to the correspondence to superconformal blocks stated shortly. The holonomies of type $(1,1)$ and $(-1,-1)$ contribute to the Neveu-Schwarz sector whereas the Ramond sector is described by the holonomies of type $(1,-1)$ and $(-1,1)$. Note that the Ramond sector (holonomy of type $(1,-1)$ and $(-1,1)$) does not exist in $SU(2)$ theory \cite{Ito}.
The second difference concerns the instanton moduli space. Since the $\mathbb{Z}_2$-action sends $(z_1,z_2)\in\mathbb{C}^2\mapsto(-z_1,-z_2)$, the path integral is computed by summing over the space of the $\mathbb{Z}_2$-invariant field configurations on $\mathbb{C}^2$. That is, the instanton moduli space for $U(2)$ gauge theory on $\mathbb{C}^2/\mathbb{Z}_2$ is an appropriate $\mathbb{Z}_2$-symmetric subspace of that for $U(2)$ gauge theory on $\mathbb{C}^2$. See \cite{BF,Ito} for practical computations of the Nekrasov partition function in terms of Young tableaux.
Let us now state the conjecture given in \cite{BF,Ito}\footnote{\cite{Ito} takes the BPZ conjugation but we use the standard Hermitian conjugation. In particular, there is no $(-i)$ in \eqref{AGTR} in our notation.}:
\begin{conjecture}[\cite{BF,Ito}]\label{conj:AGT}
Let $Z_{{\rm Nek}}^{2M,(q_1,q_2)}$ be the $2M$-instanton contributions to the Nekrasov partition function for $\mathcal{N}=2$ pure $U(2)$ gauge theory on $\mathbb{C}^2/\mathbb{Z}_2$ of holonomy type $(q_1,q_2)$ where $q_1,q_2\in\{\pm1\}$. Also, let $\ket{M}, \ket{M}_{\pm}$ be the Gaiotto vectors of level $M$ in the Neveu-Schwarz sector and Ramond sector respectively. Then, they satisfy:
\begin{align}
\forall M\in\mathbb{Z}_{\geq0},\;\;\;\;\;\;\;\;Z_{{\rm Nek}}^{2M,(1,1)}&=\braket{M|M},\\
\forall M\in\mathbb{Z}_{\geq0}+\frac12,\;\;\;\;Z_{{\rm Nek}}^{2M,(-1,-1)}&=\braket{M|M},\\
\forall M\in\mathbb{Z}_{\geq0},\;\;\;\;\;\;Z_{{\rm Nek}}^{2M,(1,-1)}&=Z_{{\rm Nek}}^{2M,(-1,1)}=\,_+\braket{M|M}_+=\,_-\braket{M|M}_-.
\end{align}
Equivalently, let $Z_{{\rm Nek}}^{F}$ be the Nekrasov partition function for $\mathcal{N}=2$ pure $U(2)$ gauge theory on $\mathbb{C}^2/\mathbb{Z}_2$ with $F\in\{NS,R\}$, that is,
\begin{align}
&Z_{{\rm Nek}}^{NS}=\sum_{M\in\mathbb{Z}_{\geq0}}\Lambda^{4M}Z_{{\rm Nek}}^{2M,(1,1)}+\sum_{M\in\mathbb{Z}_{\geq0}+\frac12}\Lambda^{4M}Z_{{\rm Nek}}^{2M,(-1,-1)},\\
&Z_{{\rm Nek}}^{R}=\sum_{M\in\mathbb{Z}_{\geq0}}\Lambda^{4M}Z_{{\rm Nek}}^{2M,(1,-1)}=\sum_{M\in\mathbb{Z}_{\geq0}}\Lambda^{4M}Z_{{\rm Nek}}^{2M,(-1,1)}.
\end{align}
Also, let $\ket{G}_{NS},\ket{G}_{R\pm}$ be the Gaiotto vectors in the Neveu-Schwarz sector and Ramond sector respectively. Then, they satisfy:
\begin{align}
Z_{{\rm Nek}}^{NS}&=\,_{NS}\braket{G|G}_{NS},\label{AGTNS}\\
Z_{{\rm Nek}}^{R}&=\,_{R+}\braket{G|G}_{R+} =\,_{R-}\braket{G|G}_{R-},\label{AGTR}
\end{align}
where the parameters $(\epsilon_1,\epsilon_2,a)$ and $(\hbar,Q_0,\tau_0)$ are identified as
\begin{equation}
-\epsilon_1\epsilon_2=\hbar,\;\;\;\;\epsilon_1+\epsilon_2=\hbar^{\frac12}\,Q_0,\;\;\;\;a=\tau_0.
\end{equation}
\end{conjecture}
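For reference, the identification of parameters can be inverted: given $(\hbar,Q_0)$, the equivariant parameters $\epsilon_1,\epsilon_2$ are the two roots of
\begin{equation}
t^2-\hbar^{\frac12}Q_0\,t-\hbar=0,
\end{equation}
since $\epsilon_1+\epsilon_2=\hbar^{\frac12}Q_0$ and $\epsilon_1\epsilon_2=-\hbar$.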
\subsection{Gaiotto Vectors from Super Airy Structures}
Conjecture~\ref{conj:AGT} does not specify how one should represent the super Virasoro operators $\{L_n,G_r\}$. In this section, we represent $\{L_n,G_r\}$ as differential operators as in \eqref{L1} and \eqref{G1}, and we prove that the partition function of an appropriate super Airy structure becomes the Gaiotto vector in this representation, after a change of parameters. Before doing so, however, let us show some properties of the partition function of the super Airy structure with the following choice of parameters:
\begin{equation}
\biggl(F,N,\tau_{l},\phi_{kl},\psi_{mn}^F,D_k,Q_l\biggr)=\biggl(F,1,\tau_0\delta_{l,0},0,0,\frac{1+2f}{2}T\delta_{k,1},Q\delta_{l,0}\biggr)\label{STRG}
\end{equation}
When we take the parameters as above, some properties of the free energy $\mathcal{F}_F$ can be checked explicitly. In particular, we can analyse the $T$-dependence of $\mathcal{F}_F$\footnote{This is analogous to \cite[Lemma 4.5]{BBCC}.}:
\begin{lemma}\label{lem:power}
Let $F_{g,n|2m}(I|J)$ be the coefficients of the free energy $\mathcal{F}_F$ associated with the super Airy structure with the above choice of parameters. Then, they satisfy:
\begin{description}
\item[1] the $T$-dependence of $F_{g,n|2m}(I|J)$ is factored as
\begin{equation}
F_{g,n|2m}(I|J)=T^{i_1+\cdots+i_n+j_1+\cdots+j_{2m}+2fm}\, \tilde{F}_{g,n|2m}(I|J)
\end{equation}
where $\tilde{F}_{g,n|2m}(I|J)$ is independent of $T$.
\item[2] $F_{g,n|2m}(I|J)=0$ for $i_1+\cdots+i_n+j_1+\cdots+j_{2m}+2fm>g$.
\end{description}
\end{lemma}
\begin{proof}
We prove by induction on $2g+n+2m\geq3$. Since all we have to do is to look at \eqref{BSAS} and \eqref{FSAS} with the choice of parameters \eqref{STRG}, and since the proof is based on simple computations, we only give a sketch. When $2g+n+2m=3$, one finds from \eqref{BSAS} and \eqref{FSAS} (or from Proposition~\ref{prop:STR}) that
\begin{align}
&\forall i_1\in\mathbb{Z}_{>0}\;\;\;\;\;\;\;\,\,\,\,\,\,\,\,\,\,\,\,F_{1,1|0}(i_1|)=\frac{1+2f}{2}T\delta_{i_1,1},\\
&\forall i_1,i_2\in\mathbb{Z}_{>0}\;\;\;\;\,\,F_{\frac12,2|0}(i_1,i_2|)=0,\\
&\forall j_1,j_2\in\mathbb{Z}_{\geq0}\;\;\;\;F_{\frac12,0|2}(|j_1,j_2)=0.
\end{align}
Thus, it holds when $2g+n+2m=3$. Note that one can easily show $F_{g,n|2m}(I|J)=0$ whenever $g<1$ as discussed in Remark~\ref{rem:noF0}.
Let us now assume that the above statements hold for all $g',n',m'$ whenever $3\leq 2g'+n'+2m'\leq2g+n+2m$. Then, $F_{g,n+1|2m}(i_0,I|J)$ can be computed by \eqref{BSAS}, and one notices that due to the Kronecker deltas in \eqref{C1} and \eqref{C2}, each term gives the same power of $T$, namely, $T^{i_0+i_1+\cdots+i_n+j_1+\cdots+j_{2m}+2fm}$. The Kronecker deltas also guarantee by induction that $F_{g,n+1|2m}(i_0,I|J)=0$ for all $i_0+i_1+\cdots+i_n+j_1+\cdots+j_{2m}+2fm>g$. Note that $F_{g,0|2\tilde{m}}(|j_1,\tilde J)$ with $2g+n+2m<2\tilde m$ cannot be computed by \eqref{BSAS}, hence we have to check it separately. Nevertheless, similar to the previous case, the Kronecker delta in \eqref{C3} implies that each term in \eqref{FSAS} is a monomial in $T$ of degree $j_1+\cdots+j_{2\tilde m}+2f \tilde m$. This completes the proof.
\end{proof}
We now consider a change of parameters from $T$ in \eqref{STRG} to $\Lambda$ in Conjecture~\ref{conj:AGT}. Namely, we consider
\begin{equation}
\Lambda^2=\hbar T.\label{LamtoT}
\end{equation}
Since this modifies the powers of $\hbar$ in each term, let us see how $\mathcal{F}_F(\Lambda^2,\hbar)$ behaves. In particular, we would like to find the leading order in $\hbar$ after the change of parameters.
Thanks to Lemma~\ref{lem:power}, we are able to rewrite the free energy as follows:
\begin{align}
\mathcal{F}_{F}=&\sum_{g,n,m\geq0}^{2g+n+2m>2}\frac{\hbar^{g-1}}{n!(2m)!}\sum_{\substack{i_1,...,i_n>1\\j_1,...,j_{2m}\geq0}}F_{g,n|2m}(I|J)\prod_{k=1}^nx^{i_k}\prod_{l=1}^{2m}\theta^{j_l}\nonumber\\
=&\sum_{h,n,m\geq0}\frac{\hbar^{h-1}}{n!(2m)!}\sum_{\substack{i_1,...,i_n>1\\j_1,...,j_{2m}\geq0}}\Lambda^{2(i_1+\cdots+i_n+j_1+\cdots+j_{2m}+2fm)}\Phi_{h,n|2m}(I|J)\prod_{k=1}^nx^{i_k}\prod_{l=1}^{2m}\theta^{j_l},\label{powerLambda}
\end{align}
where
\begin{equation}
\Phi_{h,n|2m}(I|J)=F_{h+(i_1+\cdots+i_n+j_1+\cdots+j_{2m}+2fm),n|2m}(I|J).
\end{equation}
Two important remarks are in order. First, the leading order in $\hbar$ is still of order $\hbar^{-1}$, and $\hbar\,\mathcal{F}_F(\Lambda^2,\hbar)$ after the change of parameters is still a power series in $\hbar^{\frac12}$. Second, $\Phi_{h,n|2m}\neq0$ even for $2h+n+2m<3$, unlike $F_{g,n|2m}$, due to the change of parameters \eqref{LamtoT}.
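Concretely, writing $d=i_1+\cdots+i_n+j_1+\cdots+j_{2m}+2fm$, the change of parameters \eqref{LamtoT} transforms each monomial of \eqref{powerLambda} as
\begin{equation}
\hbar^{g-1}\,T^{d}=\hbar^{g-d-1}\,\Lambda^{2d}=\hbar^{h-1}\,\Lambda^{2d},\;\;\;\;h:=g-d,
\end{equation}
and the second statement of Lemma~\ref{lem:power} ensures $h\geq0$, which is why the leading order remains $\hbar^{-1}$.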
With this under our belt, we show that the Gaiotto vector in the Neveu-Schwarz or the Ramond sector corresponds to the partition function of the super Airy structure with parameters determined by \eqref{STRG}:
\begin{proposition}\label{prop:main}
Consider the super Airy structure $\tilde{\mathcal{S}}_F$ in Proposition \ref{prop:SAS} with the parameters set by \eqref{STRG}, and let $Z_F$ be the unique partition function of $\tilde{\mathcal{S}}_F$. Then, for $F=NS$, $Z_{NS}$ becomes the Gaiotto vector $\ket{G}_{NS}$ in the Neveu-Schwarz sector, and for $F=R$, $Z_R$ becomes the bosonic Gaiotto vector $\ket{G}_{R+}$ in the Ramond sector after the change of parameters $\Lambda^2=\hbar T$.
\end{proposition}
\begin{proof}
We first consider the Neveu-Schwarz sector. By construction, $Z_{NS}$ satisfies:
\begin{equation}
H_1Z_{NS}=(L_1-\Lambda^2)Z_{NS}=0,\;\;\;\;\forall i\geq1\;\;\;\;H_{i+1}Z_{NS}=L_{i+1}Z_{NS}=0,\;\;\;\;F_iZ_{NS}=G_{i+\frac12}Z_{NS}=0,
\end{equation}
where we used $\Lambda^2=\hbar T$.
Second, \eqref{powerLambda} implies that the free energy $\mathcal{F}_{NS}$ is a power series in $\Lambda$. More precisely,
\begin{equation}
\mathcal{F}_{NS}\in\Lambda^2\,\mathbb{C}\llbracket \Lambda^2\rrbracket.
\end{equation}
This shows that the leading term of the partition function is exactly 1:
\begin{equation}
Z_{NS}=e^{\mathcal{F}_{NS}}=1+\mathcal{O}(\Lambda^2).
\end{equation}
Finally, it remains to identify $1$ in this representation with the highest weight vector $\ket{0}=\ket{\Delta}$ in the Neveu-Schwarz sector. This is straightforward to check from \eqref{Heis}, \eqref{Cliff}, \eqref{L1}, and \eqref{G1}: for the Neveu-Schwarz sector, we have
\begin{equation}
L_0\cdot1=\Delta\cdot1,\;\;\;\;\forall n,r>0\;\;\;\;L_n\cdot1=G_r\cdot1=0,
\end{equation}
where $\Delta$ coincides with \eqref{weightNS}. Therefore, the proposition holds for the Neveu-Schwarz sector.
Next, we turn to the Ramond sector. Notice that since the partition function of any super Airy structure is necessarily bosonic, we only consider $\ket{G}_{R+}$. Then, the rest goes parallel to the argument for the Neveu-Schwarz sector. Namely,
\begin{equation}
H_1Z_{R}=\left(L_1-\frac12\Lambda^2\right)Z_{R}=0,\;\;\;\;\forall i\geq1\;\;\;\;H_{i+1}Z_{R}=L_{i+1}Z_{R}=0,\;\;\;\;F_iZ_{R}=G_{i}Z_{R}=0,
\end{equation}
with the identification $\Lambda^2=\hbar T$. It can then be shown that
\begin{equation}
Z_{R}=e^{\mathcal{F}_R}=1+\mathcal{O}(\Lambda^2),
\end{equation}
hence, we identify $\ket{\Delta}_+=1$. Finally, since the representation of $L_n,G_r$ in the Ramond sector differs from the one in the Neveu-Schwarz sector, we find
\begin{equation}
L_0\cdot1=\Delta\cdot1,\;\;\;\;\forall n,r>0\;\;\;\;L_n\cdot1=G_r\cdot1=0,
\end{equation}
with $\Delta$ given in \eqref{weightR}. This completes the proof.
\end{proof}
Since Theorem~\ref{thm:SAS1} guarantees existence of a unique solution of a super Airy structure, we obtain the following corollary:
\begin{corollary}
The Gaiotto vector in the Neveu-Schwarz sector and the bosonic Gaiotto vector in the Ramond sector exist. Moreover, they can be computed by the untwisted or $\mu$-twisted super topological recursion.
\end{corollary}
\begin{remark}
The relation between the bosonic and fermionic Gaiotto vectors $\ket{G}_{R\pm}$ in the Ramond sector is highly nontrivial, and no general formula is known\footnote{\cite{Ito} computed up to level 2.}. As a consequence, existence of the fermionic Gaiotto vector is not established by the discussions of the present paper. Note, however, that at level zero, the fermionic highest weight state $\ket{0}_-=\ket{\Delta}_-$ in our representation is given by the extra variable $\theta^0$:
\begin{equation}
\ket{\Delta}_-=\frac{G_0\cdot1}{\sqrt{\Delta-c/24}}=\sqrt{2}\theta^0.
\end{equation}
\end{remark}
\subsubsection{Conjugate Operators}
Let us consider the Hermitian conjugate for the Verma module. This means that
\begin{equation}
\forall i\in\mathbb{Z}_{\neq0}\;\;\;\;(J_i)^{\dagger}=J_{-i},\;\;\;\;\forall r\in\mathbb{Z}+f\;\;\;\;(\Gamma_{r})^{\dagger}=\Gamma_{-r}.
\end{equation}
In order to split the zero mode $\Gamma_0$ into $\theta^0$ and $\hbar\partial_{\theta^0}$, we use the following notation:
\begin{equation}
\tilde \Gamma_0:=\frac12 \theta^0,\;\;\;\;(\tilde \Gamma_0)^{\dagger}:=\hbar\partial_{\theta^{0}},\;\;\;\;\forall i\in\mathbb{Z}_{\neq0}\;\;\;\;\tilde \Gamma_{i+f}=\Gamma_{i+f}
\end{equation}
Let us then rewrite \eqref{powerLambda} in terms of these modes instead of variables as follows:
\begin{align}
\mathcal{F}_F=&\sum_{g,n,m\geq0}^{2g+n+2m\geq3}\frac{\hbar^{g-1}}{n!(2m)!}\sum_{\substack{i_1,...,i_n>1\\j_1,...,j_{2m}\geq0}}F_{g,n|2m}(I|J)\prod_{k=1}^n\frac{J_{-i_k}}{i_k}\prod_{l=1}^{2m}(1+\delta_{j_l+f,0})\tilde \Gamma_{-j_l}\nonumber\\
=&\sum_{h,n,m\geq0}\frac{\hbar^{h-1}}{n!(2m)!}\sum_{\substack{i_1,...,i_n>1\\j_1,...,j_{2m}\geq0}}\Lambda^{2(i_1+\cdots+i_n+j_1+\cdots+j_{2m}+2fm)}\Phi_{h,n|2m}(I|J)\prod_{k=1}^n\frac{J_{-i_k}}{i_k}\prod_{l=1}^{2m}(1+\delta_{j_l+f,0})\tilde \Gamma_{-j_l},\label{powerLambda1}
\end{align}
where the factor $(1+\delta_{j_l+f,0})$ is inserted to cancel the $\frac12$ in the definition of $\tilde\Gamma_0$. Hence, by construction, we find that $\bra{G}$ becomes a differential operator in this representation:
\begin{equation}
(Z_F)^{\dagger}=e^{(\mathcal{F}_F)^{\dagger}}.
\end{equation}
As a consequence, the norm $(\cdot|\cdot)$ is defined by
\begin{equation}
(Z_F|Z_F)=(Z_F)^{\dagger}\cdot Z_F\bigr|_{x=\theta=0}.\label{norm}
\end{equation}
Therefore, Proposition~\ref{prop:main} shows that \eqref{AGTNS} and \eqref{AGTR} in Conjecture~\ref{conj:AGT} are extended by super Airy structures as follows:
\begin{align}
Z_{{\rm Nek}}^{NS}&=\,_{NS}\braket{G|G}_{NS}=(Z_{NS}|Z_{NS}),\\
Z_{{\rm Nek}}^{R}&=\,_{R+}\braket{G|G}_{R+} =\,_{R-}\braket{G|G}_{R-}=(Z_R|Z_R).
\end{align}
\subsubsection{Graphical Interpretation}
Equation~\eqref{powerLambda1} suggests that one can compute the free energy $\mathcal{F}_F$ by summing over connected graphs with appropriate weights. For $h,n,m\in\mathbb{Z}_{\geq0}$ with $2h+n+2m\geq1$, let $\gamma_{h,n|2m}(I|J)$ be the connected planar graph consisting of:
\begin{enumerate}
\item an $(n+2m)$-valent vertex which carries a nonnegative integer $h$,
\item $n$ bosonic edges whose external vertices are labelled clockwise by $I=(i_1,...,i_n)$,
\item $2m$ fermionic edges whose external vertices are labelled clockwise by $J=(j_1,...,j_{2m})$.
\end{enumerate}
See Figure~\ref{fig:graph1} below. We call $h$ the ``number of loops'' by convention, though it is just an integer associated with an $(n+2m)$-valent vertex. We also denote by $\mathbb{G}^{\text{conn}}$ the set of all such connected graphs. That is, every graph in $\mathbb{G}^{\text{conn}}$ is uniquely determined by a nonnegative integer $h$, a set of positive integers $I$, and a set of nonnegative integers $J$ with $2h+n+2m\geq1$.
\begin{figure}[h]
\begin{equation}
\gamma_{h,n|2m}(I|J)=
\vcenter{\hbox{
\begin{tikzpicture}
\draw[color=red, dashed](0,0) -- (-1,0.5);
\draw[color=red, dashed](0,0) -- (-1,-0.5) ;
\node(i1) at (-1.3,0.5) {$i_n$};
\node(dot) at (-1,0.05) {$\vdots$};
\node(in) at (-1.3,-0.5) {$i_1$};
\draw[color=red] (0,0) -- (1,0.5);
\draw[color=red] (0,0) -- (1,-0.5) ;
\node(j1) at (1.3,0.5) {$j_{1}$};
\node(dots) at (1,0.05) {$\vdots$};
\node(j2m) at (1.45,-0.5) {$j_{2m}$};
\node(n1) at (1,0.5) {$\bullet$};
\node(n2) at (1,-0.5) {$\bullet$};
\filldraw[color=red!60, fill=red!5, very thick](0,0) circle (0.35);
\node(0,0) {{\large $h$}};
\filldraw[color=black!100, fill=black!0](-1,0.5) circle (0.08);
\filldraw[color=black!100, fill=black!0](-1,-0.5) circle (0.08);
\end{tikzpicture}}}
\end{equation}
\caption{Pictorial representation of $\gamma_{h,n|2m}(I|J)$. $\circ$ ($\bullet$) denotes bosonic (fermionic) vertices, and dashed (solid) lines are bosonic (fermionic) edges.}\label{fig:graph1}
\end{figure}
From \eqref{powerLambda1}, the weight $w$ of $\gamma_{h,n|2m}(I|J)$ is defined as
\begin{equation}
w\left(\gamma_{h,n|2m}(I|J)\right)=\hbar^{h-1}\Lambda^{2(i_1+\cdots i_n+j_1+\cdots j_{2m}+2fm)}\Phi_{h,n|2m}(I|J)\prod_{k=1}^n\frac{J_{-i_k}}{i_k}\prod_{l=1}^{2m}(1+\delta_{j_l+f,0})\tilde \Gamma_{-j_l}.\label{weightsred}
\end{equation}
Note that $h$ appears in the exponent of $\hbar$, which is why $h$ is called the number of loops. In addition, the symmetry factor $|\cdot|$ of $\gamma_{h,n|2m}(I|J)$ is given, by definition, as
\begin{equation}
|\gamma_{h,n|2m}(I|J)|=n!(2m)!.
\end{equation}
Then, \eqref{powerLambda1} can be written as
\begin{equation}
\frac{\mathcal{F}_F}{\hbar}=\sum_{\gamma\in\mathbb{G}^{\text{conn}}}\frac{w(\gamma)}{|\gamma|}\label{Fdiag}
\end{equation}
Moreover, since the partition function $Z_F$ is the exponential of $\mathcal{F}_F/\hbar$, it is computed by summing over both connected and disconnected graphs which we simply denote by $\mathbb{G}$:
\begin{equation}
Z_F=\sum_{\gamma\in\mathbb{G}}\frac{w(\gamma)}{|\gamma|},\label{Zgraph}
\end{equation}
where the weight of a disconnected graph is defined to be the product of weights of connected components. The symmetry factor of a disconnected graph is defined in a canonical way, that is, it is the product of symmetry factors of connected components times the product of factorials of the multiplicity of each component.
One can repeat the same steps for $(Z_F)^{\dagger}$. Let us denote by $\mathbb{G}'$ the set of all graphs in $\mathbb{G}$ but with a different colour and with the opposite order of labelling (see Figure~\ref{fig:graph2} below). We similarly define the weight and the symmetry factor of a connected graph $\gamma'_{h',n'|2m'}(I'|J')\in\mathbb{G}'^{\text{conn}}\subset\mathbb{G}'$ by:
\begin{align}
w\left(\gamma'_{h',n'|2m'}(I'|J')\right)=&\hbar^{h'-1}\Lambda^{2(i'_1+\cdots+i'_{n'}+j'_1+\cdots+j'_{2m'}+2fm')}\Phi_{h',n'|2m'}(I'|J')\nonumber\\
&\;\;\;\;\times\prod_{k=1}^{n'}\frac{J_{-i'_k}^{\dagger}}{i'_k}\prod_{l=0}^{2m'-1}(1+\delta_{j'_{2m'-l}+f,0})\tilde \Gamma_{-j'_{2m'-l}}^{\dagger}\label{weightsblue}
\end{align}
\begin{equation}
|\gamma'_{h',n'|2m'}(I'|J')|=n'!(2m')!
\end{equation}
Note that the order of $\tilde \Gamma_{-j'_l}^{\dagger}$ is the opposite of \eqref{weightsred} due to conjugation. Then, similar to \eqref{Zgraph}, $(Z_F)^{\dagger}$ is given by summing over all graphs in $\mathbb{G}'$ with the weights defined above:
\begin{equation}
(Z_F)^{\dagger}=\sum_{\gamma'\in\mathbb{G}'}\frac{w(\gamma')}{|\gamma'|}.\label{Z'graph}
\end{equation}
Even though \eqref{Zgraph} and \eqref{Z'graph} are merely a change of notation from \eqref{powerLambda1}, this leads us to a graphical understanding of \eqref{norm}. Namely, the action of $(J_{-i'})^{\dagger},(\Gamma_{-j'})^{\dagger}$ on $J_{-i},\Gamma_{-j}$ is interpreted as connecting two edges of different colours, and the specialisation $x=\theta=0$ (as well as the action of $(J_{-i'})^{\dagger},(\Gamma_{-j'})^{\dagger}$ on $1$) implies that only closed graphs contribute to $(Z_F|Z_F)$. Let us denote by $\hat{\mathbb{G}}$ the set of all closed graphs given by every possible contraction among graphs in $\mathbb{G}$ and $\mathbb{G}'$ such that contraction is applied only between two edges of different colours. (See Figure~\ref{fig:graph2} below.) That is, a graph in $\hat{\mathbb{G}}$ is uniquely determined by:
\begin{itemize}
\item a set of graphs $\gamma_{h_k,n_k|2m_k}(I_k|J_k)\in\mathbb{G}$ for $k\geq1$ (up to permutation),
\item another set of graphs $\gamma'_{h'_l,n'_l|2m'_l}(I'_l|J'_l)\in\mathbb{G}'$ for $l\geq1$ (up to permutation),
\item the information of how indices in $(I_k,J_k)_{k\geq1}$ are paired up with those in $(I'_l,J'_l)_{l\geq1}$.
\end{itemize}
Note that graphs in $\hat{\mathbb{G}}$ may be connected or disconnected, which depends on how indices are paired up.
The weight $w$ of a graph $\hat{\gamma}\in\hat{\mathbb{G}}$ is computed as follows. First, for each component in $\mathbb{G}$ or $\mathbb{G}'$ we assign \eqref{weightsred} or \eqref{weightsblue}. Next, for bosonic indices $(i\in I,i'\in I')$ in a contraction pair, we replace $(J_{-i'})^{\dagger}J_{-i}$ with $\hbar\, i \delta_{i i'}$, and for fermionic indices $(j\in J,j'\in J')$ in a contraction pair, we replace $(\Gamma_{-j'})^{\dagger}\Gamma_{-j}$ with $\hbar \delta_{jj'}$. We then assign a sign $(-1)$ whenever two fermionic edges cross after contraction. Altogether, \eqref{norm} is written as
\begin{equation}
(Z_F|Z_F)=\sum_{\hat{\gamma}\in\hat{\mathbb{G}}}\frac{w(\hat{\gamma})}{|\hat{\gamma}|}.\label{graph1}
\end{equation}
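As a simple illustration of these rules, contracting the bosonic edge of a red one-valent vertex $\gamma_{h,1|0}(i|)$ with that of a blue one-valent vertex $\gamma'_{h',1|0}(i'|)$ produces a connected closed graph of weight
\begin{equation}
w=\hbar^{h+h'-1}\,\Lambda^{4i}\,\Phi_{h,1|0}(i|)\,\Phi_{h',1|0}(i|)\,\frac{1}{i}\,\delta_{ii'},
\end{equation}
where the factor $\hbar\,i\,\delta_{ii'}$ from the contraction $(J_{-i'})^{\dagger}J_{-i}$ cancels one power of $i$ coming from \eqref{weightsred} and \eqref{weightsblue}. In particular, the result is of order $\hbar^{\tilde h-1}$ with $\tilde h=h+h'$.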
\begin{figure}[h]
\begin{align*}
\vcenter{\hbox{
\begin{tikzpicture}
\draw[color=red, -](0,0.7) to [out=0,in=90] (1.5,0);
\draw[color=red, -](0,-0.7) to [out=0,in=270] (1.5,0);
\draw[color=blue, -](0,0.7) to [out=180,in=90] (-1.5,0);
\draw[color=blue, -](0,-0.7) to [out=180,in=270] (-1.5,0);
\draw[color=blue, dashed](-1.5,0) -- (-0,0);
\draw[color=red, dashed](0,0) -- (1.5,0);
\filldraw[color=red!60, fill=red!5, very thick](1.5,0) circle (0.4);
\node(h1) at (1.5,0) {{\large $h_1$}};
\filldraw[color=blue!60, fill=blue!5, very thick](-1.5,0) circle (0.4);
\node(h2) at (-1.5,0) {{\large $h_1'$}};
\node(dot) at (0,-0.7) {$\bullet$};
\node(dot) at (0,0.7) {$\bullet$};
\filldraw[color=black!100, fill=black!0](0,0) circle (0.08);
\node(i) at (0,0.3) {$i'_1=i_1$};
\node(i) at (0,1.05) {$j'_2=j_2$};
\node(i) at (0,-1.0) {$j'_1=j_1$};
\end{tikzpicture}}}\hspace{10mm}
\vcenter{\hbox{
\begin{tikzpicture}
\draw[color=red, -](-0.8,0.7) to [out=0,in=180] (0.8,-0.7);
\draw[color=blue, -](-0.8,-0.7) to [out=0,in=180] (0.8,0.7);
\draw[color=red, -](1.4,0) to [out=90,in=0] (0.8,0.7);
\draw[color=blue, -](-1.7,0) to [out=0,in=180] (-0.8,0.7);
\draw[color=red, -](1.4,0) to [out=-90,in=0] (0.8,-0.7);
\draw[color=blue, -](-1.7,0) to [out=0,in=180] (-0.8,-0.7);
\draw[color=red, dashed](1.4,0) -- (2.4,0);
\draw[color=blue, dashed](2.4,0) -- (3.5,0);
\draw[color=red, dashed](-3.5,0) -- (-2.4,0);
\draw[color=blue, dashed](-2.4,0) -- (-1.4,0);
\filldraw[color=red!60, fill=red!5, very thick](1.4,0) circle (0.4);
\node(h1) at (1.4,0) {{\large $h_2$}};
\filldraw[color=blue!60, fill=blue!5, very thick](3.5,0) circle (0.4);
\node(h1) at (3.5,0) {{\large $h'_2$}};
\filldraw[color=red!60, fill=red!5, very thick](-3.5,0) circle (0.4);
\node(h1) at (-3.5,0) {{\large $h_1$}};
\filldraw[color=blue!60, fill=blue!5, very thick](-1.4,0) circle (0.4);
\node(h2) at (-1.4,0) {{\large $h'_1$}};
\node(dot1) at (-0.8,0.68) {$\bullet$};
\node(dot2) at (0.8,0.68) {$\bullet$};
\filldraw[color=black!100, fill=black!0](2.4,0) circle (0.08);
\filldraw[color=black!100, fill=black!0](-2.4,0) circle (0.08);
\node(i) at (-2.4,0.3) {$i_1=i'_1$};
\node(i) at (2.4,0.3) {$i_2=i'_2$};
\node(i) at (0.8,1.0) {$j'_1=j_2$};
\node(i) at (-0.8,1.0) {$j'_2=j_1$};
\end{tikzpicture}}}
\end{align*}
\caption{Examples of graphs in $\hat{\mathbb{G}}$. Red graphs are in $\mathbb{G}$ with clockwise labelling, blue ones are in $\mathbb{G}'$ with counter-clockwise labelling. The weight of the second graph should be multiplied by $(-1)$ because the two fermionic edges cross after contraction.}\label{fig:graph2}
\end{figure}
Note that \eqref{graph1} is not an assumption, but rather a property that one can show. By expanding \eqref{norm} term by term, one finds that the weight of a connected closed graph computed with the above rule is divided by the product of the symmetry factors of each component in $\mathbb{G}$ and $\mathbb{G}'$, and also by the order of permutations of the components\footnote{This is because graphs in $\hat{\mathbb{G}}$ are defined up to permutations.}, which indeed agrees with the symmetry factor of the connected closed graph. For the contribution from a disconnected graph, one finds in \eqref{norm} that the weight is further divided by the product of factorials of the multiplicities of the connected closed components. These factors naturally appear from the expansion of exponentials. Therefore, one arrives at \eqref{graph1}.\footnote{See \cite[Section 4.4]{BBCC} for the graphical interpretation without fermions, where the symmetry factor is explicitly written. We omit an explicit formula for the symmetry factor for brevity of notation.}
Finally, if we formally define $\mathcal{F}_{\text{Nek}}^{F}$ for $F\in\{NS,R\}$ by
\begin{align}
(Z_F|Z_F)&=\exp\frac{\mathcal{F}_{\text{Nek}}^{F}}{\hbar},
\end{align}
then, since $\mathcal{F}_{\text{Nek}}^{F}/\hbar$ is the logarithm of $(Z_F|Z_F)$, it can be computed by summing over all \emph{connected} graphs in $\hat{\mathbb{G}}$. Importantly, since every bosonic and fermionic contraction contributes a factor of $\hbar$, it is clear by counting the power of $\hbar$ in \eqref{weightsred} and \eqref{weightsblue} that the weight of every connected graph in $\hat{\mathbb{G}}$ is proportional to $\hbar^{h-1}$ for some half-integer $h$. Thus, this implies that $\mathcal{F}_{\text{Nek}}^{F}$ admits a power series expansion in $\hbar^{\frac12}$, as expected from Conjecture~\ref{conj:AGT}.
\subsection{With Matters}
It is discussed in \cite{G} that the Gaiotto vector $\ket{\Delta,\Lambda,m}$ corresponding to $SU(2)$ gauge theory with a single hypermultiplet of mass $m$ is given by
\begin{align}
L_1\ket{\Delta,\Lambda,m}&=-2m\Lambda\ket{\Delta,\Lambda,m},\\
L_2\ket{\Delta,\Lambda,m}&=-\Lambda^2\ket{\Delta,\Lambda,m},\label{L2v}\\
\forall n>2,\;\;\;\;L_n\ket{\Delta,\Lambda,m}&=0.
\end{align}
Note that $\ket{\Delta,\Lambda,m}$ is not a Whittaker vector due to \eqref{L2v}. It can easily be shown that there exists a description in terms of Airy structures. This is because $L_1,L_2$ do not appear on the right-hand sides of commutation relations for the subalgebra $\{L_1,L_2,L_3,...\}$, so that one can freely add/subtract constant terms by hand without changing their commutation relations, as we did in \eqref{Hi}.
In supersymmetric cases, \cite{Ito} conjectures the condition for the Gaiotto vectors $\ket{\Delta,\Lambda,m}_{\pm}^{(s)}$ in the Ramond sector such that the norm corresponds to the Nekrasov partition function of $U(2)$ gauge theory on $\mathbb{C}^2/\mathbb{Z}_2$ with a single hypermultiplet of mass $m$ as follows\footnote{To the authors' best knowledge, an analogous vector in the Neveu-Schwarz sector is not discussed in literature.}:
\begin{align}
L_1\ket{\Delta,\Lambda,m}_{\pm}^{(s)}&=-\left(m-\frac{Q}{2}\right)\Lambda\ket{\Delta,\Lambda,m}^{(s)}_{\pm},\\
G_1\ket{\Delta,\Lambda,m}_{\pm}^{(s)}&=c_{\pm}^{(s)}\Lambda\ket{\Delta,\Lambda,m}_{\mp}^{(s)},\label{G1Rm}\\
L_2\ket{\Delta,\Lambda,m}_{\pm}^{(s)}&=-\frac12\Lambda^2\ket{\Delta,\Lambda,m}_{\pm}^{(s)},\label{L2Rm}\\
\forall n>1,\;\;\;\;L_{n+1}\ket{\Delta,\Lambda,m}_{\pm}^{(s)}&=G_{n}\ket{\Delta,\Lambda,m}_{\pm}^{(s)}=0,
\end{align}
where $s\in\{1,2\}$ is an additional labelling of the Gaiotto vectors and
\begin{equation}
c^{(1)}_{\pm}=\frac{\pm1+i}{2},\;\;\;\;c^{(2)}_{\pm}=\frac{\pm1-i}{2}.
\end{equation}
Note that \eqref{L2Rm} is a consequence of \eqref{G1Rm}.
Interestingly, however, it is not easy to figure out how to describe these Gaiotto vectors in terms of super Airy structures. This is because $G_1\ket{\Delta,\Lambda,m}_{\pm}^{(s)}\neq0$ and the relation between $\ket{\Delta,\Lambda,m}_{+}^{(s)}$ and $\ket{\Delta,\Lambda,m}_{-}^{(s)}$ is nontrivial and not known. In order to apply the framework of super Airy structures, a necessary condition is to find a fermionic operator $\hat G_1$ with the following property:
\begin{equation}
\hat G_1 \ket{\Delta,\Lambda,m}_{+}^{(s)}=0\;\;\Leftrightarrow\;\;G_1\ket{\Delta,\Lambda,m}_{+}^{(s)}=c_{+}^{(s)}\Lambda\ket{\Delta,\Lambda,m}_{-}^{(s)}.
\end{equation}
Even if one managed to find such a $\hat G_1$, one would still need to check whether there exists a set of differential operators including $\hat G_1$ that conforms to Definition~\ref{def:SAS}.
\section{Conclusion}\label{sec:Conclusion}
We have proposed the notion of untwisted/$\mu$-twisted super spectral curves as well as abstract super loop equations. Then, we showed that new recursive formalisms, which we call the ``untwisted/$\mu$-twisted super topological recursion'', uniquely solve the untwisted/$\mu$-twisted super loop equations. We note that these new recursions are variants of the $\mathcal{N}=1$ super topological recursion of \cite{BO2}, which would be called the $\rho$-twisted super topological recursion in our notation. As noted in Section~\ref{sec:STR}, the difference between untwisted and $\mu$-twisted super spectral curves resembles the difference between Neveu-Schwarz punctures and Ramond punctures in the context of super Riemann surfaces. We further showed an alternative way of solving abstract super loop equations in terms of super Airy structures as untwisted/$\mu$-twisted modules of the $\mathcal{N}=1$ super Virasoro algebra. We then proved an equivalence between these super Airy structures and the untwisted/$\mu$-twisted super topological recursion, which is summarised in Theorem~\ref{thm:main}. Therefore, we have mathematically formalised the flowchart in Figure~\ref{fig:goal}.
We then applied these new recursions to computations of Gaiotto vectors for superconformal blocks. We showed in Proposition~\ref{prop:main} that the partition function of an appropriate super Airy structure coincides with the corresponding Gaiotto vector for superconformal blocks. Importantly, since the uniqueness and existence of the partition function of a super Airy structure are mathematically proven, Proposition~\ref{prop:main} serves as a proof of the uniqueness and existence of the Gaiotto vectors for superconformal blocks. Thanks to a conjectural extension of the AGT correspondence \cite{BF,Ito}, we notice that the super topological recursion has access to $\mathcal{N}=2$ pure $U(2)$ gauge theory on $\mathbb{C}^2/\mathbb{Z}_2$. In addition to applications discussed in \cite{BO2} such as supereigenvalue models \cite{BO,C,O}, our results provide another piece of evidence of how useful super Airy structures and the super topological recursion are.
There are a number of future directions one can take to generalise the results of the present paper. For example, it is interesting to consider super Airy structures as modules of higher rank supersymmetric algebras such as $\mathcal{W}(\mathfrak{osp}(n|m))$-algebras. We expect that the corresponding two-dimensional conformal field theory would be a parafermion theory, and the gauge theory counterpart would be $U(N)$ theory on $\mathbb{C}^2/\mathbb{Z}_p$ with an appropriate choice of $m,n,N,p$. See \cite{Manabe} and references therein for details about parafermion theory. Another aspect is to consider the Gaiotto vector in the Neveu-Schwarz sector with mass parameters. Unlike the Ramond sector discussed in \cite{Ito}, it is expected that the generalisation of \cite{G} is possible in the Neveu-Schwarz sector because both $L_1$ and $L_2$ can be shifted by constant terms without changing commutation relations. Furthermore, all the applications discussed in the present paper are for $N=1$ as in \eqref{STRG}. Since the super topological recursion is defined for any $N$, it is interesting to see if there are any examples in physics or mathematics with $N>1$. Finally, as shown in Table~\ref{table:1} there are four types of super Airy structures as modules of the $\mathcal{N}=1$ super Virasoro algebra. On the other hand, the geometric counterpart, i.e., the super topological recursion is reported only for three of them ($\rho$-twisted one by \cite{BO2}, and untwisted/$\mu$-twisted ones in the present paper), and we hope to introduce the last, $\sigma$-twisted, super topological recursion in the future with some interesting applications.
\newpage
\section{Conclusions}
\label{sec:conclusions}
In this paper we have explored the properties of bosonic atoms
on the first excited band of an optical lattice. By computing
the phase diagrams for two- and three-dimensional systems, we
found Mott-insulating and superfluid phases with more subtle
quantum properties than those appearing in the lowest band Hubbard model.
Furthermore, we compared the Gutzwiller theory to the Gross-Pitaevskii
approach and established the parameter regimes where the latter
description provides a good approximation to the physical system.
Here we found that bosons on the $p$-band can form
a staggered-vortex superfluid composed of anti-ferromagnetically ordered
vortices and anti-vortices. Rotation breaks the degeneracy of the
vortex and anti-vortex state and it would be interesting to
explore how rotation favoring vortex lattice formation competes
with the physics of staggered-vortex superfluids. Also, in fairly shallow lattices where effects due
to interactions can be pronounced, dispersions can develop swallowtails
in the vicinity of the Brillouin zone edge, and period-doubled states can appear~\cite{Machholm2003a,Machholm2004a}. In the previous analyses, which assumed a one-dimensional system, the swallowtails were found to be related to
the existence of solutions corresponding to a train of solitons. It would be of interest to explore the analogous situation in higher dimensions, where stability properties are often very different.
Experiments are typically done in optical lattices with an additional trapping potential acting in the background. Here we studied only the homogeneous solutions, and this assumption is valid locally when the background trapping potential varies slowly compared to the lattice spacing. Our results can be applied in a trap using the local density approximation or by adding a site-dependent energy offset to the Hamiltonian. However, the size of the computations using the multi-flavor Gutzwiller ansatz grows quickly as a function of system size, which at this stage limits us to fairly small systems. An inhomogeneous density distribution is easier to take into account within the mean-field approximation.
\section{Formalism}
\label{sec:theory}
The microscopic Hamiltonian for the dilute Bose gas
at low temperatures in a trap is given by
\begin{equation}
\begin{array}{lll}
\hat{H}_{micro} & = & \displaystyle{\int d{\bf r}\hat\psi^\dagger({\bf r})\left[
-\frac{\hbar^2\nabla^2}{2m}+V({\bf r})
\right]\hat\psi({\bf r})}\\ \\
& & \displaystyle{\!+\!\frac{g}{2}
\hat\psi^\dagger({\bf r})\hat\psi^\dagger({\bf r})
\hat\psi({\bf r})\hat\psi({\bf r})
\!-\!\mu\hat\psi^\dagger({\bf r})\hat\psi({\bf r})},
\end{array}
\label{eq:Hmicroscopic}
\end{equation}
where $\mu$ is the chemical potential, $m$ the atomic mass, and $g$ the interatomic interaction strength, while $\hat\psi({\bf r})$ and $\hat\psi^\dagger({\bf r})$ are the bosonic annihilation and creation operators, respectively; $V({\bf r})$ is the external trapping potential, which in this
work is taken to be a lattice potential
\begin{equation}
V({\bf r})=V_L\sum_{\alpha\in \{x,y,z\}} \sin^2\left(
\frac{\pi {\bf r}_\alpha}{d}
\right),
\end{equation}
where $d$ is the lattice spacing and $V_L$ the lattice depth. For a deep lattice it is reasonable to expand the field operators in terms of the localized Wannier functions. Here we go beyond the
usual lowest band Hubbard model by also including the
first excited states ($p$-band). In a three dimensional lattice
this implies an expansion of the field operators
\begin{equation}
\hat\psi({\bf r})=\sum_{{\bf i},\sigma} w_{\sigma,{\bf i}}({\bf r})
\hat\psi_{\sigma,{\bf i}},
\end{equation}
where ${\bf i}=(i_x,i_y,i_z)$ labels the lattice site and
$\sigma\in \{0,x,y,z\}$ is the flavor index.
The bosonic operators $\hat\psi_{\sigma,{\bf i}}$ annihilate
a boson of flavor $\sigma$ from the site ${\bf i}$.
We compute the Wannier functions from the ideal gas
band structure calculations.
In this paper
we assume that the system has been prepared on the excited $p$-bands
and in the following
set the population of the lowest band to zero.
Substituting the operator expansions into
Eq.~(\ref{eq:Hmicroscopic}) and ignoring all but the leading
order onsite interactions and nearest neighbor tunneling
processes we derive our fundamental Hamiltonian
\begin{equation}
\hat{H}=\hat{H}_0+\hat{H}_{nn}+\hat{H}_{FD},
\end{equation}
where the ideal part is given by
\begin{equation}
\hat{H}_0=-\mu\sum_{{\bf i},\sigma}
{\hat \psi}_{\sigma,{\bf i}}^\dagger{\hat \psi}_{\sigma,{\bf i}}
-\sum_{\sigma,\alpha}\sum_{<{\bf i},{\bf j}>_\alpha} t_{\sigma,\alpha}
{\hat \psi}_{\sigma,{\bf i}}^\dagger{\hat \psi}_{\sigma,{\bf j}}.
\end{equation}
Here
$\sum_{<{\bf i},{\bf j}>_\alpha}$ indicates the sum over nearest neighbors in the direction $\alpha\in\{x,y,z\}$. Since the Bloch functions diagonalize the single particle Hamiltonian, there
are no interband hopping terms in the Wannier representation considered
here \cite{Georges2007a}. The terms originating from interatomic interactions
are given by
\begin{equation}
\begin{array}{lll}
\hat{H}_{nn} & = & \displaystyle{\sum_{\bf i}\sum_\sigma \frac{U_{\sigma\sigma}}{2}
{\hat n}_{\sigma,\bf i}\left({\hat n}_{\sigma,\bf i}-1\right)} \\ \\
& &
\displaystyle{+\!\!\,\,\sum_{\bf i}\sum_{\sigma\sigma',\sigma\neq\sigma'}\!\!
U_{\sigma\sigma'}{\hat n}_{\sigma,\bf i}{\hat n}_{\sigma',\bf i}}
\end{array}
\label{eq:Hnn}
\end{equation}
and
\begin{equation}
\begin{array}{lll}
\hat{H}_{FD} & = & \displaystyle{\sum_{\bf i}\sum_{\sigma\sigma',\sigma\neq\sigma'}
\frac{U_{\sigma\sigma'}}{2}\left({\hat \psi}_{\sigma,{\bf i}}^\dagger {\hat \psi}_{\sigma,{\bf i}}^\dagger {\hat \psi}_{\sigma',{\bf i}} {\hat \psi}_{\sigma',{\bf i}}\right.}\\ \\
& & \displaystyle{+\left.{\hat \psi}_{\sigma',{\bf i}}^\dagger {\hat \psi}_{\sigma',{\bf i}}^\dagger {\hat \psi}_{\sigma,{\bf i}}{\hat \psi}_{\sigma,{\bf i}}
\right)},
\end{array}
\end{equation}
where $\hat{H}_{FD}$ contains terms that describe flavor changing
collisions which transfer atoms between bands. This term has a formal
similarity with terms responsible
for spin dynamics in spinor condensates~\cite{Law1998a,Stamper-Kurn1998b}.
However, the strength
of these terms is comparable to other interaction terms
as opposed to spinor condensates where it is usually
small, being proportional to the difference between singlet
and triplet scattering lengths (for spin-$1$ spinor condensate).
It should be kept in mind that there are circumstances when
nearest neighbor interactions~\cite{Scarola2005a} or
particle assisted tunneling processes~\cite{Duan2008a} might give
rise to new physics. These contributions are not
included in the formulation presented here where our focus
is in the most typical parameter regimes.
The various coupling strengths in the lattice model are
related to $g$ through
\begin{equation}
U_{\sigma\sigma'}=g\int d{\bf r} w_{\sigma,{\bf i}}({\bf r})^2w_{\sigma',{\bf i}}({\bf r})^2
\end{equation}
and the tunneling coefficients are given by
\begin{equation}
t_{\sigma,\alpha}=-\int d{\bf r} w_{\sigma,{\bf i}}({\bf r})
\left[-\frac{\hbar^2\nabla^2}{2m}+V({\bf r})\right]
w_{\sigma,{\bf i+1}_\alpha}({\bf r}),
\end{equation}
where by ${\bf i+1}_\alpha$ we indicate the neighboring site of ${\bf i}$ in the direction $\alpha$.
When the lattice is symmetric, the tunneling strength on the lowest
band is independent of direction. However, this is not true for the $p$-band where the directional
dependence of the tunneling strength must be kept, as the overlap integrals are very different
depending on whether one is integrating along the node of the Wannier function or orthogonal to it. This indeed has important consequences for the physics in these systems~\cite{Isacsson2005a,Liu2006a,Martikainen2008a}.
It should be further noted that, since the parameters of our model are computed using real Wannier functions, we find not only quantitative but also qualitative differences from the commonly used
models build on the harmonic approximation. In particular, many degeneracies appearing in the harmonic approximation are absent when real parameters are used.
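These band-structure inputs are straightforward to generate numerically. The sketch below (our own illustration, not the authors' code; energies in units of $E_R$ and quasimomentum in units of $\pi/d$) diagonalizes the 1D $\sin^2$ lattice in a plane-wave basis and extracts tight-binding hoppings from the bandwidths, reproducing the sign structure discussed above: the $p$-band hopping along the node direction is negative and much larger in magnitude than the $s$-band hopping.

```python
import numpy as np

def band_energies(V0, q, nmax=15):
    """Energies (units of E_R) at quasimomentum q (units of pi/d) for
    V(x) = V0 sin^2(pi x/d), in the plane-wave basis exp(i(q+2n)pi x/d):
    sin^2 gives V0/2 on the diagonal and -V0/4 between n and n+1."""
    n = np.arange(-nmax, nmax + 1)
    H = np.diag((q + 2.0 * n) ** 2 + V0 / 2.0)
    off = -V0 / 4.0 * np.ones(2 * nmax)
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)

def hopping(V0, band):
    """Tight-binding hopping from the bandwidth: with
    E(q) = const - 2 t cos(q d) one has t = (E(pi/d) - E(0)) / 4."""
    return (band_energies(V0, 1.0)[band] - band_energies(V0, 0.0)[band]) / 4.0

V0 = 15.0                # lattice depth in recoil energies
t_s = hopping(V0, 0)     # s band: positive
t_p = hopping(V0, 1)     # p band, along its node direction: negative
print(t_s, t_p)          # |t_p| greatly exceeds t_s for a deep lattice
```

Here the relation $t=(E(\pi/d)-E(0))/4$ follows from the tight-binding dispersion $E(q)=\mathrm{const}-2t\cos(qd)$; the sign of $t$ on each band is read off from whether the band minimum sits at the zone center or at the zone edge.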
\subsection{Validity of $p$-band single-band approximation}
In a harmonic potential, two atoms in the first excited state
have a total energy $2\times \hbar\omega(3/2+1)$. This is equal
to the energy of one atom on the ground state and one atom on the
second excited state. This suggests that collisions between $p$-band atoms
can populate also the lowest $s$-band and the $d$-bands. This would clearly restrict
the validity of the models residing purely on the $p$-bands.
However, a real site in an optical lattice is not exactly
harmonic and this anharmonicity implies that the above processes are normally
off-resonant. The deviation between the real lattice potential
and the harmonic approximation is given by
\begin{equation}
\begin{array}{lll}
\Delta V & = & V_L\left[\sin^2 (\pi x/d)+\sin^2 (\pi y/d)+\sin^2 (\pi z/d)\right]
\\ \\
& & \displaystyle{-V_L\pi^2\left[\left(\frac{x}{d}\right)^2+\left(\frac{y}{d}\right)^2+
\left(\frac{z}{d}\right)^2\right].}
\end{array}
\end{equation}
In first-order perturbation theory around the harmonic
approximation, we find that in the limit of deep lattices
($V_L/E_R\gg 1$, where $E_R$ is the recoil energy)
the detuning $2E_{1,0,0}-E_{0,0,0}-E_{1,1,0}$, where subscripts denote
quantum numbers $\nu_x$, $\nu_y$, and $\nu_z$ of the harmonic oscillator states,
vanishes. This would be related to a process where two atoms from the $p$-band
scatter into one ground state atom and one atom occupying the
state $|\nu_x=1,\nu_y=1,\nu_z=0\rangle$.
Even though this detuning remains zero at first order in anharmonicity,
the process has a vanishing matrix element and can therefore be ignored.
On the other hand, a process where two atoms from the $p$-band
scatter into one ground state atom and one atom on the state
$|\nu_x=2,\nu_y=0,\nu_z=0\rangle$ (for example) can occur. For this process
the detuning $2E_{1,0,0}-E_{0,0,0}-E_{2,0,0}$ approaches a constant
value of $-2/3\, E_R$ in the limit of deep lattices. Note that
the oscillator energy $\hbar\omega$ has a $\sqrt{V_L}$ dependence in the same limit,
so even though the detuning approaches a constant for deep lattices, it
becomes small relative to the harmonic oscillator energy scale.
From this we can conclude
that as long as the bandwidths and interactions are very small compared to
recoil energy, we can safely ignore $d$-band atoms and processes which would
scatter atoms from the $p$-band to other bands.
We also note that one way to prevent atoms on the $p$-band from populating the $s$-band was outlined in Ref.~\cite{Liu2006a}. Here, fermionic atoms occupy the lowest band and, due to atom-atom interactions, the $p$-band atoms are blocked from occupying the lowest band.
\section{Gross-Pitaevskii approach}
\label{sec:GPsolutions}
In the mean-field approach we replace the operators
${\hat\psi_{\alpha,{\bf i}}}$ with complex numbers
$\psi_{\alpha,{\bf i}}$. This approximation amounts to a coherent
state ansatz in each site. In a Fock representation this is given by
\begin{equation}
\begin{array}{lll}
|\psi\rangle_{\bf i} & = & \displaystyle{\exp\left({-\frac{|\psi_{{\bf i},x}|^2+
|\psi_{{\bf i},y}|^2+|\psi_{{\bf i},z}|^2}{2}}\right)}\\ \\
& & \displaystyle{\times\sum_{(n_x,n_y,n_z)}
\frac{\psi_{{\bf i},x}^{n_x}\psi_{{\bf i},y}^{n_y}\psi_{{\bf i},z}^{n_z}}{\sqrt{n_x!n_y!n_z!}}
|n_x,n_y,n_z\rangle_{\bf i}},
\end{array}
\end{equation}
where $\psi_{{\bf i},\alpha}=\langle{\hat\psi_{{\bf i},\alpha}}\rangle$
is the order parameter for the flavor $\alpha$ at site ${\bf i}=(i_x,i_y,i_z)$.
This mean-field approximation is expected to be reasonably accurate in the superfluid phase when interactions are much weaker than the tunneling strengths. In this same regime the effects due to
the $d$-band atoms can also be safely ignored as long as the tunneling
strengths are much smaller than the anharmonicity induced
detuning discussed earlier.
Using the coherent state ansatz we can
derive the equations
of motion for the order parameters $\psi_\alpha$ from the
Euler-Lagrange equation
\begin{equation}
\frac{\partial L}{\partial\psi_{{\bf i},\alpha}^*}-\frac{d}{dt}\left(\frac{\partial L}{\partial\dot\psi_{{\bf i},\alpha}^*}\right)=0,
\end{equation}
with the Lagrangian given by
\begin{equation}
L=\sum_{{\bf i},\alpha}
i\frac{\hbar}{2}\left[\psi_{{\bf i},\alpha}^*\dot\psi_{{\bf i},\alpha}-
\psi_{{\bf i},\alpha}\dot\psi_{{\bf i},\alpha}^*\right]
-H_{MF}.
\end{equation}
Here $H_{MF}$ is the mean-field approximation
for the Hamiltonian in terms of the coherent state amplitudes.
What we find are the discretized versions
of the Gross-Pitaevskii equation for each flavor. These equations
are non-linear and coupled, but can be solved numerically without
too much difficulty. Furthermore, in some special cases analytical results
can even be derived. We choose the lowest band tunneling strength as our unit
of energy and lattice spacing as our unit of length.
Then, for a three-dimensional lattice, the Gross-Pitaevskii
equations for different $p$-band flavors read
\begin{equation}
\begin{array}{lll}
i\hbar\frac{\partial\psi_{{\bf i},x}}{\partial t}&=&
-\sum_\alpha t_{x,\alpha}\left[\psi_{{\bf i+1_\alpha},x}-2\psi_{{\bf i},x}+\psi_{{\bf i-1_\alpha},x}\right]\\ \\
& & +\left[g_{xx}|\psi_{{\bf i},x}|^2+2g_{xy}|\psi_{{\bf i},y}|^2+2g_{xz}|\psi_{{\bf i},z}|^2\right]\psi_{{\bf i},x}\\ \\
& & +\frac{g_{xy}}{2}\psi_{{\bf i},y}^2\psi_{{\bf i},x}^*
+\frac{g_{xz}}{2}\psi_{{\bf i},z}^2\psi_{{\bf i},x}^*,
\end{array}
\end{equation}
\begin{equation}
\begin{array}{lll}
i\hbar\frac{\partial\psi_{{\bf i},y}}{\partial t}&=&
-\sum_\alpha t_{y,\alpha}\left[\psi_{{\bf i+1_\alpha},y}-2\psi_{{\bf i},y}+\psi_{{\bf i-1_\alpha},y}\right]\\ \\
& & +\left[g_{yy}|\psi_{{\bf i},y}|^2+2g_{xy}|\psi_{{\bf i},x}|^2+2g_{yz}|\psi_{{\bf i},z}|^2\right]\psi_{{\bf i},y}\\ \\
& &+\frac{g_{xy}}{2}\psi_{{\bf i},x}^2\psi_{{\bf i},y}^*
+\frac{g_{yz}}{2}\psi_{{\bf i},z}^2\psi_{{\bf i},y}^*,
\end{array}
\end{equation}
and
\begin{equation}
\begin{array}{lll}
i\hbar\frac{\partial\psi_{{\bf i},z}}{\partial t}&=&
-\sum_\alpha t_{z,\alpha}\left[\psi_{{\bf i+1_\alpha},z}-2\psi_{{\bf i},z}+\psi_{{\bf i-1_\alpha},z}\right]\\ \\
& & +\left[g_{zz}|\psi_{{\bf i},z}|^2+2g_{xz}|\psi_{{\bf i},x}|^2+2g_{yz}|\psi_{{\bf i},y}|^2\right]\psi_{{\bf i},z}\\ \\
& &+\frac{g_{xz}}{2}\psi_{{\bf i},x}^2\psi_{{\bf i},z}^*
+\frac{g_{yz}}{2}\psi_{{\bf i},y}^2\psi_{{\bf i},z}^*.
\end{array}
\end{equation}
In these equations the first term on the right hand side
is due to the kinetic energy in the lattice, the second term
originates from the density-density interactions, while the
last terms are due to the flavor changing collisions.
The generalization for the two-dimensional system with only
two flavors is straightforward.
\subsection{2-dimensional lattice}
In a two-dimensional system we have two degenerate $p$-bands.
On a mean-field level it is easy to investigate
the lowest energy wavefunctions in the broken symmetry phase.
When the lattice is very deep, the energy minimization can be done
in each site separately by ignoring the tunneling term entirely.
In this way we find that the lowest energy state in each site
is given by $\psi_x=e^{i\phi}/\sqrt{2}$ and $\psi_y=e^{i(\phi\pm \pi/2)}/\sqrt{2}$.
This corresponds to an onsite wavefunction
\begin{equation}
\langle {\hat\psi}({\bf x})\rangle=w_x({\bf x})\psi_x+w_y({\bf x})\psi_y.
\end{equation}
Since the Wannier functions are related to each other
and can be expressed as $w_x({\bf x})=f(x)w_0({\bf x})$
and $w_y({\bf x})=f(y)w_0({\bf x})$, this implies
\begin{equation}
\langle {\hat\psi}({\bf x})\rangle=\frac{e^{i\phi}}{\sqrt{2}}w_0({\bf x})
\left(f(x)\pm i f(y)\right).
\end{equation}
For deep lattices the Wannier functions approach the harmonic oscillator
states and $f(x)\sim x$. We can then clearly see that the mean-field
state corresponds to a vortex or anti-vortex state with an angular momentum
$\pm 1$ along the $z$-axis.
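This unit angular momentum can be checked numerically. The following sketch (our illustration; a Gaussian stands in for $w_0$ in the deep-lattice limit, with $f(u)\sim u$) applies the discretized $L_z=-i(x\partial_y-y\partial_x)$ to $(x+iy)e^{-(x^2+y^2)/2}$ and confirms that it is an eigenstate with eigenvalue $+1$.

```python
import numpy as np

# On-site state in the deep-lattice limit: (f(x) + i f(y)) w0 with f(u) ~ u
# and a Gaussian standing in for w0.
L, N = 6.0, 201
u = np.linspace(-L, L, N)
h = u[1] - u[0]
X, Y = np.meshgrid(u, u, indexing="ij")
psi = (X + 1j * Y) * np.exp(-(X**2 + Y**2) / 2)

# L_z = -i (x d/dy - y d/dx) via central differences
dpsi_dx = np.gradient(psi, h, axis=0)
dpsi_dy = np.gradient(psi, h, axis=1)
Lz_psi = -1j * (X * dpsi_dy - Y * dpsi_dx)

# Compare L_z psi with +1 * psi away from the grid edges
inner = (slice(10, -10), slice(10, -10))
err = np.max(np.abs(Lz_psi[inner] - psi[inner]))
print(err)   # small discretization error: L_z psi = +1 * psi
```

Replacing $x+iy$ by $x-iy$ flips the eigenvalue to $-1$, the anti-vortex of the text.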
Within this approximation, all configurations of vortex and anti-vortex states on the individual sites are degenerate. However, when the tunneling term is non-zero, the phases of the order parameters
in different sites must be correlated properly if the energy is to be minimized.
When tunneling strengths are positive, the lowest energy
condensed state has the same phase at each site. However,
on the $p$-band the tunneling strength for a flavor is negative
in the direction of the node in its localized Wannier function.
In this case, the lowest energy state has a $\pi$ phase-difference
between neighboring sites. For a two-dimensional system
it is possible to find the mean-field state which minimizes
the onsite problem as well as the tunneling problem simultaneously
and this state amounts to a checkerboard (or anti-ferromagnetic)
ordering of vortices and anti-vortices.
This is easy to see, since if at some site we have a vortex
state $\sim (x+iy)$ and we aim to minimize the kinetic energy
along $y$-direction, then the neighboring site should have
a same phase for the $x$-flavor while having a $\pi$-phaseshift
for the $y$-flavour. This implies an anti-vortex state $\sim (x-iy)$.
If we then try to minimize the kinetic energy along $x$-direction,
we see that the $x$-flavor should experience a $\pi$-phaseshift,
while for the $y$-flavor the phaseshift should vanish,
This implies an anti-vortex state $\sim e^{i\pi} (x-iy)$ with an
additional overall phaseshift of $\pi$.
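The resulting checkerboard pattern can be verified directly against the discrete Gross-Pitaevskii equations of the previous section. The sketch below (our own illustration, with representative made-up couplings: $t_\parallel<0$ along the node direction and $t_\perp>0$ transverse to it) builds the staggered vortex/anti-vortex configuration on a periodic lattice and checks that the two-flavor GP right-hand sides reduce to a single site-independent chemical potential times $\psi$, i.e., that the state is stationary.

```python
import numpy as np

# Representative couplings (illustrative values only, not taken from the text)
t_par, t_perp = -0.5, 0.05   # hopping along / transverse to the node direction
gxx, gxy = 1.0, 0.3          # intra- and inter-flavor interaction strengths
N, A = 6, 1.0                # linear lattice size (even), flavor amplitude

ix, iy = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
# Staggered vortex/anti-vortex ansatz: (x + i y)- and (x - i y)-states alternate
psi_x = A * (-1.0) ** ix + 0j
psi_y = 1j * A * (-1.0) ** iy

def kinetic(psi, t_x, t_y):
    """-sum_a t_a [psi(i+1_a) - 2 psi(i) + psi(i-1_a)] with periodic boundaries."""
    dx = np.roll(psi, 1, axis=0) - 2 * psi + np.roll(psi, -1, axis=0)
    dy = np.roll(psi, 1, axis=1) - 2 * psi + np.roll(psi, -1, axis=1)
    return -t_x * dx - t_y * dy

# Right-hand sides of the two-flavor discrete Gross-Pitaevskii equations
rhs_x = (kinetic(psi_x, t_par, t_perp)
         + (gxx * abs(psi_x)**2 + 2 * gxy * abs(psi_y)**2) * psi_x
         + 0.5 * gxy * psi_y**2 * np.conj(psi_x))
rhs_y = (kinetic(psi_y, t_perp, t_par)
         + (gxx * abs(psi_y)**2 + 2 * gxy * abs(psi_x)**2) * psi_y
         + 0.5 * gxy * psi_x**2 * np.conj(psi_y))

mu_x, mu_y = rhs_x / psi_x, rhs_y / psi_y
print(mu_x.std(), mu_y.std())   # both ~0: the staggered state is stationary
```

Both flavors yield the same effective chemical potential, $\mu=4t_\parallel+(g_{xx}+\tfrac{3}{2}g_{xy})A^2$ for the parameters above, as required for a stationary state.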
\subsection{3-dimensional lattice}
In a three-dimensional lattice we have three degenerate bands, which opens up novel phenomena not present in the two-dimensional case. Minimizing the onsite problem, we find that the lowest energy configuration becomes
\begin{equation}\label{120deg}
\langle\Psi\rangle=\left(\begin{array}{c}\langle\psi_x\rangle\\
\langle\psi_y\rangle\\
\langle\psi_z\rangle
\end{array}\right)=\sqrt{\frac{n_T}{3}}e^{i\phi}\left(\begin{array}{c}
1\\
\exp(2\pi i/3)\\
\exp(4\pi i/3)
\end{array}\right),
\end{equation}
where $n_T$ is the total onsite atom number and $\phi$ is a random phase.
The onsite wavefunction with an equal number of atoms in each flavor
has a unit angular momentum per atom which points
not along the main axes, but diagonally ${\bf L}\propto (\pm 1,\pm 1,\pm1)$. Again minimization of the kinetic energy necessitates a special ordering of angular momentum in each site.
In the three-dimensional lattice the nearest neighbor angular momenta (in the direction $\alpha$) are related by a relation ${\bf L}({\bf i}+{\bf e}_\alpha)=\hat{R}_\alpha(\pi){\bf L}({\bf i})$, where
$\hat{R}_\alpha(\pi)$ is a rotation of $\pi$ around the axis $\alpha$.
The above results depend crucially on the magnitude of the inter-flavor coupling strengths $g_{xy}=g_{xz}=g_{yz}$ relative to the magnitudes of the $g_{\alpha\alpha}$ terms. In particular, it only holds when $g_{xy}< g_{xx}/3$. If one approximates the Wannier functions with the harmonic oscillator states one finds that $g_{xy}=g_{xx}/3$, but when real Wannier functions of an ideal
Bose gas are used, $g_{xy}<g_{xx}/3$ for fairly deep lattices and the above result holds.
That said, the result may be different for a shallow lattice. Furthermore, it is also unclear
what the effect of the interaction-induced dressing of the Wannier functions~\cite{Li2006a} is on the magnitudes of the effective $p$-band coupling strengths.
If it turns out that under some circumstances inter-flavor
coupling is larger and $g_{xy}>g_{xx}/3$, then the lowest energy configuration
breaks the permutational symmetry and is given by the vortex states
\begin{equation}\label{90deg}
\langle\Psi\rangle=\sqrt{\frac{n_T}{2}}e^{i\phi}\left(\begin{array}{c}
1\\
\exp(\pm \pi i/2)\\
0
\end{array}\right),
\end{equation}
where the angular momentum points along the $z$-axis. The vortex-anti-vortex states
with angular momentum along other axes are degenerate with the one shown here explicitly. It is seen that the state~(\ref{120deg}) has a mutual $120^\circ$ phase difference between the flavors, reminiscent of three interacting spin 1/2-particles placed on the corners of a triangle. The state (\ref{90deg}), on the other hand, shows a mutual $90^\circ$ phase pattern. In particular, the interaction terms proportional to $g_{\alpha\beta}$, with $\alpha=\beta$, favors a $90^\circ$ pattern, while those with $\alpha\neq\beta$ favor a $120^\circ$ configuration.
In order to achieve a better understanding of how the particular limiting case $g_{xy}=g_{xx}/3$ comes about, let us again write the Hamiltonian as $H=H_{nn}+H_{FD}$ where in the mean-field approximation we have
\begin{equation}\label{en1}
\begin{array}{lll}
H_{nn} & = & g_{xx}\left[n_x^2+n_y^2+n_z^2\right]\\ \\
& & +4g_{xy}(n_xn_y+n_xn_z+n_yn_z),\\ \\
H_{FD} & = & 2g_{xy}\left[\cos(2\Delta_{xy})n_xn_y+\cos(2\Delta_{xz})n_xn_z\right.\\ \\
& & \left.+\cos(2(\Delta_{xz}-\Delta_{xy}))n_yn_z\right].
\end{array}
\end{equation}
Here, $n_\alpha=|\psi_\alpha|^2$, and $\Delta_{\alpha\beta}=\phi_\alpha-\phi_\beta$, with $\phi_\alpha$ being the phase of $\langle\psi_\alpha\rangle$; the doubled phases appear because the flavor-changing terms involve $\psi_\sigma^{*2}\psi_{\sigma'}^{2}$. The energy functional can be written in the form
\begin{equation}
E[\langle\psi_x\rangle,\langle\psi_y\rangle,\langle\psi_z\rangle]=\mathbf{n}^T\mathbf{M}\mathbf{n},
\end{equation}
where $\mathbf{n}=(n_x,n_y,n_z)$ and
\begin{widetext}
\begin{equation}
\mathbf{M}=\left[\begin{array}{ccc}
g_{xx} & 2g_{xy}\left(2+\cos(2\Delta_{xy})\right) & 2g_{xy}\left(2+\cos(2\Delta_{xz})\right)\\
2g_{xy}\left(2+\cos(2\Delta_{xy})\right) & g_{xx} & 2g_{xy}\left(2+\cos(2(\Delta_{xz}-\Delta_{xy}))\right)\\
2g_{xy}\left(2+\cos(2\Delta_{xz})\right) & 2g_{xy}\left(2+\cos(2(\Delta_{xz}-\Delta_{xy}))\right) & g_{xx}
\end{array}\right].
\end{equation}
\end{widetext}
Thus, we have rewritten the single site problem in a quadratic form for the $n_\alpha$ variables.
For the general case, the eigenvalues are not analytically solvable. However, assuming $g_{xy}<g_{xx}/3$ we may use the fact that the energy is minimized for $\Delta_{xy}=2\pi/3$ and $\Delta_{xz}=4\pi/3$, and we then obtain
\begin{equation}\label{pos}
\begin{array}{l}
\lambda_1=g_{xx}-3g_{xy},\\
\lambda_2=g_{xx}-3g_{xy},\\
\lambda_3=g_{xx}+6g_{xy}.
\end{array}
\end{equation}
Since $g_{xx}>3g_{xy}$, the matrix $\mathbf{M}$ is positive definite. However, setting $g_{xx}<3g_{xy}$ in Eq.~(\ref{pos}) results in a matrix that is not positive definite, and we can conclude that the $120^\circ$ phase symmetry is broken in such a case. The possibility of the broken permutational symmetry was also
noted in Ref.~\cite{Isacsson2005a}.
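The competition between the two onsite configurations can also be confirmed by brute force. The following sketch (our own illustration; $n_T=1$ and couplings in arbitrary units) scans the onsite mean-field energy over populations and the two independent phase differences, with the flavor-changing contribution carrying the doubled phases $\cos(2\Delta_{\alpha\beta})$ dictated by its $\hat\psi_\sigma^{\dagger 2}\hat\psi_{\sigma'}^{2}$ operator structure. The search recovers the $120^\circ$ state for $g_{xy}<g_{xx}/3$ and the vortex state for $g_{xy}>g_{xx}/3$.

```python
import numpy as np

def min_onsite_energy(g0, g1, steps=24):
    """Grid search of the onsite mean-field energy over populations
    (n_x + n_y + n_z = 1) and the two independent phase differences.
    g0 ~ g_xx, g1 ~ g_xy; relative weights follow the text's H_nn + H_FD."""
    fr = np.linspace(0.0, 1.0, steps + 1)            # contains 1/3 and 1/2
    ph = np.linspace(0.0, 2 * np.pi, steps, endpoint=False)
    dxy, dxz = np.meshgrid(ph, ph, indexing="ij")
    best = np.inf
    for nx in fr:
        for ny in fr:
            nz = 1.0 - nx - ny
            if nz < -1e-12:
                continue
            nz = max(nz, 0.0)
            E = (g0 * (nx**2 + ny**2 + nz**2)
                 + nx * ny * (4 * g1 + 2 * g1 * np.cos(2 * dxy))
                 + nx * nz * (4 * g1 + 2 * g1 * np.cos(2 * dxz))
                 + ny * nz * (4 * g1 + 2 * g1 * np.cos(2 * (dxz - dxy))))
            best = min(best, float(E.min()))
    return best

g0 = 1.0
E_weak = min_onsite_energy(g0, 0.2)     # g_xy < g_xx/3
E_strong = min_onsite_energy(g0, 0.5)   # g_xy > g_xx/3
print(E_weak, g0 / 3 + 0.2)             # 120-degree state: E = g_xx/3 + g_xy
print(E_strong, g0 / 2 + 0.25)          # vortex state: E = g_xx/2 + g_xy/2
```

The two analytic minima cross exactly at $g_{xy}=g_{xx}/3$: $g_{xx}/3+g_{xy}=g_{xx}/2+g_{xy}/2$ there, which is the boundary quoted in the text.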
We should point out that all the results rely on having an isotropic lattice configuration. Any deviation from the symmetric lattice will break this degeneracy and give a preferred direction for the axis of angular momentum. When kinetic energy is included, the ordering of vortex and anti-vortex states between sites is the same as in the two-dimensional system.
\section{Quantum states}
\label{sec:Gutzwiller2D}
The Gross-Pitaevskii mean-field approximation usually provides a sufficient description
when one considers the superfluid phase with a large number of atoms per site.
However, the local coherent-state ansatz is not necessarily accurate when the average onsite occupation number is small.
Furthermore, the mean-field description fails completely when the system is in a
Mott insulator phase, {\it i.e.}, when the onsite atom distribution is strongly sub-Poissonian. Therefore, to accurately describe the system properties in this regime a more precise many-body wave function is needed. This will also provide insight into the parameter regimes where the mean-field picture
is a good approximation.
We assume that the state vector has the generalized form of the Gutzwiller
approximation~\cite{Buonsante2008a}. This is a product of
on-site quantum states expanded in terms of the Fock states $|{\bf n}\rangle$
of the multiple flavor system
\begin{equation}
\label{eq:Gutzwiller}
|\psi\rangle=\prod_{\bf i} \sum_{\bf n }f_{{\bf n}}^{({\bf i})} |{\bf n}\rangle_{\bf i},
\end{equation}
where the index ${\bf i}$ runs over all lattice sites. The expansion
coefficient $f_{{\bf n}}^{({\bf i})}$ is the Gutzwiller amplitude
of the particular on-site Fock state. For our purposes, in
the $p$-band the relevant subspace is covered by the Fock states of the form
$|{\bf n}\rangle=|n_x,n_y,n_z\rangle$, where for example, $n_x$ is the occupation
number of the $p_x$-flavor.
Within the Gutzwiller approximation the energy of the system becomes
a functional of the amplitudes $f_{{\bf n}}^{({\bf i})}$. By utilizing a conjugate gradient method we minimize this functional giving the system ground state at zero temperature, $T=0$. Several aspects regarding the minimization were discussed in Ref. \cite{Larson2009a}. Here we only
mention that the sum over ${\bf n}$ must be cut off and in our numerical
scheme we include all the states with $\sum n_{\sigma} \le 8$ in 2D and
$\sum n_{\sigma} \le 6$ in 3D. Throughout this section we will keep the lattice amplitude fixed and instead assume that the ratio between tunneling and onsite interaction can be controlled via Feshbach resonances. The Wannier functions are calculated for a relatively deep lattice, $V_L=15E_R$.
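For concreteness, the truncated on-site basis can be enumerated as in the following short Python sketch (the function and variable names are ours and purely illustrative):

```python
from itertools import product

def onsite_basis(n_flavors, n_max):
    """On-site Fock states |n_1,...,n_f> kept below the cutoff sum(n) <= n_max."""
    return [n for n in product(range(n_max + 1), repeat=n_flavors)
            if sum(n) <= n_max]

basis_2d = onsite_basis(2, 8)  # p_x, p_y flavors, cutoff sum(n) <= 8
basis_3d = onsite_basis(3, 6)  # p_x, p_y, p_z flavors, cutoff sum(n) <= 6
print(len(basis_2d), len(basis_3d))  # 45 and 84 states per site
```

With these cutoffs the local Hilbert space per site remains small (45 states in 2D, 84 in 3D), which is what makes the minimization numerically tractable.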
\subsection{2-dimensional lattice}
\label{sec:2d-gutzw}
For the two-dimensional lattice, the onsite Fock states consist of $p_x$ and $p_y$ terms only. To gain insight into possible correlations between neighboring lattice sites, our effective computational subspace contains four lattice sites, two in each spatial direction. The computational $4t/U_{00}$-$\mu/U_{00}$ parameter region is chosen such that the total number of atoms per site is relatively small. This is the region of greatest experimental interest for investigating the effects of quantum fluctuations, and it is also numerically favorable with reasonable cut-offs.
Some resulting properties can be seen in Fig. \ref{fig:properties1}.
The absolute values of the two condensate order parameters $|\langle\psi_x\rangle|,\,|\langle\psi_y\rangle|$ are plotted in Fig. \ref{fig:properties1} (a). As in the standard $s$-band Bose-Hubbard model, the phase-space consists of Mott insulating lobes and superfluid regions. This is further evidenced in (b) where the total atom number $n_T$ is shown; within the Mott lobes $n_T$ attains an integer value. Due to symmetry reasons, it is not surprising that the absolute values of the two flavor order parameters are identical. However, in the SF phase, there is a phase difference of $\pm\pi/2$ between the two flavors, e.g., $\langle\psi_x\rangle$ is real while $\langle\psi_y\rangle$ is imaginary. This suggests that the on-site ground state is a vortex; a result in agreement with our Gross-Pitaevskii calculations. Indeed, a plot of the scaled angular momentum $|L|/n_T\equiv\hat{L}_{z,{\bf i}}=-i\left(\hat{\psi}_{x,{\bf i}}^\dagger\hat{\psi}_{y,{\bf i}}-\hat{\psi}_{y,{\bf i}}^\dagger\hat{\psi}_{x,{\bf i}}\right)/n_T$ given in Fig.~\ref{fig:properties1} (d) verifies that for strong tunneling and large onsite atom numbers the angular momentum is quantized. The existence of the vortex solution is also supported by the work of Watanabe and Pethick~\cite{Watanabe2007}. Namely, in a single harmonic trap within the mean-field approximation the energy functional is of the form
\begin{equation}
E_{MFHO}=
-\frac{\gamma}{4}\left[1-\left(\Delta n\right)^2\right]
\sin^2\phi
\label{eq:mfho}
\end{equation}
where $\gamma$ is the effective coupling constant, $\Delta n = n_x-n_y$ is the population difference between the two flavors, and $\phi$ is their relative phase. Eq. (\ref{eq:mfho}) is clearly minimized when $\Delta n=0$ and $\phi=\pm\pi/2$. Physically this means that the repulsive interaction favors a
vortex solution above a non-vortex one.
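A simple numerical scan of Eq. (\ref{eq:mfho}) locates its minimum; in this sketch we write the bracket as $1-(\Delta n)^2$ with $\Delta n$ the normalized population difference (our reading of the functional) and set $\gamma=1$ in arbitrary units:

```python
import numpy as np

gamma = 1.0                              # effective coupling, arbitrary units
dn = np.linspace(-1.0, 1.0, 201)         # normalized population difference
phi = np.linspace(-np.pi, np.pi, 201)    # relative phase between flavors
DN, PHI = np.meshgrid(dn, phi)
E = -gamma / 4 * (1 - DN**2) * np.sin(PHI)**2

i, j = np.unravel_index(np.argmin(E), E.shape)
print(DN[i, j], PHI[i, j])  # minimum at dn ~ 0 and |phi| ~ pi/2
```

The scan confirms the statement in the text: the energy is lowest for balanced populations with a relative phase of $\pm\pi/2$, i.e., a vortex.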
\begin{figure}
\includegraphics[width=8cm]{Fig1.eps}
\caption{(Color online) Properties of the two-dimensional two-flavor
Bose-Hubbard model as a function of the chemical potential and the inverse
interaction strength $4t/U_{00}$ where the factor 4 derives from the number
of nearest neighbors. For concreteness the parameters were computed for
a lattice depth of $V_L=15E_R$. The various plots show: order parameters (a), total atom number (b), atom number fluctuations (c), and angular momentum per particle (d).}
\label{fig:properties1}
\end{figure}
As discussed in the previous section, in the mean-field limit the
vortices on the lattice tend to order themselves in a
form of a checkerboard pattern with neighboring vortices and anti-vortices.
According to our Gutzwiller results this is true also more
generally in the superfluid phase. In fact, our ground state of
anti-ferromagnetic like vortex ordering is similar to the
staggered-vortex superfluid state discussed in
Ref.~\cite{Lim2008a} for a square optical lattice in an effective
staggered magnetic field. However, in the $p$-band such a state
appears even in the absence of effective magnetic fields.
The physics appearing for the multi-flavor Mott insulating states is possibly even more interesting. For example, as seen in Fig.~\ref{fig:properties1} (c), onsite number fluctuations $\Delta n_x^2$ (or equivalently $\Delta n_y^2$) for the individual flavors are not necessarily zero. For the Gutzwiller ansatz wave function~(\ref{eq:Gutzwiller}), no correlation between sites is allowed. As an outcome, for an odd total number of atoms $n_T$ there is a set of degenerate Mott states, {\it e.g.}, with $n_T=1$ all onsite interaction terms vanish and the state $|n_x=1,n_y=0\rangle$ is degenerate with $|n_x=0,n_y=1\rangle$ or any linear combination of these. However, tunneling between sites will normally break these degeneracies. The Gutzwiller approach is not able to capture such effects and therefore the kinetic energy term
\begin{equation}
\hat{T}=\sum_{\sigma,\alpha}\sum_{<{\bf i},{\bf j}>_\alpha}t_{\alpha,\sigma}\hat{\psi}_{\sigma,{\bf i}}^\dagger\hat{\psi}_{\sigma,{\bf j}}
\end{equation}
is taken into account within second-order perturbation theory. We focus on the lowest Mott lobe, $n_T=1$, and it turns out that the degeneracy is indeed lifted and the ground state shows an anti-ferromagnetic vortex structure. We note that the favorability of a vortex state in the $n_T=1$ Mott state relies on the non-zero value of the transverse tunneling rate. If this tunneling is completely neglected, the energy is minimized by a ferromagnetic state \cite{Isacsson2005a}. In the second
lowest Mott, $n_T=2$, the picture is simpler because the interactions break
the degeneracy and no perturbation theory is needed.
The state $|n_x=1,n_y=1\rangle$ is favored over $|n_x=2,n_y=0\rangle$ and
$|n_x=0,n_y=2\rangle$ due to the vanishing of the self terms proportional to
${\hat n}_{\sigma,\bf i}\left({\hat n}_{\sigma,\bf i}-1\right)$ in Eq.
(\ref{eq:Hnn}).
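The comparison can be made explicit with the diagonal part of the on-site interaction; in this sketch the ratio $U_{xy}=U_{xx}/3$ is an assumption on our part (the value obtained for $p$ orbitals with identical radial wave functions), and the function name is ours:

```python
def onsite_energy(nx, ny, Uxx, Uxy):
    """Diagonal on-site interaction energy of the two-flavor Fock state |nx, ny>."""
    self_terms = 0.5 * Uxx * (nx * (nx - 1) + ny * (ny - 1))
    cross_term = Uxy * nx * ny
    return self_terms + cross_term

Uxx = 1.0
Uxy = Uxx / 3.0  # assumed inter-flavor to intra-flavor ratio
print(onsite_energy(1, 1, Uxx, Uxy))  # only the cross term contributes
print(onsite_energy(2, 0, Uxx, Uxy))  # the self term n(n-1) pays the full Uxx
```

Since the self terms vanish for $|1,1\rangle$, its interaction energy is lower than that of $|2,0\rangle$ or $|0,2\rangle$ whenever $U_{xy}<U_{xx}$, in line with the argument above.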
We further illustrate our results by plotting the absolute values of the Gutzwiller amplitudes $f_{\bf n}^{({\bf i})}$ in Fig. \ref{fig:absamplitudes} as bar graphs; their squares give the probabilities of the onsite state being in a given Fock state $|n_x,n_y \rangle$. In Fig. \ref{fig:absamplitudes} (a)
the single-site amplitudes of a superfluid state are
given when $4t/U_{00}=0.04$ and $\mu/U_{00}=0.7$. This state is
clearly a superposition of many Fock states whereas the
Mott state of Fig. \ref{fig:absamplitudes} (b) contains only one state.
This MI state is the minimum energy configuration for
$4t/U_{00}=0.01$ and $\mu/U_{00}=0.7$. Due to the small number of atoms, the superfluid atomic distribution depicted in Fig.~\ref{fig:absamplitudes} (a) is still sub-Poissonian. It is also evident from the figures that the states with large populations are negligibly occupied, justifying our numerical cut-off at $\sum n_\sigma\leq8$.
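Whether an on-site distribution is sub-Poissonian follows directly from its number statistics; a minimal sketch with hypothetical amplitudes (not the actual data of Fig. \ref{fig:absamplitudes}):

```python
import numpy as np

def number_stats(f):
    """Mean and variance of the on-site atom number for amplitudes f_n."""
    p = np.abs(f) ** 2
    p = p / p.sum()                 # normalized probabilities |f_n|^2
    n = np.arange(len(f))
    mean = (p * n).sum()
    var = (p * n**2).sum() - mean**2
    return mean, var

f = np.array([0.1, 0.8, 0.55, 0.2, 0.05])   # hypothetical superfluid-site amplitudes
mean, var = number_stats(f)
print(mean, var, var < mean)                # sub-Poissonian when var < mean
```

A Poissonian (ideal coherent-state) distribution would give $\Delta n^2=\langle n\rangle$; here the variance falls below the mean, the signature of number squeezing on the site.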
\begin{figure}
\includegraphics[width=8cm]{Fig2.eps}
\caption{(Color online) Absolute values of the Gutzwiller amplitudes at a
single lattice site. The left figure (a) shows the atomic distribution
for a superfluid ground state with $4t/U_{00}=0.04$
and $\mu/U_{00}=0.7$. On the right figure (b), a Mott insulator state is plotted
for $4t/U_{00}=0.01$ and $\mu/U_{00}=0.7$. Expectedly, in this insulator phase
only the Fock state $|n_x,n_y \rangle=|1,1 \rangle$ is populated within the Gutzwiller approach.}
\label{fig:absamplitudes}
\end{figure}
\subsection{3-dimensional lattices}
\label{sec:Gutzwiller3D}
In a symmetric three-dimensional lattice the $p$-band is described in terms of $3$ flavors.
In the Mott insulator with only one atom per site, for the same reason as for the two-dimensional case, the ground state is strongly degenerate within the Gutzwiller ansatz. As argued above, such states are not true eigenstates of our Bose-Hubbard Hamiltonian, and again for relatively deep lattices the breaking of this degeneracy, and hence the permutational symmetry breaking, is well described within second order perturbation theory. Using real Wannier functions to compute the model parameters we find that, in a theory which takes the kinetic energy into account perturbatively, the ferromagnetic state where only one
flavor is occupied has a lower energy than either anti-ferromagnetic states with checkerboard ordering or striped phases.
\begin{figure}
\includegraphics[width=8cm]{Fig3.eps}
\caption{(Color online) The condensate order parameters for the three-dimensional lattice.
}
\label{Fig3}
\end{figure}
With only two atoms per site the condensate order parameters
naturally vanish when entering the Mott insulating regime, but the local
angular momentum
$\langle \hat{L}\rangle=\frac{1}{\sqrt{3}}\left(\pm 1,\pm 1,\pm 1\right)$
is non-zero and
$\langle L^2\rangle$ is equal to $6$. The angular momentum per particle, $\sqrt{\sum_\alpha \langle\hat{L}_\alpha\rangle^2}/n_T$, is $1/2$ in this state, in marked contrast to the superfluid regime, where the onsite angular momentum per particle is equal to one.
In a superfluid phase, the half-quantum vortex can occur in multi-component systems and can be pictured as a vortex in one of the components, with the vortex-free component filling the vortex core~\cite{Leonhardt2000a,Martikainen2002b}. However, being non-zero, the expectation value of the angular momentum is in qualitative agreement with the Gross-Pitaevskii solution even in the Mott lobe.
More explicitly, the minimum energy state at each site is a maximally entangled angular momentum eigenstate given by
\begin{equation}\label{symstate}
|\psi\rangle=\frac{1}{\sqrt{3}}\left[e^{i\phi_1}|110\rangle
+e^{i\phi_2}|101\rangle+e^{i\phi_3}|011\rangle\right],
\end{equation}
where the amplitudes have $2\pi/3$ phase-differences.
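The angular momentum carried by the state (\ref{symstate}) can be checked directly in the fixed-number Fock space of two atoms on a site; the sketch below builds $\hat{L}_z=-i(\hat{a}_x^\dagger\hat{a}_y-\hat{a}_y^\dagger\hat{a}_x)$ and evaluates it in this state (all names are ours):

```python
import numpy as np
from itertools import product

# two atoms per site: Fock basis |nx, ny, nz> with nx + ny + nz = 2
basis = [n for n in product(range(3), repeat=3) if sum(n) == 2]
index = {n: i for i, n in enumerate(basis)}
dim = len(basis)

def hop(create, destroy):
    """Matrix of a_create^dagger a_destroy in the fixed-number basis."""
    m = np.zeros((dim, dim), dtype=complex)
    for n in basis:
        if n[destroy] == 0:
            continue
        out = list(n)
        out[destroy] -= 1
        out[create] += 1
        m[index[tuple(out)], index[n]] = np.sqrt(n[destroy] * (n[create] + 1))
    return m

Lz = -1j * (hop(0, 1) - hop(1, 0))  # flavors 0, 1, 2 = p_x, p_y, p_z

# the state of Eq. (symstate) with 2*pi/3 phase differences between amplitudes
psi = np.zeros(dim, dtype=complex)
for k, n in enumerate([(1, 1, 0), (1, 0, 1), (0, 1, 1)]):
    psi[index[n]] = np.exp(2j * np.pi * k / 3) / np.sqrt(3)

print((psi.conj() @ Lz @ psi).real)  # 1/sqrt(3) ~ 0.577
```

The result $1/\sqrt{3}$ per site, with equal contributions in all three components by symmetry, reproduces the quoted angular momentum of $1/2$ per particle.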
For three atoms per site, the lowest energy Mott insulator
state has the onsite wavefunction $\psi=|111\rangle$, which was also found for the corresponding state in the two-dimensional lattice. Importantly, it should be noted that the commonly used harmonic approximation for the Wannier states predicts the properties of, for example, this insulating phase incorrectly.
If harmonic oscillator states are used to approximate
Wannier wavefunctions, the insulating state
with $3$ atoms per site is degenerate with more complicated
superposition states, but these degeneracies are removed once real
Wannier states are used to evaluate the parameters of the theory.
As we have demonstrated, tunneling between sites will remove the onsite degeneracies among the Mott insulating states, a fact that was already pointed out by Isacsson {\it et al.}~\cite{Isacsson2005a}. However, many of the degeneracies appearing in their work are actually artifacts of utilizing a harmonic approximation. Furthermore, we found that the $p$-band Mott lobes roughly follow the structure of the Mott lobes on the lowest band, as depicted in Figs.~\ref{fig:properties1} and~\ref{Fig3}. This is in contrast with the results of Ref.~\cite{Isacsson2005a} where the Mott lobes extend over larger parameter regimes and moreover show an anomalous behavior with large variations in the sizes of neighboring Mott lobes. This discrepancy seems to originate from a factor of $2$ missing for the cross terms proportional to $n_xn_y$, $n_xn_z$, and $n_yn_z$ in their work.
In Fig.~\ref{Fig:GutzvsGP_3D} we compare the Gutzwiller approach and the Gross-Pitaevskii approach by showing one component of the condensate order parameters, angular momenta per particle, as well
as the fluctuations of the $z$-component of the angular momentum as a function of $6t_{0}/U_{00}$.
In this figure we fixed $\mu/U_{00}$ in the Bose-Hubbard model phase diagram and varied $6t_{0}/U_{00}$; for each point we computed the corresponding solution of the Gross-Pitaevskii equations with the same density. The fixed values of $\mu/U_{00}$ were chosen in such a way that
the starting point was in the center of the Mott insulating phase with either
$1$, $2$, or $3$ atoms per site.
\begin{figure}
\includegraphics[width=0.90\columnwidth]{Fig4.eps}
\caption{(Color online) Comparison between the Gutzwiller approach (solid blue lines) and
the Gross-Pitaevskii theory (dashed red lines) in a three-dimensional system.
The parameters were computed for a lattice of depth $15\,{\rm E_R}$
in all directions. We fixed $\mu/U_{00}$ in the Bose-Hubbard model and varied $6t_{0}/U_{00}$.
We show comparisons for the condensate order parameter $\langle\psi_x\rangle$,
onsite angular momenta per particle $|L|/n_T$, as well as
for the fluctuations $\Delta L_x^2/n_T$.
In (a), (d), and (g) the strong coupling region was in a Mott state
with $1$ atom per site,
in (b), (e), and (h) the strong coupling region was in a Mott state
with $2$ atoms per site, and in
(c), (f), and (i) the strong coupling region was in a Mott state
with $3$ atoms per site. Note that we chose a specific Mott insulating state with $n_T=1$ such that it has an angular momentum of $1$ per atom.
Since this region is in our approximation
strongly degenerate, many other
choices would have been equally justified.
}
\label{Fig:GutzvsGP_3D}
\end{figure}
With the exception of the angular momentum fluctuations at single occupancy per site, we can see that outside the Mott insulating regions the Gross-Pitaevskii theory predicts the values of the condensate order parameters quite accurately. Angular momenta are in a sense sometimes
even better predicted by the Gross-Pitaevskii theory, since
in the Mott phase with $2$ atoms per site angular momentum is non-zero
and behaves qualitatively in the same way in the two different
approaches. However, with $3$ atoms per site the angular momentum vanishes in the Gutzwiller approach, but is non-zero in the Gross-Pitaevskii
approach. Also the fluctuations of angular momentum agree well in the SF regime. These results give us a benchmark for the reliable use of the Gross-Pitaevskii formalism for the description of the
excited band bosons, and we especially find that the mean-field treatment is surprisingly accurate even relatively close to the Mott boundaries where quantum fluctuations are known to become significant.
Earlier we pointed out the possibility of the broken permutational symmetry when $g_{xx}>3g_{xy}$.
In this case the order parameters can be unequal. Interestingly, we find that this broken symmetry is also reflected in the Mott insulating state, where
the exact ground state (with two atoms per site in this example)
carrying angular momentum changes into a superposition
\begin{equation}\label{asymstate}
|\psi\rangle=\frac{1}{\sqrt{3}}\left[\sqrt{p_x}e^{i\phi_1}|200\rangle
+\sqrt{p_y}e^{i\phi_2}|020\rangle+\sqrt{p_z}e^{i\phi_3}|002\rangle\right]
\end{equation}
with possibly unequal number of atoms in different flavors, in contrast to the symmetric state (\ref{symstate}). In particular, for the state (\ref{asymstate}) the angular momentum vanishes.
As one moves to the superfluid phase from the Mott phase, the permutational symmetry breaking can
manifest itself by a single non-vanishing order parameter $\langle\psi_\alpha\rangle$ followed by a transition into a state with two non-vanishing (and equal) order parameters~\cite{Isacsson2005a}.
\section{Introduction}
\label{sec:intro}
Systems of cold atoms in optical lattices have seen a
dramatic experimental progress in the recent past~\cite{Bloch2008a,Lewenstein2007a}.
Due to the realization of optical lattices, low densities, and low temperatures,
a fantastic degree of control has been obtained which has made detailed studies
of strongly correlated quantum systems possible. For example,
the Mott-superfluid transition~\cite{Jaksch1998a,Greiner2002a}
has been successfully observed in optical lattices. This transition, due to the competition between kinetic energy and repulsive on-site interactions between lowest band bosons,
can occur even at $T=0$ and is therefore driven by quantum fluctuations. For large
interactions, the energy is minimized in an incompressible state with fixed
atom numbers at each lattice site, while for weaker interactions the kinetic energy favors
atomic tunneling which drives the system into a superfluid.
The early experiments were confined to the lowest band and while
increasing interactions can make excited band populations
non-negligible~\cite{Kohl2006a}, the lowest band still dominates. In fact, for very strong interactions, it has been theoretically shown that the lowest band Mott insulator turns into a Mott insulator on the $p$-band~\cite{Alon2005a}. Experimentally, however, atomic population residing on the excited bands is obtained by coupling atoms from the lowest band to the excited bands.
This was experimentally demonstrated by accelerating the lattice for a short period~\cite{Browaeys2005a}, or more recently by coupling atoms from the lowest band Mott insulator into the first excited $p$-band of the lattice via Raman transitions between bands~\cite{Mueller2007a}. In the latter of these two, it was in particular found that the lifetimes of $p$-band atoms are considerably
longer than the tunneling time-scale in the lattice and they were also able to
explore how coherence on the excited band is established. These experiments pave the way to also explore the equilibrium physics of purely $p$-band bosons~\cite{Isacsson2005a} and furthermore provide possible routes to realize supersolids~\cite{Scarola2005a} or novel phases~\cite{Liu2006a,Xu2007a} on the excited
bands of an optical lattice. An alternative way to populate higher bands is by considering fermions with a filling factor larger than one~\cite{Wu2008a,CongjunWu2008a,Martikainen2008a}. In this case the Pauli exclusion principle ensures that the fermions that cannot populate the lowest band must
occupy the excited bands~\cite{Kohl2005a}.
In this paper we explore the properties of bosons occupying the first excited bands of an optical lattice. In a periodic potential where the lattice depths are equal in all directions, the (non-interacting) bands are degenerate and a multi-flavor description of the quantum states of atoms is required~\cite{Isacsson2005a,Baillie2009a}. This fact together with the non-isotropic tunneling on the $p$-band add new features and possibilities both for the description of the superfluid
as well as insulating phases. For example, onsite flavor changing collisions can induce
fluctuations in the number of atoms of different flavors even in the insulating phases, giving them non-trivial characteristics. Furthermore, such collisions together with anisotropic tunneling cause different types of phaselockings (both locally as well as between sites) between flavor condensates in the broken symmetry phases.
In many places we confirm the general picture provided by the somewhat simplified model considered by Isacsson and Girvin~\cite{Isacsson2005a}. Nonetheless, we also find differences which arise from the use of real Wannier functions (as opposed to the approximate ones given by a harmonic ansatz), from the inclusion of nearest-neighbor tunneling in all directions, or from differences in accounting for the inter-flavor interactions. For future reference, we also compare the Gross-Pitaevskii type mean-field theory with the more accurate Gutzwiller approach and find the parameter regions where the Gross-Pitaevskii approach is reasonably accurate.
The paper is organized as follows. In Sec.~\ref{sec:theory} we derive our model Hamiltonian
and by taking anharmonicity of the lattice potential into account, we outline under what conditions the physical description can be restricted to the first excited $p$-band. We then proceed by deriving mean-field Gross-Pitaevskii equations for the $p$-band bosons and discuss salient features of their solutions for a homogeneous system both for two-dimensional as well as for three-dimensional systems in
Sec.~\ref{sec:GPsolutions}. In Secs.~\ref{sec:Gutzwiller2D} and~\ref{sec:Gutzwiller3D}, we study the $p$-band physics in two- and three-dimensional systems employing the Gutzwiller ansatz and outline how these solutions differ from the mean-field ones. We conclude with a brief discussion in Sec.~\ref{sec:conclusions}.
\section{Overview}
\par The squeezing of the quantum fluctuations is one of the most fundamental
manifestations of the Heisenberg uncertainty relation, which is among the most
important principles of quantum mechanics.
Considerable effort has long been directed at squeezing of the radiation field, owing to its strong applications in optical communication~\cite{YS78} and weak-signal detection~\cite{CMC81}. Accordingly, the relationship between the squeezing of the atoms and that of the radiation field was established~\cite{WX03}, as well as the possibility for squeezed atoms to radiate a squeezed field~\cite{POUMO01}.
Moreover, much attention has been devoted to atomic spin squeezing~\cite{MB02,
GB03, SM01, VPG00, WX01, Wx04, YWS07, ZPP91}. Spin squeezed states~\cite{WX01,
VPG00, AALU02, WBI94, KIUE93, AGPU90, LUYEFL02, KUMOPO97, WEMO02, HEYO01, MUZHYO02, POMO01, THMAWI02, GAROBU02, STGEDOMA03, ZHSOLI02, WASOMO01} are quantum correlated states with reduced fluctuations in
one of the collective components. Spin squeezed states offer an interesting possibility for
reducing quantum noise in precision measurements~\cite{WBI94,WBI92,UOMK01,
AALU02, DPJN03, JKP01, MRKSIMW01}, with potentially possible applications in
ultra-precise spectroscopy, atomic interferometers and high precision atomic
clocks~\cite{WBI94,WBI92}.
\par Interestingly, it was found that spin squeezing is closely related to a key
ingredient in quantum information theory and implies quantum
entanglement~\cite{SDCZ01, SORENSEN02, BERSAN02, RAFUSAGU0137}. As there are
various kinds of entanglement, a question arises: what kind of entanglement does
spin squeezing imply? In Ref.~\cite{UOMK01}, it has been found that, for a two-qubit symmetric state, spin squeezing is equivalent to its bipartite entanglement, i.e., spin squeezing implies bipartite entanglement and vice versa. Wang and Sanders~\cite{WASA03} presented a generalization of the results of Ref.~\cite{UOMK01} to the multiqubit case, where the authors showed that spin squeezing implies pairwise entanglement for arbitrary symmetric multiqubit states. More quantitatively, for spin states with a certain parity, if a state is spin squeezed according to the definition of Kitagawa and Ueda~\cite{KIUE93}, a quantitative relation is obtained between the spin squeezing parameter and the concurrence~\cite{HW97,HW98,XHCY02,ZL03}, quantifying the entanglement of two half-spin particles~\cite{UOMK01,WASA03}.
\par The close relation between entanglement and spin squeezing enhances the importance of spin squeezing, which motivates us to explore the influence of the Kerr medium~\cite{MSAT07,MSAT109,MSAT209,MSAT309} and the nonlinear binomial state on the spin squeezing and entanglement.
\par A binomial state is one of the important nonclassical states of light which has
attracted much attention in the field of quantum optics in the last few decades,
see, for example,~\cite{Buzek90, STSATE85,Vidiella94}. This is due, in fact, to the importance of these states, which have been experimentally produced and characterized. The binomial state is regarded as one of the intermediate states between the number and the coherent states~\cite{Buzek90, STSATE85, Vidiella94}. It can be produced simply by the action of a single-mode squeeze operator on the state $\mid p,0\rangle$~\cite{STSATE85}, where $p$ is the Bargmann number. It also represents a linear combination of number states with coefficients chosen such that the photon-counting probability distribution is the binomial distribution with mean $M|p|$, where $0 < |p|< 1$.
\par The scope of this communication is to employ spin squeezing and the quantum entropy to elucidate entanglement for a two-atom system prepared initially in a Bell state interacting with a single cavity field prepared initially in a binomial state. We introduce our Hamiltonian model and give exact expressions for the full wave function of the atomic Bell-state and field systems, shedding light on the important question of the relation between the spin squeezing and quantum entropy behaviors. We also examine the evolution of the quasiprobability Husimi $Q$-function for the model under consideration. With the help of the quasiprobability distribution, one gains further insight into the mechanism of the interaction between the model subsystems. In the language of quantum information theory, a definition of spin squeezing is presented for this system. The utility of the definition is illustrated by examining spin squeezing from the point of view of quantum information theory for the present system. This analysis is applicable to any quantum tripartite state subject to spin squeezing with appropriate initial conditions.
\section{model solution and the reduced density operator}
\par Our system is as follows: a light field is initially prepared in the binomial state (the state which interpolates between the nonlinear coherent and the number states)
$$
|\psi_{F}(0)\rangle=|p,M\rangle=\sum_{n=0}^{\infty}~b_{n}|n\rangle,\eqno(1)
$$
where $|n\rangle$ is an eigenstate of the number operator $\widehat{a}^{\dag}\widehat{a}=\widehat{n}$, $\widehat{n}|n\rangle=n|n\rangle$, while the coefficients $b_{n}$ are given by
$$
b_{n}=\sqrt{\binom{M}{n}~|p|^{n}(1-|p|)^{M-n}},\eqno(2)
$$
where $M>0$ (i.e., $M$ is in general any real positive number) and $0 <|p|< 1$. This field interacts with two identical two-level atoms which are initially in the following Bell state
$$
|\psi_{A}^{\uparrow\uparrow\downarrow\downarrow}(0)\rangle=\gamma_{1}
|\uparrow\uparrow\rangle+\gamma_{4}|\downarrow\downarrow\rangle.\eqno(3)
$$
The binomial states have the properties
$$
|p,M\rangle=
\begin{cases}
\mid M\rangle &|p|\rightarrow 1\cr
\mid 0\rangle &|p|\rightarrow 0\cr
\mid\alpha\rangle &|p|\rightarrow 0,~~~M\rightarrow \infty,~~~M|p|=\alpha^{2}.\cr
\end{cases}\eqno(4)
$$
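For integer $M$, the coefficients of Eq. (2) are easy to tabulate numerically; the sketch below (names ours) checks the normalization and the mean photon number $M|p|$, and the coherent-state limit of Eq. (4) can be probed by increasing $M$ at fixed $M|p|$:

```python
import numpy as np
from math import comb

def binomial_coeffs(p, M):
    """Coefficients b_n of |p, M> for integer M, Eq. (2), with real 0 < p < 1."""
    return np.array([np.sqrt(comb(M, n) * p**n * (1 - p)**(M - n))
                     for n in range(M + 1)])

p, M = 0.3, 20
b = binomial_coeffs(p, M)
n = np.arange(M + 1)
print(np.sum(b**2))      # normalization -> 1
print(np.sum(n * b**2))  # mean photon number -> M*p = 6.0
```

The photon statistics are exactly binomial, so the normalization and the mean follow from the binomial distribution itself.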
For this system, the atoms-field interaction is governed by the $N=2$ Tavis-Cummings model (TCM)~\cite{TCM68}, which describes the simplest fundamental interaction between a single mode of the quantized electromagnetic field and a collection of $N$ atoms under the usual two-level and rotating-wave approximations, when all of the atoms couple {\it identically} to the field. The two-atom ($N=2$) TCM is governed by the Hamiltonian
$$
\widehat{H}=\omega
\widehat{n}+\frac{1}{2}\omega_{0}(\widehat{\sigma}_{z}^{(1)}+\widehat{\sigma}_{z}^{(2)})+f(\chi,
\widehat{n})
+\lambda\sum_{j=1}^{2}\biggl(\widehat{\sigma}_{-}^{(j)}~\widehat{a}^{\dag}+\widehat{\sigma}_{+}^{(j)}~a\biggr)
,\eqno(5)
$$
where $f(\chi, \widehat{n})=\chi \widehat{n} (\widehat{n}-1)+2\sqrt{\chi}~
\widehat{n}$ represents the nonlinear term with,
$$f(\chi, \widehat{n})\mid n\rangle=\bigl(\chi
n(n-1)+2\sqrt{\chi} n\bigr)\mid n\rangle=f(\chi, n)\mid n\rangle.\eqno(6)
$$
We denote by $\chi$ the dispersive part of the third order susceptibility of the Kerr-like medium~\cite{MSAT07,MSAT109,MSAT209,MSAT309}. The parameter $\lambda$ represents the atoms-field coupling constant. The operators $\widehat{\sigma}_{\pm}^{(i)}$ and $\widehat{\sigma}_{z}^{(i)}$, ($i\in\{1,2\}$)
display a local $SU(2)$ algebra for the $i$-th atom in the 2D subspace spanned by the ground (excited) state $|\downarrow\rangle$ ($|\uparrow\rangle$) and obey the commutation
relations $[\widehat{\sigma}_{+}^{(i)},\widehat{\sigma}_{-}^{(i)}]=\widehat{\sigma}_{z}^{(i)}$,
($i\in\{1,2\}$), and $\widehat{a}$($\widehat{a}^{\dag}$) is bosonic annihilation (creation) operator
for the single-mode field of frequency $\omega$. To make the calculation clearer and simpler, we put
$$
\kappa\sum_{j=1}^{2}~\widehat{\sigma}_{s}^{(j)}=\widehat{J}_{s};~~~~~ s=z, -, +,\eqno(7)
$$
where the parameter $\kappa=\frac{1}{2}, 1, 1$ for $s=z, -, +$, respectively.\\
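The collective operators of Eq. (7) can be checked to close the expected $SU(2)$ algebra; a small numerical verification for $N=2$ atoms (the matrix representation and names are ours):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)   # sigma_z
sp = np.array([[0, 1], [0, 0]], dtype=complex)    # sigma_+
sm = sp.conj().T                                  # sigma_-

# Eq. (7): kappa = 1/2 for s = z and kappa = 1 for s = -, +
Jz = 0.5 * (np.kron(sz, I2) + np.kron(I2, sz))
Jp = np.kron(sp, I2) + np.kron(I2, sp)
Jm = np.kron(sm, I2) + np.kron(I2, sm)

print(np.allclose(Jp @ Jm - Jm @ Jp, 2 * Jz))  # [J_+, J_-] = 2 J_z -> True
```

With the single-atom convention $[\widehat{\sigma}_{+}^{(i)},\widehat{\sigma}_{-}^{(i)}]=\widehat{\sigma}_{z}^{(i)}$ used in the text, the collective operators indeed satisfy $[\widehat{J}_{+},\widehat{J}_{-}]=2\widehat{J}_{z}$.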
The Hamiltonian (5), with the detuning parameter $\Delta=\omega_{0}-\omega$, takes the form
$$
\widehat{H}=\widehat{H}_{0}+\widehat{H}_{INT}=\omega\bigl(\widehat{n}+\widehat{J}_{z}\bigr)+\Delta~\widehat{J}_{z}+f(\chi,
\widehat{n})
+\lambda~\bigl(\widehat{J}_{-}~\widehat{a}^{\dag}+\widehat{J}_{+}~\widehat{a}\bigr),\eqno(8)
$$
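Since $f(\chi,\widehat{n})$ depends on the number operator alone, it is diagonal in the Fock basis, cf. Eq. (6); a brief numerical illustration (the truncation and names are ours):

```python
import numpy as np

def kerr_term(chi, n_max):
    """f(chi, n) = chi*n*(n-1) + 2*sqrt(chi)*n as a diagonal Fock-basis matrix."""
    n = np.arange(n_max + 1)
    return np.diag(chi * n * (n - 1) + 2 * np.sqrt(chi) * n)

chi = 0.04
f_mat = kerr_term(chi, 5)
print(np.diag(f_mat))  # eigenvalues f(chi, n) on the number states |n>
```

Being diagonal, the Kerr contribution only shifts the phases of the Fock components during the evolution, which is what allows the closed-form amplitudes below.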
Consider that, at $t=0$, the two atoms are in the Bell state of Eq. (3). The initial state of the system is then a decoupled pure state, and the state vector can be written as
$$
|\psi_{AF}(0)\rangle=\sum_{n=0}^{\infty}b_{n}\bigl[\gamma_{1}|\uparrow\uparrow,
n\rangle+\gamma_{4}|\downarrow\downarrow,n\rangle\bigr],\eqno(9)
$$
As time evolves, the state of the system in the interaction picture can be obtained by solving the Schr\"{o}dinger equation
$$
i\frac{d}{dt}|\psi_{AF}(t)\rangle=\widehat{H}|\psi_{AF}(t)\rangle,\eqno(10)
$$
and using the initial condition, Eq. (9), to obtain the time-dependent wave function in the
form
$$
|\psi_{AF}
(t)\rangle=|U\rangle~|1\rangle+|R\rangle~|2\rangle+|S\rangle~|3\rangle+|T\rangle
~|4\rangle,\eqno(11)
$$
where
$$
|U\rangle=\sum_{n=0}^{\infty}~b_{n}\biggl[A_{n}(t)|n\rangle+H_{n-2}
(t)|n-2\rangle\biggr],\eqno(12)
$$
$$
|R\rangle=\sum_{n=0}^{\infty}~b_{n}\biggl[B_{n+1}(t)|n+1\rangle+G_{n-1}
(t)|n-1\rangle\biggr]=|S\rangle,\eqno(13)
$$
and
$$
|T\rangle=\sum_{n=0}^{\infty}~b_{n}\biggl[D_{n+2}(t)|n+2\rangle+E_{n}
(t)|n\rangle\biggr],\eqno(14)
$$
where we used the notations
$$
|1\rangle=|\uparrow\uparrow\rangle,~~~~|2\rangle=|\uparrow\downarrow\rangle,~~~~|3\rangle=|\downarrow\uparrow\rangle,~~~~|4\rangle=|\downarrow\downarrow\rangle.\eqno(15)
$$
The complex amplitudes $A_{n}(t)$, $B_{n+1}(t)=C_{n+1}(t)$ and $D_{n+2}(t)$ are given, respectively, by
$$
A_{n}(t)=\frac{1}{2\Gamma_{1}\Gamma_{2}}\sum_{k=0}^{2}
D^{k}_{n+2}(0)[\eta^{2}_{k}-2\Gamma_{2}^{2}+(\alpha_{2}+\alpha_{3})\eta_{k}
+\alpha_{2}\alpha_{3}]e^{i\eta_{k} t},\eqno(16)
$$
$$
B_{n+1}(t)=-\frac{1}{2\Gamma_{2}}\sum_{k=0}^{2}D^{k}_{n+2}(0)(\eta_{k}+\alpha_{3
})e^{i\eta_{k} t}=C_{n+1}(t),\eqno(17)
$$
$$
D_{n+2}(t)=\sum_{k=0}^{2} D^{k}_{n+2}(0)e^{i\eta_{k} t},\eqno(18)
$$
with
$$
\alpha_{1}=\Delta+f(\chi, n),~~~ \alpha_{2}=f(\chi, n+1),~~~ \alpha_{3}=-\Delta+f(\chi, n+2)\eqno(19)
$$
$$
\Gamma_{1}=\lambda \sqrt{n+1},~~~\Gamma_{2}=\lambda \sqrt{n+2},\eqno(20)
$$
where
$$
\eta_{k}=-\frac{X_{1}}{3}+\frac{2}{3}\biggl(\sqrt{X_{1}^{2}-3X_{2}}\biggr)
\cos
(\theta^{k}),\eqno(21)
$$
with
$$
\theta^{k}=\biggl(
\frac{1}{3}\cos^{-1}
\biggl[
\frac{9X_{1}X_{2}-2X_{1}^{3}-27X_{3}}{2(X_{1}^{2}-3X_{2})^{\frac{3}{2}}}
\biggr]+\frac{2k\pi}{3}\biggr),
k=0,1,2,\eqno(22)
$$
and
$$
X_{1}=\alpha_{1}+\alpha_{2}+\alpha_{3},~~~X_{2}=\alpha_{2}\alpha_{3}+\alpha_{1}(\alpha_{2}+\alpha_{3})-2(\Gamma_{1}^{2}
+\Gamma_{2}^{2}),~~~ X_{3}=\alpha_{1}\alpha_{2}\alpha_{3}-2(\alpha_{1}\Gamma_{2}^{2}+\alpha_{3}
\Gamma_{1}^{2}), \eqno(23)
$$
where the complex coefficients $D^{k}_{n+2}(0)$, $k=0,1,2$ are given by
$$
D^{k}_{n+2}(0)=\frac{2\gamma_{1}\Gamma_{1}\Gamma_{2}}{\eta_{kr}\eta_{ks}}~~~~,k,
r,s=0,1,2;~~~~ k\neq r\neq s \eqno(24)
$$
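The trigonometric expressions of Eqs. (21)-(22) are the standard closed form for the three real roots of a cubic; with the sign conventions of Eq. (23) they appear to solve $\eta^{3}+X_{1}\eta^{2}+X_{2}\eta+X_{3}=0$ (our reading), which can be sanity-checked with arbitrary, non-physical test coefficients:

```python
import numpy as np

def eta_roots(X1, X2, X3):
    """Eqs. (21)-(22): trigonometric roots of eta^3 + X1 eta^2 + X2 eta + X3 = 0."""
    q = np.sqrt(X1**2 - 3 * X2)                      # requires X1^2 > 3 X2
    arg = (9 * X1 * X2 - 2 * X1**3 - 27 * X3) / (2 * q**3)
    theta = np.arccos(arg) / 3 + 2 * np.arange(3) * np.pi / 3
    return -X1 / 3 + (2 / 3) * q * np.cos(theta)

# arbitrary test coefficients whose exact roots are {1, 2, 4}:
print(np.sort(eta_roots(-7.0, 14.0, -8.0)))  # ~ [1. 2. 4.]
```

The formula is valid in the three-real-root regime, i.e., when the argument of the arccosine lies in $[-1,1]$.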
If in Eqs. (16-24), we let $\gamma_{1}\rightarrow \gamma_{4}$ and
$$
\alpha_{1}=-\Delta+f(\chi, n),~~~\alpha_{2}=f(\chi, n-1),~~~ \alpha_{3}=\Delta+f(\chi, n-2),\eqno(25)
$$
$$
\Gamma_{1}=\lambda \sqrt{n},~~~\Gamma_{2}=\lambda \sqrt{n-1},\eqno(26)
$$
and replace the complex amplitudes $A_{n}(t)$, $D_{n+2}(t)$, $B_{n+1}(t)$ and $C_{n+1}(t)$ by $E_{n}(t)$, $F_{n-1}(t)$, $G_{n-1}(t)$ and $H_{n-2}(t)$, respectively, we easily obtain the latter group.\\
The reduced density operator of the subsystem is given by
$$
\rho(t)_{A(F)}=\mathbf{Tr}_{F(A)}\rho(t)_{AF}=\mathbf{Tr}_{F(A)}|\psi_{AF}
(t)\rangle\langle\psi_{AF}(t)|.\eqno(27)
$$
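Numerically, the partial trace of Eq. (27) amounts to reshaping the pure-state amplitudes into an atoms-by-field matrix; a generic sketch with toy dimensions (the names are ours):

```python
import numpy as np

def reduced_density(psi, dim_a, dim_f, keep="A"):
    """Partial trace of a pure state over the field (keep='A') or the atoms."""
    c = np.asarray(psi).reshape(dim_a, dim_f)   # c[s, n] = amplitude of |s>|n>
    return c @ c.conj().T if keep == "A" else c.T @ c.conj()

# toy example: 4 atomic basis states |1>..|4>, field truncated at 5 photons
rng = np.random.default_rng(0)
psi = rng.normal(size=24) + 1j * rng.normal(size=24)
psi /= np.linalg.norm(psi)

rho_a = reduced_density(psi, 4, 6)
print(np.trace(rho_a).real)  # a proper density matrix: unit trace, Hermitian
```

For a normalized pure state both reduced operators have unit trace, and for a bipartite pure state they share the same non-zero eigenvalues, which is what makes the reduced entropy a good entanglement measure here.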
Then the reduced atomic density operator in matrix form is given by
\begin{displaymath}
\mathbf{\rho_{A}} = \left( \begin{array}{cccc}
\rho_{A}^{11}~&~\rho_{A}^{12}~&~\rho_{A}^{13}~&~\rho_{A}^{14}\\
\\
\rho_{A}^{21}~&~\rho_{A}^{22}~&~\rho_{A}^{23}~&~\rho_{A}^{24}\\
\\
\rho_{A}^{31}~&~\rho_{A}^{32}~&~\rho_{A}^{33}~&~\rho_{A}^{34}\\
\\
\rho_{A}^{41}~&~\rho_{A}^{42}~&~\rho_{A}^{43}~&~\rho_{A}^{44}\\
\end{array} \right),\eqno(28)
\end{displaymath}
where the elements $\rho_{A}^{sr}$, $s,r\in\{1,2,3,4\}$ are given by
$$
\rho_{A}^{11}=\langle
U|U\rangle=\sum_{n=0}^{\infty}\biggl\{|b_{n}|^{2}\biggl(|A_{n}|^{2}+|H_{n-2}
|^{2}\biggr)
+b_{n}b_{n+2}^{\ast}A_{n}H_{n}^{\ast}+b_{n}b_{n-2}^{\ast}A_{n-2}^{\ast}H_{n-2}
\biggr\},\eqno(29)
$$
$$
\rho_{A}^{22}=\langle
R|R\rangle=\sum_{n=0}^{\infty}\biggl\{|b_{n}|^{2}\biggl(|B_{n+1}|^{2}+|G_{
n-1}|^{2}\biggr)
+b_{n}b_{n+2}^{\ast}B_{n+1}G_{n+1}^{\ast}+b_{n}b_{n-2}^{\ast}B_{n-1}^{\ast}G_{n-1}
\biggr\}=\langle
S|S\rangle=\rho_{A}^{33},\eqno(30)
$$
$$
\rho_{A}^{44}=\langle
T|T\rangle=\sum_{n=0}^{\infty}\biggl\{|b_{n}|^{2}\biggl(|D_{n+2}|^{2}+|E_{n}
|^{2}\biggr)
+b_{n}b_{n+2}^{\ast}D_{n+2}E_{n+2}^{\ast}+b_{n}b_{n-2}^{\ast}D_{n}^{\ast}E_{n}
\biggr\},\eqno(31)
$$
$$
\rho_{A}^{12}=\langle
R|U\rangle=\sum_{n=0}^{\infty}\biggl\{b_{n}b_{n-1}^{\ast}A_{n}B_{n}^{\ast}
+b_{n}b_{n-3}^{\ast}B_{n-2}^{\ast}H_{n-2}
$$
$$
+b_{n}b_{n+1}^{\ast}A_{n}G_{n}^{\ast}
+b_{n}b_{n-1}^{\ast}G_{n-2}^{\ast}H_{n-2}\biggr\}=(\rho_{21})^{\ast}=\langle
S|U\rangle=\rho_{A}^{13},\eqno(32)
$$
$$
\rho_{A}^{14}=\langle
T|U\rangle=\sum_{n=0}^{\infty}\biggl\{b_{n}b_{n-2}^{\ast}A_{n}D_{n}^{\ast}
+b_{n}b_{n-4}^{\ast}D_{n-2}^{\ast}H_{n-2}
$$
$$
+|b_{n}|^{2}A_{n}E_{n}^{\ast}
+b_{n}b_{n-2}^{\ast}E_{n-2}^{\ast}H_{n-2}\biggr\}=(\rho_{41})^{\ast},\eqno(33)
$$
$$
\rho_{A}^{23}=\langle
S|R\rangle=\sum_{n=0}^{\infty}\biggl\{|b_{n}|^{2}|B_{n+1}|^{2}
+b_{n}b_{n-2}^{\ast}C_{n-1}^{\ast}G_{n-1}
$$
$$
+b_{n}b_{n+2}^{\ast}B_{n+1}F_{n+1}^{\ast}
+|b_{n}|^{2}|F_{n-1}|^{2}\biggr\}=\langle
R|S\rangle=\rho_{32},\eqno(34)
$$
$$
\rho_{A}^{24}=\langle
T|R\rangle=\sum_{n=0}^{\infty}\biggl\{b_{n}b_{n-1}^{\ast}B_{n+1}D_{n+1}^{\ast}
+b_{n}b_{n-3}^{\ast}D_{n-1}^{\ast}G_{n-1}
$$
$$
+b_{n}b_{n+1}^{\ast}B_{n+1}E_{n+1}^{\ast}
+b_{n}b_{n-1}^{\ast}E_{n-1}^{\ast}G_{n-1}\biggr\}=(\rho_{42})^{\ast}=\rho_{A}^{34},\eqno(35)
$$
\section{Spin Squeezing}
The spin squeezing phenomenon reflects reduced quantum
fluctuations in one of the field quadratures at the expense of the
corresponding stretched quadrature.
In the literature~\cite{KIUE93,WBI94,PZM02, SDCZ01,ZPP91,WIZO81}, there are
several definitions of spin squeezing, and which one is the best is still an
open issue. Squeezing, or the reduction of quantum fluctuations, is defined
with respect to the uncertainty relation: for arbitrary operators $A$ and $B$
which obey the commutation relation $[A,B]=C$, the product of the uncertainties
in determining their expectation values satisfies~\cite{WIZO81}:
$$
\Delta A \Delta B\geq\frac{1}{2}|\langle C\rangle|,\eqno(36)
$$
where $(\Delta A)^2=\langle A^2\rangle-\langle A\rangle^2$ and $(\Delta
B)^2=\langle B^2\rangle-\langle B\rangle^2$. \\
In this work the spin squeezing parameters are based on the angular momentum
commutation relations. From the commutation relation $[J_{x},J_{y}]=i J_{z}$,
the uncertainty relation between different components of the angular momentum
is given by
$$
\Delta J_{x} \Delta J_{y}\geq\frac{1}{2}|\langle J_{z}\rangle|,\eqno(37)
$$
where
$$
J_{x}=\frac{1}{2}(J_{+}+J_{-}),\eqno(38)
$$
$$
J_{y}=\frac{1}{2i}(J_{+}-J_{-}),\eqno(39)
$$
where the operators $J_{+}$, $J_{-}$ and $J_{z}$ are given by (6).
Without violating Heisenberg's uncertainty relation, it is possible to
redistribute the uncertainty unevenly between $J_{x}$ and $J_{y}$, so that a
measurement of either $J_{x}$ or $J_{y}$ becomes more precise than the standard
quantum limit $\sqrt{|\langle J_{z}\rangle|/2}$. States with this property are
called spin squeezed states in analogy with the squeezed states of a harmonic
oscillator. Consequently, the two squeezing parameters can be written as
$$
F_{1}=(\Delta J_{x})^2-\frac{1}{2}|\langle
J_{z}\rangle|=\frac{1}{2}\biggl(1-\frac{1}{2}\biggl\langle
(J_{+}+J_{-})\biggr\rangle^{2}-|\langle J_{z}\rangle|\biggr),\eqno(40)
$$
$$
F_{2}=(\Delta J_{y})^2-\frac{1}{2}|\langle
J_{z}\rangle|=\frac{1}{2}\biggl(1-\frac{1}{2}\biggl\langle
\frac{1}{i}(J_{+}-J_{-})\biggr\rangle^{2}-|\langle J_{z}\rangle|\biggr).\eqno(41)
$$
If the parameter $F_{1}$ ($F_{2}$) satisfies the condition $F_{1}<0$
($F_{2}<0$), the fluctuation in the component $J_{x}$ ($J_{y}$) is said to be
squeezed.\\
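To make the squeezing condition concrete, the following sketch (an illustrative construction, not taken from the model above) evaluates $F_{1}$ and $F_{2}$ directly from $(\Delta J_{x,y})^{2}-\tfrac{1}{2}|\langle J_{z}\rangle|$ for a two-atom collective spin $j=1$, using a simple superposition of the extremal Dicke states that yields $F_{1}<0$ while the uncertainty bound (37) is respected.

```python
import numpy as np

# Collective spin j = 1 (two two-level atoms), basis {|1,1>, |1,0>, |1,-1>}
Jp = np.sqrt(2.0) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Jm = Jp.conj().T
Jx = 0.5 * (Jp + Jm)
Jy = (Jp - Jm) / (2j)
Jz = np.diag([1.0, 0.0, -1.0]).astype(complex)

def expval(op, psi):
    return np.real_if_close(psi.conj() @ op @ psi)

def F_params(psi):
    """Spin squeezing parameters F1, F2 in the sense of Eqs. (40)-(41)."""
    var_x = expval(Jx @ Jx, psi) - expval(Jx, psi) ** 2
    var_y = expval(Jy @ Jy, psi) - expval(Jy, psi) ** 2
    jz = abs(expval(Jz, psi))
    return var_x - 0.5 * jz, var_y - 0.5 * jz

# Illustrative squeezed state: cos(t)|1,-1> + sin(t)|1,1>, with t = -pi/12
t = -np.pi / 12
psi = np.array([np.sin(t), 0.0, np.cos(t)], dtype=complex)
F1, F2 = F_params(psi)   # F1 < 0: the J_x fluctuation is squeezed
```

For this state $\Delta J_{x}\,\Delta J_{y}=|\langle J_{z}\rangle|/2$ exactly, so it is a minimum-uncertainty squeezed state.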
Using the wave function (11), we can easily compute the following time-dependent
expectation values of the operators $J_{-}$, $J_{+}$ and $J_{z}$, respectively
in the forms
$$
\langle
J_{-}\rangle=2(\rho_{A}^{12}+\rho_{A}^{24})=\langle
J_{+}\rangle^{\ast}
,\eqno(42)
$$
$$
\langle J_{z}\rangle=\rho_{A}^{11}-\rho_{A}^{44}
,\eqno(43)
$$
\section{Quantum Entropy}
Quantum entropy, as a natural generalization of Boltzmann classical entropy, was
proposed by von Neumann~\cite{NEUMANN27}. It has been applied, in particular, as
a measure of quantum entanglement, quantum decoherence, quantum optical
correlations, purity of states, quantum noise or accessible information in
quantum measurement (the capacity of the quantum channel). Entropy is related to the density matrix, which provides a
complete statistical description of the system.
Since we have assumed that the two two-level atoms and the single-mode binomial
field are initially in a disentangled pure state, the total entropy of the
system is zero. In terms of the Araki--Lieb inequality for the
entropy~\cite{ARLE70}
$$
|S_{A}(t)-S_{F}(t)|\leq S_{AF}(t)\leq S_{A}(t)+S_{F}(t),\eqno(44)
$$
$$
we can find that the reduced entropies of the two subsystems are identical,
namely, $S_{A}(t)=S_{F}(t)$. Hence, the field entropy can be obtained by
computing the atomic entropy. The quantum entropy of the subsystem can be defined as follows~\cite{VPRK97}
$$
S(\rho_{A})=-\sum_{s=1}^{4}~\Pi_{A}^{(s)}\ln \Pi_{A}^{(s)},\eqno(45)
$$
where $\Pi_{A}^{(s)}$, ($s=1,2,3,4$) are the eigenvalues of the reduced density
matrix, Eq. (27), and are given by the roots of the fourth-order algebraic equation
$$
c_{0}+c_{1}\Pi_{A}+c_{2}\Pi_{A}^{2}+c_{3}\Pi_{A}^{3}+\Pi_{A}^{4}=0,\eqno(46)
$$
where the coefficients $c_{i}$ are given by
$$
c_{3}=-\rho_{11}-\rho_{22}-\rho_{33}-\rho_{44},\eqno(47)
$$
$$
c_{2}=-|\rho_{41}|^{2}-2|\rho_{42}|^{2}-2|\rho_{12}|^{2}-|\rho_{23}|^{2}
+2\rho_{22}(\rho_{11}+\rho_{44})+\rho_{22}^{2}+\rho_{11}\rho_{44},\eqno(48)
$$
$$
c_{1}=2|\rho_{14}|^{2}\rho_{22}+2|\rho_{24}|^{2}(\rho_{11}+\rho_{22})+2|\rho_{12
}|^{2}(\rho_{22}+\rho_{44})
-\rho_{22}^{2}(\rho_{11}+\rho_{44})
$$
$$
-2\rho_{11}\rho_{22}\rho_{44}-\Re(\rho_{23})(|\rho_{42}|^{2}+|\rho_{12}|^{2})
-2\Re(\rho_{41}\rho_{12}\rho_{24})+|\rho_{32}|^{2}(\rho_{11}+\rho_{44}),
\eqno(49)
$$
$$
c_{0}=|\rho_{41}|^{2}|\rho_{23}|^{2}-\rho_{22}\rho_{33}|\rho_{41}|^{2}-\rho_{11}
\rho_{33}|\rho_{42}|^{2}
-\rho_{44}\rho_{22}|\rho_{12}|^{2}-\rho_{11}\rho_{44}|\rho_{23}|^{2}-\rho_{33}
\rho_{44}|\rho_{12}|^{2}
$$
$$
+\rho_{11}\rho_{22}\rho_{33}\rho_{44}
-\rho_{11}\rho_{22}|\rho_{24}|^{2}
+(\rho_{11}|\rho_{24}|^{2}+\rho_{44}|\rho_{12}|^{2})\Re(\rho_{23})
+(\rho_{22}+\rho_{33}-\Re(\rho_{23}))\Re(\rho_{41}\rho_{12}\rho_{24}),\eqno(50)
$$
where we used the notations
$$
\rho_{22}=\rho_{33},~~~~~\rho_{12}= \rho_{13},~~~~~\rho_{24}=
\rho_{34}.\eqno(51)
$$
The eigenvalues $\Pi_{A}^{(s)}$, ($s=1,2,3,4$) are given, respectively by
$$
\Pi_{A}^{(s)}=\frac{U_{s}+(-1)^{s}V_{s+1}}{2};~~~~ s=1,2\eqno(52)
$$
$$
\Pi_{A}^{(s)}=\frac{U_{s}+(-1)^{s+1}V_{s+2}}{2};~~~~ s=3,4\eqno(53)
$$
where
$$
U_{s}=-\frac{c_{3}}{2}+(-1)^{s}f,~~~ V_{s}=\sqrt{z_{3}+(-1)^{s}z_{4}},\eqno(54)
$$
with
$$
z_{3}=-2~c_{2} + \frac{3~c_{3}^{2}}{4} -f^{2},~~~ z_{4}=\frac{8 c_{1} - 4 c_{2}
c_{3} + c_{3}^{3}}{4~f}.\eqno(55)
$$
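In practice, the entropy (45) is most easily evaluated by diagonalising the $4\times4$ reduced density matrix numerically rather than through the closed-form quartic roots (46)-(55). The sketch below does this for an illustrative (made-up) density matrix obeying the symmetry (51), together with the standard sanity checks $S=0$ for a pure state and $S=\ln 4$ for the maximally mixed state.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S = -sum_s Pi_s ln Pi_s, Eq. (45), from the eigenvalues of rho."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]           # convention: 0 ln 0 -> 0
    return float(-np.sum(evals * np.log(evals)))

# Illustrative Hermitian density matrix with the symmetry of Eq. (51):
# rho22 = rho33, rho12 = rho13, rho24 = rho34; trace = 1
rho = np.array([[0.40, 0.05, 0.05, 0.02],
                [0.05, 0.20, 0.04, 0.03],
                [0.05, 0.04, 0.20, 0.03],
                [0.02, 0.03, 0.03, 0.20]], dtype=complex)
S = von_neumann_entropy(rho)

# Sanity checks: a pure state has S = 0, the maximally mixed state S = ln 4
S_pure = von_neumann_entropy(np.diag([1.0, 0.0, 0.0, 0.0]).astype(complex))
S_mixed = von_neumann_entropy(np.eye(4, dtype=complex) / 4.0)
```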
\section{Husimi $Q$-function}
For measuring the quantum state of the radiation field, balanced homodyning has become a well-established method: it directly measures phase-sensitive quadrature distributions. The homodyne measurement of an electromagnetic field gives all possible linear combinations of the field quadratures. The average of the random outcomes of the measurement is connected with the marginal distribution of any quasi-probability used in quantum optics. It has been shown in earlier studies~\cite{EWIGNER32, ZWIGNer32, KAGL69,
HICOSCWG84} that the quasi-probability
functions, namely the (Wigner) $W$-, (Husimi) $Q$-, and (Glauber--Sudarshan) $P$-functions, are important for the statistical description of a microscopic system
and provide insight into the nonclassical features of radiation fields.
Therefore, we devote the present section to one of these functions, the Husimi $Q$-function, which has the nice property of being always positive and the further advantage of being readily measurable by quantum tomographic techniques~\cite{BOTAWA98, MANTOM97}. In fact, the Husimi $Q$-function is not only a convenient tool to calculate the expectation values of anti-normally ordered products of operators, but also gives some insight into the mechanism of interaction for the model under consideration. The relation between the phase-space measurement (the Husimi $Q$-function) and the classical information-theoretic entropy associated with quantum fields was introduced by Wehrl~\cite{WEHRL79}. Moreover, on expanding the von Neumann quantum entropy in a power series of classical entropies, it was shown explicitly~\cite{PEKRPELUSZ86} that the first term of this expansion is the Wehrl entropy. Thus, the Husimi $Q$-function can be related to the von Neumann quantum entropy in different approaches~\cite{WEHRL79, PEKRPELUSZ86, FAGU06, CAALCARA09, HUFAN09, MIMAWA00, MIWAIM01, BERETA84}.\\
The Husimi $Q$-function can be written in the
form~\cite{HICOSCWG84, HUSIMI40, FuSOLO001}
$$
Q(\alpha)=\frac{\langle \alpha\mid\rho_{F}\mid\alpha\rangle}{\pi},\eqno(56)
$$
where $\rho_{F}$ is the reduced density operator of the cavity field given by (27). The state $\mid\alpha\rangle$ represents the well-known coherent state with amplitude $\alpha=X+i Y$. Inserting $\rho_{F}$ into Eq. (56), we
can easily obtain the Husimi $Q$-function of the cavity field
$$
Q(\alpha)=\frac{1}{\pi}(\mid\langle\alpha\mid
U\rangle\mid^{2}+\mid\langle\alpha\mid T\rangle\mid^{2})\eqno(57)
$$
where
$$
\langle\alpha\mid
U\rangle=e^{-\mid\alpha\mid^{2}/2}\sum_{n=0}^{\infty}\biggl[b_{n}\frac{\alpha^{
\ast n}}{\sqrt{n!}}A_{n}(t)+b_{n+2}\frac{\alpha^{\ast
n}}{\sqrt{n!}}H_{n}(t)\biggr]\eqno(58)
$$
and
$$
\langle\alpha\mid
T\rangle=e^{-\mid\alpha\mid^{2}/2}\sum_{n=0}^{\infty}b_{n}\biggl[\frac{\alpha^{
\ast n}}{\sqrt{n!}}E_{n}(t)+\frac{\alpha^{\ast n+2}}{\sqrt{(n+2)!}}D_{n+2}(t)\biggr]\eqno(59)
$$
\section{Discussion of the numerical results}
Using different sets of parameters for the initial state of the system we
calculate numerically the quantum entropy $S_{A}$, spin squeezing parameters
$F_{1}$ and $F_{2}$ and atomic population $\langle \sigma_{z}\rangle$ as a
reference function. All results are plotted as functions of the Rabi angle
$\lambda t$. For each set of parameters four pictures are displayed. The pictures (a and b) show, respectively, squeezing parameters $F_{1}$ and
$F_{2}$, while the pictures (c and d) show the quantum entropy
$S_{A}$ and the atomic population $\langle \sigma_{z}\rangle$. For
all our plots we set the Bell-state parameters $\gamma_{1}=\frac{1}{\sqrt{2}}$
and $\gamma_{4}=i\gamma_{1}$. In figures 1 and 2 we have plotted the above
mentioned functions with $|p|=0.9$, $\chi/\lambda=0$ and various values of the
detuning parameter $\Delta/\lambda$. From these figures we can easily notice
that, just after the onset of the interaction, these functions fluctuate for a
short period of time. This short period of revival is followed by a long period
of collapse. The revival periods start with high amplitude and become wider
with smaller amplitude as time goes on.
This is because
the widths and heights of the revivals belonging to the different series of
eigenvalues are different.
Furthermore, as we increase the detuning parameter $\Delta/\lambda$ from its
resonance value $\Delta/\lambda=0$, the overlap of the revivals noticeably
decreases. Also, the periods of oscillations within the revival decrease with
increasing detuning parameter $\Delta/\lambda$.
For the population, $\langle \sigma_{z}\rangle$, the
period of revival depends on the average number $M|p|$ of photons whereas the
time of collapse depends on the dispersion, $M|p|(1-|p|)$, in the photon number
distribution~\cite{JoshPur87}.
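For the values used in Fig. 1, these moments are easy to verify numerically. The sketch below (assuming the standard binomial photon-number distribution $|b_{n}|^{2}=\binom{M}{n}|p|^{n}(1-|p|)^{M-n}$, consistent with the quoted mean $M|p|$) confirms $\langle n\rangle=M|p|=45$ and dispersion $M|p|(1-|p|)=4.5$ for $M=50$, $|p|=0.9$.

```python
import numpy as np
from math import comb

M, p = 50, 0.9
n = np.arange(M + 1)
# Binomial photon-number distribution |b_n|^2 with mean M|p|
prob = np.array([comb(M, k) * p**k * (1 - p)**(M - k) for k in n])

mean = np.sum(n * prob)               # = M p = 45
var = np.sum((n - mean) ** 2 * prob)  # = M p (1 - p) = 4.5
```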
\begin{figure}[h]
\begin{center}
\includegraphics[width=.9\linewidth]
{ffep1.eps}
\vspace{-.7cm}
\caption{Spin squeezing parameters $F_{1}$ (a), $F_{2}$ (b), atomic
entropy $S_{A}$ (c), and atomic inversion (d) with $|p|=0.9$,
$M=50$, $\chi/\lambda=0$ and $\Delta/\lambda=0$}
\end{center}
\end{figure}
Moreover, from these figures we can see that the spin
squeezing parameters $F_{1}$ and $F_{2}$ experience collapse and revival
wherever the atomic population exhibits collapse and revival as time goes on.
When we turn our attention to the role that the spin squeezing parameters play
in uncovering entanglement properties, we realize that the behaviors of the
squeezing parameter $F_{1}$ and the atomic entropy $S_{A}$ are equivalent, i.e.,
entanglement implies spin squeezing and vice versa.
This can be understood as follows: the quantum entropy $S_{A}$ oscillates when
$F_{1}$ exhibits oscillations with the same periods of time.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]
{ffep2.eps}
\vspace{-0.5cm}
\caption{ The same as Fig. (1) but for $\Delta/\lambda=10$}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]
{ffep3.eps}
\vspace{-0.5cm}
\caption{The same as Fig. (1) but for $\chi/\lambda=0.5$ }
\end{center}
\end{figure}
Furthermore, the
function $S_{A}$ reaches its maximum when $F_{1}$ oscillates around a value
(between 0.45 and 0.5) close to its maximum and when $F_{2}$ shows a collapse
equal to its maximum, while $S_{A}$ reaches its minimum when squeezing
occurs.
This behavior occurs periodically for both $S_{A}$ and $F_{1}$. This means that,
for on-resonance atomic-system--field interaction, we can, with full success,
understand the entanglement dynamics from the dynamics of the spin squeezing
parameters $F_{1}$ and $F_{2}$ and vice versa.\\
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]
{ffep4.eps}
\vspace{-0.5cm}
\caption{ The same as Fig. (1) but for $\chi/\lambda=5.0$ }
\end{center}
\end{figure}
Let us now come to the case of off-resonance interaction between the atomic
system and the cavity field. In this case the same general behavior is noticed
(with the periods shifted to the right when $\Delta/\lambda=10$). Additionally,
the oscillations become denser with a reduced maximum of $S_{A}$, corresponding
to the widening of the oscillation interval of $F_{1}$ (between 0.4 and 0.5,
becoming longer as $\Delta/\lambda$ increases), and some intervals of collapse
begin to appear.\\
Surprising and very interesting is the effect of the nonlinear medium,
both on its own and in the presence of the detuning parameter $\Delta/\lambda$.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]
{ffep5.eps}
\vspace{-0.5cm}
\caption{The same as Fig. (3) but for $\Delta/\lambda=5.0$ }
\end{center}
\end{figure}
To
examine the effect of these parameters, we recall figures (3-5). These
figures have been produced by setting different values of the parameter
$\chi/\lambda$, both on its own and in the presence of the detuning parameter
$\Delta/\lambda$. It is easy to see the change in the shape of the figures,
where the standard behavior is changed completely. For a weak Kerr medium such
that $\chi/\lambda=0.5$, our reference function, $\langle\sigma_{z}\rangle$,
shows behavior similar to the modified Jaynes-Cummings model with a Kerr
medium~\cite{ETCU63,JOSHPUR92}, accompanied by a reduced amplitude of
oscillations.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]
{ffep6.eps}
\vspace{-0.5cm}
\caption{The same as Fig. (1) but for $|p|=0.98$, $M=100$}
\end{center}
\end{figure}
Furthermore, the population $\langle\sigma_{z}\rangle$ and the spin squeezing parameter $F_{2}$ oscillate periodically with fixed, equivalent periods, but $F_{1}$ does not.
A quick look at the squeezing parameter $F_{1}$ shows that it oscillates rapidly. The oscillations overlap for a period of time (except for some instances) and become dense, periodically forming wave packets with a Gaussian envelope whose amplitude decreases as time develops. The more the amplitude of the Gaussian-packet envelope decreases, the more the entropy increases, i.e., the stronger the entanglement that can be shown, see figure 3. This behavior becomes clearer when we include the detuning parameter in our numerical computations, see figure 5.
\vspace{-0.5cm}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.9\linewidth]
{ffep7.eps}
\vspace{-0.5cm}
\caption{The same as Fig. (6) but for $\chi/\lambda=0.5$ }
\end{center}
\end{figure}
Furthermore, when the nonlinear medium becomes stronger, $\langle \sigma_{z}\rangle$, $F_{1}$ and $F_{2}$ show chaotic behavior with no indications of revivals or any other regular structure. This is
accompanied by a change in the entropy maximum from slow to rapid increase as
$\chi/\lambda$ increases and time develops (see Fig. 4). This behavior is
dominant with or without high values of $\Delta/\lambda$.
\\
With the increase of $|p|$ and $M$, such that $|p|=0.98$ and $M=100$, i.e., on
increasing the average photon number $M|p|$, the oscillations of the squeezing
parameters $F_{1}$, $F_{2}$, $\langle \sigma_{z}\rangle$ and the quantum entropy
$S_{A}$ become denser. This means that every two neighboring overlapping
revivals start to overlap again. This is because with a larger mean photon
number $M|p|$, these functions take bigger values during the time evolution,
which causes dense oscillations of the cavity field parameters. In other words,
there are more revival series since more eigenvalues can be found in this
model. However, the same behavior we saw before for $|p|=0.9$ is seen again,
except that the envelope width becomes wider, resulting in fewer packets of
oscillations appearing in the same period of time, see figures 6 and 7. It is
worth noting that each revival series of oscillations corresponds to a beat
frequency. Moreover, for a weak Kerr medium the relation between the atomic
entropy and the spin squeezing parameters seems more complicated. In this case
all of them oscillate with no indications of revivals or any other regular
structure, and the Gaussian envelope completely disappears. The effect of a
strong Kerr medium, on its own and in coexistence with a small detuning,
exhibits behavior similar to that when $|p|=0.9$.
\par At the end we focus our attention on the representation of the field in phase space, which provides some aspects of the field dynamics. Figure 8 shows mesh plots (left) and the corresponding contour plots (right) of the Husimi $Q$-function in the complex $\alpha$-plane, $X=\mathrm{Re}(\alpha)$, $Y=\mathrm{Im}(\alpha)$, for the Rabi angle $\lambda t=\pi/4$ and different values of the Kerr parameter $\chi/\lambda$, while all other parameters are kept unchanged as in Fig. 1. From figure 8a, it is clear that the state of the field is a squeezed state, since the Husimi $Q$-function has different widths in the $X$ and $Y$ directions. On the other hand, the squeezing is generated by the nonlinearity inherent to the system through the binomial state. This can be explained in terms of a superposition of different number states of different phases, creating a deviation from the classical phase.
It is well known that the squeezed states are a general class of the minimum-uncertainty states~\cite{SACHNEBU03}. Bearing in mind that the nonlinearity of the binomial state yields squeezing of the quantum field, entanglement between the two subsystems is, as a result, also produced~\cite{BUCHDADUMORU01}.
It is of particular interest to see how the Husimi $Q$-function behaves once the Kerr medium is added. When $\chi/\lambda$ increases by a fractional value, we notice clearly that the single blob becomes almost perfectly circular with radius $|\alpha|\approx 7.5$ and rotates in the counterclockwise direction, see Figs. 8a, b, d and 8f. This behavior of the quantum field distribution is similar to that of the thermal state~\cite{SACHNEBU03}, which makes it clear that, once the Kerr medium is added, rethermalization of the binomial field is indeed taking place. Now, it is perfectly sensible to ask what the situation is if the Kerr medium parameter is increased by an integer value. In this case, when the state evolves further, a multi-component structure develops, as shown in Figs. 8c, e and 8g, respectively. In these figures, the Husimi $Q$-function demonstrates that the quantum states obtained correspond to Schr\"{o}dinger cat states. Moreover, we see that the cat states have different numbers of components at different values of $\chi/\lambda$. Different mechanisms have demonstrated that, for the case of a radiation field propagating in a nonlinear medium, Schr\"{o}dinger cat states are generated~\cite{FuSOLO001, MIMAWA00} with different numbers of components at different times in the evolution~\cite{DYAO97}. It was shown that the splitting of the Husimi function, which is the signature of the formation of Schr\"{o}dinger cat states, is strongly related to quantum entanglement~\cite{VAOR95, ORPAKA95, JEOR94, MIMAWA00, MIWAIM01}.\\
To discuss the evolution of the Husimi $Q$-function in the case of resonance and a fixed value of the Kerr parameter, i.e., $\chi/\lambda=5.0$ (strong Kerr medium), we have set various values of the Rabi angle $\lambda t$ in our computations, i.e., $\lambda t=0.0, \pi/6, \pi/4, \pi/3, \pi/2$ and $\pi$. The results are displayed in figure 9. A collision of six blobs occurs gradually when $\lambda t=\pi/6$, which implies the rethermalization of the quantum field. At the evolution times $\lambda t=\pi, \pi/2, \pi/3$ and $\pi/4$ the distribution of the Husimi $Q$-function splits into two, three and four blobs, corresponding to Schr\"{o}dinger cat states with different numbers of components at different times in the evolution. The centers of the blobs lie on a circle of radius $|\alpha|$ centered at $X=Y=0.0$.\\
\newpage
\vspace{-.2cm}
\begin{figure}[tpbh]
\begin{center}
\includegraphics[width=0.3\linewidth]
{1.eps}
\includegraphics[width=0.22\linewidth]
{q1.eps}
\\
\includegraphics[width=0.3\linewidth]
{2.eps}
\includegraphics[width=0.22\linewidth]
{q2.eps}
\\
\includegraphics[width=0.3\linewidth]
{3.eps}
\includegraphics[width=0.22\linewidth]
{q3.eps}
\\
\includegraphics[width=0.3\linewidth]
{4.eps}
\includegraphics[width=0.22\linewidth]
{q4.eps}
\caption {Husimi Q-function with $|p|=0.9$, $M=50$, $\lambda t=\pi/4$,
$\Delta/\lambda=0$ and (a) $\chi/\lambda=0.0$, (b) $\chi/\lambda=0.5$, (c) $\chi/\lambda=1.0$, (d) $\chi/\lambda=1.5$, (e) $\chi/\lambda=2.0$, (f) $\chi/\lambda=2.5$ and (g) $\chi/\lambda=5.0$ }
\end{center}
\end{figure}
\begin{figure}[tpbh]
\begin{center}
\includegraphics[width=0.3\linewidth]
{5.eps}
\includegraphics[width=0.22\linewidth]
{q5.eps}
\\
\includegraphics[width=0.3\linewidth]
{6.eps}
\includegraphics[width=0.22\linewidth]
{q6.eps}
\\
\includegraphics[width=0.3\linewidth]
{7.eps}
\includegraphics[width=0.22\linewidth]
{q7.eps}
\begin{center}
FIG. 8: continued
\end{center}
\end{center}
\end{figure}
\newpage
\begin{figure}[tpbh]
\begin{center}
\includegraphics[width=0.3\linewidth]
{8.eps}
\includegraphics[width=0.22\linewidth]
{q8.eps}
\\
\includegraphics[width=0.3\linewidth]
{9.eps}
\includegraphics[width=0.22\linewidth]
{q9.eps}
\\
\includegraphics[width=0.3\linewidth]
{10.eps}
\includegraphics[width=0.22\linewidth]
{q10.eps}
\\
\includegraphics[width=0.3\linewidth]
{11.eps}
\includegraphics[width=0.22\linewidth]
{q11.eps}
\caption{Husimi Q-function with $|p|=0.9$, $M=50$, $\chi/\lambda=5.0$,
$\Delta/\lambda=0$ and (a) $\lambda t=0.0$, (b) $\lambda t=\pi/6$, (c) $\lambda t=\pi/4$, (d) $\lambda t=\pi/3$, (e) $\lambda t=\pi/2$ and (f) $\lambda t=\pi$ }
\end{center}
\end{figure}
\newpage
\begin{figure}[tpbh]
\begin{center}
\includegraphics[width=0.3\linewidth]
{12.eps}
\includegraphics[width=0.22\linewidth]
{q12.eps}
\\
\includegraphics[width=0.3\linewidth]
{13.eps}
\includegraphics[width=0.22\linewidth]
{q13.eps}
\begin{center}
FIG. 9: continued
\end{center}
\end{center}
\end{figure}
\newpage
\section{Conclusion}
In conclusion, we have shown that spin squeezing implies entanglement for a quantum tripartite state whose bipartite atomic subsystem consists of identical atoms. We have proved that the spin squeezing parameters can be a convenient tool to give some insight into the mechanism of entanglement for the model under consideration. Moreover, a cavity-field subsystem containing a nonlinear medium noticeably enhances the dynamics of entanglement, especially when it interacts with the atomic subsystem off-resonantly. Clearer insight into the relation between entanglement and the phase-space distribution, i.e., the Husimi $Q$-function, is also illustrated. In this situation, the strong Kerr medium stimulates the creation of Schr\"{o}dinger cat states, which is necessary for the generation of entanglement.
\newpage
\label{sec:introduction}
Dark matter (DM) has been a standing problem in modern physics for more than eighty years. Although it is not yet clear if DM is in the form of new weakly interacting particles or more compact objects like primordial black holes, there is little doubt about its existence. The particle scenario for DM is also well motivated theoretically since almost all theories Beyond the Standard Model have particles that can play the role of DM. For this reason, a great deal of experimental effort has been dedicated to the discovery of DM through the production of these particles in colliders \cite{Mitsou:2013rwa}; the detection of the products of DM decay or annihilation \cite{Gaskins:2016cha}; and the direct detection of recoil events due to DM scattering off Standard Model particles in underground detectors \cite{Goodman:1984dc, Drukier:1986tm}.
Direct detection constraints are usually derived based on the potential observation of nuclear recoils when DM particles scatter inside an underground detector. Such experiments give stringent constraints on DM-nucleon cross sections over a wide range of DM masses (see e.g.~Refs.~\cite{Agnese:2015nto,Angloher:2015ewa,Akerib:2016vxi, Tan:2016zwf}) and constraints can also be derived from searches for DM-electron interactions~\cite{Essig:2011nj}. However, there is a limit to the mass of DM particles which can be probed by underground detectors. Given that the speed of halo DM particles has an upper bound (set by the escape speed of the Galaxy), sufficiently light DM particles will not have enough energy to create a nuclear recoil above the energy threshold of the experiment, making the scattering event undetectable. With typical energy thresholds of $\mathcal{O}(1 \,\,\mathrm{keV})$, direct detection constraints are substantially weakened for DM masses below a few GeV.
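The threshold argument can be made quantitative: for elastic scattering, the maximum recoil energy is $E_{R}^{\mathrm{max}}=2\mu_{\chi N}^{2}v^{2}/m_{N}$, with $\mu_{\chi N}$ the DM-nucleus reduced mass. The sketch below uses illustrative numbers (a xenon target with $m_{N}\approx122\,\mathrm{GeV}$ and $v_{\mathrm{max}}\approx774\,\mathrm{km\,s^{-1}}$ from the Galactic escape speed plus the Earth's speed) rather than values from any particular experiment.

```python
# Maximum nuclear recoil energy E_R^max = 2 mu^2 v^2 / m_N (elastic scattering)
# Illustrative numbers: xenon target, v_max ~ v_esc + v_Earth
KEV_PER_GEV = 1.0e6
C_KM_S = 2.998e5                       # speed of light [km/s]

def E_R_max_keV(m_chi_GeV, m_N_GeV=122.0, v_max_km_s=544.0 + 230.0):
    mu = m_chi_GeV * m_N_GeV / (m_chi_GeV + m_N_GeV)   # reduced mass [GeV]
    beta = v_max_km_s / C_KM_S
    return 2.0 * mu**2 * beta**2 / m_N_GeV * KEV_PER_GEV

E_light = E_R_max_keV(0.5)    # sub-GeV DM: well below a ~1 keV threshold
E_heavy = E_R_max_keV(10.0)   # ~10 GeV DM: comfortably above threshold
```

With these numbers, $m_{\chi}=0.5\,\mathrm{GeV}$ gives $E_{R}^{\mathrm{max}}\approx0.03\,\mathrm{keV}$, far below an $\mathcal{O}(1\,\mathrm{keV})$ threshold, while $m_{\chi}=10\,\mathrm{GeV}$ gives $\approx9\,\mathrm{keV}$.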
This opens the possibility that if DM is sufficiently light, it could evade current constraints while still interacting strongly enough with nucleons or electrons to have an appreciable probability of scattering in the Earth. The underground scattering of DM particles with ordinary matter has the possibility to distort the DM density and velocity distribution near the Earth's surface, altering the recoil spectrum expected in direct detection experiments.
Light DM is not the only scenario where the stopping effect of underground nuclei or electrons might be important. In two-component DM models, one can envision a dominant weakly interacting component and a small but nevertheless non-negligible strongly interacting DM component. Since this subdominant component accounts for a tiny fraction of the DM relic abundance, it can easily evade direct detection constraints despite having potentially large DM-nucleon cross section. Such two-component DM models are well motivated since a small strongly interacting DM component could provide the seeds for building up the supermassive black hole of the Galaxy via gravothermal collapse~\cite{Pollack:2014rja}.
The effect of Earth-scattering on DM particles for contact, long range or dipole interactions has been studied before in~Refs.~\cite{Collar:1992qc,Collar:1993ss,Hasenbalg:1997hs,Foot:2003iv,Zaharijas:2004jv,Sigurdson:2004zp,Mack:2007xj,Cline:2012is,Daci:2015hca,Lee:2015qva}. Additionally, the stopping power of nuclei, bound electrons and free electrons in metallic layers of the Earth was estimated in Ref.~\cite{Kouvaris:2014lpa}. This stopping effect of underground atoms can induce a diurnal modulation in the observed DM recoil signal~\cite{Collar:1992qc,Hasenbalg:1997hs,Kouvaris:2014lpa} (as well as a distinctive top-down asymmetry in directional detectors \cite{Kouvaris:2015laa}). This modulation has an easy explanation: as the Earth is moving with respect to the rest frame of the DM halo, DM particles cross the Earth in larger numbers from the direction opposite to the velocity of the Earth in the Galactic rest frame. Due to the rotation of the Earth around its own axis, the DM particles that come from this preferred direction travel different distances underground during the day in order to reach the detector. Since the probability of particles interacting is larger when they travel larger distances underground, this creates a variation in the DM signal with a period of one day. Similar diurnal modulation effects are produced also in the context of mirror DM~\cite{Foot:2011fh} and dissipative hidden sector DM~\cite{Foot:2014osa,Clarke:2015gqw}.
The DAMA collaboration has searched for signs of such a diurnally modulated signal, with null results~\cite{Bernabei:2015nia}. If such a modulation due to Earth-scattering were observed in the future, however, it would allow us to unequivocally identify DM as the source of the signal. It may also provide hints about the strength and structure of DM-nucleon interactions. Thus, correctly accounting for these effects may be crucial for identifying and characterising DM.
Previous studies have focused primarily on the stopping effects of DM interactions in the Earth. DM particles which scatter are assumed to be deflected away from the detector or to lose energy without being deflected \cite{Bernabei:2015nia}, leading to an attenuation of the total DM flux. However, such an approach is not generally self-consistent. Any particle which interacts must necessarily be deflected from its trajectory. These particles do not disappear, but instead emerge from the Earth's surface on a different trajectory. Earth-scattering does not simply reduce the flux of DM particles but redistributes this flux, decreasing the density of DM at some points on the Earth while \textit{increasing it} at others. Any self-consistent calculation must conserve the total flux of DM particles and while early Monte-Carlo studies \cite{Collar:1992qc,Collar:1993ss,Hasenbalg:1997hs} did indeed account for this effect, it appears to have been largely neglected in the literature since.
In this paper, we study the deformation of the velocity distribution and recoil spectrum in the case where the DM-nucleon cross section is sufficiently large that DM particles undergo at most one underground scatter before reaching the detector. This `single-scatter' approximation allows us to evaluate the effects of Earth-scattering analytically, including contributions from both attenuation and deflection of particles. We study the size of the diurnal modulation of the DM signal for detectors at different latitudes, using a realistic model for the Earth and exploring a range of DM-nucleon contact interactions in the context of non-relativistic effective field theory \cite{Fitzpatrick:2012ix}. The \textsc{EarthShadow} code used to perform this study (as well as the numerical results produced) are made publicly available online \cite{EarthShadow}. The key result of the paper is illustrated in Fig.~\ref{fig:Rvt}, which shows the amplitude and phase of the daily modulation for low-mass DM in detectors at a number of underground labs. The modulation depends sensitively on the form of the DM-nucleon interaction, which may allow different interactions to be distinguished if such a modulation is observed in future.
The paper is organized as follows: In Sec.~\ref{sec:formalism}, we outline the non-relativistic effective field theory framework and enumerate the DM-nucleon interactions which we consider in this work. Sec.~\ref{sec:calculation} constitutes the main calculation of the paper: analytic expressions for the DM velocity distribution at the surface of the Earth when Earth-scattering is accounted for. In Sec.~\ref{sec:effects}, we present numerical results, showing the impact of Earth-scattering on the DM distribution for low- and intermediate-mass DM and for a selection of interaction types. In Sec.~\ref{sec:modulation}, we translate these results into predictions for the diurnal modulation. Finally, we discuss the implications of our results for future DM searches in Sec.~\ref{sec:discussion}, followed by a summary of our conclusions in Sec.~\ref{sec:conclusion}.
\section{NREFT formalism}
\label{sec:formalism}
In this section we briefly review the non-relativistic effective theory of DM-nucleon interactions.~This is the theoretical framework used here to calculate cross-sections, mean free paths and scattering probabilities for DM particles travelling through the Earth.~The theory was formulated in~\cite{Chang:2009yt,Fan:2010gt,Fitzpatrick:2012ix,Fitzpatrick:2012ib,Anand:2013yka}, and subsequently developed in~\cite{Menendez:2012tm,Cirigliano:2012pq,DelNobile:2013sia,Klos:2013rwa,Peter:2013aha,Hill:2013hoa,Catena:2014uqa,Catena:2014hla,Catena:2014epa,Gluscevic:2014vga,Panci:2014gga,Vietze:2014vsa,Barello:2014uda,Catena:2015uua,Schneck:2015eqa,Dent:2015zpa,Catena:2015vpa,Kavanagh:2015jma,D'Eramo:2016atc,Catena:2016hoj,Kahlhoefer:2016eds}.~In the remainder of this paper, we will refer to the above formalism as Non-Relativistic Effective Field Theory, or NREFT for short.
The relevant ``degrees of freedom'' in the effective theory of DM-nucleon interactions are four Hermitian operators~\cite{Fitzpatrick:2012ix}:~($i$ times) the momentum transfer operator, $i\mathbf{\hat{q}}$, the transverse relative velocity operator, $\mathbf{\hat{v}}^{\perp}$, and the DM particle and nucleon spin operators, $\mathbf{\hat{S}}_\chi$ and $\mathbf{\hat{S}}_N$, respectively.~By construction, $\mathbf{\hat{q}}$ and $\mathbf{\hat{v}}^{\perp}$ obey the orthogonality condition $\mathbf{\hat{v}}^{\perp} \cdot \mathbf{\hat{q}}=0$.~In this context, the Hamiltonian density for DM-nucleon interactions, $\hat{\mathcal{H}}_{\chi N}$, is given by a linear combination of interaction operators, each of which is a Galilean invariant scalar combination of $i\mathbf{\hat{q}}$, $\mathbf{\hat{v}}^{\perp}$, $\mathbf{\hat{S}}_\chi$ and $\mathbf{\hat{S}}_N$.
\begin{table*}[t!]
\centering
\begin{tabular}{ll}
\toprule
$\hat{\mathcal{O}}_1 = \mathbb{1}_{\chi N}$ & $\hat{\mathcal{O}}_9 = i{\bf{\hat{S}}}_\chi\cdot\left({\bf{\hat{S}}}_N\times\frac{{\bf{\hat{q}}}}{m_N}\right)$ \\
$\hat{\mathcal{O}}_3 = i{\bf{\hat{S}}}_N\cdot\left(\frac{{\bf{\hat{q}}}}{m_N}\times{\bf{\hat{v}}}^{\perp}\right)$ \hspace{2 cm} & $\hat{\mathcal{O}}_{10} = i{\bf{\hat{S}}}_N\cdot\frac{{\bf{\hat{q}}}}{m_N}$ \\
$\hat{\mathcal{O}}_4 = {\bf{\hat{S}}}_{\chi}\cdot {\bf{\hat{S}}}_{N}$ & $\hat{\mathcal{O}}_{11} = i{\bf{\hat{S}}}_\chi\cdot\frac{{\bf{\hat{q}}}}{m_N}$ \\
$\hat{\mathcal{O}}_5 = i{\bf{\hat{S}}}_\chi\cdot\left(\frac{{\bf{\hat{q}}}}{m_N}\times{\bf{\hat{v}}}^{\perp}\right)$ & $\hat{\mathcal{O}}_{12} = {\bf{\hat{S}}}_{\chi}\cdot \left({\bf{\hat{S}}}_{N} \times{\bf{\hat{v}}}^{\perp} \right)$ \\
$\hat{\mathcal{O}}_6 = \left({\bf{\hat{S}}}_\chi\cdot\frac{{\bf{\hat{q}}}}{m_N}\right) \left({\bf{\hat{S}}}_N\cdot\frac{\hat{{\bf{q}}}}{m_N}\right)$ & $\hat{\mathcal{O}}_{13} =i \left({\bf{\hat{S}}}_{\chi}\cdot {\bf{\hat{v}}}^{\perp}\right)\left({\bf{\hat{S}}}_{N}\cdot \frac{{\bf{\hat{q}}}}{m_N}\right)$ \\
$\hat{\mathcal{O}}_7 = {\bf{\hat{S}}}_{N}\cdot {\bf{\hat{v}}}^{\perp}$ & $\hat{\mathcal{O}}_{14} = i\left({\bf{\hat{S}}}_{\chi}\cdot \frac{{\bf{\hat{q}}}}{m_N}\right)\left({\bf{\hat{S}}}_{N}\cdot {\bf{\hat{v}}}^{\perp}\right)$ \\
$\hat{\mathcal{O}}_8 = {\bf{\hat{S}}}_{\chi}\cdot {\bf{\hat{v}}}^{\perp}$ & $\hat{\mathcal{O}}_{15} = -\left({\bf{\hat{S}}}_{\chi}\cdot \frac{{\bf{\hat{q}}}}{m_N}\right)\left[ \left({\bf{\hat{S}}}_{N}\times {\bf{\hat{v}}}^{\perp} \right) \cdot \frac{{\bf{\hat{q}}}}{m_N}\right] $ \\
\bottomrule
\end{tabular}
\caption{\textbf{Interaction operators relevant for the present analysis.}~The operator $\mathbb{1}_{\chi N}$ is the identity in the two-particle spin space, and $m_N$ is the nucleon mass.~All interaction operators have the same mass dimension.~In the above expressions, we omit the nucleon index $i$ for simplicity. Operator $\oper{1}$ corresponds to the standard spin-independent (SI) interaction, and $\oper{4}$ corresponds to the standard spin-dependent (SD) interaction.}
\label{tab:operators}
\end{table*}
Neglecting two-body DM-nucleon interactions, the Hamiltonian density for non-relativistic DM-nucleus scattering is:\footnote{Two-body corrections to Eq.~(\ref{eq:H_chiT}) have been estimated in chiral perturbation theory~\cite{Toivanen:2008zz,Cirigliano:2012pq,Menendez:2012tm,Klos:2013rwa,Hoferichter:2015ipa,Hoferichter:2016nvd}.}
\begin{eqnarray}
\hat{\mathcal{H}}_{\chi T} &\equiv & \sum_{i=1}^{A} \hat{\mathcal{H}}_{\chi N}^{(i)} = \sum_{i=1}^{A} \sum_{\tau=0,1} \sum_{j} c_j^{\tau}\hat{\mathcal{O}}_{j}^{(i)} \, t^{\tau}_{(i)} \,,
\label{eq:H_chiT}
\end{eqnarray}
where $t^0_{(i)}=\mathbb{1}_{2\times 2}$, $t^1_{(i)}=\tau_3$, and $\tau_3$ is the third Pauli matrix.~The matrices $t^\tau_{(i)}$, $\tau=0,1$, act on the $i$-th nucleon isospin space, and $A$ is the mass number of the target nucleus $T$.~Isoscalar and isovector coupling constants are denoted by $c_j^0$ and $c_j^1$, respectively, and are related to the coupling constants for protons and neutrons as follows:~$c^{p}_j=(c^{0}_j+c^{1}_j)/2$, $c^{n}_j=(c^{0}_j-c^{1}_j)/2$.~All coupling constants have dimension [mass]$^{-2}$.~The interaction operators $\hat{\mathcal{O}}_{j}^{(i)}$ in Eq.~(\ref{eq:H_chiT}) are Galilean invariant scalar combinations of $i\mathbf{\hat{q}}$, $\mathbf{\hat{v}}^{\perp}$, $\mathbf{\hat{S}}_\chi$ and $\mathbf{\hat{S}}_N$.~They are labelled by a nucleon index, $i$, and an interaction index, $j$.~Only fourteen independent Galilean invariant DM-nucleon interaction operators appear in Eq.~(\ref{eq:H_chiT}) if DM has spin less than or equal to 1/2~\cite{Anand:2013yka}.~For spin 1 DM, two additional interaction operators might be constructed~\cite{Dent:2015zpa}.~However, these are only relevant when the interference between the operators $\hat{\mathcal{O}}^{(i)}_4$ and $\hat{\mathcal{O}}^{(i)}_5$, and between $\hat{\mathcal{O}}^{(i)}_8$ and $\hat{\mathcal{O}}^{(i)}_{9}$ is not negligible, and will therefore not be considered further here.~We list the interaction operators relevant for the present analysis in Tab.~\ref{tab:operators}, omitting (from now onwards) the nucleon index $i$ for simplicity.~We point out that the operators $\oper{1}$ and $\oper{4}$ correspond to the familiar spin-independent (SI) and spin-dependent (SD) interactions respectively.
The differential cross-section for DM-nucleus scattering, ${\rm d}\sigma/{\rm d}q^2$, can be written as the sum of eight distinct terms~\cite{Anand:2013yka}:
\begin{align}
\frac{{\rm d}\sigma}{{\rm d}q^2}
&=\frac{1}{(2J+1) v^2}\sum_{\tau,\tau'} \bigg[ \sum_{k=M,\Sigma',\Sigma''} R^{\tau\tau'}_k \left(v_{T}^{\perp 2}, {q^2 \over m_N^2} \right) W_k^{\tau\tau'}(q^2) \nonumber\\
&+{q^{2} \over m_N^2} \sum_{k=\Phi'', \Phi'' M, \tilde{\Phi}', \Delta, \Delta \Sigma'} \hspace{-0.4 cm}R^{\tau\tau'}_k\left(v_{T}^{\perp 2}, {q^2 \over m_N^2}\right) W_k^{\tau\tau'}(q^2) \bigg] \,,
\label{eq:dsigma}
\end{align}
where $J$ is the spin of the target nucleus, $v$ the DM-nucleus relative velocity, $q$ the momentum transfer, $v_{T}^{\perp 2}=v^2-q^2/(4\mu_{T}^2)$, and $\mu_{T}$ the DM-nucleus reduced mass.~The eight ``DM-response functions'' $R^{\tau\tau'}_k$ depend on $q^2/m_N^2$, $v_{T}^{\perp 2}$, and the isoscalar and isovector coupling constants $c_j^\tau$.~We list them in the Appendix.~The eight nuclear response functions $W_k^{\tau\tau'}$ are quadratic in matrix elements of nuclear charges and currents generated in the scattering of DM by nuclei, and must be computed numerically.~For the elements considered in this investigation (see Tab.~\ref{tab:elements}) we adopt the nuclear response functions found in Ref.~\cite{Catena:2015vpa} using the {\sffamily NuShellX@MSU} shell-model code~\cite{Brown:2014} and phenomenological nucleon-nucleon interactions~\cite{Brown:2001zz}. For concreteness, we will assume isoscalar interactions ($c^p = c^n = c^0/2$) throughout this work.
\section{Earth-scattering calculation}
\label{sec:calculation}
With a formalism for DM-nucleus interactions in hand, we now present the main calculation of the Earth-scattering effect. If DM particles scatter with nuclei in the Earth as they travel underground, they will emerge near the surface of the Earth with a different energy and direction. The energy and direction of DM particles are encoded in the local velocity distribution $f(\mathbf{v})$, meaning that the effect of Earth-scattering will be to induce perturbations in $f(\mathbf{v})$, which will vary depending on the position of the detector on the Earth's surface. We will therefore write the perturbed velocity distribution as $\tilde{f}(\mathbf{v}, \gamma)$, where the angle $\gamma$ describes the position of the detector with respect to the average DM velocity. A more detailed definition of $\gamma$ is given in Sec.~\ref{sec:speeddist} and illustrated in Fig.~\ref{fig:gamma}.
In order to calculate this perturbed DM velocity distribution, we assume that the DM scatters at most once, which we will refer to as the `single scatter' approximation. This is roughly equivalent to assuming $R_\oplus \lesssim \lambda$, where $R_\oplus$ is the Earth's radius and $\lambda$ is the typical mean free path of the DM particles. In this case, the perturbed velocity distribution contains two contributions:
\begin{equation}\label{eq:fpert}
\tilde{f}(\mathbf{v},\gamma) = f_{\rm A}(\mathbf{v},\gamma) + f_{\rm D}(\mathbf{v},\gamma)\,.
\end{equation}
Here, $f_{\rm A}$ is the \textit{attenuated} population of particles: those particles whose trajectories pass through the detector and which have not scattered before reaching the detector. In contrast, $f_{\rm D}$ is the \textit{deflected} population of particles: those whose trajectories did not initially pass through the detector but which have scattered towards the detector during Earth-crossing.
In our analysis, we neglect gravitational focusing of DM particles by the Earth, which may also lead to percent-level distortions in the local velocity distribution at low $v$ \cite{Lee:2013wza,Kouvaris:2015xga}. We also assume that the time scale over which a DM particle crosses the Earth is negligible compared with the Earth's rotational period; a DM particle with a typical speed of $\sim220 \textrm{ km s}^{-1}$ will take only $\mathcal{O}(30 \,\, \mathrm{seconds})$ to cross the Earth. This means that we can assume that the detector has a fixed position in calculating the perturbed velocity distribution.
With these caveats, we now proceed to describe the \textit{free} velocity distribution $f_0(\mathbf{v})$, which one would expect in the absence of Earth-scattering. We then calculate the two contributions to the perturbed distribution shown in Eq.~\ref{eq:fpert}: \textit{attenuation} and \textit{deflection}.
\subsection{Free velocity distribution}
\label{sec:speeddist}
We assume that the free DM velocity distribution is described by the Standard Halo Model (SHM) \cite{Green:2011bv}, which has the following analytic form in the laboratory frame:
\begin{equation} \label{eq:SHM}
f_0(\mathbf{v}) = \frac{1}{N} \exp \left[ -\frac{(\mathbf{v} - \langle \mathbf{v}_\chi \rangle)^2}{2\sigma_v^2} \right] \times \, \Theta (v_\mathrm{esc} - \left|\mathbf{v} - \langle \mathbf{v}_\chi \rangle\right|)\,.
\end{equation}
Here, the normalisation constant $N$ is given by:
\begin{equation}
N = (2\pi \sigma_v^2)^{3/2} \left( \mathrm{erf} \left( \frac{v_\mathrm{esc}}{\sqrt{2}\sigma_v}\right) - \sqrt{\frac{2}{\pi}} \frac{v_\mathrm{esc}}{\sigma_v} \exp \left( -\frac{v_\mathrm{esc}^2}{2\sigma_v^2} \right) \right)\,,
\end{equation}
and we assume $\sigma_v = 156 \textrm{ km s}^{-1}$ for the velocity dispersion of the halo, and $v_\mathrm{esc} = 533 \textrm{ km s}^{-1}$ for the local escape speed in the Galactic frame \cite{Piffl:2013mla}. In the SHM, the average DM velocity in the Earth's frame arises from the motion of the Earth through the halo: $\langle \mathbf{v}_\chi \rangle = -\mathbf{v}_e$. We assume a constant value of $v_e = 220 \textrm{ km s}^{-1}$ for the Earth's speed.
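For reference, Eq.~\ref{eq:SHM} and its normalisation constant are straightforward to evaluate numerically. The following Python sketch is purely illustrative (it is not part of the \textsc{EarthShadow} code) and fixes $\langle \mathbf{v}_\chi \rangle$ along the negative $z$-axis:

```python
import numpy as np
from scipy.special import erf

SIGMA_V = 156.0  # km/s, halo velocity dispersion
V_ESC = 533.0    # km/s, local Galactic escape speed
V_E = 220.0      # km/s, Earth's speed through the halo

def shm_norm(sigma_v=SIGMA_V, v_esc=V_ESC):
    """Normalisation constant N of the truncated Maxwellian."""
    x = v_esc / (np.sqrt(2.0) * sigma_v)
    return (2.0 * np.pi * sigma_v**2)**1.5 * (
        erf(x) - np.sqrt(2.0 / np.pi) * (v_esc / sigma_v) * np.exp(-x**2))

def f0(v_vec, v_e=V_E, sigma_v=SIGMA_V, v_esc=V_ESC):
    """Free lab-frame distribution f_0(v); here <v_chi> = (0, 0, -v_e)."""
    v_rel = np.asarray(v_vec, dtype=float) - np.array([0.0, 0.0, -v_e])
    speed = np.linalg.norm(v_rel)
    if speed > v_esc:
        return 0.0  # beyond the escape-speed cut-off
    return np.exp(-0.5 * speed**2 / sigma_v**2) / shm_norm(sigma_v, v_esc)
```

With these parameters, $f_0$ vanishes for speeds above $v_\mathrm{esc}$ relative to the halo rest frame.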
It will be useful in the following sections to have an explicit coordinate expression for $f_0(\mathbf{v})$. In a coordinate system in which the detector lies along the positive $z$-axis (such as that illustrated in Figs.~\ref{fig:attenuation} and \ref{fig:deflection}) we can choose (without loss of generality) $\langle \mathbf{v}_\chi \rangle$ to lie in the $x$-$z$ plane. We can then write the angle between a given DM velocity $\mathbf{v} = (v, \theta, \phi)$ and the average velocity $\langle \mathbf{v}_\chi \rangle$ in terms of the polar coordinates as:
\begin{equation}\label{eq:gamma}
\mathbf{v} \cdot \langle \mathbf{v}_\chi \rangle = v v_e \left( \sin\gamma \sin\theta \cos\phi + \cos\gamma \cos\theta \right)\,.
\end{equation}
\begin{figure}[t!]
\centering
\includegraphics[width=0.30\textwidth]{plots/Earth-gamma.pdf}
\caption{\textbf{Geometry of the detector position $\mathbf{r}_\mathrm{det}$ measured with respect to the mean DM velocity $\langle \mathbf{v}_\chi \rangle$.} The angle between these two vectors is denoted $\gamma$. For $\gamma = 0$, the average DM particle experiences the maximal Earth-crossing distance before reaching the detector. In contrast, for $\gamma = \pi$ the Earth-crossing distance is minimal. We note that the \textit{true} DM velocities are distributed about the mean value $\langle \mathbf{v}_\chi \rangle$ (according to Eq.~\ref{eq:SHM}). We also remind the reader that the flux of DM particles (before scattering) is spatially uniform, so DM particles enter the Earth's surface at all points (not only along the diameter). } \label{fig:gamma}
\end{figure}
The angle $\gamma$ should be interpreted as the angle between $\langle \mathbf{v}_\chi \rangle$ and the position of the detector $\mathbf{r}_\mathrm{det}$ on the Earth's surface:
$\gamma = \cos^{-1} \left(\langle \hat{\mathbf{v}}_\chi \rangle \cdot \hat{\mathbf{r}}_\mathrm{det}\right)$. This is illustrated in Fig.~\ref{fig:gamma}. An angle of $\gamma = 0$ corresponds to a flux of DM particles which must (on average) cross the entire Earth before reaching the detector. In contrast, $\gamma = \pi$ corresponds to the case where the majority of DM particles pass the detector \textit{before} crossing the Earth. We describe how to calculate the value of $\gamma$ for a given position on the Earth in Sec.~\ref{sec:modulation}.
\subsection{Attenuation}
\begin{figure}[tb!]
\centering
\subfloat[Attenuation]{\label{fig:attenuation}
\includegraphics[width=0.33\textwidth]{plots/Earth-attenuation.pdf}}\hspace{1cm}
\subfloat[Deflection]{\label{fig:deflection}
\includegraphics[width=0.30\textwidth]{plots/Earth-deflection.pdf}}
\caption{\textbf{Geometry for the scattering of DM particles in the Earth.} \textbf{(a) Attenuation:} Particles with velocity $\mathbf{v} = (v, \theta,\phi)$ must cross the Earth along the trajectory $\mathrm{AB}$ without scattering in order to arrive at the detector. \textbf{(b) Deflection:} Particles with initial velocity $\mathbf{v}' = (v', \theta', \phi')$ will reach the detector with velocity $\mathbf{v} = (v, \theta, \phi)$ if they interact along the line $\mathrm{AB}$ and scatter through an angle $\alpha$.} \label{fig:geometry}
\end{figure}
We now calculate the effects of attenuation on the DM velocity distribution. In Fig.~\ref{fig:attenuation}, we show the scattering geometry for DM particles impinging on the detector. Particles with an initial velocity $\mathbf{v} = (v, \theta,\phi)$ will reach the detector with that same velocity $\mathbf{v}$ if they do not scatter during their passage through the Earth. If any such particle scatters, however, it will be deflected from the trajectory shown in Fig.~\ref{fig:attenuation} and will no longer reach the detector (assuming that the finite size of the detector can be neglected). Thus, the population of DM particles reaching the detector with velocity $\mathbf{v}$ will be depleted.
The survival probability for a particle with velocity $\mathbf{v}$ is given by:
\begin{align} \label{eq:survival}
\begin{split}
p_\mathrm{surv}(v) &= \exp \left[ - \int_{\mathrm{AB}} \frac{\mathrm{d}l}{\lambda(\mathbf{r}, v)} \right] = \exp \left[ - \sum_{i}^{\mathrm{species}} \sigma_i(v) \int_{\mathrm{AB}} n_i(\mathbf{r}) \mathrm{d}l \right]\,,
\end{split}
\end{align}
where the integral is over the path $\mathrm{A}\rightarrow \mathrm{B}$ from the surface of the Earth to the detector, as illustrated in Fig.~\ref{fig:attenuation}. We have also written the mean free path $\lambda$ in terms of the total interaction cross section with Earth species $i$ and the number density of that species: $\lambda(\mathbf{r}, v)^{-1} = \sum_{i}^{\mathrm{species}} \sigma_i (v) n_i(\mathbf{r})$.
If the number density of particles in the Earth were uniform, then the integral over the DM path in Eq.~\ref{eq:survival} would simply be equal to the Earth-crossing distance, $d$, as shown in Fig.~\ref{fig:attenuation}. This is given by
\begin{align} \label{eq:dcross}
\begin{split}
d(\cos\theta) &= (R_\oplus-l_D) \cos\theta + \sqrt{2R_\oplus l_D - l_D^2 + (R_\oplus-l_D)^2 \cos^2\theta} \\
&\approx
\begin{cases}
2 R_\oplus \cos\theta & \quad \theta \in [0, \pi/2]\\
0 & \quad \theta \in [\pi/2, \pi]\,,
\end{cases}
\end{split}
\end{align}
where $R_\oplus \approx 6371 \,\, \mathrm{km}$ is the Earth's radius and $l_D$ is the depth of the detector underground. The last line in Eq.~\ref{eq:dcross} is obtained in the limit $l_D \ll R_\oplus < \lambda$. From now on, we assume that this inequality holds (i.e.~that a typical DM particle is unlikely to scatter in the shallow region of the Earth above the detector) and set $l_D$ to zero.
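Eq.~\ref{eq:dcross} translates directly into code. The sketch below is our own illustration, with distances in km; for $l_D = 0$ it reproduces the piecewise limit quoted above:

```python
import numpy as np

R_EARTH = 6371.0  # km, Earth's radius

def d_cross(cos_theta, l_D=0.0, R=R_EARTH):
    """Earth-crossing distance d(cos theta) for a detector at depth l_D (km)."""
    return (R - l_D) * cos_theta + np.sqrt(
        2.0 * R * l_D - l_D**2 + (R - l_D)**2 * cos_theta**2)
```

For $l_D = 0$ this gives $d = 2R_\oplus\cos\theta$ for $\theta \in [0, \pi/2]$ and $d = 0$ for $\theta \in [\pi/2, \pi]$.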
However, the Earth's density is not uniform, so we must account for the radial density profiles of each of the Earth elements $n_i(\mathbf{r}) = n_i(r)$. The distance $l$ from A to some point along the line AB can be written in terms of the distance $r$ of that point from the Earth's centre as,
\begin{equation}
l = R_\oplus \cos\theta \pm \sqrt{r^2 - R_\oplus^2\sin^2\theta}\,.
\end{equation}
With this, we can perform the integral along $\mathrm{AB}$ and calculate an \textit{effective} Earth-crossing distance, $d_\mathrm{eff}$:
\begin{align}\label{eq:deff}
\begin{split}
d_{\mathrm{eff},i}(\cos\theta) &= \frac{1}{\overline{n}_i}\int_{\mathrm{AB}} n_i(\mathbf{r}) \mathrm{d}l = \ 2\int_{R_\oplus \sin\theta}^{R_\oplus} \frac{n_i(r)}{\overline{n}_i} \frac{r \, \mathrm{d}r}{\sqrt{r^2 - R_\oplus^2\sin^2\theta}}\,.
\end{split}
\end{align}
Here, we have defined the number density averaged over the Earth's radius, $\overline{n}_i$:
\begin{align}
\label{eq:nbar}
\begin{split}
\overline{n}_i &= \frac{1}{R_\oplus}\int_{0}^{R_\oplus} n_i(r) \,\mathrm{d}r\,.
\end{split}
\end{align}
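A possible numerical implementation of Eqs.~\ref{eq:deff} and \ref{eq:nbar} is sketched below (our illustration, not the tabulation used in the paper). The substitution $u = \sqrt{r^2 - R_\oplus^2\sin^2\theta}$ removes the integrable endpoint singularity of the integrand:

```python
import numpy as np
from scipy.integrate import quad

R_EARTH = 6371.0  # km

def d_eff(cos_theta, n_of_r, R=R_EARTH):
    """Effective Earth-crossing distance for a radial density profile n_of_r(r)."""
    if cos_theta <= 0.0:
        return 0.0  # trajectory does not cross the Earth (detector depth -> 0)
    n_bar = quad(n_of_r, 0.0, R)[0] / R      # radius-averaged density, Eq. (nbar)
    b = R * np.sqrt(1.0 - cos_theta**2)      # closest approach R sin(theta)
    # substitute u = sqrt(r^2 - b^2), so r dr / sqrt(r^2 - b^2) -> du
    integrand = lambda u: n_of_r(np.sqrt(u**2 + b**2)) / n_bar
    return 2.0 * quad(integrand, 0.0, R * cos_theta)[0]
```

For a uniform profile, $d_\mathrm{eff}$ reduces to the geometric distance $2R_\oplus\cos\theta$, as it should.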
The velocity distribution of these \textit{attenuated} particles (i.e.~those which survive the Earth crossing) is therefore related to the free distribution by:
\begin{align}
\label{eq:attenuated}
\begin{split}
f_A(\mathbf{v}, \gamma) &= f_0(\mathbf{v}) \exp \left[ - \sum_{i}^\mathrm{species} \sigma_i(v) \,\overline{n}_i \,d_{\mathrm{eff},i}(\cos\theta) \right] = f_0(\mathbf{v}) \exp \left[ - \sum_{i}^\mathrm{species} \frac{d_{\mathrm{eff},i}(\cos\theta)}{\overline{\lambda}_i(v)} \right]\,.
\end{split}
\end{align}
Here, we have defined the average mean free path due to scattering with a given Earth species $i$ as $\bar{\lambda}_i(v)= [\sigma_i (v) \bar{n}_i]^{-1}$. We consider the contribution of eight elements, which are summarised in Table~\ref{tab:elements} with tabulated density profiles taken from Ref.~\cite{Lundberg:2004dn} (using data from Refs.~\cite{Geochemistry, Britannica}). We perform the integral in Eq.~\ref{eq:deff} numerically and tabulate the values of $d_\mathrm{eff}$ as a function of $\cos\theta$ for each element. We note that the majority of Earth elements have zero nuclear spin, so we would not expect a large Earth-scattering effect for operators which couple predominantly to the nuclear spin.
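The suppression of $f_0$ in Eq.~\ref{eq:attenuated} is then a simple exponential in the summed per-species optical depths. A minimal sketch (ours, with hypothetical per-species inputs in consistent length units) reads:

```python
import numpy as np

def survival_factor(d_eff_list, mfp_list):
    """Attenuation factor f_A / f_0 of Eq. (attenuated):
    exp[-sum_i d_eff_i / lambda_bar_i], one entry per Earth species."""
    optical_depth = sum(d / lam for d, lam in zip(d_eff_list, mfp_list))
    return np.exp(-optical_depth)
```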
\begin{table}[t!]\centering
\begin{tabular}{@{}llllllll@{}}
\toprule
Element & A & $m_A$ [GeV] & $\bar{n}$ [$\mathrm{cm}^{-3}$] & & Core & & Mantle\\
\hline
Oxygen & 16 &14.9 & $3.45 \times 10^{22}$ & & 0.0 & & 0.440 \\
Silicon & 28 & 26.1 & $1.77 \times 10^{22}$ & & 0.06 & & 0.210\\
Magnesium & 24 & 22.3 & $1.17 \times 10^{22}$ & & 0.0 & & 0.228 \\
Iron & 56 & 52.1 & $6.11 \times 10^{22}$ & & 0.855 & & 0.0626 \\
Calcium & 40 & 37.2 & $7.94 \times 10^{20}$ & & 0.0 & & 0.0253 \\
Sodium & 23 & 21.4 & $1.47 \times 10^{20}$ & & 0.0 & & 0.0027 \\
Sulphur & 32 & 29.8 & $2.33 \times 10^{21}$ & & 0.019 & & 0.00025 \\
Aluminium & 27 & 25.1 & $1.09 \times 10^{21}$ & & 0.0 & & 0.0235 \\
\bottomrule
\end{tabular}
\caption{\textbf{Summary of Earth elements included in this analysis.}~The last two columns report the mass fractions of each element in the Earth's core and mantle, respectively (values from Tab.~1 in Ref.~\cite{Lundberg:2004dn}). The core and mantle constitute roughly 32\% and 68\% of the Earth's total mass respectively.}
\label{tab:elements}
\end{table}
\subsection{Deflection}
In Fig.~\ref{fig:deflection}, we show the scattering geometry for particles which are deflected away from their initial trajectory and towards the detector. In order to arrive at the detector with velocity $\mathbf{v} = (v, \theta, \phi)$, DM particles with initial velocity $\mathbf{v}' = (v', \theta',\phi')$ must scatter somewhere along the line $\mathrm{AB}$ and be deflected through an angle $\alpha$. The contribution to the DM velocity distribution at velocity $\mathbf{v}$ is then obtained by integrating over the path $\mathrm{AB}$ and over the initial DM velocity distribution $f_0(\mathbf{v}')$.
\subsubsection{Contribution from a single interaction point}
We focus on an interaction region around the point $\mathrm{C}$ in Fig.~\ref{fig:deflection}. We assume this region has infinitesimal length $\mathrm{d}l$ (oriented along the line AB) and an arbitrary cross-sectional area $\mathrm{d}S$ (perpendicular to the line AB). The path length of an incoming DM particle crossing this region is then $\mathrm{d}l/\cos\alpha$, meaning that the probability of scattering inside this region is\footnote{For brevity, we suppress the sum over the different elements in the Earth.}
\begin{equation}\label{eq:pscat}
\mathrm{d}p_\mathrm{scat} = \frac{\mathrm{d}l}{\lambda(\mathbf{r}, v')\cos\alpha}\,.
\end{equation}
The rate of particles entering the interaction region and scattering into the direction $\mathbf{v}$ is then,
\begin{align}\label{eq:flux_in}
\begin{split}
\bigg[n_\chi \,f_0(\mathbf{v}') \,\mathbf{v}' \cdot \mathrm{d}\mathbf{S} \, \mathrm{d}^3\mathbf{v}' \bigg] \bigg[ \, \mathrm{d}p_\mathrm{scat} \, P(\mathbf{v}' \rightarrow \mathbf{v}) \, \mathrm{d}^3\mathbf{v} \bigg]\,,
\end{split}
\end{align}
where the first bracket is the rate of particles entering the interaction region, with the surface vector $\mathrm{d}\mathbf{S}$ pointing along $\mathbf{v}$ (i.e. along the line from A to B). The number density of DM particles is denoted by $n_\chi$. By convention, we will keep $n_\chi$ constant before and after scattering, so that changes in the overall density are instead encoded in the velocity distribution. The second term in brackets is the probability of scattering from $\mathbf{v}'$ to $\mathbf{v}$ (i.e.~$\mathrm{d}p_\mathrm{scat}$ is the probability of scattering, and $P(\mathbf{v}' \rightarrow \mathbf{v})$ is the probability of scattering into a particular velocity given that the particle has scattered).
The rate of deflected particles leaving the interaction region with velocity $\mathbf{v}$ is defined (in analogy to the incoming rate) to be,
\begin{equation}\label{eq:flux_out}
n_\chi f_D(\mathbf{v}, \gamma) \,\mathbf{v}\cdot\mathrm{d}\mathbf{S} \, \mathrm{d}^3\mathbf{v}\,.
\end{equation}
The distribution $f_D(\mathbf{v})$ is not constrained to be normalised to unity. Instead, the overall normalisation is determined by matching the incoming and outgoing fluxes of particles in the scattering region. Equating Eqs.~\ref{eq:flux_in} and \ref{eq:flux_out}, we obtain an expression for the contribution to $f_D(\mathbf{v}, \gamma)$ from interactions at point $\mathrm{C}$,
\begin{equation}\label{eq:fD1}
f_D(\mathbf{v},\gamma) = \frac{\mathrm{d}l}{\lambda(\mathbf{r},v')}\frac{v'}{v} f_0(\mathbf{v}') P(\mathbf{v}' \rightarrow \mathbf{v}) \, \mathrm{d}^3 \mathbf{v}'\,,
\end{equation}
where we have used the fact that $\mathbf{v}' \cdot \mathrm{d}\mathbf{S} = v' \cos\alpha \, \mathrm{d}S$ and $\mathbf{v} \cdot \mathrm{d}\mathbf{S} = v\,\mathrm{d}S$.
\subsubsection{Contribution from all interaction points}
We now integrate over all points which give a contribution to the deflected velocity distribution. For a fixed final velocity $\mathbf{v}$, we must integrate over the line $\mathrm{AB}$ in Fig.~\ref{fig:deflection}. We assume that the DM particles scatter at most once, meaning that the initial velocity distribution $f_0(\mathbf{v}')$ is not distorted by the passage through the Earth. We therefore take $f_0(\mathbf{v}')$ to be spatially uniform. The integration over the path length $l$ along $\mathrm{AB}$ reduces to the same form given in Eq.~\ref{eq:deff}. The contribution of a given initial DM velocity $\mathbf{v}'$ to the deflected distribution is then:
\begin{equation}\label{eq:fD2}
f_D(\mathbf{v},\gamma) = \frac{d_\mathrm{eff}(\cos\theta)}{\overline{\lambda}(v')}\frac{v'}{v} f_0(\mathbf{v}') P(\mathbf{v}' \rightarrow \mathbf{v}) \, \mathrm{d}^3 \mathbf{v}'\,.
\end{equation}
\subsubsection{Kinematics}
We now consider how to calculate $P(\mathbf{v}' \rightarrow \mathbf{v})$. The deflection angle $\alpha$ is fixed geometrically by the directions of DM particles incoming and outgoing from point $\mathrm{C}$:\footnote{We remind the reader that the primed angles $(\theta',\phi')$ describe the incoming DM particle direction (with the positive $z$-axis oriented from the centre of the Earth to the detector), while the unprimed angles $(\theta,\phi)$ describe the final DM direction (in the same coordinate frame).}
\begin{equation}\label{eq:cosalpha}
\cos\alpha = \sin\theta \sin\theta' \cos(\phi - \phi') + \cos\theta \cos\theta'\,.
\end{equation}
For a given deflection angle $\alpha$, DM mass $m_\chi$ and target nuclear mass $m_A$, the ratio $v'/v$ is fixed by kinematics:
\begin{equation} \label{eq:kappa}
\frac{v'}{v} = \frac{m_\chi + m_A}{m_\chi \cos\alpha \pm \sqrt{m_A^2 - m_\chi^2 \sin^2\alpha}} \equiv \kappa^\pm(\alpha, m_\chi, m_A)\,.
\end{equation}
We note that for $m_\chi \leq m_A$, the only valid solution is $v' = \kappa^+ v$ for all values of $\alpha$. In contrast, for $m_\chi > m_A$, we require $\cos\alpha > (1 - m_A^2/m_\chi^2)^{1/2}$, in which case both solutions are valid: $v' = \kappa^\pm v$.
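These kinematic relations (Eqs.~\ref{eq:cosalpha} and \ref{eq:kappa}) translate directly into code. The following sketch (ours, for illustration only, with masses in GeV) returns the speed ratio $v'/v$ for a chosen branch:

```python
import numpy as np

def cos_deflection(theta, phi, theta_p, phi_p):
    """cos(alpha) between outgoing (theta, phi) and incoming (theta', phi')."""
    return (np.sin(theta) * np.sin(theta_p) * np.cos(phi - phi_p)
            + np.cos(theta) * np.cos(theta_p))

def kappa(cos_alpha, m_chi, m_A, branch=+1):
    """Speed ratio v'/v = kappa^{+/-}; branch=-1 is only valid for m_chi > m_A."""
    disc = m_A**2 - m_chi**2 * (1.0 - cos_alpha**2)
    if disc < 0.0:
        raise ValueError("deflection angle kinematically forbidden")
    return (m_chi + m_A) / (m_chi * cos_alpha + branch * np.sqrt(disc))
```

For forward scattering ($\cos\alpha = 1$) the positive branch gives $\kappa^+ = 1$, i.e.~$v' = v$, as expected.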
We can now write the scattering probability as
\begin{align}\label{eq:pv}
\begin{split}
P(\mathbf{v}' \rightarrow \mathbf{v}) &= \frac{1}{v^2} \sum_{a=\pm} \delta(v - v'/\kappa^a)P^a(\hat{\mathbf{v}}) \,.
\end{split}
\end{align}
The factor of $1/v^2$ is required to ensure correct normalisation and we implicitly assume that the negative-sign solution in the sum is included only in the case that $m_\chi > m_A$. The term $P^\pm(\hat{\mathbf{v}})$ is now simply the probability distribution for the \textit{direction} of $\mathbf{v}$. The deflection of the DM particle is azimuthally symmetric, so we can write:
\begin{align}\label{eq:ppm}
\begin{split}
P^\pm(\hat{\mathbf{v}}) &=\frac{1}{2\pi}P^\pm(\cos\alpha)\,.
\end{split}
\end{align}
The probability distribution of $\cos\alpha$ is given by:
\begin{align}
\begin{split}
P^\pm(\cos\alpha) &= \frac{1}{\sigma}\left.\frac{\mathrm{d}\sigma}{\mathrm{d}\cos\alpha}\right|_\pm\,,
\end{split}
\end{align}
where $\sigma$ is the total cross section. With this definition, the distribution of $\cos\alpha$ is normalised such that:
\begin{equation}
\int \left(P^+(\cos\alpha) + P^-(\cos\alpha) \right) \, \mathrm{d}\cos\alpha = 1\,,
\end{equation}
where the integral is over all kinematically allowed values of $\cos\alpha$, i.e.:
\begin{equation}\label{eq:cosalpharange}
\cos\alpha \in \begin{cases}
[-1, 1] & \quad \text{ for } m_\chi \leq m_A\,,\\
\left[\sqrt{1- m_A^2/m_\chi^2}, 1\right] & \quad \text{ for } m_\chi > m_A\,.
\end{cases}
\end{equation}
We again emphasise that the $P^-$ term is only included for $m_\chi > m_A$.
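The allowed interval of Eq.~\ref{eq:cosalpharange} is simple to encode (again our sketch, masses in GeV):

```python
import numpy as np

def cos_alpha_range(m_chi, m_A):
    """Kinematically allowed interval of cos(alpha), Eq. (cosalpharange)."""
    if m_chi <= m_A:
        return (-1.0, 1.0)
    # heavy DM cannot be deflected beyond a maximum angle
    return (np.sqrt(1.0 - (m_A / m_chi)**2), 1.0)
```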
The differential cross section with respect to $\cos\alpha$ can be obtained by a change of variables:
\begin{equation} \label{eq:dsigmadcosalpha}
\left. \dbd{\sigma}{\cos\alpha}\right|_\pm = \left.\left(\dbd{E_R}{\cos\alpha} \dbd{\sigma}{E_R} \right)\right|_\pm \,.
\end{equation}
Again, from kinematics, we can calculate $E_R$ as a function of $\cos\alpha$:
\begin{align}\label{eq:ER}
\begin{split}
E^{\pm}_R(\cos\alpha) &= \frac{m_\chi^2 v'^2}{(m_\chi + m_A)^2}\left( m_\chi \sin^2\alpha + m_A \mp \cos\alpha\sqrt{m_A^2 - m_\chi^2 \sin^2\alpha}\right)\,.
\end{split}
\end{align}
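As a cross-check, Eq.~\ref{eq:ER} satisfies energy conservation, $E_R = \tfrac{1}{2}m_\chi(v'^2 - v^2)$ with $v' = \kappa^\pm v$. A sketch (ours; natural units, masses in GeV and speeds in units of $c$) is:

```python
import numpy as np

def recoil_energy(cos_alpha, v_in, m_chi, m_A, branch=+1):
    """E_R^{+/-}(cos alpha) of Eq. (ER); v_in is the incoming speed v'.
    branch=+1 selects the kappa^+ solution (upper sign), branch=-1 the kappa^- one."""
    sin2 = 1.0 - cos_alpha**2
    root = np.sqrt(m_A**2 - m_chi**2 * sin2)
    pref = m_chi**2 * v_in**2 / (m_chi + m_A)**2
    return pref * (m_chi * sin2 + m_A - branch * cos_alpha * root)
```

Forward scattering transfers no energy, while a head-on collision gives the familiar maximum recoil $2\mu_A^2 v'^2/m_A$.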
The derivative appearing in Eq.~\ref{eq:dsigmadcosalpha} then follows straightforwardly. It is now apparent why we have maintained the superscript $\pm$ throughout: the recoil energy as a function of $\cos\alpha$ depends on which of the two kinematic solutions is being considered. As a result, we obtain a different distribution for $\cos\alpha$ in each case. The distributions $P^\pm(\cos\alpha)$ can be calculated from a given differential cross section using the prescription we have just outlined. In Fig.~\ref{fig:Pcosalpha}, we plot the distributions of deflection angles for a number of the operators described in Sec.~\ref{sec:formalism}.
\begin{figure*}[t]
\includegraphics[width=0.32\textwidth]{{plots/Pcosalpha_mx=0p5_Fe.pdf}}
\includegraphics[width=0.32\textwidth]{plots/Pcosalpha_mx=50_Fe.pdf}
\includegraphics[width=0.315\textwidth]{plots/Pcosalpha_mx=100_Fe.pdf}
\caption{\textbf{Probability distributions for the DM deflection angle $\alpha$.} Within each panel we show $P(\cos\alpha)$ for 6 different operators from Table~\ref{tab:operators}, while each panel shows a different DM particle mass: $m_\chi = $ 0.5 GeV (\textbf{left}), $m_\chi = $ 50 GeV (\textbf{centre}), and $m_\chi = $ 100 GeV (\textbf{right}). We fix the DM speed at $v = 220 \textrm{ km s}^{-1}$ and consider scattering with Iron nuclei ($m_\mathrm{Fe} \approx 52 \, \, \mathrm{GeV}$). DM particles which scatter through only a small angle relative to their incoming direction have $\cos\alpha \rightarrow 1$. Note the different scale on the $x$-axis in the right panel.}\label{fig:Pcosalpha}
\end{figure*}
Substituting Eqs.~\ref{eq:pv} and \ref{eq:ppm} into Eq.~\ref{eq:fD2}, we obtain:
\begin{align}\label{eq:fD3}
\begin{split}
f_D(\mathbf{v},\gamma) &= \frac{1}{2\pi}\frac{d_\mathrm{eff}(\cos\theta)}{\overline{\lambda}(v')}\frac{v'}{v^3} f_0(\mathbf{v}')\times \sum_{a=\pm} \delta(v - v'/\kappa^a)P^a(\cos\alpha) \, \mathrm{d}^3 \mathbf{v}'\,.
\end{split}
\end{align}
We can rewrite the $\delta$-function in terms of $v'$,
\begin{equation}
\delta(v - v'/\kappa^\pm) = \kappa^\pm \delta(v' - \kappa^\pm v)\,,
\end{equation}
and then perform the integral over all incoming velocities $\mathbf{v}'$:\footnote{We remind the reader that $\kappa^\pm$ depends on $\cos\alpha$, which in turn depends on the incoming and outgoing DM angles (see Eq.~\ref{eq:cosalpha}). This means that $\kappa^\pm$ must remain inside the integral over $\hat{\mathbf{v}}'$.}
\begin{equation}\label{eq:fD4a}
f_D(\mathbf{v},\gamma) = \sum_{a=\pm} \int \mathrm{d}^2\hat{\mathbf{v}}' \frac{d_\mathrm{eff}(\cos\theta)}{\overline{\lambda}(\kappa^a v)} \frac{(\kappa^a)^4}{2\pi} f_0(\kappa^a v, \hat{\mathbf{v}}') P^a(\cos\alpha)\,.
\end{equation}
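The kinematic factor $\kappa^\pm$ itself follows from standard two-body elastic kinematics. The sketch below gives the textbook lab-frame speed ratio as a function of the deflection angle $\alpha$ (it should reproduce Eq.~\ref{eq:ppm}, defined earlier in the paper); the two-branch structure for $m_\chi > m_A$ and the constraint $\cos\alpha > (1 - m_A^2/m_\chi^2)^{1/2}$ emerge automatically:

```python
import numpy as np

def kappa(m_chi, m_A, cos_alpha):
    """Speed ratio v/v' after elastic scattering through lab-frame
    deflection angle alpha (standard two-body kinematics).
    Returns (kappa_plus, kappa_minus); entries are NaN where the
    corresponding branch is kinematically forbidden.  For m_chi < m_A
    only the '+' branch survives; for m_chi > m_A both branches exist,
    but only for cos(alpha) > sqrt(1 - m_A^2/m_chi^2)."""
    cos_alpha = np.asarray(cos_alpha, dtype=float)
    disc = m_A**2 - m_chi**2 * (1.0 - cos_alpha**2)
    root = np.sqrt(np.where(disc >= 0.0, disc, np.nan))
    kp = (m_chi * cos_alpha + root) / (m_chi + m_A)
    km = (m_chi * cos_alpha - root) / (m_chi + m_A)
    return kp, np.where(km > 0.0, km, np.nan)

ca = np.linspace(-1.0, 1.0, 5)
kp_light, km_light = kappa(0.5, 52.0, ca)   # light DM on Iron
```

For $m_\chi = 0.5\,\mathrm{GeV}$ on Iron the $-$ branch is absent and $\kappa^+ \approx 1$, i.e.~light DM loses almost no energy; for equal masses the familiar result $\kappa^+ = \cos\alpha$ is recovered.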
\subsubsection{Summing over elements}
We now reintroduce the sum over the different species in the Earth, to give the final deflected velocity distribution:
\begin{equation}\label{eq:fD4}
f_D(\mathbf{v},\gamma) = \sum_{i}^{\mathrm{species}}\sum_{a=\pm} \int \mathrm{d}^2\hat{\mathbf{v}}' \,\, \frac{d_{\mathrm{eff},i}(\cos\theta)}{\overline{\lambda}_i(\kappa_i^a v)} \frac{(\kappa^a_i)^4}{2\pi} f_0(\kappa^a_i v, \hat{\mathbf{v}}') P^a_i(\cos\alpha)\,.
\end{equation}
We note that many of the terms in Eq.~\ref{eq:fD4} have now acquired an $i$ index: the kinematic term $\kappa^\pm$ depends on the target nuclear mass; the effective Earth-crossing distance $d_\mathrm{eff}$ depends on the density profile of the species; and both the mean free path $\overline{\lambda}$ and distribution of $\cos\alpha$ depend on the DM-nucleus cross section. We also emphasise that for some species, we will need to include both terms in the $a=\pm$ sum, while for others, only the $a=+$ term is required (depending on the DM mass).
\subsection{DM speed distribution}
In order to explore the impact of Earth-scattering on event rates in (non-directional) direct detection experiments, we must calculate the DM \textit{speed} distribution at the detector, given by:\footnote{Note that we use the notation $f(\mathbf{v})$ for the full 3-dimensional velocity distribution and $f(v)$ for the distribution of the modulus $v = |\mathbf{v}|$. These two definitions are related by Eq.~\ref{eq:speedpert}.}
\begin{align}\label{eq:speedpert}
\tilde{f}(v, \gamma) = v^2 \int \mathrm{d}^2 \hat{\mathbf{v}}\, \tilde{f}(\mathbf{v},\gamma)= v^2 \int \mathrm{d}^2 \hat{\mathbf{v}}\, \left(f_A(\mathbf{v},\gamma) + f_D(\mathbf{v},\gamma)\right) = f_A(v,\gamma) + f_D(v, \gamma)\,.
\end{align}
An analogous definition relates $f_0(\mathbf{v})$ and $f_0(v)$.
The \textit{attenuated} speed distribution is obtained straightforwardly by integrating over Eq.~\ref{eq:attenuated}:
\begin{align}\label{eq:speeddist_attenuation}
\begin{split}
f_A(v,\gamma) = v^2 \int_{0}^{2\pi} \mathrm{d}\phi \int_{-1}^1 \mathrm{d}\cos\theta \,\,f_0(v,\theta, \phi) \exp \left[ - \sum_{i}^\mathrm{species} \frac{d_{\mathrm{eff},i}(\cos\theta)}{\overline{\lambda}_i(v)} \right]\,.
\end{split}
\end{align}
The coordinate description for $f_0(v, \theta, \phi)$ for a given value of $\gamma$ is obtained from Eqs.~\ref{eq:SHM} and \ref{eq:gamma}. With this, it is straightforward to evaluate Eq.~\ref{eq:speeddist_attenuation} numerically for fixed $\gamma$.
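Equation~\ref{eq:speeddist_attenuation} is cheap to evaluate on a grid. The sketch below is a deliberately toy version: a single species, the chord length $d_\mathrm{eff} = \max(2R_\oplus\cos\theta,\,0)$ of a surface detector on a uniform-density sphere (replacing the tabulated Earth profiles), and an untruncated Gaussian standing in for Eqs.~\ref{eq:SHM} and \ref{eq:gamma}, with the mean DM velocity at angle $\gamma$ to the $z$-axis so that $\gamma = 0$ corresponds to maximal Earth-crossing:

```python
import numpy as np

V0, VE, R_E = 220.0, 220.0, 6371.0   # km/s, km/s, km (toy parameters)

def f0_dir(v, cth, phi, gamma):
    """Toy halo distribution f_0(v, theta, phi): Gaussian of width V0 about
    a mean velocity of magnitude VE at angle gamma to the z-axis
    (unnormalised; escape-speed truncation omitted)."""
    sth = np.sqrt(1.0 - cth**2)
    vrel2 = ((v*sth*np.cos(phi) - VE*np.sin(gamma))**2
             + (v*sth*np.sin(phi))**2
             + (v*cth - VE*np.cos(gamma))**2)
    return np.exp(-vrel2 / V0**2)

def f_A(v, gamma, inv_lambda):
    """Attenuated speed distribution (cf. Eq. eq:speeddist_attenuation) for
    one species with mean free path 1/inv_lambda [km] and chord-length
    d_eff = max(2 R cos(theta), 0); gamma = 0 is maximal Earth-crossing."""
    cth = np.linspace(-1.0, 1.0, 201)[:, None]
    phi = np.linspace(0.0, 2.0*np.pi, 101)[None, :]
    d_eff = np.clip(2.0 * R_E * cth, 0.0, None)
    integrand = f0_dir(v, cth, phi, gamma) * np.exp(-d_eff * inv_lambda)
    dc, dp = 2.0/200, 2.0*np.pi/100
    return v**2 * float(np.sum(integrand)) * dc * dp   # crude Riemann sum
```

Even this toy model reproduces the qualitative behaviour: attenuation is strongest for $\gamma = 0$ and mild for $\gamma = \pi$.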
Similarly, the \textit{deflected} speed distribution is given by
\begin{equation}\label{eq:fDfinal}
f_D(v, \gamma) = v^2 \sum_{i}^{\mathrm{species}}\sum_{a=\pm} \int \mathrm{d}^2\hat{\mathbf{v}} \int \mathrm{d}^2\hat{\mathbf{v}}' \,\,\frac{d_{\mathrm{eff},i}(\cos\theta)}{\overline{\lambda}_i(\kappa_i^a v)} \frac{(\kappa^a_i)^4}{2\pi} f_0(\kappa^a_i v, \hat{\mathbf{v}}') P^a_i(\cos\alpha)\,.
\end{equation}
The angle $\phi$ enters only through the definition of $\cos\alpha$ (Eq.~\ref{eq:cosalpha}) in the factor $\cos(\phi - \phi')$. Because we are integrating over all values of $\phi$, we can make use of the shift symmetry of the integral and eliminate $\phi'$ from the expression for $\cos\alpha$:
\begin{equation}\label{eq:cosalpha2}
\cos\alpha = \sin\theta \sin\theta' \cos\phi + \cos\theta \cos\theta'\,.
\end{equation}
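The shift symmetry is easy to confirm numerically: for any integrand that depends on $\phi$ only through $\cos(\phi - \phi')$, the integral over a full period is independent of $\phi'$, so $\phi'$ may be set to zero as in Eq.~\ref{eq:cosalpha2}. A quick check with an arbitrary smooth function of $\cos\alpha$:

```python
import numpy as np

def integral_over_phi(g, th, thp, phip, n=4096):
    """Integrate g(cos alpha) over phi in [0, 2pi), with
    cos alpha = sin(th) sin(thp) cos(phi - phip) + cos(th) cos(thp),
    using the (spectrally accurate) midpoint rule on a periodic grid."""
    phi = 2.0 * np.pi * (np.arange(n) + 0.5) / n
    ca = (np.sin(th) * np.sin(thp) * np.cos(phi - phip)
          + np.cos(th) * np.cos(thp))
    return (2.0 * np.pi / n) * float(np.sum(g(ca)))

g = lambda ca: np.exp(2.0 * ca) + ca**3      # arbitrary smooth test function
I_a = integral_over_phi(g, 1.1, 0.6, 0.0)    # phi' eliminated
I_b = integral_over_phi(g, 1.1, 0.6, 2.34)   # generic phi': same result
```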
Now, the angle $\phi'$ enters only in the definition of the velocity distribution $f(\kappa^a_i v, \theta', \phi')$. We can therefore perform the integral over $\phi'$ independently of the other angular variables. The deflected speed distribution is then given by:
\begin{equation}\label{eq:speeddist_deflection}
f_D(v, \gamma) = v^2 \sum_{i}^{\mathrm{species}}\sum_{a=\pm} \int_{-1}^{1} \mathrm{d}\cos\theta \int_{0}^{2\pi} \mathrm{d}\phi \int_{-1}^{1} \mathrm{d}\cos\theta' \,\,\frac{d_{\mathrm{eff},i}(\cos\theta)}{\overline{\lambda}_i(\kappa_i^a v)} \frac{(\kappa^a_i)^4}{2\pi} f_0(\kappa^a_i v, \theta') P^a_i(\cos\alpha)\,,
\end{equation}
where we define $f_0(v', \theta') = \int_{0}^{2\pi} f_0(v', \theta', \phi') \, \mathrm{d}\phi'$. Again, we take the coordinate expression for $f_0(v', \theta', \phi')$ from Eqs.~\ref{eq:SHM} and \ref{eq:gamma} for a given value of $\gamma$. The integral over $\phi'$ can be performed numerically and tabulated as a function of $v'$ and $\theta'$. The 3-dimensional integral in Eq.~\ref{eq:speeddist_deflection} is then evaluated by Monte Carlo integration (as described in Sec.~\ref{sec:numerics}).
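This tabulation step admits a closed-form cross-check if the escape-velocity truncation is neglected: for a pure Maxwellian (a simplification of Eqs.~\ref{eq:SHM} and \ref{eq:gamma}) with mean speed $v_e$ at angle $\gamma$ to the $z$-axis, the $\phi'$ integral of the shifted Gaussian is a modified Bessel function. A sketch, with both the closed form and the direct quadrature:

```python
import numpy as np

V0, VE = 220.0, 220.0   # km/s (illustrative halo parameters)

def f0_marg_bessel(v, thp, gamma):
    """phi'-marginal of an untruncated Maxwellian with mean speed VE at
    angle gamma to z:
    2*pi * exp(-(v^2 + VE^2 - 2 v VE cos(th') cos(g))/V0^2)
         * I_0(2 v VE sin(th') sin(g) / V0^2)."""
    a = 2.0 * v * VE * np.sin(thp) * np.sin(gamma) / V0**2
    pref = np.exp(-(v**2 + VE**2
                    - 2.0*v*VE*np.cos(thp)*np.cos(gamma)) / V0**2)
    return 2.0 * np.pi * pref * float(np.i0(a))

def f0_marg_numeric(v, thp, gamma, n=2000):
    """Direct midpoint quadrature of the same integral (the tabulation step)."""
    phip = 2.0 * np.pi * (np.arange(n) + 0.5) / n
    mu = np.sin(thp)*np.sin(gamma)*np.cos(phip) + np.cos(thp)*np.cos(gamma)
    return (2.0*np.pi/n) * float(np.sum(
        np.exp(-(v**2 + VE**2 - 2.0*v*VE*mu) / V0**2)))
```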
\subsection{Average scattering probability}
\label{sec:pscat}
With this framework, it is straightforward to calculate the average probability that DM particles will scatter at least once, assuming that they cross the Earth's surface. We average over both the velocity distribution of incoming DM particles and over the possible Earth-crossing trajectories of the incident particles. For particles entering the Earth's surface at a point $\mathbf{r}$ with a velocity $\mathbf{v}$, the probability of scattering at least once is given by
\begin{align} \label{eq:dpscat}
1- \exp\left[- \sum_{i}^\mathrm{species} \frac{d_{\mathrm{eff},i}(-\hat{\mathbf{v}}\cdot\hat{\mathbf{r}})}{\bar{\lambda}_i(v)}\right]\,.
\end{align}
The rate of particles entering the Earth's surface at position $\mathbf{r}$ (with velocity $\mathbf{v}$) is
\begin{align}\label{eq:fluxelement}
f_0(\mathbf{v}) (-\mathbf{v}\cdot\hat{\mathbf{r}}) \, \mathrm{d}^3\mathbf{v} \, \mathrm{d}^2\mathbf{r} \qquad \text{for } \mathbf{v}\cdot \hat{\mathbf{r}} < 0\,,
\end{align}
where we restrict only to particles travelling \textit{inward} through the Earth's surface. Integrating Eq.~\ref{eq:fluxelement} over the Earth's surface and over DM velocities, we obtain the total rate of particles entering the Earth (per unit local DM number density): $\pi R_\oplus^2 \langle v\rangle$. If instead we weight the integral by the scattering probability in Eq.~\ref{eq:dpscat}, we obtain the average scattering probability for particles crossing the Earth:
\begin{align}\label{eq:pscat}
p_\mathrm{scat} &= 1- \frac{2}{\langle v \rangle}\int_{0}^1 \mathrm{d}\cos\theta \int_0^{v_\mathrm{esc} +v_e} \, \mathrm{d}v \, (v \cos\theta) f_0(v) \exp\left[- \sum_{i}^\mathrm{species} \frac{d_{\mathrm{eff},i}(\cos\theta)}{\bar{\lambda}_i(v)}\right]\,,
\end{align}
where we have written $-\hat{\mathbf{v}} \cdot \hat{\mathbf{r}} = \cos\theta$.
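For a velocity-independent mean free path (as for $\oper{1}$) the speed integral in Eq.~\ref{eq:pscat} cancels against $\langle v \rangle$, and for the toy chord geometry $d_\mathrm{eff} = 2R_\oplus \cos\theta$ (uniform sphere, surface detector, in place of the tabulated Earth profiles) only the angular average survives, with a closed form to check against:

```python
import numpy as np

R_E = 6371.0   # km

def p_scat_numeric(lam_km, n=20000):
    """Eq. eq:pscat for a velocity-independent mean free path lam_km and
    d_eff = 2 R cos(theta): p = 1 - 2 * int_0^1 c exp(-2 R c / lam) dc."""
    c = (np.arange(n) + 0.5) / n         # midpoint grid on [0, 1]
    return 1.0 - (2.0 / n) * float(np.sum(c * np.exp(-2.0 * R_E * c / lam_km)))

def p_scat_closed(lam_km):
    """Closed form of the same angular integral, written stably:
    with b = 2R/lam, p = 1 - 2 [ -expm1(-b) - b e^{-b} ] / b^2."""
    b = 2.0 * R_E / lam_km
    return 1.0 - 2.0 * (-np.expm1(-b) - b * np.exp(-b)) / b**2
```

In the optically thin limit this reduces to $p_\mathrm{scat} \approx 4R_\oplus/(3\lambda)$, reflecting the flux-averaged chord length $4R_\oplus/3$ of a sphere.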
\subsection{Numerical implementation}
\label{sec:numerics}
In order to compute the perturbed velocity distributions, speed distributions and average scattering probabilities, we have written the \textsc{EarthShadow} code, which we make publicly available \cite{EarthShadow}. Version 1.0 is available as a \textsc{Mathematica} module and accompanying notebook. This is supplemented with a number of data tables, including tabulated values of $\overline{n}$ and $d_\mathrm{eff}$ (as a function of $\cos\theta$) for each of the 8 elements in Table~\ref{tab:elements}. We also include tabulated values of the differential cross section $\mathrm{d}\sigma/\mathrm{d}E_R$ for each of these elements (as well as for Xenon) for the range of NREFT operators listed in Table~\ref{tab:operators}.
In order to calculate the speed distribution at the detector, we integrate using quasi-Monte Carlo integration \cite{Morokoff1995} with a minimum of 10000 samples of the integrand. For a given set of values for $m_\chi$, $\gamma$ and $v$, evaluation of $f_A$ takes roughly $0.5\,\mathrm{s}$ on a single core, while evaluation of $f_D$ takes roughly $10\,\mathrm{s}$. For $f_D$, we increase the number of integrand evaluations as a function of $m_\chi$, once $m_\chi$ is above the Oxygen mass. This is because the constraint $ \cos\alpha > (1- m_A^2/m_\chi^2)^{1/2}$ means that the integrand is zero over a wide range of angles $\{\theta, \,\theta',\, \phi,\, \phi' \}$. As a result, we perform 150000 samples of the integrand for a DM mass of 300 GeV. By varying the number of integrand samples, we have verified that for a given value of $v$, the error on $f_D(v, \gamma)$ is $\mathcal{O}(1\%)$. Note that $f_D$ is linear in $c^2$, the square of the DM-nucleon coupling. This means that the deflected distribution does not need to be recalculated for different values of the couplings and can simply be rescaled. This is not true for the attenuated distribution (which is non-linear in $c^2$), though in that case the calculation is much faster.
\section{Effects on the DM speed distribution}
\label{sec:effects}
We now investigate the effects of Earth-scattering on the DM speed distribution for three of the operators described in Sec.~\ref{sec:formalism}. These operators are:
\begin{align}
\begin{split}
\hat{\mathcal{O}}_1 &= \mathbb{1}_{\chi N}\,,\\
\hat{\mathcal{O}}_8 &= {\bf{\hat{S}}}_{\chi}\cdot {\bf{\hat{v}}}^{\perp}\,,\\
\hat{\mathcal{O}}_{12} &= {\bf{\hat{S}}}_{\chi}\cdot \left({\bf{\hat{S}}}_{N} \times{\bf{\hat{v}}}^{\perp} \right)\,.
\end{split}
\end{align}
The operator $\oper{1}$ is the operator which mediates the familiar spin-independent (SI) interaction, while $\oper{8}$ and $\oper{12}$ are operators which are higher order in the DM-nucleon relative velocity. A systematic analysis of all of the operators in Tab.~\ref{tab:operators} is beyond the scope of this work and we focus instead on these three operators because each one leads to distinctive behaviour in the deflection of scattered DM particles.
In the left panel of Fig.~\ref{fig:Pcosalpha}, we show the distribution of the deflection angle $\alpha$ for light DM particles scattering off Iron nuclei. For $\oper{1}$, we see that the deflection of scattered particles is isotropic. In the limit $m_\chi \ll m_A$, the differential scattering cross section for $\oper{1}$ is independent of the recoil energy. For light DM particles, the recoil energy is related to the deflection angle as $E_R = m_\chi^2 v'^2 (1-\cos\alpha)/m_A$ (see Eq.~\ref{eq:ER}). The uniform distribution of recoil energies therefore translates to a uniform distribution of deflection angles. Operator $\oper{8}$ instead has a differential cross section which peaks at small recoil energies, corresponding to deflections which are distributed preferentially in the forward direction ($\cos\alpha > 0$). In contrast, $\oper{12}$ leads to a cross section which increases with $E_R$, meaning that deflection of the DM particles in the backward direction ($\cos\alpha < 0$) is preferred.
\begin{figure*}[t]
\centering
\includegraphics[width=0.32\textwidth]{{plots/Limits_O1.pdf}}
\includegraphics[width=0.32\textwidth]{plots/Limits_O8.pdf}
\includegraphics[width=0.32\textwidth]{plots/Limits_O12.pdf}
\caption{\textbf{Current limits on the operators considered in this work.} We show limits for three of the operators described in Sec.~\ref{sec:formalism}: $\oper{1}$ (\textbf{left}), $\oper{8}$ (\textbf{middle}), and $\oper{12}$ (\textbf{right}). In each case, we show limits from CRESST-II~\cite{Angloher:2015ewa} and LUX~\cite{Akerib:2016vxi} on the product of the effective DM-proton cross section $\tilde{\sigma}_j^p$ (defined in Eq.~\ref{eq:sigeff}) and the local DM density $\rho_\chi$ (normalised such that $\rho_\chi = \rho_{0.3}/ 0.3 \,\,\mathrm{GeV}\,\,\mathrm{cm}^{-3}$). Limits for $\oper{8}$ and $\oper{12}$ are calculated as in Appendix~\ref{app:eventrates}. Also plotted are contours of the scattering probability $p$, defined in Eq.~\ref{eq:pscat}.} \label{fig:limits}
\end{figure*}
In Fig.~\ref{fig:limits}, we show current limits on DM-nucleon interactions for each of these three operators. At low mass, the most stringent limits come from the CRESST-II experiment~\cite{Angloher:2015ewa}, while at high mass we consider limits from the LUX WS2014-16 run~\cite{Akerib:2016vxi} (noting that similar limits are obtained by PandaX-II \cite{Tan:2016zwf}). For operator $\oper{1}$, we present the limits on the standard DM-proton SI cross section, as reported by the collaborations. This is related to the operator coefficient $c_1^0$ as:
\begin{equation}
\sigma_1^p = \left(\frac{c_1^0}{2}\right)^2 \frac{\mu_{\chi p}^2}{\pi}\,,
\end{equation}
where $c_1^0/2$ is the coupling to protons (assuming isoscalar interactions). For operators $\oper{8}$ and $\oper{12}$, we define the effective cross sections (in analogy with the SI cross section) as:
\begin{equation}\label{eq:sigeff}
\tilde{\sigma}_j^p= \left(\frac{c_j^0}{2}\right)^2 \frac{\mu_{\chi p}^2}{\pi}\,.
\end{equation}
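These relations are simple to apply numerically. The sketch below assumes natural units, with the couplings $c_j^0$ in $\mathrm{GeV}^{-2}$, and converts to $\mathrm{cm}^2$ via $(\hbar c)^2 \approx 3.894\times 10^{-28}\,\mathrm{GeV}^{2}\,\mathrm{cm}^2$:

```python
import numpy as np

HBARC2 = 3.894e-28   # (hbar c)^2: converts a cross section in GeV^-2 to cm^2
M_P = 0.9383         # proton mass [GeV]

def sigma_eff(c0, m_chi):
    """Effective DM-proton cross section of Eq. eq:sigeff,
    sigma = (c0/2)^2 mu^2 / pi, with c0 in GeV^-2; returned in cm^2."""
    mu = m_chi * M_P / (m_chi + M_P)     # DM-proton reduced mass [GeV]
    return (0.5 * c0)**2 * mu**2 / np.pi * HBARC2

def coupling_for_sigma(sigma_cm2, m_chi):
    """Inverse relation: isoscalar coupling c0 [GeV^-2] giving sigma_cm2."""
    mu = m_chi * M_P / (m_chi + M_P)
    return 2.0 * np.sqrt(np.pi * sigma_cm2 / HBARC2) / mu
```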
Approximate limits on $\tilde{\sigma}_j^p$ are calculated for CRESST-II and LUX as described in Appendix~\ref{app:eventrates}. Note that the limits on the cross sections are degenerate with the local DM density and in all cases we keep this fixed to $\rho_\chi = 0.3 \,\, \mathrm{GeV} \,\, \mathrm{cm}^{-3}$.
Also plotted in Fig.~\ref{fig:limits} are contours of equal scattering probability, $p_\mathrm{scat} = $ 1\%, 10\% and 50\%, calculated according to Eq.~\ref{eq:pscat}. With its low threshold, CRESST-II has sensitivity to DM particles with masses around 0.5 GeV. However, current constraints are weak enough that for all three operators, DM particles with such a low mass may still have a large enough interaction cross section to give a 10\% scattering probability in the Earth. For operator $\oper{12}$, the CRESST-II constraints are even weaker, as this operator gives rise to both spin-independent and spin-dependent interactions, while the majority of target nuclei in the CRESST-II experiment have zero spin.
In light of these constraints, we will focus on light DM with a mass of $m_\chi = 0.5 \,\, \mathrm{GeV}$. However, we will also briefly examine the case of heavier DM with a mass of $m_\chi = 50 \,\, \mathrm{GeV}$. We note that the SI cross section required to give a $10\%$ scattering probability for DM of mass 50 GeV is currently excluded by LUX by more than 6 orders of magnitude. Even so, it is instructive to explore how the high mass case differs from the light DM examples, bearing in mind that such high mass candidates could evade current constraints if they are (very) sub-dominant. In all cases, we fix the DM-nucleon couplings such that the average probability of scattering for DM particles is $p_\mathrm{scat} = 10\%$, as defined in Eq.~\ref{eq:pscat}. We then calculate the perturbed speed distribution $\tilde{f}(v, \gamma)$ for a range of values of $\gamma$, which defines the average direction of incoming DM particles relative to the detector position (see Fig.~\ref{fig:gamma}). We remind the reader that we do not require that the perturbed speed distribution be correctly normalised to unity. Thus, Earth-scattering may affect not only the shape but also the overall normalisation of $\tilde{f}(v, \gamma)$.
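Fixing the couplings to a target $p_\mathrm{scat}$ amounts to one-dimensional root finding, since $p_\mathrm{scat}$ is monotonic in the cross section. A sketch with deliberately toy inputs: a single species with an assumed average nucleus density $n \approx 1.4\times 10^{23}\,\mathrm{cm}^{-3}$ (a rough stand-in for the profiles of Table~\ref{tab:elements}), a velocity-independent $\lambda = 1/(n\sigma)$, and the chord-geometry closed form for $p_\mathrm{scat}$ from Sec.~\ref{sec:pscat}:

```python
import numpy as np

R_E = 6371.0       # km
N_NUC = 1.4e23     # toy average nucleus density in the Earth [cm^-3] (assumption)

def p_scat_closed(lam_km):
    """p_scat for d_eff = 2 R cos(theta) and a v-independent mean free path."""
    b = 2.0 * R_E / lam_km
    return 1.0 - 2.0 * (-np.expm1(-b) - b * np.exp(-b)) / b**2

def p_of_sigma(sigma_cm2):
    lam_km = 1.0e-5 / (N_NUC * sigma_cm2)   # lambda = 1/(n sigma), cm -> km
    return p_scat_closed(lam_km)

def sigma_for_pscat(p_target, lo=1.0e-40, hi=1.0e-20):
    """Log-space bisection for the cross section giving p_scat = p_target
    (p_of_sigma increases monotonically with sigma)."""
    for _ in range(100):
        mid = np.sqrt(lo * hi)
        if p_of_sigma(mid) < p_target:
            lo = mid
        else:
            hi = mid
    return np.sqrt(lo * hi)

sigma_10pc = sigma_for_pscat(0.10)   # cross section for p_scat = 10%
```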
\subsection{Low Mass}
\begin{figure*}[t]
\includegraphics[width=0.32\textwidth]{{plots/SpeedDistribution_1D_op=1.pdf}}
\includegraphics[width=0.32\textwidth]{plots/SpeedDistribution_1D_op=8.pdf}
\includegraphics[width=0.32\textwidth]{plots/SpeedDistribution_1D_op=12.pdf}
\caption{\textbf{Perturbed DM speed distribution due to low-mass DM ($m_\chi = 0.5 \,\,\mathrm{GeV}$) scattering in the Earth.} The DM-nucleon couplings are normalised to give an average scattering probability of $p_\mathrm{scat} = 10\%$, as defined in Eq.~\ref{eq:pscat}. We show results for the three operators $\oper{1}$ (\textbf{left}), $\oper{8}$ (\textbf{middle}) and $\oper{12}$ (\textbf{right}). The dashed lines correspond to the free DM speed distribution without Earth-scattering $f_0(v)$, while the solid lines correspond to different average incoming DM directions (see Fig.~\ref{fig:gamma}): $\gamma = 0$ leads to maximal Earth-crossing, while $\gamma = \pi$ leads to minimal Earth-crossing before reaching the detector. Each solid line corresponds to a horizontal slice through Fig.~\ref{fig:SpeedDist-2D}.}\label{fig:SpeedDist-1D}
\end{figure*}
In Fig.~\ref{fig:SpeedDist-1D} we show the effects of Earth-scattering on the speed distribution for light DM ($m_\chi = 0.5 \,\, \mathrm{GeV}$) for three values of $\gamma$. As one might expect, for particles which must cross most of the Earth before reaching the detector ($\gamma = 0$, solid green), the predominant effect is that of attenuation, leading to a reduced DM population. For Operator $\oper{1}$ (left panel of Fig.~\ref{fig:SpeedDist-1D}), the size of this effect increases with increasing DM speed, despite the fact that the total scattering cross section for $\oper{1}$ is velocity-independent. The reason is that the SHM velocity distribution (Eq.~\ref{eq:SHM}) becomes increasingly anisotropic as we increase $v$. For large $v$, more of the DM particles are travelling parallel to the mean DM velocity $\langle \mathbf{v}_\chi \rangle$, meaning that the average Earth-crossing distance for particles to reach the detector increases. For Operators $\oper{8}$ and $\oper{12}$ (centre and right panels of Fig.~\ref{fig:SpeedDist-1D}), attenuation also increases as a function of $v$, though in this case the predominant cause is that the total cross section increases with the DM speed: $\sigma_{8, 12} \propto v^2$.
As we increase $\gamma$, the typical DM particle must travel through less of the Earth before reaching the detector. For $\gamma = \pi/2$ (solid blue line), particles travelling along $\hat{\mathbf{v}} = \langle \hat{\mathbf{v}}_\chi \rangle$ must cross a negligibly small depth of the Earth before reaching the detector (typically on the order of a few kilometres). However, the distribution of DM velocities about the average means that some particles will still be travelling an appreciable distance through the Earth and therefore attenuation still has an effect. Even so, for all three operators in Fig.~\ref{fig:SpeedDist-1D} the effects of deflection towards the detector are more significant, leading to an \textit{increase} in the DM population for $\gamma = \pi/2$. For Operator $\oper{1}$, this increase is roughly a $2\%$ effect, increasing to $\sim 4\%$ for $\gamma = \pi$. In this latter case, the average DM particle arrives at the detector having only passed a small distance through the Earth (equal to $l_D$, the underground depth of the detector), meaning that the effects of attenuation are minimal. We note that at the highest DM speeds ($v = v_e + v_\mathrm{esc} \approx 753 \textrm{ km s}^{-1}$) the enhancement due to deflection reduces to zero. This is because the speed of DM particles is reduced on scattering. Particles at $v = v_e + v_\mathrm{esc}$ must scatter down to smaller speeds, while there are no DM particles with larger speeds (i.e.~above the escape velocity) which can scatter down to $v = v_e + v_\mathrm{esc}$ and give an enhancement.
For Operator $\oper{8}$, the enhancement in the DM speed distribution for $\gamma = \pi/2$ is greater than for $\gamma = \pi$. As we saw in Fig.~\ref{fig:Pcosalpha}, $\oper{8}$ leads to DM deflection preferentially in the forward direction. For $\gamma = \pi$, this means that particles are unlikely to scatter back once they have already passed the detector. For $\gamma = \pi/2$, the effects of attenuation are relatively small, but there is a much greater probability of particles scattering towards the detector. For Operator $\oper{12}$, instead, the population of DM particles is once again increasing as a function of $\gamma$. Operator $\oper{12}$ favours backward scattering, meaning that the contribution to $\tilde{f}(v, \gamma)$ due to deflection is small for $\gamma = 0$ but maximal for $\gamma = \pi$. As attenuation becomes less important, deflection becomes more important, meaning that $\tilde{f}(v, \gamma)$ varies more rapidly as a function of $\gamma$.
\begin{figure}[h!]
\centering
\includegraphics[width=0.495\textwidth]{plots/SpeedDistribution_op=1.pdf}
\includegraphics[width=0.495\textwidth]{plots/SpeedDistribution_op=1_highmass.pdf}
\includegraphics[width=0.495\textwidth]{plots/SpeedDistribution_op=8.pdf}
\includegraphics[width=0.495\textwidth]{plots/SpeedDistribution_op=8_highmass.pdf}
\includegraphics[width=0.495\textwidth]{plots/SpeedDistribution_op=12.pdf}
\includegraphics[width=0.495\textwidth]{plots/SpeedDistribution_op=12_highmass.pdf}
\caption{\textbf{Percentage change in the speed distribution due to Dark Matter scattering in the Earth.} Results for the standard SI interaction (Operator $\oper{1}$) are shown in the top row, while results for two other DM-nucleon operators (see Sec.~\ref{sec:formalism}) are shown in the middle row (Operator $\oper{8}$) and bottom row (Operator $\oper{12}$). Results are shown for two DM masses: 0.5 GeV (\textbf{left column}) and 50 GeV (\textbf{right column}). In all cases, the DM-nucleon couplings are normalised to give an average scattering probability of $p_\mathrm{scat} = 10\%$, as defined in Eq.~\ref{eq:pscat}. In each panel, the $x$-axis shows the DM particle speed $v$ while the $y$-axis shows the angle $\gamma$ between the average incoming DM velocity $\langle \hat{\mathbf{v}}_\chi \rangle$ and the detector position (see Fig.~\ref{fig:gamma}). The angle $\gamma = 0$ corresponds to maximal Earth-crossing before reaching the detector, while $\gamma = \pi$ corresponds to minimal Earth-crossing. The dashed contour corresponds to no change in the DM speed distribution. }\label{fig:SpeedDist-2D}
\end{figure}
We now extend the discussion to consider a wider range of values of $\gamma$. Figure~\ref{fig:SpeedDist-2D} shows the percentage difference between the free and perturbed speed distributions as a function of both $v$ and $\gamma$. In each plot, we show contours of $\pm 1\%$, $\pm 5\%$, $\pm 10\%$, $\pm 25\%$ and $\pm 50\%$ change in the speed distribution. The dashed contours correspond to zero change in the speed distribution: $\tilde{f}(v ,\gamma) = f_0(v)$.
In the left column of Fig.~\ref{fig:SpeedDist-2D}, we note immediately that (for a given operator) the effects of depletion appear to be much larger than the effects of enhancement. For operator $\oper{1}$, compare the maximum enhancement of $\mathcal{O}(3\%)$ with the maximum depletion of $\mathcal{O}(25\%)$. The attenuation of particles is relevant only when the DM particles must cross a large fraction of the Earth before reaching the detector (i.e.~for small $\gamma$). These scattered particles will of course lead to an enhanced flux at other points on the surface of the Earth. For operator $\oper{1}$ the deflection of low mass DM particles is isotropic, meaning that this enhancement due to deflection is distributed almost uniformly across the Earth's surface. Particles which are significantly depleted from a small region of the surface with $\gamma \rightarrow 0$ are redistributed over the entire Earth, giving a small enhancement at any given point.
In fact, to first order in $R_\oplus/\lambda$, the effects of attenuation and deflection should be equal. This means that the total rate $\Gamma_\mathrm{out}$ of DM particles passing outward through the surface of the Earth (taking into account both of these contributions) should be the same as the total inward flux $\Gamma_\textrm{in}$. We could in principle check this by integrating the perturbed velocity distribution over all positions $\mathbf{r}$ on the Earth's surface:\footnote{Note that here we have written the perturbed distribution in terms of the position vector $\mathbf{r}$ on the surface of the Earth rather than explicitly in terms of the angle $\gamma$.}
\begin{align}
\begin{split}
\Gamma_\mathrm{in} &= \int_{\mathbf{v}\cdot \hat{\mathbf{r}} < 0} \mathrm{d}^2\mathbf{r} \int \mathrm{d}^3\mathbf{v} \,f_0(\mathbf{v})\, (-\mathbf{v}\cdot \hat{\mathbf{r}})\,,\\
\Gamma_\mathrm{out} &= \int_{\mathbf{v}\cdot \hat{\mathbf{r}} > 0} \mathrm{d}^2\mathbf{r} \int \mathrm{d}^3\mathbf{v} \,\tilde{f}(\mathbf{v}, \mathbf{r}) \,(\mathbf{v}\cdot \hat{\mathbf{r}})\,.
\end{split}
\end{align}
In this work, we have calculated the perturbed speed distribution $\tilde{f}(v, \mathbf{r})$ (defined in Eq.~\ref{eq:speedpert}) rather than the perturbed velocity distribution $\tilde{f}(\mathbf{v}, \mathbf{r})$. We can however re-express $\Gamma_\mathrm{out}$ in terms of the \textit{speed} distribution:
\begin{align}
\begin{split}
\Gamma_\mathrm{out} &= \int_{\mathbf{v}\cdot \hat{\mathbf{r}} > 0} \mathrm{d}^2\mathbf{r} \int \mathrm{d}^3\mathbf{v} \,\tilde{f}(\mathbf{v}, \mathbf{r}) \,(\mathbf{v}\cdot \hat{\mathbf{r}}) +\int_{\mathbf{v}\cdot \hat{\mathbf{r}} < 0} \mathrm{d}^2\mathbf{r} \int \mathrm{d}^3\mathbf{v} \,\tilde{f}(\mathbf{v}, \mathbf{r})\, (-\mathbf{v}\cdot \hat{\mathbf{r}})\\
&\qquad- \int_{\mathbf{v}\cdot \hat{\mathbf{r}} < 0} \mathrm{d}^2\mathbf{r} \int \mathrm{d}^3\mathbf{v} \,\tilde{f}(\mathbf{v}, \mathbf{r})\, (-\mathbf{v}\cdot \hat{\mathbf{r}})\\
&= \int \mathrm{d}^2\mathbf{r} \int \mathrm{d}^3\mathbf{v} \,\tilde{f}(\mathbf{v}, \mathbf{r}) \,|\mathbf{v}\cdot \hat{\mathbf{r}}| - \int_{\mathbf{v}\cdot \hat{\mathbf{r}} < 0} \mathrm{d}^2\mathbf{r} \int \mathrm{d}^3\mathbf{v} \,\tilde{f}(\mathbf{v}, \mathbf{r})\, (-\mathbf{v}\cdot \hat{\mathbf{r}})\\
&= \left[ 2\pi R_\oplus^2 \int_{-1}^{1} \mathrm{d}\cos\gamma \int_{0}^{v_e + v_\mathrm{esc}} \mathrm{d}v \, \tilde{f}(v, \gamma) \,v \,|\cos\gamma|\right] - \Gamma_\mathrm{in}\,.
\end{split}
\end{align}
Here, we have made use of the fact that scattering in the Earth can only affect the distribution of particles with velocities pointing away from the Earth (so $\tilde{f}(\mathbf{v}, \mathbf{r}) = f_0(\mathbf{v})$ for $\mathbf{v}\cdot \mathbf{r} < 0$). As in Sec.~\ref{sec:pscat}, the rate of DM particles flowing inward through the Earth's surface can be calculated straightforwardly from the equation for $f_0(\mathbf{v})$ (or from geometric arguments), giving $\Gamma_\mathrm{in} = \pi R_\oplus^2 \langle v \rangle$. Thus, we can calculate $\Gamma_\mathrm{out}$ from the distributions presented in this section and compare with $\Gamma_\mathrm{in}$. The results of this comparison are shown in Tab.~\ref{tab:fluxes}. In the case of operator $\oper{1}$, the depletion due to attenuation and the enhancement due to deflection cancel to within about $10\%$, meaning that the total DM particle flux through the Earth is conserved to better than $1\%$.\footnote{We work to first order in $R_\oplus/\lambda$, so we would expect errors of the order $p_\mathrm{scat}^2$ (or $1\%$ for $p_\mathrm{scat} = 10\%$).}
\begin{table}[t!]\centering
\begin{tabular}{@{}lllll@{}}
\toprule
\toprule
DM mass [GeV] & Operator & $\Delta \Gamma_\mathrm{out}^\mathrm{Atten.}/\Gamma_\mathrm{in}$ & $\Delta \Gamma_\mathrm{out}^\mathrm{Defl.}/\Gamma_\mathrm{in}$ & $\Gamma_\mathrm{out}/\Gamma_\mathrm{in}$ \\ \hline
0.5 & $\oper{1}$ & $-7.8\%$ & $+7.0\%$ & $99.2\%$ \\
0.5 & $\oper{8}$ & $-8.0\%$ & $+7.3\%$ & $99.2\%$ \\
0.5 & $\oper{12}$ & $-7.8\%$ & $+7.2\%$ & $99.4\%$ \\
50 & $\oper{1}$ & $-7.5\%$ & $+7.3\%$ & $99.9\%$ \\
50 & $\oper{8}$ & $-8.0\%$ & $+8.4\%$ & $100.4\%$ \\
50 & $\oper{12}$ & $-7.3\%$ & $+6.6\%$ & $99.3\%$ \\
\bottomrule
\bottomrule
\end{tabular}
\caption{\textbf{Rate of DM particles passing inward ($\Gamma_\mathrm{in}$) and outward ($\Gamma_\mathrm{out} $) through the Earth's surface once scattering has been accounted for.} In all cases, we normalise the average scattering probability to 10\%. We show the percentage change in the outward flow rate $\Delta\Gamma_\mathrm{out}$ due to attenuation (`Atten.') and deflection (`Defl.') separately. In the rightmost column, we give the total outward flow rate including both effects. Conservation of particle number implies that $\Gamma_\mathrm{out} = \Gamma_\mathrm{in}$.}
\label{tab:fluxes}
\end{table}
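The benchmark $\Gamma_\mathrm{in} = \pi R_\oplus^2 \langle v \rangle$ is itself a one-line computation once $\langle v \rangle$ is in hand. A sketch with illustrative SHM parameters (escape-speed truncation neglected for simplicity; the result is a rate per unit local DM number density):

```python
import numpy as np

V0, VE, VESC = 220.0, 220.0, 533.0   # km/s (illustrative SHM parameters)
R_E = 6.371e3                        # km

def trapz1(y, x):
    """1-D trapezoid rule (avoids NumPy-version differences)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def f0_speed(v):
    """SHM speed distribution in the Earth frame (unnormalised;
    escape-speed truncation neglected for simplicity)."""
    return v * (np.exp(-(v - VE)**2 / V0**2) - np.exp(-(v + VE)**2 / V0**2))

v = np.linspace(0.0, VESC + VE, 4000)
f = f0_speed(v)
f /= trapz1(f, v)                          # normalise to unit integral
v_mean = trapz1(v * f, v)                  # <v> [km/s]
Gamma_in = np.pi * R_E**2 * v_mean         # [km^3/s] per unit DM number density
```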
We now turn to operator $\oper{8}$ (middle left panel of Fig.~\ref{fig:SpeedDist-2D}). As we have already discussed, operator $\oper{8}$ leads to deflection predominantly in the forward direction. This means that the enhancement due to deflected particles is maximal around $\gamma = 0$, though this is not observable due to the strong attenuation. At $\gamma = \pi$, the attenuation effect is negligible, but (unlike for $\oper{1}$) very few particles are scattered back, leading to only a small enhancement due to deflection. The largest enhancement therefore occurs between these two extrema, around $\gamma = \pi/2$. This enhancement is maximised at a speed of $v \sim 700 \textrm{ km s}^{-1}$. For operator $\oper{8}$, the scattering cross section increases with $v$, meaning that particles just below the escape velocity are most likely to scatter. These particles lose a small amount of energy when they are deflected, leading to a peak around $v \sim 700 \textrm{ km s}^{-1}$. In this case too, the DM particle flux is well conserved (see Tab.~\ref{tab:fluxes}).
Finally, for operator $\oper{12}$, the enhancement in the DM speed distribution peaks sharply towards $\gamma = \pi$ with a strong attenuation effect for $\gamma = 0$. Particles are deflected preferentially in the backwards direction, meaning that the enhancement due to deflection provides little replenishment of the DM flux for $\gamma = 0$. The calculation of the attenuation effects is exact, but calculation of the deflection relies on the first-order, `single scatter' approximation we have employed. With attenuation effects of $\mathcal{O}(50\%)$, we may be concerned that this approximation will break down. However, such large scattering probabilities occur only over a small range of angles (say, $\gamma < \pi/4$). In fact, for a ring on the Earth's surface defined by a given value of $\gamma$, the surface area of that ring scales as $\sin\gamma$. This means that the DM wind sees only a small area of the Earth's surface with a small value of $\gamma$ or, physically, that only a small fraction of particles will cross the full diameter of the Earth. Most particles will traverse the Earth on trajectories with much smaller crossing distances, for which the linear approximation holds. The success of the linear approximation can be verified explicitly by calculating the DM number density at the surface after scattering and, as before, the attenuation and deflection effects cancel to a high degree ($\Gamma_\mathrm{out}/\Gamma_\mathrm{in} = 99.4\%$).
\subsection{High mass}
We now explore the effects of Earth-scattering for higher mass DM particles. In the right column of Fig.~\ref{fig:SpeedDist-2D}, we show the percentage change in the speed distribution for DM with mass $m_\chi = 50 \, \, \mathrm{GeV}$, interacting through operators $\oper{1}$, $\oper{8}$ and $\oper{12}$.
As for light DM, the greatest effects of attenuation are experienced by the fastest DM particles when they must cross the maximal distance through the Earth to reach the detector ($\gamma = 0$), with negligible attenuation for large values of $\gamma$. However, in contrast to the light DM case, there is no enhancement in the speed distribution for speeds above $\sim300 \textrm{ km s}^{-1}$. At 50 GeV, the DM particles have a mass within a factor of a few of the nuclei in the Earth off which they scatter. This means that the transfer of kinetic energy is significantly more efficient. Particles with high speeds which scatter in the Earth are therefore down-scattered to much smaller speeds, leading to a substantial enhancement for $v < 100 \textrm{ km s}^{-1}$.
As shown in Fig.~\ref{fig:Pcosalpha}, the deflection of particles is increasingly focused in the forward direction as the DM mass is increased. At a given value of $v$, this leads to a slight increase in the enhancement for $\gamma = 0$ compared with $\gamma = \pi$. However, from kinematics, the more energy a DM particle loses in a collision, the larger the deflection angle must be. Forward deflection (which leads to an enhancement at $\gamma = 0$) is more likely than backward deflection (enhancement at $\gamma = \pi$), but backward deflection leads to a larger energy loss of the DM particles. At very low speeds, these two effects balance and for $\oper{1}$ (top right panel) and $\oper{12}$ (bottom right panel) there is a large enhancement for all values of $\gamma$.
For operator $\oper{8}$ (middle right panel), there is instead little enhancement for large $\gamma$ and for small $v$. As can be seen from the middle panel of Fig.~\ref{fig:Pcosalpha}, the deflection of DM particles for this operator is even more strongly focused in the forward direction, with a negligible probability of backward deflection. This also means that the probability of losing a large amount of energy is small which, coupled with the increasing cross section as a function of $v$, leads to a negligible enhancement at low $v$.
We note that even though there is substantial enhancement in the DM speed distribution at low speeds for $\oper{1}$ and $\oper{12}$, this does not spoil the validity of the single-scatter approximation. These low speed particles give little contribution to the flux of DM leaving the Earth. As such, the total number of DM particles is well preserved (Tab.~\ref{tab:fluxes}): as for low mass DM, the rate of DM particles leaving the Earth matches the rate of particles entering to an accuracy of better than $1\%$.
We briefly comment on the effects of Earth-scattering for ultra-heavy DM. For DM particles much heavier than the nuclei in the Earth ($m_\chi \gg m_A$), the scattering kinematics requires that $\cos\alpha \rightarrow 1$ (see Eq.~\ref{eq:cosalpharange} and the right panel of Fig.~\ref{fig:Pcosalpha}) and therefore that $\kappa \rightarrow 1$ (i.e.~the DM particles lose no energy through scattering). Applying these limits in Eq.~\ref{eq:speeddist_deflection}, we see that the deflected DM population is always equal to the population of particles lost to attenuation (to first order in $R_\oplus/\lambda$). Physically, ultra-heavy DM particles are not deflected by scattering in the Earth and so they continue to the detector unaffected. We therefore expect the effects of Earth-scattering to diminish as the DM mass tends to infinity, though this conclusion could change in the presence of multiple scatters or long-range interactions.
\section{Modulation signatures}
\label{sec:modulation}
We now explore how the effects of Earth-scattering on the DM speed distribution translate into effects on the direct detection event rate. The effect will vary not only as a function of the detector latitude, but also over the course of a day, as the detector moves due to the Earth's rotation. This will give rise to interesting modulation signatures.
We first consider the calculation of the DM-induced direct detection event rate as a function of $\gamma$. For low-mass DM, we will examine the event rate in a mock detector inspired by CRESST-II \cite{Angloher:2015ewa}. Calculation of the CRESST-II event rate is detailed in Appendix~\ref{app:CRESST-II} (including the contribution from DM scattering on both Oxygen and Calcium nuclei in the detector). For simplicity, we will compare the total number of expected signal events with and without the effects of Earth-scattering. Note that factors such as the local DM density and the detector exposure simply affect the overall normalisation of the rate and will cancel in the ratio.
In Fig.~\ref{fig:RvGamma}, we show the ratio of the number of events expected with Earth-scattering $N_\mathrm{pert}$ to the number expected without Earth-scattering $N_\mathrm{free}$ in this CRESST-II-like detector for the three operators discussed in Sec.~\ref{sec:speeddist}. Solid lines include the effects of both attenuation and deflection while dashed lines show the results when only attenuation is accounted for. For operator $\oper{1}$, we see that deflection leads to an enhancement of a few percent in the event rate, relative to the case of attenuation-only. The size of this effect is independent of $\gamma$, which is consistent with the isotropic deflection of low mass DM for $\oper{1}$. Including both effects, there is a 15\% variation in the total direct detection event rate over the full range of $\gamma$.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.45\textwidth]{plots/RvGamma_lowmass.pdf}
\includegraphics[width=0.45\textwidth]{plots/RvGamma_highmass.pdf}
\caption{\textbf{Ratio of the number of direct detection signal events with Earth-scattering $N_\mathrm{pert}$ and without Earth-scattering $N_\mathrm{free}$.} In the left panel, we show results for $m_\chi = 0.5 \, \, \mathrm{GeV}$ (in a CRESST-II-like detector \cite{Angloher:2015ewa}) and in the right panel for $m_\chi = 50 \, \, \mathrm{GeV}$ (in a LUX-like detector \cite{Akerib:2016vxi}). In all cases, we normalise the DM-nucleon couplings to give an average scattering probability in the Earth of 10\%. Dashed lines show the results when only the effects of attenuation are included in the calculation. The angle $\gamma = 0$ ($\gamma = \pi$) corresponds to maximal (minimal) Earth-crossing before reaching the detector (see Fig.~\ref{fig:gamma}). }\label{fig:RvGamma}
\end{figure*}
For operator $\oper{12}$, the size of the effect is much larger, with $N_\mathrm{pert}/N_\mathrm{free}$ ranging from 0.55 to 1.12 depending on the value of $\gamma$. For light DM, only the particles with the largest speeds have enough kinetic energy to contribute to the recoil rate. As can be seen from the bottom right panel of Fig.~\ref{fig:SpeedDist-2D}, the effect on the speed distribution is greatest at high speeds, where the cross section is enhanced ($\sigma_{12} \sim v^2$). We also note that the effect of deflection on the direct detection rate is very small for $\gamma = 0$, but gives a more than $10\%$ enhancement to the rate at $\gamma = \pi$. Once again, this is consistent with the observations in the previous section: $\oper{12}$ favours backward deflection, giving a large contribution when the DM particles transit the Earth after having passed the detector.
Finally, for $\oper{8}$ the enhancement due to deflection is maximised at $\gamma = 0$ due to preferentially forward scattering. As we increase $\gamma$, both attenuation and deflection effects reduce in size. However, the effects of deflection decrease more slowly, resulting in a peak in the event rate around $\gamma = \pi/2$. Depending on the range of $\gamma$ probed by a given detector, this may result in a phase shift in the daily modulation of the DM signal for $\oper{8}$ relative to operators $\oper{1}$ and $\oper{12}$.
We comment briefly on the modulation for the higher mass 50 GeV particle (right panel of Fig.~\ref{fig:RvGamma}). In this case, the only observable effect is a reduction in the signal rate for small $\gamma$. The reason for this can be seen in the right column of Fig.~\ref{fig:SpeedDist-2D}; the only substantial enhancement in the velocity distribution is at low speeds, which fall below the 1.1 keV threshold of the LUX experiment. We note that in this case, the main consequence of including deflected DM particles is that they partially replenish those particles lost to attenuation, reducing the size of the modulation effect.
\subsection{Daily modulation}
We now consider how this modulation as a function of $\gamma$ translates into a modulation as a function of time. To do this, we need to calculate $\gamma$ for a given time and detector latitude. First, we define a coordinate system in which the positive $z$-direction points along the Earth's North pole. The position of a detector at latitude $\theta_l$ is then\footnote{We assume by convention that latitudes in the Northern hemisphere are positive, while latitudes in the Southern hemisphere are negative.}
\begin{equation}
\hat{\mathbf{r}}_\mathrm{det} = \left(\cos\theta_l \cos\omega t,\, \cos\theta_l \sin \omega t, \,\sin\theta_l\right)\,,
\end{equation}
where $\omega = 2\pi/\mathrm{day}$ is the angular velocity of the Earth's rotation. Here, we define $t = 0$ as the time at which the detector position is maximally aligned with the Earth's velocity. In this coordinate system, the Earth's velocity with respect to the Galactic rest-frame can be written
\begin{equation}
\hat{\mathbf{v}}_e = \left(\sin\alpha , 0, \cos\alpha\right)\,,
\end{equation}
where the angle $\alpha$ varies between $36.3^\circ$ and $49.3^\circ$ over the course of a year \cite{Kouvaris:2014lpa}. For concreteness, in this work, we fix the angle $\alpha$ to $42.8^\circ$. The average DM velocity is simply $\langle \mathbf{v}_\chi \rangle = -\mathbf{v}_e$, so we obtain:
\begin{align}
\begin{split}
\gamma &= \cos^{-1} (\langle \hat{\mathbf{v}}_\chi \rangle \cdot \hat{\mathbf{r}}_\mathrm{det})\\
&=\cos^{-1}\left(-\cos\theta_l \cos\omega t \sin \alpha - \sin\theta_l\cos\alpha\right)\,.
\end{split}
\end{align}
With this expression, we can directly map the results of Fig.~\ref{fig:RvGamma} onto the rate as a function of time (for a given detector latitude).
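To make the mapping concrete, the following minimal Python sketch (our own illustrative code, not part of the \textsc{EarthShadow} package; function and variable names are ours) evaluates $\gamma$ as a function of time and detector latitude:

```python
import numpy as np

def gamma_of_t(t_hours, lat_deg, alpha_deg=42.8):
    """Angle gamma between the mean DM velocity and the detector
    position, as a function of time in hours. t = 0 is the time of
    maximal alignment of the detector with the Earth's velocity;
    Northern latitudes are positive by convention."""
    theta_l = np.radians(lat_deg)
    alpha = np.radians(alpha_deg)
    omega_t = 2.0 * np.pi * np.asarray(t_hours) / 24.0
    cos_gamma = (-np.cos(theta_l) * np.cos(omega_t) * np.sin(alpha)
                 - np.sin(theta_l) * np.cos(alpha))
    # clip guards against round-off just outside [-1, 1]
    return np.arccos(np.clip(cos_gamma, -1.0, 1.0))

# gamma over one day at the LNGS latitude (42.5 N)
gamma_lngs = gamma_of_t(np.linspace(0.0, 24.0, 97), 42.5)
```

For LNGS ($\theta_l \approx \alpha$) this confirms that $\gamma$ never drops below $\approx \pi/2$ over the day, while for a SUPL-like latitude ($37.1^\circ\,$S) it returns $\gamma \approx 0$ at $t = 12\,$hr, i.e.~near-maximal Earth-crossing.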
\begin{figure*}[t!]
\centering
\includegraphics[trim={0 0 2.1cm 0} ,clip ,width=0.46\textwidth]{plots/EarthMap_O1_t=0.pdf}
\includegraphics[width=0.522\textwidth]{plots/EarthMap_O1_t=12.pdf}
\caption{\textbf{Earth-scattering effects over the surface of the Earth.} Relative enhancement in the event rate in a CRESST-II-like detector \cite{Angloher:2015ewa} due to the effects of Earth scattering. We assume a DM mass of $m_\chi = 0.5 \,\,\mathrm{GeV}$, interacting through the standard SI operator $\oper{1}$ (with normalisation fixed such that $p_\mathrm{scat} = 10\%$). The black cross shows the point on the Earth at which the average DM particle would appear to be coming from directly overhead. There is a 12 hour time difference between the left and right panels. Animations available online at \href{https://github.com/bradkav/EarthShadow/tree/master/videos}{github.com/bradkav/EarthShadow}.} \label{fig:EarthMap}
\end{figure*}
In Fig.~\ref{fig:EarthMap}, we show the ratio of the rate with Earth-scattering to the rate without Earth-scattering over the surface of the Earth, for 0.5 GeV DM particles interacting through the operator $\oper{1}$. The black cross (at a latitude of $\sim 42.8^\circ \,\mathrm{N}$) shows the point on the Earth at which DM particles appear to be coming from directly overhead. As expected, the maximum reduction in the expected event rate (dark blue) occurs on the opposite side of the Earth; particles emerging from this dark blue region have crossed almost the entire diameter of the Earth. We notice also that the red region of the maps is much larger than the blue region.\footnote{We use an equal-area Mollweide projection to produce the maps in Fig.~\ref{fig:EarthMap}, so such a comparison between areas is reasonable.} As discussed in Sec.~\ref{sec:effects}, the effects of attenuation are large but focused only over a small area of the Earth. The two panels of Fig.~\ref{fig:EarthMap} compare the effects of Earth-scattering at 12 hour intervals. As the Earth rotates, the apparent source of the DM wind travels across the sky and the pattern of Earth-scattering effects rotates across the surface of the Earth. Animations showing the modulation signal over the entire surface of the Earth, as well as at a selection of underground laboratories, can be found online accompanying the \textsc{EarthShadow} code \cite{EarthShadow}.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.45\textwidth]{plots/Rvt_LNGS.pdf}
\includegraphics[width=0.45\textwidth]{plots/Rvt_CJPL.pdf}
\includegraphics[width=0.45\textwidth]{plots/Rvt_INO.pdf}
\includegraphics[width=0.45\textwidth]{plots/Rvt_SUPL.pdf}
\caption{\textbf{Daily modulation in the event rate.} Ratio of numbers of events in a CRESST-II-like detector \cite{Angloher:2015ewa} with and without the effects of Earth scattering for a DM particle of mass $m_\chi = 0.5 \,\,\mathrm{GeV}$. We fix the average scattering probability at $p_\mathrm{scat} = 10\%$. Dashed lines show the modulation when only attenuation is included, while solid lines show the effect when both attenuation and deflection are included. Each panel shows results for a lab in a different location (latitude given in parentheses).}\label{fig:Rvt}
\end{figure*}
In Fig.~\ref{fig:Rvt}, we show the ratio of the rate with and without Earth-scattering for detectors in four locations on the Earth. For the \href{https://www.lngs.infn.it/en}{LNGS lab} in Italy (top left, $\theta_l = 42.5^\circ \, \mathrm{N}$), Earth-scattering always leads to a net increase in the DM signal. For $\oper{1}$, however, there is no appreciable modulation, only a few percent increase in the total rate, regardless of the time. As is clear from Fig.~\ref{fig:EarthMap}, the enhancement due to Earth-scattering is roughly constant at high latitudes. At LNGS, we have $\theta_l \approx \alpha$, meaning that $\gamma$ is always relatively large. In other words, the DM wind appears to be coming from directly above for most of the day, leading to minimal attenuation. The main contribution is then a constant enhancement due to isotropic deflection of the DM particles over the Earth's surface.
Instead, a modulation of a few percent is observed for $\oper{8}$. At $t \sim 0 \,\,\mathrm{hr}$, the DM wind comes from directly overhead ($\gamma = \pi$) and the enhancement is at a minimum due to the limited deflection of DM particles back towards the detector. For $\oper{12}$, the phase of the modulation is reversed; the predominantly backwards deflection enhances the rate when DM particles come from directly overhead.
As we move towards the Equator, the incoming direction of the DM wind varies more rapidly over the course of a day. In addition, the size of the attenuation effect becomes larger, as the underground distance travelled by DM particles around $t = 12 \, \, \mathrm{hr}$ grows. For the CJPL lab \cite{Li:2014rca} in China (top right, $\theta_l = 28.2^\circ \,\mathrm{N}$), we see a more pronounced modulation than at the more northerly LNGS lab. At the India-based Neutrino Observatory \cite{Indumathi:2015hfa}, currently under construction (bottom left, $\theta_l = 9.7^\circ \, \mathrm{N}$), this modulation is more pronounced still. Most notable in this case is that for $\oper{8}$, the diurnal modulation is no longer purely sinusoidal. For this operator, DM particles predominantly scatter forwards, so we would expect a maximum enhancement due to deflection when the Earth-crossing distance is maximal ($t = 12 \, \, \mathrm{hr}$). However, at this time, the attenuation effect is also maximal and dominates the deflection effect. We therefore observe a minimum in the rate at $t = 12 \,\, \mathrm{hr}$ as well as at $t = 0 \, \, \mathrm{hr}$.
In the Southern hemisphere, the SUPL lab \cite{Urquijo:2016dxd} in Australia (bottom right, $\theta_l = 37.1^\circ \, \mathrm{S}$) is located such that at certain times of day DM particles must cross most of the Earth before reaching the detector ($\theta_l \approx - \alpha$). It therefore probes the low $\gamma$ regime, with the dominant effect being attenuation of the DM flux. The daily modulation observed here has the largest amplitude, in the range $10-30\%$ depending on the interaction. Attenuation dominates only over a small area of the Earth (blue regions in Fig.~\ref{fig:EarthMap}), but if the detector falls within this region, the effects can be substantial. While attenuation dominates, we note that the inclusion of the deflected DM population is not negligible in this case, as it counteracts this attenuation effect and in some cases even leads to a net enhancement.
\section{Discussion}
\label{sec:discussion}
We have demonstrated the effects of Earth-scattering for low mass DM on the expected event rate at direct detection experiments. Contrary to the common lore, `Earth-shadowing' in fact \textit{increases} the direct detection rate over much of the Earth's surface, as illustrated in Fig.~\ref{fig:EarthMap}. This has a large impact on the expected modulation signatures for detectors in different locations, as shown in Fig.~\ref{fig:Rvt}. In the Northern hemisphere (such as at LNGS), there is a net increase in the direct detection rate, but a relatively small modulation over the course of a day. Instead, labs in the Southern Hemisphere (such as SUPL) experience the largest modulation, due to strong attenuation effects. We have also demonstrated that the properties of this modulation depend sensitively on the interaction of DM with nuclei. Different interactions generally lead to the deflection of scattered DM particles in different directions, which can alter both the amplitude and phase of the modulation. Laboratories closer to the Equator (such as the India-based Neutrino Observatory) typically have substantial contributions from both deflection and attenuation over the course of a day, leading to more complicated time variation of the signal.
The observation of such a modulation would be an indication of DM with relatively strong interactions with nucleons. The location-dependence of the modulation would then act as a useful verification of the DM origin of the signal. In addition, if the detailed time variation of the signal could be characterised, this may help us to distinguish between different DM-nucleon interactions. We note also that the size of the modulation depends only on the DM interaction cross section and not on the local DM density. Thus, the observation of a modulation due to Earth-scattering would allow us to determine both quantities separately.
We have focused here on a benchmark point just below the current CRESST-II limits, giving a scattering probability of 10\% in the Earth. This parameter space will soon be explored by CRESST-III \cite{Strauss:2016sxp} as well as other low-threshold experiments such as SuperCDMS SNOLAB \cite{Agnese:2016cpb} and DAMIC \cite{Aguilar-Arevalo:2016ndq}. We encourage experimental collaborations to set limits not only on the standard spin-independent interactions but also on the more general operators of the NREFT. As we showed for operator $\oper{12}$, the limits may be weaker than in the SI case, opening a wider parameter space for interesting Earth-scattering effects.
Of course, if a DM signal is observed by upcoming experiments, the number of events required to detect and characterise a modulation signature will depend on the size of the effect. From a theoretical perspective, it will be necessary to compare the Earth-scattering effect with diurnal modulation from other sources, such as gravitational focusing \cite{Kouvaris:2015xga} or time-variation of the detector velocity due to the Earth's rotation \cite{Civitarese:2016uuc}. Experimentally, it would be necessary to account for the various detector responses and their possible time variation (although in contrast to annual modulation, diurnal modulations set much less stringent requirements on detector stability). Exploring the prospects for confirming a diurnal signal in a given experiment is beyond the scope of the current study and we leave this for future work.
We have focused on light (0.5 GeV) DM particles interacting through the three operators $\oper{1}$, $\oper{8}$ and $\oper{12}$. Of course, the space of DM models is much wider than this, including not only the full range of DM-nucleon contact operators from NREFT, but also long-range and DM-electron interactions. The \textsc{EarthShadow} code we have developed can be used to calculate the Earth-scattering effects for any DM mass in the range $0.1-300$~GeV and for any NREFT interaction. Furthermore, the framework we have developed can be extended to accommodate more general interactions or to investigate the \textit{directional} signatures which arise from Earth-scattering. We encourage the reader to use these tools to further explore the DM-interaction parameter space for other interesting signatures.
Though we have considered arbitrary DM-nucleon interactions, the calculations we have performed are not valid for arbitrary interaction strengths. We have worked in the `single scatter' regime, where we account for only a single scatter in the deflected DM population. As the DM-nucleon interaction strength increases, this approximation will no longer be valid. It may be possible to account for additional scatters analytically but this is likely to become rapidly intractable. An alternative approach may be to perform calculations in a `many-scatter' or `diffusion' regime in which the DM particles are not assumed to follow ballistic trajectories but instead interact so strongly that they are best described as diffusing through the Earth. A final possibility for extending these calculations would be the use of Monte-Carlo simulations, which would allow a smooth interpolation between the `single-scatter' and `diffusion' regimes. This final possibility will be explored in an upcoming publication \cite{Emken}. The calculations presented in this work will then act as a crucial validation check for the development of such simulations.
\section{Summary}
\label{sec:conclusion}
In this work, we have presented a new analytic calculation of the Earth-scattering effect and its impact on the DM speed distribution at the Earth's surface. Working in the `single scatter' regime, we have demonstrated that our calculations are self-consistent, in that they conserve the flux of DM particles through the Earth. Though our calculations are valid for all DM masses and for a wide range of DM-nucleon interactions, we have focused on low mass (0.5 GeV) DM and a small subset of the NREFT operators introduced in Ref.~\cite{Fitzpatrick:2012ix}. The conclusions of this study are summarised as follows:
\begin{itemize}
\item Earth-scattering substantially reduces the speed distribution when DM particles must cross a large fraction of the Earth to reach the detector. This effect dominates for latitudes in the range $36^\circ\,\mathrm{S} - 49^\circ \,\mathrm{S}$. The rotation of the Earth then leads to a diurnal modulation with a large amplitude ($10-30\%$) for detectors in the Southern Hemisphere.
\item At other latitudes (and particularly in the Northern hemisphere), Earth-scattering typically leads to a net \textit{increase} in the DM speed distribution. In this case, the overall rate is slightly enhanced, but the modulation amplitude is typically smaller ($1-10\%$).
\item Different DM-nucleon interactions cause DM particles to be deflected in different directions after they have scattered in the Earth. The result is that the expected phase and amplitude of the modulation depends on the type of interaction which is assumed for the DM particle.
\end{itemize}
The benchmark parameters assumed in this work are not excluded by current experiments, but could be explored with an increase in exposure or by upcoming detectors. If we are lucky enough to detect DM which interacts strongly with nuclei, the characteristic diurnal modulation described here would act as unequivocal confirmation of the signal. Such a signal would also allow us to determine the local DM density and cross section independently. The dependence of the modulation on the specific form of the DM-nucleon interaction may even provide insights into the particle identity of DM. Though we have not explored all possible DM parameters and interactions, we have demonstrated that signatures of Earth-scattering would have profound consequences if they were to be observed in the future.
\acknowledgments
BJK is supported by the European Research Council ({\sc Erc}) under the EU Seventh Framework Programme (FP7/2007-2013)/{\sc Erc} Starting Grant (agreement n.\ 278234 --- `{\sc NewDark}' project). BJK also acknowledges the hospitality of the Institut d'Astrophysique de Paris, where part of this work was completed. CK is partially funded by the Danish National Research Foundation, grant number DNRF90 and by the Danish Council for Independent Research, grant number DFF 4181-00055.
\section{Introduction}\label{sec:intro}
Two questions have been on every cosmologist's mind ever since the BICEP2 collaboration announced the detection of B-mode polarization~\cite{Ade:2014xna}: Is the signal cosmological? And, is it from inflation? Measuring the precise shape of the B-mode spectrum can help to address both of these questions.
A distinguished feature of inflationary perturbations is the fact that they are correlated over apparently acausal scales.
For scalar perturbations, this leads to a distinctive cross-correlation in the cosmic microwave background~(CMB) between temperature perturbations and E-mode polarization~\cite{Coulson:1994qw}.
The detection of superhorizon TE correlations by WMAP~\cite{Peiris:2003ff} is arguably the most convincing piece of evidence that the observed
density perturbations were generated during inflation~\cite{Spergel:1997vq, Dodelson:2003ip}. In~\cite{Baumann:2009mq}, it was pointed out that an analogous causality test can be performed for inflationary tensor modes.
In this paper, we revisit and refine this proposal in light of the BICEP2 result.
The conventional $E$- and $B$-modes~\cite{Kamionkowski:1996ks, Zaldarriaga:1996xe}
are ill-suited to address questions of causality, since they are defined non-locally in terms of the Stokes parameters of the radiation field. We will therefore work with a local alternative~\cite{Smith:2006vq} to the standard $E$- and $B$-modes which we will denote by ${\cal E}$ and ${\cal B}$.
In the flat-sky limit, we have ${\cal E} = \nabla^2 E$ and ${\cal B} = \nabla^2 B$, where $\nabla^2$ is the two-dimensional Laplacian in the plane orthogonal to the line-of-sight.
Fig.~\ref{fig:real} shows inflation's prediction for the ${\cal B}$-mode correlation function in real space. Unlike the correlation functions sourced by scalar fluctuations, the signal does not have a peak at the acoustic scale ($2\theta_a \sim 1.2^\circ$). Instead, the tensor-induced signal peaks around the horizon scale ($\theta_c \equiv 2\theta_h\sim 2.3^\circ$), corresponding to the time when the inflationary gravitational waves re-entered the horizon and started oscillating. Causality forbids such a superhorizon signal for any post-inflationary mechanism of gravitational wave production~\cite{Hu:1997hp, Spergel:1997vq}, such as phase transitions~\cite{JonesSmith:2007ne} or defects~\cite{Seljak:1997ii, Pogosian:2007gi}.
A measurement of ${\cal B}$-mode correlations above 2 degrees therefore constitutes an important test for the inflationary origin of the signal.
\begin{figure}[t!]
\centering
\includegraphics[width=0.47\textwidth]{Figures/Fig1.pdf}
\vskip -3pt
\caption{\label{fig:ClBtildeAll} Local ${\cal B}$-mode correlation function for $r=0.13$. The dashed and solid parts of the curve represent subhorizon and superhorizon scales, respectively.}
\label{fig:real}
\end{figure}
Fig.~\ref{fig:powerspectrum} shows the corresponding superhorizon signal in harmonic space.
(See \S\ref{sec:supsignal} for the precise definition of the superhorizon power spectrum.)
We see that the superhorizon information is encoded in the precise shape and the locations of the peaks of the spectrum. Notice that the superhorizon signal is not confined to the lowest multipole moments. In fact,
the asymptotic scaling of the spectrum, $C_\ell^{\cal B} \sim \ell^4$ for $\ell \ll 80$, is universal and does not help to distinguish causal sources from inflation~\cite{Baumann:2009mq}.
Extra care must be taken when working with ${\cal E}$- and ${\cal B}$-modes as we are dealing with derivatives of the raw data, corresponding to a blue noise spectrum in harmonic space.
In order not to become dominated by small-scale noise, smoothing needs to be applied to the data. However, if the smoothing scale is chosen to be too large, it induces a transfer of spurious subhorizon power to superhorizon scales. Conversely, a small smoothing scale reduces the signal-to-noise. We will discuss the optimal strategy for minimizing the effects of spurious modes while maximizing the signal-to-noise for the true superhorizon signal.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{Figures/Fig2.pdf}
\vskip -4pt
\caption{\label{fig:powerspectrum}The superhorizon ${\cal B}$-mode power spectrum (solid) and the full ${\cal B}$-mode power spectrum (dashed) for $r=0.13$.}
\end{figure}
\vskip 4pt
The paper is organized as follows. In \S\ref{sec:signal}, we review the concept of the local ${\cal B}$-modes and present the superhorizon ${\cal B}$-mode signal predicted by inflation.
In \S\ref{sec:noise}, we examine all potential sources of noise. We show that subhorizon modes can contaminate the superhorizon signal, especially if smoothing is applied to the data to suppress small-scale noise. In \S\ref{sec:method}, we introduce an estimator of the superhorizon part of the signal and define the signal-to-noise ratio. We also present a measure for the amount of contamination from spurious subhorizon modes and describe ways to minimize their effects.
In \S\ref{sec:forecast}, we provide forecasts for the detectability of the superhorizon nature of inflationary B-modes for both current and future CMB polarization experiments.
Our conclusions are stated in \S\ref{sec:conclusion}.
A few appendices contain additional reference materials:
Appendix~\ref{sec:harmonic} provides details of a similar analysis in harmonic space, Appendix~\ref{sec:EffectiveNoise} describes the derivation of the effective noise in multi-frequency experiments, and Appendix~\ref{sec:Experiments} lists the instrumental specifications of the CMB experiments considered in this work.
All CMB spectra are computed with
CAMB~\cite{Lewis:1999bs} using the best-fit parameters of the $\Lambda$CDM model~\cite{Ade:2013zuv}: $h=0.67$, $\Omega_b h^2 = 0.022$, $\Omega_c h^2=0.12$, $\tau=0.093$, $A_s=2.2\times 10^{-9}$, and $n_s=0.96$.
The primordial tensor spectrum is taken to be scale-invariant, $n_t = 0$.
\section{The Signal}
\label{sec:signal}
We begin with a brief review of the superhorizon signature of inflationary B-modes~\cite{Baumann:2009mq}.
\subsection{Local B-modes}\label{sec:bmodes}
The polarization of the CMB is characterized by a symmetric, traceless rank-2 tensor defined in the plane perpendicular to the line-of-sight $\hat{{\mathbf{n}}}$:
\begin{equation}
P_{ij} = U\sigma_{ij}^{(1)} + Q\sigma_{ij}^{(3)}\ ,
\end{equation}
where $\sigma_{ij}^{(I)}$ denotes the Pauli matrices. Since the Stokes parameters $Q$ and $U$ transform non-trivially under rotations of the coordinates, it is more convenient to work with two invariants that can be constructed from the polarization tensor: a scalar ${\cal E} \equiv \nabla_i\nabla_j P_{ij}$ and a pseudo-scalar ${\cal B}\equiv \epsilon_{kj}\nabla_k\nabla_i P_{ij}$, corresponding to the gradient and curl parts of the polarization tensor, respectively. In the flat-sky limit, these ${\cal E}$- and ${\cal B}$-modes are related to the Stokes parameters and the ordinary $E$- and $B$-modes by~\cite{Zaldarriaga:1998rg}
\begin{align}
{\cal E}({\mathbf{x}}) &= \nabla^2 E({\mathbf{x}}) =(\partial_x^2 - \partial_y^2)Q({\mathbf{x}}) + 2\partial_x\partial_y U({\mathbf{x}})\ ,\\
{\cal B}({\mathbf{x}}) &= \nabla^2 B({\mathbf{x}}) = (\partial_x^2 - \partial_y^2)U({\mathbf{x}}) - 2\partial_x\partial_y Q({\mathbf{x}})\ .\label{Bdef}
\end{align}
By construction, ${\cal E}$ and ${\cal B}$ are local functions of the Stokes parameters, whereas $E$ and $B$ are defined non-locally in terms of $Q$ and $U$. Being just a linear transformation of the conventional $B$-modes, the local ${\cal B}$-modes are also a signature of tensor (and vector) modes in the initial conditions.
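In the flat-sky limit, this locality makes ${\cal E}$ and ${\cal B}$ straightforward to evaluate on pixelized Stokes maps. A minimal sketch (our own illustrative implementation, using FFT derivatives on a periodic square patch):

```python
import numpy as np

def local_EB(Q, U, dx=1.0):
    """Local E and B modes on a flat, periodic n x n patch:
    E = (dx^2 - dy^2) Q + 2 dx dy U,
    B = (dx^2 - dy^2) U - 2 dx dy Q,
    with derivatives evaluated in Fourier space (d/dx -> i kx)."""
    n = Q.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
    kx, ky = np.meshgrid(k, k, indexing="ij")
    Qk, Uk = np.fft.fft2(Q), np.fft.fft2(U)
    d2 = -(kx**2 - ky**2)   # (d/dx)^2 - (d/dy)^2
    dxdy = -kx * ky         # (d/dx)(d/dy)
    Ek = d2 * Qk + 2.0 * dxdy * Uk
    Bk = d2 * Uk - 2.0 * dxdy * Qk
    return np.real(np.fft.ifft2(Ek)), np.real(np.fft.ifft2(Bk))
```

As a sanity check, a single Fourier mode $Q = \cos(k_x x)$, $U = 0$ is a pure $E$-pattern, and the sketch indeed returns ${\cal B} = 0$ for it to machine precision.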
Any scalar field on the celestial sphere can be expanded in terms of spherical harmonics, so we write
\begin{align}
X(\hat{\mathbf{n}}) &\equiv \sum_{\ell m} a_{X,\ell m}Y_{\ell m}(\hat{\mathbf{n}})\ ,
\end{align}
where $X=\{T, {\cal E},{\cal B}\}$. Assuming statistical isotropy, the two-point statistics of the multipole moments are described in terms of the angular power spectrum:
\begin{equation}
\langle a_{X,\ell m\phantom{'}} \hskip -2pt a_{X,\ell' m'}^* \rangle = C_\ell^{X} \delta_{\ell \ell'} \delta_{mm'}\ ,
\end{equation}
where the angle brackets denote the ensemble average.
The late-time power spectrum, $C_\ell^X$, can be related to quantum zero-point fluctuations in both the spacetime metric and the matter fields during inflation~\cite{Baumann:2009ds}.
Given a measurement of the harmonic coefficients $a_{X,\ell m}$, we define estimators of the angular power spectra as
\begin{equation}
\widehat C_\ell^X \equiv \frac{1}{2\ell+1} \sum_m a_{X,\ell m}^{\phantom{*}} a_{X,\ell m}^*\ .
\end{equation}
The power spectrum of the local ${\cal B}$-modes is related to that of the conventional $B$-modes by~\cite{Baumann:2009mq, Durrer}
\begin{equation}
C_\ell^{{\cal B}}=n^2_\ell C_\ell^B\ ,
\end{equation}
where $n_\ell \equiv \sqrt{(\ell+2)!/(\ell-2)!}$\hskip2pt. A harmonic transformation gives the corresponding correlation function in real space:
\begin{equation}\label{corr}
C^{\cal B} (\theta) = \sum_{\ell} \frac{2\ell+1}{4\pi} C^{\cal B}_\ell P_\ell(\cos\theta)\ ,
\end{equation}
where $\theta$ is the angle between pairs of line-of-sight directions $\hat{{\mathbf{n}}}_1$ and $\hat{{\mathbf{n}}}_2$, i.e.~$\cos \theta \equiv \hat{{\mathbf{n}}}_1 \cdot \hat{{\mathbf{n}}}_2$. The relation between (\ref{corr}) and the correlation function of the conventional $B$-modes is
\begin{equation}\label{BtildeCorr}
C^{{\cal B}}(\theta) =\nabla^2 (\nabla^2+2) C^B(\theta)\ .
\end{equation}
Again, the non-local nature of the ordinary $B$-modes is manifest: (\ref{BtildeCorr}) implies that $C^{{\cal B}}$ vanishes for any $C^B$ living in the kernel of $\nabla^2(\nabla^2+2)$, even if $C^B$ is non-zero.
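For concreteness, the harmonic sum (\ref{corr}) is straightforward to evaluate numerically. The following Python sketch (purely illustrative; the function names are ours and not part of any analysis pipeline) computes a correlation function from a given power spectrum using the Bonnet recurrence for the Legendre polynomials:

```python
import math

def legendre(lmax, x):
    """Return [P_0(x), ..., P_lmax(x)] via the Bonnet recurrence
    (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    P = [1.0, x]
    for n in range(1, lmax):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return P[:lmax + 1]

def correlation(cl, theta):
    """C(theta) = sum_l (2l+1)/(4 pi) C_l P_l(cos theta), cf. eq. (corr)."""
    x = math.cos(theta)
    P = legendre(len(cl) - 1, x)
    return sum((2 * l + 1) / (4 * math.pi) * cl[l] * P[l]
               for l in range(len(cl)))

# Sanity check: a pure monopole C_0 = 4 pi gives C(theta) = 1 everywhere.
cl = [4 * math.pi] + [0.0] * 10
print(correlation(cl, 0.5))  # ~ 1.0 up to rounding
```

In practice one would truncate the sum at the multipole where the filtered spectrum has decayed; for the smoothing scales considered below, a few hundred terms suffice.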
\subsection{Superhorizon Signal}
\label{sec:supsignal}
Having defined the local ${\cal B}$-modes, we can analyze causality constraints on their correlation functions.
The superhorizon part of the two-point correlation function is identified most directly in real space:
\begin{equation}\label{supsignal}
S^{\cal B}(\theta) \equiv H(\theta-\theta_c) \hskip 1pt C^{\cal B}(\theta)\ ,
\end{equation}
where $H$ is the Heaviside step function and $\theta_c \simeq 2.3^\circ$ is (twice) the angle subtended by the particle horizon at recombination.
The corresponding signal in harmonic space is
\begin{align}
S_\ell^{\cal B} &\,=\, 2\pi \int_{-1}^1 {\rm d} \cos \theta \, \, S^{\cal B}(\theta)\, P_{\ell}(\cos\theta) \nonumber \\[4pt]
&\,=\, \sum_{\ell'} M_{\ell\ell'} C_{\ell'}^{\cal B} \ , \label{powerspectrum}
\end{align}
where the mode-coupling matrix $M_{\ell\ell'}$ is
\begin{equation}\label{couplingmtx}
M_{\ell\ell'} \equiv \frac{2\ell'+1}{2} \underbrace{\int_{-1}^{x_c} P_\ell(x)P_{\ell'}(x)\hskip 1pt {\rm d} x}_{\equiv\, I_{\ell\ell'}}\ ,
\end{equation}
with $x_c \equiv \cos \theta_c$.
We label the complementary subhorizon signal as $(S^{\cal B}_\ell)^\dagger \equiv C_\ell^{\cal B} - S^{\cal B}_\ell$.
The mode-coupling integrals $I_{\ell\ell'}$ in (\ref{couplingmtx}) can be calculated analytically. The off-diagonal terms are given by
\begin{align}
I_{\ell\ell'} &= \frac{ (\ell-\ell') x_c P_\ell P_{\ell'} + \ell' P_\ell P_{\ell'-1} - \ell P_{\ell-1} P_{\ell'} }{\ell(\ell+1)-\ell'(\ell'+1)}\ ,
\end{align}
where the Legendre polynomials are evaluated at $x_c$, while the diagonal terms are determined by the recursion relation
\begin{equation}\label{Idiag}
I_{\ell\ell} = \frac{2\ell-1}{2\ell+1}I_{\ell-1,\ell-1} + \frac{2\ell-1}{2\ell+1}\frac{\ell+1}{\ell}I_{\ell+1,\ell-1} - \frac{\ell-1}{\ell}I_{\ell,\ell-2}\ .
\end{equation}
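The closed-form expression and the recursion (\ref{Idiag}) can be implemented and cross-checked in a few lines. The Python sketch below (illustrative only) seeds the recursion with $I_{00}=x_c+1$ and $I_{11}=(x_c^3+1)/3$, which follow from direct integration:

```python
import math

def legendre(lmax, x):
    """[P_0(x), ..., P_{lmax}(x)] via the Bonnet recurrence."""
    P = [1.0, x]
    for n in range(1, lmax + 1):
        P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
    return P

def I_offdiag(l, lp, xc):
    """Closed form for I_{l l'} = int_{-1}^{xc} P_l P_{l'} dx, with l != l'."""
    P = legendre(max(l, lp), xc)
    num = ((l - lp) * xc * P[l] * P[lp]
           + (lp * P[l] * P[lp - 1] if lp > 0 else 0.0)
           - (l * P[l - 1] * P[lp] if l > 0 else 0.0))
    return num / (l * (l + 1) - lp * (lp + 1))

def I_diag(lmax, xc):
    """Diagonal terms I_{ll} via the recursion, seeded by I_00 and I_11."""
    I = [xc + 1.0, (xc**3 + 1.0) / 3.0]
    for l in range(2, lmax + 1):
        I.append((2 * l - 1) / (2 * l + 1) * I[l - 1]
                 + (2 * l - 1) / (2 * l + 1) * (l + 1) / l
                   * I_offdiag(l + 1, l - 1, xc)
                 - (l - 1) / l * I_offdiag(l, l - 2, xc))
    return I

# Check against the exact value I_22 = 0.2765625 at xc = 0.5.
print(I_diag(2, 0.5)[2])
```

A direct check: for $x_c=0.5$, $I_{20}=\int_{-1}^{1/2}P_2\,{\rm d} x=-0.1875$ and $I_{22}=0.2765625$, both reproduced by the sketch.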
We can think of the kernel~(\ref{couplingmtx}) as an operator projecting the power spectrum onto its superhorizon subspace.
In fig.~\ref{fig:powerspectrum}, we show the superhorizon part of the power spectrum predicted by inflation.
We see that the features of the real space correlation function above $\theta_c$ are encoded in the oscillations of the power spectrum with the frequency of the oscillations corresponding to the horizon size at recombination.
\section{The Noise}\label{sec:noise}
Next, we describe the sources of noise that we will take into account in our analysis.
\subsection{Instrumental Noise}
We represent instrumental noise by an uncorrelated Gaussian random field.
Assuming white noise in the Stokes parameters, the noise power spectrum for $B$-modes can be expressed as~\cite{Knox:1995dq}
\begin{equation}\label{NlB}
N_\ell^B = \Delta_P^2 e^{\ell(\ell+1)/\ell_b^2}\ ,
\end{equation}
where $\Delta_P$ is the noise level of polarization sensitive detectors.
The exponential factor in (\ref{NlB}) represents the effect of deconvolving the Gaussian beam effect from the signal, with $\ell_b \equiv\sqrt{8\ln 2}/\theta_b$ and $\theta_b$ the full width at half maximum of the beam. The noise level is determined by
\begin{equation}
\Delta_P^2 = \frac{2\text{NET}^2\Omega_\text{sky}}{N_\text{det}t_\text{obs}Y} \equiv s_P^2 f_\text{sky}\ , \label{noise2}
\end{equation}
where NET is the noise equivalent temperature of the detectors, $N_\text{det}$ denotes the number of detectors, $t_\text{obs}$ is the time of observation, $Y$ characterizes the detector yield, and $\Omega_\text{sky} = 4\pi f_\text{sky}$ is the observed sky area.\footnote{The current generation of experiments achieves $\text{NET} = 350\,\mu{\rm K}\sqrt{\rm s}$ and $Y=0.25$, with $N_{\rm det} \sim {\cal O}(10^3)$~\cite{Wu:2014hta}.
In \cite{Abazajian:2013oma} ground-based experiments have been classified by the number of detectors as Stage-II, Stage-III and Stage-IV for $N_{\rm det} \sim 10^3$, $10^4$ and $10^5$, respectively.}
We will find it useful to consider the effective sensitivity of a full-sky experiment, $s_P$, and rescale it by the observed sky fraction, $f_{\rm sky}$. In an experiment with multiple frequency channels, a heuristic measure of the effective noise level is
\begin{equation}
\Delta_{P,\text{eff}}^2 = \left[\sum_{i} \frac{1}{\Delta_{P,i}^2}\right]^{-1}\ ,
\end{equation}
where $\Delta_{P,i}$ denotes the noise level of channel $i$. The instrumental specifications and the effective noise levels for a selection of current and upcoming B-mode experiments are listed in Table \ref{tab:specs} and in more detail in Appendix~\ref{sec:Experiments}.
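As a numerical illustration of (\ref{noise2}) and the inverse-variance combination of channels, consider the following Python sketch (the inputs are placeholders, not the specifications of any particular experiment):

```python
import math

def delta_P(net, n_det, t_obs, Y, f_sky):
    """Noise level from eq. (noise2):
    Delta_P^2 = 2 NET^2 Omega_sky / (N_det t_obs Y), Omega_sky = 4 pi f_sky."""
    omega_sky = 4 * math.pi * f_sky
    return math.sqrt(2 * net**2 * omega_sky / (n_det * t_obs * Y))

def delta_P_eff(deltas):
    """Effective noise of several channels: inverse sum of inverse variances."""
    return sum(1.0 / d**2 for d in deltas) ** -0.5

# Two channels with noise levels 3 and 4 (arbitrary units) combine to ~2.4.
print(delta_P_eff([3.0, 4.0]))
```

Note that the combined noise is always below the best single channel, as expected for an inverse-variance weighting.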
\begin{table}[t!]
\centering
\begin{tabular}{ l c r c c }
\hline
& $\theta_b\,[']$ & $f_\text{sky}$ [\%] & $\Delta_{P,\text{eff}}\,[\mu\text{K}']$ & $s_{P,\text{eff}}\,[\mu\text{K}']$ \\ [0.5ex]
\hline
BICEP2 & 29 & 2.4\hphantom{11} & \hphantom{1}5.2 & 33.6 \\
Keck Array & 29 & 2.4\hphantom{11} & \hphantom{1}2.2 & 14.2\\
PolarBeaR-2 & \hphantom{1}4 & 20\hphantom{11} & 10.7 & 23.9\\
Simons Array & \hphantom{1}3 & 20\hphantom{11} & \hphantom{1}6.3 & 14.1\\
SPTPol & \hphantom{1}1 & 6\hphantom{11} & \hphantom{1}4.4 & 17.8\\
LiteBIRD & 16 & 70\hphantom{11} & \hphantom{1}1.8 & \hphantom{1}2.2 \\
COrE & \hphantom{1}1 & 70\hphantom{11} & \hphantom{1}1.8 & \hphantom{1}2.2 \\[1ex]
\hline\hline
\end{tabular}
\caption{Instrumental specifications for current and upcoming CMB polarization experiments~\cite{Ade:2014gua, Bock, Kermish:2012eh, Bouchet:2011ck, Austermann:2012ga, doi:10.1117/12.926158, Matsumura:2013aja}.}
\label{tab:specs}
\end{table}
As can be seen from (\ref{Bdef}), a measurement of ${\cal B}$-modes effectively involves taking derivatives of both the signal and the noise. This has the benefit that the observables become local quantities, but at the same time the noise spectrum for ${\cal B}$-modes acquires a factor of $n_\ell^2 \sim \ell^4$ relative to the noise for a $B$-mode measurement. White noise spectra for the Stokes parameters then translate into a blue spectrum for ${\cal B}$-modes, $N_\ell^{\cal B} \sim \ell^4$, implying a large contribution from small-scale noise.
Because of the drastic difference in the properties of the noise, it is important to analyze the detectability of ${\cal B}$-modes separately, adopting a different strategy from measurements of $B$-modes if necessary.
In order to compensate for the blue noise spectrum, we will apply a low-pass filter to both the signal and the noise:\hskip 2pt\footnote{Calligraphic font will from now on denote filtered quantities.}
\begin{equation}
{\cal C}_\ell^{\cal B} \equiv f_\ell C_\ell^{\cal B} \ ,\quad
{\cal N}_\ell^{\cal B} \equiv f_\ell N_\ell^{\cal B}\ , \label{smoothing}
\end{equation}
where $f_\ell$ denotes a filtering function.
In real space, the procedure~(\ref{smoothing}) corresponds to a convolution with a certain window function $f(\theta,\theta') = \sum_\ell \frac{2\ell+1}{2}f_\ell \hskip 1pt P_\ell(\cos\theta)P_\ell(\cos\theta')$.
Depending on the experimental strategy, different window functions may be more suitable. For our purposes, there are several conditions that the filtering function $f_\ell$ needs to satisfy: (i) it needs to be sufficiently smooth to avoid the Gibbs phenomenon, (ii) it should decay early enough to suppress the small-scale noise efficiently, and (iii)~it should retain the shape of the power spectrum up to $\ell\sim 100$ in order not to cause any distortion of the superhorizon features. A simple choice which satisfies the above requirements is a Gaussian filtering function:
\begin{equation}
f_\ell = e^{-\ell(\ell+1)/\ell_s^2}\ , \label{equ:Fell}
\end{equation}
where $\ell_s$ defines the smoothing scale.
To satisfy the second and third conditions, we choose $100<\ell_s<\ell_b$, in which case the first condition is automatically satisfied.
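To see how the filter tames the blue noise spectrum, the following illustrative Python sketch combines (\ref{NlB}), the $n_\ell^2$ factor, and the Gaussian filter (\ref{equ:Fell}); all numerical values are placeholders:

```python
import math

def n_ell_sq(l):
    """n_l^2 = (l+2)!/(l-2)! = (l-1) l (l+1) (l+2)."""
    return (l - 1) * l * (l + 1) * (l + 2)

def filtered_noise(l, delta_P, l_b, l_s):
    """Filtered local B-mode noise f_l N_l^{cal B}."""
    N_B = delta_P**2 * math.exp(l * (l + 1) / l_b**2)   # beam-deconvolved noise
    f_l = math.exp(-l * (l + 1) / l_s**2)               # low-pass filter
    return f_l * n_ell_sq(l) * N_B

# With 100 < l_s < l_b, the filter cuts off the ~l^4 growth well before
# the beam exponential takes over.
print(filtered_noise(200, 1.0, 1000.0, 200.0))
print(filtered_noise(1000, 1.0, 1000.0, 200.0))  # strongly suppressed
```

The suppression at high $\ell$ is what makes the real-space correlation function of the filtered ${\cal B}$-modes well-defined despite the $\ell^4$ noise.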
\subsection{Leakage}\label{sec:leakage}
The filtering of the ${\cal B}$-mode spectrum is a necessary evil.
An inevitable consequence of the filtering process is a transfer of part of the subhorizon signal to superhorizon scales (and vice versa). For lack of a better term, we will call this contamination {\it leakage}. Since the spurious modes due to leakage can confuse the detection of the true superhorizon signal, it will be important to treat them carefully in our analysis.
In fig.~\ref{fig:CBfiltered}, we show the filtered subhorizon and superhorizon ${\cal B}$-mode correlation functions. As we can see, there is a non-negligible amount of leakage around $\theta\sim 2^\circ$. On the other hand, the positive peak of the superhorizon signal at $\theta\sim 3^\circ$ is relatively clean and still serves as an unambiguous test of the inflationary superhorizon spectrum.
To keep the leakage small, we therefore focus on correlations with $\theta \gtrsim \theta_0 \equiv 2.6^\circ$.
Moreover, at fixed $\theta$, the leakage can be reduced by working with larger values of~$\ell_s$.
However, making $\ell_s$ too large will reduce the signal-to-noise of the signal we wish to measure.
In \S\ref{sec:forecast}, we will discuss the optimal balance between minimal leakage and maximal signal-to-noise.
\begin{figure}[t!]
\centering
\includegraphics[width=0.47\textwidth]{Figures/Fig3.pdf}
\caption{Local ${\cal B}$-mode correlation function for $r=0.13$ using the Gaussian filter (\ref{equ:Fell}) with $\ell_s=200$. The solid and dashed lines correspond to the superhorizon and subhorizon signals, respectively. }
\label{fig:CBfiltered}
\end{figure}
\subsection{Foregrounds}
Our ability to detect the primordial B-mode signal depends crucially on how well we can separate the signal from foreground contamination. The two major sources of foregrounds in the microwave range are polarized emissions from synchrotron and thermal dust.
Their distinct frequency dependences, in principle, allow them to be distinguished from the primary CMB signal.
\subsubsection{Synchrotron}
Synchrotron radiation arises from the acceleration
of relativistic cosmic-ray electrons in the magnetic field of the Galaxy.
This is the dominant contribution to the polarized foreground emission below 70 GHz.
\vskip 4pt
If the electrons have a power law distribution of energies, $N(E) \propto E^{-p}$, then the antenna temperature\footnote{Antenna temperature units are defined in reference to the Rayleigh-Jeans law, whereas thermodynamic temperature units are defined as the blackbody temperature obeying Planck's law. We calibrate quantities in thermodynamic temperature units, so that the primary CMB spectrum is frequency-independent.} of the signal is predicted to have a power-law dependence on frequency, $T(\nu) \propto \nu^{\beta_s}$, with $\beta_s = -\frac{1}{2}(p+3)$.
This simple ansatz for the frequency spectrum fits observations rather well with $\beta_s\simeq -2.9 $~\cite{Ade:2014zja}.
The variation of the spectral index across the sky is of order 10\%.
The angular spectrum of the synchrotron emission is found to obey an approximate power law, $F_\ell^{s,B} \propto \ell^{\alpha_s}$, with $\alpha_s \simeq -2.6$~\cite{Page:2006hz}.
Combining the above facts, we are led to the following ansatz
for the synchrotron $B$-mode power spectrum in thermodynamic temperature units~\cite{Baumann:2008aq}:
\begin{equation}
F_\ell^{s,B}(\nu) = A_s \left(\frac{\ell}{\ell_0}\right)^{\alpha_s} h^s(\nu,\nu_0)\ , \label{Fs}
\end{equation}
where $A_s$ is the amplitude of synchrotron emission defined at a reference frequency $\nu_0$ and a reference scale~$\ell_0$. The function
\begin{equation}
h^s(\nu,\nu') \equiv \left(\frac{\nu}{\nu'}\right)^{2\beta_s} \left(\frac{f(\nu)}{f(\nu')}\right)^2
\end{equation}
encapsulates the spectral dependence, where the factor $f(\nu)$ accounts for the conversion from antenna temperature to thermodynamic temperature~\cite{Tegmark:1999ke}:
\begin{equation}
f(\nu) \equiv \frac{(e^x - 1)^2}{x^2 e^x}\ , \quad x\equiv \frac{h\nu}{kT_{\rm cmb}} \approx
\frac{\nu}{56.8 \,{\rm GHz}}\ ,
\end{equation}
where $T_{\rm cmb}=2.725\, {\rm K}$ is the CMB blackbody temperature~\cite{Fixsen:2009ug}.
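The frequency scaling of the synchrotron foreground is simple to tabulate. The Python sketch below (illustrative; function names are ours) implements the conversion factor $f(\nu)$ and the spectral function $h^s(\nu,\nu')$:

```python
import math

def f_conv(nu_GHz):
    """Antenna -> thermodynamic temperature conversion,
    f(nu) = (e^x - 1)^2 / (x^2 e^x) with x = nu / 56.8 GHz."""
    x = nu_GHz / 56.8
    return (math.exp(x) - 1.0) ** 2 / (x**2 * math.exp(x))

def h_sync(nu_GHz, nu0_GHz, beta_s=-2.9):
    """Synchrotron spectral function h^s(nu, nu'), cf. eq. (Fs)."""
    return ((nu_GHz / nu0_GHz) ** (2 * beta_s)
            * (f_conv(nu_GHz) / f_conv(nu0_GHz)) ** 2)

# Synchrotron power falls steeply with frequency: h^s(150, 90) << 1.
print(h_sync(150.0, 90.0))
```

In the Rayleigh-Jeans limit $x \to 0$ one has $f(\nu) \to 1$, so the antenna and thermodynamic temperatures coincide at low frequencies, as they should.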
\subsubsection{Thermal Dust}
Thermal emission from interstellar dust grains aligned with the Galactic magnetic field
produces the dominant polarized foreground above 70 GHz.
\vskip 4pt
The frequency dependence of the dust intensity takes the form of a modified blackbody, $I_\nu \propto \nu^{\beta_d} B_\nu(T_d) $, where
the Planck spectrum $B_\nu(T_d)$ is determined by the observed dust temperature, $T_d\simeq 19.7\, {\rm K}$~\cite{Abergel:2013fza}. The mean spectral index is found to be $\beta_d \simeq 1.5$ at microwave frequencies~\cite{Ade:2014zja}, with a variation of about 1\% across the sky (much less than the variation of the synchrotron spectral index).
The angular spectrum again satisfies a power law,
$F_\ell^{d,B} \propto \ell^{\alpha_d}$, with $\alpha_d \simeq -2.3$~\cite{Aumont}.
The dust $B$-mode power spectrum can therefore be modelled as~\cite{Baumann:2008aq}
\begin{equation}
F_\ell^{d,B}(\nu) = A_d \left(\frac{\ell}{\ell_0}\right)^{\alpha_d} h^d(\nu,\nu_0) \ , \label{Fd}
\end{equation}
where $A_d$ is the amplitude of the polarized dust emission defined at a reference frequency $\nu_0$ and a reference scale~$\ell_0$. The spectral function for dust is\hskip1pt\footnote{Eq.~(\ref{hd}) corrects a typo in~\cite{Tucci:2004zy,Verde:2005ff}.}
\begin{equation}\label{hd}
h^d(\nu,\nu')\equiv \left(\frac{\nu}{\nu'}\right)^{2\beta_d}
\left(\frac{B_\nu(T_d)}{B_{\nu'}(T_d)} \frac{g(\nu)}{g(\nu')}\right)^2\ ,
\end{equation}
where $g(\nu)$ is the conversion factor from intensity to thermodynamic temperature units~\cite{Tegmark:1999ke},
\begin{equation}
g(\nu) \equiv \frac{f(\nu)}{\nu^2} \ .
\end{equation}
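The dust spectral function (\ref{hd}) can be evaluated analogously. The following illustrative Python sketch uses approximate values of the Planck and Boltzmann constants (the overall prefactor of $B_\nu$ cancels in the ratio):

```python
import math

H_PLANCK = 6.626e-34  # J s (approximate)
K_B = 1.381e-23       # J/K (approximate)

def planck_B(nu_GHz, T):
    """Planck spectrum B_nu(T), up to constants that cancel in ratios."""
    nu = nu_GHz * 1e9
    x = H_PLANCK * nu / (K_B * T)
    return nu**3 / (math.exp(x) - 1.0)

def g_conv(nu_GHz):
    """Intensity -> thermodynamic temperature conversion, g = f / nu^2."""
    x = nu_GHz / 56.8
    f = (math.exp(x) - 1.0) ** 2 / (x**2 * math.exp(x))
    return f / nu_GHz**2

def h_dust(nu_GHz, nu0_GHz, beta_d=1.5, T_d=19.7):
    """Dust spectral function h^d(nu, nu'), cf. eq. (hd)."""
    return ((nu_GHz / nu0_GHz) ** (2 * beta_d)
            * (planck_B(nu_GHz, T_d) / planck_B(nu0_GHz, T_d)
               * g_conv(nu_GHz) / g_conv(nu0_GHz)) ** 2)

print(h_dust(353.0, 150.0))  # dust scaling from 150 GHz to 353 GHz
```

This makes explicit how a high-frequency channel (e.g.~near 353 GHz) serves as a dust template for the cosmologically cleaner channels.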
The amplitude in (\ref{Fd}) can be written as $A_d = p^2 I_d$, where $I_d$ is the unpolarized dust intensity and $p$ is the polarization fraction.
Both $p$ and $I_d$ can vary significantly across the sky.\footnote{The latest Planck measurements suggest that the mean polarization fractions over most parts of the sky (including highly polarized regions) fall in the range of 3 to 14\%~\cite{Ade:2014gna}. The dust intensity is constrained by the Finkbeiner-Davis-Schlegel (FDS) dust map~\cite{Finkbeiner:1999aq}.}
The precise amount of foreground contamination, therefore, depends on the region of the sky under consideration.
In $\S$\ref{sec:forecast}, we will consider a few different choices for the amplitude of the dust polarization, and allow for the relatively large uncertainties that still exist.
\subsubsection{Foreground Residuals}
Multi-frequency observations allow some degree of foreground cleaning based on the distinct frequency dependence of the foregrounds.
Detailed algorithms for foreground cleaning are discussed in \cite{Dunkley:2008am}.
Following~\cite{Verde:2005ff, Baumann:2008aq}, we will assume that the foregrounds can be subtracted by the template cleaning method (e.g.~\cite{Eriksen:2005dr,Stompor:2008sf}), and simply parameterize the foreground residuals by rescaling the foreground amplitudes by two scale-independent factors, $\epsilon_x \in [0,1]$, with $x=\{s,d\}$ denoting synchrotron and dust, respectively.
We propagate the noise of the template map into the foreground residuals.
After cleaning, the residual foreground spectrum is~\cite{Tucci:2004zy,Verde:2005ff}
\begin{equation}\label{RlB}
R_\ell^B \equiv \sum_x \left[\epsilon_x F^{x,B}_\ell + {\sf N}_\ell^{x,B} h^x(\nu,\nu_{\rm ref}^x)\right]\ ,
\end{equation}
where $\nu^x_{\rm ref}$ is the reference frequency used as the template and ${\sf N}_\ell^{x,B}$ is the noise level of the template map for $x$.
We treat the foreground residuals as additional sources of uncorrelated noise (see Appendix~\ref{sec:EffectiveNoise} for a discussion). For an experiment with multiple frequency channels, we seek a linear combination of the maps, with weights chosen so as to minimize the variance of the power spectrum~\cite{Tegmark:1999ke}.
In Appendix~\ref{sec:EffectiveNoise}, we derive the optimal weighting scheme and show that the effective noise of the combined map is~\cite{Baumann:2008aq}
\begin{equation}
\label{Neff}
N_{{\rm eff},\ell}^B = \left[ \sum_{i} \frac{1}{N_{i,\ell}^B + R_{i,\ell}^B }\right]^{-1}\ ,
\end{equation}
where the subscript $i$ denotes the value at frequency $\nu_{i}$.
Appendix~\ref{sec:EffectiveNoise} also explains that any correlations between the foreground residuals at different frequencies tend to reduce the effective noise level, so working with (\ref{Neff}) is a conservative choice.
\subsection{Lensing}
Even in the absence of primordial B-modes, a curl component of CMB polarization is generated by the lensing of primordial E-modes~\cite{Zaldarriaga:1998ar, Hu:2000ee}.
This effect has to be considered an additional source of noise for the signal we are trying to measure.
\vskip 4pt
On large angular scales, the lensing $B$-modes act like white noise with an effective amplitude of $4.4\, \mu{\rm K}'$. In the low-noise regime ($\lesssim 5\, \mu\text{K}'$), the lensing effect provides a significant limitation to a measurement of the primordial signal, especially for low values of $r$. Since lensing does not induce any spectral distortions to the primary CMB, multi-frequency observations do not help to distinguish between these two signals. However, several methods have been proposed to reduce the lensing noise statistically~\cite{Okamoto:2003zw, Hirata:2003ka} (see \cite{Smith:2008an} for a comprehensive discussion). The most promising delensing procedure involves reconstructing the lensing potential from measurements of small-scale CMB polarization, which is subsequently used to remove the lensing contribution to the large-scale B-mode signal. This requires CMB experiments with high sensitivity and resolution (small beam size). Details of this approach to delensing can be found e.g.~in~\cite{Smith:2010gu}.
In the absence of sky cuts, foregrounds, and instrumental systematics, a detection of the primordial tensor amplitude down to $r\sim10^{-6}$ is, in principle, achievable~\cite{Seljak:2003pn}. In practice, however, experimental limitations and the presence of foregrounds limit an accurate quantification of the residual lensing, resulting in a possible bias in the estimator of the lensing potential. To avoid these practical uncertainties, we assume that the lensing estimator is unbiased, or that any significant biases are known and can be eliminated. The residual lensing then contributes only to the variance and does not bias the signal. The issue of potential lensing bias is the subject of many investigations in the literature, e.g.~\cite{Shimon:2007au,Hanson:2009dr,Hanson:2010gu,Hanson:2010rp,Namikawa:2012pe,BenoitLevy:2013bc,vanEngelen:2013rla}, but is beyond the scope of the present work.
We consider delensing in a heuristic way by multiplying the amplitude of the lensing $B$-modes $L_\ell^B$ by a scale-independent delensing fraction,
\begin{equation}
L_\ell^B \ \to\ \epsilon_L L_\ell^B\ , \label{equ:lensing}
\end{equation}
where $\epsilon_L \in [0,1]$, and treat it as an additional noise. On large scales, both the residual spectrum and the original spectrum are approximately white noise (see e.g.~fig.~1 in \cite{Boyle:2014kba}). Therefore, the ansatz (\ref{equ:lensing}) is a sufficiently good approximation to more sophisticated expressions for the lensing residuals found in Appendix~A of \cite{Smith:2010gu}. The residual lensing power spectrum is then incorporated into the effective noise as
\begin{equation}
N_{{\rm eff},\ell}^B = \left[ \sum_{i} \frac{1}{{N}^{B}_{i,\ell} + R_{i,\ell}^B}\right]^{-1} + \epsilon_L L_{\ell}^B \ .
\end{equation}
Further justification for this formula is given in Appendix~\ref{sec:EffectiveNoise}.
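The bookkeeping of the effective noise, per multipole, reduces to a one-liner. The Python sketch below (illustrative inputs only) combines the channel noises, foreground residuals, and the delensed lensing spectrum:

```python
def effective_noise(N_chan, R_chan, L_lens, eps_L):
    """Per-multipole effective noise:
    N_eff = [sum_i 1/(N_i + R_i)]^{-1} + eps_L * L_lens."""
    inv = sum(1.0 / (N + R) for N, R in zip(N_chan, R_chan))
    return 1.0 / inv + eps_L * L_lens

# Two identical channels with no residuals halve the noise; the lensing
# floor is then added on top, scaled by the delensing fraction eps_L.
print(effective_noise([2.0, 2.0], [0.0, 0.0], 5.0, 0.2))
```

Setting $\epsilon_L=1$ recovers the no-delensing case, while $\epsilon_L \to 0$ corresponds to perfect delensing.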
\section{Methodology}
\label{sec:method}
We now describe our method for quantifying the detectability of the superhorizon ${\cal B}$-mode signal. We first construct an estimator of the signal and then use it to define the signal-to-noise ratio of the measurement. We will explain that leakage introduces a bias in the estimator and describe a simple debiasing procedure.
The methodology in this section and the next will be formulated mostly in real space, but see Appendix~\ref{sec:harmonic} for an equivalent treatment in harmonic space.
\subsection{Superhorizon Estimator}
We would like to define an estimator of the superhorizon signal~(\ref{supsignal}),
given an estimator $\widehat{\cal C}_\ell$ for the total ${\cal B}$-mode power spectrum after filtering, $\langle \widehat{\cal C}_\ell\rangle = {\cal C}_\ell^{\cal B} \equiv f_\ell C_\ell^{\cal B}$. The associated covariance matrix is\hskip 1pt \footnote{Lensing induces a non-Gaussian contribution to the covariance matrix whose explicit expression can be found in~\cite{Smith:2004up}. We have checked that the degradation caused by the non-Gaussian lensing covariance is much smaller than the systematic uncertainties due to the leakage.}
\begin{equation} \label{equ:CovMat}
\mathscr{C}[\widehat{\cal C}_\ell,\widehat{\cal C}_{\ell'}] = \frac{2}{(2\ell+1) f_\text{sky}} \left({\cal C}^{\cal B}_\ell + {\cal N}^{\cal B}_{{\rm eff},\ell}\right)^2\, \delta_{\ell\ell'}\ .
\end{equation}
Selecting the total signal in the angular interval $\Theta \equiv [\theta_{\rm min}, \theta_{\rm max}]$, with $\theta_{\rm min} \ge \theta_c$, defines
an estimator of the superhorizon signal\hskip 1pt\footnote{In Appendix~\ref{sec:harmonic}, we define the harmonic space equivalent of the estimator (\ref{realestimator}).}
\begin{equation}\label{realestimator}
\widehat{\mathcal{S}}(\theta; \theta_{\rm min}) \,\equiv\, \sum_\ell \frac{2\ell+1}{4\pi}\hskip 1pt \widehat{\cal C}_\ell \hskip 1pt P_\ell(\cos\theta)\hskip 1pt \Pi(\theta)\ ,
\end{equation}
where
\begin{equation}
\Pi(\theta) \equiv \left\{ \begin{array}{ll} 1 &\quad \theta \in [\theta_{\rm min}, \theta_{\rm max}] \\[6pt] 0 &\quad \text{otherwise} \end{array} \right. \ .
\end{equation}
For now, we will keep $\theta_{\rm min}$ general. The precise definition of $\theta_{\rm max}$ is not important, but will be limited by the maximum angular extent of a partial sky observation.
The covariance of the estimator is given by
\begin{align}\label{covreal}
\mathscr{C}[\widehat{\mathcal{S}}(\theta),\widehat{\mathcal{S}}(\theta')] &= \sum_{\ell\ell'} \frac{2\ell+1}{4\pi}\frac{2\ell'+1}{4\pi} \mathscr{C}[\widehat{\cal C}_\ell,\widehat{\cal C}_{\ell'}] \nonumber\\
&\quad \times P_\ell(\cos\theta)P_{\ell'}(\cos\theta')\hskip 1pt\Pi(\theta)\Pi(\theta')\ .
\end{align}
We emphasize that the estimator (\ref{realestimator}) is {\it biased}, since the total signal contains spurious contributions from the filtered subhorizon modes (see~\S\ref{sec:leakage}).
In \S\ref{sec:bias}, we will quantify this bias and define a debiased version of the estimator.
\subsection{Signal-to-Noise}
\label{sec:snr}
To define the signal-to-noise of the measurement, we discretize (\ref{realestimator}) and (\ref{covreal}), and split the signal into $N$ uniformly spaced angular bins $\Theta_{b} \equiv \{\theta_{(b)} \pm \frac{1}{2}\Delta \theta \}$, for $b=1,\ldots, N$. A natural sampling interval is $\Delta\theta \simeq 180^\circ/\ell_\star$, where $\ell_\star$ is the multipole moment at which the covariance matrix~(\ref{covreal}) converges.\footnote{The convergence of (\ref{covreal}) at $\ell_\star$ means that we effectively take into account $\ell_\star$ independent modes of ${\cal C}_\ell^{\cal B}$, in which case the rank of the matrix (\ref{equ:CovMat}) is $\ell_\star$. Since the transformation from harmonic space to real space is linear, the rank of the corresponding covariance matrix in real space is also $\ell_\star$. By restricting to a proper subinterval, $\Theta \equiv [\theta_{\rm min}, \theta_{\rm max}]$, we effectively reduce the rank by a factor of $\sim 180^\circ/(\theta_{\rm max} - \theta_{\rm min})$. Thus, a natural sampling interval is $\Delta\theta = 180^\circ/\ell_\star$. (In practice, the optimal $\Delta\theta$ is slightly larger, since the signal decays before it reaches $\ell_\star$ and we also include the non-Gaussian part of the covariance.) Errors at different angular separations are strongly correlated within the interval $\Theta$, and oversampling will result in an ill-behaved covariance matrix.}
The average signal assigned to each bin is
\begin{equation}
\widehat{\mathcal{S}}_{b} \equiv \frac{1}{Z_{b}}\int\limits_{\Theta_{b}} {\rm d}\theta\sin\theta\,\, \widehat{\mathcal{S}}(\theta)\ ,
\end{equation}
where $Z_{b} \equiv \int_{\Theta_{b}} {\rm d} \theta\, \sin \theta$ is a normalization factor.
The binned covariance matrix is given by
\begin{equation}\label{covmtxreal}
\mathscr{C}_{bb'} \equiv \frac{1}{Z_{b} Z_{b'}} \int\limits_{\Theta_{b}}\int\limits_{\Theta_{b'}} {\rm d} \theta {\rm d} \theta' \sin\theta\sin\theta'\, \mathscr{C}[\widehat{\mathcal{S}}(\theta),\widehat{\mathcal{S}}(\theta')]\ ,
\end{equation}
and the signal-to-noise ratio is defined as
\begin{equation}\label{SNR}
\left({\rm S/N}\right)^2 = \sum_{bb'} \widehat{\mathcal{S}}_{b} \hskip 1pt\mathscr{C}^{-1}_{bb'} \widehat{\mathcal{S}}_{b'}\ ,
\end{equation}
where $\mathscr{C}^{-1}_{bb'}$ is the inverse of (\ref{covmtxreal}).
In \S\ref{sec:forecast}, we will evaluate (\ref{SNR}) for various experimental configurations.
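The quadratic form (\ref{SNR}) is evaluated by solving the linear system $\mathscr{C}\,x = \widehat{\mathcal{S}}$ rather than inverting the covariance explicitly, which is numerically better conditioned. A minimal Python sketch (illustrative; a real analysis would use a linear-algebra library):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            r = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= r * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j]
                              for j in range(i + 1, n))) / M[i][i]
    return x

def signal_to_noise(S, C):
    """(S/N)^2 = S^T C^{-1} S, cf. eq. (SNR); returns S/N."""
    x = solve(C, S)
    return sum(s * xi for s, xi in zip(S, x)) ** 0.5

# Uncorrelated bins with unit variance: S/N is the Euclidean norm of S.
print(signal_to_noise([3.0, 4.0], [[1.0, 0.0], [0.0, 1.0]]))  # -> 5.0
```

Correlations between bins reduce the effective number of independent measurements, lowering the signal-to-noise relative to the diagonal case.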
\subsection{Leakage and Debiasing}
\label{sec:bias}
Since the total signal ${\cal S}(\theta)$ contains spurious modes from the leakage of the filtered subhorizon modes,
$\widehat{\cal S}(\theta)$ is a biased estimator of the true superhorizon signal~(\ref{supsignal}).\footnote{Another type of bias arises from the $E$-$B$ mixing in partial sky observations. This bias is well-understood and can be treated by substituting the pseudo-$C_\ell$ estimators considered in~\cite{Smith:2005gi} for $\widehat{C}_\ell^{\cal B}$.}
We will quantify this bias by comparing the signal-to-noise of the expected total signal with that of the spurious subhorizon modes.
\vskip 4pt
Let us write the estimator~(\ref{realestimator}) as
$\widehat{\cal S}(\theta) = \widetilde{\cal S}(\theta) + {\cal S}^\dagger(\theta)$, where $ \widetilde{\cal S}(\theta)$ denotes the unbiased estimator (i.e.~the estimator of the pure superhorizon component) and ${\cal S}^\dagger(\theta)$ is the subhorizon signal.
The total signal-to-noise (\ref{SNR}) can then be written as
\begin{align}
\left({\rm S/N}\right)^2 &= \sum_{bb'} \left( \widetilde{\cal S}_{b} \hskip 1pt\mathscr{C}^{-1}_{bb'} \widetilde{\cal S}_{b'} + 2\hskip 1pt\widetilde{\cal S}_{b} \hskip 1pt\mathscr{C}^{-1}_{bb'} {\cal S}^\dagger_{b'} + {\cal S}^\dagger_{b} \hskip 1pt\mathscr{C}^{-1}_{bb'} {\cal S}^\dagger_{b'} \right) \nonumber \\[2pt]
&\equiv ({\rm S}/{\rm N})^2_+ + ({\rm S}/{\rm N})^2_\times + ({\rm S}/{\rm N})^2_-\ ,
\end{align}
where $({\rm S}/{\rm N})_+$ and $({\rm S}/{\rm N})_-$ denote the parts coming from the true superhorizon modes and the subhorizon leakage, respectively, while $({\rm S}/{\rm N})_\times$ stands for their cross-correlation. We will use
\begin{equation}\label{LF}
\delta \equiv \frac{({\rm S}/{\rm N})_-}{{\rm S/N}}
\end{equation}
as a diagnostic tool for quantifying the amount of leakage and, hence, the bias in the estimator~(\ref{realestimator}).
For small values of $\delta$, we know that the expected signal is dominated by the true superhorizon modes.
We will consider optimizing the analysis (e.g.~by adjusting $\ell_s$ and $\theta_{\rm min}$), so that we get the maximum signal-to-noise while keeping the leakage fraction (\ref{LF}) small.
We typically take an acceptable leakage fraction to be~$\delta \le 0.1$.
Alternatively, we can correct for the bias of the estimator~(\ref{realestimator}) through a simple debiasing procedure.
Subtracting the expected ensemble average of the spurious subhorizon mode from the estimator~(\ref{realestimator}) leads to an unbiased estimator of the pure superhorizon signal:
\begin{equation}
\label{debiased}
\widetilde{\cal S}(\theta) \equiv \widehat{\cal S}(\theta) - {\cal S}^\dagger(\theta)\ .
\end{equation}
In this case, we can treat the subhorizon signal as an extra source of noise.
Applying this debiasing procedure, we may improve the signal-to-noise by allowing a smaller smoothing scale $\ell_s$ and/or a larger angular interval $\Theta$.
\section{Signal-to-Noise Forecasts}
\label{sec:forecast}
Finally, we are ready to investigate the detectability of the superhorizon ${\cal B}$-mode signal for current and future experiments.
The signal-to-noise will, of course, depend on the tensor-to-scalar ratio of the primordial fluctuations. We will consider both a fiducial value of $r=0.13$ (which corresponds to the amplitude suggested by BICEP2~\cite{Ade:2014xna} and is also the canonical value of $m^2 \phi^2$ chaotic inflation~\cite{Linde:1983gd}), as well as the wider range $r=[0.001,0.2]$.
\subsection{Preliminaries}
We will use the estimators (\ref{realestimator}) and (\ref{debiased}) defined on the interval $\Theta = [\theta_{\rm min}, \theta_{\rm max}]$, with $\Delta\theta=0.30^\circ$.
For simplicity, we will fix $\theta_{\rm max} = 6.0^\circ$ throughout.
For $\theta_{\rm min}$, we will consider two different choices:
\begin{enumerate}
\item[(I)] For the biased estimator~(\ref{realestimator}), we compute the signal-to-noise on an interval with $\theta_{\rm min} = 2.6^\circ$, where the leakage from subhorizon modes is guaranteed to be small and constrained by causality.
\item[(II)] For the debiased estimator~(\ref{debiased}), we compute the signal-to-noise on an extended interval with $\theta_{\rm min} = 1.0^\circ$, which is where the filtered pure superhorizon signal\footnote{Below we will show that the Gaussian (\ref{equ:Fell}) with $\ell_s =200$ is a conservative filter function. We will take this as our fiducial choice of filtering, but also investigate the possibility of optimizing the smoothing scheme in particular examples. } starts to become appreciable (c.f.~fig.~\ref{fig:CBfiltered}).
\end{enumerate}
The estimator (I) is clearly more conservative, but also rejects a significant fraction of the inflationary superhorizon signal. The estimator (II), on the other hand, includes all superhorizon modes, but is less immune to spurious subhorizon contamination due to leakage. Although the known bias due to the inflationary subhorizon modes has been corrected for in the estimator~(\ref{debiased}),
a signal on the interval $[1^\circ, 2^\circ]$ from non-inflationary sources is strictly speaking not forbidden by causality. To perform a true causality test of inflationary tensor modes, we therefore aim to detect the signal with the estimator (I).
Nevertheless, we will also show results for the estimator~(II) which quantifies the signal-to-noise of the total superhorizon signal from inflation. In that case, the caveat that we just stated should be kept in mind.
\vskip 4pt
We will consider two sets of foreground models:
\begin{itemize}
\item Ground-based experiments (\S\ref{sec:ground}) can target small, but exceptionally clean, patches of the sky, and lower estimates for the foreground amplitudes are therefore appropriate.
\item Space-based all-sky experiments (\S\ref{sec:space}) cannot restrict themselves to the cleanest patches, so we will use higher foreground levels in those cases.
\end{itemize}
Our precise choices for the foreground amplitudes will depend on the experiment under consideration and will be presented in the following sections.
\subsection{Ground-Based Experiments}
\label{sec:ground}
We first consider the capabilities of ground-based experiments, as illustrated by a few representative examples.
\subsubsection{Keck Array}
The BICEP2 experiment has recently been upgraded to the Keck Array~\cite{Ogburn:2012ma}.
The Keck Array, unlike BICEP2, has multiple frequency channels, and the combination of its 95, 150, and 220 GHz detectors yields an effective noise of $\Delta_{P,{\rm eff}}=2.2\, \mu{\rm K}'$ ($s_{P,{\rm eff}}=14.2\, \mu\text{K}'$).
The 95 and 150 GHz channels are already in operation, and the 220~GHz channel will be added soon. In the near future, the BICEP3 experiment~\cite{Ahmed:2014ixy} will start to observe the same part of the sky, with higher sensitivity at 95~GHz.
In combination with the Keck Array, the effective noise will then reduce to $\Delta_{P,{\rm eff}}=1.4\, \mu{\rm K}'$ ($s_{P,{\rm eff}}=9\, \mu\text{K}'$). In the following, we will refer to this combination of the Keck Array and BICEP3 simply as the `Keck Array'.
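The effective noise levels quoted above arise from combining several frequency channels. As a rough numerical sketch (the per-channel sensitivities below are hypothetical placeholders, and we assume a simple inverse-variance combination of uncorrelated channels, which ignores the frequency weighting used in an actual foreground-marginalized analysis):

```python
import math

def combine_noise(channel_noises):
    """Inverse-variance combination of per-channel polarization map
    noise levels (in muK-arcmin), assuming uncorrelated noise."""
    inv_var = sum(1.0 / n ** 2 for n in channel_noises)
    return 1.0 / math.sqrt(inv_var)

# Hypothetical per-channel sensitivities (illustrative values only):
channels = [3.4, 3.0, 10.0]
delta_eff = combine_noise(channels)
```

The combined noise always lies below that of the best single channel, which is why adding BICEP3 to the Keck Array lowers $\Delta_{P,{\rm eff}}$.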
As for BICEP2, observations are made in the ``Southern Hole'' ($f_{\rm sky} = 0.024$), a region where both galactic and extragalactic foreground emissions are expected to be very low. For the foreground amplitudes in the Southern Hole, we will use the estimates given in~\cite{Flauger:2014qra}:
\begin{align}
A_s &\ =\ \xi_s \times (1.5 \times 10^{-7}\, \mu{\rm K}^2) \ , \label{As}\\
A_d &\ =\ \xi_d \times (1.8 \times 10^{-6}\, \mu{\rm K}^2)\ , \label{Ad}
\end{align}
where these amplitudes are measured at $\nu_0=100\, {\rm GHz}$ and $\ell_0=100$.
The parameters $\xi_s$ and $\xi_d$ parameterize our uncertainty about the synchrotron and dust amplitudes in the Southern Hole. We will use $\xi_s = [0.67,1.33]$ and $\xi_d = [0.33,1.67]$, which correspond to the 1$\sigma$ uncertainties in~\cite{Flauger:2014qra}.
Using the 220 GHz map of the Keck Array as a template, internal foreground removal of polarized dust emission at lower frequencies will be possible to some extent. This requires the spectral index of the dust signal to be well-constrained, which will be the case if external information from Planck is folded in. Our uncertainty in the level of foreground residuals that can ultimately be achieved will be characterized by the parameters $\epsilon_s$ and $\epsilon_d$ in (\ref{RlB}).
The large beam size of the Keck Array ($\theta_b \sim 30'$) means that internal delensing will not be possible; yet a joint analysis with a higher resolution experiment observing the same part of the sky may allow some modest amount of delensing.
SPTPol~\cite{Hanson:2013hsb} is indeed also observing in the Southern Hole, but its current sensitivity is not at a level that would make delensing a realistic possibility. In the following, we will therefore assume Keck Array observations without any delensing as the default, i.e.~$\epsilon_L = 1$ in~(\ref{equ:lensing}), but also give results invoking a small amount of delensing, $\epsilon_L =\{0.5, 0.3\}$, as might become possible with an upgrade of SPTPol.
\subsubsection{Simons Array}
The Simons Array~\cite{Lee} is a planned successor of the POLARBEAR experiment~\cite{Kermish:2012eh, doi:10.1117/12.926158}.
Located in the Atacama desert in Chile, it will provide high-resolution observations of a relatively large fraction of the sky ($f_{\rm sky} =0.2$).
The frequency bands of the Simons Array are the same as those of the Keck Array: 95, 150, 220~GHz. The effective noise level is $\Delta_{P,{\rm eff}}=6.3\, \mu{\rm K}'$ ($s_{P,{\rm eff}}=14.1\, \mu{\rm K}'$).
In the absence of detailed information about the polarized emission in the region observed by the Simons Array, we will use the same foreground levels (\ref{As}) and (\ref{Ad}) as for the Keck Array, with the same, relatively large, uncertainties.
Its small beam size ($\theta_b=2.7'$ at 220 GHz) allows the Simons Array to serve as a useful probe of the gravitational lensing of the CMB on small angular scales, and internal delensing will be possible to some degree. We will thus show results for $\epsilon_L = \{0.5, 0.3\}$.
\subsubsection{Results}
In fig.~\ref{fig:SNR}, we present results for the signal-to-noise achievable by the Keck Array and the Simons Array for the fiducial value $r=0.13$ as a function of the level of foreground cleaning $\epsilon_d$. Shown are various levels of the delensing fraction $\epsilon_L = \{1,0.5,0.3\}$.
We see that a $3\sigma$ detection will be marginally possible with the Simons Array if both delensing and foreground cleaning can be achieved to a relatively high standard.
On the other hand, a detection with the Keck Array does not look feasible.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Fig4.pdf}
\vskip -4pt
\caption{Signal-to-noise on the interval $[2.6^\circ,6.0^\circ]$ for $r=0.13$ as a function of $\epsilon_d$. The plot shows experiments with Keck Array (bottom) and Simons Array (top) specifications for three different delensing fractions: $\epsilon_L=1.0$ (dot-dashed, red), $\epsilon_L=0.5$ (dashed, blue), and $\epsilon_L=0.3$ (solid, black). The bands correspond to the uncertainty in the foreground amplitudes, $\xi_s=[0.67,1.33]$ and $\xi_d =[0.33,1.67]$.}
\label{fig:SNR}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Fig5.pdf}
\vskip -4pt
\caption{Signal-to-noise (solid) and leakage fraction (dashed) for $r=0.13$ as a function of $\ell_s$ for experiments with Keck Array (red) and Simons Array (black) specifications.
Only a single curve is shown for $\delta$ because the curves for the Keck Array and the Simons Array are almost identical.
The plot assumes $\epsilon_L=\{\epsilon_s,\epsilon_d\}=0.5$. Decreasing the smoothing scale from $\ell_s=200$ to $\ell_s=150$ increases the signal-to-noise by about~15\%. The leakage fraction $\delta$ is less than 10\% as long as $\ell_s\gtrsim 140$.}
\label{fig:ls}
\end{figure}
The above results were derived using our canonical choice of filtering: the Gaussian filter (\ref{equ:Fell}) with $\ell_s=200$.
Slight improvements in the signal-to-noise are possible by optimizing the smoothing scheme. Fig.~\ref{fig:ls} shows the dependence of the signal-to-noise and the leakage fraction on the smoothing parameter $\ell_s$ for $r=0.13$ and $\epsilon_L= \{\epsilon_s,\epsilon_d\}=0.5$.
We see that the signal-to-noise initially increases with~$\ell_s$, reaches a maximum at $\ell_s \simeq 120$, and then decreases as more small-scale noise is admitted at higher $\ell_s$. At the maximum, ${\rm S/N}=2.2$ and 3.8 for the Keck Array and the Simons Array, respectively. The leakage fraction at the maximum is $\delta=0.11$.
For optimal results, we pick the smoothing scale in such a way that it maximizes the signal-to-noise while keeping $\delta<0.1$ for all values of $r$ that yield ${\rm S/N}>3$. The optimal smoothing scale for both experiments is then $\ell_s=150$, giving a 15\% increase in the signal-to-noise (see fig.~\ref{fig:ls}).\footnote{We have also tested other forms of filtering functions.
For example, using a tanh-filter, we were able to achieve a 10 to $20\%$ improvement on the overall signal-to-noise with similar degrees of leakage for various parameters and values of $r$.
This is because the tanh-filter is characterized by two smoothing parameters (the cut-off scale and the width), and this extra degree of freedom allows us to control the filtering process more precisely, giving us more optimized results. However, for simplicity of presentation, all the results in the paper were produced with the Gaussian filter (\ref{equ:Fell}). \label{foot:filter} } With this optimization, a more than $3\sigma$ detection becomes possible with the Simons Array even for only modest amounts of cleaning, $\epsilon_L=\{\epsilon_s,\epsilon_d\}=0.5$.
To achieve a similar level of significance with the Keck Array, we still require a high level of cleaning, $\epsilon_L=\{\epsilon_s,\epsilon_d\}=0.1$.
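For concreteness, a minimal sketch of the two filter families discussed here (the functional forms below are assumptions for illustration: eq.~(\ref{equ:Fell}) defines the actual Gaussian filter, and the tanh-filter parameters are invented):

```python
import numpy as np

def gaussian_filter(ell, ell_s):
    """One-parameter low-pass filter; assumed form exp(-(ell/ell_s)^2)."""
    return np.exp(-(ell / ell_s) ** 2)

def tanh_filter(ell, ell_c, width):
    """Two-parameter alternative (cut-off scale and width), giving
    finer control over the trade-off between signal and leakage."""
    return 0.5 * (1.0 - np.tanh((ell - ell_c) / width))

ells = np.arange(2, 600)
g = gaussian_filter(ells, ell_s=150)
t = tanh_filter(ells, ell_c=150, width=30)
```

Both filters pass the large-scale (superhorizon) modes and suppress the small-scale modes responsible for leakage; the extra width parameter of the tanh filter is what allows the 10 to 20\% optimization mentioned in footnote~\ref{foot:filter}.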
\vskip 4pt
One may argue that we have been too conservative by choosing $\theta_{\rm min} =2.6^\circ$ as our criterion for the superhorizon signal. In particular, as can be seen from fig.~\ref{fig:CBfiltered}, a large part of the inflationary superhorizon signal is not captured by this definition. In order to quantify the size of the total signal, we therefore also consider the extended interval with $\theta_{\rm min} = 1^\circ$. We use the debiased estimator so that the known leakage of inflationary subhorizon modes is corrected for.
Fig.~\ref{fig:debiased} shows the signal-to-noise on the interval $[1.0^\circ,6.0^\circ]$ as a function of $r$ without optimization of the filtering. We see that a $3\sigma$ detection will be possible if $r\gtrsim 0.1$ and 0.04 for the Keck Array and the Simons Array, respectively, assuming a modest amount of delensing and foreground removal of 50\%. With the optimization described above, we get ${\rm S/N}>3$ if $r\gtrsim 0.05$ and 0.025 for the Keck Array and the Simons Array, respectively.
While this detection would not constitute a perfect causality test, it would still be a strong indication of inflationary superhorizon tensors. Moreover, at sufficiently high ${\rm S/N}$ it will be possible to measure the shape of the signal in fig.~\ref{fig:CBfiltered}, which would further strengthen this interpretation.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Fig6.pdf}
\vskip -4pt
\caption{Signal-to-noise on the extended interval $[1.0^\circ,6.0^\circ]$ as a function of $r$ for experiments with Keck Array (red) and Simons Array (black) specifications. The foreground amplitudes have been fixed to the mean values in (\ref{As}) and (\ref{Ad}).
The different curves correspond to $\epsilon_L=\{\epsilon_s,\epsilon_d\}=0.1$ (solid), $\epsilon_L=0.5$, $\{\epsilon_s,\epsilon_d\}=0.1$ (dashed), and $\epsilon_L=\{\epsilon_s,\epsilon_d\}=0.5$~(dot-dashed). }
\label{fig:debiased}
\end{figure}
\subsection{Space-Based Experiments}
\label{sec:space}
To perform a true causality test, the ${\cal B}$-mode signal has to be measured above $\theta_{\rm min} = 2.6^\circ$. We have seen that, for $r >0.1$, this is (marginally) possible with ground-based experiments.
For $r < 0.1$, on the other hand, a future satellite mission will be required.
For purposes of illustration, we now examine the LiteBIRD~\cite{Matsumura:2013aja} and COrE~\cite{Bouchet:2011ck} proposals.
\vskip 4pt
All-sky surveys do not have the luxury of observing only the cleanest patches of the sky, so we need to adjust our estimates for the expected foreground levels accordingly.
The level of polarized synchrotron emission is constrained by the WMAP polarization measurements between 23 and 94 GHz~\cite{Page:2006hz}.
Those results imply
\begin{equation}
A_s\ \simeq\ 5.8 \times 10^{-7}\, \mu{\rm K}^2 \ ,
\end{equation}
which is comparable to the 95\% upper limit of the synchrotron amplitude determined by DASI~\cite{Leitch:2004gd}.
For polarized dust emission, we take the template used by the Planck collaboration in~\cite{Aumont, Ade:2014zja} which, for $f_{\rm sky}=0.7$, gives
\begin{equation}
A_d \ \simeq\ 5.5 \times 10^{-5}\, \mu{\rm K}^2\ .
\end{equation}
This choice is consistent with the FDS model~\cite{Finkbeiner:1999aq} with an average polarization fraction of about~7\%. Both of the above amplitudes are defined with respect to $\nu_0 = 100\, {\rm GHz}$ and $\ell_0=100$.
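The amplitudes above fix the normalization of the foreground spectra at the pivot scales; extrapolation to other frequencies and multipoles proceeds via power laws. A schematic sketch (the spectral and angular indices used below are illustrative placeholders, not the values adopted in eq.~(\ref{RlB})):

```python
def foreground_power(A, ell, nu, alpha, beta, ell0=100, nu0=100.0):
    """Schematic power-law foreground spectrum
    C_ell(nu) = A * (ell/ell0)**alpha * (nu/nu0)**(2*beta),
    normalized at ell0 = 100 and nu0 = 100 GHz as in the text."""
    return A * (ell / ell0) ** alpha * (nu / nu0) ** (2 * beta)

A_d = 5.5e-5  # dust amplitude in muK^2 (all-sky value from the text)
# Illustrative indices only (NOT the paper's fitted values):
dust_95 = foreground_power(A_d, ell=80, nu=95.0, alpha=-2.4, beta=1.6)
dust_150 = foreground_power(A_d, ell=80, nu=150.0, alpha=-2.4, beta=1.6)
```

With a rising dust spectrum, lower-frequency channels are intrinsically cleaner with respect to dust, which is part of what makes the 95 GHz channels so valuable.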
\subsubsection{LiteBIRD}
LiteBIRD~\cite{Matsumura:2013aja} is a next-generation full-sky satellite experiment, optimized to probe large-scale B-mode polarization. It is equipped with six frequency bands in the range from 60 to 280 GHz. This frequency coverage is wide enough to perform a high level of foreground removal of both synchrotron and dust~\cite{Katayama:2011eh}. We will therefore consider relatively small values of $\epsilon_s$ and $\epsilon_d$, namely 0.1 (realistic) and 0.01 (optimistic).
The large beams of the LiteBIRD experiment mean that delensing will only be possible in a joint analysis with external data sets~\cite{Namikawa:2014yca}. We will assume that this will be possible only to a modest degree, $\epsilon_L \ge 0.5$.
\subsubsection{COrE}
COrE~\cite{Bouchet:2011ck} is a proposed space mission which is anticipated to deliver a full-sky CMB polarization map with a sensitivity 10 to 30 times better than its predecessor Planck. With 15 frequency bands between 45 and 795~GHz, COrE will allow a very high degree of foreground cleaning, so we will consider $\{\epsilon_s, \epsilon_d\} = $ 0.1 (pessimistic) and 0.01 (realistic).
The small beams of COrE also mean that a significant amount of internal delensing can be achieved, so we take a delensing fraction of $\epsilon_L = 0.1$ as a realistic assumption~\cite{Bouchet:2011ck}.
\subsubsection{Results}
Fig.~\ref{fig:SNR2} displays the signal-to-noise for LiteBIRD and COrE as a function of $r$. %
We see that a $3\sigma$ detection will be possible if $r>0.04$ (0.01) with $\{\epsilon_s,\epsilon_d\}=0.1$, and $r>0.02$ (0.007) with $\{\epsilon_s,\epsilon_d\}=0.01$, for LiteBIRD~(COrE).
Depending on the actual delensing level attained by these experiments, the detection bounds stated above may shift slightly. In any case, by incorporating the optimization scheme described earlier, the signal-to-noise can be improved by about 20\%.
Thus, both LiteBIRD and COrE are capable of detecting the superhorizon ${\cal B}$-mode signal for $r\gtrsim 0.01$ in most realistic scenarios.
For $0.001<r<0.01$, a statistically significant detection will only be possible if the extended interval $[1.0^\circ,6.0^\circ]$ is used.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/Fig7.pdf}
\vskip -4pt
\caption{Signal-to-noise on the interval~$[2.6^\circ,6.0^\circ]$ as a function of $r$ for experiments with COrE (black) and LiteBIRD (red) specifications. The solid lines correspond to $\{\epsilon_s,\epsilon_d\}=0.01$, while the dashed lines assume $\{\epsilon_s,\epsilon_d\}=0.1$. The delensing fractions have been fixed to
$\epsilon_L=0.5$ and $\epsilon_L=0.1$ for LiteBIRD and COrE, respectively.}
\label{fig:SNR2}
\end{figure}
\subsection{Summary}\label{sec:summary}
The conclusions of this section are summarized in fig.~\ref{fig:sfsky}, which shows the signal-to-noise on the interval $[2.6^\circ,6.0^\circ]$ for $r=0.13$ as a function of the sky fraction $f_{\rm sky}$ and the effective instrumental sensitivity $s_{P,\rm eff}$.
This time the residual foreground amplitudes have been fixed to $A_s=5.8 \times 10^{-9}\, \mu{\rm K}^2$ and $A_d=5.5\times 10^{-7}\, \mu{\rm K}^2$ at $\nu_0=100\, {\rm GHz}$, $\ell_0=100$.
As we can see, for experiments with high instrumental sensitivity, $s_{P,{\rm eff}}\lesssim 20\, \mu{\rm K}'$, sky coverage is the main factor determining whether the signal is detectable. This is because $\text{S/N}\propto \sqrt{f_\text{sky}}$
in the cosmic variance limit, whereas $\text{S/N}\propto 1/s_{P,{\rm eff}}^2$ for experiments dominated by instrumental noise.
Hence, full-sky satellite missions have the best prospects for measuring the superhorizon ${\cal B}$-mode signal, though ground-based experiments such as the Simons Array remain feasible if $r \gtrsim 0.1$.
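The scaling argument above can be illustrated with a toy interpolation between the two regimes (the normalization and crossover scale below are invented for illustration; the actual contours in fig.~\ref{fig:sfsky} come from the full computation):

```python
import math

def toy_snr(f_sky, s_eff, crossover=20.0):
    """Toy signal-to-noise interpolating between the cosmic-variance
    limit, S/N ~ sqrt(f_sky), and the noise-dominated limit,
    S/N ~ 1/s_eff**2. 'crossover' (muK-arcmin) is an invented scale."""
    w = (crossover / s_eff) ** 2
    return math.sqrt(f_sky) * w / (1.0 + w)

# At high sensitivity, sky coverage dominates the achievable S/N:
full_sky = toy_snr(f_sky=0.7, s_eff=5.0)
patch = toy_snr(f_sky=0.024, s_eff=5.0)
```

In the noise-dominated regime the same function falls off as $1/s_{P,{\rm eff}}^2$, reproducing the qualitative shape of the contours in fig.~\ref{fig:sfsky}.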
\begin{figure}[ht]
\centering
\includegraphics[width=0.42\textwidth]{Figures/Fig8.pdf}
\vskip -4pt
\caption{ Signal-to-noise on the interval $[2.6^\circ,6.0^\circ]$ for $r=0.13$ as a function of $f_\text{sky}$ and $s_{P,{\rm eff}}$. The plot was created using the optimized Gaussian filter with $\ell_s=150$ and assumes 50\% delensing. The dashed line indicates the $3\sigma$ detection bound.}
\label{fig:sfsky}
\end{figure}
\section{Conclusions}\label{sec:conclusion}
The significance of a detection of primordial B-modes cannot be overstated~\cite{Baumann:2008aq, Abazajian:2013vfg, Baumann:2014nda, Baumann:2014cja}. Hence, the community is eagerly awaiting a confirmation of the cosmological character of the signal observed by the BICEP2 team~\cite{Ade:2014xna}. However, even if the signal is established to be of primordial origin, we still wish to determine whether it was generated by vacuum fluctuations during inflation or has an alternative, post-inflationary origin.
In this paper, we have revisited the proposal of~\cite{Baumann:2009mq} for using the superhorizon part of the B-mode spectrum in real space as a model-insensitive diagnostic of inflationary gravitational waves. We found that the causality test for B-modes in its original form is ambiguous, since one must deal with the mixing between subhorizon and superhorizon modes that is induced by the finite resolution of the experiment and the smoothing of the raw data.
We have quantified this effect and shown how future experiments have to be designed in order to maximize the signal-to-noise of the superhorizon signal while rejecting unwanted contamination from spurious subhorizon modes.
We have found that future ground-based experiments are capable of detecting the superhorizon ${\cal B}$-mode signal at more than $3\sigma$ significance, if the tensor-to-scalar ratio is as large as suggested by BICEP2~\cite{Ade:2014xna}, i.e.~if $r\gtrsim 0.1$. If the value of $r$ is significantly smaller, then the measurement will require a full-sky survey.
We have found that a $3\sigma$ detection is possible with LiteBIRD and COrE as long as $r\gtrsim 0.01$, and if 90\% foreground cleaning and more than 50\% delensing can be achieved.
We believe that using the superhorizon estimator is a powerful model-independent way to test for the inflationary origin of tensor modes and look forward to seeing it applied to future data, including the experiments considered in this work.
\vskip 10pt
{\it Acknowledgements.} We thank Anthony~Challinor, Levon~Pogosian and Matias~Zaldarriaga for helpful discussions. D.B. and H.L.~gratefully acknowledge support from the European Research Council (ERC STG grant 279617). H.L.~also acknowledges support from the Cambridge Overseas Trust, the Lord Rutherford Memorial Research Fellowship, the Sims Empire, and the William Georgetti Scholarship. S.S.~acknowledges support from the Croucher Foundation.
\newpage
\section{Two quantization methods and the issue of
frequency decomposition}
The conventional (``old'') method to quantize a
theory containing a momentum-squared constraint is inspired by
the situation of a (scalar) particle moving in a space-time
background.
Restricting ourselves to finite dimensional systems, the wave
equation (minisuperspace Wheeler-DeWitt equation) is of the type
\begin{equation}
{\cal C}\,\psi \equiv
(-\nabla_\alpha \nabla^\alpha + U )\,\psi = 0\, ,
\label{WDW}
\end{equation}
with $\nabla_\alpha \nabla^\alpha$ being the Laplacian with respect to
some pseudo-Riemannian metric $g_{\alpha\beta}$ on a finite
dimensional manifold ${\cal M}$, the real function $U$ playing
the role of a potential.
The space of sufficiently well-behaved solutions
admits the well-known indefinite Klein-Gordon type scalar product
\begin{equation}
Q(\psi_1,\psi_2)= -\,\frac{i}{2}\, \int_\Sigma d\Sigma^\alpha\,
(\psi_1^*\stackrel{\leftrightarrow}{\nabla}_\alpha \psi_2) \, ,
\label{KG}
\end{equation}
where $\Sigma$ is a spacelike hypersurface
(with sufficiently regular asymptotic behaviour).
If the background structure
$({\cal M},g_{\alpha\beta},U)$ admits a local symmetry with timelike
trajectories (as e.g. for flat $g_{\alpha\beta}$ and
constant non-negative $U$,
in which case (\ref{WDW}) is the Klein-Gordon equation)
there is a unique decomposition of wave functions
into positive and negative frequency modes
($Q$ restricted to the positive/negative frequency sector
being a positive/negative definite scalar product, and the two
sectors being orthogonal to each other).
In the case of a generic background, it is common folklore that there
exists no such unique decomposition
\cite{Kuchar}.
\medskip
However, there is another quantization scheme for the same sort
of systems that starts from the inner product
\begin{equation}
\langle\psi_1,\psi_2\rangle
=\int_{\cal M} d^n x\,\sqrt{-g}\,\psi_1^*\psi_2
\label{KIN}
\end{equation}
on the Hilbert space of square integrable functions on the manifold.
The basic idea of how to proceed dates back to the Sixties
(the earliest reference I am aware of is Nachtmann
\cite{Nachtmann}),
but no tradition seems to have emerged from it
(see however Refs. \cite{RumpfUrbantke}).
After a re-invention of the ansatz
in the Nineties
\cite{Higuchi,Landsmanetal,Marolf},
this approach has been developed further and has become a viable
method by which the quantization of full general relativity is currently
being attacked \cite{Aetal}.
It runs under several names, the best known being
``refined algebraic quantization'' (others being
``Rieffel induction'' and ``group averaging'').
\medskip
Without going into the details, I just summarize that (\ref{KIN})
gives rise to a positive definite inner product
$\langle\, , \,\rangle_{{}_{\rm phys}}$ on a suitably defined
set of solutions of (\ref{WDW}).
(When inserting two solutions of (\ref{WDW})
into (\ref{KIN}), one would in general
obtain an infinite result, but if the wave operator $\cal C$ in
(\ref{WDW}) is self-adjoint, this can be cured by ``dividing by an
infinite constant'' or, more precisely, by averaging over the group
generated by $\cal C$). Thus, upon completion,
one ends up with what is usually called
the physical Hilbert space
$({\cal H}_{{}_{\rm phys}}, \langle\, , \,\rangle_{{}_{\rm phys}})$.
\medskip
Most researchers employing this new scheme simply forget about the
structure (\ref{KG}) that has played an important role in the early
years of quantum gravity. Since in quantum gravity or quantum cosmology,
(mini)superspace plays a role fundamentally different from the
space-time manifold in the particle quantization problem,
one may take the point of view that the notion of positive and
negative frequency modes does not play any substantial role there.
However, the structure (\ref{KG}) still exists, even if it is
not paid any attention.
(I ignore here the problem that (\ref{KG}) is ill-defined in the
full superspace context and must be regularized.
In the framework under consideration, $Q$ is well-defined
on ${\cal H}_{{}_{\rm phys}}$.)
\medskip
What can we infer from the fact that $Q$ and
$\langle\, , \,\rangle_{{}_{\rm phys}}$
{\it coexist} on one and the same
space? The former quantity may be represented
in terms of the latter by
$Q(\psi_1,\psi_2) = \langle \psi_1, K \psi_2\rangle_{{}_{\rm phys}}$,
where $K$ is a linear (supposedly self-adjoint) operator.
In reasonable cases, its
(generalized) eigenvalues come in pairs
$(-\lambda,\lambda)$ away from zero (in the case of the Klein-Gordon equation
we have even $K^2=1$), so that the Hilbert space uniquely decomposes as
${\cal H}_{{}_{\rm phys}} = {\cal H}^+ \oplus {\cal H}^-$.
Moreover, $Q$ is positive/negative definite on ${\cal H}^\pm$ and
the two subspaces are orthogonal to each other with respect to both scalar
products. This decomposition has been singled out by the
global structure of
$({\cal M},g_{\alpha\beta},U)$ (note that no
local symmetry is necessary for the construction to work).
Recently, Hartle and Marolf have exploited the coexistence
of the two scalar products, though with different motivation
\cite{HartleMarolf}.
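The decomposition induced by $K$ can be made concrete in a finite-dimensional toy model (the matrix below is invented purely to illustrate the spectral splitting; it is not derived from any particular wave operator):

```python
import numpy as np

# Toy self-adjoint "K" with eigenvalues occurring in pairs (-lambda, lambda):
lam = np.array([1.0, 1.0, 2.0])
rng = np.random.default_rng(0)
Q_basis, _ = np.linalg.qr(rng.standard_normal((6, 6)))  # random orthogonal basis
K = Q_basis @ np.diag(np.concatenate([lam, -lam])) @ Q_basis.T

evals, evecs = np.linalg.eigh(K)
P_plus = evecs[:, evals > 0] @ evecs[:, evals > 0].T    # projector onto H^+
P_minus = evecs[:, evals < 0] @ evecs[:, evals < 0].T   # projector onto H^-
```

The projectors are orthogonal ($P_+P_-=0$) and complete ($P_+ + P_- = \mathbb{1}$), mirroring the splitting ${\cal H}_{{}_{\rm phys}} = {\cal H}^+ \oplus {\cal H}^-$ described above, with $Q$ definite of the corresponding sign on each subspace.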
\section{Understanding new issues in terms of old methods?}
Can the decomposition defined above be viewed as ``the correct''
identification of positive and negative
frequencies? Note that the refined algebraic quantization scheme
provides a structure that
is in a sense invisible to the Klein-Gordon type approach
{\it although it does not need any additional input}.
Perhaps a clarification of this situation could improve our
understanding of what happens when we quantize a constrained system.
I cannot resolve this puzzle, but I would like to
mention a candidate for a procedure defined
within the Klein-Gordon framework
but transcending the differential geometric
setting. It might possibly show us how to make contact between
the two methods.
\medskip
In Ref. \cite{FE}, a framework for treating quite general wave equations
of the type (\ref{WDW}) with positive potential was proposed.
Writing the wave function as $\psi = \chi D e^{i S}$,
(with $S$ being a sufficiently globally regular
solution of the classical Hamilton-Jacobi
equation and $D$ a real function satisfying a certain
conservation equation), the wave equation (\ref{WDW}) reads
$i\, \partial_t \chi = (\,\frac{1}{2}\,\partial_{tt} + h)\chi$.
Here $t=-S$ and $h$ is a linear differential operator acting
tangential to the hypersurfaces $\Sigma_t$ of constant $t$.
Although this resembles a WKB scheme, no approximation
has been applied so far.
In Ref. \cite{FE} it is argued that if an operator
$H$ is defined as a series of the type
\begin{equation}
H = h -\,\frac{1}{2}\, h^2-\,\frac{i}{2}\,\dot{h} +
\frac{1}{2}\, h^3 +\frac{i}{2}\,\{h,\dot{h}\}-\,
\frac{1}{4}\, \ddot{h} + \dots
\label{iter}
\end{equation}
(as emerging from the iteration of a certain differential equation),
where $\dot{h}\equiv [ \partial_t, h]$, then
any solution of the Schr{\"o}dinger equation
\begin{equation}
i\,\partial_t \chi = H \chi
\label{SCHR}
\end{equation}
solves (\ref{WDW}).
The actual convergence of (\ref{iter}) seems to depend
on the particular background $({\cal M},g_{\alpha\beta},U)$, and
it is here that some models might be excluded (while
remaining intact from the point of view of differential geometry).
In case of convergence (which has been checked for the simple
cases $h=a + b t + c t^2$, $h=\alpha/t$ and
$h=\beta/t^2$, the last of which has applications to
FRW quantum cosmology),
the set of solutions $\psi$ obtained in this way
forms a subspace ${\cal H}^{\prime +}$ which is independent of the
choice of the pair $(S,D)$ --- called a ``WKB-branch'' ---
in which it is calculated and, together with
its complex conjugate ${\cal H}^{\prime -}$, decomposes
${\cal H}_{{}_{\rm phys}}$ into a direct orthogonal sum.
$Q$ is positive/negative definite on ${\cal H}^{\prime\pm}$
just as was the case for ${\cal H}^\pm$ above.
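An elementary cross-check is available in the constant-$h$ limit (the $b=c=0$ case of the first example above): inserting $\chi \propto e^{-iEt}$ into $i\,\partial_t \chi = (\frac{1}{2}\,\partial_{tt} + h)\chi$ gives $E + \frac{1}{2}E^2 = h$, and the root $E = -1 + \sqrt{1+2h}$ should reproduce the $t$-independent terms of (\ref{iter}). A minimal numerical sketch of this check:

```python
import math

def E_exact(h):
    """Root of E + E**2/2 = h selected by E -> h as h -> 0."""
    return -1.0 + math.sqrt(1.0 + 2.0 * h)

def E_series(h):
    """The t-independent terms of the expansion: h - h**2/2 + h**3/2."""
    return h - h ** 2 / 2 + h ** 3 / 2

# The truncation error should be O(h**4) (next coefficient: -5/8):
errors = [abs(E_exact(h) - E_series(h)) for h in (1e-2, 1e-3)]
```

The agreement through third order confirms the constant-$h$ terms of (\ref{iter}) as the expansion of $-1+\sqrt{1+2h}$.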
\medskip
I cannot answer the natural question arising here, whether
${\cal H}^{\prime\pm}$
have anything to do with (or are even identical to)
${\cal H}^\pm$ (except for the flat space Klein-Gordon
equation, where they {\it are} identical).
Maybe pursuing this route could clarify
why the decomposition based on ${\cal H}^\pm$ is invisible to
the Klein-Gordon quantization scheme, at least as long as one remains
within the pure differential geometric framework.
\makeatletter
\renewcommand\section{\@startsection{section}{1}{0in}{0.1\baselineskip}{0.1\baselineskip}{\normalfont\normalsize\bfseries}}
\makeatother
\makeatletter
\renewcommand\subsection{\@startsection{subsection}{1}{-\parindent}{0.1\baselineskip}{0.1\baselineskip}{\normalfont\normalsize\textit}}
\makeatother
\renewcommand{\-}{-\allowbreak}
\begin{document}
\begin{centering}
\ \\
\vspace{1.5in}
\Large{Balls, cups, and quasi-potentials: quantifying stability in stochastic systems}\\
\vspace{1in}
\normalsize{Ben C.~Nolting and Karen C.~Abbott }\\ \vspace{0.2in}
Department of Biology\\
Case Western Reserve University\\
Cleveland, OH 44106 U.S.A\\ \vspace{0.2in}
\end{centering}
\vspace{\fill}
\noindent \today \ draft\\
\noindent Manuscript accepted at Ecology
\newpage
\section*{Abstract}
When a system has more than one stable state, how can the stability of these states be compared? This deceptively simple question has important consequences for ecosystems, because systems with alternative stable states can undergo dramatic regime shifts. The probability, frequency, duration, and dynamics of these shifts will all depend on the relative stability of the stable states. Unfortunately, the concept of ``stability'' in ecology has suffered from substantial confusion, and this is particularly problematic for systems where stochastic perturbations can cause shifts between coexisting alternative stable states. A useful way to visualize stable states in stochastic systems is with a ball\-in\-cup diagram, in which the state of the system is represented as the position of a ball rolling on a surface, and the random perturbations can push the ball from one basin of attraction to another. The surface is determined by a potential function, which provides a natural stability metric. However, systems amenable to this representation, called gradient systems, are quite rare. As a result, the potential function is not widely used and other approaches based on linear stability analysis have become standard. Linear stability analysis is designed for local analysis of deterministic systems and, as we show, can produce a highly misleading picture of how the system will behave under continual, stochastic perturbations. In this paper, we show how the potential function can be generalized so that it can be applied broadly, employing a concept from stochastic analysis called the quasi\-potential. Using three classic ecological models, we demonstrate that the quasi\-potential provides a useful way to quantify stability in stochastic systems. We show that the quasi\-potential framework helps clarify long\-standing confusion about stability in stochastic ecological systems, and we argue that ecologists should adopt it as a practical tool for analyzing these systems.
\vspace{0.5in}
\noindent{\it Keywords:} alternative stable states, stochastic dynamics, regime shifts, quasi\-potential, Freidlin\-Wentzell, stochastic differential equations, Hamilton\-Jacobi, resilience
\newpage
\section*{Introduction}
Researchers have long been fascinated by the possibility for ecosystems to have more than one stable state \citep{May:1977tm, Beisner:2003wu}. Such ecosystems have been observed in both natural \citep{vandeKoppel:2001wp} and experimental \citep{Chase:2003iv} settings. Systems with multiple (i.e., alternative) stable states can abruptly shift from one stable state to another, sometimes with catastrophic consequences \citep{Scheffer:2003is}, so understanding their properties is crucially important.
Unfortunately, the understanding of alternative stable states has been significantly hampered by ambiguity about the term ``stable''. \citet{Grimm:1997tg} note that stability is ``one of the most nebulous terms in the whole of ecology,'' and they catalog 163 different definitions. Much of this confusion arises when researchers attempt to apply tools designed for the analysis of deterministic models to stochastic models. Fortunately, there is a well\-developed mathematical framework, the Freidlin\-Wentzell quasi\-potential \citep{Freidlin:2012wd}, that provides a rigorous yet natural way to understand alternative stable states in stochastic systems. In this paper, we explain how this tool can clarify much of the confusion about stability in ecological systems by translating intuitive concepts into quantifiable mathematical properties. Through three examples, we show how the quasi\-potential serves as a useful metric of stability, and allows for effective stability comparison between alternative stable states. The results from quasi\-potential analysis often contrast with those from standard stability analysis, and our examples explore these discrepancies. Furthermore, the quasi\-potential allows for stability to be quantified on a continuum that corresponds well with the system's dynamics, and it can be applied to any system state, regardless of whether that state is a deterministic equilibrium. Using the quasi\-potential, a system can be decomposed into orthogonal components, and we explain how this decomposition can be interpreted ecologically. Finally, the quasi\-potential offers insight into the most probable paths a system will take in transitioning from one state to another.
Holling's foundational work on resilience and stability anticipated the quasi\-potential's essence \citep{Holling:1973wh}; later, \citet{Tuljapurkar:1979kd} recognized that Holling's intuitive ideas were connected to the mathematical work of Freidlin and Wentzell \citeyearpar{Freidlin:1970wk}. At that time, numerical methods were insufficient to allow for general, practical computation of quasi\-potentials \citep*[see][]{Ludwig:1975tw}, so Tuljapurkar and Semura's insight did not receive the recognition it deserved. In subsequent decades, the flurry of research on alternative stable states largely overlooked this insight. Recently, the quasi\-potential has been embraced by researchers analyzing models in other areas of biology, although it often appears under other names, and is disconnected from the Freidlin\-Wentzell formulation (but see \citealt{Zhou:2012hz}). These applications include gene regulatory networks \citep{Lv:2014hp,Zhou:2012hz}, neural networks \citep{Yan:2013dx}, and evolution \citep{Zhang:2012ks, Wang:2011ef}. Very recently, it has been applied to a predator\-prey system \citep{Xu:2014hg}, and with countless other possibilities for application, we argue that the quasi\-potential is poised to become a major quantitative tool in ecology.
This paper makes three novel contributions to the field of ecology. First, it shows how the quasi\-potential can clarify the confusing tangle of stability concepts that confront ecologists. Second, it demonstrates how the quasi\-potential can be used to quantify stability in systems with alternative stable states, and how the results can be different from and often more useful than deterministic methods. Finally, it shows how a new numerical algorithm for the computation of quasi\-potentials \citep{Cameron:2012ex} can be expanded for application to systems with multiple stable states, and highlights the utility of the quasi\-potential for understanding such systems.
We use three well\-established ecological models to illustrate these ideas. First, we show how traditional linear stability analysis fails to capture the salient features of a stochastic lake eutrophication model, and explain how the system's potential function provides more useful analytic insights. Next, we move to higher\-dimensional systems, where potential functions rarely exist. We explore a consumer\-resource model with alternative stable states that does not have a potential function. We explain how the quasi\-potential is defined, and show its usefulness in analyzing this model. Finally, we explore another consumer\-resource model with a stable limit cycle to demonstrate how the quasi\-potential is useful when stable states are more complicated than point equilibria. We conclude by discussing the quasi\-potential as a unifying framework for existing notions of stability in stochastic systems.
\medskip
\section*{Example 1: Lake Eutrophication}
Lake ecosystems are among the most well\-studied examples of alternative stable states in ecology. A foundational model by \citet{Carpenter:1999wt} successfully describes the coexistence of a eutrophic state, corresponding to high phosphorous concentration, and an oligotrophic state, corresponding to low phosphorous concentration. Later work by \citet{Guttal:2007dp} showed how stochasticity can cause this system to switch between the two stable states, and we will use their model as a starting point for exploring the quantification of stochastic stability.
The underlying deterministic model (i.e., the ``deterministic skeleton") describes how the nutrient (phosphorous) concentration $x$ changes over time:
\begin{equation}
\label{lakedet}
\frac{dx}{dt}=c-sx+r\frac{x^{q}}{x_{0}^{q}+x^{q}}.
\end{equation}
$c$ is the nutrient inflow rate and $s$ is the nutrient loss rate
(due to sedimentation, outflow, and sequestration
in benthic plants). The last term represents nutrient recycling. $r$ is the maximum recycling rate, $x_{0}$ is the half\-saturation constant, and $q$ specifies
the shape of the sigmoidal recycling curve. At
$s\!=\!1$, $r\!=\!1$, $x_{0}\!=\!1$, $q\!=\!8$, and $c\!=\!0.53$ (as in \citealt{Guttal:2007dp}),
the system has alternative stable states: a low phosphorous oligotrophic
state, $x_{L}\!=\!0.537$, and a high phosphorous eutrophic state,
$x_{H}\!=\!1.491$, separated by an unstable
equilibrium (a saddle), $x_{S}\!=\!0.971$.
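As a concrete check, the equilibria and their linearizations can be recovered numerically. The following sketch (in Python with SciPy, not the paper's own code; the bracketing intervals passed to the root finder are our own choices) finds the three roots of the deterministic skeleton and evaluates its derivative at each:

```python
# Numerical check of the lake model's equilibria and linearization.
# Parameter values (s = r = x0 = 1, q = 8, c = 0.53) follow the text.
from scipy.optimize import brentq

c, s, r, x0, q = 0.53, 1.0, 1.0, 1.0, 8


def f(x):
    """Deterministic skeleton: dx/dt = c - s*x + r*x^q / (x0^q + x^q)."""
    return c - s * x + r * x**q / (x0**q + x**q)


def fprime(x, h=1e-6):
    """Central-difference derivative of f; at an equilibrium this is the eigenvalue."""
    return (f(x + h) - f(x - h)) / (2 * h)


# Roots of f, bracketed by sign changes: oligotrophic, saddle, eutrophic.
x_L = brentq(f, 0.1, 0.75)
x_S = brentq(f, 0.75, 1.2)
x_H = brentq(f, 1.2, 3.0)

print(x_L, x_S, x_H)                          # ~0.537, 0.971, 1.491
print(fprime(x_L), fprime(x_S), fprime(x_H))  # ~-0.899, 1.032, -0.797
```

The derivative at each root reproduces the eigenvalues quoted in the text.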
The standard technique for studying systems like this one is linear stability analysis. The eigenvalue of the linearized system at $x_{S}$ is $\lambda_{S}\!=\!1.032$, so it is an unstable equilibrium. The eigenvalues corresponding to $x_{L}$ and $x_{H}$ are $\lambda_{L}\!=\!-0.899$ and $\lambda_{H}\!=\!-0.797$, respectively, so both $x_{L}$ and $x_{H}$ are stable equilibria. The more negative the eigenvalue, the faster the return to the equilibrium following a small perturbation; $\lambda_{L}\!<\!\lambda_{H},$ so the linear analysis indicates that the oligotrophic state is more stable than the eutrophic state.
\medskip
\subsection*{Ball\-in\-cup}
An alternative approach to quantifying stability, and one that is fundamental to the theory of alternative stable
states, is the ``ball\-in\-cup'' heuristic \citep{Beisner:2003wu}.
In this framework, the state of the system is represented by the position
of a ball rolling on a surface. The ball rolls downhill, but is also
subject to continual, stochastically varying perturbations. In the absence
of perturbations, the ball will roll to the bottom
of a valley. Such locations correspond to stable equilibria
of the deterministic skeleton of the system ($x_L$ and $x_H$ in our example); a system with alternative
stable states has more than one valley. The ``cup'' is the set of states surrounding an equilibrium
that are attracted to it; this set is called its domain (or basin) of attraction.
The ball\-in\-cup framework is not just a useful metaphor -- it can also
yield a mathematical description. For the lake system, define
\begin{equation} \label{Udef}
U(x)=-\int_{x_{H}}^{x}f(\xi)d\xi ,
\end{equation}
($\xi$ is a dummy variable for integration), so that the differential equation becomes $\frac{dx}{dt} = -U'(x)$.
The dynamics of this system turn out to be equivalent to a ball\-in\-cup
system with surface specified by the function $U$. In analogy with
the physics of the ball\-in\-cup metaphor, $U$ is called the ``potential
function" or simply the ``potential". For the lake system, this surface has local
minima at $x_{L}$ and $x_{H},$ as shown in figure~\ref{Fig1}a.
When random perturbations are present, the ball can
be jostled from one basin of attraction to another. Note that stochasticity
lies at the heart of the theory of alternative stable states. In a
purely deterministic system, the ball would roll to an equilibrium
and stay there. The presence or absence of other stable states
would be irrelevant, because the ball would have no way of visiting
them. Perhaps the surface could change over time, so that the basin of attraction
occupied by the ball ceases to be a basin, and the ball rolls out to a different stable state. This situation corresponds to a bifurcation
of the system's deterministic skeleton; the ball's transition requires the destruction of a stable state. In this paper, we are interested in
how systems can transition between \textit{coexisting} alternative stable states. Perturbations are required for the system to undergo these transitions; therefore, we argue that the appropriate framework for an alternative stable
state model is a stochastic one. Furthermore, real ecological systems are always subject to random perturbations. In order to apply the ball\-in\-cup heuristic to a perturbed system, we next demonstrate an approach to incorporating stochasticity into model~\eqref{lakedet}.
\medskip
\subsection*{Stochastic Differential Equation Model}
If the nutrient concentration varies randomly over time,
the lake can shift from one stable state to the other. To study this scenario, we translate the original deterministic model into a stochastic differential equation. A brief explanation of stochastic differential equation models is provided in appendix~\ref{subsec:SDEAppendix}, and more extensive accounts can be found in textbooks \citep[e.g.][]{Allen:2007ww}. Here, we give an informal description of the major concepts, and use discrete\-time analogies to avoid overly technical mathematical terminology.
To emphasize that nutrient concentration is now a stochastic process, and not just a deterministic function of time, we switch notation from $x(t)$ to $X\!(t)$. For each $t\!>\!0$, $x(t)$ is a number, but $X\!(t)$ is a random variable, which can take on any of a set of possible values according to probabilistic rules. A realization of the stochastic process is a deterministic function of time associated with a specific set of random events; this can be thought of as an observed time series, or the result of a single simulation run.
In the original model~\eqref{lakedet}, the external input of nutrients occurs at a constant rate $c$. In a small time interval $dt$, the external input is $c\, dt$. In reality, this input is likely to vary randomly; this is commonly modeled by adding a Gaussian white noise process, $dW\!(t)$ (``noise" is used synonymously with ``stochastic" or ``random"). At each $t\!>\!0$, $dW\!(t)$ is a normally distributed random variable with mean zero and variance $dt$. Since the distribution of these increments does not depend on $t$, this is simply written as $dW$. The white noise process we describe here has no temporal autocorrelation, and its frequency spectrum is uniform -- the descriptor ``white" is used in analogy with white light. The accumulated change obtained by adding $dW$ over time yields a Wiener process, also known as Brownian motion. White noise is a useful starting point, but many applications require other types of noise; for example, colored noise might be used instead when perturbations are autocorrelated \citep[e.g.][]{Sharma:2014eh}. A discussion about generalizing the framework in this paper to different noise types is included in the Limitations and Generalizations section.
If the constant input rate $c$ is perturbed by a Gaussian white noise process with intensity $\sigma$ (analogous to the standard deviation in discrete time systems), then the external input in a small interval $dt$ is $c\, dt+\sigma dW$. The change in nutrient concentration over this time interval (with $s\!=\!r\!=\!x_{0}\!=\!1$, as above) is given by
\begin{equation}
\label{lakestoch}
dX=\left(c-X+\frac{X^{q}}{1+X^{q}}\right)\,dt+\sigma\,dW.
\end{equation}
Again using equation \eqref{Udef} to define the potential, this system can equivalently be written as
\begin{equation}
\label{stochpot}
dX=-U'(X)\,dt+\sigma\,dW.
\end{equation}
In terms of the ball\-in\-cup heuristic, the shape
of the surface is specified by the potential function $U$, and this is
independent of $\sigma$. The noise intensity $\sigma$ only contributes to the movement of the ball on this surface, as determined by the last term in equation~\eqref{stochpot}.
We have described this model in terms of change over discrete time intervals, but it is also valid in the continuous time limit, $dt\!\rightarrow\!0$. For continuous time, which will be the focus of the rest of this paper,~\eqref{lakestoch} is called a stochastic differential equation. The notation in the stochastic differential equation $dX\!=\!\ldots$ is different than the deterministic differential equation notation $\frac{dx}{dt}=\ldots$, because the former must be defined using integral equations: the realizations of $W\!(t)$ are nowhere differentiable, so $\frac{dW}{dt}$, and hence $\frac{dX}{dt}$, would not make sense. (We use the It\^{o} integration scheme to define stochastic differential equations in this paper; see appendix~\ref{subsec:SDEAppendix}.)
\medskip
\subsection*{Utility of the potential for understanding the stochastic lake eutrophication model}
One approach to understanding the stochastic lake eutrophication model is to calculate realizations (i.e. simulations) of~\eqref{lakestoch} for particular values of $\sigma$. This approach is limited, because it requires setting a particular $\sigma$; we will see later that the potential function provides a more general way of studying system dynamics. A realization with $\sigma=0.2$ is shown in figure~\ref{Fig1}b. All simulations in this paper were done with \textit{Mathematica}, and the code is available as a supplementary file. The realization in figure~\ref{Fig1}b, which is typical of realizations for this system with $\sigma=0.2$,
switches between the two stable states. It spends more
time near $x_{H}$ than $x_{L}$; this suggests that the eutrophic (higher phosphorous) state
is more stable than the oligotrophic (lower phosphorous) state for this set of parameter
values. Note that this behavior is in contrast to the results of the linear
stability analysis of the deterministic skeleton. It is, however, in agreement with what the potential function tells us about the system, as we will demonstrate below.
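Realizations like the one in figure~\ref{Fig1}b are easy to reproduce with an Euler\-Maruyama discretization of~\eqref{lakestoch}. The sketch below is a minimal version (the paper's own simulations were done in \textit{Mathematica}; the time step, horizon, initial condition, and random seed here are our own illustrative choices):

```python
# Minimal Euler-Maruyama sketch of the stochastic lake model with sigma = 0.2.
import numpy as np

rng = np.random.default_rng(1)
c, q, sigma = 0.53, 8, 0.2
dt, n_steps = 0.01, 200_000          # total simulated time T = 2000

x = np.empty(n_steps + 1)
x[0] = 0.5                           # start near the oligotrophic state
for k in range(n_steps):
    drift = c - x[k] + x[k]**q / (1 + x[k]**q)
    x[k + 1] = x[k] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

x_S = 0.971                          # saddle separating the two basins
frac_eutrophic = np.mean(x > x_S)    # fraction of time spent in the eutrophic basin
print(frac_eutrophic)
```

Over a long horizon the realization crosses the saddle repeatedly, so the occupancy fraction quantifies the visual impression from figure~\ref{Fig1}b.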
For \eqref{lakestoch}, we find that $U\!(x_{L})\!=\!0.011$, $U\!(x_{S})\!=\!0.047$,
and $U\!(x_{H})\!=\!0$. Note that it is the relative, not the
absolute, values of the potential function that are important, so the
minimum value of the potential can be set at $0$. $U\!(x_{H})\!<\!U\!(x_{L})$,
so the potential function indicates that the eutrophic state is more
stable than the oligotrophic state. This corresponds to the intuitive
notion that we obtained from examining realizations like the one in figure~\ref{Fig1}b, but it contradicts
the results from the linear stability analysis. This discrepancy
arises because the linear stability analysis considers only
an infinitesimal neighborhood of an equilibrium. In the presence of
continuous stochastic perturbations, the system will leave such an
infinitesimal neighborhood, and the linear analysis of the skeleton breaks down. The linear analysis provides
information about the curvature of the potential surface at the bottom
of basins of attraction, but this information is purely local, in
that it does not take into account the larger geometry of the surface.
Therefore, the potential function provides a more appropriate measure
of stability for analyzing alternative stable states than linear stability analysis.
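The quoted values of the potential can be reproduced by direct numerical integration of equation~\eqref{Udef}; a sketch (using the rounded equilibria reported above, with the parameter values from example~1):

```python
# Reconstruct U(x) = -int_{x_H}^{x} f(xi) dxi numerically and check the
# values quoted in the text: U(x_L) ~ 0.011, U(x_S) ~ 0.047, U(x_H) = 0.
from scipy.integrate import quad

c, q = 0.53, 8


def f(x):
    """Deterministic skeleton of the lake model (s = r = x0 = 1)."""
    return c - x + x**q / (1 + x**q)


x_L, x_S, x_H = 0.537, 0.971, 1.491


def U(x):
    """Potential with reference point x_H, so U(x_H) = 0."""
    val, _ = quad(f, x_H, x)
    return -val


print(U(x_L), U(x_S), U(x_H))   # ~0.011, 0.047, 0.0
```

The valley at $x_H$ is deeper than the valley at $x_L$, matching the ordering discussed above.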
The potential function also relates to other important features of the stochastic system. The probability density function, $p(x,\!t)$, associated
with the random variable $X$ in~\eqref{lakestoch} describes the relative likelihood that $X\!(t)$ takes the value $x$. It is the solution to the Fokker\-Planck
equation:
\begin{equation}
\frac{\partial p(x,t)}{\partial t}=\frac{\partial}{\partial x}\left(U'(x)\,p(x,t)\right)+\frac{\sigma^{2}}{2}\frac{\partial^{2}p(x,t)}{\partial x^{2}}.
\end{equation}
The steady\-state solution, $p_{s}(x)\!=\!\displaystyle{\lim_{t\rightarrow\infty}}p(x,\!t)$, is given by:
\begin{equation}
\label{steadystate}
p_{s}(x)=\frac{1}{Z}\exp\!\left(\!-\frac{2U(x)}{\sigma^{2}}\!\right),
\end{equation}
where $Z\!=\!\int_{0}^{\infty}\!\exp\left(-\frac{2\,U(x)}{\sigma^{2}}\right)\!dx$ is a normalization constant.
This equation shows that the steady\-state probability density is maximized
at the values of $x$ that minimize $U$, confirming that the minima (valleys) in $U$ correspond to the most likely system states.
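Equation~\eqref{steadystate} can likewise be evaluated numerically. The sketch below tabulates $p_{s}$ on a grid for $\sigma\!=\!0.2$ (the grid bounds and resolution are our own choices) and confirms that the eutrophic basin carries the larger share of probability mass:

```python
# Evaluate the steady-state density p_s proportional to exp(-2U/sigma^2).
import numpy as np
from scipy.integrate import quad

c, q, sigma = 0.53, 8, 0.2
f = lambda x: c - x + x**q / (1 + x**q)
x_H = 1.491


def U(x):
    return -quad(f, x_H, x)[0]


grid = np.linspace(0.01, 3.0, 600)
dx = grid[1] - grid[0]
ps = np.exp(-2 * np.array([U(x) for x in grid]) / sigma**2)
ps /= ps.sum() * dx                      # normalization constant Z

x_S = 0.971
mass_oligo = ps[grid < x_S].sum() * dx   # probability mass left of the saddle
mode = grid[np.argmax(ps)]               # global mode of the density
print(mass_oligo, mode)
```

The mode sits at the eutrophic equilibrium, and well over half of the steady\-state probability mass lies to the right of the saddle.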
The potential can be used
to gain insight about the time it takes the system to switch between
alternative stable states. If $\tau_{x_{L}}^{x_{H}}$ is the expected
time it takes a trajectory starting at $x_{L}$ to reach $x_{H}$,
(i.e., the mean first passage time), then \citep{kramers1940}:
\begin{equation} \label{MFPT1}
\tau_{x_{L}}^{x_{H}}=\frac{2\pi}{\sqrt{U''(x_{L})\left|U''(x_{S})\right|}}\exp\left(\frac{2}{\sigma^{2}}\left(U(x_{S})-U(x_{L})\right)\right)\left(1+\mathcal{O}(\sigma)\right).
\end{equation}
Swapping $x_H$ for $x_L$ yields a comparable expression for the expected time to reach $x_{L}$ from $x_H$. The asymptotic notation $\mathcal{O}(\cdot)$ describes the error of the approximation as $\sigma\rightarrow 0$. The expected time for a trajectory to leave a basin of attraction around one of the stable states is thus largely dependent on the
depth of that basin -- the difference between peak $U$ (which occurs at the saddle equilibrium, $x_{S}$) and the value of $U$ at the stable equilibrium.
The eigenvalue obtained in linear stability analysis describes the curvature of the potential at an equilibrium, equal to the second derivative of $U$; it determines the prefactor that multiplies the exponential function in equation~\eqref{MFPT1}. For a fixed valley depth, increased curvature is associated with decreased mean first passage time. For instance, note that $\lambda_{L}\!=\!-U''\!(x_{L})$. As $x_{L}$ becomes more stable in the deterministic sense (i.e., as $\lambda_{L}$ becomes more negative), the curvature at $x_{L}$ increases, and the mean first passage time decreases (similar statements hold for $x_{H}$). At first glance, this seems counterintuitive -- increasing stability is associated with decreased escape time -- but it makes sense because, for a fixed valley depth, increased curvature decreases the horizontal distance between equilibria.
Knowledge about the
potential function thus provides information about the steady\-state
probability distribution, mean first passage times, and transition
frequencies, motivating its use as a
stability metric \citep{Zhou:2012hz, Wang:2011ef}. The potential function is especially useful because
it does not depend on the noise intensity $\sigma$ (in contrast to
the steady\-state probability distribution and mean first passage
times; see appendix~\ref{subsec:SmallNAppendix}).
\medskip
\section*{Example 2: Consumer and Resource With Alternative Stable States}
If the potential is so good at quantifying biologically\-relevant model behaviors, why isn't it routinely applied in ecology? Unfortunately, in most
cases, there will not exist a function $U$ that satisfies the mathematical definition of a potential (see appendix~\ref{subsec:FWAppendix}). Systems that have such a function are called ``gradient systems". One\-dimensional systems are always gradient systems, but systems with more than a single state variable almost never are. For non\-gradient systems, we cannot use a potential function
to quantify stability, as we did in the first example. It is for this reason that ecologists typically rely on approaches like linear stability analysis instead; although these approaches give more limited biological insights, they are more widely applicable mathematically. In what follows, we show how to generalize the potential for non\-gradient systems, thus allowing us to apply the many desirable features of potential analysis to a much broader range of ecological systems.
For an ecological example of a two\-dimensional non\-gradient system, we turn to a model of phytoplankton
and zooplankton populations. Let $R$ be the phytoplankton (resource) population density
and $C$ the zooplankton (consumer) population density. Using the deterministic skeleton of a standard plankton consumer\-resource model
\citep{Collie:1994wc, Steele:1981tn}, we obtain
the stochastic differential equations
\begin{equation}
\label{conres}
\begin{gathered}
dR=\left(\alpha R\left(1-\frac{R}{\beta}\right)-\frac{\delta R^{2}C}{\kappa+R^{2}}\right)dt+\sigma_{1}dW_{1}\\[5pt]
dC=\left(\frac{\gamma R^{2}C}{\kappa+R^{2}}-\mu C^{2}\right)dt+\sigma_{2}dW_{2} .
\end{gathered}
\end{equation}
Here $W_{1}$ and $W_{2}$ are independent Wiener processes. The resource has logistic growth in the absence of consumers, with maximum
growth rate $\alpha$ and carrying capacity $\beta$. Consumption
of resources is represented by a sigmoidal Type III functional response.
$\delta$ is the
maximum consumption rate, and $\kappa$ controls
how quickly the consumption rate saturates. $\gamma$ determines the conversion from resources to consumers. The consumers
have a quadratic mortality term with coefficient $\mu$, which represents
the negative impacts of intraspecific competition. $\sigma_{1}$ and
$\sigma_{2}$ are the noise intensities for the resource and consumer
populations, respectively.
The additive form of the stochastic terms in this model represents random inputs
and losses of resources and consumers. In situations where inherent
growth parameters (e.g., $\alpha$ or $\gamma$) are stochastic, other
forms of stochasticity would be appropriate. We will deal with additive noise here; the more general case is considered in appendix~\ref{subsec:ONS}.
We will analyze \eqref{conres} with parameters set at $\alpha\!=\!1.54$, $\beta\!=\!10.14$,
$\gamma\!=\!0.476$, $\delta\!=\!\kappa\!=\!1$, and $\mu\!=\!0.112509$. A phase plot of the deterministic skeleton is shown in figure~\ref{Fig2}a. The deterministic skeleton
of this system has five equilibria: $\mathbf{e}_{0}\!=\!(0,0)$,
$\mathbf{e}_{A}\!=\!(1.405,2.808)$,
$\mathbf{e}_{B}\!=\!(4.904,4.062)$,
$\mathbf{e}_{S}\!=\!(4.201,4.004)$,
$\mathbf{e}_{P}\!=\!(\beta,0)$.
A linear stability analysis shows that $\mathbf{e}_{0}$ is an unstable equilibrium and $\mathbf{e}_{P}$ is a saddle point. $\mathbf{e}_{A}$ and $\mathbf{e}_{B}$ are stable equilibria, and $\mathbf{e}_{S}$ is a saddle point that lies between them. Equilibria and their stability are summarized in figure~\ref{Fig2}a.
The eigenvalues of the Jacobian are $-0.047\pm0.458i$ at $\mathbf{e}_{A}$ and $-0.377$ and $-0.093$ at $\mathbf{e}_{B}$. For $\mathbf{e}_{A}$ the real part of the eigenvalue with largest real part is $-0.047$, and for $\mathbf{e}_{B}$ it is $-0.093$; therefore, the stability analysis
concludes that $\mathbf{e}_{B}$ is more stable, because this
value is more negative than it is for $\mathbf{e}_{A}$.
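These eigenvalues can be verified with a few lines of code. The finite\-difference Jacobian below is a generic sketch, not the paper's implementation:

```python
# Check the equilibria and Jacobian eigenvalues of the consumer-resource skeleton.
import numpy as np

alpha, beta, gamma, delta, kappa, mu = 1.54, 10.14, 0.476, 1.0, 1.0, 0.112509


def skeleton(v):
    """Deterministic skeleton (dR/dt, dC/dt) of the consumer-resource model."""
    R, C = v
    dR = alpha * R * (1 - R / beta) - delta * R**2 * C / (kappa + R**2)
    dC = gamma * R**2 * C / (kappa + R**2) - mu * C**2
    return np.array([dR, dC])


def jacobian(v, h=1e-6):
    """Finite-difference Jacobian of the skeleton at the point v."""
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2)
        e[j] = h
        J[:, j] = (skeleton(v + e) - skeleton(v - e)) / (2 * h)
    return J


e_A, e_B = np.array([1.405, 2.808]), np.array([4.904, 4.062])
lam_A = np.linalg.eigvals(jacobian(e_A))
lam_B = np.linalg.eigvals(jacobian(e_B))
print(lam_A)   # ~ -0.047 +/- 0.458i
print(lam_B)   # ~ -0.377, -0.093
```

Both equilibria are attracting, and the leading (least negative) real part at $\mathbf{e}_{B}$ is indeed more negative than at $\mathbf{e}_{A}$.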
A realization of the stochastic system ($\sigma_{1}=\sigma_{2}=0.05$, Figure~\ref{Fig2}c) shows switching between the two stable states. It
is typical of most realizations we generated, in that it spends more
time near $\mathbf{e}_{A}$ (dotted white lines) than $\mathbf{e}_{B}$ (dashed black lines). This realization, which had initial condition $\left(R_{0},\,C_{0}\right)=\left(1,\,2\right)$, spent 87\% of its time in the basin of attraction corresponding to $\mathbf{e}_{A}$. Intuitively,
it seems that $\mathbf{e}_{A}$ should be classified as more stable
than $\mathbf{e}_{B}$, but as in Example 1, this is not what was obtained via
the standard linear stability analysis.
Recall that realizations are of limited utility for stability analysis, because each value of $\sigma$ will produce different dynamics and different steady\-state probability distributions (see appendix D and supplementary figure 1). The potential is defined independently of $\sigma$, and hence would be ideal for providing more general insights than $\sigma$-specific realizations. Of course, we do not have a potential function $U$ for this or any other non\-gradient system and
hence cannot compare $U(\mathbf{e}_{A})$ and $U(\mathbf{e}_{B})$.
Instead, we turn to the Freidlin\-Wentzell quasi\-potential, which generalizes
the notion of a potential.
\medskip
\section*{Generalizing The Potential}
For higher\-dimensional
models, we need to introduce some new notation. We can write an $n$\-dimensional system of stochastic differential equations with additive noise as
\begin{equation}
\label{gradient3}
d\mathbf{X}=f(\mathbf{X})\,dt+\sigma\,d\mathbf{W}.
\end{equation}
$\mathbf{X}\!=\!\left(X_{1},\ldots,X_{n}\right)$ is a column vector of
state variables and $\mathbf{W}\!=\!\left(W_{1},\ldots,W_{n}\right)$ is
a column vector of $n$ independent Wiener processes. We use the lowercase notation $\mathbf{x}\!=\!\left(x_{1},\ldots,x_{n}\right)$ to indicate a point in phase space (as opposed to a stochastic process). $f$
is the deterministic skeleton of the system. It is a vector field: for every point $\mathbf{x}$, $f\!(\mathbf{x})$ specifies the direction that a deterministic trajectory will move. $\sigma$ is the noise intensity. More general ways of incorporating noise are considered in appendix~\ref{subsec:ONS}.
Following the same
general approach as in example 1, the Fokker\-Planck equation
for a two dimensional version of~\eqref{gradient3}, with $\mathbf{X}\!=\!\left(X_{1},X_{2}\right)$, $\mathbf{x}\!=\!\left(x_{1},x_{2}\right)$ and $f\!=\!\left(f_{1},f_{2}\right)$, is:
\begin{equation}
\label{FP}
\frac{\partial p}{\partial t}=-\frac{\partial}{\partial x_{1}}\left(f_{1}p\right)-\frac{\partial}{\partial x_{2}}\left(f_{2}p\right)+\frac{\sigma^{2}}{2}\left(\frac{\partial^{2}p}{\partial x_{1}^{2}}+\frac{\partial^{2}p}{\partial x_{2}^{2}}\right).
\end{equation}
In the gradient case in Example 1, the steady\-state solution of the Fokker\-Planck equation was of the form
\eqref{steadystate} (replacing $x$ with $\mathbf{x}$ and obtaining $Z$ via integration over the positive quadrant). Here, there is no function $U$ to play that role, but using
the same general approach, assume that there is a function $V(\mathbf{x})$
such that:
\begin{equation}
\label{effecpot}
p_{s}(\mathbf{x})\asymp\exp\left(-\frac{2V(\mathbf{x})}{\sigma^{2}}\right).
\end{equation}
The symbol $\asymp$ denotes logarithmic equivalence, details about which are in appendix~\ref{subsec:FWAppendix}. When noise intensity is small, we can obtain an approximation for $V$ (using asymptotic expansion; see appendix~\ref{subsec:SmallNAppendix}). This approximation, denoted by $V_{0}(\mathbf{x})$, satisfies
\begin{equation}
\label{HJE}
\nabla V_{0}\cdot\nabla V_{0}+f\cdot\nabla V_{0}=0,
\end{equation}
where the gradient operator $\nabla$ takes a scalar function $\psi$ as an input, and
returns a vector,
$\nabla\psi\!=\!\left(\frac{\partial\psi}{\partial x_{1}},\frac{\partial\psi}{\partial x_{2}},\ldots,\frac{\partial\psi}{\partial x_{n}}\right)$, that is the multi\-dimensional analogue of the derivative.
Intuitively, if one thinks of $\psi(\mathbf{x})$ as specifying the height of a landscape at a particular point $\mathbf{x}$, then
$-\nabla\psi(\mathbf{x})$ points in direction of the steepest descent (as water would flow).
Equation \eqref{HJE} is the static Hamilton\-Jacobi equation. Interestingly, $V_{0}$ has key properties that make it a useful analog of a potential in a gradient system. First, $V_{0}$ is independent
of the noise intensity $\sigma$, just as the potential function $U$
was in the gradient case. Second, if $\mathbf{x}(t)$ is a trajectory
of the deterministic skeleton of \eqref{gradient3}, then
\begin{equation}
\frac{d}{dt}\left(V_{0}\left(\mathbf{x}(t)\right)\right)=\nabla V_{0}\cdot f\left(\mathbf{x}(t)\right)=-\nabla V_{0}\cdot\nabla V_{0}\leq0,
\end{equation}
and $\frac{d}{dt}\left(V_{0}\left(\mathbf{x}(t)\right)\right)=0$
only where $\nabla V_{0}=0$. Thus $V_{0}$ is a Lyapunov function
for the deterministic system, which is an important feature for the ball\-in\-cup metaphor. If $V_{0}(\mathbf{x})$ specifies a two\-dimensional surface, then, in the absence of perturbations,
trajectories will always move ``downhill''. Again, this parallels
the role that $U$ played in the gradient systems. Third, we can interpret the relationship between $f$ and the surface $V_{0}$. $f$ is the deterministic skeleton that causes trajectories to move across the landscape, and $-\nabla V_{0}$ is the component of $f$ that causes trajectories to move downhill. The remaining component of $f$, which we denote by $Q$ and call the ``circulatory" component, is defined as:
\begin{equation}
Q\left(\mathbf{x}\right)=f\left(\mathbf{x}\right)+\nabla V_{0}\left(\mathbf{x}\right).
\end{equation}
$V_{0}$ satisfies the Hamilton\-Jacobi equation, so
$Q\cdot\nabla V_{0}\!=\!f\cdot\nabla V_{0}+\nabla V_{0}\cdot\nabla V_{0}=0$, hence $\nabla V_{0}$ and $Q$ are perpendicular at every point. This motivates the label ``circulatory" -- in the absence of other forces, $Q$ would cause trajectories to circulate around level sets of $V_{0}$.
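A toy example may help make this decomposition concrete. The linear system below is our own illustrative choice (it is not one of the paper's models); for it, $V_{0}(x,y)=(x^{2}+y^{2})/2$ solves~\eqref{HJE} exactly, and the leftover circulatory component $Q$ is everywhere perpendicular to $\nabla V_{0}$:

```python
# Verify the decomposition f = -grad(V0) + Q for a toy non-gradient system.
import numpy as np

a = 0.7  # circulation strength (arbitrary choice)


def f(x, y):
    """Deterministic skeleton of the toy system."""
    return np.array([-x + a * y, -a * x - y])


def gradV0(x, y):
    """Gradient of the candidate quasi-potential V0 = (x^2 + y^2)/2."""
    return np.array([x, y])


rng = np.random.default_rng(0)
hj_res, orth_res = [], []
for x, y in rng.uniform(-2, 2, size=(100, 2)):
    g = gradV0(x, y)
    hj_res.append(abs(g @ g + f(x, y) @ g))   # Hamilton-Jacobi residual
    Q = f(x, y) + g                           # circulatory component
    orth_res.append(abs(Q @ g))               # orthogonality residual
print(max(hj_res), max(orth_res))             # both vanish (to roundoff)
```

Here $-\nabla V_{0}$ pulls trajectories straight toward the origin, while $Q$ rotates them around circular level sets of $V_{0}$, exactly the downhill/circulatory split described above.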
The function $V_{0}$ generalizes the potential function to non\-gradient systems and extends to $n$\-dimensional systems. Interestingly, $V_{0}$ is
a scalar multiple of a function called the Freidlin\-Wentzell quasi\-potential.
The quasi\-potential has extremely important properties, which we explore
in the next section before applying all of these ideas to example~2.
\medskip
\section*{The Freidlin\-Wentzell Quasi\-potential}
Freidlin and Wentzell \citeyearpar{Freidlin:2012wd} analyzed stochastic differential equations using
a large deviation principle, which is an asymptotic law determining the probabilities
of different trajectories. These concepts can be best interpreted by imagining the state of the system (the position of the ball, or the current combination of population densities) being randomly perturbed within a ``force field'' imposed by the deterministic skeleton. Suppose the system starts at the stable state $\mathbf{e}_{A}$
and travels to another state $\mathbf{x}$. To complete this journey, the
populations will need to do some ``work'' against the force field (i.e., they need to go ``uphill");
this work is provided by random perturbations. Trajectories
that require the least amount of work (require the least extreme
stochastic perturbations) are the most likely. Suppose that $\mathbf{\theta}\!\left(t\right)$ specifies a path, parameterized by $t$,
that goes from the stable equilibrium $\mathbf{\theta}(0)=\mathbf{e}_{A}$ to another state $\mathbf{\theta}(T)=\mathbf{x}$. $T$ is the total time it takes the populations to move along this path from $\mathbf{e}_{A}$ to $\mathbf{x}$. The amount of work required for the populations to follow a given path can be quantified by a functional $S_{T}$ called the action
(see appendix~\ref{subsec:FWAppendix} for details).
In order to determine the amount of work it takes to get to some state $\mathbf{x}$, one must minimize the action over all possible paths from $\mathbf{e}_{A}$ to $\mathbf{x}$, and all path durations $T\!>\!0$. The minimum action is called the quasi\-potential, denoted $\Phi_{\mathbf{e}_{A}}\!(\mathbf{x})$. The quasi\-potential depends on the starting point $\mathbf{e}_{A}$; when there are multiple stable states, the corresponding quasi-potentials can be stitched together to obtain a global quasi\-potential, $\Phi(\mathbf{x})$ \citep{roy1994}; see further details in appendix~\ref{subsec:GQP}. $\Phi$ is related to $V_{0}$ by $\Phi=2\,V_{0}$ (appendix~\ref{subsec:HJEQP}). In this paper, we use $V_{0}$ instead of $\Phi$, because $V_{0}$ agrees with the true potential in gradient systems. The multiple of $2$ in the relationship $\Phi=2\,V_{0}$ is an inconvenient result of the Freidlin\-Wentzell definition. Conceptually, these two functions measure the same properties, and computing one immediately yields the other.
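In one dimension this machinery can be checked directly: the action\-minimizing path uphill is the time\-reversed deterministic flow, and its action $\frac{1}{2}\int(\theta'-f(\theta))^{2}\,dt$ equals the quasi\-potential difference $2\,\Delta U$. The sketch below verifies this for the lake model of example~1 (the step size and endpoint offsets are our own choices):

```python
# Accumulate the Freidlin-Wentzell action along the time-reversed flow
# theta' = -f(theta) from near x_L up to near x_S, and compare to 2*Delta(U).
from scipy.integrate import quad

c, q = 0.53, 8
f = lambda x: c - x + x**q / (1 + x**q)
x_L, x_S = 0.537, 0.971

theta0 = x_L + 0.01
theta, dt, action = theta0, 1e-3, 0.0
while theta < x_S - 0.01:
    v = -f(theta)                            # uphill (reversed-flow) velocity
    action += 0.5 * (v - f(theta))**2 * dt   # integrand equals 2 f(theta)^2
    theta += v * dt

two_delta_U = -2 * quad(f, theta0, theta)[0]  # 2 * (U(theta_end) - U(theta_0))
print(action, two_delta_U)                    # the two agree closely
```

The discretized action matches $2\,\Delta U$ to within the integration error, illustrating why, for gradient systems, the quasi\-potential reduces to twice the ordinary potential.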
The quasi\-potential can be calculated by solving the static
Hamilton\-Jacobi equation~\eqref{HJE}. This is a numerically difficult task,
however; standard finite difference and finite element methods typically
break down when applied to this kind of non\-linear partial differential equation.
Ordered upwind methods \citep{Sethian:2001wx} offer an innovative approach that
circumvents the problems encountered by traditional methods. The basic
idea is to create an expanding front of points where the solution
is known, and march outward by considering and accepting solution values at adjacent
points in ascending order. For use in systems of the form~\eqref{gradient3}, the
standard ordered upwind method was enhanced by \citet{Cameron:2012ex}.
Cameron's algorithm allows for efficient computation of the quasi\-potential. It forms the basis for \textit{QPot}, a freely\-available R package we have developed \citep{Moore2015} that includes a full set of tools for analyzing two-dimensional autonomous stochastic differential equations. \textit{QPot} can be downloaded at CRAN \footnote{The Comprehensive R Archive Network, https://cran.r-project.org} or GitHub \footnote{https://github.com/bmarkslash7/QPot}. To calculate the quasi-potential, users simply input the deterministic skeleton of the system, the domain, and the mesh size (although many other options are available). Computation time for the ordered upwind method depends on the model and mesh size; example~2 took less than ten minutes on a fairly average personal computer.
The Freidlin-Wentzell construction of the quasi-potential provides a mathematically rigorous justification for the Wentzel-Kramers-Brillouin (WKB) ansatz, which can be used to approximate mean first passage times in the small noise limit \citep{Bressloff:2014dx}. The WKB method has been applied to calculate expected extinction times for several specific models in population dynamics and epidemiology \citep{Meerson:2009bv, Roozen:1989cs, vanHerwaarden:1995el, Ovaskainen:2010cla}.
\medskip
\section*{Example 2 Continued}
We generated solutions to the static Hamilton-Jacobi equation for the system~\eqref{conres} using base points $\mathbf{e}_{A}$ and
$\mathbf{e}_{B}$, and then matched them into a global quasi-potential
by enforcing continuity at $\mathbf{e}_{S}$ and setting the minimum to 0. We divided this function by two to obtain $V_{0}$. The ordered upwind method was implemented using Cameron's algorithm \citep{Cameron:2012ex}. \textit{Mathematica} was used for data processing and graphics generation, and the code is available as a supplementary file.
For the consumer-resource
system \eqref{conres}, the resulting surface for $V_{0}$ and a corresponding contour plot are shown in figure~\ref{Fig4}a-b. We find that $V_{0}(\mathbf{e}_{A})\!=\!0$,
$V_{0}(\mathbf{e}_{S})\!=\!0.007$, $V_{0}(\mathbf{e}_{B})\!=\!0.006$.
The relative values of $V_{0}$ can be used to make calculations regarding
first passage times and calculate transition rates between $\mathbf{e}_{A}$
and $\mathbf{e}_{B}$. The most fundamental observation, however,
is that $V_{0}(\mathbf{e}_{A})\!<\!V_{0}(\mathbf{e}_{B})$,
which indicates that $\mathbf{e}_{A}$ is more stable than $\mathbf{e}_{B}$.
This contrasts with the linear stability analysis, but agrees with the
qualitative picture obtained from realizations of the system. As in
example 1, analyzing the system through the lens of a potential
(or quasi-potential) function yields a completely different conclusion
than the deterministic analysis, and one that aligns much more clearly with the simulated dynamics we observe. Furthermore, $V_{0}(\mathbf{e}_{S})$
and $V_{0}(\mathbf{e}_{B})$ are closer to each other than
they are to $V_{0}(\mathbf{e}_{A})$. This indicates that
$\mathbf{e}_{S}$ and $\mathbf{e}_{B}$ have similar stabilities, and it encourages us to move beyond the dichotomous classification of equilibria as either stable or unstable, which is often applied in linear stability analysis. The stable vs.~unstable dichotomy
classifies $\mathbf{e}_{A}$ and $\mathbf{e}_{B}$ as alike,
and $\mathbf{e}_{S}$ as different. The quasi-potential shows
that it is $\mathbf{e}_{B}$ and $\mathbf{e}_{S}$ that are
alike, and $\mathbf{e}_{A}$ that is different. By quantifying
stability on a useful continuum, the quasi-potential offers a more nuanced
perspective.
$V_{0}$ also provides a useful way to decompose the deterministic skeleton of equations~\eqref{conres} into physically interpretable parts, $f=-\nabla V_{0}+Q$. This decomposition is shown in figure~\ref{Fig5}a-b. $-\nabla V_{0}$ represents the component that drives the system towards stable states, while $Q$ represents the component that causes consumer-resource cycling.
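A minimal sketch of such a decomposition, for a hypothetical linear system of our own choosing (not the consumer-resource model): $f(x,y)=(-x+y,\,-y-x)$ splits exactly into $-\nabla V_{0}$ with $V_{0}=(x^{2}+y^{2})/2$ plus a circulatory part $Q=(y,-x)$ that is everywhere orthogonal to $\nabla V_{0}$, so it carries the state along level sets of $V_{0}$ rather than across them.

```python
import numpy as np

# Hypothetical linear example (not the consumer-resource model) of the
# decomposition f = -grad(V0) + Q.  Here V0 = (x^2 + y^2)/2 supplies the
# "downhill" part and Q = (y, -x) the circulatory part; Q . grad(V0) = 0
# everywhere, so Q moves the state along level sets of V0 without changing V0.

def f(x, y):
    return np.array([-x + y, -y - x])

def grad_V0(x, y):
    return np.array([x, y])              # V0 = (x^2 + y^2) / 2

def Q(x, y):
    return np.array([y, -x])

x, y = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
residual = f(x, y) - (-grad_V0(x, y) + Q(x, y))            # should vanish
orth = Q(x, y)[0] * grad_V0(x, y)[0] + Q(x, y)[1] * grad_V0(x, y)[1]
```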
\medskip
\section*{Example 3: Predator and Prey With A Limit Cycle}
The quasi-potential allows for stability analysis of attractors that
are more complicated than equilibrium points. As discussed in \citet{Cameron:2012ex} and \citet{Freidlin:2012wd} and explained in appendix~\ref{subsec:FWAppendix}, the quasi-potential can be defined for compact sets, such as limit cycles. As an example
of a non-gradient system with a limit cycle, consider a stochastic version of the Rosenzweig-MacArthur predator-prey model \citep[e.g.][]{logan2009mathematical}:
\begin{equation}
\label{predprey}
\begin{gathered}
dR=\left(\alpha R\left(1-\frac{R}{\beta}\right)-\frac{\delta RC}{\kappa+R}\right)dt+\sigma_{1}dW_{1}\\[5pt]
dC=\left(\frac{\gamma RC}{\kappa+R}-\mu C\right)dt+\sigma_{2}dW_{2} .
\end{gathered}
\end{equation}
Here $R$ is the resource density, $C$ is the consumer density,
and $W_{1}$ and $W_{2}$ are independent Wiener processes. Consumption of resources is represented by a Type II functional response; otherwise the resource dynamics are the same as in example 2. In the absence of resources, the consumer density decreases at an exponential
rate determined by $\mu$. $\sigma_{1}$ and $\sigma_{2}$ are the
noise intensity for the resource and consumer densities, respectively.
We present the analysis of this model with $\alpha\!=\!1.5$, $\beta\!=\!45$,
$\gamma\!=\!5$, $\delta\!=\!10$, $\kappa\!=\!18$, and $\mu\!=\!4$.
Figure~\ref{Fig2}b,d shows a stream plot of the system's deterministic skeleton, and a realization with noise intensities $\sigma_{1}\!=\!\sigma_{2}\!=\!0.8$ over time interval $[0,50]$. This choice of noise intensity and time scale was made to illustrate clear population cycles with amplitude shifts.
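Realizations like these can be generated with a basic Euler-Maruyama scheme. The sketch below uses the parameter values quoted above; the step size, initial densities, and the clamping of densities at zero are illustrative choices of ours, not taken from the original analysis.

```python
import numpy as np

# Euler-Maruyama sketch of the stochastic predator-prey system (predprey)
# with the parameter values quoted in the text.  The step size dt, the
# initial densities, and the clamping of densities at zero are illustrative
# choices of ours, not from the original analysis.
alpha, beta, gamma, delta, kappa, mu = 1.5, 45.0, 5.0, 10.0, 18.0, 4.0
sigma1 = sigma2 = 0.8
dt = 1e-3
n = 50_000                                   # n * dt = 50 time units

rng = np.random.default_rng(0)
noise = rng.standard_normal((n, 2)) * np.sqrt(dt)
R = np.empty(n + 1); C = np.empty(n + 1)
R[0], C[0] = 20.0, 5.0                       # illustrative initial densities
for k in range(n):
    fR = alpha * R[k] * (1 - R[k] / beta) - delta * R[k] * C[k] / (kappa + R[k])
    fC = gamma * R[k] * C[k] / (kappa + R[k]) - mu * C[k]
    # clamp at zero: a pragmatic choice to keep densities non-negative
    R[k + 1] = max(R[k] + fR * dt + sigma1 * noise[k, 0], 0.0)
    C[k + 1] = max(C[k] + fC * dt + sigma2 * noise[k, 1], 0.0)
```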
Surface and contour plots of $V_{0}$ for system~\eqref{predprey} are shown in figure~\ref{Fig4}c-d. Recall that $V_{0}$ provides
a decomposition of the deterministic system into a ``downhill'' force and a ``circulatory'' force, as shown in figure~\ref{Fig5}c-d. In this case, $-\nabla V_{0}$ causes trajectories to be attracted to the limit
cycle's trough. The circulatory component causes trajectories to cycle
in this trough. This decomposition harkens back to \citet{Holling:1973wh}, who made the following observation about dynamical
systems: ``There are two components that are important:
one that concerns the cyclic behavior and its frequency and amplitude,
and one that concerns the configuration of forces caused by the positive
and negative feedback relations.'' The latter is described by the
gradient of $V_{0}$, the former by the circulatory component.
Therefore, we see that the Freidlin-Wentzell approach provides a systematic
way to distinguish between the two concepts identified by Holling.
In this example, we cannot contrast the quasi-potential results
with the traditional linear stability analysis, because the latter only applies to equilibrium points.
\medskip
\section*{Limitations and Generalizations}
In this paper, we have focused on applying the quasi-potential framework to stochastic differential equation models that share several characteristics: 1) time is continuous, 2) state variables are continuous, 3) noise is additive and the noise intensity is the same for both state variables, 4) noise is a direct perturbation to the state variables (as opposed to a perturbation to parameter values), 5) noise is white (as opposed to colored), and 6) noise occurs continually with low intensity (as opposed to occurring as discrete, abrupt events). For models with discrete state variables, different approaches in large deviation theory are needed \citep{Wainrib:2013bs}. However, our approach can be adapted to work in systems that deviate from several of the other characteristics. For instance, characteristic~1 is not a limitation of the quasi-potential framework; \citet{Kifer:1990df} describes how analogous concepts can be applied to discrete-time Markov chains \citep{Kifer:1990df,Faure:2014eu}. Variable transformations (see appendix~\ref{subsec:ONS}) can be used to compute quasi-potentials for systems that deviate from characteristic~3 (e.g.~those with noise terms of unequal intensity ($\sigma_{1}\neq\sigma_{2}$), noise that scales with population density (demographic stochasticity; $\sigma_{i}\,\sqrt{X_{i}}\,dW_{i}$), or multiplicative environmental stochasticity ($\sigma_{i}\,X_{i}\,dW_{i}$) \citep{Hakoyama:2000jz}). Perturbations to parameters rather than state variables can be accommodated by explicitly modeling the parameter as a state variable with its own differential equation \citep{Allen:2007ww}. A similar approach can be applied to models with colored noise (i.e., models that do not have characteristic~5). The noise process itself can be explicitly modeled as a state variable with its own differential equation (e.g., an Ornstein-Uhlenbeck process).
Unfortunately, increasing the dimensionality of the state space in these ways makes the process of numerically calculating the quasi-potential even more challenging. Given the pace of development of numerical techniques \citep{Cameron:2012ex}, however, it is conceivable that solving such systems will soon be more practical.
Characteristic~6, which states that noise occurs continually with low intensity, is central to the quasi-potential framework. The expressions relating the quasi-potential to steady-state probability distributions and mean first passage times are based on the assumption that the noise intensity is very small. As a rule of thumb, these approximations are only useful when $\sigma^{2}$ is much less than $2\,\Delta V_{0}$, where $\Delta V_{0}$ is the difference in the quasi-potential between the stable equilibrium and the saddle. In appendix~\ref{subsec:MFPTa}, we provide details on how mean first passage time scales with noise intensity, and present a numerical examination of these concepts applied to example 2. For systems that experience extreme events and external shocks (e.g., natural disasters, extreme climatic conditions, invasive species introductions, etc.), the quasi-potential no longer provides complete information. If a shock directly impacts the state variable (e.g., if the lake system in example 1 were to receive a massive pulse of phosphorous run-off), the ball in the ball-in-cup diagram would experience a large, instantaneous horizontal displacement (perhaps skipping over intervening valleys and hills). If the system reverts to deterministic dynamics, or stochastic dynamics with lower-intensity perturbations after the shock, the quasi-potential will still be useful for describing the system's response after the shock. In the presence of large shocks, though, the quasi-potential loses its ability to make probabilistic predictions. If a shock impacts the state variable indirectly (e.g., if an invasive species entered the lake and fundamentally altered the phosphorous cycling), the shape of the quasi-potential surface would change dramatically. The interaction between a dynamically changing quasi-potential surface and state-variable noise would be difficult to analyze using the methods presented here.
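The small-noise scaling behind this rule of thumb can be checked directly in one dimension, where the mean first passage time has a closed double-integral form. The sketch below uses an illustrative double well of our own (not a model from this paper) and verifies that $(\sigma^{2}/2)\ln T$ approaches the barrier height $\Delta V_{0}$ as $\sigma$ shrinks.

```python
import numpy as np

# Illustrative check (our own example, not a model from the text) of the
# small-noise rule of thumb: for dX = -V'(X) dt + sigma dW with the double
# well V(x) = x^4/4 - x^2/2, the mean first passage time T from the well at
# x = -1 to the saddle at x = 0 obeys ln T ~ 2*DeltaV0/sigma^2, so
# (sigma^2/2)*ln T approaches the barrier DeltaV0 = 1/4 as sigma -> 0.

def V(x):
    return x**4 / 4 - x**2 / 2

def mfpt(sigma, a=-1.0, b=0.0, left=-3.0, h=1e-3):
    """Exact 1-D mean first passage time from a to b (reflecting at `left`),
    via the classical double-integral formula with D = sigma^2/2."""
    D = sigma**2 / 2
    z = np.arange(left, b + h, h)
    inner = np.cumsum(np.exp(-V(z) / D)) * h        # int_left^y e^{-V/D} dz
    mask = z >= a
    return np.sum(np.exp(V(z[mask]) / D) * inner[mask]) * h / D

dV = V(0.0) - V(-1.0)                               # DeltaV0 = 1/4
est = [(s**2 / 2) * np.log(mfpt(s)) for s in (0.4, 0.25, 0.15)]
# est decreases toward dV as sigma shrinks; the gap is the prefactor correction
```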
The three examples in this manuscript show that the quasi-potential often provides a more informative stability metric than traditional linear analysis. Linear stability is much easier to measure in the field, though. This can be done by slightly perturbing a system and measuring the time it takes to return to equilibrium. Before the quasi-potential can be calculated, a model must be fit to observed data and validated. This limitation is also shared by other methods for analyzing systems with alternative stable states, which depend explicitly \citep[e.g.][]{Boettiger:2012jc} or implicitly \citep[e.g.][]{Dakos:2008dy} on underlying models. Fortunately, carefully controlled experiments \citep{Dai:2012gx} and advances in model-fitting \citep{Ives:2008kj} point toward a promising future for the empirical study of shifts between alternative stable states through models.
\medskip
\section*{A Path Through the Quagmire of Stability Concepts}
Systems with alternative stable states are only interesting when perturbations
can cause shifts between states; when these stochastic perturbations are continual and random, as in most ecological systems, stochastic models are appropriate. When state and time
variables are continuous, stochastic differential equations like \eqref{gradient3} are the
best option. The
three examples presented in this paper show that the quasi-potential
provides a useful way to study such stochastic differential equation
models. In particular, it provides a way to quantify the relative
stability of alternative stable states.
Unfortunately, many notions of stability were developed for a deterministic
context, and these can be misleading when applied to stochastic systems
(as in examples 1 and 2). Our goal is not
to add to the existing tangle of stability definitions \citep{Grimm:1997tg}, but rather to provide
a clarifying mathematical interpretation. Many existing definitions
can be related to the ball-in-cup heuristic, and the quasi-potential
shows that this metaphor has a useful and rigorous mathematical meaning.
The translation between mathematical model and potential surface is
easy in gradient systems (in particular, for one-dimensional systems,
which are always gradient systems). The translation for more general
systems is less obvious, but the quasi-potential fills that need.
Figure~\ref{Fig6}a is a ball-in-cup diagram of the potential for a one-dimensional system that helps to illustrate several important concepts associated with stability. These concepts are equally relevant for higher dimensional systems, where the ball rolls on a multi-dimensional surface specified by $V_{0}$ (half the Freidlin-Wentzell quasi-potential) instead of a curve.
One metric of stability for an equilibrium $\mathbf{e_{0}}$ is the curvature of $V_0$ at $\mathbf{e_{0}}$ (dashed black line in figure~\ref{Fig6}a). The greater the curvature, the more difficult it is to perturb the system away from $\mathbf{e_{0}}$, and in this sense, the more stable $\mathbf{e_{0}}$ is. In one dimension, the curvature at $\mathbf{e_{0}}$ is $V_{0}''(\mathbf{e_{0}})$, which is minus the eigenvalue obtained in linear stability analysis. In higher dimensions, the eigenvalues are again directly related to curvature, now along different planar sections of $V_0$ (see appendix~\ref{subsec:curve}). Thus, measuring the curvature of $V_{0}$ at $\mathbf{e_{0}}$ is equivalent to determining asymptotic stability through linear stability analysis.
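In one dimension this equivalence is a two-line numerical check. The example below is our own illustration: for the double well $V(x)=x^{4}/4-x^{2}/2$, the equilibrium $e_{0}=1$ of $f=-V'$ has curvature $V''(e_{0})=2$ and linearization eigenvalue $f'(e_{0})=-2$, i.e., minus the curvature.

```python
# Illustrative check (our own example): curvature of V at an equilibrium
# equals minus the linear-stability eigenvalue.  V(x) = x^4/4 - x^2/2,
# f(x) = -V'(x) = x - x^3, equilibrium e0 = 1.

def V(x):
    return x**4 / 4 - x**2 / 2

def f(x):
    return x - x**3                  # f = -V'

e0, h = 1.0, 1e-5
curvature = (V(e0 + h) - 2 * V(e0) + V(e0 - h)) / h**2   # ~  2 = V''(e0)
eigenvalue = (f(e0 + h) - f(e0 - h)) / (2 * h)           # ~ -2 = f'(e0)
```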
Asymptotic stability has a long history in ecology \citep{May:1973wc}. The primary problem with this metric is
that it is purely local -- once a trajectory is perturbed outside of
a tiny neighborhood of an equilibrium, nonlinear effects can come
into play and the approximation is no longer informative. Furthermore, this approach views perturbations as being
isolated one-time events. With this view, a system is displaced, and
then the dynamics proceed deterministically without further perturbation. In reality,
perturbations often take place on a continual basis. Indeed, as noted
by \citet{Ives:1995tm}, ``To apply generally to ecological communities,
stability needs to be defined for stochastic systems in which environmental
perturbations are continuous and equilibrium densities are never achieved.'' Likewise, \citet{Neubert:1997wk} write, ``real ecosystems are seldom
if ever subject to single, temporally isolated perturbations. Nevertheless,
our analyses, together with most theoretical and experimental studies
of resilience, ignore the effects of continual stochastic disturbances
in the hope that the deterministic results will shed light on the
stochastic case.''
A second metric of stability of an equilibrium $\mathbf{e_{0}}$ is the minimum distance between $\mathbf{e_{0}}$ and the boundary of its domain of attraction (dotted line in figure~\ref{Fig6}a). The width of the basin of attraction measures the magnitude of perturbation that a system can sustain and still be guaranteed to return to $\mathbf{e_{0}}$. One problem with this metric
is that, like asymptotic stability, it views perturbations as singular,
isolated events. For this metric, it is only the
boundary of basins of attraction that matter, not the shape or height
of $V_{0}$. If perturbations happen continuously,
the shape and height of $V_{0}$ are important. Nonetheless, this basin width metric can be extremely useful.
A third metric of stability is the height of $V_{0}$ (gray line in figure~\ref{Fig6}a). \citet{Holling:1973wh} anticipated this concept, and called it resilience, which he explained with ball-in-cup diagrams. He defined one aspect of
resilience, writing: ``the height of the lowest point of the basin
of attraction ... will be a measure of how much the forces have to
be changed before all trajectories move to extinction of one or more
of the state variables''. Holling had no way of defining the surface,
and so could not actually quantify notions like ``height''; the
quasi-potential solves this problem. Holling's identification of the
difference between asymptotic stability and this definition of resilience (basin height) is hugely important, and it has major consequences for the analysis of alternative stable states.
This third metric is perhaps the most useful of the three we have explored. Unlike the first two metrics, it is
appropriate for use in systems that undergo continuous stochastic perturbations. As we saw in the examples in this paper, it can be used to compute mean first passage times, and is directly related to steady-state probability densities.
These three metrics of stability can yield conflicting information about alternative stable states. Figure~\ref{Fig6}b shows these three metrics for the equilibria $\mathbf{e}_{A}$ and $\mathbf{e}_{B}$ from example 2. Note that the basin width metric and the quasi-potential metric show that $\mathbf{e}_{A}$ is more stable than $\mathbf{e}_{B}$, but the asymptotic stability metric shows the reverse. Appendix~\ref{subsec:AnotherEx} demonstrates that the equilibria in a multi-stable system can exhibit any combination of the three stability metrics. That is, one equilibrium can be classified as most stable according to the first metric, but not the second or third; or by the first and second, but not the third; etc.
Resilience is a concept closely related to stability, and like stability, it is defined in different ways by different authors. In a large review of the ecological literature, \citet{MyersSmith:2012dr} found that resilience was used in many ambiguous
and contradictory ways. Some authors, like \citet{Holling:1973wh} view stability and resilience as distinct properties; others, like \citet{Harrison:1979uya} define resilience as a single aspect of stability. \citet{Pimm:1984tu} and \citet{Neubert:1997wk} define resilience as essentially the asymptotic stability metric, while \citet{Harrison:1979uya}, \citet{Peterson:1998cn}, and \citet{Gunderson:2000ja} define it as essentially the basin width metric. \citet{Ives:2007jba} defines Holling's resilience using the dominant eigenvalue of the saddle that separates alternative stable states; like the asymptotic stability metric, this is the result of applying a local analysis to the deterministic skeleton of a system.
\citet{Hodgson:2015dm} argue that resilience cannot be quantified by a single metric, and use a potential function to illustrate the different components of resilience, which include latitude (the width of the basin of attraction) and elasticity (the asymptotic stability metric). The quasi-potential framework aids this clarification about resilience by extending it to multi-dimensional systems.
The quasi-potential is also useful for understanding several other concepts related to stability. Reactivity \citep{Neubert:1997wk}
differs from asymptotic stability, in that it quantifies the immediate
(as opposed to long\-term) growth or decay of perturbations. In the
quasi-potential framework, reactivity is related to the circulatory
component of the vector field. In the neighborhood of asymptotically
stable equilibria with high reactivity, the circulatory component
of the vector field will carry trajectories away from the equilibrium
before bringing them back.
\citet{Harrison:1979uya} defined resistance
as the ability of a system to avoid displacement during a time of
stress. The stress is quantified in terms of an environmental parameter
distinct from the state variables, and hence the interpretation of
resistance depends on the parameter under examination. Resistance is best viewed as a measure of how
dramatically $V_{0}$ changes due to environmental parameter changes.
Finally, Harrison defined persistence as the ability of a system to stay in
a given range when continual perturbations are applied. He notes that
this is the property that is most biologically useful, and that stochastic
differential equations are the best mathematical modeling tool to
assess it. Unlike his definitions of resilience and resistance, this
definition views the dynamics of the system as stochastic and subject
to continual perturbations. He was unable to venture
far with the mathematical analysis for this definition, but the quasi-potential provides a way forward. Mathematically, persistence can be defined as the first passage time for a system to leave a specified domain, which is directly related to the quasi-potential. Thus Harrison's persistence is another manifestation of the quasi-potential.
Despite the confusing array of stability concepts currently used in ecology, we believe that the quasi-potential concept provides hope for clarity. The three metrics associated with the quasi-potential show how many of these concepts are deeply related (figure~\ref{Fig6}).
The mathematics developed
by Freidlin and Wentzell \citeyearpar{Freidlin:2012wd}, coupled with numerical advances by Cameron \citeyearpar{Cameron:2012ex},
make the quasi-potential a practical and accessible tool for ecologists
to study alternative stable states. This paper's goal is to demonstrate
the utility of the quasi-potential, and to properly position it in
terms of existing ecological ideas.
\medskip
\section*{Acknowledgements}
This work was supported by a Complex Systems Scholar grant to K.C.A.~from the James S.~McDonnell Foundation. Special thanks to M.K.~Cameron for assistance with implementing the quasi-potential analysis and for providing C code. C.~Boettiger and an anonymous reviewer provided valuable feedback that improved the quality of this paper. We thank S.~Catella, K.~Dixon, C.~Moore, C.~Stieha, A.~Barbaro, A.~Alsenafi, R.~Snyder, J.~Burns, and the rest of the CWRU ecology group for helpful discussions on earlier versions of this manuscript.
\medskip
{T}he quasicrystals (QCs) have a unique lattice structure with rotational symmetry forbidden in periodic crystals~\cite{Shechtman,Tsai,Takakura}.
In the QC, it has remained unresolved whether magnetic long-range order is realized~\cite{Suzuki}.
In the 1/1 approximant crystal (AC) composed of rare earths, which is a periodic crystal with the same local atomic configuration as the QC,
magnetic long-range orders have been observed in
Cd$_6$R (R=Pr, Nd, Sm, Gd, Tb, Dy, Ho, Eu, and Tm)~\cite{Tamura2010,Mori,Tamura2012} and Au-SM-R (SM=Si, Al, Ge, and Sn; R=Gd, Tb, Dy, and Ho)~\cite{Hiroto2013,Hiroto2014,Das}.
Among them, the magnetic structures have recently been identified experimentally as
the ferromagnetic (FM) order in Au$_{70}$Si$_{17}$Tb$_{13}$~\cite{Hiroto} and antiferromagnetic (AFM) order in Au$_{72}$Al$_{14}$Tb$_{14}$~\cite{Sato2019}.
However, the ordering mechanism remains elusive, although theoretical analyses based on spin models and the Hubbard model, mostly in small clusters or low-spatial-dimension systems, have been reported~\cite{Axe,Wessel,Kons,Jan2007,Hucht,Thiem,Komura,Sugimoto,Koga2017,STS},
where the magnetic anisotropy originating from the crystalline electric field (CEF) at the rare earth was not taken into account microscopically.
Moreover, recent experimental discovery of the FM long-range order in the Tb-based QC~\cite{Tamura2021} has opened a new stage of research, which also calls for theoretical study from this viewpoint.
In this report, we present our theoretical discovery of topological magnetic textures and magnetic long-range orders in the Tb-based 1/1 AC and QC.
We clarify that the valences of the ligand ions surrounding Tb play a key role in controlling the magnetic anisotropy in the CEF, which is crucial to drive unique magnetic textures in the AC and QC.
We find the hedgehog state characterized by topological charge $n=1$ and
also the whirling-moment states characterized by unusually large topological charge $n=3$. These topological states
are shown to be realized as the AFM orders with emergent monopole and antimonopole in the 1/1 AC, which exhibit the topological Hall effect under an applied magnetic field, accompanied by topological as well as metamagnetic transitions.
\section*{Results}
\subsection*{Crystalline electric field}
Let us start with the CEF analysis of the Tb-based QC and AC by considering the Au-SM-R system.
The 1/1 AC Au-SM-R consists of the Tsai-type cluster with concentric shell structures as illustrated in Figs.~\ref{fig:atoms}A-\ref{fig:atoms}E where Au$_{70}$Si$_{17}$Tb$_{13}$ is shown as a typical case~\cite{Hiroto}.
In Fig.~\ref{fig:atoms}C, Tb atoms are located at each vertex of the icosahedron (IC), forming the Tb 12 cluster.
The local configuration of atoms surrounding Tb is shown in Fig.~\ref{fig:atoms}F, where the $z$ axis is taken along the direction passing from the center of the IC through a Tb site, and the $y$ axis is taken so that the $yz$ plane is a mirror plane.
\begin{figure}[t]
\includegraphics[width=7.5cm]{atoms.eps}
\caption{(color online)
Tsai-type cluster consists of (A) cluster center, (B) dodecahedron, (C) IC, (D) icosidodecahedron, and (E) defect rhombic triacontahedron
with Tb (gray), Si (blue), and Au (yellow).
(F) Local configuration around Tb.
The numbers label the surrounding Si and Au atoms.
}
\label{fig:atoms}
\end{figure}
Since Tb$^{3+}$ has the $4f^{8}$ configuration, i.e., more than half-filling of the closed $4f$ shell $(4f^{14})$,
the CEF Hamiltonian for Tb$^{3+}$ is expressed as $H_{\rm CEF}=6|e|V_{\rm cry}$ in the hole picture on the basis of the point charge model. Here, the potential $V_{\rm cry}$ is given by
$
V_{\rm cry}=\sum_{i=1}^{16}q_i/|{\bm R}_i-{\bm r}|,
$
where ${\bm R}_i$ is the position vector of the ligand ions Au$^{{\rm Z}_{\rm Au}+}$ and Si$^{{\rm Z}_{\rm Si}+}$ with $Z_{\rm Au}$ and $Z_{\rm Si}$ being the valences of Au and Si respectively.
Then the charge $q_i$ is given by $q_i=Z_{\rm Au}|e|$ and $q_i=Z_{\rm Si}|e|$ on the $i$th site in Fig.~\ref{fig:atoms}F.
Recently, $H_{\rm CEF}$ in rare-earth-based QCs and ACs has been formulated in terms of operator equivalents~\cite{WM2021} as
\begin{eqnarray}
H_{\rm CEF}=
\sum_{\ell=2,4,6}\left[B_{\ell}^{0}(c)O_{\ell}^{0}(c)+\sum_{\eta=c,s}\sum_{m=1}^{\ell}B_{\ell}^{m}(\eta)O_{\ell}^{m}(\eta)\right].
\label{eq:H}
\end{eqnarray}
Here, $O_{\ell}^{m}(c)$ and $O_{\ell}^{m}(s)$ are the Stevens operators~\cite{Stevens} and $B_{\ell}^{m}(\eta)$ is the coefficient.
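To make the operator-equivalent machinery concrete, the toy sketch below diagonalizes a CEF Hamiltonian in the $|J=6,m\rangle$ basis, keeping only the axial term $B_{2}^{0}O_{2}^{0}$. The value $B_{2}^{0}=-1$ and the truncation to a single term are our own illustrative choices; the full Hamiltonian in eq.~(\ref{eq:H}) also contains $\ell=4,6$ and $m\neq 0$ terms.

```python
import numpy as np

# Toy sketch of diagonalizing a CEF Hamiltonian in the |J = 6, m> basis,
# keeping only the axial Stevens term O_2^0 = 3*Jz^2 - J(J+1).  The value
# B_2^0 = -1 and the single-term truncation are illustrative choices; the
# full Hamiltonian also contains l = 4, 6 and m != 0 terms.
Jval = 6
m = np.arange(-Jval, Jval + 1, dtype=float)
Jz = np.diag(m)
O20 = 3 * Jz @ Jz - Jval * (Jval + 1) * np.eye(2 * Jval + 1)

B20 = -1.0                       # negative B_2^0 favors an easy axis
H = B20 * O20                    # diagonal in the |m> basis
E = np.sort(np.linalg.eigvalsh(H))

# ground doublet: |m = +/-6> with energy B20 * (3*36 - 42) = -66
```

A negative axial coefficient thus pins the ground doublet to $|m=\pm 6\rangle$, the simplest caricature of the strong easy-axis anisotropy discussed in the text.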
\begin{figure}[t]
\includegraphics[width=6.5cm]{E_CEF.eps}
\caption{(color online)
(A) The $\alpha$ dependence of the CEF energy levels $E_n$ for $Z_{\rm Si}=\alpha Z_{\rm Au}$ with $Z_{\rm Au}=0.223$.
Inset shows the enlargement for $0\le \alpha\le 1$.
(B) The $\alpha$ dependence of the $x$ (top panel), $y$ (middle panel), and $z$ (bottom panel) components of the largest magnetic moment for the CEF ground state.
(C) The $\alpha$ dependence of the moment direction $\theta$, defined as the angle from the $z$ axis in the $y$--$z$ plane (see inset), for $0.37\le \alpha\le 4.0$. In the inset, the directions pointed to by ${\bm J}$ are indicated for $\alpha=0.37$ (orange square), $\alpha=0.8$ (purple inverted triangle), $\alpha=1.0$ (blue triangle), $\alpha=1.5$ (black circle), and $\alpha=4.0$ (red square).
}
\label{fig:E_CEF}
\end{figure}
The valences of Au and Si are known to be 1 and 4, respectively, in normal metals~\cite{Pearson} and in the alloyed AC~\cite{Mizutani}. In reality, the screening effect of conduction electrons reduces $Z_{\rm Au}$ and $Z_{\rm Si}$ from these values.
Hence, we analyze the CEF energies for $0\le \alpha\le 4$ in $Z_{\rm Si}=\alpha Z_{\rm Au}$. Here, $Z_{\rm Au}$ is set to be 0.223 as a typical value,
which was determined by the neutron measurement in Au$_{70}$Si$_{17}$Tb$_{13}$~\cite{Hiroto}.
By diagonalizing $H_{\rm CEF}$ for the total angular momentum $J=6$ as the ground multiplet by the Hund's rule, we obtain the CEF energies $E_n$ for $n=0$--$12$ as shown in Fig.~\ref{fig:E_CEF}A.
Note here that the choice of $Z_{\rm Au}$ itself does not affect the eigenstates of $H_{\rm CEF}$ $|\psi_n\rangle$ as far as $\alpha$ is the same.
The CEF energies are seen to split into roughly 7 to 8 levels, some of which are nearly degenerate.
The energy difference between the ground state and the first excited state is typically of the order of $10^2$~K to $10^3$~K. Hence, the CEF ground state dominates the low-temperature properties.
Next, to clarify the principal axis of the magnetic moment in the CEF ground state, we calculate $3\times 3$ matrix $M_{\xi,\zeta}\equiv \langle\psi_0|\hat{J}_{\xi}\hat{J}_{\zeta}|\psi_0\rangle$ for $\xi, \zeta=x, y,$ and $z$, where $\hat{J}_{\xi}$ is the operator of the total angular momentum.
By diagonalizing $M$, we obtain the normalized eigenvector for the largest eigenvalue, which gives the largest moment direction ${\bm J}=(J_x, J_y, J_z)$.
We find that the moment direction changes depending on $\alpha$ as shown in Fig.~\ref{fig:E_CEF}B.
For $0.37\le \alpha\le 4.0$, the moment lies in the mirror plane, i.e., the $yz$ plane in Fig.~\ref{fig:atoms}F,
while for $0\le \alpha<0.37$, the moment is directed as ${\bm J}\parallel (1,0,0)$.
At $\alpha=0.37$, ${\bm J}$ points in the $\theta=95.06^{\circ}$ direction from the $z$ axis,
and as $\alpha$ increases, $\bm J$ rotates anticlockwise and tends to approach $\theta=0^{\circ}$, i.e., the pseudo-5-fold axis [the $z$ axis in Fig.~\ref{fig:atoms}F].
\subsection*{Minimal model with magnetic anisotropy}
To clarify how the magnetic anisotropy arising from the CEF affects magnetism on the IC, we consider the minimal model
\begin{eqnarray}
H=-\sum_{\langle i,j\rangle}J_{i,j}\hat{\bm J}_i\cdot\hat{\bm J}_j,
\label{eq:HI}
\end{eqnarray}
where $J_{i,j}$ is the exchange interaction between the $i$th and $j$th Tb sites on the IC [see Fig.~\ref{fig:atoms}C]. Here, $\hat{\bm J}_i$ represents the unit Ising-spin vector operator of Tb$^{3+}$, whose direction is restricted to be either parallel or antiparallel to the moment direction shown in Fig.~\ref{fig:E_CEF}B.
We consider the nearest neighbor (N.N.) interaction $J_1$
and next N.N. (N.N.N.) interaction $J_2$.
So far, in several Tb-based ACs, a positive Weiss temperature has been observed, which indicates FM interactions between the magnetic moments at the Tb sites~\cite{Suzuki}.
Indeed, a positive Weiss temperature is observed in the Tb-based QC where the FM long-range order has been identified~\cite{Tamura2021}.
Hence, we focus on the FM interactions $J_1>0$ and $J_2>0$.
The case for the AFM interaction will be reported elsewhere.
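As a check on the $\theta=0^{\circ}$ limit of this model, the ground state on a single IC can be found by brute force. The sketch below is our own construction: it sets $J_1=1$ and $J_2=0$ (an illustrative simplification of the $J_2/J_1$ values in the phase diagram), takes each Ising axis radially outward, and enumerates all $2^{12}$ spin configurations; the minimum-energy states are the all-aligned pair, i.e., the hedgehog and antihedgehog.

```python
import itertools
import numpy as np

# Brute-force ground state of the Ising model on a single IC at theta = 0,
# where each Ising axis n_i points radially outward.  We set J1 = 1 and
# J2 = 0 for brevity (an illustrative simplification); all 2^12 = 4096
# configurations are enumerated directly.
phi = (1 + 5 ** 0.5) / 2
base = [(0.0, s1, s2 * phi) for s1 in (1.0, -1.0) for s2 in (1.0, -1.0)]
verts = np.array(base
                 + [(z, x, y) for (x, y, z) in base]
                 + [(y, z, x) for (x, y, z) in base])     # 12 IC vertices
nhat = verts / np.linalg.norm(verts, axis=1, keepdims=True)
dots = nhat @ nhat.T

# N.N. bonds: the 30 edges, whose length is exactly 2 for these coordinates
edges = [(i, j) for i in range(12) for j in range(i + 1, 12)
         if np.linalg.norm(verts[i] - verts[j]) < 2.1]

def energy(s):
    """H = -J1 * sum_<ij> (s_i n_i).(s_j n_j) with J1 = 1."""
    return -sum(s[i] * s[j] * dots[i, j] for (i, j) in edges)

ground = min(itertools.product((1, -1), repeat=12), key=energy)
# all s_i equal: the hedgehog (moments outward) / antihedgehog pair,
# with energy -30/sqrt(5) since n_i . n_j = 1/sqrt(5) on every edge
```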
\begin{figure}
\centering
\includegraphics[width=14cm]{PD3.eps}
\caption{(color online)
(A) The ground-state phase diagram of the model (\ref{eq:HI}) on an IC and in the 1/1 AC, as a function of $J_2/J_1$ (with $J_1>0$) and $\theta$ (as well as $\alpha$).
In the 1/1 AC, open (filled) symbols indicate the FM (AFM) orders of the magnetic IC.
Contour plot of the topological charge $n$ on an IC is also shown.
(B) Magnetic textures of each symbol in (A). Hedgehog state (red square) and
antiwhirling state (orange square).
(C) Hedgehog state for $\theta=33^{\circ}$.
(D) Antiwhirling state for $\theta=75^{\circ}$ characterized by $n=1$.
}
\label{fig:PD}
\end{figure}
By performing numerical calculations, we determine the ground-state phase diagram in Fig.~\ref{fig:PD}A.
Here we plot a wide range of $\theta$ as well as $\alpha$ on the horizontal axis. If we consider the effects of the Au-SM mixed sites in the Au-SM-Tb QCs and ACs, the $\alpha$ dependence of $\theta$ is expected to change slightly from that in Fig.~\ref{fig:E_CEF}C~\cite{WM2021}. Hence, the $J_2/J_1$--$\theta$ phase diagram is generally relevant to rare-earth-based QCs and ACs with strong magnetic anisotropy.
The phase diagram for $0\le\alpha\le 0.37$ (see Fig.~\ref{fig:E_CEF}B) is shown in SI (Fig.~S1).
\subsection*{Magnetic textures on icosahedron and topological charges}
In Fig.~\ref{fig:PD}A,
we find that unique magnetic textures appear depending on $J_2/J_1$ and $\theta$ as denoted by each symbol, which is shown in Fig.~\ref{fig:PD}B.
We find that the hedgehog state~\cite{STS} (red square $\theta=0^{\circ}$ in Fig.~\ref{fig:PD}B), where all the moments at the 12 Tb sites on the IC are directed outward,
appears in the wide range $0\le\theta\le 33^{\circ}$ for $J_2/J_1=1/10$ in Fig.~\ref{fig:PD}A.
The hedgehog state for $\theta=33^{\circ}$ is shown in Fig.~\ref{fig:PD}C.
In the hedgehog states, the total magnetic moment on the IC ${\bm J}_{\rm tot}=\sum_{i=1}^{12}\langle\hat{\bm J}_i\rangle$ is zero.
The ${\bm J}_{\rm tot}={\bf 0}$ state is also realized
as the whirling-moment state denoted by orange squares in Figs.~\ref{fig:PD}A and \ref{fig:PD}B where the magnetic moments are whirling if they are seen from the (111) direction (SI, Fig.~S2).
To clarify the topological character of each magnetic texture,
we define
the scalar chirality of the IC with its center-of-mass position ${\bm R}$ by $\chi({\bm R})=\sum_{i,j,k\in{\rm IC}}\chi_{i,j,k}$ with $\chi_{i,j,k}\equiv {\bm J}_i\cdot({\bm J}_j\times{\bm J}_k)$ where the order of $i$, $j$, and $k$ is defined in the anticlockwise direction with respect to the normal vector of the triangle formed by the $i$, $j$, and $k$th sites, $\hat{n}_{i,j,k}$, pointing outward from ${\bm R}$.
In the same way, we define the solid angle subtended by the twelve moments on the IC as
$\Omega({\bm R})=\sum_{i,j,k\in{\rm IC}}\Omega_{ijk}$.
Here, $\Omega_{ijk}$ is the solid angle for three magnetic moments ${\bm J}_i$, ${\bm J}_j$, and ${\bm J}_k$,
which is given by $\Omega_{ijk}=2\tan^{-1}[\chi_{ijk}/(1+{\bm J}_i\cdot{\bm J}_j+{\bm J}_j\cdot{\bm J}_k+{\bm J}_k\cdot{\bm J}_i)]$~\cite{Eriksson}.
The topological charge $n$ is defined as $n\equiv\Omega({\bm R})/(4\pi)$ per IC.
Then we find that the hedgehog is characterized by $n=1$. This implies that the magnetic moments at the 12 Tb sites on the IC cover all the sphere once, which is regarded as the emergent monopole acting as the source of emergent field~\cite{Dirac,Volovik,Kotiuga,Feldtkeller,Doring,Kabanov,Thiaville,Milde,Castelnovo,Morris,Aoyama}.
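The normalization $n=\Omega({\bm R})/(4\pi)$ can be checked numerically. The following is a minimal sketch (illustrative code, not the authors' implementation): it places unit moments at the 12 vertices of a regular icosahedron, triangulates the 20 outward-oriented faces, and sums the solid angles $\Omega_{ijk}$ with the $\tan^{-1}$ formula above; the hedgehog (all moments outward) then gives $n=1$ and the antihedgehog $n=-1$.

```python
from itertools import combinations
from math import atan2, pi, sqrt

phi = (1 + sqrt(5)) / 2  # golden ratio

# 12 vertices of a regular icosahedron: cyclic permutations of (0, +-1, +-phi)
verts = [p for s1 in (1, -1) for s2 in (1, -1)
         for p in [(0, s1, s2 * phi), (s1, s2 * phi, 0), (s2 * phi, 0, s1)]]

def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0])
def unit(a):
    n = sqrt(dot(a, a)); return tuple(x / n for x in a)
def d2(a, b): return sum((x - y) ** 2 for x, y in zip(a, b))

# The 20 triangular faces are the triples of mutually adjacent vertices
# (squared edge length is exactly 4 for this vertex set; next distance is larger).
faces = [f for f in combinations(range(12), 3)
         if max(d2(verts[i], verts[j]) for i, j in combinations(f, 2)) < 5]
assert len(faces) == 20

def charge(moments):
    """Topological charge n = Omega/(4*pi) for 12 unit moments on the IC."""
    omega = 0.0
    for i, j, k in faces:
        u = [unit(verts[m]) for m in (i, j, k)]
        if dot(u[0], cross(u[1], u[2])) < 0:   # orient the face outward
            j, k = k, j
        a, b, c = moments[i], moments[j], moments[k]
        chi = dot(a, cross(b, c))              # scalar chirality chi_ijk
        omega += 2 * atan2(chi, 1 + dot(a, b) + dot(b, c) + dot(c, a))
    return omega / (4 * pi)

hedgehog = [unit(v) for v in verts]                    # all moments outward
antihedgehog = [tuple(-x for x in m) for m in hedgehog]
print(round(charge(hedgehog)), round(charge(antihedgehog)))  # 1 -1
```

Inverting all moments flips every $\chi_{ijk}$ while leaving the pairwise dot products unchanged, which is why the antihedgehog carries the opposite charge.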
Interestingly, we find that an unusually large magnitude of the topological charge, $n=-3$, is realized in the
whirling state in the large-$\theta$ region for $\theta\gsim 83^{\circ}$ and $J_2/J_1>2$ (orange square); we call this state the
antiwhirling state hereafter (Fig.~\ref{fig:PD}A).
Furthermore, we find that the topological transition from $n=-3$ to $n=1$ occurs around $\theta=79.5^{\circ}$ (see Fig.~\ref{fig:PD}D) in the antiwhirling region in Fig.~\ref{fig:PD}A.
We also show the contour plot of the topological charge $n$ in Fig.~\ref{fig:PD}A.
A finite topological charge with integer $n\ne 0$ appears in the ${\bm J}_{\rm tot}={\bf 0}$ states.
As distinct from topological spin textures intensively studied in periodic crystals~\cite{Nagaosa,Kanazawa,Tokura,Aoyama},
our finding is that the total angular momentum, i.e., the orbital angular momentum coupled to spin, forms the novel topological textures designed by the CEF-anisotropy protected IC.
\subsection*{Magnetism in 1/1 approximant}
Next, let us consider the Tb-based 1/1 AC with the body-centered-cubic (bcc) lattice structure composed of the Tsai-type cluster.
In the unit cell, there are two ICs, one of which is located at the bcc center and the other is at the bcc corner.
When the model (\ref{eq:HI}) is applied to the 1/1 AC, where $\langle i,j\rangle$ in Eq.~(\ref{eq:HI}) is taken into account not only for the intra-IC Tb pairs but also for the inter-IC Tb pairs, we find that the N.N. (N.N.N.) inter-IC $\langle\hat{\bm J}_i\cdot\hat{\bm J}_j\rangle$ is equal to the N.N.N. (N.N.) intra-IC $\langle\hat{\bm J}_i\cdot\hat{\bm J}_j\rangle$.
In the AC Au$_{70}$Si$_{17}$Tb$_{13}$, as a typical case, the N.N. Tb distance within an IC is $0.38a$ and the N.N.N. Tb distance is $0.61a$, where $a=14.726~{\rm \AA}$ is the lattice constant of the bcc unit cell~\cite{Hiroto}. On the other hand, the N.N. and N.N.N. Tb distances between the ICs are $0.37a$ and $0.53a$ respectively. Hence, the intra-IC and inter-IC N.N. Tb distances are close to each other, and the same holds for the N.N.N. Tb distances.
Hence, if the inter-IC N.N. and N.N.N. interactions are set to be $J_1$ and $J_2$ respectively as done for the intra-IC N.N. and N.N.N. interactions respectively,
either of the FM or AFM arrangement of the magnetic IC
at the bcc center and corner can be determined as the ground state by evaluating the energy of the inter-IC contributions in Eq. (\ref{eq:HI}).
The result for the 1/1 AC is shown in Fig.~\ref{fig:PD}A, where the FM (AFM) order of the magnetic IC is denoted by the open (filled) symbols.
The ${\bm J}_{\rm tot}={\bm 0}$ states
are realized as all AFM orders.
Namely, the hedgehog $(n=1)$ state and antihedgehog $(n=-1)$ state where all the moments are inverted from the hedgehog are located at the bcc center and corner respectively as shown in Fig.~\ref{fig:Tb_AC}A.
The whirling state $(n=3)$ where all the moments are inverted from the antiwhirling state and the antiwhirling state $(n=-3)$ at the bcc corner and center are realized respectively as in Fig.~\ref{fig:Tb_AC}B.
These are regarded as emergent monopole and antimonopole with the ``charge'' $n$, acting as the source and sink of emergent field respectively.
It is noteworthy that neutron measurements with the analysis based on the model (\ref{eq:HI}) on an IC identified $\theta=86^{\circ}$ and $J_2/J_1=2.3$ for Au$_{72}$Al$_{14}$Tb$_{14}$ showing the AFM order (orange square in Fig.~\ref{fig:PD}B)~\cite{Sato2019} and $\theta=80^{\circ}$ and $J_2/J_1>0$ for Au$_{70}$Si$_{17}$Tb$_{13}$ showing the FM order (pink inverted triangle in Fig.~\ref{fig:PD}B)~\cite{Hiroto}, which are consistent with our results in Fig.~\ref{fig:PD}A.
This indicates that the magnetic anisotropy arising from the CEF, i.e., $\theta$, plays a key role in stabilizing the AFM or FM order in the 1/1 AC.
Furthermore, the
former
is revealed to be characterized by the unusually large topological charge $|n|=3$.
These results indicate that by slightly changing the ratio of valences of ligand ions $\alpha=Z_{\rm Si(Al)}/Z_{\rm Au}$,
the magnetic and topological states can be switched. This
is feasible by controlling the compositions of rare-earth based AC and QC.
\begin{figure}[t]
\includegraphics[width=8.5cm]{Tb_AC.eps}
\caption{(color online)
In the 1/1 AC, at the bcc center and corner in the unit cell,
(A) hedgehog state ($n$=1) and antihedgehog state ($n$=$-1$) for $\theta=28^{\circ}$
and
(B) antiwhirling state ($n$=$-3$) and whirling state ($n$=3) for $\theta=86^{\circ}$
are located respectively.
(C), (D) The magnetic-field dependence of the topological charge $|n|$, the total chirality $|{\bm \chi}^{\rm T}|$, and the magnetization $m$ for the field applied to the states in (A) and (B) respectively, with $J_1$ being the N.N. interaction in units of K. Insets in (C) and (D) show the Tb moments on the IC for $H_{\rm ext}>H_{\rm M}$.
}
\label{fig:Tb_AC}
\end{figure}
\subsection*{Topological Hall effect}
Notable is that intriguing phenomena such as the topological Hall effect are expected to emerge in the magnetic textures in Fig.~\ref{fig:PD}A.
The topological Hall conductivity $\sigma_{\mu\nu}^{\rm T}$ is proportional to the total magnetic chirality multiplied by a geometrical factor
${\bm \chi}^{\rm T}=\sum_{\langle i,j,k\rangle}\chi_{ijk}\hat{n}_{ijk}$, i.e., $\sigma_{\mu\nu}^{\rm T}\propto\epsilon_{\mu\nu\rho}\chi_{\rho}^{\rm T}$~\cite{Tatara}, where $\sum_{\langle ijk\rangle}$ denotes the summation over all the three sites on each IC and $\hat{n}_{ijk}$ represents the surface normal. Here, ${\bm \chi}^{\rm T}$ plays the role of an
emergent fictitious magnetic field.
In the hedgehog state (red square) and antiwhirling state (orange square) in Fig.~\ref{fig:PD}A,
we confirmed ${\bm \chi}^{\rm T}={\bm 0}$.
Then, we apply magnetic field ${\bm H}_{\rm ext}$ to the 1/1 AC as
${\cal H}=H-g_{J}\mu_{\rm B}\sum_{i}\hat{\bm J}_i\cdot{\bm H}_{\rm ext}$, where $g_{J}$ is Land{\'e}'s $g$ factor and $\mu_{\rm B}$ is the Bohr magneton.
The result for ${\bm H}_{\rm ext}\parallel (0,0,1)$ applied to the hedgehog-antihedgehog AF ordered state (Fig.~\ref{fig:Tb_AC}A) is shown in Fig.~\ref{fig:Tb_AC}C.
We find that a metamagnetic transition occurs at $H_{\rm M}/J_1=3.47$~kOeK$^{-1}$, where the moments at half of the Tb sites in an IC are flipped,
giving rise to the change in the topological charge $|n|$=1~$\to$~0 as well as ${\bm \chi}^{\rm T}={\bm 0}$~$\to$~$(-4.35,0,0)$.
This implies that the topological Hall effect can be observed in $\sigma_{yz}^{\rm T}$ for $H_{\rm ext}>H_{\rm M}$.
The result for the whirling-antiwhirling AF ordered state is shown in Fig.~\ref{fig:Tb_AC}D.
At $H_{\rm M}/J_1=4.46$~kOeK$^{-1}$, a metamagnetic transition occurs, as observed in Au$_{72}$Al$_{14}$Tb$_{14}$~\cite{Sato2019}.
We find that the change in the topological charge $|n|$=3~$\to 0$ as well as in the total chirality ${\bm \chi}^{\rm T}$=${\bm 0}$~$\to(-6.32,0,3.01)$ occurs at $H_{\rm M}$. This implies that the topological Hall effect can be observed in $\sigma_{yz}^{\rm T}$ and $\sigma_{xy}^{\rm T}$ for $H_{\rm ext}>H_{\rm M}$.
\subsection*{Magnetism in quasicrystal}
To explore the magnetic long-range order in the QC,
we apply the model (\ref{eq:HI}) to the Cd$_{5.7}$Yb-type QC~\cite{Takakura}, where the Tb-12 cluster, i.e., IC, is located at the origin surrounded by 30 ICs located at each vertex of the $\tau^3$-times enlarged icosidodecahedron from that in Fig.~\ref{fig:atoms}D and they are repeatedly arranged outward in the self-similar manner. Here $\tau=(1+\sqrt{5})/2$ is the golden ratio.
In the model~(\ref{eq:HI}) applied to the QC,
we set the FM N.N. interaction $J_1>0$ and N.N.N. interaction $J_2>0$ not only for the intra IC but also for the inter ICs which are the neighboring pairs on the line connected at each vertex of the icosidodecahedron (see Fig.~\ref{fig:atoms}D).
Then we find that the FM long-range order is realized for $64.1^{\circ}\lsim\theta\lsim 80^{\circ}$ (at least in the region shown in Fig.~\ref{fig:QC_mom}A), as shown in Fig.~\ref{fig:QC_mom}B.
The total magnetic moment in the IC ${\bm J}_{\rm tot}$ points to the (111) direction.
It is noted that by applying the model (\ref{eq:HI}) with the AFM interactions $J_1<0$ and $J_2<0$ to the Cd$_{5.7}$Yb-type QC, we have confirmed that there exists a region of the $\theta$-$J_2/J_1$ ground-state phase diagram where the uniform arrangement of the hedgehog state is stabilized.
\section*{Discussion}
We have discovered that the magnetic state on the IC denoted by pink inverted triangle in Fig.~\ref{fig:PD}B orders ferromagnetically in the QC, as shown in Fig.~\ref{fig:QC_mom}B.
On the other hand, the hedgehog (whirling) state encounters huge degeneracy in the ground state because of strong frustration to realize the hedgehog-antihedgehog (whirling-antiwhirling) AF order due to the triangular network in the icosidodecahedron (see Fig.~\ref{fig:QC_mom}B and also Fig.~\ref{fig:atoms}D).
Namely, within the model (\ref{eq:HI}), the AFM states are degenerate. This is reminiscent of the antiferromagnetic Ising model on the triangular lattice, where macroscopic degeneracy remains in the ground state~\cite{Wannier}.
This is in sharp contrast to the AFM order realized in the 1/1 AC with the N.N. FM interaction which forms the bipartite lattice in terms of the IC.
To lift the degeneracy, it is necessary to take into account effects beyond the model (\ref{eq:HI}), such as transverse components of the exchange interaction and/or longer-range interactions like the RKKY interaction. Such an analysis will be important for each material, taking the material-dependent factors into account, and is left for the next step of the study.
Melting the hedgehog-antihedgehog (whirling-antiwhirling) AF order by these effects under
the frustration in the QC may give rise to a non-trivial topological liquid, analogous to the quantum spin liquid, which calls for an interesting future study.
\begin{figure}[t]
\includegraphics[width=6.5cm]{QC_mom.eps}
\caption{(color online)
(A) FM long-range order in the QC realized in the $\theta$-$J_2/J_1$ plane.
(B) The magnetic moments in the Tb-12 clusters at the origin and at the vertices of the icosidodecahedron are shown for $\theta=80^{\circ}$.
Green (brown) lines on the front (back) side connect the vertices of the icosidodecahedron.
(C) The $\theta$ dependence of the total magnetic moment in the IC ${\bm J}_{\rm tot}=(J_x^{\rm IC},J_x^{\rm IC},J_x^{\rm IC})$ in the FM phase.
}
\label{fig:QC_mom}
\end{figure}
\section*{Materials and Methods}
\subsection*{Quasicrystal and approximant crystal}
The QC and the AC consist of the Tsai-type cluster shown in Figs.~\ref{fig:atoms}A-\ref{fig:atoms}E.
The AC retains the periodicity as well as the local atomic configuration common to the QC.
There exists a series of ACs such as the 1/1 AC, 2/1 AC, 3/2 AC, $\cdots$, where the $n\to\infty$ limit of the $F_{n-1}/F_{n-2}$ AC corresponds to the QC, with $F_{n}$ being the Fibonacci numbers.
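As a short illustration of the approximant hierarchy (a sketch for the reader, not taken from the paper), the labels $F_{n-1}/F_{n-2}$ are ratios of successive Fibonacci numbers, which converge to the golden ratio $\tau=(1+\sqrt{5})/2$ governing the quasiperiodic limit:

```python
# Approximant labels F_{n-1}/F_{n-2}: 1/1, 2/1, 3/2, 5/3, ...
# Their ratios converge to the golden ratio tau = (1 + sqrt(5))/2.
tau = (1 + 5 ** 0.5) / 2

def fib(n):
    """Fibonacci numbers with the convention F_1 = F_2 = 1, F_3 = 2, ..."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

labels = [(fib(n - 1), fib(n - 2)) for n in range(3, 12)]
print(labels[:4])                      # [(1, 1), (2, 1), (3, 2), (5, 3)]
print(abs(labels[-1][0] / labels[-1][1] - tau) < 1e-3)  # True
```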
In the rare-earth based 1/1 AC composed of the Tsai-type cluster, there exist two ICs in the unit cell of the body-center-cubic (bcc) lattice, where the rare-earth atoms are located at the twelve vertices of each IC.
\subsection*{Analysis of the crystalline electric field}
In the CEF Hamiltonian~(\ref{eq:H}),
the coefficient $ B_{\ell}^{m}(\eta)$ is given by
$ B_{\ell}^{m}(\eta)=-|e|C_{\ell}^{m}\langle r^{\ell}\rangle \alpha_{\ell}h_{\ell}^{m}(\eta)$,
where the explicit forms of $ C_{\ell}^{m}$ and $ h_{\ell}^{m}(\eta)$ are defined in ref.~\cite{WM2021}.
The Stevens factors for Tb$^{3+}$ are given as $\alpha_2=-1/99$, $\alpha_4=2/16335$, and $\alpha_6=-1/891891$~\cite{Hutchings}.
The Dirac-Fock calculation for Tb$^{3+}$ yields $\langle r^2\rangle=0.2302$~\AA$^2$, $\langle r^4\rangle=0.1295$~\AA$^4$, and $\langle r^6\rangle=0.1505$~\AA$^6$~\cite{Freeman}.
Since the Stevens operators $O_{\ell}^{m}(\eta)$ are given in ref.~\cite{WM2021}, the matrix elements of $H_{\rm CEF}$ for Tb$^{3+}$ are obtained for the basis set of the eigenstates of the total angular momentum $J=6$ and the $z$ component $|J=6, J_z \rangle$ with $J_z=6, 5, \cdots, -5$, and $-6$.
\subsection*{Magnetically ordered state in quasicrystal}
In the calculation of (\ref{eq:HI}) applied to the Cd$_{5.7}$Yb-type QC~\cite{Takakura}, we have evaluated the ground-state energy of the model (\ref{eq:HI}) on the ICs located at each vertex of the $\tau^3$-times enlarged icosidodecahedron from that in Fig.~\ref{fig:atoms}D.
Here, $\tau=(1+\sqrt{5})/2$ is the golden ratio.
In the icosidodecahedron, there exist 30 vertices at which 30 ICs are located. For the neighboring pairs of ICs, whose number is 60, we set the N.N. interaction $J_1$ and N.N.N. interaction $J_2$ between the magnetic moments at the N.N. and N.N.N. Tb sites respectively.
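The counts quoted above (30 vertices and 60 neighboring pairs of ICs) can be checked directly. The sketch below uses standard icosidodecahedron coordinates, namely the cyclic permutations of $(0,0,\pm\tau)$ and $(\pm 1/2,\pm\tau/2,\pm\tau^2/2)$ with circumradius $\tau$ and edge length 1; this coordinate choice is an illustrative assumption, not taken from the paper.

```python
from itertools import combinations, product

tau = (1 + 5 ** 0.5) / 2  # golden ratio

def cyc(p):
    """The three cyclic permutations of a coordinate triple."""
    return [(p[0], p[1], p[2]), (p[2], p[0], p[1]), (p[1], p[2], p[0])]

verts = []
for s in (1, -1):
    verts += cyc((0.0, 0.0, s * tau))
for s1, s2, s3 in product((1, -1), repeat=3):
    verts += cyc((s1 * 0.5, s2 * tau / 2, s3 * tau ** 2 / 2))

# Neighboring vertex pairs sit at unit distance for this normalization.
edges = [(i, j) for i, j in combinations(range(len(verts)), 2)
         if abs(sum((a - b) ** 2 for a, b in zip(verts[i], verts[j])) - 1) < 1e-9]
print(len(verts), len(edges))  # 30 60
```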
In the region shown in Fig.~\ref{fig:QC_mom}A, we have confirmed that the FM arrangement of the magnetic textures on the IC has the lowest energy. Namely, the magnetic state shown in Fig.~\ref{fig:PD}B on the IC located at the origin is surrounded by the same magnetic states on the 30 ICs located at each vertex of the icosidodecahedron (see Fig.~\ref{fig:QC_mom}B) and they are repeatedly arranged outward in the self-similar manner. Hence the FM long-range order is realized in the QC.
\subsection*{Data Availability}
All study data are included in the article and/or SI.
\subsection*{Acknowledgments}
This work was supported by JSPS KAKENHI Grant Numbers JP18K03542 and JP19H00648.
\newpage
\section{Introduction}
\label{scatt08.sect-intro}
The purpose of this work is to present the scattering theory for a quantum particle described by a tight-binding Hamiltonian $H=H_0+V$ acting on the Hilbert space $\ell^2({\mathbb Z}^d)$, where $H_0$ is a periodic operator with a single band and $V$ is a finite rank perturbation. Most of the present work focuses on the case of dimension $d\geq 3$. Throughout this paper, it will be assumed that the Fourier transform of $H_0$ acts on $L^2({\mathbb T}^d)$ as a multiplication operator by a real analytic Morse function ${\mathcal E}(k)$ having only one maximum and one minimum. Operators of this type appear in solid state physics as effective one-band Hamiltonians for electrons or holes in periodic media. The analyticity reflects the exponential decay of the hopping amplitude of the particle, and the Morse condition is generic.
\subsection{Main Results}
\label{scatt08.ssect-results}
Scattering theory for a Schr\"odinger operator with a periodic potential has already been considered \cite{New2,BY,GN}. The above scattering problem has also been addressed in the physics literature, for example in \cite{Eco}. The present work goes further. Initially, it was motivated by the remark by Kellendonk and Richard \cite{KR06} that Levinson's theorem \cite{Lev} relating the number of bound states to the total scattering phase can be interpreted as a special case of the Atiyah-Singer index theorem. As it turns out, this nice basic idea requires a substantial amount of technicalities when it comes to mathematical justification \cite{KR2,KR3,KR4}. Indeed, the global character of Levinson's theorem requires several technical steps. First, complete control of the nature of the singularities of the Green function of $H_0$ is needed, a task that is easy on the continuum, but more involved in the present case. In addition, it requires a conjugate operator to $H_0$ in order to shift the energy, replacing the dilation operator used for the continuum situation. Moreover, the potential term $V$ may create embedded eigenvalues and threshold singularities that must be analyzed thoroughly since they contribute to Levinson's theorem. At last, for the index theorem to apply, it is necessary to prove that both the wave operator and the scattering matrix can be expressed as suitable continuous functions of the energy and dilation operators. This was achieved in \cite{KR3} through an explicit calculation in dimension $d=1$ using the techniques of \cite{Jen}. In higher dimension \cite{KR4}, the explicit calculation turns out to be harder, but it is possible to prove sufficient regularity of the wave operators and the scattering matrix. Unfortunately, the work \cite{BY} is insufficient to implement this program completely for the class of models considered here. The present paper supplements these points.
\vspace{.2cm}
In the light of the previous introduction, the main results of the paper can be summarized as follows:
\begin{itemize}
\item A lattice analog of the dilation operator is constructed (see Theorem~\ref{theo-dilation}). It is a self-adjoint unbounded operator $A$ such that $\imath[H_0,A]=F(H_0)$ where $F$ is a positive function on the spectrum of $H_0$ vanishing only at the band edges.
\item Explicit expressions for the wave operators, the scattering matrix and the time delay operators are obtained in terms of $H_0$, $V$ and $A$ (see Theorem~\ref{scatt08.theo-waveop3d}, Theorem~\ref{scatt08.theo-Sin3d} and Theorem~\ref{scatt08.theo-Tin3d}).
\item A series of results concerning the existence of embedded eigenvalues and threshold singularities.
\item A Levinson type theorem is derived, which is now briefly described (see Theorem~\ref{scatt08.theo-Levinson3d} for details). The essential spectrum of $H$ is the spectrum $[E_-,E_+]$ of $H_0$ and $E_\pm$ are called the {\em band edges} or {\em thresholds}. Let $P_{\mbox{\rm\tiny pp}}$ be the eigenprojection onto the eigenvalues and let $N$ be the total number of eigenvalues, including the embedded and the threshold eigenvalues (the latter have to be distinguished from threshold resonances). Further let $S$ be the scattering operator and let $T=-\imath S^{-1}[A,S]$ be the {\em time delay operator} seen as acting on $\ell^2({\mathbb Z}^d)$. Finally let $m_\pm\in\{0,1\}$ be the degeneracies of the {\em threshold resonances} at $E_\pm$, also called {\em half-bound states} (higher degeneracies are possible, but not dealt with here). Then for $d=3$ and isotropic extrema of ${\mathcal E}$,
\begin{equation}
\label{scatt08.eq-LevinsonTh}
\frac{1}{2\imath\pi}\; \TR\left(S^{-1}[A,S]\right) \;+\;
\TR(P_{\mbox{\rm\tiny pp}})
\;=\;
-\,\frac{m_+}{2}\,-\,\frac{m_-}{2}
\;.
\end{equation}
For $d\geq 5$ it is proved that always $m_\pm=0$ and that \eqref{scatt08.eq-LevinsonTh} holds. For $d=4$, \eqref{scatt08.eq-LevinsonTh} is proved under the hypothesis that $m_\pm=0$.
\end{itemize}
\noindent As already pointed out, this paper is restricted to dimension $d\geq 3$. Dimensions $d=1$ and $d=2$ require a detailed asymptotic expansions of the free Green function near the band edges \cite{New,BGDW,KR3}. The one-dimensional case has been treated in \cite{CK,HKS}. The case $d=2$ will be addressed in a future publication.
\vspace{.2cm}
While most of these results are technically new, similar results have been already obtained in the past. The scattering problem in ${\mathbb R}^d$ with standard Laplacian perturbed by a decaying potential together with a proof of Levinson's theorem can be found in standard textbooks such as \cite{New,RS}. In this situation, also threshold resonances have been analyzed in details (see \cite{Bol} for a review). The scattering of an electron in a periodic potential by an impurity has been considered by physicists for a long time, in connection with the transport properties of semiconductors. This theory is based on the KKR equations (for Korringa, Kohn and Rostoker) and in this context Levinson's theorem given in equation~(\ref{scatt08.eq-LevinsonTh}) above is also known under the name of {\em Friedel sum rule}. This was investigated by Newton \cite{New2} for $d=3$ with an impurity potential that could lead to threshold resonances, but not embedded eigenvalues. Levinson's theorem for periodic potentials in dimension $d=1$ was proved by Firsova \cite{Fir}. The mathematical aspects of scattering theory in a periodic potential has been considered by Birman and Yafaev \cite{BY} and bears many similarities with the present approach. However, the latter work does not deal with the critical points of the band functions, and hence does not lead to a proof of Levinson's theorem. The scattering of a lattice electron by a localized impurity is also addressed in the book by Economou \cite{Eco}, however, the explicit formulas obtained in the present work for the scattering matrix and the wave operators are missing. The problem of embedded eigenvalues has long been considered as an irrelevant curiosity.
Indeed, even though they are non-generic within the class of finite rank perturbations considered here (as follows from the arguments in Section~\ref{scatt08.ssect-spec}), they may occur in practical devices, in particular, when a compact part of the lattice is inaccessible to a particle coming from the outside (see Example~\ref{scatt08.exam-int} below). The eigenvalues of the Hamiltonian inside have an influence on the scattering outside in various ways, as can be seen in equation~(\ref{scatt08.eq-LevinsonTh}). An analogous effect occurs for microwaves reflected by a cavity, as was shown, for instance, in \cite{DSF}.
\subsection{Strategy of proofs}
\label{scatt08.ssect-discu}
As suggested by the formula $\imath[H_0,A]=F(H_0)$, the conjugate operator $A$ is the generator of an energy shift. For its construction the classical energy gradient flow is slowed down near the band edges which are also called thresholds. This flow can be implemented as a strongly continuous one-parameter group of unitary operators in the Hilbert space and $A$ is then simply the generator of this group. The unitary implementation of a vector field has already been carried out in \cite{HS,ABG}, however, these constructions excluded energy surfaces with critical points. Removing this constraint is crucial for the proof of Levinson's theorem and this is probably the main conceptual contribution of this paper to the scattering theory in periodic lattices. The proof also covers dimension $d=2$.
\vspace{.2cm}
The introduction of the conjugate operator $A$ is closely linked to an important tool of calculation used in this paper, namely the {\em rescaled energy and Fermi surface} (REF) representation giving an adequate spectral representation of both $H_0$ and $A$. It shows that the Hilbert space $\ell^2({\mathbb Z}^d)$ is isomorphic to $L^2({\mathbb R})\otimes L^2(\Sigma,\nu)$ where ${\mathbb R}$ is a rescaled energy variable and $\Sigma$ is a reference Fermi surface given by some level set of ${\mathcal E}$ furnished with a Riemannian volume $\nu$. The influence of the critical points on the dynamics defined by $A$ lies on a set of zero Lebesgue measure explaining why the unitary group $e^{\imath tA}$ is globally defined. While Morse's theory shows that the topology of the level sets changes at the passage through a critical value, the previous result, on the opposite, shows that the topology of the Fermi surface does not play any role for the Hilbert space isomorphism. Changing $H_0$ into $B=f(H_0)$, for a suitable function $f$ such that $\imath[B,A]={\mathbf 1}$, leads to a representation where $B$ is the multiplication by a variable $b\in{\mathbb R}$, called the {\em rescaled energy} while $A$ becomes the derivative $-\imath\partial_b$ and acts as an infinitesimal rescaled energy shift.
\vspace{.2cm}
Given the REF representation, it is possible to compute all standard objects of scattering theory explicitly. In order to limit the technical difficulties, this work will be restricted to dimension $d\geq 3$ and to a compactly supported perturbation. The wave operator, minus the identity, is then an explicit continuous function in $A$ and $B$ with values in the algebra of compact operators on $L^2(\Sigma,\nu)$. From this formula and the invariance principle, an expression for the on-shell scattering matrix is then readily deduced. Up to an explicit partial isometry, it is a finite dimensional unitary matrix expressed in terms of the perturbation $V$ and the Green function of $H_0$. The spectral property of the time delay operator linking it to the resolvent also follows from this analysis. This allows us to give a first short proof of Levinson's theorem by a contour integration argument when there are no embedded eigenvalues and no threshold singularities.
\vspace{.2cm}
However, both threshold singularities and embedded eigenvalues may occur for adequate choices of $V$, sometimes with physical meaning. This situation is covered by the second proof of Levinson's theorem which follows closely the $K$-theoretic arguments of Kellendonk and Richard \cite{KR06,KR2}. Following these authors, a \Cs ${\mathscr E}$ is generated by continuous functions of $A$ and of $B$ with values in the compact operators on $L^2(\Sigma,\nu)$ and having well-defined limits at $\pm\infty$ which coincide in the four corners $A=\pm\infty$ and $B=\pm\infty$. It contains the ideal ${\mathcal J}$ of those functions vanishing at $\infty$ and the extension is precisely by the \Cs ${\mathscr A}$ of operators fibered over $A$ or $B$, again coinciding in the four corners. A large amount of effort is then dedicated to proving that the wave operator belongs to the Toeplitz extension ${\mathscr E}$, even when embedded eigenvalues and threshold singularities are present. It follows that the wave operator is a lift of the scattering operator which, combined with contributions stemming from the thresholds, is an element of ${\mathscr A}$. This leads to a $K$-theoretic version of the proof of Levinson's theorem.
\vspace{.2cm}
\noindent {\bf Notations:} As usual, $|A|=(A^*A)^{\frac{1}{2}}$, $\Re e\,A=\frac{1}{2}(A+A^*)$ and $\Im m\,A=\frac{1}{2\imath}(A-A^*)$ for any operator $A$. Furthermore, throughout there is a rescaled energy variable $b=f(E)$ associated with the bijection $f$ from the spectrum of the unperturbed operator to ${\mathbb R}$ which is defined in \eqref{scatt08.eq-energychange} below. For objects depending on energy both $E$ and $b$ will be used as indices, for example $P_b=P_E$, $\Pi_b=\Pi_E$, $C_b=C_E$ and so on.
\vspace{.2cm}
\noindent {\bf Acknowledgments:} We thank A. Knauf, S. Golenia, S. Richard and J. Kellendonk for numerous comments. The work of J.~B. was supported in part by NSF Grants No. 0600956 and 0901514, and that of H. S.-B. in part by the DFG. After the first version of this paper was submitted, two papers on related matters appeared on the arXiv. The article \cite{BSPL11} provides a variational principle giving an upper bound on the number of eigenvalues (including embedded ones) and \cite{KKN} analyzes the Friedel sum rule via the spectral shift function.
\vspace{.5cm}
\section{Analysis of the unperturbed lattice Hamiltonian}
\label{scatt08.sect-freeop}
\subsection{Unperturbed Hamiltonian and its energy band}
\label{sec-normal}
The tight-binding Hamiltonians considered in this work act on the Hilbert space
$\ell^2({\mathbb Z}^d)$ of square summable sequences of complex numbers indexed by the
$d$-dimensional lattice ${\mathbb Z}^d$. The {\em free Hamiltonian} $H_0$ on
$\ell^2({\mathbb Z}^d)$ is supposed to be of the form
\begin{equation}
\label{scatt08.eq-freeHam} \langle n\,|\,H_0\,\phi\rangle \;=\;
\sum_{m\in{\mathbb Z}^d}
{\mathcal E}_{n-m}\;\langle m|\phi\rangle\,,
\qquad
\phi \in \ell^2({\mathbb Z}^d)\,,
\end{equation}
\noindent where the ${\mathcal E}_n$'s are the Fourier coefficients of a real-analytic real-valued function
${\mathcal E}(k)=\sum_{n\in{\mathbb Z}^d} e^{\imath k n} {\mathcal E}_n$ on the $d$-dimensional torus ${\mathbb T}^d={\mathbb R}^d/ (2\pi {\mathbb Z}^d)$. Hence we restrict ourselves to a free operator with a single band. As $H_0$ is translation invariant, it is diagonalized by the discrete Fourier transform $\Ff: \ell^2({\mathbb Z}^d)\to L^2({\mathbb T}^d)$, where $L^2({\mathbb T}^d)$ is the Hilbert space of square integrable functions on ${\mathbb T}^d$. It is densely defined by
$$
(\Ff\phi)(k)\;=\;
\frac{1}{(2\pi)^{\frac{d}{2}}}\;
\sum_{n\in{\mathbb Z}^d} e^{\imath k n}\,\langle n|\phi\rangle\;.
$$
and is unitary. Now ${\widehat{H}}_0=\Ff H_0 \Ff^*$ is a multiplication operator on $L^2({\mathbb T}^d)$ by the function ${\mathcal E}$. The main hypotheses on $H_0$ are expressed in terms of this function ${\mathcal E}$. The set of critical points $\cri\subset{\mathbb T}^d$ at which the gradient $\nabla {\mathcal E}$ w.r.t. the Euclidean metric vanishes is finite due to the analyticity of ${\mathcal E}$ and each critical point is supposed to be non-degenerate, namely for any $k^*\in\cri$ the Hessian ${\mathcal E}''(k^*)$ is a real symmetric invertible $d\times d$ matrix. In other words, ${\mathcal E}$ is supposed to be a so-called Morse function \cite{Nic}. Recall that the index of a critical point $k^*$ is the number of negative eigenvalues of ${\mathcal E}''(k^*)$. Then the Morse inequalities state that the number of critical points with index $p$ is larger than or equal to the Betti number $\beta_p$ of the torus ${\mathbb T}^d$, which is equal to the binomial coefficient $d$ over $p$. In particular, there must exist critical points of ${\mathcal E}$ with index $p$ for every $p=0,\ldots,d$. For the discrete Laplacian, the energy band ${\mathcal E}(k)=2\sum_{j=1}^d\cos(k_j)$ is a Morse function for which the Morse inequalities become equalities. We also assume that there are only two critical points $k^*_-$ and $k_+^*$ of definite signature corresponding to the minimal and maximal values $E_-={\mathcal E}(k^*_-)$ and $E_+={\mathcal E}(k^*_+)$ of ${\mathcal E}$. Hence all other critical points $k^*\in\cri$ are supposed to have critical values ${\mathcal E}(k^*)$ in $(E_-,E_+)$ and to be of indefinite signature. Note that ${\mathcal E}(\cri)$ is the set of all critical values. A spectral interval is called non-critical if it does not contain any critical value.
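For the discrete Laplacian band this structure can be verified directly (a quick illustrative check, not part of the paper): the critical points are the $2^d$ points with $k_j\in\{0,\pi\}$, the Hessian is $\mathrm{diag}(-2\cos k_j)$, and the number of critical points of index $p$ equals the Betti number $\binom{d}{p}$, so the Morse inequalities are saturated.

```python
from itertools import product
from math import comb, cos, pi

d = 3
# Critical points of E(k) = 2*sum(cos k_j): every component is 0 or pi.
counts = [0] * (d + 1)
for kstar in product((0.0, pi), repeat=d):
    # Hessian is diag(-2*cos(k_j)); the index is the number of negative eigenvalues.
    index = sum(1 for kj in kstar if -2 * cos(kj) < 0)
    counts[index] += 1

betti = [comb(d, p) for p in range(d + 1)]  # Betti numbers of the torus T^d
print(counts, counts == betti)  # [1, 3, 3, 1] True
```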
\subsection{The classical energy flow}
\label{scatt08.sec-flow}
Let $F:[E_-,E_+]\to{\mathbb R}_{\geq 0}$ be a real analytic function vanishing only at the band edges $E_-$ and $E_+$ and satisfying $F(E)\leq C|E-E_-|$ and $F(E)\leq C|E_+-E|$ for some constant $C$. Below we will choose
\begin{equation}
\label{scatt08.eq-Fchoice}
F(E)\;=\;2\;
\frac{(E-E_-)(E_+-E)}{E_+-E_-}\,,
\end{equation}
\noindent but this particular choice will only become relevant for the calculation of the wave operators in Section~\ref{scatt08.ssect-waveREF}. Then let ${\widehat{X}}$ be the vector field on ${\mathbb T}^d$ defined by
\begin{equation}
\label{scatt08.eq-Xvec} {\widehat{X}}(k) \;=\;
F\bigl({\mathcal E}(k)\bigr)\;
\frac{\nabla {\mathcal E}(k)}{|\nabla{\mathcal E}(k)|^2}\;,\qquad k\in{\mathbb T}^d\;.
\end{equation}
\noindent Apart from the factor $F\circ {\mathcal E}$, the vector field ${\widehat{X}}$ is precisely the one used in the standard argument of Morse theory \cite{Nic} as well as in the proof of the coarea formula \cite{Sak}. As ${\mathcal E}$ and $F$ are smooth, this vector field is smooth away from the set $\cri$ of critical points. At the critical points $k^*_\pm$ with extremal energy ${\mathcal E}(k^*_\pm)=E_\pm$, the function $k\mapsto F({\mathcal E}(k^*_\pm+k))$ vanishes linearly by the assumption on $F$ and hence the vector field has a source or a sink there. At all other critical points with critical values lying inside the band $[E_-,E_+]$, the vector field ${\widehat{X}}$ has a singularity which has to be dealt with below. Let $\theta_b:{\mathbb T}^d\setminus\cri\to{\mathbb T}^d$ be the flow of ${\widehat{X}}$, that is, $\partial_b\theta_b={\widehat{X}}\circ\theta_b$ and $\theta_0=\mbox{\rm id}$. The somewhat unconventional choice of $b$ as notation for the time parameter is due to its interpretation as rescaled energy variable below, which is dual to the spectral parameter $a$ of the dilation operator $A$. The flow $\theta_b$ is not complete because an orbit can reach one of the critical points with indefinite signature in a finite time. Choosing orbits which stay away from these critical points or times which are sufficiently small, one can calculate the flow of energy along the orbits. By the definition of the vector field ${\widehat{X}}$,
$$
\partial_b \,{\mathcal E}(\theta_b(k))\;=\;F({\mathcal E}(\theta_b(k)))
\;.
$$
This equation shows that the flow $\theta_b$ maps constant energy surfaces to constant energy surfaces. Moreover, the energy flow is governed by a simple ordinary differential equation of first order which can be integrated. Choosing some reference energy $E_r\in(E_-,E_+)$, it leads to the following invertible function
\begin{equation}
\label{scatt08.eq-energychange}
f(E)\;=\;
\int^E_{E_r}\frac{de}{F(e)}\,.
\end{equation}
\noindent Then $b=f({\mathcal E}(\theta_b(k)))-f({\mathcal E}(k))$ and
\begin{equation}
\label{scatt08.eq-Hevolv}
{\mathcal E}(\theta_b(k))\;=\;
f^{-1}\bigl(b+f({\mathcal E}(k))\bigr)\,.
\end{equation}
\noindent If $F$ is given by equation~\eqref{scatt08.eq-Fchoice} and if $E_r=(E_++E_-)/2$, it gives
\begin{equation}
\label{scatt08.eq-Fchoiceconclusion}
f(E)\;=\;
\frac{1}{2}\,
\ln\left(\frac{E-E_-}{E_+-E}\right)\,,
\hspace{1cm}
f^{-1}(b)\;=\;E_r+\Delta\,\tanh(b)\;,
\hspace{1cm}
F(f^{-1}(b))\;=\;\frac{\Delta}{\cosh^{2}(b)}\,,
\end{equation}
\noindent where $\Delta=(E_+-E_-)/2$. By restricting $\theta_b$ to an adequate subset of ${\mathbb T}^d$, a complete flow can be constructed. Let $\criM$ be the union of $\cri$ and of the set of points reaching one of the critical points $k^*\in\cri$ in finite time (either positive or negative). It is important to remark that, under this flow, almost all points reach the maximum and the minimum eventually, but it takes an infinite time to do so. Therefore the finite time condition is a strong constraint. In fact, $\criM$ is the union of $\cri$ and the stable and unstable manifolds of all critical points of indefinite signature.
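The explicit expressions \eqref{scatt08.eq-Fchoiceconclusion} and the energy evolution \eqref{scatt08.eq-Hevolv} lend themselves to a quick numerical check. The following Python sketch is an added illustration, not part of the argument; it takes the band edges $E_\pm=\pm2$ of the one-dimensional discrete Laplacian and verifies that $f$ and $f^{-1}$ are mutually inverse, that $F(f^{-1}(b))=\Delta/\cosh^2(b)$, and that integrating $\partial_b E=F(E)$ reproduces $f^{-1}$:

```python
import numpy as np

# Band edges and reference energy, as for the discrete Laplacian in d = 1
E_minus, E_plus = -2.0, 2.0
E_r = 0.5 * (E_plus + E_minus)          # reference energy (E_+ + E_-)/2
Delta = 0.5 * (E_plus - E_minus)

F = lambda E: 2.0 * (E - E_minus) * (E_plus - E) / (E_plus - E_minus)
f = lambda E: 0.5 * np.log((E - E_minus) / (E_plus - E))
f_inv = lambda b: E_r + Delta * np.tanh(b)

# f and f_inv are mutually inverse, and F(f_inv(b)) = Delta / cosh(b)^2
b = np.linspace(-3.0, 3.0, 101)
assert np.allclose(f(f_inv(b)), b)
assert np.allclose(F(f_inv(b)), Delta / np.cosh(b) ** 2)

# Integrating dE/db = F(E) from E(0) = E_r reproduces E(b) = f_inv(b)
E, h = E_r, 1e-4
for _ in range(int(3.0 / h)):           # RK4 steps up to b = 3
    k1 = F(E); k2 = F(E + 0.5 * h * k1)
    k3 = F(E + 0.5 * h * k2); k4 = F(E + h * k3)
    E += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
assert abs(E - f_inv(3.0)) < 1e-8
```

The last assertion is precisely \eqref{scatt08.eq-Hevolv}: the energy along the flow is obtained by shifting the rescaled energy variable $b$.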
\begin{proposi}
\label{scatt08.prop-complete}
The set $\criM$ is compact and has zero Lebesgue measure. The flow $\theta_b:{\mathbb T}^d\setminus\criM\to{\mathbb T}^d\setminus\criM$ is defined for all $b\in{\mathbb R}$, that is, ${\widehat{X}}$ is complete on ${\mathbb T}^d\setminus\criM$. In addition, $\lim_{b\to \pm\infty}\,\theta_b(k)=k^*_\pm$ for all $k\in{\mathbb T}^d\setminus\criM$. Furthermore, for any open neighborhood $U$ of $\criM$ there exists an open subset $V\subset U$ which contains $\criM\setminus\{k^*_-,k^*_+\}$ and is invariant under the flow $\theta$.
\end{proposi}
\noindent {\bf Sketch of a proof.} The vector field ${\widehat{X}}$ is gradient-like in the terminology of \cite{Nic} (indeed $\langle{\widehat{X}}(k),\nabla{\mathcal E}(k)\rangle=F({\mathcal E}(k))>0$ away from $\cri$). Hence \cite[Section~2.4]{Nic} shows that $\lim_{b\to \pm\infty}\,\theta_b(k)\in\cri$ and that the stable and unstable manifolds of all critical points of indefinite signature are locally smooth submanifolds of ${\mathbb T}^d$. For each critical point, the sum of the dimensions of the stable and unstable manifolds is equal to $d$. Along the flow on these submanifolds the energy increases with a finite speed, except in neighborhoods of $k_\pm^*$. Hence either the submanifolds reach another critical point in a finite time (non-generic) or reach the points $k_\pm^*$ in infinite time. Consequently, the points $k_\pm^*$ compactify the stable and unstable manifolds. As the number of critical points is finite, the set $\criM$ is compact with zero Lebesgue measure. To prove the last statement of the proposition, let $k^*$ be a critical point of indefinite signature and let $V(k^*)$ be an open neighborhood of $k^*$ contained in $U$. Then $V=\bigcup_{k^*}\bigcup_{b\in{\mathbb R}}\theta_b(V(k^*))$ is an open set that is invariant under the flow. A compactness argument can be used to show that $V\subset U$ by choosing $V(k^*)$ sufficiently small.
\hfill $\Box$
\vspace{.2cm}
The level set of ${\mathcal E}$ corresponding to an energy $E\in (E_-,E_+)$ is defined by
$$
\Sigma_E\;=\;
\left\{
k\in{\mathbb T}^d\setminus\criM\;\Big|\;{\mathcal E}(k)\;=\;E
\right\}\,.
$$
\noindent These level sets will be called the {\em quasi-Fermi surfaces}. This terminology is introduced to stress that $\Sigma_E$ is a strict subset of the Fermi surface ${\mathcal E}^{-1}(E)$ because the points on the stable and unstable manifolds of all critical points with indefinite signature are excluded. However, the difference is only of measure zero. A reference quasi-Fermi surface will be taken at energy $E_r$ and denoted by $\Sigma=\Sigma_{E_r}$. Because the singularities are excluded, the sets $\Sigma_E$ are smooth open submanifolds of ${\mathbb T}^d$ of codimension $1$ which, for $d\geq 2$, have several connected components. Now the flow $\theta_b$ maps these connected components diffeomorphically onto each other. By the above arguments, for each energy $E$, there is a time $b=f(E)$ such that the flow $\theta_b$ maps the reference quasi-Fermi surface $\Sigma$ diffeomorphically onto $\Sigma_E$. Consequently we have:
\begin{proposi}
\label{scatt08.prop-Morsediffeo} For $E\in(E_-,E_+)$, the map
$\theta_{f(E)}:\Sigma\to\Sigma_E$ is a diffeomorphism.
\end{proposi}
For our purposes below, we will also need properties of the divergence of ${\widehat{X}}$. A straightforward calculation gives
$$\mbox{\rm div}({\widehat{X}})(k) \;=\;
F'({\mathcal E}(k)) + F({\mathcal E}(k))\;
\left(
\frac{\Delta {\mathcal E}(k)}{|\nabla{\mathcal E}(k)|^2}
-2\,
\frac{\langle\nabla {\mathcal E}(k)|{\mathcal E}''(k)|\nabla {\mathcal E}(k)\rangle}
{|\nabla{\mathcal E}(k)|^4}
\right)\,,
$$
\noindent where Dirac notation is also used for vectors in ${\mathbb R}^d$. Near a critical point $k^*$, one has $\nabla {\mathcal E}(k^*+k)={\mathcal E}''(k^*)\,k+{\mathcal O}(|k|^2)$, leading to
$$
\mbox{\rm div}({\widehat{X}})(k^*+k) \;=\;
F'({\mathcal E}(k^*+k))+
F({\mathcal E}(k^*+k))
\left(
\frac{\mbox{\rm Tr}({\mathcal E}''(k^*))}{\langle k|{\mathcal E}''(k^*)^2|k\rangle}-
2\,
\frac{\langle k|{\mathcal E}''(k^*)^3|k\rangle}
{\langle k|{\mathcal E}''(k^*)^2|k\rangle^2}
+{\mathcal O}(|k|^{-1})
\right)\,.
$$
\noindent Both $F$ and $F'$ are regular, hence $|\mbox{\rm div}({\widehat{X}})(k^*+k)| \leq C/|k|^2$ and thus $\mbox{\rm div}({\widehat{X}})$ is an integrable function for dimension $d\geq 3$. Furthermore, near the extrema $k^*_\pm$, namely at the band edges, ${\mathcal E}(k^*_\pm+k)={\mathcal E}(k^*_\pm)+\frac{1}{2}\langle k|{\mathcal E}''(k^*_\pm)|k\rangle+{\mathcal O}(|k|^3)$, $F'({\mathcal E}(k^*_\pm+k))=\mp 2+{\mathcal O}(|k|^2)$ and $F({\mathcal E}(k^*_\pm+k))=\mp\langle k|{\mathcal E}''(k^*_\pm)|k\rangle+{\mathcal O}(|k|^3)$. Therefore, setting
$$
g_\pm(k)\;=\;
\frac{
\mbox{\rm Tr}({\mathcal E}''(k_\pm^*))\,
\langle k|{\mathcal E}''(k_\pm^*)|k\rangle\,
\langle k|{\mathcal E}''(k_\pm^*)^2|k\rangle
-2\,\langle k|{\mathcal E}''(k_\pm^*)|k\rangle\,
\langle k|{\mathcal E}''(k_\pm^*)^3|k\rangle}
{\langle k|{\mathcal E}''(k_\pm^*)^2|k\rangle^2}\,,
$$
\noindent leads to
\begin{equation}
\label{scatt08.eq-divXvicinity2}
\mbox{\rm div}({\widehat{X}})(k^*_\pm+k) \;=\; \mp\,
\bigl(
2\,+\,g_\pm(k)+{\mathcal O}(|k|)
\bigr)\,.
\end{equation}
\noindent The functions $g_\pm$ are homogeneous of degree $0$ and can thus be seen as functions on the sphere ${\mathbb S}^{d-1}$. In dimension $d=1$, one has $g_\pm(k)=-1$. In higher dimensions, $g_\pm(k)=d-2$ whenever ${\mathcal E}''(k^*_\pm)$ is a multiple of the identity (isotropy of the extrema). Otherwise $g_\pm$ are non-trivial.
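The stated properties of $g_\pm$ follow directly from the definition above and can be confirmed numerically. The following sketch is an added illustration; a random symmetric matrix stands in for the Hessian ${\mathcal E}''(k^*_\pm)$:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(H, k):
    """g_pm(k) for a Hessian matrix H, following the definition above."""
    H2, H3 = H @ H, H @ H @ H
    num = (np.trace(H) * (k @ H @ k) * (k @ H2 @ k)
           - 2.0 * (k @ H @ k) * (k @ H3 @ k))
    return num / (k @ H2 @ k) ** 2

d = 4
A = rng.standard_normal((d, d))
H = A + A.T                        # generic real symmetric Hessian
k = rng.standard_normal(d)

# Homogeneity of degree 0: g is invariant under k -> t * k
assert np.isclose(g(H, k), g(H, 3.7 * k))

# Isotropic extremum E'' = c * Id gives g = d - 2 ...
assert np.isclose(g(2.5 * np.eye(d), k), d - 2)

# ... and in dimension d = 1 one always gets g = -1
assert np.isclose(g(np.array([[1.3]]), np.array([0.8])), -1.0)
```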
\subsection{Construction of the dilation operator}
\label{scatt08.ssect-conjOp}
\noindent The aim of this section is the construction of an unbounded conjugate (or dilation) operator $A$ such that $\imath [A,H_0]=F(H_0)$ where $F$ is as above. The basic idea is to implement the flow $\theta_b$ of ${\widehat{X}}$ in $L^2({\mathbb T}^d)$ as a strongly continuous group of unitaries. Let ${\mathcal D}$ denote the set of smooth functions on ${\mathbb T}^d$ vanishing in some neighborhood of $\criM$. Since $\criM$ has zero Lebesgue measure and is compact, ${\mathcal D}$ is dense in $L^2({\mathbb T}^d)$. Furthermore, Proposition~\ref{scatt08.prop-complete} implies that every function in ${\mathcal D}$ vanishes on a flow invariant open subset containing $\criM\setminus\{k^*_-,k^*_+\}$. Hence for $\phi\in{\mathcal D}$, the following operator can be defined
\begin{equation}
\label{scatt08.eq-unitarygroup}
({\mathcal W}_b\,\phi)(k)\;=\;
\exp\left(
\frac{1}{2}
\int_0^b du\;
\dive({\widehat{X}})(\theta_u(k))
\right)\;
\phi(\theta_b(k))\,,
\end{equation}
\noindent because the singularities of ${\widehat{X}}$ are not reached, due to the restriction on the support of $\phi$. The unitarity of ${\mathcal W}_b$ follows from the change of variables $k\mapsto \theta_b(k)$ and from the Jacobian formula
\begin{equation}
\label{scatt08.eq-detcalc}
\det(\theta'_b(k))\;=\;
\exp\left(
\int_0^b du \;
\dive({\widehat{X}})(\theta_u(k))
\right)\,.
\end{equation}
\noindent This latter relation follows from integrating $\partial_b \ln \det( \theta'_b(k))=\dive({\widehat{X}})( \theta_b(k))$ with the initial condition $\det(\theta'_0)=1$. Furthermore, the group property $\theta_b\circ\theta_u=\theta_{b+u}$ immediately implies ${\mathcal W}_b{\mathcal W}_u={\mathcal W}_{b+u}$. It can be checked, by a direct calculation, that $\|{\mathcal W}_b\phi\|=\|\phi\|$ for $\phi\in{\mathcal D}$. In addition, using the Lebesgue dominated convergence theorem, $\lim_{b\to0}{\mathcal W}_b\phi=\phi$ for $\phi\in{\mathcal D}$. It follows, from a $3\epsilon$ argument, that ${\mathcal W}_b$ can be extended as a one-parameter, strongly continuous group of unitary operators on $L^2({\mathbb T}^d)$. By Stone's theorem the generator ${\widehat{A}}=\frac{1}{\imath}\partial_b {\mathcal W}_b|_{b=0}$ is self-adjoint and ${\mathcal W}_b=\exp(\imath b {\widehat{A}})$. Also \cite[Corollary 3.1.7]{BR} implies that ${\mathcal D}$ is a core for ${\widehat{A}}$ because ${\mathcal D}$ is left invariant under ${\mathcal W}_b$. Differentiating equation~\eqref{scatt08.eq-unitarygroup} w.r.t. $b$ at $b=0$ leads to
\begin{equation}
\label{scatt08.eq-dilaction}
{\widehat{A}}\,\phi \;=\;
\frac{1}{\imath}
\left(
{\widehat{X}}(\phi)+\frac{1}{2}\,\dive({\widehat{X}})\,\phi
\right)\;,
\end{equation}
\noindent where ${\widehat{X}}(\phi)=\langle {\widehat{X}}|\nabla\rangle\phi$ is the action of the vector field on the function $\phi\in {\mathcal D}$. Note that the multiplicative (zero order) operator $\frac{1}{2}\,\dive({\widehat{X}})$ is needed to make the r.h.s. of \eqref{scatt08.eq-dilaction} symmetric w.r.t. the scalar product in $L^2({\mathbb T}^d)$. The desired commutator property $\imath [A,H_0]=F(H_0)$ now follows directly from \eqref{scatt08.eq-dilaction} because $\imath [{\widehat{A}},{\widehat{H}}_0]={\widehat{X}}({\mathcal E})=F({\widehat{H}}_0)$. This can be summarized as follows:
\begin{theo}
\label{theo-dilation} Let ${\mathcal E}$ be a Morse function with only one local maximum and one local minimum and let $F$ be a smooth function vanishing linearly at the two extremal values $E_-$ and $E_+$ and nowhere else. Let ${\mathcal W}_b$ be defined by {\rm \eqref{scatt08.eq-unitarygroup}} for $\phi\in{\mathcal D}$ and with ${\widehat{X}}$ and $\theta_b$ given by {\rm \eqref{scatt08.eq-Xvec}} and its flow. Then ${\mathcal W}_b$ is a strongly continuous one-parameter group of unitary operators on $L^2({\mathbb T}^d)$. Its generator ${\widehat{A}}=\frac{1}{\imath}\partial_b {\mathcal W}_b|_{b=0}$ is self-adjoint with core ${\mathcal D}$ and satisfies
$$
\imath [{\widehat{A}},{\widehat{H}}_0]\;=\;F({\widehat{H}}_0)\;,
\qquad
\imath [{\widehat{A}},f({\widehat{H}}_0)]\;=\;\one\;.
$$
\end{theo}
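The symmetry of the right-hand side of \eqref{scatt08.eq-dilaction} can be illustrated numerically in dimension $d=1$. The sketch below is an added illustration with arbitrary smooth periodic test functions (not taken from the text): it checks that $X\partial_k+\frac{1}{2}X'$ is antisymmetric on the circle, so that the prefactor $\frac{1}{\imath}$ makes the operator symmetric. The periodic trapezoid rule is exact here for trigonometric polynomials of degree below the grid size:

```python
import numpy as np

# Toy check in d = 1 that B = X d/dk + (1/2) X' is antisymmetric on the circle,
# so that A = (1/i) B is symmetric. X, phi, psi are illustrative smooth
# periodic test functions.
N = 512
k = 2 * np.pi * np.arange(N) / N          # periodic grid

X  = lambda k: np.sin(k)                  # vector field on the circle
dX = lambda k: np.cos(k)
phi  = lambda k: np.cos(k) + 0.5 * np.sin(2 * k)
dphi = lambda k: -np.sin(k) + np.cos(2 * k)
psi  = lambda k: np.sin(k) + 0.2 * np.cos(3 * k)
dpsi = lambda k: np.cos(k) - 0.6 * np.sin(3 * k)

def B(f, df):                             # (B f)(k) = X f' + (1/2) X' f
    return X(k) * df(k) + 0.5 * dX(k) * f(k)

# Periodic trapezoid rule: exact for trigonometric polynomials of degree < N
inner = lambda u, v: np.mean(np.conj(u) * v) * 2 * np.pi

lhs = inner(psi(k), B(phi, dphi))
rhs = -inner(B(psi, dpsi), phi(k))        # antisymmetry: <psi, B phi> = -<B psi, phi>
assert abs(lhs - rhs) < 1e-12
```

The difference $\langle\psi,B\phi\rangle+\langle B\psi,\phi\rangle$ is the integral of the total derivative $(X\psi\phi)'$ over the circle, which vanishes; this is exactly the role of the zero order term $\frac{1}{2}\,\dive({\widehat{X}})$.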
A few comments conclude this section. The vector field ${\widehat{X}}$ defined by \eqref{scatt08.eq-Xvec} has singularities stemming from critical points of ${\mathcal E}$ with indefinite signature (where $F$ does not vanish). This leads to singularities in both the principal and subprincipal symbol of the differential operator ${\widehat{A}}$ as given in \eqref{scatt08.eq-dilaction}. As shown in Section~\ref{scatt08.sec-flow}, the singularity of the principal symbol is integrable in dimension $d\geq 2$ while the subprincipal symbol is integrable for $d\geq 3$. It has been shown above that this does not prevent \eqref{scatt08.eq-dilaction} from defining a self-adjoint operator. There is another similarity between $A=\Ff^*{\widehat{A}}\Ff$ and the usual dilation operator used for the Laplacian in $L^2({\mathbb R}^d)$. Let $X_j=\Ff^*{\widehat{X}}_j\Ff$ be the operator on $\ell^2({\mathbb Z}^d)$ associated with the $j$th component ${\widehat{X}}_j$ of ${\widehat{X}}$. Also let $Q=(Q_1,\ldots,Q_d)$ be the position operator defined by $Q_j\,\phi(n) =n_j\,\phi(n)$, for $n\in {\mathbb Z}^d$ and $\phi$ decreasing sufficiently fast. Then the Fourier transform of the r.h.s. of \eqref{scatt08.eq-dilaction} leads to
\begin{equation}
\label{scatt08.eq-dil}
A\;=\;\frac{1}{2}\,
\sum_{j=1}^d
\left( X_j\,Q_j+Q_j\,X_j\right)\,.
\end{equation}
\noindent Comparing with the usual dilation operator on ${\mathbb R}^d$, $X_j$ can be interpreted as the lattice analog of the $j$th component of the momentum operator.
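For the one-dimensional discrete Laplacian, ${\mathcal E}(k)=2\cos(k)$ with $E_\pm=\pm2$, the choice \eqref{scatt08.eq-Fchoice} gives $F({\mathcal E}(k))=2\sin^2(k)$ and hence ${\widehat{X}}(k)=-\sin(k)$, indeed a lattice analog of the momentum. The following numerical sketch (an added illustration) confirms this, together with the pointwise form ${\widehat{X}}(k)\,{\mathcal E}'(k)=F({\mathcal E}(k))$ of the commutator identity:

```python
import numpy as np

# 1D discrete Laplacian band E(k) = 2 cos k, band edges E_- = -2, E_+ = 2
k = np.linspace(0.01, np.pi - 0.01, 500)   # stay away from the critical points 0, pi
E = 2 * np.cos(k)
dE = -2 * np.sin(k)                        # E'(k)

F = 2 * (E - (-2.0)) * (2.0 - E) / (2.0 - (-2.0))   # the choice of F above
X = F * dE / dE**2                                  # definition of the vector field

assert np.allclose(F, 2 * np.sin(k) ** 2)  # F(E(k)) = 2 sin^2 k
assert np.allclose(X, -np.sin(k))          # hence X(k) = -sin k
assert np.allclose(X * dE, F)              # i[A, H_0] = F(H_0) pointwise: X E' = F(E)
```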
\subsection{Change of variables and REF representation}
\label{scatt08.ssect-change}
\noindent This section is devoted to the definition and the properties of the {\it rescaled energy and Fermi surface} (REF) representation. The proof of Theorem~\ref{theo-dilation} was mainly based on the change of variables $\theta_b:{\mathbb T}^d\to{\mathbb T}^d$ with Jacobian \eqref{scatt08.eq-detcalc}. It will be supplemented by the {\em coarea formula} (see {\it e.g.} \cite{Sak} for a proof and note that $\criM$ is of zero measure). If $\nu_E$ denotes the Riemannian volume measure on $\Sigma_E$ (induced by the euclidean metric on ${\mathbb T}^d$),
\begin{equation}
\label{scatt08.eq-varchange2}
\int_{{\mathbb T}^d} dk\;\phi(k)\;=\;
\int^{E_+}_{E_-} dE\;
\int_{\Sigma_E}\nu_E(d\sigma)\;
\frac{1}{|\nabla{\mathcal E}(\sigma)|}\;
\phi(\sigma)\,.
\end{equation}
\noindent This holds for $\phi$ in the set ${\mathcal D}$. For the reference energy surface $\Sigma=\Sigma_{E_r}$, the measure is simply denoted by $\nu=\nu_{E_r}$. The coarea formula leads to the following:
\begin{lemma}
\label{scatt08.lem-varchange} Let $\phi\in{\mathcal D}$. Then its integral can be written in the following three equivalent ways:
\begin{eqnarray}
\int_{{\mathbb T}^d} dk\;\phi(k) & = &
\int_{\mathbb R} db
\int_{\Sigma}\nu(d\sigma)\;
\Big|\det(\theta'_b|_{T_\sigma\Sigma})\Big|\;
\Big|{\widehat{X}}(\theta_b(\sigma))\Big|\;
\phi\left(\theta_b(\sigma)\right)\,,
\label{scatt08.eq-varchange3a}\\
& = &
\int_{\mathbb R} db
\int_{\Sigma}\nu(d\sigma)\;
\exp\left(
\int_0^b du \;\dive({\widehat{X}})(\theta_u(\sigma))
\right)\;
\Big|{\widehat{X}}(\sigma)\Big|\;
\phi\left(\theta_b(\sigma)\right)\,,
\label{scatt08.eq-varchange3} \\
& = &
\int^{E_+}_{E_-}dE
\int_{\Sigma}\nu(d\sigma)\;
\frac{
|\det(\theta'_{f(E)}|_{T_\sigma\Sigma})|
}{|\nabla {\mathcal E}({\theta}_{f(E)}(\sigma))|}\;
\phi\left(\theta_{f(E)}(\sigma)\right)\,,
\label{scatt08.eq-varchange4}
\end{eqnarray}
where $\theta'_b|_{T_\sigma\Sigma}$ denotes the derivative of $\theta_b$ restricted to the tangent space of $\Sigma$ at $\sigma$ {\rm (}so that this is a $(d-1)\times (d-1)$ matrix{\rm )}.
\end{lemma}
\noindent {\bf Proof:} Starting from the coarea formula (\ref{scatt08.eq-varchange2}), the substitution $b=f(E)$ given in (\ref{scatt08.eq-energychange}) and the diffeomorphism of Proposition~\ref{scatt08.prop-Morsediffeo} will be used in the following change of variables:
\begin{eqnarray*}
\int_{{\mathbb T}^d} dk\;\phi(k) & = &
\int_{\mathbb R} db
\int_{\Sigma_{f^{-1}(b)}}
\nu_{f^{-1}(b)}(d\sigma)\;
\frac{F(f^{-1}(b))}{|\nabla {\mathcal E}(\sigma)|}\;
\phi(\sigma) \\
& = &
\int_{\mathbb R} db\;
\int_{\Sigma}\nu(d\sigma)\;
\Big|\det(\theta'_b|_{T_\sigma\Sigma})\Big|\;
\frac{F({\mathcal E}(\theta_b(\sigma)))}{|\nabla {\mathcal E}(\theta_b(\sigma))|}\;
\phi(\theta_b(\sigma)) \,.
\end{eqnarray*}
\noindent In the second equality, the identity $F(f^{-1}(b))=F({\mathcal E}(\sigma))$ for $\sigma\in\Sigma_{f^{-1}(b)}$ was used. Replacing the definition of ${\widehat{X}}$ already shows \eqref{scatt08.eq-varchange3a} as well as \eqref{scatt08.eq-varchange4}. Next $\theta_b'$ can be decomposed as $\theta'_b|_{T_\sigma{\mathbb T}^d}=\theta'_b|_{T_\sigma\Sigma}\oplus \theta'_b|_{(T_\sigma\Sigma)^\perp}$ implying
\begin{equation}
\label{scatt08.eq-factorize}
|\det(\theta'_b|_{T_\sigma{\mathbb T}^d})|\;=\;|\det(\theta'_b|_{T_\sigma\Sigma})|\,
|\theta'_b|_{(T_\sigma\Sigma)^\perp}|
\;.
\end{equation}
In order to compute $\theta'_b|_{(T_\sigma\Sigma)^\perp}$ it should be remarked that the derivative of the equation $\partial_b\theta_b={\widehat{X}}\circ\theta_b$ is $\partial_b\theta'_b={\widehat{X}}'\circ\theta_b\,\theta'_b$, leading to $\theta'_b({\widehat{X}}(\sigma))={\widehat{X}}(\theta_b(\sigma))$. As the one-dimensional space ${(T_\sigma\Sigma)^\perp}$ is spanned by ${\widehat{X}}(\sigma)$, it follows that
$$
|\theta'_b|_{(T_\sigma\Sigma)^\perp}|
\;=\;
\Bigl|\theta'_b\Bigl( \frac{{\widehat{X}}(\sigma)}{|{\widehat{X}}(\sigma)|}\Bigr)\Bigr|
\;=\;
\frac{|{\widehat{X}}(\theta_b(\sigma))|}{|{\widehat{X}}(\sigma)|}
\;.
$$
Consequently
\begin{equation}
\label{scatt08.eq-Jacobians}
\Big|\det(\theta'_b|_{T_\sigma\Sigma})\Big|\;
\Big|{\widehat{X}}(\theta_b(\sigma))\Big|\;=\;
\exp\left(
\int_0^b du \;\dive({\widehat{X}})( \theta_u(\sigma))
\right)\;
\Big|{\widehat{X}}(\sigma)\Big|\,.
\end{equation}
\noindent Replacing this in \eqref{scatt08.eq-varchange3a} proves \eqref{scatt08.eq-varchange3}.
\hfill $\Box$
\vspace{.2cm}
The following notation will be useful
\begin{equation}
\label{scatt08.eq-JacFac}
d_{b}(\sigma)\;=\;
\Big|\det(\theta'_b|_{T_\sigma\Sigma})\Big|^{\frac{1}{2}}\;
\Big|{\widehat{X}}(\theta_b(\sigma))\Big|^{\frac{1}{2}}\;=\;
\exp\left(
\frac{1}{2}\;\int_0^{b} du \;\dive({\widehat{X}})(\theta_u(\sigma))
\right)\;
\Big|{\widehat{X}}(\sigma)\Big|^{\frac{1}{2}}\,.
\end{equation}
\noindent From (\ref{scatt08.eq-varchange3}), it follows that the map
${\mathcal U}$ defined on ${\mathcal D}$ by
\begin{equation}
\label{scatt08.eq-Udef}
({\mathcal U} \phi)_{b}(\sigma)\;=\;
d_{b}(\sigma)\;
\phi(\theta_{b}(\sigma))\,,
\hspace{2cm}
\phi\in {\mathcal D}\subset L^2({\mathbb T}^d)\,,
\end{equation}
\noindent extends to a unitary from $L^2({\mathbb T}^d)$ to $L^2({\mathbb R})\otimes L^2(\Sigma,\nu)$. The variable $b$ is the rescaled energy difference w.r.t. the reference quasi-Fermi surface $\Sigma$. Expressing this in terms of ${\mathcal W}_b$ (see equation~(\ref{scatt08.eq-unitarygroup})), leads to $({\mathcal U} \phi)_{b}(\sigma)=|{\widehat{X}}(\sigma)|^{\frac{1}{2}}({\mathcal W}_{b}\phi)(\sigma)$. The inverse, acting on $\psi\in L^2({\mathbb R}) \otimes L^2(\Sigma,\nu)$, is given by
$$({\mathcal U}^* \psi)(k)\;=\;
d_{b}(\theta_{-b}(k))^{-1}\;
\psi_{b}(\theta_{-b}(k))\,,
\hspace{2cm}
b=f({\mathcal E}(k))\,.
$$
\noindent The expression $\widetilde{H}_0={\mathcal U}\widehat{H}_0{\mathcal U}^*={\mathcal U}\Ff H_0\Ff^*{\mathcal U}^*$ will be called the REF representation of $H_0$. Any operator in the REF representation will carry a tilde. The operator $(\widetilde{B}\psi)_{b}=b\psi_{b}$ is the {\em rescaled energy}. Its conjugate operator is $\widetilde{A}$ with $(\widetilde{A}\psi)_{b}=\frac{1}{\imath}\partial_{b}\psi_{b}$. Both of these operators are unbounded and have the standard self-adjoint domains. The following result states that these notations are consistent with the above.
\begin{proposi}
\label{scatt08.prop-rep}
The following relations hold
$$
{\mathcal U}\,{\widehat{H}}_0\,{\mathcal U}^*\;=\;
f^{-1}(\widetilde{B})\otimes {\bf 1}_{\Sigma}\,,
\hspace{2cm}
{\mathcal U}\,f({\widehat{H}}_0)\,{\mathcal U}^*\;=\;\widetilde{B}\,,
\hspace{2cm}
{\mathcal U}\,{\widehat{A}}\,{\mathcal U}^*\;=\;\widetilde{A}\,.
$$
\end{proposi}
\noindent {\bf Proof:} The only point to be checked is how the commutation relations of $H_0$ and $A$, as proved in Theorem~\ref{theo-dilation}, are implemented under ${\mathcal U}$. The first identity results from (\ref{scatt08.eq-Udef}) and
$$
f^{-1}(b)\phi(\theta_{b}(\sigma))\;=\;
{\mathcal E}(\theta_{b}(\sigma))
\phi(\theta_{b}(\sigma))\;=\;
\Big({\widehat{H}}_0\phi\Big)(\theta_{b}(\sigma))\,.
$$
\noindent The second formula is obtained from the first one through (unbounded) functional calculus. The third one follows from
$$
(\widetilde{A}\otimes{\bf 1}\,{\mathcal U}\phi)_{b}(\sigma)\;=\;
\frac{1}{\imath}\,
\partial_{b}\,
(|{\widehat{X}}|^{\frac{1}{2}}\,
e^{\imath b{\widehat{A}}}\phi)(\sigma)\;=\;
(|{\widehat{X}}|^{\frac{1}{2}}\,
e^{\imath b{\widehat{A}}}\,{\widehat{A}}\, \phi)(\sigma)\;=\;
({\mathcal U} {\widehat{A}}\,\phi)_{b}(\sigma)\,,
$$
\noindent where ${\mathcal U}$ is expressed in terms of the unitary group ${\mathcal W}_b=e^{\imath b{\widehat{A}}}$ up to the factor $|{\widehat{X}}(\sigma)|^{\frac{1}{2}}$ which does not depend on $b$.
\hfill $\Box$
\vspace{.2cm}
It is worth comparing the previous construction to the usual one used in scattering theory on ${\mathbb R}^d$, where $H_0=-\Delta$ is the Laplacian acting on $L^2({\mathbb R}^d)$. Then, the (unitary) Fourier transform $\Ff:L^2({\mathbb R}^d)\mapsto L^2({\mathbb R}^d)$ diagonalizes $H_0$, that is, $\Ff H_0\Ff^*$ is the operator of multiplication by ${\mathcal E}(k)=k^2$. This function has only one critical point at $k^*_-=0$ corresponding to the minimum of energy $E_-=0$. The vector field ${\widehat{X}}$ is defined as in \eqref{scatt08.eq-Xvec}, now with $k\in{\mathbb R}^d$. Let the reference energy be $E_r= 1$ so that the (quasi-) Fermi surface $\Sigma$ is the unit sphere ${\mathbb S}^{d-1}$. Furthermore let $F(E)=2E$, which vanishes at the only critical value. Then ${\widehat{X}}(k)=k$ and $f(E)=\int^E_1\frac{de}{2e}=\frac{1}{2}\ln(E)$. The flow is $\theta_b(\sigma)=e^b\sigma$. As ${\rm div}({\widehat{X}})=d$, it follows that $d_b(\sigma)=e^{\frac{1}{2} db}$. Therefore the unitary transformation ${\mathcal U}:L^2({\mathbb R}^d)\to L^2({\mathbb R})\otimes L^2({\mathbb S}^{d-1})$ to the REF representation is given by
$$
({\mathcal U}\phi)_b(\sigma)=
e^{\frac{1}{2}db}\,\phi(e^b\sigma)\,.
$$
\noindent This transformation is discussed and used, {\it e.g.}, by Jensen \cite{Jen} and also in \cite{KR3}.
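In this continuum setting the unitarity of ${\mathcal U}$ can be verified directly. The sketch below is an added illustration: it checks $\|{\mathcal U}\phi\|=\|\phi\|$ for a Gaussian in dimension $d=1$, where $\Sigma={\mathbb S}^0=\{\pm1\}$ and $({\mathcal U}\phi)_b(\sigma)=e^{b/2}\,\phi(e^b\sigma)$:

```python
import numpy as np

# Elementary trapezoid quadrature
trap = lambda y, x: float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2)

# ||phi||^2 on L^2(R) for phi(x) = exp(-x^2); exact value is sqrt(pi/2)
x = np.linspace(-10.0, 10.0, 200001)
lhs = trap(np.exp(-2 * x**2), x)

# REF side: (U phi)_b(sigma) = e^{b/2} phi(e^b * sigma), sigma in S^0 = {+1, -1}
b = np.linspace(-40.0, 5.0, 400001)
rhs = 0.0
for sigma in (+1.0, -1.0):
    U_phi_sq = np.exp(b) * np.exp(-2.0 * np.exp(2 * b))  # |e^{b/2} phi(e^b sigma)|^2
    rhs += trap(U_phi_sq, b)

assert abs(lhs - np.sqrt(np.pi / 2)) < 1e-6
assert abs(lhs - rhs) < 1e-6
```

The change of variables $t=e^b$ on each half-line $\sigma\,{\mathbb R}_{>0}$ makes the equality of the two sides manifest.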
\vspace{.2cm}
\subsection{EF representation}
\label{scatt08.sec-EFrep}
\noindent Another natural useful representation is the {\it energy and Fermi surface} (EF) representation. A local version of this representation is used in the paper by Birman and Yafaev \cite{BY}. It is associated with the unitary map ${\mathcal V}:L^2({\mathbb T}^d)\to L^2([E_-,E_+])\otimes L^2(\Sigma,\nu)$ defined on ${\mathcal D}$ by
$$
({\mathcal V}\phi)_E(\sigma)\;=\;
\frac{
|\det(\theta'_{f(E)}|_{T_\sigma\Sigma})|^{\frac{1}{2}}
}{
|\nabla {\mathcal E}({\theta}_{f(E)}(\sigma))|^{\frac{1}{2}}
}\;
\phi({\theta}_{f(E)}(\sigma))\,,
\hspace{2cm}
\phi\in{\mathcal D}\,.
$$
\noindent The unitarity follows directly from \eqref{scatt08.eq-varchange4}. It is related to the unitary operator ${\mathcal U}$ as follows
\begin{equation}
\label{scatt08.eq-REFlinkEF}
({\mathcal V}\phi)_E(\sigma)\;=\;
\frac{1}{F(E)^{\frac{1}{2}}}\;
\frac{1}{|{\widehat{X}}(\sigma)|^{\frac{1}{2}}}\;
({\mathcal U}\phi)_{f(E)}(\sigma)\,.
\end{equation}
\noindent The EF representation of an operator on $L^2({\mathbb T}^d)$ is then obtained by conjugation with ${\mathcal V}$. It will carry a circle instead of a tilde, such as $\overset{\;\circ}{H}_0={\mathcal V}\widehat{H}_0{\mathcal V}^*$, $\overset{\;\,\circ}{A}={\mathcal V}\widehat{A}\,{\mathcal V}^*$ and so on. Any operator that is a direct integral in the REF representation is also a direct integral in the EF representation. The first example of this type is the Hamiltonian $H_0$ itself:
$$
(\overset{\;\circ}{H}_0\phi)_E(\sigma)\;=\;E\,\phi_E(\sigma)\,.
$$
\noindent More generally, given any fibered operator $\widetilde{O}=\int^{\oplus} db\,\widetilde{O}_b$ in the REF representation, its EF representation is given by $\overset{\;\circ}{O}=\int^{\oplus} dE\,\overset{\;\circ}{O}_E$ with $\overset{\;\circ}{O}_E=\widetilde{O}_{f(E)}$. Another example will be the scattering matrix below. The dilation operator in the EF representation can be easily deduced from \eqref{scatt08.eq-REFlinkEF}:
$$
(\overset{\;\,\circ}{A}\phi)_E(\sigma) \;=\; F(E)\,
\frac{1}{\imath}\,\partial_E
\phi_E(\sigma)+
\frac{1}{2\imath}\;F'(E)\,\phi_E(\sigma)\,,
$$
\noindent where $\phi$ is in the domain of $\overset{\;\,\circ}{A}$, in particular, its derivative is square integrable and $\phi$ vanishes at the boundaries of $[E_-,E_+]$.
\vspace{.2cm}
\subsection{Boundary values of the free resolvent}
\label{scatt08.ssect-LAP}
\noindent Let $\Lambda\subset{\mathbb Z}^d$ be a finite set. Later on, $\Lambda$ will be the support of the perturbation. Associated with $\Lambda$ is the subspace $\ell^2(\Lambda)={\mathbb C}^{|\Lambda|}$. Let $\piso^\ast:{\mathbb C}^{|\Lambda|}\to \ell^2({\mathbb Z}^d)$ be the canonical injection obtained by extending elements of $\ell^2(\Lambda)$ by zero outside $\Lambda$. It is a partial isometry such that $\piso^*\piso$ is the $|\Lambda|$-dimensional projection in $\ell^2({\mathbb Z}^d)$ onto the subspace of elements supported by $\Lambda$, while $\piso\,\piso^*=\one_{{\mathbb C}^{|\Lambda|}}$. The finite volume Green matrix is defined by:
$$
G^\piso_0(z)\;=\;\piso \;(z-H_0)^{-1}\;\piso^*\;.
$$
\noindent This is a matrix of size $|\Lambda|\times |\Lambda|$. If $\Lambda=\{0\}$ it will be called the Green function. An important basic fact about the Green matrix is its Herglotz property, that is, $-\Im m\,G_0^\piso(z)=\imath(G_0^\piso(z)-G_0^\piso(z)^*)/2>0$ for $\Im m(z)>0$. This implies, in particular, that $G_0^\piso(z)$ is invertible for $\Im m(z)\neq 0$. The boundary values of $G_0^\piso(z)$ on the real axis will be analyzed in this section. Gieseker, Kn\"orrer and Trubowitz \cite{GKT} thoroughly studied the Fermi surfaces for dimensions $d\geq 2$ and for generic periodic potentials. They showed that the Fermi surface is an algebraic variety and constructed a compactification. They also investigated the nature of the van Hove singularities, which, in two dimensions, produce a logarithmic divergence of the density of states, namely the diagonal elements of $\Im m\,G_0^\piso(E-\imath 0)$. For $d=2$ and the discrete Laplacian, these limit behaviors can also be read off the explicit formulas for the Green function given in \cite{Eco}, but for $d\geq 3$ only numerical results and toy models seem to be known.
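As a concrete numerical illustration (added here; it is not used in the sequel), the momentum-space integral defining the Green function can be evaluated by periodic trapezoidal quadrature. For the one-dimensional discrete Laplacian one has the classical closed form $G_0(z)=(z^2-4)^{-1/2}$, with the branch behaving like $1/z$ at infinity, against which both the quadrature and the Herglotz property can be tested:

```python
import numpy as np

# <0|(z - H_0)^{-1}|0> = (2*pi)^{-1} * integral over T of dk / (z - 2 cos k),
# evaluated by the (spectrally accurate) periodic trapezoid rule
def green(z, N=4096):
    k = 2 * np.pi * np.arange(N) / N
    return np.mean(1.0 / (z - 2 * np.cos(k)))

# Closed form in d = 1: G_0(z) = 1/sqrt(z^2 - 4), branch ~ 1/z at infinity
z = 3.0 + 0.1j
assert abs(green(z) - 1.0 / np.sqrt(z * z - 4)) < 1e-10

# Herglotz property: -Im G_0(z) > 0 for Im z > 0, also inside the band
assert green(1.0 + 0.5j).imag < 0
assert green(1.0 - 0.5j).imag > 0
```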
\begin{proposi}
\label{scatt08.prop-Green3d}
Let $d\geq 3$ and let ${\mathcal E}$ be analytic. The weak limits $G^\piso_0(E\pm\imath 0)=\lim_{\epsilon\downarrow 0} G^\piso_0(E\pm\imath \epsilon)$ exist. Furthermore:
\vspace{.1cm}
\noindent {\rm (i)} Away from the critical values of ${\mathcal E}$, the map $E\in{\mathbb R}\mapsto G^\piso_0(E\pm\imath 0)$ is real analytic. At the critical values it is H\"older continuous.
\vspace{.1cm}
\noindent {\rm (ii)} $\Im m\,G^\piso_0(E-\imath 0)= - \Im m\,G^\piso_0(E+\imath 0)$ vanishes on $(-\infty,E_-]\cup[E_+,\infty)$. It is a positive matrix
with
nonzero diagonal entries on $(E_-,E_+)$.
\vspace{.1cm}
\noindent {\rm (iii)} The map $E\in{\mathbb R}\mapsto \Re e\,G^\piso_0(E)$ is negative and decreasing on $(-\infty,E_-]$ and positive and
decreasing
on $[E_+,\infty)$. Furthermore, $G^\piso_0(\pm\infty)=0$.
\vspace{.1cm}
\noindent {\rm (iv)} For $E\in[E_-,E_+]$ close to $E_\pm$,
$$
\Im m\,G^\piso_0(E-\imath 0)
\;=\;
D_\pm\;|E-E_\pm|^{\frac{d}{2}-1}\;M^\piso_\pm
\;+\;
{\mathcal O}(|E-E_\pm|^{\frac{d}{2}})
\;,
$$
where $M^\piso_\pm=|v^\piso_\pm\rangle\langle v^\piso_\pm|$ is the projection on the vector $v^\piso_\pm=(|\Lambda|^{-\frac{1}{2}}\,e^{\imath n\cdot k^*_\pm})_{n\in\Lambda}\in{\mathbb C}^{|\Lambda|}$ and
$$D_\pm \;=\;
\frac{2^{\frac{d}{2}-1}\pi\, |\Lambda|\,|{\mathbb S}^{d-1}|}{(2\pi)^d}\;|\det({\mathcal E}''(k_\pm^*))|^{-\frac{1}{2}}\; .
$$
\vspace{.1cm}
\noindent {\rm (v)} There are matrices $N^\piso_\pm<0$ such that
$$
\Re e \,G^\piso_0(E)\;=\;G^\piso_0(E_\pm)\;+\;
\left\{
\begin{array}{cc}
{\mathcal O}(E-E_\pm)& d=3\;,
\\
D_\pm |E-E_\pm|\ln\left(\frac{1}{|E-E_\pm|}\right)\,M^\piso_\pm+ {\mathcal O}(E-E_\pm)
& d=4\;,
\\
(E-E_\pm)\,N^\piso_\pm+o(E-E_\pm)
& d\geq 5\;.
\end{array}
\right.
$$
\end{proposi}
\noindent {\bf Proof:} The proofs given below are detailed extensions of the work of van Hove \cite{VH}. For $m,n\in\Lambda$, the matrix elements of $G^\piso_0(z)$ are given by
$$\langle m|G^\piso_0(z)|n\rangle \;=\;
\langle m|(z-H_0)^{-1}|n\rangle \;=\;
\int_{{\mathbb T}^d} \frac{d^dk}{(2\pi)^d}\;
\frac{e^{\imath (n-m)\cdot k}}{z- {\mathcal E}(k)}\,.
$$
\noindent {\bf (i) Outside the critical values: } By construction the matrix $G^\piso_0(z)$ is holomorphic for $z\notin \sigma(H_0)$. In particular, since the spectrum of $H_0$ is the interval $\sigma(H_0)=[E_-,E_+]$, it follows that the map $E\in{\mathbb R}\setminus[E_-,E_+]\mapsto G_0^\piso(E)$ is real analytic and converges to zero at $\pm \infty$. Moreover, its derivative is negative. Hence, if the limit of this matrix exists at $E_\pm$, this limit is a negative matrix at $E_-$ and a positive matrix at $E_+$. Now, since ${\mathcal E}$ is analytic, it has only a finite number of critical points and it admits a holomorphic extension to a small neighborhood of ${\mathbb T}^d$ in $({\mathbb T}+\imath {\mathbb R})^d$ of the form $B_\eta = \{k+\imath \kappa \in ({\mathbb T}+\imath {\mathbb R})^d\,|\, \max_{1\leq i\leq d}{|\kappa_i|} <\eta \}$. Hence, for $\epsilon>0$ small enough, the manifold defined as the set ${\mathbb T}_\epsilon^d = \{k+\imath \epsilon\nabla {\mathcal E}(k)\,|\, k\in{\mathbb T}^d\}$ is entirely contained in $B_\eta$. Using the Cauchy formula, it follows that
$$\langle m|G^\piso_0(z)|n\rangle \;=\;
\int_{{\mathbb T}_\epsilon^d} \frac{d^dk'}{(2\pi)^d}\;
\frac{e^{\imath (n-m)\cdot k'}}{z- {\mathcal E}(k')}\,.
$$
\noindent Since $k'\in {\mathbb T}_\epsilon^d$, it follows that $k'=k+\imath \epsilon\nabla{\mathcal E}(k)$ for some $k\in{\mathbb T}^d$, so that, using a Taylor expansion,
$$\Im m\,{\mathcal E}(k') \;=\; \epsilon\,|\nabla{\mathcal E}(k)|^2\;+\; {\mathcal O}(\epsilon^2)\;.
$$
\noindent Consequently, if $E\in[E_-,E_+]\setminus {\mathcal E}(\cri)$ is not a critical value, there is $\rho >0$ such that the distance $\dist\{z, {\mathcal E}({\mathbb T}_\epsilon^d)\}$ stays bounded away from zero whenever $|z-E|<\rho$. In particular, $G^\piso_0(z)$ extends as a holomorphic function of $z$ from $\Im m(z) <0$ to a neighborhood of $E$. Hence the boundary value $G^\piso_0(E-\imath 0)$ is analytic in $E$ on $[E_-,E_+]\setminus {\mathcal E}(\cri)$. A similar argument applies to $G^\piso_0(E+\imath 0)$.
\vspace{.1cm}
\noindent {\bf (ii) Partitioning:} For any $k^*\in\cri$, let $B_\delta(k^*)$ be the open ball centered at $k^*$ of radius $\delta>0$. Let also $\overline{B}_{\delta/2}(k^*)$ be the closed ball also centered at $k^*$ of radius $\delta/2$. Let $U_{\mbox{\rm\tiny reg}}$ be the open set obtained by removing from ${\mathbb T}^d$ the union of the balls $\overline{B}_{\delta/2}(k^*)$, $k^*\in\cri$. It follows that the family $\{U_{\mbox{\rm\tiny reg}}\}\cup\{B_\delta(k^*)\,|\, k^*\in\cri\}$ is a finite open cover of ${\mathbb T}^d$. Let then $\{\chi_{\mbox{\rm\tiny reg}}\}\cup \{\chi_{k^*}\,|\, k^*\in\cri\}$ be a smooth partition of unity associated with this open cover. The previous integral can be decomposed into a sum
\begin{equation}
\label{scatt08.eq-part}
\langle m|(z-H_0)^{-1}|n\rangle \;=\;
G_{\mbox{\rm\tiny reg}}(z) + \sum_{k^*\in\cri} G_{k^*}(z)\,,
\hspace{1cm}
G_{k^*}(z) \;=\; \int_{B_\delta(k^*)} \frac{d^dk}{(2\pi)^d}\;\chi_{k^*}(k)\;
\frac{e^{\imath (n-m)\cdot k}}{z- {\mathcal E}(k)}\,.
\end{equation}
\noindent The contribution $G_{\mbox{\rm\tiny reg}}$ is regular because the integral vanishes around all critical points. Using the coarea formula and the results of Appendix~\ref{sec-Borel}, it follows that $G_{\mbox{\rm\tiny reg}}$ is holomorphic in the complement of the spectrum of $H_0$ and its boundary values are smooth everywhere on the real line.
\vspace{.1cm}
\noindent {\bf (iii) Non extremal critical points:} The boundary values of the $G_{k^*}$'s, however, may not be smooth because of the contribution of the critical point. Let $k^*$ be one of the critical points of signature $(d_+,d_-)$, with $d_\pm \neq 0$ and $d_++d_-=d$, and in the following $G_\ast= G_{k^*}$ will denote its contribution to the previous decomposition. If $\delta$ is small enough, the Morse lemma \cite{Nic} implies that there exists a neighborhood $U$ of $k^*$ containing $B_\delta(k^*)$ and a diffeomorphism $\varphi:B_\delta(0)\to U$ such that $\varphi(0)=k^*$ and ${\mathcal E}_\varphi={\mathcal E}\circ \varphi$ is quadratic:
$$
{\mathcal E}_\varphi(k)
\;=\;
E_\ast \,+\, \frac{1}{2}\,\sum_{i=1}^{d_+} k_i^2 \,-\,\frac{1}{2}\,\sum_{j=d_++1}^{d} k_j^2
\;,
$$
for $\|k\| < \delta$ and where $E_\ast={\mathcal E}(k^*)$. This diffeomorphism has a Jacobian matrix $J= \varphi'(0)$ satisfying $J\mbox{\rm diag}(\one_{d_+},-\one_{d_-})J^*= {\mathcal E}''(k^*)^{-1}$. In particular, the Jacobi determinant of $\varphi$ stays close to $|\det({\mathcal E}''(k^*))|^{-1/2}$ over the neighborhood $U$ and is a smooth function. It follows that the integral defining $G_\ast$ is given by
$$G_\ast(z) \;=\;
\int_{\|k\|<\delta} \frac{d^dk}{(2\pi)^d}\;
\bigl|\det(\varphi'(k))\bigr|\,\chi_{k^*} (\varphi(k))\;
\frac{e^{\imath (n-m)\cdot \varphi(k)}}{z- {\mathcal E}_\varphi(k)}\,.
$$
\noindent It will be convenient to use the following polar variables
$$(k_1,\ldots,k_{d_+}) \;=\; r_+ \omega_+\;,
\hspace{2cm}
(k_{d_++1},\ldots,k_{d}) \;=\; r_- \omega_-\;,
$$
\noindent where $r_\pm \geq 0$ are the radial variables and $\omega_\pm \in {\mathbb S}^{d_\pm -1}$ the angular ones. It follows that
$$G_\ast(z) \;=\;
\int_{r_+^2+r_-^2<\delta^2}
\frac{r_+^{d_+-1}dr_+\, r_-^{d_--1}dr_-}{(2\pi)^d}\;
\frac{F(r_+,r_-)}{z- E_\ast - \frac{1}{2}(r_+^2-r_-^2)}\;,
$$
\noindent where $F$ is a smooth function with support inside the disk $r_+^2+r_-^2<\delta^2$ given by
$$F(r_+,r_-) \;=\;
\int_{{\mathbb S}^{d_+ -1}\times {\mathbb S}^{d_- -1}} d\omega_+ \,d\omega_-\;
\bigl|\det(\varphi'(k))\bigr|\,\chi_{k^*} (\varphi(k))\;
e^{\imath (n-m)\cdot \varphi(k)}\,.
$$
\noindent Equivalently $G_\ast$ can be expressed as
$$G_\ast(z) \;=\;
\int_{{\mathbb R}} \frac{\rho(e) de}{z-E_\ast -e}\,,
$$
\noindent where $\rho$ is defined by
$$\rho(e) \;=\;
\int_{r_+^2+r_-^2<\delta^2}
\frac{r_+^{d_+-1}dr_+\, r_-^{d_--1}dr_-}{(2\pi)^d}\;
F(r_+,r_-)\;
\delta\left(
\frac{r_+^2-r_-^2}{2}-e
\right)\;.
$$
\noindent If $e >0$, the usual rules for the Dirac distribution $\delta$ lead to
\begin{equation}
\label{scatt08.eq-rho}
\rho(e) \;=\;
\int_0^\delta \frac{dr}{(2\pi)^d}\;\,
r^{d_--1} (e+r^2)^{(d_+-2)/2} \,\;
F(\sqrt{r^2+e},r)
\end{equation}
\noindent For $e <0$, a similar formula holds after exchanging $d_+$ with $d_-$ and $F(r,r')$ with $F_s(r,r')=F(r',r)$. If both $d_\pm\geq 2$, the integrand in equation~(\ref{scatt08.eq-rho}) is uniformly bounded, so the Lebesgue dominated convergence theorem implies that the limits $\rho(\pm 0)$ exist and are equal; in particular, $\rho$ is continuous at $e=0$. If instead $d_+=1$ (the case $d_-=1$ is analogous), then $d_-=d-1\geq 2$ and the integrand is bounded by $r^{d_--1} (e+r^2)^{(d_+-2)/2}= r^{d-2} (e+r^2)^{-1/2}\leq r^{d-3}$, which is integrable on $(0,\delta)$ since $d\geq 3$. Hence, again, $\rho$ is continuous at $e=0$.
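The continuity of $\rho$ at $e=0$ can be illustrated on a model saddle. The sketch below is an illustration only, with the hypothetical choices $(d_+,d_-)=(1,2)$, $\delta=1$ and $F\equiv 1$; the two branches then have the closed forms $\rho(e)=\sqrt{1+e}-\sqrt{e}$ for $e>0$ and $\rho(e)=1$ for $e<0$, both tending to $1$ as $e\to 0$:

```python
def rho(e, n=100000):
    """Midpoint-rule evaluation of rho(e) for the model saddle with
    (d_+, d_-) = (1, 2), cutoff delta = 1 and F identically 1."""
    h = 1.0 / n
    if e >= 0:
        # integrand r^{d_- - 1} (e + r^2)^{(d_+ - 2)/2} = r (e + r^2)^{-1/2}
        f = lambda r: r * (e + r * r) ** -0.5
    else:
        # e < 0: exchange d_+ and d_-, giving r^0 (|e| + r^2)^0 = 1
        f = lambda r: 1.0
    return sum(f((k + 0.5) * h) for k in range(n)) * h
```

The gap $\rho(e)-\rho(-e)\approx\sqrt{e}$ shrinks as $e\downarrow 0$, in agreement with the H\"older continuity established in the text.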
\vspace{.1cm}
Equation~(\ref{scatt08.eq-rho}) also shows that $\rho$ is differentiable for $e \neq 0$. Moreover, its derivative is given by the sum of two terms $\rho'_1+\rho'_2$ with
$$\rho'_1(e)\;=\; \frac{d_+-2}{2}
\int_0^\delta \frac{dr}{(2\pi)^d}\;\,
r^{d_--1} (e+r^2)^{d_+/2-2} \,\;
F(\sqrt{r^2+e},r)\,,
$$
$$\rho'_2(e)\;=\; \frac{1}{2}
\int_0^\delta \frac{dr}{(2\pi)^d}\;\,
r^{d_--1} (e+r^2)^{d_+/2-3/2} \,\;
\partial_1 F(\sqrt{r^2+e},r)\,.
$$
\noindent The same argument as before shows that, if $d\geq 3$, $\rho'_2$ admits finite limits as $e\to 0^\pm$. However, these two limits may not be equal if $F\neq F_s$. On the other hand, if $d\geq 5$, $\rho'_1$ also admits limits and the two limits coincide. For $d=3,4$, however, it follows that $d_+ <4$, so that $\rho'_1$ may diverge as $e\rightarrow 0$. Nevertheless, the integrand can be bounded by
$$r^{d_--1} (e+r^2)^{d_+/2-2} \;\leq\;
e^{-\alpha}\, r^{d-5+2\alpha}\;,
$$
\noindent which is integrable if $\alpha >1/2$ for $d=3$ and $\alpha>0$ for $d=4$. Hence in both cases, there is $K>0$ such that
$$\left|\partial_e\rho(e)\right| \;\leq\;
\frac{K}{e^\alpha}
\hspace{1cm}\Longrightarrow \hspace{1cm}
|\rho(e)-\rho(0)|\;\leq\; \frac{K}{1-\alpha}\,e^{1-\alpha}\,,
$$
\noindent showing that $\rho$ is H\"older continuous at the critical points. Using the Plemelj-Privalov theorem (Lemma~\ref{scatt08.lem-hilb} of Appendix~\ref{sec-Borel}), it follows that the same is true for the boundary values of $G_\ast$.
\vspace{.1cm}
\noindent {\bf (iv) Near the extrema: }The behavior near the maximum or the minimum can be treated similarly so that it is enough to consider only the minimum at $k_-^*$. Again by the Morse lemma, there is a neighborhood $U$ of $k_-^*$ containing $B_\delta(k_-^*)$ and a diffeomorphism $\varphi:B_\delta(0)\to U$ with $\varphi(0)=k_-^*$ and such that ${\mathcal E}\circ \varphi(k) = E_-+ (1/2) \sum_{i=1}^d k_i^2$. Introducing the polar coordinates $r= \|k\|$ and $\omega\in{\mathbb S}^{d-1}$ so that $k=r\omega$, the contribution $G_-(z)=G_{k_-^*}(z)$ is given by the integral
$$G_-(z) \;=\;
\int_0^\delta \frac{r^{d-1}dr}{(2\pi)^d} \;
\frac{e^{\imath(n-m)\cdot k_-^*}\;F(r)}{z-E_- -\frac{1}{2}r^2}\,,
\hspace{1cm}
F(r) \;=\;
\int_{{\mathbb S}^{d-1}} d\omega \,
|\varphi'(r\omega)|\, \chi_-(\varphi(r\omega))\,
e^{\imath (n-m)\cdot(\varphi(r\omega)-k_-^*)}\,.
$$
\noindent with $\chi_-$ a smooth function supported in $U$ and equal to $1$ on the ball $\|k-k_-^*\|\leq \delta/2$. In particular, $F$ is smooth and bounded on $0<r<\delta$; it vanishes in a neighborhood of $r=\delta$ and all its derivatives have limits at $r=0$. Consequently, the integration domain can be extended to $[0,\infty)$ without change. At this point two remarks should be made:
\vspace{.1cm}
\noindent (1) $F(0)=|\det(\varphi'(0))|\, |{\mathbb S}^{d-1}| > 0$ and the Morse lemma shows that $|\det(\varphi'(0))|=\det({\mathcal E}''(k_-^*))^{-\frac{1}{2}}$.
\vspace{.1cm}
\noindent (2) The expression $e^{\imath (n-m)\cdot k_-^*}$ is the matrix element $|\Lambda|\langle m| M_-^\piso |n\rangle$ of the projection matrix $M_-^\piso$.
\vspace{.1cm}
\noindent The change of variable $e=r^2/2$ yields
\begin{equation}
\label{eq-G-}
G_-(z) \;=\; \frac{e^{\imath(n-m)\cdot k_-^*}}{(2\pi)^d}
\int_0^\infty de\;\frac{(2e)^{\frac{d}{2}-1}F(\sqrt{2e})}{z-E_- -e}\;.
\end{equation}
\noindent Since $d\geq 3$, the function $e\in [0,\infty)\mapsto e^{\frac{d}{2}-1}F(\sqrt{2e})$ is continuous and vanishes at $e=0$ like $e^{d/2-1}$. Hence it can be continued as a H\"older continuous function on the entire real line with support in $[0,\frac{\delta^2}{2})$. Consequently, thanks to the Lemma~\ref{scatt08.lem-hilb}, $G_-(E\pm \imath 0)$ is also continuous w.r.t. $E$. In particular, it has a finite value at $E=E_-$. Since the other contributions to $G_0^\piso$ are regular near $E_-$, $G_0^\piso(E\pm\imath 0)$ is also a H\"older continuous function of $E$ near $E=E_-$.
\vspace{.2cm}
All contributions in equation~(\ref{scatt08.eq-part}) other than $G_-$ being analytic near $E=E_-$, it follows that any singularity of $G_0^\piso(E\pm\imath 0)$ near $E=E_-$ is coming from $G_-$. In addition, the imaginary part of the other contributions to $G_0^\piso$ vanishes on the real axis at $E=E_-$ since $H_0$ is selfadjoint. Hence the only contribution to its imaginary part is coming from $G_-$. Thanks to the Lemma~\ref{scatt08.lem-hilb} it follows from \eqref{eq-G-} that this imaginary part is exactly
$$\Im m \;G_0^\piso(E\pm\imath 0) \;=\;\mp\,\pi\,
(2(E-E_-))^{\frac{d}{2}-1}
|\Lambda|\,M_-^\piso\;
\frac{\det({\mathcal E}''(k_-^*))^{-\frac{1}{2}}\, |{\mathbb S}^{d-1}|}{(2\pi)^d}\;
+\;{\mathcal O} \bigl((E-E_-)^{\frac{1}{2}}\bigr) \;.
$$
\noindent On the other hand, the real part can be estimated by considering the subdominant contribution of $G_-$ given by the difference
\begin{equation}
\label{scatt08.eq-gminusderiv}
G_-(E\pm\imath 0)- G_-(E_-) \;=\; 2(E-E_-)\;
\frac{e^{\imath(n-m)\cdot k_-^*}}{(2\pi)^d}
\int_0^\infty de\;\frac{(2e)^{\frac{d}{2}-2}F(\sqrt{2e})}{E-E_- -e \pm\imath 0 }\;.
\end{equation}
\noindent The same argument as before shows that the integral defines a continuous function of $E$ on the real line if $d\geq 5$. Consequently, $E\in{\mathbb R}\mapsto G_0^\piso(E\pm\imath 0)$ is continuously differentiable in a small neighborhood of $E=E_-$. Since the derivative is negative outside of the spectrum of $H_0$, it follows that the claim (v) of the Proposition~\ref{scatt08.prop-Green3d} holds for $d\geq 5$.
\vspace{.2cm}
For $d=4$, equation~(\ref{scatt08.eq-gminusderiv}) shows that Lemma~\ref{scatt08.lem-hilbplus} applies. Indeed, the function $e\in[0,\infty)\mapsto F(\sqrt{2e}) \in{\mathbb C}$ is smooth because the Taylor expansion of $F(r)$ near the origin contains only even terms and $F(0)\neq 0$. Consequently
$$G_-(E\pm\imath 0)- G_-(E_-) \;=\;
(E-E_-)\ln(|E-E_-|)\;
\frac{e^{\imath(n-m)\cdot k_-^*}F(0)}{(2\pi)^{d-1}} \;+\; {\mathcal O}(|E-E_-|)\;.
$$
\noindent Since all other contributions to the real part of $G_0^\piso(E\pm\imath 0)$ are regular at $E=E_-$, it follows that
$$\Re e\; G_0^\piso(E\pm\imath 0) \,=\,G_0^\piso(E_-) +
|\Lambda|M_-^\piso\;
\frac{\det({\mathcal E}''(k_-^*))^{-1/2}\, |{\mathbb S}^{d-1}|}{(2\pi)^{d-1}}\,
(E-E_-)\ln(|E-E_-|)\,+\, {\mathcal O}(|E-E_-|)\,.
$$
\vspace{.1cm}
\noindent At last, for $d=3$, returning to the variable $r=\sqrt{2e}$, equation~(\ref{scatt08.eq-gminusderiv}) becomes
$$G_-(E\pm\imath 0)- G_-(E_-) \;=\; 2(E-E_-)\;
\frac{e^{\imath(n-m)\cdot k_-^*}}{(2\pi)^3}
\int_0^\infty dr\; \frac{F(r)}{E-E_- -\frac{r^2}{2} \pm\imath 0 }\;.
$$
\noindent The integral on the r.h.s. can be decomposed into two contributions
$$I(E) \;=\;
\int_0^\infty dr\;\frac{F(r)}{E-E_- -\frac{r^2}{2} \pm\imath 0 }\;=\;
I_1(E)\;+\;I_2(E)\;,
$$
\noindent with
\begin{equation}
\label{scatt08.eq-I12}
I_1(E)\;=\;
F(0)\,
\int_0^\infty dr\;\frac{1}{E-E_- -\frac{r^2}{2} \pm\imath 0 }\;,
\hspace{1cm}
I_2(E)\;=\;
\int_0^\infty dr\;\frac{F(r)-F(0)}{E-E_- -\frac{r^2}{2} \pm\imath 0 }\;.
\end{equation}
\noindent The first part can be computed explicitly to give
$$I_1(E) \;=\; \mp \,\imath \;\frac{\pi\,F(0)}{\sqrt{2(E-E_-)}}\;,
\hspace{2cm}
\mbox{\rm if}\;\; E>E_- \,.
$$
\noindent This part is singular and gives a nontrivial contribution to $G_0^\piso(E\pm\imath 0)$ of the form
$$G_0^\piso(E\pm\imath 0) \;=\;G_0^\piso(E_-)
\,\mp \,\imath\; \pi\, \sqrt{2(E-E_-)}\;
|\Lambda|\;M_-^\piso\;
\frac{\det({\mathcal E}''(k_-^*))^{-\frac{1}{2}}\, |{\mathbb S}^{d-1}|}{(2\pi)^d}\;
+\;\hat{I}_2(E)\,,
$$
\noindent where $\hat{I}_2(E)$ comes from the contribution of $I_2$. Since the matrix $M_-^\piso$ is a projection, the singularity does not contribute to the real part of this expression. On the other hand, the integral $I_2$ can be treated by using two remarks: (a) $F(r)-F(0) = {\mathcal O}(r^2)$ at $r=0$; (b) for $0 \leq E-E_-\leq \frac{\delta^2}{4}$, the contribution of the region $r>\delta$ is a smooth function of $E$ and thus produces no singularity. The contribution from $r\leq \delta$ is then regular at $r=0$ and, thanks to the Lemma~\ref{scatt08.lem-hilb}, continuous at $E=E_-$. This finishes the proof of Proposition~\ref{scatt08.prop-Green3d}.
\hfill $\Box$
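As an aside, the explicit value of $I_1$ in \eqref{scatt08.eq-I12} can be checked numerically: with the $+\imath\epsilon$ regularization and $F(0)=1$, the integral converges to a purely imaginary number, namely $-\imath\pi/\sqrt{2(E-E_-)}$, as $\epsilon\downarrow 0$. A minimal quadrature sketch (illustration only; the parameters are arbitrary):

```python
import math

def I1(gap, eps, R=100.0, n=400000):
    """Midpoint approximation of  int_0^R dr / (gap - r^2/2 + i*eps),
    where gap = E - E_- > 0; the +i*eps regularizes the pole at r = sqrt(2*gap)."""
    h = R / n
    s = 0.0 + 0.0j
    for k in range(n):
        r = (k + 0.5) * h
        s += 1.0 / (gap - 0.5 * r * r + 1j * eps)
    return s * h

val = I1(0.5, 1e-2)   # sqrt(2*gap) = 1, so the limit is -i*pi
```

The real part is small (the principal value of the integral vanishes in the limit $R\to\infty$), while the imaginary part is close to $-\pi$.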
\vspace{.3cm}
\subsection{Localized states in the REF representation}
\label{sec-ONB}
\noindent The REF representation of the localized state at site $m\in{\mathbb Z}^d$ is $\psi_m={\mathcal U}\Ff\,|m\rangle$. The states $(\psi_m)_{m\in{\mathbb Z}^d}$ form an orthonormal basis in $L^2({\mathbb R})\otimes L^2(\Sigma,\nu)$. More explicitly, they are given by
\begin{equation}
\label{scatt08.eq-ONbasis}
\psi_{m,b}(\sigma)\;=\;
\frac{1}{(2\pi)^{\frac{d}{2}}}\;
d_b(\sigma)\;
e^{\imath m\cdot \theta_b(\sigma)}\,,
\end{equation}
\noindent for any $\sigma\in\Sigma$ avoiding $\criM$.
It will be convenient below to consider $\psi_{m,b}$ as a state in $L^2(\Sigma,\nu)$. These restricted localized states are not normalized, but their norm is independent of $m$:
$$
\|\psi_{m,b}\|^2_{L^2(\Sigma,\nu)}\;=\;
\frac{1}{(2\pi)^d}\;
\int_\Sigma \nu(d\sigma)\;
|d_b(\sigma)|^2\,.
$$
\noindent This norm as well as scalar products between these states are linked to the resolvent.
\begin{lemma}
\label{scatt08.lem-DOS}
The following holds
$$\langle\psi_{n,b}|\psi_{m,b}\rangle_{L^2(\Sigma,\nu)}\;=\;
\frac{F(f^{-1}(b))}{\pi}\;
\langle n|
\,\mp\Im m\Bigl((f^{-1}(b)\pm\imath 0-H_0)^{-1}\Bigr)\,
|m\rangle\,.
$$
\end{lemma}
\noindent {\bf Proof:} Thanks to the coarea formula and the Plemelj-Privalov theorem (see Lemma~\ref{scatt08.lem-hilb})
\begin{eqnarray*}
\langle n|\,\Im m\bigl((E\pm\imath 0-H_0)^{-1}\bigr)\,|m\rangle
& = &
\frac{1}{2\imath}\int^{E_+}_{E_-} de\;
\left(\frac{1}{E\pm\imath 0-e}-\frac{1}{E\mp\imath 0-e}\right)\;\int_{\Sigma_{e}}
\frac{\nu_e(d\sigma)}{(2\pi)^d}\;
\frac{e^{\imath(m-n)\cdot \sigma}}{|\nabla {\mathcal E}(\sigma)|}
\\
& = &
\mp \pi\;
\int_{\Sigma_{E}}
\frac{\nu_E(d\sigma)}{(2\pi)^d}\;
\frac{1}{|\nabla {\mathcal E}(\sigma)|}\;e^{\imath(m-n)\cdot \sigma}
\,.
\end{eqnarray*}
\noindent With $E=f^{-1}(b)$ as before, the map $\theta_b:\Sigma\to\Sigma_E$ is a diffeomorphism. Thus the associated change of variables gives
\begin{eqnarray*}
\langle n|\,\Im m\bigl((E\pm\imath 0-H_0)^{-1}\bigr)\,|m\rangle
& = &
\mp \,\pi\;
\int_{\Sigma}
\frac{\nu(d\sigma)}{(2\pi)^d}\;
\bigl|\det(\theta'_b|_{T_\sigma\Sigma})\bigr|
\;\frac{1}{|\nabla {\mathcal E}(\theta_b(\sigma))|}
\;e^{\imath(m-n)\cdot \theta_b(\sigma)}
\\
& = &
\frac{\mp \,\pi}{F(f^{-1}(b))}\;
\int_{\Sigma}
\frac{\nu(d\sigma)}{(2\pi)^d}\;
\bigl|\det(\theta'_b|_{T_\sigma\Sigma})\bigr|
\;|{\widehat{X}}(\theta_b(\sigma))|
\;e^{\imath(m-n)\cdot \theta_b(\sigma)}\,.
\end{eqnarray*}
\noindent Now the formula follows from the definition of $\psi_{m,b}$ and \eqref{scatt08.eq-JacFac}.
\hfill $\Box$
\vspace{.2cm}
\begin{coro}
\label{scatt08.lem-partialiso} Let us introduce the operator ${\mathcal R}_b= \sum_{m\in\Lambda} |\psi_{m,b}\rangle\langle m|$ mapping $\ell^2(\Lambda)={\mathbb C}^{|\Lambda|}$ onto the subspace of $L^2(\Sigma,\nu)$ spanned by the $\left(\psi_{m,b}\right)_{m\in\Lambda}$. The range of its adjoint ${\mathcal R}_b^*= \sum_{m\in\Lambda} |m\rangle\langle \psi_{m,b}|$ is denoted by $\Ff_b\subset{\mathbb C}^{|\Lambda|}$. Further let $\piso_b$ be a partial isometry from $\Ff_b$ onto the subspace of $L^2(\Sigma,\nu)$ spanned by the $\left(\psi_{m,b}\right)_{m\in\Lambda}$.
\vspace{.1cm}
\noindent {\rm (i)} The following holds
$${\mathcal R}_b^\ast{\mathcal R}_b \;=\;
\frac{F(E)}{\pi}\;
\Im m\,G_0^\piso(E-\imath 0)\,,
\hspace{2cm}
b=f(E)\,.
$$
\vspace{.1cm}
\noindent {\rm (ii)} If $P_b$ denotes the orthogonal projection in ${\mathbb C}^{|\Lambda|}$ onto the subspace $\Ff_b$, then
$${\mathcal R}_b \;=\; \piso_b \,
\sqrt{
\frac{F(E)}{\pi}
}\;
\left(
\Im m\,G_0^\piso(E-\imath 0)
\right)^{\frac{1}{2}}\;
P_b\,,
\hspace{2cm}
b=f(E)\,.
$$
\vspace{.1cm}
\noindent {\rm (iii)} The map $b\in{\mathbb R}\mapsto {\mathcal R}_b\in {\mathcal B}\left({\mathbb C}^{|\Lambda|},L^2(\Sigma,\nu)\right)$ is norm continuous.
\end{coro}
\noindent {\bf Proof:} (i) is a re-phrasing of Lemma~\ref{scatt08.lem-DOS} and (ii) is just the usual polar decomposition. (iii) Since ${\mathcal R}_b$ has finite rank, the norm continuity follows from the strong continuity. In turn, the strong continuity follows from the continuity of the inner products $\langle\psi_{n,b}|\psi_{m,b}\rangle_{L^2(\Sigma,\nu)}$. The latter property follows from Lemma~\ref{scatt08.lem-DOS} and from the continuity of $F$, $f^{-1}$ and the imaginary part of the Green function (see Proposition~\ref{scatt08.prop-Green3d}), with respect to $E$ or to $b$.
\hfill $\Box$
\vspace{.2cm}
It is worth remarking that $P_b=\Pi_b^*\Pi_b$ and $\Pi_b=\Pi_b P_b$. Furthermore, $\Im m\,G_0^\piso(f^{-1}(b)\pm\imath 0)$ commutes with $P_b$. The next lemma is a technical result which will be needed to deal with threshold singularities in dimension $d=3$.
\begin{lemma}
\label{scatt08.lem-limitstate}
Let $d\geq 2$ and let the extrema of ${\mathcal E}$ be isotropic in the sense that ${\mathcal E}''(k^*_\pm)$ is a multiple of the identity matrix. Then
\begin{equation}
\label{scatt08.eq-limitstate}
\lim_{b\to\pm\infty}\;
\frac{1}{\|\psi_{m,b}\|_{L^2(\Sigma,\nu)}}\;
\psi_{m,b} \;=\;
e^{\imath m\cdot k^*_\pm}\;
\psi_\pm\;,
\end{equation}
where $\psi_\pm\in L^2(\Sigma,\nu)$ are normalized states given by
$$
\psi_\pm(\sigma) \;=\;
C\;\exp\left(\frac{1}{2}\;
\int_0^{\infty} du \;
\bigl(
\dive({\widehat{X}})(\theta_u(\sigma))\pm d
\bigr)\right)\;
|{\widehat{X}}(\sigma)|^{\frac{1}{2}}\,,
\qquad C>0\,.
$$
\end{lemma}
\noindent {\bf Proof:} Let us first argue that $\psi_\pm$ are well-defined and normalizable. The isotropy hypothesis and \eqref{scatt08.eq-divXvicinity2} imply $\mbox{\rm div}({\widehat{X}})(k^*_\pm+k) =\mp\,d+{\mathcal O}(|k|)$. But $\theta_u(\sigma)$ converges to $k^*_\pm$ as $u\to\pm\infty$ at an exponential rate. Therefore the integral in the exponential exists and hence $\psi_\pm$ are well-defined. Thanks to the definition \eqref{scatt08.eq-ONbasis} of $\psi_{m,b}$ and to equation~(\ref{scatt08.eq-JacFac}) the conclusion of the lemma follows.
\hfill $\Box$
\begin{rem}
\label{scatt08.rem-niso}
{\em Without the isotropy assumption that ${\mathcal E}''(k_\pm^*)$ is a multiple of the identity, the l.h.s. of \eqref{scatt08.eq-limitstate} does not converge to a state in $L^2(\Sigma,\nu)$.}
\hfill $\Box$
\end{rem}
Lemma~\ref{scatt08.lem-DOS} also implies that the states $\psi_{m,b}$ are in general not orthogonal in $L^2(\Sigma,\nu)$, and not even linearly independent, as the next results show.
\begin{lemma}
\label{scatt08.lem-typrank}
Using the notation of the {\rm Corollary~\ref{scatt08.lem-partialiso}}, one has:
\vspace{.1cm}
\noindent {\rm (i)} $\Ff_b= \Ran\bigl(\Im m\,G_0^\piso(E-\imath 0)\bigr)$ whenever $E_-<E<E_+$ and $ b=f(E)$.
\vspace{.1cm}
\noindent {\rm (ii)} In any interval of ${\mathbb R}$ not containing a critical value in $f({\mathcal E}(\cri))$, there is a discrete subset without accumulation points outside of which the dimension of $\Ff_b$ is constant.
\end{lemma}
\noindent {\bf Proof:} The first result follows directly from Lemma~\ref{scatt08.lem-DOS}: the range of ${\mathcal R}_b^*$ coincides with that of ${\mathcal R}_b^*{\mathcal R}_b$ and, since $F$ does not vanish on $(E_-,E_+)$, the latter is the range of $\Im m\,G_0^\piso(E-\imath 0)$. In particular,
$$\dim (\Ff_b) \;=\; \rank\bigl(\Im m\,G_0^\piso(E-\imath 0)\,\bigr)\;,
\hspace{2cm}
b=f(E)\,.
$$
As $E\mapsto \Im m\,G_0^\piso(E-\imath 0)$ is real-analytic away from the critical values ${\mathcal E}(\cri)$, the statement (ii) follows from analytic perturbation theory.
\hfill $\Box$
\vspace{.2cm}
The next question concerns whether the rank is indeed changing as a function of $E$. In addition, it is important to have examples leading to a non-maximal typical rank, because this will later allow the construction of embedded eigenvalues. The following result will help to construct such examples.
\begin{lemma}
\label{scatt08.lem-kernel}
With the hypothesis of {\rm Lemma~\ref{scatt08.lem-typrank}}, the orthogonal complement $\Ff_b^\perp$ in ${\mathbb C}^{|\Lambda|}$ is given by
$$\Ff_b^\perp \;=\; \Ker (\Im m\,G^\piso_0(E-\imath 0))\,,
\hspace{2cm}
b=f(E)\,.
$$
\noindent In addition, a vector $v= (v_m)_{m\in\Lambda}$ belongs to $\Ff_b^\perp$ {\rm (}with $b=f(E)${\rm )} if and only if its Fourier transform $\hat{v}(k)= \sum_{m\in\Lambda} v_m\,e^{\imath m\cdot k}$ is vanishing identically on the energy surface $\Sigma_E$.
\end{lemma}
\noindent {\bf Proof:} The first relation comes directly from the first result of Lemma~\ref{scatt08.lem-typrank}. As in the proof of Lemma~\ref{scatt08.lem-DOS},
$$\langle v | \Im m\,G^\piso_0(E-\imath 0)|v\rangle\;=\;
\pi\; \int_{\Sigma_{E}}
\frac{\nu_E(d\sigma)}{(2\pi)^d}\;
\frac{|\hat{v}(\sigma)|^2}{|\nabla {\mathcal E}(\sigma)|}\;.
$$
\noindent In particular, since $\Im m\,G^\piso_0(E-\imath 0) \geq 0$, it follows that $v\in\Ker \bigl(\Im m\,G^\piso_0(E-\imath 0)\bigr)$ if and only if $\hat{v}$ vanishes $\nu_E$-almost everywhere on $\Sigma_E$. But since $\hat{v}$ is a trigonometric polynomial, it has to vanish everywhere on $\Sigma_E$.
\hfill $\Box$
\begin{proposi}
\label{scatt08.prop-fperp}
If $E\in [E_-,E_+]$ is not a critical energy, a vector $v=(v_m)_{m\in\Lambda}$ belongs to $\Ff_b^\perp$, with $b=f(E)$, if and only if there is $w\in\ell^2({\mathbb Z}^d)$ such that $\Pi^* v=(E{\mathbf 1} -H_0)w$. Then $w$ admits a Fourier transform having the same degree of regularity as ${\mathcal E}$. More precisely, if ${\mathcal E}$ is of class $C^r$ with $r\in{\mathbb N}\cup\{\infty\}\cup\{\omega\}$, so is the Fourier transform of $w$. Moreover, if ${\mathcal E}$ is a trigonometric polynomial, then $w$ has a finite support and the previous claim applies to any $E\in (E_-,E_+)$.
\end{proposi}
\noindent {\bf Proof:} A direct calculation shows indeed that, if $\Pi^* v=(E{\mathbf 1} -H_0)w$ with some $w\in\ell^2({\mathbb Z}^d)$, then $\langle v|\Im m\,G_0^\piso(E-\imath 0)|v\rangle =0$. Conversely, if $v\in \Ff_b^\perp$, then $\hat{v}$ is a trigonometric polynomial vanishing on $\Sigma_E$ thanks to the Lemma~\ref{scatt08.lem-kernel}. Therefore, since $E$ is not critical, the energy surface $\Sigma_E$ is a smooth manifold and $\hat{w}(k)= \hat{v}(k)/(E-{\mathcal E}(k))$ is analytic in a neighborhood of this surface. Since it is analytic outside as well, the result follows. The Fourier coefficients of $\hat{w}$ define a vector $w\in\ell^2({\mathbb Z}^d)$, which decays exponentially fast at infinity by analyticity. In addition, if ${\mathcal E}$ is a trigonometric polynomial, it follows that $\hat{w}$ is a trigonometric polynomial as well by Hilbert's Nullstellensatz, which applies even if $E$ is critical. Therefore $w$ has finite support.
\hfill $\Box$
\vspace{.2cm}
The last proposition suggests examples of situations for which the kernel is trivial or not.
\begin{exam}
\label{scatt08.exam-1pt}
{\em If $\Lambda$ is reduced to one point, then $\rho^\piso(E)$ being positive for $E_-<E<E_+$ by Proposition~\ref{scatt08.prop-Green3d}, the kernel is trivial.}
\hfill $\Box$
\end{exam}
\begin{exam}
\label{scatt08.exam-Ffb}
{\em Let $H_0$ be the discrete Laplacian in dimension $d=2$ with band function ${\mathcal E}(k_1,k_2)=\cos(k_1)+\cos(k_2)$. If $\Lambda=\{(0,0),(-1,0),(0,-1), (1,0),(0,1),(1,1),(1,2),(2,1)\}$ and $\hat{w}(k)=e^{\imath k_1}-1$, then neither $H_0w$ nor $w$ are supported in $\Lambda$. Nevertheless $(\frac{1}{2}+H_0){w}$ is supported by $\Lambda$. Hence, for the energy $E=-\frac{1}{2}$ and $b=f(E)$, the set $\Ff_b^\perp$ is nontrivial.}
\hfill $\Box$
\end{exam}
\begin{exam}
\label{scatt08.exam-Ffb2}
{\em Again $H_0$ is the discrete Laplacian in dimension $d=2$. Here we choose the set $\Lambda=\{(0,0),(-1,0),(0,-1),(1,1),(1,2),(2,1)\}$. Then $\hat{w}(k)=e^{\imath k_1}e^{\imath k_2}-1$ is the Fourier transform of a vector $w$ supported in $\Lambda$, and so is ${\mathcal E}(k)\hat{w}(k)$. Therefore, for any $E$, the vector $\hat{v}=(E-{\mathcal E})\hat{w}$ is the Fourier transform of a vector supported in $\Lambda$, so that $\Ff_b^\perp$ is nontrivial for every $b=f(E)$.}
\hfill $\Box$
\end{exam}
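Example~\ref{scatt08.exam-Ffb2} can be verified by a direct computation with the hopping coefficients ${\mathcal E}_s=\frac{1}{2}$ for $|s|=1$, which is the normalization implied by ${\mathcal E}(k)=\cos(k_1)+\cos(k_2)$. The sketch below computes the support of $(E-H_0)w$ on ${\mathbb Z}^2$ and checks that it is contained in $\Lambda$:

```python
# Hopping coefficients of H_0 for E(k) = cos(k1) + cos(k2): E_s = 1/2 for |s| = 1
HOP = {(1, 0): 0.5, (-1, 0): 0.5, (0, 1): 0.5, (0, -1): 0.5}
LAM = {(0, 0), (-1, 0), (0, -1), (1, 1), (1, 2), (2, 1)}
w = {(1, 1): 1.0, (0, 0): -1.0}   # Fourier transform e^{i(k1+k2)} - 1

def support(E, w):
    """Support of (E - H_0) w on Z^2, with H_0 given by the HOP coefficients."""
    v = {x: E * c for x, c in w.items()}
    for x, c in w.items():
        for s, t in HOP.items():
            y = (x[0] + s[0], x[1] + s[1])
            v[y] = v.get(y, 0.0) - t * c   # subtract (H_0 w)(y)
    return {x for x, c in v.items() if abs(c) > 1e-12}
```

The cancellations at the sites $(1,0)$ and $(0,1)$ are what make the example work; a generic $w$, such as a single site mass, fails the inclusion.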
If ${\mathcal E}$ is a trigonometric polynomial, more can be said. In such a case there is a finite set $S\subset{\mathbb Z}^d$ such that ${\mathcal E}(k)= \sum_{s\in {\mathbb Z}^d} {\mathcal E}_s e^{\imath s\cdot k}$ with ${\mathcal E}_s\neq 0\iff s\in S$. This set $S$ is called the support of $H_0$. Since $H_0$ is self-adjoint, ${\mathcal E}$ is real-valued so that ${\mathcal E}_{-s}=\overline{{\mathcal E}_s}$ for all $s\in S$. In particular, $S$ is invariant under the parity map $s\mapsto -s$. The $S$-interior of a finite set will now be defined as the set of its points that cannot jump to the outside using hopping terms from $H_0$:
\begin{defini}
\label{scatt08.def-Sint}
Let $\Lambda$ and $S$ be subsets of ${\mathbb Z}^d$. Then the $S$-interior $\Lambda^S$ of $\Lambda$ is the set of $x\in \Lambda$ such that $x+S\subset \Lambda$.
\end{defini}
\begin{exam}
\label{scatt08.exam-int}
{\em If ${\mathcal E}$ is a trigonometric polynomial with support $S$ and if $\Lambda$ is a finite set with nonempty $S$-interior, then any $w$ supported by $\Lambda^S$ satisfies $(E-H_0)w(x) =0$ outside $\Lambda$. Hence the dimension of $\Ff_b^\perp$, with $b=f(E)$, is at least equal to the number of points in the $S$-interior of $\Lambda$.}
\hfill $\Box$
\end{exam}
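Definition~\ref{scatt08.def-Sint} is straightforward to implement. The sketch below computes the $S$-interior for the support $S$ of the two-dimensional discrete Laplacian and a $3\times 3$ square, whose $S$-interior reduces to the single center point:

```python
def s_interior(lam, S):
    """S-interior of a finite set: the points x of lam with x + S inside lam."""
    lam = set(lam)
    return {x for x in lam
            if all((x[0] + s[0], x[1] + s[1]) in lam for s in S)}

S = [(1, 0), (-1, 0), (0, 1), (0, -1)]          # support of the 2d Laplacian
square = [(i, j) for i in range(3) for j in range(3)]
```

In the terms of Example~\ref{scatt08.exam-int}, this single interior point already forces $\dim(\Ff_b^\perp)\geq 1$ for every $b$.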
\begin{proposi}
\label{scatt08.th-Ffbpoly}
Let $\Lambda\subset{\mathbb Z}^d$ be finite and let ${\mathcal E}$ be a trigonometric polynomial. Then there is a finite subset ${\mathscr V}={\mathscr V}(\Lambda)$ of the spectrum of $H_0$ such that $\dim(\Ff_b^\perp)$ is constant for $E=f^{-1}(b)$ outside of ${\mathscr V}$.
\end{proposi}
The proof of this proposition requires several steps that are described in the next four subsections.
\subsubsection{Prime vectors and convexity}
\label{scatt08.sssect-conv}
\begin{lemma}[Prime vectors]
\label{scatt08.lem-prime}
A vector $a=(a_1,\ldots,a_d)\in{\mathbb Z}^d$ is called prime if it satisfies one of the following equivalent definitions:
\vspace{.1cm}
\noindent {\rm (i)} Any $\lambda >0$ such that $\lambda a\in {\mathbb Z}^d$ must satisfy $\lambda \geq 1$.
\vspace{.1cm}
\noindent {\rm (ii)} The greatest common divisor of the coordinates of $a$ is equal to $1$.
\vspace{.1cm}
\noindent {\rm (iii)} The map $\phi_a: x\in{\mathbb Z}^d\mapsto a\cdot x\in {\mathbb Z}$ is onto.
\end{lemma}
\noindent {\bf Proof of the equivalence: } (i)$\Rightarrow$(ii) Let $p\geq 1$ be the greatest common divisor of the coordinates of $a$. It follows that $a/p\in{\mathbb Z}^d$. Therefore $1/p \geq 1$ implying that $p=1$.
\vspace{.1cm}
\noindent (ii)$\Rightarrow$(iii) Let the greatest common divisor of the coordinates of $a$ be equal to one. The map $\phi_a$ is a group homomorphism. In particular, its image is a subgroup of ${\mathbb Z}$. Therefore there is an integer $p\geq 1$ such that this image coincides with $p{\mathbb Z}$. The coordinates of $a$ are given by $a_i=\phi_a(e_i)$, where $\{e_1,\ldots,e_d\}$ denotes the standard basis of ${\mathbb Z}^d$. Thus, there are integers $b_i$ such that $a_i=pb_i$ for all $i$'s. Since the greatest common divisor of the coordinates is $1$, it follows that $p=1$.
\vspace{.1cm}
\noindent (iii)$\Rightarrow$(i) Let $a$ be such that $\phi_a$ is onto. Then there is $y\in{\mathbb Z}^d$ such that $\phi_a(y)=1$. Let $\lambda >0$ satisfy $\lambda a\in {\mathbb Z}^d$. It follows that $\lambda \in {\mathbb Q}$ and that $\phi_{\lambda a}(x) = \lambda \phi_a(x)\in {\mathbb Z}$. In particular, $\phi_{\lambda a}(y)=\lambda \in {\mathbb Z}$, showing that $\lambda \geq 1$. Hence (i) holds.
\hfill $\Box$
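The equivalence (ii)$\iff$(iii) is constructive: folding the extended Euclidean algorithm over the coordinates produces an explicit $x\in{\mathbb Z}^d$ with $a\cdot x=\gcd(a_1,\ldots,a_d)$, hence a preimage of $1$ under $\phi_a$ when $a$ is prime. A minimal sketch:

```python
def ext_gcd(p, q):
    """Return (g, s, t) with s*p + t*q = g = gcd(p, q)."""
    if q == 0:
        return abs(p), (1 if p >= 0 else -1), 0
    g, s, t = ext_gcd(q, p % q)
    return g, t, s - (p // q) * t

def bezout_vector(a):
    """Return (gcd(a), x) with x in Z^d and a . x = gcd(a), by folding the
    extended Euclidean algorithm over the coordinates of a."""
    g, x = abs(a[0]), [1 if a[0] >= 0 else -1]
    for ai in a[1:]:
        g, s, t = ext_gcd(g, ai)
        x = [s * c for c in x] + [t]
    return g, x

g, x = bezout_vector((6, 10, 15))   # gcd = 1: the vector is prime
```

No pair of coordinates of $(6,10,15)$ is coprime, yet the full vector is prime, which illustrates why the gcd must be taken over all coordinates at once.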
\begin{defini}
\label{scatt08.def-halfplane}
A {\rm (}positive{\rm )} half-plane in ${\mathbb Z}^d$ is a set of the form ${\mathscr H}_{a,m}^+=
\{x\in{\mathbb Z}^d\,|\, a\cdot x \geq m\}$ where $a$ is a prime vector in ${\mathbb Z}^d$ and $m\in{\mathbb Z}$. The associated {\rm (}oriented{\rm )} affine hyperplane ${\mathscr H}_{a,m}$ is defined similarly as ${\mathscr H}_{a,m}=\{x\in{\mathbb Z}^d\,|\, a\cdot x =m\}$.
\end{defini}
\begin{defini}
\label{scatt08.def-polytope}
Let $\Lambda\subset {\mathbb Z}^d$.
\vspace{.1cm}
\noindent {\rm (i)} A prime vector $a$ is a $\Lambda$-direction if there is $m\in{\mathbb Z}^d$ such that $\Lambda\subset {\mathscr H}_{a,m}^+$.
\vspace{.1cm}
\noindent {\rm (ii)} An oriented affine hyperplane ${\mathscr H}_{a,m}$ {\rm (}resp. half-plane ${\mathscr H}_{a,m}^+${\rm )} is called a contact hyperplane {\rm (}resp.
a contact half-plane{\rm )} for $\Lambda$ whenever $\Lambda\subset {\mathscr H}_{a,m}^+$ and $\Lambda\cap{\mathscr H}_{a,m}\neq \emptyset$.
\vspace{.1cm}
\noindent {\rm (iii)} The convex hull of $\Lambda$, denoted by $\Conv(\Lambda)$, is the intersection of all its contact half-planes. If
no prime vector is a $\Lambda$-direction, then $\Conv(\Lambda)={\mathbb Z}^d$.
\vspace{.1cm}
\noindent {\rm (iv)} $\Lambda$ is convex whenever it coincides with its convex hull.
\vspace{.1cm}
\noindent {\rm (v)} A finite convex set is called a polytope.
\end{defini}
It is easy to check that a polytope has a finite number of contact hyperplanes.
\begin{lemma}
\label{scatt08.lem-sldz}
Let $a$ be a prime vector in ${\mathbb Z}^d$. Then there is a matrix $A\in \SL(d,{\mathbb Z})$, such that $\phi_a\circ A(n,y)=n$ for all $(n,y)\in {\mathbb Z}\times {\mathbb Z}^{d-1}$.
\end{lemma}
\noindent {\bf Proof:} Thanks to Lemma~\ref{scatt08.lem-prime}, there is a vector $b_1\in{\mathbb Z}^d$ such that $\phi_a(b_1)=1$. On the other hand, $\Ker(\phi_a)$ is a subgroup of ${\mathbb Z}^d$ and therefore it is free. In particular, it admits a basis $\{b_2,\ldots,b_d\}$ (minimal set of generators). Clearly the vectors $b_j$ can be chosen to be prime. Then let $A$ be the $d\times d$ matrix with columns given by the family $\{b_1,\ldots,b_d\}$. By construction, $A$ has integer coefficients. Moreover, $b_1 =Ae_1$ and therefore $\phi_a(Ae_1)=1$. In addition, $\phi_a(Ae_j)=\phi_a(b_j)=0$ whenever $j\geq 2$. Consequently, if $y\in {\mathbb Z}^d$ is such that $Ay=0$, then $y_1=0$ and $\sum_{j=2}^d y_jb_j=0$. Since $\{b_2,\ldots,b_d\}$ is a basis of $\Ker(\phi_a)$, this implies that $y_j=0$ for all $j$'s. Hence $A$ is one-to-one. On the other hand, if $x\in{\mathbb Z}^d$, then $x-\phi_a(x) b_1\in\Ker({\phi_a})$. Therefore there are $y_2,\ldots,y_d\in {\mathbb Z}$ such that $x-\phi_a(x) b_1 = \sum_{j=2}^d y_jb_j$. Setting $y_1=\phi_a(x)$ and $y=(y_1,\ldots,y_d)$ it follows that $x=Ay$. Hence $A$ is also onto. Therefore the matrix $A$ is invertible and its inverse also has integer entries. In particular, changing the sign of one of the $b_j$'s if necessary, $\det(A)= 1$.
\hfill $\Box$
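In dimension $d=2$ the matrix $A$ of Lemma~\ref{scatt08.lem-sldz} can be written down explicitly: take as first column a B\'ezout vector $b_1$ for $a=(a_1,a_2)$ and as second column $b_2=(-a_2,a_1)$, which spans $\Ker(\phi_a)$; then $\det A=a_1b_{1,1}+a_2b_{1,2}=\phi_a(b_1)=1$ automatically. A small sketch (illustrative, not part of the proof):

```python
def ext_gcd(a, b):
    # extended Euclid: (g, x, y) with a*x + b*y == g
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

def sl2_for(a):
    # columns: a Bezout vector b1 (so phi_a(b1) = 1) and b2 = (-a2, a1)
    # spanning Ker(phi_a); the matrix is returned as a tuple of rows
    g, x, y = ext_gcd(a[0], a[1])
    assert g == 1, "a must be a prime vector"
    return ((x, -a[1]), (y, a[0]))

def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

a = (2, 3)
A = sl2_for(a)
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
```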
\subsubsection{The $A$-Fourier transform}
\label{scatt08.sssect-AFourier}
\noindent Let $s\in{\mathbb Z}^d$ and let $A\in \SL(d,{\mathbb Z})$. Then let $T_s$ and $U_A$ be the operators acting on $\ell^2({\mathbb Z}^d)$ defined by
$$\left(T_s\psi\right)(x) \;=\;
\psi(x-s)\,,
\hspace{1cm}
\left(U_A\psi\right)(x) \;=\;
\psi(A^{-1}x)\,,
\hspace{1cm}
\psi\in \ell^2({\mathbb Z}^d)\,.
$$
\noindent Then both $T_s$ and $U_A$ are unitary operators. Moreover, if $s,t\in {\mathbb Z}^d$, one has $T_{s+t}=T_sT_t$ and $T_0={\mathbf 1}$. In a similar way, if $A,B\in \SL(d,{\mathbb Z})$ then $U_AU_B= U_{AB}$ and $U_{\rm id}=\one$. In particular, $(U_A)^{-1}=U_{A^{-1}}= U_A^\ast$. In addition, $U_AT_sU_A^{-1}= T_{As}$ leading to
$$H_0\;=\; \sum_{s\in S} {\mathcal E}_s\,T_s
\hspace{1cm}\Longrightarrow \hspace{1cm}
U_AH_0U_A^{-1}\;=\;
\sum_{s\in AS} {\mathcal E}_{A^{-1}s}\,T_s\,.
$$
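The covariance relation $U_AT_sU_A^{-1}=T_{As}$ used above can be checked pointwise, since $(U_AT_sU_A^{-1}\psi)(x)=\psi(A(A^{-1}x-s))=\psi(x-As)$. A short numerical sanity check of this identity (illustrative only; the test function $\psi$ is arbitrary):

```python
import random

A = ((2, 1), (1, 1))          # an element of SL(2, Z), det = 1
Ainv = ((1, -1), (-1, 2))     # its integer inverse

def apply(M, x):
    return (M[0][0] * x[0] + M[0][1] * x[1], M[1][0] * x[0] + M[1][1] * x[1])

def psi(x):
    # an arbitrary test function on Z^2
    return (37 * x[0] + 101 * x[1]) % 257

def U(M, Minv, f):
    # (U_M f)(x) = f(M^{-1} x)
    return lambda x: f(apply(Minv, x))

def T(s, f):
    # (T_s f)(x) = f(x - s)
    return lambda x: f((x[0] - s[0], x[1] - s[1]))

random.seed(1)
ok = True
for _ in range(100):
    s = (random.randint(-5, 5), random.randint(-5, 5))
    x = (random.randint(-20, 20), random.randint(-20, 20))
    lhs = U(A, Ainv, T(s, U(Ainv, A, psi)))(x)   # U_A T_s U_A^{-1} psi
    rhs = T(apply(A, s), psi)(x)                 # T_{As} psi
    ok = ok and (lhs == rhs)
```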
\noindent Thus $U_A$ changes the support of $H_0$ from $S$ into $AS$. Let now $a$ be a prime vector and let $A\in\SL(d,{\mathbb Z})$ be chosen to satisfy $\phi_a(Ae_1)=1$. The partial Fourier transform $\Ff_A$ is the unitary transformation from $\ell^2({\mathbb Z}^d)$ into $\ell^2({\mathbb Z})\otimes L^2({\mathbb T}^{d-1})$ defined by
$$(\Ff_A\psi)_n(p) \;=\;
\sum_{y\in{\mathbb Z}^{d-1}} \psi\big(A(n,y)\big)\,e^{\imath p\cdot y}\,,
\hspace{2cm}
\psi\in \ell^2({\mathbb Z}^d)\;,\;\;p\in{\mathbb T}^{d-1}\;.
$$
\noindent It follows that
$$(\Ff_A H_0\psi)_n(p) \;=\;
\sum_{r\in \phi_a(S)} {\mathcal E}_r^A(p)
(\Ff_A\psi)_{n-r}(p)\,,
\hspace{2cm}
{\mathcal E}_r^A(p)\;=\;
\sum_{t\in{\mathbb Z}^{d-1},\, (r,t)\in A^{-1}S}
{\mathcal E}_{A(r,t)} e^{\imath p\cdot t}\,.
$$
\noindent Since $S$ is finite, each of the ${\mathcal E}_r^A$'s is a trigonometric polynomial. Moreover, $S$ is invariant under the reflection $s\mapsto -s$, so that, setting $r_a(S)= \max\phi_a(S)$, one has $\phi_a(S)\subset [-r_a,r_a]$ and both $\pm r_a\in \phi_a(S)$. Consequently, since $\phi_a(A(r,t))=r$, it follows that ${\mathcal E}_r^A\neq 0$ if and only if $r\in\phi_a(S)$. In particular, ${\mathcal E}_{\pm r_a}^A\neq 0$.
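As an illustration of the coefficients ${\mathcal E}_r^A$, consider the nearest-neighbour Laplacian on ${\mathbb Z}^2$ (all hopping amplitudes equal to $1$) with $a=(1,0)$ and $A=\mathrm{id}$; then $\phi_a(S)=\{-1,0,1\}$, ${\mathcal E}^A_{\pm 1}(p)=1$ and ${\mathcal E}^A_0(p)=2\cos p$. A quick check under these assumptions (illustrative only):

```python
import cmath, math

S = [(1, 0), (-1, 0), (0, 1), (0, -1)]   # support of the nearest-neighbour H_0 on Z^2
E = {s: 1.0 for s in S}                  # hopping amplitudes, all equal to 1

def E_r(r, p):
    # E_r^A(p) = sum over t with (r, t) in S of E_{(r,t)} e^{i p t}  (here A = id)
    return sum(E[(r, t)] * cmath.exp(1j * p * t) for t in (-1, 0, 1) if (r, t) in E)

p = 0.3
vals = {r: E_r(r, p) for r in (-1, 0, 1)}
```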
\subsubsection{The case of convex $\Lambda$}
\label{scatt08.sssect-Lconv}
\noindent From the definition of the $S$-interior, it follows immediately that the $S$-interior of a convex set $\Lambda$ coincides with the intersection $\Lambda^S= \bigcap_{(a,m)}{\mathscr H}_{a,m+r_a(S)}^+$ over the pairs $(a,m)$ such that ${\mathscr H}_{a,m}^+$ is a contact half-plane for $\Lambda$. This leads to the following result.
\begin{proposi}
\label{scatt08.prop-Ffb}
Let $\Lambda\subset {\mathbb Z}^d$ be a polytope and let $S$ be the support of $H_0$. Then the equation $(E-H_0)w(x) =0$ for $x\notin \Lambda$ admits a solution $w$ with finite support if and only if $w$ is supported by the $S$-interior of $\Lambda$.
\end{proposi}
\noindent {\bf Proof:} Let $a\in{\mathbb Z}^d$ be a prime vector and $m\in{\mathbb Z}$ be such that ${\mathscr H}_{a,m}$ is an oriented contact hyperplane of $\Lambda$. Let also $A\in \SL(d,{\mathbb Z})$ be chosen such that $\phi_a(Ae_1)=1$. Then for $w\in\ell^2({\mathbb Z}^d)$ let $w_n(p)$ denote the partial Fourier transform $(\Ff_Aw)_n(p)$. Since $\Lambda\subset{\mathscr H}_{a,m}^+$, the function $w$ satisfies $(E-H_0)w(x)=0$ for $\phi_a(x)< m$. As $\phi_a(S)\subset [-r_a(S),r_a(S)]$, taking the partial Fourier transform gives
\begin{equation}
\label{scatt08.eq-Fba}
\sum_{|r|\leq r_a(S)} \left(E\,\delta_{r,0}-{\mathcal E}_r^A(p)\right)
w_{n-r}(p)\;=\;0\,,
\hspace{2cm}
\forall\; n<m\,.
\end{equation}
\noindent In the following $r_a(S)$ will be denoted by $r_a$. In particular, ${\mathcal E}_{-r_a}^A\neq 0$. Since $w$ is finitely supported, there is $N\in{\mathbb N}$ such that $w_n=0$ for $n<-N$. In addition, each component $w_n(p)$ is a trigonometric polynomial in $p$. Writing equation~(\ref{scatt08.eq-Fba}) for $n=-N-r_a$ leads to
$${\mathcal E}_{-r_a}^A(p) w_{-N}(p)\;=\;0\,
\hspace{1cm}\Longrightarrow\hspace{1cm}
w_{-N}\;=\;0\,.
$$
\noindent Proceeding to write equation~(\ref{scatt08.eq-Fba}) for $n=-N-r_a+l$ with $l=1,\ldots, m-1+N+r_a$ gives, by the same argument, $w_{n}=0$ for all $n<m+r_a$. Hence the support of $w$ is contained in the half-plane ${\mathscr H}_{a,m+r_a}^+$. Since this is true for any contact hyperplane ${\mathscr H}_{a,m}$, the support of $w$ is contained in the $S$-interior of $\Lambda$. Conversely, if $w$ is supported by $\Lambda^S$, then $(E-H_0)w$ is supported by $\Lambda$.
\hfill $\Box$
\subsubsection{Conclusion of the proof of Proposition~\ref{scatt08.th-Ffbpoly}}
\label{scatt08.sssect-Theopoly}
\begin{comment}
\noindent Before completing the proof of Proposition~\ref{scatt08.th-Ffbpoly}, the following result will be needed.
\begin{lemma}
\label{scatt08.lem-constker}
Let $I$ be an interval of the real line. Let $s\in I\mapsto A(s)\in \mbox{\rm Mat}(n\times n,{\mathbb C})$ be a polynomial map. Then the dimension of the kernel of $A$ is constant away from a finite subset of $I$.
\end{lemma}
\noindent {\bf Proof:} First, $\Ker(A)=\Ker(A^* A)$. Therefore there is no loss of generality in assuming that $A$ is a positive matrix for $s\in I$. If so, the projection onto the kernel is given by $\chi_{\{0\}}(A)$ where $\chi_{\{0\}}$ is the characteristic function of $\{0\}\subset {\mathbb R}$. Using the same argument as in the proof of Lemma~\ref{scatt08.lem-typrank}, it follows that there is a sequence $U_0\subset U_1\subset \cdots \subset U_n=I$ of open subsets of $I$ such that $s\in U_j\,\iff\, \dim(\Ker(A(s)))\leq j$. Let $d_m$ denote the smallest $j$ such that $U_j\neq \emptyset$. Hence $d(s)\geq d_m$ for all $s\in I$. It follows that, for $s\in U_{d_m}$, the kernel of $A$ has dimension $d_m$ so that there is a polynomial $D(z,s)\in {\mathbb C}[z,x]$ such that (i) $\det (z-A(s))= z^{d_m}D(z,s)$ and (ii) $p(s)= D(0,s)\neq 0$. Therefore $p$ is a polynomial in $s$ which is not identically zero on the open set $U_{d_m}$. Hence the set of zeroes of $p$ is finite and therefore $U_{d_m}$ is the complement of a finite set, proving the claim.
\hfill $\Box$
\vspace{.2cm}
\end{comment}
\noindent {\bf Proof of Proposition~\ref{scatt08.th-Ffbpoly}:} Since $\Lambda\subset \Conv(\Lambda)$, the equation $(E-H_0)w(x)=0$ is satisfied for $x\notin \Conv(\Lambda)$. Thanks to Proposition~\ref{scatt08.prop-Ffb}, it follows that $w$ is supported in the $S$-interior of $\Conv(\Lambda)$. Let $R$ be the orthogonal projection on the subspace $\ell^2(\Conv(\Lambda)^S)$ and let $P$ be the orthogonal projection on $\ell^2(\Lambda)$. Then both $P$ and $R$ have finite rank. In addition, the previous equation is satisfied if and only if $w\in \Ker(({\mathbf 1} -P)(E-H_0)R)=\Ker(R(E-H_0)({\mathbf 1} -P)(E-H_0)R)$. The matrix $A(E)= R(E-H_0)({\mathbf 1} -P)(E-H_0)R$ is finite dimensional, acts on $\ell^2(\Conv(\Lambda)^S)$ and is a polynomial in the variable $E\in [E_-,E_+]$. Therefore its kernel has a constant dimension away from a finite set by analytic perturbation theory.
\hfill $\Box$
\vspace{.3cm}
\section{Scattering by a finite range perturbation}
\label{scatt08.sect-finiteRange}
\noindent This section is dedicated to the scattering of a lattice particle by a finite range perturbation. In the first part, the Green matrix of the perturbed Hamiltonian will be investigated; in the second part, various formulas will be derived for the wave operators, the scattering matrix and the time delay operator. Finally, the Levinson theorem will be proved.
\vspace{.1cm}
Let $\Lambda\subset{\mathbb Z}^d$ be a finite subset, with $|\Lambda|$ points. Let $\piso:\ell^2({\mathbb Z}^d)\to{\mathbb C}^{|\Lambda|}$ be the corresponding partial isometry (see the introduction of Section~\ref{scatt08.ssect-LAP}). The perturbation will be a finite rank selfadjoint operator $V:\ell^2({\mathbb Z}^d)\to\ell^2({\mathbb Z}^d)$, supported on $\Lambda$, namely $V=\piso^*\piso V\piso^*\piso$. Hence $V$ is encoded in the $|\Lambda|\times|\Lambda|$ matrix $V^\piso=\piso V\piso^*$. For convenience, $V^\piso$ will be assumed to be invertible (this hypothesis can be dropped if the kernel is eliminated like in Section~\ref{sec-ONB}). The perturbed Hamiltonian describing the scattering is then $H=H_0+V$. A typical example for a local perturbation is a potential with support $\Lambda$, namely $V=\sum_{n\in\Lambda}v_n\,|n\rangle\langle n|$ with $v_n\in{\mathbb R}\backslash\{0\}$.
\vspace{.2cm}
\subsection{Green function}
\label{scatt08.ssect-Greenspec}
Let $G^\piso(z)=\piso (z-H)^{-1}\piso^*$ be the Green matrix of the perturbed Hamiltonian. As for the unperturbed case, it is also a Herglotz matrix which is invertible for $\Im m(z)\neq 0$. The following formulas are well-known.
\begin{lemma}
\label{scatt08.lem-Greenperturb}
For $z\in{\mathbb C}\setminus{\mathbb R}$,
\begin{equation}
\label{scatt08.eq-Greenperturb1}
G^\piso(z)\;=\;
\bigl(G_0^\piso(z)^{-1}-V^\piso\bigr)^{-1}\;=\;
\bigl(\one-G_0^\piso(z)V^\piso\bigr)^{-1}G_0^\piso(z)\,.
\end{equation}
\noindent Let the $T$-matrix be defined by
\begin{equation}
\label{scatt08.eq-Tmatrix}
T(z)\;=\;\piso^*\,T^\piso(z)\,\piso\,,
\hspace{2cm}
T^\piso(z)\;=\;
\bigl(\one-V^\piso G_0^\piso(z)\bigr)^{-1}V^\piso\,.
\end{equation}
\noindent Then
\begin{equation}
\label{scatt08.eq-Greenperturb2}
\frac{1}{z-H}\;=\;
\frac{1}{z-H_0}+
\frac{1}{z-H_0}\,T(z)\,\frac{1}{z-H_0}\,.
\end{equation}
\end{lemma}
\noindent {\bf Proof:} The resolvent identity yields
$$\frac{1}{z-H_0}\;=\;
\frac{1}{z-H}-\frac{1}{z-H_0}\,V\,\frac{1}{z-H}\;=\;
\left({\bf 1}-\frac{1}{z-H_0}\,V\right)\,\frac{1}{z-H}\,.
$$
\noindent Applying $\piso$ and $\piso^*$ from the left and right respectively gives
$$G_0^\piso(z)\;=\;
\bigl({\bf 1}-G_0^\piso(z)V^\piso\bigr)G^\piso(z)\,.
$$
\noindent Now $G_0^\piso(z)$ is Herglotz and thus invertible since $z\notin {\mathbb R}$. Hence, $G_0^\piso(z)^{-1}-V^\piso$ is also Herglotz and invertible, leading to the invertibility of ${\bf 1}-G_0^\piso(z)V^\piso= G_0^\piso(z)(G_0^\piso(z)^{-1}-V^\piso)$. To prove \eqref{scatt08.eq-Greenperturb2}, the resolvent identity gives the factor ${\mathbf 1}-\frac{1}{z-H_0}\,V$. This operator is invertible: it is a finite rank perturbation of ${\mathbf 1}$, and any element of its kernel would be an eigenvector of $H=H_0+V$ with eigenvalue $z\notin{\mathbb R}$, which is impossible for the selfadjoint $H$; hence the kernel is trivial. Using the identity $({\mathbf 1}-A)^{-1} A =({\mathbf 1}-A)^{-1}(A-{\mathbf 1}+{\mathbf 1})= ({\mathbf 1}-A)^{-1} -{\mathbf 1}$, the inverse can be written as
$$ \left({\bf 1}-\frac{1}{z-H_0}\,V\right)^{-1}\;=\;
{\bf 1}+
\frac{1}{z-H_0}\,
\piso^*\,V^\piso\,\piso\,
\left({\bf 1}-\frac{1}{z-H_0}\,V\right)^{-1}\,.
$$
\noindent Since
$$\piso \left({\bf 1}-\frac{1}{z-H_0}\,V\right)\;=\;
\left({\bf 1}-G^\piso_0(z)\,V^\piso\right)\piso\;,
$$
\noindent it follows that
$$\piso\,\left({\bf 1}-\frac{1}{z-H_0}\,V\right)^{-1}\;=\;
\left({\bf 1}-G^\piso_0(z)\,V^\piso\right)^{-1}\,\piso\,.
$$
\noindent When combined with the resolvent identity this completes the proof.
\hfill $\Box$
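Both identities of Lemma~\ref{scatt08.lem-Greenperturb} are purely algebraic and can be sanity-checked on finite matrices, replacing $\ell^2({\mathbb Z}^d)$ by ${\mathbb C}^N$. The following sketch (illustrative only; the finite-dimensional setting and the concrete numbers are assumptions made for testing) verifies \eqref{scatt08.eq-Greenperturb1} and \eqref{scatt08.eq-Greenperturb2}:

```python
import random

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv(M):
    # Gauss-Jordan inverse of a complex square matrix, with partial pivoting
    n = len(M)
    A = [[complex(x) for x in row] + [1.0 + 0j if i == j else 0j for j in range(n)]
         for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        A[c] = [x / A[c][c] for x in A[c]]
        for r in range(n):
            if r != c:
                f = A[r][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [row[n:] for row in A]

random.seed(0)
N, Lam = 5, [0, 2]                       # 5 sites; the perturbation lives on Lam
H0 = [[0j] * N for _ in range(N)]
for i in range(N):
    for j in range(i, N):
        t = random.uniform(-1, 1)        # a random real symmetric H0
        H0[i][j] += t
        if i != j:
            H0[j][i] += t
V = [[0j] * N for _ in range(N)]         # V = Pi* V^Pi Pi with invertible V^Pi
V[0][0], V[2][2] = 0.7, -1.3
V[0][2] = V[2][0] = 0.4
z = 0.3 + 0.8j
R0 = inv([[(z if i == j else 0) - H0[i][j] for j in range(N)] for i in range(N)])
R = inv([[(z if i == j else 0) - H0[i][j] - V[i][j] for j in range(N)] for i in range(N)])
G0P = [[R0[i][j] for j in Lam] for i in Lam]      # G_0^Pi(z)
GP = [[R[i][j] for j in Lam] for i in Lam]        # G^Pi(z)
VP = [[V[i][j] for j in Lam] for i in Lam]        # V^Pi
# first identity: G^Pi = ((G_0^Pi)^{-1} - V^Pi)^{-1}
G0Pinv = inv(G0P)
pred = inv([[G0Pinv[i][j] - VP[i][j] for j in range(2)] for i in range(2)])
err1 = max(abs(GP[i][j] - pred[i][j]) for i in range(2) for j in range(2))
# second identity: (z-H)^{-1} = (z-H0)^{-1} + (z-H0)^{-1} T(z) (z-H0)^{-1}
VG = mul(VP, G0P)
TP = mul(inv([[(1 if i == j else 0) - VG[i][j] for j in range(2)] for i in range(2)]), VP)
T = [[0j] * N for _ in range(N)]
for i, ai in enumerate(Lam):
    for j, bj in enumerate(Lam):
        T[ai][bj] = TP[i][j]
C = mul(mul(R0, T), R0)
err2 = max(abs(R[i][j] - R0[i][j] - C[i][j]) for i in range(N) for j in range(N))
```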
\vspace{.2cm}
\subsection{Spectral analysis}
\label{scatt08.ssect-spec}
Because the perturbation has finite range, the essential spectrum of $H$ is given by the essential spectrum of $H_0$. However, $H$ may have some discrete spectrum, which, since $H$ is selfadjoint, is given by the simple poles of the resolvent on the real axis. Thanks to Proposition~\ref{scatt08.prop-Green3d}, it follows from equation~\eqref{scatt08.eq-Greenperturb1} that the only way to get a polar singularity in the Green matrix of $H$ is for ${\bf 1}-G^\piso_0(z)\,V^\piso=\left((V^\piso)^{-1}-G^\piso_0(z)\right)V^\piso$ to have a nontrivial kernel (recall that we restrict ourselves to the case of invertible $V^\piso$). This can be analyzed using the determinant of ${\bf 1}-G^\piso_0(z)\,V^\piso$ which is also called the {\em perturbation determinant} \cite{Yaf}. Furthermore, if $E$ is an eigenvalue of $H$,
\begin{equation}
\label{eq-embedcount}
\mbox{\rm multiplicity of}\;
E\;=\;
\dim\Ker\left(
(V^\piso)^{-1}-G^\piso_0(E\pm\imath 0)
\right)\,.
\end{equation}
\noindent If $E\not\in[E_-,E_+]$, it is called an {\em isolated eigenvalue} while, if $E\in(E_-,E_+)$, it is called an {\em embedded eigenvalue}. For $E=E_\pm$, a non-trivial kernel of $(V^\piso)^{-1}-G^\piso_0(E_\pm)$ leads to a {\em threshold singularity} which will be dealt with below. With any $E\in{\mathbb R}$ is associated the subspace of ${\mathbb C}^{|\Lambda|}$
$${\mathcal S}_E\;=\;
\Ker\bigl(\,(V^\piso)^{-1}-\Re e\;G^\piso_0(E)\,\bigr)\,.
$$
\noindent Then the multiplicity of the eigenvalue $E\in{\mathbb R}\setminus [E_-,E_+]$ of $H$ is also equal to $\dim({\mathcal S}_E)$. The embedded eigenvalues are characterized in the next result (where the space $\Ff_E$ is the space $\Ff_b$ for $E=f^{-1}(b)$ used in Corollary~\ref{scatt08.lem-partialiso}).
\begin{proposi}
\label{scatt08.prop-embedded}
Let $V$ have finite range. Then $E\in(E_-,E_+)$ is an embedded eigenvalue if and only if $\Ff^\perp_E\cap{\mathcal S}_E$ is non-trivial and the dimension of this intersection is equal to the multiplicity of $E$. If ${\mathcal E}$ is analytic, the associated eigenvectors decay exponentially fast at infinity. If ${\mathcal E}$ is a trigonometric polynomial, the eigenvectors have finite support.
\end{proposi}
\begin{rem}
\label{scatt08.rem-KV}
{\em The last statement, namely that the eigenvectors have compact support, was proved in a slightly different context in \cite{KV}.
\hfill $\Box$}
\end{rem}
\noindent {\bf Proof:} Let $E=f^{-1}(b)\in(E_-,E_+)$ be an embedded eigenvalue and $v\in{\mathbb C}^{|\Lambda|}$ be the associated vector in the kernel of $(V^\piso)^{-1}-G^\piso_0(E\pm\imath 0)$. Because $\Im m(G^\piso_0(E-\imath 0))\geq 0$ and $V^\piso$ is self-adjoint, this is equivalent to having $\Im m\,G^\piso_0(E-\imath 0)v=0$ and $(V^\piso)^{-1}v=\Re e \,G^\piso_0(E)v$, or alternatively $v\in\Ff^\perp_E\cap{\mathcal S}_E$. As shown in Lemma~\ref{scatt08.lem-kernel} and in Proposition~\ref{scatt08.prop-fperp}, for any vector $v\in\Ff^\perp_E\cap{\mathcal S}_E$ there is $w\in\ell^2({\mathbb Z}^d)$ such that $v=(E{\mathbf 1}-H_0)w$. Moreover $w$ decays exponentially fast at infinity and has finite support if ${\mathcal E}$ is a trigonometric polynomial.
\hfill $\Box$
\begin{exam}
\label{scatt08.exam-gpim1}
{\em Let $\Lambda$ be such that $\Ff_E^\perp\neq \{0\}$ (see Examples~\ref{scatt08.exam-Ffb} to \ref{scatt08.exam-int}). Choosing $V^\piso=(\Re e\,G^\piso_0(E))^{-1}$ whenever the latter is invertible leads to a Hamiltonian $H$ with an embedded eigenvalue of multiplicity $\dim(\Ff^\perp_E)$.
\hfill $\Box$}
\end{exam}
\begin{exam}
\label{scatt08.exam-barrier}
{\em Let us present another way to construct Hamiltonians with embedded eigenvalues, again by perturbing a periodic $H_0$ with a polynomial energy band ${\mathcal E}$. Let $P=\Pi^*\Pi$ and $Q=\one-\Pi^*\Pi$ and then set $V= -PH_0Q-QH_0P$. Clearly $V$ has finite range. Now $H=H_0+V$ splits into a direct sum of $PH_0P$ and $QH_0Q$. The former has finite rank and its eigenvectors are supported in $\Lambda$. By the minimax principle, all its eigenvalues belong to $[E_-,E_+]$. On the other hand, $QH_0Q$ is a finite rank perturbation of $H_0$, so it has the same essential spectrum. Hence the eigenvalues of $PH_0P$ are indeed embedded.
\hfill $\Box$}
\end{exam}
It is natural to address the question of whether embedded eigenvalues exist if $V$ is a multiplication operator with support in $\Lambda$. The following result gives a negative answer.
\begin{proposi}
\label{scatt08.prop-noembed}
Let the band function ${\mathcal E}$ of $H_0$ be a trigonometric polynomial and let $V=\sum_{n\in\Lambda}v_n\,|n\rangle\langle n|$ be a potential with entries $v_n\not = 0$ for $n\in\Lambda$. Then $H_0+V$ has no embedded eigenvalues.
\end{proposi}
\noindent {\bf Proof:} Clearly $V^\piso$ is invertible in ${\mathbb C}^{|\Lambda|}$. Thanks to Propositions~\ref{scatt08.prop-fperp} and \ref{scatt08.prop-embedded}, if $E$ is an embedded eigen\-value, then there is a vector $v\in{\mathbb C}^{|\Lambda|}$ such that $(V^\piso)^{-1}v- G_0^\piso(E) v=0$ and $\Pi^*v=(E-H_0)w$ for some $w\in\ell^2({\mathbb Z}^d)$. It then follows from Proposition~\ref{scatt08.prop-Ffb} that $w$ is supported in the $S$-interior $\Lambda^S$ of $\Lambda$, if $S$ is the support of $H_0$. If $\Lambda^S$ is empty, then $v=0$ and there is no embedded eigenvalue. Now let $\Lambda^S$ be non-empty. Because $w$ is supported by $\Lambda^S$, one has $G_0^\piso(E)\Pi(E\,\one-H_0)w=\Pi w$ so that substituting $v=\Pi (E\,\one-H_0)w$ shows $\Pi(E\,\one-H_0)w=\Pi Vw$ and thus $(E\,\one-H_0)w=Vw$. Now the r.h.s.\ is supported in $\Lambda^S$ since $V$ is a multiplication operator. Applying again Proposition~\ref{scatt08.prop-Ffb} shows that $w$ is supported in the $S$-interior $(\Lambda^S)^S$ of $\Lambda^S$. This procedure can now be iterated. As the iterated $S$-interior of $\Lambda$ is empty after a finite number of steps, one concludes that $w=0$.
\hfill $\Box$
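The emptying of the iterated $S$-interior is transparent in one dimension: for nearest-neighbour hopping one has $S=\{-1,0,1\}$ and $r_a=1$, so the $S$-interior of an interval shaves one site off each end, and any finite interval is exhausted after finitely many steps. A toy check (illustrative only; $d=1$ and $r_a=1$ are assumed):

```python
def s_interior(lam, r=1):
    # S-interior of a finite subset of Z for S = {-r,...,r}: the contact
    # half-planes touch at min(lam) and max(lam), each pushed inward by r
    if not lam:
        return set()
    lo, hi = min(lam), max(lam)
    return {x for x in lam if lo + r <= x <= hi - r}

lam = set(range(6))          # the interval {0,...,5}
steps = 0
while lam:
    lam = s_interior(lam)
    steps += 1
```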
\vspace{.2cm}
Let us next investigate threshold singularities in more detail, now considering only the case of dimension $d\geq 3$. They appear at either one of the band edges $E_\pm$ whenever $(V^\piso)^{-1}-G^\piso_0(E_\pm\pm\imath 0)$ has a non-trivial kernel, which is equivalent to ${\mathcal S}_{E_\pm}$ being non-trivial. Hence $\dim({\mathcal S}_{E_\pm})$ is the multiplicity of the threshold singularity at $E_\pm$. Again it is easy to produce such singularities by an adequate choice of $V$. Such a singularity can either be a threshold eigenvalue or a threshold resonance (the latter is also called a half-bound state) depending upon whether the equation $H\psi=E_\pm\psi$ has a square integrable solution $\psi$ or not \cite{New,KJ}. In order to analyze the threshold singularities let the following space be defined
\begin{equation}
\label{scatt08.eq-Fborthpm}
{\mathcal T}_\pm \;=\;
\left\{
\,v\in{\mathbb C}^{|\Lambda|} \,|\,
\hat{v}\,\mbox{ has a zero of order at least } 5-d
\mbox{ at } k_\pm^*\,
\right\}\,.
\end{equation}
\noindent The definition of ${\mathcal T}_\pm$ is somewhat similar to $\Ff_b^\perp$ as given in Proposition~\ref{scatt08.prop-fperp}. However, ${\mathcal T}_\pm$ is likely to be larger than the limit of $\Ff_b^\perp$ as $b\to\pm\infty$ because it tests zeros only at a single point. In addition, ${\mathcal T}_\pm$ coincides with ${\mathbb C}^{|\Lambda|}$ for $d\geq 5$.
\begin{proposi}
\label{scatt08.prop-halfbounded}
Let $d\geq 3$ and $V$ be of finite range. Then the multiplicity of $E_\pm$ as threshold eigenvalue of $H$ is equal to $\dim\bigl({\mathcal S}_{E_\pm}\cap{\mathcal T}_{\pm}\bigr)$. Moreover, the multiplicity of $E_\pm$ as threshold resonance of $H$ is equal to $\dim\bigl({\mathcal S}_{E_\pm}\bigr)-\dim\bigl({\mathcal S}_{E_\pm}\cap{\mathcal T}_{\pm}\bigr)$. In particular, for $d\geq 5$ all threshold singularities lead to threshold eigenvalues.
\end{proposi}
\noindent {\bf Proof:} Let $v\in{\mathcal S}_{E_\pm}$. Then $V^\piso G^\piso_0(E_\pm)v=v$ and $V(E_\pm-H_0)^{-1}\piso^*v=\piso^*v$. Hence there is a threshold eigenvector whenever the equation $\piso^*v=(E_\pm-H_0)\psi$ admits a solution $\psi\in\ell^2({\mathbb Z}^d)$. If so, then $H\psi=E_\pm \psi$ indeed. Using the Fourier transform, the equation leads to the following solution $\hat{\psi}(k)=\hat{v}(k)/(E_\pm-{\mathcal E}(k))$. The denominator satisfies $E_\pm-{\mathcal E}(k)=\langle (k-k_\pm^*)|{\mathcal E}''(k_\pm^*)|(k-k_\pm^*)\rangle+{\mathcal O}((k-k_\pm^*)^3)$. This quadratic singularity is integrable in dimension $d\geq 3$ so that indeed $\psi\in\ell^\infty({\mathbb Z}^d)$ is well-defined. If $d\geq 5$, then the singularity is also square integrable so that $\psi\in \ell^2({\mathbb Z}^d)$ is an eigenvector. In dimension $d=4$, $\hat{\psi}$ is square integrable only if $\hat{v}$ has a zero at $k_\pm^*$. In dimension $d=3$ the zero has to be of order $2$. Combining this leads to the definition \eqref{scatt08.eq-Fborthpm} and to the conclusion above.
\hfill $\Box$
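The dimension counting in this proof reduces to the radial integrals $\int_0^1 r^{d-1}\,(r^{-2})^p\,dr$ near $k^*_\pm$: the case $p=1$ (boundedness of $\psi$) requires $d\geq 3$, while $p=2$ (square integrability) requires $d\geq 5$. A crude numerical confirmation of this exponent counting (illustrative only, on a log-spaced grid):

```python
import math

def radial(d, p, eps, n=4000):
    # approximate int_eps^1 r^(d-1) * r^(-2p) dr on a log-spaced grid
    s, L = 0.0, math.log(eps)
    for j in range(n):
        t0, t1 = L * (1 - j / n), L * (1 - (j + 1) / n)
        rm = math.exp(0.5 * (t0 + t1))          # geometric midpoint of the cell
        s += rm ** (d - 1 - 2 * p) * (math.exp(t1) - math.exp(t0))
    return s

conv_d3_L1 = abs(radial(3, 1, 1e-8) - radial(3, 1, 1e-4))   # converges as eps -> 0
conv_d5_L2 = abs(radial(5, 2, 1e-8) - radial(5, 2, 1e-4))   # converges as eps -> 0
div_d3_L2 = radial(3, 2, 1e-8) / radial(3, 2, 1e-4)         # diverges like 1/eps
div_d4_L2 = radial(4, 2, 1e-8) / radial(4, 2, 1e-4)         # diverges like log(1/eps)
```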
\vspace{.3cm}
\subsection{The wave operator as an integral operator}
\label{scatt08.ssect-waveint}
\noindent The potential being finite rank, the Kato-Rosenblum theorem for trace class scattering theory \cite{RS,Yaf} implies that the wave operators
$$\Omega_\pm \;=\;
\mbox{\rm s-}\!\!\!\lim_{t\rightarrow \pm \infty}\;
e^{\imath Ht}\;e^{-\imath H_0t}\,,
$$
\noindent exist and are complete, that is, $\Ran(\Omega_+)=\Ran(\Omega_-)=\Ran(P_{\mbox{\tiny\rm ac}}(H))$ where $P_{\mbox{\tiny\rm ac}}(H)$ is the projection on the absolutely continuous subspace of $H$. Then the wave operators are partial isometries satisfying
\begin{equation}
\label{scatt08.eq-Omegaid}
\Omega_\pm^\ast \, \Omega_\pm \;= \;{\mathbf 1}\;, \qquad
\Omega_\pm \, \Omega_\pm^\ast \;=\; P_{\mbox{\tiny\rm ac}}(H)\;=\;
{\mathbf 1}\,-\,P_{\mbox{\tiny\rm pp}}(H)\,,
\end{equation}
\noindent where $P_{\mbox{\tiny\rm pp}}(H)$ is the projection on the pure-point spectrum of $H$ and the last equality holds because there is no singular continuous spectrum. In addition, if one sets $\Omega(t)=e^{\imath H t}e^{-\imath H_0 t}$, then $\Omega(t)e^{\imath H_0 s}=e^{\imath H s}\Omega(t-s)$ for all $s\in{\mathbb R}$. Passing to the limit $t\rightarrow \pm\infty$ yields the intertwining relation
$$
\Omega_{\pm}\,g(H_0)\;=\;
g(H)\,\Omega_\pm\,,
\hspace{2cm}
g\in C_0({\mathbb R})\;.
$$
\noindent Birman's invariance principle \cite{RS,Yaf} can now be expressed as follows. The function $f:(E_-,E_+)\to{\mathbb R}$ defined in \eqref{scatt08.eq-Fchoiceconclusion} is smooth and has positive derivative. It is therefore admissible for the invariance principle so that
\begin{equation}
\label{scatt08.eq-invarprin}
\Omega_\pm\;=\;
\mbox{\rm s-}\!\!\!\lim_{t\rightarrow \pm \infty}\;
P_{\mbox{\tiny\rm ac}}(H)\;
e^{\imath f(H)t}\;
e^{-\imath f(H_0)t}\;.
\end{equation}
\noindent However, $H$ may have some spectrum outside $(E_-,E_+)$ and possibly eigenvalues at $E_\pm$, so that $f(H)$ may not be well defined. Nevertheless, by the completeness of the wave operators, $P_{\mbox{\tiny\rm ac}}(H)$ can be inserted, thus discounting all eigenvalues. Since $H$ has the same essential spectrum as $H_0$, this eliminates all ambiguity in the definition. This will allow one to derive an explicit formula for $\widehat{\Omega}_\pm=\Ff\Omega_\pm\Ff^*$ which will serve as a tool to calculate the wave operator and the scattering operator.
\vspace{.2cm}
\begin{proposi}
\label{scatt08.prop-waveop}
The following formula holds
$$
\bigl((\widehat{\Omega}_\pm-\one)\phi\bigr)(k)\;=\;
\lim_{\epsilon\downarrow 0}\;
\int_{{\mathbb T}^d}\frac{dk'}{(2\pi)^d}\,
\sum_{n,m\in\Lambda} \,
\langle n|
\,T({\mathcal E}(k')\mp\imath \epsilon)\,
|m\rangle\;
\frac{e^{\imath(k\cdot n-k'\cdot m)}}
{{\mathcal E}(k')\mp\imath \epsilon-{\mathcal E}(k)}\;
\phi(k')\,.
$$
\end{proposi}
\noindent {\bf Proof:} It follows from Duhamel's formula and a Tauberian lemma \cite{RS} that
$$
\Omega_\pm \;=\;
{\mathbf 1} \,\pm\,\imath\,
\mbox{\rm s-}\!\!\lim_{t\rightarrow \infty}\;
\int_0^t ds\; e^{\pm\imath Hs}\;V\;e^{\mp\imath H_0s}\;=\;
{\mathbf 1} \,\pm\,\imath\,
\mbox{\rm s-}\!\lim_{\epsilon\downarrow 0}\;
\int_0^\infty ds\; e^{-\epsilon s}\;
e^{\pm\imath Hs}\;V\;e^{\mp\imath H_0s}\;.
$$
\noindent Hence
$$\left((\widehat{\Omega}_\pm-{\bf 1})\phi\right)(k) \;=\;
\pm \,\imath\lim_{\epsilon\downarrow 0}\;
\int_0^\infty ds\; e^{-\epsilon s}\;
\left(
\Ff\, e^{\pm\imath Hs}\;V\;e^{\mp\imath H_0s}\,\Ff^*\,\phi
\right) (k)\,.
$$
\noindent In the following, the notation $V_{l,m}=\langle l|V|m\rangle$ will be used. In addition,
$$\langle m|\,e^{\mp\imath H_0s}\,\Ff^*\,|\phi\rangle \;=\;
\int_{{\mathbb T}^d}
\frac{dk'}{(2\pi)^d}\;
e^{-\imath k'\cdot m}\;
\,e^{\mp\imath {\mathcal E}(k')s}\,\phi(k')\,.
$$
\noindent Consequently the previous formula leads to
$$\left((\widehat{\Omega}_\pm-{\bf 1})\phi\right)(k) \;=\;
\pm \imath
\sum_{l,m\in\Lambda} \,V_{l,m}\;
\lim_{\epsilon\downarrow 0}\;
\int_0^\infty ds\; e^{-\epsilon s}\;
\left(
\Ff\, e^{\pm\imath Hs}|l\rangle
\right) (k)
\int_{{\mathbb T}^d}
\frac{dk'}{(2\pi)^d}\;
e^{-\imath k'\cdot m}\;
\,e^{\mp\imath {\mathcal E}(k')s}\,\phi(k')\,.
$$
\noindent The integral over $s$ can be performed to give
$$\left((\widehat{\Omega}_\pm-{\bf 1})\phi\right)(k) \;=\;
\lim_{\epsilon\downarrow 0}\;
\int_{{\mathbb T}^d}\frac{dk'}{(2\pi)^d}\;
\sum_{l,m\in\Lambda} V_{l,m}\,e^{-\imath k'\cdot m}\;
\left(
\Ff\;\frac{1}{{\mathcal E}(k')\mp\imath \epsilon-H}\;|l\rangle
\right)(k)\; \phi(k')\,.
$$
\noindent In the previous expression, the factor in parentheses can be computed explicitly. Indeed, using the resolvent identity as in Lemma~\ref{scatt08.lem-Greenperturb} yields
$$\frac{1}{z{\mathbf 1}-H}\;\piso^* \;=\;
\frac{1}{z{\mathbf 1} -H_0}\; \piso^*\;
\frac{1}{{\mathbf 1}- V^\piso G_0^\piso(z)}\,,
$$
\noindent together with the observation that $|l\rangle=\piso^*\,\piso|l\rangle$ since $l\in\Lambda$. Hence, passing to the Fourier space leads to
$$\left(
\Ff\;\frac{1}{z-H}\;|l\rangle
\right) (k) \;=\;
\sum_{n\in\Lambda}\;
\frac{1}{z-{\mathcal E}(k)}\,e^{\imath k\cdot n}\,
\langle n|
\left(
{\mathbf 1} -V^\piso G_0^\piso(z)
\right)^{-1}\,|l\rangle\,.
$$
\noindent Replacing this in the above expression for $\widehat{\Omega}_\pm-{\bf 1}$ completes the proof.
\hfill $\Box$
\vspace{.3cm}
\subsection{The wave operator in the REF representation}
\label{scatt08.ssect-waveREF}
\noindent In this section, the REF representation will be used to calculate the wave operator $\widetilde{\Omega}_\pm={\mathcal U}\widehat{\Omega}_\pm{\mathcal U}^*$ in dimension $d\geq 3$. It is an operator on $L^2({\mathbb R})\otimes L^2(\Sigma,\nu)$. From Proposition~\ref{scatt08.prop-waveop}, the definition \eqref{scatt08.eq-Udef}, the change of variables formula \eqref{scatt08.eq-varchange3} and the definition \eqref{scatt08.eq-ONbasis} of the states $\psi_{m,b}$, it follows that
$$
((\widetilde{\Omega}_\pm-{\bf 1})\phi)_b \,=\,
\lim_{\epsilon\downarrow 0}\;
\int db'\;
\sum_{n,m\in\Lambda}
|\psi_{n,b}\rangle\;
\frac{
\langle n|\,T(f^{-1}(b')\mp\imath \epsilon)\,|m\rangle
}{f^{-1}(b')\mp\imath \epsilon-f^{-1}(b)}\;
\langle \psi_{m,b'}|\phi_{b'}\rangle\,,
$$
\noindent where $\langle \psi_{m,b'}|\phi_{b'}\rangle$ stands for the inner product in the Hilbert space $L^2(\Sigma,\nu)$ and the integral over $b'$ runs over ${\mathbb R}$. Thanks to Corollary~\ref{scatt08.lem-partialiso}, the sums over $n$ and $m$ can be computed to give
\begin{equation}
\label{eq-Omegaformula}
((\widetilde{\Omega}_\pm-{\bf 1})\phi)_b\,=\,
\lim_{\epsilon\downarrow 0}
\int \frac{db'}{\pi}\,
\frac{F(f^{-1}(b))^{\frac{1}{2}}\,F(f^{-1}(b'))^{\frac{1}{2}}}
{f^{-1}(b')\mp\imath \epsilon-f^{-1}(b)}
\,\Pi_b^*\,
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}
\bigl(e^{\frac{b'}{2}}+e^{-\frac{b'}{2}}\bigr)\,
(\widetilde{O}_{\pm}\phi)_{b'}\,,
\end{equation}
\noindent where $\widetilde{O}_{\pm}=\int\! db\,\widetilde{O}_{\pm,b}$ with
\begin{equation}
\label{scatt08.eq-Odef}
\widetilde{O}_{\pm,b} \;=\;
\lim_{\epsilon\downarrow 0}\;
\frac{1}{e^{\frac{b}{2}}+e^{-\frac{b}{2}}}\;
T^\piso(f^{-1}(b)\mp\imath \epsilon)\;
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}\;
\Pi_{b}\;.
\end{equation}
\noindent It is part of the proof of the following result to show that the limit in \eqref{scatt08.eq-Odef} exists and that the expression \eqref{scatt08.eq-Tmatrix} for the $T$-matrix can be substituted to give
\begin{equation}
\label{scatt08.eq-Odef2}
\widetilde{O}_{\pm,b}\;=\;
\frac{1}{e^{\frac{b}{2}}+e^{-\frac{b}{2}}}\;
\Bigl(\,(V^\piso)^{-1}-\Re e \,G^\piso_0(f^{-1}(b))\,
\mp\,\imath\,
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|\Bigr)^{-1}\;
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}
\;\Pi_{b}\,.
\end{equation}
\begin{theo}
\label{scatt08.theo-waveop3d}
Let $d\geq 3$ and let $V$ have finite support. In addition, let $F$ be chosen as in {\rm \eqref{scatt08.eq-Fchoice}} and assume the following:
\vspace{.1cm}
\noindent {\rm (i)} If $d=3$, the threshold singularities have multiplicity at most $1$ and any vector $w_\pm$ in the kernel
of $(V^\piso)^{-1}-G^\piso_0(E_\pm)$ has a Fourier transform satisfying $\hat{w}_\pm(k^*_\pm)\not = 0$.
\vspace{.1cm}
\noindent {\rm (ii)} If $d=4$, there are no threshold singularities.
\vspace{.1cm}
\noindent {\rm (iii)} The embedded eigenvalues lie neither on the critical values of ${\mathcal E}$ nor in the set ${\mathscr V}$ described in
{\rm Proposition~\ref{scatt08.th-Ffbpoly}}. The corresponding zeros of $b\in{\mathbb R}\mapsto (V^\piso)^{-1}-G^\piso_0(f^{-1}(b))$ are of first order in the
real part.
\vspace{.1cm}
\noindent Then the operators $\widetilde{O}_{\pm,b}$ are well-defined, continuous in $b$ and uniformly bounded. The wave operators are given by
\begin{equation}
\label{scatt08.eq-Omegarep3d2}
\widetilde{\Omega}_\pm \;=\;
{\bf 1}+
\sum_{\kappa=\pm 1}
\imath\;\Pi_{\widetilde{B}}^*\;
\bigl|\Im m\,G^\piso_0(f^{-1}({\widetilde{B}}))\bigr|^{\frac{1}{2}}\,
e^{\kappa\frac{\widetilde{B}}{2}}\;
\left(
\pm\,{\bf 1}\,+\,
\frac{e^{\pi\widetilde{A}}-\kappa\,\imath}
{e^{\pi\widetilde{A}}+\kappa\,\imath}
\right)
\,\widetilde{O}_{\pm}\,.
\end{equation}
\end{theo}
Formula \eqref{scatt08.eq-Omegarep3d2} shows that the wave operator can be calculated in terms of $\widetilde{O}_{\pm}$ and the dilation operator $\widetilde{A}$. It is similar to those obtained by Kellendonk and Richard for continuous scattering systems \cite{KR2,KR3,KR4}; however, we stress that we also allow for embedded eigenvalues, a fact that is closely linked to proving that the inverse in \eqref{scatt08.eq-Odef2} exists. In fact, if the kernels of $(V^\piso)^{-1}-\Re e\,G^\piso_0(E)$ and of $\Im m\,G^\piso_0(E)$ have a non-trivial intersection, the associated pole in \eqref{scatt08.eq-Odef2} is attained on the orthogonal complement of the range $\Ff_b$ of $\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}$ and thus one can prove that the inverse on $\Ff_b$ exists (in the sense of Lemma~\ref{scatt08.lem-kerA+iB}). Energies at which this happens are exactly the embedded eigenvalues by Proposition~\ref{scatt08.prop-embedded}. The intersection of the kernels is supposed to be a regular singular point and this allows one to argue for the continuity of $\widetilde{O}_{\pm,b}$. Hypothesis (iii) holds generically for $V^\piso$ (within the non-generic situations of embedded eigenvalues). Another comment is that the condition (i) imposed for $d=3$ implies that the threshold singularity is a threshold resonance. This is because, by Proposition~\ref{scatt08.prop-halfbounded}, $\hat{w}_\pm(k^*_\pm)\not =0$ implies ${\mathcal S}_{E_\pm}\cap{\mathcal T}_\pm=\{0\}$. Again this is the generic behavior in dimension $d=3$. It is possible to treat threshold singularities for $d=4$, but this is technically more involved and not carried out here.
\vspace{.2cm}
\noindent {\bf Proof} of Theorem~\ref{scatt08.theo-waveop3d}: Let us first suppose that $\widetilde{O}_{\pm}$ are well-defined and bounded with fibers $\widetilde{O}_{\pm,b}$ depending continuously on $b$, and then show how \eqref{scatt08.eq-Omegarep3d2} follows from \eqref{eq-Omegaformula}. Thanks to the formulas \eqref{scatt08.eq-Fchoiceconclusion}, ${\mathcal E}(\theta_{b}(\sigma))=f^{-1}(b)=E_r+\Delta\tanh(b)$ and $F(f^{-1}(b))=\Delta\cosh^{-2}(b)$, a bit of algebra now leads to
$$((\widetilde{\Omega}_\pm-{\bf 1})\phi)_b \;=\;
\Pi^*_b\;
\bigl|\Im m\;G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}\,
\int \frac{db'}{\pi}\;
\frac{1}{\sinh(b'-b)\mp\imath 0} \;
\bigl(e^{\frac{b'}{2}}+e^{-\frac{b'}{2}}\bigr)\;
(\widetilde{O}_{\pm}\phi)_{b'}\,.
$$
\noindent In the previous formula, $\widetilde{O}_{\pm}\phi$ is a vector in the Hilbert space $L^2({\mathbb R})\otimes {\mathbb C}^{|\Lambda|}$. As previously, let $\widetilde{A}=-\imath\partial_b$ be the generator of the translation group in $L^2({\mathbb R})\otimes L^2(\Sigma,\nu)$ as well as in $L^2({\mathbb R})\otimes {\mathbb C}^{|\Lambda|}$. Changing the integration variable $b'$ to $u=b'-b$ leads to $(\widetilde{O}_{\pm}\phi)_{u+b}=(e^{\imath \widetilde{A}u}\widetilde{O}_{\pm}\phi)_{b}$. Hence
$$
\Big((\widetilde{\Omega}_\pm-{\bf 1})\phi\Big)_b\;=\;
\sum_{\kappa=\pm 1}\;
\Pi_b\;
\bigl|\Im m\;G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}\,
e^{\kappa\frac{b}{2}}
\int\frac{du}{\pi}\;
\frac{1}{\sinh(u)\mp\imath 0} \;
e^{\kappa\frac{u}{2}}
\left(e^{\imath \widetilde{A}u}\widetilde{O}_{\pm}\phi
\right)_{b}\,.
$$
\noindent Now \eqref{scatt08.eq-Omegarep3d2} is obtained from the following identity:
$$\int\frac{du}{\imath\pi}\;
\frac{1}{\sinh\bigl(u\bigr)\mp\imath 0}\;
e^{\kappa\frac{u}{2}}\;
e^{\imath \widetilde{A}u}\;=\;
\pm\one\;+\;
\frac{
e^{\pi\widetilde{A}}-\kappa\,\imath
}{
e^{\pi\widetilde{A}}+\kappa\,\imath
}\;.
$$
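As the argument rests on this identity, it may be instructive to verify its scalar version numerically, replacing $\widetilde{A}$ by a real number $a$. The following sketch is purely illustrative and not part of the proof; it uses the distributional identity $\frac{1}{\sinh(u)\mp\imath 0}=\mbox{P.V.}\,\frac{1}{\sinh(u)}\pm\imath\pi\,\delta(u)$ (valid since $\sinh'(0)=1$) and symmetrizes the principal value to $2\int_0^\infty \sinh(zu)/\sinh(u)\,du$ with $z=\kappa/2+\imath a$:

```python
import numpy as np

def pv_integral(kappa, a, u_max=60.0, n=300000):
    # Principal value of \int du e^{kappa u/2 + i a u}/sinh(u),
    # symmetrized to 2*\int_0^infty sinh(z u)/sinh(u) du with z = kappa/2 + i a
    z = kappa / 2.0 + 1j * a
    u = np.linspace(1e-8, u_max, n)
    f = np.sinh(z * u) / np.sinh(u)
    h = u[1] - u[0]
    return 2.0 * h * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule

def lhs(kappa, a, sign):
    # 1/(sinh u -+ i0) = PV(1/sinh u) +- i*pi*delta(u); the delta term gives sign*1
    return pv_integral(kappa, a) / (1j * np.pi) + sign

def rhs(kappa, a, sign):
    return sign + (np.exp(np.pi * a) - kappa * 1j) / (np.exp(np.pi * a) + kappa * 1j)

for kappa in (+1, -1):
    for a in (-1.3, 0.0, 0.8):
        for sign in (+1, -1):
            assert abs(lhs(kappa, a, sign) - rhs(kappa, a, sign)) < 1e-5
print("identity verified")
```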
It remains to show the above-mentioned properties of the operators $\widetilde{O}_{\pm,b}$ defined in \eqref{scatt08.eq-Odef2}. We first check that for every $b\in{\mathbb R}$ they are well-defined and continuous in $b$. This analytical issue is tied to embedded eigenvalues. In fact, away from them there would be nothing to prove because the inverse in \eqref{scatt08.eq-Odef2} exists by \eqref{eq-embedcount} and the fact that $A+\imath B$ is invertible for any invertible operator $A=A^*$ and non-negative operator $B\geq 0$ (see the proof of Lemma~\ref{scatt08.lem-kerA+iB}). Now focusing on embedded eigenvalues, let us set
$$
A_b\;=\;
(V^\piso)^{-1}-\Re e \,G^\piso_0(f^{-1}(b))
\;,
\qquad
B_b\;=\;
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|
\;.
$$
These matrices $A_b$ and $B_b$ have nothing to do with the dilation operator and the rescaled energy operator, and only appear again in the following lines and Appendix~\ref{app-inverses}. Then $A_b$ is self-adjoint and $B_b$ is non-negative, and by Proposition~\ref{scatt08.prop-Green3d}(i) both are real analytic in $b$ as long as $f^{-1}(b)$ is not a critical value of ${\mathcal E}$. Now the properties of hypothesis (iii) of Theorem~\ref{scatt08.theo-waveop3d} guarantee that Lemma~\ref{lem-linindep} of Appendix~\ref{app-inverses} can be applied because, in particular, the zeros of $b\in{\mathbb R}\mapsto A_b+\imath B_b$ are of first order in $A_b$. This lemma implies that $\widetilde{O}_{\pm,b}$ is even analytic in $b=f(E)$ for energies $E$ away from the set ${\mathcal E}(\cri)$ of critical energies and away from the exceptional set ${\mathscr V}$ of Proposition~\ref{scatt08.th-Ffbpoly}. Continuity at the latter points follows again from Proposition~\ref{scatt08.prop-Green3d} because there are no embedded eigenvalues there. In particular, let us also note that the dimension of $\Pi_b$ changes at points $b$ corresponding to energies in ${\mathscr V}$, but in \eqref{scatt08.eq-Odef2} this does not lead to discontinuities due to the factor $|\Im m\,G^\piso_0(f^{-1}(b))|^{\frac{1}{2}}$ directly following $\Pi_b$.
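The invertibility fact quoted above, namely that $A+\imath B$ is invertible whenever $A=A^*$ is invertible and $B\geq 0$, follows because $(A+\imath B)x=0$ forces $\langle x,Bx\rangle=0$, hence $Bx=0$ and then $Ax=0$. A randomized numerical illustration (the matrix sizes are arbitrary choices, not part of the argument):

```python
import numpy as np

rng = np.random.default_rng(3)
n, checked = 5, 0
for trial in range(100):
    M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    A = (M + M.conj().T) / 2                      # self-adjoint
    if np.min(np.abs(np.linalg.eigvalsh(A))) < 1e-3:
        continue                                  # keep A safely invertible
    R = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    B = R @ R.conj().T                            # non-negative
    smin = np.linalg.svd(A + 1j * B, compute_uv=False)[-1]
    assert smin > 1e-10                           # A + iB is invertible
    checked += 1
assert checked > 0
print(checked, "random pairs checked")
```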
\vspace{.1cm}
Now we check that the operator $\widetilde{O}_{\pm}$ seen as a linear map from $L^2({\mathbb R})\otimes L^2(\Sigma,\nu)$ to $L^2({\mathbb R})\otimes {\mathbb C}^{|\Lambda|}$ is actually bounded. This means that we have to analyze the limits $b\to\pm\infty$ of $\widetilde{O}_{\pm}$, which depend on the behavior at the thresholds. Here the factor $e^{\frac{b}{2}}+e^{-\frac{b}{2}}$ introduced in \eqref{scatt08.eq-Odef} will turn out to be crucial. We start by expanding the inverse in \eqref{scatt08.eq-Odef2} around the band edge $E_\pm$ using items (iv) and (v) of Proposition~\ref{scatt08.prop-Green3d}:
\begin{equation}
\label{scatt08.eq-expandinv}
(V^\piso)^{-1}-G^\piso_0(f^{-1}(b)\mp\imath 0)\;=\;
(V^\piso)^{-1}-G^\piso_0(E_\pm)
\;\mp\imath\;
D_\pm M^\piso_\pm\,e^{-|b|(d-2)}+
{\mathcal O}(e^{-|b|d},|b|e^{-2|b|})\,.
\end{equation}
\noindent If there is no threshold singularity, then by definition $(V^\piso)^{-1}-G^\piso_0(E_\pm)$ is invertible and thus the inverse in \eqref{scatt08.eq-Odef2} remains bounded as $b\to\pm\infty$ and the other factors lead to $\lim_{b\to\pm\infty}\widetilde{O}_{\pm,b}=0$. If there is a threshold singularity of multiplicity $\dim({\mathcal S}_{E_\pm})=1$ in $d=3$, the operator $(V^\piso)^{-1}-G^\piso_0(E_\pm)$ has a kernel of dimension $1$ spanned by some vector $w_\pm$. If, in addition, $w_\pm$ is not orthogonal to the one-dimensional projection $M^\piso_\pm$, namely if $\langle v^\piso_\pm |w_\pm\rangle=|\Lambda|^{-\frac{1}{2}}\hat{w}_\pm(k^*_\pm)\not =0$, then $\bigl((V^\piso)^{-1}-G^\piso_0(f^{-1}(b)\mp\imath 0)\bigr)^{-1}={\mathcal O}(e^{|b|})$. As $\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}={\mathcal O}(e^{-\frac{|b|}{2}})$ (because $d=3$) the added prefactor $(e^{\frac{b}{2}}+e^{-\frac{b}{2}})^{-1}$ assures that $\widetilde{O}_{\pm,b}$ remains bounded as $b\to\pm\infty$. (Its convergence is analyzed in Proposition~\ref{scatt08.prop-R} below.)
\vspace{.1cm}
For $d\geq 5$, the term of order $e^{-|b|(d-2)}$ in \eqref{scatt08.eq-expandinv} is dominated by the terms of order $e^{-2|b|}$. Hence, \eqref{scatt08.eq-expandinv} becomes rather
$$(V^\piso)^{-1}-G^\piso_0(f^{-1}(b)\mp\imath 0)\;=\;
(V^\piso)^{-1}-G^\piso_0(E_\pm) -N^\piso_\pm\,e^{-2|b|}+
{\mathcal O}(e^{-3|b|})\,,
$$
\noindent which follows also from Proposition~\ref{scatt08.prop-Green3d}. In this case, $N^\piso_\pm$ is invertible so that for an arbitrary threshold singularity $\bigl((V^\piso)^{-1}-G^\piso_0(f^{-1}(b)\mp\imath 0)\bigr)^{-1}={\mathcal O}(e^{2|b|})$, even in the most singular case where $(V^\piso)^{-1}=G^\piso_0(E_\pm)$. This is compensated in \eqref{scatt08.eq-Odef2} by the other factor $(e^{\frac{b}{2}}+e^{-\frac{b}{2}})^{-1}\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}={\mathcal O}(e^{-\frac{|b|}{2}(d-1)})$. Consequently $\widetilde{O}_{\pm}$ is bounded for $d\geq 5$. This concludes the proof.
\hfill $\Box$
\vspace{.2cm}
\subsection{The scattering operator}
\label{scatt08.ssect-scat}
\noindent Whenever the wave operators are complete, the scattering operator is defined by:
$$
S\;=\;\Omega_+^* \Omega_-\,.
$$
\noindent It is unitary and satisfies $[S,H_0]=0$. Hence, in the REF representation, $[\widetilde{S},\widetilde{B}]=0$ and thus $\widetilde{S}=\int db\,\widetilde{S}_b$ with unitary operators $\widetilde{S}_b$ on $L^2(\Sigma,\nu)$. The intertwining relation and the invariance principle \eqref{scatt08.eq-invarprin} imply that for any admissible function $f$ with $f'>0$, one has
$$\mbox{\rm s-}\!\!\!\lim_{t\to\pm\infty}\;
e^{\imath t f(H_0)}\,
\Omega_\pm\,
e^{-\imath t f(H_0)}\;=\;
\Omega_\pm^*\Omega_\pm \;=\;{\bf 1}\,,
\hspace{2cm}
\mbox{\rm s-}\!\!\!\lim_{t\to\mp\infty}\;
e^{\imath t f(H_0)}\,
\Omega_\pm \,
e^{-\imath t f(H_0)}\;=\;
\Omega_\mp^*\Omega_\pm\,.
$$
\noindent The second expression is either $S$ or $S^*$. Let now $f$ be chosen as in \eqref{scatt08.eq-Fchoiceconclusion}. In the REF representation, Proposition~\ref{scatt08.prop-rep} then leads to
\begin{equation}
\label{scatt08.eq-Blimits}
\mbox{\rm s-}\!\!\!\lim_{t\to\pm\infty}\;
e^{\imath t \widetilde{B}}\,
\widetilde{\Omega}_\pm\,
e^{-\imath t \widetilde{B}} \;=\;
{\bf 1}\,,
\hspace{1cm}
\mbox{\rm s-}\!\!\!\lim_{t\to\infty}\;
e^{\imath t \widetilde{B}}\,
\widetilde{\Omega}_-\,
e^{-\imath t \widetilde{B}} \;=\;
\widetilde{S}\,,
\hspace{1cm}
\mbox{\rm s-}\!\!\!\lim_{t\to-\infty}\;
e^{\imath t \widetilde{B}}\,
\widetilde{\Omega}_+\,
e^{-\imath t \widetilde{B}}\;=\;
\widetilde{S}^*\,.
\end{equation}
\noindent Using the explicit formula for $\widetilde{\Omega}_-$ given in Theorem~\ref{scatt08.theo-waveop3d} now leads to an explicit expression for the on-shell scattering matrix. The structure of such formulas (in particular, the EF representation of the formula \eqref{scatt08.eq-Sformula} in the proof below) is well-known and has appeared in various guises (see \cite{New,Yaf} for a list of references).
\begin{theo}
\label{scatt08.theo-Sin3d}
Let the assumptions of {\rm Theorem~\ref{scatt08.theo-waveop3d}} hold. Then the on-shell scattering matrix $\widetilde{S}_b$ is a unitary operator on $L^2(\Sigma,\nu)$ depending continuously on $b$ and given by
$$\widetilde{S}_b\;=\;
(\one-\piso_b^*\piso_b)+
\piso_b^* (C_b-\imath)(C_{b}+\imath)^{-1}\piso_b\,,
$$
\noindent where the self-adjoint $L\times L$ matrix $C_b:P_b\,{\mathbb C}^{|\Lambda|}\to P_b\,{\mathbb C}^{|\Lambda|}$ is defined by
$$C_b\;=\; P_b
\bigl|\Im m\, G^\piso_0(f^{-1}(b))\bigr|^{-\frac{1}{2}}
\Bigl((V^\piso)^{-1}-\Re e\,G^\piso_0(f^{-1}(b))\Bigr)
\bigl|\Im m \,G^\piso_0(f^{-1}(b))\bigr|^{-\frac{1}{2}}\,
P_b\,.
$$
\end{theo}
\noindent {\bf Proof:} For any function $g$ the following formula holds: $e^{\imath t \widetilde{B}}g(\widetilde{A}) e^{-\imath t \widetilde{B}}=g(\widetilde{A}-t)$. The limits $t\to\pm\infty$ can be taken whenever $g$ has limits at infinity. The function appearing in \eqref{scatt08.eq-Omegarep3d2} is of that type. The middle formula in equation~(\ref{scatt08.eq-Blimits}) and the expression for $\widetilde{\Omega}_-$ given in Theorem~\ref{scatt08.theo-waveop3d} lead to the calculation of $\widetilde{S}$, namely
\begin{equation}
\label{scatt08.eq-Sformula0}
\widetilde{S}_b\;=\;
\one+
\sum_{\kappa=\pm 1}
\imath\;\Pi_{b}^*\;
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}\,
e^{\kappa\frac{b}{2}}\;
(-2)\;\widetilde{O}_{-,b}\,.
\end{equation}
\noindent Because Theorem~\ref{scatt08.theo-waveop3d} states that $\widetilde{O}_{\pm,b}$ is continuous in $b$, this formula already shows that $\widetilde{S}_b$ is continuous in $b$. Using equation~\eqref{scatt08.eq-Odef2}, it now follows that
$$\widetilde{S}_b\;=\;
\one-2\,\imath\;
\Pi_{b}^*\;
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}\,
\Bigl((V^\piso)^{-1}-G^\piso_0(f^{-1}(b)+ \imath 0)\Bigr)^{-1}\,
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}\;
\Pi_{b}\,.
$$
\noindent After simplification, one gets
\begin{equation}
\label{scatt08.eq-Sformula}
\widetilde{S}_b\;=\;
\one-2\,\imath\;
\piso^*_b(C_{b}+\imath)^{-1}\piso_b\,.
\end{equation}
\noindent This proves the claim, due to the Cayley transform identity $\one-2\,\imath\,(C_b+\imath)^{-1}=(C_b-\imath)(C_b+\imath)^{-1}$ on the range of $P_b$.
\hfill $\Box$
\vspace{.2cm}
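Unitarity of $\widetilde{S}_b$ is already built into the algebraic structure of the formula in Theorem~\ref{scatt08.theo-Sin3d}: since $\piso_b\piso_b^*=\one$, the operator $(\one-\piso_b^*\piso_b)+\piso_b^*U\piso_b$ is unitary for any unitary $U$, here the Cayley transform of the self-adjoint matrix $C_b$. A minimal finite-dimensional sketch with randomly generated stand-ins for $C_b$ and $\piso_b$ (illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, L = 6, 2   # illustrative dimensions for the fiber space and C^{|Lambda|}

# coisometry Pi: L x n matrix with Pi Pi* = 1_L (orthonormal rows)
Q, _ = np.linalg.qr(rng.standard_normal((n, L)) + 1j * rng.standard_normal((n, L)))
Pi = Q.conj().T

# self-adjoint matrix C playing the role of C_b
M = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
C = (M + M.conj().T) / 2

# S = (1 - Pi* Pi) + Pi* (C - i)(C + i)^{-1} Pi
cayley = (C - 1j * np.eye(L)) @ np.linalg.inv(C + 1j * np.eye(L))
S = (np.eye(n) - Pi.conj().T @ Pi) + Pi.conj().T @ cayley @ Pi

assert np.allclose(S @ S.conj().T, np.eye(n))
assert np.allclose(S.conj().T @ S, np.eye(n))
print("S is unitary")
```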
Similar formulas hold for the EF-representation of the scattering matrix. The comments made in Section~\ref{scatt08.sec-EFrep} and the results of Theorem~\ref{scatt08.theo-Sin3d} lead to (with $C_E=C_b$ for $b=f(E)$),
$$\overset{\;\circ}{S}_E\;=\;
\widetilde{S}_{f(E)}\;=\;
(\one-\piso_{E}^*\piso_{E})+
\piso_{E}^* (C_E-\imath)(C_E+\imath)^{-1}\piso_{E}\,.
$$
\noindent It is now possible to get results on the asymptotics of the scattering matrix.
\begin{proposi}
\label{scatt08.prop-Slimits}
Let the assumptions of {\rm Theorem~\ref{scatt08.theo-waveop3d}} hold.
\vspace{.1cm}
\noindent {\rm (i)} If there are no threshold singularities or if $d\geq 5$, then $\lim_{b\to\pm\infty}\;\widetilde{S}_b=\one$.
\vspace{.1cm}
\noindent {\rm (ii)} If for $d=3$ there is a threshold singularity of multiplicity $1$ at $E_\pm$ and the extremum of ${\mathcal E}$ at $E_\pm$
is isotropic in the sense that ${\mathcal E}''(k_\pm^*)$ is a multiple of the identity, then
$$
\lim_{b\to\pm\infty}\;\widetilde{S}_b\;=\;
\one -2\,|\psi_\pm\rangle\,\langle\psi_\pm|\,,
$$
where $\psi_\pm\in L^2(\Sigma,\nu)$ are the states given in {\rm Lemma~\ref{scatt08.lem-limitstate}}.
\end{proposi}
\noindent {\bf Proof.} (i) If there are no threshold singularities, then $\lim_{b\to\pm\infty}\widetilde{O}_{\pm,b}=0$ as was shown in Section~\ref{scatt08.ssect-waveREF}. As $\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}\,e^{\kappa\frac{b}{2}}$ is bounded, it follows from \eqref{scatt08.eq-Sformula0} that $\lim_{b\to\pm\infty}\;\widetilde{S}_b=\one$. For $d\geq 5$, it has been shown that $\widetilde{O}_{\pm,b}$ is uniformly bounded even in the presence of threshold singularities. As the factor $\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}\,e^{\kappa\frac{b}{2}}$ vanishes in the limits $b\to\pm\infty$, the same conclusion holds thanks to equation~(\ref{scatt08.eq-Sformula0}).
\vspace{.2cm}
\noindent (ii) For $d=3$, it is assumed that there is a threshold singularity of multiplicity $1$ and the extrema are isotropic. Starting from \eqref{scatt08.eq-Sformula} the inverse of $C_b+\imath$ can be expanded using Proposition~\ref{scatt08.prop-Green3d} to give
\begin{eqnarray*}
(C_b+\imath)^{-1} & = &
D_\pm\, e^{-|b|}\,M^\piso_\pm
\Bigl(
(V^\piso)^{-1}-
\Re e\,G^\piso_0(E_\pm)+
\imath\,D_\pm\,
e^{-|b|}\,M^\piso_\pm
\Bigr)^{-1}\, M^\piso_\pm +
{\mathcal O}(e^{-\frac{|b|}{2}})\\
& =&
-\,\imath\,M^\piso_\pm +
{\mathcal O}(e^{-\frac{|b|}{2}})\,,
\end{eqnarray*}
\noindent where, in the second equality, Lemma~\ref{scatt08.lem-limit} stated below is used. It can indeed be applied thanks to the hypothesis stated in Theorem~\ref{scatt08.theo-waveop3d}. On the other hand ${\mathcal R}_b v^\piso_\pm=\psi_{0,b}+{\mathcal O}(e^{\mp b})$ so that it follows that $\Pi^*_b M^\piso_\pm\Pi_b= \|\psi_{0,b}\|^{-2}\,|\psi_{0,b}\rangle\langle\psi_{0,b}|+{\mathcal O}(e^{\mp b})$. Hence Lemma~\ref{scatt08.lem-limitstate} allows to conclude the proof.
\hfill $\Box$
\begin{lemma}
\label{scatt08.lem-limit}
Let $P\in \mbox{\rm Mat}(n\times n,{\mathbb C})$ be a one-dimensional orthogonal projection and $A=A^*\in \mbox{\rm Mat}(n\times n,{\mathbb C})$ have a one-dimensional kernel not lying in the kernel of $P$. Then
$$
\lambda P(A+\imath\lambda P)^{-1}P\;=\;-\,\imath \,P+{\mathcal O}(\lambda)
\;.
$$
\end{lemma}
\noindent {\bf Proof:} This follows from a short calculation using Cramer's rule.
\hfill $\Box$
\vspace{.3cm}
\subsection{The contributions of the threshold singularities}
\label{scatt08.sec-threshold}
\noindent Just as the scattering operator is obtained as a rescaled energy boost of the wave operator in \eqref{scatt08.eq-Blimits}, it is natural to study the dilation operator boost of the wave operator. As the scattering operator is obtained as boost of $\widetilde{\Omega}_-$, it is sufficient to consider
$$
\widetilde{R}_\pm \;=\;
\mbox{\rm s-}\!\!\!\lim_{t\to\pm\infty}\;
e^{\imath t \widetilde{A}}\,
\widetilde{\Omega}_-\,
e^{-\imath t \widetilde{A}}\,.
$$
\noindent It ought to be remarked that the limits in $\widetilde{R}_\pm$ approach the critical values $E_\pm$ respectively. This is because the identity $e^{\imath t \widetilde{A}}g(\widetilde{B}) e^{-\imath t \widetilde{A}}=g(\widetilde{B}+t)$ holds for any function $g$. From the definition it follows that $[\widetilde{R}_\pm,\widetilde{A}]=0$ so that
$\widetilde{R}_\pm=\int da\,\widetilde{R}_{\pm,a}$ with operator fibers
$\widetilde{R}_{\pm,a}$ acting on $L^2(\Sigma,\nu)$. Since the operator $\widetilde{A}$ has continuous spectrum, it follows that $\lim_{t\rightarrow \pm\infty} e^{\imath t \widetilde{A}}K e^{-\imath t \widetilde{A}}=0$ for any compact operator $K$. In particular, since $\Omega_-$ is unitary modulo a compact operator, $\widetilde{R}_\pm$ is unitary. Consequently each $\widetilde{R}_{\pm,a}$ is unitary. The following operators are associated to $\widetilde{R}_\pm$ in a similar manner as the time delay is associated to the scattering operator:
$$
\widetilde{T}_\pm\;=\;
\pm\;\frac{1}{\imath}\;
(\widetilde{R}_\pm)^{-1}\,
[\widetilde{B},\widetilde{R}_\pm]\,.
$$
\begin{proposi}
\label{scatt08.prop-R}
Let the assumptions of {\rm Theorem~\ref{scatt08.theo-waveop3d}} hold.
\vspace{.1cm}
\noindent {\rm (i)} If there are no threshold singularities or if $d\geq 5$, then $\widetilde{R}_\pm=\one$ and $\widetilde{T}_\pm=0$.
\vspace{.1cm}
\noindent {\rm (ii)} If $d=3$, if there is a threshold resonance at $E_\pm$ of multiplicity $1$ and if the extrema of ${\mathcal E}$ are
isotropic, then
$$\widetilde{R}_{\pm,a}\;=\;
\Bigl(\one\,-\,|\psi_\pm\rangle\langle\psi_\pm|\Bigr)\;+\;
\frac{e^{\pi{a}}\mp\,\imath}{e^{\pi{a}}\pm\,\imath}\;
|\psi_\pm\rangle\langle\psi_\pm|\,,
\hspace{2cm}
\TR(\widetilde{T}_\pm)\;=\;\frac{1}{2}\,.
$$
\end{proposi}
\noindent {\bf Proof:} Statement (i) is a consequence of the proof of Theorem~\ref{scatt08.theo-waveop3d} and the arguments below, so let us focus on the case $d=3$. Equation~(\ref{scatt08.eq-Omegarep3d2}) gives an explicit expression of the wave operator. Each term in the sum over $\kappa =\pm 1$ in its r.h.s. is the product of three terms that can be treated separately under the boost action. The first factor is
\begin{equation}
\label{scatt08.eq-firstfac}
\lim_{b\to\pm\infty}
\Pi_{b}^*\;
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}\,
e^{\kappa\frac{b}{2}}\;=\;
(D_\pm)^\frac{1}{2}\;
\delta_{\kappa,\pm 1}\;
|\psi_\pm\rangle\langle v^\piso_\pm|\,,
\end{equation}
\noindent where $\delta_{\kappa,\pm 1}$ is the Kronecker delta. The details of the proof of this equality are similar to the proof of Proposition~\ref{scatt08.prop-Slimits}. Taking into account the bra $\langle v^\piso_\pm|$ in \eqref{scatt08.eq-firstfac} permits one to treat the last factor using Lemma~\ref{scatt08.lem-limit}. This gives
$$\lim_{t\to\pm\infty}\;
e^{\imath t \widetilde{A}}\,
\langle v^\piso_\pm|\,
\widetilde{O}_{-}\,
e^{-\imath t \widetilde{A}}\;=\;
\lim_{b\to\pm\infty}
\langle v^\piso_\pm|
\Bigl(\,(V^\piso)^{-1}-G^\piso_0(f^{-1}(b)+\imath 0)\,\Bigr)^{-1}\;
\frac{
\bigl|\Im m\,G^\piso_0(f^{-1}(b))\bigr|^{\frac{1}{2}}
}{
e^{\frac{b}{2}}+e^{-\frac{b}{2}}
}\;\Pi_{b}\,.
$$
\noindent The limit on the r.h.s. is given by $\imath (D_\pm)^{-\frac{1}{2}} \langle \psi_\pm|$. Replacing these factors leads to the formula for $\widetilde{R}_{\pm,a}$. After integration over $a$, the trace of $\widetilde{T}_\pm$ is given, up to a sign, by the rotation number of the map $a\in{\mathbb R}\mapsto (e^{\pi a}\mp\,\imath)/(e^{\pi a}\pm\,\imath)\in{\mathbb S}^1$. The latter is equal to $\pm\,\frac{1}{2}$. The sign in the definition of $\widetilde{T}_\pm$ compensates this sign, leading to the final result.
\hfill $\Box$
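The rotation number appearing at the end of the proof can be made concrete: the map $a\mapsto (e^{\pi a}\mp\imath)/(e^{\pi a}\pm\imath)$ runs along the unit circle from $-1$ (at $a=-\infty$) to $1$ (at $a=+\infty$), accumulating a total phase of $\pm\pi$, that is, $\pm\frac{1}{2}$ of a full turn. A purely illustrative numerical check:

```python
import numpy as np

def rotation_number(sign, a_min=-30.0, a_max=30.0, n=200001):
    # winding of a -> (e^{pi a} - sign*i)/(e^{pi a} + sign*i) along the unit circle
    a = np.linspace(a_min, a_max, n)
    g = (np.exp(np.pi * a) - sign * 1j) / (np.exp(np.pi * a) + sign * 1j)
    assert np.allclose(np.abs(g), 1.0)            # the values lie on S^1
    phase = np.unwrap(np.angle(g))
    return (phase[-1] - phase[0]) / (2 * np.pi)

assert abs(rotation_number(+1) - 0.5) < 1e-6      # upper sign: +1/2
assert abs(rotation_number(-1) + 0.5) < 1e-6      # lower sign: -1/2
print("rotation numbers:", rotation_number(+1), rotation_number(-1))
```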
\vspace{.3cm}
\subsection{The time delay operator}
\label{scatt08.ssect-timedelay}
\noindent The time delay operator $T$ is the derivative of the scattering matrix w.r.t. the energy (the notation $T$ should not be confused with the $T$-matrix). More formally, it is defined by $T=\frac{1}{\imath}\,S^{-1}[A,S]$ whenever $S$ is differentiable w.r.t. the dilation $A$. In the REF it becomes
$$\widetilde{T}\;=\;
\int db\;\widetilde{T}_b\,,
\hspace{2cm}
\widetilde{T}_b\;=\;
\frac{1}{\imath}\,
(\widetilde{S}_b)^{-1}
\partial_b\widetilde{S}_b\,,
$$
\noindent while in the EF representations it is given by
$$\overset{\;\circ}{T}\;=\;
\int^{E_+}_{E_-}dE\;\overset{\;\circ}{T}_E\,,
\hspace{2cm}
\overset{\;\circ}{T}_E\;=\;
\frac{1}{\imath}\;
(\overset{\;\circ}{S}_E)^{-1}
\partial_E\overset{\;\circ}{S}_E\,.
$$
\noindent The {\em total time delay} is the trace of $T$. The formula given in the following result is sometimes called the {\em spectral property} of the time delay \cite{TO,New}.
\begin{theo}
\label{scatt08.theo-Tin3d}
Let the assumptions of {\rm Theorem~\ref{scatt08.theo-waveop3d}} hold. In addition, suppose that $\Ff_E={\mathbb C}^{|\Lambda|}$ for almost all $E$ and that there are no threshold eigenvalues. Then, for almost all $E\in [E_-,E_+]$,
\begin{equation}
\label{scatt08.eq-Tin3d}
\TR_{L^2(\Sigma,\nu)}(\overset{\;\circ}{T}_E)\;=\;
\lim_{\epsilon\downarrow 0}\;
2\;\Im m \;\TR_{\ell^2({\mathbb Z}^d)}
\left(
\frac{1}{E-\imath \epsilon-H} \;-\;
\frac{1}{E-\imath \epsilon-H_0}
\right)\,.
\end{equation}
\end{theo}
\begin{rem}
\label{scatt08.rem-Tin3da}
{\em The condition $\Ff_E={\mathbb C}^{|\Lambda|}$ for almost all $E$ implies that $H$ has no embedded eigenvalues (see Proposition~\ref{scatt08.prop-embedded}). If $H$ has embedded eigenvalues at energy $E$ with multiplicity $n(E)$, then the r.h.s. of equation (\ref{scatt08.eq-Tin3d}) must be modified by subtracting $n(E)/\epsilon$ from the trace to compensate for the singularity occurring at this energy. One should also be able to deal with a threshold singularity by subtracting the adequate contribution. However, no details are provided here.}
\hfill $\Box$
\end{rem}
\begin{rem}
\label{scatt08.rem-Tin3db}
{\em The previous formula for the total time delay is well-known for potential scattering when $H_0$ is the Laplacian in ${\mathbb R}^d$ (see \cite{CN} for a list of references). It can be proved by a direct calculation in the REF representation (following the lines of \cite{TO}) or by a computation inspired by the Birman-Krein formula \cite{Yaf} for the scattering phase, an approach used below.}
\hfill $\Box$
\end{rem}
\noindent {\bf Proof of Theorem~\ref{scatt08.theo-Tin3d}:} The following notation will be used: $\Pi_E=\piso_{f(E)}$, $C_E=C_{f(E)}$, etc. From equation~\eqref{scatt08.eq-Sformula}, it follows that
$$
\frac{1}{2\imath}\,\partial_E\overset{\;\circ}{S}_E\;=\;
-\partial_E\Pi^*_E(C_E+\imath)^{-1}\Pi_E +
\Pi^*_E(C_E+\imath)^{-1}\partial_E C_E(C_E+\imath)^{-1}\Pi_E-
\Pi^*_E(C_E+\imath)^{-1}\partial_E\Pi_E\,.
$$
\noindent The equation $\Pi_E\Pi_E^*=\one_{{\mathbb C}^{|\Lambda|}}$ implies $\partial_E\Pi_E\Pi_E^*=-\Pi_E\partial_E\Pi_E^*$. Hence,
$$\TR_{L^2(\Sigma,\nu)}(\overset{\;\circ}{T}_E)\;=\;
\frac{1}{\imath}\;
\TR_{\Ff_E}
\left(
\Pi_E(\overset{\;\circ}{S}_E)^*\Pi_E^*\,
\Pi_E\partial_E \overset{\;\circ}{S}_E \Pi_E^*
\right)\;=\;
2\,\TR_{{\mathbb C}^{|\Lambda|}}
\left(
(C_E^2+1)^{-1}\partial_E C_E
\right)\,.
$$
\noindent This can be rewritten as
$$\TR_{L^2(\Sigma,\nu)}(\overset{\;\circ}{T}_E)\;=\;
\frac{1}{\imath}\;
\partial_E\,
\ln\,\det
\left(
\frac{C_E-\imath}{C_E+\imath}
\right)\,.
$$
\noindent Therefore dividing out the imaginary part of the Green function appearing in the definition of $C_E$ (see Theorem~\ref{scatt08.theo-Sin3d}) gives
\begin{eqnarray*}
\TR_{L^2(\Sigma,\nu)}(\overset{\;\circ}{T}_E)
& = &
\frac{1}{\imath}\;
\partial_E\,\ln\,\mbox{det}\Bigl(
P_E\bigl((V^\piso)^{-1}-G^\piso_0(E-\imath 0)\bigr)P_E
\bigl(P_E((V^\piso)^{-1}-G^\piso_0(E+\imath 0))P_E\bigr)^{-1}\Bigr)
\\
& = & 2\;\Im m\;\partial_E\,\ln\,\mbox{det}
\bigl(P_E((V^\piso)^{-1}-G^\piso_0(E-\imath 0))P_E\bigr)
\,.
\end{eqnarray*}
\noindent On the other hand, using \eqref{scatt08.eq-Greenperturb2} and the cyclicity of the trace, leads to
$$
\TR_{\ell^2({\mathbb Z}^d)}
\bigl((z-H)^{-1}-(z-H_0)^{-1}\bigr)\;=\;
\TR_{{\mathbb C}^{|\Lambda|}}
\Bigl(
\piso(z-H_0)^{-2}\piso^*
\bigl((V^\piso)^{-1}-G^\piso_0(z)\bigr)^{-1}
\Bigr)\,.
$$
\noindent Since $\partial_zG^\piso_0(z)=-\piso(z-H_0)^{-2}\piso^*$, it follows that
$$
\TR_{\ell^2({\mathbb Z}^d)}
\bigl((z-H)^{-1}-(z-H_0)^{-1}\bigr)\;=\;
\partial_z\;
\TR_{{\mathbb C}^{|\Lambda|}}
\Bigl( \ln\bigl((V^\piso)^{-1}-G^\piso_0(z)\bigr)\Bigr)\,.
$$
\noindent Taking $z=E-\imath\epsilon$, twice the imaginary part and the limit $\epsilon\downarrow 0$, and using that $P_E=\one$ by hypothesis, this leads to the identity \eqref{scatt08.eq-Tin3d}.
\hfill $\Box$
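The determinant identity underlying the proof, $\frac{1}{\imath}\,\partial_E \ln\det\bigl((C_E-\imath)(C_E+\imath)^{-1}\bigr)=2\,\mbox{tr}\bigl((C_E^2+1)^{-1}\partial_E C_E\bigr)$, can be tested numerically on a randomly generated smooth self-adjoint family $E\mapsto C_E$ (illustration only; since the determinant has modulus one, its logarithmic derivative is the rate of change of its phase):

```python
import numpy as np

rng = np.random.default_rng(2)
L = 3
herm = lambda M: (M + M.conj().T) / 2
C0 = herm(rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L)))
C1 = herm(rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L)))
C = lambda E: C0 + E * C1            # a smooth self-adjoint family E -> C_E
I = np.eye(L)

def cayley_det(E):
    # det((C_E - i)(C_E + i)^{-1}), a complex number of modulus one
    return np.linalg.det((C(E) - 1j * I) @ np.linalg.inv(C(E) + 1j * I))

E, h = 0.3, 1e-5
# (1/i) d/dE log det = phase derivative; the ratio of nearby determinants is
# close to 1, so taking its argument avoids any branch-cut issue
lhs = np.angle(cayley_det(E + h) / cayley_det(E - h)) / (2 * h)
rhs = 2 * np.trace(np.linalg.inv(C(E) @ C(E) + I) @ C1)
assert abs(lhs - rhs) < 1e-4
print(lhs, rhs)
```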
\vspace{.3cm}
\subsection{A Levinson-type theorem}
\label{scatt08.ssect-Levinson}
\begin{theo}
\label{scatt08.theo-Levinson3d}
Let the assumptions of {\rm Theorem~\ref{scatt08.theo-waveop3d}} hold. Further let $N=\TR({P}_{\mbox{\rm\tiny pp}})$ be the number of bound states of $H$, including embedded eigenvalues and threshold eigenvalues. Then
\begin{equation}
\label{scatt08.eq-Levinson}
\frac{1}{2\pi}\;
\TR_{\ell^2({\mathbb Z}^d)}(T)\;+\;N\;=\;
\left\{
\begin{array}{cc}
-\,\frac{1}{2}\,\dim({\mathcal S}_{E_+})-
\frac{1}{2}\,\dim({\mathcal S}_{E_-})
& \mbox{\rm if }d=3\,,\\
0 & \mbox{\rm if }d\geq 4\,,
\end{array}
\right.
\end{equation}
\noindent where $\dim({\mathcal S}_{E_\pm})\in\{0,1\}$ is the multiplicity of the threshold resonance.
\end{theo}
Two proofs will be provided. The first one requires stronger hypotheses and shows how the argument principle combined with the spectral property of the time delay can be used, as in most standard references \cite{RS,New}. It might be possible to lift these hypotheses with more technical effort (see Section~\ref{sec-example} where this is done for a point interaction). The second proof requires only the stated hypotheses. It is based on the approach proposed by Kellendonk and Richard \cite{KR06} and uses the index theorem for \CsS.
\vspace{.2cm}
\noindent {\bf First proof} of Theorem~\ref{scatt08.theo-Levinson3d}: It will be assumed that there are no embedded eigenvalues and no threshold singularities and that $\Ff_E={\mathbb C}^{|\Lambda|}$ for almost all $E$ (these assumptions hold for the case of a single site impurity). The number $N$ of eigenvalues is obtained by counting the poles of the resolvent using the Cauchy formula and a contour integration. The contour is given by two circles, one large counterclockwise oriented circle $\gamma$ around the spectrum of $H$ and a second small clockwise oriented circle $\Gamma$ around the spectrum of $H_0$ (but not touching it). Then
\begin{equation}
\label{scatt08.eq-argprin}
N\;=\;
\oint_{\Gamma+\gamma}
\frac{dz}{2\pi\imath}\;
\TR_{\ell^2({\mathbb Z}^d)}
\bigl((z-H)^{-1}-(z-H_0)^{-1}\bigr)\,.
\end{equation}
\noindent The resolvent identity implies that the contribution of $\gamma$ vanishes in the limit where its radius goes to infinity. Then let $\Gamma$ converge to the concatenation of the two intervals $[E_-+\imath 0,E_++\imath 0]$ and $[E_--\imath 0,E_+-\imath 0]$. Since it has been assumed that there is no threshold singularity, the regularity of the Green function at the band edges implies that the small circles connecting these contours near the band edges have a vanishing contribution to the contour integral. Thus
$$N\;=\;
\int_{E_-}^{E_+}
\frac{dE}{2\pi\imath}\;
\TR_{\ell^2({\mathbb Z}^d)}
\left(
\frac{1}{E+\imath 0-H}-
\frac{1}{E+\imath 0-H_0}-
\frac{1}{E-\imath 0-H}+
\frac{1}{E-\imath 0-H_0}
\right)\,.
$$
\noindent The formula for the total time delay, proved in Theorem~\ref{scatt08.theo-Tin3d}, gives
$$N\;=\; -\,
\frac{1}{2\pi}\;
\int^{E_+}_{E_-}dE\;\TR_{L^2(\Sigma,\nu)}(\overset{\;\circ}{T}_E)\;.
$$
\noindent The r.h.s. is nothing but the trace of $T$ expressed in the EF representation, leading to the result.
\hfill $\Box$
\vspace{.2cm}
\noindent {\bf Second proof} of Theorem~\ref{scatt08.theo-Levinson3d}: As a preamble, let us construct an extension of the algebra $C_0({\mathbb R})$ of continuous functions on ${\mathbb R}$ vanishing at $+\infty$ and $-\infty$. Set $C_{\infty}({\mathbb R})=C(\overline{{\mathbb R}})$ where $\overline{{\mathbb R}}= \{-\infty\}\cup {\mathbb R}\cup\{+\infty\}$, namely the algebra of continuous functions having limits at $+\infty$ and $-\infty$. The evaluation map $\mbox{ev}$ is the $\ast$-homomorphism defined by $\mbox{ev}:g\in C(\overline{{\mathbb R}})\mapsto \mbox{ev}(g)= (g(+\infty),g(-\infty))\in {\mathbb C}^2={\mathbb C}\oplus {\mathbb C}$. The kernel of this map is precisely $C_0({\mathbb R})$, leading to a short exact sequence $0\to C_0({\mathbb R})\hookrightarrow C_\infty({\mathbb R})\overset{\mbox{\rm\tiny ev}}{\to}{\mathbb C}\oplus {\mathbb C}\to 0$. Next let us consider a two-dimensional version of this extension. Let $C_\infty({\mathbb R}^2)$ be the algebra of continuous functions on ${\mathbb R}^2$ having a continuous limit function built from the limits of the function in the directions of ${\mathbb R}^2$. These directions can be described by ${\mathbb S}^1$, so that one obtains the exact sequence $0\to C_0({\mathbb R}^2)\hookrightarrow C_\infty({\mathbb R}^2)\overset{\mbox{\rm\tiny ev}}{\to}C({\mathbb S}^1)\to 0$. On the other hand, the limit points at infinity can also be seen as a square whose corners are given by the points with coordinates $(\pm\infty,\pm\infty)$. Equivalently, $C_\infty({\mathbb R}^2)$ can be seen as the subalgebra of the $C^\ast$-tensor product $C_\infty({\mathbb R})\otimes C_\infty({\mathbb R})$ generated by functions that are continuous at the four corners of the square at infinity (let us point out that there is a unique $C^\ast$-norm on the tensor product since these algebras are abelian). 
The exact sequence relevant for the proof of Levinson's theorem is a non-commutative analog of this, where the two commuting coordinate functions in $C_\infty({\mathbb R})\otimes C_\infty({\mathbb R})$ are replaced by the operators $\widetilde{A}$ and $\widetilde{B}$ obeying the canonical commutation relations, and then everything is tensorized with the $C^\ast$-algebra of compact operators.
\vspace{.2cm}
Let ${\mathscr J}$ be the \Cs generated by operators of the form $f(\widetilde{A})\otimes K$ and $g(\widetilde{B})\otimes K$ with $f,g\in C_0({\mathbb R})$ and $K$ in the set ${\mathcal K}$ of compact operators on $L^2(\Sigma,\nu)$. Let ${\mathscr E}$ denote the extension of ${\mathscr J}$ obtained by allowing $f$ and $g$ to have nonzero finite limits at $\pm\infty$. Evaluation at infinity of ${\mathscr E}$ gives the algebra ${\mathscr A}$ which is the subalgebra of $\bigl(C_\infty(\widetilde{A})\oplus C_\infty(\widetilde{B})\oplus C_\infty(\widetilde{A})\oplus C_\infty(\widetilde{B})\bigr)\otimes{\mathcal K}$ of operators having coinciding limits at each of the four corners. This leads to the following short exact sequence (see \cite{KR06}):
$$0\,\to\,{\mathscr J}\,
\hookrightarrow\,{\mathscr E}\,
\overset{\mbox{\rm\tiny ev}}{\to}\,{\mathscr A}\,\to\,0\,.
$$
\noindent It induces a six-term exact sequence in $K$-theory \cite{Bla98}, leading to a canonical index map $\mbox{\rm Ind}: K_1({\mathscr A})\to K_0({\mathscr J})$. As it turns out, using the unitary operators ${\mathcal U}\Ff$ where ${\mathcal U}$ is defined in equation (\ref{scatt08.eq-Udef}) and $\Ff$ is the Fourier transform, the elements of ${\mathscr J}$ are mapped into compact operators on $\ell^2({\mathbb Z}^d)$. Therefore $K_0({\mathscr J})={\mathbb Z}$ \cite{Bla98}. Hence the index is an integer. At this level of generality, the index is obtained as follows. Given a unitary element $S$ in ${\mathscr A}$, it is lifted to an element $\Omega\in{\mathscr E}$ such that $\mbox{ev}(\Omega)=S$. Then, $\mbox{ev}(\Omega\Omega^\ast-{\mathbf 1})=0=\mbox{ev}(\Omega^\ast\Omega-{\mathbf 1})$. This means that, if $\Omega$ can be chosen to be a partial isometry, then both ${\mathbf 1}-\Omega\Omega^\ast$ and ${\mathbf 1}-\Omega^\ast\Omega$ are compact projections. Therefore the index of $S$ is defined by $\mbox{\rm Ind}(S)= \TR(\Omega\Omega^\ast- \Omega^\ast\Omega)$ which is indeed an integer. At a more computational level, if there is a derivation $\partial$ acting in ${\mathscr A}$ and if there is a faithful $\partial$-invariant trace on ${\mathscr A}$, the index of $S$ is computed as $\mbox{\rm Ind}(S)= \TR(S^{-1}\partial S)/(2\pi\imath)$ \cite{Bla98}.
\vspace{.2cm}
In the present situation, the $S$-matrix, the operators $\widetilde{R}_\pm$ and ${\mathbf 1}$, one for each side of the square at infinity, combine into a unitary element $\widetilde{S}_{\mbox{\rm\tiny tot}}=\widetilde{S}\oplus\widetilde{R}_+\oplus{\bf 1}\oplus\widetilde{R}_-$ of ${\mathscr A}$. In addition, thanks to equation~(\ref{scatt08.eq-Blimits}) and to the definition of $\widetilde{R}_\pm$ (see Section~\ref{scatt08.sec-threshold}), this unitary operator can be seen as $\mbox{ev}(\widetilde{\Omega}_-)$ where $\widetilde{\Omega}_-$ is one of the wave operators. If it can be proved that $\widetilde{\Omega}_-\in{\mathscr E}$, then Levinson's theorem follows immediately from the following two remarks:
\vspace{.1cm}
\noindent (i) $\Omega_-^\ast\Omega_-={\mathbf 1}$ and ${\mathbf 1} -\Omega_-\Omega_-^\ast$ is the projection onto the pure point spectrum of $H$.
\vspace{.1cm}
\noindent (ii) The algebra ${\mathscr A}$ admits a trace $\TR_{{\mathscr A}}$ which consists of the trace on ${\mathcal K}$ followed by the integrals over $a$ and $b$ respectively along each side of the square at infinity. In addition, the derivatives $\partial_a$ and $\partial_b$ define a derivation $\partial_{{\mathscr A}}$ acting on ${\mathscr A}$ which leaves $\TR_{{\mathscr A}}$ invariant. These derivatives are, up to factors of $\imath$, also given by the commutators with $\widetilde{B}$ and $\widetilde{A}$ respectively. Therefore, the index of $\widetilde{S}_{\mbox{\rm\tiny tot}}$ is given by
$$\mbox{\rm Ind}(\widetilde{S}_{\mbox{\rm\tiny tot}})\;=\;
\TR \left(\Omega\Omega^\ast- \Omega^\ast\Omega\right) \;=\;
\frac{1}{2\pi} \;\TR_{{\mathscr A}}
\left((\widetilde{S}_{\mbox{\rm\tiny tot}})^{-1}\partial_{{\mathscr A}} \widetilde{S}_{\mbox{\rm\tiny tot}}\right)\,,
$$
\noindent which is exactly equation~(\ref{scatt08.eq-Levinson}) when written in the language used previously.
\vspace{.2cm}
The main remaining point is to prove that the wave operators are elements of the \Cs ${\mathscr E}$. In some sense, most of the technical preparations above have been dedicated to proving precisely this point. The formula in Theorem~\ref{scatt08.theo-waveop3d} shows that $\widetilde{\Omega}_\pm$ is given by a concatenation of three operators: first the operators $\widetilde{O}_\pm=\int db\;\widetilde{O}_{\pm,b}$ which, by Theorem~\ref{scatt08.theo-waveop3d}, have fibers $\widetilde{O}_{\pm,b}$ depending continuously on $b$ with limits at $b=\pm\infty$ (due to Proposition~\ref{scatt08.prop-R}), then a smooth function of $\widetilde{A}$ which also has limits at infinity, and finally the imaginary part of the Green matrix, which shares these properties. One notes that the limits in the four corners coincide due to Propositions~\ref{scatt08.prop-Slimits} and \ref{scatt08.prop-R}. Indeed,
$$
\lim_{b\to \pm \infty} \widetilde{S}_b \;=\;
\one-2\,|\psi_\pm\rangle \langle \psi_\pm|\,,
\quad
\lim_{b\to \pm \infty} \one_b\;=\; \one\,,
\quad
\lim_{a\to -\infty} \widetilde{R}_{\pm,a} \;=\;
\one-2\,|\psi_\pm\rangle \langle \psi_\pm|\,,
\quad
\lim_{a\to +\infty}\widetilde{R}_{\pm,a} \;=\; \one\,.
$$
\noindent In conclusion, $\widetilde{\Omega}_-\in{\mathscr E}$.
\hfill $\Box$
\vspace{.2cm}
\subsection{Example of a point interaction}
\label{sec-example}
Here we discuss the example of the perturbation $V=\lambda\,|0\rangle\langle 0|$, $\lambda\in{\mathbb R}$, localized on one site. Hence $\Pi=|0\rangle\langle 0|$, $\Lambda=\{0\}$ and one has $L=|\Lambda|=1$ even if ${\mathcal E}$ is a polynomial. Furthermore $G^\Pi_0(z)=G_0(z)$ is a number (and not a matrix of larger size). The behavior of the real and imaginary part of $G_0(E-\imath 0)$ can be read off directly from Proposition~\ref{scatt08.prop-Green3d}. Note that, in particular, the imaginary part does not vanish on $(E_-,E_+)$. Let us introduce the critical coupling constants $\lambda_{\pm}=1/G_0(E_\pm)$. Note that $\lambda_{-}<0$ and $\lambda_{+}>0$. Because the perturbation determinant is $1-\lambda G_0(z)$, the operator $H=H_0+V$ has an eigenvalue smaller than $E_-$ if and only if $\lambda<\lambda_{-}$, and an eigenvalue larger than $E_+$ if and only if $\lambda>\lambda_{+}$. If $\lambda=\lambda_{\pm}$ there is a threshold singularity at $E_\pm$. In dimensions $d=3$ and $d=4$ this singularity is a threshold resonance, whereas for $d\geq 5$ it is a threshold eigenvalue. There is never an embedded eigenvalue (not even for polynomial ${\mathcal E}$).
\vspace{.2cm}
The scattering matrix given by \eqref{scatt08.eq-Sformula} differs from the identity only on a one-dimensional subspace:
$$
\overset{\;\circ}{S}_E
\;=\;(\one-\piso_{E}^*\piso_{E})\,+\,
\frac{\lambda^{-1}-G_0(E-\imath 0)}{\lambda^{-1}-G_0(E+\imath 0)}
\;\piso_{E}^* \piso_{E}
\;.
$$
Hence
$$
\TR(\overset{\;\circ}{T}_E)\;=\;
\frac{1}{\imath}\;
\partial_E\;\ln
\left(
\frac{\lambda^{-1}-G_0(E-\imath 0)}{\lambda^{-1}-G_0(E+\imath 0)}
\right)
\;.
$$
From this one readily deduces the winding number $\TR(T)$ if $\lambda\not =\lambda_\pm$. For the exceptional cases of threshold singularities, we need more precise information about the Green function as given in Proposition~\ref{scatt08.prop-Green3d}. For $d=3$ and $d\geq 5$ one has
$$
G_0(E-\imath 0)\;=\;
G_0(E_\pm)+N_\pm (E_\pm-E)+\imath D_\pm|E-E_\pm|^{\frac{d-2}{2}}\,+\,o(E-E_\pm)
\;.
$$
The constants satisfy $D_\pm>0$, and, for $d\geq 5$, one also has $N_\pm<0$. Hence, if $\varphi(E)$ denotes the phase of $\lambda_{\pm}^{-1}-G_0(E-\imath 0)$, one has $\varphi(E_+)=0$ and $\varphi(E_-)=\pi$ with one loop in the clockwise orientation for $d\geq 5$, while in $d=3$ one has $\varphi(E_+)-\varphi(E_-)=\frac{\pi}{2}$ for $\lambda=\lambda_{\pm}$. Hence the total scattering phase is
$$
\TR(T)\;=\;
\int^{E_+}_{E_-}dE\;\,\TR(\overset{\;\circ}{T}_E)
\;=\;
2\pi\;
\left\{
\begin{array}{cc}
0 & \mbox{ for }\lambda\in(\lambda_{-},\lambda_{+})\;,
\\
-\,1 & \mbox{ for }\lambda<\lambda_{-}\mbox{ or }\lambda>\lambda_{+}\;,
\\
-\,1 & \mbox{ for }\lambda=\lambda_{\pm}\mbox{ and } d\geq 5\;,
\\
-\,\frac{1}{2} & \mbox{ for }\lambda=\lambda_{\pm}\mbox{ and } d=3\;.
\end{array}
\right.
$$
This fits with Levinson's theorem. In particular, in the threshold case for $d\geq 5$ there is a threshold eigenvalue, while for $d=3$ there is a threshold resonance (no bound state) but the correction on the r.h.s. of \eqref{scatt08.eq-Levinson} is $-\frac{1}{2}$ because $\dim({\mathcal S}_{E_\pm})=1$.
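The winding numbers above can also be checked numerically. A sketch for the standard lattice Laplacian in $d=3$ (dispersion ${\mathcal E}(k)=2\sum_i\cos k_i$, so $E_\pm=\pm 6$), with a discretized Brillouin-zone average standing in for $G_0$ and a small $\eta>0$ standing in for the boundary value $E-\imath 0$; the grid sizes and the values of $\lambda$ are illustrative choices:

```python
import numpy as np

def lattice_G0(energies, eta=0.05, nk=32):
    """Brillouin-zone average of (E - i*eta - E(k))^{-1} for the 3d
    lattice Laplacian, E(k) = 2*(cos kx + cos ky + cos kz)."""
    k = 2.0 * np.pi * (np.arange(nk) + 0.5) / nk
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    disp = 2.0 * (np.cos(kx) + np.cos(ky) + np.cos(kz)).ravel()
    return np.array([np.mean(1.0 / (E - 1j * eta - disp)) for E in energies])

def scattering_winding(lam, energies, G0):
    """TR(T)/(2*pi): for this rank-one example TR(T°_E) = 2 d/dE arg w(E)
    with w(E) = 1/lam - G_0(E - i0), so integrate the phase of w."""
    w = 1.0 / lam - G0
    phase = np.unwrap(np.angle(w))
    return (phase[-1] - phase[0]) / np.pi

energies = np.linspace(-6.0, 6.0, 301)
G0 = lattice_G0(energies)
print(round(float(scattering_winding(1.0, energies, G0))))   # lambda inside (lambda_-, lambda_+)
print(round(float(scattering_winding(10.0, energies, G0))))  # lambda above lambda_+
```

For $\lambda=1\in(\lambda_-,\lambda_+)$ the total phase change is $0$, while for $\lambda=10>\lambda_+$ it is $-1$ (one bound state above the band), in agreement with the table.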
\vspace{.2cm}
It is also instructive to complete the argument principle proof in the situation $\lambda\in[\lambda_-,\lambda_+]$. We start from \eqref{scatt08.eq-argprin} and remove $\gamma$ as above. Replacing the resolvent identity \eqref{scatt08.eq-Greenperturb2}, then gives
$$
0\;=\;
\int_\Gamma\frac{dz}{2\pi\imath}\;
\frac{\langle 0|(z-H_0)^{-2}|0\rangle}{\lambda^{-1}-G_0(z)}
\;=\;
\int_{G_0(\Gamma)}\frac{dG}{2\pi\imath}\;
\frac{1}{\lambda^{-1}-G}
\;,
$$
where in the second equality we used $\langle 0|(z-H_0)^{-2}|0\rangle=-\partial_z G_0(z)$. Now let us analyze the path $G_0(\Gamma)$ using the results of Proposition~\ref{scatt08.prop-Green3d}. It is always a closed curve crossing the real axis exactly twice, at $G_0(E_-)$ and $G_0(E_+)$. It is positively oriented and the limit curve is approached from the inside (as $\epsilon\downarrow 0$). The dimension $d$ now leads to crucial differences in how the real axis is crossed. For $d=3$ it crosses transversally, as for a circle, while for $d\geq 4$ it is a spike pointing inward. Hence, if $\lambda=\lambda_\pm$, the singularity leads to a contribution equal to $\frac{1}{2}$ for $d=3$ and equal to $1$ for $d\geq 4$. In particular, this shows that $\TR(T)=-1$ for $\lambda=\lambda_\pm$ also for $d=4$, a case that was not covered by Theorem~\ref{scatt08.theo-Levinson3d}.
\section{Neural network optimization}
\label{app:NN}
Before the analysis in the main paper was carried out, a number of optimization studies were performed which we will describe here.
Initial studies included linear regression and decision tree methods. Ultimately, for the models discussed in this work, better convergence was found with sequential feed-forward neural networks.
In our optimization studies, we varied one hyperparameter -- or in some cases two -- at a time, keeping the rest fixed. This optimization was sufficient to produce networks performing at a sub-percent validation mean absolute percentage error ($\bar{E}_{\%}$). The hyperparameters in Table~\ref{tab:nnhypers} were used as a default from which to improve, and the range of hyperparameter testing is given in Table~\ref{tab:nn-hypers-test}. Network performance was assessed with the minimum value of $\bar{E}_{\%}$. The minimum $\bar{E}_{\%}$ was recorded for ten networks of each configuration, and the corresponding mean and standard error were calculated.
The optimal shape of the network was found at seven hidden layers with $410\pm 25$ neurons per layer. However, the variance of the network performance effectively plateaued, and the stochastic nature of network training dominated the variation in $\bar{E}_{\%}$ for networks with more than three hidden layers and two hundred neurons per layer. Typically, increased network size increases the danger of over-fitting; here, over-fitting was only seen with small numbers of input training data points. Training time and forward-propagation time can be reduced by using smaller networks. Thus, a network with three hidden layers and two hundred neurons per layer is advised. However, when using more training data points, one can see performance improvements from the increased degrees of freedom.
The Adam optimizer performs very well across a wide variety of tasks, and this problem was no exception, with Adam providing modest improvements over Nadam and Adamax. Notably, SGD failed to converge, implying that per-parameter historical weighting was essential. In addition, reducing the learning rate after a set number of epochs was seen to improve $\bar{E}_{\%}$ with large datasets; however, this was not tested rigorously. Adjustment of the $\beta_1$ and $\beta_2$ hyperparameters was investigated, though network performance was insensitive to both parameters for values above 0.8. Although the Adam optimizer possesses an adaptive learning rate, extreme learning-rate values yielded the expected poor convergence. A learning rate in the region of $[4\cdot 10^{-4},2 \cdot 10^{-2}]$ yielded consistent $\bar{E}_{\%}$.
Network performance should scale with training-data size, and this was observed: performance exhibited power-law scaling with training-data size, and exponentially more data points are required to improve performance by the same amount. Thus, training time becomes the primary constraint on performance improvement.
Network performance was negatively correlated with batch size. The optimal performance was found with mini-batches of size 8; below this, network convergence was unreliable. The preference for lower batch sizes is in line with other typical training results. Training time is drastically affected by lowering the batch size, as parallelization is reduced. However, the performance improvements are significant enough to suggest a batch size of 8.
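For concreteness, the advised configuration can be sketched as a small self-contained NumPy implementation (ReLU hidden layers, Adam with $\beta_1=0.9$, $\beta_2=0.99$, MAPE loss, mini-batches of size 8); the layer widths, the toy target function and the training budget below are placeholder choices for illustration, not the configuration used for the physics data:

```python
import numpy as np

rng = np.random.default_rng(0)

def mape(pred, y):
    """Mean absolute percentage error, in percent."""
    return 100.0 * float(np.mean(np.abs((pred - y) / y)))

class MLP:
    """Feed-forward net with ReLU hidden layers, trained by Adam on MAPE loss."""

    def __init__(self, sizes):
        self.W = [rng.normal(0.0, np.sqrt(2.0 / m), (m, n))
                  for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]
        # Adam first/second moment estimates and step counter
        self.m = [np.zeros_like(p) for p in self.W + self.b]
        self.v = [np.zeros_like(p) for p in self.W + self.b]
        self.t = 0

    def forward(self, x):
        acts = [x]
        for W, b in zip(self.W[:-1], self.b[:-1]):
            acts.append(np.maximum(0.0, acts[-1] @ W + b))  # ReLU hidden layers
        acts.append(acts[-1] @ self.W[-1] + self.b[-1])      # linear output
        return acts

    def step(self, x, y, lr=5e-3, b1=0.9, b2=0.99, eps=1e-8):
        acts = self.forward(x)
        # gradient of the MAPE loss w.r.t. the prediction
        delta = 100.0 * np.sign(acts[-1] - y) / (np.abs(y) * len(y))
        nW = len(self.W)
        grads = [None] * (2 * nW)
        for i in range(nW - 1, -1, -1):
            grads[i] = acts[i].T @ delta          # dL/dW_i
            grads[nW + i] = delta.sum(axis=0)     # dL/db_i
            if i > 0:
                delta = (delta @ self.W[i].T) * (acts[i] > 0)
        self.t += 1
        for p, g, m, v in zip(self.W + self.b, grads, self.m, self.v):
            m[...] = b1 * m + (1 - b1) * g
            v[...] = b2 * v + (1 - b2) * g * g
            mhat = m / (1 - b1 ** self.t)
            vhat = v / (1 - b2 ** self.t)
            p -= lr * mhat / (np.sqrt(vhat) + eps)

X = rng.uniform(1.0, 2.0, (256, 1))
Y = 2.0 * X + 1.0                 # toy target, bounded away from zero for MAPE
net = MLP([1, 32, 32, 32, 1])     # three hidden layers (widths reduced for speed)
err_before = mape(net.forward(X)[-1], Y)
for epoch in range(40):
    order = rng.permutation(len(X))
    for s in range(0, len(X), 8):  # mini-batches of size 8
        idx = order[s:s + 8]
        net.step(X[idx], Y[idx])
err_after = mape(net.forward(X)[-1], Y)
print(f"training MAPE: {err_before:.1f}% -> {err_after:.1f}%")
```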
\onecolumngrid
\begin{table*}[b]
\begin{minipage}{.75\textwidth}
\begin{tabular}{|l | l |l |}
\hline
Hyperparameter $\quad$ & Default Value $\quad$ & Test Range \\ [0.5ex]
\hline\hline
Hidden layers* & 3 &$[2,8]$\\
\hline
Neurons/layer* & 512 & $[10,510]$ \\
\hline
Optimizer & Adam & SGD, RMSprop, Adam,
\\
& & NAdam, Adamax\\
\hline
Adam: $\beta_1$ and $\beta_2 $ & $\beta_1 = 0.9, \, \beta_2 = 0.99 \quad$ &$\beta_1 [0.5,0.99], \, \beta_2 [0.5,0.99]$\\
\hline
Learning Rate & 0.001 &$[10^{-4},10^{-1}]$ \\
\hline
Loss function & MAPE & MAPE, MAE, MSE,
\\
& & Huber Loss\\
\hline
Training epochs & 100 & $[0,100]$ \\
\hline
Training set size & $10^5$ &$[16,10^5]$\\
\hline
Batch size & 256& $[4,500]$ \\
\hline
\end{tabular}
\caption{\small
The hyperparameters explored while analyzing network performance.
*Hidden layers and neuron count were varied simultaneously.}
\label{tab:nn-hypers-test}
\end{minipage}
\end{table*}
\twocolumngrid
\end{document}
\section{Introduction}
\label{S:intro}
There are numerous problems which can be reformulated as finding a
dominant eigenvector of a stochastic matrix --- computing invariant
probabilities in Markov chains, various problems in econometrics,
sociometry, bibliometrics, sport etc., see an excellent survey on
history of such applications \cite{giants}. Recently, ranking in
the Web (PageRank) attracted a lot of attention \cite{BrinPage,LanMeyer}.
Let us briefly remind this approach and discuss some problems which arise in relation to it. There
are $n$ pages (nodes of a graph) connected with outgoing and
incoming links. Then $ x_i$ --- score of i-th page of the Web,
$i=1,...,n$ is defined as follows:
{\em $x_i = \sum_{j\in L_i} x_j/n_j$, where $n_j$ is the number of outgoing links of page $j$ and $L_i$
is the set of pages linked to page $i$}.
Thus the score vector $x$ satisfies the equation
\begin{eqnarray}
Px=x
\eeai{1up}
with $P_{ij}=1/n_j$ if page $j$ cites page $i$
and $P_{ij}=0 $ otherwise. I.e. $x$ is an eigenvector
corresponding to eigenvalue $1$ of the (column) stochastic matrix $P$ of size $n\times n$.
In the example below $n=7$, outgoing links are denoted by arrows.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.5\linewidth]{graph-image.jpg}}
\caption{Test example}\label{test}
\end{figure}
The corresponding matrix $P$ and its dominant (or principal) eigenvector $\bar{x}$ are
$$P=\left(\begin{tabular}{ccccccc}
0 & 0 & 1/3 & 0 & 0 & 0 & 0 \\
1/2 & 0 & 0 & 0 & 0 & 0 & 0 \\
1/2 & 1 & 0 & 1/2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 1/3 & 1/2 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 1/3 & 0 & 0 & 1 & 0 \\
\end{tabular}\right),$$
$$\bar{x}=(0,0,0,0,0,0.5,0.5)^T.$$
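These statements are easy to verify numerically; a quick sketch:

```python
import numpy as np

# adjacency matrix of the 7-node test example
P = np.array([
    [0,   0, 1/3, 0,   0, 0, 0],
    [1/2, 0, 0,   0,   0, 0, 0],
    [1/2, 1, 0,   1/2, 0, 0, 0],
    [0,   0, 0,   0,   1, 0, 0],
    [0,   0, 1/3, 1/2, 0, 0, 0],
    [0,   0, 0,   0,   0, 0, 1],
    [0,   0, 1/3, 0,   0, 1, 0],
])
xbar = np.array([0, 0, 0, 0, 0, 0.5, 0.5])

assert np.allclose(P.sum(axis=0), 1.0)  # P is column-stochastic
assert np.allclose(P @ xbar, xbar)      # eigenvector for eigenvalue 1
```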
We see that the scores of pages 6 and 7 equal 0.5 while all other
pages have zero scores. Of course, this contradicts common sense and our
intuition about ``true scores''. There are several problems related to the above
definition:
\begin{enumerate}
\item We have assumed that $n_j\neq 0$ for all $j$. Yet, in applications, there are nodes
with no outgoing links --- \emph{dangling nodes} (e.g. pdf files).
One can easily tackle this problem by taking $n_j=n$ for all dangling
nodes. However, the more complicated problem of ``traps'' (absorbing nodes 6 and 7 in the above example) remains open.
\item In the case of disconnected
subgraphs (typical for real-life problems) $\bar{x}$ is not unique.
\item Small changes in $P$ may result in dramatic perturbations of $\bar{x}$ ---
the computed scores are non-robust.
\item Power method with iterations $x_{k+1}=Px_k$ do not converge for cyclic
matrices; for acyclic ones it may converge slowly due to the small gap $1-|\lambda_2|$
(here $\lambda_2$ is the second eigenvalue of $P$).
\end{enumerate}
PageRank medicine for these problems goes back to the pioneering
paper \cite{BrinPage}: allow the $i$-th page to receive a part of its score ``for free'' -- replace
the dominant eigenvector $\bar{x}$ of $P$ with the (unique) solution to the equation
\[x^{\alpha}=\alpha Px^{\alpha}+(1-\alpha)e,\; e_i=1/n, \;i=1,...,n.\]
In other words, the ``true'' adjacency matrix $P$ is replaced with
\begin{eqnarray}
M=\alpha P+(1-\alpha)E, \;\; E_{ij}=1/n, \;i,j=1,...,n,
\eeai{PR}
so that $x^{\alpha}=Mx^{\alpha}$ is the unique dominant eigenvector of stochastic matrix $M$.
Unlike a dominant eigenvector of $P$, $x^{\alpha}$ is
robust to variations in links, with fast convergence of power iterations (as
$\alpha^k$). As a justification, the model of a \emph{surfer in the Web}
is proposed in \cite{BrinPage}: a surfer follows links with probability $\alpha$ and jumps to
an arbitrary page with probability $1-\alpha$. Then $x^{\alpha}_i$
is the (asymptotic) average share of time he spends on page $i$.
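For the test example, $x^{\alpha}$ can be obtained directly from the linear system $(I-\alpha P)x^{\alpha}=(1-\alpha)e$; a quick sketch with $\alpha=0.85$ (the matrix $P$ is repeated to keep the snippet self-contained):

```python
import numpy as np

# adjacency matrix of the 7-node test example
P = np.array([
    [0,   0, 1/3, 0,   0, 0, 0],
    [1/2, 0, 0,   0,   0, 0, 0],
    [1/2, 1, 0,   1/2, 0, 0, 0],
    [0,   0, 0,   0,   1, 0, 0],
    [0,   0, 1/3, 1/2, 0, 0, 0],
    [0,   0, 0,   0,   0, 0, 1],
    [0,   0, 1/3, 0,   0, 1, 0],
])
n, alpha = 7, 0.85
e = np.full(n, 1.0 / n)
x_alpha = np.linalg.solve(np.eye(n) - alpha * P, (1 - alpha) * e)
print(np.round(x_alpha, 3))  # every page now receives a positive score
```

The solution automatically sums to one (take the inner product of the defining equation with the all-ones vector), and the trap nodes 6 and 7 still receive the largest, but no longer all, of the score mass.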
\par
Here we propose a different modification of the original score
definition \rf{1up}. Note that in most applications the nominal
matrices are known only approximately and are subject to
perturbations, so robustness issues become important. Our goal is to
provide a robust counterpart to standard techniques in the spirit of
Robust Optimization framework \cite{BTN}. In the hindsight, it
shares many common features with with Robust Least Squares
\cite{EGL}. To the best of our knowledge this approach to ranking
problems is new.
\section{Problem formulation}
We say that matrix $P\in \Real{n\times n}$ is \emph{stochastic}
(column-stochastic) if $P_{ij}\geq 0 \quad\forall i,j, \quad\sum_i
P_{ij}=1 \quad\forall j$. The set of all stochastic matrices is
denoted $\mathcal{S}$.
The celebrated Perron-Frobenius theorem states that
there exists a \emph{dominant} eigenvector $\bar{x}\in \Sigma$ (here $\Sigma=\big\{u\in \Real{n}|\,\sum_i u_i=1,\, u_i\ge 0\big\}$
being the standard simplex of $\Real{n}$) of the ``nominal'' matrix $P$:
$$P\bar{x}=\bar{x}.$$
\par
We are looking for a robust extension of the dominant eigenvector.
Let $\mathcal{P}\subset \mathcal{S}$ stand for the set of
\emph{perturbed} stochastic matrices $F=P+\xi$, where $\xi$ is a
perturbation. The function $\max_{F\in \mathcal{P}} \|Fx-x\|$, where
$\|\cdot\|$ is some norm, can be seen as a measure of ``goodness''
of a vector $x$ as common dominant eigenvector of the family
$\mathcal{P}$. We say that the vector $\hat{x}$ is a robust solution
of the eigenvector problem on $\mathcal{P}$ if \begin{eqnarray}
\hat{x}\in \mathop{\hbox{\rm Argmin$\,$}}_{x\in \Sigma}\left\{\max_{F\in \mathcal{P}} \|Fx-x\|\right\}.
\eeai{def}
Let us consider some choices of the parameters of the above definition -- uncertainty set $\mathcal{P}$ and the norm $\|\cdot\|$ -- for the PageRank problem.
Recall that our motivation is to ``immunize'' the score $\hat{x}$ against small perturbations of the adjacency matrix $P$. We may assume
that the uncertainty of the $j$-th column of $P$, in terms of the $\ell_1$-norm of its elements, is bounded by $\varepsilon_j$.
In other words, if $[\xi]_j$ is the $j$-th column of the perturbation matrix, then $\|[\xi]_j\|_1\le \varepsilon_j$, $j=1,...,n$. From now on we assume that $\varepsilon_j>0$, $j=1,...,n$. Further, we may fix the ``total uncertainty budget'' $\varepsilon\ge \|\xi\|_1$, where with some notational abuse, $\|\xi\|_1=\sum_{ij}|\xi_{ij}|$. With a ``natural choice'' of the norm $\|\cdot\|=\|\cdot\|_1$, the definition \rf{def} now reads
\begin{eqnarray}
\hat{x}^{(1)}&\in &\mathop{\hbox{\rm Argmin$\,$}}_{x\in \Sigma}\Big\{\max_{\xi} \|(P+\xi)x-x\|_1\Big| \nonumber \\
&&e^T[\xi]_{j}=0,\;\|[\xi]_j\|_1\le \varepsilon_j,\;\|\xi\|_1\le \varepsilon\Big\}
\eeai{all1}
(here $e_i=n^{-1}$, $i=1,...,n$).
Let $n_j$ be the number of outgoing links on page $j$. If we set $\varepsilon_j\le {1\over n_j}$, then $F=P+\xi$ is a stochastic matrix.
\par
A different robust eigenvector may be obtained if, instead of fixing a bound for the $\ell_1$-norm of the perturbation, we bound its Frobenius norm: $\|
\xi\|_F\le \varepsilon$. When combining it with the choice of the norm $\|\cdot\|=\|\cdot\|_2$ in \rf{def}, we come to the definition
\begin{eqnarray}
\hat{x}^{(2)}&\in &\mathop{\hbox{\rm Argmin$\,$}}_{x\in \Sigma}\Big\{\max_{\xi} \|(P+\xi)x-x\|_2\Big| \nonumber \\
&&e^T[\xi]_{j}=0,\;\|[\xi]_j\|_1\le \varepsilon_j,\;\|\xi\|_F\le \varepsilon\Big\}.
\eeai{1and2}
An interesting simple version of the problem \rf{1and2} may be obtained when substituting the ``column-wise constraints'' on $\xi$ with the requirement that $P+\xi$ is stochastic:
\begin{eqnarray}
\hat{x}^{(F)}&\in&\mathop{\hbox{\rm Argmin$\,$}}_{x\in \Sigma}\Big\{\max_{\xi} \|(P+\xi)x-x\|_2\Big|\nonumber \\
&& F+\xi \mbox{ is stochastic, }\|\xi\|_F\le \varepsilon\}.
\eeai{all2}
Obviously, there is a vast choice of uncertainty sets and
norms, which will result in different definitions of robust eigenvector.
The three definitions above may be viewed as a starting point for this study.
\section{Theoretical analysis}
Observe that the min-max problems involved in the definitions \rf{all1} -- \rf{all2} of the robust eigenvector are not convex-concave. Indeed, if $\Xi$ is the corresponding set of perturbation matrices, the function
\[
\phi_\Xi(x)= \max_{\xi\in \Xi} \|(P+\xi)x-x\|
\]
cannot be computed efficiently.
However, one can easily upper bound the optimal values of these problems by those of simple convex problems as follows.
\begin{proposition}
\label{spto1}
Let \[\Xi_1=\{\xi \in \Real{n\times n}\big|\;e^T[\xi]_{j}=0,\;\|[\xi]_j\|_1\le \varepsilon_j,\;\|\xi\|_1\le \varepsilon\},\] and let $\|\cdot\|=\|\cdot\|_1$. Then
\[
\phi_{\Xi_1}(x)\le \varphi_1(x),\;\varphi_1(x)=\|Px-x\|_1+{\varepsilon}g_1(x),
\]
where the convex function $g_1$ is defined as
\begin{eqnarray*}
g_1(x)=\min_{x=u+v}\Big\{ \|u\|_\infty+\sum_j{\varepsilon_j\over \varepsilon}|v|_j\Big\}.
\end{eqnarray*}
\end{proposition}
The proofs of the statements of this section are postponed till the appendix.
\par
Note that $g_1$ is a norm. Further, when $\varepsilon_1=...=\varepsilon_n$ and $s={\varepsilon\over \varepsilon_1}$ is an integer, $g_1(x)$ is simply the
average of the $s$ largest in magnitude elements of $x$. It is worth mentioning that computing $g_1(x)$ requires $O(n)$ (up to a logarithmic factor in $n$) arithmetic operations.
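Evaluating the variational definition in closed form (for equal $\varepsilon_j$ the minimum works out to $1/s$ times the sum of the $s$ largest magnitudes, attained by clipping $x$ at the $s$-th largest magnitude), $g_1$ can be computed by a partial selection in expected linear time; a sketch:

```python
import numpy as np

def g1(x, s):
    """g_1(x) for equal eps_j (s = eps/eps_1): 1/s times the sum of the
    s largest magnitudes of x; for s >= len(x) the minimum over the
    decomposition x = u + v is attained at u = 0, giving ||x||_1 / s."""
    a = np.abs(np.asarray(x, dtype=float))
    if s >= a.size:
        return a.sum() / s
    return a[np.argpartition(a, -s)[-s:]].sum() / s  # O(n) selection, no full sort

print(g1([3.0, 1.0, -4.0, 2.0], 2))  # (4 + 3) / 2 = 3.5
```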
We have an analogous bound for function $\phi_\Xi(x)$, involved in the definition \rf{1and2}:
\begin{proposition}
\label{spto2}
Let \[\Xi_2=\{\xi\in \Real{n\times n}\big|\;e^T[\xi]_{j}=0,\;\|[\xi]_j\|_1\le \varepsilon_j,\;\|\xi\|_F\le \varepsilon\},\] and let $\|\cdot\|=\|\cdot\|_2$. Then
\[
\phi_{\Xi_2}(x)\le \varphi_2(x),\;\varphi_2(x)=\|Px-x\|_2+{\varepsilon}g_2(x),
\]
where the convex function $g_2$ satisfies
\begin{eqnarray*}
g_2(x)=\min_{x=u+v}\Big\{ \|u\|_2+\sum_j{\varepsilon_j\over \varepsilon}|v|_j\Big\}.
\end{eqnarray*}
\end{proposition}
What was said about the numerical cost of computing $g_1(x)$ may be repeated here: computing $g_2(x)$ requires $O(n)$ (up to ``logarithmic factors'') arithmetic operations.
\par
We have a particularly simple and intuitive bound for \rf{all2}:
\begin{proposition}
\label{spto22}
Let \[
\Xi_F=\{P+\xi \mbox{ is stochastic, }\|\xi\|_F\le \varepsilon\}.\] Then
\[\phi_{\Xi_F}(x)\leq \varphi_F(x), \; \varphi_F(x)=\|Px-x\|_2+\varepsilon \|x\|_2.\]
\end{proposition}
Note that the upper bound $\varphi_F(x)$ for $\phi_{\Xi_F}$
is not very conservative. Indeed, the corresponding worst-case perturbation is
$$\xi^*=\frac{\varepsilon (Px-x)x^T}{\|Px-x\|\|x\|},$$
and for $F=P+\xi^*$ we have $\sum_i F_{ij}=1$, however the
condition $F_{ij}\geq 0$ (required for $F$ to be stochastic) may be
violated.
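This near-tightness can be checked numerically; a sketch using the $7\times 7$ matrix of the introduction and the uniform $x$ (any $x\in\Sigma$ with $Px\neq x$ works):

```python
import numpy as np

# adjacency matrix of the 7-node test example (repeated for self-containedness)
P = np.array([
    [0,   0, 1/3, 0,   0, 0, 0],
    [1/2, 0, 0,   0,   0, 0, 0],
    [1/2, 1, 0,   1/2, 0, 0, 0],
    [0,   0, 0,   0,   1, 0, 0],
    [0,   0, 1/3, 1/2, 0, 0, 0],
    [0,   0, 0,   0,   0, 0, 1],
    [0,   0, 1/3, 0,   0, 1, 0],
])
x = np.full(7, 1.0 / 7)  # an arbitrary point of the simplex
eps = 1.0
r = P @ x - x
xi = eps * np.outer(r, x) / (np.linalg.norm(r) * np.linalg.norm(x))

assert np.isclose(np.linalg.norm(xi), eps)     # ||xi||_F = eps
assert np.allclose((P + xi).sum(axis=0), 1.0)  # column sums stay equal to one
lhs = np.linalg.norm((P + xi) @ x - x)
rhs = np.linalg.norm(r) + eps * np.linalg.norm(x)
assert np.isclose(lhs, rhs)                    # the bound phi_F(x) is attained
print("worst-case perturbation attains phi_F(x) =", round(float(rhs), 4))
```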
\par
The results above can be summarized as follows: \\
let $\|\cdot\|_{(1)}$ and $\|\cdot\|_{(2)}$ be some norms (in the above examples we have $\|\cdot\|_{(1)}$ set to $\ell_1$- or $\ell_2$-norm, and
$\|\cdot\|_{(2)}=g_1,\;g_2$ or $\|\cdot\|_2$). With certain abuse of terminology, we refer to the vector
\begin{eqnarray}
\hat{x}\in \mathop{\hbox{\rm Argmin$\,$}}_{x\in \Sigma}\left\{\varphi(x)=\|Px-x\|_{(1)}+\varepsilon \|x\|_{(2)}\right\}
\eeai{robeig}
as (computable) \emph{robust dominant eigenvector} of the corresponding family of matrices.
Let us discuss some properties of the robust eigenvector.
\begin{enumerate}
\item $\hat{x}$ coincides with $\bar{x}$ if $P$ is regular and
$\varepsilon$ is small enough. Recall that stochastic
matrix $P$ is called \emph{regular}, if its largest eigenvalue
$\lambda_1=1$ is simple, while all other eigenvalues lay inside the
unit circle $1-|\lambda_i|\geq \mu>0, \;i=2,...,n$. For instance,
matrices with all positive entries are regular. Thus dominant
eigenvector of a regular matrix is robust with respect to small
perturbations. This is the case of matrix $M$ in \rf{PR}.
\item In the case $\|\cdot\|_{(2)}=\|\cdot\|_2$ the vector $\hat{x}$
is unique due to the strict convexity of $\|x\|_2$ on $\Sigma$.
\item For large $\varepsilon$ vector $\hat{x}$ is close to $e\in \Sigma, \;e_i=n^{-1},\; i=1,...,n$.
\end{enumerate}
\section{Solving optimization problem \rf{robeig}}\label{solving}
For irregular matrices, or for matrices with a small gap $\mu>0$, the robust
dominant vector $\hat{x}$ cannot be found in explicit form. To
compute it one should minimize the function $\varphi(x)$ over the
standard simplex $\Sigma$. In the above examples these are well-structured convex optimization problems, and as such they can be efficiently solved using available optimization software. For instance, medium-size problems
(say, with $n\leq 10^3$--$10^4$) may be solved using interior-point methods. For large-scale problems one can use methods of the mirror-descent family \cite{NY, LNS, NaPo09CDC} and solve a
saddle-point reformulation of \rf{robeig}, as well as various randomized algorithms (see
\cite{IshTemp, NaPo09CDC} and references therein). Finally, for
huge-dimensional problems Yu. Nesterov \cite{Nesterov} recently proposed special randomized
algorithms which can work with sparse matrices and $n=10^6$--$10^9$. We do not discuss these methods here.
Instead, we propose a ``fast and dirty'' method for minimization of \rf{robeig},
which happens to be a close relative of the PageRank technique.
\par
The method starts the
minimization process from the point $x_1$, which is the minimizer
of $\|x\|_{(2)}$ over $\Sigma$. Then we apply the power method with averaging to minimize
the first term in $\varphi(x)$:
\[x_k=\frac{x_1+Px_1+...+P^{k-1}x_1}{k}\]
(note that power method without averaging may be non convergent,
for instance for a cyclic $P$). We have
\[
\|Px_k-x_k\|_{(1)}=\frac{\|P^k x_1-x_1\|_{(1)}}{k}\leq \frac{c}{k},
\]
that is $\|Px_k-x_k\|_{(1)}=O(1/k)$.
\par
To fix the ideas, let us consider the case of Proposition \ref{spto22},
where the norms $\|\cdot\|_{(1)}$ and $\|\cdot\|_{(2)}$ are the Euclidean norms. We have $x_1=e$, and
when writing the method in recurrent form
we get
\begin{equation}\label{alg}
x_{k+1}=\left(1-\frac{1}{k+1}\right)Px_k+\frac{1}{k+1}e.
\end{equation}
Denoting $M_k=\left(1-\frac{1}{k+1}\right)P+\frac{1}{k+1}E, \quad E_{ij}=1/n$,
the iterative method reads $x_{k+1}=M_k x_k, \quad x_1=e.$ We
apply the method until $\varphi_k=\varphi(x_k)$ starts to increase.
Thus we arrive at
\\{\em Algorithm 1}
\\{\tt begin:} $x_1=e$
\\{\tt $k$-th iteration:} $x_{k+1}=M_k x_k$,
\[
M_k=(1-\frac{1}{k+1})P+\frac{1}{k+1}E\]
\\{\tt stop:} $\varphi_{k+1}>\varphi_k, \bar{x}=x_k$\\
and $\bar{x}$ is the approximate solution. Note that the numerical complexity
of one iteration is dominated with that of multiplication $Px$ (and, of
course, we should not write $M_k$ in dense form for sparse $P$).
If we compare Algorithm 1 with PageRank algorithm: \[ x_{k+1}=M x_k, \quad
M=\alpha P+(1-\alpha) E, \quad\alpha=const,
\]
we conclude that our
algorithm is PageRank with varying $\alpha$, equipped with a special
stopping rule. The proposed method is also closely related to the regularized algorithm of \cite{ifac}; however, unlike in that method, in Algorithm 1 the stopping rule is tied to the uncertainty level.
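Algorithm 1 takes only a few lines of NumPy; a sketch run on the $7\times 7$ test matrix of the introduction (Euclidean norms, $\varepsilon=1$):

```python
import numpy as np

def algorithm1(P, eps=1.0, max_iter=1000):
    """Power method with averaging, stopped once phi starts to increase."""
    n = P.shape[0]
    e = np.full(n, 1.0 / n)
    phi = lambda z: np.linalg.norm(P @ z - z) + eps * np.linalg.norm(z)
    x, k = e, 1
    while k < max_iter:
        # recurrent form of the averaged power iteration: x_{k+1} = M_k x_k
        x_next = (1.0 - 1.0 / (k + 1)) * (P @ x) + (1.0 / (k + 1)) * e
        if phi(x_next) > phi(x):
            break
        x, k = x_next, k + 1
    return x, k

# adjacency matrix of the 7-node test example (repeated for self-containedness)
P = np.array([
    [0,   0, 1/3, 0,   0, 0, 0],
    [1/2, 0, 0,   0,   0, 0, 0],
    [1/2, 1, 0,   1/2, 0, 0, 0],
    [0,   0, 0,   0,   1, 0, 0],
    [0,   0, 1/3, 1/2, 0, 0, 0],
    [0,   0, 0,   0,   0, 0, 1],
    [0,   0, 1/3, 0,   0, 1, 0],
])
xbar, k = algorithm1(P, eps=1.0)
print(k, np.round(xbar, 3))
```

Each iterate is a convex combination of stochastic images of simplex points, so $x_k$ stays in $\Sigma$, and the stopping rule fires after only a few iterations on this example.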
\par\noindent
{\bf Choice of $\varepsilon$} is an important issue for the described
approach.
One may consider the following heuristic for Web ranking problems:
assume that the number
$n_j$ of outgoing links of the $j$-th page is known with accuracy $\pm 1$ for $qn$ pages, for some $q<1$. Keeping in mind that the average is $n_j=m=20$
for the whole Web, and taking $q=0.5$, we get for the uncertainty in
$P$, measured in the Frobenius norm, $\varepsilon\simeq
\sqrt{qn}/m\simeq 0.035\sqrt{n}\approx 35$ for $n=10^6$. In small problems
(like the examples in this paper) the choice $\varepsilon\simeq 1$ appears to be convenient.
\section{Numerical results}
\label{numsec}
\subsection{Simulated graphs}
To illustrate an application of the robust eigenvector proposed above, let us go back to the example in the introduction. We present in Figure \ref{results} the scores computed for this example using three algorithms: 1) the PageRank algorithm with $\alpha=0.85$, 2) exact minimization of $\varphi_F$ with $\varepsilon=1$, 3) Algorithm 1 with $\varepsilon=1$.
\begin{figure}[htb]
\centerline{\includegraphics[width=0.75\linewidth]{google_eps_1_iterative_and_exact2.png}}
\caption{Results for test example} \label{results}
\end{figure}
We observe that the scores for all three methods are close and
differ dramatically from the dominant eigenvector of the
nominal matrix. The robust approach generates scores which are acceptable from the ``common sense'' point of view. It is worth
mentioning that Algorithm 1 requires only 4 iterations
($\varphi_{4}>\varphi_3$, $\bar{x}=x_3$).
\par
We applied the proposed approach to the
simulated examples from \cite{ifac}. The corresponding graphs may be constructed in arbitrary dimensions (from modest to huge
ones), with the dominant eigenvectors of matrices
$P$ and $M$ which can be computed explicitly. Furthermore, the corresponding matrix-vector product $Px$ can be computed very efficiently without storing the
matrix $P$ itself.
\par
Consider the graph \emph{(Model 1)} of Figure \ref{Model 1}.
\begin{figure}[h]
\begin{center}
\begin{minipage}[h]{0.5\linewidth}
\includegraphics[width=1\linewidth]{graph1.png}
\caption{Model 1}\label{Model 1}
\end{minipage}
\end{center}
\end{figure}
The nodes are shown in circles, the references are denoted by arrows. To get
rid of \((n,n)\) as a dangling node, we assume that from this node one
jumps to any other node with equal probability. There are $N=n^{2}$ nodes
in the model. We are interested in finding the vector $x=(x_{ij})$ of scores.
Taking an arbitrary value of $x_{1 1}$, say $x_{1 1}=1$, we get the
system of equations equivalent to \rf{1up}:
$$\left\{
\begin{array}{ll}
x_{i 1}=x_{1 i}=\frac{1}{2}x_{i-1, 1}+1,\quad i=2, ..., n,\quad x_{11}=1,\nonumber \\
x_{i j}=\frac{1}{2}x_{i-1, j}+\frac{1}{2}x_{i, j-1}+1,\quad j, i=2, ..., n-1,\nonumber \\
x_{n j}=x_{j n}=x_{n, j-1}+\frac{1}{2}x_{n-1, j}+1,\quad j=2, ..., n-1,\nonumber \\
x_{n n}=x_{n-1, n}+x_{n, n-1}+1.\nonumber \\
\end{array}
\right.
$$
These equations can be solved explicitly (from $x_{1 1}$ we find
$x_{2 1}$ etc). Then we can proceed to standard normalization and
get the scores \(x_{ij}^*=\frac{x_{ij}}{\sum_{i,j}x_{ij}}\).
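This recursion is straightforward to transcribe; a sketch (with a small $n$ for illustration), confirming that the nominal scores grow towards the corner \((n,n)\):

```python
def model1_scores(n):
    """Nominal Model 1 scores via the explicit recursion (x_{11} = 1),
    followed by normalization; 0-based indices, so (n,n) is x[n-1][n-1]."""
    x = [[0.0] * n for _ in range(n)]
    x[0][0] = 1.0
    for i in range(1, n):                      # first row and column
        x[i][0] = x[0][i] = 0.5 * x[i - 1][0] + 1.0
    for i in range(1, n - 1):                  # interior nodes
        for j in range(1, n - 1):
            x[i][j] = 0.5 * x[i - 1][j] + 0.5 * x[i][j - 1] + 1.0
    for j in range(1, n - 1):                  # last row and column
        x[n - 1][j] = x[j][n - 1] = x[n - 1][j - 1] + 0.5 * x[n - 2][j] + 1.0
    x[n - 1][n - 1] = x[n - 2][n - 1] + x[n - 1][n - 2] + 1.0
    total = sum(map(sum, x))
    return [[v / total for v in row] for row in x]

scores = model1_scores(6)
print(round(scores[5][5], 4))  # the corner score dominates
```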
Similarly the scores for PageRank matrix $M$ can be found from
equations (it is convenient to take $y_{1 1}=\alpha$, then \mbox{$y_{n
n}=N\alpha$}):
$$\left\{
\begin{array}{ll}
y_{i 1}=y_{1 i}=\alpha(\frac{1}{2}y_{i-1, 1}+1),\quad i=2, ..., n,\nonumber\\
y_{i j}=\alpha(\frac{1}{2}y_{i-1, j}+\frac{1}{2}y_{i, j-1}+1),\qquad j, i=2, ..., n-1,\nonumber \\
y_{n j}=y_{j n}=\alpha (y_{n, j-1}+\frac{1}{2}y_{n-1, j}+1),\qquad j=2, ..., n-1,\nonumber \\
y_{n n}=\alpha (y_{n-1, n}+ y_{n, n-1}+1)\nonumber \\
\end{array}
\right.
$$
and normalization \(y_{ij}^*=\frac{y_{ij}}{\sum_{i,j}y_{ij}}\).
\par
Note that even very small perturbations may strongly
corrupt the graph in Figure \ref{Model 1}. Thus, taking a large penalty parameter $\varepsilon$, as suggested in the discussion at the end of Section \ref{solving}, would lead to
complete ``equalization'' of the scores. That is why we choose a small
penalty coefficient in this example (specifically,
$\varepsilon=0.01$).
\par
Let us denote $x_{\alpha}=y^*$. We now compare the score vector $x^*$ for
the nominal $P$ with the score $x_{\alpha}$ by PageRank and the
results obtained by Algorithm 1. In Figure \ref{model1-diag} we
present the scores of the diagonal nodes, i.e. $x_{i, i}, i=1,\dots ,
n$ for $n=200$ (that is $N=40 000$) and
$\alpha=0.85$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.75\linewidth]{model1-diag.png}
\caption{Diagonal scores} \label{model1-diag}
\end{center}
\end{figure}
Figure \ref{model1-lastrow} shows the scores $x_{n i}, i=1,\dots , n$ of the last row.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.75\linewidth]{model1-lastrow.png}
\caption{Last row scores} \label{model1-lastrow}
\end{center}
\end{figure}
From the geometry of our graph (Figure \ref{Model 1}) it is natural to expect
the ``true'' scores $x_{ij}$ to increase with $i, j$, with the
highest scores achieved in the south-east corner of the square.
The score vector obtained as the dominant eigenvector of $P$ possesses these
properties. Both the PageRank solution and the robust solution produce
strongly ``equalized'' ranks; the robust solution appears to fit the expected one better.
\par
Let us modify slightly the graph of Model 1: in the new \emph{Model
2} we link the node \((n, n)\) {\em only} to the node \((1, 1)\).
For this model the system of equations defining the nominal
scores is similar to the system for Model 1, and the dominant
eigenvector of $P$ and the PageRank vector $x^{\alpha}$ can be computed
explicitly. On the other hand, it is obvious that the power method
would not converge for this model. Indeed, if $x^0_{1 1}=1, x^0_{i
j}=0, \{i j\}\neq \{1 1\}$, then after $2n-1$ iterations we return to
the same state: the matrix $P$ is cyclic.
The results for Model 2 with the same parameters ($n=200,
\varepsilon =0.01$) are presented at Figures \ref{model2-diag} and \ref{model2-lastrow}.
\begin{figure}[htb]
\begin{center}
\begin{minipage}[h]{\linewidth}
\includegraphics[width=0.75\linewidth]{model2-diag.png}
\caption{Diagonal scores for the second model.} \label{model2-diag}
\end{minipage}
\hfill
\begin{minipage}[h]{\linewidth}
\includegraphics[width=0.75\linewidth]{model2-lastrow.png}
\caption{Last row scores for the second model.}
\label{model2-lastrow}
\end{minipage}
\end{center}
\end{figure}
In this example, as for Model 1, the score vector obtained
using PageRank is close to the robust one, and the robust scores
are perhaps slightly closer to our expectations of ``common sense'' ranks.
\subsection{``Real-life'' data}
We have tested the proposed techniques on the experimental datasets used in the experiments on link analysis ranking algorithms described in \cite{www10}. We present here experimental results for adjacency matrices from two refined datasets:
{\tt Computational Complexity (CompCom)} and {\tt Movies}, downloaded from \href{http://www.cs.toronto.edu/\~tsap/experiments/}{http://www.cs.toronto.edu/\~tsap/experiments/}. The {\tt CompCom} data represent 789 pages with 1449 links in total (an average of 1.8365 links per page). The {\tt Movies} dataset has 4754 nodes and 17198 links (an average of 3.6176 links per page). We compare the high-accuracy solution of problem \rf{robeig} (with the norms $\|\cdot\|_{(1)}$ and $\|\cdot\|_{(2)}$ both taken to be Euclidean) obtained using the {\em MOSEK} optimization software \cite{Mosek} with the scores produced by Algorithm 1 and those obtained by the PageRank algorithm. The same value of the penalization parameter, $\varepsilon=1$ in \rf{robeig}, is used in all experiments. In Figure \ref{compcom} we present the 20 largest elements of the score vectors for the {\tt CompCom} data. In this experiment the optimal value of problem \rf{robeig} is $0.0587$. The corresponding value for the solution supplied by Algorithm 1 (after $3$ iterations) is $0.0756$.
It should be noted that the high scores $x^{\alpha}_{645}$ and $x^{\alpha}_{654}$ produced by PageRank are pure artifacts -- they correspond to pages which receive numerous links but only reference each other.
The techniques proposed in this paper seem to handle this sort of problem better than the classical PageRank algorithm.
\par
Figure \ref{movies} shows the 20 largest elements of the score vectors for the {\tt Movies} data. In this case the best scores computed by the three competing methods are quite close. The optimal value of problem \rf{robeig} is $0.0288$; the approximate solution supplied by Algorithm 1 (after $4$ iterations) attains the value $0.0379$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.75\linewidth]{cc.pdf}
\caption{20 highest scores for {\tt CompCom} matrix} \label{compcom}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.75\linewidth]{m.pdf}
\caption{20 highest scores for {\tt Movies} matrix} \label{movies}
\end{center}
\end{figure}
\section{Directions for future research}
\begin{enumerate}
\item \emph{Structured perturbations.} It is often natural
to assume that the perturbation $\xi$ of the nominal matrix $P$ possesses some structure. For instance, we
can consider $\xi=M\Delta N$, where $\|\Delta\|_F\leq
\varepsilon$, and $M, N$ are rectangular matrices of
appropriate dimensions, or $\xi=\sum t_i A_i$, where $\|t\|_2\leq
\varepsilon$, with fixed matrices $A_i\in \Real{n\times n}$.
\item \emph{Implicit uncertainty model.}
The family $\mathcal{P}$ can be defined implicitly. Suppose that we can sample randomly from $\mathcal{P}$, so that \emph{random} realizations
$F_i\in \mathcal{P},\; i=1,\dots,N$ are available. One may try to find $x$ which is an approximate dominant
eigenvector for all of them. For instance, one can fix
$\varepsilon>0$ and look for a solution of the system of convex inequalities
\[\|F_i x-x\|\leq \varepsilon, \quad i=1,\dots,N, \quad x\in \Sigma.\]
Such a system can be solved iteratively using the mirror-descent
algorithm. This approach is close to \cite{ineq, PolTempo}.
\item \emph{Computational research.} The development of
numerical methods tailored for the optimization problem \rf{robeig}
and their comparison is of great interest.
\item \emph{Real-life examples.} We plan to test the proposed approach extensively on large-scale Web ranking problems with data available from various
Internet databases.
\end{enumerate}
\section{ACKNOWLEDGMENTS}
The authors would like to acknowledge Andrey Tremba's assistance in implementing the numerical simulations of Section \ref{numsec} and enlightening discussions with Arkadi Nemirovski and Yuri Nesterov.
\section{Introduction}
\label{sec:introduction}
One may consider infinitely divisible random elements in very general
settings where there is a binary operation on the carrier space such
that it makes sense to speak of the ``addition'' of two random
elements. Once one rules out pathologies such as random elements with
idempotent distributions (that is, probability measures that are equal
to their convolution with themselves), the main step towards
understanding infinitely divisible random elements usually consists of
establishing their representation as a sum of the points of a Poisson
random measure and possibly also constant terms and random elements
with Gaussian-like distributions. Results of this type start from the
classical L\'evy-Khinchin theorem for Euclidean space, with further
extensions to Banach spaces \cite{arauj:gin80,ros90}, groups
\cite{par67}, function spaces \cite{bas:ros13,kab:stoev12,raj:ros89},
and general semigroups \cite{MR974112}.
A {\em convex cone} is a semigroup equipped with a scaling, that is,
with an action of the group of positive real numbers that interacts
suitably with the semigroup operation. There is a natural notion of
stability for random elements of such a carrier space that extends the
usual notion of stability for random elements of Euclidean space, see
\cite{dav:mol:zuy08}. For strictly stable random elements, the
intensity measure of the corresponding Poisson process -- the L\'evy
measure of the random element -- is scale-homogeneous, that is,\ its value
on any set scaled by any positive real number equals the value on the
original set up to a factor given by a (fixed) power of the scaling
constant. A scale-homogeneous measure on $\bR^d$ that is finite
outside a neighborhood of the origin can be represented in polar
coordinates as the product of a probability measure on the unit sphere
(the directional or angular component) and a scale-homogeneous measure
on the positive reals (the radial component). A nontrivial
scale-homogeneous measure on the positive reals that is finite
outside neighborhoods of the origin is necessarily of the form $c \,
\alpha \, t^{-(\alpha + 1)} \, dt$ for some constants $c > 0$ and
$\alpha > 0$. The unit sphere need not be the unit sphere for the
Euclidean norm: the unit sphere for any norm on $\bR^d$ serves equally
well. The situation becomes more complicated for more general spaces
where there may be no natural candidates to play the r\^ole of the
unit sphere.
Such a polar decomposition of the L\'evy measure leads to a
representation of stable random elements as the sum of a series
obtained from a sequence of i.i.d. random elements scaled by the
points of a Poisson process on the positive real line with intensity
measure of the form $\alpha \, t^{-(\alpha + 1)} \, dt$ or,
equivalently, by points of the form $\Gamma_k^{-\frac{1}{\alpha}}$,
where $(\Gamma_k)_{k \in \bN}$ are the points of a unit intensity
Poisson process on the positive real line. Such a representation was
first obtained in \cite{lep:wood:zin81} for Euclidean space and since
then it and its various extensions have been called the {\em LePage
series} representation. For example, such a representation was given
for random elements of a suitable space of functions with the
semigroup operation being addition in \cite{sam:taq94} and with the
semigroup operation being maximum in \cite{haan84}.
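For the simplest cone, $\bR_+$ with addition, the LePage series $\sum_k \Gamma_k^{-1/\alpha}\varepsilon_k$ with $\alpha\in(0,1)$ and i.i.d.\ nonnegative factors $\varepsilon_k$ is straightforward to simulate; the sketch below (all parameter values are illustrative) also shows why truncation is harmless: since $\Gamma_k\sim k$ by the law of large numbers, the $k$-th term decays like $k^{-1/\alpha}$.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.5          # stability index in (0, 1); illustrative choice
K = 20000            # truncation level of the series

# Points Gamma_1 < Gamma_2 < ... of a unit-intensity Poisson process on
# the positive half line: partial sums of i.i.d. standard exponentials.
gammas = np.cumsum(rng.exponential(size=K))

# i.i.d. factors epsilon_k; any nonnegative distribution will do here.
eps = rng.uniform(0.5, 1.5, size=K)

terms = eps * gammas ** (-1.0 / alpha)
total = terms.sum()            # truncated LePage series
head = terms[:1000].sum()      # the first 1000 terms already dominate
```

For $\alpha=0.5$ the terms decay like $k^{-2}$, so the tail beyond the first thousand terms contributes a negligible fraction of the sum.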
Section~\ref{sec:decomp-homog-meas} presents the main result that
concerns a polar decomposition of scale-homogeneous measures on quite
general spaces. Proposition~\ref{proposition:renorm} is inspired by
the argument used in \cite[Th.~10.3]{evan:mol15} to derive the LePage
series representation for stable metric measure spaces. A similar
rescaling argument was used in the context of max-stable sequences in
\cite{haan84}.
In Section~\ref{sec:strictly-stable-rand} we review the definition of
a convex cone, introduce stable random elements on such carrier
spaces, and apply the general decomposition results to derive the
series representation of strictly stable random elements in convex
cones. Such a series representation is a consequence of the
polar decomposition of the L\'evy measure and expresses any strictly
stable random element as the sum of i.i.d. random elements scaled
by a power of the successive points of the unit intensity Poisson
point process on the positive half line. The key hypotheses we require are a
one-sided continuity property of bounded semicharacters and the existence of a
measurable transversal for the action of the positive real numbers on
the carrier space.
Section~\ref{sec:applications} presents several applications, some in
well-known settings and others in more novel contexts, where our
approach streamlines the derivation of the LePage
series representation of stable random elements.
\section{Decomposition of scale-homogeneous measures}
\label{sec:decomp-homog-meas}
\begin{definition}
Let $\bX$ be a measurable space with a $\sigma$-algebra $\salg$, and
let $(x,t)\mapsto tx\in\bX$ be a \emph{pairing} of $x\in\bX$ and
$t\in\bR_{++}:=(0,\infty)$ which is $\salg \otimes \cB(\bR_{++}) /
\salg$-measurable, where $\cB(\bR_{++})$ is the Borel
$\sigma$-algebra on $\bR_{++}$. Assume that this pairing is an {\em
action} of the group $\bR_{++}$ (where the group operation is the
usual multiplication of real numbers) on $\bX$; that is, for all
$t,s>0$ and $x\in\bX$, $t(sx)=(ts)x$ and $1 x = x$. We refer to
such a pairing as a \emph{scaling}.
\end{definition}
\begin{remark}
\label{remark:scaling-one-s}
For $B\in\salg$ and $t\in\bR_{++}$, set $tB:=\{tx:\; x\in B\}$. The
measurability of the map $x\mapsto tx$ for each $t\in\bR_{++}$
yields that $s B=\{x:\; (s^{-1})x\in B\}\in\salg$ for all
$B\in\salg$ and $s\in\bR_{++}$.
\end{remark}
\begin{definition}
A measure $\nu$ on $\bX$ is said to be \emph{$\alpha$-homogeneous}
for some $\alpha\in\bR \setminus \{0\}$ if
\begin{equation}
\label{eq:nu-x-homogeneous}
\nu(sB)=s^{-\alpha}\nu(B),\qquad B\in\salg,\; s>0.
\end{equation}
\end{definition}
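The prototypical example, with $\bX=\bR_{++}$ and the usual multiplication as scaling, is the measure $\alpha t^{-(\alpha+1)}\,dt$: on an interval it evaluates to $a^{-\alpha}-b^{-\alpha}$, so \eqref{eq:nu-x-homogeneous} can be checked in closed form (a minimal numerical sketch; parameter values are arbitrary):

```python
# theta_alpha(dt) = alpha * t^{-(alpha + 1)} dt on (0, infinity) gives
# theta_alpha([a, b)) = a^{-alpha} - b^{-alpha} in closed form.
def theta(a, b, alpha):
    return a ** (-alpha) - b ** (-alpha)

alpha, a, b, s = 1.7, 0.5, 3.0, 2.5    # arbitrary illustrative values
lhs = theta(s * a, s * b, alpha)        # theta_alpha(s [a, b))
rhs = s ** (-alpha) * theta(a, b, alpha)
```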
\begin{remark}
By redefining the pairing to be $(x,t) \mapsto t^{-1} x$, we may
assume that $\alpha > 0$ and we will do so from now on. Moreover,
by redefining the pairing to be $(x,t) \mapsto t^{\frac{1}{\alpha}}
x$ we could even assume that $\alpha = 1$, but we choose not to do
so in order that the notation $t x$ retains its usual meaning when
$\bX$ is a subset of some vector space over $\bR$.
\end{remark}
We begin with a technical lemma.
\begin{lemma}
\label{lemma:extend_homo_completion}
Suppose that $\nu$ is an $\alpha$-homogeneous measure on $(\bX,
\salg)$. Write $\salg^\nu$ for the completion of the
$\sigma$-algebra $\salg$ with respect to the measure $\nu$. Then $s
B \in \salg^\nu$ with $\nu(sB)=s^{-\alpha}\nu(B)$ for all $s > 0$
and $B \in \salg^\nu$.
\end{lemma}
\begin{proof}
Suppose that $A \subset \bX$ is a $\nu$-null set. That is, there
exists $N \in \salg$ such that $A \subseteq N$ and $\nu(N) = 0$.
For any $s > 0$, $s A \subseteq s N \in \salg$ and $\nu(s N) =
s^{-\alpha}\nu(N) = 0$ by \eqref{eq:nu-x-homogeneous}, and so $s A$
is also $\nu$-null. Now, $B \in \salg^\nu$ if and only if there
exists $C \in \salg$ such that $B \triangle C$ is $\nu$-null, in
which case $\nu(B) = \nu(C)$. Since $(s B) \triangle (s C) = s (B
\triangle C)$ for $s > 0$ and the latter set is $\nu$-null by the
above, we see that $s B \in \salg^\nu$ and $\nu(s B)= \nu(s C) =
s^{-\alpha}\nu(C) = s^{-\alpha}\nu(B)$.
\end{proof}
\begin{remark}
Note that Lemma~\ref{lemma:extend_homo_completion} implies that the
map $x \mapsto s x$ is $\salg^\nu / \salg^\nu$-measurable for any
$s>0$. It does not say that the map $(x, s) \mapsto s x$ is
$\salg^\nu \otimes \cB(\bR_{++}) / \salg^\nu$-measurable.
\end{remark}
\begin{notation}
For $I\subseteq\bR_{++}$ and $B \in \salg$, put
\begin{displaymath}
IB:=\bigcup_{t\in I} tB.
\end{displaymath}
\end{notation}
The following is the first of a series of related assumptions that
require the existence of a suitably rich family of subsets or
nonnegative functions on our carrier space.
\begin{assumption}
\label{assumption:U}
There exist sets $U_k\in\salg$, $k\in\bN$, such that
\begin{itemize}
\item[(i)] if $x\in U_k$, then $tx\in U_k$ for all $t\geq1$
(that is, $[1,\infty) U_k = U_k$);
\item[(ii)] the sets $V_k:=(0,\infty) U_k$, $k\in\bN$, cover $\bX$
(that is, $\bigcup_{k \in \bN} V_k = \bX$).
\end{itemize}
\end{assumption}
\begin{proposition}
\label{proposition:renorm}
Suppose that Assumption~\ref{assumption:U} holds.
The following are equivalent for a measure $\nu$ on $(\bX, \salg)$.
\begin{itemize}
\item[i)]
The measure $\nu$ is a nontrivial $\alpha$-homogeneous measure with
\begin{equation}
\label{Eq:nu-finite}
\nu(U_k)<\infty,\qquad k\in\bN\,.
\end{equation}
\item[ii)] The measure $\nu$ is the push-forward of the measure
$\pi\otimes\theta_\alpha$ by the map $(x,t)\mapsto tx$, where
$\pi$ is a probability measure on $\bX$ such that
\begin{equation}
\label{eq:pi-Sk}
\int_0^\infty \pi(tU_k)t^{\alpha-1} \, dt < \infty,\qquad k\in\bN,
\end{equation}
and $\theta_\alpha$ is the measure on $\bR_{++}$ given by
$\theta_\alpha(dt) := \alpha t^{-(\alpha+1)} \, dt$.
\item[iii)] For a probability measure $\pi$ on $\bX$ such that
\eqref{eq:pi-Sk} holds,
\begin{equation}
\label{Eq:nu-rep}
\nu(B) = \alpha \int_0^\infty \pi(t B) \, t^{\alpha-1} \, dt
\end{equation}
for all $B \in \salg$.
\end{itemize}
\end{proposition}
\begin{proof}
Statement (iii) is just a restatement of statement (ii), so it
suffices to show that (i) $\Longrightarrow$ (ii) and (iii)
$\Longrightarrow$ (i).
\textsl{(i) $\Longrightarrow$ (ii).} Fix $k\in\bN$. Note that
$V_k=\bigcup_{n\in\bN} 2^{-n} U_k$, so that $V_k\in\salg$. Put
\begin{displaymath}
\bar U_k:=\bigcap_{t<1} t U_k
= \bigcap_{n \in \bN} (1 - 2^{-n}) U_k \in\salg,
\end{displaymath}
and observe that $V_k=(0,\infty) \bar U_k$. The $\alpha$-homogeneity of
$\nu$ and \eqref{Eq:nu-finite} yield that
\begin{displaymath}
\nu(\bar U_k)= \inf_{t<1} t^{-\alpha}\nu(U_k)=\nu(U_k)<\infty.
\end{displaymath}
For $x\in V_k$, set
\begin{displaymath}
\tau(x):=\sup\{t>0:\; x\in t\bar U_k\} \in (0,\infty].
\end{displaymath}
Since
\begin{displaymath}
\{x\in V_k:\; \tau(x)\geq t\}=\bigcap_{s<t} s\bar U_k
= t \bigcap_{s<1}s\bar U_k = t\bar U_k,
\end{displaymath}
the function $\tau: V_k\mapsto (0,\infty]$ is
$\salg$-measurable. Clearly, $\tau(sx)=s\tau(x)$ for all $s>0$.
Also, setting
\begin{displaymath}
N_k := \{x\in V_k:\; \tau(x)=\infty\} = \bigcap_{t > 0} t \bar U_k,
\end{displaymath}
we have $s N_k = N_k$ for all $s > 0$ and
\begin{displaymath}
\nu(N_k)
=\inf_{t>0} t^{-\alpha}\nu(\bar U_k)=0.
\end{displaymath}
Define
\begin{displaymath}
W_k:=\{x\in V_k:\; \tau(x)=1\}
= \bar U_k \setminus \left(\bigcup_{t > 1} t \bar U_k\right)
\subseteq V_k \setminus N_k.
\end{displaymath}
Note that
\begin{equation}
\label{barU_k_minus_N_k_is_product}
\bar U_k \setminus N_k = [1,\infty) W_k
\end{equation}
and
\begin{equation}
\label{V_k_minus_N_k_is_product}
V_k \setminus N_k = (0,\infty) W_k.
\end{equation}
Set $V'_1:=V_1$ and $V'_i:= (V_i\cap (V_1\cup\cdots\cup V_{i-1})^c)$
for $i\geq 2$. Then put $V_j'' = V_j' \setminus N_j$, $W_j''
:=W_j\cap V_j''$ and $\bar U_j'':=\bar U_j \cap V_j''$ for
$j\in\bN$. The sets $\{V_k''\}_{k\in\bN}$ are disjoint, $\bX
\setminus \bigcup_{k \in \bN} V_k'' \subseteq \bigcup_{k \in \bN}
N_k$ is a $\nu$-null set, and $\bar U_k''=\{tx:\; t\geq1,\, x \in
W_k''\}$ for all $k\in\bN$. Therefore, it is possible to assume
without loss of generality that all of the sets $N_k$ are empty, the
sets $\{\bar U_k\}_{k \in \bN}$ are pairwise disjoint, the sets
$\{V_k\}_{k\in\bN}$ form a measurable partition of $\bX$, and that
$\tau(x)$ is uniquely defined and finite for all $x \in \bX$.
The sets $\{x\in V_k:\; \tau(x)=t\}=t W_k$ are disjoint for
different $t>0$ and their union is $V_k$. The map $x\mapsto
(\tau(x)^{-1}x,\tau(x))$ is therefore a well-defined bimeasurable
bijection between $V_k$ and $W_k\times\bR_{++}$. Let $\tilde \nu$
be the push-forward of $\nu$ by the map $x \mapsto (\tau(x)^{-1} x,
\tau(x))$ and define a measure $\rho_k$ on $W_k$ by $\rho_k(A) =
\tilde \nu(A \times [1,\infty))$. Note that $\rho_k$ is a finite
measure, since
\[
\rho_k(W_k)
= \tilde \nu(W_k \times [1,\infty))=\nu(\bar U_k)<\infty
\]
by \eqref{Eq:nu-finite}. The $\alpha$-homogeneity of $\nu$ is
equivalent to the scaling property $\tilde \nu(A \times s B) =
s^{-\alpha} \tilde \nu(A \times B)$ for $s>0$, measurable $A
\subseteq W_k$, and Borel $B \subseteq \bR_{++}$. Thus, if we let
$\theta_\alpha$ be the measure on $\bR_{++}$ given by
$\theta_\alpha(dt) = \alpha t^{-(\alpha+1)} \, dt$, then
\[
\begin{split}
\tilde \nu(A \times [b,\infty))
& = \tilde \nu(A \times b [1,\infty)) \\
& = b^{-\alpha} \tilde \nu(A \times [1,\infty)) \\
& = \rho_k(A) \times \theta_\alpha([b,\infty)) \\
\end{split}
\]
for $A \subseteq W_k$. The restriction of $\tilde \nu$
to $W_k \times \bR_{++}$ is therefore $\rho_k \otimes \theta_\alpha$ and
hence the restriction of $\nu$ to $V_k$ is the push-forward of
$\rho_k \otimes \theta_\alpha$ by the map $(y,t) \mapsto ty$.
We can think of $\rho_k$ as a measure on all of $V_k$. For $c_k
\in \bR_{++}$, let $\eta_k$ be the measure on $V_k$ that assigns
all of its mass to the set $c_k W_k$ and is given by $\eta_k(A) =
c_k^{-\alpha} \rho_k(c_k^{-1} A)$. We have
\[
\begin{split}
(\eta_k \otimes \theta_\alpha)(\{(y,t) : ty \in B\})
& = \int \eta_k(t^{-1} B) \alpha t^{-(\alpha+1)} \, dt \\
& = \int c_k^{-\alpha} \rho_k(c_k^{-1} t^{-1} B) \alpha t^{-(\alpha+1)} \, dt \\
& = \int \rho_k(s^{-1} B) \alpha s^{-(\alpha+1)} \, ds \\
& = (\rho_k \otimes \theta_\alpha)(\{(y,t) : ty \in B\}) \\
& = \nu(B) \\
\end{split}
\]
for measurable $B\subset V_k$ (substituting $s = c_k t$ in the third
line), and so $\eta_k$ is a finite measure with total mass
$c_k^{-\alpha} \rho_k(W_k)$ that has the property that
the push-forward of $\eta_k \otimes \theta_\alpha$ by
the map $(y,t) \mapsto ty$ is the restriction of
$\nu$ to $V_k$.
We can regard $\eta_k$ as being a finite measure on all of $\bX$
and, by choosing the constants $c_k$, $k \in \bN$, appropriately we
can arrange for $\pi := \sum_{k \in \bN} \eta_k$ to be a probability
measure such that \eqref{Eq:nu-rep} holds, and this also implies
\eqref{eq:pi-Sk}.
\textsl{(iii) $\Longrightarrow$ (i).} It is easy to see that $\nu$
given by \eqref{Eq:nu-rep} is $\alpha$-homogeneous, and
\eqref{Eq:nu-finite} follows from \eqref{eq:pi-Sk} and
\eqref{Eq:nu-rep}.
\end{proof}
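Formula \eqref{Eq:nu-rep} can be checked numerically in the classical setting $\bX=\bR^2\setminus\{0\}$ with $\pi$ uniform on the unit circle: for a sector-annulus $B=\{x:\;|x|\in[a,b),\ \arg x\in\Theta\}$ one has $\nu(B)=\pi(A)\,(a^{-\alpha}-b^{-\alpha})$, where $A$ is the arc with angles in $\Theta$, since $tB$ meets the unit circle exactly for $t\in(1/b,1/a]$. A sketch (the set and the parameters are arbitrary):

```python
import numpy as np

alpha = 1.3
frac = 0.25        # pi(A), the mass of the chosen arc A
a, b = 0.7, 2.0    # B = {x : |x| in [a, b), arg(x) in Theta}

# pi(tB): the scaled set tB meets the unit circle iff 1/b < t <= 1/a,
# and then the intersection is the arc A itself.
dt = 1e-5
t = np.arange(dt, 10.0, dt)
pi_tB = np.where((t > 1.0 / b) & (t <= 1.0 / a), frac, 0.0)

# Right-hand side of the polar formula by a Riemann sum, and the
# left-hand side nu(B) in closed form.
rhs = alpha * np.sum(pi_tB * t ** (alpha - 1.0)) * dt
lhs = frac * (a ** (-alpha) - b ** (-alpha))
```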
\begin{remark}
A key observation in the proof of
Proposition~\ref{proposition:renorm} is that it is possible to
define measurable sets $W_k$ such that
\eqref{barU_k_minus_N_k_is_product} and
\eqref{V_k_minus_N_k_is_product} hold, and if $x \in W_k$, then $t x
\notin W_k$ for any $t \ne 1$. We would like to reverse the
construction in Proposition~\ref{proposition:renorm} by starting
with a suitable collection of sets $\{W_k\}_{k\in\bN}$ with this
last property and conclude that if we put $\bar U_k := [1,\infty)
W_k$, then $\bar U_k \in \salg$ and if we construct sets from each
of the $\bar U_k$ in the manner that the $W_k$ are constructed in
the proof, then we recover the $W_k$. There is, however, a slightly
delicate point here: if $B \in \salg$, then it is not necessarily
the case that $[1,\infty) B = \bigcup_{t \ge 1} t B = \{t x: t \ge
1, \, x \in B\} \in \salg$.
\end{remark}
\begin{lemma}
\label{lemma:projection}
Let $\nu$ be a $\sigma$-finite $\alpha$-homogeneous measure on
$(\bX, \salg)$ and suppose that $\salg$ is $\nu$-complete.
\begin{itemize}
\item[i)] For $B \in \salg$ and a Borel set $I \subseteq \bR_{++}$,
the set $I B$ also belongs to $\salg$.
\item[ii)] If $\nu([t,u) B) < \infty$ for $B \in \salg$ and $0 < t <
u < \infty$, then $\nu([s,\infty) B) < \infty$ for all $s > 0$.
\end{itemize}
\end{lemma}
\begin{proof}
For part (i), observe that
\[
\begin{split}
I B
& =
\{y \in \bX : \; y = t x, \; \text{for some} \; t \in I \; \text{and} \; x \in B\} \\
& =
\{y \in \bX : \; t^{-1} y = x, \; \text{for some} \; t \in I \; \text{and} \; x \in B\} \\
& =
\{y \in \bX : \; u y = x, \; \text{for some} \; u \in I^{-1} \; \text{and} \; x \in B\} \\
& =
\{y \in \bX : \; u y \in B, \; \text{for some} \; u \in I^{-1}\} \\
& =
\Pi(\{(y,u) \in \bX \times I^{-1}: \; u y\in B\}), \\
\end{split}
\]
where $\Pi: \bX \times \bR_{++} \to \bX$ is the projection map
defined by $\Pi((x,t)) = x$. The map $t \mapsto t^{-1}$ from
$\bR_{++}$ to itself is a $\cB(\bR_{++})$-measurable bijection and
it is its own inverse, so the set $I^{-1}$ is
$\cB(\bR_{++})$-measurable. It follows that the set $\{(y,u) \in
\bX \times I^{-1} :\; u y\in B\}$ is $\salg \otimes
\cB(\bR_{++})$-measurable. The projection theorem (see,
for example, \cite[Th.~12.3.4]{MR752692} or \cite[Section~III.44]{MR521810})
and the $\nu$-completeness of $\salg$ yield that $I B \in \salg$.
For part (ii), note that
\[
\begin{split}
[s,\infty) B
& = \left(\bigcup_{n \geq0} \frac{s}{t} \left(\frac{u}{t}\right)^n [t,u)\right) B \\
& \subseteq \bigcup_{n \geq0} \frac{s}{t} \left(\frac{u}{t}\right)^n [t,u) B. \\
\end{split}
\]
By the $\alpha$-homogeneity of $\nu$
\[
\nu\left(\frac{s}{t} \left(\frac{u}{t}\right)^n [t,u) B\right)
=
\left(\frac{s}{t}\right)^{-\alpha} \left(\frac{u}{t}\right)^{-n \alpha} \nu([t,u) B),
\]
and the result follows from the subadditivity of $\nu$ and the
summability of the relevant geometric series.
\end{proof}
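The covering of $[s,\infty)$ used in part (ii) of the proof above consists of the abutting intervals $[s(u/t)^n, s(u/t)^{n+1})$, $n\geq 0$; a quick sanity check with arbitrary $s>0$ and $0<t<u$:

```python
# The n-th scaled interval (s/t)(u/t)^n [t, u) equals
# [s (u/t)^n, s (u/t)^(n+1)), so consecutive intervals abut and their
# union over n >= 0 is exactly [s, infinity).
s, t, u = 1.5, 2.0, 5.0   # arbitrary values with 0 < t < u
lefts = [s * (u / t) ** n for n in range(8)]
rights = [s * (u / t) ** (n + 1) for n in range(8)]

first_left = lefts[0]     # the covering starts exactly at s
abutting = all(abs(l - r) < 1e-12 for l, r in zip(lefts[1:], rights[:-1]))
```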
It is possible to obtain the conclusion of
Proposition~\ref{proposition:renorm} under another assumption.
\begin{assumption}
\label{assumption:X}
There exists $\bS\in\salg$ such that
\begin{displaymath}
\bX=\{tx:\; t\in\bR_{++}, \, x\in\bS\},
\end{displaymath}
and $\{x,tx\}\subset\bS$ for some $x\in\bX$ and $t>0$ implies that
$t=1$.
\end{assumption}
\begin{definition}
The {\em orbits} of the action of the group $\bR_{++}$ on $\bX$ are
the sets of the form $\{t x : t \in \bR_{++}\}$ for some $x \in
\bX$. The orbits are the equivalence classes for the equivalence
relation on $\bX$ where we declare $x$ and $y$ to be equivalent if
$y = t x$ for some $t \in \bR_{++}$. Assumption~\ref{assumption:X}
says that the measurable set $\bS$ intersects each equivalence class
in exactly one point. Such a set is called a \emph{transversal}.
\end{definition}
The following result is immediate from Lemma~\ref{lemma:projection} and the
proof of Proposition~\ref{proposition:renorm}.
\begin{proposition}
\label{proposition:sphere}
Let $\nu$ be a $\sigma$-finite, $\alpha$-homogeneous measure on $\bX$.
\begin{itemize}
\item[i)] Suppose that Assumption~\ref{assumption:U} and the
finiteness condition \eqref{Eq:nu-finite} hold. Set
\[
W_k
:= \left(\bigcap_{s<1} s U_k\right)
\setminus \left(\bigcup_{t > 1} t \left(\bigcap_{s<1} s U_k\right)\right).
\]
Then, by possibly replacing $\bX$ with a set $\bX' \in \salg$ such
that $t \bX' = \bX'$ for all $t >0$ and $\nu(\bX \setminus
\bX')=0$, Assumption~\ref{assumption:X} holds for $\bS :=
\bigcup_{k \in \bN} W_k$. Moreover, for any nonempty intervals
$I_k := [a_k, b_k) \subset \bR_{++}$, $k \in \bN$, it is the
case that $I_k W_k \in \salg$ and $\nu(I_k W_k)<\infty$ for
$k\in\bN$.
\item[ii)] Suppose that Assumption~\ref{assumption:X} holds and
there is a pairwise disjoint family of measurable sets
$\{W_k\}_{k\in\bN}$ such that $\bS = \bigcup_{k \in \bN} W_k$ and
a family of nonempty intervals $I_k := [a_k, b_k) \subset
\bR_{++}$, $k \in \bN$, such that $\nu(I_k W_k)<\infty$ for
all $k\in\bN$, where $I_k W_k$ is guaranteed to belong to
$\salg^\nu$ for all $k \in \bN$. Then
Assumption~\ref{assumption:U} and the finiteness condition
\eqref{Eq:nu-finite} hold with $(\bX,\salg)$ replaced by $(\bX,
\salg^\nu)$ and $U_k := [1,\infty)W_k$ for $k\in\bN$.
\end{itemize}
\end{proposition}
\begin{definition}
Recall that a {\em Borel space} is a measurable space that is Borel
isomorphic to a Borel subset of a Polish space equipped with the
trace of the Borel $\sigma$-algebra on the Polish space.
\end{definition}
\begin{lemma}
\label{lemma:pair-of-maps}
Suppose that $\nu$ is a $\sigma$-finite $\alpha$-homogeneous measure
on $(\bX,\salg)$ and that Assumption~\ref{assumption:X} is
satisfied. For $x \in \bX$, define $\tau(x) \in \bR_{++}$ by the
requirement that $\tau(x)^{-1}x\in\bS$. Then the following hold.
\begin{itemize}
\item[i)] The maps $(x,t)\mapsto tx$ from $\bS\times\bR_{++}$ to
$\bX$ and $x\mapsto(\tau(x)^{-1}x,\tau(x))$ from $\bX$ to
$\bS\times\bR_{++}$ are mutually inverse.
\item[ii)] If $\salg$ is $\nu$-complete, then
$x\mapsto(\tau(x)^{-1}x,\tau(x))$ is $\salg / \salg_{| \bS}
\otimes \cB(\bR_{++})$-measurable, where $\salg_{| \bS}$ is the
$\sigma$-algebra induced by $\salg$ on $\bS$.
\item[iii)] If $(\bX,\salg)$ is a Borel space, then
$x\mapsto(\tau(x)^{-1}x,\tau(x))$ is $\salg / \salg_{| \bS}
\otimes \cB(\bR_{++})$-measurable.
\end{itemize}
\end{lemma}
\begin{proof}
Part (i) is clear from the definition of $\tau$. Turning to part
(ii), it suffices to show that the map $\tau$ is $\salg$-measurable,
but this follows from Lemma~\ref{lemma:projection} and the fact that
$\{x \in \bX : \tau(x) \ge t\} = [t,\infty) \bS$. For part (iii),
first note that if $\bX$ is a Borel space, then so is $\bS$ equipped
with the trace $\sigma$-algebra and hence also $\bS \times
\bR_{++}$. Now apply the result of Kuratowski (see,
for example, \cite[Section I.3]{par67}) that a measurable bijection between
two Borel spaces has a measurable inverse.
\end{proof}
Proposition~\ref{proposition:renorm} required
Assumption~\ref{assumption:U} concerning the existence of suitable
countable family of subsets of the carrier space $\bX$ and the
hypothesis \eqref{Eq:nu-finite} that these sets all have finite $\nu$
mass. The following result replaces such requirements by the
finiteness of the integrals $\int h \, d\nu$ for a countable family of
measurable functions $h:\bX\mapsto\bR_+$.
\begin{theorem}
\label{thr:general-polar}
Suppose that Assumption~\ref{assumption:X} holds. Let $\cH$ be a
countable family of measurable functions $h:\bX\mapsto\bR_+$ such
that for all $h \in \cH$ and $x \in \bX$ the function $ t \mapsto
h(tx)$ is right-continuous (or left-continuous) on $\bR_{++}$.
A $\sigma$-finite measure $\nu$ on $\bX$ is $\alpha$-homogeneous and
satisfies
\begin{equation}
\label{eq:nu-h-integral}
\int_\bX h(x) \, \nu(dx) <\infty,\qquad h\in\cH,
\end{equation}
and
\begin{equation}
\label{Eq:nu-support-h}
\nu\left(\bigcap_{h\in\cH} \{x\in\bX:\; h(x)=0\}\right)=0
\end{equation}
if and only if there exists a probability measure $\pi$ on $\bX$
such that \eqref{Eq:nu-rep} holds,
\begin{equation}
\label{Eq:levy-condition-pi-general}
\int_{\bX}\int_0^\infty h(tx) t^{-(\alpha+1)} \, dt \, \pi(dx) <\infty,
\qquad h\in\cH,
\end{equation}
and
\begin{displaymath}
\pi\left((0,\infty) \bigcap_{h \in \cH} \{x\in\bX:\; h(x)=0\}\right)=0.
\end{displaymath}
\end{theorem}
\begin{proof}
\textsl{Necessity.} Enumerate $\cH$ as $\{h_n\}_{n \in \bN}$. Assume
that for all $x \in \bX$ the function $t \mapsto h_n(tx)$ is
right-continuous in $t\in\bR_{++}$. Denote by $\bQ_{++}$ the set of
strictly positive rational numbers. For $n,k,j\in\bN$ and
$r\in\bQ_{++}$, define
\begin{displaymath}
U_{nkrj}:=\{x\in\bS:\; h_n(sx)\geq 2^{-k},\; s\in[r,r+2^{-j}]\}.
\end{displaymath}
Since $B:=\{x \in \bS:\;h_n(x)\geq 2^{-k}\} \in \salg$, the
right-continuity property and Remark~\ref{remark:scaling-one-s} yield that
\begin{displaymath}
U_{nkrj}=\bigcap_{s\in\bQ\cap[r,r+2^{-j}]}s^{-1}B\in \salg.
\end{displaymath}
Put $J_{rj} :=[r,r+2^{-j}]$. Then $J_{rj} U_{nkrj} \in \salg^\nu$
and, by \eqref{eq:nu-h-integral},
\begin{displaymath}
\nu(J_{rj} U_{nkrj})
\leq 2^k \int_{J_{rj} U_{nkrj}} h_n(x) \, \nu(dx)
\leq 2^k \int_\bX h_n(x) \, \nu(dx)<\infty.
\end{displaymath}
Put $\bS' = \bigcup_{n,k,r,j} U_{nkrj}$. If $x\in\bS\setminus\bS'$,
$n,k,j \in \bN$, and $r \in \bQ_{++}$, there exists $s \in
[r,r+2^{-j}]$ (depending on $x,n,k,r,j$) such that $h_n(s
x)<2^{-k}$. The right-continuity of $t \mapsto h_n(t x)$ yields
that $h_n(r x)=0$ for all $r\in\bQ_{++}$ and thence $h_n(tx)=0$ for
all $t\in\bR_{++}$.
Enumerate $(U_{nkrj}, J_{rj})$, $n,k,j\in\bN$ and $r\in\bQ_{++}$, as
$(W_n, I_n)$, $n \in \bN$, so that $\bS' = \bigcup_{n \in \bN} W_n$
and $\nu(I_n W_n)< \infty$ for all $n \in \bN$. By
\eqref{Eq:nu-support-h}, the measure $\nu$ assigns all of its mass
to the set $\bX':= (0,\infty) \bS' \in \salg^\nu$, and the result
follows from the decomposition of $\nu$ guaranteed by
Proposition~\ref{proposition:renorm} and
Proposition~\ref{proposition:sphere}. Finally,
\eqref{Eq:levy-condition-pi-general} follows from the change of
variables in \eqref{eq:nu-h-integral} using the polar decomposition of
$\nu$ as $\pi\otimes\theta_\alpha$.
If all functions $h_n(tx)$ are left continuous in $t\in\bR_{++}$,
then the definition of $U_{nkrj}$ should be modified by working with
the interval $[r-2^{-j},r]$ for $2^{-j}<r$.
\textsl{Sufficiency} is immediate by checking that the measure $\nu$
defined by \eqref{Eq:nu-rep} is $\alpha$-homogeneous.
\end{proof}
\begin{remark}
\label{rem:sphere-from-h}
If $h_n(tx)\to 0$ as $t\downarrow0$ for all $n\in\bN$ and $x\in\bX$, then it is
not necessary to require Assumption~\ref{assumption:X} in
Theorem~\ref{thr:general-polar}. A measurable transversal can be
constructed as follows. For each $n\in\bN$, let
\begin{displaymath}
W_n:=\{x\in\bX:\; h_k(x)=0,k<n,\; h_n(x)\neq 0\}.
\end{displaymath}
Next, partition $W_n$ into measurable sets
\begin{align*}
W_{nj}&:=\{x\in W_n:\; \sup_{t>0}
h_n(tx)\in(2^{-j},2^{-j+1}]\},\qquad j\geq 1,\\
W_{n0}&:=\{x\in W_n:\; \sup_{t>0}
h_n(tx)>1\}.
\end{align*}
Finally, define
\begin{displaymath}
\tau_{nj}(x):=\inf\{t>0:\; h_n(tx)>2^{-j}\},\qquad x\in W_{nj},
\end{displaymath}
and
\begin{displaymath}
S_{nj}:=\{\tau_{nj}(x)^{-1}x:\; x\in W_{nj}\}.
\end{displaymath}
Then $\bS:=\bigcup_{n,j\in\bN} S_{nj}$ satisfies
Assumption~\ref{assumption:X} in the complement of the set
$\{x\in\bX:\; h_n(x)=0,n\in\bN\}$.
\end{remark}
\begin{remark}
Theorem~\ref{thr:general-polar} asserts that an $\alpha$-homogeneous
measure $\nu$ is the push-forward of the measure $\pi\otimes
\theta_\alpha$ under the map $(x,t)\mapsto tx$ from $\bX \times
\bR_{++}$ to $\bX$. In this case we say that $\nu$ admits a
\emph{polar representation}. It follows from the proof that we may
replace $\bX$ by a subset that is invariant under the action of
$\bR_{++}$ in such a way that the probability measure $\pi$ assigns
all of its mass to a transversal.
\end{remark}
\section{Strictly stable random elements on convex cones}
\label{sec:strictly-stable-rand}
\label{sec:convex-cones}
\begin{definition}
A \emph{convex cone} $\bK$ is an abelian measurable
semigroup with neutral element $\neutral$ and a scaling
$(x,a) \mapsto a x$ by $\bR_{++}$ that has the properties
\begin{align*}
a(x+y)&=ax+ay,\qquad a>0,\;x,y\in\bK,\\
a\neutral&=\neutral,\qquad a>0.
\end{align*}
\end{definition}
\begin{remark}
The simplest examples of convex cones are $\bK = \bR^d$ and $\bK =
\bR_+^d$ with the usual scaling by $\bR_{++}$, but there are many
other examples, some of which we will recall later in this paper.
\end{remark}
\begin{remark}
In contrast to the many classical studies of convex cones that are convex
subsets of vector spaces over the reals which are closed under
multiplication by nonnegative scalars (see, for example, \cite{fuc:lus81}),
we do not assume the validity of the second distributivity law; that
is, we do not require that $ax+bx=(a+b)x$ for $a,b > 0$ and $x \in
\bK$.
Stable random elements of convex cones have been studied in
\cite{dav:mol:zuy08} under the assumptions that the scaling is
jointly continuous and that $\bK':=\bK\setminus\{\neutral\}$ is a
Polish space.
\end{remark}
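A concrete cone where the second distributivity law fails is $(\bR_+,\vee)$ with the maximum as semigroup operation and the usual scaling (the setting of max-stability mentioned earlier); a one-line check:

```python
# The cone (R_+, max) with the usual scaling: the distributivity law
# a(x + y) = ax + ay holds (with "+" meaning max), but the second law
# fails: ax + bx = max(a, b) x differs from (a + b) x in general.
a, b, x, y = 2.0, 3.0, 1.5, 4.0   # arbitrary positive values

first_law = a * max(x, y) == max(a * x, a * y)
second_law = max(a * x, b * x) == (a + b) * x
```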
\begin{definition}
An \emph{involution} is a measurable map $x\mapsto x^*$ satisfying
$(x+y)^*=x^*+y^*$, $(ax)^*=ax^*$, and $(x^*)^*=x$ for all
$x,y\in\bK$ and $a>0$. We assume that $\bK$ is equipped with an
involution.
\end{definition}
\begin{definition}
A measurable function $\chi$ that maps $\bK$ into the unit
disk $\bD$ in the complex plane is called a {\em bounded
semicharacter} (or, more briefly, a \emph{character}) if
$\chi(\neutral)=1$, $\chi(x+y)=\chi(x)\chi(y)$, and
$\chi(x^*)=\overline{\chi(x)}$ for all $x,y\in\bK$.
\end{definition}
\begin{remark}
The family of all characters forms a convex cone when equipped with
pointwise multiplication as the semigroup operation, the topology of
pointwise convergence, the neutral element $\one$ being the
character identically equal to $1$, the involution being the complex
conjugate, and the scaling defined by $(a\chi)(x):=\chi(ax)$,
$x\in\bK$, $a>0$.
\end{remark}
We assume in the following that there exists a \emph{separating}
family $\bKH$ of characters in the usual sense that for each $x\neq y$
there exists $\chi\in\bKH$ such that $\chi(x)\neq\chi(y)$. Such a
family does not exist for all semigroups, see
\cite[Ex.~8.20]{dav:mol:zuy08}. We also assume that the characters in
$\bKH$ generate the $\sigma$-algebra on $\bK$ and that the family
$\bKH$ is closed under taking finite products, contains the constant
function $1$, and so is a semigroup itself. Note that $\bKH$ is not
assumed to be closed under scaling and so it is not necessarily a convex cone.
The distribution of a $\bK$-valued random element $\xi$ is
determined by its \emph{Laplace transform} $\bE\chi(\xi)$,
$\chi\in\bKH$, (see, for example, \cite[Th.~5.4]{dav:mol:zuy08}).
A random element $\xi$ is said to be \emph{symmetric} if
$\xi\eqd\xi^*$, that is, $\xi$ coincides in distribution with its
involution. The Laplace transform of a symmetric random element takes
only real values. Recall that the classical L\'evy-Khinchin-It\^o
description of infinitely divisible random elements of $\bR^d$ can
involve subtracting ``compensating'' terms to achieve convergence
of a sum of the points in a Poisson point process that would otherwise
be divergent, but that such compensation is not necessary when the
random elements are symmetric in the usual sense for $\bR^d$-valued
random elements (which is the special case of the sense considered
here with the involution given by $x \mapsto -x$). Since no such
recentering using subtraction is possible in the general semigroup
setting, we mostly consider symmetric random elements. If the
involution is the identity, then all random elements are symmetric.
\begin{definition}
A random element $\xi$ in $\bK$ is said to be \emph{strictly
$\alpha$-stable} if
\begin{equation}
\label{Eq:stable}
a^{1/\alpha}\xi'+b^{1/\alpha}\xi''
\eqd (a+b)^{1/\alpha}\xi\,,\qquad a,b>0\,,
\end{equation}
where $\xi'$ and $\xi''$ are i.i.d. copies of $\xi$.
\end{definition}
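As a concrete numerical illustration of \eqref{Eq:stable} (an aside, not part of the formal development), the following sketch checks the defining identity for the one-sided $1/2$-stable (L\'evy) law on the cone $\bK=\bR_+$ with the identical involution. It uses the classical fact that $1/Z^2$ has this law when $Z$ is standard normal, so $a^{2}\xi'+b^{2}\xi''$ and $(a+b)^{2}\xi$ should agree in distribution; the sample sizes and tolerance are illustrative choices.

```python
import random

# Check  a^{1/alpha} xi' + b^{1/alpha} xi''  =d  (a+b)^{1/alpha} xi
# for the one-sided 1/2-stable (Levy) law: if Z ~ N(0,1), then 1/Z^2 has this law.
random.seed(0)
alpha = 0.5
n = 200_000

def levy_half(n):
    return [1.0 / random.gauss(0.0, 1.0) ** 2 for _ in range(n)]

a, b = 2.0, 3.0
lhs = sorted(a ** (1 / alpha) * u + b ** (1 / alpha) * v
             for u, v in zip(levy_half(n), levy_half(n)))
rhs = sorted((a + b) ** (1 / alpha) * w for w in levy_half(n))

# Equality in distribution: empirical quantiles must agree
# (moment comparisons are useless here -- the law has no finite mean).
for q in (0.1, 0.25, 0.5, 0.75, 0.9):
    k = int(q * n)
    assert abs(lhs[k] - rhs[k]) / rhs[k] < 0.05, q
```

Quantile comparison is used because the $1/2$-stable law has infinite mean, so moment-based checks are unavailable.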
In general cones, any value $\alpha\neq0$ is possible, see
\cite{dav:mol:zuy08}. Since by redefining the scaling from $ax$ to
$a^{-1}x$ it is possible to turn a negative $\alpha$ into a positive
one, in the following we consider only the case $\alpha>0$.
\begin{remark}
An alternative definition of strictly stable random elements that
coincides with the above for $\bR^d$ (and in many other situations)
requires that, for each $n\geq2$, there exists $a_n>0$ such that
$\xi_1+\cdots+\xi_n\eqd a_n\xi$, where $\xi,\xi_1,\dots,\xi_n$ are
i.i.d. While \eqref{Eq:stable} implies this condition
immediately, extra assumptions related to the semicontinuity
property of characters and the continuity of the scaling operation
are needed for the equivalence of the two definitions, see
\cite[Th.~5.16]{dav:mol:zuy08}. The major step is to establish that
$a_n=n^{1/\alpha}$, after which \eqref{Eq:stable} follows easily.
\end{remark}
A strictly $\alpha$-stable $\xi$ is always \emph{infinitely divisible}
and so its Laplace transform satisfies
\begin{equation}
\label{Eq:Laplace-exponent}
\bE\chi(\xi)=\exp\{-\phi(\chi)\},\qquad \chi\in\bKH\,,
\end{equation}
for a negative definite complex-valued function $\phi$ on $\bKH$ with
$\Re\phi\in[0,\infty]$ and $\phi(\one)=0$, see \cite[Th.~3.2.2,
Prop.~4.3.1]{ber:c:r}. The function $\phi$ is called the \emph{Laplace
exponent} of $\xi$.
A Laplace exponent is associated with a unique \emph{L\'evy measure}
$\nu$, a Radon measure on the semigroup of all bounded characters on
$\bKH$, see \cite{ber:c:r}. In the following we always assume that the
L\'evy measure is supported by $\bK':=\bK\setminus\{\neutral\}$
canonically embedded
into the semigroup of all bounded semicharacters on $\bKH$ and
that it is $\sigma$-finite on $\bK'$. In this case we say that $\xi$
\emph{admits a L\'evy measure}. The L\'evy measure $\nu$ satisfies
\begin{equation}
\label{eq:lev-mes}
\int_{\bK'}(1-\Re \chi(x))\,\nu(dx)<\infty
\end{equation}
for all $\chi\in\bKH$. The measure $\nu$ is the intensity measure of
a Poisson process $\{\eta_i: i\in\bN\}$ on $\bK'$, and the
appropriately defined (if necessary using principal values and
compensating terms) sum of the points $\eta_i$ yields an infinitely
divisible random element that is said to have no deterministic,
Gaussian, or idempotent components.
\begin{lemma}
\label{lemma:non-zero}
Assume for some $\chi\in\bKH$ that
\begin{equation}
\label{Eq:lim-chi-positive}
\liminf_{t\downarrow0} \Re\chi(tx)>0
\end{equation}
for all $x\in\bK$. If $\xi$ is a strictly $\alpha$-stable random
element, then $\bE\chi(\xi)\neq 0$.
\end{lemma}
\begin{proof}
Since $-1 \le \Re\chi(t x) \le 1$ for all $t > 0$ and $x \in \bK$,
it follows from the assumption \eqref{Eq:lim-chi-positive} and
Fatou's Lemma that
\begin{equation}
\label{eq:Fatou_for_char}
0 < \bE \left[\liminf_{t\downarrow0} \Re\chi(t\xi)\right]
\le \liminf_{t\downarrow0} \bE \Re\chi(t\xi).
\end{equation}
By the stability property of $\xi$, $\bE\chi(\xi) =
(\bE\chi(n^{-1/\alpha}\xi))^n$ for all $n \in \bN$, and so
$\bE\chi(\xi) = 0$ would imply that $\bE\chi(n^{-1/\alpha}\xi)=0$
and hence $\bE\,\Re\chi(n^{-1/\alpha}\xi)=0$ for all $n \in \bN$,
but this contradicts \eqref{eq:Fatou_for_char}.
\end{proof}
\begin{lemma}
\label{lemma:homogeneous}
Assume that \eqref{Eq:lim-chi-positive} holds for all $\chi\in\bKH$.
Then the L\'evy measure of a strictly $\alpha$-stable random
element is an $\alpha$-homogeneous measure on $\bK'$.
\end{lemma}
\begin{proof}
It follows from \eqref{Eq:stable} that
\begin{displaymath}
\phi(a^{1/\alpha}\chi)+\phi(b^{1/\alpha}\chi)
=\phi((a+b)^{1/\alpha}\chi),\quad a,b>0\,,
\end{displaymath}
where the Laplace exponent $\phi$ of the strictly stable random
element $\xi$ is finite by Lemma~\ref{lemma:non-zero}. Since
$(x,a)\mapsto ax$ is a jointly measurable map, the function $a
\mapsto \bE\chi(a\xi)$ is measurable by Fubini's theorem. Therefore,
\begin{equation}
\label{eq:phi-homogeneous}
\phi(a\chi)=a^\alpha\phi(\chi),\qquad a>0.
\end{equation}
The random element $a\xi$ is also infinitely divisible and its
L\'evy measure is $B \mapsto \nu(aB)$ for measurable subsets $B\subseteq\bK'$. By
\eqref{eq:phi-homogeneous}, the L\'evy measure of $a\xi$ is
$a^\alpha\nu$. Since the L\'evy measure is unique, we obtain
\eqref{eq:nu-x-homogeneous}.
\end{proof}
\begin{theorem}
\label{thr:levy-k-separating-countable}
Let $\xi$ be a strictly stable random element that admits a
$\sigma$-finite L\'evy measure $\nu$. Assume that there is a
countable, closed under finite products, separating family of
characters $\bKH$ such that $t \mapsto \Re\chi(tx)$, $t\in\bR_{++}$,
is right- (or left-) continuous for all $x\in\bK'$ and $\chi \in
\bKH$. Assume also that Assumption~\ref{assumption:X} holds and
\eqref{Eq:lim-chi-positive} holds for all $\chi\in\bKH$.
Then $\nu$ admits the polar representation
$\pi\otimes\theta_\alpha$, where $\pi$ is a probability measure on
$\bK'$ satisfying
\begin{equation}
\label{eq:eps-condition}
\int_0^\infty \bE[1-\Re\chi(t\eps)] t^{-(\alpha+1)} \, dt <\infty,
\qquad \chi\in\bKH.
\end{equation}
The Poisson process with intensity measure $\nu$ can be represented
as $\{\Gamma_i^{-1/\alpha}\eps_i\}_{i\in\bN}$, where
$\{\eps_i\}_{i\in\bN}$ is a sequence of i.i.d. copies of a random
element $\eps$ in $\bK'$ with distribution $\pi$, and
$\{\Gamma_i\}_{i\in\bN}$ are successive points of a unit intensity
Poisson process on $\bR_+$. If $\xi$ is symmetric, then $\eps$ can
also be chosen to have a symmetric distribution.
\end{theorem}
\begin{proof}
The measure $\nu$ admits the polar decomposition
$\pi\otimes\theta_\alpha$ by Theorem~\ref{thr:general-polar} applied
with $\bX:=\bK'$, $\cH := \{1 - \Re \chi: \; \chi \in \bKH\}$, and
$\nu$ being the L\'evy measure of $\xi$. Indeed,
\eqref{eq:nu-h-integral} follows from \eqref{eq:lev-mes}, $\nu$ is
$\alpha$-homogeneous by Lemma~\ref{lemma:homogeneous} given that
\eqref{Eq:lim-chi-positive} is assumed to hold, and the separating
condition imposed on $\bKH$ yields \eqref{Eq:nu-support-h}.
The Poisson point process with intensity measure $\theta_\alpha$ is
obtained as $\{\Gamma_i^{-1/\alpha}: i\in\bN\}$. Thus, the Poisson
process on $\bK'\times \bR_{++}$ with intensity measure
$\pi\otimes\theta_\alpha$ is obtained by marking a Poisson point
process $\{\Gamma_i^{-1/\alpha}: i\in\bN\}$ with marks
$\{\eps_i\}_{i\in\bN}$ that are i.i.d. copies of a random element
$\eps$ in $\bK'$ with distribution $\pi$. By
\eqref{Eq:levy-condition-pi-general}, the random element $\eps$
satisfies \eqref{eq:eps-condition}. Since $\nu$ is the push-forward
of $\pi\otimes\theta_\alpha$ under the multiplication map
$(x,t)\mapsto tx$, the Poisson process with intensity $\nu$ is given
by $\{\Gamma_i^{-1/\alpha}\eps_i:\; i\in\bN\}$.
The uniqueness of the L\'evy measure yields that $\nu$ is symmetric
if $\xi$ is symmetric. Then in the proof of
Proposition~\ref{proposition:renorm} it suffices to replace each set
$W_k$ with the union of $W_k$ and its image under the involution to
ensure that $\pi$ is a symmetric measure.
\end{proof}
\begin{remark}
Note that the random element $\eps$ in
Theorem~\ref{thr:levy-k-separating-countable} is not unique and also
is not restricted to belong to the transversal $\bS$ from
Assumption~\ref{assumption:X}. By rescaling it is always possible to
arrange that $\eps\in\bS$ a.s., however in this case, the Poisson
process with intensity $\nu$ is given by
$\{c\Gamma_i^{-1/\alpha}\eps_i:\; i\in\bN\}$ for a positive scaling
constant $c$.
\end{remark}
\begin{remark}
Suppose that the ``L\'evy-Khinchin-It\^o'' decomposition of the strictly
$\alpha$-stable random element $\xi$ does not contain any deterministic, Gaussian
or idempotent components, so that $\xi$ is the sum of the points in
a Poisson process $\{\eta_i : i \in \bN\}$, where the sum is
appropriately defined by using principal values and compensating
terms if necessary, see \cite[Th.~7.2]{dav:mol:zuy08}.
Theorem~\ref{thr:levy-k-separating-countable} establishes that
$\{\eta_i : i \in \bN\}$ can be constructed as
$\{\Gamma_i^{-1/\alpha}\eps_i: i \in \bN\}$, that is, as randomly
scaled i.i.d. copies of a random element $\eps$.
Recall that no compensating terms for the sum of the $\eta_i$ are
required if $\xi$ is $\alpha$-stable and symmetric. In this case,
the Laplace exponent of $\xi$ is given by
\begin{displaymath}
\phi(\chi) =\int_{\bK'}(1-\chi(x))\,\nu(dx)
=\int_{\bK'}(1-\Re\chi(x))\,\nu(dx)\,,
\quad \chi\in\bKH\,.
\end{displaymath}
Put
\begin{equation}
\label{eq:xi-r}
\xi^{(r)} :=\sum_{\Gamma_i\leq r}
\Gamma_i^{-1/\alpha}\eps_i\,,\qquad r>0.
\end{equation}
The probability generating functional formula for a Poisson point
process yields that, for all $\chi\in\bKH$,
\begin{align*}
\bE \chi(\xi^{(r)})
& =\exp\left\{-\int_{r^{-\alpha}}^\infty \int_{\bK'}
(1-\chi(tx))\alpha t^{-(\alpha+1)} \, \pi(dx) \, dt\right\}\\
& \to \bE \chi(\xi)\qquad \text{as }
r\uparrow\infty,
\end{align*}
since $\bE\chi(t\eps)$ is real-valued by the symmetry of $\eps$ and
the integral under the exponential converges by
\eqref{eq:eps-condition}.
\end{remark}
\section{Examples}
\label{sec:applications}
\begin{example}
\label{example:Rd}
If $\bK=\bR^d$ with the usual arithmetic addition, the involution given by
the negation, the Borel $\sigma$-algebra, and conventional scaling
by positive scalars, then Assumption~\ref{assumption:X} holds with
$\bS$ being the sphere in $\bR^d$ with respect to any norm. A
countable separating family of continuous characters is given by
$\chi(x)=\exp\{\imath \langle u,x\rangle\}$, where $\langle
u,x\rangle$ is the scalar product of $u\in\bQ^d$ and $x\in\bR^d$,
and Theorem~\ref{thr:levy-k-separating-countable} yields the polar
representation of L\'evy measures for strictly stable random
vectors. In particular, each strictly stable random vector $\xi$
corresponds to the Poisson process
$\{\Gamma_i^{-1/\alpha}\eps_i:i\in\bN\}$. If $\alpha\in(0,1)$ or if
$\xi$ is symmetric, then the sum of these points converges almost
surely and yields the LePage series decomposition of $\xi$, see
\cite{lep:wood:zin81}.
\end{example}
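A hedged numerical sketch of the one-dimensional symmetric case of this example: with $\alpha=1$ and $\eps=\pm1$ equiprobable, the LePage sum $\sum_i \Gamma_i^{-1}\eps_i$ converges by symmetric cancellation, and a direct computation with the pushed-forward L\'evy measure $\tfrac12|x|^{-2}dx$ gives the characteristic function $\exp\{-(\pi/2)|u|\}$, i.e.\ a centered Cauchy law with scale $\pi/2$ (so the median of $|\xi|$ is $\pi/2$). The truncation level and sample size below are illustrative.

```python
import random, math

# Symmetric LePage series in d = 1 with alpha = 1:
#   xi = sum_i Gamma_i^{-1} eps_i,   eps_i = +/-1 equiprobable,
# whose limit law is Cauchy with scale pi/2 (so median |xi| = pi/2).
random.seed(3)

def lepage_sym(n_terms=1500):
    s, gamma = 0.0, 0.0
    for _ in range(n_terms):
        gamma += random.expovariate(1.0)   # successive Poisson points Gamma_i
        s += random.choice((-1.0, 1.0)) / gamma
    return s

n = 2000
med = sorted(abs(lepage_sym()) for _ in range(n))[n // 2]
assert abs(med - math.pi / 2) < 0.25, med
```

The discarded tail $\sum_{i>N}\Gamma_i^{-1}\eps_i$ has variance of order $1/N$, so a truncation at $N=1500$ terms is more than adequate for this test.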
\begin{example}
\label{ex:operator}
In the setting of Example~\ref{example:Rd}, define the scaling by
letting $tx:=\exp\{(\log t)\AA\}x$, $t>0$, for a non-degenerate
matrix $\AA$. The corresponding stable elements are usually called
\emph{operator stable}, see \cite{hud:mas81,shar69}. An operator
stable random element is infinitely divisible and so admits a
series representation and a L\'evy measure. By
Theorem~\ref{thr:levy-k-separating-countable}, an operator stable
random element $\xi$ in $\bR^d$ admits the LePage series
representation
\begin{displaymath}
\xi\eqd \sum_{i\in\bN} \exp\{-\alpha^{-1}(\log\Gamma_i)\AA\} \eps_i.
\end{displaymath}
Note that the polar decomposition of the L\'evy measure appears in
\cite[Eq.~(2)]{hud:jur:veeh86} and \cite{hud:mas81}.
\end{example}
\begin{example}
\label{ex:processes-values}
Let $\bK$ be the family $\bR^{[0,\infty)}$ of real-valued functions
on $\bR_+$ with the cylindrical $\sigma$-algebra, the pointwise arithmetic
addition of functions as the semigroup operation, involution being
the negation, and the scaling operation applied pointwise.
It is known \cite{kab:stoev12,mar70} that an infinitely
divisible separable in probability stochastic process $\xi(s)$,
$s\in\bR_+$, can be associated with the Poisson process
$\{\eta_i,i\in\bN\}$ on $\bR^{[0,\infty)}$ with a $\sigma$-finite
intensity measure $\nu$, so that $\xi$ admits a L\'evy measure
$\nu$. If $\xi$ is symmetric, then, for each $s\in\bR_+$,
\begin{equation}
\label{eq:id-processes}
    \xi(s)\eqd \lim_{r\downarrow 0}\sum_{|\eta_i(s)|\geq r}
    \eta_i(s),\qquad s\in\bR_+,
\end{equation}
where $\eqd$ in this setting denotes the equality of all
finite-dimensional distributions. It should be noted that the order
of summands in \eqref{eq:id-processes} may change with $s$, and
there might be no order of the summands that guarantees the
convergence for all rational $s$, not to say for all
$s\in\bR_+$. Such a common order exists for random functions with
non-negative values (in which case $\bK$ is endowed with the
identical involution).
This cone does not admit a countable separating family of
characters. Let $\bKD$ be the countable family of characters
defined by
\begin{equation}
\label{Eq:chi-line}
\chi(x)=e^{\imath x(s) u},\qquad s\in\bQ_+,
\end{equation}
where $x=(x(s))_{s\geq 0}$ is an element of $\bR^{[0,\infty)}$ and
$u$ belongs to the set $\bQ$ of rational numbers.
Theorem~\ref{thr:levy-k-separating-countable} applies to the process
$\xi$ restricted onto the set $\bQ_+$ of non-negative rationals, so
that the image $\tilde{\nu}$ of the L\'evy measure $\nu$ under the
map that restricts a function to $\bQ_+$ has polar decomposition as
$\pi\otimes\theta_\alpha$ and the Poisson process with intensity
$\tilde{\nu}$ is given by
$\{\Gamma_i^{-1/\alpha}\tilde{\eps}_i,i\geq1\}$. By
\cite[Prop.~2.19]{kab:stoev12}, for each $t\in\bR_+\setminus\bQ_+$
and any sequence of rationals $t_n\to t$, the separability property of $\xi$ yields that
$\Gamma_i^{-1/\alpha}\tilde{\eps}_i(t_n)$ converges in probability
to $\eta_i(t)$, whence $\tilde{\eps}_i(t_n)$ converges in
probability to $\eps_i(t):=\Gamma_i^{1/\alpha}\eta_i(t)$. Thus, the
L\'evy measure $\nu$ corresponds to the Poisson process
$\{\Gamma_i^{-1/\alpha}\eps_i,i\geq1\}$, where $\{\eps_i,i\in\bN\}$
is a sequence of i.i.d. separable processes distributed as
$\eps(s)$, $s\in\bR_+$. Condition \eqref{eq:eps-condition} in this
setting is equivalent to $\bE|\eps(s)|^\alpha<\infty$ for all
$s$.
In the symmetric case, following \eqref{eq:xi-r}, the convergence in
\eqref{eq:id-processes} can be rephrased as
\begin{displaymath}
\xi(s)\eqd \lim_{r\uparrow\infty} \sum_{\Gamma_i\leq r}
\Gamma_i^{-1/\alpha}\eps_i(s),\qquad s\in\bR_+.
\end{displaymath}
Therefore, in case of symmetric $\alpha$-stable processes the order
of summands in \eqref{eq:id-processes} can be made the same for all
time points. This makes it possible to appeal to path
regularity results for stochastic integrals from
\cite[Th.~4]{ros89p} in order to confirm that $\eps$ shares path
regularity properties with $\xi$, for instance, $\eps$ is almost
surely right-continuous with left limits (c\`adl\`ag) if $\xi$ is
c\`adl\`ag. The same holds for stochastic processes with almost
surely non-negative values, since then the involution is the
identity. The pointwise convergence of the LePage series yields the
uniform convergence, see \cite{bas:ros13}. Note that a result
concerning the existence of the series representation of a general
(not necessarily symmetric stable) infinitely divisible c\`adl\`ag
function using c\`adl\`ag summands is not available.
\end{example}
\begin{example}
Let $\bK$ be the family of non-negative functions $x(s)$,
$s\in\bR_+$, with the cylindrical $\sigma$-algebra, the semigroup
operation being pointwise maximum, identical involution, and the
scaling applied pointwise to the values of the function. It is
shown in \cite{kab:stoev12} that each separable max-infinitely
divisible stochastic process admits a L\'evy measure. A separating
family of characters is given by those of the form $x \mapsto
\chi(x) = \one_{x(s)< a}$ for $s,a\in\bR_+$. While these characters
are not continuous, the function $t\mapsto \chi(tx)$ is
right-continuous. Restricting the functions to non-negative
rationals as in Example~\ref{ex:processes-values} and using the
results from \cite{kab:stoev12} concerning the max-infinitely
divisible setting, we obtain that the L\'evy measure of each
max-stable separable in probability stochastic process $\xi$ admits
a polar representation and the process itself admits the series
representation
\begin{displaymath}
\xi(s)\eqd \bigvee_{i\in\bN}\Gamma_i^{-1/\alpha}\eps_i(s),
\end{displaymath}
which first appeared in \cite{haan84}.
\end{example}
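In the simplest instance of this representation (a numerical aside, with $\eps_i\equiv 1$ and $\alpha=1$), the series collapses to $\xi=\bigvee_i \Gamma_i^{-1}=1/\Gamma_1$, which has the unit Fr\'echet law $\bP\{\xi\leq x\}=e^{-1/x}$; this is easy to verify by simulation.

```python
import random, math

# de Haan series with eps_i identically 1 and alpha = 1:
#   xi = max_i Gamma_i^{-1} = 1 / Gamma_1,  Gamma_1 ~ Exp(1),
# so P(xi <= x) = exp(-1/x), the unit Frechet law.
random.seed(1)
n = 100_000
samples = [1.0 / random.expovariate(1.0) for _ in range(n)]

for x in (0.5, 1.0, 2.0, 5.0):
    emp = sum(s <= x for s in samples) / n
    assert abs(emp - math.exp(-1.0 / x)) < 0.01, x
```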
\begin{example}
\label{ex:time-stable}
Equip the family $\bK$ of real-valued c\`adl\`ag functions on
$\bR_+$ that vanish at the origin with pointwise arithmetic addition, involution
being the negation, and the scaling defined by $(tx)(s):=x(ts)$, $s\in\bR_+$, for
$t\in\bR_{++}$. Stable elements in this cone with $\alpha=1$ are
called \emph{time-stable} processes in \cite{kop:mol15} and
{\em processes infinitely divisible with respect to time} in \cite{man05}.
The characters are given by \eqref{Eq:chi-line} and, in view of the
c\`adl\`ag assumption, they constitute a countable separating family
and are right-continuous as required in
Theorem~\ref{thr:levy-k-separating-countable}. Furthermore,
$\chi(tx)=\exp\{\imath x(ts)u\}\to 1$ as $t\downarrow0$, so that a
transversal in $\bK$ can be constructed as in
Remark~\ref{rem:sphere-from-h}, see also \cite{kop:mol15}. Thus, if
a time-stable process with symmetric distribution admits a series
representation with c\`adl\`ag functions, it also admits the LePage
representation
\begin{displaymath}
\xi(s)\eqd \sum_{i\in\bN} \eps_i(\Gamma_i^{-1}s),\qquad s\in\bR_+,
\end{displaymath}
where $\{\eps_i\}_{i\in\bN}$ are i.i.d. copies of $\eps(s)$,
$s\in\bR_+$, such that
\begin{displaymath}
\bE \int_0^\infty \min(1,\eps(s)^2)s^{-2}ds<\infty
\end{displaymath}
because of \eqref{eq:eps-condition}.
The setting can be altered by considering the family of non-negative
c\`adl\`ag functions with the identical involution. In particular,
each separable in probability c\`adl\`ag time-stable
process with non-negative values admits a LePage series
representation.
\end{example}
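A minimal numerical check of time-stability with $\alpha=1$ (an illustrative aside): for a homogeneous Poisson process $N$, the marginals of $N'(as)+N''(bs)$ and $N((a+b)s)$ agree, since both are Poisson with mean $\lambda(a+b)s$, and independence of increments extends this to the whole process. The rate and time parameters below are arbitrary choices.

```python
import random

random.seed(2)

def poisson(mean):
    # count unit-rate exponential arrivals in [0, mean]
    t, k = 0.0, 0
    while True:
        t += random.expovariate(1.0)
        if t > mean:
            return k
        k += 1

lam, s, a, b = 1.3, 2.0, 2.0, 3.0   # arbitrary illustrative parameters
n = 30_000
lhs = [poisson(lam * a * s) + poisson(lam * b * s) for _ in range(n)]
rhs = [poisson(lam * (a + b) * s) for _ in range(n)]
m_l, m_r = sum(lhs) / n, sum(rhs) / n

# both means should be close to lam*(a+b)*s = 13
assert abs(m_l - lam * (a + b) * s) < 0.3
assert abs(m_l - m_r) < 0.3
```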
\begin{example}
Let $\bK$ be the family of locally finite measures $\mu$ on $\bR^d$
with the arithmetic addition, identical involution, and the Borel
$\sigma$-algebra generated by the vague topology
\cite[Sec.~9.1]{MR2371524}. An infinitely divisible locally finite
random measure admits a L\'evy measure, see \cite{MR2371524}. A
countable separating family of continuous characters consists of
\begin{displaymath}
\chi(\mu)=\exp\left\{-\int ud\mu\right\},
\end{displaymath}
where $u$ belongs to an appropriately chosen countable family of
continuous functions with compact support. If the scaling operation
is applied to the values of measures, then $\chi(tx)\to1$ as
$t\downarrow0$ for all $x$, so that a transversal can be constructed
as in Remark~\ref{rem:sphere-from-h}. An alternative way of
constructing a measurable transversal $\bS$ is to take a sequence
$\{B_k\}_{k\in\bN}$ of bounded sets that form a base of the topology
and let $\mu\in\bS$ if $\mu(B_0)=\cdots=\mu(B_{n-1})=0$ and
$\mu(B_n)=1$ for some $n$. By
Theorem~\ref{thr:levy-k-separating-countable}, each stable locally
finite random measure admits the LePage representation
\begin{displaymath}
\mu\eqd \sum_{i\in\bN} \Gamma_i^{-1/\alpha}\eps_i
\end{displaymath}
for a sequence $\{\eps_i\}_{i\in\bN}$ of i.i.d. locally finite
measures with $\alpha$-integrable values.
\end{example}
\begin{example}
Let $\bK$ be the family of closed sets $F$ in $\bR^d\setminus\{0\}$
with the union as the semigroup operation, identical involution, the
conventional scaling, and the $\sigma$-algebra generated by families
$\{F:F\cap K\neq\emptyset\}$ for all compact sets $K$. The countable
separating family of continuous characters is given by
$\chi(F):=\one_{F\cap G=\emptyset}$ for open sets $G$ from
the base of topology on $\bR^d$. Note that deterministic closed sets
have idempotent distributions in this cone. It is known that each
union infinitely divisible random closed set admits a L\'evy
measure, see \cite{MR2132405}. Thus, each strictly $\alpha$-stable
random closed set $\xi$ without idempotent factors (so that
$\bP\{x\in \xi\}<1$ for all $x\in\bR^d$) admits the series
representation as the union of $\Gamma_i^{-1/\alpha}\eps_i$,
$i\in\bN$, for a sequence $\{\eps_i\}_{i\in\bN}$ of i.i.d. random
closed sets.
\end{example}
\begin{example}
Let $\bK$ be the family of all non-decreasing functions
$\Phi:\cK\mapsto\bR_+$ defined on the family $\cK$ of compact
subsets of $\bR^d$ that vanish at the empty set and are upper
semicontinuous, that is $\Phi(K_n)\downarrow\Phi(K)$ as
$K_n\downarrow K$. Such functions are known as capacities (and
sometimes are called topological pre-capacities, see
\cite{MR1814344}). Equip $\bK$ with the semigroup operation by
taking pointwise maximum of two capacities and the scaling of their
values. It is shown in \cite{nor86} that an infinitely divisible
capacity $\xi$ admits a L\'evy measure. It does not have idempotent
factors if the essential infimum of $\xi(K)$ vanishes for all
$K\in\cK$. By Theorem~\ref{thr:levy-k-separating-countable}, each
strictly $\alpha$-stable capacity admits the series representation
$\sum_{i \in \bN} \Gamma_i^{-1/\alpha}\eps_i$, see also
\cite[Th.~4.1]{mol:strok15}.
\end{example}
\begin{example}
Let $\bK$ be the family of metric measure spaces with the Cartesian
product as the semigroup operation and the scaling applied to the
metric, see \cite{evan:mol15} for details. This semigroup admits a
countable separating family of characters and a measurable
transversal. Furthermore, each infinitely divisible random element
in $\bK$ admits a L\'evy measure and so
Theorem~\ref{thr:levy-k-separating-countable} applies and yields the
LePage series representation of stable metric measure spaces
obtained in \cite[Th.~10.3]{evan:mol15}.
\end{example}
\section*{Acknowledgment}
\label{sec:acknowledgment}
The paper was initiated while SE was visiting the University of Bern
supported by the Swiss National Science Foundation.
\def\cprime{$'$}
\section{INTRODUCTION}\label{section:intro}
The tremendous excitement generated by the recent LIGO discoveries of
gravitational waves (GWs) from merging binary black holes
\citep{LIGO1,LIGO2} will only be
surpassed by the unambiguous simultaneous detection of an
electromagnetic (EM) counterpart to such a GW signal. With an EM
counterpart, the sky localization of the GW source will likely be
greatly improved, and if a host galaxy can be identified, the
event can be placed in the proper astrophysical context. The galaxy's
redshift could also be used in conjunction with the luminosity
distance---measured independently with the GW signal---to place new
and unbiased constraints on cosmological parameters \citep{Nissanke:2010}.
Another major advantage of detecting an EM counterpart is that it
could be combined with sub-threshold GW triggers to improve the
statistical significance of marginal signals,
potentially lowering the false alarm rate. It is also conceivable that
several weak coincident EM and GW signals could reveal a
population of sources which are individually too weak to be detectable
by blind GW or EM searches.
One of the most
promising mechanisms for a LIGO counterpart is a short gamma-ray burst
(SGRB) produced by a binary neutron star (NS/NS) or neutron
star--black hole (NS/BH) merger \citep{Eichler:1989}. The disrupted NS will form a
massive, highly magnetized accretion disk around the companion BH,
driving a relativistic jet which will be manifest as a SGRB to
observers oriented along the jet axis.
For a sub-sample of SGRBs, a precursor gamma-ray flare can be seen
roughly $1-10$ seconds before the peak of the GRB \citep{Troja:2010}. For a nominal NS/BH
binary with masses $1.4M_\odot$ and $10M_\odot$, this would correspond
to hundreds of orbits before merger, with binary separations of
$20-30$ gravitational radii.
In this paper, we explore the possibility that these precursor flares
come from some emission process on the surface of the neutron star as
it spirals towards the companion black hole. If
so, the resulting gamma-rays will experience the extreme gravitational
forces that govern this highly relativistic and dynamical
system. These effects will be imprinted on the gamma-ray signal that
ultimately reaches a distant observer. We have identified two primary
features in these light curves: special relativistic Doppler beaming,
and magnification due to gravitational lensing. As we will show below,
the detailed properties of the light curve provide information about
the binary separation, inclination angle, and orbital period,
giving important information complementary to the GW signal.
The most robust feature of this precursor EM light curve is that it
should be locked in phase with the GW signal, also chirping through
increasing frequency and amplitude as the system approaches
merger. Thus we refer to this particular class of counterparts as
``electromagnetic chirps.'' We fully expect that data analysis tools
similar to those used in GW searches may be
fruitful in discovering and interpreting such signals in otherwise
noisy data from gamma-ray observatories such as the Gamma-ray Burst
Monitor (GBM) on Fermi (Dal Canton et al.\ 2017).
We remain intentionally agnostic about the specific physical mechanism
that produces the gamma-ray flash on the NS surface, but one promising
model is that of resonant shattering flares that occur as the binary
orbital frequency sweeps through the eigenfrequencies of the NS normal
modes during the moments leading up to merger
\citep{Tsang:2012,Tsang:2013}. When the quadrupolar crust-core
interface mode is resonantly excited by tidal interaction, a huge
amount of energy can be deposited into the crust, and ultimately
released as gamma-rays.
\section{LIGHT CURVES IN BINARY SPACETIME}\label{section:light_curves}
We calculate the light curves and spectra from the NS/BH system with
the Monte Carlo radiation transport code {\tt Pandurata}\,
\citep{Schnittman:2013}. To date, {\tt Pandurata}\, has only been applied
to problems with stationary Kerr spacetimes. In order to use it
for the highly dynamic spacetime of a merging compact binary system,
significant modifications were required. First and foremost, we needed
to move from the Hamiltonian formalism described in
\citet{Schnittman:2013}, appropriate for a system with multiple
integrals of motion, to a more generalized Lagrangian approach to
solving the geodesic equation for an arbitrary metric and connection,
better suited for the binary spacetime.
At the same time, we desired a spacetime formulation that would be
computationally efficient for integrating millions of geodesic
trajectories. Thus, instead of using full numerical relativity data
(typically only available for tens of binary orbits),
we instead opted for a relatively simple analytic description of
the binary spacetime based on a post-Newtonian (PN) approximation to the
orbit \citep{Kelly:2007}.
The binary is completely described by two non-spinning point masses with
$m_1+m_2=M$ and $m_1 \ge m_2$, moving on a circular orbit with binary
separation $D\equiv (GM/c^2)x$ (unless explicitly stated otherwise, we
hereafter assume units with $G=c=1$). The angular velocity is given by the 2PN
expression
\begin{equation}\label{eqn:Omega}
\Omega =
\left[64\frac{x^3}{(1+2x)^6}+\eta\frac{1}{x^4}+\left(-\frac{5}{8}\eta+\eta^2\right)\frac{1}{x^5}\right]^{1/2}M^{-1}\, ,
\end{equation}
with $\eta \equiv m_1 m_2/M^2$.
The orbit is assumed to be instantaneously circular, but we do evolve
the separation according to the 2.5PN leading quadrupole radiation
reaction terms derived in \citet{Peters:1964}:
\begin{equation}\label{eqn:Peters}
\frac{dx}{dt} = -\frac{64}{5}\eta\frac{1}{x^3}\, .
\end{equation}
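Equations (\ref{eqn:Omega}) and (\ref{eqn:Peters}) fix the orbital dynamics. As a rough numerical cross-check (using only the leading Keplerian $\Omega\approx x^{-3/2}M^{-1}$ rather than the full 2PN expression, with standard values of $G$, $c$, and $M_\odot$), integrating equation (\ref{eqn:Peters}) in closed form gives the coalescence time $t=(5/256)\,x_0^4\eta^{-1}M$ and an orbit count $N=(x_0^{5/2}-x_f^{5/2})/(64\pi\eta)$, consistent with the precursor lead times of seconds and the ``hundreds of orbits'' quoted in the introduction.

```python
import math

G, c, Msun = 6.674e-11, 299_792_458.0, 1.989e30

m1, m2 = 10.0, 1.4                      # nominal BH and NS masses (solar)
M = m1 + m2
eta = m1 * m2 / M ** 2
M_sec = M * Msun * G / c ** 3           # total mass in seconds, ~5.6e-5 s

def merge_time(x0):
    # closed-form integral of dx/dt = -(64/5) eta / x^3 (time in units of M)
    return (5.0 / 256.0) * x0 ** 4 / eta * M_sec

def n_orbits(x0, xf=6.0):
    # combine Keplerian Omega ~ x^{-3/2}/M with Peters' dx/dt
    return (x0 ** 2.5 - xf ** 2.5) / (64.0 * math.pi * eta)

t = merge_time(25.0)
N = n_orbits(25.0)
print(f"from x=25: {t:.2f} s to merger, ~{N:.0f} orbits")   # ~3.98 s, ~140 orbits
```

The cut-off $x_f=6$ is an illustrative choice near the Schwarzschild innermost stable circular orbit.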
The metric is expressed by the standard 3+1 lapse/shift formalism:
\begin{equation}
g_{\mu \nu} = \begin{pmatrix}
-\alpha^2 + \beta^2 & \beta_j \\
\beta_i & \gamma_{ij}\\
\end{pmatrix}\, .
\end{equation}
Following \citet{Campanelli:2006}, we use $\alpha = 2/(1+\psi^4)$,
$\beta_j =0$, and $\gamma_{ij}= \delta_{ij}\psi^4$. The conformal
factor $\psi$ is given by
\begin{equation}
\psi = 1+ \frac{m_1}{2r_1}+\frac{m_2}{2r_2}\, ,
\end{equation}
with $r_1$ and $r_2$ being the Cartesian distances from the field
point to the primary and secondary masses, respectively. For the
Christoffel components $\Gamma^{\rho}_{\mu \nu}$ we take
the spatial and temporal metric derivatives analytically based
on the trajectory as given by equation (\ref{eqn:Omega}).
To generate light curves, {\tt Pandurata}\, employs a Monte Carlo ray-tracing
scheme that is based on shooting a large number of photon packets from
the emission region to potential observers at infinity \citep{Schnittman:2013}.
For the NS/BH problem investigated here, we use a simple thermal, optically
thick emission model. Each photon is
launched from the surface of the NS with radius 10 km, isotropic in the local
frame of the star and with a limb-darkening factor appropriate for an
optically thick emitter. This gives the photon's initial position
$\mathbf{x}$ and four-velocity $\mathbf{u}$. The photon is then
propagated forward along the affine parameter $\lambda$ according to
the standard geodesic formula
\begin{equation}
\frac{d^2x^\rho}{d\lambda^2} = -\Gamma_{\mu \nu}^{\rho}
\frac{dx^\mu}{d\lambda} \frac{dx^{\nu}}{d\lambda}\, .
\end{equation}
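{\tt Pandurata}\, integrates this equation in the binary metric; as an illustrative stand-alone test of the same machinery (in the far simpler static Schwarzschild spacetime, and in the equivalent orbit-equation form $d^2u/d\phi^2=3Mu^2-u$ with $u=1/r$), one can check that an RK4 integration of a null geodesic recovers the weak-field light deflection $4GM/(c^2 b)$ for impact parameter $b$. The step size and tolerance below are illustrative.

```python
import math

def deflection(b, M=1.0, dphi=1e-4):
    # RK4 integration of the Schwarzschild null-orbit equation
    #   u'' = 3 M u^2 - u,  u = 1/r,
    # starting at u = 0 (infinity) with u' = 1/b; the photon returns to
    # u = 0 after an azimuth of pi + deflection.
    def rhs(u, v):
        return v, 3.0 * M * u * u - u
    u, v, phi = 0.0, 1.0 / b, 0.0
    while True:
        k1u, k1v = rhs(u, v)
        k2u, k2v = rhs(u + 0.5 * dphi * k1u, v + 0.5 * dphi * k1v)
        k3u, k3v = rhs(u + 0.5 * dphi * k2u, v + 0.5 * dphi * k2v)
        k4u, k4v = rhs(u + dphi * k3u, v + dphi * k3v)
        u += dphi * (k1u + 2 * k2u + 2 * k3u + k4u) / 6.0
        v += dphi * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0
        phi += dphi
        if phi > 1.0 and u <= 0.0:      # photon has escaped back to infinity
            return phi - math.pi

b = 200.0
delta = deflection(b)
assert abs(delta - 4.0 / b) / (4.0 / b) < 0.05   # weak-field value 4M/b
```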
When a photon packet reaches a simulated detector surface at some
large radius, its
direction specifies the appropriate pixel in the image plane, exactly
like a pinhole camera. Since each photon is also tagged with a time
stamp, a movie can be built up over time. Generally, each observer is
located at a specific sky position $(\theta,\phi)$, so the vast
majority of the Monte Carlo photons never contribute to the image. For
circular orbits, we are able to take advantage of the periodic motion
of the binary to efficiently map the azimuthal coordinate to the time
coordinate, so a single movie is produced by combining images from
observers at all azimuthal positions. Because we are also interested
in generating light curves for multiple latitudinal inclination
angles, every photon that escapes the system
ultimately contributes to some observer's light curve, restoring
a remarkable level of efficiency for the Monte Carlo approach.
\begin{figure}[h]
\caption{\label{fig:movie_stills} Snapshots of thermal emission from
the surface of a $1.4 M_\odot$ neutron star orbiting a $10 M_\odot$ black
hole. The observer is at inclination $90^\circ$, edge-on to the
orbital plane, and the binary separation is $D=10M$. In panel (a)
the observer, black hole, and neutron star are co-linear, resulting
in an Einstein ring around the BH, and producing the peak
magnification; in (b) the NS is moving towards the observer
with maximum blueshift; (c) shows the strong gravitational lensing
of photons that are deflected $180^\circ$ by the BH; and (d) is the
point of maximum redshift.}
\begin{center}
\includegraphics[width=0.35\textwidth]{fig1a.ps}
\includegraphics[width=0.35\textwidth]{fig1b.ps}\\
\includegraphics[width=0.35\textwidth]{fig1c.ps}
\includegraphics[width=0.35\textwidth]{fig1d.ps}
\end{center}
\end{figure}
In Figure \ref{fig:movie_stills} we show four snapshots of the NS/BH
system as seen by an observer oriented edge-on to the orbital
plane. The binary masses are $m_1=10M_\odot$ and $m_2=1.4M_\odot$ and
the binary separation is $D=10M$. The two dominant effects here are
gravitational lensing and relativistic beaming. From weak lensing
theory, whenever there is co-alignment of the source, lens, and
observer, an Einstein ring is formed, with high magnification. Lensing
by a black hole produces an infinite number of concentric Einstein
rings, each one at smaller radius and magnification.
The primary Einstein ring can be seen in panel (a), when the NS is on
the far side of the BH relative to the observer. The NS then
progresses around its orbit to (b), when it reaches the point of
maximum blueshift and beaming towards the observer. Also seen in (b)
is the faint secondary Einstein ring, only appearing a quarter phase
later due to the time delay of photons orbiting around the BH before
reaching the observer. In panel (c) another Einstein ring is seen,
produced when the system is aligned in the order BH--NS--observer, and
the photons from the nearer NS are deflected $180^\circ$ by the BH on
the far side of the orbit before returning to the observer. By this
time, the NS has already moved roughly a quarter phase out of the way,
and thus does not appear co-linear with the BH. Finally, in panel (d)
the NS is at the point of maximum redshift, noticeably fainter than in
panel (b).
\begin{figure}[h]
\caption{\label{fig:light_curve} {\it (top)} Normalized light curve corresponding
to Figure \ref{fig:movie_stills}, with the time of each panel labeled
accordingly. {\it (bottom)} Normalized light curve from the same
system, but with binary separation $D=40M$.}
\begin{center}
\includegraphics[width=0.4\textwidth]{fig2a.ps}\\
\includegraphics[width=0.4\textwidth]{fig2b.ps}
\end{center}
\end{figure}
In Figure \ref{fig:light_curve} we show the bolometric light curve
corresponding to the snapshots in Figure \ref{fig:movie_stills}. At
this small binary separation of $10M$, the orbital velocity is roughly
a quarter of the
speed of light, so the dominant feature in the light curve is the
Doppler beaming as the NS moves towards and away from the
observer. The lensing magnification contributes a strong peak at (a),
and a much smaller peak visible at (c). In the bottom panel of Figure
\ref{fig:light_curve} we show the light curve from the same system at
binary separation $D=40M$. The orbital velocity, and thus relativistic
beaming effect, is much smaller, while the lensing still produces a
large magnification. The smaller orbital velocity also implies the
light-crossing time is shorter relative to the orbital period, so the
four phases corresponding to those shown in Figure
\ref{fig:movie_stills} are more evenly spaced in time.
\begin{figure}[h]
\caption{\label{fig:lc_inc} Light curve dependence on observer
inclination. The binary separation is $10M$ as in
Fig.\ \ref{fig:movie_stills}. While the relativistic beaming effects
are significant even at low inclinations, the lensing effects are
only observable above $\sim 80^\circ$.}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig3.ps}
\end{center}
\end{figure}
Figure \ref{fig:lc_inc} shows
the dependence of the light curve features on observer
inclination. Not surprisingly, the sharp lensing feature is only
visible at high inclination angles with $i\gtrsim 80^\circ$. Yet even
at $i=30^\circ$, the relativistic beaming is responsible for a significant
modulation in the light curve. This is because, at a given frequency,
the observed flux scales like the redshift cubed: $I_\nu \sim
(\nu_{\rm obs}/\nu_{\rm em})^3$. The bolometric flux modulation is greater by
another factor of the redshift. One simple way to understand this is
when the emitter has a blackbody temperature $T_{\rm em}$, it is
observed as a blackbody with temperature $T_{\rm obs}=T_{\rm
em}(\nu_{\rm obs}/\nu_{\rm em})$, and of course the total flux
scales like $T^4$. For even the moderate inclination of $i=30^\circ$,
the ratio of peak-to-valley flux is $I_{\rm max}/I_{\rm min} \approx
3$, while for edge-on systems the modulation is an order of magnitude
greater.
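The quoted modulation at $i=30^\circ$ can be roughly recovered from special relativity alone. The sketch below keeps only the Doppler factor $\delta = 1/[\gamma(1-\beta_{\rm los})]$ and the bolometric $\delta^4$ scaling, ignoring gravitational redshift and lensing, with $\beta \approx 0.25$ as in the text.

```python
import math

def doppler_factor(beta, inclination_deg, phase):
    """phase = 0 when the NS moves directly towards the line of sight."""
    beta_los = beta * math.sin(math.radians(inclination_deg)) * math.cos(phase)
    gamma = 1.0 / math.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta_los))

def peak_to_valley(beta, inclination_deg):
    """Bolometric flux ratio between maximum blueshift and maximum redshift."""
    d_max = doppler_factor(beta, inclination_deg, 0.0)
    d_min = doppler_factor(beta, inclination_deg, math.pi)
    return (d_max / d_min) ** 4
```

This gives `peak_to_valley(0.25, 30.0)` close to the factor of $3$ quoted above; edge-on, the purely special-relativistic ratio is larger still, with lensing and gravitational effects pushing it higher yet.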
\begin{figure}[h]
\caption{\label{fig:lc_energy} Light curve dependence on energy.
The binary separation is $10M$ as in Fig.\ \ref{fig:movie_stills},
and the observer inclination is $60^\circ$.}
\begin{center}
\includegraphics[width=0.8\textwidth]{fig4.ps}
\end{center}
\end{figure}
For a thermal emission spectrum, the magnitude of the Doppler
modulation also depends on the energy band at which it is observed. At
energies below the thermal peak, the slope of the spectrum is
increasing like $I_\nu \sim \nu^2$, so the net effect of beaming is
relatively modest: the observer sees a blue(red)-shifted portion of an
inherently fainter(brighter) part of the spectrum. The opposite is
true at energies above the thermal peak, where the relativistic beaming
combines with the inherent shape of the spectrum to enhance the effect
of Doppler modulation. These effects are shown in Figure
\ref{fig:lc_energy}, where we have divided the observed spectrum into
three broad bands, one covering the thermal peak, and one each above
and below. For an inclination angle of $60^\circ$, the bolometric
peak-to-valley amplitude is $\sim 10$, while the
high-energy flux is actually modulated by nearly a factor of 100. This
will have important implications for designing a search strategy for
these systems.
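The band dependence follows from the fact, noted earlier, that a Doppler-shifted blackbody is again a blackbody at $T_{\rm obs} = \delta\, T_{\rm em}$. The sketch below integrates a Planck-like spectrum over fixed observer bands at the blue- and redshifted temperatures; the band edges and Doppler factors are illustrative stand-ins, not the actual values behind Figure \ref{fig:lc_energy}.

```python
import math

def band_flux(T_keV, e_lo_keV, e_hi_keV, n=2000):
    """Planck-like band flux: midpoint integration of e^3/(exp(e/T)-1)."""
    total, de = 0.0, (e_hi_keV - e_lo_keV) / n
    for k in range(n):
        e = e_lo_keV + (k + 0.5) * de
        total += e**3 / math.expm1(e / T_keV) * de
    return total

T_em, d_blue, d_red = 1.0, 1.24, 0.80   # keV; illustrative Doppler factors

# Below the thermal peak the modulation is mild ...
r_soft = band_flux(d_blue * T_em, 0.1, 0.5) / band_flux(d_red * T_em, 0.1, 0.5)
# ... while on the Wien tail it is dramatically amplified.
r_hard = band_flux(d_blue * T_em, 5.0, 20.0) / band_flux(d_red * T_em, 5.0, 20.0)
```

Bands above the thermal peak sit on the exponential tail, so the blue-to-red flux ratio there is far larger than in the Rayleigh--Jeans part of the spectrum.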
On the other hand, the gravitational lensing is achromatic, and generally
strongest when the source is moving transverse to the observer, so the
lensing modulation will have roughly the same effect on light curves
in different energy bands.
\section{ELECTROMAGNETIC CHIRPS}
As shown in the previous section, the gamma-ray light curve from a
NS/BH binary is modulated at the orbital frequency with an amplitude that
is a function of the line-of-sight velocity. As the orbit shrinks due
to gravitational radiation losses, the frequency and amplitude of the
gamma-ray light curve increase. In other words, the EM light curve {\it
chirps}, just like the GW signal.
A NS/BH system with masses $1.4M_\odot$ and $10M_\odot$ will complete
roughly 700 orbits during the final minute before merger, entirely
within the LIGO band. A portion of the light curve corresponding to
this period is shown in Figure \ref{fig:inspiral}. To achieve
sufficiently high resolution with {\tt Pandurata}\, over such a long time, we need
only calculate a handful of circular light curves, each at a different orbital
separation, and then interpolate between them according to the
inspiral evolution as governed by equation (\ref{eqn:Peters}). The
evolution is cut off at a separation $D=10M$, shortly before
merger. In practice, the light curve shown in Figure
\ref{fig:inspiral} should really be thought of as a {\it modulation window}
to be multiplied by the inherent flare luminosity, which may only last
a few seconds or less, depending on the emissivity model.
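The interpolation strategy just described can be sketched as follows; the names and the piecewise-linear blending in separation are our own simplification, not necessarily the scheme actually used.

```python
import numpy as np

# Precompute periodic light curves curves[i](phase) at separations D_grid[i];
# during the inspiral, blend the two bracketing curves linearly in D.

def modulation_window(t, D_of_t, phi_of_t, D_grid, curves, phase_grid):
    D, phi = D_of_t(t), phi_of_t(t) % (2 * np.pi)
    i = np.clip(np.searchsorted(D_grid, D) - 1, 0, len(D_grid) - 2)
    w = (D - D_grid[i]) / (D_grid[i + 1] - D_grid[i])
    lo = np.interp(phi, phase_grid, curves[i])      # periodic curve at D_grid[i]
    hi = np.interp(phi, phase_grid, curves[i + 1])  # ... and at D_grid[i+1]
    return (1 - w) * lo + w * hi
```

The resulting window is then multiplied by the inherent flare luminosity, as described in the text.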
\begin{figure}[h]
\caption{\label{fig:inspiral} Electromagnetic modulation for an
inspiraling NS/BH binary starting
at orbital separation $50M$, with observer inclination angle $82^\circ$.
The insets show zoomed-in views of the beginning and end of
the inspiral.}
\begin{center}
\includegraphics[width=1.0\textwidth]{fig5.ps}
\end{center}
\end{figure}
At early times, when the orbital velocity is somewhat smaller, the
lensing peak of the light curve is larger than that due to the Doppler
beaming, but at late times the beaming dominates the modulation (see
Fig.\ \ref{fig:light_curve}).
As with traditional radial velocity observations of spectroscopic
binaries, the Doppler shift modulation will provide a degenerate
measurement of the BH mass and the binary inclination. For nearly
edge-on systems where the lensing peak is observable, the degeneracy
can be broken and an accurate BH mass can be determined, much like in
the case of transiting exoplanet systems. With a coincident
GW detection, the gravitational waveform will provide a different
degenerate measure of the binary masses and inclination. For extremely
precise light curve measurements, one could even imagine resolving the
width of the lensing peak, in turn giving information about the NS
radius and thus the equation of state.
There is also the very real possibility that the source could be a
NS/NS binary, with either or both neutron stars initiating the
gamma-ray flare(s). In this case, the light curve would be simply
repeated with half the period, yielding two beaming and two lensing
peaks per orbit. Or if the flares occur at different times
(potentially due to different NS masses or magnetic field
configurations), the light curve would consist of two isolated flares,
modulated by the same chirping signature, but offset in phase by
$180^\circ$.
\section{OBSERVATIONAL POTENTIAL}
To this point, it has not been necessary to specify the physical
mechanism that causes the gamma-ray flare on the NS surface. Anything
that leads to emission closely tied to the neutron star will give
qualitatively the same EM chirp as the light curves computed in the
previous sections. One promising model is that of resonant shattering
of the NS crust \citep{Tsang:2012,Tsang:2013}. In this scenario, the
tidal deformation of the NS, modulated by the orbital motion,
resonantly excites the crust-core interface eigenmode, leading to the
crust shattering and releasing $\sim 10^{46-48}$ erg on an extremely
short time scale. Depending on the NS equation of state, this resonant
shattering flare takes place somewhere from 1 to 10 seconds before
merger, consistent with SGRB precursors seen with Swift
\citep{Troja:2010}. Another possible model that similarly deposits a
large amount of energy into the NS surface shortly before merger is
that of the unipolar inductor, where the accelerating magnetized NS
essentially forms an electric generator with the black hole or neutron
star companion \citep{Hansen:2001,Mingarelli:2015,DOrazio:2016}.
We propose to implement efficient searches for these characteristic chirp
signals in the data from existing time-domain observatories such as
Fermi in conjunction with any LIGO triggers. By using
data analysis techniques similar to the matched filtering employed
by the LIGO collaboration, we hope to be able to extract long but weak
signals from a noisy background. Moreover, if the source parameters
are partially known from a coincident GW signal (in particular the
chirp mass and the merger time) then the parameters of the EM chirp
are strongly constrained, restricting the volume of parameter space
that must be explored and further reducing the false-alarm background
of the search, as well as its computational cost. Methods for
performing such a search in Fermi/GBM data are currently being
investigated and will be described in a companion paper (Dal Canton et
al.\ 2017).
An important caveat in considering the effects of the EM chirp, in the
context of these models, is that high luminosities emitted thermally
from a small region, like the neutron star surface, will be optically
thick to pair-production ($\gamma + \gamma \rightarrow e^+ + e^-$),
resulting in a pair-photon fireball that only becomes optically thin
at the surface of a much larger photosphere \citep{Goodman:1986}.
For photons with energies $E_\gamma \approx 500$ keV we can estimate
the optical depth to pair production (see e.g., \citet{Nakar:2005}):
\begin{equation}
\tau_{\gamma\gamma} \gtrsim
\frac{L_\gamma\sigma_T}{4 \pi c E_\gamma R_{\rm NS}} \approx 10^{10}
\frac{L_\gamma/(10^{46} {\rm erg\, s}^{-1})}{R_{\rm NS}/(10{\rm km})}
\, ,
\end{equation}
where $L_\gamma$ is the luminosity of photons near $E_\gamma$ and
$\sigma_T$ is the Thomson cross-section. Clearly, if all the energy
of these flares is deposited into photons above the pair-production
threshold, a relativistic pair-photon fireball is quickly formed,
which will expand relativistically until the pair-production freezes
out \citep{Goodman:1986}, resulting in a large photospheric radius
when compared to the orbital and gravitational scales. Additionally,
the photospheric surface is accelerated to large Lorentz factor
$\Gamma$, likely washing out any beaming effects from the
orbital dynamics.
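Evaluating the optical-depth estimate above in cgs units confirms the quoted order of magnitude.

```python
import math

SIGMA_T = 6.6524e-25   # Thomson cross-section, cm^2
C = 2.9979e10          # speed of light, cm/s
KEV = 1.6022e-9        # 1 keV in erg

def tau_gamma_gamma(L_gamma, E_gamma_keV=500.0, R_ns=1.0e6):
    """tau ~ L * sigma_T / (4 pi c E_gamma R_NS), all in cgs."""
    return L_gamma * SIGMA_T / (4.0 * math.pi * C * E_gamma_keV * KEV * R_ns)
```

For $L_\gamma = 10^{46}$ erg s$^{-1}$ and $R_{\rm NS}=10$ km this evaluates to a few times $10^{10}$, in line with the $\approx 10^{10}$ scaling of the equation.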
For black-body emission, a luminosity $L \gtrsim 10^{42}$
erg s$^{-1}$, when confined to $R_{\rm NS} \approx 10$ km ($T \gtrsim
20$ keV), will have a sufficiently high-energy tail to be optically
thick to pair production. Thus, there exists an unfortunate trade-off
for potential EM chirp sources. Those brighter than $L \gtrsim
10^{42}$ erg s$^{-1}$, which are more easily detectable, will likely
be optically thick to pair-production and result in a pair-photon
fireball, potentially masking the EM chirp signature. Meanwhile, those
with luminosities below this threshold will be more difficult to
detect at extra-galactic distances. However, non-thermal emission
lacking significant contribution above the pair-production threshold
may avoid this limitation.
Resonant shattering flares, with luminosities of $\sim 10^{47-49}$ erg
s$^{-1}$, consistent with the observed SGRB precursors, are likely
pair-photon fireballs. However, emission from a black hole or neutron
star companion crossing the NS magnetic field lines in the unipolar
inductor model is significantly less energetic, with maximum
luminosity scaling as $L \sim 10^{40}$ erg
s$^{-1}$ $(B/10^{12} {\rm G})^2 (D/10^7 {\rm cm})^{-7}$, where $B$ is
the NS magnetic field and $D$ the binary separation \citep{Hansen:2001}.
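For reference, the unipolar-inductor scaling quoted above as a plain function, with the fiducial normalization taken directly from the text.

```python
def unipolar_luminosity(B_gauss, D_cm):
    """L ~ 1e40 erg/s * (B / 1e12 G)^2 * (D / 1e7 cm)^-7."""
    return 1.0e40 * (B_gauss / 1.0e12) ** 2 * (D_cm / 1.0e7) ** (-7)
```

The steep $D^{-7}$ dependence means the signal brightens sharply only in the last moments before merger.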
Despite these potential complications, we are confident that the
basic EM chirp remains a robust and potentially very powerful
prediction for a counterpart to the GW signal from merging NS
binaries. In the near future, we encourage any wide-field survey with
sufficiently high time resolution to look for these characteristic
chirps preceding GW triggers. In the more distant future, we look
forward to using low-frequency GW observatories such as LISA to
identify precursor signals that would in turn give an early warning
for the time and sky location for ground-based events, triggering more
sensitive targeted EM observations \citep{Sesana:2016}.
\section*{Acknowledgments}
This work was partially supported by NASA grant ATP13-0077. TDC was
supported by an appointment to the NASA Postdoctoral Program at the
Goddard Space Flight Center, administered by Universities Space
Research Association under contract with NASA.
\newpage
\subsection{Notation} The smallest clone containing a set
${\mathscr F}\subseteq\O$ shall be denoted by $\cl{{\mathscr F}}$; moreover, we write
${\mathscr F}^*$ for the set of all functions which arise from
functions of ${\mathscr F}$ by identification of variables,
addition of fictitious variables, or permutation
of variables.
For $n\geq 1$ we denote the $n$-ary operations on
$X$ by ${\mathscr O}^{(n)}$; if ${\mathscr F}\subseteq\O$, then ${\mathscr F}^{(n)}$ will stand for ${\mathscr F}\cap{\mathscr O}^{(n)}$. We will see $X$
equipped with a vector space structure; then we write
$\spann(S)$ for the subspace of $X$ generated by a set of
vectors $S\subseteq X$. We shall denote the zero vector of $X$ by $0$, and
use the same symbol for the constant function
with value $0$. We write ${\mathscr L}$ for the set of linear functions on $X$. The sum $f+g$ of two linear functions
$f,g$ on $X$ is defined pointwise, as is the binary function $f(x)+g(y)$ obtained by the sum
of two unary functions of different variables. The range of a function $f\in\O$ is given the
symbol $\ran f$. For a set $Y$ we write
$\P(Y)$ for the power set of $Y$ and $\P_{fin}(Y)$ for the set of finite subsets of $Y$.
\end{section}
\begin{section}{Monoids of linear functions}
Given any partial order ${\frak P}$ with $|{\frak P}|=\lambda\leq 2^\kappa$, we construct a
monoid ${\mathscr M}$ such that ${\mathscr I}_{\mathscr M}$ is isomorphic to $1+\L$, where $\L$ is the lattice of order
ideals of ${\frak P}$.
Equip $X$ with a vector space structure of dimension $\kappa$
over any field $K$ of characteristic $\neq 2,3$ and fix a basis $B$ of $X$. Fix moreover
three distinguished elements $a,b,c \in B$ and write
$A=B\setminus\{a,b,c\}$.\\ Next we want to introduce a notion
of ``small'' for subsets of $A$, preferably a natural one; in fact, we are looking for an order ideal
${\mathscr I}$
in $\P(A)$ extending the ideal $\P_{fin}(A)$ which is invariant under permutations of $A$ (i.e., if $S\in{\mathscr I}$ then
also $\alpha[S]\in{\mathscr I}$ for all permutations $\alpha$ of $A$), such that
if we factorize $\P(A)$ by this ideal, then the resulting
partial order has an antichain of length $\lambda$. Since we want to prove our theorem for
all $\lambda\leq 2^\kappa$, we need the existence of an antichain of length $2^\kappa$, i.e. as large
as $\P(A)$.
It is quite obvious that the only order ideals in $\P(A)$
that are invariant under permutations are the ${\mathscr I}_\xi=\{S\subseteq
A:|S|<\xi\}$, and the ${\mathscr J}_\xi=\{S\subseteq
A:|X\setminus S|\geq\xi\}$, where $\xi\leq\kappa$ is a
cardinal. For $X$ countably infinite, the ideal $\P_{fin}(A)={\mathscr I}_{\aleph_0}$ satisfies our requirement for
the antichain.
For there exists an \emph{almost disjoint} family ${\mathscr A}$ of subsets of
$A$ of size $2^{\aleph_0}$, meaning that all sets of ${\mathscr A}$
are infinite and whenever $A_1,A_2\in{\mathscr A}$ are distinct,
then $A_1\cap A_2$ is finite (see the textbook \cite{Jec02}). The reader interested in
countably infinite base sets only may keep this ideal in mind
in what follows. However, it is consistent with ZFC that
almost disjoint families
of size $2^\kappa$ fail to exist on uncountable $\kappa$, even if we consider ${\mathscr I}_{\kappa}$ instead
of ${\mathscr I}_{\aleph_0}$ and replace ``$A_1\cap A_2$ is finite''
by the weaker
``$|A_1\cap A_2|<\kappa$''. Moreover, if ${\mathscr I}_{\kappa}$ does not give us an antichain of
desired length, then the other ideals
will not work either, so we have to do something
less elegant: Fix any family ${\mathscr A}\subseteq\P(A)$ of subsets of
$A$ of cardinality $\kappa$ such that $|{\mathscr A}|=\lambda$, and such that
$A_1\nsubseteq A_2$ for all distinct $A_1,A_2\in{\mathscr A}$.
Such families exist; see the textbook \cite[Lemma 7.7]{Jec02} for a proof of
this. Now we set the ideal ${\mathscr I}$ to consist of all proper
subsets of sets in ${\mathscr A}$, plus all finite sets, and call the sets of ${\mathscr I}$ \emph{small}. Obviously,
${\mathscr I}$ is only an order ideal (not a lattice ideal) and quite arbitrary compared
to the ideal of finite subsets of $A$ which we can use for
countably infinite $X$. Note also that we had to give up invariance under permutations of $A$;
however, it will be sufficient
that if $\alpha$ maps $A_1$ bijectively onto $A_2$, where $A_1,A_2\in{\mathscr A}$,
and if $S\subseteq A_1$ is small, then
$\alpha[S]$ is small. Clearly, the sets of ${\mathscr A}$ are not
elements of ${\mathscr I}$, but their nontrivial intersections are. We index the family ${\mathscr A}$ by the
elements of ${\frak P}$: ${\mathscr A}=(A_p)_{p\in{\frak P}}$.
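For the countable case mentioned above, the classical almost disjoint family of size $2^{\aleph_0}$ comes from the binary tree: identify $A$ with the finite $0$-$1$ strings and attach to each infinite branch the set of its initial segments; two distinct branches agree only up to their first disagreement, so their prefix sets have finite intersection. A small Python illustration using finite truncations:

```python
def prefix_set(branch, depth):
    """Initial segments (up to length 'depth') of an infinite branch,
    given as a function n -> bit."""
    return {tuple(branch(i) for i in range(n)) for n in range(depth + 1)}

# Two distinct branches: all zeros vs. 0,0,0,1,1,1,...
zeros = lambda n: 0
mixed = lambda n: 0 if n < 3 else 1

A1 = prefix_set(zeros, 50)
A2 = prefix_set(mixed, 50)
# They intersect exactly in the 4 common prefixes of length <= 3,
# no matter how large 'depth' grows.
```

Since the binary tree has $2^{\aleph_0}$ branches, this yields an almost disjoint family of the required size on a countable set.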
The monoid ${\mathscr M}$ we are going to construct will be one of linear functions on the vector space
$X$, the set of which we denote by ${\mathscr L}$. We shall sometimes speak of the
\emph{support} of a linear
function $f$, by which we mean the subset of $A$ of those basis vectors which $f$ does not
send to $0$. The monoid ${\mathscr M}$ will be the union of seven classes of
functions, plus the zero function. Three classes, namely
${\mathscr N}$, ${\mathscr N}'$ and ${\mathscr N}''$, do ``almost nothing'', in the
sense that they have small support; ${\mathscr N}$ essentially
guarantees that the polymorphisms $\pol({\mathscr M})$ of the monoid ${\mathscr M}$ are sums of
linear functions, and ${\mathscr N}'$ and ${\mathscr N}''$ are auxiliary functions
necessary for the monoid to be closed under
composition. The class $\Phi$ represents the elements of
the partial order ${\frak P}$, the class $\Psi$ its order. Finally, the
classes $\S_\Phi$ and $\S_{{\mathscr N}'}$ ensure that there exist
nontrivial polymorphisms of the monoid, and that they
correspond to elements of the partial order.\\
We start with the set ${\mathscr N}$
of those linear functions $n\in {\mathscr L}$ which
satisfy the following conditions:
\begin{itemize}
\item $n(a)=a$
\item $n(b)=0$
\item $n(c)=c$
\item $n$ has small support.
\end{itemize}
Next we add the set ${\mathscr N}'\subseteq{\mathscr L}$ consisting of all linear functions $n'$ for which:
\begin{itemize}
\item $n'(a)=0$
\item $n'(b)=0$
\item $n'(c)=b$
\item $n'$ has small support
\item $\ran n'\subseteq\spann(\{b\})$.
\end{itemize}
The class ${\mathscr N}''$ contains all $n''\in{\mathscr L}$ with
\begin{itemize}
\item $n''(a)=a$
\item $n''(b)=0$
\item $n''(c)=0$
\item $n''$ has small support
\item $\ran n''\subseteq\spann(\{a\})$.
\end{itemize}
Observe that all functions $f$ in these three classes have
small support, and that the range of the functions of
${\mathscr N}'$ and ${\mathscr N}''$ is only a one-dimensional subspace of
$X$.
Now we define for all $p\in{\frak P}$ a function
$\phi_p\in{\mathscr L}$ by setting
\begin{itemize}
\item $\phi_p(a)=0$
\item $\phi_p(b)=0$
\item $\phi_p(c)=b$
\item $\phi_p(d)=b$ for all $d\in A_p$
\item $\phi_p(d)=0$ for all other $d\in B$.
\end{itemize}
So $\phi_p$ is essentially the characteristic function of
$A_p$. Observe that
$\ran\phi_p\subseteq\spann(\{b\})$. We write
$\Phi=\{\phi_p:p\in{\frak P}\}$.
We fix for all $p,q\in{\frak P}$ with $q\leq_{\frak P} p$ a function $\psi_{p,q}\in {\mathscr L}$ such that
\begin{itemize}
\item $\psi_{p,q}$ maps $A_q$ bijectively onto $A_p$
\item $\psi_{p,q}(a)=a$
\item $\psi_{p,q}(b)=0$
\item $\psi_{p,q}(c)=c$
\item $\psi_{p,q}(d)=0$ for all other $d\in B$
\item If $q\leq_{\frak P} r\leq_{\frak P} p$, then $\psi_{p,r}\circ
\psi_{r,q}=\psi_{p,q}$.
\end{itemize}
This is possible: Let $Y$ be a set of cardinality $\kappa$
and choose for all $p\in{\frak P}$ a bijection $\mu_p$ mapping
$A_p$ onto $Y$. Then setting $\psi_{p,q}(d)=\mu_p^{-1}\circ
\mu_q(d)$ for all $d\in A_q$, $\psi_{p,q}(a)=a$, $\psi_{p,q}(c)=c$, and $\psi_{p,q}(d)=0$ for
all remaining $d\in B$ yields the required functions. We set
$\Psi=\{\psi_{p,q}:p,q\in{\frak P}, q\leq_{\frak P} p\}$. The idea
behind $\psi_{p,q}$ is that it ``translates''
the function $\phi_p$ of $\Phi$ into the function
$\phi_q$, and that such a translation function exists only
if $q\leq_{\frak P} p$. More precisely we have
\begin{lem}\label{LEM:compositionWithPhi}
Let $\phi_r\in\Phi$ and $\psi_{p,q}\in\Psi$. If $r=p$,
then $\phi_r\circ\psi_{p,q}=\phi_q$; otherwise,
$\phi_r\circ\psi_{p,q}\in{\mathscr N}'$.
\end{lem}
\begin{proof}
Assume first that $r=p$. Then
in the composite $\phi_r\circ\psi_{p,q}$, first $\psi_{p,q}$ maps
$A_q$ onto $A_p$, and all other vectors of $A$ to $0$,
and then $\phi_r$ sends $A_r=A_p$ to $b$, so that the composite
indeed sends $A_q$ to $b$ and all other vectors of $A$ to $0$,
as does $\phi_q$; one easily checks that also the extra conditions on $a,b,c\in B$
are satisfied. If on the other hand
$r\neq p$, then the only basis vectors in $A$ which
$\phi_r\circ\psi_{p,q}$ does not send to zero are
those in $\psi_{p,q}^{-1} [A_r\cap A_p]$, a small set
since $\psi_{p,q}$ is one-one on its support and by the properties of the family
${\mathscr A}$. Moreover,
$\ran(\phi_r\circ\psi_{p,q})\subseteq\ran\phi_r\subseteq\spann(\{b\})$.
Hence, since also the respective additional conditions
on $a,b,c\in B$ are satisfied we have
$\phi_r\circ\psi_{p,q}\in{\mathscr N}'$.\\
\end{proof}
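The coherence property $\psi_{p,r}\circ\psi_{r,q}=\psi_{p,q}$ that the construction via the bijections $\mu_p$ guarantees can be sanity-checked on a finite toy model; the sets and maps below are small stand-ins, not the $\kappa$-sized objects of the text.

```python
# psi_{p,q} = mu_p^{-1} o mu_q on the support A_q (and 0 elsewhere).

Y = [0, 1, 2]
A = {'p': ['p0', 'p1', 'p2'], 'r': ['r0', 'r1', 'r2'], 'q': ['q0', 'q1', 'q2']}
mu = {s: {a: i for i, a in enumerate(A[s])} for s in A}       # A_s -> Y
mu_inv = {s: {i: a for a, i in mu[s].items()} for s in A}     # Y -> A_s

def psi(p, q, d):
    """psi_{p,q} restricted to its support A_q."""
    return mu_inv[p][mu[q][d]]

# Going q -> r -> p agrees with going q -> p directly, for every d in A_q.
```
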
The remaining functions to be added to our monoid are those of the form $\phi_p+n''$,
where $\phi_p\in\Phi$ and $n''\in{\mathscr N}''$, the set of which we denote by $\S_\Phi$,
and all functions of the form $n'+n''$, where $n'\in{\mathscr N}'$ and
$n''\in{\mathscr N}''$; this set we call $\S_{{\mathscr N}'}$. The elements $f$ of
$\S_\Phi$ and $\S_{{\mathscr N}'}$ both satisfy
\begin{itemize}
\item $f(a)=a$
\item $f(b)=0$
\item $f(c)=b$.
\end{itemize}
We set ${\mathscr M}={\mathscr N}\cup{\mathscr N}'\cup{\mathscr N}''\cup\Phi\cup\Psi\cup\S_\Phi\cup\S_{{\mathscr N}'}\cup\{0\}$.
Observe the following properties
which hold for all $f\in{\mathscr M}$ and which will be useful:
\begin{itemize}
\item $f(a)\in\{0,a\}$
\item $f(b)=0$
\item $f(c)\in\{0,b,c\}$.
\end{itemize}
\begin{lem}\label{LEM:LIN:M_is_a_monoid}
${\mathscr M}$ is a monoid.
\end{lem}
\begin{proof}
The following table describes the composition of the
different classes of functions in ${\mathscr M}$. Here, the meaning of ${\mathscr X}\circ{\mathscr Y}={\mathscr Z}$ is:
Whenever $f\in{\mathscr X}$ and
$g\in{\mathscr Y}$, then $f\circ g\in {\mathscr Z}$.\\
\begin{center}
\begin{tabular}{r|c|c|c|c|c|c|c}
$\circ$ & ${\mathscr N}$ & ${\mathscr N}'$ & ${\mathscr N}''$ & $\Phi$ & $\Psi$ & $\S_{\Phi}$ & $\S_{{\mathscr N}'}$\\
\hline
${\mathscr N}$ & ${\mathscr N}$ & $0$ & ${\mathscr N}''$ & $0$ & ${\mathscr N}$ & ${\mathscr N}''$ & ${\mathscr N}''$ \\
${\mathscr N}'$ & ${\mathscr N}'$ & $0$ & $0$ & $0$ & ${\mathscr N}'$ & $0$ & $0$ \\
${\mathscr N}''$ & ${\mathscr N}''$ & $0$ & ${\mathscr N}''$ & $0$ & ${\mathscr N}''$ & ${\mathscr N}''$ & ${\mathscr N}''$ \\
$\Phi$ & ${\mathscr N}'$ & $0$ & $0$ & $0$ & $\Phi \cup {\mathscr N}'$ & $0$ & $0$ \\
$\Psi$ & ${\mathscr N}$ & $0$ & ${\mathscr N}''$ & $0$ & $\Psi \cup {\mathscr N}$ & ${\mathscr N}''$ & ${\mathscr N}''$ \\
$\S_{\Phi}$ & $\S_{{\mathscr N}'}$& $0$ & ${\mathscr N}''$ & $0$ & $\S_{\Phi} \cup\S_{{\mathscr N}'}$& ${\mathscr N}''$ & ${\mathscr N}''$ \\
$\S_{{\mathscr N}'}$ & $\S_{{\mathscr N}'}$& $0$ & ${\mathscr N}''$ & $0$ & $\S_{{\mathscr N}'}$ & ${\mathscr N}''$ & ${\mathscr N}''$ \\
\end{tabular}
\end{center}
We check the fields of the table. The fact that $\ran
n'\subseteq\spann(\{b\})$ for all $n'\in {\mathscr N}'$ and $f(b)=0$ for all
$f\in{\mathscr M}$ yields the ${\mathscr N}'$-column; in the same way we get
the $\Phi$-column.\\
If $g=\phi_p+n''\in\S_\Phi$ and $f\in{\mathscr M}$, then $f\circ g=f\circ
\phi_p+f\circ n''=f\circ n''$, so the $\S_\Phi$-column
is equal to the ${\mathscr N}''$-column, and the same holds for the
$\S_{{\mathscr N}'}$-column.\\
We turn to the ${\mathscr N}$- and ${\mathscr N}''$-columns. The $\S_\Phi$- and the $\S_{{\mathscr N}'}$-rows are the sums of the
$\Phi$- and the ${\mathscr N}'$-row with the ${\mathscr N}''$-row, respectively, since $(f+g)\circ h=(f\circ h)+(g\circ h)$
for all $f,g,h\in{\mathscr O}^{(1)}$. For the other rows of those columns,
note that if $f,g\in{\mathscr L}$ and $g$ has small support, then
also $f\circ g$ has small support. It is left to the reader to check the conditions on $a,b,c\in B$ and
on the range for the composites.\\
It remains to verify the $\Psi$-column. For the first
row, observe that since all $n\in{\mathscr N}$ have small support and
since $\psi_{p,q}^{-1}[S]$ is small for all small $S\subseteq
A$ and all $\psi_{p,q}\in\Psi$ by the properties of
${\mathscr A}$, any composition
$n\circ\psi_{p,q}$ will have small support. Thus, together
with the readily checked fact that the extra
conditions on $a,b,c\in B$ are satisfied we get that
$n\circ\psi_{p,q}\in{\mathscr N}$. The same argument yields the ${\mathscr N}'$- and
${\mathscr N}''$-rows.\\
The $\Phi$-row is a consequence of Lemma
\ref{LEM:compositionWithPhi}. Similarly to the proof of that lemma,
we show that $\psi_{p,s}\circ\psi_{t,q}$ is an element of ${\mathscr N}$
unless $s=t$, in which case it is $\psi_{p,q}$ by
construction. Indeed, assume $s\neq t$; then $\psi_{t,q}$
takes $A_q$ to $A_t$, but $\psi_{p,s}$ has support
$A_s$; therefore, the composite
$\psi_{p,s}\circ\psi_{t,q}$ has support $\psi_{t,q}^{-1} [A_t\cap
A_s]$, a small set since $\psi_{t,q}$ is injective on its support and
by the properties of the family ${\mathscr A}$. The conditions on $a,b,c$ for the composite to be in
${\mathscr N}$ are left to the reader, and we are done with the $\Psi$-row.\\
The $\S_\Phi$- and $\S_{{\mathscr N}'}$-rows are the sums of the
${\mathscr N}''$-row with the $\Phi$-row and the ${\mathscr N}'$-row
respectively, by the definitions of $\S_\Phi$ and $\S_{{\mathscr N}'}$.
\end{proof}
Recall that if ${\mathscr F}\subseteq\O$, then ${\mathscr F}^*$ consists of all functions
which arise from functions of ${\mathscr F}$ by
identification of variables, addition of fictitious variables, as well as by permutation of
variables. Functions in ${\mathscr F}^*$ are called \emph{polymers}
of functions of ${\mathscr F}$. Set
$$
{\mathscr V}=\{n'(x)+n''(y):n'\in{\mathscr N}', n''\in{\mathscr N}''\}.
$$
Moreover,
define for all $I\subseteq{\frak P}$ sets of functions
$$
{\mathscr D}_I=\{\phi_p(x)+n''(y): p\in I,
n''\in{\mathscr N}''\}
$$
and
$$
{\mathscr C}_I=({\mathscr M}\cup {\mathscr V}\cup{\mathscr D}_I)^*.
$$
Observe that ${\mathscr D}_{\frak P}$ is the set of all functions of the
form $\phi_p(x)+n''(y)$, where $\phi_p\in\Phi$ and
$n''\in{\mathscr N}''$.
\begin{lem}\label{LEM:theCIareClones}
Let $I\subseteq{\frak P}$ be an order ideal. Then ${\mathscr C}_I$ is a
clone in ${\mathscr I}_{\mathscr M}$.
\end{lem}
\begin{proof}
We first show that ${\mathscr C}_I^{(1)}={\mathscr M}$. Indeed, by its definition the unary functions in ${\mathscr C}_I$ are exactly ${\mathscr M}$
and those functions which arise when one identifies the two variables of a function in ${\mathscr V}\cup{\mathscr D}_I$.
If $f\in {\mathscr V}\cup{\mathscr D}_I$, then $f=n'(x)+n''(y)$ or $f=\phi_p(x)+n''(y)$.
Identifying its variables, we obtain a function of $\S_{{\mathscr N}'}$ in the first and of $\S_{\Phi}$ in
the second case, and in either case an element of ${\mathscr M}$. Therefore, the unary part of ${\mathscr C}_I$ is
exactly ${\mathscr M}$ and ${\mathscr C}_I$, if a clone, is indeed an element of ${\mathscr I}_{\mathscr M}$.\\
${\mathscr C}_I$ contains $\pi^1_1\in{\mathscr M}$ and therefore all
projections, as it is by definition closed under the
addition of fictitious variables.\\ We
prove that ${\mathscr C}_I$ is closed under composition. To do
this it suffices to prove that if $f(x_1,\ldots,x_n),
g(y_1,\ldots,y_m)\in{\mathscr C}_I$, then
$f(x_1,\ldots,x_{i-1},g(y_1,\ldots,y_m),x_{i+1},\ldots,x_n)\in{\mathscr C}_I$, for all $1\leq i\leq n$.
Moreover, since ${\mathscr C}_I$ is closed under the addition of fictitious variables,
we may assume that $f,g$ depend on all of
their variables, so by the definition of ${\mathscr C}_I$ they are at most
binary; since within ${\mathscr C}_I$ we can freely permute variables, we can assume $f,g\in{\mathscr M}\cup{\mathscr V}\cup{\mathscr D}_I$.
Also, since ${\mathscr C}_I$ is by definition closed under identification of variables, we may assume
that $y_i$ and $x_j$ are different variables, for all $1\leq i\leq m$ and $1\leq j\leq n$.\\
Let first $f\in{\mathscr M}$. If we substitute any $g\in{\mathscr M}$ for the only variable
of $f$, then we stay in ${\mathscr M}\subseteq{\mathscr C}_I$ since
${\mathscr M}$ is a monoid by Lemma \ref{LEM:LIN:M_is_a_monoid}.
If $g$ is binary and of the form
$m'(x)+m''(y)\in{\mathscr V}$, then by Lemma \ref{LEM:LIN:M_is_a_monoid} we have
$f(m'(x)+m''(y))=f(m'(x))+f(m''(y))=f(m''(y))\in{\mathscr M}^*\subseteq{\mathscr C}_I$,
since the unary function $f\circ m''\in{\mathscr M}$
as ${\mathscr M}$ is a monoid.
Similarly, if $g=\phi_p(x)+m''(y)\in{\mathscr D}_I$ we get $f(\phi_p(x)+m''(y))=
f(\phi_p(x))+f(m''(y))=f(m''(y))\in{\mathscr M}^*$.\\
We proceed with the case where $f$ is binary, so $f\in{\mathscr V}\cup{\mathscr D}_I$.
Assume $f=n'(x)+n''(y)\in{\mathscr V}$, and that we substitute a
unary $g(z)\in{\mathscr M}$ for $x$. By Lemma \ref{LEM:LIN:M_is_a_monoid}, $n'\circ
g\in{\mathscr N}'\cup\{0\}$; hence, $f(g(z),y)$ is a function of the form $m'(z)+n''(y)\in{\mathscr V}$
if $n'\circ g\in{\mathscr N}'$, and the essentially unary function $n''(y)\in{\mathscr M}^*$ if $n'\circ g=0$.
If we substitute a unary $g(z)\in{\mathscr M}$
for $y$, then $n''\circ g\in{\mathscr N}''\cup\{0\}$, so
that again we stay in ${\mathscr V}\cup {\mathscr M}^*$. So say that
$f=\phi_p(x)+n''(y)\in{\mathscr D}_I$, and that we substitute a
unary $g(z)\in{\mathscr M}$ for $x$. From Lemma \ref{LEM:LIN:M_is_a_monoid} we know
that $\phi_p\circ g\in{\mathscr N}'\cup\Phi\cup\{0\}$. If $\phi_p\circ
g$ vanishes, then we obtain an essentially unary
function in $({\mathscr N}'')^*\subseteq{\mathscr M}^*$ for $f(g(z),y)$.
If
$\phi_p\circ g\in{\mathscr N}'$, then the sum with
$n''(y)$ is in ${\mathscr V}$. The interesting case is the one
where $\phi_p\circ g\in\Phi$; from the
proof of Lemma \ref{LEM:LIN:M_is_a_monoid} we know
that this can only happen if $g$ equals some
$\psi_{s,t}\in\Psi$.
Moreover, from Lemma \ref{LEM:compositionWithPhi} we
infer that the composition is only in $\Phi$ if $s=p$,
and then we have $\phi_p\circ \psi_{p,t}=\phi_t$.
Hence in this case,
$f(g(z),y)=\phi_t(z)+n''(y)\in{\mathscr D}_I$ since $t\leq p\in
I$. To finish the case where we substitute a
unary function for a variable of a binary function, let $f=\phi_p(x)+n''(y)$
and substitute $g(z)\in{\mathscr M}$ for $y$. Then, since $n''\circ g\in{\mathscr N}''\cup\{0\}$,
the result will either be of the form $\phi_p(x)+m''(z)$ and
thus in ${\mathscr D}_I$, or just $\phi_p(x)\in{\mathscr M}^*$ in case $n''\circ g$ vanishes.\\
We now substitute binary functions $g(v,w)\in{\mathscr V}\cup{\mathscr D}_I$ into one variable of
a binary $f(x,y)\in{\mathscr V}\cup{\mathscr D}_I$. Let
$g(v,w)=m'(v)+m''(w)\in{\mathscr V}$. Since $h\circ m'=0$ for all $h\in{\mathscr M}$, and $f(x,y)$ is of the form
$f_1(x)+f_2(y)$ for some $f_1,f_2\in{\mathscr M}$,
and since all involved functions are linear, $m'$ will
vanish in any substitution with $g$. Therefore
substituting $g$ is the
same as substituting only
an essentially unary function, which we already
discussed. So let $g(v,w)=\phi_q(v)+m''(w)$. Then again, $h\circ\phi_q=0$ for all $h\in{\mathscr M}$,
so substitution of $g$ is equivalent to substituting only
$m''(y)$ and we are done.
\end{proof}
We now prove that $\cl{{\mathscr M}}$ and the ${\mathscr C}_I$ are the only
clones in ${\mathscr I}_{\mathscr M}$.
\begin{lem}
Let ${\mathscr G}$ be a monoid of linear functions on the vector
space $X$ which contains the constant function $0$, and let $k\geq 1$ be a natural number.
If for any finite sequence of vectors $d_1,\ldots,d_k\in X$ there
exist $e_1,\ldots,e_k\in X$ and
$h_1,\ldots,h_k\in{\mathscr G}$
such that $h_j(e_j)=d_j$ and $h_j(e_i)=0$ for all $1\leq i,j\leq k$ with $i\neq j$, then
all functions in
$\pol({\mathscr G})^{(k)}$ are of the form
$g_1(x_1)+\ldots+g_k(x_k)$, with $g_1,\ldots,g_k\in{\mathscr G}$.
\end{lem}
\begin{proof}
Let $F(x_1,\ldots,x_k)\in\pol({\mathscr G})^{(k)}$. Since $0\in{\mathscr G}$, the
functions $g_j(x_j)=F(0,\ldots,0,x_j,0,\ldots,0)$ are
elements of ${\mathscr G}$ for all $1\leq j \leq
k$. We claim
$F(d_1,\ldots,d_k)=g_1(d_1)+\ldots+g_k(d_k)$ for all
$d_1,\ldots,d_k\in X$. Indeed, let $e_1,\ldots,e_k\in
X$ and $h_1,\ldots,h_k\in{\mathscr G}$ be provided by the assumption of the lemma.
Then
$h(x)=F(h_1(x),\ldots,h_k(x))$ is an element of ${\mathscr G}$;
therefore it is linear. Hence,
$$
\begin{aligned}
h(e_1+\ldots+e_k)&=h(e_1)+\ldots+h(e_k)\\
&=F(h_1(e_1),\ldots,h_k(e_1))+\ldots+F(h_1(e_k),\ldots,h_k(e_k))\\
&=F(d_1,0,\ldots,0)+\ldots+F(0,\ldots,0,d_k)\\
&=g_1(d_1)+\ldots+g_k(d_k).
\end{aligned}
$$
On the other hand,
$$
\begin{aligned}
h(e_1+\ldots+e_k)&=F(h_1(e_1+\ldots+e_k),\ldots,h_k(e_1+\ldots+e_k))\\
&=F(h_1(e_1)+\ldots+h_1(e_k),\ldots,h_k(e_1)+\ldots+h_k(e_k))\\
&=F(d_1,\ldots,d_k).
\end{aligned}
$$
This proves the lemma.
\end{proof}
\begin{lem}\label{LEM:containsNsatisfiesQuasiLinear}
Let ${\mathscr G}$ be a monoid of linear functions on the vector
space $X$ which contains $0$. If ${\mathscr G}$ contains ${\mathscr N}$, then the condition
of the preceding lemma is satisfied for all $k\geq 1$.
\end{lem}
\begin{proof}
Given $d_1,\ldots,d_k\in X$ we choose any distinct
$e_1,\ldots,e_k\in A$. Now for $1\leq j\leq k$ we define $h_j\in{\mathscr N}$ to
map $e_j$ to $d_j$, $a$ to $a$, $c$ to $c$, and all remaining basis vectors to
$0$.
\end{proof}
\begin{lem}\label{LEM:sumsOfTwo}
Let $f,g\in{\mathscr M}$ be nonconstant. If $f+g\in{\mathscr M}$, then
$f\in{\mathscr N}'\cup \Phi$ and $g\in{\mathscr N}''$ (or the other way
round).
\end{lem}
\begin{proof}
Observe where the nontrivial functions of ${\mathscr M}$ map $a,c\in
B$:\\
\begin{center}
\begin{tabular}{r|c|c}
& $a$ & $c$\\
\hline
${\mathscr N}$ & $a$ & $c$\\
${\mathscr N}'$ & $0$ & $b$\\
${\mathscr N}''$ & $a$ & $0$\\
$\Phi$ & $0$ & $b$\\
$\Psi$ & $a$ & $c$\\
$\S_\Phi$ & $a$ & $b$\\
$\S_{{\mathscr N}'}$& $a$ & $b$\\
\end{tabular}
\end{center}
All functions $f\in{\mathscr M}$ satisfy $f(a)\in\{a,0\}$ and
$f(c)\in\{b,c,0\}$. Hence, if $f+g\in{\mathscr M}$, then
$(f+g)(a)=f(a)+g(a)\in\{a,0\}$ and
$(f+g)(c)=f(c)+g(c)\in\{b,c,0\}$.
Since the field $K$ has characteristic $\neq 2,3$ we have that $a+a,b+b,c+c,b+c\notin\{0,a,b,c\}$.
Thus it can be seen from the table that if
$f(a)+g(a)\in\{a,0\}$, then at least one of the
functions must map $a$ to $0$ and thereby be an element of ${\mathscr N}'\cup\Phi$. From
the condition $f(c)+g(c)\in\{b,c,0\}$ we infer that
either $f$ or $g$ must map $c$ to $0$ and hence belong to
${\mathscr N}''$. This proves the lemma.
\end{proof}
\begin{lem}\label{LEM:sumsOfThree}
Let $f,g,h\in{\mathscr M}$ be nonconstant. Then $f+g+h\notin{\mathscr M}$.
\end{lem}
\begin{proof}
Since $K$ has characteristic $\neq 2,3$ we have that no sum of two or three elements of
$\{a,b,c\}$ is an element of $\{0,a,b,c\}$. If $f+g+h\in{\mathscr M}$,
then $f(a)+g(a)+h(a)\in\{a,0\}$. This implies that at
least two of the three functions have to map
$a$ to $0$ and therefore belong to ${\mathscr N}'\cup\Phi$.
Also, $f(c)+g(c)+h(c)\in\{b,c,0\}$, from which we
conclude that at least two functions must map $c$ to
$0$ and thus be elements of ${\mathscr N}''$. So one
function would have to be both in ${\mathscr N}'\cup\Phi$ and in
${\mathscr N}''$ which is impossible. Hence, $f+g+h\notin{\mathscr M}$.
\end{proof}
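Both case analyses depend only on the table of values at $a$ and $c$ together with the characteristic assumption on $K$. As a mechanical sanity check, the following Python sketch (an illustrative encoding only, with $a,b,c$ modeled as independent integer vectors so that no sum of two or three of them falls back into $\{0,a,b,c\}$) confirms that the only surviving two-term sums pair ${\mathscr N}'\cup\Phi$ with ${\mathscr N}''$, and that no three-term sum survives.

```python
from itertools import product

# Model a, b, c as independent integer vectors; sums then behave as in
# any field of characteristic != 2, 3 (no cancellation back into {0,a,b,c}).
a, b, c, zero = (1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, 0)

def add(*vs):
    return tuple(map(sum, zip(*vs)))

# Values of (f(a), f(c)) for each class of nontrivial functions in M,
# copied from the table in the proof of the first lemma.
classes = {
    "N": (a, c), "N'": (zero, b), "N''": (a, zero), "Phi": (zero, b),
    "Psi": (a, c), "S_Phi": (a, b), "S_N'": (a, b),
}

def stays_in_M(va, vc):
    # A sum can lie in M only if its value at a is in {a, 0}
    # and its value at c is in {b, c, 0}.
    return va in {a, zero} and vc in {b, c, zero}

# Sums of two: collect all class pairs whose sum could stay in M.
pairs = [(nf, ng)
         for (nf, f), (ng, g) in product(classes.items(), repeat=2)
         if stays_in_M(add(f[0], g[0]), add(f[1], g[1]))]
for nf, ng in pairs:
    # One summand is in N' or Phi, the other in N''.
    assert "N''" in (nf, ng) and ({nf, ng} - {"N''"}) <= {"N'", "Phi"}
assert len(pairs) == 4

# Sums of three: no triple of nontrivial classes survives.
triples = [t for t in product(classes.values(), repeat=3)
           if stays_in_M(add(*(v[0] for v in t)), add(*(v[1] for v in t)))]
assert triples == []
```

The four surviving pairs are exactly $({\mathscr N}',{\mathscr N}'')$, $(\Phi,{\mathscr N}'')$, and their transposes, matching the lemma.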
\begin{lem}\label{LEM:LIN:PolM_consists_of}
$\pol({\mathscr M})={\mathscr C}_{\frak P}$. In particular, all functions in $\pol({\mathscr M})$ depend on at most two variables.
\end{lem}
\begin{proof}
Since ${\mathscr C}_{\frak P}$ is a clone with unary part ${\mathscr M}$ by Lemma
\ref{LEM:theCIareClones}, we have that
${\mathscr C}_{\frak P}\subseteq\pol({\mathscr M})$. To see the other inclusion,
let $F(x_1,\ldots,x_k)\in\pol({\mathscr M})^{(k)}$. Then by Lemma
\ref{LEM:containsNsatisfiesQuasiLinear},
$F(x_1,\ldots,x_k)=f_1(x_1)+\ldots+f_k(x_k)$, with $f_i\in{\mathscr M}$, $1\leq i\leq
k$. We show $F\in{\mathscr C}_{\frak P}$; since clones are closed under the addition of
fictitious variables, we may assume that $F$ depends
on all of its variables, i.e. $f_i$ is nontrivial for
all $1\leq i\leq k$. If $k=1$, then $F\in{\mathscr M}$, so
$F\in{\mathscr C}_{\frak P}$. If $k=2$, then since
$F(x,x)=(f_1+f_2)(x)$ has to be an element of ${\mathscr M}$, Lemma \ref{LEM:sumsOfTwo} implies that
up to permutation of variables,
$F\in{\mathscr V}\cup{\mathscr D}_{\frak P}\subseteq{\mathscr C}_{\frak P}$. To conclude, observe that $k\geq
3$ cannot occur by Lemma \ref{LEM:sumsOfThree}, since
$F(x,x,x,0,\ldots,0)=f_1(x)+f_2(x)+f_3(x)$ must be an
element of ${\mathscr M}$ if $F\in\pol({\mathscr M})$.
\end{proof}
\begin{lem}\label{LEM:containN'+N''impliesN+N''}
Let ${\mathscr C}$ be a clone containing ${\mathscr M}$ and any function
of ${\mathscr V}$. Then ${\mathscr C}$ contains ${\mathscr V}$.
\end{lem}
\begin{proof}
Let $n'(x)+n''(y)\in{\mathscr V}\cap{\mathscr C}$, where $n'\in{\mathscr N}'$ and
$n''\in{\mathscr N}''$, and let $m'(x)+m''(y)$ with $m'\in{\mathscr N}'$ and $m''\in{\mathscr N}''$ be an arbitrary function
in ${\mathscr V}$. Since $\ran m'=\ran n'=\spann(\{b\})$, there is
$n_1\in{\mathscr L}$ with $m'=n'\circ n_1$. This $n_1$ can be chosen to satisfy $n_1(a)=a$, $n_1(b)=0$, and
$n_1(c)=c$; also, since $m'$ has small support, we can choose $n_1$ to have small support too.
Then $n_1\in{\mathscr N}\subseteq{\mathscr M}\subseteq{\mathscr C}$.
Similarly, there is $n_2\in{\mathscr N}$ such that $m''=n''\circ
n_2$. Hence, $m'(x)+m''(y)=n'(n_1(x))+n''(n_2(y))\in{\mathscr C}$.
\end{proof}
\begin{lem}\label{LEM:containPhi+N''impliesN+N''}
Let ${\mathscr C}$ be a clone containing ${\mathscr M}$ and any function
of ${\mathscr D}_{\frak P}$. Then ${\mathscr C}$ contains ${\mathscr V}$.
\end{lem}
\begin{proof}
Let $\phi_p(x)+n''(y)\in{\mathscr C}\cap{\mathscr D}_{\frak P}$, where $\phi_p\in\Phi$ and
$n''\in{\mathscr N}''$.
Taking any $n\in{\mathscr N}$ we set $n'=\phi_p\circ n\in{\mathscr N}'$. Then ${\mathscr C}$
contains $n'(x)+n''(y)\in{\mathscr V}$ and hence all
functions of ${\mathscr V}$ by the preceding lemma.
\end{proof}
\begin{lem}\label{LEM:functionsForcedIntoC}
Let ${\mathscr C}$ be a clone containing ${\mathscr M}$ and a function
$\phi_p(x)+n''(y)\in{\mathscr D}_{\frak P}$, where $\phi_p\in\Phi$ and $n''\in{\mathscr N}''$.
If $q\leq_{\frak P} p$ and $m''\in{\mathscr N}''$, then ${\mathscr C}$ contains the function
$\phi_q(x)+m''(y)$.
\end{lem}
\begin{proof}
As discussed in the proof of Lemma \ref{LEM:containN'+N''impliesN+N''},
there is $n\in{\mathscr N}$ such that $m''=n''\circ n$. Therefore ${\mathscr C}$ contains
$\phi_p(\psi_{p,q}(x))+n''(n(y))=\phi_q(x)+m''(y)$.
\end{proof}
\begin{prop}\label{PROP:allClonesOfIM}
If ${\mathscr C}\in{\mathscr I}_{\mathscr M}$ is a clone, then ${\mathscr C}={\mathscr M}^*=\cl{{\mathscr M}}$, or
${\mathscr C}={\mathscr C}_I$, where $I\subseteq{\frak P}$ is an order ideal on ${\frak P}$.
\end{prop}
\begin{proof}
Let ${\mathscr C}\neq \cl{{\mathscr M}}$, that is, ${\mathscr C}$ contains an essentially binary function.
Set $I=\{p\in{\frak P}:\exists
n''\in{\mathscr N}''\,(\phi_p(x)+n''(y)\in{\mathscr C})\}$. By Lemma
\ref{LEM:functionsForcedIntoC}, $I$ is an order ideal
of ${\frak P}$. We claim ${\mathscr C}={\mathscr C}_I$. Being elements of ${\mathscr I}_{\mathscr M}$, both ${\mathscr C}$ and
${\mathscr C}_I$ have ${\mathscr M}$ as their unary part. Let
$f(x,y)\in{\mathscr C}^{(2)}$ be essentially binary, i.e. depending on both of its variables;
then up to permutation of variables,
$f(x,y)\in{\mathscr V}\cup{\mathscr D}_{\frak P}$ by Lemma \ref{LEM:LIN:PolM_consists_of}. If $f\in{\mathscr V}$, then
$f\in{\mathscr C}_I$ by definition of ${\mathscr C}_I$. If $f\in{\mathscr D}_{\frak P}$, then $f(x,y)=\phi_p(x)+n''(y)$,
where $p\in{\frak P}$ and $n''\in{\mathscr N}''$. But then $p\in I$ by
definition of $I$ and so $f\in{\mathscr C}_I$. Hence,
${\mathscr C}^{(2)}\subseteq {\mathscr C}_I^{(2)}$. Because ${\mathscr C}$ contains a
binary function from ${\mathscr V}\cup{\mathscr D}_{\frak P}$, Lemmas
\ref{LEM:containN'+N''impliesN+N''} and
\ref{LEM:containPhi+N''impliesN+N''} imply
${\mathscr C}^{(2)}\supseteq{\mathscr V}$. Also, $\phi_q(x)+m''(y)\in{\mathscr C}^{(2)}$
for all $q\in I$ and all $m''\in{\mathscr N}''$ by Lemma
\ref{LEM:functionsForcedIntoC}, so that we have
${\mathscr C}^{(2)}\supseteq{\mathscr C}_I^{(2)}$ and thus ${\mathscr C}^{(2)}={\mathscr C}_I^{(2)}$.
Lemma \ref{LEM:LIN:PolM_consists_of} implies that
clones in ${\mathscr I}_{\mathscr M}$ are uniquely determined by their
binary parts, so that we conclude ${\mathscr C}={\mathscr C}_I$.
\end{proof}
\begin{prop}
Let $\L$ be the lattice of order ideals on the partial order ${\frak P}$.
The monoidal interval ${\mathscr I}_{\mathscr M}$ is isomorphic to
$1+\L$, i.e., $\L$ with a new smallest
element (corresponding to $\cl{{\mathscr M}}$) added.
\end{prop}
\begin{proof}
The mapping $\sigma:1+\L\rightarrow{\mathscr I}_{\mathscr M}$ taking an order
ideal $I\in\L$ to ${\mathscr C}_I$, as well as the smallest
element of $1+\L$ to $\cl{{\mathscr M}}$, is obviously a lattice
homomorphism and injective. By the preceding
proposition it is also surjective.
\end{proof}
\begin{proof}[Proof of Theorem \ref{THM:LIN:powithO}]
Given a partial order ${\frak P}$ with smallest element, we
consider the partial order ${\frak P}'$ obtained from ${\frak P}$
by taking away the smallest element. By the preceding
proposition, we can construct a monoid ${\mathscr M}$ such that
${\mathscr I}_{\mathscr M}$ is isomorphic to $1+\L'$, where $\L'$ is the
lattice of order ideals on ${\frak P}'$. Now it is enough to observe that $1+\L'$ is
isomorphic to the lattice $\L$ of order ideals on ${\frak P}$.
\end{proof}
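The final observation, that $1+\L'$ is isomorphic to $\L$, can be illustrated on a toy example. The sketch below (a hypothetical three-element poset with least element $0$, not the ${\frak P}$ of the construction) enumerates order ideals directly: every nonempty ideal contains the least element, and removing it yields a bijection onto the ideals of the truncated poset.

```python
from itertools import combinations

def ideals(elements, le):
    """All order ideals (downward-closed subsets) of a finite poset."""
    out = []
    for r in range(len(elements) + 1):
        for sub in combinations(elements, r):
            s = set(sub)
            if all(q in s for p in s for q in elements if le(q, p)):
                out.append(frozenset(s))
    return out

# Toy poset P = {0, 1, 2} with 0 below 1 and 2 (0 is the least element).
P = [0, 1, 2]
le = lambda q, p: q == p or q == 0
L = ideals(P, le)

# P' = P minus its least element: here a two-element antichain.
Pp = [1, 2]
lep = lambda q, p: q == p
Lp = ideals(Pp, lep)

# L is a lattice of sets: closed under union and intersection.
assert all(I | J in L and I & J in L for I in L for J in L)

# Every nonempty ideal of P contains 0, and I -> I \ {0} is a bijection
# onto the ideals of P'; hence L is, as a lattice, 1 + L'.
nonempty = [I for I in L if I]
assert all(0 in I for I in nonempty)
assert sorted(map(sorted, (I - {0} for I in nonempty))) == sorted(map(sorted, Lp))
assert len(L) == 1 + len(Lp)
```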
\begin{proof}[Proof of Corollary \ref{COR:completelyDistr}]
Let $\L$ be a completely distributive algebraic lattice with at
most $2^\kappa$ completely join irreducibles. Write
${\frak P}$ for the partial order of completely join
irreducibles of $\L$ (with the induced order), and write $\L'$ for the lattice of order
ideals on ${\frak P}$. The
mapping
$$
\sigma: \quad \begin{matrix} \L &\rightarrow& \L'\\
p
&\mapsto&
\{q\in{\frak P}:q\leq_\L p\}\end{matrix}
$$
is easily seen to be a homomorphism;
$\sigma$ is bijective because in a completely
distributive algebraic lattice, every element is a
join of completely join irreducibles.
\end{proof}
\begin{proof}[Proof of Corollary \ref{COR:LIN:powerset}]
The completely join irreducibles of $\P(\lambda)$ are exactly the singleton
sets, so there are exactly $\lambda\leq 2^\kappa$ of them and we can
refer to Corollary \ref{COR:completelyDistr}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{COR:LIN:chains}]
$\L$ is completely distributive algebraic, so
this is a direct consequence of Corollary \ref{COR:completelyDistr}.
\end{proof}
\begin{defn}
A monoid ${\mathscr G}\subseteq {\mathscr O}^{(1)}$ is called \emph{collapsing}
iff its monoidal interval has only one element, i.e.
$\cl{{\mathscr G}}=\pol({\mathscr G})$.
\end{defn}
Denote by $\S$ the monoid of all permutations of $X$.
\begin{prop}
$\S$ is collapsing.
\end{prop}
\begin{proof}
Let $f\in\pol(\S)^{(2)}$. Then $\gamma(x)=f(x,x)$ is a permutation.
Now let $x,y\in X$ be distinct. There exists $z\in X$ with
$\gamma(z)=f(x,y)$. If $z\notin\{x,y\}$, then we can
find $\alpha,\beta\in\S$ with $\alpha(x)=x$,
$\alpha(y)=z$, $\beta(x)=y$, and $\beta(y)=z$. But
then
$f(\alpha,\beta)(x)=f(x,y)=f(z,z)=f(\alpha,\beta)(y)$,
so $f(\alpha,\beta)$ is not a permutation. Thus,
$z\in\{x,y\}$, and we have shown that
$f(x,y)\in\{f(x,x),f(y,y)\}$ for all $x,y\in X$.\\
Next we claim that for all $x,y\in X$, if
$f(x,y)=f(x,x)$, then $f(y,x)=f(y,y)$. Indeed, consider any permutation $\alpha$
which has the cycle $(xy)$. Then $f(x,\alpha(x))=f(x,y)=f(x,x)$,
so $f(y,\alpha(y))=f(y,x)$ has to be different from $f(x,x)$, because otherwise the
function $\delta(x)=f(x,\alpha(x))\in\S$
is not
injective. Hence, $f(y,x)=f(y,y)$.\\
Assume without loss of generality that $f(a,b)=f(a,a)$ for some distinct $a,b\in X$.
We claim that $f(a,c)=f(a,a)$ for all $c\in X$.
For assume not; then $f(a,c)=f(c,c)$ for some $c\in X$, and therefore $f(c,a)=f(a,a)$.
Let $\beta\in\S$ map $a$ to $b$ and $c$ to $a$.
Then $f(a,\beta(a))=f(a,b)=f(a,a)$, but also $f(c,\beta(c))=f(c,a)=f(a,a)$, a contradiction since $f$
preserves $\S$. Hence, $f(a,c)=f(a,a)$ for all $c\in
X$.\\
Now if $f(\tilde{a},\tilde{b})\neq f(\tilde{a},\tilde{a})$ for some $\tilde{a},\tilde{b}\in
X$, then $f(\tilde{a},\tilde{b})=f(\tilde{b},\tilde{b})$ and
as before we conclude $f(c,\tilde{b})=f(\tilde{b},\tilde{b})$ for
all $c\in X$. But then
$f(a,a)=f(a,\tilde{b})=f(\tilde{b},\tilde{b})$, so
$a=\tilde{b}$; furthermore, $f(a,\tilde{a})=f(\tilde{b},\tilde{a})=
f(\tilde{a},\tilde{a})\neq f(a,a)$ since we must have $a\neq\tilde{a}$, contradicting
$f(a,c)=f(a,a)$ for all $c\in X$.\\
Hence, $f(x,y)=f(x,x)$ for all $x,y\in
X$ so that $f$ is essentially unary. Therefore, all binary functions of $\pol(\S)$
are essentially unary. By a result of Grabowski
\cite{Gra97}, this implies that $\S$ is collapsing. (The
mentioned result was proved for finite base sets with at least three elements, but
the same proof works on infinite sets.)
\end{proof}
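By the cited result, the proposition also holds over finite base sets with at least three elements, where it can be confirmed by brute force. The following sketch (illustrative only) enumerates all $3^9$ binary operations on a three-element set and verifies that the binary polymorphisms of $\S_3$ are exactly the $12$ essentially unary operations obtained by applying a permutation to one of the two arguments.

```python
from itertools import permutations, product

X = (0, 1, 2)
S = list(permutations(X))              # the symmetric group on three points

def is_perm(vals):
    return sorted(vals) == list(X)

def preserves_S(f):
    # f preserves S iff x -> f(alpha(x), beta(x)) is a permutation
    # for every pair alpha, beta of permutations.
    return all(is_perm([f[a[x]][b[x]] for x in X]) for a in S for b in S)

def essentially_unary(f):
    return all(f[x][y] == f[x][x] for x in X for y in X) or \
           all(f[x][y] == f[y][y] for x in X for y in X)

binary_pols = []
for vals in product(X, repeat=9):      # all 3^9 = 19683 binary operations
    f = (vals[0:3], vals[3:6], vals[6:9])
    if preserves_S(f):
        binary_pols.append(f)

# Every binary polymorphism of S is essentially unary, and there are
# exactly 12 of them: a permutation applied to x, or one applied to y.
assert all(essentially_unary(f) for f in binary_pols)
assert len(binary_pols) == 12
```

The enumeration takes well under a second and mirrors the dichotomy $f(x,y)\in\{f(x,x),f(y,y)\}$ established in the proof.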
\begin{proof}[Proof of Corollary \ref{COR:LIN:ordinals}]
The preceding proposition gives us the ordinal $1$.
For larger ordinals, we can refer to Corollary
\ref{COR:LIN:chains}.
\end{proof}
\begin{proof}[Proof of Corollary \ref{COR:cardinalities}]
This is the direct consequence of Corollaries
\ref{COR:LIN:powerset} and \ref{COR:LIN:ordinals}.
\end{proof}
\end{section}
\bibliographystyle{alpha}
A superfluid can flow without friction, but its superfluidity breaks down above a certain critical velocity. The critical velocity is mainly determined by the intrinsic excitation properties of the superfluid~\cite{R1}, and its manifestation is significantly affected by the geometry and boundary conditions of the flowing channel. Understanding the critical dynamics, which involves energy dissipation processes, is important in the study of a superfluid system. For ultracold atomic gas experiments, a simple method was developed to investigate the critical velocity of a superfluid. In that method, a sample is stirred with an optical obstacle formed by focusing a laser beam, and the onset of dissipation due to the increase in the obstacle velocity is detected via the increase in the sample temperature~\cite{R2,Dalibard12,R5} or the generation of topological defects such as quantized vortices~\cite{R7,Neely10,Kwon15a,Park18}. Finite critical velocities were presented as evidence for superfluidity~\cite{R2,Dalibard12}, and the measured values provided quantitative tests for our microscopic understanding of superfluid systems~\cite{R5,Kwon15a,Park18,R6}.
Recently, a symmetric binary superfluid gas system was experimentally realized using a Bose-Einstein condensate (BEC) of $^{23}$Na in two hyperfine spin states, i.e., $|F=1,m_F=1\rangle$ and $|F=1,m_F=-1\rangle$~\cite{R20}. This system, with a $\mathrm{Z}_2$ symmetry, constitutes a minimal setting for studying superfluidity with multiple order parameters. Spin superfluidity was demonstrated with the absence of damping in spin dipole oscillations in trapped samples~\cite{R16,R19}, and novel topological objects such as half-quantum vortices~\cite{R20,R21} and magnetic solitons~\cite{R24,R25} were observed. These developments lead us to anticipate a moving obstacle experiment with the binary superfluid system, discussed in previous numerical studies~\cite{R34,R35,Kamchatnov13,R36}. In particular, the optical obstacle can be engineered to be magnetic, i.e., exhibiting different potentials for the two spin components so that the system's properties in both the spin and the mass sectors may be addressed in a controlled manner. Considering different topological objects and peculiar dynamic effects such as countersuperflow instability~\cite{R29,R27}, such an experiment may open a way to investigate a new class of critical superfluid dynamics~\cite{Kamchatnov13}. In a recent experiment, a localized spin-dependent optical potential was indeed used to measure the speed of spin sound in a binary $^{23}$Na BEC~\cite{R26}. Therefore, the stirring experiment with a magnetic obstacle is within immediate reach.
Herein, we theoretically consider a primary case in which a penetrable, Gaussian magnetic obstacle moves in a symmetric binary BEC in two dimensions (2D). Based on the hydrodynamic equations of the two-component BEC system, we analytically and numerically investigate the spatial distributions of the induced superflows around the moving obstacle and demonstrate that the spin and the mass supercurrents are formed in characteristic spatial structures resembling those of electric and magnetic fields around a charge dipole and a current loop, respectively. Furthermore, we investigate the local Landau instability of the induced superflows and determine the critical velocity, $u_c$, of the magnetic obstacle as a function of its potential magnitude, $V_0$. We find that $u_c$ decreases almost linearly from the speed of spin sound with increasing $V_0$, which can be directly tested in current experiments. This study provides a basis for the study of the critical dynamics of binary superfluid systems with a moving magnetic obstacle.
The remainder of this paper is organized as follows: In Section~II, we present a hydrodynamic description of the spin and the mass currents near a moving magnetic obstacle in a two-component BEC. In Section~III, we first analyze the characteristic superflow pattern in a slow obstacle limit and then numerically investigate the evolution of the current distributions with increasing obstacle velocity. The critical velocity of the magnetic obstacle is determined by examining the local speed of sound at the obstacle and applying the Landau criterion. Finally, in Section~IV, we provide a summary and some outlooks on future experimental studies.
\section{Hydrodynamic model}
Figure~1 shows the physical situation of interest, where a localized Gaussian potential traverses a homogeneous two-component BEC in 2D with a constant velocity, $\bm{u}=u\hat{\bm{x}}$. The BEC is a balanced mixture of two miscible components, denoted as spin-$\uparrow$ and spin-$\downarrow$. They are identical to each other in terms of particle mass and intracomponent interactions, and the BEC represents a symmetric binary superfluid system. The Gaussian potential is spin dependent: it is attractive to the spin-$\uparrow$ component and repulsive to the spin-$\downarrow$ component, i.e., $V_{\uparrow(\downarrow)}(\bm{r})=-s_{\uparrow(\downarrow)} V(r)$ with $s_\uparrow=-s_\downarrow=1$ and $V(r)= V_0 \exp(-\frac{2 r^2}{\sigma^2})$. As such, the moving magnetic potential will generate different flow patterns for the two spin components. The main focus of this study is to investigate the spin and the mass currents near the moving obstacle; these are defined as $\bm{J}=n_\uparrow \bm{u}_\uparrow - n_\downarrow \bm{u}_\downarrow$ and $\bm{M}=n_\uparrow \bm{u}_\uparrow + n_\downarrow \bm{u}_\downarrow$, respectively, with $\{ n_{\uparrow(\downarrow)}(\bm{r},t), \bm{u}_{\uparrow(\downarrow)}(\bm{r},t) \}$ being the density distribution and the velocity field of the spin-$\uparrow(\downarrow)$ component.
\begin{figure}[t!]
\includegraphics[width=8.4cm]{FIG1}
\caption{Generation of supercurrents by a moving magnetic obstacle in a two-component Bose-Einstein condensate (BEC). (a) A penetrable Gaussian obstacle (blue) moves with a constant velocity $\bm{u}$ in a homogeneous two-dimensional BEC comprising two symmetric, spin-$\uparrow$ and $\downarrow$ components. The obstacle attracts the spin-$\uparrow$ component and repels the spin-$\downarrow$ component, and the two components have different density and velocity field distributions, denoted by $n_{\uparrow(\downarrow)}$ and $\bm{u}_{\uparrow(\downarrow)}$, respectively, near the moving obstacle. (b) Schematic description of the density and the velocity profiles along the horizontal dashed line in (a). $n_0$ is the density of each component for an unperturbed BEC.}
\label{fig.1}
\end{figure}
In the hydrodynamic approximation, the superflow dynamics of the binary superfluid system can be expressed as follows:
\begin{eqnarray}
\partial_t n_i &+&\nabla \cdot (n_i \bm{u}_i) =0 \\
m\partial_t \bm{u}_i &+& \nabla (\frac{1}{2} m u_i^2 + g n_i + g_{\uparrow\downarrow} n_j) = -\nabla V_i(\bm{r}-\bm{u}t),
\end{eqnarray}
where $i,j=\uparrow,\downarrow$ ($i\neq j$), $m$ is the particle mass, and $g~(g_{\uparrow\downarrow})>0$ is the coefficient of the intra(inter)-component interactions. The first equation is the continuity equation for mass conservation, and the second one is the Euler equation associated with energy conservation. The hydrodynamic equations were derived from the Gross--Pitaevskii equations for the wave functions of the BEC, $\psi_i(\bm{r},t)$, under a Madelung transformation of $\psi_i(\bm{r},t)=\sqrt{{n}_i(\bm{r},t)}e^{i \theta_i(\bm{r},t)}$ and $\bm{u}_i (\bm{r},t)=\frac{\hbar}{m} \nabla \theta_i(\bm{r},t)$~\cite{Stringari96,R32}. $\theta_i(\bm{r},t)$ is the macroscopic phase of the spin-$i$ component and $\hbar=\frac{h}{2\pi}$ with the Planck constant $h$. The quantum pressure term is neglected, assuming that the obstacle width, $\sigma$, is much larger than the healing length of the condensate and that the potential magnitude, $V_0$, is small such that $n_i >0$, i.e., no density-depleted region exists for either spin component.
In the co-moving reference frame with the obstacle, the density distributions and the velocity fields are time independent, and the problem becomes more tractable. Under a Galilean transformation $\bm{r}\rightarrow \bm{r}+\bm{u}t$, the hydrodynamic equations reduce to
\begin{eqnarray}
&&\nabla \cdot (n_i \bm{v}_i) =0, \\
&& \nabla (\frac{1}{2} m v_i^2 - s_i V+ g n_i + g_{\uparrow\downarrow} n_j ) =0,
\end{eqnarray}
with $\bm{v}_i(\bm{r})=\bm{u}_i(\bm{r},0)-\bm{u}$.
For the boundary conditions of $n_i= n_0$ and $\bm{v}_i = -u\hat{\bm{x}}$ as $r\rightarrow\infty$, Eq.~(4) requires $\frac{1}{2}m{{v}_i}^{2} - s_i {V} +g {n}_{i}+ {g}_{\uparrow\downarrow} {n}_j =\frac{1}{2}m u^2 +(g+{g}_{\uparrow\downarrow})n_0$, resulting in
\begin{equation}
n_i = n_0+ \frac{s_i V}{\Delta g}+\frac{m \Big( g (u^2 -v_i^2) -g_{\uparrow\downarrow}(u^2-v_j^2 )\Big)}{2(g^2-g_{\uparrow \downarrow}^2)},
\end{equation}
with $\Delta g = g-g_{\uparrow\downarrow}$. Here, $\Delta g>0$ is due to the miscibility condition for the two spin components. From the irrotational property of $\nabla \times \bm{v}_i=0$, a potential function $S_i(\bm{r})$ for $\bm{v}_i(\bm{r})$ can be defined such that $\bm{v}_i=\nabla S_i$, and Eq.~(3) can be rewritten as
\begin{equation}
\nabla^2 S_i +\frac{1}{n_i}\nabla n_i \cdot \nabla S_i =0.
\end{equation}
Once $\{n_i(\bm{r}), S_i(\bm{r})\}$ are determined from Eqs.~(5) and (6), the spin and the mass currents at $t=0$ in the stationary BEC reference frame can be calculated as
\begin{eqnarray}
\bm{J}&=&{n}_{\uparrow}\nabla S_{\uparrow}-{n}_{\downarrow}\nabla S_{\downarrow}+m_z \bm{u} \\ \bm{M}&=&{n}_{\uparrow}\nabla S_{\uparrow}+{n}_{\downarrow}\nabla S_{\downarrow} + n_t \bm{u},
\end{eqnarray}
respectively, where $m_z(\bm{r})=n_\uparrow-n_\downarrow$ is the magnetization density, and $n_t(\bm{r})=n_\uparrow+n_\downarrow$ is the total number density of the BEC.
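The closed form in Eq.~(5) is obtained by treating the two Bernoulli relations, $g n_i + g_{\uparrow\downarrow} n_j = \frac{1}{2}m(u^2-v_i^2)+s_i V+(g+g_{\uparrow\downarrow})n_0$, as a linear system in $(n_\uparrow, n_\downarrow)$ and solving by Cramer's rule. A minimal numerical spot check of this algebra, with arbitrary illustrative parameter values (any $g>g_{\uparrow\downarrow}>0$ works), is:

```python
import math

# Arbitrary illustrative parameters with g > gp > 0 (gp stands for g_updown).
m, g, gp, n0, V = 1.4, 2.0, 1.86, 3.0, 0.11
u, v_up, v_dn = 0.30, 0.27, 0.33          # speeds, with s_up = -s_dn = 1

def bernoulli_rhs(s, v):
    # g*n_i + gp*n_j equals this constant, from Eq. (4) plus the
    # boundary condition n_i -> n0, v_i -> -u x-hat at infinity.
    return 0.5 * m * (u**2 - v**2) + s * V + (g + gp) * n0

C_up, C_dn = bernoulli_rhs(+1, v_up), bernoulli_rhs(-1, v_dn)

# Solve the symmetric 2x2 system by Cramer's rule.
det = g * g - gp * gp
n_up = (g * C_up - gp * C_dn) / det
n_dn = (g * C_dn - gp * C_up) / det

def eq5(s, v, vj):
    # Closed form of Eq. (5), with Delta g = g - gp.
    return n0 + s * V / (g - gp) \
              + m * (g * (u**2 - v**2) - gp * (u**2 - vj**2)) / (2 * det)

assert math.isclose(n_up, eq5(+1, v_up, v_dn), rel_tol=1e-12)
assert math.isclose(n_dn, eq5(-1, v_dn, v_up), rel_tol=1e-12)
```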
\section{Results}
\subsection{Slow Obstacle}
We first consider a perturbative regime in which the obstacle moves slowly such that the densities of the spin components are well approximated by the solutions of Eq.~(5) for $u=0$, i.e., $n_i(r)=n_0+ s_i V(r)/\Delta g$. In a later discussion, it will be clear that the approximation is valid when the kinetic energy of the induced flow is negligible compared to the spin interaction energy, i.e., $mu^2 \ll m c_s^2 \equiv \Delta g n_0 $. Here, $c_s$ is the speed of spin sound for the unperturbed BEC.
For the density distribution $n_i(r)$, the potential function $S_i(\bm{r})$ can be directly determined using Eq.~(6). Because $n_i$ has only an $r$-dependence, we perform separation of variables, i.e., ${S_i(r,\phi)}={R_i(r)}{\Phi_i(\phi)}$, and Eq.~(6) is transformed to
\begin{eqnarray}
&&\frac{{d}^{2}{R}_i}{{d}{r}^{2}}+\frac{1}{r}\frac{{d}{R_i}}{{d}{r}}+\frac{1}{1+\delta\tilde{n}_i}\frac{d \delta\tilde{n}_i}{dr}\frac{{d}{R}_i}{{d}{r}}-\frac{{l}^{2}}{{r}^{2}}R_i=0, \\
&&\frac{{d}^{2}{\Phi_i}}{{d}{\phi}^{2}}+{l}^{2}{\Phi_i}=0,
\end{eqnarray}
with $\delta\tilde{n}_i(r)=n_i(r)/n_0-1$ and $l$ being an integer. The boundary condition of ${S}_i\rightarrow -u r \cos{\phi}$ as $r\rightarrow \infty$ imposes ${l}=1$, and without loss of generality, we set $\Phi_i(\phi)=-u \cos \phi$. The solution for the radial function, $R_i(r)$, can be obtained perturbatively using the small parameter $\alpha=|\delta\tilde{n}_i(0)|=V_0/(\Delta g n_0)\ll1$, which is the maximum magnitude of the relative density variations in each spin component. When the radial function is expanded in a power series with respect to $\alpha$ as $R_i(r)=\sum_{k\geq 0} \alpha^k R_i^{(k)}(r)$, the $k$-th function $R_i^{(k)}(r)$ is recursively determined from Eq.~(9) as the solution to the following equation:
\begin{eqnarray}
\frac{{d}^{2}{R}_{i}^{(k)}}{{d}{r}^{2}}&+&\frac{1}{r}\frac{{d}{R}_{i}^{(k)}}{{d}{r}}-\frac{1}{{r}^{2}}{R}_{i}^{(k)} \nonumber \\
&&= \frac{4r}{\sigma^2} s_i \sum_{s=1}^{k} {s_i}^{s} e^{-\frac{2sr^2}{\sigma^2}} \frac{dR_{i}^{(k-s)}}{dr}.
\end{eqnarray}
Up to the second order of $\alpha$, we have
\begin{eqnarray}
R_i^{(0)}(r)&=&r, \nonumber \\
R_i^{(1)}(r)&=& s_i r \frac{-1+{e}^{-2{\rho}^2}}{4\rho^2}, \nonumber \\
R_i^{(2)}(r)&=&r\Big(\frac{1+2{e}^{-2\rho^2}-3{e}^{-4{\rho}^{2}}}{16{\rho^2}}-\frac{1}{4}\int_{2{\rho}^{2}}^{4{\rho}^{2}}{\frac{{e}^{-t}}{t}}dt\Big), \nonumber
\end{eqnarray}
where $\rho=r/\sigma$, thereby yielding the approximate solution of $S_i(\bm{r})$ as
\begin{equation}
S_i(r,\phi)=- u \big[ R_i^{(0)}+\alpha R_i^{(1)} +\alpha^2 R_i^{(2)}\big] \cos \phi,
\end{equation}
which satisfies the boundary condition as $r\rightarrow \infty$.
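The first-order term can be checked against the hierarchy without symbolic work. The sketch below (assuming units with $\sigma=1$ and taking $s_i=+1$) verifies by central finite differences that $R_i^{(1)}$ satisfies $R''+R'/r-R/r^2=4r\,e^{-2r^2}$, the $k=1$ radial equation with $dR_i^{(0)}/dr=1$:

```python
import math

sigma = 1.0                                  # obstacle width sets the length unit

def R1(r):
    # First-order radial correction R_i^(1) for s_i = +1.
    rho2 = (r / sigma) ** 2
    return r * (-1.0 + math.exp(-2.0 * rho2)) / (4.0 * rho2)

def lhs(R, r, h=1e-4):
    # Radial operator R'' + R'/r - R/r^2, via central finite differences.
    d1 = (R(r + h) - R(r - h)) / (2 * h)
    d2 = (R(r + h) - 2 * R(r) + R(r - h)) / h**2
    return d2 + d1 / r - R(r) / r**2

def rhs(r):
    # Right-hand side of the k = 1 radial equation, using R_i^(0) = r.
    return (4 * r / sigma**2) * math.exp(-2 * (r / sigma) ** 2)

for r in (0.3, 0.7, 1.2, 2.0):
    assert abs(lhs(R1, r) - rhs(r)) < 1e-5
```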
From Eqs.~(7) and (8) with $n_i(r)=n_0(1+s_i \alpha e^{-2\rho^2})$ and $S_i(r,\phi)$ in Eq.~(12), we obtained the analytic expressions of the spin and the mass currents as
\begin{eqnarray}
\bm{J}&=&2\alpha{n}_{0}{u} \Bigg[ \frac{(2\rho^2+1){e}^{-2{\rho}^{2}}-1}{4{\rho}^{2}}(\cos{2\phi}\,\hat{\bm{x}}+\sin{2\phi}\,\hat{\bm{y}})
+ \nonumber \\
&&\frac{{e}^{-2{\rho}^{2}}}{2}\,\hat{\bm{x}} \Bigg] \\
\bm{M}&=&2\alpha^2{n}_{0}{u} \Bigg[
\frac{{({e}^{-2{\rho}^{2}}-1)}^{2}}{16{\rho}^{2}}(\cos{2\phi}\,\hat{\bm{x}}+\sin{2\phi}\,\hat{\bm{y}})+ \nonumber \\
&&\frac{1}{4}\int_{2{\rho}^{2}}^{4{\rho}^{2}}{\frac{{e}^{-t}}{t}}dt\, \hat{\bm{x}}\Bigg],
\end{eqnarray}
respectively. Of note is that $|\bm{J}|\propto \alpha$ and $|\bm{M}|\propto \alpha^2$, which indicate that the moving magnetic obstacle dominantly generates a spin current, as expected, as well as a mass current via a nonlinear process. The peak spin and mass currents occur at the obstacle center, and are given by $\bm{J}_0=\alpha n_0 \bm{u}$ and $\bm{M}_0=\frac{\ln 2}{2}\alpha^2 n_0 \bm{u}$, respectively.
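The quoted peak values are the $\rho\rightarrow0$ limits of the brackets in Eqs.~(13) and (14) along $\phi=0$: the dipolar terms vanish as $\rho^2$, while the remaining terms tend to $1/2$ and $(\ln 2)/4$, respectively. A small numerical check (plain composite Simpson quadrature for the exponential-integral term):

```python
import math

def simpson(f, a, b, n=1000):
    # Composite Simpson rule (n even) for the exponential-integral term.
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3.0

def J_x(rho):
    # x-component of J / (2 alpha n0 u) along phi = 0, from Eq. (13).
    return ((2 * rho**2 + 1) * math.exp(-2 * rho**2) - 1) / (4 * rho**2) \
           + math.exp(-2 * rho**2) / 2.0

def M_x(rho):
    # x-component of M / (2 alpha^2 n0 u) along phi = 0, from Eq. (14).
    ei = simpson(lambda t: math.exp(-t) / t, 2 * rho**2, 4 * rho**2)
    return (math.exp(-2 * rho**2) - 1)**2 / (16 * rho**2) + ei / 4.0

rho = 1e-3                                   # just off the obstacle center
assert math.isclose(2 * J_x(rho), 1.0, rel_tol=1e-4)                # J0 = alpha n0 u
assert math.isclose(2 * M_x(rho), math.log(2) / 2, rel_tol=1e-4)    # M0 = (ln2/2) alpha^2 n0 u
```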
\begin{figure}[t!]
\includegraphics[width=8.4cm]{FIG2}
\caption{Spin and mass superflows near a moving magnetic obstacle. Spatial distributions of (a) the spin current $\bm{J}$ and (b) the mass current $\bm{M}$ from the analytical expression of Eqs.~(13) and (14), respectively. The arrow indicates the current's direction and the color denotes the current's magnitude normalized to the peak value $J_0$ ($M_0$) for the spin (mass) current at the obstacle's center. Spatial distributions of (c) $\nabla \cdot \bm{J}$ and (d) $\nabla \times \bm{M}$.}
\label{fig.2}
\end{figure}
\subsection{Spin and Mass Flow Patterns}
Figures~2(a) and (b) show the spin and the mass current distributions predicted using Eqs.~(13) and (14), respectively. We used $g_{\uparrow\downarrow}/g=0.93$, which is the value for a mixture of $^{23}$Na in the $|F=1,m_F=\pm1\rangle$ states~\cite{R17,Knoop11}. As expected, both spin and mass currents appeared locally near the moving obstacle. In the spin current distribution, we observed two low-current holes, one in the front and the other in the back of the obstacle; furthermore, the spin current flowed out from the back hole and into the front one. Meanwhile, in the mass current distribution, we observed two low-current holes on the lateral sides of the obstacle, and the mass current swirled around each hole in opposite directions. The flow patterns of $\bm{J}$ and $\bm{M}$ around the moving obstacle resembled those of an electric field around a charge dipole and a magnetic field around a current loop, respectively. In Figs.~2(c) and 2(d), we present the distributions of $\nabla\cdot\bm{J}$ and $\nabla\times \bm{M}$, respectively, which clearly show the dipole configurations of the source and the sink for the spin current and the vorticity of the mass current, respectively.
To understand the characteristic flow patterns of the spin and the mass currents, we analyzed the general divergence and rotation properties of $\bm{J}$ and $\bm{M}$. From the continuity equation in Eq.~(3), we obtained $\nabla\cdot (n_i \bm{u}_i )=\bm{u}\cdot \nabla n_i$. Combining the latter with the irrotational property of $\nabla\times\bm{u}_i=0$, we obtain the following relations:
\begin{eqnarray}
\nabla \cdot \bm{J}&=& \bm{u} \cdot \nabla {m_z}\\
\nabla \cdot \bm{M} &=& \bm{u} \cdot \nabla{n_t} \nonumber \\
\nabla \times \bm{J} &=& \nabla m_z \times \Big( \frac{ \bm{u}_\uparrow +\bm{u}_\downarrow}{2} \Big) + \nabla n_t \times \Big( \frac{ \bm{u}_\uparrow - \bm{u}_\downarrow}{2} \Big) \nonumber \\
\nabla \times \bm{M} &=& \nabla m_z \times \Big( \frac{\bm{u}_\uparrow - \bm{u}_\downarrow}{2} \Big)+ \nabla n_t \times \Big( \frac{ \bm{u}_\uparrow + \bm{u}_\downarrow}{2} \Big). \nonumber
\end{eqnarray}
The first and the second equations result from spin and mass conservation, respectively, and the third and the fourth ones reveal intriguing nonlinear coupling between the spin and the mass channels in the binary system.
For a weak and slow magnetic obstacle, taking the same level of approximation as in the previous subsection, we have $\nabla n_t= 0$, $\bm{u}_\uparrow+\bm{u}_\downarrow = \mathcal{O}(\alpha^2)$, and $\bm{u}_\uparrow-\bm{u}_\downarrow=\frac{\bm{J}}{n_0}+\mathcal{O}(\alpha^3)$. Then, up to the first order in $\alpha$, the relations can be expressed as
\begin{eqnarray}
\nabla \cdot \bm{J}&=& \bm{u} \cdot \nabla {m_z} \equiv Q_J \nonumber \\
\nabla \cdot \bm{M} &=& 0 \nonumber \\
\nabla \times \bm{J} &=& 0 \nonumber \\
\nabla \times \bm{M} &=& \nabla m_z \times \frac{\bm{J}}{2n_0} \equiv \bm{I}_M,
\end{eqnarray}
which immediately explains the observed electric- and magnetic-field-like behaviors of $\bm{J}$ and $\bm{M}$, respectively, near the moving magnetic obstacle. The quantities $Q_J$ and $\bm{I}_M$ can be regarded as the `charge' and the `current' source densities for generating the spin and the mass currents, respectively. Their expressions are consistent with the previous results of $|\bm{J}|\propto \alpha$ and $|\bm{M}|\propto \alpha^2$ for $m_z\propto \alpha$. Furthermore, we may consider the `electric' and the `magnetic' dipole moments as
\begin{eqnarray}
\bm{p}_{J}&=&\int \bm{r}~Q_J(\bm{r})d^2\bm{r}=\bm{u} \int m_z(r) d^2 \bm{r} \nonumber \\
\bm{\mu}_{M}&=&\frac{1}{2}\int \bm{r}\times\bm{I}_M(\bm{r}) d^2\bm{r}=\frac{1}{2 n_t}\int m_z(\bm{r})\bm{J}(\bm{r}) d^2\bm{r}, \nonumber
\end{eqnarray}
respectively, which allow us to predict the currents in the far distant region of $r\gg \sigma$ to be $\bm{J}\sim\frac{2(\bm{p}_{J}\cdot\hat{r})\hat{r}-\bm{p}_{J}}{2\pi r^2}$ and $\bm{M} \sim \frac{2(\bm{\mu}_{M}\cdot\hat{r})\hat{r}-\bm{\mu}_{M}}{2\pi r^2}$. We emphasize that the relations in Eq.~(16) hold regardless of the potential form of $V(\bm{r})$ once the magnetic obstacle is weak and slow.
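As a quick consistency check (our own numerical sketch, not from the paper), for the Gaussian profile $m_z(r)=m_{z,0}e^{-2r^2/\sigma^2}$ one has $\int m_z\,d^2\bm{r}=\frac{\pi\sigma^2}{2}m_{z,0}$, so that $\bm{p}_J=\frac{\pi\sigma^2}{2}m_{z,0}\,\bm{u}$:

```python
import numpy as np

# Grid parameters are illustrative choices, not taken from the paper.
sigma, m_z0 = 1.0, 1.0
L, N = 8.0, 1024                       # half-width of the box and points per axis
x = np.linspace(-L, L, N)
X, Y = np.meshgrid(x, x)
m_z = m_z0 * np.exp(-2 * (X**2 + Y**2) / sigma**2)
dA = (x[1] - x[0]) ** 2
integral = m_z.sum() * dA              # approximates the 2D integral of m_z
print(integral, np.pi * sigma**2 / 2)  # both close to 1.5708
```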
The existence of nonzero $\nabla \times \bm{M}$ should be highlighted. The superfluid velocity of the binary superfluid system can be expressed as ${\bm{u}}_{M}=\frac{\bm{M}}{n_t}$; therefore, the circulation of ${\bm{u}_M}$ can be nonzero for $\nabla \times \bm{M}\neq 0$. This is in stark contrast to the case with a single-component BEC, where the circulation of the superfluid velocity should be quantized with $h/m$ as a topological invariant of the system. It is important to note that the mass circulation of the binary superfluid system can take a continuous value in conjunction with the spin current.
\begin{figure}[t!]
\includegraphics[width=8.4cm]{FIG3}
\caption{Numerical results for the spin and the mass current distributions for (a, b) $u=0.1 c_s$ and (c, d) $u=0.9 c_s$. Here $c_s$ is the speed of spin sound in the unperturbed BEC. In the numerical simulations, $\Delta g/g=0.07$ and $V_0=0.1 m c_s^2$. (e) $J_0$ and (f) $M_0/J_0$ as functions of $u$. $\alpha\equiv V_0/(m c_s^2)=0.1$. The dotted lines indicate the analytic results from Eqs.~(13) and (14), and the solid lines are estimates with $\alpha_\text{eff}$, including the kinetic energy correction in the magnetization at the obstacle's center (Eq.~(18)).}
\label{fig.3}
\end{figure}
\subsection{Fast Obstacle}
To investigate how the flow patterns evolve with increasing obstacle velocity, we numerically calculated $\{n_i(\bm{r}), S_i(\bm{r})\}$ from Eqs.~(5) and (6) for various $u$. A finite difference method was employed for a $241 \times 241$ grid system, and the obstacle width, $\sigma$, was set to 40 grid spacings. The boundary conditions imposed were $n_i=n_0$ and $\bm{v}_i=-u \hat{\bm{x}}$ on the edge of the grid system.
Figures~3(a)--(d) show the numerical results for the spin and the mass currents for $u=0.1 c_s$ and $0.9 c_s$ with $V_0=0.1 m c_s^2$. The numerical results for low $u$ were confirmed to be in good quantitative agreement with the analytical predictions based on Eqs.~(13) and (14). We observe that as $u$ increases, the spatial distributions of $\bm{J}$ and $\bm{M}$ stretch along the lateral and the moving directions, respectively, but the flow patterns maintain their characteristic spatial structures~[Figs.~3(c) and (d)]. The peak currents still occur at the center of the moving obstacle. In Figs.~3(e) and (f), we plot $|\bm{J}_0|$ and $|\bm{M}_0|/|\bm{J}_0|$ as functions of $u$, respectively. For low $u$, $|\bm{J}_0|$ increases linearly with $u$, as predicted in Eq.~(13); however, it begins deviating upwardly as $u$ increases over $\approx 0.6 c_s$. The ratio $|\bm{M}_0|/|\bm{J}_0|$ increases quadratically with increasing $u$, departing from the predicted value of $\frac{\ln 2}{2}\alpha$.
The nonlinear $u$-dependence of $\bm{J}_0$ can be attributed to the additional density variations due to the increased flow velocity for high $u$. When the first-order kinetic energy correction term related to $u^2$ in Eq.~(5) is considered, the magnetization can be expressed as
\begin{equation}
m_z = \frac{2V}{\Delta g} + \frac{m}{2 \Delta g} \big( 2\bm{u}\cdot(\bm{u}_\uparrow-\bm{u}_\downarrow)\big).
\end{equation}
Because $\bm{u}_{rel}=\bm{u}_\uparrow-\bm{u}_\downarrow \approx \frac{\bm{J}}{n_0}$, the magnetization is enhanced in the center region where the spin current flows in the direction of the obstacle's motion. As the first relation of Eq.~(16) shows, this enhancement in $m_z$ results in an increase in the spin current. This mutual enhancing effect qualitatively explains the observed lateral stretching of the elongated, high-$|\bm{J}|$ region with high $u$.
If the magnetization distribution maintains its Gaussian form for high $u$, i.e., $m_z(r)=m_{z,0} e^{-\frac{2r^2}{\sigma^2}}$, we can infer ${\bm{J}_0}=\frac{m_{z,0}}{2} \bm{u}$ from Eq.~(13) because of the relation $\alpha=\frac{m_{z,0}}{2 n_0}$. Substituting $\bm{u}_{rel}\approx \frac{\bm{J}_0}{n_0}= \frac{m_{z,0}}{2n_0} \bm{u}$ into Eq.~(17), we obtain the magnetization at the obstacle's center as $m_{z,0}=\frac{2 n_0 V_0}{\Delta g n_0-\frac{1}{2}mu^2 }$. This suggests that the high-$u$ effect in the spin and the mass currents might be captured by replacing $\alpha$ in Eqs.~(13) and (14) with its effective value, i.e.,
\begin{equation}
\alpha_\text{eff}(u)=\frac{m_{z,0}}{2n_0}=\frac{V_0}{\Delta g n_0-\frac{1}{2}mu^2}.
\end{equation}
In fact, we observe that the numerical results for $|\bm{J}_0|$ and $|\bm{M}_0|/|\bm{J}_0|$ can be explained well quantitatively with $\alpha_\text{eff}$, i.e., $|\bm{J}_0|=\alpha_\text{eff} n_0 u$ and $|\bm{M}_0|/|\bm{J}_0|=\frac{\ln 2}{2} \alpha_\text{eff}$, respectively [Figs.~3(e) and (f)].
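Using $\Delta g\, n_0=mc_s^2$, Eq.~(18) can be rewritten as $\alpha_\text{eff}(u)=\alpha/(1-u^2/2c_s^2)$. The following sketch is our own (the variable names and the units $m=n_0=c_s=1$ are illustrative); it evaluates the corrected peak currents:

```python
import math


def alpha_eff(alpha, u, cs=1.0):
    """Effective obstacle strength, Eq. (18), with Delta_g * n0 = m * cs**2."""
    return alpha / (1.0 - u**2 / (2.0 * cs**2))


def peak_currents(alpha, u, n0=1.0, cs=1.0):
    """Peak spin current |J0| and ratio |M0|/|J0| with alpha -> alpha_eff."""
    a = alpha_eff(alpha, u, cs)
    return a * n0 * u, 0.5 * math.log(2) * a


print(alpha_eff(0.1, 0.0))   # 0.1: no correction for a static obstacle
print(alpha_eff(0.1, 0.9))   # ~0.168: enhanced for u = 0.9 cs
print(peak_currents(0.1, 0.9))
```

The upward bend of $|\bm{J}_0|(u)$ in Fig.~3(e) then simply reflects the divergence of the denominator as $u^2$ approaches $2c_s^2$.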
\subsection{Critical Velocity}
When the obstacle's velocity increases above a certain critical value, energy dissipation will occur in the binary superfluid system. According to the Landau criterion, the critical velocity is expressed as $u_L=\min [\varepsilon(\bm{p})/(\bm{p}\cdot \hat{\bm{u}})]$~\cite{R1}, where $\varepsilon(\bm{p})$ is the elementary excitation energy of momentum $\bm{p}$, and $\hat{\bm{u}}$ is the unit vector along the direction of the obstacle's motion. In general, in the long wavelength limit, the superfluid system has a linear dispersion of $\varepsilon(p)=c p$ with $c$ being the speed of sound, and the Landau critical velocity is given as $u_L=c$. In this section, we investigate the critical velocity $u_c$ of the magnetic obstacle based on the local Landau criterion, i.e., by comparing the obstacle's velocity to the local speed of sound.
\begin{figure}[t!]
\includegraphics[width=8.4cm]{FIG4}
\caption{Sound speed in a homogeneous two-component BEC. Radial distributions of the propagation speeds $c^{+}$ and $c^{-}$ of the fast (a, c) and slow (b, d) sounds, respectively, for various flow conditions of $\bm{u}_{\uparrow(\downarrow)} =\pm \frac{u_{rel}}{2}\hat{\bm{x}}$ and $n_{\uparrow(\downarrow)}=n_0\pm\frac{m_z}{2}$. $\Delta g/g=0.07$ and $c_{n(s)}$ denotes the propagation speed of mass (spin) sound for $u_\text{rel}=0$ and $m_z=0$. In (a) and (b), $m_z=0.36n_0$ and $u_\text{rel}$ changes from 0 to 1.9$c_s$ in intervals of 0.19$c_s$ for the ten lines. In (c) and (d), $u_{\text{rel}}=0.37c_s$ and $m_z$ changes from 0 to 1.8$n_0$ in intervals of 0.18$n_0$ for the ten lines.}
\label{fig.4}
\end{figure}
First, we determine the speed of sound for a stationary state in which the two spin components flow with uniform velocities $\bm{u}_i$ and uniform densities $n_i$. Linearizing the hydrodynamic equations, Eqs.~(1) and (2), with $V_i=0$~\cite{R28}, we obtain
\begin{eqnarray}
\big (\partial_t +\bm{u}_i \cdot \nabla \big) \delta n_i &+& n_i \nabla \cdot \delta \bm{u}_i =0, \\
\big(\partial_t +\bm{u}_i \cdot \nabla \big) \delta \bm{u}_i &+& \frac{g}{m}\nabla \delta n_i +\frac{g_{\uparrow \downarrow}}{m}\nabla \delta n_j=0.
\end{eqnarray}
Furthermore, the coupled wave equations for $\delta n_\uparrow$ and $\delta n_\downarrow$ can be obtained as follows:
\begin{equation}
\Big( \big(\partial_t +\bm{u}_i \cdot \nabla \big )^2- \frac{g n_i}{m} \nabla^2 \Big ) \delta n_i - \frac{g_{\uparrow\downarrow} n_i}{m} \nabla^2 \delta n_j=0.
\end{equation}
If a traveling wave solution of $\delta n_i=A_i e^{i (\bm{q}\cdot \bm{r}-\omega t)}$ is to be obtained, the wave velocity $c=\omega/q$ should satisfy
\begin{equation}
\frac{A_{\downarrow}}{A_{\uparrow}}=\frac{(c-\bm{u}_\uparrow\cdot \hat{\bm{q}})^2 -\frac{g n_\uparrow}{m}}{\frac{g_{\uparrow\downarrow}n_{\uparrow}}{m}}=\frac{\frac{g_{\uparrow\downarrow}n_{\downarrow}}{m}}{(c-\bm{u}_\downarrow\cdot \hat{\bm{q}})^2 -\frac{g n_\downarrow}{m}},
\end{equation}
with $\hat{\bm{q}}=\frac{\bm{q}}{|\bm{q}|}$. In general, four solutions for $c$ are provided, but because $c(-\hat{\bm{q}})=-c(\hat{\bm{q}})$, we consider only the two positive solutions for the propagation direction of $\hat{\bm{q}}$ and denote them by $c^{+}$ and $c^{-}$ with $c^{+}\geq c^{-}$. For $n_i=n_0$ and $\bm{u}_i=0$, the fast (slow) sound speed is given by $c^{\pm}=c_{n(s)}=\sqrt{\frac{(g \pm g_{\uparrow\downarrow})n_0}{m}}$, and the sound propagates with $A_\uparrow=A_\downarrow$ ($A_\uparrow=-A_\downarrow$), corresponding to phonon (magnon) excitations in a symmetric binary superfluid system.
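Equation (22) amounts to a quartic equation in $c$ and can be solved numerically. The sketch below is our own (the units $m=n_0=g=1$ and $g_{\uparrow\downarrow}=0.93$ are illustrative choices); it recovers $c^{\pm}=\sqrt{(g\pm g_{\uparrow\downarrow})n_0/m}$ in the symmetric case at rest:

```python
import numpy as np


def sound_speeds(n_up, n_dn, u_up, u_dn, g=1.0, g_ud=0.93, m=1.0):
    """Positive real roots c of the quartic Eq. (22).

    u_up and u_dn are the projections of the component velocities onto
    the propagation direction q_hat."""
    p_up = np.poly1d([1.0, -2.0 * u_up, u_up**2 - g * n_up / m])
    p_dn = np.poly1d([1.0, -2.0 * u_dn, u_dn**2 - g * n_dn / m])
    quartic = p_up * p_dn - np.poly1d([g_ud**2 * n_up * n_dn / m**2])
    roots = quartic.roots
    real = roots[np.abs(roots.imag) < 1e-9].real  # complex roots signal instability
    return np.sort(real[real > 0])


# Symmetric BEC at rest: c_pm = sqrt((g +/- g_ud) n0 / m).
c_minus, c_plus = sound_speeds(1.0, 1.0, 0.0, 0.0)
print(c_minus, c_plus)  # ~0.2646 (spin sound), ~1.3892 (mass sound)
```

Sweeping $u_{\uparrow}=-u_{\downarrow}=u_\text{rel}/2$ and the density imbalance in this routine should reproduce the reduction of $c^{-}$ shown in Figure 4.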
Figure~4 shows the sound speeds $c^{\pm}(\hat{\bm{q}})$ for various flow conditions of $\bm{u}_\uparrow =\frac{u_{\text{rel}}}{2}\hat{\bm{x}}$, $\bm{u}_\downarrow =-\frac{u_{\text{rel}}}{2}\hat{\bm{x}}$, $n_\uparrow=n_0+\frac{m_z}{2}$, and $n_\downarrow=n_0-\frac{m_z}{2}$. We observe that $c^{+}$ is not significantly affected by changes in $u_{\text{rel}}$ and $m_z$, implying the strong phonon characteristics of the fast sound. Meanwhile, $c^{-}$ is sensitive to them: it decreases with increasing $u_{\text{rel}}$ and $m_z$, and the reduction rate is fastest along the spin current direction. In our moving-obstacle situation, the relative velocity and the density imbalance between the two spin components are maximal at the obstacle's center. Additionally, the obstacle's direction of motion is the same as the direction of the spin current. Therefore, as the obstacle's velocity increases, the stability of the induced superflow will break first in the region of the obstacle's center according to the local Landau criterion.
Figure~5(a) shows the speed of sound, $c^{-}(\hat{\bm{x}})$, at the obstacle's center as a function of $u$ for various $\alpha$ from $0.1$ to $0.9$. The local flow condition of $\{n_i,\bm{u}_i\}$ in the center region was numerically obtained for each set of $\{u, \alpha\}$, and $c^{-}(\hat{\bm{x}})$ was determined from Eq.~(22). As $u$ increases, $c^{-}$ decreases and eventually becomes equivalent to $u$, which defines the obstacle's critical velocity, $u_c$. Note that the countersuperflow instability corresponds to an imaginary solution of $c$ in Eq.~(22) and is irrelevant to our current study.
\begin{figure}[t!]
\includegraphics[width=8.4cm]{FIG5}
\caption{Critical velocity of the magnetic obstacle. (a) Slow sound speed, $c^{-}(\hat{\bm{x}})$, at the obstacle's center as a function of the obstacle velocity $u$ for various potential magnitudes $V_0(=\alpha m c_s^2)$. $\Delta g/g=0.07$. The flow condition of $\{n_{\uparrow(\downarrow)}, \bm{u}_{\uparrow(\downarrow)}\}$ at the obstacle center was numerically determined for given $u$ and $V_0$, and $c^{-}(\hat{\bm{x}})$ was calculated from Eq.~(22). The critical velocity $u_c$ is determined at the onset of Landau instability, where the obstacle's velocity (dashed line) exceeds the sound speed. (b) $u_c$ as a function of $\alpha$ for various $\Delta{g}/g$. The bottom graph shows the residue of $u_c$ with respect to a model critical line of $u_c={c_s}(1-\alpha)$.}
\label{fig.5}
\end{figure}
The critical velocity $u_c$ decreases with increasing potential magnitude, $V_0$ [Fig.~5(b)], which is attributable to the reduction in $c^-$ due to the increased $m_{z,0}$. In the limit of $V_0\rightarrow 0$, $u_c$ approaches $c_s$ as $m_{z,0}\rightarrow 0$ and $\bm{u}_i\rightarrow 0$. When $V_0$ reaches $m c_s^2$, i.e., $\alpha=1$, $u_c$ vanishes because the system is fully polarized at the obstacle's center and $c^{-}=0$. Interestingly, our numerical results indicate that $u_c$ decreases almost linearly with increasing $V_0$, suggesting an empirical critical line of $u_c(V_0)=c_s (1- \frac{V_0}{m c_s^2})$. We also scrutinized how $u_c$ was affected by the intercomponent interaction strength and observed that when $\Delta{g}/g$ increased from the $^{23}$Na value of 0.07, $u_c$ decreased for the same obstacle condition [Fig.~5(c)]. Because $\Delta{g}/g=1$ corresponds to a non-interacting two-component case, we may conclude that the observed linear dependence of $u_c$ on $V_0$ is driven by the interactions between the two spin components.
\section{Summary and Outlook}
We investigated the spin and the mass flow distributions generated by a moving, penetrable magnetic obstacle in a symmetric binary BEC. We presented an analytical description of the flow patterns in the perturbative regime for a slow obstacle and demonstrated that the induced spin and mass currents exhibit peculiar spatial distributions resembling those of the electric field from a charge dipole and the magnetic field around a current loop, respectively. When the obstacle's velocity was increased, we numerically observed that the spin and the mass flow patterns maintained their overall structures and that the peak current magnitudes were well accounted for by the enhanced spin polarization at the obstacle's center. Finally, we investigated the critical velocity $u_c$ of the magnetic obstacle based on the local Landau instability of the induced superflows and found that $u_c$ decreased almost linearly from the speed of spin sound with the increasing magnitude $V_0$ of the obstacle's potential.
The predicted $u_c(V_0)$ can be immediately tested in current experiments by measuring the rate of temperature increase of a stirred sample as a function of the obstacle's velocity. In previous experiments, the spin temperature of the two-component $^{23}$Na BEC was indirectly probed via the magnitude of spin fluctuations in the sample. When the obstacle's velocity exceeds a critical velocity, the magnetic obstacle will emit magnons, which can be detected as a sudden enhancement in spin fluctuations in the sample. When $u$ is increased further, another critical phenomenon involving the generation of topological objects, such as half-quantum vortices and magnetic solitons, may occur~\cite{Kamchatnov13,R36}. We also notice that another velocity point larger than $u_c$ exists, above which the obstacle center becomes fully polarized. This may facilitate a phase-slip process in the density-depleted spin component, possibly resulting in vortex nucleation. For a single-component BEC, vortex dipoles were experimentally observed to be periodically generated from a moving, penetrable obstacle~\cite{R15}, and a von Kármán vortex street was formed with an impenetrable obstacle~\cite{R11,R10}.
Finally, we point out that the experimental study with the two-component $^{23}$Na BEC can be extended to spin-1 spinor physics by rendering the $m_F=0$ spin state energetically accessible via tuning the quadratic Zeeman energy. In this case, the spin exchange process of $|m_F=1\rangle+|m_F=-1\rangle \rightarrow 2|m_F=0\rangle$ will be allowed for high spin currents~\cite{R16}, and the critical dynamics with the moving magnetic obstacle is expected to be richer for possibly involving different types of topological defects such as skyrmions~\cite{Choi12}.
\begin{acknowledgments}
We thank Joon Hyun Kim for his discussion and critical reading of the manuscript. This study was supported by the National Research Foundation of Korea (NRF-2018R1A2B3003373, NRF-2019M3E4A1080400).
\end{acknowledgments}
\section{Introduction}
Generative Adversarial Networks (GANs) have become a thriving topic in recent years
after the initial work by Goodfellow \emph{et al.} in \cite{goodfellow2014generative}.
Since then, GANs have quickly become a popular and rapidly changing field due to their
ability to learn high-dimensional complex real image distributions. As a result,
numerous GAN variants have emerged, like CramerGAN (\cite{bellemare2017cramer}),
MMDGAN (\cite{li2017mmd}), ProGAN (\cite{gao2019progan}), SN-DCGAN (\cite{miyato2018spectral}),
and the state-of-the-art StyleGAN, StyleGAN2, and StyleGAN3 (\cite{karras2019style,karras2020analyzing,karras2021alias}). Among various primary
applications of GANs is fake image and video generation, e.g., DeepFakes (\cite{deepfakes}),
FaceApp (\cite{faceapp}), and ZAO (\cite{zao}). In particular, DeepFakes is the first successful
project taking advantage of deep learning; it was started in 2017 on Reddit by an account
with the same name. Since then, deepfakes are regarded as falsified images and videos created
by deep learning algorithms, see \cite{sencar2022multimedia}. A major source of motivation for
investigation into the automatic deepfake detection is the visual indistinguishability between
fake images created by GANs and real ones. Moreover, the abuse of fake images potentially
pose threats to personal and national security. Therefore, research on deep fake detection has
become increasingly important with the rapid iteration of GANs.
There are two kinds of tasks in the detection of GAN-generated images. The easier one is identifying
an image as real or fake. The harder one consists of attributing fake images to the corresponding
GAN that generated them. In this paper, we mainly focus on the attribution task. Both tasks involve
extracting features from images and feeding them to classifiers. For the classifiers, there are
approaches based on traditional machine learning methods, which are relatively simple,
but often achieve relatively poor results, see \cite{fridrich2012rich,khan2019benchmark}.
Approaches based on deep learning, especially convolutional neural networks (CNN), have proven
powerful and are employed in many recent papers, see \cite{rossler2019faceforensics++, yu2019attributing,Wang_2020_CVPR,liu2020global, yu2020responsible,frank2020leveraging, wolter2021wavelet}.
For feature extraction, the simplest method is just using raw pixels as input. The results are, however,
not of high accuracy and the classifiers fed with raw pixels are not robust under common perturbations, see \cite{liu2020global,frank2020leveraging}. Therefore, it is necessary to develop methods to better extract
features. One stream is the learning-based method by Yu \emph{et al.} in \cite{yu2019attributing,yu2021artificial, yu2020responsible}, which found unique fingerprints of each GAN. Another stream is based on the mismatches
between real and fake in the frequency domain, see \cite{zhang2019detecting,durall2019unmasking,frank2020leveraging,durall2020watch,liu2020global,wolter2021wavelet}. Specifically, multiresolution methods, e.g., the wavelet packet transform, have recently been employed for
deepfake detection, see Wolter \emph{et al.} in \cite{wolter2021wavelet}. Their work demonstrates the capabilities
of multiresolution analyses for the task at hand and marks the starting point for our considerations.
In contrast to the isotropic transformations considered there, we focus on anisotropic transformations,
i.e., the fully separable wavelet transform (\cite{velisavljevic2006directionlets}) and samplets (\cite{harbrecht2022samplets}), which are a particular variant of multiwavelets.
Because the generators in all GAN architectures synthesize high-resolution images
from low-resolution ones using deconvolution layers with square sliding windows,
the anisotropic multiwavelet transforms of fake images are highly likely to exhibit
artifacts in the anisotropic sub-bands. In this paper, we show that features from anisotropic (multi-)wavelet
transforms are promising descriptors of images. This is due to remarkable mismatches between the anisotropic
multiwavelet transforms of real and fake images, see Figure \ref{fig:fp}. To evaluate the anisotropic features,
we set up a lightweight multi-class CNN classifier as in \cite{frank2020leveraging, wolter2021wavelet}
and compare our results on the datasets consisting of authentic images from one of the three commonly
used image datasets: Large-scale Celeb Faces Attributes (CelebA \cite{celeba}),
LSUN bedrooms (\cite{lsun}), and Flickr-Faces-HQ (FFHQ \cite{karras2019style}),
and synthesized images generated by CramerGAN, MMDGAN, ProGAN, and SN-DCGAN on the CelebA and LSUN bedroom,
or the StyleGANs on the FFHQ. Finally, as in \cite{frank2020leveraging,wolter2021wavelet},
we test the sensitivity to the number of training samples and the robustness under the four common perturbations: Gaussian blurring, image crop, JPEG based compression, and addition of Gaussian noise.
\section{Related work}
\noindent\textbf{Deep fake detection:}
A comprehensive statistical study of natural images shows that regularities always exist
in natural images due to the strong correlations among pixels, see \cite{lyu2013natural}.
However, such regularity does not exist in synthesized images. Besides, it is well-known that
checkerboard artifacts exist in CNNs-generated images due to downsampling and upsampling layers,
see examples in \cite{odena2016deconvolution, azulay2018deep}. The artifacts make identification
of deepfakes possible. In \cite{marra2018detection, rossler2019faceforensics++,Wang_2020_CVPR},
the authors show that GAN-generated fake images can be detected using CNNs
fed by conventional image foresics features, i.e., raw pixels. In order to improve the accuracy
and generalization of classifiers, several methods have been proposed to find more discriminative features than raw pixels. Several non-learnable features have been suggested,
for example hand-crafted cooccurrence features in \cite{nataraj2019detecting}, color cues in \cite{mccloskey2018detecting}, layer-wise neuron behavior in \cite{wang2019fakespotter},
and global texture in \cite{liu2020global}. In \cite{yu2019attributing},
Yu \emph{et al.} discover the existence of unique fingerprints of each GAN model,
which characterize the corresponding GAN model and are obtained in the training procedure.
With this technique, responsible GAN developers could fingerprint their models and keep track
of abnormal usage of their releases. In the follow-up paper (\cite{yu2020responsible}),
Yu \emph{et al.} scale up the GAN fingerprinting mechanism. However, in \cite{neves2020ganprintr},
Neves \emph{et al.} propose GANprintR to remove the fingerprints of GANs,
which renders this identification method useless.
\noindent\textbf{Frequency artifacts:}
It is found that artifacts are more visible in the frequency domain.
State of the art results are achieved using features in the frequency domain, e.g.,
the coefficients of the discrete cosine transform (\cite{zhang2019detecting,durall2019unmasking,frank2020leveraging,durall2020watch,liu2020global}) and the coefficients of the isotropic wavelet packet transform (\cite{wolter2021wavelet}).
In \cite{frank2020leveraging}, Frank \emph{et al.} found that the grid-like patterns in the frequency domain
stem from the upsampling layers. Even though ProGAN and StyleGANs are equipped with improved upsampling
methods, artifacts still exist in their frequency domain. Combining frequency-domain features with lightweight convolutional neural networks can outperform complex heavyweight convolutional neural networks
fed with raw pixel values. In \cite{wolter2021wavelet}, features based on wavelet packets
are used, which outperform all other state-of-the-art methods with comparable lightweight CNN classifiers.
The success of the isotropic wavelet packets inspired us to investigate this direction further
and to also take anisotropic multiresolution analyses into account
in order to extract more discriminative features for deepfake detection.
\section{Proposed Method}
\subsection{Motivation}
Images are often composed of two types of regions:
mostly monochromatic patches, usually backgrounds, and areas with sharp color
gradients, located at the borders that separate different objects.
This construction is similar to a square wave in 1D, which is notoriously difficult
to approximate using only cosine functions, as the discrete cosine transform (DCT) does.
This fact is known as the Gibbs phenomenon, see \cite{gibbs1898fourier}. Similar to a square
wave in 1D, images can be considered as piecewise constant functions in 2D, which makes using
DCT methods challenging as their supports are not localized in space but only in frequency.
This results in redundant representations of images in the frequency domain. One solution,
proposed in \cite{wolter2021wavelet}, is to decompose an image into frequencies while also
maintaining spatial information by using wavelets, which are localized in both domains and
are thus less susceptible to discontinuities. To demonstrate the efficiency of wavelet
representations of images, we consider an isotropic pattern with discontinuities on the boundaries
of square blocks and an anisotropic pattern with discontinuities on the boundaries of rectangular
blocks, see Figure \ref{fig:iso_and_aniso_patterns}. We then compute the DCT and four different
kinds of wavelet transforms, i.e., the discrete wavelet transform (DWT), the discrete wavelet packet
transform (DWPT), the fully separable wavelet transform (FSWT), and the samplet transform.
As the bar plots in Figure \ref{fig:iso_and_aniso_patterns} show, the DWT, the FSWT,
and the samplet transform overcome the Gibbs phenomenon, in contrast to the DCT and the DWPT. Moreover, the anisotropic
transforms, i.e., the FSWT and samplets, perform much better than the isotropic DWT in the task of finding
efficient representations for anisotropic patterns, which commonly occur in real images.
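The sparsity gap behind this observation can be reproduced in a few lines. The following sketch is our own construction in plain NumPy (an unnormalized Haar filter is used for simplicity); it compares the number of nonzero coefficients of a 1D step signal under a full Haar decomposition and under the discrete Fourier transform:

```python
import numpy as np


def haar_1d(x):
    """Full 1D Haar decomposition with (unnormalized) averages/differences."""
    coeffs, a = [], np.asarray(x, dtype=float)
    while len(a) > 1:
        coeffs.append((a[0::2] - a[1::2]) / 2)  # detail coefficients
        a = (a[0::2] + a[1::2]) / 2             # approximation coefficients
    coeffs.append(a)
    return np.concatenate(coeffs[::-1])


step = np.array([0.0] * 8 + [1.0] * 8)  # a single edge, as in piecewise images
haar_nnz = np.count_nonzero(np.abs(haar_1d(step)) > 1e-12)
fft_nnz = np.count_nonzero(np.abs(np.fft.fft(step)) > 1e-12)
print(haar_nnz, fft_nnz)  # 2 vs. 9: the edge excites all odd Fourier modes
```

The step is captured by two Haar coefficients, whereas the Fourier representation spreads it over all odd harmonics, the 1D analogue of the coefficient counts reported in Figure \ref{fig:iso_and_aniso_patterns}.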
\begin{figure}[ht]
\begin{subfigure}{.4\columnwidth}
\centering
\includegraphics[width=\linewidth]{figs/equal_boxes.png}
\end{subfigure}%
\begin{subfigure}{.6\columnwidth}
\centering
\begin{tikzpicture}
\pgfplotsset{compat=1.16, every tick label/.append style={font=\tiny}, every node near coord/.style={font=\Tiny}}
\begin{axis}[ybar,bar width=0.5,
xtick={0,1,2,3,4},
xticklabels={DCT,DWT,DWPT,FSWT,Samplets},
ylabel={\# of nonzeros},
ymin=0,
ymax=78000,
xmax=4.5,
width=\textwidth,
nodes near coords style={font=\sffamily,align=center,text width=1em},
nodes near coords=\pgfmathsetmacro{\mystring}{{"49k","16","65k","16","16"}[\coordindex]}\mystring,
nodes near coords align={vertical},
]
\addplot coordinates
{(0,49408) (1,16) (2,65536) (3,16) (4,16)};
\end{axis}
\end{tikzpicture}
\end{subfigure}
\begin{subfigure}{.4\columnwidth}
\centering
\includegraphics[width=\linewidth]{figs/mondrian.png}
\end{subfigure}%
\begin{subfigure}{.6\columnwidth}
\centering
\begin{tikzpicture}
\pgfplotsset{compat=1.16, every tick label/.append style={font=\tiny}, every node near coord/.style={font=\Tiny}}
\begin{axis}[ybar,bar width=0.5,
xtick={0,1,2,3,4},
xticklabels={DCT,DWT,DWPT,FSWT,Samplets},
ylabel={\# of nonzeros},
ymin=0,
ymax=78000,
xmax=4.7,
width=\textwidth,
nodes near coords style={font=\sffamily,align=center,text width=1em},
nodes near coords=\pgfmathsetmacro{\mystring}{{"65k","2k","65k","702","702"}[\coordindex]}\mystring,
nodes near coords align={vertical},
]
\addplot coordinates
{(0,65536) (1,1949) (2,65536) (3,702) (4,702)};
\end{axis}
\end{tikzpicture}
\end{subfigure}
\caption{The top left part shows a \(256\times256\) blockwise constant grayscale image
with 16 squares of equal size (black stands for 0, and white stands for 255). The bottom left
part shows a \(256\times256\) blockwise constant grayscale image with 40 rectangles
(black stands for 0, and white stands for 255). The right part shows the numbers of
nonzero coefficients under different transforms of the isotropic and anisotropic patterns
respectively. Herein, we consider the wavelets and samplets with vanishing moments up to order 1.}
\label{fig:iso_and_aniso_patterns}
\end{figure}
The previous works \cite{zhang2019detecting,frank2020leveraging} have already analyzed
the effectiveness of using the frequency domain instead of the direct pixel representation
when detecting deepfakes. Moreover, the method in \cite{wolter2021wavelet} has improved
the state-of-the-art result using the isotropic wavelet, i.e., the DWPT. However, they usually
result in redundant representations of images. Moreover, the rely on only isotropic decompositions.
We are convinced that anisotropic transforms can add a new aspect to the challenge at hand.
The intuition behind this reasoning comes from the fact that GAN architectures typically only use
isotropic convolutions (square sliding windows) to synthesize new samples, thus being unaware of the
fingerprint they are leaving in the hidden anisotropic coefficients' distribution of the image.
We focus on two technologies that allow us to expose these fingerprints and obtain
a spatio-frequency representation of the source image: the fully separable wavelet transform and samplets,
which are a particular variant of multiwavelets.
\subsection{Preliminaries}
\subsubsection{Isotropic wavelets}
\noindent\textbf{Discrete wavelet transform:} Wavelets are localized waves that have a nonzero
value around a certain point but then collapse to zero when moving further away. Examples for such
wavelets are Haar- and Daubechies- wavelets, see \cite{daubechies1992}. The main idea behind
wavelet-based analyses is the decomposition of a signal with respect to hierarchical scales.
The smaller the scales, the higher the corresponding frequency. Unlike the DCT, wavelets are localized
in the spatial domain as well, due to their hierarchical nature. The fast wavelet transform (FWT),
see \cite{beylkin1991fast},
is an algorithm commonly used to apply a discrete wavelet transform onto an $n\times n$ image
and decompose it into approximation coefficients using convolutions of filter banks and downsampling
operators, with a computational cost in the order of $\mathrm{O}(n^2)$, i.e., linear in the number of pixels.
The results of one decomposition step of the FWT are four sets of coefficients usually referred to as
$a$, $h$, $v$, $d$, which stand for approximation, horizontal, vertical, and diagonal coefficients.
To produce a decomposition up to level $l$, the $a$ coefficient of level $l-1$ is further decomposed
into four components: $aa$, $ah$, $av$, $ad$, giving rise to the structure in the leftmost side of
Figure \ref{fig:wvlt_structures}. This approach is also known as Mallat decomposition (\cite{mallat_dec})
and amounts to a 2D isotropic wavelet transform. All frequency-based methods like Fourier transformations
and wavelets are usually tailored towards unbounded domains, while images are instead bounded.
In the case of the Haar wavelet, the transition from unbounded to bounded is not a problem and
no boundary modifications are required to obtain an orthonormal basis.
However, higher-order wavelets require modifications at the boundary,
which introduces the need for padding, usually accomplished either by appending zeros to a dimension
or by repeating the data or reflecting it. To avoid changing the size of images,
the Gram-Schmidt boundary filter is proposed in \cite{boundary_filters}, which replaces the wavelet
filters at the boundary with special shorter filters that preserve both the shape and the
reconstruction property of the wavelet transform.
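For the Haar case, where no boundary treatment is needed, one FWT analysis step can be sketched in a few lines of NumPy. This is a minimal illustration only; the experiments later use the GPU library \verb+ptwt+ with \textit{db3}/\textit{db4} filters and the boundary handling discussed above.

```python
import numpy as np

def haar_dwt2(x):
    """One analysis step of the 2D Haar FWT, returning (a, h, v, d)."""
    s = np.sqrt(2.0)

    def lo(y, axis):   # low-pass filtering plus downsampling along one axis
        y = np.moveaxis(y, axis, 0)
        return np.moveaxis((y[0::2] + y[1::2]) / s, 0, axis)

    def hi(y, axis):   # high-pass filtering plus downsampling along one axis
        y = np.moveaxis(y, axis, 0)
        return np.moveaxis((y[0::2] - y[1::2]) / s, 0, axis)

    L, H = lo(x, 1), hi(x, 1)   # filter along the x axis first
    # filter along the y axis; the detail blocks follow the usual
    # (approximation, horizontal, vertical, diagonal) convention
    return lo(L, 0), lo(H, 0), hi(L, 0), hi(H, 0)

img = np.arange(64.0).reshape(8, 8)
a, h, v, d = haar_dwt2(img)
assert a.shape == h.shape == v.shape == d.shape == (4, 4)
```

Since the Haar filters are orthonormal, the energy of the image is preserved across the four coefficient blocks.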
\noindent\textbf{Wavelet packets:} Following the assumption that information is mostly contained
in the low-frequency approximation coefficient $a$, the Mallat approach leaves the high frequency
terms $h$, $v$, and $d$ untouched when moving up one level in the decomposition.
Previous work (\cite{fft_lower_freq}) has instead shown that,
especially when dealing with compressed and lower resolution images, information in the higher
frequencies is usually affected more by reduction processes than that in the lower part of the
frequency spectrum. Therefore, the remaining three feature maps are likely to also contain
relevant information for deepfake detection. Motivated by this reasoning,
the authors of \cite{wolter2021wavelet} employed wavelet packets as input features for their
classifiers. A single level decomposition of a wavelet packet is essentially the same as that
of the FWT, but when decomposing up to level $l$, also the $h$, $v$, and $d$ of level $l-1$ are
further decomposed into $(ha,hh,hv,hd)$, $(va,vh,vv,vd)$ and $(da,dh,dv,dd)$, at the total cost of
$\mathrm{O}(n^2\log n)$ operations. The resulting quad-tree-like structure is depicted in the middle
of Figure \ref{fig:wvlt_structures} for a complete decomposition. In \cite{wolter2021wavelet},
the authors used a Gram-Schmidt boundary filter such that coefficients of any image using the
wavelet packet transform remain the same size as the transformed image. Afterwards,
all packets are stacked together along the channel direction before feeding the CNN.
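A complete packet tree of depth \(l\) can be sketched by recursively applying such a one-level split to every block. Again this is a Haar-only sketch for illustration; \cite{wolter2021wavelet} uses \textit{db3}/\textit{db4} filters with Gram-Schmidt boundary filters.

```python
import numpy as np

def haar_split(x):
    """One 2D Haar analysis step, returning the four coefficient blocks."""
    s = np.sqrt(2.0)
    L = (x[:, 0::2] + x[:, 1::2]) / s
    H = (x[:, 0::2] - x[:, 1::2]) / s
    return [(L[0::2] + L[1::2]) / s, (L[0::2] - L[1::2]) / s,
            (H[0::2] + H[1::2]) / s, (H[0::2] - H[1::2]) / s]

def wavelet_packets(x, level):
    """Full packet tree: every block of level l-1 is split again."""
    packets = [x]
    for _ in range(level):
        packets = [b for p in packets for b in haar_split(p)]
    return packets

img = np.random.rand(16, 16)
pk = wavelet_packets(img, level=2)
assert len(pk) == 4 ** 2 and pk[0].shape == (4, 4)
features = np.stack(pk)   # stacked along the channel direction for the CNN
```

Note how the number of packets grows as \(4^l\), while every packet shrinks by a factor of \(2^l\) per side, so the total number of coefficients stays constant.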
\begin{figure}[ht]
\centering
\begin{tikzpicture}[scale=0.57]
\draw[black, thin] (0,0) -- (4,0) -- (4,4) -- (0, 4) --
(0,0);
\draw[black, thin] (2,0) -- (2,4);
\draw[black, thin] (1,2) -- (1,4);
\draw[black, thin] (0,2) -- (4,2);
\draw[black, thin] (0,3) -- (2,3);
\node (v) at (1,1) {$v$};
\node (d) at (3,1) {$d$};
\node (h) at (3,3) {$h$};
\node (aa) at (0.5,3.5) {$aa$};
\node (av) at (0.5,2.5) {$av$};
\node (ad) at (1.5,2.5) {$ad$};
\node (ah) at (1.5,3.5) {$ah$};
\foreach \x in {5,...,9} {%
\draw[black, thin] (\x,0) -- (\x,4);
}
\foreach \y in {0,...,4} {%
\draw[black, thin] (5,\y) -- (9,\y);
}
\node at (5.5,3.5) {$aa$};
\node at (5.5,2.5) {$av$};
\node at (6.5,2.5) {$ad$};
\node at (6.5,3.5) {$ah$};
\node at (7.5,3.5) {$ha$};
\node at (7.5,2.5) {$hv$};
\node at (8.5,2.5) {$hd$};
\node at (8.5,3.5) {$hh$};
\node at (5.5,1.5) {$va$};
\node at (5.5,0.5) {$vv$};
\node at (6.5,0.5) {$vd$};
\node at (6.5,1.5) {$vh$};
\node at (7.5,1.5) {$da$};
\node at (7.5,0.5) {$dv$};
\node at (8.5,0.5) {$dd$};
\node at (8.5,1.5) {$dh$};
\draw[black, thin] (10,0) -- (14,0) -- (14,4) -- (10, 4) --
(10,0);
\draw[black, thin] (10,2) -- (14,2);
\draw[black, thin] (10,3) -- (14,3);
\draw[black, thin] (12,0) -- (12,4);
\draw[black, thin] (11,0) -- (11,4);
\end{tikzpicture}%
\caption{Leftmost: the quad tree structure of the wavelet decomposition of an image.
Middle: the complete tree structure of a wavelet packet.
Rightmost: tree structure of a fully separable wavelet decomposition.}
\label{fig:wvlt_structures}
\end{figure}
\subsubsection{Anisotropic wavelets}
\noindent\textbf{Fully Separable Wavelet Transform:} Previous work has focused on the
isotropic discrete wavelet transform and its derivatives to extract features. For images,
the isotropic decomposition from level 1 up to level $l$ consists of transforming each axis
once for each level. As we have seen with the FWT and wavelet packets, first the $x$ axis is
decomposed, then the $y$ axis, and only after the algorithm moves to the next level.
Instead, the fully separable decomposition first decomposes completely one axis up to level $l$ and
then decomposes the obtained features on the other axis
(see the rightmost of Figure \ref{fig:wvlt_structures} for a sketch of the resulting pattern).
This different approach gives rise to a more anisotropy-friendly feature extraction while only
increasing the computational cost by a factor of two compared to the Mallat decomposition.
Similarly to wavelet packets, a suitable boundary modification, e.g., by padding,
is necessary for the fully separable wavelet transform with more than one vanishing moment.
In this paper, we consider both the reflect-padding method and the
boundary filter with QR orthogonalization.
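The axis-by-axis schedule can be sketched as follows for the Haar case, where `level` plays the role of \(l\): one axis is decomposed completely with the 1D Mallat scheme before the second axis is touched.

```python
import numpy as np

def haar_full_1d(x, level, axis):
    """Complete 1D Haar decomposition of one axis up to `level`,
    storing the coefficients in place of the input samples."""
    x = np.moveaxis(x.astype(float).copy(), axis, 0)
    n = x.shape[0]
    for _ in range(level):
        lo = (x[0:n:2] + x[1:n:2]) / np.sqrt(2.0)   # approximation part
        hi = (x[0:n:2] - x[1:n:2]) / np.sqrt(2.0)   # detail part
        x[: n // 2], x[n // 2 : n] = lo, hi
        n //= 2                                      # only lo is split again
    return np.moveaxis(x, 0, axis)

def fswt2(img, level):
    """Fully separable transform: finish the rows first, then the columns."""
    return haar_full_1d(haar_full_1d(img, level, axis=1), level, axis=0)

img = np.random.rand(16, 16)
coeffs = fswt2(img, level=3)
assert coeffs.shape == img.shape   # same support size, no padding for Haar
```

For higher-order filters the same schedule applies, but, as stated above, a boundary modification or padding becomes necessary.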
\noindent\textbf{Samplets:}
First presented in \cite{harbrecht2022samplets}, samplets are a generalization of
multiwavelets that produce a multiresolution analysis of a given signal in terms of
discrete signed measures. Samplets can be constructed with an arbitrary number $m$ of
vanishing moments for arbitrary data sets. For structured data, such as images,
lowest order samplets (\(m=1\)) correspond to Haar wavelets.
The construction of the basis and the fast samplet transform can both be performed in
linear time, i.e., the cost of transforming an $n\times n$
input image is \(\mathcal{O}(n^2)\).
Different from the FSWT, samplets are a data-centric approach and
can accommodate any data dimensions without the need
for padding or special boundary treatment. In particular,
they do not suffer from discontinuities at
the boundaries and do not introduce artifacts in the decomposition,
which allows using samplets directly without modifications
and without increasing their computational cost.
\subsection{Proof of concept}
\begin{figure*}[ht]
\centering
\includegraphics[width=\linewidth]{figs/comp_ffhq.png}
\caption{A row by row comparison of real and synthesized images on sub-bands. The first row shows
the average of 1000 real images from the FFHQ dataset. The second column shows the
corresponding samplet coefficients, while the last column shows the coefficients of the FSWT.
The second row shows the same information for a dataset of the same size generated by
the StyleGAN model taken from \href{https://thispersondoesnotexist.com}{thispersondoesnotexist.com}. In the
color map, white stands for zeros, red for positive numbers, and blue for negative numbers.}
\label{fig:fp}
\end{figure*}
\begin{figure}[ht]
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[trim={0.0cm 12.0cm 12.0cm 0.0cm},clip,width=\linewidth]{figs/ffhq_fs_pca.png}
\caption{Raw pixels}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[trim={12.0cm 12.0cm 0.0cm 0.0cm},clip,width=\linewidth]{figs/ffhq_fs_pca.png}
\caption{Low frequency coefficients}
\end{subfigure}
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[trim={0.0cm 2.5cm 12.0cm 12.5cm},clip,width=\linewidth]{figs/ffhq_fs_pca.png}
\caption{Anisotropic coefficients}
\end{subfigure}%
\begin{subfigure}{.5\linewidth}
\centering
\includegraphics[trim={12.0cm 2.5cm 0.0cm 12.5cm},clip,width=\linewidth]{figs/ffhq_fs_pca.png}
\caption{High frequency coefficients}
\end{subfigure}
\caption{The 3D scatter plots of 2000 samples consisting of 1k real and 1k fake images visualize the first three principal components of the raw pixels and the three different parts of the FSWT coefficients.}
\label{fig:pca-ffhq}
\end{figure}
To illustrate how features are extracted by anisotropic multiresolution analyses,
we visualize in Figure \ref{fig:fp} the samplet and FSWT coefficients
for the average of 1000 real images from the FFHQ (top row) and for
1000 images by the StyleGAN models from the photorealistic face generation website
\href{https://thispersondoesnotexist.com}{thispersondoesnotexist.com} (bottom row).
As can be seen, the average images from the real sample and the one generated by
StyleGAN in the first column are very similar. In particular, it is very difficult to tell
which of the exemplary images in the second column is synthesized and which one is real.
According to the experiments in \cite{liu2020global}, human performance in classification
only leads to an accuracy of about 63.9\% in the FFHQ vs StyleGAN case, which is only slightly
higher than the probability of randomly guessing. The two anisotropic transforms lead
to the sparse patterns on the right side of the figure. The patterns for the real images
and the synthesized images are clearly distinguishable, especially by examining the
high frequency and anisotropic parts. These observations suggest that
samplets and FSWT are suitable for assisting classifiers in discerning the origin of a
given image.
To further illustrate the idea of using features extracted by the two proposed anisotropic
transforms, we visualize the first three principal components of the raw pixels and
different parts of the FSWT coefficients in a principal component analysis (PCA).
The samples for FFHQ and StyleGAN2 consist of 1000 images, each, with a resolution
of \(128\times 128\). Panel (a) in Figure \ref{fig:pca-ffhq} shows the principal
components of the raw pixels, which are not distinguishable. The same holds for
the low frequency contributions depicted in panel (b). In turn,
the principal components of real and synthesized images become linearly separable
in the anisotropic parts and high frequency parts, depicted in panels (c) and (d).
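The projection onto the leading principal components used for Figure \ref{fig:pca-ffhq} amounts to a plain centered SVD. The sketch below uses synthetic stand-in blobs whose class separation is an assumption for illustration, not the actual FFHQ coefficient statistics.

```python
import numpy as np

def pca(X, k=3):
    """Project the rows of X onto their first k principal components."""
    Xc = X - X.mean(axis=0)                     # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                        # scores, shape (n, k)

rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(100, 50))     # stand-in for real coefficients
fake = rng.normal(4.0, 1.0, size=(100, 50))     # stand-in for GAN coefficients
scores = pca(np.vstack([real, fake]))
# with well-separated inputs the first component already splits the classes
gap = scores[:100, 0].mean() - scores[100:, 0].mean()
assert abs(gap) > 5.0
```

In the real experiment the rows of `X` would be the flattened anisotropic or high frequency coefficient blocks of the transformed images.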
The scales of the wavelet transform coefficients decrease exponentially with respect to
their underlying level \(l\) if the underlying signal exhibits sufficient smoothness.
However, despite their small magnitude, the high frequency coefficients at high levels remain important for the classification.
Therefore, we employ a block-wise normalization, where we divide coefficients by
the maximum absolute value on the corresponding block. This way, we can bring each
block to the same range \((-1,1)\). Moreover, we keep the zero coefficients untouched
to maintain the sparsity pattern. In the following experiments, this block-wise
normalization is always applied before feeding the data into the CNN classifier.
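The block-wise normalization can be sketched as follows, where `blocks` is a list of index expressions selecting the coefficient blocks; a toy two-block layout is assumed here for illustration.

```python
import numpy as np

def blockwise_normalize(coeffs, blocks):
    """Divide every coefficient block by its maximum absolute value.

    Zero blocks are left untouched so the sparsity pattern is preserved.
    """
    out = coeffs.astype(float).copy()
    for sl in blocks:
        m = np.max(np.abs(out[sl]))
        if m > 0:                      # keep all-zero blocks exactly zero
            out[sl] /= m
    return out

c = np.array([[4.0, -2.0], [0.0, 0.0]])
blocks = [np.s_[0, :], np.s_[1, :]]    # one slice per (toy) block
n = blockwise_normalize(c, blocks)
assert n.max() <= 1.0 and n.min() >= -1.0
assert np.allclose(n[1], 0.0)          # zeros stay zeros
```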
\section{Experiments}
All experiments in this paper are performed on a compute server with
two Intel Xeon (E5-2650 v3 @ 2.30GHz),
one NVIDIA A100-PCIe-40GB, two NVIDIA GeForce GTX 1080 Ti (Titan 11GB GDDR5X),
and one NVIDIA GeForce GTX 1080 (Founders Edition 8GB GDDR5X).
\subsection{Source Identification}
\noindent\textbf{Experimental Setup}
The experiments are conducted on three datasets: CelebA, LSUN bedroom, and FFHQ.
The datasets CelebA and LSUN bedroom are generated by the pre-trained GAN models
from \cite{yu2019attributing}, and contain 150k resized real images of resolution
\(128\times128\) and 150k fake images in the same size for each model.
The models under consideration are CramerGAN, MMDGAN, ProGAN, and SN-DCGAN.
Thus, the total dataset has a size of 750k.
It is then partitioned into training, validation, and test datasets consisting of
500k, 100k, and 150k images respectively.
On the other hand, the FFHQ dataset contains 70k real images resized to
\(128\times128\) and 70k fake images of the same size for each StyleGAN model, i.e.,
StyleGAN, StyleGAN2, and StyleGAN3. As in \cite{wolter2021wavelet}, we split the full
dataset into training, validation, and test datasets with sizes of 252k, 8k, and 20k,
respectively. The CelebA and LSUN bedroom samples are generated following the recommendations
from the references \cite{frank2020leveraging, wolter2021wavelet},
which suggest using the pre-trained models from \cite{yu2019attributing}. Analogously to
\cite{wolter2021wavelet}, the fake dataset of FFHQ is instead generated using
the pre-trained models provided by the authors of
\cite{karras2019style,karras2020analyzing,karras2021alias}, using, in particular,
the R configuration (translation and rotation equivariant) of StyleGAN3,
a value of 1 for the truncation parameter, and image numbers from 1 to 70k as the random seeds.
The fully separable wavelet transform is realized by using the GPU enabled
fast wavelet transform library \verb+ptwt+ (\cite{pywt,WolterPhD,Blanke2021}).
The samplets, implemented in C++, are integrated into the toolchain by using
\verb|pybind11| (\cite{pybind11}). The implementation is available at
\url{https://github.com/muchip/fmca}.
In order to demonstrate the advantages of features extracted by anisotropic multiresolution analyses,
we train the same shallow CNN proposed in \cite{frank2020leveraging} (see Table \ref{tab:cnn_arch}),
fed with samplets and the fully separable decomposition to match the state of the art performance.
In the architecture, a fully connected layer is added at the end of the convolution part as a classifier.
Its output is the number of classes, and the input size is \(32d^2\), where \(d\) depends on the size
of input features. Because the samplet transform does not involve any padding procedure,
\(d\) for the samplets is the same number as for the raw pixels. However, the size of the fully separable decomposition coefficients depends on the padding method and is slightly larger than that of
the raw pixels and the samplets, except when boundary filters are used. Therefore, the fully connected layer
for the FSWT without the boundary method is slightly larger than that for the raw pixels and samplets.
For the Daubechies 3 orthogonal wavelet (\textit{db3})
with the reflect-padding method, the total number of parameters for the FSWT increases to
roughly 202k, compared to the 170k parameters for the raw pixels, DCT, and samplets.
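These parameter counts can be reproduced from the layer sizes in Table \ref{tab:cnn_arch}. The sketch below assumes \(128\times128\) inputs and five output classes (the real class plus four GANs); \(d=32\) follows from the two \(2\times2\) poolings, while \(d=35\) for \textit{db3} with reflect padding is an assumption chosen here to match the reported 202k figure and is not stated explicitly in the text.

```python
def conv_params(c_in, c_out, k=3):
    """Weights plus biases of a k x k convolution layer."""
    return c_in * c_out * k * k + c_out

def total_params(d, classes=5):
    convs = (conv_params(3, 3) + conv_params(3, 8)
             + conv_params(8, 16) + conv_params(16, 32))
    dense = 32 * d * d * classes + classes   # the final fully connected layer
    return convs + dense

# raw pixels / DCT / samplets: two 2x2 average poolings turn 128 into d = 32
assert total_params(32) == 169_961                 # roughly 170k
# reflect padding slightly enlarges the coefficient grid (assumed d = 35)
assert round(total_params(35) / 1000) == 202       # roughly 202k
```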
In the training procedure, we set the batch size to 128 and train the model using the Adam algorithm (\cite{kingma2014adam}) with a learning rate of 0.001 for 10 epochs. The random seed values used
are 0, 1, 2, 3, and 4 for five repeated training procedures. The random seed is not only used to initialize
the network weights, but also to shuffle the entire dataset before splitting.
This way, there is no bias in the partitioning of the data into the training,
test, and validation datasets.
\begin{table}[ht]
\centering
{\small
\begin{tabularx}{0.5\linewidth}{cc}
\toprule
\multicolumn{2}{c}{Simple CNN} \\
\hline
Conv & (3,3,3,3)\\
ReLU &\\
Conv & (3,8,3,3) \\
ReLU & \\
AvgPool & (2,2) \\
Conv & (8,16,3,3) \\
ReLU & \\
AvgPool & (2,2) \\
Conv & (16,32,3,3) \\
ReLU & \\ \hline
Dense & (32 $\cdot$ d $\cdot$ d, $c$) \\
\bottomrule
\end{tabularx}
}
\caption{CNN classifier architecture for the samplets and the fully separable transform.}
\label{tab:cnn_arch}
\end{table}
\begin{table*}
\centering
\begin{tabular}{lccccccccc}
\toprule
&& && \multicolumn{2}{c}{CelebA\%} && \multicolumn{2}{c}{LSUN bedrooms\%}\\
\cline{5-6} \cline{8-9}
Method && parameters && max\ & $\mu\pm\sigma$ && max\ & $\mu\pm\sigma$\\ \hline
Pixels (Frank \emph{et al.} \cite{frank2020leveraging}) && 170k && 97.80 & - && 98.95 & -\\
DCT (Frank \emph{et al.} \cite{frank2020leveraging}) && 170k && 99.07 & - && 99.64 & -\\
Packet-ln-db3 (Wolter \emph{et al.} \cite{wolter2021wavelet}) && 109k && 99.38 & $99.11\pm0.49$ && 99.19 & $99.01\pm0.17$\\
Packet-ln-db4 (Wolter \emph{et al.} \cite{wolter2021wavelet}) && 109k && 99.43 & $99.27\pm0.15$ && 99.02 & $98.46\pm0.67$\\ \hline
Samplets-BN-2 (our) && 170k && 99.77 & $99.71\pm0.06$ && 99.46 & $99.37\pm0.06$\\
Samplets-BN-3 (our) && 170k && 99.93 & $99.87\pm0.06$ && 99.42 & $99.19\pm0.2$\\
Samplets-BN-4 (our) && 170k && 99.77 & $99.71\pm0.08$ && 99.28 & $99.3\pm0.12$\\
Samplets-BN-5 (our) && 170k && 99.86 & $99.79\pm0.09$ && 99.41 & $99.3\pm0.09$\\
Samplets-BN-6 (our) && 170k && 99.87 & $99.8\pm0.05$ && 99.47 & $99.32\pm0.13$\\
Samplets-BN-7 (our) && 170k && 99.86 & $99.72\pm0.08$ && 99.5 & $99.35\pm0.21$\\
FSWT-BN-db3-boundary (our) && 170k && 99.88 & $99.84\pm0.05$ && 99.1 & $98.76\pm0.35$\\
FSWT-BN-db4-boundary (our) && 170k && 99.91 & $99.76\pm0.11$ && 99.33 & $99.13\pm0.18$\\
FSWT-BN-db3-reflect (our) && 202k && \textbf{99.97} & $99.96\pm0.01$ && \textbf{99.9} & $99.79\pm0.06$\\
FSWT-BN-db4-reflect (our) && 225k && 99.96 & $99.86\pm0.14$ && 99.86 & $99.79\pm0.04$\\
\bottomrule
\end{tabular}
\caption{Accuracy results for different features with max, mean and standard deviation on the CelebA and the LSUN bedroom datasets. We report our results in comparison to the results in \cite{frank2020leveraging, wolter2021wavelet}, which are obtained using seeds from 0 to 4.}
\label{tab:res_kfold_celeba_lsun}
\end{table*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figs/celeba_fp.png}
\caption{Average coefficients of each feature extraction method on the CelebA dataset.
These visualizations are obtained by applying the block-wise normalization,
then scaling them to the range \((0,255)\), and finally casting them to 8-bit unsigned integers.
The first row is for the samplets and the second is for the fully separable
\textit{db3} with \(l=3\). We are using the diverging color map with white
standing for zeros, red for positive numbers, and blue for negative numbers.}
\label{fig:celeba_fs}
\end{figure*}
\noindent\textbf{Results:}
The best result from \cite{wolter2021wavelet} is obtained by the \textit{db4-fuse}
architecture, where a much more complex network than the one from Table~\ref{tab:cnn_arch}
is fed with the wavelet packet representations, the Fourier transform,
and the raw pixels. We decided to ignore the results of the fuse architecture
because they rely more on the underlying network architecture than the input features.
We rather compare our features to the features in \cite{frank2020leveraging, wolter2021wavelet}
using the same simple CNN in Table \ref{tab:cnn_arch},
focusing on the results solely obtained from a single multiresolution analysis:
wavelet packets \textit{db3} and \textit{db4}. We also evaluate samplets with
vanishing moments up to order \(m=7\) to see how the accuracy changes when
drastically increasing the number of vanishing moments.
To summarize, for our features, we consider:
\begin{itemize}
\item Samplets with vanishing moments up to order \(m=7\);
\item Fully separable \textit{db3} and \textit{db4} with the maximum decomposition level of 3,
to have a direct comparison to the wavelet packet \textit{db3} and \textit{db4};
\end{itemize}
From Table \ref{tab:res_kfold_celeba_lsun}, we see that on the CelebA and LSUN bedroom datasets
the anisotropic transforms perform better in terms of the maximum, the mean, and the standard deviation.
Among all features, the fully separable \textit{db3} with the reflect-padding method is the clear winner.
On the other hand, if we only consider the transforms without padding,
samplets with vanishing moments up to order 3 perform better than the FSWT with the boundary method in all aspects.
Samplets with many vanishing moments do not bring much improvement compared
to those with fewer vanishing moments. Thus, we will only focus on samplets-3 and samplets-4
to have a direct comparison to the wavelet packets \textit{db3} and \textit{db4}
in our following experiments. To visually demonstrate why anisotropic features work better,
in Figure \ref{fig:celeba_fs}, we compare the average samplet and fully separable decomposition
coefficients of the reference CelebA dataset and synthetic data generated from multiple GANs.
As we can see, the anisotropic analysis leads to different patterns for images from different
sources, which, we believe, could aid a neural classifier in discerning the origin of each data point.
We also want to explore how well the different features perform on more advanced GAN models.
Since the five GANs considered for the CelebA and the LSUN bedrooms are not state-of-the-art anymore,
we shift our focus to StyleGAN, StyleGAN2 and StyleGAN3, for which there also exist pre-trained
models on the FFHQ dataset. We repeat the classification task five times and report the results in
Table \ref{tab:res_kfold_ffhq} for the six best methods identified before: samplets-3 and samplets-4,
fully separable \textit{db3} and \textit{db4} with the boundary and the reflect-padding methods,
as well as the benchmarks in \cite{frank2020leveraging, wolter2021wavelet}. Even though samplets
do not perform as well as on the CelebA and LSUN bedroom datasets, the fully separable \textit{db3}
and \textit{db4} with the reflect-padding method consistently perform better than
the other features.
\vspace{1em}
\begin{table}
\centering
\begin{adjustbox}{width=\linewidth}
\begin{tabular}{lcc}
\toprule
& \multicolumn{2}{c}{FFHQ\%} \\
\cline{2-3}
Method & max\ & $\mu\pm\sigma$ \\\hline
Pixels (Wolter \emph{et al.} \cite{wolter2021wavelet}) & 93.71 & $90.90 \pm 2.19$ \\
FFT2D (Wolter \emph{et al.} \cite{wolter2021wavelet}) & 86.03 & $85.52 \pm 0.50$ \\
Packet-ln-db4 (Wolter \emph{et al.} \cite{wolter2021wavelet}) & 96.28 & $95.85 \pm 0.59$ \\ \hline
Samplets-BN-3 & 92.95 & $92.43\pm0.53$ \\
Samplets-BN-4 & 91.19 & $89.3\pm1.47$ \\
FSWT-BN-db3-boundary & 95.72 & $94.9\pm0.47$ \\
FSWT-BN-db4-boundary & 94.35 & $93.63\pm0.7$ \\
FSWT-BN-db3-reflect & 96.36 & $95.13\pm1.63$ \\
FSWT-BN-db4-reflect & \textbf{97.15} & $96.13\pm0.55$ \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Accuracy results for different features with max, mean and standard deviation
on the FFHQ dataset.
We report the results as stated in \cite{wolter2021wavelet},
which were obtained using seeds from 0 to 4.}
\label{tab:res_kfold_ffhq}
\end{table}
\subsection{Training on 20\% of the training dataset}
We also observe that anisotropic features can achieve a
higher accuracy than DCT and wavelet packets when not much training data is available.
To demonstrate this, we only train our model on 20\% of the CelebA training dataset,
and report the accuracy statistics on the entire CelebA test dataset. From Table \ref{tab:res_kfold_celeba_20}, we observe that samplets and the FSWT perform better in all
aspects, especially in terms of the mean and the standard deviation. The fully separable
\textit{db4} with the reflect-padding method almost reaches
the same accuracy as using the entire training dataset.
\vspace{1em}
\begin{table}
\centering
\begin{adjustbox}{width=\linewidth}
\begin{tabular}{lcc}
\toprule
& \multicolumn{2}{c}{incomplete CelebA\%} \\
\cline{2-3}
Method & max\ & $\mu\pm\sigma$ \\\hline
Pixels (Frank \emph{et al.} \cite{frank2020leveraging}) & 96.33 & - \\
DCT (Frank \emph{et al.} \cite{frank2020leveraging}) & 98.47 & - \\
Packet-ln-db4 (Wolter \emph{et al.} \cite{wolter2021wavelet}) & 99.01 & $96.96 \pm 3.47$ \\ \hline
Samplets-BN-3 & 99.49 & $99.35\pm0.09$ \\
Samplets-BN-4 & 98.5 & $97.7\pm1.05$ \\
FSWT-BN-db3-boundary & 99.18 & $98.81\pm0.24$ \\
FSWT-BN-db4-boundary & 99.14 & $98.76\pm0.23$ \\
FSWT-BN-db3-reflect & 99.77 & $99.49\pm0.3$ \\
FSWT-BN-db4-reflect & \textbf{99.89} & $99.68\pm0.16$ \\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Accuracy results for different features with max,
mean and standard deviation on a fifth of the original data.}
\label{tab:res_kfold_celeba_20}
\end{table}
\subsection{Robustness to Perturbations}
\begin{table*}
\centering
\begin{tabular}{lccccccc}
\toprule
& Blur\% & &Cropped\%& & Compression\%& & Noise\%\\ \hline
Pixels (Frank \emph{et al.} \cite{frank2020leveraging}) & 88.23 & & 97.82 & & 78.67 & & 78.18\\
DCT (Frank \emph{et al.} \cite{frank2020leveraging}) & 93.61 & & 98.83 & & 94.83 & & 89.56\\
Packet-ln-db3 (Wolter \emph{et al.} \cite{wolter2021wavelet}) & - & & 95.68 & & 84.73 & & -\\ \hline
Samplets-BN-3 & 93.96 & &98.5 & & 91.71 & & 87.18\\
Samplets-BN-4 & 92.97 & &97.6 & & 89.97 & & 84.46\\
FSWT-BN-db3-boundary & 91.14 & & 98.49 & & 94.23 & & 88.17\\
FSWT-BN-db4-boundary & 90.71 & & 98.35 & & 95.19 & & 86.41\\
FSWT-BN-db3-reflect & \textbf{96.79} & & \textbf{99.2} & & 96.86 & & \textbf{90.88} \\
FSWT-BN-db4-reflect & 96.28 & & 99.13 & & \textbf{97.23} & & 90.8\\
\bottomrule
\end{tabular}
\caption{Maximum accuracy results for different features on the distorted datasets.}
\label{tab:perb_lsun}
\end{table*}
Finally, we test the resilience of different features against 4 common image perturbations:
Gaussian blurring, image crop, JPEG based compression, and addition of Gaussian noise.
We consider the same image perturbation configurations as in \cite{frank2020leveraging}:
Gaussian blurring with a kernel size randomly sampled from \((3, 5, 7, 9)\),
image crop by a percentage randomly selected from \(U(5,20)\),
JPEG based compression with a quality randomly selected from \(U(10, 75)\),
and addition of Gaussian noise whose variance is sampled from the uniform distribution \(U(5,20)\). The modified pixel values are clipped to the range \((0,255)\) and then cast into 8-bit unsigned integers. In the experiment, we apply one of the mentioned perturbations with a probability
of \(\frac{1}{2}\). Here, we conduct the experiments on the LSUN bedroom dataset. Results
are shown in Table \ref{tab:perb_lsun}. All anisotropic transformations perform much better
than the other features including pixels, DCT coefficients, and wavelet packets.
Our features are very robust when images are exposed to random cropping. In contrast,
the accuracy is reduced more noticeably under blurring, JPEG-based compression,
and noise addition; especially in the case of noise addition, the accuracy drops by around 10\%.
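Two of these perturbations can be sketched directly in NumPy; blurring would additionally need a convolution kernel and JPEG compression an image codec such as Pillow, so both are omitted here. The crop removes the sampled percentage of the side length, split evenly between the borders, which is one possible reading of the configuration in \cite{frank2020leveraging}.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, rng):
    """Additive Gaussian noise with variance drawn from U(5, 20),
    clipped to (0, 255) and cast back to 8-bit unsigned integers."""
    var = rng.uniform(5.0, 20.0)
    noisy = img.astype(float) + rng.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def random_crop(img, rng):
    """Crop a percentage drawn from U(5, 20) off the image borders."""
    p = rng.uniform(0.05, 0.20)
    m = int(round(img.shape[0] * p / 2))     # margin per side
    return img[m:img.shape[0] - m, m:img.shape[1] - m]

img = rng.integers(0, 256, size=(128, 128), dtype=np.uint8)
out = add_noise(img, rng)
assert out.dtype == np.uint8 and out.shape == img.shape
assert random_crop(img, rng).shape[0] < 128
```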
\section{Conclusion}
With the results for anisotropic multiresolution analyses presented,
we have added a new aspect to the general understanding
of how GANs truly operate and what traces they tend to leave in their output.
Based on the experiments, the fully separable decomposition and samplets improved
the accuracy results compared to the state-of-the-art on the CelebA and LSUN bedroom datasets.
However, even though samplets and the FSWT with the boundary method do not achieve the state of the art
on the FFHQ dataset, the FSWT with the reflect-padding method performs consistently best among all features.
In terms of training on incomplete datasets and robustness to perturbations,
anisotropic transformations are better than wavelet packets. Moreover, samplets and FSWT
with the boundary method do not require padding. They are thus free from boundary artifacts,
allowing them to transform any input signal while maintaining the same support size.
Furthermore, this capability makes them perfect as a drop-in addition to any network architecture,
which means that, for example, any pixel-based discriminator could instantly and effortlessly
improve its classification performance by just adding a single preprocessing layer without
modifying its architecture.
\section{Introduction}
In many application areas such as bioinformatics, text mining, image retrieval, spectroscopy domains or
social networks, the available electronic data are growing in size and becoming more complex in representation.
In general these data are not given in vectorial form and \emph{domain specific} (dis-)similarity measures
are used, as a replacement or complement to Euclidean measures. These data are also often associated to dedicated
structures which make a representation in terms of Euclidean vectors difficult: biological sequence data, text files, XML data,
trees, graphs, or time series \cite{DBLP:journals/jmlr/ChenGGRC09,mediansom,neuhaus} are of this type.
These data are inherently compositional and a feature representation leads to information loss.
As an alternative, tailored dissimilarity measures such as pairwise alignment functions, kernels for structures
or other domain specific similarity and dissimilarity functions can be used as the interface to the data.
But also for vectorial data,
non-metric proximity measures are common in some disciplines.
An example of this type is the use of divergence measures \cite{Cichocki20101532}
which are very popular for spectral data analysis in chemistry, geo- and medical
sciences \cite{Schleif2010h,Nguyen2013691}, and are not metric in general.
In such cases, machine learning techniques which can deal with pairwise
non-metric similarities or dissimilarities are attractive \cite{Pekalska2005a}.
The paper is organized as follows. First we give a brief review of related work. Subsequently we review common
transformation techniques for dissimilarity data and discuss the influence of non-Euclidean measures, by eigenvalue corrections.
Thereafter we discuss alternative methods for processing small dissimilarity data. We extend this discussion to approximation strategies and
give an alternative derivation of the Nystr\"om approximation together
with a convergence proof, also for indefinite kernels.
This allows us to apply the Nystr\"om technique
to similarities as well as for dissimilarities.
Thus, we can link both strategies effectively to use kernel methods for the analysis of larger (non-)metric dissimilarity data.
Then we show the effectiveness of the proposed approach
by different supervised learning tasks aligned with various error measures. We also discuss differences and commons
to some known approaches supported by experiments on simulated data\footnote{This article contains extended and improved results and is based on \cite{Schleif2013b}}.
\section{Related work}
Similarity and dissimilarity learning, or proximity learning for short, has attracted wide attention over the last years, pioneered by the work of \cite{Goldfarb1984575}, with
major contributions in \cite{Pekalska2005a} and by various other research groups. As will be detailed more formally in the next section,
the learning of proximities is challenging under different aspects: in general there is no underlying vector space, the proximities may be non-Euclidean,
the data may not be metric. As mentioned before a symmetric matrix of metric similarities between objects is essentially a kernel and can be analyzed
by a multitude of kernel methods \cite{Cristianini2004a}. But complex preprocessing steps are necessary, as discussed in the following,
to apply them to non-metric (dis-)similarities. Some recent work discussed non-metric \emph{similarities} in the context of kernel approaches
by means of indefinite kernels, see e.g. \cite{Liwicki20121624,Haasdonk2009a}, resulting in non-convex formulations. Other approaches try to make the
kernel representation positive semi definite (psd) or learn an alternative psd proxy matrix close to the original one
\cite{DBLP:journals/jmlr/ChenGGRC09,DBLP:conf/icml/ChenGR09}, but with high computational costs. For dissimilarity matrices only few
approaches have been published \cite{Lu30082005,DBLP:journals/siammax/BrickellDST08} both with quadratic to cubic computational costs
in the number of samples. In fact, as discussed in the work of \cite{Pekalska2005a},
non-Euclidean proximities can encode important information in the Euclidean
as well as in non-Euclidean parts of space,
represented by the positive and negative eigenvalues
of the corresponding similarity matrix, respectively.
Thus, transformations of similarities to make them psd,
by e.g. truncating the negative eigenvalues,
may be inappropriate \cite{DBLP:conf/sspr/PekalskaDGB04}.
This, however, is very data dependent: for a large number of datasets the negative eigenvalues may actually be noise effects,
while for other data sets the negative eigenvalues carry relevant information \cite{DBLP:phd/de/Laub2004,DBLP:journals/pr/LaubRBM06}.
Often non-psd kernels are still used with kernel algorithms, but on a heuristic basis, since corresponding error bounds are provided
only for psd kernels in general. As we will see in the experiments for strongly non-psd data it may happen that standard kernel methods
fail to converge due to the violation of underlying assumptions.
Another strategy is to use the more general theory of learning with similarity functions proposed in \cite{DBLP:journals/ml/BalcanBS08},
which can be used to identify descriptive or discriminative models based on an available similarity function under some conditions
\cite{DBLP:conf/nips/KarJ12}. A practical approach of the last type for classification problems was provided in \cite{DBLP:conf/nips/KarJ11}.
The model is defined on a fixed randomly chosen set of landmarks per class and a transfer function. Thereby the landmarks are a small set of columns
(or rows) of a kernel matrix which are used to formulate the decision function. The weights of the decision
function are then optimized by standard approaches. The results are however in general substantially worse than those provided in
\cite{DBLP:journals/jmlr/ChenGGRC09} where the datasets are taken from.
In the following we will focus on non-metric proximities and especially \emph{dissimilarities}. Native methods for the analysis of matrix dissimilarity data
have been proposed in \cite{DBLP:journals/neco/GraepelO99,Pekalska2005a,DBLP:journals/jmlr/PekalskaPD01,Schleif2012k}, but are in general based on non-convex optimization schemes
and with quadratic to linear memory and runtime complexity, the latter employing some of the approximation techniques discussed subsequently
and additional heuristics. The strategy to correct non-metric dissimilarities is addressed in the literature only for smaller data sets, and there exist
basically three approaches to make them metric. The first one is to modify the (symmetric) dissimilarity matrix such that all triangle inequalities
in the data are fulfilled \cite{DBLP:journals/siammax/BrickellDST08}, which is called the \emph{metric-nearness} problem. The second strategy
is to learn again a metric proxy matrix \cite{Lu30082005}. Both strategies are quite costly and not used at large scale. The third approach
is based on converting the dissimilarities to similarities, by double centering followed by an eigenvalue correction of the similarities and
back conversion to dissimilarities. These steps scale quadratic and cubic, respectively. We focus on the last approach and provide a runtime
and memory efficient solution for problems at large scale\footnote{With large we refer to a sample size $N \in [1e3-1e6]$. We do not focus
on very big data - which are (not yet) considered in the area of proximity learning.}.
The approximation concepts used in the following are based on the Nystr\"om approximation which was introduced to machine learning by the work of
\cite{DBLP:conf/nips/WilliamsS00}. In \cite{DBLP:journals/pami/FowlkesBCM04} the Nystr\"om approximation was used to simplify the normalized Cut
problem, which can be considered as a clustering problem. This work was however valid for \emph{psd similarity} matrices, only. An extension to
\emph{non-psd} similarities was addressed in \cite{DBLP:conf/eccv/BelongieFCM02}, but the derivation can still lead to an invalid matrix approximation
\footnote{The derivation of $Z$ on p 535 for negative eigenvalues in $\Lambda$ leads to complex values and hence invalid results.
However the strategy proposed in the corresponding paper may have removed the negative eigenvalues in $\Lambda$, due to a rank reduction,
explaining the experimental results. But the cut-off of negative eigenvalues can again be criticized \cite{DBLP:conf/sspr/PekalskaDGB04}}.
Our proposal derives valid eigenvector and eigenvalue estimates also for non-psd proximity matrices.
Large (dis-)similarity data are common in biology, like the famous
\emph{UniProt-/SwissProt} database with $\approx 500,000$ entries
or \emph{GenBank} with $\approx 135,000$ entries, but there are many more (dis-)similarity data, as discussed in the work based
on \cite{Pekalska2005a,DBLP:journals/tsmc/PekalskaD08}. These growing data sets call for effective and generic modeling approaches.
Here we will show how potentially non-metric (dis-)simi\-larities can be effectively processed by standard kernel methods by
correcting the proximity data with linear costs.
The proposed strategies permit the effective application of many kernel methods to these types of data under very mild conditions.
Especially for metric dissimilarities the approach keeps the known guarantees, like generalization bounds (see e.g. \cite{DBLP:journals/jmlr/DrineasM05}).
For non-psd data we give a convergence proof, but the corresponding bounds are still open, yet our experiments are promising.
\section{Transformation techniques for (dis-)similarities}
\label{sec:trafos}
Let $\mathbb{V}$ be a set of objects $\v{v}_j$ defined in some data space, with $|\mathbb{V}|=N$.
We assume, there exists a dissimilarity measure such that $\mathbf{D} \in \mathbb{R}^{N \times N}$ is a dissimilarity matrix
measuring the pairwise dissimilarities $D_{ij}=d(\v{v}_j,\v{v}_i)^2$
between all pairs $(\v{v}_i,\v{v}_j) \in \mathbb{V}$
\footnote{We assume $D_{ij}$ to be squared to simplify the notation.}.
Any reasonable (possibly non-metric) distance measure is sufficient.
We assume zero diagonal $d(\v{v}_i,\v{v}_i)=0$ for all $i$ and symmetry $d(\v{v}_i,\v{v}_j)=d(\v{v}_j,\v{v}_i)$
for all $i,j$.
\subsection{Transformation of dissimilarities and similarities into each other}
Every dissimilarity matrix $\mathbf{D}$ can be seen as a distance matrix
computed in some, not necessarily Euclidean, vector space.
The matrix of the inner products computed in this space
is the corresponding similarity matrix $\mathbf{S}$.
It can be computed from $\mathbf{D}$ directly
by a process referred to as double centering \cite{Pekalska2005a}:
\begin{eqnarray*}
\mathbf{S} &=& -\mathbf{J} \mathbf{D} \mathbf{J}/2 \\
\mathbf{J} &=& (\mathbf{I}-\mathbf{1}\mathbf{1}^\top/N)
\end{eqnarray*}
with identity matrix $\mathbf{I}$ and vector of ones $\mathbf{1}$.
Similarly, it is possible to construct the dissimilarity matrix element-wise
from the matrix of inner products $\mathbf{S}$
\[D_{ij} = S_{ii} + S_{jj} - 2 S_{ij}.\]
As we can see, both matrices $\mathbf{D}$ and $\mathbf{S}$
are closely related to each other and represent the same data,
up to translation, which is lost by the double-centering step.
If the mean estimate used in the double centering step is inaccurate,
the conversion of $\mathbf{D}$ to $\mathbf{S}$ is inaccurate as well, which
can have a negative impact on e.g. a classifier based on $\mathbf{S}$.
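As an illustrative sketch (NumPy notation; function and variable names are ours, not from the original experiments), the double centering step and its element-wise inverse can be written as:

```python
import numpy as np

def double_centering(D):
    """Similarities from squared dissimilarities: S = -J D J / 2."""
    N = D.shape[0]
    J = np.eye(N) - np.ones((N, N)) / N  # centering matrix J = I - 1 1^T / N
    return -0.5 * J @ D @ J

def to_dissimilarities(S):
    """Element-wise back conversion: D_ij = S_ii + S_jj - 2 S_ij."""
    d = np.diag(S)
    return d[:, None] + d[None, :] - 2.0 * S

# round trip on Euclidean data: squared distances -> inner products -> back
X = np.random.RandomState(0).randn(10, 3)
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
S = double_centering(D)
D_back = to_dissimilarities(S)  # equals D up to numerical precision
```

Note the lost translation mentioned above: $\mathbf{S}$ is the Gram matrix of the mean-centered data, so its rows and columns sum to zero.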
The data stems from a Euclidean space,
and therefore the distances $d_{ij}$ are Euclidean,
if and only if $\mathbf{S}$ is positive semi-definite (psd) \cite{Berg1984}.
This is the case, when we observe only non-negative eigenvalues
in the eigenspectrum of the matrix $\mathbf{S}$ associated to $\mathbf{D}$.
Such psd matrices $\mathbf{S}$ are also referred to as kernels
and there are many classification techniques,
which have been proposed to deal with such data,
like the support vector machine (SVM) \cite{vapnik2000nature}.
In the case of non-psd similarities, Mercer kernel based techniques
are no longer guaranteed to work properly
and additional transformations of the data are required
or the methods have to be modified substantially,
affecting the overall runtime efficiency or desired properties like convexity \cite{Ong2004639,Haasdonk2005482}.
To define these transformations we need first
to understand the pseudo-Euclidean space.
\subsection{Pseudo-Euclidean embedding}\label{sec:pseudo_eucl_embedd}
Given a symmetric dissimilarity with zero diagonal,
an embedding of the data in a pseudo-Euclidean vector space
is always possible \cite{Goldfarb1984575}.
\begin{definition}[Pseudo-Euclidean space \cite{Pekalska2005a}]
A pseudo-Euclidean space $\xi=\mathbb{R}^{(p,q)}$
is a real vector space equipped with a non-degenerate,
indefinite inner product $\langle .,.\rangle_\xi$. $\xi$ admits a direct orthogonal decomposition $\xi=\xi_+ \oplus \xi_-$ where
$\xi_+= \mathbb{R}^p$ and $\xi_-= \mathbb{R}^q$ and the inner product is positive definite on $\xi_+$ and negative
definite on $\xi_-$. The space $\xi$ is therefore characterized by the signature $(p,q)$.
\end{definition}
A symmetric bi-linear form in this space is given by
\[
\langle\v{x},\v{y}\rangle_{p,q} =
\sum_{i=1}^p x_i y_i - \sum_{i=p+1}^{p+q} x_i y_i =
\v{x}^\top \mathbf{I}_{p,q}\v{y}
\]
where $\mathbf{I}_{p,q}$ is a diagonal matrix with $p$ entries $1$ and $q$ entries $-1$.
Given the eigendecomposition of a similarity matrix
$\mathbf{S} = \mathbf{U} \mathbf{\Lambda} \mathbf{U}^\top$
we can compute the corresponding vectorial representation $\mathbf{V}$
in the pseudo-Euclidean space by
\begin{equation}
\mathbf{V} = \mathbf{U}_{p+q} \left|\mathbf{\Lambda}_{p+q}\right|^{1/2}
\label{eq:embedding}
\end{equation}
where $\mathbf{\Lambda}_{p+q}$ consists of $p$ positive and $q$ negative
non-zero eigenvalues and $\mathbf{U}_{p+q}$ consists of the corresponding eigenvectors.
It is straightforward to see that
$D_{ij}=\langle\v{v}_i-\v{v}_j,\v{v}_i-\v{v}_j\rangle_{p,q}$
holds for every pair of data points.
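A minimal sketch of this embedding via Equation \eqref{eq:embedding} (our naming, assuming a symmetric $\mathbf{S}$):

```python
import numpy as np

def pseudo_euclidean_embedding(S, tol=1e-10):
    """Embed a symmetric (possibly indefinite) similarity matrix S:
    V = U_{p+q} |Lambda_{p+q}|^{1/2}; also return the signature signs."""
    evals, evecs = np.linalg.eigh(S)
    keep = np.abs(evals) > tol            # drop (near-)zero eigenvalues
    lam, U = evals[keep], evecs[:, keep]
    V = U * np.sqrt(np.abs(lam))          # scale eigenvector columns
    return V, np.sign(lam)                # +1 entries span xi_+, -1 span xi_-

rng = np.random.RandomState(1)
A = rng.randn(8, 8)
S = (A + A.T) / 2                         # symmetric, generally indefinite
V, sig = pseudo_euclidean_embedding(S)
# the indefinite inner product x^T I_{p,q} y reproduces S
S_rec = V @ np.diag(sig) @ V.T
```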
Similarly to the signature $(p, q)$ of a space $\xi$,
we describe our finite data sets, given by a matrix $\mathbf{D}$ or $\mathbf{S}$,
by the extended signature $(p, q, N-p-q)$
which represents the number of positive eigenvalues $p$,
the number of negative eigenvalues $q$
and the number of the remaining zero eigenvalues
in the similarity matrix.
\subsection{Dealing with pseudo-Euclidean data}
\label{sec:trafos_corr}
In \cite{DBLP:journals/jmlr/ChenGGRC09} different strategies were analyzed to
obtain valid kernel matrices for a given similarity matrix $\mathbf{S}$,
most popular are: \emph{flipping, clipping, vector-representation, shift correction}.
The underlying idea is to remove negative eigenvalues
in the eigenspectrum of the matrix $\mathbf{S}$.
One may also try to learn an alternative psd kernel representation with maximum alignment to the original non-psd kernel matrix \cite{DBLP:journals/jmlr/ChenGGRC09,DBLP:conf/icml/ChenGR09,DBLP:journals/jmlr/LiZY09}
or split the proximities based on positive and negative eigenvalues as discussed in
\cite{Pekalska2005a,Haasdonk2009a}.
The \emph{flip}-operation takes the absolute eigenvalues of the matrix $\mathbf{S}$.
This corresponds to ignoring the separation of the space $\xi$
into $\xi_+$ and $\xi_-$ and instead computing in the space $\mathbb{R}^{p+q}$.
This approach preserves the variation in the data
and can be reverted for some techniques after training
by simply reintroducing the matrix $\mathbf{I}_{p,q}$ into the inner product.
The \emph{shift}-operation increases all eigenvalues by the absolute
value of the minimal eigenvalue.
This approach performs a non-linear transformation in the pseudo-Euclidean space,
emphasizing $\xi_+$ and nearly eliminating $\xi_-$.
The \emph{clip}-operation sets all negative eigenvalues to zero.
This approach corresponds to ignoring the space $\xi_-$ completely.
As discussed in \cite{DBLP:conf/sspr/PekalskaDGB04}, depending on the data set,
this space could carry important information
and removing it would make some tasks, as e.g. classification, impossible.
After the transformation of the eigenvalues,
the corrected matrix $\mathbf{S}^*$ is obtained as
$\mathbf{S}^* = \mathbf{U} \mathbf{\Lambda}^* \mathbf{U}^\top$,
with $\mathbf{\Lambda}^*$ as
the modified eigenvalue matrix using one of the above operations.
The obtained matrix $\mathbf{S}^*$ can now be considered
as a valid kernel matrix $\mathbf{K}$
and kernel based approaches can be used to operate on the data.
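The three spectrum corrections can be sketched as follows (a plain NumPy illustration under our naming, not an optimized implementation):

```python
import numpy as np

def correct_eigenvalues(S, method):
    """S* = U Lambda* U^T with flipped, clipped or shifted eigenvalues."""
    evals, U = np.linalg.eigh(S)
    if method == "flip":      # take absolute eigenvalues
        evals = np.abs(evals)
    elif method == "clip":    # set negative eigenvalues to zero
        evals = np.maximum(evals, 0.0)
    elif method == "shift":   # add |lambda_min| to all eigenvalues
        evals = evals - min(evals.min(), 0.0)
    return U @ np.diag(evals) @ U.T

rng = np.random.RandomState(2)
A = rng.randn(10, 10)
S = (A + A.T) / 2                                # indefinite toy similarities
for method in ("flip", "clip", "shift"):
    K = correct_eigenvalues(S, method)
    # the corrected matrix is psd up to round-off
    assert np.linalg.eigvalsh(K).min() > -1e-10
```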
The analysis in \cite{DBLP:conf/sspr/PekalskaDGB04} indicates that for non-Euclidean dissimilarities some corrections like above
may change the data representation such that information loss occurs.
This however is not yet systematically explored and very data dependent,
best supported by domain knowledge about the data or the used proximity measure.
Alternatively, techniques have been introduced which directly deal with possibly non-metric dissimilarities.
Using the Equation \eqref{eq:embedding} the data can be embedded into
the pseudo-Euclidean space.
Classical vectorial machine learning algorithms can then be adapted
to operate directly in the pseudo-Euclidean space.
This can be achieved by e.g. defining a positive definite
inner product in the space $\xi$.
Variations of this approach are also possible
whereby an explicit embedding is not necessary
and the training can be done implicitly,
based on the dissimilarity matrix only \cite{Pekalska2005a}.
A further strategy is to employ so called relational or proximity learning methods as discussed e.g. in \cite{Schleif2012k}.
The underlying models consist of prototypes,
which are implicitly defined as a weighted linear combination of training points:
\begin{equation*}
\v{w}_j=\sum_i\alpha_{ji}\v{v}_i\mbox{ with } \sum_i\alpha_{ji}=1\,. \qquad \mathbb{W}=\{\v{w}_1, \hdots, \v{w}_c\}
\end{equation*}
But this explicit representation is not necessary, because the algorithms are based only
on a specific form of distance calculations using the matrix $\mathbf{D}$;
the potentially unknown underlying vector space is not needed.
The basic idea is an implicit computation of distances $d(\cdot,\cdot)$
during the model calculation based on the dissimilarity matrix $\mathbf{D}$ using weights $\alpha$:
\begin{equation}
d(\v{v}_i,\v{w}_j)^2=
[\mathbf{D}\cdot\alpha_j]_i-\frac{1}{2}\cdot\alpha_j^\top \mathbf{D}\alpha_j
\label{eq:rel_distance}.
\end{equation}
Details can be found in \cite{Schleif2012k}.
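The implicit distance computation of Equation \eqref{eq:rel_distance} can be sketched and checked against explicit prototypes in the Euclidean case (a toy setup of ours):

```python
import numpy as np

def relational_distances(D, alpha):
    """d(v_i, w_j)^2 = [D alpha_j]_i - 0.5 * alpha_j^T D alpha_j,
    for prototypes w_j = sum_i alpha_ji v_i with sum_i alpha_ji = 1.
    D: (N, N) squared dissimilarities, alpha: (n_prototypes, N)."""
    first = D @ alpha.T                                       # [D alpha_j]_i
    second = 0.5 * np.einsum('ji,ik,jk->j', alpha, D, alpha)  # quadratic term
    return first - second[None, :]

rng = np.random.RandomState(3)
X = rng.randn(12, 4)
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared Euclidean
alpha = rng.rand(2, 12)
alpha /= alpha.sum(axis=1, keepdims=True)            # rows sum to one
W = alpha @ X                                        # explicit prototypes
D_explicit = ((X[:, None, :] - W[None, :, :]) ** 2).sum(-1)
D_implicit = relational_distances(D, alpha)          # identical result
```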
As shown e.g. in \cite{DBLP:journals/neco/HammerH10} the mentioned methods
do not rely on a metric dissimilarity matrix $\mathbf{D}$,
but it is sufficient to have a symmetric $\mathbf{D}$ in a pseudo-Euclidean space,
with constant self-dissimilarities.
The \emph{dissimilarity space} approach is another technique
which does not embed the data into the pseudo-Euclidean space \cite{Pekalska2005a}.
Instead, one selects a representative set of points $\v{w}_i \in \mathbb{W}$
and considers for every point the dissimilarities to the set $\mathbb{W}$
as features, resulting in a vectorial representation
$\mathbf{x}_i=[d(\v{v}_i,\v{w}_1),d(\v{v}_i,\v{w}_2),d(\v{v}_i,\v{w}_3),...]^\top$.
This corresponds to an embedding into an Euclidean space
with the dimensionality equal to the size of the selected set of points.
These vectors can then be processed using any vectorial approaches.
A drawback of this representation is the
change of the original data representation,
which may disturb the structure of the data.
It also relies heavily on a good representative set,
since highly correlated sampled points generate similar features
and the correlation information is lost in the embedded space.
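A sketch of the dissimilarity space construction (variable names are ours):

```python
import numpy as np

def dissimilarity_space(D, representative_idx):
    """x_i = [d(v_i, w_1), d(v_i, w_2), ...]: each object is described
    by its dissimilarities to the representative set."""
    return D[:, representative_idx]

rng = np.random.RandomState(4)
X = rng.randn(50, 5)
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared dissimilarities
reps = rng.choice(50, size=5, replace=False)         # representative set
features = dissimilarity_space(D, reps)              # 50 x 5 vectorial data
```

The resulting vectors can be fed into any standard vectorial learner; the quality hinges on the choice of the representative set, as noted above.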
\subsection{Complexity}
The methods discussed before are suitable for data analysis based on similarity or dissimilarity data
where the number of samples $N$ is rather small, e.g. on the order of some thousand samples.
For large $N$, most of the techniques discussed above become infeasible.
All techniques which use the full (dis-)similarity matrix,
have $\mathcal{O}(N^2)$ memory complexity
and thus at least $\mathcal{O}(N^2)$ computational complexity.
Double centering, if done naively, is cubic,
although after simplifications it can be computed in $\mathcal{O}(N^2)$.
Transformation from $\mathbf{S}$ to $\mathbf{D}$ can be done element-wise,
but if the full matrix is required it is still quadratic.
All the techniques relying on the full eigenvalue decomposition,
e.g. for eigenvalue correction or for explicit pseudo-Euclidean embedding,
have an $\mathcal{O}(N^3)$ computational complexity.
The only exception is the dissimilarity space approach.
If it is possible to select a good representative set of small size,
one can achieve linear computational and memory complexity.
The technique becomes quadratic as well
if all data points are selected as the representative set.
Other than this, efficient approaches have been proposed before only for \emph{metric similarity data} (psd kernels), e.g.
the Core-Vector Machine (CVM) \cite{DBLP:conf/icml/TsangKK07} or low-rank linearized SVM \cite{DBLP:journals/jmlr/ZhangLWM12}
for classification problems or an approximated kernel k-means algorithm for clustering \cite{DBLP:conf/kdd/ChittaJHJ11}.
A schematic view of the relations between $\mathbf{S}$ and $\mathbf{D}$ and its transformations
is shown in Figure \ref{fig:simdis_schema}, including the complexity of the transformations.
Some of the steps can be done more efficiently by known methods,
but with additional constraints or in atypical settings as discussed in the following.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{schema_dissim}
\caption{Schema of the relation between similarities and dissimilarities.}
\label{fig:simdis_schema}
\end{figure}
In the following, we discuss techniques to deal with larger sample sets
for potentially non-metric similarity and especially dissimilarity data.
We show how standard kernel methods can be used,
assuming that for non-metric data,
the necessary transformations have no severe negative
influence on the data accuracy. Basically also core-set techniques \cite{DBLP:journals/comgeo/BadoiuC08}
become accessible for large potentially non-metric (dis-)similarity data in this way, but at the cost of multiple additional intermediate steps.
In particular, we investigate the Nystr\"om approximation technique,
as low rank linear time approximation technique;
we will show its suitability and linear time complexity
for similarities as well as dissimilarities,
applied on the raw data as well as for the eigenvalue correction.
\section{Nystr\"om approximation}
\label{sec:ny_approx}
As shown in \cite{DBLP:conf/nips/WilliamsS00}, given a symmetric positive semi-definite kernel matrix $\mathbf{K}$,
it is possible to create a low rank approximation of this matrix
using the Nystr\"om technique \cite{ny_orig}.
The idea is to sample $m$ points, the so called landmarks,
and to analyze the small $m \times m$ kernel matrix $\mathbf{K}_{m,m}$
constructed from the landmarks.
The eigenvalues and eigenvectors from the matrix $\mathbf{K}_{m,m}$
can be used to approximate the eigenvalues and eigenvectors
of the original matrix $\mathbf{K}$.
This allows to represent the complete matrix in terms
of a linear part of the full matrix only.
The final approximation takes the simple form
\begin{equation}
\mathbf{\hat{K}}=\mathbf{K}_{N,m} \mathbf{K}^{-1}_{m,m} \mathbf{K}_{m,N},
\label{eq:Ny_equation}
\end{equation}
where $\mathbf{K}_{N,m}$ is the kernel matrix between $N$ data points and $m$ landmarks
and $\mathbf{K}^{-1}_{m,m}$ is the Moore-Penrose pseudoinverse of the small matrix.
This technique has been proposed
in the context of Mercer kernel methods in \cite{DBLP:conf/nips/WilliamsS00}
with related proofs and bounds given in \cite{DBLP:journals/jmlr/DrineasM05}
and very recent results in \cite{DBLP:journals/corr/abs-1303-1849}.
It can be applied in conjunction with algorithms using the kernel matrix
in multiplications with other matrices or vectors only.
Due to the explicit low rank form as in Equation \eqref{eq:Ny_equation}
it is possible to select the order of multiplication,
thus reducing the complexity
from quadratic in the number of data points to a linear one.
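A minimal sketch of Equation \eqref{eq:Ny_equation} and of the linear-time matrix-vector multiplication (our naming):

```python
import numpy as np

def nystroem_matvec(K_nm, K_mm):
    """Return x -> K_nm pinv(K_mm) K_nm^T x without forming the N x N matrix."""
    K_mm_pinv = np.linalg.pinv(K_mm)
    def matvec(x):
        # right-to-left evaluation: O(m N) instead of O(N^2)
        return K_nm @ (K_mm_pinv @ (K_nm.T @ x))
    return matvec

rng = np.random.RandomState(5)
X = rng.randn(100, 3)
K = X @ X.T                                    # psd kernel of exact rank 3
idx = rng.choice(100, size=10, replace=False)  # m = 10 landmarks
matvec = nystroem_matvec(K[:, idx], K[np.ix_(idx, idx)])
x = rng.randn(100)
# rank(K) <= m, so the approximation is exact here: matvec(x) == K @ x
```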
\subsection{Eigenvalue decomposition of a Nystr\"om approximated matrix}\label{sec:eval_decomp}
In some applications it might be useful to compute the exact eigenvalue decomposition
of the approximated matrix $\mathbf{\hat{K}}$,
e.g. to compute the pseudo-inverse of this matrix.
We now show how this decomposition can be computed in linear time
\footnote{A similar strategy was used to construct large eigenmaps from \emph{psd} similarity matrices
as recently shown \cite{JMLR:v14:talwalkar13a} but our approach applies also to non-psd matrices.}.
The psd matrix approximated by Equation \eqref{eq:Ny_equation}
can be written as
\begin{align*}
\mathbf{\hat{K}} & = \mathbf{K}_{N,m} \mathbf{K}^{-1}_{m,m} \mathbf{K}_{m,N}\\
& = \mathbf{K}_{N,m} \mathbf{U} \mathbf{\Lambda}^{-1}
\mathbf{U}^\top \mathbf{K}_{N,m}^\top\\
& = \mathbf{B} \mathbf{B}^\top,
\end{align*}
where we defined $\mathbf{B}=\mathbf{K}_{N,m} \mathbf{U} \mathbf{\Lambda}^{-1/2}$
with $\mathbf{U}$ and $\mathbf{\Lambda}$ being the eigenvectors and eigenvalues
of $\mathbf{K}_{m,m}$, respectively.
Further it follows
\begin{align*}
\mathbf{\hat{K}}^2 & = \mathbf{B} \mathbf{B}^\top \mathbf{B} \mathbf{B}^\top\\
& = \mathbf{B} \mathbf{V} \mathbf{A} \mathbf{V}^\top \mathbf{B}^\top,
\end{align*}
where $\mathbf{V}$ are the orthonormal eigenvectors of the matrix
$\mathbf{B}^\top \mathbf{B}$
and $\mathbf{A}$ the matrix of its eigenvalues.
The corresponding eigenequation can be written as
$
\mathbf{B}^\top \mathbf{B} \mathbf{v} = a \mathbf{v}.
$
Multiplying it with $\mathbf{B}$ from left
we get the eigenequation for $\mathbf{\hat{K}}$
\[
\mathbf{B} \mathbf{B}^\top (\mathbf{B} \mathbf{v})
= a \left( \mathbf{B} \mathbf{v} \right).
\]
It is clear that $\mathbf{A}$ must be the matrix of eigenvalues of $\mathbf{\hat{K}}$.
The matrix $\mathbf{B} \mathbf{V}$ contains the corresponding eigenvectors,
which are orthogonal but not necessarily orthonormal.
The normalization can be computed from the decomposition
\begin{align*}
\mathbf{\hat{K}} & = \mathbf{B} \mathbf{V} \mathbf{V}^\top \mathbf{B}^\top\\
& = \mathbf{B} \mathbf{V} \mathbf{A}^{-1/2} \mathbf{A}
\mathbf{A}^{-1/2} \mathbf{V}^\top \mathbf{B}^\top\\
& = \mathbf{C} \mathbf{A} \mathbf{C}^\top,
\end{align*}
where we defined $\mathbf{C} = \mathbf{B} \mathbf{V} \mathbf{A}^{-1/2}$
as the matrix of orthonormal eigenvectors of $\mathbf{\hat{K}}$.
Thus,
$
\mathbf{\hat{K}} = \mathbf{C} \mathbf{A} \mathbf{C}^\top
$
is the orthonormal eigendecomposition of $\mathbf{\hat{K}}$.
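The derivation above translates directly into code; a sketch for the psd case (our naming) that touches only $N \times m$ and $m \times m$ matrices:

```python
import numpy as np

def nystroem_eig(K_nm, K_mm, tol=1e-10):
    """Orthonormal eigendecomposition Khat = C A C^T of the Nystroem
    approximation, computed in linear time in N (psd case)."""
    lam, U = np.linalg.eigh(K_mm)
    pos = lam > tol * lam.max()                # numerically nonzero part
    B = K_nm @ U[:, pos] / np.sqrt(lam[pos])   # B = K_nm U Lambda^{-1/2}
    A, V = np.linalg.eigh(B.T @ B)             # small m x m eigenproblem
    nz = A > tol * A.max()
    C = B @ V[:, nz] / np.sqrt(A[nz])          # C = B V A^{-1/2}
    return A[nz], C

rng = np.random.RandomState(6)
X = rng.randn(200, 4)
K = X @ X.T                                    # rank-4 psd kernel
idx = rng.choice(200, size=8, replace=False)
A, C = nystroem_eig(K[:, idx], K[np.ix_(idx, idx)])
K_hat = C @ np.diag(A) @ C.T                   # exact here since rank <= m
```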
\subsection{Convergence proof}
The Nystr\"om approximation was proposed for the psd matrices
and thus, it was not accessible for distance matrices
and similarities coming from non-psd kernel functions.
First developments to apply the Nystr\"om technique to indefinite matrices
were presented in \cite{nips10gismokham,Schleif2012k}.
Although supported by experiments, a formal proof was lacking.
Here we present a proof that the Nystr\"om approximated,
possibly indefinite, kernel converges in the operator norm to the true underlying kernel,
as long as the number of landmarks is large enough. Generalization bounds will be a subject of future work.
Let $K$ be an integral operator whose kernel
$k\in L^2(\Omega^2)$ is a continuous symmetric function (not necessarily psd, i.e. it need not induce a reproducing kernel Hilbert space):
\[
K f(x) := \int_\Omega k(x,y)f(y) d\mu(y).
\]
Without loss of generality let $\Omega$ be an interval $[a,b]\subset \mathbb{R}$
with measure 1.
Then $K$ is a compact operator in a Hilbert space $\mathfrak{H}$
\[
\|K\|_{L^2 \to L^2} :=
\sup_{\|f\|\leq 1} \|K f\|_{L^2}
\leq \|k\|_{L_2},
\]
with the operator norm $\|.\|_{L^2 \to L^2}$ and the $L_2$-norm $\|.\|_{L_2}$.
We define a measurement operator $T_m$
which divides the space $\Omega$ into $m$ spaces $\Omega_j$,
each with the measure $1/m$.
It converts functions $f \in \mathfrak{H}$
to functions $f_m \in \mathfrak{H}_m$ which are piece-wise constant on each $\Omega_j$.
The corresponding integral kernel of $T_m$ is defined as:
\[
t_m(x,y):=
\begin{cases}
m & x, y \in \Omega_j \text{ for any } j \\
0 & \text{else}.
\end{cases}
\]
It follows for an $x \in \Omega_j$ that
\[
T_m f(x) = \int_\Omega t_m(x,y) f(y) d\mu(y) = m
\int_{\Omega_j} f(y) d\mu(y),
\]
where we can see that the right hand side is
the mean value of $f(y)$ on $\Omega_j$
and thus constant for all $x \in \Omega_j$.
This way, the operator $T_m$ allows us to approximate a function $f(x)$
by measuring it at $m$ places $f(x_j)$ and assuming that it is constant in between.
Measuring the operator $K$ we get $K_m := T_m \circ K$ with the integral kernel
\begin{align*}
\int_\Omega t_m(x,z) k(z,y) d\mu(z)
& = \sum_{j=1}^m \int_{\Omega_j} t_m(x,z) k(z,y) d\mu(z) \\
& = \sum_{j=1}^m 1_{\Omega_j}(x) m \int_{\Omega_j} k(z,y) d\mu(z) \\
& = \sum_{j=1}^m 1_{\Omega_j}(x) k_j(y) \\
& =: k_m(x,y),
\end{align*}
where $1_{\Omega_j}(x)$ is the indicator function which is $1$ if $x \in \Omega_j$
and $0$ elsewhere and we defined
$k_j=m \int_{\Omega_j} k(z,y) d\mu(z)$.
\noindent We can now analyze the convergence behavior of $K_m$ to $K$.
$\forall x \in \Omega_j$ and $\forall y \in \Omega$ we get
\begin{align*}
& \left| k_m(x,y) - k(x,y) \right| =\\
& = \left| m \int_{\Omega_j} k(z,y) d\mu(z)
- m \int_{\Omega_j} k(x,y) d\mu(z) \right| \\
& \leq m \int_{\Omega_j} \left| k(z,y) - k(x,y) \right| \, d\mu(z).
\end{align*}
Since $k$ is continuous on the interval $[a,b]$, it is uniformly continuous
and we can bound
\begin{align*}
\left| k(z,y) - k(x,y) \right|
& \leq \mathcal{D}(\Omega_j)
:= \sup_{\substack{x_1, \, x_2 \in \Omega_j\\ y \in \Omega}}
\left| k(x_1,y) - k(x_2,y) \right| \\
& \leq \delta_m := \max_j \mathcal{D}(\Omega_j)
\end{align*}
and therefore
\[
\sup_{\substack{x \in \Omega \\ y \in \Omega}}\left| k_m(x,y) - k(x,y) \right|
\leq \delta_m.
\]
For $m \to \infty$ the $\Omega_j$ become smaller and $\delta_m \to 0$,
thus the kernel $k_m$ converges to $k$.
For the operators $K$ and $K_m$ it follows
\[
\| K_m - K \|_{L^2 \to L^2} \to 0
\]
which shows that $K_m$ converges to $K$ in the operator norm,
if the number of measurements goes to infinity.
\noindent Applying $K_m$ on $f$ results in
\begin{align*}
K_m f(x)
& = \int_{\Omega} k_m(x,y) f(y) d\mu(y) \\
& = \sum_{j=1}^m 1_{\Omega_j}(x) \int_{\Omega} k_j(y) f(y) d\mu(y) \\
& = \sum_{j=1}^m a_j 1_{\Omega_j}(x)
\end{align*}
where $a_j:=\int_{\Omega} k_j(y) f(y) d\mu(y)$ is a constant with respect to $x$.
It is clear that $K_m f$ is always in the linear hull of
$1_{\Omega_1}(x),...,1_{\Omega_m}(x)$ and the image of the operator
$\Im K_m=\operatorname{span}\{1_{\Omega_1}(x),...,1_{\Omega_m}(x)\}$ is $m$ dimensional.
Since the coefficients $a_j$ are finite, $K_m$ is a compact operator
and because the sequence of $K_m$ converges to $K$,
we see that $K$ is in fact a compact operator.
According to the "Perturbation of bounded operators" theorem \cite{DBLP:conf/colt/LuxburgBB04},
if a sequence $K_m$ converges to $K$ in the operator norm,
then for an isolated eigenvalue $\lambda$ of $K$
there exist isolated eigenvalues $\lambda_m$ of $K_m$
such that $\lambda_m \to \lambda$
and the corresponding spectral projections converge in operator norm.
This theorem allows us to estimate the eigenvalues and eigenfunctions of
the unknown operator $K$ by computing the eigendecomposition
of the measured operator $K_m$.
The eigenfunctions and eigenvalues of the operator $K_m$
are given as the solutions of the eigenequation
\begin{equation}
K_m f=\lambda f.
\label{eig_eq}
\end{equation}
We know that the left hand side of the equation is in the image of $K_m$
and therefore an eigenfunction $f$ must have the form
\begin{equation}
f(x)=\sum_{i=1}^m f_i 1_{\Omega_i}(x)
\label{eig_func}
\end{equation}
where $f_i$ are constants.
For the left side of the Equation \eqref{eig_eq} it follows
\begin{align*}
K_m f(x)
& = \int_{\Omega} \sum_{j=1}^m 1_{\Omega_j}(x) k_j(y) f(y) d\mu(y) \\
& = \sum_{j=1}^m 1_{\Omega_j}(x)
\int_{\Omega} k_j(y) \sum_{i=1}^m f_i 1_{\Omega_i}(y) d\mu(y) \\
& = \sum_{j=1}^m \sum_{i=1}^m 1_{\Omega_j}(x) f_i
\int_{\Omega_i} k_j(y) d\mu(y) \\
& = \sum_{j=1}^m \sum_{i=1}^m 1_{\Omega_j}(x) \frac{1}{m}
f_i k_{ji}
\end{align*}
and we defined
$k_{ji}=m \int_{\Omega_i} k_j(y) d\mu(y)
=m^2 \int_{\Omega_i} \int_{\Omega_j} k(y,z) d\mu(y) d\mu(z)$
which represents our measurement of the kernel $k$ around the $i$-th and $j$-th points.
If we combine the above equation with the Equation \eqref{eig_eq} for an $x \in \Omega_j$
we get
\[
\sum_{i=1}^m \frac{1}{m} k_{ji} f_i = \lambda f_j.
\label{eig_eq_j}
\]
This equation is a weighted eigenequation and we can turn it into a regular eigenequation
by defining $\tilde{\lambda}=m\lambda$
and $\tilde{f}_i = f_i/\sqrt{m}$.
Thus, we get
\[
\sum_{i=1}^m k_{ji} \tilde{f}_i = \tilde{\lambda} \tilde{f}_j.
\]
Hence $\tilde{\lambda}$ and $\tilde{f}$ are the eigenvalues and eigenvectors
of matrix $(k_{ji})$.
Note that the $f_i$ are scaled to guarantee the normalization of $\tilde{f}$
\begin{align*}
1
& = \int_{\Omega} f(x) f(x) d\mu(x) \\
& = \int_{\Omega} \sum_{i=1}^m f_i^2 1_{\Omega_i}(x) d\mu(x)\\
& = \sum_{i=1}^m f_i^2 \int_{\Omega_i} d\mu(x)\\
& = \sum_{i=1}^m \left(\frac{f_i}{\sqrt{m}}\right)^2.
\end{align*}
The eigendecomposition takes the form
\[
(k_{ji}) = \sum_{l=1}^m \tilde{\lambda}^l \tilde{f}^l (\tilde{f}^l)^\top
\]
and for a single measured element we get
\[
k_{ij} = \sum_{l=1}^m \tilde{\lambda}^l \tilde{f}^l_i \tilde{f}^l_j.
\]
\noindent According to the spectral theorem \cite{werner}
the eigendecomposition of $k$ is
\[
k(x,y) = \sum_{l=1}^\infty \gamma^l \phi^l(x) \phi^l(y)
\]
where $\gamma^l$ and $\phi^l$ are the eigenvalues and eigenfunctions, respectively.
Since $K$ is a compact operator, $\gamma^l$ is a null sequence.
Thus, the sequence of operators $\tilde{K}_m$ with the kernel
$\tilde{k}_m(x,y) = \sum_{l=1}^m \gamma^l \phi^l(x) \phi^l(y)$
converges to $K$ in the operator norm for $m \to \infty$ \cite{werner}
and we can approximate
\begin{align*}
k(x,y) & \approx
\sum_{l=1}^m \gamma^l \phi^l(x) \phi^l(y) \\
& = \sum_{l=1}^m
\int_{\Omega} k(x,z) \phi^l(z) d\mu(z)
\frac{1}{\gamma^l}
\int_{\Omega} k(y,z') \phi^l(z') d\mu(z'),
\end{align*}
where we assume that none of the $\gamma^l$ are zero.
Further, due to the "Perturbation of bounded operators" theorem,
the eigenvalues $\lambda^l$ converge to $\gamma^l$
and the corresponding eigenspaces converge in the operator norm
and we can approximate
\[
k(x,y) \approx \sum_{l=1}^m
\int_{\Omega} k(x,z) f^l(z) d\mu(z)
\frac{1}{\lambda^l}
\int_{\Omega} k(y,z') f^l(z') d\mu(z').
\]
Taking into account the Equation \eqref{eig_func} the above formula turns into
\begin{align*}
k(x,y) \approx & \sum_{l=1}^m
\int_{\Omega} k(x,z) \sum_{i=1}^m f^l_i 1_{\Omega_i}(z) d\mu(z)\\
\cdot \; &
\frac{1}{\lambda^l}
\int_{\Omega} k(y,z') \sum_{j=1}^m f^l_j 1_{\Omega_j}(z') d\mu(z') \\
= & \sum_{l=1}^m
\sum_{i=1}^m f^l_i \int_{\Omega_i} k(x,z) d\mu(z)
\frac{1}{\lambda^l}
\sum_{j=1}^m f^l_j \int_{\Omega_j} k(y,z') d\mu(z') \\
= & \sum_{i=1}^m \sum_{j=1}^m
k_i(x)
\left(
\sum_{l=1}^m \frac{f^l_i}{\sqrt{m}} \frac{1}{m \lambda^l} \frac{f^l_j}{\sqrt{m}}
\right)
k_j(y)\\
= & \sum_{i=1}^m \sum_{j=1}^m
k_i(x)
\left( k^{-1} \right)_{ij}
k_j(y),
\end{align*}
where $k^{-1}$ is the pseudo-inverse of the matrix consisting of elements $k_{ij}$.
It is now clear that, after measuring $k_i(x)$ at $N$ places
and writing the above formula in matrix form,
we recover the original Nystr\"om approximation of Equation \eqref{eq:Ny_equation}.
Note that the approximation of $k(x,y)$ consists of two approximations.
The first one is the approximation of the rank of the matrix and the second one
is the approximation of the eigenfunctions and eigenvalues.
Although we do not know the exact eigenvalues and eigenfunctions
of the kernel $k(x,y)$, the approximation is exact if the kernel has rank $\le m$
\footnote{If the true rank is larger than $m$, the eigenvalues do not match the
true ones and errors occur -- as with any other approach. However, the presented
approach can also keep negative eigenvalues, given they are within the top $m$ eigenvalues.}.
This fact is known for the Nystr\"om approximation and can be validated
by simple matrix transformations. The reason is that, if the rank of a kernel is $m$,
it can be represented as an inner product in a pseudo-Euclidean space,
and $m$ linearly independent landmarks build a basis which spans this space.
The position of any new point $x$ is then fully determined by the $k(x,x_i)$,
with $x_i$ being the landmarks, so that all inner products between any points
are determined and the matrix $\mathbf{K}$ can be computed precisely.
The Nystr\"om approximation involves the computation of $\mathbf{K}_{N,m}$
and inversion of $\mathbf{K}_{m,m}$ with the corresponding complexities of
$\mathcal{O}(mN)$ and $\mathcal{O}(m^3)$, respectively.
The multiplication of both matrices as well as multiplication
of the approximated matrix with other matrices,
required for further processing and training,
has the complexity of $\mathcal{O}(m^2 N)$.
Thus, the overall complexity of the Nystr\"om technique
is given by $\mathcal{O}(m^2N)$.
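As a small numerical illustration of the approximation in Equation \eqref{eq:Ny_equation} and of its exactness for matrices of rank $\le m$, consider the following sketch (a toy setup with a synthetic rank-$m$ Gram matrix; all variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a psd kernel matrix of rank m, so the approximation is exact.
N, m = 200, 5
X = rng.standard_normal((N, m))
K = X @ X.T                               # rank-m Gram matrix

# Choose m landmarks and form the only two sub-matrices that are needed.
idx = rng.choice(N, size=m, replace=False)
K_Nm = K[:, idx]                          # N x m, O(mN) entries
K_mm = K[np.ix_(idx, idx)]                # m x m

# Nystroem approximation: K ~ K_Nm K_mm^{-1} K_Nm^T (pseudo-inverse for safety).
K_hat = K_Nm @ np.linalg.pinv(K_mm) @ K_Nm.T

print(np.allclose(K, K_hat))              # exact, since rank(K) <= m
```

With fewer than $\operatorname{rank}(\mathbf{K})$ landmarks the reconstruction becomes approximate, in line with the discussion above.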
\section{Transformations of (dis-)similarities with linear costs}
The Nystr\"om approximation was proposed originally
to deal with large psd similarity matrices
with kernel approaches in mind by \cite{DBLP:conf/nips/WilliamsS00}.
To apply these techniques to indefinite similarity and dissimilarity matrices,
additional transformations, as discussed in section \ref{sec:trafos}, are required.
Unfortunately, these transformations have quadratic or even cubic time complexity,
negating the advantage gained by the Nystr\"om approximation.
Since we can now apply the Nystr\"om technique on arbitrary symmetric matrices,
it is not only possible to approximate the dissimilarities directly,
but also to perform the transformations in linear time.
Thus, we can apply relational and kernel techniques
on similarities and dissimilarities including eigenvalue corrections if necessary.
In this section we will elaborate
how the transformations discussed in section \ref{sec:trafos}
can be done in linear time if applied to the Nystr\"om-approximated matrices.
The updated costs are shown in Figure \ref{fig:simdis_schema_ny}.
\begin{figure*}
\centering
\includegraphics[width=0.7\columnwidth]{schema_dissim_nystroem}
\caption{Updated schema from Figure \ref{fig:simdis_schema}
using the discussed approximation.
The costs are now substantially smaller, provided $m \ll N$.
}
\label{fig:simdis_schema_ny}
\end{figure*}
\subsection{Transformation of dissimilarities and similarities into each other}
Given a dissimilarity matrix $\mathbf{D}$,
there are two ways to construct the approximated matrix $\mathbf{\hat{S}}$.
First, we can transform $\mathbf{D}$ to $\mathbf{S}$ using double centering
and then apply Nystr\"om approximation to $\mathbf{S}$.
Obviously, this approach has quadratic time complexity
due to the double centering step.
Second, we can approximate $\mathbf{D}$ to $\mathbf{\hat{D}}$ first
and then apply double centering.
As we will show in the following,
this transformation requires only linear computational time.
As mentioned before, from the dissimilarity matrix $\mathbf{D}$
we can compute the corresponding similarity matrix using double centering.
This process is denoted by $\mathbf{S(D)}$ in the following:
\begin{equation*}\label{eq:double_centering}
\mathbf{S(D)} = -\mathbf{J} \mathbf{D} \mathbf{J}/2 \\
\end{equation*}
where $\mathbf{J}=(\mathbf{I}-\mathbf{1}\mathbf{1}^\top/N)$
with identity matrix $\mathbf{I}$ and vector of ones $\mathbf{1}$.
Expanding the right side of the equation we get
\begin{eqnarray*}
\mathbf{S(D)} &=& -\frac{1}{2} \mathbf{J} \mathbf{D} \mathbf{J} \nonumber \\
&=& -\frac{1}{2} \left ( \left (\mathbf{I} - \frac{1}{N} \mathbf{1} \mathbf{1}^\top \right )
\mathbf{D}
\left (\mathbf{I} - \frac{1}{N} \mathbf{1} \mathbf{1}^\top \right ) \right )
\nonumber \\
&=& -\frac{1}{2} \left (\mathbf{D}
-\frac{1}{N} \mathbf{D} \mathbf{1} \mathbf{1}^\top
-\frac{1}{N} \mathbf{1} \mathbf{1}^\top \mathbf{D}
+\frac{1}{N^2} \mathbf{1} \mathbf{1}^\top
\mathbf{D}
\mathbf{1} \mathbf{1}^\top \right ). \nonumber \\
\end{eqnarray*}
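The equivalence of this expanded form and $-\frac{1}{2}\mathbf{J}\mathbf{D}\mathbf{J}$ can be checked numerically, e.g.\ with the following small sketch (a random symmetric $\mathbf{D}$ for illustration only):

```python
import numpy as np

rng = np.random.default_rng(5)
N = 50
D = rng.random((N, N))
D = D + D.T                               # any symmetric dissimilarity matrix

# Double centering, S(D) = -J D J / 2.
J = np.eye(N) - np.ones((N, N)) / N
S_centered = -0.5 * J @ D @ J

# The four-summand expansion from the text.
one = np.ones((N, 1))
S_expanded = -0.5 * (D
                     - (D @ one) @ one.T / N
                     - one @ (one.T @ D) / N
                     + one @ (one.T @ D @ one) @ one.T / N ** 2)

print(np.allclose(S_centered, S_expanded))
```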
Approximating $\mathbf{S(D)}$ requires the computation of only a linear part of each summand,
but still involves summations over the full matrix $\mathbf{D}$.
Alternatively, by approximating $\mathbf{D}$ first, we get
\begin{eqnarray}
&\mathbf{S} \overset{Ny}{\approx}& \mathbf{S(\hat{D})}
= -\frac{1}{2} \left[ \mathbf{D}_{N,m} \cdot \mathbf{D}_{m,m}^{-1} \cdot \mathbf{D}_{m,N} -\frac{1}{N} \mathbf{D}_{N,m} \right. \label{eq:dis_to_sim} \\
& & \left.\cdot (\mathbf{D}_{m,m}^{-1} \cdot (\mathbf{D}_{m,N} \mathbf{1} )) \mathbf{1}^\top -\frac{1}{N} \mathbf{1} ((\mathbf{1}^\top \mathbf{D}_{N,m}) \cdot \mathbf{D}_{m,m}^{-1}) \right.
\nonumber\\
& & \left. \cdot \mathbf{D}_{m,N}
+\frac{1}{N^2} \mathbf{1} (( \mathbf{1}^\top \mathbf{D}_{N,m}) \cdot
\mathbf{D}_{m,m}^{-1} \cdot (\mathbf{D}_{m,N} \mathbf{1}))\mathbf{1}^\top
\right]. \nonumber
\end{eqnarray}
This equation can be rewritten for each entry of the matrix $\mathbf{S(\hat{D})}$
\begin{eqnarray*}
\hat{S}_{ij}(\mathbf{\hat{D}})
&=& -\frac{1}{2} \Bigg[ \mathbf{D}_{i,m} \cdot \mathbf{D}_{m,m}^{-1} \cdot \mathbf{D}_{m,j} \\
& & -\frac{1}{N} \sum_k \mathbf{D}_{k,m} \cdot \mathbf{D}_{m,m}^{-1} \cdot
\mathbf{D}_{m,j} \\
& & \left.
-\frac{1}{N} \sum_k \mathbf{D}_{i,m} \cdot \mathbf{D}_{m,m}^{-1} \cdot
\mathbf{D}_{m,k}\right.\nonumber\\
& & \left. +\frac{1}{N^2} \sum_{kl} \mathbf{D}_{k,m} \cdot
\mathbf{D}_{m,m}^{-1} \cdot \mathbf{D}_{m,l}
\right], \nonumber
\end{eqnarray*}
as well as for the sub-matrices $\mathbf{S}_{m,m}(\mathbf{\hat{D}})$ and $\mathbf{S}_{N,m}(\mathbf{\hat{D}})$,
in which we are interested for the Nystr\"om approximation
\begin{eqnarray*}
\mathbf{S}_{m,m}(\mathbf{\hat{D}})
&=& -\frac{1}{2} \left[ \mathbf{D}_{m,m}
-\frac{1}{N} \mathbf{1} \cdot \sum_k \mathbf{D}_{k,m} \right. \\
& & \left.
-\frac{1}{N} \sum_k \mathbf{D}_{m,k} \cdot \mathbf{1}^\top \right.\nonumber\\
&& \left. +\frac{1}{N^2} \mathbf{1} \cdot \sum_{kl} \mathbf{D}_{k,m} \cdot
\mathbf{D}_{m,m}^{-1} \cdot \mathbf{D}_{m,l} \cdot \mathbf{1}^\top
\right] \nonumber
\end{eqnarray*}
\begin{eqnarray*}
\mathbf{S}_{N,m}(\mathbf{\hat{D}})
&=& -\frac{1}{2} \left[ \mathbf{D}_{N,m}
-\frac{1}{N} \mathbf{1} \cdot \sum_k \mathbf{D}_{k,m} \right. \\
& & \left.
-\frac{1}{N} \sum_k \mathbf{D}_{N,m} \cdot \mathbf{D}_{m,m}^{-1} \cdot
\mathbf{D}_{m,k} \cdot \mathbf{1}^\top\right. \nonumber\\
& & \left. +\frac{1}{N^2} \mathbf{1} \cdot \sum_{kl} \mathbf{D}_{k,m} \cdot
\mathbf{D}_{m,m}^{-1} \cdot \mathbf{D}_{m,l} \cdot \mathbf{1}^\top
\right]. \nonumber
\end{eqnarray*}
Now the matrix $\mathbf{S(\hat{D})}$ can be approximated via the matrix
$\mathbf{\hat{S}(\hat{D})}$ using the matrices
$\mathbf{S}_{m,m}(\mathbf{\hat{D}})$ and $\mathbf{S}_{N,m}(\mathbf{\hat{D}})$.
This requires only a linear part of $\mathbf{D}$ and involves linear computation time.
Comparing this approach to the quadratic computation of $\mathbf{S}_{N,m}$,
we see that the first three summands are identical
and only the fourth summand is different.
This term involves summation over the full dissimilarity matrix
and, depending on the approximation quality of $\mathbf{\hat{D}}$, might vary.
The deviation is added to each pairwise similarity
resulting in a non-linear transformation of the data.
If $m$ corresponds to the rank of $\mathbf{D}$
then double centering is exact
and no information loss occurs during the approximation.
Otherwise, the information loss increases with smaller $m$
for both approaches
and the error is made by approximating $\mathbf{S}$ in the first case
and by approximating $\mathbf{D}$ in the second case.
If the Nystr\"om approximation is feasible for a given data set,
then the second approach allows
to perform the transformation in linear instead of quadratic time.
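A minimal sketch of evaluating Equation \eqref{eq:dis_to_sim}, assuming for illustration a low-rank squared-Euclidean dissimilarity matrix (all variable names are ours), could look as follows:

```python
import numpy as np

rng = np.random.default_rng(1)

# Squared Euclidean distances of points in R^d; such a matrix has rank <= d + 2.
N, d = 150, 3
X = rng.standard_normal((N, d))
sq = (X ** 2).sum(axis=1)
D = sq[:, None] + sq[None, :] - 2 * X @ X.T

m = d + 2
idx = rng.choice(N, size=m, replace=False)
D_Nm = D[:, idx]                              # only a linear part of D is needed
D_mm_inv = np.linalg.pinv(D_Nm[idx, :])

one = np.ones((N, 1))
# The centering terms of Equation (eq:dis_to_sim) need only matrix-vector
# products, i.e. O(mN) work; the full N x N product below is formed here
# solely to allow a direct comparison with explicit double centering.
col = D_Nm @ (D_mm_inv @ (D_Nm.T @ one))      # row sums of D_hat
tot = (one.T @ col).item()                    # grand sum of D_hat
S_hat = -0.5 * (D_Nm @ D_mm_inv @ D_Nm.T
                - col @ one.T / N
                - one @ col.T / N
                + tot / N ** 2)

# Reference: double centering applied to the full approximated matrix.
J = np.eye(N) - np.ones((N, N)) / N
S_ref = -0.5 * J @ (D_Nm @ D_mm_inv @ D_Nm.T) @ J
print(np.allclose(S_hat, S_ref))
```

In practice one would only materialize the sub-matrices $\mathbf{S}_{m,m}(\mathbf{\hat{D}})$ and $\mathbf{S}_{N,m}(\mathbf{\hat{D}})$, keeping both time and memory linear in $N$.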
It should be mentioned that a similar transformation is possible with
landmark multidimensional scaling (L-MDS) \cite{DBLP:conf/nips/SilvaT02},
which is widely known in the visualization community and typically used to
embed data into a low ($2$--$3$) dimensional space. Embeddings into higher dimensions
are possible but, in general, not considered.
The idea of L-MDS is to sample a small number $m$ of points, the so-called landmarks,
compute the corresponding dissimilarity matrix
followed by a double centering on this matrix.
Finally the data are projected to a low dimensional space
using an eigenvalue decomposition.
The remaining points can then be projected into the same space,
taking into account the distances to the landmarks, and applying a triangulation.
From this vectorial representation of the data
one can easily retrieve the similarity
matrix as a scalar product between the points.
It was shown by \cite{Platt:2005} that L-MDS is also a Nystr\"om technique,
but compared to our proposed approach in Equation \eqref{eq:dis_to_sim}
L-MDS makes an error not only in the fourth summand, but also in the second and the third.
Additionally, and more importantly, by projecting into \emph{Euclidean space}
it makes an implicit clipping of the eigenvalues.
As discussed above and as will be shown later,
this might disturb the data significantly, leading to qualitatively worse results.
Thus, our proposed method can be seen as a generalization of L-MDS
and should be used instead.
Similarly to the transformation from $\mathbf{D}$ to $\mathbf{\hat{S}}$,
there are two ways to transform $\mathbf{S}$ to $\mathbf{\hat{D}}$.
First, transform the full matrix $\mathbf{S}$ to $\mathbf{D}$
using $D_{ij} = S_{ii} + S_{jj} - 2 S_{ij}$
and then apply the Nystr\"om approximation
\begin{equation}
\mathbf{\hat{D}} = \mathbf{D}_{N,m}
\mathbf{D}_{m,m}^{-1}
\mathbf{D}_{N,m}^\top.
\label{eq:dis_corr_app}
\end{equation}
Second, approximate $\mathbf{S}$ with $\mathbf{\hat{S}}$
and then transform it to $\mathbf{\hat{D}}$.
The first approach requires quadratic time,
since it transforms the full matrix.
In the second approach only $\mathbf{D}_{N,m}$ is computed,
thus making it linear in time and memory.
Obviously, both approaches produce the same results,
but the second one is significantly faster.
The reason is that for the computation of $\mathbf{\hat{D}}$
only the matrix $\mathbf{D}_{N,m}$ is required
and it is not necessary to compute the rest of $\mathbf{D}$.
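The equivalence of the two routes can be illustrated with a small sketch (a synthetic low-rank $\mathbf{S}$; variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# A low-rank similarity matrix S = X X^T and m landmark indices.
N, m = 120, 4
X = rng.standard_normal((N, m))
S = X @ X.T
idx = rng.choice(N, size=m, replace=False)

# Second route: only m columns of D are computed, D_ij = S_ii + S_jj - 2 S_ij.
s_diag = np.diag(S)
D_Nm = s_diag[:, None] + s_diag[idx][None, :] - 2 * S[:, idx]   # O(mN)
D_mm = D_Nm[idx, :]
D_hat = D_Nm @ np.linalg.pinv(D_mm) @ D_Nm.T

# First route: transform the full matrix, then approximate (quadratic time).
D_full = s_diag[:, None] + s_diag[None, :] - 2 * S
D_ref = (D_full[:, idx]
         @ np.linalg.pinv(D_full[np.ix_(idx, idx)])
         @ D_full[:, idx].T)

print(np.allclose(D_hat, D_ref))
```

Both routes use the very same sub-matrices $\mathbf{D}_{N,m}$ and $\mathbf{D}_{m,m}$, which is why the results coincide while the second route avoids touching the full matrix.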
\subsection{Eigenvalue correction}
For non-Euclidean data,
the corresponding similarity matrix is indefinite.
We would like to make the data Euclidean
in order to avoid convergence issues,
or to be able to use kernel methods.
A strategy to obtain a valid kernel matrix from similarities
is to apply an eigenvalue correction as discussed in section \ref{sec:trafos_corr}.
This however can be prohibitive for large matrices, since to correct the whole
eigenvalue spectrum, the whole eigenvalue decomposition is needed,
which has $\mathcal{O}(N^3)$ complexity.
The Nystr\"om approximation can again decrease computational costs dramatically.
Since we can now apply the approximation on an arbitrary symmetric matrix,
we can make the correction afterwards,
reducing the complexity to a linear one, as we will show now.
Given non-metric dissimilarities $\mathbf{D}$,
we can first approximate them and then convert to approximated similarities $\mathbf{\hat{S}}(\mathbf{\hat{D}})$
using Equation \eqref{eq:dis_to_sim}.
For similarities $\mathbf{\hat{S}}$
given directly or obtained from $\mathbf{\hat{S}}(\mathbf{\hat{D}})$,
we need to compute the eigenvalue decomposition in linear time.
As we have shown in section \ref{sec:eval_decomp},
it is possible to compute the exact eigenvalue decomposition
of a Nystr\"om-approximated psd matrix in linear time, given
the corresponding similarity matrix has indeed rank $m$.
Since $\mathbf{\hat{S}}$ is indefinite,
we cannot apply the above technique directly.
Instead, since squaring a matrix leaves its eigenvectors unchanged,
we first compute
\begin{align*}
\mathbf{\hat{S}}^2 & = \mathbf{S}_{N,m} \mathbf{S}^{-1}_{m,m} \left( \mathbf{S}_{m,N}
\cdot \mathbf{S}_{N,m} \right) \mathbf{S}^{-1}_{m,m} \mathbf{S}_{m,N}\\
& = \mathbf{S}_{N,m} \mathbf{\tilde{S}}_{m,m} \mathbf{S}_{N,m}^\top\\
& = \mathbf{C} \mathbf{\tilde{A}} \mathbf{C}^\top.
\end{align*}
The resulting matrix can be computed in linear time and is psd.
This means we can determine its eigenvalue decomposition
as described in section \ref{sec:eval_decomp}:
\[
\mathbf{\hat{S}}^2 = \mathbf{C} \mathbf{\tilde{A}} \mathbf{C}^\top,
\]
where $\mathbf{\tilde{A}}$ are the eigenvalues of $\mathbf{\hat{S}}^2$
and $\mathbf{C}$ are the eigenvectors
of both $\mathbf{\hat{S}}^2$ and $\mathbf{\hat{S}}$.
Using the eigenvectors $\mathbf{C}$, the eigenvalues $\mathbf{A}$
of $\mathbf{\hat{S}}=\mathbf{C}\mathbf{A}\mathbf{C}^\top$
can be retrieved via
$\mathbf{A}=\mathbf{C}^\top \mathbf{\hat{S}} \mathbf{C}$.
Then we can correct the eigenvalues $\mathbf{A}$
with some technique from section \ref{sec:trafos_corr}, obtaining $\mathbf{A}^*$.
The corrected approximated matrix $\mathbf{\hat{S}}^*$ is then simply
\begin{equation}
\label{eq:sim_corr_app}
\mathbf{\hat{S}}^* = \mathbf{C} \mathbf{A}^* \mathbf{C}^\top.
\end{equation}
Thus, using a low rank representation of a similarity matrix
we can compute its eigenvalue decomposition
and perform eigenvalue correction in linear time.
If it is desirable to work with the corrected dissimilarities,
then, using Equation \eqref{eq:dis_corr_app}, it is possible to transform
the corrected similarity matrix $\mathbf{\hat{S}}^*$ back to dissimilarities
resulting in the corrected and approximated matrix $\mathbf{\hat{D}}^*$.
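A compact sketch of this squaring trick follows, with a small auxiliary eigenproblem as one possible realization of the linear-time decomposition (the concrete construction via a QR factorization is our illustrative choice, not necessarily the one from section \ref{sec:eval_decomp}):

```python
import numpy as np

rng = np.random.default_rng(3)

# Indefinite similarity matrix of rank m: S = U diag(a) U^T with mixed signs.
N, m = 100, 4
U, _ = np.linalg.qr(rng.standard_normal((N, m)))
a = np.array([5.0, 2.0, -1.5, -0.5])
S = U @ np.diag(a) @ U.T

idx = rng.choice(N, size=m, replace=False)
S_Nm = S[:, idx]
S_mm_inv = np.linalg.pinv(S[np.ix_(idx, idx)])

# S_hat^2 = S_Nm B S_Nm^T is psd, and its eigenvectors are also eigenvectors
# of S_hat.  Reduce to an m x m eigenproblem via a QR factorization of S_Nm,
# so that all steps stay linear in N.
B = S_mm_inv @ (S_Nm.T @ S_Nm) @ S_mm_inv
Q, R = np.linalg.qr(S_Nm)
w, V = np.linalg.eigh(R @ B @ R.T)
C = Q @ V                                   # eigenvectors of S_hat^2 and S_hat

# Recover the signed eigenvalues A = C^T S_hat C and flip them.
A = np.diag((C.T @ S_Nm) @ S_mm_inv @ (S_Nm.T @ C))
A_star = np.abs(A)                          # flipping correction
S_star = C @ np.diag(A_star) @ C.T          # corrected psd matrix

print(np.allclose(np.sort(np.linalg.eigvalsh(S_star))[-m:], np.sort(np.abs(a))))
```

Here the full matrix $\mathbf{S}^*$ is only materialized to verify that the nonzero eigenvalues are indeed the flipped ones; in an application one would keep the factorized form.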
\subsection{Out-of-sample extension}
Usually, models are learned on a training set
and we expect them to generalize well to new, unseen data -- the test set.
In such cases we need to provide an out-of-sample extension,
i.e.\ a way to apply the model to the new data.
This might be a problem for techniques dealing with (dis-)similarities.
For example, in proxy approaches the out-of-sample extension is in general
handled by solving another costly optimization problem \cite{DBLP:conf/icml/ChenGR09,Lu30082005}.
If the matrices are corrected, we need to correct the new (dis-)similarities
as well to get consistent results. Fortunately this can be easily done in the
Nystr\"om framework.
If we compare the Equations \eqref{eq:Ny_equation} and \eqref{eq:sim_corr_app}
we see that the correction is performed
on a different decomposition of $\mathbf{\hat{S}}$, i.e.:
\begin{equation}
\label{eq:appr_ooe}
\mathbf{S}_{N,m} \mathbf{S}_{m,m} \mathbf{S}_{N,m}^\top =
\mathbf{\hat{S}} = \mathbf{C} \mathbf{A} \mathbf{C}^\top.
\end{equation}
If we correct $\mathbf{A}$, it is not clear what happens on the left side
of the above equation.
Therefore, to compute the out-of-sample extension
we need to find a simple transformation
from one decomposition to the other.
Taking the linear part $\mathbf{\hat{S}}_{N,m} = \mathbf{S}_{N,m}$ from Equation \eqref{eq:appr_ooe}
we get
\[
\mathbf{S}_{N,m} =
\mathbf{C}_{N,m} \mathbf{A} \mathbf{C}_{m,m}^\top,
\]
which, after a simple rearrangement, leads to
\[
\mathbf{C}_{N,m} = \mathbf{S}_{N,m}
\left( \mathbf{A} \mathbf{C}_{m,m}^\top \right)^{-1}.
\]
Plugging the above formula into Equation \eqref{eq:sim_corr_app} we get
\begin{align*}
\mathbf{\hat{S}}^* & =
\mathbf{S}_{N,m} \left( \mathbf{A} \mathbf{C}_{m,m}^\top \right)^{-1}
\mathbf{A}^*
\left(\left( \mathbf{A} \mathbf{C}_{m,m}^\top \right)^{-1}\right)^\top \mathbf{S}_{N,m}^\top\\
& = \mathbf{S}_{N,m} (\mathbf{C}_{m,m}^\top)^{-1}\mathbf{A}^{-1}
\mathbf{A}^*
\mathbf{A}^{-1}\mathbf{C}_{m,m}^{-1}\mathbf{S}_{N,m}^\top\\
& = \mathbf{S}_{N,m} (\mathbf{C}_{m,m}^\top)^{-1}
(\mathbf{A}^*)^{-1}
\mathbf{C}_{m,m}^{-1}\mathbf{S}_{N,m}^\top\\
& = \mathbf{S}_{N,m}
\left(\mathbf{C}_{m,m}\mathbf{A}^*\mathbf{C}_{m,m}^\top\right)^{-1}
\mathbf{S}_{N,m}^\top
\end{align*}
where the last two steps use that $\mathbf{A}$ and $\mathbf{A}^*$ are diagonal
and that $\mathbf{A}^{-1} \mathbf{A}^* \mathbf{A}^{-1} = (\mathbf{A}^*)^{-1}$
holds entry-wise for flipping and, with pseudo-inverses, for clipping.
Thus, we simply need to extend the matrix $\mathbf{S}_{N,m}$
by the uncorrected similarities between the new points and the landmarks to obtain
the full approximated and \emph{corrected} similarity matrix,
which can then be used by the algorithms to compute the out-of-sample extension.
The same approach can be applied to the dissimilarity matrices.
Here we first need to transform the new dissimilarities to similarities
using Equation \eqref{eq:dis_to_sim}, correct them and then
transform back to dissimilarities.
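The resulting out-of-sample rule can be illustrated as follows (synthetic rank-$m$ pseudo-Euclidean similarities; the direct eigendecomposition of $\mathbf{\hat{S}}$ is used here only to keep the demo short):

```python
import numpy as np

rng = np.random.default_rng(4)

# Rank-m indefinite similarities for N training and n new points
# (pseudo-Euclidean inner products with signature (2,1)).
N, n, m = 80, 10, 3
Y = rng.standard_normal((N + n, m))
M = np.diag([1.0, 1.0, -1.0])
S_full = Y @ M @ Y.T

idx = np.arange(m)                      # landmarks among the training points
S_Nm = S_full[:N, idx]
S_hat = S_Nm @ np.linalg.pinv(S_full[np.ix_(idx, idx)]) @ S_Nm.T

# Eigendecomposition of S_hat (computed directly only for this demo)
# and flipping of the m dominant, partly negative, eigenvalues.
w, C = np.linalg.eigh(S_hat)
keep = np.argsort(-np.abs(w))[:m]
C_m, A = C[:, keep], w[keep]
A_star = np.abs(A)                      # flipping correction

# Corrected kernel expressed through the landmark rows of C only.
C_mm = C_m[idx, :]
Mid = np.linalg.pinv(C_mm @ np.diag(A_star) @ C_mm.T)
S_star_train = S_Nm @ Mid @ S_Nm.T      # equals C_m diag(A*) C_m^T

# Out-of-sample: append the *uncorrected* similarities of the new points
# to the landmarks and reuse the same middle matrix.
S_new_m = S_full[N:, idx]
S_star_cross = S_new_m @ Mid @ S_Nm.T   # corrected new/train similarities

print(np.allclose(S_star_train, C_m @ np.diag(A_star) @ C_m.T))
```

Only the middle matrix $\left(\mathbf{C}_{m,m}\mathbf{A}^*\mathbf{C}_{m,m}^\top\right)^{-1}$ has to be stored; new points merely contribute additional rows of uncorrected similarities to the landmarks.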
In \cite{DBLP:journals/jmlr/ChenGGRC09} a similar approach is taken.
First, the whole similarity matrix
is corrected by means of a projection matrix. Then this projection matrix is
applied to the new data, so that the corrected similarity between old and new
data can be computed. This technique is in fact the Nystr\"om approximation,
where the whole similarity matrix $\mathbf{S}$ is treated as the approximation
matrix $\mathbf{S}_{m,m}$ and the old data, together with the new data, build the
matrix $\mathbf{S}_{N,m}$. Rewriting this in the Nystr\"om framework
makes it more transparent, avoids computing the projection matrix, and
additionally allows computing the similarities between the new points.
\subsection{Proof of concept}
We close this section with a small experiment on the ball dataset proposed in \cite{DBLP:conf/sspr/DuinP10}.
It is an artificial dataset based on the surface distances of randomly positioned balls of two classes having a slightly different radius.
The dataset is non-Euclidean with substantial information encoded in the negative part of the eigenspectrum.
We generated the data with $300$ samples per class leading to an $N \times N$
dissimilarity matrix $\mathbf{D}$, with $N=600$.
Now the data have been processed in four different ways
to obtain a valid kernel matrix $\mathbf{S}$.
The first encoding, denoted as $SIM1$, was constructed
by converting $\mathbf{D}$ to $\mathbf{S}$
with double centering and computing the full eigenvalue decomposition.
The negative eigenvalues were then corrected by flipping.
This approach, which we will refer to as the {\bf standard approach} in the following,
has a complexity of $\mathcal{O}(N^3)$.
Further, we generated an approximated similarity matrix $\mathbf{\hat{S}}^*$ by using the proposed approach, flipping in the eigenvalue correction
and $10$ landmarks for the Nystr\"om approximation. This dataset is denoted as $SIM2$ and was obtained with a complexity of $\mathcal{O}(m^2N)$.
The third dataset $SIM3$ was obtained in the same way but the eigenvalues were clipped. The dataset $SIM4$ was obtained using
landmark MDS with the same landmarks as for $SIM2$ and $SIM3$.
The data are processed by a Support Vector Machine in a $10$-fold
crossvalidation. The results on the test sets are shown in the Table \ref{tab:sim}.
\begin{table*}
\centering
\caption{\label{tab:sim} Test set results of a 10-fold SVM run on the ball dataset using the different encodings.}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}l|c|c|c|c}\hline
& $SIM1$ & $SIM2$ & $SIM3$ & $SIM4$ \\\hline\hline
Test-Accuracy & $100\pm0$ & $88.83\pm3.15 $ & $51.50\pm6.64 $ & $50.67\pm3.94$
\end{tabular*}
\end{table*}
As mentioned, the data contain substantial information in the negative fraction of the eigenspectrum; accordingly, one may
expect that these eigenvalues should not be removed.
This is also reflected in the results. L-MDS removed the negative eigenvalues
and the classification model based on these data shows random prediction accuracy.
The SIM3 encoding performs somewhat better.
Also in this case the negative eigenvalues are removed, but the limited amount of class-separation information encoded
in the positive fraction was better preserved, probably due to the different calculation of the matrix $\mathbf{S}_{m,m}$.
The SIM2 data used the flipping strategy and already shows quite good prediction accuracy, taking into account that the
kernel matrix is approximated by only $10$ landmarks and the relevant (originally negative) eigenvalues are of small magnitude.
As a last point, it should be mentioned that corrections like clipping and flipping and their effect
on the data representation are still under discussion and considered to be not always optimal \cite{Pekalska2005a}.
Additionally, the selection of landmark points is discussed in \cite{DBLP:journals/tnn/ZhangK10a,DBLP:journals/jmlr/KumarMT12}.
Further, for very large data sets (e.g.\ some $100$ million points) the Nystr\"om approximation may still be
too costly and other strategies have to be found, as suggested in \cite{DBLP:conf/icml/LiKL10}.
\section{Experiments}
We now apply the previously derived approach to six non-metric dissimilarity and similarity data sets and show its effectiveness for a classification task.
The considered data are: (1) the imbalanced SwissProt similarity data as described in \cite{mediansom}, consisting of protein sequence alignment
scores; (2) the balanced chromosome dissimilarity data taken from \cite{neuhaus}, with scores of aligned gray-value images; (3) the imbalanced proteom dissimilarity
data set from \cite{PrTools:2012:Online}; (4) the balanced Zongker digit dissimilarity data from \cite{PrTools:2012:Online,Jain19971386}, which
is based on deformable template matchings of 2000 handwritten NIST digits; (5) the balanced Delft gestures database,
taken from \cite{PrTools:2012:Online}; and (6) the WoodyPlants50 (Woody) data from the same source.
DS5 represents a sign-language interpretation problem with dissimilarities computed
using a dynamic time warping procedure on the sequence of positions \cite{Lichtenauer20082040}.
The DS6 dataset consists of shape dissimilarities between leaves collected in a study on woody plants \cite{DBLP:journals/pami/LingJ07}.
Further details about the data can be found in Table \ref{tab:datasets}.
\begin{table*}[ht]
\begin{center}
\caption{\label{tab:datasets} Overview of the considered datasets and their properties.}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}l|c|c|c|c}\hline
Data set & Name & \# samples & \# classes & Signature \\\hline\hline
DS1 & SwissProt & 10988 & 30 & [8488,2500,0]\\
DS2 & Chromosom & 4200 & 21 & [2258,1899,43]\\
DS3 & Proteom & 2604 & 53 & [1502,682,420]\\
DS4 & Zongker & 2000 & 10 & [961,1038,1]\\
DS5 & Delft & 1500 & 20 & [963,536,1]\\
DS6 & Woody & 791 & 14 & [602,188,1]\\
\end{tabular*}
\end{center}
\end{table*}
All datasets are non-metric, multiclass and contain a large number of objects, such that a regular eigenvalue correction,
with a prior double centering for dissimilarity data as discussed before, is already very costly but can still be calculated to obtain comparative results.
\subsection{Classification performance}
The data are analyzed in various ways, employing the clipping eigenvalue correction, the flipping eigenvalue correction, or no eigenvalue correction
\footnote{
Shift correction was found to have a negative impact on the model, as already discussed in \cite{DBLP:journals/jmlr/ChenGGRC09}.}.
To be effective for the large number of objects we also apply the Nystr\"om approximation as discussed
before, using $10$, $50$, $100$ and all points as landmarks. If the data have a high rank ($>100$), they are potentially not well suited for approximations
and approximation errors are unavoidable.
Landmarks have been selected randomly from the data. Other sampling strategies have been discussed in \cite{DBLP:journals/jmlr/FarahatGK11,DBLP:journals/tnn/ZhangK10a,DBLP:conf/icml/SiHD14},
however with additional meta parameters, which we would like to avoid for clarity of the proposed approach. Also the impact of the Nystr\"om approximation with respect
to kernel methods has been discussed recently in \cite{DBLP:journals/jmlr/CortesMT10},
but this is out of the focus of the presented approach.
For comparison we also show the results as obtained by using Landmark-MDS, which naturally applies a clipping and, as mentioned before,
makes various simplifications in the conversion step, which can lead to inaccuracies in the data representation.
\begin{figure}[p]
\centering
\includegraphics[width=0.99\textwidth]{swiss_accuracy_rt_exp_proposed}\\
\includegraphics[width=0.99\textwidth]{swiss_accuracy_rt_exp_standard}
\caption{Top: box-plots of the classification performance for different sample sizes of DS1 using the proposed approach with $500$ landmarks.
Bottom: The same experiment but with the standard approach. Obviously our approach does not sacrifice performance for computational speed.}
\label{fig:diss_rt_swiss}
\end{figure}
The prediction accuracies of a $10$-fold crossvalidation for $m=\{10,50,100\}$ are shown in Tables \ref{tab:comparison_10}--\ref{tab:comparison_100}.
The influence of $N$ with respect to a fixed number of landmarks is studied in the experiment shown in Figure \ref{fig:diss_rt_swiss}.
A runtime analysis, compared to the standard approach, is shown in Figure \ref{fig:diss_rt}. The results of the standard approach, where no approximations are used but only eigenvalue corrections on the full matrix,
are provided in Table \ref{tab:comparison_full}. We also provide results for the dissimilarity-space representation using a linear and an ELM kernel \cite{DBLP:journals/ijon/FrenayV11} in Table \ref{tab:comparison_diss_space}.
As mentioned before, this representation does not need any approximations or eigenvalue corrections, but the out-of-sample extension is
costly if many landmarks are chosen, or the selection of the landmarks has to be optimized using e.g.\ a wrapper approach \cite{DBLP:journals/pr/PekalskaDP06}.
Here we use all points as landmarks to simplify the evaluation.
\begin{table}
\begin{center}
\caption{\label{tab:comparison_10} Signature and average test set accuracy for SwissProt (DS1), Chromosome (DS2), Proteom (DS3), Zongker (DS4), Delft gestures (DS5), Woody (DS6)
using a Nystr\"om approximation with $10$ landmarks and no, clip or flip eigenvalue correction, compared to L-MDS.}
\footnotesize
\begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}|c|c|c|c|c}\hline
& $10$ / Clip & $10$ / Flip & $10$ / No & $10$ L-MDS \\\hline\hline
\tiny{DS1}& \footnotesize{[9, 0, 10979]} & \footnotesize{[10, 0, 10978]} &\footnotesize{[9, 1, 10978]} &\\
& $30.67\pm5.07$* & $\bf 31.65\pm 5.41 $* & $5.93 \pm 5.23$ & $26.47 \pm 6.27$\\
\tiny{DS2}& \footnotesize{[9, 0 ,4191]} & \footnotesize{[10, 0, 4190]} &\footnotesize{[9,1, 4190]} &\\
& $67.61\pm6.49$ & $\bf 74.83\pm 3.23$* &$ 18.79 \pm 14.08$ &$ 67.09\pm 6.09$ \\
\tiny{DS3}& \footnotesize{[9, 0 ,2595]} & \footnotesize{[10, 0, 2594]} &\footnotesize{[9, 1, 2594]} &\\
& $59.33\pm 6.87$* & $\bf 62.43\pm 7.30 $* & $ 2.52\pm2.33$ &$ 56.74\pm6.26$ \\
\tiny{DS4}& \footnotesize{[8, 0 ,1992]} & \footnotesize{[10, 0, 1996]} &\footnotesize{[8, 2, 1990]} &\\
& $42.51\pm 10.51$* & $\bf 44.92\pm11.07 $* & $ 10.63\pm3.15$ &$ 32.83\pm9.49$\\
\tiny{DS5}& \footnotesize{[9, 0 ,1491]} & \footnotesize{[10, 0, 1490]} &\footnotesize{[9, 1, 1490]} &\\
& $73.75\pm 5.12$ & $\bf 78.76\pm 4.60 $* & $ 15.12\pm 13.05$ &$73.86\pm5.72$\\
\tiny{DS6}& \footnotesize{[9, 0 ,782]} & \footnotesize{[10, 0, 781]} & \footnotesize{[9, 1, 781]} &\\
& $75.96\pm 4.89$ & $\bf 79.51\pm 5.33 $* & $ 38.86\pm 14.14$ &$76.03\pm4.77$\\
\end{tabular*}
\end{center}
\end{table}
\begin{table}\vspace{-1cm}
\begin{center}
\caption{\label{tab:comparison_50} Signature and average test set accuracy for SwissProt (DS1), Chromosome (DS2), Proteom (DS3), Zongker (DS4), Delft gestures (DS5), Woody (DS6)
using a Nystr\"om approximation with $50$ landmarks and no, clip or flip eigenvalue correction, compared to L-MDS.}
\footnotesize
\begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}|c|c|c|c|c}\hline
& $50$ / Clip & $50$ / Flip & $50$ / No & $50$ L-MDS \\\hline\hline
\tiny{DS1}& \footnotesize{[49, 0 ,10939]} & \footnotesize{[50, 0, 10930]} &\footnotesize{[49, 1, 10931]} &\\
& $76.21\pm 5.13$ & $76.49 \pm 3.73 $ & $ 69.05\pm 5.01$ & $\bf 76.59 \pm 4.65$\\
\tiny{DS2}& \footnotesize{[49, 0 ,4151]} & \footnotesize{[50, 0, 4150]} &\footnotesize{[49,1, 4150]} &\\
& $94.05\pm1.17$ & $ 93.94\pm 1.28 $ & $ 83.66 \pm 25.43$ &$\bf 94.11\pm 1.21$ \\
\tiny{DS3}& \footnotesize{[48, 0 ,2556]} & \footnotesize{[50, 0, 2554]} &\footnotesize{[49, 1, 2550]} &\\
& $93.08\pm2.25$ & $\bf 93.82\pm 1.59 $* & $ 3.53\pm3.25$ &$92.35\pm2.08$\\
\tiny{DS4}& \footnotesize{[34, 0 ,1979]} & \footnotesize{[50, 0, 1950]} &\footnotesize{[34, 16, 1950]} &\\
& $80.79\pm3.94$* & $\bf 85.35\pm 3.42$* & $9.82 \pm 2.08$ &$ 73.57\pm6.71$\\
\tiny{DS5}& \footnotesize{[48,0,1452]} & \footnotesize{[50, 0, 1450]} &\footnotesize{[48, 2, 1450]} &\\
& $\bf 95.31\pm1.82$ & $ 94.72\pm 2.25 $ & $24.99 \pm 27.56$ & $ 95.31\pm1.89$\\
\tiny{DS6}& \footnotesize{[49, 0 ,742]} & \footnotesize{[50, 0, 741]} &\footnotesize{[49, 1, 741]} &\\
& $88.55\pm 4.11$ & $\bf 89.30\pm 3.72 $ & $ 81.40\pm 23.63$ &$88.46\pm4.35$\\
\end{tabular*}
\end{center}
\end{table}
\begin{table}\vspace{-1cm}
\begin{center}
\caption{\label{tab:comparison_100} Signature and average test set accuracy for SwissProt (DS1), Chromosome (DS2), Proteom (DS3), Zongker (DS4), Delft gestures (DS5), Woody (DS6)
using a Nystr\"om approximation with $100$ landmarks and no, clip or flip eigenvalue correction, compared to L-MDS.}
\footnotesize
\begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}|c|c|c|c|c}\hline
& $100$ / Clip & $100$ / Flip & $100$ / No & $100$ L-MDS \\\hline\hline
\tiny{DS1}& \footnotesize{[99, 0 ,10889]} & \footnotesize{[100, 0, 10888]} &\footnotesize{[99, 1, 10888]} &\\
& $87.62\pm 2.11$ & $ 87.63\pm 1.85 $ & $\bf 88.17 \pm 2.19$ & $87.50 \pm 2.24$\\
\tiny{DS2}& \footnotesize{[91, 0 ,4109]} & \footnotesize{[100, 0, 4100]} &\footnotesize{[91,9,4100]} &\\
& $95.00\pm 1.11$ & $ 94.71\pm 1.68 $ & $ 11.29\pm7.68$ & $\bf 95.18\pm 1.07$ \\
\tiny{DS3}& \footnotesize{[96, 0 ,2506]} & \footnotesize{[99, 0, 2505]} &\footnotesize{[97, 2, 2505]} & \\
& $96.48\pm 1.34$ & $\bf 96.96\pm 1.17 $ & $ 13.75 \pm 9.90$ & $96.29\pm1.27$ \\
\tiny{DS4}& \footnotesize{[63, 0 ,1937]} & \footnotesize{[100, 0, 1900]} &\footnotesize{[62, 38, 1900]} &\\
& $83.47\pm4.31$* & $\bf 87.42\pm 3.15 $* & $10.55 \pm 2.43$ & $80.34\pm7.73$ \\
\tiny{DS5}& \footnotesize{[91, 0 ,1401]} & \footnotesize{[100, 0, 1400]} &\footnotesize{[92, 8, 1400]} &\\
& $\bf 96.07\pm 1.56$ & $94.74\pm 4.23 $ & $ 23.33 \pm 18.62$ & $ 96.01\pm1.69$\\
\tiny{DS6}& \footnotesize{[96, 0 ,695]} & \footnotesize{[100, 0, 691]} &\footnotesize{[96, 4, 691]} &\\
& $90.69\pm 3.38$ & $\bf 90.71\pm 3.20 $ & $ 38.11\pm 23.74$ &$90.51\pm3.65$\\
\end{tabular*}
\end{center}
\end{table}
\begin{table*}[ht]
\begin{center}
\caption{\label{tab:comparison_full} Average test set accuracy for SwissProt (DS1), Chromosome (DS2), Proteom (DS3), Zongker (DS4), Delft gestures (DS5), Woody (DS6)
using the standard approach (no-approximations) and the flip, clip or no-eigenvalue correction on the full matrix. This has $\mathcal{O}(N^3)$ complexity.}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}l|c|c|c}\hline
Data set & clip & flip & no \\\hline\hline
DS1 & $ 95.45\pm 0.88$ & $ 95.39\pm 1.01$ & $ 95.40\pm0.59$ \\
DS2 & $ 97.12\pm 0.89$ & $ 97.17\pm 0.99$ & $ 96.93\pm 0.66$ \\
DS3 & $ 99.42\pm 0.66$ & $ 99.42\pm 0.45$ & $ 99.38\pm 0.61$ \\
DS4 & $ 95.65\pm 1.13$ & $ 96.25\pm 0.75$ & $ 25.25\pm 4.78$ \\
DS5 & $ 98.33\pm 1.67$ & $ 98.00\pm 0.94$ & $ 96.13\pm 1.43$ \\
DS6 & $ 92.54\pm 2.27$ & $ 93.17\pm 2.48$ & $ 89.63\pm 3.58$ \\
\end{tabular*}
\end{center}
\end{table*}
\begin{table*}[ht]
\begin{center}
\caption{\label{tab:comparison_diss_space} Average test set accuracy for SwissProt (DS1), Chromosome (DS2), Proteom (DS3), Zongker (DS4), Delft gestures (DS5), Woody (DS6)
using the dissimilarity space representation and a linear kernel or an elm kernel.}
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}l|c|c}\hline
Data set & linear & elm \\\hline\hline
DS1 & $26.01\pm5.49$ & $72.09\pm 0.96$ \\
DS2 & $76.76\pm1.11$ & $89.88\pm 0.96$ \\
DS3 & $68.36\pm2.48$ & $85.37\pm 2.86$ \\
DS4 & $93.70\pm2.04$ & $95.05\pm 1.71$ \\
DS5 & $87.73\pm3.83$ & $91.67\pm 2.58$ \\
DS6 & $28.83\pm6.97$ & $89.38\pm 4.48$ \\
\end{tabular*}
\end{center}
\end{table*}
\begin{figure*}
\caption{Spearman rank correlation (left) and the crossvalidation accuracy (right)
for the three largest data sets using the proposed approach with an interleaved double centering
and Nystr\"om approximation on the dissimilarity data.
}\label{fig:correl_pred}
\begin{center}
\subfigure[Swiss correlation]{\includegraphics[width=0.49\textwidth]{swiss_m_analysis_flip_correlation}}
\subfigure[Swiss accuracy]{\includegraphics[width=0.49\textwidth]{swiss_m_analysis_flip_prediction}}\\
\subfigure[Chromosom correlation]{\includegraphics[width=0.49\textwidth]{chromo_m_analysis_flip_correlation}}
\subfigure[Chromosom accuracy]{\includegraphics[width=0.49\textwidth]{chromo_m_analysis_flip_prediction}}\\
\subfigure[Proteom correlation]{\includegraphics[width=0.49\textwidth]{prodom_m_analysis_flip_correlation}}
\subfigure[Proteom accuracy]{\includegraphics[width=0.49\textwidth]{prodom_m_analysis_flip_prediction}}
\end{center}
\end{figure*}
\begin{figure*}
\caption{Logarithmic representation of the eigenspectrum of the unapproximated and double centered matrix for the
larger data sets DS1--DS3.}\label{fig:log_ev}
\begin{center}
\subfigure[Swiss eigenspectrum]{\includegraphics[width=0.32\textwidth]{swissprot_eigenspectrum}}
\subfigure[Chromosom eigenspectrum]{\includegraphics[width=0.32\textwidth]{chromosom_eigenspectrum}}
\subfigure[Proteom eigenspectrum]{\includegraphics[width=0.32\textwidth]{prodom_eigenspectrum_log}}
\end{center}
\end{figure*}
To get comparable experiments, the same randomly drawn landmarks are used in each of the corresponding sub-experiments
(along a column in the table). New landmarks are only drawn for different Nystr\"om approximations and for sample sizes shown in Figure \ref{fig:diss_rt_swiss}.
Classification rates are calculated in a 10-fold crossvalidation with 10 repeats using the Core-Vector-Machine (CVM) \cite{DBLP:conf/icml/TsangKK07}.
The crossvalidation does not include a new draw of the landmarks, in order to cancel out the selection bias of the Nystr\"om approximation; accordingly, the CVM uses the same kernel matrices. However, our objective is not maximum classification performance (which is only one possible
application) but to demonstrate the effectiveness of our approach for dissimilarity data of larger scale.
First, one observes that the eigenvalue correction has a strong, positive effect
on the classification performance, consistent with earlier findings \cite{DBLP:journals/jmlr/ChenGGRC09}.
Best results over a row are highlighted in bold in the various result tables. If a result is significantly better than L-MDS,
a $\star$ has been added.
Raising the number of landmarks improves the classification performance for the experiments with eigenvalue correction.
Using kernels without eigenvalue correction has in general a negative impact. While an increase in the number of landmarks leads to a better
approximation of the data set and may therefore improve the classification accuracy, it can also raise the influence of negative eigenvalues,
damping the performance\footnote{Comparing signatures at different Nystr\"om approximations also shows that many
eigenvalues are close to zero and are sometimes counted as positive, negative, or zero.}. We found that flipping is in general
superior to clipping. For $m=10$ flipping was consistently better than clipping or L-MDS. With an increase of $m$ the approximation error
of L-MDS vanishes and the results become more and more similar to the clipping results. But for DS4 L-MDS is also inferior at $m=100$,
which shows that for some data L-MDS gives bad results due to its approximation errors, even for rather large $m$.
Especially for DS3, DS4, and DS6 we observe that the proposed method gives much better results.
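The clip and flip corrections discussed above can be sketched for a small indefinite similarity matrix. The following is a minimal illustration (function name and example matrix are ours), not the implementation used in our experiments:

```python
import numpy as np

def correct_eigenvalues(S, mode="flip"):
    """Eigenvalue correction of a symmetric, possibly indefinite matrix.

    mode: "clip" sets negative eigenvalues to zero,
          "flip" replaces them by their absolute value.
    """
    w, V = np.linalg.eigh(S)          # S = V diag(w) V^T
    if mode == "clip":
        w = np.maximum(w, 0.0)
    elif mode == "flip":
        w = np.abs(w)
    return (V * w) @ V.T              # reassemble the corrected matrix

# a small indefinite example with eigenvalues -1 and +1
S = np.array([[0.0, 1.0], [1.0, 0.0]])
S_clip = correct_eigenvalues(S, "clip")
S_flip = correct_eigenvalues(S, "flip")
print(np.linalg.eigvalsh(S_clip))  # clipped spectrum: the negative eigenvalue is removed
print(np.linalg.eigvalsh(S_flip))  # flipped spectrum: both eigenvalues are positive
```

Clipping discards the subspace belonging to negative eigenvalues, while flipping keeps it with reversed sign; this is why flipping preserves more of the (possibly discriminative) structure of the data.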
In Table \ref{tab:comparison_diss_space} we also show the
crossvalidation results obtained with the previously mentioned dissimilarity space representation. For simplicity we use an $N$-dimensional feature space
and analyse the obtained vector representation by means of a linear kernel and a de facto parameter-free ELM kernel as proposed by \cite{DBLP:journals/ijon/FrenayV11}.
For the majority of the experiments the obtained results are significantly worse, with the exception of DS4. Also for DS5 a comparison with the Nystr\"om
approximation at $m=100$ still gives acceptable results. It should be noted that the results of the ELM-kernel experiments are consistently better than those of
the linear kernel, indicating the high non-linearity of the data. Obviously the dissimilarity space representation is in general not a reasonable alternative.
Additionally it becomes very costly for out-of-sample extensions if the number of considered features is large.
\begin{figure*}
\caption{Runtime analysis of the proposed vs the standard approach for the larger considered dissimilarity data
sets. All eigenvalues of the data sets have been processed by flipping. }\label{fig:diss_rt}
\begin{center}
\subfigure[Chromosom]{\includegraphics[width=0.49\textwidth]{chromo_runtime_pro_vs_std}}
\subfigure[Delft gestures]{\includegraphics[width=0.49\textwidth]{delft_gestures_runtime_pro_vs_std}}\\
\subfigure[Proteom]{\includegraphics[width=0.49\textwidth]{prodom_runtime_pro_vs_std}}
\subfigure[Zongker]{\includegraphics[width=0.49\textwidth]{zongker_runtime_pro_vs_std}}
\subfigure[SwissProt]{\includegraphics[width=0.49\textwidth]{swiss_runtime_pro_vs_std}}
\end{center}
\end{figure*}
In another experiment (see Figure \ref{fig:correl_pred}) we analyzed the proximity preservation of the approximated and corrected
matrix with respect to the unapproximated and corrected matrix. One would expect that for very low Nystr\"om rates (high approximation),
only the dominating eigenvalues are kept and the approximation suffers mainly when the eigenspectra are very smooth. At
increasing Nystr\"om rates (lower approximation), first more and more small eigenvalues (also negative ones) are kept leading to
a more complex data set and accordingly also a more complex proximity preservation task. Finally if the Nystr\"om rates are high
(almost no approximation) one would expect a perfect preservation. This effect is indeed observed in Figure \ref{fig:correl_pred}.
We used the Spearman's rank correlation to measure how far the ranks of the proximities (e.g. distances) are preserved between
the two approaches, namely our proposal and a full double centering, followed by a full eigenvalue correction.
Low correlation indicates that the data relations are not well preserved whereas small correlation errors
indicate that most likely only local neighborhood relations are confused. Comparing the correlation results (left plots in Figure \ref{fig:correl_pred})
with the prediction accuracy on the test data (right plots in Figure \ref{fig:correl_pred}) we see that only strong variations in the
correlation lead to strong misclassifications. This agrees with our expectation that the data are potentially clustered and local
errors in the data relations have only a weak or no effect on the classification model. Similar results were found when comparing our approach
to data which were first double-centered without approximation and where only the eigenvalue correction was done using the Nystr\"om
approach.
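On a small synthetic example, this rank-preservation measure can be sketched as follows; the matrices stand in for the fully processed and the approximated proximities, and tie handling is omitted for brevity (pairwise distances are generically tie-free):

```python
import numpy as np

def rank(v):
    # ranks 0..len(v)-1 of the entries of v (no tie handling)
    r = np.empty(len(v))
    r[np.argsort(v)] = np.arange(len(v))
    return r

def spearman(P_full, P_approx):
    """Spearman rank correlation of two symmetric proximity matrices,
    evaluated on their strictly upper triangular entries."""
    iu = np.triu_indices_from(P_full, k=1)
    a, b = rank(P_full[iu]), rank(P_approx[iu])
    return float(np.corrcoef(a, b)[0, 1])

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)   # full distance matrix
D_noisy = D + 0.01 * rng.normal(size=D.shape)          # stand-in for an approximated matrix
D_noisy = (D_noisy + D_noisy.T) / 2
print(spearman(D, D_noisy))   # close to 1: ranks are almost fully preserved
```

A correlation near $1$ means the global ordering of proximities is intact; small deviations typically correspond to confused local neighborhood ranks only.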
From the analysis we can conclude that the proposed approach is quite effective at preserving the global relations in the data space
even for quite high approximations, which is relevant for classification and clustering. The local neighborhood relations
are kept only for approximation rates above $60\%$. As one can see from the smooth eigenspectra in Figure \ref{fig:log_ev}, the
rank of the data sets is rather high; accordingly, only for large $m$ can the approximation keep detailed information, affecting the
local relationships of the data points. Thus, if the different classes are close to each other
and have complex nonlinear boundaries,
decreasing the number of landmarks
leads to an increased classification error.
In practice, as can be seen in Figure \ref{fig:correl_pred},
the number of landmarks has to become very small before this effect sets in.
It is thus possible to approximate the matrices
by selecting $m$ sufficiently small,
without sacrificing the classification accuracy.
\subsection{Runtime performance}
As exemplified in Figure \ref{fig:diss_rt_swiss}, the classification performance on eigenvalue-corrected data is approximately
the same for our proposed strategy and the standard approach,
but the runtime is drastically better as
the number of samples increases.
To show this we selected subsets of the considered data with sizes
from $1000$ up to the maximal number of samples, fixed the number of landmarks at $L=500$, and calculated the runtime and classification
performance using the CVM classifier in a 10-fold crossvalidation. The eigenvalues were flipped in this experiment.
The results of the proposed approach compared to the standard approach are shown in the plots of Figure \ref{fig:diss_rt}.
For larger $N$ the runtime of the standard method (red/dashed line) is two orders of magnitude larger than that of the proposed approach.
\section{Large scale experiments}
As a final experiment we analyze the proposed approach for large scale non-metric proximity data. Without the approach
presented in the former sections, a valid application of kernel methods to such data is not yet possible.
Neither the classical eigenvalue correction approach \cite{DBLP:journals/jmlr/ChenGGRC09} nor the learning
of a proximity kernel \cite{DBLP:conf/icml/ChenGR09} scales to larger data sets with $N \gg 10^3$ samples;
the problem becomes even more challenging if the data are given as dissimilarities, such that a double
centering is needed to keep a corresponding representation. Due to the large number of samples a full matrix
reconstruction is no longer possible, so error measures like the Spearman rank correlation cannot be calculated; accordingly,
we only provide test set errors obtained within a $10$-fold crossvalidation using a CVM.
In our experiments we consider:
\begin{itemize}
\item The SwissProt protein database \cite{swissprot}, but now at \emph{larger scale}, in the version of 11/2010,
restricted to ProSite-labeled sequences with at least $1,000$ entries per label. We obtain
$46$ ProSite labels and $82,525$ sequences, which are compared by the Smith-Waterman alignment
algorithm as provided in \cite{citeulike:668527}. We refer to this data as DS-L-1. The obtained similarity scores
are symmetric but non-metric; accordingly, standard kernel methods cannot be used directly in a valid form.
We take $1,000$ landmarks, randomly chosen from the selected classes.
The data set has $2$ larger negative eigenvalues in the approximated matrix.
\item The Pavia remote sensing data consist of $42,776$ spectra (DS-L-2). The data set is taken from \cite{RemoteSensing}.
We use the symmetrized Kullback-Leibler divergence, which is also known as the
spectral information divergence (SID) in remote sensing and is frequently used as an effective \emph{non-metric}
measure to compare spectral data \cite{vanderMeer20063}, and use $10\%$ randomly chosen points as landmarks.
\item The Salina data of $54,129$ points (DS-L-3), also taken from \cite{RemoteSensing}, with the same measure
and settings as for DS-L-2.
\item The ball data set with $30,000$ samples (Ball-Large). Landmarks are selected randomly as $10\%$ of the data set.
\end{itemize}
For all of these data sets a standard kernel approach is costly, since it requires calculating the whole similarity matrix,
and it would be basically impossible to obtain an eigenvalue correction in reasonable time.
Modern kernel classifiers like the Core-Vector Machine (CVM) \cite{DBLP:conf/icml/TsangKK07}
do not need to evaluate all the kernel similarities, but our similarities are non-metric and an
accurate online eigenvalue correction is not available.
However, we can use the presented approach to approximate the score matrix as well as to perform
an eigenvalue correction. The calculation of the final approximated kernel function and eigenvalue correction by the presented approach
takes only a few minutes.
The obtained approximated and now positive semi-definite similarity
matrices can be used by a Core-Vector Machine in a $10$-fold crossvalidation to generate a
classification model with a good mean prediction accuracy; see Table \ref{tab:results_large}.
\begin{table}\centering
\begin{tabular*}{\textwidth}{@{\extracolsep{\fill}}l|c|c|c|c|c|c}
Data & size & type & flip & clip & no & L-MDS (clip) \\\hline
DS-L-1 & 80k & S & $\bf 96.24 \pm 0.29 \%$ & $96.22 \pm 0.28\%$ & \text{failed} & $96.14\pm 0.27\%$\\
DS-L-2 & 40k & D & $\bf 82.56 \pm 0.60 \%$ & $79.80 \pm 0.94\%$ & \text{failed} & $81.18\pm 1.17\%$ \\
DS-L-3 & 50k & D & $\bf 88.11 \pm 0.68\%$* & $85.06 \pm 0.73\%$ & \text{failed} & $81.37\pm 0.62\%$ \\
Ball-Large & 30k & D & $\bf 93.59 \pm 0.63\%$* & $50.28 \pm 0.80\%$ & $28.50 \pm 0.76\%$ & $50.13 \pm 0.97\%$\\
\end{tabular*}
\caption{Crossvalidation results of the large scale data sets (D - dissimilarities, S - similarities) using flip, clip or no eigenvalue correction\label{tab:results_large}.}
\end{table}
An additional benefit of the CVM approach is that it naturally leads to very sparse models. Accordingly,
the out-of-sample extension to new sequences requires only a few score calculations against the sequences
of the training set.
\section{Conclusions}
In this article we addressed the analysis of potentially non-metric proximity data and especially the relation between dissimilarity and similarity data.
We proposed effective and \emph{accurate} transformations across the different representations. The results show that our approach can be
understood as a generalization of Landmark MDS. L-MDS did not show any significantly superior results compared to our method, but instead was often
found to be significantly worse. This finding persisted even when the number of landmarks was raised to a rather large value.
Dedicated learning algorithms for dissimilarities and kernels are now accessible for both types of data.
The specific coupling of double centering and Nystr\"om approximation permits computing
an exact eigenvalue decomposition in linear time, which is a valuable result for many different methods
depending on the exact calculation of eigenvalues and eigenvectors of a proximity matrix.
While our strategy is very effective, e.g., to improve supervised learning of non-metric dissimilarities by kernel methods,
it is, however, also limited by the Nystr\"om approximation, which itself may fail to provide a sufficient approximation;
accordingly, further research in this line is of interest. Nevertheless, dedicated methods for arbitrary proximity data
as addressed in \cite{DBLP:conf/sspr/PekalskaDGB04} will also be subject of future work. For non-psd data the error introduced by the Nystr\"om approximation and the eigenvalue correction
is not yet fully understood and bounds similar as proposed in \cite{DBLP:journals/jmlr/DrineasM05}
are still an open issue. It is also of interest to extend our approach to other types of matrix approximation schemes
as e.g. the CUR algorithm and others \cite{wang2013improving,DBLP:conf/aistats/WangZ14,DBLP:conf/kdd/WangZQZ14}. In future work we will also analyze in more detail the handling of extremely large (dis-)similarity sets
\cite{Schleif2014e,DBLP:conf/icml/GittensM13} and analyze our approach in the context of unsupervised problems \cite{DBLP:conf/icml/ZhangTK08}.
\section*{Acknowledgments}
We would like to thank Alexander Grigor'yan, Faculty of Mathematics,
University of Bielefeld for effective discussions about functional analysis and eigenspaces
and Barbara Hammer, Center of Excellence, University of Bielefeld for continuous support in this project.
Financial support from the Cluster of Excellence 277 Cognitive Interaction Technology
funded by the German Excellence Initiative is gratefully acknowledged. F.-M. Schleif was supported by a Marie Curie Intra-European Fellowship (IEF): FP7-PEOPLE-2012-IEF (FP7-327791-ProMoS).
We would like to thank R. Duin and E. Pekalska for providing access to some of the non-metric data sets and for the distools and prtools toolboxes.
\section{Appendix}
\textbf{Definition:}
The norm of an operator $K:L^2(\Omega) \to L^2(\Omega)$ is defined as
\[
\|K\|_{L^2 \to L^2}=\sup_{\|f\|\leq 1} \|K f\|_{L^2}
\]
and the norm of a function $f\in L^2(\Omega)$ is defined as
\[
\|f\|_{L^2} = \left(\int_\Omega |f(x)|^2 d\mu(x)\right)^{1/2}.
\]
\textbf{Theorem:}
The sequence of operators $K_m$ converges uniformly to $K$
in the operator norm if
\[
\sup_{\substack{x \in \Omega \\ y \in \Omega}}\left| k_m(x,y) - k(x,y) \right|
\leq \delta_m
\]
and $\delta_m \to 0$ for $m \to \infty$.
\textbf{Proof:}
The uniform convergence is given if
$\|K_m-K\|_{L^2 \to L^2} \to 0$
for $m \to \infty$.
Thus, we need to compute this quantity.
Following the computations in \cite{werner},
we can write for the norm of $Kf$
\begin{align*}
\|Kf\|^2_{L^2}
= & \int_\Omega |K f(x)|^2 d\mu(x)\\
= & \int_\Omega \left|\int_\Omega k(x,y) f(y) d\mu(y)\right|^2 d\mu(x)\\
\leq & \int_\Omega \left(\int_\Omega |k(x,y)| |f(y)| d\mu(y)\right)^2 d\mu(x)\\
\leq & \int_\Omega \left(\int_\Omega |k(x,y)|^2 d\mu(y)\right)
\left(\int_\Omega |f(y)|^2 d\mu(y)\right) d\mu(x)\\
= & \int_\Omega \int_\Omega |k(x,y)|^2 d\mu(x) d\mu(y) \|f\|^2_{L^2}
\end{align*}
where we used H\"older's inequality and Fubini's theorem.
It follows
\begin{align*}
\|K_m-K\|_{L^2 \to L^2}^2
= & \sup_{\|f\|\leq 1} \|(K_m - K) f\|_{L^2}^2 \\
= & \sup_{\|f\|\leq 1} \int_\Omega |(K_m - K) f(x)|^2 d\mu(x)\\
\leq & \sup_{\|f\|\leq 1}
\int_\Omega \int_\Omega |k_m(x,y) - k(x,y)|^2 d\mu(x) d\mu(y) \|f\|^2_{L^2} \\
\leq & \int_\Omega \int_\Omega \delta_m^2 d\mu(x) d\mu(y) \\
= & \delta_m^2
\end{align*}
where the last equality uses that $\mu$ is a probability measure, i.e. $\mu(\Omega)=1$.
Since $\delta_m \to 0$ for $m \to \infty$,
we have $\|K_m-K\|_{L^2 \to L^2} \to 0$
for $m \to \infty$.
$\Box$
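On a finite sample equipped with the uniform probability measure, the integral operator of the kernel difference is represented by $(1/n)A$ with $A_{ij}=k_m(x_i,x_j)-k(x_i,x_j)$, and the bound of the theorem reads $\|A\|_2/n \le \delta_m$. A numerical sanity check (the kernels below are arbitrary illustrations, not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(size=200)

# two kernels whose pointwise difference is uniformly small
k  = lambda s, t: np.exp(-np.abs(s[:, None] - t[None, :]))
km = lambda s, t: k(s, t) + 1e-3 * np.sin(s[:, None] * t[None, :])

A = km(x, x) - k(x, x)              # matrix of kernel differences
n = len(x)
op_norm = np.linalg.norm(A, 2) / n  # operator norm w.r.t. the uniform measure
delta = np.abs(A).max()             # sup-norm bound delta_m
print(op_norm, delta)               # op_norm is bounded by delta, as the theorem states
```

As in the proof, the operator norm is dominated by the sup-norm of the kernel difference, so driving the pointwise approximation error to zero forces uniform convergence of the operators.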
\bibliographystyle{plain}
\section{Introduction}
OJ 287, a BL Lac type active galactic nucleus, located at a redshift of 0.306, was discovered in 1967 (\citealt{Dickel_1967}).
It was known as an exceptionally active variable source even five decades ago (\citealt{Andrew_1971}).
A proper study of the variability of OJ 287 in different time scales can be found in \citet{Valtonen_2006}. Intraday variability in radio and optical data of OJ 287 was first detected by \citet{Valtaoja_1985}.
Variability in blazars fits into three different categories as described below. Changes over a range of minutes to less than a day
\citep[e.g.][]{1995ARA&A..33..163W, 1975IAUS...67..573K, 2003AJ....126...47R} are defined as intra-night variability (INV, or intra-day variability i.e. IDV, or
microvariability); those on a timescale of days to a few months are commonly known as short term variations (STV);
while the variations over several months to years are defined as long term variations \citep[LTV, e.g.][]{2011A&A...531A..38A, 2005A&A...438...39R, 2017MNRAS.469..813A}.
Short time scale variability of this BL Lac object at near-infrared frequencies was studied using standard JHK photometry, which showed variability of amplitude 0.7 mag over an observing period of 23 months (\citealt{Lorenzetti_1989}). From the long-term optical light curve of OJ 287, it was inferred that it hosts a binary supermassive black hole (\citealt{Sillanpaa_1988}). They found that the light curve shows repeated outbursts at intervals of 11.65 years and minimum flux at intervals of 11 years. These results were verified by others (\citealt{Kidger_1992}).
Different models for the periodic outburst of OJ 287 in optical frequency have been discussed earlier (\citealt{Dey_2019}), involving the periodic motion of a binary supermassive black hole. One kind of model assumes that the orientation of the jet of the primary black hole changes in a regular manner due to precession. The optical flare would thus be the result of the enhancement in the Doppler factor of the jet. In another model, optical flaring in OJ 287 results from enhanced accretion during pericenter passage or collision between the secondary black hole and the accretion disc of the primary black hole.
A big flare from OJ 287 was predicted to happen in 1994 according to the binary black hole model of \citet{Sillanpaa_1988}. This was observed by \citet{Sillanpaa_1996}, and thus the prediction of a 12-year cycle was confirmed. \citet{Lehto_1996} proposed that the reason for the flares is an impact of the secondary black hole on the accretion disk of the primary, which means that there have to be two such flares during each orbital cycle. This is a unique property of the model, not easily accounted for in other proposals. The model predicted the time of the second flare in November 1995 within a two-week time window (\citealt{Valtonen_1996}), and subsequently \citet{Sillanpaa_1996b} observed the flare and confirmed the prediction. \citet{Sundelius_1996, Sundelius_1997} calculated the flares arising from tides in this binary model and predicted the next big impact flare in 2005, a year earlier than would be expected from strict periodicity. It was reported by \citet{Valtonen_2006}. The flares come sooner than in the strictly periodic models due to precession, as is well stated in their paper. Finally, the observation of the 2015 flare confirmed this shift, which by then was 3 years (\citealt{Valtonen_2016}). This paper also found the signature of disk impacts, namely the thermal nature of the flare. \citet{2019ApJ...882...88V} updated the model of \citet{Lehto_1996} and determined the disk parameters using time delays calculated in \citet{Dey_2018}.
\par
During the phase 2008--2010, tidal flares were expected according to the model by \citet{Sundelius_1996,Sundelius_1997}.
The gamma-ray light curve of OJ 287 during 2008 August -- 2010 January was studied by \citet{Neronov_2011}. They found that the variability time scale is shorter than 3.2 hours. They inferred that the observed gamma-ray emission was from the jet of the smaller mass black hole. Detection of gamma-rays of energy higher than 10 GeV constrained the lower limit of the Doppler factor to 4.
\par
The broadband spectrum of the major gamma-ray flare in 2009 was studied by \citet{Kushwaha_2013}. They explained the multi-wavelength spectral energy distribution (SED) by combining synchrotron, synchrotron self-Compton (SSC), and external Compton (EC) processes. They suggested that the emission region in the jet is surrounded by a bath of photons at 250 K. They also inferred that the location of this emission region is 9 pc away from the central engine. The high activity of OJ 287 during December 2015 -- April 2016 was studied by \citet{2018MNRAS.473.1145K}, and the authors inferred simultaneous multi-wavelength emission. They explained the optical bump as accretion disc emission associated with the primary black hole.
The smaller bump feature in optical-UV appeared to be consistent with line emission. They explained the gamma-ray emission with inverse Compton scattering of photons from the line emission.
\par
The flux and polarisation variability in optical bands of OJ 287 during the December 2015 to February 2016 outburst was studied by \citet{Rakshit_2017}. The intra-night optical variability data were analyzed, and the shortest variability time scale was estimated as $142\pm 38$ minutes. This constrained the lower limit on the Doppler factor to 1.17 and the upper limit on the magnetic field to 3.8 Gauss. The size of the emission region was constrained to less than $2.28\times 10^{14}$ cm.
\par
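The quoted size limit follows from the light-crossing-time argument, $R \lesssim c\,\Delta t\,\delta/(1+z)$. A quick numerical check with the values above (the script is purely illustrative):

```python
c = 2.998e10          # speed of light [cm/s]
dt = 142 * 60         # shortest variability time scale [s]
delta = 1.17          # lower limit on the Doppler factor
z = 0.306             # redshift of OJ 287

# upper limit on the emission region size [cm]
R_max = c * dt * delta / (1 + z)
print(f"{R_max:.2e} cm")
```

This reproduces the quoted upper limit of $\sim 2.28\times 10^{14}$ cm to within rounding.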
The multi-band optical variability from September 2015 to May 2016 was studied by \citet{Gupta_2017} using nine ground-based optical telescopes. They detected a large optical outburst in December 2015 and a second comparably strong flare in March 2016. The long term optical, ultraviolet and X-ray variability in different activity states of OJ 287 was studied using
the UVOT and XRT instruments of Swift (\citealt{Siejkowski_2017}). They did not find any clear relation between optical/UV and X-ray emission during the quiescent state or outbursts.
\par
The strong activity in optical to X-ray frequency during July 2016 to July 2017 was studied by \citet{2018MNRAS.479.1672K}. The daily gamma-ray fluxes during this time are consistent with no variability. They modeled the SEDs with a two-zone leptonic model. The first zone gives an LBL SED, and the second zone gives an HBL SED. In their model, the second zone is located at a distance of parsec-scale from the central engine.
\par
A hadronic model to explain the X-ray and gamma-ray emission of the November 2015 outburst of OJ 287 has been given by \citet{2020MNRAS.498.5424R}. They have used the binary supermassive black hole model, where the initial trigger comes from the impact of the secondary black hole on the accretion disc of the primary black hole. An idealized spherical outflow is generated from this impact. A shock is formed when this spherical outflow -- containing cosmic rays and thermal ions -- interacts with the AGN wind of the primary black hole.
In their model, the cosmic rays are shock accelerated due to the collision of the outflow with the AGN wind of the primary black hole. The cosmic ray protons interact with the thermal ions, and as a result, secondary leptons, photons are produced in proton-proton interactions. The optical flare is explained by combining the jet emission from \citet{Kushwaha_2013} and the thermal bremsstrahlung emission in the outflow.
The photon field produced by thermal bremsstrahlung acts as a target for inverse Compton emission by the secondary leptons. They have explained the X-ray and gamma-ray data by this inverse Compton emission of the secondary electrons.
\par
Recently, \citet{Komossa_2020} reported the detection of a very bright outburst of OJ 287 covering X-ray, UV, and optical frequencies from April to June 2020.
They concluded that the outburst is jet-driven and consistent with the binary supermassive black hole model. In this model, the impact of the secondary black hole on the disk of the primary triggers an after-flare. This impact enhances the accretion activity of the primary black hole, which results in enhanced jet emission by the primary black hole.
\par
In this paper, we have analyzed the multi-wavelength data of OJ 287 for the period 2017 to 2020, which includes the outburst discussed by \citet{Komossa_2020}. The total period 2017 -- 2020 considered in our work has been divided into five segments after analyzing the variability time scales in the optical and X-ray data. We have modeled the SEDs with a time-dependent leptonic model, which includes synchrotron and SSC emission.
The data analysis is discussed in Section 2. Our results from the data analysis
and the modelling of the SEDs are discussed in Section 3.
The discussion and conclusions of our study are given in Section 4.
\section{Multiwavelength Observations and Data Analysis}
\subsection{\textit{Fermi}-LAT}
\textit{Fermi-LAT} is an excellent space-based telescope to explore the extragalactic and Galactic objects in the gamma-ray sky. It uses the pair conversion method to detect gamma-rays in the energy
range of 20 MeV -- 500 GeV. It has a wide field of view (FoV) of about 2.4 sr (\citealt{Atwood_2009}), which scans 20\% of the sky at any time.
The total scanning period of the entire sky with this telescope is around three hours. In 2017, OJ 287 was observed in its brightest X-ray flaring state when monitored by the \textit{Swift} telescope (ATel 10043), and soon flares in other frequency bands were also detected.
\textit{Fermi}-LAT has been continuously monitoring the source OJ 287 since 2008. We have collected the data from January 2017 to May 2020 and found that the source was in a moderate flux state within this period.
We have analyzed the gamma-ray data following the standard data reduction and analysis procedure described by
\textit{Science Tools}\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/documentation/}.
The details of the method of this analysis are discussed in \citet{Prince_2018}.
\begin{figure*}
\centering
\includegraphics[scale=0.60]{MWL-lc.eps}
\caption{The upper panel shows the weekly binned gamma-ray light curve for 0.1--300 GeV. The 2nd, 3rd, and 4th panels show the Swift-XRT and UVOT light curves. The 5th panel is the radio light curve from OVRO at 15 GHz.
The entire light curve is divided into five different states based on the flux and magnitude seen in Swift-XRT and UVOT. The various states are denoted as A, B, C, D, and E, and their time durations are represented by the color patches.}
\label{fig:total_lc}
\end{figure*}
\subsection{X-ray Observations}
On February 3, 2017, an X-ray flare was observed by the \textit{Swift} telescope, and the results were reported in ATel 10043. It was reported as the brightest flare ever detected since monitoring by the \textit{Swift} telescope started. After that, multiple flares were observed in X-rays until May 2020, and this whole period has been studied in this paper.
\textit{Swift} is a space-based telescope with three instruments on board, observing all kinds of Galactic and extragalactic sources in soft $\&$ hard X-rays, optical, and UV simultaneously. The working energy range of Swift-XRT is 0.3--10.0 keV.
The BL Lac OJ 287 was observed by the \textit{Swift-XRT} telescope during the multiple flaring episodes in X-ray frequencies between January 2017 and May 2020. We have analyzed all the observations taken during this period; the raw data are processed using the task `\textit{xrtpipeline}\footnote{https://heasarc.gsfc.nasa.gov/ftools/caldb/help/xrtpipeline.html}', and cleaned event files are produced for each observation.
The CALDB version 20160609 is used while processing the raw data. Our analysis is focused only on the Photon Counting
mode observations, and the task `\textit{xselect}' is used for the source and background selection. We have selected a region of 12 arcsec around
the source, and a similar region away from the source, for the source and background extraction, respectively.
The task `\textit{xselect}' is also used to extract the spectrum and light curve, and the modeling
of the spectrum is done in `\textit{Xspec}' (\citealt{Arnaud_1996}). For modeling the spectra, we have used a single power-law model. The Galactic absorption column
density $n_H$ = 1.10$\times$10$^{20}$ cm$^{-2}$ is fixed from \citet{Kalberla_2005}. The modeling is done for the energy range 0.3--10.0 keV.
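For orientation, the unabsorbed energy flux implied by such a power-law fit follows from integrating $E\,dN/dE$ over the fit band. The sketch below uses an illustrative normalization and photon index, not our fitted values:

```python
import numpy as np

def powerlaw_energy_flux(N0, Gamma, E1=0.3, E2=10.0):
    """Unabsorbed energy flux of a power-law photon spectrum
    dN/dE = N0 * E**(-Gamma)  [photons / cm^2 / s / keV],
    integrated from E1 to E2 keV and converted to erg / cm^2 / s."""
    keV_to_erg = 1.602e-9
    if np.isclose(Gamma, 2.0):
        integral = N0 * np.log(E2 / E1)   # the E^{-1} integrand gives a logarithm
    else:
        integral = N0 * (E2**(2 - Gamma) - E1**(2 - Gamma)) / (2 - Gamma)
    return integral * keV_to_erg

# illustrative values, not fitted parameters of OJ 287
print(powerlaw_energy_flux(N0=1e-3, Gamma=2.0))
```

The special case $\Gamma = 2$ is handled separately because the integrand $E^{1-\Gamma}$ then reduces to $E^{-1}$.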
\subsection{Optical and UV Observations}
Having the Swift Ultraviolet/Optical Telescope (UVOT, \citealt{Roming_2005}) on board with \textit{Swift-XRT} has the advantage of getting simultaneous observations in Optical and UV bands.
\textit{Swift-UVOT} has also observed the OJ 287 in all of the available six filters U, V, B, W1, M2, and W2, simultaneously with the X-ray observations.
The source instrumental magnitudes are extracted following the $uvotsource$ procedure. We have considered the region of 5 arcsec around the source and away from it as the source and the background region, respectively, in our data analysis.
The magnitudes are corrected for Galactic extinction using the reddening E(B-V) = 0.0241 from \citet{Schlafly_2011} and the zero points from \citet{Breeveld_2011}. The magnitudes are then converted into fluxes using the conversion factors estimated by \citet{Poole_2008} and the ratios of extinction to reddening from \citet{Giommi_2006}.
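As an illustration of this step, the de-reddening and magnitude-to-flux conversion can be sketched as follows (a minimal sketch; the function name and arguments are ours, and the actual extinction ratios and zero-point fluxes must be taken from the cited papers):

```python
def extinction_corrected_flux(mag, ext_ratio, ebv, zp_flux):
    """De-redden an observed magnitude and convert it to a flux density.

    mag       : observed magnitude in a given UVOT filter
    ext_ratio : A_lambda / E(B-V) for that filter (e.g. Giommi et al. 2006)
    ebv       : Galactic reddening E(B-V); 0.0241 for OJ 287
    zp_flux   : zero-point flux density of the filter (Breeveld et al. 2011)
    """
    a_lambda = ext_ratio * ebv          # extinction in magnitudes
    m_corr = mag - a_lambda             # de-reddened magnitude
    return zp_flux * 10.0 ** (-0.4 * m_corr)
```

The same function applies to every filter once its extinction ratio and zero point are supplied.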
In the period between Feb 2019 and Jan 2020, we observed OJ 287 using three different telescopes around the globe: the 2.15\,m Jorge Sahade telescope (JS, telescope A) and the 60\,cm Helen Sawyer Hogg telescope (HSH, telescope B) at CASLEO, Argentina; and the
1.3\,m JC Bhattacharya telescope (JCBT; telescope C) at the Vainu Bappu Observatory (VBO), India.
The technical descriptions of the above telescopes are summarized in Table 1 of \citet{2019MNRAS.488.4093A} and
\citet{2021A&A...645A.137A}. The number of observations made in each band on a particular date during our monitoring campaign
is provided in Table\,\ref{tab:obs_log}.
The preliminary data reduction includes bias correction, flat fielding, and cosmic-ray removal, which was performed with
IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatories, which are operated
by the Association of Universities for Research in Astronomy Inc., under a cooperative agreement with the
National Science Foundation.} software. We then processed the cleaned CCD images using the Dominion Astronomical
Observatory Photometry (DAOPHOT II) software \citep{S1987PASP, S1992ASPC} using the aperture photometry
technique through which we obtained instrumental magnitudes for our target and four standard stars located in the same field. A more detailed and comprehensive description of data reduction methods used is given in Section 2 of \citet{2019MNRAS.488.4093A}. Finally, to extract the instrumental
differential light curves (LCs), we selected two non-variable standard stars with magnitude and color very similar to those of the blazar. The calibrated LCs were obtained using star 10 of \citet{1996A&AS..116..403F}.
After constructing the calibrated LCs of our source, we carefully inspected the LCs for any outliers. A handful of
such suspicious data points were detected and corrected.
\subsection{Radio data at 15 GHz}
Owens Valley Radio Observatory (OVRO; \citealt{Richards_2011}) is one of the observatories monitoring bright \textit{Fermi}-detected blazars. It is a 40-meter single-dish antenna working at a frequency of 15 GHz, with which a large number of \textit{Fermi} blazars are monitored twice a week. Our candidate source, OJ 287, is part of the OVRO monitoring program, and we have collected its data from September 2017 to July 2020.
\begin{table}
\caption{Log of photometric observations for the blazar OJ\,287. }
\centering
\noindent
\label{tab:obs_log}
\begin{tabular}{lc|llll}
\hline
\noalign{\smallskip}
Date of & Telescope & \multicolumn{4}{c}{Number of data points} \\
observations & & \\
(yyyy mm dd) & & $B$ & $V$ & $R$ & $I$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
2019 02 15 & C & 0 & 1 & 6 & 1 \\
2019 02 27 & A & 3 & 2 & 30 & 2 \\
2019 03 01 & A &2 & 9 & 27 & 1 \\
2019 03 02 & A &0 & 9 & 22 & 1 \\
2019 03 03 & A &1 &17 &16 & 2 \\
2019 03 09 & C &1 & 1 & 1 & 1 \\
2019 03 11 & C &1 & 1 &1 & 1 \\
2019 03 12 & C &0 & 1 &1 & 0 \\
2019 03 13 & C &1 & 1 & 1 & 1 \\
2019 03 26 & A &2 & 1 &24& 2 \\
2019 04 05 & C &1&1 &17 &1 \\
2019 04 06 & C &1 & 1 & 10 & 1 \\
2019 04 07 & C &1 & 2 &20 & 2 \\
2019 04 09 & A &1 &16 &16 &1 \\
2019 04 10 & A &2 & 10 &10 & 2 \\
2019 04 10 & A &3 & 4 & 12 & 2 \\
2019 04 11 & A &2 & 12 &14 & 2 \\
2019 12 17 & A &0 & 1 &9 & 2 \\
2020 01 03 & A &2 & 2 &120 & 2 \\
2020 01 27 & B &0 & 4 &4 & 1 \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\begin{table*}
\centering
\caption{Fractional variability and variability times estimated for the various states in different wavebands, as shown in Figure \ref{fig:total_lc} and explained in detail in Section 3.1.1.}
\begin{tabular}{ccc c p{0.1cm}}
\hline
\noalign{\smallskip}
Instrument& Various states& Fractional variability& Variability time\\
& & F$_{\rm var}$ & $\tau_{\rm var}$ [days] \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
XRT & A & 0.48$\pm$0.01 & 2.41 & \\
XRT & B & 0.29$\pm$0.01 & 2.39 & \\
XRT & C & 0.25$\pm$0.01 & 2.32 & \\
XRT & D & 0.28$\pm$0.01 & 0.80 & \\
XRT & E & 0.57$\pm$0.01 & 0.98 & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
UVOT-U & A &0.284$\pm$0.003 & 4.99& \\
U & B &0.215$\pm$0.004 & 3.19& \\
U & C &0.103$\pm$0.006 & 15.83 \\
U & D &0.164$\pm$0.005 & 6.94 \\
U & E &0.585$\pm$0.003 & 0.58 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
UVOT-B & A &0.191$\pm$0.032 & 3.75 \\
B & B &0.221$\pm$0.004 & 3.18 \\
B & C &0.140$\pm$0.006 & 19.70 \\
B & D &0.152$\pm$0.005 & 3.56 \\
B & E &0.483$\pm$0.003 & 1.26 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
UVOT-V & A &0.272$\pm$0.003 & 5.25 \\
V & B &0.228$\pm$0.005 & 4.04 \\
V & C &0.104$\pm$0.008 & 10.33 \\
V & D &0.088$\pm$0.007 & 2.82 \\
V & E &0.499$\pm$0.004 & 0.76 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
UVOT-W1& A &0.303$\pm$0.003 & 5.44 \\
W1 & B &0.205$\pm$0.005 & 5.97 \\
W1 & C &0.096$\pm$0.007 & 11.26 \\
W1 & D &0.203$\pm$0.006 & 6.86 \\
W1 & E &0.599$\pm$0.004 & 0.95 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
UVOT-M2& A &0.305$\pm$0.001 & 4.00 \\
M2 & B &0.222$\pm$0.002 & 5.06 \\
M2 & C &0.117$\pm$0.003 & 19.67 \\
M2 & D &0.205$\pm$0.003 & 6.53 \\
M2 & E &0.635$\pm$0.002 & 3.80 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
UVOT-W2& A &0.308$\pm$0.003 & 4.15 \\
W2 & B &0.214$\pm$0.004 & 4.60 \\
W2 & C &0.111$\pm$0.006 & 19.28 \\
W2 & D &0.221$\pm$0.006 & 3.53 \\
W2 & E &0.628$\pm$0.004 & 1.08 \\
\noalign{\smallskip}
\hline
\end{tabular}
\label{tab:var}
\end{table*}
\begin{table*}
\caption{Results of INV observations of OJ 287.}
\label{tab:var_res}
\centering
\resizebox{0.82\textwidth}{!}{
\begin{tabular}{ccccccccc}
\hline\hline \noalign{\smallskip}
Date of observation & Passband & $N$ & $\sigma_1$ & $\sigma_2$ & $\Gamma_{\rm SF}$ & $C$-test & $F$-test & Variable (?)\\ \noalign{\smallskip}
(dd.mm.yyyy) & & & & & & & & \\ \noalign{\smallskip}
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9)\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
27.02.2019 & R & 30 & 0.0063 & 0.0027 & 1.1219 & 2.3389 & 5.4705 & PV \\
01.03.2019 & R & 27 & 0.0105 & 0.0041 & 1.1073 & 2.5505 & 6.5051 & PV \\
02.03.2019 & R & 22 & 0.0031 & 0.0037 & 1.1287 & 0.8236 & 1.4741 & NV \\
03.03.2019 & R & 16 & 0.0026 & 0.0033 & 1.1166 & 0.7715 & 1.6802 & NV \\
& V & 17 & 0.0143 & 0.0075 & 1.1177 & 1.8953 & 3.5921 & NV \\
26.03.2019 & R & 24 & 0.0064 & 0.0027 & 1.0808 & 2.3377 & 5.4647 & PV \\
05.04.2019 & R & 17 & 0.0117 & 0.0133 & 1.1345 & 0.8754 & 1.3049 & NV \\
06.04.2019 & R & 17 & 0.0139 & 0.0128 & 1.1406 & 1.0883 & 1.1845 & NV \\
07.04.2019 & R & 20 & 0.0289 & 0.0088 & 1.0617 & 3.2657 & 10.665 & Var\\
09.04.2019 & R & 16 & 0.0119 & 0.0063 & 1.0389 & 1.8821 & 3.5423 & NV \\
& V & 16 & 0.0182 & 0.0068 & 1.0549 & 2.6690 & 7.1238 & Var\\
10.04.2019 & R & 10 & 0.0052 & 0.0055 & 1.0721 & 0.9523 & 1.1027 & NV \\
& V & 10 & 0.0068 & 0.0024 & 1.0890 & 2.8971 & 8.3932 & Var\\
10.04.2019 & R & 12 & 0.0058 & 0.0049 & 1.0980 & 1.1759 & 1.3828 & NV \\
11.04.2019 & R & 14 & 0.0086 & 0.0042 & 1.0969 & 2.0671 & 4.2730 & PV \\
& V & 12 & 0.0140 & 0.0035 & 1.0994 & 3.9829 & 15.863 & Var\\
03.01.2020 & R & 120 & 0.0076 & 0.0050 & 0.9357 & 1.5211 & 2.3137 & PV \\
\noalign{\smallskip} \hline\noalign{\smallskip}
\end{tabular}}\\
\tablefoot{Table columns read: (2) passband of observation. (3) Number of data points in the given passband. (4)-(5) Dispersions of the blazar DLC and of the control star DLC, respectively. (6) Howell scale factor. (7)-(8) Results of the $C$-test and $F$-test, respectively. (9) Variability status, denoted as follows: Var = variable, PV = possibly variable, NV = non-variable.}
\end{table*}
\begin{figure*}
\begin{center}
\epsfig{figure= 27feb19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 1mar19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 2mar19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 3mar19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 26mar19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 5apr19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 6apr19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 7apr19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 9apr19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 10apr19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 10apr19_2.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 11apr19.eps,height=1.8in,width=2in,angle=0}
\epsfig{figure= 3jan20.eps,height=1.8in,width=2in,angle=0}
\caption{Light curves for OJ\,287; green denotes the $V$ band and red the $R$ band. In each plot, the X-axis is JD and the Y-axis is the source magnitude.
The observation date and the telescope used are indicated in each plot:
telescope A is the 2.15\,m Jorge Sahade telescope (JS) and telescope B the Helen Sawyer Hogg telescope (HSH), both at CASLEO, Argentina;
telescope C is the JC Bhattacharya telescope (JCBT) at the Vainu Bappu Observatory (VBO), India.}
\label{LC_BL1}
\end{center}
\end{figure*}
\section{Results}
A detailed temporal and spectral study has been performed using multi-wavelength data from the \textit{Fermi}-LAT and \textit{Swift}-XRT/UVOT telescopes. The archival data from OVRO are used for the correlation study with the gamma-ray emission.
\subsection{Multi-waveband Fractional and Temporal Variability}
\subsubsection{Multi-waveband Variability}
OJ 287 is listed as a gamma-ray source in the 3FGL (\citealt{3FGL}) and 4FGL (\citealt{4FGL}) Fermi-LAT catalogs. It is one of the most active blazars and hosts a binary black hole system, which makes it one of a kind and thus a particularly interesting source in the Fermi-LAT catalog. It is monitored by various ground- and space-based telescopes across the entire wavelength range. The recent flare seen by Swift-XRT and UVOT at the beginning of 2020 has been confirmed as the second brightest flare of the source in X-ray and optical/UV \citep{Komossa_2020}.
The multi-wavelength light curve from January 2017 to May 2020, from radio (at 15 GHz) to gamma-rays (0.1--300 GeV), is shown in Figure \ref{fig:total_lc}. The whole light curve is divided into various activity states, labeled A to E, based on the variability and flux levels seen in X-ray, optical, and UV. Among these five states, states A and E have higher magnitude/flux values in optical/X-ray and are considered flaring states.
The X-ray flare in state A was earlier studied by
\citet{2018MNRAS.479.1672K} and \citet{Kapanadze_2018}. They found a strong positive correlation between the optical, UV, and X-ray outbursts.
Our results are consistent with theirs, which suggests that the same population of electrons generates the optical, UV, and X-ray outbursts.
The flare in the optical and X-ray bands during state E was widely reported in Astronomer's Telegrams (\citealt{ATel13637}; \citealt{ATel13658}; \citealt{ATel13677}; \citealt{ATel13755}; \citealt{ATel13785}) and studied by \citet{Komossa_2020} and \citet{Kushwaha_2020}.
Here, we provide a broadband temporal and spectral analysis of these states, and broadband SED modeling is also performed to compare the jet parameters between the high (A \& E) and low (B, C, \& D) states.
The upper panel shows the gamma-ray light curve from Fermi-LAT. The source is not very bright in gamma-rays: the flux varies by a factor of nearly $5$ between its lowest and highest states. The average flux during this period is 2.65$\times$10$^{-8}$ ph cm$^{-2}$ s$^{-1}$. The average flux from the 4FGL catalog for 1$-$100 GeV, 0.6$\times$10$^{-8}$ ph cm$^{-2}$ s$^{-1}$, is shown by a horizontal dashed line in Figure \ref{fig:total_lc}. In the last segment of the light curve for the period 2017--2020, the gamma-ray data reach their highest flux of around 1.0$\times$10$^{-7}$ ph cm$^{-2}$ s$^{-1}$.
The X-ray light curve is shown in the second panel. The source is most variable during the first and fifth states, and the flux reached its maximum in early 2017.
The highest flux state in X-rays coincides with the high flux states in radio and optical/UV. In gamma-rays the flux is not very high, but variability is clearly seen in the light curve.
The light curves for the various optical and UV bands are shown in the third and fourth panels. The source is variable across the whole light curve, reaching its maxima in early 2017 and mid 2020. These light curves are similar to the X-ray light curve, which suggests a link between the production sites and physical processes of these emissions. The multi-wavelength SED modeling presented later in this paper addresses these possibilities.
The fifth panel shows the simultaneous radio observations at 15 GHz. The light curve reveals that the source is variable in radio, the maximum variation being a strong decrease from 10 to 1 Jy. \\
The fractional variability estimated for the various states is reported in Table \ref{tab:var}.
The fractional variability is used to characterize the long-term variability in the various bands. It is formulated by \citet{Vaughan_2003} as:
\begin{equation}
F_{\rm var} = \sqrt{\frac{S^2 - err^2}{F^2}},
\end{equation}
where F denotes the mean flux, and S$^2$ and err$^2$ are the variance and the mean square error of the flux, respectively. The error on the flux variability amplitude is given in \citet{Prince_2019}.
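Equation (1) translates directly into code; the following sketch (with our own variable names, not tied to any particular pipeline) evaluates $F_{\rm var}$ for a light curve:

```python
import numpy as np

def fractional_variability(flux, flux_err):
    """Fractional variability amplitude F_var of Vaughan et al. (2003):
    F_var = sqrt((S^2 - <err^2>) / <F>^2)."""
    flux = np.asarray(flux, dtype=float)
    flux_err = np.asarray(flux_err, dtype=float)
    mean_flux = flux.mean()            # <F>
    s2 = flux.var(ddof=1)              # sample variance S^2
    err2 = np.mean(flux_err ** 2)      # mean square measurement error
    return np.sqrt((s2 - err2) / mean_flux ** 2)
```

Note that when the measurement errors dominate (err$^2$ $>$ S$^2$) the argument of the square root becomes negative and F$_{\rm var}$ is undefined.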
The flux doubling/halving time is also estimated for all the states in all bands; these values are further used to characterize the variability of OJ 287.
The flux doubling time, also known as the variability time, is defined as \citep{Zhang_1999}
\begin{equation}
t_d = \frac{(F_1 + F_2)(T_2 - T_1)}{2|F_2 - F_1|}
\end{equation}
where F$_1$ and F$_2$ are the fluxes at times T$_1$ and T$_2$, respectively. The doubling time, or fastest (shortest) variability time (t$_{\rm var}$), is taken as the smallest value among all available pairs in the light curve.
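A hedged sketch of how Equation (2) is applied: the doubling time is evaluated for every pair of observations and the smallest value is kept as t$_{\rm var}$ (function and variable names are ours):

```python
import numpy as np
from itertools import combinations

def fastest_variability_time(times, fluxes):
    """Shortest flux doubling/halving time (Zhang et al. 1999):
    t_d = (F1 + F2)(T2 - T1) / (2 |F2 - F1|), minimized over all pairs."""
    t_var = np.inf
    for (t1, f1), (t2, f2) in combinations(zip(times, fluxes), 2):
        if f1 == f2:
            continue  # no flux change between the two points
        t_d = (f1 + f2) * abs(t2 - t1) / (2.0 * abs(f2 - f1))
        t_var = min(t_var, t_d)
    return t_var
```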
The variability amplitude and the doubling time together characterize the variability of the source in the various states. In segment E, the source is most variable, with the highest variability amplitude among all the states in all wavebands and the shortest (fastest) variability time, of the order of 1 day (Table \ref{tab:var}). The variability amplitude ranges from 50$\%$ to 60$\%$ among the wavebands, and the shortest variability time of OJ 287 in X-rays during 2017 to 2020 is nearly 1 day.
The radio data are very sparse during this entire period, and hence we did not include them in the variability study to draw any meaningful conclusion.
\subsubsection{Intra-day Variability (IDV)}
Considering the modest number of observations in each passband, the variability of the source is
assessed using the $C$-criterion, which compares the dispersions of the (blazar $-$ comparison star) and (control star $-$ comparison star) differential light curves. We also used the $F$-test, which is the ratio of the variance of the blazar instrumental light curve (LC) to that of the standard star.
The above tests are discussed in more detail in \citet{2019MNRAS.488.4093A}.
As shown by
\citet{2017MNRAS.467..340Z}, scaling the dispersion by the Howell factor ($\Gamma_{\rm SF}$; \citealt{1988AJ.....95..247H}) to match the error distributions of the control
star and the target yields the most reliable results.
We call a particular LC to be variable (Var) only when both tests reject the null hypothesis at 99.5\%
confidence level, possibly variable (PV) if just one of the tests rejects the null hypothesis, and
non-variable (NV) if both tests fail to reject the null hypothesis.
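The decision rule described above can be sketched as follows. The $C$-test threshold of 2.576 corresponds to the 99.5\% confidence level; the critical $F$ value for the same level depends on the number of points and has to be supplied by the user (e.g. from statistical tables or scipy.stats.f.ppf). Function and argument names are ours:

```python
import numpy as np

def variability_status(blazar_dlc, control_dlc, c_crit=2.576, f_crit=3.0):
    """Classify an intraday differential light curve (DLC).

    blazar_dlc  : blazar - comparison star DLC
    control_dlc : control star - comparison star DLC
    c_crit      : C-test threshold at the 99.5% confidence level
    f_crit      : F-distribution critical value at 99.5% confidence
                  for (N-1, N-1) degrees of freedom (user-supplied)
    """
    sigma1 = np.std(blazar_dlc, ddof=1)   # dispersion of the blazar DLC
    sigma2 = np.std(control_dlc, ddof=1)  # dispersion of the control DLC
    c_pass = (sigma1 / sigma2) > c_crit        # C-test
    f_pass = (sigma1 / sigma2) ** 2 > f_crit   # F-test on the variances
    if c_pass and f_pass:
        return "Var"   # both tests reject the null hypothesis
    if c_pass or f_pass:
        return "PV"    # only one test rejects it
    return "NV"
```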
Intraday variability (IDV) results for our observation campaign are summarized in Table\,\ref{tab:var_res} where columns 1 -- 8 are,
respectively, observation date, the passband of observation, number of data points in the given passband, dispersion of
blazar differential LC (DLC), dispersion of the control star DLC, Howell's factor, results for $C$-test and $F$-test.
The variability state of the source is given in column 9.
Our total monitoring coverage contains 13 intraday LCs.
The IDV behavior of OJ 287 over the entire duration is displayed in Figure\,\ref{LC_BL1}.
We found only 1 LC (that of April 07, 2019) to be variable according to
our criteria, while 5 LCs were PV. On the remaining seven nights, the source was found to be NV.
Our intraday LCs span a duration of 2 -- 4 hours. Therefore, the relatively short span of observations reduces
the chances of detecting genuine variability. The highest activity level recorded for OJ 287 was in December 2015, by
\citet{2017MNRAS.465.4423G}.
OJ 287 has been monitored for more than a century, with $R$-band data available since 1890. It is one of the most
extensively studied sources, using both photometric and polarimetric observations on diverse timescales.
During the 2015 flare, the source attained V $\sim$ 13.4 mag, R $\sim$ 13.0 mag, and I $\sim$ 12.4 mag.
During the current monitoring session, the brightest state of the source was reached on January 03, 2020, with R $\sim$ 14.38 mag, fainter than its brightest state in 2015 by $\sim$ 1.4 mag, while its faintest state occurred on December 18, 2019, with
an R-band magnitude of 15.15, approximately 2.15 mag fainter than its brightest state during the 2015--2016 outburst.
Significant optical long-term variability (LTV) is also observed for the source, with an R-band magnitude change of $\sim$ 2
(Fig.\,\ref{fig:total_lc}). The variability trends observed during our
monitoring period are quite different from previous ones \citep{2017MNRAS.465.4423G}. The target
did not display strong IDV during the current phase, which could be due to the lower data cadence.
\subsection{Gamma-ray Spectral Analysis}
We have also produced gamma-ray spectra for all the states of the source identified in Figure \ref{fig:total_lc}. The spectra are produced with the help of \textit{likeSED.py}\footnote{https://fermi.gsfc.nasa.gov/ssc/data/analysis/user/}, a Python code provided with the Fermi Science Tools.
First, the \textit{Likelihood} analysis is done with the default power-law (PL) spectral model, and we then changed the model to log parabola (LP) and broken power law (BPL) to obtain the best fit. The details of these models are discussed in \citet{Prince_2018}. The isotropic $\gamma$-ray luminosity corresponding to each spectral model is estimated for all the segments following equation 5 of \citet{Prince_2021}; the values are of the order of 10$^{47}$ $-$ 10$^{48}$ erg/s, lower than the Eddington luminosity ($\sim$10$^{50}$ erg/s) of this source as estimated in Section 3.4. The estimated $\gamma$-ray luminosity values are given in Table \ref{tab:gamma_sed}.
The gamma-ray spectrum and model fitting are shown in Figure \ref{fig:gamma_sed1}, and corresponding model parameters are presented in Table \ref{tab:gamma_sed}.
Considering the PL spectral model, the spectral state of the source softens from segment A ($\Gamma_{\rm PL}$ = 1.90$\pm$0.06) through segments B, C, and D ($\Gamma_{\rm PL}$ = 2.27$\pm$0.08, 2.24$\pm$0.10, and 2.35$\pm$0.10, respectively), and hardens again from segment D to segment E ($\Gamma_{\rm PL}$ = 2.21$\pm$0.06).
The \textit{Likelihood} analysis returns the TS (test statistic; TS $\sim$ 25 corresponds to 5$\sigma$ significance; \citealt{Mattox_1996}) for each model, which is generally used to decide which model best fits the spectral data points. Finally, we measure TS$_{curve}$ = 2(log L(LP/BPL) $-$ log L(PL)), where L represents the likelihood function \citep{Nolan_2012}. TS$_{curve}$ reveals the presence of curvature or a break in the spectrum, which could be caused by the absorption of high-energy photons ($>$ 20 GeV; \citealt{Liu_2006}) by the broad-line region (BLR), assuming the emitting region is located within the BLR. If the emitting region is located outside the BLR, a simple power-law spectral behavior is expected instead. A large positive value of TS$_{curve}$ favors the curved model over the power law.
The various models and their corresponding parameters are given in Table \ref{tab:gamma_sed}. The values of TS$_{curve}$
are close to each other for the different models; the TS values are nearly equal for the PL and LP models but differ for the BPL. However, no single spectral model consistently describes the $\gamma$-ray SEDs of all the segments, which shows that the source had a very complex behavior during 2017$-$2020.
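For reference, the three spectral shapes and the curvature statistic can be written compactly (a sketch with our own normalizations and pivot energies; the actual fits are performed inside the Fermi \textit{Likelihood} tools):

```python
import numpy as np

def power_law(E, N0, gamma, E0=1.0):
    """dN/dE for a power law with photon index gamma."""
    return N0 * (E / E0) ** (-gamma)

def log_parabola(E, N0, alpha, beta, Eb=1.0):
    """dN/dE for a log parabola with curvature parameter beta."""
    return N0 * (E / Eb) ** (-(alpha + beta * np.log(E / Eb)))

def broken_power_law(E, N0, g1, g2, Eb):
    """dN/dE with index g1 below the break energy Eb and g2 above it."""
    E = np.asarray(E, dtype=float)
    return np.where(E < Eb, N0 * (E / Eb) ** (-g1), N0 * (E / Eb) ** (-g2))

def ts_curve(logL_curved, logL_pl):
    """TS_curve = 2 [log L(LP/BPL) - log L(PL)] (Nolan et al. 2012)."""
    return 2.0 * (logL_curved - logL_pl)
```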
\begin{figure*}
\begin{center}
\includegraphics[scale=0.35]{seg-A.eps}
\includegraphics[scale=0.35]{seg-B.eps}
\includegraphics[scale=0.35]{seg-C.eps}
\includegraphics[scale=0.35]{seg-D.eps}
\includegraphics[scale=0.35]{seg-E.eps}
\caption{Gamma-ray SED of all the segments identified during 2017$-$2020 in OJ 287 are modeled with three different spectral models PL, LP, and BPL (see Section 3.2 for more details). The down arrow represents the upper limit in that particular segment.}
\label{fig:gamma_sed1}
\end{center}
\end{figure*}
\begin{table*}
\centering
\caption{The modeled parameters of gamma-ray SED for all the segments identified in Figure 1. Column 3 shows the isotropic $\gamma$-ray luminosity during the various segments, which is lower than the Eddington luminosity ($\sim$10$^{50}$ erg/s) of the source as discussed in Section 3.4. }
\begin{tabular}{c c c c c c c c}
\hline
\noalign{\smallskip}
Various & F$_{0.1-300 \rm{GeV}}$& Luminosity & PowerLaw & & & TS & TS$_{curve}$ \\
states & (10$^{-8}$ ph cm$^{-2}$ s$^{-1}$) & (10$^{48}$ erg s$^{-1}$) & $\Gamma_{\rm PL}$ & & & & \\
\noalign{\smallskip} \hline \noalign{\smallskip}
A & 4.10$\pm$0.50&0.25 & -1.90$\pm$0.06 & -- & -- & 570.20 & --\\
B & 4.90$\pm$0.60&0.80 & -2.27$\pm$0.08 & -- & -- & 313.00 & --\\
C & 5.10$\pm$0.81&0.93 & -2.24$\pm$0.10 & -- & -- & 173.22 & --\\
D & 5.40$\pm$0.81&1.04 & -2.35$\pm$0.10 & -- & -- & 178.32 & --\\
E & 5.20$\pm$0.55&2.92 & -2.21$\pm$0.06 & -- & -- & 551.28 & -- \\
\noalign{\smallskip} \hline \noalign{\smallskip}
&& & LogParabola & \\
& && $\alpha$ & $\beta$ & & & \\
\noalign{\smallskip} \hline \noalign{\smallskip}
A & 3.60$\pm$0.70&0.22 & 1.81$\pm$0.12 & 0.03$\pm$0.03 & -- & 569.48 & -0.72 \\
B & 4.90$\pm$0.60&0.81 & 2.27$\pm$0.08 & 0.00$\pm$0.00 & -- & 313.01 & 0.01 \\
C & 4.70$\pm$0.96&0.84 & 2.19$\pm$0.13 & 0.05$\pm$0.07 & -- & 173.64 & 0.42 \\
D & 5.40$\pm$0.81&0.96 & 2.35$\pm$0.10 & 0.00$\pm$0.00 & -- & 178.30 & 0.02 \\
E & 5.20$\pm$0.55&2.45 & 2.21$\pm$0.06 & 0.00$\pm$0.00 & -- & 551.24& -0.08 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
&& & Broken PowerLaw & & E$_{break}$& & \\
& && $\Gamma_1$ & $\Gamma_2$ & [GeV] & & \\
\noalign{\smallskip} \hline \noalign{\smallskip}
A & 3.70$\pm$0.70&0.21 & -1.77$\pm$0.17 & -1.98$\pm$0.11 & 1.43$\pm$0.74 & 569.44 & -0.76 \\
B & 5.30$\pm$0.30&1.03 & -2.44$\pm$0.83 & -2.16$\pm$0.09 & 0.64$\pm$0.25 & 315.26 & 2.26 \\
C & 3.60$\pm$1.20&1.08 & -1.97$\pm$0.24 & -2.64$\pm$0.36 & 1.78$\pm$0.16 & 144.38 & -28.84 \\
D & 6.20$\pm$0.87&1.16 & -2.64$\pm$0.14 & -1.86$\pm$0.16 & 1.41$\pm$0.13 & 186.32 & 8.00 \\
E & 5.80$\pm$2.00&3.02 & -2.40$\pm$0.45 & -2.04$\pm$0.06 & 1.00$\pm$0.32 & 560.03& 8.75\\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}
\label{tab:gamma_sed}
\end{table*}
\subsection{Correlations Studies}
We have collected multi-wavelength data from January 2017 to May 2020, and five different states are identified based on the flux and variability seen in X-rays and optical/UV. During this period we have not observed any flare in gamma-rays, but the source is variable in this low state, as can be seen in the top panel of Figure \ref{fig:total_lc}. The source flares in X-ray and optical/UV and appears variable in the X-ray, optical, UV, and radio (15 GHz) wavebands, as shown in Figure \ref{fig:total_lc} from top to bottom.
Here we investigate the correlation between the X-ray and optical/UV emission for all the states, since they have good coverage in all these wavebands. The observed time lags between light curves at different wavebands can help locate the emission regions along the jet axis.
To estimate the correlation, we followed the discrete correlation function (DCF) method developed by \citet{Edelson_1988}. Different bin sizes were chosen for the various combinations to examine the DCF peaks. The DCFs estimated for all possible combinations between X-rays and UV/optical are shown in Figure \ref{fig:dcf}; the top row shows the DCF for state A, followed by B, C, D, and E towards the bottom row. The correlation coefficients, time lags, and bin sizes for all the combinations are listed in Table \ref{tab:dcf_tab}. Our results show that the optical$-$X-ray and UV$-$X-ray emissions for states A, B, D, and E are highly correlated, with correlation coefficients above 50$\%$ and time lags within the bin size. This strong correlation with zero time lag suggests that the two emissions share a common emission region. However, for state C we do not observe any correlation between optical/UV and X-rays.
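A stripped-down sketch of the DCF of \citet{Edelson_1988} for two unevenly sampled light curves is shown below (the measurement-error correction term of the original formulation is omitted for brevity; function and variable names are ours):

```python
import numpy as np

def dcf(t1, f1, t2, f2, lag_bins):
    """Discrete correlation function (Edelson & Krolik 1988).

    Returns bin centers and the mean unbinned correlation in each lag bin;
    a positive lag means the second light curve lags the first."""
    f1 = np.asarray(f1, float)
    f2 = np.asarray(f2, float)
    u1 = (f1 - f1.mean()) / f1.std(ddof=1)   # standardized light curves
    u2 = (f2 - f2.mean()) / f2.std(ddof=1)
    udcf = np.outer(u1, u2)                  # unbinned correlations
    lags = np.subtract.outer(np.asarray(t2, float),
                             np.asarray(t1, float)).T  # lag of each pair
    centers, values = [], []
    for lo, hi in zip(lag_bins[:-1], lag_bins[1:]):
        mask = (lags >= lo) & (lags < hi)
        if mask.any():
            centers.append(0.5 * (lo + hi))
            values.append(udcf[mask].mean())
    return np.array(centers), np.array(values)
```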
We also estimated the significance of the DCF peaks by simulating 1000 artificial optical and UV light curves following the Monte Carlo procedure described in \citet{10.1093/mnras/stt764}, for a PSD slope of 1.5. The simulated light curves are cross-correlated with the observed X-ray light curves, and the 2$\sigma$ and 3$\sigma$ significance levels are estimated; these are shown as red and blue dashed lines in Figure \ref{fig:dcf}. In most cases the emissions are correlated above the 2$\sigma$ significance level.
As can be seen in Figure \ref{fig:total_lc}, the radio data are very sparse, and hence we did not include it in the correlation study.
\begin{table}
\centering
\caption{DCF parameters for all the combinations. Most of the time lags are found to be within the bin size.}
\begin{tabular}{c c c c c}
\hline \noalign{\smallskip}
States & Combinations & DCF & Time lags & binsize \\
\noalign{\smallskip} \hline \noalign{\smallskip}
A & V vs X-rays &0.73$\pm$0.05& -2.06& 10.0 \\
& B vs X-rays &0.73$\pm$0.05& -2.06& 10.0 \\
& U vs X-rays &0.72$\pm$0.05& -2.06& 10.0 \\
& W1 vs X-rays &0.76$\pm$0.05& -2.06& 10.0 \\
& M2 vs X-rays &0.71$\pm$0.05& -2.06& 10.0 \\
& W2 vs X-rays &0.74$\pm$0.05& -2.06& 10.0 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
B & V vs X-rays & 0.37$\pm$0.09&-15.42 & 15.0 \\
& B vs X-rays &0.43$\pm$0.08&-10.00 & 20.0 \\
& U vs X-rays &0.37$\pm$0.09&-15.42 & 15.0 \\
& W1 vs X-rays &0.44$\pm$0.12&-5.00& 10.0 \\
& M2 vs X-rays &0.43$\pm$0.10&-11.75& 12.0 \\
& W2 vs X-rays &0.46$\pm$0.10&-11.75& 12.0 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
C & V vs X-rays &0.60$\pm$0.25&37.60& 8.0 \\
& B vs X-rays &0.41$\pm$0.22&40.00& 10.0 \\
& U vs X-rays &0.42$\pm$0.20&40.00& 10.0 \\
& W1 vs X-rays &0.38$\pm$0.28&46.00& 8.0 \\
& M2 vs X-rays &0.49$\pm$0.25&46.00& 8.0 \\
& W2 vs X-rays &0.35$\pm$0.11&29.27& 8.0 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
D & V vs X-rays &0.50$\pm$0.13&-5.00&10.0 \\ & B vs X-rays &0.57$\pm$0.13&-5.00& 10.0 \\
& U vs X-rays &0.61$\pm$0.12&-6.28& 12.0 \\
& W1 vs X-rays &0.66$\pm$0.14&-5.00& 10.0 \\
& M2 vs X-rays &0.63$\pm$0.13&-5.00& 10.0 \\
& W2 vs X-rays &0.63$\pm$0.12&-5.00& 10.0 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
E & V vs X-rays &0.79$\pm$0.04&-5.00& 10.0 \\
& B vs X-rays &0.78$\pm$0.04&-5.00& 10.0 \\
& U vs X-rays &0.78$\pm$0.04&-5.00& 10.0 \\
& W1 vs X-rays &0.79$\pm$0.04&-5.00& 10.0 \\
& M2 vs X-rays &0.80$\pm$0.04&-5.00& 10.0 \\
& W2 vs X-rays &0.79$\pm$0.04&-5.00& 10.0 \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}
\label{tab:dcf_tab}
\end{table}
\begin{figure*}
\centering
\includegraphics[scale=0.55]{DCF_with_2-3sigma.eps}
\caption{Cross-correlation of optical/UV versus X-ray. The five rows correspond to the five states (A, B, C, D, E) defined in Figure \ref{fig:total_lc}. Horizontal red and blue dashed lines mark the 2$\sigma$ and 3$\sigma$ significance levels, respectively. We do not observe any significant time lag in any of the combinations. During state C some time lags appear, but the correlation coefficients are below 50$\%$ and below the 2$\sigma$ significance level, and hence are not considered actual time lags.}
\label{fig:dcf}
\end{figure*}
\subsection{Modeling the Multi-wavelength SEDs}
Good coverage of OJ 287 in various wavebands provides an opportunity to construct the multi-wavelength spectral energy distribution (MWSED), which we use in our modeling. We produced the MWSED using {\it Swift}-XRT, UVOT, and {\it Fermi}-LAT data for all the observations in the different states. The modeling of OJ 287 has been approached in various ways in the past.
We assume that the emission region is located in the jet of the primary black hole: a spherical blob moving down the jet with Doppler factor $\delta$. The shock-accelerated leptons inside this blob lose energy through synchrotron and synchrotron self-Compton (SSC) processes.
We have used the publicly available time-dependent code GAMERA\footnote{http://joachimhahn.github.io/GAMERA} (\citealt{Hahn_2015}) to model the broadband SED. It is a Python-based code that takes an initial injected electron spectrum as input, solves the transport equation (Equation 3), and estimates the propagated electron spectrum. The propagated electron spectrum is then used to calculate the emission from the various processes: synchrotron, SSC, and EC on external photons of various origins (BLR, DT, accretion disk).
We use the following transport equation to find the electron spectrum after energy loss:
\begin{equation}\label{8}
\frac{\partial N(E,t)}{\partial t}=Q(E,t)-\frac{\partial}{\partial E}\Big(b(E,t) N(E,t)\Big)
\end{equation}
where $Q(E,t)$ is the injected spectrum and $N(E,t)$ is the propagated one at time $t$.
The term $b(E,t)$ corresponds to the radiative losses from the different physical processes: synchrotron, SSC, and EC scattering. We have assumed a log-parabola electron distribution as the injected electron spectrum in our modeling.
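To make Equation (3) concrete, here is a toy explicit finite-difference solver for the electron transport equation (GAMERA solves the same equation with far more care; the grid, time step, and function names here are our own illustrative choices):

```python
import numpy as np

def evolve_spectrum(E, Q, b, n0, t_max, dt):
    """Evolve dN/dt = Q(E) - d/dE [ b(E) N ] with forward-Euler steps.

    E  : fixed energy grid
    Q  : injection spectrum Q(E) (here taken time-independent)
    b  : energy-loss rate b(E) (synchrotron + SSC + EC in the real model)
    n0 : initial electron spectrum N(E, t=0)
    """
    N = n0.astype(float).copy()
    loss_rate = b(E)
    for _ in range(int(round(t_max / dt))):
        dflux = np.gradient(loss_rate * N, E)   # d/dE [ b(E) N ]
        N = N + dt * (Q(E) - dflux)
        N = np.clip(N, 0.0, None)               # keep the spectrum physical
    return N
```

With the losses switched off, the spectrum simply grows as $N = N_0 + Q\,t$, which provides a quick sanity check of the scheme.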
The MWSED can be modelled in the leptonic scenario,
where SSC (synchrotron self-Compton) and EC (external Compton) emission generate the high-energy peak. The study by \citet{Kushwaha_2013} of the 2009 flare suggests that the X-ray and $\gamma$-ray emission can be explained by the SSC and EC processes, respectively, where the seed photons for the external Compton come from a thermal bath at 250 K located at a distance of $\sim$9 pc from the SMBH.
In a more recent study by \citet{2018MNRAS.473.1145K}, the December 2015 $-$ May 2016 high state was modelled using both SSC and EC emission.
The December 2015 high activity was predicted to arise from the impact of the secondary black hole on the accretion disk of the primary. The non-thermal emission showed a nearly co-spatial origin. They modelled the gamma-ray flux with external Compton emission of relativistic electrons on the optical$-$UV line emission, which shows the signature of a blue bump in the optical$-$UV flux,
and fitted the X-ray data with SSC emission.
In the present study we find that the source is variable in all wavebands, including $\gamma$-rays, although we did not see any flaring behavior in this band. The observed day-scale variability of the gamma-ray flux suggests that the emission region is located close to the SMBH, within a few parsec of the base of the jet. Since the variability study exhibits day-scale flux variability across all the wavebands, we model the broadband SED with only synchrotron and SSC emission within a single emission zone. The correlation study also suggests that the emissions are produced at the same location.
The SSC emission is determined by the synchrotron emission and the size of the emission region. The synchrotron emission depends on the magnetic field in the emission region in the jet and the energy of the leptons.
The size of the emission region can be constrained by the variability time scale. Considering a 1 day variability time in the $\gamma$-ray data, we have estimated the size of the emission region through the relation $r \sim c\,t_{var}\,\delta/(1+z)$, where $\delta = 20$, and it is found to be $r \sim 4.0\times10^{16}$ cm.
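As a quick cross-check, this light-crossing estimate can be reproduced directly. The redshift $z = 0.306$ of OJ 287 is assumed here, since it is not quoted in this section:

```python
# Light-crossing estimate of the emission-region size, r ~ c * t_var * delta / (1 + z).
C_CM_S = 2.998e10         # speed of light [cm/s]
DAY_S = 86400.0           # one day [s]

def emission_region_size(t_var_days, delta, z):
    """Upper limit on the emission-region radius in cm."""
    return C_CM_S * t_var_days * DAY_S * delta / (1.0 + z)

r = emission_region_size(t_var_days=1.0, delta=20.0, z=0.306)
print(f"r ~ {r:.1e} cm")  # ~4.0e16 cm, as quoted above
```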
However, in our modeling the size of the emission region
is a free parameter and we have found that a smaller size of the emission region is required to explain the broadband SED.
We also have many other parameters in our model, e.g., magnetic field, injected electron spectrum, lower and higher energy cut-offs in the injected electron spectrum, normalization of the electron spectrum, and these are optimized to achieve the best SED fit.
The MWSED modeling results are depicted in Figure \ref{fig:MWSED} for the various states, and their corresponding best-fit parameters are tabulated in Table \ref{tab:sed_param}. States C $\&$ D are very similar to each other in variability and flux states, and hence we only show the SED modeling of state C.
The modeling confirms that the low and high energy peaks can be constrained with synchrotron and SSC, respectively.
In Figure \ref{fig:MWSED}, it can be seen that the source emits more in the optical-UV than in $\gamma$-rays. Hence a large magnetic field value ($\sim$4--7 Gauss) is required to fit the data.
\begin{figure*}
\includegraphics[scale=0.4]{Seg-A-SSC.eps}
\includegraphics[scale=0.4]{Seg-B-SSC.eps}
\includegraphics[scale=0.4]{Seg-C-SSC.eps}
\includegraphics[scale=0.4]{Seg-E-SSC.eps}
\caption{The MWSEDs for the various segments observed during 2017$-$2020. The `dot-dashed' and `dashed' lines in different colors in the synchrotron and SSC peaks show the time evolution of the model. The down arrows represent the $\gamma$-ray upper limits. The optical/UV, X-ray, and $\gamma$-ray data points are shown in red, blue, and magenta respectively.}
\label{fig:MWSED}
\end{figure*}
The previous broadband SED modeling of OJ 287 at different occasions of low and bright state (\citealt{Kushwaha_2013}, \citealt{2018MNRAS.473.1145K, 2018MNRAS.479.1672K}) was carried out with a
range of values of the Doppler and Lorentz factor. In this study, we have fixed the Doppler and Lorentz factor of the blob at 20 and 15.5 respectively, which are similar to the values reported in earlier papers.
\begin{table*}
\centering
\caption{Multi-wavelength SED modeling results with the best-fit parameter values. The injected electron distribution is a LogParabola with reference energy 60 MeV. The Doppler factor and the Lorentz factor are fixed at 20.0 and 15.5 respectively.}
\begin{tabular}{c c c c c}
\hline \noalign{\smallskip}
high state& Parameters & Symbols & Values & Period \\
\noalign{\smallskip} \hline \noalign{\smallskip}
Segment-A & &&& 183 days\\
& Size of the emitting zone& r & 2.6$\times$10$^{15}$ cm & \\
& Min Lorentz factor of emitting electrons & $\gamma_{min}$& 350.0 &\\
& Max Lorentz factor of emitting electrons & $\gamma_{max}$& 2.8$\times$10$^{4}$ &\\
& Input injected electron spectrum (LP) & $\alpha$ & 1.60 & \\
& Curvature parameter of the LP spectrum & $\beta$& 0.02 & \\
& Magnetic field in emitting zone & B & 5.9 G & \\
& Jet power in electrons & P$_{j,e}$ & 4.35$\times$10$^{44}$ erg/s & \\
& Jet power in magnetic field & P$_{j,B}$ & 2.12$\times$10$^{44}$ erg/s & \\
& Jet power in protons & P$_{j,P}$ & 3.39$\times$10$^{43}$ erg/s& \\
& Total jet power & P$_{jet}$ & 6.81$\times$10$^{44}$ erg/s& \\
\noalign{\smallskip} \hline \noalign{\smallskip}
Segment-B & &&& 225 days \\
& Size of the emitting zone& r & 2.6$\times$10$^{15}$ cm & \\
& Min Lorentz factor of emitting electrons & $\gamma_{min}$& 120.0 &\\
& Max Lorentz factor of emitting electrons & $\gamma_{max}$& 3.6$\times$10$^{4}$ &\\
& Input injected electron spectrum (LP) & $\alpha$ & 1.68 & \\
& Curvature parameter of the LP spectrum & $\beta$& 0.005 & \\
& Magnetic field in emitting zone & B & 4.2 G & \\
& Jet power in electrons & P$_{j,e}$ & 2.59$\times$10$^{44}$ erg/s & \\
& Jet power in magnetic field & P$_{j,B}$ & 1.07$\times$10$^{44}$ erg/s & \\
& Jet power in protons & P$_{j,P}$ & 3.56$\times$10$^{43}$ erg/s& \\
& Total jet power & P$_{jet}$ & 4.02$\times$10$^{44}$ erg/s& \\
\noalign{\smallskip} \hline \noalign{\smallskip}
Segment-C & &&& 121 days \\
& Size of the emitting zone& r & 2.6$\times$10$^{15}$ cm & \\
& Min Lorentz factor of emitting electrons & $\gamma_{min}$& 160.0 &\\
& Max Lorentz factor of emitting electrons & $\gamma_{max}$& 3.6$\times$10$^{4}$ &\\
& Input injected electron spectrum (LP) & $\alpha$ & 1.68 & \\
& Curvature parameter of the LP spectrum & $\beta$& 0.005 & \\
& Magnetic field in emitting zone & B & 4.2 G & \\
& Jet power in electrons & P$_{j,e}$ & 2.96$\times$10$^{44}$ erg/s & \\
& Jet power in magnetic field & P$_{j,B}$ & 1.07$\times$10$^{44}$ erg/s & \\
& Jet power in protons & P$_{j,P}$ & 4.06$\times$10$^{43}$ erg/s& \\
& Total jet power & P$_{jet}$ & 4.44$\times$10$^{44}$ erg/s& \\
\noalign{\smallskip} \hline \noalign{\smallskip}
Segment-E & &&& 326 days \\
& Size of the emitting zone& r & 2.6$\times$10$^{15}$ cm & \\
& Min Lorentz factor of emitting electrons & $\gamma_{min}$& 1.4$\times$10$^{3}$ &\\
& Max Lorentz factor of emitting electrons & $\gamma_{max}$& 1.5$\times$10$^{4}$ &\\
& Input injected electron spectrum (LP) & $\alpha$ & 1.6 & \\
& Curvature parameter of the LP spectrum & $\beta$& 0.005 & \\
& Magnetic field in emitting zone & B & 6.7 G & \\
& Jet power in electrons & P$_{j,e}$ & 3.26$\times$10$^{44}$ erg/s & \\
& Jet power in magnetic field & P$_{j,B}$ & 2.73$\times$10$^{44}$ erg/s & \\
& Jet power in protons & P$_{j,P}$ & 1.32$\times$10$^{43}$ erg/s& \\
& Total jet power & P$_{jet}$ & 6.12$\times$10$^{44}$ erg/s& \\
\noalign{\smallskip} \hline \noalign{\smallskip}
\end{tabular}
\label{tab:sed_param}
\end{table*}
We have also estimated the total jet power and the power in the individual components of the jet: leptons, the magnetic field, and protons. We assume that the number ratio of leptons to protons in the jet is 20:1 and estimate the jet power in leptons and protons separately.
The total jet power is generally defined as,
\begin{equation}
P_{jet} = \pi r^2 \Gamma^2 c (U'_e + U'_B + U'_p)
\end{equation}
where, $U'_e$, $U'_B$, and $U'_p$ are the energy densities in leptons, magnetic field and protons in the jet or co-moving frame. The values of the size of the emission region (`$r$') and the Lorentz factor ($\Gamma$) are already provided in the discussion above.
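As an illustration (a sketch, not the code used for the fits), the tabulated magnetic-field contribution to the jet power for Segment-A can be recovered from $B$, $r$, and $\Gamma$, assuming $U'_B = B^2/8\pi$:

```python
import math

# P_{j,B} = pi * r^2 * Gamma^2 * c * U'_B, with U'_B = B^2 / (8 pi) in CGS units.
C_CM_S = 2.998e10   # speed of light [cm/s]

def jet_power_B(B_gauss, r_cm, gamma):
    u_B = B_gauss**2 / (8.0 * math.pi)      # magnetic energy density [erg/cm^3]
    return math.pi * r_cm**2 * gamma**2 * C_CM_S * u_B

P_B = jet_power_B(B_gauss=5.9, r_cm=2.6e15, gamma=15.5)
print(f"P_j,B ~ {P_B:.2e} erg/s")  # ~2.1e44 erg/s, matching the Segment-A entry in Table 1
```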
The total jet power calculated for all the states is shown in Table \ref{tab:sed_param}, and the value is much smaller than the Eddington luminosity of the source. The Eddington luminosity for the primary BH is estimated as L$_{Edd}$ = 4$\pi$Gmm$_p$c/$\sigma_T$, where `m' is the mass of the primary BH, m$_p$ is the proton mass, and $\sigma_T$ is the Thomson scattering cross-section. The primary BH mass was estimated by \citet{2018MNRAS.473.1145K} by modeling the NIR-optical spectrum with an accretion disk, and the reported value is $\sim$1.8$\times$10$^{10}$M$_\odot$. The Eddington luminosity is then $\sim$2.3$\times$10$^{48}$ erg/s, which is much higher than the total jet power estimated in this study from SED modeling.
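This estimate can be sketched in CGS units; the physical constants below are standard values and are assumptions, not taken from the text:

```python
import math

# Eddington luminosity L_Edd = 4 pi G m m_p c / sigma_T (CGS units).
G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # solar mass [g]
M_P = 1.6726e-24      # proton mass [g]
C = 2.998e10          # speed of light [cm/s]
SIGMA_T = 6.652e-25   # Thomson cross-section [cm^2]

def eddington_luminosity(mass_msun):
    m = mass_msun * M_SUN
    return 4.0 * math.pi * G * m * M_P * C / SIGMA_T

L_edd = eddington_luminosity(1.8e10)
print(f"L_Edd ~ {L_edd:.2e} erg/s")
```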
Modeling the high optical flux state with synchrotron emission requires a higher value of the magnetic field and hence higher jet power in the magnetic field. It is found that during the flaring states A \& E the total jet power is 1.5 times higher than that estimated for the low states B \& C. The SED modeling also suggests that more luminosity in highly energetic electrons is required to produce the broadband emission during the flaring states A \& E.
The non-thermal flares during states A and E might have resulted from the disk impacts of November--December 2015 (\citealt{Valtonen_2016}) and July--September 2019 (\citealt{Laine_2020}) respectively, when thermal flares were observed.
The injection of highly energetic electrons into the jet could be due to the time-delayed increase in the accretion rate and jet activity triggered by the disk impact of the secondary black hole or by tidal disruption events \citep{Sundelius_1997}.
The variable accretion rate causes internal shock in the jet which accelerates electrons and they lose energy radiatively (\citealt{Valtonen_2006}).
The model of \citet{Sundelius_1997} predicted a major increase in the accretion rate at the beginning of January 2020. However, the non-thermal flares occurred during April--June 2020, nearly 4 months after the predicted time. The physical explanation for this delay requires a better understanding of the disk-jet connection, as discussed by \citet{Komossa_2020}.
\section{Discussions and Conclusions}
During the period 2017--2020, the blazar OJ 287 did not show any bright flaring state in $\gamma$-rays. However, high flux states across the optical-UV and X-ray wavebands were reported in various ATel notifications during that period.
Flux variability was also observed at optical, UV, X-ray, and $\gamma$-ray frequencies.
Five states have been identified as A, B, C, D, \& E based on the flux and fractional variability seen in optical-UV and X-rays. States A and E appear to be the brightest of all in Figure \ref{fig:total_lc}, which can also be verified by the total jet power (Table \ref{tab:sed_param}) found from the modeling of these states. The variability times found range from 12 hr to $\sim$20 days across all states. The fastest variability time in X-rays is of the order of 1 day. The optical bands U, B, \& V have the shortest variability times of $\sim$14 hr, 30 hr, \& 18 hr, while in the UV they are of the order of 1 day, 4 days, and 1 day for the W1, M2, \& W2 bands respectively. Though the source was not bright in $\gamma$-rays, we have produced the $\gamma$-ray spectrum for the different states to check for any variation in the spectrum.
The $\gamma$-ray data show day-scale variability, and the maximum flux variation between the high and low states is found to be a factor of 5.
The $\gamma$-ray data can be fitted well with either the PL or the LP model; the values of the test statistic are similar for both models.
Further, we have estimated the correlations between various wavebands in order to understand whether they have a common emission region. The results show that emission is highly correlated (within the errorbars) between the different bands, which suggests their co-spatial origin. A single-zone emission model is applied to explain the multi-wavelength emission by performing the MWSED modeling. The SED modeling confirms the presence of high magnetic field in the jet and that the jet emission is powered by relativistic electrons.
\par
In the binary black hole model the primary black hole is surrounded by an accretion disk. The orbit of the secondary black hole around the primary black hole is such that it intersects the accretion disk of the primary black hole two times.
The major outbursts, which occur at intervals of approximately 12 years, could be due to tidally induced mass flows. One such outburst is expected for every pericenter passage of the secondary \citep{Sundelius_1997}. \citet{Pihajoki_2013} theoretically predicted the timings of the precursor flares and compared them with the observed flares in the light curve of OJ 287. Based on the model of \citet{Sundelius_1997}, a major after-flare was expected in January 2020, but it was observed in May 2020.
The physical conditions that affect this time delay include disk/corona properties and geometry, magnetic field geometry, and shock formation in the jet; they are not yet well understood \citep{Komossa_2020}, which makes it very hard to predict this delay from first principles.
\par
The disk impact model predicts thermal bremsstrahlung radiation, seen as outbursts at optical-UV frequencies, due to the impact of the secondary black hole on the accretion disk of the primary black hole \citep{Lehto_1996}. This model successfully predicted the impact flares
in 2007, 2015, and 2019 (\citealt{Dey_2021}). The disk impact triggers a time-delayed increase in the accretion rate and jet activity, which leads to after-flare effects. The flares in states A \& E in our work can be explained with this model. During states B, C, \& D there was no flare; hence a lower jet power is needed to model these low states.
\par
Microvariability studies of blazars are among the most relevant probes of the physical conditions very close to the central supermassive black hole. The exact phenomenon behind IDV in blazars is still under debate.
Flux variations in blazars on intraday timescales almost certainly arise from intrinsic factors inherent to the blazar jets, such as shocks in the helical jets \citep{2015JApA...36..255C}, blobs of plasma traversing the Doppler-boosted magnetized jet, or the formation of ultra-relativistic mini jets in the helical jet itself. In the low state of blazars, an alternative source of optical IDV is the accretion disc \citep[AD, e.g.][]{1993A&A...271..216C}. According to the AD-based models, instabilities, hot spots, or any other enhanced emission on the AD can yield optical IDV in blazars when the source is in a low state. The presence of confirmed IDV on only 1 night out of 13 most likely indicates a uniform jet emission; any change in the direction of the relativistic shock with respect to our line of sight (LOS), if present at all, is very weak.
LTV in blazars can be attributed to both intrinsic and extrinsic factors. Extrinsic mechanisms involve geometrical effects such as deviation of the emitting region with respect to the LOS, thus causing variation in the Doppler factor \citep{2009A&A...504L...9V}, which in turn is observed as variation on a long-term basis. Long-term flux variations in blazar LCs can also be caused by the launching of new shocks. In general, optical IDV in blazars involves both intrinsic and extrinsic mechanisms \citep{2016ApJ...820...12P} that are usually difficult to disentangle.
\citet{Komossa_2020} studied a large X-ray data sample from 2015 to 2020, and their results pose a few fundamental questions. They observed strong flares in the X-ray, optical, and UV bands. The observation at the peak of the X-ray flare shows a steep power-law spectrum with index 2.8, which is very rare in blazars but consistent with a synchrotron origin of the X-ray emission. They concluded that the emission is jet-driven, which is consistent with the binary black hole model. In another study, \citet{Kushwaha_2020} noted a spectral change in the X-ray emission during 2017 to 2020 and also suggested that it could be emission from the jet. Here we have modeled the various low and high states observed during 2017 to 2020, considering that the emission is produced inside the jet.
It was reported earlier by \citet{2018MNRAS.473.1145K} that the source was active during December 2015 $-$ April 2016 at IR to $\gamma$-ray frequencies. However, another study by \citet{2018MNRAS.479.1672K} for the period June 2016 $-$ September 2017 found that the source was very bright from IR to X-rays, but no variability was seen in $\gamma$-rays. A similar kind of behavior is seen between early 2017 and mid-2020, when the source was flaring in optical-UV and X-rays but not very active in $\gamma$-rays. The different behaviors at different frequencies and epochs make this source very complex in nature. Many more observational and theoretical studies are required to understand the complex nature of the blazar OJ 287.
\section*{Acknowledgements}
The project was partially supported by the Polish Funding Agency National Science Centre, project 2017/26/A/ST9/00756 (MAESTRO 9), and MNiSW grant DIR/WK/2018/12.
Based on data obtained at Complejo Astronómico El Leoncito, operated under agreement between the Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina and the National Universities of La Plata, Córdoba and San Juan.
\bibliographystyle{aa}
\section{Introduction}
A prominent question in the study of (modal) logics and their semantics
is what classes of frames can be defined as the class of frames satisfying
some set of formulae.
Such a class is usually called \emph{axiomatic} or \emph{modally definable}.
A milestone result partially answering this question in the realm of classical
normal modal logic is from Goldblatt and Thomason
and dates back to 1974~\cite{GolTho74}. It states that
an elementary class of Kripke frames is axiomatic if and only if
it reflects ultrafilter extensions and is closed under p-morphic images,
generated subframes and disjoint unions.
The proof in~\cite{GolTho74} relies on Birkhoff's variety theorem~\cite{Bir35}
and makes use of the algebraic semantics of the logic.
A model-theoretic proof was provided almost twenty years later by
Van Benthem~\cite{Ben93}.
A similar result for (non-modal) intuitionistic logic was proven by
Rodenburg \cite{Rod86} (see also \cite{Gol05}), where the interpreting
structures are \emph{intuitionistic} Kripke frames and models.
This, of course, requires analogues of the notions of p-morphic images,
generated subframes, disjoint unions and ultrafilter extensions.
While the first three carry over straightforwardly from the setting
of classical normal modal logic,
ultrafilters need to be replaced by \emph{prime filters}.
In recent years, Goldblatt-Thomason style theorems (which we will simply refer
to as ``Goldblatt-Thomason theorems'') for many other logics have
been proven, including for positive normal modal logic \cite{CelJan99},
graded modal logic \cite{SanMa10},
modal extensions of {\L}ukasiewicz finitely-valued logics \cite{Teh16},
LE-logics \cite{ConPalTzi18-arxiv},
and modal logics with a universal modality \cite{SanVir19}.
A general Goldblatt-Thomason theorem for coalgebraic logics for
$\cat{Set}$-coalgebras was given in~\cite{KurRos07}.
In the present paper we prove Goldblatt-Thomason theorems for
modal intuitionistic logics.
These include the extensions of intuitionistic logic with
a normal modality~\cite{WolZak97,WolZak98,WolZak99},
a monotone one~\cite[Sec.~6]{Gol93},
a neighbourhood modality~\cite{DalGreOli20},
and a strict implication modality~\cite{LitVis18,LitVis19,GroLitPat21}.
For each we obtain:
\begin{center}
{\it
A class $\ms{K}$ of frames closed under prime filter extensions is axiomatic if \\
and only if it reflects prime filter extensions and is closed under \\
disjoint unions, regular subframes and p-morphic images.}
\end{center}
Instead of proving each of these results individually, we prove a more
general Goldblatt-Thomason theorem for \emph{dialgebraic intuitionistic logics},
merging techniques from~\cite{Gol05} and~\cite{KurRos07}.
We then apply this to specific instances.
Dialgebraic logic slightly generalises coalgebraic logic and
was recently introduced in \cite{GroPat20}.
It provides a framework where modal logics are developed
parametric in the signature
of the language and a functor $\fun{T} : \cat{C}' \to \cat{C}$, where
$\cat{C}'$ is some subcategory of $\cat{C}$.
While coalgebraic logics are too restrictive to describe modal intuitionistic
logics (see e.g.~\cite[Rem.~8]{Lit17-arxiv}, \cite[Sec.~2]{GroPat20}),
the additional flexibility of dialgebraic logic does allow us to
model a number of them.
The paper is structured as follows.
In Sec.~\ref{sec:prelim} we recall a semantics for the extension
of intuitionistic logic with a normal modality $\Box$ from~\cite{WolZak99}.
Using this as running example, in Sec.~\ref{sec:general} we
recall the basics of dialgebraic logic and prove the
Goldblatt-Thomason theorem.
In particular, this yields a new Goldblatt-Thomason theorem for the
logic and semantics from Sec.~\ref{sec:prelim}.
In Sec.~\ref{sec:app} we instantiate the general theorem to several more
modal intuitionistic
logics from the literature to obtain new Goldblatt-Thomason theorems.
\bigskip\noindent
\textit{Related version.} \,
This paper has been accepted for publication at AiML 2022.
\section{Normal Modal Intuitionistic Logic}\label{sec:prelim}
For future reference, we recall
the extension of
intuitionistic logic with a unary meet-preserving modality from Wolter and
Zakharyaschev~\cite{WolZak98,WolZak99}.
\begin{definition}
Denote the language of intuitionistic logic by $\lan{L}$, with
proposition letters from some countably infinite set $\Prop$.
That is, $\lan{L}$ is generated by the grammar
$$
\phi ::= \top \mid \bot \mid p \mid \phi \wedge \phi \mid
\phi \vee \phi \mid \phi \to \phi,
$$
where $p \in \Prop$.
Write $\lan{L}_{\Box}$ for its extension with a unary operator $\Box$.
Further, let $\log{L}$ be the intuitionistic propositional calculus,
and let $\log{L}_{\Box}$ be the logic that arises from extending an
axiomatisation for $\log{L}$ (that we assume includes uniform substitution)
with the axioms and rule
\begin{equation}\label{eq:box-ax-rule}
\Box\top \leftrightarrow \top, \qquad
\Box p \wedge \Box q \leftrightarrow \Box(p \wedge q), \qquad
(p \leftrightarrow q)/(\Box p \leftrightarrow \Box q)
\end{equation}
\end{definition}
We write $\cat{Pos}$ for the category of posets and order-preserving
functions. In this paper, we define an \emph{intuitionistic Kripke frame} as a
poset and we write
$\cat{Krip}$ for the full subcategory of $\cat{Pos}$ whose morphisms
are p-morphisms \cite[Sec.~2.1.1]{Bez06}.
(Sometimes intuitionistic Kripke frames are defined to be preorders.
For the results presented in this paper there is no discernible difference.)
\begin{definition}
A \emph{$\Box$-frame} is a triple $(X, \leq, R)$ where $(X, \leq)$ is an
intuitionistic Kripke frame and $R$ is a relation on $X$ satisfying
$
({\leq} \circ R \circ {\leq}) = R.
$
Adding a valuation $V : \Prop \to \fun{Up}(X, \leq)$
($= \{a \subseteq X \mid x \in a \text{ and } x \leq y \text{ implies } y \in a \}$)
yields a \emph{$\Box$-model}, in which we can interpret
$\lan{L}_{\Box}$-formulae.
Proposition letters are interpreted via the valuation,
intuitionistic connectives are interpreted as usual in the underlying intuitionistic
Kripke frame and a state $x$ satisfies $\Box\phi$ if all its
$R$-successors satisfy $\phi$.
\end{definition}
While morphisms are not defined in~\cite{WolZak98,WolZak99}, there is
an obvious choice:
\begin{definition}
A \emph{$\Box$-morphism} from $(X, \leq, R)$ to $(X', \leq', R')$
is a function $f : X \to X'$ such that for $E \in \{ \leq, R \}$
and for all $x, y \in X$ and $z' \in X'$:
\begin{itemize}
\item If $xEy$ then $f(x)E'f(y)$;
\item If $f(x)E'z'$ then $\exists z \in X$ such that $xEz$ and $f(z) = z'$.
\end{itemize}
We write $\cat{WZ\Box}$ for the category of $\Box$-frames and -morphisms.
\end{definition}
The algebraic semantics of $\log{L}_{\Box}$ is given as follows.
\begin{definition}
A \emph{Heyting algebra with operators} (HAO) is a pair
$(A, \Box)$ of a Heyting algebra $A$ and a function $\Box : A \to A$
satisfying $\Box\top = \top$ and $\Box a \wedge \Box b = \Box(a \wedge b)$
for all $a, b \in A$.
Together with $\Box$-preserving Heyting homomorphisms,
these constitute the category $\cat{HAO}$.
\end{definition}
We briefly recall some categories, functors and natural transformations.
\begin{definition}\label{def:fun-nat}
$\cat{DL}$ and $\cat{HA}$ denote the categories of distributive
lattices and Heyting algebras.
Let $\fun{u\kern-.1em p}$ be the contravariant functor $\cat{Pos} \to \cat{DL}$
that sends a poset to the distributive lattice of its upsets and
an order-preserving function $f$ to $f^{-1}$.
Write $\fun{pf} : \cat{DL} \to \cat{Pos}$
for the contravariant functor
sending $A \in \cat{DL}$ to the set of prime filters of $A$ ordered
by inclusion, and a homomorphism to its inverse image.
These restrict to $\fun{u\kern-.1em p'} : \cat{Krip} \to \cat{HA}$
and $\fun{pf'} : \cat{HA} \to \cat{Krip}$.
Let $\eta : \fun{id}_{\cat{Pos}} \to \fun{pf} \circ \fun{u\kern-.1em p}$
and $\theta : \fun{id}_{\cat{DL}} \to \fun{u\kern-.1em p} \circ \fun{pf}$
be the natural trans\-for\-ma\-tions defined by
$\eta_{(X, \leq)}(x) = \{ a \in \fun{u\kern-.1em p}(X, \leq) \mid x \in a \}$ and
$\theta_A(a) = \{ \ff{p} \in \fun{pf}A \mid a \in \ff{p} \}$.
(These are the units of the dual adjunction between $\cat{Pos}$ and
$\cat{DL}$.)
Furthermore, $\theta$ restricts to the natural transformation
$\theta' : \fun{id}_{\cat{HA}} \to \fun{u\kern-.1em p'} \circ \fun{pf'}$.
\end{definition}
Every $\Box$-frame $(X, \leq, R)$ yields a HAO $(\fun{u\kern-.1em p'}(X, \leq), \Box_R)$
(called its \emph{complex algebra}),
with $\Box_R(a) = \{ x \in X \mid xRy \text{ implies } y \in a \}$.
Conversely, every HAO $(A, \Box)$ gives rise to a $\Box$-frame
$(\fun{pf'}A, \subseteq, R_{\Box})$, where
$\ff{p}R_{\Box}\ff{q}$ iff for all $a \in A$, $\Box a \in \ff{p}$ implies $a \in \ff{q}$.
Concatenating these constructions yields:
\begin{definition}
The \emph{prime filter extension} of a $\Box$-frame $(X, \leq, R)$
is the frame $(X^{pe}, \subseteq, R^{pe})$, where $X^{pe}$ is the set of
prime filters on $(X, \leq)$ and $R^{pe}$ is defined by
$\ff{p}R^{pe}\ff{q}$ iff for all $a \in \fun{u\kern-.1em p'}(X, \leq)$,
$\Box_R(a) \in \ff{p}$ implies $a \in \ff{q}$.
\end{definition}
\section{A General Goldblatt-Thomason Theorem}\label{sec:general}
We restrict the framework of dialgebraic logic~\cite{GroPat20}
to an intuitionistic base. Within this, we prove a Goldblatt-Thomason theorem.
Throughout this section, we show how general constructions specialise to the normal modal
intuitionistic logic from Sec.~\ref{sec:prelim}.
Our focus on an intuitionistic propositional base allows us
to augment the framework of dialgebraic logic from~\cite{GroPat20}
in the following ways:
\begin{itemize}
\item In~\cite{GroPat20} a logic is identified via an initial object
in some category, which plays the role of the Lindenbaum-Tarski algebra.
Here we define logics explicitly, by means of an axiomatisation.
\item Whereas proposition letters in~\cite{GroPat20} are regarded as
predicate liftings, here we elevate them to a
special status. This has two reasons: first, it simplifies the
connection to (frames and models for) modal intuitionistic logics
from the literature; second, they
facilitate the use of Birkhoff's variety theorem.
\item We give dialgebraic definitions of subframes, p-morphic images and
disjoint unions, and corresponding preservation results.
\item We give prime filter extensions for models (not just for frames).
\end{itemize}
We work towards a Goldblatt-Thomason theorem as follows.
First we recall the use of dialgebras as frames for modal extensions
of intuitionistic logic (Sec.~\ref{subsec:lan-frm}),
and we prove some invariance properties (Sec.~\ref{subsec:invariance}).
Then we describe algebraic semantics and prime filter extensions
dialgebraically (Sec.~\ref{subsec:algsem} and~\ref{subsec:pe}).
This culminates in the Goldblatt-Thomason theorem in Sec.~\ref{subsec:gt}.
\subsection{Languages and Frames}\label{subsec:lan-frm}
Dialgebras were introduced by Hagino in~\cite{Hag87} to describe data types.
Here we use them
to describe frames for modal intuitionistic logics.
\begin{definition}
Let $\fun{F}, \fun{G} : \cat{C} \to \cat{D}$ be functors.
An \emph{$(\fun{F}, \fun{G})$-dialgebra} is a pair $(X, \gamma)$ where $X \in \cat{C}$
and $\gamma : \fun{F}X \to \fun{G}X$ is a $\cat{D}$-morphism.
An \emph{$(\fun{F}, \fun{G})$-dialgebra morphism}
from $(X, \gamma)$ to $(X', \gamma')$ is a $\cat{C}$-morphism
$f : X \to X'$ such that $\fun{G}f \circ \gamma = \gamma' \circ \fun{F}f$.
They constitute the category $\cat{Dialg}(\fun{F}, \fun{G})$.
In diagrams:
$$
\begin{tikzcd}[row sep=-.1em, column sep=1em]
& \fun{F}X
\arrow[dd, "\gamma"]
&
&
& \fun{F}X
\arrow[dd, "\gamma" left]
\arrow[rr, "\fun{F}f"]
&
& \fun{F}X'
\arrow[dd, "\gamma'"] \\
\text{objects:}
&
&
& \text{arrows:}
&
&
& \\
& \fun{G}X
&
&
& \fun{G}X
\arrow[rr, "\fun{G}f", below]
&
& \fun{G}X'
\end{tikzcd}
$$
\end{definition}
We will be concerned with two classes of dialgebras. First,
$(\fun{i}, \fun{T})$-dialgebras, where $\fun{i} : \cat{Krip} \to \cat{Pos}$
is the inclusion functor and $\fun{T} : \cat{Krip} \to \cat{Pos}$ is
any functor, serve as frame semantics for our dialgebraic intuitionistic
logics.
Second, dialgebras for functors $\cat{HA} \to \cat{DL}$
will be used as algebraic semantics.
\begin{example}
Let $\fun{P_{up}} : \cat{Krip} \to \cat{Pos}$ be the functor that
sends an intuitionistic Kripke frame $(X, \leq)$ to its set of
upsets ordered by reverse inclusion, and a p-morphism
$f : (X, \leq) \to (X', \leq')$ to
$\fun{P_{up}}f : \fun{P_{up}}(X, \leq) \to \fun{P_{up}}(X', \leq')
: a \mapsto f[a]$.
Then identifying a relation $R$ on $X$ with the map $\gamma_R : (X, \leq) \to \fun{P_{up}}(X, \leq) : x \mapsto \{ y \in X \mid xRy \}$ yields an isomorphism $\cat{WZ\Box} \cong \cat{Dialg}(\fun{i}, \fun{P_{up}})$~\cite[Sec.~2]{GroPat20}.
\end{example}
Modalities for $\cat{Dialg}(\fun{i}, \fun{T})$ are defined
via predicate liftings~\cite[Def.~5.7]{GroPat20}.
\begin{definition}
An \emph{$n$-ary predicate lifting} for a functor
$\fun{T} : \cat{Krip} \to \cat{Pos}$
is a natural transformation
$$
\lambda : (\fun{Up} \circ \fun{i})^n \to \fun{Up} \circ \fun{T}.
$$
Here $\fun{Up} : \cat{Pos} \to \cat{Set}$ is the contravariant functor
that sends a poset to its set of upsets,
and $(\fun{Up} \circ \fun{i})^n(X, \leq)$ is the $n$-fold product of
$\fun{Up}(\fun{i}(X, \leq))$ in $\cat{Set}$.
\end{definition}
\begin{definition}
Let $\Prop$ be a countably infinite set of proposition letters.
For a set $\Lambda$ of predicate liftings, define the language
$\lan{L}(\Lambda)$ by the grammar
$$
\phi ::= \top
\mid \bot
\mid p
\mid \phi \wedge \phi
\mid \phi \vee \phi
\mid \phi \to \phi
\mid \heartsuit^{\lambda}(\phi_1, \ldots, \phi_n),
$$
where $p$ ranges over $\Prop$ and $\lambda \in \Lambda$ is $n$-ary.
\end{definition}
\begin{definition}\label{def:interpretation}
Let $\Lambda$ be a set of predicate liftings for $\fun{T} : \cat{Krip} \to \cat{Pos}$.
An \emph{$(\fun{i}, \fun{T})$-model} $\mo{M}$ is an
$(\fun{i}, \fun{T})$-dialgebra $\mo{X} = (X, \leq, \gamma)$
with a valuation $V : \Prop \to \fun{Up}(X, \leq)$.
Truth of $\phi \in \lan{L}(\Lambda)$ at $x \in X$ is defined by
\begin{align*}
\mo{M}, x \Vdash \top
&\quad\text{\phantom{iff}}\quad\text{always} \\
\mo{M}, x \Vdash \bot
&\quad\text{\phantom{iff}}\quad\text{never} \\
\mo{M}, x \Vdash p
&\iff x \in V(p) \\
\mo{M}, x \Vdash \phi \wedge \psi
&\iff \mo{M}, x \Vdash \phi \text{ and } \mo{M}, x \Vdash \psi \\
\mo{M}, x \Vdash \phi \vee \psi
&\iff \mo{M}, x \Vdash \phi \text{ or } \mo{M}, x \Vdash \psi \\
\mo{M}, x \Vdash \phi \to \psi
&\iff x \leq y \text{ and } \mo{M}, y \Vdash \phi
\text{ imply } \mo{M}, y \Vdash \psi \\
\mo{M}, x \Vdash \heartsuit^{\lambda}(\phi_1, \ldots, \phi_n)
&\iff \gamma(x) \in \lambda_{(X,\leq)}(\llbracket \phi_1 \rrbracket^{\mo{M}}, \ldots, \llbracket \phi_n \rrbracket^{\mo{M}})
\end{align*}
Here $\llbracket \phi \rrbracket^{\mo{M}} = \{ x \in X \mid \mo{M}, x \Vdash \phi \}$.
We write $\mo{M} \Vdash \phi$ if $\mo{M}, x \Vdash \phi$ for all $x \in X$
and $\mo{X} \Vdash \phi$ if $(\mo{X}, V) \Vdash \phi$ for
all valuations $V$ for $\mo{X}$.
If $\Phi \subseteq \lan{L}(\Lambda)$ then we say that
$\Phi$ is \emph{valid} on $\mo{X}$, and write $\mo{X} \Vdash \Phi$, if
$\mo{X} \Vdash \phi$ for all $\phi \in \Phi$.
Also, let
$$
\Fr \Phi = \{ \mo{X} \in \cat{Dialg}(\fun{i}, \fun{T}) \mid \mo{X} \Vdash \Phi \}.
$$
We call a class $\ms{K} \subseteq \cat{Dialg}(\fun{i}, \fun{T})$
\emph{axiomatic} if $\ms{K} = \Fr\Phi$ for some $\Phi \subseteq \lan{L}(\Lambda)$.
\end{definition}
\begin{example}\label{exm:pred-lift-box}
Since $\Box$-frames correspond to $(\fun{i}, \fun{P_{up}})$-dialgebras,
it is easy to see that $\Box$-models correspond to
$(\fun{i}, \fun{P_{up}})$-models.
The modal operator $\Box$ can be induced by the predicate
lifting $\lambda^{\Box} : \fun{Up} \circ \fun{i} \to \fun{Up} \circ \fun{P_{up}}$
given by
$$
\lambda^{\Box}_{(X, \leq)}
: \fun{Up}(\fun{i}(X, \leq)) \to \fun{Up}(\fun{P_{up}}(X, \leq))
: a \mapsto \{ b \in \fun{P_{up}}(X, \leq) \mid b \subseteq a \}.
$$
Indeed, if $\mo{M} = (X, \leq, R, V)$ is a $\Box$-model and
$(X, \leq, \gamma_R, V)$ the corresponding $(\fun{i}, \fun{P_{up}})$-model
then we have $x \Vdash \Box\phi$ iff every $R$-successor of $x$ satisfies
$\phi$, i.e.~iff $\gamma_R(x) \subseteq \llbracket \phi \rrbracket^{\mo{M}}$.
By definition the latter is equivalent to
$\gamma_R(x) \in \lambda^{\Box}_{(X, \leq)}(\llbracket \phi \rrbracket^{\mo{M}})$.
\end{example}
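As an informal sanity check of the clauses above, the $\Box$-instance of the semantics can be computed on a finite frame. The following Python sketch (all names such as \texttt{sat} and \texttt{upsets} are ours, not part of the formal development) evaluates formulae on a three-point frame whose $R$-successor sets are upsets, and confirms that truth sets of formulae are again upsets.

```python
from itertools import chain, combinations

# A tiny Box-frame: X = {0, 1, 2} with 0 <= 1, and R-successor sets chosen to
# be upsets, as the dialgebraic reading requires. All names are illustrative.
X = {0, 1, 2}
leq = {(x, x) for x in X} | {(0, 1)}
R = {(0, 2), (1, 2)}

def up(x):
    return {y for (z, y) in leq if z == x}

def is_upset(s):
    return all(up(x) <= s for x in s)

def upsets():
    subs = chain.from_iterable(combinations(sorted(X), r) for r in range(len(X) + 1))
    return [frozenset(s) for s in subs if is_upset(set(s))]

gamma = {x: frozenset(y for (z, y) in R if z == x) for x in X}  # gamma_R(x) = R[x]

def sat(x, phi, V):
    """Truth of phi at x; phi is a nested tuple, V maps letters to upsets."""
    op = phi[0]
    if op == 'p':
        return x in V[phi[1]]
    if op == 'and':
        return sat(x, phi[1], V) and sat(x, phi[2], V)
    if op == 'or':
        return sat(x, phi[1], V) or sat(x, phi[2], V)
    if op == 'to':   # intuitionistic implication quantifies over all y >= x
        return all((not sat(y, phi[1], V)) or sat(y, phi[2], V) for y in up(x))
    if op == 'box':  # gamma(x) in lambda^Box(ext) iff gamma(x) is a subset of ext
        ext = frozenset(y for y in X if sat(y, phi[1], V))
        return gamma[x] <= ext

def extension(phi, V):
    return frozenset(x for x in X if sat(x, phi, V))
```

The \texttt{box} clause is exactly $\gamma_R(x) \in \lambda^{\Box}_{(X,\leq)}(\llbracket\phi\rrbracket)$, i.e.\ $\gamma_R(x) \subseteq \llbracket\phi\rrbracket$.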
Finally, we define morphisms between $(\fun{i}, \fun{T})$-models.
\begin{definition}
An \emph{$(\fun{i}, \fun{T})$-model morphism} from $\mo{M} = (\mo{X}, V)$
to $\mo{M}' = (\mo{X}', V')$ is an $(\fun{i}, \fun{T})$-dialgebra
morphism $f : \mo{X} \to \mo{X}'$ such that $V = f^{-1} \circ V'$.
\end{definition}
\begin{proposition}\label{prop:mor-pres-truth}
If $f : \mo{M} \to \mo{M}'$ is an $(\fun{i}, \fun{T})$-model morphism,
then for all states $x$ of $\mo{M}$ and $\phi \in \lan{L}(\Lambda)$,
we have $\mo{M}, x \Vdash \phi$ iff $\mo{M}', f(x) \Vdash \phi$.
\end{proposition}
\begin{proof}
Let $\mo{M} = (X, \leq, \gamma, V)$ and $\mo{M}' = (X', \leq', \gamma', V')$.
The proof proceeds by induction on the structure of $\phi$.
If $\phi \in \Prop$ then the claim follows from the
definition of an $(\fun{i}, \fun{T})$-model morphism.
The inductive cases for propositional connectives are routine,
so we focus on the modal case. We restrict our attention to
unary modalities, higher arities being similar. Compute:
\begin{align*}
\mo{M}, x &\Vdash \heartsuit^{\lambda}\phi \\
&\iff \gamma(x) \in \lambda_{(X, \leq)}(\llbracket \phi \rrbracket^{\mo{M}})
&\text{(Def.~\ref{def:interpretation})} \\
&\iff \gamma(x) \in \lambda_{(X, \leq)}(f^{-1}(\llbracket \phi \rrbracket^{\mo{M}'}))
&\text{(Induction hypothesis)} \\
&\iff \gamma(x) \in \lambda_{(X, \leq)}((\fun{i}f)^{-1}(\llbracket \phi \rrbracket^{\mo{M}'}))
&\text{(Because $\fun{i}f = f$)} \\
&\iff \gamma(x) \in
(\fun{T}f)^{-1}(\lambda_{(X', \leq')}(\llbracket \phi \rrbracket^{\mo{M}'}))
&\text{(Naturality of $\lambda$)} \\
&\iff (\fun{T}f)(\gamma(x)) \in \lambda_{(X', \leq')}(\llbracket \phi \rrbracket^{\mo{M}'}) \\
&\iff \gamma'((\fun{i}f)(x)) \in \lambda_{(X', \leq')}(\llbracket \phi \rrbracket^{\mo{M}'})
&\text{($f$ is a dialgebra morphism)} \\
&\iff \mo{M}', f(x) \Vdash \heartsuit^{\lambda}\phi
&\text{(Def.~\ref{def:interpretation} and $\fun{i}f = f$)}
\end{align*}
This proves the proposition.
\end{proof}
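The proposition can also be checked by brute force in the $\Box$-instance, where we read dialgebra morphisms as maps that are p-morphisms for both $\leq$ and $R$ (the standard back-and-forth conditions; this reading, the concrete frames, and all Python names below are our illustrative choices). The sketch collapses a two-point chain onto a single reflexive point and verifies that truth agrees along the map.

```python
# Truth invariance along a model morphism, checked for the Box-instance on
# two hand-picked finite frames. f below is a p-morphism for both <= and R,
# and the source valuation is V = f^{-1} o V'. All names are illustrative.

def make_frame(X, leq, R):
    return {'X': X,
            'up': {x: {y for (z, y) in leq if z == x} for x in X},
            'gamma': {x: {y for (z, y) in R if z == x} for x in X}}

def sat(F, x, phi, V):
    op = phi[0]
    if op == 'p':
        return x in V[phi[1]]
    if op == 'and':
        return sat(F, x, phi[1], V) and sat(F, x, phi[2], V)
    if op == 'to':    # intuitionistic implication: all y >= x
        return all((not sat(F, y, phi[1], V)) or sat(F, y, phi[2], V)
                   for y in F['up'][x])
    if op == 'box':
        ext = {y for y in F['X'] if sat(F, y, phi[1], V)}
        return F['gamma'][x] <= ext

# Source frame: two points 0 <= 1; target frame: a single reflexive point.
F1 = make_frame({0, 1}, {(0, 0), (1, 1), (0, 1)}, {(0, 1), (1, 1)})
F2 = make_frame({0}, {(0, 0)}, {(0, 0)})
f = {0: 0, 1: 0}
```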
\subsection{Disjoint Unions, Generated Subframes and p-Morphic Images}
\label{subsec:invariance}
The category theoretic analogue of a disjoint union is a coproduct.
For any $\fun{T} : \cat{Krip} \to \cat{Pos}$
the category $\cat{Dialg}(\fun{i}, \fun{T})$ has coproducts
because $\cat{Krip}$ has coproducts and $\fun{i}$ preserves
them~\cite[Thm.~3.2.1]{Blo12}.
So we define:
\begin{definition}
The \emph{disjoint union} of a $K$-indexed family of
$(\fun{i}, \fun{T})$-dialgebras $\mo{X}_k = (X_k, \leq_k, \gamma_k)$
is the coproduct $\coprod_{k \in K} \mo{X}_k$ in $\cat{Dialg}(\fun{i}, \fun{T})$.
\end{definition}
\begin{example}
Let $(X_k, \leq_k, R_k)$ be a $K$-indexed set of $\Box$-frames,
and $(X_k, \leq_k, \gamma_k)$ the corresponding $(\fun{i}, \fun{P_{up}})$-dialgebras.
The coproduct $\coprod_{k \in K}(X_k, \leq_k, \gamma_k)$ is given by
$(X, \leq, \gamma)$, where $(X, \leq)$ is the coproduct of the intuitionistic
Kripke frames $(X_k, \leq_k)$ (which is computed as in $\cat{Set}$),
and $\gamma : (X, \leq) \to \fun{P_{up}}(X, \leq)$
is given by $\gamma(x_k) = \gamma_k(x_k)$ (for $x_k \in X_k$).
Transforming this back into a $\Box$-frame, we obtain
$(X, \leq, R)$, with $xRy$ iff there is a $k \in K$ with
$x, y \in X_k$ and $xR_ky$.
So this corresponds to the expected notion of disjoint union of
$\Box$-frames.
\end{example}
\begin{proposition}\label{prop:disj-union}
Let $\mo{X}_k = (X_k, \leq_k, \gamma_k)$ be a family of
$(\fun{i}, \fun{T})$-dialgebras indexed by some set $K$.
Suppose $\mo{X}_k \Vdash \phi$ for all $k \in K$.
Then $\coprod \mo{X}_k \Vdash \phi$.
\end{proposition}
\begin{proof}
Let $V$ be a valuation for $\coprod \mo{X}_k$.
Define the valuation $V_k$ for $\mo{X}_k$ by $V_k(p) = V(p) \cap X_k$.
Then the coproduct inclusion maps
$\kappa_k : (\mo{X}_k, V_k) \to (\coprod \mo{X}_k, V)$ are
$(\fun{i}, \fun{T})$-model morphisms, hence the assumption
$\mo{X}_k \Vdash \phi$ for all $k \in K$ implies that
$(\coprod \mo{X}_k, V) \Vdash \phi$.
Since $V$ was arbitrary, $\coprod \mo{X}_k \Vdash \phi$.
\end{proof}
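The proposition can likewise be tested concretely. In the sketch below (frames and names are ours) we take two $\Box$-frames with $R = {\leq}$, on which $\Box p \to p$ is valid, form their disjoint union as in the preceding example, and verify by brute force over all upset valuations that the axiom remains valid on the coproduct.

```python
from itertools import chain, combinations

# Disjoint union of two Box-frames with R = <= (so Box p -> p is valid on
# each component), with tagged carriers to keep the components apart.

def chain_frame(tag, n):
    X = {(tag, i) for i in range(n)}
    leq = {((tag, i), (tag, j)) for i in range(n) for j in range(i, n)}
    return X, leq, leq  # R = <=

X1, leq1, R1 = chain_frame('a', 1)
X2, leq2, R2 = chain_frame('b', 2)
X, leq, R = X1 | X2, leq1 | leq2, R1 | R2   # the coproduct, computed as in Set

def up(x):
    return {y for (z, y) in leq if z == x}

def upsets():
    subs = chain.from_iterable(combinations(sorted(X), r) for r in range(len(X) + 1))
    return [frozenset(s) for s in subs if all(up(x) <= frozenset(s) for x in s)]

def sat(x, phi, V):
    op = phi[0]
    if op == 'p':
        return x in V[phi[1]]
    if op == 'to':
        return all((not sat(y, phi[1], V)) or sat(y, phi[2], V) for y in up(x))
    if op == 'box':
        return {y for (z, y) in R if z == x} <= frozenset(
            y for y in X if sat(y, phi[1], V))

phi = ('to', ('box', ('p', 'p')), ('p', 'p'))   # Box p -> p
```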
\begin{definition}\label{def:subf-im}
Let $\mo{X}' = (X', \leq', \gamma')$ and $\mo{X} = (X, \leq, \gamma)$ be
$(\fun{i}, \fun{T})$-dialgebras.
\begin{enumerate}
\item $\mo{X}'$ is called a \emph{generated subframe} of $\mo{X}$ if there
exists a p-morphism $f : \mo{X}' \to \mo{X}$ such that
$f : (X', \leq') \to (X, \leq)$ is an embedding.
\item $\mo{X}'$ is a \emph{p-morphic image} of $\mo{X}$ if there
exists a surjective dialgebra morphism $\mo{X} \to \mo{X}'$.
\end{enumerate}
\end{definition}
\begin{example}
Guided by \cite[Def.~2.5 and~3.13]{BRV01},
we could define a generated sub-$\Box$-frame of a $\Box$-frame
$(X, \leq, R)$ as a $\Box$-frame $(X', \leq', R')$ such that:
\begin{itemize}
\item $X' \subseteq X$ and
${\leq'} = ({\leq} \cap (X' \times X'))$ and $R' = (R \cap (X' \times X'))$;
\item if $x \in X'$ and either $x \leq y$ or $x R y$, then $y \in X'$.
\end{itemize}
With this definition, it can be shown that a $\Box$-frame
$\mo{X}'$ is isomorphic to a generated sub-$\Box$-frame of a $\Box$-frame $\mo{X}$
if and only if the dialgebraic rendering of $\mo{X}'$ is a generated
subframe of the dialgebraic rendering of $\mo{X}$ (as per Def.~\ref{def:subf-im}).
\end{example}
\begin{proposition}\label{prop:subf-im}
Let $\mo{X}$ be an $(\fun{i}, \fun{T})$-dialgebra such
that $\mo{X} \Vdash \phi$.
\begin{enumerate}
\item If $\mo{X}'$ is a generated subframe of $\mo{X}$
then $\mo{X}' \Vdash \phi$.
\item If $\mo{X}'$ is a p-morphic image of $\mo{X}$
then $\mo{X}' \Vdash \phi$.
\end{enumerate}
\end{proposition}
\begin{proof}
We prove the first item, the second item being similar.
If $\mo{X}' = (X', \leq', \gamma')$ is a generated subframe of
$\mo{X} = (X, \leq, \gamma)$ then there exists an
$(\fun{i}, \fun{T})$-dialgebra morphism $f : \mo{X}' \to \mo{X}$
that is an embedding of the underlying posets.
Let $V'$ be any valuation for $\mo{X}'$.
Define a valuation $V^{\uparrow}$ for $\mo{X}$ by
$V^{\uparrow}(p) = \{ x \in X \mid \exists y \in V'(p) \text{ s.t. } f(y) \leq x \}$.
Then the fact that $f$ is an embedding implies that $V' = f^{-1} V^{\uparrow}$,
and therefore $f : (\mo{X}', V') \to (\mo{X}, V^{\uparrow})$ is a
dialgebra model morphism.
The assumption that $\mo{X} \Vdash \phi$ together with
Prop.~\ref{prop:mor-pres-truth} implies that $(\mo{X}', V') \Vdash \phi$.
Since $V'$ is arbitrary we find $\mo{X}' \Vdash \phi$.
\end{proof}
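The key step in this proof is the valuation $V^{\uparrow}$, which takes the up-closure in $X$ of the $f$-image of $V'(p)$. The following sketch (frames and names are ours) checks the identity $V' = f^{-1} \circ V^{\uparrow}$ for the inclusion of an up-closed subchain, over every upset valuation $V'$.

```python
from itertools import chain, combinations

# X' = {1, 2} is an up-closed subposet of the chain X = 0 <= 1 <= 2, and f is
# the inclusion (an order-embedding). We list all upset valuations on X' and
# compute V^up as the up-closure of the image. All names are illustrative.

X = {0, 1, 2}
leq = {(i, j) for i in X for j in X if i <= j}   # the chain order
Xs = {1, 2}                                      # carrier of the subframe X'
leqs = {(i, j) for (i, j) in leq if i in Xs and j in Xs}

def upsets(carrier, order):
    subs = chain.from_iterable(combinations(sorted(carrier), r)
                               for r in range(len(carrier) + 1))
    return [frozenset(s) for s in subs
            if all(j in s for i in s for (i2, j) in order if i2 == i)]

def v_up(a):
    """V^up(p): the up-closure in X of the image of V'(p) = a under inclusion."""
    return frozenset(x for x in X if any((y, x) in leq for y in a))
```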
\subsection{Axioms and Algebraic Semantics}\label{subsec:algsem}
In order to get intuition for the dialgebraic perspective of algebraic
semantics, we observe that the category $\cat{HAO}$ is isomorphic to
a category of dialgebras. In this case, we consider dialgebras for
functors $\cat{HA} \to \cat{DL}$.
Again, one of the functors is simply the inclusion functor, which we
denote by $\fun{j} : \cat{HA} \to \cat{DL}$.
\begin{example}\label{exm:L-box}
Let $\fun{K} : \cat{HA} \to \cat{DL}$ be the functor that
sends a Heyting algebra $A$ to the free distributive lattice
generated by $\{ \dbox a \mid a \in A \}$ modulo $\dbox\top = \top$ and
$\dbox a \wedge \dbox b = \dbox (a \wedge b)$, where $a$ and $b$ range over $A$.
The action of $\fun{K}$ on a Heyting homomorphism $h : A \to A'$
is defined on generators by $\fun{K}h(\dbox a) = \dbox h(a)$.
Then $\cat{HAO} \cong \cat{Dialg}(\fun{K}, \fun{j})$~\cite[Exm.~3.3]{GroPat20}.
\end{example}
We denote generators by dotted boxes to distinguish
them from the modality $\Box$.
Observe that the relations defining $\fun{K}$ correspond
to the axioms we want a normal box to satisfy.
We investigate how to generalise this to the setting of some
arbitrary set $\Lambda$ of predicate liftings for
a functor $\fun{T} : \cat{Krip} \to \cat{Pos}$.
\begin{definition}
A \emph{rank-1 formula} in $\lan{L}(\Lambda)$ is a formula $\phi$
such that
\begin{itemize}
\item $\phi$ does not contain intuitionistic implication;
\item each proposition letter appears in the scope of precisely
one modal operator.
\end{itemize}
A \emph{rank-1 axiom} is a formula of the form
$\phi \leftrightarrow \psi$, where $\phi, \psi$ are rank-1 formulae.
It is called \emph{sound} if it is valid in all $(\fun{i}, \fun{T})$-dialgebras.
Let $\Ax$ be a collection of sound rank-1 axioms.
Define the logic $\log{L}(\Lambda, \Ax)$ as the smallest set of
$\lan{L}(\Lambda)$-formulae containing $\Ax$ and an axiomatisation
for intuitionistic logic,
which is closed under modus ponens,
uniform substitution, and
$$
\dfrac{\phi_1 \leftrightarrow \psi_1 \quad \cdots \quad
\phi_n \leftrightarrow \psi_n}
{\heartsuit^{\lambda}(\phi_1, \ldots, \phi_n)
\leftrightarrow \heartsuit^{\lambda}(\psi_1, \ldots, \psi_n)}
\qquad\text{(congruence rule)}.
$$
\end{definition}
Example~\ref{exm:L-box} generalises as follows~\cite[Sec.~5]{GroPat20}.
\begin{definition}\label{def:L-lambda-ax}
Let $\Lambda$ be a set of predicate liftings for $\fun{T}$ and
$\Ax$ a set of sound rank-1 axioms for $\lan{L}(\Lambda)$.
For a Heyting algebra $A$, define $\fun{L}^{(\Lambda, \Ax)}A$ to be the
free distributive lattice generated by
$\{ \dheartsuit^{\lambda}(a_1, \ldots, a_n) \mid \lambda \in \Lambda, a_i \in A \}$
modulo the axioms in $\Ax$, where each occurrence of $\heartsuit$ is
replaced by the formal generator $\dheartsuit$, $\leftrightarrow$ is replaced by $=$,
and the proposition
letters range over the elements of $A$.
(This is well defined since the axioms in $\Ax$ are rank-1 axioms, which
result in equations constructed from elements of the form
$\dheartsuit(a_1, \ldots, a_n)$ and distributive lattice connectives.)
If $h : A \to A'$ is a Heyting homomorphism, define
$\fun{L}^{(\Lambda, \Ax)}h : \fun{L}^{(\Lambda, \Ax)}A \to \fun{L}^{(\Lambda, \Ax)}A'$
on generators by
$\fun{L}^{(\Lambda, \Ax)}h(\dheartsuit^{\lambda}(a_1, \ldots, a_n))
= \dheartsuit^{\lambda}(h(a_1), \ldots, h(a_n))$.
Then $\fun{L}^{(\Lambda, \Ax)} : \cat{HA} \to \cat{DL}$ defines a
functor.
\end{definition}
Again, we use a symbol with a dot in it to denote formal generators,
distinguishing them from symbols in the language.
\begin{example}
Let $\Lambda = \{ \lambda^{\Box} \}$, where $\lambda^{\Box}$ is the
predicate lifting from Exm.~\ref{exm:pred-lift-box},
and write $\Box$ instead of $\heartsuit^{\lambda^{\Box}}$.
Let $\Ax$ consist of the two axioms (not the rule) from \eqref{eq:box-ax-rule},
and note that these are both rank-1 axioms.
Then the logic $\log{L}(\Lambda, \Ax)$ coincides with $\log{L}_{\Box}$,
and the functor obtained from the procedure in Def.~\ref{def:L-lambda-ax}
is naturally isomorphic to $\fun{K}$ from Exm.~\ref{exm:L-box}.
(The only difference is the symbol used to represent the formal generators.)
\end{example}
The following observation allows us to use the Birkhoff variety theorem
when proving the Goldblatt-Thomason theorem below.
\begin{lemma}\label{lem:dialgLj-variety}
Let $\fun{L}$ be obtained from predicate liftings and axioms via
Def.~\ref{def:L-lambda-ax}.
Then the category $\cat{Dialg}(\fun{L}, \fun{j})$ is a variety of algebras.
\end{lemma}
\begin{proof}
It is known that the category $\cat{HA}$ of Heyting algebras is a variety
of algebras. We add to its signature an $n$-ary operation symbol for each
$n$-ary predicate lifting in $\Lambda$, and to the set of equations defining
$\cat{HA}$ the equations obtained from $\Ax$ by replacing $\leftrightarrow$ with
equality and proposition letters with variables.
\end{proof}
We can evaluate $\lan{L}(\Lambda)$-formulae in an
$(\fun{L}^{(\Lambda, \Ax)}, \fun{j})$-dialgebra $(A, \alpha)$ with an
assignment of the proposition letters to elements of $A$.
Intuitionistic connectives are interpreted as in the Heyting algebra $A$,
and the interpretation of $\heartsuit^{\lambda}(\phi_1, \ldots, \phi_n)$
is given by
$\alpha(\dheartsuit^{\lambda}(\llparenthesis\kern.1em \phi_1 \kern.1em\rrparenthesis, \ldots, \llparenthesis\kern.1em \phi_n \kern.1em\rrparenthesis))$,
where $\llparenthesis\kern.1em \phi_i \kern.1em\rrparenthesis$ is the interpretation of $\phi_i$.
We say that $\phi$ is valid in $(A, \alpha)$, and write $(A, \alpha) \models \phi$,
if $\phi$ evaluates to $\top$ under every assignment of the proposition letters.
This evaluation is closely related to the interpretation of formulae
in $(\fun{i}, \fun{T})$-dialgebras:
a formula $\phi$ is valid in some $(\fun{i}, \fun{T})$-dialgebra if and
only if it is valid in some related algebra, called the complex algebra.
\begin{definition}\label{def:rho}
Define
$\rho : \fun{L}^{(\Lambda, \Ax)} \circ \fun{u\kern-.1em p'}
\to \fun{u\kern-.1em p} \circ \fun{T}$
on generators by
$$
\rho_{(X, \leq)}(\dheartsuit^{\lambda}(a_1, \ldots, a_n))
= \lambda_{(X, \leq)}(a_1, \ldots, a_n).
$$
Then $\rho$ is a well defined transformation because $\Ax$ is assumed to be sound,
and it is natural because predicate liftings are natural transformations.
It gives rise to a functor
$(\cdot)^+ : \cat{Dialg}(\fun{i}, \fun{T})
\to \cat{Dialg}(\fun{L}^{(\Lambda, \Ax)}, \fun{j})$,
which sends an $(\fun{i}, \fun{T})$-dialgebra $(X, \leq, \gamma)$
to its \emph{complex algebra} $(\fun{u\kern-.1em p'}(X, \leq), \gamma^+)$, given by
$$
\begin{tikzcd}
\fun{L}^{(\Lambda, \Ax)}(\fun{u\kern-.1em p'}(X, \leq))
\arrow[r, "\rho_{(X, \leq)}"]
\arrow[rrr, rounded corners=1.3ex,
to path={ -- ([yshift=-2.5ex,xshift=2ex]\tikztostart.south)
-- node[above,pos=.56]{\scriptsize$\gamma^+$}
([yshift=-2.5ex,xshift=-1ex]\tikztotarget.south)
-- (\tikztotarget.south)}]
& \fun{u\kern-.1em p}(\fun{T}(X, \leq))
\arrow[r, "\fun{u\kern-.1em p}\gamma"]
& [-.5em]
\fun{u\kern-.1em p}(\fun{i}(X, \leq))
\arrow[r, equal]
& [-1.5em]
\fun{j}(\fun{u\kern-.1em p'}(X, \leq)).
\end{tikzcd}
$$
The action of $(\cdot)^+$ on an $(\fun{i}, \fun{T})$-dialgebra morphism $f$
is given by $f^+ = \fun{u\kern-.1em p'} f$.
\end{definition}
\begin{example}
Let $(X, \leq, R)$ be a $\Box$-frame and
$(X, \leq, \gamma)$ the corresponding $(\fun{i}, \fun{P_{up}})$-dialgebra.
The complex algebra of $(X, \leq, \gamma)$ is the
$(\fun{K}, \fun{j})$-dialgebra $(\fun{u\kern-.1em p'}(X, \leq), \gamma^+)$,
where $\gamma^+$ is given by
$\gamma^+(\dbox a) = \gamma^{-1}(\lambda^{\Box}(a)) = \{ x \in X \mid \gamma(x) \subseteq a \}$.
Translating this to a HAO, we see that this corresponds precisely
to the complex algebra of $(X, \leq, R)$ in the sense of
Sec.~\ref{sec:prelim}.
\end{example}
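Concretely, $\gamma^+$ sends the generator for an upset $a$ to $\{x \in X \mid \gamma(x) \subseteq a\}$, and this operation satisfies the normality equations built into $\fun{K}$. The sketch below (frame and names are ours, with $\gamma$ chosen so that $\gamma^+$ lands in upsets) verifies $\dbox\top = \top$ and $\dbox a \wedge \dbox b = \dbox(a \wedge b)$ by brute force.

```python
from itertools import chain, combinations

# A small frame: 0 below the incomparable points 1 and 2, with gamma(x) an
# upset for each x and gamma shrinking along <=, so that gamma^+ maps upsets
# to upsets. All names are illustrative.
X = {0, 1, 2}
leq = {(x, x) for x in X} | {(0, 1), (0, 2)}
gamma = {0: frozenset({1, 2}), 1: frozenset({1}), 2: frozenset({2})}

def up(x):
    return frozenset(y for (z, y) in leq if z == x)

def upsets():
    subs = chain.from_iterable(combinations(sorted(X), r) for r in range(len(X) + 1))
    return [frozenset(s) for s in subs if all(up(x) <= frozenset(s) for x in s)]

def box(a):
    """gamma^+ on the generator for the upset a: { x | gamma(x) <= a }."""
    return frozenset(x for x in X if gamma[x] <= a)
```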
\begin{proposition}\label{prop:complex-alg}
Let $\mo{X}$ be an $(\fun{i}, \fun{T})$-dialgebra and
$\phi \in \lan{L}(\Lambda)$. Then we have
$$
\mo{X} \Vdash \phi \iff \mo{X}^+ \models \phi.
$$
\end{proposition}
\begin{proof}
This follows from a routine induction on the structure of $\phi$,
where the base case follows from the fact that valuations for $\mo{X}$
correspond bijectively to assignments of the proposition letters to
elements of $\mo{X}^+$.
\end{proof}
\subsection{Prime Filter Extensions}\label{subsec:pe}
The proof of the Goldblatt-Thomason theorem relies on Birkhoff's variety
theorem and the connection between frame semantics and algebraic semantics
of a logic. As we have seen above, every $\Box$-frame gives rise to a
complex algebra, or, more generally, every $(\fun{i}, \fun{T})$-dialgebra
gives rise to an $(\fun{L}, \fun{j})$-dialgebra.
To transfer the variety theorem from $(\fun{L}, \fun{j})$-dialgebras back to
$(\fun{i}, \fun{T})$-dialgebras, we need a functor
$(\cdot)_+ : \cat{Dialg}(\fun{L}, \fun{j}) \to \cat{Dialg}(\fun{i}, \fun{T})$
such that
for each $(\fun{i}, \fun{T})$-dialgebra $\mo{X}$,
\begin{equation}\label{eq:goal2}\tag{$\star$}
(\mo{X}^+)_+ \Vdash \phi
\quad\text{implies}\quad\mo{X} \Vdash \phi.
\end{equation}
\begin{assum}\label{ass:3.4}
Throughout this subsection, let
$\fun{T} : \cat{Krip} \to \cat{Pos}$ be a functor,
$\Lambda$ a set of predicate liftings for $\fun{T}$,
and $\Ax$ a set of sound rank-1 axioms in $\lan{L}(\Lambda)$.
Abbreviate $\fun{L} := \fun{L}^{(\Lambda, \Ax)}$ and
$\rho := \rho^{(\Lambda, \Ax)}$.
\end{assum}
A functor $(\cdot)_+ : \cat{Dialg}(\fun{L}, \fun{j}) \to \cat{Dialg}(\fun{i}, \fun{T})$
arises from a natural transformation
$\tau$ in the same way that $\rho$ induced the functor $(\cdot)^+$ from frames to
complex algebras. To stress its dependence on the choice of $\tau$,
we denote it by $(\cdot)_{\tau}$ instead of $(\cdot)_+$.
\begin{definition}\label{def:tau}
Let $\tau : \fun{pf} \circ \fun{L} \to \fun{T} \circ \fun{pf'}$
be a natural transformation.
Then we define the contravariant functor
$(\cdot)_{\tau} : \cat{Dialg}(\fun{L}, \fun{j}) \to \cat{Dialg}(\fun{i}, \fun{T})$
on objects by sending an $(\fun{L}, \fun{j})$-dialgebra $\amo{H} = (H, \alpha)$ to
the $(\fun{i}, \fun{T})$-dialgebra $\amo{H}_{\tau}$ given by
$$
\begin{tikzcd}
\fun{i}(\fun{pf'}H)
\arrow[r, equal]
& [-1.5em]
\fun{pf}(\fun{j}H)
\arrow[r, "\fun{pf}\alpha"]
& \fun{pf}(\fun{L}H)
\arrow[r, "\tau_H"]
& \fun{T}(\fun{pf'}H).
\end{tikzcd}
$$
For an $(\fun{L}, \fun{j})$-dialgebra morphism $h : \amo{H} \to \amo{H}'$
we define $h_{\tau} = \fun{pf'}h : \amo{H}'_{\tau} \to \amo{H}_{\tau}$.
Naturality of $\tau$ ensures that this is well defined.
\end{definition}
We call $(\mo{X}^+)_{\tau}$ the $\tau$-prime filter extension of
an $(\fun{i}, \fun{T})$-dialgebra $\mo{X}$ if $\tau$ satisfies a
sufficient condition that ensures that~\eqref{eq:goal2} holds
(by Prop.~\ref{prop:pf-key}).
This condition relies on the following variation of the adjoint mate of $\rho$.
\begin{definition}
Let $\rho : \fun{L} \circ \fun{u\kern-.1em p'} \to \fun{u\kern-.1em p} \circ \fun{T}$.
Then we write $\rho^{\flat}$ for the natural transformation defined as
the composition
$$
\begin{tikzcd}
\fun{T} \circ \fun{pf'}
\arrow[r, "\eta_{\fun{T} \circ \fun{pf'}}"]
& \fun{pf} \circ \fun{u\kern-.1em p} \circ \fun{T} \circ \fun{pf'}
\arrow[r, "\fun{pf}\rho_{\fun{pf'}}"]
& \fun{pf} \circ \fun{L} \circ \fun{u\kern-.1em p'} \circ \fun{pf'}
\arrow[r, "\fun{pf}(\fun{L}\theta')"]
& \fun{pf} \circ \fun{L},
\end{tikzcd}
$$
where $\eta$ and $\theta'$ are defined as in Def.~\ref{def:fun-nat}.
\end{definition}
\begin{definition}\label{def:pfe5}
Let $\tau$ be a natural transformation such that
$\rho^{\flat} \circ \tau = \fun{id}_{\fun{pf} \circ \fun{L}}$.
\begin{enumerate}
\item Define
$\pe := (\cdot)_{\tau} \circ (\cdot)^+
: \cat{Dialg}(\fun{i}, \fun{T})\to \cat{Dialg}(\fun{i}, \fun{T})$.
We call $\pe\mo{X}$ the \emph{$\tau$-prime filter extension}
of $\mo{X} \in \cat{Dialg}(\fun{i}, \fun{T})$.
\item The \emph{$\tau$-prime filter extension} of a model
$\mo{M} = (\mo{X}, V)$ is $\pe\mo{M} := (\pe\mo{X}, V^{pe})$,
where $V^{pe}(p) = \{ \ff{q} \in \pe\mo{X} \mid V(p) \in \ff{q} \}$
for all $p \in \Prop$.
\end{enumerate}
\end{definition}
Observe that the prime filter extension of an $(\fun{i}, \fun{T})$-dialgebra
$\mo{X} = (X, \leq, \gamma)$ is of the form
$\pe\mo{X} = (X^{pe}, \subseteq, \gamma^{pe})$,
where $X^{pe}$ denotes the set of prime filters of upsets of
$(X, \leq)$ and $\gamma^{pe}$ is
computed using both $\rho$ and $\tau$.
We now show that $\tau$-prime filter extensions satisfy~\eqref{eq:goal2}.
\begin{proposition}\label{prop:pf-key}
Let $\tau$ be a natural transformation such that
$\rho^{\flat} \circ \tau = \fun{id}_{\fun{pf} \circ \fun{L}}$,
$\mo{X} = (X, \leq, \gamma)$ an $(\fun{i}, \fun{T})$-dialgebra, $\mo{M} = (\mo{X}, V)$
a model based on $\mo{X}$, $\phi \in \lan{L}(\Lambda)$.
\begin{enumerate}
\item For all prime filters $\ff{q} \in X^{pe}$ we have
$\pe\mo{M}, \ff{q} \Vdash \phi$ iff $\llbracket \phi \rrbracket^{\mo{M}} \in \ff{q}$.
\item For all states $x \in X$ we have
$\mo{M}, x \Vdash \phi$ iff $\pe\mo{M}, \eta_{(X, \leq)}(x) \Vdash \phi$.
\item If $\pe\mo{X} \Vdash \phi$ then $\mo{X} \Vdash \phi$.
\end{enumerate}
\end{proposition}
\begin{proof}
The proof of the proposition is given in the appendix.
\end{proof}
\begin{example}\label{exm:box-frame-tau}
Returning to our example of $\Box$-frames, we wish to find a natural
transformation $\tau^{\Box}$ such that
$(\rho^{\Box})^{\flat} \circ \tau^{\Box} = \fun{id}_{\fun{pf} \circ \fun{L}^{\Box}}$.
Before defining $\tau^{\Box}$, let us get an idea of what
$(\rho^{\Box})^{\flat}$ looks like.
Let $A$ be a Heyting algebra and $Q \in \fun{pf}(\fun{L}^{\Box}A)$.
Since $Q$ is determined by elements of the form $\dbox a$ it contains,
where $a \in A$,
we pay special attention to these elements.
For $D \in \fun{P_{up}} \circ \fun{pf'}A$ and $a \in A$ we have
\begin{align*}
\dbox a \in (\rho^{\Box}_A)^{\flat}(D)
&\iff \rho_{\fun{pf'}A}((\fun{L}^{\Box}\theta'_A)(\dbox a)) \in \eta_{\fun{T}(\fun{pf'}A)}(D) \\
&\iff \rho_{\fun{pf'}A}(\dbox\theta'_A(a)) \in \eta_{\fun{T}(\fun{pf'}A)}(D) \\
&\iff D \in \rho_{\fun{pf'}A}(\dbox \theta'_A(a)) \\
&\iff D \subseteq \theta'_A(a).
\end{align*}
Guided by this we define
$\tau^{\Box} : \fun{pf} \circ \fun{L}^{\Box} \to \fun{P_{up}} \circ \fun{pf'}$
on components by
$$
\tau_A^{\Box}
: \fun{pf}(\fun{L}^{\Box}A) \to \fun{P_{up}}(\fun{pf'}A)
: Q \mapsto \{ \ff{p} \in \fun{pf'}A \mid \forall a \in A, \dbox a \in Q \text{ implies } a \in \ff{p} \}.
$$
With this definition we can prove the following lemma,
the proof of which can be found in the appendix.
\begin{lemma}\label{lem:Box-canonical}
$\tau^{\Box}$ is a natural transformation such that
$(\rho^{\Box})^{\flat} \circ \tau^{\Box} = \fun{id}_{\fun{pf} \circ \fun{L}^{\Box}}$.
\end{lemma}
Now suppose $(A, \Box)$ is a HAO, and $\amo{A} = (A, \alpha)$ its corresponding
$(\fun{L}^{\Box}, \fun{j})$-dialgebra
(with $\alpha$ given by $\alpha(\dbox a) = \Box a$).
We have $\amo{A}_{\tau} = (\fun{pf'}A, \subseteq, \gamma)$, where
$$
\gamma(\ff{q})
= \{ \ff{p} \in \fun{pf'}A \mid \forall a \in A,
\dbox a \in \alpha^{-1}(\ff{q}) \text{ implies } a \in \ff{p} \}.
$$
Note that $\dbox a \in \alpha^{-1}(\ff{q})$ iff $\Box a = \alpha(\dbox a) \in \ff{q}$.
Therefore, translating $\gamma$ to a relation $R_{\gamma}$, we obtain:
$\ff{q}R_{\gamma}\ff{p}$ iff $\Box a \in \ff{q}$ implies $a \in \ff{p}$ for
all $a \in A$.
It follows that the $(\fun{i}, \fun{T})$-dialgebra
corresponding to the prime filter extension of a $\Box$-frame $(X, \leq, R)$
(as in Sec.~\ref{sec:prelim})
coincides with the $\tau^{\Box}$-prime filter extension of the dialgebraic
rendering of $(X, \leq, R)$.
So, modulo dialgebraic translation, prime filter extensions and
$\tau^{\Box}$-prime filter extensions of $\Box$-frames coincide.
\end{example}
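For a finite $\Box$-frame the prime filter extension should reproduce the frame itself, via $\eta_{(X,\leq)}(x) = \{a \in \fun{Up}(X,\leq) \mid x \in a\}$. The brute-force sketch below (frame and names are ours) computes the prime filters of $\fun{Up}(X,\leq)$ for a two-point chain with $R = {\leq}$, together with the relation $R_{\gamma}$ described above, and checks that $\eta$ is an isomorphism of $\Box$-frames.

```python
from itertools import chain, combinations

# A two-point chain with R = <=. We enumerate the prime filters of Up(X) by
# brute force and compare the resulting frame with the original via eta.
X = {0, 1}
leq = {(0, 0), (1, 1), (0, 1)}
R = leq                                          # so R[x] = up(x)

def up(x):
    return frozenset(y for (z, y) in leq if z == x)

def powerset(it):
    s = list(it)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

A = [frozenset(a) for a in powerset(sorted(X))
     if all(up(x) <= frozenset(a) for x in a)]   # the lattice Up(X, <=)

def is_prime_filter(F):
    F = set(F)
    if not F or len(F) == len(A):
        return False                             # non-empty and proper
    for a in F:
        for b in A:
            if a <= b and b not in F:            # upward closed
                return False
    for a in F:
        for b in F:
            if (a & b) not in F:                 # closed under meets
                return False
    for a in A:
        for b in A:
            if (a | b) in F and a not in F and b not in F:
                return False                     # prime
    return True

primes = [frozenset(F) for F in powerset(A) if is_prime_filter(F)]

def box(a):
    return frozenset(x for x in X if {y for (z, y) in R if z == x} <= a)

def eta(x):
    return frozenset(a for a in A if x in a)

def R_gamma(q, p):
    return all(a in p for a in A if box(a) in q)
```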
\subsection{The Goldblatt-Thomason Theorem}\label{subsec:gt}
Finally, we put our theory to work and prove a Goldblatt-Thomason theorem
for dialgebraic intuitionistic logics.
We work with the same assumptions as in Assum.~\ref{ass:3.4}.
Additionally, we assume that we have a natural transformation
$\tau : \fun{pf} \circ \fun{L} \to \fun{T} \circ \fun{pf'}$
such that $\rho^{\flat} \circ \tau = \fun{id}_{\fun{pf} \circ \fun{L}}$.
This allows us to use Def.~\ref{def:pfe5}.
\begin{definition}
If $\Phi \subseteq \lan{L}(\Lambda)$ and $\amo{A} \in \cat{Dialg}(\fun{L}, \fun{j})$
then we write $\amo{A} \models \Phi$ if $\amo{A} \models \phi$ for all
$\phi \in \Phi$. Besides, we let
$
\Alg \Phi = \{ \amo{A} \in \cat{Dialg}(\fun{L}, \fun{j}) \mid \amo{A} \models \Phi \}
$
be the collection of $(\fun{L}, \fun{j})$-dialgebras satisfying $\Phi$.
We say that a class $\ms{C} \subseteq \cat{Dialg}(\fun{L}, \fun{j})$
is \emph{axiomatic} if $\ms{C} = \Alg \Phi$
for some collection $\Phi$ of $\lan{L}(\Lambda)$-formulae.
\end{definition}
\begin{lemma}\label{lem:alg-axiomatic-variety}
$\ms{C} \subseteq \cat{Dialg}(\fun{L}, \fun{j})$ is axiomatic
iff it is a variety of algebras.
\end{lemma}
\begin{proof}
If
$\ms{C} = \{ \amo{A} \in \cat{Dialg}(\fun{L}, \fun{j}) \mid \amo{A} \models \Phi \}$,
then it is precisely the variety of algebras satisfying the equations
$\phi^x = \top$, where $\phi \in \Phi$ and $\phi^x$ is
the term we get from $\phi$ by replacing the proposition letters with
variables from some set $S$ of variables.
Conversely, suppose $\ms{C}$ is a variety of algebras given by a set
$E$ of equations using variables in $S$. For each equation $\phi = \psi$
in $E$, let $(\phi \leftrightarrow \psi)^p$ be the formula we get from
replacing the variables in $\phi \leftrightarrow \psi$ with proposition
letters. Then we have
$\ms{C} = \Alg \{ (\phi \leftrightarrow \psi)^p \mid \phi = \psi \in E \}$.
\end{proof}
For a class $\ms{K}$ of $(\fun{i}, \fun{T})$-dialgebras, write
$\ms{K}^+ = \{ \mo{X}^+ \mid \mo{X} \in \ms{K} \}$ for the collection
of corresponding complex algebras.
Also, if $\ms{C}$ is a class of algebras, then we write
$H\ms{C}$, $S\ms{C}$ and $P\ms{C}$ for its closure under
\textbf{h}omomorphic images, \textbf{s}ubalgebras and \textbf{p}roducts,
respectively.
\begin{lemma}\label{lem:mdv}
A class $\ms{K} \subseteq \cat{Dialg}(\fun{i}, \fun{T})$ is axiomatic
if and only if
\begin{equation}\label{eq:mdv}
\ms{K} = \{ \mo{X} \in \cat{Dialg}(\fun{i}, \fun{T})
\mid \mo{X}^+ \in HSP(\ms{K}^+) \}.
\end{equation}
\end{lemma}
\begin{proof}
Suppose $\ms{K}$ is axiomatic, i.e.~$\ms{K} = \Fr\Phi$.
Then it follows from Prop.~\ref{prop:complex-alg} and the fact that $H$,
$S$ and $P$ preserve validity of formulae that
\eqref{eq:mdv} holds.
Conversely, suppose \eqref{eq:mdv} holds.
Since $HSP(\ms{K}^+)$ is a variety, Birkhoff's variety theorem
states that it is of the form $\Alg\Phi$.
It follows that $\ms{K} = \Fr\Phi$.
\end{proof}
We now have all the ingredients to prove the Goldblatt-Thomason theorem.
\begin{theorem}\label{thm:gt}
Let $\ms{K} \subseteq \cat{Dialg}(\fun{i}, \fun{T})$ be closed under
$\tau$-prime filter extensions.
Then $\ms{K}$ is axiomatic if and only if $\ms{K}$ reflects $\tau$-prime
filter extensions and is closed under disjoint
unions, generated subframes and p-morphic images.
\end{theorem}
\begin{proof}
The implication from left to right follows from
Sec.~\ref{subsec:invariance} and Prop.~\ref{prop:pf-key}.
For the converse, by Lem.~\ref{lem:mdv} it suffices to prove that
$\ms{K} = \{ \mo{X} \in \cat{Dialg}(\fun{i}, \fun{T}) \mid \mo{X}^+ \in HSP(\ms{K}^+) \}$.
So let $\mo{X} = (X, \leq, \gamma) \in \cat{Dialg}(\fun{i}, \fun{T})$ and
suppose $\mo{X}^+ \in HSP(\ms{K}^+)$.
Then there are $\mo{Z}_i \in \ms{K}$ such that $\mo{X}^+$ is the
homomorphic image of a sub-dialgebra $\amo{A}$ of the
product of the $\mo{Z}_i^+$.
In a diagram:
$$
\begin{tikzcd}
\mo{X}^+
& [2.5em]
\amo{A}
\arrow[l, ->>, "\text{surjective}" {above,pos=.46}]
\arrow[r, >->, "\text{injective}"]
& [2.5em]
\prod \mo{Z}_i^+
\end{tikzcd}
$$
Since $\prod\mo{Z}_i^+ = (\coprod \mo{Z}_i)^+$,
dually this yields
$$
\begin{tikzcd}
(\mo{X}^+)_{\tau}
\arrow[r, >->, "\text{gen.~subframe}"]
& [5em]
\amo{A}_{\tau}
& [5em]
\big(\big(\coprod \mo{Z}_i\big)^+\big)_{\tau}
\arrow[l, ->>, "\text{p-morphic image}" {above,pos=.46}]
\end{tikzcd}
$$
We have $\coprod \mo{Z}_i \in \ms{K}$ because $\ms{K}$ is closed under coproducts,
and $\big((\coprod \mo{Z}_i)^+\big)_{\tau} \in \ms{K}$ because $\ms{K}$ is
closed under prime filter extensions.
Then $\amo{A}_{\tau} \in \ms{K}$ and $(\mo{X}^+)_{\tau} \in \ms{K}$ because
$\ms{K}$ is closed under p-morphic images and generated subframes.
Finally, since $\ms{K}$ reflects prime filter extensions we find
$\mo{X} \in \ms{K}$.
\end{proof}
Circling back to $\Box$-frames, it follows from Lem.~\ref{lem:Box-canonical}
and Thm.~\ref{thm:gt} that:
\begin{theorem}\label{thm:gt-frm}
Suppose $\ms{K} \subseteq \cat{WZ\Box}$ is closed under prime filter extensions.
Then $\ms{K}$ is axiomatic if and only if it reflects prime filter extensions
and is closed under disjoint unions, generated subframes and p-morphic
images.
\end{theorem}
\section{Applications}\label{sec:app}
In each of the following subsections we recall a modal intuitionistic logic and
model it dialgebraically.
We use this to derive a notion of prime filter extension and
we apply Thm.~\ref{thm:gt} to obtain a Goldblatt-Thomason theorem.
\subsection{Goldblatt's Geometric Modality I}\label{subsec:mon-I}
The extension of intuitionistic logic with a monotone modality,
here denoted by ${\customtri}$, was first studied by Goldblatt
in~\cite[Sec.~6]{Gol93}. It is closely related to its classical
counterpart~\cite{Che80,Han03,HanKup04}, except that the underlying
propositional logic is intuitionistic.
A dialgebraic perspective was given in~\cite[Sec.~8]{GroPat20}.
Let $\lan{L}_{{\customtri}}$ denote the language of intuitionistic logic
extended with a unary operator ${\customtri}$, and write
$\log{L}_{{\customtri}}$ for the logic obtained from extending
intuitionistic logic $\log{L}$ with the axiom
$
{\customtri}(p \wedge q) \to {\customtri} p
$
and the congruence rule for ${\customtri}$.
\begin{definition}
An \emph{intuitionistic monotone frame} (or \emph{IM-frame})
is a triple $(X, \leq, N)$ where
$(X, \leq)$ is an intuitionistic Kripke frame and $N$ is a function that
assigns to each $x \in X$ a collection of upsets of $(X, \leq)$ such that:
\begin{itemize}
\item if $a \in N(x)$ and $a \subseteq b \in \fun{Up}(X, \leq)$,
then $b \in N(x)$;
\item if $x \leq y$ then $N(x) \subseteq N(y)$.
\end{itemize}
An \emph{intuitionistic monotone frame morphism} (IMF-morphism) from
$(X_1, \leq_1, N_1)$ to $(X_2, \leq_2, N_2)$ is a p-morphism
$f : (X_1, \leq_1) \to (X_2, \leq_2)$ such that
$f^{-1}(a_2) \in N_1(x_1)$ iff $a_2 \in N_2(f(x_1))$
for all $x_1 \in X_1$ and $a_2 \in \fun{Up}(X_2, \leq_2)$.
We write $\cat{Mon}$ for the category of intuitionistic monotone
frames and morphisms.
\end{definition}
An \emph{intuitionistic monotone model} is a tuple $\mo{M} = (X, \leq, N, V)$
such that $(X, \leq, N)$ is an intuitionistic monotone frame and
$V : \Prop \to \fun{Up}(X, \leq)$ is a valuation.
The interpretation of $\lan{L}_{{\customtri}}$-formulae at a state $x$
in $\mo{M}$ is defined recursively, where the propositional cases
are as usual and
$\mo{M}, x \Vdash {\customtri} \phi$ iff $\llbracket \phi \rrbracket^{\mo{M}} \in N(x)$.
We now take a dialgebraic perspective.
\begin{definition}
For an intuitionistic Kripke frame $(X, \leq)$, define
$$
\fun{M}(X, \leq) =
\{ W \subseteq \fun{Up}(X, \leq) \mid \text{ if } a \in W \text{ and }
a \subseteq b \in \fun{Up}(X, \leq) \text{ then } b \in W \}
$$
ordered by inclusion.
For a p-morphism $f : (X_1, \leq_1) \to (X_2, \leq_2)$, let
$$
\fun{M}f
: \fun{M}(X_1, \leq_1) \to \fun{M}(X_2, \leq_2)
: W \mapsto \{ a_2 \in \fun{Up}(X_2, \leq_2) \mid f^{-1}(a_2) \in W \}.
$$
Then $\fun{M} : \cat{Krip} \to \cat{Pos}$ defines a functor.
\end{definition}
\begin{theorem}[\cite{GroPat20}, Thm.~8.3]\label{thm:Mon-dialg}
We have $\cat{Mon} \cong \cat{Dialg}(\fun{i}, \fun{M})$.
\end{theorem}
Translating the dialgebraic notion of disjoint union to IM-frames gives:
\begin{definition}\label{def:mif-coprod}
Let $\{ (X_k, \leq_k, N_k) \mid k \in K \}$ be a $K$-indexed set
of IM-frames.
The disjoint union $\coprod_{k \in K}(X_k, \leq_k, N_k)$
is the frame $(X, \leq, N)$ where $(X, \leq)$ is the disjoint union of the
intuitionistic Kripke frames $(X_k, \leq_k)$, and
$N$ is given by $a \in N(x_k)$ iff $a \cap X_k \in N_k(x_k)$ for all
$a \in \fun{Up}(X, \leq)$ and $x_k \in X_k$.
\end{definition}
\begin{definition}
An IM-frame $\mo{X}'$ is a \emph{generated subframe} of an IM-frame
$\mo{X}$ if there exists
an IMF-morphism $\mo{X}' \to \mo{X}$ that is an embedding of posets,
and $\mo{X}'$ is a \emph{p-morphic image} of $\mo{X}$ if there
is a surjective IMF-morphism $\mo{X} \to \mo{X}'$.
\end{definition}
The modal operator
${\customtri}$ can be introduced by the predicate lifting
$\lambda^{{\customtri}} : \fun{Up} \circ \fun{i} \to \fun{Up} \circ \fun{M}$ given by
$$
\lambda^{{\customtri}}_{(X, \leq)}(a) = \{ W \in \fun{M}(X, \leq) \mid a \in W \}.
$$
With $\Ax = \{ {\customtri}(a \wedge b) \wedge {\customtri} a \leftrightarrow {\customtri}(a \wedge b) \}$ we have
$\log{L}_{{\customtri}} = \log{L}(\{ \lambda^{{\customtri}} \}, \Ax)$.
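To see that the axiom in $\Ax$ is sound, note that for any intuitionistic Kripke frame $(X, \leq)$ and $a, b \in \fun{Up}(X, \leq)$ we have
$$
\lambda^{{\customtri}}_{(X, \leq)}(a \cap b) \subseteq \lambda^{{\customtri}}_{(X, \leq)}(a),
$$
since $a \cap b \in W$ and $a \cap b \subseteq a$ imply $a \in W$ by the defining property of $\fun{M}(X, \leq)$; this inclusion is exactly the semantic content of ${\customtri}(a \wedge b) \wedge {\customtri} a \leftrightarrow {\customtri}(a \wedge b)$.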
Its algebraic semantics is given by $(\fun{L}^{{\customtri}}, \fun{j})$-dialgebras,
where $\fun{L}^{{\customtri}} : \cat{HA} \to \cat{DL}$ is the functor sending
$A$ to the free distributive lattice generated by
$\{ \dtriangle a \mid a \in A \}$ modulo $\dtriangle(a \wedge b) \leq \dtriangle b$.
The corresponding natural transformation
$\rho^{{\customtri}} : \fun{L}^{{\customtri}} \circ \fun{u\kern-.1em p'} \to \fun{u\kern-.1em p} \circ \fun{M}$
is defined on generators
by $\rho^{{\customtri}}_{(X, \leq)}(a) = \{ W \in \fun{M}(X, \leq) \mid a \in W \}$.
Towards prime filter extensions and a Goldblatt-Thomason
theorem we need to define a right inverse $\tau$ of $(\rho^{{\customtri}})^{\flat}$.
To garner inspiration we investigate what
$(\rho^{{\customtri}})^{\flat}_A : \fun{M}(\fun{pf'}A) \to \fun{pf}(\fun{L}^{{\customtri}}A)$
looks like for $A \in \cat{HA}$.
We have
\begin{align*}
\dtriangle a \in (\rho^{{\customtri}})^{\flat}_A(W)
\iff \rho_{\fun{pf'}A}(\dtriangle \theta_A'(a))
\in \eta_{\fun{M} \circ \fun{pf'}A}(W)
\iff \theta_A'(a) \in W
\end{align*}
for all $W \in \fun{M}(\fun{pf'}A)$ and $a \in A$.
(Recall that $\theta'_A(a) = \{ \ff{q} \in \fun{pf'}A \mid a \in \ff{q} \}$.)
\begin{definition}\label{def:closed-open}
Let $A \in \cat{HA}$. We call $D \in \fun{u\kern-.1em p'}(\fun{pf'}A)$
\emph{closed} if $D = \bigcap \{ \theta_A'(a) \mid a \in A \text{ and } D \subseteq \theta_A'(a) \}$, and
\emph{open} if
$D = \bigcup \{ \theta_A'(a) \mid a \in A \text{ and } \theta_A'(a) \subseteq D \}$.
\end{definition}
(Indeed, this coincides with closed and open upsets of $\fun{pf'}A$, conceived
of as an Esakia space~\cite[Sec.~2.3.3]{Bez06}.)
Upsets of the form $\theta'_A(a)$ are closed \emph{and} open.
\begin{definition}\label{def:mon-tau}
For a Heyting algebra $A$, define
$\tau_A : \fun{pf} \circ \fun{L}^{{\customtri}}A \to \fun{M} \circ \fun{pf'}A$
as follows. Let $Q \in \fun{pf} ( \fun{L}^{{\customtri}}A)$ and
$D \in \fun{Up}(\fun{pf'}A)$, and define:
\begin{itemize}
\item If $D = \theta'_A(a)$ for some
$a \in A$, then $\theta'_A(a) \in \tau_A(Q)$ if $\dtriangle a \in Q$;
\item If $D$ is closed
then $D \in \tau_A(Q)$ if for all $a \in A$, $D \subseteq \theta'_A(a)$ implies
$\dtriangle a \in Q$.
\item For other $D$, $D \in \tau_A(Q)$ if there is a closed upset
$C \subseteq D$ such that $C \in \tau_A(Q)$.
\end{itemize}
\end{definition}
It is easy to see that $\tau^{{\customtri}}_A$ is an order-preserving function,
i.e.~a morphism in $\cat{Pos}$.
The next lemma states that $\tau^{{\customtri}}$ is a natural transformation.
We postpone the unexciting proof to the appendix.
\begin{lemma}\label{lem:mon-tau-natural}
The transformation $\tau^{{\customtri}} := (\tau_A)_{A \in \cat{HA}}$ from Def.~\ref{def:mon-tau}
is natural.
Moreover, $(\rho^{{\customtri}})^{\flat}_A \circ \tau^{{\customtri}}_A = \fun{id}_{\fun{pf}(\fun{L}^{{\customtri}}A)}$ for every Heyting algebra $A$.
\end{lemma}
Translating the dialgebraic definition of a prime filter extension
to IM-frames gives a definition of prime filter extension for
IM-frames. We emphasise that this definition relies on $\tau^{{\customtri}}$.
In the next section we derive a different notion of prime filter extension
for IM-frames, with its own Goldblatt-Thomason theorem.
\begin{definition}
The \emph{$\tau^{{\customtri}}$-prime filter extension} of an IM-frame
$(X, \leq, N)$ is the IM-frame $(X^{pe}, \subseteq, N^{pe})$, where
$N^{pe}$ is given as follows, where we abbreviate $A := \fun{u\kern-.1em p'}(X, \leq)$.
Let ${\customtri}_N(a) = \{ x \in X \mid a \in N(x) \}$,
and for $\ff{q} \in X^{pe}$ and $D \in \fun{Up}(X^{pe}, \subseteq)$
define:
\begin{itemize}
\item If $D = \theta_A'(a)$ for some $a \in \fun{u\kern-.1em p'}(X, \leq)$,
then $\theta_A'(a) \in N^{pe}(\ff{q})$ if ${\customtri}_N(a) \in \ff{q}$;
\item If $D$ is closed then
$D \in N^{pe}(\ff{q})$ if $\theta_A'(a) \in N^{pe}(\ff{q})$
for all $\theta_A'(a)$ containing $D$;
\item For any $D$, $D \in N^{pe}(\ff{q})$ if there is a closed
$C \subseteq D$ such that $C \in N^{pe}(\ff{q})$.
\end{itemize}
\end{definition}
Now Thm.~\ref{thm:gt} instantiates to:
\begin{theorem}
Suppose $\ms{K}$ is a class of IM-frames closed under $\tau^{{\customtri}}$-prime filter extensions.
Then $\ms{K}$ is axiomatic iff it reflects $\tau^{{\customtri}}$-prime filter extensions
and is closed under disjoint unions, generated subframes and p-morphic
images.
\end{theorem}
\subsection{Goldblatt's Geometric Modality II}
We substantiate the claim that a logic may have several notions of
prime filter extension by giving a right-inverse of $(\rho^{{\customtri}})^{\flat}$
different from the one in Sec.~\ref{subsec:mon-I}. The setup
is the same as in Sec.~\ref{subsec:mon-I}, so we
proceed immediately with the definition.
\begin{definition}
For a Heyting algebra $A$, define
$\sigma_A : \fun{pf} \circ \fun{L}^{{\customtri}}A \to \fun{M} \circ \fun{pf'}A$
by sending $Q \in \fun{pf}(\fun{L}^{{\customtri}}A)$ to $\sigma_A(Q)$, where:
\begin{itemize}
\item For open upsets $D$,
let $D \in \sigma_A(Q)$ if $\exists a \in A$ s.t.~$\dtriangle a \in Q$
and $\theta_A'(a) \subseteq D$;
\item For any other upset $D$, let $D \in \sigma_A(Q)$ if all open supersets
of $D$ are in $\sigma_A(Q)$.
\end{itemize}
\end{definition}
Similar to Lem.~\ref{lem:mon-tau-natural} we can prove the following.
\begin{lemma}\label{lem:sigma-natural}
$\sigma = (\sigma_A)_{A \in \cat{HA}} : \fun{pf} \circ \fun{L}^{{\customtri}} \to \fun{M} \circ \fun{pf'}$
is a natural transformation,
and for every Heyting algebra $A$, we have
$(\rho^{{\customtri}})^{\flat}_A \circ \sigma_A = \fun{id}_{\fun{pf}(\fun{L}^{{\customtri}}A)}$.
\end{lemma}
Now $\sigma$ yields a different notion of
prime filter extension, the precise definition of which we leave to the reader.
Thm.~\ref{thm:gt} yields a Goldblatt-Thomason theorem
with respect to this different notion of prime filter extension.
\begin{theorem}
Let $\ms{K}$ be a class of IM-frames closed
under $\sigma$-prime filter extensions.
Then $\ms{K}$ is axiomatic iff it reflects $\sigma$-prime filter
extensions and is closed under disjoint unions, generated subframes and
p-morphic images.
\end{theorem}
\subsection{Non-Normal Intuitionistic Modal Logic}
Neighbourhood semantics
is used to accommodate non-normal modal
operators~\cite{Sco70,Mon70,Che80,Pac17}.
Dalmonte, Grellois and Olivetti recently put forward an intuitionistic
analogue~\cite{DalGreOli20}
to interpret the extension of
intuitionistic logic with unary modalities $\Box$ and $\Diamond$ which
a priori do not satisfy any interaction axioms.
The ordered sets underlying the neighbourhood semantics from~\cite{DalGreOli20}
are allowed to be preorders. Conforming to our general framework,
we shall assume them to be posets.
However, as mentioned in the introduction, we can obtain exactly the same
(dialgebraic) results when replacing posets with preorders.
We use $\wp$ to denote the (covariant) powerset
functor on $\cat{Set}$.
\begin{definition}
A \emph{coupled intuitionistic neighbourhood frame} or \emph{CIN-frame} is a tuple
$(X, \leq, N_{\Box}, N_{\Diamond})$ such that $(X, \leq)$ is an intuitionistic
Kripke frame and $N_{\Box}, N_{\Diamond}$ are functions
$X \to \wp\wp X$ such that for all $x, y \in X$:
$$
x \leq y
\quad\text{implies}\quad N_{\Box}(x) \subseteq N_{\Box}(y)
\quad\text{and}\quad N_{\Diamond}(x) \supseteq N_{\Diamond}(y).
$$
A CIN-morphism
$f : (X, \leq, N_{\Box}, N_{\Diamond}) \to (X', \leq', N'_{\Box}, N'_{\Diamond})$
is a p-morphism $f : (X, \leq) \to (X', \leq')$ where
for all $N \in \{ N_{\Box}, N_{\Diamond} \}$, $x \in X$, $a' \in \wp X'$,
$f^{-1}(a') \in N(x)$ iff $a' \in N'(f(x))$.
$\cat{CIN}$ denotes the category of CIN-frames and -morphisms.
\end{definition}
The language $\lan{L}_{\Box\!\Diamond}$ extending the intuitionistic
language with unary modalities $\Box$ and $\Diamond$ can be
interpreted in models based on CIN-frames, where
\begin{align*}
x \Vdash \Box\phi &\iff \llbracket \phi \rrbracket \in N_{\Box}(x), &
x \Vdash \Diamond\phi &\iff X \setminus \llbracket \phi \rrbracket \notin N_{\Diamond}(x).
\end{align*}
We now view this dialgebraically:
\begin{definition}
Define $\fun{N} : \cat{Krip} \to \cat{Pos}$
on objects $(X, \leq)$ by
$\fun{N}(X, \leq) = (\wp\wp X, \subseteq) \times (\wp\wp X, \supseteq)$,
and on morphisms $f : (X, \leq) \to (X', \leq')$
by
$$
\fun{N}f(W_1, W_2) = \big( \{ a_1' \in \wp X' \mid f^{-1}(a_1') \in W_1 \},
\{ a_2' \in \wp X' \mid f^{-1}(a_2') \in W_2 \} \big).
$$
\end{definition}
\begin{theorem}\label{thm:CIN-dialg}
We have $\cat{CIN} \cong \cat{Dialg}(\fun{i}, \fun{N})$.
\end{theorem}
\begin{proof}
The isomorphism on objects is obvious. The isomorphism on
morphisms follows from a computation similar to that in
the proof of Thm.~\ref{thm:Mon-dialg}.
\end{proof}
The modal operators $\Box, \Diamond$ are induced by
$\lambda^{\Box}, \lambda^{\Diamond} : \fun{Up} \circ \fun{i} \to \fun{Up} \circ \fun{N}$,
where
\begin{align*}
\lambda_{(X, \leq)}^{\Box}(a)
&= \{ (W_1, W_2) \in \fun{N}(X, \leq) \mid a \in W_1 \} \\
\lambda_{(X, \leq)}^{\Diamond}(a)
&= \{ (W_1, W_2) \in \fun{N}(X, \leq) \mid X \setminus a \notin W_2 \}
\end{align*}
Unravelling the definition of a disjoint union of (the dialgebraic
renderings of) CIN-frames shows that it is computed similar to
Def.~\ref{def:mif-coprod}.
Generated subframes and p-morphic images are defined
by means of CIN-morphisms.
Since $\Box$ and $\Diamond$ only satisfy the congruence rule,
the algebraic semantics is given by dialgebras for the functor
$\fun{L}^{\Box\!\Diamond} : \cat{HA} \to \cat{DL}$ that sends
$A$ to the free distributive lattice generated by
$\{ \dbox a, \ddiamond a \mid a \in A \}$.
The induced natural transformation
$\rho^{\Box\!\Diamond} : \fun{L}^{\Box\!\Diamond} \circ \fun{u\kern-.1em p'} \to \fun{u\kern-.1em p} \circ \fun{N}$
is defined on components via
$\rho^{\Box\!\Diamond}_{(X, \leq)}(\dbox a) = \lambda_{(X, \leq)}^{\Box}(a)$ and
$\rho^{\Box\!\Diamond}_{(X, \leq)}(\ddiamond a) = \lambda_{(X, \leq)}^{\Diamond}(a)$.
Akin to Sec.~\ref{subsec:mon-I} we find
$\dbox a \in (\rho_A^{\Box\!\Diamond})^{\flat}(W_1, W_2)$ iff
$\theta'_A(a) \in W_1$ and
$\ddiamond a \in (\rho_A^{\Box\!\Diamond})^{\flat}(W_1, W_2)$
iff $\fun{pf'}A \setminus \theta'_A(a) \notin W_2$
for all $A \in \cat{HA}$, $(W_1, W_2) \in \fun{N}(\fun{pf'}A)$ and $a \in A$.
\begin{definition}
For a Heyting algebra $A$, define
$$
\tau_A
: \fun{pf}(\fun{L}^{\Box\!\Diamond}A) \to \fun{N}(\fun{pf'}A)
: Q \mapsto \big( \{ \theta'_A(a) \mid \dbox a \in Q \}, \{ \fun{pf'}A \setminus \theta'_A(a) \mid \ddiamond a \notin Q \} \big).
$$
\end{definition}
Then $\tau = (\tau_A)_{A \in \cat{HA}}$ defines a natural transformation
$\fun{pf} \circ \fun{L}^{\Box\!\Diamond} \to \fun{N} \circ \fun{pf'}$.
It follows from the definitions that
$(\rho^{\Box\!\Diamond})^{\flat} \circ \tau
= \fun{id}_{\fun{pf} \circ \fun{L}^{\Box\!\Diamond}}$.
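Spelling this out on generators (a routine check, using that $\theta'_A$ is injective by the prime filter representation of distributive lattices):
$\dbox a \in (\rho_A^{\Box\!\Diamond})^{\flat}(\tau_A(Q))$ iff
$\theta'_A(a) \in \{ \theta'_A(b) \mid \dbox b \in Q \}$ iff $\dbox a \in Q$, and
$\ddiamond a \in (\rho_A^{\Box\!\Diamond})^{\flat}(\tau_A(Q))$ iff
$\fun{pf'}A \setminus \theta'_A(a) \notin \{ \fun{pf'}A \setminus \theta'_A(b) \mid \ddiamond b \notin Q \}$
iff $\ddiamond a \in Q$.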
We get the following definition of $\tau$-prime filter extensions
and Goldblatt-Thomason theorem.
\begin{definition}
The $\tau$-prime filter extension of a CIN-frame
$\mo{X} = (X, \leq, N_{\Box}, N_{\Diamond})$ is given by
$\pe\mo{X} = (X^{pe}, \subseteq, N_{\Box}^{pe}, N_{\Diamond}^{pe})$,
where for $\ff{q} \in X^{pe}$ we have
\begin{align*}
N_{\Box}^{pe}(\ff{q})
&= \{ \theta'_{\fun{u\kern-.1em p'}(X, \leq)}(a) \in \wp X^{pe}
\mid a \in \fun{u\kern-.1em p}(X, \leq) \text{ and } \Box_N(a) \in \ff{q} \} \\
N_{\Diamond}^{pe}(\ff{q})
&= \{ X^{pe} \setminus \theta'_{\fun{u\kern-.1em p'}(X, \leq)}(a) \in \wp X^{pe}
\mid a \in \fun{u\kern-.1em p}(X, \leq) \text{ and } \Diamond_N(a) \notin \ff{q} \}
\end{align*}
Here $\Box_N(a) = \{ x \in X \mid a \in N_{\Box}(x) \}$
and $\Diamond_N(a) = \{ x \in X \mid X \setminus a \notin N_{\Diamond}(x) \}$.
\end{definition}
\begin{theorem}
Let $\ms{K}$ be a class of CIN-frames closed
under $\tau$-prime filter extensions.
Then $\ms{K}$ is axiomatic iff it reflects $\tau$-prime filter
extensions and is closed under disjoint unions, generated subframes and
p-morphic images.
\end{theorem}
\subsection{Heyting-Lewis Logic}
Finally we discuss Heyting-Lewis logic, the extension of intuitionistic
logic with a binary strict implication operator
$\sto$~\cite{LitVis18,LitVis19,GroLitPat21}.
\begin{definition}
A \emph{strict implication frame} is a tuple $(X, \leq, R_s)$,
where $(X, \leq)$ is an intuitionistic
Kripke frame and $R_s$ is a relation on $X$ such that $x \leq y R_s z$ implies
$x R_s z$.
Morphisms between them are functions that are p-morphisms with respect to
both orders.
Models are defined as expected, and $\sto$ is interpreted via
$$
x \Vdash \phi \sto \psi \iff \text{for all } y \in X,
\text{ if }xR_sy \text{ and } y \Vdash \phi \text{ then } y \Vdash \psi.
$$
\end{definition}
Strict implication frames can be modelled as
$(\fun{i}, \fun{P}_s)$-dialgebras, where
$\fun{P}_s : \cat{Krip} \to \cat{Pos}$ is the functor that sends
$(X, \leq)$ to $(\wp X, \subseteq)$ ($\wp$ denotes the covariant
powerset functor)
and a p-morphism $f$ to $\wp f$.
The modality $\sto$ can then be defined via the binary predicate lifting
$\lambda^{\sto}$, given on components by
$$
\lambda^{\sto}_{(X, \leq)}(a, b)
= \{ c \in \fun{P_s}(X, \leq) \mid c \cap a \subseteq b \}.
$$
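Unravelling the definitions confirms that this lifting matches the interpretation of $\sto$ given above: for $a, b \in \fun{Up}(X, \leq)$ and $x \in X$ we have
$$
R_s[x] \in \lambda^{\sto}_{(X, \leq)}(a, b)
\iff R_s[x] \cap a \subseteq b
\iff \forall y\, (x R_s y \text{ and } y \in a \text{ imply } y \in b),
$$
which for $a = \llbracket \phi \rrbracket$ and $b = \llbracket \psi \rrbracket$ is precisely the clause for $x \Vdash \phi \sto \psi$.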
Disjoint unions, generated subframes and p-morphic images are
defined as for $\Box$-frames.
The algebraic semantics for this logic given in~\cite[Def.~III.1]{GroLitPat21}
can be modelled dialgebraically in a similar way as we have seen above.
Computation of the natural transformation $\rho^{\sto}$ is, by now, routine.
Examining the proof of the duality for Heyting-Lewis logic sketched
in~\cite[Section III-D]{GroLitPat21}, we can compute
a one-sided inverse $\tau$ to $(\rho^{\sto})^{\flat}$.
We suppress the details, but do give the resulting notion of prime filter
extension:
\begin{definition}
The \emph{prime filter extension} of a strict implication frame $(X, \leq, R_s)$
is given by the frame $(X^{pe}, \subseteq, R_s^{pe})$, with $R_s^{pe}$ defined by
$$
\fil{p} R_s^{pe} \fil{q} \iff \forall a, b \in \fun{u\kern-.1em p}(X, \leq),
\text{if } a \sto_R b \in \fil{p} \text{ and } a \in \fil{q}
\text{ then } b \in \fil{q}
$$
where $a \sto_R b = \{ x \in X \mid R_s[x] \cap a \subseteq b \}$.
\end{definition}
With this notion of prime filter extension, Thm.~\ref{thm:gt} instantiates to:
\begin{theorem}
A class $\ms{K}$ of strict implication frames that is closed under prime
filter extensions is axiomatic iff it reflects
prime filter extensions and is closed under disjoint unions, generated
subframes and p-morphic images.
\end{theorem}
\section{Conclusions}
We have given a general way to obtain Goldblatt-Thomason theorems for
modal intuitionistic logics, using the framework of dialgebraic logic.
Subsequently, we applied the general result to several concrete modal
intuitionistic logics.
The results in this paper can be generalised in several directions.
\begin{description}
\item[More applications.]
The general Goldblatt-Thomason theorem can also be instantiated to
$\Diamond$-frames and $\Box\Diamond$-frames~\cite{WolZak99}.
Using preorders instead of posets, we can obtain Goldblatt-Thomason
theorems for ((strictly) condensed) $H\Box$-frames and
$H\Box\Diamond$-frames used by Bo\v{z}i\'{c} and
Do\v{s}en~\cite{BozDos84}.
\item[More base logics.]
The framework of dialgebraic logic is not restricted to an intuitionistic
base. Generalising the results from this paper, we can obtain a general
Goldblatt-Thomason theorem that also covers modal bi-intuitionistic
logics~\cite{GroPat19} and modal lattice logics~\cite{BezEA22}.
%
Moreover, this would also cover coalgebraic logics over a classical
and a positive propositional base.
\item[Other modal intuitionistic logics.]
The results in the paper do not apply to the modal
intuitionistic logics investigated by Fischer Servi~\cite{Fis81},
Plotkin and Stirling~\cite{PloSti86}, and Simpson~\cite{Sim94},
because these formalisms are not covered by the dialgebraic approach.
It would be interesting to see if similar techniques can be applied
to these logics to still prove Goldblatt-Thomason theorems.
\end{description}
\bigskip\noindent
\textit{Acknowledgements.} \,
I am grateful to the anonymous reviewers for many constructive and helpful comments.
{\footnotesize
\bibliographystyle{plainnat}
\textbf{Splitting Factors} \, Scalar curvature constraints on a manifold $W^n$ can be studied by means of splitting procedures. The idea is to understand the given curvature condition on $W^n$ inductively from similar scalar curvature constraints on suitable subspaces $V^k \subset W^n$, $k <n$, we call \emph{splitting factors} of $W^n$. The success of this strategy depends on the \emph{control} we can get on such splitting factors. This is what this paper is about.\\
The basic examples of splitting factors have been discovered by Schoen and Yau \cite{SY1}. Any compact area minimizing hypersurface $H^n$, in a $scal>0$-manifold $M^{n+1}$, admits a \emph{conformal} $scal>0$-metric.
The conformal factor is the first eigenfunction $f_H>0$ of the conformal Laplacian $L_H = -\Delta +\frac{n-2}{4 (n-1)} \cdot scal_H$. The stability of $H$ implies that the first eigenvalue $\lambda_H$ must be positive. Then the transformation law (\ref{trl}) for scalar curvature under the conformal transformation $f_H^{4/n-2} \cdot g_H$ shows that $scal(f_H^{4/n-2} \cdot g_H) >0$ since $\lambda_H >0$ and $f_H>0$:
\begin{equation} \label{trl} scal(f_H^{4/n-2} \cdot g_H) \cdot f_H^{\frac{n+2}{n-2}} = L_H(f_H) = \lambda_H \cdot f_H > 0.\end{equation}
This idea also applies to other geometric scenarios, including general relativity, in work of Gromov, Lawson, Galloway, Schoen and others \cite{GL1}, \cite{G1}, \cite{G2}, \cite{GS}. These extensions involve minimizers of other variational integrals and, more generally, of \emph{almost minimizers}, cf. \ref{bcn}.D, not necessarily arising from variational problems like horizons of black holes. The use of (almost) minimizers with boundaries or corners has further broadened the range of applications \cite{GL1}, \cite{G1}.\\
This brings us to the question what can be said about such splitting factors. In low dimensions $\le 7$ they are smooth manifolds making it easy to argue inductively and to descend stepwise to lower dimensions. This changes in higher dimensions. An (almost) area minimizer $H^n$ may have a complex singular set $\Sigma \subset H$ of codimension $\ge 7$. $H \setminus \Sigma$ degenerates and the conformal deformations of $H \setminus \Sigma$ diverge when we approach $\Sigma$. This generally produces complicated singular geometries with a new singular set $\Sigma^*$ we get from completing the deformed $H \setminus \Sigma$.\\
This said, our goal is to describe natural conformal deformations leading to well-controlled splitting factors in any dimension. To this end we first note that the spectral theory of $L_H$ on $H \setminus \Sigma$ deviates from that in the compact smooth case. On $H \setminus \Sigma$ we have positive eigenfunctions for any eigenvalue $\lambda \le \lambda_H$, where $\lambda_H$ is the \emph{principal eigenvalue}\footnote{In the compact and smooth case $\lambda_H$ is nothing but the ordinary first eigenvalue.} of $L_H$. This observation is also the starting point for our approach. We deliberately work with eigenvalues $\lambda>0$ which are \emph{strictly smaller} than $\lambda_H$. The remarkable consequence of this choice is that the analysis of the associated eigenfunctions and conformal deformations near $\Sigma$ is much easier to understand and to work with. This results from the particular potential theory of $L_H$ we have on such minimizers \cite{L1}-\cite{L3}. What we get are canonical, in a concrete sense \emph{minimal}, conformal deformations into surprisingly simple $scal>0$-geometries. We call them the \emph{minimal (splitting) factors} of $M$.\\
With minimal factors we get inductive splitting schemes in arbitrary dimensions. In the second part of this paper \cite{L4} we derive an \emph{inductive removal of singularities} from minimal factors. Another route is the full regularization of minimal factors to \emph{smooth} $scal>0$-manifolds in any given dimension. This approach \emph{additionally} employs \cite{L2} and \cite{L3} for the Jacobi field operator, cf.\cite{L5}. Using either of these two methods one may treat scalar curvature problems in the smooth category. In turn, Schoen and Yau \cite{SY2} have suggested an alternative strategy through nestings of certain singular minimizers. Minimal splitting factors may be of use in such a setting as well. \\
\liii
\textbf{Minimal Factors} \, For now we focus on the case of a compact area minimizer $H^n$ in some $scal>0$-manifold $M^{n+1}$ with full statements postponed to Ch.\ref{over}. We consider the induced metric $g_H$, the second fundamental form $A_H$ on $H \setminus \Sigma \subset M$, with norm $|A|$, and some fixed \si-transform $\bp$, cf.~Ch.\ref{bcn}. At this stage we just think of $\bp$ as a revamped version of $|A|$.
For $\Sigma \n$ there are actually many different positive solutions of eigenvalue equations of $L_H$ for ordinary eigenvalues. This is also true for the following $\bp$-weighted eigenvalue equation (\ref{weee}).
\begin{equation}\label{weee}
L_H (u_\lambda) = \lambda \cdot \bp^2 \cdot u_\lambda \,\mm{ on } H \setminus \Sigma,
\end{equation}\noindent
for any $\lambda < \lambda^{\bp}_{H}$, where $\lambda^{\bp}_{H}$ is the principal eigenvalue of ${\bp}^{-2} \cdot L$. We note that although $\bp$ diverges towards $\Sigma$, one can still show that $\lambda^{\bp}_{H}>0$. We will use deformations by solutions of (\ref{weee}). They have valuable additional properties when compared to their non-weighted counterparts. Most obviously we get the blow-up invariance in (iii) below. Now we make the main choices: for some $\lambda \in (0,\lambda^{\bp}_{H})$ and a
(super)solution $u_\lambda>0$ of (\ref{weee}) with \emph{minimal growth} towards $\Sigma_H$, when compared to other solutions $v>0$, we define the associated \emph{minimal factor metric}
\begin{equation}\label{gm}
g_{\sima}:=u_\lambda^{4/n-2} \cdot g_H
\end{equation}
We observe from (\ref{trl}) and (\ref{weee}) that $scal(g_{\sima})>0$. The surprising point about $g_{\sima}$ is that other properties of this metric strongly resemble those of the original area minimizing geometry on $H$:
\begin{enumerate}
\item The metric completion $(\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})$ of $(H \setminus \Sigma,g_{\sima})$ is \emph{homeomorphic} to $H$, in particular it is again compact. Thus we can write $(\widehat{H \setminus \Sigma},\widehat{ d_{\sima}})$ as $(H,d_{\sima})$ and we call it a \emph{minimal (splitting) factor} of $M$. $(H,d_{\sima})$ admits a canonical augmentation to a \emph{metric measure space} $(H,d_{\sima},\mu_{\sima})$. This deformed space is \emph{doubling} and it has a lower \emph{volume decay of order n}.
\item $(H,d_{\sima},\mu_{\sima})$ supports \emph{Poincar\'{e}} and \emph{Sobolev inequalities} and this also means it satisfies \emph{isoperimetric inequalities}. Besides other applications, this is indispensable for controlling minimal hypersurfaces within $(H,d_{\sima},\mu_{\sima})$, in particular close to $\Sigma$.
\item Any \emph{blow-up} of $(H,d_{\sima},\mu_{\sima})$ in some $p \in \Sigma$ is a $scal>0$-\emph{cone} $(C,d_{\sima})$, where $C$ is a tangent cone of $(H,d_H)$ but $C$ is now equipped with its minimal factor\footnote{For the original area minimizing cone $C^n \subset \R^{n+1}$ we have $scal =-|A|^2\le 0$.} metric $d_{\sima}$ for the same eigenvalue $\lambda$. Thus, $(H,d_{\sima},\mu_{\sima})$ has a simple asymptotic geometry near $\Sigma$ amenable to inductive tangent cone reductions similar to the original space $(H,d_H)$. Note that since the principal eigenvalues of $H$ and $C$ usually differ, with $\lambda^{\bp}_{H} < \lambda^{\bp}_{C}$, we encounter non-principal eigenvalues in the blow-up analysis even if circumstances allowed us to start from $\lambda^{\bp}_{H}$.
\item The \emph{Hausdorff codimension} of the singular set $\Sigma \subset (H,d_H)$ is $\ge 7$. This is a well-known result due to Federer cf. \cite{F}, \cite{Gi}. The blow-up properties and estimates for minimal growth solutions show that this estimate remains valid for singularities of minimal factors. This is an important detail since bounds on the Hausdorff dimension of $\Sigma$, rather than on its topological dimension, are what is needed in the analysis on $(H,d_{\sima})$, cf. the following remark.
\end{enumerate}
\textbf{Remark} \, The canonical homeomorphism between $(H,d_H)$ and $(H,d_{\sima})$ is not Lipschitz regular. However, different from the topological dimension the Hausdorff dimension is \emph{not} a topological but only a bi-Lipschitz invariant. Thus the Federer estimate for $(H,d_H)$ does not simply carry over to $(H,d_{\sima})$, instead we use particularities of minimal growth solutions to estimate the Hausdorff dimension of $\Sigma$ directly for minimal factors. To illustrate this issue we recall that there is a (not Lipschitz but H\"{o}lder regular) homeomorphism $\phi: \R^2 \ra \R^2$ mapping $S^1$ to the Koch snowflake $K \subset \R^2$ with Hausdorff dimension $\ln4/ \ln 3$. In other words the Hausdorff dimension of $S^1 \subset \R^2$ relative to the $\phi$-pull-back of the Euclidean distance metric is $\ln4/ \ln 3$ cf.\cite{GV}, \cite{Bi}. \\
\textbf{Organization of this Paper} \, The main challenge is the Sobolev inequality for minimal splitting factors. The proof uses the hyperbolic unfolding of almost minimizers \cite{L1} to derive estimates for the conformal factors. Hyperbolic geometry is also employed to introduce other geometric structures, like \emph{canonical Semmes families of curves}, on the original area minimizer $H$. These particular Semmes families survive the deformation to $(H,d_{\sima},\mu_{\sima})$. We use similar techniques to get \emph{volume growth estimates for balls} in $(H,d_{\sima},\mu_{\sima})$. Standard theory for metric measure spaces, e.g. \cite[Ch.9]{HKST}, then implies the asserted Sobolev inequalities.\\
The existence of $scal>0$-tangent cones is derived from corresponding analytic results in \cite{L2} and \cite{L3}. From this and growth estimates for minimal Green's functions we get the estimate for the Hausdorff dimension of $\Sigma$ relative to $(H,d_{\sima},\mu_{\sima})$.\\
\textbf{Remark} \, The present paper, together with \cite{L4}, extends our earlier (unpublished) lecture notes \cite{L6} to improve its accessibility and to broaden the range of applications.\\
The presentation is rigorously based on the (published) papers \cite{L1} - \cite{L3}. They cover the asymptotic geometry and analysis near singularities we need to derive our results. In \cite{L5} we survey the logic of the approach and the vital r\^{o}le of hyperbolic geometry in the higher dimensional theory.
\subsubsection{Basic Concepts}\label{bcn}
We summarize some basic notations, concepts and results from the potential theory of almost minimizing hypersurfaces \cite{L1} - \cite{L3}.\\
\textbf{A. Basic Classes} of \emph{integer multiplicity rectifiable currents} of dimension $n\ge 2$ with connected
support inside some complete, smooth Riemannian manifold $(M^{n+1},g_M)$
{\small \begin{description}
\item[${\cal{H}}^c_n$:] $H^n \subset M^{n+1}$ is compact and locally mass minimizing without boundary.
\item[${\cal{H}}^{\R}_n$:] $H^n \subset\R^{n+1}$ is a complete hypersurface in flat Euclidean space $(\R^{n+1},g_{eucl})$ with $0\in H$ that is an oriented minimal boundary of some open set in $\R^{n+1}$.
\item[${\cal{H}}_n$:] ${\cal{H}}_n:= {\cal{H}}^c_n \cup {\cal{H}}^{\R}_n$ and ${\cal{H}} :=\bigcup_{n \ge 1} {\cal{H}}_n$. We briefly refer to $H \in {\cal{H}}$ as an \textbf{area minimizer}.
\item[$\mathcal{C}_{n}$:] $\mathcal{C}_{n} \subset {\cal{H}}^{\R}_n$ is the space of area minimizing $n$-cones in $\R^{n+1}$ with tip in $0$.
\item[$\mathcal{SC}_{n}$:] $\mathcal{SC}_n \subset\mathcal{C}_n$ is the subset of cones which are singular at least in $0$.
\item[${\cal{G}}^c_n$:] $H^n \subset M^{n+1}$ is a compact \textbf{almost minimizer}, cf.\ref{bcn}.D. below. We set ${\cal{G}}^c :=\bigcup_{n \ge 1} {\cal{G}}^c_n.$
\item [${\cal{G}}_n$:] ${\cal{G}}_n := {\cal{G}}^c_n \cup {\cal{H}}^{\R}_n$ and ${\cal{G}} :=\bigcup_{n \ge 1} {\cal{G}}_n$.
\end{description}}
We note that each of the classes ${\cal{H}}_n$, ${\cal{H}}^{\R}_n$ and ${\cal{G}}_n$ is \emph{closed under blow-ups}.\\
\textbf{B. One-Point Compactifications} of hypersurfaces $H \in {\cal{H}}^{\R}_n$ are denoted by $\widehat{H}$. For the singular set $\Sigma_H$ of some $H \in {\cal{H}}^{\R}_n$ we \emph{always} add $\infty_H$ to $\Sigma$ as well, even when $\Sigma$ is already compact, to define $\widehat\Sigma_H:=\Sigma_H \cup \infty_H$. On the other hand, for $H \in{\cal{G}}^{c}_n$ we set $\widehat H=H$ and $\widehat\Sigma=\Sigma$.\\
\textbf{C.1. \si-Structures} An \si-transform $\bp$ is a distance measure to singular and highly curved parts of an almost minimizer. There are several ways to define such an $\bp$, but they all share some simple properties: an assignment $\bp$ which associates with any $H \in {\cal{G}}$ a locally Lipschitz function $\bp_H:H \setminus \Sigma_H\to\R^{\ge 0}$ is an \textbf{\si-transform}, more precisely a Hardy \si-transform, provided it satisfies the following axioms.
\begin{itemize}
\item If $H \subset M$ is totally geodesic, then $\bp_H \equiv 0$. Otherwise we have $\bp_H>0, \bp_H \ge |A_H| $, $\bp_H(x) \ra \infty,$ for $x \ra p \in \Sigma_H$
and $\bp_{\lambda \cdot H} \equiv \lambda^{-1} \cdot \bp_{H}$ for any $\lambda >0$.
\item If $H$ is not totally geodesic, and thus $\bp_H>0$, we define the \textbf{\si-distance} $\delta_{\bp_H}:=1/\bp_H$.
This function is
$L_{\bp}$-Lipschitz regular for some constant $L_{\bp}=L(\bp,n)>0$, i.e.,
\[
|\delta_{\bp_H}(p)- \delta_{\bp_H}(q)| \le L_{\bp} \cdot d_H(p,q) \mm{ for any } p, q \in H \setminus \Sigma \mm{ and any } H \in {\cal{G}}_n.
\]
\item
If $H_i \in {\cal{H}}_n$, $i \ge 1$, is a sequence converging to the limit space $H_\infty \in {\cal{H}}_n$,
then $\bp_{H_i}\overset{C^\alpha} \longrightarrow {\bp_{H_\infty}}$ for any $\alpha \in (0,1)$. For general $H \in {\cal{G}}_n$,
this \textbf{naturality} holds for blow-ups: $\bp_{\tau_i \cdot H} \overset{C^\alpha} \longrightarrow {\bp_{H_\infty}}$, for any sequence $\tau_i \ra \infty$ so that $\tau_i \cdot H \ra H_\infty \in {\cal{H}}^\R_n$.
\item For any compact $H \in {\cal{G}}^c$ and any $C^{\alpha}$-regular $(2,0)$-tensor $B$, $\alpha \in (0,1)$, on the ambient space $M$ of $H$ with
$B|_H \not\equiv -A_H$ there exists a constant $k_{H;B} > 0$ such that
\[
\int_H|\nabla f|^2 + |A+B|_H|^2 \cdot f^2 dV \ge k_{H;B} \cdot \int_H \bp^2 \cdot f^2 dV \ge \frac{k_{H;B}}{L^2_{\bp}} \cdot \int_H \frac{f^2}{dist_H(x, \Sigma)^2}dV.
\]
\end{itemize}
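Although not stated as a separate axiom, specializing the last inequality to $B \equiv 0$ (admissible whenever $H$ is not totally geodesic, so that $B|_H \equiv 0 \not\equiv -A_H$) gives a Hardy-type inequality for the second fundamental form itself:
$$
\int_H|\nabla f|^2 + |A|^2 \cdot f^2 \, dV \ge k_{H} \cdot \int_H \bp^2 \cdot f^2 \, dV
$$
for some constant $k_H = k_{H;0} > 0$.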
\noindent It is worth recalling that a \emph{totally geodesic} almost minimizer $H$, where $|A| \equiv 0$, is automatically \emph{regular} since $|A|$ diverges when we approach hypersurface singularities.\\
\noindent Any \si-transform $\bp$ admits a $C^\infty$-\textbf{Whitney smoothing} $\bp^*$ that still satisfies these axioms except for a slightly weaker form of naturality:
\begin{equation}\label{smot}
c_1 \cdot \delta_{\bp}(x) \le \delta_{\bp^*}(x) \le c_2 \cdot \delta_{\bp}(x)\quad\mm{and}\quad |\p^\beta \delta_{\bp^*} / \p x^\beta |(x) \le c_3(\beta) \cdot \delta_{\bp}^{1-|\beta|}(x)
\end{equation}
for constants $c_i(L_{\bp},H,\beta) >0$, with $c_i(L_{\bp},n,\beta) >0$ for ${\cal{H}}^{\R}_n$, $i=1,\,2,\,3$. Here, $\beta$ is a multi-index for derivatives with respect to normal
coordinates around $x \in H \setminus \Sigma$. Throughout this paper we choose \emph{one fixed pair of a \si-transform and an associated Whitney smoothing } $\bp$ and $\bp^*$ . The precise choices are immaterial for the sequel as the results will not depend on the concrete \si-transform or Whitney smoothing.\\
\textbf{C.2. \si-Pencils} An elementary way to use $\bp$ is to quantify a non-tangential way of approaching $\Sigma$. We define non-tangential \textbf{\si-pencils} to describe a generalized inner cone condition, viewing $\Sigma_H$ as the boundary of $H \setminus \Sigma_H$:
\begin{equation}\label{pen}
\P(z,\omega):= \{x \in H \setminus \Sigma \,|\,\omega \cdot d_H(x,z) < \delta_{\bp} (x)\}
\end{equation}
pointing to $z \in \Sigma$, where $\omega>0$. The angle $\arctan(\omega^{-1})$ serves as an aperture of $\P(z,\omega)$ relative to $z$. When $H$ is a cone $C$ with singularity $0$, we write $C \setminus \{0\} \cong S_C \times \R^{>0}$, for $S_C:=\p B_1(0) \cap C$. Then the pencil $\P(0,\omega)$ is just a subcone $C(U) \subset C$ over some open set $U \subset S_C$.\\
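That $\P(0,\omega)$ is a subcone can be read off from the scaling axiom in C.1; the following short computation is included for orientation. Since $\lambda \cdot C = C$ for any $\lambda > 0$, the axiom $\bp_{\lambda \cdot C} \equiv \lambda^{-1} \cdot \bp_C$ says that $\delta_{\bp}$ is homogeneous of degree one on $C$, that is, $\delta_{\bp}(\lambda \cdot x) = \lambda \cdot \delta_{\bp}(x)$. Hence
\[
\omega \cdot d_C(\lambda \cdot x,0) < \delta_{\bp}(\lambda \cdot x) \, \Longleftrightarrow \, \omega \cdot d_C(x,0) < \delta_{\bp}(x),
\]
so $x \in \P(0,\omega)$ if and only if $\lambda \cdot x \in \P(0,\omega)$, and $\P(0,\omega) = C(U)$ for the open set $U = \{v \in S_C \,|\, \omega < \delta_{\bp}(v)\}$.\\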
It will be useful to also define the \textbf{truncated \si-pencils} $\TP$. Compared to the \si-pencils $\P$, the $\TP$ are at a controllable distance from the singular set, and $\D$-maps extend easily to these sets.
\begin{equation}\label{trpen}
\TP(p,\omega,R,r)=\TP_H(p,\omega,R,r):=\left(B_{R}(p) \setminus B_{r} (p)\right) \cap \P(p,\omega)\subset H.
\end{equation}
\li
\textbf{C.3. \si-Sobolev spaces} We use dedicated Sobolev spaces, the Hilbert spaces $H^{1,2}_{\bp}(H \setminus \Sigma)$. We recall from \cite[Ch.5.1]{L1}:
\begin{itemize}
\item The \textbf{$H^{1,2}_{\bp}$-scalar product}:\, $\langle f,g \rangle_{H^{1,2}_{\bp}(H \setminus \Sigma)}:= \int_{H \setminus \Sigma} \langle \nabla f, \nabla g \rangle + \bp^2 \cdot f \cdot g \, dV $, for $C^2$-functions $f,g$. The $H^{1,2}_{\bp}$-norm associated to this scalar product is written $|f|_{H^{1,2}_{\bp}(H \setminus \Sigma)}$.
\item The \textbf{\si-Sobolev space} $H^{1,2}_{\bp}(H \setminus \Sigma)$ is the $H^{1,2}_{\bp}(H \setminus \Sigma)$-completion of the subspace of functions in $C^2(H \setminus \Sigma)$ with finite $H^{1,2}_{\bp}(H \setminus \Sigma)$-norm. $H^{1,2}_{\bp}(H \setminus \Sigma)$ is a Hilbert space.
\end{itemize}
For non-totally geodesic hypersurfaces $H\in\cal{G}$ we have a vital \textbf{compact approximation} result
\begin{equation}\label{csa}
H^{1,2}_{\bp}(H \setminus \Sigma) \equiv H^{1,2}_{\bp,0}(H \setminus \Sigma):= H^{1,2}_{\bp}\mm{-completion of }C^2_0(H \setminus \Sigma),
\end{equation}
where $C^2_0(H \setminus \Sigma)$ is the space of $C^2$-functions with compact support on $H \setminus \Sigma$.\\
\liii
\textbf{D.1. Almost Minimizers} \, An \textbf{almost minimizer} $H^n$ is a possibly singular
hypersurface looking more and more like an area minimizer the closer we zoom into it. \\
More precisely, the volume of a ball $B_r(p) \subset H^n$ of radius $r>0$ exceeds that of the area minimizer with the
same boundary by at most $c_H \cdot r ^{n+ 2 \cdot \beta}$, for some constant $c_H >0$. Such an almost minimizer $H^n$ is a $C^{1, \beta}$-hypersurface except for some singular set $\Sigma_H$ of Hausdorff-dimension $\le n-7$. Sequences of scalings $H_i= \tau_i \cdot H$ of $H$, for some sequence $\tau_i \ra \infty$, for $i \ra \infty$, around a
given singular point $x \in \Sigma \subset H^n$, \emph{flat norm subconverge}\footnote{Saying a sequence \emph{subconverges} means it converges after some possible selection of a subsequence.} to area minimizing tangent cones $C^n \subset \R^{n+1}$.\\
\textbf{D.2. Tameness and $\D$-Maps} \, In the case of Euclidean area minimizers we know that $H \setminus \Sigma_H$ is smooth and that there is even a
$C^k$-approximation by tangent cones $C$, for any $k \in \Z^{\ge 0}$, in the following sense. For a ball $B_R(q)$, $R>0$, with $B_R(q) \cap C \subset C \setminus \Sigma_C$, Allard theory shows:
for any $k \in \Z^{\ge 1}$ and large $i$, $B_R(q_i) \cap H_i$, for suitable $q_i \in H_i \setminus \Sigma_{H_i}$, is a local $C^k$-section, up to minor adjustments near $\p B_R(q)$,
\[\Gamma_i :B_R(q) \cap C \ra B_R(q_i) \cap H_i\subset\nu,\] viewed within the total space of the normal bundle $\nu$ of $B_R(q)\cap C$, and,
for $i \ra \infty$, $\Gamma_i$ converges, in $C^k$-norm, to the zero section, which we identify with $B_R(q) \cap C$. We call the $C^k$-section $\D := \Gamma_i$
the \emph{asymptotic identification map} or $\D$\textbf{-map} for short. We simply write $\D$ when all other details are known from the context. The point about the $\D$-maps is that $C^k$-functions on $B_R(q_i) \cap H_i$ become comparable to $C^k$-functions on $B_R(q) \cap C$ via the $\D$-map pull-back to $C$ (or $H_\infty$). We use this to specify the class $\cal{G}$ of almost minimizers with a sufficient degree of regularity for the purposes of this series of papers:
an almost minimizer $H$ belongs to $\cal{G}$ provided the following $C^{k,\gamma}$-\emph{tameness} properties hold for some $k \ge 2$, $\gamma \in (0,1)$;
to be on the safe side, we typically use $k=5$ (and drop $\gamma$ from the notation):
\begin{enumerate}
\item The (generalized) mean curvature is locally bounded.
\item $H \setminus \Sigma$ and the $\D$-maps $\Gamma_i$ are $C^{k,\gamma}$-regular with $|\D - id_{C}|_{C^{k,\gamma}(B_R(q) \cap C )} \ra 0$, for $i \ra \infty$,
for any given tangent cone $C$, $p \in \Sigma_H$ and $q \in C$ with $\overline{B_R(q)} \subset C \setminus \Sigma_C$, for some $R>0$.
\end{enumerate}
Besides area minimizers, $\cal{G}$ covers cases we typically encounter in scalar curvature geometry or general relativity like hypersurfaces of prescribed mean
curvature but also cases \emph{not} arising from variational principles, like marginally outer trapped surfaces (= horizons of black holes).\\
\textbf{D.3. Principal Eigenvalues} \, Let $H\in\cal{G}$ be a non-totally geodesic almost minimizer. We then use potential theory, citing from \cite{L2} and \cite{L3}, to study the conformal Laplacian near $\Sigma$.
\li
\begin{itemize}
\item There exists a finite constant $\tau = \tau(H)>-\infty$ such that for any $C^2$-function $f$ which is compactly supported in $H\setminus \Sigma$:
$\int_H f \cdot L_H f \, dV \, \ge \, \tau \cdot \int_H \bp^2\cdot f^2 dV$. The largest such $\tau \in \R$ is the \textbf{principal eigenvalue} $\lambda^{\bp}_{H}$ of $\delta_{\bp}^2 \cdot L$. One can show that, for any $H\in\cal{G}$, $\lambda^{\bp}_H > -\infty$.
\item If, in addition, $scal_M \ge 0$ and $H \in {\cal{H}}$, then $\lambda^{\bp}_{H}>0$. Due to the weight $\bp^2$, which diverges towards $\Sigma$, this is a proper upgrade of the classical statement that the ordinary eigenvalue is positive.
\liii
\end{itemize}
Throughout this paper we express the eigenfunctions as solutions of the equations $L_{H,\lambda} \, \phi=0$ for the following \emph{shifted conformal Laplacian}:
\begin{equation}\label{v1}
L_{H,\lambda}:=L_H - \lambda \cdot \bp^2 \cdot Id, \mm{ for } \lambda < \lambda^{\bp}_H ,
\end{equation}
for the principal eigenvalue $\lambda^{\bp}_H \mm{ of }{\bp}^{-2} \cdot L$. For $\lambda = \lambda^{\bp}_{H}$ there is a unique$^\cs$ positive solution\footnote{\emph{unique$^\cs$} is our abbreviation for \emph{unique up to multiplication by a positive constant}.} of $L_{H,\lambda} \, \phi=0$, sometimes called the \emph{ground state} of $L_H$. This is the counterpart to the first eigenvalue in the smooth compact case. However, the potential theory of $L_{H,\lambda_H}$ is less well behaved than that of $L_{H,\lambda}$ for $\lambda < \lambda^{\bp}_{H}$ where we have a versatile theory. The condition $\lambda < \lambda^{\bp}_{H}$ is essential to derive the fundamental asymptotic control over solutions of $L_{H,\lambda} \, \phi=0$ in terms of \emph{boundary Harnack inequalities} with $\Sigma_H$ being the boundary of $H \setminus \Sigma_H$ cf.\cite[Th.1-Th.3]{L2}.\\
We recall that a (super)solution $u \ge 0$ of $L_{H,\lambda}\, \phi=0$ has
\textbf{minimal growth} towards $p \in \widehat{\Sigma}$, if there is a supersolution $w >0$, such that $(u/w)(x) \ra 0$, for $x \ra p$, $x \in H \setminus \Sigma$. This generalizes the Dirichlet boundary condition of vanishing boundary values to cases, like that of $L_{H,\lambda}$, where positive solutions of $L_{H,\lambda} \phi=0$ develop poles in $\Sigma$.
\subsubsection{Main Results}\label{over}
The results we summarized above hold for any $H \in {\cal{G}}$ regardless of the sign of $\lambda_H$. They apply as soon as $\lambda_H - \lambda>0$. Now we turn to finer details, and these are more sensitive to the sign of $\lambda$. For them, and actually for the rest of this paper, we choose one fixed pair of values
\begin{equation}\label{lam}
0 < \lambda < \lambda_H.
\end{equation}
That is, we additionally assume that these eigenvalues are \textbf{positive} and, in particular, all metrics we get from conformal deformations by positive solutions of $L_{H,\lambda} \, \phi=0$ have $scal >0$. \\
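That these conformal deformations indeed have $scal > 0$ can be made explicit; the following is just the standard Yamabe-type computation, stated for the normalization $L_H = -\Delta_H + \frac{n-2}{4 (n-1)} \cdot scal_H$. Under the conformal change $g_{\Phi}:=\Phi^{4/(n-2)} \cdot g_H$ one has
\[
scal_{g_{\Phi}} = \frac{4 (n-1)}{n-2} \cdot \Phi^{-(n+2)/(n-2)} \cdot L_H \, \Phi,
\]
and for a positive solution of $L_{H,\lambda} \, \phi=0$, that is, $L_H \, \Phi = \lambda \cdot \bp^2 \cdot \Phi$, this becomes
\[
scal_{g_{\Phi}} = \frac{4 (n-1)}{n-2} \cdot \lambda \cdot \bp^2 \cdot \Phi^{-4/(n-2)} > 0 \mm{ on } H \setminus \Sigma, \mm{ since } \lambda > 0.
\]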
\textbf{Minimal Splitting Factors} \, We start with the definition of our basic metrics. Due to the locally Lipschitz regular coefficients of $L_{H,\lambda} = L_H - \lambda \cdot \bp^2 \cdot Id$, solutions of $L_{H,\lambda} \, \phi=0$ are $C^{2,\alpha}$-regular, for any $\alpha \in (0,1)$. This suggests the following regularity assumptions.
\begin{definition} \emph{\textbf{(Minimal Factor Metrics)}} \label{msge} For $H \in {\cal{G}}$ and $\lambda < \lambda_H$, let $\Phi>0$ be a $C^{2,\alpha}$-supersolution of $L_{H,\lambda} \phi =0$ on $H\setminus\Sigma_{H}$ so that in the case
\begin{itemize}
\item $H \in {\cal{G}}^c$: $\Phi$ is a solution in a neighborhood of $\Sigma$ and it has \textbf{minimal growth} towards $\Sigma$.
\item $H \in {\cal{H}}^{\R}_n$: $\Phi$ is a solution on $H\setminus \Sigma_{H}$ with \textbf{minimal growth} towards $\Sigma$.
\end{itemize}
Then we call the metrics $\Phi^{4/(n-2)} \cdot g_H$ \textbf{minimal factor metrics}.
\end{definition}
\begin{remark} 1. Minimal factor metrics are naturally assigned to any $H \in {\cal{G}}$. This is owing to the boundary Harnack inequality \ref{mbhsq} (\cite[Theorem 3.4 and 3.5]{L2}). It shows that for any two such supersolutions $\Phi_1$, $\Phi_2$ on $H\setminus \Sigma_{H}$ we have some constant $c \ge 1$ so that $c^{-1} \cdot \Phi_1 \le \Phi_2 \le c \cdot \Phi_1$ near $\Sigma$ and, from \cite[Th.5]{L2}, the quotient $\Phi_1/\Phi_2$ can be extended to $\Sigma$ as a continuous positive function defined on $H$, although any such supersolution diverges when we approach $\Sigma$. For $H \in {\cal{H}}^{\R}_n$ the boundary Harnack inequality even yields $\Phi_2 \equiv c \cdot \Phi_1$. The blow-up invariance, in Theorem \ref{bloo} below, is another important aspect of the naturality of this assignment.\\
2. Near points in $\Sigma$ we may also characterize these minimal growth solutions as \emph{minimizers} of a Dirichlet type variational integral \cite[Prop.3.16]{L3} with smooth boundary data $F$:
{\small \[\int_{H \setminus U} | \nabla_H f |^2 + \left(\frac{n-2}{4 (n-1)} \cdot scal_H - \lambda \cdot \bp^2 \right)\cdot f^2 \, d V,\;f \mm{ smooth with }supp \, f \subset H\setminus \Sigma_{H}\mm{ and }f|_{\p U} = F.
\]}
\noindent for some neighborhood $U$ of a basepoint $p \in H \setminus \Sigma$, with $\overline{U} \subset H \setminus \Sigma$, when $H \in {\cal{G}}^c$, and of $\infty \in \widehat{H}$, when $H \in {\cal{H}}^{\R}_n$.\\
3. Since $\lambda < \lambda_H$ we do not have regular positive solutions with minimal growth towards all of $\widehat{\Sigma}$ but we always have some \textbf{minimal Green's function} $G(x,y)$ for $L_{H,\lambda}$, that is, $G(\cdot ,y)$ has minimal growth towards $\widehat{\Sigma}$. This minimal Green's function is uniquely determined. Throughout this paper we exclusively use \textbf{minimal} Green's functions.
\end{remark}
\liii
\begin{example}\label{exa} With $G(\cdot ,y)$ one may construct supersolutions which are proper solutions of minimal growth near $\Sigma$. For any open set $V$ with compact closure $\overline{V} \subset H \setminus \Sigma$ and a smooth function $\phi$ on $H \setminus \Sigma$ with $\phi \equiv 0$ on $(H \setminus \Sigma) \setminus V$ and $\phi> 0$ on $V$, we set
\begin{equation}\label{suppa}
\mathbf{S}(x)=\mathbf{S}[H,\lambda, V, \phi](x):=\int_{H \setminus \Sigma} G(x,y) \, \phi(y) \, dV(y).
\end{equation}
This is a smooth positive supersolution of $L_{H,\lambda} \phi=0$ on $H \setminus \Sigma$ with $\mathbf{S} \in H^{1,2}_{\bp}(H \setminus \Sigma)$, \cite[Lemma 3.11 and Prop.3.12]{L2}, and it solves $L_{H,\lambda} \phi=0$ away from $V$ with minimal growth towards $\widehat{\Sigma}$. The Riesz decomposition theorem shows that any regular supersolution can be written in the form (\ref{suppa}). If we drop the regularity requirements for $\mathbf{S}$ then any supersolution can be written as $\int_{H \setminus \Sigma} G(x,y) \, d\mu(y)$ for some Radon measure $\mu$.
\end{example}
\li
\begin{theorem}\emph{\textbf{(Singular Sets)}}\label{idi} For any $H \in {\cal{G}}$ we have:
\begin{itemize}
\item The metric completion $(\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})$ of $(H \setminus \Sigma,\Phi^{4/(n-2)} \cdot g_H)$ is \textbf{homeomorphic} to $(H,d_H)$. Thus, we can write it as $(H,d_{\sima})$. In particular, for $H \in {\cal{G}}^c_n$, we have that $(H,d_{\sima})$ is compact.
\item The \textbf{Hausdorff dimension} of $\Sigma$ relative to $(H,d_{\sima})$ is again $\le n-7$.
\end{itemize}
We call $(H,d_{\sima})$ a minimal splitting factor, briefly a \textbf{minimal factor}, of its ambient space $M$ and $d_{\sima}$ the completed \textbf{minimal factor metric}.
\end{theorem}
We typically write $d_{\sima}$ when the specific choice of $\Phi$ is not needed or is known from the context. In turn, we write $d_{\sima}(\Phi)$ if $H$ has not been specified. The following two results show that the tangent cones of minimal factors are, without exception, again minimal factors.
\begin{theorem}\emph{\textbf{(Blow-Ups)}}\label{bloo}
For $H \in {\cal{G}}$ and $L_{H,\lambda}$ with $\lambda < \lambda_H$ we consider $(H, d_{\sima}(\Phi_H))$, any singular point $p \in \Sigma_H$ and any tangent cone $C$ in $p$. Then we get the following \textbf{blow-up invariance:}\\
Any sequence $(H, \tau_i \cdot d_{\sima}(\Phi_H))$ scaled by some sequence $\tau_i \ra \infty$, $i \ra \infty$, around $p$, subconverges\footnote{This convergence captures that of the underlying minimizers and the conformal deformation cf.Ch.\ref{bhpgr}.} and the limit of any converging subsequence is $(C, d_{\sima}(\Phi_C))$ for some tangent cone $C$.
\end{theorem}
\begin{theorem}\emph{\textbf{(Euclidean Factors)}} \label{umi} For any non-totally geodesic $H \in {\cal{H}}^{\R}_n$ and $\lambda < \lambda_H$ there is a \textbf{unique}$^\cs$ space $(H, d_{\sima}(\Phi_H))$. For $C \in \mathcal{SC}_{n}$ the associated space $(C, d_{\sima}(\Phi_C))$ is invariant under scaling around $0 \in C$, that is, it is again a cone.
\end{theorem}
There is no such blow-up invariant scheme for \emph{principal} eigenvalues and eigenfunctions. The $\bp$-weighted eigenvalues are scaling invariant and we generally have $\lambda_H < \lambda_C$, cf.\cite[Lemma 3.9]{L3}. This means that in inductive cone reduction arguments \emph{non-principal} eigenvalues, like $\lambda_H$ on $C$, will occur anyway. In \cite{L4} and \cite{L5}, we even use the degree of freedom of choosing upper positive bounds on $\lambda$, much smaller than $\lambda_H$, to get bounds on the growth rate of solutions towards the singular set.\\
\noindent \textbf{Metric Measure Spaces} \, We augment $(H, d_{\sima}(\Phi_H))$ to a metric measure space. To this end we show that there is a canonical extension of $\Phi^{2 \cdot n/(n-2)}\cdot \mu_H$ on $H \setminus \Sigma$ to a measure $\mu_{\sima}$ on $H$, where $\mu_H$ is the $n$-dimensional Hausdorff measure on $(H^n,g_H) \subset (M^{n+1},g_M)$. In fact, this extension $\mu_{\sima}$ is a \textbf{Borel measure} on $(H,d_{\sima})$ cf.\cite[pp.62-64]{HKST}.
\begin{definition}\emph{\textbf{(Minimal Factor Measures)}}\label{mms} For any $H \in {\cal{G}}_n$ equipped with a minimal factor metric
$\Phi^{4/(n-2)} \cdot g_H$ we define the \textbf{minimal factor measure} $\mu_{\sima}$ on $H$ by
\begin{equation}\label{meas}
\mu_{\sima}(E):=\int_{E \setminus \Sigma_H} \Phi^{2 \cdot n/(n-2)}\cdot d\mu_H, \mm{ for any Borel set } E \subset H
\end{equation}
\end{definition}
This is one of the places where we use the low Hausdorff dimension of $\Sigma \subset (H, d_{\sima}(\Phi_H))$. The dimensional estimate, from Theorem \ref{idi}, is sufficient to also define some lower dimensional volume elements. The main case is $\mu^{n-1}_{\sima}$ for hypersurfaces within $(H, d_{\sima}(\Phi_H))$. We get $d\mu^{n-1}_{\sima}$ from extending $\Phi^{2 \cdot (n-1)/(n-2)}\cdot d\mu^{n-1}_H$ on $H \setminus \Sigma$, where $d\mu_H^{n-1}$ is the hypersurface element on
$(H \setminus \Sigma, g_H)$.\\
We derive some basic regularity results for $\mu_{\sima}$, most importantly, volume growth estimates for balls in $(H,d_{\sima},\mu_{\sima})$. To limit the amount of technical statements in this introduction we only mention the resulting doubling property and volume decay estimates we get for $(H,d_{\sima},\mu_{\sima})$.
\begin{theorem}\emph{\textbf{(Doubling and Volume Decay)}} \label{dvintro} For any $H \in {\cal{G}}$, there is a $C=C(H)>0$, and $C=C(n)>0$ for $H \in {\cal{H}}^{\R}_n$, so that,
for radii and volumes computed relative to $(H,d_{\sima},\mu_{\sima})$:
\begin{itemize}
\item $\mu_{\sima}$ is \textbf{doubling}: for any $q \in H$ and $r>0$:
\begin{equation}\label{dou}
\mu_{\sima}(B_{2 \cdot r}(q)) \le C \cdot \mu_{\sima}(B_{r}(q)).
\end{equation}
\item For balls $B_* \subset B \subset H$ we have a \textbf{relative lower volume decay} of order $n$:
\begin{equation}\label{volgro}
diam(B_*)^n/diam(B)^n \le C \cdot \mu_{\sima}(B_*)/\mu_{\sima}(B).
\end{equation}
\item For $H \in {\cal{G}}^c_n$, the total volume relative to $d_{\sima}$ is finite: $Vol(H,d_{\sima}) < \infty$.\\
\end{itemize}
\end{theorem}
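For orientation we note the standard, weaker decay one gets from doubling alone; the point of (\ref{volgro}) is that it sharpens the exponent to the dimension $n$. If $B_*=B_{r_*}(q)$ and $B_* \subset B$, then $B \subset B_{2 \cdot diam(B)}(q)$, so choosing the smallest $k \in \Z^{\ge 0}$ with $2^k \cdot r_* \ge 2 \cdot diam(B)$ and applying (\ref{dou}) $k$ times gives
\[
\mu_{\sima}(B) \le \mu_{\sima}(B_{2^k \cdot r_*}(q)) \le C^k \cdot \mu_{\sima}(B_*), \mm{ with } 2^k \le \max\left(1, 4 \cdot diam(B)/r_*\right),
\]
that is, a relative lower volume decay with the exponent $\log_2 C$ in place of $n$.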
\noindent \textbf{Geometric Analysis on $(H,d_{\sima},\mu_{\sima})$} \, Now we establish the validity of Sobolev inequalities and, from this, of isoperimetric inequalities for $(H,d_{\sima},\mu_{\sima})$. In this part we plunge into the asymptotic geometry of $H \in {\cal{G}}$ near $\Sigma_H$. We use the \textbf{\si-uniformity} of these spaces \cite{L1}. This is a regularity property of the boundary $\Sigma$ relative to
$H \setminus \Sigma$. We use it to define \textbf{Semmes families of curves} in the original space $H \in {\cal{G}}$.
We show that, relative to $(H,d_{\sima},\mu_{\sima})$, the curve families we have chosen still satisfy the axioms for such Semmes families. The presence of such curve families was abstracted from the proof of the Euclidean Poincar\'{e} inequality and, indeed, for doubling spaces it implies the Poincar\'{e} inequality.
\begin{theorem} \emph{\textbf{(Poincar\'{e} inequality)}}\label{sobb} For any $H \in {\cal{G}}$, there is a constant $C_0=C_0(H,\Phi) >0$ so that for any open ball $B \subset H$ and any $L^1$-function $u:B \ra \R$ that is $C^1$ on $B \setminus \Sigma$ we have, setting $|\nabla u| \equiv 0$ on $\Sigma$:
\begin{equation}\label{poinm}
\fint_B |u-u_B| \, d \mu_{\sima} \le C_0 \cdot \fint_B |\nabla u| \, d \mu_{\sima}, \mm{ where } u_B:=\fint_B u \, d \mu_{\sima} := \int_B u \, d \mu_{\sima}/\mu_{\sima}(B).
\end{equation}
\end{theorem}
The volume decay property \emph{of order n} in Th.\ref{dvintro}(ii) allows us to improve this Poincar\'{e} inequality to the following Sobolev inequality.
\begin{theorem} \emph{\textbf{(Sobolev Inequality)}}\label{sobb2} For any $H \in {\cal{G}}$, there is some $C_1=C_1(H,\Phi)>0$ so that for any open ball $B \subset H$ and any $L^1$-function $u:B \ra \R$ that is $C^1$ on $B \setminus \Sigma$ we have
\begin{equation}\label{ii2}
\Big(\fint_B |u-u_B|^{n/(n-1)} \, d \mu_{\sima}\Big)^{(n-1)/n} \le C_1 \cdot \fint_B |\nabla u| \, d \mu_{\sima}
\end{equation}
\end{theorem}
From the low Hausdorff dimension of $\Sigma$ in Theorem \ref{idi} this gives us the desired isoperimetric inequalities for $(H,d_{\sima},\mu_{\sima})$.
\begin{theorem} \emph{\textbf{(Isoperimetric Inequality)}}\label{iip} For any $H \in {\cal{G}}$, there are constants $\gamma=\gamma(H)>0,$ $\gamma^*=\gamma^*(H)>0$, both depending only on $n$ when $H \in {\cal{H}}^{\R}_n$, so that for any open set $U \subset H$ with compact closure and rectifiable boundary $\p U$
\begin{equation}\label{iifin}
\mu_{\sima}(U)^{(n-1)/n} \le \gamma \cdot \mu^{n-1}_{\sima}(\p U),
\end{equation}
\begin{equation}\label{ii2in}
\min \{ \mu_{\sima}(B_{\rho} \cap U), \mu_{\sima} (B_{\rho} \setminus U)\}^{(n-1)/n} \le \gamma^* \cdot \mu^{n-1}_{\sima}(B_{\rho} \cap \p U).
\end{equation}
\end{theorem}
As an application we have volume growth estimates for area minimizers in $(H,d_{\sima},\mu_{\sima})$.
\begin{corollary} \emph{\textbf{(Area Minimizers in $(H,d_{\sima},\mu_{\sima})$)}} \label{arin}\, For $H \in {\cal{G}}$ and $L_{H,\lambda}$ with $\lambda < \lambda_H$, there
is a $\rho_H>0$ so that for any $r \in (0,\rho_H)$ and any area minimizing boundary $L^{n-1}$ bounding some open set $L^+ \subset H$ in $(H,d_{\sima},\mu_{\sima})$:
\begin{equation}\label{estin}
\kappa^-_n\cdot r^n \le \mu_{\sima}(L^+ \cap B_r(p)) \le \kappa^+_n\cdot r^n
\end{equation}
where $\kappa^-_n, \kappa^+_n >0$ denote constants depending only on the dimension $n$.
\end{corollary}
In turn, the estimates (\ref{iifin}), (\ref{ii2in}) and (\ref{estin}) imply a control over area minimizing hypersurfaces \emph{within} $(H,d_{\sima},\mu_{\sima})$ that is similar to that in the Euclidean case.
\setcounter{section}{2}
\renewcommand{\thesubsection}{\thesection}
\subsection{Hyperbolic Unfoldings} \label{hyd}
The Gromov hyperbolic geometry $d_{\bp}$ of hypersurfaces $H \in {\cal{G}}$, \cite[Theorem 1.11, Proposition 3.10 and Theorem 1.13]{L1}, is essential for most of the results in this paper. We oftentimes employ all three geometries $d_H$, $d_{\sima}$ and $d_{\bp}$ in one argument. Here we recall some basics and draw consequences we use in the following sections.
\subsubsection{Canonical Semmes Families} \label{uhy}
\begin{definition}\emph{\textbf{(Gromov Hyperbolicity)}}\label{grh} A metric space $X$ is \textbf{geodesic}, when any two points can be joined by a geodesic, i.e. a path which
is an isometric embedding of an interval. A geodesic metric space $X$ is $\mathbf{\delta}$\textbf{-hyperbolic,} if all its geodesic triangles are $\mathbf{\delta}$\textbf{-thin} for
some $\delta \ge 0$. The space $X$ is called \textbf{Gromov hyperbolic} when it is $\delta$-hyperbolic for some $\delta$.
\end{definition}
A \textbf{generalized geodesic ray} $\gamma: I \ra X$ is an isometric embedding of the interval $I \subset \R$ into $X$, where either $I = [0,\infty)$, in which case $\gamma$ is a proper
geodesic ray, or $I = [0,R]$, for some $R \in (0,\infty)$, in which case $\gamma$ is a geodesic arc. When we fix a base point $p \in X$ we can use the hyperbolicity to canonically
identify any $x \in X$ with the generalized ray $\gamma_x$ with endpoint $\gamma_x(R) = x$. We extend the definition of such a ray to $I = [0,\infty]$ by setting $\gamma(t) = \gamma(R)$, for $t \in [R,\infty]$.
\begin{definition} \emph{\textbf{(Gromov Boundary)}} Let $X$ be a complete Gromov hyperbolic space.
The \textbf{Gromov boundary} $\p_G X$ of $X$ is the set of equivalence classes $[\gamma]$ of geodesic rays from a base point $p \in X$, where two such rays are equivalent if they have finite Hausdorff distance.
\end{definition}
$\p_G X$ does not depend on the choice of the base point $p$. To topologize $\overline{X}_G = X \cup \p_G X$, we say $x_n \in \overline{X}_G$ \emph{converges} to $x \in \overline{X}_G$ if there exist
generalized rays $c_n$ with $c_n(0) = p$ and $c_n(\infty) = x_n$ subconverging (on compacta) to a generalized ray $c$ with $c(0) = p$ and $c(\infty) = x$. The canonical map $X \hookrightarrow \overline{X}_G$ is a homeomorphism onto its image, $\p_G X$ is closed and $\overline{X}_G$ is compact; it is called the \textbf{Gromov compactification} of $X$.\\
\textbf{Hyperbolic Unfoldings} \, Now we turn to our almost minimizers $H \in {\cal{G}}$. The locally Lipschitz Riemannian metric $\bp^2 \cdot g_H$ and its $C^\infty$-Whitney smoothing ${\bp^*}^2 \cdot g_H$, both defined on $H \setminus \Sigma$, induce the distance functions $d_{\bp_H}$ and $d_{\bp^*_H}$, where we usually drop the index $H$ and write $d_{\bp}$ and $d_{\bp^*}$ when $H$ is known from the context. We recall the following hyperbolicity properties of these metrics \cite[Theorem 1.11, Cor.3.6 and Prop.3.11]{L1}.
\begin{proposition} \emph{\textbf{(Hyperbolic Unfoldings)}} \label{hu} For any non-totally geodesic $H \in {\cal{G}}$,
$(H \setminus \Sigma, d_{\bp})$ and the quasi-isometric $(H \setminus \Sigma, d_{\bp^*})$, are \textbf{complete Gromov hyperbolic spaces} with \textbf{bounded geometry}.
We call $(H \setminus \Sigma, d_{\bp})$ and $(H \setminus \Sigma, d_{\bp^*})$ \textbf{hyperbolic unfoldings} of $(H \setminus \Sigma, g_H)$. The identity map on $H \setminus \Sigma$ extends to \textbf{homeomorphisms}
\[
\widehat{H}\cong\overline{(H \setminus \Sigma,d_{\bp})}_G \cong \overline{(H \setminus \Sigma,d_{\bp^*})}_G \,\mm{ and } \, \widehat{\Sigma} \cong\p_G(H \setminus \Sigma,d_{\bp}) \cong \p_G(H \setminus \Sigma,d_{\bp^*}).
\]
For $H\in{\cal{H}}^\R_n$, $(H \setminus \Sigma, d_{\bp})$ is $\delta(L_{\bp},n)$-hyperbolic with $(\sigma_n, \ell_n)$-bounded geometry and $(H \setminus \Sigma, d_{\bp^*})$ is $\delta^*(L_{\bp},n)$-hyperbolic with $(\sigma_n, \ell_n)$-bounded geometry for constants only depending on $n$.
\end{proposition}
\begin{remark} 1. The condition for bounded geometry is the following. For a global Lipschitz constant $\ell \ge 1$ and radius $\varrho>0$ there exists, around any point $p\in H\setminus\Sigma$, an $\ell$-bi-Lipschitz chart $\phi_p:B_\varrho(p) \ra U_p$ from the ball $B_\varrho(p)$ in $(H\setminus\Sigma, d_{\bp})$ to some open set $U_p \subset(\R^n,g_{\R^{n}})$. We shall always assume that $0 \in U_p$ and $\phi_{p}(p) = 0$. In cases where we need to specify these parameters we say that $(H\setminus\Sigma, d_{\bp})$ has \emph{$(\varrho,\ell)$-bounded geometry}.\\
2. For $H \in {\cal{H}}^{\R}_n$ the boundary $\p_G(H \setminus \Sigma,d_{\bp})$ always contains the point at infinity even when $H$ is regular, whereas, for regular $H \in {\cal{G}}^c_n$, we have $\p_G(H \setminus \Sigma,d_{\bp}) \v$. The (excluded) case where $H$ is totally geodesic is the trivial one in this theory. Then $H$ is automatically smooth and, moreover, $(H, d_{\bp})$ and $(H, d_{\bp^*})$ degenerate to one-point spaces.
\qed \end{remark}
\textbf{Semmes Families} \, Here we start with a geometric application of hyperbolic unfoldings. For this we recall that, to handle analysis on rather general metric measure spaces, Semmes, cf.\cite{Se}, \cite{He} and \cite{HKST}, has dissected the classical proof of the Poincar\'{e} inequality on $\R^n$ where, in an important step, one encounters uniformly distributed families of curves linking any two given points.
The abstracted concept is that of \emph{thick families of curves}, also called \emph{Semmes families}, satisfying the two conditions (i) and (ii) in \ref{sem0} below. Under some mild assumptions on the metric space the presence of Semmes families implies the validity of a Poincar\'{e} inequality.\\
On almost minimizers we have Poincar\'{e}, Sobolev and isoperimetric inequalities \cite{BG}. In this case the presence of Semmes families hardly comes as a surprise, but there is a little extra we get from hyperbolic unfoldings. The hyperbolic geodesics give us \emph{canonically} defined Semmes families on $(H, d_H)$ matching also minimal factor geometries, that is, they are still Semmes families relative to $(H, d_{\sima})$. This is proved and used in Ch.\ref{sobo} when we derive the Poincar\'{e} inequality for $(H, d_{\sima})$.
\begin{proposition} [Canonical Semmes Families on $H \in {\cal{G}}_n$]\label{sem0} For any $H \in {\cal{G}}_n$ there is a $C=C(H)>0$ and $C=C(n)>0$ for $H \in {\cal{H}}^{\R}_n$,
and for any two $p,q \in H$ a family $\Gamma_{p,q}$ of rectifiable curves $\gamma: I_\gamma \ra H$, $I_\gamma \subset \R$, joining $p$ and $q$, so that:
\begin{enumerate}
\item For any $\gamma \in \Gamma_{p,q}$: $l(\gamma|[s,t]) < C \cdot d(\gamma(s),\gamma(t))$, for $s,t \in I_\gamma$.
\item Each family $\Gamma_{p,q}$ carries a probability measure $\sigma_{p,q}$ so that for any Borel set $A \subset X$, the assignment $\gamma \mapsto l(\gamma \cap A)$ is $\sigma$-measurable with
{\small \begin{equation}\label{tcu1}
\int_{\Gamma_{p,q}} l(\gamma \cap A) \, d \sigma(\gamma) \le C \cdot \int_{A_{C,p,q}} \left(\frac{d(p,z)}{\mu(B_{d(p,z)}(p))} + \frac{d(q,z)}{\mu(B_{d(q,z)}(q))}\right) d \mu(z)
\end{equation}}
for $A_{C,p,q}:=(B_{C \cdot d(p,q)}(p) \cup B_{C \cdot d(p,q)}(q))\cap A$.
\end{enumerate}
The family $\Gamma_{p,q}$ uniformly surrounds a central curve, its \textbf{core} $\gamma_{p,q}$. It is a hyperbolic geodesic linking $p$ and $q$ in the Gromov compactification of the hyperbolic unfolding.
\end{proposition}
\textbf{Proof} \, We start with the case of the Euclidean $\R^n$ where thick families of curves joining any two points
$p$ and $q$ can be constructed explicitly. Then we use hyperbolic unfoldings to transfer these families to $(H,d_H)$.\\
\textbf{Step 1} \, For any two points $x, y \in \R^n$, consider the hyperplane $L^{n-1}(x,y)$ orthogonal to the line segment $[x, y] \subset \R^n$ and passing through the midpoint $m(x,y)$ of $[x, y]$. For $\rho=d(x,y)$ we consider a ball $B_r=B_r^{n-1}(m(x,y)) \subset L^{n-1}(x,y)$ of radius $r \in (0,\rho]$, to be chosen later. For any $z\in B_r$, let $\gamma_{z}$ be the unit speed curve from $x$ to $y$ we get when we follow the line segments $[x, z]$ and
$[z,y]$. We define the space of such curves $\Gamma^{\R^n}_{x,y}=\Gamma^{\R^n}_{x,y}(r):=\{\gamma_{z}\,|\,z\in B_r\}$. The point set we get as the union of these curves, which is a double cone, is denoted $D_r(x,y)$.
We clearly have a constant $C_0>0$, independent of $x, y$ and $r$, so that (i) is satisfied for any $\gamma \in \Gamma^{\R^n}_{x,y}$.\\
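As a plausibility check for the full-curve version of (i): each of the two legs of $\gamma_z$ has length $\sqrt{\rho^2/4 + |z - m(x,y)|^2} \le \rho \cdot \sqrt{5}/2$ when $r \le \rho$, so $l(\gamma_z) \le \sqrt{5} \cdot d(x,y)$; the subsegment version follows by a similar, slightly longer computation. The following numerical sketch (all function names are ours, not from the paper) samples the construction and confirms this bound in the extreme case $r=\rho$:

```python
import math
import random

def broken_curve_length(x, y, z):
    """Length of the two-segment curve following [x, z] and [z, y]."""
    return math.dist(x, z) + math.dist(z, y)

def sample_z(x, y, r, rng):
    """Sample a point z in the (n-1)-ball of radius r inside the hyperplane
    through the midpoint m(x, y), orthogonal to the segment [x, y]."""
    n = len(x)
    rho = math.dist(x, y)
    e = [(yi - xi) / rho for xi, yi in zip(x, y)]   # unit direction of [x, y]
    v = [rng.gauss(0.0, 1.0) for _ in range(n)]     # random direction ...
    s = sum(vi * ei for vi, ei in zip(v, e))
    v = [vi - s * ei for vi, ei in zip(v, e)]       # ... projected orthogonal to e
    nv = math.sqrt(sum(vi * vi for vi in v)) or 1.0
    t = r * rng.random()                            # random radius in [0, r)
    m = [(xi + yi) / 2 for xi, yi in zip(x, y)]
    return [mi + t * vi / nv for mi, vi in zip(m, v)]

def worst_ratio(n=3, trials=2000, seed=0):
    """Largest observed l(gamma_z)/d(x, y) over random configurations,
    with the extreme choice r = rho = d(x, y)."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        x = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        y = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        rho = math.dist(x, y)
        if rho < 1e-9:
            continue
        z = sample_z(x, y, rho, rng)
        worst = max(worst, broken_curve_length(x, y, z) / rho)
    return worst
```

The worst sampled ratio stays below $\sqrt{5} \approx 2.236$, matching the elementary estimate above.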
Towards (ii) we define the probability measure $\alpha^{\R^n}_{x,y}$ on $\Gamma^{\R^n}_{x,y}$ as follows: if $W\subset\Gamma^{\R^n}_{x,y},$
\begin{equation}\label{defsem}
\alpha^{\R^n}_{x,y}(W) :=\mathcal{H}^{n-1}(\{z\in B_r^{n-1}\,|\, \gamma_{z}\in W\})/\mathcal{H}^{n-1}(B_r^{n-1})
\end{equation}
From the coarea formula, cf.\cite[Lemma 2.99]{AFP}, we see that for any Borel set $A\subset \R^{n}$ the function $\gamma\mapsto\ell(\gamma\cap A)$ on $\Gamma^{\R^n}_{x,y}$ is $\alpha^{\R^n}_{x,y}$-measurable.
Now assume that all points in $A$ are closer to $x$ than to $y$. For the distance between corresponding points on any two segments we have
\begin{equation}\label{dist}
d(s \cdot z_1 + (1-s) \cdot x, s \cdot z_2 + (1-s) \cdot x) \le 2 \cdot r \cdot s, \mm{ for } s \in [0,1], z_1, z_2 \in B_r.
\end{equation}
The coarea formula therefore gives the following inequality for the annuli $A_{j} :=A \cap B(x, 2^{-j} \cdot d(x,y))\setminus B(x, 2^{-j-1} \cdot d(x,y))$,
$j \in \Z^{\ge 0}$:
\li
\[\int_{B_r}\ell(\gamma_{z}\cap A)d\mathcal{H}^{n-1}(z)=\sum_{j=0}^{\infty}\ \int_{B_r}\ell(\gamma_{z}\cap A_{j})d\mathcal{H}^{n-1}(z) \le \sum_{j=0}^{\infty} 2^{(j+2)(n-1)} \mu(A_{j})\]
\[\le 4^{n-1} \cdot \int_{A\cap B(x,d(x,y))}\frac{d(x,z)}{\mu(B_{d(x,z)}(x))} d \mu(z)\]
\liii
\bigskip
In the last inequality we used that $\mu(B_{d(x,z)}(x)) = c_n \cdot d(x,z)^n$, where $c_n >0$ is the volume of the unit ball, to remove the factor $2^{j \cdot (n-1)}$. For $A$ closer to $y$ than to $x$ we argue similarly, with the integrand $d(y,z)/\mu(B_{d(y,z)}(y))$, and decomposing a general $A$ into the two parts of points closer to $x$ resp.\ $y$ we get (\ref{tcu1}) in (ii). For (i) and (ii) we choose $C=4^{n-1}/ (c_{n-1} \cdot r^{n-1}) + C_0$.
\bigskip
\textbf{Step 2} \, Since the Gromov compactification $\overline{X}_G$ of $X=(H \setminus \Sigma,\bp^2 \cdot g_H)$ is homeomorphic to $(H, d_H)$ there is, for any two points $p,q \in H$, a hyperbolic geodesic $\gamma_{p,q} \subset X \cup \{p,q\} \subset \overline{X}_G$ that joins these points. \\
With this choice we already get condition \ref{sem0}(i) for $\gamma_{p,q}$, which will be the central curve in the yet-to-be-defined family $\Gamma_{p,q}$. Namely, relative to $(H,g_H)$ \emph{each segment }$\widetilde{\gamma}_{x,y} \subset \gamma_{p,q}$ joining two points $x,y \in \gamma_{p,q}$ is \emph{\si-uniform}, more precisely a c-\si-uniform curve for some $c \ge 1$, that is:
\begin{itemize}
\item \emph{\textbf{Quasi-geodesic:}} \, $l(\widetilde{\gamma}_{x,y}) \le c \cdot d(x,y).$
\item \emph{\textbf{Twisted double \si-cones:}} \, $l_{min}(\widetilde{\gamma}_{x,y}(z)) \le c \cdot \delta_{\bp}(z)$ for any $z \in \widetilde{\gamma}_{x,y}$.
\end{itemize}
where $l_{min}(\widetilde{\gamma}_{x,y}(z)):=$ the minimum of the lengths of the two subcurves of $\widetilde{\gamma}_{x,y}$ from $x$ to $z$ and from $y$ to $z$. From Prop. 3.11 and Lemma 3.13 of \cite[Ch.3.2]{L1} the constant $c$ depends only on $H$ and only on $n$ for $H \in {\cal{H}}^{\R}_n$. Now we use the Lipschitz continuity of $\delta_{\bp}$
\begin{equation}\label{lip}
|\delta_{\bp_H}(p)- \delta_{\bp_H}(q)| \le L_{\bp} \cdot d_H(p,q) \mm{ for any } p, q \in H \setminus \Sigma \mm{ and any } H \in {\cal{G}}_n.
\end{equation}
With (\ref{lip}) we materialize the twisted double \si-cone condition and consider the following twisted double \si-cones in $(H, d_H)$ with \emph{core} $\gamma_{p,q}$
\begin{equation}\label{tube}
T_d(p,q) := \bigcup_{z \in \gamma_{p,q} \setminus \{p,q\}} B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})}(z).
\end{equation}
\bigskip
We observe that for $d > 1$ we have $T_d(p,q) \subset H \setminus \Sigma$ and $\overline{T_d(p,q)} \cap \Sigma \subset \{p,q\}$ since
\begin{equation}\label{beg}
dist(z,\Sigma) \ge L_{\bp}^{-1} \cdot \delta_{\bp_H}(z) \ge (L_{\bp} \cdot c)^{-1} \cdot l_{min}(\gamma_{p,q}(z))
\end{equation}
For $d >1$ and any $z \in \gamma_{p,q}$ the ball $B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})}(z)$ scaled to unit size, i.e. scaled by $\big(l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})\big)^{-1}$, has the same uniform bounds on the sectional curvature for any inner point $z$ on any such curve $\gamma_{p,q}$. Namely, $|A| \le \bp$, $\bp_{\lambda \cdot H} \equiv \lambda^{-1} \cdot \bp_{H}$ and $|A|$ bounds the norm of the principal curvatures and thus, via the Gauss formulas, the sectional curvature of $H$, since the ambient space is a smooth manifold.
This shows that for $d > 1$ large enough the exponential map in any such $z$ is a bi-Lipschitz map with constant $\le 2$ mapping $B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c)}$ onto its image cf.\cite[Appendix B]{L1} for further details.
We observe that this $d$ is independent of the chosen $p,q \in H$ and for $H \in {\cal{H}}^{\R}_n$ we infer from \cite[Prop.2.7 (iv)]{L1} and \cite[Lemma 3.13]{L1} that it only depends on $n$.\\
\textbf{Step 3} \, The $T_d(p,q)$ are the total spaces of fibrations by curves we introduce now. They define the desired Semmes families $\Gamma_{x,y}$ on $(H, d_H)$.\\
Let $\ell$ be the length of $\gamma_{p,q} \subset (H, d_H)$. We parameterize $\gamma_{p,q}$ by arc-length and choose a point $x \in \R^n$ with $|x| =\ell/2$ and the family
$\Gamma_{x,-x}(r)$ in $\R^n$ with $r=d \cdot c \cdot L_{\bp}$. We start with the isometry that maps $[-x,x]$ onto $\gamma_{p,q}$. With these choices we consider Fermi coordinates along $[-x,x] \setminus \{-x,x\}$ on $D_r(x,-x)$ and along the inner points of $\gamma_{p,q} \setminus \{p,q\}$ on $T_d(p,q)$ and use them to define a bi-Lipschitz map from $T_d(p,q)$ to $D_r(x,-x)$. From this we see that the pull-back of the family $\Gamma^{\R^n}_{x,-x}$ and of $\alpha^{\R^n}_{x,-x}$ on $\Gamma^{\R^n}_{x,-x}$ defines the family $\Gamma_{p,q}$ and the measure $\alpha_{p,q}$ with the desired properties.\qed
\subsubsection{Green's Functions and \si-Doob Transforms} \label{bhpdo}
Now we turn to analytic applications of hyperbolic unfoldings.
The main reason for our interest in these unfoldings is that the potential theory of $\delta_{\bp^*}^2 \cdot L_{H,\lambda}$ on $(H \setminus \Sigma, d_{\bp^*})$ is nicely structured towards the Gromov boundary from the work of Ancona cf. \cite{A1}, \cite{A2} and \cite{KL}. \\
To formulate associated boundary regularity results for $(H \setminus \Sigma, d_H)$ along $\Sigma$ we briefly look at the standard conformal deformation of the flat unit disk $D$ to the hyperbolic space. In this \emph{Poincar\'{e} metric} the intersections $B \cap D$ of flat Euclidean discs $B$ centered in points of the boundary $\p D$ become hyperbolic halfspaces. This suggests the following generalization: we consider hyperbolic halfspaces ${\mathcal{N}}\subset (H \setminus \Sigma, d_{\bp})$. They are generally not quite conformally equivalent to the distance balls in $(H,d_H)$, but it turns out that the ${\mathcal{N}}$, and \emph{not} these balls in $(H,d_H)$, are the adequate choice for analytic estimates on $(H,d_H)$ towards $\Sigma_H$. We recall \cite[Lemma 2.4]{L1}:
\begin{lemma} [Canonical Chains of Half Spaces]\label{nn} \label{cpc} Let $X$ be a $\delta$-hyperbolic space and $\gamma: (0, a) \ra X$ a geodesic with $a\in (10^3 \cdot \delta, \infty]$. We choose curve points $x_i=\gamma(t_i)$, $i \in \Z^{\ge 0}$, with $x_0=\gamma(0)$ and $d(x_i,x_{i+1})=300\cdot \delta$ and set
\begin{equation}\label{vu}
\mathcal{N}^\delta_i[\gamma]:=\{x \in X\,|\, dist\big(x,\gamma([t_i,a))\big)<dist\big(x,\gamma((0,t_i]) \big)\}
\end{equation}
We call the $\mathcal{N}^\delta_i[\gamma]$ a \textbf{canonical chain} and we also define the open sets $\mathbf{N}^\delta_i$ in the Gromov compactification $X_G$ of $X$ naturally extending $\mathcal{N}^\delta_i[\gamma] \subset X$.
\begin{equation}\label{vvu}
\mathbf{N}^\delta_i :=\mathcal{N}^\delta_i \cup \{z \in \p_G X\,|\, z \mm{ can be represented by a sequence in }\mathcal{N}^\delta_i\} \subset X_G.
\end{equation}
Then, if $\gamma$ is a ray representing some $z \in \p_G X$, the $\mathbf{N}^\delta_i(\gamma)$ are a neighborhood basis of $z$ in $X_G$.
\end{lemma}
Now we can state one of the central boundary regularity results \cite[Theorem 3.4 and 3.5]{L2} where we consider the singular set $\Sigma$ as a \emph{boundary} of its regular complement $H \setminus \Sigma$. We get them from the transition of this theory on the hyperbolic unfolding back to $L_{H,\lambda}$ on $(H \setminus \Sigma, g_H)$. Since the hyperbolicity constant $\delta$ is already determined from \ref{hu} we drop it in our further statements.
\begin{theorem}[Boundary Harnack Inequality]\label{mbhsq}
For $H \in {\cal{G}}$ and $L_{H,\lambda}$ \si-adapted, let $u$, $v>0$ be two supersolutions of $L_{H,\lambda}\, \phi= 0$ on $H \setminus \Sigma$ both solving $L_{H,\lambda}\, \phi= 0$ on $\mathcal{N}_i \cap H \setminus \Sigma$, around some $z \in \widehat{\Sigma}$, with minimal growth along $\mathbf{N}_i \cap \widehat{\Sigma}$. Then there is a $C=C(H,\lambda_H-\lambda)>1$, and $C=C(n,\lambda_H-\lambda)>1$ when $H\in{\cal{H}}^\R_n$, independent of $i$, so that for any $i$ we have boundary Harnack inequalities on $H \setminus \Sigma$ relative to the boundary $\widehat{\Sigma}$:
\begin{equation}\label{fhepq}
u(x)/v(x) \le C \cdot u(y)/v(y), \mm{ for all } x,\, y \in \mathcal{N}_{i+1}.
\end{equation}
The same inequality holds for $u,v$ considered as solutions of $\delta_{\bp^*}^2 \cdot L_{H,\lambda} \, \phi=0$ on $(H \setminus \Sigma, d_{\bp^*})$.
\end{theorem}
Typical such \emph{supersolutions} are the minimal Green's function $G(\cdot, p)$, $p \in H \setminus \Sigma$, the functions $\mathbf{S_\lambda}$ from \ref{exa} (\ref{suppa}) and the functions $\Phi$ from Def.\ref{msge} of minimal factor metrics. The majority of the results in \cite{L2} and \cite{L3} relies on \ref{mbhsq}. Here we use this inequality to locally reduce our considerations to the case where we conformally deform $H \setminus \Sigma$ by the minimal Green's function. \\
An efficient way to derive estimates for the minimal Green's function is the use of \emph{Doob transforms}, also called \emph{h-transforms} $L^{h}$, a standard transformation from stochastic analysis, e.g. \cite[Ch.4.1]{P}, for operators $L$ on function spaces and smooth $h>0$ on $H \setminus \Sigma$, defined as $L^{h}:= h^{-1}Lh$, i.e. for smooth $g$ we set $L^{h}g(x):= h^{-1}(x)\, L (h \cdot g )(x).$ We choose $L= L_{H,\lambda}$ and $h=\delta^{-(n-2)/2}_{\bp^*}$.
\begin{proposition} \emph{\textbf{(\si-Doob Transforms)}}\label{doob} For any $H \in {\cal{H}}_n$ we define the \textbf{\si-Doob Transform} $L^{\sima}=L^{\sima}_{H,\lambda}$ on
$(H \setminus \Sigma, d_{\bp^*})$ of $L=L_{H,\lambda}$ to be the following Schr\"odinger operator* on suitably regular functions; for the present we consider $C^2$-functions $\phi$.
\li
\begin{equation}\label{lsdef}
L^{\sima} \phi:= \delta_{\bp^*}^2 \cdot L^{\delta^{-(n-2)/2}_{\bp^*}}_{H,\lambda}\phi = \delta^{(n+2)/2}_{\bp^*} \cdot L \big(\delta^{-(n-2)/2}_{\bp^*} \cdot \phi\big) =
\end{equation}
{\small \[ \delta_{\bp^*}^2 \cdot \Big(-\Delta_{{\bp^*}^{^2}\cdot g_H} \phi +
\Big(\frac{n-2}{4 (n-1)} scal_H - \lambda \bp ^{2} - \delta_{\bp^*}^{(n-2)/2} \Delta_{g_H} \delta_{\bp^*}^{-(n-2)/2} \Big)\phi \Big).\]}
\begin{itemize}[leftmargin=*]
\item For a function $v>0$ we set $u:=\delta^{-(n-2)/2}_{\bp^*} \cdot v$. Then $u^{4/(n-2)} \cdot g_H=v^{4/(n-2)} \cdot {\bp^*}^{^2}\cdot g_H$ and
\begin{equation}\label{3g}
u \mm{ solves } L \phi =0 \Leftrightarrow v \mm{ solves } L^{\sima} \phi=0 \, , \,
L u \ge c \cdot \delta^{-2}_{\bp^*} \cdot u \Leftrightarrow L^{\sima} v = \delta^{(n+2)/2}_{\bp^*}\cdot L u \ge c \cdot v,
\end{equation}
for some $c>0$. That is, $L$ is \si-adapted $\Leftrightarrow L^{\sima}$ is weakly coercive.
\item The operator $L^{\sima}$ is symmetric on $(H \setminus \Sigma, d_{\bp^*})$ and, thus, the minimal Green's function $G^{\sima}$ of $L^{\sima}$ on $(H \setminus \Sigma, d_{\bp^*})$ satisfies $G^{\sima}(x,y)=G^{\sima}(y,x), \mm{ for } x \neq y \in H \setminus \Sigma.$
\item The minimal Green's functions $G$ of $L$ on $(H \setminus \Sigma, d_H)$ and $G^{\sima}$ of $L^{\sima}$ on $(H \setminus \Sigma, d_{\bp^*})$ satisfy
\begin{equation}\label{gre}
G^{\sima}(x,y) = \delta_{\bp^*}^{(n-2)/2}(x) \cdot \delta_{\bp^*}^{(n-2)/2}(y) \cdot G(x,y), \mm{ for } x \neq y \in H \setminus \Sigma.
\end{equation}
\item Assume that there is some $v>0$ so that for some $c>0$: $L^{\sima} v \ge c \cdot v$. Then we have:
\begin{enumerate}[leftmargin=*]
\item for any singular $H \in {\cal{G}}^c_n$ there are constants $\beta(H,c),\alpha(H,c),\sigma(H,c)>0$ and
\item for any non-totally geodesic $H \in {\cal{H}}^{\R}_n$ there are $\beta(n,c),\alpha(n,c),\sigma(n,c)>0$ so that
\end{enumerate}
\begin{equation}\label{ges}
G^{\sima}(x,y)\le \beta \cdot \exp(-\alpha \cdot d_{\bp^*}(x,y)),\mm{ for } x,y \in H \setminus \Sigma \mm{ and } d_{\bp^*}(x,y)> 2 \cdot \sigma.
\end{equation}
\end{itemize}
\end{proposition}
*The formula in the second line of (\ref{lsdef}) is only used to understand the structure of $L^{\sima}$. While $L$ is a Schr\"odinger operator on $(H \setminus \Sigma, g_H)$, the weight $\delta_{\bp^*}^2$ makes $L^{\sima}$ a Schr\"odinger operator on $(H \setminus \Sigma, {\bp^*}^{^2}\cdot g_H)$. This is a variant of the \emph{unfolding correspondence} \cite[Proposition 3.3]{L2}.\\
\li
\textbf{Proof} \, The formula (\ref{lsdef}) for $L^{\sima}$ and the relations (\ref{3g}) follow from straightforward computations. The symmetry of $L^{\sima}$ is seen as follows: for any two functions $u,v \in C_0^\infty(H \setminus \Sigma,\R)$ we have, writing $dV^{\sima}=\delta^{-n}_{\bp^*} dV$ for the volume element associated to ${\bp^*}^{^2}\cdot g_H$, the following:
{\small \[\int_{H \setminus \Sigma} u \cdot L^{\sima} v \, dV^{\sima} = \int_{H \setminus \Sigma} u \cdot \delta^{(n+2)/2}_{\bp^*} \cdot L \big(\delta^{-(n-2)/2}_{\bp^*} \cdot v\big) \cdot \delta^{-n}_{\bp^*} dV =
\int_{H \setminus \Sigma} L \big(\delta^{-(n-2)/2}_{\bp^*} \cdot u\big) \cdot \delta^{-(n-2)/2}_{\bp^*} \cdot v \, dV.\]}
For the transformation law (\ref{gre}) we recall that one may characterize $G^{\sima}(x,y)$ as the unique function that solves
\[L_x^{\sima} \int_{H \setminus \Sigma} G^{\sima}(x,y) u(y) dV^{\sima}(y)= u(x) \mm{ for any } u \in C_0^\infty(H \setminus \Sigma,\R).\]
Thus we insert the right hand side to check this identity using the identity for $G$ relative to $L$:
\[\delta^{(n+2)/2}_{\bp^*}(x) L_x \int_{H \setminus \Sigma} \delta^{(n-2)/2}_{\bp^*}(y) \cdot G(x,y) \cdot u(y) \cdot \delta^{-n}_{\bp^*}(y) dV(y)=\delta^{(n+2)/2}_{\bp^*}(x) u(x) \delta^{-(n+2)/2}_{\bp^*}(x)=u(x).\]
Finally, we get (\ref{ges}) from the work of Ancona \cite[Ch.2]{A1}, carried out in more detail in \cite[Proposition 4.7]{KL}, and $G^{\sima}(q,p)=G^{\sima}(p,q)$. These results apply due to the uniform ellipticity (more precisely described as adaptedness in \cite{A1} and \cite{KL}) of $L^{\sima}$ on $(H \setminus \Sigma, {\bp^*}^{^{2}} \cdot g_H)$ and the weak coercivity assumption we made for $L^{\sima}$. \qed
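As a small consistency check, not part of the original argument, we record the exponent bookkeeping behind the second equivalence in (\ref{3g}). With $u=\delta^{-(n-2)/2}_{\bp^*} \cdot v$ we have
\[
\delta^{(n+2)/2}_{\bp^*} \cdot \big(c \cdot \delta^{-2}_{\bp^*} \cdot u\big) = c \cdot \delta^{\,(n+2)/2 - 2 - (n-2)/2}_{\bp^*} \cdot v = c \cdot v,
\]
since $(n+2)/2-2-(n-2)/2=0$. Thus the \si-adaptedness inequality $L u \ge c \cdot \delta^{-2}_{\bp^*} \cdot u$ transforms precisely into the weak coercivity inequality $L^{\sima} v \ge c \cdot v$, with the same constant $c$.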
\begin{remark}\ \label{bhp3} 1. The operators and eigenfunctions on $(H \setminus \Sigma, g_H)$ we are interested in all involve $\bp$ which satisfies the
naturality axiom (S4). The Whitney smoothed version $\bp^*$ satisfies this axiom approximatively in a way quantified in (\ref{smot}). The relations (\ref{smot}) also mean that estimates expressed in terms of $\bp^*$ can equally be used for $\bp$ and vice versa. Analytically $\bp^*$ is merely used intermediately to unfold $(H \setminus \Sigma, g_H)$ to a \emph{smooth} manifold. This is a common way to bypass regularity issues from the weaker Lipschitz regularity of $\bp$ in the potential theoretic analysis. The appearance of Whitney smoothings cancels out on the way back from the unfolding to $(H \setminus \Sigma, g_H)$ cf.\cite[Ch.3]{L1} for details.\\
2. Like the other quantitative analytic estimates the constants $\beta(H,c),\alpha(H,c),\sigma(H,c)>0$ in \ref{doob} (\ref{ges}) come from the potential theory on the hyperbolic unfolding. They are therefore determined from the hyperbolicity and bounded geometry constants we get for the hyperbolic unfolding in \ref{hu}. $H$ and any scaled version $\lambda \cdot H$, $\lambda>0$
have the same unfolding since $\bp_{\lambda \cdot H} \equiv \lambda^{-1} \cdot \bp_{H}$. Thus these constants remain unchanged under scalings of $H$, in particular, in blow-up processes.\qed
\end{remark}
We turn to some first applications to the metric completion $(\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})$ of $(H \setminus \Sigma,\Phi^{4/(n-2)} \cdot g_H)$ and consider the set of points added in the completion as a boundary of $(H \setminus \Sigma,\Phi^{4/(n-2)} \cdot g_H)$:
\begin{equation}\label{poh}
\p_{\sima}(H \setminus \Sigma,\Phi^{4/(n-2)} \cdot g_H):= \widehat{H \setminus \Sigma} \setminus (H \setminus \Sigma) \subset (\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})
\end{equation}
At this stage of the story $\p_{\sima}$ for $H \in {\cal{H}}^{\R}_n$ could also contain points at infinity of $(H \setminus \Sigma, g_H)$. To rule this out we actually use that $\lambda>0$. This will be a prerequisite to apply the Bombieri-Giusti Harnack inequality \cite{BG} in \ref{umia} and \ref{lcmg}.
\begin{proposition}[Diameter Bounds]\label{diam} For $H \in \cal{G}$ we have the following estimates:
\begin{enumerate}
\item For $H \in {\cal{G}}^c$, we have $diam(H,d_H)<\infty$ and $diam \big((\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})\big)<\infty$.
\item For $H \in \cal{G}$ and balls $(B_r(z), d_H)$ we have $diam(\widehat{B_r(z) \setminus \Sigma},\widehat{d_{\sima}}) \ra 0$ for $r \ra 0$.
\item There is a continuous map $\widehat{I_H}: (H,d_H) \ra (\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})$ that canonically extends the identity map
$id: (H \setminus \Sigma,d_H) \ra (H \setminus \Sigma, d_{\sima})$.
\end{enumerate}
\end{proposition}
\textbf{Proof} \, We start with (i). The assertion that, for $H \in {\cal{G}}^c_n$, the \emph{intrinsic} diameter $diam(H,d_H)$ is finite is \cite[Theorem 1.8]{L1}.
To derive $diam \big((\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})\big)<\infty$ we note that any two points $x,y \in (H \setminus \Sigma, d_{\bp^*})$ can be joined
by a hyperbolic geodesic arc $\gamma_{x,y}$, with $\gamma_{x,y}(0)=x$. By definition these arcs are parameterized by arc length and we have $d_{\bp^*}(x,y)=l_{d_{\bp^*}}(\gamma_{x,y})$.\\
We choose a basepoint $p \in H \setminus \Sigma$ and the ball $B=B_{2 \cdot \sigma}(p)$, where the radius is measured relative to $d_{\bp^*}$ and $\sigma$ is the bounded geometry constant used in \ref{doob} (\ref{ges}). From the boundary Harnack inequality \ref{mbhsq} and the compactness of $\Sigma$ we have a constant $a>0$ so that
\begin{equation}\label{upi}
\Phi(x) \le a \cdot G(x,p), \mm{ for any } x \in H \setminus (\Sigma \cup B)
\end{equation}
Now we apply \ref{doob} (\ref{ges}) and (\ref{3g}) to estimate $\Phi$ along $\gamma_{p,y}$. We have $t=d_{\bp^*}(\gamma_{p,y}(t),p)$, $g_H(\dot \gamma(t),\dot \gamma(t)) = \delta_{\bp^*}^{2}(\gamma(t))$ and for any $z \in \gamma_{p,y}$ with $d_{\bp^*}(p,z) \ge 2 \cdot \sigma$
\li
\begin{eqnarray}\label{upest}
\Phi^{4/(n-2)}(z) \cdot g_H(\dot \gamma,\dot \gamma) & \le & a^{4/(n-2)} \cdot G(z,p)^{4/(n-2)} \cdot g_H(\dot \gamma,\dot \gamma)= \\
\nonumber a^{4/(n-2)} \cdot {\bp^*}^{^2}(p) \cdot G^{\sima}(z,p)^{4/(n-2)} & \le & a^{4/(n-2)} \cdot {\bp^*}^{^2}(p) \cdot c \cdot \left(\exp(-\alpha \cdot d_{\bp^*}(z,p))\right)^{4/(n-2)}.
\end{eqnarray}
\liii
Since $\left(\exp(-\alpha \cdot t)\right)^{2/(n-2)}$ is integrable on $\R^{\ge 0}$, the integral of $\left(\exp(-\alpha \cdot d_{\bp^*}(\cdot,p))\right)^{2/(n-2)}$ along $\gamma_{p,y} |_{[2 \cdot \sigma,d_{\bp^*}(y,p))}$ is \emph{uniformly upper bounded} for any $y \in H \setminus (\Sigma \cup B)$. Thus $diam \big((H \setminus \Sigma,d_{\sima})\big) < \infty$ and from this we also have $diam \big((\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})\big)<\infty$ for the metric completion.\\
For (ii) we only need to consider the case $z \in \Sigma$. The boundary Harnack inequality can be used to upper estimate $\Phi$ by $G(\cdot,p)$ in a neighborhood of $z$. Following the argument of part (i), we see that the integral of $\left(\exp(-\alpha \cdot d_{\bp^*}(\cdot,p))\right)^{2/(n-2)}$ along $\gamma_{p,y} |_{[2 \cdot \sigma,d_{\bp^*}(y,p))}$ is uniformly upper bounded for any $y \in (H \setminus (\Sigma \cup B)) \cap B_R(z)$, for any fixed $R>0$. From this we have $diam(\widehat{B_r(z) \setminus \Sigma},\widehat{d_{\sima}})<\infty$.\\
Now we use that $\Sigma \subset (H,d_H)$ is homeomorphic to $\p_G (H \setminus \Sigma,d_{\bp^*}) \setminus \{\infty\}$, where $\p_G$ denotes the \emph{Gromov boundary}, to define a
canonical continuous map
\begin{equation}\label{idd}
I_H: (H,d_H) \ra (\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})
\end{equation}
We recall the definition of the \emph{Gromov product}:
for any $y, z \in \p_G (H \setminus \Sigma,d_{\bp^*}) \setminus \{\infty\}$, we call
\[
(y \cdot z)_p:=\frac 12\cdot (d_{\bp^*}(p, y) + d_{\bp^*}(p,z)-d_{\bp^*}(y,z))
\]
the \emph{Gromov product} of $y$ and $z$ with respect to $p$. It measures how long two geodesic rays from $p$ to $y$ and from $p$ to $z$ travel together before they diverge (and form the $\delta$-thin triangle when we add $\gamma_{y,z}$ to the picture). Concretely, we have from \cite[Lemme 2.17]{GH}:
\begin{equation}\label{ghy}
(y \cdot z)_p \le d_{\bp^*}(p, \gamma_{y,z}) \le (y \cdot z)_p + 4 \cdot \delta
\end{equation}
We use this product to describe \emph{neighborhood bases} $U(z,a)\subset \p_G (H \setminus \Sigma,d_{\bp^*})$ and $\U(z,a)\subset (H \setminus \Sigma,d_{\bp^*})$ of $z \in \Sigma$, for $a >0$:
\begin{itemize}[leftmargin=*]
\item $U(z,a):= \{ x \in \p_G (H \setminus \Sigma,d_{\bp^*}) \,|\,\mm{ there are geodesic rays } \gamma_1,\, \gamma_2$ from $p$ to $\gamma_1(\infty) =z,\, \gamma_2(\infty) = x$ and such that $\liminf_{t \ra \infty} (\gamma_1(t),\gamma_2(t))_p \ge a\}$.
\item $\U(z,a):= \{x \in (H \setminus \Sigma,d_{\bp^*}) \,|\, \mm{ there is a geodesic ray } \gamma \mm{ starting from }p \mm{ to }
\gamma(\infty)= z \mm{ and such that }\\\liminf_{t \ra \infty} (\gamma(t),x)_p \ge a\}$.
\end{itemize}
See for instance~\cite[Lemma III.H.3.6]{BH} or~\cite[Chapter 2]{BK}. Since $U(z,a)$ and $\U(z,a)$ shrink to $z$, for $a \ra \infty$, we infer from (\ref{upest}) that
\begin{equation}\label{shr}
diam \big((\U(z,a), d_{\sima})\big) \ra 0, \mm{ for } a \ra \infty
\end{equation}
In turn, this shows that $\bigcap_{a >0}\overline{\U(z,a)}$ contains exactly one point $z_{\bp} \in \p_{\sima}(H \setminus \Sigma,\Phi^{4/(n-2)} \cdot g_H)$, the boundary set defined in (\ref{poh}), and we define
\begin{equation}\label{ext}
I_H: \Sigma \ra \p_{\sima}(H \setminus \Sigma,\Phi^{4/(n-2)} \cdot g_H), \mm{ by } I_H(z) := z_{\bp}, \mm{ for } z \in \Sigma
\end{equation}
$I_H$ extends the identity map on $H \setminus \Sigma$ to a map $\widehat{I_H}: (H,d_H) \ra (\widehat{H \setminus \Sigma}, \widehat{d_{\sima}})$.
From the previous discussion we also see, for a converging sequence $a_k \ra a$ in $(H,d_H)$, that $d_{\sima}(\widehat{I_H}(a_k), \widehat{I_H}(a)) \ra 0$, for $k \ra \infty$. That is, $\widehat{I_H}$ is a \emph{continuous} map. \qed
Before we show that $\widehat{I_H}$ is a \emph{homeomorphism} we introduce some basic blow-up analysis and a suitable version of the Bombieri-Giusti Harnack inequality \cite{BG}.
\subsubsection{Blow-Ups and Induced Solutions} \label{bhpgr}
The potential theory of $L_{H,\lambda}$ on $H$ can be supplemented from the individual potential theories of $L_{C,\lambda}$ on any of its
tangent cones $C$ \cite[Th.3]{L3}. Examples are asymptotic growth estimates for eigenfunctions on $H$ we get from \emph{induced} eigenfunctions on $C$. To explain this process we summarize \cite[Ch.3.1 and 3.2]{L3}. We consider solutions $u>0$ of $L_{H,\lambda} \, \phi = 0$ either on bounded subsets $U \subset H \setminus \Sigma_H$ or on the whole regular set $H \setminus \Sigma_H$; in the latter case we say $u$ is an \emph{entire solution}.\\
While we scale by increasingly large $\tau >0$, we observe that $\tau \cdot \TP_H(p,\omega,R/\tau,r/\tau)$, cf.C.2., is better and better $C^k$-approximated by the corresponding truncated \si-pencil in the given tangent cone. This carries over to the analysis on $\P$. When we choose any $p \in \Sigma_H$, then for any $\ve > 0$, $R > 1 > r >0$, some $\omega \in (0,1)$ and any entire solution $u>0$ of $L(H) \,\phi = 0$ there exists some $\tau^*(L, u,\ve,\omega, R , r, p)>0$ such that for \emph{any} $\tau\ge \tau^*$ the following \emph{freezing result}, describing a gradually improving cone approximation under scaling, holds \cite[Prop.3.10]{L3}:\\
There is some tangent cone $C^\tau_p$ with $|\D - id_{C^\tau_p}|_{C^{k,\gamma}(\TP_{C^\tau_p}(0,\omega,R,r))} \le \ve $ and an entire solution $v>0$, of $L(C^\tau_p) \,\phi = 0$, that can be chosen \emph{independently} of $\ve,\omega, R,r$, with
\begin{equation}\label{ffefua}
|u \circ \D / v-1|_{C^{2,\alpha}(\TP_{C^\tau_p}(0,\omega,R,r) )} \le \ve.
\end{equation}
We call such an entire solution $v$ on $C^\tau_p$ an \emph{induced solution}. We also recall \cite[Lemma 3.9]{L3}
\begin{equation}\label{inher}
\lambda^{\bp}_{L,H} \le \lambda^{\bp}_{L,C} \mm{ for any tangent cone } C \mm{ of } H.
\end{equation}
We also use that \emph{minimal growth is inherited} under convergence to limits although the singular sets are merely coarsely related to each others \cite[Th.3]{L3}. We consider three different situations.\\
\textbf{Tangent Cones} \, Let $u >0$ be a supersolution of $L_{H,\lambda} =0$, $\lambda < \lambda^{\bp}_H $ that is a solution on a neighborhood $V$ of some $p \in \Sigma$ with \emph{minimal growth towards any point in $V \cap \Sigma$}. Then, if $C$ is a tangent cone of $H$ in $p$, any solution induced on $C$ has \emph{minimal growth towards all points of $\Sigma_C$.}\\
\textbf{General Blow-Ups} \, More generally, let $u >0$ be a supersolution of $L_{H,\lambda} =0$, $\lambda < \lambda^{\bp}_H $ that is a solution on a neighborhood $V$ of $p \in \Sigma$ with \emph{minimal growth towards} $V \cap \Sigma$ and consider a sequence $s_i \ra \infty$ of scaling factors and a sequence of points $p_i \ra p$ in $\Sigma_{H}$ such that $(s_i \cdot H, p_i)$ subconverges to a limit space $(H_\infty, p_\infty)$ with $H_\infty \in {\cal{H}}^{\R}_n.$ Then the induced solutions on $H_\infty \setminus \Sigma_{H_\infty}$ have \emph{minimal growth towards }$\Sigma_{H_\infty}$. \\
\textbf{Tangent Cones at Infinity} \, For $H \in {\cal{H}}^{\R}_n$ the argument of \cite[Th.3]{L3} equally applies to \emph{tangent cones at infinity}. Recall that they are the possible limit spaces we get from scaling $H$ by some sequence $\tau_i >0$ with $\tau_i \ra 0$ for $i \ra \infty$. We assume $u >0$ solves $L_{H,\lambda} =0$, $\lambda < \lambda^{\bp}_H $ with \emph{minimal growth towards all points of $\Sigma_H$} (but not to infinity). Then, for any tangent cone $C$ of $H$ at infinity, any solution induced on $C$ has \emph{minimal growth towards all points of }$\Sigma_C$.\\
We apply this blow-up inheritance in several places throughout this paper. We start with a result where we transfer a distance growth rate from tangent cones back to the given hypersurface.
\begin{lemma}\label{umia} For any non-totally geodesic $H \in {\cal{H}}^{\R}_n$ and $\lambda \in (0,\lambda_H)$ there is some $\beta(H) >0$ so that we have for the Euclidean ball $B_R(0)$, $R>0$ large enough:
\begin{equation}\label{div}
dist_{d_{\sima}}(\p B_R(0) \cap H \setminus \Sigma, \{0\}) \ge R^\beta.
\end{equation}
\end{lemma}
\textbf{Proof} \, In the cone case $H=C$ we know from \cite[Th.4.4]{L3} that, in terms of polar coordinates $(\omega,r) \in (\p B_1(0) \cap C \setminus \Sigma) \times \R^{>0}=C \setminus \Sigma$:
{\small \begin{equation}\label{sep}
\Phi_C(\omega,r) = \psi(\omega) \cdot r^{\alpha}, \mm{ for } \textstyle \alpha = - \frac{n-2}{2} + \sqrt{ \Big( \frac{n-2}{2} \Big)^2 + \mu}<0,
\end{equation}}
where $\mu(C,\lambda)>-(n-2)^2/4$ is the ordinary principal eigenvalue of an associated elliptic operator on $\p B_1(0) \cap C $.
Now we recall the inheritance result \cite[Th.3.13]{L3} for cones in $\mathcal{SC}_n$. Since this space is compact we get a constant $A[n,\lambda]>0$ depending only on $n$ and $\lambda$ so that
{\small \begin{equation}\label{exx}
-\frac{n-2}{2} <-A[n,\lambda] < \alpha_+(C) <0, \mm{ for any } C\in \mathcal{SC}_n,
\end{equation}}
that is, $-A[n,\lambda] \cdot 2/(n-2) >-1$ and $(r^{-A[n,\lambda]})^{2/(n-2)}$ is \emph{not integrable} over $[1,\infty)$.\\
For general $H \in {\cal{H}}^{\R}_n$ we get from the freezing (\ref{ffefua}) of tangent cones at infinity that for any $\omega >0$ there is some constant $k>0$ and some radius $\rho>0$ so that
\begin{equation}\label{divv}
r^{-A[n,\lambda]} \le k \cdot \Phi(x) \mm{ for } x \in \P(0,\omega) \setminus B_\rho(0) \mm{ with } r = d_H(x,0).
\end{equation}
Thus we get a constant $k^*$ so that for $r \ge \rho$:
\begin{equation}\label{a}
r^{-n} \cdot r^{n-1} \cdot r^{-A[n,\lambda]} \cdot r \le k^* \cdot Vol(B_{r}(z))^{-1} \cdot \int_{B_{r}(z) \cap H \setminus \Sigma} \Phi \, dV
\end{equation}
Now we note that $scal_H=-|A|^2$ and since $\lambda>0$ we have
{\small \begin{equation}\label{ml}
\Delta_H \Phi = -\Big(\frac{n-2}{4 (n-1)} \cdot |A|^2 + \lambda \cdot \bp^2\Big) \cdot \Phi \le 0,
\end{equation}}
that is, $\Phi$ is superharmonic on $H \setminus \Sigma$. Therefore, the Bombieri-Giusti Harnack inequality \cite[Th.6]{BG} applies to $\Phi$ and it yields some constant $c>0$
independent of $H \in {\cal{H}}^{\R}_n$ and of $r \ge \rho$ so that
\begin{equation}\label{bghs}\,
r^{-A[n,\lambda]} \le k^* \cdot Vol(B_{r}(z))^{-1} \cdot \int_{B_{r}(z) \cap H \setminus \Sigma} \Phi \, dV \le k^* \cdot c \cdot \inf_{{B_{r}(z) \cap H \setminus \Sigma}}\Phi.
\end{equation}
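For the reader's convenience we spell out, as we read it, how (\ref{bghs}) yields the claimed distance growth. Set $q := 2 \cdot A[n,\lambda]/(n-2)$, so $q \in (0,1)$ by (\ref{exx}). Since $d_{\sima}$ comes from the conformal metric $\Phi^{4/(n-2)} \cdot g_H$, any curve $\gamma \subset H \setminus \Sigma$ from $\p B_\rho(0)$ to $\p B_R(0)$ satisfies, using that $r=d_H(\cdot,0)$ is $1$-Lipschitz along $\gamma$,
\[
l_{d_{\sima}}(\gamma) = \int_\gamma \Phi^{2/(n-2)} \, ds \ \ge \ const \cdot \int_\rho^R r^{-q} \, dr \ = \ \frac{const}{1-q} \cdot \big(R^{1-q} - \rho^{1-q}\big),
\]
and for $R$ large the right hand side dominates $R^\beta$ for any fixed $\beta < 1-q$.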
From this and (\ref{exx}) we get some $\beta>0$ so that the asserted estimate (\ref{div}) holds for $R>0$ large enough.
\qed
\setcounter{section}{3}
\renewcommand{\thesubsection}{\thesection}
\subsection{Growth and Singularities of Minimal Factors} \label{dom}
The results we have seen so far used the hyperbolic unfoldings and the boundary Harnack inequality. Now we additionally use the Bombieri-Giusti Harnack inequality in a rather complementary way.
In simple terms: so far we have found upper bounds for certain geometric quantities, and the Bombieri-Giusti Harnack inequality additionally yields lower bounds.
\subsubsection{Comparison of Balls in $(H,d_H)$ and $(H,d_{\sima})$} \label{dav}
To compare radii of balls measured either in terms of $d_H$ or $d_{\sima}(\Phi)$ we use an interplay of estimates for the three main metrics on $H$, the original geometry $d_H$, the minimal factor metric $d_{\sima}(\Phi)$ and the hyperbolic geometry $d_{\bp^*}$.
\begin{proposition}[Local Compactness of Minimal Factor Metrics]\label{lcmg} For $H \in \cal{G}$ we have the following basic metric properties for $d_{\sima}(\Phi)$:
\begin{enumerate}
\item For any $z \in H$ and $r>0$ there is an outer $d_{\sima}$-radius $0<\rho_{out}(r,z)<\infty$ and an inner $d_{\sima}$-radius $0<\rho_{inn}(r,z)<\infty$ so that
\begin{equation}\label{or}
(B_{\rho_{inn}}(z) , d_{\sima}) \subset (B_r(z), d_H) \subset (B_{\rho_{out}}(z), d_{\sima}).
\end{equation}
\item $(\widehat{H \setminus \Sigma}, \widehat{d_{\sima}(\Phi)})$ is homeomorphic to $(H,d_H)$. Thus we can write it as $(H,d_{\sima})$.
\end{enumerate}
Writing $(B_a(p), d_H) \subset (B_b(p), d_{\sima})$ means the set-theoretic inclusion $B_a(p) \subset B_b(p)$, where the radii are measured relative to $d_H$ and $d_{\sima}$, respectively.
\end{proposition}
\textbf{Proof} \, For (i) we first note that the inclusion $(B_r(z), d_H) \subset (B_{\rho_{out}}(z), d_{\sima})$ is just the diameter bound for balls established above. To show the existence of $\rho_{inn}$ we prove that for $r>0$ small enough there is some $c>0$ so that $\Phi \ge c$ on $B_r(z)\cap H \setminus \Sigma$.
From this we readily see that we can choose $\rho_{inn}= c^{2/(n-2)} \cdot r$. To get that lower estimate we recall the Gau\ss-Codazzi equation:
\begin{equation}\label{gce}
|A_H|^2 + 2Ric_M(\nu,\nu) = scal_M - scal_H +(tr A_H)^2,
\end{equation}
where $tr A_H$ is the mean curvature of $H$. There is some $m >0$ so that for any constant $k>0$ there is a small $r>0$ so that:
\begin{equation}\label{curv}
|Ric_M(\nu,\nu)|, |scal_M|,|tr A_H| \le m, \mm{ whereas } \bp \ge k \mm{ on } B_{r}(z) \cap H \setminus \Sigma.
\end{equation}
The bound on $|tr A_H|$ is part of the chosen definition D.2 for almost minimizers. Thus, for large $k\gg 1$ and correspondingly small $r>0$, any solution $\Phi>0$ of $L_{H,\lambda}\, \phi =0$ is superharmonic on $B_{r}(z) \cap H \setminus \Sigma$:
{\small \begin{equation}\label{ml}
\Delta_H \Phi = \Big(\frac{n-2}{4 (n-1)} \cdot scal_H - \lambda \cdot \bp^2\Big) \cdot \Phi \le 0
\end{equation}}
Almost minimizers share the regularity theory with area minimizers. In particular, we get the same blow-up limits, the minimal oriented boundaries in Euclidean space, used to derive the localized isoperimetric inequality \cite[Theorem 2 p.\ 39]{BG}. From this and from \cite[Proposition 2.7 and Corollary 2.10]{L1}, showing that the intrinsic and the extrinsic distances are equivalent, i.e.\ that there is a constant $c(H)\in (0,1)$ such that for any $p,q \in H$ we have $c \cdot d_{g_{H^n}}(p,q) \le d_{g_{M^{n+1}}}(p,q) \le d_{g_{H^n}}(p,q)$, the Bombieri-Giusti $L^1$-Harnack inequality \cite[Theorem 6 p.\ 39]{BG} applies: when $H \in {\cal{G}}^c_n$, for some sufficiently small $r_H>0$ and any $r \in (0,r_H)$, and when $H \in {\cal{H}}^{\R}_n$, for any $r>0$:
\begin{equation}\label{bgha}
0< a \cdot \int_{B_{r}(z) \cap H \setminus \Sigma} \Phi \, dV=:c \le \inf_{{B_{r}(z) \cap H \setminus \Sigma}}\Phi,\, \mm{ for some }a = a(L_{H,\lambda},r)>0.
\end{equation}
This estimate shows that the map $I_H: \Sigma \ra \p_{\sima}(H \setminus \Sigma,\Phi^{4/(n-2)} \cdot g_H)$, $z \mapsto I_H(z)$ for $z \in \Sigma$, from (\ref{ext}) is \emph{injective}.\\
Now we claim that $I_H$ is also \emph{surjective}. For this we note that any point $x \in \p_{\sima}(H \setminus \Sigma,\Phi^{4/(n-2)} \cdot g_H)$ is the $d_{\sima}$-limit of a sequence of points $x_k \in H \setminus \Sigma$ and we can choose hyperbolic geodesic arcs $\gamma_{p,x_k}$ from $p$ to the $x_k$.
A subsequence of these arcs converges to a hyperbolic geodesic ray that represents some point $\overline{x} \in \Sigma$ and then we see that $\{x\}=\bigcap_{a >0}\overline{\U(\overline{x},a)}$, that is, $I_H(\overline{x})=x$.\\
Finally, we show that $\widehat{I_H}^{-1}$ is continuous. For compact $H$ any closed set is mapped onto a compact and thus closed set. For $H \in {\cal{H}}^{\R}_n $ we use \ref{umia} to argue similarly: for any $R>0$ and any closed set $A \subset H$, the image $\widehat{I_H}(A \cap \overline{B_R(0)})$ is again closed. Since (\ref{div}) shows that the sequence $\widehat{I_H}(\p B_i(0) \cap H)$, $i =1,2,...$ has no accumulation points in $(H,d_{\sima})$ we infer that $\widehat{I_H}(A)$ is also closed. This implies that $\widehat{I_H}$ is a homeomorphism.
\qed
We may reformulate \ref{lcmg} (ii) and (iii) as follows: for any $r>0$ and $p \in H$ we have some $\kappa \ge 1$ and $\rho>0$ so that
\begin{equation}\label{incl}
(B_\rho(p),d_{\sima}) \subset (B_r(p), d_H) \subset (B_{\kappa \cdot \rho}(p),d_{\sima}).
\end{equation}
We define
\begin{equation}\label{kappa}
\kappa_0(H,p,r,\Phi):=\inf \{\kappa \ge 1 \,|\, (\ref{incl}) \mm{ holds for at least one }\rho>0\}.
\end{equation}
This infimum actually is a minimum since $(B_\rho(p),d_{\sima})$ and $(B_r(p), d_H)$ are open. In turn, this $\kappa_0$ determines a unique $\rho_0>0$ that satisfies (\ref{incl}). The parameters $\kappa_0$ and $\rho_0$ depend differently on the gauging of $\Phi$, that is, on choosing a positive multiple of $\Phi$:
\begin{equation}\label{gauge}
\kappa_0(\lambda \cdot \Phi) = \kappa_0(\Phi) \mm{ but } \rho_0(\lambda \cdot \Phi) = \lambda^{2/(n-2)} \cdot \rho_0(\Phi), \mm{ for } \lambda >0.
\end{equation}
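The relation (\ref{gauge}) is an elementary consequence of the conformal scaling behavior, recorded here for convenience: from $(\lambda \cdot \Phi)^{4/(n-2)} \cdot g_H = \lambda^{4/(n-2)} \cdot \Phi^{4/(n-2)} \cdot g_H$ we get
\[
d_{\sima}(\lambda \cdot \Phi) = \lambda^{2/(n-2)} \cdot d_{\sima}(\Phi),
\]
so the $d_{\sima}(\lambda \cdot \Phi)$-ball of radius $\lambda^{2/(n-2)} \cdot \rho$ equals the $d_{\sima}(\Phi)$-ball of radius $\rho$. Both inclusions in (\ref{incl}) thus transform by the same rescaling of $\rho$, leaving $\kappa$ unchanged.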
We now prove that $\kappa_0$ is essentially independent of $r$ and $p$.
\begin{proposition}[Equivalence of Ball Radii]\label{radii} We consider the two cases where
\[\mm{ \emph{\textbf{(i)}} } H \in {\cal{H}}^{\R}_n \mm{ and non-totally geodesic}, \, \mm{ \emph{\textbf{(ii)}} } H \in {\cal{G}}^c_n \mm{ and singular.}\]
Then we have a common upper bound for $\kappa_0(H,p,r,\Phi)$ for any minimal factor metric:
\begin{equation}\label{kap}
k_n \ge \kappa_0(H,p,r,\Phi), \mm{ for any } p \in H, r>0,
\end{equation}
for some $k_n \ge 1$ depending only on the dimension $n$, for $r$ sufficiently small in case (ii).
\end{proposition}
\textbf{Proof} \, For $H \in {\cal{H}}^{\R}_n$ and $\tau >0$, $v \in \R^{n+1}$, we also have $\tau \cdot H + v \in {\cal{H}}^{\R}_n$. Thus we only need to consider the case $p=0$ and $r=1$.\\
We assume there is a sequence $H_i \in {\cal{H}}^{\R}_n$ with $\kappa_0[i]:=\kappa_0(H_i,0,1)\ra \infty$ for the balls $(B_1(0), d_{H_i})$. We can find a subsequence of the pointed spaces with limit space $H_\infty \in {\cal{H}}^{\R}_n$ and regular points $q_i \in \p B_{1/2}(p_i) \subset H_i$ with $a_n \ge \bp(q_i) \ge b_n >0$, for constants $a_n >b_n>0$, and $q_i \ra q_\infty \in \p B_{1/2}(0) \subset H_\infty$, for $i \ra \infty$. This follows from the naturality of $\bp$ and the compactness of ${\cal{H}}^{\R}_n$. The naturality of $\bp$ also shows that $q_\infty$ is again regular with $a_n \ge \bp(q_\infty) \ge b_n >0$. We use the $q_i$ as basepoints and choose the parameter $s_i>0$ so that $s_i \cdot \Phi(q_i)=1$. Then, by standard elliptic theory, using $\D$-maps, the $s_i \cdot \Phi$ compactly subconverge to a solution $\Phi_\infty>0$ of $L_{H_\infty,\lambda} \phi=0$ on $H_\infty \setminus \Sigma_{H_\infty}$ with $\Phi_\infty(q_\infty)=1$.\\
Since $q_\infty$ is a regular point there is a radius $\eta \in (0,1/4)$, for large $i$ and then independently of $i$, so that $B_{2 \cdot \eta}(q_i)$ is regular and, via $\D$-maps, nearly isometric to $B_{2 \cdot \eta}(q_\infty)$ in $C^5$-norm. Then, we have Harnack inequalities for positive solutions of $L_{H_i,\lambda} \phi=0$ on $B_{2 \cdot \eta}(q_i)$ with constants independent of $i$. Thus we have uniform lower estimates $k>0$, for large $i$, so that
\begin{equation}\label{gcbg}
k \le \int_{B_{\eta}(q_i)} s_i \cdot \Phi \, dV \le \int_{B_1(p_i) \cap H_i \setminus \Sigma} s_i \cdot \Phi \, dV
\end{equation}
In this Euclidean case the intrinsic and the extrinsic distances for $H_i \in {\cal{H}}^{\R}_n$ are equivalent by a constant depending only on $n$ \cite[Corollary 2.10]{L1} and
we have $Ric_M, scal_M, tr A_H \equiv 0$. From this the Bombieri-Giusti $L^1$-Harnack inequality (\ref{bgha}) gives us a common constant $l=l(L_{H_i,\lambda})>0$, independent of $i$, so that $\inf_{B_1(0) \cap H_i \setminus \Sigma} s_i \cdot \Phi \ge l$ and we get, as in the proof of \ref{lcmg}(iii), a lower positive estimate for the $d_{\sima}(s_i \cdot \Phi )$-distance $\delta_i$ of points in $(\p B_1(p_i), d_{H_i})$ to $p_i$: $\delta_i \ge \Delta$, for some $\Delta>0$ that can be chosen independently of $i$. \\
\li
To disprove that $\kappa_0[i] \ra \infty$ we use the normalizations $L_{H_i,\lambda}\,G(\cdot,q_i)=\delta_{q_i}$ and $s_i \cdot \Phi(q_i)=1$ on $(H_i \setminus \Sigma, d_{H_i})$.
For any $z \in \overline{B_1(0)}$ we choose hyperbolic geodesic arcs $\gamma[z]$ from $q_i$ to $z$, relative $(H_i \setminus \Sigma, d_{\bp^*})$.
From $a_n \ge \bp(q_i) \ge b_n >0$, (\ref{smot}) and (\ref{gre}) we infer
\begin{equation}\label{nal}
c_1^{-(n-2)} \cdot a_n^{(n-2)/2}\cdot {\bp}^{(n-2)/2}(x) \cdot G^{\sima}(x,q_i) \ge G(x,q_i), \mm{ for } x \in H_i \setminus \Sigma, \, x \neq q_i.
\end{equation}
Next we choose the integer $m \in \Z$ with $2 \cdot \sigma +1 > m \ge 2 \cdot \sigma$ and recall from the definition that $\mathcal{N}_{m}(\gamma[z]) \cap (B_{m}(p_i), d_{\bp^*}) \v$. Thus from the ordinary Harnack inequality, applied on $(B_{2 \cdot m}(p_i), d_{\bp^*})$, and the boundary Harnack inequality on $\mathcal{N}_{m}(\gamma[z])$ we get a uniform upper $d_{\sima}$-length estimate from (\ref{upest}). Therefore there is a common upper bound $k^*_n$ for $\kappa_0[i] \cdot \delta_i \ge \kappa_0[i] \cdot \Delta$, so the $\kappa_0[i]$ remain bounded, contradicting the assumption. This proves the claim in case (i).\\
Finally we reduce case (ii), that is, $H \in {\cal{G}}^c_n$, to that of $H \in {\cal{H}}^{\R}_n$. Here we claim that there is a radius $r_H>0$ so that $k_n:=k^*_n +1$ is an upper bound for $\kappa_0(H,p,r)$, for any $p \in H$, $r \in (0,r_H)$. Otherwise there is a sequence of points $p_i \in H$ and radii $r_i>0$, with $r_i \ra 0$, so that $\kappa_0[i]:=\kappa_0(p_i,r_i)\ra \infty$. We use the scaling invariance of $L_{H,\lambda} \phi=0$ and scale $g_H$ to $r^{-2} \cdot g_H$ and thereby $(B_r(p),g_H)$ to $(B_1(p), r^{-2} \cdot g_H)$.
Then there is a compactly converging subsequence of the pointed spaces $H_i:=(H, r_i^{-1} \cdot d_H,p_i)$, which contain the balls $(B_1(p_i), r_i^{-1} \cdot d_H)$, with limit pointed space $(H_\infty, d_{H_\infty},0)$ for some hypersurface $H_\infty \in {\cal{H}}^{\R}_n$. Now we can repeat the argument of case (i), applied to these $H_i$, and see that $k_n:=k^*_n +1$ upper bounds $\kappa_0(H,p,r)$, for $r \in (0,r_H)$, $r_H>0$ small enough. \qed
The blow-up theory of Ch.\ref{bhpgr}, applied to the regular parts, and the distance estimates (\ref{or}) from \ref{lcmg}, providing the extension to the metric completions, readily imply the following.
\begin{corollary}\emph{\textbf{(Blow-Ups)}}\label{blooa}
For $H \in {\cal{G}}$ and $L_{H,\lambda}$, $\lambda < \lambda_H$, we consider $(H, d_{\sima}(\Phi_H))$, any singular point $p \in \Sigma_H$ and any tangent cone $C$ in $p$. Then we get the following \textbf{blow-up invariance:}
Any sequence $(H, \tau_i \cdot d_{\sima}(\Phi_H))$, scaled around $p$ by some sequence $\tau_i \ra \infty$, $i \ra \infty$, subconverges and the limit of any converging subsequence is $(C, d_{\sima}(\Phi_C))$ for some tangent cone $C$.
\end{corollary}
We only need to specify the notion of convergence. It has two layers: first, a subsequence of the scaled hypersurfaces $H_i= \tau_i \cdot H$, for some sequence $\tau_i \ra \infty$, $i \ra \infty$, around a
given singular point $x \in \Sigma \subset H^n$ converges to an area minimizing tangent cone $C^n \subset \R^{n+1}$. The flat norm convergence becomes a compact $C^k$-convergence over regular subsets of $C$, expressed in terms of a $C^k$-convergence of the $\D$-maps of D.2. Second, the sections $\Phi_H \circ \Gamma_i$ compactly $C^k$-converge to $\Phi_C$ on $C$ after normalizing the value of the $\Phi_H \circ \Gamma_i$ in a common base point in $C \setminus \Sigma_C$.\\
The fact that the Martin boundary of $L_{H,\lambda}$ for $\lambda < \lambda_H$ on $H \in {\cal{H}}^{\R}_n$ has exactly one point at infinity and again (\ref{or}) from \ref{lcmg} show:
\begin{corollary}\emph{\textbf{(Euclidean Factors)}} \label{euclf} For any non-totally geodesic $H \in {\cal{H}}^{\R}_n$ and $\lambda < \lambda_H$ there is a \textbf{unique}$^\cs$ space $(H, d_{\sima}(\Phi_H))$. For $C \in \mathcal{SC}_{n}$ the associated space $(C, d_{\sima}(\Phi_C))$ is invariant under scaling around $0 \in C$, that is, it is again a cone.
\end{corollary}
\subsubsection{Singularities of Minimal Factors} \label{sdm}
The (partial) regularity theory for any almost minimizer $H \in {\cal{G}}$ within some smooth ambient manifold $M^{n+1}$ says that $H$ is smooth except for a singular set $\Sigma_H$ that has Hausdorff codimension $\ge 8$ relative to $M^{n+1}$, cf.\cite{F}, \cite[Ch.11]{Gi}. We extend this estimate to the singular set of $(H,d_{\sima})$.\\
We recall some basic concepts and formulate them for the metric space $(H,d_{\sima})$.
\begin{definition}[Hausdorff Measure and Dimension]\label{hausd} For some $H \in {\cal{G}}$, let $A \subset (H,d_{\sima})$, $k \in [0,\infty)$ and $\delta \in (0,\infty]$. Then we set
\begin{equation}\label{hkd}
{\cal{H}}^\delta_k(A):= \inf \Big\{ \sum_i diam(S_i)^k \, \Big| \, A \subset \bigcup_i S_i, S_i \subset H, diam(S_i) < \delta \Big\},
\end{equation}
\begin{equation}\label{hk}
{\cal{H}}_k(A):= \lim_{\delta \ra 0} {\cal{H}}^\delta_k(A) = \sup_\delta {\cal{H}}^\delta_k(A).
\end{equation}
${\cal{H}}_k(A)$ is the $k$-dimensional \textbf{Hausdorff measure} of A. The infimum of all $k$ so that ${\cal{H}}_k(A)=0$ is the \textbf{Hausdorff dimension} $\dim_{\cal{H}} (A)=\dim_{\cal{H}} (A \subset (H,d_{\sima}))$ of $A$ as a subset of $(H,d_{\sima})$.
\end{definition}
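As a standard illustration of this definition, not needed in the sequel: for the middle-thirds Cantor set $E \subset [0,1]$, covering $E$ at stage $j$ by $2^j$ intervals of length $3^{-j}$ gives, for $k > \log 2 / \log 3$,
\[
{\cal{H}}^{\delta}_k(E) \le 2^j \cdot 3^{-j \cdot k} \ra 0, \mm{ for } j \ra \infty,
\]
and hence $\dim_{\cal{H}} (E) \le \log 2 / \log 3$; a mass distribution argument shows that this bound is sharp.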
In the literature the definition (\ref{hkd}) oftentimes contains a multiplicative gauging constant to keep the values consistent with that of the Riemannian volume of the Euclidean unit ball. But we are primarily interested in the Hausdorff dimension of $\Sigma$ and omit this constant to keep the computations as simple as possible.
\begin{remark}\label{fini} We recall from \cite[Chapter 2.3]{L1} that the extrinsic and the intrinsic distances for $H\in{\cal{G}}$ are equivalent. For $H\in{\cal{H}}^{\R}_n$ we even have a constant $c^{\R}_n \in (0,1)$ depending only on the dimension $n$ with $c^{\R}_n \cdot d_{g_H}(p,q) \le d_{g_{\R^{n+1}}}(p,q) \le d_{g_H}(p,q)$, for any two $p, q\in H\subset\R^{n+1}$. \\
This means that for the embedded area minimizer $(H,d_{\sima})$ the computation of the Hausdorff dimension of $\Sigma$ relative to $H^n$ or to $M^{n+1}$ gives the same result. \qed
\end{remark}
\begin{proposition}[Basic Properties of ${\cal{H}}^\infty_k(A)$]\label{hausi} For every $A \subset (H,d_{\sima})$, we have ${\cal{H}}^\infty_k(A)=0$ if and only if ${\cal{H}}_k(A)=0$.
For ${\cal{H}}_k$-almost all $x \in A$ we have
\begin{equation}\label{hki}
\limsup_{r \ra 0} \, {\cal{H}}^\infty_k(A \cap B_r(x))/r^k \ge 1
\end{equation}
\end{proposition}
\textbf{Proof} \, This is \cite[Lemma 11.2 and Proposition 11.3]{Gi} in the case of subsets $A \subset \R^n$. These results are direct consequences of the definitions of
${\cal{H}}^\infty_k(A)$ and ${\cal{H}}_k(A)$. They neither use the
Euclidean structure nor the regularity of the underlying space and equally apply to $(H,d_{\sima})$. \qed
We also notice that for $A$ compact and $\delta \in (0,\infty]$ we have ${\cal{H}}^\delta_k(A)<\infty$ even when $A$ has Hausdorff dimension $>k$. The definition of ${\cal{H}}^\delta_k$ also readily implies the existence of open neighborhoods $U(A,k,\eta,\delta)$ of $A$, for any $\eta>0 $, so that ${\cal{H}}^\delta_k(U) \le {\cal{H}}^\delta_k(A) + \eta$. Moreover, in non-compact cases with ${\cal{H}}^\delta_k(A)=\infty$, we also write ${\cal{H}}^\delta_k(A)>0$ to keep the notation consistent. \\
\textbf{Hausdorff dimension of $\Sigma$} \, Now we extend Federer's estimate for the dimension of $\Sigma$ relative to the area minimizing geometry to the case where $\Sigma$ belongs to $(H,d_{\sima})$, recovered from the completion of $(H \setminus \Sigma, \Phi^{4/(n-2)} \cdot g_H)$.
\begin{theorem}[Partial Regularity of $(H,d_{\sima})$]\label{haus} The Hausdorff dimension of $\Sigma$ relative to the minimal factor metric $(H,d_{\sima})$ is $\le n-7$.
\end{theorem}
This is not a direct consequence of the classical estimate for area minimizers since the identity map from $(H, d_H)$ to $(H, d_{\sima})$ is not Lipschitz continuous.
H\"{o}lder continuous homeomorphisms can increase the dimension of a subset of dimension $a \in (0,n)$ to any value $b \in (a,n)$, cf.\cite{GV},\cite{Bi},\cite{Bn}.\\
We rather imitate Federer's original argument, cf. \cite[Ch.11]{Gi}, for these geometries. We start with a comparison of the ${\cal{H}}^\infty_k$-measure of $\Sigma \subset (H,d_{\sima})$ with that of the singular set of its tangent cones.
\begin{proposition}[Measure under Blow-Ups]\label{mup} For $H \in {\cal{G}}$ converging under scaling by some sequence $\tau_j \ra \infty$, for $j \ra \infty$, to some tangent cone $C$ of $H$ in $p$, which we identify with the tip $0 \in C$:
\begin{equation}\label{lim1}
{\cal{H}}^\infty_k(\Sigma_C \cap \overline{B_1(0)}) \ge \limsup_j {\cal{H}}^\infty_k(j \cdot \Sigma_H \cap \overline{B_1(0)})
\end{equation}
Here the ball $B_1(0) \subset \R^{n+1}$ denotes the usual Euclidean unit ball and ${\cal{H}}^\infty_k$ is computed relative to $(j \cdot H,d_{\sima})$ and $(C,d_{\sima})$.
\end{proposition}
The main point is the following consequence of the minimal growth properties of the conformal deformations expressed in terms of the growth estimate \ref{doob} (\ref{ges}) for the minimal Green's function.
\begin{lemma}[Corresponding Balls]\label{cbc} For any ball $B^C_r(q) \subset B_1(0) \cap C$ of intrinsically measured radius $r \in (0,1)$ in $C$ and any $\delta >0$ there is a ball $B^{H_j}_r(q_j)\subset j \cdot H$ so that
\begin{equation}\label{de1}
B^{H_j}_r(q_j) \ra B^C_r(q) \mm{ in flat norm, for } j \ra \infty
\end{equation}
and we have
\begin{equation}\label{de2}
diam(B^{H_j}_r(q_j),d_{\sima}) \ra diam(B^C_r(q),d_{\sima}), \mm{ for } j \ra \infty.
\end{equation}
We will call the balls $B^{H_j}_r(q_j)$ asymptotically \textbf{corresponding} to $B^C_r(q)$.
\end{lemma}
\textbf{Proof} \, From D.2 we have (\ref{de1}) outside an $\eta$-distance tube $U_\eta(\Sigma_C)$ around $\Sigma_C$, for any arbitrary small $\eta>0$, for the diffeomorphic $\D$-images of this complement set in $j \cdot H$
\begin{equation}\label{dco}
\D: C \cap B_1(0) \ra \D(C \cap B_1(0)) \subset j \cdot H
\end{equation}
From \ref{fini}, that is \cite[Chapter 2.3]{L1}, we infer, choosing some sufficiently small $\eta >0$, that there are (not necessarily singular) points $q_j$, so that (\ref{de1}) holds for the $B^{H_j}_r(q_j)$.\\ Similarly E.2 and E.3 show that the
minimal growth solutions converge smoothly to that on $C$ outside that distance tube. Now we reuse the length estimate (\ref{upest}) and remark \ref{bhp3}.2 to see that the tube size of the conformally deformed $\eta$-distance tube also, and uniformly in $j$, shrinks to zero when $\eta \ra 0$. This shows (\ref{de2}).\qed
\textbf{Proof of \ref{mup}} \, We cover $\Sigma_C \cap \overline{B_1(0)}$ by finitely many intrinsic distance balls $B_i \subset (C,d_{\sima})$ so that
\begin{equation}\label{lim2}
{\cal{H}}^\infty_k(\Sigma_C \cap \overline{B_1(0)}) > \sum_i diam(B_i)^k - \ve
\end{equation}
From \ref{cbc} we find for any $\delta >0$ some $j_\delta >0$ and, for $j \ge j_\delta$, a corresponding family of balls $B_i^\delta \subset j \cdot (H,d_{\sima})$ covering $j \cdot \Sigma_H \cap \overline{B_1(0)}$ with $diam(B^\delta_i) \le (1 + \delta) \cdot diam(B_i)$ and hence:
\begin{equation}\label{lim3}
{\cal{H}}^\infty_k(j \cdot \Sigma_H \cap \overline{B_1(0)}) \le \sum_i diam(B^\delta_i)^k
\end{equation}
Summarizing we have
\begin{equation}\label{lim4}
\limsup_j {\cal{H}}^\infty_k(j \cdot \Sigma_H \cap \overline{B_1(0)}) \le (1 + \delta)^k \cdot \left({\cal{H}}^\infty_k(\Sigma_C \cap \overline{B_1(0)}) + \ve\right)
\end{equation}
For $\ve \ra 0$ and $\delta \ra 0$ the claimed estimate (\ref{lim1}) follows. \qed
\begin{corollary}[Cone Reduction]\label{cre} For $H \in {\cal{G}}$ assume that ${\cal{H}}_k(\Sigma_H)>0$, for some $k$. Then, for ${\cal{H}}_k$-almost every point $x \in \Sigma_H$, there exists some tangent cone $C$ in $x$ such that ${\cal{H}}_k(\Sigma_C)>0$.
\end{corollary}
\textbf{Proof} \, From (\ref{hki}) in \ref{hausi} we can find for ${\cal{H}}_k$-almost every point $x \in \Sigma_H$ a sequence of radii $r_i \ra 0$ for $i \ra \infty$ such that ${\cal{H}}^\infty_k(\Sigma_H \cap B_{r_i}(x)) \ge r_i^k$. Applying \ref{mup} we get the claim. \qed
It is obvious that the dimensions of the singular sets of an area minimizing cone $C^n \subset \R^{n+1}$ and that of the product cone $\R \times C^n \subset \R^{n+2}$ differ by one. In the area minimizing case this is used in the inductive reduction argument for the codimension estimate \cite[Th.11.8]{Gi}. Minimal factor metrics on $\R^m \times C^{n-m}$ are no longer Riemannian products. To understand their structure we recall that the eigenfunction $\Phi_{\R^m \times C^{n-m}}$ with minimal growth towards any point of $\Sigma$ is unique$^\cs$ from \cite[Th.3]{L2} and \cite[Th.6]{L3}. As a consequence $\Phi_{\R^m \times C^{n-m}}$ reflects the symmetries of $\R^m \times C^{n-m}$.
\begin{remark}\label{evee} Following \cite[Prop.4.6]{L3}, we can write $\Phi_{\R^m \times C^{n-m}}$ in cylindrical coordinates $x=(z, r,\omega) \in \R^m \times \R^{>0} \times (\p B_1(0) \cap C \setminus \Sigma_C)$, for $r=r(x)=dist(x,\R^m \times \{0\})$, as
\[\Phi_{\R^m \times C^{n-m}}(z,r,\omega) = \psi(\omega) \cdot r^{\alpha_+} \mm{ for some } \alpha_+<0 \mm{ on } \R^m \times C^{n-m} \setminus \Sigma_{\R^m \times C^{n-m}}.\]
Let $C \in \mathcal{SC}_n$ be a singular area minimizing cone. There are constants $\Lambda_n > \lambda_n >0$ depending only on $n$ such that $(L_C)_\lambda$ is \si-adapted for any $\lambda \le \Lambda_n$ and for $\lambda \in (0,\lambda_n]$
\[
-\vartheta_\lambda > \alpha_+ \ge - (1- \sqrt{3/4}) \cdot \frac{n-2}{2} >-\frac{n-2}{2} >- (1+ \sqrt{3/4}) \cdot \frac{n-2}{2} \ge \alpha_- > \vartheta_\lambda -(n-2)
\]
for some constant $\vartheta_\lambda>0$ depending only on $\lambda$ and $n$.\qed
\end{remark}
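Schematically, and suppressing the precise conventions for the link operator, the exponents $\alpha_\pm$ arise as indicial roots: on a cone $C \in \mathcal{SC}_n$ both $scal_C$ and $\bp^2$ are homogeneous of degree $-2$, so the separation ansatz $\phi = \psi(\omega) \cdot r^{\alpha}$ turns $L_{C,\lambda}\, \phi =0$ into an eigenvalue problem on the link $\p B_1(0) \cap C \setminus \Sigma_C$. If $\mu$ denotes the principal eigenvalue, with positive eigenfunction $\psi$, then
\[
\alpha_\pm = -\frac{n-2}{2} \pm \sqrt{\Big(\frac{n-2}{2}\Big)^2 + \mu}, \mm{ in particular } \alpha_+ + \alpha_- = -(n-2),
\]
matching the symmetry of the above chain of inequalities around $-\frac{n-2}{2}$.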
Thus $(\R^m \times C^{n-m},d_{\sima})$ is invariant under \emph{translations} in $\R^m$-direction and under \emph{scalings} around any point on the axis $\R^m \times \{0\}$. From this we determine the Hausdorff dimension of $\R^m \times \{0\}$.
\begin{lemma}[Hausdorff Dimension of Axes]\label{hausx} Within $(\R^m \times C^{n-m}, d_{\sima})$, $m \ge 1$, the $m$-planes $\R^m \times \{y\}$, $y \in C$ have Hausdorff dimension $m$. Moreover, for any $k>0$ and $y \in C$ we have ${\cal{H}}^\infty_k( [0,1]^m \times \{0\})>0$ if and only if ${\cal{H}}^\infty_k( [0,1]^m \times \{y\})>0$.
\end{lemma}
\textbf{Proof} \, We use \ref{lcmg} and the translation invariance of $(\R^m \times C^{n-m}, d_{\sima})$ in $\R^m$-direction to choose $\rho>0$ so that the balls $B_{\rho}(p_i)$, $p_i \in \Z^m$, measured relative $(\R^m \times C^{n-m}, d_{\sima})$, cover $\R^m \times \{0\} \subset \R^m \times C^{n-m}$. Now we consider the Euclidean lattices $2^{-j} \cdot \Z^m \subset \R^m$, $j =1,2,...$ and observe that due to scaling invariance of $(\R^m \times C^{n-m}, d_{\sima})$ the $B_{2^{-j} \cdot \rho}(2^{-j} \cdot p_i)$, $p_i \in \Z^m$, cover $\R^m \times \{0\} \subset \R^m \times C^{n-m}$. The restriction of the lattice $2^{-j} \cdot \Z^m$ to the unit cube $[0,1]^m \subset \R^m$ contains $2^{j \cdot m}$ points (up to lower orders along the boundary). For these balls we get, for any $\alpha >m$:
\li
\begin{equation}\label{konv1}
{\cal{H}}^\infty_\alpha([0,1]^m) \le \sum_{\{p_i \in [0,1]^m \cap 2^{-j} \cdot \Z^m \}} diam(B_{2^{-j} \cdot \rho}(2^{-j} \cdot p_i))^\alpha = 2^{j \cdot m} \cdot 2^{-j \cdot \alpha} \cdot \rho^\alpha \ra 0 ,\mm{ for } j \ra \infty.
\end{equation}
Thus $\dim_{\cal{H}} ([0,1]^m \subset (H,d_{\sima})) \le m$. In turn, since $\alpha_+ <0$ there is some $\delta_0>0$ so that for $\delta \in (0,\delta_0)$ any set of diameter $\le \delta$ that intersects
$\R^m \times \{0\}$ also has diameter $\le \delta$ when computed relative to $d_H$. Thus we infer from the area minimizing case, where $\dim_{\cal{H}} ([0,1]^m \subset (H,d_H)) = m$,
that $\dim_{\cal{H}} ([0,1]^m \subset (H,d_{\sima})) \ge m$.\\
More generally, the translation that maps $\R^m \times \{0\}$ onto $\R^m \times \{y\}$, $y \in C$ is bi-Lipschitz in terms of $d_{\sima}$ and from comparing ball covers, defined as above, we see that, up to this constant (to the power of $m$) we get the same Hausdorff measure estimates for $[0,1]^m \times \{0\} \subset \R^m \times \{0\}$ also for $[0,1]^m \times \{y\} \subset \R^m \times \{y\}$ and vice versa. \qed
\begin{lemma}[Radial Singularities]\label{radix} For $C^{n-m} \in \mathcal{SC}_{n-m}$ assume that $\{0\} \varsubsetneq \Sigma_C$, that is, $C^{n-m}$ is singular not only in $0$, and that for some $k>0$: ${\cal{H}}^\infty_k( \Sigma_{\R^m \times C^{n-m}})>0$. Then there is some ball $B \subset \R^m \times C^{n-m}$ with $\overline{B} \cap (\R^m \times \{0\}) \v$ so that ${\cal{H}}^\infty_k(B \cap \Sigma_{\R^m \times C^{n-m}})>0$.
\end{lemma}
\textbf{Proof} \, Assume there is no such ball. Then we have, using a suitable countable ball cover, that ${\cal{H}}^\infty_k(\Sigma_{\R^m \times C^{n-m}} \setminus \R^m \times \{0\})=0$. This means ${\cal{H}}^\infty_k(\R^m \times \{0\})>0$ and hence ${\cal{H}}^\infty_k([0,1]^m\times \{0\})>0$. From \ref{hausx} we then get ${\cal{H}}^\infty_k([0,1]^m\times \{y\})>0$ for any singular $y \in \Sigma_C \setminus \{0\}$, although $[0,1]^m\times \{y\} \subset \Sigma_{\R^m \times C^{n-m}} \setminus \R^m \times \{0\}$, a contradiction.\qed
\textbf{Proof of \ref{haus}} \, Assume that ${\cal{H}}_k(\Sigma_H)>0$ for some $k>0$. From \ref{cre} we find a tangent cone, written as $\R^m \times C^{n-m}$ with maximal Euclidean factor, $m \ge 0$, whose singular set still has positive ${\cal{H}}_k$-measure. From \ref{radix} and \ref{cre} there is a point $p \in B \cap \Sigma_{\R^m \times C^{n-m}} \setminus \R^m \times \{0\}$ and a tangent cone $C^*$ in $p$ such that ${\cal{H}}_k(\Sigma_{C^*})>0$. Since $p \notin \R^m \times \{0\}$ we know that $C^*$ can be written as $C^* =\R^{m+1} \times C^{n-m-1}$. We iterate this argument until we reach some cone $C^{\cs} =\R^{m+l} \times C^{n-m-l}$ where $C^{n-m-l}$ is singular only in $0$, that is, $\Sigma_{C^{\cs}}=\R^{m+l} \times \{0\}$. The value $n-m-l$ may depend on the chosen sequence of blow-up points, but we know from the fact that hypersurfaces in ${\cal{H}}_n$ for $n \le 6$ are regular that $n-m-l \ge 7$. Since we have ${\cal{H}}_k(\Sigma_{C^{\cs}})>0$ we get, from \ref{hausx}, that $k \le m+l \le n-7$. \qed
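A standard example, not part of the argument above, shows that this bound is sharp in general: the Simons cone $C_S =\{x \in \R^8 \,|\, x_1^2 + ... + x_4^2 = x_5^2 + ... + x_8^2\}$ is a singular area minimizing hypersurface cone with $n=7$ and $\Sigma_{C_S}=\{0\}$, so that $\dim_{\cal{H}} \Sigma_{C_S} = 0 = n-7$.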
\subsubsection{Metric Measure Spaces $(H,d_{\sima},\mu_{\sima})$} \label{met}
From \ref{haus} we get the following canonical extension of the Riemannian volume measure $\Phi^{2 \cdot n/(n-2)}\cdot \mu_H$ on $H \setminus \Sigma$ to a measure $\mu_{\sima}$ on $(H,d_{\sima})$, where $\mu_H$ is the $n$-dimensional Hausdorff measure on $(H^n,d_H) \subset (M^{n+1},g_M)$. In turn, $\mu_H$ is the extension of the
Riemannian volume on $(H^n \setminus \Sigma,g_H) \subset (M^{n+1},g_M)$ using that also ${\cal{H}}^n(\Sigma)=0$ relative $(H^n,d_H)$.
\begin{definition}\emph{\textbf{(Minimal Factor Measures $(H,d_{\sima},\mu_{\sima})$)}}\label{mms2}\, For any $H \in {\cal{G}}_n$ equipped with a minimal factor metric $\Phi^{4/(n-2)} \cdot g_H$ we define the minimal factor measure $\mu_{\sima}$ on $H$ by
\begin{equation}\label{meas}
\mu_{\sima}(E):=\int_{E \setminus \Sigma_H} \Phi^{2 \cdot n/(n-2)}\cdot d\mu_H, \mm{ for any Borel set } E \subset H
\end{equation}
\end{definition}
\begin{remark}[Regularity of $\mu_{\sima}$]\label{regumu} As a consequence of the estimates, in \ref{vgr} below, we also see that $\mu_{\sima}$ is an \textbf{outer regular measure}, that is, we have for any Borel subset $E \subset H$:
\begin{equation}\label{outme}
\mu_{\sima}(E)= \inf \{\mu_{\sima}(A) \,|\, E \subset A , A \subset H \mm{ open} \}.
\end{equation}
In \ref{vgr} we show that for $H \in {\cal{G}}^c_n$ we have $\mu_{\sima}(H) < \infty$. Thus for any $\ve >0$ there is a neighborhood $U_\ve$ of $\Sigma$ in $H$ so that $\mu_{\sima}(U_\ve \setminus \Sigma) < \ve$. This also holds for non-compact $H \in {\cal{G}}_n$ from $\mu_{\sima}(B_r(q)) < \infty$ using suitable ball covers of $\Sigma$.
From this we see that $\mu_{\sima}$ is a \textbf{Borel measure} on $(H,d_{\sima})$. This and other basics of such extensions of measures are explained in \cite[pp.62-64]{HKST}. \qed
\end{remark}
Now we derive volume estimates for distance balls in $(H,d_{\sima})$ from intersections of these balls with $H \setminus \Sigma$ measured in terms of $dV_{\Phi^{4/(n-2)} \cdot g_H}$ and write
\begin{equation}\label{abbr}
\mu_{\sima}(B_r(q))= Vol(B_r(q),d_{\sima}) = \int_{B_r(q) \cap H \setminus \Sigma} \Phi^{2 \cdot n/(n-2)}\cdot d\mu_H
\end{equation}
We temporarily use the expression $Vol(B_r(q),d_{\sima})$ to notationally simplify considerations where we measure distances and volumes with respect to different metrics. These mixed measurements are used to stepwise employ compactness arguments not directly applicable to $(H,d_{\sima},\mu_{\sima})$.
\begin{proposition}[Volume Growth on $(H,d_{\sima},\mu_{\sima})$]\label{vgr} We consider any non-totally geodesic $H \in \cal{G}$ equipped with some minimal factor metric $d_{\sima}(\Phi)$.
\begin{enumerate}
\item For $H \in {\cal{G}}^c_n$, the total volume relative $d_{\sima}$ is finite: $Vol(H,d_{\sima}) < \infty$.
\item There are constants $a(n),b(n) >0$ depending only on the dimension $n$ so that, in the cases
\begin{enumerate}
\item $H \in {\cal{G}}^c_n$ and $r \in (0,r_H)$, for some suitably small $r_H>0$,
\item $H \in {\cal{H}}^{\R}_n$ and any radius $r>0$, we have the following volume estimates:
\end{enumerate}
\begin{equation}\label{volds}
a \cdot r^n \le Vol(B_r(q),d_{\sima}) \le b \cdot r^n.
\end{equation}
\end{enumerate}
\end{proposition}
\textbf{Proof} \, We first show for $H \in {\cal{G}}^c_n$ that $Vol(H,d_{\sima}) < \infty$. This amounts to check that $|\Phi|_{L^{2 \cdot n/(n-2)}} <\infty$ and we derive this from the minimal growth condition. Note that the optimal estimate for \emph{general} (super)solutions $v>0$ of $L_{H,\lambda}\, \phi= 0$, from the Bombieri-Giusti Harnack inequality,
is $|v|_{L^{p}} <\infty$, $p < n/(n-2)$.\\
Our given $\Phi>0$ solves $L_{H,\lambda}\, \phi= 0$ on some open set $U := (H \setminus \Sigma) \setminus \overline{V}$. Thus we only need to verify that $Vol(U,d_{\sima}) < \infty$ since $\overline{V}$ is compact and regular and, hence, $Vol(\overline{V} ,d_{\sima}) < \infty$. On $U$ we can compare
$\Phi>0$ with the minimal Green's function $G$. We know from the boundary Harnack inequality \ref{mbhsq} that, for some base point $p \in V$ (for use in (\ref{g}) we actually assume $dist_{\bp^*}(p,U) \ge 2 \cdot \sigma$), there is a constant $c\ge 1$ so that
\begin{equation}\label{gphi}
c^{-1} \cdot G(\cdot , p) \le \Phi \le c \cdot G(\cdot , p) \mm{ on } U.
\end{equation}
\li
{\small
From \cite[Prop.3.12, Step 2]{L2} we know that $G(\cdot , p)$ minimizes the, therefore \emph{finite}, variational integral
\begin{equation}\label{diri}
J_{U}(f):= \int_{U} | \nabla_H f |^2 + V_\lambda \cdot f^2 \, d V, \mm{ for } V_\lambda:= \frac{n-2}{4 (n-1)} \cdot scal_H - \lambda \cdot \bp^2,
\end{equation}
running over all $f \in H^{1,2}_{\bp}(H \setminus \Sigma)$ with $f|_{\p U} = G(\cdot , p)$ in the trace sense. From this and the \si-adaptedness of $L_{H,\lambda}$ we have,
for $C_F:=\int_{\overline{V}} |\nabla F|^2(x) + V_\lambda \cdot F(x)^2 \, dV$ with some $C^{2,\alpha}$-extension $F$ of $G(\cdot, p)|_{\p V}$ to $V$:
{\small\begin{equation}\label{v1}
\int_{U} \bp^2(x) \cdot G(x, p)^2 dV \le (\lambda^{\bp}_{L,H}-\lambda)^{-1} \cdot \int_{U} |\nabla G(\cdot, p)|^2(x) + V_\lambda \cdot G(x, p)^2 \, dV + C_F
<\infty\end{equation}}
From (\ref{gre}), (\ref{ges}) and (\ref{smot}) we get some $\alpha^\circ, \beta^\circ >0$ so that for $x \in U \setminus \Sigma$:
{\small \begin{equation}\label{g}
G(x,p) \le \bp^{(n-2)/2}(p) \cdot \bp^{(n-2)/2}(x) \cdot \beta^\circ \cdot \exp(-\alpha^\circ \cdot d_{\bp}(x,p)).
\end{equation}}
With this inequality and (\ref{v1}) we get, for some $c ^\circ>0$
{\small\begin{equation}\label{vq1}
\int_{U} G(x , p)^{4/(n-2) + 2} dV \le c ^\circ \int_{U} \bp^2(x) G(x , p)^2 dV < \infty.
\end{equation}}
For the volume element $dV(g_H)$ of $g_H$ we have $dV(\Phi^{4/(n-2)} \cdot g_H)=\Phi^{2 \cdot n/(n-2)} \cdot dV(g_H)$. From this, writing $2 \cdot n/(n-2)=4/(n-2) + 2$, we have:
\begin{equation}\label{ga}
Vol(U, \Phi^{4/(n-2)} \cdot g_H) =
\int_{U} \Phi^{2 \cdot n/(n-2)} (x) dV \le c^{2 \cdot n/(n-2)} \cdot \int_{U} G(x , p)^{4/(n-2) + 2} dV < \infty
\end{equation}}
\liii
Thus $Vol(H,d_{\sima}) < \infty$ and we can localize the argument to see, and this equally applies to $H \in {\cal{H}}^{\R}_n$, that $Vol(B_r(p),d_{\sima}) <\infty$.\qed
Now we turn to the volume growth estimates (ii). Since we derive the estimate from another augmented compactness argument, the main case is $H \in {\cal{H}}^{\R}_n$. We start with an estimate for the $d_{\sima}(\Phi)$-volume of $d_{\sima}(\Phi)$-unit balls for $p=0 \in \Sigma_H$. We make a \textbf{radial gauge} of $d_{\sima}(\Phi)$, that is, we assume, after possibly replacing $\Phi$ by some multiple $k \cdot \Phi$, $k>0$, that we have
\begin{equation}\label{incl1}
(B_1(0),d_{\sima}) \subset (B_1(0), d_H) \subset (B_{\kappa_0}(0),d_{\sima}).
\end{equation}
With this gauge it is enough to estimate $Vol\big((B_1(0), d_{H}),d_{\sima}(\Phi)\big)$, that is, the unit ball with radii relative $d_H$ but with volume measured relative $d_{\sima}$ for some $\Phi$ satisfying (\ref{incl}). This way we can use \ref{radii} to treat the radius and the volume relative $d_{\sima}$ separately. \\
We also note that there are constants $w_n > v_n>0$ depending only on $n$ so that
\begin{equation}\label{euclvol}
v_n \le Vol\big((B_1(0), d_H),d_H\big) \le w_n \mm{ for any } H \in {\cal{H}}^{\R}_n
\end{equation}
\noindent \textbf{Claim 1.} \emph{The volume of the $d_{\sima}$-unit ball in $H \in {\cal{H}}^{\R}_n$ satisfies
\begin{equation}\label{cc}
a^*_n \le Vol\big((B_1(0), d_{H}),d_{\sima}(\Phi)\big), \mm{ for some } a^*_n>0 \mm{ depending only on } n.
\end{equation} }
\noindent \textbf{Proof of Claim 1.} We assume there were a sequence
\begin{equation}\label{ass}
H_i \in {\cal{H}}^{\R}_n \mm{ with } Vol\big((B_1(0), d_{H_i}),d_{\sima}(\Phi_i)\big) \ra 0, \mm{ for } i \ra \infty.
\end{equation}
We will first consider the case $\Phi_i \equiv a_i \cdot G_i(\cdot,p_i)$, for $p_i \notin (B_1(0), d_{H_i}) \subset H_i$, where $G_i$ is the minimal Green's function on $H_i \setminus \Sigma_{H_i}$ so that via $\D$-maps $p_i \ra p_\infty$ for some regular point $p_\infty \notin (B_1(0), d_{H_\infty}) \subset H_\infty$ and where the $a_i>0$ are chosen so that $\Phi_i$ satisfies the gauge (\ref{incl1}).\\
We know, from \cite[Prop.3.12]{L2}, that the $G_i(\cdot,p_i)$ converge to $G_\infty(\cdot,p_\infty)$, for the minimal Green's function $G_\infty$ on $H_\infty \setminus \Sigma_{H_\infty}$.
From this, (\ref{ass}) implies that $a_i \ra 0$, for $i \ra \infty$. We show that this contradicts the first inclusion of the radial gauge (\ref{incl1}). To this end we recall from \ref{hu} and \ref{doob} (\ref{ges}) that the
constants $\alpha, \beta >0$ and $\sigma>0$ in the estimate $G^{\sima}(x,y)\le \beta \cdot \exp(-\alpha \cdot d_{\bp^*}(x,y))$, valid for $d_{\bp^*}(x,y) \ge 2 \cdot \sigma$, depend only on $n$. Thus we get from (\ref{upest}) a common finite upper bound for the $d_{\sima}$-length of hyperbolic geodesic rays from $p_i$ to $0$. Since $a_i \ra 0$, for $i \ra \infty$, this shows, knowing that $\kappa_0$ is the optimal choice, that $(B_1(0),d_{\sima}) \subsetneq (B_1(0), d_{H_i})$ for $i$ large enough, contradicting the chosen radial gauge.\\
Now we extend this argument to general $\Phi_i$ satisfying (\ref{ass}). We choose $p_i \notin (B_1(0), d_{H_i}) \subset H_i$ so that via $\D$-maps $p_i \ra p_\infty$ for some regular point $p_\infty \notin (B_1(0), d_{H_\infty}) \subset H_\infty$. Then we choose a hyperbolic geodesic ray $\gamma_i \in (H_i,d_{\bp^*_{H_i}})$ from $p_i$ to $0$, so that the $\gamma_i$ compactly converge, via $\D$-maps, in $H_\infty\setminus \Sigma_{H_\infty}$, to some geodesic ray $\gamma_\infty \subset (H_\infty,d_{\bp^*_{H_\infty}})$ from $p_\infty$ to $0$.
This also implies that the neighborhood basis $\mathcal{N}_k(\gamma_i)\subset H_i\setminus \Sigma_{H_i}$, we assign to the $\gamma_i$, compactly Hausdorff converge in $H_\infty\setminus \Sigma_{H_\infty}$ to $\mathcal{N}_k(\gamma_\infty)\subset H_\infty\setminus \Sigma_{H_\infty}$, via $\D$-maps. \\
The standard Harnack inequality shows that $a_i:=\Phi_i(p_i) \ra 0$ for $i \ra \infty$. Now we consider $a_i \cdot G_i(\cdot,p_i)$ and note that, on $\mathcal{N}_k$, we
may apply the boundary Harnack inequality \ref{mbhsq} with \emph{common} Harnack constants, independent of $i$. We get a \emph{common} constant $a$ in (\ref{upest}). Therefore, we also have a common finite upper bound for the $d_{\sima}$-length of hyperbolic geodesic rays from $p_i$ to $0$. Then we argue as before to see that $a_i \ra 0$ contradicts the radial gauge. \qed
\noindent \textbf{Claim 2.} \emph{The volume of the $d_{\sima}$-unit ball in $H \in {\cal{H}}^{\R}_n$ satisfies
\begin{equation}\label{cl2}
Vol\big((B_1(0), d_{H}),d_{\sima}(\Phi)\big) \le b^*_n, \mm{ for some } b^*_n>0 \mm{ depending only on } n.
\end{equation} }
\noindent \textbf{Proof of Claim 2.} This time we assume there were a sequence
\begin{equation}\label{ass2}
H_i \in {\cal{H}}^{\R}_n \mm{ with } Vol\big((B_1(0), d_{H_i}),d_{\sima}(\Phi_i)\big) \ra \infty, \mm{ for } i \ra \infty.
\end{equation}
Again, we first turn to the case $\Phi_i \equiv a_i \cdot G_i(\cdot,p_i)$, for $p_i \notin (B_1(0), d_{H_i}) \subset H_i$, where $G_i$ is the minimal Green's function on $H_i \setminus \Sigma_{H_i}$, so that via $\D$-maps $p_i \ra p_\infty$ for some regular point $p_\infty \notin (B_1(0), d_{H_\infty}) \subset H_\infty$ and where the $a_i>0$ are chosen so that $\Phi_i$ satisfies the gauge (\ref{incl1}). We use again that the $G_i(\cdot,p_i)$ converge to $G_\infty(\cdot,p_\infty)$. This time (\ref{ass2}) implies that $a_i \ra \infty$, for $i \ra \infty$, and we now show that this contradicts the second inclusion of the radial gauge (\ref{incl1}). To this end we note that the
compact $\D$-map convergence $G_i(\cdot,p_i) \ra G_\infty(\cdot,p_\infty)$ shows that the $L^1$-norm of $G_i(\cdot,p_i)$ on $(B_1(0), d_{H_i})$ remains positively lower bounded. The Bombieri-Giusti $L^1$-Harnack inequality (\ref{bgha}) and the argument of \ref{lcmg}(iii) therefore show that $(B_1(0), d_{H_i})\subsetneq (B_{\kappa_0}(0),d_{\sima})$ for $i$ large enough.\qed
\noindent \textbf{Claim 3.} \emph{For constants $a_n^*,b_n^* >0$ depending only on $n$, we have for any $H \in {\cal{H}}^{\R}_n$:
\begin{equation}\label{cl3}
a_n^* \cdot r^n \le Vol(B_r(q),d_{\sima}) \le b_n^* \cdot r^n, \mm{ for any } r>0.
\end{equation}}
\noindent \textbf{Proof of Claim 3.} To determine the volume growth rate of balls in $H$ in terms of $r$ we use that the estimate for unit balls only depends on $n$. For any $H \in {\cal{H}}^{\R}_n$ we also have $r^{-1} \cdot H \in {\cal{H}}^{\R}_n$. That is, we apply the unit ball estimate to $r^{-1} \cdot H \in {\cal{H}}^{\R}_n$ and then rescale $r^{-1} \cdot H$ to $H$. The identity
\begin{equation}\label{voltr}
Vol(B_r(0), \Phi^{4/(n-2)} \cdot g_H) = r^n \cdot Vol(B_1(0), \Phi^{4/(n-2)} \cdot r^{-2} \cdot g_H),
\end{equation}
then shows:\, $a^*_n \cdot r^n \le Vol(B_r(0), \Phi^{4/(n-2)} \cdot g_H) \le b^*_n \cdot r^n.$\qed
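For the reader's convenience, here is the short computation behind (\ref{voltr}), under the convention that the ball in each term is taken relative to the metric named in the second argument. Since distances and volumes of a metric $g$ scale as $d_{r^{-2}\cdot g} = r^{-1}\cdot d_g$ and $dVol_{r^{-2}\cdot g} = r^{-n}\cdot dVol_g$, while the conformal factor $\Phi^{4/(n-2)}$ is untouched, the $\Phi^{4/(n-2)}\cdot g_H$-ball $B_r(0)$ coincides, as a set, with the $\Phi^{4/(n-2)}\cdot r^{-2}\cdot g_H$-ball $B_1(0)$, and
\begin{equation*}
Vol\big(B_r(0), \Phi^{4/(n-2)} \cdot g_H\big) = r^n \cdot Vol\big(B_r(0), \Phi^{4/(n-2)} \cdot r^{-2} \cdot g_H\big) = r^n \cdot Vol\big(B_1(0), \Phi^{4/(n-2)} \cdot r^{-2} \cdot g_H\big).
\end{equation*}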
\noindent \textbf{Claim 4.} \emph{For any $H \in {\cal{G}}^c_n$ and $r \in (0,r_H)$, for a suitably small $r_H>0$, we have for any $q \in \Sigma_H$:
\[a_n \cdot r^n \le Vol(B_r(q),d_{\sima}) \le b_n \cdot r^n,\mm{ for }a_n,b_n >0 \mm{ depending only on } n.\]}
\noindent \textbf{Proof of Claim 4.} We repeat the arguments from Claims 1-3 in two loops: we first consider one fixed $H \in {\cal{G}}^c_n$ and show that there are constants $b_H>a_H>0$ and some small $r_H>0$ so that: $a_H \le Vol(B_1(q), \Phi^{4/(n-2)} \cdot r^{-2} \cdot g_H) \le b_H$, for any $q \in \Sigma_H$ and $r \in (0,r_H)$. Otherwise we would have a converging sequence of points $q_i \in \Sigma_H$ and of radii $r_i \ra 0$, for $i \ra \infty$, so that these volumina either converge to $0$ or to $\infty$. Both cases can be ruled out in the same way as in the case $H \in {\cal{H}}^{\R}_n$ in Claims 1 and 2. In a second loop we assume that $b>a>0$ can be chosen independent of $H \in {\cal{G}}^c_n$ for
suitably small radii $r_H$. Otherwise we would have a compactly converging sequence $r_i^{-1} \cdot H_i \in {\cal{G}}^c_n$ with $r_i \ra 0$, for $i \ra \infty$, and of unit balls $(B_1(q_i), \Phi^{4/(n-2)} \cdot r_i^{-2} \cdot g_{H_i})$ so that the associated volumina either converge to $0$ or to $\infty$. Again both cases can be ruled out in the same way as in Claims 1
and 2. Finally, we use that (\ref{voltr}) in the argument we used for Claim 3 applies to all $r \in (0,r_H)$ to see that there are $a_n,b_n >0$ depending only on $n$ so that
$a_n \cdot r^n \le Vol(B_r(q),d_{\sima}) \le b_n \cdot r^n$, for any $H \in {\cal{G}}^c_n$ and $r \in (0,r_H)$. \qed
\begin{corollary}\emph{\textbf{(Doubling and Volume Decay)}} \label{dv} For any $H \in {\cal{G}}$, there is a $C=C(H)>0$, and $C=C(n)>0$ for $H \in {\cal{H}}^{\R}_n$, so that
for radii and volumina computed relative $(H,d_{\sima},\mu_{\sima})$:
\begin{enumerate}
\item $\mu_{\sima}$ is \textbf{doubling}: for any $q \in H$ and $r>0$:
\begin{equation}\label{dou}
\mu_{\sima}(B_{2 \cdot r}(q)) \le C \cdot \mu_{\sima}(B_{r}(q)).
\end{equation}
\item For balls $B_* \subset B \subset H$ we have a \textbf{relative lower volume decay} of order $n$:
\begin{equation}\label{volgro}
diam(B_*)^n/diam(B)^n \le C \cdot \mu_{\sima}(B_*)/\mu_{\sima}(B).
\end{equation}
\end{enumerate}
\end{corollary}
\textbf{Proof} \, The volume growth estimates (\ref{volds}) readily imply the estimates for $H \in {\cal{H}}^{\R}_n$ and for $H \in {\cal{G}}^c$ and $r \in (0,r_H)$ for some $r_H>0$ small enough. For $H \in {\cal{G}}^c$ the volumina of all balls of radius $\ge r_H$ are uniformly positively upper and lower bounded. Thus, we can raise $C(n)$ to a value $C(H)$ so that (\ref{dou}) and (\ref{volgro}) extend to all balls. \qed
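For the reader's convenience, here is how (\ref{dou}) and (\ref{volgro}) follow from the two-sided bounds $a_n \cdot r^n \le \mu_{\sima}(B_r(q)) \le b_n \cdot r^n$, for radii in the range where \ref{vgr} applies:
\begin{equation*}
\mu_{\sima}(B_{2 \cdot r}(q)) \le b_n \cdot (2 \cdot r)^n = 2^n \cdot \tfrac{b_n}{a_n} \cdot a_n \cdot r^n \le 2^n \cdot \tfrac{b_n}{a_n} \cdot \mu_{\sima}(B_{r}(q)),
\end{equation*}
so $C = 2^n \cdot b_n/a_n$ works in (\ref{dou}). Similarly, for balls $B_* = B_{r_*}(q_*) \subset B=B_r(q) \subsetneq H$ we have $diam(B_*) \le 2 \cdot r_*$ and $diam(B) \ge r$, hence
\begin{equation*}
\frac{diam(B_*)^n}{diam(B)^n} \le \frac{2^n \cdot r_*^n}{r^n} \le 2^n \cdot \frac{b_n}{a_n} \cdot \frac{\mu_{\sima}(B_*)}{\mu_{\sima}(B)}.
\end{equation*}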
The classical BV-approach to define perimeters $P(\Omega)$ of an open set $\Omega \subset \R^n$ and area minimizing boundaries in $\R^n$, as exposed in \cite{Gi}, can be extended to open sets in Riemannian manifolds. Here we further extend this approach to $(H,d_{\sima},\mu_{\sima})$. There are far more general theories for currents in metric spaces \cite{AK}, but the Hausdorff dimension estimate for $\Sigma$ combined with (\ref{volds}) allows us to keep things much simpler, since we are only interested in open subsets of $(H,d_{\sima},\mu_{\sima})$.\\
From \ref{haus} we see that the hypersurface area of a boundary $L$ of an open set in $(H,d_{\sima})$ is determined from that of $L \cap H\setminus \Sigma$. We define a canonical extension of the hypersurface area element we have on $H \setminus \Sigma$ using the measure $\mu_{\sima}$ in the style of the BV-approach to geometric measure theory \cite{Gi}:
\begin{definition}\emph{\textbf{(Hypersurface Area $\mu^{n-1}(\p \Omega)$)}}\label{mmsh}\, Let $H \in {\cal{G}}_n$ be equipped with a minimal factor metric
$\Phi^{4/(n-2)} \cdot g_H$ and let $\Omega \subset H$ be an open subset. We define
\begin{equation}\label{measn}
\mu^{n-1}(\p \Omega):= \lim_{\delta \ra 0} \Big\{P(\Omega \setminus \bigcup_i B_i) \, \Big| \, \Sigma \subset \bigcup_i B_i, \mm{ for balls } B_i \subset H \mm{ of radius } \le \delta\Big\}
\end{equation}
\end{definition}
Note that $\Omega \setminus \bigcup_i B_i \subset H \setminus \Sigma$ and the estimate \ref{haus} shows that the limit exists. To show, for bounded $\Omega$, that $\mu^{n-1}(\p \Omega) < \infty$ and that the measure it defines is regular, as in \ref{regumu}, we note from (\ref{volds}) and the coarea formula that there are constants $0 < a^* < b^*$ depending only on the dimension so that
\begin{equation}\label{volds2}
a^* \cdot r^{n-1} \le Vol(\p B_r(q),d_{\sima}) \le b^* \cdot r^{n-1},
\end{equation}
for \emph{almost any} $r>0$, when $r$ is small enough.
\setcounter{section}{4}
\renewcommand{\thesubsection}{\thesection}
\subsection{Isoperimetry of Minimal Factors} \label{iiaa}
We derive some refined estimates for the integral of conformal changes along hyperbolic geodesics and combine them with the growth estimates of the last chapter to establish the Sobolev inequality for $(H,d_{\sima},\mu_{\sima})$. We use it to get some estimates for the behavior of area minimizers within $(H,d_{\sima},\mu_{\sima})$, in particular, near $\Sigma$.
\subsubsection{Path Integrals and Hyperbolicity} \label{path}
We start with some estimates for the minimal Green's function on the twisted \si-double cones $T_d(p,q) = \bigcup_{z \in \gamma_{p,q} \setminus \{p,q\}} B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})}(z)$ for some hyperbolic geodesic $\gamma_{p,q}$ from some basepoint $p \in H \setminus \Sigma$ to $q \in \Sigma$ in the Gromov compactification $\overline{X}_G$ of $X=(H \setminus \Sigma,\bp^2 \cdot g_H)$.
For the midpoint $m \in \gamma_{p,q}$ dividing $\gamma_{p,q}$ into two parts $\gamma_{p,m}$ and $\gamma_{m,q}$ of the same length we also define
\begin{equation}\label{tubepm}
T^p_d(p,q) := \bigcup_{z \in \gamma_{p,m} \setminus \{p\}} B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})}(z) \mm{ and } T^q_d(p,q) := \bigcup_{z \in \gamma_{m,q} \setminus \{q\}} B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})}(z)
\end{equation}
From the argument in Step 2 of \ref{sem0}, and \cite[Appendix B]{L1} we first notice that, for elliptic estimates it is enough to focus on the hyperbolic geodesic $\gamma_{p,q}$ passing through the center of each of the balls $B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})}(z)$.
\begin{lemma}[Harnack Inequalities on $T_d(p,q)$]\label{hat} There are constants $C(d,H)>0$, $C(d,n)>0$ for $H \in {\cal{H}}^{\R}_n$, so that for any positive solution $u>0$ of $L_{H,\lambda} \, \phi=0$ on $T_1(p,q) \subset H \setminus \Sigma$ we have the following Harnack inequalities for any $z \in \gamma_{p,q} \setminus \{p,q\}$:
\begin{equation}\label{har}
\textstyle \sup_{B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})}(z)} u \le C \cdot \inf_{B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})}(z)} u
\end{equation}
\end{lemma}
\textbf{Proof} \, The equation $L_{H,\lambda} \, \phi=0$ is scaling invariant and thus we can scale any of these balls $B_{l_{min}(\gamma_{p,q}(z))/(d \cdot c \cdot L_{\bp})}(z)$ to unit size, where the underlying geometry and the transformed coefficients of $L_{H,\lambda}$ are uniformly bounded. From the common Harnack inequality on any of the scaled balls we get (\ref{har}), since this relation is invariant under scalings.\qed
\begin{lemma}\label{nochma} For any $d > 1$ there is a constant $k(H,\Phi)>0$, and $k(n)>0$ for $H \in {\cal{H}}^{\R}_n$, and some radius $\rho(H,d, p)>0$ so that for any $q \in H$
\begin{equation}\label{ib}
l_{\sima}(\gamma_p[z]) = \int_{\gamma_p[z]}\Phi^{2/(n-2)} ds \le k \cdot l_{g_H}(\gamma_p[z]) \cdot \Phi^{2/(n-2)}(z), \mm{ for any } z \in T^p_d(p,q).
\end{equation}
\end{lemma}
\textbf{Proof} \, From the Harnack inequality on $T_d(p,q)$ (\ref{har}) it is enough to prove the result for the core curve $\gamma_{p,q}$.
First assume that the hyperbolic length $l_{hyp}$ of $\gamma_p[z]$, measured with respect to $d_{\bp^*}$, is bounded above by $10^3 \cdot \delta$. Then the ordinary Harnack inequality, applied to
each of the at most $10^3 \cdot \delta/(\sigma/2)$ balls $B_{\sigma}(q_i)$, $q_i \in \gamma_p[z]$, placed along $\gamma_p[z]$ so that $d_{\bp^*}(q_i,q_{i+1}) =\sigma/2$, gives a uniform
upper bound along $\gamma_p[z]$ and thus we get (\ref{ib}).\\
For longer curves we combine this with the boundary Harnack inequality \ref{mbhsq} for hyperbolic halfplanes $\mathcal{N}_i[\gamma_{p,q}]$ to get an upper estimate from $G(\cdot,z)$
using (\ref{upest}). Note that, from the \si-uniformity of $\gamma_{p,q}$ we get an upper bound on $\bp(z)$. In the case $H \in {\cal{H}}^{\R}_n$ we use the compactness of ${\cal{H}}^{\R}_n$ and
the estimate (\ref{div}) to see that the constant only depends on $n$. \qed
\begin{lemma}\label{nochma2} There is a constant $C^*(H)>0$, and $C^*(n)>0$ for $H \in {\cal{H}}^{\R}_n$, so that for any hyperbolic geodesic $\gamma$ and any $s,t \in I_\gamma$
\begin{equation}\label{ibb}
l_{\sima}(\gamma|[s,t]) < C^* \cdot d_{\sima}(\gamma(s),\gamma(t))
\end{equation}
\end{lemma}
\textbf{Proof} \, Again, applying \ref{hat}, it suffices to consider the core curve. After scaling we may assume that $l_H(\gamma|[s,t])=1$. We recall that hyperbolic geodesic arcs are \si-uniform: for some $c \ge 1$ and any $z \in \widetilde{\gamma}_{x,y}$
\begin{equation}\label{sun}
l(\widetilde{\gamma}_{x,y}) \le c \cdot d(x,y) \mm{ and } l_{min}(\widetilde{\gamma}_{x,y}(z)) \le c \cdot \delta_{\bp}(z).
\end{equation}
The constant $c$ and the asserted inequality (\ref{ibb}) are scaling invariant. Thus, for $l_H(\gamma|[s,t])=1$ and $z$ the midpoint of $\gamma|[s,t]$, we have $\bp(z) \le a$, for some
$a(c)>0$ depending only on $c$. If we have common upper bounds for $\bp(\gamma(s))$ and $\bp(\gamma(t))$, then we have a lower bound for $d_{\sima}(\gamma(s),\gamma(t))$. Thus we may assume that both endpoints are arbitrarily close to $\Sigma$, but then, as we have already seen in the proofs of \ref{umia} and of \ref{radii}, the Bombieri-Giusti Harnack inequality yields a contradiction.
\qed
\subsubsection{Poincar\'{e} and Sobolev Inequalities} \label{sobo}
Here we show that the canonical Semmes families of curves on $(H,d_H,\mu_H)$ we have defined in \ref{sem0} are still Semmes families in $(H,d_{\sima},\mu_{\sima})$. This and the volume relations \ref{dv} imply Poincar\'{e}, Sobolev and isoperimetric inequalities for $(H,d_{\sima},\mu_{\sima})$ from standard arguments in the theory of metric measure spaces \cite{Se}, \cite{He} and \cite{HKST}.
\begin{proposition} [Semmes Families on $(H,d_{\sima},\mu_{\sima})$]\label{semin} For a minimal factor metric on any $H \in {\cal{G}}_n$ there is a $C_{\sima}=C_{\sima}(H)>0$ and $C_{\sima}=C_{\sima}(n)>0$ for $H \in {\cal{H}}^{\R}_n$,
and for any two $p,q \in H$ the canonical Semmes family $\Gamma_{p,q}$ of \ref{sem0} satisfies the following two axioms relative to $(H,d_{\sima},\mu_{\sima})$:
\begin{enumerate}
\item For any $\gamma \in \Gamma_{p,q}$: $l_{\sima}(\gamma|[s,t]) < C_{\sima}\cdot d_{\sima}(\gamma(s),\gamma(t))$, for $s,t \in I_\gamma$.
\item Each family $\Gamma_{p,q}$ carries a probability measure $\sigma_{{\sima}}=\sigma_{{\sima},p,q}$ so that for any Borel set $A \subset X$, the assignment $\gamma \mapsto l_{\sima}(\gamma \cap A)$ is $\sigma_{\sima}$-measurable with
{\small \begin{equation}\label{tcu}
\int_{\Gamma_{p,q}} l_{\sima}(\gamma \cap A) \, d \sigma_{\sima}(\gamma) \le C_{\sima} \cdot \int_{A_{C_{\sima},p,q}} \left(\frac{d_{\sima}(p,z)}{\mu_{\sima}(B_{d_{\sima}(p,z)}(p))} + \frac{d_{\sima}(q,z)}{\mu_{\sima}(B_{d_{\sima}(q,z)}(q))}\right) d \mu_{\sima}(z)
\end{equation}}
for $A_{C_{\sima},p,q}:=(B_{C_{\sima} \cdot d_{\sima}(p,q)}(p) \cup B_{C_{\sima} \cdot d_{\sima}(p,q)}(q))\cap A$.\\
\end{enumerate}
\end{proposition}
\textbf{Proof of \ref{semin}} \, We start with assertion (i). After scaling we may assume $d_{\sima}(\gamma(s),\gamma(t))=1$. The curve $\gamma$ is c-\si-uniform,
that is, we have
\begin{equation}\label{csuni}
A. \,\, l(\widetilde{\gamma}_{x,y}) \le c \cdot d(x,y) \mm{ and } B. \,\, l_{min}(\widetilde{\gamma}_{x,y}(z)) \le c \cdot \delta_{\bp}(z) \mm{ for any } z \in \widetilde{\gamma}_{x,y}.
\end{equation}
From B. we see that for any point $z \in \gamma_{p,q} \setminus \{p,q\}$ we have $\bp(z) \le c \cdot l_{min}(\gamma_{p,q}(z))=: l(z)$. We scale $H$ by $l(z)^{-1}$ and observe that, due to A., on the resulting ball $B_{c^{-1}/2}(z)$: $|A|(y)\le \bp(y) \le L_{\bp} \cdot c$ and that $L_{H,\lambda}$ transforms to an operator with uniformly bounded coefficients on $B_{c^{-1}/2}(z)$. Then we get uniform $C^{2,\alpha}$-bounds on all these balls and this means for any two points $\gamma(s),\gamma(t) \in B_{c^{-1}/4}(z) \cap \gamma$ the quasi-geodesic condition for $g_H$
carries over to $l_{\sima}(\gamma|[s,t]) < C^* \cdot d_{\sima}(\gamma(s),\gamma(t))$ for some uniform $C^*$. From this and from (\ref{upest}), which shows $\int_{\gamma_q[z]}G^{2/(n-2)}(\cdot,p) < \infty$, we see that, if there were no common upper bound $C^*$ for any two curve points, then this would be the case for
the endpoints of a hyperbolic geodesic ray. That is, after scaling to $l_{\sima}(\gamma_i) =1$, we may assume that $d_{\sima}(z_i,q_i) \ra 0$, for $i \ra \infty$.
However, since the transformed operators $L_{H,\lambda}$ on the balls $B_{c^{-1}/2}(z)$ share common bounds on their coefficients, the Harnack constant is also uniformly bounded on $B_{c^{-1}/2}(z)$. In turn, this shows that $dist_{\sima}(\p B_{c^{-1}/2}(z), z) > c>0$, contradicting $d_{\sima}(z_i,q_i) \ra 0$. Choosing the canonical parameterization, this family satisfies condition (i). \\
For the inequality (ii) we set, for any Borel set $A \subset H$:
\[\mu_1(A):= \int_{\Gamma_{p,q}} l(\gamma \cap A) \, d \sigma(\gamma)\, \mm{ and } \, \mu_2(A):= C \cdot \int_{A_{C,p,q}} \left(\frac{d(p,z)}{\mu(B_{d(p,z)}(p))} + \frac{d(q,z)}{\mu(B_{d(q,z)}(q))}\right) d \mu(z).\]
We recall that the closure of the twisted double \si-cones satisfies $\overline{T_d(p,q)} \cap \Sigma \subset \{p,q\}$. Thus we only need to consider $A \subset H \setminus \Sigma$. To check the relation $\mu_1(A) \le \mu_2(A)$, for some $C>0$ independent of $A$, between these two \emph{measures} on $H$ we use their $\sigma$-additivity and reduce the argument:
\begin{itemize}[leftmargin=*]
\item We only need to show the relation $\mu_1(B) \le \mu_2(B)$ for small balls $B =B_\ve(x) \subset H \setminus \Sigma$ centered in an arbitrarily chosen point $x \in H \setminus \Sigma$ and for some $C>0$ independent of $B$ and of $x$.
\item We may assume $x \in T_d(p,q)$, otherwise we can choose $B$ so small that $B \cap T_d(p,q) \v$ and thus $\mu_1(B) =0$. Then (ii) holds trivially.
\item We may assume that $B$ is small enough that
\begin{equation}\label{ggg}
1/2 \cdot G^{2/(n-2)} (x) \le G^{2/(n-2)} (y) \le 2 \cdot G^{2/(n-2)} (x),\mm{ for } y \in B.
\end{equation}
\end{itemize}
Then we have
{\small \begin{equation}
1/2 \cdot \int_{\Gamma_{p,q}} l_{\sima}(\gamma_q[z] \cap B) \, d \sigma(\gamma) \le \int_{\Gamma_{p,q}} G^{2/(n-2)} (x) \cdot l_{g_H}(\gamma_q[z] \cap B) \, d \sigma(\gamma) \le ...
\end{equation}}
Since $\Gamma_{p,q}$ is a Semmes family relative $(H,d_H,\mu_H)$ we have from Prop.\ref{sem0}:
\begin{equation}
...\le G^{2/(n-2)} (x) \cdot \int_{\Gamma_{p,q}} l_{g_H}(\gamma_q[z] \cap B) \, d \sigma(\gamma)\le G^{2/(n-2)} (x) \cdot C \cdot \int_{B_{C,p,q}} \frac{d(q,z)}{\mu(B_{d(q,z)}(q))} d \mu(z) \le ...
\end{equation}
using (\ref{ggg}) again and $d(\gamma(s),\gamma(t)) \le l(\gamma|[s,t]) < C \cdot d(\gamma(s),\gamma(t))$, for any $\gamma \in \Gamma_{p,q}$, $s,t \in I_\gamma$ and also that $a \cdot r^n \le Vol(B_r(q),d_H) \le b \cdot r^n.$ We get
{\small \begin{equation}
... \le 2 \cdot C \cdot \int_{B_{C,p,q}} G^{2/(n-2)}(z) \cdot \frac{d(q,z)}{\mu(B_{d(q,z)}(q))} d \mu_{g_H}(z) \le C \cdot C^+ \cdot \int_{B_{C,p,q}} G^{2/(n-2)}(z) \cdot \frac{l(\gamma|[s,t]) }{l(\gamma|[s,t])^n} d \mu_{g_H}(z) \le ...
\end{equation}}
Now we use \ref{nochma}: $ l_{g_H}(\gamma_q[z]) \le k \cdot G^{2/(n-2)}(z,p)\big/\int_{\gamma_q[z]}G^{2/(n-2)}(\cdot,p), \mm{ for any } z \in (H \setminus \Sigma) \cap B_\rho(q)$.
{\small \begin{equation}\label{d2g}
... \le 2 \cdot C \cdot k^n \cdot \int_{B_{C,p,q}} G^{2/(n-2)}(z) \cdot \bigg(G^{2/(n-2)}(z) \bigg/\int_{\gamma_q[z]}G^{2/(n-2)}\bigg)^{n-1} d \mu_{g_H}(z) = ...
\end{equation}}
Finally, from \ref{nochma2} we have $d_{\sima}(\gamma(s),\gamma(t)) \le l_{\sima}(\gamma|[s,t]) < C^* \cdot d_{\sima}(\gamma(s),\gamma(t))$, for any $\gamma \in \Gamma_{p,q}$, $s,t \in I_\gamma$, and from \ref{vgr} (\ref{volds}): $a \cdot r^n \le Vol(B_r(q),d_{\sima}) \le b \cdot r^n$, to get
{\small \begin{equation}
... = 4 \cdot C \cdot k^n \cdot \int_{B_{C,p,q}} \frac{\int_{\gamma[z]}G^{2/(n-2)}}{(\int_{\gamma_q[z]}G^{2/(n-2)})^n} \cdot G^{2 \cdot n/(n-2)}(z) \cdot d \mu_{g_H}(z) \le C^* \cdot \int_{B_{C^*,p,q}} \frac{d_{\sima}(q,z)}{\mu_{\sima}(B_{d_{\sima}(q,z)}(q))}d \mu_{\sima}(z).
\end{equation}}
\qed
\begin{corollary} \emph{\textbf{(Poincar\'{e} inequality)}}\label{poii} For any $H \in {\cal{G}}$, there is a constant $C_0=C_0(H,\Phi) >0$, so that whenever $B \subset H$ is an open ball and $u:B \ra \R$ is an $L^1$-function that is $C^1$ on $B \setminus \Sigma$, we have, setting $|\nabla u| \equiv 0$ on $\Sigma$:
\begin{equation}\label{poinm}
\fint_B |u-u_B| \, d \mu_{\sima} \le C_0 \cdot \fint_B |\nabla u| \, d \mu_{\sima}, \mm{ where } u_B:=\fint_B u \, d \mu_{\sima} := \int_B u \, d \mu_{\sima}\big/\mu_{\sima}(B)
\end{equation}
\end{corollary}
The volume decay property \emph{of order n} in Th.\ref{dvintro}(ii) allows us to improve this Poincar\'{e} inequality to the following Sobolev inequality.
\begin{corollary} \emph{\textbf{(Sobolev Inequality)}}\label{sobb} For any $H \in {\cal{G}}$, there is some $C_1=C_1(H,\Phi)>0$, so that whenever $B \subset H$ is an open ball and $u:B \ra \R$ is an $L^1$-function that is $C^1$ on $B \setminus \Sigma$, we have
\begin{equation}\label{ii2}
\Big(\fint_B |u-u_B|^{n/(n-1)} \, d \mu_{\sima}\Big)^{(n-1)/n} \le C_1 \cdot \fint_B |\nabla u| \, d \mu_{\sima}
\end{equation}
\end{corollary}
From the low Hausdorff dimension of $\Sigma$ in \ref{dvintro} (v) this gives us the desired isoperimetric inequalities for $(H,d_{\sima},\mu_{\sima})$.
\begin{corollary} \emph{\textbf{(Isoperimetric Inequality)}}\label{iip} For any $H \in {\cal{G}}$, there are constants $\gamma=\gamma(H)>0$ and $\gamma^*=\gamma^*(H)>0$, both depending only on $n$ when $H \in {\cal{H}}^{\R}_n$, so that for any open set $U \subset H$ with compact closure and rectifiable boundary $\p U$, and any ball $B_\rho \subset H$:
\begin{equation}\label{iifin}
\mu_{\sima}(U)^{(n-1)/n} \le \gamma \cdot \mu^{n-1}_{\sima}(\p U),
\end{equation}
\begin{equation}\label{ii2in}
\min \{ \mu_{\sima}(B_{\rho} \cap U), \mu_{\sima} (B_{\rho} \setminus U)\}^{(n-1)/n} \le \gamma^* \cdot \mu^{n-1}_{\sima}(B_{\rho} \cap \p U),
\end{equation}
\end{corollary}
\textbf{Proof of \ref{poii} - \ref{iip}} \, \ref{dv} and \ref{semin} imply the underlying Poincar\'{e} inequality \ref{poii}, cf.\cite[14.2,p.396]{HKST}, and the refinement to the Sobolev inequality \ref{sobb} and the isoperimetric inequalities \ref{iip} from \cite[Theorem 9.1.15(i)]{HKST}, for $p=1$ and $Q=n$, cf. also \cite{Se}. The original results are for balls but for $H \in {\cal{G}}^c_n$ we may either use one large ball or finite ball covers and for Euclidean boundaries we use the scaling invariance of the setting to extend the estimates to the more general case. \qed
In the inductive use of scalar curvature splittings we sooner or later encounter the situation where we need to understand the behavior of area minimizing hypersurfaces \emph{within} $H \in {\cal{G}}$ equipped with a minimal factor geometry and here the following result becomes a basic tool.
\begin{corollary}[Volume Growth of Area Minimizers in $(H,d_{\sima},\mu_{\sima})$] \label{vol} For $H \in {\cal{G}}$ and $\lambda < \lambda_H$, there
is a $\rho_H>0$ so that for any $r \in (0,\rho_H)$ and any area minimizing boundary $L^{n-1}$ in $(H,d_{\sima},\mu_{\sima})$:
\begin{equation}\label{est}
\kappa^-_n\cdot r^n \le \mu_{\sima}(L^+ \cap B_r(p)) \le \kappa^+_n\cdot r^n
\end{equation}
where $\kappa^-_n, \kappa^+_n >0$ denote constants depending only on the dimension.
\end{corollary}
\textbf{Proof} \, Since $L$ is area minimizing we see that
\begin{equation}\label{i1}
\mu^{n-1}_{\sima}(B_r(p)\cap L ) \le \mu^{n-1}_{\sima}(L^+ \cap \p B_r(p))
\end{equation}
Moreover, we have, for almost every $r$:
\begin{equation}\label{i2}
\mu^{n-1}_{\sima}(\p (B_r(p)\cap L^+)) = \mu^{n-1}_{\sima}(B_r(p)\cap L ) + \mu^{n-1}_{\sima}(L^+ \cap \p B_r(p))
\end{equation}
Thus we have
\begin{equation}\label{i3}
\mu^{n-1}_{\sima}(\p (B_r(p)\cap L^+)) \le 2 \cdot \mu^{n-1}_{\sima}(L^+ \cap \p B_r(p)) = 2 \cdot \p \left( \mu_{\sima}(L^+ \cap B_r(p))\right) /\p r
\end{equation}
Summarizing we get from the isoperimetric inequality:
\begin{equation}\label{i4}
\mu_{\sima}(L^+ \cap B_r(p))^{(n-1)/n} \le 2 \cdot \gamma \cdot \p \left( \mu_{\sima}(L^+ \cap B_r(p))\right) /\p r.
\end{equation}
Integration gives $\mu_{\sima}(L^+ \cap B_r(p)) \ge \kappa^-_n \cdot r^n$ for some $\kappa^-_n >0$.
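The integration step can be made explicit. Writing $V(r)$ for the volume term on the left hand side of (\ref{i4}) and noting $V(r)>0$ for $r>0$ (for $p \in L$), the differential inequality gives, for almost every small $r$,
\begin{equation*}
\frac{d}{d r}\, V(r)^{1/n} = \frac{V'(r)}{n \cdot V(r)^{(n-1)/n}} \ge \frac{1}{2 \cdot n \cdot \gamma}, \mm{ and hence } V(r)^{1/n} \ge \frac{r}{2 \cdot n \cdot \gamma},
\end{equation*}
that is, we may choose $\kappa^-_n = (2 \cdot n \cdot \gamma)^{-n}$, where $\gamma=\gamma(n)$ for $H \in {\cal{H}}^{\R}_n$.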
For the upper bound we simply use that $L$ also bounds $L^-$. Hence, we get the same \emph{lower estimates} also for $\mu^{n-1}_{\sima}(\p (B_r(p)\cap L^-))$.
Now recall that there are constants $a(n),b(n) >0$ depending only on the dimension $n$ so that in the two cases
\begin{enumerate}
\item $H \in {\cal{G}}^c_n$ and $r \in (0,r_H)$, for some suitably small $r_H>0$,
\item $H \in {\cal{H}}^{\R}_n$ and any radius $r>0$,
\end{enumerate}
we have the following volume estimates:
\begin{equation}
a \cdot r^n \le \mu_{\sima}(B_r(q)) \le b \cdot r^n.
\end{equation}
Thus we see that $ \mu_{\sima}(L^+ \cap B_r(p)) \le (b- \kappa^-_n)\cdot r^n$.
\qed
The volume growth estimate yields a shape control over area minimizing hypersurfaces in $(H,d_{\sima},\mu_{\sima})$ that is similar to that in the Euclidean case. As an application one can show that there are no horn-shaped pieces of area minimizers in $(H,d_{\sima},\mu_{\sima})$ entering narrow hollow cylinders. In particular, such an area minimizer cannot \emph{stretch out} along $\Sigma$. We use this in subsequent papers to control the effect of deformations on such area minimizers.
\footnotesize
\renewcommand{\refname}{\fontsize{14}{0}\selectfont \textbf{References}}
\section{Introduction}
Electric magnetic duality was originally observed in the Maxwell equations, which describe one of the fundamental forces in nature. In the absence of electric and magnetic sources, the Maxwell equations are invariant under the exchange $\vec E\rightarrow \vec B$ and $\vec B \rightarrow -\vec E$, where $\vec E$ is the electric field and $\vec B$ is the magnetic field\cite{Deser1}. The duality extends to string theory and to various field theories of free massless fields with various spins, in some cases in curved spacetime, e.g. the Maxwell system in de Sitter spacetime, and to approximate non-Abelian dualities \cite{Henneaux:2004jw,Deser:2004xt,Deser:2013xb,Deser:2005sz,Jatkar:2012mm}.
One of the interesting directions of developing electric magnetic duality is to investigate whether it ensures that certain classical symmetries of a system are retained when the system is quantized (e.g. see \cite{Bunster:2012hm}). In \cite{Bunster:2012hm}, the authors argue that electric magnetic duality can ensure that a classical vector field theory enjoying Lorentz symmetry remains Lorentz invariant when it is quantized.
The pioneering argument goes back to a 1962 paper by Dirac\cite{Dirac1}, where he discussed this issue as follows. It is not manifest that a quantum field theory keeps its classical symmetry (the symmetry of the classical Lagrangian and equations of motion), because of, e.g., the ordering issue of the field variables arising from the second quantization rule imposed on them. A state in quantum field theory can be changed to another representation by a unitary transformation, and its dynamics is described by unitary time evolution; both are generated by acting with the symmetry generators (spatial translation, rotation and boost, temporal translation) on that state. Hence, if the second quantization rule is consistent with the algebra of the symmetry generators, the symmetry is retained in the quantum field theory.
More precisely, he introduces a canonical pair of quantum fields $\xi$ and $\eta$ satisfying
\begin{equation}
[\xi,\eta^\prime]=\delta,
\end{equation}
where the prime denotes that the field variable depends on the primed coordinate, i.e. $\eta^\prime=\eta(x^\prime)$, and $\delta=\delta^d(x-x^\prime)$ is the $d$-dimensional $\delta$-function, so this is an equal-time commutator
\footnote{Even though we develop every mathematical equation in terms of $d$, in what follows we restrict ourselves to the case $d=3$ only.}
. Here $\xi$ may be taken to be a field variable of the theory and $\eta$ its canonical conjugate. From them, he constructs a momentum density $K_s$ and introduces an energy density $U$, where the index $s$ is a spatial index\footnote{We will use $s,t,r,u$ for spatial indices running from 1 to 3.}, which provide the representation of the symmetry generators. It turns out that the symmetry generators constructed from $K_s$ and $U$ satisfy the Poincare algebra if the energy density satisfies the following commutation relation:
\begin{equation}
\label{intri-uu}
[U,U^\prime]=K_{t,t}\delta+2K_t\delta_{,t},
\end{equation}
where $A_{,s}\equiv \frac{\partial A}{\partial x^s}$.
By using this observation, the authors in \cite{Bunster:2012hm} discovered the following: suppose a vector field theory in 4-$d$ flat spacetime which enjoys electric magnetic duality and Lorentz symmetry is quantized in a way that makes electric magnetic duality manifest; more precisely, its second quantization rule is required to be
\begin{equation}
\label{BB-commm}
[\mathcal B^a_s,\mathcal B^b_t]=\epsilon^{ab}\epsilon_{stu}\delta_{,u},
\end{equation}
where $a,b=1,2$ are $SO(2)$ indices related to the electric magnetic duality rotation, $\epsilon$ is the fully anti-symmetric tensor, $\vec{\mathcal B}^1=\vec E$ and $\vec{\mathcal B}^2=\vec B$. One can define the momentum density and the energy density from the fields $\mathcal B^a_s$ as
\begin{equation}
\label{em-density-intro}
K_r=-\frac{1}{2}\mathcal B^{as}\mathcal B^{bt}\epsilon^{ab}\epsilon_{str} {\rm \ \ and \ \ } U=f(h,v),
\end{equation}
where
\begin{equation}
h=\frac{1}{2}\mathcal B^{as}\mathcal B^{bt}\delta^{ab}\delta_{st}, {\ \ }v=K_rK^r
\end{equation}
and $f(h,v)$ satisfies the following condition
\begin{equation}
(f_{,h})^2+4f_{,h}f_{,v}+4(f_{,v})^2=k,
\end{equation}
for some constant $k$. The momentum density generates the Lie derivative along a spatial vector field $v^s$, as $\mathcal L_v \Phi(\mathcal B)=[\Phi,\int d^dx\, v^s K_s]$ for a field $\Phi$. It turns out that such an energy density satisfies the commutation relation that Dirac suggested in his paper. Therefore, the vector field theory is manifestly Lorentz invariant when it is quantized.
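Two immediate consistency checks follow from these formulas. Specializing (\ref{BB-commm}) to $a=b$ gives $[\mathcal B^a_s,\mathcal B^a_t]=0$ (no sum), since $\epsilon^{aa}=0$, while $a=1$, $b=2$ gives the equal-time commutator $[E_s,B'_t]=\epsilon_{stu}\delta_{,u}$ of the free Maxwell field. Moreover, $h$ and $v$ are manifestly invariant under the duality rotation $\mathcal B^a_s \rightarrow R^{ab}\mathcal B^b_s$, $R \in SO(2)$, since $\delta^{ab}$ and $\epsilon^{ab}$ are $SO(2)$-invariant tensors, and the simplest solution $f(h,v)=h$ of the quadratic condition on $f$, which is the free Maxwell energy density $U=\frac{1}{2}(|\vec E|^2+|\vec B|^2)$ with $K_r=-\epsilon_{rst}E^sB^t$ (the Poynting vector, up to the sign convention for $\epsilon$), corresponds to $k=1$:
\begin{equation}
f=h:\quad f_{,h}=1,\ f_{,v}=0 \quad \Longrightarrow \quad (f_{,h})^2+4f_{,h}f_{,v}+4(f_{,v})^2=1.
\end{equation}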
In this paper, we extend this discussion to conformal symmetry. Our motivation is that the $U(1)$ vector field theory in 4-$d$ flat spacetime, whose Lagrangian density is comprised of its kinetic term only, is conformally invariant, since the trace of its stress energy tensor vanishes. Thus, one may ask {\it whether the quantum version of such a classical field theory is still conformally invariant when its second quantization rule manifestly enjoys the electric magnetic duality transform}.
In fact, we show that the theory is still conformal by examining the conformal algebra in a manner similar to Dirac's. In section \ref{Conditions for a 4}, we develop the conditions that the momentum and energy densities must satisfy for this. It turns out that the energy density still satisfies (\ref{intri-uu}), and therefore the momentum and energy densities that Dirac suggested also satisfy the conformal algebra under one condition: {\bf the conformal dimension of the energy density is $d+1$}. The simplest example of such a case is $U=h$.
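This simplest example can be checked directly against the condition stated above: with $f=h$ one has $f_{,h}=1$ and $f_{,v}=0$, so that
\begin{equation}
(f_{,h})^2+4f_{,h}f_{,v}+4(f_{,v})^2=1,
\end{equation}
i.e.\ the condition is satisfied with $k=1$.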
In section \ref{section2}, we conclude that since the specific class of energy densities (\ref{em-density-intro}) with conformal dimension $d+1$ obtained in \cite{Bunster:2012hm} satisfies the same commutation relation (\ref{intri-uu}), conformal symmetry is retained in the quantum theory of the $U(1)$ vector field which is manifestly invariant under the electric-magnetic duality rotation.
The final issue to discuss is the central charge. There may be a conformal anomaly, which shows up in the OPEs of the stress-energy tensors; in that case the transformation rules of the field variables and of the momentum and energy densities are affected by the anomaly. However, as long as we restrict ourselves to global conformal symmetry, the central charge cannot affect the transformation rules. For example, in 2-$d$ CFT the central-charge contribution to the transformation of the stress-energy tensor is given by derivatives of the transformation parameters.
\section{Conditions for a 4-$d$ quantum field theory to be conformal}
\label{Conditions for a 4}
In this section, we extend Dirac's argument on the conditions for a quantum field theory to retain Poincar\'e symmetry to the case of conformal symmetry.
\paragraph{Conformal algebra}
Conformal algebra in $d+1$-dimensional space time is given by
\begin{eqnarray}
\label{conformal algebra}
[D,P_\mu]&=&-P_\mu, {\ \ } [D, \kappa_\mu]=\kappa_\mu, {\ \ }[\kappa_\mu,P_\nu]=-2(g_{\mu\nu}D+L_{\mu\nu}), \\ \nonumber
[\kappa_\rho,L_{\mu\nu}]&=&(g_{\rho\mu}\kappa_\nu-g_{\rho\nu}\kappa_\mu), {\ \ }[P_\rho,L_{\mu\nu}]=g_{\rho\mu}P_\nu-g_{\rho\nu}P_\mu \\ \nonumber
[L_{\mu\nu},L_{\rho\sigma}]&=&g_{\nu\rho}L_{\mu\sigma}+g_{\mu\sigma}L_{\nu\rho}-g_{\mu\rho}
L_{\nu\sigma}-g_{\nu\sigma}L_{\mu\rho}, {\rm\ \ and \ the\ others\ vanish,}
\end{eqnarray}
where $D$ is the dilatation, $\kappa_\mu$ the special conformal, $P_\mu$ the translation and $L_{\mu\nu}$ the rotation and boost generators
\footnote{The generators are given by
\begin{equation}
D=x^\mu P_\mu, {\ \ }L_{\mu\nu}=x_\mu P_\nu-x_\nu P_\mu, {\ \ }\kappa_\mu=2x_\mu x^\nu P_\nu-x^\nu x_\nu P_\mu,
\end{equation}
in terms of translation generator, $P_\mu$.
}. $g_{\mu\nu}$ is $d+1$-dimensional flat spacetime metric, whose signature is chosen as $g_{\mu\nu}={\rm diag}(+,-,-,...,-)$.
The symmetry generators are sorted into two classes. The first class is the set of generators that transform the quantum fields in spatial directions, and the second class consists of those that transform them in the temporal direction. The former provide unitary transformations of the fields on a given spacelike hypersurface, while the latter generate the dynamics of the fields.
\paragraph{Momentum density}
We first examine the generators that transform the fields in spatial directions.
For this, we decompose these generators into spatial and temporal parts as
\begin{eqnarray}
P_\mu\rightarrow P_s, P_0, {\ \ \ \ }L_{\mu\nu}\rightarrow L_{st}, L_{0t}, \\ \nonumber
\kappa_\mu\rightarrow \kappa_s,\kappa_0{\rm\ \ and\ \ }D\rightarrow D^{(s)}+D^{(t)},
\end{eqnarray}
where we have defined the spatial parts of the symmetry generators in terms of a momentum density, $K_s$ as
\begin{eqnarray}
P_t&=&\int K_t d^dx, {\ \ \ } L_{rs}= \int (x_r K_s-x_sK_r)d^dx\\ \nonumber
D^{(s)}&=&-\int x_sK_s d^dx, {\ \ \ }\kappa_t=\int({-2x_t x_r K_r+ x_r x_r K_t })d^dx
\end{eqnarray}
To specify field variables, $V_t^{(1)}$, $V_t^{(2)}$ and the momentum density in our vector theory, we introduce variables $\xi_s$ and $\eta_s$ as
\begin{equation}
V^{(1)}_{t}=\eta_t, {\ }V^{(2)}_{t}=\xi_t, {\rm \ and \ }K_t=\eta_u\xi_{u,t}-(\eta_u\xi_t)_{,u},
\end{equation}
where $\eta_s$ and $\xi_s$ form a canonical pair as
\begin{equation}
[\xi_t,\eta_s^\prime]=\delta_{ts}\delta,
\end{equation}
where $\delta_{ts}$ is Kronecker's delta whereas $\delta$ is $d$-dimensional delta function.
By using canonical commutation relation of $\xi_s$ and $\eta_s$, transformation rules of the field variables are obtained as
\begin{eqnarray}
[V^{(1)}_t,P_r]&=&V^{(1)}_{t,r} \\ \nonumber
[V^{(1)}_t,L_{rs}]&=&x_rV^{(1)}_{t,s}-x_sV^{(1)}_{t,r}+(-\delta_{rt}V^{(1)}_s+\delta_{st}V^{(1)}_r) \\ \nonumber
[V^{(1)}_t,D^{(s)}]&=&-x_s V^{(1)}_{t,s}+(\Delta_{1}V^{(1)}_t), \\ \nonumber
[V^{(1)}_t,\kappa_s]&=&-2x_sx_rV^{(1)}_{t,r}+x_rx_rV^{(1)}_{t,s}+(2\delta_{ts}x_rV^{(1)}_r
-2x_tV^{(1)}_s+2\Delta_1x_sV^{(1)}_t)
\end{eqnarray}
and
\begin{eqnarray}
[V^{(2)}_t,P_r]&=&V^{(2)}_{t,r} \\ \nonumber
[V^{(2)}_t,L_{rs}]&=&x_rV^{(2)}_{t,s}-x_sV^{(2)}_{t,r}+(-\delta_{rt}V^{(2)}_s+\delta_{st}V^{(2)}_r) \\ \nonumber
[V^{(2)}_t,D^{(s)}]&=&-x_s V^{(2)}_{t,s}+(\Delta_{2}V^{(2)}_t), \\ \nonumber
[V^{(2)}_t,\kappa_s]&=&-2x_sx_rV^{(2)}_{t,r}+x_rx_rV^{(2)}_{t,s}+(2\delta_{ts}x_rV^{(2)}_r
-2x_tV^{(2)}_s+ 2\Delta_2 x_sV^{(2)}_t)
\end{eqnarray}
where $\Delta_1=d-1$ and $\Delta_2=1$, which are conformal dimensions of the field variables, $V^{(1)}_t$ and $V^{(2)}_t$ respectively.
From these we can obtain the following relations:
\begin{eqnarray}
[K_t,P_r]&=&K_{t,r}, \\ \nonumber
[K_t,L_{rs}]&=&x_rK_{t,s}-x_sK_{t,r}-\delta_{rt}K_s+\delta_{st}K_r, \\ \nonumber
[K_t, D^{(s)}]&=&-x_rK_{t,r}+(\Delta_1+\Delta_2+1)K_t \\ \nonumber
[K_t,\kappa_s]&=&-2x_s x_r K_{t,r}+x_r x_r K_{t,s}+2\delta_{st}x_rK_r
+2(\Delta_1+\Delta_2+1)x_s K_t-2x_tK_s,
\end{eqnarray}
which provide the commutation relations of the conformal algebra with the spacetime indices $\mu$ and $\nu$ restricted to $\mu,\nu=1,2,...,d$.
\paragraph{Energy density}
To complete the conformal algebra (\ref{conformal algebra}), we need to examine the temporal parts of the generators. To do this, we define a local quantity, the ``energy density'' $U$, and express these generators in terms of it as
\begin{eqnarray}
\label{generator-definitionsU}
P_0&=&\int U d^dx,{\ \ }L_{t0}=\int x_t U d^d x,\\ \nonumber
D^{(t)}&=&0,{\ \ }\kappa^{(t)}_0=\int x_sx_sU d^d x.
\end{eqnarray}
This energy density is a scalar under the spatial parts of the symmetry transformations, and we suppose that it has conformal dimension $\Delta_E$, so that it transforms as follows:
\begin{eqnarray}
[U,P_t]&=&U_{,t},{\ \ }[U,L_{st}]=x_s U_{,t}-x_t U_{,s}, \\ \nonumber
[U,D^{(s)}]&=&-x_s U_{,s}{{+\Delta_E U}}, {\ \ }[U,\kappa_s]=-2x_s x_r U_{,r}+x_r x_r U_{,s}{{+2\Delta_E x_sU}}
\end{eqnarray}
Such energy density commutation relations lead to
\begin{eqnarray}
[P_0,P_t]&=&0,{\ \ }[P_0,L_{st}]=0,{\ \ }[P_s,L_{t0}]=-\delta_{st}P_0,{\ \ }[L_{t0},L_{rs}]=
\delta_{ts}L_{r0}-\delta_{tr}L_{s0} \\ \nonumber
[D^{(s)},P_0]&=&{{-}}P_0,{\ \ }[D^{(s)},L_{0t}]=0,{\ \ }[\kappa_r,L_{0t}]=\delta_{rt}\kappa_0, {\ \ }[\kappa_0,L_{st}]=0\\ \nonumber
[D^{(s)},\kappa_0]&=&\kappa^{(t)}_0,{\ \ }[\kappa_0,P_t]=-2L_{0t}
,{\ \ }[\kappa_s,P_0]=-2L_{s0}, {\ \ }[\kappa_0,\kappa_s]=0,
\end{eqnarray}
under the single condition that {\bf the conformal dimension of the energy density} is
\begin{equation}
\Delta_E=d+1.
\end{equation}
These are the commutation relations between the temporal and spatial parts of the generators.
Finally, to complete our discussion, we examine the commutation relations among the temporal parts of the generators. They are given by
\begin{eqnarray}
[P_0,P_0]&=&0,{\ \ }[L_{t0},L_{s0}]=L_{st},{\ \ }[P_0,L_{0t}]=P_t,{\ \ } [\kappa_0,L_{0t}]=\kappa_t,{\ \ }\\ \nonumber
[\kappa_0,P_0]&=&-2D^{(s)},{\ \ }[\kappa_0,\kappa_0]=0,
\end{eqnarray}
These are translated to the following equations by using (\ref{generator-definitionsU}):
\begin{eqnarray}
\label{energy density relation0}
\int\int [U,U^\prime]d^dxd^dx^\prime&=&0 \\
\label{energy density relation1}
\int\int x_tx^\prime_s[U,U^\prime]d^dxd^dx^\prime&=&\int(x_sK_t-x_tK_s)d^dx \\
\label{energy density relation2}
\int\int x_t[U,U^\prime]d^dx d^dx^\prime&=&\int K_t d^dx \\
\label{energy density relation3}
\int\int x_sx_sx^\prime_t[U,U^\prime]d^dx d^dx^\prime&=&\int(2x_tx_sK_s-x_sx_sK_t)d^d x\\
\label{energy density relation4}
\int\int x_sx_s[U,U^\prime]d^dx d^dx^\prime&=&2\int x_sK_sd^dx \\
\label{energy density relation5}
\int\int x_u x_u x^\prime_s x^\prime_s[U,U^\prime]d^dxd^dx^\prime&=&0
\end{eqnarray}
The remaining task is to find the commutation relation between energy densities satisfying the above relations. We start with the most general form of the energy density commutation relation, as Dirac suggested \cite{Dirac1}. It is
\begin{equation}
\label{commutator-UUprime}
[U,U^\prime]=a\delta+b_r\delta_{,r}+c_{rs}\delta_{,rs}+d_{rst}\delta_{,rst}+...,
\end{equation}
where
the coefficients in front of $\delta$-functions are functions of $x_s$ only.
If we interchange $U$ and $U^\prime$ (i.e.\ $x\leftrightarrow x^\prime$), the odd derivatives of the $\delta$-function change sign and we have
\begin{eqnarray}
\label{commutator-UprimeU}
[U^\prime,U]&=&a^\prime\delta-b^\prime_r\delta_{,r}+c^\prime_{rs}\delta_{,rs}-d^\prime_{rst}\delta_{,rst}+..., \\ \nonumber
&=&a\delta-(b_r\delta)_{,r}+(c_{rs}\delta)_{,rs}-(d_{rst}\delta)_{,rst}+... \\ \nonumber
&=&\delta(a-b_{r,r}+c_{rs,rs}-d_{rst,rst}+...) \\ \nonumber
&+&\delta_{,r}(-b_r+2c_{ru,u}-3d_{rsu,su}+...) \\ \nonumber
&+&\delta_{,rs}(c_{rs}-3d_{rsu,u}+...).
\end{eqnarray}
Since (\ref{commutator-UUprime}) and (\ref{commutator-UprimeU}) add up to zero, we have
\begin{eqnarray}
\label{a-eq}
0&=&2a-b_{r,r}+c_{rs,rs}-d_{rst,rst}+... \\
\label{c-eq}
0&=&2c_{rs,s}-3d_{rst,st}+... \\
\label{cprime-eq}
0&=&2c_{rs}-3d_{rsu,u}+...\\ \nonumber
...
\end{eqnarray}
(\ref{a-eq}) gives a solution for $a$ as
\begin{equation}
a=\alpha_{r,r}, {\rm \ \ where\ \ }2\alpha_r=b_{r}-c_{rs,s}+d_{rst,st}-...,
\end{equation}
and (\ref{c-eq}) means that $c_{ru,u}$ is in fact a total second derivative, so that
\begin{eqnarray}
\int (2\alpha_r-b_r)d^dx&=&0, {\rm \ \ and\ \ }\int x_s(2\alpha_r-b_r)d^dx=0,\\ \nonumber
{\rm \ \ since\ \ } 2\alpha_r-b_r&=&-c_{rs,s}+d_{rst,st}-...\rightarrow({\rm second\ derivative\ and \ higher} )
\end{eqnarray}
By using these, we derive more useful relations as
\begin{eqnarray}
\label{comm-U-int}
\int [U,U^\prime]d^dx^\prime&=&\alpha_{r,r} \\
\int x_s^\prime[U,U^\prime]d^dx^\prime&=&x_s\alpha_{r,r}-b_s
\end{eqnarray}
We now plug (\ref{commutator-UUprime}) into (\ref{energy density relation1}-\ref{energy density relation4}) to fix the coefficients of the $\delta$-functions (and their derivatives) on the
right hand side of (\ref{commutator-UUprime}). The relation (\ref{comm-U-int}) directly solves (\ref{energy density relation0}).
(\ref{energy density relation2}) gives
\begin{eqnarray}
\int K_td^dx=\int x_t \alpha_{r,r}d^dx=\int \alpha_td^dx=\frac{1}{2}\int b_t d^dx,
\end{eqnarray}
where we have used (\ref{comm-U-int}). From this, we get the most general form of the solutions $\alpha_t$ and $b_t$ as
\begin{equation}
\label{ansztz--apha-b}
\alpha_t=K_t+\beta_{tr,r}+\zeta_{,t}{\rm\ \ and\ \ }b_t=2K_t+\bar\beta_{tr,r}+\bar\zeta_{,t},
\end{equation}
where $\beta_{st}$, $\bar\beta_{st}$, $\zeta$ and $\bar\zeta$ are arbitrary functions of $x_s$.
(\ref{energy density relation1}) provides
\begin{eqnarray}
&{\ }&\int(x_sK_t-x_tK_s)d^dx=\int x_t(x_s\alpha_{u,u}-b_s)d^dx=\frac{1}{2}\int d^dx(x_sb_t-x_tb_s)\\ \nonumber
&=&\frac{1}{2}\int d^dx(2x_sK_t-2x_tK_s+x_s\bar\beta_{tr,r}-x_t\bar\beta_{sr,r}+x_s\bar\zeta_{,t}-x_t\bar\zeta_{,s}),
\end{eqnarray}
This relation restricts $\bar\beta_{st}$ to be
\begin{equation}
\int(\bar\beta_{ts}-\bar\beta_{st})d^dx=0.
\end{equation}
and similarly
\begin{equation}
\int(\beta_{ts}-\beta_{st})d^dx=0.
\end{equation}
Next, consider (\ref{energy density relation4}), which is given by
\begin{eqnarray}
2\int x_sK_sd^dx=\int x_sx_s\alpha_{t,t}d^dx=\int 2x_t(K_t+\beta_{tr,r}+\zeta_{,t})d^dx,
\end{eqnarray}
which provides conditions for $\beta_{st}$ and $\zeta$ as
\begin{equation}
\int(\beta_{tt}+d\zeta) d^d x=0,
\end{equation}
Moreover, (\ref{energy density relation3}) becomes
\begin{eqnarray}
&{\ }&\int(2x_tx_sK_s-x_sx_sK_t)d^d x=\int x_sx_s(x_t\alpha_{r,r}-b_t)d^dx\\ \nonumber
&=&\int(2x_tx_sK_s-x_sx_sK_t)d^d x+\int\{ x_t(2\beta_{rr}+2(d+2)\zeta-2\bar\zeta)+2x_s(\beta_{st}+\beta_{ts}-\bar\beta_{ts})\}d^dx,
\end{eqnarray}
Then, from this we get
\begin{equation}
\int\{ x_t(2\beta_{rr}+2(d+2)\zeta-2\bar\zeta)+2x_s(\beta_{st}+\beta_{ts}
-\bar\beta_{ts})\}d^dx=0
\end{equation}
Finally we examine (\ref{energy density relation5}). (\ref{ansztz--apha-b}) satisfies this under a condition that
\begin{equation}
\int\{2x_tx_u(2\beta_{ut}-\bar\beta_{ut})+x_sx_s(2\beta_{tt}-\bar\beta_{tt}
+2(2+d)\zeta-(2+d)\bar\zeta+c_{uu}) \}=0.
\end{equation}
Minimal solutions of the coefficients in front of $\delta$-functions on the right hand side of (\ref{commutator-UUprime}) are given by
\begin{equation}
2\alpha_t=b_t=2K_t, {\ \ \rm and\ \ }\beta_{st}=\bar\beta_{st}=\zeta=\bar\zeta=c_{rs}...=0
\end{equation}
Therefore, the minimal solution of the commutation relation between the energy densities which satisfies conformal algebra becomes
\begin{equation}
\label{UU-commm}
[U,U^\prime]=K_{t,t}\delta+2K_t\delta_{,t}.
\end{equation}
\section{Conformal invariance and 4-$d$ vector theories}
\label{section2}
The main consequence of the last section is (\ref{UU-commm}). Once we quantize our vector field theory as in (\ref{BB-commm}) and define the momentum and energy densities as in (\ref{em-density-intro}), they satisfy \cite{Bunster:2012hm}
\begin{equation}
[U,U^\prime]=-\varepsilon\delta^{st}(K_s+K^\prime_s)\delta_{,t},
\end{equation}
where $\varepsilon=0$ or $-1$. The conformal algebra is consistently constructed from the energy density only when the conformal dimension of the energy density is $\Delta_E=4$ in 4-dimensional spacetime. The simplest candidate for this is $U=h$, since $\mathcal B^a_s$ has conformal dimension $2$.
\section*{Acknowledgement}
We would like to thank Alfred D. Shapere for the useful discussion. J.H.O thanks his $\mathcal W.J$.
This work is supported by the research fund of Hanyang University(HY-2013) only.
\section{Introduction}
Ionization and electron transfer (electron capture),
which may occur in collisions between an atom and a bare nucleus,
represent two of the basic collision processes studied by atomic physics.
In the process of ionization the atom emits an electron, which
after the collision moves freely in space, while
in the transfer process an electron
initially bound in the atom is captured
into a bound state of the moving ion.
Both of these processes possess interesting physics,
their study is of importance for many applications,
and various aspects of these processes
have been attracting attention for decades.
Quite an interesting situation is encountered
when a combination of transfer and ionization occurs
in a single collision event. Such a process,
which becomes possible if the atomic target
has at least two electrons, is called transfer-ionization.
During the last decade transfer-ionization
in collisions of protons with helium atoms
has attracted much attention \cite{mergel}-\cite{schmidt}.
There are only a few known basic mechanisms
governing transfer-ionization in fast collisions.
Depending on whether the electron-electron
interaction (correlations) plays
a crucial role in them or not, these mechanisms
can be subdivided into ``correlated''
and ``uncorrelated'' ones.
The group of uncorrelated mechanisms consists of
the so called independent transfer-ionization
(ITI) and capture--shake-off (C-SO).
In the ITI electron capture and emission occur
due to the "independent" interactions
of the electrons with the ionic projectile.
In a theoretical description this mechanism
first appears in second order perturbation theory
in the ion-atom interaction,
and its realization does not require
any electron-electron interaction.
According to the C-SO mechanism,
a fast removal of the captured electron from
the atom leads to a ``sudden'' change
of the atomic potential in which
the other electron is moving.
As a result, the electron tries to adjust
its state to the new potential and
has a nonzero probability
to become unbound \cite{mcg}.
The correlated mechanisms are more interesting.
They include the so called electron-electron Thomas (EET) mechanism
and a mechanism which will be termed here the electron-electron Auger (EEA) mechanism.
According to the EET, transfer-ionization proceeds
in two steps \cite{thomas}, \cite{briggs}.
In the first step, as a result of a binary collision with
the ion, one of the electrons acquires
a velocity $\sqrt{2} v$, where $v$ is the ion velocity,
moving at an angle of $45^\circ$ with respect
to the motion of the ion.
In the second step this electron scatters off
the other electron, acquiring, as simple kinematics shows,
a velocity equal to the projectile velocity
both in absolute magnitude and in direction,
which makes the capture very probable.
The same kinematics also shows that in this process the other electron
acquires a velocity which is perpendicular
to the projectile velocity and whose absolute value is equal to $v$.
Thus, as a result of the EET one electron is captured and
the other is emitted perpendicular to the projectile motion.
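The two-step kinematics described above can be verified with elementary conservation laws. The following sketch (our illustration; the variable names are ours) checks numerically that an electron scattered to $45^\circ$ with speed $\sqrt{2}v$ can, in an elastic equal-mass collision with a second electron at rest, end up moving exactly with the projectile velocity while its partner recoils perpendicular to the beam with speed $v$:

```python
import math

v = 1.0  # projectile speed (arbitrary units; the kinematics is scale-free)

# Step 1: binary collision with the heavy ion.  An electron elastically
# scattered by the ion through the lab angle theta acquires speed
# 2*v*cos(theta); for theta = 45 deg this is sqrt(2)*v.
theta = math.pi / 4
speed1 = 2.0 * v * math.cos(theta)
assert abs(speed1 - math.sqrt(2.0) * v) < 1e-12
# velocity components (parallel to beam, perpendicular to beam)
e1 = (speed1 * math.cos(theta), speed1 * math.sin(theta))  # = (v, v)

# Step 2: elastic equal-mass collision with the second electron at rest.
# Momentum and energy conservation admit the final configuration
u_captured = (v, 0.0)   # moves with the projectile -> easily captured
u_emitted = (0.0, v)    # speed v, perpendicular to the projectile motion

# momentum conservation: u_captured + u_emitted = e1
assert all(abs(u_captured[i] + u_emitted[i] - e1[i]) < 1e-12 for i in range(2))
# energy conservation: |u_captured|^2 + |u_emitted|^2 = |e1|^2
lhs = sum(c * c for c in u_captured) + sum(c * c for c in u_emitted)
rhs = sum(c * c for c in e1)
assert abs(lhs - rhs) < 1e-12
```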
The electron-electron interaction is also
the (main) driving force of the EEA mechanism.
The physics of the latter becomes very transparent
when it is viewed in the rest frame of the projectile nucleus.
The functioning of this mechanism is based
on the fact that the presence of the
second nucleus makes the initial configuration of atomic particles
unstable with respect to a kind of Auger decay.
Indeed, in the presence of this nucleus
one of the electrons, which initially belongs
to a bound configuration of fast moving particles constituting
the atom, undergoes a radiationless transition
\cite{foot_note} into a bound state of the ion by transferring
(most of) the energy excess to the other atomic electron which,
as a result, is emitted from the atom
\cite{we-EE}, \cite{ich-EE}.
A clear signature of this mechanism is that
in the rest frame of the atom the electron is emitted
in the direction opposite to
the projectile motion
\cite{we-EE}, \cite{ich-EE}, \cite{daniel}.
One has to emphasize that the mechanisms for
transfer-ionization, discussed above, are
in essence high-energy approximations,
the validity of which improves with increasing
impact energy. Therefore, the description
of transfer-ionization in terms of
these mechanisms becomes really meaningful
only provided the collision velocity is high enough:
$v \gg v_i, v_f $, where $v_i \sim Z_t$ and $v_f \sim Z_p$ are
the typical velocities of the electron(s) in the initial
and final bound states, respectively, and $Z_t$ ($Z_p$)
is the charge of the nucleus of the target (projectile).
This implies that in order to get an insight into the physics
of transfer-ionization by considering this process as driven
by these mechanisms, even in collisions with protons
the impact velocity should lie in the range
$ v \stackrel{>}{\sim} 10 v_0$, where $ v_0$ is
the Bohr velocity in atomic hydrogen.
Although transfer-ionization was studied in a number of papers,
most of them were concerned with the total cross section.
A better understanding of the physics of this process
can be obtained when differential cross sections are
explored. Concerning such cross sections in the case
of transfer-ionization in fast collisions
only the cross sections singly differential
in the momentum component of the emitted electron
or the target recoil ion, parallel/antiparallel
to the projectile velocity, have been considered
(see e.g. \cite{schmidt}, \cite{we-EE}-\cite{ich-EE}).
However, the exploration of such singly differential
cross sections even in principle can hardly allow one
to clearly separate the contributions of
the correlated and uncorrelated mechanisms
(and thus to study and understand them better).
Compared to the singly differential cross sections
the doubly differential cross sections, which are a function of
both parallel and perpendicular to the projectile velocity
components of the momentum of the emitted electron,
can yield much more information about the process.
Therefore, in the present paper we consider
such cross sections for transfer-ionization in collisions of
fast protons, alpha-particles and lithium nuclei with helium atoms.
It will, in particular, be shown
that the study of such doubly differential cross sections
may enable one to clearly separate and
identify the different mechanisms contributing
to transfer-ionization and to get
a better insight into the physics of this process.
One should say that all the previous experimental
studies devoted to the spectra of electrons emitted in transfer-ionization
were dealing with relatively low impact velocities where,
as was already mentioned, the discussion of this process
in terms of the four mechanisms may not yet be very meaningful.
Therefore, we hope that the present article could
trigger the interest of experimental physicists
to the exploration of this process at higher impact velocities.
Atomic units ($\hbar = m_e = |e| =1 $) are used throughout the paper
except where otherwise stated.
\section{General Consideration}
In our description of transfer-ionization
the correlated and uncorrelated mechanisms
shall be treated separately (and their contributions
added incoherently in the cross section). We begin
by considering the correlated ones.
\subsection{The EEA and EET mechanisms}
The (approximate) transition amplitude for transfer-ionization
can be written
\begin{eqnarray}
a_{fi} = -i \int_{-\infty}^{+\infty} dt
\langle \Psi_f(t) |\hat{W} |\Psi_i(t)\rangle.
\label{e1}
\end{eqnarray}
Here $\hat{W}$ is the Coulomb interaction between
the electrons, and $\Psi_i(t)$ and $\Psi_f(t)$
are the initial and final states of the electrons.
In the nonrelativistic domain of atomic collisions
the description of electron capture is covariant
under a Galilean transformation and
any Galilean frame can be chosen to consider this process.
Assuming that the target atom is (initially)
at rest in the laboratory frame,
we take for the moment the rest frame of the atom
as our reference frame and choose its origin
at the position of the atomic nucleus.
We denote the coordinates of the electrons
by ${\bf r}_1$ and ${\bf r}_2$.
The projectile-nucleus with a charge $Z_p$
is assumed to move along
a straight-line trajectory
${\bf R}(t)= {\bf b} + {\bf v} t$, where
${\bf b}$ is the impact parameter,
${\bf v}$ the collision velocity and $t$ the time.
The coordinates of the 'first' and 'second'
electrons with respect to the position
of the projectile are denoted by
${\bf s}_1$ and ${\bf s}_2$, respectively
(${\bf s}_j={\bf r}_j-{\bf R}(t)$; $j=1,2$).
We choose the initial state as
\begin{eqnarray}
\Psi_i(t) &=& \Lambda_i \, \varphi_i({\bf r}_1,{\bf r}_2)
\exp(-i E_i t ).
\label{e2}
\end{eqnarray}
In Eq.(\ref{e2}) $\varphi_i$ is the initial
unperturbed two-electron atomic state with
an energy $E_i$ and $ \Lambda_i$ is
a factor which accounts for the distortion
of the initial atomic state by the field of
the incident ion; its form will be specified later.
We approximate the state $\varphi_i$ according to
\begin{eqnarray}
\varphi_i({\bf r}_1,{\bf r}_2) =
A_i \left( \exp\left(-\alpha r_1 - \beta r_2 \right)
+ \exp\left(-\alpha r_2 - \beta r_1\right) \right)
\exp\left( \gamma r_{12}\right),
\label{e3}
\end{eqnarray}
where $A_i$ is the normalization factor,
$r_{12}=\mid {\bf r}_1 - {\bf r}_2 \mid $
is the inter-electron distance and
the parameters $\alpha$, $\beta$ and $\gamma$
can be chosen from the following sets:
(i) $\alpha=\beta=2 $, $\gamma=0$;
(ii) $\alpha=\beta=1.69$, $\gamma=0$;
(iii) $\alpha=\beta=1.86$, $\gamma=0.254$;
(iv) $\alpha=2.18$, $\beta=1.18$, $\gamma=0$;
and (v) $\alpha=2.21$, $\beta=1.44$, $\gamma=0.207$.
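As an aside, the value $1.69$ in set (ii) is the standard variational screening charge: for the uncorrelated trial state $\exp(-\alpha(r_1+r_2))$ the helium ground-state energy in atomic units is $E(\alpha)=\alpha^2-2Z\alpha+\tfrac{5}{8}\alpha$ with $Z=2$, minimized at $\alpha=Z-5/16=1.6875\approx 1.69$. A quick numerical check of this textbook result (our illustration, not part of the calculation described in the text):

```python
# Variational origin of alpha = beta = 1.69 in parameter set (ii).
Z = 2.0
alpha = Z - 5.0 / 16.0                        # minimizer of E(alpha)
E = alpha**2 - 2.0 * Z * alpha + 0.625 * alpha
print(round(alpha, 4), round(E, 4))           # 1.6875 -2.8477 (exact: -2.9037)
```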
The final state is taken according to
\begin{eqnarray}
\Psi_f(t) &=& \Lambda_f \frac{1}{\sqrt{2}} \left[
\chi_f({\bf s}_1) \exp( i {\bf v} \cdot {\bf r}_1 )
\, \phi_{\bf k}({\bf r}_2)
+ \chi_f({\bf s}_2)
\exp( i {\bf v} \cdot {\bf r}_2 )
\, \phi_{\bf k}({\bf r}_1) \right]
\nonumber \\
&& \times \exp(-i (\epsilon_k + \varepsilon_f) t)
\exp\left( -i \frac{v^2}{2} t \right).
\label{e4}
\end{eqnarray}
Here, $\chi_f$ is the
final (unperturbed) bound state of the electron
captured by the projectile,
$\varepsilon_f$ the energy of this state
(as viewed in the rest frame of the projectile)
and $ \exp\left( i {\bf v} \cdot {\bf r}_j -i v^2 t/2 \right)$
the so called translational factor.
Further, $\phi_{\bf k}$ denotes
the state of the emitted electron
which moves in the field of the residual
atomic ion with (asymptotic) momentum ${\bf k}$ and
energy $\epsilon_k = k^2/2$ and $\Lambda_f$ describes
the distortions of the states of captured and emitted
electrons by the fields of the residual atomic ion
and projectile, respectively.
Now we turn to the discussion of the form of
the distortion factors $\Lambda_i$ and $\Lambda_f$.
Let us remind the reader that in this paper
we consider only collisions at high impact velocities
in which one has $Z_p/v \ll 1$.
Besides, as will be seen below, in the transfer-ionization process
the emitted electron has a high velocity ($\sim v \gg Z_p$)
with respect to the projectile.
From the work on atomic ionization
it is known that in such collisions
the account of the distortion
does not noticeably change the result.
At the same time it is also known that
for electron transfer reactions
the effect of the distortion in general
remains very important even at $Z_p/v \ll 1$.
Therefore, in our treatment we shall ignore the distortions
for that electron, which is to be emitted,
and account only for the distortions acting
on that electron which is to be captured.
With such an assumption one can show that
the transition amplitude in the momentum space,
\begin{eqnarray}
S_{fi}({\bf q}_{\perp}) = \frac{1}{2 \pi}
\int d^2 {\bf b} a_{fi}({\bf b})
\exp(i {\bf q}_{\perp} \cdot {\bf b}),
\label{e5}
\end{eqnarray}
is given by
\begin{eqnarray}
S_{fi}({\bf q}_{\perp}) =
S^{\alpha, \beta}_{fi}({\bf q}_{\perp})
+ S^{\beta, \alpha}_{fi}({\bf q}_{\perp}).
\label{e6}
\end{eqnarray}
Here,
\begin{eqnarray}
S^{\alpha, \beta}_{fi}({\bf q}_{\perp}) &=&
- \frac{\sqrt{2} i A_i}{ (2 \pi)^3 v}
\int d^3 {\bf s} \chi^*_f({\bf s})
\exp(i {\bf q} \cdot {\bf s}) \Lambda_i({\bf s})
\int d^3 \mbox{\boldmath$\kappa$}
\frac{G_{ion}(\mbox{\boldmath$\kappa$}; \beta)}{\kappa^2 + \gamma^2}
\nonumber \\
&& \times \int d^3 {\bf r} \Lambda^*_f({\bf r})
\exp(-i ({\bf v} + {\bf q} + \mbox{\boldmath$\kappa$})
\cdot {\bf r}) \exp(-\alpha r)
\label{e7}
\end{eqnarray}
where
\begin{eqnarray}
G_{ion}(\mbox{\boldmath$\kappa$}; \beta) =
\int d^3 {\bf r} \phi^*_{\bf k}({\bf r})
\exp( i \mbox{\boldmath$\kappa$} \cdot {\bf r})
\exp(-\beta r)
\label{e8}
\end{eqnarray}
and
\begin{eqnarray}
{\bf q} = \left({\bf q}_{\perp},
\frac{ E_i - \varepsilon_f - k^2/2 - v^2/2 }{v} \right)
\label{e9}
\end{eqnarray}
is the momentum transfer in the collision.
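To give a feeling for the numbers entering the longitudinal component of ${\bf q}$, the sketch below evaluates it for a $v=12$ a.u. proton collision, assuming capture into H($1s$) ($\varepsilon_f=-0.5$ a.u.), soft emission ($k\approx 0$) and the helium ground-state energy $E_i\approx -2.9037$ a.u. (the numerical inputs are our illustrative assumptions):

```python
# Longitudinal momentum transfer q_z = (E_i - eps_f - k^2/2 - v^2/2) / v,
# all in atomic units.  Input values are illustrative assumptions.
E_i = -2.9037   # helium ground-state energy
eps_f = -0.5    # H(1s) energy in the projectile frame
v = 12.0        # collision velocity
k = 0.0         # soft electron emission
q_z = (E_i - eps_f - k**2 / 2.0 - v**2 / 2.0) / v
print(round(q_z, 3))   # -6.2: dominated by the -v/2 term
```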
Note that $S^{\beta, \alpha}_{fi}({\bf q}_{\perp})$ is obtained
from $S^{\alpha, \beta}_{fi}({\bf q}_{\perp})$
by interchanging $\alpha $ and $ \beta $
in Eqs. (\ref{e7})-(\ref{e8}).
The explicit form of the distortion factors
is taken according to the continuum-distorted-state (CDW)
model which has been proved quite successful in describing
the total cross section for capture in a three-body collision
system (one active electron + two nuclei).
In this model the distortion factors read
\begin{eqnarray}
\Lambda_i({\bf s}) &=&
N(\nu_p) \, \left._1F_1 \right.
\left( i\nu_p,1,i v s + i {\bf v} \cdot {\bf s} \right)
\nonumber \\
\Lambda_f({\bf r}) &=& N^*(\nu_t) \,
\left._1F_1 \right.
\left( -i\nu_t,1, - i v r - i {\bf v} \cdot {\bf r} \right),
\label{e10}
\end{eqnarray}
where $ N(\nu)= e^{ \pi \nu /2 } \Gamma(1-i\nu)$,
$\nu_p=Z_p/v$, $\nu_t=Z_t/v$,
and $\Gamma$ and $ \left._1F_1 \right.$
are the gamma and confluent
hypergeometric functions,
respectively (see e.g. \cite{Ab-St}).
The inclusion of the distortion factor for
the initial state in the form given by
the first line of (\ref{e10}) means that in our treatment
the electron, which is to be transferred,
in its initial state moves not only in
the field of the atom
but also in the (Coulomb) field of the projectile.
Therefore, with such a factor
the transition amplitude (\ref{e1})
describes both the EEA and EET capture channels,
whereas when this factor is set to unity,
$\Lambda_i = 1$, the calculated contribution
of the EEA mechanism becomes much larger but
the EET mechanism simply ``vanishes''.
To conclude this subsection let us note that
the account of the distortion for the final state turned out
to be not so crucial.
Indeed, in cases tested the difference
between results obtained with the distortion factor $\Lambda_f$
in the form given by the second line of (\ref{e10}) and
by setting $\Lambda_f = 1$ was not substantial.
Therefore, taking into account that the neglect
of this distortion greatly simplifies the calculation
and reduces the computation time,
in what follows we report only results obtained
by setting $\Lambda_f =1 $.
\subsection{Independent transfer-ionization and capture--shake-off}
Let us now very briefly consider two uncorrelated mechanisms:
the independent transfer-ionization and capture--shake-off.
According to the first of them transfer-ionization proceeds
in two independent steps: one electron is captured (transfer)
and the other one is emitted (ionization).
These transitions are driven by the interaction between
the projectile and the electrons while the electrons
do not need at all to interact with each other for
the transitions to occur. Note that within this mechanism
the projectile must interact with the target at least twice
(at least one interaction per electron).
In the consideration of the present paper
the capture and ionization parts of the independent transfer-ionization
are regarded as occurring in the collision between a projectile-nucleus
and a hydrogen-like system. The latter is described using
an effective nuclear charge which was taken to be $1.69$,
both for capture and ionization.
In the impact parameter space the amplitude for this process
is a product of the single-electron transition
amplitudes for capture and ionization. The latter ones
are obtained using the three-body CDW (capture)
and CDW-EIS (ionization) \cite{cdw-eis} models.
In capture--shake-off the ``instant'' removal of one of the electrons
from the atom due to its capture by the fast projectile
forces the other electron to react to a sudden
change of the atomic potential.
As a result, the second electron can be shaken off
from the target and become unbound \cite{mcg}.
The amplitude for this channel is estimated as
the product of the amplitude for single electron
capture (evaluated within the three-body CDW --
like in case of the ITI)
and the amplitude for shake-off which is simply an overlap
between the initial and final states of the ``second'' electron.
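The size of such a sudden (shake-off) probability can be illustrated with hydrogen-like orbitals. Assuming the remaining electron initially occupies a $1s$ orbital with the effective charge $1.69$ and suddenly finds itself in the field of the bare He nucleus, the probability that it stays in the He$^+(1s)$ ground state is the squared overlap $\langle 1s_{Z_1}|1s_{Z_2}\rangle = 8(Z_1Z_2)^{3/2}/(Z_1+Z_2)^3$ (a textbook formula; the numbers below are our illustration, not the amplitudes actually used in the calculation):

```python
def overlap_1s(z1, z2):
    # overlap of two hydrogen-like 1s orbitals with charges z1 and z2
    return 8.0 * (z1 * z2) ** 1.5 / (z1 + z2) ** 3

p_stay = overlap_1s(2.0, 1.69) ** 2      # survival probability in He+(1s)
print(round(p_stay, 3))                  # 0.979
# 1 - p_stay ~ 0.02 bounds the shake-off probability from above, since
# transitions into excited bound states of He+ also absorb part of it.
```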
\subsection{The total contribution to transfer-ionization}
In our calculations we add
the contributions of the correlated, the independent and
capture--shake-off channels incoherently.
In the context of the present paper,
which is focused on the correlated capture mechanisms,
such an incoherent addition does not represent
a serious shortcoming, since at the collision parameters considered here
the correlated and uncorrelated mechanisms have a small overlap
in the momentum space of the emitted electrons.
To conclude this section, note that the validity
of our approach to transfer-ionization in fast collisions
has been already tested in \cite{we-EE}-\cite{ich-EE}
where the cross sections singly differential in
the longitudinal momentum of the emitted electrons
and target recoil ions were calculated for proton
on helium collisions
at $v = 12.6$ and $15.2$ a.u. and
a good agreement with available experimental
data has been found \cite{foot-new}.
\section{Results and discussion}
In this section we discuss the momentum spectra for electrons
emitted in transfer-ionization in collisions of protons,
alpha-particles and bare lithium nuclei with helium.
As was mentioned in the previous section,
in our evaluation of the ITI and C-SO mechanisms
we use the effective charge of $1.69$ to describe the initial
undistorted state of the electrons in helium.
Therefore, for consistency, in our calculation
of the contributions from the correlated mechanisms
we use the set ii) of the parameters
for the state (\ref{e3})
(except in figure \ref{corr-in-init-state},
where the sets i) and v) are used).
The momentum spectra shown in figures
\ref{12-1}-\ref{21-3} are given
in the rest frame of the target
(laboratory frame) and are represented
by the doubly differential cross section
\begin{eqnarray}
\frac {d^2 \sigma }{ dk_{lg} dk_{tr} } =
k_{tr} \int_0^{2 \pi} d\varphi_k \,
\int d^2 {\bf q}_{\perp} \left| S_{fi}({\bf q}_{\perp}) \right|^2,
\label{e11}
\end{eqnarray}
where $k_{lg} = {\bf k} \cdot {\bf v} /v $ and
${\bf k}_{tr} = {\bf k} - k_{lg} {\bf v} /v $
are the longitudinal and transverse parts,
respectively, of the momentum ${\bf k}$
of the emitted electron. The integration
in (\ref{e11}) runs over the transverse part of
the momentum transfer and the azimuthal angle
$\varphi_k$ of the emitted electron.
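As a concrete illustration of this decomposition, the following short sketch (illustrative vectors only, not values from the calculation reported here) splits an emitted-electron momentum into its longitudinal and transverse parts:

```python
import numpy as np

def decompose_momentum(k, v):
    """Split an electron momentum k into its longitudinal and transverse
    parts with respect to the collision velocity v (all in a.u.):
    k_lg = k.v/|v|,  k_tr = k - k_lg v/|v|."""
    v_hat = v / np.linalg.norm(v)
    k_lg = float(np.dot(k, v_hat))
    k_tr_vec = k - k_lg * v_hat
    return k_lg, float(np.linalg.norm(k_tr_vec)), k_tr_vec

# illustrative numbers: v = 21 a.u. along z, an electron near the EET maximum
v = np.array([0.0, 0.0, 21.0])
k = np.array([20.0, 0.0, -1.5])
k_lg, k_tr, k_tr_vec = decompose_momentum(k, v)
print(k_lg, k_tr)   # -1.5 20.0
```

By construction the transverse part is orthogonal to $\mathbf{v}$ and the pair $(k_{lg}, k_{tr})$ reproduces $|\mathbf{k}|$.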
In the range of collision parameters considered here
an atomic electron is mainly captured
into the ground state of the projectile.
Therefore, in what follows we consider
transfer-ionization only for this channel.
\begin{figure}[t]
\vspace{-0.45cm}
\begin{center}
\includegraphics[width=0.61\textwidth]{12-1-new.eps}
\end{center}
\vspace{-0.9cm}
\caption{ \footnotesize{ Momentum spectrum (in b/(a.u.)$^2$)
of electrons emitted in the reaction
$3.6$ MeV p$^+$ + He(1s$^2$) $\to$ H(1s) + He$^{2+}$ + e$^-$
collisions ($v=12$ a.u.). }}
\label{12-1}
\end{figure}
The momentum spectra of electrons emitted in collisions
with protons are displayed in figures \ref{12-1}, \ref{16-1}
and \ref{21-1} for impact energies of $3.6$, $6.4$ and $11$ MeV,
respectively. These energies correspond to $v=12$, $16$ and $21$ a.u.
It is seen in the figures that there are three distinct
maxima in the spectra.
\begin{figure}[t]
\vspace{-0.45cm}
\begin{center}
\includegraphics[width=0.61\textwidth]{16-1-new.eps}
\end{center}
\vspace{-0.9cm}
\caption{ \footnotesize{ Same as in figure \ref{12-1} but for
$6.4$ MeV p$^+$ + He(1s$^2$) $\to$ H(1s) + He$^{2+}$ + e$^-$
collisions ($v=16$ a.u.). }}
\label{16-1}
\end{figure}
\subsection*{Uncorrelated transfer-ionization}
The maximum, which is located at small values of $k$,
has its origin in the uncorrelated mechanisms:
the independent transfer-ionization and
capture--shake-off.
In high-velocity collisions ($v \gg Z_p, Z_t$)
the cross section for single electron capture
calculated within the CDW approximation scales
approximately as $Z_p^5/v^{11}$. In our model, this is obviously
also the scaling for the contribution
of the capture--shake-off channel to the cross section.
Since the ionization part of the independent transfer-ionization
adds the factor $Z_p^2/v^2$, the cross section for this channel
is proportional to $Z_p^7/v^{13}$ and, compared to
the capture--shake-off, shows
a steeper dependence both on the projectile
charge and collision velocity.
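These scalings can be turned into a toy estimate of the relative weight of the two uncorrelated channels; the overall normalizations below are arbitrary, and only the ratio ITI/C-SO $\sim Z_p^2/v^2$ carries meaning:

```python
def cso_scaling(Zp, v):
    # capture--shake-off inherits the CDW single-capture scaling
    return Zp**5 / v**11

def iti_scaling(Zp, v):
    # independent transfer-ionization adds an ionization factor Zp^2/v^2
    return Zp**7 / v**13

# the ratio Zp^2/v^2 grows with Zp and falls with v, so C-SO dominates
# the small-k maximum for protons while ITI gains weight for He2+ and Li3+
for Zp, name in [(1, "p+"), (2, "He2+"), (3, "Li3+")]:
    for v in (12.0, 16.0, 21.0):
        ratio = iti_scaling(Zp, v) / cso_scaling(Zp, v)
        print(f"{name}, v={v}: ITI/C-SO ~ {ratio:.5f}")
```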
According to our model,
in collisions with protons
(in the range of impact velocities
considered) the maximum at small $k$ is
dominated by capture--shake-off, which leads
to a spectrum shape that is almost symmetric
with respect to $k_{lg} = 0$ \cite{asym-shake-off}.
The situation becomes somewhat different in collisions with
alpha-particles and lithium nuclei
in which the independent transfer-ionization
becomes relatively
more important and, as a result,
the emission spectrum acquires
a slight forward-backward asymmetry
with more emitted electrons
moving in the forward semi-sphere
(see figures \ref{16-2}, \ref{21-2} and \ref{21-3}).
\begin{figure}[t]
\vspace{-0.45cm}
\begin{center}
\includegraphics[width=0.61\textwidth]{21-1-new.eps}
\end{center}
\vspace{-0.9cm}
\caption{ \footnotesize{ Same as in figure \ref{12-1} but for
$11$ MeV p$^+$ + He(1s$^2$) $\to$ H(1s) + He$^{2+}$ + e$^-$
collisions ($v=21$ a.u.). }}
\label{21-1}
\end{figure}
\subsection*{Correlated transfer-ionization}
The maximum at large (negative) $k_{lg} $
appears due to the EEA mechanism whereas the maximum
at large $ k_{tr} $ is a signature of the EET channel.
\vspace{0.25cm}
i) Let us consider the kinematics of these two correlated
channels of transfer-ionization. To this end it is convenient
to go first to the rest frame of the projectile-nucleus.
In this frame the latter particle does not take part
in the energy balance of the process (because it is heavy
and is initially at rest).
Therefore, the energy balance can be written as
$u_e^2/2 + \Delta E \approx v^2$.
Here, $u_e$ is the velocity of the emitted electron,
$\Delta E = v \Delta Q_{lg}$ is the change in
energy of the nucleus of the atom with
$\Delta Q_{lg}$ being the change in
its longitudinal momentum, $v^2$ is the initial energy
of the two incident electrons
and we have neglected the initial and final binding
energies since $v \gg Z_p$ and $v \gg Z_t$.
Thus, the velocity $u_e$ of the emitted electron
in the projectile frame is approximately given by
\begin{eqnarray}
u_e = v \sqrt{ 2 (1 - \Delta Q_{lg}/v) }.
\label{velocity}
\end{eqnarray}
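A few lines suffice to check Eq. (\ref{velocity}) and its limiting cases (the numbers are illustrative, in atomic units): a spectator target nucleus gives $u_e = \sqrt{2}\, v$, while requiring $u_e \approx v$, as in the uncorrelated channels, forces $\Delta Q_{lg} \approx v/2$.

```python
import math

def u_e(v, dQ_lg):
    """Projectile-frame speed of the emitted electron,
    u_e = v * sqrt(2 (1 - dQ_lg / v))   (atomic units)."""
    return v * math.sqrt(2.0 * (1.0 - dQ_lg / v))

v = 21.0  # a.u., the highest impact velocity considered here

# spectator target nucleus (dQ_lg = 0): u_e = sqrt(2) v
u_spectator = u_e(v, 0.0)

# EEA: for backward emission in the projectile frame the target-frame
# longitudinal momentum is k_lg = v - u_e ~ (1 - sqrt(2)) v ~ -0.414 v
k_lg_EEA = v - u_spectator

# uncorrelated channels: u_e ~ v forces dQ_lg ~ v/2
dQ_uncorr = v / 2.0
print(u_spectator / v, k_lg_EEA / v, u_e(v, dQ_uncorr) / v)
```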
\begin{figure}[t]
\vspace{-0.45cm}
\begin{center}
\includegraphics[width=0.61\textwidth]{16-2-new.eps}
\end{center}
\vspace{-0.9cm}
\caption{ \footnotesize{ Same as in figure \ref{12-1} but for
$6.4$ MeV/u He$^{2+}$ + He(1s$^2$) $\to$ He$^{+}$(1s) + He$^{2+}$ + e$^-$
collisions ($v=16$ a.u.). }}
\label{16-2}
\end{figure}
\vspace{0.25cm}
ii) If the nucleus of the atom were just a spectator
in the collision (and thus $ \Delta Q_{lg} = 0 $),
one would obtain $u_e \approx \sqrt{2} v$.
Taking into account that in the target frame the electron emitted via
the EEA mechanism moves in the direction, which is opposite
to the projectile velocity, the momentum spectrum of electrons
produced via the EEA should then be centered in this frame around
$k_{lg} \approx v - \sqrt{2} v \approx - 0.4 v $.
Looking at the figures one sees, however,
that only at the highest impact energy considered ($v=21$ a.u.)
does the electron spectrum really have its maximum
at a longitudinal momentum rather close to $- 0.4 v$, while
at the lower velocities ($v=12$ and $16$ a.u.)
this maximum is located at a noticeable distance
from the point $k_{lg} = - 0.4 v$.
This means that only at sufficiently
high impact energies does the EEA become an (almost)
purely electronic mechanism without
the involvement of the nucleus of the atom.
At lower impact velocities the target nucleus
does noticeably participate in this mechanism (see also \cite{ich-EE}).
\begin{figure}[t]
\vspace{-0.45cm}
\begin{center}
\includegraphics[width=0.61\textwidth]{21-2-new.eps}
\end{center}
\vspace{-0.9cm}
\caption{ \footnotesize{ Same as in figure \ref{12-1} but for
$11$ MeV/u He$^{2+}$ + He(1s$^2$) $\to$ He$^{+}$(1s) + He$^{2+}$ + e$^-$
collisions ($v=21$ a.u.). }}
\label{21-2}
\end{figure}
\begin{figure}[t]
\vspace{-0.45cm}
\begin{center}
\includegraphics[width=0.61\textwidth]{21-3-new.eps}
\end{center}
\vspace{-0.9cm}
\caption{ \footnotesize{ Same as in figure \ref{12-1} but for
$11$ MeV/u Li$^{3+}$ + He(1s$^2$) $\to$ Li$^{2+}$(1s) + He$^{2+}$ + e$^-$
collisions ($v=21$ a.u.). }}
\label{21-3}
\end{figure}
\vspace{0.25cm}
iii) Following the simple picture of the EET mechanism,
which was mentioned in the Introduction, one would expect that
in the rest frame of the target atom the EET is
characterized by electrons emitted with a velocity $v$ at
an angle of $90^\circ$ with respect to the motion of the projectile.
Correspondingly, the velocity of the electron with respect
to the projectile should be equal to $\sqrt{2} v$.
The latter value indeed
agrees with the simple estimate for the electron velocity
$u_e$ given above in this subsection
(if we assume that $ \Delta Q_{lg} = 0 $).
However, according to the spectra shown in figures
\ref{12-1}-\ref{21-3}, in the target frame
the velocity $v_{EET}$ of the emitted electron is on average
slightly smaller than $v$. Besides,
the angle $\vartheta_{EET}$, which characterises
the position of the ``center of mass'' of the EET maximum
in the momentum spectrum, is somewhat larger than $90^\circ$:
$\vartheta_{EET} = 90^\circ + \delta$,
where $\delta > 0$. Moreover, the differences
$(v - v_{EET})$ and $\delta $ increase
with increasing charge of the projectile
and/or decreasing the impact velocity.
The reason for this is that the simple picture does not
take into account that the electron which moves together
with the projectile is actually not free but bound \cite{foot_note_2}.
When the binding of the captured electron increases,
the differences between the result and
what the simple picture suggests also grow.
Only in the limit $v \to \infty$ does the position of
the EET maximum coincide with the prediction
of this picture (see also \cite{briggs}).
Assuming that at sufficiently high impact velocities
the nucleus of the target atom is a spectator in the EET mechanism,
one can find a simple relation between
the averaged velocity $v_{EET}$ of the emitted electron
and the angle $\delta $.
Indeed, in the rest frame of the projectile the energy
of this electron is approximately given by
$v^2$. Taking into account that the same energy
can also be expressed as
$\left( v + v_{EET} \sin \delta \right)^2/2 + (v_{EET} \cos \delta)^2/2
= v^2/2 + v_{EET}^2/2 + v v_{EET} \sin \delta$,
one obtains $v_{EET} \approx
v \left(\sqrt{1+\sin^2 \delta} - \sin \delta \right)
\approx v \left(1 - \delta \right)$.
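The relation just derived can be verified numerically for an illustrative angle $\delta$ (a sketch, not part of the reported calculation): $v_{EET}$ solves the quadratic implied by the projectile-frame energy balance, and for small $\delta$ it approaches $v(1-\delta)$.

```python
import math

def v_eet(v, delta):
    """Average emitted-electron speed in the EET channel, from the
    projectile-frame energy balance
    (v + v_EET sin d)^2 / 2 + (v_EET cos d)^2 / 2 = v^2."""
    s = math.sin(delta)
    return v * (math.sqrt(1.0 + s * s) - s)

v, delta = 21.0, 0.1          # illustrative angle offset (radians)
ve = v_eet(v, delta)

# direct check of the energy balance in the projectile frame
E_proj = 0.5 * (v + ve * math.sin(delta))**2 + 0.5 * (ve * math.cos(delta))**2
print(E_proj, v * v)          # the two numbers agree

# small-angle form: v_EET ~ v (1 - delta)
print(ve, v * (1.0 - delta))
```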
\vspace{0.25cm}
iv) Two more observations can be drawn from figures \ref{12-1}-\ref{21-3}.
First, for a fixed projectile charge state
the relative importance of the EET versus EEA increases with $v$.
Second, for a fixed collision velocity $v$
the EEA mechanism gains in relative importance
when the charge of the projectile increases.
The first observation can be understood noting that
the EEA and EET are basically first-
and second-order processes, respectively \cite{ich-EE}, \cite{briggs}.
As a result, the EET weakens more slowly with increasing $v$
than the EEA. Further, the dependence of
the EEA mechanism on $Z_p$ is a bit steeper ($ \sim Z_p^5$) \cite{ich-EE}
than that of the EET ($ \sim Z_p^5/(Z_p + Z_t \sqrt{2}) $) \cite{briggs}.
This enables one to understand the second observation.
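The quoted $Z_p$ dependences give a simple estimate of how the EEA/EET balance shifts with the projectile charge; the normalization is arbitrary, and only the trend in $Z_p$ at fixed $v$ is meant:

```python
import math

Zt = 2.0  # helium target

def eea_zscale(Zp):
    return Zp**5                                  # EEA ~ Zp^5

def eet_zscale(Zp):
    return Zp**5 / (Zp + Zt * math.sqrt(2.0))     # EET ~ Zp^5/(Zp + Zt sqrt(2))

# the ratio EEA/EET = Zp + Zt sqrt(2) grows with the projectile charge,
# so the EEA gains in relative importance from p+ to He2+ to Li3+
for Zp in (1.0, 2.0, 3.0):
    print(Zp, eea_zscale(Zp) / eet_zscale(Zp))
```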
\subsection*{ Dynamic versus stationary correlations }
Both the EEA and EET mechanisms crucially rely on
the Coulomb interaction between the electrons.
On the other hand, the electron-electron correlations
in the initial and final asymptotic states
of the colliding system are also a manifestation
of this interaction \cite{corr-in-init-state}.
The principal difference between them
is that while the correlations in
the asymptotic states are stationary in nature,
the EEA and EET are based on
the electron-electron interaction
in its dynamic variant.
An illustrative example of the correspondence between
dynamic and stationary manifestations of basically the same force
is represented by the interaction between an electron
and its electromagnetic field (the radiation field).
A stationary situation is realized,
for instance, when one considers
a free (undistorted) hydrogen atom in the ground state.
In this case the interaction with the radiation field
has quite a weak impact on the system: it merely leads
to a tiny shift of the energy of the ground state.
Let us consider, however, a situation when the hydrogen atom
collides with a fast ion. Now the same interaction
may lead to electron transfer from the atom to
the ion, which is called radiative electron capture.
In this dynamic situation the interaction with
the radiation field leads to a drastic change
in the state of the electron.
The difference between the stationary and
dynamic manifestations of the electron-electron interaction
is not that dramatic. Nevertheless, it is the dynamic electron
correlations (the EEA and EET),
which drive the process of transfer-ionization,
whereas the stationary ones, providing an ``environment'',
also influence the process by determining, for instance,
the mean electron-electron distance
in the initial atomic state
and, thus, the magnitude of the dynamic force
acting between the electrons in
the transfer process \cite{corr-final-state}.
These points can be seen
in figure \ref{corr-in-init-state}
where we present the contributions
to the momentum spectrum
due to the EEA and EET mechanisms
calculated with two different approximations
for the ground state of helium.
In the left plot the parameters
of the state (\ref{e3}) were taken as
$\alpha = \beta = 2$ and $\gamma = 0$, which means
that the electron-electron correlations
in this state are completely ignored.
The right plot was obtained by choosing
$\alpha=2.21$, $\beta=1.44$ and $\gamma=0.207$
which includes (in an approximate way)
both the radial and angular correlations
in the ground state of helium.
It is seen that there is practically no difference
in the shape of these two spectra. However,
their absolute intensities
differ by about a factor of $2$
since the electron-electron interaction
in the ground state of helium increases
the mean electron-nucleus distances in the atom
and, thus, decreases the effective strength
of the EEA and EET channels (see also \cite{ich-EE}).
\begin{figure}[t]
\vspace{-0.45cm}
\begin{center}
\includegraphics[width=0.71\textwidth]{corr-in-init-state.eps}
\end{center}
\vspace{-0.9cm}
\caption{ \footnotesize{ The calculated contribution
of the EEA and EET mechanisms to the momentum spectrum
of electrons emitted
in $11$ MeV p$^+$ + He(1s$^2$) $\to$ H(1s) + He$^{2+}$ + e$^-$
collisions ($v=21$ a.u.). The left panel:
$\alpha = \beta = 2$, $\gamma = 0$.
The right panel: $\alpha=2.21$, $\beta=1.44$
and $\gamma=0.207$. }}
\label{corr-in-init-state}
\end{figure}
\subsection*{Kinematics of the uncorrelated transfer-ionization channels}
To conclude our discussion in this section note
that Eq. (\ref{velocity}) is of course also
valid for the uncorrelated mechanisms.
In contrast to the correlated ones, however,
in this case we have $u_e \approx v$
and, hence, $ \Delta Q_{lg} \approx v/2 $.
Therefore, it is the nucleus of the atom which
balances (in the projectile frame)
the energy change of the captured electron,
$\Delta E = v \Delta Q_{lg} \approx v^2/2$,
both in the independent transfer-ionization and
capture--shake-off channels.
\section{Conclusions}
We have considered in some detail transfer-ionization
in collisions of fast protons, alpha-particles
and lithium nuclei with atomic helium.
There are four basic mechanisms which are responsible
for this process. Two of them (the independent transfer-ionization
and capture--shake-off) are so-called uncorrelated mechanisms,
which means that they would not disappear if
the electron-electron interaction were ``switched off''.
In contrast, this interaction does play a crucial role
in the other two (the electron-electron and electron-electron-Thomas)
mechanisms which both are governed by the
dynamic electron-electron correlations.
Our consideration shows that at sufficiently
high impact velocities the contributions of
the correlated and uncorrelated mechanisms
can be clearly separated in the cross section doubly
differential in the longitudinal and transverse
components of the momentum of the emitted electron.
The study of this cross section also enables one
to separate the two correlated mechanisms
from each other and get insight into
subtle details of the dynamics of transfer-ionization.
At high impact energies $v \gg Z_t, Z_p$
the position of the center of the maximum in the momentum spectrum,
caused by the EEA mechanism,
tends in the target frame to $k_{lg} = -0.4 v$.
This means that the role of
the nucleus of the atom in this mechanism weakens with increasing
collision velocity and the EEA eventually becomes
a truly electronic one. However, according to our model,
even at impact velocities as high as $12$ and $16$ a.u.
the helium nucleus still noticeably participates in this process.
According to the well known picture of the EET mechanism
the emitted electron should have a velocity equal
to the collision velocity $v$ and fly under
the angle $90^\circ$ with respect to the projectile motion.
Our model predicts, however, that the velocity
of the emitted electron is on average smaller than $v$ and
that the electron is emitted under an angle which
is larger than $90^\circ$. These two differences are interconnected
and increase if the charge of the projectile increases
and/or the impact velocity decreases.
An experimental exploration of the spectra of electrons
emitted in transfer-ionization at high impact
velocities is very desirable.
At the highest velocity ($v = 21$ a.u.), considered in this article,
the total cross section for transfer-ionization,
according to our estimates,
is of the order of $0.1$, $1$ and $10$ mb in collisions with protons,
alpha-particles and lithium nuclei, respectively. These values are of course
rather small. Note, however, that already several years ago
it was possible (see \cite{schmidt}) to measure
the longitudinal momentum spectrum of the recoil target ions
for transfer-ionization process with the total cross section
of the order of $1$ mb.
\section*{Acknowledgement}
A.B.V. acknowledges the support from the Extreme Matter Institute EMMI
and the program for visiting international senior scientists
of Chinese Academy of Sciences.
\section{Introduction}
In recent years, the semiclassical theory of
stochastic gravity has been taking shape \cite{BeiL} \cite{Bei2} and is
finding interesting applications as proposed for
black hole physics \cite{suk} \cite{seema} and cosmology \cite{cosverd}.
This relies on including the quantum stress tensor fluctuations in the
semiclassical Einstein equation, and looking upon its induced effects therein.
In another attempt at developing a stochastic theory
for gravity \cite{moffat}, a different approach to incorporating
stochastic effects has been proposed, which raises
the question of smearing out singularities in spacetimes of interest.
The basic difference
there lies in the way stochasticity is introduced. The approach that we
take up here is that of introducing stochasticity in a more physically
relevant fashion using the Langevin formulation, in terms of
randomness of the stress tensor itself. However, the applications of our
formalism are quite specific and have to be addressed clearly in terms of
the physical picture of randomness.
The theory of classical Brownian motion \cite{B1} \cite{FP} and
the elaborately formulated semiclassical stochastic gravity \cite{BeiL}
\cite{Bei2} mentioned above naturally
direct one towards asking whether a meaningful theory
of classical stochastic gravity can be formulated.
In what we propose, the modified Einstein equation includes the first order
fluctuations of the classical stress tensor, as shown in the subsequent
sections. This
has a very different coverage, in terms of developments and applications, from
the semiclassical Einstein-Langevin equation.
Physical insights and direct applicability to relativistic Astrophysics and
Cosmology make such a development quite desirable at this stage.
We put forward the idea of analysing a relativistic thermal
cloud of collisional gas, using the Classical Einstein-Langevin equation that
we propose in this article.
\section{Domain of valid applications: The randomness of the classical stress
tensor}
For astrophysical objects which can be described by different models
of the stress-energy tensor (namely perfect and imperfect fluids, classical
fields etc.), the statistical averages of the stress tensor are of
interest.
A specific example is that of a relativistic star modeled by a perfect
fluid. The microscopic particles of the fluid collide frequently, such that
their mean free path is short compared to the scale on which the density
changes. A mean stress tensor in this case can be defined. An observer
moving with the average (four-)velocity $u^\alpha$, which gives the mean
velocity field of the fluid, will see the collisions randomly distribute
the nearby particle velocities, so that the particle distribution appears
random variable and the source of stochasticity in such cases, are the
collisions of microscopic particles. This has been suggested in \cite{Bei1}.
Neutron stars dynamics, stability issues etc, require a
detailed knowledge of the stars' microphysics.
While working on perturbations of such system which
is an area of active interest, one can further think of including
these fluctuations since they give contributions which are
otherwise ignored.
The source of these fluctuations may necessarily be the microphysics
encompassing the quantum phenomena of the interior of the stars, which we may
be able to partially capture in fluctuations of the classical matter
variables. Here we work with
the simplest possible model of a perfect-fluid star; hence we
restrict ourselves to fluctuations of the pressure and energy density of the matter. It is
important to mention here that this analysis is very different from that of
the semiclassical case, where the stochasticity is based on specific
quantum fields coupled to the spacetime, and does not carry direct
implications for the classical counterpart. The classical
fluctuations of the stress tensor that we consider here have no correspondence
to the quantum fluctuations as treated in semiclassical stochastic gravity
with regard to the physical phenomena of the system under investigation.
Our endeavour is to enhance the study of perturbations, instabilities
and stellar oscillations as established in literature \cite{fried}, by
inclusion of the above-mentioned fluctuations.
It may also be possible to extend the same consistently, for a very
different scenario, namely the large scale structure of the universe.
The relevant
scales of interest there are expected to determine the fluctuations of the
stress tensor and the randomness. For such cases, one can then
phenomenologically put the fluctuations of $T^{ab}$ into the Einstein
equation, which specifies the noise, thus giving it the form of a classical
Einstein-Langevin equation.
\section{The Classical Einstein-Langevin Equation}
The simplest form of the E-L equation can be written as
\begin{equation} \label{el}
G^{ab}[g+h](x) = T^{ab}[g+h](x) + \tau^{ab}[g](x)
\end{equation}
where the fluctuations $\tau^{ab}[g]$, defined by $ \tau^{ab}(x)= (T^{ab}(x) -
<T^{ab}(x)>)$, are taken over the unperturbed background and satisfy
the condition $\nabla_a \tau^{ab}=0$.
Here the covariant derivative is taken w.r.t.\ the metric
$g_{ab}$. This ensures that the Einstein-Langevin equation
is covariantly conserved. The term $\tau^{ab}$ gives the equation a stochastic
form and is thus defined only through its expectation values. For the equation
to be meaningful we put $< \tau^{ab}(x) > = 0$, which holds for a Langevin-type
noise.
The perturbations that are induced by these fluctuations form the solution
of this equation.
The magnitude of fluctuations thus
defined, needs to be small enough to fulfill the criteria for validity of such
a treatment.
We are interested in the effect of these fluctuations on the
spacetime geometry.
This can be obtained by solving the above equation formally
for the perturbations $h_{ab}$ of the metric $g_{ab}$.
We shall aim in future work to develop methods of solution, which
are quite involved and would depend on specific models.
In other applications where the classical
fields associated with the body are of interest, one can take the average of
the stress tensor over these fields, e.g.\ electromagnetic, magnetic or
electric fields, or classical scalar fields in general.
It may be worth mentioning the difference between the averages taken here and
the expectations in the semiclassical theory.
A quantum stress tensor expectation $<\psi|\hat{T}^{\mu \nu}|\psi>$
over certain quantum states, as in the
semiclassical Einstein equation, may be treated as a classical average
\cite{holland}. Also, the issues of regularization etc.\ related
to the quantum stress tensor make the semiclassical theory very involved in
the mathematical developments regarding formulations and solutions of the
corresponding Einstein-Langevin equation.
The averages of the classical stress tensor that we consider here are
fundamentally different and quite simple. Here the underlying physics is in a
different domain, so this should not be confused with the averages in the
semiclassical case. Similarly, the applications of the semiclassical and
classical stochastic gravity do not overlap.
We give as an example of a gravitating system a collapsing relativistic
star, which can be modeled by the
classical Einstein-Langevin equation. The stochastic analysis of such a
system, in order to explore the statistical behavior of spacetime, is the
final aim of this study.
As we will see later, the fluctuations of pressure and density
induce
additional contributions to the metric perturbations, in addition to giving
these a stochastic nature. This gives scope for statistical analysis of
the spacetime structure in various contexts, including the near-critical-point
behaviour of collapse, stellar oscillations and so on.
Thus equation (\ref{el}) above can take the following form,
\begin{equation}
G^{ab}[g](x) + \delta G^{ab}[h](x) = T^{ab}[g](x) + \delta T^{ab}[h](x)
+ \tau^{ab}[g](x)
\end{equation}
which reduces to
\begin{equation}
\delta G^{ab}[h](x) = \delta T^{ab} [h](x) + \tau^{ab}[g](x)
\end{equation}
We assume the stochastic term to have the following properties, as it
describes Langevin noise:
\begin{equation}
< \tau^{ab}(x)> = 0 , < \tau^{ab}(x) \tau^{cd}(x')> = N^{abcd}(x,x')
\end{equation}
where $ab$ correspond to $x$ and $cd$ to $x'$. The fluctuations
denoted by $\tau^{ab}$ can be written as
\begin{equation}
\tau^{ab}(x) = T^{ab}(x) - < T^{ab}(x)>
\end{equation}
as mentioned earlier.
Thus the two-point correlation is given by
\begin{equation} \label{noise}
<\tau^{ab}(x) \tau^{cd}(x') > = < (T^{ab}(x) - < T^{ab}(x)>)(T^{cd}(x') -
<T^{cd}(x')>) >
\end{equation}
Here $<T^{ab}>$ is the statistical average of the classical stress
tensor, since this is treated as a random variable itself.
The bitensor $N^{abcd}(x,x')$ describes the two-point correlation and has
the following properties.
\begin{enumerate}
\item
\begin{equation}
N^{abcd}(x,x') = N^{cdab}(x',x)
\end{equation}
This is clear from eqn. (\ref{noise}).
\item
\begin{equation}
\nabla_a N^{abcd}(x,x') = \nabla_b N^{abcd}(x,x') =
\nabla_c N^{abcd}(x,x') = \nabla_d N^{abcd}(x,x')= 0
\end{equation}
This follows from the covariant conservation of $\tau^{ab}$.
\end{enumerate}
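A minimal discrete illustration of noise with these properties: zero-mean Gaussian samples of one tensor component on a spatial grid, whose empirical two-point correlation is symmetric under $x \leftrightarrow x'$ by construction. This toy sketch assumes uncorrelated Gaussian statistics and ignores the tensor indices and the covariant-conservation constraint:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy model: one stress-tensor component sampled at n_x grid points x,
# with n_s independent realizations of the noise
n_x, n_s = 8, 200_000
tau = rng.normal(loc=0.0, scale=1.0, size=(n_s, n_x))

mean = tau.mean(axis=0)     # estimate of <tau(x)>: vanishes as 1/sqrt(n_s)
N = tau.T @ tau / n_s       # estimate of N(x, x') = <tau(x) tau(x')>

print(np.max(np.abs(mean)))       # small
print(np.max(np.abs(N - N.T)))    # N is symmetric under x <-> x'
```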
The contributions coming out of the noise can be important, in addition to
giving the spacetime perturbations a stochastic nature, which is otherwise
ignored in a deterministic treatment.
In the next section we attempt to obtain induced perturbations of spacetime
geometry, which partially accounts for the solution of the E-L equation. This
may be seen as a heuristic calculation for spherically symmetric
perturbations of a relativistic star.
\section{Induced metric perturbations for a relativistic star}
For slowly rotating stars, the formalism starts with spherical stars and their
perturbations.
\subsection{The basic framework}
A spherically symmetric spacetime in Schwarzschild coordinates is of the form
\begin{equation}
ds^2 = - e^{2 \nu} dt^2 + e^{2 \lambda } dr^2 + r^2 d \Omega^2
\end{equation}
We discuss the spherical (radial) perturbations here. A more complete
analysis of general perturbations, which includes non-spherical
cases, is more involved in terms of solutions of the Einstein-Langevin
equation, and we reserve it for later work.
The formalism for obtaining perturbations of stars around an equilibrium
configuration lies at the core of the study of oscillations, stability issues
and critical points of collapse of massive stars. For motion that maintains
spherical symmetry, we intend to work with the perturbed
potentials $\delta \nu$ and $\delta \lambda$ describing the spacetime
geometry and the Lagrangian displacement of the fluid elements, denoted by
$ \xi^a $. Here we restrict ourselves to the $r$-dependence of $ \nu, \lambda
$ and to $\xi^a$ given by only one component, $\xi^r$.
The system is described by the perturbed Einstein-Euler equations
\cite{fried}.
The fluid's 3-velocity at fixed $r$ is given by
$ v = e^{\lambda - \nu} \dot{r} $ and its four-velocity takes the form
\begin{equation}
u^a = \frac{e^{-\nu} }{\sqrt{1-v^2}} (1,\dot{r},0,0) = \frac{1}{\sqrt{1-v^2}}
(e^{-\nu}, e^{-\lambda} v, 0,0 )
\end{equation}
The stress-energy tensor is given by
\begin{equation} \label{eq:stress1}
T^{ab} = (\epsilon+ p) u^a u^b + g^{ab} p
\end{equation}
The unperturbed components of field equation are given as
\begin{eqnarray}
G_{tt} = 8 \pi T_{tt} & : & \nonumber \\
- e^{2(\nu- \lambda)} (\frac{1}{r^2} - \frac{2}{r}
\lambda ' )& - & \frac{e^{2 \nu}}{r^2} = - 8 \pi e^{2 \nu}\epsilon
\frac{1}{1-v^2} \label{eq:1}\\
G_{rr} = 8 \pi T_{rr} & : & (\frac{1}{r^2} + \frac{2}{r} \nu' )
- \frac{e^{2 \lambda}}{r^2} = 8 \pi e^{2 \lambda}(\epsilon \frac{v^2}{1- v^2} +
\frac{p}{1-v^2}) \label{eq:2} \\
G_{tr} = 8 \pi T_{tr} & : & -\frac{2}{r} \dot{\lambda} = 8 \pi
e^{\lambda + \nu} (\epsilon + p) \frac{v}{1-v^2} \\
e^{ 2 \lambda} G_{\theta \theta} = 8 \pi e^{2 \lambda} T_{\theta \theta }
& : & \nonumber \\
& & \nu '' + \nu'^2 - \nu ' \lambda ' + \frac{1}{r} (\nu ' - \lambda ' )
= 8 \pi e^{2 \lambda} p
\end{eqnarray}
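In the static limit ($v=0$, $\dot{\lambda}=0$) these field equations reduce to the standard relations $e^{-2\lambda} = 1 - 2m(r)/r$ with $m' = 4\pi r^2 \epsilon$ and $\nu' = (m + 4\pi r^3 p)/(r(r-2m))$; combined with the Euler equation $p' = -(\epsilon + p)\nu'$ this is the familiar TOV system. As a hedged numerical sketch (geometric units $G=c=1$, illustrative stellar parameters, not a model used in this paper), one can integrate it for a uniform-density star starting from the analytic Schwarzschild-interior central pressure $p_c = \epsilon(1-y)/(3y-1)$, $y=\sqrt{1-2M/R}$, and check that the pressure vanishes at the surface:

```python
import math

def tov_uniform(eps, R, n_steps=20000):
    """Integrate the TOV equation for constant energy density eps
    from the center out to radius R (geometric units G = c = 1),
    starting from the analytic Schwarzschild-interior central pressure."""
    M = 4.0 * math.pi * eps * R**3 / 3.0
    y = math.sqrt(1.0 - 2.0 * M / R)
    p_c = eps * (1.0 - y) / (3.0 * y - 1.0)    # analytic central pressure

    def dpdr(r, p):
        if r == 0.0:
            return 0.0                          # regular center
        m = 4.0 * math.pi * eps * r**3 / 3.0    # mass function
        return -(eps + p) * (m + 4.0 * math.pi * r**3 * p) / (r * (r - 2.0 * m))

    h = R / n_steps
    r, p = 0.0, p_c
    for _ in range(n_steps):                    # classic RK4
        k1 = dpdr(r, p)
        k2 = dpdr(r + 0.5 * h, p + 0.5 * h * k1)
        k3 = dpdr(r + 0.5 * h, p + 0.5 * h * k2)
        k4 = dpdr(r + h, p + h * k3)
        p += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
        r += h
    return p_c, p

p_c, p_R = tov_uniform(eps=1.0e-3, R=8.0)
print(p_c, p_R)   # the surface pressure p(R) should vanish
```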
\subsection{The perturbed equations} \label{sec:pertpot}
The perturbed equations, including the fluctuations of the
random stress tensor are given by (corresponding to the first two equations
above)
\begin{eqnarray}
G^{(1)}_{tt} = 8 \pi T^{(1)}_{tt}(x) + \tau_{tt}(x) & : & \nonumber \\
e^{-2 \lambda }\{(2 \delta \lambda )(\frac{1}{r^2} -
\frac{2}{r} \lambda ' ) - \frac{2}{r} \delta{\lambda'} \} = - 8 \pi
(\delta \epsilon ) \frac{1}{1-v^2} + 8 \pi \tau_{tt} \\
G^{(1)}_{rr}(x) = 8 \pi T^{(1)}_{rr}(x) + 8 \pi \tau_{rr}(x)
& : & \nonumber \\
e^{-2 \lambda} [ \frac{1}{r} \delta \nu' - (\frac{2}{r} \nu' +
\frac{1}{r^2}) \delta \lambda ] = 4 \pi \delta p + 4 \pi \tau_{rr} & &
\label{eq:delnu} \\
G^{(1)}_{tr}(x) = 8 \pi T^{(1)}_{tr}(x) + 8 \pi \tau_{tr}(x) & : & \nonumber \\
\frac{2}{r} \dot{( \delta \lambda)} = - 8 \pi e^{2 \lambda }
(\epsilon + p ) \dot{\xi} + 8 \pi \tau_{tr} \label{eq:delam}
\end{eqnarray}
The rest of the perturbed equations can be obtained similarly.
From equation (\ref{eq:delam})
\begin{equation}
\delta \lambda = - 4 \pi r e^{2 \lambda }
(\epsilon + p ) \xi + 4 \pi r \int \tau_{tr} dt
\end{equation}
From equation (\ref{eq:1}) and (\ref{eq:2}) one can write
\begin{equation}
\nu' + \lambda' = 4 \pi (\epsilon + p) e^{2 \lambda} r
\end{equation}
thus giving
\begin{equation} \label{eq:delamm}
\delta \lambda = - (\nu' + \lambda') \xi + 4 \pi r \int \tau_{tr} dt
\end{equation}
The expressions for Eulerian changes in the fluid variables are
\begin{eqnarray}
\delta v & = & \delta(e^{\lambda - \nu} \dot{r})
= e^{\lambda- \nu} \dot{\xi} \\
\delta \epsilon & = & \Delta \epsilon - \xi \epsilon' \\
\delta p & = & \Delta p - \xi p'
\end{eqnarray}
with
\begin{eqnarray}
\Delta \epsilon = -\frac{1}{2} (\epsilon + p ) q^{ab}\Delta g_{ab}
\end{eqnarray}
with $\Delta g_{ab} = h_{ab} + \nabla_a \xi_b + \nabla_b \xi_a $, and
$ q^{ab} = u^a u^b + g^{ab} $ .
and
\begin{equation}
\Delta p = -\frac{1}{2} \Gamma_1 p q^{ab} \Delta g_{ab}
\end{equation}
where $\Gamma_1$ is the adiabatic index given by
\[ \Gamma_1 = \frac{\epsilon + p}{p} \frac{dp}{d \epsilon} \]
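For a barotropic equation of state the adiabatic index can be evaluated directly; e.g. for an (illustrative) polytrope $p = K\epsilon^\gamma$ one finds $\Gamma_1 = \gamma(\epsilon+p)/\epsilon$, which tends to $\gamma$ in the limit $p \ll \epsilon$:

```python
def gamma1(eps, p, dp_deps):
    """Adiabatic index Gamma_1 = (eps + p)/p * dp/deps."""
    return (eps + p) / p * dp_deps

# polytrope p = K eps^gamma  =>  dp/deps = gamma p / eps  (illustrative K, gamma)
K, gamma = 100.0, 2.0
eps = 1.0e-3
p = K * eps**gamma
g1 = gamma1(eps, p, gamma * p / eps)
print(g1, gamma * (eps + p) / eps)   # identical expressions
```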
In our case we easily obtain
\begin{equation}
q^{ab} \Delta g_{ab} = 2 \delta \lambda - \frac{e^{-\lambda}}{r^2}
[e^\lambda r^2 \xi]' = \frac{2}{r^2} e^\nu( e^{- \nu} r^2 \xi)' - 16 \pi r
\int \tau_{tr} dt
\end{equation}
thus giving
\begin{eqnarray}
\delta p & = & - \Gamma_1 p \frac{1}{r^2} e^{\nu}( e^{-\nu} r^2 \xi)'
+ 8 \pi \Gamma_1 p r \int \tau_{tr} dt - \xi p' \\
\delta \epsilon & = & -(\epsilon+ p) \frac{e^{\nu} }{r^2} (e^{-\nu} r^2 \xi)'
+ 8 \pi (\epsilon + p ) r \int \tau_{tr} dt - \epsilon' \xi
\end{eqnarray}
From equation (\ref{eq:delnu}) one obtains,
\begin{equation}
\delta \nu(t,r) = \int f_1(r) dr + 4 \pi \int f_2(r) (\int \tau_{tr} dt) dr
+ 4 \pi \int r \tau_{rr} dr
\end{equation}
where
\[ f_1(r) =e^{2 \lambda } \{[-(\nu' + \lambda')\xi](\frac{1}{r} + 8 \pi p)
+ 4 \pi r[- \Gamma_1 p \frac{1}{r^2} e^{\nu}(e^{-\nu} r^2 \xi)' - \xi p' ] \}\]
and
\[ f_2(r) = e^{2 \lambda} r \{ 2 \Gamma_1 p - \frac{1}{r} - 8 \pi r p \} \]
The expression for $\delta \lambda$ as obtained in equation (\ref{eq:delamm})
has been used in the above analysis. However, in order to be able
to see the effect on two point correlations and the effect of noise, another
expression for the same as obtained from equation (\ref{eq:1}) would be useful.
This can be shown to be of the form
\begin{equation}
\delta \lambda(t,r) = \int m_1(r) dr + 8 \pi \int m_2(r) (\int \tau_{tr} dt) dr
+ 8 \pi \int \frac{e^{\lambda}}{r^{1/2}} \tau_{tt} dr
\end{equation}
where
\[ m_1 = \frac{1}{1-v^2} \frac{1}{r^{1/2}} [ 2 (\nu'+\lambda') e^{\nu}(e^{-\nu}
r^2 \xi)' + 8 \pi e^{\lambda} \xi \epsilon']
\]
and
\[ m_2 = \frac{2 e^{\lambda}}{r^{1/2}} \frac{1}{1-v^2} (\nu' + \lambda') e^{-2
\lambda} \]
\subsection{Model of Noise}
The model of noise that we use in the above decides the stochastic behaviour
of the perturbations of the metric.
For the perfect fluid stress tensor in spherically symmetric
spacetime given by (\ref{eq:stress1}) we have
\begin{eqnarray}
\tau_{tt} (x) & = & g_{tt}(x) \gamma_\epsilon(x) \nonumber \\
\tau_{rr}(x) & = & g_{rr}(x) \gamma_ p(x) \nonumber \\
\tau_{tr}(x) & = & u_t(x) u_r(x) (\gamma_\epsilon (x) + \gamma_p (x))
\nonumber \\
\tau_{\theta \theta}(x) & = & g_{\theta \theta}(x) \gamma_p(x) \nonumber \\
\tau_{\phi \phi}(x) & = & g_{\phi \phi}(x) \gamma_p(x)
\end{eqnarray}
where we denote the fluctuations in pressure and density by $\gamma_p$ and
$\gamma_\epsilon$. We note that for the stress tensor that
we have considered in this article, the above are all of the
non-vanishing components of the fluctuations.
For our noise model we assume $< \gamma_p> =
<\gamma_\epsilon> = 0$, which implies the vanishing of $<\tau_{ab}>$ as
expected.
The two point correlations of $\tau_{ab}$, namely the Noise Kernel, can then be
defined as
\begin{equation}
N_{abcd}(x,x') = K_{abcd}(x,x') <\gamma_i (x) \gamma_j(x')> \mbox{ ; where }
\{i,j \} =\{ p, \epsilon \}
\end{equation}
with the following relevant components for the model:
\begin{eqnarray}
& & K_{tttt}(x,x') = g_{tt}(x) g_{tt}(x'); K_{rrrr}(x,x') = g_{rr}(x)g_{rr}(x')
\nonumber \\
& & K_{trtr}(x,x') = u_t(x)u_t(x')u_r(x)u_r(x'); K_{trcd}(x,x') =
u_t(x)u_r(x) g_{cd}(x') \nonumber \\
& & K_{rr\theta \theta}(x,x') = g_{rr}(x) g_{\theta \theta}(x');
K_{rr\phi \phi}(x,x') = g_{rr}(x) g_{\phi \phi}(x') \nonumber \\
& & K_{\theta\theta\phi\phi}(x,x') = g_{\theta \theta}(x) g_{\phi \phi}(x')
\end{eqnarray}
We assume no correlation between pressure and density fluctuations, for
simplicity. A Gaussian distribution may be a well-suited model for this noise,
such that all the higher order correlations can be defined in terms of
the two point correlations. A coloured noise model may be more favourable
than white noise in the case of curved spacetime.
To be consistent with our simplified case of spherically symmetric
spacetime and radial perturbations of fluid variables, one can
clearly see that $g_{tt}$ and $g_{rr}$ are functions of $r$, with pressure
and density having $t,r$ dependence.
It would be meaningful here to choose a $\delta$- correlation for
$r$ dependence in the noise model while keeping the $t$ dependence such that
$<\gamma_p(t,r) \gamma_p(t,r')> = p_0 \mu(t-t') \delta(r-r')$ and
$<\gamma_\epsilon(t,r) \gamma_\epsilon(t',r')> = q_0 \mu(t-t') \delta(r-r')$.
Here $p_0$ and $q_0$ define the strength of the correlations.
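For intuition, the correlation structure above can be simulated numerically. The Python sketch below is purely illustrative and not part of the analysis: it assumes a uniform $(t,r)$ grid, the specific exponential kernel $\mu(t-t') = e^{-|t-t'|/t_c}$, and discretizes $\delta(r-r')$ as independence across radial cells with variance $p_0/\Delta r$; the time correlation is realized by an Ornstein-Uhlenbeck update at each radius.

```python
import numpy as np

def sample_gamma(n_t, n_r, dt, dr, strength, t_corr, rng):
    """Sample gamma(t_i, r_j) with <gamma(t,r) gamma(t',r')> =
    strength * exp(-|t-t'|/t_corr) * delta(r-r').

    The delta correlation in r is discretized as independence across cells
    with variance strength / dr; the exponential time kernel is realized by
    a stationary Ornstein-Uhlenbeck update at each radial cell.
    """
    a = np.exp(-dt / t_corr)          # one-step OU decay factor
    sigma = np.sqrt(strength / dr)    # stationary std per radial cell
    g = np.empty((n_t, n_r))
    g[0] = sigma * rng.standard_normal(n_r)
    for i in range(1, n_t):
        # preserves the stationary variance: var = a^2 var + (1 - a^2) sigma^2
        g[i] = a * g[i - 1] + sigma * np.sqrt(1 - a**2) * rng.standard_normal(n_r)
    return g

rng = np.random.default_rng(0)
g = sample_gamma(n_t=2000, n_r=64, dt=0.1, dr=0.05, strength=1.0, t_corr=0.5, rng=rng)
```

Each radial cell then has autocorrelation $e^{-|\Delta t|/t_c}$ while distinct cells are uncorrelated, mimicking the $\mu(t-t')\,\delta(r-r')$ structure assumed above.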
\subsection{The two point correlations of potentials}
Using the perturbed potentials as obtained in section (\ref{sec:pertpot})
and the noise model described above, the two point correlations are
\begin{eqnarray}
& & <\delta \nu(t,r) \delta \nu(t,r')> =\int \int f_1(r) f_1(r') dr dr' +
(8 \pi)^2 p_0 \int e^{4 \lambda(r)} \{ \mu(t-t') r^2 - \nonumber \\
& & f_2 (r) \int \dot{r} \mu(t-t') dt \} dr \label{eq:nunu} \\
& & <\delta \lambda(t,r) \delta \lambda(t',r')> = \int \int m_1(r) m_1(r')dr
dr' + (8 \pi)^2 q_0 \int \frac{e^{2 \lambda(r)}}{r} \nonumber \\
& & \{ \mu(t-t') + m_2(r) r^{1/2} e^{(\lambda(r) + 2 \nu(r))} \int \dot{r}
\mu(t-t') dt \} dr \label{eq:ll}
\end{eqnarray}
In evaluating the above expressions, we have assumed $dr/dt'=0$, while
$\dot{r}= dr/dt$ remains non-vanishing. The second term in equation
(\ref{eq:nunu}) and (\ref{eq:ll}) can be seen to arise out of
the fluctuations of the stress tensor. These are additional contributions
to the perturbations, which arise out of stochasticity taken into
account in the model. The strength of the noise $q_0$ and $p_0$
decides the magnitude of these, and we will elaborate more on this and on
a fluctuation-dissipation theorem in an upcoming article. These are deeper
issues, and need to be worked upon separately, while considering a
solution of the Einstein-Langevin equation in terms of metric
perturbations $h_{ab}$ rather than just the potentials. It may then be
more appropriate to give physically relevant results in this
context.
Here, our purpose has been to establish this formalism and state a few basic
consequences of including the effects of fluctuations of the
classical stress tensor.
It may be of interest to associate other meaningful models
of noise for relativistic stars and seek the corresponding behavior.
\section{Further Directions}
Though the framework here is inspired by semiclassical stochastic gravity,
developments in classical Brownian motion should be looked to for first
principles. It may be
interesting to see whether general relativity in this (statistical) domain
would have, at the fundamental level, a much richer structure and content than
the theory of Brownian motion in Newtonian physics. This can be expected due
to the curved spacetime here, which gets coupled to matter.
There is an indication towards this from certain domains where the effect of
the underlying geometry of space appears, as in fractal structures, which come
up in a very interesting way \cite{seemaadg}. One can seek to
analyse the Einstein-Langevin equation in terms of Markovian or non-Markovian
criteria, in addition to the background spacetime structure affecting the
same.
Apart from this general interest in stochastic theory development, the
present article directs one towards a well-defined research program which
includes the following:
\begin{itemize}
\item A formal solution of the classical Einstein-Langevin equation in
terms of a general perturbation of the background spacetime metric, to be
obtained by devising suitable methods. Solvable models have to be
identified for which this could be done analytically, along with cases
where one may need numerical solutions.
\item A few cases of interest would be the rotating neutron star and binary
star systems, which are of interest for gravitational waves.
\item
This analysis, with additional contributions to the spacetime
structure and its perturbations, can also be used to study
instabilities in collapsing bodies. A formal study towards
non-equilibrium statistical physics can be provided in this context.
The instabilities in
gravitational collapse of relativistic stars (viz.\ a neutron
star or a system of neutron stars) would be amenable to analysis
in terms of studying the correlations of the stress tensor
fluctuations and their effects near different critical regions during the
collapse scenario.
\item Another question that can be raised is about an associated
radiation of classical waves that a gravitating body such as a black hole
may emit, that of superradiance. The contribution
of fluctuations to superradiance and its behaviour may be an interesting
direction to probe, in order to see if it gives significant results.
\item For complete gravitational collapse, where one necessarily gets
a singularity as the end state, these
stochastic contributions can play an interesting role in deciding the
stability criteria for the end states. This can modify the presently
established results regarding parameter values that decide the occurrence of
a naked or covered singularity. It may also enhance the area by giving it
a way to work out issues using statistical physics.
\end{itemize}
These few possible applications of the proposed classical theory of stochastic
gravity are mentioned to physically motivate such a
formulation and to bring to notice the directions where it would lead to
meaningful study. One can always suggest many more sub-areas and
problems in astrophysics or cosmology where this may find valid
applications.
Introducing the idea
of such an approach, that of a Langevin equation in classical gravity, is
just the very first step. It necessarily needs to be followed by
appropriate solutions of the Einstein-Langevin equation, including
mathematical developments for specific cases.
This is our endeavour in the immediate future.
\section*{Acknowledgements}
Seema Satin is thankful to Bei Lok Hu, Sukanta Bose,
and T. Padmanabhan for helpful discussions.
\section{Introduction}
\label{sec:introduction}
Clustering is a fundamental primitive in the realms of
data management and machine learning, with applications in a
large spectrum of domains such as database search, bioinformatics,
pattern recognition, networking, operations research, and many more
\cite{HennigMMR15}. A prominent clustering subspecies
is \emph{center-based clustering} whose goal is to
partition a set of data items into $k$ groups, where $k$ is an input
parameter, according to a notion of similarity, captured by a given
measure of closeness to suitably chosen representatives, called
centers. There is a vast and well-established literature on sequential
strategies for different instantiations of center-based clustering
\cite{AwasthiB15}. However, the explosive growth of data that needs to
be processed often rules out the use of these sequential strategies,
which are often impractical on large data sets, due to
their time and space requirements. Therefore, it is of paramount
importance to devise efficient distributed clustering strategies
tailored to the typical computational frameworks for big data
processing, such as MapReduce \cite{LeskovecRU14}.
In this paper, we focus on the \emph{$k$-median} and \emph{$k$-means}
clustering problems. Given a set $P$ of points in a general metric
space and a positive integer $k \leq |P|$, the $k$-median (resp.,
$k$-means) problem requires to find a subset $S \subseteq P$ of $k$
points, called \emph{centers}, so that the sum of all distances
(resp., squared distances) from the points of $P$ to their closest
center is minimized. Once $S$ is determined, the association of
each point to its closest center naturally defines a clustering of
$P$. While scarcely meaningful for general metric spaces,
for Euclidean spaces, the widely studied \emph{continuous} variant of these two
problems removes the constraint that $S$ is a subset of $P$, hence
allowing a much richer choice of centers from the entire space. Along
with \emph{$k$-center}, which requires to minimize the maximum
distance of a point to its closest center, $k$-median and $k$-means
are the most popular instantiations of center-based clustering, whose
efficient solution in the realm of big data has attracted vast
attention in the recent literature
\cite{EneIM11,BahmaniMVKV12,BalcanEL13,Song0H17,CeccarelloPP19}.
One of the reference models for big data computing, also adopted in
most of the aforementioned works, is MapReduce
\cite{DeanG08,PietracaprinaPRSU12,LeskovecRU14}, where a set of
processors with limited-size local memories process data in a sequence
of parallel rounds. Efficient MapReduce algorithms should aim at
minimizing the number of rounds while using substantially sublinear
local memory.
A natural approach to solving large instances of combinatorial
optimization problems relies on the extraction of a much smaller
``summary'' of the input instance, often dubbed \emph{coreset} in the
literature \cite{Har-Peled2004}, which embodies sufficient information
to enable the extraction of a good approximate solution of the whole
input. This approach is profitable whenever the (time and space)
resources needed to compute the coreset are considerably lower than
those required to compute a solution by working directly on the input
instance. Coresets with different properties have been studied in the
literature to solve different variants of the aforementioned
clustering problems \cite{Philips2016}.
The main contributions of this paper are novel coreset-based
space/round-efficient MapReduce algorithms for $k$-median and
$k$-means.
\subsection{Related work}
The $k$-median and $k$-means clustering problems in general metric
spaces have been extensively studied, and constant approximation
algorithms are known for both problems \cite{AwasthiB15}.
In recent years, there has been growing interest in the
development of distributed algorithms to attack these problems in
the big data scenario (see \cite{Song0H17} and references therein).
While straightforward parallelizations of known iterative sequential
strategies tend to be inefficient due to high round complexity, the
most relevant efforts to date rely on distributed constructions of
coresets of size much smaller than the input, upon which a sequential
algorithm is then run to obtain the final solution.
Ene et al.\ \cite{EneIM11} present a randomized MapReduce algorithm
which computes a coreset for $k$-median of size $O(k^2 |P|^{\delta})$
in $O(1/\delta)$ rounds, for any $\delta \in (0,1)$. By using an
$\alpha$-approximation algorithm on this coreset, a weak
$(10\alpha+3)$-approximate solution is obtained. In the paper, the
authors claim that their approach extends also to the $k$-means
problem, but do not provide the analysis. For this latter problem,
in \cite{BahmaniMVKV12} a parallelization of the popular
$k$-means++ algorithm by \cite{ArthurV07} is presented, which builds an
$O(k \log |P|$)-size coreset for $k$-means
in $O(\log |P|)$ rounds. By running an $\alpha$-approximation
algorithm on the coreset, the returned solution features
an $O(\alpha)$ approximation ratio.
A randomized MapReduce algorithm for $k$-median has been
recently presented in
\cite{Song0H17}, where the well known local-search PAM algorithm
\cite{Kaufmann1987} is employed to extract a small family of
possible solutions from random samples of the input. A suitable
refinement of the best solution in the family is then
returned. While extensive experiments support the
effectiveness of this approach in practice, no tight theoretical
analysis of the resulting approximation quality is provided.
In the continuous setting, Balcan et al.\ \cite{BalcanEL13} present
randomized 2-round algorithms to build coresets in $\mathbb{R}^d$ of
size $O\left(\frac{kd}{\epsilon^2}+Lk\right)$ for $k$-median, and
$O\left(\frac{kd}{\epsilon^4 }+Lk\log({Lk})\right)$ for $k$-means,
for any choice of $\epsilon \in (0,1)$,
where the computation is distributed among $L$ processing elements.
By using an $\alpha$-approximation algorithm on the coresets, the
overall approximation factor is $\alpha+O(\epsilon)$.
For $k$-means, a recent improved construction
yields a coreset which is a factor $O(\epsilon^2)$ smaller and features a
very fast distributed implementation \cite{BachemLK18}. It is not
difficult to show that a straightforward adaptation of these
algorithms to general spaces (hence in a non-continuous setting) would
yield $(c \cdot \alpha+ O(\epsilon))$-approximations, with $c \geq 2$,
thus introducing a non-negligible gap with respect to the quality of
the best sequential
approximations.
Finally, it is worth mentioning that there is a rich literature
on sequential coreset constructions for $k$-median and $k$-means,
which mostly focus on the continuous case in Euclidean spaces
\cite{Feldman2011,Har-Peled2004,Har-Peled2005,SohlerW18,CohenCK18}.
We do not review the results in these works since our focus is on
distributed algorithms in general metric spaces. We also note that the recent work of \cite{Huang2018}
addresses the construction of coresets for $k$-median and $k$-means in general metric spaces, where the
coreset sizes are expressed as a function of the doubling dimension. However, their construction strategy is
rather complex and it is not clear how to adapt it to the distributed setting.
\subsection{Our contribution}
We devise new distributed coreset constructions and show how to employ
them to yield accurate space-efficient 3-round MapReduce algorithms for
$k$-median and $k$-means. Our
coresets are built in a \emph{composable} fashion \cite{IndykMMM14} in
the sense that they are obtained as the union of small local coresets
computed in parallel (in 2 MapReduce rounds) on distinct subsets
of a partition of the input. The final solution is obtained by
running a sequential approximation algorithm on the coreset in the
third MapReduce round. The memory requirements of our
algorithms are analyzed in terms of the desired approximation
guarantee, and of the \emph{doubling dimension} $D$ of the underlying
metric space, a parameter which generalizes the dimensionality of
Euclidean spaces to general metric spaces and is thus related to the
increasing difficulty of spotting good clusterings as the parameter
$D$ grows.
Let $\alpha$ denote the best approximation ratio attainable
by a sequential algorithm for either $k$-median or $k$-means on general metric
spaces. Our main results are
3-round $(\alpha+O(\epsilon))$-approximation
MapReduce algorithms for $k$-median and $k$-means, which require
$O(|P|^{2/3}k^{1/3}(c/\epsilon)^{2D} \log^2{|P|})$ local memory, where
$c>0$ is a suitable constant that will be specified in the analysis,
and $\epsilon \in (0,1)$ is a user-defined precision parameter.
To the best of our knowledge, these are the first MapReduce algorithms
for $k$-median and $k$-means in general metric spaces which feature
approximation guarantees that can be made arbitrarily close to those
of the best sequential algorithms, and run in few rounds using local
space substantially sublinear for low-dimensional spaces.
In fact, prior to our work existing MapReduce algorithms for
$k$-median and $k$-means in general metric spaces
either exhibited approximation factors
much larger than $\alpha$ \cite{EneIM11,BahmaniMVKV12},
or missed a tight theoretical analysis of the approximation factor
\cite{Song0H17}.
Our algorithms revolve around novel coreset constructions somehow
inspired by those proposed in \cite{Har-Peled2004} for Euclidean
spaces. As a fundamental tool, the constructions make use of a
procedure that, starting from a set of points $P$ and a set of centers
$C$, produces a (not much) larger set $C'$ such that for any point $x
\in P$ its distance from $C'$ is significantly smaller than its
distance from $C$. Simpler versions of our constructions can also be
employed to attain 2-round MapReduce algorithms for the continuous
versions of the two problems, featuring $\alpha+O(\epsilon)$
approximation ratios. While similar approximation guarantees have
already been achieved in the literature using more space-efficient but randomized
coreset constructions \cite{BalcanEL13,BachemLK18}, this result provides evidence of the general
applicability of our novel approach.
Finally, we want to point out that a very desirable feature of our
MapReduce algorithms is that they do not require a priori knowledge of
the doubling dimension $D$ and, in fact, it is easily shown that they
adapt to the dimensionality of the dataset which, in principle, can be
much lower than the one of the underlying space.
\vspace*{0.3cm}
\noindent
{\bf Organization of the paper.} The rest of the paper is organized
as follows. Section~\ref{sec:preliminaries} contains a number of
preliminary concepts, including various properties of coresets that
are needed to achieve our results.
Section~\ref{section:coresetconstruct} presents our novel coreset
constructions for $k$-median
(Subsection~\ref{subsection:coresetkmedian}) and $k$-means
(Subsection~\ref{subsection:coresetkmeans}). Based on these
constructions, Subsection~\ref{subsection:mapreducefinal} derives the
MapReduce algorithms for the two problems. Finally,
Section~\ref{sec:conclusions} offers some concluding remarks.
\section{Preliminaries}
\label{sec:preliminaries}
Let $\mathcal{M}$ be a metric space with distance function
$d(\cdot,\cdot)$. We define the \emph{ball of radius $r$ centered at
$x$} as the set of points at distance at most $r$ from
$x$. The \emph{doubling dimension} of $\mathcal{M}$ is the smallest
integer $D$ such that for any $r$ and $x \in \mathcal{M}$, the ball of
radius $r$ centered at $x$ can be covered by at most $2^D$ balls of
radius $r/2$ centered at points of $\mathcal{M}$. Let $x \in
\mathcal{M}$ and $Y \subseteq \mathcal{M}$. We define $d(x,Y) =
\min_{y \in Y}d(x,y)$ and $x^Y = \argmin_{y \in Y}d(x,y)$. A set of
points $P \subseteq \mathcal{M}$ can be weighted by assigning a
positive integer $w(p)$ to each $p \in P$.
In this case, we will use the notation $P_w$ (note that an
unweighted set of points can be considered weighted with unitary weights).
Let $X_w$ and $Y$ be two subsets of $\mathcal{M}$. We define
$\nu_{X_w}(Y) = \sum_{x \in X_w}w(x)d(x,Y)$ and $\mu_{X_w}(Y) =
\sum_{x \in X_w}w(x)d(x,Y)^2$. The values $\nu_{X_w}(Y)$ and $\mu_{X_w}(Y)$
are also referred to as \textit{costs}.
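As a concrete illustration of these definitions (not code from the paper), the following Python sketch evaluates $\nu_{X_w}(Y)$ and $\mu_{X_w}(Y)$ for weighted points in the Euclidean plane; the helper names `nu` and `mu` and the choice of the Euclidean metric are assumptions made for the example.

```python
import math

def dist(x, y):
    # Euclidean distance; any metric d(.,.) could be substituted here
    return math.dist(x, y)

def nu(X_w, Y):
    """nu_{X_w}(Y): sum over weighted points of w(x) * d(x, Y) (k-median cost)."""
    return sum(w * min(dist(x, y) for y in Y) for x, w in X_w)

def mu(X_w, Y):
    """mu_{X_w}(Y): sum over weighted points of w(x) * d(x, Y)^2 (k-means cost)."""
    return sum(w * min(dist(x, y) for y in Y) ** 2 for x, w in X_w)

X_w = [((0.0, 0.0), 1), ((2.0, 0.0), 3), ((0.0, 1.0), 2)]  # (point, weight) pairs
Y = [(0.0, 0.0), (2.0, 1.0)]                               # candidate centers
```

An unweighted point set is recovered by giving every point weight $1$, matching the convention stated above.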
In the \emph{$k$-median problem} (resp., \emph{$k$-means problem}), we
are given in input an instance $\mathcal{I} = (P,k)$, with $P \subseteq
\mathcal{M}$ and $k$ a positive integer. A set $S \subseteq P$ is a
solution of $\mathcal{I}$ if $|S| \leq k$. The objective is to find the
solution $S$ with minimum cost $\nu_{P}(S)$ (resp., $\mu_{P}(S)$).
Given an instance $\mathcal{I}$ of one of these two problems, we
denote with $\texttt{\rm opt}_\mathcal{I}$ its optimal solution. Moreover, for $\alpha
\geq 1$, we say that $S$ is an \emph{$\alpha$-approximate solution}
for $\mathcal{I}$ if its cost is within a factor $\alpha$ from the
cost of $\texttt{\rm opt}_\mathcal{I}$. In this case, the value $\alpha$ is also called
approximation factor. An \emph{$\alpha$-approximation algorithm}
computes an $\alpha$-approximate solution for any input instance. The
two problems are immediately generalized to the case of weighted
instances $(P_w,k)$. In fact, all known approximation algorithms can
be straightforwardly adapted to handle weighted instances while keeping the
same approximation quality.
Observe that the squared distance does not satisfy the triangle
inequality. During the analysis, we will use the following weaker
bound.
\begin{proposition}
\label{proposition:squareddistance}
Let $x,y,z \in \mathcal{M}$. For every $c>0$
we have that $d(x,y)^2 \leq (1+1/c)d(x,z)^2 + (1+c)d(z,y)^2$.
\end{proposition}
\begin{proof}
Let $a,b$ be two real numbers. Since $(a/\sqrt{c}-b\cdot\sqrt{c})^2 \geq 0$, we obtain that $2ab \leq a^2/c + c \cdot b^2 $. Hence, $(a+b)^2 \leq (1+1/c)a^2 + (1+c)b^2$. The proof follows since
$d(x,y)^2 \leq \left[ d(x,z)+d(z,y)\right]^2$ by triangle inequality.
\end{proof}
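The bound in the proposition above is easy to sanity-check numerically; the following sketch (illustrative only, with the planar Euclidean metric as an assumption) verifies it on random triples of points for several values of $c$.

```python
import math
import random

def bound_holds(x, y, z, c):
    # d(x,y)^2 <= (1 + 1/c) d(x,z)^2 + (1 + c) d(z,y)^2
    lhs = math.dist(x, y) ** 2
    rhs = (1 + 1 / c) * math.dist(x, z) ** 2 + (1 + c) * math.dist(z, y) ** 2
    return lhs <= rhs + 1e-12  # slack for floating-point round-off

random.seed(0)

def pt():
    return (random.uniform(-1.0, 1.0), random.uniform(-1.0, 1.0))

all_hold = all(
    bound_holds(pt(), pt(), pt(), c)
    for _ in range(1000)
    for c in (0.1, 1.0, 10.0)
)
```

For $c = 1$ the bound reduces to $(a+b)^2 \leq 2a^2 + 2b^2$, which is tight for collinear points with $d(x,z) = d(z,y)$.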
A coreset is a small (weighted) subset of the input which summarizes
the whole data. The concept of summarization can be captured with the
following definition, which is commonly adopted to describe coresets
for $k$-means and $k$-median (e.g., \cite{Har-Peled2004,Feldman2011,Huang2018}).
\begin{definition}
\label{definition:strong}
A weighted set of points $C_w$ is an $\epsilon$-approximate coreset of an
instance $\mathcal{I} = (P,k)$ of $k$-median $($resp., $k$-means$)$
if for any solution $S$ of $\mathcal{I}$ it holds
that $|\nu_P(S) - \nu_{C_w}(S)| \leq \epsilon \cdot \nu_P(S)$
$($resp., $|\mu_P(S) - \mu_{C_w}(S)| \leq \epsilon \cdot \mu_P(S)$$)$.
\end{definition}
Informally, the cost of any solution is approximately the same if
computed from the $\epsilon$-approximate coreset rather than from the
full set of points. In the paper we will also make use of the
following different notion of coreset (already used in
\cite{Har-Peled2004,EneIM11}), which upper bounds the aggregate
``proximity'' of the input points from the coreset as a function of
the optimal cost.
\begin{definition}
\label{definition:bounded} Let $\mathcal{I} = (P,k)$ be an instance
of $k$-median $($resp., $k$-means$)$.
A set of points $C_w$ is an $\epsilon$-bounded coreset of $\mathcal{I}$ if there exists a map $\tau: P \rightarrow C_w$ such that
$\sum_{x \in P}d(x,\tau(x)) \leq \epsilon \cdot \nu_P(\texttt{\rm opt}_\mathcal{I})$
$($resp., $\sum_{x \in P}d(x,\tau(x))^2 \leq \epsilon \cdot \mu_P(\texttt{\rm opt}_\mathcal{I})$$)$ and for any $x \in C_w$, $w(x) = |\{ y \in P: \tau(y) = x\}|$.
We say that $C_w$ is weighted according to $\tau$.
\end{definition}
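To make the weighting in the definition concrete, the sketch below (an illustration, with $\tau$ taken, as a simple choice, to map each point to its nearest coreset point) computes the weights $w(x) = |\{y \in P : \tau(y) = x\}|$ together with the total displacement $\sum_{x \in P} d(x,\tau(x))$ that an $\epsilon$-bounded coreset must keep small.

```python
import math
from collections import Counter

def weigh_by_map(P, C):
    """Weight C according to tau(x) = closest point of C, as in the definition above.

    Returns (weights, displacement): weights[c] = |{x in P : tau(x) = c}| and
    displacement = sum over x in P of d(x, tau(x)), the quantity an
    epsilon-bounded coreset must keep below eps * (optimal cost).
    """
    tau = {x: min(C, key=lambda c: math.dist(x, c)) for x in P}
    weights = Counter(tau.values())
    displacement = sum(math.dist(x, tau[x]) for x in P)
    return dict(weights), displacement

P = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (1.1, 1.0), (1.0, 0.9)]
C = [(0.0, 0.0), (1.0, 1.0)]
C_w, disp = weigh_by_map(P, C)
```

Note that the weights always sum to $|P|$, since every input point is mapped to exactly one coreset point.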
The above two kinds of coresets are related, as shown in the following
two lemmas.
\begin{lemma}
\label{lemma:boundedtostrongkmedian}
Let $C_w$ be an $\epsilon$-bounded coreset of a $k$-median instance $\mathcal{I}=(P,k)$. Then $C_w$ is also an $\epsilon$-approximate coreset of $\mathcal{I}$.
\end{lemma}
\begin{proof}
Let $\tau$ be the map of the definition of $\epsilon$-bounded coreset. Let $S$ be a solution of $\mathcal{I}$. Using triangle inequality, we can easily see that $d(x,S) - d(x,\tau(x)) \leq d(\tau(x),S)$ and $d(\tau(x),S) \leq d(\tau(x),x) + d(x,S)$ for any $x \in P$. Summing over all points in $P$, we obtain that
\begin{align*}
\nu_{P}(S) - \sum_{x \in P}d(x,\tau(x)) \leq \nu_{C_w}(S) \leq \sum_{x \in P}d(x,\tau(x)) + \nu_{P}(S)
\end{align*}
To conclude the proof, we observe that $\sum_{x \in P}d(x,\tau(x)) \leq \epsilon \cdot \nu_P(\texttt{\rm opt}_\mathcal{I}) \leq \epsilon \cdot \nu_P(S)$.
\end{proof}
\begin{lemma}
\label{lemma:boundedtostrongkmeans}
Let $C_w$ be an $\epsilon$-bounded coreset of a $k$-means instance $\mathcal{I}=(P,k)$. Then $C_w$ is also a $(\epsilon+2\sqrt{\epsilon})$-approximate coreset of $\mathcal{I}$.
\end{lemma}
\begin{proof}
Let $\tau$ be the map of the definition of $\epsilon$-bounded coreset. Let $S$ be a solution of $\mathcal{I}$. We want to bound the quantity $| \mu_{P}(S) - \mu_{C_w}(S) | = \sum_{x \in P} | d(x, S)^2 - d(\tau(x),S)^2 |$.
We rewrite $|d(x, S)^2 - d(\tau(x),S)^2|$ as $\left[ d(x,S) + d(\tau(x),S) \right] \cdot | d(x,S) - d(\tau(x),S)|$.
By triangle inequality, we have that $d(x,S) \leq d(x,\tau(x)) + d(\tau(x),S)$ and $d(\tau(x),S) \leq d(\tau(x),x) + d(x,S)$. By combining these two inequalities, it results that $|d(x,S) - d(\tau(x), S)| \leq d(x,\tau(x))$. Moreover, $d(x,S) + d(\tau(x),S) \leq 2d(x,S) + d(x,\tau(x))$. Hence
\begin{eqnarray*}
| \mu_{P}(S) - \mu_{C_w}(S) | & \leq &
\sum_{x \in P} d(x, \tau(x))\left[ 2d(x,S) + d(x,\tau(x))\right] \\
& \leq & \epsilon \cdot \mu_{P}(S) + 2\sum_{x \in P}d(x,\tau(x))d(x,S)
\end{eqnarray*}
where we used the fact that $\sum_{x \in P}d(x,\tau(x))^2 \leq \epsilon \cdot \mu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq \epsilon \cdot \mu_{P}(S)$. We now want to bound the sum over the products of the two distances. Arguing as in the proof of \autoref{proposition:squareddistance}, we can write:
\begin{align*}
2\sum_{x \in P}d(x,\tau(x))d(x,S) \leq \sqrt{\epsilon} \cdot \sum_{x \in P}d(x,S)^2 + \frac{1}{\sqrt{\epsilon}} \sum_{x \in P}d(x,\tau(x))^2 \leq 2 \sqrt{\epsilon} \cdot \mu_{P}(S)
\end{align*}
To wrap it up, it results that $ | \mu_{P}(S) - \mu_{C_w}(S) | \leq (\epsilon + 2 \sqrt{\epsilon})\cdot \mu_{P}(S)$.
\end{proof}
In our work, we will build coresets by working in parallel over a partition of the input instance. The next lemma provides known results on the relations between the optimal solution of the whole input points and the optimal solution of a subset of the input points.
\begin{lemma}
\label{lemma:optimalsolrelation}
Let $C_w \subseteq P$. Let $\mathcal{I} = (P,k)$ and $\mathcal{I}' = (C_w,k)$. Then:
$($a$)$ $\nu_{C_w}(\texttt{\rm opt}_{\mathcal{I}'}) \leq 2\nu_{C_w}(\texttt{\rm opt}_\mathcal{I})$;
and
$($b$)$ $\mu_{C_w}(\texttt{\rm opt}_{\mathcal{I}'}) \leq 4\mu_{C_w}(\texttt{\rm opt}_\mathcal{I})$.
\end{lemma}
\begin{proof}
We first prove point $(b)$. Let $X = \{ x^{C_w} : x \in \texttt{\rm opt}_\mathcal{I} \}$. The set $X$ is a solution of $\mathcal{I}'$. By optimality of $\texttt{\rm opt}_{\mathcal{I}'}$, we have that $\mu_{C_w}(\texttt{\rm opt}_{\mathcal{I}'}) \leq \mu_{C_w}(X)$. Also, by triangle inequality, it holds that $\mu_{C_w}(X) \leq \sum_{x \in C_w}w(x)\left[ d(x,\texttt{\rm opt}_\mathcal{I}) + d(x^{\texttt{\rm opt}_\mathcal{I}}, X)\right]^2$. We observe that $d(x^{\texttt{\rm opt}_\mathcal{I}}, X) \leq d(x, \texttt{\rm opt}_\mathcal{I})$ by definition of $X$. Thus, we obtain that $\mu_{C_w}(\texttt{\rm opt}_{\mathcal{I}'}) \leq 4\mu_{C_w}(\texttt{\rm opt}_\mathcal{I})$. The proof of $(a)$ follows the same lines with a factor $2$ less since we do not square.
\end{proof}
Bounded coresets have the
nice property of being \emph{composable}. That is, we can partition the
input points into different subsets and compute a bounded coreset
separately in each subset: the union of those coresets is a bounded
coreset of the input instance. This property, which is formally stated in the
following lemma, is crucial to develop efficient MapReduce algorithms
for the clustering problems.
\begin{lemma}
\label{lemma:unionbounded}
Let $\mathcal{I} = (P,k)$ be an instance of
$k$-median $($resp., $k$-means$)$. Let $P_1,\ldots,P_L$ be a partition of $P$. For $\ell=1,\ldots,L$, let $C_{w,\ell}$ be an $\epsilon$-bounded coreset of $\mathcal{I}_\ell = (P_\ell,k)$. Then $C_w = \cup_\ell C_{w,\ell}$ is a $2\epsilon$-bounded coreset $($resp., a $4\epsilon$-bounded coreset$)$ of $\mathcal{I}$.
\end{lemma}
\begin{proof}
We prove the lemma for $k$-median. The proof for $k$-means is similar.
For $\ell=1,\ldots,L$, let $\tau_\ell$ be the map from $P_\ell$ to $C_{w,\ell}$ of \autoref{definition:bounded}. Now, for any $x \in P$, let $\ell$ be the integer such that $x \in P_\ell$, and define $\tau(x) = \tau_\ell(x)$. Then
\begin{align*}
\sum_{x \in P}d(x,\tau(x)) \leq \sum_{\ell=1}^{L}\sum_{x \in P_\ell}d(x,\tau_\ell(x)) \leq \epsilon \sum_{\ell=1}^L \nu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}_\ell}) \leq 2\epsilon \cdot \nu_{P}(\texttt{\rm opt}_{\mathcal{I}})
\end{align*}
In the last inequality, we used the fact that $\nu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}_\ell}) \leq 2 \nu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}})$ from \autoref{lemma:optimalsolrelation}.
\end{proof}
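The composability argument can be mirrored in code: build a bounded coreset independently on each part of the partition and take the union of the weighted sets, with the per-part displacements simply adding up. The greedy radius cover below is a toy stand-in chosen for this sketch, not one of the constructions of the paper.

```python
import math

def greedy_cover(P, r):
    """Toy bounded-coreset builder for one part: greedily pick centers so that
    every x in P has a chosen point tau(x) within distance r, and weight each
    center by the number of points mapped to it."""
    centers, weights, displacement = [], {}, 0.0
    for x in P:
        near = next((c for c in centers if math.dist(x, c) <= r), None)
        if near is None:
            centers.append(x)
            near = x
        weights[near] = weights.get(near, 0) + 1
        displacement += math.dist(x, near)
    return weights, displacement

def compose(parts, r):
    """Union of the per-part coresets; the displacements add up, mirroring
    the composability lemma above (assumes the parts are disjoint)."""
    union, total = {}, 0.0
    for part in parts:
        w, d = greedy_cover(part, r)
        union.update(w)
        total += d
    return union, total

parts = [[(0.0, 0.0), (0.1, 0.0)], [(5.0, 5.0), (5.05, 5.0)]]
union, total = compose(parts, 0.2)
```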
In the paper, we will need the following additional characterization
of a representative subset of the input, originally introduced in
\cite{Har-Peled2004}.
\begin{definition}
\label{definition:centroidset}
Let $\mathcal{I} = (P,k)$ be an instance of $k$-median $($resp.,
$k$-means$)$. A set $C$ is said to be an $\epsilon$-centroid set of
$\mathcal{I}$ if there exists a subset $X \subseteq C$, $|X| \leq k$, such
that $\nu_P(X) \leq (1+\epsilon)\nu_P(\texttt{\rm opt}_\mathcal{I})$
$($resp., $\mu_P(X) \leq (1+\epsilon)\mu_P(\texttt{\rm opt}_\mathcal{I})$$)$.
\end{definition}
\noindent
Our algorithms are designed for the \emph{MapReduce}
model of computation which has become a
de facto standard for big data algorithmics in recent years.
A MapReduce
algorithm~\cite{DeanG08,PietracaprinaPRSU12,LeskovecRU14} executes in
a sequence of parallel \emph{rounds}. In a round, a multiset $X$ of
key-value pairs is first transformed into a new multiset $X'$ of
key-value pairs by applying a given \emph{map function} (simply called
\emph{mapper}) to each individual pair, and then into a final multiset
$Y$ of pairs by applying a given \emph{reduce function} (simply called
\emph{reducer}) independently to each subset of pairs of $X'$ having
the same key. The model features two parameters, $M_L$, the
\emph{local memory} available to each mapper/reducer, and $M_A$, the
\emph{aggregate memory} across all mappers/reducers.
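For concreteness, the round structure just described can be sketched in Python as follows; this is a minimal illustration of the model, not tied to any particular MapReduce framework, and the word-count usage at the end is a hypothetical example:

```python
from collections import defaultdict

def mapreduce_round(pairs, mapper, reducer):
    """One MapReduce round: apply `mapper` to each key-value pair,
    group the resulting pairs by key, then apply `reducer`
    independently to each group sharing the same key."""
    shuffled = defaultdict(list)
    for kv in pairs:
        for key, value in mapper(kv):
            shuffled[key].append((key, value))
    out = []
    for key, group in shuffled.items():
        out.extend(reducer(key, group))
    return out

# Toy usage: count word occurrences in a single round.
pairs = [(i, w) for i, w in enumerate("a b a c b a".split())]
counts = mapreduce_round(
    pairs,
    mapper=lambda kv: [(kv[1], 1)],
    reducer=lambda key, group: [(key, sum(v for _, v in group))],
)
```

In this sketch, $M_L$ corresponds to the memory needed by a single mapper or reducer invocation, while $M_A$ corresponds to the total size of the intermediate multisets.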
\section{Coresets construction in MapReduce}
\label{section:coresetconstruct}
\sloppy
Our coreset constructions are based on a suitable point selection
algorithm called $\texttt{CoverWithBalls}$, somewhat inspired by the exponential
grid construction used in \cite{Har-Peled2004} to build
$\epsilon$-approximate coresets in $\mathbb{R}^d$ for the continuous
case. Suppose that we want to build an $\epsilon$-bounded coreset of a
$k$-median instance $\mathcal{I} = (P,k)$ and that a $\beta$-approximate
solution $T$ for $\mathcal{I}$ is available. A simple approach would be to
find a set $C_w$ such that for any $x$ in $P$ there exists a point
$\tau(x) \in C_w$ for which $d(x,\tau(x)) \leq (\epsilon/2\beta) \cdot
d(x,T)$. Indeed, if $C_w$ is weighted according to $\tau$, it can be seen
that $C_w$ is an $\epsilon$-bounded coreset of
$\mathcal{I}$. The set $C_w$ can be constructed greedily by iteratively
selecting an arbitrary point $p \in P$, adding it to $C_w$, and
discarding all points $q \in P$ (including $p$)
for which the aforementioned property
holds with $\tau(q) = p$. The construction ends when all points of $P$
are discarded. However, note that the points of $P$ which are already very
close to $T$, say at a distance $\leq R$ for a suitable tolerance
threshold $R$, do not contribute much to $\nu_{P}(T)$, and so to the sum
$\sum_{x \in P}d(x,\tau(x))$. For these points, we can relax the
constraint and discard them from $P$ as soon as their distance to
$C_w$ becomes at most $(\epsilon/2\beta) \cdot R$. This relaxation is
crucial to bound the size of the returned set as a function of the
doubling dimension of the space.
\begin{algorithm}
\DontPrintSemicolon
$C_w \longleftarrow \emptyset$ \;
\While{$P \neq \emptyset$}{
$p \longleftarrow $ arbitrarily selected point in $P$ \;
$C_w \longleftarrow C_w \cup \{ p \}, w(p) \longleftarrow 0$ \;
\ForEach{$q \in P$}{
\If{ $d(p,q) \leq \epsilon/(2\beta) \max \{R,d(q,T) \} $}{
remove $q$ from $P$ \;
$w(p) \longleftarrow w(p)+1$ \hspace{10pt} \tcc*[r]{(i.e. $\tau(q) = p $, see \autoref{lemma:taucoverwithballs})}
}
}
}
return $C_w$
\caption{\texttt{CoverWithBalls}$(P,T,R,\epsilon,\beta)$}
\end{algorithm}
Algorithm $\texttt{CoverWithBalls}$
is formally described in the
pseudocode above. It receives as input
two sets of points, $P$ and $T$, and three positive
real parameters $R$, $\epsilon$, and $\beta$, with $\epsilon < 1$ and
$\beta \geq 1$, and outputs a weighted set $C_w \subseteq
P$ which satisfies the property stated in the following lemma.
\begin{lemma}
\label{lemma:taucoverwithballs}
Let $C_w$ be the output of $\texttt{CoverWithBalls}(P,T,R,\epsilon,\beta)$.
$C_w$ is weighted according to a map $\tau: P \rightarrow C_w$ such that, for any $x \in P$, $d(x,\tau(x)) \leq \epsilon/(2\beta)\max\{R,d(x,T)\}$.
\end{lemma}
\begin{proof}
For any $x \in P$, we define $\tau(x)$ as the point in $C_w$ which caused the removal of $x$ from $P$ during the execution of the algorithm. The statement immediately follows.
\end{proof}
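As a concrete companion to the pseudocode, the following Python sketch mirrors $\texttt{CoverWithBalls}$ on points in the plane. The Euclidean distance and the toy input at the end are illustrative assumptions; the algorithm itself works in any metric space.

```python
import math

def dist(a, b):
    return math.dist(a, b)

def d_to_set(x, S):
    """Distance from point x to the closest point of the set S."""
    return min(dist(x, t) for t in S)

def cover_with_balls(P, T, R, eps, beta):
    """Greedy selection following the CoverWithBalls pseudocode:
    repeatedly pick an arbitrary remaining point p, add it to the
    output with weight 0, and let p absorb (one unit of weight each)
    every remaining q with d(p,q) <= eps/(2*beta)*max(R, d(q,T))."""
    remaining = list(P)
    Cw = []  # list of (point, weight) pairs
    while remaining:
        p, w, kept = remaining[0], 0, []
        for q in remaining:
            if dist(p, q) <= eps / (2 * beta) * max(R, d_to_set(q, T)):
                w += 1          # q is absorbed, i.e., tau(q) = p
            else:
                kept.append(q)
        Cw.append((p, w))
        remaining = kept
    return Cw

# Tiny example on hypothetical data: 20 points on a segment.
P = [(x / 10.0, 0.0) for x in range(20)]
T = [(0.0, 0.0), (1.9, 0.0)]
Cw = cover_with_balls(P, T, R=0.1, eps=0.5, beta=2.0)
```

Note that the weights always sum to $|P|$, and every point of $P$ ends up within the distance bound of the lemma from some selected center, since the center that absorbed it belongs to $C_w$.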
While in principle the size of $C_w$ can be arbitrarily close to
$|P|$, the next theorem shows that this is not the case for low
dimensional spaces, as a consequence of the fact that there cannot be
too many points which are all far from one another. We first
need a technical lemma. A set of points $X$ is said to be an
\emph{$r$-clique} if for any $x,y \in X$, $x \neq y$, it holds that
$d(x,y) > r$. We have:
\begin{lemma}
\label{lemma:sizeclique}
Let $0 < \epsilon < 1$. Let $\mathcal{M}$ be a metric space with doubling dimension $D$. Let $X \subseteq \mathcal{M}$ be an $\epsilon \cdot r$-clique and assume that $X$ can be covered by a ball of radius $r$ centered at a point of $\mathcal{M}$. Then, $|X| \leq (4/\epsilon)^D$.
\end{lemma}
\begin{proof}
By recursively applying the definition of doubling dimension, we observe that the ball of radius $r$ which covers $X$ can be covered by $2^{j\cdot D}$ balls of radius $2^{-j}\cdot r$, where $j$ is any nonnegative integer. Let $i$ be the least integer for which $2^{-i}\cdot r \leq \epsilon/2 \cdot r$ holds. Any of the $2^{i \cdot D}$ balls with radius $2^{-i}\cdot r$ can contain at most one point of $X$, since $X$ is an $\epsilon\cdot r$-clique. Thus $|X| \leq 2^{i \cdot D}$.
As $i = 1 + \lceil \log_2{(1/\epsilon)} \rceil$, we finally obtain that $|X| \leq (4/\epsilon)^D$.
\end{proof}
\begin{theorem}
\label{theorem:sizecoverwithballs}
Let $C_w$ be the set returned by the execution of
$\texttt{CoverWithBalls}(P,T,R,\epsilon,\beta)$. Suppose that the
points in $P$ and $T$ belong to a metric space with doubling dimension
$D$. Let $c$ be a real value such that, for any $x \in P$, $c \cdot R
\geq d(x,T)$. Then,
\begin{align*}
|C_w| \leq |T| \cdot \left(16\beta/\epsilon\right)^D \cdot (\log_2{c} + 2)
\end{align*}
\end{theorem}
\begin{proof}
Let $T = \{t_1,\ldots,t_{|T|} \}$ be the set in input to the algorithm. For any $i$, $1 \leq i \leq |T|$, let $P_i = \{ x \in P: x^T = t_i \}$
and $B_{i} = \{x \in P_i: d(x,T) \leq R \}$. In addition, for any integer value $j \geq 0$ and for any feasible value of $i$, we define $D_{i,j} = \{ x \in P_i: 2^{j}\cdot R < d(x,T) \leq 2^{j+1}\cdot R \}$. We observe that for any $j \geq \lceil \log_2{c} \rceil$, the sets $D_{i,j}$ are empty, since $d(x,T) \leq c \cdot R$. Together, the sets $B_{i}$ and $D_{i,j}$ are a partition of $P_i$.
For any $i$, let $C_{i} = C_w \cap B_{i}$. We now want to show that the set $C_{i}$ is an $\epsilon/(2\beta)\cdot R$-clique. Let $c_1,c_2$ be any two different points in $C_{i}$ and suppose, without loss of generality, that $c_1$ was added first to $C_w$. Since $c_2$ was not removed from $P$, this means that $d(c_1,c_2) > \epsilon/(2\beta)\cdot \max \{ d(c_2,T), R \} \geq \epsilon/(2\beta)R$, where we used the fact that $d(c_2,T) \leq R$ since $c_2$ belongs to $B_i$. Also, the set $C_i \subseteq B_{i}$ is contained in a ball of radius $R$ centered at $t_i$, thus we can apply \autoref{lemma:sizeclique} and bound its size, obtaining that $|C_i| \leq (8\beta/\epsilon)^D$.
For any $i$ and $j$, let $C_{i,j} = C_w \cap D_{i,j}$. We can use a
similar strategy to bound the size of those sets. We first show that
the sets $C_{i,j}$ are $\frac{\epsilon}{4\beta}\cdot
2^{j+1}R$-cliques. Let $c_1,c_2$ be any two different points in
$C_{i,j}$ and suppose, without loss of generality, that $c_1$ was
added first to $C_w$. Since $c_2$ was not removed from $P$, this means
that $d(c_1,c_2) > \epsilon/(2\beta)\cdot \max \{ d(c_2,T), R \} \geq
\epsilon/(4\beta)2^{j+1}R$, where we used the fact that $d(c_2,T) >
2^j \cdot R$ since $c_2$ belongs to $D_{i,j}$. Also, the set $C_{i,j}
\subseteq D_{i,j}$ is contained in a ball of radius $2^{j+1}R$
centered at $t_i$, thus we can apply \autoref{lemma:sizeclique} and
obtain that $|C_{i,j}| \leq (16\beta/\epsilon)^D$. Since the sets
$C_i$ and $C_{i,j}$ partition $C_w$, we can bound the size of $C_w$ as
the sum of the bounds of the size of those sets. Hence:
\begin{align*}
|C_w| \leq \sum_{i=1}^{|T|} |C_{i}| + \sum_{i=1}^{|T|} \sum_{j=0}^{\lceil \log_2{c} \rceil - 1}|C_{i,j}| \leq |T|\cdot (16\beta/\epsilon)^D \cdot (\log_2{c}+2)
\end{align*}
\end{proof}
\subsection{A first approach to coreset construction for $k$-median}
\label{subsection:approachkmedian}
In this subsection, we present a $1$-round MapReduce algorithm that
builds a weighted coreset $C_w \subseteq P$ of a $k$-median instance $\mathcal{I} =
(P,k)$. The algorithm is parametrized by a value $\epsilon \in (0,1)$,
which represents a tradeoff between coreset size and
accuracy. The returned coreset has the following property. Let $\mathcal{I}'
= (C_w,k)$. If we run an $\alpha$-approximation algorithm on
$\mathcal{I}'$, then the
returned solution is a $(2\alpha+O(\epsilon))$-approximate
solution of $\mathcal{I}$.
Building on this construction, in the next subsection
we will obtain a better coreset which allows us to reduce the
final approximation factor to the desired $\alpha+O(\epsilon)$ value.
The coreset construction algorithm operates as follows. The
set $P$ is partitioned into $L$ equally-sized subsets
$P_1,\ldots,P_L$. In parallel, on each $k$-median instance
$\mathcal{I}_\ell = (P_\ell,k)$, with $\ell=1,\ldots,L$, the
following operations are performed:
\begin{enumerate}
\item Compute a set $T_\ell$ of $m \geq k$ points
such that $\nu_{P_\ell}(T_\ell) \leq \beta \cdot \nu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}_\ell})$.
\item $R_\ell \longleftarrow \nu_{P_\ell}(T_\ell)/|P_\ell|$.
\item $C_{w,\ell} \longleftarrow \texttt{CoverWithBalls}(P_\ell,T_\ell,R_\ell,\epsilon,\beta)$.
\end{enumerate}
The set $C_w = \cup_{\ell=1}^{L}C_{w,\ell}$ is the output of the
algorithm.
In Step 1, the set $T_\ell$ can be computed through a sequential
(possibly bi-criteria) approximation algorithm for $m$-median, with a
suitable $m \geq k$, to yield a small value of $\beta$. If we assume
that such an algorithm requires space linear in $P_\ell$, the entire
coreset construction can be implemented in a single MapReduce round,
using $O(|P|/L)$ local memory and $O(|P|)$ aggregate memory. For
example, using one of the known linear-space, constant-approximation
algorithms (e.g., \cite{AryaGKMMP04}), we can get $\beta = O(1)$ with
$m=k$.
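Under the assumption that a sequential approximation algorithm is available for Step 1, the three per-partition steps can be sketched in Python as follows. All names here are hypothetical: `greedy_centers` is only a farthest-point stand-in for the constant-approximation algorithms cited above, `cover_with_balls` restates the earlier pseudocode in compact form, and Euclidean distance is assumed.

```python
import math

def dist(a, b): return math.dist(a, b)
def d_to_set(x, S): return min(dist(x, t) for t in S)

def greedy_centers(P, m):
    """Placeholder for Step 1: farthest-point greedy selection.
    Any sequential (bi-criteria) beta-approximation could be used instead."""
    T = [P[0]]
    while len(T) < m:
        T.append(max(P, key=lambda x: d_to_set(x, T)))
    return T

def cover_with_balls(P, T, R, eps, beta):
    # compact restatement of the CoverWithBalls pseudocode
    remaining, Cw = list(P), []
    while remaining:
        p, w, kept = remaining[0], 0, []
        for q in remaining:
            if dist(p, q) <= eps / (2 * beta) * max(R, d_to_set(q, T)):
                w += 1
            else:
                kept.append(q)
        Cw.append((p, w))
        remaining = kept
    return Cw

def local_coreset(P_ell, k, eps, beta=2.0):
    """Steps 1-3 executed by one reducer on its partition P_ell."""
    T = greedy_centers(P_ell, k)                          # Step 1
    R = sum(d_to_set(x, T) for x in P_ell) / len(P_ell)   # Step 2
    return cover_with_balls(P_ell, T, R, eps, beta)       # Step 3

# Hypothetical toy data: 60 distinct points, split into L = 3 partitions.
P = [((i * 37) % 100 / 100.0, (i * 61) % 100 / 100.0) for i in range(60)]
C_w = [pw for i in range(3) for pw in local_coreset(P[i::3], k=4, eps=0.5)]
```

The union of the per-partition outputs plays the role of $C_w$; by construction its weights sum to $|P|$ and its points are a subset of $P$.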
\begin{lemma}
\label{lemma:cwboundedcoreset}
For $\ell=1,\ldots,L$, $C_{w,\ell}$ is an $\epsilon$-bounded coreset
of the $k$-median instance $\mathcal{I}_\ell$.
\end{lemma}
\begin{proof}
Fix a value of $\ell$. Let $\tau_\ell$ be the map from the points in $P_\ell$ to the points in $C_{w,\ell}$ of \autoref{lemma:taucoverwithballs}. The set $C_{w,\ell}$ is weighted according to $\tau_\ell$. Also, it holds that:
\begin{align*}
\sum_{x \in P_\ell}d(x,\tau_\ell(x)) \leq \frac{\epsilon}{2\beta} \sum_{x \in P_\ell}\left(R_\ell+d(x,T_\ell) \right) \leq \frac{\epsilon}{2\beta}\left( R_\ell\cdot|P_\ell|+\nu_{P_\ell}(T_\ell) \right) \leq \epsilon \cdot \nu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}_\ell})
\end{align*}
\end{proof}
By combining \autoref{lemma:cwboundedcoreset} and \autoref{lemma:unionbounded}, the next lemma immediately follows.
\begin{lemma}
\label{lemma:coresetkmedian}
Let $\mathcal{I} = (P,k)$ be a $k$-median instance. The set $C_w$ returned by the above MapReduce algorithm is a $2\epsilon$-bounded coreset of $\mathcal{I}$.
\end{lemma}
It is possible to bound the size of $C_w$ as a function of the doubling
dimension $D$. For any
$\ell=1,\ldots,L$ and $x \in P_\ell$, it holds that $R_{\ell}\cdot
|P_\ell| = \nu_{P_\ell}(T_\ell) \geq d(x,T_\ell)$, thus we can
bound the size of $C_{w,\ell}$ by using
\autoref{theorem:sizecoverwithballs}.
Since $C_w$ is the union of
those sets, this argument proves the following lemma.
\begin{lemma}
Let $\mathcal{I} = (P,k)$ be a $k$-median instance. Suppose that the points in $P$ belong to a metric space with doubling dimension $D$. Let $C_w$ be the set
returned by the above MapReduce algorithm with input $\mathcal{I}$ and $m\geq k$. Then, $|C_w| = O\left( L\cdot m \cdot (16\beta/\epsilon)^{D} \log{|P|} \right)$.
\end{lemma}
Let $S$ be an $\alpha$-approximate solution of $\mathcal{I}' =(C_w,k)$, with constant $\alpha$. We will now show that $\nu_{P}(S)/\nu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq 2\alpha + O(\epsilon)$.
Let $\tau$ be the map
from $P$ to $C_w$ (see \autoref{lemma:taucoverwithballs}). By triangle inequality, $\nu_{P}(S) \leq \sum_{x \in P}d(x,\tau(x)) + \nu_{C_w}(S)$. We have that $\sum_{x \in P}d(x,\tau(x)) \leq 2\epsilon \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})$ since,
by \autoref{lemma:coresetkmedian},
$C_w$ is a $2\epsilon$-bounded coreset.
By the fact that $S$ is an $\alpha$-approximate solution of $\mathcal{I}'$ and by
\autoref{lemma:optimalsolrelation}, we have that
$\nu_{C_w}(S) \leq \alpha \cdot \nu_{C_w}(\texttt{\rm opt}_{\mathcal{I}'}) \leq 2\alpha \cdot \nu_{C_w}(\texttt{\rm opt}_\mathcal{I})$. By \autoref{lemma:boundedtostrongkmedian}, $C_w$ is also a $2\epsilon$-approximate coreset of $\mathcal{I}$, thus $\nu_{C_w}(\texttt{\rm opt}_\mathcal{I}) \leq (1+2\epsilon)\nu_{P}(\texttt{\rm opt}_\mathcal{I})$. Putting it all together, we have that $\nu_{P}(S)/\nu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq 2\alpha(1+2\epsilon)+2\epsilon = 2\alpha + O(\epsilon)$.
We observe that the factor $2$ is due to the inequality which relates $\texttt{\rm opt}_\mathcal{I}$ and $\texttt{\rm opt}_{\mathcal{I}'}$, namely $\nu_{C_w}(\texttt{\rm opt}_{\mathcal{I}'}) \leq 2\nu_{C_w}(\texttt{\rm opt}_\mathcal{I})$. In the next subsection, we will show how to get rid of this
factor.
\paragraph*{Application to the continuous case}
The algorithm of this subsection can also be used to build an
$O(\epsilon)$-approximate coreset in the continuous scenario where
centers are not required to belong to $P$. In this case, the final
approximation factor improves to $(\alpha +
O(\epsilon))$.
Indeed, we can use the stronger inequality
$\nu_{C_w}(\texttt{\rm opt}_{\mathcal{I}'}) \leq \nu_{C_w}(\texttt{\rm opt}_\mathcal{I})$, as
$\texttt{\rm opt}_{\mathcal{I}}$ is also a solution of $\mathcal{I}'$, which allows us to avoid
the factor $2$ in front of $\alpha$. While
the same approximation guarantee has
already been achieved in the literature using more space-efficient but
randomized coreset constructions
\cite{BalcanEL13,BachemLK18}, as mentioned in the
introduction, this result provides evidence of the general
applicability of our approach.
\subsection{Coreset construction for $k$-median}
\label{subsection:coresetkmedian}
In this subsection, we present a $2$-round MapReduce algorithm which
computes a weighted subset which is both an $O(\epsilon)$-bounded
coreset and an $O(\epsilon)$-centroid set of an input instance $\mathcal{I}
= (P,k)$ of $k$-median. The algorithm is similar to the one of the
previous subsection, but applies $\texttt{CoverWithBalls}$ twice in
every subset of the partition. This idea is inspired by the strategy
presented in \cite{Har-Peled2004} for $\mathbb{R}^d$, where a double
exponential grid construction is used to ensure that the returned
subset is a centroid set.
\noindent{\bf First Round.}
$P$ is partitioned into $L$ equally-sized subsets $P_1,\ldots,P_L$. Then in
parallel, on each $k$-median instance $\mathcal{I}_\ell = (P_\ell,k)$, with
$\ell=1,\ldots,L$, the following steps are performed:
\begin{enumerate}
\item Compute a set $T_\ell$ of $m\geq k$ points such that $\nu_{P_\ell}(T_\ell) \leq \beta \cdot \nu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}_\ell})$.
\item $R_\ell \longleftarrow \nu_{P_\ell}(T_\ell)/|P_\ell|$.
\item $C_{w,\ell} \longleftarrow \texttt{CoverWithBalls}(P_\ell,T_\ell,R_\ell,\epsilon,\beta)$.
\end{enumerate}
\noindent{\bf Second Round.}
Let $C_w = \cup_{\ell =1}^{L}C_{w,\ell}$. The same partition of $P$ as in the first round is used. Together with $P_{\ell}$, the $\ell$-th reducer receives a copy of $C_w$, and all values $R_i$ computed in the previous round, for $i = 1, \ldots, L$. On each $k$-median instance $\mathcal{I}_\ell = (P_\ell,k)$, with $\ell = 1,\ldots,L$, the following steps are performed:
\begin{enumerate}
\item $R \longleftarrow \sum_{i=1}^{L} |P_i| \cdot R_i / |P|$
\item $E_{w,\ell} \longleftarrow \texttt{CoverWithBalls}(P_\ell,C_{w},R,\epsilon,\beta)$.
\end{enumerate}
The set $E_w = \cup_{\ell=1}^{L}E_{w,\ell}$ is the output of the
algorithm. The computation of $T_{\ell}$ in the first round is accomplished as described in the previous subsection.
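The second round can be sketched in Python as follows; a minimal self-contained illustration in which the per-partition outputs of the first round (the partitions, the union of the sets $C_{w,\ell}$, and the radii $R_\ell$) are passed in as plain lists, Euclidean distance is assumed, and the greedy $\texttt{CoverWithBalls}$ selection is restated compactly from the earlier pseudocode:

```python
import math

def dist(a, b): return math.dist(a, b)
def d_to_set(x, S): return min(dist(x, t) for t in S)

def cover_with_balls(P, T, R, eps, beta):
    # compact restatement of the CoverWithBalls pseudocode
    remaining, out = list(P), []
    while remaining:
        p, w, kept = remaining[0], 0, []
        for q in remaining:
            if dist(p, q) <= eps / (2 * beta) * max(R, d_to_set(q, T)):
                w += 1
            else:
                kept.append(q)
        out.append((p, w))
        remaining = kept
    return out

def second_round(parts, C_w, radii, eps, beta):
    """Round 2: every reducer receives its partition, a copy of C_w and
    all radii; it computes the global radius R and covers its partition
    using C_w as the target set."""
    n = sum(len(p) for p in parts)
    R = sum(len(p) * r for p, r in zip(parts, radii)) / n      # step 1
    E_w = []
    for part in parts:                                          # in parallel
        E_w.extend(cover_with_balls(part, C_w, R, eps, beta))   # step 2
    return E_w

# Hypothetical round-1 outputs for two partitions.
parts = [[(0.0, 0.0), (0.1, 0.0), (0.2, 0.0)],
         [(1.0, 0.0), (1.1, 0.0), (1.2, 0.0)]]
C_w = [(0.0, 0.0), (1.0, 0.0)]
radii = [0.1, 0.1]
E_w = second_round(parts, C_w, radii, eps=0.5, beta=2.0)
```

As in the first round, the weights of $E_w$ sum to $|P|$ and its points come from the input partitions.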
The following lemma characterizes the properties of $E_w$.
\begin{lemma}
\label{lemma:centroidkmedian}
Let $\mathcal{I} = (P,k)$ be a $k$-median instance. Then, the set $E_w$ returned by
the above MapReduce algorithm is both a $2\epsilon$-bounded coreset
and a $7\epsilon$-centroid set of $\mathcal{I}$.
\end{lemma}
\begin{proof}
The first three steps of the algorithm are in common with the
algorithm of \autoref{subsection:approachkmedian}. By
\autoref{lemma:cwboundedcoreset}, for $\ell=1,\ldots,L$, the sets
$C_{w,\ell}$ are $\epsilon$-bounded coresets of $\mathcal{I}_\ell$. Let $C_w
= \cup_{\ell=1}^{L}C_{w,\ell}$. By \autoref{lemma:unionbounded}, the
set $C_w$ is a $2\epsilon$-bounded coreset of $\mathcal{I}$, and also, by
\autoref{lemma:boundedtostrongkmedian}, a $2\epsilon$-approximate
coreset. Let $\tau(x)$ be the map from $P$ to $C_w$ as specified
in \autoref{definition:bounded}. It holds that $\nu_{P}(C_w) \leq
\sum_{x \in P}d(x,\tau(x)) \leq 2\epsilon \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})$. Let $\phi_\ell$ be the map of
\autoref{lemma:taucoverwithballs} from the points in $P_\ell$ to the
points in $E_{w,\ell}$. By reasoning as in the proof of
\autoref{lemma:cwboundedcoreset}, we obtain that $\sum_{x \in
P_\ell}d(x, \phi_\ell(x)) \leq \epsilon/(2\beta)\left[|P_\ell| \cdot
R + \nu_{P_\ell}(C_{w})\right]$. For any $x \in P$, let $\hat{\ell}$ be the index for which $x \in P_{\hat{\ell}}$; we define $\phi(x) = \phi_{\hat{\ell}}(x)$. We have that
\begin{align*}
\sum_{x \in P}d(x,\phi(x)) \leq \frac{\epsilon}{2\beta}\sum_{\ell = 1}^{L} \left[ R\cdot |P_\ell| + \nu_{P_\ell}(C_w) \right] = \frac{\epsilon}{2\beta}\left(\left(\sum_{\ell=1}^{L} | P_\ell | \cdot R_\ell\right) + \nu_{P}(C_w) \right)
\end{align*}
where in the last equality we applied the definition of $R$. Since
$|P_\ell|\cdot R_\ell = \nu_{P_\ell}(T_\ell) \leq \beta \cdot
\nu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}_\ell}) \leq 2\beta\cdot\nu_{P_\ell}(\texttt{\rm opt}_\mathcal{I})$, where the last inequality follows from \autoref{lemma:optimalsolrelation}, we have that $\sum_{\ell=1}^{L}
|P_\ell|\cdot R_\ell \leq 2\beta \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})$. Additionally, $\nu_{P}(C_w) \leq 2\epsilon \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})$ as argued previously in the proof. Therefore, $\sum_{x \in P}d(x,\phi(x)) \leq \frac{\epsilon}{2\beta}\left( 2\beta + 2\epsilon \right)\nu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq 2\epsilon \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})$, hence $E_w$ is a $2\epsilon$-bounded coreset.
We now show that $E_w$ is a $7\epsilon$-centroid set of $\mathcal{I}$. Let $X = \{ x^{E_w} : x \in \texttt{\rm opt}_\mathcal{I} \}$. We will prove that $\nu_{P}(X) \leq (1+7\epsilon)\nu_{P}(\texttt{\rm opt}_\mathcal{I})$. By triangle inequality, we obtain that:
\begin{align*}
\nu_{P}(X) = \sum_{x \in P}d(x,X) \leq \sum_{x \in P}d(x,\tau(x))+\sum_{x \in P}d(\tau(x),X)
\end{align*}
The first term of the above sum can be bounded as $\sum_{x \in P}d(x,\tau(x)) \leq 2\epsilon \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})$, since $C_w$ is a $2\epsilon$-bounded coreset. Also, we notice that the second term of the sum can be rewritten as $\sum_{x \in P}d(\tau(x),X) = \sum_{x \in C_w}w(x)d(x,X)$, due to the relation between $\tau$ and $w$. By triangle inequality, we obtain that:
\begin{align*}
\sum_{x \in C_w}w(x)d(x,X) \leq \sum_{x \in C_w}w(x)d(x,x^{\texttt{\rm opt}_\mathcal{I}})+\sum_{x \in C_w}w(x)d(x^{\texttt{\rm opt}_\mathcal{I}},X)
\end{align*}
Since $C_w$ is a $2\epsilon$-approximate coreset, we can use the bound
$\sum_{x \in C_w}w(x)d(x,x^{\texttt{\rm opt}_\mathcal{I}}) = \nu_{C_w}(\texttt{\rm opt}_{\mathcal{I}})
\leq (1+2\epsilon)\nu_{P}(\texttt{\rm opt}_\mathcal{I})$. Also, by using the definition of $X$, we observe that
\begin{align*}
\sum_{x \in C_w}w(x)d(& x^{\texttt{\rm opt}_\mathcal{I}}, X) = \sum_{x \in C_w}w(x)d(x^{\texttt{\rm opt}_\mathcal{I}},E_w) \leq \sum_{x \in C_w}w(x)d(x^{\texttt{\rm opt}_\mathcal{I}},\phi(x^{\texttt{\rm opt}_\mathcal{I}}))\\
&\leq \frac{\epsilon}{2\beta} \sum_{x \in C_w} w(x) \cdot \left( R +
d(x^{\texttt{\rm opt}_\mathcal{I}}, C_w) \right) \leq \frac{\epsilon}{2\beta}\left(\left( \sum_{\ell=1}^{L} |P_\ell| \cdot R_\ell\right) + \nu_{C_w}(\texttt{\rm opt}_\mathcal{I}) \right)
\end{align*}
In the last inequality, we used the definition of $R$, and the simple observation that for any $x \in C_{w}$, $d(x^{\texttt{\rm opt}_\mathcal{I}},C_{w}) \leq d(x,x^{\texttt{\rm opt}_\mathcal{I}}) = d(x,\texttt{\rm opt}_\mathcal{I})$. As argued previously in the proof, we have that $\sum_{\ell} |P_\ell| \cdot R_\ell \leq 2\beta \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})$. Also, $\nu_{C_w}(\texttt{\rm opt}_\mathcal{I}) \leq (1+2\epsilon)\nu_{P}(\texttt{\rm opt}_\mathcal{I})$ as $C_w$ is a $2\epsilon$-approximate coreset of $\mathcal{I}$. Since we assume that $\beta \geq 1$, we finally obtain:
\begin{align*}
\sum_{x \in C_w}w(x)d(x^{\texttt{\rm opt}_\mathcal{I}},X) \leq \frac{\epsilon}{2\beta}( 2\beta + 1 + 2\epsilon) \nu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq 3\epsilon \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})
\end{align*}
We conclude that $\nu_{P}(X) \leq (2\epsilon+1+2\epsilon+3\epsilon)\nu_{P}(\texttt{\rm opt}_\mathcal{I}) = (1+7\epsilon) \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})$.
\end{proof}
The next lemma establishes an upper bound on the size of $E_w$.
\begin{lemma} \label{lemma:kmediansize}
Let $\mathcal{I} = (P,k)$ be a $k$-median instance. Suppose that the points in $P$ belong to a metric space with doubling dimension $D$. Let $E_w$ be the set returned by the above MapReduce algorithm with input $\mathcal{I}$ and $m\geq k$. Then $|E_w| = O\left( L^2\cdot m \cdot (16\beta/\epsilon)^{2D} \log^2{|P|} \right)$.
\end{lemma}
\begin{proof}
From the previous subsection, we know that $|C_w| = O\left(L \cdot
m\cdot(16\beta/\epsilon)^{D}\log{|P|} \right)$. Also, as shown in the proof of \autoref{lemma:cwboundedcoreset}, for any $\ell = 1, \ldots, L$ it holds that $\nu_{P_\ell}(C_{w,\ell}) \leq \sum_{x \in P_\ell}d(x,\tau_\ell(x)) \leq (\epsilon/\beta)\cdot \nu_{P_\ell}(T_\ell) \leq \epsilon \cdot \nu_{P_\ell}(T_\ell)$. Hence, for every $x \in P$,
we have that $\epsilon| P|\cdot R = \epsilon \sum_{\ell}|P_\ell|\cdot R_\ell = \epsilon\sum_{\ell}\nu_{P_\ell}(T_\ell) \geq
\sum_{\ell}\nu_{P_\ell}(C_{w,\ell}) \geq \nu_P(C_w) \geq d(x,C_{w})$. The lemma follows by
applying \autoref{theorem:sizecoverwithballs}
to bound the sizes of the sets $E_{w,\ell}$.
\end{proof}
We are now ready to state the main result of this subsection.
\begin{theorem} \label{theorem:kmediancoresetfactor}
Let $\mathcal{I} = (P,k)$ be a $k$-median instance and
let $E_w$ be the set returned by the above MapReduce algorithm
for a fixed $\epsilon \in (0,1)$. Let
$\mathcal{A}$ be an $\alpha$-approximation algorithm for the
$k$-median problem, with constant $\alpha$. If $S$ is the solution returned by $\mathcal{A}$
with input $\mathcal{I}' = (E_w,k)$, then $\nu_{P}(S)/\nu_{P}(\texttt{\rm opt}_\mathcal{I})
\leq \alpha + O(\epsilon)$.
\end{theorem}
\begin{proof}
Let $\tau$ be the map from $P$ to $E_w$ of \autoref{definition:bounded}. By triangle inequality, it follows that $\nu_{P}(S) \leq \sum_{x \in P}d(x,\tau(x)) + \nu_{E_w}(S)$.
The set $E_w$ is a $2\epsilon$-bounded coreset of $\mathcal{I}$, so we have that $\sum_{x \in P}d(x,\tau(x)) \leq 2\epsilon \cdot \nu_{P}(\texttt{\rm opt}_\mathcal{I})$. Since $\mathcal{A}$ is an $\alpha$-approximation algorithm, we have that $\nu_{E_w}(S) \leq \alpha \cdot \nu_{E_w}(\texttt{\rm opt}_{\mathcal{I}'})$. As $E_w$ is also a $7\epsilon$-centroid set, there exists a solution $X \subseteq E_w$ such that $\nu_{P}(X) \leq (1+7\epsilon)\nu_{P}(\texttt{\rm opt}_\mathcal{I})$. We obtain that $\nu_{E_w}(\texttt{\rm opt}_{\mathcal{I}'}) \leq \nu_{E_w}(X) \leq (1+2\epsilon)(1+7\epsilon)\nu_{P}(\texttt{\rm opt}_\mathcal{I})$. In the last inequality, we used the fact that $E_w$ is a $2\epsilon$-approximate coreset of $\mathcal{I}$ due to \autoref{lemma:boundedtostrongkmedian}. Putting it all together, $\nu_{P}(S)/\nu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq \alpha (1+7\epsilon)(1+2\epsilon) + 2\epsilon = \alpha + O(\epsilon)$.
\end{proof}
\subsection{Coreset construction for $k$-means}
\label{subsection:coresetkmeans}
In this subsection, we present a $2$-round MapReduce algorithm to
compute a weighted subset $E_w$ which is both an
$O(\epsilon^2)$-bounded coreset and an $O(\epsilon)$-centroid set
of an instance $\mathcal{I}$ of $k$-means and then show that an
$\alpha$-approximate solution of $\mathcal{I}' = (E_w,k)$ is an $(\alpha +
O(\epsilon))$-approximate solution of $\mathcal{I}$.
The algorithm is an adaptation of the one devised in the previous
subsection for $k$-median, with suitable tailoring of the parameters
involved to account for the presence of squared distances in the
objective function of $k$-means.
\noindent{\bf First Round.}
$P$ is partitioned into $L$ equally-sized subsets $P_1,\ldots,P_L$. Then in
parallel, on each $k$-means instance $\mathcal{I}_\ell = (P_\ell,k)$, with
$\ell=1,\ldots,L$, the following steps are performed:
\begin{enumerate}
\item Compute a set $T_\ell$ of $m\geq k$ points such that $\mu_{P_\ell}(T_\ell) \leq \beta \cdot \mu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}_\ell})$.
\item $R_\ell \longleftarrow \sqrt{\mu_{P_\ell}(T_\ell)/|P_\ell|}$.
\item $C_{w,\ell} \longleftarrow \texttt{CoverWithBalls}(P_\ell,T_\ell,R_\ell,\sqrt{2}\epsilon,\sqrt{\beta})$.
\end{enumerate}
\noindent{\bf Second Round.}
Let $C_w = \cup_{\ell =1}^{L}C_{w,\ell}$. The same partition of $P$ as in the first round is used. Together with $P_{\ell}$, the $\ell$-th reducer receives a copy of $C_w$, and all values $R_i$ computed in the previous round, for $i = 1, \ldots, L$. On each $k$-means instance $\mathcal{I}_\ell = (P_\ell,k)$, with $\ell = 1,\ldots,L$, the following steps are performed:
\begin{enumerate}
\item $R \longleftarrow \sqrt{\sum_{i=1}^{L} |P_i| \cdot R_i^2 / |P|}$
\item $E_{w,\ell} \longleftarrow \texttt{CoverWithBalls}(P_\ell,C_{w},R,\sqrt{2}\epsilon,\sqrt{\beta})$.
\end{enumerate}
The set $E_w = \cup_{\ell=1}^{L}E_{w,\ell}$ is the output of the
algorithm. The computation of $T_\ell$ in the first round can be accomplished using the linear-space
constant approximation algorithms of \cite{Gupta2005,Kanungo2002}.
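The parameter tailoring for squared distances can be checked numerically: squaring the per-point $\texttt{CoverWithBalls}$ guarantee obtained with $(\sqrt{2}\epsilon,\sqrt{\beta})$ yields exactly the bound $\epsilon^2/(2\beta)\cdot\max\{R^2,d(x,T)^2\}$ used in the analysis. The following Python sketch illustrates this, together with the second-round radius formula; all numeric values are hypothetical:

```python
import math

def kmeans_cwb_params(mu_T, n, eps, beta):
    """Radius and CoverWithBalls parameters used by the k-means variant:
    the radius is the square root of the average *squared* cost, and
    (eps, beta) are replaced by (sqrt(2)*eps, sqrt(beta))."""
    R = math.sqrt(mu_T / n)
    return R, math.sqrt(2) * eps, math.sqrt(beta)

def global_radius(sizes, radii):
    """Second-round radius: R = sqrt(sum_i |P_i| * R_i^2 / |P|)."""
    total = sum(sizes)
    return math.sqrt(sum(n * r * r for n, r in zip(sizes, radii)) / total)

# Check the squaring argument on one point (hypothetical values).
eps, beta = 0.1, 2.0
R, e2, b2 = kmeans_cwb_params(mu_T=8.0, n=32, eps=eps, beta=beta)
dxT = 1.3                                # distance from x to T
bound = e2 / (2 * b2) * max(R, dxT)      # CoverWithBalls guarantee on d(x,tau(x))
claimed = eps ** 2 / (2 * beta) * max(R ** 2, dxT ** 2)  # bound used in the analysis
```

Here `bound**2` coincides with `claimed`, which is the identity exploited in the proofs below.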
The analysis follows the lines of the one carried out for the $k$-median
coreset construction. The following lemma establishes the properties of each $C_{w,\ell}$.
\begin{lemma}
\label{lemma:cwboundedcoresetkmeans}
For $\ell=1,\ldots,L$, $C_{w,\ell}$ is an $\epsilon^2$-bounded coreset
of the $k$-means instance $\mathcal{I}_\ell$.
\end{lemma}
\begin{proof}
Fix a value of $\ell$. Let $\tau_\ell$ be the map from the points in $P_\ell$ to the points in $C_{w,\ell}$ of \autoref{lemma:taucoverwithballs}. The set $C_{w,\ell}$ is weighted according to $\tau_\ell$. Also, it holds that:
\begin{align*}
\sum_{x \in P_\ell}d(x,\tau_\ell(x))^2 \leq \frac{\epsilon^2}{2\beta} \sum_{x \in P_\ell}\left[R_\ell^2+d(x,T_\ell)^2 \right] \leq \frac{\epsilon^2}{2\beta}\left[ R_\ell^2\cdot|P_\ell|+\mu_{P_\ell}(T_\ell) \right] \leq \epsilon^2 \cdot \mu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}_\ell})
\end{align*}
\end{proof}
Next, in the following two lemmas, we characterize the properties and the
size of $E_w$.
\begin{lemma}
\label{lemma:centroidkmeans}
Let $\mathcal{I} = (P,k)$ be a $k$-means instance and assume that $\epsilon$
is a positive value such that $\epsilon+\epsilon^2 \leq 1/8$. Then, the set
$E_w$ returned by the above MapReduce algorithm is both a
$4\epsilon^2$-bounded coreset and a $27\epsilon$-centroid set of
$\mathcal{I}$.
\end{lemma}
\begin{proof}
Let $\phi_\ell$ be the map of
\autoref{lemma:taucoverwithballs} from the points in $P_\ell$ to the
points in $E_{w,\ell}$. We have that $\sum_{x \in P_\ell}d(x,
\phi_\ell(x))^2 \leq \epsilon^2/(2\beta)\left(|P_\ell| \cdot R^2
+ \mu_{P_\ell}(C_{w})\right)$. For any $x \in P$, let $\hat{\ell}$ be the index for which $x \in P_{\hat{\ell}}$; we define $\phi(x) = \phi_{\hat{\ell}}(x)$. We have that:
\begin{align*}
\sum_{x \in P}d(x,\phi(x))^2 \leq \frac{\epsilon^2}{2\beta} \sum_{\ell = 1}^{L} \left[R^2 |P_\ell| + \mu_{P_\ell}(C_{w}) \right] = \frac{\epsilon^2}{2\beta}\left(\left( \sum_{\ell=1}^{L} |P_\ell|\cdot R_{\ell}^2\right) + \mu_{P}(C_w) \right)
\end{align*}
Using the fact that
$|P_\ell|\cdot R_\ell^2 = \mu_{P_\ell}(T_\ell) \leq \beta \cdot
\mu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}_\ell}) \leq 4\beta \cdot
\mu_{P_\ell}(\texttt{\rm opt}_{\mathcal{I}})$, where the last inequality is due to \autoref{lemma:optimalsolrelation}, we have that $\sum_{\ell} R_\ell^2 |P_\ell| \leq \sum_{\ell} 4\beta \cdot
\mu_{P_\ell}(\texttt{\rm opt}_\mathcal{I}) \leq 4\beta \cdot \mu_{P}(\texttt{\rm opt}_\mathcal{I})$. Also, by \autoref{lemma:cwboundedcoresetkmeans} and \autoref{lemma:unionbounded}, $C_{w}$ is a
$4\epsilon^2$-bounded coreset of $P$, thus
$\mu_{P}(C_{w}) \leq 4\epsilon^2 \cdot
\mu_{P}(\texttt{\rm opt}_{\mathcal{I}})$.
Therefore, $\sum_{x \in P}d(x,\phi(x))^2 \leq \frac{\epsilon^2}{2\beta}\left(4\beta + 4\epsilon^2\right)\mu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq 4\epsilon^2 \cdot \mu_{P}(\texttt{\rm opt}_\mathcal{I})$, hence $E_{w}$ is
a $4\epsilon^2$-bounded coreset of $\mathcal{I}$.
We now show that $E_w$ is a $27\epsilon$-centroid set of $\mathcal{I}$. Let $X = \{
x^{E_w} : x \in \texttt{\rm opt}_\mathcal{I} \}$. By \autoref{lemma:boundedtostrongkmeans}, $C_w$ is a
$\gamma$-approximate coreset of $\mathcal{I}$, with $\gamma = 4(\epsilon +
\epsilon^2) \leq 1/2$. Hence, $\mu_{P}(X) \leq 1/(1-\gamma)\cdot
\mu_{C_w}(X)$. By \autoref{proposition:squareddistance}, we have:
\begin{align*}
\mu_{C_w}(X) = \sum_{x \in C_w}w(x)d(x,X)^2 \leq (1+\epsilon)\mu_{C_w}(\texttt{\rm opt}_\mathcal{I}) + (1+1/\epsilon)\sum_{x \in C_w}w(x)d(x^{\texttt{\rm opt}_\mathcal{I}},X)^2
\end{align*}
Since $C_w$ is a $\gamma$-approximate coreset, it holds that
$\mu_{C_w}(\texttt{\rm opt}_\mathcal{I}) \leq (1+\gamma)\mu_{P}(\texttt{\rm opt}_\mathcal{I})$. By
reasoning
as in the proof of \autoref{lemma:centroidkmedian}, we have that
$\sum_{x \in C_w}w(x)d(x^{\texttt{\rm opt}_\mathcal{I}},X)^2 \leq
(5\epsilon^2/2 + \gamma\epsilon^2/2)\mu_{P}(\texttt{\rm opt}_\mathcal{I})$. Putting it
all together, we conclude:
\begin{align*}
\mu_{P}(X)/\mu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq \left(1+\gamma+5\epsilon^2/2 + \gamma\epsilon^2/2 + 7\epsilon/2+3\gamma\epsilon/2\right)/(1-\gamma).
\end{align*}
Since $\gamma \leq 1/2$, we have that $1/(1-\gamma) \leq 1 + 2\gamma$. By using the constraint on $\epsilon$ and the definition of $\gamma$, after some tedious computations, we obtain $\mu_{P}(X)/\mu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq 1+27\epsilon$.
\end{proof}
\begin{lemma}
\label{lemma:kmeanssize}
Let $\mathcal{I} = (P,k)$ be a $k$-means instance. Suppose that the points
in $P$ belong to a metric space with doubling dimension $D$. Let $E_w$
be the set returned by the above MapReduce algorithm with input
$\mathcal{I}$ and $m\geq k$. Then, $|E_w| = O\left( L^2 \cdot m \cdot
(8\sqrt{2\beta}/\epsilon)^{2D} \log^2{|P|} \right)$
\end{lemma}
\begin{proof}
For any $\ell=1,\ldots,L$ and $x \in P_\ell$, it holds
that $R_{\ell}\cdot \sqrt{|P_\ell|} = \sqrt{\mu_{P_\ell}(T_\ell)} \geq
d(x,T_\ell)$. By using \autoref{theorem:sizecoverwithballs}, we obtain
that $|C_{w,\ell}| = O\left(m \cdot (8\sqrt{2\beta}/\epsilon)^{D}
\log{|P|} \right)$, and we can bound the size of $C_w$ by summing over the $L$ sets. Also, as shown in the proof of \autoref{lemma:cwboundedcoresetkmeans}, for any $\ell=1,\ldots,L$ it holds that $\mu_{P_\ell}(C_{w,\ell}) \leq \sum_{x \in P_\ell}d(x,\tau_\ell(x))^2 \leq (\epsilon^2/\beta)\cdot \mu_{P_\ell}(T_\ell) \leq \epsilon^2 \cdot \mu_{P_\ell}(T_\ell)$. Hence, for any $x \in P$ we have that
$\epsilon\sqrt{|P|}\cdot R = \sqrt{\epsilon^2 \sum_{\ell} |P_\ell| R_\ell^2 } = \sqrt{\epsilon^2 \sum_{\ell}
\mu_{P_\ell}(T_\ell)} \geq
\sqrt{\sum_\ell \mu_{P_\ell}(C_{w,\ell})} \geq \sqrt{\mu_{P}(C_w)} \geq d(x,C_{w})$. Thus,
the lemma follows by applying \autoref{theorem:sizecoverwithballs} to
bound the sizes of the sets $E_{w,\ell}$.
\end{proof}
We are now ready to state the main result of this subsection.
\begin{theorem}
\label{theorem:kmeanscoresetfactor}
Let $\mathcal{I} = (P,k)$ be a $k$-means instance and let $E_w$ be the set
returned by the above MapReduce algorithm for a fixed positive
$\epsilon$ such that $\epsilon + \epsilon^2 \leq 1/8$. Let
$\mathcal{A}$ be an $\alpha$-approximation algorithm for the $k$-means
problem, with constant $\alpha$. If $S$ is the solution returned by $\mathcal{A}$ with input
$\mathcal{I}' = (E_w,k)$, then $\mu_{P}(S)/\mu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq \alpha +
O(\epsilon)$.
\end{theorem}
\begin{proof}
By \autoref{lemma:centroidkmeans} and \autoref{lemma:boundedtostrongkmeans}, $E_w$ is a $(4\epsilon^2+4\epsilon)$-approximate coreset of $\mathcal{I}$. Therefore, $\mu_{P}(S) \leq (1/(1-4\epsilon-4\epsilon^2)) \cdot \mu_{E_w}(S)$. Since $\mathcal{A}$ is an $\alpha$-approximation algorithm, $\mu_{E_w}(S) \leq \alpha \cdot \mu_{E_w}(\texttt{\rm opt}_{\mathcal{I}'})$. Also, $E_w$ is a $27\epsilon$-centroid set, thus there exists a solution $X \subseteq E_w$ such that $\mu_{P}(X) \leq (1+27\epsilon)\cdot \mu_{P}(\texttt{\rm opt}_\mathcal{I})$. We have that $\mu_{E_w}(\texttt{\rm opt}_{\mathcal{I}'}) \leq \mu_{E_w}(X) \leq (1+4\epsilon+4\epsilon^2) \cdot \mu_P(X) \leq (1+4\epsilon+4\epsilon^2)(1+27\epsilon)\cdot \mu_P(\texttt{\rm opt}_\mathcal{I})$, where the second
inequality follows again
from the fact that $E_w$ is a $(4\epsilon^2+4\epsilon)$-approximate coreset of $\mathcal{I}$. Because of the constraints on $\epsilon$, we have that $1/(1-4\epsilon-4\epsilon^2) \leq 1+8\epsilon+8\epsilon^2$. Therefore, it finally results that $\mu_{P}(S)/\mu_{P}(\texttt{\rm opt}_\mathcal{I}) \leq \alpha \cdot (1+8\epsilon+8\epsilon^2)(1+4\epsilon+4\epsilon^2)(1+27\epsilon) = \alpha + O(\epsilon)$.
\end{proof}
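The final inequality manipulations can be checked numerically. Note that the constraint $\epsilon+\epsilon^{2}\leq 1/8$ gives $4\epsilon+4\epsilon^{2}\leq 1/2$, so the bound $1/(1-4\epsilon-4\epsilon^{2})\leq 1+8\epsilon+8\epsilon^{2}$ is the elementary estimate $1/(1-\gamma)\leq 1+2\gamma$ applied with $\gamma=4\epsilon+4\epsilon^{2}$. A sketch over the full admissible range:

```python
import math

# largest admissible epsilon: the positive root of eps^2 + eps = 1/8
eps_max = (-1.0 + math.sqrt(1.5)) / 2.0
for i in range(1, 1001):
    eps = eps_max * i / 1000
    gamma = 4 * eps + 4 * eps**2
    assert gamma <= 0.5 + 1e-12                          # denominator >= 1/2
    assert 1.0 / (1.0 - gamma) <= 1.0 + 2.0 * gamma + 1e-12
# the final product is 1 + O(eps): e.g. at eps = 0.01 it stays below 1 + 45*eps
eps = 0.01
prod = (1 + 8*eps + 8*eps**2) * (1 + 4*eps + 4*eps**2) * (1 + 27*eps)
assert prod < 1 + 45 * eps
```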
As noted in Subsection~\ref{subsection:approachkmedian}, a simpler version of this algorithm can be employed if we restrict our attention to the continuous case. Indeed, if we limit the algorithm to the first round and output the set $C_w = \cup_{\ell}C_{w,\ell}$, it is easy to show that an $\alpha$-approximate algorithm executed on the coreset $C_w$ returns an $(\alpha+O(\epsilon))$-approximate solution.
\subsection{MapReduce algorithms for $k$-median and $k$-means}
\label{subsection:mapreducefinal}
Let $\mathcal{I} = (P,k)$ be a $k$-median (resp.,
$k$-means) instance. We can compute an approximate solution of $\mathcal{I}$
in three MapReduce rounds: in the first two rounds, a weighted coreset $E_w$
is computed using the algorithm described in
Subsection~\ref{subsection:coresetkmedian} (resp.,
Subsection~\ref{subsection:coresetkmeans}), while in the third round
the final solution is computed by running a sequential approximation
algorithm for the weighted variant of the problem on $E_w$.
Suppose that in the first of the two rounds of coreset construction we use
a linear-space algorithm to compute the sets $T_\ell$
of size $m = O(k)$,
and cost at most a factor $\beta$ times the optimal cost,
and that in the third round we run a linear-space $\alpha$-approximation
algorithm on $E_w$, with constant $\alpha$. Setting $L = \sqrt[\leftroot{-2}\uproot{2}3]{|P|/k}$ we obtain the following theorem
as an immediate consequence of
Lemmas~\ref{lemma:kmediansize} and~\ref{lemma:kmeanssize},
and Theorems~\ref{theorem:kmediancoresetfactor} and~\ref{theorem:kmeanscoresetfactor}.
\begin{theorem}
Let $\mathcal{I} = (P,k)$ be an instance of $k$-median $($resp.,
$k$-means$)$. Suppose that the points in $P$ belong to a metric space
with doubling dimension $D$. For any $\epsilon \in (0,1)$
$($with $\epsilon+\epsilon^2 \leq 1/8$ for $k$-means$)$
the 3-round MapReduce algorithm
described above computes an $(\alpha+O(\epsilon))$-approximate
solution of $\mathcal{I}$ using local space
$O\left(\ |P|^{2/3}k^{1/3} (16\beta/\epsilon)^{2D} \log^2{|P|} \right)$
$($resp., $O\left(|P|^{2/3}k^{1/3} (8\sqrt{2\beta}/\epsilon)^{2D} \log^2{|P|} \right)$$)$.
\end{theorem}
Note that for a wide range of the relevant parameters,
the local space of the MapReduce algorithms is
substantially sublinear in the input size, and it is easy to show that the aggregate space is linear in $|P|$.
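To make ``substantially sublinear'' concrete, a back-of-the-envelope comparison with hypothetical instance sizes, keeping only the $|P|^{2/3}k^{1/3}$ term of the bound and treating the $(O(1)/\epsilon)^{2D}\log^{2}|P|$ factor as lower order:

```python
# Hypothetical sizes, for illustration only.
P_size, k = 10**9, 100
local = P_size ** (2 / 3) * k ** (1 / 3)  # dominant term of the local-space bound
print(local / P_size)                     # fraction of the input on one worker
assert local < P_size / 100               # far below the input size
```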
As concrete instantiations of the above result,
both the $T_{\ell}$'s and the final solution may be obtained through
the sequential algorithms in \cite{AryaGKMMP04} for
$k$-median, and in \cite{Gupta2005} for $k$-means. Both algorithms are
based on local search and feature approximations $\alpha = 3+2/t$
for $k$-median, and $\alpha = 5+4/t$ for $k$-means, where $t$ is the
number of simultaneous swaps allowed. With this choice, the result of
the above theorem holds with $\beta = \alpha = O(1)$. Alternatively,
for the $T_{\ell}$'s we could use $k$-means++
\cite{BahmaniMVKV12}
as a bi-criteria approximation algorithm (e.g., see \cite{Wei16}),
which yields a smaller $\beta$, at the expense of a slight, yet
constant, increase in the size $m$ of the $T_\ell$'s. For larger $D$,
this might be a better choice as the coreset size (hence the local
memory) is linear in $m$ and $\beta^{2D}$ (resp., $\beta^D$). Moreover, bi-criteria
approximations are usually faster to compute than actual solutions.
\section{Conclusions} \label{sec:conclusions}
We presented distributed coreset constructions that can
be used in conjunction with sequential approximation algorithms for
$k$-median and $k$-means in general metric spaces to obtain the first
space-efficient, 3-round MapReduce algorithms for the two problems,
which are almost as accurate as their sequential counterparts. The
constructions for the two problems are based on a uniform strategy,
and crucially leverage the properties of spaces of bounded doubling
dimension, specifically those related to ball coverings of sets of
points. One attractive feature of our constructions is their
simplicity, which makes them amenable to fast practical
implementations.
\bibliographystyle{plainurl}
\section{Introduction}
In ``collisional'' stellar dynamics the potential in which a star
moves is considered to be smooth to first order, and the fact that the
potential is in fact made out of discrete stars is treated as a
perturbation (e.g. Chandrasekhar \cite{Ch43}; Binney \& Tremaine
\cite{BT87}). Stars have orbital parameters such as angular momentum
and energy that are conserved in the smooth potential, and these
parameters indeed remain constant for many dynamical times $t_d$. Only
the weak encounters allow for changes in these quantities. For
example, two stars which interact with each other, can exchange energy
and angular momentum, so that after their encounters their orbits are
described by slightly different quantities. In this way not only the
orbits of individual stars are modified, but the distribution function
(DF) of the entire system can change, and evolve towards a steady
state. The time-scale over which a system evolves is the {\it
relaxation time} $t_r$. In most systems $t_d\ll t_r$, and relaxation
can indeed be treated as a second order effect.
Analyses of the evolution of the DF near a MBH have almost exclusively
relied on the assumption that the mechanism through which stars
exchange angular momentum and energy is dominated by
\textit{uncorrelated two-body interactions}\footnote{$N$-body
simulations form an exception; for $N$-body simulations near MBHs see
e.g. Baumgardt, Makino \& Ebisuzaki \cite{Baum04a}, \cite{Baum04b};
Preto, Merritt \& Spurzem \cite{P04}; Merritt \& Szell
\cite{MS05}}. Any encounter is assumed to be unrelated to previous and
future encounters, and changes in energy and angular momenta are
considered to be drawn from a specified random
distribution. Relaxation can therefore in a meaningful way be
considered to be a random walk process. In the context of stellar
dynamics near MBHs, the assumption of uncorrelated encounters is made
in Fokker-Planck models (e.g. Bahcall \& Wolf \cite{BW76}; Bahcall \&
Wolf \cite{BW77}; Cohn \& Kulsrud \cite{CK78}; Murphy, Cohn \& Durisen
\cite{MCD91}), where the microscopic interactions are expressed by the
diffusion coefficients, and in Monte Carlo simulations (e.g. Shapiro
\& Marchant \cite{SM79}; Marchant \& Shapiro \cite{MS79}, \cite{MS80};
Freitag \& Benz \cite{FB01}, \cite{FB02}). Stars around MBHs are
described as moving in the smooth average potential of the MBH and the
stars, and the scattering by the fluctuating part of the potential is
modeled as a hyperbolic Keplerian interaction between a passing star
and a test star.
The (non-resonant) relaxation time $T_{\mathrm{NR}}$ can be defined as
the time it takes for the energy $\mathcal{E}$ of a typical star to change by
order unity. This is also the time it takes for its specific angular
momentum $J$ to change by an amount of order $J_{c}(\mathcal{E})$, the maximal
angular momentum for that energy. On Keplerian orbits
$J_{c}\!=\!\sqrt{GM_{\bullet} a}$, where $a$ is the semi-major axis. The
{}``non-resonant'' relaxation time $T_{\mathrm{NR}}$ of stars of mass
$M_{\star}$ can be written in the Keplerian regime as
\begin{equation}
T_{\mathrm{NR}}=A_{\Lambda}\left({\frac{M_{\bullet}}{M_{\star}}}\right)^{2}{\frac{P(a)}{N(<a)}}\qquad(M_{\bullet}\!\gg\!M_{\star})\,,\label{e:tr}\end{equation}
where $P\!=\!2\pi\sqrt{a^{3}/(GM_{\bullet})}$ is the orbital period and
$A_{\Lambda}$ is a dimensionless constant which includes the Coulomb
logarithm. For some stellar systems (like galaxies), $T_{\rm NR}$ is
much longer than the age of the system, implying that the system
cannot evolve significantly towards steady state by two-body
interactions. For other systems, like very dense stellar clusters,
the relaxation time can be as small as a few Myr, and such systems
may even evaporate within a Hubble time. In our GC, the relaxation
time is somewhat smaller than the age of the system, $T_{\rm
NR}\sim{\rm few}{\,\rm Gyr}$ (e.g. Alexander \cite{A99}, \cite{A05}),
implying that the system has evolved considerably, and two-body
relaxation effects such as mass-segregation have occurred (Bahcall \&
Wolf \cite{BW77}; Freitag, Amaro-Seoane \& Kalogera \cite{Fre06};
Hopman \& Alexander \cite{Hop06b}). At the same time, the relaxation
time is much longer than some other relevant times in the GC, in
particular the age of the youngest stars.
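As an order-of-magnitude check of Eq. (\ref{e:tr}), the sketch below plugs in illustrative fiducial numbers (not taken from any particular fit): $M_{\bullet}=3.6\times10^{6}\,M_{\odot}$, $M_{\star}=1\,M_{\odot}$, $a=0.1\,\mathrm{pc}$, an assumed $N(<a)=10^{5}$, and $A_{\Lambda}\approx0.02$ to absorb the Coulomb logarithm. It reproduces the few-Gyr value quoted above:

```python
import math

# SI constants and unit conversions
G = 6.674e-11
Msun, pc, yr = 1.989e30, 3.086e16, 3.156e7

# fiducial (illustrative) Galactic-center numbers
Mbh, Mstar = 3.6e6 * Msun, 1.0 * Msun
a, N = 0.1 * pc, 1e5          # radius and assumed enclosed number of stars
A_lam = 0.02                  # order 1/(few ln Lambda); absorbs the Coulomb log

P = 2 * math.pi * math.sqrt(a**3 / (G * Mbh))   # Keplerian orbital period
T_nr = A_lam * (Mbh / Mstar)**2 * P / N         # non-resonant relaxation time
print(T_nr / yr)              # a few Gyr, consistent with the text
assert 1e9 < T_nr / yr < 1e10
```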
The assumption of uncorrelated two-body interactions is well-justified
in many systems, such as globular clusters, where stellar orbits are
not closed. However, the special symmetry of a Keplerian potential
leads to closed, elliptical orbits. The fact that the orbits are
closed can be exploited in numerical treatment, and also leads to
unique dynamical features (see also the contribution of Touma in this
volume). Since $t_r\gg t_d$, the orbits remain closed for many
dynamical times, and the system may be thought of as a set of ``wires''
with the mass of the star smeared out over the orbits. In this
picture, it is the wires that interact and cause the evolution of the
system, rather than point particles interacting at given
locations. The idea is reminiscent of the Kozai mechanism in triple
stars (Kozai \cite{KO62}).
Rauch \& Tremaine \cite{RT96} first used this approach in the context
of many body stellar dynamics near MBHs, and coined the term {\it
resonant relaxation} (RR), after the $1\!:\!1$ resonance between the
radial and azimuthal frequencies in a Keplerian potential. The wire
approximation is only relevant for times $\ll t_{\omega}$, where $t_{\omega}$ is the
time for the orbit to precess. Precession may be caused by the fact
that the potential is not entirely determined by a point mass, and
there is still some extended component to the potential due to the
stellar mass; this is especially the case far away ($\gtrsim0.1\,\mathrm{pc}$)
from the MBH. Closer to the MBH ($\lesssim0.01\,\mathrm{pc}$), precession may be
dominated by effects of General Relativity\footnote{For the parameters
of interest here, Lense-Thirring precession is much less efficient
than mass and GR precession even for a maximally spinning MBH, and we
do not consider it here.}.
\subsection{Scalar resonant relaxation}
\label{ss:TRRs}
Scalar relaxation results in changes in \emph{both} the direction and
the magnitude of the angular momenta. The RR time
$T_{\mathrm{RR}}$ is estimated by evaluating $\DeltaJ_{\omega}$, the coherent
change in the magnitude of the specific angular momentum up to a time
$t_{\omega}$. The change $\DeltaJ_{\omega}$ is then the step size for the
non-coherent growth of the angular momentum over times
$t\!>\!t_{\omega}$. Two nearby stars with semi-major axes $a$ exert a mutual
specific torque $\sim\! GM_{\star}/a$. To zeroth order the torques of the
stars on a test wire cancel, so that within a distance $a$ from the
MBH the net torque on a test star is determined by the Poissonian
excess torque $\dot{J}\!\sim\!\sqrt{N(<\! a)}GM_{\star}/a$ and
\begin{equation}
\DeltaJ_{\omega}\sim\dot{J}t_{\omega}=\sqrt{N(<\!
a)}(GM_{\star}/a)t_{\omega}\,.\label{e:Jw}
\end{equation}
For $t\!>\!t_{\omega}$ the torques on a particular star-wire become random,
and the change in angular momentum grows in a random walk fashion with
a timescale $T_{\mathrm{RR}}\!\sim\!(J_{c}/\Delta J_{\omega})^{2}t_{\omega}$,
defined as
\begin{equation}
T_{\mathrm{RR}}\!\equiv\! A_{\mathrm{RR}}\frac{N(>\!\mathcal{E})}{\mu^{2}(>\!\mathcal{E})}\frac{P^{2}(\mathcal{E})}{t_{\omega}}\!\simeq\!\frac{A_{\mathrm{RR}}}{N(<\! a)}\left(\frac{M_{\bullet}}{M_{\star}}\right)^{2}\frac{P^{2}(a)}{t_{\omega}}\,,\label{e:TRR}
\end{equation}
where $\mu\!\equiv\! NM_{\star}/(M_{\bullet}\!+\! NM_{\star})$, $A_{\mathrm{RR}}$ is a
numerical factor of order unity, to be determined by simulations, and
the last approximate equality holds in the Keplerian regime.
Over most of the relevant phase space the precession is due to the
deviations from pure Keplerian motion caused by the potential of the
extended stellar cluster. This occurs on a timescale $t_{\omega}\!=\!
t_{M}=[M_{\bullet}/N(<\! a)M_{\star}]P(a)$, assuming $N(<\! a)M_{\star}\!\ll\!M_{\bullet}$. The
$J$-averaged RR timescale can then be written as
\begin{equation}
T_{\mathrm{RR}}^{M}=A_{\mathrm{RR}}{\frac{M_{\bullet}}{M_{\star}}}P(a)={\frac{A_{\mathrm{RR}}}{A_{\Lambda}}}{\frac{M_{\star}}{M_{\bullet}}}N(<\!
a)T_{\mathrm{NR}}.\label{e:TRRM}\end{equation} Since
$T_{\mathrm{RR}}^{M}\!\ll\! T_{\mathrm{NR}}$ for small $a$ where
$N(<\! a)M_{\star}\!\ll\!M_{\bullet}$, the RR rate of angular
momentum relaxation is much higher than the rate of energy relaxation
in the resonant regime. This qualitative analysis has been verified by
detailed numerical $N$-body simulations by Rauch \& Tremaine
\cite{RT96} and by Rauch \& Ingalls \cite{RI98}.
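The same style of estimate makes the RR speed-up explicit. With illustrative values deep in the resonant regime ($a=0.01\,\mathrm{pc}$, an assumed $N(<a)=10^{3}$, $A_{\mathrm{RR}}=1$, and $A_{\Lambda}\approx0.02$ absorbing the Coulomb logarithm), the sketch below evaluates Eq. (\ref{e:TRRM}) and compares it to the non-resonant time; the ratio reduces to $T_{\mathrm{NR}}/T_{\mathrm{RR}}^{M}=(A_{\Lambda}/A_{\mathrm{RR}})(M_{\bullet}/M_{\star})/N(<a)$:

```python
import math

G = 6.674e-11
Msun, pc, yr = 1.989e30, 3.086e16, 3.156e7
Mbh, Mstar = 3.6e6 * Msun, 1.0 * Msun

a, N = 0.01 * pc, 1e3         # illustrative values inside the resonant regime
A_rr, A_lam = 1.0, 0.02       # assumed order-unity / Coulomb-log prefactors

P = 2 * math.pi * math.sqrt(a**3 / (G * Mbh))
T_rr_m = A_rr * (Mbh / Mstar) * P               # mass-precession RR time
T_nr = A_lam * (Mbh / Mstar)**2 * P / N         # non-resonant time
print(T_rr_m / yr, T_nr / T_rr_m)               # RR is ~70x faster here
assert T_rr_m < T_nr
assert abs(T_nr / T_rr_m - (A_lam / A_rr) * (Mbh / Mstar) / N) < 1e-6
```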
For most of parameter space, orbital precession is dominated by the
mass of the stellar cluster and the RR timescale is well approximated
by $T_{\mathrm{RR}}\!\sim\! T_{\mathrm{RR}}^{M}$. However, very close
to the MBH, or on wide orbits with very low angular momentum, so that
the periapse is close to the Schwarzschild radius of the MBH,
precession is dominated by GR effects. In this case the timescale for
precession is given by $t_{\omega}\!=\!
t_{\mathrm{GR}}=(8/3)(J/J_{\mathrm{LSO}})^{2}P$; here
$J_{\mathrm{LSO}}\equiv(4GM_{\bullet}/c)$ is the angular momentum of the last
stable orbit (LSO). When $t_{\mathrm{GR}}\!\ll\! t_{M}$ and GR
precession dominates, the RR timescale is (Eq. \ref{e:TRRM})
\begin{equation}
T_{\mathrm{RR}}^{\mathrm{GR}}=\frac{3}{8}A_{\mathrm{RR}}\left(\frac{M_{\bullet}}{M_{\star}}\right)^{2}\left(\frac{J_{\mathrm{LSO}}}{J}\right)^{2}\frac{P(a)}{N(<\!
a)}\,.\label{e:TRRGR}\end{equation}
Generally, GR precession and mass precession occur simultaneously, and
the scalar RR timescale $T_{\mathrm{RR}}^{s}(\mathcal{E},J)$
is given by substituting
$1/t_{\omega}\!=\!\left|1/t_{M}-1/t_{\mathrm{GR}}\right|$ in
Eq. (\ref{e:TRR}), where the opposite signs reflect the fact that mass
precession is retrograde whereas GR precession is prograde. Thus, the
scalar RR timescale is\begin{equation}
T_{\mathrm{RR}}^{s}=\frac{A_{\mathrm{RR}}}{N(<\!
a)}\left(\frac{M_{\bullet}}{M_{\star}}\right)^{2}P^{2}(a)\left|\frac{1}{t_{M}}-\frac{1}{t_{\mathrm{GR}}}\right|\,.\label{e:TRRs}\end{equation}
We use the relation
$\mathrm{d}(J^{2})/J_{c}^{2}\!=\!\mathrm{d}t/T_{\mathrm{RR}}^{s}(\mathcal{E},J)$
(Eq. \ref{e:TRRs}) to define the $J$-averaged time it takes a star to
random-walk from $J=J_{c}(\mathcal{E})$ to the loss-cone $J=J_{lc}$ as
\begin{equation}
\bar{T}_{\mathrm{RR}}^{s}(\mathcal{E})=\frac{1}{J_{c}^{2}}\int_{J_{lc}^{2}}^{J_{c}^{2}}dJ^{2}T_{\mathrm{RR}}^{s}(\mathcal{E},J)\,.\label{e:Tave}\end{equation}
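The $J$-average in Eq. (\ref{e:Tave}) is easy to evaluate numerically. The sketch below uses a plain trapezoidal rule in $J^{2}$ with illustrative parameters ($M_{\bullet}=3.6\times10^{6}\,M_{\odot}$, $a=0.01\,\mathrm{pc}$, assumed $N(<a)=10^{3}$, $A_{\mathrm{RR}}=1$) and takes $J_{lc}\simeq J_{\mathrm{LSO}}$; mass precession controls most of the $J$ range, while GR precession takes over near the loss cone:

```python
import math

G, c = 6.674e-11, 2.998e8
Msun, pc, yr = 1.989e30, 3.086e16, 3.156e7
Mbh, Mstar = 3.6e6 * Msun, 1.0 * Msun
a, N, A_rr = 0.01 * pc, 1e3, 1.0       # illustrative values

P = 2 * math.pi * math.sqrt(a**3 / (G * Mbh))
Jc = math.sqrt(G * Mbh * a)            # circular angular momentum
Jlso = 4 * G * Mbh / c                 # last stable orbit
t_m = (Mbh / (N * Mstar)) * P          # mass-precession timescale

def T_s(j):  # scalar RR time at j = J/Jc, combining mass and GR precession
    inv_t_gr = 3.0 / (8.0 * (j * Jc / Jlso)**2 * P)
    return (A_rr / N) * (Mbh / Mstar)**2 * P**2 * abs(1.0 / t_m - inv_t_gr)

# trapezoidal average over dJ^2 from J_lc^2 ~ J_lso^2 to Jc^2
u_lo, n = (Jlso / Jc)**2, 4000
du = (1.0 - u_lo) / n
T_bar = sum((0.5 if i in (0, n) else 1.0) * T_s(math.sqrt(u_lo + i * du)) * du
            for i in range(n + 1))
print(T_bar / yr)                      # ~ a few 1e8 yr for these parameters
assert 1e7 < T_bar / yr < 1e10
```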
\subsection{Vector resonant relaxation}
\label{ss:TRRv}
For time scales much larger than the dynamical time, orbits precess
and describe a rosette shape. One can then consider the torques
between different rosettes rather than between different wires. Since
the rosettes describe planar rings to good approximation, they cannot
modify the magnitude of the angular momentum of the star, but they can
change the direction of the angular momentum vector. This process is
known as ``vector resonant relaxation'' (Rauch \& Tremaine
\cite{RT96}). Vector RR grows coherently ($\propto\! t$) on
timescales $t\!\ll\! t_{\varphi}$, where $t_{\varphi}$ is the
timescale for a change of order unity in the total gravitational
potential $\varphi$ caused by the changes in the stellar potential
$\varphi_{\star}$ due to the realignment of the stars as they rotate
by $\pi$ on their orbit,\begin{equation}
t_{\varphi}=\frac{\varphi}{\dot{\varphi_{\star}}}\simeq\frac{N^{1/2}}{\mu}\frac{P}{2}\simeq\frac{1}{2}\frac{M_{\bullet}}{M_{\star}}\frac{P}{N^{1/2}}\,,\label{e:tphi}\end{equation}
the last approximate equality holds for $NM_{\star}\!\ll\!M_{\bullet}$. In analogy
to scalar RR (Eq. \ref{e:Jw}), the maximal coherent change in
$\mathbf{J}$ is
$\left|\Delta\mathbf{J}_{\varphi}\right|\sim\dot{J}t_{\varphi}\sim
J_{c}$, that is, $\mathbf{J}$ rotates by an angle ${\cal {O}}(1)$
already at the coherent phase. On timescales $t\!\gg\! t_{\varphi}$,
$\left|\Delta\mathbf{J}_{\varphi}\right|$ cannot grow larger, as it
already reached its maximal possible value, but the orbital
inclination angle is continuously randomized non-coherently
($\propto\! t^{1/2}$) on the vector RR timescale (Eq. \ref{e:TRR}),
\begin{equation}
T_{\mathrm{RR}}^{v}=2A_{\mathrm{RR}}^{v}\frac{N^{1/2}(>\!\mathcal{E})}{\mu(\mathcal{E})}P(\mathcal{E})\simeq2A_{\mathrm{RR}}^{v}\left(\frac{M_{\bullet}}{M_{\star}}\right)\frac{P(a)}{N^{1/2}(<\! a)},\label{e:Tv}\end{equation}
where the last approximate equality holds for $NM_{\star}\!\ll\!M_{\bullet}$.
Note that while the torques driving scalar and vector resonant
relaxation are the same, vector RR is much more
efficient than scalar RR,
$T_{\mathrm{RR}}^{v}\!\ll\!\bar{T}_{\mathrm{RR}}^{s}$, due to the much
longer coherence time $t_{\varphi}\!\sim\! N^{1/2}t_{M}\!\gg\! t_{M}$.
Furthermore, vector RR proceeds irrespective of any
precession mechanisms that limit the efficiency of scalar resonant
relaxation.
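Dividing Eq. (\ref{e:TRRM}) by Eq. (\ref{e:Tv}) gives $T_{\mathrm{RR}}^{M}/T_{\mathrm{RR}}^{v}=(A_{\mathrm{RR}}/2A_{\mathrm{RR}}^{v})N^{1/2}(<a)$, i.e.~vector RR is faster by $\sim\!N^{1/2}$. A sketch with illustrative values near the inner edge of the stellar disks ($a=0.04\,\mathrm{pc}$, an assumed $N(<a)=2\times10^{4}$, and assumed order-unity prefactors):

```python
import math

G = 6.674e-11
Msun, pc, yr = 1.989e30, 3.086e16, 3.156e7
Mbh, Mstar = 3.6e6 * Msun, 1.0 * Msun

a, N = 0.04 * pc, 2e4              # illustrative: near the disks' inner edge
A_rr, A_v = 1.0, 0.5               # assumed order-unity prefactors

P = 2 * math.pi * math.sqrt(a**3 / (G * Mbh))
T_v = 2 * A_v * (Mbh / Mstar) * P / math.sqrt(N)   # vector RR time
T_s = A_rr * (Mbh / Mstar) * P                     # scalar RR, mass precession
print(T_v / yr, T_s / T_v)         # T_v ~ 1e7 yr; scalar slower by ~sqrt(N)
assert T_v < T_s
assert abs(T_s / T_v - (A_rr / (2 * A_v)) * math.sqrt(N)) < 1e-6
```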
\section{The origin of the young stellar population in the Galactic center}
\begin{figure}[h]
\includegraphics[width=18pc]{GC.eps}
\hspace{2pc}
\begin{minipage}[b]{18pc}
\caption{\label{f:GC}Stellar components, timescales and distance
scales in the GC. The NR timescale $T_{\mathrm{NR}}$ (top straight
line); the timescale $\bar{T}_{\textrm{RR}}^{s}$, estimated for
$1\,M_{\odot}$ stars (top curved line) and $10\,M_{\odot}$ stars (bottom curved
line); the timescale $T_{\mathrm{RR}}^{v}$ (bottom straight line); the
position and estimated age of the young stellar rings in the GC
(filled rectangle in the bottom right); the position and age of the
S-stars if they were born with the disks (empty rectangle in the
bottom left); the position and maximal lifespan of the S-stars (filled
rectangle in the middle left). Reprinted with permission from the
Astrophysical Journal}
\end{minipage}
\end{figure}
Figure (\ref{f:GC}) compares the distance scales and the ages or
lifespans of the various dynamical structures and components in the
inner pc of the GC with the relaxation timescales. The NR timescale in
the GC, which is roughly independent of radius, is
$T_{\mathrm{NR}}\!\sim\!\mathrm{few\!\times\!10^{9}}\,\mathrm{yr}$
(Eq. \ref{e:tr}). The scalar RR time $\bar{T}_{\mathrm{RR}}^{s}$ is
shown for $M_{\star}=1,10\,M_{\odot}$. At large radii the RR time decreases
towards the center, but for small radii, where GR precession becomes
significant, it increases again. The vector RR timescale
$T_{\mathrm{RR}}^{v}$, in contrast, decreases unquenched with
decreasing radius. Structures with estimated ages exceeding these
relaxation timescales must be relaxed.
Two distinct young stellar populations exist in the GC. At distances
of $0.04$--$0.5$ pc from the MBH there are $\sim\!70$ young
massive OB stars ($M_{\star}\!\gg\!10\,M_{\odot}$, lifespan of
$t_{\star}\!=\!6\pm2$ Myr), which are distributed in two nearly
perpendicular, tangentially rotating disks (Levin \& Belobodorov
\cite{LB03}; Genzel et al. \cite{Gea03}; Paumard et al. \cite{P05}).
It appears that these stars were formed by the fragmentation of gas
disks (Levin \& Belobodorov \cite{LB03}; Levin \cite{L03}; Nayakshin
\& Cuadra \cite{NC04}; Nayakshin \& Sunyaev \cite{NS05}; Nayakshin
\cite{N06}). Inside the inner $0.04\,\mathrm{pc}$ the population
changes. There is no evidence for old stars, and the young stars there
(the {}``S-stars'') are main-sequence B-stars ($M_{\star}\!\lesssim15\,M_{\odot}$,
lifespans of $10^{7}\!\lesssim\! t_{\star}\!\lesssim\!2\!\times\!10^{8}$ yr; Ghez et al. \cite{Gh03};
Eisenhauer et al. \cite{Ei05}) on randomly oriented orbits with a
random (thermal) $J$-distribution. There is to date no satisfactory
explanation for the presence of the S-stars so close to the MBH (see
Alexander \cite{A05} for a review).
The existence of coherent dynamical structures in the GC constrains
the relaxation processes on these distance scales, since the
relaxation timescales must be longer than the structure age
$t_{\star}$ to avoid randomizing it. Figure (\ref{f:GC}) shows that
the observed systematic trends in the spatial distribution, age and
state of relaxation of the different stellar components of the GC are
consistent with, and perhaps even caused by RR. The star disks are
young enough to retain their structure up to their inner edge at
$0.04\,\mathrm{pc}$, where $t_{\star}\!\sim\! T_{\mathrm{RR}}^{v}$
and vector RR can randomize the disk (Hopman \& Alexander
\cite{Hop06a}). It is tempting to explain the S-stars as originally
being the inner part of the same disks that are currently present in
the GC. However, this scenario is somewhat problematic. First, we note
that vector relaxation can only change the inclinations of the orbits,
and not their eccentricities, while many of the S-stars have high
($e>0.9$) eccentricities; the scalar resonant relaxation time is
larger than the age of the disks. Second, resonant relaxation alone
cannot explain why the S-stars are systematically less massive than
the disk stars. An alternative (Levin \cite{L06}) would be that the
S-stars were perhaps formed in previous accretion disks of which the
dynamical signatures have now disappeared.
If the S-stars were {\it not} formed in the disk, but captured by
either a tidal binary disruption (Gould \& Quillen \cite{GQ03}; see
also contribution from Perets et al. in this volume) or an exchange
interaction with a stellar mass black hole (Alexander \& Livio
\cite{AL04}), they may be much older than the disks, and in particular
their age may be comparable to the local scalar RR time (see figure
\ref{f:GC}). In this case, RR will redistribute their orbits within
their life-time. This may be an essential element of these formation
mechanisms: both scenarios lead to rather eccentric orbits (especially
tidal binary disruption), whereas not all the orbits of the S-stars
are very eccentric: star S1 has eccentricity $e=0.358\pm0.036$, and
S31 has $e=0.395\pm0.032$ (Eisenhauer et al. \cite{Ei05}). Since the
age of these stars may well exceed the RR time, RR may have
redistributed the eccentricities to the current DF, which is
consistent with a thermal DF.
Regardless of the origin of the S-stars, their random orbits are
consistent with the effect of RR. Vector RR can also explain why the
evolved red giants beyond $\sim\!0.04\,\mathrm{pc}$, in particular the
more massive ones with
$t_{\star}\!\ll\!\min(T_{\mathrm{NR}},\bar{T}_{\mathrm{RR}}^{s})$ are
relaxed, since $T_{\mathrm{RR}}^{v}\!<\! t_{\star}$ out to $\sim\!1$
pc.
\section{Gravitational wave sources}
MBHs with masses $M_{\bullet}\lesssim5\times10^6M_{\odot}$ have Schwarzschild radii
$r_S=2GM_{\bullet}/c^2$, such that a test mass orbiting at a few $r_S$ emits
gravitational waves (GWs) with frequencies $10^{-4}{\rm
Hz}\!\lesssim\!\nu\!\lesssim\!1{\rm Hz}$, detectable by the planned
space based {\it Laser Interferometer Space
Antenna}\footnote{http://lisa.jpl.nasa.gov/} ({\it LISA}). Such GW
sources, for which the mass of the inspiraling object is many orders
of magnitude smaller than the mass of the MBH are known as {\it
extreme mass ratio inspiral sources} (EMRIs). GW inspiral events are
very rare (of the order of $10^{-7}-10^{-8}{\rm \,yr^{-1}}$ per
galactic nucleus; e.g. Hils \& Bender \cite{HB95}; Sigurdsson \& Rees
\cite{SR97}; Ivanov \cite{IV02}; Freitag \cite{FR01}; Alexander \&
Hopman \cite{AH03}; Hopman \& Alexander \cite{HA05}, \cite{Hop06a},
\cite{Hop06b}), and it is unlikely that we will observe GWs from our
own Galactic center (GC), although such a possibility is not entirely
excluded (Freitag \cite{FR03}; Rubbo, Holley-Bockelmann \& Finn
\cite{Rub06}). Nevertheless, the galactic MBH plays an important role
in understanding the dynamics of EMRIs, since its mass is
very close to the mass of the ``optimal'' {\it LISA} EMRI target, and
as a consequence one may use the GC to model extra-galactic nuclei.
Hopman \& Alexander \cite{HA05} used a model based on the GC to
analyze the dynamics of EMRIs. One of the main results was that
inspiraling stars always originate very near the MBH, within a
distance of $\sim0.01\,\mathrm{pc}$: due to the relatively short relaxation time
in galactic nuclei, stars that start to spiral in from larger
distances are very likely to plunge into the MBH before becoming
observable as GW emitters. This result was confirmed qualitatively by
$N$-body simulations by Baumgardt et al. \cite{Bea05} of tidal capture
of MS stars by an intermediate mass black hole (Hopman, Portegies
Zwart \& Alexander \cite{HPZA04}; Hopman \& Portegies Zwart
\cite{HPZ05}).
The fact that only stars within $\sim0.01\,\mathrm{pc}$ spiral in successfully,
implies that it is the stellar content and dynamics of that region
which determine the rate of GW inspiral events for the different
populations in the system. This means, for example, that
mass-segregation is likely to play an important role (Hopman \&
Alexander \cite{Hop06b}; Freitag et al. \cite{Fre06}; see also
contribution by Marc Freitag in this volume). Since the resonant
relaxation time is very short ($T_{\rm RR}\ll T_{\rm NR}$) near
$\sim0.01\,\mathrm{pc}$, it also implies that RR will dictate the rate at which
stars are driven towards low $J$ orbits, where energy dissipation is
efficient and stars spiral in.
Hopman \& Alexander \cite{Hop06a} used a Fokker-Planck method in
energy space with a sink term due to RR losses in $J$-space to
calculate the GW inspiral rate. At every time-step, stars redistribute
in energy-space due to (non-resonant) two body scattering, and stars
are accreted by the MBH with some specified rate per energy bin. In
the relevant regime, this rate is assumed to be of order $\sim
N(E)/T_{\rm RR}$, i.e., within one RR time all stars in the bin would
be accreted if they were not replaced by new stars that flow to higher
energies (tighter orbits). In spite of the fact that stars are drained
very efficiently near the MBH, Hopman \& Alexander \cite{Hop06a} found
that the rate at which stars are replenished by two-body scattering is
sufficiently high that the stellar distribution will not be depleted
near the MBH, unless the efficiency of RR is more than an order of
magnitude larger than exploratory $N$-body simulations (Rauch \&
Tremaine \cite{RT96}) have indicated. Modifications of the stellar DF
due to RR are too small to be observable. Some of the stars that are
captured by the MBH are swallowed directly without giving a GW signal,
but the stars close to the MBH (within $\sim0.01\,\mathrm{pc}$) will spiral in
rapidly enough to obtain an orbit of period $P\lesssim10^4\,{\rm
s}$ for more than a year. Such sources would be observable to {\it
LISA} to distances up to a few Gpc, depending on the mass of the
inspiraling star. Since the enhanced rate at which stars flow to the
loss-cone in angular momentum space is sustained by the larger flow
due to two-body scattering in energy space, the rate at which EMRIs
are produced is increased. The analysis by Hopman \& Alexander
\cite{Hop06a} indicates that the rate at which {\it observable} EMRIs
are formed in galactic nuclei is $\sim8$ times higher than that for
the case in which RR was neglected.
\section{Conclusions}
Resonant relaxation is a relatively unexplored dynamical mechanism.
Vector RR, which only affects the orientation of the orbit but not the
eccentricity, operates in many stellar systems, while scalar RR, which
does affect the eccentricity, is unique for MBH systems. This
mechanism becomes important at distances $\lesssim0.1\,\mathrm{pc}$ from the
MBH. It may have played an important role in redistributing the orbits
of the S-stars, and enhances estimates of the GW inspiral rate by
nearly an order of magnitude. Our own GC provides a unique case study
for resonant relaxation.
\ack We thank the organizers of the GC2006 meeting for a very
stimulating conference, and Yuri Levin for discussions on resonant
relaxation.
\section*{References}
\section{Introduction}
\label{sec1}
In three dimensions, the Navier-Stokes equations are
\begin{equation}
\begin{array}{ll}\label{eq:NSE}
\partial_t v -\Delta v +v\cdot\nabla v+\nabla \pi = 0&\mbox{~in~}{\mathbb R }^3\times [0,\infty)
\\ \nabla\cdot v = 0&\mbox{~in~}{\mathbb R }^3\times [0,\infty),
\end{array}
\end{equation}
and are supplemented with some initial data $v_0$. The existence of weak solutions for finite energy initial data ($v_0\in L^2$) was developed by Leray in \cite{leray} and later generalized by Hopf in \cite{hopf}, based on the a priori bound
\begin{equation}
\int |v(x,t)|^2dx + \int_0^t \int 2 |\nabla v(x,t')|^2 dx\,dt'\le \int |v_0(x)|^2dx ,
\end{equation}
the same bound as the Stokes system which is \eqref{eq:NSE} without the nonlinearity $v\cdot\nabla v$. Another important property of \eqref{eq:NSE} is its natural scaling: given a solution $v$ and $\lambda>0$, it follows that
\begin{equation}
v^{\lambda}(x,t)=\lambda v(\lambda x,\lambda^2t),
\end{equation}
is also a solution with associated pressure
\begin{equation}
\pi^{\lambda}(x,t)=\lambda^2 \pi(\lambda x,\lambda^2t),
\end{equation}
and initial data
\begin{equation}
v_0^{\lambda}(x)=\lambda v_0(\lambda x).
\end{equation}
A solution is self-similar (SS) if it is scaling invariant with respect to this scaling, i.e.~if $v^\lambda(x,t)=v(x,t)$ for all $\lambda>0$. If this scale invariance holds for a particular $\lambda>1$, then we say $v$ is discretely self-similar with factor $\lambda$ (i.e.~$v$ is $\lambda$-DSS). Similarly $v_0$ can be SS or $\lambda$-DSS. The class of DSS solutions contains the SS solutions since any SS $v$ is $\lambda$-DSS for any $\lambda>1$. Recall that $L^3_w({\mathbb R }^3)$ is the weak Lebesgue space which is equivalent to the Lorentz space $L^{(q,r)}({\mathbb R }^3)$ with $(q,r)=(3,\infty)$. The natural spaces to study SS $v_0$ and $v$ are $L^3_w({\mathbb R }^3)$ and $L^\infty(0,\infty;L^3_w({\mathbb R }^3))$.
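This covariance of the equations under the rescaling is easy to check in a one-dimensional analogue. The sketch below applies the Burgers operator $f\mapsto\partial_{t}f-\partial_{x}^{2}f+f\partial_{x}f$, which shares the scaling structure of \eqref{eq:NSE}, to an arbitrary smooth test field and verifies by central differences that the residual of $f^{\lambda}(x,t)=\lambda f(\lambda x,\lambda^{2}t)$ equals $\lambda^{3}$ times the residual of $f$ at the rescaled point, so that solutions map to solutions:

```python
import math

lam = 2.0
def f(x, t):  return math.sin(x) * math.exp(-t)       # arbitrary smooth test field
def fl(x, t): return lam * f(lam * x, lam * lam * t)  # rescaled field f^lambda

h = 1e-4
def residual(g, x, t):
    # 1D Burgers operator g_t - g_xx + g*g_x via central differences
    gt  = (g(x, t + h) - g(x, t - h)) / (2 * h)
    gx  = (g(x + h, t) - g(x - h, t)) / (2 * h)
    gxx = (g(x + h, t) - 2 * g(x, t) + g(x - h, t)) / h**2
    return gt - gxx + g(x, t) * gx

for (x, t) in [(0.4, 0.7), (-1.1, 1.3), (2.0, 0.2)]:
    assert abs(residual(fl, x, t)
               - lam**3 * residual(f, lam * x, lam * lam * t)) < 1e-4
```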
Self-similar solutions are determined by the behavior at any fixed time. This leads to an ansatz of $v$ in terms of a time-independent profile $u$, namely,
\begin{equation}\label{ansatz1}
v(x,t) = \frac 1 {\sqrt {2t}}\,u\bigg(\frac x {\sqrt{2t}}\bigg),
\end{equation}
where $u$ solves the \emph{Leray equations}
\begin{equation}
\begin{array}{ll}\label{eq:stationaryLeray}
-\Delta u-u-y\cdot\nabla u +u\cdot \nabla u +\nabla p = 0&\mbox{~in~}{\mathbb R }^3
\\ \nabla\cdot u=0&\mbox{~in~}{\mathbb R }^3,
\end{array}
\end{equation}
in the variable $y=x/\sqrt{2t}$.
Similarly, $\lambda$-DSS solutions are determined by their behavior on the time interval $1\leq t\leq \lambda^2$ and we have
\begin{equation}\label{ansatz2}
v(x,t)=\frac 1 {\sqrt{2t}}\, u(y,s),
\end{equation}
for
\begin{equation}\label{variables}
y=\frac x {\sqrt{2t}},\quad s=\log(\sqrt{2t}),
\end{equation}
where $u$ is time-periodic with period $\log(\lambda)$ and solves the \emph{time-dependent Leray equations}
\begin{equation}
\begin{array}{ll}
\label{eq:timeDependentLeray}
\partial_s u-\Delta u-u-y\cdot\nabla u +u\cdot \nabla u +\nabla p = 0&\mbox{~in~}{\mathbb R }^3\times {\mathbb R }
\\ \nabla\cdot u = 0&\mbox{~in~}{\mathbb R }^3\times {\mathbb R }.
\end{array}
\end{equation}
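For the reader's convenience we record the formal computation behind \eqref{eq:timeDependentLeray}. With $y$ and $s$ as in \eqref{variables} we have $\partial_t y=-y/(2t)$ and $\partial_t s=1/(2t)$, so that, assuming sufficient smoothness, the ansatz \eqref{ansatz2} gives
\begin{equation}\notag
\partial_t v=(2t)^{-3/2}\big(\partial_s u-u-y\cdot\nabla u\big),\qquad
\Delta v=(2t)^{-3/2}\Delta u,\qquad
v\cdot\nabla v=(2t)^{-3/2}\,u\cdot\nabla u.
\end{equation}
Hence, with the pressure ansatz $\pi(x,t)=(2t)^{-1}p(y,s)$, the system \eqref{eq:NSE} for $(v,\pi)$ reduces to \eqref{eq:timeDependentLeray} for $(u,p)$; dropping the $s$-dependence recovers \eqref{eq:stationaryLeray}.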
Note that the \emph{self-similar transform} \eqref{ansatz2}--\eqref{variables} gives a one-to-one correspondence between solutions of \eqref{eq:NSE} and those of \eqref{eq:timeDependentLeray}. Moreover, when $v_0$ is SS or DSS, the initial condition $v|_{t=0}=v_0$ corresponds to a boundary condition for $u$ at spatial infinity, see Section \ref{sec:DSS}.
The fact that \eqref{eq:stationaryLeray} is time-independent motivates an analogy between the self-similar profile and solutions to the steady state Navier-Stokes equations. It is known for certain large data and appropriate forcing that solutions to the stationary Navier-Stokes boundary value problem are non-unique \cite{Galdi,Temam}. In \cite{JiaSverak}, Jia and \v Sver\'ak conjecture that similar non-uniqueness results might hold for solutions to \eqref{eq:stationaryLeray}. These solutions would necessarily involve large data but, until recently, existence results for self-similar solutions were only known for small data (for small data existence of forward self-similar solutions see \cite{GiMi,Kato,CP,Koch-Tataru}). Jia and \v Sver\'ak addressed this in \cite{JiaSverak} where they proved the existence of a forward self-similar solution using Leray-Schauder degree theory for \emph{large $-1$-homogeneous} initial data which is \emph{locally H\"older continuous} away from the origin. Similar results were later proven in \cite{Tsai-DSSI} for $\lambda$-DSS solutions with factor \emph{close to one} where closeness is determined by the local H\"older norm of $v_0$ away from the origin. It is also shown in \cite{Tsai-DSSI} that the closeness condition on $\lambda$ can be eliminated if the initial data is axisymmetric with no swirl.
In Korobkov-Tsai \cite{KT-SSHS}, the existence of self-similar solutions on the half space (and the whole space) is established for appropriately smooth initial data. The approach of \cite{KT-SSHS} differs from \cite{JiaSverak} and \cite{Tsai-DSSI} in that the existence of a solution to the stationary Leray equations \eqref{eq:stationaryLeray} is established directly. It also gives a second proof of the main result of \cite{JiaSverak}. {A new approach is necessary in
\cite{KT-SSHS} due to the lack of spatial decay estimates; such estimates provide the \emph{global compactness} needed by the Leray-Schauder theorem in
\cite{JiaSverak} and \cite{Tsai-DSSI}.}
The main goal of the present paper is to construct $\lambda$-DSS solutions for any $\lambda>1$ for a very general class of possibly rough initial data in $L^3_w({\mathbb R }^3)$. {A key difference between this paper and \cite{JiaSverak, Tsai-DSSI, KT-SSHS} is the lack of \emph{local compactness}, which is required by the Leray-Schauder theorem and is provided by the regularity theory. In contrast, the regularity of general DSS solutions is not yet known.
}
Since $L^3_w({\mathbb R }^3)$ embeds continuously into the space of uniformly locally square integrable functions $L^2_{u\,loc}$, it is appropriate to seek \emph{local Leray solutions}. For our purposes, we only consider solutions that are global in time.
\begin{definition}[Local Leray solutions]\label{def:localLeray} A vector field $v\in L^2_{loc}({\mathbb R }^3\times [0,\infty))$ is a local Leray solution to \eqref{eq:NSE} with divergence free initial data $v_0\in L^2_{u\,loc}$ if:
\begin{enumerate}
\item for some $\pi\in L^{3/2}_{loc}({\mathbb R }^3\times [0,\infty))$, the pair $(v,\pi)$ is a distributional solution to \eqref{eq:NSE},
\item for any $R>0$, $v$ satisfies
\begin{equation}\notag
\esssup_{0\leq t<R^2}\,\sup_{x_0\in {\mathbb R }^3}\, \int_{B_R(x_0 )}\frac 1 2 |v(x,t)|^2\,dx + \sup_{x_0\in {\mathbb R }^3}\int_0^{R^2}\int_{B_R(x_0)} |\nabla v(x,t)|^2\,dx \,dt<\infty,\end{equation}
\item for any $R>0$, $v$ satisfies
\begin{equation}\notag
\lim_{|x_0|\to \infty} \int_0^{R^2}\int_{B_R(x_0 )} | v(x,t)|^2\,dx \,dt=0,
\end{equation}
\item for all compact subsets $K$ of ${\mathbb R }^3$ we have $v(t)\to v_0$ in $L^2(K)$ as $t\to 0^+$,
\item $v$ is suitable in the sense of Caffarelli-Kohn-Nirenberg, i.e., for all cylinders $Q$ compactly supported in $ {\mathbb R }^3\times(0,\infty )$ and all non-negative $\phi\in C_0^\infty (Q)$, we have
\begin{equation}\label{eq:localEnergyNSE}
2\int \int |\nabla v|^2\phi\,dx\,dt \leq \int\int |v|^2(\partial_t \phi + \Delta\phi )\,dx\,dt +\int\int (|v|^2+2\pi)(v\cdot \nabla\phi)\,dx\,dt.
\end{equation}
\end{enumerate}
\end{definition}
The concept of local Leray solutions was introduced by Lemari\'e-Rieusset \cite{LR}, where he showed the existence of global in time local Leray solutions if $v_0$ further belongs to $E_2$, the closure of $C_0^\infty$ in the $L^2_{u\,loc}({\mathbb R }^3)$ norm. See Kikuchi-Seregin \cite{KiSe} for more details. In particular, condition 3 justifies a formula for the pressure $\pi$ in terms of the velocity $v$, see \cite[(1.9)]{KiSe} and
\cite[(3.3)]{JiaSverak}.
The definition of suitability appearing above is taken from \cite{JiaSverak} and \cite{LR}. It defines the local energy estimate in terms of test functions compactly supported away from $t=0$. This is an unnecessary restriction. In particular, conditions 4 and 5 from Definition \ref{def:localLeray} together imply the local energy inequality is valid for test functions with support extending down to $t=0$. In this case $\int |v(0)|^2\phi\,dx$ should be added to the right hand side of \eqref{eq:localEnergyNSE}.
Let $e^{t\Delta}v_0(x)=\int_{{\mathbb R }^3} (4\pi t)^{-3/2}e^{-|x-z|^2/(4t)}v_0(z)\,dz$; this is the solution to the homogeneous heat equation in ${\mathbb R }^3$ with initial data $v_0$. Our main objective is to prove the following theorem.
\begin{theorem}\label{thrm:main}
Let $v_0$ be a divergence free, $\lambda$-DSS vector field for some $\lambda >1$ and satisfy
\begin{equation}\label{ineq:decayingdata}
\|v_0\|_{L^3_w({\mathbb R }^3)}\leq c_0,
\end{equation} for a possibly large constant $c_0$. Then, there exists a local Leray solution $v$ to \eqref{eq:NSE} which is $\lambda$-DSS and additionally satisfies
\begin{equation}\notag
\| v(t)-e^{t\Delta}v_0 \|_{L^2({\mathbb R }^3)}\leq C_0\,t^{1/4}
\end{equation}
for any $t\in (0,\infty)$ and a constant $C_0=C_0(v_0)$.
\end{theorem}
\noindent Comments on Theorem \ref{thrm:main}:
\begin{enumerate} %
\item The constant $c_0$ is allowed to be large. The condition $v_0\in L^3_w({\mathbb R }^3)$ is weaker than initial conditions found in previous constructions for large data. In particular, \cite{JiaSverak,Tsai-DSSI} require $v_0$ to be in $C^\alpha_{loc}({\mathbb R }^3\setminus \{ 0\})$ for some $\alpha>0$. Additionally, in contrast to \cite{Tsai-DSSI}, our construction does not restrict the size of $\lambda$.
\item When $v$ is strictly DSS, smoothness is not known \emph{a priori}. In contrast, the DSS solutions constructed in \cite{Tsai-DSSI} are smooth. This is a consequence of the fact that, whenever $\lambda-1$ is sufficiently small, a local regularity theory is available for $\lambda$-DSS solutions in the local Leray class. One may say that \cite{Tsai-DSSI} constructs strong DSS solutions for special (large) initial data
while this paper considers weak DSS solutions for general initial data.
\item When $v_0$ is $\lambda$-DSS, it will be shown in Lemma \ref{lemma:equivalence} that \eqref{ineq:decayingdata} is equivalent to $v_0 \in L^3(A_1)$ where $A_1 = \{ x \in {\mathbb R }^3: 1\le |x|< \lambda\}$. In fact, Theorem \ref{thrm:main} is true under a weaker condition on $v_0$ than \eqref{ineq:decayingdata}. Recall that
the Morrey space $M^{p,\alpha}= M^{p,\alpha}({\mathbb R }^3)$ is the collection of functions $f$ such that $\norm{f}_{M^{p,\alpha}} :=\sup_{x \in {\mathbb R }^3, r>0} \bket{ r^{-\alpha} \int_{B(x,r)} |f|^p}^{1/p} < \infty$.
Theorem \ref{thrm:main} remains valid if we replace \eqref{ineq:decayingdata} by
\begin{equation}\label{data.Morrey}
v_0 \in L^2(A_1), \quad \limsup_{r \to 0_+} \sup_{x \in A_1} r^{-1} \int_{B(x,r)} |v_0|^2 \le \epsilon_0,
\end{equation}
for some constant $\epsilon_0>0$ sufficiently small. Such $v_0$ is in $M^{2,1}$ since it is $\lambda$-DSS.
Condition \eqref{data.Morrey}
with $\epsilon_0=0$ is all we need to prove Lemma \ref{th:2.1} establishing Assumption \ref{AU_0} for the profile $U_0$, and our construction in fact only needs a weakened form of Assumption \ref{AU_0} ($\lim_{R \to \infty} \Theta(R) \le \epsilon_0$) allowing small $\epsilon_0>0$. For the convergence of the solution $v(t)$ to the initial value $v_0$ as $t \to 0$,
although $e^{t \Delta}$ is not a $C_0$-semigroup on the Morrey space $ M^{2,1} $
as noted in Kato \cite[Remark 2.3]{Kato}, it is a $C_0$-semigroup on the weighted $L^2$ spaces $L^2_{-k/2}=\{ f: \int _{{\mathbb R }^3}\frac {|f(x)|^2}{(1+|x|)^k} dx < \infty \}$, into which
$M^{2,1}$ is embedded when $k>1$. Thus $e^{t \Delta}v_0 \to v_0$ as $t \to 0$ in the norm of $L^{2}_{-k/2}$, which implies local $L^2$-convergence.
\item
An example of initial data in $M^{2,1}$ that does not satisfy \eqref{data.Morrey} is the following.
Denote by $\chi_S$ the characteristic function for the set $S$. Fix $x_0=(1,0,0)$ and let $x_k=2^kx_0$ and $B_k=B_{2^{k-2}}(x_k)$ denote the ball in ${\mathbb R }^3$ centered at $x_k$ of radius $2^{k-2}$. Let
\begin{equation}
u=\sum_{k\in \mathbb Z}u_k,\quad
\text{where}
\quad
u_k(x)= \frac {\chi_{B_k}(x)} {|x-x_k|}.
\end{equation}
For $\lambda=2$ and all $k\in \mathbb Z$ we have $\lambda u_k(\lambda x)=u_{k-1}(x)$ and it follows that $u$ is $\lambda$-DSS. This function belongs to $M^{p,3-p}({\mathbb R }^3)\setminus L^3_w({\mathbb R }^3)$ for $1\leq p<3$ (these are the critical Morrey spaces). Our approach breaks down for data like this (unless we multiply it by a small number) because we are unable to control the spatial decay of $e^{t\Delta}v_0$. If $v_0\in L^3_w$ on the other hand, discrete self-similarity implies some spatial decay -- this will be made clear in Lemma \ref{lemma:V0decay}.
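The identity $2u_k(2x)=u_{k-1}(x)$ can be checked directly: $2x\in B_k$ if and only if $|x-x_{k-1}|<2^{k-3}$, i.e.~$x\in B_{k-1}$, and $|2x-x_k|=2|x-x_{k-1}|$ since $x_k=2x_{k-1}$, so that
\begin{equation}\notag
2u_k(2x)=\frac{2\,\chi_{B_k}(2x)}{|2x-x_k|}=\frac{\chi_{B_{k-1}}(x)}{|x-x_{k-1}|}=u_{k-1}(x);
\end{equation}
summing over $k\in\mathbb Z$ gives $2u(2x)=u(x)$.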
\item For the usual Leray-Hopf weak solutions, it is well known that the hypothetical singular set is contained in a compact subset of space-time. We would call this property {\it eventual regularity}. The eventual regularity of local Leray solutions is unclear:
If a $\lambda$-DSS solution $v$ is singular at some point $(x_0,t_0)$, it is also singular at $(\lambda^k x_0,\lambda^{2k} t_0)$ for all integers $k$.
Since $v$ is regular if it is SS or if $\lambda$ is close to one, the possibility of a non-compact singular set for some local Leray solutions is suggested by
Theorem \ref{thrm:main}, but not by \cite{JiaSverak,Tsai-DSSI}.
\end{enumerate} %
Our approach is similar to \cite{KT-SSHS} in that we prove a priori estimates for the Leray equations \eqref{eq:stationaryLeray} and
directly prove the existence of the ansatz \eqref{ansatz2} (in \cite{KT-SSHS} this is done for a half space version of \eqref{ansatz1}). In contrast, the solutions of \cite{JiaSverak} and \cite{Tsai-DSSI} are constructed using the Leray-Schauder fixed point theorem
for the equation for $\tilde v = v-e^{t\Delta}v_0$, namely,
\begin{equation}\notag
\tilde v = K(\tilde v) :=T(F(e^{t\Delta}v_0+\tilde v)), \quad K:X \to X,
\end{equation}
where $T$ is essentially the Stokes solver, i.e., the solution operator of the time-dependent Stokes system, $F(u)=u\otimes u$ is the nonlinearity, and $X$ is some function class for $\tilde v$. In this approach one needs to show the compactness of $K(\tilde v)$ which involves the spatially local and asymptotic properties of $K(\tilde v)$. When the norm of $X$ is subcritical or critical (e.g. Prodi-Serrin class), the local compactness is provided by the regularity theory. When the norm of $X$ is {\em supercritical} (e.g.~energy class), the usual bootstrap argument does not provide better regularity.
One may hope to gain the local compactness using the local energy inequality, but this inequality is nonlinear and not well-defined for the Stokes solver $T$. Hence, the local energy inequality is likely not available for $K(\tilde v)$ even if it holds for $\tilde v$.
The difference between \cite{KT-SSHS} and this paper is that \cite{KT-SSHS} proves the a priori bounds for \eqref{eq:stationaryLeray}
by a contradiction argument and a study of the Euler equations, and constructs the solutions by the method of invading domains, while the current paper proves the bound directly with a computable constant, and constructs the solutions in the whole space directly.
The key observation leading to the explicit a priori bound is the following: If a solution $u(y,s)$ of the Leray equations \eqref{eq:timeDependentLeray}
asymptotically agrees with a given $U_0(y,s)$ (e.g.~$u-U_0 \in L^\infty({\mathbb R },L^2({\mathbb R }^3))$), then the difference $U=u-U_0$ formally satisfies
\begin{equation}\label{eq1.14}
\int_0^T\! \int \bke{ |{\nabla} U|^2 + \frac 12 |U|^2 }= \int_0^T \!\int (U \cdot {\nabla}) U \cdot U_0 - \int_0^T\! \int \mathcal R(U_0)\cdot U ,
\end{equation}
where the source term $\mathcal R(U_0)$ will be given in \eqref{RW.def}.
The integral {$\iint (U\cdot {\nabla}) U \cdot U_0$} is usually out of control for large $U_0$, but now we have $\iint |U|^2$ on the left side (which is not available for Navier-Stokes), and $U_0$ decays. Thus this trouble term can be controlled if the local part of $U_0$ is suitably ``cut off.'' See Section \ref{sec2} for details.
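We sketch the formal derivation of \eqref{eq1.14}. Substituting $u=U_0+U$ into \eqref{eq:timeDependentLeray} and pairing the resulting equation for $U$ with $U$ itself, the terms $\iint \partial_s U\cdot U$ (by time periodicity), $\iint (u\cdot\nabla U)\cdot U$ and $\iint \nabla p\cdot U$ (by the divergence-free condition) vanish, while integration by parts gives
\begin{equation}\notag
\int \big(-\Delta U-U-y\cdot\nabla U\big)\cdot U
=\int |\nabla U|^2-\int |U|^2+\frac32\int |U|^2
=\int \Big(|\nabla U|^2+\frac12|U|^2\Big),
\end{equation}
using $-\int (y\cdot\nabla U)\cdot U=\frac32\int |U|^2$. The remaining terms $\iint (U\cdot\nabla U_0)\cdot U=-\iint (U\cdot{\nabla})U\cdot U_0$ and $\iint \mathcal R(U_0)\cdot U$ produce the right side of \eqref{eq1.14}.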
\medskip
If $v_0$ is SS, we can use our result to construct a SS solution by considering the sequence of solutions obtained from Theorem \ref{thrm:main} treating the data as $\lambda_k$-DSS for an appropriate sequence $\lambda_k$ which decreases to $1$ as $k\to\infty$. Alternatively, one can also construct the SS solution directly without involving the time dependence.
Indeed we have the following theorem.
\begin{theorem}
\label{thrm:selfsimilardata}
Let $v_0$ be a $(-1)$-homogeneous divergence free vector field in ${\mathbb R }^3$ which satisfies \eqref{ineq:decayingdata}
for a possibly large constant $c_0$. Then, there exists a local Leray solution $v$ to \eqref{eq:NSE} which is self-similar and additionally satisfies
\begin{equation}\notag
\| v(t)-e^{t\Delta}v_0 \|_{L^2({\mathbb R }^3)}\leq C_0\,t^{1/4}
\end{equation}
for any $t\in (0,\infty)$ and a constant $C_0=C_0(v_0)$.
\end{theorem}
This solution is infinitely smooth, as are the solutions from \cite{JiaSverak}, see e.g.~\cite{Grujic}.
As mentioned earlier, the first construction of large self-similar solutions is given in \cite{JiaSverak} using local H\"older estimates and the Leray-Schauder theorem. The second construction is given in \cite{KT-SSHS} using an a priori bound for Leray equations derived by a contradiction argument and a study of the Euler equations. The current paper provides a new (third) construction based on the explicit a priori bound.
We expect our method could give an alternative construction of self-similar solutions in the \emph{half space} ${\mathbb R }^3_+$, after the first one in \cite{KT-SSHS}:
Assuming the decay estimates for $e^{-A}v_0$ of \cite{KT-SSHS}, one could get a priori bounds by a suitable cut-off (this requires some work) and avoid the contradiction argument.
Such a solution would exist only in the distributional sense, and the pressure would not be defined. %
The remainder of this paper is organized as follows. In Section 2 we construct solutions to the time periodic Leray system on ${\mathbb R }^3\times {\mathbb R }$ which satisfy a local energy estimate. In Section 3 we use these solutions to recover a discretely self-similar local Leray solution on ${\mathbb R }^3\times (0,\infty)$, thereby proving Theorem \ref{thrm:main}. Finally, in Section 4 we give two proofs of Theorem \ref{thrm:selfsimilardata}. One is essentially a corollary of Theorem \ref{thrm:main} while the other constructs a stationary weak solution to Leray's equation directly.
\medskip
\emph{Notation.}\quad We will use the following function spaces:
\begin{align*}
&\mathcal V=\{f\in C_0^\infty({\mathbb R }^3;{\mathbb R }^3) ,\, \nabla \cdot f=0 \},
\\& X = \mbox{the closure of~$\mathcal V$~in~$H_0^1({\mathbb R }^3)$} ,
\\& H = \mbox{the closure of~$\mathcal V$~in~$L^2({\mathbb R }^3)$},
\end{align*}where $H_0^1({\mathbb R }^3)$ is the closure of $C_0^\infty({\mathbb R }^3)$ in the Sobolev space $H^1({\mathbb R }^3)$. Let $X^*({\mathbb R }^3)$ denote the dual space of $X({\mathbb R }^3)$.
Let $(\cdot,\cdot)$ be the $L^2({\mathbb R }^3)$ inner product and $\langle\cdot,\cdot\rangle$ be the dual product for $H^1$ and its dual space $H^{-1}$, or that for $X$ and $X^*$.
Denote by $\mathcal D_T$ the collection of all smooth divergence free vector fields in ${\mathbb R }^3 \times {\mathbb R }$ which
are time periodic with period $T$ and whose supports are compact in space.
\section{The time periodic Leray system}\label{sec2}
In this section we construct a periodic weak solution to the Leray system
\begin{equation}
\label{eq:wholeSpaceLeray}
\begin{array}{ll}
\partial_s u -\Delta u=u+y\cdot \nabla u -\nabla p -u\cdot\nabla u &\mbox{~in~}{\mathbb R }^3\times {\mathbb R }
\\ \nabla\cdot u = 0 &\mbox{~in~}{\mathbb R }^3\times {\mathbb R }
\\ \displaystyle \lim_{|y_0|\to\infty} \int_{B_1(y_0)}|u(y,s)-U_0(y,s)|^2\,dy= 0& \mbox{~for all~}s\in {\mathbb R }
\\ u(\cdot,s)=u(\cdot, s+T) &\mbox{~in~}{\mathbb R }^3\mbox{~for all~}s\in {\mathbb R },
\end{array}
\end{equation}
for a given $T$-periodic divergence free vector field $U_0$. Here, $U_0$ serves as the boundary value of the system and is required to satisfy the following assumption.
\begin{assumption} \label{AU_0}
The vector field $U_0(y,s) :{\mathbb R }^3 \times {\mathbb R } \to {\mathbb R }^3$ is continuously differentiable in $y$ and $s$, periodic in $s$ with period $T>0$, divergence free, and satisfies
\begin{align*}
& \partial_s U_0-\Delta U_0-U_0-y\cdot \nabla U_0 = 0,
\\& U_0\in L^\infty (0,T;L^4\cap L^q({\mathbb R }^3)),
\\& \partial_s U_0\in L^\infty(0,T;L_{loc}^{6/5}({\mathbb R }^3)),
\end{align*}
and
\[
\sup_{s\in [0,T]}\|U_0 \|_{L^q({\mathbb R }^3\setminus B_R)}\leq \Theta(R),
\]
for some $q\in (3,\infty]$ and $\Theta:[0,\infty)\to [0,\infty)$ such that $\Theta(R)\to 0$ as $R\to\infty$.
\end{assumption}
Note that membership in $C^1$, together with the periodicity in $s$, guarantees that $\partial_s U_0\in L^\infty(0,T;L_{loc}^{6/5}({\mathbb R }^3))$; we mention this inclusion explicitly only because later estimates will depend on the quantity $\norm{\partial_s U_0}_{L^\infty(0,T;L_{loc}^{6/5}({\mathbb R }^3))}$.
For a given $W(y,s)$ and any $\zeta \in C^1_0({\mathbb R }^3)$, let
\begin{equation}\label{LW.def}
LW = \partial_s W-\Delta W-W-y\cdot \nabla W ,
\end{equation}
and
\begin{equation}
\bka{LW,\zeta} =(\partial_s W-W-y\cdot \nabla W,\zeta) + (\nabla W, \nabla \zeta).
\end{equation}
Periodic weak solutions to \eqref{eq:wholeSpaceLeray} %
are defined as follows.
\begin{definition}[Periodic weak solution]
\label{def:periodicweaksolutionR3}
Let $U_0$ satisfy Assumption \ref{AU_0}.
The field $u$ is a periodic weak solution to \eqref{eq:wholeSpaceLeray} if it is divergence free, if
\begin{equation}\notag
U:=
u-U_0\in L^\infty(0,T;L^2({\mathbb R }^3))\cap L^2(0,T;H^1({\mathbb R }^3)),
\end{equation}
and if
\begin{equation}\label{u.eq-weak}
\int_0^T \big( (u,\partial_s f)-(\nabla u,\nabla f)+(u+y\cdot\nabla u-u\cdot\nabla u ,f) \big) \,ds =0,
\end{equation}
holds for all $f \in \mathcal D_T$.
This latter condition implies that $u(0)=u(T)$.
\end{definition}
If $u$ satisfies this definition then there exists a pressure $p$ so that $(u,p)$ constitute a distributional solution to \eqref{eq:wholeSpaceLeray} (see \cite{Temam}; we will provide more details in our proof). In the SS variables our notion of suitability mirrors that in the physical variables.
\begin{definition} [Suitable periodic weak solution]
Let $U_0$ satisfy Assumption \ref{AU_0}.
A pair $(u,p)$ is a \emph{suitable periodic weak solution} to \eqref{eq:wholeSpaceLeray}
on ${\mathbb R }}\newcommand{\RR}{{\mathbb R }^3$ if both are time periodic with period $T$,
$u$ is a periodic weak solution on ${\mathbb R }}\newcommand{\RR}{{\mathbb R }^3$, $p\in L^{3/2}_{loc}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^4)$, the pair $(u,p)$ solves \eqref{eq:wholeSpaceLeray} in the sense of distributions, and the local energy inequality holds:%
\begin{align}\label{ineq:localEnergy}
\int_{{\mathbb R }^4}\bigg( \frac12 {| u|^2} +|\nabla u|^2 \bigg)\psi\,dy\,ds &\leq \int_{{\mathbb R }^4} \frac {| u|^2} 2 \big(\partial_s \psi +\Delta \psi \big)\,dy\,ds
\\&\quad + \int_{{\mathbb R }^4} \bigg( \frac 1 2 | u|^2 (( u- y)\cdot \nabla \psi ) + p ( u\cdot \nabla \psi) \bigg)\,dy\,ds,\notag
\end{align}
for all nonnegative $\psi \in C_0^\infty({\mathbb R }^4)$.
\end{definition}
The main result of this section concerns the existence of suitable periodic weak solutions.
\begin{theorem}[Existence of suitable periodic weak solutions to \eqref{eq:wholeSpaceLeray}]\label{thrm:existenceOnR3}
Assume $U_0(y,s)$ satisfies Assumption \ref{AU_0} with $q=10/3$. Then \eqref{eq:wholeSpaceLeray} has a periodic suitable weak solution $(u,p)$ in ${\mathbb R }^4$ with period $T$.
\end{theorem}
We need $3<q\le \frac{18}5$ to show $p \in L^{3/2}_{x,t,loc}$, and it is convenient to take $q=10/3$.
Ideally we would prove the existence of a divergence free time-periodic vector field $U$ where $u=U+U_0$ and $U$ satisfies a perturbed version of \eqref{eq:wholeSpaceLeray}. In view of \eqref{eq1.14}, doing so would require that the constant from the pointwise bound on $U_0(y,s)$ be small enough to ensure that
\begin{equation}
\int (f\cdot\nabla f)\cdot U_0 \,dy \leq \alpha ||f||_{H_0^1({\mathbb R }^3)}^2,
\end{equation}
for any $f\in H_0^1({\mathbb R }^3)$ and a small constant $\alpha$. To get around this issue we replace $U_0$ by a perturbation $W$ which eliminates the possibly large behavior of $U_0$ near the origin. Fix $Z\in C^\infty({\mathbb R }^3)$ with $0 \le Z \le 1$, $Z(x)=1$ for $|x|>2$ and $Z(x)=0$ for $|x|<1$. This can be done so that $|{\nabla} Z|+|{\nabla}^2 Z| \lesssim 1$.
For a given $R>0$, let $\xi_{R}(y)=Z(\frac yR)$. It follows that $|\nabla^k \xi_R|\lesssim R^{-k}$ for $k\in \{ 0,1\}$.
\begin{lemma}[Revised asymptotic profile]
\label{lemma:W}
Fix $q\in (3,\infty]$ and suppose $U_0$ satisfies Assumption \ref{AU_0} for this $q$.
Let $Z\in C^\infty({\mathbb R }^3)$ be as above.
For any $\alpha\in (0,1)$, there exists $R_0=R_0(U_0,\alpha)\ge 1$ so that letting $\xi(y) =Z(\frac y{R_0})$ and setting
\begin{equation}
W (y,s)= \xi(y) U_0(y,s) + w(y,s),
\end{equation}
where
\begin{equation}
w(y,s)=\int_{{\mathbb R }^3}\nabla_y \frac 1 {4\pi |y-z|} \nabla_z \xi(z) \cdot U_0 (z,s) \,dz,
\end{equation}
we have that $W$ is locally continuously differentiable in $y$ and $s$, $T$-periodic, divergence free,
$U_0 - W \in L^\infty(0,T; L^2({\mathbb R }^3))$, and
\begin{equation}\label{ineq:Wsmall}
\|W\|_{L^\infty(0,T;L^q({\mathbb R }^3))}\leq \alpha, %
\end{equation}
\begin{equation}\label{WL4.est}
\norm{W}_{L^\infty(0,T;L^4({\mathbb R }^3))}\leq c(R_0,U_0),
\end{equation}
and
\begin{equation}
\label{LW.est}
\norm{LW}_{L^\infty(0,T; H^{-1}({\mathbb R }^3))} \leq c(R_0,U_0), %
\end{equation}
where $LW$ is given in \eqref{LW.def}, $c(R_0,U_0)$ depends on $R_0$ and quantities associated with $U_0$ which are finite by Assumption \ref{AU_0}.
\end{lemma}
{\it Remark.} The correction term $w$, introduced to make $\div W=0$, is usually constructed with compact support, see \cite[III.3]{Galdi}; ours is not compactly supported. Similar non-compact corrections have also been used, e.g.~in \cite{KMT2012, LuoTsai}.
\begin{proof}
We will typically suppress the $s$ dependence.
Since $U_0$ is divergence free and $w = \nabla (-\Delta)^{-1} ( \nabla \xi \cdot U_0)$, we have $\div W = \nabla \xi \cdot U_0 + \div w = 0$.
We first prove the bound \eqref{ineq:Wsmall}. Since $U_0$ is divergence free we obtain using the integral formula for $w$ and the Calderon-Zygmund theory that
\[
\|w\|_{L^q({\mathbb R }^3)}\leq c_q \|\xi U_0 \|_{L^q({\mathbb R }^3)}\leq c_q\Theta(R_0),
\]
where $\Theta$ is given by Assumption \ref{AU_0} and $c_q$ depends on $q$. Then, assuming $R_0$ is large enough that $\Theta(R_0)\leq \alpha (1+c_q)^{-1}$, it follows that
\[
\|W\|_{L^q({\mathbb R }^3)}\leq (1+c_q)\Theta(R_0)\leq \alpha,
\]
which proves \eqref{ineq:Wsmall}.
The second inequality \eqref{WL4.est} follows immediately from the Calderon-Zygmund theory and Assumption \ref{AU_0}.
Estimates for the third inequality \eqref{LW.est} are more involved. Note that $LW=L(\xi U_0)+Lw$. Using the definition of $w$ we have
\[ \partial_s w(y,s)=\nabla_y\int_{{\mathbb R }^3} \frac 1 {4\pi |y-z|} \nabla_z \xi(z) \cdot \partial_s U_0 (z,s) \,dz,\]
and the Hardy-Littlewood-Sobolev inequality implies that
\EQ{\label{pdsw.est}
\|\partial_s w\|_{L^2({\mathbb R }^3)}&= \bigg\| \nabla_y \int_{{\mathbb R }^3} \frac 1 {4\pi |y-z|} \nabla_z \xi(z) \cdot \partial_s U_0 (z,s) \,dz \bigg\|_{L^2({\mathbb R }^3)}
\\& \leq c \|\nabla \xi \cdot \partial_s U_0\|_{L^{6/5}({\mathbb R }^3)},
}
which is finite by Assumption \ref{AU_0}.
We have also that
\begin{equation}
\label{eq2.13}
| w(y)|\lesssim \int _{R_0 < |z|<2R_0} \frac1{|y-z|^2} \frac {|U_0(z)|} {R_0} dz \lesssim
\left \{
\begin{split}
& R_0^{-3/4}\|U_0 \|_{L^4({\mathbb R }^3)} \quad &\text{if } |y|\le 4R_0
\\
& |y|^{-2}R_0^{5/4}\|U_0 \|_{L^4({\mathbb R }^3)} \quad &\text{if } |y|>4R_0
\end{split}
\right . .
\end{equation}
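To justify the second bound in \eqref{eq2.13}, note that $|y-z|\geq |y|/2$ on the region of integration when $|y|>4R_0$, so H\"older's inequality over the annulus $\{R_0<|z|<2R_0\}$, whose volume is comparable to $R_0^3$, gives
\begin{equation}\notag
|w(y)|\lesssim \frac{1}{|y|^2 R_0}\,\|U_0\|_{L^4({\mathbb R }^3)}\,\big|\{R_0<|z|<2R_0\}\big|^{3/4}
\lesssim |y|^{-2}R_0^{5/4}\|U_0\|_{L^4({\mathbb R }^3)}.
\end{equation}
The first bound follows similarly from H\"older with exponents $(4/3,4)$, since $\| \,|y-\cdot|^{-2}\|_{L^{4/3}(\{R_0<|z|<2R_0\})}\lesssim R_0^{1/4}$ when $|y|\le 4R_0$.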
These estimates are independent of time and therefore
\EQ{ \label{w.est}
\| w\|_{L^\infty(0,T;L^2({\mathbb R }^3))}\leq C(R_0,U_0).
}
We next show
\begin{equation}
\label{ineq:wgradient}|\nabla w(y)|\leq \frac {C(R_0,U_0)} {1+|y|^{3}},
\end{equation}which will allow us to conclude our estimate for $\|Lw \|_{L^2}$.
If $|y|\leq 4R_0$ we have
\begin{equation}
\nabla w(y) =\int_{{\mathbb R }^3}\nabla_y \frac 1 {4\pi |y-z|} \nabla_z(\nabla_z \xi(z) \cdot U_0 (z)) \,dz,
\end{equation}
and, since $U_0$ is continuously differentiable, we have $\|\nabla U_0\|_{L^\infty (B_{2R_0})}<\infty $ and thus
\begin{equation}
\| \nabla w\|_{L^\infty (B_{4R_0})}\leq C(R_0,U_0),
\end{equation}
where $C(R_0,U_0)$ depends on $R_0$ and $\|\nabla U_0\|_{L^\infty (B_{2R_0})}$.
If $|y|\geq 4R_0$ then
\begin{equation}
\nabla w(y) = \int \nabla_z \nabla_y \frac 1{4\pi |z-y|} (\nabla \xi \cdot U_0)(z)\,dz,
\end{equation}
and it follows that
\begin{equation}
|\nabla w(y)|\leq \frac {c R_0^{5/4} \|U_0\|_{L^4({\mathbb R }^3)}} {|y|^3}.
\end{equation}
Thus we have \eqref{ineq:wgradient}.
The estimates
\eqref{pdsw.est}, \eqref{w.est}, and \eqref{ineq:wgradient} show that $Lw\in L^\infty(0,T;H^{-1}({\mathbb R }^3))$.
We now focus on $L(\xi U_0)$. Note that, because $LU_0=0$ by Assumption \ref{AU_0},
\EQ{
L(\xi U_0) &= \xi L U_0 + W_2 = W_2,}
where \EQ{
W_2&=-(\Delta \xi) U_0 -2(\nabla \xi \cdot \nabla) U_0 -(y\cdot \nabla \xi) U_0.
}
Since both $U_0$ and $\nabla U_0$ belong to $L^\infty_{loc}({\mathbb R }^3\times {\mathbb R })$ and $\nabla \xi$ is compactly supported,
\[
\sup_{0\leq s\leq T}\norm{W_2(\cdot,s)}_{L^1\cap L^\infty({\mathbb R }^3)} \le C(R_0,c_0).
\]
The above estimates show \eqref{LW.est} and complete the proof.
\end{proof}
To solve for $u$, we will decompose $u=W+ U$, where $W$ is as in Lemma \ref{lemma:W} for $\alpha = 1/4$; since $R_0=R_0(U_0,\frac14)$ is then determined by $U_0$, we may drop the $R_0$ dependence and write $C(U_0)$.
Note $U$ satisfies a perturbed Leray system, namely
\begin{equation} \label{perturbed-Leray}
L U + (W+U)\cdot \nabla U + U\cdot \nabla W +\nabla p= - \mathcal R(W), \quad \div U=0,
\end{equation}
where the source term is
\begin{equation}\label{RW.def}
\mathcal{R}(W) := \partial_s W-\Delta W-W-y\cdot \nabla W + W\cdot\nabla W.
\end{equation}
To obtain suitable weak solutions (as opposed to just weak solutions) to \eqref{eq:wholeSpaceLeray}, we first construct smooth solutions to the mollified version of \eqref{perturbed-Leray}, see e.g.~discussions in \cite{BCI}. For all $\epsilon>0$, let $\eta_\epsilon(y)=\epsilon^{-3}\eta(y/\epsilon)$ for some $\eta\in C_0^\infty$ satisfying $\int_{{\mathbb R }^3}\eta\,dy=1$.
We seek a solution of the form $u_\epsilon=U_\epsilon+W$ where $U_\epsilon$ is %
$T$-periodic, decays faster than $W$ at spatial infinity, and satisfies the {\em mollified perturbed Leray equations} for $U=U_\epsilon$ and $p=p_\epsilon$,
\begin{align}\label{eq:mollifiedLeray}
L U + (W+\eta_\epsilon* U)\cdot \nabla U + U\cdot \nabla W +\nabla p= - \mathcal R(W), \quad \div U=0,
\end{align}
on ${\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times [0,T]$.
The weak formulation of \eqref{eq:mollifiedLeray} is
\begin{align}\label{eq:boundedWeakForm}
\frac d {ds}(U,f)
&=
- (\nabla U,\nabla f)
+ (U+y\cdot \nabla U, f)
- ((\eta_\epsilon *U) \cdot\nabla U, f)
\\\notag&\quad
-(W\cdot\nabla U+U\cdot \nabla W,f)-\langle \mathcal{R}(W),f\rangle,
\end{align}
and holds for all $f \in \mathcal D_T$ and a.e.~$s\in (0,T)$.
We use the Galerkin method following \cite{GS06} (see also \cite{Morimoto,Temam}).
Let $\{a_{k}\}_{k\in {\mathbb N}}\subset \mathcal V$ be an orthonormal basis of $H$.
For a fixed $k$, we look for an approximate solution of the form $U_k(y,s)= \sum_{i=1}^k b_{ki}(s)a_i(y)$.
We first prove the existence of and \emph{a priori} bounds for $T$-periodic solutions $b_k=(b_{k1},\ldots,b_{kk})$ to the system of ODEs
\begin{align}\label{eq:ODE}
\frac d {ds} b_{kj} = & \sum_{i=1}^k A_{ij}b_{ki} +\sum_{i,l=1}^k B_{ilj} b_{ki}b_{kl} +C_j,%
\end{align}
for $j\in \{1,\ldots,k\}$,
where
\begin{align}
\notag A_{ij}&=- (\nabla a_{i},\nabla a_j)
+ (a_i+y\cdot \nabla a_i, a_j)
-( a_i\cdot \nabla W,a_j)
- (W\cdot\nabla a_i, a_j)
\\\notag B_{ilj}&=- (\eta_\epsilon *a_i \cdot\nabla a_l, a_j)
\\\notag C_j&=-\langle \mathcal R (W),a_j\rangle.
\end{align}
\begin{lemma}[Construction of Galerkin approximations]\label{lemma:Galerkin} Fix $T>0$ and let $W$ satisfy the conclusions of Lemma \ref{lemma:W} with $\alpha=\frac 14$.
\begin{enumerate}
\item For any $k\in \mathbb N$ and $\epsilon>0$, the system of ODEs \eqref{eq:ODE} has a $T$-periodic solution $b_{k}\in H^1(0,T)$.
\item Letting
\begin{equation} \notag
U_k(y,s)=\sum_{i=1}^k b_{ki}(s)a_i(y),
\end{equation}we have
\begin{equation}\label{ineq:uniformink}
||U_k||_{L^\infty (0,T;L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))} + ||U_k||_{L^2(0,T;H^1({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))}<C,
\end{equation}where $C$ is independent of both $\epsilon$ and $k$.
\end{enumerate}
\end{lemma}
\begin{proof}
Our argument is standard (see \cite{GS06,Temam}).
Fix $k\in {\mathbb N}$. For any $U^{0}\in \operatorname{span}(a_1,\ldots,a_k)$,
there exist $b_{kj}(s)$ uniquely solving \eqref{eq:ODE} with initial value $b_{kj}(0)=(U^{0},a_j)$ and belonging to $H^1(0,\tilde T)$ for some time $0<\tilde T\leq T$. If $\tilde T<T$, assume it is maximal, i.e.~$|b_{k}(s)|\to\infty$ as $s\to \tilde T^-$. Multiplying the $j$-th equation of \eqref{eq:ODE} by $b_{kj}$ and summing over $j$, since certain cubic terms vanish, we obtain
\begin{equation} \label{ineq:1}
\frac 1 2 \frac d {ds} ||U_k||_{L^2}^2 + \frac 1 2 ||U_k||_{L^2}^2+ ||\nabla U_k||_{L^2}^2\leq - ( U_k\cdot \nabla W, U_k ) - \langle \mathcal{R}(W), U_k\rangle.
\end{equation}
Note that \eqref{ineq:Wsmall} and the fact that $U_k$ is divergence free guarantee that
\begin{equation}
\big| ( U_k\cdot \nabla W, U_k ) \big| \leq \frac 1 8 ||U_k||_{H^1}^2 .%
\end{equation}
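To see how the divergence-free condition enters: integrating by parts with $\div U_k=0$,
\EQ{
( U_k\cdot \nabla W, U_k ) &= -( U_k\cdot \nabla U_k, W),
\\
\big| ( U_k\cdot \nabla U_k, W ) \big| &\le \norm{W}_{L^\infty} \norm{U_k}_{L^2} \norm{\nabla U_k}_{L^2} \le \norm{W}_{L^\infty} \norm{U_k}_{H^1}^2,
}
and \eqref{ineq:Wsmall} provides the smallness of $W$ needed for the factor $\frac 1 8$.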
By ${\mathcal R}(W)= LW + \div(W\otimes W)$, and \eqref{ineq:Wsmall},
\begin{equation} \label{ineq:2}
|\langle{\mathcal R}(W),U_k\rangle| \le (\norm{LW}_{H^{-1}} + \|W \|_{L^4}^2 ) \norm{U_k}_{H^1} \le C_2+ \frac 1 8 ||U_k||_{H^1}^2 ,%
\end{equation}
where $C_{2}=C(\norm{LW}_{H^{-1}} + \|W \|_{L^4}^2)^2 $ is independent of $s$, $T$, $k$, and $\epsilon$.
Using Lemma \ref{lemma:W}, the estimates \eqref{ineq:1}--\eqref{ineq:2} imply
\begin{equation} \label{ineq:kenergyevolution}
\frac d {ds} ||U_k||_{L^2}^2
+ \frac 1 2 ||U_k||_{L^2}^2
+ \frac 1 2 ||\nabla U_k||_{L^2}^2 \leq C_{2} .
\end{equation}
The Gronwall inequality implies
\begin{equation} \label{ineq:gronwall}
\begin{split}
e^{s/2} ||U_k(s)||_{L^2}^2
&\leq ||U^{0}||_{L^2}^2 + \int_0^{\tilde T} e^{\tau/2} C_2 \,d\tau
\\
& \le ||U^{0}||_{L^2}^2 + e^{T/2} C_2 T
\end{split}
\end{equation}
for all $s\in [0,\tilde T]$. Since the right hand side is finite, $\tilde T$ is not a blow-up time and we conclude that $\tilde T=T$.
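For clarity, \eqref{ineq:gronwall} follows by multiplying \eqref{ineq:kenergyevolution} by $e^{s/2}$,
\EQ{
\frac d {ds} \Big( e^{s/2} ||U_k||_{L^2}^2 \Big)
= e^{s/2} \Big( \frac d {ds} ||U_k||_{L^2}^2 + \frac 1 2 ||U_k||_{L^2}^2 \Big)
\le e^{s/2} C_2,
}
and integrating in time from $0$ to $s$.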
By \eqref{ineq:gronwall} we can choose $\rho>0$, independent of $k$,\footnote{For the usual Navier-Stokes equations one expects $\rho$ to depend on $k$, since one uses the compact embedding $H^1_0(B) \to L^2(B)$ for bounded $B\subset {\mathbb R }}\newcommand{\RR}{{\mathbb R }^3$. For the Leray system this is not needed.} so that
\begin{equation}\notag
||U^{0}||_{L^2}\leq \rho \Rightarrow ||U_{k}(T)||_{L^2}\leq \rho.
\end{equation}
Let $\mathcal T: B_\rho^k\to B_\rho^k$ be the map $b_{k}(0)\mapsto b_k(T)$, where $ B_\rho^k$ is the closed ball of radius $\rho$ in ${\mathbb R }}\newcommand{\RR}{{\mathbb R }^k$. This map is continuous and thus has a fixed point by the Brouwer fixed-point theorem, implying there exists some $U^{0}\in \operatorname{span}(a_1,\ldots,a_k)$ so that $b_k(0)=b_k(T)$.
It remains to check that \eqref{ineq:uniformink} holds. The $L^\infty L^2$ bound now follows from \eqref{ineq:gronwall}
since $\norm{U^0}_{L^2} \le \rho$, which is independent of $k$ and $\epsilon$.
Integrating \eqref{ineq:kenergyevolution} in $s \in [0,T]$ and using $U_k(0)=U_k(T)$, we get
\begin{equation} \label{eq2.33}
\frac 1 2 \int_0^T \big(||U_k||_{L^2}^2
+ ||\nabla U_k||_{L^2}^2 \big)\,ds \le C_2 T,
\end{equation}
which gives an upper bound for $\| U_k \|_{L^2(0,T;H^1 )}$ uniform in $k$ and $\epsilon$.
\end{proof}
We are now ready to prove Theorem \ref{thrm:existenceOnR3}.
\begin{proof}[Proof of Theorem \ref{thrm:existenceOnR3}]
The Galerkin approximations to the mollified system lead to a solution $U_\epsilon$ through a standard limiting process. Indeed, under the assumptions of Theorem \ref{thrm:existenceOnR3}, standard arguments (e.g.~those in \cite{Temam}) imply that, for $T>0$ and for any $\epsilon>0$, there exists a $T$-periodic $U_\epsilon\in {L^2(0,T;H_0^1({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))}$ (with norm bounded independently of $\epsilon$) and a subsequence of $\{U_k\}$ (still denoted by $U_k$) so that
\begin{align*}
& U_k\rightarrow U_\epsilon \mbox{~weakly in}~L^2(0,T;X),
\\& U_k\rightarrow U_\epsilon \mbox{~strongly in}~L^2(0,T;L^2(K)) \mbox{~for all compact sets~}K\subset {\mathbb R }}\newcommand{\RR}{{\mathbb R }^3,
\\& U_k(s)\rightarrow U_\epsilon(s) \mbox{~weakly in}~L^2 \mbox{~for all}~s\in [0,T].
\end{align*}
The weak convergence guarantees that $U_\epsilon(0)=U_\epsilon(T)$. The limit $U_\epsilon$ is a periodic weak solution of the mollified perturbed Leray system \eqref{eq:mollifiedLeray}.
At this stage we construct a pressure $p_\epsilon$ associated to $U_\epsilon$ for the system \eqref{eq:mollifiedLeray}. This will allow us to obtain a suitable weak solution of \eqref{eq:wholeSpaceLeray} when we let $\epsilon\to 0$. Note that $p_\epsilon$ is defined as a distribution whenever $U_\epsilon$ is a weak solution (see \cite{Temam}), but we need to show that $p_\epsilon \in L^{3/2}_{x,t,loc}$ with a bound uniform in $\epsilon$.
Note that $\div L(W)=0$ because $W$ is divergence free and, therefore, taking the divergence of \eqref{eq:mollifiedLeray},
\begin{equation}\label{p.eq}
-\Delta p_\epsilon =\sum_{i,j}{\partial}_i {\partial}_j
\bkt{(\eta_\epsilon * U _i )U_j + W_i U_j + U_i W_j + W_i W_j}.
\end{equation}
Let
\begin{equation}
\tilde p_\epsilon =\sum_{i,j} R_i R_j \bkt{(\eta_\epsilon * U _i )U_j + W_i U_j + U_i W_j + W_i W_j},
\end{equation}
where $R_i$ denote the Riesz transforms. It also satisfies \eqref{p.eq}.
We claim that $p_\epsilon=\tilde p_\epsilon$ up to an additive constant by proving that $\nabla(p_\epsilon - \tilde p_\epsilon)=0$. To this end we use a well known fact about the forced, non-stationary Stokes system on ${\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times [t_1,t_2]$ where $t_1<t_2$ are given points in time: if $g\in L^\infty(t_1,t_2;H^{-1}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))$ and $V_0\in L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$, then there exists a unique $ V\in C_w([t_1,t_2];L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))\cap L^2(t_1,t_2;H^1({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))$ and unique $\nabla \pi $ satisfying $V(x,t_1)=V_0(x)$ and
\[(\partial_t V -\Delta V +\nabla \pi)(x,t) = g(x,t),\qquad\div V(x,t)=0,\] for $(x,t)\in {\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times [t_1,t_2]$. Formulas for $ V$ and $ \pi $ can be written using the Green tensor for time dependent Stokes system. We only need the uniqueness and recall its proof: Assume $(\hat V,\hat \pi)$ is a second solution. Then $V-\hat V$ and $\pi-\hat \pi$ satisfy the unforced Stokes system and, testing against $V-\hat V$, we obtain
\begin{align*}
\frac 1 2 \int |(V-\hat V)(x,t)|^2\,dx + \int_0^t\int |\nabla (V-\hat V)(x,t')|^2\,dx\,dt' \leq 0,
\end{align*}implying $V=\hat V$. This implies also that $\nabla \pi =\nabla \hat \pi$.
For our purposes let $V(x,t)=(2t)^{-1/2}U_\epsilon(y,s)$,
$ \pi(x,t) = (2t)^{-1} p_\epsilon(y,s)$, and $g(x,t)=(g_1+g_2)(x,t)$ where
\begin{align}
& g_1(x,t)= -\frac 1 {\sqrt{2t}^3}(LW)(y,s),
\\& g_2(x,t)= -\frac 1 {\sqrt{2t}^3}\big(W\cdot \nabla U_\epsilon + U_\epsilon \cdot\nabla W +(\eta_\epsilon*U_\epsilon) \cdot \nabla U_\epsilon +W\cdot\nabla W\big)(y,s),
\end{align}
and $y=x/\sqrt{2t}$ and $s=\log(\sqrt {2t})$.
Then,
$g \in L^\infty(1,\lambda^2;H^{-1}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))$, the pair $(V,\pi)$ solves the Stokes system on ${\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times [1,\lambda^2]$, and $V$ is in the energy class.
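To verify that $(V,\pi)$ solves the Stokes system with this forcing, note that under the same change of variables,
\EQ{
({\partial}_t V - \Delta V + \nabla \pi)(x,t)
&= \frac 1 {\sqrt{2t}^3}\big( {\partial}_s U_\epsilon - \Delta U_\epsilon - U_\epsilon - y\cdot\nabla U_\epsilon + \nabla p_\epsilon \big)(y,s)
\\&= \frac 1 {\sqrt{2t}^3}\big( L U_\epsilon + \nabla p_\epsilon \big)(y,s),
}
which equals $g_1+g_2$ by \eqref{eq:mollifiedLeray} and $\mathcal R(W)=LW+W\cdot\nabla W$.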
We conclude that $\nabla \pi $ is unique, and is given by Riesz transforms,
\[
\nabla \pi = \nabla (\Delta)^{-1} \div g_2,
\]
noting that $g_1$ is divergence free. %
Since Riesz transforms commute with the above change of variables, letting $\tilde \pi=(2t)^{-1}\tilde p_\epsilon$ we conclude that $\nabla \pi=\nabla \tilde \pi$, and hence $\nabla (p_\epsilon-\tilde p_\epsilon)=0$. We may therefore replace $p_\epsilon$ by $\tilde p_\epsilon$, and apply the Calder\'on-Zygmund theory to obtain an \emph{a priori} bound for $p_\epsilon$, namely
\begin{equation}
\|p_\epsilon \|_{L^{5/3}(\mathbb R^3\times [0,T])}\leq C\|U_\epsilon \|_{L^{10/3}(\mathbb R^3\times [0,T])} ^2 +C\| {W} \|_{L^{10/3}(\mathbb R^3\times [0,T])} ^2,
\end{equation}
which is finite and independent of $\epsilon$ by the known properties of $U_\epsilon$ and $W$ (using $q=10/3$).
Because $U_\epsilon$ are bounded independently of $\epsilon$ in $L^\infty (0,T;L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))\cap L^2(0,T;H_0^1({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))$, and $U_\epsilon$ is a weak solution of \eqref{eq:mollifiedLeray} with $W$ bounded by Lemma \ref{lemma:W}, there exists a vector field $U\in L^\infty (0,T;L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))\cap L^2(0,T;H_0^1({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))$ and a sequence $\{ U_{\epsilon_k}\}$ of elements of $\{U_\epsilon \}$ so that
\begin{align*}
& U_{\epsilon_k} \rightarrow U \mbox{~weakly in}~L^2(0,T;X)
\\& U_{\epsilon_k}\rightarrow U \mbox{~strongly in}~L^2(0,T;L^2(K)) \mbox{~for all compact sets~}K\subset {\mathbb R }}\newcommand{\RR}{{\mathbb R }^3,
\\& U_{\epsilon_k}(s)\rightarrow U(s) \mbox{~weakly in}~L^2 \mbox{~for all}~s\in [0,T],
\end{align*}
as $\epsilon_k\to 0$. Let $u=U+W$. Furthermore, since $p_{\epsilon_k}$ are uniformly bounded in $L^{5/3}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times [0,T])$ we can extract a subsequence (still denoted $p_{\epsilon_k}$) so that
\begin{equation}
p_{\epsilon_k}\rightarrow p \mbox{~weakly in}~L^{5/3}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times [0,T]),
\end{equation}
for some distribution $p\in L^{5/3}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times [0,T])$ and this convergence is strong enough to ensure that $(u,p)$ solves \eqref{eq:wholeSpaceLeray} in the distributional sense.
It remains to check that the pair $(u,p)$ is suitable. This follows as in \cite[Appendix]{CKN} since the approximating solutions $(u_\epsilon,p_\epsilon)$ \emph{all satisfy the local energy equality}.
\end{proof}
\section{$\lambda$-DSS initial data and the heat equation}
\label{sec3}
In this section we provide estimates for solutions to the heat equation when the initial data $v_0$ is divergence free, $\lambda$-DSS, and belongs to $L^3_w({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)=L^{(3,\infty)}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$. Throughout this section let $V_0(x,t)=e^{t\Delta}v_0(x)$ and $U_0(y,s)= {\sqrt {2t}} (e^{t\Delta}v_0)(x)$ where $x,t,y,s$ satisfy \eqref{variables}.
Generally, functions in $L^3_w({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$ can possess arbitrarily many singularities of order $|x|^{-1}$. This is false if the function is discretely self-similar. In this case, the only critical singularity is at the origin; any other singularities must be subcritical. This is clarified in the following lemma.
\begin{lemma}\label{lemma:equivalence}
If $f$ is defined in ${\mathbb R }}\newcommand{\RR}{{\mathbb R }^3$ and is $\lambda$-DSS for some $\lambda>1$, then $f \in L^3_{loc}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3 \setminus \{0\})$ if and only if $f \in L^{3}_w({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$.
\end{lemma}
\begin{proof}
Let
\begin{equation}
A_r = \bket{x \in {\mathbb R }}\newcommand{\RR}{{\mathbb R }^3: r \le |x| < r \lambda }.
\end{equation}
Decompose $f = \sum_{k \in \mathbb{Z}} f_k$ where $f_k(x) = f(x)$ if $x \in A_{\lambda^k }$, and $f_k(x)=0$ otherwise. Note $f_k (x) = \lambda^{-k} f_0(\lambda^{-k}x)$ since $f$ is $\lambda$-DSS.
The distribution function for $f$ is
\[
m(\sigma,f) = |\{ x: |f(x)| > \sigma \}|.
\]
Recall the identity
\[
\int |f|^p \,dx = p \int_0^\infty \sigma^{p} m(\sigma,f) d \sigma/\sigma,
\]
which holds for $1\le p < \infty$. For $\beta>0$,
\EQ{\label{eq3.2}
m(\beta, f) &= \sum_{k \in \mathbb{Z}} m(\beta,f_k)
= \sum_{k \in \mathbb{Z}} m(\lambda^k \beta,f_0) \lambda^{3k},
}
where we have used the scaling property $f_k (x) = \lambda^{-k} f_0(\lambda^{-k}x)$.
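Explicitly, since $f_k(x)=\lambda^{-k}f_0(\lambda^{-k}x)$, the substitution $x=\lambda^k z$ gives
\EQ{
m(\sigma, f_k) = \big| \bket{x : |f_0(\lambda^{-k} x)| > \lambda^k \sigma} \big| = \lambda^{3k}\, m(\lambda^k \sigma, f_0).
}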
However,
\EQ{
\int_{A_1} |f_0|^p\,dx
& = p \int_0^\infty \sigma^{p} m(\sigma,f_0) d \sigma/\sigma
\\
& = \sum_{k \in {\mathbb Z}} p \int_{\lambda^{k-1}\beta} ^{\lambda^k \beta} \sigma^{p-1} m(\sigma,f_0) d \sigma
\\
& \ge \sum_{k \in {\mathbb Z}} p \int_{\lambda^{k-1}\beta} ^{\lambda^k \beta} (\lambda^{k-1}\beta) ^{p-1} m(\lambda^{k}\beta,f_0) d \sigma
\\
& = \sum_{k \in {\mathbb Z}} p (\lambda^k \beta- \lambda^{k-1}\beta) (\lambda^{k-1}\beta) ^{p-1} m(\lambda^{k}\beta,f_0)
\\
& = \beta^p p (\lambda-1)\lambda^{-p} \sum_{k \in {\mathbb Z}} \lambda^{kp} m(\lambda^{k}\beta,f_0) .
}
Thus, with the choice $p=3$ and using \eqref{eq3.2}, we get
\EQ{
m(\beta, f) &\leq \frac {\lambda^3}{\beta^3 3 (\lambda-1)} \int_{A_1} |f_0|^3\,dx.
}
Since $\beta>0$ is arbitrary, we conclude
\EQ{
\norm{f}_{L^{3}_w({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)} ^3 \leq \frac {\lambda^3}{ 3 (\lambda-1)} \int_{A_1} | f|^3\,dx.
}
On the other hand,
\EQ{
\int_{A_1} |f_0|^p\,dx
& = p \int_0^\infty \sigma^{p} m(\sigma,f_0) d \sigma/\sigma
\\
& = \sum_{k \in {\mathbb Z}} p \int_{\lambda^{k}\beta}^{\lambda^{k+1} \beta} \sigma^{p-1} m(\sigma,f_0) d \sigma
\\
& \leq \sum_{k \in {\mathbb Z}} p \int_{\lambda^{k}\beta} ^{\lambda^{k+1} \beta} (\lambda^{k+1}\beta) ^{p-1} m(\lambda^{k}\beta,f_0) d \sigma
\\
& = \sum_{k \in {\mathbb Z}} p (\lambda^{k+1} \beta- \lambda^{k}\beta) (\lambda^{k+1}\beta) ^{p-1} m(\lambda^{k}\beta,f_0)
\\
& = \beta^p p (\lambda-1)\lambda^{p-1} \sum_{k \in \mathbb{Z}} \lambda^{kp} m(\lambda^{k}\beta,f_0) .
}
Thus, with $p=3$ and using \eqref{eq3.2}, we get
\EQ{
\int_{A_1} |f_0|^ 3 \,dx\le 3(\lambda-1)\lambda^{2} \beta^3 m(\beta, f) \le 3(\lambda-1)\lambda^{2} \norm{f}_{L^{3}_w({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)} ^3 .
}
\end{proof}
Our next lemma concerns the decay at spatial infinity for times bounded away from $t=0$ of solutions to the heat equation with discretely self-similar $L^3_w$ data.
\begin{lemma}\label{lemma:V0decay}
Suppose $v_0 \in L^3_w({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3 \backslash \{0\})$ is $\lambda$-DSS for some $\lambda>1$ and let $V_0$ be defined as above. Then,
\begin{equation}
\label{w1.decay}
\sup_{1\leq t\leq \lambda^2}\norm{V_0(t)}_{L^{q}(|x|>R)}\leq \Theta(R),
\end{equation}
for any $q \in (3,\infty]$, where
$\Theta:[0,\infty)\to [0,\infty)$ depends on $q$ but satisfies $\Theta(R)\to 0$ as $R\to\infty$.
\end{lemma}
\begin{proof} By Lemma \ref{lemma:equivalence} we have $v_0\in L^3_{loc}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\setminus \{0\})$.
Let
\[
{\omega}(r) = \sup_{1<|x_0|<\lambda} \bigg(\int_{B(x_0,r)} |v_0|^3\,dx\bigg)^{1/3}.
\]
Clearly ${\omega}(r) \to 0$ as $r \to 0$. Let $A_R=\{ x: R\leq |x| <\lambda R \}$.
We first establish a general estimate for $\|V_0(t)\|_{L^q(A_R)}$ which we will then sum over nested shells. Let
\EQ{
V_0 (x,t)
&= \int_{|z|< R/2} {(4\pi t)^{-3/2}}e^{-|x-z|^2/{(4t)}}v_0(z) \,dz
\\&\quad +\int_{R/2\leq |z|< 2\lambda R} (4\pi t)^{-3/2}e^{-|x-z|^2/{(4t)}}v_0(z)\,dz
\\&\quad + \int_{2\lambda R\leq |z| } (4\pi t)^{-3/2}e^{-|x-z|^2/{(4t)}}v_0(z) \,dz
\\&= I_0^R(x,t)+I_1^R(x,t)+I_2^R(x,t).
}
Fix $(x,t)\in A_R\times [1,\lambda^2]$. Then,
\[ |I_0^R(x,t)|+|I_2^R(x,t)|\lesssim e^{-cR^2}.\]
Hence $I_0^R,\,I_2^R\in L^p (A_R)$ with
\[
\norm{I_0^R}_{L^p(A_R)}+\norm{I_2^R}_{L^p(A_R)}\leq ce^{-cR^2} R^{3/p},
\]
for all $1\leq p \leq \infty$.
We further decompose $I_1^R$ as
\begin{align*}
I_1^R(x,t) &=\bket{ \int_{z \in A_R^* , |z-x| < \th R} + \int_{z \in A_R^*, |z-x| > \th R} }(4\pi t)^{-3/2}e^{-\frac {|x-z|^2}{4t}} v_0(z)\,dz
\\& = : I_3^R(x,t) + I_4^R(x,t),
\end{align*}
where $0< \th \ll 1$ is an as-of-yet unspecified parameter and $A_R^*=\{ z: R/2\leq |z| <2\lambda R \}$.
We have by H\"older's inequality that
\[
|I_3^R(x,t)| \le C \norm{e^{-cx^2}}_{L^{3/2}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)} \norm{ v_0}_{L^3(B(x,\th R))} \le C {\omega}(\th),
\]
and
\EQ{
|I_4^R(x,t)| & \le C \int _{A_R^*} e^{-c \th^2 R^2} | v_0(z)| \,dz
\\
&\le C e^{-c \th^2 R^2} \norm{ v_0}_{L^3(A_R^*)} \norm{ 1}_{L^{3/2}(A_R^*)}
\le C e^{-c \th^2 R^2} R^2.
}
Therefore, for $R>1$ (we suppress the $t$ dependence of $V_0$ below),
\EQ{
\norm{V_0}_{L^\infty(A_R)} \le C {\omega}(\th) + C e^{-c \th^2 R^2} R^2+ Ce^{-c R^2},
}
where the constants are independent of $R$ and $\theta$. The above inequality remains valid with $\lambda^kR$ in place of $R$ for any $k\in {\mathbb N}$; indeed, we have
\EQ{
\norm{V_0}_{L^\infty(A_{\lambda^kR})} \le C {\omega}(\th) + C e^{-c \th^2 (\lambda^kR)^2} (\lambda^kR)^2+ Ce^{-c (\lambda^kR)^2}.
}
The right hand side is decreasing in $k$ for fixed $\theta$ and $R$ and we conclude that
\[
\norm{V_0}_{L^\infty({|x|\geq R})}\leq
\sup_{k\in {\mathbb N}} \norm{V_0}_{L^\infty(A_{\lambda^kR})} \leq C {\omega}(\th) + C e^{-c \th^2 R^2} R^2+ Ce^{-c R^2}.
\]
If $q\in (3,\infty)$ we have
\begin{align*}
\norm{V_0}_{L^q(|x|\geq R)} &\leq C \norm{V_0}_{L^\infty(|x|\geq R)}^{1-3/q} \norm{V_0}_{L^{3}_w}^{3/q} \\&\leq C ( C{\omega}(\th) + Ce^{-c \th^2 { R}^2} { R}^2+ C e^{-c { R}^2})^{1-3/q} \norm{V_0}_{L^{3}_w}^{3/q}
.\end{align*}
We now construct $\Theta(R)$. Let $\epsilon_k=2^{-k}$ for $k\in {\mathbb N}$. For each $\epsilon_k$, choose $\th_k>0$ sufficiently small so that
\[C {\omega}(\th_k) \leq \frac {\epsilon_k^{q/(q-3)}} {2 C^{q/(q-3)}\| V_0 \|_{L^{3}_w}^{3/(q-3)}}.\] Then choose $R_k$ sufficiently large so that $R_k>R_{k-1}$ and
\[ C e^{-c \th_k^2 R_k^2} R_k^2 + Ce^{-c R_k^2} \leq \frac {\epsilon_k^{q/(q-3)}} {2C^{q/(q-3)}\| V_0 \|_{L^{3}_w}^{3/(q-3)}}.\]
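With these choices, the interpolation bound above gives, for $R_k\leq R<R_{k+1}$,
\EQ{
\norm{V_0}_{L^q(|x|\geq R)}
\leq C \bigg( \frac {\epsilon_k^{q/(q-3)}} {C^{q/(q-3)}\| V_0 \|_{L^{3}_w}^{3/(q-3)}} \bigg)^{1-3/q} \norm{V_0}_{L^{3}_w}^{3/q} = \epsilon_k,
}
since $1-3/q=(q-3)/q$.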
Finally, let
\[
\Theta (R)=
\begin{cases}
1 &\text{if } 0<R<R_1
\\ \epsilon_k &\text{if } R_k\leq R< R_{k+1}
\end{cases},
\]
which completes our proof.
\end{proof}
\begin{remark}
(i)
The decay rate in Lemma \ref{lemma:V0decay} depends not only on $\norm{v_0}_{L^3(A_1)}$, but also on ${\omega}(r)$, see \eqref{w1.decay} above. It is worth noting that there is no decay rate that applies to all $v_0$ bounded in $L^{3}_w$. Indeed, there exist a constant $c_0>0$, a point $x_0 \in A_1$, and a sequence of $\lambda$-DSS $v_0^k\in L^{3}_w$, $k \in {\mathbb N}$, such that $\norm{v_0^k}_{L^{3}_w({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)} \le 1$ and, for all $k$ sufficiently large,
\[
\inf_{ B(x_k,1)} |V_0^k| \ge c_0, \quad x_k = \lambda^k x_0 .
\] In particular, choose any $x_0 \in A_1$ not on its boundary, and $r_0>0$ so that $B(x_0,r_0) \subset A_1$. For any integer $k\ge \log_{\lambda} r_0^{-1}$, we have $\lambda^{-k} \le r_0$. Let $v_0^k(x)=0$ for $ x \in A_1 \backslash B(x_0,\lambda^{-k})$ and $v^k_0(x) = c_1\lambda^k$ if $x \in B(x_0, \lambda^{-k})$ for some constant $c_1>0$, extended to ${\mathbb R }}\newcommand{\RR}{{\mathbb R }^3$ as a $\lambda$-DSS function.
Then $\norm{v_0^k}_{L^{3,\infty}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)} \le C \norm{v_0^k}_{L^3(A_1)}\le 1$ for suitable choice of $c_1$ independent of $k$.
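Indeed, the normalization of $c_1$ is explicit: since $|v_0^k| = c_1 \lambda^{k}$ on $B(x_0,\lambda^{-k})$ and $v_0^k=0$ on the rest of $A_1$,
\[
\int_{A_1} |v_0^k|^3\,dx = c_1^3 \lambda^{3k} \,\big|B(x_0,\lambda^{-k})\big| = \frac {4\pi} 3 c_1^3,
\]
which is independent of $k$.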
We have $v_0^k = c_1$ in $B(x_k,1)$,
$x_k = \lambda^k x_0$. Thus, for $ x \in B(x_k,1)$,
\[
V_0^k(x,t) \ge \int_{B(x_k,1)} e^{-4c} v_0^k(y)\,dy = \frac {4\pi}3 e^{-4c}c_1 =: c_0.
\]
(ii) If we assume more regularity on $v_0$, then we can get explicit decay rate of $V_0$. For example, if $v_0 \in L^q(A_1)$,
$ 3<q \le \infty$, then for all $ 3<p \le \infty$,
\begin{equation}\label{eq3.12}
\sup_{t \in [1,\lambda^2]} \norm{ V_0(\cdot,t)}_{L^p(|x|>R)} \le C \norm{v_0}_{L^q(A_1)} R^{-\sigma}
\quad
\forall R\gg 1,
\end{equation}
where $\sigma = 1-3/q$ for $p \in [q,\infty]$, and $\sigma = 1-3/p$ for $3<p<q$. This shows that our assumption $v_0 \in L^3(A_1)$ is borderline. Note that \eqref{eq3.12} does not involve $\omega$-like functions.
The proof of \eqref{eq3.12} is omitted since it is not used.
\end{remark}
The main lemma of this section connects solutions of the heat equation to the boundary data characterized by Assumption \ref{AU_0}.
\begin{lemma}\label{th:2.1}
Suppose $v_0$ satisfies the assumptions of Theorem \ref{thrm:main} and let $x,t,y,s$ satisfy \eqref{variables}. Then
\begin{equation}\label{def:U0}
U_0(y,s)= {\sqrt {2t}} (e^{t\Delta}v_0)(x),
\end{equation}satisfies Assumption \ref{AU_0} with $T=\log \lambda$ and any $q \in (3,\infty]$.
\end{lemma}
\begin{proof}
Since $v_0$ is divergence free and $\lambda$-DSS, $e^{t\Delta}v_0$ is the divergence free, $\lambda$-DSS solution to the heat equation for $(x,t)\in {\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times [0,\infty)$. Under the change of variables \eqref{variables}, it follows that $U_0$ is divergence free, $T$-periodic for $T=\log \lambda$, and satisfies
\begin{equation}\label{eq:periodicU0}
LU_0= \partial_s U_0(y,s)-\Delta U_0(y,s) -U_0(y,s)-y\cdot\nabla U_0(y,s) =0,
\end{equation}
for all $(y,s)\in {\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times {\mathbb R }}\newcommand{\RR}{{\mathbb R }$. The inclusion $U_0\in C^1$ follows from the smoothing effect of the heat kernel.
This also implies that $\partial_s U_0\in L^\infty(0,T;L_{loc}^{6/5}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))$.
By Lemma \ref{lemma:V0decay} we know $U_0\in L^\infty(0,T;L^q(|x|>1) )$ and, since it is in $C^1({\mathbb R }}\newcommand{\RR}{{\mathbb R }^4)$, it is also in $L^\infty(0,T;L^q(|x|\leq 1) )$. Hence $U_0\in L^\infty(0,T;L^q({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3) )$.
The last bound in Assumption \ref{AU_0} is a direct consequence of Lemma \ref{lemma:V0decay}.
\end{proof}
\section{Discretely self-similar solutions to 3D NSE}\label{sec:DSS}
In this section we prove Theorem \ref{thrm:main}.
\begin{proof}[Proof of Theorem \ref{thrm:main}]
By Lemma \ref{th:2.1}, $U_0(y,s)$ defined by \eqref{def:U0} satisfies Assumption \ref{AU_0}.
Let $(u,p)$ be the time-periodic weak solution described in Theorem \ref{thrm:existenceOnR3}.
Let $v(x,t)= u(y,s)/\sqrt{2t}$ and $\pi(x,t)=p(y,s)/2t$ where $y= x/\sqrt{2t}$ and $s=\log (\sqrt{2t})$.
Then $(v,\pi)$ is a distributional solution to \eqref{eq:NSE}. Indeed, if we let $\zeta(x,t) = \frac 1{2t} f(y,s)$ where $f(y,s)$ is the test vector
in the weak form \eqref{u.eq-weak} of the $u$-equation, and note that
\[
{\partial}_t \zeta(x,t) = \frac 1{(2t)^2} ({\partial}_s -2 - y \cdot {\nabla} _y)f(y,s),
\]
we recover the weak form of the Navier-Stokes equations \eqref{eq:NSE} for $v$ with test vector $\zeta$ from
the weak form \eqref{u.eq-weak} for $u$.
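The displayed identity for ${\partial}_t \zeta$ is a direct computation: under \eqref{variables} we have ${\partial}_t s= \frac 1 {2t}$ and ${\partial}_t y = -\frac y {2t}$, so
\EQ{
{\partial}_t \zeta(x,t) = -\frac 2 {(2t)^2} f + \frac 1 {2t} \Big( \frac 1 {2t}{\partial}_s f - \frac 1{2t}\, y\cdot {\nabla}_y f \Big)
= \frac 1{(2t)^2} ({\partial}_s -2 - y \cdot {\nabla} _y)f.
}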
Note
\begin{equation} \notag v-e^{t\Delta}v_0 \in L^\infty(1,\lambda^2;L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))\cap L^2(1,\lambda^2;H^1({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)). \end{equation}
The $\lambda$-DSS scaling property implies
\begin{equation}\notag
||v(t)-e^{t\Delta}v_0||_{L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)}^2\lesssim t^{1/2} \sup_{1\leq \tau\leq \lambda^2} ||v(\tau)-e^{\tau \Delta}v_0||_{L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)}^2,
\end{equation}
and
\begin{equation}\notag
\int_0^{\lambda^2} \| \nabla (v(t)-e^{t\Delta}v_0)\|_{L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)}^2\,dt \lesssim \bigg( \sum_{k=0}^\infty \lambda^{-k} \bigg) \int_1^{\lambda^2} \| \nabla (v(t)-e^{t\Delta}v_0)\|_{L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)}^2\,dt.
\end{equation}
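Both estimates follow from the fact that $w:=v-e^{t\Delta}v_0$ is $\lambda$-DSS, i.e.~$w(x,t)=\lambda w(\lambda x,\lambda^2 t)$, which by a change of variables gives
\EQ{
\norm{w(t)}_{L^2(\RR^3)}^2 = \lambda^{-1} \norm{w(\lambda^2 t)}_{L^2(\RR^3)}^2,
\qquad
\norm{\nabla w(t)}_{L^2(\RR^3)}^2 = \lambda \norm{\nabla w(\lambda^2 t)}_{L^2(\RR^3)}^2;
}
iterating the first identity until $\lambda^{2j}t\in [1,\lambda^2]$ produces the factor $\lambda^{-j}\sim t^{1/2}$, while the second yields the factor $\lambda^{-k}$ for the time interval $[\lambda^{-2k}, \lambda^{-2k+2}]$.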
It follows that
\begin{equation}\label{ineq:time0} v-e^{t\Delta}v_0 \in L^\infty(0,\lambda^2;L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))\cap L^2(0,\lambda^2;H^1({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)). \end{equation}
We now check that $v$ is a local Leray solution to \eqref{eq:NSE}.
\emph{Locally finite energy and enstrophy:} This follows from inequality \eqref{ineq:time0} noting that $v_0\in L^2_{u\,loc}$ implies $e^{t\Delta}v_0$ has uniformly locally finite energy and enstrophy.
\emph{Convergence to initial data:} The fact that $||v(t)-e^{t\Delta}v_0||_{L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)}\lesssim t^{1/4}$ implies convergence to zero in the $L^2_{loc}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$ norm. Moreover, by the embedding $L^3_w\subset M^{2,1}\subset L^2_{-3/2}$, where $L^2_{-3/2}$ is the weighted $L^2$ space
(see Comment 4 after Theorem \ref{thrm:main} for the definition),
the fact that $e^{t\Delta}v_0\to v_0$ in $L^{2}_{-3/2}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$ (see \cite[Remark 3.2]{Kato}), and the embedding of this space in $L^2_{loc}$, we conclude that $e^{t\Delta}v_0\to v_0$ in $L^{2}_{loc}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$ as $t\to 0^+$. It follows that
\begin{equation}\notag
\lim_{t\to 0} ||v(t)-v_0||_{L^2_{loc}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)}=0.
\end{equation}
\emph{Decay at spatial infinity:} For any $R>0$, the $\lambda$-DSS scaling implies $v-e^{t\Delta}v_0 \in L^2(0,R^2;L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3))$. Together with the fact that $e^{t\Delta}v_0(x)$ satisfies the same decay requirements at spatial infinity as a local Leray solution (this is easy to see given that $v_0\in L^2_{u\,loc}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$), this implies that
\begin{equation}\notag
\lim_{|x_0|\to \infty} \int_0^{R^2}\int_{B_R(x_0 )} | v(x,t)|^2\,dx \,dt=0.
\end{equation}
\emph{Local energy inequality:} This property for $(v,\pi)$ is inherited from the suitability of $(u,p)$ in the self-similar variables.
Indeed, if we let $\phi(x,t) = \frac 1{\sqrt{2t}} \psi(y,s)$ where $\psi(y,s)$ is the test function
in the local energy inequality \eqref{ineq:localEnergy} for $(u,p)$,
and note that
\[
{\partial}_t \phi(x,t) = \frac 1{(2t)^{3/2}} ({\partial}_s -1 - y \cdot {\nabla} _y)\psi(y,s),
\]
we recover the local energy inequality \eqref{eq:localEnergyNSE} for $(v,\pi)$ with test function $\phi$.
\end{proof}
\begin{remark} In the definitions $\zeta(x,t) = \frac 1{2t} f(y,s)$ and $\phi(x,t) = \frac 1{\sqrt{2t}} \psi(y,s)$ in the above proof,
the exponents of $\sqrt {2t}$ in front of $f$ and $\psi$ can be understood by dimension analysis.
The dimensions of the physical variables $x,t,v,\pi$ are $1,2,-1,-2$ respectively and are reflected in the ansatz \eqref{ansatz2}. \emph{All self-similar variables $y,s,u,p,f,\psi$ are dimension free}, hence so are the weak form and local energy inequality for $(u,p)$ and therefore those for $(v,\pi)$. As a result, the dimension of $\zeta$ is $-2$, and that of $\phi$ is $-1$. Also note that
$\zeta$ should have the same dimension as $\phi v$, as is indeed the case.
\end{remark}
\section{On existence of self-similar solutions}\label{sec:NSE}
\label{sec5}
As mentioned in the comments prior to the statement of Theorem \ref{thrm:selfsimilardata}
our ideas can be extended to give simple proofs of the existence of self-similar solutions to the 3D Navier-Stokes equations with $(-1)$-homogeneous initial data $v_0$ satisfying \eqref{ineq:decayingdata}.
Recall that the first construction is given in \cite{JiaSverak} and the second in \cite{KT-SSHS}. In the first subsection below we show that we can get self-similar solutions as limits of $\lambda_k$-DSS solutions with $\lambda_k \to 1^+$ as $k \to \infty$. %
In the second subsection we present a third construction of self-similar solutions following and simplifying the ideas of the previous sections: It constructs solutions to stationary Leray equations directly, based on our new explicit a priori bound.
\subsection{Self-similar solutions as limits of DSS solutions}
\begin{proof}[Proof of Theorem \ref{thrm:selfsimilardata}] Let $v_0$ be $(-1)$-homogeneous and satisfy the assumptions of Theorem \ref{thrm:main}. Then, $v_0$ is $\lambda$-DSS for every factor $\lambda>1$.
For $k\in \mathbb N$, let $\lambda_k=2^{(2^{-k})}$ so that $\lambda_{k+1}^2 = \lambda_k$.
This sequence decreases strictly to $1$ as $k\to\infty$.
Let $v_k$ be the $\lambda_k$-DSS local Leray solution with initial data $v_0$ obtained from Theorem \ref{thrm:main} (or from \cite[Theorem 1.1]{Tsai-DSSI}). Working within the local Leray class provides \emph{a priori} bounds for all $v_k$. In particular, letting $\mathcal N (v_0)$ denote the class of local Leray solutions with initial data $v_0$, the following estimate is well known for local Leray solutions (see \cite{JiaSverak}): for all $\tilde v\in \mathcal N (v_0)$ and $r>0$ we have
\begin{equation}
\esssup_{0\leq t \leq \sigma r^2}\sup_{x_0\in \RR^3} \int_{B_r(x_0)}\frac {|\tilde v|^2} 2 \,dx + \sup_{x_0\in \RR^3} \int_0^{\sigma r^2}\int_{B_r(x_0)} |\nabla \tilde v|^2\,dx\,dt <C \sigma ,
\end{equation}
where \begin{equation} \sigma(r) =c_0\, \min\bigg\{r^2\bigg( \sup_{x_0\in \RR^3} \int_{B_r(x_0)} \frac {|v_0|^2}2\,dx \bigg)^{-2} , 1 \bigg\},
\end{equation}
for a small universal constant $c_0$. Note that $v_0$ belongs to the Morrey space $M^{2,1}$ by the embedding $L^3_w\subset M^{2,1}$. This implies that $\sigma(r)r^2 \to \infty$ as $r\to \infty$ and, since $v_k\in \mathcal N (v_0)$, we obtain \emph{a priori} bounds for all $v_k$ across the time interval $[0,2]$ which are independent of $k$. This allows us to pass to the limit to obtain a local Leray solution $v\in \mathcal N (v_0)$. Then, for any $k$, and sufficiently large $l$, it follows that $v_l$ is DSS with scaling factor $\lambda_k$, and this property is inherited by $v$.
To show that $v$ is SS we pass to the time periodic variables $y=x/\sqrt{2t}\in\RR^3$ and $s=\log\sqrt{2t}\in \RR$ and write $u_k(y,s)=\sqrt{2t} v_k(x,t)$ where $u_k$ is time periodic with period $T_k=\log \lambda_k$, $T_{k+1}=\frac 12 T_k$. Note that all $u_l$, $l>k$, are $T_k$ periodic, and this property is inherited by $u$.
Let $U_0$ be defined as in \eqref{def:U0}, which is now constant in $s$. Note that $u$ is $T_k$ periodic for all $k$ and that $u(y,s)-U_0(y)$ is a weakly continuous $L^2({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$-valued function of $s$. Since the periods $T_k$ decrease to $0$, weak continuity forces $u$ to be constant in $s$.
Therefore $u$ solves the stationary Leray equations, which proves that $v$ is a self-similar solution on ${\mathbb R }}\newcommand{\RR}{{\mathbb R }^3\times (0,\infty)$.
\end{proof}
\subsection{Third construction}
Alternatively, we may adapt the approach of Sections \ref{sec2}-\ref{sec:DSS} to the stationary Leray system and construct self-similar solutions directly without involving the DSS class.
\begin{proof}[Proof of Theorem \ref{thrm:selfsimilardata}]
Let $U_0$ and $W$ be defined as in Sections \ref{sec2} and \ref{sec3}. Then, $W$ satisfies the estimates \eqref{ineq:Wsmall}--\eqref{LW.est} with $q=\infty$, and
is constant in the time variable. Our first goal is to find a divergence free function $u\in L^2_{u\,loc}({\mathbb R }}\newcommand{\RR}{{\mathbb R }^3)$ satisfying
\begin{align}\label{eq:stationary1}
& -(\nabla u,\nabla f)+(u+y\cdot\nabla u-u\cdot\nabla u ,f) =0,
\end{align}
for all $f\in \mathcal V$, and achieve this by solving a perturbed system for $U=u-W$. The variational form of the perturbed, stationary Leray system is
\begin{align}\label{eq:stationary2}
- (\nabla U,\nabla f)
+ (U+y\cdot \nabla U, f)
- (U \cdot\nabla U, f)
=
(W\cdot\nabla U+U\cdot \nabla W,f)+\langle \mathcal{R}(W),f\rangle,
\end{align}
which should hold for all $f\in \mathcal V$. Solutions to this system can be approximated by a Galerkin scheme, the elements of which are obtained via Brouwer's fixed point theorem. In particular, let $\{ a_k \}\subset \mathcal V$ be an orthonormal basis of $H$. For $k \in \mathbb N$, the approximating solution \[
U_k(y)=\sum_{i=1}^k b_{ki}a_i(y),
\] is required to satisfy
\begin{align}\label{eq:stationaryODE}
& \sum_{i=1}^k A_{ij}b_{ki} +\sum_{i,l=1}^k B_{ilj} b_{ki}b_{kl} +C_j=0,%
\end{align}
for $j\in \{1,\ldots,k\}$,
where
\begin{align}
\notag A_{ij}&=- (\nabla a_{i},\nabla a_j)
+ (a_i+y\cdot \nabla a_i, a_j)
-( a_i\cdot \nabla W,a_j)
- (W\cdot\nabla a_i, a_j)
\\\notag B_{ilj}&=- (a_i \cdot\nabla a_l, a_j)
\\\notag C_j&=-\langle \mathcal R (W),a_j\rangle.
\end{align}
Let $P:{\mathbb R}^k\to{\mathbb R}^k$ denote the mapping
\[
P(x)_j=\sum_{i=1}^k A_{ij}x_{i} +\sum_{i,l=1}^k B_{ilj} x_{i}x_{l} +C_j.
\]
For $x\in {\mathbb R}^k$ let $\xi=\sum_{j=1}^k x_j a_j$. We have
\EQ{
\label{eq4.6}
P(x)\cdot x &= -\frac 1 2 ||\xi||_{L^2}^2
- \frac 1 2 ||\nabla \xi||_{L^2}^2 +(\xi \cdot \nabla \xi,W) - \bka{{\mathcal R}(W),\xi}
\\
& \le -\frac 1 4 ||\xi||_{L^2}^2 - \frac 1 4 ||\nabla \xi||_{L^2}^2 +C_*^2 \norm{{\mathcal R}(W)}_{H^{-1}}^2
\\
& \le -\frac 1 4 |x|^2 + C_*^2 \norm{{\mathcal R}(W)}_{H^{-1}}^2,
}
using the smallness of $\norm{W}_{L^\infty}$. We conclude that
\[
P(x)\cdot x< 0,\quad \text{if } |x|=\rho := 3C_* \norm{{\mathcal R}(W)}_{H^{-1}}.
\]
By Brouwer's fixed point theorem, there exists an $x$ with $|x|<\rho$ such that $P(x)=0$. (Note that this $\rho$ is independent of $k$; this is a feature of the Leray system, not of the Navier-Stokes equations.) Then $U_k=\xi$ is our approximating solution satisfying
\eqref{eq:stationaryODE}, with \emph{a priori} bound
\[
\norm{U_k}_{L^2}^2 + \norm{\nabla U_k}_{L^2} ^2\le 4C_*^2 \norm{{\mathcal R}(W)}_{H^{-1}}^2,
\]
by the first inequality of \eqref{eq4.6} and $P(x)=0$.
This bound is sufficient to find a subsequence with a weak limit in $H^1({\mathbb R}^3)$ and a strong limit in $L^2(K)$ for any compact set $K$ in ${\mathbb R}^3$, that is, there exists a solution $U$ to \eqref{eq:stationary2} which satisfies $U\in H^1({\mathbb R}^3)$. A solution to \eqref{eq:stationary1} is now obtained by setting $u=U+W$. Note that $u\in H^1_{loc}\cap L^q$, $3<q\le 6$, and, following \cite[pp. 287-288]{NRS} or \cite[pp. 33-34]{Tsai-ARMA}, if we define
\[
p = \sum_{i,j} R_i R_j (u_i u_j),
\]
where $R_i$ denotes the Riesz transforms,
then $(u,p)$ solves the stationary Leray system in the distributional sense and, furthermore, by Calder\'on-Zygmund estimates,
\[
||p||_{L^{q/2}({\mathbb R}^3)}<C ||u||_{L^q({\mathbb R}^3)}^2,\quad (3<q\le 6).
\]
A solution pair $(v,\pi)$ to \eqref{eq:NSE} is now obtained by passing from the self-similar to the physical variables at time $t=1/2$ and extending to all times using the self-similar scaling relationships. It remains to show that $(v,\pi)$ is a local Leray solution.
A regularity result for a generalized stationary Stokes system (see \cite[Proposition 1.2.2]{Temam}) leads to higher regularity for the pair $(U,p)$ on compact subsets of ${\mathbb R}^3$. In particular, $U$ and $p$ are infinitely differentiable. Since $W,\, U\in C^\infty$, so is $u$. This guarantees that $v$ and $\pi$ are smooth in the spatial variables. Smoothness in time is apparent from the self-similar scaling properties of $v$ and $\pi$. Therefore, testing the equation for $v$ against $\phi\,v$, where $\phi\in C_0^\infty ( {\mathbb R}^3\times (0,\infty))$ is non-negative, and integrating by parts confirms that the pair $(v,\pi)$ satisfies the local energy identity. The remaining conditions from Definition \ref{def:localLeray} follow as in the proof of Theorem \ref{thrm:main}.
\end{proof}
\section*{Acknowledgments}
The research of
both authors was partially supported by the Natural Sciences and
Engineering Research Council of Canada grant 261356-13.
\section{Introduction}
The unbridled growth of publications in biomedical literature databases offers a great opportunity for researchers to stand on the shoulders of giants for cutting-edge advancements. Nonetheless, it is also a challenge to digest the extensive information in such a huge volume of textual data. Information Extraction (IE) is an effective approach to summarize the knowledge into expressive forms for management and comprehension; it can be integrated with other knowledge resources for innovative discovery \cite{Rebholz-Schuhmann2012}. Examples include protein-protein interactions \cite{Mallory2015}, drug-drug interactions \cite{Zhao2016}, causal relationships between biological entities \cite{Perfetto2015}, and other topic-oriented association mining systems \cite{Lim2016, Canada2017}.
Over the past decades, considerable efforts have been devoted to rule-based \cite{Bui2012} and trigger-based \cite{Bjorne2010, Bjorne2011} detection methods for biomedical event extraction from PubMed abstracts \cite{Ananiadou2010}. \rehl{In general, trigger detection dominates the whole prediction process, and its performance greatly affects the final event detection \cite{Pyysalo2012}. Trigger identification methods have been well studied and improved. The latest trigger-based approaches using deep neural networks have shown their strength in general event extraction tasks \cite{Nguyen2016}.} Combined with lexical and semantic features, word embeddings \cite{Mikolov2013} have been proposed to build advanced trigger classifiers \cite{Zhou2014}. Nevertheless, trigger detection is a multi-class classification problem with limited annotation labels. The well-known datasets from the BioNLP Shared Task (BioNLPST) include BioNLP'09 \cite{Kim2009}, BioNLP'11 \cite{Kim2011}, BioNLP'13 \cite{Nedellec2013}, and BioNLP'16 \cite{Nedellec2016}. The trigger-based methods rely on the dependency parse tree and character n-grams. Dependency parsing in natural language processing (NLP) is well studied \cite{Nivre2016} and has evolved from empirical techniques to neural network models \cite{Chen2014}. However, parser performance deviates from that in traditional applications when applied to biomedical literature, due to contextual variations. A parser developed specifically for biomedical text mining (BioNLP), such as \rehl{McCCJ \cite{McClosky2008}}, is necessary for biomedical information extraction \cite{Luo2017}. Bi-directional LSTMs have been applied to medical event detection in clinical records \cite{Jagannatha2016}. \rehl{Nonetheless, those events are binary relations, which are very different from the complex events in BioNLPST.}
\rehl{One of the major concerns behind this is that prediction errors propagate along the whole pipeline.} The training data for trigger detection is quite limited because the ground-truth labels are not even given in the BioNLP Shared Task datasets. \rehl{In addition, the training samples are not easily selected manually.} Consequently, trigger detection becomes an unbalanced multi-class classification problem, which is the main barrier to performance improvement in the subsequent biomedical text mining tasks.
\rehl{In this study, we propose a novel method to detect biomedical events using a different strategy. We need neither trigger annotations nor cumbersome dependency parsing for each sentence. Instead, we model a context embedding for each argument, and these argument embeddings are used to detect directed relations. The proposed neural network model is applicable to general event extraction, thanks to the universality of the underlying neural language models \cite{Bengio2003}. Our method is specially designed for biomedical event extraction while keeping its components (e.g. the pre-trained word embedding) replaceable for general event extraction tasks. The remainder of this paper is organized as follows. First, we briefly introduce the datasets and point out the shortcomings of existing approaches. Next, we sketch the framework of our approach and then elaborate on the procedures in detail. After that, we evaluate our method and make a comprehensive comparison with other approaches on the BioNLP Shared Task datasets. Finally, we demonstrate the effectiveness of our method by investigating the underlying reasons through experiments.}
\section{Datasets}
\begin{table}[!ht]
\caption{Statistics of the events for two tasks in BioNLP Shared Task \label{tab-evntstat}} \centering{\renewcommand{\arraystretch}{1.4}
\begin{tabular}{@{}m{\dimexpr 0.2\linewidth-2\tabcolsep}m{\dimexpr 0.3\linewidth-2\tabcolsep}m{\dimexpr 0.33\linewidth-2\tabcolsep}cc@{}}
\hline
Task & Event Type & Arguments & \shortstack{Train\\-ing \\Set} & \shortstack{Develo\\-pment \\Set} \\
\hline
\multirow{9}{*}{\shortstack{BioNLP \\Shared \\Task 2011 \\- Bacteria \\Gene \\Interactions}} & ActionTarget & Action-\textgreater{}Target & 108 & 18 \\
& Interaction & Agent-\textgreater{}Target & 126 & 18 \\
& PromoterDependence & Promoter-\textgreater{}Protein & 32 & / \\
& PromoterOf & Promoter-\textgreater{}Gene & 36 & / \\
& RegulonDependence & Regulon-\textgreater{}Target & 11 & / \\
& RegulonMember & Regulon-\textgreater{}Member & 15 & / \\
& SiteOf & Site-\textgreater{}Entity & 17 & / \\
& TranscriptionBy & Transcription-\textgreater{}Agent & 25 & 3 \\
& TranscriptionFrom & Transcription-\textgreater{}Site & 14 & / \\
\hline
\shortstack{BioNLP \\Shared \\Task 2016 \\- Bacteria \\Biotopes} & Lives\_In & Bacteria-\textgreater{}Location & 327 & 223 \\
\hline
\end{tabular}}{}
\end{table}
\rehl{In order to ensure fair comparisons among different approaches, we adopted two datasets from the BioNLP Shared Task with 1 (BioNLPST-BB) and 9 (BioNLPST-BGI) event type(s). They consist of the events of bacteria localization and the genetic processes concerning the bacterium Bacillus subtilis, respectively.} We aim to measure how the performance changes with different numbers of event types, as an estimate of model generalization. The development set is initially used to validate the prediction model or tune the hyper-parameters. However, it only contains 3 out of the 9 event types in BioNLPST-BGI. Therefore, we combine the training set and the development set into a single annotated dataset for each task. As shown in Table~\ref{tab-evntstat}, the event types are extremely imbalanced in BioNLPST-BGI, which means that event detection is an imbalanced multi-class classification problem.
The events come from the sentences of PubMed abstracts and the biological entities are annotated by curators or named entity recognition (NER) tools. \rehl{The objective of event detection is to predict the relationships among the pre-annotated or recognized entities. For example, the sentence ``We now report that the purified product of gerE (GerE) is a DNA-binding protein that adheres to the promoters for cotB and cotC.'' has a total of 6 pre-annotated entities: ``T1:purified product of gerE'', ``T2:GerE'', ``T3:DNA-binding protein'', ``T4:promoters'', ``T5:cotB'', and ``T6:cotC''. It contains two ``PromoterOf'' events (E1:promoters-\textgreater{}cotB; E2:promoters-\textgreater{}cotC) and two ``Interaction'' events (E3:GerE-\textgreater{}cotB; E4:GerE-\textgreater{}cotC). These events differ from traditional binary relations (e.g. gene-gene interaction) due to the difficulty of recognizing their directions and the diversity of the entity types as well as the event types. In the context of knowledge-graph topology, our prediction is a directed edge with a specific type instead of a plain binary relation. The aforementioned example can thus be used to construct a directed graph consisting of 6 nodes (entities) and 4 edges (events). We directly adopted the tokenization and NER results (e.g. ``T1:Protein'', ``T2:Protein'', ``T3:ProteinFamily'', ``T4:Promoter'', ``T5:Gene'', and ``T6:Gene'') from the annotated datasets.}
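As a minimal illustration (plain Python; the variable names are ours), the example sentence yields the following knowledge-graph fragment with 6 nodes and 4 directed, typed edges:

```python
# Entities and events taken from the example sentence in the text.
entities = {
    "T1": "purified product of gerE", "T2": "GerE",
    "T3": "DNA-binding protein", "T4": "promoters",
    "T5": "cotB", "T6": "cotC",
}

# Directed, typed edges: (source entity, target entity, event type).
events = [
    ("T4", "T5", "PromoterOf"),   # E1: promoters -> cotB
    ("T4", "T6", "PromoterOf"),   # E2: promoters -> cotC
    ("T2", "T5", "Interaction"),  # E3: GerE -> cotB
    ("T2", "T6", "Interaction"),  # E4: GerE -> cotC
]

# Adjacency view of the resulting directed graph: 6 nodes, 4 edges.
adj = {}
for src, dst, etype in events:
    adj.setdefault(src, []).append((dst, etype))
```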
\begin{table}[!ht]
\caption{Statistics of the arguments for two tasks in BioNLP Shared Task \label{tab-argstat}} \centering {
\begin{tabular}{@{}m{\dimexpr 0.33\linewidth-2\tabcolsep}m{\dimexpr 0.25\linewidth-2\tabcolsep}cc@{}}
\hline
Task & Argument Type & Training Set & Development Set \\
\hline
\multirow{11}{*}{\shortstack{BioNLP Shared \\Task 2011 - \\Bacteria Gene \\Interactions}} & Action & 92 & 16 \\
& Agent & 125 & 15 \\
& Entity & 15 & / \\
& Gene & 36 & 3 \\
& Member & 15 & / \\
& Promoter & 38 & / \\
& Protein & 29 & / \\
& Regulon & 10 & / \\
& Site & 29 & / \\
& Target & 185 & 21 \\
& Transcription & 31 & 3 \\
\hline
\multirow{2}{*}{\shortstack{BioNLP Shared Task \\2016 - Bacteria Biotopes}} & Bacteria & 168 & 118 \\
& Location & 260 & 184 \\
\hline
\end{tabular}}{}
\end{table}
Besides the event annotations (e.g. E1:T4-\textgreater{}T6, E2:T4-\textgreater{}T5, E3:T2-\textgreater{}T5, E4:T2-\textgreater{}T6), the argument labels (e.g. ``T1:Protein'', ``T2:Protein'', ``T3:ProteinFamily'', ``T4:Promoter'', ``T5:Gene'', ``T6:Gene'') within each event type are also used in our method. Table~\ref{tab-argstat} shows the summary of the argument numbers in each task. It is obvious that the labels for the argument types are also imbalanced. The arguments are all annotated on the recognized entities. Therefore, we assume that the error rate of the entity recognition is very low, and we can consider it as known information.
The triggers used in most of the existing approaches are not officially released in the datasets; they are manually annotated by the researchers. However, those trigger words vary across different tasks, which requires heavy manual preprocessing. Furthermore, the classification errors in the trigger detectors can propagate to the argument detection and event detection. \rehl{The absence of trigger words does not imply the absence of events, since different authors may have different writing styles and triggers are not guaranteed to appear in the sentence.} Therefore, we do not use any trigger-based method in our study. Instead, the context of the arguments within each event is considered while constructing features.
\section{Methodology}
\subsection{An overview of the event detection framework}
\begin{figure*}[!tpb]
\resizebox{\textwidth}{!}{\centering\includegraphics{workflow}}
\caption{Overview of the neural network architecture for argument embedding and event detection. VecEntNet is trained for argument detection using the argument annotations in the training set. The parameters of VecEntNet is then fixed and the hidden layer of the MLP in VecEntNet is used as the input of VeComNet. VeComNet is trained for directed event detection using the event annotations in the training set. The testing data is passed to VecEntNet for generating argument embedding which is put into VeComNet for event prediction.} \label{fig-wf}
\end{figure*}
The overall workflow of our proposed event detection method is shown in Figure~\ref{fig-wf}. We take the tokenized words in the dataset as input and transform them into word vectors trained on the PubMed literature. For each pair of event arguments $W_{a}$ and $W_{b}$, we feed the streams of words on both sides of them into a bi-directional LSTM to construct the context embeddings \cite{Melamud2016} of the arguments. We train the context embedding model (VecEntNet) using the argument annotations in each task. The context embeddings are further used to train the event detection model (VeComNet) to detect the event type and direction.
\subsection{Word embedding}
To construct robust features for argument recognition, we use distributed representations of the words in a sentence instead of traditional N-gram features \cite{Mikolov2013}. The adopted word vectors are pre-trained on a corpus of 10,876,004 biomedical abstracts from PubMed, covering 1,701,632 distinct words with 200 dimensions \cite{Kosmopoulos2015}. The training is effectively a transformation from the one-hot encoding of the words to a continuous space with dimension reduction. Such unsupervised training on a large corpus captures the general features of each word and helps prevent over-fitting in downstream tasks.
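Conceptually, looking up a pre-trained embedding amounts to multiplying a one-hot vector by the embedding matrix, i.e. selecting a row. A toy numpy sketch (the random matrix below is only a stand-in for the PubMed-trained vectors; the vocabulary is ours):

```python
import numpy as np

vocab = {"promoters": 0, "cotB": 1, "cotC": 2}  # toy vocabulary
dim = 200  # dimensionality of the PubMed-trained vectors used in the text
rng = np.random.default_rng(0)
E = rng.normal(size=(len(vocab), dim))  # stand-in embedding matrix

def one_hot(word):
    v = np.zeros(len(vocab))
    v[vocab[word]] = 1.0
    return v

def embed(word):
    # one-hot times matrix == row lookup; a dimension-reducing projection
    # when dim is much smaller than the vocabulary size
    return E[vocab[word]]
```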
\subsection{Bi-directional LSTM}
LSTM \cite{Gers1999} is a recurrent neural network (RNN) cell that can be trained to decide which information should be forgotten or kept. Bi-directional LSTMs (BLSTM) are broadly utilized in NLP tasks to learn contextual representations of phrases or sentences \cite{Melamud2016}. Therefore, we use the words surrounding the recognized entities to train the contextual representations. As shown in Figure~\ref{fig-wf}, $W_{a}$ and $W_{b}$ are recognized as two biological entities, each of which can be a word or a phrase. The word embedding sequences $W_{0}...W_{a}$ and $W_{n}...W_{a}$ are extracted from the two directions as the inputs of a bi-directional LSTM. In practice, we set a window size $u$ to normalize the lengths of the two word sequences and use a dummy word to pad any sequence with length less than $u$. The inputs are then written as follows.
\begin{align} \label{eq-input}
\begin{split}
\vec{x_{0}} &= <X_{0}, X_{1}> = <W_{a-u:a}, W_{a+u:a}> \\
\vec{x_{1}} &= <X_{2}, X_{3}> = <W_{b-u:b}, W_{b+u:b}>
\end{split}
\end{align}
\noindent \sloppy where $\vec{x_{i}}$ stands for the surrounding words of entity $i$, and $W_{a:b}$ is the sequence of word embeddings from the $a^{th}$ to the $b^{th}$ word. We adopt a closed-boundary strategy to construct the contextual word sequences because the named entity itself may contain useful information to distinguish the argument context. As for the example mentioned in Section 2, the word ``promoters'' itself indicates that it is probably an ``Agent'' argument in a ``PromoterOf'' event, since it is a general word that is also applicable to other entities. In contrast, the words ``cotB'' and ``cotC'' contribute little to the context modeling and will be forgotten by the BLSTM. Given the window size $u=3$, the inputs for the event promoters-\textgreater{}cotB are $\vec{x_{0}}=[adheres, to, the, promoters; and, cotB, for, promoters]; \vec{x_{1}}=[the, promoters, for, cotB; DummyWord, cotC, and, cotB]$. \rehl{The word streams are then fed into $LSTM$ cells, and the output of the $BLSTM$ is the concatenation of the output vectors of the left-to-right $LSTM$ and the right-to-left $LSTM$. In the above-mentioned example, the outputs of the $BLSTM$ layers are represented as $LSTM([adheres, to, the, promoters])$ concatenated with $LSTM([and, cotB, for, promoters])$, and $LSTM([the, promoters, for, cotB])$ concatenated with $LSTM([PADDING, cotC, and, cotB])$, where $LSTM([\ldots])$ denotes the last output of the $LSTM$ layer.}
\begin{align} \label{eq-lstm}
\begin{split}
BLSTM(X_{0}, X_{1}) &= LSTM(W_{a-u:a}) \oplus LSTM(W_{a+u:a}) \\
BLSTM(X_{2}, X_{3}) &= LSTM(W_{b-u:b}) \oplus LSTM(W_{b+u:b})
\end{split}
\end{align}
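The closed-boundary window construction of Equation~\ref{eq-input} can be sketched in a few lines of plain Python (the function and padding-token names are ours):

```python
PAD = "<PAD>"  # dummy word used to pad short sequences

def context_windows(tokens, idx, u):
    """Closed-boundary context windows around the entity at position idx.

    Returns (left, right): `left` runs from token idx-u up to idx, and
    `right` runs from token idx+u down to idx (right-to-left), both
    including the entity token itself. Out-of-range positions are padded.
    """
    left = [tokens[i] if i >= 0 else PAD
            for i in range(idx - u, idx + 1)]
    right = [tokens[i] if i < len(tokens) else PAD
             for i in range(idx + u, idx - 1, -1)]
    return left, right
```

Applied to the running example with $u=3$, the entity ``promoters'' yields the windows $[adheres, to, the, promoters]$ and $[and, cotB, for, promoters]$, matching the text.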
\subsection{Argument embedding}
We use a Multi-Layer Perceptron (MLP) to train the argument classification model. As observed in Table~\ref{tab-argstat}, the skewed label distribution is a challenge for argument identification. We separate this multi-label classification problem into several binary classification problems under the one-vs-all strategy. We then train each argument classifier separately using the estimator formulated in Equation~\ref{eq-mlp}. We use a Dropout layer \cite{Srivastava2014} with a dropout rate of 0.2 in the MLP as regularization to prevent over-fitting.
\begin{align} \label{eq-mlp}
\hat{y} &= MLP(\vec{x}) = Sigmoid(F_{2}(Tanh(F_{1}(BLSTM(\vec{x})))))
\end{align}
\noindent where $F_{i}(x)=A_{i}x + b_{i}$ is a fully connected layer in the MLP, $Tanh$ is the hyperbolic tangent activation function, and $Sigmoid$ is the activation function of the last layer of the MLP. To tackle the class imbalance, we first estimate the distribution of the binary labels from the training dataset, and then use the weighted binary cross-entropy in Equation~\ref{eq-loss} as the loss function to optimize the neural network model.
\begin{align} \label{eq-loss}
loss &= -\sum_{i}(zy_{i}\log\hat{y}_{i}+(1-z)(1-y_{i})\log(1-\hat{y}_{i}))
\end{align}
\noindent \rehl{where $y_{i}$ is the true label of sample $i$ and $z$ represents the weight of the positive class. The class weight $z$ is estimated as $1-\frac{n}{N}$, where $n$ is the number of positive samples and $N$ is the total number of samples.}
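A numpy sketch of the class weight $z=1-\frac{n}{N}$ and the weighted loss of Equation~\ref{eq-loss} (the function names are ours):

```python
import numpy as np

def class_weight(labels):
    # z = 1 - n/N, with n positives among N binary labels
    y = np.asarray(labels, dtype=float)
    return 1.0 - y.sum() / y.size

def weighted_bce(labels, preds, z):
    # Weighted binary cross-entropy: positives weighted by z,
    # negatives by (1 - z), summed over the samples
    y = np.asarray(labels, dtype=float)
    y_hat = np.asarray(preds, dtype=float)
    return -np.sum(z * y * np.log(y_hat)
                   + (1.0 - z) * (1.0 - y) * np.log(1.0 - y_hat))
```

With 1 positive among 4 samples, $z=0.75$, so a rare positive contributes three times as much to the loss as each abundant negative.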
After VecEntNet is trained, we extract the argument embedding $R$ from the first layer of the MLP (Equation~\ref{eq-argvec}) for event detection. In particular, the triggers are implicitly embedded in the context of each argument; the trigger information, as well as its relations to the arguments, is encoded into the argument embedding used for event detection.
\begin{align} \label{eq-argvec}
R(\vec{x}) &= F_{1}(BLSTM(\vec{x}))
\end{align}
For the classifier of an event type $e_{i}: <arg_{s}, arg_{t}>$, we take the input $\vec{x}^{*}$ as the concatenation of both argument embeddings for each recognized entity of candidate pairs within one sentence (Equation~\ref{eq-evnt-input}). Since we are not aware of the true argument type for each entity, we use both embedding types with different orders for the entity pairs.
\begin{align} \label{eq-evnt-input}
\vec{x}^{*} &= <R_{arg_{s}}(\vec{x_{0}}) \oplus R_{arg_{t}}(\vec{x_{0}}), R_{arg_{t}}(\vec{x_{1}}) \oplus R_{arg_{s}}(\vec{x_{1}})>
\end{align}
VeComNet is designed for detecting the event types as well as the event direction of a candidate pair of recognized entities. To be consistent, we also build the multi-class classifiers under the one-vs-all strategy for event detection. For an event type $e_{i}$, we encode the direction $arg_{s} \longrightarrow arg_{t}$ as 1 and others as 0. As a result, the label for a directed event type has two bits, in which one bit encodes the existence of this event type and another one encodes the direction. Therefore, the binary classification problem for each event type is transformed into a multi-label classification problem.
Similar to word vectors, argument vectors also possess the compositional property. To reflect the direction in the model, we use a subtraction layer to combine the two input vectors as $VeCom(\vec{x}^{*})$ (Equation~\ref{eq-vecom}) and use it to predict the direction. \rehl{The subtraction of the two argument vectors can be regarded as the multiplication of their concatenation by a factor matrix $\begin{bmatrix} I \\ -I \end{bmatrix}$, where $I$ denotes the identity matrix. We explicitly multiply by this factor matrix to conduct the vector composition before proceeding to the fully connected layer. In addition, the subtraction layer decreases the number of neurons in the MLP, and thus improves model generalization.} As for the existence, we take the element-wise absolute value of $VeCom(\vec{x}^{*})$ as the input to another MLP for existence prediction.
\begin{align} \label{eq-vecom}
\begin{split}
VeCom(\vec{x}^{*}) = {} & R_{arg_{s}}(\vec{x_{0}}) \oplus R_{arg_{t}}(\vec{x_{0}})\\
{} & - R_{arg_{t}}(\vec{x_{1}}) \oplus R_{arg_{s}}(\vec{x_{1}})
\end{split}
\end{align}
\noindent The resultant directed event estimator is demonstrated in Equations~\ref{eq-evnt-mlp} and ~\ref{eq-evnt-dir} representing the existence and direction respectively.
\begin{align} \label{eq-evnt-mlp}
\begin{split}
\hat{y}^{*} &= MLP^{*}(Abs(VeCom(\vec{x}^{*}))) \\
& = Sigmoid(F_{2}^{*}(ReLU(F_{1}^{*}(Abs(VeCom(\vec{x}^{*}))))))
\end{split}
\end{align}
\begin{align} \label{eq-evnt-dir}
\begin{split}
\hat{y}' &= MLP'(VeCom(\vec{x}^{*})) \\
& = Sigmoid(F_{2}'(ReLU(F_{1}'(VeCom(\vec{x}^{*})))))
\end{split}
\end{align}
\noindent where $F_{i}^{*}(x)=A_{i}^{*}x + b_{i}^{*}$ and $F_{i}'(x)=A_{i}'x + b_{i}'$ are fully connected layers, $Abs$ is a layer that takes the element-wise absolute value, and $ReLU$ is the Rectified Linear Unit activation function. Binary cross-entropy is adopted as the loss function and Stochastic Gradient Descent (SGD) as the optimizer to train the classifiers for each event type.
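The composition of Equation~\ref{eq-vecom} and the two heads of Equations~\ref{eq-evnt-mlp}--\ref{eq-evnt-dir} can be sketched with numpy; the layer sizes and the random weights below are hypothetical stand-ins for the trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hypothetical dimension of one argument embedding R(x)

# stand-ins for the trained argument embeddings of the candidate pair
R_s_x0, R_t_x0 = rng.normal(size=d), rng.normal(size=d)
R_t_x1, R_s_x1 = rng.normal(size=d), rng.normal(size=d)

# VeCom = [R_s(x0) + R_t(x0) concatenated] - [R_t(x1) + R_s(x1) concatenated]
vecom = np.concatenate([R_s_x0, R_t_x0]) - np.concatenate([R_t_x1, R_s_x1])

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

relu = lambda v: np.maximum(v, 0.0)

# hypothetical (untrained) weights shared here for brevity
W1 = rng.normal(size=(64, 2 * d))
W2 = rng.normal(size=(1, 64))

existence = sigmoid(W2 @ relu(W1 @ np.abs(vecom)))  # head on |VeCom|
direction = sigmoid(W2 @ relu(W1 @ vecom))          # head on VeCom itself
```

The existence head sees only $|VeCom|$, which is invariant to the sign of the composition, while the direction head keeps the signed vector.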
\section{Results}
The training set and development set are combined to form an annotated dataset. We evaluate our method under 10-fold cross-validation. For the arguments or events in BioNLPST-BGI with fewer than 20 data instances, we switch to 5-fold cross-validation to ensure that the testing set does not contain fewer than 2 classes. To ensure the training quality for those scarce labels, we randomly duplicate the samples in the training set so that the binary class ratio is bounded by 5. Only the training samples are duplicated when training the argument embedding; the testing samples are neither duplicated nor used in argument embedding. We trained our models on a Linux machine equipped with a 32-core CPU and 32GB RAM. The hyper-parameters used in the experiments are summarized in Tables~\ref{tab-vecent-hyparam} and ~\ref{tab-vecom-hyparam}. \rehl{Parameter analysis is also conducted to show the robustness of our method. The results shown in the Supplementary indicate that our method is not sensitive to the hyper-parameters.}
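The duplication step that bounds the binary class ratio can be sketched as follows (plain Python; the function name, the seed, and the final shuffle are our choices, and the sketch assumes at least one minority sample):

```python
import math
import random

def bound_class_ratio(samples, labels, max_ratio=5, seed=0):
    """Randomly duplicate minority-class training samples so that the
    majority/minority class ratio is at most `max_ratio`."""
    rng = random.Random(seed)
    pairs = list(zip(samples, labels))
    pos = [p for p in pairs if p[1] == 1]
    neg = [p for p in pairs if p[1] == 0]
    minority, majority = (pos, neg) if len(pos) < len(neg) else (neg, pos)
    # how many duplicates are needed to reach the target ratio
    need = max(0, math.ceil(len(majority) / max_ratio) - len(minority))
    augmented = pairs + [rng.choice(minority) for _ in range(need)]
    rng.shuffle(augmented)
    xs, ys = zip(*augmented)
    return list(xs), list(ys)
```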
\begin{table}[!ht]
\parbox{.5\linewidth}{
\caption{Hyper-parameters used in VecEntNet \label{tab-vecent-hyparam}} \centering {
\begin{tabular}{@{}ll@{}}
\hline
\multicolumn{2}{c}{VecEntNet} \\
\hline
context window size & 10 \\
LSTM hidden/output units & 128 \\
MLP input units & 256 \\
MLP hidden units & 128 \\
Batch size & 32 \\
Epoch & 10 \\
\hline
\end{tabular}}{}}
\hfill
\parbox{.5\linewidth}{
\caption{Hyper-parameters used in VeComNet \label{tab-vecom-hyparam}} \centering {
\begin{tabular}{@{}ll@{}}
\hline
\multicolumn{2}{c}{VeComNet} \\
\hline
MLP input units & 128 \\
MLP hidden units & 64 \\
Batch size & 32 \\
Epoch & 10 \\
\hline
\end{tabular}}{}}
\end{table}
\subsection{Performance of VecEntNet and VeComNet during training}
\begin{figure*}[!tpb]
\resizebox{\textwidth}{!}{\centering
\subfigure[]{\includegraphics[width=.33\textwidth]{bgi2011_ent_accuracy}}
\subfigure[]{\includegraphics[width=.33\textwidth]{bgi2011_ent_loss}}
\subfigure[]{\includegraphics[width=.33\textwidth]{bgi2011_ent_mse}}
}
\caption{Performance of VecEntNet on BioNLPST-BGI dataset over training and testing process. (a) Model Accuracy. (b) Model Loss. (c) Mean Squared Error.} \label{fig-bgi2011-ent-model-perf}
\end{figure*}
\begin{figure*}[!tpb]
\resizebox{\textwidth}{!}{\centering
\subfigure[]{\includegraphics[width=.33\textwidth]{bgi2011_evnt_accuracy}}
\subfigure[]{\includegraphics[width=.33\textwidth]{bgi2011_evnt_loss}}
\subfigure[]{\includegraphics[width=.33\textwidth]{bgi2011_evnt_mse}}
}
\caption{Performance of VeComNet on BioNLPST-BGI dataset over training and testing process. (a) Model Accuracy. (b) Model Loss. (c) Mean Squared Error.} \label{fig-bgi2011-evnt-model-perf}
\end{figure*}
\begin{figure*}[!tpb]
\resizebox{\textwidth}{!}{\centering
\subfigure[]{\includegraphics[width=.33\textwidth]{bb2016_accuracy}}
\subfigure[]{\includegraphics[width=.33\textwidth]{bb2016_loss}}
\subfigure[]{\includegraphics[width=.33\textwidth]{bb2016_mse}}
}
\caption{Performance of VecEntNet and VeComNet on BioNLPST-BB dataset over training and testing process. (a) Model Accuracy. (b) Model Loss. (c) Mean Squared Error.} \label{fig-bb2016-ent-model-perf}
\end{figure*}
We use accuracy and mean squared error to keep track of the iterative training. As depicted in Figures~\ref{fig-bgi2011-ent-model-perf} and ~\ref{fig-bb2016-ent-model-perf}, VecEntNet converges roughly at the \nth{10} epoch and remains stable during the subsequent training. Therefore, we use 10 epochs as the default hyper-parameter in the subsequent experiments. Figure~\ref{fig-bgi2011-ent-model-perf} shows that only the argument ``Gene'' converges more slowly than the others.
Nevertheless, the overall training performance of VecEntNet and VeComNet is desirable.
\subsection{Performance of VecEntNet and VeComNet under 10-fold cross-validation}
\begin{figure*}[!tpb]
\resizebox{\textwidth}{!}{\centering
\subfigure[]{\includegraphics[width=.5\textwidth]{bgi2011_ent_cv_roc}}
\subfigure[]{\includegraphics[width=.5\textwidth]{bgi2011_ent_cv_prc}}
}
\caption{Performance of VecEntNet on BioNLPST-BGI dataset. (a) Micro-average ROC curves. (b) Micro-average PRC curves.} \label{fig-bgi2011-vecent-roc-prc}
\end{figure*}
\begin{figure*}[!tpb]
\resizebox{\textwidth}{!}{\centering
\subfigure[]{\includegraphics[width=.5\textwidth]{bgi2011_evnt_cv_roc}}
\subfigure[]{\includegraphics[width=.5\textwidth]{bgi2011_evnt_cv_prc}}
}
\caption{Performance of VeComNet on BioNLPST-BGI dataset. (a) Micro-average ROC curves. (b) Micro-average PRC curves.} \label{fig-bgi2011-vecom-roc-prc}
\end{figure*}
\begin{figure*}[!tpb]
\resizebox{\textwidth}{!}{\centering
\subfigure[]{\includegraphics[width=.5\textwidth]{bb2016_cv_roc}}
\subfigure[]{\includegraphics[width=.5\textwidth]{bb2016_cv_prc}}
}
\caption{Performance of VecEntNet and VeComNet on BioNLPST-BB dataset. (a) Micro-average ROC curves. (b) Micro-average PRC curves.} \label{fig-bb2016-roc-prc}
\end{figure*}
We evaluate the overall performance with precision, recall, and F-score under 10-fold cross-validation. We can observe from Figure~\ref{fig-bgi2011-vecent-roc-prc} that VecEntNet performs very well on most of the argument classifications on BioNLPST-BGI. However, it is expected that VecEntNet can be underestimated on the tasks with limited training samples such as ``Entity'', ``Gene'', and ``Site''. Nevertheless, VeComNet achieves robust performance by leveraging the argument embedding learned by VecEntNet. As for the performance on the BioNLPST-BB dataset shown in Figure~\ref{fig-bb2016-roc-prc}, we can see that VecEntNet as well as VeComNet performs better and remains stable once sufficient data is given. Our proposed model performs well on balanced data, but it is also applicable to imbalanced labels due to the weighted loss function adopted in VecEntNet. The detailed performance is tabulated in Tables~\ref{tab-bgi2011-vecent-perf}, ~\ref{tab-bgi2011-vecom-perf}, and ~\ref{tab-bb2016-perf}.
\begin{table*}[!ht]
\caption{Performance of VecEntNet on BioNLPST-BGI dataset with regards to different argument types under 10-fold cross validation \label{tab-bgi2011-vecent-perf}} \centering {
\resizebox{\textwidth}{!}{\centering
\begin{tabular}{@{}m{\dimexpr 0.15\linewidth-2\tabcolsep}ccccccccccc@{}}
\hline
& Action & Agent & Entity & Gene & Member & Promoter & Protein & Regulon & Site & Target & Transcription \\
\hline
\rehl{\# of Training Samples} & 108 & 140 & 15 & 36 & 15 & 38 & 29 & 10 & 29 & 185 & 31 \\
Accuracy & 0.97 & 0.83 & 0.86 & 0.91 & 0.97 & 0.97 & 0.92 & 0.99 & 0.94 & 0.77 & 0.97 \\
Precision & 0.92 & 0.56 & 0.09 & 0.32 & 0.48 & 0.70 & 0.52 & 0.68 & 0.45 & 0.61 & 0.60 \\
Recall & 0.93 & 0.93 & 0.65 & 0.55 & 0.80 & 0.90 & 0.72 & 1.00 & 0.80 & 0.73 & 0.98 \\
F score & 0.92 & 0.70 & 0.15 & 0.37 & 0.54 & 0.77 & 0.50 & 0.77 & 0.52 & 0.66 & 0.75 \\
Train time (s) & 603.73 & 696.15 & 512.58 & 710.31 & 677.00 & 469.79 & 595.32 & 391.12 & 591.96 & 660.29 & 610.50 \\
Test time (s) & 1.65 & 1.59 & 1.56 & 1.79 & 2.15 & 0.89 & 1.02 & 0.82 & 0.94 & 1.87 & 1.09 \\
\hline
\end{tabular}}}{}
\end{table*}
\begin{table*}[!ht]
\caption{Performance of VeComNet on BioNLPST-BGI dataset with regards to different event types under 10-fold cross validation \label{tab-bgi2011-vecom-perf}} {
\resizebox{\textwidth}{!}{\centering
\begin{tabular}{@{}m{\dimexpr 0.1\linewidth-2\tabcolsep}cccccccccc@{}}
\hline
& ActionTarget & Interaction & PromoterDependence & PromoterOf & RegulonDependence & RegulonMember & SiteOf & TranscriptionBy & TranscriptionFrom \\
\hline
Accuracy & 0.93 & 0.91 & 0.98 & 0.98 & 0.99 & 0.99 & 0.98 & 0.97 & 0.99 \\
Precision & 0.70 & 0.73 & 0.82 & 0.79 & 0.97 & 0.99 & 0.95 & 0.60 & 0.99 \\
Recall & 0.91 & 0.82 & 0.84 & 0.78 & 0.99 & 0.99 & 0.98 & 0.98 & 0.99 \\
F score & 0.79 & 0.77 & 0.82 & 0.76 & 0.98 & 0.99 & 0.97 & 0.75 & 0.99 \\
Train time (s) & 4.99 & 5.04 & 5.00 & 5.03 & 5.55 & 5.67 & 6.18 & 5.69 & 5.78 \\
Test time (s) & 0.15 & 0.15 & 0.16 & 0.16 & 0.16 & 0.17 & 0.16 & 0.16 & 0.14 \\
\hline
\end{tabular}}}{}
\end{table*}
\begin{table}[!ht]
\caption{Performance of VecEntNet and VeComNet on BioNLPST-BB dataset \label{tab-bb2016-perf}} \centering {
\begin{tabular}{@{}m{\dimexpr 0.3\linewidth-2\tabcolsep}ccc@{}}
\hline
\multirow{2}{*}{} & \multicolumn{2}{c}{VecEntNet} & VeComNet \\ \cline{2-4}
& Bacteria & Location & Lives\_In \\
\hline
Accuracy & 0.88 & 0.82 & 0.92 \\
Precision & 0.66 & 0.69 & 0.89 \\
Recall & 0.74 & 0.77 & 0.96 \\
F score & 0.69 & 0.72 & 0.92 \\
Train time (s) & 771.44 & 757.76 & 4.83 \\
Test time (s) & 0.72 & 0.74 & 0.15 \\
\hline
\end{tabular}}{}
\end{table}
As for the two worst cases of argument classification, ``Entity'' and ``Gene'' (F-scores of 0.15 and 0.37), the corresponding event detection is still satisfactory (F-scores of 0.97 and 0.76). We can also observe that the arguments with better performance (``Site'' and ``Promoter'') within the same event type compensate for the deficiencies of the weaker ones.
\subsection{Performance comparison with other top-ranked approaches}
\begin{table}[!ht]
\caption{Performance comparison between VeComNet and the \textbf{best} method (Uturku) on BioNLPST-BGI dataset \label{tab-bgi2011-perf-cmp}} \centering {
\begin{tabular}{lcccccc}
\hline
Method & \multicolumn{3}{c}{VeComNet} & \multicolumn{3}{c}{Uturku \cite{Bjorne2012}} \\
\hline
Event Type & Precision & Recall & F-score & Precision & Recall & F-score \\
\hline
ActionTarget & 0.7 & 0.91 & 0.79 & 0.94 & 0.92 & 0.93 \\
Interaction & 0.73 & \textbf{0.82} & \textbf{0.77} & 0.75 & 0.56 & 0.64 \\
PromoterDependence & 0.82 & 0.84 & 0.82 & 1.00 & 1.00 & 1.00 \\
PromoterOf & 0.79 & 0.78 & 0.76 & 1.00 & 1.00 & 1.00 \\
RegulonDependence & 0.97 & 0.99 & 0.98 & 1.00 & 1.00 & 1.00 \\
RegulonMember & 0.99 & \textbf{0.99} & \textbf{0.99} & 1.00 & 0.50 & 0.67 \\
SiteOf & 0.95 & \textbf{0.98} & \textbf{0.97} & 1.00 & 0.17 & 0.29 \\
TranscriptionBy & \textbf{0.60} & \textbf{0.98} & \textbf{0.75} & 0.67 & 0.50 & 0.57 \\
TranscriptionFrom & 0.99 & 0.99 & 0.99 & 1.00 & 1.00 & 1.00 \\
Average & 0.788 & \textbf{0.834} & \textbf{0.791} & 0.91 & 0.83 & 0.79 \\
\hline
\end{tabular}}{}
\end{table}
We compared our performance with that of the best method in the competition on the BioNLPST-BGI dataset with respect to each event type. As tabulated in Table~\ref{tab-bgi2011-perf-cmp}, VeComNet and Uturku's approach \cite{Bjorne2014,Bjorne2015} have their own merits. \rehl{VeComNet performs best on the ``Interaction'', ``RegulonMember'', ``SiteOf'', and ``TranscriptionBy'' events, with significant improvements in F-score (0.12, 0.32, 0.68, 0.4) over the best existing approach, and it is competitive on the ``RegulonDependence'' and ``TranscriptionFrom'' events.}
The performance of VeComNet on the other events is stable and impressive, which is why its average performance is better than that of Uturku's approach. \rehl{The compared method from Uturku appears to overfit the dataset, since it achieves the ideal F-score of 1.0 on most event types where our proposed method does not. Our method stands out from other approaches through its generalization ability rather than ideal F-scores.} However, the deep learning model adopted in VeComNet is limited by the number of training samples.
\begin{table}[!ht]
\caption{Performance comparison between VeComNet and other top-ranked methods \cite{Deleger2016} on BioNLPST-BB dataset \label{tab-bb2016-perf-cmp}} \centering {
\begin{tabular}{lccc}
\hline
Method & Precision & Recall & F-score \\
\hline
VeComNet & \textbf{0.89} & \textbf{0.96} & \textbf{0.92} \\
VERSE \cite{Lever2016} & 0.51 & 0.62 & 0.56 \\
TurkuNLP \cite{Mehryary2016} & 0.63 & 0.45 & 0.52 \\
LIMSI & 0.39 & 0.65 & 0.49 \\
HK & 0.60 & 0.39 & 0.47 \\
whunlpre & 0.56 & 0.41 & 0.47 \\
DUTIR \cite{Li2016} & 0.57 & 0.38 & 0.46 \\
WXU & 0.56 & 0.38 & 0.46 \\
\hline
\end{tabular}}{}
\end{table}
From Table~\ref{tab-bb2016-perf-cmp} we can observe that VeComNet is the strongest in single-event prediction. The fewer arguments and event types the detection task contains, the more powerful VeComNet is. Besides, VeComNet is a generic model that can be used in different event detection tasks without any tuning or modification. Its robustness and strong predictive power make VeComNet a promising model for biomedical event extraction.
\section{Case studies}
To reveal how our method works, we randomly picked some cases from the testing dataset for demonstration. The sample sentence `The expression of rsfA is under the control of both sigma(F) and sigma(G).' with ID `PMID-10629188-S5' in the testing dataset of BioNLPST-BGI has four recognized entities ($T_{1}$:`expression', $T_{2}$:`rsfA', $T_{3}$:`sigma(F)', $T_{4}$:`sigma(G)') and three events (ActionTarget: $[Action]T_{1}\longrightarrow[Target]T_{2}$, Interaction: $[Agent]T_{3}\longrightarrow[Target]T_{2}$, Interaction: $[Agent]T_{4}\longrightarrow[Target]T_{2}$) as ground-truth annotations. We obtained 11 argument models by fitting VecEntNet on the training dataset with the entity annotations. We further computed the argument embeddings for each possible pair of entities in both the training and testing datasets. For the above sample, some of the candidate pairs generated are $<T_{1}, T_{2}>, <T_{2}, T_{1}>, <T_{2}, T_{3}>, <T_{3}, T_{2}>, <T_{2}, T_{4}>, <T_{4}, T_{2}>, <T_{1}, T_{3}>, <T_{1}, T_{4}>$, and the argument models for the event type ActionTarget are $arg_{action}$ and $arg_{target}$. \rehl{We take these models as functions and the candidate pairs of entities as their input.} The argument embedding we obtained for $<T_{1}, T_{2}>$ is $<arg_{action}(T_{1}) \oplus arg_{target}(T_{1}), arg_{target}(T_{2}) \oplus arg_{action}(T_{2})>$. Since we do not know in advance which argument type an entity belongs to, we concatenated both argument embeddings for each entity and let VeComNet determine it. The argument embeddings for the other candidate entity pairs with respect to different event types are obtained in a similar way. We used the argument embeddings as the input of the VeComNet models.
The predicted labels for the aforementioned candidate entity pairs are $<1,1>,<1,0>,<0,0>,\ldots,<0,0>$ with respect to the ActionTarget event and $<0,0>,<0,0>,<1,0>,<1,1>,<1,0>,<1,1>,<0,0>,<0,0>$ with respect to the Interaction event, in which the first label indicates the existence of the corresponding event and the second label indicates whether the event points from the first entity to the second one. The binary labels were further post-processed to generate the predicted biomedical events. \rehl{For instance, the candidate pairs $<T_{1}, T_{2}>$ and $<T_{2}, T_{3}>$ are predicted as $<1,1>$ and $<1,0>$ for the ActionTarget and Interaction events, respectively. This means that there exist an ActionTarget event $T_{1} \longrightarrow T_{2}$ (expression-\textgreater{}rsfA) and an Interaction event $T_{3} \longrightarrow T_{2}$ (sigma(F)-\textgreater{}rsfA) in this sentence.}
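The decoding step just described can be sketched as follows (a hypothetical helper, not the authors' released code): a candidate pair with predicted labels $<e,d>$ yields an event only when $e=1$, and $d$ selects the direction.

```python
def decode_events(candidate_pairs, labels):
    """Turn (existence, direction) label pairs into directed events.

    candidate_pairs: list of (entity_a, entity_b) tuples
    labels: aligned list of (exists, a_to_b) binary tuples
    Returns a list of (source, target) directed events.
    """
    events = []
    for (a, b), (exists, a_to_b) in zip(candidate_pairs, labels):
        if not exists:
            continue  # label <0, .>: no event between this pair
        events.append((a, b) if a_to_b else (b, a))
    return events
```

For the sample above, the ActionTarget pair $<T_1,T_2>$ with label $<1,1>$ decodes to $T_1\longrightarrow T_2$, and the Interaction pair $<T_2,T_3>$ with label $<1,0>$ decodes to $T_3\longrightarrow T_2$.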
\section{Discussion}
For many years, scientific literature has served as the major outlet for novel discoveries and result dissemination. To extract useful knowledge from the literature for management and query, information extraction has been proposed to automate this process. Biomedical event extraction is particularly important because it can systematically organize knowledge into controlled representations such as directed knowledge graphs. However, the existing event detection methods are not satisfactory in performance because most of them are constrained to the trigger-based approach, which relies on lexical and syntactic features from dependency parsing. The quality of manual trigger annotation and the error propagation from trigger detection to event detection have limited progress for years.
In this study, we proposed a bottom-up event detection framework using deep learning techniques. We built an LSTM-based model VecEntNet to construct argument embeddings for each recognized entity. We further utilized compositional attributes of the argument vectors to train a directed event classifier VeComNet.
\rehl{LSTM and context embeddings have shown their applicability in several other NLP tasks. Our main contribution is the proposed framework for argument embedding using a bi-directional LSTM and the downstream directed event detection using a multi-output neural network. This strategy for event detection is proposed for the first time in this study. It overcomes the error propagation as well as the extra annotations of trigger-based approaches. Besides, the continuous space of argument embeddings significantly lessens the sensitivity of event detection. In addition, we developed our own loss functions for training the argument embedding with unbalanced data and for training the multi-output neural network for directed event detection. These are the key reasons why our method achieves outstanding performance. Broadly speaking, the proposed method is suitable for general event extraction by using pre-trained word embeddings in the specific area.}
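The exact loss functions are specific to our architecture; as an illustration of the weighting idea used to cope with unbalanced labels, a class-weighted binary cross-entropy can be written as follows (the weights `w_pos`, `w_neg` are hypothetical and would in practice be set, e.g., to inverse class frequencies):

```python
import math


def weighted_bce(y_true, p_pred, w_pos=1.0, w_neg=1.0, eps=1e-12):
    """Class-weighted binary cross-entropy, averaged over samples."""
    total = 0.0
    for t, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)  # clamp probabilities for numerical safety
        total += -(w_pos * t * math.log(p) + w_neg * (1 - t) * math.log(1.0 - p))
    return total / len(y_true)
```

Raising `w_pos` above 1 penalizes missed positives more heavily, which is the standard remedy when positive event labels are rare.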
Our method is not sensitive to the hyper-parameters and works well for a wide range of tasks. The experimental results indicate that the proposed method is competent in biomedical event extraction. In the future, we envision that it can fundamentally benefit related downstream tasks in biomedical text mining with broad impact.
\section*{Acknowledgements}
The authors
are grateful to the organizers of BioNLP Shared Task who provide the public annotated dataset. The authors would also like to thank Prashant Sridhar for his English proofreading. \vspace*{-12pt}
\section*{Funding}
The work described in this paper was substantially supported by three grants from the Research Grants Council of the Hong Kong Special Administrative Region [CityU 21200816], [CityU 11203217], and [CityU 11200218]. We acknowledge the donation support of the Titan Xp GPU from the NVIDIA Corporation. \vspace*{-12pt}
\bibliographystyle{unsrt}
\input{refs.bbl}
\end{document}
This work is devoted to the study of solitary wave solutions of the Whitham--Boussinesq system
\begin{equation}\label{eq:bdw}
\begin{aligned}
\partial_t\eta &=-K\partial_xu-\partial_x(\eta u)\\
\partial_tu &=-\partial_x\eta-u\partial_xu.
\end{aligned}
\end{equation}
A solitary wave is a solution of the form
\begin{align}\label{eq:travelling wave ansatz}
\eta(x,t)=\eta(x-ct),\ u(x,t)=u(x-ct),
\end{align}
such that \(\eta(x-ct),u(x-ct)\longrightarrow 0\) as \(|x-ct|\longrightarrow \infty\).
Here, $\eta$ denotes the surface elevation, \(u\) is the rightward velocity at the surface, and $K$ is a Fourier multiplier operator defined by
\begin{equation*}\label{eq:K}
\mathcal{F}(Kf)(k)=m(k)\hat{f}(k),
\end{equation*}
for all \(f\) in the Schwartz space \(\mathcal{S}(\mathbb{R})\). More specifically, we require that
\begin{itemize}
\item[(A1)] The symbol $m\in S_\infty^{m_0}(\mathbb{R})$ for some $m_0<0$, that is
\begin{align*}
|m^{(\alpha)}(k)|\leq C_\alpha(1+|k|)^{m_0-\alpha},\ \alpha\in \mathbb{N}_0.
\end{align*}
\item[(A2)] The symbol \(m:\mathbb{R}\rightarrow\mathbb{R}\) is even and satisfies \(m(0)>0\), $m(k)<m(0),\ \text{for}\ k\neq 0$ and
\begin{align*}
m(k)=m(0)+\frac{m^{(2j_*)}(0)}{(2j_*)!}k^{2j_*}+r(k),
\end{align*}
for some \(j_*\in \mathbb{N}_+\), where \(m^{(2j_*)}(0)<0\) and \(r(k)=\mathcal{O}(k^{2j_*+2})\) as \(k\rightarrow 0\).\\
\end{itemize}
As an example we have $m(k)=\tanh(k)k^{-1}$, which yields the bi-directional Whitham (BDW) system, and this choice of symbol is the main motivation for studying \eqref{eq:bdw}. The BDW system was formally derived in \cite{MR2991247,MR3390078} from the incompressible Euler equations to model fully dispersive shallow water waves whose propagation is allowed to be both left- and rightward, and appeared in \cite{MR3060183,MR3668593} as a full dispersion system in the Boussinesq regime with the dispersion of the water
waves system. There have been several investigations on the BDW system: local well-posedness \cite{2017arXiv170804551E,MR3763731} (in homogeneous Sobolev spaces at a positive background), a logarithmically cusped wave of greatest height \cite{Ehrnström2018}. There are also numerical results, investigating the validity of the BDW system as a model of waves on shallow water \cite{MR3844340}, numerical bifurcation and spectral stability \cite{doi:10.1111/sapm.12221} and the observation of dispersive shock waves
\cite{MR3523508}. However, there are so far no results on the existence of solitary wave solutions.
We also mention that one can include the effects of surface tension in the BDW system by choosing $m(k)=\tanh(k)k^{-1}(1+\beta k^2), \beta>0$. It was recently shown in \cite{2018arXiv180504372K} that \eqref{eq:bdw} is locally well-posed for this choice of symbol. However, the above symbol with $\beta>0$ is not included in the class of symbols considered in the present work.
Moreover, in \cite{MR3749383, DDK, ED}, other types of fully dispersive Whitham--Boussinesq systems are considered. We also mention the generalized class of Green--Naghdi equations introduced in \cite{MR3564304}, which was shown to possess solitary wave solutions in \cite{MR3841973}.
\section{Solitary wave solutions to the Whitham equation}
In order to prove existence of solitary wave solutions of \eqref{eq:bdw} our strategy will be to reduce this to a problem that is similar to one studied in \cite{EGW}. For this reason we first discuss the results and methods of that paper.
In \cite{EGW} the authors prove the existence of solitary wave solutions of the pseudodifferential equation
\begin{equation}\label{EGW-eq}
u_t+\big(Ku+\tilde{n}(u)\big)_x=0,
\end{equation}
where $K$ has the properties $(A1)$, $(A2)$ and the nonlinearity $\tilde{n}$ satisfies
\begin{itemize}
\item[(A3)] The nonlinearity $\tilde{n}$ is a twice continuously differentiable function $\mathbb{R}\rightarrow \mathbb{R}$ with
\begin{equation*}\label{egw}
\tilde{n}(x)=\tilde{n}_p(x)+\tilde{n}_r(x),
\end{equation*}
in which the leading order part of the nonlinearity takes the form $\tilde{n}_p(x)=c_p\abs{x}^p$ for some $c_p\neq 0$ and $p\in [2,4j_*+1)$ or $\tilde{n}_p(x)=c_px^p$ for some $c_p>0$ and odd integer $p$ in the range $p\in [2,4j_*+1)$, while
\begin{equation*}
\tilde{n}_r(x)=\mathcal{O}(\abs{x}^{p+\delta}),\quad \tilde{n}_r'(x)=\mathcal{O}(\abs{x}^{p+\delta-1})
\end{equation*}
for some $\delta>0$ as $x\rightarrow 0$.
\end{itemize}
In particular, the uni-directional Whitham equation, introduced in \cite{MR0671107}, belongs to this class of equations \eqref{EGW-eq}, with $m(k)=\sqrt{\tanh(k)k^{-1}}$. The Whitham equation possesses periodic travelling waves \cite{MR2555644} and solitary waves \cite{EGW}, moreover the solitary waves decay exponentially \cite{MR3603270}.
It was recently confirmed that the Whitham equation possesses a highest cusped wave \cite{2016arXiv160205384E}, as conjectured by Whitham.
Under the travelling wave ansatz: $u(t,x)=u(x-c t)$, the equation \eqref{EGW-eq} becomes
\begin{equation}\label{travellingwave-EGW}
Ku-c u+\tilde{n}(u)=0.
\end{equation}
The existence of solutions of \eqref{travellingwave-EGW} is established via a related minimization problem. Let
\begin{equation*}
\tilde{\mathcal{E}}(u)=-\frac{1}{2}\int_\mathbb{R}uKu\ \mathrm{d}x-\int_\mathbb{R}\tilde{N}(u)\ \mathrm{d}x,\quad \mathcal{I}(u)=\frac{1}{2}\int_\mathbb{R}u^2\ \mathrm{d}x
\end{equation*}
with
\begin{align*}
\tilde{N}(x)&=\tilde{N}_{p+1}(x)+\tilde{N}_r(x),\\
\tilde{N}_{p+1}(x)&=\int_0^x\tilde{n}_p(s)\ \mathrm{d}s=\frac{ c_px^{p+1}}{p+1}, \text{ or }\frac{ c_px\abs{x}^{p}}{p+1}, \\
\tilde{N}_r(x)=&\int_0^x\tilde{n}_r(s)\ \mathrm{d}s=\mathcal{O}(\abs{x}^{p+1+\delta}).
\end{align*}
Let $q,R>0$ and
\begin{equation*}
V_{q,R}:=\{u\in H^1(\mathbb{R}):\ \mathcal{I}(u)=q,\ \norm{u}_{H^1}<R\}.
\end{equation*}
Minimizers of $\tilde{\mathcal{E}}$ over $V_{q,R}$ (that are not on the boundary) satisfy the Euler-Lagrange equation
\begin{equation}\label{EL-egw}
\mathrm{d}\tilde{\mathcal{E}}(u)+\nu\mathrm{d}\mathcal{I}(u)=0,
\end{equation}
for a Lagrange multiplier $\nu$, and \eqref{EL-egw} is precisely \eqref{travellingwave-EGW}, with $c=\nu$. In \cite{EGW} the authors show that there exist solutions of the minimization problem
\[
\arg \underset{V_{q,R}} \inf \tilde{\mathcal{E}}(u),
\]
which by the above argument yields travelling wave solutions of \eqref{EGW-eq}.
The existence of minimizers is established using methods developed in \cite{Buffoni2004,MR2847283} and we give here a brief outline of the proof. The functional $\tilde{\mathcal{E}}$ is not coercive and since the domain is unbounded one cannot use the Rellich--Kondrachov theorem. In particular, direct methods cannot be used to obtain a minimizer. Because of this one needs to study a related penalized functional acting on periodic functions.
Let \(P>0\) and \(L_P^2\) be the space of \(P\)- periodic, locally square-integrable functions with Fourier-series representation
\[
w(x)=\frac{1}{\sqrt{P}}\sum_{k\in\mathbb{Z}}\widehat{w}(k)\exp(2\pi ikx/P),
\]
with
\[
\widehat{w}(k):=\frac{1}{\sqrt{P}}\int_{-\frac{P}{2}}^{\frac{P}{2}}w(x)\exp(-2\pi ikx/P)\,\mathrm{d} x.
\]
For \(s\geq0\), we define
\[
H^s_P:=\{w\in L^2_P:\ \|w\|_{H^s_P}<\infty\},
\]
where the norm is given by
\[
\|w\|_{H^s_P}:=\left(\sum_{k\in \mathbb{Z}}\bigg(1+\frac{4\pi^2k^2}{P^2}\bigg)^s|\widehat{w}(k)|^2 \right)^{\frac{1}{2}}.
\]
The authors \cite{EGW} studied the following penalized functional
\begin{equation*}
\tilde{\mathcal{E}}_{P,\varrho}(u):=\varrho(\norm{u}_{H_P^1}^2)+\tilde{\mathcal{E}}_P(u),
\end{equation*}
over the set
\begin{equation*}
V_{P,q,R}:=\{u\in H_P^1:\ \mathcal{I}_P(u)=q, \ \norm{u}_{H_P^1}<2R\},
\end{equation*}
where $\tilde{\mathcal{E}}_P, \ \tilde{\mathcal{I}}_P$ are the same functionals as $\tilde{\mathcal{E}},\ \tilde{\mathcal{I}}$ but with the integration taken over $[-P/2,P/2]$, and $\varrho:[0,(2R)^2]\mapsto [0,\infty)$ is a penalization function such that $\varrho(t)=0$ whenever $t\in[0,R^2]$ and $\varrho(t)\rightarrow \infty$ as $t\rightarrow (2R)^2$. The penalization function makes $\tilde{\mathcal{E}}_{P,\varrho}$ coercive, and the fact that we are now working in $H_P^1$ allows the use of the Rellich--Kondrachov theorem. It is then an easy task to show that there exists a minimizer $u_P\in V_{P,q,R}$ of $\tilde{\mathcal{E}}_{P,\varrho}$. The next step is to show that $u_P$ in fact minimizes $\tilde{\mathcal{E}}_P$ over $\{u\in H_P^1:\ \mathcal{I}_P(u)=q,\ \norm{u}_{H_P^1}<R\}$. This is immediate after showing that
\begin{equation*}\label{main-ineq}
\norm{u_P}_{H_P^1}^2\leq Cq,
\end{equation*}
and choosing $q$ sufficiently small. The other key ingredient of the proof is the concentration compactness theorem \cite{MR778974}. In the application of this theorem, the main task is to show that `dichotomy' does not occur. This is done using proof by contradiction, where the contradiction is arrived at using the strict subadditivity of
\begin{equation*}
I_q:=\underset{V_{q,R}}\inf\,\tilde{\mathcal{E}}(u),
\end{equation*}
as a function of $q$. The strict subadditivity of $I_q$ is established by using a special minimizing sequence for $\tilde{\mathcal{E}}$, constructed from the minimizers $u_P$. In addition it is necessary to decompose $u$ into high and low frequencies in order to get satisfactory estimates on $\norm{u}_{L^\infty}$, see \cite[Corollary 4.5]{EGW}.
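Explicitly, strict subadditivity here means the inequality
\begin{equation*}
I_{q_1+q_2}<I_{q_1}+I_{q_2}
\end{equation*}
for all $q_1,q_2>0$ with $q_1+q_2$ sufficiently small; in the dichotomy scenario a minimizing sequence would split into two parts carrying the masses $q_1$ and $q_2$, which is incompatible with this inequality.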
It is an easy task to show that `vanishing' cannot occur either. Therefore, from the concentration compactness theorem, `concentration' is the only possibility and the existence of minimizers then follows from a standard argument.
Under the additional assumption that
\begin{itemize}
\item[(A4)] $\tilde{n}\in C^{2j_*}(\mathbb{R})$ with
\[\tilde{n}_r^{(j)}(x)=\mathcal{O}(\abs{x}^{p+\delta-j}),\ j=0,\ldots,2j_*,\]
\end{itemize}
it is possible to relate the minimizers of $\tilde{\mathcal{E}}$ to those of $\tilde{\mathcal{E}}_{lw}$, where
\begin{equation*}
\tilde{\mathcal{E}}_{lw}(u)=-\int_\mathbb{R}\left(\frac{m^{(2j_*)}(0)}{2(2j_*)!}(u^{(j_*)})^2+ \tilde{N}_{p+1}(u)\right)\, \mathrm{d}x.
\end{equation*}
More specifically,
\begin{equation*}
\sup_{u\in \tilde{D}_q}\text{dist}_{H^{j_*}(\mathbb{R})}(S_{lw}^{-1}u,\tilde{D}_{lw})\rightarrow 0,\quad \text{ as }q\rightarrow 0,
\end{equation*}
where $\tilde{D}_{lw}$ is the set of minimizers of $\tilde{\mathcal{E}}_{lw}$ over the set
\[\{u\in H^{j_*}(\mathbb{R}):\ \mathcal{I}(u)=1\},\]
and \(\tilde{D}_q\) is the set of minimizers of $\tilde{\mathcal{E}}$ over $V_{q,R}$ and
\[(S_{lw}u)(x):=q^\alpha u(q^\beta x)\]
is the ``long-wave test function'' with
\begin{equation}\label{alpha-beta}
\alpha=\frac{2 j_*}{4j_*+1-p},\quad \beta=\frac{p-1}{4j_*+1-p}.
\end{equation}
The numbers $\alpha$ and $\beta$ are chosen in such a way that
\[2\alpha-\beta=1,\quad (p-1)\alpha=2j_*\beta.\]
This choice of $\alpha$ and $\beta$ appears naturally when deriving the long-wave approximation of \eqref{travellingwave-EGW}.
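Indeed, substituting $\beta=\frac{(p-1)\alpha}{2j_*}$ from the second equation into the first gives
\[
\alpha\Big(2-\frac{p-1}{2j_*}\Big)=1,\quad\text{that is,}\quad \alpha=\frac{2j_*}{4j_*+1-p},\qquad \beta=\frac{p-1}{4j_*+1-p},
\]
which is precisely \eqref{alpha-beta}.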
The functional $\tilde{\mathcal{E}}_{lw}$ is related to $\tilde{\mathcal{E}}$ via (see \cite[Lemma 3.2]{EGW})
\begin{equation*}
\tilde{\mathcal{E}}(S_{lw}u)=-qm(0)+q^{1+(p-1)\alpha}\tilde{\mathcal{E}}_{lw}(u)+o(q^{1+(p-1)\alpha}),
\end{equation*}
for any \(u\in W:=\{u\in H^{2j_*}(\mathbb{R}):\ \norm{u}_{H^{2j_*}}<S\}\) with \(S\) being a positive constant.
We also mention the recent work \cite{arXiv:1802.10040}, in which an entirely different approach is used to prove the existence of small-amplitude solitary wave solutions of the Whitham equation.
\section{Solitary wave solutions to the Whitham--Boussinesq system}\label{sec-bi-directional-whitham}
\subsection{Formulation as a constrained minimization problem}
In the present work we seek solitary wave solutions of \eqref{eq:bdw}, and the idea is to reformulate \eqref{eq:bdw} in such a way that the method of \cite{EGW} can be applied. Under the travelling wave ansatz \eqref{eq:travelling wave ansatz}, the system \eqref{eq:bdw} then becomes
\begin{align}
c\eta &=Ku+\eta u, \label{2}\\
cu &=\eta+\frac{u^2}{2} \label{3}.
\end{align}
It follows from \eqref{3} that \(\eta=u(c-\frac{u}{2})\), and if we insert this into \eqref{2} then we find that
\begin{equation}\label{4}
Ku-u(u-c)(\frac{u}{2}-c)=0.
\end{equation}
We first formally assume that \(\|u\|_{L^\infty}\ll c \) in order to reformulate \eqref{4} as a variational problem. This is no restriction, since the constructed solutions will automatically satisfy this smallness condition (see Theorem \ref{th:main}).
Let \(w=\frac{u}{c}(\frac{u}{c}-2)\), so that \(u=c-c\sqrt{1+w}\). The map \(w \mapsto u\) is well-defined, since
\begin{align*}
\norm{w}_{L^\infty}\leq \norm{\frac{u}{c}}_{L^\infty}\norm{\frac{u}{c}-2}_{L^\infty}\lesssim \norm{\frac{u}{c}}_{L^\infty}\ll1.
\end{align*}
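Indeed, $1+w=\big(\frac{u}{c}-1\big)^2$, and since $\norm{u}_{L^\infty}\ll c$ we have $\sqrt{1+w}=1-\frac{u}{c}$, which inverts to $u=c-c\sqrt{1+w}$.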
We may then rewrite equation \eqref{4} in terms of the new unknown \(w\) as
\begin{align}\label{vareq}
\frac{2}{\sqrt{1+w}}K(\sqrt{1+w}-1)-\lambda w=0,
\end{align}
with $\lambda=c^2$.
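To verify that \eqref{vareq} is equivalent to \eqref{4}, note that with $u=c-c\sqrt{1+w}$ one has $u-c=-c\sqrt{1+w}$ and $\frac{u}{2}-c=-\frac{c}{2}\big(1+\sqrt{1+w}\big)$, so that
\begin{align*}
u(u-c)\Big(\frac{u}{2}-c\Big)=\frac{c^3}{2}\sqrt{1+w}\,\big(1-\sqrt{1+w}\big)\big(1+\sqrt{1+w}\big)=-\frac{c^3}{2}\,w\sqrt{1+w}.
\end{align*}
Since $Ku=-cK(\sqrt{1+w}-1)$, equation \eqref{4} becomes
\begin{align*}
-cK(\sqrt{1+w}-1)+\frac{c^3}{2}\,w\sqrt{1+w}=0,
\end{align*}
and multiplying by $-\frac{2}{c\sqrt{1+w}}$ yields \eqref{vareq} with $\lambda=c^2$.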
We now define
\begin{equation*}
\mathcal{E}(w)=\underbrace{-\frac{1}{2}\int_\mathbb{R}wKw\ \mathrm{d}x}_{:= \mathcal{K}(w)}\underbrace{-\int_\mathbb{R}N(w)\ \mathrm{d}x}_{:=\mathcal{N}(w)},
\end{equation*}
where
\begin{align*}
N(w)=2\Psi(w)Kw+2\Psi(w)K(\Psi(w)),
\end{align*}
\begin{align*}
\Psi(w)=\sqrt{1+w}-1-\frac{w}{2}=-\frac{w^2}{8}+\Psi_r(w),
\end{align*}
\begin{align*}
\ \Psi_r(x)=\mathcal{O}(x^3).
\end{align*}
To extract the lower-order parts we also write
\begin{equation*}
N(w)=N_h(w)+N_l(w),
\end{equation*}
with
\begin{equation*}
N_h(w)=-\frac{w^2}{4}Kw,\quad N_l(w)=2\Psi_r(w)Kw+2\Psi(w)K(\Psi(w)).
\end{equation*}
We then note that
\[
\mathrm{d} \mathcal{E}(w)+\lambda \mathrm{d} \mathcal{I}(w)=0
\]
is precisely \eqref{vareq}. Hence, \(w\) is a critical point of \(\mathcal{E}\) under the constraint \(\mathcal{I}(w)=q\), if and only if \(u=c-c\sqrt{1+w}\) is a solution of \eqref{4}, with \(\lambda=c^2\).
We will find critical points of
\(\mathcal{E}(w)+\lambda \mathcal{I}(w)\) by considering the minimization problem
\[
\arg \underset{V_{q,R}} \inf \mathcal{E}(w).
\]
Here we are minimizing a functional $\mathcal{E}$ of almost the same type as in \cite{EGW}, with $p=2$, but with a slightly different nonlinearity. In our case, the nonlocal operator $K$ appears in the nonlinear term $N$. Since $K$ is a bounded smoothing operator, however, it is not hard to show that the methods of \cite{EGW} can be applied to the functional $\mathcal{E}$. Nevertheless, the results \cite[Lemma 2.3, Lemma 3.2, Lemma 3.3]{EGW} require a bit more care; in particular it is important to know how $\mathcal{N}$ acts under the long-wave scaling, and we therefore include the proofs of these results in the next subsection. We finally have the following existence result:
\begin{theorem}\label{th:main}
There exists \(q_*>0\) such that the following statements hold for each \(q\in(0,q_*)\).
(\(\romannumeral1\)) The set \(D_q\) of minimizers of \(\mathcal{E}\) over the set \(V_{q,R}\) is nonempty and the estimate \(\|w\|_{H^1(\mathbb{R})}^2=\mathcal{O}(q)\) holds uniformly over \(w\in D_q\). Each element of \(D_q\) is a solution of the
travelling wave equation \eqref{vareq}; the squared wave speed \(c^2\) is the Lagrange multiplier in this constrained
variational principle.
(\(\romannumeral2\)) Let \(s<1\) and suppose that \(\{w_n\}_{n\in\mathbb{N}_0}\) is a minimizing sequence for \(\mathcal{E}\) over \(V_{q,R}\).
There exists a sequence \(\{x_n\}_{n\in\mathbb{N}_0}\) of real numbers such that a subsequence of \(\{w_n(\cdot+x_n)\}_{n\in\mathbb{N}_0}\) converges in \(H^s(\mathbb{R})\) to a function in \(D_q\).
\end{theorem}
\subsection{Technical results}
In our case the long-wave functional $\mathcal{E}_{lw}$ is given by
\begin{align*}
\mathcal{E}_{lw}(w):=-\int_\mathbb{R}\left(\frac{m^{(2j_*)}(0)}{2(2j_*)!}(w^{(j_*)})^2-\frac{m(0)}{4}w^3\right)\mathrm{d} x,
\end{align*}
and we also recall the long-wave scaling:
\[S_{lw}w(x)=q^\alpha w(q^\beta x),\]
with
\begin{equation}\label{alphabeta_specialcase}
\alpha=\frac{2j_*}{4j_*-1} \quad \text{and} \quad \beta=\frac{1}{4j_*-1}.
\end{equation}
Note that \eqref{alphabeta_specialcase} is a special case of \eqref{alpha-beta}, with $p=2$.
We first present a result corresponding to \cite[Lemma 3.2]{EGW}, which relates $\mathcal{E}$ with $\mathcal{E}_{lw}$.
\begin{lemma}\label{le:long wave} Let $w\in W$ with $\norm{w}_{L^\infty}\ll 1$ and $\mathcal{I}(w)=1$. Then
\begin{align}\label{eq:long wave test}
\mathcal{E}(S_{lw}w)=-q m(0)+q^{1+\alpha}\mathcal{E}_{lw}(w)+o(q^{1+\alpha}).
\end{align}
\end{lemma}
\begin{proof} Recall the definition
\begin{equation*}
\mathcal{E}(S_{lw}w)=\mathcal{K}(S_{lw}w)+\mathcal{N}(S_{lw}w).
\end{equation*}
We first calculate that
\begin{align*}
&\mathcal{K}(S_{lw}w)\\
&=-\frac{1}{2}\int_{\mathbb{R}}q^{2\alpha}w(q^\beta x)Kw(q^\beta \cdot)(x)\, \mathrm{d}x\\
&=-\frac{1}{2}\int_\mathbb{R}q^{2\alpha}m(k)\abs{\mathcal{F}(w(q^\beta\cdot))(k)}^2\, \mathrm{d}k\\
&=-\frac{1}{2}\int_\mathbb{R}q^{2\alpha-\beta}\left(m(0)+q^{2j_*\beta}\frac{m^{(2j_*)}(0)}{(2j_*)!}k^{2j_*}+r(q^\beta k)\right)\abs{\hat{w}(k)}^2\ \mathrm{d}k\\
&=-q m(0)-q^{2\alpha+(2j_*-1)\beta}\int_\mathbb{R}\frac{m^{(2j_*)}(0)}{2(2j_*)!}(w^{(j_*)})^2\ \mathrm{d}x-\frac{q^{2\alpha-\beta}}{2}\int_\mathbb{R}r(q^\beta k)\abs{\hat{w}(k)}^2\, \mathrm{d}k,
\end{align*}
and the last term may further be estimated as
\begin{equation*}
\abs{\frac{q^{2\alpha-\beta}}{2}\int_\mathbb{R}r(q^\beta k)\abs{\hat{w}(k)}^2\, \mathrm{d}k}\lesssim q^{2\alpha+(2j_*+1)\beta}\int_\mathbb{R}k^{2j_*+2}\abs{\hat{w}(k)}^2\, \mathrm{d}k,
\end{equation*}
and $\int_\mathbb{R}k^{2j_*+2}\abs{\hat{w}(k)}^2\ \mathrm{d}k$ is uniformly bounded, since $w\in W$. We next consider
\begin{equation*}
\mathcal{N}(S_{lw}w)=-\int_\mathbb{R}N_h(S_{lw}w)+N_l(S_{lw}w)\, \mathrm{d}x.
\end{equation*}
A direct calculation shows that
\begin{align*}
&-\int_\mathbb{R}N_h(S_{lw}w)\ \mathrm{d}x=\int_{\mathbb{R}}\frac{q^{3\alpha}}{4}w^2(q^\beta x)Kw(q^\beta\cdot)(x)\ \mathrm{d}x\\
&=\int_\mathbb{R}\frac{q^{3\alpha-\beta}}{4}\overline{\mathcal{F}(w^2)(k)}\hat{w}(k)\left(m(0)+q^{2j_*\beta}\frac{m^{(2j_*)}(0)}{(2j_*)!}k^{2j_*}+r(q^\beta k)\right)\, \mathrm{d}k\\
&=q^{3\alpha-\beta}\int_{\mathbb{R}}\frac{m(0)}{4}w^3\, \mathrm{d}x+o(q^{3\alpha-\beta}),
\end{align*}
where we again used that $w\in W$ in order to estimate the remaining terms.
The term $\int_\mathbb{R}N_l(S_{lw}w)\ \mathrm{d}x$ is of lower order and can be estimated in the same way.
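For the bookkeeping of exponents, note that by \eqref{alphabeta_specialcase} we have $2j_*\beta=\alpha$ and $2\alpha-\beta=1$, so that
\[
2\alpha+(2j_*-1)\beta=(2\alpha-\beta)+2j_*\beta=1+\alpha,\qquad 3\alpha-\beta=(2\alpha-\beta)+\alpha=1+\alpha,
\]
and both contributions indeed enter at order $q^{1+\alpha}$.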
Combining all the above estimates yields the identity \eqref{eq:long wave test}.
\end{proof}
We next move to the corresponding approximation result of \cite{EGW}.
\begin{lemma}\label{lemma:approximate}
Let
\begin{align*}
\mathcal{K}_P(w)=-\frac{1}{2}\int_{-\frac{P}{2}}^{\frac{P}{2}}wKw\ \mathrm{d}x,\quad \mathcal{N}_P(w)=-\int_{-\frac{P}{2}}^{\frac{P}{2}}N(w)\ \mathrm{d}x,
\end{align*}
\begin{align*}
\ \mathcal{E}_P(w)=\mathcal{K}_P(w)+\mathcal{N}_P(w),
\end{align*}
and let $\{\tilde{w}_P\}$ be a bounded family of functions in $H^1(\mathbb{R})$ with $\norm{\tilde{w}_P}_{L^\infty(\mathbb{R})}\ll1$ such that
\begin{equation*}
\mathrm{supp}(\tilde{w}_P)\subset (-\frac{P}{2},\frac{P}{2}) \quad \mathrm{and}\quad \mathrm{dist}\big(\pm \frac{P}{2},\mathrm{supp}(\tilde{w}_P)\big)\geq\frac{1}{2}P^\frac{1}{4},
\end{equation*}
and define $w_P\in H_P^1$ by the formula
\[w_P=\sum_{j\in\mathbb{Z}}\tilde{w}_P(\cdot +jP).\]
\begin{itemize}
\item[(i)] The function $w_P$ satisfies
\begin{equation*}
\lim_{P\rightarrow \infty}\norm{K\tilde{w}_P-Kw_P}_{H^1(-\frac{P}{2},\frac{P}{2})}=0,\quad \lim_{P\rightarrow \infty}\norm{K\tilde{w}_P}_{H^1(\abs{x}>\frac{P}{2})}=0.
\end{equation*}
\item[(ii)] The functionals $\mathcal{E}$, $\mathcal{I}$ and $\mathcal{E}_P$, $\mathcal{I}_P$ have the properties that
\begin{equation*}
\lim_{P\rightarrow \infty}\big(\mathcal{E}(\tilde{w}_P)-\mathcal{E}_P(w_P)\big)=0,\quad \mathcal{I}(\tilde{w}_P)=\mathcal{I}_P(w_P),
\end{equation*}
and
\begin{align*}
&\lim_{P\rightarrow \infty}\norm{\mathcal{E}'(\tilde{w}_P)-\mathcal{E}'_P(w_P)}_{H^1(-\frac{P}{2},\frac{P}{2})}=0, &&\lim_{P\rightarrow \infty}\norm{\mathcal{E}'(\tilde{w}_P)}_{H^1(\abs{x}>\frac{P}{2})}=0,\\
&\norm{\mathcal{I}'(\tilde{w}_P)-\mathcal{I}'_P(w_P)}_{H^1(-\frac{P}{2},\frac{P}{2})}=0,&& \norm{\mathcal{I}'(\tilde{w}_P)}_{H^1(\abs{x}>\frac{P}{2})}=0.
\end{align*}
\end{itemize}
\end{lemma}
To prove Lemma \ref{lemma:approximate}, we need the following technical result of \cite[Proposition 2.1]{EGW}.
\begin{proposition}\label{proposition:technical} The linear operator $K$ satisfies\\
(a) $K$ belongs to \(C^\infty(H^s(\mathbb{R}),H^{s+|m_0|}(\mathbb{R}))\) \(\cap\) \(C^\infty(\mathcal{S}(\mathbb{R}),\mathcal{S}(\mathbb{R}))\) for each \(s\geq0\). \\
(b) For each \(l\in \mathbb{N}\) there exists a constant \(C_l=C(\|m^{(l)}\|_{L^2(\mathbb{R})})>0\) such that
\[
|Kf(x)|\leq \frac{C_l\|f\|_{L^2}}{ \mathrm{dist}\big(x,\supp(f)\big)^l},\quad x\in \mathbb{R}\setminus \supp (f),
\]
for all \(f\in L^2_c(\mathbb{R})\).
\end{proposition}
\begin{proof}[Proof of Lemma~\ref{lemma:approximate}]
The limits in (\(\romannumeral1\)) are proved in \cite[Proposition 2.1]{EGW}, so we turn to (\(\romannumeral2\)). Using (\(\romannumeral1\)) we get that $\mathcal{K}(\tilde{w}_P)-\mathcal{K}_P(w_P)\rightarrow 0$, as $P\rightarrow \infty$. Note that
\begin{equation}\label{estimate:N, step 1}
\begin{aligned}
\mathcal{N}(\tilde{w}_P)&=-2\int_\mathbb{R}\Psi(\tilde{w}_P)K\tilde{w}_P+\Psi(\tilde{w}_P)K(\Psi(\tilde{w}_P))\ \mathrm{d}x\\
&=-2\int_{-\frac{P}{2}}^{\frac{P}{2}}\Psi(w_P)K\tilde{w}_P+\Psi(w_P)K(\Psi(\tilde{w}_P))\ \mathrm{d}x\\
&=-2\int_{-\frac{P}{2}}^\frac{P}{2}\Psi(w_P)K(\tilde{w}_P-w_P)+\Psi(w_P)K\big(\Psi(\tilde{w}_P)-\Psi(w_P)\big)\ \mathrm{d}x\\
&\quad+\mathcal{N}_P(w_P).
\end{aligned}
\end{equation}
In light of (\(\romannumeral1\)) we have
\begin{equation}\label{estimate:N, step 2}
\begin{aligned}
&\bigg|\int_{-\frac{P}{2}}^{\frac{P}{2}}\Psi(w_P)K(\tilde{w}_P-w_P)\ \mathrm{d}x\bigg|\\
&\leq \norm{\Psi(w_P)}_{L^2(-\frac{P}{2},\frac{P}{2})}\norm{K(\tilde{w}_P-w_P)}_{L^2(-\frac{P}{2},\frac{P}{2})}\rightarrow 0, \quad \text{as}\ P\longrightarrow \infty.
\end{aligned}
\end{equation}
Since $\norm{\tilde{w}_P}_{L^\infty}\ll1$, we have $\norm{w_P}_{L^\infty}\ll1$.
To estimate the second term on the right hand side of \eqref{estimate:N, step 1}, one first calculates
\begin{align*}
\Psi(\tilde{w}_P)-\Psi(w_P)&=\sqrt{1+\tilde{w}_P}-\sqrt{1+\sum_{j\in\mathbb{Z}}\tilde{w}_P(\cdot+jP)}+\frac{1}{2}\sum_{\abs{j}\geq 1}\tilde{w}_P(\cdot +jP)\\
&=-\frac{\sum_{\abs{j}\geq 1}\tilde{w}_P(\cdot +jP)}{\sqrt{1+\tilde{w}_P}+\sqrt{1+w_P}}+\frac{1}{2}\sum_{\abs{j}\geq 1}\tilde{w}_P(\cdot +jP)\\
&=\left(\frac{1}{2}-\frac{1}{\sqrt{1+\tilde{w}_P}+\sqrt{1+w_P}}\right)\sum_{\abs{j}\geq 1}\tilde{w}_P(\cdot +jP),
\end{align*}
and then applies Proposition \ref{proposition:technical} to get
\begin{equation}\label{estimate:N, step 3}
\begin{aligned}
&\int_{-\frac{P}{2}}^\frac{P}{2}\big|K\big(\Psi(\tilde{w}_P)-\Psi(w_P)\big)\big|^2\, \mathrm{d}x\\
&\leq \int_{-\frac{P}{2}}^\frac{P}{2}\bigg|\sum_{\abs{j}\geq 1}K\left[\tilde{w}_P(\cdot +jP)\big(\frac{1}{2}-\frac{1}{\sqrt{1+\tilde{w}_P}+\sqrt{1+w_P}}\big)\right]\bigg|^2\, \mathrm{d}x\\
&\lesssim \int_{-\frac{P}{2}}^\frac{P}{2}\left(\sum_{\abs{j}\geq 1}\frac{\norm{\tilde{w}_P(\cdot +jP)\left(\frac{1}{2}-\frac{1}{\sqrt{1+\tilde{w}_P}+\sqrt{1+w_P}}\right)}_{L^2(-\frac{P}{2},\frac{P}{2})}}{\text{dist}\big(x+jP,\text{supp}(\tilde{w}_P)\big)^3}\right)^2\, \mathrm{d}x\\
&\lesssim \|\tilde{w}_P\|_{L^2}^2\int_{-\frac{P}{2}}^\frac{P}{2}\big(\sum_{\abs{j}\geq 1}\frac{1}{(jP+\frac{1}{2}P^{\frac{1}{4}})^3}\big)^2\, \mathrm{d}x\\
&\rightarrow 0, \quad \text{as}\ P\longrightarrow \infty.
\end{aligned}
\end{equation}
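Since the integrand in the last bound is independent of $x$, the integral merely contributes a factor $P$, so the whole expression decays roughly like $P^{-5}$. As an illustrative numerical sanity check (a side remark, not part of the proof), one can tabulate this bound for increasing $P$:

```python
# Evaluate P * ( sum_{|j|>=1} 1/(|j| P + P^{1/4}/2)^3 )^2, the x-independent
# bound appearing in the estimate; it should decay roughly like P^{-5}.
def tail_bound(P, jmax=10**5):
    s = sum(1.0 / (j * P + 0.5 * P**0.25) ** 3 for j in range(1, jmax))
    return P * (2.0 * s) ** 2  # factor 2 accounts for j and -j

vals = [tail_bound(P) for P in (10.0, 100.0, 1000.0)]
print(vals)  # strictly decreasing; the last entry is far below 1e-10
```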
Hence we obtain
\begin{equation}\label{estimate:N, step 4}
\begin{aligned}
&\bigg|\int_{-\frac{P}{2}}^\frac{P}{2}\Psi(w_P)K\big(\Psi(\tilde{w}_P)-\Psi(w_P)\big)\ \mathrm{d}x\bigg|\\
&\leq\norm{\Psi(w_P)}_{L^2(-\frac{P}{2},\frac{P}{2})}\norm{K\big(\Psi(\tilde{w}_P)-\Psi(w_P)\big)}_{L^2(-\frac{P}{2},\frac{P}{2})}\rightarrow 0, \quad \text{as}\ P\longrightarrow \infty.
\end{aligned}
\end{equation}
From \eqref{estimate:N, step 1}, \eqref{estimate:N, step 2} and \eqref{estimate:N, step 4}, it follows that $\mathcal{N}(\tilde{w}_P)-\mathcal{N}_P(w_P)\rightarrow 0$, which in turn implies that
\[\mathcal{E}(\tilde{w}_P)-\mathcal{E}_P(w_P)\rightarrow 0, \quad \text{as}\ P\longrightarrow \infty.\]
The equality \(\mathcal{I}(\tilde{w}_P)=\mathcal{I}_P(w_P)\) is immediate.
A direct calculation yields
\begin{equation*}
\mathcal{N}'(w)=-\left(\frac{1}{\sqrt{1+w}}-1\right)Kw-\frac{2}{\sqrt{1+w}}K(\Psi(w)),
\end{equation*}
so we may estimate
\begin{align*}
&\norm{\mathcal{N}'(\tilde{w}_P)-\mathcal{N}'_P(w_P)}_{L^2(-\frac{P}{2},\frac{P}{2})}\\
&\leq \norm{\left(\frac{1}{\sqrt{1+w_P}}-1\right)(K\tilde{w}_P-Kw_P)}_{L^2(-\frac{P}{2},\frac{P}{2})}\\
&\quad +\norm{\frac{2}{\sqrt{1+w_P}}K\big(\Psi(\tilde{w}_P)-\Psi(w_P)\big)}_{L^2(-\frac{P}{2},\frac{P}{2})}\\
&\rightarrow 0, \quad \text{as}\ P\longrightarrow \infty,
\end{align*}
where we have used (\(\romannumeral1\)) and \eqref{estimate:N, step 3}. One can similarly show that
\begin{equation*}
\norm{\frac{\mathrm{d}}{\mathrm{d} x}\mathcal{N}'(\tilde{w}_P)-\frac{\mathrm{d}}{\mathrm{d} x}\mathcal{N}'_P(w_P)}_{L^2(-\frac{P}{2},\frac{P}{2})}\rightarrow 0,\quad \text{as}\ P\rightarrow \infty.
\end{equation*}
Hence
\begin{equation*}
\norm{\mathcal{E}'(\tilde{w}_P)-\mathcal{E}'_P(w_P)}_{H^1(-\frac{P}{2},\frac{P}{2})}\rightarrow 0,\quad \text{as}\ P\rightarrow \infty.
\end{equation*}
Noting that $\frac{1}{\sqrt{1+\tilde{w}_P}}-1=0$ for $\abs{x}>\frac{P}{2}$, we calculate
\begin{align*}
&\norm{\mathcal{N}'(\tilde{w}_P)}_{L^2(\abs{x}>\frac{P}{2})}\\
&=\norm{\left(\frac{1}{\sqrt{1+\tilde{w}_P}}-1\right)K\tilde{w}_P+\frac{2}{\sqrt{1+\tilde{w}_P}}K(\Psi(\tilde{w}_P))}_{L^2(\abs{x}>\frac{P}{2})}\\
&=\norm{\frac{2}{\sqrt{1+\tilde{w}_P}}K(\Psi(\tilde{w}_P))}_{L^2(\abs{x}>\frac{P}{2})}.
\end{align*}
Since $\supp(\Psi(\tilde{w}_P))=\supp(\tilde{w}_P)$, we have \(\norm{K(\Psi(\tilde{w}_P))}_{L^2(\abs{x}>\frac{P}{2})}\rightarrow 0\).
It follows that
\[\norm{\mathcal{N}'(\tilde{w}_P)}_{L^2(\abs{x}>\frac{P}{2})}\rightarrow 0,\quad \text{as}\ P\rightarrow \infty.\]
A similar calculation shows that
\[\norm{\frac{\mathrm{d}}{\mathrm{d} x}\mathcal{N}'(\tilde{w}_P)}_{L^2(\abs{x}>\frac{P}{2})}\rightarrow 0.\]
Consequently, we have \[\norm{\mathcal{N}'(\tilde{w}_P)}_{H^1(\abs{x}>\frac{P}{2})}\rightarrow 0, \quad \text{as}\ P\rightarrow \infty.\]
\end{proof}
Just as in \cite[Theorem 6.3]{EGW} we can relate the minimizers of $\mathcal{E}$ with those of $\mathcal{E}_{lw}$:
\begin{equation*}
\sup_{w\in D_q}\text{dist}_{H^{j_*}(\mathbb{R})}(S_{lw}^{-1}w,D_{lw})\rightarrow 0,\quad \text{ as }q\rightarrow 0,
\end{equation*}
where $D_{lw}$ is the set of minimizers of $\mathcal{E}_{lw}$ over the set
\[\{w\in H^{j_*}(\mathbb{R}):\ \mathcal{I}(w)=1\},\]
and $D_q$ is the set of minimizers of $\mathcal{E}$ over $V_{q,R}$.
We finally include a regularity result for the travelling wave solutions of \eqref{vareq} which corresponds to \cite[Lemma 2.3]{EGW}.
\begin{lemma}\label{le:regularity} Let \(w\) be a solution of \eqref{vareq} with $\norm{w}_{L^\infty}\ll1$. Then for any \(k\in \mathbb{N}_+\), \(w\in H^{k}\) and satisfies
\[
\|w\|_{H^k}\leq C(k,\|w\|_{H^1}).
\]
\end{lemma}
\begin{proof} Let
\[
f=\sqrt{1+w}-1,
\]
then one has \(\|f\|_{L^\infty}\ll1\) due to $\norm{w}_{L^\infty}\ll1$. In view of \eqref{vareq}, \(f\) solves
\begin{equation}\label{eqf}
f=\frac{2}{\lambda (1+f)(2+f)}Kf.
\end{equation}
Differentiating in \eqref{eqf} yields
\begin{align}\label{eq: direvative}
\partial_xf=\frac{2}{\lambda [(1+f)(2+f)+f(2+f)+f(1+f)]}K\partial_xf.
\end{align}
The denominator is positive due to \(\|f\|_{L^\infty}\ll1\).
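The bracket in the denominator is precisely the factor produced by differentiating the product $f(1+f)(2+f)$; the following symbolic sketch (a side check, not part of the argument) verifies this identity:

```python
# Verify d/dx [ f(1+f)(2+f) ] = [ (1+f)(2+f) + f(2+f) + f(1+f) ] * f'(x).
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

lhs = sp.diff(f * (1 + f) * (2 + f), x)
bracket = (1 + f) * (2 + f) + f * (2 + f) + f * (1 + f)
assert sp.expand(lhs - bracket * sp.diff(f, x)) == 0
print(sp.expand(bracket))  # the bracket equals 3 f^2 + 6 f + 2
```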
Let \(l\in \{1,2,\cdots,k\}\). For each fixed \(f\in H^l\) we define a multiplication operator \(\phi_f\) by
\[
\phi_f(g)=\frac{2}{\lambda [(1+f)(2+f)+f(2+f)+f(1+f)]}g.
\]
One may now follow the argument in \cite[Lemma 2.3]{EGW} by using the properties of \(\phi_f\) and \(K\) to show
\begin{align*}
\|\partial_xf\|_{H^l}\leq C(\|f\|_{H^1})\|\partial_xf\|_{L^2}.
\end{align*}
For completeness, we give its proof here. For any \(s\in [0,l]\), it is easy to see that \(\phi_f\) and \(K\) define an operator in \(B(H^s,H^s)\) and \(B(H^s,H^{s+|m_0|})\), respectively. Thus the composition
\[
\psi_f=\phi_f\circ K\in B(H^s,H^{s_*}),\quad s_*=\min\{l,s+|m_0|\},
\]
and the norm of \(\psi_f\) depends upon \(\|f\|_{H^l}\).
Consequently, any solution \(g\) of \(g=\psi_f(g)\) belongs to \(H^{s_*}\) and satisfies
\[
\|g\|_{H^{s_*}}\leq C_{l,\|f\|_{H^l}}\|g\|_{H^s}.
\]
Applying this argument recursively, one finds that any solution \(g\in L^2\) belongs to \(H^l\) and
satisfies
\[
\|g\|_{H^l}\leq C(l,\|f\|_{H^l})\|g\|_{L^2}.
\]
Since \eqref{eq: direvative} is equivalent to \(\partial_xf=\psi_f(\partial_xf)\), a bootstrap argument
shows that \(f^{\prime}\in H^l\) with
\[
\|\partial_xf\|_{H^l}\leq C(l,\|f\|_{H^1})\|\partial_xf\|_{L^2},\ l=1,2,\cdots,k.
\]
So far we have shown that
\[
\|f\|_{H^k}\leq C(k,\|f\|_{H^1}).
\]
Finally, recalling that \(w=f^2+2f\) and that \(H^k\) is an algebra, we obtain
\[
\|w\|_{H^k}\leq C(k,\|f\|_{H^1})\leq C(k,\|w\|_{H^1}),
\]
where we have used \(\|w\|_{L^\infty}\ll1\) in the last inequality.
\end{proof}
\begin{remark}
The results of the present work may be extended to a more general class of nonlinearities $N$. On the one hand, the leading-order part of $N$ is cubic here, but the analysis could be extended to higher-power nonlinearities.
On the other hand, the multiplier operator $K$ appearing in $N$ can be replaced by an operator $K'$ belonging to a wider class of Fourier multipliers. For instance, it is not necessary for the symbol of this $K'$ to be of negative order. An example is $K'=\text{Id}$, which yields the nonlinearities studied in \cite{EGW}.
\end{remark}
\section{Acknowledgment} Both authors would like to thank M. Ehrnstr{\"o}m and E. Wahl{\'e}n for suggesting this topic.
\bibliographystyle{siam}
\section{Introduction}
The Smithsonian/NASA Astrophysics Data System (ADS) provides access to
the astronomy and physics literature. As of September 2006 it
provides a search system for almost 4.9 million records, covering most
of the astronomical literature (including planetary sciences and solar
physics) and a large part of the physics literature. The ADS has been
described in detail in a series of articles in Astronomy and
Astrophysics Supplements
\citep{2000A&AS..143...41K,2000A&AS..143...61E,2000A&AS..143...85A,
2000A&AS..143..111G}.
Since 1994, the Astrophysics Data System (ADS) has scanned the
scholarly literature in astronomy. As of September 2006, we have
scanned over 3.3 million pages. These articles are available free of
charge world-wide.
In order to make this resource even more accessible, we have used
Optical Character Recognition (OCR) software to obtain the text of all
the scanned pages in ASCII form. This allows us to index the full text
of all articles and make it searchable.
This search system covers the astronomical literature that was
published only on paper. In order to search the literature published
in electronic form, we developed a system that sends queries to
the search systems of several publishers. The results of these
queries are then combined with the results of the ADS internal queries
to seamlessly cover the majority of the literature.
This article describes some of the features of the search system for
the full text of the astronomical literature.
\section{Current Data in the ADS}
We have so far scanned about 3.3 million pages from 43 journals, 15
conference series and 67 individual conferences, as well as a
significant number of historical observatory publications. The
scanned pages as of September 2006 use about 600 GB of disk space, the
OCRd text uses 72 GB. The OCRd text is so-called ``dirty OCR''
because it has not been checked manually and it contains significant
numbers of errors. This means that this text cannot, for instance, be
used to extract numerical data from tables, since it would be inaccurate.
However, for searching the text for specific words, this ``dirty OCR''
is good enough. Significant words are usually used more than once, so
even if the OCR software made a mistake in recognizing a word once, it
will still show up correctly in other places of the same article.
Indexing of the OCRd text proved to be challenging. The number of
unique words from this text is large. One reason for the large
number of words is the fact that mistakes during the OCR process
create new misspelled words. To reduce this problem, we remove words
that have spurious characters in them that are OCR errors.
But even after removing such words, as well as other unusable words
like numbers, there are 14 million unique words in the index. The
files produced during indexing are large, the largest being about 3.7
GB, close to the limit of 32 bit addressing. There is still some room
for growth, but eventually we will have to move to 64 bit addressing
for the full text search system in the ADS.
\section{Search Forms}
There are two search forms available. The basic search form allows
you to enter the search term(s) and select which search systems to
query. The search terms are combined with AND, meaning that all
search words must be present on a page in order to be selected. The
system supports phrase searching when multiple words are enclosed in
double quotes. By default, synonym replacement is enabled. This
means that the system not only searches for a specified word
but also for all other words that have the same meaning. Synonym
replacement can be turned off for individual words by prepending a
'='. This will search for the exact spelling of the word, which can
be useful for words that have synonyms that are very common and would
produce many matches. For instance ``galaxy'' is a synonym for
``extragalactic''. A search for ``=extragalactic'' will remove
``galaxy'' from the matches.
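To make the query syntax concrete, the rules above (double quotes group a phrase; a leading '=' disables synonym replacement for a word) can be sketched as a small tokenizer. The actual ADS parser is not public, so the function below is only a hypothetical illustration of these rules:

```python
import re

def parse_query(query):
    """Split a query into (term, is_phrase, use_synonyms) tuples:
    "..." groups a phrase, and a leading '=' turns synonym
    replacement off for that single word."""
    tokens = []
    for phrase, word in re.findall(r'"([^"]+)"|(\S+)', query):
        if phrase:
            tokens.append((phrase, True, True))
        elif word.startswith('='):
            tokens.append((word[1:], False, False))
        else:
            tokens.append((word, False, True))
    return tokens

print(parse_query('"critical mass" =extragalactic planet'))
# [('critical mass', True, True), ('extragalactic', False, False), ('planet', False, True)]
```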
The advanced search form allows in addition the selection of a
publication date range and a journal. It also allows the selection of
several sort options. One important sort option is ``oldest first''.
This allows you to find the first occurrence of a word or phrase in the
literature (see Section~\ref{ex}, Example Usage).
\section{Returned Data}
The search returns a list of articles that contain the search terms.
Under each article it lists each page individually that contains the
search terms. It includes a partial sentence around the search terms,
with the search term highlighted in red. For pages that are not in
articles (cover pages, various other material, and pages from issues
where we don't have the pagination information), the pages are listed
individually. The article information links back to the regular ADS
abstract service, the page information links directly to the scanned
page.
\section{Combined Searches}
In order to include the more recent literature that was published in
electronic form, the user can select to include one or more external
search systems in the query. The external search systems are queried
in parallel. As results are returned from the external systems, they
are displayed to the user. Once all results are available, a final
combination of all the results is compiled and displayed. The search
fan-out is still experimental. It is not yet very stable since none
of the external systems provide a dedicated interface for such
external queries. It was implemented by simulating regular user
queries to these systems. This makes our fan-out system vulnerable to
changes in the external search systems. If an API (Application
Programming Interface) becomes available for any of the external
systems, we will implement it to build a more stable system.
We currently query the systems listed in table~\ref{ss}.
\begin{table}[!ht]
\caption{External Search Systems}
\label{ss}
\smallskip
\begin{center}
{\small
\begin{tabular}{ll}
\tableline
\noalign{\smallskip}
Search System & Journals searched\\
\noalign{\smallskip}
\tableline
\noalign{\smallskip}
Google Scholar & Monthly Notices of the Royal Astronomical Society\\
& Annual Review of Astronomy and Astrophysics\\
& Annual Review of Earth and Planetary Sciences\\
& Applied Optics\\
& Journal of the Optical Society of America\\
University of Chicago Press & Astronomical Journal\\
& Astrophysical Journal\\
& Astrophysical Journal Letters\\
& Astrophysical Journal Supplement\\
& Publications of the Astronomical Society\\
& of the Pacific\\
EDP Sciences & Astronomy and Astrophysics\\
Nature & Nature\\
& Nature Physics\\
& Nature Physical Science\\
National Academy of Science & Proceedings of the National Academy of Science\\
\noalign{\smallskip}
\tableline
\end{tabular}
}
\end{center}
\end{table}
\section{\label{ex} Example Usage}
Using the full text search system is different from using the abstract
search system in the ADS. Since there are so many more words in the
full text, there are usually many more matches. It is therefore
generally advisable to use more unique words, more search terms, and/or
phrases.
For instance if you are trying to find out when the concept of a
critical mass was first described, searching for the words ``critical
mass'' without the double quotes would not produce anything useful,
but a search for the phrase
\smallskip
``critical mass''
\smallskip
\noindent
with double quotes from the Advanced Search form, with ``Oldest
first'' selected, quickly finds an article in PASP from 1919 that
attributes this phrase to Professor Eddington.
Another interesting question is to find out when the name Pluto was
first suggested for a planet. If you enter:
\smallskip
planet pluto
\smallskip
\noindent
in the search field and select ``Oldest first'' under the sort
options, one of the first matches will be an article in ``The
Observatory'' from 1898 that suggests using Pluto as the name for the
recently discovered planet DQ. Incidentally, the name had to wait for
another 30 years before it was actually used for a planet. This
capability can be very useful for astronomy historians.
\section{Conclusion}
The ADS provides a search capability for the full text of a large part
of the astronomical literature. This capability complements the
regular abstract search system. It allows the in-depth analysis of
the older literature and especially the historical observatory
publications, a part of the astronomical literature that has not
been accessible in any search system until now.
\acknowledgements
The ADS is funded by NASA Grant NNG06GG68G.
\section{Introduction}
The investigation of the electron spin dynamics in semiconductor quantum dots (QDs) has attracted considerable interest over the last two decades \cite{HansonSpinQdotsRMP2007,dyakonov_book,glazov_book,smirnov:SNS:rev} due to the rich fundamental physics and the possible applications in quantum technologies.
The entanglement in interacting spin systems is of high relevance nowadays~\cite{PhysRevLett.107.206806,Gangloff2019,Gangloff:2021te,PhysRevLett.126.216804}.
In particular, the entanglement between the electron and nuclear spins is mediated by the hyperfine interaction between the locally bound charge carrier spin and the surrounding nuclear spins, which limits the electron spin coherence time \cite{merkulov_prb2002} in QDs with disordered nuclear spins.
While the fluctuating Overhauser field acting on the electron from the disordered nuclear spins is only of the order of $10$~mT,
polarized nuclei can generate an effective magnetic field of several Tesla in GaAs-type semiconductors~\cite{merkulov_pss1998,dyakonov_book,glazov_book}.
The electron spin affects the nuclei via the Knight field induced by the hyperfine interaction and can be efficiently oriented optically~\cite{opt_or_book,dyakonov_book,glazov_book}.
As a result, optical excitation is responsible for the dynamic nuclear polarization in InAs/GaAs QDs \cite{eble_prb2006} as well as mode locking \cite{greilich_science2006} and nuclei-induced frequency focusing effects \cite{GreilichBayer2007,Evers2018} enabling efficient control of the nuclear spin degrees of freedom by non-magnetic means.
When lowering the temperature, the correlated ground state of the system becomes dominant:
electron and nuclear spins cooperate and form a correlated or entangled nuclear-spin polaron state that minimizes the hyperfine energy.
Such a state has been predicted by Merkulov \cite{merkulov_pss1998} in a framework of the mean-field quasi equilibrium model, assigning the electron and nuclear spins different effective temperatures.
The two temperatures, $T_e$ and $T_n$, were used in mean-field theory \cite{merkulov_pss1998} to predict a critical temperature line on which the transition from an uncorrelated system to a nuclear-polaronic state occurs.
The key idea is based on the observation that the electron remains coupled to the lattice, whereas the very long lifetime of the nuclear spin polarization up to several hours \cite{kkm_nucl_book,vladimirova_prb2017} indicates a strong decoupling of the nuclear spins from the environment.
While the electronic degrees of freedom maintain their base temperature $T_e$ (typically, on the order of several Kelvin), the spin temperature $T_n$ of optically cooled nuclei can be much lower than $T_e$~\cite{opt_or_book,dyakonov_book,glazov_book,Chekhovich2017,vladimirova_prb2018,Kotur:2021wp}.
In particular, recently Ref.~\cite{Kotur:2021wp} reported a nuclear spin temperature as low as $0.54$~$\mu$K.
Progress in the cooling of the nuclear spin systems motivates theoretical studies of the entangled electron-nuclear spin states.
The analysis of the nuclear-spin polaron formation beyond the mean-field approach was presented in Ref.~\cite{scalbert_prb2017}.
In Ref.~\cite{PhysRevB.103.205207}, in addition to the nuclear-spin polaron, a novel state termed a dynamically induced nuclear ferromagnet was predicted.
In a recent paper \cite{fischer_prb2020}, we explored the nuclear polaron formation beyond the mean-field theory by employing a master equation for the distribution function of the interacting electron-nuclear spin system.
The analysis in Ref.~\cite{fischer_prb2020} was restricted to the Ising limit
of the hyperfine interaction, where the eigenstates of the system can be conveniently expressed as products of the electron and nuclear spin states and the spin-flip transition rates between those states mediated by the coupling with external reservoirs can be explicitly written.
The solution of the corresponding master equation has made it possible to obtain not only the transition temperature to the nuclear-spin polaron state, but also the distribution functions of the spins, the fluctuations of electron and nuclear spins and address the dynamics of the polaron formation.
In this paper, we substantially extend the theory to investigate the polaronic state for an arbitrary anisotropic hyperfine interaction, needed to access the physically relevant regimes in semiconductor QDs where an isotropic hyperfine interaction is realized for electrons, Ising-like interaction for the heavy holes, and anisotropic interaction for the light holes and heavy-light hole mixtures~\cite{glazov_book}.
We derive a generalized Lindblad approach to two spin reservoirs that impose the two temperatures, $T_e$ and $T_n$, as boundary conditions.
Our approach is suitable for all temperature regimes, and the Lindblad rates are fixed in such a way that the steady state solution of the Lindblad equation is given by the Boltzmann form of the density matrix in thermal equilibrium.
In order to address the nuclear polaron formation in a system with a very large number of nuclear spins, up to $N=1000$, in a semi-analytical fashion, we resort to the box model approximation \cite{RS1983,1742-5468-2007-06-P06018,kozlov_jetp2007} of the central spin model (CSM).
We investigate the nuclear polaron formation as a function of the anisotropy parameter $\lambda$ \cite{fischer_prb2008,hackmann_prb2014} where the limit $\lambda=0$ corresponds to the Ising limit \cite{fischer_prb2020} relevant for a purely heavy-hole bound QD state, $\lambda=1$ to the isotropic case of a negatively charged QD, and $\lambda>1$ to the regime of a mixture of heavy and light holes.
This allows us to study all relevant regimes of positively and negatively charged InGaAs QDs.
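For orientation, a common way to parametrize such an anisotropy (the precise normalization varies between references, so the form below is only a sketch) is
\begin{equation*}
H_\lambda = \sum_{k=1}^{N} A_k \left[ S^z I_k^z + \lambda \left( S^x I_k^x + S^y I_k^y \right) \right],
\end{equation*}
where $\lambda=0$ reproduces the Ising coupling, $\lambda=1$ the isotropic model, and $\lambda>1$ an easy-plane coupling.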
We show that the polaron state is not destroyed by the quantum fluctuations present when reducing the nuclear bath temperature.
The crossover regime is very narrow and follows the mean-field approach to the anisotropic CSM \cite{Gaudin1976,merkulov_pss1998,coish_prb2004,glazov_book}.
In the absence of a symmetry breaking field, however, the nuclear polaronic state retains the full degeneracy of the ground state, in contrast to the mean-field theory.
The paper is organized as follows.
Section~\ref{sec:model} is devoted to the presentation of our Lindblad approach where the included Lindblad operators mediate spin excitations caused by the coupling to the thermal reservoirs.
A general Hamiltonian for the hyperfine interaction is introduced in Sec.\ \ref{sec:hf} and the related Lindblad equation is presented in Sec.\ \ref{sec:lindblad}.
The rate equations for the density matrix in the energy eigenbasis are deduced in Sec.\ \ref{sec:elementwise}.
We adopt the general approach to the anisotropic CSM in Sec.\ \ref{sec:smodel}.
After the model is defined in Sec.\ \ref{sec:bm} and the box model eigenstates \cite{kozlov_jetp2007} are presented, the question of the determination of the Lindblad decay rates is addressed in Sec.\ \ref{sec:bmlindblad}.
Section \ref{sec:polaronstate} is devoted to the emerging nuclear-spin polaron state.
We begin with the presentation of the electron-nuclear spin correlators as a function of temperature for different anisotropy parameters $\lambda$ in Sec.\ \ref{sec:nuclear-polaron-correlators} and compare our stationary Lindblad solution with a simplifying mean-field approach in Sec.\ \ref{sec:mf}.
The critical temperature of the polaron formation and the quantum fluctuations close to the very narrow crossover region are discussed in Sec.\ \ref{sec:temperature-crit}.
We address the nuclear spin distribution in Sec.\ \ref{sec:nucldistfunc} by tracing out the electronic spin configuration.
Our results are linked to a quantum phase transition that occurs at the isotropy point $\lambda=1$.
We discuss the change of the ground state at the critical point in Sec.\ \ref{sec:quantum-phase-transition}.
In Sec.~\ref{sec:fluct}, we present calculations for the spin auto-correlation function of the open quantum system.
Section~\ref{sec:szsz} is devoted to the real time dynamics of the electron spin and Sec.\ \ref{sec:jzjz} extends the discussion to the fluctuations of the nuclear spins.
We finish the paper with a short conclusion.
\section{Model}
\label{sec:model}
In this paper we investigate the formation of a polaronic state and its properties in a system with one localized electronic charge.
We explicitly treat the interaction between the nuclear spins and the localized charge carrier spin via the central spin model (CSM) and include energy and spin exchange with reservoirs within a set of Markovian transition rates.
We start with a presentation of the basic formalism.
\subsection{Hyperfine interaction}
\label{sec:hf}
The hyperfine interaction between the localized charge carrier spin $\mathbf{S}$ and the surrounding nuclear spins $\mathbf{I}_k$ is described by the Hamiltonian \cite{dyakonov_book,glazov_book}
\begin{equation}
H = \sum_{k=1}^N \sum_{\alpha,\beta} A_k^{\alpha,\beta} S^\alpha I_k^\beta .
\label{eq:H}
\end{equation}
Here we label the individual nuclear spins with an index $k \in \left\{ 1, \ldots , N \right\}$ and include all nuclear spins within the charge carrier localization volume.
The matrix $A_k^{\alpha,\beta}$ defines the generally anisotropic hyperfine coupling strength of an individual nuclear spin;
its matrix elements incorporate the electron wave function at the position of the respective nucleus, where $\alpha$ and $\beta \in \left\{ x,y,z \right\}$ refer to the Cartesian axes.
The Hamiltonian, Eq.~\eqref{eq:H}, accounts for a system with an anisotropic hyperfine coupling as well as the isotropic case, where, naturally, $A_k^{\alpha,\beta} \propto \delta_{\alpha,\beta}$, with $\delta_{\alpha,\beta}$ being the Kronecker $\delta$-symbol \cite{glazov_book}.
Hamiltonian~\eqref{eq:H} is applicable to the description of a variety of semiconductor nanostructures such as singly charged QDs \cite{merkulov_prb2002,abragam_book} or donor-bound electrons \cite{feher_pr1959,pla_nature2012}.
Generally, the charge carrier spin $\mathbf{S}$ can represent an electron spin or a light/heavy hole spin, involving a proper adjustment of the spin length and the hyperfine coupling constants $A_k^{\alpha,\beta}$ \cite{abragam_book,testelin_prb2009,hackmann_prl2015}.
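For a handful of spin-1/2 nuclei, the Hamiltonian of Eq.~\eqref{eq:H} can be built explicitly as a matrix. The sketch below (with placeholder random couplings $A_k^{\alpha,\beta}$, chosen purely for illustration) constructs it from Kronecker products and checks Hermiticity:

```python
import numpy as np

# Illustrative construction of H = sum_k sum_{a,b} A_k^{ab} S^a I_k^b for a
# spin-1/2 electron (site 0) and N spin-1/2 nuclei (sites 1..N).
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
spin = [sx, sy, sz]

def site_op(op, site, nsites):
    """Embed a single-spin operator at position `site` in a chain of
    `nsites` spin-1/2 sites via Kronecker products."""
    mats = [np.eye(2, dtype=complex)] * nsites
    mats[site] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def csm_hamiltonian(A):            # A has shape (N, 3, 3)
    N = A.shape[0]
    dim = 2 ** (N + 1)
    H = np.zeros((dim, dim), dtype=complex)
    for k in range(N):
        for a in range(3):
            for b in range(3):
                H += A[k, a, b] * site_op(spin[a], 0, N + 1) @ site_op(spin[b], k + 1, N + 1)
    return H

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3, 3))     # N = 3 nuclei, random real couplings
H = csm_hamiltonian(A)
print(np.allclose(H, H.conj().T))  # real couplings yield a Hermitian H
```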
\subsection{Lindblad formalism for thermal reservoirs}
\label{sec:lindblad}
To account for the effect of the optical cooling of the nuclear spin bath, we introduce a two-temperature concept \cite{opt_or_book,dyakonov_book,vladimirova_prb2018} with distinct effective inverse temperatures for the electron spin, $\beta_e = 1/k_B T_e$, and the nuclear spins, $\beta_n = 1/k_B T_n$ \cite{glazov_book,merkulov_pss1998,scalbert_prb2017,fischer_prb2020}.
Under optical cooling of the nuclear spin bath, the electron spin mostly retains the lattice temperature while the nuclear spins are cooled below, $\beta_n > \beta_e$.
Consequently, we treat the system as an open quantum system whose dynamics is driven by a unitary time evolution provided by the Hamiltonian $H$, Eq.~\eqref{eq:H}, and some Markovian transition rates between the eigenstates that account for the reservoir-induced energy and spin exchange.
Formally, this can be done by introducing fluctuating effective magnetic fields induced by reservoirs and acting on the electron and nuclear spins~\cite{fischer_prb2020}.
Corresponding coherent and incoherent dynamics of the system is most conveniently described by the density matrix.
Its evolution is governed by the Lindblad master equation \cite{carmichael_book}.
To that end, it is useful to introduce the complete eigenbasis of $H$ in Eq.~\eqref{eq:H}, as $H\ket{\psi_n} = \epsilon_n \ket{\psi_n}$, with eigenenergies $\epsilon_n$ and eigenvectors $\ket{\psi_n}$; the subscript $n$ enumerates all basic states of the systems.
The eigenbasis is used to define the complete operator basis $X_{mn} = \ket{\psi_m} \bra{\psi_n}$ of the Hilbert space.
Taking into account possible degeneracies of the eigenstates, we introduce the most general Lindblad operators $L_{m,n}^{k,\alpha}$ in the form
\begin{eqnarray}
L_{m,n}^{k,\alpha} &=& \sqrt{\Gamma_{m,n}^{k,\alpha}} \sum_{a,b} \delta_{\epsilon_a,\epsilon_m} \delta_{\epsilon_b,\epsilon_n} \braket{\psi_a|s_k^\alpha|\psi_b} X_{ab},
\label{eq:Lb}
\end{eqnarray}
describe transitions between the eigenstates $\ket{\psi_n}$ and $\ket{\psi_m}$ that are mediated by the reservoirs with the rate $\Gamma_{m,n}^{k,\alpha}$ (presented below) via the spin-operator $s_k^\alpha$.
From now on, the index $k$ refers, for convenience, either to the electron spin ($k=0$), $s_0^\alpha=S^\alpha$, or to one of the nuclear spins ($k \in \left\{ 1, \ldots , N \right\}$), $s_k^\alpha=I_k^\alpha$.
The sum over all states $a,b$ accounts for all combinations of initial and final states sharing the same transition energy difference
\begin{equation}
\label{trans:energ}
\Delta_{mn}=\epsilon_m - \epsilon_n,
\end{equation}
due to the degeneracy of states.
These Lindblad operators and their Hermitian conjugates, $(L_{m,n}^{k,\alpha})^\dag$, enter the Lindblad master equation,
\begin{align}
\dot{\rho} = \mathcal{L} \rho &=
-i \left[ H, \rho \right] - \sum_{k=0}^N \sum_\alpha \sum_{m,n} \left\{ (L_{m,n}^{k,\alpha})^\dag L_{m,n}^{k,\alpha} \rho \right. \notag \\
&\qquad \left. + \rho (L_{m,n}^{k,\alpha})^\dag L_{m,n}^{k,\alpha} - 2 L_{m,n}^{k,\alpha} \rho (L_{m,n}^{k,\alpha})^\dag \right\} ,
\label{eq:La}
\end{align}
governing the temporal evolution of the system's density operator $\rho$.
Generally, the transition rates must be constructed in such a way that the steady-state solution of the density operator in thermal equilibrium acquires the Boltzmann form, which commutes with $H$.
Accordingly, the rate of a respective transition is given by
\begin{eqnarray}
\Gamma_{m,n}^{k,\alpha} &=& \frac{W_k^\alpha h_k^\alpha(\Delta_{mn})}{g(\epsilon_m) g(\epsilon_n)},
\label{eq:gamma-rate}
\end{eqnarray}
where $g(\epsilon_m)$ denotes the degeneracy of the eigenenergy $\epsilon_m$ and $W_k^\alpha$ some phenomenological rate that typically is assumed to be several orders of magnitude larger for the electron spin than for the nuclear spins due to the electron's stronger coupling to the environment.
The usefulness of separation between the rate $W_k^\alpha$ and the degeneracy factor $g(\epsilon_m)$ becomes clear below in Sec.\ \ref{sec:elementwise}.
The dimensionless function $h_k^\alpha(\Delta_{mn})$ takes into account an enhancement or suppression of transitions depending on the energy difference between the initial and final states, Eq.~\eqref{trans:energ}.
Demanding the relaxation of $\rho$ to the Boltzmann form in thermodynamic equilibrium requires the ratio
$h_k^\alpha(\Delta_{mn})/h_k^\alpha(-\Delta_{mn}) = \exp (-\Delta_{mn} \beta_k)$, where in true thermal equilibrium $\beta_k=\beta$ is the common inverse temperature.
In this paper, we allow for two different effective inverse spin reservoir temperatures $\beta_k=\beta_e$ for $k=0$ and $\beta_k=\beta_n$ otherwise as it takes place in the experiments on the optical cooling of lattice nuclei~\cite{opt_or_book,glazov_book,vladimirova_prb2018}.
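The detailed-balance requirement fixes only the ratio $h_k^\alpha(\Delta)/h_k^\alpha(-\Delta)$; a convenient symmetric choice is $h(\Delta)=e^{-\Delta\beta/2}$ (an assumption made here for illustration, other choices are possible). A minimal two-level rate-equation sketch then shows that such rates relax the populations to the Boltzmann ratio:

```python
import numpy as np

# Two levels with energy gap delta at inverse temperature beta.
# Up/down rates h(+delta), h(-delta) obey h(delta)/h(-delta) = exp(-delta*beta).
beta, delta, W = 2.0, 1.0, 0.3
h = lambda d: np.exp(-0.5 * d * beta)
gamma_up, gamma_down = W * h(delta), W * h(-delta)   # 0 -> 1 and 1 -> 0

# Classical rate equation dp/dt = M p, integrated to the steady state.
M = np.array([[-gamma_up, gamma_down],
              [gamma_up, -gamma_down]])
p = np.array([1.0, 0.0])
for _ in range(20000):
    p = p + 1e-3 * M @ p

print(p[1] / p[0], np.exp(-beta * delta))  # both approach exp(-beta*delta)
```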
The above formulation, Eqs.~\eqref{eq:Lb} and \eqref{eq:La}, of the two-reservoir concept for the electron-nuclear spin system constitutes an extension of the rate-equation formalism introduced in Ref.~\cite{fischer_prb2020}.
The Lindblad equation incorporates off-diagonal elements of the density operator $\rho$ and thereby allows for the description of the hyperfine interaction beyond the Ising limit.
For the Ising limit of the hyperfine coupling constants $A_k^{\alpha,\beta}$, it reproduces the results in Ref.~\cite{fischer_prb2020} as a special case.
However, the inclusion of the off-diagonal elements of $\rho$ facilitates the treatment of observables where the corresponding quantum mechanical operator does not commute with the Hamiltonian.
Therefore, this approach goes well beyond the previously considered Ising limit and pushes the theory into experimentally relevant realms.
\subsection{Dynamics of the density matrix}
\label{sec:elementwise}
In the definition of the Lindblad operator, Eq.~\eqref{eq:Lb}, the pair of sums over the energy eigenstates $a$ and $b$ in combination with the Kronecker $\delta$-symbols allows for contributions only from the eigenstates $\ket{\psi_a}$ ($\ket{\psi_b}$, respectively) that belong to the same energetically degenerate subspace as the state $m$ ($n$), i.\ e.\ the states for which $\epsilon_a = \epsilon_m$ ($\epsilon_b = \epsilon_n$).
In the case of non-degenerate eigenenergies, these sums reduce to a single contribution.
For degenerate eigenenergies, however, this construction ensures a free choice of the orthonormal eigenbasis within the energetically degenerate subspaces without altering the dynamics.
To avoid a double counting of the transitions, we include the degrees of degeneracy $g(\epsilon_m)$, $g(\epsilon_n)$ as prefactors in Eq.~\eqref{eq:gamma-rate}.
The details of the analysis are presented in Appendix~\ref{app:degen}.
To obtain the coupled differential equations for the density matrix, we convert Eq.~\eqref{eq:La}, see also Eqs. \eqref{eq:Lc}, \eqref{eq:Ld}, to a matrix representation using the energy eigenstates of $H$ and arrive at
\begin{multline}
\dot{\rho}_{mn} =-i \Delta_{mn} \rho_{mn} \\
- \sum_{k,\alpha} W_k^\alpha\sum_{a,b}
\Bigg\{ \delta_{\epsilon_m,\epsilon_b} h_k^\alpha(\Delta_{am})
\left(s_k^\alpha \right)^*_{a,m}
\left( s_k^\alpha \right)_{a,b} \rho_{bn} \Bigg. \\
+ \delta_{\epsilon_n,\epsilon_{b}} h_k^\alpha(\Delta_{a n})
\left(s_k^\alpha \right)^*_{a,b}
\left( s_k^\alpha\right)_{a,n} \rho_{m b}\\
\Bigg. -2 \delta_{\epsilon_m,\epsilon_n}
\delta_{\epsilon_a,\epsilon_{b}}
h_k^\alpha(\Delta_{m a})
\left(s_k^\alpha \right)_{m,a} \left(s_k^\alpha\right)^*_{n,b}
\rho_{ab} \Bigg\} .
\label{eq:Le}
\end{multline}
This equation can be conveniently used for numerical calculations.
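As a minimal illustration of such a calculation, and not of the full electron-nuclear problem, the following pure-Python sketch propagates a single spin-$1/2$ with one decay channel using exactly the sign and factor conventions of Eq.~\eqref{eq:La}; the splitting, rate, and step size are arbitrary illustrative values, and all helper names are ours.

```python
import math

def mul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dag(A):
    # Hermitian conjugate of a 2x2 matrix
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def rhs(rho, H, Ls):
    # right-hand side of Eq. (eq:La):
    # -i[H, rho] - sum_L { L^dag L rho + rho L^dag L - 2 L rho L^dag }
    HR, RH = mul(H, rho), mul(rho, H)
    out = [[-1j * (HR[i][j] - RH[i][j]) for j in range(2)] for i in range(2)]
    for L in Ls:
        Ld = dag(L)
        LdL = mul(Ld, L)
        t1, t2 = mul(LdL, rho), mul(rho, LdL)
        t3 = mul(mul(L, rho), Ld)
        for i in range(2):
            for j in range(2):
                out[i][j] -= t1[i][j] + t2[i][j] - 2.0 * t3[i][j]
    return out

# single spin-1/2 with splitting 1 and one decay channel |1> -> |0> (rate g)
H = [[0.0, 0.0], [0.0, 1.0]]
g = 0.5
L = [[0.0, math.sqrt(g)], [0.0, 0.0]]
rho = [[0.5, 0.5], [0.5, 0.5]]          # initial |+><+| state
dt, steps = 0.001, 20000
for _ in range(steps):
    d = rhs(rho, H, [L])
    rho = [[rho[i][j] + dt * d[i][j] for j in range(2)] for i in range(2)]
```

The trace is preserved and the excited-state population relaxes to zero, as expected for a zero-temperature decay channel; note that in the convention of Eq.~\eqref{eq:La} the rates are absorbed into the Lindblad operators and the sandwich term carries a factor of $2$.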
\section{Models of hyperfine coupling and transition rates}
\label{sec:smodel}
Here, the general description for an arbitrary hyperfine coupling Hamiltonian, Eq.~\eqref{eq:H}, is customized to a more specific system where the hyperfine interaction anisotropy is uniaxial and described by a single parameter $\lambda$.
The corresponding master equation taking into account the coupling to thermal reservoirs is derived from general Eqs.~\eqref{eq:La} and \eqref{eq:Le}.
\subsection{Anisotropic central spin model}
\label{sec:bm}
In systems such as singly charged self-assembled GaAs-type QDs grown on the $(xy) \parallel (001)$ crystallographic plane, the matrix $A_k^{\alpha,\beta}$ describing the hyperfine interaction, Eq.~\eqref{eq:H}, is diagonal and the coupling is, as a rule, isotropic in the $(xy)$ plane~\cite{glazov_book}.
The resulting Hamiltonian,
\begin{equation}
\label{eq:H:uni}
H = \sum_k A_k [ \lambda ( S^x I^x_k + S^y I^y_k ) + S^z I^z_k ] ,
\end{equation}
includes a uniaxial anisotropy parameter $\lambda$ with respect to the $z\parallel [001]$ direction.
The Hamiltonian Eq.~\eqref{eq:H:uni} allows for the description of a variety of semiconductor nanostructures, although the physical origin of the coupling $A_k$ might differ.
The analysis of the situation with biaxial anisotropy or non-collinear hyperfine interaction~\cite{glazov_book,PhysRevB.94.121302,avdeev_nanoad2019} can be performed in the same way and goes beyond the scope of the present paper.
We recall that for the conduction band electron in an $s$-type orbital at an atomic site, the main contribution to the hyperfine coupling stems from the Fermi contact interaction \cite{fermi_zp1930}.
In contrast, for a hole spin coupling to the surrounding nuclear spins, the Fermi contact coupling is strongly suppressed due to the $p$-type wave function, and the dipole-dipole interaction is predominant \cite{testelin_prb2009}.
The coupling strength of the respective scenario is adjusted by the constants $A_k$ and the anisotropy is respected by the parameter $\lambda$ \cite{testelin_prb2009,fischer_prb2008,hackmann_prl2015}.
For $\lambda =1$, the isotropic limit relevant for an electron spin is restored whereas $\lambda=-2$ is a typical parameter for the spin of a light hole.
The Ising limit, $\lambda =0$, captures the heavy hole in a self-assembled InAs/GaAs QD with the sample's growth direction matching the $z$ axis.
In QDs the hole state often is a mixture of the heavy and light hole contribution depending on the geometry of the dot.
In such a case, the coupling can be described by the Hamiltonian~\eqref{eq:H:uni} with the parameter $\lambda$ varying, typically, between $-2$ and $0$.
To enable analytic access to the eigenenergies and eigenstates of the hyperfine Hamiltonian with a relatively large number of nuclear spins, $N\approx 1000$, we set the hyperfine coupling constant $A_k = A_0$ for all nuclear spins, which is referred to as the box model approximation.
In this case the Hamiltonian can be written in terms of the total nuclear spin $\mathbf{J} = \sum_k \mathbf{I}_k$,
\begin{equation}
\begin{split}
H &= A_0 \left[ \lambda \left( S^x J^x + S^y J^y \right) + S^z J^z \right] \\
&= A_0 \left[ \frac{\lambda}{2} \left( S^+ J^- + S^-J^+ \right) + S^z J^z \right]
\end{split}
\label{eq:bm}
\end{equation}
with the ladder operators of the electron spin $S^\pm = S^x \pm i S^y$ and the total nuclear spin $J^\pm = J^x \pm i J^y$.
As a characteristic frequency scale of the system we introduce $\omega_h = ( \sum_k A_k^2 )^{1/2} \equiv \sqrt{N} A_0$ based on the dephasing rate of the electron spin in the nuclear spin bath for $\lambda=1$.
We employ $\omega_h$ as a reference scale, e.\ g., for indicating energies and temperatures, in the following.
Since only the total nuclear spin $J$ and the quantum number $J^z$ enter the determination of the eigenstates, we distinguish the different degenerate multiplets \cite{kozlov_jetp2007} with the same $J$, which arise from the rules of angular momentum addition, by the index $\gamma$.
The eigenenergies $\epsilon_{J,J^z}^\sigma$ and eigenstates $\ket{\psi^{\sigma,\gamma}_{J,J^z}}$ for a system in which the central spin $\mathbf{S}$ and the individual nuclear spins $\mathbf{I}_k$ each have length $1/2$ have been calculated by Kozlov \cite{kozlov_jetp2007} and read
\begin{subequations}
\label{eq:bmmmm}
\begin{align}
\epsilon^+_{J,-J} &= \frac{A_0 J}{2},
&\epsilon^+_{J,J+1} &= \frac{A_0 J}{2}, \label{eq:bm1e}\\
\ket{\psi^{+,\gamma}_{J,-J}} &= \ket{\downarrow} \ket{J,-J,\gamma},
&\ket{\psi^{+,\gamma}_{J,J+1}} &= \ket{\uparrow} \ket{J,J,\gamma}, \label{eq:bm1s}
\end{align}
\end{subequations}
with $J \in \left\{ 0,\ldots,N/2 \right\}$ and
\begin{subequations}
\label{eq:spectrum:boxK}
\begin{align}
\epsilon^\pm_{J,J^z} &= - \frac{A_0}{4} \pm \frac{A_0}{2} \Bigg\{ \left(J^z-\frac12 \right)^2 \Bigg. \notag \\
& \qquad \Bigg. + \lambda^2 \left[ J(J+1) -J^z(J^z-1) \right] \Bigg\}^{1/2}
\label{eq:bm2e} \\
\ket{\psi^{\sigma,\gamma}_{J,J^z}} &= c^\sigma_{J,J^z} \ket{\downarrow} \ket{J,J^z,\gamma} + d^\sigma_{J,J^z} \ket{\uparrow} \ket{J,J^z-1,\gamma} \label{eq:bm2s}
\end{align}
\end{subequations}
where $J \in \left\{ 0,\ldots,N/2 \right\}$, $J^z \in \left\{ -J+1,\ldots,J \right\}$ and $\sigma \in \left\{ +,- \right\}$.
The eigenstates are given in terms of the electron spin and the total nuclear spin $z$ product basis with $\ket{\uparrow / \downarrow}$ referring to the electron spin state and $\ket{J,J^z,\gamma}$ determining the nuclear spin state with the quantum numbers for total nuclear spin length $J$ and the $z$ quantum number $J^z$.
The coefficients $c^\sigma_{J,J^z}$ and $d^\sigma_{J,J^z}$ of the eigenstates, Eq.~\eqref{eq:bm2s}, are obtained from analytical diagonalization of the $2\times 2$ dimensional subblocks of the Hamilton matrix spanned by the states $\ket{\downarrow} \ket{J,J^z,\gamma}$ and $\ket{\uparrow} \ket{J,J^z-1,\gamma}$,
\begin{align}
H_{J,J^z}^{2\times2} =
\begin{pmatrix}
-A_0 J^z/2 & T_{J,J^z} \\
T_{J,J^z} & A_0(J^z-1)/2
\end{pmatrix},
\end{align}
with $T_{J,J^z} = \frac{\lambda A_0}{2} \sqrt{J(J+1)-J^z(J^z-1)}$ stemming from the flip-flop term of Eq.~\eqref{eq:bm}.
Note that the label $J^z=J+1$ in the Eqs.~\eqref{eq:bm1e} and \eqref{eq:bm1s} does not correspond to the actual quantum number of the state, but is chosen in compliance with the labeling in Eqs.~\eqref{eq:bm2e} and \eqref{eq:bm2s}, and allows for a general notation of eigenenergies $\epsilon^\sigma_{J,J^z}$ and eigenstates $\ket{\psi^{\sigma,\gamma}_{J,J^z}}$ where $J^z \in \left\{ -J,\ldots,J+1 \right\}$.
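The block structure can be cross-checked numerically. The sketch below (pure Python; the function names are ours) builds the $2\times2$ block directly from the flip-flop form of Eq.~\eqref{eq:bm} and compares its eigenvalues with the closed-form energies of Eq.~\eqref{eq:bm2e}:

```python
import math

def block_eigenvalues(A0, lam, J, Jz):
    # 2x2 block of Eq. (eq:bm) in the basis {|down>|J,Jz>, |up>|J,Jz-1>}:
    # diagonal entries from A0*Sz*Jz, off-diagonal entry from the
    # flip-flop term (lam*A0/2)(S+ J- + S- J+)
    a = -0.5 * A0 * Jz
    d = 0.5 * A0 * (Jz - 1)
    t = 0.5 * lam * A0 * math.sqrt(J * (J + 1) - Jz * (Jz - 1))
    mean = 0.5 * (a + d)
    half = math.sqrt((0.5 * (a - d)) ** 2 + t * t)
    return mean - half, mean + half

def closed_form(A0, lam, J, Jz):
    # eigenenergies of Eq. (eq:bm2e)
    root = math.sqrt((Jz - 0.5) ** 2
                     + lam ** 2 * (J * (J + 1) - Jz * (Jz - 1)))
    return -0.25 * A0 - 0.5 * A0 * root, -0.25 * A0 + 0.5 * A0 * root

for lam in (0.0, 1.0, 2.0):
    for J, Jz in ((3, 1), (5, -2), (10, 10)):
        for e_num, e_ana in zip(block_eigenvalues(1.0, lam, J, Jz),
                                closed_form(1.0, lam, J, Jz)):
            assert abs(e_num - e_ana) < 1e-12
```

For $\lambda=1$ the two branches reduce to $A_0 J/2$ and $-A_0(J+1)/2$, the familiar energies of the total-spin multiplets $F=J\pm1/2$.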
As mentioned above, the quantity $\gamma$ accounts for the degeneracy in the system: the Hamilton matrix is block diagonal and can be split into subblocks with fixed quantum number $J$, whereby for each value of $J$ a number $g_N(J)$ of identical blocks exists.
Assuming an even number $N$ of nuclear spins, this degree of degeneracy is given by
\begin{equation}
g_N(J) = \frac{2J+1}{N/2+J+1} {N\choose{N/2+J}},
\label{eq:Jdeg}
\end{equation}
where ${a\choose b} = a!/[b!(a-b)!]$ is the binomial coefficient.
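As a quick sanity check of Eq.~\eqref{eq:Jdeg}, one can verify that the multiplicities exhaust the $2^N$-dimensional nuclear Hilbert space; the short sketch below does this in pure Python (the integer division is exact, and the function name is ours):

```python
from math import comb

def g(N, J):
    # degree of degeneracy of the total nuclear spin J for N spin-1/2
    # nuclei, Eq. (eq:Jdeg): (2J+1)/(N/2+J+1) * C(N, N/2+J)
    return (2 * J + 1) * comb(N, N // 2 + J) // (N // 2 + J + 1)

for N in (4, 10, 1000):
    # multiplicities weighted by the multiplet size 2J+1 must add up
    # to the full Hilbert-space dimension 2^N
    total = sum(g(N, J) * (2 * J + 1) for J in range(N // 2 + 1))
    assert total == 2 ** N
```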
\subsection{Reduced rate equations}
\label{sec:bmlindblad}
With the aid of the eigenstate decomposition, Eqs.~\eqref{eq:bmmmm} and \eqref{eq:spectrum:boxK}, we specify the final master equation in the box model limit:
Each sum over the eigenstates in the original master equation, Eq.~\eqref{eq:Le}, is split into sums over the box model quantum numbers, $J$, $J^z$, $\sigma$, and $\gamma$. Furthermore, we can assume the density operator to be diagonal in the quantum numbers $J$ and $\gamma$, as is the Hamiltonian, and thereby reduce the number of sums.
Next, we replace the operators $s_k^\alpha$ in Eq.~\eqref{eq:Le} by the ladder operators $s_k^\tau$,
\begin{equation}
s_k^\tau = \begin{cases} s_k^- / \sqrt{2}, & \tau = -1, \\
s_k^z, & \tau = 0, \\
s_k^+ / \sqrt{2}, & \tau = +1, \end{cases}
\label{eq:sktau}
\end{equation}
with the factor $1/\sqrt{2}$ stemming from normalization.
Taking into account that a spin-flip element $\braket{\psi^{\sigma',\gamma'}_{J',{J^z}'}|s_k^\tau|\psi^{\sigma,\gamma}_{J,J^z}}$ only yields a contribution when ${J^z}' = J^z+\tau$, independent of which spin $k$ is flipped, one obtains the master equation for the density matrix elements
\begin{multline}
\partial_t \braket{\psi^{\sigma_m,\gamma}_{J,J^z_m}|\rho|\psi^{\sigma_n,\gamma}_{J,J^z_n}} =
-i \Delta^{\sigma_m,J,J^z_m}_{\sigma_n,J,J^z_n}
\braket{\psi^{\sigma_m,\gamma}_{J,J^z_m}|\rho|\psi^{\sigma_n,\gamma}_{J,J^z_n}} \\
- \sum_{k,\tau} W_k^\tau
\sum_{J',\gamma'} \sum_{\sigma,\sigma'} \Bigg\{ \delta_{\epsilon^{\sigma_m}_{J,J^z_m},\epsilon^{\sigma'}_{J,J^z_m}}
h_k^\tau(\Delta^{\sigma,J',J^z_m+\tau}_{\sigma_m,J,J^z_m}) \Bigg. \\
\braket{\psi^{\sigma_m,\gamma}_{J,J^z_m}|(s_k^\tau)^\dag|\psi^{\sigma,\gamma'}_{J',J^z_m+\tau}}
\braket{\psi^{\sigma,\gamma'}_{J',J^z_m+\tau}|s_k^\tau|\psi^{\sigma',\gamma}_{J,J^z_m}} \\
\braket{\psi^{\sigma',\gamma}_{J,J^z_m}|\rho|\psi^{\sigma_n,\gamma}_{J,J^z_n}}
+ \delta_{\epsilon^{\sigma_n}_{J,J^z_n},\epsilon^{\sigma'}_{J,J^z_n}}
h_k^\tau(\Delta^{\sigma,J',J^z_n+\tau}_{\sigma_n,J,J^z_n}) \\
\braket{\psi^{\sigma',\gamma}_{J,J^z_n}|(s_k^\tau)^\dag|\psi^{\sigma,\gamma'}_{J',J^z_n+\tau}}
\braket{\psi^{\sigma,\gamma'}_{J',J^z_n+\tau}|s_k^\tau|\psi^{\sigma_n,\gamma}_{J,J^z_n}} \\
\braket{\psi^{\sigma_m,\gamma}_{J,J^z_m}|\rho|\psi^{\sigma',\gamma}_{J,J^z_n}}
-2 \delta_{\epsilon^{\sigma_m}_{J,J^z_m},\epsilon^{\sigma_n}_{J,J^z_n}} \delta_{\epsilon^\sigma_{J',J^z_m-\tau},\epsilon^{\sigma'}_{J',J^z_n-\tau}} \\
h_k^\tau(\Delta^{\sigma_m,J,J^z_m}_{\sigma,J',J^z_m-\tau})
\braket{\psi^{\sigma_m,\gamma}_{J,J^z_m}|s_k^\tau|\psi^{\sigma,\gamma'}_{J',J^z_m-\tau}} \\
\Bigg. \braket{\psi^{\sigma',\gamma'}_{J',J^z_n-\tau}|(s_k^\tau)^\dag|\psi^{\sigma_n,\gamma}_{J,J^z_n}}
\braket{\psi^{\sigma,\gamma'}_{J',J^z_m-\tau}|\rho|\psi^{\sigma',\gamma'}_{J',J^z_n-\tau}} \Bigg\}
\label{eq:Lf}
\end{multline}
with the energy difference $\Delta^{\sigma,J,J^z}_{\sigma',J',{J^z}'} = \epsilon^\sigma_{J,J^z} - \epsilon^{\sigma'}_{J',{J^z}'}$.
Since the eigenenergy of the eigenstate is independent of the label $\gamma$, we combine these matrix elements into a $\gamma$-independent probability distribution $p^J_{J^z_m,\sigma_m;J^z_n,\sigma_n}$ using the degree of degeneracy, Eq.~\eqref{eq:Jdeg},
\begin{equation}
\begin{split}
p^J_{J^z_m,\sigma_m;J^z_n,\sigma_n} &= \sum_\gamma \braket{\psi^{\sigma_m,\gamma}_{J,J^z_m}|\rho|\psi^{\sigma_n,\gamma}_{J,J^z_n}} \\
&= g_N(J) \braket{\psi^{\sigma_m,\gamma}_{J,J^z_m}|\rho|\psi^{\sigma_n,\gamma}_{J,J^z_n}} .
\end{split}
\label{eq:p}
\end{equation}
Finally, using Eq.\ \eqref{eq:Lf}, we arrive at the rate equation,
\begin{multline}
\partial_t p^J_{J^z_m,\sigma_m;J^z_n,\sigma_n} =
-i \Delta^{\sigma_m,J,J^z_m}_{\sigma_n,J,J^z_n} p^J_{J^z_m,\sigma_m;J^z_n,\sigma_n} \\
- \Bigg\{ \sum_{\tau} \sum_{J',\sigma'} \left[
\Gamma^\tau_{J',J}(J^z_m+\tau,J^z_m+\tau;\sigma',\sigma',\sigma_m,\sigma_m) \right. \Bigg. \\
\Bigg. \left. + \Gamma^\tau_{J',J}(J^z_n+\tau,J^z_n+\tau;\sigma',\sigma',\sigma_n,\sigma_n) \right] \Bigg\}
p^J_{J^z_m,\sigma_m;J^z_n,\sigma_n} \\
+ \sum_\tau \sum_{J',\sigma,\sigma'} 2 \Gamma^\tau_{J,J'}(J^z_m,J^z_n;\sigma_m,\sigma_n,\sigma,\sigma') p^{J'}_{J^z_m-\tau,\sigma;J^z_n-\tau,\sigma'},
\label{eq:Lp}
\end{multline}
for $p^J_{J^z_m,\sigma_m;J^z_n,\sigma_n}$.
The prefactors for the three terms inducing transitions between the elements are combined into the total transition rate
\begin{multline}
\Gamma^\tau_{J,J'}(J^z_a,J^z_b;\sigma_a,\sigma_b,\sigma_c,\sigma_d) =
\delta_{\epsilon^{\sigma_a}_{J,J^z_a},\epsilon^{\sigma_b}_{J,J^z_b}}
\delta_{\epsilon^{\sigma_c}_{J',J^z_a-\tau},\epsilon^{\sigma_d}_{J',J^z_b-\tau}}\\
\times\frac{1}{g_N(J')}
\sum_k W^\tau_k
h^\tau_k(\Delta^{\sigma_a,J,J^z_a}_{\sigma_c,J',J^z_a-\tau}) \\
\sum_{\gamma,\gamma'}
\braket{\psi^{\sigma_a,\gamma}_{J,J^z_a}|s_k^\tau|\psi^{\sigma_c,\gamma'}_{J',J^z_a-\tau}}
\braket{\psi^{\sigma_d,\gamma'}_{J',J^z_b-\tau}|(s_k^\tau)^\dag|\psi^{\sigma_b,\gamma}_{J,J^z_b}} .
\label{eq:Grate}
\end{multline}
The occurring matrix elements $\braket{\psi^{\sigma,\gamma}_{J,J^z}|s_k^\tau|\psi^{\sigma',\gamma'}_{J',J^z-\tau}}$ for the spin operator $s_k^\tau$, Eq.~\eqref{eq:sktau}, are evaluated separately for the electron spin operator $S^\tau$ and the nuclear spin operator $I^\tau_k$.
Substitution of the explicit form of the eigenstates, Eq.~\eqref{eq:bm2s}, yields
\begin{multline}
\braket{\psi^{\sigma,\gamma}_{J,J^z}|S^\tau|\psi^{\sigma',\gamma'}_{J',J^z-\tau}} =
\delta_{J,J'} \delta_{\gamma,\gamma'} \\
\times
\begin{cases}
c^{\sigma}_{J,J^z} d^{\sigma'}_{J',J^z-\tau} / \sqrt{2}, & \tau = -1, \\
(d^{\sigma}_{J,J^z} d^{\sigma'}_{J',J^z-\tau} - c^{\sigma}_{J,J^z} c^{\sigma'}_{J',J^z-\tau})/ 2, & \tau = 0, \\
d^{\sigma}_{J,J^z} c^{\sigma'}_{J',J^z-\tau} / \sqrt{2}, & \tau = +1,
\end{cases}
\label{eq:tmes}
\end{multline}
for the electron spin operator due to the orthonormality of the nuclear spin states.
For the nuclear spin operator we obtain the matrix elements
\begin{multline}
\braket{\psi^{\sigma,\gamma}_{J,J^z}|I_k^\tau|\psi^{\sigma',\gamma'}_{J',J^z-\tau}} = \\
c^{\sigma}_{J,J^z} c^{\sigma'}_{J',J^z-\tau} \braket{J,J^z,\gamma|I^\tau_k|J',J^z-\tau,\gamma'} \\
+ d^{\sigma}_{J,J^z} d^{\sigma'}_{J',J^z-\tau} \braket{J,J^z-1,\gamma|I^\tau_k|J',J^z-\tau-1,\gamma'}
\label{eq:tmei}
\end{multline}
as a result of the orthonormality of the electron spin states.
For the calculation of the remaining matrix elements of the type
$\braket{J',J^z+\tau,\gamma'|I^\tau_k|J,J^z,\gamma}$,
we make use of the assumption that the nuclear spins in the box model approximation are indistinguishable and accordingly omit any potential dependence of $W^\tau_k$ and $h^\tau_k$ on the individual nuclear spin $k\in \left\{1,\ldots,N\right\}$,
i.e. we set $W^\tau_k = W^\tau_n$ and $h^\tau_k = h^\tau_n$ for all nuclear spins.
The electron spin contribution of these quantities, $W^\tau_0 = W^\tau_e$ and $h^\tau_0 = h^\tau_e$, however, differs from that of the nuclear spins.
As a consequence of the assumption, the result of the evaluation for an individual nuclear spin $k$ can be adopted for the other nuclear spins as well, such that the nuclear contribution in the sum over $k$ in Eq.~\eqref{eq:Grate} solely produces a prefactor $N$.
The actual evaluation of the elements
$\braket{J',J^z+\tau,\gamma'|I^\tau_k|J,J^z,\gamma}$ can be performed by virtue of the Clebsch-Gordan coefficients.
The results are presented in Appendix \ref{app:sfe}.
With the above considerations, the transition rate, Eq.~\eqref{eq:Grate}, can be transformed into
\begin{multline}
\Gamma^\tau_{J,J'}(J^z_a,J^z_b;\sigma_a,\sigma_b,\sigma_c,\sigma_d) = \\
\delta_{\epsilon^{\sigma_a}_{J,J^z_a},\epsilon^{\sigma_b}_{J,J^z_b}}
\delta_{\epsilon^{\sigma_c}_{J',J^z_a-\tau},\epsilon^{\sigma_d}_{J',J^z_b-\tau}}
\Big\{ W_e^0 h_e(\Delta^{\sigma_a,J,J^z_a}_{\sigma_c,J',J^z_a-\tau}) \Big. \\
\left. \times \braket{\psi^{\sigma_a,\gamma}_{J,J^z_a}|S^\tau|\psi^{\sigma_c,\gamma'}_{J',J^z_a-\tau}}
\braket{\psi^{\sigma_d,\gamma'}_{J',J^z_b-\tau}|(S^\tau)^\dag|\psi^{\sigma_b,\gamma}_{J,J^z_b}} \right. \\
\left. + N W_n^0 \sum_{j=J\pm1/2}\sum_{j'=J'\pm1/2} \frac{g_{N-1}(j')}{g_N(J')}
h_n(\Delta^{\sigma_a,J,J^z_a}_{\sigma_c,J',J^z_a-\tau}) \right. \\
\Big. \times \braket{\psi^{\sigma_a,\gamma}_{J,J^z_a}|I_k^\tau|\psi^{\sigma_c,\gamma'}_{J',J^z_a-\tau}}
\braket{\psi^{\sigma_d,\gamma'}_{J',J^z_b-\tau}|(I_k^\tau)^\dag|\psi^{\sigma_b,\gamma}_{J,J^z_b}} \Big\}
\label{eq:Gratef}
\end{multline}
where the first term of the sum in the brace accounts for the electron spin flips and the second incorporates spin flips in the nuclear spin bath.
For the electron contribution, the flip rate $W_e^0$ is assumed to be independent of the sign of $\tau$, and the degree of degeneracy, $g_N(J')$, cancels out by the summation over $\gamma'$.
For the nuclear spin flips, we also introduced an isotropic rate $W_n^0$ identical for all nuclei.
The sums over $\gamma$, $\gamma'$ were treated as described in Appendix~\ref{app:sfe} and yield sums over the quantum numbers $j$, $j'$ of the total nuclear spin's length in the reduced nuclear spin bath excluding the spin $k$, as well as the degree of degeneracy $g_{N-1}(j')$ as a prefactor.
The quantum numbers are restricted to the values $j=J\pm1/2$ and $j'=J'\pm1/2$, respectively, and enter the evaluation of the spin flip elements in the last line of Eq.~\eqref{eq:Gratef}, see Appendix \ref{app:sfe} for details.
The temperature-dependent function $h_{e,n}(\epsilon)$ entering the transition rates, Eq.~\eqref{eq:Gratef}, is chosen as
\begin{equation}
{h_{e,n}}(\epsilon) = \begin{cases} e^{-{\beta_{e,n}} \epsilon}, & \epsilon>0, \\
1, & \epsilon \leq 0, \end{cases}
\label{eq:hk}
\end{equation}
in accordance with Ref.~\cite{fischer_prb2020}.
Any transition reducing the system's energy, $\epsilon<0$, or leaving the energy unchanged, $\epsilon=0$, occurs with maximum rate $W_{e,n}^0$, whereas transitions increasing the hyperfine energy are exponentially suppressed with increasing inverse spin temperature $\beta_{e,n}$.
Since the above choice fulfills the relation $h_{e,n}(\epsilon) / h_{e,n}(-\epsilon) = e^{-\beta_{e,n} \epsilon}$, it properly describes the coupling to thermal reservoirs at the respective temperatures.
Such a choice also ensures the correct Boltzmann weighted distribution of the steady-state density matrix in thermal equilibrium, $\beta_e = \beta_n$.
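As a minimal illustration of how this choice enforces Boltzmann statistics, consider a single two-level system with splitting $\Delta$ whose populations obey a Pauli rate equation with the uphill and downhill rates $W h(\pm\Delta)$ of Eq.~\eqref{eq:hk}; this sketch is not the full spin problem, and all names and parameter values are illustrative:

```python
import math

def h(eps, beta):
    # Eq. (eq:hk): uphill transitions exponentially suppressed,
    # downhill and energy-conserving transitions at full rate
    return math.exp(-beta * eps) if eps > 0 else 1.0

def steady_ratio(delta, beta, W=1.0, dt=0.01, steps=100000):
    # populations p0 (energy 0) and p1 (energy delta), relaxed by
    # explicit Euler integration of the Pauli rate equation
    p0, p1 = 0.5, 0.5
    up, down = W * h(delta, beta), W * h(-delta, beta)
    for _ in range(steps):
        flow = down * p1 - up * p0      # net flow into the lower level
        p0 += flow * dt
        p1 -= flow * dt
    return p1 / p0
```

In the steady state the population ratio equals $h(\Delta)/h(-\Delta)=e^{-\beta\Delta}$, i.e., the Boltzmann factor, for any temperature.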
\section{Nuclear-spin polaron state}
\label{sec:polaronstate}
The Lindblad approach, providing the steady-state density operator of the system for a broad range of the temperatures $T_e$ and $T_n$, forms the basis for the study of the crossover from the disordered high-temperature state to the correlated nuclear-spin polaron state in the low-temperature regime.
\subsection{Electron-nuclear spin correlation functions. Anisotropy effects}
\label{sec:nuclear-polaron-correlators}
For the investigation of the nuclear-spin polaron formation, it is instructive to study the correlation of the charge carrier spin and the nuclear spins as shown in Ref.~\cite{fischer_prb2020} for the case of Ising coupling by comparing different criteria of nuclear spin polaron formation.
Indeed, for a positive sign of the hyperfine coupling constants and the anisotropy parameter, i.e., for $A_0>0$ and $\lambda>0$, the hyperfine energy of the system is minimized when the electron spin and the nuclear spins align in opposite directions, producing an anti-correlation of the electron and nuclear spins.
In this case, the value of the electron-nuclear spin correlator is negative.
If $A_0<0$, a positive correlation between the electron and nuclear spins is expected to form, i.e., the central spin and nuclear spin bath will be co-polarized.
However, the examination of the system at low temperatures reveals a profound dependence of the forming nuclear-spin polaron state on the anisotropy factor $\lambda$ of the hyperfine interaction, Eq.~\eqref{eq:bm}.
We illustrate the nature of the polaron state by the expectation value of the electron-nuclear spin correlation as a function of the inverse nuclear spin temperature $\beta_n$ at a fixed inverse electron spin temperature, $\beta_e \omega_h = 0.5$.
For the density operator entering the calculation of the expectation value of an observable $O$, $\left<O\right> = \mr{Tr}\left[ O \rho \right]$, we insert the steady state solution $\rho_0$ of Eq.~\eqref{eq:Lp}.
The data presented in Fig.~\ref{fig:sj} is obtained for a system with $N=1000$ nuclear spins in the box model approximation.
By varying the value of the hyperfine anisotropy $\lambda$, we select three physically particularly relevant cases: (a) the Ising case at $\lambda=0$ previously addressed in Ref.~\cite{fischer_prb2020}, (b) the isotropic case at $\lambda=1$, and (c) the case of strong in-plane hyperfine coupling at $\lambda=2$.
For each case, we study the spatial components of the electron-nuclear spin correlator, $\left< S^x J^x \right>$ (green lines) and $\left< S^z J^z \right>$ (orange lines), separately as well as the total correlation $\left< \mathbf{S} \mathbf{J} \right>$ (blue lines).
The component $\left< S^y J^y \right>$ is not displayed since it is identical to $\left< S^x J^x \right>$ due to the axial rotation ($U(1)$) symmetry of the Hamiltonian, Eq.~\eqref{eq:bm}.
For the same reason, correlators of different spin components, $\langle S^\alpha J^\beta\rangle$ with $\alpha \ne \beta$, vanish.
The flip rates for the electron spin and the nuclear spins are set to $W_e^0 = 10^{-3} \omega_h$ and $W_n^0 = 10^{-6} \omega_h$ providing a three orders of magnitude faster flipping of the electron spin compared to the nuclear spins.
This choice of the rates and the number $N$ is kept throughout the whole work.
\begin{figure}[t!]
\centering
\includegraphics[scale=1]{Fig1.eps}
\caption{Electron-nuclear spin correlation as a function of the inverse nuclear spin temperature $\beta_n$ for various anisotropy factors $\lambda$ of the hyperfine interaction. The inverse electron spin temperature is fixed at $\beta_e \omega_h =0.5$. The dashed vertical red lines correspond to the transition temperatures according to the analytical Eq.~\eqref{eq:tt}. Mean-field results are added as turquoise dotted lines.}
\label{fig:sj}
\end{figure}
The overall behavior of the correlators as a function of the inverse nuclear spin temperature is similar for all three cases:
at high nuclear spin temperatures (small $\beta_n$) all correlators are negligible.
With a reduction of the nuclear spin temperature (increase in $\beta_n$) at least one correlator $\langle S^\alpha J^\alpha \rangle$ and the total spin correlator $\langle \bm S \bm J\rangle$ become significant.
They increase with increasing $\beta_n$ and at $T_n\to 0$ ($\beta_n \to \infty$) saturate.
However, as functions of the anisotropy parameter $\lambda$, the correlators of different electron-nuclear spin components demonstrate different behavior.
In the limit of $\lambda =0$ depicted in Fig.~\ref{fig:sj}(a), the hyperfine interaction consists solely of the Ising contribution along the $z$ axis.
The spin flip terms, i.e.\ the transversal hyperfine contributions, are absent.
Therefore, the anti-correlation of the electron spin and the nuclear spins only builds up in $z$ direction, whereas the correlation functions of transversal components $\left< S^x J^x \right>=\langle S^y J^y\rangle$ remain zero.
At low temperatures (large $\beta_e$ and $\beta_n$) the anti-correlation per nuclear spin reaches the maximum value $1/4$ determined by the product of the electron spin length and the spin length of an individual nuclear spin~\cite{fischer_prb2020}.
Since the coupling of the transversal components is absent in the Ising limit, the full correlator $\left< \mathbf{S} \mathbf{J} \right>$ is solely made up by the $z$ contribution.
Interestingly, a similar behavior is displayed by any system with an anisotropy factor in the range $0 \leq \lambda < 1$, for which the hyperfine interaction in $z$ direction is stronger than the $x$ and $y$ components.
Our calculations show that within the numerical accuracy for $\lambda \in [0,1)$ the results coincide with those shown in Fig.~\ref{fig:sj}(a).
In the isotropic case, $\lambda = 1$, see Fig.~\ref{fig:sj}(b), the nuclear-spin polaron state that forms at large $\beta_n$ has different characteristics.
Due to the lack of spatial preference, the polaron state is isotropic:
The correlators $\left< S^x J^x \right>=\left< S^y J^y \right>$ and $\left< S^z J^z \right>$ build up equally with decreasing temperature.
As a result, the full correlator $\left< \mathbf{S} \mathbf{J} \right>$ is made up by equal contributions for the three spatial directions.
At low nuclear spin temperatures ($\beta_n\to \infty$) it reaches $\left< \mathbf{S} \mathbf{J} \right>/N =-1/4$, whereas each spatial component contributes with the value $-1/12$.
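These saturation values can be estimated from the ground manifold of the isotropic box model: in the fully thermalized limit ($\beta_e,\beta_n\to\infty$), the state is an equal mixture of the lowest-energy eigenstates $\ket{\psi^{-,\gamma}_{N/2,J^z}}$. The sketch below (pure Python, our own helper names) evaluates $\langle \mathbf{S}\mathbf{J}\rangle/N=\epsilon^-/(A_0 N)$ and the manifold average of $\langle S^z J^z\rangle/N$ from Eqs.~\eqref{eq:bm2e} and \eqref{eq:bm2s}:

```python
import math

def ground_multiplet_correlators(N):
    # isotropic box model (lambda = 1, A0 = 1): the ground manifold is
    # formed by the 2J = N states |psi^{-,gamma}_{J,Jz}> with J = N/2,
    # Jz = -J+1, ..., J, all at energy eps^- = -(J+1)/2
    J = N // 2
    e = -(J + 1) / 2.0
    sj = e                              # <S.J> = <H>/A0 for lambda = 1
    szjz = 0.0
    for Jz in range(-J + 1, J + 1):
        a = -0.5 * Jz                   # upper diagonal of the 2x2 block
        t = 0.5 * math.sqrt(J * (J + 1) - Jz * (Jz - 1))
        u = (e - a) / t                 # amplitude ratio d/c in Eq. (eq:bm2s)
        c2 = 1.0 / (1.0 + u * u)        # |c|^2
        d2 = 1.0 - c2                   # |d|^2
        szjz += -0.5 * Jz * c2 + 0.5 * (Jz - 1) * d2
    szjz /= 2 * J                       # equal-weight manifold average
    return sj / N, szjz / N

sj, szjz = ground_multiplet_correlators(1000)
# sj approaches -1/4 and szjz approaches -1/12 for large N
```

Up to finite-size corrections of order $1/N$, this reproduces the quoted limits $-1/4$ and $-1/12$ and the symmetry relation $\langle S^zJ^z\rangle = \langle\mathbf S\mathbf J\rangle/3$.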
An anisotropy factor $\left| \lambda \right| > 1$ is relevant, e.g., for light holes in QDs, where $\lambda = -2$ \cite{testelin_prb2009,hackmann_prl2015}.
The sign of $\lambda$ does not change the overall behavior of the system but only affects the sign of the transversal electron-nuclear spin correlator, i.e., it determines whether the electron spin and the nuclear spins align parallel or anti-parallel within the $(xy)$ plane. We therefore restrict ourselves to positive values of $\lambda$.
The results for $\lambda=2$ are depicted in Fig.~\ref{fig:sj}(c).
Here the transversal contributions of the hyperfine interaction dominate over the $z$ contribution.
Thus, an anti-correlation of the electron and nuclear spins builds up within the $(xy)$ plane, while no (anti-)correlation in $z$ direction arises.
Consequently, the total anti-correlation, $-\left< \mathbf{S} \mathbf{J} \right>,$ is split between the $x$ and $y$ component which have a maximum value of $1/6$ per nuclear spin.
Note that for $\left| \lambda \right| > 1$ the crossover regime, where the nuclear-spin polaron state starts to form (indicated by the dashed vertical lines in Fig.~\ref{fig:sj}) is shifted to higher temperatures.
This effect is discussed in more detail in Sec.~\ref{sec:temperature-crit} below.
\subsection{Mean-field approach to the anisotropic system}
\label{sec:mf}
For a deeper understanding of the nuclear-spin polaron state that forms in a spin system with anisotropic hyperfine coupling, we refer to a mean-field approach which previously was developed by Merkulov for the isotropic system \cite{merkulov_pss1998}.
In the mean-field approximation, the electron spin is assumed to experience the average effective field generated by the nuclear spin polarization, i.\ e., the average Overhauser field $\left< \mathbf{B_N} \right>$.
In turn, the nuclear spins are subject to the average effective field of the electron spin, the average Knight field $\left< \mathbf{B_K} \right>$.
These effective fields result in the polarization of the respective spin systems in the form
\begin{subequations}
\begin{align}
\left< \mathbf{S} \right> &= - \frac{\left< \mathbf{B_N} \right>}{2 \left| \left< \mathbf{B_N} \right> \right|} \tanh \left( \frac{\beta_e \left| \left< \mathbf{B_N} \right> \right|}{2} \right), \label{eq:mfs} \\
\left< \mathbf{J} \right> &= - \frac{N \left< \mathbf{B_K} \right>}{2 \left| \left< \mathbf{B_K} \right> \right|} \tanh \left( \frac{\beta_n \left| \left< \mathbf{B_K} \right> \right|}{2} \right) , \label{eq:mfj}
\end{align}
\end{subequations}
where the definitions of the Overhauser field and the Knight field include the anisotropy parameter $\lambda$ of the hyperfine interaction
\begin{subequations}
\begin{align}
\left< \mathbf{B_N} \right> &= A_0 \left( \lambda \left< J^x \right>, \lambda \left< J^y \right>, \left< J^z \right> \right)^T , \label{eq:bn} \\
\left< \mathbf{B_K} \right> &= A_0 \left( \lambda \left< S^x \right>, \lambda \left< S^y \right>, \left< S^z \right> \right)^T , \label{eq:bk}
\end{align}
\end{subequations}
and the fields are measured in energy units.
To obtain the self-consistency equation for the total nuclear spin $\left< \mathbf{J} \right>$, Eq.~\eqref{eq:mfs} is inserted into Eq.~\eqref{eq:mfj} taking into account the definitions of $\left< \mathbf{B_N} \right>$ and $\left< \mathbf{B_K} \right>$,
\begin{multline}
\left< \mathbf{J} \right> = \frac{N}{2 L_1}
\tanh \left[ \frac{\beta_n A_0}{4} \frac{L_2}{L_1}
\tanh \left( \frac{\beta_e A_0}{2} L_1 \right) \right] \\
\times \left( \lambda \left< J^x \right>, \lambda \left< J^y \right>, \left< J^z \right>\right)^T \label{eq:mfsc}
\end{multline}
where we introduced $L_1 = \sqrt{\lambda^2 ( \left< J^x \right>^2 + \left< J^y \right>^2) + \left< J^z \right>^2}$ and $L_2 = \sqrt{\lambda^4 ( \left< J^x \right>^2 + \left< J^y \right>^2) + \left< J^z \right>^2}$ for brevity.
In order to obtain the critical temperature of the polaron formation let us denote
the angle between the vector $\langle \mathbf J\rangle$ and the $z$ axis by $\theta\in [0,\pi]$.
Since the system is isotropic in the $(xy)$ plane, the azimuthal angle of $\langle \mathbf J\rangle$ is unimportant.
As a first step, we solve Eq.~\eqref{eq:mfsc} for the absolute value $\left| \left< \mathbf{J} \right> \right|$ and find that a polaron can form in the mean-field approach provided that the condition
\begin{equation}
\frac{NA_0^2 \beta_e \beta_n}{16} \sqrt{\lambda^4 \sin^2 \theta + \cos^2 \theta } > 1
\label{eq:mfcond}
\end{equation}
is fulfilled.
Thus, the parameter $\lambda$ induces a modification of the critical temperatures especially for angles $\theta$ close to $\pi/2$.
As a next step we determine the orientation of the spins in the polaron by solving the self-consistency equation for the angle $\theta$.
It can be derived from Eq.~\eqref{eq:mfsc} using the relation $\tan^2 \theta = ( \left< J^x \right>^2 + \left< J^y \right>^2 ) / \left< J^z \right>^2$ and taking into account that the left and right hand sides of Eq.~\eqref{eq:mfsc} should be parallel:
\begin{equation}
\tan^2 \theta = \lambda^4 \tan^2 \theta .
\label{eq:mfsctheta}
\end{equation}
Equation \eqref{eq:mfsctheta} reveals the potential orientations of the polaron state with respect to $\lambda$.
We find that in the isotropic case, $\lambda=1$, the relation holds for arbitrary $\theta$.
Otherwise Eq.~\eqref{eq:mfsctheta} is only consistent with three solutions for the angle $\theta$:
$\theta = 0$, $\theta = \pi$, or $\theta = \pi/2$.
A stability analysis, see Appendix \ref{app:mfstability}, demonstrates that for $\lambda < 1$ the states with $\theta = 0$ and $\theta = \pi$ are stable and $\theta = \pi/2$ is an unstable solution, whereas for $\lambda > 1$ the categorization is switched, i.e., $\theta = \pi/2$ is stable and $\theta = 0$, $\theta = \pi$ are not.
Thus, the mean-field calculations predict that the nuclear-spin polaron forms along the $z$ axis for $\lambda<1$ (easy-axis situation) and within the $(xy)$ plane for $\lambda>1$ (easy-plane situation).
As a result, the polaron formation condition within the mean-field approach can be summarized as:
\begin{equation}
\label{mean:field:fin}
\frac{NA_0^2 \beta_e \beta_n}{16} >
\begin{cases}
1, \quad |\lambda|\leqslant 1,\\
\lambda^{-2}, \quad |\lambda|>1.
\end{cases}
\end{equation}
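The criterion of Eq.~\eqref{mean:field:fin} can be encoded compactly (a hypothetical helper for illustration; the parameter values are arbitrary):

```python
def polaron_forms(N, A0, beta_e, beta_n, lam):
    """Mean-field polaron criterion, Eq. (mean:field:fin)."""
    threshold = 1.0 if abs(lam) <= 1 else lam**(-2)
    return N*A0**2*beta_e*beta_n/16 > threshold

# At a fixed temperature pair, the easy-plane case (lam > 1) forms more easily:
print(polaron_forms(1000, 0.1, 2.0, 0.4, 1.0))   # condition 0.5 > 1 is False
print(polaron_forms(1000, 0.1, 2.0, 0.4, 2.0))   # condition 0.5 > 1/4 is True
```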
Naturally, the symmetry breaks in such a way that polarizations build up to maximize the absolute value of the hyperfine coupling.
This analysis is consistent with the results obtained above, in Sec.~\ref{sec:nuclear-polaron-correlators}.
The mean-field solutions for the electron-nuclear spin correlation $\left< S^\alpha J^\alpha \right>/N$ (with $\alpha\in \left\{x,y,z \right\}$), for those spatial components $\alpha$ in which the anti-correlation builds up in the low-temperature regime, are added in Fig.~\ref{fig:sj} (dotted turquoise lines) for comparison with the data obtained by our approach.
We find that within the presented temperature range, the two approaches nearly coincide.
The mean-field solution, however, exhibits a sharper transition to the polaron state at the critical temperature consistent with a phase transition even in non-equilibrium, while a smooth crossover is observed in the finite system, see Ref.~\cite{fischer_prb2020} for more details.
\subsection{Crossover temperature for the polaron formation}
\label{sec:temperature-crit}
The mean-field approach, Eq.~\eqref{mean:field:fin}, predicts the formation of a nuclear-spin polaron state below the critical temperatures given by
\begin{equation}
\beta_{e,c} \beta_{n,c} = \frac{16}{N \tilde{A}_0^2} .
\label{eq:mfct}
\end{equation}
The equation combines the criteria for the polaron state along the $z$ direction ($\theta=0$/$\theta=\pi$) and for the polaron oriented within the $(xy)$ plane ($\theta=\pi/2$) by introducing a rescaled hyperfine coupling constant
\begin{equation}
\tilde{A}_0= \begin{cases} A_0, & \lambda \leq 1, \\
\lambda A_0, & \lambda > 1. \end{cases}
\label{eq:tildea}
\end{equation}
In Ref.~\cite{fischer_prb2020} we derived a more complex temperature criterion for the polaron-state formation based on the rate-equation formalism taking into account the finite number of nuclear spins.
We substitute the coupling constant $\tilde{A}_0$ into Eq.\ (31) of Ref.\ \cite{fischer_prb2020} and obtain the temperature criterion
\begin{equation}
\beta_{n,t} = \frac{4}{\tilde{A}_0} \artanh \left( \frac{4}{(N+2) \beta_{e,t} \tilde{A}_0} \right)
\label{eq:tt}
\end{equation}
for the onset of polaron formation generalized to an arbitrary anisotropy.
This defines a line in the $(\beta_{n},\beta_{e})$ plane.
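For weak coupling, $\artanh x \approx x$, Eq.~\eqref{eq:tt} reduces to the mean-field result of Eq.~\eqref{eq:mfct} up to a factor $N/(N+2)$; a short numerical sketch with illustrative parameter values (not those of the paper) confirms this:

```python
import numpy as np

def beta_n_crossover(beta_e, N, A0, lam):
    """Crossover inverse nuclear temperature, Eq. (tt), with A0 rescaled per Eq. (tildea)."""
    A = A0 if lam <= 1 else lam*A0
    return (4/A)*np.arctanh(4/((N + 2)*beta_e*A))

N, A0, beta_e = 1000, 0.1, 2.0
bn_iso = beta_n_crossover(beta_e, N, A0, lam=1.0)
bn_mf = 16/(N*A0**2*beta_e)      # mean-field estimate, Eq. (mfct)
print(bn_iso, bn_mf)             # close for large N
```

Note that for $\lambda>1$ the rescaled coupling $\tilde{A}_0=\lambda A_0$ shifts the crossover to smaller $\beta_n$.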
As a common indicator for the crossover to the nuclear-spin polaron state for all values of the hyperfine parameter $\lambda$, we focus on the total electron-nuclear spin correlation since we found that $\left< \mathbf{S}\mathbf{J}\right>$ is maximized consistently in the polaron state, cf. Fig.~\ref{fig:sj}.
The crossover temperature line extracted from the master equation approach is then indicated by the rise of the fluctuations of $\left< \mathbf{S}\mathbf{J}\right>$,
\begin{equation}
\label{fluct:noise}
\sigma^2_{SJ} = \left< (\mathbf{S}\mathbf{J})^2 \right> - \left< \mathbf{S}\mathbf{J} \right>^2,
\end{equation}
which we plotted as a color contour plot in the $(\beta_{n},\beta_{e})$ plane for $\lambda=1$ in Fig.~\ref{fig:sigmasj}(a) and for $\lambda=2$ in Fig.~\ref{fig:sigmasj}(b).
The temperature line defined in Eq.~\eqref{eq:tt} (depicted as a red dotted line) matches the line formed by the maximum of $\sigma^2_{SJ}$.
For comparison, the mean-field critical temperature, Eq.~\eqref{eq:mfct}, is added as well (white line).
\begin{figure}[t!]
\centering
\includegraphics[scale=1]{Fig2.eps}
\caption{Fluctuations $\sigma_{SJ}^2$ of the electron-nuclear spin correlator, Eq.~\eqref{fluct:noise}, as a function of the effective inverse nuclear spin temperature $\beta_n$ and the effective inverse electron spin temperature $\beta_e$ for different values of the hyperfine anisotropy parameter, (a) $\lambda=1$ and (b) $\lambda=2$, and (c) at a fixed electron spin temperature, $\beta_e \omega_h = 0.4$.}
\label{fig:sigmasj}
\end{figure}
For the physical interpretation of the fluctuations $\sigma^2_{SJ}$ we refer to the case of equal temperatures $\beta_e = \beta_n$:
At low temperatures, the spins are aligned either within the $(xy)$ plane or in $z$ direction (depending on $\lambda$), and the hyperfine energy is proportional to the spin correlator $\left< \mathbf{S}\mathbf{J}\right>$.
Therefore the fluctuations of the correlator in thermal equilibrium are proportional to the heat capacity of the system which is expected to display a discontinuity at the critical temperature in the Landau theory of phase transitions \cite{landau_lifshitz_book1}.
Since we consider a finite system with $N=1000$ nuclear spins here, the system does not exhibit a genuine phase transition but a crossover behavior that becomes sharper with increasing $N$.
The peak in the fluctuations $\sigma^2_{SJ}$ as a function of $\beta_n$, see Fig.~\ref{fig:sigmasj}(c), is relatively sharp, and its rising edge is positioned at the crossover temperature according to Eq.~\eqref{eq:tt} (red dashed vertical line for $\lambda\leq1$, red dotted vertical line for $\lambda=2$).
It is noteworthy that for $\lambda=2$ the peak of the fluctuations $\sigma^2_{SJ}$ at a fixed electron temperature, $\beta_e \omega_h = 0.4$, is less pronounced than for $\lambda\leq1$ due to the shift of the polaron crossover to smaller inverse nuclear temperatures $\beta_n$ when $\lambda>1$.
\subsection{Nuclear distribution functions}
\label{sec:nucldistfunc}
\begin{figure*}[t!]
\centering
\includegraphics[scale=1]{Fig3.eps}
\caption{Distribution function of the nuclear spin quantum numbers $J^z$ (upper panels) and ${J^p}^2$ (lower panels) for three typical values of the anisotropy factor $\lambda$ of the hyperfine interaction with $N=1000$ nuclear spins. The inverse nuclear spin temperature $\beta_n$ is displayed on the horizontal axis; the inverse electron spin temperature is fixed at $\beta_e \omega_h =0.5$.}
\label{fig:gjxz}
\end{figure*}
Aiming at a comprehensive investigation of the polaron formation beyond the mean-field approach, we focus on the distribution functions of the nuclear spin quantum numbers which provide an ideal tool to study the reorientation of the nuclear spins related to the formation of a nuclear-spin polaron state in the cooled system.
To this end, we consider again the steady-state density operator of Eq.~\eqref{eq:Lp} at a given electron and nuclear spin temperature and define the distribution function,
\begin{eqnarray}
g(J^z) &=& \sum_{J,\sigma} (c^\sigma_{J,J^z})^2 p^J_{J^z,\sigma;J^z,\sigma}
\nonumber \\
&& + \sum_{J,\sigma}(d^\sigma_{J,J^z+1})^2 p^J_{J^z+1,\sigma;J^z+1,\sigma}
\end{eqnarray}
by transforming from the energy eigenbasis into the spin $z$ basis and summing all contributions with a fixed nuclear spin quantum number $J^z$.
In addition, we introduce the quantum number of the perpendicular component of the total nuclear spin,
\begin{equation}
{J^p}^2 = J(J+1) - {J^z}^2 ,
\end{equation}
that is deduced from the quantum numbers $J$ and $J^z$ and is restricted to ${J^p}^2 \in \left\{0,\dots,N/2(N/2+1)\right\}$.
The related distribution function,
\begin{multline}
g({J^p}^2) = \sum_{J,J^z,\sigma} \left[ (c^\sigma_{J,J^z})^2 p^J_{J^z,\sigma;J^z,\sigma} \right. \\
\left. + (d^\sigma_{J,J^z+1})^2 p^J_{J^z+1,\sigma;J^z+1,\sigma} \right] \delta_{{J^p}^2, (J(J+1)-{J^z}^2)} ,
\end{multline}
is obtained by summation of all contributions to a given value of ${J^p}^2$ analogously to $g(J^z)$.
To display the distribution function $g({J^p}^2)$, the data is processed into a histogram with appropriate bin size (typically $100$ bins within the range ${J^p}^2 \in \left[0,N/2(N/2+1)\right]$).
The distribution functions, $g(J^z)$ and $g({J^p}^2)$, as a function of the effective inverse nuclear spin temperature $\beta_n$ for fixed $\beta_e \omega_h = 0.5$ are displayed in Fig.~\ref{fig:gjxz}.
In the high-temperature limit (small $\beta_n$), the nuclear spins are randomly aligned and
$J^z$ follows an approximately Gaussian distribution centered around zero, independent of the hyperfine parameter $\lambda$.
In the high-temperature limit the nuclear spin system is isotropic.
Hence, the distribution of ${J^p}^2$ at high temperatures is proportional to $\exp (-2{J^p}^2/N)$.
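The exponential law can be checked with a simple classical sampling sketch, treating the Cartesian components of $\mathbf{J}$ at infinite temperature as sums of $N$ independent spin-$1/2$ projections (an approximation introduced here for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 1000, 20000                        # nuclear spins, Monte Carlo samples
Jx = rng.choice([-0.5, 0.5], size=(M, N)).sum(axis=1)
Jy = rng.choice([-0.5, 0.5], size=(M, N)).sum(axis=1)
Jp2 = Jx**2 + Jy**2

# For g(Jp2) ~ exp(-2*Jp2/N): mean N/2 and P(Jp2 > N/2) = 1/e
print(Jp2.mean(), (Jp2 > N/2).mean())
```

The sample mean of ${J^p}^2$ lands close to $N/2$ and the tail weight above $N/2$ close to $1/e$, as expected for an exponential distribution with mean $N/2$.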
However, when decreasing the temperature (increasing $\beta_n$), the distributions are altered below a certain point:
The behavior of the system is now determined by the hyperfine interaction and its anisotropy.
In the Ising limit, where the nuclear-spin polaron state is oriented along the positive/negative $z$ direction, we find the two possible orientations reflected by two branches forming for $g(J^z)$, depicted in Fig.~\ref{fig:gjxz}(a).
These results fully match the data in Ref.~\cite{fischer_prb2020} obtained by the kinetic rate equations taking into account the diagonal elements of the density operator.
Naturally, the ${J^p}^2$ component remains distributed closely around zero at $\lambda=0$, see Fig.~\ref{fig:gjxz}(d).
For a better illustration, vertical cuts through the panels of Fig.~\ref{fig:gjxz} are displayed in Fig.~\ref{fig:gjxzbn} for two different values of $\beta_n$.
Here the two peaks in $g(J^z)$ (yellow lines for $\lambda=0$) move further apart with increasing $\beta_n$ from $\beta_n \omega_h = 80$ in the left hand panels to $\beta_n \omega_h = 270$ in the right hand panels.
As a comparison we added the data for $\lambda=0.5$ in Fig.~\ref{fig:gjxzbn} as well.
In this case we find behavior similar to the Ising limit, though the peak of $g({J^p}^2)$ at $\beta_n \omega_h = 270$ is somewhat broader, indicating that for $0<\lambda<1$ certain correlations also appear in the in-plane nuclear spin components.
For the system with isotropic hyperfine interaction, the distributions $g(J^x)$, $g(J^y)$ and $g(J^z)$ coincide, see Fig.~\ref{fig:gjxz}(b) and Figs.~\ref{fig:gjxzbn}(a) and \ref{fig:gjxzbn}(b) (blue lines) for $g(J^z)$ as an example.
The narrow Gaussian distribution of $J^z$ of the high-temperature regime transforms into a wide and almost uniform distribution at low temperatures.
The range of the uniform distribution broadens with decreasing temperature until the full range $J^z \in \left[ -N/2, N/2\right]$ is covered.
The distributions for $\lambda=1$ at fixed nuclear spin temperatures, see Fig.~\ref{fig:gjxzbn} (blue lines), are nearly flat.
The uniform distribution of the quantum number $J^z$ complies with the uniform distribution of the polaron orientation on the Bloch sphere.
Accordingly the distribution of ${J^p}^2$ is roughly given by $g({J^p}^2) = 1/(J(J+1)-{J^p}^2)^{1/2}$ at low temperatures, see Fig.~\ref{fig:gjxz}(e) and \ref{fig:gjxzbn}(d).
In an anisotropic system with $\lambda>1$, the polaron forms within the $(xy)$ plane.
The nuclear distribution functions reflect this fact by narrowing the distribution $g(J^z)$ around $J^z=0$ when lowering the temperature starting from the initial Gaussian distribution.
This is depicted in Fig.~\ref{fig:gjxz}(c) as well as Fig.~\ref{fig:gjxzbn}(a) and Fig.~\ref{fig:gjxzbn}(b) for $\lambda=2$.
At the same time the weight in the distribution of ${J^p}^2$ moves from ${J^p}^2=0$ to the maximum value ${J^p}^2=N/2(N/2+1)$ resulting from the maximum quantum number $J=N/2$ and the minimum value $J^z=0$.
\begin{figure}[t]
\centering
\includegraphics[scale=1]{Fig4.eps}
\caption{Distribution function of the nuclear spin quantum numbers $J^z$ (upper panels) and ${J^p}^2$ (lower panels) for various anisotropy factors $\lambda$ of the hyperfine interaction, see legend in panel (c). The inverse nuclear spin temperature is set to $\beta_n \omega_h = 80$ [panels (a) and (c)] or $\beta_n \omega_h = 270$ [panels (b) and (d)]; the inverse electron spin temperature is fixed at $\beta_e \omega_h =0.5$.}
\label{fig:gjxzbn}
\end{figure}
\subsection{Quantum phase transition}
\label{sec:quantum-phase-transition}
The dependence of the nuclear-spin polaron state on the hyperfine anisotropy parameter $\lambda$ also tracks the transition of the ground state of the Hamiltonian, Eq.~\eqref{eq:bm}, at a critical coupling $\lambda_c=1$.
The lowest eigenenergy of the eigenenergies stated in Eqs.~\eqref{eq:bm1e} and \eqref{eq:bm2e}
is always given by Eq.~\eqref{eq:bm2e} for the case $\sigma=-$ and a maximum value of $J$, i.e., $J=N/2$ (we recall that we consider $N$ to be even), independent of the parameter $\lambda$, and takes the form
\begin{align}
\epsilon^-_{J,J^z} &= - \frac{A_0}{4} - \frac{A_0}{2} \Bigg\{ \frac14 + \lambda^2 J(J+1) \Bigg. \notag \\
& \qquad \Bigg. + \left( 1 - \lambda^2 \right) J^z \left(J^z - 1 \right) \Bigg\}^{1/2} .
\end{align}
We note that replacing $J^z$ by $-(J^z-1)$ results in the same eigenenergy.
In the above formulation it becomes clear that for $\lambda^2 < 1$ the term $J^z(J^z-1)$ has to be maximized, and therefore the ground state results from $J^z=-N/2+1$ or $J^z=N/2$.
At $\lambda =1$, the value of $J^z$ does not influence the eigenenergy, and the ground state is $N$-fold degenerate in $J^z$.
By contrast, for $\lambda^2 > 1$, the ground state requires a minimum of the term $J^z(J^z-1)$, which corresponds to $J^z = 0$ or $J^z=1$.
Therefore, the system undergoes a quantum phase transition at $\lambda_c\equiv1$ with a change of the ground state degeneracy from a twofold degenerate ground state for $|\lambda|<1$ or $|\lambda|>1$ to a degeneracy of $N$ for $|\lambda|=\lambda_c$.
For odd $N$ the degeneracy of the ground state is $1$ for $|\lambda|>\lambda_c$.
The difference between the two ground states for $\lambda^2<1$ and those for $\lambda^2>1$ lies in the fact that for $\lambda^2<1$ the two ground states are not connected via single spin-flip processes.
Hence at zero temperature, thermal spin-flip processes between the two ground states are inhibited.
For $\lambda^2 > 1$ however the two ground states with $J^z = 0$ and $J^z=1$ are directly connected by a single spin-flip process.
The coupling to the environment provides non-zero transition matrix elements as stated in Eqs.~\eqref{eq:tmes} and \eqref{eq:tmei} such that even at zero temperature fluctuations between the two ground states will take place.
In the mean-field approach, presented in Sec.~\ref{sec:bm}, the difference in the nature of the ground state translates to two disconnected polaron states for $\lambda^2<1$ whereas the nuclear spin polaron forms isotropically within the $(xy)$ plane for $\lambda^2>1$.
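The change of the ground state can be verified directly by scanning the eigenenergy branch over $J^z$ (a small sketch; we assume $J^z$ runs over $-J+1,\dots,J$ for the $\sigma=-$ branch and use arbitrary small $N$ and $A_0=1$):

```python
import numpy as np

def energy(Jz, N, lam, A0=1.0):
    """sigma = -, J = N/2 eigenenergy branch of the anisotropic box model."""
    J = N/2
    return -A0/4 - (A0/2)*np.sqrt(0.25 + lam**2*J*(J + 1) + (1 - lam**2)*Jz*(Jz - 1))

N = 10
Jz = np.arange(-N//2 + 1, N//2 + 1)       # assumed range of the branch
for lam in (0.5, 2.0):
    e = energy(Jz, N, lam)
    gs = Jz[np.isclose(e, e.min())]
    print(lam, gs)                         # lam = 0.5: Jz in {-4, 5}; lam = 2: Jz in {0, 1}
```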
\begin{figure}[t!]
\centering
\includegraphics[scale=1]{Fig5.eps}
\caption{Distribution function of the nuclear spin quantum number $J^z$. (a) Dependence on the anisotropy factor $\lambda$ of the hyperfine interaction for fixed inverse spin temperatures, $\beta_n \omega_h = 1000$, $\beta_e \omega_h =0.5$. (b) and (c) Dependence on the inverse nuclear spin temperature with $\lambda$ adjusted slightly below (b) or above one (c); $\beta_e \omega_h =0.5$.}
\label{fig:gjxzlambda}
\end{figure}
The quantum phase transition at $\lambda_c=1$ translates to a rapid change of the nuclear distribution function $g(J^z)$ at low temperatures, see Fig.~\ref{fig:gjxzlambda}(a).
For $\lambda<1$, the distribution function has two very sharp peaks at $J^z/N = \pm 0.5$ (which therefore are hard to detect in Fig.~\ref{fig:gjxzlambda}(a)), whereas for $\lambda>1$ the distribution displays a single maximum around $J^z=0$.
In the isotropic limit, $\lambda=1$, $g(J^z)$ covers the full range of potential values of $J^z$ uniformly.
Around the point of isotropy we find a blurred behavior as a result of the finite non-zero temperatures.
The nuclear spin distribution functions for systems close to the quantum critical point reveal that even a slight anisotropy leads to a well distinguished signature of both phases at low temperatures.
We picked $\lambda=0.99<\lambda_c$ and $\lambda=1.01>\lambda_c$ as an example
and plotted the temperature evolution of the distribution function in Fig.~\ref{fig:gjxzlambda}(b) and Fig.~\ref{fig:gjxzlambda}(c) respectively.
For $\lambda=0.99$, the two polaron branches corresponding to the opposite spin alignments in $z$ direction appear similar to Fig.~\ref{fig:gjxz}(a).
In comparison to the results in the Ising limit $\lambda=0$, the branches are just slightly broadened at intermediate inverse nuclear spin temperature $\beta_n$.
The data for $\lambda=1.01$ exhibits similar deviations from the case of stronger anisotropy $\lambda=2$ in Fig.~\ref{fig:gjxz}(c), whereas the overall sharpening of the nuclear distribution function $g(J^z)$ around $J^z=0$ is the same.
For $\lambda=1.01$, however, $g(J^z)$ first resembles the isotropic case, Fig.~\ref{fig:gjxz}(b), in the regime of intermediate temperatures close to the crossover temperature.
Only upon further lowering the nuclear spin temperature does the distribution focus around $J^z=0$.
Note that the anisotropy factor of the hyperfine interaction of electron spins in semiconductor nanostructures coincides with the quantum critical point $\lambda_c=1$.
Deviations from an isotropic system are characteristic for localized hole spins and significantly affect the polaron formation.
\section{Temporal spin fluctuations}
\label{sec:fluct}
Nuclear-spin polaron formation strongly affects the temporal dynamics of the electron and nuclear spin degrees of freedom.
The direct access to it is provided by the time-dependent spin correlation functions.
In this section we study electron and nuclear spin fluctuations in time domain and highlight the role of the nuclear-spin polaron effects.
\subsection{Electron spin fluctuations}
\label{sec:szsz}
The temporal fluctuations $\left<S^z(0)S^z(t)\right>$ of the electron spin are accessible by optical measurements of the electron spin noise~\cite{aleksandrov81,Oestreich:rev,smirnov:SNS:rev}.
In terms of the Lindblad-master equation formalism, Eq.~\eqref{eq:La}, the electron spin fluctuations are calculated by the quantum mechanical trace with the steady-state density operator $\rho_0$,
\begin{equation}
\begin{split}
C_S^z(t) = \left< S^z(0) S^z(t) \right> &= \tr \left[ \rho_0 S^z S^z(t) \right] \\
&= \tr \left[ S^z e^{\mathcal{L}t} (S^z \rho_0) \right],
\end{split}
\label{eq:szszt}
\end{equation}
where $\mathcal{L}$ is the Liouvillian operator determining the time evolution of the open quantum system and the superoperator $\exp(\mathcal{L}t)$ is applied to $S^z \rho_0$ \cite{Lax1963}.
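The structure of Eq.~\eqref{eq:szszt} can be illustrated with a deliberately stripped-down toy model (not the full CSM): a single electron spin with $H=0$ coupled to two infinite-temperature flip channels of equal rate $W$, for which the correlator is analytically $C_S^z(t) = (1/4)e^{-2Wt}$. The sketch below builds the vectorized (column-stacked) Liouvillian and applies the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

def dissipator(L):
    """Column-stacked superoperator of the Lindblad dissipator for jump operator L."""
    LdL = L.conj().T @ L
    I = np.eye(L.shape[0])
    return np.kron(L.conj(), L) - 0.5*np.kron(I, LdL) - 0.5*np.kron(LdL.T, I)

W = 0.3
sp = np.array([[0., 1.], [0., 0.]])           # sigma^+
Sz = np.diag([0.5, -0.5])
Liou = W*dissipator(sp) + W*dissipator(sp.T)  # equal up/down flip rates, H = 0
rho0 = 0.5*np.eye(2)                          # steady state (infinite temperature)

def C_Sz(t):
    v = (Sz @ rho0).flatten(order='F')        # vec(S^z rho_0)
    X = (expm(Liou*t) @ v).reshape(2, 2, order='F')
    return np.trace(Sz @ X).real

print(C_Sz(0.0), C_Sz(1.0))                   # 0.25 and 0.25*exp(-0.6)
```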
Figure \ref{fig:szszt} presents the electron spin autocorrelation as a function of time for three distinct values of the hyperfine anisotropy parameter $\lambda$.
The initial value of the electron spin correlator yields $C_S^z(0)=1/4$ regardless of the temperature since both electron spin components are equiprobable.
At low temperatures the electron spin displays long-lived correlations related to the spin-polaron formation, with a lifetime that depends on the choice of $\lambda$.
In the high-temperature regime, by contrast, the autocorrelation function decays completely on a timescale given by the inverse thermal electron spin-flip rate $\tau_s =1/W_e^0$ ($= 10^{3}/ \omega_h$ for our choice of parameters), while also exhibiting nontrivial dynamics at shorter timescales.
\begin{figure}[t]
\centering
\includegraphics[scale=1]{Fig6.eps}
\caption{Temporal fluctuations of the electron spin components for different values of the hyperfine anisotropy parameter $\lambda$ in (a), (b) and (c). Results for various effective inverse nuclear spin temperatures $\beta_n$ are presented respectively whereas $\beta_e \omega_h = 0.5$ is kept constant.}
\label{fig:szszt}
\end{figure}
In the Ising limit, $\lambda=0$, the correlator decays to zero on a timescale proportional to $\tau_s$ at high temperatures, see Fig.~\ref{fig:szszt}(a) (red line).
In this situation the hyperfine interaction does not affect the electron spin-$z$ component, and its decay is fully controlled by the reservoir-induced spin-flip processes.
However, when the effective nuclear spin temperature is reduced to the crossover temperature where the polaron formation sets in, the correlator $C_S^z(t)$ no longer decays completely within the presented time range (up to $t\omega_h=10^9$) but instead saturates at a plateau with a finite non-zero value (orange line).
The degree of correlation at this plateau increases with the lowering of temperatures.
At very low temperatures, e.g., $\beta_n \omega_h = 1000$ (blue line) deep in the polaronic phase, no decay is visible anymore, and the full correlation of the electron spin persists for the full time interval presented in the figure.
With lowering the temperatures, the reservoir induced spin-flip processes become more and more suppressed, cf.\ Eq.~\eqref{eq:hk}, which shifts the decay of $C_S^z(t)$ to longer time scales.
However, we expect a decay of $C_S^z(t)$ to zero on a prolonged time scale for non-zero temperatures as a result of the exponentially suppressed but non-zero flip rates.
In the isotropic system, the spin-flip terms of the hyperfine Hamiltonian come into play and yield a two-stage behavior.
In the high-temperature limit, the electron spin initially dephases in the nuclear spin bath with the rate $\omega_h$ which produces the characteristic curve $C_S^z(t)$ that reaches a plateau of the value $1/12$, see Fig.~\ref{fig:szszt}(b) (red line), analytically derived in Refs.~\cite{KT,merkulov_prb2002} in the limit of frozen nuclear spins for a closed system.
However, the correlator decays further on a time scale determined by the inverse rate $1/W_e^0$ due to the coupling to the thermal reservoirs~\cite{gi2012noise}.
Note that the equidistant spikes in the correlators for $\lambda=1$ and $\lambda=2$
at time scales of $ \omega_h t\approx 10^2-10^3$ in Fig.~\ref{fig:szszt}(b,c)
are an artifact of the box model approximation and the finite number $N$ of nuclear spins.
For equal hyperfine coupling constants $A_k = A_0$ of all nuclear spins, the Overhauser field is quantized, i.e., the spatial components in Eq.~\eqref{eq:bn} can only assume values that are an integer multiple of $\lambda A_0$.
Thus the precession frequencies of the electron spin are all commensurate and yield a rephasing at times $T_n = 2 \pi n/ (\lambda A_0)$ with an integer $n\in \left\{ -N/2,\ldots,N/2\right\}$~\cite{glazov_book}.
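The rephasing can be mimicked with a toy dephasing average over a quantized, integer-valued Overhauser field: a precession signal $S(t) = \left<\cos(\lambda A_0 J^z t)\right>$ with binomially distributed $J^z$ (illustrative parameters only, introduced here for this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
N, M, lam, A0 = 1000, 5000, 1.0, 0.1
Jz = rng.choice([-0.5, 0.5], size=(M, N)).sum(axis=1)   # integer-valued for even N

def S(t):
    """Dephasing average of the precession signal over the quantized field."""
    return np.cos(lam*A0*Jz*t).mean()

T1 = 2*np.pi/(lam*A0)       # first rephasing time T_n with n = 1
print(S(20.0), S(T1))       # dephased (~0) versus rephased (~1)
```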
A reduction of the effective nuclear spin temperature yields an oscillatory component to $C_S^z(t)$ in the isotropic system in the absence of an external magnetic field, since the electron spin starts to precess around the emerging nuclear spin polarization which is isotropically distributed and therefore contains components perpendicular to the $z$ axis.
Lowering the temperature, the nuclear spins become more and more oriented and generate a stronger Overhauser field such that the electron precession frequency increases.
Additionally, the stronger nuclear alignment reduces the fluctuations of the nuclear spin which prevents the dephasing of the electron spin and results in an elongated envelope of the oscillating $C_S^z(t)$.
At times $t \gtrsim 1/W_e^0$, the electron spin-flip processes resulting from the coupling to the thermal reservoir come into play and provide further dephasing, such that the oscillatory component eventually vanishes even at low temperatures and $C_S^z(t)$ reaches the plateau of $1/12$ \cite{KT,merkulov_prb2002}.
This plateau stems from the spatial electron spin component parallel to the Overhauser field and is protected from thermal spin flips by a large energy barrier.
The plateau persists for several orders of magnitude in time and then decays further on a timescale determined by the effective nuclear spin temperature and the electron and nuclear spin flip rates in the system.
This decay can be attributed to the rotation of the nuclear-spin polaron state.
Since the system is fully isotropic, a polaron state in which, say, the electron spin was initially aligned in the $z$ direction may rotate such that the electron spin points in any other direction on the Bloch sphere.
Thus the temporal correlation of the electron spin $z$ component will get lost.
The rate of this loss of correlation may be understood by means of a diffusion process on the diagonal of $S^z \rho_0$ entering Eq.~\eqref{eq:szszt}, see Appendix~\ref{app:rotation} for details.
As a consequence, the total rate for the rotation of the nuclear-spin polaron state is approximately given by
\begin{equation}
W_r = W_e^0/N^2 + W_n^0/N .
\label{eq:wr}
\end{equation}
For rates $W_e^0 = 10^{-3} \omega_h$, $W_n^0 = 10^{-6} \omega_h$ and $N=1000$, the rate for rotation of the polaron state correspondingly is $W_r=2\times10^{-9}\omega_h$ which matches the low-temperature result in Fig.~\ref{fig:szszt}(b) (blue line).
The rotation of the nuclear polaron state for $\lambda=1$ maintains a finite rate $W_r$ even for zero temperatures, hence the correlations exhibit a fundamental difference from the case $\lambda<1$ that originates from the different nature of the ground states.
The auto-correlation function of the electron spin $x$ component, $C_S^x(t) = \left< S^x(0) S^x(t) \right>$, for the system with a hyperfine anisotropy $\lambda=2$ resembles the results for the isotropic system and is plotted in Fig.~\ref{fig:szszt}(c).
Due to the amplification of the hyperfine interaction in the $x$ and $y$ directions, the dip in the high-temperature limit predicted by Refs.~\cite{KT,merkulov_prb2002} for the isotropic case is shifted to earlier times.
The correlation function for the spin $z$ component, $C_S^z(t)$, for $\lambda=2$ alongside the spin $x$ component, $C_S^x(t)$, for $\lambda=0$ is provided in App.~\ref{app:cperp} for completeness.
Naturally, the electron spin precession is faster for $\lambda=2$ than for $\lambda=1$ at the same temperatures due to the enhanced Overhauser field perpendicular to the $z$ axis.
The correlator $C_S^x(t)$ for $\lambda=2$ decays on the timescale dictated by the rate $W_e^0$ of thermal electron spin flips even in the low-temperature limit, such that the electron spin correlations are limited to a lifetime of $10^3/\omega_h$ for our choice of parameters:
The twofold degenerate (non-degenerate) ground state does not protect the electron spin correlator from the dephasing induced by thermal electron spin flip processes.
In other words, it is a consequence of the in-plane isotropy of the system.
\subsection{Fluctuations of the nuclear spins}
\label{sec:jzjz}
\begin{figure}[t!]
\centering
\includegraphics[scale=1]{Fig7.eps}
\caption{Temporal fluctuations of the nuclear spin $z$ component for (a) the Ising limit, (b) the isotropic system, and (c) the anisotropic system with $\lambda=2$. The inverse electron spin temperature $\beta_e$ is fixed; the inverse nuclear spin temperature $\beta_n$ is encoded by different colors.
(d) Decay time $\tau_d$ of the nuclear spin fluctuations for different values of the hyperfine anisotropy parameter $\lambda$. The transition temperature, Eq.~\eqref{eq:tt}, is added as a red dashed vertical line.}
\label{fig:jzjzt}
\end{figure}
The long living correlations of the electron spin at low temperatures are related to the similar dynamics of the nuclear spin bath.
In contrast to the electron spin, however, the nuclear correlator does not display any fast modulations but is constant for a long time until a temperature-dependent decay process may take place.
Figure~\ref{fig:jzjzt} displays the nuclear correlator,
\begin{equation}
\begin{split}
C^z_J(t)= \left< J^z(0) J^z(t) \right> &= \tr \left[ \rho_0 J^z J^z(t) \right] \\
&= \tr \left[J^z e^{\mathcal{L}t} (J^z \rho_0 ) \right] ,
\end{split}
\label{eq:jzjzt}
\end{equation}
at different inverse nuclear spin temperatures, Fig.~\ref{fig:jzjzt}(a) for the Ising limit, Fig.~\ref{fig:jzjzt}(b) in the isotropic system, and Fig.~\ref{fig:jzjzt}(c) for the anisotropic system with $\lambda=2$.
Since the spin fluctuations induced by the thermal reservoirs will ultimately cause a decay to zero for $t\to\infty$ at non-zero temperatures, we define the decay time $\tau_d$ as the point in time where the correlator has decayed to $1/e$ of its initial value, i.e., $C^z_J(\tau_d)= C^{z}_J(0)/e$.
We plot $\tau_d$ as a function of the effective inverse nuclear spin temperature $\beta_n$ whereas the electron spin temperature, $\beta_e \omega_h = 0.5$, remains constant.
The data for various values of the hyperfine anisotropy parameter $\lambda$ is presented in Fig.~\ref{fig:jzjzt}(d).
At high temperatures the decay is dictated by the thermal nuclear spin-flip rate $W_n^0$ (in our calculations, $W_n^0=10^{-6}\omega_h$), independent of the hyperfine anisotropy $\lambda$.
The corresponding decay time $1/(2W_n^0)$ is indicated in Fig.~\ref{fig:jzjzt}(d) by the lower horizontal dotted grey line.
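Operationally, $\tau_d$ can be read off from a sampled correlator as the first time the curve crosses $C(0)/e$ (a hypothetical helper, tested here on a synthetic exponential decay with an illustrative rate):

```python
import numpy as np

def decay_time(t, C):
    """First sampled time with C(t) <= C(0)/e; assumes a monotonically decaying C."""
    below = C <= C[0]/np.e
    return t[np.argmax(below)] if below.any() else np.inf

W_n0 = 1e-3                        # illustrative rate (not the paper's value)
t = np.linspace(0.0, 5000.0, 50001)
C = 0.25*np.exp(-2*W_n0*t)         # high-temperature-like exponential decay
print(decay_time(t, C))            # close to 1/(2*W_n0) = 500
```

If the correlator never reaches $C(0)/e$ within the simulated window, as in the low-temperature data, the helper reports an unresolved (infinite) decay time.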
Moving to the temperature regime of the nuclear-spin polaron formation, the characteristic decay time of the correlator $C^{z}_J(t)$ increases for $\lambda<1$ similar to the electron spin correlator depicted in Fig.~\ref{fig:szszt}(a).
At low temperatures, $C^{z}_J(t)$ does not reach half of the starting value within our largest simulation time of $t=10^{15}/\omega_h$, see Fig.~\ref{fig:jzjzt}(a).
In the context of the quantum phase transition, we pointed out that the twofold degenerate ground state maximizes $J=N/2$ as well as $|J^z|$, so that the spin flips induced by the thermal reservoir become exponentially suppressed.
Therefore, the decay time $\tau_d$ grows exponentially starting at the transition temperature, Eq.~\eqref{eq:tt}, see red dashed vertical line in Fig.~\ref{fig:jzjzt}(d).
For $\lambda \geq 1$, this exponential increase of $\tau_d$ is absent as a result of the rotational symmetry in the nuclear-spin polaron state.
In the isotropic system, the orientation of the nuclear-spin polaron state rotates with the rate $W_r$ ($W_r=2\times10^{-9}\omega_h$ for our choice of parameters) previously deduced in the considerations of the electron spin correlation, see Eq.~\eqref{eq:wr}.
Accordingly, $\tau_d$ in the temperature range of polaron formation rises to approximately $1/(2W_r) = 0.25 \times 10^{9}/\omega_h$ (upper horizontal dotted grey line in Fig.~\ref{fig:jzjzt}(d)).
For $\lambda>1$ the decay time $\tau_d$ reduces when the nuclear spin temperature is lowered.
For an explanation we refer to the dynamic rotation of the polaron state in the isotropic case.
Here, the dynamics of the non-zero matrix elements of the composite operator $O=J^z \rho_0$ in the ground state at $T_e=T_n=0$ follow Eq.~\eqref{eq:chioffdiagonal} (off-diagonal elements) and Eq.~\eqref{eq:chidiagonal} (diagonal elements), respectively.
The differential equations yield a decoupled decay of the off-diagonal elements with approximate rate $W_e^0+NW_n^0$, while transitions between the diagonal elements occur with the same rate $W_e^0+NW_n^0$.
The ground state for $\lambda>1$ is solely two-fold degenerate in contrast to the $N$-fold degeneracy in the isotropic case, cf.~Sec.~\ref{sec:quantum-phase-transition}.
Thus, for $\lambda>1$, a single spin flip between the two ground states (generating a transition between the two non-zero diagonal elements of $J^z \rho_0$) already leads to a complete loss of correlation, whereas in the isotropic case the correlation is gradually lost by successive spin flips.
As a result, the decay of the correlator $C^{z}_J(t)$ for $\lambda>1$ remains bound to the decay rate $\tau_d \approx (W_e^0+NW_n^0)^{-1}$ when reducing the temperature, while in the isotropic system the decay is prolonged in the polaronic state.
\section{Conclusion}
\label{sec:conclusion}
We generalized the kinetic approach for the nuclear-polaron formation to an arbitrary anisotropic CSM.
This allows us to investigate all experimentally relevant regimes of singly charged QDs and localized electronic carriers.
We proposed a symmetry conserving Lindblad approach that is applicable to arbitrary hyperfine coupling anisotropy factors $\lambda$ and calculated the steady-state solution for two distinct reservoir temperatures $T_e$ and $T_n$.
Our approach overcomes the limitation of Ref.~\cite{fischer_prb2020} to $\lambda=0$ but includes the previously investigated limit as well.
We have studied the electron-nuclear spin correlator, the nuclear spin distribution function and the temporal autocorrelators of the spins.
The spin correlation functions as well as the nuclear distribution function reveal the nuclear polaronic state formation when reducing the nuclear spin temperature.
The crossover temperature into the nuclear polaron state coincides with enhanced fluctuations of the spin-correlation function and also agrees with a mean-field theory prediction for the anisotropic CSM.
Importantly, we demonstrate a quantum phase transition at the anisotropy parameter $\lambda=1$ which separates distinct polaronic states.
For $\lambda <1$ the result in the polaronic phase is identical to the Ising limit:
spin fluctuations are suppressed by a very large activation barrier.
At $\lambda=1$ the polaron state is fully rotationally invariant, while for $\lambda>1$ we find a rotational invariant phase around the $z$ axis.
Our approach makes it possible to study not only the steady state of the electron-nuclear spin system, but also the dynamics of the polaron formation and temporal fluctuations of spins.
\begin{acknowledgments}
We acknowledge financial support by the Deutsche Forschungsgemeinschaft and the Russian Foundation of Basic Research through the transregio TRR 160 within the Projects No.\ A4, and No.\ A7.
M.M.G. was partially supported by RFBR-DFG project No. 19-52-12038.
The authors gratefully acknowledge the computing time granted by the John von Neumann Institute for Computing (NIC) under Project HDO09 and provided on the supercomputer JUWELS at the J\"ulich Supercomputing Centre.
\end{acknowledgments}
\section{Introduction}
\label{sec:intro}
The study of two-level systems has been a topic of interest since
the first steps in the development of quantum mechanics. The main
advantage of these models is that they can be numerically diagonalized
for very large dimensions and, at the same time, they can model
realistic quantum many-body systems. Typical examples are the
Jaynes-Cummings model of quantum optics \cite{Jaynes63}, the Vibron
Model (VM) of quantum chemistry \cite{Molbook}, the two-level
pairing model in condensed matter \cite{Suhl59} and in nuclear
physics \cite{Hog61}, the Lipkin-Meshkov-Glick model (LMG)
\cite{Lipkin65,Meshkov65,Glick65} and the Interacting Boson Model (IBM) \cite{IBMbook} of
nuclear structure. While some of these models describe two-level
fermion systems, the model Hamiltonian can always be written in
terms of $SU(2)$ pseudo-spin operators. Subsequently, the spin
Hamiltonian can be expressed in terms of bosons using either the
finite Schwinger representation or the infinite Holstein-Primakoff
representation of the $SU(2)$ algebra.
An example is the LMG model, which has recently been revived as
a model of quantum spins with long-range interactions to investigate the relationship between entanglement and quantum phase transitions (QPTs) \cite{Vidal04_1,Vidal04_2,Vidal04_3, Latorre05_2, Leyvraz05, Dusuel04_3,Unanyan05}. In its boson
representation, it has also been recently used as a simplified model to
describe the Josephson effect between two Bose-Einstein
condensates \cite{Links1}.
\begin{figure}
\centering
\includegraphics[width=8cm]{./figures/gentlbm.eps}
\caption{Schematic representation of the two-level boson model
studied in this paper.}
\label{fig:twbl}
\end{figure}
In this work, we focus on finite two-level boson
Hamiltonians, having the common feature that the lower level is
always a scalar $L=0$ boson, hereafter written as $s$ boson. The
upper level can have different multipolarities, generically denoted
$L$, whose value defines a particular model. The LMG model
in the Schwinger representation has a second scalar ($L=0$) boson for
the upper level. A dipolar ($L=1$) boson leads to the VM and a
quadrupolar ($L=2$) boson corresponds to the IBM. Higher angular
momentum bosons can lead to new models, such as a model of
octupole vibrations in terms of $s$ and octupolar ($L=3$) bosons. A
schematic representation of the model is shown in
Fig. \ref{fig:twbl}.
All these two-level boson models are governed by an algebraic
structure which is constructed out of all bilinear combinations of
creation and annihilation boson operators which generate the
algebra of $U(2L+2)$. One of the main features of these models is
that one can construct dynamical symmetries in which the
Hamiltonian can be written in terms of Casimir invariants of a
nested chain of subalgebras of $U(2L+2)$. In these particular
cases, the
problem is analytically solvable providing a powerful tool to
check approximate methods and a reference for more detailed
calculations. In addition, if the Hamiltonian is written as a
linear combination of Casimir invariants of the subalgebras
$U(2L+1)$ and $O(2L+2)$, the model is still quantum integrable but then requires the numerical solution of Bethe-like equations \cite{Pan02}. The exact solution, given by Richardson almost forty years ago
\cite{Richardson68}, is reduced to a set of $M$ nonlinear coupled
equations, where $M$ is the number of boson pairs. For two-level
boson models it turns out that the numerical diagonalization of the
Hamiltonian presented below is more efficient than solving the
Richardson equations.
The aim of this work is to study the QPT
that occurs in the two-level boson system as it evolves from the spherical
vibrational $U(2L+1)$ symmetry to the deformed $O(2L+2)$ symmetry,
as a function of a control parameter. Although, strictly speaking,
QPTs are defined for macroscopic systems, there is a renewed
interest in studying structural changes in finite-size systems as
the precursors of a QPT in the thermodynamic limit. Traces of
these QPTs are readily observed in finite systems and their
properties are then correlated with the idealized thermodynamic
system \cite{Iachello04}. The understanding of the modifications on the
characteristics of the QPT induced by finite-size effects is of
crucial importance to extend the concept of phase transitions to
finite systems. Several techniques have long been used to extrapolate numerical results obtained by large-scale diagonalizations or Monte Carlo calculations to the infinite
system. Here, we focus on a somewhat inverse problem which is the
finite-size corrections to the observables in two-level
boson models like the ground-state energy, the gap, the occupation number and
some electromagnetic transition rates. While the zeroth order in the
boson number $N$ is
given by the Hartree mean-field approach for the ground
state and the Random Phase Approximation for the excited
states, going beyond this order implies the use of more
sophisticated techniques. We make use here of the Continuous
Unitary Transformations (CUTs) and give the first $1/N$ corrections
to the observables in the whole $U(2L+1)$ to $O(2L+2)$ transition.
The structure of the paper is the following. In Sec. \ref{sec:model} we
introduce the two-level boson models and the formalism for the
numerical diagonalization for very large number of bosons. Section \ref{sec:MF}
describes the mean-field treatment of the two-level boson models.
Section \ref{sec:sym_phase} is devoted to the study of the symmetric phase using
CUTs. Analytical expressions for different orders in the $1/N$
expansion of the ground-state energy, the gap, the expectation
value of the number of $L$ bosons in the ground state and the
transition matrix element between the ground state and the first excited
state are obtained. In Sec. \ref{sec:brok_phase} the broken phase is analyzed, and in
Sec. \ref{sec:critical} the study of the critical point is presented from the
spherical phase. In this section, we obtain the finite-size scaling exponents for the quantities cited
above by analyzing the divergence of their $1/N$ expansion. In Sec. \ref{sec:numerics} a comparison of the numerical
results obtained using the formalism presented in Sec. \ref{sec:model} with
the analytical CUTs results is presented. Section \ref{sec:conclusion} is devoted to the
summary and conclusions. Technical details concerning the flow equations can be found in the appendices.
\section{Two-level boson models}
\label{sec:model}
In this Section, we present a simple algorithm for diagonalizing
boson Hamiltonians with $O(2L+1)$ symmetry for large boson
numbers. The formalism is based on previous studies
\cite{Pan98,Rowe04_1} and is a generalization of the one presented
recently for treating the IBM \cite{Garcia05}.
We consider the following boson pairing Hamiltonian
\begin{equation}
H=x \: n_L+\frac{1-x}{4(N-1)}\left( P^\dagger_L {P^{\phantom\dagger}\hspace{-\ldag}}_L+P^\dagger_s {P^{\phantom\dagger}\hspace{-\ldag}}_s
-P^\dagger_L {P^{\phantom\dagger}\hspace{-\ldag}}_s-P^\dagger_s {P^{\phantom\dagger}\hspace{-\ldag}}_L\right),
\label{hb}
\end{equation}
with
\begin{eqnarray}
n_L &=& \sum_{\mu=-L}^{+L} L^\dagger_\mu L_\mu,\\
P^\dagger_s &=& ({P^{\phantom\dagger}\hspace{-\ldag}}_s)^\dagger = s^\dagger s^\dagger,\\
P^\dagger_L &=& ({P^{\phantom\dagger}\hspace{-\ldag}}_L)^\dagger = \sum_{\mu=-L}^{+L} (-1)^{\mu}
L^\dagger_\mu L^\dagger_{-\mu}.
\end{eqnarray}
$L^\dagger_\mu$ creates a boson in the excited $L$ level with
projection $\mu$, while $L_\mu$ destroys it. We have introduced
above the pair operators $P_\rho^\dag$, with $\rho=s$ or $L$, which will be used later on.
The total number of bosons $N=n_s+n_L$ is a conserved quantity.
For $x=0$, $H$ can be cast into a linear combination of the
quadratic Casimir operators of
$O(2L+2)$ and the corresponding subalgebras, whereas for $x=1$, $H$ is
the linear Casimir operator of
the $U(2L+1)$ algebra. Here, $x$ plays the role of a control parameter,
driving the system from the $O(2L+2)$ deformed phase to the
$U(2L+1)$ spherical phase.
The boson pairing Hamiltonian (\ref{hb}) can be studied in an
elegant way by means of the noncompact $SU(1,1)$ algebra of boson
pair operators. For the subspace of $\rho$ bosons, where $\rho$ stands
generically either for $s$ or $L$ bosons, the $SU(1,1)$ generators are
the raising operator $K^{+}_\rho$,
the lowering operator $ K^{-}_\rho = ( K^{+}_\rho)^\dagger $
and the Cartan operator $K^0_\rho$ defined as
\begin{eqnarray}
K^{+}_\rho &=& \frac{1}{2} P_\rho^\dag, \\
K^{-}_\rho &=& \frac{1}{2} P_\rho, \\
K_{\rho}^{0} &=&\frac{1}{2}\sum_{\mu}\left( \rho^\dagger_\mu
\rho_\mu+\frac{1} {2}\right) =\frac{1}{2}n_{\rho}+\frac{1}{4}D_{\rho},
\label{K}
\end{eqnarray}
with $D_{\rho}=2\rho+1$. The three operators
$\{K^{+}_\rho,K^{-}_\rho,K_{\rho}^{0}\}$
satisfy the $SU(1,1)$ commutator algebra
\begin{eqnarray}
\left[ K_{\rho}^{0},K_{\rho^{\prime}}^{\pm}\right] &=&\pm\delta_{\rho,\rho^{\prime}} K_{\rho}^{\pm}, \\
\left[ K_{\rho}^{+}, K_{\rho^{\prime}}^{-}\right] &=& -2\delta _{\rho,\rho^{\prime}}K_{\rho}^{0}.
\label{com2}
\end{eqnarray}
The Casimir operator is
\begin{equation}
C_{\rho}^{2}=\frac{1}{2}\left(
K_{\rho}^{+}K_{\rho}^{-}+K_{\rho}^{-}K_{\rho}^{+}\right)
-\left( K_{\rho}^{0}\right) ^{2}=-\frac{D_{\rho}}{4}\left( \frac{D_{\rho}}%
{4}-1\right). \label{casimir}%
\end{equation}
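As a quick numerical sanity check (not part of the derivation), the $SU(1,1)$ relations above can be verified in a truncated Fock space. The sketch below, assuming only NumPy, treats the $s$-boson case ($\rho=0$, $D_\rho=1$), for which the Casimir eigenvalue is $-(1/4)(1/4-1)=3/16$; matrix elements near the truncation boundary are excluded from the comparison.

```python
import numpy as np

# Truncated Fock-space representation of a single s boson (rho = 0, D_rho = 1).
dim = 40
a = np.diag(np.sqrt(np.arange(1.0, dim)), 1)    # annihilation operator
ad = a.T                                         # creation operator

Kp = 0.5 * ad @ ad                               # K+ = P_s^dagger / 2
Km = 0.5 * a @ a                                 # K- = P_s / 2
K0 = 0.5 * (ad @ a + 0.5 * np.eye(dim))          # K0 = n_s/2 + D_rho/4

comm = lambda A, B: A @ B - B @ A
n = dim - 4                                      # ignore the truncation boundary

# su(1,1) commutators: [K0, K+] = K+ and [K+, K-] = -2 K0
err_plus = np.max(np.abs((comm(K0, Kp) - Kp)[:n, :n]))
err_pm = np.max(np.abs((comm(Kp, Km) + 2 * K0)[:n, :n]))

# Casimir operator: C = (K+K- + K-K+)/2 - K0^2 = -(D/4)(D/4 - 1) = 3/16
C = 0.5 * (Kp @ Km + Km @ Kp) - K0 @ K0
casimir_diag = np.diag(C)[:n]
```

Both commutator errors vanish to machine precision, and the diagonal of $C$ is constant at $3/16$, as the algebra requires.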
The complete set of eigenstates of the pairing Hamiltonian
(\ref{hb}) can be constructed as a direct product of subspaces
associated with $s$ and $L$ bosons. Each subspace can be written in terms of the raising operators
$K^{+}_\rho$ acting on the corresponding subspace of unpaired $\rho$ bosons
\begin{equation}
\left\vert \tilde n_{\rho},\nu_{\rho}\right\rangle
=\frac{1}{\sqrt{C_{\rho,\nu_\rho}^{\tilde n_\rho}}}K_{\rho}^{+ \tilde
n_\rho}\left\vert \tilde n_{\rho}=0,\nu_{\rho}\right\rangle ,
\label{bosest}
\end{equation}
where $\nu_{s}=\nu_{L=0}=0,1$ and $\nu_{L\neq 0}=0,1,2,\dots$ The
quantity $\nu_{\rho}$ is known as the boson seniority for
$\rho$ bosons
and gives the number of bosons of type $\rho$ not
coupled in pairs to zero. Note that from now on the label
$\tilde n$ denotes the number of boson pairs coupled to zero angular
momentum. The total number of bosons is $2
\tilde n_s+2 \tilde n_{L}+\nu_s + \nu_L$. The normalization constant
in (\ref{bosest}) can be obtained from the action of $K_{\rho}^{-}$ and
$K_{\rho}^{0}$ on the $\rho$ subspace $\left\vert \tilde
n_{\rho}=0,\nu_{\rho}\right\rangle $
\begin{eqnarray}
K_{\rho}^{-}\left\vert \tilde n_{\rho}=0,\nu_{\rho}\right\rangle &=& 0 ,\\
K_{\rho}^{0}\left\vert \tilde n_{\rho}=0,\nu_{\rho}\right\rangle &=&
\left(\frac{1}{2}\nu_{\rho}+\frac{1}{4}D_{\rho}\right) \left\vert
\tilde n_{\rho}=0,\nu_{\rho}\right\rangle,
\end{eqnarray}
and the commutation relation
\begin{equation}
\left[ \left[ K_{\rho}^{-},K_{\rho}^{+}\right] ,K_{\rho}^{+}\right]
=2K_{\rho}^{+} ,
\end{equation}
then
\begin{equation}
K_{\rho}^{-}\left( K_{\rho}^{+}\right) ^{\tilde n_\rho}\left\vert
\tilde n_{\rho}=0,\nu_{\rho}\right\rangle =\tilde n_\rho\left( \tilde
n_\rho+\frac{D_{\rho}}{2}+\nu_{\rho}-1\right) \left( K_{\rho}^{+}\right)
^{\tilde n_\rho-1}\left\vert \tilde n_{\rho}=0,\nu_{\rho}\right\rangle ,
\end{equation}
and
\begin{equation}
\langle \tilde n_{\rho},\nu_{\rho} \vert \tilde n_{\rho},\nu_{\rho}\rangle
=\frac{\tilde n_{\rho}}{2}\left( 2\tilde n_{\rho}+2\rho+2\nu_{\rho}-1\right)
\langle \tilde n_{\rho}-1,\nu_{\rho}
\vert \tilde n_{\rho}-1,\nu_{\rho} \rangle , %
\end{equation}
and finally%
\begin{equation}
\langle \tilde n_{\rho},\nu_{\rho} \vert \tilde n_{\rho},\nu_{\rho}\rangle
=\frac{\tilde n_{\rho}!\left(
2\tilde n_{\rho}+2\rho+2\nu_{\rho}-1\right) !!}{2^{\tilde n_{\rho}}\left( 2\rho+2\nu_{\rho}-1\right) !!}=C_{\rho,\nu_\rho}^{\tilde n_{\rho}}.%
\label{norma}
\end{equation}
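The closed form (\ref{norma}) can be checked directly by applying $(K^{+})^{\tilde n}$ to the vacuum in a truncated Fock space. The short NumPy sketch below (an illustration, not part of the derivation) does this for $s$ bosons, i.e.\ $\rho=\nu_\rho=0$, where $C^{\tilde n} = \tilde n!\,(2\tilde n-1)!!/2^{\tilde n}$:

```python
import numpy as np
from math import factorial

dim = 30                                         # supports up to 14 s-boson pairs
a = np.diag(np.sqrt(np.arange(1.0, dim)), 1)     # annihilation operator
Kp = 0.5 * a.T @ a.T                             # K+ = s^dagger s^dagger / 2

def dfact(n):
    """Double factorial n!!, with (-1)!! = 1."""
    return 1 if n <= 0 else n * dfact(n - 2)

state = np.zeros(dim)
state[0] = 1.0                                   # boson vacuum |0)
max_err = 0.0
for n_pairs in range(1, 9):
    state = Kp @ state                           # build (K+)^n |0)
    norm2 = state @ state                        # squared norm of the state
    C = factorial(n_pairs) * dfact(2 * n_pairs - 1) / 2.0**n_pairs  # Eq. (norma)
    max_err = max(max_err, abs(norm2 - C) / C)
```

The relative deviation between the computed squared norm and Eq.~(\ref{norma}) stays at machine precision for all pair numbers tested.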
Remember that the label $\rho$ stands for $s$ or $L$ bosons and takes
numerical values: $0$ for $s$ bosons and $L$ for $L$ bosons. Once the
basis for each subspace is obtained (\ref{bosest},\ref{norma}), the complete basis
set for the pairing Hamiltonian (\ref{hb}) is easily constructed,
\begin{equation}
\left\vert \tilde n_{s},\nu_{s};\tilde n_{L},\nu_{L}\right\rangle =\frac{1}{\sqrt{C_{s,\nu_s}%
^{\tilde n_{s}}C_{L,\nu_L}^{\tilde n_{L}}}}\left( K_{s}^{+}\right) ^{\tilde n_{s}}\left( K_{L}%
^{+}\right) ^{\tilde n_{L}}\left\vert \tilde n_{s}=0,\nu_{s};\tilde n_{L}=0,\nu_{L}\right\rangle
.
\label{bosestg}
\end{equation}
We now diagonalize the Hamiltonian (\ref{hb}) in the basis (\ref{bosestg}).
Note that in the following $n$ refers to boson number
operators, while $\tilde n$ denotes numbers of boson pairs. The relevant matrix
elements for the construction of the
Hamiltonian matrix are:%
\begin{equation}
\left\langle \tilde n_{s},\nu_{s};\tilde n_{L},\nu_{L}\right\vert n_{s}\left\vert \tilde n_{s},\nu_{s};
\tilde n_{L},\nu_{L}\right\rangle =2\tilde n_{s}+\nu_{s},%
\end{equation}
\begin{equation}
\left\langle \tilde n_{s},\nu_{s};\tilde n_{L},\nu_{L}\right\vert n_{L}\left\vert \tilde n_{s},\nu_{s};
\tilde n_{L},\nu_{L}\right\rangle =2\tilde n_{L}+\nu_{L},%
\end{equation}
\begin{equation}
\langle \tilde n_{s},\nu_{s};\tilde n_{L},\nu_{L} \vert
K_{s}^{+}K_{s}^{-}\vert
\tilde n_{s},\nu_{s};\tilde n_{L},\nu_{L}\rangle
= \tilde n_{s} \left( \tilde n_{s}+\nu_{s}-\frac{1}{2}\right),
\end{equation}
\begin{equation}
\left\langle \tilde n_{s},\nu_{s};\tilde n_{L},\nu_{L}\right\vert
K_{L}^{+}K_{L}^{-}\left\vert \tilde n_{s},\nu_{s};\tilde
n_{L},\nu_{L}\right\rangle =\tilde n_{L}\left( \tilde
n_{L}+L+\nu_{L}-\frac {1}{2}\right),
\end{equation}
\begin{equation}
\langle \left( \tilde n_{s}-1\right), \nu_{s}; \left( \tilde
n_{L}+1\right) ,\nu _{L}\vert K_{L}^{+}K_{s}^{-}\vert
\tilde n_{s},\nu_{s};\tilde n_{L},\nu
_{L}\rangle
=\frac{1}{2}\sqrt{\tilde n_{s}\left( \tilde n_{L}+1\right)
\left( 2\tilde n_{s}+2\nu_{s}-1\right) \left( 2\tilde
n_{L}+2L+2\nu_{L}+1\right)}.
\end{equation}
It is clear that the Hamiltonian does not mix states with
different boson seniority quantum numbers.
Thus, the Hamiltonian
matrix is block diagonal. In addition, within one block the matrix
is tridiagonal making the diagonalization simple. The different
states are obtained as follows: one starts with the boson subspace
containing $N/2$ boson pairs coupled to zero angular momentum, $\nu_s=\nu_L=0$. The
diagonalization of $H$ in this subspace provides states with angular momentum zero and the
first eigenstate is the ground state. Next, one goes to the block
with one broken boson pair. This block is composed of two separate
blocks since the two bosons can be one $s$ boson and one $L$ boson
($\nu_s=1,\nu_L=1$) or
two $L$ bosons not coupled to zero since the coupling to zero is included
in the first block, ($\nu_s=0,\nu_L=2$). Notice that two unpaired $s$ bosons are not possible
since they are always coupled to zero and consequently they are
counted in the first block. For the case of the LMG model in which
$L=0$, the block $\nu_s=0,\nu_L=2$ is not allowed for the same
reason. The block $\nu_s=1,\nu_L=1$ provides states with angular
momentum $L$, the first of them is the first excited state of the
system. The block $\nu_s=0,\nu_L=2$ contains states with two
$L$ bosons not coupled to zero angular momentum, it contains angular momenta:
$2L,2L-2,\dots,2$. Next, there is another block with two broken
boson pairs composed of two sub-blocks: $\nu_s=1,\nu_L=3$ and
$\nu_s=0,\nu_L=4$. Again, the block $\nu_s=0,\nu_L=4$ is absent
for the LMG model. This construction continues for $3,4,\dots,N/2$ broken
boson pairs. The first few low-lying eigenstates of the Hamiltonian are depicted schematically in Fig.
\ref{fig:tlbspec}.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{./figures/figbospair.eps}
\caption{Schematic representation of the level sequence obtained by
diagonalizing the Hamiltonian for the two-level boson models studied
in this paper. Numbers above the lines are angular momenta. }
\label{fig:tlbspec}
\end{figure}
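For illustration, a minimal NumPy implementation of the $\nu_s=\nu_L=0$ block of Hamiltonian (\ref{hb}), built from the matrix elements listed above with the basis indexed by the number $k=\tilde n_L$ of $L$-boson pairs, might look as follows (a sketch, not the production code used for the results of this paper); for large $N$ its ground-state energy per boson approaches the mean-field values derived in the next section:

```python
import numpy as np

def ground_state_energy(N, L, x):
    """Lowest eigenvalue of Hamiltonian (hb) in the nu_s = nu_L = 0 block.

    Basis |n~_s, n~_L> indexed by k = n~_L, with n~_s = N/2 - k (N even)."""
    M = N // 2
    c = (1.0 - x) / (4.0 * (N - 1))
    H = np.zeros((M + 1, M + 1))
    for k in range(M + 1):
        ns = M - k
        # diagonal part: x*n_L + c*(P+_s P_s + P+_L P_L), using P+ P = 4 K+ K-
        H[k, k] = 2.0 * x * k + c * (4.0 * ns * (ns - 0.5)
                                     + 4.0 * k * (k + L - 0.5))
        if k < M:
            # off-diagonal part: -c*(P+_L P_s + h.c.), from the K+_L K-_s element
            t = 2.0 * c * np.sqrt(ns * (k + 1) * (2 * ns - 1)
                                  * (2 * k + 2 * L + 1))
            H[k, k + 1] = H[k + 1, k] = -t
    return np.linalg.eigvalsh(H)[0]

# Ground-state energy per boson approaches the mean-field limit for large N:
e_sym = ground_state_energy(500, 2, 0.8) / 500   # symmetric phase, ~ (1-x)/4
e_brk = ground_state_energy(500, 2, 0.2) / 500   # broken phase, ~ x(2-3x)/(4(1-x))
```

Since the block is tridiagonal, even very large boson numbers are handled cheaply; a banded eigensolver could be substituted for `eigvalsh` for further gains.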
Direct block diagonalization of the Hamiltonian in the basis (\ref{bosestg}) provides
observables such as the ground-state energy or the gap, as well as the
wave functions of the states. From the ground-state wave function, the
expectation value of the number of $L$ bosons in the
ground state is easily calculated. One can also calculate the
transition probability from the ground state to the first excited
state once the appropriate operator is specified. The natural
transition operator for the pairing Hamiltonian we are considering is
\begin{equation}
T_{L_\mu}= L^\dagger_\mu s+\left( -1 \right) ^{\mu}s^{\dagger}L_{-\mu}.%
\label{Toper}
\end{equation}
The action of $T_{L_\mu}$ on the subspace
$\nu_{s}=0,\nu_{L}=0$ that includes the ground state is
\begin{equation}
T_{L_\mu} \vert \tilde n_{s}, 0;\tilde n_{L},0\rangle
=\frac{1}{\sqrt{C_{s,0}^{\tilde n_s}C_{L,0}^{\tilde n_L}}}
\left[\tilde n_{s}\left(
K_{s}^{+}\right)
^{\tilde n_{s}-1}\left( K_{L}^{+}\right) ^{\tilde
n_{L}}s^{\dagger} L^\dagger_\mu
+\tilde n_{L}\left( K_{s}^{+}\right) ^{\tilde
n_{s}}\left( K_{L}^{+}\right) ^{\tilde n_{L}-1}s^{\dagger}
L^\dagger_\mu\right]\left\vert \tilde n_{s}=0,0; \tilde n_{L}=0,0\right\rangle,
\end{equation}
where the state $\left\vert \tilde n_{s}=0, \nu_s=0; \tilde
n_{L}=0,\nu_L=0\right\rangle \equiv |0)$ is the boson vacuum.
Then, the matrix elements of $T_{L_\mu}$ connecting the
subspaces $\nu_{s}=0,\nu_{L}=0$ and $\nu_{s}=1,\nu_{L}=1$ (which
includes the first excited state) are%
\begin{equation}
\left\langle \left( \tilde n_{s}-1\right),\nu_s=1; \tilde
n_{L},\nu_L=1\right\vert T_{L_\mu}\left\vert \tilde n_{s},\nu_s=0; \tilde
n_{L},\nu_L=0\right\rangle =\sqrt{\frac{2\tilde n_{s}\left( 2\tilde
n_{L}+2L+1\right)
}{2L+1}} , %
\end{equation}
\begin{equation}
\left\langle \tilde n_{s},\nu_s=1;\left( \tilde n_{L}-1\right)
,\nu_L=1\right\vert T_{L_\mu}\left\vert \tilde n_{s},\nu_s=0;\tilde
n_{L},\nu_L=0\right\rangle =\sqrt{\frac{2\tilde n_{L}\left( 2\tilde
n_{s}+1\right)
}{2L+1}}. %
\end{equation}
If we write each eigenstate of the Hamiltonian as%
\begin{equation}
\left\vert \Psi_i,\nu_{s},\nu_{L}\right\rangle =\sum_{\tilde n_{s},\tilde n_{L}}c_{\tilde n_{s},\tilde n_{L}%
}^{\nu_{s},\nu_{L}}\left\vert \tilde n_{s},\nu_{s};\tilde
n_{L},\nu_{L}\right\rangle ,
\end{equation}
the matrix element of the $T_{L\mu}$ operator between the ground state
$|0\rangle\equiv \vert \Psi_0,0,0 \rangle$ and
the first excited state $|1\rangle\equiv \vert \Phi_0,1,1 \rangle$ is%
\begin{widetext}
\begin{equation}
\left\langle 1\right\vert T_{L\mu}\left\vert 0\right\rangle
= \sum_{\tilde n_{s},\tilde n_{L}} \left[\sqrt{\frac{2\tilde n_{s}\left( 2\tilde n_{L}+2L+1\right) }{2L+1}%
}c_{\tilde n_{s},\tilde n_{L}}^{0,0}c_{\tilde n_{s}-1,\tilde n_{L}}^{1,1} \right.
+ \left. \sqrt{\frac{2\tilde n_{L}\left(
2\tilde n_{s}+1\right) }{2L+1}}c_{\tilde n_{s},\tilde
n_{L}}^{0,0}c_{\tilde n_{s},\tilde n_{L}-1}^{1,1} \right] .%
\end{equation}
\end{widetext}
The formalism presented in this section provides the exact full
solution of the problem. A simpler approach to study ground-state
properties in the large $N$ limit is provided by the mean-field
analysis presented in the next section. This limit is a
good benchmark to test more elaborate results.
\section{Mean-field analysis}
\label{sec:MF}
The geometrical interpretation of the Hamiltonian (\ref{hb}) can
be obtained by introducing a Hartree axial coherent state, which
allows one to associate with it a geometrical shape in terms of a
deformation variable $\beta$. For
a system with $N$ bosons, this state is obtained by acting $N$ times
with a condensed boson $\Gamma^\dag$ on the boson vacuum $|0)$
\begin{equation}
|N, \beta\rangle = \frac{1}{\sqrt{N!}} (\Gamma^\dag) ^N |0)
~~, \label{ground1}
\end{equation}
where the basic condensed boson operator is given by
\begin{equation}
\Gamma^\dagger=\frac{1}{\sqrt{1+ \beta^2}} \left( s^\dagger +
\beta L^\dagger_0 \right), \label{ground2}
\end{equation}
which depends on the $\beta$ shape variable. The energy surface
is defined as the expectation value of $H$ in the intrinsic state
\begin{equation}
E(N,\beta)= \langle N, \beta| H | N, \beta \rangle
=N \left[ x \frac{\beta^2}{1+\beta^2} + \frac{1-x}{4}~
\left( \frac{1-\beta^2}{1+\beta^2} \right)^2 \right] .
\label{Esur}
\end{equation}
\begin{figure}[h]
\centering
\includegraphics[width=5cm]{./figures/fig2ensur.eps}
\caption{Energy surfaces per boson for the Hamiltonian (\ref{hb})
for different
values of the control parameter $x$. }
\label{fig3}
\end{figure}
Minimizing the variational energy $E(N,\beta)$ with respect to
$\beta$ leads to a critical point at $x_c=0.5$. For $x>x_c$
(symmetric phase), the ground state is spherical and is obtained
for $\beta=0$ whereas for $x<x_c$ (broken phase) it is deformed
since the minimum of the energy per boson $e_0=E(N,\beta)/N$ is
obtained for $\beta=\sqrt{1-2x}$ as can be seen in Fig.
\ref{fig3}. At the critical point, it is worth noting that the
energy surface is a flat $\beta^4$ surface near $\beta=0$
\cite{Garcia05,Arias03}. Within this mean-field (variational) approach, one
thus gets the following ground-state energy per boson:
\begin{eqnarray}
\label{eq:e0mfs}
e_0(x \geq x_c)&=& \frac{1-x}{4},\\
\label{eq:e0mfb} e_0(x \leq x_c) &=& \frac{x}{4}~\frac{2
-3x}{1-x}.
\end{eqnarray}
One can also straightforwardly compute the expectation value of
$n_L$ in the ground state which is found to vanish in the
symmetric phase and equals
\begin{equation}
\langle n_L \rangle= N \frac{1-2x}{2(1-x)},
\label{eq:nlmf}
\end{equation}
in the broken one.
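The stationary points of the energy surface (\ref{Esur}) can be confirmed numerically. The NumPy sketch below (a simple grid minimization, not part of the derivation) recovers $\beta_{\mathrm{min}}=\sqrt{1-2x}$, the broken-phase energy (\ref{eq:e0mfb}), and the mean-field occupation $\beta^2/(1+\beta^2)=(1-2x)/(2(1-x))$ of Eq.~(\ref{eq:nlmf}):

```python
import numpy as np

def energy_per_boson(beta, x):
    """Mean-field energy surface E(N, beta)/N of Eq. (Esur)."""
    b2 = beta ** 2
    return x * b2 / (1 + b2) + 0.25 * (1 - x) * ((1 - b2) / (1 + b2)) ** 2

betas = np.linspace(0.0, 2.0, 200001)            # fine grid, step 1e-5
results = {}
for x in (0.2, 0.35, 0.45):                      # broken phase, x < x_c = 1/2
    beta_min = betas[np.argmin(energy_per_boson(betas, x))]
    results[x] = beta_min                        # should equal sqrt(1 - 2x)
```

For $x \geq 1/2$ the same grid search returns $\beta_{\mathrm{min}}=0$, reproducing the spherical minimum of the symmetric phase.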
However, other properties, such as excitation energies or transition
probabilities, which involve excited states, require going one step
beyond this simple mean-field level. In the following, we shall
use a combination of several methods already detailed for the
simple case $L=0$ in Ref. \cite{Dusuel05_2} which allow us to
compute the corrections to these mean-field results as well as the gap
or the
transition rates which require the knowledge of excited states.
\section{The symmetric phase $(1/2<x<1)$}
\label{sec:sym_phase}
The starting point of our analysis is the elimination of the $s$ boson by means of a Holstein-Primakoff boson expansion
\cite{Holstein40} of $s$ and $L$ bosons (for a review in boson
expansion techniques see Ref. \cite{Klein91}). Therefore, we
introduce a set of $b_\mu$ bosons such that the mapping
\begin{eqnarray}
\label{eq:def1}
L^\dagger_\mu L_\nu &=& b^\dagger_\mu {b^{\phantom\dagger}\hspace{-\ldag}}_\nu,\\
L^\dagger_\mu {s^{\phantom\dagger}\hspace{-\ldag}} &=& N^{1/2} b^\dagger_\mu (1-n_b/N)^{1/2}=(s^\dagger L_\mu)^\dag,\\
s^\dagger {s^{\phantom\dagger}\hspace{-\ldag}} &=& N-n_b. \label{eq:def3}
\end{eqnarray}
fulfils the commutation relations at each order in $N$ in the
Taylor expansion of the square root.
With these notations, we have:
\begin{eqnarray}
n_L &=& n_b,\\
P^\dagger_L {P^{\phantom\dagger}\hspace{-\ldag}}_L &=& P^\dagger_b {P^{\phantom\dagger}\hspace{-\ldag}}_b,\\
P^\dagger_s {P^{\phantom\dagger}\hspace{-\ldag}}_s &=& (N-1)\left( N-2n_b\right)+:n_b^2 : ,\\
P^\dagger_L {P^{\phantom\dagger}\hspace{-\ldag}}_s &=& \sum_\mu (-1)^\mu
L^\dagger_\mu {s^{\phantom\dagger}\hspace{-\ldag}} L^\dagger_{-\mu}{s^{\phantom\dagger}\hspace{-\ldag}}, \nonumber \\
&=& N\sum_\mu (-1)^\mu b^\dagger_\mu (1-n_b/N)^{1/2}b^\dagger_{-\mu}(1-n_b/N)^{1/2},
\end{eqnarray}
where $:A:$ denotes the normal-ordered form of the operator $A$.
The Holstein-Primakoff mapping eliminates the $s$ boson at the
cost of introducing infinitely many boson terms. However, each term
in the expansion has a definite $1/N$ order. As shown in the
preceding section, for $1/2<x<1$, the fraction of $L$ bosons in the
ground state goes to
zero in the thermodynamic limit. Thus, to capture the finite $N$
corrections, one performs a $1/N$ expansion of the Hamiltonian
considering that $n_b/N \ll 1$. In this phase, the Hamiltonian
(\ref{hb}) reads:
\begin{widetext}
\begin{eqnarray}
H &=& N^1\left( \frac{1-x}{4}\right) +\nonumber\\
&&N^0 \left(\frac{1-x}{4}\right) \left[ \frac{2(3x-1)}{1-x}n_b-\left(
P^\dagger_b+{P^{\phantom\dagger}\hspace{-\ldag}}_b\right)\right] + \nonumber\\
&&N^{-1}\left(\frac{1-x}{4}\right)\left[ :n_b^2:+P^\dagger_b {P^{\phantom\dagger}\hspace{-\ldag}}_b
-\frac{1}{2}\left(P^\dagger_b+{P^{\phantom\dagger}\hspace{-\ldag}}_b\right)+P^\dagger_bn_b+n_b{P^{\phantom\dagger}\hspace{-\ldag}}_b\right]+\nonumber\\
&&N^{-2}\left(\frac{1-x}{4}\right)\left[ :n_b^2:+P^\dagger_b {P^{\phantom\dagger}\hspace{-\ldag}}_b
-\frac{3}{8}\left(P^\dagger_b+{P^{\phantom\dagger}\hspace{-\ldag}}_b\right)+P^\dagger_bn_b+n_b{P^{\phantom\dagger}\hspace{-\ldag}}_b\right]+ O(1/N^3).
\label{eq:hamilHP}
\end{eqnarray}
\end{widetext}
Here, we have restricted this expansion to order $(1/N)^2$ but
the method we used can, in principle, be applied beyond this limit
as shown in Ref. \cite{Dusuel05_2} for the LMG model ($L=0$). Our
aim is to diagonalize $H$ order by order. At leading order, one
obviously recovers the mean-field ground-state energy per boson
$e_0(N)=\frac{1-x}{4}+O(1/N)$. At order $(1/N)^0$, the Hamiltonian
is quadratic and it can thus be easily diagonalized through a
Bogoliubov transform giving rise to the boson Random Phase Approximation formalism
presented in \cite{Duke84} and more recently exploited to describe
the properties of the symmetric and broken-symmetry phases in the
interacting boson model \cite{Rowe04_1,Rowe04_3}. Higher-order terms cannot be
diagonalized by a Bogoliubov transformation, so that one has to
resort to a more sophisticated method.
\subsection{CUTs formalism}
\label{subsec:floweq}
The Continuous Unitary Transformations (CUTs) technique was
proposed independently by Wegner \cite{Wegner94} and by G{\l}azek and
Wilson \cite{Glazek93,Glazek94}. For a pedagogical introduction,
we refer the reader to Refs. \cite{Wegner01,Bartlett03}. Here we
only sketch the main lines of this simple and powerful
approach.
The idea of the CUTs is to diagonalize the Hamiltonian in a
continuous way starting from the original (bare) Hamiltonian
$H=H(l=0)$. A flowing Hamiltonian is thus defined by
\begin{equation}
H(l)=U^\dagger(l) H(0) U(l), \label{eq:Hl}
\end{equation}
where $l$ is a scaling parameter such that $H(l=\infty)$ is
diagonal, and $U(l)$ is a unitary transformation, i.e. satisfying
$U(l)U^\dagger(l)=U^\dagger(l)U(l)=1$. Taking the derivative of
Eq.(\ref{eq:Hl}) with respect to $l$ yields the differential
(flow) equation
\begin{equation}
\label{eq:dlH}
\partial_l H(l)=[\eta(l),H(l)],
\end{equation}
where the generator of the unitary transformation $\eta(l)$ is
\begin{equation}
\eta(l) = \partial_l U^\dagger(l) U(l) = -U^\dagger(l) \partial_l U(l).
\end{equation}
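To illustrate how the flow equation (\ref{eq:dlH}) diagonalizes a Hamiltonian, the toy example below integrates it for a $4\times4$ symmetric matrix using Wegner's original generator $\eta=[\mathrm{diag}(H),H]$; this is only a minimal demonstration with a generator different from the quasi-particle conserving one employed in this paper. The off-diagonal part decays along the flow while the spectrum is preserved:

```python
import numpy as np

def rhs(H):
    """Flow equation dH/dl = [eta, H] with Wegner's generator eta = [diag(H), H]."""
    Hd = np.diag(np.diag(H))
    eta = Hd @ H - H @ Hd
    return eta @ H - H @ eta

# well-separated diagonal plus a small symmetric perturbation
rng = np.random.default_rng(0)
V = rng.normal(scale=0.1, size=(4, 4))
H = np.diag([0.0, 2.0, 4.0, 6.0]) + 0.5 * (V + V.T)
spectrum = np.sort(np.linalg.eigvalsh(H))        # exact spectrum of H(l=0)

dl, steps = 2e-3, 10000                          # integrate the flow up to l = 20
for _ in range(steps):                           # fourth-order Runge-Kutta step
    k1 = rhs(H)
    k2 = rhs(H + 0.5 * dl * k1)
    k3 = rhs(H + 0.5 * dl * k2)
    k4 = rhs(H + dl * k3)
    H = H + dl / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

off_max = np.max(np.abs(H - np.diag(np.diag(H))))        # H(l) -> diagonal
spec_err = np.max(np.abs(np.sort(np.diag(H)) - spectrum))  # spectrum unchanged
```

Because the flow is a continuous unitary (here, orthogonal) transformation, the diagonal of $H(l\to\infty)$ contains the eigenvalues of the initial matrix.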
CUTs are also a powerful tool to compute the expectation value of
any observable $\Omega$. As for the Hamiltonian, we define a
flowing operator
\begin{equation}
\Omega(l)=U^\dagger(l) \Omega(0) U(l),
\end{equation}
which obeys
\begin{equation}
\partial_l \Omega(l)=[\eta(l),\Omega(l)], \label{eq:flow_obs}
\end{equation}
with $\Omega=\Omega(l=0)$. The expectation value of $\Omega$ on
an eigenstate $|\psi\rangle$ of $H$ is then given by:
\begin{equation}
\langle\psi|\Omega|\psi\rangle=\langle\psi|U(l=\infty)\:
\Omega(l=\infty) \: U^\dagger(l=\infty)|\psi\rangle,
\end{equation}
where $U^\dagger(l=\infty)|\psi\rangle$ is simply the eigenstate of
the diagonal Hamiltonian $H(l=\infty)$.\\
The key point of this approach is an appropriate choice of the
generator $\eta$, which depends on the problem under
consideration. Here, the Hamiltonian $H$ expressed in terms of
$b$ bosons can be schematically written as:
\begin{equation}
\label{eq:ham_012}
H(0)=H_0(0)+H_1^+(0) + H_1^-(0) + H_2^+(0) + H_2^-(0),
\end{equation}
where $H_{1,2}^-=\left({H_{1,2}^+}\right)^\dagger$ and 0, 1 or 2
subscripts indicate the number of created ($+$) or annihilated
($-$) excitations.
To perform the CUTs, we choose the so-called quasi-particle
conserving generator first proposed by Mielke \cite{Mielke98} in
the context of finite matrices and generalized to many-body
problems by Knetter and Uhrig \cite{Knetter00} which reads
\begin{equation}
\label{eq:gen_MKU}
\eta(l)=H_1^+(l) - H_1^-(l) + H_2^+(l) - H_2^-(l).
\end{equation}
In the symmetric phase ($H_1^\pm=0$) this choice coincides with
the generator proposed by Stein \cite{Stein00}. The flow equations
are then simple quadratic functions of the Hamiltonians:
\begin{eqnarray}
\label{eq:flow_eq_general0}
\partial_l H_0(l) &=& 2\Big( \left[ H_1^+(l),H_1^-(l)\right]
+ \left[ H_2^+(l),H_2^-(l)\right] \Big),\quad\\
\label{eq:flow_eq_general1}
\partial_l H_1^+(l) &=& \left[ H_1^+(l),H_0(l)\right]
+ 2\left[ H_2^+(l),H_1^-(l)\right],\\
\label{eq:flow_eq_general2}
\partial_l H_2^+(l) &=& \left[ H_2^+(l),H_0(l)\right].
\end{eqnarray}
In the limit $l=\infty$, the Hamiltonian conserves the number of
$b$ bosons so that $H_{1,2}^{\pm}(\infty)=0$ and
$H(\infty)=H_0(\infty)$. Following the method developed for the
LMG model in \cite{Dusuel04_3,Dusuel05_2}, we convert these
equations, which deal with operators, into equations involving
coupling constants. This is achieved by expanding Hamiltonians
$H_0$ and $H_{1,2}^{\pm}$ in powers of $1/N$ (see
Sec. \ref{subsec:symphase}).
\subsection{Flow equations for the Hamiltonian}
\label{subsec:symphase}
In the symmetric phase ($H_1^{\pm}=0$), we have three elementary
operators $:n_b:,P^\dagger_b,{P^{\phantom\dagger}\hspace{-\ldag}}_b$ from which $H_0$ and
$H_{2}^{\pm}$ are built. More precisely, the $1/N$ expansion of
these Hamiltonians can be written as:
\begin{eqnarray}
\label{eq:H012_0}
H_0(l) &=& \sum_{\alpha,\beta,\delta \in \nbN}
\frac{h_{0,\alpha,\beta}^{(\delta)}(l) {P^\dagger_b}^{\beta} :n_b ^\alpha: {{P^{\phantom\dagger}\hspace{-\ldag}}_b}^{\beta}}{N^{\alpha+2 \beta+\delta-1}},\\
H_2^+(l)& =& \sum_{\alpha,\beta,\delta \in \nbN}
\frac{h_{2,\alpha,\beta}^{(\delta)}(l) {P^\dagger_b} {P^\dagger_b}^{\beta}:n_b^\alpha: {{P^{\phantom\dagger}\hspace{-\ldag}}_b}^{\beta}}{N^{\alpha+2\beta+\delta}}.
\label{eq:H012_1}
\end{eqnarray}
Note that for $L=0$, one has $P^\dagger_b{P^{\phantom\dagger}\hspace{-\ldag}}_b=:n_b^2:$, so that
$h_{k,\alpha,\beta}^{(\delta)}=h_{k,\alpha+2\beta,0}^{(\delta)}$.
One then readily recovers expressions given in Ref.
\cite{Dusuel05_2} for the case of a scalar $b$ boson.
Using this expansion and Eqs.(\ref{eq:flow_eq_general0}-\ref{eq:flow_eq_general2}), we can
easily derive the flow equations for the couplings
$h_{k,\alpha,\beta}^{(\delta)}(l)$ which are given in Appendix
\ref{app:flow_sym_spec} up to order $(1/N)^2$.
These flow equations can be solved exactly and, at order $(1/N)^2$, one finally gets:
\begin{equation}
H(\infty) = h_{0,0,0}(\infty) + h_{0,1,0}(\infty) n_b + h_{0,2,0}(\infty) :n_b^2: +
h_{0,0,1}(\infty) P^\dagger_b {P^{\phantom\dagger}\hspace{-\ldag}}_b +h_{0,3,0}(\infty) :n_b^3: +
h_{0,1,1}(\infty) P^\dagger_b n_b {P^{\phantom\dagger}\hspace{-\ldag}}_b+O(1/N^3)
\label{eq:Hfin}
\end{equation}
with
\begin{equation}
h_{0,\alpha,\beta}(l)=\sum_{\delta \in \nbN}
\frac{h_{0,\alpha,\beta}^{(\delta)}(l)}{N^{\alpha+2 \beta+\delta-1}},
\end{equation}
and
\begin{widetext}
\begin{eqnarray}
\label{eq:h000}
h_{0,0,0}^{(0)}(\infty) &=& \frac{1-x}{4},\\
%
\label{eq:h0001} h_{0,0,0}^{(1)}(\infty) &=& \frac{2L+1}{2}\left[\frac{1-3 x}{2}+\Xi(x)^{1/2}\right],\\
%
\label{eq:h0002} h_{0,0,0}^{(2)}(\infty) &=& (2L+1)(1-x)x
\left[\frac{-(2L+5)+(6L+13)x}{16\Xi(x)}-\frac{L+2}{4
\Xi(x)^{1/2}}\right] ,\\
%
h_{0,0,0}^{(3)}(\infty) &=& -(2L+1)(1-x)x^2 \nonumber \\
&&\times\left[\frac{(2L+1)-\left(8L^2+6L-33\right)x
+\left(32L^2-2L-149\right)x^2-\left(24L^2-38L-179\right)x^3}
{128\Xi(x)^{5/2}}\right. \nonumber \\
%
&&\left. - \frac{2L+5+\left(2L^2-3L-17\right)x
-\left(2L^2-5L-20\right)x^2}
{16\Xi(x)^{2}}\right],
\label{eq:h0003}
\end{eqnarray}
\begin{eqnarray}
\label{eq:h010} %
%
h_{0,1,0}^{(0)}(\infty) &=& \Xi(x)^{1/2},\\
%
\label{eq:h0101} h_{0,1,0}^{(1)}(\infty) &=& (1-x)x
\left[\frac{-1+(2L+5)x}{4\Xi(x)}-\frac{L+2}{2\Xi(x)^{1/2}}\right],\\
%
h_{0,1,0}^{(2)}(\infty) &=& -(1-x)x^2 \nonumber \\
&&\times\left[\frac{L+1-\left(2L^2+3L-6\right)x+\left(12L^2+15L-23\right)x^2
-\left(10L^2+5L-32\right)x^3}
{16 \Xi(x)^{5/2}}\right.\nonumber\\
&&-\left. \frac{1+(2 L^2+5L-1)x-\left(2L^2+3L-4\right)x^2}
{4\Xi(x)^2}\right] ,
\label{eq:h0102} %
\end{eqnarray}
\begin{eqnarray}
%
h_{0,2,0}^{(0)}(\infty) &=& \frac{(1-x)x^2}{4\Xi(x)} , \\
%
h_{0,2,0}^{(1)}(\infty) &=&
-(1-x)x^2\left[\frac{1-3x+(12L+29)x^2-3(4L+9)x^3}{32\Xi(x)^{5/2}}-x\frac{L+1-L
x} {4\Xi(x)^2} \right],
\end{eqnarray}
\begin{eqnarray}
h_{0,0,1}^{(0)}(\infty) &=& \frac{x(1-x)(3x-1)}{8\Xi(x)}, \\
%
h_{0,0,1}^{(1)}(\infty) &=&
-(1-x)x^2\left[(2L+5)\frac{1-3x+11x^2-9x^3}{128\Xi(x)^{5/2}}-x\frac{1+(L-3)x-(L-4) x^2}
{8\Xi(x)^2}
\right],
\end{eqnarray}
\begin{eqnarray}
h_{0,3,0}^{(0)}(\infty) &=& -\frac{(1-x)^2x^4}{8\Xi(x)^{5/2}},
\end{eqnarray}
\begin{eqnarray}
\label{eq:h011}
h_{0,1,1}^{(0)}(\infty) &=& -\frac{(1-x)^2x^2\left(1-2x+9x^2\right)}{64\Xi(x)^{5/2}},
\end{eqnarray}
\end{widetext}
where we have set $\Xi(x)=x(2x-1)$.
The ground-state energy per particle is thus given by
\begin{equation}
e_0(N)= h_{0,0,0}^{(0)}(\infty)+\frac{1}{N} h_{0,0,0}^{(1)}(\infty) +\frac{1}{N^2} h_{0,0,0}^{(2)}(\infty) + \frac{1}{N^3} h_{0,0,0}^{(3)}(\infty)+O(1/N^4),
\label{e0expan}
\end{equation}
whereas the gap reads
\begin{equation}
\Delta(N) = h_{0,1,0}^{(0)}(\infty)+\frac{1}{N}
h_{0,1,0}^{(1)}(\infty) +\frac{1}{N^2} h_{0,1,0}^{(2)}(\infty)
+ O(1/N^3).
\label{gapexpan}
\end{equation}
Of course, these expressions coincide for $L=0$ with those given
in Refs. \cite{Dusuel04_3,Dusuel05_2}.
For $L=2$, one recovers the results given in
Ref. \cite{Dusuel05_3}. The mean-field result (\ref{eq:e0mfs}) is
also recovered in the thermodynamical limit.
It is important to note that the Hamiltonian
$H(\infty)=H_0(\infty)$ is not diagonal in the eigenbasis of $n_b$
(except for $L = 0$) even though it always commutes with $n_b$.
Consequently, within each sector with a fixed number of excitations, $H$ must
still be diagonalized.
As can be observed in Eqs. (\ref{eq:h000})-(\ref{eq:h011}), some
divergences appear, for $x=x_c$, in the sub-leading corrections.
We will see in Sec. \ref{sec:critical} that the structure of this
singular $1/N$ expansion at the critical point allows us to
extract nontrivial scaling exponents whose determination is one of
the main motivations of this work.
\subsection{Flow equations for $ b^\dagger_\mu$}
We now proceed to derive the flow equation for the operator
$b^\dagger_\mu(l)$ from which any other observable can be obtained.
Analogously to the treatment of the Hamiltonian flow equations,
the first step consists in transforming the flow equation
(\ref{eq:flow_obs}) for $\Omega(l)=b^\dagger_\mu(l)$ into a set of flow
equations for couplings. Therefore, we expand $b^\dagger_\mu(l)$ in
powers of $1/N$. Generically, one may expect to generate any term of the form
${b^\dagger_\mu}^\alpha {P^\dagger_b}^\beta :n_b^\gamma: {{P^{\phantom\dagger}\hspace{-\ldag}}_b}^\eta
{{b^{\phantom\dagger}\hspace{-\ldag}}_\mu}^\nu$. Here, we shall restrict our discussion to order
$1/N$ for which one only has eight operators
\begin{equation}
b^\dagger_\mu (l) = A _+(l) b^\dagger_\mu+ A _-(l) {\tilde{b}^{\phantom\dagger}\hspace{-\ldag}}_\mu+
B _+(l) b^\dagger_\mu n_b+ B _-(l) n_b {\tilde{b}^{\phantom\dagger}\hspace{-\ldag}}_\mu+
C _+(l) {P^\dagger_b} {\tilde{b}^{\phantom\dagger}\hspace{-\ldag}}_\mu+ C _-(l) b^\dagger_\mu {P^{\phantom\dagger}\hspace{-\ldag}}_b+
D _+(l) b^\dagger_\mu {P^\dagger_b} + D _-(l) {P^{\phantom\dagger}\hspace{-\ldag}}_b {\tilde{b}^{\phantom\dagger}\hspace{-\ldag}}_\mu
+O(1/N^2) ,
\label{eq:exp_obs}
\end{equation}
where we have introduced ${\tilde{b}^{\phantom\dagger}\hspace{-\ldag}}_\mu= (-1)^{\mu} {b^{\phantom\dagger}\hspace{-\ldag}}_{-\mu}$
and where, as previously, each function has a canonical $1/N$
expansion given by the number of bosonic operators it is
associated with, namely
\begin{eqnarray}
\label{eq:Aexp}
A_\pm(l) &=&{A^{(0)}}_\pm(l)+{{A^{(1)}}_\pm(l)\over N} +O(1/N^2), \\
B_\pm(l) &=&{{B^{(0)}}_\pm(l)\over N} +O(1/N^2),\\
C_\pm(l) &=&{{C^{(0)}}_\pm(l)\over N} +O(1/N^2),\\
D_\pm(l) &=&{{D^{(0)}}_\pm(l)\over N} +O(1/N^2).
\end{eqnarray}
The initial condition is of course given by:
$b^\dagger_\mu(l=0)=b^\dagger_\mu$, so that the only nonvanishing
initial coupling is $A_+^{(0)}(0)=1$. The flow equations are
then obtained for these couplings order by order using
Eq.(\ref{eq:flow_obs}). The full set of equations is
given in Appendix \ref{app:flow_sym_obs}. As for the couplings
defining the running Hamiltonians, these equations can be solved
exactly and lead to
\begin{eqnarray}
\label{eq:obsinf1}
A_s^{(0)}(\infty) &=&\frac{1}{2\Phi(x)^{1/4}} , \\
\label{eq:obsinf1b}
A_s^{(1)}(\infty) &=&-\frac{(1-x)}{16 x} \left[{2 L+3 \over\Phi(x)^{7/4}}- {2(L+2) \over \Phi(x)^{5/4}} \right] ,
\end{eqnarray}
\begin{eqnarray}
A_d^{(0)}(\infty) &=&\frac{\Phi(x)^{1/4}}{2} , \\
A_d^{(1)}(\infty) &=&\frac{(1-x)}{16 x} \left[ {2 L+3 \over \Phi(x)^{5/4}}-{2(L+2)\over \Phi(x)^{3/4}} \right] ,
\end{eqnarray}
\begin{eqnarray}
B_s^{(0)}(\infty) &=& -\frac{1-x}{8 x \Phi(x)^{7/4}} , \\
B_d^{(0)}(\infty) &=&\frac{1-x}{8 x \Phi(x)^{5/4}} ,
\end{eqnarray}
\begin{eqnarray}
C_s^{(0)}(\infty) &=& -\frac{1-x}{16 x \Phi(x)^{7/4}} , \\
C_d^{(0)}(\infty) &=&\frac{1-x}{16 x \Phi(x)^{5/4}} ,
\end{eqnarray}
\begin{eqnarray}
D_s^{(0)}(\infty) &=& \frac{(1-x) (3x-1)}{32 x^{2} \Phi(x)^{7/4}} , \\
D_d^{(0)}(\infty) &=&\frac{1-x^2}{32 x^{2} \Phi(x)^{5/4}} ,
\label{eq:obsinf2}
\end{eqnarray}
where we have set $\Phi(x)={\frac{2x-1}{ x}}$, $F_s={1\over 2}
(F_+ + F_-)$ and $F_d={1\over 2} (F_+ - F_-)$, for each function
$F=A,B,C,D$.
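A quick consistency check, which the reader may find useful, is that at leading order the canonical commutation relation is preserved by the flow: $[{b^{\phantom\dagger}\hspace{-\ldag}}_\mu(\infty),b^\dagger_\mu(\infty)]=1$ requires $A_+^2-A_-^2=4A_sA_d=1$. This can be confirmed numerically (illustrative Python sketch):

```python
import numpy as np

x = np.linspace(0.55, 0.95, 9)      # symmetric phase, x > 1/2
Phi = (2 * x - 1) / x               # Phi(x) as defined in the text
A_s = 1 / (2 * Phi**0.25)           # leading order, Eq. (eq:obsinf1)
A_d = Phi**0.25 / 2
A_plus, A_minus = A_s + A_d, A_s - A_d
# A_+^2 - A_-^2 = 4 A_s A_d must equal 1 at leading order
print(np.allclose(A_plus**2 - A_minus**2, 1.0))   # True
```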
The above expansion of $b^\dagger_\mu(\infty)$ allows us to compute
$\langle \Psi | \Omega | \Psi' \rangle$ for any operator $\Omega$
which can be expressed in terms of $b^\dagger_\mu$ and for any
eigenstates $| \Psi \rangle$ and $| \Psi' \rangle$ of the
Hamiltonian $H(\infty)$. In the following, we shall consider two
different examples to show the power of this approach.
\subsection{Expectation value of the occupation number $n_L$}
Let us first consider the case where $\Omega= n_L$ and where $|
\Psi \rangle=| \Psi' \rangle$ is the ground state of $H$. This
quantity normalized by the number of bosons
can be computed straightforwardly by means of the
Hellmann-Feynman theorem which states
\begin{equation}
{\langle n_L \rangle \over N}=\frac{\partial}{\partial y} {[(1+y)
h_{0,0,0}]} ,
\label{eq:HF}
\end{equation}
where $y=x/(1-x)$. Since we have the expansion of $h_{0,0,0}$ up
to order $(1/N)^3$, one can easily get $\langle n_L \rangle$ at
this order. Here, instead, we compute it in terms of the flow
equation for the operator $b_{\mu}^{\dag}$ obtained in the
preceding section, using the fact that:
\begin{equation}
n_L(\infty)=n_b(\infty)=\sum_\mu b^\dagger_\mu(\infty)
{b^{\phantom\dagger}\hspace{-\ldag}}_\mu(\infty) ,
\end{equation}
where $b^\dagger_\mu(\infty)$ is given by Eq.(\ref{eq:exp_obs}) with
final values Eqs.(\ref{eq:obsinf1})-(\ref{eq:obsinf2}).
The ground state of the Hamiltonian (\ref{eq:Hfin}) being defined
as the zero-$b$-boson state $|0 \rangle$, one has:
\begin{eqnarray}
{\langle 0 | n_L(\infty) | 0 \rangle \over N} &=& \frac{1}{N}~ \sum_\mu \langle 0 |
b^\dagger_\mu(\infty) {b^{\phantom\dagger}\hspace{-\ldag}}_\mu(\infty) | 0 \rangle, \nonumber \\
&=& \frac{1}{N}~ \sum_\mu A^2_- (\infty) \langle 0 |{b^{\phantom\dagger}\hspace{-\ldag}}_\mu b^\dagger_\mu | 0 \rangle
+ O(1/N^3), \hspace{10pt} \nonumber \\
&=& \frac{1}{N}~ (2L+1)
\Big[A^{(0)}_- (\infty)^2 + {2 \over N} A^{(0)}_- (\infty) A^{(1)}_-
(\infty) \Big]
+O(1/N^3), \nonumber \\
&=& \frac{(2L+1)}{N}~ \bigg[ \frac{3x-1}{4 \Xi(x)^{1/2}}-{1 \over 2}\bigg] +
\frac{(2L+1)}{N^2}~ \frac{x (1-x)^2}{16} \left[ -\frac{(2L+3)x}{\Xi(x)^2}
+\frac{2(L+2)}{\Xi(x)^{3/2}}\right] +O(1/N^3),
\label{nlexpan}
\end{eqnarray}
where, as previously, $\Xi(x)=x(2x-1)$. It can be easily verified
that this expression coincides with Eq.(\ref{eq:HF}).
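This coincidence is also easy to confirm numerically. The Python sketch below (an illustration; both sides are truncated at order $1/N$) evaluates the Hellmann-Feynman derivative of Eq. (\ref{eq:HF}) by central finite differences and compares it with the $1/N$ term of Eq. (\ref{nlexpan}):

```python
import math

L, N, x = 2, 200, 0.7                       # symmetric phase, x > 1/2
Xi = lambda u: u * (2 * u - 1)

def h000(u):
    """h_{0,0,0} truncated at order 1/N, Eqs. (eq:h000)-(eq:h0001)."""
    h0 = (1 - u) / 4
    h1 = (2 * L + 1) / 2 * ((1 - 3 * u) / 2 + math.sqrt(Xi(u)))
    return h0 + h1 / N

def f(y):
    """(1+y) h000 as a function of y = x/(1-x)."""
    u = y / (1 + y)
    return (1 + y) * h000(u)

y, eps = x / (1 - x), 1e-6
hf = (f(y + eps) - f(y - eps)) / (2 * eps)   # d/dy [(1+y) h000]

# 1/N term of Eq. (nlexpan); the O(1) term vanishes in this phase
direct = (2 * L + 1) / N * ((3 * x - 1) / (4 * math.sqrt(Xi(x))) - 0.5)
print(abs(hf - direct) < 1e-8)               # True
```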
\subsection{Transition probability between the ground state and the first excited state}
As explained above, the real power of the CUTs method is that it
allows one to easily compute off-diagonal matrix elements of any
operator between any eigenstates of the Hamiltonian provided one knows
the expression of the associated running operator. As an example,
we focus here on the transition probability $T= \big| \langle
1| T_{L_0}|0 \rangle \big|^2$ between the ground state
$|0\rangle$ and the first excited state $| 1 \rangle$. The operator $ T_{L_\mu}$ was
defined in Eq. (\ref{Toper}). It is important to note that here,
the ground state has zero
angular momentum whereas the first excited state has an angular
momentum $L$.
To determine the matrix element of $ T_{L_0}$ of interest, we
shall proceed as for the occupation number and consider its $1/N$
expansion in terms of the $b$ boson:
\begin{eqnarray}
T_{L_0}&=& s^\dagger{L^{\phantom\dagger}\hspace{-\ldag}}_0+L^\dagger_0 s \\
\label{eq:THP}
&=& N^{1/2} \left[b^\dagger_0 + {b^{\phantom\dagger}\hspace{-\ldag}}_0-{1\over 2N} \left( b^\dagger_0 n_b + n_b {b^{\phantom\dagger}\hspace{-\ldag}}_0 \right) +O(1/N^2) \right]. \nonumber
\end{eqnarray}
To be consistent, given that we only have the expression of
$b^\dagger_0(\infty)+ {b^{\phantom\dagger}\hspace{-\ldag}}_0(\infty)$ at order $1/N$, we need to
consider $b^\dagger_0(\infty) n_b(\infty)+ n_b(\infty) {b^{\phantom\dagger}\hspace{-\ldag}}_0(\infty)$
at order $(1/N)^0$. Using the expression (\ref{eq:exp_obs}) of the
operator $b^\dagger_0$, one easily gets
\begin{equation}
\langle 1|b^\dagger_0(\infty)+ {b^{\phantom\dagger}\hspace{-\ldag}}_0(\infty)|0\rangle= A_+(\infty)
+A_-(\infty),
\end{equation}
and
\begin{widetext}
\begin{equation}
\langle 1|b^\dagger_0(\infty) n_b(\infty)+ n_b(\infty)
{b^{\phantom\dagger}\hspace{-\ldag}}_0(\infty)|0\rangle= (2L+3) A_+(\infty) A_-(\infty)^2+
A_-(\infty)\left[(2L+2)A_-(\infty)^2+ A_+(\infty)^2 \right].
\end{equation}
\end{widetext}
Truncating these expressions at order $1/N$ and $(1/N)^0$
respectively and using Eq.(\ref{eq:Aexp}) and
Eqs.(\ref{eq:obsinf1})-(\ref{eq:obsinf1b}), one finally obtains:
\begin{widetext}
\begin{equation}
\label{eq:Tsym} {T \over N}= \frac{x}{\Xi(x)^{1/2}} + \frac{x^2}{N}\left[
-\frac{(2L+1)-4(2L+1)x+(10L+7)x^2}
{4\Xi(x)^{2}} +\frac{-L+(3L+2)x}{2\Xi(x)^{3/2}}\right] +O(1/N^2).
\end{equation}
\end{widetext}
As for the expansion of the spectrum, some singularities appear, and
we shall see that they also provide the scaling exponents at the
critical point.
\section{The broken phase $(0<x<1/2)$}
\label{sec:brok_phase}
As shown by the mean-field analysis, for $x<1/2$, the order
parameter $\langle n_L \rangle/N$ has a nonvanishing value. This
implies that we have to consider a new vacuum for the
Holstein-Primakoff expansion. Therefore, we shift the bosonic modes by a term proportional to $\sqrt{N}$.
We thus define the $c$ bosons by
\begin{equation}
\label{eq:shift}
b^\dagger_\mu = \sqrt{N} \lambda^*_\mu + c^\dagger_\mu
\end{equation}
where the $\lambda_\mu$'s are complex numbers which form a
$(2L+1)$-dimensional vector. Of course, the symmetric phase
results are recovered when setting $\lambda_\mu=0$. Then, using
definitions (\ref{eq:def1})-(\ref{eq:def3}) and assuming that
$n_c/N\ll1$, we expand the Hamiltonian, which now contains a
term proportional to $\sqrt{N}$ which reads
\begin{widetext}
\begin{equation}
\left( c^\dagger\cdot{\tilde{\lambda}^{\phantom\dagger}\hspace{-\ldag}} + \lambda^\dagger\cdot{\tilde{c}^{\phantom\dagger}\hspace{-\ldag}}\right)
\left\{ x+ \frac{1-x}{4} \left[ -2(1-n_\lambda)+ P^\dagger_\lambda +{P^{\phantom\dagger}\hspace{-\ldag}}_\lambda \right] \right\}+
\frac{1-x}{2} \left[
\left( c^\dagger\cdot\lambda^\dagger \right) \left({P^{\phantom\dagger}\hspace{-\ldag}}_\lambda -1+n_\lambda
\right)+ \left( {\lambda^{\phantom\dagger}\hspace{-\ldag}}\cdot{c^{\phantom\dagger}\hspace{-\ldag}} \right) \left( P^\dagger_\lambda
-1+n_\lambda \right)
\right]
\end{equation}
\end{widetext}
where $\lambda^\dagger_\mu=\lambda_\mu^*$ and ${\tilde{\lambda}^{\phantom\dagger}\hspace{-\ldag}}_\mu=(-1)^\mu
\lambda_{-\mu}$. There are several choices of the $\lambda_\mu$'s
which allow one to get rid of these terms. Here, we have chosen
to set $\lambda_\mu=\lambda_0 \delta_{\mu,0}$ with
$\lambda_0^2={1-2x \over
2(1-x)}$. Note that in the thermodynamical limit, we recover the
mean-field value (\ref{eq:nlmf})
\begin{equation}
{\langle n_L \rangle \over N}=\sum_\mu |\lambda_\mu|^2={1-2x \over
2(1-x)}.
\end{equation}
Further, we emphasize that this choice of the $\lambda_\mu$'s is
the same as the one proposed in the mean-field analysis where we
have broken the spherical symmetry by macroscopically populating the
$\mu=0$ boson level only. With this choice, the Hamiltonian reads:
\begin{widetext}
\begin{equation}
H = -N x {3x-2 \over 4 (1-x)} + N^0
\left[\frac{(1-3x)(1-2x)}{8(1-x)} + \frac{x}{2} n_c
+ \frac{5}{4}(1-2x)c^\dagger_0{c^{\phantom\dagger}\hspace{-\ldag}}_0 -\frac{x}{4}(P^\dagger_c+{P^{\phantom\dagger}\hspace{-\ldag}}_c)+\frac{3}{8}(1-2x)({c^\dagger_0}^2+{c^{\phantom\dagger}\hspace{-\ldag}}_0^2)\right] +O(1/\sqrt{N}).
\label{eq:HamilHPb}
\end{equation}
\end{widetext}
Contrary to the symmetric phase, we restrict our discussion to
this order because, as we shall see later, the existence of
gapless modes at this level does not allow one to go beyond with
the CUTs.
\subsection{The spectrum}
The Hamiltonian (\ref{eq:HamilHPb}) can be easily diagonalized via
a Bogoliubov transform. Therefore, we introduce the $d$ bosons
defined by:
\begin{eqnarray}
c^\dagger_\mu &=& \cosh(\Theta_\mu /2) d^\dagger_\mu + \sinh(\Theta_\mu /2)
{\tilde{d}^{\phantom\dagger}\hspace{-\ldag}}_\mu , \\
{\tilde{c}^{\phantom\dagger}\hspace{-\ldag}}_\mu &=& \sinh(\Theta_\mu /2) d^\dagger_\mu + \cosh(\Theta_\mu
/2) {\tilde{d}^{\phantom\dagger}\hspace{-\ldag}}_\mu .
\end{eqnarray}
The angles $\Theta_\mu$ are chosen so that $H$ written in terms of
the $d$'s is diagonal. From Eq.(\ref{eq:HamilHPb}), it is clear
that the $\mu\neq 0$ and $\mu=0$ modes play different roles and
actually decouple. As can be easily seen, eliminating the off-diagonal
terms for $\mu\neq 0$ requires setting $\Theta_{\mu\neq 0}=\infty$
and gives $2L$ gapless modes. Since such a transform is singular,
one has to use another route. The contribution of terms with
$\mu\neq 0$ in the Hamiltonian reads
\begin{equation}
H_{\mu}+H_{-\mu}=\frac{x}{2} \left[c^\dagger_\mu {c^{\phantom\dagger}\hspace{-\ldag}}_\mu
+c^\dagger_{-\mu} {c^{\phantom\dagger}\hspace{-\ldag}}_{-\mu}- (-1)^\mu \left(c^\dagger_\mu
c^\dagger_{-\mu}+{c^{\phantom\dagger}\hspace{-\ldag}}_{-\mu} {c^{\phantom\dagger}\hspace{-\ldag}}_{\mu}\right) \right].
\end{equation}
Introducing the position and momentum operators
\begin{equation}
X_\mu={c^\dagger_\mu + {c^{\phantom\dagger}\hspace{-\ldag}}_\mu\over \sqrt{2}} , \:\:\:
P_\mu= i {c^\dagger_\mu - {c^{\phantom\dagger}\hspace{-\ldag}}_\mu\over \sqrt{2}},
\end{equation}
one has
\begin{equation}
H_{\mu}+H_{-\mu}=- \frac{x}{2} +
\frac{x}{4}\left[(P_\mu+(-1)^\mu P_{-\mu})^2+\left(X_\mu-(-1)^\mu X_{-\mu} \right)^2
\right].
\end{equation}
Since $[P_\mu+ (-1)^\mu P_{-\mu},X_\mu- (-1)^\mu X_{-\mu}]=0$, $H_\mu+H_{-\mu}$ is in
diagonal form and its spectrum is indeed found to be gapless and
continuous. The correction to the ground-state energy coming from
this contribution is thus $-L x/2$.
Let us now consider the $\mu=0$ part of the Hamiltonian which
reads
\begin{equation}
H_{\mu=0}= \left(\frac{5}{4}-2x\right)c^\dagger_0{c^{\phantom\dagger}\hspace{-\ldag}}_0
+\left(\frac{3}{8}-x\right)({c^\dagger_0}^2+{c^{\phantom\dagger}\hspace{-\ldag}}_0^2).
\end{equation}
For this contribution, the Bogoliubov transform can be simply
achieved and one gets:
\begin{equation}
H_{\mu=0}= \frac{1}{2} \left (\sqrt{1-2x}-{5 \over 4}+2x \right)+\sqrt{1-2x}\:\: d^\dagger_0 {d^{\phantom\dagger}\hspace{-\ldag}}_0.
\end{equation}
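This Bogoliubov result is easily cross-checked by brute force: writing $H_{\mu=0}=A\, c^\dagger_0{c^{\phantom\dagger}\hspace{-\ldag}}_0+B\,({c^\dagger_0}^2+{c^{\phantom\dagger}\hspace{-\ldag}}_0^2)$ with $A=5/4-2x$ and $B=3/8-x$, one can diagonalize it numerically in a truncated Fock basis (illustrative Python sketch):

```python
import numpy as np

def bogoliubov_check(x, nmax=120):
    """Compare exact diagonalization of A c^dag c + B (c^dag^2 + c^2)
    in a truncated Fock space with the Bogoliubov result."""
    A, B = 5 / 4 - 2 * x, 3 / 8 - x
    n = np.arange(nmax)
    v = np.sqrt((n[:-2] + 1) * (n[:-2] + 2))   # <n+2| c^dag^2 |n>
    H = np.diag(A * n) + np.diag(B * v, 2) + np.diag(B * v, -2)
    E = np.linalg.eigvalsh(H)
    gap_bog = np.sqrt(1 - 2 * x)               # sqrt(A^2 - 4 B^2)
    e0_bog = 0.5 * (np.sqrt(1 - 2 * x) - 5 / 4 + 2 * x)
    return abs((E[1] - E[0]) - gap_bog), abs(E[0] - e0_bog)

dgap, de0 = bogoliubov_check(x=0.25)
print(dgap < 1e-8, de0 < 1e-8)                 # True True
```

Truncation at $n_{\max}\simeq 100$ is more than sufficient here because $A>2|B|$ throughout the broken phase, so the squeezed ground state has rapidly decaying Fock-space amplitudes.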
The correction to the ground-state energy coming from this
contribution is thus given by the $d_0$ boson state. As a
result, $e_0$ at this order reads:
\begin{widetext}
\begin{equation}
e_0(N)= {x \over 4}{2-3x \over 1-x} + {1 \over N} \left[ \frac{-2-2(L-2)x+(2L-1)x^2}{4(1-x)} +\frac{1}{2}\sqrt{1-2x} \right] +O(1/N^2).
\label{eq:gsebroken}
\end{equation}
\end{widetext}
As in the symmetric phase, the leading corrections coincide with
the mean-field result (\ref{eq:e0mfb}) and $L$ only appears
in the sub-leading terms.
Concerning the gap, the above analysis indicates the existence of
$2L$ gapless modes and a gapped one with excitation energy
\begin{equation}
\Delta^\prime(N)=\sqrt{1-2x}+O(1/N).
\label{eq:gapbroken}
\end{equation}
As previously, one can simply obtain $\langle n_L \rangle/N$ by replacing $h_{0,0,0}$ by
$e_0(N)$ in Eq.(\ref{eq:HF}), and the result is
\begin{equation}
{\langle n_L \rangle \over N}={1-2x \over
2(1-x)} - \frac{1}{2 N}
~\left[L+\frac{x}{\sqrt{1-2x}}+\frac{x}{x-1}\right]+O(1/N^2).
\label{eq:nlbroken}
\end{equation}
At this stage, one can understand the difficulty of going beyond this
order in the presence of gapless modes. Indeed, computing the next-order
corrections would require continuing the $1/N$ expansion around the
(broken) vacuum, but such a procedure does not take into account
the degeneracy due to the gapless modes. Note that for $L=0$ where
no gapless modes emerge, we have been able to compute these
corrections using CUTs \cite{Dusuel05_2}. To conclude this
subsection, we wish to underline that in the two-level BCS model
where gapless modes also exist, Richardson has obtained the
finite-size corrections in the broken phase beyond the Bogoliubov
order using the $1/N$ expansion of the exact solution
\cite{Richardson77}, whereas we computed them more recently using
CUTs in the symmetric phase \cite{Dusuel05_1}.
\subsection{Transition probability between the ground state and the first excited state}
As in the symmetric phase, we shall now compute the transition $T=
\big| \langle 1| T_{L_0}|0 \rangle \big|^2$ where
$ T_{L_0}= s^\dagger{L^{\phantom\dagger}\hspace{-\ldag}}_0+L^\dagger_0 s$. However, the important difference is
that, in the broken phase, one has $2L$ gapless modes, which
render the definition of the first excited state more delicate. In
the thermodynamical limit, the ground state thus becomes
infinitely degenerate and one simply has to consider the
expectation value of $ T_{L_0}$ over the ground state. To avoid any
confusion, we shall call this quantity
$T^\prime$ instead of $T$. Using the
expansion (\ref{eq:THP}) and the shift (\ref{eq:shift}) with the
choice of the $\lambda_\mu$ given previously, one gets
\begin{eqnarray}
T^\prime&=& |\langle 0 | T_{L_0} | 0 \rangle|^2, \nonumber \\
&=& 4 N^2 \lambda_0^2(1-\lambda_0^2)+O(N), \nonumber \\
&=& N^2 {1-2x \over (1-x)^2}+ O(N).
\label{eq:Tbroken}
\end{eqnarray}
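The last simplification, $4\lambda_0^2(1-\lambda_0^2)=(1-2x)/(1-x)^2$, is elementary but can be confirmed at a glance (illustrative Python sketch):

```python
import numpy as np

x = np.linspace(0.05, 0.45, 9)            # broken phase, 0 < x < 1/2
lam2 = (1 - 2 * x) / (2 * (1 - x))        # lambda_0^2
print(np.allclose(4 * lam2 * (1 - lam2), (1 - 2 * x) / (1 - x)**2))  # True
```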
Firstly, it is important to note that $T^\prime$ is proportional to
$N^2$ in this phase whereas $T$ scales as $N$ in the symmetric
phase. Secondly, in the broken phase, $T^\prime$ vanishes at the critical
point whereas $T$ diverges when the critical point is approached from the
symmetric phase. This result clearly suggests an anomalous scaling
behavior at the critical point, which we shall now investigate in detail.
\section{The critical point}
\label{sec:critical}
In this section, we shall analyze the behavior of the $1/N$
expansion of the quantities considered in this study: the ground-state
energy, the gap, the expectation value of $n_L$ in the ground
state and the transition rate $T$ between the ground state and the first
excited state. The common point
of all these expansions is that they become singular at the
critical point. Following the arguments presented in a recent
series of papers
\cite{Dusuel04_3,Dusuel05_1,Dusuel05_2,Dusuel05_3} we shall now
recall how this intriguing property allows one to extract the
finite-size scaling exponents at this point.
All quantities considered in this study display a singular
behavior for $x=x_c$. This singular behavior can emerge in
sub-leading corrections as for the ground-state energy but also in
the leading term as illustrated by the transition rate in the
symmetric phase [see Eq. (\ref{eq:Tsym})]. Thus, schematically,
the $1/N$ expansion of a physical quantity $\Phi$ can be
decomposed into a regular and a singular part as
\begin{equation}
\Phi_N(x)=
\Phi_N^\mathrm{reg}(x)+\Phi_N^\mathrm{sing}(x),
\end{equation}
where, contrary to $\Phi_N^\mathrm{sing}$, $\Phi_N^\mathrm{reg}$
and all its derivatives with respect to $x$ do not diverge when
$x$ goes to $x_c$. Furthermore, at each order of the expansion,
the divergence of $\Phi_N^\mathrm{sing}$ is dominated by a single
term. To be more concrete, let us consider the ground-state energy
in the symmetric phase for which
\begin{eqnarray}
\Phi_N^\mathrm{reg}(x) &=& \frac{1-x}{4}+ {1\over N} \frac{(2L+1)(1-3x)}{4},\\
\Phi_N^\mathrm{sing}(x) &=&{1\over N} \frac{(2L+1)\Xi(x)^{1/2}}{2} +{1\over N^2} h_{0,0,0}^{(2)}(\infty) +{1\over N^3} h_{0,0,0}^{(3)}(\infty)+O(1/N^4).
\end{eqnarray}
In the vicinity of the critical point, these diverging terms have a
leading contribution which is proportional to $\Xi(x)^{-1}$ for
$h_{0,0,0}^{(2)}(\infty)$ and to $\Xi(x)^{-5/2}$ for
$h_{0,0,0}^{(3)}(\infty)$. This has led us to conjecture that
near $x_c$ the singular part behaves as:
\begin{equation}
\label{eq:scalingform}
\Phi_N^\mathrm{sing}(x)\simeq
\frac{\Xi(x)^{\xi_\Phi}}{N^{n_\Phi}}
\mathcal{F}_\Phi\left[N\Xi(x)^{3/2}\right],
\end{equation}
where $\mathcal{F}_\Phi$ is a function that only depends on the
scaling variable $N\Xi(x)^{3/2}$. We underline that for the LMG
model $(L=0)$ we have checked this scaling hypothesis up to high
order in the $1/N$ expansion. For the ground-state energy
discussed above, one has $\xi_\Phi=1/2$ and $n_\Phi=1$.
Once the form (\ref{eq:scalingform}) is accepted, the scaling
exponents are directly obtained. Indeed, since at finite $N$,
physical quantities must not diverge, we conclude that,
necessarily, $\mathcal{F}_\Phi(x) \sim x^{-2 \xi_\Phi /3}$ so that
finally one has, $\Phi_N^\mathrm{sing}(x_\mathrm{c}) \sim
N^{-(n_\Phi+2\xi_\Phi/3)}$. In Table \ref{tab:exponents} we have
gathered the exponents obtained for the quantities
discussed in this paper.
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
$\Phi$ & $\xi_\Phi$ & $n_\Phi$ & $-(n_\Phi+2\xi_\Phi/3)$\\
\hline
\hline
$e_0$ & 1/2 & 1 & -4/3\\
\hline
$\Delta$ & 1/2 & 0 & -1/3\\
\hline
$\langle n_L \rangle$ & -1/2 & 0 & 1/3\\
\hline
$T$ & -1/2 & -1 & 4/3\\
\hline
\end{tabular}
\caption{Scaling exponents for the ground-state energy per boson $e_0$, the gap $\Delta$, the number of $L$ bosons in the ground state $\langle n_L \rangle$ and the $T$ transition probability.}
\label{tab:exponents}
\end{table}
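The last column of Table \ref{tab:exponents} follows directly from $\xi_\Phi$ and $n_\Phi$; as a sanity check, it can be regenerated exactly with rational arithmetic (illustrative Python sketch):

```python
from fractions import Fraction as F

# (xi_Phi, n_Phi) for each quantity of Table tab:exponents
data = {"e0": (F(1, 2), 1), "Delta": (F(1, 2), 0),
        "nL": (F(-1, 2), 0), "T": (F(-1, 2), -1)}
scaling = {k: -(n + 2 * xi / 3) for k, (xi, n) in data.items()}
print(scaling["e0"], scaling["Delta"], scaling["nL"], scaling["T"])
# -4/3 -1/3 1/3 4/3
```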
We wish to emphasize that the scaling exponents related to the
spectral quantities, i.e., $e_0, \Delta$ and, using Eq.
(\ref{eq:HF}), $\langle n_L \rangle$, can also be obtained in a
different way. Indeed, as explained in Sec. \ref{sec:MF}, the
energy surface is that of a quartic oscillator ($\beta^4$-like
potential) and this can be used as a starting point of a
semi-classical description to show that the spectrum, at the
critical point, scales as $N^{-1/3}$. For technical details, we
refer the reader to Ref. \cite{Leyvraz05} for the LMG model or
\cite{Rowe04_2} for the IBM with $L=2$, and we also note that the
``critical" scaling exponents do not depend on $L$. However, the
present approach has a real advantage as compared to this latter
method since one can compute the scaling exponents of any
observable using the expression of $b_\mu(\infty)$. We are,
furthermore, not restricted to expectation values but can also
investigate off-diagonal matrix elements, as illustrated by the
transition rate $T$.
\section{Numerical results}
\label{sec:numerics}
In this Section we check the validity of the
analytical expressions obtained in the preceding Sections using
CUTs. The observables studied are: the ground-state energy per boson
$e_0$, the gap $\Delta$, the expectation value of the number of
$L$ bosons in the ground state $\langle n_L \rangle$ and the
transition probability between the ground state and
the first excited state $T$.
\begin{figure}[h]
\centering
\includegraphics[width=6cm]{./figures/figgen.eps}
\caption{General features of the observables studied in this work as a function of the control parameter $x$ obtained by numerical diagonalization.}
\label{fig1L0123}
\end{figure}
In Fig. \ref{fig1L0123} we present the general features of the selected observables as a function of the
control parameter $x$ for $L=2$ and for $N=500$. Note that in the broken phase $(x<x_c)$, we have plotted $\Delta$ and $T/N^2$ instead of $\Delta^\prime$ and $T^\prime/N^2$, these two latter quantities being discussed in Sec. \ref{subsec:num_broken}. We can thus clearly appreciate the emergence of Goldstone modes in the broken phase.
We emphasize that $\langle n_{L} \rangle/N$ as well as the transition probability $T/N^2$ may be considered as order parameters since they vanish in the symmetric
phase and acquire a finite value in the broken one.
However, while $\langle n_{L} \rangle/N$ is directly related to the physical ground state, $T$ involves the first excited state, which turns out to collapse into the ground state in the broken phase. This latter property makes it a more controversial candidate for an order parameter, as recently discussed in Refs. \cite{Iachello04,Pan05}.
In Fig. \ref{fig2L0123}, we plot the difference between the numerical and the mean-field value of $e_0$
(dashed line) and $\langle n_{L} \rangle/N$ (full line) as a function of the boson number, $N$, for three characteristic values of the control parameter: $x=0.75$ in the symmetric phase, $x=0.5$
at the critical point and $x=0.25$ in the broken phase.
As can be seen, the mean-field results become exact as the number of
bosons increases.
It is interesting to note the change of sign in the deviations of the order parameter, showing that the mean-field approach underestimates (overestimates) it in the symmetric (broken) phase. To emphasize this effect, we plot in Fig. \ref{fig1MF} the same quantities for $N=20$ bosons as a function of the control parameter $x$. While the deviations in the ground-state energy behave smoothly around the critical point, there is a well-defined cusp in the deviations of $\langle n_{L} \rangle/N$.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{./figures/Fig1MFnew.eps}
\caption{Differences between numerical (num) and mean-field (m.f.) results for the ground-state energy (per boson) $e_0$ and expectation value of the number of $L=2$ bosons in the ground state (per boson) $\langle n_L \rangle /N$ as a function of
the boson number $N$ for three values of the control parameter: $x=0.25$ (broken phase), $x=0.5$ (critical point) and $x=0.75$ (spherical phase).} \label{fig2L0123}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=6cm]{./figures/fig6bnew.eps}
\caption{Differences between numerical (num) and mean-field (m.f.) results for the ground-state energy (per boson) $e_0$ and expectation value of the number of $L=2$ bosons in the ground state (per boson) $\langle n_L \rangle /N$ as a function of
the control parameter $x$ at fixed $N=20$. }
\label{fig1MF}
\end{figure}
Now that we have shown the main characteristics of the physical quantities of interest and the general agreement, in the thermodynamical limit, with the simple mean-field results presented in Sec. \ref{sec:MF}, let us analyze in details the finite-size corrections in each phase independently.
\subsection{The symmetric phase}
\label{subsec:num_sym}
In Section IV, we have obtained analytical expressions for the
different corrections in the $1/N$ expansion of the selected
observables. In order to check these results, we present several
plots, focusing first on the case $L=2$ and the dependence on $x$; in a second step, we discuss the dependence on $L$.
Let us first consider the ground-state energy per boson $e_0$, whose expansion in the symmetric phase is given in Eq. (\ref{e0expan}). In Fig. \ref{fig7e0}, the leading term of Eq. (\ref{e0expan}) is compared with the numerical results for different values of $N$, confirming that $h_{0,0,0}^{(0)}$ is indeed the asymptotic value of $e_0$.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{./figures/fig7e0new.eps}
\caption{
Comparison between the numerical (symbols) and analytical (line) ground-state energy per boson $e_0$ for different values of $N$ at leading order.
}
\label{fig7e0}
\end{figure}
Next, we compare in Fig. \ref{figL2ener} the numerical and analytical subleading corrections to $e_0$ at each order. The numerical corrections of order $p$ to $e_0$ are defined from the numerical value $e_0^{num}$ as $ N^p \left| e_0^{num} -\sum_{\alpha =0}^{p-1} h_{0,0,0}^{(\alpha)}(\infty)/N^\alpha \right|$ whereas the analytical correction is obviously given by $h_{0,0,0}^{(p)}(\infty)$. We present results for $p=1,2,3$ and $N=10,100,1000$.
As can be clearly seen, for the largest value of $N=1000$, numerical and analytical results are almost indistinguishable even for values of $x$ close to the critical point where $h_{0,0,0}^{(2)}$ and $h_{0,0,0}^{(3)}$ are known to diverge. Note that the critical point $x_c=0.5$ was explicitly excluded.
Of course, the smaller $N$ the larger the discrepancy since the numerical correction defined above still contains higher-order terms which play a role in this case.
Along the same line, we analyze the corrections for the gap $\Delta$ defining the numerical correction of order $p$ from the numerically calculated gap $\Delta^{num}$ as
$ N^p \left|\Delta^{num} -\sum_{\alpha =-1}^{p-1} h_{0,1,0}^{(\alpha)}(\infty)/N^\alpha \right |$ with
$h_{0,1,0}^{(-1)}(\infty)=0$. The analytical correction of order $p$ is $h_{0,1,0}^{(p)}(\infty)$.
Finally, we perform the same analysis for $\langle n_L \rangle/N$ (see Fig. \ref{figL2ven}) and $T/N$ (see Fig. \ref{figL2T}) by comparing the two first terms of their expansion with the numerical results. The numerical corrections are computed as for the gap.
All these plots show that the $x$-dependence of the analytical
expressions obtained with CUTs is in complete agreement with the
exact numerical results for large values of $N$.
To conclude these checks, we have investigated the $L$-dependence of the analytical
results. We present in Fig. \ref{fig11} the same observables (with the same notations) as those presented in Figs. \ref{figL2ener}-\ref{figL2T} for fixed $x=0.6$ and $N=1000$ as a function of $L$. Once again, the agreement between the numerical results and the analytical expressions is excellent and confirms the validity of our analytical results.
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{./figures/fig7newb.eps}
\caption{
Comparison in $\log$-normal scale between the numerical (symbols) and analytical results (lines) order by order for the ground-state energy per boson $e_0$ (see text for definitions).
}
\label{figL2ener}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{./figures/fig8newb.eps}
\caption{
Comparison in $\log$-normal scale between the numerical (symbols) and analytical results (lines) order by order for the gap $\Delta$ (see text for definitions).
}
\label{figL2gap}
\end{figure}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{./figures/fig9newb.eps}
\caption{
Comparison in $\log$-normal scale between the numerical (symbols) and analytical results (lines)
for the expectation value of the occupation number in the $L$ level per boson in the ground state $\langle n_L \rangle/N$. $n_L^{(1)}$ stands for the $1/N$ term and $n_L^{(2)}$ stands for the $1/N^2$ term in Eq. (\ref{nlexpan}).
}
\label{figL2ven}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{./figures/fig10newb.eps}
\caption{
Comparison in $\log$-normal scale between the numerical (symbols) and analytical results (lines)
for the transition probability per boson between the ground and the first excited state $T/N$. $T^{(0)}$ stands for the $N$-independent term and $T^{(1)}$ stands for the $1/N$ term in Eq. (\ref{eq:Tsym}).
}
\label{figL2T}
\end{figure}
\newpage
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{./figures/fig11log.eps}
\caption{
Comparison in $\log$-normal scale between numerical (symbols) and analytical results (lines) as a function of $L$. Notations are the same as in Figs. \ref{figL2ener}-\ref{figL2T}.
}
\label{fig11}
\end{figure}
\subsection{The broken phase}
\label{subsec:num_broken}
As explained in Sec. \ref{sec:brok_phase}, the presence of Goldstone modes in the broken phase prevents us from computing the corrections at high order. Therefore, we restrict our discussion here to the first nontrivial order.
We present in Fig. \ref{figbroken} a direct comparison between numerical results (symbols) and the analytical ones given in Eqs. (\ref{eq:gsebroken}), (\ref{eq:gapbroken}), (\ref{eq:nlbroken}) and (\ref{eq:Tbroken}) (lines) as a function of the control parameter $x$ for $N=1000$ and $L=2$.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{./figures/figbroken.eps}
\caption{Comparison between numerical (symbols) and analytical (lines) results. We only plot here the leading terms for each quantity.}
\label{figbroken}
\end{figure}
It is worth recalling that the gap associated with a one-phonon state in the symmetric phase turns into a Goldstone boson in the broken phase. The first excited state in the latter phase thus corresponds to a two-phonon state in the symmetric phase. However, this gapped mode (\ref{eq:gapbroken}) goes to zero at the critical point in the thermodynamic limit $N=\infty$. As in the symmetric case, one can appreciate the agreement between analytics and numerics, as already discussed at this order in Ref. \cite{Rowe04_1}.
We have also checked that the subleading terms of $e_0$ and $\langle n_L \rangle /N$ (beyond the mean-field results), which contain a nontrivial dependence on $L$, agree with the numerics. In Fig. \ref{figbroken2}, we show, for $L=2$, a comparison between numerical and analytical results for $N=10,100,1000$. As in the symmetric phase, the larger $N$ the better the agreement. The dependence on $L$ is tested in Fig. \ref{figbroken2L} at fixed $x$ and $N$.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{./figures/broken2.eps}
\caption{Comparison between numerical (symbols) and analytical (lines) results for $L=2$.
As in the symmetric phase, we have subtracted from the numerical data the leading term given by the
mean-field treatment.
}
\label{figbroken2}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{./figures/broken2L.eps}
\caption{Comparison between numerical (symbols) and analytical (lines)
results for the subleading corrections. As in the symmetric phase, we have subtracted from the numerical data the leading term; $e_0^{(1)}$ and $n_L^{(1)}$ refer to the terms proportional to $1/N$ in Eq. (\ref{eq:gsebroken}) and Eq. (\ref{eq:nlbroken}), respectively.
}
\label{figbroken2L}
\end{figure}
\subsection{The critical point}
We now turn to the critical point study. To check the value of the finite-size scaling exponents derived in Sec.
\ref{sec:critical}, we have performed diagonalizations for large system sizes (up to $N=2^{13}$ bosons). Let us recall that for $L=0$, we have checked these values for larger system sizes in Ref. \cite{Dusuel05_2}. We show in Fig. \ref{figxc} our results for different values of $L=0,1,2,3$.
Note that we plot the $\log_2$ of each quantity.
\begin{figure}[h]
\centering
\includegraphics[width=9cm]{./figures/FigcxLogLognew.eps}
\caption{
Plot of the singular parts of $e_0$, $\Delta$, $\langle n_L \rangle$ and $T$ at the critical point $x_c=1/2$ as a function of the boson number $N$ for different values of $L$.}
\label{figxc}
\end{figure}
In this figure, only the singular part of the physical quantities of interest is plotted, the regular one being removed using the {\it ad hoc} expressions given in this work. As can be seen, the exponents are independent of $L$ as expected from our calculations and match very well the predicted values.
For $L=0$, these exponents can also be obtained by noting that the LMG model can be seen as an Ising model in a transverse field with long-range interactions \cite{Botet82,Botet83}.
Then, the scaling variable $N \Xi^{3/2}$ is obtained from the upper critical dimension of the equivalent model with short-range interactions, which is known to be $d_c=3$ in this case. For $L \neq 0$, the two-level system studied in this paper cannot be simply mapped onto a short-range interaction model. It is thus rather remarkable that the finite-size scaling exponents are independent of $L$. However, as explained in Sec. \ref{sec:critical}, this is due to the $\beta^4$-like potential underlying the critical theory.
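As an aside, extracting such an exponent from the $\log_2$-transformed data amounts to a least-squares slope. The sketch below (plain Python, on synthetic data rather than the actual diagonalization results) illustrates the procedure.

```python
import math

def fit_exponent(Ns, values):
    """Least-squares slope of log2(value) versus log2(N), i.e. the
    finite-size scaling exponent a in value ~ N^a."""
    xs = [math.log2(n) for n in Ns]
    ys = [math.log2(v) for v in values]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic data obeying value ~ N^(-1/3):
Ns = [2**k for k in range(6, 14)]
vals = [n ** (-1.0 / 3.0) for n in Ns]
print(fit_exponent(Ns, vals))  # close to -1/3
```

Applied to the singular parts plotted in Fig. \ref{figxc}, such a fit recovers the predicted finite-size scaling exponents.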
\section{Summary and conclusions}
\label{sec:conclusion}
In this paper, we have studied two-level boson models where the lower
boson has a zero angular momentum ($s$ boson), and the upper one, an angular momentum $L$.
All these models are defined by the $U(2L+2)$ algebra, from which one can
find chains of subalgebras going down to the $O(3)$ angular momentum
algebra. When the Hamiltonian is written as a combination of Casimir
operators of a chain of subalgebras, it is said that a dynamical
symmetry occurs and the problem is analytically solvable. In this
paper, we focused on the study of the quantum phase
transition that appears when the boson system has a $O(2L+1)$
symmetry, i.e. a transition from $U(2L+1)$ to $O(2L+2)$ dynamical
symmetries.
This second-order transition is well described by a mean-field approach and the subtleties arise in the finite-size corrections. Here, we have explicitly computed these corrections for several physical quantities using firstly a $1/N$ expansion naturally given by the Holstein-Primakoff representation of the angular momenta, and secondly the continuous unitary transformations to diagonalize the Hamiltonian.
In the spherical (symmetric) phase, we have thus been able to capture corrections beyond the standard Random Phase Approximation and to show that the $1/N$ expansion is singular at the critical point. The analysis of these singularities has allowed us to compute the finite-size scaling exponents which have been found to be independent of $L$.
In the deformed (broken) phase, we have only computed the first corrections via a simple Bogoliubov transformation, in order to show the main differences from the spherical phase.
We have also presented a formalism based on boson seniority that provides a simple and efficient way of solving the problem numerically for a large number of bosons (a few thousand).
Using this powerful algorithm, we have compared order by order analytical and numerical results and found an excellent agreement between both. We hope that the present work will help in understanding the approach to the macroscopic limit in such models, a problem that has recently drawn much attention \cite{Iachello04,Rowe04_2}.
\section{Equivalence Checking Algorithm}
\label{sec:algo}
\begin{wrapfigure}[8]{l}{0.47\textwidth}
\begin{minipage}{0.47\textwidth}
\vspace{-0.6in}
\begin{algorithm}[H]
\caption{Equivalence checking}
\label{alg:equ-checking}
\begin{algorithmic}[1]
\State $\mathcal{A}_{paa} \gets \mathit{paaConstruct}$ ($f, g, \mathcal{P}_{align}$){\label{alg1:paaconstruct}}
\State $\mathcal{A}^{inv}_{paa} \gets \mathit{learnInvariants}$($\mathcal{A}_{paa}$) {\label{alg1:learninvariants}}
\If{final-state invariant of $\mathcal{A}^{inv}_{paa} \Rightarrow$ equiv. prop{\label{alg1:checkproofob}}}
\State \textbf{return} \emph{equivalent}
\EndIf
\State \textbf{return} \emph{unknown}{\label{alg1:end}}
\end{algorithmic}
\end{algorithm}
\end{minipage}
\end{wrapfigure}
Alg.~\ref{alg:equ-checking} shows the procedure for checking equivalence
of two programs, $f$ and $g$. Given the programs and an alignment predicate
$\mathcal{P}_{align}$, it builds a program alignment automaton, learns
invariants for every state in the PAA, and then checks if the invariants in the
final state discharge the equivalence goal. The learned invariants need to be
consistent with the PAA, in the sense that if one picks an edge in the PAA,
then the invariants at the target state must follow from the ones at the
source, and the label on the chosen edge.
The inputs to the procedure $\mathit{paaConstruct}$ in
Alg.~\ref{alg:cons-paa} are automata $\mathcal{A}_{P_1}$ and
$\mathcal{A}_{P_2}$, which are CFGs of $f$ and $g$ resp., and an alignment
predicate $\mathcal{P}_{\mathit{align}}$. We assume that each program/function
has a unique entry state and a unique exit state, $q_{init}$ and $q_{exit}$, akin to the initial and
final states of an automaton. The procedure collects the states of both the
automata, and defines the states, $\mathcal{S}$, of the PAA to be their
product, i.e. each state in $\mathcal{S}$ is a tuple of two states $(q_i,
q_j)$, one from each automaton. The initial product state (which is simply the
product of the initial states) is marked reachable using the set
$\mathit{Reach}$, and the transitions ($\mathcal{T}$, an empty set in the
beginning) are populated one at a time, in a \emph{while} loop (lines
$7$-$20$).
In each iteration of the loop, a source state $(q_{i_1}, q_{{j_1}})$ is chosen
as any \emph{unvisited} state from the reachable set $Reach$ (and is marked
\emph{visited} immediately), along with a target state $(q_{i_2}, q_{{j_2}})$
from $\mathcal{S}$ (lines $7$-$9$). Then, the procedure derives a regular
expression denoting words in automaton $\mathcal{A}_{P_1}$, corresponding to
paths beginning at $q_{i_1}$ and ending at $q_{i_2}$. And, similarly, another
regular expressions for words in $\mathcal{A}_{P_2}$, for paths beginning at
$q_{j_1}$ and ending at $q_{j_2}$\footnote{These regular expressions between
program states need to be computed only once for every state combination, and can
be stored in a look-up table to avoid recomputation.}. At this
point, it discards this source-target pair if the regular expression
corresponds to an empty set in any of the automata. It also makes a discard if
the source and target states are the same, and the regular expression for any
of them is the empty word $\epsilon$. Intuitively, a discard of the former kind
means that the target is simply not reachable from the source in the product
program, whereas one of the latter kind denotes that one of the programs is stuck in
a no-progress cycle. One may also discard aggressively, e.g. if the program
states in the target are not immediate neighbours of those in the source, but
this may come at the cost of completeness (feasible program behaviours missing
from the PAA).
The regular expressions are split over the top-level \emph{or} (+) to deal with the
different paths one at a time. This results in the sets $R_i$ and $R_j$,
obtained by splitting $rex_i$ and $rex_j$ resp., as shown in line $14$.
For every combination of paths (or, in other words, for every pair of regular
expression $r_i \in R_i$ and $r_j \in R_j$), the expressions are instantiated
by replacing $*$'s with symbolic constants $k_i$'s. The decision whether there
is a solution for the $k_i$'s in the instantiated expressions, such that an
appropriate edge labeling can be obtained, is left to an SMT solver (see
Sect.~\ref{subsec:concretize}). An edge is added between a source
and a target state only if the alignment predicate can be propagated along the
edge. Line $17$ of the pseudocode encodes this check. If an edge is added, the
target state is added to the $\mathit{Reach}$ set with an \emph{unvisited}
mark.
\begin{algorithm}[ht]
\caption{The program alignment automaton construction algorithm}
\label{alg:cons-paa}
\begin{algorithmic}[1]
\Procedure{$\mathit{paaConstruct}$~}{$\mathcal{A}_{P_1}, \mathcal{A}_{P_2}, \mathcal{P}_{\mathit{align}}$}
\State $S_1 \gets$ states of $\mathcal{A}_{P_1}$
\State $S_2 \gets$ states of $\mathcal{A}_{P_2}$
\item[]
\State $\mathcal{S} \gets$ \{($q_i,q_j$)\} where $q_i \in S_1, q_j \in S_2$ \Comment{\textcolor{gray}{set of product states}}
\State $\mathit{Reach} \gets \{(q_{init_1},q_{init_2})\}$ \Comment{\textcolor{gray}{$q_{init_{i}}$ is the start state of automaton $\mathcal{A}_{P_i}$}}
\State $\mathcal{T} \gets \emptyset$
\item[]
\While{$\mathit{Reach}$ has a state $(q_{i_1}, q_{{j_1}})$, not yet marked \emph{visited}}
\State mark $(q_{i_1}, q_{{j_1}})$ as \emph{visited}
\For{$(q_{i_2}, q_{{j_2}}) \in \mathcal{S}$} \Comment{\textcolor{gray}{picking a target state to find transitions}}
\State $rex_i \gets$ $\mathcal{L}$($\mathcal{A}_{P_1}$, with $q_{i_1}$ as initial and $q_{i_2}$ as final states)
\State $rex_j \gets$ $\mathcal{L}$($\mathcal{A}_{P_2}$, with $q_{j_1}$ as initial and $q_{j_2}$ as final states)
\Statex \Comment{\textcolor{gray}{discard the state-pair in case of no paths, or if there is a no-progress cycle}}
\State $\mathbf{next}$ if ($rex_i$ = $\emptyset$) or ($rex_j$ = $\emptyset$)
\State $\mathbf{next}$ if ($q_{i_1}$ = $q_{i_2}$ and $q_{j_1}$ = $q_{j_2}$) and ($rex_i = \{\epsilon\}$ or $rex_j = \{\epsilon\}$)
\item[]
\State $R_i \gets \mathit{split}$($rex_i$); $R_j \gets \mathit{split}$($rex_j$)
\For{$(r_i,r_j) \in R_i \times R_j$}
\State ${r^c}_i, {r^c}_j$ $\gets \mathit{instantiate}$($r_i, r_j$) \Comment{\textcolor{gray}{replace $\ast$ with constants $k_i$'s}}
\State find min $k_i$'s: $\mathcal{P}_{\mathit{align}} \wedge {r^c}_i \wedge {r^c}_j \Rightarrow \overline{\mathcal{P}}_{\mathit{align}}$ \Comment{\textcolor{gray}{$\overline{~\cdot~}$ denotes next-state}}
\If {a solution is found}
\State $\mathcal{T} \gets \mathcal{T} ~~ \cup ~~ (q_{i_1}, q_{j_1}) \xrightarrow{{r^c}_i;{r^c}_j} (q_{i_2}, q_{j_2})$
\State $Reach \gets Reach \cup \{(q_{i_2}, q_{j_2})\}$
\EndIf
\EndFor
\EndFor
\EndWhile
\item[]
\State $\mathcal{S} \gets Reach$ \Comment{\textcolor{gray}{unreached states are removed from $\mathcal{S}$ in the end}}
\State $\mathit{simplify}(\mathcal{T}, \mathcal{S})$
\EndProcedure
\end{algorithmic}
\end{algorithm}
The \emph{while} loop exits when all the reachable states have been marked
\emph{visited}. At this point the unreached states are removed from $S$, and
the resulting set along with the set of transitions $\mathcal{T}$, describes
the program alignment automaton obtained thus. The resulting PAA is also
simplified, in a manner similar to~\cite{spa-pldi19}, as explained in
Sect.~\ref{subsec:reduce}.
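For readers who prefer executable pseudocode, the worklist structure of the construction can be sketched as follows (Python; the path-language extraction, instantiation and solver check of lines 11--18 are abstracted into a callback \texttt{edge\_ok}, which is a simplification of this sketch rather than part of the algorithm).

```python
def paa_construct(states1, states2, init1, init2, edge_ok):
    """Skeleton of the PAA worklist loop (cf. Alg. 2).

    edge_ok(src, tgt) returns a transition label, or None when no edge
    between src and tgt can propagate the alignment predicate."""
    product_states = [(q1, q2) for q1 in states1 for q2 in states2]
    reach = {(init1, init2)}
    visited = set()
    transitions = set()
    while reach - visited:            # an unvisited reachable state remains
        src = min(reach - visited)    # pick any unvisited reachable state
        visited.add(src)
        for tgt in product_states:    # try every candidate target state
            label = edge_ok(src, tgt)
            if label is not None:
                transitions.add((src, label, tgt))
                reach.add(tgt)
    return reach, transitions         # unreached product states are dropped

# Toy run: two 2-state programs; the only feasible edge advances both.
ok = lambda s, t: "step" if (t[0] == s[0] + 1 and t[1] == s[1] + 1) else None
reach, trans = paa_construct([0, 1], [0, 1], 0, 0, ok)
print(sorted(reach))  # [(0, 0), (1, 1)]
```

The sketch shows why termination is immediate: each product state is visited at most once, and edges are only attempted from visited states.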
The usefulness of an alignment predicate is reflected in how well it helps align
the programs and discharge the equivalence property. For example, if an
alignment predicate only helps to align the initial and the final states, and
no other state in between, it does not make the proof any easier than
completely analysing the programs independently. Finding good alignment
predicates is thus important, but also quite challenging at the same
time~\cite{spa-pldi19}. Though we do not address this problem here, we believe
that data- and syntax-guided techniques can be quite helpful in making this
practicable. For example, the technique in~\cite{spa-pldi19} learns a set of
candidate alignment predicates from the training data. Similarly, one may
construct a grammar and sample these candidates automatically from the program
source following the ideas
of~\cite{freqhorn-fmcad17,freqhorn-sas18,freqhorn-cav19}.
\subsection{Propagating preconditions along transitions}
\label{rem:input-pred}
In addition to the alignment predicate, there are also predicates that
capture the \emph{preconditions} under which we are checking
equivalence. This could, for instance, be a predicate equating the
input variables of the two programs. Let $\mathcal{P}_{\mathit{input}}$
denote a set of such predicates. When a transition is added to the PAA,
the predicates in this set are also propagated to the target state if
they hold there. These predicates help in propagating the alignment
predicate by strengthening the premise of the check in line $17$. If
the alignment predicate can be propagated along an edge without the
help of these input predicates, then the edge is added as it is.
Otherwise, if it is propagated with the assistance of the input
predicates, then the edge is marked (as ``dependent on an input
predicate'') before it is added to a set of marked transitions,
$\mathcal{T}_m$ (instead of $\mathcal{T}$). Once the PAA construction
is over, if an input predicate has not been propagated to any state, we
remove all marked edges from the state that are dependent on that
predicate.
The reason we separate the marked transitions from the unmarked ones is
to avoid backtracking. A predicate $p \in
\mathcal{P}_{\mathit{input}}$ holds at a non-initial state $s$ only if
it is preserved along all paths that reach $s$. At an intermediate
stage in the construction, even if $p$ holds at $s$, it may later be
discovered to not hold there. However, if $p$ was used at that stage
to propagate the alignment predicate, we would need to remove that edge
and backtrack. Marking such transitions and keeping them separately
allows us to get rid of all of them, at once, in the end.
\subsection{Reduction of Program Alignment Automaton}
\label{subsec:reduce}
The procedure $\mathit{simplify~}(\mathcal{T},\mathcal{S})$ reduces the program alignment
automaton for $\mathcal{P}_{align}$ by repeatedly applying the following two
reductions, as long as they have some effect.
\begin{enumerate}
\item $\mathit{RemoveStates~}$ removes every state $s$, other than the
initial and the final state, that does not have a self-loop.
Essentially, it replaces each pair of transitions $s_k
\xrightarrow{P;Q} s$ and $s \xrightarrow{P';Q'} s_l$, where $s$
does not have a self-loop, with a transition $s_k
\xrightarrow{} s_l$ labeled with $PP';QQ'$.
\item $\mathit{RemoveTransitions~}$ removes transitions of the form $s
\xrightarrow{P';Q'} s_k$, if there is a transition $s
\xrightarrow{P;Q} s_l$ where $P$ is a prefix of $P'$ and $Q$
is a prefix of $Q'$.
\end{enumerate}
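A minimal sketch of these two reductions (Python; transitions are represented as triples $(s_k, (P,Q), s_l)$ with labels as strings, a representation choice of this sketch rather than part of the procedure):

```python
def remove_state(transitions, s):
    """RemoveStates: bypass a state s that has no self-loop, replacing each
    pair s_k --(P;Q)--> s --(P';Q')--> s_l by s_k --(PP';QQ')--> s_l."""
    incoming = [t for t in transitions if t[2] == s]
    outgoing = [t for t in transitions if t[0] == s]
    kept = [t for t in transitions if s not in (t[0], t[2])]
    for (sk, (p, q), _) in incoming:
        for (_, (p2, q2), sl) in outgoing:
            kept.append((sk, (p + p2, q + q2), sl))
    return kept

def remove_subsumed(transitions):
    """RemoveTransitions: drop s --(P';Q')--> s_k whenever some strictly
    shorter s --(P;Q)--> s_l exists with P a prefix of P' and Q of Q'."""
    def subsumed(t):
        s, (p2, q2), _ = t
        return any(u[0] == s
                   and p2.startswith(u[1][0]) and q2.startswith(u[1][1])
                   and len(u[1][0]) + len(u[1][1]) < len(p2) + len(q2)
                   for u in transitions)
    return [t for t in transitions if not subsumed(t)]

# Bypassing the middle state 'm' concatenates the labels:
ts = [("a", ("P", "Q"), "m"), ("m", ("R", "S"), "b")]
print(remove_state(ts, "m"))   # [('a', ('PR', 'QS'), 'b')]

# A label that extends another one from the same source is dropped:
ts2 = [("a", ("P", "Q"), "b"), ("a", ("PX", "Q"), "c")]
print(remove_subsumed(ts2))    # [('a', ('P', 'Q'), 'b')]
```

The strict-length condition in \texttt{remove\_subsumed} is a defensive choice of this sketch, so that two transitions with identical labels never subsume each other.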
\subsection{Concretization of regular expressions}
\label{subsec:concretize}
While adding transitions to the PAA, we employ a solver to compute valid
solutions for the $k_i$'s in the instantiated regular expressions. In this
subsection, we describe why such instantiations suffice to account for all
program behaviours. In our PAA construction, the transition labels $P;Q$
can be of three types.
\begin{enumerate}
\item Both $P$ and $Q$ contain loop blocks, i.e. the label is of form
$r_i r_i^{k_1}; r_jr_j^{k_2}$ where $r_i$ and $r_j$ are the blocks
denoting loops in respective functions. In this case, we find
the minimum values of $k_1$ and $k_2$ such that $k_1+1$
iterations of $r_i$ are aligned with $k_2+1$ iterations of
$r_j$. If we did not take the minimum values, we would be
unable to account for the behaviours with a smaller number of
loop iterations, whereas the minimum values automatically
accommodate behaviours with a higher number of loop iterations.
\item Only one of $P$ and $Q$ has a loop block, i.e. the label is
of one of the forms $r_{i-1}r_i^{k};r_j$, $r_i^{k}r_{i+1};r_j$,
$r_i;r_j^{k}r_{j+1}$ or $r_i;r_{j-1}r_j^{k}$, where the
expression with superscript $k$ denotes the loop block. In this
case, we check if there exists a value of $k$ such that the transition preserves the validity of the alignment predicate.
Intuitively, this value of $k$ determines how many iterations
of the loop in one function are aligned with a non-loop block in
the other function.
\item Neither $P$ nor $Q$ has a loop block, i.e. the label is
$r_i;r_j$. Here, merely checking that taking this
transition does not violate the alignment predicate is
sufficient.
\end{enumerate}
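In simple settings, the minimal $k_1,k_2$ of case 1 can even be found by bounded enumeration instead of an SMT query. The sketch below (Python, with toy transition semantics supplied as functions, which is an assumption of the sketch) aligns two loops that advance their counters by different strides under the alignment predicate $i=i'$.

```python
def min_alignment(step1, step2, state, pred, bound=16):
    """Smallest (k1, k2) with k1, k2 >= 0 such that applying step1 k1+1
    times and step2 k2+1 times to `state` re-establishes pred. Returns
    None within the given bound (an SMT solver replaces this in general).
    Minimality here means smallest k1+k2, then smallest k1."""
    for total in range(2 * bound):
        for k1 in range(min(total, bound - 1) + 1):
            k2 = total - k1
            if k2 >= bound:
                continue
            s = dict(state)
            for _ in range(k1 + 1):
                s = step1(s)
            for _ in range(k2 + 1):
                s = step2(s)
            if pred(s):
                return (k1, k2)
    return None

# Loop in f advances i by 3, loop in g advances i' by 2; alignment i == i'.
f_step = lambda s: {**s, "i": s["i"] + 3}
g_step = lambda s: {**s, "ip": s["ip"] + 2}
pred = lambda s: s["i"] == s["ip"]
print(min_alignment(f_step, g_step, {"i": 0, "ip": 0}, pred))  # (1, 2)
```

Here 2 iterations of the first loop align with 3 of the second, mirroring the intuition that the minimum solution also covers behaviours with more iterations (which are multiples of this alignment).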
\section{Illustration on another example: arrayInsert}
\label{sec:arrayinsert}
We underline the usefulness of our direct construction, as compared to the
trace-based technique, using another example borrowed from~\cite{pdsc-cav19}.
Consider two copies, $f$ and $g$, of a program \texttt{arrayInsert}, as shown
in Fig.~\ref{fig:appendix-arrayinsert-code}; their CFGs are shown in
Fig.~\ref{fig:ap-arrayinsert-cfg}. The program takes 3 input parameters:
array $A$, its length \emph{len} and an integer $h$. The precondition under
which the equivalence is to be established is $A=A'$ and $len=len'$. The
variables $h$ and $h'$ are unconstrained by the precondition.
\begin{figure}[h]
\subcaptionbox{Copy $f$\label{subfig:appendix-code-f}}[%
0.49\linewidth
]%
{%
\lstinputlisting[label={lst:appendix-f-code}, caption={}, style=customc]{code-files/appendix-example-f.c}%
}%
\noindent
\subcaptionbox{Copy $g$\label{subfig:appendix-code-g}}[%
0.40\linewidth
]%
{%
\lstinputlisting[label={lst:appendix-g-code}, caption={}, style=customc]{code-files/appendix-example-g.c}%
}%
\caption[]{Copies of \texttt{arrayInsert} program}
\label{fig:appendix-arrayinsert-code}
\end{figure}
The task here is to insert $h$ at its appropriate position in the sorted array
$A$, with the underlying assumption that $h$ is sensitive information and the
place where it is inserted must not be leaked. To achieve this, the programs
have a proxy loop towards the end, to move the counter $i$ to the end,
independent of the position where $h$ was inserted. The postcondition for
equivalence is that the output $i$ is the same for both the programs.
\begin{comment}
\begin{figure}
\begin{minipage}[pos=m]{0.05\linewidth}
~
\end{minipage}%
\begin{minipage}[pos=m]{0.45\linewidth}
\lstinputlisting[
basicstyle=\tiny,
numbers=left,
numbersep=5pt,
xleftmargin=0pt,
frame=l,
]{code-files/appendix-example-f.c}
\end{minipage}%
\noindent
\begin{minipage}[pos=m]{0.45\linewidth}
\lstinputlisting[
basicstyle=\tiny,
numbers=left,
numbersep=5pt,
xleftmargin=0pt,
frame=l,
]{code-files/appendix-example-g.c}
\end{minipage}
\caption{Copies of \texttt{arrayInsert} program}\label{fig:appendix-arrayinsert-code}
\end{figure}
\end{comment}
\begin{wrapfigure}{l}{0.65\textwidth}
\centering
\begin{subfigure}[t]{0.27\textwidth}
\includegraphics[width=\linewidth]{figures/appendix-cfg-f.pdf}
\caption{Program $f$ automaton\label{subfig:ap-arrayinsert-cfg-f}}
\end{subfigure}
\quad
\hspace{1pt}
\begin{subfigure}[t]{0.28\textwidth}
\includegraphics[width=\linewidth]{figures/appendix-cfg-g.pdf}
\caption{Program $g$ automaton\label{subfig:ap-arrayinsert-cfg-g}}
\end{subfigure}
\caption{Control flow graphs for Fig.~\ref{fig:appendix-arrayinsert-code} programs}
\label{fig:ap-arrayinsert-cfg}
\end{wrapfigure}
Naturally, in this case, the predicate $i=i'$ appears to be a good candidate
for the alignment predicate $\mathcal{P}_{align}$ to construct a PAA. There
are 3 scenarios based on the values of parameter \texttt{h} across both copies:
(i) $h=h'$ or both inserted at the same position in respective arrays, (ii)
$h<h'$ where $h$ and $h'$ are inserted at different positions, and (iii) $h>h'$
and both are inserted at different positions. The trace-based technique
in~\cite{spa-pldi19} would require a different pair of executions for computing
the trace alignment in each scenario. Fig.~\ref{fig:ap-trace-paas}
illustrates the program alignment automata constructed for each of these cases.
Absence of any of the pairs would lead to missing behaviors in the final PAA.
In contrast, our approach gives the PAA shown in Fig.~\ref{fig:ap-our-paa}.
We argue that this PAA covers each scenario and overapproximates all
behaviors.
\begin{figure}
\centering
\subcaptionbox{Trace based PAA for $h=h'$ or $h$ and $h'$ are inserted at same positions\label{subfig:ap-trace-paa-1}}{\includegraphics[width=0.75\linewidth]{figures/appendix-trace-paa-1.pdf}}
\subcaptionbox{Trace based PAA for $h>h'$ where $h$ and $h'$ are inserted at different positions\label{subfig:ap-trace-paa-2}}{\includegraphics[width=0.8\linewidth]{figures/appendix-trace-paa-2.pdf}}
\subcaptionbox{Trace based PAA for $h<h'$ where $h$ and $h'$ are inserted at different positions\label{subfig:ap-trace-paa-3}}{\includegraphics[width=0.8\linewidth]{figures/appendix-trace-paa-3.pdf}}
\caption{Trace PAAs for programs in Fig.~\ref{fig:ap-arrayinsert-cfg}}
\label{fig:ap-trace-paas}
\end{figure}
Consider the initial state $q_1q'_1$: each of $q_1$ and $q'_1$ has one outgoing
transition with its guard predicate as $true$ (say, $\alpha$ and $\alpha'$
resp.). Hence there is only one transition $\alpha\alpha'$ at $q_1q'_1$, which
is included in the PAA. Now, let us consider the state $q_2q'_2$. We show that
the behaviours that are not present at $q_2q'_2$ are actually infeasible. The
same argument can be extended to the rest of the states in a similar manner. There
are two behaviors possible at $q_2$: ($i<len \land A[i]<h$) (say, $\alpha$), and
($i\geq len \lor A[i]\geq h$) ($\lnot \alpha$). Similarly, $q'_2$ has two
possible behaviors: ($i'<len' \land A'[i']<h'$) (say, $\gamma$), and ($i'\geq len'
\lor A'[i']\geq h'$) ($\lnot\gamma$). This leads to a total of four possible
behaviors at $q_2q'_2$: $\alpha\gamma$, $\lnot\alpha\gamma$,
$\alpha\lnot\gamma$, and $\lnot\alpha\lnot\gamma$, as shown below. The
alignment predicate $\mathcal{P}_{align}$ is $i=i'$, and $len=len'$ is a loop
invariant at $q_2q'_2$.
\begin{enumerate}
\item $\alpha\gamma$: ($i<len \land A[i]<h$) $\land$ ($i'<len' \land A'[i']<h'$)
\vspace{7pt}
\item $\alpha\lnot\gamma$: ($i<len \land A[i]<h$) $\land$ ($i'\geq len' \lor A'[i'] \geq h'$)
\begin{enumerate}
\item ($i<len \land A[i]<h$) $\land$ $i'\geq len'$
\item ($i<len \land A[i]<h$) $\land$ $A'[i'] \geq h'$
\vspace{7pt}
\end{enumerate}
\item $\lnot\alpha\gamma$: ($i \geq len \lor A[i] \geq h$) $\land$ ($i'<len' \land A'[i']<h'$)
\vspace{7pt}
\begin{enumerate}
\item $i \geq len$ $\land$ ($i'<len' \land A'[i']<h'$)
\item $A[i] \geq h$ $\land$ ($i'<len' \land A'[i']<h'$)
\vspace{7pt}
\end{enumerate}
\item $\lnot\alpha\lnot\gamma$: ($i \geq len \lor A[i] \geq h$) $\land$ ($i'\geq len' \lor A'[i'] \geq h'$)
\begin{enumerate}
\item $i \geq len$ $\land$ $i'\geq len'$
\item $i \geq len$ $\land$ $A'[i'] \geq h'$
\item $A[i] \geq h$ $\land$ $i'\geq len'$
\item $A[i] \geq h$ $\land$ $A'[i'] \geq h'$
\end{enumerate}
\end{enumerate}
\begin{figure}
\centering
\includegraphics[width=0.6\linewidth]{figures/appendix-our-paa.pdf}
\caption{Directly constructed PAA for programs in Fig.~\ref{fig:ap-arrayinsert-cfg}}
\label{fig:ap-our-paa}
\end{figure}
\textbf{Case 1} corresponds to the self-loop $b;b'$ at $q_2q'_2$ which is
included in the PAA.
\textbf{Case 2a} shows the predicate $i < len~\land~i' \geq len'$, which is
not satisfiable. The reason is that the alignment predicate $i=i'$ holds at
$q_2q'_2$ and $len=len'$ is a loop invariant. This infeasible transition is not
present in the PAA, which satisfies our requirement.
\textbf{Case 2b} represents the transition $q_2q'_2 \xrightarrow{\epsilon;c'd'}
q_2q'_4$ in our PAA. It is noteworthy that this transition is not included in
the automaton from the trace-based construction
(Figures~\ref{subfig:ap-trace-paa-1} and~\ref{subfig:ap-trace-paa-3}).
\textbf{Case 3a} is not part of our PAA either. The predicate $i \geq
len~\land~i' < len'$ is unsatisfiable; therefore, the transition is infeasible.
\textbf{Case 3b} is associated with the transition $q_2q'_2
\xrightarrow{cd;\epsilon} q_4q'_2$ in our PAA. However, this transition is not
included in the trace-based automata in Figures~\ref{subfig:ap-trace-paa-1}
and~\ref{subfig:ap-trace-paa-2}.
\textbf{Cases 4a, 4b, 4c, 4d} correspond to the transition $q_2q'_2
\xrightarrow{cd;c'd'} q_4q'_4$, which is a part of our program alignment
automaton.
It can similarly be argued that the program alignment automaton has all
possible behaviours at every state. Further, notice that $i=i'$ is an alignment
predicate, which holds at each state of the PAA by construction. In particular,
it holds at the exit state ($q_5q'_5$), and thus the PAA establishes
equivalence of the copies $f$ and $g$.
\section{Conclusion and Future Work}
\label{sec:conc}
We presented an algorithm for building program alignment automata, addressing
the equivalence checking problem for two programs. Our algorithm works directly
on the automaton of the individual programs, without needing any test cases or
making any unrealistic assumptions. Developing a prototype tool that implements
this algorithm is an immediate future work. In particular, it would be useful
to explore heuristics that make the technique scale in practice. For example,
by eagerly discarding states, transitions, and alignment predicates that are
not leading to a good alignment automaton. An aggressive reduction of the
product states may also help gain efficiency, though it may come at the cost of
completeness (i.e. the PAA missing some feasible behaviors).
\section{Multiple Alignment Predicates and Disjunctive Invariants}
\label{sec:features}
Intuitively, a PAA is \emph{good} (in other words, \emph{useful} in making the
equivalence proof easier) if it can make the programs align at multiple
locations, i.e. if there are many intermediate nodes. In the worst case, if the
programs align only in the beginning, then the PAA cannot make the proof any
easier (than self-composing the programs and checking).
Consider two PAAs $A$ and $A'$ for alignment predicates $\mathcal{P}_{align}$
and $\mathcal{P'}_{align}$, respectively. If the number of reachable nodes in
$A$ is larger than in $A'$, then $A$ is considered better aligned; the quality of
the alignment thus clearly depends on the chosen predicate. As an optimization, we can
parallelize computing transitions for multiple predicates and maintain multiple
transition sets. Additionally, we can stop computing transitions for
predicates that yield significantly fewer reachable nodes than the
others. Multiple alignment predicates can also help in suggesting disjunctive
invariants. For example, consider functions $f$ and $g$ shown in
Figures~\ref{subfig:disjunct-code-f} and~\ref{subfig:disjunct-code-g}. They
take two input parameters, $h$ and $cons$, and define two local variables
$y$ and $z$. Each function branches on the value of $h$ -- the first
branch corresponds to the case $h > 100$ while the other is taken when $h \leq
100$. Now, assume two alignment predicates: $p_1 \mathrel{\stackrel{\scriptscriptstyle\Delta}{=}} z = z' + cons$ and
$p_2 \mathrel{\stackrel{\scriptscriptstyle\Delta}{=}} y = y' + cons$. Recall that the alignment predicate is, by
assumption, true at the initial state. It is easy to observe that $p_1$ helps in
aligning first branch ($h > 100$) whereas $p_2$ assists in the alignment of the
other branch ($h \leq 100$). The predicate $p_1 \wedge p_2$ fails to align
either of the branches, whereas $p_1 \vee p_2$ helps in aligning both the
branches.
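The interplay of the two predicates can be replayed mechanically. The following Python sketch is not the code of Figure~\ref{fig:disjunct-code-f-g}; it uses hypothetical paired states, chosen so that (as in the example) only $p_1$ holds after the first branch and only $p_2$ after the second, to show that the conjunction aligns neither branch while the disjunction aligns both.

```python
# Hypothetical paired states (s, s') at the end of each branch; primed
# variables are suffixed with 2. Values are chosen so that p1 holds only
# in branch_hi (h > 100) and p2 holds only in branch_lo (h <= 100).
branch_hi = {"z": 10, "z2": 7, "y": 1, "y2": 5, "cons": 3}
branch_lo = {"z": 2, "z2": 6, "y": 8, "y2": 5, "cons": 3}

p1 = lambda s: s["z"] == s["z2"] + s["cons"]   # z = z' + cons
p2 = lambda s: s["y"] == s["y2"] + s["cons"]   # y = y' + cons
conj = lambda s: p1(s) and p2(s)
disj = lambda s: p1(s) or p2(s)

def aligns(pred):
    # a predicate is useful only if it holds on every branch
    return pred(branch_hi) and pred(branch_lo)
```

Only the disjunction survives the `aligns` check on both branches.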
\begin{figure}[!ht]
\subcaptionbox{Function f\label{subfig:disjunct-code-f}}[%
0.45\linewidth
]%
{%
\lstinputlisting[label={lst:disjunct-f-code}, caption={}, style=customc]{code-files/disjunct-inv-f.c}%
}%
\noindent
\subcaptionbox{Function g\label{subfig:disjunct-code-g}}[%
0.4\linewidth
]%
{%
\lstinputlisting[label={lst:disjunct-g-code}, caption={}, style=customc]{code-files/disjunct-inv-g.c}%
}%
\caption[]{Multiple alignment predicates and their disjunction}
\label{fig:disjunct-code-f-g}
\end{figure}
\begin{comment}
\begin{figure}
\begin{minipage}[pos=m]{0.05\linewidth}
~
\end{minipage}%
\begin{minipage}[pos=m]{0.45\linewidth}
\lstinputlisting[
basicstyle=\tiny,
numbers=left,
numbersep=5pt,
xleftmargin=0pt,
frame=l,
]{code-files/disjunct-inv-f.c}
\end{minipage}%
\noindent
\begin{minipage}[pos=m]{0.45\linewidth}
\lstinputlisting[
basicstyle=\tiny,
numbers=left,
numbersep=5pt,
xleftmargin=0pt,
frame=l,
]{code-files/disjunct-inv-g.c}
\end{minipage}
\caption[]{Multiple alignment predicates and their disjunction}
\label{fig:disjunct-code-f-g}
\end{figure}
\end{comment}
\section{Illustrative Run on an Example}
\label{sec:example}
We use the example in Fig.~\ref{fig:code-cfg-f-g} to illustrate
Alg.~\ref{alg:cons-paa}. The inputs to $\mathit{paaConstruct}$ are the
automata shown in figures~\ref{subfig:cfg-f} and~\ref{subfig:cfg-g}, and an
alignment predicate $\mathcal{P}_{align}$: $array+4i=array'$. The sets $S_1$
and $S_2$ are $\{q_1,~q_2,~q_3\}$ and $\{q'_1,~q'_2,~q'_3\}$ (resp.), and thus
the PAA has nine possible states
$\{q_1q'_1,~q_1q'_2,~\ldots,~q_3q'_2,~q_3q'_3\}$. We often denote the product
state $(q_i, q_j)$ as $q_iq_j$. The states $q_1q'_1$ and $q_3q'_3$ are marked
as initial and final, resp. We assume that the alignment predicate holds in the
initial state, without evaluating whether it actually holds or not. As
described in Sect.~\ref{rem:input-pred}, we also have a set of input predicates
(omitted from Alg.~\ref{alg:cons-paa} for ease of exposition) that hold in the
beginning. In this example, it is the set $\{array = array', len = len',
\omega = \omega'\}$, where $\omega$ denotes the heap state. The predicate
$\omega = \omega'$ is the precondition that the programs execute from the same
heap state. Input predicates hold at the initial state, and at any subsequent
state unless a transition flips their truth value.
\vspace{-0.35in}
\input{transitions-table}
We mark the initial state $q_1q'_1$ as \emph{reachable} by initializing the set
$Reach$ with it (line $5$). We also initialize the transition set
$\mathcal{T}$ to be an empty set. The process of adding a transition begins by
picking two states: a \emph{source} state from the $Reach$, and a \emph{target}
from $\mathcal{S}$.
Table~\ref{fig:transition_table} shows all the transitions that were added by
the algorithm. In what follows, we describe a few interesting cases in detail.
\textbf{Single transition} Consider the pair of states $q_1q'_1 \in Reach$ and
$q_2q'_1 \in \mathcal{S}$ at the entry 1 in Table~\ref{fig:transition_table}.
We mark $q_1q'_1$ as \emph{visited} before proceeding (line $8$). The regular
expression $ab^*$ denotes all the words beginning at $q_1$ and ending at $q_2$
in Fig.~\ref{subfig:cfg-f} (line $10$). Similarly, $\epsilon$ denotes the
words starting at $q'_1$ and ending at $q'_1$ in Fig.~\ref{subfig:cfg-g}
(line $11$). Since these expressions do not have a top-level alternation (denoted by
`$+$'), we just get two singleton sets -- $\{ab^*\}$ and $\{\epsilon\}$ -- in
line $14$. Recall that, by assumption, both $\mathcal{P}_{align}$ and
$\mathcal{P}_{input}$ hold at $q_1q'_1$. We employ an SMT solver to find an
instantiation, if one exists, of $ab^{k_1};\epsilon$, such that
$\mathcal{P}_{align}$ retains its truth value at $q_2q'_1$ after taking the
transition (lines $16-19$). In particular, we solve the query $array+4i=array'
\land ab^{k_1} \land \epsilon \implies
\overline{array}+4\overline{i}=\overline{array}'$ for the minimum value of
$k_1$. As the solver does not find any satisfying assignment, we try to solve
the query by adding $\mathcal{P}_{input}$ to the premise. The solver now
returns $0$ as the solution, which results in the transition $q_1q'_1
\xrightarrow{a;\epsilon} q_2q'_1$. We add this transition to $\mathcal{T}_m$
(see Sect.~\ref{rem:input-pred}) and add $q_2q'_1$ to $Reach$ (lines $20-22$).
Further, since the basic block $a$ in Fig.~\ref{subfig:cfg-f} does not affect
the truth value of $\mathcal{P}_{input}$, we propagate $\mathcal{P}_{input}$ to
$q_2q'_1$ through this transition by making a similar query to the solver.
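This instantiation query can be mimicked by brute force on a concrete model of the blocks. The sketch below encodes hypothetical semantics distilled from Fig.~\ref{fig:code-cfg-f-g} (block $a$ sets $i$ to $0$; block $b$ increments $i$; the bit flips themselves do not influence the predicate) and searches for the smallest $k_1$ for which $a;b^{k_1}$ re-establishes $array+4i=array'$.

```python
def run(word, s):
    """Execute a word of hypothetical block semantics over a state dict."""
    s = dict(s)
    for blk in word:
        if blk == "a":        # f: i := 0
            s["i"] = 0
        elif blk == "b":      # f: flip array[i], i++
            s["i"] += 1
    return s

# alignment predicate: array + 4i = array' (array2 stands for array')
p_align = lambda s: s["array"] + 4 * s["i"] == s["array2"]

def min_k1(s0, bound=8):
    """Smallest k1 (if any, up to bound) making p_align hold after a;b^k1."""
    for k1 in range(bound):
        if p_align(run("a" + "b" * k1, s0)):
            return k1
    return None

# P_input gives array = array'; i is uninitialised before block a runs.
start = {"array": 1000, "array2": 1000, "i": None}
```

Under this model the search returns $0$, matching the label $a;\epsilon$ found by the solver.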
\begin{wrapfigure}[16]{l}{0.38\textwidth}
\centering
\includegraphics[width=\linewidth]{figures/9-states-paa.pdf}
\caption{PAA for Fig.~\ref{fig:code-cfg-f-g}}
\label{fig:full-paa}
\end{wrapfigure}
Let us pick another pair of states: $q_1q'_1$ and $q_1q'_2$, the second entry in Table~\ref{fig:transition_table}. The regular expressions denoting the words
between component states are $\epsilon$ and $b'c'^* + a'c'^*$, and splitting
gives two sets: $\{\epsilon\}$ and $\{b'c'^*; a'c'^*\}$. We solve two queries in
order to obtain their instantiations: (i) $array+4i=array' \land \epsilon \land
b'c'^{k_1} \implies \overline{array}+4\overline{i}=\overline{array}'$, and (ii)
$\mathcal{P}_{\mathit{input}}\land array+4i=array' \land \epsilon \land
a'c'^{k_2} \implies \overline{array}+4\overline{i}=\overline{array}'$. For the
first query, the solver provides $k_1=0$. We add a transition $q_1q'_1
\xrightarrow{\epsilon;b'} q_1q'_2$ to $\mathcal{T}$ and add $q_1q'_2$ to
$Reach$. Notice that we added $\mathcal{P}_{input}$ to the premise in the second
query only after we observed that the solver could not find an instantiation without
$\mathcal{P}_{input}$. The second query could not be solved, even with
$\mathcal{P}_{input}$. We propagate $\mathcal{P}_{input}$ to $q_1q'_2$, as it
is not affected by the edge labeled $\epsilon;b'$.
\textbf{Discarding a pair of states} Consider a pair $q_1q'_2$ and $q_2q'_1$ at
entry $3$ in Table~\ref{fig:transition_table}. Note that there is a path from
$q_1$ to $q_2$ in the automaton in Fig.~\ref{subfig:cfg-f} but there is no
path from $q'_2$ to $q'_1$ in ~\ref{subfig:cfg-g}. So, we discard this pair
since no transition can be added from $q_1q'_2$ to $q_2q'_1$ (line $12$ in
Alg.~\ref{alg:cons-paa}). Consider another pair, $q_2q'_1$ and $q_2q'_1$, at
entry $5$. The associated regex $b^*;\epsilon$ represents all the words
starting at $q_2q'_1$ and ending at $q_2q'_1$. As this expression would result
in a \emph{no-progress cycle} (the state does not change in either
component, and at least one of the expressions is $\epsilon$) at $q_2q'_1$, we
discard this pair (line $13$ in Alg.~\ref{alg:cons-paa}).
In this example, we only look for pairs whose component states are
immediate neighbors in their respective automata. For instance, we do not look for a
transition between $q_1q'_1$ and $q_1q'_3$ because $q'_1$ and $q'_3$ are not
immediate neighbors in Fig.~\ref{subfig:cfg-g}. Such optimizations, in general,
may lead to loss of behaviours.
\textbf{Multiple transitions}
For the pair $q_2q'_1$ and $q_2q'_2$ at entry $6$, the regular expressions are
$b^*$ and $a'c'^*+b'c'^*$. Splitting gives us the sets: $\{b^*;a'c'^*\}$ and
$\{b^*;b'c'^*\}$. The solver returns $k_1=1, k_2=0$ as the instantiation of
$b^{k_1};a'c'^{k_2}$, which gives the edge label $b;a'$. For the other
component, $b^{k_3};b'c'^{k_4}$, the transition label becomes $\epsilon;b'$ as
the solver returns $k_3=0, k_4=0$. These were obtained with
$\mathcal{P}_{input}$ in the premise, and thus, are added to $\mathcal{T}$.
The state $q_2q'_2$ is added to $Reach$. Observe that the truth values of
$array=array'$ and $len=len'$ are affected by the blocks $b$ and $a'$,
therefore these are dropped at the \emph{target} state. However, since $\omega
= \omega'$ is still unaffected, we propagate it to $q_2q'_2$.
\textbf{Self-loop} The transition $b^*;c'^*$ at entry 14 corresponds to a
self-loop at the state $q_2q'_2$. We get $k_1=2,k_2=1$ as the instantiation of
$bb^{k_1};c'c'^{k_2}$ using the solver, and add a self-loop with label $bb;c'$
at $q_2q'_2$. Informally, it shows that a transition with two iterations of $b$
and one iteration of $c'$ preserves the satisfiability of $\mathcal{P}_{align}$
at $q_2q_{2}'$. Note that we do not ask for the minimum values of the $k_i$'s in
the case of $b^{k_1};c'^{k_2}$, because the minimum ($k_1=0,k_2=0$) corresponds
to a no-progress cycle. The query is suitably modified for self-loops to ensure
progress.
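The preservation argument for this self-loop is easy to replay concretely. In the Python sketch below, the block semantics are assumptions distilled from the example: $b$ increments $i$, and $c'$ advances the primed pointer by $8$ bytes; the bit flips do not affect the predicate.

```python
def b(s):   # f: flip array[i], i++
    s = dict(s); s["i"] += 1; return s

def cp(s):  # g: flip 64 bits at array', then array' += 8
    s = dict(s); s["array2"] += 8; return s

# alignment predicate: array + 4i = array'
p_align = lambda s: s["array"] + 4 * s["i"] == s["array2"]

s = {"array": 1000, "i": 2, "array2": 1008}
```

Two iterations of `b` followed by one of `cp` preserve `p_align`, while a single `b` with one `cp` does not, which is why the self-loop label is $bb;c'$.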
\begin{figure}
\centering
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\linewidth]{figures/cfg-step-1-simplify.pdf}
\caption{After removing states $q_1q'_3$ and $q_3q'_1$\label{subfig:simplify-1}}
\end{subfigure}
\quad
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\linewidth]{figures/cfg-step-2-simplify.pdf}
\caption{After removing state $q_1q'_2$\label{subfig:simplify-2}}
\end{subfigure}
\caption{Reduction of PAA in Figure~\ref{fig:full-paa}}
\label{fig:simplification}
\end{figure}
\vspace{-0.3in}
Once the while loop ends, the unreachable states are removed from
$\mathcal{S}$, and the valid transitions of $\mathcal{T}_m$ are added to
$\mathcal{T}$. A marked transition is valid if the input predicates used in the
premise continue to be available at the source state in the end. The PAA thus
constructed, shown in Fig.~\ref{fig:full-paa}, is then simplified. For
instance, we can remove state $q_1q'_3$ by replacing transitions $q_1q'_2
\xrightarrow{\epsilon;d'} q_1q'_3$ and $q_1q'_3 \xrightarrow{a;\epsilon}
q_2q'_3$ with a transition $q_1q'_2 \xrightarrow{a;d'} q_2q'_3$ which is
already present. The reduced PAA is shown in Fig.~\ref{subfig:simplify-1}. In
a similar manner, state $q_1q'_2$ is removed by replacing: (i) the transitions
$q_1q'_1 \xrightarrow{\epsilon;b'} q_1q'_2$ and $q_1q'_2 \xrightarrow{a;d'}
q_2q'_3$ with a transition $q_1q'_1 \xrightarrow{a;b'd'} q_2q'_3$, (ii) the
transitions $q_1q'_1 \xrightarrow{\epsilon;b'} q_1q'_2$ and $q_1q'_2
\xrightarrow{a;\epsilon} q_2q'_2$ with a transition $q_1q'_1 \xrightarrow{a;b'}
q_2q'_2$. Next, we remove the transition
$q_1q'_1 \xrightarrow{a;b'd'} q_2q'_3$ because there exists a transition
$q_1q'_1 \xrightarrow{a;b'} q_2q'_2$ where $a$ is a prefix of $a$ and $b'$ is a
prefix of $b'd'$. We keep applying these reductions until the PAA cannot be
simplified further. The final PAA is shown in Fig.~\ref{subfig:our-paa}. This
is exactly the same as the PAA obtained by the technique in~\cite{spa-pldi19}.
However, since their technique depends on test cases, if the training set had
only even \texttt{len} cases (for example), they would have ended up with a
different PAA, shown in Fig.~\ref{subfig:pldi-paa}. Observe that this PAA
does not have a transition corresponding to edge $a'$\texttt{[len\%2 = 1]} in
Fig.~\ref{subfig:cfg-g}, and therefore does not overapproximate all possible
behaviors.
\subsection{Learning Invariants and Discharging Proof Obligations}
\label{subsec:learnandprove}
Though we do not have any contributions here, we illustrate how this is done
(in~\cite{spa-pldi19}) to make the paper self-contained.
Once a PAA is constructed, invariants are learned for each state. These
invariants must be consistent with the PAA, i.e., for each transition $s
\xrightarrow{P;Q} t$, if $\phi_s$ and $\phi_t$ are the invariants at state $s$
and $t$ respectively, then, $\{\phi_s\}~P;Q~\{\phi_t\}$ must be valid. The aim
is to learn sufficiently strong invariants at the final state, so that the
equivalence property can be discharged. There are several techniques that have
been proposed to learn such invariants~\cite{spa-pldi19,DBLP:conf/aplas/DahiyaB17}, including those that aim to learn them from the
program's syntactic source, e.g.~\cite{freqhorn-fmcad17}.
It must first be argued that the constructed PAA overapproximates all program
behaviors. Consider the initial state $q_1q'_1$: the state $q_1$ in $f$ has one
outgoing transition with its guard predicate as $true$ (say, $\alpha$),
whereas, $q'_1$ has two outgoing transitions with guard predicates $len\%2=0$
($\beta$) and $len\%2=1$ ($\gamma$). Hence there are two possible transitions
$\alpha\beta$ and $\alpha\gamma$ at $q_1q'_1$, which are included in our PAA.
For the state $q_2q'_2$, it can be shown that the behaviours that are not
present in the PAA are in fact infeasible. There are two possible behaviors at
$q_2$: $i \geq len$ ($\alpha$) and $i<len$ ($\beta$); similarly, there are two
behaviors at $q'_2$: $len'=0$ ($\gamma$) and $len'\neq0$ ($\delta$). Thus
there are four possible behaviors at $q_2q'_2$: $\alpha \gamma$, $\alpha
\delta$, $\beta \gamma$, and $\beta \delta$. Since the behaviors $\alpha
\gamma$ and $\beta \delta$ are already included in the PAA, showing that
$\alpha \delta$ and $\beta \gamma$ are infeasible is sufficient. Observe that
at state $q_2q'_2$, the predicate $len-i=len' \land i \leq len$ is an
invariant. Since $i \geq len \land len'\neq0 \land len-i=len' \land i \leq len$ is
unsatisfiable, the behavior $\alpha \delta$ is infeasible. Similarly, $i<len
\land len'=0 \land len-i=len' \land i \leq len$ is unsatisfiable which implies
$\beta\gamma$ is infeasible.
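The two unsatisfiability claims can be sanity-checked by finite enumeration. The sketch below searches a small hypothetical range of values for $i$, $len$ and $len'$; this is an illustration rather than a proof, since the real claim quantifies over all integers.

```python
# Small search range, an assumption for illustration only.
R = range(0, 8)

inv = lambda i, ln, ln2: ln - i == ln2 and i <= ln           # invariant at q2q'2
alpha_delta = lambda i, ln, ln2: inv(i, ln, ln2) and i >= ln and ln2 != 0
beta_gamma  = lambda i, ln, ln2: inv(i, ln, ln2) and i < ln and ln2 == 0

def feasible(cond):
    # does any assignment in the range satisfy the condition?
    return any(cond(i, ln, ln2) for i in R for ln in R for ln2 in R)
```

The invariant itself is satisfiable, while both combined behaviors are not, mirroring the argument above.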
We now justify why $len-i=len' \land i \leq len$ is an invariant at $q_2q'_2$.
Initially at $q_1q'_1$, $len$ and $len'$ are same and non-negative. There are
two ways to reach $q_2q'_2$ from $q_1q'_1$ depending on the parity of $len$. If
$len$ is even, both $len$ and $len'$ remain intact and $i$ is initialized to
$0$. If $len$ is odd, there is no change in $len$ and $i$ becomes $1$, however,
$len'$ is decreased by $1$. Therefore, $len-i=len' \land i \leq len$ is
initially true at $q_2q'_2$. Now, we prove the consecution. Assume at any step,
the predicate $len-i=len' \land i \leq len$ holds. Since the self-loop at
$q_2q'_2$ executes $b$ twice and $c'$ once, it preserves the satisfiability of
$len-i=len' \land i \leq len$: $i$ increases by $2$ and $len'$ decreases by
$2$. Therefore, it is an invariant at $q_2q'_2$. Note that $\omega = \omega'$
holds at $q_2q'_2$, which is further propagated to $q_3q'_3$ via the transition
$c;d'$. We conclude that the two programs are equivalent, since the
contents of the final heaps (and hence of the arrays) are the same.
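The initiation and consecution steps above can be replayed on concrete inputs. The Python sketch below hard-codes, as assumptions, the two entry paths into $q_2q'_2$ and the effect of the $bb;c'$ self-loop, and checks the candidate invariant along the way.

```python
inv = lambda i, ln, ln2: ln - i == ln2 and i <= ln

def check(L):
    """Replay entry into q2q'2 and the bb;c' self-loop for len = len' = L."""
    if L % 2 == 0:
        i, ln, ln2 = 0, L, L        # even: b'-path, i := 0
    else:
        i, ln, ln2 = 1, L, L - 1    # odd: a'-path, i := 1, len' := len' - 1
    ok = inv(i, ln, ln2)            # initiation
    while i + 2 <= ln:              # self-loop: b twice, c' once
        i, ln2 = i + 2, ln2 - 2
        ok = ok and inv(i, ln, ln2) # consecution
    return ok
```

The invariant holds on every replayed run for a range of input lengths.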
\begin{figure}
\centering
\begin{subfigure}[t]{0.36\textwidth}
\includegraphics[width=\linewidth]{figures/our-final-paa.pdf}
\caption{proposed construction\label{subfig:our-paa}}
\end{subfigure}
\quad
\hspace{20pt}
\begin{subfigure}[t]{0.45\textwidth}
\includegraphics[width=\linewidth]{figures/existing-final-paa.pdf}
\caption{using~\cite{spa-pldi19}, and only even \texttt{len} traces\label{subfig:pldi-paa}}
\end{subfigure}
\caption{Final program alignment automaton}
\label{fig:final-paas}
\end{figure}
\subsection{Soundness of our approach}
\label{subsec:soundness}
Our approach is sound by construction. An edge is added to the PAA if and only
if its source and target states are indeed connected through the
transition label. The choice of alignment predicates, and the inherent
incompleteness of the technique, may sometimes result in a PAA that is
insufficient to establish equivalence (for example, if it does not capture all
possible program behaviors). However, if a PAA and the learned invariants
logically establish the equivalence, the programs are indeed equivalent.
\section{Introduction}
\label{sec:intro}
\iffalse
In recent years, with the advancement of computing, software engineering
education has become very important. With the popularity that MOOCs (Massive
Online Open Courses) have gained, especially in the ongoing pandemic situation,
there has been a lot of focus on making this education accessible to a large
number of people. In popular courses like programming, it is frequently the
case that hundreds or thousands of students get enrolled. The nature of such
courses demands that the students be given assignments (i.e., be asked to write
programs for a given task), and the instructor evaluates them. Manually arguing
for correctness of every student's program can be extremely effort intensive,
or, very often, unimaginable. If a formal specification for correctness is
given, this task of evaluating an assignment can be given to an off-the-shelf
model checker. Even when the search for a correctness proof diverges, bounded
model checking may be able to uncover shallow bugs if there are any. However,
it is unrealistic to assume that a teacher can always specify a formal
criterion of correctness along with the problem statement. An alternative that
is more workable is that the teacher provides a reference implementation, and
the submissions are evaluated with respect to that. A popular evaluation
approach in such a setting is testing -- an empirical comparison of the two
programs over a chosen set of inputs. But what if the aim is to rigorously
establish that a submitted assignment meets the correctness specification, or
produce an evidence that it does not? For that, one can make the reference
program itself as the specification, and argue that a program is correct if the
output computed by it is the same as that computed by the reference
implementation, for every possible input. In other words, the problem reduces
to checking that two programs are semantically equivalent in how they compute
their output.
\fi
\input{fig1}
Checking equivalence of programs is an important problem due to its many diverse applications, including translation validation and compiler
correctness~\cite{10.1007/BFb0054170,GOLDBERG200553,10.1145/349299.349314},
code refactoring~\cite{10.1007/978-3-642-22110-1_55}, program
synthesis~\cite{10.1145/1168857.1168906}, hypersafety
verification~\cite{nier20hyper,ahv-cav19,pdsc-cav19},
superoptimization~\cite{10.1145/2451116.2451150,10.1145/3037697.3037754}, and software engineering education~\cite{Li:2016:MCB:2889160.2889204}, amongst many others. In general,
depending on the application, the criteria for equivalence may be weaker or
stronger. For instance, the condition may be that all the observables including
the machine state (stack, heap, and registers) are equal, or that only a subset
of them are. Informally and broadly speaking, techniques that handle this
problem try to put the two programs together in a way that makes it easier to
justify the semantic equivalence. Note that one may always combine the programs
naively, like in a sequential composition where they are run one after the
other, but then arguing becomes difficult because it necessitates that every
component be analyzed fully. Consider an example (borrowed
from~\cite{spa-pldi19}) shown in Fig.~\ref{fig:code-cfg-f-g}. There are two
functions $f$ and $g$, both of which take two parameters as input:
\textit{array}, which points to an array of 32-bit integers, and \textit{len}
which stores the length of the array. The function $f$ flips the bits of the
array elements by iterating over each array element, and function $g$ flips 64
bits from wherever the array is pointing to, and then moves the \textit{array}
pointer to the end of the flipped bits. In the beginning, however, $g$ checks
whether \textit{len} is odd and if so, flips only 32 bits for the first time,
and then continues flipping 64 at a time as described before. To establish that
the programs are semantically equivalent, one may simply put the two programs
together, one after another as sequential components of a single program, and
assert the equivalence condition at the end. But, to analyze this combined
program, one must learn completely what is happening in $f$, and also in $g$,
and thereby conclude that they are indeed doing the same \emph{thing}.
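To make the example concrete, the following Python sketch transcribes the behavior of $f$ and $g$ as described above, modeling the array as a list of 32-bit words; this is a rendition from the prose, not the actual C sources of Fig.~\ref{fig:code-cfg-f-g}.

```python
MASK32 = 0xFFFFFFFF

def f(arr):
    # flip the 32 bits of every element, one element per iteration
    return [x ^ MASK32 for x in arr]

def g(arr):
    # flip 64 bits (two 32-bit words) at a time, after an initial
    # 32-bit flip when the length is odd
    arr, pos = list(arr), 0
    if len(arr) % 2 == 1:
        arr[0] ^= MASK32
        pos = 1
    while pos < len(arr):
        arr[pos] ^= MASK32
        arr[pos + 1] ^= MASK32
        pos += 2
    return arr
```

Despite the different iteration patterns, the two functions produce the same output on every input, which is exactly the property the PAA is meant to certify.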
The equivalence checking technique presented in~\cite{spa-pldi19}, on which we
build, takes two programs and a set of test cases, and constructs a trace
alignment for every test case. The trace alignment is essentially a pairing of
states in the execution traces of the programs, corresponding to a test case.
This construction is guided by an alignment predicate that helps in pairing the
states semantically. The technique then builds the product program as a
\emph{program alignment automaton} (PAA), and then learns invariants for all
its states to establish the equivalence. In fact, the test cases are split into
two sets -- to be used for \emph{training} and \emph{testing} -- in the
beginning, and along with a set of candidate alignment predicates, a trace
alignment and a PAA are learned from the training data. In this setting, it
becomes important to ensure that the PAA does not overfit the training data.
Therefore, its viability is checked using the testing set. A PAA is acceptable
only if it soundly overapproximates the two programs, and is rejected
otherwise. In the latter case, the search for an acceptable PAA continues with
a different alignment predicate. Their technique benefits from choosing a good
alignment predicate that allows it to capture all possible pairs of program
executions, including those from the testing set, even though it was learned
from the training data alone.
The advantage of a \emph{semantic} alignment is that it can see through the
syntactic differences. However, there are also drawbacks of a trace-based
technique: \emph{a}) obtaining traces that cover all program behaviors is
difficult, and any under-approximation may lead to an incomplete product
program, and \emph{b}) an indirect construction of this kind is unaware of the
missing behaviors, and has no control over the aforesaid incompleteness.
Alternatively, there are techniques that do not need traces to arrive at a
product program, but they make assumptions that are strongly
limiting~\cite{DBLP:conf/aplas/DahiyaB17}. In this work, we propose an
algorithm for the direct construction of PAAs, which retains the benefit of
being guided by an alignment predicate while still not needing any test cases
or unrealistic assumptions.
The core contribution of this paper is an algorithm for predicate guided
semantic program alignment without using traces, which we present in
Sect.~\ref{sec:algo}. This is followed by a step-by-step illustration of it on
the example of Fig.~\ref{fig:code-cfg-f-g}, in Sect.~\ref{sec:example}. We
present another illustrative run on an example involving arrays, which
emphasizes the usefulness of our direct construction over a trace-based
technique, in Sect.~\ref{sec:arrayinsert}. This is followed by a short note on
disjunctive invariants (in Sect.~\ref{sec:features}), a discussion of the
related work (in Sect.~\ref{sec:related}), and our concluding remarks (in
Sect.~\ref{sec:conc}).
\section{Related Work}
\label{sec:related}
Our work is closely related to and inspired by~\cite{spa-pldi19}, in that we
also use an alignment predicate to construct a program alignment automaton that
semantically aligns the programs for the equivalence check. However, our technique
constructs the PAA directly, without needing test cases or execution traces.
Our construction is similar in spirit to~\cite{DBLP:conf/aplas/DahiyaB17},
which builds a product program without using test cases, but it requires the
branching condition of one program to match that of the other. It also fails to
explore many-to-many relationships among paths of the component programs, which we
do by constructing regular expressions and looking for suitable instantiations
of them. Another technique, CoVaC~\cite{CoVaC}, geared towards translation
validation, constructs a cross-product of two programs to ensure that
optimizing compiler transformations preserve program semantics. However, it
restricts the domain of transformations such that the optimized program is
consonant (structurally similar) to the source program.
Data-driven equivalence checking~\cite{DBLP:conf/oopsla/0001SCA13} tries to
find an inductive proof of loop equivalence in the domain of compiler
optimizations by inferring \emph{simulation relations} based on execution
traces and equality checking of the machine states. Since its goal is to align
loops, the technique is not suitable for the example in
Fig.~\ref{fig:code-cfg-f-g}. Other related techniques include those that prove
equivalence of loop-free
programs~\cite{1613165,10.1145/1086228.1086284,Feng2002AutomaticFV,10.1145/337292.337339,10.1007/11513988_20},
or programs with finite unwindings of loops or finite input
domains~\cite{10.1007/978-3-642-22110-1_55,10.1145/1453101.1453131,10.1007/978-3-642-31424-7_54,Jackson1994SemanticDA}.
There are also techniques that require some knowledge of the transformations
performed~\cite{eqsat-opt,10.1145/945885.945888} or the order of
optimizations~\cite{10.1007/BFb0054170,10.1145/349299.349314,GOLDBERG200553}.
In contrast, our approach can work with loops as well as in a black-box setting
where knowledge about the syntactic difference in the programs is not
available.
\iffalse
In the domain of programming assignments, testing-based evaluations have been
extensively explored~\cite{10.1007/978-3-540-74063-6_29,Hext:1969:AGS:362946.362981,douce2005} to prove functional equivalence. These
techniques may further be complemented with random sampling and paired-program
symbolic execution~\cite{Li:2016:MCB:2889160.2889204} and metrics for
feedback~\cite{Jackson:1997:GSP:268084.268210,Singh:2013:AFG:2491956.2462195,10.1007/978-3-319-66706-5_21,DBLP:conf/cav/DAntoniSS16}. However, they also
suffer from the drawback of test-set not being exhaustive. Bhatia et
al.~\cite{Bhatia:2018:NPC:3180155.3180219} combine neural networks with
constraint-based reasoning to correct the programming assignments for
introductory programming courses. Their approach primarily focuses on learning
syntax of correct programs using RNN and employing the learned neural network
to fix syntactically buggy programs by predicting potential token sequences
for repair; whereas, our technique is geared towards providing formal
guarantees about semantic equivalence of two programs. A transformation-based
approach~\cite{Wang:2007:SSG:1222228.1222412} represents the given programs
into dependence graphs and performs transformations based on some rules to
standardize graph structures before computing their semantic similarity score.
However, the score is biased towards structurally similar programs.
Additionally, both these works can not handle cases where loops have different
number of iterations, unlike ours.
\fi
\section{Introduction}
The representation theory by quadratic forms has a long history.
It starts with the qualitative problem of determining which integers are represented by a given quadratic form.
For example, Fermat, Legendre and Lagrange dealt with the problem of representation as a sum of two, three and four squares respectively.
After that, the quantitative problem was considered by Jacobi, Kloosterman and Liouville among others.
For instance, Jacobi proved that the number of ways a positive integer $k$ can be written as a sum of four squares is
$$
\displaystyle{8\sum_{m|k,\; 4\nmid m} m},
$$
by determining the Fourier coefficients of the theta function associated to the form $x_1^2+\dots+x_4^2$.
These examples are for positive definite quadratic forms.
For indefinite forms, the literature is much less abundant than for the definite case.
In the present work, we study the quantitative problem for quadratic and hermitian forms of signature $(n,1)$.
Let $\mathbb{F}=\mathbb{R},\mathbb{C}$ or $\mathbb{H}$, and let $\mathcal O$ denote a maximal order in $\mathbb{F}$.
Thus $\mathcal O$ is the ring of integer numbers $\mathbb{Z}$ in the real case, the ring of integers of an imaginary quadratic extension of $\mathbb{Q}$ in the complex case (e.g.\ the Gaussian integers $\mathbb{Z}[\sqrt{-1}]$), and, for instance, the ring of Hurwitz integers if $\mathbb{F}=\mathbb{H}$, though there are many other choices.
We consider in $\mathbb H$ (and then by restriction in $\mathbb{C}$) the canonical involution $\alpha\mapsto\bar\alpha$.
An \emph{$\mathbb{F}$-hermitian matrix} is a square matrix with coefficients in $\mathbb{F}$ that is equal to its own conjugate transpose.
We consider an $\mathbb{F}$-hermitian matrix
\begin{equation}\label{eq:Q}
Q=\begin{pmatrix}
A&\\&-a
\end{pmatrix},
\end{equation}
with $a\in\mathbb{N}$ and $A\in\mathrm{M}_n(\mathcal O)$ a positive definite $\mathbb{F}$-hermitian matrix.
We also denote by $Q$ the induced $\mathbb{F}$-hermitian form of signature $(n,1)$,
$$
Q[x]:= x^*Qx = A[\hat x] - a\,|x_{n+1}|^{2},\qquad x\in\mathbb{F}^{n+1},
$$
where $\hat x=(x_1,\dots,x_n)^t\in\mathbb{F}^n$.
We consider, for $k\in\mathbb{N}$, the solution vectors $x\in\mathcal O^{n+1}$ of the equation
\begin{equation}\label{eq:Q[x]=k}
Q[x]=-k.
\end{equation}
Put
\begin{equation}\label{eq:R(Q,k)}
\mathcal R(Q,-k) =\{x\in\mathcal O^{n+1}:Q[x]=-k\}.
\end{equation}
Since $Q$ is an indefinite form, this set is either empty or infinite.
From now on, we will assume it is not empty, i.e.\ that $-k$ is represented by $Q$.
Consequently, in order to study the quantitative behavior, one needs to impose additional restrictions on the integral solutions of \eqref{eq:Q[x]=k}.
A natural condition is to intersect the set in \eqref{eq:R(Q,k)} with Euclidean balls $\{x\in\mathbb{F}^{n+1}: \|x\|\le s\}$ in $\mathbb{F}^{n+1}$, with $\|x\|^2 := A[\hat x]+a\,|x_{n+1}|^{2}$.
Note that this norm is induced by the positive definite $\mathbb{F}$-hermitian matrix $\left(\begin{smallmatrix}A&\\&a\end{smallmatrix}\right)$.
A related condition consists of requiring that the solutions of \eqref{eq:Q[x]=k} satisfy the bound $|x_{n+1}|\leq t$.
We have, for $x\in\mathcal R(Q,-k)$, that
\begin{equation}\label{eq:relation}
\|x\|\leq s \quad \text{if and only if} \quad |x_{n+1}| \leq \sqrt{\tfrac{s^2+k}{2a}}.
\end{equation}
In this paper we establish an asymptotic formula, for large $t$, for
\begin{equation}\label{eq:N_t(Q,k)}
N_t(Q,-k):=\#\{x\in\mathcal R(Q,-k):|x_{n+1}|\leq t\},
\end{equation}
the number of integral solutions of the equation $Q[x]=-k$ satisfying the additional restriction $|x_{n+1}|\leq t$.
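For the model case $Q=I_{n,1}$ over $\mathbb{Z}$ (treated below), this counting function can be evaluated directly for small parameters. In the Python sketch below, the search is finite because any solution with $|x_{n+1}|\le t$ forces $x_1^2+\dots+x_n^2 = x_{n+1}^2-k \le t^2$.

```python
from itertools import product

def N_t(n, k, t):
    """Count x in Z^{n+1} with x1^2+...+xn^2 - x_{n+1}^2 = -k and |x_{n+1}| <= t."""
    total = 0
    for last in range(-t, t + 1):
        target = last * last - k       # required value of x1^2 + ... + xn^2
        if target < 0:
            continue
        m = int(target ** 0.5) + 1
        total += sum(1 for x in product(range(-m, m + 1), repeat=n)
                     if sum(v * v for v in x) == target)
    return total
```

For example, with $n=3$, $k=1$ and $t=1$ the only solutions are $(0,0,0,\pm1)$, so the count is $2$.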
Our main result, Theorem~\ref{thm:main4}, is the asymptotic formula
\begin{equation}\label{eq:intro-main}
N_t(Q,-k)=
\frac{2^{(r-1)(n+1)}}{|d_{\mathcal O}|^{\frac{n+1}2}}
\frac{a^\rho\,\mathrm{vol}(S^{nr-1})}{2\rho\,|\det Q|^{\frac{r}{2}}}
\frac{\pi^{\frac{r}{2}}}{\Gamma(\frac{r}{2})}\;
\delta(Q,-k)\;
t^{2\rho} + O(t^{\tau}),
\end{equation}
as $t\to+\infty$, for $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$.
Here $r=\dim_\mathbb{R}(\mathbb{F})$, $\rho=(n+1)\,r/2-1$, $d_{\mathcal O}$ is the discriminant of the quotient field of $\mathcal O$ and $\delta(Q,-k)$ is the local density of the representation \eqref{eq:Q[x]=k} (see~\eqref{eq:densidad}).
The number $\tau$ is defined in \eqref{eq:tau}.
It depends only on the form $Q$, or more precisely, on the first nonzero eigenvalue of the Laplace--Beltrami operator on $\Gamma_Q^0\backslash \mathrm H_\mathbb{F}^n$, where $\Gamma_Q^0$ is a subgroup of the group of unimodular matrices (see~\eqref{eq:Gamma_Q^0}) and $\mathrm H_\mathbb{F}^n$ is the $n$-dimensional $\mathbb{F}$-hyperbolic space.
When $\mathbb{F}=\mathbb{R}$ and $n\geq3$, formula \eqref{eq:intro-main} holds with $\tau=n-3/2$ (note that $2\rho=n-1$).
In the particular case when $\mathbb{F}=\mathbb{R}$ and $Q=I_{n,1}=\left(\begin{smallmatrix}I_n&\\&-1\end{smallmatrix}\right)$, the main theorem asserts that the number $N_t(I_{n,1},-k)$ of vectors $x\in\mathbb{Z}^{n+1}$ such that
$$
x_1^2+\dots+x_n^2-x_{n+1}^2=-k
\qquad\text{and}\qquad
|x_{n+1}|\leq t,
$$
satisfies the following asymptotic estimate, as $t\to+\infty$,
$$
N_t(I_{n,1},-k)=
\frac{\mathrm{vol}(S^{n-1})}{n-1}\,
\delta(I_{n,1},-k)\,
t^{n-1} + O(t^{n-3/2}).
$$
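This special case can be checked by brute force for small parameters. The following sketch (our own; not part of the original argument) enumerates the solutions counted by $N_t(I_{n,1},-k)$ directly:

```python
from math import isqrt

def N_t(n, k, t):
    """Brute-force count of x in Z^{n+1} with
    x_1^2 + ... + x_n^2 - x_{n+1}^2 = -k  and  |x_{n+1}| <= t."""
    def sum_of_squares(m, vars):
        # number of ordered ways to write m as a sum of `vars` squares
        if vars == 0:
            return 1 if m == 0 else 0
        return sum(sum_of_squares(m - x * x, vars - 1)
                   for x in range(-isqrt(m), isqrt(m) + 1))

    total = 0
    for z in range(-t, t + 1):      # z plays the role of x_{n+1}
        m = z * z - k               # remaining equation: sum of n squares = m
        if m >= 0:
            total += sum_of_squares(m, n)
    return total
```

For instance, for $n=3$ and $k=1$ one obtains $N_t(I_{3,1},-1)=2,18,42$ for $t=1,2,3$, a growth compatible with the order $t^{n-1}=t^2$ of the leading term.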
This formula is due to J.~Ratcliffe and S.~Tschantz \cite{Ratcliffe-Tschantz}.
The present article was inspired by this work.
They also considered $k<0$, obtaining the same formula for the leading coefficient but without an error term.
Although a similar result should hold for $k<0$ in our context, we will not develop this case here since the tools needed are very different.
Concerning the question of counting integral solutions inside Euclidean balls, we define
\begin{equation}\label{eq:intro-N_s(Q,k)}
\widetilde N_s(Q,-k):=\#\{x\in\mathcal R(Q,-k):\|x\|\leq s\}.
\end{equation}
By applying formula \eqref{eq:intro-main} and relation \eqref{eq:relation} (see Remark~\ref{rmk:norms}), one shows that
\begin{equation}\label{eq:intro-main-ball}
\widetilde N_s(Q,-k) = (2a)^{-\rho}\,C_{Q,k} \; s^{2\rho} + O(s^{\tau}),
\end{equation}
where $C_{Q,k}$ denotes the main coefficient in \eqref{eq:intro-main}.
Related results on the leading term of the counting function of integral points lying in balls of increasing radius ($s\to +\infty$) on quite general algebraic varieties have been obtained, for example, in \cite{Duke-Rudnick-Sarnak} and \cite{Borovoi-Rudnick}.
In the absence of nonzero exceptional eigenvalues, formulas \eqref{eq:intro-main} and \eqref{eq:intro-main-ball} become respectively
\begin{align*}
N_t(Q,-k) =&\, C_{Q,k}\; t^{2\rho} + O\left(t^{2\rho(1-\frac{1}{n+1})+\varepsilon}\right),\\
\widetilde N_s(Q,-k) =&\, (2a)^{-\rho}\, C_{Q,k}\; s^{2\rho} + O\left(s^{2\rho(1-\frac{1}{n+1})+\varepsilon}\right),
\end{align*}
with $\varepsilon=0$ if $\mathbb{F}=\mathbb{R}$ and any $\varepsilon>0$ if $\mathbb{F}=\mathbb{C}$. We recall that $2\rho = r(n+1)-2$, where $r=\dim_\mathbb{R}(\mathbb{F})$.
Our main tool in this paper is the hyperbolic lattice point theorem of P.~Lax and R.~Phillips \cite{Lax-Phillips} in the real case (improved by B.~M.~Levitan \cite{Levitan}), and of R.~Bruggeman, R.~Miatello and N.~Wallach \cite{Bruggeman-Miatello-Wallach} in the general case ($\mathbb{F}=\mathbb{R},\mathbb{C}$ and $\mathbb{H}$).
For $\mathbb{F}=\mathbb{R}$ we use the best lower bound known for the first eigenvalue of the Laplace-Beltrami operator on $\Gamma_Q^0\backslash\mathrm H_\mathbb{R}^n$ (see Theorem~\ref{thm:lambda_1}), which was obtained in \cite{EGMKloosterman} and in \cite{Cogdell-Li-Piatetski-Shapiro-Sarnak}.
After applying the lattice point theorem we obtain (see Proposition \ref{prop:main3})
\begin{equation}\label{eq:intro-N_t-afterLPT}
N_t(Q,-k)= C_{Q,k}'\; \Big(\sum_{y\in F} |\Gamma_{Q,y}|^{-1}\Big)\;t^{2\rho} + O(t^\tau),
\end{equation}
where $C_{Q,k}'$ is a constant depending only on $Q$ and $k$, $F$ is a set of representatives of the orbits of $\mathcal R(Q,-k)$ under the action of the group of unimodular matrices $\Gamma_Q$ (see \eqref{eq:Gamma_Q}) and $\Gamma_{Q,y}$ is the stabilizer of $y$ in $\Gamma_Q$.
When $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$, by using the theory of C.~L.~Siegel \cite{SiegelIndefinite} for indefinite quadratic forms and its generalization for indefinite complex hermitian forms given in \cite{Raghavan}, formula \eqref{eq:intro-N_t-afterLPT} can be made more explicit, yielding our main formula \eqref{eq:intro-main}. This is carried out by expressing the term $\sum_{y\in F} |\Gamma_{Q,y}|^{-1}$ as a product of the local density $\delta(Q,-k)$ times a constant which depends only on $Q$ and $k$ (see Corollary~\ref{cor:term}).
The derivation of a formula like \eqref{eq:intro-main} from \eqref{eq:intro-N_t-afterLPT} in the quaternionic case, would first require developing for indefinite $\mathbb H$-hermitian forms the classical theory due to Siegel, as done by Raghavan (see \cite{Raghavan}) for indefinite $\mathbb{C}$-hermitian forms.
The paper is organized as follows.
In Section~2 we introduce the geometric context, relating $N_t(Q,-k)$ with the number of elements in an arithmetic subgroup of $\operatorname{Iso}(\mathrm H_\mathbb{F}^n)$ satisfying a geometric condition.
In Section~3, we apply the lattice point theorem to count such lattice points.
Section~4 uses Siegel's theory to compute the main term of the formula.
We conclude with Section~5, which contains the main theorem, together with some examples and remarks.
\section{$\mathbb{F}$-Hyperbolic space}\label{sec:hyperbolic-spaces}
Throughout the paper, given $R$ a ring with identity, we denote by $\mathrm{M}(m,n;R)$ the set of $m\times n$ matrices with coefficients in $R$, just $\mathrm{M}(m,R)$ when $n=m$,
by $\mathrm{GL}(m,\mathbb{F})$ the \emph{general linear group} and by $\mathrm{SL}(m,\mathbb{F})$ its derived group, the \emph{special linear group}.
For a matrix $C\in\mathrm{M}(m,l;\mathbb{F})$ we denote by $C^*$ its conjugate transpose and write $B[C]=C^*BC$, where $B\in\mathrm{M}(m,\mathbb{F})$.
Let $R^m$ denote the right $R$-module $\mathrm{M}(m,1;R)$.
For $\mathbb{F}=\mathbb{R},\mathbb{C}$ and $\mathbb H$, let $\mathrm P\mathbb F^{n}$ be the $n$-dimensional projective space over $\mathbb{F}$, i.e.\ $\mathrm P\mathbb F^{n}=(\mathbb{F}^{n+1}\smallsetminus\{0\})/\mathbb{F}^{\times}$, where $\mathbb{F}^\times$ denotes the set of nonzero elements of $\mathbb{F}$.
We now introduce a model for Riemannian symmetric spaces of real rank one and negative curvature (leaving out the Cayley plane).
These are the real, complex and quaternionic hyperbolic spaces.
For a general reference on this subject see \cite[II.\S10]{Bridson-Haefliger} and \cite[\S19]{Mostow}.
Let $Q$ be the matrix defined as in \eqref{eq:Q}. The set
\begin{equation}\label{eq:H_F^n}
\mathrm H_\mathbb{F}^n(Q) = \left\{[x]\in \mathrm P\mathbb F^{n} : Q[x]<0\right\}
\end{equation}
will serve as the set of points for the \emph{$Q$-Kleinian model} of $n$-dimensional $\mathbb{F}$-hyperbolic geometry.
Note that the condition $Q[x]<0$ is well defined on the projective space.
The distance function is defined by
\begin{equation}\label{eq:dist}
\cosh(d([x],[y])) = \frac{|Q(x,y)|}{|Q[x]|^{1/2}\,|Q[y]|^{1/2}},
\end{equation}
where $Q(x,y)=x^*Qy$.
We consider the $\mathbb{F}$-vector space $\mathbb{F}^{n+1}$ endowed with the form $Q(x,y)=x^*Qy$ of type $(n,1)$.
Let $x^\bot= \{u\in\mathbb{F}^{n+1}:Q(x,u)=0\}$ be the \emph{$Q$-orthogonal complement} of $x\in\mathbb{F}^{n+1}$.
If $Q[x]<0$, then the restriction of $Q$ to $x^\bot$ is positive definite.
We identify $x^\bot$ with $T_{[x]}\mathrm H_\mathbb{F}^n(Q)$ using the differential of the natural projection $\mathbb{F}^{n+1}\smallsetminus\{0\}\to\mathrm H_\mathbb{F}^n(Q)$.
We consider the symmetric positive definite $\mathbb{R}$-bilinear form on $T_{[x]}\mathrm H_\mathbb{F}^n(Q)$ given by
\begin{equation}\label{eq:inner_prodct-T_[x]H}
(u,v)= \frac{\mathrm{Re}\left( Q(u,v) \right)}{|Q[x]|},\qquad u,v\in x^\bot.
\end{equation}
In this way, $\mathrm H_\mathbb{F}^n(Q)$ is naturally a Riemannian manifold.
One can check that the distance function associated with this metric is \eqref{eq:dist}.
Moreover, this metric gives constant curvature $-1$ in the real case, and pinched sectional curvature in the interval $[-4,-1]$ if $\mathbb{F}=\mathbb{C},\mathbb{H}$.
We denote by
$$
\mathrm{U}(Q,\mathbb{F})=\{g\in\mathrm{GL}(n+1,\mathbb{F}):Q[g]=Q\}
$$
the \emph{$Q$-unitary group} and by $\mathrm{SU}(Q,\mathbb{F})=\mathrm{U}(Q,\mathbb{F})\cap \mathrm{SL}(n+1,\mathbb{F})$ the \emph{special $Q$-unitary group}.
For $Q=I_{n,1}$, one has the classical notation
$\mathrm{U}(I_{n,1},\mathbb{F}) = \mathrm{O}(n,1)$, $\mathrm{U}(n,1)$, $\mathrm{Sp}(n,1)$ and $\mathrm{SU}(I_{n,1},\mathbb{F}) = \mathrm{SO}(n,1)$, $\mathrm{SU}(n,1)$, $\mathrm{Sp}(n,1)$
for $\mathbb{F}=\mathbb{R},\mathbb{C},\mathbb{H}$ respectively.
The center of the $Q$-unitary group is given by
\begin{equation}\label{eq:Z}
Z(\mathrm{U}(Q,\mathbb{F}))=
\begin{cases}
\{\pm I_{n+1}\} & \text{if $\mathbb{F}=\mathbb{R}$},\\
\{z I_{n+1}:|z|=1\}\cong S^1 & \text{if $\mathbb{F}=\mathbb{C}$},\\
\{\pm I_{n+1}\} & \text{if $\mathbb{F}=\mathbb{H}$}.
\end{cases}
\end{equation}
The group $\mathrm{U}(Q,\mathbb{F})$ acts transitively on $\mathrm H_\mathbb{F}^n(Q)$ by
\begin{equation}\label{eq:vartheta}
[x]\mapsto g\cdot[x]=[gx].
\end{equation}
Indeed, by the distance formula \eqref{eq:dist}, its elements act by isometries.
Let $\operatorname{Iso}(\mathrm H_\mathbb{F}^n(Q))$ (resp.\ $\operatorname{Iso}^+(\mathrm H_\mathbb{F}^n(Q))$) denote the set of isometries (resp.\ orientation-preserving isometries) of $\mathrm H_\mathbb{F}^n(Q)$.
It is clear that the elements of $Z(\mathrm{U}(Q,\mathbb{F}))$ act as the identity map on $\mathrm H_\mathbb{F}^n(Q)$.
Moreover, we have that
$$
\{1\}
\xrightarrow{\makebox[6mm]{}}
Z(\mathrm{U}(Q,\mathbb{F}))
\xrightarrow{\makebox[6mm]{$\iota$}}
\mathrm{U}(Q,\mathbb{F})
\xrightarrow{\makebox[6mm]{$\vartheta$}}
\operatorname{Iso}(\mathrm H_\mathbb{F}^n(Q))
$$
is an exact sequence, where $\iota$ denotes the inclusion map and $\vartheta$ is defined by \eqref{eq:vartheta}.
Furthermore, the group $\mathrm{PU}(Q,\mathbb{F}):=\mathrm{U}(Q,\mathbb{F})/Z(\mathrm{U}(Q,\mathbb{F}))$ is, up to finite index, the full isometry group $\operatorname{Iso}(\mathrm H_\mathbb{F}^n(Q))$.
\begin{remark}\label{rmk:isometry_groups}
When $\mathbb{F}=\mathbb{R}$ the group $\mathrm{O}(Q):=\mathrm{U}(Q,\mathbb{R})$ has four connected components.
The identity connected component is $$\mathrm{PSO}(Q):=\{g\in\mathrm{SO}(Q):g_{n+1,n+1}>0\},$$ which is isomorphic to $\operatorname{Iso}^+(\mathrm H_\mathbb{R}^n(Q))$.
When $\mathbb{F}=\mathbb{C}$, the group $\operatorname{Iso}(\mathrm H_\mathbb{C}^n(Q))$ is generated by $\mathrm{PU}(Q,\mathbb{C})$ and the conjugation $[x_1,\dots,x_{n+1}]\mapsto [\bar x_1,\dots,\bar x_{n+1}]$. If $\mathbb{F}=\mathbb{H}$, then $\operatorname{Iso}(\mathrm H_\mathbb{H}^n(Q))\cong\mathrm{PU}(Q,\mathbb{H})$ for $n>1$.
\end{remark}
We fix a \emph{maximal order} $\mathcal O$ in $\mathbb{F}$; this means that the subset $\mathcal O\subset\mathbb{F}$ satisfies the following conditions:
\begin{enumerate}
\item[(i)] $\mathcal O$ is a lattice in $\mathbb{F}$ (there is an $\mathbb{R}$-basis $v_1,\dots,v_{r}$ of $\mathbb{F}$ such that $\mathcal O=\mathbb{Z} v_1\oplus\dots\oplus \mathbb{Z} v_{r}$);
\item[(ii)] $\mathcal O$ is a subring of $\mathbb{F}$ containing $1$;
\item[(iii)] $2\mathrm{Re}(a)\in\mathbb{Z}$ and $a\bar a\in\mathbb{Z}$ for $a\in\mathcal O$;
\end{enumerate}
and $\mathcal O$ is maximal among all orders (subsets of $\mathbb{F}$ satisfying (i)-(iii)). Here $r:=\dim_\mathbb{R}(\mathbb{F})$.
The elements of $\mathcal O$ will be called \emph{integers}. The ring of integers $\mathbb{Z}$ is the only order in $\mathbb{R}$.
In $\mathbb{C}$, the maximal orders are the rings of integers of the imaginary quadratic extensions $\mathbb{Q}(\sqrt{-D})$ ($D>0$ squarefree) of $\mathbb{Q}$.
More precisely, they are $\mathbb{Z}[\omega]$ where $\omega=(1+\sqrt{-D})/2$ if $-D\equiv 1\pmod4$ and $\omega=\sqrt{-D}$ otherwise.
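The discriminant $d_{\mathcal O}$, which enters the main formula \eqref{eq:intro-main}, can be read off from $D$; a minimal sketch (the function name is ours):

```python
def disc_imaginary_quadratic(D):
    """Discriminant d_O of the maximal order of Q(sqrt(-D)),
    for D > 0 squarefree: -D if -D = 1 (mod 4), and -4D otherwise."""
    return -D if (-D) % 4 == 1 else -4 * D
```

Thus $d_{\mathcal O}=-4$ for the Gaussian integers ($D=1$) and $d_{\mathcal O}=-3$ for the Eisenstein integers ($D=3$).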
When $\mathbb{F}=\mathbb{H}$ there are many orders.
As a canonical example in this case, the reader may take
$$\mathcal O=\left\{a+bi+cj+dk\in\mathbb{H}:a,b,c,d\in\mathbb{Z}\text{ or } a,b,c,d\in\mathbb{Z}+\tfrac12\right\},$$
the \emph{Hurwitz integers}.
Let $\Gamma_Q$ be the set of unimodular matrices in $\mathrm{U}(Q,\mathbb{F})$, that is
\begin{equation}\label{eq:Gamma_Q}
\Gamma_Q=\mathrm{U}(Q,\mathbb{F})\cap\mathrm{M}(n+1,\mathcal O).
\end{equation}
This is a discrete subgroup of $\mathrm{U}(Q,\mathbb{F})$ with finite center.
The action of $\Gamma_Q$ on $\mathrm H_\mathbb{F}^n(Q)$ is properly discontinuous but not free, and the quotient $\Gamma_Q\backslash \mathrm H_\mathbb{F}^n(Q)$ has finite volume and is not compact.
On the other hand, the group $\Gamma_Q$ acts by left multiplication on the set $\mathcal R(Q,k)$ given in \eqref{eq:R(Q,k)}.
\begin{lemma}
The set of $\Gamma_Q$-orbits in $\mathcal R(Q,k)$ is finite.
\end{lemma}
\begin{proof}
This assertion follows by applying \cite[Thm.~6.9]{Borel-Harish-Chandra}.
\end{proof}
From now on, we fix $k\in\mathbb{N}$ such that $Q$ \emph{represents} $-k$, that is, there exists $x\in\mathcal O^{n+1}$ satisfying $Q[x]=-k$.
Let $F$ be a (finite) set of representatives of the $\Gamma_Q$-orbits of $\mathcal R(Q,-k)$.
Let $\Gamma_{Q,y}$ be the stabilizer of $y$ in $\Gamma_Q$, which is finite.
We conclude this section by relating the number $N_t(Q,-k)$ defined in \eqref{eq:N_t(Q,k)} with the cardinality of subsets of lattice points in $\Gamma_Q$.
\begin{proposition}\label{thm:main2}
For $t>0$, we have
\begin{equation}\label{eq:main2}
N_t(Q,-k)= \sum_{y\in F} |\Gamma_{Q,y}|^{-1} \;\#\left\{g\in\Gamma_Q:d([e_{n+1}],g\cdot[y])\leq s\right\},
\end{equation}
where $s=\operatorname{arccosh}(a^{1/2}k^{-1/2}\, t)>0$.
\end{proposition}
\begin{proof}
Put $$\mathcal R_t(Q,-k)=\{x\in\mathcal R(Q,-k):|x_{n+1}|\leq t\}.$$
The cardinality of this set is $N_t(Q,-k)$.
Let $x=(x_1,\dots,x_{n+1})^t\in\mathcal R(Q,-k)$. By \eqref{eq:dist},
\begin{equation}\label{eq:cosh(d(e,x))}
\cosh\left(d([e_{n+1}],[x])\right)= {a^{1/2}}{k^{-1/2}}\, |x_{n+1}|.
\end{equation}
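Indeed, writing $Q=\left(\begin{smallmatrix}A&\\&-a\end{smallmatrix}\right)$ as in \eqref{eq:Q}, we have $Q(e_{n+1},x)=e_{n+1}^*Qx=-a\,x_{n+1}$, $Q[e_{n+1}]=-a$ and $Q[x]=-k$, so \eqref{eq:dist} gives

```latex
\cosh\left(d([e_{n+1}],[x])\right)
  = \frac{|Q(e_{n+1},x)|}{|Q[e_{n+1}]|^{1/2}\,|Q[x]|^{1/2}}
  = \frac{a\,|x_{n+1}|}{a^{1/2}\,k^{1/2}}
  = a^{1/2}k^{-1/2}\,|x_{n+1}|.
```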
Fix $t>0$, so that $\cosh(s)=a^{1/2}k^{-1/2}\,t$. Let $g\in\Gamma_Q$ and $y\in F$ be such that $gy=x$; then $g\cdot[y]=[x]$.
Note that \eqref{eq:cosh(d(e,x))} shows that
\begin{equation*}
|x_{n+1}|\leq t\quad\text{if and only if}\quad d([e_{n+1}],g\cdot[y])\leq s.
\end{equation*}
But the condition on the left ensures that $x\in\mathcal R_t(Q,-k)$.
We conclude that
\begin{equation}\label{eq:R_t(Q,-k)}
\mathcal R_t(Q,-k)=
\bigcup_{y\in F} \{gy:g\in\Gamma_Q\;\text{ and }\;d([e_{n+1}],g\cdot[y])\leq s\}.
\end{equation}
The proposition follows by counting the elements of these sets.
\end{proof}
\section{Lattice point theorem}
In this section we use lattice point theorems to determine the asymptotic distribution, for $t\to+\infty$, of the number of elements in the sets $$\left\{g\in\Gamma_Q:d([e_{n+1}],g\cdot[y])\leq s\right\}.$$
Let
\begin{equation}
G=\mathrm{SU}^0(Q,\mathbb{F}),
\end{equation}
the identity connected component of $\mathrm{SU}(Q,\mathbb{F})$.
We have $G=\mathrm{PSO}(Q)$ for $\mathbb{F}=\mathbb{R}$ (see Remark~\ref{rmk:isometry_groups}).
When $\mathbb{F}=\mathbb{C},\mathbb H$ the group $\mathrm{SU}(Q,\mathbb{F})$ is connected.
Let $\mathfrak g$ denote the Lie algebra $\mathfrak{su}(Q,\mathbb{F})$ of $G$.
We have
$$
\mathfrak g= \big\{X\in\mathrm{M}(n+1,\mathbb{F}): X^*Q+QX=0\; (\text{and $\textrm{Tr}(X)=0$ if $\mathbb{F}=\mathbb{C}$})\big\}.
$$
\begin{remark}\label{rmk:cartan-involution}
If $T=\left(\begin{smallmatrix}L&\\ &\sqrt{a}\end{smallmatrix}\right)$, where $L^*L=A$, then $I_{n,1}[T]=T^*I_{n,1}T=Q$.
It follows that the map $g\mapsto Tg T^{-1}$ gives an isomorphism from $\mathrm{U}(Q,\mathbb{F})$ to $\mathrm{U}(n,1;\mathbb{F}):=\mathrm{U}(I_{n,1},\mathbb{F})$ and thus also from $G$ to $\mathrm{SU}^0(n,1;\mathbb{F})$.
The corresponding isomorphism at the Lie algebra level $\mathfrak{su}(Q,\mathbb{F})\to\mathfrak{su}(n,1;\mathbb{F})$ is given by $X\mapsto TXT^{-1}$.
This isomorphism allows us to consider the Cartan involution $\theta$ on $\mathfrak g$ by pulling back the standard Cartan involution $X\mapsto -X^*$ on $\mathfrak{su}(n,1;\mathbb{F})$.
It is easy to check that $\theta(X)= -(T^*T)^{-1}X^*(T^*T)$.
\end{remark}
The group $G$ is a connected semisimple Lie group of real rank one and finite center.
Let $\theta$ be the Cartan involution given in Remark~\ref{rmk:cartan-involution}, with corresponding Cartan decomposition $\mathfrak g=\mathfrak k\oplus\mathfrak p$.
Let $H_0$ be the matrix in $\mathfrak g$ defined as the pull back from $\mathfrak{su}(n,1;\mathbb{F})$ of the matrix
$$
\left(\begin{array}{c|c}\rule{0pt}{20pt}\rule{20pt}{0pt}& \raisebox{8pt}{$e_n$}\\ \hline e_n^*&\end{array}\right).
$$
Let $\mathfrak a=\mathbb{R} H_0$, a maximal abelian subspace of $\mathfrak p$ since $G$ has real rank one.
Let $\mathfrak g=\mathfrak k\oplus \mathfrak a\oplus\mathfrak n$ and $G=NAK$ be the corresponding Iwasawa decomposition of $\mathfrak g$ and $G$, respectively.
Here $K$ is a maximal compact subgroup of $G$, with Lie algebra $\mathfrak k$.
Let $M$ be the centralizer of $A$ in $K$ with Lie algebra $\mathfrak m$.
Set $\zeta=\mathrm{vol}(K/M)$.
Let $2\rho$ denote the sum of the positive roots of $G$, thus $2\rho= (n+1)r-2=$ $n-1$, $2n$, $4n+2$ for $\mathbb{F}=\mathbb{R},\mathbb{C},\mathbb{H}$ respectively (recall that $r=\dim_\mathbb{R}(\mathbb{F})$).
Let $B_K$ denote the Killing form on $\mathfrak g$.
We shall work with the inner product $\langle\rule{3pt}{0ex},\rule{2pt}{0ex}\rangle$ on $\mathfrak g$ defined by
\begin{equation}\label{eq:inner_product_g}
\langle X,Y\rangle=-\frac{1}{\xi_\mathbb{F}}\,B_K(X,\theta Y),\qquad
\text{where }\;
\xi_\mathbb{F}=
\begin{cases}
2(n-1)&\text{for }\mathbb{F}=\mathbb{R},\\
4(n+1)&\text{for }\mathbb{F}=\mathbb{C},\\
8(n+2)&\text{for }\mathbb{F}=\mathbb{H}.
\end{cases}
\end{equation}
A simple computation shows that $\langle X,Y\rangle=\frac12\textrm{Tr}(X^*\,Y)$, thus $\langle H_0,H_0\rangle=1$.
We consider on the homogeneous manifold $G/K$, the $G$-invariant Riemannian metric defined by the restriction of $\langle\cdot,\cdot\rangle$ to $\mathfrak p$.
One can check that the action of $G$ on $\mathrm H_\mathbb{F}^n(Q)$ given in \eqref{eq:vartheta} is transitive, the element $[e_{n+1}]$ lies in $\mathrm H_\mathbb{F}^n(Q)$ and its stabilizer subgroup is $K$.
Hence, the map $g\mapsto [ge_{n+1}]$ from $G$ to $\mathrm H_\mathbb{F}^n(Q)$ gives rise to a $G$-equivariant bijection between the symmetric space $G/K$ and $\mathrm H_\mathbb{F}^n(Q)$.
Moreover, it follows by standard arguments that this bijection is in fact an isometry of Riemannian manifolds.
This gives to $\mathrm H_\mathbb{F}^n(Q)$ the structure of a Riemannian symmetric space.
Let $\Gamma$ be a non-cocompact lattice in $G$.
Set $m_\Gamma=|\Gamma\cap Z(G)|$.
Let $\mathrm{vol}(\Gamma \backslash\mathrm H_\mathbb{F}^n(Q))$ denote the volume of any fundamental domain in $\mathrm H_\mathbb{F}^n(Q)$ relative to $\Gamma$.
Let $\Delta$ be the Laplace-Beltrami operator on $\Gamma\backslash \mathrm H_\mathbb{F}^n(Q)$.
We identify $-\Delta$ with the Casimir operator $C$ of $G$, with respect to the inner product defined in \eqref{eq:inner_product_g}.
We fix a complete orthonormal set $\{\varphi_j\}$ of real valued eigenfunctions of $C$, with eigenvalues $\lambda_j$ arranged in increasing order and exceptional eigenvalues $0=\lambda_0<\lambda_1\leq\dots\leq\lambda_N<\rho^2$, which we write as $\lambda_j=\rho^2-\nu_j^2$, where $0<\nu_N\leq\nu_{N-1}\leq\dots\leq\nu_1<\rho$.
Now we can state the \emph{hyperbolic lattice point theorem}.
It was proved for the real hyperbolic space by Lax and Phillips \cite{Lax-Phillips}, with an improved error term by Levitan \cite{Levitan}, and generalized by Bruggeman, Miatello and Wallach \cite{Bruggeman-Miatello-Wallach} for any symmetric space of real rank one.
\begin{theorem}\label{thm:LPT_BMW}
In the notation above, for $[x],[y]\in \mathrm H_\mathbb{F}^n(Q)$, we have that
\begin{multline}\label{eq:LPT_BMW}
\#\{g\in\Gamma:d([x],g\cdot [y])\leq s\}=
\frac{2^{1-n} \, m_\Gamma \, \zeta}{2\rho \,\mathrm{vol}(\Gamma\backslash \mathrm H_\mathbb{F}^n(Q))}\, e^{2\rho s}\\
+2^{1-n}m_\Gamma \zeta\;\sum_{j=1}^N\frac{c(\nu_j)}{\nu_j+\rho}\, \varphi_j(x)\varphi_j(y)\, e^{(\rho+\nu_j)s}
+O\left(e^{(2\rho\frac{n}{n+1}+\varepsilon)s}\right)
\end{multline}
as $s\to+\infty$, for $\varepsilon=0$ when $\mathbb{F}=\mathbb{R}$ and for any $\varepsilon>0$ otherwise.
Here $c(\nu)$ is the Harish-Chandra $c$-function.
\end{theorem}
Note that the summation in \eqref{eq:LPT_BMW} can be restricted to the indices $j$ such that $\rho+\nu_j> 2\rho\,\frac{n}{n+1}$ (in the real case we can replace $>$ by $\geq$).
Put
\begin{equation}\label{eq:tau}
\tau=
\begin{cases}
\rho+\nu_1& \text{if}\quad \rho+\nu_1\geq 2\rho \frac{n}{n+1}\quad \text{and}\quad\mathbb{F}=\mathbb{R},\\[1mm]
\rho+\nu_1& \text{if}\quad \rho+\nu_1 > 2\rho \frac{n}{n+1}\quad\text{and}\quad\mathbb{F}=\mathbb{C},\mathbb{H},\\[1mm]
2\rho\frac{n}{n+1}+\varepsilon&\text{otherwise,}
\end{cases}
\end{equation}
where $\varepsilon$ is zero if $\mathbb{F}=\mathbb{R}$ or any positive value if $\mathbb{F}=\mathbb{C},\mathbb{H}$.
The last case covers, in particular, the situation where there are no exceptional eigenvalues.
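The case distinction defining $\tau$ can be packaged as a small routine; this is only an illustrative sketch (the helper and its signature are ours), with $\nu_1$ and $\varepsilon$ passed explicitly:

```python
from fractions import Fraction

def tau(n, r, nu1=None, eps=Fraction(0)):
    """Error exponent tau of (eq:tau).
    n: dimension; r = dim_R(F), i.e. 1, 2, 4 for F = R, C, H;
    nu1: spectral parameter of the first exceptional eigenvalue
         (None if there is none); eps: the epsilon for F = C, H."""
    rho = Fraction(r * (n + 1), 2) - 1
    threshold = 2 * rho * Fraction(n, n + 1)
    if nu1 is not None:
        if r == 1 and rho + nu1 >= threshold:        # F = R
            return rho + nu1
        if r in (2, 4) and rho + nu1 > threshold:    # F = C, H
            return rho + nu1
    return threshold + eps
```

For $\mathbb{F}=\mathbb{R}$, $n=3$ and $\nu_1=\tfrac12$, this gives $\tau=\tfrac32=n-\tfrac32$.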
With this notation we can rewrite \eqref{eq:LPT_BMW} as
\begin{equation}\label{eq:LPT-unificado}
\#\{g\in\Gamma:d([x],g\cdot [y])\leq s\} =
\frac{2^{1-n} \, m_\Gamma \, \zeta }{2\rho \,\mathrm{vol}(\Gamma\backslash \mathrm H_\mathbb{F}^n(Q))}\, e^{2\rho s}
+O\left(e^{\tau s}\right).
\end{equation}
We see that in \eqref{eq:LPT-unificado} the error term depends on the first nonzero eigenvalue of the Laplace-Beltrami operator on $\Gamma\backslash \mathrm H_\mathbb{F}^n(Q)$.
The following theorem was proved in \cite[Thm.~A]{EGMKloosterman} (see also \cite{Cogdell-Li-Piatetski-Shapiro-Sarnak}).
The notation $\mathrm{PSO}(Q)$ was introduced in Remark~\ref{rmk:isometry_groups}.
\begin{theorem}\label{thm:lambda_1}
Let $n\geq3$ and let $Q_0$ be a quadratic form with rational coefficients such that $Q_0$ is of signature $(n,1)$ and isotropic over $\mathbb{Q}$.
For any congruence subgroup $\Gamma<\mathrm{PSO}(Q_0)$, the first nonzero eigenvalue $\lambda_1$ of the Laplace-Beltrami operator on $\Gamma\backslash \mathrm H_\mathbb{R}^n(Q_0)$ satisfies $\lambda_1\geq (2n-3)/4$.
\end{theorem}
Bounds of this kind are not known in the complex and quaternionic case (see \cite[Cor.~1.4]{Li} for a related result in the complex case).
\begin{remark}\label{rmk:lambda_1}
Under the assumptions of Theorem~\ref{thm:lambda_1}, the value of $\nu_1$ satisfies
\begin{align*}
\nu_1&\leq \sqrt{\left(\frac{n-1}2\right)^2 -\frac{2n-3}4}=\frac{n-2}2.
\end{align*}
Moreover, in the worst possible case $\nu_1=\frac{n-2}2$, one has $\rho+\nu_1\geq 2\rho \frac{n}{n+1}$ for all $n\geq3$, which implies that \eqref{eq:LPT-unificado} holds for $\tau=\frac{n-1}2+\frac{n-2}2=n-\frac32$ since $\mathbb{F}=\mathbb{R}$.
\end{remark}
Before applying Theorem~\ref{thm:LPT_BMW} to our problem, we give the value of $\zeta=\mathrm{vol}(K/M)$.
\begin{lemma}\label{lem:zeta}
Under the notation above, one has $\zeta=\mathrm{vol}(S^{nr-1})$.
\end{lemma}
\begin{proof}
It is sufficient to prove the lemma for $Q=I_{n,1}$ since the isomorphism between $G$ and $\mathrm{SU}^0(n,1;\mathbb{F})$ given in Remark~\ref{rmk:cartan-involution} preserves the Killing form and the inner product \eqref{eq:inner_product_g}.
Set $S=\{H\in\mathfrak p:\langle H,H\rangle =1\}$ with the Riemannian metric given by the real inner product $\langle\cdot,\cdot\rangle$ restricted to $\{X\in\mathfrak p: \langle X,H\rangle=0\}\cong T_HS$.
The adjoint representation restricted to $K$ leaves the set $S$ invariant; this action is transitive, and the stabilizer subgroup of $H_0\in S$ is $M$.
Then we have a $K$-equivariant bijection from the manifold $K/M$ to $S$, which is in fact an isometry when $K/M$ carries the Riemannian metric given by \eqref{eq:inner_product_g} restricted to $\mathfrak k\cap\mathfrak m^\bot$.
It remains to prove that the Riemannian manifold $S$ is isometric to the $(nr-1)$-dimensional sphere in $\mathbb{R}^{nr}$.
It can be checked that
\begin{equation}\label{eq:p_su(n,1)}
\mathfrak p=\left\{X_v=\left(\begin{array}{c|c}\rule{0pt}{12pt}\rule{12pt}{0pt}& \raisebox{3pt}{$v$}\\ \hline v^*&\end{array}\right) : v\in\mathbb{F}^n \right\}.
\end{equation}
Then $\langle X_v,X_w\rangle = \frac12\mathrm{Tr}(X_v^*\, X_w)= \mathrm{Re} (\sum_{l=1}^n \bar v_l w_l)$.
Identifying $\mathfrak p$ with $\mathbb{F}^n\cong\mathbb{R}^{rn}$ in the obvious way, the inner product $\langle\cdot,\cdot\rangle$ on $\mathfrak p$ coincides with the canonical inner product on the (real) vector space $\mathbb{F}^{n}$ of dimension $nr$.
Hence $S$ is the $(nr-1)$-dimensional sphere of radius one in the (standard) Euclidean space $\mathbb{R}^{nr}$.
\end{proof}
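Since $\zeta=\mathrm{vol}(S^{nr-1})$ appears in the leading coefficient of the main formula, we recall the standard closed form $\mathrm{vol}(S^{m-1})=2\pi^{m/2}/\Gamma(m/2)$; a quick numerical sketch (ours):

```python
from math import pi, gamma

def vol_sphere(m):
    """Surface volume of the unit sphere S^{m-1} in R^m:
    vol(S^{m-1}) = 2 * pi^(m/2) / Gamma(m/2)."""
    return 2 * pi ** (m / 2) / gamma(m / 2)
```

This recovers $\mathrm{vol}(S^1)=2\pi$, $\mathrm{vol}(S^2)=4\pi$ and $\mathrm{vol}(S^3)=2\pi^2$.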
For a more detailed description of the realizations of the symmetric spaces $G/K$ and $K/M$, we refer to
\cite[\S10, Ch.~XI]{Kobayashi-Nomizu}.
Let
\begin{equation}\label{eq:Gamma_Q^0}
\Gamma_Q^0\;=\;G\cap\Gamma_Q \;=\; \mathrm{SU}^0(Q,\mathbb{F})\cap\mathrm{M}(n+1,\mathcal O).
\end{equation}
This is a finite index subgroup of $\Gamma_Q$ (see~\eqref{eq:Gamma_Q}).
Let us denote by $w$ the number of units in $\mathcal O$ when $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$.
It is clear that
$$
\kappa:=[\Gamma_Q:\Gamma_Q^0]=
\begin{cases}
4&\text{if $\mathbb{F}=\mathbb{R}$,}\\
w&\text{if $\mathbb{F}=\mathbb{C}$,}\\
1&\text{if $\mathbb{F}=\mathbb{H}$.}
\end{cases}
$$
Let $\{g_1,\dots,g_\kappa\}$ be a set of representatives of the $\Gamma_Q^0$-cosets in $\Gamma_Q$.
For example
$\left\{\left(\begin{smallmatrix}I_{n-1}&&\\& \pm1&\\&& \pm1\end{smallmatrix}\right)\right\}$ if $\mathbb{F}=\mathbb{R}$,
$\{\left(\begin{smallmatrix}I_{n}&\\ & \alpha\end{smallmatrix}\right):\alpha\in\mathcal O^\times\}$ if $\mathbb{F}=\mathbb{C}$ and
$\{I_{n+1}\}$ if $\mathbb{F}=\mathbb{H}$.
Now we can apply the lattice point theorem to our problem. For each $1\leq j\leq\kappa$, Theorem~\ref{thm:LPT_BMW} implies that
\begin{equation}\label{eq:g-in-GammaQ0}
\#\{g\in\Gamma_Q^0:d([e_{n+1}],g\cdot(g_j\cdot[y]))\leq s\}=
\frac{2^{1-n} \, m_{\Gamma_Q^0} \, \zeta}{2\rho\,\mathrm{vol}(\Gamma_Q^0\backslash \mathrm H_\mathbb{F}^n(Q))}\, e^{2\rho s} + O(e^{s\tau}).
\end{equation}
A trivial verification shows that $\mathrm{vol}(\Gamma_Q^0\backslash\mathrm H_\mathbb{F}^n(Q))= \eta_\mathbb{F}\, \mathrm{vol}(\Gamma_Q\backslash\mathrm H_\mathbb{F}^n(Q))$, with $\eta_\mathbb{R}=2$, $\eta_\mathbb{C}= m_{\Gamma_Q^0}$ and $\eta_{\mathbb H}=1$.
Furthermore, Lemma~\ref{lem:zeta} gives $\zeta=\mathrm{vol}(S^{nr-1})$.
These considerations imply, by adding formula \eqref{eq:g-in-GammaQ0} over $j$, that
\begin{equation}
\#\big\{g\in\Gamma_Q:d([e_{n+1}],g\cdot[y])\leq s\big\}\;=\;
\widetilde w\;
\displaystyle \frac{2^{1-n}\,\mathrm{vol}(S^{nr-1})}{2\rho\,\mathrm{vol}(\Gamma_Q\backslash \mathrm H_\mathbb{F}^n(Q))}\;e^{2\rho s}
+ O(e^{s\tau})
\end{equation}
where $\widetilde w=w$ if $\mathbb{F}=\mathbb{R},\mathbb{C}$ and $\widetilde w=2$ if $\mathbb{F}=\mathbb H$.
Applying this formula to \eqref{eq:main2}, we obtain that
\begin{equation}\label{eq:NafterLPT}
N_t(Q,-k)=\widetilde w\;\frac{2^{1-n}\, \mathrm{vol}(S^{nr-1})}{2\rho\,\mathrm{vol}(\Gamma_Q\backslash \mathrm H_\mathbb{F}^n(Q))}\,\Big(\sum_{y\in F}|\Gamma_{Q,y}|^{-1}\Big)\, e^{2\rho s} + O(e^{s\tau}).
\end{equation}
Recall that $\cosh(s)=a^{1/2}\,k^{-1/2}\, t$.
Notice that we can replace the error term in \eqref{eq:NafterLPT} by $O(t^\tau)$ since $e^s\sim2\cosh(s)=2\,a^{1/2}\,k^{-1/2}\, t$ as $s\to+\infty$, and furthermore $e^{2\rho s}=2^{2\rho} a^{\rho} k^{-\rho}\, t^{2\rho} + O(t^\tau)$.
Collecting all the information in this section, we have obtained the following formula.
\begin{proposition}\label{prop:main3}
The number $N_t(Q,-k)$ satisfies the asymptotic estimate
\begin{equation}\label{eq:main3}
N_t(Q,-k)= \widetilde w\;\frac{2^{2\rho-(n-1)}a^\rho\,\mathrm{vol}(S^{nr-1})}{2\rho\,k^\rho\,\mathrm{vol}(\Gamma_Q\backslash \mathrm H_\mathbb{F}^n(Q))}\,\Big(\sum_{y\in F}|\Gamma_{Q,y}|^{-1}\Big)\; t^{2\rho} + O(t^{\tau}),
\end{equation}
as $t\to+\infty$, where $\tau$ is as in \eqref{eq:tau} and $\widetilde w=w$ if $\mathbb{F}=\mathbb{R},\mathbb{C}$ and $\widetilde w=2$ if $\mathbb{F}=\mathbb H$.
Moreover, when $\mathbb{F}=\mathbb{R}$ and $n>2$, \eqref{eq:main3} holds with $\tau=n-3/2$.
\end{proposition}
The last assertion follows from Remark~\ref{rmk:lambda_1}.
\section{The mass of the representation}
The object of this section is to obtain a formula for the term $\sum_{y\in F}|\Gamma_{Q,y}|^{-1}$ by using Siegel's theory on quadratic forms and its generalization to complex hermitian forms given by Raghavan.
Our main references are \cite{SiegelTata}, \cite{Ramanathan} and \cite{Raghavan}.
From now on we make the assumption $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$.
We denote by $d_{\mathcal O}$ the discriminant of the quotient field of $\mathcal O$.
The following remark introduces a canonical volume element on algebraic varieties that will be useful (see \cite[\S5, Ch.~IV]{SiegelTata} for more details).
\begin{remark}\label{rmk:volume_element}
Let $x_1,\dots,x_m$ denote the coordinates of $\mathbb{R}^m$. Let $y_j=f_j(x_1,\dots,x_m)$ ($1\leq j\leq n$) be $n$ smooth functions with $n\leq m$.
Let $a_1,\dots,a_n\in\mathbb{R}$ such that the surface $\Omega=\{x\in\mathbb{R}^m: y_j(x)=a_j,\;\text{for } 1\leq j\leq n\}$ is non-singular, that is, the matrix with entries $\frac{\partial f_i}{\partial x_j}$ has maximum rank $n$ at every point of $\Omega$.
Choose $m-n$ differentiable functions $y_{n+1},\dots,y_m$ on $\mathbb{R}^m$ so that the Jacobian $J=\det(\frac{\partial y_i}{\partial x_j})$ is nonzero at every point of $\Omega$.
Then
\begin{equation}
d\omega = |J|^{-1}\, d y_{n+1}\dots dy_m
\end{equation}
gives a volume element on $\Omega$ independently of the choice of $y_{n+1},\dots,y_m$. Siegel denoted this volume element by $\frac{\{dx\}}{\{dy\}}$.
\end{remark}
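As a simple illustration of this construction (our example, not used later): take $m=2$, $n=1$, $y_1=x_1^2+x_2^2$ and $a_1>0$, so that $\Omega$ is the circle of radius $\sqrt{a_1}$. Choosing $y_2=\arctan(x_2/x_1)$, one computes $J=\det\left(\begin{smallmatrix}2x_1&2x_2\\-x_2/y_1&x_1/y_1\end{smallmatrix}\right)=2$, hence

```latex
d\omega = |J|^{-1}\,dy_2 = \tfrac12\,d\theta,
\qquad\text{and}\qquad
\int_\Omega d\omega = \tfrac12\int_0^{2\pi} d\theta = \pi,
```

independently of the radius $\sqrt{a_1}$.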
We pick $v\in\mathbb{F}^n$ and $R\in\mathrm M(n,\mathbb{F})$ such that the matrix
$$
W=\begin{pmatrix}
-k&v^*\\v&R
\end{pmatrix}
$$
is $\mathbb{F}$-hermitian of signature $(n,1)$.
For $y\in\mathbb{F}^{n+1}$ such that $Q[y]=-k$, let $\mathrm{U}(Q,\mathbb{F})_y$ denote the set of elements $U\in\mathrm{U}(Q,\mathbb{F})$ such that $Uy=y$.
Note that if $y\in\mathcal O^{n+1}$, the stabilizer of $y$ in $\Gamma_Q$ is $\Gamma_{Q,y}=\mathrm{U}(Q,\mathbb{F})_y\cap \mathrm{GL}(n+1,\mathcal O)$.
Consider the varieties
\begin{eqnarray*}
\Omega(Q,W)&=&\{X\in\mathrm{M}(n+1,\mathbb{F}) : Q[X]=W\},\\
\Omega(Q,W;y)&=&\{Y\in\mathrm{M}(n+1,n;\mathbb{F}) : Q[(y\,|\,Y)]=W\}.
\end{eqnarray*}
The groups $\mathrm{U}(Q,\mathbb{F})$ and $\mathrm{U}(Q,\mathbb{F})_y$ act by left multiplication on $\Omega(Q,W)$ and on $\Omega(Q,W;y)$ respectively. On these varieties we fix the volume elements
$$
d\omega =
\left|\frac{\det(W)}{\det(Q)}\right|^{\frac{2-r}{2}}\,
\frac{\{ d X\}}{\{ d W\}}
\qquad\text{and}\qquad
d\omega^* =
\left|\frac{\det(W)}{\det(Q)}\right|^{\frac{2-r}{2}}\,
\frac{\{ d X\}}{\{ d v\}\{ d R\}},
$$
respectively, where $\frac{\{ d X\}}{\{ d W\}}$ and $\frac{\{ d X\}}{\{ d v\}\{ d R\}}$ are the volume elements given in Remark~\ref{rmk:volume_element}.
In the complex case, we write $X=X^{(1)}+iX^{(2)}$ and $W=W^{(1)} + iW^{(2)}$ with $X^{(1)}$, $X^{(2)}$, $W^{(1)}$ and $W^{(2)}$ real matrices. Thus, the volume element $\frac{\{ d X\}}{\{ d W\}}$ is defined by considering the algebraic equations $\mathrm{Re}(Q[X^{(1)}+iX^{(2)}])=W^{(1)}$ and $\mathrm{Im}(Q[X^{(1)}+iX^{(2)}])=W^{(2)}$.
The factor $\left|{\det(W)}/{\det(Q)}\right|^{\frac{2-r}{2}}$ is included so that $d\omega$ does not depend on $W$.
Similar considerations apply to $\frac{\{ d X\}}{\{ d v\}\{ d R\}}$ and $d\omega^*$.
Siegel (and \cite{Ramanathan} for the hermitian case) defines \emph{the measure of the representation of $-k\in\mathbb{Z}$ by $Q$} as
\begin{equation}\label{eq:measurerep}
\mu(Q,-k)=\frac{1}{\mu(Q)}\sum_{y\in F} \mu(y,Q).
\end{equation}
Here $\mu(Q)$ denotes the \emph{measure of the unit group $\Gamma_Q$} given by $(r^2/|d_{\mathcal O}|)^{(n+1)(n+2)/4}$ times the volume of any fundamental domain for the action of $\Gamma_Q$ on $\Omega(Q,W)$, and similarly,
$\mu(y,Q)$ is the \emph{measure of the representation $y$}, given by $(r^2/|d_{\mathcal O}|)^{n(n+1)/4}$ times the volume of any fundamental domain for the action of $\Gamma_{Q,y}$ on $\Omega(Q,W;y)$.
We will recover from the right hand side of \eqref{eq:measurerep} the term $\sum_{y\in F}|\Gamma_{Q,y}|^{-1}$ and then, by applying Siegel's main theorem, we will obtain an explicit formula for this term.
By \cite[Thm.~7, Ch~IV]{SiegelTata} and \cite[(93)]{Raghavan} (or \cite[(70)]{Ramanathan}) we have that
\begin{equation}\label{eq:volS}
w\,\mu(Q)
=\big(r^2/{|d_{\mathcal O}|}\big)^{\frac{(n+1)(n+2)}4}\,
\frac{\mathrm{vol}(\Gamma_Q\backslash\mathrm H_\mathbb{F}^n(Q))}{r^{n+1}\, |\det Q|^{\frac{r}{2}n+1} }\,
\frac{\pi^{\frac{r}{2}}}{\Gamma(\frac{r}{2})}
\prod_{j=1}^n\frac{\pi^{\frac{r}{2}j}}{\Gamma(\frac{r}{2}j )},
\end{equation}
where $w=\#\mathcal O^\times$.
Let $\widetilde{\mathcal F}(y)$ be a fundamental domain of the action of $\Gamma_{Q,y}$ on $\Omega(Q,W;y)$.
By definition
$
\mu(y,Q)=
|\det W|^{(2-r)/2}\,|\det Q|^{-(2-r)/2}
(r^2/{|d_{\mathcal O}|})^{n(n+1)/4} \int_{\widetilde{\mathcal F}(y)} d \omega^*,
$
Since the measure $d \omega^*$ is invariant under $\Gamma_{Q,y}$, we obtain
$$
\mu(y,Q)=
\frac{1}{|\Gamma_{Q,y}|}\,
\left|\frac{\det W}{\det Q}\right|^{(2-r)/2}
\big({r^2}/{{|d_{\mathcal O}|}}\big)^{\frac{n(n+1)}{4}}
\int_{\Omega(Q,W;y)} d \omega^*.
$$
Using \cite[Thm.~6, Ch~IV]{SiegelTata} and \cite[Lemma~9]{Ramanathan}, we have that
\begin{equation}\label{eq:volS*}
\mu(y,Q)=
|\Gamma_{Q,y}|^{-1}\,
\big({r^2}/{{|d_{\mathcal O}|}}\big)^{\frac{n(n+1)}{4}}\,
\frac{k^{1-\frac{r}{2}(n+1)}}{|\det Q|^{\frac{r}{2}(n-1)+1}}\;
\prod_{j=1}^n\frac{\pi^{\frac{r}{2} j }}{\Gamma(\frac{r}{2} j )}.
\end{equation}
Finally, \eqref{eq:volS} and \eqref{eq:volS*} imply that
\begin{equation}\label{eq:mu(Q,-k)}
\mu(Q,-k) =
w\,
|d_{\mathcal O}|^{\frac{n+1}2}
\frac{k^{1-\frac{r}{2}(n+1)} |\det Q|^{\frac{r}{2}}}
{\mathrm{vol}(\Gamma_Q\backslash\mathrm H_\mathbb{F}^n(Q))}
\frac{\Gamma(\frac{r}{2})}{\pi^{\frac{r}{2}}}
\sum_{y\in F} |\Gamma_{Q,y}|^{-1}.
\end{equation}
Now, we will recall Siegel's main theorem for indefinite quadratic and hermitian forms (see \cite[Thm.~1]{SiegelIndefinite} and \cite[Thm.~7]{Raghavan}).
For every rational prime $p$, the \emph{$p$-adic density of representation of $-k$ by $Q$} is
\begin{equation}\label{eq:densidad}
\delta_p(Q,-k)=
\lim_{j\to\infty} p^{-j(r (n+1)-1)}\;\# \{x\in{\big(\mathcal O /p^j\mathcal O\big)}^{n+1}: Q[x]\equiv -k\;\bmod p^j \}.
\end{equation}
Define $\delta(Q,-k)=\prod_{p} \delta_p(Q,-k)$, the \emph{local density}, where the product is over all prime numbers.
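For a concrete form, the truncated densities in \eqref{eq:densidad} can be computed by brute force. The sketch below (our own illustration, not part of the original text) takes $\mathbb{F}=\mathbb{R}$, $Q[x]=x_1^2+x_2^2-x_3^2$ and $k=1$; for this form and an odd prime $p$ not dividing $k$, every solution mod $p$ is nonsingular (the gradient vanishes only at the origin, which is not a solution), so by Hensel's lemma the truncated density should already stabilize at $j=1$.

```python
from fractions import Fraction

def truncated_density(p, j, k=1):
    """p^{-j(r(n+1)-1)} * #{x in (Z/p^j Z)^3 : x1^2 + x2^2 - x3^2 = -k},
    with r = 1 and n + 1 = 3, so the normalization is p^{-2j}."""
    q = p ** j
    # histogram of squares mod q
    sq = [0] * q
    for x in range(q):
        sq[x * x % q] += 1
    # x1^2 + x2^2 - x3^2 = -k  <=>  x3^2 = x1^2 + x2^2 + k  (mod q)
    count = sum(sq[a] * sq[b] * sq[(a + b + k) % q]
                for a in range(q) for b in range(q))
    return Fraction(count, q * q)

for p in (3, 5):
    print(p, truncated_density(p, 1), truncated_density(p, 2))
```

For $p=3$ and $p=5$ both truncations agree ($2/3$ and $6/5$, respectively), consistent with the limit being attained already at $j=1$.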
\begin{theorem}[\emph{Siegel's mass formula}]\label{thm:Siegel'sMass}
Let $Q$ be an $\mathbb{F}$-hermitian form as in \eqref{eq:Q} with $n\geq2$ and let $k$ be a positive integer.
Then
\begin{equation}\label{eq:Siegel'sMass}
\mu(Q,-k)=\delta(Q,-k).
\end{equation}
\end{theorem}
See \cite[Thm.~1]{SiegelIndefinite} for the real case and \cite[Thm.~7]{Raghavan} for the complex case.
By combining equation \eqref{eq:mu(Q,-k)} and Theorem~\ref{thm:Siegel'sMass}, we obtain an expression for the term $\sum_{y\in F} |\Gamma_{Q,y}|^{-1}$, the main goal of this section.
\begin{corollary}\label{cor:term}
We have that
$$
\sum_{y\in F} |\Gamma_{Q,y}|^{-1}=
w^{-1}
|d_{\mathcal O}|^{-\frac{n+1}2}
\frac{k^{\frac{r}{2}(n+1)-1}}{|\det Q|^{\frac{r}{2}}}
\mathrm{vol}(\Gamma_Q\backslash\mathrm H_\mathbb{F}^n(Q))\;
\frac{\pi^{\frac{r}{2}}}{\Gamma(\frac{r}{2})}
\delta(Q,-k).
$$
\end{corollary}
Note that ${\pi^{r/2}}/{\Gamma(r/2)}=1,\pi,\pi^2$ for $\mathbb{F}=\mathbb{R},\mathbb{C}$ and $\mathbb H$ respectively.
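As a quick numerical sanity check of this remark (our own addition), recall that $\mathbb{F}=\mathbb{R},\mathbb{C},\mathbb H$ correspond to $r=1,2,4$:

```python
import math

# pi^{r/2} / Gamma(r/2) for r = 1, 2, 4 gives 1, pi, pi^2
for r, expected in ((1, 1.0), (2, math.pi), (4, math.pi ** 2)):
    value = math.pi ** (r / 2) / math.gamma(r / 2)
    assert math.isclose(value, expected)
```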
\section{Main theorem}
We can now state the main result in this paper, which follows by combining Proposition~\ref{prop:main3} and Corollary~\ref{cor:term}. The last assertion is a consequence of Remark~\ref{rmk:lambda_1}.
We first recall some terminology: $r=\dim_\mathbb{R}(\mathbb{F})$, $\rho=(n+1)\,r/2-1$ and $\mathcal O$ is a maximal order in $\mathbb{F}$ with discriminant $d_{\mathcal O}$.
\begin{theorem}\label{thm:main4}
Let $Q$ be an $\mathbb{F}$-hermitian matrix as in \eqref{eq:Q}, with $n\geq2$ and $\mathbb{F}=\mathbb{R}$ or $\mathbb{C}$.
We fix $k\in\mathbb{N}$ such that $-k$ is represented by $Q$.
Then, the number $N_t(Q,-k)$ of elements $x\in\mathcal O^{n+1}$ such that $Q[x]=-k$ and $|x_{n+1}|\leq t$, satisfies the following asymptotic estimate as $t\to+\infty$,
\begin{equation}\label{eq:main4}
N_t(Q,-k)=
\frac{2^{(r-1)(n+1)}}{|d_{\mathcal O}|^{\frac{n+1}2}}
\frac{a^\rho\,\mathrm{vol}(S^{nr-1})}{2\rho\,|\det Q|^{\frac{r}{2}}}
\frac{\pi^{\frac{r}{2}}}{\Gamma(\frac{r}{2})}\;
\delta(Q,-k)\;
t^{2\rho} + O(t^{\tau}),
\end{equation}
where $\tau$ is as in \eqref{eq:tau}.
Moreover, when $\mathbb{F}=\mathbb{R}$ and $n>2$, the formula \eqref{eq:main3} holds for $\tau=n-3/2$.
\end{theorem}
\begin{remark}
In order to get an explicit value of the main term in \eqref{eq:main4} for a fixed $\mathbb{F}$-hermitian form $Q$, one needs to determine $\delta(Q,-k)$.
When $\mathbb{F}=\mathbb{R}$, T.~Yang \cite{Yang} computed this local density for any quadratic form $Q$. For $\mathbb{F}=\mathbb{C}$ see \cite{Hironaka}.
\end{remark}
\begin{remark}\label{rmk:norms}
We consider the real norm $\|\cdot\|$ induced by the $\mathbb{F}$-hermitian form $\left(\begin{smallmatrix}A&\\&a\end{smallmatrix}\right)$,
i.e.\ $\|x\|^2 = A[\hat x]+a|x_{n+1}|^2$, where $\hat x=(x_1,\dots,x_n)^t$.
We claim that Theorem~\ref{thm:main4} provides an asymptotic formula with error term, as $s\to+\infty$, for $\widetilde N_s(Q,-k)$ (see~\eqref{eq:intro-N_s(Q,k)}), the number of solutions lying in the Euclidean ball of radius $s$.
Indeed, it is clear that
$$
\left.
\begin{array}{r@{\;}c@{\;}l}
Q[x] &=& A[\hat x]-a|x_{n+1}|^2 = -k \\[1mm]
\|x\|^2 &=& A[\hat x]+a|x_{n+1}|^2 \leq s^2
\end{array}
\right\}
\quad \text{ imply }\quad
|x_{n+1}| \leq \sqrt{\frac{s^2+k}{2}}=:t_s.
$$
Hence, by Theorem~\ref{thm:main4}, we have
\begin{equation}\label{eq:tilde_N-1st}
\widetilde N_s(Q,-k) = N_{t_s}(Q,-k) = C_{Q,k}\; t_s^{2\rho} + O(t_s^{\tau}),
\end{equation}
where $C_{Q,k}$ denotes the main coefficient of \eqref{eq:main4}.
By Taylor's expansion at $s=\infty$, $t_s = s/\sqrt{2} +O(s^{-1})$,
thus we can replace $O(t_s^{\tau})$ by $O(s^{\tau})$ in \eqref{eq:tilde_N-1st}.
Moreover, $t_s^{2\rho} = s^{2\rho}/2^{\rho}+O(s^{2\rho-2})$.
But $\tau \geq 2\rho\, n/(n+1)+\varepsilon$ by \eqref{eq:tau}, and so it follows immediately that $\tau-(2\rho-2)>0$ for $\mathbb{F}=\mathbb{R}$ and $\mathbb{C}$.
Therefore, \eqref{eq:tilde_N-1st} reduces to
\begin{equation}
\widetilde N_s(Q,-k) = 2^{-\rho}\, C_{Q,k} \; s^{2\rho} + O(s^{\tau}).
\end{equation}
\end{remark}
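The expansion $t_s=s/\sqrt 2+O(s^{-1})$ used in the remark is easy to probe numerically: the leading correction is $k/(2\sqrt 2\,s)$, as the following check (our own illustration) confirms.

```python
import math

k = 5.0
for s in (1e3, 1e4):
    t_s = math.sqrt((s * s + k) / 2)
    # t_s - s/sqrt(2) = k/(2*sqrt(2)*s) + O(s^{-3})
    assert math.isclose(s * (t_s - s / math.sqrt(2)),
                        k / (2 * math.sqrt(2)), rel_tol=1e-4)
```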
\begin{example}
The case $\mathbb{F}=\mathbb{R}$ and
$
Q=I_{n,1}=\left(\begin{smallmatrix}I_n&\\&-1\end{smallmatrix}\right)
$
was considered by J.~Ratcliffe and S.~Tschantz \cite{Ratcliffe-Tschantz}.
We have $r=1$, $|d_{\mathcal O}|=1$, $a=1$, $|\det(I_{n,1})|=1$ and $\rho=(n-1)/2$. Theorem~\ref{thm:main4} now yields
\begin{equation}
N_t(I_{n,1},-k)=
\frac{\mathrm{vol}(S^{n-1})}{n-1}\;
\delta(I_{n,1},-k)\;
t^{n-1} + O(t^{n-3/2}).
\end{equation}
In \cite[Thm.~12]{Ratcliffe-Tschantz} there is an explicit formula for the local density $\delta(I_{n,1},-k)$.
\end{example}
\begin{example}
We conclude the article by considering the case of the Lorentzian hermitian form over the Gaussian integers, i.e.\ $Q=I_{n,1}$ for $\mathbb{F}=\mathbb{C}$ and $\mathcal O=\mathbb{Z}[\sqrt{-1}]$.
We have $r=2$, $\rho=n$, $d_{\mathcal O}=-4$, $a=1$ and $|\det(I_{n,1})|=1$.
Furthermore, $\mathrm{vol}(S^{2n-1})=2\pi^{n}/(n-1)!$.
Theorem~\ref{thm:main4} now implies that
\begin{align}\label{eq:Lorentzianhermitian}
N_t(I_{n,1},-k) = \dfrac{\pi^{n+1}}{n!}\; \delta(I_{n,1},-k)\; t^{2n} + O(t^{\tau}).
\end{align}
The local density $\delta(I_{n,1},-k)$ is explicitly computable.
For example, when $n=2$ and $k=1$ we have $\delta(I_{2,1},-1)=2^3\cdot 3\,\pi^{-3}$ and consequently $N_t(I_{2,1},-1) = 12\, t^{4} + O(t^{\tau}).$
In a future paper we will compute this term for several examples and test our formula against numerical computations.
\end{example}
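The closing numerical claim can be probed directly. Counting $x\in\mathbb Z[i]^3$ with $|x_1|^2+|x_2|^2-|x_3|^2=-1$ and $|x_3|\leq t$ reduces, via Jacobi's four-square theorem, to summing $r_4(|x_3|^2-1)$ over the Gaussian integers $x_3$ in the disc of radius $t$. The sketch below is our own illustration, not part of the original computation:

```python
def r4(m):
    """#{(a,b,c,d) in Z^4 : a^2+b^2+c^2+d^2 = m};
    by Jacobi, 8 * (sum of divisors of m not divisible by 4) for m >= 1."""
    if m == 0:
        return 1
    return 8 * sum(d for d in range(1, m + 1) if m % d == 0 and d % 4 != 0)

def N(t):
    """#{x in Z[i]^3 : |x1|^2 + |x2|^2 - |x3|^2 = -1, |x3| <= t}."""
    return sum(r4(c * c + d * d - 1)
               for c in range(-t, t + 1) for d in range(-t, t + 1)
               if 1 <= c * c + d * d <= t * t)

t = 20
print(N(t) / t ** 4)   # expected to approach 12 as t grows
```

At $t=20$ the ratio should already be near the predicted constant $12$, with the deviation controlled by the error term $O(t^{\tau})$.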
\section*{Acknowledgments}
This is part of the author's Ph.D.\ thesis, written under the supervision of Professor Roberto Miatello at the Universidad Nacional de C\'ordoba, Argentina.
The author sincerely thanks Roberto for suggesting the problem and for many helpful discussions concerning the material in this paper.
The author also wishes to express his thanks
to Wai Kiu Chan, Rainer Schulze-Pillot and Takao Watanabe for several helpful comments and
to Jorge Lauret and the referee for reading the draft and making helpful suggestions.
\section{Introduction}
In a recent letter \cite{ABCD} we showed that the remarkable
identical gamma ray bands (isospectral gamma rays)
seen in neighboring superdeformed nuclei \cite{Byrski,Twin} have
a natural description in terms of supersymmetric quantum
mechanics (SSQM).
The role of supersymmetry in superdeformed nuclei has not in general
been emphasized except in \cite{Gelberg}.
The SSQM picture requires that bands with isospectral gamma
rays in an even and neighboring odd nucleus be accompanied by
two more isospectral bands, one in the odd nucleus
and another in a neighboring even nucleus. As we have shown
in \cite{ABCD} this is most naturally realized when the even nuclei
differ by two nucleons.
We also suggested that two of the bands actually occurred at a very high
excitation energy, so that they would be unobservable in experiment.
In this paper we point out that all four bands might actually be
observable.
We show that the experimentally observed isospectral pairs can very
naturally be grouped together in two quartets.
We point out that two such
quartets of gamma ray sequences have indeed been observed in
superdeformed nuclei in the $A=150$ mass region, although to our
knowledge
they have seldom been associated together \cite{Zuber,Rag}.
As could be expected,
the quartets are not
perfectly isospectral which suggests a small breaking
of the supersymmetry.
In this paper we devise a simple form of supersymmetry breaking and
show that this accounts for the bulk of the experimental data related
to these quartets.
Our work on symmetry breaking gives promise of making a
connection between our approach and more microscopic ones.
Because SSQM relates the four gamma ray cascades that make up
the quartet, it also makes predictions about transition rates
in the four bands. These are presented here to stimulate experimental
interest when the new facilities to study superdeformed nuclei come
on line \cite{Beck}.
\section{Gamma ray quartets in superdeformed nuclei}
The recently observed sequences of isospectral gamma rays in
neighboring even and odd nuclei lead naturally to an interpretation
in terms of
supersymmetric quantum mechanics. As is well known SSQM is designed to
have equal energy eigenvalues for bosonic (even nucleus) and fermionic
(odd nucleus) states. In addition SSQM requires that the dimension of
the bosonic part
and the fermionic part of a given degenerate multiplet be identical.
For the case of superdeformed nuclei this implies that for a given
excitation energy the number of states in the even nucleus equals that
in the neighboring odd nucleus.
If the state in the even nucleus has angular momentum $L$ (integer) and
the corresponding state in the odd nucleus $J$ (half integer), the
degeneracies are $2L+1$ and $2J+1$, respectively. Of course these two
numbers can never be equal. In \cite{ABCD} we showed that the state
counting problem can be solved in the following way.
Suppose the state in the odd nucleus is obtained by
coupling one particle (or hole) with spin or pseudospin \cite{pseu}
$s=\frac{1}{2}$ to the corresponding state in the even
nucleus. The angular momenta in the odd system are then
$J_>=L+\frac{1}{2}$ and $J_<=L-\frac{1}{2}$.
Suppose further that in the even nucleus with two particles
(or two holes) more than in the original even nucleus
there is a level with angular momentum $L$.
Now there are $2(2L+1)$ even states and precisely the same number of
odd states. These four states form a degenerate SSQM quartet.
Transitions between sets of these states will lead to four isospectral
gamma ray sequences, one in the even nucleus $A$, one
in the other even nucleus $A \pm 2$, and two in the
odd $ A \pm 1$ nucleus. One of these last two is among the
$J_>$ states and the other among the $J_<$ states. As we
shall show below cross-over transitions between
$J_<$ and $J_>$ are very small.
It is this quartet of gamma rays sequences that
is the hallmark of SSQM. Let us examine two candidates.
In Tables I and II we show that all four known isospectral
pairs can be grouped in two of these quartets.
In Table I we compare gamma ray sequences in
four superdeformed bands in the adjacent nuclei
$^{152}$Dy, $^{151}$Tb (twice) and $^{150}$Gd
\cite{Byrski,Twin86,Bentley87,Fallon89,sd}.
As is well known spins have not been measured yet, and we have
thus chosen spin
assignments consistent with a SSQM interpretation. We have followed the
spin assignments in the compilation \cite{sd} as closely as possible.
We see that the four bands are very
close in energy, but are not completely degenerate.
In spite of the breaking of supersymmetry it is clear that the
quartet bears the strong family resemblance suggested
by SSQM. A second example of such a quartet
is shown in Table II. Here the four
bands are in $^{148}$Gd, $^{147}$Gd (twice), and
$^{146}$Gd \cite{Zuber,Rag,sd,Hebbinghaus90,Deleplanque88}.
Again the spins have been assigned to stress
the SSQM connection, and again we see the general pattern
of slightly broken SSQM. The pattern does seem
to shift around $L=46$, which is due to a level crossing
that is not contained in the simple
SSQM quartet picture. The crossing occurs in two bands
that do not form an isospectral pair:
the 1b-band in $^{146}$Gd and the 1b-band in
$^{147}$Gd. Before the level crossing the last band is isospectral
with the 1b-band in $^{148}$Gd, and
the first with the 2b-band in $^{147}$Gd.
As can be seen from Fig.~2 the nature of the crossing is very similar
in both cases. Thus the band crossing stresses the correlation between
the two isospectral pairs within the quartet.
It suggests that a description in terms of
two independent pairs of superdeformed bands is not adequate.
It would certainly be interesting to pursue the search for identical
superdeformed bands \cite{Casten} and investigate whether more
supersymmetric quartets exist. There are preliminary indications
that such quartets are present in the $A=190$ mass region
(see Table III). Furthermore, it is very important to determine
the spins experimentally. The spin assignments of SSQM are a
central feature of the theory and their verification
or refutation is crucial to test the application of
supersymmetry to superdeformed nuclei. Note that
the SSQM spin assignments do not always agree with other
phenomenological determinations \cite{Zeng9192,Draper90}.
\section{SSQM breaking}
In a SSQM description the quartet of states discussed in Section II
is degenerate. This degeneracy implies that the energy only depends
on $L$. A small spin-orbit splitting will break the supersymmetry.
At the algebraic level of our treatment it is irrelevant whether
this is a spin-orbit or a pseudo-spin-orbit coupling.
For each nucleus, we take a Hamiltonian of the form
\begin{equation}
H_i = a_i \; {\bf L} \cdot {\bf L} + b \; {\bf s}\cdot {\bf L} ~,
\end{equation}
where ${\bf L}$ is the angular momentum of the core and ${\bf s}$
is the spin (or pseudo-spin) of the odd particle (hole).
The coefficients $a_i$ are inversely proportional to the moment
of inertia in each nucleus. The strength of the spin-orbit coupling,
$b$, is taken the same for each nucleus. In the sequences
we are considering (Tables I and II) we label the three nuclei by
$i = 0$ for $^{152}$Dy or $^{148}$Gd, by $i = 1$ for $^{151}$Tb
or $^{147}$Gd and by $i = 2$ for $^{150}$Gd or $^{146}$Gd.
Supersymmetry implies that all the $a_i$'s are equal. A plot of
the gamma ray energies (see Figures I and II) against the core
angular-momentum $L$ confirms this
by revealing a near perfect straight line with almost the same slope
for all transitions. With $a_0$ as the reference, we assume small $L$
dependent departures in $a_1$ which we parametrize as
$a_1 = a_0 + \frac{\epsilon}{L}$ \cite{VMI}.
If this departure is due to the extra hole (or extra particle),
we expect $a_2 = a_0 + \frac{2 \epsilon}{L}$. With these expressions
we find for the transition energies $E_{\gamma i}(I)=E_i(I)-E_i(I-2)$,
where $E_i(I)$ is the excitation energy of the state with
angular momentum
$I$,
\begin{eqnarray}
E_{\gamma 0}(L) &=& a_0 (4 L -2) ~,
\nonumber\\
E_{\gamma 1}(L \pm {\textstyle \frac{1}{2}}) &=&
a_0 (4 L -2) + 2 \epsilon \pm b ~,
\nonumber\\
E_{\gamma 2}(L) &=& a_0 (4 L -2) + 4 \epsilon ~.
\end{eqnarray}
In this parametrization
\begin{eqnarray}
\Delta E_{\gamma 01} &=&
E_{\gamma 0}(L)-E_{\gamma 1}(L+{\textstyle \frac{1}{2}})=
\Delta E_{\gamma 12}
\nonumber\\
&=& E_{\gamma 1}(L-{\textstyle \frac{1}{2}})-E_{\gamma 2}(L)
= -(2 \epsilon + b) ~.
\end{eqnarray}
Both in the case of Table I and
Table II this difference is essentially zero.
This relationship among the gamma ray sequences in the
Gadolinium isotopes has been noted by Ragnarsson \cite{Rag}
but not in the context of SSQM and with different spin assignments.
(We stop the comparison at $L=46$ in Table II
as discussed in the previous Section).
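The relations above follow directly from the Hamiltonian (1) with $a_1=a_0+\epsilon/L$ and $a_2=a_0+2\epsilon/L$, and can be checked mechanically. In the sketch below (our own illustration), the expectation value of ${\bf s}\cdot{\bf L}$ is $L/2$ for $J=L+\frac12$ and $-(L+1)/2$ for $J=L-\frac12$:

```python
def excitation(a0, eps, b, i, L, coupling):
    """Excitation energy of the state with core angular momentum L in the
    nucleus with i extra particles (holes); coupling = +1 for J = L + 1/2,
    -1 for J = L - 1/2, and 0 for the even nuclei (no odd nucleon)."""
    a_i = a0 + i * eps / L
    s_dot_l = {+1: L / 2.0, -1: -(L + 1) / 2.0, 0: 0.0}[coupling]
    return a_i * L * (L + 1) + b * s_dot_l

def e_gamma(a0, eps, b, i, L, coupling):
    """Gamma-ray energy for the in-band transition core L -> core L - 2."""
    return (excitation(a0, eps, b, i, L, coupling)
            - excitation(a0, eps, b, i, L - 2, coupling))

a0, eps, b, L = 47.0, -6.3, 12.6, 50   # roughly the Table I fit, eps = -b/2
d01 = e_gamma(a0, eps, b, 0, L, 0) - e_gamma(a0, eps, b, 1, L, +1)
d12 = e_gamma(a0, eps, b, 1, L, -1) - e_gamma(a0, eps, b, 2, L, 0)
print(d01, d12)   # equal for any parameters; both vanish when eps = -b/2
```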
We have no simple explanation for why
$E_{\gamma 1}(L+\frac{1}{2})=E_{\gamma 0}(L)$, but we see that for
both Tables our parametrization immediately gives that if
$E_{\gamma 1}(L+\frac{1}{2})=E_{\gamma 0}(L)$,
then $E_{\gamma 2}(L)=E_{\gamma 1}(L-\frac{1}{2})$ as seen.
The parametrization also implies that
$E_{\gamma 2}(L)-E_{\gamma 0}(L)= 4 \epsilon$ independent of $L$
and that too is fairly well borne out in the Tables.
We find for the nuclei of Table I, $\epsilon = -b/2=-6.3 \pm 1.3$ keV,
which should be compared to $a_0 = 47 \pm 1$ keV. Thus the term
proportional
to $\epsilon$ is indeed a small perturbation in the expression for
$a_2$.
Similarly, for Table II we find $\epsilon = -b/2 = 8.3 \pm 0.6$ keV and $a_0= 55 \pm 3$ keV.
and $a_0= 55 \pm 3$.
We see that a very simple approach to SSQM breaking based on a weak
spin-orbit coupling and a slight difference in how moments of
inertia change with added particles or holes accounts for the data of
Tables I and II. Although at the moment we do not have a microscopic
understanding of the pair wise equality of gamma ray energies,
$E_{\gamma 1}(L+\frac{1}{2})=E_{\gamma 0}(L)$ and
$E_{\gamma 1}(L-\frac{1}{2})=E_{\gamma 2}(L)$, we have shown
a mechanism in terms of a broken supersymmetry that exhibits this
pattern.
Whether the remaining degeneracy is accidental or the result
of a deeper symmetry remains to be seen.
It is our goal in this paper to show that we can realize the pattern
of super-symmetry breaking apparent in the experimental data.
\section{Transition strengths}
In the tentative spin assignments in Table I and II we have assumed
that the subsequent states in a superdeformed band differ by two units
of angular momentum.
If we further adopt a pseudo-spin picture
in which the $E2$ transition operator is independent of the
pseudo-spin, the ratio between the four possible intraband
transition probabilities only depends on geometric factors from
angular momentum algebra \cite{BK}.
Again taking the $B(E2)$ value in the $A$ nucleus as a reference,
we find the following relations for the intraband transitions
\begin{eqnarray}
B(E2;J_>=L+{\textstyle \frac{5}{2}} \rightarrow J_>=
L+{\textstyle \frac{1}{2}}) &=&
B(E2;L+2 \rightarrow L) ~,
\nonumber\\
B(E2;J_<=L+{\textstyle \frac{3}{2}} \rightarrow J_<=
L-{\textstyle \frac{1}{2}}) &=&
\frac{L(2L+5)}{(2L+1)(L+2)}
\nonumber\\
&& \times B(E2;L+2 \rightarrow L) ~.
\end{eqnarray}
The transition probabilities in the even nuclei in the quartet
are identical in this scheme. For typical values of the angular momenta
in superdeformed bands (see Tables I and II) the intraband
transitions energies are nearly the same
(isospectral). The related interband transition
between the two bands in the odd nucleus,
\begin{eqnarray}
B(E2;J_<=L+{\textstyle \frac{3}{2}} \rightarrow J_>=
L+{\textstyle \frac{1}{2}}) &=&
\frac{2}{(2L+1)(L+2)}
\nonumber\\
&& \times B(E2;L+2 \rightarrow L) ~,
\end{eqnarray}
is highly suppressed with respect to the intraband transitions
for typical values of the angular momentum. The other
interband transition from $J_> =L+\frac{5}{2}$ to $J_< =L-\frac{1}{2}$
is strictly forbidden for quadrupole radiation.
Following the notation of the previous section we label
the bands by $i=0,1\pm,2$. The ratio of the transition probabilities
$B_{ij}$ only depends on a simple geometric factor
$B_{00} : B_{22} : B_{1+1+} : B_{1-1-} : B_{1-1+} = 1 : 1 : 1 :
\frac{L(2L+5)}{(2L+1)(L+2)} : \frac{2}{(2L+1)(L+2)}$.
For typical values of $L$ in superdeformed bands it is appropriate
to take the large $L$ limit for these ratios, giving
$1 : 1 : 1 : 1 : \frac{1}{L^2}$, which shows that for large $L$
all intraband transition probabilities are equal and that the
interband transition from $J_<=L+\frac{3}{2}$ to $J_>=L+\frac{1}{2}$
is down by $\frac{1}{L^2}$, a very big suppression.
The other interband transition from $J_>=L+\frac{1}{2}$ to
$J_<=L-\frac{1}{2}$ depends on the same matrix element as the
static quadrupole moments.
With the quadrupole moment $Q_0(L)$ in the $A$ nucleus as a reference
we find
\begin{eqnarray}
Q_1(L+{\textstyle \frac{1}{2}}) &=& Q_0(L) ~,
\nonumber\\
Q_1(L-{\textstyle \frac{1}{2}}) &=&
\frac{(L-1)(2L+3)}{L(2L+1)} \; Q_0(L) ~.
\end{eqnarray}
The ratio of the quadrupole moments of the states in the four
superdeformed bands is then
$Q_0(L) : Q_2(L) : Q_1(L+\frac{1}{2}) : Q_1(L-\frac{1}{2}) =
1 : 1 : 1 : \frac{(L-1)(2L+3)}{L(2L+1)}$, which shows that in the
large $L$ limit the quadrupole moments of the members of a quartet are
identical.
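These geometric factors are easy to evaluate at angular momenta typical of superdeformed bands; the following check (our own illustration) makes the quoted large-$L$ behavior explicit:

```python
L = 50  # typical core angular momentum in a superdeformed band

intraband = L * (2 * L + 5) / ((2 * L + 1) * (L + 2))   # B(E2) in the J_< band
interband = 2 / ((2 * L + 1) * (L + 2))                 # J_< -> J_> cross transition
quadrupole = (L - 1) * (2 * L + 3) / (L * (2 * L + 1))  # Q_1(L - 1/2) / Q_0(L)

print(intraband, interband, quadrupole)
# intraband and quadrupole ratios are within ~0.1% of 1,
# while the interband transition is suppressed roughly as 1/L^2
```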
The $E2$ transition between the two states in the odd nucleus
belonging to the same quartet is suppressed by $1/L^2$,
\begin{eqnarray}
B(E2;J_>=L+{\textstyle \frac{1}{2}} \rightarrow J_<=
L-{\textstyle \frac{1}{2}}) =
\frac{3}{(2L+1)(L+1)} \; B(E2;L \rightarrow L) ~.
\end{eqnarray}
The corresponding gamma ray energy for this transition is determined by
the spin-orbit splitting, $E_{\gamma}=\frac{1}{2} b(2L+1)$.
The present data on lifetimes and quadrupole moments for superdeformed
bands is still quite scarce
\cite{Hebbinghaus90,Deleplanque88,Moore90,Willsau92,Bentley87} and
it will be certainly interesting to check the above predictions when
the technology develops to the point of accurate measurements of
lifetimes.
We stress that our predictions do not depend on the details of
the dynamics, but only on the SSQM prediction that the four states have
nearly identical internal structures and therefore have the same
reduced
matrix elements for quadrupole transitions. They further depend on the
assumption
that the odd states are constructed by coupling a spin (or pseudo-spin)
$s=\frac{1}{2}$ to the orbital state.
The symmetry breaking considered in the previous section is diagonal
and does not affect the transition strengths.
In spite of the weak nature of all
these assumptions the connection of quartets across even and odd nuclei
makes interesting and verifiable predictions.
These predictions should hold to high accuracy, and should not be
confused
with the much weaker relations that can be derived in the
Bohr-Mottelson model
for nuclei with roughly equal deformation.
\section{Prospects and summary}
Supersymmetric quantum mechanics
relates states in odd and even neighboring nuclei. It
describes a quartet of states across nuclei with $A$ (even)
nucleons, $(A + 1)$ and $(A + 2)$ nucleons that are degenerate and
hence lead to four isospectral bands in the nuclei, one each in the
$A$ and $(A + 2)$ nuclei and two in the $(A + 1)$ nucleus.
Although, we pointed out \cite{ABCD} that other realizations
of SSQM are possible, it seems that the $A$, $(A + 1)$, $(A + 2)$
scheme is not only the most natural, but is actually realized in
nature.
In this paper we have shown that two examples
of such quartets appear to exist in the data although they have
seldom been regrouped to emphasize the SSQM connection.
It may very well be that these quartets of gamma ray sequences
in superdeformed nuclei are the best example of supersymmetry
so far discovered in physics.
We have tried to realize the breaking of supersymmetry in the simplest
possible way, in order to describe the experimental data. We have not
attempted
to derive the symmetry from an underlying microscopic Hamiltonian. Such
an approach would be complementary to our approach, but it seems highly
unlikely that current mean-field techniques can give the required
accuracy.
Our approach makes definite assumptions about the spins of the related
states and predicts equal transition strengths across the
quartets. New experiments with EUROBALL and GAMMASPHERE could be very
helpful in checking the spin assignments and the predictions for the
transition strengths, as well as in discovering possible new examples
of quartets of superdeformed bands. As we have already stated, there
are indications of such quartets in the $A=190$ mass region. One can
look
either for isotopic or isotonic quartets. The presently available
data \cite{sd} suggests two examples. The better of the two is an
isotopic $Z=80$ quartet
composed of the superdeformed band in $^{192}$Hg \cite{Ye}, two of
the four known bands ($^{193}$Hg(2) and $^{193}$Hg(3), \cite{Cullen})
in $^{193}$Hg and one of the three known bands ($^{194}$Hg(3),
\cite{Riley90,Beausang90,Stephens90}) in $^{194}$Hg. This is
displayed in Table III where the equality between the quantities
$\Delta E_{\gamma 01}$ and $\Delta E_{\gamma 12}$ holds within the
experimental
uncertainties. We obtain $\epsilon = -0.05 \pm 0.48 $ keV and
$b = 8.85\pm 2.35$ keV. Note also that by shifting the spin of the last
two bands up by two units, the pattern remains the same but the values
of the symmetry breaking parameters $\epsilon$ and $b$ are changed
to $9.3 \pm 0.42$ keV and $-9.85 \pm 2.19$ keV,
respectively.
Another possibility may be the isotonic $N = 112$
quartet made of $^{192}$Hg \cite{Ye}, $^{193}$Tl(a), $^{193}$Tl(b)
\cite{fernand} and $^{194}$Pb \cite{brinkman}
though, there, the difference between $\Delta
E_{\gamma 01}$ and $\Delta E_{\gamma 12}$ is beyond experimental errors
and cannot be accommodated by the simple SSQM breaking
scheme of Section III.
However the measured quadrupole moments
\cite{Moore90,Willsau92,Bentley87} for the superdeformed
bands in $^{192}$Hg and $^{194}$Pb agree within experimental errors.
The balance of boson and fermion degrees of freedom, which
was discussed in Section II and in \cite{ABCD}, implies that these two
examples which
involve the same band in the $^{192}$Hg nucleus should be incompatible
unless there are two (almost) degenerate bands in $^{192}$Hg or unless
the supersymmetry multiplet is otherwise enlarged. At the
present time only one superdeformed band has been observed in
$^{192}$Hg
\cite{Ye}.
Further investigations around mass $192$ would be very valuable
to clarify these points.
To summarize, our SSQM description is not cast in terms of microscopic
variables. Some of the dynamics comes in via the symmetry breaking.
The quartet multiplets are not perfectly isospectral. Most
of this symmetry breaking can be accounted for by a small (pseudo-)spin
orbit term and by some very simple $L$-dependent change in the moments
of inertia. These correction terms should begin to give some insight
into the connection between SSQM dynamics and the more conventional
dynamical avenues \cite{bunch,Stephens2,Stephens90}.
\acknowledgments
RDA, FC, and JPD would like to thank the Paul Scherrer Institute
and Milan Locher for
bringing us together and for providing a very pleasant environment
in which much of this work was done. RDA and NRW are
supported in part by the U.S. National Science Foundation and
RB by the Stichting voor Fundamenteel Onderzoek der Materie (FOM)
with financial support from the Nederlandse Organisatie voor
Wetenschappelijk Onderzoek (NWO).
The Division de Physique Th\'eorique is a Research Unit of the
Universities Paris 11 and 6 associated to CNRS.
\section{Introduction}
The COVID-19 epidemic has been causing global damage to practically all aspects of society since early 2020. Although a huge effort in many fields of science is being made to mitigate its effects, the disease keeps spreading and, in many regions, a second wave is causing great concern. The difficulties in controlling the epidemic are partly due to a crucial combination of factors: the disease is highly contagious \cite{Gao2020}, has a long incubation period \cite{Lee2020} during which contagion is possible a few days before symptom onset \cite{He2020}, produces mild or asymptomatic cases \cite{Gao2020}, and its diagnosis may take several days after first contact with the Health Care system. In particular, the latter implies that outbreaks spread and the epidemic evolves while laboratory results are being processed, an effect that is more pronounced in low- and middle-income countries because of operational and logistic problems, generally caused by technological and economic inequalities \cite{Amed2020,Verhagen2020}.
We present in this work a method to mitigate the effects of the epidemic by estimating the number of COVID-19 cases without having to wait for laboratory confirmations. This provides the Health Care system with a tool to react in advance and to evaluate current or future Public Health policies.
In mass accidents or major catastrophes, Early Warning Systems (EWS) play a key role in disaster mitigation \cite{Texier2017, Goniewicz2019, Kyriacos2014}, decreasing response times and improving their effectiveness. The main strategy of EWS in infectious disease surveillance is the incorporation of information produced close in time to the infection \cite{Ginsberg2009,Krause2007}. In this case, the onset of symptoms and its detection through individual and community health perception is the first detectable signal of cases and, in particular, of an outbreak. EWS based on syndromic surveillance have been applied in epidemiological surveillance for early outbreak identification and confirmation \cite{Lombardo2003, Pavlin2003, Katz2011, Stoto2004, Hope2006}. One of the main characteristics of EWS is the utilization of health information provided by the population in order to activate local alarms. Nowadays, with the wide use of cell phone applications and specific health-system phone lines, important databases with information for syndromic surveillance are generated each day \cite{Diwan2015}. Geo-location plays a key role in the spatial and temporal definition of the outbreaks detected by EWS \cite{Chen2011}.
In Buenos Aires Province (Argentina), the COVID-19 phone line 148 is one of the first contacts between a person who believes they may be infected and the Health Care System. The trained Health Care team receives and responds to people's questions, generating, simultaneously, a syndromic surveillance database. If the person has symptoms that could indicate a COVID-19 infection, they are instructed to follow the corresponding protocol. Importantly, this syndromic database was used as input for the estimation of cases and outbreak detection in Buenos Aires Province.
This work is organized as follows. In Section \ref{model} we describe the COVID-line data and present the details of the mathematical model to estimate the number of cases using the phone-call data. In Section \ref{tracking} we show how the model works in Buenos Aires Province and how it can be used to track the epidemic on-stream. In Section \ref{alarm} we present the Early Outbreak Alarm and show its details in the Villa Azul (Quilmes) case. We discuss the limitations and current improvements of the model in Section \ref{outlook}, and we present our conclusions in Section \ref{conclusions}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\linewidth]{figs/intro}
\caption[Information workflow]{COVID-line 148 workflow. As people call the COVID-line upon their health perception, the COVID-trained operators determine whether the call corresponds to a suspicious or close-contact case. In such a case, the record is passed to the Epidemiological Surveillance team and a COVID-19 swabbing is ordered. Some days later the swabbing lab result is added to the corresponding record. The algorithm described in this paper works with the first part of this information, which is delivered on-stream as soon as the operators determine that the case passes the corresponding threshold.}
\label{fig:intro}
\end{figure}
\section{Estimating on-stream COVID-19 cases through calls to a COVID-line}
\label{model}
We describe the mathematical model implemented to relate phone calls to a COVID-line to lab-confirmed cases per district per day. In the following paragraphs we outline the functioning of the 148 COVID-line and then we describe the details of the model.
\subsection{COVID-line 148 in Buenos Aires Province}
Buenos Aires Province (PBA for its acronym in Spanish) is the most populated province of Argentina, with more than 17 million inhabitants. Around 13 million people live in the Metropolitan area surrounding the City of Buenos Aires. Importantly, the remaining 4 million live in the vast, low-population-density area known as the interior of the province. This demographic heterogeneity leads to a hyper-centralized Health Care system. In order to attend to the growing demand for medical assistance caused by COVID-19, public health authorities implemented in February 2020 a COVID-specific phone line that is reached by dialing 148. The objective of this COVID-line is to address all community concerns related to COVID, including questions, doubts, symptom reports, and referral to the Health Care system, among others.
The COVID-line grew in personnel as the epidemic spread in PBA. The call volume grew from a few hundred per day in March up to approximately 20k per day at the end of August. Until late June the system was not overloaded and all calls requiring assistance were taken. We therefore consider that during this regime an indicator coming from this COVID-line would be relatively unbiased. This is especially true when compared to other indicators, such as testing or lab processing, which changed their behavior considerably as the epidemic spread during this period. We find that April 1st to June 26th is a period in which the COVID-line was relatively stable against major changes.
As people call the COVID-line 148, they enter an automatic voice menu in which one of the options corresponds to COVID-like symptoms. As users select this option, their call is taken by a COVID-trained operator and a short questionnaire on their experience indicates whether the call does not pass the threshold to be registered or corresponds to one of the two registered categories: close-contact and suspicious case. If the call corresponds to either of these categories, then the operator registers their data and in particular the district from which they are calling. We depict in Fig.~\ref{fig:intro} the workflow of the COVID-line. At the early stage in which the system was implemented, the record did not include trustworthy information on the exact address of the user. This crucial fact led us to develop the system we explain below by restricting our information on the user to only their district. Although future upgrades to the system are providing a more accurate location of the call, the current work is restricted to the caller's district, registered only once their call is taken by a COVID-trained operator.
\subsection{Mathematical Model to estimate cases from phone-calls to the 148 COVID-line}
We present the mathematical model to estimate the new infected using the phone call data, and apply it to PBA. The reasoning in this section follows the same lines as in Ref.~\cite{mild}, but with different purposes and different filtering in the data set.
We consider a data set of calls from many districts and during a given time range to a COVID-line. Each one of these calls can either be
\begin{equation}
\begin{array}{rl}
background\mbox{:}& \mbox{people with similar symptoms but not infected}\\
signal\mbox{:}& \mbox{people infected with COVID-19.}
\end{array}
\nonumber
\end{equation}
Under reasonable assumptions of homogeneity in space and time, we can model background calls in each district and time-window as proportional to the total district population and the time-window length, whereas signal calls are proportional to the total number of infected people in the district whose record is opened in the corresponding time-window, even though their lab confirmation may become available at a later time. Therefore, if we divide our data set into chunks corresponding in space to the districts in PBA, and in time to time-windows of $\Delta t^{(j)}$ days that can be arbitrarily chosen, we can pose the following equations for all the chunks labeled by $j$:
\begin{eqnarray}
n_c^{(j)} = \theta_p \, \Delta t^{(j)} \, N_p^{(j)} + \theta_I \, N_I^{(j)}.
\label{eq:fit}
\end{eqnarray}
Here $N_p^{(j)}$ is the population of the corresponding district and $N_I^{(j)}$ is the number of confirmed infected in the same district whose record was opened during the corresponding time-window. On the left-hand side, $n_c^{(j)}$ is the fit to the total number of calls, whereas $N_c^{(j)}$ (not in the equation) is the total number of actually placed calls. Observe, therefore, that this set of equations (one for each chunk $j$) can be extended depending on the chosen time-window length. Once this set of $j=1...k$ equations has been posed, we can fit the best values of the coefficients $\theta_{p,I}$ that minimize the square distance between $n_c^{(j)}$ and $N_c^{(j)}$. We stress that there are only two coefficients $(\theta_{p,I})$ that must fit all $k$ different equations, one for each chunk.
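As an illustration, this two-parameter least-squares fit can be sketched as below, with one design-matrix column per coefficient, $\Delta t^{(j)} N_p^{(j)}$ and $N_I^{(j)}$. The function and variable names are hypothetical, not part of the original implementation.

```python
import numpy as np

def fit_call_model(N_c, N_p, N_I, dt):
    """Fit theta_p, theta_I in  n_c = theta_p*dt*N_p + theta_I*N_I  by least squares.

    N_c, N_p, N_I, dt: arrays with one entry per chunk j.
    Returns the coefficients and their standard errors from the fit covariance.
    """
    X = np.column_stack([dt * N_p, N_I])          # design matrix, one row per chunk
    theta, _, _, _ = np.linalg.lstsq(X, N_c, rcond=None)
    # residual variance and parameter covariance (standard OLS formulas)
    dof = len(N_c) - 2
    sigma2 = np.sum((N_c - X @ theta) ** 2) / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return theta, np.sqrt(np.diag(cov))
```

With synthetic noiseless data generated from known coefficients, the fit recovers them exactly, which is a quick sanity check of the setup.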
This fit works better if all chunks correspond to periods in which the testing methods have not changed drastically, as can happen, for instance, if the number of daily tests is modified considerably, or if new symptoms are considered as a threshold for testing, among others. The reason for this requirement is to keep a coherent balance between the number of infected reported and the number of calls across all chunks at all times. With this objective, it is better to re-fit the parameters every time there are major changes in the testing and reporting methods.
Once the parameters $\theta_{p,I}$ in Eq.~\ref{eq:fit} have been fitted, including their uncertainty from the fit, we can estimate the number of new infected in a given chunk as
\begin{equation}
n_I^{(j)} = \frac{1}{\theta_I} \, \left( N_c^{(j)} - \theta_p\, \Delta t^{(j)} \, N_p^{(j)} \right) .
\label{prediction}
\end{equation}
Observe that the right-hand side requires data that is obtained on the same day, and therefore one can estimate the number of cases $n_I$ on-stream, without the need to wait for laboratory results. Notice that the algorithm allows one to estimate the total number of new cases in each chunk, but not to determine {\it which} of the calls correspond to the new cases. The uncertainty in the estimation $n_I^{(j)}$ is computed by applying the usual error-propagation formula to Eq.~\ref{prediction}. If variables are correlated, as for instance $\theta_p$ and $\theta_I$, one should take this into account; however, in our case we neglected this correlation in comparison to other terms. For the parameters $\theta_{p,I}$ we use the uncertainty coming from the fit, for $N_c$ we use the Poisson uncertainty, and for $N_p$ one should decide whether to add a systematic uncertainty or only use Poisson, as we did in this work. As discussed below, uncertainties in the estimations play a central role in the design of the Early Outbreak Alarm, and therefore should be handled with care, especially the systematic ones if present.
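For concreteness, the estimation and its propagated uncertainty can be sketched as follows, with hypothetical names: Poisson variances for $N_c$ and $N_p$, fit errors for $\theta_{p,I}$, and the correlation between $\theta_p$ and $\theta_I$ neglected, as described in the text.

```python
import numpy as np

def estimate_cases(N_c, N_p, dt, theta_p, theta_I, err_p, err_I):
    """On-stream case estimate and its uncertainty.

    Implements n_I = (N_c - theta_p*dt*N_p) / theta_I with first-order error
    propagation, neglecting the theta_p/theta_I correlation.
    """
    n_I = (N_c - theta_p * dt * N_p) / theta_I
    var = (N_c                              # Poisson variance of the calls
           + (theta_p * dt) ** 2 * N_p      # Poisson variance of the population term
           + (dt * N_p * err_p) ** 2        # fit uncertainty on theta_p
           ) / theta_I ** 2
    var += (n_I * err_I / theta_I) ** 2     # fit uncertainty on theta_I
    return n_I, np.sqrt(var)
```

For example, with the fitted values reported below ($\theta_p \approx 5.16\times10^{-6}$, $\theta_I \approx 0.69$), a district of 500k inhabitants placing 100 qualifying calls in one day would be estimated at roughly 141 new cases.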
In order to apply this algorithm in PBA we have used the data set of phone calls to the 148 COVID-line. We work with all the phone calls entering the COVID-line that reach the threshold for being a close-contact or suspicious case. The reason for this filtering is that the district the user is calling from is registered by the operator. Although the address is in principle also registered most of the time, in practice many ambiguities, misspelled words, and other unintended errors mean that it can be correctly reconstructed only about $\sim 50\%$-$70\%$ of the time. We consider the data set of calls between April 1st and June 26th, since after this the call center was overloaded and not all phone calls could be taken, yielding intractable biases. During this period we have fitted the data a few times on different data sets, obtaining fairly similar results, with the coefficient of determination always satisfying $R^2 > 0.85$. In particular, as cases were increasing, we obtained more accurate estimations for $\theta_I$, as can be expected.
In order to show the robustness of the hypotheses, we show how this model works with data from May 1st to June 26th, divided into two equal-length time-windows. We consider all districts in PBA whose number of calls in these chunks is greater than 100. After this filtering we are left with 43 chunks, i.e.~43 data points. After performing the fit indicated in Eq.~\ref{eq:fit} we obtain
\begin{eqnarray}
\theta_p &=& (5.16 \pm 1.59 ) \times 10^{-6} \mbox{ calls per inhabitant per day} \nonumber \\
\theta_I &=& 0.69 \pm 0.05 \mbox{ calls per infected person} \nonumber
\end{eqnarray}
It is worth noticing that the precise values of these fitted coefficients depend strongly on the call-filtering process and the call-system architecture. In particular, these values differ from those in Ref.~\cite{mild} because we are considering a different level of filtering to obtain the district of each user. The fit for this data set yields a coefficient of determination $R^2 = 0.91$, which indicates the robustness of the involved hypotheses. In Fig.~\ref{fit} we show the comparison between data and fit for the number of phone calls, as posed in Eq.~\ref{eq:fit}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7 \textwidth]{figs/fit_paper_2.png}
\end{center}
\caption{Placed versus fitted calls during the fitted period (May 1st to June 26th, divided into two time-windows). The number of fitted calls comes from the number of laboratory-confirmed COVID cases using the fit in Eq.~\ref{eq:fit}. The upper-right data point corresponds to La Matanza district, whose population of 1.7M inhabitants is at least three times larger than that of any other district.}
\label{fit}
\end{figure}
\section{Tracking epidemic through model estimations}
\label{tracking}
The mathematical model described in the previous section provides a framework to estimate, many days in advance, the number of lab-confirmed cases per day as a function of the spatio-temporal distribution of phone calls to the COVID-line. This is a compelling achievement because the phone-call information is available on-stream, whereas lab confirmation of cases may take from a few days up to a week after patients report their first symptoms. In this section we show how this system can be used to obtain an on-stream estimate of the epidemic evolution, along with real case results in PBA.
As this system was developed there was no time for validation. However, obtaining a very satisfactory $R^2\gtrsim 0.85$ in the fit was a signal that the model was so far working well. As months went by, we had the possibility of comparing, over a long-range time-window, the model estimation against the measured lab-confirmed number of cases per day per district. In Fig.~\ref{longrange} we show the comparison between the estimation and the late lab-confirmed cases per day for two example districts in PBA. Similar results are obtained for other districts. It is central to observe in this figure that the number of lab-confirmed cases (red line) is information that becomes available many days after the corresponding date, whereas the model estimation (blue) is available at the end of each day. As can be seen in the figure, the estimation is in good agreement with the real data. There are a few date ranges in which door-to-door swabbing through {\it DETECTAR} operatives \cite{detectar} induces an expected sub-estimation of cases.
This syndromic surveillance has been used to follow the size, spread, and tempo of outbreaks, to monitor disease trends, and to provide reassurance that a potential outbreak has not occurred. In particular, it has also been very useful for early outbreak detection, as we detail in the next section. Syndromic surveillance systems seek to use existing health data in real time to provide immediate analysis and feedback to those charged with the investigation and follow-up of potential outbreaks. In particular, the data collected from the COVID-line calls proved to be a valuable and reliable input to track the epidemic across PBA.
Tracking the epidemic through this model is especially useful when the capacity overload of the diagnostic centers leads to delays in obtaining results. For this reason, having a real-time and relatively unbiased estimation of cases gives the Public Health authorities the possibility of taking action in time \cite{Katz2011}. Furthermore, in a disaster scenario this tool plays a main role in prioritization when resources and time are limited, as occurred at the end of June in PBA. The calls-based syndromic surveillance allowed a rapid characterization of the different PBA districts in terms of their epidemiological status, and the consequent actions were taken in order to mitigate the epidemic effects.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.47 \textwidth]{figs/la_matanza_full-range.png}
~\includegraphics[width=0.47 \textwidth]{figs/lomas_de_zamora_full-range.png}
\end{center}
\caption{Comparison of real data (red) versus model estimation (blue) for two example districts of Buenos Aires Province (La Matanza and Lomas de Zamora). Recall that the red line, corresponding to the real confirmed cases whose record was opened on the corresponding date, is reconstructed many days later. The dates in which the red line goes above the estimation usually correspond to {\it DETECTAR} operatives (door-to-door testing \cite{detectar}) being carried out. In general, the model yields a very good estimation to monitor the epidemic in all affected PBA districts. }
\label{longrange}
\end{figure}
\section{Early Outbreak Alarm}
\label{alarm}
In this section we detail a compelling by-product of the model in Section \ref{model} to detect COVID-19 outbreaks considerably earlier than through lab confirmation. We briefly describe the workings of the model and then provide its details through the description of a real case that occurred in mid-May in Villa Azul (Quilmes) and Villa Itatí (Avellaneda) in PBA.
\subsection{Identifying an outbreak formation}
Provided the on-stream estimation of the cases per day in each district, we are interested in developing a statistical and automatic tool that can trigger an alarm when a potential outbreak is on the rise. Having an early alarm for this kind of epidemiological feature is a crucial tool to avoid its spread and drastic consequences.
To detect a potential outbreak there are many indicators that should be analyzed simultaneously. On the one hand, it is important to have an estimation of the daily absolute and relative number of cases; on the other hand, it is also important to have an estimation of the daily variation of these observables. To have an objective quantitative indicator of the potential of an outbreak in a given region, it is essential to have a correct assessment of the uncertainties in all the estimations of the model. In the system implemented as an Early Outbreak Alarm, we have considered that the important indicator is the {\it significance} of all the basic indicators, which signal an anomaly as they depart from zero. Here significance is defined as the distance from zero of the central value of the indicator, measured in units of its uncertainty. Or, in other words,
\begin{equation}
\mbox{significance} = \frac{\mbox{central value}}{\mbox{uncertainty}} .
\label{significance}
\end{equation}
As can be seen in Eq.~\ref{significance}, the correct computation of the uncertainties (or error bars) is crucial for the functioning of the Early Outbreak Alarm.
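The ranking used in the alarm, sorting districts by significance rather than by central value, can be sketched schematically as follows (the data structure and names are hypothetical, for illustration only):

```python
def rank_by_significance(estimates):
    """Sort districts by significance = central value / uncertainty.

    estimates: dict mapping district name -> (central value, uncertainty).
    Returns a list of (district, significance), most significant first.
    """
    sig = {d: (c / u if u > 0 else 0.0) for d, (c, u) in estimates.items()}
    return sorted(sig.items(), key=lambda kv: kv[1], reverse=True)
```

Note that a district with a large central value but a large error bar can rank below a district with a smaller but better-determined estimate, which is exactly the behavior shown in Figs.~\ref{quilmes} and \ref{villaazul}.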
The developed algorithm computes every day the estimation for the total number of new cases in each district in PBA. Since in the studied time-window, especially before June, the number of estimated cases per day in many districts was below $\sim 5-10$, we decided to include the estimation of cases for the last two days. This reduces the relative Poisson uncertainty due to small numbers. We computed the number of estimated cases in absolute value, and also relative to 100k inhabitants, so as to be equally sensitive to all districts.
A third and decisive observable that signals the level of danger of an outbreak is the daily increase of estimated cases. Given the daily estimation provided by the mathematical model, we can recognize a rapidly increasing curve in many ways. We chose to fit a straight line to the case estimation for the last three days and use the slope of this line as an estimation of the central value of the daily increase. We again use the significance as the most relevant indicator to decide the level of danger of each district. In this case, the computation of the error bar on the slope includes all the uncertainties in each day's estimation, which enter the fit uncertainty through the least-squares residuals. We use three days to fit a line because it is the minimum time needed to see a two-day consecutive increase, while still being well ahead of the laboratory results. In addition, three days is also a good time-window for the specific COVID-19 characteristics.
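A minimal sketch of this three-day slope indicator, assuming a weighted least-squares line fit with weights $1/\sigma$ (the function and variable names are hypothetical):

```python
import numpy as np

def daily_increase(cases, errors):
    """Slope (and its uncertainty) of a line fitted to the last three
    daily case estimates, each with its own error bar.

    cases, errors: sequences of the last three days' estimates and uncertainties.
    Returns (slope, slope_error); the significance is slope / slope_error.
    """
    x = np.arange(3.0)
    w = 1.0 / np.asarray(errors)              # polyfit weights are 1/sigma
    coef, cov = np.polyfit(x, np.asarray(cases), 1, w=w, cov='unscaled')
    return coef[0], np.sqrt(cov[0, 0])
```

The `cov='unscaled'` option is used so that the covariance is taken directly from the $1/\sigma$ weights rather than rescaled by the residuals, which is the appropriate choice when each point carries its own error bar and only three points are fitted.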
This Early Outbreak Alarm has provided the PBA Health Care administration with very important tools to identify possible outbreaks during the rise of the epidemic curve. Since the granularity of the algorithm is very coarse (districts), the system needs to be complemented with other independent indicators, in particular those which can help to provide a more accurate location of the outbreak. This was usually done by manually calling back the recorded cases, and then by sending {\it DETECTAR} operatives \cite{detectar} to verify whether the in-situ conditions matched the prediction. The Early Outbreak Alarm indicated many outbreaks that were controlled from mid-April to mid-June. In particular, we describe in the following paragraphs the very special\footnote{This case covered the headlines in the news for several weeks \cite{news}} case of Villa Azul (Quilmes) and provide the details on how the Early Outbreak Alarm indicated the Quilmes district.
\subsection{Case study: {\it Villa Azul, Quilmes}}
We report the details of one of the outbreaks indicated by the Early Outbreak Alarm in mid-May in the Quilmes district. This case was the first major outbreak in a low-income neighborhood in PBA and had a great impact in the news \cite{news}, not only for its magnitude but also because its early detection drove a strict lock-down and isolation of the outbreak to control its spread to the neighboring areas.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.47 \textwidth]{figs/quilmes_last_2_days_2020-05-20.png}
~\includegraphics[width=0.47 \textwidth]{figs/quilmes_last_2_days_100k_2020-05-20.png}
\end{center}
\caption{Cases-per-day estimation using the model on the COVID-line phone calls for the last two days. Observe that districts are not sorted by their central value, but by their significance, which is defined as the ratio between the central value and the uncertainty. This is why error bars are crucial to provide an Early Outbreak Alarm. Results are shown in absolute value (left) and relative to every 100k inhabitants (right). In the figure we show the scenario for May 20th, in which Quilmes is almost as large as La Matanza in absolute value with $\sim 1/3$ of its population, whereas Quilmes {\it is} in the top position when scaled to cases per 100k inhabitants.}
\label{quilmes}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.7 \textwidth]{figs/tendency_paper_early-alarm.png}
\end{center}
\caption{Slope of a linear fit to the cases-per-day estimation of the last three days. The error bar corresponds to including the uncertainty in each per-day estimation and in the slope determination in the fit. In this plot it is also crucial to sort the districts according to the significance of this variable. In the figure we see that on May 20th Quilmes is in the top position, indicating a potential early alarm for an outbreak, as was subsequently confirmed by other indicators a few days later.}
\label{villaazul}
\end{figure}
On May 20th the alarm was indicating a large number of estimated cases in the Quilmes district (see Fig.~\ref{quilmes}); in particular, Quilmes had the top estimation in number of cases per inhabitant over the last two days, as measured through the significance of the indicator. In addition, the indicator of the daily-increase fit was also pointing to Quilmes as the top district in significance (see Fig.~\ref{villaazul}). This last indicator, the fit to the daily increase over the last three days, can be visualized in the left panel of Fig.~\ref{villaazul2}, where we plot the daily estimation for the last 7 days for Quilmes. The fit is obtained using the last three data points, in red. Given all these indicators pointing to the Quilmes district, the surveillance team took on the duty of locating the phone calls and observed an excess coming from Villa Azul, a low-income neighborhood in Quilmes next to the Avellaneda district.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.47 \textwidth]{figs/quilmes_villa_azul.png}
~\includegraphics[width=0.47 \textwidth]{figs/quilmes_full-range.png}
\end{center}
\caption{Left: Exact Early Outbreak Alarm visualization on May 20th for Quilmes, as available to the Health Care team in Buenos Aires Province. Right: The posterior picture, including the lab-confirmed cases (solid red), the day the Health Care System arrived in Villa Azul to begin door-by-door testing, and a wider range of dates to capture the big picture of the case. The strict lock-down in Villa Azul, with no entering or leaving permitted, lasted from May 24th to June 8th. As can be seen in the plot, during the door-by-door testing the solid red line goes above, and uncorrelated with, the cases estimated from phone calls, as expected.}
\label{villaazul2}
\end{figure}
These observations, indicated in advance by the Early Outbreak Alarm, needed to be verified by an independent complementary indicator. On the following day a {\it DETECTAR} operative \cite{detectar} was sent to Villa Azul, where the acute situation was verified and door-to-door swabbing with urgent lab results started right away. As the first results confirmed the outbreak in Villa Azul, the PBA Administration decided on a strict lock-down and isolation for 14 days starting May 24th \cite{news}.
\subsection{Villa Azul epidemiological and operational description}
Villa Azul (Quilmes) and Villa Itatí (Avellaneda) are two adjacent low-income neighborhoods. The last demographic analysis indicates that Villa Azul has a population of 3,128 and Villa Itatí 15,142. High-density building and housing and tiny streets bring the population into close contact. These characteristics make these neighborhoods susceptible to a fast spread \cite{Finch2020}. Taking this into account, early outbreak detection poses a main challenge in these complex cases, where detection and blocking of propagation must be done when the first few cases are reported. In particular, the above-described early alarm for the outbreak that occurred in Villa Azul allowed a fast response of the Health Care System team to mitigate and control its propagation to Villa Itatí.
Once the strict lock-down and isolation were in place, water and food supplies were delivered by the social care team. People were not allowed to leave their houses during the entire isolation. Active surveillance health teams started door-to-door symptom monitoring. Those cases with clinical manifestations related to COVID-19 were tested. Confirmed cases were isolated inside their houses where this was possible (if there was an empty room, for example); where it was not, the people were sent to an out-of-hospital center.
\section{Outlook and scope}
\label{outlook}
The development of the mathematical model to estimate the number of COVID-19 cases was done in urgency, adapting it to the available data. There was no time to request changes in data acquisition or processing. Of course, the algorithm and the system can be improved in many directions. We discuss some of these features in the following paragraphs.
One of the major weaknesses of the algorithm is its large granularity, which corresponds to districts. District populations in the relevant area are on average 500k people. This means that the Early Outbreak Alarm stops working once the density of cases is such that there is more than a handful of outbreaks in each district. This happened in late June in PBA. In a future implementation we are carrying out a workaround for this issue by obtaining a trustworthy address from the COVID-trained operator that takes the call. A more stable solution would be to obtain this information from the telephone company; however, regulations often block this possibility.
On the other hand, the algorithm has a very important benefit, which is its unbiasedness. Given that the COVID-line works 24 hours a day, 7 days a week, and with a fairly uniform methodology at all times, the algorithm estimation does not rely on test availability or overloaded testing facilities, among others. Of course, the system does have slight biases that may come --for instance-- from different backgrounds due to different features of the districts, or from seasonal social behavior as months go by. Some of these biases may be solved by re-fitting the model once in a while, others by fitting different models in different regions.
Importantly, the algorithm provides information about the background calls, which vary in space and time. Further studies could be done in order to understand and extract properties of the background, such as its seasonality and its variations according to regions, public announcements, or news.
The crucial point of the mathematical model is that it recognizes anomalies due to collective behaviors. Therefore, we find that the mathematical model and the Early Outbreak Alarm algorithms can be useful for many other epidemiological diseases --as for instance Dengue--, and other events such as natural catastrophes, among others. We are currently working on the improvement of this system in many aspects, including Machine Learning algorithms, and these advancements will be published in a future work.
\section{Conclusions}
\label{conclusions}
We have created a syndromic surveillance algorithm based on the correlation between phone calls to a COVID-line, district populations, and reported cases. This algorithm works by understanding that phone calls to a COVID-line are partly due to non-infected people having similar symptoms (background) and partly due to infected people (signal). By observing that the background has to be proportional to the district population, whereas the signal is proportional to reported cases, we have fitted the model under this assumption. The coefficient of determination for Buenos Aires Province (PBA) is always $R^2>0.85$ for different samples, which indicates the robustness of our hypothesis. In addition, we have validated our model with real data.
Throughout the manuscript we have described the model, its estimations, and how we compute their error bars. We have also shown how the estimations, which are obtained on-stream, can be used to address Public Health policies without waiting for lab results, which require many more days to converge. The algorithm worked in PBA from April to June, since during this time the COVID-trained call center was not overloaded. Therefore the estimation was relatively unbiased.
We have shown how this estimation can be used to create an Early Outbreak Alarm. Furthermore, we described how the construction of indicators related to daily cases, and to the daily increase of cases, can indicate outbreaks in advance. The relevant statistical variable in this case is the significance, since it is a real measure of how far from zero the indicators are. Importantly, this system can detect an outbreak and, in particular, we exemplify its application in the outbreak detection in Villa Azul (Quilmes).
The limitations of the Early Outbreak Alarm were discussed in this paper, many of them due to the characteristics of the data available at the moment of its (urgent) development. We have pointed out many ways to improve its sensitivity and accuracy, on which we are currently working. This alarm would also be useful not only for other epidemiological diseases, but also for events that yield changes in collective behavior, such as a Dengue epidemic, natural catastrophes, or others.
The presented algorithm and mathematical model have been one of the main tools in the PBA Health Care system dashboard during the epidemic, and their current and upgraded versions are still being used to track the epidemic and detect outbreaks.
\section*{Acknowledgment}
We thank the fantastic work performed by the 148 COVID call center, in particular R.~Vaena, P.~Rispoli and L.~H.~Molinari for useful conversations. E.A.~and F.M.~thank Dr.~I.~Caridi for useful conversations. E.A.~thanks CONICET, UNSAM, CAF and EasytechGreen for financial and logistic support during this research.
The ability to fabricate monolayer graphene \cite{Novoselov666} led researchers to
study quantum Hall effect plateaus, which are explained in terms of Dirac-like chiral quasiparticles
with Berry phase $\pi$. Consequently, bilayer graphene (BLG) has been considered one of the most interesting structures to study \cite{MCCANN2007110, Nzar.25.465302}.
One of the interesting features of BLG is the possibility to locally induce a bandgap and tune its magnitude by applying a strong electric field perpendicular to the carbon nanosheets \cite{Kanayama2015, 10.3389/fphy.2014.00003, PhysRevB.95.195307, ABDULLAH2018223}. This property of BLG can be used to design next-generation transistors that would work faster and use less energy \cite{7292288, 7736978}.
The physical properties of BLG, such as its bandgap, can also be tuned by doping with foreign atoms
\cite{doi:10.1021/nn202463g, ABDULLAH2020126807}. Several foreign atoms have been used to improve the physical properties of monolayer and bilayer graphene, such as Boron/Nitrogen (B/N) atoms \cite{NEMNES2018175, HAN2015618,ABDULLAH2020126350, ABDULLAH2020126807}, cesium (Cs) \cite{doi:10.1021/acsnano.9b08622, abdullah2019thermoelectric}, Silicon (Si) atoms \cite{DENIS2010251, ABDULLAH2016280}, and transition metals \cite{C6CP01841F}. BLG with B/N doping exhibits semiconducting behavior with a small bandgap,
where the Fermi energy is located in the bandgap \cite{doi:10.1002/adma.200901285, ABDULLAH2020103282}, which can be used
to study photocatalysis \cite{doi:10.1063/1.4950993}. The bandgap tuning of Si-doped BLG has been reported, showing that the Si atoms induce a small bandgap due to the similar valence electrons of Si and C atoms \cite{DENIS2010251, gudmundsson2019coexisting}.
Sandwiching graphene with Cs induces
thermodynamically stable flat-band materials \cite{Hase2018}. For transition-metal (TM)-doped BLG, the electronic and magnetic properties as a function of strain have also been analyzed, indicating that strain is important for the stability of high-coverage TM-intercalated BLG \cite{C6CP01841F,TANG20171529, en12061082, ABDULLAH20181432}. Tuning the bandgap of BLG can thus improve the electrical, thermal, and optical conductivities and modulate the mechanical properties \cite{McCann_2013}.
Si doping of monolayer graphene has been investigated by several research groups.
The Si atoms influence mechanical properties such as the Young's modulus of monolayer graphene, as well as its
optical \cite{Houmad2015} and thermal \cite{C5NR06345K, RASHID2019102625, Abdullah2019, Fatah_2016} characteristics.
However, the electronic, mechanical, thermal, and optical properties of Si-doped BLG have not
been systematically investigated.
In this work, we consider Si-doped AA-stacked BLG represented by SiC$_{15}$ and Si$_{2}$C$_{14}$ structures, depending on the Si concentration. The electronic, mechanical, optical and thermal characteristics are investigated using density functional theory. A comparison of different Si-atom configurations in BLG is presented, with detailed mechanisms of how the Si impurity atoms modify the BLG structures \cite{abdullah2019manifestation, doi:10.1063/1.4904907, TANG20173960, ABDULLAH2020114221}.
In \sec{Sec:Model} the structure of BLG is briefly over-viewed. In \sec{Sec:Results} the main achieved results are analyzed. In \sec{Sec:Conclusion} the conclusion of the results is presented.
\section{Computational details}~\label{Sec:Model}
The calculations in this study are performed using DFT within the generalized gradient approximation (GGA) \cite{PhysRevLett.77.3865} as implemented in the Quantum Espresso (QE) package \cite{Giannozzi_2009}.
The calculations are carried out on a $2\times2$ bilayer supercell with a plane-wave basis set and Perdew-Burke-Ernzerhof (PBE) pseudopotentials \cite{PhysRevB.23.5048} with a cutoff energy of $1088.45$~eV. Within GGA-PBE, the van der Waals interactions between the layers of the system are essentially neglected.
The structure relaxation is performed on a $12\times12\times1$ $k$-point grid until
the force on each atom is less than $10^{-6}$ eV/$\text{\normalfont\AA}$.
For the Brillouin zone sampling and the calculations of the density of state (DOS), a $12\times12\times1$ and a $77\times77\times1$ grids are used, respectively.
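For orientation, a minimal pw.x input fragment consistent with these settings might look as follows. This is an illustrative sketch, not the input used in this work: the pseudopotential file names are placeholders, the ATOMIC_POSITIONS and CELL_PARAMETERS cards are omitted, and $1088.45$~eV corresponds to $80$~Ry.

```
&CONTROL
  calculation = 'relax'
  pseudo_dir  = './pseudo'
/
&SYSTEM
  ibrav = 0, nat = 16, ntyp = 2   ! 2x2 bilayer supercell: 16 atoms
  ecutwfc = 80.0                  ! Ry; equals the quoted 1088.45 eV cutoff
/
&ELECTRONS
  conv_thr = 1.0d-8
/
&IONS
/
ATOMIC_SPECIES
  C   12.011  C.pbe.UPF           ! placeholder pseudopotential file
  Si  28.086  Si.pbe.UPF          ! placeholder pseudopotential file
K_POINTS automatic
  12 12 1 0 0 0
```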
XCrySDen, a crystalline and molecular structure visualization program,
is utilized to visualize our structures \cite{KOKALJ1999176, Nzar.25.465302, Nzar_ChinesePhysicsB2016}.
Furthermore, the thermoelectric properties of the systems are investigated using the Boltzmann transport software package (BoltzTraP)~\cite{madsen2006boltztrap-2}.
The BoltzTraP code uses a mesh of band energies and has an interface to the QE package \cite{ABDULLAH2020126578, ABDULLAH2020113996}. The optical properties of the systems are obtained
with the QE code using a broadening of $0.5$~eV.
\section{Results}~\label{Sec:Results}
In this section, we show the results of the structural stability and the electronic, mechanical, thermal, and optical properties of the Si-BLG with different concentrations and configurations of Si atoms in a
$2\times2$ bilayer supercell \cite{ABDULLAH2019102686}.
\subsection{Model}
In \fig{fig01} the pristine BLG (a) and Si-BLG (b-d) structures are presented.
We first consider one Si atom doped in the top layer with a doping
concentration of $6.25\%$, identified as SiC$_{15}$ (b). The vertical gray lines indicate
the border of the $2\times2$ supercell \cite{JONSSON201781}.
In addition, we also consider two Si atoms doped in BLG with a doping concentration of $12.5\%$ in two configurations: first, both Si atoms doped in the top layer at the para-positions, identified as Si$_2$C$_{14}$-I (c); second, one Si atom doped at the para-position of the top layer and the other at the meta-position of the bottom layer, called Si$_2$C$_{14}$-II (d).
We avoid the formation of Si-Si bonds in the Si-BLG, which substantially destabilize the hexagonal system \cite{Li2014}.
The interlayer distance, lattice constant, and C-C bond length of pristine BLG are found to be $4.64$, $2.46$, and $1.42$~$\text{\normalfont\AA}$, respectively, as expected for
GGA-PBE calculations. If instead the van der Waals interactions are included in the exchange-correlation (XC)
functional at the LDA level, the interlayer distance becomes $3.6$~$\text{\normalfont\AA}$, which is close to experimental work \cite{ABDULLAH2020100740, doi:10.1063/1.2975333, PhysRevB.82.195325, nzar_2019_Annalen}.
The interlayer distance of all three Si-BLG structures remains $4.64$~$\text{\normalfont\AA}$, almost unchanged, indicating no interlayer interaction energy due to the Si atoms.
This is in contrast to the interlayer repulsive interaction between the Si-Si atoms in BLG recently observed when the LDA exchange correlation is assumed \cite{abdullah2020interlayer, doi:10.1021/acsphotonics.5b00115}.
Another point is that the Si atom in the top layer of SiC$_{15}$ is displaced toward the bottom layer,
$0.7$~$\text{\normalfont\AA}$ out of the top-layer plane. This deviation of Si atoms from the graphene surface has been
reported before with the same displacement, $0.7$~$\text{\normalfont\AA}$~\cite{DENIS2010251,abdullah2020properties}.
The Si shifting forms dangling Si-C bonds in which the Si atom tends to adopt sp$^3$ hybridization, as in silicene, where a buckled pattern forms.
It thus increases the C-Si bond length to $1.71$~$\text{\normalfont\AA}$ and slightly increases the average C-C bond length to $1.45$~$\text{\normalfont\AA}$.
This shifting of the Si atom influences the physical properties of the system, as will be shown later.
In the Si$_2$C$_{14}$-I, the average C-C bond length in the top(bottom) layer is $1.37$($1.53$)~$\text{\normalfont\AA}$, and the C-Si bond length is $1.69$~$\text{\normalfont\AA}$.
In the case of Si$_2$C$_{14}$-II, the average C-C bond length in the top(bottom) layer is $1.47$($1.5$)~$\text{\normalfont\AA}$, and the C-Si bond length is $1.68$~$\text{\normalfont\AA}$. It can be clearly seen that the position of the Si atoms in the hexagonal structure of graphene influences the bond lengths and thus the physical properties of the systems \cite{https://doi.org/10.1002/andp.201600177, Rauf_Abdullah_2016, https://doi.org/10.1002/andp.201700334}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Fig01.pdf}
\caption{AA-stacked pristine BLG (a), SiC$_{15}$ (b), Si$_2$C$_{14}$-I (c), and Si$_2$C$_{14}$-II (d). The C and Si atoms are blue and green colors, respectively.}
\label{fig01}
\end{figure}
\subsection{Mechanical response}
The modifications of bond lengths and the Si-BLG structures affect the mechanical properties of the system. The stress-strain curves for pristine BLG (gray), SiC$_{15}$ (green), Si$_2$C$_{14}$-I (blue), and Si$_2$C$_{14}$-II (red) in the zigzag (a) and armchair (b) directions are shown in \fig{fig02}.
In our 2D supercells, the two basis vectors are perpendicular to each other; we therefore apply
uniaxial strain directly along the basis-vector directions.
The tension simulation is carried out by imposing an in-plane tensile strain at a rate of $0.02$
per step along the uniaxial zigzag or armchair direction.
The stress-strain curves of pristine BLG are consistent with those in the literature for both the zigzag and armchair directions, demonstrating the reliability of our calculations \cite{doi:10.1063/1.4789594}. As in previous investigations, pristine BLG obeys a linear stress-strain relationship within a small strain range, up to $5\%$. The linear elastic region gives a Young's modulus of $974$~GPa, which is very close to the experimental value \cite{Lee385}. In addition, the ultimate strength of pristine BLG is $99.64$~GPa (zigzag) and $96.22$~GPa (armchair), at strains of $0.151$ and $0.126$, respectively. Since pristine BLG does not stretch further beyond the ultimate
point, the ultimate strain coincides with the fracture strain.
This slight anisotropy of the ultimate stress between the zigzag and armchair directions has also been observed for pristine monolayer graphene~\cite{Lu_2018}.
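Since the Young's modulus is extracted as the slope of the linear elastic region (strains up to about $5\%$), the fit itself is simple. A minimal sketch, with synthetic data standing in for the computed stress-strain points (the actual values come from the DFT tension simulations):

```python
import numpy as np

def youngs_modulus(strain, stress, elastic_limit=0.05):
    """Fit the linear elastic region of a stress-strain curve.

    strain: dimensionless; stress: GPa. Returns the slope (Young's
    modulus) in GPa, using only points with strain <= elastic_limit.
    """
    mask = strain <= elastic_limit
    slope, _ = np.polyfit(strain[mask], stress[mask], 1)
    return slope

# Synthetic curve: linear (slope 974 GPa) up to 5% strain, then softening.
strain = np.linspace(0.0, 0.15, 31)
stress = np.where(strain <= 0.05, 974.0 * strain,
                  974.0 * 0.05 + 300.0 * (strain - 0.05))
E = youngs_modulus(strain, stress)
```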
\begin{figure}[htb]
\centering
\includegraphics[width=0.35\textwidth]{Fig02.pdf}
\caption{Stress-strain curves for pristine BLG (gray), SiC$_{15}$ (green), Si$_2$C$_{14}$-I (blue), and Si$_2$C$_{14}$-II (red) in the zigzag (a) and armchair (b) directions.}
\label{fig02}
\end{figure}
In the Si-BLG structures, it is interesting that both the linear elastic region and the ultimate strength of SiC$_{15}$ are reduced in the zigzag and armchair directions. The reduction is much stronger in the zigzag direction (see \fig{fig02}(inset)). The ultimate strength is $3.8$~GPa at a strain of $0.4\%$ in the zigzag direction, and $26.58$~GPa at a strain of $4.3\%$ in the armchair direction.
This is attributed to the dangling Si-C bonds, in which the Si atom tends to adopt sp$^3$ hybridization. In addition, the out-of-plane displacement of the Si atom introduces strongly unbalanced bond strengths, which distort the perfect hexagonal rings of the $2\times2$ supercell honeycomb.
Consequently, the pre-elongated C-C and Si-C bonds of SiC$_{15}$ break early under the uniaxial tension.
One can also clearly see that the stress-strain curves of Si$_2$C$_{14}$-I and Si$_2$C$_{14}$-II in the zigzag and armchair directions are insensitive to the doping configuration:
they are the same because the number of Si-C bonds is equal in both structures despite the different configurations. The Young's modulus and the tensile or ultimate strength are found to be $732$($728$)~GPa and $50.22$($36.72$)~GPa in the zigzag(armchair) direction, respectively.
The reduction of the stress-strain curves for Si$_2$C$_{14}$-I and Si$_2$C$_{14}$-II is attributed to the presence of more Si-C bonds in these structures.
In addition, the ultimate strength and fracture strain in the zigzag direction are higher than in the armchair direction, owing to the larger number of Si-C bonds along the zigzag direction. In Si-C bonding, the charge distribution of the Si atom is more easily reshaped than that of the C atom, as Si has a much lower electronegativity. This makes the Si-C bonds considerably more tolerant of stretching.
\subsection{Band energy and DOS}
The electronic band structures and the DOS of pristine BLG (a) and Si-BLG (b-d) are shown in \fig{fig03} and \fig{fig04}, respectively. The linear dispersion
of the first valence band, $\pi$, and the conduction band, $\pi^*$, of pristine BLG is found, with
zero bandgap and vanishing DOS at the Fermi level. The energy spacing between $\pi_2\text{-}\pi_1$($\pi^*_2\text{-}\pi^*_1$) is almost $0.11$~eV \cite{GUDMUNDSSON20181672}.
The linear dispersion of the pristine BLG band structure around the Fermi level shows the semimetallic nature, in which the valence band maximum ($\pi$ band) and the conduction band minimum ($\pi^*$ band) touch each other only at the high-symmetry K point.
The Hamiltonian that defines the electronic structure of pristine BLG around the Dirac point can be given as \cite{aliofkhazraei2016graphene}
\begin{equation}
\hat{H} =
\begin{pmatrix}
\Delta & \hbar v_F(k_x-i k_y) \\
\hbar v_F(k_x+i k_y) & -\Delta
\end{pmatrix}
\end{equation}
where $\Delta$, $v_F$, and $k$ denote the onsite energy difference between the C atoms located at the A and B sites, the Fermi velocity, and the momentum of the charge carriers, respectively.
The linear dispersion reflects the vanishing onsite energy difference, $\Delta = 0$, in pristine BLG, arising from its inversion symmetry.
The zero onsite energy difference means that the potentials
seen by the C atoms at sites A and B are the same. Consequently, no energy
gap opens in monolayer or bilayer graphene.
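Diagonalizing the $2\times2$ Hamiltonian above gives $E_\pm(k) = \pm\sqrt{\Delta^2 + (\hbar v_F |k|)^2}$, so the gap at the Dirac point is $2\Delta$. The sketch below checks this numerically; the value $\hbar v_F \approx 5.9$~eV\,\AA\ is illustrative, and choosing $\Delta = 0.335$~eV reproduces a gap of $0.67$~eV, the size found for Si$_2$C$_{14}$-II.

```python
import numpy as np

HBAR_VF = 5.9  # eV*Angstrom; illustrative value for graphene

def dirac_bands(kx, ky, delta):
    """Eigenvalues of the 2x2 Dirac Hamiltonian around the K point.

    delta is the onsite energy difference (eV); k in 1/Angstrom.
    Returns the two band energies sorted ascending."""
    H = np.array([[delta, HBAR_VF * (kx - 1j * ky)],
                  [HBAR_VF * (kx + 1j * ky), -delta]])
    return np.linalg.eigvalsh(H)

# delta = 0 (inversion symmetry): bands touch at the Dirac point.
# delta = 0.335 eV: a gap of 2*delta = 0.67 eV opens.
em, ep = dirac_bands(0.0, 0.0, 0.335)
gap = ep - em
```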
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Fig03.pdf}
\caption{Electronic band structure of pristine BLG (a), SiC$_{15}$ (b), Si$_2$C$_{14}$-I (c), and Si$_2$C$_{14}$-II (d).}
\label{fig03}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=0.42\textwidth]{Fig04.pdf}
\caption{Density of state of pristine BLG (a), SiC$_{15}$ (b), Si$_2$C$_{14}$-I (c), and Si$_2$C$_{14}$-II (d).}
\label{fig04}
\end{figure}
In the Si-BLG, the overall energy spacing between $\pi$ and $\pi^*$ at the M-point, which reflects the interlayer interaction, is not much changed because the GGA-PBE functional is used, in which the interlayer interaction is neglected. However, a strong modification of $\pi$ and $\pi^*$ at the $\Gamma$-point is seen, influencing the optical transitions \cite{ABDULLAH2020113996,ABDULLAH2019102686, https://doi.org/10.1002/andp.201500298}.
The bandgaps of SiC$_{15}$ and Si$_2$C$_{14}$-II are found to be $0.11$ and $0.67$~eV, respectively, indicating semiconducting materials. The small bandgap of Si-doped graphene may be attributed to the Si and C atoms having the same number of valence electrons.
In general, a bandgap opens near the K point due to the breaking of
the inversion symmetry by the distortion generated by the Si atom configuration.
The reason for the broken symmetry is that the potential seen by the atoms at sites A and B is now different, leading to a finite value of onsite energy $\Delta \neq 0$, where $\Delta = \alpha (V_{\rm A} - V_{\rm B})$ with $\alpha$ being a constant value and V$_{\rm A}$(V$_{\rm B}$) is the potential seen by an atom at the site A(B)~\cite{Rahman_2014}.
Furthermore, a small overlap of the valence and conduction bands of Si$_2$C$_{14}$-I indicates metallic behavior of this structure. These modifications of the electronic band structure and the induced bandgap will effectively change the thermal properties of the systems.
\subsection{Thermal response}
The thermal properties, namely the Seebeck coefficient (a), figure of merit (b), electronic thermal
conductivity (c), and specific heat (d), at temperature $T = 100$~K are shown in \fig{fig05} for the pristine BLG and Si-BLG structures \cite{doi:10.1021/acsphotonics.5b00532, ABDULLAH2018102, Abdullah_2018}. We are interested in the thermal behavior of the systems at low temperature; it has been reported that the electron and phonon thermal behaviors are decoupled in the low-temperature range from 20 to 160~K \cite{PhysRevB.87.241411}. We therefore present the electronic part of the thermal properties.
\begin{figure}[htb]
\centering
\includegraphics[width=0.48\textwidth]{Fig05.pdf}
\caption{Seebeck coefficient, $S$, (a), Figure of merit, ZT, (b), electronic thermal conductivity, $\kappa_e$, (c), and specific heat (d) at $T = 100$~K for pristine BLG (gray), SiC$_{15}$ (green), Si$_2$C$_{14}$-I (blue), and Si$_2$C$_{14}$-II (red).}
\label{fig05}
\end{figure}
The $S$ and the $ZT$ of pristine BLG and Si$_2$C$_{14}$-I are very small due to the zero bandgap and the overlap of the valence and conduction bands, respectively.
This is attributed to the cancellation effect of the electron and hole contributions to the transport quantities.
Accordingly, the highest thermal conductivity and specific heat are found around the Fermi energy for these two structures.
In contrast, the opening up of the bandgap in SiC$_{15}$ and Si$_2$C$_{14}$-II maximizes $S$ and $ZT$ and minimizes the thermal conductivity and specific heat around the Fermi energy. Therefore, the maximum $S$ and $ZT$ are found for Si$_2$C$_{14}$-II, as it has the largest bandgap among these structures.
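The link between the gap, $S$, and $ZT$ can be illustrated with the defining relation $ZT = S^2\sigma T/\kappa$. The numbers below are purely illustrative (they are not taken from the BoltzTraP output) and only show why a gapped structure with a large Seebeck coefficient dominates the figure of merit:

```python
def figure_of_merit(S, sigma, kappa, T):
    """Dimensionless thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.

    S in V/K, sigma in S/m, kappa in W/(m*K), T in K.
    """
    return S**2 * sigma * T / kappa

# Illustrative values at T = 100 K: a semiconducting (gapped) structure
# with S ~ 200 uV/K versus a metallic one with S ~ 5 uV/K.
zt_gapped = figure_of_merit(200e-6, 1e5, 2.0, 100.0)
zt_metal = figure_of_merit(5e-6, 1e6, 20.0, 100.0)
```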
\subsection{Optical response}
The modified electronic band structures and DOS due to the Si dopants are expected to tune the optical response of BLG.
The optical response of the Si-BLG under applied in-plane (left panel) and out-of-plane (right panel) electric fields is shown in \fig{fig06}, where the imaginary part of the dielectric function
(a-b), the absorption coefficient (c-d), and the reflectivity, $R$, (e-f) of pristine BLG and Si-BLG are presented.
In the case of pristine BLG, two main peaks in the imaginary part of the dielectric function are observed
when an in-plane electric field, E$_{\rm in}$, is applied: at $3.95$~eV, corresponding to the $\pi$ to $\pi^*$ transition, and at $13.87$~eV, generated by the $\sigma$ to $\sigma^*$ transition.
For an out-of-plane applied electric field, E$_{\rm out}$, the two main peaks are formed by the $\sigma$ to $\pi^*$ transition at $11.22$~eV and the $\pi$ to $\sigma^*$ transition at $14.26$~eV.
These transitions are in a good agreement with literature~\cite{NATH2015691,MOHAN20121670, doi:10.1002/andp.201900306}.
The absorption coefficient is related to the imaginary part of the dielectric function. The absorption coefficient of pristine BLG shows two clear peaks,
at $4.41$ and $14.35$~eV for E$_{\rm in}$, and two peaks at $11.78$ and $14.67$~eV for E$_{\rm out}$. In addition, two peaks in the reflectivity of pristine BLG are found at $4.32$ and $15.55$~eV for E$_{\rm in}$, with almost the same intensity, and two peaks at $11.22$ and $15.31$~eV for E$_{\rm out}$, with different intensities.
It should be mentioned that the peak positions of pristine BLG depend strongly on the interlayer distance \cite{NATH2015691, PhysicaE.64.254}. The peak intensities of $\varepsilon_2$, $\alpha$, and $R$ in both the E$_{\rm in}$ and E$_{\rm out}$ directions are quite similar to previous studies for interlayer distances close to the $4.64$~$\text{\normalfont\AA}$ considered in our work.
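The relation between $\alpha$ and the dielectric function follows from $\tilde{n} = \sqrt{\varepsilon_1 + i\varepsilon_2}$ and $\alpha = 2\omega k/c$, giving $\alpha(\omega) = \sqrt{2}\,(\omega/c)\,[\sqrt{\varepsilon_1^2+\varepsilon_2^2}-\varepsilon_1]^{1/2}$. A minimal sketch (photon energy in eV, $\alpha$ in cm$^{-1}$; the dielectric values below are illustrative, not the computed spectra):

```python
import numpy as np

HBAR_EVS = 6.582119569e-16  # hbar in eV*s
C_CM = 2.99792458e10        # speed of light in cm/s

def absorption(omega_eV, eps1, eps2):
    """Absorption coefficient from the complex dielectric function:
    alpha = sqrt(2) * (omega/c) * sqrt(|eps| - eps1), in cm^-1."""
    omega = np.asarray(omega_eV) / HBAR_EVS  # photon energy -> rad/s
    mod = np.sqrt(np.asarray(eps1)**2 + np.asarray(eps2)**2)
    return np.sqrt(2.0) * omega / C_CM * np.sqrt(mod - eps1)

# A lossless medium (eps2 = 0, eps1 > 0) gives zero absorption,
# while a finite eps2 gives alpha > 0.
a_lossless = absorption(4.41, 4.0, 0.0)
a_lossy = absorption(4.41, 4.0, 2.0)
```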
It is also seen that the optical response of pristine BLG becomes insignificant beyond an
energy, or optical frequency, of $\approx 25$~eV, irrespective of the polarization \cite{Abdullah_2014}.
The optical response of Si-BLG is significantly modulated because the band structure and the DOS are changed. The frequency-dependent variation of $\varepsilon_2$ for SiC$_{15}$ and Si$_2$C$_{14}$-II is enhanced for both directions of the applied electric field, especially in the low-energy range.
This is attributed to the opening up of the bandgap in these two structures. A new peak in $\varepsilon_2$ at $0.9$~eV for Si$_2$C$_{14}$-II is observed in the E$_{\rm in}$ direction, reflecting the relatively larger bandgap. Another peak in $\varepsilon_2$, at $1.75$~eV for SiC$_{15}$ in the E$_{\rm out}$ direction, is due to the decreased energy spacing between the $\sigma$ and $\pi$ bands at the $\Gamma$-point, allowing transitions between these two bands at lower energy.
In addition, a red shift of the peak positions of these two structures occurs in both directions of the applied electric field, again arising from the opening of the bandgap and the significant changes in the DOS.
Consequently, a red shift of the absorption spectra and the reflectivity is also observed.
\begin{figure}[htb]
\centering
\includegraphics[width=0.45\textwidth]{Fig06.pdf}
\caption{Imaginary part of dielectric function, $\varepsilon_2$,
(a-b), absorption coefficient, $\alpha$, (c-d), and reflectivity (e-f) of pristine BLG and Si-BLG for in-plane (left panel) and out-of-plane (right panel) applied electric fields.}
\label{fig06}
\end{figure}
In contrast, the optical response of Si$_2$C$_{14}$-I is reduced. For instance, the imaginary part of the dielectric function and the absorption coefficient are significantly reduced. This reduction may be attributed to the overlap of the valence band maximum and the conduction band minimum at the $K$- and $\Gamma$-points.
As a result, the reflectivity is enhanced in both directions of the applied electric field.
It should be noted that the peak positions are almost unchanged here. Controlling the optical response with dopant atoms may be of interest for opto-electronic devices.
\section{Conclusion}~\label{Sec:Conclusion}
To conclude, the first-principles DFT technique is employed to compute
the structural, electronic, mechanical, thermal, and optical properties of Si-BLG.
It is observed that by tuning the concentration and configuration of the Si dopant atoms, the physical properties of the structures can be modulated.
At a low ratio of Si doping in BLG, a modification of the s- and p-orbital hybridization is seen, leading to fracture at low strain; these structures show a weak mechanical response. Furthermore, the small bandgap induced at low Si doping gives rise to intermediate values of the Seebeck coefficient, figure of merit, and absorption coefficient.
Increasing the ratio of Si dopant atoms, with the Si atoms doped in both layers of the BLG, a stronger mechanical response of the Si-BLG is obtained, in which fracture occurs at high strain. In addition, the larger bandgap due to the higher Si doping enhances the thermal and optical responses of the system. If the Si dopant atoms are doped in only one layer, the valence and conduction bands overlap, resulting in weak thermal and optical responses.
Our present theoretical work might help in understanding the thermal and optical properties of nano-
structured materials and in designing nano-optoelectronic devices involving graphene nanostructures.
\section{Acknowledgment}
This work was financially supported by the University of Sulaimani and
the Research center of Komar University of Science and Technology.
The computations were performed on resources provided by the Division of Computational
Nanoscience at the University of Sulaimani.
\scriptsize
1,116,691,498,260 | arxiv | \section{\@startsection {section}{1}{\zeta@}%
{-5.5ex \@plus -1ex \@minus -.2ex
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\bfseries}}
\newcommand\subsub[1]{
\bigskip
\noindent{\underline{\it #1}}
\smallskip}
\numberwithin{equation}{section}
\makeatother
\newcommand{\begin{eqnarray}}{\begin{eqnarray}}
\newcommand{\end{eqnarray}}{\end{eqnarray}}
\newcommand{\begin{equation}}{\begin{equation}}
\newcommand{\end{equation}}{\end{equation}}
\DeclareMathOperator{\arccosh}{arcCosh}
\newcommand{\eq}[1]{\begin{align}#1\end{align}}
\newcommand{\eqst}[1]{\begin{align*}#1\end{align*}}
\newcommand{\eqsp}[1]{\begin{equation}\begin{split}#1\end{split}\end{equation}}
\newcommand{\partial}{\partial}
\newcommand{\overline{\partial}}{\overline{\partial}}
\newcommand{\overline{z}}{\overline{z}}
\newcommand{\overline{w}}{\overline{w}}
\newcommand{\overline{\varphi}}{\overline{\varphi}}
\newcommand{\frac{1}{2}}{\frac{1}{2}}
\newcommand{\frac{1}{4}}{\frac{1}{4}}
\newcommand{{\mathbb Z}}{{\mathbb Z}}
\newcommand{{\mathbb R}}{{\mathbb R}}
\newcommand{{\mathbb C}}{{\mathbb C}}
\newcommand{{\mathbb A}}{{\mathbb A}}
\newcommand{{\mathbb N}}{{\mathbb N}}
\newcommand{{\mathbb H}}{{\mathbb H}}
\renewcommand{\P}{{\mathbb P}}
\newcommand{{\cal M}}{{\cal M}}
\newcommand{{\mathbb Q}}{{\mathbb Q}}
\newcommand{\widetilde{X}}{\widetilde{X}}
\newcommand{\Omega}{\Omega}
\newcommand{{\mathbb J}}{{\mathbb J}}
\def\overline{\tau}{\overline{\tau}}
\def{\rm Tr}{{\rm Tr}}
\def\hat{q}_0{\hat{q}_0}
\newcommand\rref[1]{(\ref{#1})}
\def{\it e.g.~}{{\it e.g.~}}
\def\a{\alpha}
\def\b{\beta}
\def\g{\gamma}
\def\Gamma{\Gamma}
\def\epsilon{\epsilon}
\def\eta{\eta}
\def\th{\theta}
\def\Th{\Theta}
\def\kappa{\kappa}
\def\la{\lambda}
\def\L{\Lambda}
\def\mu{\mu}
\def\nu{\nu}
\def\r{\rho}
\def\s{\sigma}
\def\tau{\tau}
\def\f{\phi}
\def\F{\Phi}
\def\omega{\omega}
\def\W{\Omega}
\def\v{\varphi}
\def\zeta{\zeta}
\def\varphi{\varphi}
\def\partial{\partial}
\def\Vev#1{\big\langle#1\big\rangle}
\def{\mathfrak g}{{\mathfrak g}}
\def\i{{ i}}
\newcommand{\no}[1]{\!:\! #1\!\! :}
\newcommand{{\cal B }}{{\cal B }}
\newcommand{\cW}{{\cal W }}
\newcommand{\cM}{{\cal M }}
\newcommand{\cF}{{\cal F }}
\newcommand{{\cal C }}{{\cal C }}
\newcommand{\cL}{{\cal L }}
\newcommand{\cO}{{\cal O }}
\newcommand{\cH}{{\cal H }}
\newcommand{\cA}{{\cal A }}
\newcommand{{\cal G }}{{\cal G }}
\newcommand{\cN}{{\cal N }}
\newcommand{\cY}{{\cal Y }}
\newcommand{\cD}{{\cal D }}
\newcommand{\cV}{{\cal V }}
\newcommand{{\cal J }}{{\cal J }}
\newcommand{\cI}{{\cal I }}
\newcommand{\E}{{\cal E }}
\newcommand{\B}{{\cal B}}
\renewcommand{\d}{{\partial}}
\newcommand{{\widetilde{\lambda}}}{{\widetilde{\lambda}}}
\def\widetilde{\widetilde}
\newcommand{\ensuremath{\mbox{Gr}}}{\ensuremath{\mbox{Gr}}}
\newcommand{\ensuremath{\mbox{SG}}}{\ensuremath{\mbox{SG}}}
\newcommand{\ensuremath{\mbox{TN}}}{\ensuremath{\mbox{TN}}}
\newcommand{\ensuremath{\mbox{CY}}}{\ensuremath{\mbox{CY}}}
\newcommand{|0\rangle}{|0\rangle}
\newcommand{{\it i.e.~}}{{\it i.e.~}}
\newcommand{{\it etc.~}}{{\it etc.~}}
\newcommand{\mathbf{1}}{\mathbf{1}}
\newcommand{\mathbf{2}}{\mathbf{2}}
\newcommand{\mathbf{3}}{\mathbf{3}}
\newcommand{\epsilon_1}{\epsilon_1}
\newcommand{\epsilon_2}{\epsilon_2}
\newcommand{\mathbf{+}}{\mathbf{+}}
\newcommand{\mathbf{-}}{\mathbf{-}}
\newcommand{{\bar w}}{{\bar w}}
\newcommand{{\bar z}}{{\bar z}}
\newcommand{{\bar x}}{{\bar x}}
\newcommand{{\bar h}}{{\bar h}}
\newcommand{{\bar q}}{{\bar q}}
\newcommand{\Vp}{V^\perp}
\newcommand{\Vpd}{W^\perp}
\newcommand{\rep}{R^G}
\newcommand{\pl}[2][]{\Psi^{(#2)#1}}
\newcommand{\pbl}[2][]{\bar{\Psi}^{(#2)#1}}
\newcommand{\pr}[2][]{\tilde{\Psi}^{(#2)#1}}
\newcommand{\pbr}[2][]{\tilde{\bar{\Psi}}^{(#2)#1}}
\newcommand{\Xl}[2][]{\partial X^{(#2)#1}}
\newcommand{\Xbl}[2][]{\partial\bar{X}^{(#2)#1}}
\newcommand{\Xr}[2][]{\partial\tilde{X}^{(#2)#1}}
\newcommand{\Xbr}[2][]{\partial\tilde{\bar{X}}^{(#2)#1}}
\newcommand{\mm}[1]{\mathbf{m}^{(#1)}}
\newcommand{\ma}[1]{m^{(#1)}}
\newcommand{|\sigma^-\rangle}{|\sigma^-\rangle}
\newcommand{\preprint}[1]{\begin{table}[t]
\begin{flushright}
{#1}
\end{flushright}
\end{table}}
\renewcommand{\title}[1]{\vbox{\center\LARGE{#1}}\vspace{5mm}}
\renewcommand{\author}[1]{\vbox{\center#1}\vspace{5mm}}
\newcommand{\address}[1]{\vbox{\center\footnotesize\em#1}}
\newcommand{\email}[1]{\vbox{\center\footnotesize\tt#1}\vspace{5mm}}
\newcommand{\AC}[1]{{\color{red}{{\bf AC:} #1}}}
\newcommand{\FM}[1]{{\color{red}{{\bf FM:} #1}}}
\begin{document}
\begin{titlepage}
\begin{flushright}
\end{flushright}
\begin{center}
\hfill \\
\hfill \\
\vskip 1cm
\title{Near-Extremal Limits of de Sitter Black Holes}
\author{Alejandra Castro$^{a}$, Francesca Mariani$^{b,c,d}$, and Chiara Toldo$^{b,e,f}$
}
\address{
${}^a$ Department of Applied Mathematics and Theoretical Physics, University of Cambridge,\\ Cambridge CB3 0WA, United Kingdom
\\
${}^{b}$
Institute for Theoretical Physics, University of Amsterdam, Science Park 904, \\
1090 GL Amsterdam, The Netherlands\\
${}^c$ Dipartimento di Fisica, Universita' degli studi di Milano–Bicocca, Piazza della Scienza 3,\\ I-20126 Milano, Italy
\\
${}^d$ Department of Physics and Astronomy,
Ghent University, Krijgslaan, 281-S9, 9000 Gent, Belgium
\\
${}^e$ Department of Physics, Jefferson Lab, Harvard University, 17 Oxford Street,\\ Cambridge, MA 02138, USA
\\
${}^f$ Dipartimento di Fisica, Universita' di Milano, via Celoria 6, 20133 Milano MI, Italy
}
\email{[email protected], [email protected], [email protected]}
\end{center}
\vfill
\abstract{We analyze the thermodynamic response near extremality of charged black holes in four-dimensional Einstein-Maxwell theory with a positive cosmological constant. The latter exhibit three different extremal limits, dubbed cold, Nariai and ultracold configurations, with near-horizon geometries AdS$_2 \times S^2$, dS$_2 \times S^2$, Mink$_2 \times S^2$, respectively. For each of these three cases we analyze small deformations away from extremality, and contrast their response.
We also construct the effective two-dimensional theory, obtained by dimensional reduction, that captures these features and provide a more detailed analysis of the perturbations around the near-horizon geometry for each case.
Our results for the ultracold case in particular show an interesting interplay between the entropy variation and charge variation, realizing a different symmetry breaking with respect to the other two near-extremal limits.}
\vfill
\end{titlepage}
\eject
{
\hypersetup{linkcolor=black}
\tableofcontents
}
\section{Introduction}
Black holes are notorious for their semi-classical features: they are usually characterized by only a few parameters, and exhibit universal laws governing the mechanics (thermodynamics) of their horizons. In this context it is reasonable to expect an overarching principle that accounts for these laws. However, it is also known that some of these features suffer modifications depending on the surroundings of the black hole. In particular, the presence (or absence) of a cosmological constant has both quantitative and qualitative repercussions for our understanding of black holes. Differences due to the surroundings are what we aim to explore and quantify here.
Black holes embedded in de Sitter, i.e., a gravitational theory with a positive cosmological constant, are an ideal laboratory to explore these differences. The presence of de Sitter adds a cosmological horizon which is well-known to mimic thermodynamic behaviour \cite{PhysRevD.15.2738}.\footnote{It should also be mentioned that accounting for a statistical origin of the thermodynamic properties of de Sitter is difficult, as it has been stressed and reviewed by several authors; see for instance \cite{Witten:2001kn,Strominger:2001pn,Banks:2005bm,Anninos:2012qw,Anninos:2017eib,Coleman:2021nor}.} It also allows for interesting generalizations, such as those explored, for instance, in \cite{Dolan:2013ft}. The presence of a cosmological horizon, plus the existing Cauchy horizons of the black hole, provides an extra dial that will allow us to explore different thermodynamic regimes and contrast behaviour.
To be concise, we will focus on four-dimensional electrically charged black holes embedded as solutions to Einstein-Maxwell theory with a positive cosmological constant. These are the so-called Reissner-Nordstr\"om de Sitter black holes (RNdS$_4$), which have the additional property of being spherically symmetric. These configurations generally admit three horizons: an inner, an outer and a cosmological horizon. The confluence of these horizons, which defines an extremal limit of the original black hole, is an interesting starting point for two reasons. First, for RNdS$_4$ there are three different extremal limits \cite{Romans:1991nq,Mann:1995vb,Booth:1998gf},
dubbed {\it cold} (inner and outer horizons coincide), {\it Nariai} (outer and cosmological horizons coincide) and {\it ultracold} (confluence of all three horizons) configurations. The near-horizon geometries for each case are AdS$_2 \times S^2$, dS$_2 \times S^2$ and Mink$_2 \times S^2$, respectively. Each of these instances has its own strengths and intricacies, which we will highlight below; it also provides us a dial to contrast the effects of the surroundings.
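Concretely, these limits can be read off from the blackening factor of the RNdS$_4$ solution. In conventions with $G_N = 1$ and four-dimensional de Sitter radius $\ell_4$,

```latex
\begin{equation}
ds^2 = -f(r)\,dt^2 + \frac{dr^2}{f(r)} + r^2\, d\Omega_2^2\,, \qquad
f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2} - \frac{r^2}{\ell_4^2}\,,
\end{equation}
```

so that $f(r)$ has (up to) three positive roots $r_- \leq r_+ \leq r_c$: the cold limit corresponds to the double root $r_- = r_+$, Nariai to $r_+ = r_c$, and ultracold to the triple root $r_- = r_+ = r_c$.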
Second, starting from extremality we can apply holographic tools to decode and interpret the thermodynamic responses away from extremality. This has been a powerful strategy for extremal black holes with an AdS$_2$ factor in the near-horizon geometry: deformations away from extremality define the concept of near-AdS$_2$/near-CFT$_1$ \cite{Almheiri:2014cka,Maldacena:2016upp} which leads to important insights on the quantum nature of black holes; see \cite{Mertens:2022irh} for a recent review. Here we will explore the concept of being ``near'' for the three different extremal cases of RNdS$_4$. This is where spherical symmetry comes in handy: we can build a consistent two-dimensional effective theory that captures the AdS$_2$, dS$_2$ and Mink$_2$ dynamics and its deformations. This two-dimensional theory shares many features with Jackiw-Teitelboim (JT) gravity \cite{Jackiw:1984je,Teitelboim:1983ux}, the CGHS \cite{Callan:1992rs} and $\widehat{\rm CGHS}$ models \cite{Afshar:2019axx}, which we will review and exploit.
To summarise, we will capture and contrast various corners of black hole thermodynamics within one overarching solution, the RNdS$_4$ black hole. We will focus on semi-classical properties of the solution, and quantify them from the four-dimensional point of view. We will then provide a different perspective of these features by analyzing the holographic properties of the near-horizon geometry using the two-dimensional description. The most prominent features we find for each near-extremal configuration are the following.
\begin{description}
\item[Heating up cold black holes.] The AdS$_2$ factor in the near-horizon geometry controls the dynamics, which exhibits patterns similar to the analysis of near-extremal Reissner-Nordstr\"om and Reissner-Nordstr\"om in AdS$_4$ \cite{Almheiri:2016fws,Nayak:2018qej}. However, there are two differences in the near-extremal response. The thermodynamic regime is controlled by $M_{\rm gap}$, but there is an upper bound on its positivity. Also, the backreaction of the metric does not have a definite sign, which we find intriguing.
\item[Deformations of Nariai.] Here the rules are dictated by responses around dS$_2$, and we take the perspective of the static patch observer. Our main aim in this case is to highlight how certain responses are similar and different relative to AdS$_2$: several aspects can be obtained by analytic continuation as proposed in \cite{Maldacena:2002vr,Maldacena:2019cbz}, but the interpretation and reality restrictions are delicate.
\item[Kicking ultracold backgrounds.] This is the most novel instance of extremality, where the ultracold geometry has a natural connection to $\widehat{\rm CGHS}$ models. In this case the thermodynamic response is very different relative to the cold and Nariai cases, which we discuss in detail. Actually, it is a case where the temperature plays a minimal role at leading order. We discuss the holographic properties of our Mink$_2$ theory, where RNdS$_4$ serves as a guide in two dimensions to dictate adequate boundary conditions and interpret the results.
\end{description}
Our results could be extended and applied in several different directions. We expect this to extend to higher-dimensional RNdS$_d$ backgrounds rather easily. A richer direction is to extend our analysis to rotating black holes in de Sitter. Kerr-dS$_4$ also has a Nariai and an ultracold limit that would be interesting to revisit in the near-extremal limit along the lines of \cite{Anninos:2010gh}, and to connect with the Kerr/CFT correspondence proposed in \cite{Anninos:2009yc}.
Another direction within the RNdS$_4$ solution is to extend the analysis of perturbations and connect to the more general studies in \cite{Dias:2018etb} on strong cosmic censorship. It would also be interesting to explore the stability properties of the ultracold solution along the lines of \cite{Anninos:2022hqo,Horowitz:2022mly}.
The paper is structured as follows. In Sec.\,\ref{sec:RN} we introduce our setup, which consists of Reissner-Nordstr\"om de Sitter black holes in Einstein-Maxwell gravity with a positive cosmological constant. We describe the space of solutions and the three different extremal limits, including their near-horizon geometries. In Sec.\,\ref{sec:JTreduction} we report the effective two-dimensional theory obtained upon reduction of Einstein-Maxwell theory on the two-sphere, and present the equations governing near-extremal perturbations of the 2d metric and the dilaton field. In Sec.\,\ref{sec:heating_cold}, Sec.\,\ref{sec:nariai} and Sec.\,\ref{sec:ultracold} we spell out the near-extremal thermodynamics of the cold, Nariai and ultracold geometries respectively, and we compute the mass gap for these configurations. In each of these three sections we supplement our analysis with a study of the perturbations near the extremal geometry, by solving the equations of the JT-gravity-like model obtained upon reduction to two dimensions. We then compute the on-shell action via holographic renormalization, comment on the pattern of symmetry breaking, and compare it among the various cases.
\section{Reissner-Nordstr\"om \texorpdfstring{dS$_4$}{dS4} black holes}\label{sec:RN}
Reissner-N\"ordstrom black holes embedded in dS space have several interesting properties that are distinct from their counterparts in AdS or flat space. In this section we will review some of these properties based on the original work of \cite{Romans:1991nq,Mann:1995vb,Booth:1998gf}. We will focus mainly on the mechanics of its horizons, and the accessible extremal limits.
Our interest is in black hole solutions of Einstein-Maxwell theory with a positive cosmological constant in four-dimensions. The action is given by
\begin{equation}
I_{\rm 4D}=\frac{1}{16\pi G_N}\int \mathrm{d}^{4}x \sqrt{-g}\left({\cal R}^{(4)}-2\Lambda_4 -F_{\mu\nu}F^{\mu\nu}\right)~.\label{eq:EML-action}
\end{equation}
An electrically charged black hole, which we will coin as RNdS$_4$, is a spherically symmetric solution of Einstein-Maxwell theory, where the line element and gauge field are
\begin{equation}
\begin{aligned}
ds^{2}&=-V(r)\mathrm{d} t^{2}+\frac{1}{V(r)}\mathrm{d} r^{2}+r^{2}(\mathrm{d} \theta^2 +\sin^2\theta \mathrm{d}\phi^2)~,\\
A&= \frac{Q}{r}\mathrm{d} t~, \label{dsrnds}
\end{aligned}
\end{equation}
and the blackening function in the metric is given by
\begin{equation}
V(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{r^2}{\ell_4^2}~.
\label{wfds1}
\end{equation}
In analogy to their flat space counterparts, we will denote the constant $M$ as the `mass' of the black hole and the constant $Q$ as the electric charge. It is also straightforward to generalize these expressions to a dyonic case, where the solution also carries magnetic charge. For simplicity, and without loss of generality, we will focus on the electric case.
The horizon structure of this black hole is dictated by the roots of $V(r)$. As is clear from \eqref{wfds1} there are four roots; however, even when all roots are real, one of them is always located at negative $r$ and is hence unphysical (it sits behind the curvature singularity). The remaining three roots can be real and positive, which we will denote in ascending order as: $r_{-}$ is the inner horizon; $r_{+}$ is the outer horizon; $r_{c}$ is the cosmological horizon.
In this notation, we are writing \eqref{wfds1} as
\begin{equation} \label{warp_factor}
V(r)=-\frac{1}{\ell_4^2 r^{2}}(r+r_++r_- +r_c)(r-r_{-})(r-r_{+})(r-r_{c})~,
\end{equation}
where\footnote{In \eqref{constraints} we have favored $(r_\pm,\ell_4^2)$; but it is important to stress that $M$ and $Q$ are symmetric with respect to $(r_c,r_\pm)$. One can check that $ 2\ell_4^2M=(r_++r_-)(r_++r_c)(r_-+r_c)$ and $\ell_4^2 Q^2 = r_cr_+r_-(r_c+r_++ r_-)$. }
\begin{equation}
\begin{aligned} \label{constraints}
M&=\frac{1}{2\ell_4^2}(r_++r_-)(\ell_4^2 -r_+^2-r_-^2)~,\\
Q^2&=\frac{r_+r_-}{\ell_4^2}\left(\ell_4^2-r_+^2-r_-^2-r_-r_+\right)~,\\
\ell_4^2&=r_c^2 +r_+^2+r_-^2+r_-r_++r_-r_c+r_cr_+~.
\end{aligned}
\end{equation}
The space of black hole solutions is determined by the discriminant of the quartic polynomial in $V(r)$, which up to a trivial normalization reads
\begin{equation}
\begin{aligned}
\textrm{Discr}_4&\equiv \frac{1}{16\ell_4^6} \prod_{i<j}(r_i-r_j)^2\\
&= -16 Q^6+\ell_4^4(M^2-Q^2)+\ell_4^2(-27 M^4 + 36 M^2 Q^2 - 8 Q^4)~.
\end{aligned}
\end{equation}
Here $r_i$ denote the four roots of $V(r)$. Requiring $\textrm{Discr}_4\geq 0$ and $M>0$ is sufficient to ensure that three roots of the polynomial are real and positive. This defines for us the space of
solutions admitting a horizon, which we depict in Fig.\,\ref{SharkFin} for a fixed value of cosmological constant. The shaded region corresponds to a classical black hole, while the white area represents naked singularities. The shaded area is usually referred to as ``Shark Fin'' due to its shape, and we will use this nomenclature henceforth.
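As a quick cross-check (not part of the derivation above), the discriminant identity can be verified numerically: for any sample set of horizon radii, the product over root differences must reproduce the polynomial in $M$, $Q$ and $\ell_4$. The sketch below also tests the footnote relations; all numerical values are arbitrary illustrative choices.

```python
import math
from itertools import combinations

# sample horizon radii (arbitrary illustrative values)
rm, rp, rc = 0.1, 0.2, 0.5
# ell_4^2, M and Q^2 from the constraint equations
l2 = rc**2 + rp**2 + rm**2 + rm*rp + rm*rc + rc*rp
M  = (rp + rm) * (l2 - rp**2 - rm**2) / (2 * l2)
Q2 = rp * rm * (l2 - rp**2 - rm**2 - rm*rp) / l2

# footnote relations, symmetric in (r_c, r_+, r_-)
assert math.isclose(2 * l2 * M, (rp + rm) * (rp + rc) * (rm + rc))
assert math.isclose(l2 * Q2, rc * rp * rm * (rc + rp + rm))

# the fourth root of V(r) is always negative
roots = [rm, rp, rc, -(rm + rp + rc)]
prod = 1.0
for ri, rj in combinations(roots, 2):
    prod *= (ri - rj)**2
lhs = prod / (16 * l2**3)
rhs = -16*Q2**3 + l2**2*(M**2 - Q2) + l2*(-27*M**4 + 36*M**2*Q2 - 8*Q2**2)
assert math.isclose(lhs, rhs, rel_tol=1e-10)
print("Discr_4 =", lhs)
```

Since the three chosen radii are distinct and positive, the discriminant comes out positive, i.e., the sample point lies inside the Shark Fin.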
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{"SFnew"}
\caption{Shark Fin diagram for RNdS$_4$ with fixed positive value of cosmological constant $\Lambda_4=0.01$. The shaded area corresponds to black hole solutions, and the white area to naked singularities. The edges correspond to extremal black holes: the dashed line corresponds to cold solutions, the solid one to Nariai solutions. The star, where the two lines intersect, corresponds to the ultracold solution.}
\label{SharkFin}
\end{figure}
\subsection{Black hole mechanics}\label{sec:bh-mech}
In this section we review the mechanical properties of each physical horizon. To keep the nomenclature simple, we will refer to each quantity by its thermodynamic analog. However, we do not intend to give them a statistical interpretation; do not take the analogy as an equality.
For each physical horizon $r_{\mathsf{h}}=\{r_c,r_+,r_-\}$ we can intrinsically define an entropy, temperature, and chemical potential in the standard way. The area-law for the entropy is
\begin{equation}
S_\mathsf{h}=\pi r_\mathsf{h}^2~,
\label{entropy_cosmo}
\end{equation}
while the Hawking temperature and electric potential read
\begin{equation}
T_\mathsf{h}=\frac{1}{4\pi}|V'(r_\mathsf{h})|~, \qquad \Phi_{\mathsf{h}}=\frac{Q}{r_\mathsf{h}}~ .
\label{tcosmo}
\end{equation}
For each horizon $r_\mathsf{h}$ we can verify that a first law is satisfied
\begin{equation}
\begin{aligned}
dM=-T_{-} dS_{-}+\Phi_{-}\, dQ~,\\
dM=\phantom{-}T_{+} dS_{+}+\Phi_{+}\, dQ~,\\
dM=-T_{c}\, dS_{c}+\Phi_{{c}}\, dQ~,
\end{aligned}
\label{1stlawbh}
\end{equation}
where $M$ and $Q$ are the mass and electric charge in \eqref{constraints}, and we are fixing $\Lambda_4$ ($\ell_4$) as we vary the mass, charge, and entropy. Notice that depending on the choice of $r_{\mathsf{h}}$, the first law is modified appropriately. For cosmological horizons this odd sign in \eqref{1stlawbh} was famously observed in \cite{Gibbons:1976ue}; see \cite{Banihashemi:2022htw,Morvan:2022aon} and references therein for a recent discussion.
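The three first laws can be tested by finite differences. The sketch below (with illustrative parameter values and $\ell_4^2=1$) treats $(r_-,r_+)$ as independent data at fixed $\ell_4$, determines $r_c$ from the constraint, and compares $dM$ against $\pm T_{\mathsf h}\, dS_{\mathsf h}+\Phi_{\mathsf h}\, dQ$ along a generic direction in parameter space.

```python
import math

L2 = 1.0  # ell_4^2, held fixed

def data(t):
    """Solution data along a generic curve (r_-, r_+)(t) at fixed ell_4."""
    rm, rp = 0.10 + 0.7*t, 0.20 + 1.3*t
    b, c = rm + rp, rm**2 + rp**2 + rm*rp - L2
    rc = (-b + math.sqrt(b*b - 4*c)) / 2           # cosmological horizon
    M  = (rp + rm) * (L2 - rp**2 - rm**2) / (2*L2)
    Q  = math.sqrt(rp*rm*(L2 - rp**2 - rm**2 - rm*rp)/L2)
    return {'-': rm, '+': rp, 'c': rc, 'M': M, 'Q': Q}

def dV(r, M, Q):                                   # V'(r)
    return 2*M/r**2 - 2*Q**2/r**3 - 2*r/L2

d0 = data(0.0)
h = 1e-5
d1, d2 = data(-h), data(h)                         # central differences
dM = d2['M'] - d1['M']
dQ = d2['Q'] - d1['Q']

for hor, sign in (('-', -1), ('+', +1), ('c', -1)):
    T   = abs(dV(d0[hor], d0['M'], d0['Q'])) / (4*math.pi)
    Phi = d0['Q'] / d0[hor]
    dS  = math.pi * (d2[hor]**2 - d1[hor]**2)
    assert abs(dM - (sign*T*dS + Phi*dQ)) < 1e-9
```

The signs follow from $V'(r_+)>0$ while $V'(r_-)<0$ and $V'(r_c)<0$, so the inner and cosmological horizons pick up the minus sign in \eqref{1stlawbh}.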
\subsection{Three different extremal limits}\label{sec:ext}
Extremal solutions occur when two, or more, horizons coincide. Due to the presence of a positive cosmological constant, it is possible to have different extremal scenarios. For the RNdS$_4$ black hole in consideration, we have three different cases:\footnote{Historically, and in several references, the Nariai black hole refers to the case with $Q=0$ and $r_{+}=r_{c}$ \cite{1999GReGr..31..963N}. Here we define Nariai as the solution with $Q\neq 0$ and $r_{+}=r_{c}$. }
\begin{description}
\item[~~~~i. Cold black hole:] $r_{-}=r_{+}\equiv r_0$,
\item[~~~~ii. Nariai black hole:] $r_{+}=r_{c}\equiv r_\mathsf{n}$,
\item [~~~~iii. Ultracold black hole:] $r_{-}=r_{+}=r_{c}\equiv r_\mathsf{uc}$.
\end{description}
These cases describe the edges and tip of the Shark Fin in Fig.\,\ref{SharkFin}. The shared characteristic of all cases is that $T_\mathsf{h}=0$, i.e., the Hawking temperature vanishes at extremality. Each case will also have a decoupling limit, leading to an enhancement of symmetries in the near-horizon geometry. However, as we will explore below, the resulting near-horizon geometry is distinct in each case. In the following we will review the decoupling limit for each case and describe the resulting symmetries in the near-horizon region.
\paragraph{i. Cold black hole.} The cold solution occurs when the inner and the outer black hole horizons coincide
\begin{equation}
r_{-}=r_{+}\equiv r_0~.
\label{Coolddef}
\end{equation}
The blackening factor \eqref{warp_factor} in this case becomes
\begin{equation}
\begin{aligned}
V(r)_{\rm cold}=\left( 1 - \frac{r^2}{\ell_4^2} - 2 \frac{r_0 r}{\ell_4^2} -3\frac{r_0^2}{\ell_4^2} \right)\left(1-\frac{r_{0}}{r}\right)^2~.
\end{aligned}
\label{RomansCold}
\end{equation}
Notice that for $r_0<r<r_c$ we have $V(r)_{\rm cold}>0$.
To construct the near-horizon geometry we consider the following coordinate transformation,
\begin{equation}
r = r_0 +\lambda R~,\qquad t = \frac{\ell_{\rm A}^2}{\lambda} T~.
\label{ctr}
\end{equation}
With some insight, we have introduced $\ell_{\rm A}$, which we will define below. The decoupling limit is defined as the limit $\lambda \rightarrow 0$, while holding $T$ and $R$ fixed; this takes us close to the horizon $r= r_0$. Using \eqref{Coolddef} and \eqref{ctr} in \eqref{dsrnds}, the limit leads to the line element
\begin{equation}
ds^{2}= \ell_{\rm A}^2 \left( - R^{2} \mathrm{d} T^{2}+ \frac{\mathrm{d} R^{2}}{R^2}\right)+r_0^{2}\,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~.
\label{dscoldext}
\end{equation}
This is the near-horizon geometry for the cold black hole: it is AdS$_2 \times S^2$. The AdS$_{2}$ radius is given by
\begin{equation}\label{ads2-cold}
\ell_{\rm{A}}^2=\frac{r_0^{2}}{(1-6 r_0^{2}/\ell_4^2)}~,
\end{equation}
while the $S^2$ radius is the horizon radius $r_0$. Note that demanding $6r_0^2<\ell_4^2$, i.e., $\ell_{\rm A}^2>0$, implies that $r_c>r_0$: this is consistent with our hierarchy for the roots of $V(r)$.
It is also useful to write the charge and mass of the black hole: as a consequence of \eqref{Coolddef}, we have
\begin{equation}
Q^2_0=r_0^2\left(1-3 \frac{r_0^2}{\ell_4^2}\right)~,\qquad M_0= r_0\left(1-2\frac{ r_0^2}{\ell_4^2}\right)~.
\label{ZMRomans}
\end{equation}
The requirement $0\leq 6r_0^2<\ell_4^2$ ensures that $Q_0^2$ and $M_0$ are always non-negative. Hence, starting from a cold solution, we can only reach a neutral solution, $Q_0=0$, by setting $r_0=0$. This is a simple way to see that the cold black hole is the dashed line in Fig.\,\ref{SharkFin}.
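Both the factorized form \eqref{RomansCold} and the AdS$_2$ radius \eqref{ads2-cold} can be cross-checked numerically: at the extremal values $(M_0,Q_0)$ the general blackening factor must reproduce $V(r)_{\rm cold}$, and $\ell_{\rm A}^2$ must agree with $2/V''(r_0)$, the standard curvature radius extracted from a double root. A minimal sketch with arbitrary illustrative values:

```python
import math

l2, r0 = 1.0, 0.1                      # illustrative ell_4^2 and horizon radius
M0  = r0 * (1 - 2*r0**2/l2)
Q02 = r0**2 * (1 - 3*r0**2/l2)

def V(r):        # general blackening factor at the extremal values (M_0, Q_0)
    return 1 - 2*M0/r + Q02/r**2 - r**2/l2

def Vcold(r):    # factorized form with the double root at r_0
    return (1 - r**2/l2 - 2*r0*r/l2 - 3*r0**2/l2) * (1 - r0/r)**2

for r in (0.05, 0.15, 0.3, 0.7):
    assert math.isclose(V(r), Vcold(r), rel_tol=1e-12, abs_tol=1e-12)

h = 1e-4
Vpp = (V(r0 + h) - 2*V(r0) + V(r0 - h)) / h**2   # V''(r_0) by finite differences
lA2 = r0**2 / (1 - 6*r0**2/l2)
assert math.isclose(2/Vpp, lA2, rel_tol=1e-4)
print("ell_A^2 =", lA2)
```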
Finally, the field strength in this limit is also well-defined. Starting from \eqref{dsrnds} and using \eqref{ctr}, we find
\begin{equation}
F=\mathrm{d} A = \frac{Q_0}{r_0^2}\, \ell_{\rm A}^2\mathrm{d} T \wedge \mathrm{d} R~,
\end{equation}
i.e., the field strength is proportional to the volume 2-form of AdS$_2$.
To conclude, for cold black holes we obtained an AdS$_{2}\times S^2$ factor in the metric of the near-horizon region of the extremal solution. The initial RNdS$_4$ metric had only a $u(1)$ symmetry corresponding to time translations and an $so(3)$ spherical symmetry. In the near-horizon geometry of the extremal cold solution we have an $sl(2,\mathbb{R})\times so(3)$ symmetry. Therefore the initial $u(1)$ symmetry is enhanced to an $sl(2,\mathbb{R})$ symmetry.
\paragraph{ii. Nariai black hole.}
In this case the double root $r_\mathsf{n}$ refers to the coinciding horizons $r_{+}$ (outer) and $r_{c}$ (cosmological). The blackening factor \eqref{wfds1} is very similar to the cold case; after setting $r_+=r_c=r_\mathsf{n}$, we get
\begin{equation}
\begin{aligned}
V(r)_{\rm Nariai}=\left( 1 - \frac{r^2}{\ell_4^2} - 2 \frac{r_\mathsf{n} r}{\ell_4^2} -3\frac{r_\mathsf{n}^2}{\ell_4^2} \right)\left(1-\frac{r_\mathsf{n}}{r}\right)^2~.
\end{aligned}
\label{RNariai}
\end{equation}
However, in contrast with \eqref{RomansCold}, for the Nariai limit the blackening factor obeys
\begin{equation}\label{VN1}
V(r)_{\rm Nariai}<0~, \qquad r_-<r<r_\mathsf{n}~.
\end{equation}
This will be important when obtaining the resulting near-horizon geometry.
The decoupling limit is very similar to the one in \eqref{ctr}: we will consider
\begin{equation}
r = r_\mathsf{n} -\lambda R~,\qquad t = \frac{\ell_{\rm dS}^2}{\lambda} T~,
\label{ctr1}
\end{equation}
and take $\lambda\to0$ while the rest of the variables are held fixed. In comparison to \eqref{ctr}, here we have flipped the sign in the radial variable: this reflects that we reach the horizon $r_\mathsf{n}$ from the interior of the cosmological solution (and not the inflationary patch). The parameter $\ell_{\rm dS}$ is given by
\begin{equation}\label{ds2-nar}
\ell_{\rm{dS}}^2=\frac{r_\mathsf{n}^{2}}{(6 r_\mathsf{n}^{2}/\ell_4^2-1)}~.
\end{equation}
Notice the similarity with \eqref{ads2-cold}. However, in this case we have that $r_\mathsf{n}>r_-$, which implies that
$6 r_\mathsf{n}^{2} >\ell_4^2$ and hence $\ell_{\rm{dS}}^2>0$. Implementing \eqref{ctr1} on the line element of RNdS$_4$, we find
\begin{equation}
ds^{2}=\ell^2_{\rm dS}\left( R^{2} \mathrm{d} T^{2}- \frac{\mathrm{d} R^{2}}{R^2}\right)+r_\mathsf{n}^{2}\,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~.
\label{dsN}
\end{equation}
The resulting geometry is of the form dS$_2\times S^2$, where
the dS$_{2}$ radius is \eqref{ds2-nar} and the $S^2$ radius is $r_\mathsf{n}$. One culprit of this change to dS$_2$, relative to AdS$_2$ in \eqref{ads2-cold}, is the sign in \eqref{VN1}. As in the cold case, the presence of a dS$_2$ factor in the near-horizon region of the extremal Nariai geometry means that we have an enhancement of symmetry with respect to the initial RNdS$_4$ metric.
And similar to the cold case, the field strength in this limit is well-defined. Starting from \eqref{dsrnds}, we find
\begin{equation}\label{Fds2}
F=\mathrm{d} A = \frac{Q_\mathsf{n}}{r_\mathsf{n}^2} \,\ell_{\rm dS}^2\mathrm{d} T \wedge \mathrm{d} R~,
\end{equation}
i.e., the field strength is proportional to the volume 2-form of dS$_2$.
Finally, it is instructive to inspect the mass and the charge. Written in terms of $r_\mathsf{n}$ and $\ell_4$, we have
\begin{equation}
Q^2_\mathsf{n}=r_\mathsf{n}^2\left(1-3 \frac{r_\mathsf{n}^2}{\ell_4^2}\right)~,\qquad M_\mathsf{n}= r_\mathsf{n}\left(1-2\frac{ r_\mathsf{n}^2}{\ell_4^2}\right)~.
\label{ZMN1}
\end{equation}
Our bounds in this case are $\ell_4^2/6< r_\mathsf{n}^{2} \leq \ell_4^2/3$. Hence the neutral solution, $Q_\mathsf{n}=0$, corresponds to $3r_\mathsf{n}^2=\ell_4^2$, for which $M_\mathsf{n}=r_\mathsf{n}/3$, as expected \cite{1999GReGr..31..963N}.
Given this range of $r_\mathsf{n}$, the expressions in \eqref{ZMN1} lead to the solid line in Fig.\,\ref{SharkFin}.
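The same numerical cross-check can be run on the Nariai branch (illustrative values $\ell_4^2=1$, $r_\mathsf{n}=1/2$, inside the allowed window): the blackening factor is negative between $r_-$ and $r_\mathsf{n}$ as in \eqref{VN1}, and the dS$_2$ radius agrees with $-2/V''(r_\mathsf{n})$, since the double root is now a local maximum of $V$.

```python
import math

l2, rn = 1.0, 0.5                 # ell_4^2 and r_n, with l2/6 < rn^2 <= l2/3
Mn  = rn * (1 - 2*rn**2/l2)
Qn2 = rn**2 * (1 - 3*rn**2/l2)

def V(r):
    return 1 - 2*Mn/r + Qn2/r**2 - r**2/l2

# remaining positive root r_- of the quadratic prefactor in V_Nariai
rminus = -rn + math.sqrt(l2 - 2*rn**2)
for r in (0.25, 0.35, 0.45):
    assert rminus < r < rn and V(r) < 0

h = 1e-4
Vpp = (V(rn + h) - 2*V(rn) + V(rn - h)) / h**2    # V''(r_n) < 0
ldS2 = rn**2 / (6*rn**2/l2 - 1)
assert Vpp < 0 and math.isclose(-2/Vpp, ldS2, rel_tol=1e-5)
print("ell_dS^2 =", ldS2)
```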
\paragraph{iii. Ultracold black hole.}
This case represents the most constrained solution relative to the previous ones. The extremal ultracold solution is characterized by
\begin{equation}
r_-=r_+=r_c\equiv r_\mathsf{uc}~.
\label{UC}
\end{equation}
Using \eqref{UC} in \eqref{constraints}, we find
\begin{equation}\label{eq:rucl}
r_\mathsf{uc}=\frac{\ell_4}{\sqrt{6}}~,
\end{equation}
and
\begin{equation}
Q^2_\mathsf{uc}=\frac{9}{8} M_\mathsf{uc}^2=\frac{\ell_4^2}{12}~.
\end{equation}
Hence, all scales in the black hole solution are determined by $\ell_4$. The ultracold solution is where the cold and Nariai black holes intersect, which happens when $\ell_{\rm A}=\ell_{\rm dS}\to \infty$, in accordance with \eqref{eq:rucl}.
The blackening factor in this case reads
\begin{equation}
V(r)_{\rm ultracold}=-\frac{r^2}{6r_\mathsf{uc}^2}\left(1-\frac{r_\mathsf{uc}}{r}\right)^3\left(1+3\frac{r_\mathsf{uc}}{r}\right)~.
\end{equation}
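The ultracold relations and the triple-root form of the blackening factor can be verified with a few lines; the value of $\ell_4$ below is an arbitrary illustrative choice. The last assertion checks the field-strength magnitude $Q_\mathsf{uc}/r_\mathsf{uc}^2=\sqrt{3}/\ell_4$ that appears in the near-horizon limit.

```python
import math

l4 = 2.0                                   # illustrative value of ell_4
ruc = l4 / math.sqrt(6)
Muc  = ruc * (1 - 2*ruc**2/l4**2)          # = 2*ruc/3
Quc2 = ruc**2 * (1 - 3*ruc**2/l4**2)       # = ruc^2/2

assert math.isclose(Quc2, l4**2/12)
assert math.isclose(9*Muc**2/8, l4**2/12)

def V(r):        # general blackening factor at (M_uc, Q_uc)
    return 1 - 2*Muc/r + Quc2/r**2 - r**2/l4**2

def Vuc(r):      # triple-root form
    return -(r**2/(6*ruc**2)) * (1 - ruc/r)**3 * (1 + 3*ruc/r)

for r in (0.3*ruc, 0.8*ruc, 1.5*ruc, 3.0*ruc):
    assert math.isclose(V(r), Vuc(r), rel_tol=1e-12, abs_tol=1e-12)

# magnitude of the near-horizon field strength: Q_uc/r_uc^2 = sqrt(3)/ell_4
assert math.isclose(math.sqrt(Quc2)/ruc**2, math.sqrt(3)/l4)
```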
However, to describe the decoupling limit we need to move slightly away from this point. One way to proceed is to start from the cold case, and it is convenient to rewrite \eqref{RomansCold} as follows
\begin{equation}
V(r)_{\rm cold}=-\frac{r^2}{\ell_4^2}\left(1-\frac{r_0}{r}\right)^2\left(1-\frac{r_c}{r}\right)\left(1+\frac{2r_0+r_c}{r}\right)~,
\label{Vcold}
\end{equation}
where we are making explicit the dependence on $r_c$, and hence the additional cosmological horizon in $V(r)_{\rm cold}$.
In order to capture the near-horizon region of the ultracold black hole, we start from (\ref{Vcold}) and approach the ultracold case; this means sending
\begin{equation}
r_0\rightarrow r_\mathsf{uc} -\lambda~,\qquad r_c\rightarrow r_\mathsf{uc}
+\lambda~,
\label{bro}
\end{equation}
where $\lambda$ is the decoupling parameter. For the coordinates, we will be performing the following transformation on the cold metric\footnote{One way to avoid going through the cold black hole, and hence avoid using \eqref{bro}, is to modify \eqref{nearhorizonuc}. We can take \eqref{UC} with $r= r_\mathsf{uc}-R_0\lambda+\sqrt{\frac{2R_0^3 }{3 r_\mathsf{uc}^{3}}}\lambda^{3/2}R$ and $t= \sqrt{\frac{3 r_\mathsf{uc}^{3}}{2R_0^3}}\frac{T}{\lambda^{3/2}}$, and this will also lead to \eqref{eq:ext-ucold-1}. Here $R_0$ is an arbitrary constant. }
\begin{equation}
r= r_\mathsf{uc}+\sqrt{\frac{2 }{3 r_\mathsf{uc}^{3}}}\lambda^{3/2}R~,\qquad t= \sqrt{\frac{3 r_\mathsf{uc}^{3}}{2}}\frac{T}{\lambda^{3/2}}~.
\label{nearhorizonuc}
\end{equation}
Plugging (\ref{bro}) and (\ref{nearhorizonuc}) into the metric \eqref{dsrnds}, with \eqref{Vcold}, in the limit $\lambda\to 0$ we find
\begin{equation} \label{eq:ext-ucold-1}
ds^2=-\mathrm{d} T^2+\mathrm{d} R^2+r_\mathsf{uc}^2 \,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,
\end{equation}
which is a geometry of the form Mink$_2\times S^2$, where the $S^2$ radius is $r_\mathsf{uc}$. This is the resulting near-horizon geometry of the ultracold black hole. The resulting field strength is
\begin{equation}\label{eq:ext-ucoldF-1}
F=\mathrm{d} A = \pm\frac{\sqrt{3}}{\ell_4} \mathrm{d} T \wedge \mathrm{d} R~.
\end{equation}
\section{Effective two-dimensional theory \label{sec:JTreduction}}
In the subsequent sections we will be analyzing the deviations away from the extremal limits of the RNdS$_4$ black hole, for each case described in Sec.\,\ref{sec:ext}. Since all the extremal limits correspond to geometries that are the direct product of a two-manifold and a round two-sphere, i.e., of the form ${\cal M}_2\times S^2$, it is convenient to construct the effective gravitational theory on ${\cal M}_2$.
There are several references that describe the dimensional reduction of Einstein-Maxwell theory on a two-sphere; here we will be mainly following the conventions of \cite{Nayak:2018qej}---see also \cite{Larsen:2018iou,Castro:2021wzn}. The 4D action is given by \eqref{eq:EML-action}.
We will do a dimensional reduction of this theory to two-dimensions, where we take
\begin{equation}\label{eq:metric4d}
\begin{aligned}
ds^2_4&=g^{(4)}_{\mu\nu} \mathrm{d} x^\mu \mathrm{d} x^\nu= \frac{\Phi_0}{\Phi} g_{ab} \mathrm{d} x^a \mathrm{d} x^b + \Phi^2 \left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,\\
F&= F_{ab} \mathrm{d} x^a \wedge \mathrm{d} x^b ~.
\end{aligned}
\end{equation}
Here $g_{ab}$ is a 2D metric, and $\Phi(x)$ is a scalar field, usually coined as the {\it dilaton}; both fields depend only on the 2D coordinates $x^a$, $a,b=(0,1)$. $\Phi_0\equiv \Phi(x=x_0)$ is a constant that we use to normalize solutions and compare the 2D solutions with their four-dimensional counterparts. Notice that we are assuming that all configurations preserve spherical symmetry, and the field strength in four dimensions, $F_{\mu\nu}$, is purely electric (i.e. supported solely by the two-dimensional metric). This is a consistent truncation of the theory, and it will suffice for our purposes.
The result of the dimensional reduction over \eqref{eq:metric4d} on \eqref{eq:EML-action} leads to the effective two-dimensional action
\begin{equation}\label{eq:2daction}
\begin{aligned}
{I_{\rm 2D}=\frac{1}{4 G_N} \int_{{\cal M}_2} \mathrm{d}^2x \sqrt{-g}\Phi^2 \left( {\cal R}+ 2\frac{\Phi_0}{\Phi^3}-{2\Lambda_4}\frac{\Phi_0}{\Phi}-\frac{\Phi}{\Phi_0} F_{ab}F^{ab} \right)~.}
\end{aligned}
\end{equation}
Here ${\cal R}$ is the two-dimensional Ricci scalar associated to $g_{ab}$. The resulting equations of motion are as follows. The variation of the action with respect to the dilaton gives
\begin{equation}\label{eq:eom1}
\begin{aligned}
{{\cal R} - \frac{\Phi_0}{\Phi^3} -\frac{3}{2}\frac{\Phi}{\Phi_0} F^2 -\Lambda_4 \frac{\Phi_0}{\Phi} =0 ~,}
\end{aligned}
\end{equation}
and the variation with respect to the metric leads to
\begin{equation}\label{eq:eom2}
\begin{aligned}
{(\nabla_a\nabla_b -g_{ab}\square) \Phi^2 +g_{ab}\left(\frac{\Phi_0}{ \Phi} +\frac{1}{2}\frac{\Phi^3}{ \Phi_0} F^2 -\Lambda_4 {\Phi_0 \Phi} \right)=0~.}
\end{aligned}
\end{equation}
Lastly, variation with respect to the gauge field yields:
\begin{equation} \label{maxw}
\partial_a \left( \sqrt{-g} \frac{\Phi^3}{\Phi_0} F^{ab} \right) =0~.
\end{equation}
It is important to remark that all solutions to \eqref{eq:eom1}-\eqref{maxw} solve the equations of motion of the four-dimensional theory \eqref{eq:EML-action}. The solution to Maxwell's equations is
\begin{equation}\label{eq:F1}
F_{ab}= Q \frac{\Phi_0}{ \Phi^3}\sqrt{-g}\epsilon_{ab}~,
\end{equation}
and $Q$ is a constant, i.e., the electric charge.
It is also useful to record
\begin{equation}
\begin{aligned}
{F_{ac}F_{b}^{~c}=Q^2 \frac{\Phi_0^2}{ \Phi^6} g_{ab}~,\qquad F^2= -2Q^2 \frac{\Phi_0^2}{ \Phi^6} ~.}
\end{aligned}
\end{equation}
However we will not substitute this on-shell value in the 2d theory since we might not want to keep the electric charge $Q$ fixed: we will proceed with the variation of the gauge field as well.
As an example, it is instructive to write RNdS$_4$ in the language of the two-dimensional theory. Comparing \eqref{dsrnds} with \eqref{eq:metric4d}, we find that the dilaton is simply
\begin{equation} \label{eq:rn1}
\Phi(x)=r~,
\end{equation}
and
\begin{equation}\label{eq:rn2}
g_{ab}\mathrm{d} x^a \mathrm{d} x^b = \frac{\Phi }{\Phi_0}\left( -V(r) \mathrm{d} t^2+ \frac{\mathrm{d} r^2}{V(r)}\right) ~,\qquad F= \frac{Q}{r^2} \mathrm{d} t\wedge \mathrm{d} r~,
\end{equation}
with $V(r)$ given in \eqref{wfds1}. It is straightforward to verify that \eqref{eq:rn1}-\eqref{eq:rn2} is a solution to \eqref{eq:eom1}-\eqref{maxw}. Notice that the electric charge of the black hole in Sec.\,\ref{sec:RN} is exactly the same as the constant entering in \eqref{eq:F1}.
We will mostly be interested in describing the dynamics surrounding the near-horizon geometries of the three extremal cases: cold, Nariai, and ultracold. We will denote these near-horizon backgrounds as the IR solutions. From the two-dimensional perspective, IR means that we analyze solutions starting from a background with $\Phi(x)= \Phi_0$, i.e., a constant dilaton background.
When $\Phi$ is constant, we find from \eqref{eq:eom2} that
\begin{equation}
Q^2 = \Phi_0^2 \left( 1-3 \frac{\Phi_0^2}{\ell_4^2} \right)~,
\end{equation}
and hence $F_{ab}$ is a constant ($Q\Phi^{-2}_0$) times the volume element of $g_{ab}$. Equation \eqref{eq:eom1} determines the Ricci scalar to be
\begin{equation}\label{eq:ads2}
{\cal R}_0 = -\frac{2}{\Phi_0^2} \left( 1- 6\frac{\Phi_0^2}{\ell_4^2}\right)~.
\end{equation}
That is, the metric $g_{ab}$ has constant curvature. If $6\Phi_0^2<\ell_4^2$, the solution is locally AdS$_2$, where the curvature radius of the 2D geometry is
\begin{equation} \label{radius3}
\frac{1}{\ell_{\rm A}^2}= \frac{1}{\Phi_0^2} \left( 1- 6\frac{\Phi_0^2}{\ell_4^2}\right)~.
\end{equation}
In comparison to \eqref{dscoldext}-\eqref{ads2-cold}, we have $\Phi_0=r_0$ as expected. If $6\Phi_0^2>\ell_4^2$, then the solution is locally dS$_2$, with curvature radius
\begin{equation} \label{radius2}
\frac{1}{\ell_{\rm dS}^2}= \frac{1}{\Phi_0^2} \left( 6\frac{\Phi_0^2}{\ell_4^2}-1\right) ~.
\end{equation}
Comparing to \eqref{ds2-nar}-\eqref{dsN}, we have $\Phi_0=r_\mathsf{n}$.
And if $6\Phi_0^2=\ell_4^2$, the solution is locally Mink$_2$ with $\Phi_0=r_\mathsf{uc}$, in accordance with \eqref{eq:rucl}.
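The constant-dilaton relations above follow from simple algebra, which can be checked directly: substituting $F^2=-2Q^2/\Phi_0^4$ and $\Lambda_4=3/\ell_4^2$ into \eqref{eq:eom1} must reproduce \eqref{eq:ads2} for any $\Phi_0$. A minimal sketch (illustrative $\ell_4^2=1$) sampling the three regimes:

```python
import math

l42 = 1.0                                   # ell_4^2 (illustrative)
Lambda4 = 3 / l42
for Phi0 in (0.1, 0.45, math.sqrt(l42/6)):  # AdS2, dS2 and Mink2 samples
    Q2 = Phi0**2 * (1 - 3*Phi0**2/l42)
    F2 = -2*Q2/Phi0**4
    # solve the dilaton equation of motion for the Ricci scalar
    R0 = 1/Phi0**2 + 1.5*F2 + Lambda4
    target = -(2/Phi0**2) * (1 - 6*Phi0**2/l42)
    assert math.isclose(R0, target, rel_tol=1e-9, abs_tol=1e-9)
```

For the last sample, $6\Phi_0^2=\ell_4^2$, the curvature vanishes, reproducing the Mink$_2$ case.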
In the following sections we will consider linear fluctuations around this IR background. For this purpose, we define
\begin{equation}
\begin{aligned}\label{eq:lin3}
\Phi &= \Phi_0 + \lambda\, Y(x)~,\\
g_{ab}&= \bar g_{ab} + \lambda\, h_{ab}~, \\
A_{a}&= \bar A_{a} + \lambda\, \mathcal{A}_{a}~,
\end{aligned}
\end{equation}
where $\bar g_{ab}$ is the metric of a locally AdS$_2$, dS$_2$ or Mink$_2$ space, i.e., it satisfies \eqref{eq:ads2}, and $\lambda$ is a small parameter. The fluctuations of the gauge field $\mathcal{A}_{a}$ enter via the field strength in the equations of motion. These are determined by \eqref{eq:F1}: from there we see that
\begin{equation}\label{eq:F2}
\delta F_{ab}= \frac{\delta Q }{\Phi_0^2}\sqrt{-\bar{g}}\epsilon_{ab} - \frac{3Q }{ \Phi_0^3} Y \sqrt{-\bar{g}}\epsilon_{ab} + \frac{Q }{ 2\Phi_0^2} \sqrt{-\bar{g}} \epsilon_{ab} \bar{g}^{cd} h_{cd}~.
\end{equation}
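The variation \eqref{eq:F2} can be checked against a brute-force linearization: perturb $Q$, $\Phi$ and a sample Lorentzian 2d metric, and compare the numerical $\lambda$-derivative of $F_{ab}$ in \eqref{eq:F1} with the closed-form expression. All numbers below are arbitrary test values.

```python
import math

Phi0, Q, dQ, Y = 0.7, 0.3, 0.05, 0.11        # arbitrary test values
gbar = [[-2.0, 0.1], [0.1, 0.5]]             # sample Lorentzian background metric
hab  = [[0.3, -0.07], [-0.07, 0.2]]          # metric perturbation

def det2(m):
    return m[0][0]*m[1][1] - m[0][1]*m[1][0]

def F01(lam):
    """F_{01} from the exact solution, with Q, Phi and g perturbed by lambda."""
    g = [[gbar[i][j] + lam*hab[i][j] for j in range(2)] for i in range(2)]
    return (Q + lam*dQ) * Phi0 / (Phi0 + lam*Y)**3 * math.sqrt(-det2(g))

eps = 1e-6
numeric = (F01(eps) - F01(-eps)) / (2*eps)   # dF_{01}/dlambda at lambda = 0

detg = det2(gbar)
# gbar^{cd} h_{cd} for a 2x2 metric
trace = (gbar[1][1]*hab[0][0] - 2*gbar[0][1]*hab[0][1] + gbar[0][0]*hab[1][1]) / detg
analytic = math.sqrt(-detg) * (dQ/Phi0**2 - 3*Q*Y/Phi0**3 + Q*trace/(2*Phi0**2))
assert math.isclose(numeric, analytic, rel_tol=1e-7)
```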
In most of the prior literature it is common to set $\delta Q=0$, i.e., to hold the charge fixed. However, for our purpose it will be important to keep $\delta Q$ arbitrary. With this in mind, from the equations of motion \eqref{eq:eom2} and \eqref{eq:eom1}, we find that at linear order in $\lambda$ the dynamics of $Y(x)$ and $h_{ab}$ become
\begin{equation}\label{massaged1}
\begin{aligned}
{(\bar \nabla_a\bar \nabla_b -\bar g_{ab}\bar \square) Y(x)-\frac{{\cal R}_0}{ 2} \bar g_{ab} Y(x) -\frac{1}{\Phi_0^3} \bar g_{ab} Q \delta Q =0~,}
\end{aligned}
\end{equation}
and
\begin{equation} \label{massaged2}
\begin{aligned}
{\bar \nabla^a\bar \nabla^b h_{ab} -\bar \square h(x) - \frac{{\cal R}_0}{ 2} h(x)- \frac{12}{\Phi_0^3} \left( 1-4\frac{\Phi_0^2}{\ell_4^2} \right)Y(x) +\frac{6}{\Phi_0^4} Q \delta Q =0~,}
\end{aligned}
\end{equation}
where $h(x)= h_{ab}\bar g^{ab}$. In the following, \eqref{massaged1}-\eqref{massaged2} will dictate the response of the system as we move away from the IR background. As we discuss each extremal case, we will solve this system explicitly and connect it to the response of the RNdS$_4$ black hole.
\section{Heating up the cold black hole }\label{sec:heating_cold}
In this section we analyze the thermodynamic response near extremality of the first branch of solutions, the so-called cold black hole, characterized by an AdS$_2 \times S^2$ near-horizon geometry. For the cold solution the inner and outer black hole horizons coincide, $r_+ = r_- \equiv r_0$, and its conserved quantities are expressed in \eqref{ZMRomans}.
Our analysis will encompass two perspectives, which will be contrasted. First, a perspective from the four-dimensional black hole based on the mechanics in Sec.\,\ref{sec:bh-mech}, which should be viewed as a UV derivation of the response. The second perspective will come from a two-dimensional analysis, where the computation is done as back-reaction on the IR background of Sec.\,\ref{sec:JTreduction}. We will match both derivations and discuss the holographic interpretations.
This type of analysis has been done extensively in the literature for black holes whose near-horizon geometries contain an AdS$_2$ factor; see \cite{Mertens:2022irh} for a recent review. Hence our discussion here will be brief, and our aim is to set a stage to make comparisons with the Nariai and ultracold black holes.
\subsection{Near-extremality: thermodynamics and geometry}\label{sec:near-cold-thermo}
In this portion, we will quantify the response away from extremality starting from the black hole solution in four dimensions. That is, we will start from a non-extremal black hole and arrange parameters such that we are near to, but not at, the extremal configuration.
The elementary notion of near-extremality we will use is a deviation from coincident horizons. For the cold black hole this will take the form
\begin{equation}
r_{-}=r_{0}-\lambda\,\epsilon+O(\lambda^2)~,\qquad r_{+}=r_{0}+\lambda\,\epsilon+O(\lambda^2)~,
\label{expds}
\end{equation}
where $\lambda$ is the decoupling parameter in \eqref{ctr}, and $\epsilon$ is a finite parameter. The two effects we will quantify are the leading-order response in $\lambda$ of the black hole mechanics, and how the near-horizon geometry is modified by $\lambda$ and $\epsilon$.
\paragraph{Thermodynamics.} The natural effect of \eqref{expds} is to raise the temperature: for $r_\mathsf{h}=r_+$ we have from \eqref{tcosmo}
\begin{equation}\label{eq:Tplus}
T_+ =\frac{1}{4\pi \ell_4^2 r_+^2} \left(2 r_++r_-+r_c\right)\left( r_+-r_-\right) \left(r_c-r_+\right)~,
\end{equation}
hence the near-extremal limit raises the temperature from zero to $T_+\sim O(\lambda)$. Now, in this process we would like to keep the charge $Q$ fixed: this is not a necessary condition, but one that is consistent to take.\footnote{Throughout we will always take $\ell_4$ fixed, as done in Sec.\,\ref{sec:bh-mech}. This is also not necessary, but reasonable for the comparisons we will make among the three extremal limits. } In this case, one has to adjust the $O(\lambda^2)$ terms in \eqref{expds}. The result is a response $\delta Q\sim O(\lambda^3)$ and $\delta M\sim O(\lambda^2)$, hence making the effects of the charge sub-leading.
Taking this into account, and using \eqref{constraints}, \eqref{eq:Tplus} and \eqref{expds}, the response of the mass as a function of the temperature is
\begin{equation}
M=M_{0}+\frac{T_+^{2}}{M_{\rm gap}^{\rm cold}}+O(T_+^3)~,
\label{Mextplusgap}
\end{equation}
where the extremal mass, $M_0$, is defined in \eqref{ZMRomans}. We also identify the mass gap as
\begin{equation} \label{Mgap_cold}
M_{\rm gap}^{\rm cold}=\frac{(\ell_4^2-6r_{0}^{2})}{2\pi^{2} \, \ell_4^2 \, r_{0}^{3}} ~.
\end{equation}
The entropy at the outer horizon is linear in the temperature
\begin{equation}
S_+=\pi r_{+}^{2}
= S_{0}+\frac{2T_+}{M_{\rm gap}^{\rm cold}}+O(T_+^{2}) ~,
\label{Scold1}
\end{equation}
where $S_{0}=\pi r_{0}^{2}$ is the extremal entropy. The first law at this order is simply $dM = T_+ dS_+ + O(\lambda^3) $.
It is worth pointing out that the change in the entropy of the cosmological horizon comes at $O(\lambda^2)$, and it is therefore subleading with respect to the change in entropy at the outer horizon. For this reason, we can consider this an ensemble of fixed charge and fixed cosmological horizon area.
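The expansion \eqref{Mextplusgap}-\eqref{Scold1} is easy to test numerically at fixed charge: eliminating $M$ through $V(r_+)=0$ gives $M(r_+,Q)$ and $T_+(r_+,Q)$ in closed form, and slightly splitting the horizons should reproduce $M-M_0\simeq T_+^2/M_{\rm gap}^{\rm cold}$ and $S_+-S_0\simeq 2T_+/M_{\rm gap}^{\rm cold}$. A sketch with illustrative values:

```python
import math

l2, r0 = 1.0, 0.1                      # illustrative ell_4^2 and extremal radius
Q2 = r0**2 * (1 - 3*r0**2/l2)          # extremal (cold) charge, held fixed
M0 = r0 * (1 - 2*r0**2/l2)
S0 = math.pi * r0**2
Mgap = (l2 - 6*r0**2) / (2*math.pi**2 * l2 * r0**3)

def M_of(rp):   # mass from V(r_+) = 0 at fixed Q
    return 0.5*rp*(1 + Q2/rp**2 - rp**2/l2)

def T_of(rp):   # Hawking temperature with M eliminated
    return (1/rp - Q2/rp**3 - 3*rp/l2) / (4*math.pi)

rp = r0 * (1 + 1e-4)                   # slightly split horizons
T = T_of(rp)
assert T > 0
assert abs((M_of(rp) - M0) / (T**2/Mgap) - 1) < 1e-2
assert abs((math.pi*rp**2 - S0) / (2*T/Mgap) - 1) < 1e-2
```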
\paragraph{Near-horizon geometry.}
As anticipated, separating the inner and the outer black hole horizons by a small amount increases the temperature, and hence, we see an increase in the entropy. There is as well an effect in the near-horizon geometry, which we now quantify.
To see the form of the near-horizon geometry of this configuration we take into consideration the displacement of the horizons in \eqref{expds} as we take the decoupling limit. Adapting slightly \eqref{ctr},\footnote{This adjustment is just for aesthetic reasons, i.e., to preserve the form $g_{RR}=\ell_{\rm A}^2/R^2$ as we take the limit $\lambda\to 0$.} we now have
\begin{equation}\label{ctr:near}
r= r_0 +\lambda \left(R+ \frac{\epsilon^2}{4} R^{-1}\right) ~, \qquad t=\frac{\ell_{\rm A}^2}{\lambda}T~.
\end{equation}
Using \eqref{expds} and \eqref{ctr:near} on \eqref{dsrnds}, and taking $\lambda\to 0$, we find
\begin{equation}\label{eq:near-cold}
\begin{aligned}
ds^{2}&=\ell_{\rm A}^2 \left( -R^{2}\left(1-\frac{\epsilon^{2}}{4R^2}\right)^2\mathrm{d} T^{2}+\frac{\mathrm{d} R^{2}}{R^{2}} \right)+r_{0}^{2}
\,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,\\
F &=\frac{Q_0}{r_0}\, \ell_{\rm A}^2\left(1-\frac{\epsilon^{2}}{4R^2}\right) \mathrm{d} T\wedge \mathrm{d} R~,
\end{aligned}
\end{equation}
where $\ell_A$ is defined in (\ref{ads2-cold}).
This is an instance of a \textit{nearly}-$AdS_2$ geometry, which arises as the near-horizon region of the near-extremal solution. This geometry is locally AdS$_2$, but globally some of the isometries are broken. They are restored if we take the extremal limit, corresponding to $\epsilon\rightarrow 0$.
It is also important to quantify the leading order response, and compare it with the holographic properties we will discuss shortly. Using the two-dimensional notation, we will parameterize the first response in $\lambda$ as
\begin{equation}
\begin{aligned} \label{ads2pluspertCE}
ds^2&= \left(\bar{g}_{ab} + \lambda\, h_{ab}\right) \mathrm{d} x^a \mathrm{d} x^b+ \left(\Phi_0^2 + 2\lambda \Phi_0 \, Y \right)\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2 \right)+\cdots ~,\\
A &= \bar A_{a} \mathrm{d} x^a + \lambda\, {\cal A}_a \mathrm{d} x^a + \cdots~,
\end{aligned}
\end{equation}
that is, there is a response from the AdS$_2$ background ($h_{ab}$, ${\cal A}_a$) and the size of the two-sphere (${Y}$). Here $\bar{g}_{ab}$ is the locally AdS$_2$ background, and $\bar A_a$ is the gauge field associated to $F$ in \eqref{eq:near-cold}. For the near-extremal cold solution we find
\begin{equation}\label{eq:Ycold}
\Phi_0 = r_0~, \qquad Y(x)=R+ \frac{\epsilon^2}{4} R^{-1} ~.
\end{equation}
Note that we are not explicitly reporting the profile of $h_{ab}$, although it is straightforward to extract. This is because $h_{ab}$ is not an independent degree of freedom. As will be evident in Sec.\,\ref{sec:hol-cold}, its profile is determined by a choice of gauge and the dynamics of $Y$. Since we are holding the charge $Q$ fixed, the response of ${\cal A}_a$ is also dictated by $ Y$ and $h_{ab}$; see \eqref{eq:F2}.
\subsection{Holographic analysis}\label{sec:hol-cold}
In this portion we will analyze the extremal cold solutions, and their response near extremality, from the two-dimensional perspective. We will first report the solutions for the linearized perturbations and then analyze the renormalized action.
This follows closely the analogous derivations in \cite{Cvetic:2016eiv,Castro:2018ffi,Castro:2019vog}, which we refer to for more details.
We will cast our solutions in a radial gauge, where we take
\begin{equation}\label{eq:gauge-cold}
ds^2= \mathrm{d}\rho^2 + \gamma_{TT} \mathrm{d} T^2~, \qquad A=A_T \,\mathrm{d} T~.
\end{equation}
For a cold black hole, the appropriate IR background
is \eqref{radius3}: the locally AdS$_2$ solution. For this case we will cast the metric as
\begin{equation} \label{bg_cold}
\overline{g}_{ab} \mathrm{d} x^a \mathrm{d} x^b= \mathrm{d}\rho^2 + \bar{\gamma}_{TT} \mathrm{d} T^2\,, \qquad \bar \gamma_{TT} = - \left(\alpha(T) e^{\rho/\ell_{\rm A}} + \beta(T) e^{-\rho/\ell_{\rm A}}\right)^2\,,
\end{equation}
and the background gauge field reads
\begin{equation}\label{form_gauge}
\bar A=\bar A_{T}\mathrm{d} T = \mu(T)\mathrm{d} T -\frac{Q \ell_{\rm A}}{\Phi_0^2} \left(\alpha(T) e^{\rho/\ell_{\rm A}} - \beta(T) e^{-\rho/\ell_{\rm A}}\right)\mathrm{d} T~.
\end{equation}
This solution is locally AdS$_2$ for arbitrary metric functions $\alpha(T)$ and $\beta(T)$; note that $\mu(T)$ is a pure gauge function. In comparison to \eqref{eq:near-cold} we have
\begin{equation}\label{eq:compare12}
R= e^{\rho/\ell_{\rm A}}~,\qquad \alpha(T)_{\rm cold}= \ell_{\rm A} ~, \quad \beta(T)_{\rm cold}= -\ell_{\rm A} \frac{\epsilon^2}{4}~.
\end{equation}
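As a cross-check of the dictionary \eqref{eq:compare12}, a short symbolic sketch confirms that the radial-gauge background reproduces the nearly-AdS$_2$ metric components of \eqref{eq:near-cold}:

```python
# Symbolic check: R = e^{rho/lA}, alpha = lA, beta = -lA eps^2/4 maps the
# radial-gauge background gamma_TT to the nearly-AdS2 metric component,
# and d(rho)^2 = lA^2 dR^2/R^2.
import sympy as sp

rho, lA, eps = sp.symbols('rho l_A epsilon', positive=True)
R = sp.exp(rho / lA)

alpha = lA
beta = -lA * eps**2 / 4
gamma_TT = -(alpha * sp.exp(rho / lA) + beta * sp.exp(-rho / lA))**2   # radial gauge
gTT_near = -lA**2 * R**2 * (1 - eps**2 / (4 * R**2))**2                # nearly-AdS2 form

diff_TT = sp.simplify(gamma_TT - gTT_near)
diff_RR = sp.simplify(lA**2 * sp.diff(R, rho)**2 / R**2 - 1)
print(diff_TT, diff_RR)  # 0 0
```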
The response for this background will be parameterized by $Y(x)$ and $h_{ab}$ as defined in \eqref{eq:lin3}; since we are taking $\delta Q=0$, the response of the gauge field is fixed by \eqref{eq:F2}. The profile of the perturbations comes from solving the linearized equations given in \eqref{massaged1}-\eqref{massaged2}. Starting with the solution to \eqref{massaged1}, in our conventions the dilaton reads
\begin{equation}
Y(x)= \nu(T)e^{\rho/\ell_{\rm A}} + \theta(T) e^{-\rho/\ell_{\rm A}}~,
\end{equation}
with
\begin{equation}
\begin{aligned}\label{eq:beta-nu}
\beta(T) &= -\frac{\ell_{\rm A}^2}{4} \frac{\alpha}{ \partial_T \nu} \partial_T \left( \frac{1}{\nu} \left( c_0 + \frac{(\partial_T \nu)^2}{\alpha^2} \right) \right) ~, \\
\theta(T) & = -\frac{\ell_{\rm A}^2}{4 \nu} \left(c_0 + \frac{(\partial_T \nu)^2}{\alpha^2} \right)~,
\end{aligned}
\end{equation}
where $c_0$ is a constant. Here we have chosen to express the subleading components $\beta(T), \theta(T)$ of the background metric and fluctuations in terms of the leading source terms $\alpha(T), \nu(T)$. Comparing with \eqref{eq:Ycold}, we have
\begin{equation}\label{eq:compare34}
\nu(T)_{\rm cold}=1 ~,\qquad \theta(T)_{\rm cold}=\frac{\epsilon^2}{4}~.
\end{equation}
Finally we report on the metric perturbations. Once we plug in the solution for the dilaton, the equation for the metric \eqref{massaged2} is straightforwardly solved by
\begin{equation}\label{eq:g1}
h_{TT} =
-4\frac{\ell_{\rm A}^2}{ \Phi_0^3} \left(1-4\frac{\Phi_0^2}{\ell_4^2}\right) \left( \bar\gamma_{TT} \, Y(x) -2 \ell_{\rm A}^2 \sqrt{-\bar \gamma}\, {\partial_T} \left( \frac{\partial_T \nu(T) }{\alpha (T)} \right) \right) \,.
\end{equation}
Notice that we have focused on the inhomogeneous part of the solution, given that the homogeneous part can be absorbed in the arbitrary functions $\alpha(T)$ and $\beta(T)$ appearing in the background metric solution \eqref{bg_cold}.
To sum up, the solutions to the system \eqref{massaged1}-\eqref{massaged2} are given in terms of only two functions, $\alpha(T)$ and $\nu(T)$, which appear as the (finite) source in the background metric and the (infinitesimal) source for the irrelevant operator dual to the dilaton $Y$, whose conformal dimension is $\Delta=2$.
Everything so far closely resembles the generic discussion of backreaction in AdS$_2$ as done in, e.g., \cite{Castro:2018ffi}. However, it is worth noticing that the metric backreaction in \eqref{eq:g1} does not have a definite sign in de Sitter space: the overall sign of the correction to $h_{TT}$ can change. If
\begin{equation}\label{eq:boundQ}
3\Phi_0^2 \left(1-4\frac{\Phi_0^2}{\ell_4^2}\right)= 4 Q^2-\Phi_0^2 \geq0~,
\end{equation}
we find that the behaviour is in accordance with the backreaction of black holes embedded in AdS$_4$ and Mink$_4$. However, the overall sign in \eqref{eq:g1} flips when $4 Q^2-\Phi_0^2 <0$. This is curious since \eqref{eq:boundQ} is a more stringent bound than \eqref{radius3}. That is, the behaviour of $h_{TT}$ changes before we reach the ultracold limit at the top of Fig.\,\ref{SharkFin}. We stress that when $\Lambda_4\leq0$ the coefficient in \eqref{eq:g1} is always negative; this is the first instance known to us where the metric backreaction changes its behaviour. In the cases considered in \cite{Castro:2021fhc,Castro:2021wzn}, which are models that include matter fields in two dimensions, the backreaction also changes sign; however, in those cases it is reflected in the interactions between the dilaton and the matter content of the theory.
The constraint \eqref{eq:boundQ} might be a reflection that close to the Shark Fin we need to consider perturbations with $\delta Q\neq0$. This would be compatible with our results in Sec.\,\ref{sec:ultracold}, but unfortunately we have not been able to tie these two observations together. It would be interesting to have a holographic understanding of the origin of \eqref{eq:boundQ}, and also to find other aspects of the response of the black hole that are affected by it.
\subsubsection{Renormalized action}
In order to compute the renormalized on-shell action we consider the effective 2D action \eqref{eq:2daction},
and we plug in the solution constructed between \eqref{eq:gauge-cold}-\eqref{eq:g1}. The range of integration is taken to be a finite radial value $\rho_h$ in the IR and a cutoff regulator $\rho_0$ in the UV of AdS$_2$. The regulated action is divergent as we send $\rho_0$ to infinity; therefore it needs to be supplemented by additional boundary terms, which guarantee a well-posed
Dirichlet boundary problem (the Gibbons-Hawking-York term $I_{\rm GH}$) and remove the residual divergences ($I_{\rm ct}$) once the cutoff is removed:\footnote{Recall that we are setting $G_N=1$.}
\begin{equation} \label{ct_cold_r}
I_{\rm ct} = - \frac{1}{ \, \ell_{\rm A}} \int \mathrm{d} T \, \sqrt{-\gamma}\,\Phi^2~, \qquad I_{\rm GH} = \frac{1}{2 } \int \mathrm{d} T \, \sqrt{-\gamma}\, \Phi^2\,K ~.
\end{equation}
These counterterms are appropriate for a two-dimensional spacetime ${\cal M}_2$ with a one-dimensional boundary $\partial {\cal M}_2$ with induced metric $\gamma_{ab}$. The Gibbons-Hawking surface term is given in terms of the trace of the extrinsic curvature $K_{ab}$ of the boundary, $K_{ab} = -\frac12 (\nabla_{a} n_{b} + \nabla_{b} n_{a})$,
where $n^{a}$ is the outward-pointing normal vector on $\partial {\cal M}_2$.
Moreover, given the form of the gauge field \eqref{form_gauge}, one can see that the leading component of $A_T$ is not the source term $\mu(T)$, but the term proportional to the volume of AdS$_2$. In order to have a well-defined variational principle in terms of the source, we have to perform a double Legendre transform \cite{Cvetic:2016eiv}, which both cancels the volume term in the momentum conjugate to the gauge field and imposes Dirichlet boundary conditions. We collect below the final form of the action; more details can be found in \cite{Cvetic:2016eiv,Castro:2018ffi,Castro:2019vog}.
In the final form of the renormalized on-shell action the dependence on the regulator $\rho_0$ drops out, and the result is finite and depends on the source functions $ \alpha(T), \nu(T), \mu(T)$ appearing explicitly in the solution. Its value is
\begin{equation}
I_{\rm on-shell-cold} = -\ell_{\rm A} \Phi_0 \, \lambda \int \mathrm{d} T \left(\frac{ 4 c_0 \alpha (T)}{ \nu (T)}+\frac{ \nu '(T)^2}{\alpha (T) \nu (T)} - \mu(T) \frac{Q}{\Phi_0^3} \right) + I_{\rm global} \,,
\end{equation}
where $I_{\rm global}$ denotes the value of the integral evaluated at the horizon $\rho_h$, whose explicit form is not necessary for our purposes.
Following the reasoning in \cite{Castro:2018ffi}, we can see that the renormalized action is invariant under infinitesimal time reparameterizations and $U(1)$ gauge transformations. The functions $ \alpha(T)$, $\nu(T)$, and $\mu(T)$ are pure gauge and can be traded for the three independent functions that generate residual gauge symmetries; the on-shell action actually depends only on one of them, which we call $\sigma(T)$, the generator of a boundary Weyl transformation. Without loss of generality, therefore, we can parameterize the sources as a finite Weyl transformation starting from the reference point with $\alpha=1$, $\nu=1$, $\mu=\mu_0$, namely
\begin{equation}\label{eq:weyl}
\alpha(T) = e^{\sigma(T)/\ell_{\rm A}}, \qquad \nu(T) = e^{\sigma(T)/\ell_{\rm A}}, \qquad \mu(T) = \mu_0\,.
\end{equation}
Inserting these into the on-shell action, the latter reduces to the simple expression:
\begin{equation} \label{onshell_final_def}
I_{\rm on-shell-cold} = \ell_{\rm A} \Phi_0 \, \lambda \int \mathrm{d} T \left( -{4 c_0} + 2 \{ \tau(T),T \} + \frac{Q \mu_0}{\Phi_0^3} \right) + I_{\rm global}~.
\end{equation}
To arrive at expression \eqref{onshell_final_def}, we have parameterized $\sigma(T)$ in terms of an arbitrary auxiliary function $\tau(T)$ as
\begin{equation}
\sigma(T) = \ell_{\rm A} \log \partial_T \tau(T)~,
\end{equation}
and added a total derivative term. In the integral we have introduced
\begin{equation}
\{ \tau(T),T \} \equiv \frac{\partial_T^3 \tau}{\partial_T \tau} - \frac32 \left(\frac{\partial_T^2 \tau}{\partial_T \tau} \right)^2~,
\end{equation}
i.e., the Schwarzian derivative.
Unsurprisingly, the response of the system under Weyl transformations of the boundary metric manifests at the level of the on-shell action in the appearance of the Schwarzian derivative.
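The reduction leading to \eqref{onshell_final_def} can be verified symbolically. The sketch below checks that, with $\alpha=\nu=\partial_T\tau$, the integrand $4c_0\alpha/\nu+\nu'^2/(\alpha\nu)$ equals $4c_0-2\{\tau(T),T\}$ up to the total derivative mentioned above:

```python
# Symbolic check: with alpha(T) = nu(T) = tau'(T), the on-shell integrand
# 4 c0 alpha/nu + nu'^2/(alpha nu) equals 4 c0 - 2 {tau,T} plus a total derivative.
import sympy as sp

T, c0 = sp.symbols('T c_0')
tau = sp.Function('tau')(T)

nu = sp.diff(tau, T)        # Weyl parameterization: alpha = nu = tau'
alpha = nu
integrand = 4 * c0 * alpha / nu + sp.diff(nu, T)**2 / (alpha * nu)

schwarzian = (sp.diff(tau, T, 3) / sp.diff(tau, T)
              - sp.Rational(3, 2) * (sp.diff(tau, T, 2) / sp.diff(tau, T))**2)
total_der = sp.diff(2 * sp.diff(tau, T, 2) / sp.diff(tau, T), T)

residual = sp.simplify(integrand - (4 * c0 - 2 * schwarzian + total_der))
print(residual)  # 0
```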
We are ready now to briefly make contact with the thermodynamic analysis due to the linear response induced by $Y(x)$. We will be working with \eqref{eq:compare12} and \eqref{eq:compare34},\footnote{In order to cast the metric \eqref{dscoldext} in the form of the background $\overline{g}_{ab}$ with \eqref{bg_cold}, we need to re-scale the time coordinate in \eqref{ctr} by $T \rightarrow T/\ell_{\rm A}$, effectively resulting in the multiplicative $\ell_{\rm A}$ factor in the on-shell action.} for which the background metric is \eqref{bg_cold}. This is a solution that contains a horizon at
\begin{equation}
\bar\gamma_{TT}(\rho = \rho_h) =0 \qquad \rightarrow \qquad e^{2\rho_h/\ell_{\rm A}} = -\frac{\beta_{\rm cold}}{\alpha_{\rm cold}}\,.
\end{equation}
The associated temperature in 2D is defined as
\begin{equation} \label{T2}
T_{2D} = \frac{1}{2\pi} \partial_{\rho} \sqrt{-\gamma}|_{\rho_h} = \frac{\sqrt{-\alpha_{\rm cold}\beta_{\rm cold}}}{\pi\, \ell_{\rm A}} = \frac{\epsilon}{2\pi}~.
\end{equation}
Notice that its relation to \eqref{eq:Tplus}, near extremality, is $T_+= \frac{\lambda}{\ell_{\rm A}^2}T_{2D}$ (in accordance with the change of time coordinate in \eqref{ctr:near}). The entropy is found by evaluating the dilaton at the horizon:
\begin{equation}
\begin{aligned}
S_{2D} &= \pi \Phi(x)^2_{\rm horizon}\\
&= \pi \Phi_0^2 + 2\pi \Phi_0 \lambda Y(x)_{\rm horizon} + \cdots
\end{aligned}
\end{equation}
After using the values reported here, it is straightforward to check that this agrees with \eqref{Scold1} and \eqref{Mgap_cold}, where in the 2D language we have
\begin{equation}
M_{\rm gap}^{\rm cold}=\frac{(\ell_4^2-6r_{0}^{2})}{2\pi^{2} \, \ell_4^2 \, r_{0}^{3}} = \frac{1}{2\pi^{2} \ell_{\rm A}^2\Phi_0}~.
\end{equation}
As shown in \cite{Maldacena:2016upp}, this linear response in temperature arises from the Schwarzian effective action in \eqref{onshell_final_def}, where $(M_{\rm gap}^{\rm cold})^{-1}$ is proportional to the coupling in front of the action.
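The statements above can be bundled into one symbolic check: the horizon location, the value of $T_{2D}$, and the match between the dilaton response at the horizon and $\Delta S_+ = 2T_+/M_{\rm gap}^{\rm cold}$. The sketch uses only relations quoted in this subsection:

```python
# Symbolic check of the 2D horizon data for the cold background: T_2D = eps/(2 pi),
# and the dilaton response at the horizon reproduces Delta S = 2 T_+ / M_gap,
# with T_+ = (lambda/lA^2) T_2D and M_gap = 1/(2 pi^2 lA^2 Phi0).
import sympy as sp

rho, lA, eps, lam, Phi0 = sp.symbols('rho l_A epsilon lambda Phi_0', positive=True)

alpha = lA
beta = -lA * eps**2 / 4
sqrt_minus_gamma = alpha * sp.exp(rho / lA) + beta * sp.exp(-rho / lA)

rho_h = sp.Rational(1, 2) * lA * sp.log(-beta / alpha)   # zero of sqrt(-gamma)
T2D = sp.simplify((sp.diff(sqrt_minus_gamma, rho) / (2 * sp.pi)).subs(rho, rho_h))

R_h = sp.sqrt(-beta / alpha)                  # R_h = e^{rho_h/lA} = eps/2
Y_h = sp.simplify(R_h + eps**2 / (4 * R_h))   # dilaton response at the horizon

Tp = lam / lA**2 * T2D
Mgap = 1 / (2 * sp.pi**2 * lA**2 * Phi0)
dS_dilaton = 2 * sp.pi * Phi0 * lam * Y_h
res = sp.simplify(dS_dilaton - 2 * Tp / Mgap)
print(T2D, Y_h, res)  # eps/(2*pi), eps, 0
```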
\section{Deviations away from extremality for Nariai }\label{sec:nariai}
In this section we analyze the response away from extremality for the Nariai black hole. This was the solution with $r_+=r_c\equiv r_\mathsf{n}$, i.e., the outer and cosmological horizons coincide. The properties of the extremal solution are described in \eqref{RNariai}-\eqref{ZMN1}. A key feature here is that the near-horizon geometry for the Nariai solution is dS$_2\times S^2$, where the de Sitter radius is
\begin{equation}
\ell^2_{\rm dS}=\frac{r_{\mathsf{n}}^2 \ell_4^2}{6 r_{\mathsf{n}}^2 - \ell_4^2}~,
\end{equation}
and the key restriction to recall in this case is
\begin{equation} \label{cnd_nar}
6 r_{\mathsf{n}}^2 > \ell_4^2~.
\end{equation}
This will be important as we contrast the cold black hole with Nariai: many aspects are shared by dS$_2$ and AdS$_2$, but small differences are key.
There are several recent analyses of dS$_2$, and near-dS$_2$, that apply to the Nariai limit of Schwarzschild dS$_4$ black holes, and studies from the perspective of a two-dimensional theory with a running dilaton; see for example \cite{Maldacena:2019cbz,Moitra:2022glw,Svesko:2022txo,Anninos:2022hqo}. Our presentation here will be brief---and more details are covered in the references---since the main purpose for us is to contrast this scenario with the responses in the cold and ultracold cases.
\subsection{Near-extremality: thermodynamics and geometry}\label{sec:near-nariai-thermo}
\paragraph{Thermodynamics.} In analogy to Sec.\,\ref{sec:near-cold-thermo}, to go slightly beyond extremality, we displace $r_{+}$ and $r_{c}$ around $r_{\mathsf{n}}$ as follows
\begin{equation}
r_{+}=r_{\mathsf{n}}-\lambda\epsilon+O(\lambda^{2})~,\qquad r_{c}=r_{\mathsf{n}}+\lambda\epsilon+O(\lambda^{2})~.
\label{expdsnariai}
\end{equation}
As expected, this deviation turns on a temperature at the outer horizon $(T_+)$ and the cosmological horizon $(T_c)$, both linear in $\lambda$ at leading order. That is, $T_+=T_c\sim O(\lambda)$, with the equality holding only at leading order.
To follow the parallel with Sec.\,\ref{sec:near-cold-thermo}, here we can also fix the charge $Q$ of the black hole, which sets constraints on the higher order terms in \eqref{expdsnariai}. With this choice, and taking the perspective of the cosmological horizon, we find the following mechanical response:
\begin{equation}\label{eq:MS12}
M=M_{\mathsf{n}}+\frac{T_c^{2}}{M_{\rm gap}^{\rm n}}+\cdots~,
\qquad
S_c=S_{\mathsf{n}}-\frac{2T_c}{M_{\rm gap}^{\rm n}}+\cdots~,
\end{equation}
where $M_{\mathsf{n}}$ is given in \eqref{ZMN1}, $S_{\mathsf{n}}=\pi r_\mathsf{n}^2$, and
\begin{equation}\label{eq:gapn}
M_{\rm gap}^{\rm n}=\frac{(\ell_4^2-6r_{\mathsf{n}}^{2})}{2\pi^{2} \, \ell_4^2 \, r_{\mathsf{n}}^{3}} =-\frac{1}{2\pi^{2}\,\ell_{\rm dS}^2\, r_{\mathsf{n}}}~<0~.
\end{equation}
Very crucially, here the mass gap is negative! And one should expect this: for fixed $Q$, starting from the extremal Nariai solution, represented by the right edge of the diagram in Fig.\,\ref{SharkFin}, we can only decrease the mass if we want to find physical solutions.
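The negative gap can be checked numerically along the same lines as in the cold case, again assuming the standard RN-dS$_4$ blackening factor $V(r)=1-2M/r+Q^2/r^2-r^2/\ell_4^2$ and holding $Q$ at its extremal value:

```python
# Numerical check that the Nariai gap is negative: at fixed Q, M = Mn + Tc^2/Mgap
# with Mgap < 0. Assumes V(r) = 1 - 2M/r + Q^2/r^2 - r^2/l4^2.
import math

l4 = 1.0
rn = 0.45   # Nariai radius: l4^2/6 < rn^2 < l4^2/3, so 6 rn^2 > l4^2 and Q^2 > 0
Q2 = rn**2 * (1 - 3 * rn**2 / l4**2)   # extremal charge squared, from V = V' = 0
Mn = rn * (1 - 2 * rn**2 / l4**2)      # extremal Nariai mass
Mgap = (l4**2 - 6 * rn**2) / (2 * math.pi**2 * l4**2 * rn**3)   # < 0

# cross-check against the dS2 radius quoted above: Mgap = -1/(2 pi^2 l_dS^2 rn)
ldS2 = rn**2 * l4**2 / (6 * rn**2 - l4**2)
assert abs(Mgap + 1 / (2 * math.pi**2 * ldS2 * rn)) < 1e-10

def M_of(rc):
    return 0.5 * (rc + Q2 / rc - rc**3 / l4**2)   # from V(r_c) = 0 at fixed Q

def Tc_of(rc):
    return -(1 / rc - Q2 / rc**3 - 3 * rc / l4**2) / (4 * math.pi)  # T_c = -V'(r_c)/(4 pi)

delta = 1e-4
rc = rn + delta
Tc = Tc_of(rc)
dM = M_of(rc) - Mn
print(Mgap < 0, dM < 0, dM / (Tc**2 / Mgap))  # True True, ratio -> 1
```

The mass decreases as the cosmological horizon heats up, with the quadratic coefficient set by the (negative) gap \eqref{eq:gapn}.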
We can also take the perspective of the outer horizon, which gives
\begin{equation}\label{eq:MS123}
M=M_{\mathsf{n}}+\frac{T_+^{2}}{M_{\rm gap}^{\rm n}}+\cdots~,
\qquad
S_+=S_{\mathsf{n}}+\frac{2T_+}{M_{\rm gap}^{\rm n}}+\cdots~,
\end{equation}
and the same value of mass gap in \eqref{eq:gapn}. In this case both the entropy and mass decrease! As discussed in
\cite{Svesko:2022txo,Anninos:2022hqo}, this can be viewed as an instability of the outer horizon, while the cosmological horizon is stable to the near-extremal perturbation.\footnote{In the cold case a similar effect to \eqref{eq:MS123} occurs at the inner horizon. It is usually not interesting to highlight since one tends to not place an observer between $r_-$ and $r_+$, and it is known that the inner horizon is unstable. For a cosmological horizon it is relevant to discuss the perspective of the static patch observer for whom both the outer and cosmological horizon are present.}
\paragraph{Near-horizon geometry.}
The near-horizon region is reached by performing the usual coordinate transformation \eqref{ctr1} combined with \eqref{expdsnariai}, where we will make just a small modification
\begin{equation} \label{expnariainext}
r = r_\mathsf{n} \pm \lambda R~,\qquad t = \frac{\ell_{\rm dS}^2}{\lambda} T~.
\end{equation}
In contrast to \eqref{ctr:near}, we have not modified the radial diffeomorphism since we want to recover the static patch below. We have added a ``$\pm$'' to distinguish the displacement relative to the outer or the cosmological horizon. Replacing (\ref{expnariainext}) and \eqref{expdsnariai} in (\ref{dsrnds}), and taking the decoupling limit $\lambda\rightarrow0$ while holding $T$, $R$ and the sphere fixed, we find
\begin{equation}\label{eq:near-ext-N}
ds^{2}= \ell_{\rm dS}^2 \left( (R^{2}-\epsilon^{2}) \mathrm{d} T^{2}-\frac{\mathrm{d} R^{2}}{(R^{2}-\epsilon^{2})} \right)+r_{\mathsf{n}}^{2}\,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,
\end{equation}
and the field strength is given by \eqref{Fds2}. The near-horizon geometry of the near-extremal Nariai solution contains a \textit{nearly}-dS$_2$ factor. Taking the extremal limit $\epsilon\rightarrow 0$ we indeed restore the dS$_2$ factor we had in the extremal case \eqref{dsN}.
Notice that when obtaining the line element \eqref{eq:near-ext-N}, a reasonable choice is to expand the blackening factor $V(r)$ for $r_+<r<r_c$. This restricts $-\epsilon<R<\epsilon$, and hence the result of the decoupling limit is the static patch of dS$_2$ where the Euclidean geometry is locally a sphere. One could
consider instead $r>r_c$, i.e., to have an observer on the inflationary patch, and then we would have $R>\epsilon$.
As we did in \eqref{ads2pluspertCE}, we also report on the leading order response. Using the two-dimensional notation, we will parameterize the first response in $\lambda$ as
\begin{equation}
\begin{aligned} \label{ds2pluspertCE}
ds^2&= \left(\bar{g}_{ab} + \lambda\, h_{ab}\right) \mathrm{d} x^a \mathrm{d} x^b+ \left(\Phi_0^2 + 2\lambda \Phi_0 \, Y \right)\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2 \right)+\cdots ~,\\
A &= \bar A_{a} \mathrm{d} x^a + \lambda\, {\cal A}_a \mathrm{d} x^a + \cdots~,
\end{aligned}
\end{equation}
that is, there is a response from the dS$_2$ background ($h_{ab}$, ${\cal A}_a$) and the size of the two-sphere (${Y}$). Here $\bar{g}_{ab}$ is the locally dS$_2$ background, and $\bar A_a$ is the gauge field associated to $F$ in \eqref{Fds2}. For the near-extremal Nariai solution we find
\begin{equation}\label{eq:Yn}
\Phi_0 = r_\mathsf{n}~, \qquad Y(x)=\pm R~.
\end{equation}
Here the choice of positive or negative sign determines which horizon is being deformed: the plus sign leads to the mechanics in \eqref{eq:MS12} and the minus sign to \eqref{eq:MS123}.
\subsection{Two-dimensional analysis} \label{sec_dS}
In this last portion we will discuss the solution to the linear equations \eqref{massaged1}-\eqref{massaged2}. We will adopt a notation very similar to Sec.\,\ref{sec:hol-cold}, so that the contrast with the near-AdS$_2$ counterpart is manifest. We take the following value for the background 2d metric
\begin{equation}\label{dSinfl}
ds_2^2 = -\mathrm{d}\rho^2 + \bar\gamma_{TT}\, \mathrm{d} T^2~, \qquad \bar \gamma_{TT} = \left(\alpha(T) e^{\rho/\ell_\mathsf{dS}} + \beta(T) e^{-\rho/\ell_{\mathsf{dS}}}\right)^2~,
\end{equation}
which differs from formula \eqref{bg_cold} only by an overall sign, and the presence of $\ell_{\mathsf{dS}}$ instead of $\ell_{\rm A}$. Hence, here $\rho$ is time and $T$ is a spatial direction. The metric \eqref{dSinfl} can be regarded as a generalization of global coordinates for dS$_2$. For the background gauge field we have
\begin{equation}
\bar A_{T} = \mu(T) -\frac{Q \ell_{\mathsf{dS}}}{\Phi_0^2} \left(\alpha(T) e^{\rho/\ell_{\mathsf{dS}}} - \beta(T) e^{-\rho/\ell_{\mathsf{dS}}}\right)~.
\end{equation}
The solutions for Nariai, both the background and the perturbations, can be easily found by noticing that the configuration is formally equivalent to the cold one, upon performing the transformation $\rho \rightarrow i \rho$ and $\ell_{\mathsf{dS}} \rightarrow i \ell_{\mathsf{A}}$. This takes Lorentzian dS$_2$ to Euclidean AdS$_2$. However, important subtleties come from imposing reality conditions on the various arbitrary functions that appear as we solve the system.
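The formal map can be made explicit with a short symbolic check: under $\rho\to i\rho$ and $\ell_{\mathsf{dS}}\to i\ell_{\mathsf{A}}$ the Lorentzian dS$_2$ background \eqref{dSinfl} indeed turns into the Euclidean version of the AdS$_2$ background \eqref{bg_cold}:

```python
# Symbolic check of the continuation rho -> i rho, l_dS -> i l_A: the Lorentzian
# dS2 background in radial gauge maps to the Euclidean AdS2 background.
import sympy as sp

rho = sp.symbols('rho', positive=True)
lds, lA, alpha, beta = sp.symbols('l_dS l_A alpha beta')

# Lorentzian dS2: ds^2 = -d(rho)^2 + gamma_TT dT^2
gamma_TT_dS = (alpha * sp.exp(rho / lds) + beta * sp.exp(-rho / lds))**2

# continuation: the -d(rho)^2 term flips sign since d(i rho)^2 = -d(rho)^2
g_rr_cont = -1 * sp.diff(sp.I * rho, rho)**2
gamma_TT_cont = gamma_TT_dS.subs({rho: sp.I * rho, lds: sp.I * lA}, simultaneous=True)

# Euclidean AdS2 target: ds^2 = +d(rho)^2 + (alpha e^{rho/lA} + beta e^{-rho/lA})^2 dT^2
gamma_TT_EAdS = (alpha * sp.exp(rho / lA) + beta * sp.exp(-rho / lA))**2

res_rr = sp.simplify(g_rr_cont - 1)
res_TT = sp.simplify(gamma_TT_cont - gamma_TT_EAdS)
print(res_rr, res_TT)  # 0 0
```

The reality conditions mentioned in the text are, of course, not captured by this formal manipulation.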
By adopting the same procedure as in Sec.\,\ref{sec:hol-cold}, we start by analyzing the solution to \eqref{massaged1} for $\delta Q=0$. The solution for the dilaton reads
\begin{equation}
Y(x) = \nu(T) e^{\rho/\ell_{\mathsf{dS}}} + \theta(T) e^{-\rho/\ell_{\mathsf{dS}}}~,
\end{equation}
with
\begin{equation}
\beta(T) = \frac{\alpha (T) \theta '(T)}{\nu '(T)}~, \qquad \theta(T) = \frac{c_n}{\nu(T)}-\frac{\ell_{\mathsf{dS}}^2 \left(\nu'(T)\right){}^2}{4 \alpha (T)^2 \nu(T)}~,
\end{equation}
and $c_n$ a constant. The metric perturbation is
\begin{equation}
\sqrt{-\gamma_1} = \frac{4 \ell_{\mathsf{dS}}^2 \left(4 Q^2-\Phi_0^2\right)}{3 \Phi_0^5} \left( \sqrt{-\bar \gamma} \, Y(x) +2 \ell_{\mathsf{dS}}^2 {\partial_T } \left( \frac{ \nu'(T) }{\alpha (T)} \right) \right)~.
\end{equation}
We have displayed the solutions in a coordinate system adequate for the inflationary patch of dS$_2$. However, the responses are also interesting from the static patch perspective, as reflected in our discussion in Sec.\,\ref{sec:near-nariai-thermo}. To move to the static patch we first need to extend $\rho$: this can be done by the coordinate change
\begin{equation}
\cosh \frac{\rho}{\ell_{\mathsf{dS}}} = \frac{R}{\ell_{\mathsf{dS}}}~,
\end{equation}
and select
\begin{equation}
\alpha_{\rm static} = -\beta_{\rm static}=\frac{\ell_{\rm dS}}{2}~,
\end{equation}
where now \eqref{dSinfl} becomes
\begin{equation} \label{stat_p}
ds^2 = -\ell_{\mathsf{dS}}^2\left( 1-\cfrac{R^2}{\ell_{\mathsf{dS}}^2} \right) \mathrm{d} T^2 + \cfrac{\mathrm{d} R^2}{\left(1-\cfrac{R^2}{\ell_{\mathsf{dS}}^2}\right)} ~.
\end{equation}
It is important to emphasize that now $R\geq \ell_{\mathsf{dS}} $. The solution for the dilaton in this case is linear in the radial coordinate,
\begin{equation}
Y(x) = R~.
\end{equation}
This is the same solution previously found in \cite{Moitra:2022glw,Svesko:2022txo}, and shows that a metric of the form \eqref{stat_p} can be obtained via a suitable near-extremal limit starting from a Nariai configuration. The delicate aspect of this construction is to extend the coordinates to cover an observer that is inside the cosmological horizon; this requires starting with generic complex functions $\alpha(T)$ and $\beta(T)$, and then imposing non-trivial reality conditions on the metric on the static patch.
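The coordinate change to the static patch can be verified symbolically; the sketch below confirms that $\alpha_{\rm static}=-\beta_{\rm static}=\ell_{\rm dS}/2$ together with $\cosh(\rho/\ell_{\mathsf{dS}})=R/\ell_{\mathsf{dS}}$ reproduces \eqref{stat_p}:

```python
# Symbolic check: alpha = -beta = l_dS/2 with cosh(rho/l_dS) = R/l_dS turns
# ds^2 = -d(rho)^2 + gamma_TT dT^2 into the static-patch metric.
import sympy as sp

R, lds = sp.symbols('R l_dS', positive=True)

rho_of_R = lds * sp.acosh(R / lds)              # from cosh(rho/l_dS) = R/l_dS, R >= l_dS
gamma_TT = (lds * sp.sinh(rho_of_R / lds))**2   # alpha e^x + beta e^{-x} = l_dS sinh(x)
g_RR = -sp.diff(rho_of_R, R)**2                 # from the -d(rho)^2 term

g_TT_target = -lds**2 * (1 - R**2 / lds**2)
g_RR_target = 1 / (1 - R**2 / lds**2)

res_TT = sp.simplify(gamma_TT - g_TT_target)
res_RR = sp.simplify(g_RR - g_RR_target)
print(res_TT, res_RR)  # 0 0
```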
\section{Kicking the ultracold black hole} \label{sec:ultracold}
We finally turn to the most novel instance of our extremal cases: the ultracold black hole. Recall that this solution is obtained when all three horizons coincide,
\begin{equation} \label{eq:equal-horizons}
r_-=r_+=r_c\equiv r_{\mathsf{uc}}~,
\end{equation}
and it corresponds to the point in phase space where
\begin{equation}
\begin{aligned}
r_{\mathsf{uc}}= \frac{\ell_4}{\sqrt{6}}~, \qquad
Q_{\mathsf{uc}}^2 = \frac{\ell_4^2}{12}~, \qquad
M_{\mathsf{uc}}^2= \frac{2\ell_4^2}{27}~.
\label{UCext}
\end{aligned}
\end{equation}
This is the first peculiarity of this solution: while in the previous cases extremality can be obtained for different values of $r_0$ or $r_{\mathsf{n}}$ (and thereby different values of $M$ and $Q$, according to \eqref{ZMRomans} and \eqref{ZMN1}), the ultracold case is more constrained: the extremal solution corresponds to a single point. This point is located at the tip of the Shark Fin in Fig.\,\ref{SharkFin}: one sees immediately that moving horizontally (namely, raising the mass while keeping the charge fixed) corresponds to going out of the shaded area and encountering a naked singularity.
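The ultracold data \eqref{UCext} follow from requiring a triple root of the blackening factor; a symbolic check (assuming the standard RN-dS$_4$ form $V(r)=1-2M/r+Q^2/r^2-r^2/\ell_4^2$):

```python
# Symbolic check of the ultracold point: for V(r) = 1 - 2M/r + Q^2/r^2 - r^2/l4^2,
# the values r_uc = l4/sqrt(6), Q_uc^2 = l4^2/12, M_uc^2 = 2 l4^2/27 give a triple
# root, i.e. V = V' = V'' = 0 at r = r_uc.
import sympy as sp

r, l4 = sp.symbols('r l_4', positive=True)
r_uc = l4 / sp.sqrt(6)
Q2_uc = l4**2 / 12
M_uc = sp.sqrt(sp.Rational(2, 27)) * l4

V = 1 - 2 * M_uc / r + Q2_uc / r**2 - r**2 / l4**2
checks = [sp.simplify(sp.diff(V, r, n).subs(r, r_uc)) for n in range(3)]
print(checks)  # [0, 0, 0]
```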
\begin{wrapfigure}{r}{0.38\textwidth}
\centering
\includegraphics[width=\linewidth]{"SF_tip"}
\caption{Close-up of Fig.\,\ref{SharkFin}, near the ultracold black hole.}
\label{sharkTip}
\end{wrapfigure}
Our strategy for ``heating up'' the ultracold solution will then be different: we should work in an ensemble that allows charge and mass to vary, while keeping us inside the Shark Fin. In other words, we should allow a near-extremal deformation that moves the solution downwards along the red line displayed in Fig.\,\ref{sharkTip}.
The next subsections are devoted to explaining how to achieve this, and what the consequences of this procedure are. That is, we will kick the ultracold black hole away from extremality. We will first investigate the consequences at the level of black hole thermodynamics and the near-horizon geometry. We will then carry out the holographic analysis from the two-dimensional perspective, by analyzing the perturbations around Mink$_2 \times S^2$; we will also show how they match the black hole response.
\subsection{Near-extremality: thermodynamics and geometry}\label{sec:near-uc}
\paragraph{Thermodynamics.} As familiar by now, the deviation away from extremality is given by introducing $\lambda$ to split the coincident horizons in \eqref{eq:equal-horizons} by a small amount. In the context of a thermodynamic analysis, we will first investigate how $Q$ and $M$ respond to a deviation away from the ultracold solution. We start by sending
\begin{equation}
r_-= r_{\mathsf{uc}}-w_1\lambda+O(\lambda^2)~,\qquad r_+= r_{\mathsf{uc}} - w_2 \lambda+O(\lambda^2)~,
\label{rpmw1w2}
\end{equation}
where $w_1$ and $w_2$ are constant coefficients, finite as $\lambda\to 0$, with $w_2<w_1$.\footnote{At this stage we only ask that $w_2<w_1$, so that $r_-<r_+$ at leading order in $\lambda$.} We will also be holding $\ell_4$ fixed, and hence from \eqref{constraints} we can quantify the response of $Q$ and $M$; we find,
\begin{equation}
\begin{aligned}
Q=Q_{\mathsf{uc}}-\frac{2 }{\sqrt{3} \, \ell_4}(w_1^2+w_1w_2+w_2^2)\lambda^2 +O(\lambda^3)~,\\
\label{z}
M = M_\mathsf{uc}-\frac{\sqrt{2}}{\sqrt{3}\, \ell_4} (w_1^2+w_1 w_2+w_2^2) \, \lambda ^2+O(\lambda ^3)~.
\end{aligned}
\end{equation}
This clearly illustrates the basic differences relative to the cold and Nariai black holes. First, the leading response is of order $\lambda^2$, rather than $\lambda$, for arbitrary $w_{1,2}$. Second, there are no real values of $w_1$ and $w_2$ that will hold $Q$ fixed at leading order. This is compatible with the intuition gathered from Fig.\,\ref{sharkTip}: for any value of $w_{1,2}$ and small $\lambda$, the deviations \eqref{z} stay within the Shark Fin.
Next, it is instructive to quantify how the temperature at each horizon responds to these deviations. For this, it is first useful to record that
\begin{equation}
r_c= r_{\mathsf{uc}}+(w_1+w_2)\lambda+O(\lambda^2)~.
\label{rcw1w24}
\end{equation}
With this we ensure that \eqref{rpmw1w2} and \eqref{rcw1w24} leave $\ell_4$ fixed at leading order in $\lambda$. The responses of the cosmological and outer horizon temperatures to the deviations in \eqref{rpmw1w2} and \eqref{rcw1w24} are
\begin{equation}
\begin{aligned}
T_c&=\frac{\sqrt{6}}{\pi\, \ell_4^3} \left(2w_2+w_1\right)\left(2w_1+w_2\right) \lambda^2+O(\lambda^3)~,\\
T_+&= \frac{\sqrt{6}}{\pi \ell_4^3}\left(w_1+2w_2\right)\left(w_1-w_2\right)\lambda^2+O(\lambda^3)~.
\label{TUC}
\end{aligned}
\end{equation}
Again this is very different from our previous situations: for the cold and Nariai black holes the response in the temperature was $T_\mathsf{h}\sim O(\lambda)$, while here we obtain $T_\mathsf{h}\sim O(\lambda^2)$.
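The $O(\lambda^2)$ scaling in \eqref{TUC} can be checked numerically. The sketch below assumes the quartic factorization $V(r) = -(\ell_4^2 r^2)^{-1}(r-r_-)(r-r_+)(r-r_c)(r-r_o)$ with $r_o=-(r_-+r_++r_c)$:

```python
# Numerical check that the ultracold deviation produces temperatures of order
# lambda^2, using the factorized blackening factor
# V(r) = -(r - r_-)(r - r_+)(r - r_c)(r - r_o)/(l4^2 r^2), r_o = -(r_- + r_+ + r_c).
import math

l4 = 1.0
ruc = l4 / math.sqrt(6)
w1, w2 = 1.0, 0.3

def temps(lam):
    rm, rp, rc = ruc - w1 * lam, ruc - w2 * lam, ruc + (w1 + w2) * lam
    ro = -(rm + rp + rc)
    # effective l4^2 reconstructed from the roots; it drifts from l4^2 only at O(lam^2)
    l2 = (rm + rp + rc)**2 - (rm * rp + rm * rc + rp * rc)
    Tp = (rp - rm) * (rc - rp) * (rp - ro) / (4 * math.pi * l2 * rp**2)
    Tc = (rc - rm) * (rc - rp) * (rc - ro) / (4 * math.pi * l2 * rc**2)
    return Tp, Tc

lam = 1e-4
Tp, Tc = temps(lam)
Tp2, Tc2 = temps(lam / 2)
coef_Tc = math.sqrt(6) / (math.pi * l4**3) * (w1 + 2 * w2) * (2 * w1 + w2)
print(Tp / lam**2, Tc / lam**2, coef_Tc)  # T/lam^2 finite; Tc/lam^2 -> coef_Tc
```

Both $T_\pm/\lambda^2$ tend to finite constants as $\lambda\to0$, confirming the quadratic scaling of the temperatures.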
Actually, the quantities that respond at leading order in this scenario are the electric potential and the entropy. In particular we find
\begin{equation}
\Phi_{\mathsf{c}}= \frac{Q}{r_c}= \frac{1}{\sqrt{2}} - \frac{\sqrt3}{\ell_4}(w_1+w_2) \lambda +O(\lambda^2)~,
\label{phii}
\end{equation}
and the area law at $r_c$ is
\begin{equation}
S_c=\frac{\pi \, \ell_4^2 }{6}+\sqrt{\frac{2}{3}} \pi \, \ell_4 (w_1 +w_2)\lambda +O(\lambda^2)\,.
\label{suc}
\end{equation}
From these expressions it is natural to advocate that
the change in entropy at order $\lambda$ is driven by a change in chemical potential rather than by a change in temperature (which is subleading). This is reminiscent of other studies of two-dimensional gravity theories in flat spacetime \cite{Afshar:2019axx}, where an infinite value for the specific heat
\begin{equation} \label{spec_heat}
C_s^{-1} = \frac{1}{T} \left(\frac{dT}{dS} \right) \bigg|_{Q=const}\,
\end{equation}
was found, since the change in temperature is independent of the change in entropy. The subsequent portions are devoted to showing how to retrieve this feature via an analysis of the IR background for the ultracold case.
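The leading responses \eqref{phii} and \eqref{suc} can also be obtained by a one-line series expansion, using that $Q=Q_{\mathsf{uc}}+O(\lambda^2)$ from \eqref{z}:

```python
# Symbolic check of the leading ultracold responses: with r_c = r_uc + (w1 + w2) lam
# and Q = Q_uc + O(lam^2), expand Phi_c = Q/r_c and S_c = pi r_c^2 in lam.
import sympy as sp

lam, l4, w1, w2 = sp.symbols('lambda l_4 w_1 w_2', positive=True)
r_uc = l4 / sp.sqrt(6)
Q_uc = l4 / (2 * sp.sqrt(3))   # Q_uc^2 = l4^2/12
rc = r_uc + (w1 + w2) * lam

Phi_c = sp.series(Q_uc / rc, lam, 0, 2).removeO()
S_c = sp.expand(sp.pi * rc**2)

Phi_target = 1 / sp.sqrt(2) - sp.sqrt(3) / l4 * (w1 + w2) * lam
S_target = sp.pi * l4**2 / 6 + sp.sqrt(sp.Rational(2, 3)) * sp.pi * l4 * (w1 + w2) * lam

res_Phi = sp.simplify(Phi_c - Phi_target)
res_S = sp.simplify(S_c - S_target - sp.pi * (w1 + w2)**2 * lam**2)
print(res_Phi, res_S)  # 0 0
```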
\paragraph{Near-horizon geometry.}
After displacing the location of the horizons following \eqref{rpmw1w2} and \eqref{rcw1w24}, we will now construct the near-horizon geometry. To keep expressions simple and succinct, and without loss of generality, we will make a specific choice of $w_{1,2}$: setting $w_1 =\epsilon$ and $w_2=0$, we have
\begin{equation}
r_- = r_{\mathsf{uc}} - \epsilon \, \lambda~,
\qquad r_+ = r_{\mathsf{uc}}~, \qquad
r_ c = r_{\mathsf{uc}} + \epsilon \,\lambda ~.
\end{equation}
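Note that for this choice the two temperatures in \eqref{TUC} degenerate at leading order,
\begin{equation}
T_c=T_+=\frac{2\sqrt{6}}{\pi\ell_4^3}\,\epsilon^2\lambda^2+O(\lambda^3)~,
\end{equation}
so the outer and cosmological horizons are heated equally.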
This deviation differs from the one used in \eqref{bro}, since in that instance the solution was still extremal. The coordinate transformation we will use is
\begin{equation}\label{eq:dec-ucold}
\begin{aligned}
r&= r_{\mathsf{uc}} -{R_0}\lambda + \, \lambda ^{3/2} \sqrt{\frac{2R_0^3 }{3 r_\mathsf{uc}^{3}}} \, R ~, \\
t &= \sqrt{\frac{3 r_\mathsf{uc}^{3}}{2 R_0^3} } \frac{T}{\lambda ^{3/2} } ~,
\end{aligned}
\end{equation}
where $R_0$ is an arbitrary constant. With this choice the resulting near-horizon background is
\begin{equation} \label{eq:ext-ucold-2}
ds^2=-\cfrac{R_0^2-\epsilon^2}{ R_0^2}\, \mathrm{d} T^2+ \cfrac{R_0^2}{R_0^2-\epsilon^2}\, \mathrm{d} R^2+r_\mathsf{uc}^2 \,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,
\end{equation}
and
\begin{equation}
F= \pm \frac{\sqrt{3}}{\ell_4} \mathrm{d} T \wedge \mathrm{d} R~.
\end{equation}
For $\epsilon =0$ we recover \eqref{eq:ext-ucold-1} and \eqref{eq:ext-ucoldF-1}, as expected. However, notice that the presence of $\epsilon$ is trivial: it can be completely reabsorbed by a constant rescaling of $T$ and $R$. In this context it is clear that a deviation from extremality is {\it not} heating up Mink$_2$.\footnote{Moreover, had we obtained in \eqref{eq:ext-ucold-2} a finite-temperature solution in the IR, it would imply that in the UV $T_\mathsf{h}\sim O(\lambda^{3/2})$ due to \eqref{eq:dec-ucold}. However, we know that in the UV $T_\mathsf{h}\sim O(\lambda^{2})$, as discussed around \eqref{TUC}.}
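Explicitly, defining
\begin{equation}
T'=\frac{\sqrt{R_0^2-\epsilon^2}}{R_0}\,T~,\qquad R'=\frac{R_0}{\sqrt{R_0^2-\epsilon^2}}\,R~,
\end{equation}
with $R_0>\epsilon$, brings \eqref{eq:ext-ucold-2} to $ds^2=-\mathrm{d} T'^2+\mathrm{d} R'^2+r_\mathsf{uc}^2\left(\mathrm{d}\theta^2+\sin^2\theta\,\mathrm{d}\phi^2\right)$, with all dependence on $\epsilon$ removed.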
For the comparison with the subsequent holographic analysis it is useful to transform the coordinates to
\begin{equation}\label{eq:tuRr}
T = u +R~, \qquad R = \hat{r}~,
\end{equation}
which brings the metric $\overline{g}_{ab}$ in the Eddington-Finkelstein form
\begin{equation}\label{eq:yy1}
ds^2 = - \mathrm{d} u^2 -2 \mathrm{d} u \mathrm{d}\hat{r}~.
\end{equation}
Note that in \eqref{eq:tuRr} we have rescaled \eqref{eq:ext-ucold-2} to have the usual normalization of Mink$_2$. The first corrections in $\lambda$ are also simple to cast. With our choice of coordinates we have
\begin{equation}
\begin{aligned} \label{expans_UC}
ds^2&= \left(\bar{g}_{ab} +\sqrt{\lambda} \, \tilde{g}_{ab} + \lambda\, h_{ab}\right) \mathrm{d} x^a \mathrm{d} x^b+ \left(\Phi_0^2 + 2\lambda \Phi_0 \, Y \right)\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2 \right)+\cdots ~,
\\
A &= \bar A_{a} \mathrm{d} x^a + \lambda\, {\cal A}_a \mathrm{d} x^a + \cdots~.
\end{aligned}
\end{equation}
One difference, relative to cold and Nariai, is that we get a correction that grows like $\lambda^{1/2}$. This is simply an artifact of our decoupling limit; it is straightforward to show that $\tilde{g}_{ab}$ is pure gauge. For the linear corrections we find
\begin{equation}\label{eq:yy2}
\Phi_0= r_{\mathsf{uc}}~,\qquad Y(x)= -R_0~.
\end{equation}
In contrast to cold and Nariai, here we have a constant profile for the deformation.
\subsection{Holographic analysis}
In this section we account for the unusual thermodynamic behaviour of the ultracold backgrounds by performing a holographic analysis of Mink$_2$. This discussion follows the treatment of \cite{Afshar:2019axx,Godet:2021cdl,Grumiller:2021cwg}, which considered theories of two-dimensional dilaton-gravity that admit a flat-space vacuum. The benefit in our case is that we have a black hole embedding for Mink$_2$, and hence we can systematically compare and contrast the two-dimensional results with a four-dimensional realization.
Our starting point is to consider as a background solution the two-dimensional Minkowski space metric. This is the solution described in Sec.\,\ref{sec:JTreduction}, where
\begin{equation}\label{eq:bbuc}
\Phi_0^2 = \frac{\ell_4^2}{6}~, \qquad Q^2= Q^2_{\mathsf{uc}}=\frac{\Phi_0^2}{2}\,.
\end{equation}
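The background values \eqref{eq:bbuc} correspond to the triple confluence of horizons. As a quick numerical sanity check (an illustrative sketch, not part of the derivation: it sets $\ell_4=1$ and uses $M_{\mathsf{uc}}=4r_{\mathsf{uc}}^3/\ell_4^2=\sqrt{6}\,\ell_4/9$, which follows from the horizon formulas of Sec.\,\ref{sec:RN} at $r_-=r_+=r_c$), one can verify that $r_{\mathsf{uc}}=\ell_4/\sqrt{6}$ is a triple root of $r^2V(r)$:

```python
import math

# Ultracold RNdS4 data with ell_4 = 1 (illustrative; values inferred from the
# text: r_uc^2 = ell^2/6, Q_uc^2 = ell^2/12, and M_uc from
# 2 ell^2 M = (r_+ + r_-)(r_+ + r_c)(r_- + r_c) at r_- = r_+ = r_c = r_uc).
ell = 1.0
r_uc = 1.0 / math.sqrt(6.0)
Q2 = ell**2 / 12.0
M = 4.0 * r_uc**3 / ell**2   # = sqrt(6)/9 for ell = 1

# p(r) = r^2 V(r) = -r^4/ell^2 + r^2 - 2 M r + Q^2 and its r-derivatives
def p(r):
    return -r**4 / ell**2 + r**2 - 2.0 * M * r + Q2

def dp(r):
    return -4.0 * r**3 / ell**2 + 2.0 * r - 2.0 * M

def d2p(r):
    return -12.0 * r**2 / ell**2 + 2.0

# A triple root requires p = p' = p'' = 0 at r_uc, with p''' nonzero.
print(p(r_uc), dp(r_uc), d2p(r_uc))  # all vanish up to rounding
```

Since $p'''(r_{\mathsf{uc}})=-24\,r_{\mathsf{uc}}/\ell_4^2\neq0$, the confluence is exactly threefold, matching the ultracold condition $r_-=r_+=r_c$.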
We will cast a locally Mink$_2$ space in Eddington-Finkelstein coordinates
\begin{equation} \label{edd-fink}
ds^2 = - 2 \left( \mathcal{P}(u) \hat{r} + \mathcal{T} (u) \right) \mathrm{d} u^2-2 \mathrm{d} u \mathrm{d} \hat{r}~,
\end{equation}
and the field strength is given by the volume form of this space and normalized according to \eqref{eq:F1}. For most of the discussion we leave the functions $\mathcal{P}$ and $\mathcal{T}$ general and dependent on $u$. However for the near-horizon geometry displayed in the previous section, they are both constant: we will show explicit solutions specializing to constant values of $\mathcal{P}$ and $\mathcal{T}$ which we denote by ${\cal P}_0$ and ${\cal T}_0$.
In the following we will discuss certain dynamical aspects that arise when the background solution is deformed away from its fixed point. We will first solve for linear perturbations around this background, and then quantify their imprint on the renormalized action. An important aspect will be to contrast basic properties here against those for AdS$_2$ in Sec.\,\ref{sec:hol-cold}: the interplay between $Y(x)$ and the background solution will not play a central role for Mink$_2$.
\subsubsection{Perturbations around \texorpdfstring{Mink$_2$}{Mink2} } \label{pert_UC}
The aim here is simple: to solve the linear equations \eqref{massaged1} and \eqref{massaged2} when the background is given by \eqref{eq:bbuc}-\eqref{edd-fink}. Starting from \eqref{massaged1}, which determines the dynamics of the dilaton, we find
\begin{equation}
\begin{aligned}\label{UCY}
\partial_{\hat{r}}^2 Y (x) & = 0 \\
\partial_u \partial_{\hat{r}} Y - \mathcal{P}(u) \partial_{\hat{r}} Y & = \frac{ \delta Q}{\sqrt{8}Q^2} \\
\left( {\hat{r}} \mathcal{P}\,'(u) + \mathcal{T}\,'(u) \right) \partial_{\hat{r}} Y - \mathcal{P}(u) \partial_u Y +2 ({\hat{r}} \mathcal{P}(u) + \mathcal{T}(u)) \, \partial_u \partial_{\hat{r}} Y - \partial_u^2 Y & = 0~.
\end{aligned}
\end{equation}
Notice that we have used \eqref{eq:bbuc} to simplify these expressions. For metric perturbations, we will make a choice of gauge where $h_{\hat{r}u}=h_{\hat{r}\hat{r}}=0$; this greatly simplifies \eqref{massaged2}, leaving us with
\begin{equation}
\partial_{\hat{r}}^2 h_{uu} - \frac{\sqrt{2}}{Q^3} \, Y(x) +\frac{3 \, \delta Q}{2 Q^3} = 0~. \label{UCmetricEQ}
\end{equation}
The solutions to these equations are simple to decode. From the first equation in \eqref{UCY} we read off the radial profile of $Y(x)$, which is
\begin{equation}
Y = a(u) \hat{r} + b(u)~.
\end{equation}
The functions $a(u)$ and $b(u)$ are determined by the two last equations in \eqref{UCY}, and these lead to
\begin{equation}
\begin{aligned}\label{eq:ab1}
a'(u)- a(u) \mathcal{P}(u)- \frac{\delta Q}{{2 \sqrt2} Q^2} =0 ~,\\
{\cal T}'(u) a(u)-{\cal P}(u)b'(u)+2{\cal T}(u) a'(u)-b''(u)=0~.
\end{aligned}
\end{equation}
The inhomogeneous solution to \eqref{UCmetricEQ} is given by
\begin{equation}
h_{uu} =\hat{r}^3 \frac{ a(u)}{3 \sqrt2 \, Q^3} + \hat{r}^2\frac{ b(u)}{\sqrt2 \, Q^3} -\hat{r}^2\frac{3 \, \delta Q }{4\, Q^3}~.
\end{equation}
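Indeed, substituting $Y=a(u)\hat{r}+b(u)$ into \eqref{UCmetricEQ} and integrating twice in $\hat{r}$ gives
\begin{equation}
\partial_{\hat{r}}^2 h_{uu}=\frac{\sqrt{2}}{Q^3}\left(a(u)\hat{r}+b(u)\right)-\frac{3\,\delta Q}{2Q^3}
\quad\Longrightarrow\quad
h_{uu}=\frac{a(u)}{3\sqrt{2}\,Q^3}\,\hat{r}^3+\frac{b(u)}{\sqrt{2}\,Q^3}\,\hat{r}^2-\frac{3\,\delta Q}{4Q^3}\,\hat{r}^2~,
\end{equation}
up to a homogeneous piece linear in $\hat{r}$, which has the same form as the background metric and can be absorbed into a redefinition of its functions.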
It is useful to explicitly record two classes of solutions to the equations displayed above. First, when ${\cal P}(u)=0$, and ${\cal T}(u)=1$, the solutions to \eqref{eq:ab1} are
\begin{equation}\label{eq:ysol11}
a(u)= \frac{\delta Q}{{2 \sqrt2} Q{^2}} \, u +a_0~, \qquad b(u)=\frac{\delta Q}{{2 \sqrt2} Q{^2}}\, u^2 + b_1 u + b_0~,
\end{equation}
where $a_0$ and $b_{1,0}$ are arbitrary constants. In comparison to \eqref{eq:yy1} and \eqref{eq:yy2}, relevant for the black hole, we have $\delta Q=a_0=b_1=0$. Then, $b_0=-R_0$ is the only non-trivial component of the dilaton. The second solution we will record explicitly is one where the metric functions are constant: ${\cal P}(u)={\cal P}_0$ and ${\cal T}(u)={\cal T}_0$, with ${\cal P}_0$ non-zero. This gives
\begin{equation}
\begin{aligned} \label{YsolTOTUC}
a(u)&= -\frac{1}{{2 \sqrt2} {\cal P}_0}\frac{\delta Q}{ Q{^2}} +a_1 e^{{\cal P}_0 u}~,\\
b(u)&= a_1 \frac{{\cal T}_0}{{\cal P}_0} e^{{\cal P}_0 u}+ b_2\, e^{-{\cal P}_0 u} +b_0~.
\end{aligned}
\end{equation}
Here $a_{1}$, $b_{2}$ and $b_0$ are arbitrary constants.
The solution has the same form as that found in \cite{Godet:2021cdl} in models of $\widehat{\rm CGHS}$ gravity.
At this stage it is worth remarking on a feature of static solutions to the perturbations, i.e., those that are independent of $u$. From both \eqref{eq:ysol11} and \eqref{YsolTOTUC} we see that $Y(x)$ becomes independent of the background metric at fixed charge,\footnote{${\cal P}_0$ only enters in $a(u)$ via $\delta Q$ in \eqref{YsolTOTUC}.} which is very different from the analogous condition for AdS$_2$ in Sec.\,\ref{sec:hol-cold}. This can be taken as an indication of a strange interplay between the deformation $Y(x)$ and heating up Mink$_2$. Also, as done in \cite{Godet:2021cdl}, if we impose the boundary condition
\begin{equation}
Y(x) \xrightarrow[\hat{r} \to \infty]{} \Phi_r \hat{r}~,
\end{equation}
where $\Phi_r$ is fixed, arbitrary charge variations are not allowed. We would have to require
\begin{equation}\label{eq:bc-godet}
\delta Q = -{2 \sqrt2} Q^2 {\cal P}_0 \Phi_r ~.
\end{equation}
This will have an imprint in the thermodynamics discussed below.
\subsubsection{Thermodynamics around \texorpdfstring{Mink$_2$}{Mink2}}
With the solution for the perturbations at hand, we can compute thermodynamic quantities associated with the two-dimensional black hole, with the aim of connecting with the near-extremal thermodynamics of the ultracold black hole in Sec.\,\ref{sec:near-uc}. For a static two-dimensional solution we have to set to zero all terms that are $u$-dependent. For the IR background, this means that we have
\begin{equation} \label{edd-fink1}
ds^2 = - 2 \left( \mathcal{P}_0 \hat{r} + \mathcal{T}_0 \right) \mathrm{d} u^2-2 \mathrm{d} u \mathrm{d} \hat{r}~.
\end{equation}
We will interpret this as a ``black hole'' solution, whose horizon is at $\hat{r}_h=-{\cal T}_0/{\cal P}_0$. The associated Hawking temperature is \cite{Afshar:2019axx}
\begin{equation}
T_{\rm 2D} = \frac{{\cal P}_0}{2 \pi}\,,
\end{equation}
which is defined as the surface gravity of the Killing vector $k=\partial_u$.
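This value follows from writing \eqref{edd-fink1} as $ds^2=-f(\hat{r})\,\mathrm{d} u^2-2\,\mathrm{d} u\,\mathrm{d}\hat{r}$ with $f(\hat{r})=2({\cal P}_0\hat{r}+{\cal T}_0)$, so that
\begin{equation}
\kappa=\frac{1}{2}\,f'(\hat{r}_h)={\cal P}_0~,\qquad T_{\rm 2D}=\frac{\kappa}{2\pi}=\frac{{\cal P}_0}{2\pi}~,
\end{equation}
independently of ${\cal T}_0$, which only sets the horizon location.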
A static configuration for the dilaton, with ${\cal P}_0\neq0$, means setting $a_1=0$ and $b_2=0$ in \eqref{YsolTOTUC}, which gives
\begin{equation}\label{eq:Ystatic-uc}
Y(x)= -\frac{1}{{2 \sqrt2}{\cal P}_0}\frac{\delta Q}{Q{^2}} \hat{r} + b_0~.
\end{equation}
Notice that the boundary condition \eqref{eq:bc-godet} can now be interpreted as $\delta Q\sim T_{\rm 2D}$. We can read off the entropy from the value of the dilaton $Y$ evaluated at the horizon, for which we obtain
\begin{equation}
\begin{aligned}\label{eq:entropy-mink2}
S_{2D} &= \pi \Phi(x)^2_{\rm horizon}\\
&= \pi \Phi_0^2 + 2\pi \Phi_0 \lambda Y(x)_{\rm horizon}+\cdots\\
&= \pi \Phi_0^2 + 2\pi \Phi_0 b_0 \lambda - \frac{{\cal T}_0}{T_{\rm 2D}} \Phi_0 \Phi_r \lambda + \cdots
\end{aligned}
\end{equation}
where in the last line we replaced \eqref{eq:bc-godet}. The term controlled by $b_0$ is consistent with the behaviour of the 2d perturbations described by models of dilaton gravity in flat spacetime such as $\widehat{\rm CGHS}$ \cite{Afshar:2019axx,Grumiller:2021cwg}. Indeed, when compared with the ultracold black hole entropy \eqref{suc}, we see no dependence of the entropy variation on the change in temperature, hence $C_s$, as defined in \eqref{spec_heat}, is infinite.
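This can be made explicit with the quantities above: at fixed charge ($\delta Q=0$) we have
\begin{equation}
S_{\rm 2D}=\pi\Phi_0^2+2\pi\Phi_0\,b_0\,\lambda+\cdots~,\qquad T_{\rm 2D}=\frac{{\cal P}_0}{2\pi}~,
\end{equation}
with $b_0$ and ${\cal P}_0$ independent parameters. The entropy can therefore change at fixed temperature, so $(\partial T/\partial S)\big|_{Q}=0$ and the inverse specific heat \eqref{spec_heat} vanishes.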
The last term in \eqref{eq:entropy-mink2} is clearly strange and should be treated carefully. First, notice that there is an important order of limits: if $T_{\rm 2D}=0$, i.e., ${\cal P}_0=0$, we need to consider the solutions in \eqref{eq:ysol11}, and hence only $b_0$ contributes to the entropy. For this reason we do not see this contribution in \eqref{suc}. There are at least three ways to tame this divergent behaviour for ${\cal P}_0\neq0$: one could set ${\cal T}_0=0$ as done in \cite{Godet:2021cdl}, set $\delta Q=0$, or modify the boundary condition \eqref{eq:bc-godet} such that $\delta Q\sim T_{\rm 2D}^2$.\footnote{It would be interesting to modify \eqref{eq:bc-godet} and study its repercussions more carefully. Unfortunately, we do not see an indication that a modified boundary condition for $\delta Q$ is appropriate for the ultracold black hole, so we leave this for future work.}
To complete the picture we perform holographic renormalization to compute the on-shell action. Substituting the solution found in Sec.\,\ref{pert_UC} into the 2D action \eqref{eq:2daction} gives a divergent result for the on-shell action. We regulate the integral by introducing a large but finite upper limit of integration $\hat r_0$ (a UV cutoff), while the lower limit of integration is the black hole horizon located at $\hat r=\hat r_h$. To remove the divergences we add the following counterterms:
\begin{equation}
I_{\rm on-shell-uc} = I_{\rm 2D}+I_{\rm N} + I_{\rm GH} + I_{\rm MM}\,.
\end{equation}
The first term is the action \eqref{eq:2daction}. The subsequent terms are
\begin{equation}
I_{\rm N} = {{-\frac{1}{4}}} \int \mathrm{d} u \, \sqrt{-\gamma} \, \Phi (n^{a} \partial_{a} \Phi) ~, \qquad I_{\rm GH} = {\frac{1}{2}} \int \mathrm{d} u\,\sqrt{-\gamma} \, \Phi^2 \, K ~,
\end{equation}
where $\gamma_{ab}$ is the boundary metric and $n^a$ is the unit vector normal to the boundary. $I_{\rm N}$ is a standard counterterm for models of dilaton-gravity in flat 2d space (see for instance \cite{Godet:2021cdl,Kar:2022sdc,Kar:2022vqy}) and $I_{\rm GH}$ is the standard Gibbons-Hawking-York term, which ensures Dirichlet boundary conditions for the metric. As usual in flat space, we have to supplement this action by the Mann-Marolf boundary term \cite{Mann:2005yr},
\begin{equation}
I_{\rm MM} = -\frac{1}{{2}} \int \mathrm{d} u\, \sqrt{-\gamma_{\rm ref}}\, \Phi^2 \, \hat{K}_{\rm ref}~,
\end{equation}
with the reference metric
\begin{equation}
ds^2_{\rm ref} = -2\mathrm{d} u\,\mathrm{d} \hat{r} - 2(\mathcal{P}(u)\, \hat{r}_0 +\mathcal{T}(u))\mathrm{d} u^2 + O(\lambda)~,
\end{equation}
where $\hat r_0$ is the radial (UV) cutoff. Adding this term effectively amounts to performing a background subtraction and in this way the action is free of divergences.
The final finite expression for the renormalized action boils down to
\begin{equation}\label{onsh_intermediate}
I_{\rm on-shell-uc} = \lambda
\Phi_0 \int \mathrm{d} u \left[ \left(b(u) \mathcal{P}(u) - a(u) \mathcal{T}(u) \right) + \frac{Q \mu(u)}{\Phi_0^3} \right] +I_{\rm global}\,.
\end{equation}
A similar form for the on-shell action (which does not include the chemical potential term) was found in \cite{Godet:2021cdl}.
With \eqref{onsh_intermediate} we can now extract the entropy of the two-dimensional black hole. We will be interested in the case where $\delta Q=0$, and we will also take ${\cal P}_0\neq0$: these choices will facilitate comparison with the ultracold black hole and we will be able to take ${\cal P}_0\to0$ smoothly. Evaluating \eqref{onsh_intermediate} on \eqref{edd-fink1}-\eqref{eq:Ystatic-uc}, the Euclidean action then gives,
\begin{equation}\label{eq:on-shell-uc11}
I_{\rm on-shell-uc} = -2\pi \Phi_0 b_0 \lambda+I_{\rm global}\,,
\end{equation}
where we set $\delta Q=0$.
This expression clearly reflects that the temperature does not affect the on-shell action, and hence the entropy, as anticipated. Using the standard thermodynamic relation
\begin{equation}
S = \beta \left( \frac{\partial I}{ \partial \beta} \right) -I ~,
\end{equation}
we find that $S_{\rm 2D}=-I_{\rm on-shell-uc}$ is in agreement with \eqref{eq:entropy-mink2} for fixed charge, and agrees with the response of the ultracold black hole. We emphasise that this is very different from the outcome in Sec.\,\ref{sec:hol-cold}, where temperature is the leading effect in the deformation.
\section*{Acknowledgements}
We thank Dio Anninos, Tarek Anous, Stephane Detournay, Albert Law, Andrew Svesko, and Evita Verheijden for interesting discussions, and collaborations on related topics.
The work of AC and CT was supported by the Delta ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). The work of AC has been partially supported by STFC consolidated grant ST/T000694/1. The work of CT is supported by the Marie Sk\l odowska-Curie Global Fellowship (ERC Horizon 2020 Program) SPINBHMICRO-101024314. FM acknowledges financial support from the European Research Council (grant BHHQG-101040024) funded by the
European Union. Views and opinions expressed are however those of the author(s) only and do not
necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
\bibliographystyle{JHEP-2}
\makeatletter
\renewcommand\section{\@startsection {section}{1}{\z@}%
{-5.5ex \@plus -1ex \@minus -.2ex}%
{2.3ex \@plus.2ex}%
{\normalfont\large\bfseries}}
\renewcommand\subsection{\@startsection{subsection}{2}{\z@}%
{-3.25ex\@plus -1ex \@minus -.2ex}%
{1.5ex \@plus .2ex}%
{\normalfont\bfseries}}
\newcommand\subsub[1]{
\bigskip
\noindent{\underline{\it #1}}
\smallskip}
\numberwithin{equation}{section}
\makeatother
\DeclareMathOperator{\arccosh}{arcCosh}
\newcommand{\eq}[1]{\begin{align}#1\end{align}}
\newcommand{\eqst}[1]{\begin{align*}#1\end{align*}}
\newcommand{\eqsp}[1]{\begin{equation}\begin{split}#1\end{split}\end{equation}}
\renewcommand{\P}{{\mathbb P}}
\newcommand\rref[1]{(\ref{#1})}
\def\a{\alpha}
\def\b{\beta}
\def\g{\gamma}
\def\th{\theta}
\def\Th{\Theta}
\def\la{\lambda}
\def\L{\Lambda}
\def\r{\rho}
\def\s{\sigma}
\def\f{\phi}
\def\F{\Phi}
\def\W{\Omega}
\def\v{\varphi}
\def\Vev#1{\big\langle#1\big\rangle}
\def\i{{ i}}
\newcommand{\no}[1]{\!:\! #1\!\! :}
\newcommand{\cW}{{\cal W }}
\newcommand{\cM}{{\cal M }}
\newcommand{\cF}{{\cal F }}
\newcommand{\cL}{{\cal L }}
\newcommand{\cO}{{\cal O }}
\newcommand{\cH}{{\cal H }}
\newcommand{\cA}{{\cal A }}
\newcommand{\cN}{{\cal N }}
\newcommand{\cY}{{\cal Y }}
\newcommand{\cD}{{\cal D }}
\newcommand{\cV}{{\cal V }}
\newcommand{\cI}{{\cal I }}
\newcommand{\E}{{\cal E }}
\newcommand{\B}{{\cal B}}
\renewcommand{\d}{{\partial}}
\newcommand{\Vp}{V^\perp}
\newcommand{\Vpd}{W^\perp}
\newcommand{\rep}{R^G}
\newcommand{\pl}[2][]{\Psi^{(#2)#1}}
\newcommand{\pbl}[2][]{\bar{\Psi}^{(#2)#1}}
\newcommand{\pr}[2][]{\tilde{\Psi}^{(#2)#1}}
\newcommand{\pbr}[2][]{\tilde{\bar{\Psi}}^{(#2)#1}}
\newcommand{\Xl}[2][]{\partial X^{(#2)#1}}
\newcommand{\Xbl}[2][]{\partial\bar{X}^{(#2)#1}}
\newcommand{\Xr}[2][]{\partial\tilde{X}^{(#2)#1}}
\newcommand{\Xbr}[2][]{\partial\tilde{\bar{X}}^{(#2)#1}}
\newcommand{\mm}[1]{\mathbf{m}^{(#1)}}
\newcommand{\ma}[1]{m^{(#1)}}
\newcommand{\preprint}[1]{\begin{table}[t]
\begin{flushright}
{#1}
\end{flushright}
\end{table}}
\renewcommand{\title}[1]{\vbox{\center\LARGE{#1}}\vspace{5mm}}
\renewcommand{\author}[1]{\vbox{\center#1}\vspace{5mm}}
\newcommand{\address}[1]{\vbox{\center\footnotesize\em#1}}
\newcommand{\email}[1]{\vbox{\center\footnotesize\tt#1}\vspace{5mm}}
\newcommand{\AC}[1]{{\color{red}{{\bf AC:} #1}}}
\newcommand{\FM}[1]{{\color{red}{{\bf FM:} #1}}}
\begin{document}
\begin{titlepage}
\begin{flushright}
\end{flushright}
\begin{center}
\hfill \\
\hfill \\
\vskip 1cm
\title{Near-Extremal Limits of de Sitter Black Holes}
\author{Alejandra Castro$^{a}$, Francesca Mariani$^{b,c,d}$, and Chiara Toldo$^{b,e,f}$
}
\address{
${}^a$ Department of Applied Mathematics and Theoretical Physics, University of Cambridge,\\ Cambridge CB3 0WA, United Kingdom
\\
${}^{b}$
Institute for Theoretical Physics, University of Amsterdam, Science Park 904, \\
1090 GL Amsterdam, The Netherlands\\
${}^c$ Dipartimento di Fisica, Universit\`a degli Studi di Milano–Bicocca, Piazza della Scienza 3,\\ I-20126 Milano, Italy
\\
${}^d$ Department of Physics and Astronomy,
Ghent University, Krijgslaan, 281-S9, 9000 Gent, Belgium
\\
${}^e$ Department of Physics, Jefferson Lab, Harvard University, 17 Oxford Street,\\ Cambridge, MA 02138, USA
\\
${}^f$ Dipartimento di Fisica, Universit\`a di Milano, via Celoria 6, 20133 Milano MI, Italy
}
\email{[email protected], [email protected], [email protected]}
\end{center}
\vfill
\abstract{We analyze the thermodynamic response near extremality of charged black holes in four-dimensional Einstein-Maxwell theory with a positive cosmological constant. The latter exhibit three different extremal limits, dubbed cold, Nariai and ultracold configurations, with near-horizon geometries AdS$_2 \times S^2$, dS$_2 \times S^2$, Mink$_2 \times S^2$, respectively. For each of these three cases we analyze small deformations away from extremality, and contrast their response.
We also construct the effective two-dimensional theory, obtained by dimensional reduction, that captures these features and provide a more detailed analysis of the perturbations around the near-horizon geometry for each case.
Our results for the ultracold case in particular show an interesting interplay between the entropy variation and charge variation, realizing a different symmetry breaking with respect to the other two near-extremal limits.}
\vfill
\end{titlepage}
\eject
{
\hypersetup{linkcolor=black}
\tableofcontents
}
\section{Introduction}
Black holes are notorious for their semi-classical features: they are usually characterized by only a few parameters, and exhibit universal laws governing the mechanics (thermodynamics) of their horizons. In this context it is reasonable to expect an overarching principle that accounts for these laws. However, it is also known that some of these features suffer modifications depending on the surroundings of the black hole. In particular, the presence (or absence) of a cosmological constant has both quantitative and qualitative repercussions in our understanding of black holes. Differences due to the surroundings are what we aim to explore and quantify here.
Black holes embedded in de Sitter, i.e., a gravitational theory with a positive cosmological constant, are an ideal laboratory to explore these differences. The presence of de Sitter adds a cosmological horizon which is well-known to mimic thermodynamic behaviour \cite{PhysRevD.15.2738}.\footnote{It should also be mentioned that accounting for a statistical origin of the thermodynamic properties of de Sitter is difficult, as it has been stressed and reviewed by several authors; see for instance \cite{Witten:2001kn,Strominger:2001pn,Banks:2005bm,Anninos:2012qw,Anninos:2017eib,Coleman:2021nor}.} It also allows for interesting generalizations, such as those explored, for instance, in \cite{Dolan:2013ft}. The presence of a cosmological horizon, plus the existing Cauchy horizons of the black hole, provides an extra dial that will allow us to explore different thermodynamic regimes and contrast behaviour.
To be concise, we will focus on four-dimensional electrically charged black holes embedded as solutions to Einstein-Maxwell theory with a positive cosmological constant. These are the so-called Reissner-Nordstr\"om de Sitter black holes (RNdS$_4$), which have the additional property of being spherically symmetric. These configurations generally admit three horizons: an inner, outer and cosmological horizon. The confluence of these horizons, which defines an extremal limit of the original black hole, is an interesting starting point for two reasons. First, for RNdS$_4$ there are three different extremal limits \cite{Romans:1991nq,Mann:1995vb,Booth:1998gf},
dubbed {\it cold} (inner and outer horizon coincide), {\it Nariai} (outer and cosmological horizon coincide) and {\it ultracold} (confluence of all three horizons) configurations. The near-horizon geometries for these cases are AdS$_2 \times S^2$, dS$_2 \times S^2$ and Mink$_2 \times S^2$, respectively. Each of these instances has its own strengths and intricacies, which we will highlight below; it also provides us with a dial to contrast the effects of the surroundings.
Second, starting from extremality we can apply holographic tools to decode and interpret the thermodynamic responses away from extremality. This has been a powerful strategy for extremal black holes with an AdS$_2$ factor in the near-horizon geometry: deformations away from extremality define the concept of near-AdS$_2$/near-CFT$_1$ \cite{Almheiri:2014cka,Maldacena:2016upp} which leads to important insights on the quantum nature of black holes; see \cite{Mertens:2022irh} for a recent review. Here we will explore the concept of being ``near'' for the three different extremal cases of RNdS$_4$. This is where spherical symmetry comes in handy: we can build a consistent two-dimensional effective theory that captures the AdS$_2$, dS$_2$ and Mink$_2$ dynamics and its deformations. This two-dimensional theory shares many features with Jackiw-Teitelboim (JT) gravity \cite{Jackiw:1984je,Teitelboim:1983ux}, the CGHS \cite{Callan:1992rs} and $\widehat{\rm CGHS}$ models \cite{Afshar:2019axx}, which we will review and exploit.
To summarise, we will capture and contrast various corners of black hole thermodynamics within one overarching solution, the RNdS$_4$ black hole. We will focus on semi-classical properties of the solution, and quantify them from the four-dimensional point of view. We will then provide a different perspective of these features by analyzing the holographic properties of the near-horizon geometry using the two-dimensional description. The most prominent features we find for each near-extremal configuration are the following.
\begin{description}
\item[Heating up cold black holes.] The AdS$_2$ factor in the near-horizon geometry controls the dynamics, which exhibits similar patterns as the analysis of near-extremal Reissner-Nordstr\"om and Reissner-Nordstr\"om in AdS$_4$ \cite{Almheiri:2016fws,Nayak:2018qej}. However, there are two differences in the near-extremal response. The thermodynamic regime is controlled by $M_{\rm gap}$, but there is an upper bound on its positivity. Also, the back-reaction of the metric does not have a definite sign, which we find intriguing.
\item[Deformations of Nariai.] Here the rules are dictated by responses around dS$_2$, and we take the perspective of the static patch observer. Our main aim in this case is to highlight how certain responses are similar and different relative to AdS$_2$: several aspects can be obtained by analytic continuation as proposed in \cite{Maldacena:2002vr,Maldacena:2019cbz}, but the interpretation and reality restrictions are delicate.
\item[Kicking ultracold backgrounds.] This is the most novel instance of extremality, where the ultracold geometry has a natural connection to $\widehat{\rm CGHS}$ models. In this case the thermodynamic response is very different relative to the cold and Nariai cases, which we discuss in detail. In fact, it is a case where the temperature plays a minimal role at leading order. We discuss the holographic properties of our Mink$_2$ theory, where RNdS$_4$ serves as a guide in two dimensions to dictate adequate boundary conditions and interpret the results.
\end{description}
Our results could be extended and applied in several different directions. We expect this to extend to higher-dimensional RNdS$_d$ backgrounds rather easily. A richer direction is to extend our analysis to rotating black holes in de Sitter. Kerr-dS$_4$ also has a Nariai and an ultracold limit that would be interesting to revisit in the near-extremal limit along the lines of \cite{Anninos:2010gh}, and to connect with the Kerr/CFT correspondence proposed in \cite{Anninos:2009yc}.
Another direction, within the RNdS$_4$ solution, is to extend the analysis of perturbations and connect to the more general studies of strong cosmic censorship in \cite{Dias:2018etb}. It would also be interesting to explore the stability properties of the ultracold solution along the lines of \cite{Anninos:2022hqo,Horowitz:2022mly}.
The paper is structured as follows. In Sec.\,\ref{sec:RN} we introduce our setup, which consists of Reissner-Nordstr\"om de Sitter black holes in Einstein-Maxwell gravity with a positive cosmological constant. We describe the space of solutions and the three different extremal limits, including their near-horizon geometries. In Sec.\,\ref{sec:JTreduction} we report the effective two-dimensional theory obtained upon reduction of Einstein-Maxwell theory on the two-sphere, and present the equations for the near-extremal perturbations of the 2d metric and the dilaton field. In Sec.\,\ref{sec:heating_cold}, Sec.\,\ref{sec:nariai} and Sec.\,\ref{sec:ultracold} we spell out the near-extremal thermodynamics of the cold, Nariai and ultracold geometries, respectively, and we compute the mass gap for these configurations. In each of these three sections we supplement our analysis with a study of the perturbations near the extremal geometry, obtained by solving the equations of the JT-like model that arises upon reduction to two dimensions. We then compute the on-shell action via holographic renormalization, comment on the pattern of symmetry breaking, and compare it among the various cases.
\section{Reissner-N\"ordstrom \texorpdfstring{dS$_4$}{dS4} black holes}\label{sec:RN}
Reissner-N\"ordstrom black holes embedded in dS space have several interesting properties that are distinct from their counterparts in AdS or flat space. In this section we will review some of these properties based on the original work of \cite{Romans:1991nq,Mann:1995vb,Booth:1998gf}. We will focus mainly on the mechanics of its horizons, and the accessible extremal limits.
Our interest is in black hole solutions of Einstein-Maxwell theory with a positive cosmological constant in four-dimensions. The action is given by
\begin{equation}
I_{\rm 4D}=\frac{1}{16\pi G_N}\int \mathrm{d}^{4}x \sqrt{-g}\left({\cal R}^{(4)}-2\Lambda_4 -F_{\mu\nu}F^{\mu\nu}\right)~.\label{eq:EML-action}
\end{equation}
An electrically charged black hole, which we will coin as RNdS$_4$, is a spherically symmetric solution of Einstein-Maxwell theory, where the line element and gauge field are
\begin{equation}
\begin{aligned}
ds^{2}&=-V(r)\mathrm{d} t^{2}+\frac{1}{V(r)}\mathrm{d} r^{2}+r^{2}(\mathrm{d} \theta^2 +\sin^2\theta \mathrm{d}\phi^2)~,\\
A&= \frac{Q}{r}\mathrm{d} t~, \label{dsrnds}
\end{aligned}
\end{equation}
and the blackening function in the metric is given by
\begin{equation}
V(r)=1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}-\frac{r^2}{\ell_4^2}~.
\label{wfds1}
\end{equation}
In analogy to their flat space counterparts, we will denote the constant $M$ as the `mass' of the black hole and the constant $Q$ as the electric charge. It is also straightforward to generalize these expressions to a dyonic case, where the solution also carries magnetic charge. For simplicity, and without loss of generality, we will focus on the electric case.
The horizon structure of this black hole is dictated by the roots of $V(r)$. As it is clear from \eqref{wfds1} there are four roots, however, even when all roots are real, one of them is always located at negative $r$ and hence nonphysical (it sits behind the curvature singularity). The remaining three roots can be real and positive, which we will denote in ascending order as: $r_{-}$ is the inner horizon; $r_{+}$ is the outer horizon; $r_{c}$ is the cosmological horizon.
In this notation, we are writing \eqref{wfds1} as
\begin{equation} \label{warp_factor}
V(r)=-\frac{1}{\ell_4^2 r^{2}}(r+r_++r_- +r_c)(r-r_{-})(r-r_{+})(r-r_{c})~,
\end{equation}
where\footnote{In \eqref{constraints} we have favored $(r_\pm,\ell_4^2)$; but it is important to stress that $M$ and $Q$ are symmetric with respect to $(r_c,r_\pm)$. One can check that $ 2\ell_4^2M=(r_++r_-)(r_++r_c)(r_-+r_c)$ and $\ell_4^2 Q^2 = r_cr_+r_-(r_c+r_++ r_-)$. }
\begin{equation}
\begin{aligned} \label{constraints}
M&=\frac{1}{2\ell_4^2}(r_++r_-)(\ell_4^2 -r_+^2-r_-^2)~,\\
Q^2&=\frac{r_+r_-}{\ell_4^2}\left(\ell_4^2-r_+^2-r_-^2-r_-r_+\right)~,\\
\ell_4^2&=r_c^2 +r_+^2+r_-^2+r_-r_++r_-r_c+r_cr_+~.
\end{aligned}
\end{equation}
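As a quick symbolic cross-check (not part of the original derivation), the parameterization \eqref{constraints} and the symmetric expressions quoted in the footnote can be verified with SymPy, using only the formulas above:

```python
import sympy as sp

r, rm, rp, rc = sp.symbols('r r_m r_p r_c', positive=True)

# ell_4^2, M and Q^2 in terms of the three physical roots, as in the text
l2 = rc**2 + rp**2 + rm**2 + rm*rp + rm*rc + rc*rp
M  = (rp + rm)*(l2 - rp**2 - rm**2)/(2*l2)
Q2 = rp*rm*(l2 - rp**2 - rm**2 - rm*rp)/l2

# V(r) in the (M, Q, ell_4) form and in the factored form
V_param = 1 - 2*M/r + Q2/r**2 - r**2/l2
V_roots = -(r + rp + rm + rc)*(r - rm)*(r - rp)*(r - rc)/(l2*r**2)
assert sp.cancel(V_param - V_roots) == 0

# symmetric expressions quoted in the footnote
assert sp.cancel(2*l2*M - (rp + rm)*(rp + rc)*(rm + rc)) == 0
assert sp.cancel(l2*Q2 - rc*rp*rm*(rc + rp + rm)) == 0
```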
The space of black hole solutions is determined by the discriminant of the quartic polynomial in $V(r)$, which up to a trivial normalization reads
\begin{equation}
\begin{aligned}
\textrm{Discr}_4&\equiv \frac{1}{16\ell_4^6} \prod_{i<j}(r_i-r_j)^2\\
&= -16 Q^6+\ell_4^4(M^2-Q^2)+\ell_4^2(-27 M^4 + 36 M^2 Q^2 - 8 Q^4)
\end{aligned}
\end{equation}
Here $r_i$ are all four roots in $V(r)$. Requiring $\textrm{Discr}_4\geq 0$ and $M>0$ is sufficient to ensure that three roots of the polynomial are real and positive. This defines for us the space of
solutions admitting a horizon, which we depict in Fig.\,\ref{SharkFin} for a fixed value of cosmological constant. The shaded region corresponds to a classical black hole, while the white area represents naked singularities. The shaded area is usually referred to as ``Shark Fin'' due to its shape, and we will use this nomenclature henceforth.
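The quoted discriminant can also be checked symbolically. The sketch below (our own cross-check, not part of the text) uses the monic quartic $p(r)=-\ell_4^2\, r^2 V(r)$, whose discriminant equals $\prod_{i<j}(r_i-r_j)^2$:

```python
import sympy as sp

r, M, Q, l = sp.symbols('r M Q ell', positive=True)

# monic quartic proportional to r^2 V(r): p(r) = -ell^2 r^2 V(r)
p = r**4 - l**2*r**2 + 2*M*l**2*r - Q**2*l**2

# for a monic polynomial, discriminant = prod_{i<j} (r_i - r_j)^2
disc = sp.discriminant(p, r)
claimed = -16*Q**6 + l**4*(M**2 - Q**2) + l**2*(-27*M**4 + 36*M**2*Q**2 - 8*Q**4)
assert sp.expand(disc - 16*l**6*claimed) == 0
```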
\begin{figure}[H]
\centering
\includegraphics[width=0.5\linewidth]{"SFnew"}
\caption{Shark Fin diagram for RNdS$_4$ at a fixed positive value of the cosmological constant, $\Lambda_4=0.01$. The shaded area corresponds to black hole solutions, and the white area to naked singularities. The edges correspond to extremal black holes: the dashed line corresponds to cold solutions, the solid one to Nariai solutions. The star, where the two lines intersect, corresponds to the ultracold solution.}
\label{SharkFin}
\end{figure}
\subsection{Black hole mechanics}\label{sec:bh-mech}
In this portion we will review the mechanical properties of each physical horizon. To keep the nomenclature simple, we will refer to each quantity by its thermodynamic analog. However, we do not intend to give these quantities a statistical interpretation; the analogy should not be taken as an equality.
For each physical horizon $r_{\mathsf{h}}=\{r_c,r_+,r_-\}$ we can intrinsically define an entropy, a temperature, and a chemical potential in the standard way. The area-law for the entropy is
\begin{equation}
S_\mathsf{h}=\pi r_\mathsf{h}^2~,
\label{entropy_cosmo}
\end{equation}
while the Hawking temperature and electric potential read
\begin{equation}
T_\mathsf{h}=\frac{1}{4\pi}|V'(r_\mathsf{h})|~, \qquad \Phi_{\mathsf{h}}=\frac{Q}{r_\mathsf{h}}~ .
\label{tcosmo}
\end{equation}
For each horizon $r_\mathsf{h}$ we can verify that a first law is satisfied
\begin{equation}
\begin{aligned}
dM=-T_{-} dS_{-}+\Phi_{-}\, dQ~,\\
dM=\phantom{-}T_{+} dS_{+}+\Phi_{+}\, dQ~,\\
dM=-T_{c}\, dS_{c}+\Phi_{{c}}\, dQ~,
\end{aligned}
\label{1stlawbh}
\end{equation}
where $M$ and $Q$ are the mass and electric charge in \eqref{constraints}, and we are fixing $\Lambda_4$ ($\ell_4$) as we vary the mass, charge, and entropy. Notice that depending on the choice of $r_{\mathsf{h}}$, the first law is modified appropriately. For cosmological horizons this odd sign in \eqref{1stlawbh} was famously observed in \cite{Gibbons:1976ue}; see \cite{Banihashemi:2022htw,Morvan:2022aon} and references within for a recent discussion.
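The outer-horizon first law in \eqref{1stlawbh} can be cross-checked symbolically. A convenient trick (our own bookkeeping, not from the text) is to treat $(r_-,r_+,r_c)$ as independent and demand that the one-form $dM - T_+\, dS_+ - \Phi_+\, dQ$ be proportional to $d(\ell_4^2)$, so that it vanishes at fixed $\ell_4$; note that $\Phi_+\, dQ = d(Q^2)/(2r_+)$:

```python
import sympy as sp

r, rm, rp, rc = sp.symbols('r r_m r_p r_c', positive=True)
roots = (rm, rp, rc)

# fixed-ell_4 data in terms of the three physical roots
l2 = rm**2 + rp**2 + rc**2 + rm*rp + rm*rc + rp*rc
M  = (rp + rm)*(rp + rc)*(rm + rc)/(2*l2)
Q2 = rm*rp*rc*(rm + rp + rc)/l2

V  = -(r + rm + rp + rc)*(r - rm)*(r - rp)*(r - rc)/(l2*r**2)
Tp = sp.diff(V, r).subs(r, rp)/(4*sp.pi)   # V'(r_+) > 0 for r_- < r_+ < r_c
Sp = sp.pi*rp**2

# omega = dM - T_+ dS_+ - Phi_+ dQ, with Phi_+ dQ = d(Q^2)/(2 r_+);
# it must be proportional to d(ell_4^2), i.e. vanish at fixed ell_4
omega = [sp.diff(M, x) - Tp*sp.diff(Sp, x) - sp.diff(Q2, x)/(2*rp) for x in roots]
dl2   = [sp.diff(l2, x) for x in roots]
for i in range(3):
    for j in range(i + 1, 3):
        assert sp.cancel(omega[i]*dl2[j] - omega[j]*dl2[i]) == 0
```

The analogous checks for $r_-$ and $r_c$, with the flipped signs of \eqref{1stlawbh}, work the same way.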
\subsection{Three different extremal limits}\label{sec:ext}
Extremal solutions occur when two, or more, horizons coincide. Due to the presence of a positive cosmological constant, it is possible to have different extremal scenarios. For the RNdS$_4$ black hole in consideration, we have three different cases:\footnote{Historically, and in several references, the Nariai black hole refers to the case with $Q=0$ and $r_{+}=r_{c}$ \cite{1999GReGr..31..963N}. Here we define Nariai as the solution with $Q\neq 0$ and $r_{+}=r_{c}$. }
\begin{description}
\item[~~~~i. Cold black hole:] $r_{-}=r_{+}\equiv r_0$,
\item[~~~~ii. Nariai black hole:] $r_{+}=r_{c}\equiv r_\mathsf{n}$,
\item [~~~~iii. Ultracold black hole:] $r_{-}=r_{+}=r_{c}\equiv r_\mathsf{uc}$.
\end{description}
These cases describe the edges and tip of the Shark Fin in Fig.\,\ref{SharkFin}. The shared characteristic of all cases is that $T_\mathsf{h}=0$, i.e., the Hawking temperature vanishes at extremality. Each case will also have a decoupling limit, leading to an enhancement of symmetries in the near-horizon geometry. However, as we will explore below, the resulting near-horizon geometry is distinct in each case. In the following we will review the decoupling limit for each case and describe the resulting symmetries in the near-horizon region.
\paragraph{i. Cold black hole.} The cold solution occurs when the inner and the outer black hole horizons coincide
\begin{equation}
r_{-}=r_{+}\equiv r_0~.
\label{Coolddef}
\end{equation}
The blackening factor \eqref{warp_factor} in this case becomes
\begin{equation}
\begin{aligned}
V(r)_{\rm cold}=\left( 1 - \frac{r^2}{\ell_4^2} - 2 \frac{r_0 r}{\ell_4^2} -3\frac{r_0^2}{\ell_4^2} \right)\left(1-\frac{r_{0}}{r}\right)^2~.
\end{aligned}
\label{RomansCold}
\end{equation}
Notice that for $r_0<r<r_c$ we have $V(r)_{\rm cold}>0$.
To construct the near-horizon geometry we consider the following coordinate transformation,
\begin{equation}
r = r_0 +\lambda R~,\qquad t = \frac{\ell_{\rm A}^2}{\lambda} T~.
\label{ctr}
\end{equation}
With some foresight, we have introduced $\ell_{\rm A}$, which we will define below. The decoupling limit is defined as the limit $\lambda \rightarrow 0$, while holding $T$ and $R$ fixed; this takes us close to $r= r_0$. Using \eqref{Coolddef} and \eqref{ctr} in \eqref{dsrnds}, the limit leads to the line element
\begin{equation}
ds^{2}= \ell_{\rm A}^2 \left( - R^{2} \mathrm{d} T^{2}+ \frac{\mathrm{d} R^{2}}{R^2}\right)+r_0^{2}\,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~.
\label{dscoldext}
\end{equation}
This is the near-horizon geometry for the cold black hole: it is AdS$_2 \times S^2$. The AdS$_{2}$ radius is given by
\begin{equation}\label{ads2-cold}
\ell_{\rm{A}}^2=\frac{r_0^{2}}{(1-6 r_0^{2}/\ell_4^2)}~,
\end{equation}
while the $S^2$ radius is the horizon radius $r_0$. Note that demanding $6r_0^2<\ell_4^2$, i.e., $\ell_{\rm A}^2>0$, implies that $r_c>r_0$: this is consistent with our hierarchy for the roots of $V(r)$.
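The AdS$_2$ radius \eqref{ads2-cold} can be verified directly: the double root of \eqref{RomansCold} kills the $O(1)$ and $O(\lambda)$ terms of $V(r_0+\lambda R)$, and the leading behaviour is $V=\lambda^2 R^2/\ell_{\rm A}^2+O(\lambda^3)$. A short SymPy check of this statement:

```python
import sympy as sp

r0, l, lam, R = sp.symbols('r_0 ell_4 lambda R', positive=True)

r = r0 + lam*R
Vcold = (1 - r**2/l**2 - 2*r0*r/l**2 - 3*r0**2/l**2)*(1 - r0/r)**2
lA2 = r0**2/(1 - 6*r0**2/l**2)

# the double root kills the O(1) and O(lambda) terms; the leading
# behaviour is V = lambda^2 R^2 / ell_A^2 + O(lambda^3)
lead = sp.series(Vcold, lam, 0, 3).removeO()
assert sp.simplify(lead - lam**2*R**2/lA2) == 0
```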
It is also useful to write the charge and mass of the black hole: as a consequence of \eqref{Coolddef}, we have
\begin{equation}
Q^2_0=r_0^2\left(1-3 \frac{r_0^2}{\ell_4^2}\right)~,\qquad M_0= r_0\left(1-2\frac{ r_0^2}{\ell_4^2}\right)~.
\label{ZMRomans}
\end{equation}
The requirement $0\leq 6r_0^2<\ell_4^2$ ensures that $Q_0^2$ and $M_0$ are always non-negative. Hence, starting from a cold solution, we can only find a neutral solution, $Q_0=0$, by setting $r_0=0$. This is a simple way to see that the cold black hole corresponds to the dashed line in Fig.\,\ref{SharkFin}.
Finally, the field strength in this limit is also well-defined. Starting from \eqref{dsrnds} and using \eqref{ctr}, we find
\begin{equation}
F=\mathrm{d} A = \frac{Q_0}{r_0^2}\, \ell_{\rm A}^2\mathrm{d} T \wedge \mathrm{d} R~,
\end{equation}
i.e., the field strength is proportional to the volume 2-form of AdS$_2$.
To conclude, for cold black holes we obtained an AdS$_{2}\times S^2$ factor in the metric of the near-horizon region of the extremal solution. The initial RNdS$_4$ metric had only a $u(1)$ symmetry corresponding to time translations and an $so(3)$ spherical symmetry. In the near-horizon geometry of the extremal cold solution we have an $sl(2,\mathbb{R})\times so(3)$ symmetry. Therefore the initial $u(1)$ symmetry is enhanced to an $sl(2,\mathbb{R})$ symmetry.
\paragraph{ii. Nariai black hole.}
In this case the double root $r_\mathsf{n}$ refers to the coinciding horizons $r_{+}$ (outer) and $r_{c}$ (cosmological). The blackening factor \eqref{wfds1} is very similar to that of the cold case; after setting $r_+=r_c=r_\mathsf{n}$, we get
\begin{equation}
\begin{aligned}
V(r)_{\rm Nariai}=\left( 1 - \frac{r^2}{\ell_4^2} - 2 \frac{r_\mathsf{n} r}{\ell_4^2} -3\frac{r_\mathsf{n}^2}{\ell_4^2} \right)\left(1-\frac{r_\mathsf{n}}{r}\right)^2~.
\end{aligned}
\label{RNariai}
\end{equation}
However, in contrast with \eqref{RomansCold}, for the Nariai limit the blackening factor obeys
\begin{equation}\label{VN1}
V(r)_{\rm Nariai}<0~, \qquad r_-<r<r_\mathsf{n}~.
\end{equation}
This will be important when obtaining the resulting near-horizon geometry.
The decoupling limit is very similar to the one in \eqref{ctr}: we will consider
\begin{equation}
r = r_\mathsf{n} -\lambda R~,\qquad t = \frac{\ell_{\rm dS}^2}{\lambda} T~,
\label{ctr1}
\end{equation}
and take $\lambda\to0$ as the rest of the variables are held fixed. In comparison to \eqref{ctr}, here we have flipped the sign of the radial variable: this makes manifest that we reach the horizon $r_\mathsf{n}$ from the interior of the cosmological solution (and not the inflationary patch). The parameter $\ell_{\rm dS}$ is given by
\begin{equation}\label{ds2-nar}
\ell_{\rm{dS}}^2=\frac{r_\mathsf{n}^{2}}{(6 r_\mathsf{n}^{2}/\ell_4^2-1)}~.
\end{equation}
Notice the similarity with \eqref{ads2-cold}. However, in this case we have that $r_\mathsf{n}>r_-$, which implies that
$6 r_\mathsf{n}^{2} >\ell_4^2$ and hence $\ell_{\rm{dS}}^2>0$. Implementing \eqref{ctr1} on the line element of RNdS$_4$, we find
\begin{equation}
ds^{2}=\ell^2_{\rm dS}\left( R^{2} \mathrm{d} T^{2}- \frac{\mathrm{d} R^{2}}{R^2}\right)+r_\mathsf{n}^{2}\,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~.
\label{dsN}
\end{equation}
The resulting geometry is of the form dS$_2\times S^2$, where
the dS$_{2}$ radius is \eqref{ds2-nar} and the $S^2$ radius is $r_\mathsf{n}$. One of the culprits of this change to dS$_2$, relative to the AdS$_2$ in \eqref{ads2-cold}, is the sign in \eqref{VN1}. As in the cold case, the presence of a dS$_2$ factor in the near-horizon region of the extremal Nariai geometry means that we have an enhancement of symmetry with respect to the initial RNdS$_4$ metric.
And similar to the cold case, the field strength in this limit is well-defined. Starting from \eqref{dsrnds}, we find
\begin{equation}\label{Fds2}
F=\mathrm{d} A = \frac{Q_\mathsf{n}}{r_\mathsf{n}^2} \,\ell_{\rm dS}^2\mathrm{d} T \wedge \mathrm{d} R~,
\end{equation}
i.e., the field strength is proportional to the volume 2-form of dS$_2$.
Finally, it is instructive to inspect the mass and the charge. Written in terms of $r_\mathsf{n}$ and $\ell_4$, we have
\begin{equation}
Q^2_\mathsf{n}=r_\mathsf{n}^2\left(1-3 \frac{r_\mathsf{n}^2}{\ell_4^2}\right)~,\qquad M_\mathsf{n}= r_\mathsf{n}\left(1-2\frac{ r_\mathsf{n}^2}{\ell_4^2}\right)~.
\label{ZMN1}
\end{equation}
Our bounds in this case are $\ell_4^2/6< r_\mathsf{n}^{2} \leq \ell_4^2/3$. Hence the neutral solution, $Q_\mathsf{n}=0$, corresponds to $3r_\mathsf{n}^2=\ell_4^2$, for which $M_\mathsf{n}=r_\mathsf{n}/3$, as expected \cite{1999GReGr..31..963N}.
Given this range of $r_\mathsf{n}$, the expressions in \eqref{ZMN1} lead to the solid line in Fig.\,\ref{SharkFin}.
\paragraph{iii. Ultracold black hole.}
This case represents the most constrained solution relative to the previous ones. The extremal ultracold solution is characterized by
\begin{equation}
r_-=r_+=r_c\equiv r_\mathsf{uc}~.
\label{UC}
\end{equation}
Using \eqref{UC} in \eqref{constraints}, we find
\begin{equation}\label{eq:rucl}
r_\mathsf{uc}=\frac{\ell_4}{\sqrt{6}}~,
\end{equation}
and
\begin{equation}
Q^2_\mathsf{uc}=\frac{9}{8} M_\mathsf{uc}^2=\frac{\ell_4^2}{12}~.
\end{equation}
Hence, all scales in the black hole solution are determined by $\ell_4$. The ultracold solution is where the cold and Nariai black holes intersect, which happens when $\ell_{\rm A}=\ell_{\rm dS}\to \infty$, in accordance with \eqref{eq:rucl}.
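The ultracold relations follow from evaluating the cold-branch expressions \eqref{ZMRomans} at $r_0=r_\mathsf{uc}=\ell_4/\sqrt{6}$. A one-line SymPy check (our own, using only those formulas):

```python
import sympy as sp

l = sp.symbols('ell_4', positive=True)

r_uc = l/sp.sqrt(6)
Q2 = r_uc**2*(1 - 3*r_uc**2/l**2)      # cold-branch Q^2(r_0) at r_0 = r_uc
M  = r_uc*(1 - 2*r_uc**2/l**2)         # cold-branch M(r_0) at r_0 = r_uc
assert sp.simplify(Q2 - l**2/12) == 0
assert sp.simplify(sp.Rational(9, 8)*M**2 - l**2/12) == 0
```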
The blackening factor in this case reads
\begin{equation}
V(r)_{\rm ultracold}=-\frac{r^2}{6r_\mathsf{uc}^2}\left(1-\frac{r_\mathsf{uc}}{r}\right)^3\left(1+3\frac{r_\mathsf{uc}}{r}\right)~.
\end{equation}
However, to describe the decoupling limit we need to move slightly away from this point. One way to proceed is to start from the cold case, and it is convenient to rewrite \eqref{RomansCold} as follows
\begin{equation}
V(r)_{\rm cold}=-\frac{r^2}{\ell_4^2}\left(1-\frac{r_0}{r}\right)^2\left(1-\frac{r_c}{r}\right)\left(1+\frac{2r_0+r_c}{r}\right)~,
\label{Vcold}
\end{equation}
where we are making explicit the dependence on $r_c$, and hence the additional cosmological horizon in $V(r)_{\rm cold}$.
In order to capture the near-horizon region of the ultracold black hole, we start from (\ref{Vcold}) and approach the ultracold case; this means sending
\begin{equation}
r_0\rightarrow r_\mathsf{uc} -\lambda~,\qquad r_c\rightarrow r_\mathsf{uc}
+\lambda~,
\label{bro}
\end{equation}
where $\lambda$ is the decoupling parameter. For the coordinates, we will be performing the following transformation on the cold metric\footnote{One way to avoid going through the cold black hole, and hence avoid using \eqref{bro}, is to modify \eqref{nearhorizonuc}. We can take \eqref{UC} with $r= r_\mathsf{uc}-R_0\lambda+\sqrt{\frac{2R_0^3 }{3 r_\mathsf{uc}^{3}}}\lambda^{3/2}R$ and $t= \sqrt{\frac{3 r_\mathsf{uc}^{3}}{2R_0^3}}\frac{T}{\lambda^{3/2}}$, and this will also lead to \eqref{eq:ext-ucold-1}. Here $R_0$ is an arbitrary constant. }
\begin{equation}
r= r_\mathsf{uc}+\sqrt{\frac{2 }{3 r_\mathsf{uc}^{3}}}\lambda^{3/2}R~,\qquad t= \sqrt{\frac{3 r_\mathsf{uc}^{3}}{2}}\frac{T}{\lambda^{3/2}}~.
\label{nearhorizonuc}
\end{equation}
Plugging (\ref{bro}) and (\ref{nearhorizonuc}) into the metric \eqref{dsrnds}, with \eqref{Vcold}, in the limit $\lambda\to 0$ we find
\begin{equation} \label{eq:ext-ucold-1}
ds^2=-\mathrm{d} T^2+\mathrm{d} R^2+r_\mathsf{uc}^2 \,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,
\end{equation}
which is a geometry of the form Mink$_2\times S^2$, with $S^2$ radius $r_\mathsf{uc}$. This is the near-horizon geometry of the ultracold black hole. The resulting field strength is
\begin{equation}\label{eq:ext-ucoldF-1}
F=\mathrm{d} A = \pm\frac{\sqrt{3}}{\ell_4} \mathrm{d} T \wedge \mathrm{d} R~.
\end{equation}
\section{Effective two-dimensional theory \label{sec:JTreduction}}
In the subsequent sections we will analyze deviations away from the extremal limits of the RNdS$_4$ black hole, for each case described in Sec.\,\ref{sec:ext}. Since all the extremal limits correspond to geometries that are the direct product of a two-manifold and a round two-sphere, i.e., of the form ${\cal M}_2\times S^2$, it is convenient to construct the effective gravitational theory on ${\cal M}_2$.
There are several references that describe the dimensional reduction of Einstein-Maxwell theory on a two-sphere; here we will be mainly following the conventions of \cite{Nayak:2018qej}---see also \cite{Larsen:2018iou,Castro:2021wzn}. The 4D action is given by \eqref{eq:EML-action}.
We will perform a dimensional reduction of this theory to two dimensions, where we take
\begin{equation}\label{eq:metric4d}
\begin{aligned}
ds^2_4&=g^{(4)}_{\mu\nu} \mathrm{d} x^\mu \mathrm{d} x^\nu= \frac{\Phi_0}{\Phi} g_{ab} \mathrm{d} x^a \mathrm{d} x^b + \Phi^2 \left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,\\
F&= F_{ab} \mathrm{d} x^a \wedge \mathrm{d} x^b ~.
\end{aligned}
\end{equation}
Here $g_{ab}$ is a 2D metric, and $\Phi(x)$ is a scalar field, usually referred to as the {\it dilaton}; both fields depend only on the 2D coordinates $x^a$, $a,b=(0,1)$. $\Phi_0\equiv \Phi(x=x_0)$ is a constant that we use to normalize solutions and to compare the 2D solutions with their four-dimensional counterparts. Notice that we are assuming that all configurations preserve spherical symmetry, and that the field strength in four dimensions, $F_{\mu\nu}$, is purely electric (i.e., supported solely on the two-dimensional metric). This is a consistent truncation of the theory, and it will suffice for our purposes.
The result of the dimensional reduction over \eqref{eq:metric4d} on \eqref{eq:EML-action} leads to the effective two-dimensional action
\begin{equation}\label{eq:2daction}
\begin{aligned}
{I_{\rm 2D}=\frac{1}{4 G_4} \int_{{\cal M}_2} \mathrm{d}^2x \sqrt{-g}\Phi^2 \left( {\cal R}+ 2\frac{\Phi_0}{\Phi^3}-{2\Lambda_4}\frac{\Phi_0}{\Phi}-\frac{\Phi}{\Phi_0} F_{ab}F^{ab} \right)~.}
\end{aligned}
\end{equation}
Here ${\cal R}$ is the two-dimensional Ricci scalar associated to $g_{ab}$. The resulting equations of motion are as follows. The variation of the action with respect to the dilaton gives
\begin{equation}\label{eq:eom1}
\begin{aligned}
{{\cal R} - \frac{\Phi_0}{\Phi^3} -\frac{3}{2}\frac{\Phi}{\Phi_0} F^2 -\Lambda_4 \frac{\Phi_0}{\Phi} =0 ~,}
\end{aligned}
\end{equation}
and the variation with respect to the metric leads to
\begin{equation}\label{eq:eom2}
\begin{aligned}
{(\nabla_a\nabla_b -g_{ab}\square) \Phi^2 +g_{ab}\left(\frac{\Phi_0}{ \Phi} +\frac{1}{2}\frac{\Phi^3}{ \Phi_0} F^2 -\Lambda_4 {\Phi_0 \Phi} \right)=0~.}
\end{aligned}
\end{equation}
Lastly, variation with respect to the gauge field yields:
\begin{equation} \label{maxw}
\partial_a \left( \sqrt{-g} \frac{\Phi^3}{\Phi_0} F^{ab} \right) =0~.
\end{equation}
It is important to remark that all solutions to \eqref{eq:eom1}-\eqref{maxw} solve the equations of motion of the four-dimensional theory \eqref{eq:EML-action}. The solution to Maxwell's equations is
\begin{equation}\label{eq:F1}
F_{ab}= Q \frac{\Phi_0}{ \Phi^3}\sqrt{-g}\epsilon_{ab}~,
\end{equation}
and $Q$ is a constant, i.e., the electric charge.
It is also useful to record
\begin{equation}
\begin{aligned}
{F_{ac}F_{b}^{~c}=-Q^2 \frac{\Phi_0^2}{ \Phi^6} g_{ab}~,\qquad F^2= -2Q^2 \frac{\Phi_0^2}{ \Phi^6} ~.}
\end{aligned}
\end{equation}
However, we will not substitute this on-shell value into the 2d theory, since we do not necessarily want to keep the electric charge $Q$ fixed: we will proceed with the variation of the gauge field as well.
As an example, it is instructive to write RNdS$_4$ in the language of the two-dimensional theory. Comparing \eqref{dsrnds} with \eqref{eq:metric4d}, we find that the dilaton is simply
\begin{equation} \label{eq:rn1}
\Phi(x)=r~,
\end{equation}
and
\begin{equation}\label{eq:rn2}
g_{ab}\mathrm{d} x^a \mathrm{d} x^b = \frac{\Phi }{\Phi_0}\left( -V(r) \mathrm{d} t^2+ \frac{\mathrm{d} r^2}{V(r)}\right) ~,\qquad F= \frac{Q}{r^2} \mathrm{d} t\wedge \mathrm{d} r~,
\end{equation}
with $V(r)$ given in \eqref{wfds1}. It is straightforward to verify that \eqref{eq:rn1}-\eqref{eq:rn2} is a solution to \eqref{eq:eom1}-\eqref{maxw}. Notice that the electric charge of the black hole in Sec.\,\ref{sec:RN} is exactly the same as the constant entering in \eqref{eq:F1}.
We will mostly be interested in describing the dynamics surrounding the near-horizon geometries of the three extremal cases: cold, Nariai, and ultracold. We will denote these near-horizon backgrounds as the IR solutions. From the two-dimensional perspective, IR means that we analyze solutions starting from a background with $\Phi(x)= \Phi_0$, i.e., a constant dilaton background.
When $\Phi$ is constant, we find from \eqref{eq:eom2} that
\begin{equation}
Q^2 = \Phi_0^2 \left( 1-3 \frac{\Phi_0^2}{\ell_4^2} \right)~,
\end{equation}
and hence $F_{ab}$ is a constant ($Q\Phi^{-2}_0$) times the volume element of $g_{ab}$. Equation \eqref{eq:eom1} determines the Ricci scalar to be
\begin{equation}\label{eq:ads2}
{\cal R}_0 = -\frac{2}{\Phi_0^2} \left( 1- 6\frac{\Phi_0^2}{\ell_4^2}\right)~.
\end{equation}
That is, the metric $g_{ab}$ has constant curvature. If $6\Phi_0^2<\ell_4^2$, the solution is locally AdS$_2$, where the curvature radius of the 2D geometry is
\begin{equation} \label{radius3}
\frac{1}{\ell_{\rm A}^2}= \frac{1}{\Phi_0^2} \left( 1- 6\frac{\Phi_0^2}{\ell_4^2}\right)~.
\end{equation}
In comparison to \eqref{dscoldext}-\eqref{ads2-cold}, we have $\Phi_0=r_0$ as expected. If $6\Phi_0^2>\ell_4^2$, then the solution is locally dS$_2$, with curvature radius
\begin{equation} \label{radius2}
\frac{1}{\ell_{\rm dS}^2}= \frac{1}{\Phi_0^2} \left( 6\frac{\Phi_0^2}{\ell_4^2}-1\right) ~.
\end{equation}
Comparing to \eqref{ds2-nar}-\eqref{dsN}, we have $\Phi_0=r_\mathsf{n}$.
And if $6\Phi_0^2=\ell_4^2$, the solution is locally Mink$_2$ with $\Phi_0=r_\mathsf{uc}$, in accordance with \eqref{eq:rucl}.
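The constant-curvature value \eqref{eq:ads2} follows from the dilaton equation \eqref{eq:eom1} at $\Phi=\Phi_0$, using $\Lambda_4=3/\ell_4^2$ and the on-shell $F^2$. A SymPy sketch of this step (our own cross-check):

```python
import sympy as sp

Phi0, l = sp.symbols('Phi_0 ell_4', positive=True)

Lam = 3/l**2                            # Lambda_4 = 3/ell_4^2
Q2  = Phi0**2*(1 - 3*Phi0**2/l**2)      # constant-dilaton charge
F2  = -2*Q2/Phi0**4                     # F^2 evaluated at Phi = Phi_0

# dilaton equation of motion solved for the Ricci scalar at Phi = Phi_0
R0 = 1/Phi0**2 + sp.Rational(3, 2)*F2 + Lam
assert sp.simplify(R0 + (2/Phi0**2)*(1 - 6*Phi0**2/l**2)) == 0
```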
In the following sections we will consider linear fluctuations around this IR background. For this purpose, we define
\begin{equation}
\begin{aligned}\label{eq:lin3}
\Phi &= \Phi_0 + \lambda\, Y(x)~,\\
g_{ab}&= \bar g_{ab} + \lambda\, h_{ab}~, \\
A_{a}&= \bar A_{a} + \lambda\, \mathcal{A}_{a}~,
\end{aligned}
\end{equation}
where $\bar g_{ab}$ is the metric for a locally AdS$_2$, dS$_2$ or Mink$_2$ space, i.e., it satisfies \eqref{eq:ads2}, and $\lambda$ is a small parameter. The fluctuations of the gauge field $\mathcal{A}_{a}$ enter the equations of motion via the field strength, and are determined by \eqref{eq:F1}: from there we see that
\begin{equation}\label{eq:F2}
\delta F_{ab}= \frac{\delta Q }{\Phi_0^2}\sqrt{-\bar{g}}\epsilon_{ab} - \frac{3Q }{ \Phi_0^3} Y \sqrt{-\bar{g}}\epsilon_{ab} + \frac{Q }{ 2\Phi_0^2} \sqrt{-\bar{g}} \epsilon_{ab} \bar{g}^{cd} h_{cd}~.
\end{equation}
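The three terms in this variation follow from linearizing the scalar coefficient of $\epsilon_{ab}$ in \eqref{eq:F1}, with $\delta\sqrt{-g}=\tfrac12\sqrt{-g}\,g^{cd}h_{cd}$. A short SymPy check (our own, treating that coefficient as a function of $Q$, $\Phi$ and $\sqrt{-g}$):

```python
import sympy as sp

lam, Q, dQ, Phi0, Y, h, sg = sp.symbols('lambda Q delta_Q Phi_0 Y h sqrt_g',
                                        positive=True)

# scalar coefficient of epsilon_{ab} in F_{ab} = Q (Phi_0/Phi^3) sqrt(-g) eps_{ab}
Phi = Phi0 + lam*Y
Qt  = Q + lam*dQ
sgt = sg*(1 + lam*h/2)        # delta sqrt(-g) = (1/2) sqrt(-g) g^{cd} h_{cd}
f   = Qt*Phi0/Phi**3*sgt

df = sp.series(f, lam, 0, 2).removeO().coeff(lam, 1)
claimed = sg*(dQ/Phi0**2 - 3*Q*Y/Phi0**3 + Q*h/(2*Phi0**2))
assert sp.simplify(df - claimed) == 0
```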
In most of the prior literature it is common to set $\delta Q=0$, i.e., to hold the charge fixed. However, for our purpose it will be important to keep $\delta Q$ arbitrary. With this in mind, from the equations of motion \eqref{eq:eom2} and \eqref{eq:eom1}, we find that at linear order in $\lambda$ the dynamics of $Y(x)$ and $h_{ab}$ become
\begin{equation}\label{massaged1}
\begin{aligned}
{(\bar \nabla_a\bar \nabla_b -\bar g_{ab}\bar \square) Y(x)-\frac{{\cal R}_0}{ 2} \bar g_{ab} Y(x) -\frac{1}{\Phi_0^3} \bar g_{ab} Q \delta Q =0~,}
\end{aligned}
\end{equation}
and
\begin{equation} \label{massaged2}
\begin{aligned}
{\bar \nabla^a\bar \nabla^b h_{ab} -\bar \square h(x) - \frac{{\cal R}_0}{ 2} h(x)- \frac{12}{\Phi_0^3} \left( 1-4\frac{\Phi_0^2}{\ell_4^2} \right)Y(x) +\frac{6}{\Phi_0^4} Q \delta Q =0~,}
\end{aligned}
\end{equation}
where $h(x)= h_{ab}\bar g^{ab}$. In the following, \eqref{massaged1}-\eqref{massaged2} will dictate the response of the system as we move away from the IR background. As each extremal case is discussed, we will solve this system explicitly and connect it to the response of the RNdS$_4$ black hole.
\section{Heating up the cold black hole }\label{sec:heating_cold}
In this section we analyze the thermodynamic response near extremality of the first branch of solutions, the so-called cold black hole, characterized by an AdS$_2 \times S^2$ near-horizon geometry. For the cold solution the inner and outer black hole horizons coincide, $r_+ = r_- \equiv r_0$, and its conserved quantities are expressed in \eqref{ZMRomans}.
Our analysis will encompass two perspectives, which will be contrasted. The first is a perspective from the four-dimensional black hole based on the mechanics in Sec.\,\ref{sec:bh-mech}, which should be viewed as a UV derivation of the response. The second perspective will come from a two-dimensional analysis, where the analysis is done as back-reaction on the IR background of Sec.\,\ref{sec:JTreduction}. We will match both derivations and discuss the holographic interpretations.
This type of analysis has been done extensively in the literature for black holes whose near-horizon geometries contain an AdS$_2$ factor; see \cite{Mertens:2022irh} for a recent review. Hence our discussion here will be brief, and our aim is to set a stage to make comparisons with the Nariai and ultracold black holes.
\subsection{Near-extremality: thermodynamics and geometry}\label{sec:near-cold-thermo}
In this portion, we will quantify the response away from extremality starting from the black hole solution in four-dimensions. That is we will start from a non-extremal black hole and arrange parameters such that we are near to, but not at, the extremal configuration.
The elementary notion of near-extremality we will use is a deviation from coincident horizons. For the cold black hole this will take the form
\begin{equation}
r_{-}=r_{0}-\lambda\,\epsilon+O(\lambda^2)~,\qquad r_{+}=r_{0}+\lambda\,\epsilon+O(\lambda^2)~,
\label{expds}
\end{equation}
where $\lambda$ is the decoupling parameter in \eqref{ctr}, and $\epsilon$ is a finite parameter. The two effects we will quantify are the leading-order response in $\lambda$ of the black hole mechanics, and how the near-horizon geometry is modified by $\lambda$ and $\epsilon$.
\paragraph{Thermodynamics.} The natural effect of \eqref{expds} is to raise the temperature: for $r_\mathsf{h}=r_+$ we have from \eqref{tcosmo}
\begin{equation}\label{eq:Tplus}
T_+ =\frac{1}{4\pi \ell_4^2 r_+^2} \left(2 r_++r_-+r_c\right)\left( r_+-r_-\right) \left(r_c-r_+\right)~,
\end{equation}
hence the near-extremal limit raises the temperature from zero to $T_+\sim O(\lambda)$. In this process we would like to keep the charge $Q$ fixed: this is not a necessary condition, but it is a consistent one to take.\footnote{Throughout we will always take $\ell_4$ fixed, as done in Sec.\,\ref{sec:bh-mech}. This is also not necessary, but reasonable for the comparisons we will make among the three extremal limits. } In this case, one has to adjust the $O(\lambda^2)$ terms in \eqref{expds}. This leads to a response $\delta Q\sim O(\lambda^3)$ and $\delta M\sim O(\lambda^2)$, hence making the effects of the charge sub-leading.
Taking this into account, and using \eqref{constraints}, \eqref{eq:Tplus} and \eqref{expds}, the response of the mass as a function of the temperature is
\begin{equation}
M=M_{0}+\frac{T_+^{2}}{M_{\rm gap}^{\rm cold}}+O(T_+^3)~,
\label{Mextplusgap}
\end{equation}
where the extremal mass, $M_0$, is defined in \eqref{ZMRomans}. We also identify the mass gap as
\begin{equation} \label{Mgap_cold}
M_{\rm gap}^{\rm cold}=\frac{(\ell_4^2-6r_{0}^{2})}{2\pi^{2} \, \ell_4^2 \, r_{0}^{3}} ~.
\end{equation}
The entropy at the outer horizon is linear in the temperature
\begin{equation}
S_+=\pi r_{+}^{2}
= S_{0}+\frac{2T_+}{M_{\rm gap}^{\rm cold}}+O(T_+^{2}) ~,
\label{Scold1}
\end{equation}
where $S_{0}=\pi r_{0}^{2}$ is the extremal entropy. The first law at this order is simply $dM = T_+ dS_+ + O(\lambda^3) $.
It is worth pointing out that the change in the entropy of the cosmological horizon comes at $O(\lambda^2)$, and it is therefore subleading with respect to the change in entropy at the outer horizon. For this reason, we can consider this an ensemble of fixed charge and fixed cosmological horizon area.
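The expansions \eqref{Mextplusgap} and \eqref{Scold1} can be reproduced symbolically. The sketch below introduces an $O(\lambda^2)$ shift $c$ (our own parameterization of the adjustment mentioned above) chosen so that the charge is unchanged at $O(\lambda^2)$, and checks both the mass gap \eqref{Mgap_cold} and the entropy slope:

```python
import sympy as sp

r0, l, lam, eps, c = sp.symbols('r_0 ell_4 lambda epsilon c', positive=True)

# split the horizons, with an O(lambda^2) shift c to be fixed below
rm = r0 - lam*eps + lam**2*c
rp = r0 + lam*eps + lam**2*c

M  = (rp + rm)*(l**2 - rp**2 - rm**2)/(2*l**2)
Q2 = rp*rm*(l**2 - rp**2 - rm**2 - rm*rp)/l**2

# fix c so that the charge is unchanged at O(lambda^2)
csol = sp.solve(sp.Eq(sp.expand(Q2).coeff(lam, 2), 0), c)[0]
dM = sp.expand(M).coeff(lam, 2).subs(c, csol)   # M = M_0 + dM lambda^2 + ...

# cosmological horizon at fixed ell_4, and the temperature at O(lambda)
s  = rp + rm
rc = (-s + sp.sqrt(s**2 - 4*(rp**2 + rm**2 + rp*rm - l**2)))/2
Tp = (2*rp + rm + rc)*(rp - rm)*(rc - rp)/(4*sp.pi*l**2*rp**2)
Tp1 = sp.simplify(sp.series(Tp.subs(c, csol), lam, 0, 2).removeO().coeff(lam, 1))

Mgap = (l**2 - 6*r0**2)/(2*sp.pi**2*l**2*r0**3)
assert sp.simplify(dM - Tp1**2/Mgap) == 0                                   # mass
assert sp.simplify(sp.expand(sp.pi*rp**2).coeff(lam, 1) - 2*Tp1/Mgap) == 0  # entropy
```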
\paragraph{Near-horizon geometry.}
As anticipated, separating the inner and the outer black hole horizons by a small amount increases the temperature, and hence, we see an increase in the entropy. There is as well an effect in the near-horizon geometry, which we now quantify.
To see the form of the near-horizon geometry of this configuration we take into consideration the displacement of the horizons in \eqref{expds} as we take the decoupling limit. Adapting slightly \eqref{ctr},\footnote{This adjustment is just for aesthetic reasons, i.e., to preserve the form $g_{RR}=\ell_{\rm A}^2/R^2$ as we take the limit $\lambda\to 0$.} we now have
\begin{equation}\label{ctr:near}
r= r_0 +\lambda \left(R+ \frac{\epsilon^2}{4} R^{-1}\right) ~, \qquad t=\frac{\ell_{\rm A}^2}{\lambda}T~.
\end{equation}
Using \eqref{expds} and \eqref{ctr:near} on \eqref{dsrnds}, and taking $\lambda\to 0$, we find
\begin{equation}\label{eq:near-cold}
\begin{aligned}
ds^{2}&=\ell_{\rm A}^2 \left( -R^{2}\left(1-\frac{\epsilon^{2}}{4R^2}\right)^2\mathrm{d} T^{2}+\frac{\mathrm{d} R^{2}}{R^{2}} \right)+r_{0}^{2}
\,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,\\
F &=\frac{Q_0}{r_0^2}\, \ell_{\rm A}^2\left(1-\frac{\epsilon^{2}}{4R^2}\right) \mathrm{d} T\wedge \mathrm{d} R~,
\end{aligned}
\end{equation}
where $\ell_A$ is defined in (\ref{ads2-cold}).
This is an instance of a \textit{nearly}-AdS$_2$ geometry, which arises as the near-horizon region of the near-extremal solution. The geometry is locally AdS$_2$, but globally some of the isometries are broken; they are restored in the extremal limit, corresponding to $\epsilon\rightarrow 0$.
It is also important to quantify the leading order response, and compare it with the holographic properties we will discuss shortly. Using the two-dimensional notation, we will parameterize the first response in $\lambda$ as
\begin{equation}
\begin{aligned} \label{ads2pluspertCE}
ds^2&= \left(\bar{g}_{ab} + \lambda\, h_{ab}\right) \mathrm{d} x^a \mathrm{d} x^b+ \left(\Phi_0^2 + 2\lambda \Phi_0 \, Y \right)\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2 \right)+\cdots ~,\\
A &= \bar A_{a} \mathrm{d} x^a + \lambda\, {\cal A}_a \mathrm{d} x^a + \cdots~,
\end{aligned}
\end{equation}
that is, there is a response from the AdS$_2$ background ($h_{ab}$, ${\cal A}_a$) and the size of the two-sphere (${Y}$). Here $\bar{g}_{ab}$ is the locally AdS$_2$ background, and $\bar A_a$ is the gauge field associated to $F$ in \eqref{eq:near-cold}. For the near-extremal cold solution we find
\begin{equation}\label{eq:Ycold}
\Phi_0 = r_0~, \qquad Y(x)=R+ \frac{\epsilon^2}{4} R^{-1} ~.
\end{equation}
Note that we are not explicitly reporting the profile of $h_{ab}$, although it is straightforward to extract. This is because $h_{ab}$ is not an independent degree of freedom. As will be evident in Sec.\,\ref{sec:hol-cold}, its profile is determined by a choice of gauge and the dynamics of $Y$. Since we are holding the charge $Q$ fixed, the response of ${\cal A}_a$ is also dictated by $Y$ and $h_{ab}$; see \eqref{eq:F2}.
\subsection{Holographic analysis}\label{sec:hol-cold}
In this portion we will analyze the extremal cold solutions, and their response near-extremality, from the two-dimensional perspective. We will first report the solutions for the linearized perturbations and then we analyze the renormalized action.
This follows closely the analogous derivations in \cite{Cvetic:2016eiv,Castro:2018ffi,Castro:2019vog}, which we refer to for more details.
We will cast our solutions in a radial gauge, where we take
\begin{equation}\label{eq:gauge-cold}
ds^2= \mathrm{d}\rho^2 + \gamma_{TT} \mathrm{d} T^2~, \qquad A=A_T \,\mathrm{d} T~.
\end{equation}
For a cold black hole, the appropriate IR background
is \eqref{radius3}: the locally AdS$_2$ solution. For this case we will cast the metric as
\begin{equation} \label{bg_cold}
\overline{g}_{ab} \mathrm{d} x^a \mathrm{d} x^b= \mathrm{d}\rho^2 + \bar{\gamma}_{TT} \mathrm{d} T^2\,, \qquad \bar \gamma_{TT} = - \left(\alpha(T) e^{\rho/\ell_{\rm A}} + \beta(T) e^{-\rho/\ell_{\rm A}}\right)^2\,,
\end{equation}
and the background gauge field reads
\begin{equation}\label{form_gauge}
\bar A=\bar A_{T}\mathrm{d} T = \mu(T)\mathrm{d} T -\frac{Q \ell_{\rm A}}{\Phi_0^2} \left(\alpha(T) e^{\rho/\ell_{\rm A}} - \beta(T) e^{-\rho/\ell_{\rm A}}\right)\mathrm{d} T~.
\end{equation}
This solution is locally AdS$_2$ for arbitrary metric functions $\alpha(T)$ and $\beta(T)$; note that $\mu(T)$ is a pure gauge function. In comparison to \eqref{eq:near-cold} we have
\begin{equation}\label{eq:compare12}
R= e^{\rho/\ell_{\rm A}}~,\qquad \alpha(T)_{\rm cold}= \ell_{\rm A} ~, \quad \beta(T)_{\rm cold}= -\ell_{\rm A} \frac{\epsilon^2}{4}~.
\end{equation}
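As a consistency check (not part of the derivation), the identification \eqref{eq:compare12} can be verified symbolically. The sketch below assumes the near-extremal cold factor takes the standard AdS$_2$ black-hole form $\ell_{\rm A}^2\big({-(R^2-\epsilon^2)}\,\mathrm{d} T^2+\mathrm{d} R^2/(R^2-\epsilon^2)\big)$, the AdS analogue of the Nariai result \eqref{eq:near-ext-N} below, together with the exact radial map $R\to e^{\rho/\ell_{\rm A}}+\tfrac{\epsilon^2}{4}e^{-\rho/\ell_{\rm A}}$, which reduces to $R=e^{\rho/\ell_{\rm A}}$ at extremality:

```python
import sympy as sp

rho, eps, lA = sp.symbols('rho epsilon ell_A', positive=True)

# exact radial map; reduces to R = exp(rho/ell_A) at extremality (eps = 0)
R = sp.exp(rho/lA) + (eps**2/4)*sp.exp(-rho/lA)

# assumed AdS2 black-hole form: lA^2 [ -(R^2 - eps^2) dT^2 + dR^2/(R^2 - eps^2) ]
g_TT_bh = -lA**2*(R**2 - eps**2)
g_rr_bh = lA**2*sp.diff(R, rho)**2/(R**2 - eps**2)   # dR^2 term pulled back to rho

# rho-gauge background (bg_cold) with the cold values alpha = lA, beta = -lA eps^2/4
g_TT_gauge = -(lA*sp.exp(rho/lA) - (lA*eps**2/4)*sp.exp(-rho/lA))**2

print(sp.simplify(g_rr_bh))                # 1: radial part is exactly d rho^2
print(sp.simplify(g_TT_bh - g_TT_gauge))   # 0: gamma_TT matches (bg_cold)
```

Both simplifications confirm that the two presentations of the near-extremal cold background are related by this change of radial coordinate.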
The response for this background will be parameterized by $Y(x)$ and $h_{ab}$ as defined in \eqref{eq:lin3}; since we are taking $\delta Q=0$, the response of the gauge field is fixed by \eqref{eq:F2}. The profile of the perturbations comes from solving the linearized equations given in \eqref{massaged1}-\eqref{massaged2}. Starting with the solution to \eqref{massaged1}, in our conventions the dilaton reads
\begin{equation}
Y(x)= \nu(T)e^{\rho/\ell_{\rm A}} + \theta(T) e^{-\rho/\ell_{\rm A}}~,
\end{equation}
with
\begin{equation}
\begin{aligned}\label{eq:beta-nu}
\beta(T) &= -\frac{\ell_{\rm A}^2}{4} \frac{\alpha}{ \partial_T \nu} \partial_T \left( \frac{1}{\nu} \left( c_0 + \frac{(\partial_T \nu)^2}{\alpha^2} \right) \right) ~, \\
\theta(T) & = -\frac{\ell_{\rm A}^2}{4 \nu} \left(c_0 + \frac{(\partial_T \nu)^2}{\alpha^2} \right)~,
\end{aligned}
\end{equation}
where $c_0$ is a constant. Here we have chosen to express the subleading components $\beta(T), \theta(T)$ of the background metric and fluctuations in terms of the leading source terms $\alpha(T), \nu(T)$. Comparing with \eqref{eq:Ycold}, we have
\begin{equation}\label{eq:compare34}
\nu(T)_{\rm cold}=1 ~,\qquad \theta(T)_{\rm cold}=\frac{\epsilon^2}{4}~.
\end{equation}
Finally we report on the metric perturbations. Once we plug in the solution for the dilaton, the equation for the metric \eqref{massaged2} is straightforwardly solved by
\begin{equation}\label{eq:g1}
h_{TT} =
-4\frac{\ell_{\rm A}^2}{ \Phi_0^3} \left(1-4\frac{\Phi_0^2}{\ell_4^2}\right) \left( \bar\gamma_{TT} \, Y(x) -2 \ell_{\rm A}^2 \sqrt{-\bar \gamma}\, {\partial_T} \left( \frac{\partial_T \nu(T) }{\alpha (T)} \right) \right) \,.
\end{equation}
Notice that we have focused on the inhomogeneous part of the solution, given that the homogeneous part can be absorbed in the arbitrary functions $\alpha(T)$ and $\beta(T)$ appearing in the background metric solution \eqref{bg_cold}.
To sum up, the solutions to the system \eqref{massaged1}-\eqref{massaged2} are given in terms of only two functions, $\alpha(T)$ and $\nu(T)$, appearing as the (finite) source in the background metric and the (infinitesimal) source for the irrelevant operator dual to the dilaton $Y$, whose conformal dimension is $\Delta=2$.
Everything until now closely resembles the generic discussion of backreaction in AdS$_2$ as done in, e.g., \cite{Castro:2018ffi}. However, it is worth noticing that the metric backreaction in \eqref{eq:g1} does not have a definite sign in de Sitter space: the overall coefficient of the correction $h_{TT}$ can change sign. If
\begin{equation}\label{eq:boundQ}
3\Phi_0^2 \left(1-4\frac{\Phi_0^2}{\ell_4^2}\right)= 4 Q^2-\Phi_0^2 \geq0~,
\end{equation}
the sign is in accordance with the backreaction of black holes embedded in AdS$_4$ and Mink$_4$. However, the overall sign in \eqref{eq:g1} flips when $4 Q^2-\Phi_0^2 <0$. This is curious since \eqref{eq:boundQ} is a more stringent bound relative to \eqref{radius3}. That is, we have a change in the behaviour of $h_{TT}$ before we reach the ultracold limit at the top of Fig.\,\ref{SharkFin}. We stress that when $\Lambda_4\leq0$ the coefficient in \eqref{eq:g1} is always negative; this is the first instance known to us where the metric backreaction changes its behaviour. In the cases considered in \cite{Castro:2021fhc,Castro:2021wzn}, which are models that include matter fields in two dimensions, the backreaction also changes sign; however, there it is reflected in the interactions between the dilaton and the matter content of the theory.
The constraint \eqref{eq:boundQ} might be a reflection that close to the Shark Fin we need to consider perturbations with $\delta Q\neq0$. This would be compatible with our results in Sec.\,\ref{sec:ultracold}, but unfortunately we have not been able to tie these two observations together. It would be interesting to have a holographic understanding of the origin of \eqref{eq:boundQ}, and also to find other aspects of the response of the black hole that are affected by it.
\subsubsection{Renormalized action}
In order to compute the renormalized on-shell action we consider the effective 2D action \eqref{eq:2daction},
and we plug in the solution constructed between \eqref{eq:gauge-cold}-\eqref{eq:g1}. The range of integration is taken to be a finite radial value $\rho_h$ in the IR and a cutoff regulator $\rho_0$ in the UV of AdS$_2$. The regulated action is divergent as we send $\rho_0$ to infinity; it therefore needs to be supplemented by additional boundary terms, which guarantee a well-posed
Dirichlet boundary problem (the Gibbons-Hawking-York term $I_{\rm GH}$) and remove residual divergences ($I_{\rm ct}$) once the cutoff is removed:\footnote{Recall that we are setting $G_N=1$.}
\begin{equation} \label{ct_cold_r}
I_{\rm ct} = - \frac{1}{ \, \ell_{\rm A}} \int \mathrm{d} T \, \sqrt{-\gamma}\,\Phi^2~, \qquad I_{\rm GH} = \frac{1}{2 } \int \mathrm{d} T \, \sqrt{-\gamma}\, \Phi^2\,K ~.
\end{equation}
These counterterms are appropriate to a 2-dimensional spacetime ${\cal M}_2$ with a one-dimensional boundary $\partial {\cal M}_2$ with induced metric $\gamma_{ab}$. The Gibbons-Hawking surface term is given in terms of the trace of the extrinsic curvature $K_{ab}$ of the boundary, $K_{ab} = -\frac12 (\nabla_{a} n_{b} + \nabla_{b} n_{a})$,
where $n^{a}$ is the outward-pointing vector normal on $\partial {\cal M}_2$.
Moreover, given the form of the gauge field \eqref{form_gauge}, one can see that the leading component of $A_T$ is not the source term $\mu(T)$, but the term proportional to the volume of AdS$_2$. In order to have a well-defined variational principle in terms of the source, we have to perform a double Legendre transform \cite{Cvetic:2016eiv}, which both cancels the volume term in the momentum conjugate to the gauge field and imposes Dirichlet boundary conditions. We collect below the final form for the action, but more details can be found in \cite{Cvetic:2016eiv,Castro:2018ffi,Castro:2019vog}.
In the final form of the renormalized on-shell action the dependence on the regulator $\rho_0$ drops out, and the result is finite and depends on the source functions $ \alpha(T), \nu(T), \mu(T)$ appearing explicitly in the solution. Its value is
\begin{equation}
I_{\rm on-shell-cold} = -\ell_{\rm A} \Phi_0 \, \lambda \int \mathrm{d} T \left(\frac{ 4 c_0 \alpha (T)}{ \nu (T)}+\frac{ \nu '(T)^2}{\alpha (T) \nu (T)} - \mu(T) \frac{Q}{\Phi_0^3} \right) + I_{\rm global} \,,
\end{equation}
where $I_{\rm global}$ denotes the value of the integral evaluated at the horizon $\rho_h$, whose explicit form is not necessary for our purposes.
Following the reasoning in \cite{Castro:2018ffi}, we can see that the renormalized action is invariant under infinitesimal time reparameterizations and $U(1)$ gauge transformations. The functions $ \alpha(T)$, $\nu(T)$, and $\mu(T)$ are pure gauge and can be traded for the three independent functions that generate residual gauge symmetries; the on-shell action actually depends on only one of them, which we call $\sigma(T)$, and which generates a boundary Weyl transformation. Without loss of generality, we can therefore parameterize the sources as a finite Weyl transformation starting from the reference point with $\alpha=1$, $\nu=1$, $\mu=\mu_0$, namely
\begin{equation}\label{eq:weyl}
\alpha(T) = e^{\sigma(T)/\ell_{\rm A}}, \qquad \nu(T) = e^{\sigma(T)/\ell_{\rm A}}, \qquad \mu(T) = \mu_0\,.
\end{equation}
Inserting these in the on-shell action, the latter boils down to this simple expression:
\begin{equation} \label{onshell_final_def}
I_{\rm on-shell-cold} = \ell_{\rm A} \Phi_0 \, \lambda \int \mathrm{d} T \left( -{4 c_0} + 2 \{ \tau(T),T \} + \frac{Q \mu_0}{\Phi_0^3} \right) + I_{\rm global}~.
\end{equation}
To arrive at expression \eqref{onshell_final_def}, we have parameterized $\sigma(T)$ in terms of an arbitrary auxiliary function $\tau(T)$ as
\begin{equation}
\sigma(T) = \ell_{\rm A} \log \partial_T \tau(T)~,
\end{equation}
and added a total derivative term. In the integral we have introduced
\begin{equation}
\{ \tau(T),T \} \equiv \frac{\partial_T^3 \tau}{\partial_T \tau} - \frac32 \left(\frac{\partial_T^2 \tau}{\partial_T \tau} \right)^2~,
\end{equation}
i.e., the Schwarzian derivative.
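Two properties invoked above can be checked symbolically: the Schwarzian vanishes on M\"obius ($SL(2,\mathbb{R})$) maps, and $(\partial_T^2\tau/\partial_T\tau)^2 = 2\,\partial_T\!\left(\partial_T^2\tau/\partial_T\tau\right)-2\{\tau(T),T\}$, the total-derivative identity used to arrive at \eqref{onshell_final_def}. A short sympy sketch:

```python
import sympy as sp

T, a, b, c, d = sp.symbols('T a b c d')
tau = sp.Function('tau')

def schwarzian(f, x):
    return sp.diff(f, x, 3)/sp.diff(f, x) \
        - sp.Rational(3, 2)*(sp.diff(f, x, 2)/sp.diff(f, x))**2

# (i) SL(2,R) invariance: Mobius maps have vanishing Schwarzian
print(sp.simplify(schwarzian((a*T + b)/(c*T + d), T)))     # 0

# (ii) total-derivative identity behind (onshell_final_def):
#     (tau''/tau')^2 = 2 d/dT(tau''/tau') - 2 {tau, T}
f = tau(T)
lhs = (sp.diff(f, T, 2)/sp.diff(f, T))**2
rhs = 2*sp.diff(sp.diff(f, T, 2)/sp.diff(f, T), T) - 2*schwarzian(f, T)
print(sp.simplify(lhs - rhs))                              # 0
```

Identity (ii) is what allows the $(\partial_T\nu)^2/(\alpha\nu)$ term of the on-shell action to be traded for the Schwarzian up to a boundary term.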
Unsurprisingly, the response of the system under Weyl transformations of the boundary metric manifests at the level of the on-shell action in the appearance of the Schwarzian derivative.
We are ready now to briefly make contact with the thermodynamic analysis due to the linear response induced by $Y(x)$. We will be working with \eqref{eq:compare12} and \eqref{eq:compare34},\footnote{In order to cast the metric \eqref{dscoldext} in the form of the background $\overline{g}_{ab}$ with \eqref{bg_cold}, we need to re-scale the time coordinate in \eqref{ctr} by $T \rightarrow T/\ell_{\rm A}$, effectively resulting in the multiplicative $\ell_{\rm A}$ factor in the on-shell action.} for which the background metric is \eqref{bg_cold}. This is a solution that contains a horizon at
\begin{equation}
\bar\gamma_{TT}(\rho = \rho_h) =0 \qquad \rightarrow \qquad e^{2\rho_h/\ell_{\rm A}} = -\frac{\beta_{\rm cold}}{\alpha_{\rm cold}}\,.
\end{equation}
The associated temperature in 2D is defined as
\begin{equation} \label{T2}
T_{2D} = \frac{1}{2\pi} \partial_{\rho} \sqrt{-\gamma}|_{\rho_h} = \frac{\sqrt{-\alpha_{\rm cold}\beta_{\rm cold}}}{\pi\,\ell_{\rm A}} = \frac{\epsilon}{2\pi}~.
\end{equation}
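A quick symbolic check of this temperature with the cold values \eqref{eq:compare12}:

```python
import sympy as sp

rho, eps, lA = sp.symbols('rho epsilon ell_A', positive=True)

alpha, beta = lA, -lA*eps**2/4                       # cold values, eq. (compare12)
vol = alpha*sp.exp(rho/lA) + beta*sp.exp(-rho/lA)    # sqrt(-gamma)

rho_h = lA*sp.log(eps/2)            # horizon: exp(2 rho/lA) = -beta/alpha = eps^2/4
T2D = sp.diff(vol, rho).subs(rho, rho_h)/(2*sp.pi)

print(sp.simplify(T2D))             # epsilon/(2*pi)
```

Note that the result is independent of $\ell_{\rm A}$, since the factor of $1/\ell_{\rm A}$ from differentiating the exponentials compensates the $\ell_{\rm A}$ in $\alpha_{\rm cold}$, $\beta_{\rm cold}$.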
Notice that the relation to the near-extremal temperature \eqref{eq:Tplus} is $T_+= \frac{\lambda}{\ell_{\rm A}^2}T_{2D}$, in accordance with the change of time coordinate in \eqref{ctr:near}. The entropy is found by evaluating the dilaton at the horizon:
\begin{equation}
\begin{aligned}
S_{2D} &= \pi \Phi(x)^2_{\rm horizon}\\
&= \pi \Phi_0^2 + 2\pi \Phi_0 \lambda Y(x)_{\rm horizon} + \cdots
\end{aligned}
\end{equation}
After using the values reported here, it is straightforward to check that this agrees with \eqref{Scold1} and \eqref{Mgap_cold}, where in the 2D language we have
\begin{equation}
M_{\rm gap}^{\rm cold}=\frac{(\ell_4^2-6r_{0}^{2})}{2\pi^{2} \, \ell_4^2 \, r_{0}^{3}} = \frac{1}{2\pi^{2} \ell_{\rm A}^2\Phi_0}~.
\end{equation}
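The equality of the two expressions can be checked in one line, assuming the AdS$_2$ radius of the cold near-horizon region is $\ell_{\rm A}^2 = r_0^2\ell_4^2/(\ell_4^2-6r_0^2)$ (an assumption on our part: the analogue of the dS$_2$ radius quoted for Nariai below, with the sign of the denominator flipped):

```python
import sympy as sp

r0, l4 = sp.symbols('r_0 ell_4', positive=True)

lA2 = r0**2*l4**2/(l4**2 - 6*r0**2)       # assumed AdS2 radius of the IR region
Phi0 = r0

Mgap_4d = (l4**2 - 6*r0**2)/(2*sp.pi**2*l4**2*r0**3)
Mgap_2d = 1/(2*sp.pi**2*lA2*Phi0)

print(sp.simplify(Mgap_4d - Mgap_2d))     # 0
```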
As shown in \cite{Maldacena:2016upp}, this linear response in temperature arises from the Schwarzian effective action in \eqref{onshell_final_def}, where $(M_{\rm gap}^{\rm cold})^{-1}$ is proportional to the coupling in front of the action.
\section{Deviations away from extremality for Nariai }\label{sec:nariai}
In this section we analyze the response away from extremality for the Nariai black hole. This was the solution with $r_+=r_c\equiv r_\mathsf{n}$, i.e., the outer and cosmological horizons coincide. The properties of the extremal solution are described in \eqref{RNariai}-\eqref{ZMN1}. A key feature here is that the near-horizon geometry for the Nariai solution is dS$_2\times S^2$, where the de Sitter radius is
\begin{equation}
\ell^2_{\rm dS}=\frac{r_{\mathsf{n}}^2 \ell_4^2}{6 r_{\mathsf{n}}^2 - \ell_4^2}~,
\end{equation}
and the key restriction to recall in this case is
\begin{equation} \label{cnd_nar}
6 r_{\mathsf{n}}^2 > \ell_4^2~.
\end{equation}
This will be important as we contrast the cold black hole with Nariai: many aspects are shared by dS$_2$ and AdS$_2$, but small differences are key.
There are several recent analyses of dS$_2$, and near-dS$_2$, that apply to the Nariai limit of Schwarzschild dS$_4$ black holes, and studies from the perspective of a two-dimensional theory with a running dilaton; see for example \cite{Maldacena:2019cbz,Moitra:2022glw,Svesko:2022txo,Anninos:2022hqo}. Our presentation here will be brief---and more details are covered in the references---since the main purpose for us is to contrast this scenario with the responses in the cold and ultracold cases.
\subsection{Near-extremality: thermodynamics and geometry}\label{sec:near-nariai-thermo}
\paragraph{Thermodynamics.} In analogy to Sec.\,\ref{sec:near-cold-thermo}, to go slightly beyond extremality, we displace $r_{+}$ and $r_{c}$ around $r_{\mathsf{n}}$ as follows
\begin{equation}
r_{+}=r_{\mathsf{n}}-\lambda\epsilon+O(\lambda^{2})~,\qquad r_{c}=r_{\mathsf{n}}+\lambda\epsilon+O(\lambda^{2})~.
\label{expdsnariai}
\end{equation}
As expected, this deviation switches on equal temperatures at the outer horizon $(T_+)$ and the cosmological horizon $(T_c)$, both linear in $\lambda$ at leading order. That is, $T_+=T_c\sim O(\lambda)$, with the equality holding only at leading order.
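This can be verified symbolically. The sketch below assumes the standard RN-dS$_4$ blackening factor $V(r)=1-2M/r+Q^2/r^2-r^2/\ell_4^2$, so that $r^2V(r)$ is a quartic whose roots $\{r_-,r_+,r_c,r_o\}$ sum to zero; the sample numbers ($r_{\mathsf{n}}=1$, $\ell_4^2=5$) are our choice, picked to satisfy \eqref{cnd_nar}:

```python
import sympy as sp

lam, eps, r = sp.symbols('lambda epsilon r', positive=True)

# Sample Nariai point with 6 rn^2 > l4^2: rn = 1, l4^2 = 5 gives rm = -1 + sqrt(3)
rn = sp.Integer(1)
rm = -rn + sp.sqrt(3)
rp, rc = rn - lam*eps, rn + lam*eps          # split horizons, eq. (expdsnariai)
ro = -(rm + rp + rc)                         # fourth root: roots of r^2 V sum to zero

P = sp.expand((r - rm)*(r - rp)*(r - rc)*(r - ro))
l4sq = -P.coeff(r, 2)                        # = 5 + O(lambda^2)
V = -P/(l4sq*r**2)                           # RN-dS4 blackening factor

Tp = sp.series( sp.diff(V, r).subs(r, rp)/(4*sp.pi), lam, 0, 2).removeO()
Tc = sp.series(-sp.diff(V, r).subs(r, rc)/(4*sp.pi), lam, 0, 2).removeO()

print(sp.simplify(Tp - Tc))     # 0: equal temperatures at leading order
print(sp.simplify(Tp))          # linear in lambda, as claimed
```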
To follow the parallel with Sec.\,\ref{sec:near-cold-thermo}, here we can also fix the charge $Q$ of the black hole, which sets constraints on the higher order terms in \eqref{expdsnariai}. With this choice, and taking the perspective of the cosmological horizon, we find the following mechanical response:
\begin{equation}\label{eq:MS12}
M=M_{\mathsf{n}}+\frac{T_c^{2}}{M_{\rm gap}^{\rm n}}+\cdots~,
\qquad
S_c=S_{\mathsf{n}}-\frac{2T_c}{M_{\rm gap}^{\rm n}}+\cdots~,
\end{equation}
where $M_{\mathsf{n}}$ is given in \eqref{ZMN1}, $S_{\mathsf{n}}=\pi r_\mathsf{n}^2$, and
\begin{equation}\label{eq:gapn}
M_{\rm gap}^{\rm n}=\frac{(\ell_4^2-6r_{\mathsf{n}}^{2})}{2\pi^{2} \, \ell_4^2 \, r_{\mathsf{n}}^{3}} =-\frac{1}{2\pi^{2}\,\ell_{\rm dS}^2\, r_{\mathsf{n}}}~<0~.
\end{equation}
Very crucially, the mass gap here is negative! One should expect this: for fixed $Q$, starting from the extremal Nariai solution, represented by the right edge of the diagram in Fig.\,\ref{SharkFin}, we can only decrease the mass if we want to find physical solutions.
We can also take the perspective of the outer horizon, which gives
\begin{equation}\label{eq:MS123}
M=M_{\mathsf{n}}+\frac{T_+^{2}}{M_{\rm gap}^{\rm n}}+\cdots~,
\qquad
S_+=S_{\mathsf{n}}+\frac{2T_+}{M_{\rm gap}^{\rm n}}+\cdots~,
\end{equation}
and the same value of mass gap in \eqref{eq:gapn}. In this case both the entropy and mass decrease! As discussed in
\cite{Svesko:2022txo,Anninos:2022hqo}, this can be viewed as an instability of the outer horizon, while the cosmological horizon is stable to the near-extremal perturbation.\footnote{In the cold case a similar effect to \eqref{eq:MS123} occurs at the inner horizon. It is usually not interesting to highlight since one tends to not place an observer between $r_-$ and $r_+$, and it is known that the inner horizon is unstable. For a cosmological horizon it is relevant to discuss the perspective of the static patch observer for whom both the outer and cosmological horizon are present.}
\paragraph{Near-horizon geometry.}
The near-horizon region is reached by performing the usual coordinate transformation \eqref{ctr1} combined with \eqref{expdsnariai}, where we will make just a small modification
\begin{equation} \label{expnariainext}
r = r_\mathsf{n} \pm \lambda R~,\qquad t = \frac{\ell_{\rm dS}^2}{\lambda} T~.
\end{equation}
In contrast to \eqref{ctr:near}, we have not modified the radial diffeomorphism, since we want to exhibit the static patch below. We have added a ``$\pm$'' to distinguish the displacement relative to the outer or cosmological horizon. Replacing (\ref{expnariainext}) and \eqref{expdsnariai} in (\ref{dsrnds}), and taking the decoupling limit $\lambda\rightarrow0$ while holding $T$, $R$ and the sphere fixed, we find
\begin{equation}\label{eq:near-ext-N}
ds^{2}= \ell_{\rm dS}^2 \left( (R^{2}-\epsilon^{2}) \mathrm{d} T^{2}-\frac{\mathrm{d} R^{2}}{(R^{2}-\epsilon^{2})} \right)+r_{\mathsf{n}}^{2}\,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,
\end{equation}
and the field strength is given by \eqref{Fds2}. The near-horizon geometry of the near-extremal Nariai solution contains a \textit{nearly}-dS$_2$ factor. Taking the extremal limit $\epsilon\rightarrow 0$ we indeed restore the dS$_2$ factor we had in the extremal case \eqref{dsN}.
Notice that when obtaining the line element \eqref{eq:near-ext-N}, a reasonable choice is to expand the blackening factor $V(r)$ for $r_+<r<r_c$. This restricts $-\epsilon<R<\epsilon$, and hence the result of the decoupling limit is the static patch of dS$_2$ where the Euclidean geometry is locally a sphere. One could
consider instead $r>r_c$, i.e., to have an observer on the inflationary patch, and then we would have $R>\epsilon$.
As we did in \eqref{ads2pluspertCE}, we also report on the leading order response. Using the two-dimensional notation, we will parameterize the first response in $\lambda$ as
\begin{equation}
\begin{aligned} \label{ds2pluspertCE}
ds^2&= \left(\bar{g}_{ab} + \lambda\, h_{ab}\right) \mathrm{d} x^a \mathrm{d} x^b+ \left(\Phi_0^2 + 2\lambda \Phi_0 \, Y \right)\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2 \right)+\cdots ~,\\
A &= \bar A_{a} \mathrm{d} x^a + \lambda\, {\cal A}_a \mathrm{d} x^a + \cdots~,
\end{aligned}
\end{equation}
that is, there is a response from the dS$_2$ background ($h_{ab}$, ${\cal A}_a$) and the size of the two-sphere (${Y}$). Here $\bar{g}_{ab}$ is the locally dS$_2$ background, and $\bar A_a$ is the gauge field associated to $F$ in \eqref{Fds2}. For the near-extremal Nariai solution we find
\begin{equation}\label{eq:Yn}
\Phi_0 = r_\mathsf{n}~, \qquad Y(x)=\pm R~.
\end{equation}
Here the choice of positive or negative sign would be important in determining which horizon is being deformed: the plus sign will lead to the mechanics in \eqref{eq:MS12} and the minus sign to \eqref{eq:MS123}.
\subsection{Two-dimensional analysis} \label{sec_dS}
In this last portion we will discuss the solution to the linear equations \eqref{massaged1}-\eqref{massaged2}. We will adopt a notation very similar to Sec.\,\ref{sec:hol-cold}, so the contrast with the counterpart of near-AdS$_2$ is manifest. We take the following value for the background 2d metric
\begin{equation}\label{dSinfl}
ds_2^2 = -\mathrm{d}\rho^2 + \bar\gamma_{TT}\, \mathrm{d} T^2~, \qquad \bar \gamma_{TT} = \left(\alpha(T) e^{\rho/\ell_{\mathsf{dS}}} + \beta(T) e^{-\rho/\ell_{\mathsf{dS}}}\right)^2~,
\end{equation}
which differs from formula \eqref{bg_cold} only by an overall sign, and the presence of $\ell_{\mathsf{dS}}$ instead of $\ell_{\rm A}$. Hence, here $\rho$ is time and $T$ is a spatial direction. The metric \eqref{dSinfl} can be regarded as a generalization of global coordinates for dS$_2$. For the background gauge field we have
\begin{equation}
\bar A_{T} = \mu(T) -\frac{Q \ell_{\mathsf{dS}}}{\Phi_0^2} \left(\alpha(T) e^{\rho/\ell_{\mathsf{dS}}} - \beta(T) e^{-\rho/\ell_{\mathsf{dS}}}\right)~.
\end{equation}
Notice that the solutions for Nariai, both the background and perturbations, can be easily found by noticing that the configuration is formally equivalent to the cold one, upon performing the transformation $\rho \rightarrow i \rho$ and $\ell_{\mathsf{dS}} \rightarrow i \ell_{\mathsf{A}}$. This takes Lorentzian dS$_2$ to Euclidean AdS$_2$. However, the important subtleties come from imposing reality conditions on the various arbitrary functions that appear as we solve the system.
By adopting the same procedure as in Sec.\,\ref{sec:hol-cold}, we start by analyzing the solution to \eqref{massaged1} for $\delta Q=0$. The solution for the dilaton reads
\begin{equation}
Y(x) = \nu(T) e^{\rho/\ell_{\mathsf{dS}}} + \theta(T) e^{-\rho/\ell_{\mathsf{dS}}}~,
\end{equation}
with
\begin{equation}
\beta(T) = \frac{\alpha (T) \theta '(T)}{\nu '(T)}~, \qquad \theta = \frac{c_n}{\nu(T)}-\frac{\ell_{\mathsf{dS}}^2 \left(\nu'(T)\right){}^2}{4 \alpha (T)^2 \nu(T)}~,
\end{equation}
and $c_n$ a constant. The metric perturbation is
\begin{equation}
\sqrt{-\gamma_1} = \frac{4 \ell_{\mathsf{dS}}^2 \left(4 Q^2-\Phi_0^2\right)}{3 \Phi_0^5} \left( \sqrt{-\bar \gamma} \, Y(x) +2 \ell_{\mathsf{dS}}^2 {\partial_T } \left( \frac{ \nu'(T) }{\alpha (T)} \right) \right)~.
\end{equation}
We have displayed the solutions in a coordinate system adequate for the inflationary patch of dS$_2$. However, the responses are also interesting from the static patch perspective, as reflected in our discussion in Sec.\,\ref{sec:near-nariai-thermo}. To move to the static patch we first need to extend $\rho$: this can be done by the coordinate change
\begin{equation}
\cosh \frac{\rho}{\ell_{\mathsf{dS}}} = \frac{R}{\ell_{\mathsf{dS}}}~,
\end{equation}
and select
\begin{equation}
\alpha_{\rm static} = -\beta_{\rm static}=\frac{\ell_{\rm dS}}{2}~,
\end{equation}
where now \eqref{dSinfl} becomes
\begin{equation} \label{stat_p}
ds^2 = -\ell_{\mathsf{dS}}^2\left( 1-\cfrac{R^2}{\ell_{\mathsf{dS}}^2} \right) \mathrm{d} T^2 + \cfrac{\mathrm{d} R^2}{\left(1-\cfrac{R^2}{\ell_{\mathsf{dS}}^2}\right)} ~.
\end{equation}
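The coordinate change can be verified directly; a short sympy sketch:

```python
import sympy as sp

rho, l = sp.symbols('rho ell', positive=True)

alpha, beta = l/2, -l/2                  # static-patch choice above
R = l*sp.cosh(rho/l)                     # the coordinate change cosh(rho/l) = R/l

gTT = (alpha*sp.exp(rho/l) + beta*sp.exp(-rho/l))**2   # = (l sinh(rho/l))^2

# time-time component of (stat_p) is -l^2 (1 - R^2/l^2)
chk1 = sp.simplify((gTT + l**2*(1 - R**2/l**2)).rewrite(sp.exp))
print(chk1)                              # 0

# -d rho^2 pulled back to R matches dR^2/(1 - R^2/l^2)
dR_drho = sp.diff(R, rho)                # dR = sinh(rho/l) d rho
chk2 = sp.simplify((-1 - dR_drho**2/(1 - R**2/l**2)).rewrite(sp.exp))
print(chk2)                              # 0
```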
It is important to emphasize that now $R\geq \ell_{\mathsf{dS}} $. The solution for the dilaton in this case is linear in the radial coordinate,
\begin{equation}
Y(x) = R~.
\end{equation}
This is the same solution previously found in \cite{Moitra:2022glw,Svesko:2022txo}, and shows that a metric of the form \eqref{stat_p} can be obtained via a suitable near-extremal limit starting from a Nariai configuration. The delicate aspect of this construction is to extend the coordinates to cover an observer that is inside the cosmological horizon; this requires starting with generic complex functions $\alpha(T)$ and $\beta(T)$, and then imposing non-trivial reality conditions on the metric on the static patch.
\section{Kicking the ultracold black hole} \label{sec:ultracold}
We finally turn to the most novel instance of our extremal cases: the ultracold black hole. Recall that this solution is obtained when all three horizons coincide
\begin{equation} \label{eq:equal-horizons}
r_-=r_+=r_c\equiv r_{\mathsf{uc}}~,
\end{equation}
and it corresponds to the point in phase space where
\begin{equation}
\begin{aligned}
r_{\mathsf{uc}}= \frac{\ell_4}{\sqrt{6}}~, \qquad
Q_{\mathsf{uc}}^2 = \frac{\ell_4^2}{12}~, \qquad
M_{\mathsf{uc}}^2= \frac{2\ell_4^2}{27}~.
\label{UCext}
\end{aligned}
\end{equation}
This is the first peculiarity of this solution: while in the previous cases extremality can be obtained by choosing different values of $r_0$ or $r_{\mathsf{n}}$ (thereby different values of $M$ and $Q$ according to \eqref{ZMRomans} and \eqref{ZMN1}), the ultracold case is more constrained: the extremal solution corresponds to a single point. The point is located at the tip of the Shark Fin in Fig.\,\ref{SharkFin}: one sees immediately then that moving horizontally (namely, raising the mass, keeping the charge fixed) corresponds to going out of the shaded area, and encountering a naked singularity.
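The values \eqref{UCext} follow in a few lines from the factorized blackening factor; a sketch assuming the standard RN-dS$_4$ form $V(r)=1-2M/r+Q^2/r^2-r^2/\ell_4^2$, for which $r^2V(r)$ is a quartic with vanishing cubic term:

```python
import sympy as sp

r, a = sp.symbols('r a', positive=True)   # a plays the role of r_uc

# triple root at r = a and fourth root at -3a, so the r^3 term of r^2 V vanishes
P = sp.expand((r - a)**3*(r + 3*a))

# match r^2 V = -r^4/l4^2 + r^2 - 2 M r + Q^2 term by term
l4sq = -P.coeff(r, 2)
Q2 = -P.coeff(r, 0)/l4sq
M  = P.coeff(r, 1)/(2*l4sq)

print(sp.simplify(l4sq - 6*a**2))         # 0: r_uc = l4/sqrt(6)
print(sp.simplify(Q2 - l4sq/12))          # 0: Q_uc^2 = l4^2/12
print(sp.simplify(M**2 - 2*l4sq/27))      # 0: M_uc^2 = 2 l4^2/27
```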
\begin{wrapfigure}{r}{0.38\textwidth}
\centering
\includegraphics[width=\linewidth]{"SF_tip"}
\caption{Close-up of Fig.\,\ref{SharkFin}, near the ultracold black hole.}
\label{sharkTip}
\end{wrapfigure}
Our strategy for ``heating up'' the ultracold solution will then be different: we should work in an ensemble that allows charge and mass to vary, while keeping us inside the Shark Fin. In other words, we should allow a near-extremal deformation that moves the solution downwards along the red line displayed in Fig.\,\ref{sharkTip}.
The next subsections are devoted to explaining how to achieve this, and what are the consequences of this procedure. That is, we will kick the ultracold black hole away from extremality. We will first investigate the consequences at the level of black hole thermodynamics and the near-horizon geometry. We will then carry out the holographic analysis from the two-dimensional perspective, by analyzing the perturbations around Mink$_2 \times S^2$; we will also show how they match with the black hole response.
\subsection{Near-extremality: thermodynamics and geometry}\label{sec:near-uc}
\paragraph{Thermodynamics.} As familiar by now, the deviation away from extremality is given by introducing $\lambda$ to split the coincident horizons in \eqref{eq:equal-horizons} by a small amount. In the context of a thermodynamic analysis, we will first investigate how $Q$ and $M$ respond to a deviation away from the ultracold solution. We start by sending
\begin{equation}
r_-= r_{\mathsf{uc}}-w_1\lambda+O(\lambda^2)~,\qquad r_+= r_{\mathsf{uc}} - w_2 \lambda+O(\lambda^2)~,
\label{rpmw1w2}
\end{equation}
where $w_2<w_1$ are constant coefficients and finite as $\lambda\to 0$.\footnote{At this stage we only ask that $w_2<w_1$, so that $r_-<r_+$ at leading order in $\lambda$.} We will also be holding fixed $\ell_4$, and hence from \eqref{constraints} we can quantify the response of $Q$ and $M$; we find,
\begin{equation}
\begin{aligned}
Q=Q_{\mathsf{uc}}-\frac{2 }{\sqrt{3} \, \ell_4}(w_1^2+w_1w_2+w_2^2)\lambda^2 +O(\lambda^3)~,\\
\label{z}
M = M_\mathsf{uc}-\frac{\sqrt{2}}{\sqrt{3}\, \ell_4} (w_1^2+w_1 w_2+w_2^2) \, \lambda ^2+O(\lambda ^3)~.
\end{aligned}
\end{equation}
This clearly illustrates the basic differences relative to the cold and Nariai black holes. First, the leading response is of order $\lambda^2$, rather than $\lambda$, for arbitrary $w_{1,2}$. Second, there are no real values of $w_1$ and $w_2$ that hold $Q$ fixed at leading order. This is compatible with the intuition gathered from Fig.\,\ref{sharkTip}: for any value of $w_{1,2}$ and small $\lambda$, the deviations \eqref{z} stay within the shark fin.
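The expansions \eqref{z} can be reproduced symbolically by demanding that $\ell_4$ stays fixed order by order. The sketch below assumes the standard RN-dS$_4$ factorization of $r^2V(r)$ into a quartic whose roots sum to zero, parameterizes $r_c = a + c\lambda + d\lambda^2$ with $a=r_{\mathsf{uc}}$, and also recovers the displacement of $r_c$ quoted below in \eqref{rcw1w24}:

```python
import sympy as sp

lam, w1, w2, r, a = sp.symbols('lambda w1 w2 r a', positive=True)
c, d = sp.symbols('c d')                # r_c = a + c lam + d lam^2, to be fixed

rm, rp = a - w1*lam, a - w2*lam         # eq. (rpmw1w2)
rc = a + c*lam + d*lam**2
ro = -(rm + rp + rc)                    # roots of the quartic r^2 V(r) sum to zero

P = sp.expand((r - rm)*(r - rp)*(r - rc)*(r - ro))
l4sq = -P.coeff(r, 2)                   # matching r^2 V = -r^4/l4^2 + r^2 - 2M r + Q^2
M  = P.coeff(r, 1)/(2*l4sq)
Q2 = -P.coeff(r, 0)/l4sq

# hold l4 fixed: l4^2 = 6 a^2, imposed order by order in lam
poly = sp.Poly(sp.expand(l4sq - 6*a**2), lam)
sol_c = sp.solve(poly.coeff_monomial(lam), c)[0]
sol_d = sp.solve(poly.coeff_monomial(lam**2).subs(c, sol_c), d)[0]
print(sol_c)                            # w1 + w2

W, l4 = w1**2 + w1*w2 + w2**2, sp.sqrt(6)*a
dQ = sp.series(sp.sqrt(Q2.subs({c: sol_c, d: sol_d})), lam, 0, 3).removeO() \
     - a/sp.sqrt(2)
dM = sp.series(M.subs({c: sol_c, d: sol_d}), lam, 0, 3).removeO() - 2*a/3

print(sp.simplify(dQ + 2*W*lam**2/(sp.sqrt(3)*l4)))           # 0: matches (z)
print(sp.simplify(dM + sp.sqrt(2)*W*lam**2/(sp.sqrt(3)*l4)))  # 0: matches (z)
```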
Next, it is instructive to quantify how the temperature at each horizon responds to these deviations. For this, it is first useful to record that
\begin{equation}
r_c= r_{\mathsf{uc}}+(w_1+w_2)\lambda+O(\lambda^2)~.
\label{rcw1w24}
\end{equation}
With this we ensure that \eqref{rpmw1w2} and \eqref{rcw1w24} leave $\ell_4$ fixed at leading order in $\lambda$. The responses of the cosmological and outer horizons to the deviations in \eqref{rpmw1w2} and \eqref{rcw1w24} give
\begin{equation}
\begin{aligned}
T_c&=\frac{\sqrt{6}}{\pi\, \ell_4^3} \left(2w_2+w_1\right)\left(2w_1+w_2\right) \lambda^2+O(\lambda^3)~,\\
T_+&= \frac{\sqrt{6}}{\pi \ell_4^3}\left(w_1+2w_2\right)\left(w_1-w_2\right)\lambda^2+O(\lambda^3)~.
\label{TUC}
\end{aligned}
\end{equation}
Again this is very different from our previous situations: for the cold and Nariai black holes the response in the temperature was $T_\mathsf{h}\sim O(\lambda)$, while here we obtain $T_\mathsf{h}\sim O(\lambda^2)$.
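The quadratic scaling of both temperatures can be checked along the same lines as before; the sketch below drops the $O(\lambda^2)$ pieces of \eqref{rpmw1w2}, which only affect higher orders:

```python
import sympy as sp

lam, w1, w2, r, a = sp.symbols('lambda w1 w2 r a', positive=True)

# horizons per (rpmw1w2) and (rcw1w24); a plays the role of r_uc
rm, rp = a - w1*lam, a - w2*lam
rc = a + (w1 + w2)*lam
ro = -(rm + rp + rc)

P = sp.expand((r - rm)*(r - rp)*(r - rc)*(r - ro))
l4sq = -P.coeff(r, 2)                   # = 6 a^2 + O(lambda^2)
V = -P/(l4sq*r**2)

Tc = sp.series(-sp.diff(V, r).subs(r, rc)/(4*sp.pi), lam, 0, 3).removeO()
Tp = sp.series( sp.diff(V, r).subs(r, rp)/(4*sp.pi), lam, 0, 3).removeO()

l4 = sp.sqrt(6)*a                       # so that a = r_uc = l4/sqrt(6)
print(sp.simplify(Tc - sp.sqrt(6)*(2*w2 + w1)*(2*w1 + w2)*lam**2/(sp.pi*l4**3)))  # 0
print(sp.expand(Tp).coeff(lam, 1))      # 0: T_+ also starts at O(lambda^2)
```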
Actually, the quantities that respond at leading order in this scenario are the electric potential and the entropy. In particular we find
\begin{equation}
\Phi_{\mathsf{c}}= \frac{Q}{r_c}= \frac{1}{\sqrt{2}} - \frac{\sqrt3}{\ell_4}(w_1+w_2) \lambda +O(\lambda^2)~,
\label{phii}
\end{equation}
and the area law at $r_c$ is
\begin{equation}
S_c=\frac{\pi \, \ell_4^2 }{6}+\sqrt{\frac{2}{3}} \pi \, \ell_4 (w_1 +w_2)\lambda +O(\lambda^2)\,.
\label{suc}
\end{equation}
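Both expansions follow from $r_c$ in \eqref{rcw1w24} together with $Q=Q_{\mathsf{uc}}+O(\lambda^2)$ from \eqref{z}; a quick check:

```python
import sympy as sp

lam, w1, w2, a = sp.symbols('lambda w1 w2 a', positive=True)
l4 = sp.sqrt(6)*a                       # a = r_uc

rc = a + (w1 + w2)*lam                  # eq. (rcw1w24)
Q = a/sp.sqrt(2)                        # Q = Q_uc + O(lambda^2), cf. (z)

Phi_c = sp.series(Q/rc, lam, 0, 2).removeO()
S_c = sp.expand(sp.pi*rc**2)

print(sp.simplify(Phi_c - 1/sp.sqrt(2) + sp.sqrt(3)*(w1 + w2)*lam/l4))   # 0: eq. (phii)
print(sp.simplify(S_c.coeff(lam, 1)
                  - sp.sqrt(sp.Rational(2, 3))*sp.pi*l4*(w1 + w2)))      # 0: eq. (suc)
```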
From these expressions it is natural to advocate that
the change in entropy at order $\lambda$ is driven by a change of chemical potential rather than by a change in temperature (which is subleading). This is reminiscent of other studies of two-dimensional gravity theories in flat spacetime \cite{Afshar:2019axx}, where an infinite value for the specific heat
\begin{equation} \label{spec_heat}
C_s^{-1} = \frac{1}{T} \left(\frac{dT}{dS} \right) \bigg|_{Q=const}\,
\end{equation}
was found, since the change in temperature is independent of the change in entropy. The subsequent portions are devoted to showing how to retrieve this feature via an analysis of the IR background for the ultracold case.
\paragraph{Near-horizon geometry.}
After displacing the location of the horizons following \eqref{rpmw1w2} and \eqref{rcw1w24}, we will now construct the near-horizon geometry. To keep expressions simple and succinct, and without loss of generality, we will make a specific choice of $w_{1,2}$: setting $w_1 =\epsilon$ and $w_2=0$, we have
\begin{equation}
r_- = r_{\mathsf{uc}} - \epsilon \, \lambda~,
\qquad r_+ = r_{\mathsf{uc}}~, \qquad
r_ c = r_{\mathsf{uc}} + \epsilon \,\lambda ~.
\end{equation}
This is different from the deviation used in \eqref{bro}, since in that instance the solution was still extremal. The coordinate transformation we will use is
\begin{equation}\label{eq:dec-ucold}
\begin{aligned}
r&= r_{\mathsf{uc}} -{R_0}\lambda + \, \lambda ^{3/2} \sqrt{\frac{2R_0^3 }{3 r_\mathsf{uc}^{3}}} \, R ~, \\
t &= \sqrt{\frac{3 r_\mathsf{uc}^{3}}{2 R_0^3} } \frac{T}{\lambda ^{3/2} } ~,
\end{aligned}
\end{equation}
where $R_0$ is an arbitrary constant. With this choice the resulting near-horizon background is
\begin{equation} \label{eq:ext-ucold-2}
ds^2=-\cfrac{R_0^2-\epsilon^2}{ R_0^2}\, \mathrm{d} T^2+ \cfrac{R_0^2}{R_0^2-\epsilon^2}\, \mathrm{d} R^2+r_\mathsf{uc}^2 \,\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2\right)~,
\end{equation}
and
\begin{equation}
F= \pm \frac{\sqrt{3}}{\ell_4} \mathrm{d} T \wedge \mathrm{d} R~.
\end{equation}
For $\epsilon =0$ we recover \eqref{eq:ext-ucold-1} and \eqref{eq:ext-ucoldF-1}, as expected. However, notice that the presence of $\epsilon$ is trivial: it can be completely reabsorbed by a constant rescaling of $T$ and $R$. In this context it is clear that a deviation from extremality is {\it not} heating up Mink$_2$.\footnote{Moreover, had we obtained in \eqref{eq:ext-ucold-2} a finite-temperature solution in the IR, it would imply that in the UV $T_\mathsf{h}\sim O(\lambda^{3/2})$ due to \eqref{eq:dec-ucold}. However, we know that in the UV $T_\mathsf{h}\sim O(\lambda^{2})$, as discussed in \eqref{TUC}.}
For the comparison with the subsequent holographic analysis it is useful to transform the coordinates to
\begin{equation}\label{eq:tuRr}
T = u +R~, \qquad R = \hat{r}~,
\end{equation}
which brings the metric $\overline{g}_{ab}$ in the Eddington-Finkelstein form
\begin{equation}\label{eq:yy1}
ds^2 = - \mathrm{d} u^2 -2 \mathrm{d} u \mathrm{d}\hat{r}~.
\end{equation}
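The change of coordinates \eqref{eq:tuRr} is a one-line check in components:

```python
import sympy as sp

# dT = du + d rhat and dR = d rhat, written in the basis (du, d rhat)
dT = sp.Matrix([1, 1])
dR = sp.Matrix([0, 1])

# ds^2 = -dT^2 + dR^2 as a symmetric matrix in the (u, rhat) coordinates
g = -dT*dT.T + dR*dR.T
print(g)   # Matrix([[-1, -1], [-1, 0]]): ds^2 = -du^2 - 2 du d rhat
```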
Note that in \eqref{eq:tuRr} we have rescaled \eqref{eq:ext-ucold-2} to have the usual normalization of Mink$_2$. The first corrections in $\lambda$ are also simple to cast. With our choice of coordinates we have
\begin{equation}
\begin{aligned} \label{expans_UC}
ds^2&= \left(\bar{g}_{ab} +\sqrt{\lambda} \, \tilde{g}_{ab} + \lambda\, h_{ab}\right) \mathrm{d} x^a \mathrm{d} x^b+ \left(\Phi_0^2 + 2\lambda \Phi_0 \, Y \right)\left(\mathrm{d}\theta^2+\sin^2\theta\mathrm{d}\phi^2 \right)+\cdots ~,
\\
A &= \bar A_{a} \mathrm{d} x^a + \lambda\, {\cal A}_a \mathrm{d} x^a + \cdots~.
\end{aligned}
\end{equation}
One difference, relative to cold and Nariai, is that we get a correction that grows like $\lambda^{1/2}$. This is simply an artifact of our decoupling limit; it is straightforward to show that $\tilde{g}_{ab}$ is pure gauge. For the linear corrections we find
\begin{equation}\label{eq:yy2}
\Phi_0= r_{\mathsf{uc}}~,\qquad Y(x)= -R_0~.
\end{equation}
In contrast to cold and Nariai, here we have a constant profile for the deformation.
\subsection{Holographic analysis}
In this section we will account for the unusual thermodynamic behaviour of the ultracold backgrounds by performing the holographic analysis of Mink$_2$. This discussion follows the treatment of \cite{Afshar:2019axx,Godet:2021cdl,Grumiller:2021cwg}, which considered theories of two-dimensional dilaton-gravity that admit a flat space vacuum. The benefit in our case is that we have a black hole embedding for Mink$_2$, and hence we can systematically compare and contrast the two-dimensional results with a four-dimensional realization.
Our starting point is to consider as a background solution the two-dimensional Minkowski space metric. This is the solution described in Sec.\,\ref{sec:JTreduction}, where
\begin{equation}\label{eq:bbuc}
\Phi_0^2 = \frac{\ell_4^2}{6}~, \qquad Q^2= Q^2_{\mathsf{uc}}=\frac{\Phi_0^2}{2}\,.
\end{equation}
We will cast a locally Mink$_2$ space in Eddington-Finkelstein coordinates
\begin{equation} \label{edd-fink}
ds^2 = - 2 \left( \mathcal{P}(u) \hat{r} + \mathcal{T} (u) \right) \mathrm{d} u^2-2 \mathrm{d} u \mathrm{d} \hat{r}~,
\end{equation}
and the field strength is given by the volume form of this space and normalized according to \eqref{eq:F1}. For most of the discussion we leave the functions $\mathcal{P}$ and $\mathcal{T}$ general and dependent on $u$. However for the near-horizon geometry displayed in the previous section, they are both constant: we will show explicit solutions specializing to constant values of $\mathcal{P}$ and $\mathcal{T}$ which we denote by ${\cal P}_0$ and ${\cal T}_0$.
In the following we will discuss certain dynamical aspects that arise when the background solution is deformed away from its fixed point. We will first solve for linear perturbations around this background, and then quantify their imprint on the renormalized action. An important aspect will be to contrast basic properties here against those for AdS$_2$ in Sec.\,\ref{sec:hol-cold}: the interplay between $Y(x)$ and the background solution will not play a central role for Mink$_2$.
\subsubsection{Perturbations around \texorpdfstring{Mink$_2$}{Mink2} } \label{pert_UC}
The aim here is simple: to solve the linear equations \eqref{massaged1} and \eqref{massaged2} when the background is given by \eqref{eq:bbuc}-\eqref{edd-fink}. Starting from \eqref{massaged1}, which determines the dynamics of the dilaton, we find
\begin{equation}
\begin{aligned}\label{UCY}
\partial_{\hat{r}}^2 Y (x) & = 0 \\
\partial_u \partial_{\hat{r}} Y - \mathcal{P}(u) \partial_{\hat{r}} Y & = \frac{ \delta Q}{\sqrt{8}Q^2} \\
\left( {\hat{r}} \mathcal{P}\,'(u) + \mathcal{T}\,'(u) \right) \partial_{\hat{r}} Y - \mathcal{P}(u) \partial_u Y +2 ({\hat{r}} \mathcal{P}(u) + \mathcal{T}(u)) \, \partial_u \partial_{\hat{r}} Y - \partial_u^2 Y & = 0~.
\end{aligned}
\end{equation}
Notice that we have used \eqref{eq:bbuc} to simplify these expressions. For metric perturbations, we will make a choice of gauge where $h_{\hat{r}u}=h_{\hat{r}\hat{r}}=0$; this greatly simplifies \eqref{massaged2}, leaving us with
\begin{equation}
\partial_{\hat{r}}^2 h_{uu} - \frac{\sqrt{2}}{Q^3} \, Y(x) +\frac{3 \, \delta Q}{2 Q^3} = 0~. \label{UCmetricEQ}
\end{equation}
The solutions to these equations are simple to decode. From the first equation in \eqref{UCY} we read off the radial profile of $Y(x)$, which is
\begin{equation}
Y = a(u) \hat{r} + b(u)~.
\end{equation}
The functions $a(u)$ and $b(u)$ are determined by the last two equations in \eqref{UCY}, and these lead to
\begin{equation}
\begin{aligned}\label{eq:ab1}
a'(u)- a(u) \mathcal{P}(u)- \frac{\delta Q}{{2 \sqrt2} Q^2} =0 ~,\\
{\cal T}'(u) a(u)-{\cal P}(u)b'(u)+2{\cal T}(u) a'(u)-b''(u)=0~.
\end{aligned}
\end{equation}
The inhomogeneous solution to \eqref{UCmetricEQ} is given by
\begin{equation}
h_{uu} =\hat{r}^3 \frac{ a(u)}{3 \sqrt2 \, Q^3} + \hat{r}^2\frac{ \, b(u)}{\sqrt2 \, Q^3} -\hat{r}^2\frac{3 \, \delta Q }{4\, Q^3}~.
\end{equation}
It is useful to explicitly record two classes of solutions to the equations displayed above. First, when ${\cal P}(u)=0$, and ${\cal T}(u)=1$, the solutions to \eqref{eq:ab1} are
\begin{equation}\label{eq:ysol11}
a(u)= \frac{\delta Q}{{2 \sqrt2} Q{^2}} \, u +a_0~, \qquad b(u)=\frac{\delta Q}{{2 \sqrt2} Q{^2}}\, u^2 + b_1 u + b_0~,
\end{equation}
where $a_0$ and $b_{1,0}$ are arbitrary constants. In comparison to \eqref{eq:yy1} and \eqref{eq:yy2}, relevant for the black hole, we have $\delta Q=a_0=b_1=0$. Then, $b_0=R_0$ is the only non-trivial component of the dilaton. The second solution we will record explicitly is one where the metric functions are constant: ${\cal P}(u)={\cal P}_0$ and ${\cal T}(u)={\cal T}_0$, with ${\cal P}_0$ non-zero. This gives
\begin{equation}
\begin{aligned} \label{YsolTOTUC}
a(u)&= -\frac{1}{{2 \sqrt2} {\cal P}_0}\frac{\delta Q}{ Q{^2}} +a_1 e^{{\cal P}_0 u}~,\\
b(u)&= a_1 \frac{{\cal T}_0}{{\cal P}_0} e^{{\cal P}_0 u}+ b_2\, e^{-{\cal P}_0 u} +b_0~.
\end{aligned}
\end{equation}
Here $a_{1}$, $b_{2}$ and $b_0$ are arbitrary constants.
The solution has the same form as that found in \cite{Godet:2021cdl} in models of $\widehat{\rm CGHS}$ gravity.
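As a quick consistency check (not part of the original derivation), both classes of solutions above can be verified against \eqref{eq:ab1} with a computer algebra system; the symbol names below are ad hoc stand-ins for $\delta Q$, ${\cal P}_0$, ${\cal T}_0$ and the integration constants:

```python
import sympy as sp

u = sp.symbols("u", real=True)
Q, dQ, P0, T0, a0, a1, b0, b1, b2 = sp.symbols("Q dQ P0 T0 a0 a1 b0 b1 b2")

def residuals(a, b, P, T):
    # Left-hand sides of the two equations in (ab1); both should simplify to zero.
    r1 = sp.diff(a, u) - a*P - dQ/(2*sp.sqrt(2)*Q**2)
    r2 = sp.diff(T, u)*a - P*sp.diff(b, u) + 2*T*sp.diff(a, u) - sp.diff(b, u, 2)
    return sp.simplify(r1), sp.simplify(r2)

# Case P(u) = 0, T(u) = 1: the solutions of (ysol11)
a_sol = dQ/(2*sp.sqrt(2)*Q**2)*u + a0
b_sol = dQ/(2*sp.sqrt(2)*Q**2)*u**2 + b1*u + b0
print(residuals(a_sol, b_sol, sp.Integer(0), sp.Integer(1)))  # (0, 0)

# Case P(u) = P0, T(u) = T0 constant: the solutions of (YsolTOTUC)
a_c = -dQ/(2*sp.sqrt(2)*P0*Q**2) + a1*sp.exp(P0*u)
b_c = a1*T0/P0*sp.exp(P0*u) + b2*sp.exp(-P0*u) + b0
print(residuals(a_c, b_c, P0, T0))                            # (0, 0)
```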
At this stage it is worth remarking on a feature of static solutions to the perturbations, i.e., those that are independent of $u$. From both \eqref{eq:ysol11} and \eqref{YsolTOTUC} we see that $Y(x)$ becomes independent of the background metric at fixed charge,\footnote{${\cal P}_0$ only enters in $a(u)$ via $\delta Q$ in \eqref{YsolTOTUC}.} which is very different from the analogous condition for AdS$_2$ in Sec.\,\ref{sec:hol-cold}. This can be taken as an indication that there is a strange interplay between the deformation $Y(x)$ and heating up Mink$_2$. Also, as done in \cite{Godet:2021cdl}, if we impose the boundary condition
\begin{equation}
Y(x) \xrightarrow[\hat{r} \to \infty]{} \Phi_r \hat{r}~,
\end{equation}
where $\Phi_r$ is fixed, arbitrary charge variations are not allowed. We would have to require
\begin{equation}\label{eq:bc-godet}
\delta Q = -{2 \sqrt2} Q^2 {\cal P}_0 \Phi_r ~.
\end{equation}
This will have an imprint in the thermodynamics discussed below.
\subsubsection{Thermodynamics around \texorpdfstring{Mink$_2$}{Mink2}}
With the solution for the perturbations at hand, we can compute thermodynamic quantities associated to the two-dimensional black hole, with the aim of connecting with the near-extremal thermodynamics of the ultracold black hole in Sec.\,\ref{sec:near-uc}. For a static two-dimensional solution we have to set to zero all $u$-dependent terms. For the IR background, this means that we have
\begin{equation} \label{edd-fink1}
ds^2 = - 2 \left( \mathcal{P}_0 \hat{r} + \mathcal{T}_0 \right) \mathrm{d} u^2-2 \mathrm{d} u \mathrm{d} \hat{r}~.
\end{equation}
We will interpret this as a ``black hole'' solution, whose horizon is at $\hat{r}_h=-{\cal T}_0/{\cal P}_0$. The associated Hawking temperature is \cite{Afshar:2019axx}
\begin{equation}
T_{\rm 2D} = \frac{{\cal P}_0}{2 \pi}\,,
\end{equation}
which is defined as the surface gravity of the Killing vector $k=\partial_u$.
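The temperature can be checked symbolically, assuming the standard surface-gravity formula $\kappa = f'(\hat r_h)/2$ for a metric of the form $ds^2=-f(\hat r)\,\mathrm{d} u^2-2\,\mathrm{d} u\,\mathrm{d}\hat r$, which we take as given:

```python
import sympy as sp

rhat, P0, T0 = sp.symbols("rhat P0 T0", real=True)
f = 2*(P0*rhat + T0)                            # -g_uu of the metric (edd-fink1)
r_h = sp.solve(f, rhat)[0]                      # horizon: f(r_h) = 0
kappa = (sp.diff(f, rhat)/2).subs(rhat, r_h)    # assumed formula: kappa = f'(r_h)/2
T_2D = kappa/(2*sp.pi)
print(r_h, T_2D)                                # -T0/P0  P0/(2*pi)
```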
A static configuration for the dilaton, with ${\cal P}_0\neq0$, means setting $a_1=0$ and $b_2=0$ in \eqref{YsolTOTUC}, which gives
\begin{equation}\label{eq:Ystatic-uc}
Y(x)= -\frac{1}{{2 \sqrt2}{\cal P}_0}\frac{\delta Q}{Q{^2}} \hat{r} + b_0~.
\end{equation}
Notice that the boundary condition \eqref{eq:bc-godet} can now be interpreted as $\delta Q\sim T_{\rm 2D}$. We can read off the entropy from the value of the dilaton $Y$ evaluated at the horizon, for which we obtain
\begin{equation}
\begin{aligned}\label{eq:entropy-mink2}
S_{2D} &= \pi \Phi(x)^2_{\rm horizon}\\
&= \pi \Phi_0^2 + 2\pi \Phi_0 \lambda Y(x)_{\rm horizon}+\cdots\\
&= \pi \Phi_0^2 + 2\pi \Phi_0 b_0 \lambda - \frac{{\cal T}_0}{T_{\rm 2D}} \Phi_0 \Phi_r \lambda + \cdots
\end{aligned}
\end{equation}
where in the last line we replaced \eqref{eq:bc-godet}. The term controlled by $b_0$ is consistent with the behaviour of the 2d perturbations described by models of dilaton gravity in flat spacetime such as $\widehat{\rm CGHS}$ \cite{Afshar:2019axx,Grumiller:2021cwg}. Indeed, when compared with the ultracold black hole entropy \eqref{suc}, we see no dependence of the entropy variation on the change in temperature, hence $C_s$, as defined in \eqref{spec_heat}, is infinite.
The last term in \eqref{eq:entropy-mink2} is clearly strange and should be treated carefully. First, notice that there is an important order of limits: if $T_{\rm 2D}=0$, i.e., ${\cal P}_0=0$, we need to consider the solutions in \eqref{eq:ysol11} and hence only $b_0$ gives a contribution to the entropy. For this reason we do not see this contribution in \eqref{suc}. There are at least three ways to ``normalize'' this divergent behaviour for ${\cal P}_0\neq0$: one could set ${\cal T}_0=0$ as done in \cite{Godet:2021cdl}, set $\delta Q=0$, or modify the boundary condition \eqref{eq:bc-godet} such that $\delta Q\sim T_{\rm 2D}^2$.\footnote{Formally, it would be interesting to modify \eqref{eq:bc-godet} and study more carefully its repercussions. Unfortunately we don't see an indication that a modified boundary condition for $\delta Q$ is appropriate for the ultracold black hole, so we leave this for future work. }
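The algebra behind the last line of \eqref{eq:entropy-mink2} can be verified mechanically by combining \eqref{eq:bc-godet} and \eqref{eq:Ystatic-uc} (the symbol names below are ad hoc stand-ins):

```python
import sympy as sp

lam, Phi0, Phir, Q, P0, T0, b0, rhat = sp.symbols("lam Phi0 Phir Q P0 T0 b0 rhat")
T2D = P0/(2*sp.pi)                           # Hawking temperature of (edd-fink1)
dQ = -2*sp.sqrt(2)*Q**2*P0*Phir              # boundary condition (bc-godet)
Y = -dQ/(2*sp.sqrt(2)*P0*Q**2)*rhat + b0     # static dilaton (Ystatic-uc)
Y_h = Y.subs(rhat, -T0/P0)                   # evaluated at the horizon
S = sp.pi*Phi0**2 + 2*sp.pi*Phi0*lam*Y_h     # first two lines of (entropy-mink2)
S_paper = sp.pi*Phi0**2 + 2*sp.pi*Phi0*b0*lam - (T0/T2D)*Phi0*Phir*lam
print(sp.simplify(S - S_paper))              # 0
```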
To complete the picture we perform holographic renormalization to compute the on-shell action. Replacing the solution we found in Sec.\,\ref{pert_UC} into the 2D action \eqref{eq:2daction} gives a divergent result for the on-shell action. We regulate the integral by introducing an upper limit of integration $\hat r_0$, taken to be large but finite (a UV cutoff), while the lower limit of integration is the black hole horizon located at $\hat r=\hat r_h$. To remove the divergences we add the following counterterms:
\begin{equation}
I_{\rm on-shell-uc} = I_{\rm 2D}+I_{\rm N} + I_{\rm GH} + I_{\rm MM}\,.
\end{equation}
The first term is the action \eqref{eq:2daction}. The subsequent terms are
\begin{equation}
I_{\rm N} = {{-\frac{1}{4}}} \int \mathrm{d} u \, \sqrt{-\gamma} \, \Phi (n^{a} \partial_{a} \Phi) ~, \qquad I_{\rm GH} = {\frac{1}{2}} \int \mathrm{d} u\,\sqrt{-\gamma} \, \Phi^2 \, K ~,
\end{equation}
where $\gamma_{ab}$ is the boundary metric and $n^a$ is the unit vector normal to the boundary. $I_{\rm N}$ is a standard counterterm for models of dilaton gravity in flat 2d space (see for instance \cite{Godet:2021cdl,Kar:2022sdc,Kar:2022vqy}) and $I_{\rm GH}$ is the standard Gibbons-Hawking-York term, which ensures Dirichlet boundary conditions for the metric. As usual in flat space, we have to supplement this action by the Mann-Marolf boundary term \cite{Mann:2005yr},
\begin{equation}
I_{\rm MM} = -\frac{1}{{2}} \int \mathrm{d} u\, \sqrt{-\gamma_{\rm ref}}\, \Phi^2 \, \hat{K}_{\rm ref}~,
\end{equation}
with representative
\begin{equation}
ds^2_{\rm ref} = -2\mathrm{d} u\mathrm{d} \hat{r} - 2(\mathcal{P}(u) \hat{r}_0 +\mathcal{T}(u))\mathrm{d} u^2{+ O(\lambda)}~,
\end{equation}
where $\hat r_0$ is the radial (UV) cutoff. Adding this term effectively amounts to performing a background subtraction and in this way the action is free of divergences.
The final finite expression for the renormalized action boils down to
\begin{equation}\label{onsh_intermediate}
I_{\rm on-shell-uc} = \lambda
\Phi_0 \int \mathrm{d} u \left[ \left(b(u) \mathcal{P}(u) - a(u) \mathcal{T}(u) \right) + \frac{Q \mu(u)}{\Phi_0^3} \right] +I_{\rm global}\,.
\end{equation}
A similar form for the on-shell action (which does not include the chemical potential term) was found in \cite{Godet:2021cdl}.
With \eqref{onsh_intermediate} we can now extract the entropy of the two-dimensional black hole. We will be interested in the case where $\delta Q=0$, and we will also take ${\cal P}_0\neq0$: these choices will facilitate comparison with the ultracold black hole and we will be able to take ${\cal P}_0\to0$ smoothly. Evaluating \eqref{onsh_intermediate} on \eqref{edd-fink1}-\eqref{eq:Ystatic-uc}, the Euclidean action then gives,
\begin{equation}\label{eq:on-shell-uc11}
I_{\rm on-shell-uc} = -2\pi \Phi_0 b_0 \lambda+I_{\rm global}\,,
\end{equation}
where we set $\delta Q=0$.
This expression clearly reflects that the temperature does not affect the on-shell action, and hence the entropy, as we expected. Using the standard thermodynamic relation
\begin{equation}
S = \beta \left( \frac{\partial I}{ \partial \beta} \right) -I ~,
\end{equation}
we find that $S_{\rm 2D}=-I_{\rm on-shell-uc}$ is in agreement with \eqref{eq:entropy-mink2} for fixed charge, and agrees with the response of the ultracold black hole. We emphasise that this is very different from the outcome in Sec.\,\ref{sec:hol-cold}, where temperature is the leading effect in the deformation.
\section*{Acknowledgements}
We thank Dio Anninos, Tarek Anous, Stephane Detournay, Albert Law, Andrew Svesko, and Evita Verheijden for interesting discussions, and collaborations on related topics.
The work of AC and CT was supported by the Delta ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). The work of AC has been partially supported by STFC consolidated grant ST/T000694/1. The work of CT is supported by the Marie Sk\l odowska-Curie Global Fellowship (ERC Horizon 2020 Program) SPINBHMICRO-101024314. FM acknowledges financial support from the European Research Council (grant BHHQG-101040024) funded by the
European Union. Views and opinions expressed are however those of the author(s) only and do not
necessarily reflect those of the European Union or the European Research Council. Neither the European Union nor the granting authority can be held responsible for them.
\bibliographystyle{JHEP-2.bst}
\section{Out-of-core model}
\Abacus is a cosmological $N$-body solver that integrates particle trajectories under mutual self-gravity in 6D phase-space from the smooth, nearly-uniform condition of the early universe to the clustered richness of filaments, halos, and voids seen in galaxy surveys today. The depth and breadth of upcoming galaxy surveys like DESI, the Dark Energy Spectroscopic Instrument, demand simulations with hundreds of billions or trillions of particles \cite{desi}.
Particle-based simulation codes are often memory limited, since the natural dimensions for improvement are to simulate physics at smaller scales or in a larger domain---either way, using more particles. The floating-point throughput provided by GPUs has made processing such increasing numbers of particles tractable; in many cases, the challenge is now efficiently storing the state, and on distributed memory systems, communicating state between nodes.
\Abacus is designed to support simulations whether they fit in memory or not. In the latter case, also known as ``out of core'', the state is buffered on disk. A side-effect of this model is that the state \textit{is} the checkpoint: the only data flow between simulation time steps is through files on disk. This enforces the completeness of checkpoints.
When running on a distributed-memory system where the state \textit{does} fit in memory, we preserve this file-oriented model by using POSIX shared memory, or ramdisk\footnote{We are using the term ``ramdisk'' colloquially, since, in Linux kernel parlance, a ramdisk is a ``raw'' block device on top of which a file system may be created. We are using the related, but distinct, kernel tmpfs, which is more widely available.} \cite{shm_overview,tmpfs}. The ramdisk is exposed as a directory, so checkpointing consists of launching a file-system copy in between time steps. The simulation ``reads'' the shared memory with a memory map, thus avoiding a copy that a file-system read from ramdisk would incur. However, this mapping procedure has its own overheads, which we will discuss below.
\section{Simulation flow}
\Abacus is divided into a top-level driver code and a simulation executable. The driver code calls the executable in a loop, once per time step. The task of the executable is to load the state at time $t$, called the \textit{read state}, compute forces on particles, update their kinematics, and write a new state at time $t+1$, called the \textit{write state}. The executable is idempotent, relying on the top-level driver code to rename the write state to the read state between invocations.
The fact that a new executable is invoked for each time step, loading the state anew, ensures that the state \textit{is} the checkpoint. There can be no side-channel information that flows through out-of-band allocations; any information that persists across time steps must be part of the state files. This model is a strong enforcement of the completeness and correctness of checkpointing. Furthermore, the executable can remain oblivious to the concept of checkpointing, leaving the driver code to handle the file-oriented tasks to which it is well-suited.
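A minimal sketch of this driver loop, in Python with hypothetical names (the actual \Abacus driver is more elaborate):

```python
import os
import shutil

def run_simulation(n_steps, workdir, step_fn):
    """Hypothetical driver loop: step_fn stands in for the simulation executable.
    It reads the state from read_dir and writes the new state to write_dir."""
    read_dir = os.path.join(workdir, "read")
    write_dir = os.path.join(workdir, "write")
    for _ in range(n_steps):
        os.makedirs(write_dir)
        step_fn(read_dir, write_dir)       # may crash: read_dir stays untouched
        old = read_dir + ".old"
        os.rename(read_dir, old)           # atomic within one filesystem
        os.rename(write_dir, read_dir)     # promote write state -> read state
        shutil.rmtree(old)                 # previous state no longer needed
```

A checkpoint then amounts to copying the read-state directory to stable storage between iterations of this loop.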
The state files themselves are raw binary representations of the phase-space particle information, with files divided into planar ``slabs'' of the simulation volume. Within a slab, particles are ordered by cell (the atomic unit of our domain decomposition). These files mirror the memory model used by the simulation. Metadata is stored in a separate ASCII file in the state directory.
Each slab may have multiple \textit{slab types}, each stored in a separate file and representing one field, such as the positions, velocities, and particle flags. When a given slab is requested by the code, the type is specified as well. The request is processed by the \textit{slab buffer}, which computes the file path and determines whether it resides on ramdisk. If so, the slab buffer instructs the \textit{arena allocator} to map the slab directly. If not, the arena allocator makes a new allocation, and a file I/O request is passed to the I/O subsystem to read into that allocation.
The determination of whether a path resides on the ramdisk is done by string comparison of the path prefix. Other, more robust methods were deemed either too complex or too expensive.
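In code, this check is essentially a prefix comparison, sketched here in Python with an assumed mount point (\Abacus implements the equivalent in C++):

```python
RAMDISK_PREFIX = "/dev/shm/"   # assumed tmpfs mount point

def is_on_ramdisk(path: str) -> bool:
    # A plain string comparison: cheap, and robust enough in practice compared
    # with, e.g., querying the filesystem type of every path.
    return path.startswith(RAMDISK_PREFIX)
```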
\section{Shared Memory}
POSIX shared memory is exposed on Linux via a tmpfs file system. By default, it is mounted at \texttt{/dev/shm/} and has a capacity of half of the system's physical memory. Files created in that directory have ``kernel persistence'', meaning they stay in memory until they are deleted or the system shuts down.
One way to use this ramdisk is as if it were an ordinary storage device, reading and writing with standard file I/O interfaces. This will speed up I/O in most cases, but it will consume extra memory and the I/O will only be as fast as a memory copy. This can be slow for large allocations---especially if the I/O is only using a single thread on a system with multiple sockets---and exerts unnecessary memory bandwidth pressure.
We instead opt to memory map the ramdisk files. This can be thought of as getting a pointer directly into shared memory, avoiding any memory movement. This is accomplished by getting a file descriptor with \texttt{open()}, setting the size with \texttt{ftruncate()} (if writing), and mapping the shared pages into user space with \texttt{mmap()}.
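In sketch form, the sequence looks as follows, with a temporary directory standing in for \texttt{/dev/shm/} so that the example runs anywhere (\Abacus performs the equivalent calls in C++ against the ramdisk files):

```python
import mmap
import os
import tempfile

def map_slab(path, nbytes, create=False):
    """Map a (shared-memory) file into the address space without copying it."""
    flags = os.O_RDWR | (os.O_CREAT if create else 0)
    fd = os.open(path, flags, 0o600)
    try:
        if create:
            os.ftruncate(fd, nbytes)     # set the file size before mapping
        return mmap.mmap(fd, nbytes)     # shared mapping: a "pointer" to the pages
    finally:
        os.close(fd)                     # the mapping keeps the pages alive

# Write through the mapping; the bytes land directly in the file's pages.
path = os.path.join(tempfile.mkdtemp(), "slab0001.pos")
m = map_slab(path, 4096, create=True)
m[:4] = b"ABCD"
m.close()
```

Re-mapping the same file later sees the same bytes, which is what lets state flow between time steps without a copy.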
This model has been very successful in our code, with state files naturally serving as the checkpoint, and the actual backup to disk being as simple as launching a file system copy on each node. We have confirmed that even though the shared memory is held by the kernel, the underlying pages obey user-space first-touch NUMA semantics.
The default ramdisk size limit on Linux systems is half of the system's memory. This is not a limitation in our case, as roughly half of \Abacus's memory allocations are transient, mostly from kinematic data like particle accelerations.
\section{Deployment on Summit}
We ran a suite of simulations on the Summit\footnote{\url{https://www.olcf.ornl.gov/summit/}} supercomputer at Oak Ridge National Lab using this shared-memory checkpointing model. Overall, it was very successful, with timed checkpoints running every few hours, and conditional checkpoints running before time steps that included on-the-fly analysis. These time steps were considered ``riskier'' as they increased the memory footprint and the code path was dependent on the physical state of the simulation, increasing the chance of exposing a rare, corner-case bug. The state copy from the nodes to the Alpine network file system took on average 2 minutes for 13 TB (6800 files) spread across 63 nodes, or about 1.7 GB/s/node.
The primary checkpointing failures were (i) a string of network failures triggered by the copy operation on multi-GB files, (ii) timeouts in the checkpointing caused by variable network file system performance, and (iii) user error in deleting the original checkpoint instead of the partial checkpoint in the case of checkpoint failures.
\section{Overheads}
We find that unmapping shared memory has a noticeable overhead that scales with the size of the mapping. This is shown for a Linux Intel Skylake platform (page size 4096 bytes) in Figure~\ref{fig:overhead}. All mapping and unmapping operations were performed with thread affinity fixed, on a single NUMA node. Two cases are shown: with and without the underlying \texttt{/dev/shm/} file name being unlinked (deleted) before the unmapping. If the file name has not been unlinked, then the unmapping is fast (10s of GB/s). If the name has been unlinked, then the unmap runs at 10 GB/s independent of the size of the mapping. This rate is similar to the \texttt{memset()} speed and suggests some kind of operation (zeroing?) is occurring on the contents of the pages, not just the page tables. This work can be assigned to its own thread, but in simulations, we have observed performance degradation in other areas of the code while \texttt{munmap()} is running in a separate thread. Memory bandwidth pressure may be to blame.
We confirm that an ordinary \texttt{malloc()}/\texttt{free()} pair does not exhibit this behavior. The Summit platform exhibited this same pattern of overheads, despite being an IBM POWER9 platform with 64 KB pages.
For certain allocations (write state slabs), we can skip the \texttt{munmap()} call, as it will not free any memory, because the underlying file handle must persist until the next time step. However, we find that doing so simply defers the unmapping cost to the program termination. Similarly, performing an unlink after the unmapping, rather than before, just shifts the time differential into the unlink.
These overheads may be similar to, or even smaller than, those of methods that stage checkpoints in a main memory buffer or a burst buffer---a write to a burst buffer will likely be slower than 10 GB/s. However, the overheads are incurred for every time step (typically $\mathcal{O}(1000)$), rather than a few times per simulation. A hybrid method that allows the simulation executable to run multiple time steps in memory then switch to the ramdisk method just for the checkpoint step may be superior, at the cost of increased code complexity.
We surmise that the shared memory system was designed to facilitate lightweight inter-process communication, and not allocations of dozens of GB. Because our code requires a large amount of state relative to the time it takes to process it, it is suboptimal to use POSIX shared memory as the only way to pass information between time steps. However, the correctness enforced by the out-of-core model is a useful property. This model may be appropriate for codes with smaller state or a higher compute density (ratio of compute work to state size).
\begin{figure}
\centering
\includegraphics{munmap_rate.pdf}
\caption{In the POSIX shared memory checkpoint model, all persistent allocations are made with \texttt{mmap()} and freed with \texttt{munmap()}. \texttt{mmap()} is fast, but \texttt{munmap()} is noticeably slow, especially when the corresponding file handle has already been unlinked or deleted (dashed line). With checkpoints of 10s of GB, an unmap rate of 10 GB/s can be a bottleneck on the simulation performance.}
\label{fig:overhead}
\end{figure}
\section*{Acknowledgment}
The authors would like to thank the co-developers of the \Abacus code: Marc Metchnik, Doug Ferrer, and Phil Pinto. Abacus development has been supported by NSF AST-1313285 and more recently by DOE-SC0013718, as well as by Simons Foundation funds and Harvard University startup funds. NM was supported as a NSF Graduate Research Fellow. The Summit simulations have been supported by OLCF projects AST135 and AST145, the latter through the Department of Energy ALCC program.
\bibliographystyle{./bibliography/IEEEtran}
\section{1. Introduction}
It is commonly accepted that, from theoretical quantum mechanics, it follows that the spectrum of the eigenvalues of the angular momentum operator is discrete and comprises only integer values; see, e.g., \cite{Bohm,Schiff,Fock,LL}.
Non-integer values of angular momentum do not contradict the principles of quantum theory and have been considered a few times from different viewpoints.
Back in 1932, Majorana noted that in the framework of relativistic quantum mechanics, the general formulation of a one-particle equation admits solutions with arbitrary angular momentum, a predecessor of the theory of infinite-dimensional representations of the Lorentz group \cite{M}.
Working on the analytical properties of the scattering amplitude, Regge \cite{Regge} considered angular momentum as a continuous complex variable and derived the singularities in the plane of the complex angular momentum that became universally known as Regge poles.
\mbox{G\"{o}tte et {al}. \cite{GG}},
exploiting the freedom in fixing the orientation of phase discontinuity, introduced states with non-integer angular momentum and
applied formalism of the propagation of light modes with the fractional angular momentum in the paraxial and non-paraxial
\mbox{regime.}
Exploring polar solutions for the harmonic oscillator, Land \cite{Land} discovered that the Fock space equivalent to the Hilbert space of wave functions found by solving the Schr\"odinger equation in spherical coordinates is realized by acting with the creation and annihilation operators, allowing states with both integer and non-integer angular momentum.
In \cite{JKT1,JKT2}, we argued that if only physical conditions are imposed, what can be derived from the principles of quantum mechanics is that the spectrum is discrete with the only condition that the difference $L-|m|$ is integer while $L$ and $m$ could be integer as well as non-integer. Throughout, $L(L+1)$ is the eigenvalue of the angular momentum operator squared and $m$ is the eigenvalue of the operator of the third component of the \mbox{angular momentum}.
In this paper, a solution of the eigenvalue problem for the quantum-mechanical orbital angular momentum (hereafter referred to as angular momentum) operator is reported, obtained when only the physical requirement is imposed on the eigenfunction, and it is shown that in the framework of theoretical quantum mechanics, eigenfunctions with both integer and non-integer eigenvalues are allowed.
The paper is organized as follows. In Section 2, the multivaluedness and periodicity of the eigenfunctions of the operator of the third component of the angular momentum, $\hat M_z$, are discussed. A new prescription for the power of a complex variable, differing from the Euler--de Moivre prescription, $(x+iy)^m=\rho^m\,e^{im\phi}$, used in quantum mechanics, is presented. Based on this prescription, the eigenfunction of $\hat M_z$ is given in terms of Gauss's hypergeometric series. This wave function is normalizable and, distinct from the traditional eigenfunction proportional to $e^{im\phi}$, is single valued and invariant under rotations by $2\pi$ for any, not necessarily integer, $m$. In other words, the requirement of single-valuedness of the wave function does not necessarily lead to solutions with only integer $m$. This eigenfunction satisfies the physical requirement of orthonormality and, therefore, can be considered as the wave function describing a physical state with the eigenvalue $m$, not necessarily integer.
In Section 3, it is discussed how different prescriptions for the power function alter the eigenfunctions and the spectrum of the angular momentum operator squared, $\hat M^2$. An eigenfunction of $\hat M^2$ is found which is normalizable and satisfies the physical requirements for integer as well as non-integer $L$; to a fixed value of $L$ there corresponds a discrete spectrum of $m$, defined by the relation $|m|=L-k,\,k=\{0,1,\cdots , [L] \}$, where $[L]$ is the integer part of $L$. It is shown that the statement that the spectrum of eigenvalues consists of only integer $L$ (see, e.g., \cite{Bohm,Schiff,Fock,LL}) is just an artifact of choosing the Legendre function, $P^m_L$, as the eigenfunction of $\hat M^2$. Results are discussed in Section 4.
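As a simple numerical illustration of this spectrum (not part of the derivation itself): for a fixed $L>0$ the allowed values of $|m|$ are $L-k$ with $k$ running over $0,\dots,[L]$:

```python
import math

def m_spectrum(L):
    # |m| = L - k, k = 0, 1, ..., [L], where [L] is the integer part of L (L > 0)
    return [L - k for k in range(math.floor(L) + 1)]

print(m_spectrum(3))     # [3, 2, 1, 0] -- the familiar integer case
print(m_spectrum(2.4))   # values 2.4, 1.4, 0.4 (up to float rounding): still discrete
```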
\section{2. Eigenfunctions of \boldmath{$\hat M_z$} that Are Single-Valued and Periodic for Integer as well as for Non-Integer
Eigenvalues}
\label{s2}
$\Psi(x,y|m)$, the eigenfunction of the operator of the third component of the angular momentum, $\hat M_z$, is defined as the solution of the following eigenvalue equation:
\begin{equation}
\hat M_z \Psi(x,y|m)= i\left(y{d\over dx}-x{d\over dy}\right)\Psi(x,y|m)=m \, \Psi(x,y|m),
\label{eq1}
\end{equation}
where $m$ is the eigenvalue and throughout the reduced Planck constant $\hbar=1$. Solving Equation~(\ref{eq1}) for the complex $\Psi(x,y|m)=\Psi_R(x,y|m)+i\Psi_I(x,y|m)$ is equivalent to solving the following system of two coupled equations for the real and imaginary parts:
\begingroup
\makeatletter\def\f@size{9}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}%
\begin{eqnarray}
\label{eq6}
\left( y \frac{d}{ d x} - x \frac{d}{ d y} \right) \Psi_R(x,y|m) = -m \Psi_I (x,y|m) , \quad\quad \left( y \frac{d}{d x} - x \frac{d}{d y} \right) \Psi_I(x,y|m) = m \Psi_R (x,y|m).
\end{eqnarray}
\endgroup
Acting on Equation~(\ref{eq6}) with the operator $(y\,d/dx-x\,d/dy)$ results in one and the same equation for both $\Psi_R$ and $\Psi_I$:
\begin{eqnarray}
\label{eq7}
\left(x\,{d\over dy}-y\,{d\over dx}\right)^2\Psi_{R,\,I}(x,y|m)=-m^2\,\Psi_{R,\,I}(x,y|m).
\end{eqnarray}
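As an aside, it can be checked symbolically that the formal power $(x+iy)^m$, taken on a fixed branch, satisfies both the eigenvalue Equation~(\ref{eq1}) and Equation~(\ref{eq7}) for symbolic, not necessarily integer, $m$; the subtlety discussed below is therefore not the differential relation but the single-valuedness of the branch. For instance, with a computer algebra system:

```python
import sympy as sp

x, y = sp.symbols("x y", real=True)
m = sp.symbols("m")
Psi = (x + sp.I*y)**m            # principal branch; the symbolic check is branch-agnostic

Mz = lambda f: sp.I*(y*sp.diff(f, x) - x*sp.diff(f, y))   # the operator M_z
L  = lambda f: x*sp.diff(f, y) - y*sp.diff(f, x)          # operator appearing squared above

print(sp.simplify(Mz(Psi) - m*Psi))          # 0: eigenvalue m
print(sp.simplify(L(L(Psi)) + m**2*Psi))     # 0: eigenvalue -m^2
```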
The two linearly independent solutions of the homogeneous differential \mbox{Equation (\ref{eq7})} can be presented as $\Psi_1(x,y|m)=C_1(x; y|m)F_1(x; y|m)$ and $\Psi_2 = C_2(x; y|m) F_2(x; y|m)$, where $F_{1,\,2}$ are linearly independent particular solutions of Equation~(\ref{eq7}) and $C_{1,\,2}$ satisfy the condition $(y\,d/dx - x\, d/dy)C_{1,\,2} = 0$. If one chooses those $F_{1,\,2}$ that satisfy \mbox{Equation (\ref{eq6})}, then $C_1=C_2$ and the general solution of the eigenvalue Equation (\ref{eq1}) is $\Psi(x,y|m)=C(x; y|m)[F_1(x; y|m)+iF_2(x; y|m)]$, where $C(x; y|m)$ is a complex function whose absolute value is fixed by the physical requirement of normalizability and whose phase remains undetermined.
After $(x,\,y)$ is transformed to another set of independent variables, $(f(x^2+y^2),\,\zeta(x,y))$, where $f$ is any differentiable function of $x^2+y^2$, Equation~(\ref{eq1}) turns into an equation in the single variable $\zeta$. This technique of separating variables is used below, but first let us quote and discuss the function that is cited in textbooks as a solution of Equation~(\ref{eq1}) \cite{Bohm,Schiff,Fock,LL}:
\begin{equation}
F(x,y|m)\sim (x+i y)^m.
\label{eq2}
\end{equation}
If $m$ is non-integer, $F(x,y|m)$ is undetermined, since for non-integer exponents the power function is multivalued. In order for this function to be a solution of Equation~(\ref{eq1}), it must be defined as a differentiable function of $x$ and $y$. This may be achieved using, e.g., the Euler--de Moivre prescription for the power of a complex number \cite{ww}:
\begin{eqnarray}
\label{eq3}
(x+i y)^m=(\rho e^{i\phi})^m =\rho^m e^{i m \phi} =\rho^m (\cos \phi+i \sin\phi)^m =\rho^m (\cos m\phi +i \sin m\phi),
\end{eqnarray}
where
\begin{eqnarray}
\label{eq4}
\rho= |(x^2+y^2)^{1/2}|, \ \sin\phi =\frac{y}{|(x^2+y^2)^{1/2}|},\ \cos\phi =\frac{x}{|(x^2+y^2)^{1/2}|},
\end{eqnarray}
and $|z|$ stands for the absolute value of $z$.
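The $2\pi$-periodicity properties of the right-hand side of Equation~(\ref{eq3}) can be probed numerically (the sample values below are arbitrary):

```python
import cmath
import math

def de_moivre(rho, phi, m):
    # Right-hand side of the Euler--de Moivre chain: rho^m * e^{i m phi}
    return rho**m * cmath.exp(1j*m*phi)

rho, phi = 1.7, 0.9                       # arbitrary sample point
for m in (3, 0.5):
    w1 = de_moivre(rho, phi, m)
    w2 = de_moivre(rho, phi + 2*math.pi, m)
    print(m, abs(w1 - w2))
# m = 3   : ~0 up to float rounding   (invariant under phi -> phi + 2*pi)
# m = 0.5 : ~2.6, i.e. w2 = -w1       (rotational symmetry violated)
```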
Note that in the chain of Equation (\ref{eq3}) the rotational symmetry of the original expression $(x+i y)^m$ is violated when $m$ is non-integer. Indeed, due to the invariance of the Cartesian coordinates $x,\,y$ under a $2 \pi$-rotation, $(x+iy)^m$ is formally rotationally invariant for any $m$. Expressions $(\rho e^{i\phi})^m$ and $\rho^m (\cos \phi+i \sin\phi)^m$ are invariant with respect to $\phi\to\phi+2\pi $ for any $m$, while $\rho^m e^{i m \phi}$ and $\rho^m (\cos m\phi +i \sin m\phi)$ violate rotational symmetry for a non-integer $m$.
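This violation is easy to exhibit numerically; the following sketch (ours, with an illustrative angle and $\rho=1$) evaluates the Euler--de Moivre prescription $\rho^m e^{im\phi}$ at $\phi$ and at $\phi+2\pi$:

```python
import cmath
import math

def de_moivre(rho, phi, m):
    # Euler--de Moivre prescription: (rho e^{i phi})^m = rho^m e^{i m phi}
    return rho ** m * cmath.exp(1j * m * phi)

phi = 0.7  # illustrative angle, rho = 1
# integer m: invariant under phi -> phi + 2*pi (up to rounding)
d_int = abs(de_moivre(1.0, phi, 2) - de_moivre(1.0, phi + 2 * math.pi, 2))
# non-integer m: the value changes, here by an overall sign, so |diff| = 2
d_half = abs(de_moivre(1.0, phi, 0.5) - de_moivre(1.0, phi + 2 * math.pi, 0.5))
```

For $m=2$ the two values coincide to machine precision, while for $m=1/2$ they differ by a sign.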
If one requires all expressions in Equation~(\ref{eq3}) to be invariant with respect to $\phi\to\phi+2\pi $, then $m$ must be an integer. In that case $(x+iy)^m$ is a single-valued function of $x$ and $y$. This connection between the rotational invariance and the single valuedness of $(x+i y)^m$ caused the following assertion: requiring the invariance of the wave function $(x+iy)^m\to\rho^me^{im\phi}$ with respect to $\phi\to\phi+2\pi $ is equivalent to the single valuedness of this function \cite{Bohm,Schiff,Fock,LL}. Both these conditions are satisfied when $m$ is integer, and that is the reason why, based on the requirements of single valuedness or/and periodicity, it was declared that $m$ can only be integer; these requirements were formalized in theoretical quantum mechanics as follows \cite{Bohm,Schiff,Fock,LL}:
\begin{eqnarray}
\label{eq5}
\Psi(\rho,\phi|m)=\Psi(\rho,\phi+2\pi k |m),
\end{eqnarray}
where the polar coordinates $\rho$ and $\phi$ are given by Equation (\ref{eq4}) and $k$ is integer.
Imposing these physical conditions of single valuedness and rotational invariance on the wave function that is not observable has been criticized by Pauli \cite{Pauli} (see also \mbox{in \cite{merz}}).
We agree with Pauli's criticism and emphasize that the purely integer spectrum of the eigenvalues is obtained only when the conditions of single valuedness and/or periodicity are realized in the framework of the Euler--de Moivre prescription (\ref{eq3}). In fact, there are other possible prescriptions for determining $(x+iy)^m$, and it turns out that for one of these prescriptions the eigenfunction $\Psi(x,y|m)$ is single valued, differentiable with respect to its variables, and invariant with respect to $\phi\to\phi+2\pi k$ for any $m$, integer as well as non-integer. In other words, even if one imposes the requirement of single valuedness/rotational invariance on the wave function, this still does not necessarily lead to a purely integer spectrum.
To demonstrate this, the technique of separating variables is applied to Equation~(\ref{eq7}), transforming from $(x,\,y)$ to $(\rho,\,\zeta)$, where $\rho=|(x^2+y^2)^{1/2}|$. The equation for $F(x,y|m)$ then depends only on $\zeta$, and $\zeta$ is chosen so that Equation (\ref{eq7}) takes the form of an equation whose solutions are well documented. Defining $\zeta=[1/2 - x/(2|(x^2+y^2)^{1/2}|) ]=[1/2-x/(2\rho)]$ turns Equation~(\ref{eq7}) into the Gauss hypergeometric equation:
\begin{equation}
\left[
\zeta(1-\zeta)\,\frac{\rm d^2}{{\rm d} \zeta^2} + \left ( \frac{1}{2}-\zeta \right) \,\frac{\rm d}{{\rm d} \zeta} +m^2
\right]
F(\zeta|m) = 0 .
\label{eq8}
\end{equation}
As is known, any pair from Kummer's 24 solutions can be chosen as a set of linearly independent solutions to the Gauss equation \cite{ww}; here the pair
\begin{eqnarray}
F_1(\zeta|m) &=& {}_2F_1 \left( m,-m;\frac{1}{2}; \zeta\right) = (1-\zeta)^{\frac{1}{2}} \, {}_2F_1 \left( \frac{1}{2}+m,\frac{1}{2}-m; \frac{1}{2}; \zeta\right) , \nonumber\\
F_2(\zeta|m) &=& \zeta^{\frac{1}{2}} \, {}_2F_1 \left( \frac{1}{2}+m,\frac{1}{2}-m; \frac{3}{2}; \zeta\right) = \zeta^{\frac{1}{2}} (1-\zeta)^{\frac{1}{2}} \, {}_2F_1 \left( 1+m, 1-m; \frac{3}{2}; \zeta\right) ,
\label{eq9}
\end{eqnarray}
is chosen, where ${}_2F_1( a, b; c; \zeta)$ is Gauss's hypergeometric function \cite{ww,Abramowitz,Br}. Finally, $\Psi(x,y|m)=C(F_1+iF_2)$, the eigenfunction of the operator of the third component of the angular momentum, is given by
\begin{eqnarray}
\nonumber
\Psi(x,y|m) &=& C(\rho| m)
\left[
{}_2F_1 \left(m, -m;{1\over 2}; {1\over 2}- {x\over 2 \rho} \right) \right. \\
&&-i m {y\over\rho } \left({1\over 2}+{x\over 2 \rho}\right)^{-1/2} \left. {}_2F_1 \left({1\over 2}+m, {1\over 2}-m;{3\over 2}; {1\over 2}- {x\over 2 \rho} \right)
\right],
\label{eq10}
\end{eqnarray}
where the square root is determined via the prescription $(f^2(x))^{1/2}=|f(x)|$.
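The two equivalent representations of each solution in Equation (\ref{eq9}) are related by Euler's transformation, ${}_2F_1(a,b;c;z)=(1-z)^{c-a-b}\,{}_2F_1(c-a,c-b;c;z)$, and can be spot-checked numerically; the sketch below uses a naive partial-sum implementation of ${}_2F_1$ (our own helper, adequate for $|z|<1$ and for terminating series, not a library routine) at an illustrative non-integer $m$:

```python
def hyp2f1(a, b, c, z, terms=400):
    # naive partial sum of the Gauss series: exact when a or b is a
    # nonpositive integer (terminating case), convergent for |z| < 1
    s, t = 1.0, 1.0
    for n in range(terms):
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += t
        if t == 0.0:
            break
    return s

m, z = 0.3, 0.2  # non-integer m, sample point with 0 < z < 1
f1_a = hyp2f1(m, -m, 0.5, z)
f1_b = (1 - z) ** 0.5 * hyp2f1(0.5 + m, 0.5 - m, 0.5, z)
f2_a = z ** 0.5 * hyp2f1(0.5 + m, 0.5 - m, 1.5, z)
f2_b = z ** 0.5 * (1 - z) ** 0.5 * hyp2f1(1 + m, 1 - m, 1.5, z)
```

Both pairs of values agree to machine precision.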
It is straightforward to verify that the eigenfunction (\ref{eq10}), as a function of $x,\,y$, is single-valued, invariant under rotation by $2\pi k$, continuous, and has continuous derivatives of all orders up to infinity for any real, not necessarily integer, $m$. Though $\Psi(x,y|m)$ contains a square root, it is infinitely differentiable. This is guaranteed provided that $d(x/|x|)/dx=0$ and $d(y/|y|)/dy=0$ are satisfied, which is readily demonstrated using Equation (\ref{eq4}). Indeed, from $\cos\phi(x,y)=x/|(x^2+y^2)^{1/2}|$, i.e., $\cos\phi(x,y)|_{y=0}=x/|x|$, it follows that
\begin{eqnarray}
\label{cos}
&& {d \cos\phi(x,y)\over d x} |_{y=0} = \frac{y^2}{|(x^2+y^2)^{3/2}|} |_{y=0} =0 \quad\to\quad {d(x/|x|)\over dx}=0.
\end{eqnarray}
Similarly to Equation (\ref{cos}), from $d\sin\phi(x,y)/dy|_{x=0}$ one obtains $d(y/|y|)/dy=0$.
Let us consider particular values of $m$, starting with the integer values $m=\pm N$, \mbox{$\,N=1, 2, 3\cdots$}. The corresponding hypergeometric functions ${}_2F_1(N,-N;1/2;z)$ and ${}_2F_1(1/2+N,1/2-N;3/2;z)$ are tabulated (see, e.g., \cite{Abramowitz}). Then Equation~(\ref{eq10}) reads:
\begin{eqnarray}
\Psi(x,y|\pm N)={C(\rho|\pm N)\over \rho^N}\,(x\mp i y)^N.
\label{eq11}
\end{eqnarray}
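The reduction (\ref{eq11}) can be confirmed numerically; the sketch below evaluates Equation (\ref{eq10}) with $C=1$ at a sample point (the series helper and the sample values are our illustrative choices):

```python
import math

def hyp2f1(a, b, c, z, terms=400):
    # naive partial sum of the Gauss series (sufficient for |z| < 1
    # and for terminating cases)
    s, t = 1.0, 1.0
    for n in range(terms):
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += t
        if t == 0.0:
            break
    return s

def psi(x, y, m):
    # eigenfunction (10) with C = 1 and the prescription (f^2)^{1/2} = |f|
    rho = abs(math.hypot(x, y))
    z = 0.5 - x / (2 * rho)
    re = hyp2f1(m, -m, 0.5, z)
    im = -m * (y / rho) * (0.5 + x / (2 * rho)) ** (-0.5) \
         * hyp2f1(0.5 + m, 0.5 - m, 1.5, z)
    return complex(re, im)

x, y = 0.6, 0.8           # sample point with rho = 1
rho = abs(math.hypot(x, y))
p_plus1 = psi(x, y, 1)    # should equal (x - i y)/rho
p_plus2 = psi(x, y, 2)    # should equal ((x - i y)/rho)^2
p_minus1 = psi(x, y, -1)  # should equal (x + i y)/rho
```

The computed values match $(x\mp i y)^N/\rho^N$ to machine precision.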
So, for integer $m$, $\Psi(x,y|N)$ reproduces, up to the factor $C(\rho|\pm N)/\rho^N$, solution (\ref{eq2}), \mbox{$(x+i y)^N$}. Let us consider next the half-integer values of $m$. Explicit expressions are lengthy and involved; the results for $m=\pm 1/2$ and $m=\pm 3/2$ are:
\begin{eqnarray}
\Psi\left(x,y|\pm \frac{1}{2}\right) &=&C\left(\rho|\pm \frac{1}{2}\right)
\left[
\left( \frac{1}{2} +\frac{x}{2\rho}\right)^{\frac{1}{2}} \mp i \frac{y}{2\rho} \left( \frac{1}{2} +\frac{x}{2\rho}\right)^{ -\frac{1}{2}}
\right] ,
\label{eq16}\\
\Psi\left(x,y|\pm \frac{3}{2}\right)&=&C\left(\rho|\pm \frac{3}{2}\right)
\left[
\left( \frac{1}{2} +\frac{x}{2\rho}\right)^{\frac{1}{2}} \left( \frac{2x}{\rho} -1\right) \mp i \frac{y}{2\rho} \left( \frac{1}{2} +\frac{x}{2\rho}\right)^{ -\frac{1}{2}} \left( 1 +\frac{2x}{\rho}\right)
\right]\nonumber.
\end{eqnarray}
A relation between the wave functions for the integer and half-integer $m$ can be verified for $m=1/2, 3/2, 5/2$:
\begin{eqnarray}
\left [{\Psi\left(x,y|\pm \frac{N}{2}\right)\over C\left(\rho|\pm \frac{N}{2}\right)} \right]^2 = { \Psi(x,y|\pm N)\over C\left(\rho|\pm N\right) },
\label{eq18}
\end{eqnarray}
i.e., the wave function for the half-integer $m$, $\Psi(x,y|\pm N/2)$, satisfies $\Psi^2(x,y|\pm N/2)\sim \Psi(x,y|\pm N)$, a relation similar to that which holds for Equation (\ref{eq2}), $((x\pm i y)^{N/2})^{2}=(x\pm i y)^{N}$. This result, along with Equation (\ref{eq11}), indicates that the eigenfunction $\Psi(x,y|m)$, given by Equation~(\ref{eq10}), presents one possible prescription for the power function $(x+iy)^m$.
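Relation (\ref{eq18}) for $N=1$ is easily checked numerically from the closed forms (\ref{eq16}) and (\ref{eq11}); the sample point and the choice $C=1$ below are illustrative:

```python
import math

x, y = 0.6, 0.8
rho = abs(math.hypot(x, y))  # = 1 for this sample point
u = 0.5 + x / (2 * rho)      # = cos^2(phi/2)
# Equation (16) with C = 1
psi_p_half = u ** 0.5 - 1j * (y / (2 * rho)) * u ** (-0.5)  # m = +1/2
psi_m_half = u ** 0.5 + 1j * (y / (2 * rho)) * u ** (-0.5)  # m = -1/2
# Equation (11) with C = 1, N = 1
psi_p_one = (x - 1j * y) / rho
psi_m_one = (x + 1j * y) / rho
```

Squaring the $m=\pm1/2$ wave functions reproduces the $m=\pm1$ ones.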
Let us demonstrate, with the example of the half-integer $m$, the importance of choosing the prescription for the square root as $(f^2(x))^{1/2}=|f(x)|$. To this end, one moves from Cartesian to polar coordinates, $(x,y)\to (\rho, \phi)$, see Equation~(\ref{eq4}). In polar coordinates the argument $(1/2-x/2\rho)$ of the hypergeometric functions reads $(1-\cos\phi)/2 = \sin^2(\phi/2)$. First, the prescription $(f^2(x))^{1/2}=f(x)$ is used. Using in Equation (\ref{eq16}) the known relation ${}_2F_1(a,b; b;z)=(1-z)^{-a}$ \cite{Abramowitz,Br}, one obtains:
\begingroup
\makeatletter\def\f@size{9}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}%
\begin{eqnarray}
\Psi\left(x,y |\pm \frac{1}{2}\right)&\to&\Psi\left(\rho,\phi |\pm \frac{1}{2}\right)= C\left(\rho|\pm \frac{1}{2}\right)
\left[
\left(
\cos^2\frac{\phi}{2}
\right)^{\frac{1}{2}}
\mp \frac{i}{2} \,\sin\phi
\left(
\cos^2\frac{\phi}{2}
\right)^{-\frac{1}{2}}
\right]
=\cr\cr
&&C\left(\rho|\pm \frac{1}{2}\right)
\left(
\cos\frac{\phi}{2} \mp i \,\sin\frac{\phi}{2}
\right)
= C\left(\rho|\pm \frac{1}{2}\right) \,e^{\mp i \frac{\phi}{2} }.
\label{eq19}
\end{eqnarray}
\endgroup
Equation~(\ref{eq19}), apart from the normalization factor, is the known $e^{im\phi}$, $m=\pm 1/2$, originating from the Euler--de Moivre prescription for $(x+iy)^{1/2}$ and presented as a standard expression for the eigenfunction of $\hat M_z$ \cite{Bohm,Schiff,Fock,LL}. Obviously, because of the rotational invariance of the Cartesian coordinates, $x(\phi)=x(\phi+2k\pi),\,y(\phi)=y(\phi+2k\pi)$, the left-hand side of Equation~(\ref{eq19}), $\Psi(x,y|\pm 1/2)$, given by Equation~(\ref{eq16}), is invariant under the rotation $\phi\to \phi + 2 \pi k$. On the other hand, the right-hand side (r.h.s.) of Equation (\ref{eq19}) is invariant under the translations $\phi\to \phi + 4 \pi k$, but not under $\phi\to \phi + 2 \pi k$. This inconsistency stems from the prescription $(f^2(x))^{1/2}=f(x)$ used while deriving Equation~(\ref{eq19}). For example, $\cos(\phi/2)$ appeared in the real part of Equation (\ref{eq19}) because for $(\cos^2(z))^{1/2}$ we used $\cos(z)$:
\begin{equation}
{}_2F_1\left(
-{1\over 2},{1\over 2};{1\over 2}; \sin^2\left({\phi\over 2}\right)
\right)
=
\left[
1-\sin^2\left({\phi\over 2}\right)
\right]^{1/2}
=
\left[
\cos^2\left({\phi\over 2}\right)
\right]^{1/2}
=\cos\left({\phi\over 2}\right).
\label{eq21}
\end{equation}
If $\phi =2 \pi k$, the left-hand side (l.h.s.) of this relation is unity, ${}_2F_1(-1/2,1/2;1/2;0)=+1$, while the r.h.s. gives $\cos \pi k$, which, depending on $k$, can be either $+1$ or $-1$. If $\phi=\pi (2 k+1) $, both the l.h.s. and the r.h.s. of Equation (\ref{eq21}) vanish. This means that Equation~(\ref{eq21}) is valid only for $\phi$ with $\cos(\phi/2)\geq 0$; this condition, absent from \cite{Abramowitz,Br}, is noted in Ref.~\cite{NIST}. Meanwhile, both the l.h.s. and the r.h.s. of Equation~(\ref{eq21}) exist and are well defined for all values of $\phi$, which raises the question of how relation (\ref{eq21}) has to be interpreted when $\cos(\phi/2)<0$. Note that the inconsistency does not arise if, as an alternative, instead of Equation (\ref{eq21}) the following relation,
\begin{equation}
{}_2F_1\left(-{1\over 2},{1\over 2};{1\over 2}; \sin^2\left({\phi\over 2}\right)\right)=\left|\cos\left({\phi\over 2}\right)\right|,
\label{eq22}
\end{equation}
is used, i.e., if one applies the prescription $(\cos^2(\phi/2))^{1/2}=|\cos(\phi/2)|$ rather than $(\cos^2(\phi/2))^{1/2}=\cos(\phi/2)$. If, instead of the prescription $(f^2(x))^{1/2}=f(x)$, the prescription $(f^2(x))^{1/2}=|f(x)|$ is applied, one obtains:
\begin{eqnarray}
\Psi\left(x,y |\pm \frac{1}{2}\right)\to\Psi\left(\rho,\phi |\pm \frac{1}{2}\right)=C\left(\rho|\pm \frac{1}{2}\right)
\left(
\left|\cos\frac{\phi}{2}\right| \mp {i\over 2}\, \sin\phi \left|\cos\frac{\phi}{2}\right|^{-1}
\right)\ .
\label{eq191}
\end{eqnarray}
Expression (\ref{eq191}) is well defined for all values of $\phi$ and, most importantly, it is invariant under the translations $\phi\to \phi + 2 \pi k$; the above-mentioned inconsistency disappears. Similarly, using, for the case of $m=3/2$, the prescription $(f^2(x))^{1/2}=|f(x)|$, one obtains the same result, $\Psi(\rho,\phi | \pm 3/2 )=\Psi(\rho,\phi+2k\pi | \pm 3/2 )$, whereas for the prescription $(f^2(x))^{1/2}=f(x)$ the resulting wave function is no longer invariant under $\phi\to \phi+2\pi$.
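The contrast between the two prescriptions for $m=1/2$ can be made explicit numerically; the angle below is illustrative and $C=1$:

```python
import math

def psi_abs(phi):
    # Equation (191): prescription (f^2(x))^{1/2} = |f(x)|, m = +1/2, C = 1
    c = abs(math.cos(phi / 2))
    return complex(c, -0.5 * math.sin(phi) / c)

def psi_plain(phi):
    # Equation (19): prescription (f^2(x))^{1/2} = f(x), i.e. exp(-i phi/2)
    return complex(math.cos(phi / 2), -math.sin(phi / 2))

phi = 0.7                 # illustrative angle with cos(phi/2) != 0
shift = phi + 2 * math.pi
diff_abs = abs(psi_abs(phi) - psi_abs(shift))        # ~ 0: 2*pi invariant
diff_plain = abs(psi_plain(phi) - psi_plain(shift))  # = 2: sign flip
```

Under $\phi\to\phi+2\pi$ the $|f|$-prescription value is unchanged, while the plain prescription flips the overall sign.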
From Equation~(\ref{eq1}) and its conjugation, using the properties of the Gauss hypergeometric functions (see, e.g., \cite{ww}), one finds that $|\Psi(x,y|m)|^2$ depends only on \mbox{$\rho=|(x^2+y^2)^{1/2}|$} and that the real and imaginary parts of $\Psi(x,y|m)$ satisfy a relation resembling the trigonometric identity $\cos^2 x+\sin^2 x = 1$:
\begin{eqnarray}
{\Psi^2_R(x,y|m)+\Psi^2_I(x,y|m)\over |C(\rho|m)|^2}=1.
\label{eq25}
\end{eqnarray}
Particular examples of the general result (\ref{eq25}) are the cases of integer $m=N$, when Equation (\ref{eq25}) reduces to $\cos^2 N\phi+\sin^2 N\phi = 1$, and of half-integer $m$, quoted here for $m=1/2$: $(\Psi^2_R(x,y|1/2) + \Psi^2_I (x,y| 1/2) )/ |C(\rho| 1/2)|^2= |\cos(\phi/2)|^2+\sin^2(\phi)/(4|\cos(\phi/2)|^{2})=1$. Relation (\ref{eq25}) is another indication that the functions (\ref{eq2}) and (\ref{eq10}) belong to the same class, since for any $m$ both Equations (\ref{eq2}) and (\ref{eq10}) satisfy the relation $|\Psi/C|^2=1$.
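Relation (\ref{eq25}) can be spot-checked for non-integer $m$ as well; the sketch below evaluates $|\Psi(x,y|m)|$ from Equation (\ref{eq10}) with $C=1$ (the series helper and the sample points are our illustrative choices):

```python
import math

def hyp2f1(a, b, c, z, terms=400):
    # naive partial sum of the Gauss series (sufficient for |z| < 1)
    s, t = 1.0, 1.0
    for n in range(terms):
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        s += t
        if t == 0.0:
            break
    return s

def psi(x, y, m):
    # eigenfunction (10) with C = 1
    rho = abs(math.hypot(x, y))
    z = 0.5 - x / (2 * rho)
    re = hyp2f1(m, -m, 0.5, z)
    im = -m * (y / rho) * (0.5 + x / (2 * rho)) ** (-0.5) \
         * hyp2f1(0.5 + m, 0.5 - m, 1.5, z)
    return complex(re, im)

# |Psi/C| = 1 for integer and non-integer m alike
moduli = [abs(psi(x, y, m))
          for (x, y) in ((0.6, 0.8), (-0.6, 0.8))
          for m in (0.3, 0.5, 1.7)]
```

All the computed moduli equal unity to numerical precision.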
The physical requirement that the solution should satisfy is that $\Psi(x,y|m)$ must be orthonormal. To verify normalizability, let us use relation (\ref{eq25}). The normalizability condition,
\begin{eqnarray}
\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}{\rm d} x \, {\rm d} y \, |\Psi(x,y|m)|^2 = \pi \int_{0}^{\infty}{\rm d} \rho\, |C(\rho|m)|^2 < \infty
\label{eq26}
\end{eqnarray}
can be readily realized by the appropriate choice of $C$. It suffices to choose $|C|^2\sim \rho^\gamma$ with $\gamma< -1$ for $\rho\to\infty$ and $C$ finite for $\rho\to 0$.
Orthogonality follows from the relation obtained from Equation~(\ref{eq1}):
\begin{eqnarray}
&& i(m'-m)\int\int_{-\infty}^{\infty}{\rm d} x\, {\rm d} y \, \Psi^*(x,y|m') \Psi(x,y|m) \nonumber\\
&& \quad = \int_{-\infty}^{\infty} {\rm } {\rm d} y \, y \Psi^* (x,y|m') [ \Psi (x,y|m)|_{x=\infty} - \Psi (x,y|m)|_{x=- \infty} ] \nonumber\\
&&\quad - \int_{-\infty}^{\infty} {\rm } {\rm d} x \, x \Psi^* (x,y|m') [\Psi (x,y|m)|_{y=\infty} - \Psi (x,y|m)|_{y=- \infty}].
\label{eq28}
\end{eqnarray}
Using the above-mentioned constraints on $C(\rho|m)$, one obtains $\Psi (x,y|m)|_{x,y=\infty} - \Psi (x,y|m)|_{x,y=- \infty} =0$, from which it follows that, if the integral on the l.h.s. of Equation~(\ref{eq28}) exists, then for $(m' - m)\neq 0$ the integral is zero, which is the condition of orthogonality.
Therefore, $\Psi (x,y|m)$ from Equation (\ref{eq10}) fulfills every physical requirement that the eigenfunction of the third component of the quantum mechanical angular momentum operator should satisfy.
Using a certain prescription for the power function may lead to an expression for the wave function that is not invariant under translations by $2k\pi$ for non-integer eigenvalues. This case is realized by the Euler--de Moivre prescription $(\rho e^{i\phi})^m=\rho^m e^{im\phi}$. On the other hand, if another prescription is applied, this may result in a wave function that is invariant under translations by $2k\pi$ for integer as well as for non-integer eigenvalues. This case is realized by the eigenfunction (\ref{eq10}), where the prescription $(f^2(x))^{1/2}=|f(x)|$ is used. The prescription for the power function affects not only the features of the eigenfunctions of $\hat M_z$, but also the eigenvalues of the operator of the angular momentum squared, as described in Section~\ref{s3} below.
\section{3. Eigenfunctions and Eigenvalues of \boldmath{$ \hat M^2$}}
\label{s3}
The eigenvalue equation for $\hat M^2=\hat M_x^2+\hat M_y^2+\hat M_z^2$, the operator of the angular momentum squared,
\begin{equation}
\label{M2}
\hspace{-20mm} \hat M^2\Psi_{M}(L,m|\theta)=\left(\sin^2\theta\,{d^2 \over d \cos^2\theta}-2 \cos\theta { d \over d \cos\theta} - \frac{m^2}{\sin^2\theta}\right)\Psi_{M}(L,m|\theta) = L(L+1)\Psi_{M}(L,m|\theta),
\end{equation}
where $\theta$ is the polar angle and $L>0$ is the eigenvalue, reduces to the equation for the Gauss hypergeometric series, whose solutions can be represented by various linearly independent pairs of functions. A possible pair is $\Psi_{M(1)},\;\Psi_{M(2)}$ \cite{JKT1}:
\begin{eqnarray}
\Psi_{M(1)}(L,m|\theta) &=& f_1(m|\theta) \,\Phi_1(L,m|\theta ), \quad f_1(m|\theta) =(\sin^2\theta) ^ {{(m^2)^{1/2}\over 2}}, \nonumber\\
&& \Phi_1(L,m|\theta ) ={}_2F_1\left({1\over 2}+{(m^2)^{1/2}\over 2}+{L\over 2},{(m^2)^{1/2}\over 2}-{L\over 2};{1\over 2};\cos^2\theta\right) ; \nonumber\\[0.2in]
\Psi_{M(2)}(L, m|\theta) &=& f_2(m|\theta) \,\Phi_2(L,m|\theta ), \quad f_2(m|\theta) = \cos\theta\,(\sin^2\theta) ^ {{(m^2)^{1/2}\over 2}}, \label{eq30}\\
&& \Phi_2(L,m|\theta ) ={}_2F_1\left(1+{(m^2)^{1/2}\over 2}+{L\over 2}, {1\over 2}+{(m^2)^{1/2}\over 2}-{L\over 2};{3\over 2};\cos^2\theta\right).\nonumber
\end{eqnarray}
Any linear superposition of $\Psi_{M(1)}(L, m|\theta)$ and $\Psi_{M(2)}(L, m|\theta)$ is also a solution of Equation~(\ref{M2}). The only physical requirement for the functions $\Psi_{M(1,\,2)}(L,m|\theta)$ is normalizability. The necessary condition for normalizability is that $\Psi_M(L,m|\theta)$, presented as the product $\Psi=f\Phi$, must be a regular function. Unfortunately, it is not known how to realize such a condition for the product directly, and the only option left is to achieve normalizability when the factors $f_j(m|\theta)$ and $\Phi_j(L,m|\theta )$ are regular for all values of their arguments, which leads to a regular product $\Psi=f\Phi$.
The functions $f_j(m|\theta)\sim(\sin^2\theta) ^{(m^2)^{1/2}/2}$ are regular for any $\theta$ when $(m^2)^{1/2}\geq 0$. The hypergeometric series ${}_2F_1(a,b; c;\cos^2\theta)$ converge for $\cos^2\theta<1$; for $\cos^2\theta=1$ the series converge only if $a+b-c<0$ \cite{ww}. For ${}_2F_1$ from Equation~(\ref{eq30}) this condition reads $a+b-c=(m^2)^{1/2}<0$, which is opposite to the condition of regularity of $f_j(m|\theta)$, $(m^2)^{1/2}\geq 0$.
Therefore, $f_j(m|\theta)$ and the infinite series ${}_2F_1$ cannot be regular simultaneously, and in order for $\Psi_{M(j)}(L,m|\theta)$ to be regular, the infinite hypergeometric series ${}_2F_1$ should terminate, resulting in polynomials; hereafter, this truncation of the infinite hypergeometric series is referred to as ``polynomialization.''
As is known, the infinite hypergeometric series ${}_2F_1(a,b; c;z)$ reduces to a polynomial if either $a$ or $b$ is a non-positive integer \cite{ww}. Since the parameters of ${}_2F_1(a,\,b;\,c;\;\cos^2\theta)$ depend on $L,\,m$ (see Equation~(\ref{eq30})), setting $a$ or $b$ to a non-positive integer results in constraints on $L,\,m$. It is obvious that a different prescription for $(m^2)^{1/2}$ generates different restrictions on the eigenvalues of the angular momentum.
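The truncation mechanism is easy to exhibit numerically: when one upper parameter of ${}_2F_1$ is a non-positive integer, the Gauss series has finitely many nonzero terms. The helper below is our illustrative partial-sum implementation, with illustrative parameter values:

```python
def hyp2f1_count(a, b, c, z, terms=400):
    # partial sum of the Gauss series; also counts the nonzero terms
    s, t, nonzero = 1.0, 1.0, 1
    for n in range(terms):
        t *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        if t == 0.0:
            break
        s += t
        nonzero += 1
    return s, nonzero

# b = -2 is a non-positive integer: the series truncates after 3 terms
val, nonzero = hyp2f1_count(0.7, -2, 0.5, 0.3)
# the same degree-2 polynomial evaluated from the Pochhammer coefficients
poly = 1 + (0.7 * (-2) / 0.5) * 0.3 \
         + (0.7 * 1.7 * (-2) * (-1) / (0.5 * 1.5 * 2)) * 0.3 ** 2
```

The truncated series and the explicit polynomial coincide.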
Let us report the results for the eigenvalue problem of the operator of the angular momentum squared; for the explicit but somewhat lengthy calculations, see \cite{JKT1}. It is essential to specify which prescription is used for the $(m^2)^{1/2}$ appearing in $\Psi_{M(j)}(L,m|\theta)$, given by Equation (\ref{eq30}), since, as shown below, once normalizability is required, different prescriptions lead to different results for the spectrum. There are two possible prescriptions, namely, $(m^2)^{1/2}=|m|$ and $(m^2)^{1/2}=\pm m$.
First, the case of the prescription $(m^2)^{1/2}=|m|$ is considered. Equating the parameters $a$ and $b$ of the two hypergeometric functions from Equation~(\ref{eq30}) to non-positive integers, $-k$, results in four conditions, two for $\Psi_{M(1)}$ and two for $\Psi_{M(2)}$. One of the two conditions for $\Psi_{M(1)}$ generates singular functions and is thus dropped; the same is true for $\Psi_{M(2)}$, and, finally, one remains with only two polynomialization conditions, generating regular eigenfunctions~\cite{JKT1}. The spectrum of eigenvalues corresponding to the two remaining regular eigenfunctions is obtained from the following polynomialization conditions:
\begin{eqnarray}
\label{eq31}
&&\left|{m_{(1)}\over 2}\right|-{L\over 2}= - k_1; \ L - 2\left[{L\over 2}\right] \leq |m_{(1)}| \leq L,
\end{eqnarray}
\begin{eqnarray}
\label{eq310}
&&\left|{m_{(2)}\over 2}\right|-{L-1\over 2}= - k_2; \ (L-1) - 2\left[{L-1\over 2}\right] \leq |m_{(2)}| \leq (L-1).
\end{eqnarray}
Here, $[X]$ stands for the integer part of $X$ satisfying $X-[X]\geq 0$, $k_{1}=0,1,2,\cdots[L/2]$ and $k_{2}=0,1,2,\cdots[(L-1)/2]$. The sets (\ref{eq31}) and (\ref{eq310}) are comprised of the numeric sequences of positive and negative elements $m_{(j),k},\;j=1,2$ with step size 2, e.g., $|m_{(1)}|=L-2k_1$. From Equations~(\ref{eq31}) and (\ref{eq310}) it follows that the spectrum is discrete with the only condition that $L-|m|$ is necessarily integer, while there are no constraints on $L$ and $m$ separately; the solution (\ref{eq30}) is regular for integer as well as for non-integer $L,\,m$.
The sets of eigenvalues $\{m_{(1)}\}$ and $\{m_{(2)}\}$ (and their corresponding eigenfunctions) can be formally combined into one set, comprised of positive and negative elements $m_k$ with step size 1: $m_{k}-m_{k-1}=\pm 1$. Using numerical ordering from the smallest to the largest value, the combined set of all possible eigenvalues reads as follows:
\begin{eqnarray}
\{m\}|_{(m^2)^{1/2}=|m|} & =&
\{-L,-L+1,-L+2,.., -m_0 ; m_0,..,L-2,L-1,L \},
\label{eq1.20}
\end{eqnarray}
where, depending on a numeric value of $L$, $m_0$ is either $(L-2[L/2])$, the minimal positive value from the set (\ref{eq31}), or $(L-1-2[(L-1)/2])$, the minimal positive value from the \mbox{set (\ref{eq310}) \cite{JKT1}}.
Starting from the subset of Equation (\ref{eq1.20}) with positive $m$, applying $\hat M_{-}=\hat M_{x}-i\hat M_{y}$ leads to a subset with negative $m$ and vice versa, $\hat M_{+}\Psi(m<0)=(\hat M_{x}+i\hat M_{y})\Psi(m<0)\rightarrow \Psi(m>0)$, only when $L$ is either integer or half-integer. Acting by $\hat M_{-}$ on the
either integer or half-integer. Acting by $\hat M_{-}$ on the
regular functions with $m$ positive leads to the regular functions with $m$ negative, $\hat M_{-}\Psi_{\mathrm{reg}}(m>0)\to
\Psi_{\mathrm{reg}}(m<0)$ only when $L$ is integer. When $L$ is half-integer, acting by $\hat M_{-}$ on the regular functions with $m$ positive
leads to the singular functions with $m$ negative, $\hat M_{-}\Psi_{\mathrm{reg}}(m>0)\to \Psi_{\mathrm{sing}}(m<0)$. A symmetric result is valid
when applying the rising operator: $\hat M_{+}\Psi_{\mathrm{reg}}(m<0)\to \Psi_{\mathrm{reg}}(m>0)$ only when $L$ is integer and when $L$ is
half-integer, $\hat M_{+}\Psi_{\mathrm{reg}}(m<0)\to \Psi_{\mathrm{sing}}(m>0)$ \cite{JKT1}. Evidently, if it is required that, moving with step size 1 starting from the wave function with $m=(-L)$, one should arrive at the wave function with $m=+L$ and vice versa, this will be possible only when $m$ is either integer or half-integer. In this case, no analysis of the eigenvalue problem is necessary since the spectrum is already predefined to consist of only integer or half-integer $m$.
The requirement that, starting from the state with $m=\mp L$, one arrives, moving with step size 1, at the state with $m=\pm L$ is postulated in the method of commutator algebra of the angular momentum operators \cite{Bohm,Schiff,Fock,LL}. This requirement, customarily taken for granted as a physical postulate, is actually a mathematical condition imposed by hand, which filters out possible non-integer and non-half-integer $m$ from the spectrum, similarly to how imposing the nonphysical condition of periodicity on $e^{im\phi}$ filters out non-integer $m$ from the spectrum of $\hat M_z$. When $L$ is non-integer, the requirement that, starting from the state with $m=\mp L$, one arrives at the state with $m=\pm L$ cannot be satisfied. Indeed, e.g., for $L=1.7$, acting with the lowering operator $\hat M_-=\hat M_x-i\hat M_y$ on $\Psi(1.7,1.7|\theta)$ would never result in $\Psi(1.7,-1.7|\theta)$ and then terminate; instead one gets $\Psi(1.7,1.7|\theta)\rightarrow \Psi(1.7,0.7|\theta)\rightarrow \Psi(1.7,-0.3|\theta)\rightarrow \Psi(1.7,-1.3|\theta)\rightarrow \Psi(1.7,-2.3|\theta)\rightarrow\cdots $. Let us recall that, as an alternative to the unphysical requirement of single valuedness of the wave function, Pauli suggested that, acting by the raising and lowering operators $\hat M_x\pm i \hat M_y$ on regular wave functions, one should find $\Psi_{M}(L,\,-L|\theta)\leftrightarrow\Psi_{M}(L,\,-L+1|\theta) \leftrightarrow\cdots\leftrightarrow\Psi_{M}(L,\,-1+L|\theta)\leftrightarrow \Psi_{M}(L,\,L|\theta)$. Pauli justified this by postulating that, as a result of acting on regular wave functions by $\hat M_x\pm i \hat M_y$, no singular functions \mbox{appear \cite{Pauli,merz}}.
In the case of the prescription $(m^2)^{1/2}=|m|$, moving up and down in the spectrum with step size 2, indeed no singular functions are generated for any, integer or non-integer, $m$, as follows from the polynomialization conditions \cite{JKT1}. However, singular functions appear if, instead of \mbox{$(m^2)^{1/2}=|m|$}, the prescription $(m^2)^{1/2} =\pm m$ is used. In this case, the operators $\hat M_{\pm}=\hat M_x\pm i \hat M_y$, connecting wave functions with $m\to m\pm 1$, can be defined, and acting by $\hat M_{\pm}$ results in a set of eigenfunctions with the eigenvalues \cite{JKT1},
\begingroup
\makeatletter\def\f@size{8}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}%
\begin{eqnarray}
\{ m_1\}\downarrow_{(m^2)^{1/2} =\pm m} &=& (L-2 k_1) = \{ L; L-2; \ldots ; L-2[L/2]; L-2[L/2]-2; \ldots ; -\infty \} , \nonumber\\
\{ m_1\} \uparrow_{(m^2)^{1/2} =\pm m} &=& (-L+2 k_2) = \{-L; -L+2; \ldots ; -L+2[L/2]; -L+2[L /2] +2 ; \ldots ; \infty \} , \nonumber\\
\{ m_2\} \downarrow_{(m^2)^{1/2} =\pm m} &=& (L-1-2 k_3) = \left\{L-1; L-3; \ldots ; \right. \nonumber\\
&& \left. L-1-2[(L-1)/2]; L-1-2[(L-1) /2]-2; \ldots ; -\infty \right\} , \nonumber\\
\{ m_2\}\uparrow_{(m^2)^{1/2} =\pm m} &=& (-L+1+2 k_4) = \left\{ -L+1; -L+3; \ldots ; \right. \nonumber\\
&& \left.-L+1+2[(L-1)/2]; -L+1+2[(L-1) /2]+2; \ldots ; \infty \right\} ,
\label{eq34}
\end{eqnarray}
\endgroup
where $k_1, k_2,k_3,k_4$ are positive integers and the elements of the sets $\{ m_j\}$ can be any real numbers, not necessarily integer or half-integer. For the values of $m$ from Equation (\ref{eq34}), the corresponding hypergeometric functions ${}_2F_1$ are regular, while some of the factors $f(m|\theta)\sim (\sin^2\theta) ^ {(m^2)^{1/2}/2}=(\sin\theta)^{\pm m}$ are singular and, therefore, some $\Psi_{M(1,2)}(L,m|\theta)$ are also singular; for the explicit calculations and technicalities, see \cite{JKT1}.
For the case of $(m^2)^{1/2}=\pm m$, one obtains, similarly to the spectrum resulting from Equations~(\ref{eq31}) and (\ref{eq310}), that the spectrum is discrete with the only condition that $L-|m|$ is necessarily integer, while $L$ and $m$ can each be integer as well as non-integer. If $L$ is integer, the sequences (\ref{eq34}) do not extend to $\pm \infty$ but truncate at $\pm L$ and reproduce the set of eigenvalues (\ref{eq1.20}), obtained using the prescription $(m^2)^{1/2}=|m|$.
Let us note that, in the group theoretical framework, the eigenfunctions corresponding to the eigenvalue spectrum (\ref{eq34}) form an irreducible representation of $SO(3)$, the three dimensional rotation group; see, e.g., \cite{group}.
As mentioned just above, when $L$ and $m$ are integer, the infinite sequences (\ref{eq34}) truncate into a finite set of eigenvalues, $-L\leq m \leq L$, and the corresponding eigenfunctions are regular and form a finite set. In terms of the representation theory, these are the finite dimensional irreducible representations of the $SO(3)$ group. When $L$ and $m$ are non-integer (half-integers included), the sequences (\ref{eq34}) do not truncate and remain infinite. Then the corresponding infinite set of eigenfunctions is formed of both singular and regular functions. In terms of the representation theory, these are the infinite dimensional irreducible representations of the $SO(3)$ group. Using the prescription $(m^2)^{1/2} =\pm m$ results in an infinite set of eigenfunctions (containing both singular and regular functions), corresponding to an infinite dimensional representation of the rotation group. Using $(m^2)^{1/2} =|m|$, the finite set of eigenvalues $-L\leq m \leq L$, symmetric with respect to $m\rightarrow \,-m$, is filtered out from the infinite set (\ref{eq34}) because of $|m|\geq 0$. The set of corresponding eigenfunctions is comprised of regular functions only. In other words, the set of regular eigenfunctions is filtered out from the general infinite set of eigenfunctions in exactly the same way as in the case of integer $L$ and $m$.
So, depending on the prescription for $(m^2)^{1/2}$, the eigenfunctions of the operator of the angular momentum squared can be regular or singular, and the eigenvalues are given either by \mbox{Equations (\ref{eq1.20})} or (\ref{eq34}). When the prescription $(m^2)^{1/2} =|m|$ is used, all eigenfunctions are regular and the eigenvalue spectrum is given by Equation~(\ref{eq1.20}). When the prescription is $(m^2)^{1/2} =\pm m$, some eigenfunctions are regular, while some are singular, and the eigenvalue spectrum is given by Equation~(\ref{eq34}).
Finally, let us discuss what could cause the statement that the eigenvalue problem for $\hat M^2$ admits normalizable solutions only when $L$ is integer \cite{Bohm,Schiff,Fock,LL}. The eigenfunctions and the spectrum, e.g., the set of regular functions and eigenvalues (\ref{eq1.20}), are obtained by requiring normalizability of a solution that is presented in terms of a specific pair of linearly independent functions, $\Psi_{M(1)}$ and $\Psi_{M(2)}$. Quite a different picture arises when the normalization condition is applied to another pair of linearly independent functions, e.g., to the Legendre functions, $P^m_L(\theta)$ and $Q^m_L(\theta)$, which were, from the early days of quantum mechanics, considered as eigenfunctions of the operator of the angular momentum squared \cite{Bohm,Schiff,Fock,LL}. Certainly, both pairs, $\Psi_{M(1)}(L,m|\theta)$, $\Psi_{M(2)}(L,m|\theta)$ and $P^m_L(\theta)$, $Q^m_L(\theta)$, are solutions of the eigenvalue equation (\ref{M2}). The Legendre functions can be written as linear combinations of the hypergeometric functions $\Psi_{M(1,2)}(L,m|\theta)$ \cite{ww}:
\begin{eqnarray}
P^m_L(\theta)& = &C_{11} \Psi_{M(1)}(L,m|\theta) + C_{12} \Psi_{M(2)}(L,m|\theta);\nonumber\\
Q^m_L(\theta) &=& C_{21} \Psi_{M(1)}(L,m|\theta) + C_{22} \Psi_{M(2)}(L,m|\theta).
\label{eq36}
\end{eqnarray}
Polynomialization alone is not sufficient to normalize both $ P^m_L(\theta) $ and $Q^m_L(\theta) $ simultaneously. The reason is that the sets $\{ m_{(1)}\}$ and $\{ m_{(2)}\}$, generated by the two polynomialization conditions (\ref{eq31}) and (\ref{eq310}), have no common element, and, thus, it is impossible to satisfy these conditions simultaneously. As soon as the polynomialization conditions are applied, either $\Psi_{M(1)}(L;m|\theta)$ or $\Psi_{M(2)}(L;m|\theta)$ is singular \cite{JKT1}. Therefore, in order to achieve the normalizability of $ P^m_L(\theta) $ and $Q^m_L(\theta) $, polynomialization alone would not suffice, and an additional requirement to filter out the singular parts of $ P^m_L(\theta) $ and $Q^m_L(\theta) $ is necessary. This can be achieved by choosing the coefficients $C_{ij}$ in Equation (\ref{eq36}). Namely, as soon as the polynomialization condition (\ref{eq31}) is satisfied, which leads to a regular $\Psi_{M(1)}(L;m|\theta)$ and a singular $\Psi_{M(2)}(L;m|\theta)$, the coefficients $C_{12}$ and $C_{22}$ of $\Psi_{M(2)}(L;m|\theta)$ must vanish. Similarly, as soon as the polynomialization condition (\ref{eq310}) is satisfied, which leads to a regular $\Psi_{M(2)}(L;m|\theta)$ and a singular $\Psi_{M(1)}(L;m|\theta)$, the coefficients $C_{11}$ and $C_{21}$ of $\Psi_{M(1)}(L;m|\theta)$ must vanish.
The coefficients $C_{ij}$ are calculated in \cite{JKT1}:
\begingroup
\makeatletter\def\f@size{9}\check@mathfonts
\def\maketag@@@#1{\hbox{\m@th\normalsize\normalfont#1}}%
\begin{flalign}\nonumber
C_{11}^{-1}\sim\Gamma\left({1\over 2}-{L\over 2}+{|m|\over 2}\right) \,\Gamma\left(1+{L\over 2}+{|m|\over 2}\right),\quad C_{12}^{-1}\sim\Gamma\left({1\over 2}+{L\over 2}+{|m|\over 2}\right) \Gamma\left(-{L\over 2}+{|m|\over 2}\right),\end{flalign}
\begin{equation}
C_{21}\sim{\Gamma\left({1\over 2}+{L\over 2}-{|m|\over 2}\right) \over\Gamma\left(1+{L\over 2}+{|m|\over 2}\right)},\quad
C_{22}\sim{\Gamma\left(1+{L\over 2}-{|m|\over 2}\right) \over \Gamma\left({1\over 2}+{L\over 2}+{|m|\over 2}\right)}.
\label{Tesla}
\end{equation}
\endgroup
Here $\Gamma$ is the
Euler Gamma function.
It is straightforward to show that, after applying the polynomialization \mbox{conditions (\ref{eq31})} \mbox{and
(\ref{eq310})} and using the property of the
$\Gamma$ function
that \mbox{$1/\Gamma(\mathrm{nonpositive}\;\;\mathrm{integer})=0$ \cite{ww,Abramowitz}},
one finds
that $C_{11}=0$ and $C_{12}=0$ can be realized only for
integer
$L$ and $m$ and that $C_{21}=0$ and $C_{22}=0$ can never be satisfied \cite{JKT1}. Therefore, $Q^m_L(\theta)$ has to be excluded, and only
$P^m_L(\theta)$ remains as the quantum mechanical eigenfunction. Consequently, for the pair $P^m_L(\theta)$ and $Q^m_L(\theta)$, normalizability is achieved only when $L$ and $m$ are integer. This is not a general result; one explicit example of an eigenfunction, normalizable for integer as well as for non-integer eigenvalues, is given by the pair $\Psi_{M(1)}(L,m|\theta)$ and $\Psi_{M(2)}(L,m|\theta)$ of
Equation
(\ref{eq30}), leading
to the spectra (\ref{eq1.20}) or (\ref{eq34}),
depending on the prescription for the power function.
Hence, the statement that theoretical quantum mechanics implies that the eigenvalue spectrum of $\hat
M^2$ is comprised of only integers is not necessarily correct, in the
sense that it corresponds to a special case when $P^m_L(\theta)$ and $Q^m_L(\theta)$ are chosen as the pair of linearly independent solutions of the
eigenvalue
equation
(\ref{M2}).
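The $\Gamma$-function property invoked above, $1/\Gamma(\mathrm{nonpositive\ integer})=0$, can also be checked numerically. The following sketch is illustrative only and is not part of the paper's derivation; it builds a reciprocal Gamma function from the Python standard library, exploiting the fact that math.gamma signals the poles at nonpositive integers:

```python
# Illustrative numerical check (not from the paper): Gamma has simple
# poles at 0, -1, -2, ..., so its reciprocal vanishes exactly there,
# which is what restricts C_11 = C_12 = 0 to integer L and m.
import math

def reciprocal_gamma(x):
    """Return 1/Gamma(x), with the value 0.0 at the poles of Gamma
    (nonpositive integers), where math.gamma raises ValueError."""
    try:
        return 1.0 / math.gamma(x)
    except ValueError:
        return 0.0

print(reciprocal_gamma(-2))   # 0.0 (pole of Gamma)
print(reciprocal_gamma(0.5))  # 1/sqrt(pi)
```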
\section{Conclusions}
\label{s4}
In this paper,
the eigenvalue problem for the operator of the angular momentum is studied
in the framework of nonrelativistic quantum mechanics. The general result for the spectrum is that it is discrete, namely, $|m|=L-k$ with $k$
being an integer,
$k=\{0,1,\cdots , [L] \}$, where $[L]$ is the integer part of $L$. Both $L$ and $m$ can be integer as well as non-integer, and this does not contradict
any physical principle.
The above is in stark contrast with the
well-known
statement that from theoretical quantum mechanics it follows that $m$ and $L$ can
only be
integer or half-integer \cite{Bohm,Schiff,Fock,LL}. The
explanation of this
contradiction is that
the
result
obtained here does not impose the
non-physical requirements of either periodicity of the wave function or postulating that, moving with the step size 1 and starting from a state with $m=-L$,
one
should arrive at the state with $m=+L$ and vice versa. The discreteness condition, $|m|=L-k$, does not require that, moving with the step size 1 from
$\Psi_M(L,-L|\theta)$, one should end with
$\Psi_M(L,+L|\theta)$.
Using the Legendre functions, $P^m_L(\theta)$ and $Q^m_L(\theta)$, as a pair of linearly independent solutions of the eigenvalue equation, $\hat M^2
\Psi=L(L+1)\Psi$, is a specific choice that does not encompass the most general case. When $P^m_L(\theta)$ and $Q^m_L(\theta)$ are used, the
normalizability requirement filters out non-integer
$L$ and $m$,
but
a solution of the eigenvalue equation
that is normalizable may exist
for any real
eigenvalues, integer and non-integer.
Another solution, presented here, Equation~(\ref{eq30}),
satisfies the necessary physical requirement of normalizability for integer
and non-integer
$L$ and $m$.
Imposing the condition of single-valuedness on the eigenfunction of the third component of the angular momentum, $(x+i y)^m$, does not necessarily lead
to the violation of rotational invariance. Indeed,
we have found
a representation of the power function, Equation~(\ref{eq10}), which for any $m$ is a single-valued function of
($x,\,y$)
and is invariant under $2\pi k$ rotations.
Two results indicate that $\Psi(x,y|m)$ from Equation~(\ref{eq10}) and $(x+iy)^m$ belong to the same class of functions.
First,
according to Equation~(\ref{eq11}), the solution $\Psi(x,y|m)$ coincides,
up to a normalization factor,
with $(x+iy)^m$ when $m$ is integer.
Second,
for the half-integer values $m=N/2$,
one obtains
$(\Psi(x,y|N/2))^2\sim \Psi(x,y|N)$ (Equation~(\ref{eq18})),
a relation mirroring the one valid for the function (\ref{eq2}), $((x+iy)^{N/2})^2=(x+iy)^N$. For rational $m=p/n$,
it is
not shown that $(\Psi(x,y|p/n))^n\sim \Psi(x,y|p)$, but the particular cases of integer and half-integer $m$ are
indications
that $(\Psi(x,y|\alpha))^\beta\sim \Psi(x,y|\alpha\beta)$ may
well be true for any
$\alpha$ and $\beta$.
Another important property indicating that $(x+iy)^m$ and $\Psi(x,y|m)$ belong to the same class
is that,
for
any $m$,
both
functions
satisfy the relation $|\Psi/C|^2=1$;
see \mbox{Equation~(\ref{eq25})}.
To summarize,
the wave function (\ref{eq10}), a solution of the eigenvalue equation $\hat M_z\Psi(x,y|m)=m\, \Psi(x,y|m)$,
represents a
possible prescription for the power function $(x+i y)^m$.
In the general case, when the only condition imposed on a wave function is the physical requirement of normalizability, i.e., when
the periodicity requirement for the wave function is lifted or when a different pair of linearly independent functions is chosen,
there is no constraint on $L$ and $m$ to be
integer only. From
the physics point of view,
the only self-consistent approach
is
to drop all
non-physical conditions and consider the problem in the presence of the physical requirements alone.
This is what is
done in
this paper,
and, as a result,
a new quantum-mechanical solution of the eigenvalue problem for the angular momentum operator is obtained.
To conclude, the main result of this paper is that it does not follow from the framework of theoretical quantum mechanics that the eigenvalues of the
angular momentum operator must be
integer.
Certainly,
the spectrum of the angular momentum cannot be defined from theoretical quantum mechanics alone but has to be established by comparing
theoretical calculations with experiments.
This, however,
is not the goal of the
current
paper, which analyzes the eigenvalue problem for the angular momentum
operator from a purely theoretical viewpoint.
\acknowledgments{We are indebted to C.~M.~Bender, E.~E.~Boos, O.~Daalhuis, J.~T.~Gegelia and V.~A.~Petrov for illuminating discussions.}
\label{sec:intro}
Monitoring of atmospheric trace gases is important to understand
atmospheric composition and global climate change.
In particular climate models require information about the concentration
and global distribution of trace gases like e.g. H$_2$O, CO$_2$,
O$_3$, or CH$_4$. These
trace gases can be observed by measuring solar radiation that is
scattered and absorbed by the molecules. Several instruments have been
developed: satellite instruments provide global observations, while
local measurements can be taken from the ground, from
aircraft, or from balloons. Most instruments designed for
trace gas concentration observations measure radiance spectra with
high spectral resolution. In the UV-Vis spectral range,
absorption of radiation is due to molecular transitions; at the same
time, vibrational and rotational transitions can take place, which
results in band spectra where the individual absorption lines cannot
be distinguished. Nevertheless, each molecule type has its specific
absorption features, so that the measured spectra include information
about the various trace gas concentrations. In the near-infrared
spectral range there are mainly vibrational transitions; here
individual lines can be identified and used for trace gas
measurements.
Examples of currently operating satellite instruments that measure high
resolution radiance spectra of scattered solar radiation are SCIAMACHY on the
ENVISAT satellite (\citet{gottwald2006}), OMI on AURA (\citet{levelt2006}),
GOME-2 on METOP (\citet{callies2000}) and TANSO-FTS on GOSAT
(\citet{kuze2009}). SCIAMACHY and TANSO-FTS have the advantage of measuring
not only the radiance but also the polarization state of the radiation. While
extraterrestrial solar radiation is unpolarized, the radiance arriving
at the satellite is polarized due to
scattering by molecules, aerosols, or clouds and due to surface
reflection. The polarization information may therefore be used to reduce the
uncertainties in trace gas retrievals introduced by aerosols, clouds and
surface reflection.
The retrieval of trace gas concentrations from radiance spectra requires a
so-called forward model, which can simulate such measurements for given
realistic atmospheric conditions. For the frequently used optimal
estimation retrieval method (\citet{rodgers2000}), it is important that the forward model be
fast because it has to be run several times until, iteratively, the atmospheric
composition is found which best matches the measured spectra.
A commonly used method to simulate solar radiative transfer is the
discrete ordinate method, which was first described by
\citet{chandrasekhar50} and which has been implemented, for instance,
in the freely
available, well-known software DISORT (\citet{stamnes2000}). The DISORT
code, however, has the limitations that it assumes a plane-parallel
atmosphere (i.e., horizontal inhomogeneities cannot be taken into
account) and that it neglects polarization. Polarization has been
included in the VDISORT code (\citet{schulz99}).
The SCIATRAN code
(\citet{rozanov2005}) is also based on the discrete ordinate method.
It can optionally take into account spherical geometry as well as
polarization (\citet{rozanov2006}).
Another method for the simulation of solar radiative transfer is the
Monte Carlo method (\citet{marchuk1980,marshak2005}), which is usually
much slower than the discrete ordinate
method. For this reason, Monte Carlo methods have mostly been used for
simulations including inhomogeneous clouds (e.g., \citet{zinner2010}), for
which the plane-parallel approximation cannot be applied.
We have developed a new Monte Carlo method which
calculates high spectral resolution radiance spectra very efficiently. The
algorithm, named ALIS (Absorption Lines Importance Sampling),
does not require any approximations; in particular, it can easily take
into account polarization and horizontal inhomogeneity. We show that
the computational
time of ALIS for high resolution radiance spectra is comparable to or
even shorter than that of the discrete ordinate approach, especially if
polarization is included. This means that the algorithm is
sufficiently fast to be used as a forward model for trace gas retrieval
algorithms.
The basis of the ALIS method is that all wavelengths are
calculated at the same time based on the same random numbers.
This method, which is sometimes called the
``method of dependent sampling'' (\citet{marchuk1980}), has been used for various
applications, e.g. to calculate mean radiation fluxes in the near-IR
spectral range (\citet{titov1997}), to compute multiple-scattering of
polarized radiation in circumstellar dust shells
(\citet{voshchinnikov1994}), or to calculate Jacobians (\citet{deutschmann2011}).
We have validated ALIS by comparison to the well-known and well-tested
DISORT program, originally developed and implemented by
\citet{stamnes2000} in FORTRAN77. We use a new version of the code
implemented in C (\citet{buras2011b}) with increased efficiency and
numerical accuracy.
\section{Methodology}
The new method Absorption Lines Importance Sampling (ALIS), which allows fast
calculations of spectra in high spectral resolution, has been
implemented into the radiative transfer model MYSTIC (Monte Carlo code for the
phYsically correct Tracing of photons In Cloudy atmospheres;
\citet{mayer2009}). MYSTIC is operated as one of several solvers of the
libRadtran radiative transfer package (\citet{mayer2005}). The common model
geometry of MYSTIC is a 3D plane-parallel atmosphere to simulate radiances or
irradiances in inhomogeneous cloudy conditions. The model can also be operated
in 1D spherical model geometry (\citet{emde2007}) which makes it suitable also
for limb sounding applications. Recently MYSTIC has been extended to handle
polarization due to scattering by randomly oriented particles, i.e. clouds,
aerosols, and molecules (\citet{emde2010}), and to handle topography
(\citet{mayer2010}). Several variance reduction
techniques were also introduced to MYSTIC in order to speed up the computation
of radiances in the presence of clouds and aerosols (\citet{buras2011}).
\subsection{Monte Carlo method for solar atmospheric radiative transfer}
This section briefly describes the implementation of solar radiative
transfer in MYSTIC which is explained in detail in \citet{mayer2009}.
We describe only those details which are required to understand the
following sections about the ALIS method.
In the
forward tracing mode ``photons''\footnote{We use the term "photon"
to represent an imaginary discrete amount of electromagnetic energy
transported in a certain direction. It is not related to the QED
photon \cite{Mishchenko2009}.} are traced on their way through the
atmosphere. The photons are generated at the top of the
atmosphere where their initial direction is given by the solar zenith
angle and the solar azimuth angle.
Absorption and scattering are treated separately:
Absorption is accounted for by a photon weight according to the Lambert-Beer law:
\begin{equation}
w_{\rm abs} = \exp \left(-\int_0^s \beta_{\rm abs}(s') {\rm d}s' \right)
\label{eq:w_a}
\end{equation}
Here ${\rm d}s'$ is a path element of the photon path and
$\beta_{\rm abs} =\sum_{i=1}^N{\beta_{\rm abs,i}}$ is the total absorption
coefficient which is the sum of the $N$ individual absorption coefficients
$\beta_{\rm abs,i}$ of molecules, aerosols, water droplets, and ice
crystals. The integration is performed over the full photon path.
The free path of a photon until a scattering
interaction takes place is sampled according to the probability
density function (PDF):
\begin{equation}
P(s)= \beta_{\rm sca}(s) \exp \left({-\int_{0}^{s} \beta_{\rm sca}(s') {\rm d}s'}\right)
\label{eq:pdf_s}
\end{equation}
Here $\beta_{\rm sca}=\sum_{i=1}^N{\beta_{\rm sca,i}}$ is the total
scattering coefficient of $N$ interacting particle and molecule types.
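For a homogeneous medium, the free-path PDF above can be sampled by the inverse-transform method. The following minimal Python sketch is illustrative only (it is not the MYSTIC implementation, where the optical depth is instead accumulated step by step through the model boxes):

```python
import numpy as np

def sample_free_path(beta_sca, r):
    """Inverse-transform sample of the free path s from
    P(s) = beta_sca * exp(-beta_sca * s) (homogeneous medium):
    setting the CDF 1 - exp(-beta_sca * s) equal to a uniform
    random number r in [0, 1) gives s = -ln(1 - r) / beta_sca."""
    return -np.log(1.0 - r) / beta_sca

rng = np.random.default_rng(42)
paths = sample_free_path(0.5, rng.random(200000))
# The sample mean approaches the mean free path 1 / beta_sca = 2.
```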
We use a random number $r\in[0,1]$ to decide which interaction takes
place. If there are $N$ types of particles and molecules at
the place of scattering, the photon interacts with the $j^{th}$ type
if the random number fulfills the following condition:
\begin{equation}
\frac{\sum_{i=1}^{j-1} \beta_{\rm sca,i}}{\beta_{\rm sca}} < r \le
\frac{\sum_{i=1}^{j} \beta_{\rm sca,i}}{\beta_{\rm sca}}
\end{equation}
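The selection rule above can be sketched in a few lines of illustrative Python (not the MYSTIC code); the cumulative sum of the per-type scattering coefficients plays the role of the partial sums in the condition:

```python
import numpy as np

def select_scatterer(beta_sca_types, r):
    """Return the 0-based index j of the particle/molecule type the
    photon interacts with: the smallest j such that
    r <= sum_{i<=j} beta_sca_i / beta_sca_total, for r in (0, 1]."""
    cdf = np.cumsum(beta_sca_types) / np.sum(beta_sca_types)
    return int(np.searchsorted(cdf, r, side="left"))

# Example: molecular scattering (0.02) and aerosol (0.08) give
# cdf = [0.2, 1.0], so r = 0.1 picks type 0 and r = 0.5 picks type 1.
```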
At each scattering event the ``local estimate'' weight is calculated which corresponds to the
probability that the photon is scattered into the direction of the
detector and reaches it without further interactions:
\begin{equation}
w_{{\rm le},is} = \frac{ P_{\rm 11}(\theta_p)
\exp\left(-\int{(\beta_{\rm abs}+\beta_{\rm sca}){\rm d}s'}\right)}{\cos(\theta_d)}
\end{equation}
Here $\theta_p$ is the angle between photon direction (before scattering) and
the radiance direction. The phase function P$_{\rm 11}$ (first
element of the scattering matrix) gives the probability
that the photon is scattered into the direction of the detector, ``$is$''
denotes the scattering order. In order to calculate the probability that the
photon actually reaches the detector the Lambert-Beer term for extinction $
\exp\left(-\int{(\beta_{\rm abs}+\beta_{\rm sca}){\rm d}s'}\right)$
needs to be included. Finally, we have to divide
by the cosine of the zenith angle of the detector direction, $\theta_d$, to account for the
slant area in the definition of the radiance. The contribution of the photon
to the radiance measured at the detector is then given as
\begin{equation}
I_i=\sum_{is=1}^{\rm N_s} w_{{\rm abs},is} w_{{\rm le},is}
\label{eq:rad_i}
\end{equation}
Here $i$ is the index of the photon, $N_s$ is the number of scattering events
along the photon path, and $w_{{\rm abs},is}$ is the absorption weight
(Eq.~\ref{eq:w_a}) evaluated at the scattering order $is$. One can show
formally that the sum over the local estimate weights corresponds to a von
Neumann series which is a solution of the integral form of the radiative
transfer equation (see e.g. \citet{marshak2005}).
Additional weights are required to take into account polarization (\citet{emde2010})
and variance reduction techniques (\citet{buras2011}).
After tracing $N_p$ photons the radiance is given by the average
contribution of all photons:
\begin{equation}
I=\frac{\sum_1^{N_p} I_i}{N_p}
\end{equation}
The methods described above are implemented for monochromatic radiative
transfer. If one wants to calculate a radiance spectrum using these
methods, one has to calculate
all spectral points sequentially. Usually a very high accuracy is
required in order to distinguish spectral features from
statistical noise, which means that such calculations are
computationally expensive.
\subsection{Calculation of high spectral resolution clear-sky radiance spectra}
In the following, an efficient method to compute high spectral
resolution radiance spectra will
be described and demonstrated using the example of the spectral region from
765--768~nm in the O$_2$A absorption
band, where we calculate the spectrum with a resolution of
0.003~nm. The line-by-line gas absorption coefficients have been computed using the
ARTS model (\citet{buehler2005}, \citet{eriksson2011}) for the standard mid-latitude
summer atmosphere (\citet{Anderson1986}).
Fig.~\ref{fig:optdepth} shows the vertically integrated optical thickness of molecular
absorption $\tau_{\rm abs,v}$ (top) and scattering $\tau_{\rm sca,v}$
(bottom).
Whereas the scattering optical thickness for the cloudless,
aerosol-free atmosphere is rather small and almost constant,
varying only from about 0.0239 to 0.0243, the absorption
optical thickness spans five orders of magnitude, from about
10$^{-3}$ to 10$^{2}$ (note the logarithmic scale).
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{./optdepth.pdf}
\caption{Integrated vertical optical thickness of molecular absorption (top)
and molecular scattering (bottom).}
\label{fig:optdepth}
\end{figure}
As mentioned in the previous section, absorption is considered
separately by calculating the absorption weight $w_{\rm abs}$
(Eq.~\ref{eq:w_a}). In order to calculate a radiance spectrum taking
into account the spectral variation of the absorption coefficient
$\beta_{\rm abs}$ we can easily calculate the absorption weight for each
wavelength and get a spectrally dependent absorption weight vector:
\begin{equation}
\label{eq:w_abs_lam}
w_{\rm abs}(\lambda) = \exp \left(-\int_0^s \beta_{\rm abs}(\lambda,
s') {\rm d}s' \right)
\end{equation}
Here $\lambda$ denotes the wavelength of the radiation.
In practice the integral corresponding to the absorption optical
thickness $\tau_{\rm abs}=\int_0^s \beta_{\rm abs} {\rm d}s'$ is
calculated step by step while the photon
moves through the layers/boxes of the discretized model
atmosphere (see \citet{mayer2009}):
\begin{equation}
\tau_{\rm abs}(\lambda)=\sum_{p}{\beta_{\rm abs}(\lambda, p)
{\rm \Delta s}_p}
\end{equation}
Here $p$ denotes the step index along the photon path,
and ${\rm \Delta s}_p$ is the path length of step $p$.
We also include the spectrally dependent absorption coefficient
$\beta_{\rm abs}(\lambda)$ in the local estimate weight
$w_{{\rm le},is}(\lambda)$.
Thus we only need to trace the photons for one wavelength,
calculate the spectral absorption weights,
and obtain the full radiance spectrum with high spectral resolution. For
each photon we get (compare Eq.~\ref{eq:rad_i}):
\begin{equation}
I_i (\lambda)= \sum_{is=1}^{\rm N_s} w_{{\rm abs},is}(\lambda) w_{{\rm le},is}(\lambda)
\end{equation}
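Since the photon path is identical for all spectral points, the absorption weight vector reduces to a single matrix-vector product over the traversed path steps. A minimal vectorized sketch (illustrative Python; MYSTIC itself is implemented in C, and the array layout here is hypothetical):

```python
import numpy as np

def spectral_absorption_weight(beta_abs, ds):
    """ALIS absorption weight vector for one photon path.
    beta_abs: shape (n_steps, n_wavelengths), absorption coefficient
              in each traversed layer/box at every spectral grid point;
    ds:       shape (n_steps,), path length of each step.
    Returns w_abs(lambda) = exp(-sum_p beta_abs(lambda, p) * ds_p)."""
    tau_abs = ds @ beta_abs   # optical thickness tau_abs(lambda)
    return np.exp(-tau_abs)

# Two path steps, three wavelengths:
w = spectral_absorption_weight(
    np.array([[0.1, 1.0, 10.0],
              [0.1, 1.0, 10.0]]), np.array([0.5, 0.5]))
# w = exp(-[0.1, 1.0, 10.0])
```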
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{./only_abscorr_1e6.pdf}
\caption{Radiance spectra calculated with MYSTIC in comparison to
DISORT calculations. The top panel shows the transmittance
(radiance normalized to extraterrestrial irradiance) spectra of
two independent MYSTIC runs (grey lines) and the
DISORT result (black line) and the
bottom panel shows the relative differences between the MYSTIC runs
and DISORT in percent.}
\label{fig:only_abscorr}
\end{figure}
Fig.~\ref{fig:only_abscorr} shows two spectral calculations using this
method.
Here we assumed that the sun is in the zenith and the sensor is on the
ground and measures with a viewing angle of 60$^\circ$. We did not
include any sensor response function. The upper
panel shows the transmittance spectra (radiance divided by
extraterrestrial irradiance) and the lower panel shows the
relative difference to the DISORT solver operated
with 32~streams.
The MYSTIC calculations with 10$^6$ photons took 13~s on a single
processor with 2~GHz CPU (all computational times in the following
refer to this machine), the DISORT calculation took 25~s.
The relative difference between MYSTIC and DISORT is less than about
2\% with some exceptions where the transmittance is almost zero.
The spectral features in the MYSTIC calculations are well
resolved.
The two MYSTIC runs used exactly the same setup, but the results deviate
from each other and from the DISORT result.
This deviation is due to the statistical error of the Monte Carlo
calculation; with 10$^6$ photons the standard deviation is 0.66\%.
Hence the deviation can be reduced by running more photons.
Since the same photon paths
are used to compute all wavelengths, the deviation is the same at all
spectral data points and the spectra are not noisy.
However, the deviation shows a spectral dependence which is not a
statistical error but can be attributed to
the spectral dependence of Rayleigh scattering, which has been
neglected so far. In the calculation, $\beta_{\rm sca}$ was
set to a constant value corresponding to $\beta_{\rm sca}$ at
766.5~nm.
In the next section we will describe how to include
the spectral dependence of the scattering coefficient.
\subsection{Importance sampling for molecular scattering}
Eq.~\ref{eq:pdf_s} is the PDF which we use
for sampling the free pathlength of the photons, where the scattering coefficient
$\beta_{\rm sca}$ now becomes wavelength dependent. We want to use
this PDF for sampling the path length for all wavelengths. In order to
ensure that the results are correct for all wavelengths, we
introduce a correction weight (importance sampling method; see
e.g. \citet{ripley2006}):
\begin{align}
&w_{\rm sca1}(\lambda, s) \\
&= \frac{\beta_{\rm sca}(\lambda, s) \exp \left(-\int_0^s \beta_{\rm sca}(\lambda, s') {\rm d}s'
\right)} {\beta_{\rm sca}(\lambda_c, s) \exp \left(-\int_0^s \beta_{\rm sca}(\lambda_c, s') {\rm d}s'
\right)} \nonumber \\
&= \frac{\beta_{\rm sca}(\lambda, s)}{\beta_{\rm sca}(\lambda_c,
s)} \exp \left(-\int_0^s (\beta_{\rm sca}(\lambda, s')- \beta_{\rm
sca}(\lambda_c, s')){\rm d}s' \right) \nonumber
\end{align}
Here $\lambda_c$ ($c$ for ``computational'') is the wavelength
corresponding to the scattering
coefficient that is used to sample the photon free path.
As in the previous
section, we write the exponential part of this expression as a sum over the model layers/boxes:
\begin{align}
\label{eq:w_sca1}
w_{\rm sca1}(\lambda,s) &=
\frac{\beta_{\rm sca}(\lambda, s)}{\beta_{\rm sca}(\lambda_c, s)}
\\\nonumber
&\times \exp\left(-\sum_{p}{ (\beta_{\rm sca}(\lambda, p) -
\beta_{\rm sca}(\lambda_c, p)) \Delta s_p }\right)
\end{align}
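The correction weight can be evaluated for all wavelengths at once, analogously to the absorption weight. A sketch under the same illustrative assumptions (Python, hypothetical array shapes, not MYSTIC's internal layout):

```python
import numpy as np

def scattering_weight_1(beta_sca, ds, ic):
    """Importance-sampling weight w_sca1(lambda) for a photon path
    ending at a scattering event.
    beta_sca: shape (n_steps, n_wavelengths), scattering coefficients
              along the path; column ic corresponds to the
              computational wavelength lambda_c used for path sampling;
    ds:       shape (n_steps,), path length of each step."""
    local_ratio = beta_sca[-1] / beta_sca[-1, ic]            # beta(lam,s)/beta(lam_c,s)
    delta_tau = ds @ (beta_sca - beta_sca[:, [ic]])          # sum_p (beta(lam)-beta(lam_c)) ds_p
    return local_ratio * np.exp(-delta_tau)

w = scattering_weight_1(
    np.array([[1.0, 1.1],
              [2.0, 2.2]]), np.array([1.0, 1.0]), ic=0)
# At lambda_c itself the weight is exactly 1.
```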
The probability that the photon is scattered into a direction with
scattering angle
$\theta_p$ is given by the phase function $P_{11}(\lambda, \theta_p)$.
So we need
another weight to correct for the spectral dependence of the phase
function which can again easily be derived using the importance
sampling method:
\begin{equation}
\label{eq:w_sca2}
w_{{\rm sca2},is}(\lambda, s)=\frac{
P_{11}(\lambda ,\theta_p, s)
}{P_{11}(\lambda_c, \theta_p, s)}
\end{equation}
Here $s$ is the location at which the photon is scattered.
Note that in the case where we have only molecules as interacting
particles and neglect depolarization $P_{11}$ is the Rayleigh phase function
\begin{equation}
P_{11} (\theta_p)=\frac{3}{4} (1+\cos^2 \theta_p)
\end{equation}
Also, as long as we neglect the wavelength dependence of the
Rayleigh depolarization factor (see e.g. \citet{hansen1974})
the Rayleigh phase function is wavelength-independent and
$w_{{\rm sca2},is}(\lambda)=1$.
The final result for the contribution of a photon including the
spectral dependence of absorption and scattering to the
spectral radiance is:
\begin{align}
& I_i (\lambda) = \\ \nonumber
& \sum_{\rm is=1}^{\rm N_s} w_{{\rm abs},is}(\lambda)
\left(\prod_{is'=1}^{is} w_{{\rm sca1},is'} (\lambda, s') w_{{\rm
sca2},is'} (\lambda,s') \right)
w_{{\rm le},is}(\lambda)
\end{align}
\begin{figure*}[t!]
\centering
\includegraphics[width=.8\textwidth]{./scacorr.pdf}
\caption{Relative differences of various model setups with respect
to a DISORT calculation with 64 streams. The left panels show MYSTIC
calculations with 10$^6$ and 10$^9$ photons respectively. The
right panels show DISORT calculations with 16 and 32 streams respectively.}
\label{fig:scacorr}
\end{figure*}
Now we calculate again the spectrum in the O$_2$A-band from 765--768~nm with
$\lambda_c=$765~nm and compare the result with an
accurate DISORT calculation using 64~streams. The top left panel of
Fig.~\ref{fig:scacorr} shows the relative difference of a MYSTIC run with
10$^6$ photons, which takes 14.6~s, to DISORT. We see that there is still a
relative deviation of about 0.4\% which is due to the statistical error of the Monte Carlo
calculation, but the spectral dependence of the deviation is now removed because
we have corrected the spectral dependence of the scattering coefficient. In
order to check whether the method is correctly implemented without any bias
(apart from the statistical error) we performed a MYSTIC run with 10$^9$
photons. The result is shown in the lower left panel. The spectrally independent
deviation has almost vanished ($<$0.01\%)
and the relative difference between MYSTIC and DISORT is below
0.05\%. For
comparison we show in the right panels of the figure DISORT runs with 16 and
32 streams respectively compared to the DISORT run with 64 streams. The
difference between DISORT (16 streams) and DISORT (64 streams) is actually
larger than the difference between MYSTIC (10$^9$ photons) and DISORT (64
streams).
It should be noted that this Monte Carlo method only works well as long as the
scattering coefficient does not vary too much within the simulated wavelength
region. Otherwise the scattering weights can take values very far from 1,
resulting in large statistical noise and slow convergence.
\subsection{Calculation of high resolution spectra including aerosol and
cloud scattering}
It is straightforward to apply the method to an atmosphere including
clouds and/or aerosols. We just need to use the total absorption and scattering coefficients
\begin{eqnarray}
\beta_{\rm abs}(\lambda) =\sum_{i=1}^N{\beta_{\rm abs,i}}(\lambda) \\
\beta_{\rm sca}(\lambda) =\sum_{i=1}^N{\beta_{\rm sca,i}}(\lambda)
\end{eqnarray}
and the average phase function given by
\begin{equation}
P_{\rm 11}(\lambda)=\frac{\sum_{i=1}^N{\beta_{\rm sca,i}(\lambda)
P_{\rm 11,i}(\lambda)}}
{\beta_{\rm sca}(\lambda)}
\end{equation}
Here $N$ is the number of interacting particles/molecules. These
quantities can be used to compute the wavelength dependent weights
$w_{\rm abs}(\lambda)$ (Eq.~\ref{eq:w_abs_lam}), $w_{\rm
sca1}(\lambda)$ (Eq.~\ref{eq:w_sca1}) and $w_{\rm sca2}(\lambda)$
(Eq.~\ref{eq:w_sca2}).
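These mixing rules translate directly into code; a brief illustrative sketch (Python, hypothetical array shapes):

```python
import numpy as np

def mix_phase_function(beta_sca_i, p11_i):
    """Scattering-coefficient-weighted average phase function of N
    particle/molecule types.
    beta_sca_i: shape (N,), per-type scattering coefficients;
    p11_i:      shape (N, n_angles), per-type phase functions P_11,i."""
    beta_sca_i = np.asarray(beta_sca_i, dtype=float)
    weights = beta_sca_i / beta_sca_i.sum()
    return weights @ np.asarray(p11_i, dtype=float)

p11 = mix_phase_function([1.0, 3.0], [[1.0, 1.0], [2.0, 2.0]])
# -> [1.75, 1.75]
```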
In MYSTIC we have so far considered only the spectral dependence of molecular
scattering, because the spectral
dependence of cloud and aerosol
scattering can safely be neglected in narrow wavelength intervals.
\section{Applications}
\subsection{Simulation of polarized near infrared spectra in cloudless conditions}
The Greenhouse Gases Observing Satellite (GOSAT) determines the
concentrations of carbon dioxide and methane globally from space.
The spacecraft was launched
on January 23, 2009, and has been operating properly since then.
Information about the project can be found on the web page
\url{http://www.gosat.nies.go.jp}.
GOSAT carries the Thermal and Near Infrared Sensor for Carbon
Observation Fourier-Transform Spectrometer (TANSO-FTS)
(\citet{kuze2009}) which measures
in 4 spectral bands (band~1: 0.758--0.775~$\mu$m, band~2: 1.56--1.72~$\mu$m,
band~3: 1.92--2.08~$\mu$m, band~4: 5.56--14.3~$\mu$m). The
spectral resolution in all bands is 0.2~cm$^{-1}$. For the shortwave
spectral range (bands 1--3), polarized observations are performed. In
order to analyze these data, a fast polarized radiative transfer code is
required. The Monte Carlo approach described in this
study is an alternative to commonly used discrete ordinate or
doubling-and-adding codes. The approach is fully compatible with the
implementation of polarization in MYSTIC as described in
\citet{emde2010} and validated in \citet{kokhanovsky2010},
because the weight vector which is calculated to
take into account
the polarization state of the photon does not interfere with the
spectral weights. An advantage of the Monte Carlo approach
is of course that it is easy to take into account horizontal
inhomogeneities of clouds, aerosols, and molecules.
In the following we show an example simulation where we selected a
spectral window of band~3 from 4815--4819~cm$^{-1}$ (corresponding to
$\approx$2.075--2.077~$\mu$m) in the near
infrared. The radiance simulation is performed with a spectral
resolution of 0.01~cm$^{-1}$. The atmospheric profiles (pressure,
temperature, trace gases) correspond to the standard mid-latitude
summer atmosphere as defined by \citet{Anderson1986}.
The molecular absorption coefficients
have been computed using the ARTS line-by-line model. We assume
a thin maritime aerosol layer with an optical thickness of
0.05 at 2~$\mu$m. We took the refractive indices and the size distribution data
from \citet{hess1998} (maritime clean aerosol mixture)
and performed Mie calculations to obtain the
aerosol optical properties including the phase matrix.
We assume an underlying water surface
which is modeled using the reflectance matrix as defined in
\citet{mishchenko1997}. The reflectance matrix
is based on the Fresnel equations, on \citet{cox54a,cox54b} to
describe the wind-speed dependent slope of the waves, and on
\citet{tsang1985} to account for shadowing effects. The wind speed was taken
to be 5~m/s. The viewing angle of the FTS is 30$^\circ$, and simulations have been
performed in the principal plane assuming different solar zenith
angles $\theta_0$. The full Stokes vector has been computed for all setups.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{./gosat_band3_sunglint.pdf}
\caption{Simulated GOSAT spectra over ocean. The assumed wind
speed is 5~m/s. The viewing angle of the FTS is 30$^\circ$. The
line-styles correspond to different solar zenith angles, the
solid line corresponds to 30$^\circ$, i.e. the sun glint is observed,
the dashed and the dash-dotted lines correspond to 20$^\circ$ and
60$^\circ$, respectively. All simulations are in the principal
plane. The upper panel shows the normalized reflected
intensity $I$ and the lower panel shows the polarization
difference $Q$.}
\label{fig:gosat}
\end{figure}
Fig.~\ref{fig:gosat} shows the simulated GOSAT spectra. The solid
lines correspond to $\theta_0$=30$^\circ$; in this case
the FTS observes the center of the sun glint, and therefore this spectrum
shows the highest reflectance. The dashed lines are for
$\theta_0$=20$^\circ$, still in the sun-glint region, and
the dash-dotted lines are for $\theta_0$=60$^\circ$, which
is not much influenced by the sun glint. The computation time for
each polarized spectrum using 10$^6$
photons was 2~minutes and 25~seconds,
the standard deviation (approximately the same for each Stokes vector
component) for $\theta_0$=20$^\circ$ and $\theta_0$=30$^\circ$ is
0.03\%, for $\theta_0$=60$^\circ$ it is
0.16\%.
\subsection{Simulations of differential optical thickness in broken cloud conditions}
Retrievals of the tropospheric NO$_2$ column from SCIAMACHY data are based on the
differential optical absorption spectroscopy (DOAS) method
\citep{richter2002, richter2005}. For this
method the measured spectra are converted to so-called
differential optical thicknesses defined as
\begin{equation}
D(\lambda)=\ln( I_{\rm TOA}(\lambda)) - P_3(\lambda)
\end{equation}
where $I_{\rm TOA}(\lambda)$ is the reflectance at the top of the
atmosphere. $P_3(\lambda)$ is a third degree least square
polynomial fit of the logarithm of $I_{\rm TOA}(\lambda)$ with respect
to the wavelength, which removes the slowly varying part of the
spectrum. The conversion of $I_{\rm TOA}(\lambda)$ into $D(\lambda)$
improves the contrast of the NO$_2$ absorption line depths and thereby
the accuracy of the retrieval. The retrieval algorithm minimizes the
function
\begin{equation}
F(\lambda, V_{\rm NO_2, ret}) = \left| D(\lambda, V_{\rm NO_2, true})
- D(\lambda, V_{\rm NO_2, ret}) \right|
\end{equation}
where $V_{\rm NO_2, true}$ and $V_{\rm NO_2, ret}$ are the true and the
retrieved tropospheric NO$_2$ columns, respectively.
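The conversion from reflectance to differential optical thickness can be sketched as follows (a Python illustration with a synthetic spectrum, not part of the retrieval codes used here; the function name is our own):

```python
import numpy as np

def differential_optical_thickness(wavelength, i_toa, degree=3):
    """D(lambda) = ln(I_TOA) - P3(lambda): remove the slowly varying part
    of the spectrum with a low-order least-squares polynomial fit."""
    log_i = np.log(i_toa)
    coeffs = np.polyfit(wavelength, log_i, degree)   # least-squares P3
    p3 = np.polyval(coeffs, wavelength)              # slowly varying part
    return log_i - p3

# A purely smooth spectrum yields D ~ 0; absorption lines appear as dips.
wl = np.linspace(425.0, 450.0, 251)
smooth = np.exp(0.2 - 0.01 * (wl - 425.0))
print(np.max(np.abs(differential_optical_thickness(wl, smooth))))
```

For a background whose logarithm is exactly polynomial of degree three or less, the fit removes it completely, leaving only the differential structure.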
\begin{figure}[h]
\centering
\includegraphics[width=.48\textwidth]{./diff_opt_depth_no2_1e5_paper.pdf}
\caption{Differential optical thickness calculated for three
different NO$_2$ profiles, corresponding to low (-~-), medium ($\cdots$)
and highly polluted (---) conditions. The spectra have been
computed using MYSTIC with 10$^5$ photons.}
\label{fig:doas}
\end{figure}
Our new method allows efficient computations of $D(\lambda)$.
As an example Fig.~\ref{fig:doas} shows spectra for
three different NO$_2$ profiles, corresponding to low, medium and
highly polluted conditions. The Lambertian surface albedo was set to
0.1 and the solar zenith angle to 32$^\circ$.
NO$_2$ and O$_3$ profiles are the same
as used in the study by \citet{vidot2010}.
The NO$_2$ absorption cross sections have been taken from
\citet{Burrows1998}, ozone absorption was also included in the
simulations using the cross sections by \citet{Molina1986}.
The spectral resolution of the simulation is 0.1~nm.
\begin{figure*}[t!]
\centering
\includegraphics[width=1.\textwidth]{./ALISvsLineByLine_errors.pdf}
\caption{Simulations for NO$_2$ profile corresponding to
medium polluted conditions. The top left panel shows the reflectance and
the bottom left panel the differential optical thickness. The black lines
correspond to Monte Carlo simulations using the ALIS method with
different number of photons (10$^3$, 10$^5$, and 10$^7$), the
grey line shows a Monte Carlo simulation without ALIS (all wavelengths
are calculated independently). The middle panels show differences
w.r.t. DISORT for the ALIS simulations and the right plots show
differences w.r.t. DISORT for the MYSTIC calculation without using ALIS.}
\label{fig:alis_vs_lbl}
\end{figure*}
Fig.~\ref{fig:alis_vs_lbl} shows calculations for the NO$_2$ profile
corresponding to medium polluted conditions. The top left panel shows the
reflectance, where the grey line corresponds to a MYSTIC calculation
without using ALIS, i.e. all wavelengths are simulated subsequently
using 10$^7$ photons for each wavelength. The standard deviation for
each wavelength is about 0.03\%. This calculation took
33~h~12~min on one CPU. The black line shows the calculations
using ALIS with different numbers of photons. The calculation using
10$^3$ photons took 0.9~s, the one with 10$^5$ photons took 38~s, and
the one with 10$^7$ photons took 63~min~53~s.
Obviously the absorption features of NO$_2$ are barely visible in the
reflectance plot. There is a deviation between the simulations
which is due to the
statistical error of the simulation using ALIS with 10$^3$ or 10$^5$
photons. The top middle panel shows the relative differences of the
ALIS simulations w.r.t. DISORT, which requires 30~s computation time
with 32 streams. Obviously the relative difference decreases with
increasing number of photons. The top right panel shows the relative
difference between the MYSTIC calculation without ALIS and DISORT. The
relative difference is less than 0.1\% and shows the typical Monte
Carlo noise.
The bottom left panel shows the differential optical thicknesses
derived from the
simulations. Here the statistical noise of the MYSTIC calculation
without ALIS (grey line) is clearly visible. All ALIS simulations
result in a smooth differential optical thickness because all wavelengths are calculated
based on the same photon paths. This yields a relative deviation in the
reflectance which is independent of wavelength and can be removed
completely by subtracting the fitted polynomial.
The bottom middle panel shows the absolute difference between the
differential optical thicknesses derived from the ALIS simulations and
the one derived from the DISORT simulation. Even for
the simulation with only 10$^3$ photons the differential optical
thickness is quite accurate, the difference w.r.t. DISORT is of the order of a few
per cent. Using 10$^5$ photons or more yields very accurate
differential optical thicknesses, the difference is here well below
1\%. The bottom right panel shows the difference between
the MYSTIC calculation
without ALIS and the DISORT calculation. The difference is of the same
order of magnitude as the differential optical thickness itself and
hence the accuracy of this simulation would not be sufficient for
NO$_2$ retrievals.
Fig.~\ref{fig:alis_vs_lbl} clearly demonstrates that a common Monte
Carlo approach which calculates wavelength by wavelength
sequentially is extremely inefficient for simulations of the
differential optical thickness because each wavelength has an
independent statistical error which is larger than the
absorption features unless a huge number of photons is used.
In order to obtain a result with an accuracy
comparable to ALIS with 10$^3$ photons (0.9~s computation time),
MYSTIC without ALIS would require at least 10$^9$ photons per wavelength
which would take about 138~days computation time.
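The quoted photon number follows from the standard Monte Carlo scaling $\sigma \propto 1/\sqrt{N}$; the back-of-the-envelope sketch below makes this explicit (the 0.003\% target is our assumption standing in for ``accuracy comparable to ALIS with $10^3$ photons''):

```python
def photons_for_target(n_ref, sigma_ref, sigma_target):
    """Photons needed to reach sigma_target, assuming the standard
    Monte Carlo scaling sigma ~ 1/sqrt(N)."""
    return n_ref * (sigma_ref / sigma_target) ** 2

# Reference point from the text: 10^7 photons give ~0.03% per wavelength.
# Reducing the independent per-wavelength noise by a further factor of 10
# (assumed target: 0.003%, below the NO2 absorption features) costs a
# factor of 100 in photons:
print(photons_for_target(1e7, 0.03, 0.003))  # ~1e9, i.e. the quoted 10^9
```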
\begin{figure}[h]
\centering
\includegraphics[width=.48\textwidth]{./diff_opt_depth_no2_baum_tau3_sza30_cf50_vroom_alb0_05.pdf}
\caption{Impact of cirrus clouds on a spectrum used for NO$_2$
retrievals. The cirrus clouds are modeled as 1$\times$1$\times$1~km$^3$
cubes, the cloud fraction in the model domain is 0.5. Black lines
correspond to the independent pixel (IPA) calculation and grey lines to
the 3D calculation. Upper panel: The thick lines show the reflectance $R$
of the full domain. The dashed line shows the clear-sky pixels only
($R_{\rm clear}$) and the thin solid line shows the cloudy pixels ($R_{\rm
cloud}$) for the IPA calculation. The lower panel shows corresponding
differential optical thicknesses, where the lines styles are defined as above.}
\label{fig:doas_cloud}
\end{figure}
The impact of cirrus clouds on tropospheric NO$_2$ retrievals has been
investigated in a sensitivity study by \citet{vidot2010}. They take into
account the sub-pixel inhomogeneity by a simple independent pixel
approach. Using our Monte Carlo approach we can take into account full
3-dimensional radiative transfer, e.g. the interaction between the cloudy and
the clear-sky part of the domain. Fig.~\ref{fig:doas_cloud} shows an example
where the setup is similar to the study by \citet{vidot2010}. We have taken a
very simple 3D cloud field, the cirrus clouds were modeled as
1$\times$1$\times$1~km$^3$ cubes and arranged as a chess-board, hence the
cloud fraction $c$ is 0.5. The surface albedo is 0.05 and the solar
zenith angle is 30$^\circ$. The optical thickness of the clouds is 3, the
geometrical thickness is 1~km, the cloud base height is 10~km and the
effective radius is 30~$\mu$m where the parameterization by
\citet{baum05a:_bulk, baum05b:_bulk} was used for the cirrus optical
properties. The wavelength range is
400~nm to 470~nm. We performed 3D calculation and also used the independent
pixel approximation (IPA) for comparison. All simulations shown in
Fig.~\ref{fig:doas_cloud} were calculated using MYSTIC with ALIS.
The reflectance for the IPA simulation is
calculated as the sum of the reflectance of the clear-sky part $R_{\rm clear}$
and the reflectance of the cloudy part $R_{\rm cloud}$ weighted with the cloud
fraction:
\begin{equation}
R=cR_{\rm cloud} + (1-c)R_{\rm clear}
\end{equation}
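The IPA combination is simply a cloud-fraction-weighted average; the reflectance values in this Python check are illustrative, not taken from the simulations:

```python
def ipa_reflectance(cloud_fraction, r_cloud, r_clear):
    """Independent pixel approximation: cloud-fraction-weighted mix
    of cloudy and clear-sky reflectances."""
    return cloud_fraction * r_cloud + (1.0 - cloud_fraction) * r_clear

# c = 0.5 as in the chess-board scene described below.
print(ipa_reflectance(0.5, 0.30, 0.06))
```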
In order to speed up the calculations in the presence of clouds, the variance
reduction technique VROOM \citep{buras2011} was used. With VROOM,
10$^5$ photons are sufficient to obtain an accurate
result with a standard deviation of approximately 0.5\%.
The 3D calculation using these settings took 1~min~34~s; an IPA calculation
using DISORT with 32 streams took 58~s.
Fig.~\ref{fig:doas_cloud} shows a part of the spectrum where
we have pronounced features in the differential optical depth. In the
top panel one can see
that for all wavelengths the reflectance in the 3D calculation is smaller than
in the IPA calculation, because photons which are scattered out of the clouds
on the sides have a higher probability of being transmitted to the surface.
The bottom panel of Fig.~\ref{fig:doas_cloud} shows the differential
optical thickness. The difference between IPA and 3D is in this
case about 10\% which will cause an error of some per cent in the
tropospheric NO$_2$ retrievals. Note that this calculation is only an
example to demonstrate the new algorithm to calculate high spectral
resolution spectra using the Monte Carlo method. With different setups
the error on the retrieval can be completely different.
\section{Conclusions}
We have developed the new method ALIS (absorption lines importance sampling)
that allows computing polarized radiances in high spectral resolution using
the Monte Carlo method in a very efficient way. We sample random photon paths
at one wavelength. For these random paths we calculate a spectral absorption
weight using the wavelength dependent absorption coefficients
of the model boxes. In order to correct for the wavelength dependence of Rayleigh
scattering an importance sampling method is applied. If necessary the same
method can be applied to correct for the spectral dependence of cloud and
aerosol scattering. The method allows us to calculate radiances for
many wavelengths at the same time without significantly increasing the
computational cost. ALIS has been implemented in the MYSTIC model
which handles three-dimensional radiative transfer in cloudy atmospheres including
polarization and complex topography.
The new algorithm ALIS has been validated by comparison to the
well-known and well-tested DISORT solver. It has been shown that ALIS
does not produce any bias apart from the statistical noise. Since all
wavelengths are computed at once, the statistical error is the same at
all wavelengths, which results mainly in a relative deviation that is
independent of wavelength. However, for remote sensing applications,
where mostly differential absorption features are of interest, this
deviation does not matter.
Two example applications are shown. First the simulation of polarized
near-infrared spectra over an ocean surface as measured by
e.g.~GOSAT. Here we simulated the Stokes vector with a standard deviation
smaller than 0.05\% for 400 spectral
points in 2 minutes 25 seconds on a single PC. For a standard
deviation of 0.5\% the calculation would be 100 times faster. These short
computation times show that the algorithm has the potential to be used
as a forward model for trace gas retrievals from polarized radiance
measurements, in particular since commonly used discrete ordinate
methods become much slower when they include polarization.
The second example is the simulation of the differential optical
thickness from 400~nm to 470~nm which is used to retrieve NO$_2$ from
e.g. SCIAMACHY. Here the computation time for accurate (scalar)
simulations was comparable to DISORT. We performed this calculation also for
an inhomogeneous cloud scene where cirrus clouds are approximated by
simple cubes. We compared the result of the 3D simulation with an
independent pixel calculation and found a difference of about 10~\% in
the differential optical thickness for this example. The calculations
show that ALIS is suitable to study effects of horizontal
inhomogeneity on trace gas retrievals in the presence of cirrus clouds.
\section*{Acknowledgement}
We thank Timothy E. Dowling for translating the DISORT code from FORTRAN77 to
C which resulted in great improvements regarding numerical accuracy
and computation time. Furthermore
we thank Jer\^{o}me Vidot for providing NO$_2$ profiles. This work was done
within the project RESINC2 funded by the ``Deutsche
Forschungsgemeinschaft'' (DFG).
\section*{References}
\bibliographystyle{elsarticle-num-names}
\input{emde_JQSRT_20110321.bbl}
\end{document}
\section{Introduction}
Tabling~\cite{Chen-96} is a recognized and powerful implementation
technique that overcomes some limitations of traditional Prolog
systems in dealing with recursion and redundant
sub-computations. Tabling is a refinement of SLD resolution that stems
from one simple idea: save intermediate answers from past computations
so that they can be reused when a \emph{similar call} appears during
the resolution process. Tabling based models are able to reduce the
search space, avoid looping, and always terminate for programs with
the bounded term-size property~\cite{Chen-96}.
Tabling has become a popular and successful technique thanks to the
ground-breaking work in the XSB Prolog system and in particular in the
SLG-WAM engine~\cite{Sagonas-98}, the most successful engine of
XSB. The success of SLG-WAM led to several alternative implementations
that differ in the execution rule, in the data-structures used to
implement tabling, and in the changes to the underlying Prolog
engine. Currently, the tabling technique is widely available in
systems like XSB Prolog~\cite{Swift-12}, Yap Prolog~\cite{CostaVS-12},
B-Prolog~\cite{Zhou-12}, ALS Prolog~\cite{Guo-01},
Mercury~\cite{Somogyi-06}, Ciao Prolog~\cite{Chico-08} and more
recently in SWI Prolog~\cite{Desouter-15} and Picat~\cite{Zhou-15}.
One of the main advantages of Prolog is its potential for the
\emph{implicit exploitation of parallelism}. Many sophisticated and
well-engineered parallel Prolog systems exist in the
literature~\cite{Gupta-01}, the most successful being those that
exploit \emph{implicit
or-parallelism}~\cite{Aurora-88,Ali-90a,Gupta-99}, \emph{implicit
and-parallelism}~\cite{Hermenegildo-91,Shen-92,Pontelli-97} or a
combination of both~\cite{CostaVS-91}. Or-parallelism arises when more
than one clause unifies with the current call and it corresponds to
the simultaneous execution of the body of those different
clauses. And-parallelism arises when more than one subgoal occurs in
the body of the clause and it corresponds to the simultaneous
execution of the subgoals contained in a clause's body.
On the other hand, as a high-level language, Prolog is also often used
as a means to \emph{explicitly control and schedule concurrent
tasks}~\cite{Carro-99,Fonseca-09a}. The ISO Prolog multithreading
standardization proposal~\cite{Moura-08b} is currently implemented in
several Prolog systems including XSB, Yap, Ciao and SWI, providing a
highly portable solution given the number of operating systems
supported by these systems. In a nutshell, multithreading in Prolog is
the ability to concurrently perform multiple computations, in which
each computation runs independently but shares the database
(clauses). It is therefore unsurprising that implicit and explicit
concurrent/parallel evaluation has been an important subject in the
design and development of Prolog systems.
Nowadays, the increasing availability of computing systems with
multiple cores sharing the main memory is already a standardized,
high-performance and viable alternative to the traditional (and often
expensive) shared memory architectures. The number of cores per
processor is expected to continue to increase, further expanding the
potential for taking advantage of such support as an increasingly
popular way to implement dynamic, highly asynchronous, concurrent and
parallel programs.
There are two traditional approaches to concurrency/parallelism: (i)
\emph{fully implicit}, i.e., it is left to the runtime system to
automatically detect the potential concurrent tasks in the program,
assign them for parallel execution, and control and synchronize their
execution; and (ii) \emph{fully explicit}, i.e., it is left to the
user to annotate the tasks for concurrent execution, assign them to
the available workers, and control the execution and the
synchronization points. Besides these two approaches, recent years
have seen many proposals that try to combine both in such a way that
the user relies on high-level explicit parallel constructs to trigger
parallel execution, while the control of the low-level execution
details is left to the runtime system. In the combined approach, in general, a
program begins as a single worker that executes sequentially until
reaching a \emph{parallel construct}. When reaching a parallel
construct, the runtime system launches a set of additional workers to
exploit concurrently the sub-computation at hand. Concurrent execution
is then handled implicitly by the execution model taking into account
additional directive restrictions given to the parallel construct.
Multiple examples of frameworks exist that follow the combined
approach. For example, for imperative programming languages, the
OpenMP~\cite{Chapman-08}, Intel Threading Building
Blocks~\cite{Reinders-07} and Cilk~\cite{Blumofe-1995} frameworks
provide runtime systems for multithreaded parallel programming,
providing users with the means to create, synchronize, and schedule
threads efficiently. For functional programming languages, the
Eden~\cite{Loogen-05} and HDC~\cite{Herrmann-00} Haskell based
frameworks allow the users to express their programs using polymorphic
higher-order functions. For object-oriented programming languages,
MALLBA~\cite{Alba-02} and DPSKEL~\cite{Pelaez-07} frameworks also
showed relevant speedups in the parallel evaluation of combinatorial
optimization benchmarks.
In the specific case of Prolog, given the advantages of tabled
evaluation, the question that arises is if a tabling mechanism has the
potential for the exploitation of concurrency/parallelism. On one
hand, tabling still exploits a search space as traditional Prolog, but
on the other hand, the concurrent model of tabling is necessarily far
more complex than the traditional concurrent models, since it also
introduces concurrency on the access to the tables. In a concurrent
tabling system, tables may be either \emph{private} or \emph{shared}
between workers. On one hand, private tables can be easier to
implement but restrict the degree of concurrency. On the other hand,
shared tables have all the associated issues of locking,
synchronization and potential deadlocks. Here, the problem is even
more complex because we need to ensure the correctness and
completeness of the answers found and stored in the shared
tables. Thus, despite the availability of both threads and tabling in
Prolog compilers such as XSB, Yap, Ciao and SWI, the implementation of
these two features such that they work together seamlessly implies
complex ties to one another and to the underlying engine.
To the best of our knowledge, only the XSB and Yap systems support the
combination of tabling with some form of concurrency/parallelism. In
XSB, the SLG-WAM execution model was extended with a \emph{shared
tables design}~\cite{Marques-08} to support explicit concurrent
tabled evaluation using threads. It uses a semi-naive approach that,
when a set of subgoals computed by different threads is mutually
dependent, then a \emph{usurpation operation} synchronizes threads and
a single thread assumes the computation of all subgoals, turning the
remaining threads into consumer threads. The design ensures the
correct execution of concurrent sub-computations but the experimental
results showed some limitations~\cite{Marques-10}. Yap implements both
implicit and explicit concurrent tabled evaluation, but
separately. The OPTYap design~\cite{Rocha-05a} combines the
tabling-based SLG-WAM execution model with implicit or-parallelism
using shared memory processes. More recently, a second design supports
explicit concurrent tabled evaluation using threads~\cite{Areias-12a},
but using an alternative view to XSB's design. In Yap's design, each
thread has its own tables, i.e., from a thread point of view the
tables are private, but at the engine level it uses a \emph{common
table space}, i.e., from the implementation point of view the tables
are shared among threads.
In this paper, we summarize Yap's main developments and contributions
to concurrent tabled evaluation and we describe the design and
implementation challenges of several alternative table space designs
for implicit and explicit concurrent tabled evaluation which represent
different trade-offs between concurrency and memory usage. We also
motivate for the advantages of using \emph{fixed-size} and
\emph{lock-free} data structures for concurrent tabling and we
elaborate on the key role that the engine's \emph{memory allocator}
plays on such an environment where a higher number of simultaneous
memory requests for data structures in the table space can be made by
multiple workers. We also discuss how Yap's mode-directed tabling
support~\cite{Santos-13} can be extended to concurrent
evaluation. Mode-directed tabling is an extension to the tabling
technique that allows the aggregation of answers by specifying
pre-defined modes such as \emph{min} or \emph{max}. Mode-directed
tabling can be viewed as a natural tool to implement dynamic
programming problems, where a general recursive strategy divides a
problem in simple sub-problems whose goal is, usually, to dynamically
calculate optimal or selective answers as new results arrive.
Finally, we present our future perspectives towards an efficient and
novel concurrent framework which integrates both implicit and explicit
concurrent tabled evaluations in a single tabling engine. This is a
very complex task since we need to combine the explicit control
required to launch, assign and schedule tasks to workers, with the
built-in mechanisms for handling tabling and/or implicit concurrency,
which cannot be controlled by the user. Such a framework could renew
the glamour of Prolog systems, especially in the concurrent/parallel
programming community. Combining the inherent implicit parallelism of
Prolog with explicit high-level parallel constructs will clearly
enhance the expressiveness and the declarative style of tabling, and
simplify concurrent programming.
In summary, the main contributions of this paper are: (i) a systematic
presentation of the different alternative table space designs
implemented in Yap for implicit and explicit concurrent tabled
evaluation (which were dispersed by several publications); (ii) a
formalization of the total memory usage of each table space design,
which allows for a more rigorous comparison and demonstrates how each
design is dependent on the number of workers and on the number of
tabled calls in evaluation; (iii) a performance analysis of Yap's
tabling engine highlighting how independent concurrent flows of
execution interfere at the low-level engine and how dynamic
programming problems fit well with concurrent tabled evaluation; and
(iv) the authors' perspectives towards a future concurrent framework
which integrates both implicit and explicit concurrent tabled
evaluations in a single tabling engine.
The remainder of the paper is organized as follows. First, we
introduce some basic concepts and relevant background. Then, we
present the alternative table space designs for implicit and explicit
concurrent tabled evaluation. Next, we discuss the most important
engine components and implementation challenges to support concurrent
tabled evaluation and we show a performance analysis of Yap's tabling
engine when using different table space designs. At last, we discuss
future perspectives and challenging research directions.
\section{Background}
This section introduces relevant background needed for the following
sections. It briefly describes Yap's original table space organization
and presents Yap's approach for supporting mode-directed tabling.
\subsection{Table Space Organization}
The basic idea behind tabling is straightforward: programs are
evaluated by saving intermediate answers for tabled subgoals so that
they can be reused when a \emph{similar call} appears during the
resolution process. First calls to tabled subgoals are considered
\emph{generators} and are evaluated as usual, using SLD resolution,
but their answers are stored in a global data space, called the
\emph{table space}. Similar calls are called \emph{consumers} and are
resolved by consuming the answers already stored for the corresponding
generator, instead of re-evaluating them against the program
clauses. During this process, as further new answers are found, they
are stored in their table entries and later returned to all similar
calls.
Call similarity thus determines if a subgoal will produce its own
answers or if it will consume answers from a generator call. There are
two main approaches to determine if a subgoal $A$ is similar to a
subgoal $B$:
\begin{itemize}
\item \emph{Variant-based tabling}~\cite{RamakrishnanIV-99}: $A$ and $B$
are variants if they can be identical through variable renaming. For
example, $p(X,1,Y)$ and $p(W,1,Z)$ are \emph{variants} because both
can be renamed into $p(VAR_0,1,VAR_1)$.
\item \emph{Subsumption-based tabling}~\cite{Rao-96}: subgoal $A$ is
considered similar to $B$ if $A$ is \emph{subsumed} by $B$ (or $B$
\emph{subsumes} $A$), i.e., if $A$ is more specific than $B$ (or an
instance of). For example, subgoal $p(X,1,2)$ is subsumed by
subgoal $p(Y,1,Z)$ because there is a substitution $\{Y=X,Z=2\}$
that makes $p(X,1,2)$ an instance of $p(Y,1,Z)$.
\end{itemize}
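The variant check can be sketched by renaming variables to a canonical form; in the following Python fragment, tuples stand in for compound terms and capitalized strings for variables (an illustration, not Yap's implementation):

```python
def canonical(term):
    """Rename variables (capitalized strings) to VAR_0, VAR_1, ... in
    order of first appearance, so that two terms are variants iff
    their canonical forms are equal."""
    mapping = {}

    def walk(t):
        if isinstance(t, tuple):                    # compound term
            return (t[0],) + tuple(walk(a) for a in t[1:])
        if isinstance(t, str) and t[:1].isupper():  # variable
            if t not in mapping:
                mapping[t] = "VAR_%d" % len(mapping)
            return mapping[t]
        return t                                    # atom / number

    return walk(term)

# p(X,1,Y) and p(W,1,Z) are variants: both map to p(VAR_0,1,VAR_1).
print(canonical(("p", "X", 1, "Y")) == canonical(("p", "W", 1, "Z")))  # True
```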
Variant-based tabling has been researched first and is arguably better
understood. For some types of programs, subsumption-based tabling
yields superior time performance~\cite{Rao-96,Johnson-99}, as it
allows greater reuse of answers, and better space usage, since the
answer sets for the subsumed subgoals are not stored. However, the
mechanisms to efficiently support subsumption-based tabling are harder
to implement, which makes subsumption-based tabling not as popular as
variant-based tabling. The Yap Prolog system implements both
approaches for sequential tabling~\cite{CostaVS-12,Cruz-10}, but for
concurrent tabled evaluation, Yap follows the variant-based tabling
approach.
\begin{wrapfigure}{R}{7.5cm}
\includegraphics[width=7.5cm]{figures/table_space_original.pdf}
\caption{Yap's original table space organization}
\label{fig_table_space_original}
\vspace{-\intextsep}
\end{wrapfigure}
A critical component in the implementation of an efficient tabling
system is thus the design of the data structures and algorithms to
access and manipulate the table space. Yap uses \emph{trie data
structures} to implement efficiently the table
space~\cite{RamakrishnanIV-99}. Tries are trees in which common
prefixes are represented only once. The trie data structure provides
complete discrimination for terms and permits lookup and possible
insertion to be performed in a single pass through a term, hence
resulting in a very efficient and compact data structure for term
representation.
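The single-pass lookup-or-insert behaviour of tries can be sketched as follows (a Python illustration in which terms are pre-flattened into token sequences; Yap's actual tries are C structures with additional node fields):

```python
class TrieNode:
    """Trie sketch: each node maps a token (functor, atom or variable
    marker) to a child node, so common prefixes are stored only once."""
    def __init__(self):
        self.children = {}
        self.is_leaf = False

    def insert(self, tokens):
        """Single pass through a term: returns True if the term is new,
        False if an identical term was already present."""
        node = self
        for tok in tokens:
            node = node.children.setdefault(tok, TrieNode())
        new = not node.is_leaf
        node.is_leaf = True
        return new

    def count_nodes(self):
        return 1 + sum(c.count_nodes() for c in self.children.values())

root = TrieNode()
root.insert(("p", "a", 1))   # p(a,1)
root.insert(("p", "a", 2))   # p(a,2) shares the prefix p, a
print(root.count_nodes())    # root + p + a + 1 + 2 = 5 nodes
```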
Figure~\ref{fig_table_space_original} shows the original table space
organization for a tabled predicate $P_i$ in Yap. At the entry point,
we have the \emph{table entry} data structure. This structure stores
common information for the tabled predicate, such as the predicate's
arity or the predicate's evaluation strategy, and it is allocated when
the predicate is being compiled, so that a pointer to the table entry
can be included in its compiled code. This guarantees that further
calls to the predicate will access the table space starting from the
same point. Below the table entry, we have the \emph{subgoal trie
structure}. Each different tabled subgoal call $P_{i.j}$ to the
predicate corresponds to a unique path through the subgoal trie
structure, always starting from the table entry, passing by several
subgoal trie data units, the \emph{subgoal trie nodes}, and reaching a
leaf data structure, the \emph{subgoal frame}. The subgoal frame
stores additional information about the subgoal and acts like an entry
point to the \emph{answer trie structure}. Each unique path through
the answer trie data units, the \emph{answer trie nodes}, corresponds
to a different tabled answer to the entry subgoal.
\subsection{Mode-Directed Tabling and Dynamic Programming}
The tabling technique can be viewed as a natural tool to implement
dynamic programming problems. Dynamic programming is a general
recursive strategy that consists in dividing a problem in simple
sub-problems that, often, are the same. Tabling is thus suitable to
use with this kind of problems since, by storing and reusing
intermediate results while the program is executing, it avoids
performing the same computation several times.
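In its simplest form, this storing and reusing of intermediate answers is memoization; the Python sketch below is illustrative only (real tabling additionally handles loops and incomplete tables, which plain memoization does not), and shows the recomputation that is avoided:

```python
calls = {"plain": 0, "tabled": 0}

def fib_plain(n):
    calls["plain"] += 1
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

table = {}
def fib_tabled(n):
    calls["tabled"] += 1
    if n in table:                 # "consumer": reuse a stored answer
        return table[n]
    answer = n if n < 2 else fib_tabled(n - 1) + fib_tabled(n - 2)
    table[n] = answer              # "generator": store the answer
    return answer

assert fib_plain(20) == fib_tabled(20) == 6765
print(calls)  # the tabled version avoids the exponential recomputation
```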
In a traditional tabling system, all arguments of a tabled subgoal
call are considered when storing answers into the table space. When a
new answer is not a variant of any answer that is already in the table
space, then it is always considered for insertion. Therefore,
traditional tabling is very good for problems that require storing all
answers. However, with dynamic programming, usually, the goal is to
dynamically calculate optimal or selective answers as new results
arrive. Solving dynamic programming problems can thus be a difficult
task without further support.
\emph{Mode-directed tabling} is an extension to the tabling technique
that supports the definition of \emph{modes} for specifying how
answers are inserted into the table space. Within mode-directed
tabling, tabled predicates are declared using statements of the form
`$table~p(m_1,...,m_n)$', where the $m_i$'s are \emph{mode operators}
for the arguments. The idea is to define the arguments to be
considered for variant checking (the index arguments) and how variant
answers should be tabled regarding the remaining arguments (the output
arguments). In Yap, index arguments are represented with mode
\emph{index}, while arguments with modes \emph{first}, \emph{last},
\emph{min}, \emph{max}, \emph{sum} and \emph{all} represent output
arguments~\cite{Santos-13}. After an answer is generated, the system
tables the answer only if it is \emph{preferable}, according to the
meaning of the output arguments, to an existing variant answer.
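The effect of an output mode such as \emph{min} or \emph{max} can be sketched as keeping only the preferable answer per combination of index arguments (a Python illustration in which invalidation is modeled by plain replacement; the shortest-path declaration in the comment is hypothetical):

```python
def insert_mode_directed(table, index_args, output_value, mode):
    """Sketch of the 'min'/'max' mode operators: keep only the
    preferable output value for each tuple of index arguments."""
    prefer = min if mode == "min" else max
    if index_args in table:
        table[index_args] = prefer(table[index_args], output_value)
    else:
        table[index_args] = output_value

# e.g. :- table shortest_path(index, index, min).  (illustrative)
t = {}
insert_mode_directed(t, ("a", "d"), 7, "min")
insert_mode_directed(t, ("a", "d"), 5, "min")  # preferable: replaces 7
insert_mode_directed(t, ("a", "d"), 9, "min")  # not preferable: discarded
print(t)  # {('a', 'd'): 5}
```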
In Yap, mode-directed tabled predicates are compiled by extending the
table entry data structure to include a \emph{mode array}, where the
information about the modes is stored, and by extending the subgoal
frames to include a \emph{substitution array}, where the mode
information is stored together with the number of free variables
associated with each argument in the subgoal
call~\cite{Santos-13}. When a new answer is found, it must be compared
against the answer(s) already stored in the table, according to the
modes defined for the corresponding arguments. If the new answer is
preferable, the old answer(s) must be \emph{invalidated} and the new
one inserted in the table. The invalidation process consists in: (a)
deleting all intermediate answer trie nodes corresponding to the
answers being invalidated; and (b) tagging the leaf nodes of such
answers as invalid nodes. Invalid nodes are only deleted when the
table is later completed or abolished.
Regarding the table space designs that we present next, the support
for mode-directed tabling is straightforward when the table data
structures are not accessed concurrently for write operations. The
problem arises for the designs which do not require the completion of
tables to share answers, since we need to efficiently support
concurrent delete operations on the trie structures and correctly
handle the interface between consumer calls and the navigation in the
answer tries.
\section{Concurrent Table Space Designs}
\label{sec_concurrent_table_designs}
This section presents alternative table space designs for implicit and
explicit concurrent tabled evaluation, which represent different
trade-offs between concurrency and memory usage.
\subsection{Implicit versus Explicit Concurrent Tabled Evaluation}
\label{sec_implicit_explicit}
Remember the two traditional approaches to concurrency/parallelism:
\emph{fully implicit} and \emph{fully explicit}. With fully implicit,
it is left to the runtime system to automatically detect the potential
concurrent tasks in the program, assign them for concurrent/parallel
execution and control and synchronize their execution. In such
approach, the running workers (processes, threads or both) often share
the data structures representing the data of the problem since tasks
do not need to be pre-assigned to workers as any worker can be
scheduled to perform an unexplored concurrent task of the problem. For
tabling, that means that the table space data structures must be fully
shared among all workers. This is the case of the OPTYap
design~\cite{Rocha-05a}, which combines the tabling-based SLG-WAM
execution model with implicit or-parallelism using shared memory
processes.
On the other hand, with a fully explicit approach, it is left to the
user to annotate the tasks for concurrent execution, assign them to
the available workers and control the execution and the synchronization
points. In such approach, the running workers often execute
independently a well-defined (set of) task(s). For tabling, that means
that each evaluation only depends on the computations being performed
by the worker itself, i.e., a worker does not need to consume answers
from other workers' tables as it can always be the generator for all
of its subgoal calls. These are the cases of XSB~\cite{Marques-08} and
Yap~\cite{Areias-12a} designs which support explicit concurrent tabled
evaluation using threads. In any case, the table space data structures
can be either private or partially shared between workers. Yap
proposes several alternative designs to implement the table space for
explicit concurrent tabled
evaluation. Table~\ref{tab_table_space_summary} overviews the several
Yap's table space designs and how they differ in the way the internal
table data structures are implemented and accessed. In the following
subsections, we present the several designs and we show a detailed
analysis of the memory usage of each.
\begin{table}[!ht]
\centering
\caption{Yap's table space designs -- Cooperative Sharing (CS),
No-Sharing (NS), Subgoal-Sharing (SS), Full-Sharing (FS), Partial
Answer Sharing (PAS) and Private Answer Chaining (PAC) -- and the
implementation and access of the data structures in each design: as
private data structures (--); as fully shared data structures (F);
as partially shared data structures (P); and as data structures with
concurrent read (r) and concurrent write (w) operations.}
\begin{tabular}{c|cccccc}
\hline
\textbf{\emph{Data}} & \textbf{\emph{Implicit}} & \multicolumn{5}{c}{\textbf{\emph{Explicit}}} \\ \cline{3-7}
\textbf{\emph{Structure}} & \textbf{\emph{CS}} & \textbf{\emph{NS}} & \textbf{\emph{SS}} & \textbf{\emph{FS}} & \textbf{\emph{PAS}} & \textbf{\emph{PAC}} \\
\hline\hline
\textbf{\emph{Table Entry}} & $F(r)$ & $F(r)$ & $F(r)$ & $F(r)$ & $F(r)$ & $F(r)$ \\
\textbf{\emph{Subgoal Trie}} & $F(rw)$ & -- & $F(rw)$ & $F(rw)$ & $F(rw)$ & $F(rw)$ \\
\textbf{\emph{Subgoal Frame}} & $F(rw)$ & -- & -- & $P(rw)$ & $P(r)$ & $P(rw)$ \\
\textbf{\emph{Answer Trie}} & $F(rw)$ & -- & -- & $F(rw)$ & $P(r)$ & $P(rw)$ \\
\hline
\end{tabular}
\label{tab_table_space_summary}
\end{table}
\subsection{Cooperative Sharing Design}
The \emph{Cooperative Sharing (CS)} design supports the combination of
tabling with implicit or-parallelism using shared memory
processes~\cite{Rocha-05a}. The CS design was the first concurrent
table space design implemented in Yap Prolog. It follows Yap's
original table space organization, as shown in
Fig.~\ref{fig_table_space_original}, and extends it with
synchronization mechanisms to deal with concurrent accesses. In what
follows, we will not consider synchronization mechanisms which require
extending the table space data structures with extra fields, like lock
fields, since several synchronization techniques exist that do not
require an actual lock field. Two examples are: (i) the usage of an
external global array of locks; or (ii) the usage of low level
\emph{Compare-And-Swap (CAS)} operations. We discuss this in more
detail in section~\ref{sec_engine_components}.
Remember from Fig.~\ref{fig_table_space_original} that, at the entry
point, we have a table entry ($TE$) data structure for each tabled
predicate $P_i$. Underneath each $TE$, we have a subgoal trie
($ST(P_i)$) and several subgoal frame ($SF$) data structures for each
tabled subgoal call $P_{i.j}$ made to the predicate. Finally,
underneath each $SF$, we have an answer trie ($AT(P_{i.j})$) structure
with the answers for the corresponding subgoal call $P_{i.j}$. Please
note that the size of the $TE$ and $SF$ data structures is fixed and
independent from the predicate, but the size of the $ST(P_i)$ and
$AT(P_{i.j})$ data structures varies according to the number of
subgoal calls made and answers found during tabled evaluation.
We can now formalize the \emph{Total Memory Usage (TMU)} of the CS
design. For this, we assume that all tabled predicates are completely
evaluated, meaning that the engine will not allocate any further data
structures on the table space. Given $NP$ tabled predicates,
Eq.~\ref{equation_tmu_cs} presents the $TMU$ of the CS design
($TMU_{CS}$).
\begin{equation}
\begin{aligned}
& TMU_{CS} = \sum\limits_{i = 1}^{NP} MU_{CS}(P_i) \\
& where~~
MU_{CS}(P_i) = TE + ST(P_i) + \sum\limits^{NC(P_i)}_{j=1} [SF +
AT(P_{i.j})]
\end{aligned}
\label{equation_tmu_cs}
\end{equation}
The $TMU_{CS}$ is given by the summation of the \emph{Memory Usage
(MU)} of each predicate $P_i$, i.e., the $MU_{CS}(P_i)$ values, which
correspond then to the sum of each structure inside the table space
for the corresponding predicate $P_i$. The $TE$, $ST(P_i)$, $SF$ and
$AT(P_{i.j})$ values represent the amount of the memory used by
predicate $P_i$ in its table entry, subgoal trie, subgoal frames and
answer trie structures, respectively, and the $NC(P_i)$ value
represents the number of different tabled subgoal calls made to the
predicate. For example, in Fig.~\ref{fig_table_space_original}, the
value of $NC(P_i)$ is $n$.
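To make Eq.~\ref{equation_tmu_cs} concrete, the following Python
sketch computes $TMU_{CS}$ for two hypothetical predicates; the byte
sizes are made up for illustration only and do not correspond to Yap's
actual structure sizes:

```python
# Sketch of Eq. (TMU_CS) with hypothetical per-structure sizes in bytes.
TE = 64   # fixed-size table entry
SF = 96   # fixed-size subgoal frame

def mu_cs(st_size, at_sizes):
    """Memory usage of one predicate: TE + ST + sum over calls of (SF + AT)."""
    return TE + st_size + sum(SF + at for at in at_sizes)

def tmu_cs(predicates):
    """Total memory usage: summation of MU_CS over all predicates."""
    return sum(mu_cs(st, ats) for st, ats in predicates)

# Two predicates: P1 with two subgoal calls, P2 with one.
preds = [(128, [256, 512]), (64, [1024])]
total = tmu_cs(preds)
```

Here the length of each `at_sizes` list plays the role of $NC(P_i)$.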
As a final remark, please note that the total memory usage of the CS
design ($TMU_{CS}$) is the same as the total memory usage of Yap's
original table space organization ($TMU_{ORIG}$). Thus, in what
follows, we will use the $TE$, $ST(P_i)$, $SF$ and $AT(P_{i.j})$
values as the reference values for comparison against the other
concurrent table space designs.
\subsection{No-Sharing Design}
Yap implements explicit concurrent tabled evaluation using threads in
which each thread's computation only depends on the evaluations being
performed by the thread itself. The \emph{No-Sharing (NS)} design was
the starting design for supporting explicit concurrent tabled
evaluation in Yap~\cite{Areias-12a}. In the NS design, each thread
allocates fully private tables for each new tabled subgoal being
called. In this design, only the $TE$ structure is shared among
threads. Figure~\ref{fig_table_space_no_sharing} shows the
configuration of the table space for the NS design. For the sake of
simplicity, the figure only shows the configuration for a particular
predicate $P_i$ and a particular subgoal call $P_{i.j}$.
\begin{wrapfigure}{R}{7.5cm}
\includegraphics[width=7.5cm]{figures/table_space_no_sharing.pdf}
\caption{Table space organization for the NS design}
\label{fig_table_space_no_sharing}
\vspace{-\intextsep}
\end{wrapfigure}
The table entry still stores the common information for the predicate
but it is extended with a bucket array ($BA$), where each thread $T_k$
has its own entry, which then points to the private $ST(P_i)$, $SF$
and $AT(P_{i.j})$ data structures of the thread. Each bucket array
contains as many entry cells as the maximum number of threads that can
be created in Yap (currently, Yap supports 1024 simultaneous
threads). In practice, however, this solution can be highly
inefficient and memory consuming, as this huge bucket array must
always be allocated even when only one thread will use it. To solve this
problem, we introduce a kind of \emph{inode pointer structure}, where
the bucket array is split into direct bucket cells and indirect bucket
cells~\cite{Areias-12a}. The direct bucket cells are used as before,
but the indirect bucket cells are allocated only as needed, which
alleviates the memory problem and easily adjusts to a higher maximum
number of threads. This direct/indirect organization is applied to all
bucket arrays.
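A minimal Python sketch of this direct/indirect organization (the cell
and group sizes below are hypothetical, not Yap's actual constants)
illustrates how indirect groups are only allocated on demand:

```python
# Sketch of the direct/indirect bucket array; sizes are hypothetical.
DIRECT = 8   # direct cells, always allocated
GROUP = 8    # threads per indirect group, allocated on demand

class BucketArray:
    def __init__(self):
        self.direct = [None] * DIRECT
        self.indirect = {}   # group id -> list of cells, lazily allocated

    def cell_for(self, tid):
        """Return (container, slot) for thread tid, allocating an
        indirect group only when a thread in that range first appears."""
        if tid < DIRECT:
            return self.direct, tid
        group = (tid - DIRECT) // GROUP
        if group not in self.indirect:
            self.indirect[group] = [None] * GROUP
        return self.indirect[group], (tid - DIRECT) % GROUP

ba = BucketArray()
ba.cell_for(3)     # direct cell, no extra allocation
ba.cell_for(100)   # allocates a single indirect group of GROUP cells
```

With only low-numbered threads, no indirect group is ever allocated,
which is the memory saving the design aims for.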
Since the $ST(P_i)$, $SF$ and $AT(P_{i.j})$ data structures are
private to each thread, they can be removed when the thread finishes
execution. Only the table entry is shared among threads. As this
structure is created by the main thread when a program is being
compiled, no concurrent writing operations will exist between threads
and thus no synchronization points are required for the NS design.
Given an arbitrary number of $NT$ running threads and assuming that
all threads have completely evaluated the same number $NC(P_i)$ of
tabled subgoal calls, Eq.~\ref{equation_mu_ns} shows the memory usage
for a predicate $P_i$ in the NS design ($MU_{NS}(P_i)$).
\begin{equation}
\begin{aligned}
& MU_{NS}(P_i) = TE_{NS} + NT * [ST(P_i) + \sum\limits^{NC(P_i)}_{j=1} [SF + AT(P_{i.j})]] \\
& where~~
TE_{NS} = TE + BA
\end{aligned}
\label{equation_mu_ns}
\end{equation}
The $MU_{NS}(P_i)$ value is given by the sum of the memory size of the
extended table entry structure ($TE_{NS}$) plus the sum of the sizes
of the private structures of each thread multiplied by the $NT$
threads. The memory size of $TE_{NS}$ is given by the size of the
original $TE$ structure added with the memory size of the bucket array
($BA$). The memory size of the remaining structures is the same as in
Yap's original table space organization.
As for Eq.~\ref{equation_tmu_cs}, the total memory usage of the NS
design ($TMU_{NS}$) (not shown in Eq.~\ref{equation_mu_ns}) is given
by the summation of the memory usage of each predicate, i.e., the
$MU_{NS}(P_i)$ values. Comparing $TMU_{NS}$ with $TMU_{ORIG}$ given
$NP$ tabled predicates, the extra memory cost of the NS design to
support concurrency is given by the formula:
\begin{equation*}
\sum^{NP}_{i = 1}[BA + [NT - 1] * [ST(P_i) + \sum\limits^{NC(P_i)}_{j=1}
[SF + AT(P_{i.j})]]]
\end{equation*}
The formula shows that for the base case of 1 thread ($NT=1$), the
amount of extra memory spent by the NS design, given by $NP*BA$,
corresponds to the bucket array extensions. When increasing the number
of threads, the amount of extra memory spent in the $ST(P_i)$, $SF$
and $AT(P_{i.j})$ data structures increases proportionally to
$NT$. This dependency on the number of threads motivated us to create
alternative designs that could decrease the amount of extra memory to
be spent. The following subsections present such alternative designs.
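The formula above can be checked numerically. The following Python
sketch (again with hypothetical byte sizes) computes $MU_{NS}$ as in
Eq.~\ref{equation_mu_ns} and the extra cost of the NS design over the
original organization:

```python
# Sketch of Eq. (MU_NS): private tables replicated per thread.
# Byte sizes are made up for illustration.
TE, BA, SF = 64, 32, 96

def mu_ns(nt, st_size, at_sizes):
    te_ns = TE + BA                                   # extended table entry
    private = st_size + sum(SF + at for at in at_sizes)
    return te_ns + nt * private                       # replicated per thread

def extra_over_orig(nt, st_size, at_sizes):
    """Extra memory of NS over the original single-table organization."""
    orig = TE + st_size + sum(SF + at for at in at_sizes)
    return mu_ns(nt, st_size, at_sizes) - orig
```

For one thread, the extra cost reduces to the bucket array extension
($BA$ per predicate); for $NT$ threads, the private structures grow
proportionally to $NT$, as stated in the text.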
\subsection{Subgoal-Sharing Design}
In the \emph{Subgoal-Sharing (SS)} design, the threads share part of
the table space. Figure~\ref{fig_table_space_subgoal_sharing} shows
the configuration of the table space for the SS design. Again, for the
sake of simplicity, the figure only shows the configuration for a
particular tabled predicate $P_i$ and a particular subgoal call
$P_{i.j}$.
\begin{wrapfigure}{R}{7.5cm}
\includegraphics[width=7.5cm]{figures/table_space_subgoal_sharing.pdf}
\caption{Table space organization for the SS design}
\label{fig_table_space_subgoal_sharing}
\end{wrapfigure}
In the SS design, the $ST(P_i)$ data structures are now shared among
the threads and the leaf data structure in each subgoal trie path,
instead of referring to a $SF$ as before, now points to a $BA$. Each
thread $T_K$ has its own entry inside the $BA$ which then points to
private $SF$ and $AT(P_{i.j})$ structures. In this design, concurrency
among threads is restricted to the allocation of trie nodes on the
$ST(P_i)$ structures. Whenever a thread finishes execution, its
private structures are removed, but the shared part remains present as
it can be in use or be further used by other threads.
Given an arbitrary number of $NT$ running threads and assuming that
all threads have completely evaluated the same number $NC(P_i)$ of
tabled subgoal calls, Eq.~\ref{equation_mu_ss} shows the memory usage
for a predicate $P_i$ in the SS design ($MU_{SS}(P_i)$).
The memory usage for the SS design is given by the sum of the memory
size of the $TE$ and $ST(P_i)$ data structures plus the summation, for
each subgoal call, of the memory used by the $BA$ added with the sizes
of the private structures of each thread multiplied by the $NT$
threads. The memory size of each particular data structure is the same
as in Yap's original table space organization.
\begin{equation}
\begin{aligned}
& MU_{SS}(P_i) = TE + ST(P_i) + \sum\limits^{NC(P_i)}_{j=1} [BA + NT * [SF + AT(P_{i.j})]]
\end{aligned}
\label{equation_mu_ss}
\end{equation}
Theorem~\ref{theorem_NS_vs_SS} shows the conditions where the SS design
uses less memory than the NS design for an arbitrary number of threads
$NT$ and an arbitrary number of subgoal calls $NC(P_i)$\footnote{The
proofs for all the theorems that follow are presented in detail
in~\ref{appendix_proofs}.}.
\begin{theorem}
\label{theorem_NS_vs_SS}
If $NT \geq 1$ and $NC(P_i) \geq 1$ then $MU_{SS}(P_i) \leq
MU_{NS}(P_i) $ if and only if the formula $ [NC(P_i) - 1] * BA \leq
[NT - 1] * ST(P_i)$ holds.
\end{theorem}
Theorem~\ref{theorem_NS_vs_SS} shows that the comparison between the
NS and SS designs depends directly on the number of subgoal calls
($NC(P_i)$) made to the predicate and on the number of threads ($NT$) in
evaluation. These numbers will affect the memory size of the $BA$ and
$ST(P_i)$ structures. The NS design grows in the number of $ST(P_i)$
structures as we increase the number of threads. The SS design grows
in the number of $BA$ structures proportionally to the number of
subgoal calls made to the predicate. The number of subgoal calls and
the size of the $ST(P_i)$ structures depend on the predicate being
evaluated, while the size of the $BA$ structures is fixed by the
implementation and the number of threads is user-dependent. For one
thread ($NT=1$), the following corollaries can be derived from
Thm.~\ref{theorem_NS_vs_SS}:
\begin{corollary}
If $NT = 1$ and $NC(P_i) = 1$ then $MU_{SS}(P_i) = MU_{NS}(P_i)$.
\end{corollary}
\begin{corollary}
If $NT = 1$ and $NC(P_i)>1$ then $MU_{SS}(P_i) > MU_{NS}(P_i)$.
\end{corollary}
In summary, for one thread, the SS design is equal to or worse than
the NS design in terms of memory usage. For a number of threads higher
than one, the SS design performs better than the NS design when the
formula in Thm.~\ref{theorem_NS_vs_SS} holds. The best scenarios for
the SS design occur for predicates with few subgoal calls and for
subgoal trie structures using larger amounts of memory. In such
scenarios, the difference between both designs increases
proportionally to the number of threads.
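The condition in Thm.~\ref{theorem_NS_vs_SS} can also be checked
numerically. The sketch below (hypothetical sizes) encodes both memory
equations and the theorem's condition, including the one-thread,
one-call corollary:

```python
# Numeric sketch of Theorem NS_vs_SS; byte sizes are hypothetical.
TE, BA, SF = 64, 32, 96

def mu_ns(nt, st, ats):
    return (TE + BA) + nt * (st + sum(SF + a for a in ats))

def mu_ss(nt, st, ats):
    return TE + st + sum(BA + nt * (SF + a) for a in ats)

def condition(nt, nc, st):
    """Theorem's condition: [NC - 1] * BA <= [NT - 1] * ST."""
    return (nc - 1) * BA <= (nt - 1) * st

# Few subgoal calls and a large subgoal trie favor the SS design:
nt, st, ats = 4, 1024, [256, 256]
```

Algebraically, $MU_{NS} - MU_{SS} = [NT - 1] * ST(P_i) - [NC(P_i) - 1]
* BA$, which is exactly why the theorem's condition characterizes when
SS uses less memory.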
\subsection{Full-Sharing Design}
The \emph{Full-Sharing (FS)} design tries to maximize the amount of
data structures being shared among
threads. Figure~\ref{fig_table_space_full_sharing} shows the
configuration of the table space for the FS design. Again, for the
sake of simplicity, the figure only shows the configuration for a
particular tabled predicate $P_i$ and a particular subgoal call
$P_{i.j}$.
\begin{wrapfigure}{R}{8.5cm}
\includegraphics[width=8.5cm]{figures/table_space_full_sharing.pdf}
\caption{Table space organization for the FS design}
\label{fig_table_space_full_sharing}
\vspace{-\intextsep}
\end{wrapfigure}
In this design, the $AT(P_{i.j})$ structure and part of the subgoal
frame information, the subgoal entry data structure in
Fig.~\ref{fig_table_space_full_sharing}, are now also shared among all
threads. The previous $SF$ data structure was split into two: the
subgoal entry stores common information for the subgoal call (such as
the pointer to the shared $AT(P_{i.j})$ structure) and a $BA$
structure; and the remaining information (the subgoal frame data
structure in Fig.~\ref{fig_table_space_full_sharing}) is kept private
to each thread. Concurrency among threads now also includes the access
to the subgoal entry data structure and the allocation of trie nodes
on the $AT(P_{i.j})$ structures.
The subgoal entry includes a $BA$ where each thread $T_k$ has its own
entry which then points to the thread's private subgoal frame. Each
private subgoal frame includes an extra field which is a back pointer
to the common subgoal entry. This is important in order to keep
unaltered all the tabling data structures that access subgoal
frames. To access the private information, there is no extra cost (we
still use a direct pointer); only for the common information on
the subgoal entry do we pay the extra cost of following an indirect
pointer.
Comparing with the NS and SS designs, the FS design has two major
advantages. First, memory usage is reduced to a minimum. The only
memory overhead, when compared with a single threaded evaluation, is
the $BA$ associated with each subgoal entry, and apart from the split
on the subgoal frame data structure, all the remaining structures
remain unchanged. Second, since threads are sharing the same
$AT(P_{i.j})$ structures, answers inserted by a thread for a
particular subgoal call are automatically made available to all other
threads when they call the same subgoal.
Given an arbitrary number of $NT$ running threads and assuming that
all threads have completely evaluated the same number $NC(P_i)$ of
tabled subgoal calls, Eq.~\ref{equation_mu_fs} shows the memory usage
for a predicate $P_i$ in the FS design ($MU_{FS}(P_i)$).
\begin{equation}
\begin{aligned}
& MU_{FS}(P_i) = TE + ST(P_i) + \sum\limits^{NC(P_i)}_{j=1} [SE_{FS} + BA + NT * [SF_{FS} + BP] + AT(P_{i.j})] \\
& where~~
SE_{FS} + SF_{FS} = SF
\end{aligned}
\label{equation_mu_fs}
\end{equation}
The memory usage for the FS design is given by the sum of the memory
size of the $TE$ and $ST(P_i)$ data structures plus the summation, for
each subgoal call, of the memory used by the subgoal entry data
structure ($SE_{FS}$), the $BA$ and the $AT(P_{i.j})$ structures
added with the sizes of the private data structures of each thread
multiplied by the $NT$ threads. The private data structures of each
thread include the subgoal frame ($SF_{FS}$) and the back pointer
($BP$). The memory size of the original $SF$ is now given by the size
of the $SE_{FS}$ and $SF_{FS}$ data structures. The memory size of the
remaining structures is the same as in Yap's original table space
organization.
Since the FS design is a refinement of the SS design, next we use
Thm.~\ref{theorem_SS_vs_FS} to show that the FS design always requires
less memory than the SS design for more than one thread.
\begin{theorem}
\label{theorem_SS_vs_FS}
If $NT > 1$ and $NC(P_i) \geq 1$ then $MU_{FS}(P_i) < MU_{SS}(P_i)$.
\end{theorem}
Remember from the previous subsection that the SS behavior depends on
the amount of memory spent in the $BA$. The FS maintains this
dependency, since this structure is co-allocated inside the subgoal
entry structure. The difference between both designs occurs in the
memory usage spent in the subgoal frames and in the answer tries. For
the subgoal frames, the difference is that the size of the private
subgoal frames used by the FS design, including the back pointer, is
lower than the size of the ones used by the SS design. For the answer
trie structures, the FS design simply does not allocate as many of
these structures as the SS design. For one thread ($NT=1$), the following
corollary can be derived from Thm.~\ref{theorem_SS_vs_FS}:
\begin{corollary}
If $NT=1$ and $NC(P_i) \geq 1$ then $ MU_{FS}(P_i) > MU_{SS}(P_i)$.
\end{corollary}
In summary, for one thread, the FS design is always worse than the SS
design and the difference increases proportionally to the number of
subgoal calls. For a number of threads higher than one, the FS design
always performs better than the SS design and the difference increases
as the number of threads and the number of subgoal calls also
increases.
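The following Python sketch (hypothetical sizes, respecting the
constraint $SE_{FS} + SF_{FS} = SF$ from Eq.~\ref{equation_mu_fs})
illustrates both Thm.~\ref{theorem_SS_vs_FS} and its one-thread
corollary:

```python
# Sketch of Eq. (MU_FS); the original SF is split so that SE_FS + SF_FS == SF.
TE, BA, SF = 64, 32, 96
SE_FS, SF_FS, BP = 40, 56, 8   # hypothetical split and back-pointer size
assert SE_FS + SF_FS == SF

def mu_fs(nt, st, ats):
    # One shared answer trie per call; only (SF_FS + BP) replicated per thread.
    return TE + st + sum(SE_FS + BA + nt * (SF_FS + BP) + a for a in ats)

def mu_ss(nt, st, ats):
    # Private subgoal frames and answer tries replicated per thread.
    return TE + st + sum(BA + nt * (SF + a) for a in ats)
```

For $NT = 1$ the difference per call is just the back pointer ($BP$),
so FS is slightly worse; for $NT > 1$ the single shared answer trie
quickly dominates.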
\subsection{Partial Answer Sharing Design}
In the SS design, the subgoal trie structures are shared among threads
but the answers for the subgoal calls are stored in private answer
trie structures to each thread. As a consequence, no sharing of
answers between threads is done. The \emph{Partial Answer Sharing
(PAS)} design~\cite{areias-jss16} extends the SS design to allow
threads to share answers. Threads still view their answer tries as
private but are able to consume answers from completed answer tries
computed by other threads. The idea is as follows. Whenever a thread
calls a new tabled subgoal, first it searches the table space to
lookup if any other thread has already computed the answers for that
subgoal. If so, then the thread reuses the available answers, thus
avoiding recomputing the subgoal call from scratch. Otherwise, it
computes the subgoal itself. Several threads can work on the same
subgoal call simultaneously, i.e., we do not protect a subgoal from
further evaluations while other threads have picked it up already. The
first thread completing a subgoal, shares the results by making them
available (public) to the other
threads. Figure~\ref{fig_table_space_answer_sharing} illustrates the
table space organization for the PAS design.
\begin{wrapfigure}{R}{7.5cm}
\includegraphics[width=7.5cm]{figures/table_space_answer_sharing.pdf}
\caption{Table space organization for the PAS design}
\label{fig_table_space_answer_sharing}
\vspace{-\intextsep}
\end{wrapfigure}
As for the SS design, threads can concurrently access the subgoal trie
structures for both read and write operations, but for the answer trie
structures, they are only concurrently accessed for reading after
completion. All subgoal frames and answer tries are initially private
to a thread. Later, when the first subgoal frame is completed, i.e.,
when we have found the full set of answers for it, it is marked as
completed (black answer trie in
Fig.~\ref{fig_table_space_answer_sharing}) and put in the beginning of
the list of private subgoal frames (configuration shown in
Fig.~\ref{fig_table_space_answer_sharing}). With the PAS design, we
also aim to improve the memory usage of the SS design by removing the
$BA$ data structure. This is a direct consequence of the analysis made
in Eq.~\ref{equation_mu_ss} where we have shown that the performance
of the SS design is directly affected by the size of the memory used
by the $BA$ structures. Thus, instead of pointing to a $BA$ as in the
SS design, now the leaf data structure in each subgoal trie path
points to a list of private subgoal frames corresponding to the
threads evaluating the subgoal call. In order to find the subgoal
frame corresponding to a thread, we may have to pay an extra cost for
navigating in the list but, once a subgoal frame is completed, we can
access it immediately since it is always stored in the beginning of
the list.
Given an arbitrary number of $NT$ running threads and assuming that
all threads have completely evaluated the same number $NC(P_i)$ of
tabled subgoal calls, Eq.~\ref{equation_mu_as} shows the memory usage
for a predicate $P_i$ in the PAS design ($MU_{PAS}(P_i)$).
\begin{equation}
\begin{aligned}
& MU_{PAS}(P_i) = TE + ST(P_i) + \sum\limits^{NC(P_i)}_{j=1} [NT(P_{i.j}) * [SF + AT(P_{i.j})]] \\
& where~~
NT(P_{i.j}) \leq NT
\end{aligned}
\label{equation_mu_as}
\end{equation}
The memory usage for the PAS design is given by the sum of the memory
size of the $TE$ and $ST(P_i)$ data structures plus the summation, for
each subgoal call, of the memory used by the private structures of
each thread multiplied by $NT(P_{i.j})$ threads, where $NT(P_{i.j})$
is the number of threads evaluating the subgoal call $P_{i.j}$ in a
private fashion. Note that $NT(P_{i.j}) \leq NT$, since the threads
consuming answers from completed subgoal frames do not allocate any
extra memory. The memory size of each particular data structure is the
same as in Yap's original table space organization.
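A small Python sketch of Eq.~\ref{equation_mu_as} (hypothetical sizes)
shows how threads that only consume answers from completed subgoal
frames add no memory to the predicate's table:

```python
# Sketch of Eq. (MU_PAS): only the NT(P_ij) threads that evaluated a call
# privately allocate structures; consumers of completed tables add none.
TE, SF = 64, 96   # hypothetical byte sizes

def mu_pas(st, calls):
    """calls: list of (nt_ij, at_size) pairs, where nt_ij <= NT is the
    number of threads that evaluated the call in a private fashion."""
    return TE + st + sum(nt_ij * (SF + at) for nt_ij, at in calls)

# Three threads total, but the second call was computed by one thread and
# its completed table was then consumed by the other two:
usage = mu_pas(128, [(3, 256), (1, 512)])
```

With the deletion optimization described next, each `nt_ij` can in
practice drop to 1 once the call completes.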
In summary, comparing Eq.~\ref{equation_mu_ss} with
Eq.~\ref{equation_mu_as}, we can observe that the total memory usage
of the PAS design is always less than the total memory usage of the SS
design. Additionally, we can optimize this design even further and
allow threads to delete their private $SF$ and $AT(P_{i.j})$
structures when completing, if another thread has made public its
completed subgoal frame first. With this optimization, we can end in
practice with a single $SF$ and $AT(P_{i.j})$ structure for each
subgoal call $P_{i.j}$.
Comparing with the FS design, because we only share completed
answer tries, we also avoid some problems present in the FS
design. First, we avoid the problem of dealing with concurrent updates
to the answer tries. Second, we avoid the problem of dealing with
concurrent deletes, as in the case of using mode-directed
tabling. Since the PAS design keeps the answer tries private to each
thread, the deletion of nodes can be done without any complex
machinery to deal with concurrent delete operations. Third, we avoid
the problem of managing the different set of answers that each thread
has found. As we will see in the next subsection, this can be a
problem for batched scheduling evaluation.
\subsection{Private Answer Chaining Design}
During tabled execution, there are several points where we may have to
choose between continuing forward execution, backtracking, consuming
answers from the tables or completing subgoals. The decision about the
evaluation flow is determined by the \emph{scheduling
strategy}. Different strategies may have a significant impact on
performance, and may lead to a different ordering of solutions to the
query goal. Arguably, the two most successful tabling scheduling
strategies are \emph{local scheduling} and \emph{batched
scheduling}~\cite{Freire-96}.
Local scheduling tries to complete subgoals as soon as possible. When
new answers are found, they are added to the table space and the
evaluation fails. Local scheduling has the advantage of minimizing the
size of \emph{clusters of dependent subgoals}. However, it delays
propagation of answers and requires the complete evaluation of the
search space.
Batched scheduling tries to delay the need to move around the search
tree by batching the return of answers to consumer subgoals. When new
answers are found for a particular tabled subgoal, they are added to
the table space and the evaluation continues. Batched scheduling can
be a useful strategy in problems which require an eager propagation
of answers and/or do not require the complete set of answers to be
found.
With the FS design, all tables are shared. Thus, since several threads
can be inserting answers in the same answer trie, when an answer
already exists, it is not possible to determine if the answer is new
or repeated for a particular thread without further support. For local
scheduling, this is not a problem since, for repeated and new answers,
local scheduling always fails. The problem occurs with batched
scheduling that requires that only the repeated answers should
fail. Threads have then to detect, during batched evaluation, whether
an answer is new and must be propagated or whether an answer is
repeated and the evaluation must fail. The \emph{Private Answer
Chaining (PAC)} design~\cite{areias-slate15-post} extends the FS
design to keep track of the answers that were already found and
propagated per thread and subgoal
call. Figure~\ref{fig_table_space_answer_chaining_overview}
illustrates PAC's key idea.
\begin{wrapfigure}{R}{6.25cm}
\includegraphics[width=6.25cm]{figures/table_space_answer_chaining_overview.pdf}
\caption{PAC overview}
\label{fig_table_space_answer_chaining_overview}
\vspace{-\intextsep}
\end{wrapfigure}
In a nutshell, PAC splits \emph{answer propagation} from \emph{answer
representation}, and allows the former to be privately stored in the
subgoal frame data structure of each thread, and the latter to be kept
publicly shared among threads in the answer trie data structure. This
is similar to the idea proposed by Costa and Rocha~\cite{CostaJ-09b}
for the \emph{global trie data structure}, where answers are
represented only once on a global trie and then each subgoal call has
private pointers to its set of answers. With PAC, we follow the same
key idea of representing only once each answer (as given by the FS
design), but now since we are in a concurrent environment, we use a
private chain of answers per thread to represent the answers for each
subgoal call. Later, if a thread completes a subgoal call, its PAC is
made public so that from that point on all threads can use that chain
in completed (read-only)
mode. Figure~\ref{fig_table_space_answer_chaining} illustrates the new
data structures involved in the implementation of PAC's design for a
situation where different threads are evaluating the same tabled
subgoal call $P_{i.j}$.
\begin{figure}[!ht]
\centering
\includegraphics[width=0.75\textwidth]{figures/table_space_answer_chaining.pdf}
\caption{PAC's data structures for (a) private and (b) public
chaining}
\label{fig_table_space_answer_chaining}
\end{figure}
Figure~\ref{fig_table_space_answer_chaining}(a) shows a situation
where two threads, $T_1$ and $T_{t-2}$, are sharing the same subgoal
entry for a call $P_{i.j}$ still under evaluation, i.e., still not yet
completed. The current state of the evaluation shows an answer trie
with 3 answers found for $P_{i.j}$. For the sake of simplicity, we are
omitting the internal answer trie nodes and we are only showing the
leaf nodes $LN_1$, $LN_2$ and $LN_3$ of each answer.
With the PAC design, the leaf nodes are not chained in the answer trie
data structure, as usual. Now, the chaining process is done privately,
and for that, we use the subgoal frame structure of each thread. On
the subgoal frame structure we added a new field, called
\emph{Answers}, to store the answers found within the execution of the
thread. In order to minimize PAC's impact, each answer node in the
private chaining has only two fields: (i) an entry pointer, which
points to the corresponding leaf node in the answer trie data
structure; and (ii) a next pointer to chain the nodes in the private
chaining. To maintain good performance, when the number of answer
nodes exceeds a certain threshold, we use a hash trie mechanism design
similar to the one presented in~\cite{Areias-ijpp15}, but without
concurrency support, since this mechanism is private to each thread.
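A minimal Python sketch (illustrative only; Yap's actual structures are
C structs, and the names here are invented) of the two-field answer
node and the private chaining:

```python
# Sketch of PAC's private answer chaining: each answer node holds only an
# entry pointer (to the shared answer-trie leaf) and a next pointer.
class AnswerNode:
    __slots__ = ('entry', 'next')
    def __init__(self, entry, next_node=None):
        self.entry = entry     # leaf node in the shared answer trie
        self.next = next_node  # next node in this thread's private chain

def chain_answer(head, leaf):
    """Prepend a newly found answer to the thread's private chain and
    return the new head (the subgoal frame's Answers field)."""
    return AnswerNode(leaf, head)

# A thread finds leaves LN1 then LN2; its private chain now has two nodes.
answers = None
answers = chain_answer(answers, 'LN1')
answers = chain_answer(answers, 'LN2')
```

Keeping only two pointers per node is what minimizes PAC's memory
impact relative to chaining the leaf nodes directly in the answer trie.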
PAC's data structures in Fig.~\ref{fig_table_space_answer_chaining}(a)
represent two different situations. Thread $T_1$ has only found
one answer and it is using a direct answer chaining to access the leaf
node $LN_1$. Thread $T_{t-2}$ has already found three answers for
$P_{i.j}$ and it is using the hash trie mechanism within its private
chaining. In the hash trie mechanism, the answer nodes are still
chained between themselves, so that repeated calls belonging to
thread $T_{t-2}$ can consume the answers as in the original mechanism.
Figure~\ref{fig_table_space_answer_chaining}(b) shows the state of the
subgoal call after completion. When a thread $T$ completes a subgoal
call, it frees its private consumer structures, but before doing that,
it checks whether another thread has already marked the subgoal as
completed. If no other thread has done that, then thread $T$ not only
follows its private chaining mechanism, as it would for freeing its
private nodes, but also follows the pointers to the answer trie leaf
nodes in order to create a chain inside the answer trie. Since this
procedure is done inside a critical region, no more than one thread
can be doing this chaining process. Thus, in
Fig.~\ref{fig_table_space_answer_chaining}(b), we are showing the
situation where the subgoal call $P_{i.j}$ is completed and both
threads $T_1$ and $T_{t-2}$ have already chained the leaf nodes inside
the answer trie and removed their private chaining structures.
\section{Engine Components}
\label{sec_engine_components}
This section discusses the most important engine components required
to support concurrent tabled evaluation.
\subsection{Fixed-Size Memory Allocator}
A critical component in the implementation of an efficient concurrent
tabling system is the memory allocator. Conceptually, there are two
categories of memory allocators: \emph{kernel-level} and
\emph{user-level} memory allocators. Kernel-level memory allocators
are responsible for managing memory inside the protected
sub-systems/resources of the operating system, while user-level memory
allocators are responsible for managing the \emph{heap} area, which is
the area inside the addressing space of each process where the dynamic
allocation of memory is directly done.
Evidence of the importance of a \emph{User-level Memory Allocator
(UMA)} comes from the wide array of UMA replacement packages that
are currently available. Some examples are the
PtMalloc~\cite{ptmalloc}, Hoard~\cite{Berger-00},
TcMalloc~\cite{tcmalloc} and JeMalloc~\cite{Evans-06} memory
allocators. Many UMA subsystems were written in a time when
multiprocessor systems were rare. They used memory efficiently but
were highly serial, constituting an obstacle to the throughput of
concurrent applications, which require some form of synchronization to
protect the heap. Additionally, when a concurrent application is run
in a multiprocessor system, other problems can occur, such as
\emph{heap blowup}, \emph{false sharing} or \emph{memory
  contention}~\cite{Masmano-06,Gidenstam-10}.
Since tabling also demands multiple allocations and deallocations of
different-sized chunks of memory, memory management plays an important
role in the efficiency of a concurrent tabling system. To satisfy this
demand, we have designed a \emph{fixed-size UMA} especially aimed for
an environment with the characteristics of concurrent
tabling~\cite{Areias-12b}. In a nutshell, fixed-size UMA separates
local and shared memory allocation, and uses local and global heaps
with pages that are formatted in blocks with the sizes of the existing
data structures. Formatting pages in blocks helps to avoid
false sharing, because different threads in different
processors do not share the same cache lines, and to avoid the heap
blowup problem, because pages migrate between local and global heaps.
At the implementation level, our proposal has local and global heaps
with pages formatted for each object type. In addition, global and
local heaps can hold free (unformatted) pages for use when a local
heap runs empty. Since modern computer architectures use pages to
handle memory, we adopted an allocation scheme based also on pages,
where each memory page only contains data structures of the same
type. In order to split memory among different threads, in our
proposal, a page can be considered a \emph{local page}, if owned by a
particular thread, or a \emph{global page},
otherwise. Figure~\ref{fig_page_based_memory} gives an overview of
this organization.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{figures/page_based_memory.pdf}
\caption{Using pages as the basis for the fixed-size memory allocator}
\label{fig_page_based_memory}
\end{figure}
A thread can own any number of pages of the same type, of different
types and/or free pages. Any type of page (including free pages) can
be local to a thread or global, and each particular page only contains
data structures of the same type. When a page $P$ is made local to a
thread $T$, it means that $T$ gains exclusive permission to
allocate and deallocate data structures on $P$. On the other hand,
global pages have no owner and, thus, no allocation or deallocation
can be performed directly on them. To allocate/deallocate data structures
on global pages, first the corresponding pages should be moved to a
particular thread. All running threads can access (for read or write
operations) the data structures allocated on a page, independently of
being a local or global page.
Allocating and freeing data structures are constant-time operations,
because they require only moving a structure to or from a list of free
structures. Whenever a thread $T$ requests to allocate memory for a
data structure of type $S$, it can instantly satisfy the request by
returning the first unused slot on the first available local page with
type $S$. Deallocation of a data structure of type $S$ does not free
up the memory, but only opens an unused slot on the chain of available
local pages for type $S$. Further requests to allocate memory of type
$S$ will later return the now unused memory slot. When all data
structures in a page are unused, the page is moved to the chain of
free local pages. A free local page can be reassigned later to a
different data type. When a thread $T$ runs out of available free
local pages, it must synchronize with the other threads in order to
access the global pages or the operating system's memory allocator, if
no free global page exists. This process eliminates the need to search
for suitable memory space and greatly alleviates memory
fragmentation. The only wasted space is the unused portion at the end
of a page when it cannot fit exactly with the size of the
corresponding data structures.
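The constant-time alloc/free discipline over a page formatted in fixed-size blocks can be sketched as follows (a simplified single-threaded illustration; the page size, names, and the omission of local/global heaps and thread ownership are our assumptions):

```c
#include <stddef.h>
#include <stdlib.h>

/* Minimal sketch of a page formatted in fixed-size blocks chained in
   a free list; illustrative only. */
#define PAGE_SIZE 4096

typedef struct free_block { struct free_block *next; } FreeBlock;

typedef struct page {
    unsigned char mem[PAGE_SIZE];
    FreeBlock *free_list;           /* chain of unused slots in this page */
    size_t block_size;
} Page;

/* Format the page in blocks of the given size; the leftover tail that
   cannot fit a whole block is the only wasted space. */
void page_format(Page *p, size_t block_size) {
    size_t nblocks = PAGE_SIZE / block_size;
    p->block_size = block_size;
    p->free_list = NULL;
    for (size_t i = 0; i < nblocks; i++) {
        FreeBlock *b = (FreeBlock *)(p->mem + i * block_size);
        b->next = p->free_list;
        p->free_list = b;
    }
}

/* Constant-time allocation: pop the first unused slot. */
void *page_alloc(Page *p) {
    FreeBlock *b = p->free_list;
    if (b != NULL)
        p->free_list = b->next;
    return b;
}

/* Constant-time deallocation: push the slot back on the free list,
   opening an unused slot for a later request of the same type. */
void page_free(Page *p, void *slot) {
    FreeBlock *b = slot;
    b->next = p->free_list;
    p->free_list = b;
}
```

Note how a freed slot is immediately returned by the next allocation, as described above.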
When a thread finishes execution, it deallocates all its private data
structures and then moves its local pages to the corresponding global
page entries. Shared structures are only deallocated when the last
running thread (usually the main thread) abolishes the tables. Thus,
if a thread $T$ allocates a data structure $D$, then it will also be
responsible for deallocating $D$, if $D$ is private to $T$, or $D$
will remain live in the tables, if $D$ is shared, even after $T$
finishes execution. In the latter case, $D$ can only be deallocated by
the last running thread $L$: $D$ is then made local to $L$ and the
deallocation process follows as usual.
\subsection{Lock-Free Data Structures}
Another critical component in the implementation of an efficient
concurrent tabling system is the design of the data structures and
algorithms that manipulate shared tabled data. As discussed before,
Yap's table space follows a two-level trie data structure, where one
level stores the tabled subgoal calls and the other stores the
computed answers. Depending on the number of subgoal calls or answers,
the paths inside a trie, corresponding to the subgoal calls or
answers, might have several trie nodes per internal level of the trie
structure. Whenever an internal trie level becomes saturated, a
\emph{hash mechanism} is used to provide direct node access and
therefore optimize the search for the data within the trie
level. Figure~\ref{fig_trie_hash_overview} shows a hashing mechanism
for an internal trie level within the subgoal and answer data
structures.
\begin{wrapfigure}{R}{8.5cm}
\includegraphics[width=8.5cm]{figures/trie_hash_overview.pdf}
\caption{The hashing mechanism within a trie level}
\label{fig_trie_hash_overview}
\vspace{-\intextsep}
\end{wrapfigure}
Several approaches for hashing mechanisms exist. The most important
aspect of a hashing mechanism is its behavior in terms of hash
collisions, i.e., when two keys collide and occupy the same hash table
location. Multiple solutions exist that address the collision
problem. Among these are the \emph{open addressing} and \emph{closed
addressing} approaches~\cite{Tenenbaum-90,Knuth-98}. In open
addressing, the objects are stored directly within the hash table's
internal array and collisions are solved by probing for an alternative
free index. In closed addressing, every object is kept at the index
given by its hash and collisions are solved by attaching auxiliary
structures, such as other arrays or linked lists, to that
index. Yap's tabling engine uses \emph{separate
chaining}~\cite{Knuth-98} to solve hash collisions. In the separate
chaining mechanism, the hash table is implemented as an array of
linked lists. The basic idea of separate chaining techniques is to
apply linked lists for collision management, thus in case of a
conflict a new object is appended to the linked list.
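A minimal sketch of separate chaining, with an illustrative bucket count and integer keys (not Yap's actual trie-node hashing):

```c
#include <stdlib.h>

/* Illustrative separate chaining: the hash table is an array of
   linked lists and colliding keys are chained in the same bucket. */
#define NBUCKETS 8

typedef struct chain_node {
    int key;
    struct chain_node *next;
} ChainNode;

typedef struct { ChainNode *bucket[NBUCKETS]; } HashTable;

static unsigned hash(int key) { return (unsigned)key % NBUCKETS; }

void ht_insert(HashTable *t, int key) {
    ChainNode *n = malloc(sizeof *n);
    unsigned b = hash(key);
    n->key = key;
    n->next = t->bucket[b];     /* on collision, extend the bucket's list */
    t->bucket[b] = n;
}

int ht_lookup(const HashTable *t, int key) {
    for (ChainNode *n = t->bucket[hash(key)]; n != NULL; n = n->next)
        if (n->key == key)
            return 1;
    return 0;
}
```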
Our initial approach to deal with concurrency within the trie
structures was to use \emph{lock-based
strategies}~\cite{Areias-12a}. However, lock-based data structures
have their performance restrained by multiple problems, such as
convoying, low fault tolerance and delays occurring inside a critical
region. We thus shifted our attention to taking advantage of the
low-level \emph{Compare-And-Swap (CAS)} operation, that nowadays can
be widely found on many common architectures. The CAS operation is an
\emph{atomic instruction} that compares the contents of a memory
location to a given value and, if they are the same, updates the
contents of that memory location to a given new value. The CAS
operation is at the heart of many \emph{lock-free} (also known as
non-blocking) data structures~\cite{Herlihy-87}. Non-blocking data
structures offer several advantages over their blocking counterparts,
such as being immune to deadlocks, lock convoying and priority
inversion, and being preemption tolerant, which ensures similar
performance regardless of the thread scheduling policy. Using
lock-free techniques, we have created two proposals for concurrent
hashing data structures especially aimed to be as effective as
possible in a concurrent tabling engine and without introducing
significant overheads in the sequential
execution. Figure~\ref{fig_trie_hash_proposals} shows the architecture of
the two proposals.
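The CAS retry pattern at the heart of such lock-free structures can be sketched with C11 atomics as a head insertion in a shared chain (an illustrative pattern only, not the actual trie code):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Illustrative lock-free insertion: push a node at the head of a
   shared chain without taking any lock, retrying the CAS until it
   succeeds. */
typedef struct node {
    int key;
    struct node *next;
} Node;

void lockfree_push(_Atomic(Node *) *head, int key) {
    Node *n = malloc(sizeof *n);
    n->key = key;
    Node *old = atomic_load(head);
    do {
        n->next = old;
        /* CAS installs n only if *head still equals old; on failure,
           old is refreshed with the current head and we retry. */
    } while (!atomic_compare_exchange_weak(head, &old, n));
}

int chain_contains(const Node *head, int key) {
    for (; head != NULL; head = head->next)
        if (head->key == key)
            return 1;
    return 0;
}
```

A failed CAS simply means another thread inserted first; the loop never blocks, which is what makes the structure immune to delays of suspended threads.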
\begin{wrapfigure}{R}{7.5cm}
\includegraphics[width=7.5cm]{figures/trie_hash_proposals.pdf}
\caption{Architecture of the two lock-free hash proposals}
\label{fig_trie_hash_proposals}
\vspace{-\intextsep}
\end{wrapfigure}
Both proposals include a root node $R$ and have a hashing mechanism
composed by a bucket array and a hash function that maps the nodes
into the entries in the bucket array. The first proposal, shown in
Fig.~\ref{fig_trie_hash_proposals}(a), implements a dynamic resizing
of the hash tables by doubling the size of the bucket array whenever
it becomes saturated~\cite{Areias-14}. It starts with an initial
bucket array with $S$ entries and, whenever the hash bucket array
becomes saturated, i.e., when the number of nodes in a bucket entry
exceeds a pre-defined threshold value and the total number of nodes
exceeds $S$, then the bucket array is expanded to a new one with $2*S$
entries. This expansion mechanism is executed by a single thread,
meaning that no more than one expansion can be done at a time. If the
thread executing the expansion suspends for some reason (for example,
if it is preempted by the operating system scheduler), then all the
remaining threads can still be searching and inserting nodes in the
trie level that is being expanded in a lock-free fashion, but no other
thread will be able to expand the same trie level. When the process of
bucket expansion is completed for all $S$ bucket entries, node $R$ is
updated to refer to the new bucket array with $2*S$ entries. Since the
size of the hashes doubles on each expansion, this proposal is
ill-suited for integration with the fixed-size UMA.
The second proposal, shown in Fig.~\ref{fig_trie_hash_proposals}(b),
was designed to be compatible with the fixed-size UMA. It is based on
\emph{hash tries data structures} and is aimed to be a simpler and
more efficient lock-free proposal that disperses the synchronization
regions as much as possible in order to minimize problems such as
false sharing or cache memory ping pong
effects~\cite{Areias-ijpp15}. Hash tries (or hash array mapped tries)
are another trie-based data structure with nearly ideal
characteristics for the implementation of hash
tables~\cite{Bagwell-01}. As shown in
Fig.~\ref{fig_trie_hash_overview}, in this proposal, we still have the
original subgoal/answer trie data structures which include a hashing
mechanism whenever an internal trie level becomes saturated, but now
the hashing mechanism is implemented using hash tries data structures.
An essential property of the trie data structure is that common
prefixes are stored only once~\cite{Fredkin-62}, which in the context
of hash tables allows us to efficiently solve the problems of setting
the size of the initial hash table and of dynamically resizing it in
order to deal with hash collisions. In a nutshell, a hash trie is
composed by \emph{internal hash arrays} and \emph{leaf nodes} (nodes
$N1$ and $N2$ in Fig.~\ref{fig_trie_hash_proposals}(b)) and the
internal hash arrays implement a hierarchy of hash levels of fixed
size $S=2^w$. To map a node into this hierarchy, first we compute the
hash value $h$ and then we use chunks of $w$ bits from $h$ to index
the entry in the appropriate hash level. Hash collisions are solved by
simply walking down the tree as we consume successive chunks of $w$
bits from the hash value $h$. Whenever a hash bucket array becomes
saturated, i.e., when the number of nodes in a bucket entry exceeds a
pre-defined threshold value, then the bucket array is expanded to a
new one with $S$ entries. As for the previous proposal, this expansion
mechanism is executed by a single thread. If the thread executing the
expansion suspends for some reason, then all the remaining threads can
still be searching and inserting nodes in the bucket entry in a
lock-free fashion. Compared with the previous proposal, this proposal
has a finer-grained synchronization region, because it blocks only one
bucket entry per expansion.
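The mapping of a hash value into the hierarchy of hash levels of fixed size $S=2^w$ can be sketched as follows (with an illustrative $w=3$, not necessarily the engine's actual value):

```c
#include <stdint.h>

/* Sketch of hash trie level indexing: each hash level has S = 2^w
   buckets and consumes the next chunk of w bits of the hash value h.
   W = 3 (S = 8) is an illustrative choice. */
#define W 3u
#define S (1u << W)                 /* buckets per hash level */

unsigned hash_trie_index(uint64_t h, unsigned level) {
    /* Collisions at one level are solved by walking down to the next
       level, which consumes the next w bits of h. */
    return (unsigned)((h >> (level * W)) & (S - 1u));
}
```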
\section{Performance Analysis}
Our work on combining tabling with parallelism started some years ago
when the first approach for implicit parallel tabling was
presented~\cite{Rocha-99a}. That approach led to the design and
implementation of an or-parallel tabling system, named
OPTYap~\cite{Rocha-01}. In OPTYap, each worker behaves like a
sequential tabling engine that fully implements all the tabling
operations. During the evaluation, the or-parallel component of the
system is triggered to allow synchronized access to the table space
and to the common parts of the search tree, or to schedule workers
running out of alternatives to exploit.
OPTYap has shown promising results in several tabled
benchmarks~\cite{Rocha-01}. The worst results were obtained in the
transitive closure of the right recursive definition of the path
problem using a grid configuration, where no speedups were obtained
with multiple workers. The bad results achieved in this benchmark were
explained by the higher rate of contention in Yap's internal data
structures, namely in the subgoal frames. A closer analysis showed
that the number of suspension/resumption operations is approximately
constant with the increase in the number of workers, thus suggesting
that there are answers that can only be found when other answers are
also found, and that the process of finding such answers cannot be
anticipated. In consequence, suspended branches have always to be
resumed to consume the answers that could not be found sooner.
More recently, we shifted our research towards explicit parallelism
specially aimed for multithreaded environments. Initial results were
promising as we were able to significantly reduce the contention for
concurrent table accesses~\cite{Areias-12a,Areias-12b}. Later, we
presented first speedup results for the right recursive definition of
the path problem, using a naive multithreaded scheduler that considers
a set of different starting points (queries) in the graph to be run by
a set of different threads. In this work, Yap obtained a maximum
speedup of $10.24$ for 16 threads~\cite{Areias-ijpp15}. Although
these results were better than the ones presented earlier for implicit
parallelism, they were mostly due to the different scheduler strategy
adopted to evaluate the benchmark. On the other hand, such work also
showed that with 32 threads, no improvements were obtained compared
with 16 threads. A closer analysis showed again that such behavior was
related to the large number of subgoal call dependencies in the
program. We thus believe that the ordering to which the answers are
found in some problems, like in the evaluation of the transitive
closure of strongly connected graphs, is a major problem that
restricts concurrency/parallelism in tabled programs.
In what follows, we start with worst case scenarios to study how
independent flows of execution running simultaneously interfere at the
low-level engine. Next, we focus on two well-known dynamic programming
problems, the Knapsack and LCS problems, and we discuss how we were
able to scale their execution by using Yap's multithreaded tabling
engine. The environment of our experiments was a machine with 32-Core
AMD Opteron (TM) Processor 6274 (2 sockets with 16 cores each) with
32GB of main memory, running the Linux kernel 3.16.7-200.fc20.x86\_64
with Yap Prolog 6.3\footnote{Available at
\url{https://github.com/miar/yap-6.3}.}.
\subsection{Experiments on Worst Case Scenarios}
We begin with experimental results for concurrent tabled evaluation
using local and batched scheduling with the NS, SS and PAC designs for
worst case scenarios that stress the trie data structures. For the
sake of simplicity, we will present only the best results, which were
always achieved when using the fixed-size UMA and the second lock-free
proposal. We do not show results for the CS and PAS designs because
they are not meaningful in this context, as we will see next. The
results for the FS design are identical to PAC's results, except for
batched scheduling which FS does not support.
For benchmarking, we used the set of tabling benchmarks
from~\cite{Areias-12b} which includes 19 different programs in
total. We chose these benchmarks because they have characteristics
that cover a wide number of scenarios in terms of trie usage. They
create different trie configurations with lower and higher number of
nodes and depths, and also have different demands in terms of trie
traversing\footnote{We show a more detailed characterization of the
benchmark set in~\ref{appendix_bechmark_details}.}.
To create worst case scenarios that stress the table data structures,
\emph{we ran all threads starting with the same query goal}. By doing
this, it is expected that threads will access the table space, to
check/insert for subgoals and answers at similar times, thus causing a
huge stress on the same critical regions. In particular, for this set
of benchmarks, this will be especially the case for the answer tries,
since the number of answers clearly exceeds the number of
subgoals. Please note that, although all threads execute the same
program, they have independent flows of execution, i.e., we are not
trying to parallelize the execution, but study how independent flows
of execution (in this case, identical flows of execution) interfere at
the low-level engine. By focusing first on the worst case scenarios,
we can infer the \emph{highest overhead ratios when compared with one
thread} (or the lowest bounds of performance) that each design might
have when used with multiple threads in other real world
applications. For each table design, there are two main sources of
overheads: (i) the synchronization required to interact with the
memory allocator, which is proportional to the memory consumption
bounds discussed in Section~\ref{sec_concurrent_table_designs}; and
(ii) the synchronization required to interact with the table space,
which is proportional to the number of data structures that can be
accessed concurrently in each design. The overheads originated from
these two sources are not easy to isolate in order to evaluate the
weight of each in the execution time. The design of the memory
allocator clearly plays an important role in the former source of
overhead and the use of lock-free data structures is important to
soften the weight of the latter.
Table~\ref{tab_batched_overhead} shows the overhead ratios, when
compared with the NS design with 1 thread (running with local
scheduling and without the fixed-size UMA) for the NS, SS and PAC
designs running 1, 8, 16, 24 and 32 threads with local and batched
scheduling on the set of benchmarks. In order to give a fair weight to
each benchmark, the overhead ratio is calculated as follows. We begin
by running ten times each benchmark $B$ for each design $D$ with $T$
threads. Then, we calculate the average of those ten runs and use that
value ($D_{BT}$) to put it in perspective against the base time, which
is the average of the ten runs of the NS design with one thread
($NS_{B1}$)\footnote{The base times for the NS design are presented in
Table~\ref{tab_benchs} in~\ref{appendix_bechmark_details}.}. For
that, we compute the overhead as $O_{DBT} = D_{BT} / NS_{B1}$. After
calculating all the overheads $O_{DBT}$ for a
certain design $D$ and number of threads $T$ corresponding to the
several benchmarks $B$, we calculate the respective minimum, average,
maximum and standard deviation overhead ratios. The higher the
overhead, the worse the design behaves. An overhead of 1.00 means that
the design behaves similarly to the base case and is thus immune to
the fact of having other execution flows running simultaneously.
\begin{table}[t]
\centering
\caption{Overhead ratios, when compared with the NS design with 1
thread (running with local scheduling and without the fixed-size
UMA), for the NS, SS and PAC designs running 1, 8, 16, 24 and 32
threads with local and batched scheduling (best ratios by row and by
design for the Minimum, Average and Maximum are in bold)}
\begin{tabular}{ll|cc|cc|cc}
\multicolumn{2}{l|}{\multirow{2}{*}{\bf Threads}} &
\multicolumn{2}{c|}{\multirow{1}{*}{\bf NS}} &
\multicolumn{2}{c|}{\multirow{1}{*}{\bf SS}} &
\multicolumn{2}{c}{\multirow{1}{*}{\bf PAC}} \\
&
& \multicolumn{1}{c}{\bf Local}
& \multicolumn{1}{c|}{\bf Batched}
& \multicolumn{1}{c}{\bf Local}
& \multicolumn{1}{c|}{\bf Batched}
& \multicolumn{1}{c}{\bf Local}
& \multicolumn{1}{c}{\bf Batched}\\
\hline\hline
\multirow{4}{*}{\bf 1}
& {\bf Min }& {\bf 0.53}& 0.55& {\bf 0.54}& 0.55& 1.01& {\bf 0.95}\\
& {\bf Avg }& {\bf 0.78}& 0.82& {\bf 0.84}& 0.90& {\bf 1.30}& 1.46\\
& {\bf Max }& 1.06& {\bf 1.05}& {\bf 1.04}& {\bf 1.04}& {\bf 1.76}& 2.33\\
& {\bf StD }& 0.15& 0.14& 0.17& 0.16& 0.22& 0.44\\
\hline
\multirow{4}{*}{\bf 8}
& {\bf Min }& 0.66& {\bf 0.63}& 0.66& {\bf 0.63}& 1.16&{\bf 0.99}\\
& {\bf Avg }& {\bf 0.85}& 0.88& {\bf 0.92}& 0.93& {\bf 1.88}& 1.95\\
& {\bf Max }& {\bf 1.12}& 1.14& 1.20& {\bf 1.15}& {\bf 2.82}& 3.49\\
& {\bf StD }& 0.13& 0.14& 0.15& 0.14& 0.60& 0.79\\
\hline
\multirow{4}{*}{\bf 16}
& {\bf Min }& 0.85& {\bf 0.75}& 0.82& {\bf 0.77}& 1.17& {\bf 1.06}\\
& {\bf Avg }& {\bf 0.98}& 1.00& {\bf 1.04}& 1.05& {\bf 1.97}& 2.08\\
& {\bf Max }& {\bf 1.16}& 1.31& 1.31& {\bf 1.28}& {\bf 3.14}& 3.69\\
& {\bf StD }& 0.09& 0.17& 0.12& 0.13& 0.65& 0.83\\
\hline
\multirow{4}{*}{\bf 24}
& {\bf Min }& {\bf 0.91}& 0.93& 1.02& {\bf 0.98}& 1.16& {\bf 1.09}\\
& {\bf Avg }& {\bf 1.15}& 1.16& 1.22& {\bf 1.19}& {\bf 2.06}& 2.19\\
& {\bf Max }& 1.72& {\bf 1.60}& 1.81& {\bf 1.61}& {\bf 3.49}& 4.08\\
& {\bf StD }& 0.20& 0.21& 0.18& 0.16& 0.70& 0.91\\
\hline
\multirow{4}{*}{\bf 32}
& {\bf Min }& 1.05& {\bf 1.04}& {\bf 1.07}& 1.12& 1.33& {\bf 1.26}\\
& {\bf Avg }& 1.51& {\bf 1.49}& 1.54& {\bf 1.51}& {\bf 2.24}& 2.41\\
& {\bf Max }& {\bf 2.52}& 2.63& {\bf 2.52}& 2.62& {\bf 3.71}& 4.51\\
& {\bf StD }& 0.45& 0.45& 0.42& 0.43& 0.74& 1.02\\
\end{tabular}
\label{tab_batched_overhead}
\end{table}
By observing Table~\ref{tab_batched_overhead}, we can notice that for
one thread, on average, local scheduling is slightly better than
batched on the three designs. As we increase the number of threads,
one can observe that, for the NS and SS designs, both scheduling
strategies show very close minimum, average and maximum overhead
ratios. For the PAC design, the best minimum overhead ratio is always
for batched scheduling but, for the average and maximum overhead
ratio, local scheduling is always better than batched scheduling. For
the average and maximum overhead ratios, the difference between local
and batched scheduling in the PAC design is slightly higher than in
the NS and SS designs, which can be read as an indication of the
overhead that PAC introduces into the FS design. Recall that whenever
an answer is found during the evaluation, PAC requires that threads
traverse their private consumer data structures to check if the answer
was already found (and propagated).
Finally, we would like to draw the reader's attention to the worst
results obtained (the ones represented by the maximum rows). For 32
threads, the NS, SS and PAC designs have overhead results of
2.52/2.63, 2.52/2.62 and 3.71/4.51, respectively for local/batched
scheduling. These are outstanding results if we compare them with the
results obtained in our first approach~\cite{Areias-12a}, without the
fixed-size UMA and without lock-free data structures, where for local
scheduling with 24 threads, the NS, SS and FS designs had average
overhead results of 18.64, 17.72 and 5.42, and worst overhead results
of 47.89, 47.60 and 11.49, respectively. Results for the XSB Prolog
system, also presented in~\cite{Areias-12a}, for the same set of
benchmarks showed average overhead results of 6.1 and worst overhead
results of 10.31. We thus argue that the combination of a fixed-size
UMA with lock-free data structures is the best proposal to support
concurrency in general purpose multithreaded tabling applications.
\subsection{Experiments on Dynamic Programming Problems}
As mentioned in subsection~\ref{sec_implicit_explicit}, with a fully
explicit approach, it is left to the user to break the problem into
tasks for concurrent execution, assign them to the available workers
and control the execution and the synchronization points, i.e., it is
\emph{not} the tabled execution system that is responsible for doing
that; the execution system only provides the mechanisms/interface for
allowing simultaneous flows of execution. Thus, the user-level
scheduler implemented by the user, to support the division of the
problem in concurrent tasks and control the execution and
synchronization points, plays a key role in the process of trying to
obtain speedups through parallel execution. This means that we cannot
evaluate the infrastructure of a concurrent tabling engine just by
running some benchmarks without putting significant effort into a
good scheduler design, which is independent of such infrastructure.
In this subsection, we show how dynamic programming problems fit well
with concurrent tabled evaluation~\cite{areias-jss16}. To do so, we
used two well-known dynamic programming problems, the \emph{Knapsack}
and the \emph{Longest Common Subsequence (LCS)} problems. The Knapsack
problem~\cite{Martello-90} is a well-known problem in combinatorial
optimization that can be found in many domains such as logistics,
manufacturing, finance or telecommunications. Given a set of items,
each with a weight and a profit, the goal is to determine the number
of items of each kind to include in a collection so that the total
weight is equal or less than a given capacity and the total profit is
as much as possible. The problem of computing the length of the LCS is
representative of a class of dynamic programming algorithms for string
comparison that are based on getting a similarity degree. A good
example is the sequence alignment, which is a fundamental technique
for biologists to investigate the similarity between species.
For the Knapsack problem, we fixed the number of items and capacity,
respectively, 1,600 and 3,200. For the LCS problem, we used sequences
with a fixed size of 3,200 symbols. Then, for each problem, we created
three different datasets, D$_{10}$, D$_{30}$ and D$_{50}$, meaning
that the values for the weights/profits for the Knapsack problem and
the symbols for the LCS problem were randomly generated in an interval
between 1 and 10\%, 30\% and 50\% of the total number of
items/symbols, respectively.
For both problems, we implemented \emph{multithreaded tabled
top-down} and \emph{multithreaded tabled bottom-up} user-level
scheduler approaches. For the top-down approaches, we followed Stivala
\emph{et al.}'s work~\cite{Stivala-10} where a set of threads solve
the entire program independently but with a randomized choice of the
sub-problems. Figure~\ref{fig_knap_top_down_eval_tree-mt} illustrates
how this was applied in the case of the Knapsack problem considering
$N$ items and $C$ capacity. A set of threads begin the execution with
the same top query tabled call, $ks(N,C)$ in
Fig.~\ref{fig_knap_top_down_eval_tree-mt}, but then, on each level of
the evaluation tree, each thread randomly decides which branch will be
evaluated first, the exclude item branch (\emph{Exc}) or the include
item branch (\emph{Inc}). This random decision is aimed to disperse
the threads through the evaluation tree\footnote{A similar strategy
was followed for the LCS problem.}.
\begin{figure}[!ht]
\centering
\includegraphics[width=12cm]{figures/knap_top_down-mt.pdf}
\caption{Knapsack multithreaded tabled top-down approach}
\label{fig_knap_top_down_eval_tree-mt}
\end{figure}
Figure~\ref{fig_knap_top_down_eval_tree-mt}(a) shows a situation
where, starting from a certain item \emph{i} and capacity, thread
$T_1$ is evaluating the left branch of the tree ($Exc_i$), while
thread $T_2$ is evaluating the right branch ($Inc_i$)\footnote{For
simplicity of presentation, the capacity values are not shown in
Fig.~\ref{fig_knap_top_down_eval_tree-mt}. Note however that the
tabled call corresponding to a $Exc_i$ or $Inc_i$ branch in
different parts of the evaluation tree can be called with different
capacity values, meaning that, in fact, they are different tabled
  calls. Only when both the item and the capacity values are the same
  is the tabled call also the same.}. Notice that although the threads are
evaluating the branches of the tree in a random order, they still have
to evaluate all branches so that they can find the optimal solution
for the Knapsack problem. So, the random decision is only about the
evaluation order of the branches and not about skipping
branches. Figure~\ref{fig_knap_top_down_eval_tree-mt}(b) shows then a
situation where thread $T_1$ has completely evaluated the $Exc_i$
branch of the tree and has moved to the $Inc_i$ branch where it is now
evaluating a $Inc_j$ branch already evaluated by thread $T_2$. Since
the result for that branch is already stored in the corresponding
table, thread $T_1$ simply consumes the result, thus avoiding its
computation.
For each sub-problem, two alternative execution choices are available:
(i) exclude first and include next, or (ii) include first and exclude
next. The randomized choice of sub-problems results in the threads
diverging to compute different sub-problems simultaneously while
reusing the sub-problems' results computed in the meantime by the
other threads. Since the number of sub-problems is usually high in
this kind of problem, it is expected that the available set of
sub-problems will be evenly divided among the available threads,
resulting in less computation time to reach the final result.
We have implemented two alternative versions. The first version
(YAP$_{TD_1}$) simply follows Stivala et al.'s original random
approach. The second version (YAP$_{TD_2}$) extends the first one with
an extra step where the computation is first moved forward (i.e.,
to a deeper item/symbol in the evaluation tree) using a random
displacement of the number of items/symbols (we used a $maxRandom$
value corresponding to $10\%$ of the total number of items/symbols in
the problem) and only then the computation is performed for the next
item/symbol, as usual.
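The idea behind the randomized top-down strategy can be sketched as
follows (an illustrative, sequential Python translation under our own
naming; the actual implementation uses Yap's tabling engine and
threads, with the memoization cache standing in for the tables):

```python
import random
from functools import lru_cache

def knapsack_top_down(weights, profits, capacity, seed=0):
    """Top-down Knapsack with memoization standing in for the tables.

    In the multithreaded scheme every thread runs this same recursion,
    but picks the evaluation order of the Exc/Inc branches at random,
    so threads tend to diverge to different sub-problems and consume
    each other's tabled results. The randomization only affects the
    evaluation order, never which branches are evaluated.
    """
    rng = random.Random(seed)

    @lru_cache(maxsize=None)
    def ks(i, c):
        # ks(i, c): maximum profit using the first i items with capacity c.
        if i == 0:
            return 0
        branches = [lambda: ks(i - 1, c)]                  # Exc_i
        if weights[i - 1] <= c:                            # Inc_i (if it fits)
            branches.append(
                lambda: profits[i - 1] + ks(i - 1, c - weights[i - 1]))
        rng.shuffle(branches)  # randomized evaluation order only
        return max(b() for b in branches)

    return ks(len(weights), capacity)
```

Note that the shuffle happens only on a cache miss and cannot change
the returned value, matching the observation that the random decision
never skips branches.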
For the bottom-up user-level scheduler approaches (YAP$_{BU}$), the
Knapsack version is based on~\cite{Kumar-94} and the LCS version is
based on~\cite{Kumar-02}. Figure~\ref{fig_knapsack_matrix-mt}
illustrates the case of the Knapsack problem for $N$ items and $C$
capacity. The evaluation is done bottom-up with increasing capacities
$c \in \{1,...,C\}$ until computing the maximum profit for the given
capacity $C$, which corresponds to the query goal $ks(N,C)$. The
bottom-up characteristic comes from the fact that, given a Knapsack
with capacity $c$ and using $i$ items, $i < N$, the decision to
include the next item $j$, $j=i+1$, leads to two situations: (i) if
$j$ is not included, the Knapsack profit is unchanged; (ii) if $j$ is
included, the profit is the result of the maximum profit of the
Knapsack with the same $i$ items but with capacity $c - w_j$ (the
capacity needed to include the weight $w_j$ of item $j$) increased by
$p_j$ (the profit of the item $j$ being included). The algorithm then
decides whether to include an item based on which choice leads
to maximum profit. Thus, computing a row $i$ depends only on the
sub-problems at row $i-1$. A possible parallelization is, for each
row, to divide the computation of the $C$ columns between the
available threads and then wait for all threads to complete in order
to synchronize before computing the next
row. Figure~\ref{fig_knapsack_matrix-mt}(a) shows an example with two
threads, $T_1$ and $T_2$, where the computation of the $C$ columns
within the evaluation matrix is divided in smaller chunks and each
chunk is evaluated by the same thread.
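The bottom-up recurrence just described can be sketched in Python (a
sequential, illustrative version with names of our own choosing; in
the multithreaded scheme of Fig.~\ref{fig_knapsack_matrix-mt}(a) the
columns of each row are split into chunks among the threads):

```python
def knapsack_bottom_up(weights, profits, capacity):
    """Bottom-up Knapsack: computing row j of the evaluation matrix
    depends only on row j - 1; the answer is ks(N, C) = table[N][C]."""
    n = len(weights)
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for j in range(1, n + 1):                  # next item j = i + 1
        w, p = weights[j - 1], profits[j - 1]
        for c in range(1, capacity + 1):       # increasing capacities
            profit_excluded = table[j - 1][c]           # (i) j not included
            if w <= c:                                  # (ii) j included
                profit_included = table[j - 1][c - w] + p
                table[j][c] = max(profit_excluded, profit_included)
            else:
                table[j][c] = profit_excluded
    return table[n][capacity]
```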
\begin{figure}[!ht]
\centering
\includegraphics[width=\textwidth]{figures/knapsack_matrix-mt.pdf}
\caption{Knapsack multithreaded tabled bottom-up approach}
\label{fig_knapsack_matrix-mt}
\end{figure}
Figure~\ref{fig_knapsack_matrix-mt}(b) shows then a situation where
the cell corresponding to call $ks(j,c)$ is being evaluated by thread
$T_1$. As explained above, this involves computing the values for
$ks(i,c-w_j)$ and $ks(i,c)$ (cells denoted with a black circle in
Fig.~\ref{fig_knapsack_matrix-mt}(b)). Since we want to take advantage
of the built-in tabling mechanism, we can avoid the synchronization
between rows mentioned above. Hence, when a sub-problem in the
previous row has not been computed yet (i.e., not yet marked as
completed in the subgoal frame for the given call), instead of waiting
for the corresponding result to be computed by another thread, the
current thread also starts its computation, which may recursively call
many other sub-problems not computed yet. Although this can lead to
redundant sub-computations, it avoids synchronization. In fact, as we
will see, this approach proved to be very effective. The situation
in Fig.~\ref{fig_knapsack_matrix-mt}(b) shows the case where thread
$T_1$ consumes the value for call $ks(i,c-w_j)$ from the tables
(already computed by $T_2$) but computes the value for $ks(i,c)$.
To evaluate the performance of the multithreaded tabled top-down and
bottom-up approaches, we used local scheduling with the PAS design,
together with the fixed-size UMA and the support for lock-free data
structures within the subgoal trie data structure. For the bottom-up
approaches, standard tabling is enough but for the top-down
approaches, mode-directed tabling is mandatory since we want to
maximize the profit, in the case of the Knapsack problem, and the
length of the longest common subsequence, in the case of the LCS
problem. To put our results in perspective, we also experimented with
XSB Prolog version 3.4.0 using the shared tables
model~\cite{Marques-08} for the bottom-up approaches (since XSB does
not support mode-directed tabling, it could not be used for the
top-down approaches).
\begin{table}[t]
\centering
\caption{Execution time, in milliseconds, for one thread (sequential
and multithreaded version) and corresponding speedup (against one
thread running the multithreaded version) for the execution with 8,
16, 24 and 32 threads, for the top-down and bottom-up approaches of
the Knapsack problem using the Yap and XSB Prolog systems}
\begin{tabular}{cc||c|c|rrrr||r}
\multicolumn{2}{c||}{\multirow{3}{*}{\bf System/Dataset}}
& {\bf Seq.}
& \multicolumn{5}{c||}{\bf \# Threads (p)}
& {\bf Best} \\
&
& {\bf Time}
& {\bf Time (T$_1$)} & \multicolumn{4}{c||}{\bf Speedup (T$_1$/T$_p$)}
& {\bf Time}\\
&
& {\bf (T$_{seq}$)}
& {\bf 1} & {\bf 8} & {\bf 16} & {\bf 24} & {\bf 32} & {\bf (T$_{best}$)}\\
\hline\hline
\multicolumn{8}{l}{\bf Top-Down Approaches} \\
\multicolumn{1}{c}{\multirow{3}{*}{\bf YAP$_{TD_1}$}}
& {\bf D$_{10}$} & 14,330 & 19,316 & 1.96 & {\bf 2.12} & 2.04 & 1.95 & 9,115 \\
& {\bf D$_{30}$} & 14,725 & 19,332 & 3.57 & {\bf 4.17} & 4.06 & 3.93 & 4,639 \\
& {\bf D$_{50}$} & 14,729 & 18,857 & 4.74 & 6.28 & {\bf 6.44} & 6.41 & 2,930 \\
\hline
\multicolumn{1}{c}{\multirow{3}{*}{\bf YAP$_{TD_2}$}}
& {\bf D$_{10}$} & 19,667 & 24,444 & 6.78 & 12.35 & 15.44 & {\bf 18.19} & 1,344 \\
& {\bf D$_{30}$} & 19,847 & 25,609 & 7.15 & 13.83 & 17.37 & {\bf 20.47} & 1,251 \\
& {\bf D$_{50}$} & 19,985 & 25,429 & 7.27 & 13.70 & 17.35 & {\bf 20.62} & 1,233 \\
\hline
\multicolumn{8}{l}{\bf Bottom-Up Approaches} \\
\multicolumn{1}{c}{\multirow{3}{*}{\bf YAP$_{BU}$}}
& {\bf D$_{10}$} & 12,614 & 17,940 & 7.17 & 13.97 & 18.31 & {\bf 22.15} & 810 \\
& {\bf D$_{30}$} & 12,364 & 17,856 & 7.23 & 13.78 & 18.26 & {\bf 21.94} & 814 \\
& {\bf D$_{50}$} & 12,653 & 17,499 & 7.25 & 14.01 & 18.34 & {\bf 21.76} & 804 \\
\hline
\multicolumn{1}{c}{\multirow{3}{*}{\bf XSB$_{BU}$}}
& {\bf D$_{10}$} & {\bf 32,297} & 38,965 & 0.87 & 0.66 & 0.62 & 0.55 & 32,297 \\
& {\bf D$_{30}$} & {\bf 32,063} & 38,007 & 0.86 & 0.61 & 0.56 & 0.53 & 32,063 \\
& {\bf D$_{50}$} & {\bf 31,893} & 38,534 & 0.84 & 0.58 & 0.57 & 0.57 & 31,893 \\
\end{tabular}
\label{tab_knapsack}
\end{table}
\begin{table}[t]
\centering
\caption{Execution time, in milliseconds, for one thread (sequential
and multithreaded version) and corresponding speedup (against one
thread running the multithreaded version) for the execution with 8,
16, 24 and 32 threads, for the top-down and bottom-up approaches of
the LCS problem using the Yap and XSB Prolog systems}
\begin{tabular}{cc||c|c|rrrr||r}
\multicolumn{2}{c||}{\multirow{3}{*}{\bf System/Dataset}}
& {\bf Seq.}
& \multicolumn{5}{c||}{\bf \# Threads (p)}
& {\bf Best} \\
&
& {\bf Time}
& {\bf Time (T$_1$)} & \multicolumn{4}{c||}{\bf Speedup (T$_1$/T$_p$)}
& {\bf Time}\\
&
& {\bf (T$_{seq}$)}
& {\bf 1} & {\bf 8} & {\bf 16} & {\bf 24} & {\bf 32} & {\bf (T$_{best}$)}\\
\hline\hline
\multicolumn{8}{l}{\bf Top-Down Approaches} \\
\multicolumn{1}{c}{\multirow{3}{*}{\bf YAP$_{TD_1}$}}
& {\bf D$_{10}$} & 26,030 & 33,969 & {\bf 1.58} & 1.53 & 1.50 & 1.42 & 21,509\\
& {\bf D$_{30}$} & 26,523 & 34,213 & {\bf 1.60} & 1.54 & 1.50 & 1.42 & 21,424\\
& {\bf D$_{50}$} & 26,545 & 34,234 & {\bf 1.60} & 1.54 & 1.51 & 1.40 & 21,408\\
\hline
\multicolumn{1}{c}{\multirow{3}{*}{\bf YAP$_{TD_2}$}}
& {\bf D$_{10}$} & 34,565 & 44,371 & 7.23 & 13.23 & 16.45 & {\bf 19.74} & 2,248\\
& {\bf D$_{30}$} & 34,284 & 44,191 & 7.12 & 13.09 & 16.52 & {\bf 19.77} & 2,235\\
& {\bf D$_{50}$} & 33,989 & 44,158 & 7.06 & 13.30 & 16.49 & {\bf 19.58} & 2,255\\
\hline
\multicolumn{8}{l}{\bf Bottom-Up Approaches} \\
\multicolumn{1}{c}{\multirow{3}{*}{\bf YAP$_{BU}$}}
& {\bf D$_{10}$} & 20,799 & 28,909 & 6.47 & 12.21 & 16.48 & {\bf 20.32} & 1,423\\
& {\bf D$_{30}$} & 21,174 & 28,904 & 6.94 & 12.61 & 16.63 & {\bf 20.40} & 1,417\\
& {\bf D$_{50}$} & 21,166 & 28,857 & 6.44 & 12.31 & 16.44 & {\bf 20.52} & 1,406\\
\hline
\multicolumn{1}{c}{\multirow{3}{*}{\bf XSB$_{BU}$}}
& {\bf D$_{10}$} & {\bf 60,983} & 74,108 & n.a. & n.a. & n.a. & n.a. & 60,983\\
& {\bf D$_{30}$} & {\bf 59,496} & 74,410 & n.a. & n.a. & n.a. & n.a. & 59,496\\
& {\bf D$_{50}$} & {\bf 59,700} & 74,628 & n.a. & n.a. & n.a. & n.a. & 59,700\\
\end{tabular}
\label{tab_lcs}
\end{table}
Table~\ref{tab_knapsack} and Table~\ref{tab_lcs} show the average
results of 10 runs obtained, respectively, for the Knapsack and LCS
problems for both top-down and bottom-up approaches using the Yap and
XSB Prolog systems. The columns of both tables show the following
information. The first column describes the system and the dataset
used. The second column (T$_{seq}$) shows the sequential execution
time in milliseconds. For T$_{seq}$, the Prolog systems were compiled
without multithreaded support and run without multithreaded code. The
next five columns show the execution time for one thread (T$_1$) and
the corresponding speedup for the execution with 8, 16, 24 and 32
threads (columns T$_1$/T$_p$). For each system/dataset configuration,
the results in bold highlight the column where the best execution time
was obtained and the last column (T$_{best}$) presents such result in
milliseconds.
Analyzing the general picture of both tables, one can observe that the
sequential time (T$_{seq}$) is always lower than the multithreaded
time (T$_{1}$). This is expected since the multithreaded version is
compiled and equipped with all the complex machinery required to
support concurrency in Yap, which includes not only all the new
tabling support but also all the base support for multithreading in Yap.
When scaling the problem with multiple threads, the YAP$_{TD_2}$
top-down and YAP$_{BU}$ bottom-up approaches have the best results
with excellent speedups for 8, 16, 24 and 32 threads. In particular,
for 32 threads, they obtain speedups around 21 and 20, respectively,
for the Knapsack and LCS problems (T$_{1}$/T$_{best}$). If comparing
against the sequential version for 32 threads (not shown in the
tables), the speedups are around 15 and 16, respectively, for the
Knapsack and LCS problems (T$_{seq}$/T$_{best}$). The results for the
top-down YAP$_{TD_1}$ approach are less interesting, despite the fact
that it scales slightly for the Knapsack problem up to 16 threads.
Despite the similar average speedups for YAP$_{TD_2}$ and YAP$_{BU}$,
their execution times are quite different. Consider, for example, the
$D_{50}$ dataset of the Knapsack problem with 32 threads. While the
speedup $20.62$ of YAP$_{TD_2}$ corresponds to an execution time of
$1,233$ milliseconds, the speedup $21.76$ of YAP$_{BU}$ only
corresponds to $804$ milliseconds. Similarly, for the LCS problem, if
considering the $D_{50}$ dataset with 32 threads, while the speedup
$19.58$ of YAP$_{TD_2}$ corresponds to $2,255$ milliseconds, the
speedup $20.52$ of the YAP$_{BU}$ only corresponds to $1,406$
milliseconds.
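As a sanity check on how the reported numbers relate (speedup is
simply T$_1$/T$_p$), the D$_{50}$ Knapsack figures quoted above can be
reproduced directly from the table entries:

```python
# Speedup is measured against the one-thread multithreaded run: S_p = T_1 / T_p.
# Times in milliseconds, taken from the Knapsack D50 rows (YAP_TD2 and YAP_BU).
td2_t1, td2_best = 25_429, 1_233
bu_t1, bu_best = 17_499, 804

s_td2 = td2_t1 / td2_best   # ≈ 20.62
s_bu = bu_t1 / bu_best      # ≈ 21.76
```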
The results also suggest that the execution times are not affected by
the values for the weights/profits generated. In general, the speedups
obtained for the different datasets ($D_{10}$, $D_{30}$ and $D_{50}$)
are always very close for the same number of threads. Note that for
the bottom-up approaches this was expected since the complete matrix
of results has to be computed. For the top-down approaches, it can be
affected by the values for the weights/profits due to the depth in the
evaluation tree where solutions can be found. However, since we are
using randomized values in the datasets, we are aiming for the average
case.
Regarding the comparison with XSB's shared tables model, Yap's results
clearly outperform those of XSB. For the execution time with one
thread, XSB shows higher times than all Yap's approaches. For the
concurrent execution of the Knapsack problem, XSB shows no speedups
and for the concurrent execution of the LCS problem we have no results
available ($n.a.$) since we got \emph{segmentation fault} execution
errors. From our point of view, XSB's results are a consequence of the
\emph{usurpation operation}~\cite{Marques-08} that restricts the
potential of concurrency to non-mutually dependent
sub-computations. As the concurrent versions of the Knapsack and LCS
problems create mutually dependent sub-computations, which can be
executed in different threads, XSB is actually unable to execute them
concurrently. In other words, even if we launch an arbitrarily large
number of threads on those programs, the system tends to use only one
thread at the end to evaluate most of the computations.
\section{Future Perspectives and Challenging Research Directions}
Currently, Yap provides the ground technology for both implicit and
explicit concurrent tabled evaluation, but separately. From the user's
point of view, tabling can be enabled through the use of single
directives of the form `\emph{:-~table p/n}', meaning that common
sub-computations for \emph{p/n} will be synchronized and shared
between workers at the engine level, i.e., at the level of the tables
where the results for such sub-computations are stored. Implicit
concurrent tabled evaluation can be triggered if using the OPTYap
design~\cite{Rocha-05a}, which exploits implicit or-parallelism using
shared memory processes. Explicit concurrent tabled evaluation can be
triggered if using the thread-based implementation~\cite{Areias-12a},
but the user still needs to explicitly implement the thread management
and scheduler policy for task distribution, which is orthogonal to the
focus of this work. Table~\ref{tab_concurrent_features} highlights the
key differences between the two concurrent tabling strategies in Yap's
current implementation.
\begin{table}[!ht]
\centering
\caption{Concurrent tabling supported features}
\begin{tabular}{c|cccc}
\multirow{2}{*}{\textbf{\emph{Strategy}}}
& \textbf{\emph{Execution}} & \textbf{\emph{Memory}} & \textbf{\emph{Synchronization}} & \textbf{\emph{Mode-Directed}} \\
& \textbf{\emph{Model}} & \textbf{\emph{Allocator}} & \textbf{\emph{Mechanisms}} & \textbf{\emph{Tabling}} \\
\hline\hline
\textbf{\emph{Implicit}} & \emph{Processes/Threads} & \emph{Fixed-Size} & \emph{Lock-Based} & \emph{--} \\
\textbf{\emph{Explicit}} & \emph{Threads} & \emph{Fixed-Size} & \emph{Lock-Free} & \emph{NS/SS/PAS Designs} \\
\end{tabular}
\label{tab_concurrent_features}
\end{table}
The present work can thus be viewed as the basis for further research
directions in this area. So far, we have achieved our initial
goal. Even so, the system still has some restrictions that may reduce
its use elsewhere and its contribution to general Prolog
applications. We next discuss future perspectives and challenging
research directions:
\begin{description}
\item[Extend CS design to support lock-free data structures.] Due to
the good performance results obtained with the lock-free proposals,
an obvious research direction for further work is to extend the
original CS design to use lock-free data structures instead of the
lock-based data structures.
\item[Extend CS/FS/PAC designs to support mode-directed tabling.] In
the previous section, we observed the advantages of combining
mode-directed tabling with the PAS design. However, in the PAS
design, the answers to common tabled subgoal calls are only shared
when the corresponding tables are completed. Since the CS/FS/PAC
designs do not require the completion of tables to share answers,
threads would be able to share and propagate answers sooner. The
problem of combining mode-directed tabling with the CS/FS/PAC
designs is on how to efficiently support concurrent delete
operations on the trie structures and on how to efficiently handle
the interface between consumer calls and the navigation in the trie
of answers for the several running workers.
\item[Support concurrent delete operations on the trie structures.]
As mentioned above, this is a key feature to allow for an efficient
implementation of concurrent mode-directed tabling with the
CS/FS/PAC designs. Moreover, this extension could also be applied to
concurrent incremental tabling~\cite{Diptikalyan-PhD}, where
specific subgoal calls and answers can be dynamically deleted during
tabled evaluation.
\item[Concurrent linear tabling.] Since the evaluation of programs
with a linear tabling engine is less complex than the evaluation
with a suspension-based engine, it would be interesting to study how
different linear tabled strategies~\cite{Areias-11,Areias-13} could
run concurrently and take advantage of the different table space
designs presented in this work.
\item[Implicit and explicit concurrent evaluation in a single
framework.] This is our most challenging goal towards an efficient
concurrent framework which integrates both implicit and explicit
concurrent tabled evaluation in a single tabling engine. This is a
very complex task since we need to combine the explicit control
required to launch, assign and schedule tasks to workers, with the
built-in mechanisms for handling tabling and/or implicit
concurrency, which cannot be controlled by the user. In such a
framework, a program begins as a single worker that executes
sequentially until reaching an implicit or explicit concurrent
construct. When reaching an explicit concurrent construct, the
execution model launches a set of additional workers to exploit
concurrently a set of independent sub-computations (which may
include tabled and non-tabled predicates). From the workers point of
view, each concurrent sub-computation computes its tables but, at
the implementation level, the tables can be shared following the
table space designs presented before for implicit concurrent tabled
evaluation. Otherwise, when reaching an implicit concurrent construct,
the execution model launches a set of additional workers to exploit
in parallel a given sub-computation. Parallel execution is then
handled implicitly by the execution model taking into account
possible directive restrictions. For example, we may have directives
to define the number of workers, the scheduling strategy to be used,
load balancing policies, etc. By taking advantage of these explicit
parallel constructs, a user can write parallel logic programs from
scratch or parallelise existing sequential programs by incrementally
pinpointing the sub-computations that can benefit from parallelism,
using the available directives to test and fine tune the program in
order to achieve the best performance. Such a framework could renew
the glamour of Prolog systems, especially in the concurrent/parallel
programming community. Combining the inherent implicit parallelism
of Prolog with explicit high-level parallel constructs will clearly
enhance the expressiveness and declarative style of tabling and
simplify concurrent programming.
\end{description}
\bibliographystyle{acmtrans}
\label{sec:introduction}
The study of numerical solutions of Partial Differential Equations (PDEs) posed on spheres and other surfaces arises naturally in a large number of applications. It is especially important in geophysical fluid dynamics, where a sphere is usually adopted as the domain. For instance, in weather forecasting and climate modeling, PDEs are discretized through finite difference, finite element, and finite volume methods on a spherical domain \cite{giraldo1997lagrange,stuhne1999new,heikes1995anumerical}.
The efficiency and accuracy of the approximate solutions depend on certain characteristics of the discretization of the sphere and of the differential operators. In the present work, we start by looking at approximations of solutions of the Poisson problem on the unit sphere using \emph{spherical icosahedral geodesic grids} \cite{sadourny1968integration,williamson1968integration,baumgardner1985icosahedral,heikes1995anumerical}, \textit{i.e.}, grids that come from an icosahedron inscribed in the sphere, with its vertices and faces projected onto the spherical surface. We obtain a spherical triangular grid whose edges are geodesic arcs and which satisfies the so-called Delaunay criterion, \textit{i.e.}, maximizing the smallest angle \cite{glitzky2010discrete,gartner2019why,hjelle2006triangulations}. Following \cite{augenbaum1985construction,renka1997algorithm}, this construction allows us to define two types of grids on $\mathbb{S}^{2}$: a triangular Delaunay (primal) decomposition and a Voronoï (dual) decomposition. In what follows, we will consider such grids in a more general way, not necessarily restricted to those directly built from an icosahedron; therefore, we will refer to them as general Voronoï-Delaunay decompositions of the sphere throughout this paper.
Voronoï-based finite volume methods are very popular and allow great flexibility. However, the finite volume scheme may not be formally consistent, yet it can still exhibit second-order convergence, a behavior sometimes called supra-convergence \cite{barbeiro2009supraconvergent,despres2004lax,bouche2005error,pascal2007supraconvergence,manteuffel1986numerical,kreiss1986supra,diskin2010notes}. First-order error estimates in the $H^{1}$ and $L^{2}$-norms for approximate solutions of planar Voronoï-based finite volume methods have been reported in \cite{mishev1998finite,gallouet2000error,eymard2001finite,du2003voronoi,eymard2006cell}. Second-order accuracy in the $L^{2}$-norm using planar dual Donald decompositions, which use the triangle barycenters as vertices of the dual grid, was considered in \cite{jianguo1998finite,li2000generalized,chou2000error,chou2007superconvergence} and, for general surfaces, in \cite{ju2009finite,ju2009posteriori}. These latter works explicitly use properties of the barycentric dual cells, \textit{i.e.}, the quadratic order is obtained by using the centroids of the triangles, and therefore these constructions cannot, in general, be extended to arbitrary Voronoï-Delaunay decompositions.
Previous works \cite{li2000generalized,chou2000error,chen2002note,wu2003error} also usually impose an extra regularity requirement on the exact solution (belonging to $H^{3}$ or $W^{3,p}$), which is excessive compared to the requirements in finite element methods \cite{brenner2007mathematical,ciarlet2002finite,rannacher1982some}. However, \cite{ewing2002accuracy} reported a sufficient condition that lowers the regularity required of the exact solution (to $H^{2}$) but imposes an added regularity requirement on the forcing source term, namely being in $H^{1}$, to obtain the optimal convergence order. The authors also highlighted that, except for one-dimensional domains or domains with a sufficiently smooth boundary, the $H^{1}$-regularity of the source term does not automatically imply the $H^{3}$-regularity of the exact solution.
On the sphere, \cite{du2005finite} shows a quadratic-order estimate in the $L^{2}$-norm for a Voronoï-Delaunay decomposition of $\mathbb{S}^{2}$, using a Spherical Centroidal Voronoï Tessellation (SCVT) optimized grid \cite{du1999centroidal,du2003constrained} as the Voronoï-Delaunay decomposition and under an excessive regularity assumption on the exact solution. Based on this, as a first minor result of this work, we show a more general error estimate than the one given in \cite{du2005finite} by applying the approach of \cite{ewing2002accuracy}, \textit{i.e.}, decreasing the regularity of the exact solution and imposing a minimal regularity on the source term, thus obtaining the desired convergence order. We conclude, as in \cite{du2005finite}, that the proof cannot be extended to general Voronoï-Delaunay decompositions due to the explicit use of the properties of the SCVT. In general, establishing a quadratic convergence order in the $L^{2}$-norm is an open issue for the Voronoï-based finite volume method. Several efforts have been made to answer this question, mostly limited to topological aspects; for example, \cite{omnes2011second} gives a partial answer and an important improvement in relation to what is known so far.
The topic of this paper is the convergence analysis of approximate solutions of the Voronoï-based finite volume method in the maximum-norm, extending existing results for the plane \cite{ewing2002accuracy,zhang2014superconvergence,chou2003lp} to the sphere. To the best of our knowledge, no advances in this direction have been previously described in the literature, particularly for Voronoï-Delaunay decompositions on $\mathbb{S}^{2}$. We opted to consider the Voronoï-based finite volume method as a perturbation of the finite element method, an approach widely described in the literature \cite{li2000generalized, ewing2002accuracy,lin2013finite}. The main idea is to use the standard error estimation procedures developed for finite elements on surfaces, such as \cite{demlow2009higher, kroner2017approximative,kovacs2018maximum}, along with the use of regularized Green's functions on the sphere.
The main result of our work is the proof of a sub-linear convergence order, in the maximum-norm, of a classic finite volume method for the Poisson equation on general spherical Voronoï-Delaunay decompositions. This result tightens the gap between the theoretical convergence analysis and the existing empirical evidence for the convergence of such schemes on the sphere. Empirical evidence indicated the possibility of linear convergence; however, our results contain a logarithmic factor caused by the use of linear functions on the primal Delaunay decomposition, which apparently cannot be avoided, as also examined in Euclidean domains by \cite{scott1976optimal,rannacher1982some,schatz1998pointwise}. Additionally, a linear convergence order in the maximum-norm is proved for SCVT grids.
The outline of the paper is as follows: in Section \ref{sec:problsetting}, we briefly introduce some notation and the model equation used in this work. Section \ref{sec:grids} is devoted to the usual recursive construction of the spherical icosahedral geodesic grids. In Section \ref{sec:fvm}, we establish the classical finite volume method and discrete function spaces. The error estimates for the discrete finite volume scheme are given in Section \ref{sec:erroranalysis}. Numerical experiments and final comments are given in Section \ref{sec:numericalexps}.
\section{Problem setting}
\label{sec:problsetting}
In this section, we start defining the model problem, some notations and function spaces that will be used throughout the paper. Let $\mathbb{S}^{2}:=\{\mathrm{x}\in\mathbb{R}^{3}:\|\mathrm{x}\|=1\}$ be the unit sphere, where $\|\cdot\|$ represents the Euclidean norm in $\mathbb{R}^{3}$. Let $\nabla_{s}$ denote the tangential gradient \cite{dziuk2013finite} on $\mathbb{S}^{2}$ defined by,
\[
\nabla_{s} u(\mathrm{x})=\nabla u(\mathrm{x})-\left(\nabla u(\mathrm{x})\cdot\vec{\mathrm{n}}_{\S,\x}\right)\vec{\mathrm{n}}_{\S,\x},
\]
where $\nabla$ denotes the usual gradient in $\mathbb{R}^{3}$ and $\vec{\mathrm{n}}_{\S,\x}$ represents the unit outer normal vector to $\mathbb{S}^{2}$ at $\mathrm{x}=(x_{1},x_{2},x_{3})$. We shall adopt standard notation for Sobolev spaces on $\mathbb{S}^{2}$ (see \textit{e.g.}, \cite{hebey1996sobolev}). Given $1\leq p\leq \infty$ and $k$ non-negative integer, we denote the Sobolev spaces by,
\[
\begin{split}
L^{p}(\mathbb{S}^{2})&=\left\{u(\mathrm{x}):\int_{\mathbb{S}^{2}}|u(\mathrm{x})|^{p}ds(\mathrm{x})<\infty\right\},\\
W^{k,p}(\S)&=\left\{u\in L^{p}(\mathbb{S}^{2}):\nabla_{s}^{\alpha}u\in L^{p}(\mathbb{S}^{2}),\quad\mbox{for }0\leq|\alpha|\leq k\right\},
\end{split}
\]
where $\nabla_{s}^{\alpha}=\nabla_{s,1}^{\alpha_{1}}\nabla_{s,2}^{\alpha_{2}}\nabla_{s,3}^{\alpha_{3}}$ is the multi-index notation for the weak tangential derivatives up to order $k$, where $\alpha=(\alpha_{1},\alpha_{2},\alpha_{3})$ is a vector of non-negative integers with $|\alpha|=\alpha_{1}+\alpha_{2}+\alpha_{3}$. The function space $W^{k,p}(\S)$ is equipped with the norm
\[
\|u\|_{W^{k,p}(\mathbb{S}^{2})}=
\begin{cases}
\left(\sum_{0\leq|\alpha|\leq k}\|\nabla_{s}^{\alpha}u\|_{L^{p}(\mathbb{S}^{2})}^{p}\right)^{1/p},&\mbox{for }1\leq p< \infty\\
\max_{0\leq |\alpha|\leq k}\|\nabla_{s}^{\alpha}u\|_{L^{\infty}(\mathbb{S}^{2})},& \mbox{for }p=\infty.
\end{cases}
\]
We set $H^{k}(\mathbb{S}^{2})=W^{k,2}(\mathbb{S}^{2})$ along with the standard inner product
\[
(u,v)=\int_{\mathbb{S}^{2}}u(\mathrm{x})v(\mathrm{x})ds(\mathrm{x}),\quad\mbox{for all }u,v\in L^{2}(\mathbb{S}^{2}),
\]
where $ds(\mathrm{x})$ is the surface area measure. Additionally, we define the zero-averaged subspace of $H^{1}(\S)$ as
\[
H_{0}^{1}(\S):=\left\{u\inH^{1}(\S):\int_{\mathbb{S}^{2}}u(\mathrm{x})ds(\mathrm{x})=0\right\},
\]
equipped with the $H^{1}$-norm. Throughout this paper, we use $C_{\ast}$ as a generic positive real constant, which may vary with the context and depends on the problem data and other model parameters.
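To make the projection defining $\nabla_{s}$ concrete, a minimal numerical sketch (illustrative only, with names of our own choosing; not part of the analysis) for $u(\mathrm{x})=x_{1}$, whose Euclidean gradient is $(1,0,0)$ everywhere:

```python
import math

def tangential_gradient(grad_u, x):
    """∇_s u(x) = ∇u(x) − (∇u(x) · n) n, where the outward normal n
    coincides with x itself on the unit sphere."""
    dot = sum(g * xi for g, xi in zip(grad_u, x))
    return tuple(g - dot * xi for g, xi in zip(grad_u, x))

# Point on S^2 and the Euclidean gradient of u(x) = x1.
x = (1 / math.sqrt(2), 0.0, 1 / math.sqrt(2))
gs = tangential_gradient((1.0, 0.0, 0.0), x)   # = (0.5, 0.0, -0.5)

# By construction, ∇_s u is tangent to the sphere: <∇_s u, n> = 0.
tangency = sum(a * b for a, b in zip(gs, x))
```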
We now introduce the Poisson equation to be considered. Let $f\inL^{2}(\S)$ be a given forcing (source) satisfying the compatibility condition,
\begin{equation}
\label{eq:compf}
\int_{\mathbb{S}^{2}}f(\mathrm{x})ds(\mathrm{x})=0.
\end{equation}
The model problem consists of finding a scalar function $u:\mathbb{S}^{2}\to \mathbb{R}$ satisfying
\begin{equation}
\label{eq:strongform}
-\Delta_{s} u(\mathrm{x})=f(\mathrm{x}),\quad\mbox{for each }\mathrm{x}\in\mathbb{S}^{2},
\end{equation}
where $-\Delta_{s}=-\nabla_{s}\cdot\nabla_{s}$ denotes the Laplacian on $\mathbb{S}^{2}$. We impose $\int_{\mathbb{S}^{2}}u(\mathrm{x})ds(\mathrm{x})=0$ in order to ensure uniqueness of solution. For any $u,v\inH_{0}^{1}(\S)$, we define the bilinear functional $\mathcal{A}:H_{0}^{1}(\S)\timesH_{0}^{1}(\S)\to\mathbb{R}$ such that
\begin{equation}
\label{eq:biformFE}
\mathcal{A}(u,v)=\int_{\mathbb{S}^{2}}\nabla_{s} u(\mathrm{x})\cdot \nabla_{s} v(\mathrm{x})ds(\mathrm{x}).
\end{equation}
The bilinear functional is well-defined on the space $H_{0}^{1}(\S)\timesH_{0}^{1}(\S)$ and is continuous and coercive, i.e.,
there are positive constants $C_0$ and $C_1$ such that
\[
\begin{split}
\left|\mathcal{A}(u,v)\right|&\leq C_{0}\|\nabla_{s} u\|_{L^{2}(\S)}\|\nabla_{s} v\|_{L^{2}(\S)},\quad\mbox{for each }u,v\inH_{0}^{1}(\S),\\
\mathcal{A}(u,u)&\geq C_{1}\|\nabla_{s} u\|_{L^{2}(\S)}^{2},\quad\mbox{for each }u\inH_{0}^{1}(\S).
\end{split}
\]
The variational formulation of \eqref{eq:strongform} reads: find $u\inH_{0}^{1}(\S)$ such that
\begin{equation}
\label{eq:weakform}
\mathcal{A}(u,v)=(f,v),\quad\mbox{for each }v\inH_{0}^{1}(\S),
\end{equation}
in which $(f,v)=\int_{\mathbb{S}^{2}}f(\mathrm{x})v(\mathrm{x})ds(\mathrm{x})$ and the source data $f$ satisfies \eqref{eq:compf}.
As a consequence of the Lax-Milgram theorem \cite{ciarlet2002finite,brenner2007mathematical}, problem \eqref{eq:weakform} has a unique solution. This solution is such that for some positive constant $C_{S}$,
\begin{equation}
\label{eq:stability}
\|\nabla_{s} u\|_{L^{2}(\S)}\leq C_{S}\|f\|_{L^{2}(\S)}.
\end{equation}
Moreover, we have the regularity property: for $f\inL^{2}(\S)$ satisfying \eqref{eq:compf}, the unique weak solution $u\inH^{2}(\S)\cap\HqS$ of \eqref{eq:weakform} satisfies, for some positive constant $C_{R}$,
\begin{equation}
\label{eq:regularity}
\|u\|_{H^{2}(\S)}\leq C_{R}\|f\|_{L^{2}(\S)}.
\end{equation}
A detailed proof of \eqref{eq:regularity} can be found in \cite[Theorem 3.3, p.~304]{dziuk2013finite}.
\section{Spherical icosahedral geodesic grids}
\label{sec:grids}
In this section, we describe the discretization framework to approximate the sphere $\mathbb{S}^{2}$, following \cite{baumgardner1985icosahedral,giraldo1997lagrange,heikes1995anumerical}. In general, the spherical icosahedral grid can be constructed by defining an icosahedron inscribed inside $\mathbb{S}^{2}$, which has triangular faces and vertices. Each edge of the original icosahedron whose vertices are on $\mathbb{S}^{2}$ is projected onto the surface of $\mathbb{S}^{2}$. We employ a recursive refinement of the grid, by connecting the midpoints of the geodesic arcs to generate four subtriangles in each geodesic triangle. This procedure may be applied to all geodesic triangles of the initial icosahedron to create a grid of desired resolution (see Figure \ref{fig:refinamenttriangle}).
\begin{figure}[!h]
\centering
\includegraphics[scale=0.26]{icos_pol_0.png}\hspace{0.2cm}
\includegraphics[scale=0.26]{icos_pol_nopt_1.png}
\hspace{0.2cm}
\includegraphics[scale=0.26]{icos_pol_nopt_2.png}
\caption{\label{fig:refinamenttriangle} Spherical icosahedral grids of levels $0$, $1$ and $2$, with $12$, $42$ and $162$ vertices, respectively.}
\end{figure}
\subsection{Voronoï-Delaunay decomposition}
Let $\mathrm{d}(\mathrm{x},\mathrm{y})$ denote the geodesic distance between $\mathrm{x}$ and $\mathrm{y}$ on $\mathbb{S}^{2}$, defined by
\[
\mathrm{d}(\mathrm{x},\mathrm{y}):=\arccos\langle\mathrm{x},\mathrm{y}\rangle_{\mathbb{R}^{3}}\in [0,\pi],
\]
where $\langle\cdot,\cdot\rangle_{\mathbb{R}^{3}}$ denotes the Euclidean scalar product in $\mathbb{R}^{3}$. We will use the notation $m_{a}(\cdot)$ and $m_{l}(\cdot)$ for the standard measures of surface area and curve length, respectively.
Let $\mathrm{S}_{N}=\{\x_{i}\}_{i=1}^{N}$ denote the set of distinct vertices on $\mathbb{S}^{2}$, where $N=2^{2\ell}\cdot10+2$ is the number of vertices and $\ell$ is the level of grid refinement \cite{baumgardner1985icosahedral}. We denote by $\widetilde{\mathrm T}_{ijk}$ the geodesic triangle with vertices $\x_{i},\x_{j},\x_{k}\in\mathrm{S}_{N}$ and define the spherical Delaunay (primal) decomposition on $\mathbb{S}^{2}$ as the set $\widetilde{\Th}:=\{\widetilde{\mathrm T}_{ijk}:ijk\in\Sigma\}$, where $\Sigma$ is the set of indices such that $i,j,k$ are adjacent neighbors in $\mathrm{S}_{N}$. Here the subscript $h$ denotes the main grid parameter, to be defined later.
The dual Voronoï decomposition of
$\widetilde{\Th}$ is constructed following \cite{renka1997algorithm,augenbaum1985construction}. For each vertex $\x_{i}\in\mathrm{S}_{N}$, $1\leq i\leq N$, its associated Voronoï~cell $\widetilde{\mathrm{V}}_{i}$ is given by
\[
\widetilde{\mathrm{V}}_{i}:=\left\{\mathrm{x}\in\mathbb{S}^{2}:\mathrm{d}(\mathrm{x},\x_{i})<\mathrm{d}(\mathrm{x},\x_{j}),\quad\mbox{for each }1\leq j \leq N,\mbox{ and }j\neq i \right\}.
\]
Each Voronoï cell $\widetilde{\mathrm{V}}_{i}$ consists of all points $\mathrm{x}\in\mathbb{S}^{2}$ that are closer to $\x_{i}$ than to any other vertex of $\mathrm{S}_{N}$. Voronoï~cells are open and convex polygons on $\mathbb{S}^{2}$, bounded by geodesic arcs. In particular, any two distinct cells are disjoint and $\bigcup_{i=1}^{N}\mathrm{cl}(\widetilde{\mathrm{V}}_{i})=\mathbb{S}^{2}$, where $\mathrm{cl}(\cdot)$ denotes the closure of the cell. Further, given two adjacent vertices $\x_{i}$ and $\x_{j}$, we denote by $\widetilde{\Gamma}_{ij}=\mathrm{cl}(\widetilde{\mathrm{V}}_{i})\cap\mathrm{cl}(\widetilde{\mathrm{V}}_{j})\neq\emptyset$ the geodesic Voronoï~edge on $\mathbb{S}^{2}$ associated to the vertices $\x_{i}$ and $\x_{j}$. Thus, for each vertex $\x_{i}$ we denote by $\LiN$ the set of indices of its neighbors $\x_{j}$ such that $m_{l}(\widetilde{\Gamma}_{ij})>0$, \textit{i.e.},
\[
\LiN=\left\{j:j\neq i\mbox{ and }\widetilde{\Gamma}_{ij}=\mathrm{cl}(\widetilde{\mathrm{V}}_{i})\cap\mathrm{cl}(\widetilde{\mathrm{V}}_{j})\neq\emptyset\right\}.
\]
Each $\widetilde{\mathrm{V}}_{i}$ has a piecewise smooth boundary $\partial\widetilde{\mathrm{V}}_{i}$ formed by the dual Voronoï~edges $\widetilde{\Gamma}_{ij}$, with $j\in\LiN$, \textit{i.e.}, $\partial\widetilde{\mathrm{V}}_{i}=\bigcup_{j\in\LiN}\widetilde{\Gamma}_{ij}$. For neighboring vertices $\x_{i}$ and $\x_{j}$ with $j\in\LiN$, we denote by $\widetilde{\tau}_{ij}$ the Delaunay edge joining $\x_{i}$ and $\x_{j}$, and by $\x_{ij}$ and $\m_{ij}$ the midpoints of the geodesic edges $\widetilde{\tau}_{ij}$ (Delaunay) and $\widetilde{\Gamma}_{ij}$ (Voronoï), respectively. By construction, each geodesic Delaunay edge $\widetilde{\tau}_{ij}$ is perpendicular to the geodesic Voronoï edge $\widetilde{\Gamma}_{ij}$, and the plane through $\widetilde{\Gamma}_{ij}$ and the origin bisects $\widetilde{\tau}_{ij}$ at its midpoint $\x_{ij}$ (see \cite{du2003voronoi,renka1997algorithm}). Therefore $\mathrm{d}(\x_{i},\mathrm{x})=\mathrm{d}(\mathrm{x},\x_{j})$ for each $\mathrm{x}\in\widetilde{\Gamma}_{ij}$, and we denote by $\vec{\mathrm{n}}_{\x,\gij}$ the co-normal unit vector to the Voronoï edge $\widetilde{\Gamma}_{ij}$ lying in the plane $\mathbb{T}_{\x,\S}$ tangent to $\mathbb{S}^{2}$ at $\mathrm{x}$. Finally, $\vec{\mathrm{n}}_{\x,\gij}$ is parallel to $\overrightarrow{\x_{i}\x_{j}}$, \textit{i.e.}, $\vec{\mathrm{n}}_{\x,\gij}\parallel\overrightarrow{\x_{i}\x_{j}}$ for each $\mathrm{x}\in\widetilde{\Gamma}_{ij}$, see Figure \ref{fig:voronoicell}.
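As an illustration of the cell definition, locating the Voronoï cell that contains a point $\mathrm{x}$ reduces to a nearest-generator query in the geodesic distance $\mathrm{d}$. A minimal NumPy sketch, with a hypothetical generator array of unit vectors (illustrative inputs, not tied to any particular grid code):

```python
import numpy as np

def geodesic_distance(x, y):
    """d(x, y) = arccos <x, y> in [0, pi], for unit vectors x, y on S^2.
    The clip guards against roundoff pushing the inner product past +/-1."""
    return np.arccos(np.clip(np.dot(x, y), -1.0, 1.0))

def voronoi_cell_index(x, generators):
    """Index i of the Voronoi cell V_i containing x: the generator x_i nearest
    to x in geodesic distance.  Since arccos is decreasing, this is simply the
    generator with the largest inner product with x."""
    return int(np.argmax(generators @ x))
```

Replacing the arccos comparison by an inner-product comparison avoids evaluating the transcendental function in the inner loop; the two orderings agree because $\arccos$ is monotone decreasing on $[-1,1]$.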
\begin{figure}[!h]
\centering
\includegraphics[scale=0.26]{icos_pol_nopt_0_VD.png}\hspace{0.2cm}
\includegraphics[scale=0.26]{icos_pol_nopt_1_VD.png}\hspace{0.2cm}
\begin{tikzpicture}[scale=0.6,font=\small]
\coordinate (xi) at (0,0);
\coordinate (xi1) at (3,0);
\coordinate (xi2) at (2.1,2.7);
\coordinate (xi3) at (-1,3);
\coordinate (xi4) at (-3.0,0.8);
\coordinate (xi5) at (-2.1,-2.2);
\coordinate (xi6) at (1.5,-2.6);
\coordinate (mi1) at (1.5,0);
\coordinate (mi2) at (1.1,1.3);
\coordinate (mi3) at (-0.5,1.5);
\coordinate (mi4) at (-1.5,0.4);
\coordinate (mi5) at (-1.0,-1.1);
\coordinate (mi6) at (0.7,-1.3);
\coordinate (m12) at (2.6,1.3);
\coordinate (m23) at (0.6,2.8);
\coordinate (m34) at (-2,1.9);
\coordinate (m45) at (-2.5,-0.7);
\coordinate (m56) at (-0.4,-2.4);
\coordinate (m61) at (2.2,-1.3);
\coordinate (v1) at (1.5,1);
\coordinate (v2) at (0.5,1.8);
\coordinate (v3) at (-1.3,1.2);
\coordinate (v4) at (-1.7,-0.4);
\coordinate (v5) at (-0.3,-1.8);
\coordinate (v6) at (1.5,-0.8);
\coordinate (ngix) at (1.5,0.3);
\draw[Blue,dashed] (xi) -- (xi1);
\draw[Blue,dashed] (xi)--(xi2);
\draw[Blue,dashed] (xi)--(xi3);
\draw[Blue,dashed] (xi)--(xi4);
\draw[Blue,dashed] (xi)--(xi5);
\draw[Blue,dashed] (xi)--(xi6);
\draw[Blue,dashed] (xi1)--(xi2);
\draw[Blue,dashed] (xi2)--(xi3);
\draw[Blue,dashed] (xi3)--(xi4);
\draw[Blue,dashed] (xi4)--(xi5);
\draw[Blue,dashed] (xi5)--(xi6);
\draw[Blue,dashed] (xi6)--(xi1);
\draw[Black,->] (ngix)--(2.5,0.3) node[right]{$\vec{\mathrm{n}}_{\x,\gij}$};
\draw (v1)--(v2);
\draw (v2)--(v3);
\draw (v3)--(v4);
\draw (v4)--(v5);
\draw (v5)--(v6);
\draw (v6)--(v1) node[right]{$\widetilde{\Gamma}_{ij}$};
\filldraw [black,thick](xi1) circle (1.5pt);
\filldraw [black,thick](xi2) circle (1.5pt);
\filldraw [black,thick](xi3) circle (1.5pt);
\filldraw [black,thick](xi4) circle (1.5pt);
\filldraw [black,thick](xi5) circle (1.5pt);
\filldraw [black,thick](xi6) circle (1.5pt);
\filldraw [black,thick](xi) circle (1.5pt) node[below right]{$\x_{i}$};
\filldraw [black,thick](xi1) circle (1.5pt) node[below right]{$\x_{j}$};
\filldraw [black,thick](mi1) circle(0.5pt) node[below right]{$\x_{ij}$};
\filldraw [black,thick](v3) node[below right]{$\widetilde{\mathrm{V}}_{i}$};
\end{tikzpicture}
\caption{\label{fig:voronoicell}Voronoï-Delaunay decomposition on $\mathbb{S}^{2}$ with levels $0$ and $1$, and the geometric configuration of a Voronoï cell and its associated triangles.}
\end{figure}
\begin{rem}
The Voronoï-Delaunay decomposition constructed as described above will be called a non-optimized grid, denoted throughout this paper as the NOPT Voronoï-Delaunay decomposition.
\end{rem}
\begin{rem}
In the NOPT decomposition, the vertices generally do not coincide with the constrained centroids of their cells; this coincidence, however, is a desirable property for discretizations. Therefore, we also consider Spherical Centroidal Voronoï Tessellations (SCVT), constructed through an iterative method (Lloyd's algorithm; see, for example, \cite{du2003constrained,du2003voronoi}). In an SCVT, the vertex generating each cell $\widetilde{\mathrm{V}}_{i}\subset\mathbb{S}^{2}$ is the \emph{constrained cell centroid} $\xip^{c}$, \textit{i.e.}, the
minimum of the following function
\[
F(\mathrm{x})=\int_{\widetilde{\mathrm{V}}_{i}}\|\mathrm{y}-\mathrm{x}\|^{2}ds(\mathrm{y}).
\]
\end{rem}
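A sample-based sketch of one Lloyd iteration may clarify the SCVT construction: each generator is replaced by an approximation of the constrained centroid of its cell, computed here from sample points assigned to the nearest generator and then projected radially back onto the sphere. This is an illustrative Monte-Carlo approximation of the algorithm, not the implementation of \cite{du2003constrained,du2003voronoi}:

```python
import numpy as np

def lloyd_scvt_step(generators, samples):
    """One Lloyd iteration approximating an SCVT.  `samples` are unit vectors
    on S^2; each is assigned to its geodesically nearest generator (largest
    inner product), and each generator moves to the constrained centroid of
    its cell: the sample mean projected radially back onto the sphere."""
    owner = np.argmax(samples @ generators.T, axis=1)  # nearest generator
    new = generators.copy()
    for i in range(len(generators)):
        pts = samples[owner == i]
        if len(pts):
            c = pts.mean(axis=0)              # Euclidean centroid of the cell samples
            n = np.linalg.norm(c)
            if n > 0:
                new[i] = c / n                # constrain the centroid to S^2
    return new
```

Iterating this step to a fixed point drives each generator toward the constrained centroid of its own cell, which is exactly the minimizer of the function $F$ in the remark above.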
Given a Voronoï-Delaunay decomposition, we consider some grid parameters previously defined in \cite{du2003voronoi,du2005finite}. Let $h_{i}=\max_{\mathrm{x}\in \widetilde{\mathrm{V}}_{i}}\mathrm{d}(\x_{i},\mathrm{x})$ be the maximum geodesic distance between vertex $\x_{i}$ and the points in its associated cell $\widetilde{\mathrm{V}}_{i}$ and $h=\max_{i=1,\dots,N}h_{i}$.
In addition, we consider the following shape regularity or almost uniform conditions given by \cite{mishev1998finite,li2000generalized,ciarlet2002finite}:
\begin{defn}[Almost uniform]
\label{def:aluniform}
We say that a Voronoï-Delaunay decomposition $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ on $\mathbb{S}^{2}$ is almost uniform (or finite volume regular) if for every spherical polygon $\widetilde{\mathrm{V}}_{i}\in\widetilde{\mathcal{V}}_{h}$ with boundary $\partial\widetilde{\mathrm{V}}_{i}=\bigcup_{j\in\LiN}\widetilde{\Gamma}_{ij}$, there exist positive constants $C_0$ and $C_1$, independent of $h$, such that
\[
\frac{1}{C_{0}}h\leq m_{l}(\widetilde{\Gamma}_{ij}) \leq C_{0}h,\quad\mbox{and }\quad
\frac{1}{C_{1}}h^{2}\leq m_{a}(\widetilde{\mathrm{V}}_{i}) \leq C_{1}h^{2}.
\]
\end{defn}
From now on, we assume that the NOPT and SCVT Voronoï-Delaunay decompositions are almost uniform grids on $\mathbb{S}^{2}$.
\subsection{Geometric correspondence}
We describe here geometric relations between $\mathbb{S}^{2}$ and its polyhedral approximation $\mathbf{S}_{h}$, using the framework given by \cite{demlow2009higher,ju2009finite}. First, we assume that $\mathbb{S}^{2}$ is approximated, as $h$ goes to zero, by a sequence of polyhedra $\mathbf{S}_{h}$ formed by the decompositions $\mathcal{T}_{h}$ into planar triangles. The smooth and bijective mapping ${\mathcal P}:\mathbf{S}_{h}\to \mathbb{S}^{2}$ is defined as the radial projection of any point $\x^{\ast}\in\mathbf{S}_{h}$ onto the spherical surface, \textit{i.e.}, ${\mathcal P}(\x^{\ast})=\x^{\ast}/\|\x^{\ast}\|$ (Figure \ref{fig:radialproj}). Observe that, by construction, the vertices $\mathrm{S}_{N}=\{\x_{i}\}_{i=1}^{N}$ of the polyhedron $\mathbf{S}_{h}$ belong to the surface of $\mathbb{S}^{2}$, \textit{i.e.}, $\mathrm{S}_{N}=\mathbf{S}_{h}\cap\mathbb{S}^{2}$. This implies that $\mathbf{S}_{h}=\bigcup_{ijk\in\Sigma}{\mathrm T}_{ijk}$ and $\mathbb{S}^{2}=\bigcup_{ijk\in\Sigma}\widetilde{\mathrm T}_{ijk}$, where $\Sigma$ is the set of neighboring vertices in $\mathrm{S}_{N}$. Notice that the polyhedron of level zero is the initial icosahedron.
\begin{figure}[th]
\begin{center}
\begin{tikzpicture}[scale=1.5]
\draw [blue,thick](0,0) to (3,1);
\draw (0,0) to[bend left] (3,1);
\draw (0,-0.15) node {$\mathrm{x}_{i}$};
\draw [fill=black](0,0) circle (1.0pt);
\draw (3,0.85) node {$\mathrm{x}_{j}$};
\draw [fill=black](3,1) circle (1.0pt);
\draw [blue,dashed](1.5,0.5) to (1.3,0.9);
\draw (1.26,1.08) node {${\mathcal P}(\x^{\ast})$};
\draw [fill=black](1.3,0.9) circle (1.0pt);
\draw (1.5,0.32) node {$\x^{\ast}$};
\draw [fill=DarkBlue](1.5,0.5) circle (1.0pt);
\draw (2.5,0.6) node {$\mathbf{S}_{h}$};
\draw (0.5,0.7) node {$\mathbb{S}^{2}$};
\end{tikzpicture}
\end{center}
\caption{\label{fig:radialproj}Radial projection of the polyhedron $\mathbf{S}_{h}$ onto the sphere $\mathbb{S}^{2}$.}
\end{figure}
We also consider the following spherical shell in $\mathbb{R}^{3}$:
\[
\Omega_{h}:=\left\{\x^{\ast}\in\mathbb{R}^{3}\setminus\{0\}:1-Ch^{2}<\|\x^{\ast}\|<1+Ch^{2}\right\},
\]
with $C$ chosen such that $\mathbb{S}^{2}$ and $\mathbf{S}_{h}$ are contained in $\Omega_{h}$. For functions $u\in H^{2}(\S)$, we denote by $u^{\Omega}$ the extension of $u$ to $\Omega_{h}$ given by $u^{\Omega}(\x^{\ast})=u({\mathcal P}(\x^{\ast}))$, for each $\x^{\ast}\in\Omega_{h}$. The following result has been shown in \cite[Proposition 1, pp. 1677]{du2003voronoi}:
\begin{prp}
\label{prp:duextension}
For any $\x^{\ast}\in\Omega_{h}$ and $\mathrm{x}={\mathcal P}(\x^{\ast})\in\mathbb{S}^{2}$, and $i,j\in\{1,2,3\}$,
\[
\begin{split}
\nabla_{s} u(\mathrm{x})=\nabla u^{\Omega}(\mathrm{x}),& \quad \nabla(\partial_{i}u^{\Omega}(\mathrm{x}))=\nabla_{s}(\partial_{s,i} u(\mathrm{x}))-(\partial_{s,i} u(\mathrm{x}))\vec{\mathrm{n}}_{\S,\x},\\
\|\x^{\ast}\|\nabla u^{\Omega}(\x^{\ast})=\nabla u^{\Omega}(\mathrm{x}),& \quad \|\x^{\ast}\|^{2}\partial_{i}\partial_{j}u^{\Omega}(\x^{\ast})=\partial_{i}\partial_{j}u^{\Omega}(\mathrm{x}),
\end{split}
\]
where $\partial_{i}$ denotes partial derivative with respect to $x_i$ and $\partial_{s,i}$ denotes the $i$-th component of the tangential derivative, for $i=1,2,3$.
\end{prp}
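Since $u^{\Omega}$ is constant along rays from the origin, it is homogeneous of degree zero, and the identity $\|\x^{\ast}\|\nabla u^{\Omega}(\x^{\ast})=\nabla u^{\Omega}(\mathrm{x})$ can be checked numerically for a concrete extension. A minimal sketch with a hypothetical test function $u(\mathrm{x})=x_{3}^{2}-x_{1}x_{2}$ (chosen only for illustration) and central finite-difference gradients:

```python
import numpy as np

def u_omega(xstar):
    """Radial extension u^Omega(x*) = u(P(x*)) of the (hypothetical) test
    function u(x) = x_3^2 - x_1 x_2 on S^2, with P(x*) = x*/||x*||."""
    x = xstar / np.linalg.norm(xstar)
    return x[2] ** 2 - x[0] * x[1]

def num_grad(f, x, eps=1e-6):
    """Central finite-difference gradient of f at x in R^3."""
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2.0 * eps)
    return g

# A point x on S^2 and a nearby point x* = 1.02 x in the shell Omega_h:
x = np.array([0.6, 0.0, 0.8])
xstar = 1.02 * x
# u^Omega is constant along the ray, and its gradient scales like 1/||x*||,
# so  ||x*|| * grad u^Omega(x*)  should match  grad u^Omega(x).
```

The check is exact up to finite-difference error because $\nabla u^{\Omega}(\lambda\mathrm{x})=\lambda^{-1}\nabla u^{\Omega}(\mathrm{x})$ for any homogeneous function of degree zero.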
The following result compares the norms of functions defined on $\mathbb{S}^{2}$ and $\mathbf{S}_{h}$. A proof is given in \cite{demlow2009higher}.
\begin{prp}
\label{prp:equivnormext}
Let $u\in W^{2,p}(\S)$ with $1\leq p\leq \infty$. There exist positive constants $C_0, C_1$ and $C_2$ such that, for $h$ small enough,
\[
\begin{split}
\frac{1}{C_{0}}\|u\|_{L^{p}(\widetilde{\mathrm T}_{ijk})}&\leq\|\overline{u}^{\Omega}\|_{L^{p}({\mathrm T}_{ijk})}\leq C_{0}\|u\|_{L^{p}(\widetilde{\mathrm T}_{ijk})},\\
\frac{1}{C_{1}}\|\nabla_{s} u\|_{L^{p}(\widetilde{\mathrm T}_{ijk})}&\leq\|\nabla\overline{u}^{\Omega}\|_{L^{p}({\mathrm T}_{ijk})}\leq C_{1}\|\nabla_{s} u\|_{L^{p}(\widetilde{\mathrm T}_{ijk})},\\
\|\nabla^{\alpha}\overline{u}^{\Omega}\|_{L^{p}({\mathrm T}_{ijk})}&\leq C_{2}\sum_{0\leq |\alpha|\leq2}\|\nabla_{s}^{\alpha} u\|_{L^{p}(\widetilde{\mathrm T}_{ijk})},
\end{split}
\]
where $\overline{u}^{\Omega}$ is the extension of $u$ to $\Omega_{h}$ restricted to $\mathbf{S}_{h}$, and $\nabla^{\alpha},\nabla_{s}^{\alpha}$ denote the usual derivatives and tangential derivatives up to order $2$.
\end{prp}
\section{A Voronoï-based finite volume method}
\label{sec:fvm}
In this section, we seek an approximate solution of \eqref{eq:strongform} via a finite difference/volume scheme. First, by the Gauss divergence theorem on $\mathbb{S}^{2}$, we have
\[
-\int_{\widetilde{\mathrm{V}}_{i}}\Delta_{s} u(\mathrm{x})ds(\mathrm{x})=-\int_{\partial\widetilde{\mathrm{V}}_{i}}\nabla_{s} u(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Vi} d\gamma(\mathrm{x}),
\]
where $\vec{\mathrm{n}}_{\x,\Vi}$ denotes the co-normal unit vector on $\partial\widetilde{\mathrm{V}}_{i}$ at $\mathrm{x}$, and $d\gamma(\mathrm{x})$ is the geodesic length measure. Since $\partial\widetilde{\mathrm{V}}_{i}=\cup_{j\in\LiN}\widetilde{\Gamma}_{ij}$, integrating \eqref{eq:strongform} over $\widetilde{\mathrm{V}}_{i}$ gives
\[
-\sum_{j\in\LiN}\int_{\widetilde{\Gamma}_{ij}}\nabla_{s} u(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\gij} d\gamma(\mathrm{x})=\int_{\widetilde{\mathrm{V}}_{i}}f(\mathrm{x})ds(\mathrm{x}).
\]
We denote the continuous flux of $u$ across the edge $\widetilde{\Gamma}_{ij}$ by
\begin{equation}
\label{eq:cflux}
\widetilde{\mathcal{F}}_{ij}(u):=- \int_{\widetilde{\Gamma}_{ij}}\nabla_{s} u(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\gij} d\gamma(\mathrm{x}),
\end{equation}
and define its central difference approximation
\begin{equation}
\label{eq:dflux}
\overline{\mathcal{F}}_{ij}(u_{h}):=-m_{l}(\widetilde{\Gamma}_{ij})\frac{u_{h}(\x_{j})-u_{h}(\x_{i})}{\|\x_{j}-\x_{i}\|}.
\end{equation}
Additionally, for each cell $\widetilde{\mathrm{V}}_{i}$, let $f_{i}$ denote the mean value of the data $f$ on $\widetilde{\mathrm{V}}_{i}$, \textit{i.e.},
\begin{equation}
\label{eq:fi}
f_{i}=\frac{1}{m_{a}(\widetilde{\mathrm{V}}_{i})}\int_{\widetilde{\mathrm{V}}_{i}}f(\mathrm{x})ds(\mathrm{x}).
\end{equation}
The Voronoï-based finite volume scheme is defined as
\begin{equation}
\label{eq:FVscheme}
(\mathrm{L}_{s,h}(u_{h}))_{i}:=\frac{1}{m_{a}(\widetilde{\mathrm{V}}_{i})}\sum_{j\in\LiN}\overline{\mathcal{F}}_{ij}(u_{h})=f_{i},\quad\mbox{for }1\leq i\leq N,
\end{equation}
where $\mathrm{L}_{s,h}$ is a discretization of the Laplacian, $u_{h}$ is an approximate solution and $f_{i}$ is defined in \eqref{eq:fi}. To emphasize the dependence on the grid, we will always use the subscript $h$. The finite volume scheme \eqref{eq:FVscheme} is conservative, since
\[
\sum_{i=1}^{N}\sum_{j\in\LiN}\overline{\mathcal{F}}_{ij}(u_{h})=0,
\]
which follows from the antisymmetry $\overline{\mathcal{F}}_{ij}=-\overline{\mathcal{F}}_{ji}$, valid whenever the vertices $\x_{i}$ and $\x_{j}$ are neighbors with $m_{l}(\widetilde{\Gamma}_{ij})>0$.
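The scheme \eqref{eq:FVscheme} and its conservation property can be sketched in code: iterating once over each unordered Voronoï edge and accumulating the flux with opposite signs in the two adjacent cells makes the antisymmetry $\overline{\mathcal{F}}_{ij}=-\overline{\mathcal{F}}_{ji}$ explicit. A minimal NumPy sketch, assuming precomputed edge lists, Voronoï edge lengths and cell areas as inputs (hypothetical arrays, not taken from a specific grid code):

```python
import numpy as np

def fv_laplacian(u, vertices, edges, edge_len, cell_area):
    """Apply the Voronoi finite volume operator
        (L_{s,h} u)_i = (1/m_a(V_i)) * sum_{j in Lambda(i)} F_ij(u),
    with the central-difference flux
        F_ij(u) = -m_l(Gamma_ij) * (u_j - u_i) / ||x_j - x_i||.
    `edges` lists each unordered Voronoi-edge pair (i, j) exactly once."""
    Lu = np.zeros(len(u))
    for (i, j), ml in zip(edges, edge_len):
        d = np.linalg.norm(vertices[j] - vertices[i])
        flux_ij = -ml * (u[j] - u[i]) / d   # flux out of V_i across Gamma_ij
        Lu[i] += flux_ij
        Lu[j] -= flux_ij                    # F_ji = -F_ij: discrete conservation
    return Lu / cell_area
```

Because every edge contributes with opposite signs to its two cells, the area-weighted sum $\sum_{i}m_{a}(\widetilde{\mathrm{V}}_{i})(\mathrm{L}_{s,h}u_{h})_{i}$ vanishes identically, and constant functions lie in the kernel of the operator.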
\subsection{Discrete function spaces}
In this subsection, we introduce some local function spaces on $\mathbb{S}^{2}$, as in \cite{dziuk2013finite,demlow2009higher,ju2009finite}. We define the Lagrange finite element space on a polyhedral $\mathbf{S}_{h}$ as
\[
{\mathbb P}_{1}(\Sh,\Th):=\left\{\overline{u}_{h}\in C^{0}(\mathbf{S}_{h}):\overline{u}_{h}\big|_{{\mathrm T}_{ijk}}\mbox{ is linear affine for each }{\mathrm T}_{ijk}\in\mathcal{T}_{h}\right\},
\]
and the corresponding lifted finite element space on $\mathbb{S}^{2}$,
\[
{\mathbb P}_{1}(\S,\Ths):=\left\{u_{h}\in C^{0}(\mathbb{S}^{2}):u_{h}=\overline{u}_{h}\circ\Prj^{-1},\,\mbox{for each }\overline{u}_{h}\in{\mathbb P}_{1}(\Sh,\Th)\right\},
\]
where $\Prj^{-1}$ denotes the inverse of the radial projection ${\mathcal P}$. For the approximation of functions in $H_{0}^{1}(\S)$, we consider $\widetilde{U}_{h}$, the zero-averaged subspace of ${\mathbb P}_{1}(\S,\Ths)$ given by
\[
\widetilde{U}_{h}(\mathbb{S}^{2}):=\left\{u_{h}\in{\mathbb P}_{1}(\S,\Ths):\int_{\mathbb{S}^{2}}u_{h}(\mathrm{x})ds(\mathrm{x})=0\right\}.
\]
Notice that, from Proposition \ref{prp:duextension}, we have $\widetilde{U}_{h}\subset H_{0}^{1}(\S)$, endowed with the $H^{1}$-norm. As in \cite{li2000generalized,lin2013finite}, we define the piecewise constant function space associated to the dual Voronoï~decomposition, given by
\[
\widetilde{V}_{h}(\mathbb{S}^{2}):=\left\{v\in L^{2}(\S):v\big|_{\widetilde{\mathrm{V}}_{i}}\mbox{ is constant on each }\widetilde{\mathrm{V}}_{i}\mbox{ for }1\leq i\leq N\right\}.
\]
We introduce interpolation operators $\widetilde{\Pi}_{h}$ and $\widetilde{\Pi}^{\ast}_{h}$ mapping functions defined on $\mathbb{S}^{2}$ onto $\widetilde{U}_{h}$ and $\widetilde{V}_{h}$, respectively. Note that, given the function values at the vertices of the Voronoï grid, these operators are uniquely defined.
\begin{prp}[Interpolation estimates]
\label{prp:interp}
Assume $u\in \Wtp\cap\HqS$ for $2\leq p\leq \infty$ and $v\in H^{2}(\S)\cap\HqS$. Then, for $h$ small enough, there exist positive constants $C_U$ and $C_V$ independent of $h$ such that,
\[
\begin{split}
\|u-\widetilde{\Pi}_{h}(u)\|_{W^{k,p}(\S)}&\leq C_{U}h^{2-k}\|u\|_{W^{2,p}(\S)},\quad\mbox{ for each }k\in\{0,1\},\\
\|v-\widetilde{\Pi}^{\ast}_{h}(v)\|_{L^{2}(\S)}&\leq C_{V}h\|v\|_{H^{2}(\S)}.
\end{split}
\]
\end{prp}
We also define the linear transference interpolation $\widetilde{\mathrm{I}}_{h}:\widetilde{U}_{h}\to\widetilde{V}_{h}$ as
\[
\widetilde{\mathrm{I}}_{h}(u_{h})(\mathrm{x})=\sum_{i=1}^{N}u_{h}(\x_{i})\chi_{i}(\mathrm{x}),\quad\mbox{for each }u_{h}\in\widetilde{U}_{h},
\]
where $\chi_{i}$ represents the characteristic function corresponding to the cell $\widetilde{\mathrm{V}}_{i}$, with $1\leq i\leq N$.
In order to show basic estimates, we shall need the following inverse estimate for finite element functions (see \textit{e.g.}, \cite{ciarlet2002finite,brenner2007mathematical,demlow2009higher}), which follows from the almost uniformity of the decomposition $\widetilde{\Th}$. The proof is valid on $\mathbb{S}^{2}$ under the conditions of Propositions \ref{prp:duextension} and \ref{prp:equivnormext}, cf. \cite[Proposition 2.7, pp.~812]{demlow2009higher} or \cite[Lemma 3.4, pp.~524]{kovacs2018maximum}.
\begin{lem}[Inverse estimate]
\label{lem:propinverse}
Let $\widetilde{\Th}$ be an almost uniform Voronoï-Delaunay decomposition on $\mathbb{S}^{2}$. Assume that $l,m$ are nonnegative integers with $l\leq m$ and $1\leq p,q\leq \infty$, such that $\widetilde{U}_{h}\subset W^{m,p}(\widetilde{\mathrm T}_{ijk})\cap W^{l,q}(\widetilde{\mathrm T}_{ijk})$. Then, there exists a positive constant $C$ independent of $h$, such that every $v_{h}\in\widetilde{U}_{h}$ satisfies
\[
\|v_{h}\|_{W^{m,p}(\widetilde{\mathrm T}_{ijk})}\leq Ch^{l-m-2\left(1/q-1/p\right)}\|v_{h}\|_{W^{l,q}(\widetilde{\mathrm T}_{ijk})}.
\]
\end{lem}
We establish the following auxiliary result for later use in the analysis.
\begin{lem}
\label{lem:edgeIntp}
Let $\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}$ be a geodesic triangle of an almost uniform Voronoï-Delaunay decomposition on $\mathbb{S}^{2}$ and $\widetilde{\tau}_{ij}\subset\partial\widetilde{\mathrm T}_{ijk}$. Then, for $1\leq q<\infty$ and $v_{h}\in\widetilde{U}_{h}$, there exists a positive constant $C$ independent of $h$, such that
\begin{subequations}
\begin{align}
\int_{\widetilde{\tau}_{ij}}[v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h} (v_{h})(\mathrm{x})]d\gamma(\mathrm{x})&=0,\label{eq:taus}\\
\|v_{h}-\widetilde{\mathrm{I}}_{h}(v_{h})\|_{L^{q}(\Tgsi)}&\leq Ch\|v_{h}\|_{W^{1,q}(\Tgsi)}.\label{eq:estimIntp}
\end{align}
\end{subequations}
\end{lem}
\begin{proof}
Let $\x_{ij}$ be the midpoint of $\widetilde{\tau}_{ij}\subset \partial\widetilde{\mathrm T}_{ijk}$, and define $\widetilde{\tau}_{ij}^{(n)}$ as the geodesic arc between the points $\x_{ij}$ and $\x_{n}$, for $n\in\{i,j\}$. We know that $\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})=v_{h}(\x_{n})$ for each $\mathrm{x}\in\widetilde{\tau}_{ij}^{(n)}$, where $n\in\{i,j\}$ and $\widetilde{\tau}_{ij}=\widetilde{\tau}_{ij}^{(i)}\cup\widetilde{\tau}_{ij}^{(j)}$. We immediately derive
\begin{align}
\int_{\widetilde{\tau}_{ij}}[v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})]d\gamma(\mathrm{x})&=\int_{\widetilde{\tau}_{ij}^{(i)}}[v_{h}(\mathrm{x})- v_{h}(\x_{i})]d\gamma(\mathrm{x})+\int_{\widetilde{\tau}_{ij}^{(j)}}[v_{h}(\mathrm{x})-v_{h}(\x_{j})]d\gamma(\mathrm{x})\nonumber\\
&=\int_{\widetilde{\tau}_{ij}}v_{h}(\mathrm{x})d\gamma(\mathrm{x})-\int_{\widetilde{\tau}_{ij}^{(i)}}[v_{h}(\x_{i})+v_{h}(\x_{j})]d\gamma(\mathrm{x})\nonumber\\
&=\int_{\widetilde{\tau}_{ij}}v_{h}(\mathrm{x})d\gamma(\mathrm{x})-\frac{1}{2}m_{l}(\widetilde{\tau}_{ij})[v_{h}(\x_{i})+v_{h}(\x_{j})],\label{eq:lemtri01}
\end{align}
where $m_{l}(\widetilde{\tau}_{ij})$ denotes the length of $\widetilde{\tau}_{ij}$. Let $\vh^{\Omega}$ be the lift of $v_{h}$ to the spherical shell $\Omega_{ijk}:=\left\{\x^{\ast}\in\Omega_{h}:{\mathcal P}(\x^{\ast})\in\widetilde{\mathrm T}_{ijk}\right\}$, with $v_{h}(\mathrm{x})=\vh^{\Omega}(\mathrm{x})$ for all $\mathrm{x}\in\mathbb{S}^{2}$. Also let $\xp_{ij}$ be the midpoint of the segment $[\mathrm{x}_{i},\mathrm{x}_{j}]$ from $\x_{i}$ to $\x_{j}$, so that $\x_{ij}={\mathcal P}(\xp_{ij})$. Now, assuming that $\vh^{\Omega}\in C^{1}(\Omega_{ijk})$, applying Taylor's theorem to $\vh^{\Omega}$ around $\x_{ij}$ and integrating over $\widetilde{\tau}_{ij}$, we obtain
\begin{align*}
\int_{\widetilde{\tau}_{ij}}v_{h}(\mathrm{x})d\gamma(\mathrm{x})&=\int_{\widetilde{\tau}_{ij}}\vh^{\Omega}(\mathrm{x})d\gamma(\mathrm{x})=m_{l}(\widetilde{\tau}_{ij})\vh^{\Omega}(\x_{ij})\\
&\quad +\int_{\widetilde{\tau}_{ij}}\int_{0}^{1}\nabla \vh^{\Omega}(\x_{ij}+t(\mathrm{x}-\x_{ij}))\cdot(\mathrm{x}-\x_{ij})dtd\gamma(\mathrm{x}).
\end{align*}
The second term on the right-hand side vanishes by the parity of $\nabla\vh^{\Omega}(\mathrm{y})$ for $\mathrm{y}\in[\mathrm{x},\x_{ij}]$ and the symmetry of $\widetilde{\tau}_{ij}$ with respect to the midpoint $\x_{ij}$. It follows that
\[
\int_{\widetilde{\tau}_{ij}}v_{h}(\mathrm{x})d\gamma(\mathrm{x})=m_{l}(\widetilde{\tau}_{ij})\vh^{\Omega}(\x_{ij}).
\]
Substituting the expression above into \eqref{eq:lemtri01}, one obtains
\[
\int_{\widetilde{\tau}_{ij}}[v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})]d\gamma(\mathrm{x})=m_{l}(\widetilde{\tau}_{ij})\vh^{\Omega}(\x_{ij})-\frac{1}{2}m_{l}(\widetilde{\tau}_{ij})[v_{h}(\x_{i})+v_{h}(\x_{j})]=0,
\]
which follows from $\vh^{\Omega}(\x_{ij})=\vh^{\Omega}(\xp_{ij})$ and $\vh^{\Omega}(\xp_{ij})=\tfrac{1}{2}[v_{h}(\mathrm{x}_{i})+v_{h}(\mathrm{x}_{j})]$. This shows the identity \eqref{eq:taus}. For \eqref{eq:estimIntp}, we consider $\widetilde{\mathrm Q}_{i},\widetilde{\mathrm Q}_{j}$ and $\widetilde{\mathrm Q}_{k}$, the three spherical polygonal regions formed by the intersection of the triangle $\widetilde{\mathrm T}_{ijk}$ with the Voronoï~cells associated to the vertices $\x_{i},\x_{j}$ and $\x_{k}$, \textit{i.e.},
\[
\widetilde{\mathrm Q}_{n}=\widetilde{\mathrm T}_{ijk}\cap\widetilde{\mathrm{V}}_{n},\quad\mbox{for }n\in\{i,j,k\}.
\]
For $\mathrm{x}\in\widetilde{\mathrm Q}_{i}$ we have $\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})=v_{h}(\x_{i})$. Then
\begin{align*}
\left|v_{h}(\mathrm{x})-v_{h}(\x_{i})\right|&=\left|\int_{0}^{1}\nabla \vh^{\Omega}(\x_{i}+t(\mathrm{x}-\x_{i}))\cdot(\mathrm{x}-\x_{i})dt\right|\\&\leq |\mathrm{x}-\x_{i}|\max_{\x^{\ast}\in[\mathrm{x},\x_{i}]}\left|\nabla\vh^{\Omega}(\x^{\ast})\right|,
\end{align*}
where $[\mathrm{x},\x_{i}]$ denotes the segment connecting $\mathrm{x}$ and $\x_{i}$, and we used the Taylor expansion of $\vh^{\Omega}$, the radial extension of $v_{h}$ to the spherical shell $\Omega_{h}$. For $1\leq q< \infty$, integrating over $\widetilde{\mathrm Q}_{i}$, we get
\begin{align*}
\int_{\widetilde{\mathrm Q}_{i}}\left|v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})\right|^{q}ds(\mathrm{x})&\leq \int_{\widetilde{\mathrm Q}_{i}}|\mathrm{x}-\x_{i}|^{q}\max_{\x^{\ast}\in[\mathrm{x},\x_{i}]}\left|\nabla\vh^{\Omega}(\x^{\ast})\right|^{q}ds(\mathrm{x})\\
&\leq Ch^{q}\int_{\widetilde{\mathrm Q}_{i}}\max_{\mathrm{y}\in\widetilde{\mathrm Q}_{i}}\left|\nabla\vh^{\Omega}(\mathrm{y})\right|^{q}ds(\mathrm{x}).
\end{align*}
Recalling Definition \ref{def:aluniform} and Lemma \ref{lem:propinverse} with $m=l=1$ and $p=\infty$, we obtain
\begin{align*}
\|v_{h}-\widetilde{\mathrm{I}}_{h}(v_{h})\|_{L^{q}(\widetilde{\mathrm Q}_{i})}^{q}&\leq Ch^{q}m_{a}(\widetilde{\mathrm Q}_{i})\|\nabla \vh^{\Omega}\|_{L^{\infty}(\widetilde{\mathrm Q}_{i})}^{q}\leq Ch^{q}h^{2}\|\nabla \vh^{\Omega}\|_{L^{\infty}(\widetilde{\mathrm Q}_{i})}^{q}\\
&\leq Ch^{q}h^{2}h^{-2}\|\nabla \vh^{\Omega}\|^{q}_{L^{q}(\widetilde{\mathrm Q}_{i})}\leq Ch^{q}\|\nabla \vh^{\Omega}\|^{q}_{L^{q}(\widetilde{\mathrm Q}_{i})}.
\end{align*}
Analogous estimates hold for $\widetilde{\mathrm Q}_{j}$ and $\widetilde{\mathrm Q}_{k}$. Combining these results, we get
\[
\|v_{h}-\widetilde{\mathrm{I}}_{h}(v_{h})\|_{L^{q}(\widetilde{\mathrm T}_{ijk})}\leq Ch\|\nabla_{s}v_{h}\|_{L^{q}(\widetilde{\mathrm T}_{ijk})},
\]
which completes the proof.
\end{proof}
Let us now introduce some discrete norms and seminorms for functions on $\widetilde{U}_{h}$. Similarly to \cite{du2003voronoi,droniou2014introduction,bessemoulin2015discrete}, for $1\leq p <\infty$, we denote
\[
\begin{split}
\|u_{h}\|_{0,p,h}^{p}&=\sum_{i=1}^{N}m_{a}(\widetilde{\mathrm{V}}_{i})|u_{h}(\x_{i})|^{p},\\%
|u_{h}|_{1,p,h}^{p} & =\sum_{i=1}^{N}\sum_{j\in\LiN}\frac{1}{2}m_{l}(\widetilde{\Gamma}_{ij})\mathrm{d}(\x_{i},\x_{j})\left|\frac{u_{h}(\x_{i})-u_{h}(\x_{j})}{\|\x_{i}-\x_{j}\|}\right|^{p},\\
\|u_{h}\|_{1,p,h}^{p} & =\|u_{h}\|_{0,p,h}^{p}+|u_{h}|_{1,p,h}^{p}.
\end{split}
\]
In the case $p=2$, we omit $p$ in our notation and simply write $\|\cdot\|_{0,2,h}=\|\cdot\|_{0,h}$ and $\|\cdot\|_{1,2,h}=\|\cdot\|_{1,h}$ for the norms, and $|\cdot|_{1,2,h}=|\cdot|_{1,h}$ for the seminorm. Furthermore, in the case $p=\infty$, we use the usual $\max$-norm notation $\|\cdot\|_{L^{\infty}(\S)}$ for functions in $\widetilde{U}_{h}$.
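The discrete norms above translate directly into code; note that listing each unordered neighbor pair once absorbs the factor $1/2$ of the double sum in $|u_{h}|_{1,p,h}$, since the double sum visits every edge twice. A minimal NumPy sketch, assuming precomputed edge lists, Voronoï edge lengths and cell areas (hypothetical inputs, not tied to a specific grid code):

```python
import numpy as np

def discrete_norms(u, vertices, edges, edge_len, cell_area, p=2):
    """Discrete norm ||u||_{0,p,h} and seminorm |u|_{1,p,h} on the Voronoi
    grid.  `edges` lists each unordered neighbor pair (i, j) exactly once,
    which absorbs the factor 1/2 of the double sum in the definition."""
    norm0 = (cell_area * np.abs(u) ** p).sum() ** (1.0 / p)
    s = 0.0
    for (i, j), ml in zip(edges, edge_len):
        # geodesic distance d(x_i, x_j) = arccos <x_i, x_j>
        dij = np.arccos(np.clip(vertices[i] @ vertices[j], -1.0, 1.0))
        # difference quotient uses the Euclidean (chord) distance ||x_i - x_j||
        diff = abs(u[i] - u[j]) / np.linalg.norm(vertices[i] - vertices[j])
        s += ml * dij * diff ** p
    return norm0, s ** (1.0 / p)
```

As a sanity check, constant grid functions have vanishing seminorm, and the discrete $L^{2}$ norm of a constant $c$ reduces to $|c|$ times the square root of the total cell area.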
\begin{prp}
\label{prp:equivnorm}
For $u_{h}\in\widetilde{U}_{h}$, there exist positive constants $C_{0}$ and $C_{1}$, independent of $h$ such that,
\begin{subequations}
\begin{align}
\frac{1}{C_{0}}\|u_{h}\|_{0,p,h}&\leq\|u_{h}\|_{L^{p}(\S)}\leq C_{0}\|u_{h}\|_{0,p,h},\label{eq:mequiv01}\\
\frac{1}{C_{1}}|u_{h}|_{1,p,h}&\leq \|\nabla_{s}u_{h}\|_{L^{p}(\S)}\leq C_{1}|u_{h}|_{1,p,h},\label{eq:mequiv02}
\end{align}
\end{subequations}
with $p\in\{1,2\}$.
\end{prp}
\begin{proof}
For $p=2$, we refer to \cite[Proposition 4, pp.~1678]{du2005finite} and \cite[Lemma 3.2.1, pp.~124]{li2000generalized}. For $p=1$, from Proposition \ref{prp:equivnormext}, applied to the extension $\overline{u}^{\Omega}_{h}$ of $u_{h}$ to $\Omega_{h}$ restricted to $\mathbf{S}_{h}$, we have
\begin{equation}
\label{eq:equiv01}
\int_{\widetilde{\mathrm T}_{ijk}}|u_{h}(\mathrm{x})|ds(\mathrm{x})\leq C\int_{{\mathrm T}_{ijk}}|\overline{u}^{\Omega}_{h}(\x^{\ast})|ds(\x^{\ast}).
\end{equation}
\begin{figure}[!h]
\begin{center}
\begin{tikzpicture}[scale=0.7,
x=1cm,y=1cm]
\clip(-1.0,-0.5) rectangle (8.0,7.0);
\fill[line width=1.6pt,color=DarkBlue,fill=Blue,fill opacity=0.1] (0,0) -- (4.126324716897398,5.772286864654252) -- (7.02,0.38) -- cycle;
\draw [line width=1.2pt,color=DarkBlue] (0,0)-- (4.126324716897398,5.772286864654252);
\draw [line width=1.2pt,color=DarkBlue] (4.126324716897398,5.772286864654252)-- (7.02,0.38);
\draw [line width=1.2pt,color=DarkBlue] (7.02,0.38)-- (0,0);
\draw [line width=1.2pt,color=BrickRed] (3.416420081044503,1.9187658712304956)-- (2.063162358448699,2.886143432327126);
\draw [line width=1.2pt,color=BrickRed] (3.416420081044503,1.9187658712304956)-- (5.573162358448698,3.0761434323271257);
\draw [line width=1.2pt,color=BrickRed] (3.416420081044503,1.9187658712304956)-- (3.51,0.19);
\draw [line width=1.2pt,dashed,color=Gray] (0,0)-- (3.416420081044503,1.9187658712304956);
\draw [line width=1.2pt,dashed,color=Gray] (7.02,0.36)--(3.416420081044503,1.9187658712304956);
\draw [line width=1.2pt,dashed,color=Gray] (4.126324716897398,5.772286864654252)--(3.416420081044503,1.9187658712304956);
\draw [color=BrickRed](3.71,4.0) node[anchor=north west] {$\mathrm{Q}_{k}$};
\draw [color=BrickRed](1.79,1.2) node[anchor=north west] {$\mathrm{Q}_{i}$};
\draw [color=BrickRed](4.65,1.85) node[anchor=north west] {$\mathrm{Q}_{j}$};
\draw [color=Gray](4.5,3.6) node[anchor=north west] {$\mathrm{S}_{i}$};
\draw [color=Gray](1.79,2.7) node[anchor=north west] {$\mathrm{S}_{j}$};
\draw [color=Gray](3.65,1.2) node[anchor=north west] {$\mathrm{S}_{k}$};
\draw [fill=black] (0,0) circle (2pt);
\draw[color=black] (-0.23,-0.15) node {$\x_{i}$};
\draw [fill=black] (4.126324716897398,5.772286864654252) circle (2pt);
\draw[color=black] (4.11,6.13) node {$\x_{k}$};
\draw [fill=black] (7.02,0.36) circle (2pt);
\draw[color=black] (7.4,0.31) node {$\x_{j}$};
\draw [fill=black] (2.063162358448699,2.886143432327126) ++(-1.5pt,0 pt) -- ++(1.5pt,1.5pt)--++(1.5pt,-1.5pt)--++(-1.5pt,-1.5pt)--++(-1.5pt,1.5pt);
\draw[color=black] (1.79,3.18) node {$\x^{\ast}_{ik}$};
\draw [fill=black] (5.573162358448698,3.0761434323271257) ++(-1.5pt,0 pt) -- ++(1.5pt,1.5pt)--++(1.5pt,-1.5pt)--++(-1.5pt,-1.5pt)--++(-1.5pt,1.5pt);
\draw[color=black] (5.86,3.34) node {$\x^{\ast}_{jk}$};
\draw [fill=black] (3.51,0.19) ++(-1.5pt,0 pt) -- ++(1.5pt,1.5pt)--++(1.5pt,-1.5pt)--++(-1.5pt,-1.5pt)--++(-1.5pt,1.5pt);
\draw[color=black] (3.64,-0.15) node {$\x^{\ast}_{ij}$};
\draw [fill=BrickRed] (3.416420081044503,1.9187658712304956) circle (2pt);
\draw[color=black] (4.2,1.9) node {$\mathrm{q}_{ijk}$};
\end{tikzpicture}
\end{center}
\caption{\label{fig:trianref} Geometric configuration of planar triangle ${\mathrm T}_{ijk}\in\mathcal{T}_{h}$ with vertices $\x_{i},\x_{j}$ and $\x_{k}$ and its circumcenter denoted by $\mathrm{q}_{ijk}$. }
\end{figure}
Note that $\overline{u}^{\Omega}_{h}$ is linear on each planar triangle ${\mathrm T}_{ijk}\in\mathcal{T}_{h}$, and observe that $m_{a}({\mathrm T}_{ijk})=\sum_{n\in\{i,j,k\}}m_{a}(\mathrm{S}_{n})$, where $\mathrm{S}_{n}$ denotes the triangular regions in ${\mathrm T}_{ijk}$, see Figure \ref{fig:trianref}. Then, by a numerical integration formula with second-order accuracy, we compute
\[
\sum_{n\in\{i,j,k\}}\int_{\mathrm{S}_{n}}|\overline{u}^{\Omega}_{h}(\x^{\ast})|ds(\x^{\ast})=m_{a}(\mathrm{S}_{k})|\overline{u}_{h}(\xp_{ij})|+m_{a}(\mathrm{S}_{i})|\overline{u}_{h}(\xp_{jk})|+m_{a}(\mathrm{S}_{j})|\overline{u}_{h}(\xp_{ik})|,
\]
where $\xp_{ij},\xp_{jk}$ and $\xp_{ik}$ represent the midpoints of each edge of ${\mathrm T}_{ijk}$. Then, we have
\begin{align}
\sum_{n\in\{i,j,k\}}\int_{\mathrm{S}_{n}}|\overline{u}^{\Omega}_{h}(\x^{\ast})|&ds(\x^{\ast})=m_{a}(\mathrm{S}_{k})\left|\frac{\overline{u}_{h}(\x_{i})+\overline{u}_{h}(\x_{j})}{2}\right|+m_{a}(\mathrm{S}_{i})\left|\frac{\overline{u}_{h}(\x_{j})+\overline{u}_{h}(\x_{k})}{2}\right|\nonumber\\
&\quad+m_{a}(\mathrm{S}_{j})\left|\frac{\overline{u}_{h}(\x_{i})+\overline{u}_{h}(\x_{k})}{2}\right|\nonumber\\
&\leq \frac{m_{a}(\mathrm{S}_{k})+m_{a}(\mathrm{S}_{j})}{2}\left|\overline{u}_{h}(\x_{i})\right|+\frac{m_{a}(\mathrm{S}_{k})+m_{a}(\mathrm{S}_{i})}{2}\left|\overline{u}_{h}(\x_{j})\right|\nonumber\\
&\quad+\frac{m_{a}(\mathrm{S}_{i})+m_{a}(\mathrm{S}_{j})}{2}\left|\overline{u}_{h}(\x_{k})\right|\label{eq:equiv02}.
\end{align}
Notice that
\[
\tfrac{m_{a}(\mathrm{S}_{k})+m_{a}(\mathrm{S}_{j})}{2}=m_{a}(\mathrm{Q}_{i}),\, \tfrac{m_{a}(\mathrm{S}_{k})+m_{a}(\mathrm{S}_{i})}{2}=m_{a}(\mathrm{Q}_{j})\mbox{ and }\tfrac{m_{a}(\mathrm{S}_{i})+m_{a}(\mathrm{S}_{j})}{2}=m_{a}(\mathrm{Q}_{k}).
\]
In fact, we have $\widetilde{\mathrm Q}_{n}={\mathcal P}(\mathrm{Q}_{n})$ for $n\in\{i,j,k\}$. Thus, gathering \eqref{eq:equiv01} and \eqref{eq:equiv02}, and summing over all triangles of $\widetilde{\Th}$, we obtain
\[
\|u_{h}\|_{L^{1}(\S)}=\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\int_{\widetilde{\mathrm T}_{ijk}}|u_{h}(\mathrm{x})|ds(\mathrm{x})\leq C\sum_{i=1}^{N}m_{a}(\widetilde{\mathrm{V}}_{i})|u_{h}(\x_{i})|=C\|u_{h}\|_{0,1,h},
\]
which yields the right-hand side of \eqref{eq:mequiv01}; the left-hand side follows similarly. The inequality \eqref{eq:mequiv02} follows from the fact that $\overline{u}_{h}$ is linear on each ${\mathrm T}_{ijk}\in\mathcal{T}_{h}$. Furthermore, since $\nabla\overline{u}_{h}$ is constant on each ${\mathrm T}_{ijk}\in\mathcal{T}_{h}$, the result follows by using numerical integration and a central difference approximation of second-order accuracy.
\end{proof}
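The area identity $m_{a}(\mathrm{Q}_{i})=\tfrac12\left(m_{a}(\mathrm{S}_{j})+m_{a}(\mathrm{S}_{k})\right)$ used in the proof can be checked numerically. The following Python sketch is purely illustrative (it is not part of the analysis); the sample triangle is an arbitrary acute triangle, and the subregions are built from its circumcenter as in Figure \ref{fig:trianref}.

```python
def area(a, b, c):
    # Unsigned area of the planar triangle (a, b, c) via the shoelace formula.
    return abs((b[0]-a[0])*(c[1]-a[1]) - (c[0]-a[0])*(b[1]-a[1])) / 2.0

def circumcenter(a, b, c):
    # Circumcenter of a planar triangle (equidistant from the three vertices).
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2.0 * (ax*(by - cy) + bx*(cy - ay) + cx*(ay - by))
    ux = ((ax*ax + ay*ay)*(by - cy) + (bx*bx + by*by)*(cy - ay)
          + (cx*cx + cy*cy)*(ay - by)) / d
    uy = ((ax*ax + ay*ay)*(cx - bx) + (bx*bx + by*by)*(ax - cx)
          + (cx*cx + cy*cy)*(bx - ax)) / d
    return (ux, uy)

def mid(a, b):
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

# An arbitrary acute triangle T_ijk.
xi, xj, xk = (0.0, 0.0), (1.0, 0.1), (0.4, 0.9)
q = circumcenter(xi, xj, xk)

# S_n: subtriangle of T_ijk opposite vertex n, with apex at the circumcenter.
S_i, S_j, S_k = area(xj, xk, q), area(xi, xk, q), area(xi, xj, q)

# Q_i: portion of the Voronoi cell of x_i inside T_ijk (two half-subtriangles).
Q_i = area(xi, mid(xi, xj), q) + area(xi, q, mid(xi, xk))
assert abs(Q_i - 0.5 * (S_j + S_k)) < 1e-12
```

Since each edge midpoint splits the corresponding subtriangle in half, the identity holds exactly, independently of the mesh size.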
\subsection{A variational formulation}
We now describe a variational formulation for the finite volume scheme. For $(u_{h},v_{h})\in\widetilde{U}_{h}\times\widetilde{U}_{h}$, we define the total flux bilinear form $\widetilde{\A}_{h}:\widetilde{U}_{h}\times\widetilde{V}_{h}\to\mathbb{R}$ such that
\begin{equation}
\label{eq:biformFV}
\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))=\sum_{i=1}^{N}v_{h}(\x_{i})\sum_{j\in\LiN}\widetilde{\mathcal{F}}_{ij}(u_{h}),
\end{equation}
and its discrete version
\begin{equation}
\label{eq:biformFVd}
\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))=\sum_{i=1}^{N}v_{h}(\x_{i})\sum_{j\in\LiN}\overline{\mathcal{F}}_{ij}(u_{h}),
\end{equation}
where $\widetilde{\mathcal{F}}_{ij}$ and $\overline{\mathcal{F}}_{ij}$ are defined in \eqref{eq:cflux} and \eqref{eq:dflux} respectively. So, an approximation $u_{h}\in\widetilde{U}_{h}$ of \eqref{eq:FVscheme} is defined as the unique solution of the discrete problem: find $u_{h}\in\widetilde{U}_{h}$ such that
\begin{equation}
\label{eq:weakformv}
\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))=\langle f_{h},\widetilde{\mathrm{I}}_{h}(v_{h})\rangle_{h},\quad\mbox{for each }v_{h}\in\widetilde{U}_{h},
\end{equation}
where $\langle\cdot,\cdot\rangle_{h}$ denotes the discrete inner product on $\mathbb{S}^{2}$ and $f_{h}$ is the piecewise constant function, whose values are given by $f_{i}$ in \eqref{eq:fi}.
\begin{prp}[\cite{mishev1998finite,du2003voronoi}]
\label{prop:well-v}
Let $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ be an almost uniform Voronoï-Delaunay decomposition on $\mathbb{S}^{2}$. Consider $\overline{\mathcal{F}}_{ij}$ as the discrete flux defined in \eqref{eq:dflux}. Then, for the solution $u_{h}\in\widetilde{U}_{h}$ of problem \eqref{eq:weakformv}, there are positive constants $C_0$ and $C_1$ such that
\[
\begin{split}
\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))&\leq C_{0}|u_{h}|_{1,h}|v_{h}|_{1,h},\quad\mbox{for each }v_{h}\in\widetilde{U}_{h},\\
\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(u_{h}))&\geq C_{1}|u_{h}|_{1,h}^{2}.
\end{split}
\]
Here $|\cdot|_{1,h}$ denotes the discrete seminorm with $p=2$.
\end{prp}
The following result establishes a stability estimate for the scheme \eqref{eq:FVscheme}, which is an immediate consequence of the proposition above.
\begin{prp}
\label{prp:estabilityv}
Let $f\in L^{2}(\S)$ satisfy the compatibility condition \eqref{eq:compf}. Then the unique approximate solution $u_{h}\in\widetilde{U}_{h}$ of the discrete problem \eqref{eq:weakformv} satisfies
\begin{equation}
\label{eq:stabilityv}
|u_{h}|_{1,h}\leq C\|f\|_{L^{2}(\S)},
\end{equation}
where $C$ is a positive constant independent of the parameter $h$.
\end{prp}
\subsection{Geometric error estimates}
In this subsection, we present two bounds concerning the geometric perturbation errors in the bilinear forms. We begin with the following lemma.
\begin{lem}
\label{lem:AhsAhd}
Let $\widetilde{\A}_{h}(\cdot,\cdot)$ and $\overline{\A}_{h}(\cdot,\cdot)$ be the total flux bilinear forms defined in \eqref{eq:biformFV} and \eqref{eq:biformFVd} respectively. Assume that $u_{h}\in\widetilde{U}_{h}$ is the unique solution to the discrete problem \eqref{eq:weakformv}. Then, for each $v_{h}\in\widetilde{U}_{h}$ and $p > 1$ with $1/p+1/q=1$, there exists a positive constant $C$, independent of $h$, such that
\[
\left|\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))-\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))\right|\leq Ch^{2}\|\nabla_{s} u_{h}\|_{L^{p}(\S)}\|\nabla_{s}v_{h}\|_{L^{q}(\S)}.
\]
\end{lem}
\begin{proof}
This estimate was established for the case $p=q=2$ in \cite[Lemma 4, pp.~1686]{du2005finite}. For general $p$ and $q$, the same proof applies by using the H\"{o}lder inequality and the norm equivalence from Proposition \ref{prp:equivnorm}.
\end{proof}
\begin{lem}
\label{lem:AAhs}
Assume $p>1$ such that $1/p+1/q=1$. Let $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ be an almost uniform Voronoï-Delaunay decomposition on $\mathbb{S}^{2}$. Let $\mathcal{A}(\cdot,\cdot)$ and $\widetilde{\A}_{h}(\cdot,\cdot)$ be the bilinear forms defined in \eqref{eq:biformFE} and \eqref{eq:biformFV} respectively. Also, assume that $u_{h}\in\widetilde{U}_{h}$ is the unique solution of problem \eqref{eq:weakformv}. Then, there exists a positive constant $C$, independent of $h$, such that
\begin{equation}
\label{eq:lemh2:main}
|\mathcal{A}(u_{h},v_{h})-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))|\leq C h^{2} \|\nabla_{s}u_{h}\|_{L^{p}(\S)} \|\nabla_{s}v_{h}\|_{L^{q}(\S)},
\end{equation}
for each $v_{h}\in\widetilde{U}_{h}$.
\end{lem}
\begin{proof}
Given $u_{h}\in\widetilde{U}_{h}$, we have $u_{h}\big|_{\widetilde{\mathrm T}_{ijk}}\in W^{2,p}(\Tgsi)$ for each $\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}$. Multiplying $-\Delta_{s}u_{h}$ by $v_{h}\in\widetilde{U}_{h}$ and using the Gauss theorem, we have
\[
\begin{split}
-\int_{\widetilde{\mathrm T}_{ijk}}\Delta_{s}u_{h}(\mathrm{x})v_{h}(\mathrm{x}) ds(\mathrm{x})&=\int_{\widetilde{\mathrm T}_{ijk}}\nabla_{s}u_{h}(\mathrm{x})\cdot\nabla_{s}v_{h}(\mathrm{x})ds(\mathrm{x})\\
&\quad-\int_{\partial\widetilde{\mathrm T}_{ijk}}\nabla_{s}u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Tgsi}v_{h}(\mathrm{x})d\gamma(\mathrm{x}).
\end{split}
\]
From the definition of $\mathcal{A}(\cdot,\cdot)$ and summing over all $\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}$, we get
\begin{equation}
\label{eq:bifem}
\begin{split}
\mathcal{A}(u_{h},v_{h})&=\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\left[\int_{\widetilde{\mathrm T}_{ijk}}-\Delta_{s}u_{h}(\mathrm{x})v_{h}(\mathrm{x})ds(\mathrm{x})\right.\\
&+\left.\int_{\partial\widetilde{\mathrm T}_{ijk}}\nabla_{s}u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Tgsi}v_{h}(\mathrm{x})d\gamma(\mathrm{x})\right].
\end{split}
\end{equation}
As in Lemma \ref{lem:edgeIntp}, we consider again the spherical polygonal regions $\widetilde{\mathrm Q}_{i},\widetilde{\mathrm Q}_{j}$ and $\widetilde{\mathrm Q}_{k}$ formed by the intersection of the geodesic triangle $\widetilde{\mathrm T}_{ijk}$ with the three Voronoï~cells associated with the vertices of the triangle, \textit{i.e.}, $\widetilde{\mathrm Q}_{n}=\widetilde{\mathrm{V}}_{n}\cap\widetilde{\mathrm T}_{ijk}$, $n\in\{i,j,k\}$, with boundaries
\[
\partial\widetilde{\mathrm Q}_{n}=(\partial\widetilde{\mathrm{V}}_{n}\cap\widetilde{\mathrm T}_{ijk})\cup(\partial\widetilde{\mathrm T}_{ijk}\cap\widetilde{\mathrm{V}}_{n}),\quad n\in\{i,j,k\}.
\]
Now, multiplying $-\Delta_{s}u_{h}$ by $\widetilde{\mathrm{I}}_{h}(v_{h})\in\widetilde{V}_{h}$ and integrating over $\widetilde{\mathrm T}_{ijk}$, it follows that
\[
\begin{split}
-\int_{\widetilde{\mathrm T}_{ijk}}\Delta_{s}u_{h}(\mathrm{x})\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})ds(\mathrm{x})&=\sum_{n=i,j,k}\int_{\widetilde{\mathrm Q}_{n}}\nabla_{s}u_{h}(\mathrm{x})\cdot\nabla_{s}\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})ds(\mathrm{x})\\
& \quad-\sum_{n=i,j,k}\int_{\partial\widetilde{\mathrm Q}_{n}}\nabla_{s}u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Qn} \widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})d\gamma(\mathrm{x})\\
& = -\sum_{n=i,j,k}\int_{\partial \widetilde{\mathrm Q}_{n}}\nabla_{s} u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Qn} \widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})d\gamma(\mathrm{x}).
\end{split}
\]
Rearranging the boundary $\partial\widetilde{\mathrm Q}_{n}$ with $n\in\{i,j,k\}$, we have
\[
\begin{split}
-\int_{\widetilde{\mathrm T}_{ijk}}\Delta_{s}u_{h}(\mathrm{x})\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})ds(\mathrm{x})& = -\int_{\partial\widetilde{\mathrm T}_{ijk}}\nabla_{s}u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Tgsi}\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x}) d\gamma(\mathrm{x})\\
& \quad-\sum_{n=i,j,k}\int_{\partial\widetilde{\mathrm{V}}_{n}\cap\widetilde{\mathrm T}_{ijk}}\nabla_{s}u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Vn} \widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})d\gamma(\mathrm{x}).
\end{split}
\]
Now, summing over all geodesic triangles of $\widetilde{\Th}$ and using the duality principle, \textit{i.e.}, that each edge of $\widetilde{\mathrm T}_{ijk}$ intersects a unique dual Voronoï edge, we get
\[
\begin{split}
-\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\int_{\widetilde{\mathrm T}_{ijk}}\Delta_{s}u_{h}(\mathrm{x})&\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})ds(\mathrm{x}) = -\sum_{i=1}^{N}\int_{\partial\widetilde{\mathrm{V}}_{i}}\nabla_{s}u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Vi} \widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})d\gamma(\mathrm{x})\\
& -\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\int_{\partial\widetilde{\mathrm T}_{ijk}}\nabla_{s}u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Tgsi}\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x}) d\gamma(\mathrm{x}).
\end{split}
\]
Observe that the last term above is equal to the bilinear form $\widetilde{\A}_{h}(\cdot,\cdot)$. Therefore
\begin{equation}
\label{eq:bifvm}
\begin{split}
\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))&=\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\left[-\int_{\widetilde{\mathrm T}_{ijk}}\Delta_{s}u_{h}(\mathrm{x})\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})ds(\mathrm{x})\right.\\
&\quad+\left.\int_{\partial\widetilde{\mathrm T}_{ijk}}\nabla_{s}u_{h}(\mathrm{x})\cdot \vec{\mathrm{n}}_{\x,\Tgsi}\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x}) d\gamma(\mathrm{x})\right].
\end{split}
\end{equation}
Then, we subtract \eqref{eq:bifvm} from \eqref{eq:bifem} to obtain
\[
\begin{split}
\left|\mathcal{A}(u_{h},v_{h})-\right.&\left.\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))\right|\leq\left|\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\int_{\widetilde{\mathrm T}_{ijk}}\Delta_{s}u_{h}(\mathrm{x})[v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})]ds(\mathrm{x})\right|\\
& \quad+\left|\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\int_{\partial\widetilde{\mathrm T}_{ijk}}\nabla_{s}u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Tgsi}[v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})]d\gamma(\mathrm{x})\right|.
\end{split}
\]
For each $\widetilde{\tau}_{ij}\in\partial\widetilde{\mathrm T}_{ijk}$, the function $\nabla_{s}u_{h}(\mathrm{x})\cdot\vec{\mathrm{n}}_{\x,\Tgsi}[v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})]$ is anti-symmetric with respect to the edge's midpoint and therefore, as shown in Lemma \ref{lem:edgeIntp}, its integral along the edge vanishes.
It follows that
\[
\left|\mathcal{A}(u_{h},v_{h})-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))\right|\leq\left|\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\int_{\widetilde{\mathrm T}_{ijk}}\Delta_{s}u_{h}(\mathrm{x})[v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})]ds(\mathrm{x})\right|.
\]
To finish the proof, let $\overline{u}^{\Omega}_{h}$ and $\overline{v}^{\Omega}_{h}$ be the extensions of $u_{h}$ and $v_{h}$ to the spherical shell $\Omega_{h}$, restricted to $\mathbf{S}_{h}$, respectively. Since $\overline{u}^{\Omega}_{h}$ is linear on each planar triangle ${\mathrm T}_{ijk}\in\mathcal{T}_{h}$, we have $\Delta\overline{u}^{\Omega}_{h}(\x^{\ast})=0$ for all $\x^{\ast}\in{\mathrm T}_{ijk}$. Moreover, $v_{h}(\mathrm{x})=\overline{v}^{\Omega}_{h}(\x^{\ast})$ for each $\mathrm{x}={\mathcal P}(\x^{\ast})$, which gives the relation $\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})=\widetilde{\mathrm{I}}_{h}(\vh^{\Omega})(\Prj^{-1}(\mathrm{x}))=\widetilde{\mathrm{I}}_{h}(\overline{v}^{\Omega}_{h})(\x^{\ast})$ for each $\x^{\ast}\in{\mathrm T}_{ijk}$. Now, multiplying $\Delta\overline{u}^{\Omega}_{h}$ by $\overline{v}^{\Omega}_{h}-\widetilde{\mathrm{I}}_{h}(\overline{v}^{\Omega}_{h})$ and integrating over ${\mathrm T}_{ijk}$, we have
\[
\int_{{\mathrm T}_{ijk}}\Delta\overline{u}^{\Omega}_{h}(\x^{\ast})\left[\overline{v}^{\Omega}_{h}(\x^{\ast})-\widetilde{\mathrm{I}}_{h}(\overline{v}^{\Omega}_{h})(\x^{\ast})\right]ds(\x^{\ast})=0.
\]
Changing variables via $\mathrm{x}={\mathcal P}(\x^{\ast})$ and using $|\|\x^{\ast}\|-1|\leq Ch^{2}$, we have
\[
\begin{split}
\left|\mathcal{A}(u_{h},v_{h})\right.-&\left.\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))\right|=\left|\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\int_{\widetilde{\mathrm T}_{ijk}}\Delta_{s}u_{h}(\mathrm{x})[v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})]ds(\mathrm{x})\right|\\
&=\left|\sum_{{\mathrm T}_{ijk}\in\mathcal{T}_{h}}\left(\int_{{\mathrm T}_{ijk}}\|\x^{\ast}\|\Delta\overline{u}^{\Omega}_{h}(\x^{\ast})\left[\overline{v}^{\Omega}_{h}(\x^{\ast})-\widetilde{\mathrm{I}}_{h}(\overline{v}^{\Omega}_{h})(\x^{\ast})\right]ds(\x^{\ast})\right.\right.\\
& \quad\left.\left.-\int_{{\mathrm T}_{ijk}}\Delta\overline{u}^{\Omega}_{h}(\x^{\ast})\left[\overline{v}^{\Omega}_{h}(\x^{\ast})-\widetilde{\mathrm{I}}_{h}(\overline{v}^{\Omega}_{h})(\x^{\ast})\right]ds(\x^{\ast})\right)\right|\\
&\leq \sum_{{\mathrm T}_{ijk}\in\mathcal{T}_{h}}\int_{{\mathrm T}_{ijk}}|\|\x^{\ast}\|-1|\left|\Delta\overline{u}^{\Omega}_{h}(\x^{\ast})\left[\overline{v}^{\Omega}_{h}(\x^{\ast})-\widetilde{\mathrm{I}}_{h}(\overline{v}^{\Omega}_{h})(\x^{\ast})\right]\right|ds(\x^{\ast})\\
&\leq Ch^{2}\sum_{{\mathrm T}_{ijk}\in\mathcal{T}_{h}}\int_{{\mathrm T}_{ijk}}\left|\Delta\overline{u}^{\Omega}_{h}(\x^{\ast})\left[\overline{v}^{\Omega}_{h}(\x^{\ast})-\widetilde{\mathrm{I}}_{h}(\overline{v}^{\Omega}_{h})(\x^{\ast})\right]\right|ds(\x^{\ast}).
\end{split}
\]
Finally, for $p>1$ with $1/p+1/q=1$, we invoke the H\"{o}lder inequality, Proposition \ref{prp:equivnormext}, Lemma \ref{lem:edgeIntp} and the inverse inequality from Lemma \ref{lem:propinverse}, to show that
\[
\begin{split}
|\mathcal{A}(u_{h},v_{h})-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(v_{h}))|&\leq Ch^{2}
\sum_{{\mathrm T}_{ijk}\in\mathcal{T}_{h}}\left(\int_{{\mathrm T}_{ijk}}\left|\Delta\overline{u}^{\Omega}_{h}(\x^{\ast})\right|^{p}ds(\x^{\ast})\right)^{1/p}\\
& \quad\times\left(\int_{{\mathrm T}_{ijk}}\left|\overline{v}^{\Omega}_{h}(\x^{\ast})-\widetilde{\mathrm{I}}_{h}(\overline{v}^{\Omega}_{h})(\x^{\ast})\right|^{q}ds(\x^{\ast})\right)^{1/q}\\
&\leq Ch^{2}\sum_{{\mathrm T}_{ijk}\in\mathcal{T}_{h}}\|\Delta\overline{u}^{\Omega}_{h}\|_{L^{p}({\mathrm T}_{ijk})}\|\overline{v}^{\Omega}_{h}-\widetilde{\mathrm{I}}_{h}(\overline{v}^{\Omega}_{h})\|_{L^{q}({\mathrm T}_{ijk})}\\
&\leq Ch^{2}\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\|\Delta_{s}u_{h}\|_{L^{p}(\Tgsi)}\|v_{h}-\widetilde{\mathrm{I}}_{h}(v_{h})\|_{L^{q}(\widetilde{\mathrm T}_{ijk})}\\
&\leq Ch^{2}\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\|\nabla_{s}u_{h}\|_{L^{p}(\Tgsi)}\|\nabla_{s}v_{h}\|_{L^{q}(\Tgsi)}\\
& \leq Ch^{2}\|\nabla_{s}u_{h}\|_{L^{p}(\S)}\|\nabla_{s}v_{h}\|_{L^{q}(\S)}.
\end{split}
\]
This finishes the proof.
\end{proof}
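The geometric factor $|\|\x^{\ast}\|-1|\leq Ch^{2}$ above expresses that the chordal (planar) triangles are second-order close to the unit sphere. As a hedged numerical illustration (not part of the proof), the Python sketch below evaluates the defect of a chord midpoint for two points at geodesic distance $h$ and checks that halving $h$ divides the defect by roughly four:

```python
import math

def chord_midpoint_defect(h):
    # Two points on the unit sphere at geodesic distance h; the midpoint of
    # the connecting chord has norm cos(h/2), hence defect 1 - cos(h/2).
    a = (1.0, 0.0, 0.0)
    b = (math.cos(h), math.sin(h), 0.0)
    m = tuple((a[i] + b[i]) / 2.0 for i in range(3))
    return 1.0 - math.sqrt(sum(c * c for c in m))

d1 = chord_midpoint_defect(0.2)
d2 = chord_midpoint_defect(0.1)
ratio = d1 / d2  # close to 4, i.e., the defect decays like h^2
```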
\begin{lem}
\label{lem:mainfNOPT}
Assume that $f\in L^{p}(\mathbb{S}^{2})$, with $p>1$, satisfies the compatibility condition \eqref{eq:compf}. Then, for each $v_{h}\in\widetilde{U}_{h}$ and $q$ with $1/p+1/q=1$, there exists a positive constant $C$, independent of $h$, such that
\begin{equation}
\label{eq:mainfNOPT}
\left|(f,v_{h})-\langle f_{h},\widetilde{\mathrm{I}}_{h}(v_{h})\rangle_{h}\right|\leq C h \|f\|_{L^{p}(\S)}\|\nabla_{s}v_{h}\|_{L^{q}(\S)}.
\end{equation}
\end{lem}
\begin{proof}
Given $f_{h}$, the approximation of $f$, the inner product yields, for $v_{h}\in\widetilde{U}_{h}$, $\langle f_{h},\widetilde{\mathrm{I}}_{h}(v_{h})\rangle_{h}=(f,\widetilde{\mathrm{I}}_{h}(v_{h}))$. Then, applying the H\"{o}lder inequality with $1/p+1/q=1$ and using Lemma \ref{lem:edgeIntp}, we obtain
\[
\begin{split}
\left|(f,v_{h})-\langle f_{h},\widetilde{\mathrm{I}}_{h}(v_{h})\rangle_{h}\right|&=\left|\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\int_{\widetilde{\mathrm T}_{ijk}}f(\mathrm{x})\left[v_{h}(\mathrm{x})-\widetilde{\mathrm{I}}_{h}(v_{h})(\mathrm{x})\right]ds(\mathrm{x})\right|\\
& \leq Ch\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\|f\|_{L^{p}(\Tgsi)}\|\nabla_{s}v_{h}\|_{L^{q}(\Tgsi)}\\
&\leq Ch\|f\|_{L^{p}(\S)}\|\nabla_{s}v_{h}\|_{L^{q}(\S)}.
\end{split}
\]
Therefore, we get \eqref{eq:mainfNOPT}.
\end{proof}
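The linear rate in Lemma \ref{lem:mainfNOPT} originates in the first-order $L^{q}$ error of the piecewise constant interpolation $\widetilde{\mathrm{I}}_{h}$. A one-dimensional Python analogue (purely illustrative; the interval, the grid and the test function are our own assumptions) exhibits this $O(h)$ behaviour:

```python
import math

def pc_interp_error(n):
    # L^2 distance between v and its piecewise constant nodal interpolant on
    # a uniform grid of n cells over [0, 1], via composite midpoint quadrature.
    v = lambda x: math.sin(2.0 * math.pi * x)
    h = 1.0 / n
    m = 64  # quadrature points per cell
    err2 = 0.0
    for i in range(n):
        xi = i * h  # nodal value of the interpolant on cell i
        for j in range(m):
            x = xi + (j + 0.5) * h / m
            err2 += (v(x) - v(xi)) ** 2 * (h / m)
    return math.sqrt(err2)

e1 = pc_interp_error(32)
e2 = pc_interp_error(64)
ratio = e1 / e2  # close to 2, i.e., first-order convergence in h
```

Halving the cell size roughly halves the error, matching the single power of $h$ in \eqref{eq:mainfNOPT}.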
\section{Error analysis}
\label{sec:erroranalysis}
In this section, we establish convergence-order estimates for the approximate FVM solutions in the classical $H^{1}$-, $L^{2}$- and $\max$-norms, using the framework of \cite{ewing2002accuracy,du2005finite,ju2009finite} for Voronoï-Delaunay decompositions on $\mathbb{S}^{2}$, and we highlight that the estimated convergence rates depend on the position of the vertices of the geometric setting.
\subsection{Classical $H^{1}$ and $L^{2}$ estimates}
The following result provides an error estimate for the finite volume solution $u_{h}$ in the $H^{1}$- and $L^{2}$-norms under minimal regularity assumptions on the exact solution $u$, and it is valid for general Voronoï-Delaunay decompositions on $\mathbb{S}^{2}$.
\begin{thm}
\label{thm:H1-NOPT}
Let $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ be an almost uniform Voronoï-Delaunay decomposition on $\mathbb{S}^{2}$. Assume that $f\in L^{2}(\mathbb{S}^{2})$ satisfies \eqref{eq:compf} and the unique solution $u$ of \eqref{eq:weakform} belongs to $H^{2}(\S)\cap H_{0}^{1}(\S)$. Let $\overline{\mathcal{F}}_{ij}$ be the discrete flux defined in \eqref{eq:dflux}, such that the discrete problem \eqref{eq:weakformv} has a unique solution $u_{h}\in\widetilde{U}_{h}$. Then, there exists a positive constant $C$, independent of $h$, such that
\begin{subequations}
\begin{align}
\|\nabla_{s}\varepsilon_{h}\|_{L^{2}(\S)}&\leq Ch \left(\|u\|_{H^{2}(\S)}+\|f\|_{L^{2}(\S)}\right),\label{eq:mainH1}\\
\|\varepsilon_{h}\|_{L^{2}(\S)}&\leq Ch \|f\|_{L^{2}(\S)},\label{eq:mainL2}
\end{align}
\end{subequations}
where $\varepsilon_{h}=u-u_{h}$.
\end{thm}
\begin{proof}
Firstly, for \eqref{eq:mainH1} we use the coercivity of $\mathcal{A}(\cdot,\cdot)$. Next, we define $\varphi_{h}=\widetilde{\Pi}_{h}(u)-u_{h}$; then, by Proposition \ref{prp:interp} and the triangle inequality, we get
\begin{equation}
\label{eq:H1M1}
\begin{split}
\|\nabla_{s} (u-u_{h})\|_{L^{2}(\S)}^{2}&\preceq \left|\mathcal{A}(u-u_{h},u-u_{h})\right|\\
& \preceq \left|\mathcal{A}(u-u_{h},u-\widetilde{\Pi}_{h}(u))\right|+\left|\mathcal{A}(u,\varphi_{h})-\mathcal{A}(u_{h},\varphi_{h})\right|\\
&\preceq \left|\mathcal{A}(u-u_{h},u-\widetilde{\Pi}_{h}(u))\right|+\left|\mathcal{A}(u,\varphi_{h})-\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\varphi_{h}))\right|\\
& \quad+\left|\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\varphi_{h}))-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\varphi_{h}))\right|\\
& \quad +\left|\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\varphi_{h}))-\mathcal{A}(u_{h},\varphi_{h})\right|\\
& = I_{1}+I_{2}+I_{3}+I_{4},
\end{split}
\end{equation}
where the hidden constant in \lq{}\lq{}$\preceq$\rq{}\rq{} comes from the coercivity of $\mathcal{A}(\cdot,\cdot)$. For $I_{1}$, applying the continuity of $\mathcal{A}(\cdot,\cdot)$ and Proposition \ref{prp:interp}, we have
\begin{align}
I_{1}=|\mathcal{A}(u-u_{h},u-\widetilde{\Pi}_{h}(u))|&\leq C\|\nabla_{s} (u-u_{h})\|_{L^{2}(\S)}\|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))\|_{L^{2}(\S)}\nonumber\\
&\leq Ch\|\nabla_{s} (u-u_{h})\|_{L^{2}(\S)}\|u\|_{H^{2}(\S)}.\label{eq:H101}
\end{align}
For $I_{2}$, using the right-hand sides of the variational problems \eqref{eq:weakform} and \eqref{eq:weakformv} and taking $p=q=2$ in Lemma \ref{lem:mainfNOPT}, we find
\[
I_{2}=\left|\mathcal{A}(u,\varphi_{h})-\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\varphi_{h}))\right|=|(f,\varphi_{h}-\widetilde{\mathrm{I}}_{h}(\varphi_{h}))|\leq Ch\|f\|_{L^{2}(\S)}\|\nabla_{s}\varphi_{h}\|_{L^{2}(\S)}.
\]
Invoking Proposition \ref{prp:interp}, we have
\begin{align*}
\|\nabla_{s}\varphi_{h}\|_{L^{2}(\S)}&\leq \|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))\|_{L^{2}(\S)}+\|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}\\
&\leq Ch\|u\|_{H^{2}(\S)}+\|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}.
\end{align*}
Thus, we obtain
\begin{equation}
\label{eq:H102}
I_{2}\leq Ch\|f\|_{L^{2}(\S)}\|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}+Ch^{2}\|f\|_{L^{2}(\S)}\|u\|_{H^{2}(\S)}.
\end{equation}
For $I_{3}$, by Lemma \ref{lem:AhsAhd}, we have
\[
I_{3}=\left|\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\varphi_{h}))-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\varphi_{h}))\right|\leq Ch^{2}\|\nabla_{s}u_{h}\|_{L^{2}(\S)}\|\nabla_{s}\varphi_{h}\|_{L^{2}(\S)}.
\]
From \eqref{eq:stabilityv} and Proposition \ref{prp:equivnorm}, it follows that $\|\nabla_{s}u_{h}\|_{L^{2}(\S)}\leq C\|f\|_{L^{2}(\S)}$. Now, by an argument similar to that for $I_{2}$, we have
\begin{equation}
\label{eq:H103}
I_{3}\leq Ch^{2}\|f\|_{L^{2}(\S)}\|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}+Ch^{3}\|f\|_{L^{2}(\S)}\|u\|_{H^{2}(\S)}.
\end{equation}
As for $I_{4}$, using Lemma \ref{lem:AAhs} we get
\begin{equation}
\label{eq:H104}
\begin{split}
I_{4}=\left|\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\varphi_{h}))-\mathcal{A}(u_{h},\varphi_{h})\right|&\leq C h^{2}\|\nabla_{s}u_{h}\|_{L^{2}(\S)}\|\nabla_{s}\varphi_{h}\|_{L^{2}(\S)}\\
&\leq Ch^{2}\|f\|_{L^{2}(\S)}\|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}\\
&\quad+Ch^{3}\|f\|_{L^{2}(\S)}\|u\|_{H^{2}(\S)}.
\end{split}
\end{equation}
Gathering \eqref{eq:H1M1} and inequalities \eqref{eq:H101}--\eqref{eq:H104}, for $h>0$ small enough, we find
\[
\|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}^{2}\leq C h\|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}\left(\|u\|_{H^{2}(\S)}+\|f\|_{L^{2}(\S)}\right).
\]
Finally, dividing through by $\|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}$ we obtain \eqref{eq:mainH1}.
In order to prove \eqref{eq:mainL2}, we derive an error estimate following the classical duality argument: for $u-u_{h}\in H_{0}^{1}(\S)$, by \eqref{eq:weakform}, there exists a unique weak solution $w\in H^{2}(\S)\cap\HqS$ satisfying
\[
\mathcal{A}(v,w)=(v,u-u_{h}),\quad \mbox{for all }v\in H_{0}^{1}(\S).
\]
Taking $v=u-u_{h}$ and by using the regularity estimate \eqref{eq:regularity}, we have
\begin{equation}
\label{eq:regularityw}
\|w\|_{H^{2}(\S)}\leq C\|u-u_{h}\|_{L^{2}(\S)}.
\end{equation}
Now, let $w_{h}=\widetilde{\Pi}_{h}(w)\in\widetilde{U}_{h}$; invoking an argument similar to that of the first part of the proof, we obtain
\[
\begin{split}
\|u-u_{h}\|_{L^{2}(\S)}^{2}&=(u-u_{h},u-u_{h})\leq |\mathcal{A}(u-u_{h},w)|\\
&=|\mathcal{A}(u-u_{h},w-w_{h})+\mathcal{A}(u,w_{h})-\mathcal{A}(u_{h},w_{h})|\\
&\leq |\mathcal{A}(u-u_{h},w-w_{h})|+|\mathcal{A}(u,w_{h})-\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(w_{h}))|\\
& \quad+|\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(w_{h}))-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(w_{h}))|\\
&\quad+|\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(w_{h}))-\mathcal{A}(u_{h},w_{h})|\\
&= I_{1}+I_{2}+I_{3}+I_{4}.
\end{split}
\]
For $I_{1}$, by estimate \eqref{eq:mainH1}, Proposition \ref{prp:interp} and inequality \eqref{eq:regularityw}, we have
\begin{equation}
\label{eq:01L2NOPT}
\begin{split}
I_{1}=|\mathcal{A}(u-u_{h},w-w_{h})|&\leq C \|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}\|\nabla_{s}(w-w_{h})\|_{L^{2}(\S)}\\
&\leq C h^{2}\left(\|u\|_{H^{2}(\S)}+\|f\|_{L^{2}(\S)}\right)\|u-u_{h}\|_{L^{2}(\S)}.
\end{split}
\end{equation}
For $I_{2}$, by Lemma \ref{lem:mainfNOPT} and expression \eqref{eq:regularityw}, we have
\begin{equation}
\label{eq:02L2NOPT}
I_{2}=|\mathcal{A}(u,w_{h})-\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(w_{h}))|\leq Ch\|f\|_{L^{2}(\S)}\|u-u_{h}\|_{L^{2}(\S)}.
\end{equation}
For $I_{3}$, analogously, using Lemma \ref{lem:AhsAhd} together with \eqref{eq:stabilityv} and \eqref{eq:regularityw}, it follows that
\begin{equation}
\label{eq:03L2NOPT}
I_{3}=\left|\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(w_{h}))-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(w_{h}))\right|\leq C h^{2}\|f\|_{L^{2}(\S)}\|u-u_{h}\|_{L^{2}(\S)}.
\end{equation}
Finally, Lemma \ref{lem:AAhs}, together with the estimates \eqref{eq:stabilityv} and \eqref{eq:regularityw}, yields
\begin{equation}
\label{eq:04L2NOPT}
I_{4}=\left|\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(w_{h}))-\mathcal{A}(u_{h},w_{h})\right|\leq C h^{2}\|f\|_{L^{2}(\S)}\|u-u_{h}\|_{L^{2}(\S)}.
\end{equation}
Combining \eqref{eq:01L2NOPT}--\eqref{eq:04L2NOPT} with $h>0$ small enough, we have
\[
\|u-u_{h}\|_{L^{2}(\S)}^{2}\leq C h \|f\|_{L^{2}(\S)}\|u-u_{h}\|_{L^{2}(\S)}.
\]
Dividing by $\|u-u_{h}\|_{L^{2}(\S)}$ completes the proof of the theorem.
\end{proof}
\begin{rem}
Notice that the convergence rate in the $L^{2}$-norm is lower than the one obtained with FEM and, as we will see in the numerical experiments, the errors of the finite volume solutions behave better than the estimates given above. At present, there is no known standard method to improve the $L^{2}$-norm convergence estimate for general Voronoï-Delaunay decompositions.
\end{rem}
Note that the dominant term in the bounds of Theorem \ref{thm:H1-NOPT} comes from Lemma \ref{lem:mainfNOPT}. One way to gain an extra power in the convergence rate is to apply the SCVT optimization to the standard decomposition. A quadratic-order estimate was reported in \cite{du2005finite}; we will show below that the regularity required of the exact solution can be weakened by using the techniques investigated in \cite{ewing2002accuracy}.
The following lemma is a simplified version of a result given by \cite[Lemma 1, pp.~1682]{du2005finite}, assuming that the density function $\rho$ is equal to $1$.
\begin{lem}[\cite{du2005finite}]
\label{lem:h2du}
Let $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ be an almost uniform Spherical Centroidal Voronoï-Delaunay decomposition (SCVT) on $\mathbb{S}^{2}$. Then, for any $w\in H^{2}(\S)\cap\HqS$, there exists a positive constant $C$, independent of $h$, such that
\[
\left|\int_{\widetilde{\mathrm{V}}_{i}}[w(\mathrm{x})-\widetilde{\Pi}^{\ast}_{h}(w)(\mathrm{x})]ds(\mathrm{x})\right|\leq Ch^{2}m_{a}(\widetilde{\mathrm{V}}_{i})^{1/2}\|w\|_{H^{2}(\widetilde{\mathrm{V}}_{i})}.
\]
\end{lem}
In light of the lemma above, we have the following result.
\begin{lem}
\label{lem:fh2SCVT}
Let $f\in H_{0}^{1}(\S)$ be a function satisfying the compatibility condition \eqref{eq:compf} and let $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ be an almost uniform Spherical Centroidal Voronoï-Delaunay decomposition (SCVT) on $\mathbb{S}^{2}$. Then, for each $w\in H^{2}(\S)\cap\HqS$, there exists a positive constant $C$, independent of $h$, such that
\[
\left(f,\widetilde{\Pi}_{h}(w)-\widetilde{\Pi}^{\ast}_{h}(w)\right)\leq Ch^{2}\|f\|_{H^{1}(\S)}\|w\|_{H^{2}(\S)},
\]
where $\widetilde{\Pi}_{h}$ and $\widetilde{\Pi}^{\ast}_{h}$ are the interpolation operators on the spaces $\widetilde{U}_{h}$ and $\widetilde{V}_{h}$, respectively.
\end{lem}
\begin{proof}
By the definition of the inner product, we have
\[
\begin{split}
\left(f,\widetilde{\Pi}_{h}(w)-\widetilde{\Pi}^{\ast}_{h}(w)\right)&=\sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\int_{\widetilde{\mathrm T}_{ijk}}f(\mathrm{x})\left[\widetilde{\Pi}_{h}(w)(\mathrm{x})-w(\mathrm{x})\right]ds(\mathrm{x})\\
&\quad+\sum_{i=1}^{N}\int_{\widetilde{\mathrm{V}}_{i}}\left[f(\mathrm{x})-\mathrm{P}_{h}(f)\right]\left[w(\mathrm{x})-\widetilde{\Pi}^{\ast}_{h}(w)(\mathrm{x})\right]ds(\mathrm{x})\\
& \quad+\sum_{i=1}^{N}\int_{\widetilde{\mathrm{V}}_{i}}\mathrm{P}_{h}(f)\left[w(\mathrm{x})-\widetilde{\Pi}^{\ast}_{h}(w)(\mathrm{x})\right]ds(\mathrm{x})\\
& = I_{1}+I_{2}+I_{3},
\end{split}
\]
where $\mathrm{P}_{h}(f)$ denotes the $L^{2}$-projection of the function $f$ on $\widetilde{V}_{h}$. For $I_{1}$, by the Cauchy-Schwarz inequality and Proposition \ref{prp:interp}, we obtain
\begin{equation}
\label{eq:L2h2dua1}
\begin{split}
I_{1}&\leq \sum_{\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}}\left(\int_{\widetilde{\mathrm T}_{ijk}}|f(\mathrm{x})|^{2}ds(\mathrm{x})\right)^{1/2}\left(\int_{\widetilde{\mathrm T}_{ijk}}|\widetilde{\Pi}_{h}(w)(\mathrm{x})-w(\mathrm{x})|^{2}ds(\mathrm{x})\right)^{1/2}\\
&\leq Ch^{2}\|f\|_{L^{2}(\S)}\|w\|_{H^{2}(\S)}.
\end{split}
\end{equation}
Analogously, for $I_{2}$,
\begin{equation}
\label{eq:L2h2dua2}
\begin{split}
I_{2}&\leq \sum_{i=1}^{N}\left(\int_{\widetilde{\mathrm{V}}_{i}}|f(\mathrm{x})-\mathrm{P}_{h}(f)(\mathrm{x})|^{2}ds(\mathrm{x})\right)^{1/2}\left(\int_{\widetilde{\mathrm{V}}_{i}}|w(\mathrm{x})-\widetilde{\Pi}^{\ast}_{h}(w)(\mathrm{x})|^{2}ds(\mathrm{x})\right)^{1/2}\\
&\leq Ch\sum_{i=1}^{N}\|\nabla_{s} f\|_{L^{2}(\widetilde{\mathrm{V}}_{i})}\|w-\widetilde{\Pi}^{\ast}_{h}(w)\|_{L^{2}(\widetilde{\mathrm{V}}_{i})}\leq Ch^{2}\| f\|_{H^{1}(\S)}\|w\|_{H^{2}(\S)}.
\end{split}
\end{equation}
Finally, for $I_{3}$, using Lemma \ref{lem:h2du} and Proposition \ref{prp:equivnorm}, we have that
\begin{equation}
\label{eq:L2h2dua3}
\begin{split}
I_{3}&\leq Ch^{2}\sum_{i=1}^{N}|\mathrm{P}_{h}(f)\big|_{\widetilde{\mathrm{V}}_{i}}m_{a}(\widetilde{\mathrm{V}}_{i})^{1/2}\|w\|_{H^{2}(\widetilde{\mathrm{V}}_{i})}\\
&\leq Ch^{2}\left(\sum_{i=1}^{N}m_{a}(\widetilde{\mathrm{V}}_{i})\left|\mathrm{P}_{h}(f)\big|_{\widetilde{\mathrm{V}}_{i}}\right|^{2}\right)^{1/2}\left(\sum_{i=1}^{N}\|w\|_{H^{2}(\widetilde{\mathrm{V}}_{i})}^{2}\right)^{1/2}\\
&\leq Ch^{2}\|f\|_{L^{2}(\S)}\|w\|_{H^{2}(\S)}.
\end{split}
\end{equation}
Combining \eqref{eq:L2h2dua1}--\eqref{eq:L2h2dua3} we obtain the result.
\end{proof}
Below we present a subtle modification of the proof given in \cite{du2005finite}, assuming that the exact solution $u\in H^{2}(\S)\cap\HqS$ and the source term $f\in H_{0}^{1}(\S)$.
\begin{thm}
\label{thm:L2SCVT}
Let $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ be an almost uniform Spherical Centroidal Voronoï-Delaunay decomposition (SCVT) on $\mathbb{S}^{2}$. Assume also that $f\in H_{0}^{1}(\S)$ and that $u\in H^{2}(\S)\cap\HqS$ is the unique solution of \eqref{eq:strongform}. Let $\overline{\mathcal{F}}_{ij}$ be the discrete flux defined in \eqref{eq:dflux}, such that the discrete problem \eqref{eq:weakformv} has a unique solution $u_{h}\in\widetilde{U}_{h}$. Then, there exists a positive constant $C$, independent of $h$, such that
\[
\|\varepsilon_{h}\|_{L^{2}(\S)}\leq C h^{2}\left(\|u\|_{H^{2}(\S)}+\|f\|_{H^{1}(\S)}\right),
\]
where $\varepsilon_{h}=u-u_{h}$.
\end{thm}
\begin{proof}
The proof is analogous to \eqref{eq:mainL2} in Theorem \ref{thm:H1-NOPT}, now using Lemma \ref{lem:fh2SCVT} in equation \eqref{eq:02L2NOPT} to guarantee an extra power of $h$ in the term $I_{2}$, \textit{i.e.},
\[
I_{2}=\left|\left(f,w_{h}-\widetilde{\mathrm{I}}_{h}(w_{h})\right)\right|\leq Ch^{2}\|f\|_{L^{2}(\S)}\|w\|_{H^{2}(\S)}\leq Ch^{2}\|f\|_{L^{2}(\S)}\|u-u_{h}\|_{L^{2}(\S)},
\]
where $w_{h}=\widetilde{\Pi}_{h}(w)$. Therefore, the quadratic order is attained, which completes the proof of the theorem.
\end{proof}
\subsection{Pointwise error estimates}
In this subsection, we proceed to derive error estimates in the maximum norm for problem \eqref{eq:strongform}. We shall use the variational formulation \eqref{eq:biformFE} for regularized Green's functions. We consider some properties of these auxiliary functions, along with pointwise estimates for finite element methods on surfaces given in \cite{demlow2009higher,kroner2017approximative}.
\subsubsection{Regularity properties of Green's functions}
The lemma below collects fundamental properties of the Green's function on $\mathbb{S}^{2}$. The proof is detailed in \cite[Theorem 4.13, pp.~108]{aubin1998some}.
\begin{lem}[\cite{aubin1998some}]
There exists $G(\mathrm{x},\mathrm{y})$, a Green's function for the Laplacian on $\mathbb{S}^{2}$ satisfying, for each $u\in C^{2}(\S)$ and $\mathrm{x},\mathrm{y}\in\mathbb{S}^{2}$ with $\mathrm{x}\neq\mathrm{y}$,
\[
u(\mathrm{y})=\frac{1}{m_{a}(\mathbb{S}^{2})}\int_{\mathbb{S}^{2}}u(\mathrm{x})ds(\mathrm{x})-\int_{\mathbb{S}^{2}}G(\mathrm{x},\mathrm{y})\Delta_{s} u(\mathrm{x})ds(\mathrm{x}),
\]
and there exist positive constants $C_{0},C_{1}$ and $C_{2}$ such that
\begin{align*}
|G(\mathrm{x},\mathrm{y})|&\leq C_{0}(1+|\ln \mathrm{d}(\mathrm{x},\mathrm{y})|),\\
|\nabla_{s} G(\mathrm{x},\mathrm{y})|&\leq C_{1}\frac{1}{|\mathrm{d}(\mathrm{x},\mathrm{y})|},\quad |\Delta_{s} G(\mathrm{x},\mathrm{y})|\leq C_{2}\frac{1}{|\mathrm{d}(\mathrm{x},\mathrm{y})|^{2}},
\end{align*}
where $\nabla_{s}$ and $\Delta_{s}$ denote the tangential gradient and Laplacian acting on a function of $\mathrm{x}$. Finally, $G(\mathrm{x},\mathrm{y})$ satisfies
\begin{equation}
\label{eq:compaticondG}
\int_{\mathbb{S}^{2}}G(\mathrm{x},\mathrm{y})ds(\mathrm{x})=0.
\end{equation}
\end{lem}
From now on, we denote by $\Gy(\x)$ the Green's function $G(\mathrm{x},\mathrm{y})$. Further, we consider the notation $g^{\y}$ to refer to the discrete Green's function defined as
\[
\Gyd(\x):=\int_{\mathbb{S}^{2}}G^{\x}(\z)\dy(\z) ds(\mathrm{z}),
\]
where $\dy(\z)$ represents the discrete Dirac delta function at the point $\mathrm{y}$.
We now present a proposition by Demlow \cite[Proposition 4.2, pp.~813]{demlow2009higher}, as it will be used further on in the analysis.
\begin{prp}[\cite{demlow2009higher}]
\label{prp:demlow-delta}
Consider $v_{h}\in\widetilde{U}_{h}$ and fix $\mathrm{y}\in\widetilde{\mathrm T}_{ijk}\subset\mathbb{S}^{2}$. Let $\vec{\mathrm{n}}_{\y,\Tgsi}$ be a unit vector lying in the tangent plane $\mathbb{T}_{\y,\S}$ at $\mathrm{y}$. Then, there exist $\delta^{\y}$ and $\boldsymbol{\delta}^{\y,\Omega}$, both independent of $v_{h}$, such that for some positive constant $C$,
\[
\|\delta^{\y}\|_{W^{m,p}(\Tgsi)}+\|\boldsymbol{\delta}^{\y,\Omega}\|_{W^{m,p}(\Tgsi)}\leq Ch^{-m-2\left(\frac{p-1}{p}\right)},
\]
for $m\in\{0,1\}$ and $1\leq p\leq \infty$. Further, there exist positive generic constants $C_{0}$ and $C_{1}$, such that
\begin{align*}
|v_{h}(\mathrm{y})|&\leq C_{0}\left|\int_{\widetilde{\mathrm T}_{ijk}}\dyx v_{h}(\mathrm{x})ds(\mathrm{x})\right|,\\
\left|\nabla_{s} v_{h}(\mathrm{y})\cdot \vec{\mathrm{n}}_{\y,\Tgsi}\right|&\leq C_{1}\left|\int_{\widetilde{\mathrm T}_{ijk}}v_{h}(\mathrm{x})\nabla_{s}\cdot\boldsymbol{\delta}^{\y,\Omega} (\mathrm{x}) ds(\mathrm{x})\right|,
\end{align*}
where $\boldsymbol{\delta}^{\y,\Omega}=\|\x^{\ast}\|\delta^{\yp,\Omega}\vec{\mathrm{n}}_{\y,\Tgsi}$, with $\mathrm{y}={\mathcal P}(\y^{\ast})$.
\end{prp}
Let us now introduce a variational formulation for the discrete Green's functions. Fix $\mathrm{y}\in\mathbb{S}^{2}$, then we consider two kinds of regularized Green's functions $\Gyd_{0},\Gyd_{1}\in C^{\infty}(\mathbb{S}^{2})$ satisfying the following variational problems:
\begin{subequations}
\begin{align}
\mathcal{A}(v,\Gyd_{0})&=(v,\delta^{\y}),\quad\mbox{for each }v \in H_{0}^{1}(\S),\label{eq:weakformGd0}\\
\mathcal{A}(v,\Gyd_{1})&=(v,\nabla_{s} \cdot \boldsymbol{\delta}^{\y,\Omega}),\quad\mbox{for each }v \in H_{0}^{1}(\S).\label{eq:weakformGd1}
\end{align}
\end{subequations}
Accordingly, we define $\Gyd_{0,h},\Gyd_{1,h}\in\widetilde{U}_{h}$ as the solutions for the finite element approximate problems
\begin{equation}
\label{eq:weakformGh0}
\mathcal{A}(w_{h},\Gyd_{0,h})=(w_{h},\delta^{\y}),\quad\mbox{for each }w_{h}\in\widetilde{U}_{h},
\end{equation}
and
\begin{equation}
\label{eq:weakformGh1}
\mathcal{A}(w_{h},\Gyd_{1,h})=(w_{h},\nabla_{s} \cdot \boldsymbol{\delta}^{\y,\Omega}),\quad\mbox{for each }w_{h}\in\widetilde{U}_{h}.
\end{equation}
The finite element approximation $\Gyd_{n,h}\in\widetilde{U}_{h}$ with $n\in\{0,1\}$ is taken to be the unique solution of problem
\begin{equation}
\label{eq:weakformGh}
\mathcal{A}(w_{h},\Gyd_{n}-\Gyd_{n,h})=0,\quad\mbox{for each }w_{h}\in\widetilde{U}_{h}.
\end{equation}
Notice that $\Gyd_{n,h}$ satisfies \eqref{eq:compaticondG} by the structure of the space $\widetilde{U}_{h}$ for $n\in\{0,1\}$. On the other hand, the term $\Gyd_{n}-\Gyd_{n,h}$ satisfies the error estimates in the $H^{1}$ and $L^{2}$-norms \cite{demlow2009higher}. These discrete Green's functions appear because we will need some additional \emph{a priori} estimates, which are well studied in the literature \cite{demlow2009higher,kroner2017approximative,rannacher1982some,schatz1995interior}.
\begin{lem}[\cite{demlow2009higher}]
\label{lem:limitGreen}
Let $\Gyd_{n}$ be a discrete Green's function and $\Gyd_{n,h}\in\widetilde{U}_{h}$ its finite element approximation with $n\in\{0,1\}$. Then, we have:
\begin{subequations}
\begin{align}
\|\nabla_{s}(\Gyd_{1}-\Gyd_{1,h})\|_{L^{1}(\mathbb{S}^{2})}&\leq C,\label{eq:G1}\\
\|\nabla_{s} \Gyd_{1}\|_{L^{1}(\mathbb{S}^{2})}+\|\Gyd_{0}\|_{W^{2,1}(\mathbb{S}^{2})}&\leq C|\ln h|,\label{eq:G2}\\
\|\nabla_{s}\Gyd_{0,h}\|_{L^{2}(\S)}&\leq C |\ln h|^{1/2}.\label{eq:G3}
\end{align}
\end{subequations}
where $C$ denotes a positive generic constant independent of $h$. Here the factor $|\ln h|^{1/2}$ is of order ${\mathcal O}(h^{-\eta})$ for any $\eta\in(0,1)$.
\end{lem}
\begin{proof}
We omit the details of \eqref{eq:G1} and \eqref{eq:G2}, but we highlight that these proofs are detailed in \cite[Lemma 3.3, pp.~819]{demlow2009higher} and \cite[Lemma 5.2, pp.~10]{kroner2017approximative}. Regarding \eqref{eq:G3}, let $\Gyd_{0,h}\in\widetilde{U}_{h}$ be the finite element approximation of $\Gyd_{0}$. From Proposition \ref{prp:equivnormext} it follows that
\[
\|\nabla_{s} \Gyd_{0,h}\|^{2}_{L^{2}(\S)}\leq C\|\nabla \overline{g}^{\y,\Omega}_{0,h}\|_{L^{2}(\mathbf{S}_{h})}^{2}=C\int_{\mathbf{S}_{h}}\nabla \overline{g}^{\y,\Omega}_{0,h}(\x^{\ast})\cdot\nabla \overline{g}^{\y,\Omega}_{0,h}(\x^{\ast})ds(\x^{\ast}),
\]
where $\overline{g}^{\y,\Omega}_{0,h}$ denotes the extension of $\Gyd_{0,h}$ to $\Omega_{h}$ restricted to $\mathbf{S}_{h}$. Invoking Proposition \ref{prp:demlow-delta}, we get
\[
\left|\int_{\mathbf{S}_{h}}\nabla \overline{g}^{\y,\Omega}_{0,h}(\x^{\ast})\cdot\nabla \overline{g}^{\y,\Omega}_{0,h}(\x^{\ast})ds(\x^{\ast})\right|=\left|\int_{\mathbf{S}_{h}}\overline{g}^{\y,\Omega}_{0,h}(\x^{\ast})\delta^{\yp,\Omega}(\x^{\ast})ds(\x^{\ast})\right|=\left|\overline{g}^{\y,\Omega}_{0,h}(\y^{\ast})\right|,
\]
where $\mathrm{y}={\mathcal P}(\y^{\ast})$. Thus,
\begin{equation}
\label{eq:dSo1}
\|\nabla_{s} \Gyd_{0,h}\|^{2}_{L^{2}(\S)}\leq C\left|\overline{g}^{\y,\Omega}_{0,h}(\y^{\ast})\right|\leq C\left|\Gyd_{0,h}(\mathrm{y})\right|.
\end{equation}
From the discrete Sobolev inequality (see, \textit{e.g.}, \cite[Lemma 4.9.2, pp.~124]{brenner2007mathematical} or \cite[Lemma 3.12, pp.~527]{kovacs2018maximum}), there exists a positive constant $C$, such that
\begin{equation}
\label{eq:dSo2}
\left|\Gyd_{0,h}(\mathrm{y})\right|\leq C|\ln h|^{1/2}\|\nabla_{s}\Gyd_{0,h}\|_{L^{2}(\S)}.
\end{equation}
Combining \eqref{eq:dSo1}-\eqref{eq:dSo2} we obtain \eqref{eq:G3}.
\end{proof}
Now, we can show a weak stability condition of approximate solutions of \eqref{eq:FVscheme} in the $\max$-norm.
\begin{prp}
\label{lem:weaklnhv2}
Assume that $u_{h}\in\widetilde{U}_{h}$ is the unique solution of \eqref{eq:weakformv}. Then, there exists a positive constant $C$, independent of $h$, such that
\[
\|u_{h}\|_{L^{\infty}(\S)}\leq C|\ln h|^{1/2}\|f\|_{L^{2}(\S)}.
\]
\end{prp}
\begin{proof}
From Proposition \ref{prp:demlow-delta}, there exists $\delta^{\y}$ supported in $\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}$ such that $\left|u_{h}(\mathrm{y})\right|=\left|(u_{h},\delta^{\y})\right|=\left|\mathcal{A}(u_{h},\Gyd_{0,h})\right|$. By virtue of the continuity of $\mathcal{A}(\cdot,\cdot)$ and Lemma \ref{lem:limitGreen}, we have
\[
\left|u_{h}(\mathrm{y})\right|=\left|\mathcal{A}\left(u_{h},\Gyd_{0,h}\right)\right|\leq C\|\nabla_{s}u_{h}\|_{L^{2}(\S)}\|\nabla_{s}\Gyd_{0,h}\|_{L^{2}(\S)}\leq C|\ln h|^{1/2}\|\nabla_{s}u_{h}\|_{L^{2}(\S)}.
\]
Finally, applying Propositions \ref{prp:estabilityv} and \ref{prp:equivnorm}, we get the desired inequality.
\end{proof}
We now state and show the main results of this section.
\begin{thm}
\label{thm:maxH1-NOPT}
Let $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ be an almost uniform Voronoï-Delaunay decomposition on $\mathbb{S}^{2}$. Assume that $f\in L^{2}(\S)$ satisfies \eqref{eq:compf} and the unique weak solution $u$ of \eqref{eq:strongform} belongs to $W^{2,\infty}(\S)\cap H^{2}(\S)\cap\HqS$. Let $\overline{\mathcal{F}}_{ij}$ be the discrete flux defined in \eqref{eq:dflux}, such that the discrete problem \eqref{eq:weakformv} has unique solution $u_{h}\in\widetilde{U}_{h}$. Then, there exists a positive constant $C$, independent of $h$, such that
\[
\|\varepsilon_{h}\|_{L^{\infty}(\S)}\leq C h |\ln h|^{1/2}\left(\|u\|_{H^{2}(\S)}+\|f\|_{L^{2}(\S)}\right)+C h^{2}\|u\|_{W^{2,\infty}(\S)},
\]
where $\varepsilon_{h}=u-u_{h}$.
\end{thm}
\begin{proof}
We shall fix $\mathrm{y}\in\mathbb{S}^{2}$ and consider a discrete Green's function $\Gyd_{0}$ satisfying \eqref{eq:compaticondG} and the problem \eqref{eq:weakformGd0}. We also consider the finite element approximation $\Gyd_{0,h}\in\widetilde{U}_{h}$ satisfying the problem \eqref{eq:weakformGh}. By Proposition \ref{prp:demlow-delta}, there exists $\delta^{\y}$ supported in $\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}$. We then have
\begin{equation}
\label{eq:MmaxH1}
\begin{split}
\left|(u-u_{h})(\mathrm{y})\right|&\leq \left|(u-\widetilde{\Pi}_{h}(u))(\mathrm{y})\right|+\left|(\widetilde{\Pi}_{h}(u)-u_{h})(\mathrm{y})\right|\\
&\preceq\left|(u-\widetilde{\Pi}_{h}(u))(\mathrm{y})\right|+\left|\int_{\mathbb{S}^{2}}(\widetilde{\Pi}_{h}(u)-u_{h})(\mathrm{x})\delta^{\mathrm{y}}(\mathrm{x})ds(\mathrm{x})\right|\\
&\preceq\|u-\widetilde{\Pi}_{h}(u)\|_{L^{\infty}(\S)}+\left|\mathcal{A}(\widetilde{\Pi}_{h}(u)-u_{h},\Gyd_{0})\right|\\
&\preceq \|u-\widetilde{\Pi}_{h}(u)\|_{L^{\infty}(\S)}+\left|\mathcal{A}\left(\widetilde{\Pi}_{h}(u)-u_{h},\Gyd_{0}-\Gyd_{0,h}\right)\right|\\
&\quad+\left|\mathcal{A}\left(\widetilde{\Pi}_{h}(u)-u_{h},\Gyd_{0,h}\right)\right|
\\
&=I_{1}+I_{2}+I_{3}.
\end{split}
\end{equation}
About $I_{1}$, using Proposition \ref{prp:interp} with $p=\infty$, we have
\begin{equation}
\label{eq:maxHI1}
I_{1}=\|u-\widetilde{\Pi}_{h}(u)\|_{L^{\infty}(\S)}\leq C h^{2}\|u\|_{W^{2,\infty}(\S)}.
\end{equation}
For $I_{2}$, from \eqref{eq:weakformGh}, we obtain
\begin{equation}
\label{eq:maxHI2}
I_{2}=0.
\end{equation}
Finally, for $I_{3}$, utilizing the continuity of $\mathcal{A}(\cdot,\cdot)$ and Proposition \ref{prp:interp}, Theorem \ref{thm:H1-NOPT} and inequality \eqref{eq:G3} from Lemma \ref{lem:limitGreen}, we have
\begin{equation}
\label{eq:maxHI3}
\begin{split}
I_{3}&\leq C\|\nabla_{s}(\widetilde{\Pi}_{h}(u)-u_{h})\|_{L^{2}(\S)}\|\nabla_{s}\Gyd_{0,h}\|_{L^{2}(\S)}\\
&\leq C\left(\|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))\|_{L^{2}(\S)}+\|\nabla_{s}(u-u_{h})\|_{L^{2}(\S)}\right)\|\nabla_{s}\Gyd_{0,h}\|_{L^{2}(\S)}\\
& \leq Ch|\ln h|^{1/2}\left(\|u\|_{H^{2}(\S)}+\|f\|_{L^{2}(\S)}\right).
\end{split}
\end{equation}
Combining \eqref{eq:maxHI1}-\eqref{eq:maxHI3} and \eqref{eq:MmaxH1} for $h>0$ small enough, we find
\[
\left|(u-u_{h})(\mathrm{y})\right|\leq Ch|\ln h|^{1/2}\left(\|u\|_{H^{2}(\S)}+\|f\|_{L^{2}(\S)}\right)+Ch^{2}\|u\|_{W^{2,\infty}(\S)}.
\]
Finally, taking the maximum over $\mathrm{y}\in\mathbb{S}^{2}$ leads to the desired result.
\end{proof}
Notice that the finite volume scheme \eqref{eq:FVscheme} is included in the $H^{1}$-norm estimate. Further, the result above is not optimal with respect to the regularity required of the exact solution \cite{ewing2002accuracy}. This excessive regularity can be removed as follows: first, require only that the weak solution $u$ of \eqref{eq:strongform} belongs to $W^{2,\infty}(\S)\cap\HqS$; next, estimate the $\max$-norm of the tangential gradient of the solutions by using Propositions \ref{prp:interp}, \ref{prp:demlow-delta} and Lemma \ref{lem:limitGreen}; finally, compute the error estimates of the approximate solutions in the $\max$-norm.
To that end, the following result provides a pointwise error estimate for the tangential gradient of the approximate solution.
\begin{thm}
\label{thm:W1-NOPT}
Let $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ be an almost uniform Voronoï-Delaunay decomposition on $\mathbb{S}^{2}$. Assume that $f\in L^{\infty}(\S)$ satisfies \eqref{eq:compf} and that the unique weak solution $u$ of \eqref{eq:strongform} belongs to $W^{2,\infty}(\S)\cap\HqS$. Let $\overline{\mathcal{F}}_{ij}$ be a discrete flux \eqref{eq:dflux}, such that the discrete problem \eqref{eq:weakformv} has a unique solution $u_{h}\in\widetilde{U}_{h}$. Then, there exist positive constants $C$ and $h_{0}$ independent of $u$, such that for $0<h\leq h_{0}<1$,
\[
\|\nabla_{s}\varepsilon_{h}\|_{L^{\infty}(\S)}\leq C h |\ln h|\left(\|u\|_{W^{2,\infty}(\S)}+\|f\|_{L^{\infty}(\S)}\right),
\]
where $\varepsilon_{h}=u-u_{h}$.
\end{thm}
\begin{proof}
We will proceed via a duality argument. We fix $\mathrm{y}\in\mathbb{S}^{2}$ and consider a tangent unit vector $\vec{\mathrm{n}}_{\y,\Tgsi}\in\mathbb{T}_{\y,\S}$; from Proposition \ref{prp:demlow-delta}, there exists $\boldsymbol{\delta}^{\y,\Omega}$ supported in $\widetilde{\mathrm T}_{ijk}$. Let $\Gyd_{1}$ be a discrete Green's function satisfying \eqref{eq:compaticondG} and the variational problem \eqref{eq:weakformGd1}. Also, we consider the finite element approximation $\Gyd_{1,h}$ as the unique solution of \eqref{eq:weakformGh}. From the triangle inequality, we have
\begin{equation}
\label{eq:MW1}
\begin{split}
\left|\nabla_{s}(u-u_{h})(\mathrm{y})\cdot\vec{\mathrm{n}}_{\y,\Tgsi}\right|&\leq \left|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))(\mathrm{y})\cdot\vec{\mathrm{n}}_{\y,\Tgsi}\right|\\
&\quad+\left|\nabla_{s}(\widetilde{\Pi}_{h}(u)-u_{h})(\mathrm{y})\cdot\vec{\mathrm{n}}_{\y,\Tgsi}\right|\\
&\preceq\|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))\|_{L^{\infty}(\S)}\\
&\quad+\left|\int_{\mathbb{S}^{2}}(\widetilde{\Pi}_{h}(u)-u_{h})(\mathrm{x})\nabla_{s}\cdot\boldsymbol{\delta}^{\y,\Omega}(\mathrm{x}) ds(\mathrm{x})\right|\\
&\preceq \|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))\|_{L^{\infty}(\S)}+\left|\mathcal{A}(\widetilde{\Pi}_{h}(u)-u_{h},\Gyd_{1})\right|\\
&\preceq \|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))\|_{L^{\infty}(\S)}+\left|\mathcal{A}(\widetilde{\Pi}_{h}(u)-u_{h},\Gyd_{1}-\Gyd_{1,h})\right|\\
&\quad +\left|\mathcal{A}(\widetilde{\Pi}_{h}(u)-u,\Gyd_{1,h})\right|+\left|\mathcal{A}(u-u_{h},\Gyd_{1,h})\right|\\
& = I_{1}+I_{2}+I_{3}+I_{4}.
\end{split}
\end{equation}
About $I_{1}$, applying Proposition \ref{prp:interp}, we have
\begin{equation}
\label{eq:W1I1}
I_{1}=\|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))\|_{L^{\infty}(\S)}\leq Ch\|u\|_{W^{2,\infty}(\S)}.
\end{equation}
For $I_{2}$, from \eqref{eq:weakformGh}, we obtain
\begin{equation}
\label{eq:W1I2}
I_{2}=\left|\mathcal{A}(\widetilde{\Pi}_{h}(u)-u_{h},\Gyd_{1}-\Gyd_{1,h})\right|=0.
\end{equation}
For $I_{3}$, using the continuity of $\mathcal{A}(\cdot,\cdot)$, Proposition \ref{prp:interp}, the triangle inequality and equations \eqref{eq:G1} and \eqref{eq:G2} from Lemma \ref{lem:limitGreen}, we have
\begin{equation}
\label{eq:W1I3}
\begin{split}
I_{3}&\leq C\|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))\|_{L^{\infty}(\S)}\|\nabla_{s}\Gyd_{1,h}\|_{L^{1}(\mathbb{S}^{2})}\\
&\leq C\|\nabla_{s}(u-\widetilde{\Pi}_{h}(u))\|_{L^{\infty}(\S)}\left(\|\nabla_{s}(\Gyd_{1}-\Gyd_{1,h})\|_{L^{1}(\mathbb{S}^{2})}+\|\nabla_{s}\Gyd_{1}\|_{L^{1}(\mathbb{S}^{2})}\right)\\
&\leq Ch(1+|\ln h|)\|u\|_{W^{2,\infty}(\S)}.
\end{split}
\end{equation}
Now, about $I_{4}$, using the linearity of $\mathcal{A}(\cdot,\cdot)$ and adding and subtracting the total fluxes $\overline{\A}_{h}(\cdot,\cdot)$ and $\widetilde{\A}_{h}(\cdot,\cdot)$, we obtain
\begin{equation}
\label{eq:W1I4}
\begin{split}
I_{4}&\leq \left|\mathcal{A}(u,\Gyd_{1,h})-\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{1,h}))\right|+\left|\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{1,h}))-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{1,h}))\right|\\
&\quad +\left|\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{1,h}))-\mathcal{A}(u_{h},\Gyd_{1,h})\right|\\
&= I_{4,1}+I_{4,2}+I_{4,3}.
\end{split}
\end{equation}
For $I_{4,1}$, applying Lemmas \ref{lem:mainfNOPT} and \ref{lem:limitGreen} for $f\in L^{\infty}(\S)$ and $\Gyd_{1,h}\in\widetilde{U}_{h}$, we find that
\begin{equation}
\label{eq:W1I41}
\begin{split}
I_{4,1}&=\left|(f,\Gyd_{1,h}-\widetilde{\mathrm{I}}_{h}(\Gyd_{1,h}))\right| \leq Ch\|f\|_{L^{\infty}(\S)}\|\nabla_{s}\Gyd_{1,h}\|_{L^{1}(\S)}\\
&\leq Ch(1+|\ln h|)\|f\|_{L^{\infty}(\S)}.
\end{split}
\end{equation}
For $I_{4,2}$, by using Lemma \ref{lem:AhsAhd} and collecting the inequalities \eqref{eq:G1} and \eqref{eq:G2} from Lemma \ref{lem:limitGreen}, we arrive at
\begin{equation}
\label{eq:W1I42}
\begin{split}
I_{4,2}\leq Ch^{2}\|\nabla_{s} u_{h}\|_{L^{\infty}(\S)}\|\nabla_{s}\Gyd_{1,h}\|_{L^{1}(\S)}\leq Ch^{2}(1+|\ln h|)\|\nabla_{s} u_{h}\|_{L^{\infty}(\S)}.
\end{split}
\end{equation}
Similarly, for $I_{4,3}$, from Lemma \ref{lem:AAhs} and Lemma \ref{lem:limitGreen}, we obtain
\begin{equation}
\label{eq:W1I43}
I_{4,3}\leq Ch^{2}\|\nabla_{s}u_{h}\|_{L^{\infty}(\S)}\|\nabla_{s}\Gyd_{1,h}\|_{L^{1}(\S)}\leq Ch^{2}(1+|\ln h|)\|\nabla_{s}u_{h}\|_{L^{\infty}(\S)}.
\end{equation}
Notice that, by the triangle inequality, we obtain
\begin{equation}
\label{eq:W1Iprp}
\|\nabla_{s} u_{h}\|_{L^{\infty}(\S)}\leq \|\nabla_{s}(u-u_{h})\|_{L^{\infty}(\S)}+\|\nabla_{s} u\|_{L^{\infty}(\S)}.
\end{equation}
Thus, gathering all the estimates \eqref{eq:W1I41}--\eqref{eq:W1Iprp} into \eqref{eq:W1I4}, it follows that
\begin{equation}
\label{eq:W1I4m}
I_{4}\leq Ch(1+|\ln h|)\|f\|_{L^{\infty}(\S)}+Ch^{2}(1+|\ln h|)\left(\|\nabla_{s}(u-u_{h})\|_{L^{\infty}(\S)}+\|\nabla_{s} u\|_{L^{\infty}(\S)}\right).
\end{equation}
Finally, combining \eqref{eq:W1I4m} with \eqref{eq:MW1}--\eqref{eq:W1I3}, for $h_{0}\in\mathbb{R}_{+}$ such that $0<h\leq h_{0}<1$, and taking the maximum over $\mathrm{y}\in\mathbb{S}^{2}$, we find
\[
\|\nabla_{s}(u-u_{h})\|_{L^{\infty}(\S)}\leq Ch|\ln h|\left(\|u\|_{W^{2,\infty}(\S)}+\|f\|_{L^{\infty}(\S)}\right),
\]
which leads to the desired result.
\end{proof}
Now, we show error estimates for the approximate solution in the $\max$-norm.
\begin{thm}
\label{thm:max-NOPT}
Suppose that the assumptions of Theorem \ref{thm:W1-NOPT} hold. Then, there exist positive constants $C$ and $h_{0}$, independent of $u$, such that for $0<h\leq h_{0}<1$,
\[
\|\varepsilon_{h}\|_{L^{\infty}(\S)}\leq C h |\ln h|\left(\|u\|_{W^{2,\infty}(\S)}+\|f\|_{L^{\infty}(\S)}\right),
\]
where $\varepsilon_{h}=u-u_{h}$.
\end{thm}
\begin{proof}
We proceed similarly to Theorem \ref{thm:W1-NOPT}. We fix $\mathrm{y}\in\mathbb{S}^{2}$; from Proposition \ref{prp:demlow-delta}, there exists a smooth function $\delta^{\y}$ supported in $\widetilde{\mathrm T}_{ijk}\in\widetilde{\Th}$. Let $\Gyd_{0}$ be a discrete Green's function satisfying \eqref{eq:compaticondG} and the variational problem \eqref{eq:weakformGd0}. We also consider the finite element approximation $\Gyd_{0,h}$ as the unique solution of problem \eqref{eq:weakformGh}. By the triangle inequality, we have
\begin{equation}
\label{eq:maxM}
\begin{split}
\left|(u-u_{h})(\mathrm{y})\right|&\leq \left|(u-\widetilde{\Pi}_{h}(u))(\mathrm{y})\right|+\left|(\widetilde{\Pi}_{h}(u)-u_{h})(\mathrm{y})\right|\\
& \preceq \|u-\widetilde{\Pi}_{h}(u)\|_{L^{\infty}(\S)}+\left|\int_{\mathbb{S}^{2}}(\widetilde{\Pi}_{h}(u)-u_{h})(\mathrm{x})\delta^{\y}(\mathrm{x})ds(\mathrm{x})\right|\\
& \preceq \|u-\widetilde{\Pi}_{h}(u)\|_{L^{\infty}(\S)}+\left|\mathcal{A}(\widetilde{\Pi}_{h}(u)-u_{h},\Gyd_{0})\right|\\
& \preceq \|u-\widetilde{\Pi}_{h}(u)\|_{L^{\infty}(\S)}+\left|\mathcal{A}(\widetilde{\Pi}_{h}(u)-u_{h},\Gyd_{0}-\Gyd_{0,h})\right|\\
& \quad +\left|\mathcal{A}(\widetilde{\Pi}_{h}(u)-u,\Gyd_{0,h})\right|+\left|\mathcal{A}(u-u_{h},\Gyd_{0,h})\right|\\
& = I_{1}+I_{2}+I_{3}+I_{4}.
\end{split}
\end{equation}
For $I_{1}$, from Proposition \ref{prp:interp}, we have
\begin{equation}
\label{eq:maxI1}
I_{1}=\|u-\widetilde{\Pi}_{h}(u)\|_{L^{\infty}(\S)}\leq C h^{2}\|u\|_{W^{2,\infty}(\S)}.
\end{equation}
For $I_{2}$, from \eqref{eq:weakformGh}, we have
\begin{equation}
\label{eq:maxI2}
I_{2}=\mathcal{A}(\widetilde{\Pi}_{h}(u)-u_{h},\Gyd_{0}-\Gyd_{0,h})=0.
\end{equation}
For $I_{3}$, using the continuity of $\mathcal{A}(\cdot,\cdot)$, Proposition \ref{prp:interp} and Lemma \ref{lem:limitGreen} with \eqref{eq:G3}, we obtain
\begin{equation}
\label{eq:maxI3}
I_{3}=\left|\mathcal{A}(\widetilde{\Pi}_{h}(u)-u,\Gyd_{0,h})\right|\leq Ch|\ln h|^{1/2}\|u\|_{W^{2,\infty}(\S)}.
\end{equation}
About $I_{4}$, analogously to Theorem \ref{thm:W1-NOPT}, by using the linearity of $\mathcal{A}(\cdot,\cdot)$ and the total fluxes $\widetilde{\A}_{h}(\cdot,\cdot)$ and $\overline{\A}_{h}(\cdot,\cdot)$, we have
\begin{equation}
\label{eq:maxI4}
\begin{split}
I_{4}&\leq \left|\mathcal{A}(u,\Gyd_{0,h})-\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{0,h}))\right|+\left|\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{0,h}))-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{0,h}))\right|\\
& \quad+\left|\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{0,h}))-\mathcal{A}(u_{h},\Gyd_{0,h})\right|\\
& = I_{4,1}+I_{4,2}+I_{4,3}.
\end{split}
\end{equation}
For $I_{4,1}$, by using Lemmas \ref{lem:mainfNOPT} and \ref{lem:limitGreen}, we find
\begin{equation}
\label{eq:maxI41}
I_{4,1}=\left|\mathcal{A}(u,\Gyd_{0,h})-\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{0,h}))\right|=\left|(f,\Gyd_{0,h}-\widetilde{\mathrm{I}}_{h}(\Gyd_{0,h}))\right|\leq Ch|\ln h|\|f\|_{L^{\infty}(\S)}.
\end{equation}
For $I_{4,2}$, Lemma \ref{lem:AhsAhd} yields
\[
I_{4,2}=\left|\overline{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{0,h}))-\widetilde{\A}_{h}(u_{h},\widetilde{\mathrm{I}}_{h}(\Gyd_{0,h}))\right|\leq Ch^{2}|\ln h|\|\nabla_{s} u_{h}\|_{L^{\infty}(\S)}.
\]
Applying Theorem \ref{thm:W1-NOPT}, there exists $h_{0}\in\mathbb{R}_{+}$ such that for all $0<h\leq h_{0}$, we have
\[
\begin{split}
\|\nabla_{s} u_{h}\|_{L^{\infty}(\S)}\leq Ch|\ln h|\left(\|u\|_{W^{2,\infty}(\S)}+\|f\|_{L^{\infty}(\S)}\right)+\|\nabla_{s} u\|_{L^{\infty}(\S)}.
\end{split}
\]
Then
\begin{equation}
\label{eq:maxI42}
I_{4,2}\leq Ch^{2}|\ln h|\|u\|_{W^{2,\infty}(\S)}.
\end{equation}
As for $I_{4,3}$, using Lemma \ref{lem:AhsAhd}, we have
\begin{equation}
\label{eq:maxI43}
I_{4,3}\leq Ch^{2}|\ln h|\|u\|_{W^{2,\infty}(\S)}.
\end{equation}
Consequently, for $h>0$ small enough and gathering \eqref{eq:maxI4} with \eqref{eq:maxI41}--\eqref{eq:maxI43}, we have
\begin{equation}
\label{eq:maxI4m}
I_{4}\leq Ch|\ln h|\|f\|_{L^{\infty}(\S)}.
\end{equation}
Finally, combining \eqref{eq:maxM} with \eqref{eq:maxI1}--\eqref{eq:maxI3} and \eqref{eq:maxI4m}, we find
\[
|(u-u_{h})(\mathrm{y})|\leq C h|\ln h|\left(\|u\|_{W^{2,\infty}(\S)}+\|f\|_{L^{\infty}(\S)}\right).
\]
Therefore, taking the maximum over $\mathrm{y}\in\mathbb{S}^{2}$ yields the expected result.
\end{proof}
To end this section, we obtain an additional estimate, now using an SCVT Voronoï-Delaunay decomposition.
\begin{thm}
Let $\widetilde{\mathcal{V}}_{h}=\{\x_{i},\widetilde{\mathrm{V}}_{i}\}_{i=1}^{N}$ be an almost uniform Spherical Centroidal Voronoï-Delaunay decomposition (SCVT) on $\mathbb{S}^{2}$. Assume also that $f\in H_{0}^{1}(\S)$ and that $u\in W^{2,\infty}(\S)\cap H^{2}(\S)\cap\HqS$ is the unique solution of \eqref{eq:strongform}. Let $\overline{\mathcal{F}}_{ij}$ be the discrete flux defined in \eqref{eq:dflux}, such that the discrete problem \eqref{eq:weakformv} has unique solution $u_{h}\in\widetilde{U}_{h}$. Then, there exist positive constants $C$ and $h_{0}$, independent of $u$, such that for $0<h\leq h_{0}<1$,
\[
\|\varepsilon_{h}\|_{L^{\infty}(\S)}\leq Ch\left(\|u\|_{H^{2}(\S)}+\|f\|_{H^{1}(\S)}\right),
\]
where $\varepsilon_{h}=u-u_{h}$.
\end{thm}
\begin{proof}
From the triangle inequality, Proposition \ref{prp:interp}, Lemma \ref{lem:propinverse} and Theorem \ref{thm:L2SCVT}, we obtain
\begin{align*}
\|u-u_{h}\|_{L^{\infty}(\S)}&\leq \|u-\widetilde{\Pi}_{h}(u)\|_{L^{\infty}(\S)}+\|\widetilde{\Pi}_{h}(u)-u_{h}\|_{L^{\infty}(\S)}\\
&\leq Ch^{2}\|u\|_{W^{2,\infty}(\S)}+Ch^{-1}\|\widetilde{\Pi}_{h}(u)-u_{h}\|_{L^{2}(\S)}\\
&\leq Ch^{2}\|u\|_{W^{2,\infty}(\S)}+Ch\left(\|u\|_{H^{2}(\S)}+\|f\|_{H^{1}(\S)}\right).
\end{align*}
For $h$ small enough, the proof is complete.
\end{proof}
\section{Numerical example and final remarks}
\label{sec:numericalexps}
This section illustrates an example of the finite volume approach for the Laplace--Beltrami operator using the recursive Voronoï-Delaunay decomposition. We consider three types of grids: the non-optimized grid (NOPT) and two of the most widely used grid optimizations in the literature \cite{miura2005comparison}, namely the Heikes and Randall optimized grids (HR$95$), proposed in \cite{heikes1995anumerical,heikes1995bnumerical}, and the \emph{Spherical Centroidal Voronoï~Tessellations} (SCVT) described by Du and collaborators in \cite{du2003constrained,du2003voronoi}. We consider SCVT grids with constant density function ($\rho=1$). In order to verify the error estimates, we use the example defined in \cite{heikes1995bnumerical}. The exact solution $u$ in geographic coordinates $(\phi,\theta)$ is defined as
\begin{equation}
\label{eq:ex1solu}
u(\phi, \theta)=\cos(\theta)\cos^{4}(\phi),
\end{equation}
and the forcing source as,
\begin{equation}
\label{eq:ex1solf}
\begin{split}
f(\phi,\theta)= -\cos(\theta)\cos^{2}(\phi)[\cos^{2}(\phi)-4\sin^{2}(\phi)\cos^{2}(\phi)\\
-12\cos^{2}(\phi)+16\cos^{4}(\phi)]/\cos^{2}(\phi),
\end{split}
\end{equation}
where $\phi\in[-\pi/2,\pi/2]$ is the latitude and $\theta\in[-\pi,\pi]$ is the longitude.
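As a sanity check of our own (not part of the original experiments), the forcing \eqref{eq:ex1solf} can be verified numerically against \eqref{eq:ex1solu}. In the sketch below the prefactor of the bracket is read as $\cos^{2}(\phi)$, and the sign convention $\Delta_{s}u=f$ implicit in the printed formulas is assumed; second-order finite differences then confirm the identity at sample points.

```python
import math

def u(phi, theta):
    # exact solution: u = cos(theta) * cos^4(phi)
    return math.cos(theta) * math.cos(phi) ** 4

def f(phi, theta):
    # forcing term, with the bracket prefactor read as cos^2(phi)
    # (assumption: the prefactor is cos^2(phi), not cos^2(theta);
    # sign convention assumed here is Laplace-Beltrami(u) = f)
    c, s = math.cos(phi), math.sin(phi)
    return (-math.cos(theta) * c ** 2
            * (c ** 2 - 4 * s ** 2 * c ** 2 - 12 * c ** 2 + 16 * c ** 4)
            / c ** 2)

def laplace_beltrami(g, phi, theta, h=1e-4):
    """Second-order finite-difference Laplace-Beltrami operator in
    geographic coordinates (phi latitude, theta longitude):
    (1/cos phi) d/dphi(cos phi dg/dphi) + (1/cos^2 phi) d^2 g/dtheta^2."""
    def flux(p):
        # cos(p) * dg/dphi at latitude p (central difference)
        return math.cos(p) * (g(p + h, theta) - g(p - h, theta)) / (2 * h)
    term_phi = (flux(phi + h) - flux(phi - h)) / (2 * h) / math.cos(phi)
    term_theta = ((g(phi, theta + h) - 2 * g(phi, theta) + g(phi, theta - h))
                  / h ** 2 / math.cos(phi) ** 2)
    return term_phi + term_theta
```

With this reading, $f$ simplifies algebraically to $\cos(\theta)\left(15\cos^{2}(\phi)-20\cos^{4}(\phi)\right)$, and at interior points the finite-difference Laplace--Beltrami of $u$ agrees with $f$ to the accuracy of the stencil.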
\begin{figure}[!h]
\centering
\subfloat[NOPT]{
\centering
\includegraphics[scale=0.38]{NOPT_ex.png}
}
\\
\subfloat[SCVT]{
\centering
\includegraphics[scale=0.38]{SCVT_ex.png}
}
\\
\subfloat[HR$95$]{
\centering
\includegraphics[scale=0.38]{HR95_ex.png}
}
\caption{\label{fig:solnorm} Errors $\varepsilon_{h}$ and numerical convergence rates $CR$ in the finite volume approximation for problem \eqref{eq:ex1solf} with exact solution \eqref{eq:ex1solu} in $H^{1}$, $L^{2}$, $\max$ and $W^{1,\infty}$-norm using grids NOPT, SCVT and HR$95$ with different refinement levels.}
\end{figure}
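Before discussing the results, we sketch how the SCVT grids above can be constructed. The snippet below is an illustration of ours, not the grid generator used for the experiments (which follows \cite{du2003constrained,du2003voronoi}): a constant-density SCVT ($\rho=1$) is approximated by Monte-Carlo Lloyd iteration on the sphere, sampling $\mathbb{S}^{2}$ uniformly, assigning each sample to its nearest generator, and replacing each generator by the normalized centroid of its Voronoï cell.

```python
import math
import random

def normalize(p):
    """Project a 3-vector onto the unit sphere."""
    n = math.sqrt(sum(c * c for c in p))
    return tuple(c / n for c in p)

def lloyd_scvt(generators, n_samples=20000, n_iters=20, seed=1):
    """Monte-Carlo Lloyd iteration for a constant-density SCVT:
    sample S^2 uniformly, assign each sample to its nearest generator,
    and replace each generator by the re-projected centroid of its cell."""
    rng = random.Random(seed)
    gens = [normalize(g) for g in generators]
    for _ in range(n_iters):
        sums = [[0.0, 0.0, 0.0] for _ in gens]
        counts = [0] * len(gens)
        for _ in range(n_samples):
            # uniform point on S^2: normalize an isotropic Gaussian vector
            x = normalize((rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)))
            # nearest generator: maximal dot product = minimal geodesic distance
            i = max(range(len(gens)),
                    key=lambda k: sum(a * b for a, b in zip(x, gens[k])))
            for c in range(3):
                sums[i][c] += x[c]
            counts[i] += 1
        gens = [normalize(s) if ct > 0 else g
                for s, ct, g in zip(sums, counts, gens)]
    return gens
```

For two generators the fixed point is an antipodal pair (two hemispherical cells); with many generators the iteration tends toward the nearly uniform Voronoï grids used here.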
Approximate solutions were obtained using the finite volume scheme \eqref{eq:FVscheme}. Figure \ref{fig:solnorm} shows error estimates and convergence rates of the approximate solution of the problem \eqref{eq:ex1solf} in $H^{1}$, $L^{2}$, $W^{1,\infty}$ and $\max$-norm using three types of grids and different refinement levels. The numerical convergence rate $CR$ with respect to the norm $\|\cdot\|_{\star}$ is given as
\[
CR=\frac{\left|\ln\|\varepsilon_{n}\|_{\star}-\ln\|\varepsilon_{n-1}\|_{\star}\right|}{\ln 2},\quad\mbox{for }n=2,\dots,N,
\]
where $\varepsilon_{n}=u-u_{h}$ denotes the error at the $n$-th refinement level.
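For reference, the rate $CR$ above can be computed directly from the measured error norms on successive refinement levels; the helper below is illustrative (not the authors' code) and assumes the grid parameter $h$ is approximately halved from one level to the next, as in the recursive Voronoï-Delaunay grids.

```python
import math

def convergence_rates(errors):
    """Numerical convergence rates between consecutive refinement levels,
    CR_n = |ln e_n - ln e_{n-1}| / ln 2, assuming h is halved per level."""
    return [abs(math.log(e_next) - math.log(e_prev)) / math.log(2)
            for e_prev, e_next in zip(errors, errors[1:])]
```

For instance, `convergence_rates([1e-1, 2.5e-2, 6.25e-3])` returns rates close to $2$, i.e., quadratic order.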
Firstly, using the NOPT grid, we observe that the numerical convergence rate is approximately ${\mathcal O}(h)$ in the $H^{1}$-norm, matching the theoretical convergence rate predicted in Theorem \ref{thm:H1-NOPT}. Analogously, from Theorem \ref{thm:W1-NOPT}, the theoretical prediction for the solution error is ${\mathcal O}(h|\ln h|)$ in the $W^{1,\infty}$-norm, although the logarithmic factor is not detected numerically.
Furthermore, the numerical convergence rates for problem \eqref{eq:ex1solf} indicate that the errors in the $L^{2}$ and $\max$-norms tend to quadratic order as the NOPT grid is refined. In general, the difference between the right-hand sides of the variational problems \eqref{eq:weakform} and \eqref{eq:weakformv}, when using a non-optimized Voronoï-Delaunay decomposition, is the dominant factor limiting the provable convergence rate to roughly ${\mathcal O}(h)$, as predicted in Theorems \ref{thm:H1-NOPT} and \ref{thm:max-NOPT}, respectively. To our knowledge, there are no existing analytical results that confirm the quadratic order in these norms for general Voronoï-Delaunay tessellations.
However, observe that the numerical convergence rate for SCVT is approximately ${\mathcal O}(h^{2})$ in the $L^{2}$-norm, as had been shown by Du in \cite{du2005finite}.
Here, we modify the proof by using a minimal regularity requirement on the exact solution and highlight that the quadratic-order error estimates depend on the geometric criterion of the SCVT. The numerical convergence rate in the $\max$-norm is also ${\mathcal O}(h^{2})$. This case is under study and results will be presented elsewhere. Note also that the best error behavior is observed in the $H^{1}$-norm on both optimized grids; thus, the approach exhibits a degree of superconvergence for gradients. To date, there seem to be no theoretical criteria that explain these behaviors, which poses a good challenge for future research. Extending the analysis to HR$95$ grids is the subject of our current investigation.
\section*{Acknowledgements}
This work is dedicated to the memory of Saulo R. M. Barros, who participated in this research but tragically passed away in July 2021, prior to its conclusion. The work presented here was supported by FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo) through grant 2016/18445-7 and also by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES), Finance Code 001.
\bibliographystyle{plain}
When an impure white dwarf (WD) or neutron star crust (NSC) is slowly cooled from above its melting temperature, one expects the extra compositional degrees of freedom are taken advantage of to form crystals which are more efficiently packed than phase-separated bcc lattices. Indeed, several investigators have considered non-Bravais and multicomponent lattices as the possible ground state of astrophysical compact objects. One of the earliest was \citet{dys71}, who suggested a rock salt structure of Fe and He nuclei might be stable. More recently, \citet{koz12} studied cesium-chloride and magnesium-diboride structures within the Coulomb crystal model. \citet{kob14} have argued that the ground state structure above neutron drip density may be similar to that of the displacive ferroelectric BaTiO$_3$, due to the symmetry-lowering effect of interstitial neutrons on a bcc lattice of nuclei. A related line of inquiry concerns the freezing of multicomponent ion plasmas from the liquid state. See \cite{med10} for a semi-analytic calculation and references to earlier numerical methods. One such method -- classical molecular dynamics -- has been used extensively to simulate a multicomponent plasma with the \citet{gup07} composition \citep{hor07, hor09pre, hor09prc}. The latter of these works features a 14-component, $\approx\,$28,000 particle system which was annealed for $\sim\,$$10^7$ phonon cycles below the melting temperature. A dominantly Se ($Z$=34) bcc lattice was formed, with small-$Z$ nuclei occupying interstitial positions and larger-$Z$ nuclei acting as substitutional impurities. In addition, there was a tendency for small-$Z$ nuclei to cluster together, forming an effective large-$Z$ particle. In a different simulation where annealing was again carried out for $\sim\,$$10^7$ phonon cycles \citep{hor09pre}, phase-separated regions (microcrystals) formed in the solid phase. One phase was depleted in small-$Z$ nuclei, while another was enriched.
Simulated annealing is an excellent means for directly modeling the dynamics of crystalline systems, but it often cannot access the very long timescales associated with the nucleation and growth of complex multicomponent crystal phases, due to the exponentially slow dynamics of surmounting reaction barriers against the complex cooperative rearrangements needed to form such crystals. For example, terrestrial carbon steels, which typically have only 2--3 alloying elements, must be annealed for a minimum of $\sim\,$$10^{13}$ phonon cycles ($\sim\,$10 seconds) to find their ground state \citep{asm77}. Alternative methods including random structure searching \citep{pic11}, particle swarm optimization \citep{wan10}, and genetic/evolutionary search techniques \citep{oga06, abr08, wu14} have been applied with great success to this ``crystal structure problem,'' but have not yet been brought to bear at the extreme conditions of compact astrophysical objects. When coupled with an appropriate description of the (fully pressure-ionized) microphysics, such methods could provide a means to efficiently search for new crystal structures in multicomponent WDs and NSCs, complementing the existing simulated annealing work.
The existence of lower-symmetry (i.e. non-cubic) and/or multinary phases within WDs and NSCs could have several astrophysical implications. Most astrophysical calculations assume the material is a bcc polycrystal with grain sizes small compared to the other macroscopic physical scales in the problem. Therefore, for example, the rank-four elastic tensor is averaged and smoothed to produce a scalar shear modulus relating the strain response to an applied stress (one popular averaging procedure is described by \citet{oga90}). The possibility of multiple, complicated lattice structures, and preferential alignment with e.g. the local magnetic field, would necessitate computing the full elastic tensor. Anisotropies, soft phonon modes, and elastic instabilities such as the incipient ones described in \citet{eng15} could have significant effects on elasticity-related astrophysical observables such as magnetar flares \citep{per11}, related quasi-periodic oscillations \citep{isr05}, and possibly some pulsar glitches \citep{cha08}. It could also significantly affect the future observability of gravitational-wave emission, both in the context of magnetar flares and continuous waves \citep{joh13}. Grain/phase domain boundaries would lead to preferred stress-failure locations, and on large scales might affect dissipation of modes involving the crust such as torsion or shear modes (\citet{isr05}, used to explain quasi-periodic oscillations after magnetar flares) and $r$-modes (similar to the ``crust freezing" scenario in \citet{lin00}). These again would have implications both for electromagnetic and gravitational wave observations. Another kind of implication has to do with the composition of WD debris disks and planetary systems, inferred from metal abundances in the accreting WD's atmosphere \citep{bar12, raf11}. Entering into this calculation is the settling rate of the high-$Z$ metals. 
In principle, this rate depends on the buoyancy of the settling species' host phase(s) as well as the microphysics involved in ordinary grain growth processes, namely interfacial energies and grain boundary mobilities \citep{kri02}.
In this work we carry out a systematic search for the ground state crystal structure of three-component systems at conditions relevant to WDs and NSCs. The main goals are 1) to identify possible new phases through global search of the multicomponent crystal structure phase diagram, and 2) to determine in what contexts those phases might appear in WDs and NSCs through sample applications to layering stability. To the first end, we employ a popular genetic search algorithm. The lowest enthalpy structures found by the genetic search are included in bulk phase diagram calculations, which reveal five new complicated binary and ternary crystal structures, four having sub-cubic lattice symmetry. To the second end, we demonstrate a self-consistent coupling of the phase stability calculation with the basic equations of a self-gravitating, hydrostatically stable white dwarf. Several compositional instances of the newly found binary phases show up as finite thickness ``interphases" between pure bcc strata in cold, He-C-O and C-O-Ne white dwarfs. Additional binary phases make a transient appearance in nonequilibrium settling calculations, as host phases for the settling species.
\section{bulk phase diagram calculation}
This section describes a global search of composition and structure, using five ternary systems of nuclei thought to be relatively prevalent in WD or NSC matter, and covering a range of distinct ``crystal chemistries.'' The starting point is an effective Hamiltonian for completely pressure-ionized matter. We work within linear response theory -- see Section V of \citet{pol73}, and \citet{bai02}, for example. In this framework, a system of point nuclei (with charges $Z_ie$ and static positions $\mathbf{r}_i$) immersed in a polarizable, charge compensating background of electrons has kinetic plus electrostatic potential energy
\begin{eqnarray}
E = T_0 + \frac{e^2}{2}&\Bigg{\{}& \sum_{i\neq j}Z_iZ_j\int\frac{d^3k}{(2\pi)^3}\frac{4\pi e^{i\mathbf{k}\cdot(\mathbf{r}_i-\mathbf{r}_j)}}{k^2\epsilon(\mathbf{k})} \label{E}\\
&+& \sum_iZ_i^2\int\frac{d^3k}{(2\pi)^3}\frac{4\pi}{k^2}\bigg[\frac{1}{\epsilon(\mathbf{k})}-1\bigg] \nonumber\\
&-& \sum_{i,\,j}Z_iZ_j\;\frac{1}{V}\int d^3r\int\frac{d^3k}{(2\pi)^3}\frac{4\pi e^{i\mathbf{k\cdot r}}}{k^2\epsilon(\mathbf{k})} \Bigg{\}},\nonumber
\end{eqnarray}
(compare \citet{bai02} Equations 1-3). It is not immediately obvious that the above Hamiltonian includes the leading order correction to the kinetic energy $T_0$ of the uniform electron gas (it does). This can be seen by expanding the kinetic energy in powers of the density nonuniformity correction: $T=T_0 + \frac{e^2}{2}\int d^3r\, d^3r'\, \delta n_e(\mathbf{r})G(\mathbf{r-r'})\delta n_e(\mathbf{r}') + \dots$ and keeping only the first two terms such that a total energy minimization identifies $-G(\mathbf{k})^{-1}$ as the static response function of the uniform gas. Equation \ref{E} also contains all Coulomb interactions except for the infinite nuclear self energies. With a choice of the simple Thomas-Fermi dielectric function $\epsilon_{TF}(\mathbf{k})=1+k_0^2/k^2$, the integrals are standard ones and the Hamiltonian reduces to
\begin{eqnarray}
E_{TF} = T_0 + \frac{e^2}{2}\Bigg{\{} &-&k_0\sum_iZ_i^2 - \frac{4\pi}{k_0^2V}\sum_{i,\,j}Z_iZ_j\nonumber\label{ETF}\\
&+& \sum_{i\neq j}\frac{Z_iZ_je^{-k_0|\mathbf{r}_i-\mathbf{r}_j|}}{|\mathbf{r}_i-\mathbf{r}_j|} \Bigg{\}}.
\end{eqnarray}
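For the reader's convenience: with $k^2\epsilon_{TF}(\mathbf{k})=k^2+k_0^2$, the reduction rests on the two standard integrals (obtainable by contour integration)
\begin{equation}
\int\frac{d^3k}{(2\pi)^3}\frac{4\pi e^{i\mathbf{k\cdot r}}}{k^2+k_0^2} = \frac{e^{-k_0r}}{r}, \qquad \int\frac{d^3k}{(2\pi)^3}\frac{4\pi}{k^2}\bigg[\frac{1}{\epsilon_{TF}(\mathbf{k})}-1\bigg] = -k_0,
\end{equation}
the first supplying the screened (Yukawa) pair interaction and, upon integration over $\mathbf{r}$, the $4\pi/k_0^2$ background term, and the second the self-energy term.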
In performing structural optimizations, one converges the energy by working with a supercell of volume $V_c$ and $N-1$ periodic copies thereof. If $\mathbf{R}$ is a primitive lattice vector (supercell translation vector) and $p,q$ index the basis, the total energy per supercell is written
\begin{eqnarray}
\frac{E_{TF}}{N} = \tau_0V_c + \frac{e^2}{2}\Bigg{\{} &-&k_0\sum_pZ_p^2 - \frac{4\pi}{k_0^2V_c}\sum_{p,\,q}Z_pZ_q\nonumber\label{ETF/N}\\
&+& {\sum_{\mathbf{R},\,p,\,q}}'\;\frac{Z_pZ_qe^{-k_0\mathscr{R}_{pq}}}{\mathscr{R}_{pq}} \Bigg{\}},
\end{eqnarray}
where $\mathscr{R}_{pq} = |\mathbf{R}+\mathbf{r}_p-\mathbf{r}_q|$ and the prime on the last sum indicates that terms with $\mathscr{R}_{pq}=0$ are excluded.
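To illustrate how quickly the screened sum in Equation \ref{ETF/N} converges in real space, the following Python sketch (a toy stand-in for the GULP machinery used in this work; the one-component bcc geometry, unit charges, and cutoffs are illustrative) evaluates the per-particle Yukawa sum by brute force:

```python
import numpy as np

def yukawa_sum_per_particle(a, k0, rcut):
    """Brute-force evaluation of (1/N) * sum'_{R,p,q} exp(-k0*r)/r over a
    one-component bcc lattice (conventional cubic cell edge a).  Charges Z
    and the e^2/2 prefactor of the text's Equation are omitted for clarity."""
    basis = a * np.array([[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])
    nmax = int(np.ceil(rcut / a)) + 1
    total = 0.0
    for i in range(-nmax, nmax + 1):
        for j in range(-nmax, nmax + 1):
            for k in range(-nmax, nmax + 1):
                R = a * np.array([i, j, k], dtype=float)
                for p in basis:
                    for q in basis:
                        r = np.linalg.norm(R + p - q)
                        if 1e-12 < r < rcut:   # the prime: skip the r = 0 term
                            total += np.exp(-k0 * r) / r
    return total / len(basis)

# Exponential screening makes the real-space sum converge rapidly: enlarging
# the cutoff from 16 to 24 screening lengths barely moves the result.
s1 = yukawa_sum_per_particle(a=1.0, k0=2.0, rcut=8.0)
s2 = yukawa_sum_per_particle(a=1.0, k0=2.0, rcut=12.0)
print(s1, s2)
```

This rapid convergence is what makes the $\approx20k_0^{-1}$ real-space cutoff quoted below practical.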
\pagebreak
For this work we use kinetic energy density
\begin{eqnarray}
\tau_0 &=& \frac{1}{8\pi^2}\frac{m_ec^2}{\lambda_e^3} \Big[ \frac{x^2(1+2x^2)}{\beta} - \ln\Big(x+\frac{x}{\beta}\Big)\Big] - m_ec^2n_e,\nonumber\label{tau0}\\
&&
\end{eqnarray}
and screening length
\begin{equation}
k_0^{-1} = \frac{\lambda_e}{2x}\sqrt{\frac{\pi\beta}{\alpha}}, \label{k0inverse}
\end{equation}
corresponding to the relativistic, degenerate gas. Here $\lambda_e=\hbar/m_ec$ is the reduced Compton wavelength, $x=p_F/m_ec = \lambda_e(3\pi^2n_e)^{1/3}$, $\beta=x/\sqrt{1+x^2}$, and $\alpha\approx1/137$ is the fine structure constant. The Thomas-Fermi description breaks down when the screening length localizes electrons to within their Compton wavelength; this occurs for $x\gtrsim10$ ($\rho\gtrsim10^9$ g/cc). Approaching this extreme relativistic limit, the ratio of screening length to Wigner-Seitz radius $r_s = (3\langle Z\rangle/4\pi n_e)^{1/3}$ tends to a constant (for a bcc lattice of Fe nuclei, the constant is 1.82), so we might anticipate that phase boundaries become stationary in this scale-invariant limit. Saturation of $k_0^{-1}/r_s$ at this small value is indicative of the over-screening predicted by the Thomas-Fermi model. Within these constraints however, the model has the advantage of being both reasonably accurate and computationally efficient, readily incorporated into a global search of crystal structure and composition.
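The quantities above can be checked directly from Equations \ref{tau0} \& \ref{k0inverse}. The following Python snippet (with rounded physical constants; the helper functions are ours, written for this check only) verifies the closed form of $\tau_0$ against numerical quadrature over the Fermi sphere, and the saturation of $k_0^{-1}/r_s$ near 1.82 for Fe:

```python
import numpy as np
from scipy.integrate import quad

ALPHA = 1.0 / 137.036      # fine structure constant
LAMBDA_E = 3.8616e-11      # reduced electron Compton wavelength [cm]
M_U = 1.6605e-24           # atomic mass unit [g]

def tau0_closed(x):
    """Closed-form kinetic energy density, in units m_e c^2 = lambda_e = 1."""
    beta = x / np.sqrt(1.0 + x**2)
    n_e = x**3 / (3.0 * np.pi**2)
    return (x**2 * (1.0 + 2.0 * x**2) / beta
            - np.log(x + x / beta)) / (8.0 * np.pi**2) - n_e

def tau0_quad(x):
    """Same quantity as a direct integral of (E(p) - m c^2) over the Fermi sphere."""
    integrand = lambda t: (np.sqrt(1.0 + t**2) - 1.0) * t**2
    return quad(integrand, 0.0, x)[0] / np.pi**2

def k0inv_over_rs(rho, Z, A):
    """Ratio of TF screening length to Wigner-Seitz radius at density rho [g/cc]."""
    n_e = rho * Z / (A * M_U)                           # electrons per cc
    x = LAMBDA_E * (3.0 * np.pi**2 * n_e)**(1.0 / 3.0)
    beta = x / np.sqrt(1.0 + x**2)
    k0inv = (LAMBDA_E / (2.0 * x)) * np.sqrt(np.pi * beta / ALPHA)
    rs = (3.0 * Z / (4.0 * np.pi * n_e))**(1.0 / 3.0)
    return k0inv / rs

# Closed form of tau0 agrees with brute-force quadrature:
print(tau0_closed(0.5), tau0_quad(0.5))
print(tau0_closed(5.0), tau0_quad(5.0))
# For Fe, k0^{-1}/r_s saturates near 1.82 in the extreme relativistic limit:
print(k0inv_over_rs(1e12, 26, 56))
```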
Structural optimizations using Equations \ref{ETF/N}--\ref{k0inverse} are conveniently carried out at constant volume -- one need only minimize a pairwise sum of effective Yukawa interactions, which converges rapidly in real space for $k_0^{-1}/r_s\sim1$. The simplicity of this method is conducive to a large basis, useful for studying interfaces, defects and failure mechanisms (see for example, \citet{hor09prl}). Because it will prove useful for the phase layering calculation described later, we choose instead to perform structural optimizations at constant external pressure $P$ and minimize the enthalpy. This can be accomplished using a modified version of the General Utility Lattice Program (GULP) \citep{gal03}, where the Yukawa interaction (available as a special case of the ``general" pair potential) is given the capability to handle a $V_c$-dependent screening length. We modify GULP's enthalpy per unit cell to $h_{TF}=E_{TF}/N+PV_c$, and the first strain derivatives of the enthalpy (sufficient for steepest descents and conjugate gradient methods) are accordingly modified to
\begin{eqnarray}
\frac{\partial h_{TF}}{\partial \epsilon_{\alpha\beta}} & = & \Big[(P-P_0)V_c + \frac{2\pi e^2(2-\beta^2)}{3k_0^2V_c}\sum_{p,\,q}Z_pZ_q\\
&+& \frac{e^2k_0(1+\beta^2)}{12}\sum_{\mathbf{R},\,p,\,q}\;Z_pZ_qe^{-k_0\mathscr{R}_{pq}}\Big]\delta_{\alpha\beta} \nonumber \\
& - & \frac{e^2}{2}{\sum_{\mathbf{R},\,p,\,q}}'\;\frac{Z_pZ_q\,\mathscr{R}_{pq}^{\alpha}\,\mathscr{R}_{pq}^{\beta}\,e^{-k_0\mathscr{R}_{pq}}}{\mathscr{R}_{pq}^2}\Big(k_0 + \frac{1}{\mathscr{R}_{pq}}\Big),\nonumber
\end{eqnarray}
where $P_0=n_e^2\partial (\tau_0/n_e)/\partial n_e$ is the kinetic pressure of the uniform electron gas. Derivatives of $h_{TF}$ with respect to GULP's remaining degrees of freedom (fractional basis coordinates) are not affected by $k_0 \to k_0(V_c)$.
We carry out ground-state structure searches using the evolutionary crystal structure prediction software XtalOpt r8.0 \citep{lon11}, together with GULP optimization.\footnote{
It is convenient to reinterpret XtalOpt and GULP's internally-consistent (eV, \AA, GPa) unit system as ($10^d$ eV, $10^{-d}$ \AA, $10^{4d}$ GPa), so that issues with numerical limits can be avoided. These codes were, of course, originally intended for Earth-condition materials! Useful choices of the integer $d$ include 2, 3 and 4. In this scheme, the relativity parameter appearing in Equations \ref{tau0} \& \ref{k0inverse} becomes $x=1.1946484\times10^{d-2}\;n_e^{1/3}$, and the prefactor in Equation \ref{tau0} becomes $m_ec^2/8\pi^2\lambda_e^3=1.1239083\times10^{11-4d}$.}
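The rescaled constants in the preceding footnote follow from $x=\lambda_e(3\pi^2n_e)^{1/3}$ with lengths measured in $10^{-d}$ \AA\ and energies in $10^d$ eV; a short Python check (using CODATA-rounded values for $\hbar c$ and $m_ec^2$):

```python
import numpy as np

HBAR_C = 1973.269804        # hbar * c  [eV Angstrom]
ME_C2 = 510998.95           # electron rest energy  [eV]
LAMBDA_E = HBAR_C / ME_C2   # reduced Compton wavelength  [Angstrom]

def rescaled_prefactors(d):
    """Prefactors of x = lambda_e (3 pi^2 n_e)^(1/3) and of the tau_0 equation
    in the (10^d eV, 10^-d Angstrom) unit system of the footnote."""
    x_pref = LAMBDA_E * (3.0 * np.pi**2)**(1.0 / 3.0) * 10.0**d
    tau_pref = ME_C2 / (8.0 * np.pi**2 * LAMBDA_E**3) / 10.0**(4 * d)
    return x_pref, tau_pref

x_pref, tau_pref = rescaled_prefactors(2)
print(x_pref)    # ~1.1946484 * 10^(d-2), i.e. ~1.1946 for d = 2
print(tau_pref)  # ~1.1239083 * 10^(11-4d), i.e. ~1.1239e3 for d = 2
```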
It is assumed the ground state is a polycrystalline mixture of stoichiometric compounds. We don't consider solution (alloy) phases, and we consider only a subset of possible stoichiometries. For a given ternary system of nuclei $A$-$B$-$C$, a constant pressure search is performed at $P=10^{11}$ and $10^{16}$ GPa for each of the nominally 125 stoichiometries $A_nB_mC_{\ell}$ where $n,m,\ell=0\dots4$. Search cells with a small number of particles $n+m+\ell$ are removed from the search program if they are submultiples of larger cells; thus there are 98 searches per ternary system, per pressure. Each of these searches is run out to at least 480 optimized, genetically-operated-on structures, except in the case of single-component searches, which are run out to at least 80 optimized structures (the first 20 seed structures are randomly generated). Lattice sums are done in real space with cutoff $\approx20k_0^{-1}$ and are expected to be converged to 7--8 digits. This level of accuracy is important, as we find enthalpies of competing structures can be the same out to 6 digits. Default XtalOpt search parameters are used throughout, and following the suggestions put forth in the XtalOpt implementation paper \citep{lon11}, we benchmark the search parameters by constructing Hartke plots for several relativistic screened-Coulomb systems (see Figure \ref{fig:hartke}). Hartke plots gauge the performance of a genetic search and help establish a stopping criterion.
\begin{figure}[h]
\plotone{hartkeplot.pdf}
\caption{
Hartke plots for search cells FeO$_3$C$_2$ (top) and Fe$_4$O$_4$C$_4$ (bottom). $y(x)$ is the enthalpy of the lowest enthalpy structure found within the range of structure numbers: zero to $x$. The initial 20 seed structures (randomly generated by XtalOpt) are not included, thus the Hartke plot contains information only about structures which have been genetically-operated-on. For each plot, 100 runs were made with identical parameters. In the case where all runs eventually found the same lowest enthalpy structure (top), the worst-best is the single run that took the longest to find it. In the case where not all runs found the same lowest enthalpy structure (bottom), the worst-best is the single run whose winning structure had the highest enthalpy. Best-best was the quickest to find the overall lowest enthalpy structure, and average-best is the average over all 100 runs. The Hartke lifetimes associated with the exponential fits to average-best (black dashed lines) are 35 (top) and 123 (bottom). The winning structure found in the FeO$_3$C$_2$ search appears in the bulk phase diagram, as the $\eta$ phase. The Fe$_4$O$_4$C$_4$ search has the most degrees of freedom out of any search cell in our program; its winning structure does not appear in the phase diagram. \label{fig:hartke}}
\end{figure}
Our choice of search duration, previously mentioned, is in part motivated by the ``Hartke lifetimes" found in these tests.
The lowest enthalpy structure found in each search is included in a bulk phase stability calculation, using Thermo-Calc software \citep{and02}. For a given set of $N_C+2$ state variables ($N_C$ being the number of components) Thermo-Calc finds the global minimum Gibbs free energy which lies on a plane tangent to the available phases' Gibbs energy surfaces. A phase diagram representable as a 2d plot is then constructed from the set of tangent planes found by varying any two of the state variables. (For a pedagogical reference to phase diagrams, see the book by \citet{hil07}). Since all the phases considered in this work are stoichiometric crystal structures, there is a simplification in that the phases' Gibbs energy surfaces are themselves points. Consequently, all phase regions in a phase diagram obtained by the above procedure must be 3-phase regions. If a structure appears in the equilibrium phase diagram at either $P=10^{11}$ or $10^{16}$ GPa, it is re-optimized at intermediate pressure decades to obtain the pressure dependence of the phase diagram. It is possible, though unlikely, that there are phases stable only over a narrow band of pressure which are missed by this procedure (the full search scheme described in the last paragraph was performed for the C-O-Fe system at several intermediate pressures and found no such ``missed'' phases, lending support to this approach).
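For stoichiometric phases, whose Gibbs energy surfaces are points, the tangent-plane construction is equivalent to selecting the vertices of the lower convex hull of the phases' (composition, Gibbs energy) points. A minimal binary illustration with scipy (the candidate phases and energies here are invented for the example, not taken from our searches):

```python
import numpy as np
from scipy.spatial import ConvexHull

# Candidate phases of a binary A-B system: (mole fraction of B, Gibbs energy
# per nucleus).  Energies are invented for illustration.
phases = {"A (bcc)": (0.0, 0.00), "A3B": (0.25, -0.05), "AB": (0.50, -0.02),
          "AB2": (2.0 / 3.0, -0.09), "B (bcc)": (1.0, 0.00)}
names = list(phases)
pts = np.array([phases[n] for n in names])

hull = ConvexHull(pts)
stable = set()
for eq, simplex in zip(hull.equations, hull.simplices):
    if eq[1] < -1e-12:          # facet's outward normal points down in energy:
        stable.update(simplex)  # it belongs to the lower (tangent-line) hull
stable_phases = sorted(names[i] for i in stable)
print(stable_phases)  # AB lies above the A3B--AB2 tie line, so it is unstable
```

The same construction with two independent composition variables (facets of a three-dimensional lower hull) is, in essence, what produces the three-phase regions of a ternary diagram.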
Five ternary systems of nuclei were selected for study: He-C-O, C-O-Ne, C-O-Fe, O-Fe-Se, and Fe-As-Se. (The single exception to the search program described above has to do with He-C-O: our real space method converges much more slowly in this system due to the larger $k_0^{-1}/r_s$, so the full 98 searches are carried out only at $P=10^{11}$ GPa). The first two ternary systems are relevant to WDs, likely including those which are type Ia supernovae (SNIa) progenitors \citep{she14}. The third may also be relevant to WDs having undergone a failed-detonation SNIa \citep{jor12}. The rationale for choosing the remaining two is that these particular nuclei are representative and/or prevalent among the \citet{gup07} abundances near neutron drip. Incorporating a full list of abundances ($\approx\,$17 species) would be intractable for the type of calculation we have described. Moreover, we can take a lesson from earth-condition crystals, which typically have only 1, 2, or 3 elements (sometimes 4). This appears to be due to general properties of crystal stability related to phase separation of complex unit cells: Basically, once a structure reaches a sufficient level of complexity that it can accommodate the special geometrical characteristics of its constituent atoms, it is disadvantageous to make the unit cell any larger (in the sense of adding more atoms), since that just reduces the amount of favorable repetition possible with a given number of atoms. Exploring more ternary combinations is also likely to give diminishing returns in the prediction of new structures. Roughly speaking, with a smooth and spherically-symmetric interaction such as the Yukawa potential, there are only four qualitatively different ternary combinations: one big $Z$ and two small $Z$s, two big and one small, all three mismatched, and all three similar. 
As long as the screening length regimes (characterized by $k_0^{-1}/r_s$) are not too different, one expects to see a continuity of structures formed by systems having similar relative $Z$s (or perhaps squared $Z$s), since there are no atomic shell effects that come into play. Another reason for studying the specific ternary systems mentioned above is that they cover at least three (arguably all four) of the qualitatively different combinations.
\section{bulk phase diagram results}
\begin{figure*}[t]
\vspace{\baselineskip}
\centering
\includegraphics[width=0.95\textwidth]{ternaryPDs.pdf}
\caption{$T=0$ bulk phase diagrams for relativistic screened-Coulomb systems. Each pair of vertically-aligned diagrams corresponds to a specific ternary system of nuclei, while the two rows give the pressure dependence. Across the five ternary systems investigated, no pressure dependence was found in the range $10^{12}$--$10^{16}$ GPa, despite the fact that the screening length $k_0^{-1}$ varies considerably over this range (from 1.36$r_s$ to 1.81$r_s$ for bcc Fe). Following from our assumptions described in the main text, the microstructure in a given triangular region is a polycrystalline mixture of stoichiometric compounds (phases). All distinct phases are labeled with Greek letters and explained in Table \ref{tab:structures}.
\vspace{\baselineskip}
\label{fig:ternary}}
\end{figure*}
While pressure-invariance of the $T=0$ phase diagram was anticipated in the extreme relativistic limit, it comes as some surprise that the pressure-independence persists well below this limit, nearly to the threshold for full pressure ionization. Figure \ref{fig:ternary} shows that for all five ternary systems studied, no $P$-induced phase transitions were found above $10^{12}$ GPa. For bcc Fe, this pressure corresponds to density $6.18\times10^5$ g/cc and screening length $k_0^{-1}/r_s=1.36$, or about 75 percent of the saturation value. In general, screening length on the order of the lattice spacing appears to be a requisite for $P$-driven phase transitions. Further supporting this conclusion is the fact that the He-C-O and C-O-Ne systems don't undergo any pressure-induced transitions in the range $10^{11}$--$10^{16}$ GPa; within that range, screening lengths in these small $Z$ systems are significantly larger than one lattice spacing.
Another finding is that combinations of nuclei with significantly mismatched $Z$s are much more conducive to efficient multicomponent packings than are systems where the $Z$s are fairly similar. For example, the Fe-As-Se system has an extremely simple low-pressure phase diagram: at any composition, the microstructure consists simply of phase-separated bcc crystallites. Multicomponent phases appear at high pressure, but they have the simple cesium chloride structure. In contrast, the C-O-Fe phase diagram is quite rich. Combining one large $Z$ and two smaller $Z$s results in a variety of binary and ternary crystal structures (enumerated in Table \ref{tab:structures}), all of which are more efficient (have a higher packing fraction) than phase-separated bcc lattices.
Both He-C-O and O-Fe-Se systems (two large, one small) feature all the same phases as C-O-Fe, except for the two ternary compounds which don't appear. While a continuity of structures appearing between these systems was anticipated, it is striking that at high pressures the two phase diagrams are identical. Close similarity is also noted between the Fe-As-Se and C-O-Ne systems which both consist of three similar $Z$s. The outlier in this comparison is the nontrivial C-Ne binary structure, described in Table \ref{tab:structures}. These observations are consistent with the idea that it is the combination of relative $Z$s, and not of absolute $Z$s, that is important in determining the high-pressure phase diagram.
\begin{table*}[t]
\centering
\caption{Selected compounds appearing in the C-O-Fe and C-O-Ne bulk phase diagrams, as indicated in Figure \ref{fig:ternary}. All numerical values given here correspond to $P=10^{16}$ GPa. Relative proton density is a measure of geometrical packing efficiency; relative baryon density includes the nongeometrical effect of neutron fractions. The reference phase for these relative densities is $\alpha$-Fe, except in the case of $\theta$-Ne$_2$C$_4$, for which the reference phase is $\alpha$-Ne. Renderings have grey C, red O, green Fe, and violet Ne with sphere volume proportional to the nuclear charge $Z$. In the $\delta$ and $\epsilon$ renderings, bonds indicate Fe-Fe first nearest neighbors. If the space group is listed instead of a specific crystal structure, the unit formula gives the composition of the search cell in which the structure was found, not necessarily that of the primitive cell. pdb files of the structures (for all compositional instances) are included as supplementary materials in the online version.} \label{tab:structures}
\begin{tabular}{p{1.0cm}p{1.4cm}p{3.2cm}lllp{4.5cm}}
\hline & & crystallographic & relative & relative & density relative \\
phase & unit & space group & proton & baryon & to bulk, phase- & views along (or slightly oblique to)\\
label & formula & or structure & density & density\footnote{Using $^{12}$C, $^{16}$O, $^{20}$Ne, and $^{56}$Fe} & separated bcc\footnote{Defined as the sum of cell volumes after phase-separation into bulk bcc phases, divided by the original cell volume\\} & some high-symmetry directions \\ \hline
$\alpha$ & Fe & bcc & 1 & 1 & 1 & \\
$\alpha$ & O & bcc & 0.982$\;\;\;\;\;$ & 0.912$\;\;\;\;\;$ & 1 & \\
$\alpha$ & C & bcc & 0.980 & 0.910 & 1 & \\
$\beta$ & OC & cesium-chloride & 0.981 & 0.911 & 1.000001 & \\
$\gamma$ & FeC$_2$ & magnesium-diboride & 0.994 & 0.971 & 1.000061 & \\
$\delta$ & Fe$_4$O$_4$ & Cmcm (orthorhombic) & 0.996 & 0.979 & 1.000040 & similar to $\delta$-Fe$_4$C$_4$, see below \\
$\delta$ & Fe$_4$C$_4$ & Cmcm (orthorhombic) & 0.996 & 0.983 & 1.000056 & \parbox[c]{1em}{
\vspace{6pt}\includegraphics[scale=0.11]{delta-Fe4C4-1.pdf}$\;\;\;\;\;\;$\includegraphics[scale=0.11]{delta-Fe4C4-2.pdf}} \\
$\epsilon$ & Fe$_4$O$_2$ & I4/mcm (tetragonal) & 0.998 & 0.988 & 1.000024 & \parbox[c]{1em}{
\vspace{6pt}\includegraphics[scale=0.13]{epsilon-Fe4O2-1.pdf}$\;\;\;\;$\includegraphics[scale=0.15]{epsilon-Fe4O2-2.pdf}} \\
$\zeta$ & FeOC$_4$ & P6/mmm (hexagonal) & 0.989 & 0.950 & 1.000044 & \parbox[c]{1em}{
\vspace{6pt}\includegraphics[scale=0.16]{zeta-FeOC4-1.pdf}$\;\;\;$\includegraphics[scale=0.14]{zeta-FeOC4-2.pdf}} \\
$\eta$ & FeO$_3$C$_2$ & P6/mmm (hexagonal) & 0.989 & 0.948 & 1.000039 & \parbox[c]{1em}{
\vspace{6pt}\includegraphics[scale=0.19]{eta-FeO3C2-1.pdf}$\;\;\;\;\;$\includegraphics[scale=0.20]{eta-FeO3C2-2.pdf}} \\
$\theta$ & Ne$_2$C$_4$ & Fd-3m (cubic) & 0.997 & 0.997 & 1.000021 & \parbox[c]{1em}{
\vspace{6pt}\includegraphics[scale=0.2]{theta-Ne2C4-1.pdf}$\;\;\;\;\;$\includegraphics[scale=0.15]{theta-Ne2C4-2.pdf}} \\
\hline
\end{tabular}
\end{table*}
In the pressure and screening length regimes appropriate to this work (while $P$ ranges from $10^{11}$ to $10^{16}$ GPa, $k_0^{-1}/r_s$ ranges from 1.13 to 1.81 for bcc Fe), there is a competition between close packing and next nearest neighbor interactions, which the closest packed structures tend not to win. This is exemplified by bcc's favorability over fcc, and the fact that only one of the equilibrium phases found (magnesium diboride structure) also appears in the phase diagram of densest binary sphere packings \citep{hop11,hop12}. The simplest multicomponent crystals have structures that are also assumed by some ionic compounds under low pressure conditions, which may reflect the fact that ionic solids have a simple closed-shell electronic structure (ionic solids also have strong +/$-$ Coulomb interactions that are missing here). When a pair of $Z$s are not too dissimilar they usually form cesium chloride structure, e.g. OC, NeO, SeFe, SeAs and AsFe. When they are more dissimilar they tend to form magnesium diboride structure, e.g. OHe$_2$, FeC$_2$ and SeO$_2$. Magnesium diboride is our first encounter with sub-cubic symmetry, which could give rise to transport anisotropy, elastic anisotropy, and other effects such as a magnetic field coupling to the structure orientation. A quite prevalent but more complicated orthorhombic structure occurs at chemical compositions O$_4$He$_4$, C$_4$He$_4$, Fe$_4$O$_4$, Fe$_4$C$_4$ and Se$_4$O$_4$, where these different instances can be interconverted by small adjustments of bond lengths and angles. A tetragonal structure occurs at compositions C$_4$He$_2$ and Fe$_4$O$_2$; this is the second-highest density structure in the C-O-Fe system and could potentially drive oxygen to greater depths than it would otherwise go. The C-O-Fe system also features two ternary structures FeOC$_4$ and FeO$_3$C$_2$, both with hexagonal symmetry. 
FeOC$_4$ can be viewed as magnesium diboride structure, with the triangular magnesium planes alternating between Fe and O compositions. FeO$_3$C$_2$ consists of alternating layers of kagome O and honeycomb C, with Fe at the holes in the honeycomb layers.
A general feature of the ternary bulk phase diagrams is that coexisting phases have mass density differences, due to a combination of neutron fraction and geometrical packing effects. These differences can be as large as $\sim\,$10 percent of the total density and will result in stratification of phase domains in the presence of a gravitational field -- the problem to which we now turn.
\pagebreak
\section{equilibrium layering calculation}
Here we give an application of our high pressure crystal chemistry results to white dwarfs at a given fixed overall composition. The equilibrium phase-layering diagram of a zero temperature WD is computed self-consistently, allowing for arbitrary numbers of components $N_C$ and phases $N_P$ that can be formed from these components. The problem is decomposed into two parts: one part is a microscopic phase stability calculation which produces a function $\rho(h)$ where $\rho$ is density and $h$ is enthalpy per unit mass, the other is a simple stellar structure calculation which determines the pressure-radius dependence $P(r)$. We iterate between these two parts. The former is inspired by a technique used among chemical engineers to study species segregation in oil reservoirs, cf. \citet{esp00}.
In the following, we will make use of the virial theorem for the gravitational potential energy $W$ of a WD, given by
\begin{equation}
W = -3 \int_0^R P\,4\pi r^2 dr. \label{virial}
\end{equation}
We begin by discretizing the star into $N_L$ onion layers of uniform thickness $\Delta r=R/N_L$. If $\Delta r$ is small compared to the scale height of pressure $H_P=-dr/d\log P$, the $i^{th}$ layer may be treated as a bulk equilibrium system at constant pressure $P_i$, and one may work with the modified Helmholtz free energy
\begin{equation}
F^* = \sum_{i=1}^{N_L} \Big[ -4P_iV_i + \sum_{\alpha=1}^{N_P} n_{\alpha i}\,\mu_{\alpha}(T,P_i) \Big]. \label{helmholtz}
\end{equation}
For each term in the sum over layers, $-3P_iV_i$ comes from the discrete version of Equation \ref{virial}, and another $-P_iV_i$ cancels the corresponding quantity in the Gibbs free energy of the layer, $\sum_{\alpha}n_{\alpha i}\,\mu_{\alpha}(T,P_i)$. Here $n_{\alpha i}$ is the (unknown) molar amount of phase $\alpha$ present in layer $i$, and $\mu_{\alpha}$ is the bulk chemical potential of phase $\alpha$. (The phase index $\alpha$ is not to be confused with the bcc structure, as in Table \ref{tab:structures}). We have thus avoided the complication of introducing a gravitational term into the chemical potentials, including it instead at the level of the layers. This comes at the cost of supplying a pressure function $P(r)$ consistent with hydrostatic equilibrium, implicit in Equations \ref{virial} \& \ref{helmholtz}. Let's assume that we have such a pressure function. (For an initial guess, we will take $P(r)$ from an $n=3$ polytrope.) Now fix a set of layer pressures $P_i=P(i\Delta r)$. The problem of minimizing $F^*$ has been reduced to the problem of minimizing the linear objective function $\sum_i\sum_{\alpha} n_{\alpha i}\,\mu_{\alpha}(T,P_i)$, subject to $2N_L+N_C-1$ constraints
\begin{eqnarray}
&&1 = \frac{1}{V_i}\sum_{\alpha} \frac{n_{\alpha i}\,m_{\alpha}}{\rho_{\alpha}(T,P_i)} \;\;\;\;\textrm{for }i=1\dots N_L, \\
&&0 = \sum_{i,\alpha} [(1-x_A)s_{A\alpha} - x_As_{B\alpha} - x_As_{C\alpha} ]n_{\alpha i},\nonumber\\
&& \textrm{etc. for } x_B \dots x_{N_C}, \label{comp} \\
\nonumber\\
&&0 \geq \sum_{\alpha} \Big[\frac{n_{\alpha i+1}}{V_{i+1}} - \frac{n_{\alpha i}}{V_i} \Big] m_{\alpha} \;\;\;\;\textrm{for }i=1\dots N_L\textrm{-1}, \label{BV}
\end{eqnarray}
each of which is also linear in the $n_{\alpha i}$. The first set of constraints ensures the volume filling fraction is equal to 1 for each layer, where the molar mass $m_{\alpha}$ and density $\rho_{\alpha}(T,P_i)$ are assumed to be known for each phase. Equations \ref{comp} constrain the global mole fractions $x_A \dots x_{N_C}$, where, for example, $s_{A\alpha}$ specifies the number of $A$-type nuclei per formula unit of the $\alpha$ phase. The reason for constraining mole fractions rather than component masses is that the latter tends to cause infeasibility problems for the Simplex solver. Finally, there is a set of inequality constraints which guarantee reality of the inter-layer Brunt-V\"ais\"al\"a frequencies $\omega=\sqrt{-(g/\rho)(d\rho/dr)}$. Thus, there is built-in stability against convective overturn of adjacent layers, but note that it is still possible to have unstably stratified material \textit{within} a layer. As noted above, this problem is straightforwardly solved using the Simplex method. For a number of variables $N_LN_P\sim10^3$--$10^4$, we use the high-performance lp\_solve routines.
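The structure of this linear program can be illustrated in miniature with scipy's linprog in place of lp\_solve (two layers with layer 1 innermost, a single pair of single-component phases; all masses, densities and chemical potentials below are invented toy numbers, not outputs of our calculation):

```python
import numpy as np
from scipy.optimize import linprog

# Toy layering LP: two layers (1 = inner), two stoichiometric phases,
# pure "A" and pure "B".  All numbers are illustrative.
m = np.array([12.0, 56.0])          # molar masses of phases A, B
rho = np.array([1.0, 1.2])          # phase densities (taken P-independent here)
v = m / rho                         # molar volumes
mu = np.array([-9.1, -50.0,         # mu_A(P1), mu_B(P1)
               -9.0, -49.5])        # mu_A(P2), mu_B(P2)
x_A = 0.5                           # global mole fraction of A nuclei

# Unknowns: x = [nA1, nB1, nA2, nB2]  (moles of each phase in each layer)
A_eq = [[v[0], v[1], 0.0, 0.0],               # layer 1 volume filling = 1
        [0.0, 0.0, v[0], v[1]],               # layer 2 volume filling = 1
        [1 - x_A, -x_A, 1 - x_A, -x_A]]       # global composition constraint
b_eq = [1.0, 1.0, 0.0]
A_ub = [[-m[0], -m[1], m[0], m[1]]]           # rho_2 - rho_1 <= 0 (Brunt-Vaisala)
b_ub = [0.0]

res = linprog(mu, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
nA1, nB1, nA2, nB2 = res.x
print(res.x)  # the denser B phase sinks: layer 1 is pure B, layer 2 is mixed
```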
So far we have considered the case of an isothermal WD. For the special case $T=0$, the Simplex solution provides the layer enthalpy per unit mass $h_i=\sum_{\alpha} n_{\alpha i}\,\mu_{\alpha}(0, P_i) / \sum_{\alpha} n_{\alpha i}\,m_{\alpha}$ and layer density $\rho_i=\sum_{\alpha} n_{\alpha i}\,m_{\alpha} / V_i$. In certain cases, one can interpolate to obtain a smooth function $\rho(h)$, which can be combined with the enthalpy-transformed stellar structure equations \citep{lin92}.\footnote{
An issue can arise near a density discontinuity, where $h_i$ and $\rho_i$ obtained by the above procedure describe a function $h(\rho)$ which is non-monotonic. The stellar structure calculation cannot then make use of the enthalpy transformation, because the sign of Equation \ref{drdh} is incorrect in the vicinity of the interface. A density discontinuity occurs as a consequence of mismatched $Z$s (compare bcc C and O in Table \ref{tab:structures}) but is made much more severe when there is also a mismatch in neutron fraction (compare bcc O and Fe in Table \ref{tab:structures}). Fortunately, for typical white dwarf compositions, the neutron fraction is continuous (or nearly so) across phase boundaries and the issue of non-monotonicity is avoided by choosing a suitably large layer thickness -- on the order of $R/200$ for He-C-O and C-O-Ne compositions.}
In the nonrelativistic limit, these read
\begin{eqnarray}
\frac{dP}{dh} & = & \rho, \label{dPdh}\\
\frac{dm}{dh} & = & \frac{-4\pi r^4\rho}{Gm}, \\
\frac{dr}{dh} & = & \frac{-r^2}{Gm}.\label{drdh}
\end{eqnarray}
The reason for using the enthalpy transformation is twofold. First, the total mass $M$, which we were not able to constrain in the Simplex calculation, now enters as a boundary condition. Second, if we simply used the layer masses $M_i=\sum_{\alpha} n_{\alpha i}\,m_{\alpha}$ along with the discretized equations of mass continuity and hydrostatic equilibrium (the usual stellar structure equations) to update the $P_i$, no information about the microscopic energy scale would carry over. In other words, we could multiply all the chemical potentials by 2 and get the same $P_i$. Equations \ref{dPdh}--\ref{drdh} are to be integrated inward from the boundary conditions $P(0)=0$, $m(0)=M$, and $r(0)=R$. Unfortunately we have no \textit{a priori} knowledge of the radius $R$ that is consistent with $M$ and $\rho(h)$ in the sense of the Oppenheimer-Volkoff map (to use the language of Lindblom). We therefore have to complete the mapping $\rho(h) \mapsto (M,R)$. A simple way to accomplish this is by ``aiming" the boundary condition $r(0)$ until the integration yields the physically correct behavior $dP/dr=0$ as $r\to0$. Approaching $R$ from below, the solutions are smooth and well-behaved except at $r=0$ due to a singularity in Equation \ref{drdh}. A change of variable $u=r^2$ removes this singularity, but we find no particular advantage to working with the resulting transformed equations. Approaching $R$ from above generates sign changes and the solutions are generally chaotic. The qualitatively different behaviors in these two regimes can be exploited to obtain $R$ to arbitrarily high precision. In practice, we minimize $dP/dr$ at a fixed, small fraction of the starting boundary condition $r(0)$ (but see the next paragraph for discussion of a special case). In this process of ``completing the map," an updated pressure function $P(r)$ is obtained at no extra cost. Layer pressures are reassigned and input to the Simplex calculation, and the process iterated. 
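The ``aiming" procedure is a shooting method. The toy sketch below integrates Equations \ref{dPdh}--\ref{drdh} inward and bisects on the trial radius, classifying a trial as too large when the enclosed mass is exhausted before the radius shrinks to zero; this classification is a simplified stand-in for minimizing $dP/dr$ at small $r$, and none of the numbers are taken from the paper. For a constant-density $\rho(h)$ the result can be checked against the analytic radius $(3M/4\pi\rho)^{1/3}$.

```python
import numpy as np
from scipy.integrate import solve_ivp

G = 6.674e-8  # cgs


def shoot(R, M, rho_of_h):
    """Integrate dm/dh and dr/dh inward from the surface (h = 0).
    Returns True if the mass runs out before the radius does,
    i.e. the trial radius R was too large."""
    def rhs(h, y):
        m, r = y
        return [-4.0 * np.pi * r**4 * rho_of_h(h) / (G * m),
                -r**2 / (G * m)]
    r_small = lambda h, y: y[1] - 1e-3 * R
    m_small = lambda h, y: y[0] - 1e-6 * M
    r_small.terminal = True
    m_small.terminal = True
    sol = solve_ivp(rhs, [0.0, 1e22], [M, R],
                    events=(r_small, m_small), rtol=1e-8)
    return sol.t_events[1].size > 0


def find_radius(M, rho_of_h, R_lo, R_hi, tol=1e-5):
    # "Aiming" by bisection: too-large trial radii exhaust the mass
    # first, too-small ones leave mass behind at the centre.
    while (R_hi - R_lo) > tol * R_hi:
        R_mid = 0.5 * (R_lo + R_hi)
        if shoot(R_mid, M, rho_of_h):
            R_hi = R_mid
        else:
            R_lo = R_mid
    return 0.5 * (R_lo + R_hi)


R = find_radius(2e33, lambda h: 1e6, 3e8, 3e9)  # ~1 M_sun of rho = 1e6 g/cm^3
print(f"R = {R:.3e} cm")  # analytic: (3M/(4 pi rho))^(1/3) ~ 7.81e8 cm
```

In the paper's calculation the interpolated $\rho(h)$ from the Simplex solution replaces the constant-density stand-in, but the bracketing logic is the same: the two qualitatively different integration behaviors on either side of the true $R$ make the bisection robust.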
One choice of convergence criterion is that successive iterations produce stellar radii which are the same to within a tolerance of $10^{-6}R_{\odot}$ -- typically this criterion is met within just a few iterations. Another choice is that radial positions and thicknesses of phase strata (as fractions of $R$) are static to within the resolution set by the number of simulation layers -- typically this occurs after just one iteration.
The main type of numerical error incurred is of the following nature. In the first iteration, integration of Equations \ref{dPdh}--\ref{drdh} never proceeds past the point for which we have tabulated $h_i$, $\rho_i$ data available to interpolate within. This is just a consequence of having used the polytrope initial guess. In subsequent iterations, however, we are sometimes forced to make a choice: carry out the integration past the highest tabulated $h_i$, replacing interpolations with extrapolations, or simply terminate the integration when interpolations become impossible, using the current value of $dP/dr$ in the aiming procedure discussed above. We choose the second option. The minimization problem within the aiming procedure is, in these cases, somewhat ill-defined, tending to generate some numerical noise which is expressed in the layering diagram near $r/R=0$. For this reason we present layering diagrams as they appear after the first iteration, noting that changes to the layering diagram are already imperceptible by the second iteration, save for an increase in the level of this numerical noise.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.84]{heoc_neoc_multiplot.pdf}
\caption{
Equilibrium phase layering diagrams for $1.0M_{\odot}$, He-C-O white dwarfs (top row) and $1.0M_{\odot}$, C-O-Ne white dwarfs (bottom row). Carbon-oxygen ratio is held at a fixed value in each panel, and mole fraction He or Ne makes up the balance. The discrete color map indicates the stable phases after one iteration of the procedure described in the main text. Further iterations produce no perceptible change in the layering diagram, save for an increase in the level of numerical noise. In cases where a given layer contains a two phase mixture of bcc and a more complicated structure (these are the only types of mixture to occur), the color map indicates the more complicated phase. White crosses give the stellar radius (right y-axis) as a function of composition, also after one iteration of the method. 200 layers were used in the computation of these diagrams.\label{fig:layering}}
\end{figure*}
\section{equilibrium layering results}
The previous section described a method of determining the radial positions and amounts of phase strata in a white dwarf with fixed mass and overall composition, in particular, strata composed of the new multi-component crystal structures. Using this method, we computed the $T=0$ equilibrium phase layering diagrams and radius-composition dependence of 1.0 solar mass, $^4$He-$^{12}$C-$^{16}$O and $^{12}$C-$^{16}$O-$^{20}$Ne white dwarfs. For each composition, an initial guess corresponding to the polytrope $P=3.8\times10^{14} \rho^{4/3}$ c.g.s. and stellar radius $R=7.5\times10^{-3}R_{\odot}$ was used, although the method appears to converge to the same result if these starting values are adjusted within reasonable limits. Figure \ref{fig:layering} shows the result of the calculation. Evidently pure bcc phases make up the majority of the stellar interior, despite the multicomponent structures being more efficiently packed. Since multicomponent phases tend to show up at interfaces, we refer to them as ``interphases.'' In the He-C-O star, for example, $\delta$-C$_4$He$_4$ and $\epsilon$-C$_4$He$_2$ interphases are formed between $\alpha$-C and $\alpha$-He, while $\beta$-OC appears at the low density part of the C-O boundary. For compositions near $x_{\footnotesize{\textrm{He}}}=1$, the thinness of the carbon shell allows O-He interphases to form, namely $\gamma$-OHe$_2$ and $\delta$-O$_4$He$_4$. Compared to sharp bcc-bcc interfaces, interphases offer free energy savings due to optimized crystal packing density, nearest and next-nearest neighbor interactions, etc. arising from the extra compositional degrees of freedom. Interphase thinness relative to bcc strata can be understood from the gravitational contribution to the free energy having a tendency to ``pull apart" the different Z components of the multicomponent phases. Competition between these two energy scales apparently causes interphases to become slightly thicker with depth. 
Consider, for example, $\delta$-C$_4$He$_4$ in the diagram with $x_{\footnotesize{\textrm{O}}}/x_{\footnotesize{\textrm{C}}}=1$. At $x_{\footnotesize{\textrm{He}}}=0.5$, only one simulation layer (out of 200) is completely filled with this compound, while an adjacent layer contains a mixture of $\delta$-C$_4$He$_4$ and $\alpha$-He. This interphase gradually thickens with $x_{\footnotesize{\textrm{He}}}$ and by $x_{\footnotesize{\textrm{He}}}=0.95$, 13 simulation layers are completely filled, another two are partially filled, and $\delta$-C$_4$He$_4$ has squeezed out $\alpha$-C from the layering diagram.
An unexpected but apparently generic feature of the radius-composition curves in Figure \ref{fig:layering} is the existence of a shallow minimum of the WD radius at an impure composition. Even as the level of numerical noise increases with further iterations, this minimum clearly persists. The cusps near $x_{\footnotesize{\textrm{He}}}=0.8$ and $x_{\footnotesize{\textrm{Ne}}}=0.3$ may be a numerical effect rather than a physical one, however, as they tend to smooth out upon further iterations.
\section{nonequilibrium layering results}
A simple modification of the equilibrium layering calculation enables a quasi-static settling calculation. If the settling species is $X$, an additional set of linear constraints
\begin{equation}
0 = \sum_{\alpha} s_{X\alpha}n_{\alpha i} \;\;\;\;\textrm{if }i<i_{min},
\end{equation}
enforces the minimum depth $i_{min}$ at which $X$ can appear. This minimum allowed depth can then be incrementally stepped down. We carry out a test of the method by settling $0.09M_{\odot}$ of O on a $0.91M_{\odot}$ He-C white dwarf, and $0.1M_{\odot}$ of Ne on a $0.9M_{\odot}$ C-O white dwarf. Overall compositions are fixed at $x_{\footnotesize{\textrm{He}}}=0.95$ with $x_{\footnotesize{\textrm{C}}}=x_{\footnotesize{\textrm{O}}}=0.025$, and $x_{\footnotesize{\textrm{Ne}}}=0.07$ with $x_{\footnotesize{\textrm{C}}}=2x_{\footnotesize{\textrm{O}}}=0.62$, respectively; the final settled-out states are given by Figure \ref{fig:layering}. (Admittedly, these are not particularly realistic settling scenarios but they serve as interesting test cases, forcing the presence of an interface between the highest and lowest $Z$s which otherwise doesn't happen in equilibrium). Note that if the starting value for $i_{min}$ is too near the surface, there is no feasible solution that can accommodate the settling mass. For this reason we restrict our study to the second half of the settling problem: $i_{min}=N_L/2,\dots,0$.
Results of the quasi-static settling calculation are plotted in Figure \ref{fig:settling}.
\begin{figure}[t]
\centering
\includegraphics[scale=0.62]{settling_multiplot.pdf}
\caption{
Quasi-static nonequilibrium phase diagram for settling $0.09M_{\odot}$ of O on a $0.91M_{\odot}$ He-C white dwarf (top) and settling $0.1M_{\odot}$ of Ne on a $0.9M_{\odot}$ C-O white dwarf (bottom). The $x$-axis can be regarded as time remaining until equilibrium, multiplied by the settling rate. White crosses give the evolution of the stellar radius during the settling process. The top (bottom) diagram was computed using 180 (200) layers. Other details of the plots are the same as in Figure \ref{fig:layering}.
\label{fig:settling}}
\end{figure}
In both settling scenarios, the out-of-equilibrium star contains one or more phases that do not appear in the final, equilibrium stacking sequence. One function of these extra phases is to serve as transient host structures for the settling species: $\delta$-O$_4$He$_4$ and $\gamma$-OHe$_2$ are hosts for settling oxygen, and $\theta$-Ne$_2$C$_4$ is a host for settling neon. The phase settling diagram for the He-O-C star is fairly non-trivial, with as many as seven distinct strata near $i_{min}/N_L\approx0.25$. A minimum in the stellar radius appears around this point (as it also does in the O-C-Ne settling calculation), indicating that the lowest enthalpy star is not the most compact star. This minimum appears to track the size of the $\epsilon$-C$_4$He$_2$ core, possibly also the thickness of the $\delta$-O$_4$He$_4$ interphase. However, the minimum persists when the calculation is repeated excluding first one and then the other of these phases. These results hint at the prospect of new phenomena enabled by compositional and structural heterogeneity, which takes advantage of the additional ``chemical'' degrees of freedom afforded by multinary phases. 
In the first settling calculation, both He and C must eventually find their way through the sinking O-containing layer. It is interesting that, with the exception of a single point near $i_{min}/N_L=0.3$ where all the oxygen is bound up in $\delta$-O$_4$He$_4$, there is no continuous migration pathway (in the sense of stoichiometric compounds) assuming any deviations from spherical symmetry are sufficiently weak to maintain contiguity of the layering sequence. Several binary phases are available to provide such a pathway, but the system does not use them to this advantage -- for example, $\beta$-OC is never formed. Consequently, in the final stages of settling, carbon has to diffuse through an oxygen barrier having thickness $t\sim10^3$ km. The associated timescale can be estimated as $\tau\sim t^2/D_0$, where $D_0=3\Omega_Pr_s^2/\Gamma^{4/3}$ is the diffusion coefficient for a one-component plasma (see \citet{han75} and more recently, \citet{hug10}). Here $\Omega_P=(4\pi e^2Zn_e/M)^{1/2}$ is the ion plasma frequency and $\Gamma=Z^2e^2/(r_sk_BT)$ is the Coulomb coupling parameter, which we take to be the melting point value: $\Gamma_m=175$. Putting in the numbers gives $\tau\sim10^{13}$ yrs. While this is an oversimplified analysis, it does suggest that compact object phase strata may be far from the equilibrium stacking sequence. It is particularly intriguing to consider whether strong ``chemical'' deviations from the equilibrium phase stacking sequence could accumulate excess free energy that is eventually liberated in energetic (and thus observable) events. Our analysis here is a first step towards developing a framework for considering such possibilities.
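The order-of-magnitude estimate above is easy to reproduce numerically. In the sketch below the interior mass density is an assumed representative value (the paper does not quote one), so only the order of magnitude of the result is meaningful.

```python
import numpy as np

# tau ~ t^2 / D0, with D0 = 3 * Omega_P * r_s^2 / Gamma^(4/3)
e = 4.803e-10          # electron charge, esu
m_u = 1.661e-24        # atomic mass unit, g
yr = 3.156e7           # seconds per year

Z, A = 8, 16           # oxygen barrier
rho = 1e6              # g/cm^3 -- assumed representative interior density
Gamma_m = 175          # Coulomb coupling at melting
t_barrier = 1e8        # barrier thickness, cm (10^3 km)

n_i = rho / (A * m_u)                          # ion number density
r_s = (3.0 / (4.0 * np.pi * n_i)) ** (1 / 3)   # ion-sphere radius
M_ion = A * m_u
Omega_P = np.sqrt(4.0 * np.pi * (Z * e) ** 2 * n_i / M_ion)  # ion plasma frequency
D0 = 3.0 * Omega_P * r_s**2 / Gamma_m ** (4.0 / 3.0)
tau_yr = t_barrier**2 / D0 / yr
print(f"D0 ~ {D0:.1e} cm^2/s, tau ~ {tau_yr:.1e} yr")  # tau of order 10^13 yr
```

With these inputs $D_0$ comes out near $5\times10^{-5}\ \mathrm{cm^2\,s^{-1}}$ and $\tau$ near $10^{13}$ yr, consistent with the estimate in the text.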
\section{Finite temperature}
Here we give a brief, qualitative discussion of some effects that will become important at finite temperatures; a detailed analysis is a subject for future work. At finite $T$, the chemical potentials $\mu_{\alpha}(T,P_i)$ must be modified to account for smearing of the Fermi surface, associated smearing of the electron capture layers which will give an adjustment in composition, phonons, and if $\alpha$ denotes an alloy or solution phase, mixing entropy. Since the entropic part of these contributions does not enter into the enthalpy, it is not possible to self-consistently include thermal effects in our equilibrium phase layering method, which relies on the enthalpy transformation. However, thermal effects could be included post hoc. For example, one could compute the phonon free energy of an interphase crystal such as $\delta$-C$_4$He$_4$ along with that of the equivalent phase-separated $\alpha$-C and $\alpha$-He crystals, add this quantity to the $T=0$ free energy, and predict whether the interphase tends to thicken or thin at finite $T$. A rough estimate based on phases' bulk moduli ($K=-VdP/dV$) suggests the thermal (phonon) correction to the free energy will tend to increase the stability of the soft outer phase strata relative to the stiff inner strata, likely shifting the interphases slightly towards the stellar center. There is also a possibility for additional, particularly soft interphases to appear in the stacking sequence, if entropic terms are large enough to affect the phase competition that winnows the phases of Table \ref{tab:structures} down to the phase layering in Figure \ref{fig:layering}. A phonon calculation would also yield the elastic tensor and help characterize the degree of elastic anisotropy as a function of depth, as well as give a prediction for the relative melting temperatures via the Lindemann parameter (ratio of RMS nucleus displacement to equilibrium lattice spacing). 
Finally, we note that the simple mechanical stability criterion used here should be replaced with the Ledoux criterion at finite $T$. See \citet{rei01} for an application to multicomponent neutron stars. Again, it is difficult to see how this more sophisticated criterion can be built into the current method, but it could at least be checked after including thermal effects in the manner described.
\acknowledgments
T.A.E. acknowledges an Academic Computing Fellowship from The Pennsylvania State University. We thank Julian Gale for providing a GULP patch with the necessary infrastructural changes to handle a $V_c$-dependent screening length, and thank Ben Owen and Steinn Sigurdsson for stimulating discussions. Teresa Hamill's preliminary investigation of binary crystal packings helped motivate some of the methods outlined in Section 2.
\pagebreak
\section{Calibration of $\boldsymbol a$}\label{app:calibration}
The state transition density can be written as
\begin{align}
p(\mathbf{x}_t|\mathbf{x}_{t-1},\boldsymbol\theta) = h(\mathbf{x}_t)g(\mathbf{x}_{t-1},\boldsymbol\theta)\exp\left( \boldsymbol{\eta}(\mathbf{x}_{t-1},\boldsymbol\theta)^\top \boldsymbol{T}(\mathbf{x}_t)\right),
\end{align}
where $h(.)$, $g(.,.)$, $\boldsymbol\eta(.,.)$, and $\boldsymbol T(.)$ are known analytical functions. Note that $g(\mathbf{x}_{t-1},\boldsymbol\theta)^{-1}$ is the normalizing constant of $p(\mathbf{x}_t|\mathbf{x}_{t-1},\boldsymbol\theta)$. The transition kernel $k(\mathbf{x}_t,\mathbf{x}_{t-1}|\boldsymbol a_t,\boldsymbol\varphi)$ is given by
\begin{align}
k(\mathbf{x}_t,\mathbf{x}_{t-1}|\boldsymbol a_t,\boldsymbol\varphi) = \exp\left(\boldsymbol{a}_t^\top \boldsymbol{T}(\mathbf{x}_t)\right)h(\mathbf{x}_t)g(\mathbf{x}_{t-1},\boldsymbol\theta)\exp\left( \boldsymbol{\eta}(\mathbf{x}_{t-1},\boldsymbol\theta)^\top \boldsymbol{T}(\mathbf{x}_t)\right).
\end{align}
We have that
\begin{align}
q(\mathbf{x}_t|\mathbf{x}_{t-1},\mathbf{y},\boldsymbol\varphi)=\frac{k(\mathbf{x}_t,\mathbf{x}_{t-1}|\boldsymbol{a}_t,\boldsymbol\varphi)}{\chi(\mathbf{x}_{t-1}|\boldsymbol{a}_t,\boldsymbol\varphi)}.
\end{align}
The normalising constant of this expression can be written as
\begin{align}
\chi(\mathbf{x}_{t-1}|\boldsymbol{a}_t,\boldsymbol\varphi) = &\int \exp\left(\boldsymbol{a}_t^\top \boldsymbol{T}(\mathbf{x}_t)\right)p(\mathbf{x}_t|\mathbf{x}_{t-1},\boldsymbol\varphi)d\mathbf{x}_t\\
=& \int \exp\left(\boldsymbol{a}_t^\top \boldsymbol{T}(\mathbf{x}_t)\right)h(\mathbf{x}_t)g(\mathbf{x}_{t-1},\boldsymbol\theta)\exp\left( \boldsymbol{\eta}(\mathbf{x}_{t-1},\boldsymbol\theta)^\top \boldsymbol{T}(\mathbf{x}_t)\right)d\mathbf{x}_t\\
=& g(\mathbf{x}_{t-1},\boldsymbol\theta)\int \exp\left((\boldsymbol{a}_t+\boldsymbol{\eta}(\mathbf{x}_{t-1},\boldsymbol\theta))^\top \boldsymbol{T}(\mathbf{x}_t)\right)h(\mathbf{x}_t)d\mathbf{x}_t\\
=&g(\mathbf{x}_{t-1},\boldsymbol\theta)\int \exp\left(\tilde{\boldsymbol{\eta}}(\boldsymbol{a}_t,\mathbf{x}_{t-1},\boldsymbol\theta)^\top \boldsymbol{T}(\mathbf{x}_t)\right)h(\mathbf{x}_t)d\mathbf{x}_t\\
=&\frac{g(\mathbf{x}_{t-1},\boldsymbol\theta)}{\tilde{g}(\boldsymbol a_t,\mathbf{x}_{t-1},\boldsymbol\theta)},
\end{align}
with $\tilde{\boldsymbol{\eta}}(\boldsymbol{a}_t,\mathbf{x}_{t-1},\boldsymbol\theta) = \boldsymbol{a}_t+\boldsymbol{\eta}(\mathbf{x}_{t-1},\boldsymbol\theta)$, $\tilde{g}(\boldsymbol a_t,\mathbf{x}_{t-1},\boldsymbol\theta)^{-1} = \int \exp\left(\tilde{\boldsymbol{\eta}}(\boldsymbol{a}_t,\mathbf{x}_{t-1},\boldsymbol\theta)^\top \boldsymbol{T}(\mathbf{x}_t)\right)h(\mathbf{x}_t)d\mathbf{x}_t$. Note that the solution to this integral depends on the value of $\tilde{\boldsymbol{\eta}}(\boldsymbol{a}_t,\mathbf{x}_{t-1},\boldsymbol\theta)$, and therefore $\boldsymbol{a}_t$. Thus, the values $\boldsymbol{a}_t$ must be constrained to the space where $q(\mathbf{x}_t|\mathbf{x}_{t-1},\mathbf{y},\boldsymbol\varphi)$ is a valid distribution function. For instance, for a multivariate normal distribution this would imply that $\boldsymbol{a}_t$ must induce a variance-covariance matrix that is positive definite.
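As a concrete instance of this algebra, consider a univariate Gaussian transition $N(\mu,\sigma^2)$ with sufficient statistics $\boldsymbol{T}(x)=(x,x^2)^\top$. The tilted normalising constant $\chi$ then has the closed form sketched below (the specific numbers are arbitrary), which can be verified by quadrature; the tilt is valid whenever $a_2<1/(2\sigma^2)$, the scalar analogue of the positive-definiteness requirement mentioned above.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm


def log_chi_gaussian(a, mu, sigma2):
    """log chi for a Gaussian transition N(mu, sigma2) tilted by
    exp(a[0]*x + a[1]*x**2); requires a[1] < 1/(2*sigma2)."""
    eta = np.array([mu / sigma2, -0.5 / sigma2])   # natural parameters
    eta_tilted = eta + a
    assert eta_tilted[1] < 0, "tilted density is not normalisable"
    # log-partition function of the Gaussian exponential family
    A = lambda e: -e[0] ** 2 / (4.0 * e[1]) - 0.5 * np.log(-2.0 * e[1])
    return A(eta_tilted) - A(eta)


mu, sigma2, a = 0.5, 0.64, np.array([0.3, 0.2])
num, _ = quad(lambda x: np.exp(a[0] * x + a[1] * x**2)
              * norm.pdf(x, mu, np.sqrt(sigma2)), -20, 20)
print(np.exp(log_chi_gaussian(a, mu, sigma2)), num)  # the two agree
```

This is the scalar version of the general statement that tilting an exponential-family transition by $\exp(\boldsymbol{a}_t^\top\boldsymbol{T}(\mathbf{x}_t))$ simply shifts the natural parameter, so $\chi$ is a ratio of normalising constants.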
\cite{richard2007efficient} calibrate $\boldsymbol{a}_t$ as the solution to the optimization problem:
\begin{align*}
\boldsymbol{a}_t &= \argmin_{\tilde{\boldsymbol a}_t\in A_t} \sum_{s=1}^S \left(-\gamma_{0,t}+\log(p (\mathbf{y}_t|\mathbf{x}_t^{[s]},\boldsymbol\varphi)p (\mathbf{x}_t^{[s]}|\mathbf{x}_{t-1}^{[s]},\boldsymbol\varphi)\chi(\mathbf{x}_{t}^{[s]}|\tilde{\boldsymbol{a}}_{t+1},\boldsymbol\varphi))-\log(k(\mathbf{x}_t^{[s]},\mathbf{x}_{t-1}^{[s]}|\tilde{\boldsymbol a}_t,\boldsymbol\varphi)) \right)^2
\end{align*}
where $S$ state paths are drawn as $\mathbf{x}^{[s]}\sim q(\mathbf{x}|\mathbf{y})$.
For our choice of kernel function and state transition, we can show that
\begin{align}
-\gamma_{0,t}+\log(p (\mathbf{y}_t|\mathbf{x}_t^{[s]},\boldsymbol\varphi)p (\mathbf{x}_t^{[s]}|\mathbf{x}_{t-1}^{[s]},\boldsymbol\varphi)\chi(\mathbf{x}_{t}^{[s]}|\tilde{\boldsymbol{a}}_{t+1},\boldsymbol\varphi))-\log(k(\mathbf{x}_t^{[s]},\mathbf{x}_{t-1}^{[s]}|\tilde{\boldsymbol a}_t,\boldsymbol\varphi))=\\
-\gamma_{0,t}+\log(p (\mathbf{y}_t|\mathbf{x}_t^{[s]},\boldsymbol\varphi)\chi(\mathbf{x}_{t}^{[s]}|\tilde{\boldsymbol{a}}_{t+1},\boldsymbol\varphi))-\tilde{\boldsymbol{a}}_t^\top\boldsymbol{T}(\mathbf{x}_t^{[s]}),
\end{align}
which induces a sequence of linear regression problems. Thus, we can set $\boldsymbol a_t = \hat{\boldsymbol\gamma}_t$, where $\hat{\boldsymbol\gamma}_t$ is the OLS coefficient estimate of the linear regression $$\log\left(p (\mathbf{y}_t|\mathbf{x}_t^{[s]},\boldsymbol\varphi)\chi(\mathbf{x}_{t}^{[s]}|\boldsymbol{a}_{t+1},\boldsymbol\varphi)\right) = \gamma_{0,t}+ \boldsymbol{\gamma}_t^\top \boldsymbol{T}(\mathbf{x}_t^{[s]})+\nu_{s,t}.$$
The intercept $\gamma_{0,t}$ does not play any further role in the method. Algorithm \ref{alg:vi2} summarises the implementation details.
\begin{algorithm}
\begin{algorithmic}[1]
\State{Choose an initial value $\boldsymbol a$.}
\State{Generate $S$ state paths $\mathbf{x}^{[s]}\sim q(\mathbf{x}|\mathbf{y})$.}
\For{$t = T,\dots,1$}
\State{Set $\tilde{{y}}_{s,t} = \log \left[p (\mathbf{y}_t|\mathbf{x}_t^{[s]},\boldsymbol\varphi)\chi(\mathbf{x}_{t}^{[s]}|\boldsymbol{a}_{t+1},\boldsymbol\varphi)\right]$ for $s = 1,\dots,S$.}
\State{Set $\boldsymbol a_t = \hat{\boldsymbol\gamma}_t$, where $\hat{\boldsymbol\gamma}_t$ is the OLS coefficient estimate of the linear regression $$\tilde{{y}}_{s,t} =\gamma_{0,t}+ \boldsymbol{\gamma}_t^\top\boldsymbol{T}(\mathbf{x}_t^{[s]})+\nu_{s,t},$$
where $S$ is the total number of observations used for estimation.}
\EndFor
\end{algorithmic}
\caption{Calibration of $\boldsymbol a$}
\label{alg:vi2}
\end{algorithm}
Note that Algorithm \ref{alg:vi2} is required in step 6 in Algorithm \ref{alg:vi}. When $j=1$ in Algorithm \ref{alg:vi}, we initialise $\boldsymbol{a} =\boldsymbol 0$. For $j>1$, we initialise $\boldsymbol{a}$ using its latest value. We choose the number of paths $S$ to be equal to three times the dimension of the vector $\boldsymbol{a}_t$.
\cite{richard2007efficient} suggest running Algorithm \ref{alg:vi2} iteratively until convergence of $\boldsymbol a$ is achieved. In our examples, we find that only one iteration of Algorithm \ref{alg:vi2} is necessary, and more iterations have no impact on the accuracy of the approach.
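The backward regression pass of Algorithm \ref{alg:vi2} can be sketched compactly for a scalar state with $\boldsymbol{T}(x)=(x,x^2)^\top$, so that $\boldsymbol{a}_t$ has two elements and $S=60\geq 3\dim(\boldsymbol{a}_t)$ draws suffice. The callable below stands in for $\log[p(\mathbf{y}_t|\mathbf{x}_t,\boldsymbol\varphi)\chi(\mathbf{x}_t|\boldsymbol{a}_{t+1},\boldsymbol\varphi)]$; when that target happens to be exactly quadratic in the state, the OLS step recovers its coefficients exactly.

```python
import numpy as np


def backward_ols_pass(x_paths, log_target):
    """One backward pass of the calibration recursion.

    x_paths    : (S, T) array of draws x_t^{[s]} from q(x|y)
    log_target : callable (t, x, a_next) -> log[p(y_t|x) chi(x|a_{t+1})],
                 vectorised over the S sampled states
    Returns the (T, 2) array of calibrated a_t.
    """
    S, T = x_paths.shape
    a = np.zeros((T + 1, 2))                 # a_{T+1} = 0 closes the recursion
    for t in range(T - 1, -1, -1):
        x = x_paths[:, t]
        X = np.column_stack([np.ones(S), x, x**2])   # intercept + T(x) = (x, x^2)
        ytil = log_target(t, x, a[t + 1])
        gamma, *_ = np.linalg.lstsq(X, ytil, rcond=None)
        a[t] = gamma[1:]                     # the intercept gamma_0 is discarded
    return a[:T]


rng = np.random.default_rng(0)
paths = rng.normal(size=(60, 5))             # S = 60 draws of T = 5 states
a = backward_ols_pass(paths, lambda t, x, a_next: 0.5 + 1.2 * x - 0.3 * x**2)
print(a[0])   # -> (1.2, -0.3) up to numerical precision
```

In a real application the callable would evaluate the measurement density and the closed-form $\chi$ from the previous (in time, later) step, so each regression feeds the next one backward through the sample.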
\section{Implementation details for the stochastic volatility model}\label{A:sv}
\subsection{Choices of prior}
The prior in the transformed parameter space is given as $p\left(\boldsymbol{\theta}\right) = p(\bar{x})p(\kappa)p(c)$, with
\begin{align*}
\text{(i)}\hspace{0.3cm} &p(\bar{x}) = \phi_1\left(\bar{x};0,1000\right),\hspace{1cm}\text{(ii)}\ p(\kappa) = \frac{\exp(\kappa)}{(1+\exp(\kappa))^2},\hspace{1cm}\text{(iii)}\ p(c)\propto e^{-\alpha c}\exp\left(-\frac{\beta}{e^{c}}\right).
\end{align*}
Here, $p(c)$ was constructed by considering an inverse gamma prior on $\sigma^2$ and deriving the corresponding prior on $c$. The shape and rate parameters of the inverse gamma prior are set as $\alpha = 1.001$ and $\beta = 1.001$.
The prior $p(\kappa)$ was constructed by considering a uniform prior on $\rho$ and deriving the corresponding prior on $\kappa$.
\subsection{Augmented posterior}
The parameters of the SV model are $\boldsymbol{\theta}=\left(\bar{x},\kappa,c\right)^\top$. The augmented posterior can be written as
\begin{align*}
p(\boldsymbol{\theta},\mathbf{x}|\mathbf{y})\propto&\frac{1}{e^{\frac{x_1}{2}} s}\phi_1\left(\frac{y_1}{e^{\frac{x_1}{2}}}\right)\phi_1\left(\frac{x_1-\bar{x}}{s}\right)\prod_{t=2}^{T}\frac{1}{e^{\frac{x_t}{2}}\sigma}\phi_1\left(\frac{y_t}{e^{\frac{x_t}{2}}}\right)\phi_1\left(\frac{x_t-\bar{x}-\rho(x_{t-1}-\bar{x})}{\sigma}\right)p(\boldsymbol{\theta})\,,
\end{align*}
where $s^2 = \frac{\sigma^2}{1-\rho^2}$. Denote $g(\boldsymbol\theta,\mathbf{x}) = p(\mathbf{y}|\mathbf{x})p(\mathbf{x}|\boldsymbol{\theta})p(\boldsymbol{\theta})$. The closed-form expression for $\log g(\boldsymbol\theta,\mathbf{x})$ is:
\begin{align*}
\log g(\boldsymbol\theta,\mathbf{x})= &\log p(\boldsymbol{\theta})-\frac{x_1}{2}-\log(s)-\frac{1}{2}\left(\frac{y_1}{e^{\frac{x_1}{2}}}\right)^2-\frac{1}{2}\left(\frac{x_1-\bar{x}}{s}\right)^2+\\ &\sum_{t=2}^{T}\left[-\frac{x_t}{2}-\log(\sigma)-\frac{1}{2}\left(\frac{y_t}{e^{\frac{x_t}{2}}}\right)^2-\frac{1}{2}\left(\frac{x_t-\bar{x}-\rho(x_{t-1}-\bar{x})}{\sigma}\right)^2\right].
\end{align*}
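The expression for $\log g$ transcribes directly into code. The helper below is hypothetical and drops both the prior term and the additive $-\frac{1}{2}\log(2\pi)$ constants; since there are $2T$ Gaussian factors, adding back $T\log(2\pi)$ recovers the exact sum of Gaussian log-densities, which gives a simple check against \texttt{scipy.stats.norm}.

```python
import numpy as np
from scipy.stats import norm


def log_g_kernel(xbar, rho, sigma, x, y):
    """log p(y|x) + log p(x|theta), dropping the prior and the
    -log(2*pi)/2 constants (hypothetical helper for illustration)."""
    s = sigma / np.sqrt(1.0 - rho**2)        # stationary std dev of the state
    out = (-x[0] / 2 - np.log(s)
           - 0.5 * (y[0] * np.exp(-x[0] / 2)) ** 2
           - 0.5 * ((x[0] - xbar) / s) ** 2)
    mean = xbar + rho * (x[:-1] - xbar)      # conditional means for t >= 2
    out += np.sum(-x[1:] / 2 - np.log(sigma)
                  - 0.5 * (y[1:] * np.exp(-x[1:] / 2)) ** 2
                  - 0.5 * ((x[1:] - mean) / sigma) ** 2)
    return out


rng = np.random.default_rng(2)
x, y = rng.normal(size=20), rng.normal(size=20)
xbar, rho_p, sigma = 0.2, 0.9, 0.3
print(log_g_kernel(xbar, rho_p, sigma, x, y))
```

Checks of this kind are worth automating: the variational gradients below differentiate exactly this expression, so a transcription error here silently corrupts every gradient estimate.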
\subsection{MCMC estimation}
For exact Bayesian inference we implement the following MCMC sampling scheme:\\
\noindent$\underline{\text{Sampling Scheme}}$\\
\ \ \hspace{2cm} Step 1: Generate from $\mathbf{x}|\boldsymbol{\theta},\mathbf{y}$.\\
\ \ \hspace{2cm} Step 2: Generate from $\bar{x}|\mathbf{x},\mathbf{y},\{\boldsymbol{\theta}\backslash\bar{x}\}$.\\
\ \ \hspace{2cm} Step 3: Generate from $\sigma^2|\mathbf{x},\mathbf{y},\{\boldsymbol{\theta}\backslash\sigma^2\}$.\\
\ \ \hspace{2cm} Step 4: Generate from $\rho|\mathbf{x},\mathbf{y},\{\boldsymbol{\theta}\backslash\rho\}$.\\
\ \\
For Step 1, we proceed as in \cite{kim1998stochastic}, using a
mixture of seven normals to approximate the distribution of $\log\left[y_t^2e^{-2x_t}\right]$, and then the precision sampler in \cite{chan2009efficient} to generate $\mathbf{x}$.
In Step 2 we use the Gaussian distribution:
$p(\bar{x}|\mathbf{x},\mathbf{y},\{\boldsymbol{\theta}\backslash\bar{x}\}) = \text{N}\left(\mu_{\bar{x}},s_{\bar{x}}^2\right)$
with $s_{\bar{x}}^2 = \left[\frac{1}{1000}+\frac{(T-1)(1-\rho)^2+(1-\rho^2)}{\sigma^2}\right]^{-1}$ and $\mu_{\bar{x}}=s_{\bar{x}}^2\left[\frac{(1-\rho^2)x_1}{\sigma^2}+\frac{(1-\rho)}{\sigma^2}\sum_{t=2}^{T}(x_t-\rho x_{t-1})\right]$. For Step 3 we use the inverse gamma distribution:
$$p(\sigma^2|\mathbf{x},\mathbf{y},\{\boldsymbol{\theta}\backslash\sigma^2\}) = \text{IG}\left(\alpha+\frac{T}{2},\beta+\frac{1}{2}\left[(x_1-\bar{x})^2(1-\rho^2)+\sum_{t=2}^T(x_t-\rho x_{t-1}-\bar{x}(1-\rho))^2\right]\right).$$ In Step 4 we use a Metropolis-Hastings step, with corresponding proposal $p(\rho) = \text{N}\left(\mu_{\rho},s_{\rho}^2\right)$
where $s_{\rho}^2 = \sigma^2\left[\sum_{t=1}^{T-1}\left(x_t-\bar{x}\right)^2\right]^{-1}$ and $\mu_{\rho} = s_{\rho}^2\frac{\sum_{t=2}^{T}\left(x_t-\bar{x}\right)\left(x_{t-1}-\bar{x}\right)}{\sigma^2}$. Note here that Step 1 can also be employed to generate draws from $p(\mathbf{x}|\mathbf{y},\boldsymbol{\theta})$, as needed for the hybrid variational Bayes method.
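As an illustration of Step 3, the conditional draw of $\sigma^2$ can be sketched as below (a hypothetical helper; $\alpha$ and $\beta$ are the inverse gamma prior parameters). Averaging many draws reproduces the inverse gamma mean $\text{rate}/(\text{shape}-1)$, which provides a quick unit test of the conditional.

```python
import numpy as np


def draw_sigma2(x, xbar, rho, alpha, beta, rng, size=None):
    """Draw sigma^2 from its IG conditional; an IG(shape, rate) variate
    is the reciprocal of a Gamma(shape, scale=1/rate) variate."""
    resid = x[1:] - xbar - rho * (x[:-1] - xbar)
    shape = alpha + 0.5 * len(x)
    rate = beta + 0.5 * ((x[0] - xbar) ** 2 * (1 - rho**2) + np.sum(resid**2))
    return 1.0 / rng.gamma(shape, 1.0 / rate, size=size), shape, rate


rng = np.random.default_rng(1)
x = 0.3 * rng.normal(size=50)                  # stand-in state path
draws, shape, rate = draw_sigma2(x, 0.0, 0.9, 1.001, 1.001, rng, size=400_000)
print(draws.mean(), rate / (shape - 1.0))      # the two nearly agree
```

The same pattern (closed-form Gamma-family conditionals drawn via \texttt{numpy}'s generator) applies to Step 2, which is a plain Gaussian draw.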
\subsection{Gaussian variational approximation}
In this section we denote the augmented parameter space of the SV model as $\boldsymbol{\psi} = (\boldsymbol{\theta}^\top,\mathbf{x}^\top)^\top$. The Gaussian variational approximation considers $$q_\lambda(\boldsymbol{\psi}) = q_{\lambda_1}(\boldsymbol{\theta})q_{\lambda_2}(\mathbf{x}),$$
with $q_{\lambda_1}(\boldsymbol{\theta}) = \phi_d(\boldsymbol{\theta};\boldsymbol{\mu},BB^\top+D^2)$, $q_{\lambda_2}(\mathbf{x}) = \phi_T(\mathbf{x};\boldsymbol{\mu}_x,C_xC_x^\top)$, $C_x$ is a lower triangular Cholesky factor with three bands and $B$ is of dimension $d\times 1$. The variational parameter vectors are $\boldsymbol{\lambda}_1 = (\boldsymbol{\mu}^\top,\text{vech}(B)^\top,\boldsymbol{d}^\top)^\top$, and $\boldsymbol{\lambda}_2 = (\boldsymbol{\mu}_x^\top,\boldsymbol{c}_x^\top)^\top$, where $\boldsymbol{d}$ denotes the diagonal elements in $D$, and $\boldsymbol{c}_x$ denotes the vector of non-zero elements in $C_x$.
The ELBO for this approximation is given as
\begin{align}
\mathcal{L}(\boldsymbol{\lambda}) = E_{\lambda}\left[\log p(\mathbf{y}|\mathbf{x})p(\mathbf{x}|\boldsymbol{\theta})p(\boldsymbol{\theta})-\log q_\lambda(\boldsymbol{\psi}) \right].
\end{align}
The reparametrization gradient of this expression can be computed by writing
\begin{align*}
\boldsymbol{\theta} & = \boldsymbol{\mu}+B z+D\boldsymbol{\varepsilon}_\theta,\\
\mathbf{x} & = \boldsymbol{\mu}_x+C_x\boldsymbol{\varepsilon}_x,
\end{align*}
where $z\sim N(0,1)$ and $\boldsymbol{\varepsilon} = (\boldsymbol{\varepsilon}_\theta^\top,\boldsymbol{\varepsilon}_x^\top)^\top\sim N(\boldsymbol{0}_{d+T},I_{d+T})$. Then we get that
\begin{align}
\nabla_\lambda\mathcal{L}(\boldsymbol{\lambda}) = E_{z,\varepsilon}\left[\frac{\partial \boldsymbol{\psi}}{\partial\boldsymbol{\lambda}}^\top\left[\nabla_\psi\log p(\mathbf{y}|\mathbf{x})p(\mathbf{x}|\boldsymbol{\theta})p(\boldsymbol{\theta})-\nabla_\psi\log q_\lambda(\boldsymbol{\psi})\right] \right].
\end{align}
Note that $\frac{\partial \boldsymbol{\psi}}{\partial\boldsymbol{\lambda}} = \text{blockdiag}(\frac{\partial \boldsymbol{\theta}}{\partial\boldsymbol{\lambda}_1},\frac{\partial \mathbf{x}}{\partial\boldsymbol{\lambda}_2})$, where the operator blockdiag indicates the diagonal stacking of two matrices, $\frac{\partial \boldsymbol{\theta}}{\partial\boldsymbol{\lambda}_1}$ was provided in a previous section, and $\frac{\partial \mathbf{x}}{\partial\boldsymbol{\lambda}_2} = \left[I_T \hspace{0.2cm} (\boldsymbol{\varepsilon}_x^\top\otimes I_T)P\right]$, where $P$ is a matrix such that $\frac{\partial \mathbf{x}}{\partial\boldsymbol{c}_x} = \frac{\partial \mathbf{x}}{\partial C_x}P$. Additionally, note that $\nabla_\psi\log q_\lambda(\boldsymbol{\psi}) = (\nabla_\theta\log q_{\lambda_1}(\boldsymbol{\theta})^\top,\nabla_x\log q_{\lambda_2}(\mathbf{x})^\top)^\top$. An expression for $\nabla_\theta\log q_{\lambda_1}(\boldsymbol{\theta})^\top$ is provided in \cite{ong2018gaussian}, while $\nabla_x\log q_{\lambda_2}(\mathbf{x}) = -(C_xC_x^\top)^{-1}(\mathbf{x}-\boldsymbol{\mu}_x)$. The Gaussian variational approximation is calibrated by using an unbiased estimate of this ELBO gradient inside an SGA algorithm.
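The factor-plus-diagonal structure of $q_{\lambda_1}(\boldsymbol{\theta})$ is easy to sanity-check by simulation: draws $\boldsymbol{\theta} = \boldsymbol{\mu}+Bz+D\boldsymbol{\varepsilon}_\theta$ have covariance $BB^\top+D^2$, which is the reparametrization exploited in the gradient above. The numbers below are arbitrary.

```python
import numpy as np


def draw_theta(mu, B, d, rng, size):
    # theta = mu + B z + D eps with z ~ N(0,1), eps ~ N(0, I_d),
    # so Cov(theta) = B B^T + D^2
    z = rng.normal(size=(size, 1))
    eps = rng.normal(size=(size, len(mu)))
    return mu + z * B + eps * d


rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0, 2.0])
B = np.array([0.8, -0.4, 0.3])        # single-factor loadings (d x 1)
d = np.array([0.5, 0.3, 0.7])         # diagonal of D
theta = draw_theta(mu, B, d, rng, 500_000)
target = np.outer(B, B) + np.diag(d**2)
print(np.abs(np.cov(theta.T) - target).max())  # small sampling error
```

Because the same $(z,\boldsymbol{\varepsilon}_\theta)$ draw can be reused while $\boldsymbol{\lambda}_1$ varies, differentiating this map gives the low-variance reparametrization gradient used in the SGA updates.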
\subsection{Required gradients}
The VB methods require the gradient $\nabla_{\theta}\log g(\boldsymbol\theta,\mathbf{x}) = \nabla_{\theta}\log p(\mathbf{y}|\mathbf{x})p(\mathbf{x}|\boldsymbol{\theta})p(\boldsymbol{\theta})$. Note that the derivatives of the priors with respect to their corresponding arguments are:
\begin{align*}
\text{(i)}\hspace{0.3cm} &\frac{\partial \log p(\bar{x})}{\partial\bar{x}}=-\frac{\bar{x}}{1000},\hspace{1cm}\text{(ii)}\ \frac{\partial \log p(\kappa)}{\partial\kappa}=1-2\frac{\rho}{0.995},\hspace{1cm}\text{(iii)}\ \frac{\partial \log p(c)}{\partial c}=-\alpha+\beta e^{-c}.
\end{align*}
With these derivatives we can then construct $$\nabla_{\theta}\log g(\boldsymbol\theta,\mathbf{x}) = \left(\nabla_{\bar{x}}\log g(\boldsymbol\theta,\mathbf{x}),\nabla_{\kappa}\log g(\boldsymbol\theta,\mathbf{x}),\nabla_{c}\log g(\boldsymbol\theta,\mathbf{x})\right)^\top,$$
with each of its elements defined as:
\begin{align*}
\nabla_{\bar{x}}\log g(\boldsymbol\theta,\mathbf{x}) &=\frac{\partial \log p(\bar{x})}{\partial\bar{x}}+\frac{x_1-\bar{x}}{s^2}-\sum_{t=2}^{T}\frac{(\rho-1)}{\sigma}\left[\frac{x_t-\bar{x}-\rho(x_{t-1}-\bar{x})}{\sigma}\right],\\
\nabla_{\kappa}\log g(\boldsymbol\theta,\mathbf{x}) &=\frac{\partial \log p(\kappa)}{\partial\kappa}+\left\{\frac{\rho}{1-\rho^2}\left[\frac{(x_1-\bar{x})^2}{s^2}-1\right]+\sum_{t=2}^{T}\frac{x_{t-1}-\bar{x}}{\sigma^2}\left[x_t-\bar{x}-\rho(x_{t-1}-\bar{x})\right]\right\}\frac{0.995\exp(\kappa)}{(1+\exp(\kappa))^2},\\
\nabla_{c}\log g(\boldsymbol\theta,\mathbf{x}) &=\frac{\partial \log p(c)}{\partial c}-\frac{1}{2}+\frac{(x_1-\bar{x})^2}{2s^2}+\sum_{t=2}^{T}-\frac{e^{\frac{c}{2}}}{2\sigma}+\frac{\left[x_t-\bar{x}-\rho(x_{t-1}-\bar{x})\right]^2e^{\frac{c}{2}}}{2\sigma^3}.
\end{align*}
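As a check on these expressions, the gradient with respect to $c$ can be compared against a finite difference of the log state density. The sketch below assumes the mappings $\sigma=e^{c/2}$, $\rho = 0.995/(1+e^{-\kappa})$ and $s^2=\sigma^2/(1-\rho^2)$, which are inferred from the chain-rule factors above and may differ from the exact parameterization used in the paper; the prior terms are omitted.

```python
import numpy as np

# Numerical check of nabla_c log g (state contribution only; prior omitted).
# Assumed mappings: sigma = exp(c/2), rho = 0.995 * sigmoid(kappa),
# s^2 = sigma^2 / (1 - rho^2). These are inferences, not the paper's code.

def log_p_states(x, xbar, kappa, c):
    rho = 0.995 / (1.0 + np.exp(-kappa))
    sigma = np.exp(c / 2.0)
    s2 = sigma**2 / (1.0 - rho**2)
    resid = x[1:] - xbar - rho * (x[:-1] - xbar)
    lp = -0.5 * np.log(2 * np.pi * s2) - (x[0] - xbar)**2 / (2 * s2)
    return lp + np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                       - resid**2 / (2 * sigma**2))

def grad_c(x, xbar, kappa, c):
    # Analytical expression from the display above, without the prior term.
    rho = 0.995 / (1.0 + np.exp(-kappa))
    sigma = np.exp(c / 2.0)
    s2 = sigma**2 / (1.0 - rho**2)
    resid = x[1:] - xbar - rho * (x[:-1] - xbar)
    return (-0.5 + (x[0] - xbar)**2 / (2 * s2)
            + np.sum(-np.exp(c / 2) / (2 * sigma)
                     + resid**2 * np.exp(c / 2) / (2 * sigma**3)))

x = np.random.default_rng(0).normal(size=50)
eps = 1e-6
fd = (log_p_states(x, 0.1, 1.0, -0.5 + eps)
      - log_p_states(x, 0.1, 1.0, -0.5 - eps)) / (2 * eps)
analytic = grad_c(x, 0.1, 1.0, -0.5)
```

The finite difference and the analytical expression agree to numerical precision under these assumed mappings.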
\section{Additional results from the numerical experiments}\label{A:numerical}
\begin{figure}[H]
\caption{Posterior intervals using different update frequencies of $q(\mathbf{x}|\mathbf{y})$}
\centering
\includegraphics*[width=.9\textwidth]{Figs/SensitivityUpdate.eps}\\[3mm]
\begin{flushleft}
This figure shows the 99\% posterior intervals for the parameters of the model. The first row corresponds to a sample size of $T = 500$. The second row corresponds to a sample size of $T=4000$. The gray areas indicate the posterior intervals for MCMC, while the red lines indicate those for Efficient VB. The horizontal dashed lines indicate the true values in the data generating process.
\end{flushleft}
\label{fig:SVlargeApp}
\end{figure}
\section{Implementation details for the TVP-VAR-SV model}\label{A:var}
This section demonstrates how the TVP-VAR-SV model in \eqref{eq:tvpvarsv} can be represented by the $N$ unrelated equations in \eqref{eq:ssm_tvpvarsv}. First, pre-multiply~\eqref{eq:tvpvarsv}
by $L_t$, so that
\begin{eqnarray*}
L_t\mathbf{y}_t &= &L_t\text{\boldmath$\beta$}_{0,t}+\sum_{s=1}^p L_tB_{s,t}\mathbf{y}_{t-s}+\text{\boldmath$\epsilon$}_t
= \text{\boldmath$\gamma$}_{0,t}+\sum_{s=1}^p\Gamma_{s,t}\mathbf{y}_{t-s}+\text{\boldmath$\epsilon$}_t,
\end{eqnarray*}
where $\Gamma_{s,t} = L_tB_{s,t}$ and $\text{\boldmath$\gamma$}_{0,t} = L_t\text{\boldmath$\beta$}_{0,t}$.
Denote the
non-fixed elements of the $i$th row of $L_t$ as $\boldsymbol{l}_{1:i-1,t} = \left(l_{i,1,t},\dots,l_{i,i-1,t}\right)^\top$
for $i\geq 2$, so that the entire $i$th row of $L_t$ is $(\boldsymbol{l}_{1:i-1,t}^\top,1,\boldsymbol{0}_{N-i}^\top)$.
Then, each of the $i=1,\ldots,N$ individual equations of the model can be written as
\begin{equation}
y_{i,t}+\mathbf{y}_{1:i-1,t}^\top\boldsymbol{l}_{1:i-1,t} =\left(\mathbf{y}_{t-1}^\top,\dots,\mathbf{y}_{t-p}^\top,1\right)\boldsymbol{\gamma}_{i,t}+\epsilon_{i,t},
\label{eq:tvpvarsveqi}
\end{equation}
where $\mathbf{y}_{1:i-1,t} = \left(y_{1,t},\dots,y_{i-1,t}\right)^\top$, $\boldsymbol{\gamma}_{i,t} = \left(\Gamma_{i,1,t},\dots,\Gamma_{i,p,t},\gamma_{i,0,t}\right)^\top$,
$\Gamma_{i,s,t}$ denotes the $i$th row of $\Gamma_{s,t}$,
$\gamma_{i,0,t}$ is the $i$th element in $\boldsymbol{\gamma}_{0,t}$, and $\epsilon_{i,t}\sim N(0,\exp(h_{i,t}))$.
The $i$th equation can be simplified to
\[
y_{i,t}=\mathbf{z}_{i,t}^\top\text{\boldmath$\eta$}_{i,t}+\epsilon_{i,t},
\]
where $\mathbf{z}_{i,t}^\top=\left(\mathbf{y}_{t-1}^\top,\dots,\mathbf{y}_{t-p}^\top,1,-\mathbf{y}_{1:i-1,t}^\top\right)$
and $\text{\boldmath$\eta$}_{i,t}^\top = \left(\text{\boldmath$\gamma$}_{i,t}^\top,\boldsymbol{l}_{1:i-1,t}^\top\right)$. The $\text{\boldmath$\eta$}_{i,t}$ state vector notation is not to be confused with the function $\boldsymbol\eta(.,.)$ employed to define exponential density functions in Section \ref{sec:vb}.
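The rewriting above can be verified numerically. The sketch below draws random coefficients, solves the structural form for $\mathbf{y}_t$, and checks \eqref{eq:tvpvarsveqi} for one equation; all values are purely illustrative.

```python
import numpy as np

# Illustrative check that pre-multiplying the VAR by the unit lower-triangular
# L_t turns equation i into y_{i,t} + y_{1:i-1,t}' l_{1:i-1,t} = z' gamma_i + eps_i.
# N, p, the seed, and all coefficients are arbitrary.

rng = np.random.default_rng(1)
N, p = 3, 2
L = np.eye(N) + np.tril(rng.normal(size=(N, N)), -1)  # unit lower-triangular L_t
beta0 = rng.normal(size=N)
B = [rng.normal(size=(N, N)) for _ in range(p)]       # B_{1,t}, ..., B_{p,t}
ylags = [rng.normal(size=N) for _ in range(p)]        # y_{t-1}, ..., y_{t-p}
eps = rng.normal(size=N)

# Solve the structural form L y_t = L beta0 + sum_s L B_s y_{t-s} + eps for y_t.
rhs = L @ beta0 + sum(L @ B[s] @ ylags[s] for s in range(p)) + eps
y = np.linalg.solve(L, rhs)

i = 2                                                 # third equation, 0-based
gamma_i = np.concatenate([(L @ B[s])[i] for s in range(p)]
                         + [[(L @ beta0)[i]]])        # (Gamma_{i,1},...,Gamma_{i,p},gamma_{i,0})
z = np.concatenate(ylags + [[1.0]])                   # (y_{t-1}',...,y_{t-p}',1)
lhs = y[i] + y[:i] @ L[i, :i]                         # left-hand side of the i-th equation
```

The left- and right-hand sides of the $i$th equation coincide up to floating-point error.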
In this representation, the coefficient vector is of dimension $Np+i=J_i/2$ and follows the
random walk
$\text{\boldmath$\eta$}_{i,t}=\text{\boldmath$\eta$}_{i,t-1}+\mbox{diag}({\boldsymbol{\alpha}_{2,i}})\tilde{\text{\boldmath$\varepsilon$}}_{i,t}$,
with $\tilde{\text{\boldmath$\varepsilon$}}_{i,t}\sim N(\boldsymbol{0},I_{J_i/2})$.
The coefficients $\text{\boldmath$\eta$}_{i,t}$ are
transformed to the ``non-centered'' representation
$\text{\boldmath$\eta$}_{i,t}=\boldsymbol{\alpha}_{1,i}+\mbox{diag}(\boldsymbol{\alpha}_{2,i})\tilde{\text{\boldmath$\eta$}}_{i,t}$ as a sum of a time-invariant term $\boldsymbol{\alpha}_{1,i}$
and scaled time-varying deviations $\tilde{\text{\boldmath$\eta$}}_{i,t}$.
Substituting in this parameterization gives
the state space model
\begin{eqnarray}
y_{i,t} &= & ({\mathbf{z}}_{i,t}^\top,{\mathbf{z}}_{i,t}^\top\text{diag}(\tilde{\boldsymbol{\eta}}_{i,t}))\boldsymbol{\alpha}_i+\epsilon_{i,t}, \nonumber\\
\tilde{\boldsymbol{\eta}}_{i,t}&=&\tilde{\boldsymbol{\eta}}_{i,t-1}+\tilde{\text{\boldmath$\varepsilon$}}_{i,t},\nonumber\\
h_{i,t}& = & \bar{h}_i+\rho(h_{i,t-1}-\bar{h}_i)+e_{i,t},\mbox{ for }
i=1,\ldots,N,
\label{eq:apptvp1}
\end{eqnarray}
with $\text{\boldmath$\alpha$}_i^\top=(\boldsymbol{\alpha}_{1,i}^\top,\boldsymbol{\alpha}_{2,i}^\top)$.
\cite{huber2021inducing} employ a horseshoe prior for the vector of coefficients $\text{\boldmath$\alpha$}_i = \left(\alpha_{i,1},\dots,\alpha_{i,J_i}\right)^\top$, which can be represented as
\begin{align}
\alpha_{i,j}|\xi_i,\chi_{i,j}\sim N(0,\xi_i\chi_{i,j}),\qquad \chi_{i,j}|\nu_{i,j}\sim \mathcal{G}^{-1}\left(\frac{1}{2},\frac{1}{\nu_{i,j}}\right),\\ \xi_i|\kappa_{i}\sim \mathcal{G}^{-1}\left(\frac{1}{2},\frac{1}{\kappa_{i}}\right),\qquad \nu_{i,1},\dots,\nu_{i,J_i},\kappa_{i}\sim \mathcal{G}^{-1}\left(\frac{1}{2},1\right).
\end{align}
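A minimal sketch of draws from this hierarchy uses the fact that $b/G \sim \mathcal{G}^{-1}(a,b)$ when $G\sim\text{Gamma}(a,1)$; the dimension $J$ and the seed are illustrative.

```python
import numpy as np

# Draws from the inverse-gamma representation of the horseshoe prior above,
# together with the non-centered form alpha = sqrt(xi) * (tau o sqrt(chi)).

rng = np.random.default_rng(2)
J = 5

def inv_gamma(shape, scale, size=None):
    # If G ~ Gamma(shape, rate = 1), then scale / G ~ Inverse-Gamma(shape, scale).
    return scale / rng.gamma(shape, 1.0, size=size)

kappa = inv_gamma(0.5, 1.0)
xi = inv_gamma(0.5, 1.0 / kappa)                 # global scale
nu = inv_gamma(0.5, 1.0, size=J)
chi = inv_gamma(0.5, 1.0 / nu, size=J)           # local scales
alpha = rng.normal(size=J) * np.sqrt(xi * chi)   # centered parameterization

# Non-centered parameterization with tau_j ~ N(0, 1), as used in the text:
tau = rng.normal(size=J)
alpha_nc = np.sqrt(xi) * (tau * np.sqrt(chi))
```

Both parameterizations produce draws with the same distribution; the non-centered one is the form whose posterior is amenable to Gaussian approximation.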
As noted in \cite{ingraham2017variational}, the horseshoe prior can lead to funnel-shaped posteriors for the elements of $\boldsymbol{\alpha}_i$, which cannot be approximated well using Gaussian distributions. This hinders the performance of Gaussian variational inference. The authors suggest transforming $\boldsymbol{\alpha}_i$ into $\boldsymbol{\tau}_i = \left(\tau_{i,1},\dots,\tau_{i,J_i}\right)^\top$, where $\alpha_{i,j} = \tau_{i,j}\sqrt{\xi_i\chi_{i,j}}$ with prior $\tau_{i,j}\sim N(0,1)$. Unlike those of $\boldsymbol{\alpha}_i$, the posterior distributions of the elements of $\boldsymbol{\tau}_i$ can be easily approximated by Gaussians. Note that $\boldsymbol{\alpha}_i = \sqrt{\xi_i}(\boldsymbol{\tau}_i\circ\sqrt{\boldsymbol{\chi}_i})$, where `$\circ$' denotes the Hadamard product. We can substitute this expression into \eqref{eq:apptvp1} to obtain the measurement equation in \eqref{eq:ssm_tvpvarsv}. The state equation in \eqref{eq:ssm_tvpvarsv} can be recovered by stacking the latent states in the single vector $\mathbf{x}_{i,t} = (\tilde{\text{\boldmath$\eta$}}_{i,t}^\top,h_{i,t})^\top$, such that
$$\mathbf{x}_{i,t} = \bar{\mathbf{x}}_i+A_{1,i}\mathbf{x}_{i,t-1}+A_{2,i}\boldsymbol{\varepsilon}_{i,t},
$$
where
$$\bar{\mathbf{x}}_i =\begin{bmatrix}
\boldsymbol{0}_{J_i/2} \\
\bar{h}_i(1-\rho_i)
\end{bmatrix}, \text{ } A_{1,i} = \begin{bmatrix}
I_{J_i/2} & \boldsymbol{0}_{J_i/2} \\
\boldsymbol{0}_{J_i/2}^\top & \rho_i
\end{bmatrix} \text{, } A_{2,i}= \begin{bmatrix}
I_{J_i/2} & \boldsymbol{0}_{J_i/2} \\
\boldsymbol{0}_{J_i/2}^\top & \sigma_i
\end{bmatrix},$$
and $\text{\boldmath$\varepsilon$}_{i,t}\sim N(\boldsymbol{0},I_{(J_i/2)+1}).$
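Since the off-diagonal blocks of $A_{1,i}$ and $A_{2,i}$ are zero, both matrices are diagonal. The sketch below builds them from $(\rho_i,\sigma_i,\bar h_i)$ and simulates one transition of the stacked state; $J_2$ denotes $J_i/2$ and all values are illustrative.

```python
import numpy as np

# Builds xbar_i, A_{1,i}, A_{2,i} as displayed above and simulates one step of
# x_{i,t} = xbar_i + A_{1,i} x_{i,t-1} + A_{2,i} eps_{i,t}, eps ~ N(0, I).

rng = np.random.default_rng(3)
J2, rho, sigma, hbar = 4, 0.9, 0.3, -1.0     # illustrative values

xbar = np.concatenate([np.zeros(J2), [hbar * (1 - rho)]])
A1 = np.diag(np.concatenate([np.ones(J2), [rho]]))    # identity block, then rho
A2 = np.diag(np.concatenate([np.ones(J2), [sigma]]))  # identity block, then sigma

x_prev = rng.normal(size=J2 + 1)
x_next = xbar + A1 @ x_prev + A2 @ rng.normal(size=J2 + 1)
# First J2 components: random walk for eta-tilde; last: stationary AR(1) for h.
```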
\clearpage
\section{Additional results from the empirical application}\label{A:application}
\begin{figure}[h!]
\caption{ELBO for the TVP-VAR-SV model}
\centering
\includegraphics*[scale = 0.7,trim={0cm 1.5cm 0 1.3cm}]{Figs/TVPVARSV_ELBO.eps}
\begin{flushleft}
This figure presents the ELBO traces for each of the equations in the TVP-VAR-SV empirical application. The yellow lines correspond to Efficient VB while the blue lines correspond to Gaussian VB.
\end{flushleft}
\label{fig:VARELBO}
\end{figure}
\begin{figure}[tb!]
\caption{Posterior means of the time-varying volatilities for all equations}
\centering
\includegraphics[scale=0.7]{Figs/TVP_VAR_SV_states.eps}\\[3mm]
\begin{flushleft}
\end{flushleft}
\label{fig:VolstatesAll}
\end{figure}
\section{Empirical application}\label{sec:application}
To illustrate Efficient VB with real data, we fit a time-varying parameter vector autoregression with a stochastic volatility model (TVP-VAR-SV) to eight macroeconomic variables. This application to a model with a nonlinear measurement equation and a high-dimensional state vector shows that our approach is also fast and accurate in complex state space models.
The data contains 150 quarterly observations from 1980:Q3 to 2017:Q4 on eight macroeconomic variables. The FRED mnemonics for these variables are GDPC1, PCECC96, FPIx, CE16OV, CES0600000007, GDPCTPI, CES0600000008, and FEDFUNDS. We fit a TVP-VAR-SV with a lag length of 2. The data set is described in detail by \citet{huber2021inducing}.
\subsection{TVP-VAR-SV model}
The VAR representation of the model is
\begin{align}\label{eq:tvpvarsv}
\mathbf{y}_t &= \text{\boldmath$\beta$}_{0,t}+\sum_{s=1}^p B_{s,t}\mathbf{y}_{t-s}+L_t^{-1}\text{\boldmath$\epsilon$}_t, \quad\text{\boldmath$\epsilon$}_t \sim N(\boldsymbol{0},H_t),\nonumber\\
\text{\boldmath$\beta$}_t &= \text{\boldmath$\beta$}_{t-1} + \mathbf{w}_t, \qquad\qquad\qquad\qquad\mathbf{w}_t \sim N(\boldsymbol{0},V), \nonumber \\
h_{i,t} &= \bar{h}_i+\rho_i(h_{i,t-1}-\bar{h}_i)+e_{i,t}, \quad e_{i,t} \sim N(0,\sigma_i^2),\quad \mbox{ for }
i=1,\ldots,N,
\end{align}
where $\mathbf{y}_t=(y_{1,t},y_{2,t},\ldots,y_{N,t})^\top$ represents the $N$ macroeconomic variables at time $t$, $L_t^{-1}$ is a lower triangular matrix with unit-valued diagonal elements and lower-diagonal elements denoted as $\boldsymbol{l}_t$, $\text{\boldmath$\beta$}_{0,t}$ is the intercept vector, $B_{1,t},\ldots,B_{p,t}$
are $(N\times N)$ autoregressive coefficient matrices, and $H_t=\mbox{diag}(e^{h_{1,t}},\ldots,e^{h_{N,t}})$ is a diagonal matrix.
The $K=(pN^2+N+N(N-1)/2)$ time-varying coefficients are collected in the $K$-dimensional vector $\text{\boldmath$\beta$}_t^\top\equiv
(\text{\boldmath$\beta$}_{0,t}^\top,\mbox{vec}(B_{1,t})^\top,\ldots,\mbox{vec}(B_{p,t})^\top,\boldsymbol{l}_t^\top)$ and $V=\mbox{diag}(v_1,\ldots,v_K)$ is a diagonal matrix.
The logarithms of the volatilities
$h_{i,1},\ldots,h_{i,T}$ follow a stationary
first-order autoregression with mean $\bar{h}_i$ and autoregressive parameter $|\rho_i|<1$.
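To make the model concrete, the following sketch simulates from \eqref{eq:tvpvarsv} with $p=1$ lag for brevity; all dimensions, parameter values, and the seed are illustrative and unrelated to the empirical application.

```python
import numpy as np

# Simulate T observations from a small TVP-VAR-SV with p = 1: random-walk
# coefficients beta_t = (beta0, vec(B1), l), AR(1) log-volatilities, and
# y_t = beta0 + B1 y_{t-1} + L^{-1} eps_t with eps_t ~ N(0, diag(exp(h_t))).

rng = np.random.default_rng(4)
N, T = 3, 200
hbar, rho, sig = -1.0, 0.95, 0.2
K = N**2 + N + N * (N - 1) // 2                  # number of time-varying coefficients
V = np.full(K, 1e-4)                             # state innovation variances

beta = rng.normal(scale=0.05, size=K)
h = np.full(N, hbar)
y_prev, Y = np.zeros(N), []
for t in range(T):
    beta = beta + rng.normal(size=K) * np.sqrt(V)           # random-walk coefficients
    h = hbar + rho * (h - hbar) + sig * rng.normal(size=N)  # log-volatilities
    b0, B1, l = beta[:N], beta[N:N + N * N].reshape(N, N), beta[N + N * N:]
    Linv = np.eye(N)
    Linv[np.tril_indices(N, -1)] = l                        # unit lower-triangular L^{-1}
    eps = rng.normal(size=N) * np.exp(h / 2)
    y_prev = b0 + B1 @ y_prev + Linv @ eps
    Y.append(y_prev)
Y = np.array(Y)
```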
We use a horseshoe prior to regularize the time-varying parameters, as proposed by \citet{huber2021inducing}.
Estimation of the joint model in \eqref{eq:tvpvarsv} is difficult, and therefore it is common to transform the VAR model to $N$ unrelated regressions \citep{carriero2019large,kastner2020sparse}. Moreover, horseshoe priors are known to result in posterior densities that are difficult to approximate \citep{ghosh2019model}, which can be solved by adopting the re-parametrization proposed by \citet{ingraham2017variational}. Appendix~\ref{A:var} shows that after these two transformations, \eqref{eq:tvpvarsv} can be represented by the $N$ state space models, indexed by $i=1,\dots,N$:
\begin{align}\label{eq:ssm_tvpvarsv}
p(y_{i,t}|\mathbf{x}_{i,t},\boldsymbol\theta_i)& = \phi_1(y_{i,t};({\boldsymbol{z}}_{i,t}^\top,{\boldsymbol{z}}_{i,t}^\top\text{diag}(\tilde{\boldsymbol{\eta}}_{i,t}))\text{\boldmath$\alpha$}_i,e^{h_{i,t}}),\nonumber\\
p(\mathbf{x}_{i,t}|\mathbf{x}_{i,t-1},\boldsymbol{\theta}_i)& = \phi_{(Np+i+1)}(\mathbf{x}_{i,t};\bar{\mathbf{x}}_i+A_{1,i}\mathbf{x}_{i,t-1},A_{2,i}^2),\,
\end{align}
where $\mathbf{x}_{i,t} = (\tilde{\boldsymbol{\eta}}_{i,t}^\top,h_{i,t})^\top$ is the $(Np+i+1)$-dimensional state vector, with $\tilde{\boldsymbol{\eta}}_{i,t}^\top$ a function of the coefficient vector $\boldsymbol\beta_t$. The parameter vector for equation $i$ is defined as $\boldsymbol{\theta}_i=\left(\boldsymbol{\tau}_i^\top,\boldsymbol{\chi}_i^\top,\xi_i,\bar{h}_i,\rho_i,\sigma_i^2\right)^\top$, with $\boldsymbol{\alpha}_i = \sqrt{\xi_i}(\boldsymbol{\tau}_i\circ\sqrt{\boldsymbol{\chi}_i})$ and $J_i$-dimensional parameter vectors $\text{\boldmath$\tau$}_i = (\tau_{i,1},\dots,\tau_{i,J_i})^\top$ and $\sqrt{\boldsymbol{\chi}_i} = \left(\chi_{i,1}^{1/2},\dots,\chi_{i,J_i}^{1/2}\right)^\top$ with $J_i=2(pN+i)$, and scalar parameter $\xi_i$.
The $(Np+i)$-dimensional vector ${\boldsymbol{z}}_{i,t}=\left(\boldsymbol{y}_{t-1}^\top,\dots,\boldsymbol{y}_{t-p}^\top,1,-\boldsymbol{y}_{1:i-1,t}^\top\right)^\top$ with $\boldsymbol{y}_{1:i-1,t} = \left(y_{1,t},\dots,y_{i-1,t}\right)^\top$, represents the covariates in the measurement density. The parameters in the state density $\bar{\mathbf{x}}_i$, $A_{1,i}$ and $A_{2,i}$ are functions of $\rho_i$, $\sigma_i$ and $\bar{h}_i$. Here, $A_{2,i}^2$ denotes the operation of squaring each of the elements in $A_{2,i}.$
Since the state vector $\mathbf{x}_{i,t}$ is high-dimensional and enters the measurement equation non-linearly via $h_{i,t}$, the states cannot be analytically integrated out of the likelihood function. Hence, we consider the augmented posterior distribution. Let $\mathbf{y}_{(i)} \equiv (y_{i,1},\ldots,y_{i,T})^\top$ be the observations on the $i$th macroeconomic variable, $\mathbf{y}_{(\backslash i)}$ be the observations on the other $N-1$ macroeconomic variables, and $\mathbf{x}_{(i)}\equiv (\mathbf{x}_{i,1}^\top,\ldots,\mathbf{x}_{i,T}^\top)^\top$ the latent states in the $i$th equation; then the augmented posterior is
\begin{align}\label{eq:augpost}
p(\text{\boldmath$\theta$}_i,\mathbf{x}_{(i)}|\mathbf{y}) \propto p(\mathbf{y}_{(i)}|\mathbf{x}_{(i)},\mathbf{y}_{(\backslash i)})p(\mathbf{x}_{(i)}|\text{\boldmath$\theta$}_i)p(\text{\boldmath$\theta$}_i)
=
\prod_{t=1}^{T}\left\{\phi_{1}\left(y_{i,t};({\boldsymbol{z}}_{i,t}^\top,{\boldsymbol{z}}_{i,t}^\top\text{diag}(\tilde{\boldsymbol{\eta}}_{i,t}))
\text{\boldmath$\alpha$}_i
,e^{h_{i,t}}\right)\right\}\times\\
\phi_{(Np+i+1)}\left( \mathbf{x}_{i,1}; \bar{\mathbf{x}}_{i,1},V_{i,1}\right)
\prod_{t=2}^T \left\{\phi_{(Np+i+1)}\left( \mathbf{x}_{i,t}; \bar{\mathbf{x}}_i+A_{1,i}\mathbf{x}_{i,t-1},A_{2,i}^2\right)\right\}
p(\text{\boldmath$\theta$}_i),
\end{align}
where $\bar{\mathbf{x}}_{i,1} = (\boldsymbol{0}_{Np+i}^\top,\bar{h}_i)^\top$ and $V_{i,1} = \text{diag}((\boldsymbol{1}_{Np+i}^\top,\frac{\sigma_i^2}{1-\rho_i^2}))$.
The prior density for $\boldsymbol\theta_i$ is specified as $p(\text{\boldmath$\theta$}_i)=p(\xi_{i}|\kappa_{i})p(\kappa_i)p(\bar{h}_i)p(\rho_i)p(\sigma_i^2)\prod_{j=1}^{J_i}p(\tau_{i,j})p(\chi_{i,j}|\nu_{i,j})p(\nu_{i,j})$, with $p(\xi_{i}|\kappa_{i})=$Inverse-Gamma(0.5,$\kappa_i^{-1})$, $p(\kappa_i)=$Inverse-Gamma(0.5,1), $p(\bar{h}_i)=N(0,100)$, $p((\rho_i+1)/2)=$Beta(25,5), $p(\sigma_i^2)$=Gamma(0.5,0.5), $p(\tau_{i,j})=N(0,1)$, $p(\chi_{i,j}|\nu_{i,j})=$Inverse-Gamma(0.5,$\nu_{i,j}^{-1})$, and $p(\nu_{i,j})=$Inverse-Gamma(0.5,1).
The MCMC sampler from the augmented posterior in \eqref{eq:augpost} is discussed by \citet{huber2021inducing}.
\subsection{Variational approximations}
Each separate augmented posterior of the model admits an approximation $q_\lambda(\boldsymbol{\theta}_i,\mathbf{x}_{(i)}) = q_\lambda(\boldsymbol{\theta}_i)q(\mathbf{x}_{(i)}|\mathbf{y})$, as proposed in \eqref{eq:vas}.
Since the state transition is a multivariate Gaussian, the sufficient summary vector is $\boldsymbol{T}(\mathbf{x}_{i,t})=(\mathbf{x}_{i,t}^\top,\text{vec}(\mathbf{x}_{i,t}\mathbf{x}_{i,t}^\top)^\top)^\top$. We define the vector of kernel parameters to be $\boldsymbol a_{i,t} = (\mathbf{b}_{i,t}^\top,\text{vec}(C_{i,t})^\top)^\top$ with $\boldsymbol{b}_{i,t}$ an $(Np+i+1)$-dimensional vector and $C_{i,t} = \text{diag}(\boldsymbol{c}_{i,t})$ specified as a diagonal matrix for computational efficiency, where $\boldsymbol{c}_{i,t}$ is an $(Np+i+1)$-dimensional vector.
The approximation to the states $q(\mathbf{x}_{(i)}|\mathbf{y}) = \prod_{t=1}^Tq(\mathbf{x}_{i,t}|\mathbf{x}_{i,t-1},\mathbf{y},\boldsymbol\varphi_i)$ is a product of Gaussian densities such that $q(\mathbf{x}_{i,t}|\mathbf{x}_{i,t-1},\mathbf{y},\boldsymbol\varphi_i) = \phi_{(Np+i+1)}(\mathbf{x}_{i,t};\boldsymbol\mu_{i,t},\Sigma_{i,t})$ with
$\Sigma_{i,t} = \left(A_{2,i}^{-2}-2C_{i,t}\right)^{-1}$, and $\boldsymbol{\mu}_{i,t} = \Sigma_{i,t} \left(\boldsymbol{b}_{i,t}+A_{2,i}^{-2}\left(\bar{\mathbf{x}}_i+A_{1,i}\mathbf{x}_{i,t-1}\right)\right)$.
The variational approximation in our VB approach requires an expression for the integration constant of the transition kernel, as defined in \eqref{eq:kernel}, which equals
\begin{align}
\chi(\mathbf{x}_{i,t-1}|\boldsymbol a_{i,t},\boldsymbol\varphi_i)
&= \exp\left[ \frac{1}{2}\log \frac{|\Sigma_{i,t}|}{|A_{2,i}^2|}+\frac{1}{2}\boldsymbol{\mu}_{i,t}^\top\Sigma_{i,t}^{-1}\boldsymbol{\mu}_{i,t}-\frac{1}{2}\left(\bar{\mathbf{x}}_i+A_{1,i}\mathbf{x}_{i,t-1}\right)^\top A_{2,i}^{-2}\left(\bar{\mathbf{x}}_i+A_{1,i}\mathbf{x}_{i,t-1}\right) \right]. \notag
\end{align}
While not made explicit in the notation, the parameters $\bar{\mathbf{x}}_i$, $A_{1,i}$ and $A_{2,i}$ are determined by the proxy parameter vector $\boldsymbol\varphi_i$.
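With $C_{i,t}$ diagonal, $\Sigma_{i,t}$, $\boldsymbol\mu_{i,t}$ and $\log\chi$ reduce to element-wise operations. The sketch below uses illustrative dimensions and randomly drawn values, and also verifies the completing-the-square identity $p(\mathbf{x}_t|\mathbf{x}_{t-1})\exp(\boldsymbol a_t^\top\boldsymbol T(\mathbf{x}_t)) = \chi\, q(\mathbf{x}_t|\mathbf{x}_{t-1})$ at a random point.

```python
import numpy as np

# Sigma_{i,t}, mu_{i,t} and log chi from diagonal kernel parameters (b_t, c_t),
# following the expressions above. The conditional prior mean m stands in for
# xbar_i + A_{1,i} x_{i,t-1}; all values are illustrative only.

rng = np.random.default_rng(5)
d = 4                                    # state dimension Np + i + 1
a2 = np.abs(rng.normal(size=d)) + 0.5    # diagonal of A_{2,i}
c_t = -np.abs(rng.normal(size=d))        # diagonal of C_{i,t}; negative => Sigma > 0
b_t = rng.normal(size=d)
m = rng.normal(size=d)                   # conditional prior mean

Sigma = 1.0 / (a2**-2 - 2.0 * c_t)       # diagonal of Sigma_{i,t}
mu = Sigma * (b_t + m / a2**2)
log_chi = 0.5 * (np.sum(np.log(Sigma)) - np.sum(np.log(a2**2))
                 + mu @ (mu / Sigma) - m @ (m / a2**2))

# Check: log p(x; m, A2^2) + b'x + x'Cx == log chi + log q(x; mu, Sigma).
def log_phi(x, mean, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean)**2 / var)

x = rng.normal(size=d)
lhs = log_phi(x, m, a2**2) + b_t @ x + x @ (c_t * x)
rhs = log_chi + log_phi(x, mu, Sigma)
```

The identity holds for any $\mathbf{x}_t$, which is what makes $q$ a properly normalized Gaussian with the stated mean and variance.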
Since we assume $C_{i,t}$ to be diagonal, only the kernel parameters $\boldsymbol{b}_{i,t}$ and $\boldsymbol{c}_{i,t}$ have to be calibrated. Hence, $\boldsymbol{\gamma}_{i,t}^\top\boldsymbol{T}(\mathbf{x}_{i,t}^{[s]})$ boils down to $\boldsymbol{\tilde{\gamma}}_{i,t}^\top(\mathbf{x}_{i,t}^\top,(\mathbf{x}_{i,t}^2)^\top )^\top$ in Algorithm~\ref{alg:vi2} in Appendix~\ref{app:calibration}.
In addition to our method, we also consider Gaussian and Hybrid variational approximations. The gradients for implementation of our variational approximation and the benchmark methods are provided in \cite{loaiza2022fast}.
MCMC is implemented using a burn-in sample of 15,000 draws and an inference sample of 15,000 draws.
We run all three VB algorithms for a total of 10,000 iterations. Appendix~\ref{A:application} shows that this number of iterations is enough to achieve convergence.
\subsection{Results}
The computation time of Efficient VB is 2.170 minutes. This is faster than Gaussian VB and Hybrid VB, which take 2.452 and 8.791 minutes, respectively. Since MCMC takes 26.390 minutes, Efficient VB uses less than 9\% of the time required for MCMC.
\subsubsection{Posterior distribution of the states}
The TVP-VAR-SV model in \eqref{eq:tvpvarsv} contains a total of 172 states at each of the 150 time periods. To illustrate the posterior estimates for the state vectors, we consider the posterior mean of one of the time-varying VAR coefficients and of one of the time-varying volatilities.
First, Figure~\ref{fig:Coefstates} shows the posterior mean of one of the time-varying VAR coefficients in \eqref{eq:tvpvarsv}, $B_{2,t}(1,3)$, across time $t$, for Efficient VB, Gaussian VB, and MCMC. Recall that Hybrid VB uses the exact conditional density of the states in its variational approximation, and hence its posterior mean for the states is very similar to that of MCMC and not included in the figure. Many of the time-varying VAR coefficients are regularized to zero by all methods. We therefore illustrate the posterior state distributions with the posterior mean of $B_{2,t}(1,3)$, a coefficient that does exhibit time dynamics, with differences in these dynamics across the methods.
The posterior mean for Efficient VB is similar to the posterior mean for MCMC over the whole sample period. Both show substantial variation over time, with a posterior mean close to zero in 1980, increasingly positive between 1980 and 1995, decreasing between 1995 and 2010, and close to zero again between 2010 and 2017. The posterior mean for Gaussian VB follows a different time path with a small amount of variation and close to zero across the whole sample.
\begin{figure}[tb!]
\caption{Posterior mean of a time-varying VAR coefficient}
\centering
\includegraphics*[width=\textwidth]{Figs/B_state.eps}\\[3mm]
\begin{flushleft}
This figure shows the posterior mean of $B_{2,t}(1,3)$ in \eqref{eq:tvpvarsv} across time $t$, for Efficient VB, Gaussian VB, and MCMC, indicated by the solid yellow, dotted blue, and solid black line, respectively.
\end{flushleft}
\label{fig:Coefstates}
\end{figure}
Second, Figure~\ref{fig:Volstates} shows the posterior mean of one of the time-varying volatilities in \eqref{eq:tvpvarsv}, $\exp(h_{4,t}/2)$, across time $t$. The results are similar to the posterior means for the time-varying coefficients. Efficient VB approximates the posterior mean for MCMC more accurately than Gaussian VB. Similar to the findings for Figure~\ref{fig:SVstates} in the numerical experiment, Gaussian VB seems to overestimate the states compared to the posterior means for MCMC. Figure \ref{fig:VolstatesAll} in Appendix~\ref{A:application} shows that a similar conclusion can be drawn from the posterior means of the time-varying volatilities of the remaining equations.
\begin{figure}[tb!]
\caption{Posterior mean of a time-varying volatility}
\centering
\includegraphics*[width=\textwidth]{Figs/TVPVARSVestimatesvols.eps}\\[3mm]
\begin{flushleft}
This figure shows the posterior mean of $\exp(h_{4,t}/2)$ in \eqref{eq:tvpvarsv} across time $t$, for Efficient VB, Gaussian VB, and MCMC, indicated by the solid yellow, dotted blue, and solid black line, respectively.
\end{flushleft}
\label{fig:Volstates}
\end{figure}
\subsubsection{Posterior distribution of the parameters}
To assess the accuracy of the posterior distribution for the parameters, we focus on the parameters in the stochastic volatility model for Real Gross Domestic
Product (GDPC1), which is the first variable in \eqref{eq:tvpvarsv}. Figure~\ref{fig:Param} shows the posterior parameter distributions for $\bar{h}_1$, $\rho_1$, and $\sigma_1^2$. The posterior of Efficient VB is close to the location of the posterior of MCMC for all three parameters, but underestimates the posterior variances. Although slightly more accurate, we find the same for Hybrid VB. The location of the posterior distribution of Gaussian VB is less accurate for $\rho_1$ and $\sigma_1^2$.
\begin{figure}[tb!]
\caption{Posterior parameter distributions of the stochastic volatility model}
\centering
\includegraphics*[scale=0.7]{Figs/TVPSVestimates.eps}\\[3mm]
\begin{flushleft}
This figure shows the posterior parameter distributions in the stochastic volatility model for variable 1 in \eqref{eq:tvpvarsv}, for Efficient VB, Gaussian VB, Hybrid VB, and MCMC. These are indicated by the solid yellow, dotted blue, dashed purple, and solid black line, respectively. Panel (a) shows the posterior distribution for $\bar{h}_1$, Panel (b) for $\rho_1$, and Panel (c) for $\sigma_1^2$.
\end{flushleft}
\label{fig:Param}
\end{figure}
Table~\ref{tab:elbo} shows the ELBO values for the augmented posterior for both Efficient VB and Gaussian VB, for each equation in \eqref{eq:ssm_tvpvarsv}, averaged over the final 100 VB iterations and divided by one thousand. These numbers summarize the accuracy of the variational approximations to the augmented posterior, with larger numbers being preferred. We find that Efficient VB produces larger ELBOs for all equations.
\begin{table}[tb!]
\centering
\caption{ELBO values for each state space model in \eqref{eq:ssm_tvpvarsv}}
\begin{threeparttable}
\begin{tabular}{lrrrrrrrr}
\toprule \toprule
Equation & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 \\ \midrule
Efficient VB & -0.140 & -0.136 & -0.109 & -0.165 & -0.179 & -0.157 & -0.160 & -0.239 \\
Gaussian VB & -0.960 & -1.005 & -1.031 & -1.107 & -1.181 & -1.262 & -1.296 & -1.244 \\
\bottomrule \bottomrule
\end{tabular}%
\begin{tablenotes}
\footnotesize
\item This table shows the ELBO values for each equation $i$ in \eqref{eq:ssm_tvpvarsv}, averaged over the final 100 VB iterations and divided by one thousand, for Efficient VB and Gaussian VB. Note that the ELBO cannot be calculated for Hybrid VB.
\end{tablenotes}
\end{threeparttable}
\label{tab:elbo}
\end{table}
The results are similar to the findings for Figure~\ref{fig:SVsmall} in the numerical experiment, with a smaller sample and a more complex model. If the empirical results are consistent with the numerical analyses in Figure~\ref{fig:SVlarge}, the bias in Gaussian VB is expected to increase as the sample grows, while Efficient VB is not expected to lose accuracy.
\section{Conclusion}\label{sec:conclusion}
This paper proposes a variational Bayes method for state space models that uses a new variational approximation to the states. This approximation conditions on the observed data, which results in an accurate approximation to the posterior distribution of both the states and the model parameters. Since the approximation is calibrated in a computationally efficient way, the method is fast and scalable to a large number of states and a large number of observations. The combination of accuracy and speed of the variational approximation is illustrated in numerical experiments with a simple stochastic volatility model and an empirical application with a modern macroeconomic time-varying parameter vector autoregression with stochastic volatility.
The proposed efficient variational Bayes method is applicable to a wide range of state space models, including models for which accurate estimation is computationally challenging using existing methods. First, our method can be applied to many models with nonlinear or non-Gaussian measurement equations, and/or high-dimensional state vectors. For instance, two potential applications of the approach are dynamic stochastic copula models \citep{hafner2012dynamic}, and multivariate stochastic volatility models with realised volatility \citep{yamauchi2020multivariate}.
Second, the method can be applied to models with any transition density that is a member of the exponential family of distributions. This opens up the possibility of applying state space modelling with a non-Gaussian transition density to, for instance, realised covariance matrices of asset returns.
The uptake of state space models in this literature has been limited, since high-dimensional state vectors with nonlinear restrictions are generally required \citep{gribisch2022modeling}.
One future extension of our method is the accommodation of transition equations that are not a member of the exponential family, as is the case with the Heston model \citep{eraker2004stock}.
\section{Introduction}
Estimation of many state space models with nonlinear and/or non-Gaussian measurement equations is computationally challenging \citep{gribisch2022modeling,chan2022large,cross2021macroeconomic}. The likelihood function of these models involves a high-dimensional integral with respect to the state variables that cannot be solved analytically, which renders maximum likelihood estimation infeasible. As an alternative, exact Bayesian estimation methods allow for the computation of the posterior distribution of the model parameters. These methods either use particle filtering \citep{chopin2020introduction}, or sample from the augmented posterior of the model parameters and the states
using analytical filtering \citep{carter1994gibbs}. Both approaches can become computationally costly, especially with high-dimensional state vectors or when strong dependence between the states and the parameters is observed \citep{quiroz2018gaussian}.
Variational Bayes (VB) methods provide a scalable alternative to exact Bayesian methods. Instead of sampling exactly from the posterior, VB calibrates an approximation to the posterior via the minimization of a divergence function. However, off-the-shelf variational methods for state space models, such as mean-field variational approximations, are known to be poor \citep{wang2004lack}. This is due to the inaccuracy of these methods in approximating the conditional posterior distribution of the states \citep{frazier2022variational}.
Recent advances in VB methods for state space models show a trade-off between accuracy and speed.
On the one hand, \citet{tran2017variational} exactly integrate out the states in the variational approximation using particle filtering. This approach produces accurate estimates but is computationally costly. \citet{loaiza2022fast} show that for a class of models exact integration of the states can be achieved by combining VB and Gibbs sampling. This method is also accurate, but can become computationally costly as exact generation from the conditional posterior of the states is required.
On the other hand,
\citet{quiroz2018gaussian} propose a fast Gaussian VB method. Their variational family for the states conditions on the model parameters and not on the data, which may result in inaccurate estimates.
This paper proposes a novel VB method for state space models that mitigates the trade-off between accuracy and speed. The method
uses a variational approximation to the states that directly conditions on the observed data, and as such produces an accurate approximation to the exact posterior distribution. The approach is faster than existing VB methods for state space models, due to the computationally efficient calibration steps it entails. The implementation only requires a measurement equation with a closed-form density representation, and a state transition distribution that belongs to the class of exponential distributions. This allows for a wide range of state space models, including ones with nonlinear measurement equations, certain types of nonlinear transition equations, and high-dimensional state vectors.
Our approximation to the states is chosen to be the importance density proposed by \citet{richard2007efficient}, which the authors use in the context of efficient importance sampling. Hence, we refer to our method as Efficient VB. \citet{scharth2016particle} employ this importance distribution within a particle Markov chain Monte Carlo (PMCMC) sampler to reduce the variance of the estimate of the likelihood function. The use of this importance density inside PMCMC does not result in substantial computational gains, as it must be recalibrated at each iteration. Because VB poses an optimization problem, it can be solved via stochastic gradient ascent (SGA). SGA only requires a draw from the approximation to the states, which is used to construct an estimate of the gradient of the objective function. Since the importance density is easy to generate from, and it does not have to be recalibrated at each SGA step, the optimization routine is fast and hence scalable to state space models with high-dimensional state vectors and a large number of observations.
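The draw-then-step pattern of SGA can be illustrated with a toy objective; this is not the paper's ELBO, only the generic mechanism of ascending on a single-draw unbiased gradient estimate.

```python
import numpy as np

# Toy SGA: maximize f(lam) = E[-(Z - lam)^2] with Z ~ N(0, 1), whose optimum is
# lam = 0. Each iteration uses one draw to form an unbiased gradient estimate.

rng = np.random.default_rng(6)
lam, step = 5.0, 0.05
for _ in range(2000):
    z = rng.normal()              # one draw from the (toy) approximation
    grad_hat = -2.0 * (lam - z)   # unbiased estimate of df/dlam = -2 lam
    lam += step * grad_hat        # ascent step
```

With a constant step size the iterate fluctuates around the optimum; in practice adaptive step sizes such as ADADELTA or Adam are commonly used.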
Numerical experiments show that the proposed Efficient VB method provides accurate posterior densities, while it only takes a fraction of the computational cost of MCMC.
The experiments employ the stochastic volatility model as the true data generating process. Since the exact posterior can be computed by MCMC methods, the accuracy of our method can be assessed for this model.
We find that Efficient VB produces variational approximations to the states that are close to the corresponding exact posteriors, which result in accurate variational approximations to the parameters of the model. Efficient VB is more accurate than Gaussian VB, and faster than all benchmark methods for all sample sizes under consideration.
An empirical application illustrates the practical relevance of our method. We fit a time-varying parameter vector autoregression with a stochastic volatility model to eight macroeconomic variables. Although this is a complex state space model with a nonlinear measurement equation and a high-dimensional state vector, our approach is still fast and accurate. Efficient VB produces posterior means of the states --time-varying VAR coefficients and time-varying volatilities-- that are close to the exact posterior means. The same holds for the posterior distributions for the parameters in the stochastic volatility model in the VAR.
The proposed VB method can produce fast and accurate estimation for a wide range of models. Computationally challenging state space models are currently estimated by VB methods that are limited to specific state space formulations. For instance,
\cite{chan2022fast} and \cite{gefang2019variational} propose a VB method for a specific class of vector autoregression models. \cite{koop2018variational} propose a VB method for a class of time-varying parameter models.
Existing variational inference methods for state space models that construct point estimates for model parameters, instead of posterior distributions, are computationally expensive.
For instance, \cite{naesseth2018variational} construct an approximation to the conditional posterior of the state using particle filtering, which is computationally costly and hence hinders the scalability to problems with high-dimensional state vectors. \cite{archer2015black} use neural networks to construct an approximation to the posterior of the states. The parameters of these neural networks are calibrated jointly with the parameters of the model, which means that a high-dimensional gradient has to be computed in each iteration of the optimization algorithm.
The outline of the remainder of this paper is as follows. Section~\ref{sec:ssm} discusses specification and exact estimation of state space models, and Section~\ref{sec:vb} develops our VB method. Section~\ref{sec:simulation} conducts numerical experiments to evaluate its accuracy and computational costs, and Section~\ref{sec:application} applies our method to real data. Section~\ref{sec:conclusion} concludes.
\section{State space models}\label{sec:ssm}
Let $\mathbf{y}=(\mathbf{y}_1^\top,\dots,\mathbf{y}_T^\top)^\top$ be an observed time series assumed to have been generated by a state space model with measurement and state densities
\begin{align}
\mathbf{y}_t|(\mathbf{X}_t=\mathbf{x}_t) &\sim p(\mathbf{y}_t|\mathbf{x}_t,\boldsymbol\theta), \label{eq:measurement}\\
\mathbf{X}_t|(\mathbf{X}_{t-1}=\mathbf{x}_{t-1}) &\sim p(\mathbf{x}_t|\mathbf{x}_{t-1},\boldsymbol\theta), \label{eq:state}
\end{align}
respectively, and where the prior density for $\mathbf{X}_1$ is $p(\mathbf{x}_1|\boldsymbol\theta)$, $\boldsymbol{\theta }\in\Theta$ is a $d$-dimensional parameter vector and $\mathbf{y}_t$ is an $N$-dimensional observation vector with $t=1,\dots,T$. The likelihood function for this model is given by
\begin{align}\label{eq:likelihood}
p(\mathbf{y}|\boldsymbol\theta)=\int p(\mathbf{y},\mathbf{x}|\boldsymbol\theta)d\mathbf{x},
\end{align}
where $\mathbf{x} = (\mathbf{x}_1^\top,\dots,\mathbf{x}_T^\top)^\top$ and $p(\mathbf{y},\mathbf{x}|\boldsymbol\theta) = \prod_{t=1}^{T}p(\mathbf{y}_{t}|\mathbf{x}_{t},{\boldsymbol\theta })p(\mathbf{x}_{t}|\mathbf{x}_{t-1},{\boldsymbol\theta })$. Typically, the integral that characterises the likelihood is intractable, as it does not have an analytical solution. This is the case for state space models that assume non-linear or non-Gaussian measurement equations. These types of models are pervasive in econometrics and include, for instance, stochastic volatility models and some time-varying parameter models. Hence, maximum likelihood estimation is infeasible for a large class of econometric problems.
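To make the factorisation of $p(\mathbf{y},\mathbf{x}|\boldsymbol\theta)$ concrete, the sketch below accumulates the log joint density in a single pass for a toy linear Gaussian state space model (the specific densities and parameter values are illustrative assumptions, not the models considered later in the paper):

```python
import numpy as np

def log_gauss(x, mean, var):
    # Univariate Gaussian log-density.
    return -0.5 * (np.log(2.0 * np.pi * var) + (x - mean) ** 2 / var)

def log_joint(y, x, rho=0.9, sig2_x=0.1, sig2_y=0.5):
    # log p(y, x | theta) = log p(x_1) + sum_t [log p(x_t | x_{t-1}) + log p(y_t | x_t)],
    # with p(x_1) taken as the stationary N(0, sig2_x / (1 - rho^2)) prior for the state.
    lp = log_gauss(x[0], 0.0, sig2_x / (1.0 - rho ** 2))
    for t in range(len(y)):
        if t > 0:
            lp += log_gauss(x[t], rho * x[t - 1], sig2_x)
        lp += log_gauss(y[t], x[t], sig2_y)
    return lp
```

Integrating this joint density over $\mathbf{x}$, as in \eqref{eq:likelihood}, is what becomes intractable outside the linear Gaussian case.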
Bayesian analysis is concerned with computing the posterior density $p(\boldsymbol{\theta }|\mathbf{y})\propto p(\mathbf{y}|\boldsymbol\theta)p(\boldsymbol{\theta })$, where $p(\boldsymbol{\theta })$ is a given choice of prior density. The intractability in the likelihood function is tackled via two different avenues. First, for certain state space models it is feasible to use Markov chain Monte Carlo (MCMC) to generate from the augmented density
\begin{align}\label{eq:augmented_posterior}
p(\boldsymbol\theta,\mathbf{x}|\mathbf{y}) \propto p(\mathbf{y},\mathbf{x}|\boldsymbol\theta)p(\boldsymbol\theta),
\end{align}
where analytical filtering methods are used to obtain draws from $p(\mathbf{x}|\boldsymbol\theta,\mathbf{y})$. MCMC effectively samples from $p(\boldsymbol\theta|\mathbf{y})$ which is a marginal density of $p(\boldsymbol\theta,\mathbf{x}|\mathbf{y})$. This approach is limited to certain classes of state space models, and the filtering techniques used can become computationally costly for large sample sizes or high-dimensional state vectors. The second Bayesian avenue generates samples from the posterior by replacing the likelihood function by its unbiased estimate $\widehat{p}_S(\mathbf{y}|\boldsymbol{\theta})$.
This unbiased estimate, evaluated via particle methods, is then used inside a Metropolis-Hastings scheme. This approach, known as particle MCMC (PMCMC), trades off accuracy in the estimation of $p(\boldsymbol\theta|\mathbf{y})$ against computational speed, via the choice of the number of particles $S$ \citep{andrieu2010particle,doucet2015efficient}. While it can be applied to a broad class of state space models, this approach is computationally costly and produces highly noisy estimates when the number of particles is too low. This issue is exacerbated when the state vector is high-dimensional and a larger number of particles is required.
\section{Variational Bayes}\label{sec:vb}
Variational Bayes may overcome the computational challenges in estimating state space models. The general idea behind VB is to approximate the exact posterior $p(\boldsymbol{\theta}|\mathbf{y})$ with an approximating density $q_{\hat{\lambda}}(\boldsymbol\theta)\in\mathcal{Q}$, where $\mathcal{Q} = \{q_{\lambda}(\boldsymbol\theta): \boldsymbol\lambda\in\Lambda\}$ is a class of tractable approximating densities indexed by the variational parameter $\boldsymbol{\lambda}\in\Lambda$. The most popular choice for $\mathcal{Q}$ is the Gaussian distribution class. The optimal variational parameter $\hat{\boldsymbol{\lambda}}$ is then calibrated by finding the element in $\mathcal{Q}$ that minimizes the Kullback-Leibler (KL) divergence --or any other divergence-- to the exact posterior. Implementation of VB requires evaluation of the likelihood function $p(\mathbf{y}|\boldsymbol{\theta})$, which is infeasible for most state space models. \cite{tran2017variational} circumvent this issue by replacing $p(\mathbf{y}|\boldsymbol{\theta})$ by the unbiased estimate $\widehat{p}_N(\mathbf{y}|\boldsymbol{\theta})$. While this approach is faster than PMCMC, it remains computationally costly due to its use of particle filtering.
Alternatively, VB can circumvent the computational challenges of exactly integrating out $\mathbf{x}$, by instead constructing an approximation to the augmented posterior in \eqref{eq:augmented_posterior}. In this case, the approximating density is $q_{\lambda}(\boldsymbol\theta,\mathbf{x})$ and $\mathcal{Q} = \{q_{\lambda}(\boldsymbol\theta,\mathbf{x}): \boldsymbol\lambda\in\Lambda\}$. Then, $\widehat{\boldsymbol{\lambda}}$ is obtained by minimising the KL divergence from $q_{\lambda}(\boldsymbol\theta,\mathbf{x})$ to $p(\boldsymbol\theta,\mathbf{x}|\mathbf{y})$, which is equivalent to maximising the evidence lower bound (ELBO) function $\mathcal{L}(\boldsymbol{\lambda}) = E_{q}\left[\log p(\mathbf{y},\mathbf{x}|\boldsymbol\theta)p(\boldsymbol\theta)-\log q_{\lambda}(\boldsymbol\theta,\mathbf{x})\right]$:
\begin{align}\label{eq:lambdahat}
\hat{\boldsymbol{\lambda}}=\operatornamewithlimits{argmin\,}_{\boldsymbol{\lambda}\in \Lambda}\text{KL}\left[ q_{\lambda}(\boldsymbol\theta,\mathbf{x})||p(\boldsymbol\theta,\mathbf{x}|\mathbf{y})\right]=\operatornamewithlimits{argmax\,}_{\boldsymbol{\lambda}\in \Lambda}\mathcal{L}(\boldsymbol{\lambda}).
\end{align}
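The equivalence in \eqref{eq:lambdahat} holds because the KL divergence and the ELBO differ only by the fixed log evidence. As a sanity check of the KL objective, the sketch below compares a Monte Carlo estimate of $\text{KL}[q||p]$ for two univariate Gaussians against its closed form (a toy choice of $q$ and $p$, not the approximating class used in the paper):

```python
import numpy as np

def log_normal(x, mu, sigma):
    # Univariate Gaussian log-density.
    return -0.5 * np.log(2.0 * np.pi * sigma ** 2) - (x - mu) ** 2 / (2.0 * sigma ** 2)

def kl_mc(mu, sigma, n=200_000, seed=0):
    # Monte Carlo estimate of KL[q || p], q = N(mu, sigma^2), p = N(0, 1),
    # using KL = E_q[log q(theta) - log p(theta)].
    rng = np.random.default_rng(seed)
    theta = mu + sigma * rng.standard_normal(n)
    return np.mean(log_normal(theta, mu, sigma) - log_normal(theta, 0.0, 1.0))

def kl_closed(mu, sigma):
    # Closed form for two Gaussians: 0.5 (sigma^2 + mu^2 - 1 - log sigma^2).
    return 0.5 * (sigma ** 2 + mu ** 2 - 1.0 - np.log(sigma ** 2))
```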
VB methods that target the augmented posterior are much faster to implement than methods that approximate $p(\boldsymbol\theta|\mathbf{y})$ directly.
\subsection{Variational approximations for state space models}
This paper proposes a variational approximation of the form
\begin{align}\label{eq:vas}
q_{\lambda}(\boldsymbol\theta,\mathbf{x})=q_{\lambda}(\boldsymbol\theta)q(\mathbf{x}|\mathbf{y},\boldsymbol\theta).
\end{align}
For the choice of $q_\lambda(\boldsymbol{\theta})$, we follow \cite{ong2018gaussian} and employ a $d$-dimensional Gaussian density with mean $\boldsymbol{\mu}$ and a covariance matrix with a factor structure representation $\Omega = BB^\top+\text{diag}(\boldsymbol{d}^2)$, where $B$ is a $d\times p$ matrix and $\boldsymbol{d}$ is a $d$-dimensional vector. The variational parameter vector is $\boldsymbol{\lambda} = (\boldsymbol{\mu}^\top,\boldsymbol{d}^\top,\text{vech}(B)^\top)^\top$, with $\text{vech}(B)$ denoting the half-vectorization of the rectangular matrix $B$.
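A minimal sketch of this factor-structure approximation: a draw can be generated as $\boldsymbol\theta = \boldsymbol\mu + B\boldsymbol z + \boldsymbol d\circ\boldsymbol\varepsilon$ with $\boldsymbol z$ and $\boldsymbol\varepsilon$ standard normal, which has mean $\boldsymbol\mu$ and covariance $\Omega$ exactly (the matrices below are illustrative values):

```python
import numpy as np

def factor_cov(B, d):
    # Omega = B B^T + diag(d^2): a full-rank d x d covariance parameterised
    # by only d * (p + 1) free values.
    return B @ B.T + np.diag(d ** 2)

def draw_theta(mu, B, d, rng):
    # theta = mu + B z + d * eps has mean mu and covariance Omega.
    z = rng.standard_normal(B.shape[1])
    eps = rng.standard_normal(len(mu))
    return mu + B @ z + d * eps
```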
\citet{loaiza2022fast} show that the optimal choice of approximation for the latent states is $q(\mathbf{x}|\mathbf{y},\boldsymbol\theta) = p(\mathbf{x}|\mathbf{y},\boldsymbol{\theta})$, which guarantees exact integration of $\mathbf{x}$. However, the implementation of this approximation requires one to generate from $p(\mathbf{x}|\mathbf{y},\boldsymbol{\theta})$, which can be computationally challenging or even infeasible for nonlinear and high-dimensional state space models.
A faster approach, which can also be applied to a wider range of state space models, is to take $q(\mathbf{x}|\mathbf{y},\boldsymbol\theta) = q(\mathbf{x}|\boldsymbol{\theta})$. For instance, \citet{quiroz2018gaussian} take a multivariate Gaussian for $q(\mathbf{x}|\boldsymbol\theta)$ that does not condition on the data $\mathbf{y}$.
\citet{frazier2022variational} show that Gaussian variational approximations to latent states may lead to inferential and predictive inaccuracies.
This paper develops an accurate and fast variational Bayes method for the state space model in \eqref{eq:measurement}--\eqref{eq:state}, by proposing an approximation that can be expressed as $q(\mathbf{x}|\mathbf{y},\boldsymbol\theta)=q(\mathbf{x}|\mathbf{y})$. The proposed approximation is accurate due to the conditioning on $\mathbf{y}$. In addition, it is fast to implement as it does not directly condition on the parameter vector $\boldsymbol{\theta}$. The method is developed for models that have a closed-form measurement density $p(\mathbf{y}_t|\mathbf{x}_t,\boldsymbol\theta)$, and state transition density $p(\mathbf{x}_t|\mathbf{x}_{t-1},\boldsymbol\theta)$ that belongs to the exponential family of distributions. This makes our approach applicable to a wide range of different state space models. As will be discussed next, our approach is inspired by the literature on efficient importance sampling; hence we refer to it as Efficient VB.
\subsection{An efficient variational approximation to the states}
We propose variational approximations to the states of the form
\begin{align}\label{eq:qxy}
q(\mathbf{x}|\mathbf{y})=\prod_{t=1}^T q(\mathbf{x}_t|\mathbf{x}_{t-1},\mathbf{y},\boldsymbol\varphi),
\end{align}
where $\boldsymbol{\varphi}$ is an auxiliary parameter vector, which works as a proxy for $\boldsymbol{\theta}$. The conditional densities $q(\mathbf{x}_t|\mathbf{x}_{t-1},\mathbf{y},\boldsymbol\varphi)$ are written in terms of a transition kernel $k(\mathbf{x}_t,\mathbf{x}_{t-1}|\boldsymbol{a}_t,\boldsymbol\varphi)$ and an integration constant $\chi(\mathbf{x}_{t-1}|\boldsymbol{a}_t,\boldsymbol\varphi)=\int k(\mathbf{x}_t,\mathbf{x}_{t-1}|\boldsymbol{a}_t,\boldsymbol{\varphi}) d \mathbf{x}_t$:
\begin{align}\label{eq:vakernel}
q(\mathbf{x}_t|\mathbf{x}_{t-1},\mathbf{y},\boldsymbol\varphi)=\frac{k(\mathbf{x}_t,\mathbf{x}_{t-1}|\boldsymbol{a}_t,\boldsymbol\varphi)}{\chi(\mathbf{x}_{t-1}|\boldsymbol{a}_t,\boldsymbol\varphi)},
\end{align}
where $\boldsymbol{a}_t$ is a vector of parameters dependent on $\mathbf{y}$.
Denote $D[P||F]$ to be a divergence function between two distributions $P$ and $F$. The parameters $\boldsymbol{a} = (\boldsymbol{a}_1^\top,\dots,\boldsymbol{a}_T^\top)^\top\in A$ are calibrated so that $q(\mathbf{x}|\mathbf{y})$ accurately approximates $p(\mathbf{x}|\mathbf{y},\boldsymbol\varphi)$ as measured by $D$, that is
\begin{align}\label{eq:calibrate_a}
\boldsymbol{a} = \argmin_{\tilde{\boldsymbol a}\in A} D\left[ \prod_{t=1}^T \frac{k(\mathbf{x}_t,\mathbf{x}_{t-1}|\tilde{\boldsymbol a}_t,\boldsymbol\varphi)}{\chi(\mathbf{x}_{t-1}|\tilde{\boldsymbol a}_t,\boldsymbol\varphi)}||p(\mathbf{x}|\mathbf{y},\boldsymbol{\varphi}) \right].
\end{align}
The optimization problem in \eqref{eq:calibrate_a} is similar to the one considered in efficient importance sampling \citep{richard2007efficient,koopman2015numerically}. Instead of sampling from the distribution of interest $p(\mathbf{x}|\mathbf{y},\boldsymbol\varphi)$, importance sampling replaces that distribution with an auxiliary distribution $q(\mathbf{x}|\mathbf{y})$. The parameters $\boldsymbol a$ are calibrated to minimize the variance of the ratio $p(\mathbf{x}|\mathbf{y},\boldsymbol\varphi)/q(\mathbf{x}|\mathbf{y})$.
We follow this approach, and solve \eqref{eq:calibrate_a} according to the steps proposed by \citet{richard2007efficient}.
Algorithm \ref{alg:vi2} in \citet{scharth2016particle} summarises this method for calibrating $\boldsymbol a$.
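The variance-reduction logic behind calibrating an importance density can be seen in a classic textbook example, unrelated to the paper's state space setting: estimating the Gaussian tail probability $P(X>3)$ with a naive sampler versus an importance density centred in the tail:

```python
import math
import numpy as np

def tail_prob_naive(n, rng):
    # Naive Monte Carlo estimate of P(X > 3) for X ~ N(0, 1):
    # almost all draws miss the tail, so the estimate is very noisy.
    x = rng.standard_normal(n)
    return np.mean(x > 3.0)

def tail_prob_is(n, rng):
    # Importance sampling with proposal N(3, 1).
    # Weights w = phi(x) / phi(x - 3), with the Gaussian constants cancelling.
    x = 3.0 + rng.standard_normal(n)
    w = np.exp(-0.5 * x ** 2 + 0.5 * (x - 3.0) ** 2)
    return np.mean((x > 3.0) * w)

exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))  # about 1.35e-3
```

The same principle drives \eqref{eq:calibrate_a}: a proposal adapted to the target yields far lower-variance ratios than a mismatched one.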
The calibration of $\boldsymbol a$ in \eqref{eq:calibrate_a} is the most computationally expensive step in the VB optimization routine. However, if $q(\mathbf{x}|\mathbf{y})$ does not depend on $\boldsymbol\theta$, $\boldsymbol a$ does not have to be calibrated at each iteration of VB. Hence, we set $\boldsymbol\varphi$ to be a parameter vector close to $\boldsymbol\theta$ rather than $\boldsymbol\theta$ itself. The numerical experiment and empirical application demonstrate that this does not hinder the performance of the approach.
Since the transition density belongs to the exponential family of distributions, it can be written as
\begin{equation}
p(\mathbf{x}_t|\mathbf{x}_{t-1},\boldsymbol\varphi) = h(\mathbf{x}_t)g(\mathbf{x}_{t-1},\boldsymbol{\varphi})\exp\left(\boldsymbol{\eta}(\mathbf{x}_{t-1},\boldsymbol{\varphi})^\top\boldsymbol{T}(\mathbf{x}_t)\right),
\end{equation}
where $\boldsymbol{T}(\mathbf{x}_t)$ denotes a vector of sufficient statistics.
We select the transition kernel $k(\mathbf{x}_t,\mathbf{x}_{t-1}|\boldsymbol a_t,\boldsymbol\varphi)$ to be
\begin{align}\label{eq:kernel}
k(\mathbf{x}_t,\mathbf{x}_{t-1}|\boldsymbol a_t,\boldsymbol\varphi) = \exp\left(\boldsymbol{a}_t^\top\boldsymbol{T}(\mathbf{x}_t)\right)p(\mathbf{x}_t|\mathbf{x}_{t-1},\boldsymbol\varphi).
\end{align}
The choice of kernel in \eqref{eq:kernel} guarantees the practical applicability of the proposed method. Calibration of $q(\mathbf{x}|\mathbf{y})$ can be implemented via an algorithm that involves a fast recursive sequence of linear regressions.
This algorithm is feasible because $ q(\mathbf{x}|\mathbf{y})$ is easy to generate from and the integration constant $\chi(\mathbf{x}_{t-1}|\boldsymbol{a}_t,\boldsymbol\varphi)$ can be evaluated, as shown in Appendix~\ref{app:calibration}.
\subsection{Stochastic gradient ascent}
We solve the optimization problem in \eqref{eq:lambdahat} using stochastic gradient ascent (SGA) methods. SGA calibrates the variational parameter by iterating over
\begin{equation}\label{eq: lambda_iter}
\boldsymbol{\lambda}^{[j+1]} = \boldsymbol{\lambda}^{[j]} + \boldsymbol{\rho}^{[j]}\circ\widehat{\nabla_\lambda \mathcal{L}\left(\boldsymbol{\lambda}^{[j]}\right)},
\end{equation}
until convergence is achieved. The vector $\boldsymbol{\rho}^{[j]}$ contains the so called ``learning parameters'', which we set according to the ADADELTA approach in \cite{zeiler2012adadelta}. The vector $\widehat{\nabla_\lambda \mathcal{L}\left(\boldsymbol{\lambda}^{[j]}\right)}$ is an unbiased estimate of the gradient of the ELBO
evaluated at $\boldsymbol{\lambda}^{[j]}$.
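A minimal sketch of SGA with ADADELTA step sizes, applied to a deterministic toy objective (the objective and settings below are illustrative; the paper applies the same update to the noisy ELBO gradient):

```python
import numpy as np

def adadelta_ascent(grad, lam0, iters=2000, decay=0.95, eps=1e-6):
    # Maximise an objective via gradient ascent with ADADELTA step sizes
    # (Zeiler, 2012): per-coordinate steps adapt via running averages of
    # squared gradients and squared updates.
    lam = np.asarray(lam0, dtype=float).copy()
    eg2 = np.zeros_like(lam)  # running average of squared gradients
    ed2 = np.zeros_like(lam)  # running average of squared updates
    for _ in range(iters):
        g = grad(lam)
        eg2 = decay * eg2 + (1.0 - decay) * g ** 2
        step = np.sqrt(ed2 + eps) / np.sqrt(eg2 + eps) * g
        ed2 = decay * ed2 + (1.0 - decay) * step ** 2
        lam = lam + step
    return lam
```

With the toy objective $-\tfrac{1}{2}(\lambda-2)^2$ and gradient $2-\lambda$, the iterates move from the starting point towards the maximiser at $2$.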
We can express any draw from $q_\lambda(\boldsymbol{\theta})$ as $\boldsymbol{\theta} = \boldsymbol{\theta}(\boldsymbol{\varepsilon},\boldsymbol{\lambda})$, where $\boldsymbol{\varepsilon}\sim f_{\varepsilon}$ and $f_{\varepsilon}$ is a distribution that does not depend on $\boldsymbol{\lambda}$. Using the re-parametrization trick in \citet{kingma2013auto}, we can then write the ELBO gradient as
\begin{align}\label{eq:gradient_elbo}
\nabla_\lambda\mathcal{L}(\boldsymbol{\lambda}) = E_{q(\mathbf{x}|\mathbf{y}),f_\varepsilon}\left[\frac{\partial \boldsymbol{\theta}}{\partial\boldsymbol{\lambda}}^\top\left[\nabla_\theta\log p(\mathbf{y},\mathbf{x}|\boldsymbol\theta)p(\boldsymbol\theta)-\nabla_\theta\log q_\lambda(\boldsymbol\theta)\right] \right],
\end{align}
where the expectation is taken with respect to $f_\varepsilon$ and $q(\mathbf{x}|\mathbf{y})$. The gradient $\nabla_\theta\log p(\mathbf{y},\mathbf{x}|\boldsymbol\theta)p(\boldsymbol{\theta})$ is model specific. The expressions $\partial\boldsymbol{\theta} /\partial\boldsymbol{\lambda}$ and $\nabla_\theta \log q_\lambda(\boldsymbol{\theta} )$ are provided in \cite{ong2018gaussian}.
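The re-parametrization gradient in \eqref{eq:gradient_elbo} can be illustrated with a contrived toy case in which it is exact: take $q_\lambda = N(\mu,1)$, no data, and target $p(\boldsymbol\theta)=N(0,1)$. Then every one-draw estimate equals the exact ELBO gradient $-\mu$, so the estimator has zero variance (an example chosen purely for illustration):

```python
import numpy as np

def one_draw_gradient(mu, rng):
    # Reparametrization trick for q = N(mu, 1) and target p(theta) = N(0, 1):
    # theta = mu + eps, so d theta / d mu = 1,
    # grad_theta log p(theta) = -theta, grad_theta log q(theta) = -(theta - mu).
    eps = rng.standard_normal()
    theta = mu + eps
    return 1.0 * (-theta - (-(theta - mu)))  # algebraically equals -mu
```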
At each SGA iteration $[j]$, we calculate a sample estimate of \eqref{eq:gradient_elbo} based on only one draw for both $\mathbf{x}$ from $q(\mathbf{x}|\mathbf{y})$ and $\boldsymbol\varepsilon$ from $f_\varepsilon$. Note that $q(\mathbf{x}|\mathbf{y})$ only depends on the parameters $\boldsymbol\varphi$ and $\boldsymbol a$, which are updated every $200$ steps by setting $\boldsymbol\varphi = \boldsymbol\mu^{[j]}$ and re-calibrating $\boldsymbol a$ as in \eqref{eq:calibrate_a}.
The VB estimation routine is summarized in Algorithm~\ref{alg:vi}.
\begin{algorithm}
\begin{algorithmic}[1]
\State{Initialize $\boldsymbol\lambda^{[0]}$ and set iteration $j=0$.}
\While{no convergence of ELBO}
\State{Set $j=j+1$.}
\If{$j+199$ is a multiple of 200}
\State{Set $\boldsymbol\varphi=\boldsymbol\mu^{[j]}$.}
\State{Solve $\boldsymbol a = \argmin_{\tilde{\boldsymbol a}\in A} D\left[ \prod_{t=1}^T \frac{k(\mathbf{x}_t,\mathbf{x}_{t-1}|\tilde{\boldsymbol a}_t,\boldsymbol\varphi)}{\chi(\mathbf{x}_{t-1}|\tilde{\boldsymbol a}_t,\boldsymbol\varphi)}||p(\mathbf{x}|\mathbf{y},\boldsymbol{\varphi}) \right]$.}
\State{Set $q(\mathbf{x}|\mathbf{y})=\prod_{t=1}^Tq(\mathbf{x}_t|\mathbf{x}_{t-1},\mathbf{y},\boldsymbol\varphi)$.}
\EndIf
\State{Draw $\boldsymbol\varepsilon^{[j]}$ from $f_\varepsilon$ and set $\boldsymbol\theta^{[j]} = \boldsymbol\theta(\boldsymbol\varepsilon^{[j]},\boldsymbol\lambda^{[j]})$.}
\State{Draw $\mathbf{x}^{[j]}$ from $q(\mathbf{x}|\mathbf{y})$.}
\State{Compute $\widehat{\nabla_\lambda \mathcal{L}\left(\boldsymbol{\lambda}^{[j]}\right)}=\left.\frac{\partial \boldsymbol{\theta}}{\partial\boldsymbol{\lambda}}^\top\left[\nabla_\theta\log p(\mathbf{y},\mathbf{x}|\boldsymbol{\theta})p(\boldsymbol{\theta})-\nabla_\theta\log q_\lambda(\boldsymbol{\theta})\right]\right|_{\boldsymbol\theta=\boldsymbol{\theta}^{[j]},\boldsymbol\lambda=\boldsymbol{\lambda}^{[j]},\mathbf{x}=\mathbf{x}^{[j]}}$.}
\State{Update $\boldsymbol{\rho}^{[j]}$ via ADADELTA.}
\State{Set $\boldsymbol{\lambda}^{[j+1]} = \boldsymbol{\lambda}^{[j]} + \boldsymbol{\rho}^{[j]}\circ\widehat{\nabla_\lambda \mathcal{L}\left(\boldsymbol{\lambda}^{[j]}\right)}$.}
\EndWhile
\end{algorithmic}
\caption{Efficient VB algorithm}
\label{alg:vi}
\end{algorithm}
Line 6 in Algorithm \ref{alg:vi} takes the most computation time. Appendix~\ref{app:calibration} provides a detailed algorithm together with additional details on how this step is implemented.
The implementation directly follows from the kernel specification in \eqref{eq:kernel}. It does not require the practitioner to make any derivations or any choices, provided that the state density is from the exponential family.
\section{Numerical experiments}\label{sec:simulation}
This section presents numerical experiments to assess the accuracy and the computational costs of the proposed VB approach. We consider a state space model that is widely used in economics, namely, the stochastic volatility model. We investigate the properties of the method for varying sample sizes. The approach is compared to other VB methods and MCMC.
\subsection{Stochastic volatility model}
The stochastic volatility model is defined as
\begin{eqnarray}
p(y_t|x_t)& = &\phi_1(y_t;0,e^{x_t}),\nonumber\\
p(x_t|x_{t-1},\boldsymbol{\theta})& = &\phi_1(x_t;\bar{x}+\rho(x_{t-1}-\bar{x}),\sigma^2),\,
\label{eq:sv}
\end{eqnarray}
where $\phi_1(x;\mu,s^2)$ denotes the univariate Gaussian density function with mean $\mu$ and variance $s^2$, $\boldsymbol{\theta} = (\bar{x},\rho,\sigma)^\top$ are the global parameters of the model, and $x_t$ denotes the latent log-variance of the time series process at time $t$. We generate $T = 4000$ observations from the model in \eqref{eq:sv} with the true parameter values set as $\rho_0 = 0.95$, $\sigma_0 = 0.3$ and $\bar{x}_0 = -1.3$.
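A sketch of this data generating process, with the state initialised at its stationary distribution (the initialisation and seed are our assumptions):

```python
import numpy as np

def simulate_sv(T=4000, xbar=-1.3, rho=0.95, sigma=0.3, seed=0):
    # Simulate the SV model: AR(1) log-variance x_t, and y_t | x_t ~ N(0, e^{x_t}).
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    # Initialise x_1 at the stationary N(xbar, sigma^2 / (1 - rho^2)) distribution.
    x[0] = xbar + sigma / np.sqrt(1.0 - rho ** 2) * rng.standard_normal()
    for t in range(1, T):
        x[t] = xbar + rho * (x[t - 1] - xbar) + sigma * rng.standard_normal()
    y = np.exp(x / 2.0) * rng.standard_normal(T)
    return y, x
```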
The objective of the experiments is to assess the accuracy and computational costs of different VB methods in approximating the augmented posterior
\begin{equation}
p\left(\boldsymbol{\theta},\mathbf{x}|\mathbf{y}\right)\propto p\left(\boldsymbol{\theta}\right)\prod_{t=1}^{T} p\left(y_t|x_t\right)p\left(x_t|x_{t-1},\boldsymbol{\theta}\right),
\label{eq:ssmpost}
\end{equation}
for varying sample sizes. Here, $\mathbf{x}=(x_1,\ldots,x_T)^\top$ and $p\left(\boldsymbol{\theta}\right) = p(\bar{x})p(\rho)p(\sigma)$ is the prior density for $\boldsymbol{\theta}$, with $p(\bar{x})=N(0,1000)$, $p(\rho)=\text{Uniform}(0,0.995)$, and $p(\sigma)=\text{Inverse-Gamma}(1.001,1.001)$. VB is implemented by transforming all the parameters to the real line, so that $\rho = 0.995/(1+\exp(-\kappa))$ and $\sigma = \exp(c/2)$.
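The parameter transformations and their inverses can be sketched as follows (the inverse maps are implied by the transformations stated above):

```python
import numpy as np

def to_unconstrained(rho, sigma):
    # rho in (0, 0.995) -> kappa on the real line; sigma > 0 -> c on the real line.
    kappa = -np.log(0.995 / rho - 1.0)
    c = 2.0 * np.log(sigma)
    return kappa, c

def to_constrained(kappa, c):
    # Inverse maps: scaled logistic for rho, exponential for sigma.
    rho = 0.995 / (1.0 + np.exp(-kappa))
    sigma = np.exp(c / 2.0)
    return rho, sigma
```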
The MCMC sampler generates from the exact posterior \eqref{eq:ssmpost}. Appendix~\ref{A:sv} outlines the steps of the MCMC algorithm. Because of the low computational costs of implementing MCMC, we can estimate the model for multiple sample sizes. Specifically, we estimate the model using $T = 1,10,20,\dots,4000$ observations. These results provide insights into the accuracy and computational costs of the different VB methods in small and large samples.
\subsection{Variational approximations}
The stochastic volatility model allows for the construction of the efficient variational approximation in \eqref{eq:qxy}.
Since the state transition density is Gaussian, we set $\boldsymbol T( x)=(x,x^2)^\top$ and $\boldsymbol a_t = (b_t,c_t)^\top$, with $b_t$ and $c_t$ both scalars.
Denote $\boldsymbol\varphi = (\bar{x}_\varphi,\rho_\varphi,\sigma_\varphi)^\top$. The approximation to the states $q(\mathbf{x}|\mathbf{y}) = \prod_{t=1}^Tq(x_t|x_{t-1},\mathbf{y},\boldsymbol\varphi)$ is a product of Gaussian densities such that $q(x_t|x_{t-1},\mathbf{y},\boldsymbol\varphi) = \phi_1(x_t;\mu_t,\sigma_t^2)$ with $\sigma_{t} = (\sigma_\varphi^{-2}-2c_{t})^{-1/2}$, $\mu_{t} = \sigma_{t}^2\left[b_{t}+\frac{\bar{x}_\varphi+\rho_\varphi(x_{t-1}-\bar{x}_\varphi)}{\sigma_\varphi^2}\right]$, and normalising constant
\begin{align}
\chi(x_{t-1}|\boldsymbol{a}_t,\boldsymbol\varphi) = \exp\left[ \frac{1}{2}\log \frac{\sigma_{t}^2}{\sigma_\varphi^2}+\frac{1}{2}\frac{\mu_{t}^2}{\sigma_{t}^2}-\frac{1}{2}\frac{(\bar{x}_\varphi+\rho_\varphi(x_{t-1}-\bar{x}_\varphi))^2}{\sigma_\varphi^2}\right].
\end{align}
Generating a random draw from $ q(\mathbf{x}|\mathbf{y}) $ is fast, and so is line 10 in Algorithm~\ref{alg:vi}.
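The conditional approximations above are exponentially tilted Gaussians. The sketch below computes $(\mu_t,\sigma_t)$ and the normalising constant for a tilt of $N(m,\sigma_\varphi^2)$ by $\exp(b x + c x^2)$, with $m=\bar{x}_\varphi+\rho_\varphi(x_{t-1}-\bar{x}_\varphi)$; it uses the standard Gaussian-tilt identity for $\log\chi$, and the numbers in the test are illustrative:

```python
import numpy as np

def tilted_moments(b, c, m, sig_phi):
    # Tilting N(m, sig_phi^2) by exp(b x + c x^2), with c < 1 / (2 sig_phi^2),
    # yields another Gaussian with the moments below, and the integration
    # constant chi = integral of exp(b x + c x^2) * N(x; m, sig_phi^2) dx.
    sig_t = (sig_phi ** -2 - 2.0 * c) ** -0.5
    mu_t = sig_t ** 2 * (b + m / sig_phi ** 2)
    log_chi = (np.log(sig_t / sig_phi)
               + 0.5 * mu_t ** 2 / sig_t ** 2
               - 0.5 * m ** 2 / sig_phi ** 2)
    return mu_t, sig_t, log_chi
```

The closed form can be checked against brute-force quadrature, which is how the test below proceeds.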
In addition, we also implement Hybrid VB and Gaussian VB, proposed by \citet{loaiza2022fast} and \citet{quiroz2018gaussian}, respectively. Hybrid VB requires draws from the conditional density $p(\mathbf{x}|\mathbf{y},\boldsymbol\theta)$, which can be generated using the filtering steps discussed in \citet{kim1998stochastic}. Gaussian VB takes $q(\mathbf{x}|\boldsymbol\theta)=\phi_T(\mathbf{x},\boldsymbol\mu_x,\Sigma_x)$ to be a $T$-dimensional multivariate Gaussian density. In all three methods we use a Gaussian approximation with factor covariance matrix for $\boldsymbol{\theta}$ and set the number of factors to one. The gradient expressions required for the implementation of all three VB methods are provided in Appendix~\ref{A:sv}.
\subsection{Results}
\subsubsection{Accuracy of the posterior distribution of the states }
First, we assess the accuracy of the posterior distribution for the states in $\mathbf{x}$ in a small sample with $T =500$ observations. Figure~\ref{fig:SVstates} shows the Efficient VB, Gaussian VB, and MCMC posterior means of the states. Since Hybrid VB uses the exact conditional density of the states in its variational approximation, its posterior mean for the states is very similar to that of MCMC and is not included in the figure. Although Efficient VB uses an approximation to the conditional state density, its posterior mean is almost identical to the posterior mean of MCMC over the whole sample period. This is not the case for Gaussian VB, whose posterior means overestimate the states relative to MCMC in almost every time period.
\begin{figure}[tb!]
\caption{Posterior mean of the states in the numerical experiment with $T=500$}
\centering
\includegraphics*[width=\textwidth]{Figs/x_states.eps}\\[3mm]
\begin{flushleft}
This figure shows the posterior mean in the numerical experiment with an estimation sample of 500 observations, for Efficient VB, Gaussian VB, and MCMC. These are indicated by the solid yellow, dotted blue, and solid black line, respectively.
\end{flushleft}
\label{fig:SVstates}
\end{figure}
Additionally, we analyse the posterior dependence structure of the states. Panels (a) to (c) in Figure~\ref{fig:SVcor} show the posterior correlations between $x_{100+i}$ and $x_{100+j}$ for $i,j=1,\dots,10$, for Gaussian VB, Efficient VB and MCMC, respectively. Gaussian VB underestimates most pairwise posterior correlations. Panel (a) in Figure~\ref{fig:SVcor} shows that only the first and second order posterior correlations are nonzero, while the MCMC posterior correlations are positive up to at least the tenth order. The posterior correlations of Efficient VB and MCMC do not show any differences. We find the same patterns across different time periods and across longer time samples.
\begin{figure}[tb!]
\caption{Posterior correlations between the states in the numerical experiment with $T=500$}
\centering
\includegraphics*[width=\textwidth]{Figs/cor_struct.eps}\\[3mm]
\begin{flushleft}
This figure shows the posterior correlations between $x_{100+i}$ and $x_{100+j}$ with $i,j=1,\dots,10$, in the numerical experiment with an estimation sample of 500 observations, for Gaussian VB, Efficient VB, and MCMC.
\end{flushleft}
\label{fig:SVcor}
\end{figure}
The differences in accuracy of the posterior distribution for the states can be explained by the choice of variational approximation for the states. Gaussian VB calibrates $q(\mathbf{x}|\boldsymbol\theta)$, which is different from the ideal distribution $p(\mathbf{x}|\mathbf{y},\boldsymbol\theta)$, as it does not condition on $\mathbf{y}$. On the other hand, Efficient VB uses $q(\mathbf{x}|\mathbf{y})$, which directly conditions on $\mathbf{y}$ but not on $\boldsymbol\theta$. Figures~\ref{fig:SVstates} and \ref{fig:SVcor} suggest that when it comes to accurately representing the posterior distribution of the states, it is more important to condition on the data than it is to condition on the parameter vector. While estimation of the posterior of the states is not always the target of Bayesian analysis, accurate estimation of this posterior is critical to obtaining accurate variational approximations to the target posterior $p(\boldsymbol{\theta}|\mathbf{y})$. We demonstrate this in the next section.
\subsubsection{Accuracy of the posterior distribution of the parameters}
Second, we assess the accuracy of the posterior distribution for the parameters. Figure~\ref{fig:SVsmall} shows the posterior parameter distributions for Efficient VB (solid yellow), Gaussian VB (dotted blue), Hybrid VB (dashed purple), and MCMC (solid black) in a small sample with 500 observations. The posterior of Efficient VB is close to the exact posterior of MCMC for all three parameters. The small differences between these posteriors can be summarized as a slight change in location for $\bar{x}$ and underestimation of the variance of the posteriors of $\rho$ and $\sigma$, which is a well-known property of VB. Hybrid VB also underestimates the posterior variance, but is slightly more accurate in the posterior locations. Gaussian VB, however, only produces an accurate approximation for $\bar{x}$. For $\rho$, the Gaussian VB posterior has a different location than the exact posterior. For $\sigma$, the Gaussian VB posterior has little overlap in probability mass with the MCMC posterior.
\begin{figure}[tb!]
\caption{Posterior parameter distributions in the numerical experiment with $T=500$}
\centering
\includegraphics*[scale=0.7]{Figs/SVestimatesSmall.eps}\\[3mm]
\begin{flushleft}
This figure shows the posterior parameter distributions in the numerical experiment with an estimation sample of 500 observations, for Efficient VB, Gaussian VB, Hybrid VB, and MCMC. These are indicated by the solid yellow, dotted blue, dashed purple, and solid black line, respectively. Panel (a) shows the posterior distribution for $\bar{x}$, Panel (b) for $\rho$, and Panel (c) for $\sigma$. The vertical lines indicate the true values in the data generating process.
\end{flushleft}
\label{fig:SVsmall}
\end{figure}
Additionally, we assess how the accuracy of the approximations changes with the sample size. The red lines in Panels (a.1) to (a.3) in Figure~\ref{fig:SVlarge} show the 99\% posterior intervals for all three parameters using Efficient VB. The shaded areas correspond to the MCMC posterior intervals. The x-axis indicates the sample size used for estimation. We find that the posterior intervals of Efficient VB are similar to those of MCMC, and the accuracy does not seem to be affected by the sample size. The posterior interval of Efficient VB concentrates around values close to the true parameter values as the sample size increases. Panels (c.1) to (c.3) in the figure show that the posterior intervals of Hybrid VB are more accurate for $\bar{x}$, but are not that different from those of Efficient VB for $\rho$ and $\sigma$. Panels (b.1) to (b.3) show that Gaussian VB is less accurate for all parameters and all sample sizes under consideration. Moreover, as the sample size increases its posteriors do not concentrate close to the true parameter values. This behaviour is likely to be related to the approximation errors to the posterior of the states that are exhibited by Gaussian VB.
\begin{figure}[tb!]
\caption{Posterior parameter distributions for different sample sizes}
\centering
\includegraphics*[width=\textwidth]{Figs/SVestimates.eps}\\[3mm]
\begin{flushleft}
This figure shows the 0.5\% and 99.5\% quantiles of the posterior parameter distributions in the numerical experiments with different estimation samples. The columns correspond to the methods Efficient VB, Gaussian VB, and Hybrid VB, whose quantiles are indicated by solid red lines and compared to the quantiles of MCMC, indicated by gray areas. The rows correspond to the parameters $\bar{x}$, $\rho$, and $\sigma$. The horizontal dashed lines indicate the true values in the data generating process.
\end{flushleft}
\label{fig:SVlarge}
\end{figure}
\subsubsection{Computation time}
Third, we compare the computational costs of the different estimation methods. Figure~\ref{fig:ComputationTimes} shows the estimation time with increasing sample sizes for Efficient VB, Gaussian VB, Hybrid VB, and MCMC. Efficient VB is substantially faster than the other methods: more than six times faster than MCMC, but also three times faster than Hybrid VB and more than twice as fast as Gaussian VB with a sample size of 4000 observations. Hence, Efficient VB is both more accurate and faster relative to Gaussian VB. Although the differences between the methods are smaller for smaller sample sizes, the ordering in the computational costs remains the same.
\begin{figure}[tb!]
\caption{Estimation time in the numerical experiments}
\centering
\includegraphics*[scale=0.7]{Figs/SVtimes.eps}\\[3mm]
\begin{flushleft}
This figure shows the estimation time in seconds in numerical experiments with different sample sizes, for Efficient VB, Gaussian VB, Hybrid VB, and MCMC. These are indicated by the solid yellow, dotted blue, purple dashed, and solid black line, respectively.
\end{flushleft}
\label{fig:ComputationTimes}
\end{figure}
The substantial reduction in computational costs by Efficient VB can be explained by two properties of its variational approximation to the states. First, Efficient VB does not require computationally costly filtering steps to calibrate its approximation to the states. This explains the computational gains relative to Hybrid VB, which samples from the exact conditional density of the states by forward filtering and backward smoothing. Second, the number of variational parameters in Efficient VB depends only on the dimension of $\boldsymbol\theta$. On the other hand, the number of variational parameters in Gaussian VB also depends on the dimension of $\mathbf{x}$, making the required gradient and matrix operations computationally costly, especially as the sample size $T$ increases.
The computation time of Efficient VB is approximately equally distributed between three steps of Algorithm~\ref{alg:vi}: the calibration of the approximation to the states in line 6, the generation of the states in line 10 and the estimation of the gradient in line 11. The generation of the states is efficient since it does not require filtering steps, and so is evaluation of the gradient as its dimension is not affected by $T$. On the other hand, calibration of $q(\mathbf{x}|\mathbf{y})$ can be costly, since it involves a recursive sequence of linear regressions that increases in the number of observations. As a result, we only run step 6 of the algorithm every 200 iterations. The results in Figure \ref{fig:SVlarge} indicate that this choice of update frequency does not hinder the accuracy of the model. This is also corroborated by the sensitivity analysis presented in Appendix~\ref{A:numerical}, which demonstrates that more frequent updates of $q(\mathbf{x}|\mathbf{y})$ do not produce higher accuracy of the variational approximation for either small or large sample sizes.
\section{Introduction}
While maximum likelihood estimation plays a central role in statistical inference, today
its application is challenged in a number of fields where modern technologies allow
scientists to collect data in unprecedented size and complexity. These fields include
genetics, biology, environmental research, meteorology and physics, to name a few. Two main
issues arise when attempting to apply traditional maximum likelihood to high-dimensional or complex
data. The first concerns modelling and model selection, since high-dimensional data
typically
imply complex models for which the full likelihood function is difficult, or
impossible, to specify. The second relates to computing, when the full likelihood function
is available but it is just too complex to be evaluated.
The above limitations of traditional maximum likelihood have motivated the development of
composite likelihood methods, which avoid the issues from full
likelihood maximization by combining a set of low-dimensional
likelihood objects. \citeasnoun{Besag74} was an early proponent of
composite likelihood estimation in the
context of data with spatial dependence; \citeasnoun{Lindsay88} developed composite
likelihood inference in its generality and systematically studied its properties. Over the years,
composite likelihood methods have proved useful in a range of complex applications, including models
for geostatistics, spatial extremes and statistical genetics. See
\citeasnoun{Varin&al11} for a comprehensive survey of composite likelihood
theory and applications.
Let $\boldsymbol{X}=(X_1, \dots, X_d)^T$ be a $d \times 1$ random vector with pdf (or pmf)
$f(\boldsymbol{x}| \boldsymbol{\theta}_0)$, where $\boldsymbol{\theta}_0 \in \boldsymbol{\Theta} \subseteq \mathbb{R}^p$, $p \geq 1$, is the
unknown parameter. From independent observations $\boldsymbol{X}^{(1)},\dots, \boldsymbol{X}^{(n)}$, one typically
computes the maximum likelihood estimator (MLE), $\hat{\boldsymbol{\bftheta}}_{mle}$, by maximizing the
likelihood function $L_{mle}(\boldsymbol{\theta}) = \prod_{i=1}^n f(\boldsymbol{X}^{(i)}| \boldsymbol{\theta})$.
Now suppose that
complications in the $d$-dimensional pdf (pmf) $f(\boldsymbol{x}| \boldsymbol{\theta})$ make it difficult to specify (or compute)
specify (or compute) $L_{mle}(\boldsymbol{\theta})$ as the data dimension $d$ grows, but it is relatively easy to
specify (or compute) one-, two-, or higher-dimensional distributions up to some
order for some functions of $X_1,\dots, X_d$. One can then follow \citeasnoun{Lindsay88}
to estimate $\boldsymbol{\theta}$ by the maximum composite likelihood estimator (McLE),
which maximizes the composite likelihood function:
\begin{equation} \label{Lcl}
L_{cl}(\boldsymbol{\theta}) = \prod_{m=1}^{M} L_m(\boldsymbol{\theta}),
\end{equation}
where each $L_m(\boldsymbol{\theta})$ is a user-specified partial likelihood (or sub-likelihood)
depending on marginal or conditional events on variables. For example, $L_m$ can be
defined using a marginal event
$\{x_m\}$ (marginal composite likelihood), pairs of variables such
as $\{x_1, x_2\}$ (pairwise likelihood), or conditional events like $\{x_1, x_2\}|\{x_1\}$
(conditional composite likelihood). For simplicity, we assume that $\boldsymbol{\theta}$ is common to all
sub-likelihood components, so that any factorization based on a subset of $\{L_m(\boldsymbol{\theta})$,
$m=1,\dots, M\}$ yields a valid objective function.
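As a concrete illustration of (\ref{Lcl}), the sketch below builds a pairwise composite likelihood for an equicorrelated Gaussian model with a single correlation parameter and maximizes it over a grid. This is a hypothetical Python example, not part of the paper; the model, seed and grid are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def pair_loglik(x1, x2, rho):
    # Bivariate standard-normal log-density with correlation rho,
    # up to an additive constant (zero means and unit variances assumed).
    q = (x1**2 - 2*rho*x1*x2 + x2**2) / (1 - rho**2)
    return -0.5*np.log(1 - rho**2) - 0.5*q

def composite_loglik(X, rho, omega=None):
    # Pairwise composite log-likelihood: weighted sum of sub-likelihoods
    # over the M = d(d-1)/2 variable pairs; omega is the binary rule.
    pairs = list(combinations(range(X.shape[1]), 2))
    if omega is None:
        omega = np.ones(len(pairs), dtype=int)  # keep every component
    total = 0.0
    for w, (s, t) in zip(omega, pairs):
        if w:
            total += pair_loglik(X[:, s], X[:, t], rho).sum()
    return total

rng = np.random.default_rng(0)
d, n, rho0 = 5, 400, 0.5
S = rho0*np.ones((d, d)) + (1 - rho0)*np.eye(d)  # equicorrelation matrix
X = rng.multivariate_normal(np.zeros(d), S, size=n)

# Maximum composite likelihood estimate of rho over a grid of candidates.
grid = np.linspace(-0.9, 0.9, 181)
rho_hat = grid[np.argmax([composite_loglik(X, r) for r in grid])]
```

Dropping a pair simply zeroes its entry of `omega`, which is exactly the role of $\boldsymbol{\omega}$ in (\ref{Lcl}).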
Although the composite likelihood approach provides a flexible framework with a sound
theory for making inference about $\boldsymbol{\theta}$ in situations involving multivariate data,
there exist at least two challenges hindering the efficiency improvement and feasible
computing of McLE in applications. The first challenge lies with selecting the right
sub-likelihood
components for constructing an informative composite likelihood function. The current
practice of keeping all plausible factors in (\ref{Lcl}) is not well justified in terms
of efficiency relative to MLE, since the inclusion of redundant factors can dramatically
inflate the variance of the corresponding composite likelihood estimator
(e.g., see \citeasnoun{Cox&Reid04}). A better strategy would be to choose a subset of
likelihood components which are maximally informative on $\boldsymbol{\theta}_0$, and drop noisy
or redundant components to the maximum extent. However, little work is found in the
literature in regard to optimal selection of sub-likelihood components.
The second challenge lies with the computational complexity involved in the maximization
of $L_{cl}(\boldsymbol{\theta})$, which can quickly become infeasible as $d$ (and $M$) increases.
In particular, note that computing $L_{cl}$ involves $M \times N_{ops}(d_{cl})$ operations,
where $N_{ops}(d_{cl})$ is the number of operations for each sub-likelihood component. The
computational burden is exacerbated when $\boldsymbol{\Theta}$ is relatively large and the different
sub-likelihood factors $L_m(\boldsymbol{\theta})$ do not depend on distinct elements of
$\boldsymbol{\theta}$. One would like this computational complexity to be reduced to a manageable level
by applying parsimonious likelihood composition in the presence of high- (or ultra-high-) dimensional
data.
Motivated by the aforementioned challenges, in this paper we propose a new class of
stochastic selection algorithms for optimal likelihood composition. Our method uses the Gibbs sampler,
a specific Markov chain Monte Carlo (MCMC)
sampling scheme, to generate informative -- yet parsimonious -- solutions. The resulting
estimates will converge to the composition maximally informative about the target parameter $\boldsymbol{\theta}_0$
as the underlying Markov chain reaches equilibrium. This is because sub-likelihood
components generated in the algorithms are drawn according to probabilities determined by
a criterion
minimizing the McLE's variance or its consistent approximation. Theory of unbiased estimating
equations prescribes McLE's asymptotic variance as an optimality criterion, i.e., the
$O_F$-optimality criterion, see \citeasnoun[Ch. 2]{Heyde97}, but such an ideal
objective has
scarce practical utility due to the mathematical complexity and computational cost
involved in evaluating the numerous covariance terms implied by the asymptotic variance expression
\cite{Lindsay&al2011}. To address this issue, we replace the asymptotic variance by
a rather inexpensive jackknife variance estimator computed by a one-step Newton-Raphson iteration.
Such a replacement is shown to work very well based on our numerical study.
Another advantage of our approach is that proper use of the Gibbs sampler can generate
sparse
composition rules, i.e., composite likelihoods involving a relatively small number of
informative sub-likelihood components. Note that the model space implied by the set of all
available sub-likelihood components can be large, even when $d$ is moderate.
For example, if $d=20$, we have
$2^M=2^{{d}\choose{2}}=2^{190}$ possible composition rules based on pair-wise likelihood
components alone. To cope with such a high-dimensionality, we combine Gibbs sampling with a composite
likelihood stochastic stability mechanism. Analogously to the stochastic stability selection proposed by
\citeasnoun{Mein10} in the context of high-dimensional model selection, our approach selects a small
but sufficient number of informative likelihood components through the control of the error rates of
false discoveries.
The paper is organized as follows: In Section \ref{sec2}, we outline the main framework
and
basic concepts related to composite likelihood estimation. We also describe the $O_F$-optimality
criterion and introduce its jackknife approximation. In Section \ref{sec3}, we describe our core
algorithm for simultaneous likelihood estimation and selection. In Section \ref{sec4}, we
discuss an extension of our algorithm by incorporating the ideas of model complexity penalty and
stochastic stability selection. This leads to a second algorithm for parsimonious likelihoods composition
for high-dimensional data. In Section
\ref{sec5}, we illustrate our methods through numerical examples involving simulated data and real
genetic single nucleotide polymorphism (SNP) data from a breast cancer case-control
study. Section \ref{sec:remarks} concludes the paper with final remarks.
\section{Sparse Composite Likelihood Functions} \label{sec2}
\subsection{Binary Likelihood Composition}
Let $\{ \mathcal{A}_1, \dots, \mathcal{A}_{M}\}$ be a set of marginal or conditional sample sub-spaces
associated with probability density functions (pdfs) $f_{m}(\boldsymbol{x} \in \mathcal{A}_{m}| \boldsymbol{\theta})$. See
\citeasnoun{Varin&al11} for interpretation and examples of $\mathcal{A}_m$. Given
independent $d$-dimensional
observations $\boldsymbol{X}^{(1)},\dots, \boldsymbol{X}^{(n)} \sim f(\boldsymbol{x}| \boldsymbol{\theta}_0)$, $\boldsymbol{\theta}_0 \in \boldsymbol{\Theta}
\subseteq \mathbb{R}^p$, $p \geq 1$, we define the composite log-likelihood function:
\begin{align} \label{comp_lik}
\ell_{cl}(\boldsymbol{\theta}, \boldsymbol{\omega}) =
\sum_{m=1}^{M}
\omega_{m}
\ell_{m}(\boldsymbol{\theta}),
\end{align}
where $\boldsymbol{\omega}=(\omega_1, \dots, \omega_{M})^T \in \boldsymbol{\Omega} =\{0,1\}^{M}$, and
$\ell_{m}(\boldsymbol{\theta})$ is the partial log-likelihood
\begin{align}
\ell_{m}(\boldsymbol{\theta}) = \sum_{i=1}^n
\log f_{m}(\boldsymbol{X}^{(i)} \in \mathcal{A}_m| \boldsymbol{\theta}).
\end{align}
Each partial likelihood object (sub-likelihood) $\ell_{m}(\boldsymbol{\theta})$ is allowed to
be selected or not, depending on whether $\omega_m$ is $1$ or $0$. We aim to
approximate the unknown complete log-likelihood function $\ell_{mle}(\boldsymbol{\theta})= \log
L_{mle}(\boldsymbol{\theta})$ by selecting a few -- yet the most informative --
sub-likelihood objects from the $M$ available objects, where $M$ is allowed to be larger than $n$
and $p$. Given
the composition rule $\boldsymbol{\omega} \in \boldsymbol{\Omega}$, the maximum composite likelihood estimator (McLE),
denoted by $\hat{\boldsymbol{\bftheta}}(\boldsymbol{\omega})$, is defined by the
solution of the following system of estimating equations:
\begin{align} \label{comp_score}
\mathbf{0} = \sum_{i=1}^n \boldsymbol{U}^{(i)}(\boldsymbol{\theta}, \boldsymbol{\omega}) := \sum_{i=1}^n \sum_{m=1}^{M} \omega_m
\boldsymbol{U}^{(i)}_{m}(\boldsymbol{\theta}),
\end{align}
where
$
\boldsymbol{U}^{(i)}_m(\boldsymbol{\theta}) = \nabla_{\boldsymbol{\theta}} \log f_m(\boldsymbol{X}^{(i)} \in \mathcal{A}_m|\boldsymbol{\theta})
$,
$m=1,\dots,M$, $i=1,\dots,n$, are unbiased partial score functions. Under standard regularity conditions on the
sub-likelihoods
\cite{Lindsay88}, $\sqrt{n}(\hat{\boldsymbol{\bftheta}}(\boldsymbol{\omega}) - \boldsymbol{\theta}_0)
\overset{\mathcal{D}}{\rightarrow}
N_p(\mathbf{0},
\mathbf{V}_0(\boldsymbol{\omega}))$ with asymptotic variance given by the $p \times p$ matrix
\begin{align} \label{asyvar}
\mathbf{V}_0(\boldsymbol{\omega}) = \mathbf{V}(\boldsymbol{\theta}_0, \boldsymbol{\omega}) = \mathbf{H}(\boldsymbol{\theta}_0,
\boldsymbol{\omega})^{-1} \mathbf{K}(\boldsymbol{\theta}_0, \boldsymbol{\omega}) \mathbf{H}(\boldsymbol{\theta}_0,
\boldsymbol{\omega})^{-1},
\end{align}
where
$$
\mathbf{H}(\boldsymbol{\theta}, \boldsymbol{\omega}) = \sum_{m=1}^{M}
\omega_m \mathbf{H}_m(\boldsymbol{\theta}),
\ \
\mathbf{K}(\boldsymbol{\theta}, \boldsymbol{\omega}) = Var\left[\sum_{m=1}^{M}
\omega_{m}
\boldsymbol{U}_{m}(\boldsymbol{\theta}) \right],
$$
$\mathbf{H}_m(\boldsymbol{\theta}) = Var(\boldsymbol{U}_m(\boldsymbol{\theta}))$ is the $p\times p$ sensitivity matrix for the
$m$th component, and $\boldsymbol{U}_m(\boldsymbol{\theta}) =\nabla_{\boldsymbol{\theta}} \log f_m(\boldsymbol{X}
\in \mathcal{A}_m|\boldsymbol{\theta})$.
The main approach followed here is to minimize, in some sense, the
asymptotic variance, $\mathbf{V}_0(\boldsymbol{\omega})$. To this end, theory of unbiased
estimating equations suggests that both matrices $\mathbf{H}$ and $\mathbf{K}$ should be considered in order to
achieve this goal (e.g., see \citeasnoun[Chapter 2]{Heyde97}). On one hand,
note that $\mathbf{H}$ measures the covariance between the composite likelihood score $\boldsymbol{U}(\boldsymbol{\theta},
\boldsymbol{\omega})=\sum_{m=1}^M\omega_m
\boldsymbol{U}_m(\boldsymbol{\theta}) =\sum_{m=1}^M \omega_m\nabla_{\boldsymbol{\theta}} \log f_m(\boldsymbol{X} \in \mathcal{A}_m|\boldsymbol{\theta})$
and the MLE score $\boldsymbol{U}_{mle}(\boldsymbol{\theta}) = \nabla_{\boldsymbol{\theta}} \log
f(\boldsymbol{X}|\boldsymbol{\theta})$. To see this, differentiate both sides of
$E[\boldsymbol{U}(\boldsymbol{\theta}_0,\boldsymbol{\omega})]=\mathbf{0}$ to obtain
$
\mathbf{H}(\boldsymbol{\theta}_0, \boldsymbol{\omega}) = E \left[
\boldsymbol{U}_{mle}(\boldsymbol{\theta}_0) \boldsymbol{U}(\boldsymbol{\theta}_0, \boldsymbol{\omega})^T \right].
$
This shows that adding sub-likelihood components is desirable, since it increases the
covariance with the full likelihood. On the other hand, including too many correlated
sub-likelihood components inflates the variance through the covariance terms in
$\mathbf{K}(\boldsymbol{\theta}_0, \boldsymbol{\omega})$.
\subsection{Fixed-Sample Optimality and its Jackknife Approximation} \label{OF}
The objective of minimizing the asymptotic variance is still
undefined since $\mathbf{V}(\boldsymbol{\theta}_0, \boldsymbol{\omega})$ in (\ref{asyvar}) is a $p \times p$ positive
semidefinite matrix. Therefore, we consider the following one-dimensional
objective function
\begin{eqnarray} \label{ideal}
\label{g} g_0(\boldsymbol{\omega})= \log \det \{ \mathbf{V}(\boldsymbol{\theta}_0, \boldsymbol{\omega}) \} = \log \det \{
\mathbf{K}(\boldsymbol{\theta}_0, \boldsymbol{\omega}) \} - 2 \log \det \{ \mathbf{H}(\boldsymbol{\theta}_0, \boldsymbol{\omega}) \}.
\end{eqnarray}
The minimizer, $\boldsymbol{\omega}_0$, of the ideal objective (\ref{ideal}) corresponds to the
$O_F$-optimal solution (fixed-sample optimality) (e.g., see \citeasnoun{Heyde97}, Chapter
2). Clearly, such a program still lacks practical relevance, since (\ref{ideal}) depends
on the unknown parameter $\boldsymbol{\theta}_0$. Therefore, $g_0(\boldsymbol{\omega})$ should be replaced by some
sample-based estimate, say $\hat{g}_0(\boldsymbol{\omega})$.
One option is to use the following consistent estimates of
$\mathbf{H}(\boldsymbol{\theta}_0, \boldsymbol{\omega})$ and $\mathbf{K}(\boldsymbol{\theta}_0, \boldsymbol{\omega})$ in
(\ref{g}):
\begin{eqnarray}
\hat{\mathbf{H}}(\boldsymbol{\omega}) = \dfrac{1}{n-1}\sum_{m=1}^{M} \sum_{i=1}^n \omega_m
\boldsymbol{U}^{(i)}_m(\hat{\boldsymbol{\bftheta}})
\boldsymbol{U}^{(i)}_m(\hat{\boldsymbol{\bftheta}})^T, \ \
\hat{\mathbf{K}}(\boldsymbol{\omega}) = \dfrac{1}{n} \sum_{i=1}^n
\boldsymbol{U}^{(i)}(\hat{\boldsymbol{\bftheta}}, \boldsymbol{\omega})
\boldsymbol{U}^{(i)}(\hat{\boldsymbol{\bftheta}}, \boldsymbol{\omega})^T,
\end{eqnarray}
where the estimator $\hat{\boldsymbol{\bftheta}}=\hat{\boldsymbol{\bftheta}}(\boldsymbol{\omega})$ is the McLE. Although this strategy works
in simple models when $n$ is relatively large and $M$ is small, the estimator
$\hat{\mathbf{K}}(\boldsymbol{\omega})$ is known to be unstable when $n$ is small compared to
$\dim(\boldsymbol{\Theta})$ \cite{Varin&al11}. Another issue with this approach in high-dimensional datasets
is that the number of operations required to compute $\hat{\mathbf{K}}(\boldsymbol{\omega})$ (or $\mathbf{K}(\boldsymbol{\theta},
\boldsymbol{\omega})$)
increases quadratically in $\sum_{m=1}^{M} \omega_m$.
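For a scalar parameter ($p=1$) the plug-in estimators above reduce to scalars and the sandwich becomes $\hat{K}/\hat{H}^2$. The sketch below illustrates this for a pairwise Gaussian example; the finite-difference scores and all names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from itertools import combinations

def pair_loglik(x1, x2, rho):
    # Bivariate standard-normal log-density (up to an additive constant).
    q = (x1**2 - 2*rho*x1*x2 + x2**2) / (1 - rho**2)
    return -0.5*np.log(1 - rho**2) - 0.5*q

def pair_scores(X, rho, eps=1e-6):
    # n x M matrix of per-observation scores U_m^{(i)}, approximated by
    # central finite differences (scalar parameter, p = 1).
    pairs = list(combinations(range(X.shape[1]), 2))
    cols = [(pair_loglik(X[:, s], X[:, t], rho + eps)
             - pair_loglik(X[:, s], X[:, t], rho - eps)) / (2*eps)
            for s, t in pairs]
    return np.column_stack(cols)

def sandwich_variance(X, rho, omega):
    # Plug-in H-hat and K-hat, and the sandwich V = H^{-1} K H^{-1}.
    n = X.shape[0]
    U = pair_scores(X, rho)
    H_hat = (omega * U**2).sum() / (n - 1)  # sum_m omega_m sum_i (U_m^{(i)})^2
    tot = (omega * U).sum(axis=1)           # composite score per observation
    K_hat = (tot**2).sum() / n
    return K_hat / H_hat**2

rng = np.random.default_rng(0)
d, n, rho0 = 4, 300, 0.4
S = rho0*np.ones((d, d)) + (1 - rho0)*np.eye(d)
X = rng.multivariate_normal(np.zeros(d), S, size=n)
M = d*(d - 1)//2
v_all = sandwich_variance(X, rho0, np.ones(M, dtype=int))
```

Note how `K_hat` requires the squared total score per observation, so its cost grows with the number of selected components, in line with the quadratic cost noted above.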
To reduce the computational burden and avoid numerical instabilities, we estimate $g_0$ by the
following one-step jackknife criterion:
\begin{equation}
\hat{g}(\boldsymbol{\omega}) = \log \det \left\{ \sum_{i=1}^n \left(
\hat{\boldsymbol{\bftheta}}(\boldsymbol{\omega})^{(-i)} - \overline{\boldsymbol{\theta}}(\boldsymbol{\omega}) \right) \left(
\hat{\boldsymbol{\bftheta}}(\boldsymbol{\omega})^{(-i)} - \overline{\boldsymbol{\theta}}(\boldsymbol{\omega})\right)^T\right\},
\end{equation}
where the pseudo-value $\hat{\boldsymbol{\bftheta}}(\boldsymbol{\omega})^{(-i)}$ is a composite
likelihood estimator based on a sample without observation $\boldsymbol{X}^{(i)}$, and
$\overline{\boldsymbol{\theta}}(\boldsymbol{\omega})= \sum_{i=1}^n \hat{\boldsymbol{\bftheta}}(\boldsymbol{\omega})^{(-i)}/n$. Alternatively, one could
use the delete-$k$ jackknife estimate, where $k>1$ observations at a time are deleted to compute
the pseudo-values. The delete-$k$ version is computationally cheaper than
delete-$1$ jackknife and therefore should be preferred when the sample size, $n$, is moderate or
large.
Other approaches -- including bootstrap -- should be
considered depending on the model set up. For example, block re-sampling
techniques such as the block-bootstrap (see \citeasnoun{Hall95} and subsequent papers) are viable
options for spatial data and time series.
When the sub-likelihood scores are in closed form, the pseudo-values can be
efficiently approximated by the following one-step Newton-Raphson iteration:
\begin{equation} \label{pseudovalues}
\hat{\boldsymbol{\bftheta}}(\boldsymbol{\omega})^{(-i)} = \tilde{\boldsymbol{\theta}} + \left( \sum_{m=1}^{M}
\omega_m \sum_{j \neq i} \boldsymbol{U}_m^{(j)}(\tilde{\boldsymbol{\theta}})
\boldsymbol{U}_m^{(j)}(\tilde{\boldsymbol{\theta}})^T \right)^{-1} \left( \sum_{m=1}^{M}\omega_m \sum_{j \neq i}
\boldsymbol{U}_m^{(j)}(\tilde{\boldsymbol{\theta}}) \right),
\end{equation}
where $\tilde{\boldsymbol{\theta}}$ is any root-$n$ consistent estimator of $\boldsymbol{\theta}$. Note that
$\tilde{\boldsymbol{\theta}}$ does not need to coincide with the McLE, $\hat{\boldsymbol{\bftheta}}(\boldsymbol{\omega})$, so a
computationally cheap initial estimator -- based only on a small subset of
sub-likelihoods -- may be considered. Remarkably, the number of
operations required in the Newton-Raphson iteration (\ref{pseudovalues}) grows linearly in the
number of sub-likelihood components, so that the one-step jackknife objective has
computational complexity comparable to a single
evaluation of the composite likelihood score (\ref{comp_score}). Large sample properties of
the jackknife estimator of the McLE's asymptotic variance can be derived under
regularity conditions on $\boldsymbol{U}_m$ and $\mathbf{H}_m$ analogous to those described in
\citeasnoun{Shao92}. Then, for the one-step jackknife using the
root-$n$ consistent starting point $\tilde{\boldsymbol{\theta}}$, we
have
$
n \left( \hat{g}(\boldsymbol{\omega}) - g_0(\boldsymbol{\omega}) \right) \overset{p}{\rightarrow}
0, \ \ \text{as} \ \ n\rightarrow \infty,
$
uniformly on $\boldsymbol{\Omega}$. Moreover, the estimator $\hat{g}(\boldsymbol{\omega})$ is asymptotically
equivalent to the classic jackknife estimator.
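A minimal sketch of the one-step jackknife criterion for a scalar parameter, using a toy model in which each coordinate contributes the marginal normal score $U_m^{(i)} = x_{im} - \theta$ (an illustrative assumption). The leave-one-out sums in (\ref{pseudovalues}) are obtained by subtracting observation $i$'s share from precomputed totals, which keeps the cost linear in the number of selected components.

```python
import numpy as np

def g_hat(X, omega, theta_tilde):
    # One-step jackknife criterion for a scalar parameter: pseudo-values
    # follow the one-step Newton-Raphson update, then g-hat is the log of
    # the (1 x 1) "determinant" of the pseudo-value scatter.
    n = X.shape[0]
    U = (X - theta_tilde) * omega        # n x M weighted scores U_m^{(i)}
    outer_tot = (U**2).sum()             # sum_m omega_m sum_j (U_m^{(j)})^2
    score_tot = U.sum()                  # sum_m omega_m sum_j U_m^{(j)}
    pseudo = np.empty(n)
    for i in range(n):                   # delete-1: subtract row i's share
        denom = outer_tot - (U[i]**2).sum()
        numer = score_tot - U[i].sum()
        pseudo[i] = theta_tilde + numer / denom
    dev = pseudo - pseudo.mean()
    return np.log((dev**2).sum())

rng = np.random.default_rng(1)
X = rng.normal(2.0, 1.0, size=(50, 4))   # 4 independent estimates of a mean
g_all = g_hat(X, np.ones(4, dtype=int), X.mean())
g_two = g_hat(X, np.array([1, 1, 0, 0]), X.mean())
```

In this toy model adding informative components shrinks the jackknife variance, so $\hat{g}$ with all four columns is typically smaller than with two.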
\section{Parameter estimation and likelihood composition} \label{sec3}
\subsection{Likelihood Composition via Gibbs Sampling} \label{sec3.1}
When computing the McLE of $\boldsymbol{\theta}$, our objective is to find an optimal binary
vector $\boldsymbol{\omega}^\ast$ to estimate
$
\boldsymbol{\omega}_0 = {\text{argmin}}_{\boldsymbol{\omega} \in \boldsymbol{\Omega}} \
g_0(\boldsymbol{\omega}),
$
where $g_0(\cdot)$ is the ideal objective function defined in (\ref{ideal}).
Typically, the population quantity $g_0$ cannot be directly assessed, so we replace $g_0$
with the sample-based jackknife estimate $\hat{g}$ described in Section \ref{sec2} and aim at
finding
$$
\boldsymbol{\omega}^\ast = \underset{\boldsymbol{\omega} \in \boldsymbol{\Omega}}{\text{argmin}} \
\hat{g}(\boldsymbol{\omega}).
$$
This task, however, is computationally
infeasible through enumerating the space $\boldsymbol{\Omega}$ if $d$ is even moderately large.
For example, for composite likelihoods defined based on all pairs of variables $(X_s, X_t)$,
$1\leq s<t\leq d$ (pairwise likelihood), $\boldsymbol{\Omega}$ contains $2^{{d}\choose{2}}=2^{190}$ elements
when $d=20$.
To overcome this enumeration complexity, we carry out a random search method based on Gibbs
sampling. We regard the weight vector $\boldsymbol{\omega}$ as a random vector following the joint probability
mass function (pmf)
\begin{equation} \label{Gibbs_prob}
\pi_\tau(\boldsymbol{\omega}) = \dfrac{1}{Z(\tau)}\exp \left\{-\tau \hat{g}(\boldsymbol{\omega})
\right\}, \ \ \boldsymbol{\omega} \in \boldsymbol{\Omega},
\end{equation}
where $Z(\tau)= \sum_{\boldsymbol{\omega} \in \boldsymbol{\Omega}} \exp \{-\tau \hat{g}(\boldsymbol{\omega}) \}$ is
the normalizing constant. The above distribution
depends on the tuning parameter $\tau>0$,
which controls the extent to which we emphasize larger probabilities (and reduce smaller
probabilities) on $\boldsymbol{\Omega}$. Then $\boldsymbol{\omega}^*$ is also the mode of $\pi_\tau(\boldsymbol{\omega})$, meaning
that
$\boldsymbol{\omega}^*$ will have the highest probability of appearing, and will be more likely to appear earlier
rather than later, if a random sequence of $\boldsymbol{\omega}$
is to be generated from $\pi_\tau(\boldsymbol{\omega})$.
Therefore, estimating $\boldsymbol{\omega}^*$ can be readily done based on the random sequence generated from
$\pi_\tau(\boldsymbol{\omega})$. But generating a random sample from $\pi_\tau(\boldsymbol{\omega})$ directly is difficult
because $\pi_\tau(\boldsymbol{\omega})$ contains an intractable normalizing constant $Z(\tau)$. Instead, we
will generate a Markov chain using the product of all univariate conditional pmf's with respect to
$\pi_\tau(\boldsymbol{\omega})$
as the transitional kernel. The stationary distribution of such a Markov chain can be
proved to be $\pi_\tau(\boldsymbol{\omega})$ \cite[Chapter 10]{Casella04}. Hence, the part of this Markov chain
after reaching equilibrium can
be regarded as a random sample from $\pi_\tau(\boldsymbol{\omega})$ for most purposes. The MCMC
method just described is in fact the so-called Gibbs sampling method. The key requirement
for Gibbs sampling to work is that all the univariate conditional probability distributions
of the target distribution can be simulated relatively easily.
Let us write
$\boldsymbol{\omega}=(\omega_1,\cdots,\omega_M)$; $\boldsymbol{\omega}_{m_1:m_2}=(\omega_{m_1},
\omega_{m_1+1},\cdots,\omega_{m_2})$, if $m_1\leq m_2$, and
$\boldsymbol{\omega}_{m_1:m_2}=\emptyset$ otherwise; and $\boldsymbol{\omega}_{-m}=(\omega_1,\cdots,
\omega_{m-1},\omega_{m+1},\cdots,\omega_M)$. Then it is easy to see that the conditional
pmf of $\omega_m$ given $\boldsymbol{\omega}_{-m}$, $m=1,\cdots,M$, is
\begin{equation}
\pi_\tau(\omega_m|\boldsymbol{\omega}_{-m})=\frac{\exp\{-\tau \hat{g}(\boldsymbol{\omega})\}}{
\exp\{-\tau \hat{g}(\boldsymbol{\omega}_m^{[0]})\}+
\exp\{-\tau \hat{g}(\boldsymbol{\omega}_m^{[1]})\}},\quad
\omega_m=0,1,
\label{eq7}
\end{equation}
where
$\boldsymbol{\omega}_m^{[j]}=(\boldsymbol{\omega}_{1:(m-1)},j,\boldsymbol{\omega}_{(m+1):M})$, $j=0,1$. Note that
$\pi_\tau(\omega_m|\boldsymbol{\omega}_{-m})$ is simply a Bernoulli pmf and so it is easily
generated. For
each
binary vector $\boldsymbol{\omega}$, $\hat{g}(\boldsymbol{\omega})$ is computed using the
one-step jackknife estimator described in Section \ref{sec2}. Therefore, the
probability mass function $\pi_\tau(\boldsymbol{\omega})$ is well defined for any $\tau>0$. On the other hand,
we have shown that the
Gibbs sampling can be used to generate a Markov chain from $\pi_\tau(\boldsymbol{\omega})$, from which we can
find a consistent estimator $\hat{\boldsymbol{\omega}}^*$ for the mode $\boldsymbol{\omega}^*$ if $\boldsymbol{\omega}^*$
is unique. Consequently,
the universally maximum composite log-likelihood estimator of $\boldsymbol{\theta}$ can be
approximated by the McLE, $\hat{\boldsymbol{\bftheta}}({\hat{\bfomega}}^*)$.
Note that the mode $\boldsymbol{\omega}^*$ is not necessarily unique. In this case one can still consider
consistent estimation of $\boldsymbol{\omega}^*$, but in the following sense. The minimum value
$\hat{g}(\boldsymbol{\omega}^*)$ is always unique by definition, and it
can be consistently and uniquely estimated from the Markov chain
of $\hat{g}(\boldsymbol{\omega})$ induced by the Markov chain of $\boldsymbol{\omega}$ generated from
$\pi_\tau(\boldsymbol{\omega})$, as the length of the chain goes to infinity. Therefore, any estimator
$\hat{\bfomega}^*$ such that $\hat{g}(\hat{\bfomega}^*)$ is consistent for
$\hat{g}(\boldsymbol{\omega}^*)$ can be regarded as a consistent estimator of $\boldsymbol{\omega}^*$.
Consequently, the McLE $\hat{\boldsymbol{\bftheta}}({\hat{\bfomega}}^*)$ for each such $\hat{\bfomega}^*$ is
still a universally maximum composite likelihood estimator, though not necessarily unique. From a
practitioner's viewpoint there is no need to identify all consistent estimators of $\boldsymbol{\omega}^*$
and all universally maximum composite likelihood estimators of $\boldsymbol{\theta}$; finding one such
$\hat{\bfomega}^*$, or a tight superset of it, is a sufficient advance in parsimonious and efficient
likelihood composition.
\subsection{Algorithm 1: MCMC Composite Likelihood Selection (MCMC-CLS)}
\label{sec3.2}
The above discussion motivates the steps of our core Gibbs sampling algorithm for simultaneous
composite likelihood estimation and selection. Let $\tau$ be given and fixed.
\begin{enumerate}
\item[0$ $.]
For $t = 0$, choose an initial binary vector of weights $\boldsymbol{\omega}^{(0)}$ (e.g., randomly set 5
elements of $\boldsymbol{\omega}^{(0)}$ to 1 and the rest to 0) and compute the one-step jackknife
estimator $\hat{g}(\boldsymbol{\omega}^{(0)})$.
\item[1$ $.]
For each $t=1,\cdots, T$ for a given $T$, obtain $\boldsymbol{\omega}^{(t)}$
by repeating $1.1 $ to $1.4$ for each $m=1,\cdots,M$ sequentially.
\begin{enumerate}
\item[1.1$ $.]
Compute, if not available yet, $\hat{g}(\omega_{1:(m-1)}^{(t)}, j,\omega_{(m+1):M}^{(t-1)})$,
$j=0,1$.
\item[1.2$ $.]
Compute the conditional pmf of $\omega_m$ given
$(\omega_{1:(m-1)}^{(t)},\omega_{(m+1):M}^{(t-1)})$:
\begin{equation}
\pi_\tau(\omega_m=j|\omega_{1:(m-1)}^{(t)},\omega_{(m+1):M}^{(t-1)})\propto
\exp\{-\tau \hat{g}(\omega_{1:(m-1)}^{(t)}, j,\omega_{(m+1):M}^{(t-1)})\}
\end{equation}
where $j=0,1$. Note this is a Bernoulli pmf.
\item[1.3$ $.]
Generate a random number from the Bernoulli pmf obtained in $1.2 $, and denote the result
as $\omega_m^{(t)}$.
\item[1.4$ $.]
Set $\boldsymbol{\omega}^{(t)}\leftarrow
(\omega_{1:(m-1)}^{(t)},\omega_m^{(t)},\omega_{(m+1):M}^{(t-1)})^T$. Also compute and
record
$\hat{g}(\boldsymbol{\omega}^{(t)})$.
\end{enumerate}
\item[2$ $.] Compute
$\hat{\bfomega}^*=\boldsymbol{\omega}^{(t^\ast)}$, where $t^\ast=\arg\min_{1\leq t\leq T}
\hat{g}(\boldsymbol{\omega}^{(t)})$,
and regard it as the estimate of
$\boldsymbol{\omega}^*$. Alternatively, column-combine $\boldsymbol{\omega}^{(1)},\cdots, \boldsymbol{\omega}^{(T)}$ generated in
Step $1$ into an $M\times T$ matrix
$\hat{\mathbf{W}}$; then compute row averages of $\hat{\mathbf{W}}$,
say $\overline{\omega}_1,\cdots,\overline{\omega}_M$, and set
$\tilde{\boldsymbol{\omega}}^*=(\tilde{\omega}_1^*,\cdots, \tilde{\omega}_M^*)$
where
$\tilde{\omega}_m^*=1$ if $\overline{\omega}_m\geq \xi$ and $\tilde{\omega}_m^*=0$
otherwise, where $\xi$ is some constant larger than $0.5$.
\item[3$ $.] Finally, compute $\hat{\boldsymbol{\bftheta}}(\hat{\boldsymbol{\omega}}^*)$ (or
$\hat{\boldsymbol{\bftheta}}(\tilde{{\boldsymbol{\omega}}}^*)$) and $\hat{g}(\hat{\boldsymbol{\omega}}^*)$ (or
$\hat{g}(\tilde{{\boldsymbol{\omega}}}^*)$).
\end{enumerate}
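The steps above can be sketched as follows. To keep the example self-contained, the one-step jackknife criterion is replaced by a synthetic surrogate $\hat{g}$ in which three hypothetical components are informative and the rest are redundant; the weights, $\tau$, and seed are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, T, tau = 8, 200, 5.0
informative = {0, 1, 2}                # hypothetical informative components

def g_hat(omega):
    # Synthetic stand-in for the jackknife criterion: informative components
    # decrease the (pseudo) variance, redundant ones inflate it.
    gain = sum(omega[m] for m in informative)
    cost = sum(omega[m] for m in range(M) if m not in informative)
    return -1.0*gain + 0.6*cost

omega = rng.integers(0, 2, size=M)     # Step 0: arbitrary initial rule
best, best_g = omega.copy(), g_hat(omega)
for t in range(T):                     # Step 1: systematic-scan Gibbs sweeps
    for m in range(M):
        omega[m] = 0; g0 = g_hat(omega)
        omega[m] = 1; g1 = g_hat(omega)
        p1 = 1.0 / (1.0 + np.exp(-tau*(g0 - g1)))  # Bernoulli prob of 1
        omega[m] = int(rng.random() < p1)
    g = g_hat(omega)
    if g < best_g:                     # Step 2: track the visited minimum
        best, best_g = omega.copy(), g
```

With this surrogate the chain concentrates quickly on the three informative components, mirroring how the real algorithm favors composition rules with small estimated variance.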
Firstly, note that Gibbs sampling has been used in various contexts in the literature
of model selection. \citeasnoun{George&McCulloch97} used a similar strategy to generate
the distribution of the variable indicators in Bayesian linear regression.
\citeasnoun{Qian1999} used the Gibbs sampler for selecting robust linear regression models.
\citeasnoun{Qian&Field02} used the
Gibbs sampler for selecting generalized linear regression models. \citeasnoun{Brooks&al03} and
\citeasnoun{Qian&Zhao07} used Gibbs sampling for selection in the context of time series
models.
However, to our knowledge this is the first work proposing a general-purpose Gibbs sampler for
construction of composite likelihoods.
Secondly, the sequence $\boldsymbol{\omega}^{(1)}, \dots, \boldsymbol{\omega}^{(T)}$ is a Markov chain, which
requires an initial vector $\boldsymbol{\omega}^{(0)}$ and a
burn-in period to be in equilibrium. The value of $\boldsymbol{\omega}^{(0)}$ does not affect the eventual
attainment of equilibrium and so
can be chosen arbitrarily. From a computational point of view, most components of $\boldsymbol{\omega}^{(0)}$
should be set to 0 to reduce the computing load; for example, we can randomly set all but 5 of the components to 0.
To assess whether the chain has reached equilibrium, we suggest
the control-chart method discussed in \citeasnoun{Qian&Zhao07}.
For the random variable $\hat{g}({\boldsymbol{\omega}})$, we have the
following probability inequality for any given $b>1$:
$$
Pr\left(\hat{g}(\boldsymbol{\omega}) - \min \hat{g}(\boldsymbol{\omega}) \geq b \sqrt{Var[\hat{g}(\boldsymbol{\omega}) ] +
(E[\hat{g}(\boldsymbol{\omega})] - \min \hat{g}(\boldsymbol{\omega}))^2 } \right) \leq \dfrac{1}{b^2}.
$$
This inequality can be used to find an upper control limit for $\hat{g}({\boldsymbol{\omega}})$. For example, by
setting $b=\sqrt{10}$, an at least $90\%$ upper control limit for $\hat{g}({\boldsymbol{\omega}})$
can be estimated as
$
\hat{g}^\ast + \sqrt{10 s^2 + 10 (\overline{g} -\hat{g}^\ast)^2 },
$
where $\hat{g}^\ast$, $\overline{g}$ and $s^2$ are the minimum, sample mean and sample variance
based on the first $N$ observations, $\hat{g}(\boldsymbol{\omega}^{(1)}), \dots, \hat{g}(\boldsymbol{\omega}^{(N)})$,
$N < T$, where typically we set $N=\lfloor T/2 \rfloor$. We then count the number of observations
passing the
upper control limit in the remaining sample $\hat{g}(\boldsymbol{\omega}^{(N+1)}), \dots,
\hat{g}(\boldsymbol{\omega}^{(T)})$. If more than 10\% of the points are above the control limit, then
at a significance level of not more than 10\% there is statistical evidence against
equilibrium. Upper control limits of different levels for $\hat{g}({\boldsymbol{\omega}})$ can be similarly
calculated and interpreted by choosing different values of $b$.
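As a sketch of the control-chart diagnostic, the limit and the exceedance check can be computed as below; the simulated chain is only a stand-in for the sequence of $\hat{g}$ values, and all names are illustrative.

```python
import numpy as np

def upper_control_limit(g_first, b=np.sqrt(10.0)):
    # Upper control limit of level at least 1 - 1/b^2 for g-hat, estimated
    # from the first part of the chain (b = sqrt(10) gives >= 90%).
    g_star = g_first.min()
    gbar, s2 = g_first.mean(), g_first.var(ddof=1)
    return g_star + b*np.sqrt(s2 + (gbar - g_star)**2)

rng = np.random.default_rng(3)
chain = rng.normal(0.0, 1.0, size=400)   # stand-in for g-hat(omega^(t)) values
N = len(chain) // 2
ucl = upper_control_limit(chain[:N])
exceed_frac = np.mean(chain[N:] > ucl)   # small fraction suggests equilibrium
```

Here the second half of the (stationary) chain rarely exceeds the limit, consistent with equilibrium; a drifting chain would push `exceed_frac` above the 10\% threshold.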
Thirdly, $\hat{\bfomega}^*$ computed in Step~2 is simply a sample mode of $\pi_\tau(\boldsymbol{\omega})$
based on its definition in (\ref{Gibbs_prob}). Hence, by the Ergodic Theorem for stationary Markov
chains, $\hat{\bfomega}^*$ is a strongly consistent estimator of the mode of $\pi_\tau(\boldsymbol{\omega})$ under minimal
regularity conditions. With similar arguments $\overline{\omega}_m$ is a strongly consistent
estimator of the success probability involved in the marginal distribution of $\omega_m$
induced from $\pi_\tau(\boldsymbol{\omega})$. Hence, it is not difficult to see that the resultant
estimator $\tilde{\boldsymbol{\omega}}^*$ should satisfy $\tilde{\omega}_m^*\geq \omega_m^*$
without requiring $T$ to be very large. Propositions~1 and 2 in \citeasnoun{Qian&Zhao07}
provide an exposition of this property. Therefore, the estimator $\tilde{\boldsymbol{\omega}}^*$
captures all informative sub-likelihood components with high probability.
Finally, the tuning constant $\tau$ adjusts the mixing behavior of the chain, which has important
consequences on the exploration/exploitation trade-off on the search space $\boldsymbol{\Omega}$. If $\tau$
is too small, the algorithm produces solutions approaching the global optimal value
$\boldsymbol{\omega}^\ast$ slowly; if $\tau$ is large, then the algorithm finds local optima and may
not reach $\boldsymbol{\omega}^\ast$. The former behavior corresponds to a rapidly mixing
chain, while the latter occurs when the chain is mixing too slowly. In the composite likelihood
selection setting, the main hurdle is the computational cost, so $\tau$ should be set according to
the available computing capacity, after running some graphical or numerical diagnostics
(e.g., see \citeasnoun{Casella04}). We choose to use $\tau=d$ in our empirical study, which does
not seem to create adverse effects.
\section{An extension for high-dimensional data} \label{sec4}
\subsection{Sparsity-enforcing penalization} \label{sec:penalty}
Without additional modifications, Algorithm 1 ignores the likelihood complexity, since
solutions with many sub-likelihoods have in principle the same chance to occur as those with fewer
components. To discourage selection of overly complex likelihoods, we augment the Gibbs distribution
(\ref{Gibbs_prob}) as follows:
\begin{equation} \label{augmentedgibbs}
\pi_{\tau, \lambda}(\boldsymbol{\omega}) = Z(\tau, \lambda)^{-1} \exp\{ -\tau \hat{g}_\lambda(\boldsymbol{\omega})\},
\end{equation}
where
\begin{equation} \label{g_penalized}
\hat{g}_\lambda(\boldsymbol{\omega}) = \hat{g}(\boldsymbol{\omega}) + {pen}(\boldsymbol{\omega}|\lambda), \quad \tau>0, \;
\lambda>0,
\end{equation}
$ \hat{g}(\boldsymbol{\omega})$ is the jackknifed variance objective defined in Section \ref{sec2},
$Z(\tau, \lambda)$ is the normalization constant, and $pen(\boldsymbol{\omega}|\lambda)$ is
a complexity penalty enforcing sparse solutions when $\dim(\boldsymbol{\Omega})$ is large.
Maximization of $\pi_{\tau,
\lambda}(\boldsymbol{\omega})$ is interpreted as a maximum a posteriori estimation for $\boldsymbol{\omega}$, where the
probability distribution proportional to $\exp\{ -pen(\boldsymbol{\omega}|\lambda)\}$
is regarded as a prior pmf over $\boldsymbol{\Omega}$. In this paper, we use
the penalty term of the form $\text{pen}(\boldsymbol{\omega}|\lambda) = \lambda \sum_{m=1}^{M} \omega_m$, since it
corresponds to well-established model-selection criteria. For example, the choices $\lambda=1$,
$\lambda = 2^{-1}\log n$ and $\lambda=\log \log n$ correspond to the AIC, BIC and HQC criteria,
respectively (e.g., see \citeasnoun{Claeskens08}). Other penalties could be considered as well,
depending on the model structure and available prior information; however, our empirical study
did not show these to be crucial, so they will not be explored in this paper.
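As a small illustration of these penalty levels, the mapping from criterion to $\lambda$ and the augmented objective can be written directly. This is a Python sketch under our own naming (the paper's implementation is in R):

```python
import numpy as np

def penalized_objective(g_hat, lam):
    """Return the augmented objective g_lambda(omega) = g_hat(omega)
    + lam * sum_m omega_m for a given base objective g_hat."""
    def g_lam(omega):
        return g_hat(omega) + lam * float(np.sum(omega))
    return g_lam

# Penalty levels matching classical model-selection criteria (n = sample size)
def lambda_aic():
    return 1.0

def lambda_bic(n):
    return 0.5 * np.log(n)

def lambda_hqc(n):
    return np.log(np.log(n))
```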
\subsection{Composite Likelihood Stability Selection} \label{sec:stab}
To find the optimal solution $\boldsymbol{\omega}^\ast$, one could compute a sequence of optimal values
$\hat{\bfomega}^\ast_{\lambda_1}, \dots, \hat{\bfomega}^\ast_{\lambda_B}$ and then take
$\min_{1 \leq b \leq B} \hat{g}(\hat{\bfomega}^\ast_{\lambda_b})$. There are, however, various issues in
this approach: first, the
globally optimal value $\boldsymbol{\omega}^\ast$ might not be a member
of the set $\{ \hat{\bfomega}^\ast_{\lambda_b} \}_{b=1}^B$, since the mode of
$\pi_{\tau,\lambda}(\boldsymbol{\omega})$ is not necessarily the composite likelihood solution
which minimizes $\hat{g}(\boldsymbol{\omega})$. Second, even if $\boldsymbol{\omega}^\ast$ is in
such a set, determining $\lambda$ is typically
challenging. To address the above issues, we employ the idea of stability
selection, introduced by \citeasnoun{Mein10} in the context of variable selection for linear
models. Given an arbitrary value for $\lambda$, stability selection exploits the variability of
random samples generated from $\pi_{\tau, \lambda}(\boldsymbol{\omega})$ by the Gibbs procedure, say
$\boldsymbol{\omega}^{(1)}_\lambda, \dots, \boldsymbol{\omega}^{(T)}_\lambda$, and chooses all the partial likelihoods that occur in a large fraction
of generated samples. For a given $0<\xi<1$, we define the set of stable likelihoods
by the vector
$\hat{\bfomega}^{\text{stable}}$, with elements
\begin{equation}
\hat{\omega}_m^{\text{stable}} =
\left\{
\begin{array}{ll}
1, &\text{if } \ \ \dfrac{1}{T}\sum_{t=1}^T
\omega_{\lambda,m}^{(t)}\geq \xi, \\
0, & \text{otherwise},
\end{array}
\right.
\end{equation}
so we regard as stable those sub-likelihoods selected more frequently and disregard sub-likelihood
items with low selection probabilities. Following \citeasnoun{Mein10}, we choose the tuning constant
$\xi$ using the following bound on the expected number of false selections, $V$:
\begin{equation} \label{EV}
E(V) \leq \dfrac{1}{(2\xi-1)} \dfrac{\eta_{\lambda}}{M},
\end{equation}
where $\eta_{\lambda}$ is the average number of
selected sub-likelihood components. In multiple testing, the quantity $\alpha
= E(V)/M$ is sometimes called the per-comparison error rate (PCER). By increasing $\xi$, fewer
likelihood components are selected, so that we reduce the expected number of falsely selected
components. We therefore fix the PCER at some desired value (e.g.,
$\alpha = 0.10$) and choose the threshold $\xi$ corresponding to that error rate. The unknown
quantity $\eta_\lambda$ in our setting can be estimated by the average number of sub-likelihood
components over $T$ Gibbs samples.
Finally, note that tuning $\xi$ according to (\ref{EV}) makes redundant the determination of the
optimal $\lambda$ value as long as $pen(\boldsymbol{\omega}|\lambda)$ in (\ref{g_penalized}) is not dominant
over $\hat{g}({\boldsymbol{\omega}})$. This is further supported by our empirical study where we found the effect
of $\lambda$ on $\hat{\boldsymbol{\omega}}^{\text{stable}}$ is negligible.
\subsection{Algorithm 2: MCMC Composite Likelihood with Stability Selection
(MCMC-CLS2)} \label{alg2}
The preceding discussions lead to Algorithm 2 which is essentially the same as Algorithm 1 with two
exceptions: (i) we replace $\hat{g}$ in Algorithm 1 by the augmented objective function $\hat{g}_{\lambda}$
defined in (\ref{g_penalized}); (ii) Step 2 of Algorithm 1 is replaced by the
following stability selection step.
\begin{itemize}
\item[$2^\prime$.]
Column-combine $\boldsymbol{\omega}^{(1)},\cdots, \boldsymbol{\omega}^{(T)}$ generated in Step $1$ into an $M\times T$
matrix $\hat{\mathbf{W}}$. Compute the row averages of $\hat{\mathbf{W}}$, denoted as
$( \hat{\omega}_1,\cdots, \hat{\omega}_M)$, i.e. $ \hat{\omega}_m=T^{-1}\sum_{t=1}^T\omega_m^{(t)}$,
$m=1,\cdots, M$. Then set $\hat{\bfomega}^{\text{stable}}=( \hat{\omega}_1^*,\cdots, \hat{\omega}_M^*)$ where
$ \hat{\omega}_m^*=1$ if $ \hat{\omega}_m\geq \hat{\xi}_\lambda$ and $ \hat{\omega}_m^*=0$ if
$ \hat{\omega}_m<\hat{\xi}_\lambda$,
$m=1,\cdots,M$, where
$
\hat{\xi}_\lambda= \dfrac{1}{2}\left( \dfrac{\hat{\eta}_\lambda}{\alpha M^2} +
1\right),
$
and $\alpha$ is a nominal level for per-comparison error rate (e.g., 0.05 or 0.1).
\end{itemize}
The estimated threshold $\hat{\xi}_\lambda$ is obtained from (\ref{EV}) by plugging-in
$\hat{\eta}_{\lambda} = \sum_{t=1}^T \sum_{m=1}^M \omega^{(t)}_m / T$, the
sample average of the numbers of selected sub-likelihood components in the $T$ Gibbs samples.
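Step $2^\prime$ amounts to thresholding the row means of $\hat{\mathbf{W}}$ at the plug-in value $\hat{\xi}_\lambda$. A minimal Python sketch (function name ours, illustrative only):

```python
import numpy as np

def stability_select(W, alpha=0.1):
    """Step 2': W is the M x T matrix whose columns are the Gibbs samples
    omega^(1), ..., omega^(T). Returns the stable 0/1 vector and the
    plug-in threshold xi_hat = (eta_hat / (alpha * M^2) + 1) / 2."""
    M, T = W.shape
    omega_bar = W.mean(axis=1)          # selection frequency of each component
    eta_hat = W.sum() / T               # average number of selected components
    xi_hat = 0.5 * (eta_hat / (alpha * M ** 2) + 1.0)
    return (omega_bar >= xi_hat).astype(int), xi_hat
```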
\section{Numerical Examples} \label{sec5}
\subsection{Normal Variables with Common Location} \label{sec5.1}
Let $\boldsymbol{X} \sim N_d(\mu \bf1,\mathbf{\Sigma})$, where the
parameter of interest is the common location $\mu$. We study the scenario where many
components bring redundant
information on $\mu$ by considering covariance matrix $\mathbf{\Sigma}$ with elements
$\{\mathbf{\Sigma}\}_{mm} = 1$, for all $1\leq m\leq d$, and off-diagonal elements
$\{\mathbf{\Sigma}\}_{lm}=\rho>0$ if $l,m \leq d^\ast$, for some
$d^\ast < d$, while $\{\mathbf{\Sigma}\}_{lm}=0$ elsewhere.
We consider the one-wise score composite likelihood estimator solving $0 = \sum_{m=1}^d
\omega_{m}U_{m}(\mu)$, where $U_m(\mu) = \sum_{i=1}^n (X^{(i)}_m - \mu)$.
It is easy to see that the composite likelihood estimator is
$
\widehat{\mu}_{cl}(\boldsymbol{\omega}) = {\sum_{m=1}^d \omega_m \overline{X}_m
}/{\sum_{m=1}^d \omega_m }
$
where $\overline{X}_m = \sum_{i=1}^n X_m^{(i)}/n$. For this simple model, the jackknife criterion
$\hat{g}(\cdot)$ can be easily computed in closed form. The pseudo-values are
$\widehat{\mu}^{(-i)}_{cl}(\boldsymbol{\omega}) = \sum_{m=1}^d
\omega_m
\overline{X}^{(-i)}_m/\sum_{m=1}^d \omega_m$, and the average of pseudo-values is
$\widehat{\mu}_{cl}$. It can be shown that
$$
\hat{g}(\boldsymbol{\omega}) = \log \sum_{i=1}^n \left( \sum_{m=1}^d \omega_m (X_m^{(i)} - \overline{X}_m) \right)^2
- 2 \log \left( \sum_{m=1}^d \omega_m \right),
$$
up to a constant not depending on $\boldsymbol{\omega}$. It can also be shown that the
$O_F$-criterion has the following expression
\begin{align}\label{varex}
g_0(\boldsymbol{\omega}) = \log Var\left(\widehat{\mu}_{cl}(\boldsymbol{\omega}) \right) \propto
\log \left(\sum_{m=1}^d \omega_m + 2 \rho \sum_{ l < m \leq
d^\ast
} \omega_{l}\omega_{m} \right) - 2\log\left( \sum_{m=1}^d \omega_m\right),
\end{align}
depending on the unknown parameter $\rho$. This suggests that
including too many correlated (redundant) components (with $\rho
\neq 0$) damages McLE's variance as $d$ increases.
Particularly, setting $\omega_j =1$, for all $j=1,\dots, d$,
implies $Var(\widehat{\mu}_{cl}(\boldsymbol{\omega}) ) = O(1)$, while
choosing only uncorrelated sub-likelihoods ($\omega_j=0$ if $2\leq j \leq d^\ast$, and
$\omega_j=1$
elsewhere), gives
$
Var(\widehat{\mu}_{cl}(\boldsymbol{\omega})) = O(d^{-1})
$.
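The closed-form criterion above is straightforward to evaluate. The following Python sketch (our naming, illustrative only) computes $\hat{g}(\boldsymbol{\omega})$ from an $n \times d$ data matrix:

```python
import numpy as np

def g_hat_location(omega, X):
    """Closed-form jackknife criterion (up to a constant) for the
    common-location model: log sum_i (sum_m omega_m (X_m^(i) - Xbar_m))^2
    - 2 log(sum_m omega_m). X is an n x d data matrix, omega a 0/1 vector."""
    omega = np.asarray(omega, dtype=float)
    Xc = X - X.mean(axis=0)             # centered columns X_m^(i) - Xbar_m
    s = Xc @ omega                      # weighted residual sum per observation
    return float(np.log(np.sum(s ** 2)) - 2.0 * np.log(np.sum(omega)))
```

Comparing its value at the all-ones rule and at a rule keeping only uncorrelated components illustrates the $O(1)$ versus $O(d^{-1})$ variance gap discussed above.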
In Table 1, we show Monte Carlo simulation results from $B=250$ runs of Algorithm 1 (MCMC-CLS1) using
the two approaches described in Section \ref{sec3.2}: one
consists of choosing the best weights vector (CLS1 min); the other uses thresholding, i.e. selects elements when
the weights are selected with a sufficiently large frequency (CLS1 thres.). In the same table we also report
results for Algorithm 2 (MCMC-CLS2) based on the stochastic stability selection approach.
Our algorithms are compared with the estimator including all one-wise sub-likelihood components (No selection)
and the optimal maximum likelihood estimator (MLE). We compute the
MLE of $\mu$ in two ways: either based on using the known $\Sigma$ value or based on using the
sample covariance $\hat{\Sigma} = (n-1)^{-1} \sum_{i=1}^n (X_i- \overline{X})(X_i- \overline{X})^T$.
Note that the latter is not available when $d>n$. Our
algorithms are designed to obtain simultaneous estimation and dimension reduction in the
presence of limited information (i.e., $d>n$ or $d \gg n$), where the
optimal weights are difficult to estimate from the data. Stochastic selection with $\xi=0.7$ and $T=10d$ was
carried out for samples of $n=5, 25$ and $100$
observations from a model with $d^\ast =0.8 \times d$ correlated components, with $d=10, 30$.
To compare the methods we computed Monte Carlo estimates of the
variance ($Var$) and squared bias ($Bias^2$) of $\widehat{\mu}_{cl}(\boldsymbol{\omega})$.
Table 2 further reports the average number of selected
likelihood components (no. comp).
For all considered data dimensions, our selection methods outperform the all one-wise likelihood (AOW,
or No selection)
estimator and show relatively small losses in terms of mean squared error ($Var + Bias^2$)
compared to the MLEs. The gains in terms of
variance reduction are particularly evident for larger data dimensions (e.g., see $d=30$). At the
same time, our method tends to select mostly uncorrelated sub-likelihoods, while
discarding the redundant components which do not contribute useful information on $\mu$.
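For reference, the MLE of $\mu$ with known $\Sigma$ used in the comparison is a generalized-least-squares average of the component means. A hypothetical Python sketch (names ours):

```python
import numpy as np

def mle_common_location(X, Sigma):
    """MLE of mu in N_d(mu * 1, Sigma) with known Sigma:
    mu_hat = (1' Sigma^{-1} xbar) / (1' Sigma^{-1} 1)."""
    xbar = X.mean(axis=0)
    w = np.linalg.solve(Sigma, np.ones(X.shape[1]))  # Sigma^{-1} 1
    return float(w @ xbar / w.sum())
```

When $\Sigma$ is the identity, this reduces to the plain average of all observations.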
\begin{sidewaystable}
\begin{tabular}{llllllllllllllll}
\hline
& & & \multicolumn{2}{c}{No selection} & \multicolumn{2}{c}{CLS1 (min)} & \multicolumn{2}{c}{CLS1 (thresh.)} & \multicolumn{2}{c}{CLS2}&
\multicolumn{2}{c}{MLE (known $\Sigma$)} &
\multicolumn{2}{c}{MLE (unknown $\Sigma$)}\\
$n$ & $d$ & $\rho$ & $Var$ & $Bias^2$& $Var$ & $Bias^2$& $Var$ & $Bias^2$& $Var$ & $Bias^2$& $Var$ & $Bias^2$& $Var$ & $Bias^2$&\\
\hline
5 & 10 & 0.50 & 80.26 & 7.19 & 79.18 & 0.04 & 81.36 & 0.00 & 87.55 & 0.01 & 52.52 & 0.04 & NA & NA \\
& & 0.90 & 115.06 & 10.31 & 96.97 & 0.05 & 114.82 & 0.08 & 114.90 & 0.02 & 62.28 & 0.04 & NA & NA \\
& 30 & 0.50 & 80.08 & 7.18 & 60.61 & 1.60 & 54.06 & 1.18 & 52.16 & 0.74 & 26.22 & 0.03 & NA & NA \\
& & 0.90 & 103.50 & 9.28 & 61.27 & 0.01 & 44.14 & 0.01 & 44.68 & 0.00 & 24.93 & 0.02 & NA & NA \\
\\
25 & 10 & 0.50 & 17.14 & 8.28 & 15.27 & 0.18 & 17.45 & 0.06 & 17.73 & 0.06 & 12.01 & 0.03 & 20.99 & 0.03 \\
& & 0.90 & 19.89 & 1.78 & 14.75 & 0.07 & 18.65 & 0.03 & 19.08 & 0.02 & 13.02 & 0.02 & 21.64 & 0.04 \\
& 30 & 0.50 & 11.87 & 1.06 & 7.38 & 0.12 & 6.14 & 0.00 & 6.07 & 0.00 & 4.86 & 0.00 & NA & NA\\
& & 0.90 & 24.51 & 2.20 & 11.59 & 0.02 & 6.69 & 0.00 & 6.75 & 0.00 & 5.73 & 0.01 & NA & NA \\
\\
100& 10 & 0.50 & 4.14 & 0.37 & 3.09 & 0.02 & 3.55 & 0.03 & 4.24 & 0.09 & 2.56 & 0.02 & 2.79 & 0.02 \\
& & 0.90 & 6.18 & 0.55 & 3.25 & 0.00 & 4.58 & 0.00 & 4.58 & 0.00 & 3.12 & 0.00 & 3.37 & 0.00 \\
& 30 & 0.50 & 3.40 & 0.30 & 1.79 & 0.00 & 1.60 & 0.00 & 1.60 & 0.00 & 1.20 & 0.00 & 1.69 & 0.00 \\
& & 0.90 & 5.54 & 0.50 & 2.60 & 0.01 & 1.68 & 0.01 & 1.68 & 0.01 & 1.41 & 0.01 & 2.03 & 0.01 \\
\hline
\end{tabular}
\caption{Bias and variance of estimators for the location model $\boldsymbol{X} \sim N_d(\mu
\bf1,\mathbf{\Sigma}(\rho))$ by different methods. MCMC-CLS1 algorithm with and without thresholding (CLS1 (min) and MCMC-CLS1 (thresh.), respectively); MCMC-CLS2
algorithm with stability selection (CLS2) with $\alpha=0.1$ and $\lambda = 1$; Maximum likelihood estimator
based on $\Sigma^{-1}$, where
$\Sigma$ is known; Maximum likelihood estimator based on $\hat{\Sigma}^{-1}$, where $\hat{\Sigma}$ can be estimated if $d<n$, otherwise
the value is missing and denoted by NA. For each method we show the finite sample variance ($Var$) and squared bias ($Bias^2$) for $n=5,25,100$, $d=10,30$ and
$\rho=0.5,0.9$. Estimates based on $B=250$ Monte Carlo runs (simulation settings: $\tau = d$, $T=10d$ , $\xi = 0.7$). Monte Carlo standard errors are
smaller than $0.001$.}
\end{sidewaystable}
\begin{table}[h]
\centering
\begin{tabular}{lllcccc}
\hline
$n$ & $d$ & $\rho$ & \multicolumn{1}{c}{No selection}& \multicolumn{1}{c}{CLS1 (min)}& \multicolumn{1}{c}{CLS1 (thres.)} & \multicolumn{1}{c}{CLS2} \\
\hline
5 & 10 & 0.50 & 10 & 3.77 & 3.92 & 3.72 \\
& & 0.90 & 10 & 3.28 & 2.58 & 2.36 \\
& 30 & 0.50 & 30 & 13.94 & 11.01 & 10.84 \\
& & 0.90 & 30 & 12.04 & 6.53 & 6.43 \\
\\
25 & 10 & 0.50 & 10 & 4.30 & 3.58 & 3.26 \\
& & 0.90 & 10 & 3.27 & 2.04 & 2.00 \\
& 30 & 0.50 & 30 & 12.81 & 7.70 & 7.48 \\
& & 0.90 & 30 & 11.96 & 5.98 & 5.98 \\
\\
100 & 10 & 0.50 & 10 & 4.28 & 2.56 & 2.31 \\
& & 0.90 & 10 & 3.17 & 2.00 & 2.00 \\
& 30 & 0.50 & 30 & 12.22 & 6.08 & 6.05 \\
& & 0.90 & 30 & 11.94 & 6.00 & 6.00 \\
\hline
\end{tabular}
\caption{Number of selected sub-likelihoods by different methods. MCMC-CLS1 algorithm with and without thresholding (CLS1 (min) and MCMC-CLS1 (thresh.),
respectively); MCMC-CLS2
algorithm with stability selection (CLS2) with $\alpha=0.1$ and $\lambda = 1$. For each method we show results for $n=5,25,100$, $d=10,30$ and $\rho=0.5,0.9$.
Estimates based on $B=250$ Monte Carlo runs (simulation settings: $\tau = d$, $T=10d$ , $\xi = 0.7$). Monte Carlo standard errors are smaller than $0.01$.}
\end{table}
Next, we illustrate the selection procedure based on MCMC-CLS2 (Algorithm 2). We draw a random
sample of $n=50$ observations from the model $N_d(\mu \bf1,\mathbf{\Sigma}(\rho))$ described above with
$d=250$,
$d^\ast = 0.8 d$, and $\rho=0.9$. This corresponds to 200 strongly redundant variables and 50
independent variables. We applied Algorithm~2 with $\alpha=0.1$ and $\lambda = 1$ (corresponding to
the AIC penalty). In Figure 1 (b), we show the relative frequencies of the components,
$\overline{\omega}_m = T^{-1}\sum_{t=1}^T \omega^{(t)}_m$, $m=1,\dots,250$, in $T=1000$ MCMC
iterations. As expected, the uncorrelated sub-likelihood components (components 201--250) are
sampled much more frequently than the redundant ones (components 1--200). Figure 1
(c) shows the objective function $\hat{g}_\lambda(\hat{\bfomega}^{\text{stable}})$ (up to a constant)
evaluated at the best solution computed from past MCMC samples ($\xi =
0.7$). Figure 1 (d) shows the Hamming distance, $dist(\boldsymbol{\omega},
\boldsymbol{\omega}^\prime) = \sum_j I(\omega_j \neq \omega_j^\prime)$, between the current selected rule,
$\hat{\bfomega}^\ast$ and true optimal value, $\boldsymbol{\omega}^\ast =
(1,\underbrace{0,\dots,0}_{199},\underbrace{1,\dots,1}_{50})$. Overall, our algorithm
quickly approaches the optimal solution and the final selection has 94.0\% asymptotic
relative efficiency (ARE) compared to MLE. When no stochastic selection is applied and
all 250 sub-likelihood components are included, the relative efficiency is only 13\%. As far as computing time is concerned, our non-optimized R implementations of the MCMC-CLS1 and MCMC-CLS2 algorithms for the above example take 4.25 and 1.87 seconds per MCMC iteration, respectively. The computing time was measured on a laptop computer with an Intel\textregistered{} Core\texttrademark{} i7-2620M CPU @ 2.70GHz.
\begin{figure}[h]\label{fig1}
\centering
\begin{tabular}{cc}
\hspace{1cm}(a) & \hspace{1cm}(b) \\
\includegraphics[scale=0.4]{figure1.jpg}
&\includegraphics[scale=0.4]{figure2.jpg}\\
\hspace{1cm}(c) &\hspace{1cm} (d) \\
\includegraphics[scale=0.4]{figure5.jpg}
& \includegraphics[scale=0.4]{figure3.jpg}
\end{tabular}
\caption{Stochastic selection for Model 1, $N_d(\mu
\bf1,\mathbf{\Sigma})$, based on Algorithm 2. (a) Objective function $\hat{g}_\lambda(\boldsymbol{\omega})$
evaluated at samples drawn from $\pi_{\tau,\lambda}(\boldsymbol{\omega})$; the horizontal solid line is the
control chart limit as described in Section \ref{sec3.2}. (b) Observed frequency of
sampled sub-likelihood components. (c) Estimated objective function evaluated at the progressively
selected likelihood, $\hat{\bfomega}^{\text{stable}}$, based on past samples. (d) Hamming distance
between the progressively selected likelihood, $\hat{\bfomega}^{\text{stable}}$, and the globally
optimal solution, $\boldsymbol{\omega}^\ast$. Simulation settings: $n=50$, $d=250$, $d^\ast = 200$, $\tau = d$,
$T=1000$ , burn-in length = $250$, $\alpha = 0.1$, $\lambda=1$.}
\end{figure}
\subsection{Exchangeable Normal Variables with Unknown Correlation} \label{sec5.2}
Let $\boldsymbol{X} \sim N_d(\bf0,\mathbf{\Sigma}(\rho))$, where $\mathbf{\Sigma}(\rho) = \{
(1-\rho) \boldsymbol{I}_d + \rho
{\bf1}_d{\bf1}_d^T\}$ and $0 \leq \rho \leq 1$ is the unknown parameter of interest. The
marginal univariate sub-likelihoods do not contain information on $\rho$, so we consider pairwise
sub-likelihoods
\begin{eqnarray}
\ell_{lm}(\rho) = - \frac{n}{2} \log (1-\rho^2) - \frac{(SS_{ll}
+ SS_{mm})}{2(1-\rho^2)} + \frac{\rho SS_{lm}}{1-\rho^2}, \ \ \ 1 \leq l<m \leq d,
\end{eqnarray}
where $SS_{mm} = \sum_{i=1}^n (X^{(i)}_m)^2$ and $SS_{lm}= \sum_{i=1}^n X^{(i)}_l X^{(i)}_m$. Given
$\boldsymbol{\omega}$, we estimate $\rho$ by solving the composite score
equation $ 0 =\sum_{j<k} \omega_{jk} U_{jk}(\rho)$ by
Newton-Raphson iterations, where each pairwise score
\begin{equation}
U_{jk}(\rho) = (1+\rho^2) SS_{jk} - \rho ( SS_{jj} + SS_{kk}) + n \rho ( 1-\rho^2)
\end{equation}
is a cubic function of $\rho$.
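The Newton-Raphson step for the selected score equation uses the derivative $U'_{jk}(\rho) = 2\rho SS_{jk} - (SS_{jj}+SS_{kk}) + n(1-3\rho^2)$. A Python sketch under our own naming (the paper's code is in R):

```python
import numpy as np

def pairwise_rho(X, pairs, rho0=0.3, tol=1e-10, max_iter=100):
    """Newton-Raphson solver for the selected pairwise score equation
    0 = sum_{(j,k) in pairs} U_jk(rho), with
    U_jk(rho) = (1 + rho^2) SS_jk - rho (SS_jj + SS_kk) + n rho (1 - rho^2)."""
    n = X.shape[0]
    SS = X.T @ X                        # SS_jk = sum_i X_j^(i) X_k^(i)
    rho = rho0
    for _ in range(max_iter):
        f = 0.0
        fp = 0.0
        for j, k in pairs:
            f += (1 + rho ** 2) * SS[j, k] - rho * (SS[j, j] + SS[k, k]) \
                 + n * rho * (1 - rho ** 2)
            fp += 2 * rho * SS[j, k] - (SS[j, j] + SS[k, k]) + n * (1 - 3 * rho ** 2)
        step = f / fp
        rho -= step
        if abs(step) < tol:
            break
    return rho
```

Setting `pairs` to the sub-list indicated by $\boldsymbol{\omega}$ gives the selected estimator; using all $d(d-1)/2$ pairs gives the APW estimator.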
It is well known that the composite likelihood estimation can lead to poor results for
this model. \citeasnoun{Cox&Reid04} carry out asymptotic variance
calculations, showing that efficiency losses compared to MLE occur whenever
$d>2$, with more pronounced efficiency losses for $\rho$ near $0.5$. Particularly, if $d=2$
(exactly one pair), $ARE=1$; if $d>2$, $ARE=1$ only as $\rho$ approaches $0$ or $1$. The next simulation
results show that in finite samples composite likelihood selection is advantageous even in such a
challenging situation.
Since closed-form pairwise score expressions are available for this model, we use the
objective function $\hat{g}(\boldsymbol{\omega})$ based on the one-step jackknife with pseudo-values
computed as in
Equation (\ref{pseudovalues}). In Table 3, we present results from $B=250$ Monte Carlo
runs of the MCMC-CLS algorithm applied to the model with $d = 5, 8, 10$ dimensions, which
correspond to $M={{d}\choose{2}} = 10, 28, 45$ sub-likelihoods, respectively. We
compute Monte Carlo estimates for the finite-sample relative efficiency $RE =
\widehat{Var}(\hat{\rho}_{apw})/\widehat{Var}(\hat{\rho}_{cl}(\hat{\boldsymbol{\omega}}^\ast))$, where $\widehat{Var}$ denotes the Monte Carlo estimate of the finite-sample variance,
$\hat{\rho}_{cl}(\hat{\boldsymbol{\omega}}^\ast)$ is the estimator selected by the
MCMC-CLS algorithm, and $\hat{\rho}_{apw}$ is the all pairwise (APW) estimator obtained by
including all available pairs (thus $RE>1$ indicates that our stochastic selection outperforms no
selection). We show values of
$\rho$ around $0.5$, since they correspond to the largest asymptotic efficiency losses of
pairwise likelihood compared to MLE (see \citeasnoun{Cox&Reid04}, Figure 1).
In all considered cases, our stochastic selection method improves the efficiency of the estimator
based on all pairwise components; at the same time, our composite likelihoods employ a much smaller
number of components. For example, when $\rho=0.6$ the efficiency improvements
range from 8\% to 39\%, using only about half of the available components.
\begin{table}[h] \label{table2}
\centering
\begin{footnotesize}
\begin{tabular}{llcccccccc}
&&& \multicolumn{3}{c}{$n=10$} && \multicolumn{3}{c}{$n=50$}\\
\hline
&$M={{d}\choose{2}}=$ && 10 & 28 & 45 && 10 & 28 & 45 \\
\hline
\\
$\rho=0.4$ & $RE$ && 1.27(0.03) & 1.11(0.01) & 1.07(0.01) & & 1.15(0.01) &
1.12(0.01) & 1.06(0.01)\\
&No. comp. && 5.83(0.09) & 13.70(0.16) & 21.12(0.22) & & 7.30(0.08)
&13.82(0.16) & 21.45(0.22)\\
\\
$\rho=0.5$ & $RE$ && 1.36(0.02) & 1.14(0.01) & 1.06(0.01) & & 1.19(0.01) &
1.11(0.01) & 1.06(0.01)\\
&No. comp. && 5.51(0.08) & 13.32(0.07) & 20.96(0.22) & & 7.01(0.07)&
13.64(0.18)&21.23(0.22)\\
\\
$\rho=0.6$ & $RE$ && 1.39(0.02) & 1.17(0.01) & 1.10(0.01) & & 1.38(0.02) &
1.14(0.01)& 1.08(0.01)\\
&No. comp. && 5.80(0.08) & 12.60(0.16) &20.99(0.20) & & 5.39(0.08)&
13.27(0.15)& 21.45(0.21)\\
\\
\end{tabular}
\end{footnotesize}
\caption{Pairwise likelihood selection for Model 2, $N_d(\mathbf{0} ,\mathbf{\Sigma}(\rho))$ based on
Algorithm 1: Monte Carlo estimates for: (i) the relative efficiency of the parameter estimate under
selection versus no selection ($RE = Var(\hat{\rho}_{apw})/Var(\hat{\rho}_{cl}(\hat{\boldsymbol{\omega}}^\ast))$,
so that $RE>1$ indicates that the selection outperforms no selection); and (ii) the number of
sub-likelihood components (No. comp.). Monte Carlo standard errors are in parenthesis.
Simulation settings: $\tau = d$, chain length $T=10d$, $\xi=0.7$.}
\end{table}
\subsection{Real Data Analysis: Australian Breast Cancer Family Study}
\label{example3}
In this section, we apply the MCMC-CLS algorithm to a real genetic dataset of women
with breast cancer obtained from the Australian Breast Cancer Family Study (ABCFS)
\cite{Dite&al03} and control subjects from the Australian Mammographic Density Twins and Sisters
Study \cite{Odefrey10}. All women were genotyped using a Human610-Quad beadchip array. The final
data set that we used consisted of a subset of 20 single nucleotide polymorphisms (SNPs) corresponding
to genes encoding a candidate susceptibility pathway, which is motivated by biological
considerations. After recommended data cleaning and
quality control procedures (e.g., checks for SNP missingness, duplicate relatedness, population
outliers \cite{Weale10}), the final data consisted of $n=333$ observations ($67$ cases
and $266$ controls).
To detect group effects due to cancer, we consider an extension of the latent multivariate Gaussian
model first introduced by \citeasnoun{Han&Pan12}. Let $\boldsymbol{Y}^{(i)} = (Y_{1}^{(i)}, \dots,
Y_{d}^{(i)})$, $i= 1,\dots, n$, be independent observations of a multivariate categorical variable
measured on $n$ subjects. Each variable $Y^{(i)}_{k}$ can take values $0,1$ or $2$, representing the
copy number of one of the
alleles of SNP $k$ of subject $i$. The binary variable $X^{(i)} = x^{(i)} = 0$ or $1$ represents
disease status of the $i$th subject (0 = control and 1 = disease). We assume a latent random
$d$-vector $\boldsymbol{Z}^{(i)} = (Z^{(i)}_{1}, \dots, Z^{(i)}_{d}) \sim N_d(\boldsymbol{\mu}^{(i)}(\theta) ,
\boldsymbol{R})$, where $\boldsymbol{\mu}^{(i)}(\theta)$ is a conditional mean vector with elements
$\mu_1^{(i)}(\theta)=\cdots= \mu_d^{(i)}(\theta) = \theta x^{(i)}$ and $\boldsymbol{R}$ is the correlation
matrix. Our main interest is on the unknown mean
parameter $\theta$, which is common to all the SNP variables and represents the main effect due
to disease. We assume $P(Y_{k}^{(i)}=0 | X^{(i)} = x^{(i)}) = P(Z^{(i)}_{k} \leq \gamma_{k1})
\label{latent_model1}$, $P(Y_{k}^{(i)}=1 | X^{(i)} = x^{(i)}) = P(\gamma_{k1} < Z^{(i)}_{k} \leq
\gamma_{k2})$, and $
P(Y_{k}^{(i)}=2 | X^{(i)} = x^{(i)}) = P(Z^{(i)}_{k} > \gamma_{k2})$,
where $\gamma_{k1}$ and $\gamma_{k2}$ are SNP-specific thresholds. The above model
reflects the ordinal nature of
genotypes and assumes absence of Hardy-Weinberg equilibrium (HWE), under which allele and
genotype frequencies in a population remain constant from generation to generation.
If the HWE holds, the parameters $\gamma_{k1}$ and $\gamma_{k2}$ are not needed, since we have the
additional constraint $P(Y_k^{(i)} = 2) = P(Y_k^{(i)} = 1)^2$.
Let $\boldsymbol{\gamma} = \{ (\gamma_{k1}, \gamma_{k2}): k=1,\dots,d \}$ and define the intervals
$\Gamma_k(Y^{(i)}_{k})$ to be $(-\infty,\gamma_{k1}]$, $(\gamma_{k1}, \gamma_{k2}]$ and
$(\gamma_{k2}, \infty)$, corresponding to $Y^{(i)}_{k} =0,1$ and $2$, respectively. The full
log-likelihood
function is
\begin{align*}
\ell(\theta, \boldsymbol{\gamma}, \boldsymbol{R}) &= \sum_{i=1}^n \log
P(\boldsymbol{Y}^{(i)}=\mathbf{y}^{(i)}| X^{(i)}= x^{(i)}) \\
&= \sum_{i=1}^n \log
\int_{\Gamma_1(y^{(i)}_{1})} \cdots \int_{\Gamma_d(y^{(i)}_{d})} f(z_1, \dots, z_d| \boldsymbol{\mu}^{(i)}(\theta),
\boldsymbol{R})
dz_1
\cdots dz_d,
\end{align*}
where $f(z_1, \dots, z_d| \boldsymbol{\mu}, \boldsymbol{R})$ is the pdf of the $d$-variate normal
density function with mean $\boldsymbol{\mu}$ and correlation matrix $\boldsymbol{R}$.
Clearly, the full log-likelihood
function is intractable when $d$ is moderate or large, due to the
multivariate integral in the likelihood expression. Note that for the marginal latent
components, we have $Z_k^{(i)} \sim N_1(\theta x^{(i)},1)$, so $\boldsymbol{\gamma}$ and $\theta$ can be
estimated by maximizing the one-wise composite log-likelihood
\begin{align} \label{likSNP2}
\ell_{cl}(\theta, \boldsymbol{\gamma}, \boldsymbol{\omega}) & = \sum_{k=1}^d \omega_{k} \sum_{i=1}^n \log
P(Y_{k}^{(i)}= y_{k}^{(i)}| X^{(i)}= x^{(i)}) \\
& = \sum_{k=1}^d \omega_{k} \sum_{i=1}^n
\log \int_{\Gamma_k(y^{(i)}_{k})} \phi(z_k|\theta x^{(i)},1) dz_k,
\end{align}
where $\phi(\cdot|\mu,1)$ denotes the normal pdf with mean $\mu$ and unit variance.
We focus on the one-wise composite log-likelihood in this section, except
when $\boldsymbol{R}$ is to be estimated, where we use the pairwise composite log-likelihood.
Unlike the expression in \citeasnoun{Han&Pan12}, here the
disease group effect $\theta$ is common to multiple sub-likelihood components; we also allow
for the inclusion/exclusion of particular sub-likelihood components (corresponding to SNPs) by
selecting $\boldsymbol{\omega}$.
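The one-wise composite log-likelihood (\ref{likSNP2}) reduces each integral to a difference of normal CDF values at the thresholds. A hypothetical Python sketch (names ours; we write the normal CDF via the error function):

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def onewise_loglik(theta, gamma, omega, Y, x):
    """One-wise composite log-likelihood for the latent probit-type SNP
    model: each genotype 0/1/2 maps to the interval cut by the thresholds
    (gamma_k1, gamma_k2) on Z_k ~ N(theta * x, 1). Y is n x d with entries
    0/1/2, x the n-vector of disease indicators, gamma a list of (g1, g2)."""
    n, d = Y.shape
    ll = 0.0
    for k in range(d):
        if not omega[k]:
            continue
        g1, g2 = gamma[k]
        for i in range(n):
            mu = theta * x[i]
            p = (Phi(g1 - mu),                 # P(Y = 0)
                 Phi(g2 - mu) - Phi(g1 - mu),  # P(Y = 1)
                 1.0 - Phi(g2 - mu))[Y[i, k]]  # P(Y = 2)
            ll += np.log(p)
    return ll
```

Maximizing this function over $\theta$ (and $\boldsymbol{\gamma}$) for a fixed composition rule $\boldsymbol{\omega}$ reproduces the estimation step of the data analysis.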
\begin{figure}[h] \label{figure3}
\centering
\begin{tabular}{cc}
(a) & (b) \\
\includegraphics[scale=0.4]{figure3bis.jpg}
&\includegraphics[scale=0.4]{figure1bis.jpg}\\
(c) & (d) \\
\includegraphics[scale=0.4]{figure2bis.jpg}
& \includegraphics[scale=0.4]{HMapCase.jpg}
\end{tabular}
\caption{Composite likelihood selection for the ABCFS data by Algorithm 1: (a) Frequency of
the sampled marginal likelihood components; (b) Objective function evaluated at the current
solution $\hat{\bfomega}^\ast$ (computed from past samples with $\xi=0.7$); (c) Parameter estimates,
$\hat{\theta}^{(t)} = \hat{\theta}(\boldsymbol{\omega}^{(t)})$, based on sampled composition rules,
$\boldsymbol{\omega}^{(t)}$ with optimal parameter estimate $\hat{\theta}(\boldsymbol{\omega}^\ast)$ corresponding to
vertical dashed line. (d) Pairwise likelihood estimates for the correlation matrix $\boldsymbol{R}$ for SNPs
in the susceptibility pathway.}
\label{fig2}
\end{figure}
We estimated the optimal composition rule $\hat{\boldsymbol{\omega}}^\ast$ based on Gibbs samples from
Algorithm 1, where the objective function $\hat{g}(\boldsymbol{\omega})= \log Var(\hat{\theta}(\boldsymbol{\omega}))$ was
estimated by delete-10 jackknife. We selected five marginal likelihoods (SNPs) occurring
with at least $\xi=0.7$ frequency in 250 runs of the Gibbs sampler (see Figure 2 (a)).
In Figure 2 (b), we show the trajectory of the objective function $\hat{g}(\boldsymbol{\omega})$ evaluated at the
current optimal solution $\hat{\bfomega}^\ast$ (optimal solutions are computed from past
samples using a $\xi=0.7$ threshold). The estimated variance tends to \replace{decrease}{sway toward
the minimum} as more composition
rules are drawn by our Gibbs sampler. This behavior is in agreement with preliminary simulation
results carried out on this model (not presented here) as well as the examples presented in
Sections \ref{sec5.1} and \ref{sec5.2}.
Figure 2 (c) shows the empirical distribution of parameter estimates,
$\hat{\theta}^{(t)} = \hat{\theta}(\boldsymbol{\omega}^{(t)})$, based on sampled
vectors $\boldsymbol{\omega}^{(t)}$, $t=1,\dots, 250$. The vertical dashed line corresponds to the
selected parameter estimate
$\hat{\theta}(\hat{\boldsymbol{\omega}}^\ast)$, which is located near the mode of the
empirical distribution. In particular, the selected McLE is
$\hat{\theta}(\hat{\boldsymbol{\omega}}^\ast) = -0.125$ and the corresponding delete-10 jackknife standard
error is $\hat{sd}(\hat{\theta}(\hat{\boldsymbol{\omega}}^\ast))=0.012$. The McLE based on all 20
target SNPs is $\hat{\theta}_{all} = -0.112$ with the corresponding delete-10 jackknife standard error
$\hat{sd}(\hat{\theta}_{all})= 0.042$. Our estimator gives a
substantial accuracy improvement, supporting the conclusion of a difference between case and control
groups (i.e., $\theta \neq 0$) with higher confidence. Finally, in Figure 2 (d), we show
estimates for the correlation matrix $\boldsymbol{R}$ for the target SNPs, based on
the pairwise composite likelihood described in \citeasnoun{Han&Pan12}.
\section{Final remarks} \label{sec:remarks}
Composite likelihood estimation meets a rapidly growing need in a number of fields, due
to the astonishing growth of data complexity and the limitations of traditional maximum
likelihood
estimation in this context. The Gibbs sampling protocol proposed in this paper
addresses an important unresolved issue by providing a tool to automatically select the
most useful
sub-likelihoods from a pool of feasible components. Our numerical results on
simulated and real data show that the composition rules generated by our MCMC approach
reduce the variance of traditional McLE estimators, which are typically obtained by
using all available sub-likelihood components.
Another advantage of our method is the possibility of generating
sparse composition rules, since our Gibbs sampler selects only a (relatively small)
subset of informative sub-likelihoods while discarding non-informative or redundant
components.
In the present paper, likelihood sparsity derives naturally from the discrete
nature of our MCMC approach, based on binary composition rules with
$\boldsymbol{\omega} \in \{0,1\}^M$. On the other hand, the development of sparsity-enforcing
likelihood selection methods suited to
real-valued weights would be valuable as well and could result in more efficient composition
rules. For example, \citeasnoun{Lindsay&al2011} discuss the optimality of composite likelihood
estimating functions involving both positive and negative real-valued weights.
Actually, the row averages $\overline{\omega}_1, \cdots, \overline{\omega}_M$ computed in
Step~2 of Algorithm~1 can also be used to replace $\boldsymbol{\omega}=(\omega_1,\cdots,\omega_M)^T$
in $\ell_{cl}(\boldsymbol{\theta}, \boldsymbol{\omega})$ defined by (\ref{comp_lik}), providing a composite
log-likelihood with weights between 0 and 1. It would be of interest to see how well this
form of composite log-likelihood performs in terms of efficiency. This, however, was
not pursued in this paper and is left for future exploration.
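As an illustration of this weighted variant, the sketch below builds a composite log-likelihood $\sum_m \overline{\omega}_m \ell_m(\theta)$ from a few toy Gaussian marginal log-likelihoods sharing a common mean; the data, the weight vector and the function names are hypothetical stand-ins for the row averages produced by Algorithm 1, not the paper's actual components.

```python
import numpy as np

def composite_loglik(theta, components, w):
    """Weighted composite log-likelihood: sum_m w_m * l_m(theta).

    `components` are per-component log-likelihood callables; `w` may hold
    binary draws or their row averages (values in [0, 1]).
    """
    return sum(w_m * l_m(theta) for w_m, l_m in zip(w, components))

# Hypothetical toy model: Gaussian marginals with common mean theta.
rng = np.random.default_rng(1)
data = [rng.normal(0.5, 1.0, size=50) for _ in range(4)]
components = [lambda th, x=x: -0.5 * np.sum((x - th) ** 2) for x in data]
w_bar = np.array([0.9, 0.8, 0.1, 0.0])  # row averages of binary Gibbs draws

# Maximize on a fine grid (each component is quadratic in theta).
grid = np.linspace(-1.0, 2.0, 3001)
vals = [composite_loglik(t, components, w_bar) for t in grid]
theta_hat = float(grid[int(np.argmax(vals))])
```

Because each component here is quadratic in $\theta$, the weighted McLE has a closed form (a weighted grand mean), which makes the grid maximizer easy to verify.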
Finally, the penalized version of the objective function described in Section
\ref{sec4} enforces sparse likelihood functions, which is necessary in situations where
the model
complexity is relatively large compared to the sample size. Thus, developing a thorough
theoretical understanding of the effect of the penalty on the selection as $d,n \rightarrow \infty$
would be very valuable for improved selection algorithms in high dimensions.
Introduction}
Structural phase transitions, such as the development
of charge density waves (CDWs),
have attracted continued interest from condensed-matter
physicists and chemists.
Originally, the manifestation of a CDW was proposed
to originate in Fermi surface nesting (FSN), as it is
present in quasi-one-dimensional (1D) metals \cite{gruener1994a}.
More recently, alternative mechanisms were proposed that
explain, for example, the formation of CDWs
by $q$-dependent electron-phonon coupling (EPC)
in three-dimensional (3D) CDW compounds.
Examples of 3D metals with CDWs include
$\alpha$-Uranium \cite{marmeggijc1982a},
CuV$_2$S$_4$ \cite{flemingrm1981a, kawaguchi2012a, okadah2004a, ramakrishnan2019a},
La$_3$Co$_4$Sn$_{13}$ \cite{slebarski2013a, otomo2016a, welsch2019a},
$R$Te$_3$ ($R$ = La, Sm, Gd, Tb, Dy, Ho, Er, Tm)
\cite{dimasie1995a, run2008a, banerjee2013a},
$R$Te$_2$ ($R$ = La, Ce) \cite{dimasie1996a, shim2004a},
$R$$_5$Ir$_4$Si$_{10}$ ($R$ = Dy, Ho, Er, Yb, Lu) \cite{ramakrishnan2017a},
Sm$_2$Ru$_3$Ge$_5$ \cite{kuo2020a},
EuAl$_4$ \cite{nakamura2015a, shimomura2019a, kaneko2021a, ramakrishnan2022a,meierwr2022a}
and CuIr$_{2-x}$Cr$_x$Te$_4$ \cite{zeng2022a}.
In several of these compounds a coexistence and
competition exists between the CDW and
antiferromagnetic (AFM) order or superconductivity (SC).
In the family of compounds $R$NiC$_2$
($R$ = Ce, Pr, Nd, Sm, Gd, Tb, Dy, Ho, Er, Tm)
\cite{romanm2018a,shimomura2009a,wolfel2010a,shimomuras2016a, kolinciokk2017a,maeda2019a,kolincio2020a},
the CDW competes with AFM.
In the case of SmNiC$_2$ ferromagnetic (FM) order
completely destroys the CDW \cite{shimomura2009a, wolfel2010a}.
On the other hand, Kolincio \textit{et al.} \cite{kolincio2020a}
have established the coexistence of a CDW
and field-induced FM order in TmNiC$_2$,
suggesting strong coupling of the rare-earth spins to the CDW
in these rare-earth materials.
However, the magnetic susceptibility of the paramagnetic
regime does not exhibit anomalies at the CDW transitions.
Er$_2$Ir$_3$Si$_5$ differs from most magnetic CDW compounds,
in that the magnetic susceptibility of the paramagnetic
state exhibits an anomaly at the CDW transition
\cite{ramakrishnan2020a}.
Presently, we report a similar effect for Ho$_2$Ir$_3$Si$_5$.
On the other hand, the CDW of non-magnetic Lu$_2$Ir$_3$Si$_5$
coexists with SC at low temperatures \cite{sangeetha2015a}.
The CDW in Lu$_2$Ir$_3$Si$_5$ has been
investigated extensively
\cite{singhy2004a, singhy2005a, kuoyk2006a, leemh2011a}.
Studies by transmission electron microscopy (TEM)
reported the modulation wave vector as
\textbf{q} = $\sigma(\bar{1}21)$ with $\sigma$ = 0.23--0.25
around 200 K \cite{leemh2011a}.
Recently, the modulated crystal structure of Lu$_2$Ir$_3$Si$_5$
was reported as similar to that of
Er$_2$Ir$_3$Si$_5$ \cite{ramakrishnan2021a}.
In both systems it was elucidated that the CDW resides
on the zigzag chains of Ir atoms along \textbf{c}.
Although the rare earth element
is not directly involved in the stabilization
of the CDW,
we have earlier proposed that it would indirectly
influence the CDW through its size (atomic radius).
Er has a larger atomic radius than Lu
(2.26 vs 2.17 \AA{}) \cite{clementi1967a},
while the transition occurs 50 K lower in
Er$_2$Ir$_3$Si$_5$ than in Lu$_2$Ir$_3$Si$_5$.
Here, we find that there is no such simple relationship
between atomic radius and $T_{CDW}$.
The atomic radii of Ho and Er are equal,
but $T_{CDW}$ = 90 K of Ho$_2$Ir$_3$Si$_5$
is much lower than $T_{CDW}$ = 150 K of
Er$_2$Ir$_3$Si$_5$.
From single-crystal X-ray diffraction (SXRD)
we found that Ho$_2$Ir$_3$Si$_5$
undergoes a large distortion of the lattice, where
the symmetry is lowered from orthorhombic ($Ibam$)
to triclinic ($I\bar{1}$) below the transition
temperature $T_{CDW}$.
Simultaneously, superlattice reflections
appear at incommensurate positions, with values of
$\mathbf{q}$ = $[0.2494(2),\: 0.4978(2),\: 0.2488(2)]$ at 70 K.
The CDW is supported by zigzag chains of Ir atoms
in all three compounds $R_2$Ir$_3$Si$_5$.
Ho$_2$Ir$_3$Si$_5$ differs from Lu$_2$Ir$_3$Si$_5$
and Er$_2$Ir$_3$Si$_5$ by the presence of
second-order superlattice reflections in its SXRD,
indicative of anharmonic contributions to
the displacive modulation wave.
We present for Ho$_2$Ir$_3$Si$_5$ the temperature
dependencies of the electrical resistivity $\rho(T)$,
the specific heat $C_p(T)$ and the magnetic
susceptibility $\chi(T)$.
They show anomalies in agreement with a first-order
CDW transition with a hysteresis of about 40 K.
Due to the strain imposed by the transition onto
the crystal, it cracks, which is clearly seen in the
resistivity measurements.
Such cracking of crystals has also been observed
in other CDW compounds, like BaFe$_2$Al$_9$ \cite{meierwr2021a}.
From the magnetic susceptibility we observed that
there is an influence of the CDW on the effective
magnetic moment of Ho$^{3+}$, which
is similar to what was reported in Er$_2$Ir$_3$Si$_5$ \cite{ramakrishnan2020a}.
Such an effect of the CDW on the rare earth spin
in the paramagnetic state
is not commonly seen in rare earth compounds.
Earlier studies have shown that the disorder
in the polycrystalline material
allows long-range AFM order to develop at
low temperatures \cite{singhy2004a, singhy2005a}.
However, in the present high-quality
single crystal, long-range AFM order is absent
and only the CDW is observed.
Here, we elucidate the differences and similarities
in the incommensurate
structure of Ho$_2$Ir$_3$Si$_5$ by comparing
with Lu$_2$Ir$_3$Si$_5$ and Er$_2$Ir$_3$Si$_5$.
Furthermore, we explore the coupling of the CDW
and magnetism of Ho$^{3+}$ spins and absence of
long-range AFM order.
\section{\label{ho235_s_experiment_computation}%
Experimental and computational details}
\subsection{\label{ho235_s_crystal_growth}%
Crystal growth and characterization}
\begin{figure}
\includegraphics[width=80mm]{crystalgrowth.png}
\caption{Crystal growth in a tetra arc furnace
by the Czochralski method.
A bar was obtained of about 50 mm length
and 15 mm in diameter.}
\label{fig:ho2ir3si5_crystalgrowth}
\end{figure}
A single crystal of Ho$_2$Ir$_3$Si$_5$ has been grown by the Czochralski method in a tetra-arc furnace
as shown in Fig. \ref{fig:ho2ir3si5_crystalgrowth}.
We have chosen the Czochralski over flux growth or any other technique because
the individual elements have high melting points. Furthermore, iridium metal does not dissolve in most
of the typical fluxes.
To start with, high purity
individual elements of Ho : Ir : Si (99.99\% for Ho and Ir, and 99.999\% for Si) were taken in the stoichiometric ratio 2 : 3 : 5, amounting to about 8 to 9 g,
and melted repeatedly to ensure its homogeneity. Next, a seed crystal was cut from this polycrystalline
ingot for the purpose of crystal growth. The polycrystalline seed was gently inserted into the molten solution
and initially pulled at a rapid speed of about 80 mm/h. The temperature of the melt was adjusted such
that a necking is formed and then we employed a pulling speed of about 10 mm/h throughout the growth process.
An ingot of about 50 mm length was pulled.
Energy-dispersive X-ray spectroscopy (EDX) was
used to verify the chemical composition.
\subsection{\label{ho235_s_sxrd}%
Single-crystal X-ray diffraction (SXRD):
data collection and data processing.}
\begin{table}
\caption{\label{tab:ho2ir3si5_cdw_crystalinfo}%
Crystallographic data of crystal A of
Ho$_2$Ir$_3$Si$_5$ at 200 K (periodic phase) and 70 K (CDW phase).}
\small
\centering
\begin{tabular}{ccc}
\hline
Temperature (K) & 200 & 70 \\
Crystal system & Orthorhombic& Triclinic \\
Space/Superspace group & $Ibam$ &
$I\bar{1}(\sigma_1\:\sigma_2\:\sigma_3)0$ \\
Space/Superspace group No. \cite{stokesht2011a} & 72 & {2.1.1.1} \\
$a$ (\AA{}) &9.9023(4) &9.8356(5) \\
$b$ (\AA{}) &11.3747(3) &11.4902(4) \\
$c$ (\AA{}) &5.7745(3) &5.7304(3) \\
$\alpha$ (deg) & 90 & 89.983(3) \\
$\beta$ (deg) & 90 & 91.772(2) \\
$\gamma$ (deg) & 90 & 89.975(1) \\
Volume (\AA{}$^3$) & 650.42(5) & 647.32(5) \\
Wave vector \textbf{q} & -
& (0.2494(2), 0.4978(2), 0.2488(2)) \\
$Z$ & 4 & 4 \\
Wavelength (\AA{}) & 0.50000 &0.50000 \\
Detector distance (mm) &110 &110 \\
$2\theta$-offset (deg) &0 &0 \\
$\chi$-offset (deg) &-60 & -60 \\
Rotation per image (deg) & 1 & 1 \\
$(\sin(\theta)/\lambda)_{max}$ (\AA{}$^{-1}$) &0.683589& 0.684039 \\
Absorption, $\mu$ (mm$^{-1}$) & 34.823 & 34.990 \\
T$_{min}$, T$_{max}$ & 0.0220, 0.0501 & 0.0230, 0.0501 \\
Criterion of observability & $I>1.5\sigma(I)$ & $I>1.5\sigma(I)$ \\
Number of $(m = 0)$ reflections \\
measured & 3765 & 2706 \\
unique (obs/all) & 451/470 &1298/1474 \\
Number of $(m = 1)$ reflections \\
measured & - & 12770 \\
unique (obs/all) & - & 2779/6454 \\
Number of $(m = 2)$ reflections \\
measured & -& 12725 \\
unique (obs/all) & -& 714/3263 \\
$R_{int}$ $(m = 0)$ (obs/all) &0.0487/0.0487 &0.0287/0.0288 \\
$R_{int}$ $(m = 1)$ (obs/all) &- &0.0828/0.1051 \\
$R_{int}$ $(m = 2)$ (obs/all) &- &0.0967/0.1833 \\
No. of parameters &31 &147 \\
$R_{F }$ $(m = 0)$ (obs) &0.0296 &0.0578 \\
$R_{F }$ $(m = 1)$ (obs) &- &0.0798 \\
$R_{F }$ $(m = 2)$ (obs) &- &0.2148 \\
$wR_{F }$ $(m = 0)$ (all) &0.0380 &0.0717 \\
$wR_{F }$ $(m = 1)$ (all) &- &0.0973 \\
$wR_{F }$ $(m = 2)$ (all) &- &0.3748 \\
$wR_{F }$ all (all) &0.0380 &0.0879 \\
GoF (obs/all) &1.83/1.33 &1.69/1.23 \\
$\Delta\rho_{min}$, $\Delta\rho_{max}$ (e \AA{}$^{-3}$) &
-3.00, 2.94 &-14.56, 14.53 \\
\hline
\end{tabular}
\end{table}
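As a consistency check on Table \ref{tab:ho2ir3si5_cdw_crystalinfo}, the unit-cell volume follows from the lattice parameters via the standard triclinic formula $V = abc\sqrt{1-\cos^2\alpha-\cos^2\beta-\cos^2\gamma+2\cos\alpha\cos\beta\cos\gamma}$. A minimal Python sketch (the function name is ours):

```python
import math

def triclinic_volume(a, b, c, alpha, beta, gamma):
    """Unit-cell volume from lattice parameters (angles in degrees)."""
    ca, cb, cg = (math.cos(math.radians(x)) for x in (alpha, beta, gamma))
    return a * b * c * math.sqrt(1 - ca**2 - cb**2 - cg**2 + 2 * ca * cb * cg)

# Lattice parameters of crystal A from the table above.
V_70K = triclinic_volume(9.8356, 11.4902, 5.7304, 89.983, 91.772, 89.975)
V_200K = triclinic_volume(9.9023, 11.3747, 5.7745, 90.0, 90.0, 90.0)
```

Both values reproduce the tabulated volumes of 647.32 and 650.42 \AA{}$^3$ to within the quoted uncertainties.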
Small pieces of single-crystalline Ho$_2$Ir$_3$Si$_5$ were obtained
by crushing a large single crystal, from which crystal A of dimensions
0.15$\times$0.07$\times$0.1 mm${^3}$ was selected for a single-crystal X-ray
diffraction (SXRD) experiment at beamline P24 of PETRA-III Extension at DESY in Hamburg,
Germany.
SXRD was measured at station EH2 of beamline P24,
employing radiation of a wavelength
of $\lambda_{P24}$ = 0.50000 \AA{}.
For further details regarding data
collection refer to the supporting
information \cite{ho2ir3si5suppmat2022a}.
The EVAL15 software suite \cite{schreursamm2010a}
was used for processing the SXRD data.
SADABS \cite{sheldrick2008} was used for scaling
and absorption correction, with Laue symmetry $mmm$
for the data in the periodic phase
and $\bar{1}$ for the CDW phase.
As the crystal structure in the CDW phase is
incommensurately modulated, we used
the superspace approach \cite{van2007incommensurate, wagner2009a, stokesht2011a} to index and integrate the data.
Section S2 in the supporting
information \cite{ho2ir3si5suppmat2022a}
provides further details.
The resulting reflection file was imported
into JANA2006 \cite{petricekv2016a, petricekv2014a}.
Table \ref{tab:ho2ir3si5_cdw_crystalinfo} gives
the crystallographic information at 200 K (periodic phase)
and at 70 K (incommensurate phase).
The crystallographic data at other temperatures
and details regarding SXRD data processing
are given in the Supporting Information.\cite{ho2ir3si5suppmat2022a}
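In the superspace approach each reflection carries four integers $(h, k, l, m)$, with reciprocal-space position $\mathbf{H} = h\mathbf{a}^* + k\mathbf{b}^* + l\mathbf{c}^* + m\mathbf{q}$; $m = 0$ labels main reflections and $|m| = 1, 2$ the first- and second-order satellites observed here. A minimal sketch of this indexing in relative lattice units, with $\mathbf{q}$ taken from Table \ref{tab:ho2ir3si5_cdw_crystalinfo} (the function name is ours):

```python
import numpy as np

def satellite_positions(main_hkl, q, m_max=2):
    """Positions (in r.l.u.) of the main reflection hkl and its satellites.

    Returns a dict mapping the satellite order m to hkl + m*q; m = 0 is the
    main reflection itself.
    """
    q = np.asarray(q, dtype=float)
    return {m: np.asarray(main_hkl, dtype=float) + m * q
            for m in range(-m_max, m_max + 1)}

q70 = (0.2494, 0.4978, 0.2488)        # modulation wave vector at 70 K
pos = satellite_positions((2, 0, 0), q70, m_max=2)
```

This is only the indexing arithmetic; the actual integration in EVAL15 additionally handles the four twin orientations and the triclinic lattice distortion.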
\subsection{\label{sec:ho2ir3si5_physical_properties}%
Physical properties}
A commercial superconducting quantum interference device (SQUID) magnetometer (MPMS5,
Quantum Design, USA) was used to measure dc magnetic susceptibility in a field
of 100 mT as a function of temperature from 2 to
300 K. The electrical resistivity between 1.8 and 300 K was
measured by the standard dc four-probe technique on
a commercial physical property measurement system (PPMS, Quantum
Design, USA). The specific heat data were measured
both on the PPMS and with a commercial differential
scanning calorimeter (DSC) setup.
\subsection{\label{sec:ho2ir3si5_dft}%
Density functional theory calculations}
Density functional theory (DFT) based calculations
were performed using the projector augmented wave
(PAW)~\cite{blochl1994projector} method
as implemented in the Vienna \textit{ab initio}
simulation package (VASP).\cite{kresse1996efficient}
Exchange-correlation effects were included
using the Perdew-Burke-Ernzerhof
(PBE)~\cite{perdew1996generalized} version
of the generalized gradient approximation.
An $8\times 10 \times 12$ $\Gamma$-centered $k$-mesh
was used for the Brillouin zone (BZ) sampling with
an energy cut-off of $380$ eV for
the plane-wave basis set.
Spin-orbit coupling (SOC) effects were taken into
account self-consistently to consider the
relativistic effects.
We employed a Ho$^{3+}$ potential, treating
the remaining 4\textit{f} electrons as core electrons.
Experimental lattice parameters were used, while
the internal atomic positions were relaxed
until the residual forces on each atom were
less than 0.0001 eV/\AA{}.
The VESTA\cite{momma2008vesta} program was used
to visualize the charge density distributions.
The phonon spectrum was obtained using a
$2\times 2\times 2$ supercell following the
frozen phonon method as implemented in
the phonopy~\cite{togo2015first} package.
\section{\label{sec:ho2ir3si5_results_discussion}%
Results and discussion}
\subsection{\label{sec:ho2ir3si5_cdw_structure}%
Analysis of the CDW structure}
Unlike Er$_2$Ir$_3$Si$_5$,\cite{ramakrishnan2020a}
the manifestation of the CDW appears to be
sluggish in Ho$_2$Ir$_3$Si$_5$.
The crystal was initially cooled from room
temperature down to 70 K, where
it remained in the orthorhombic phase,
despite $T_{CDW}$ being about 90 K
according to the physical property measurements.
It is possible that the crystal was cooled too
rapidly, thereby not allowing it to settle
in the CDW phase, and resulting in an undercooled state.
Upon further lowering of the temperature to 20 K,
we observed the coexistence of orthorhombic and
CDW phases, indicative
of a first-order transition
(Figs. \ref{fig:ho2ir3si5_unwarp}
and \ref{fig:ho2ir3si5_lattice}).
The CDW phase is characterized by superlattice
reflections at positions
$\mathbf{q}$ = $[0.2496(3),\: 0.4987(3),\: 0.2493(3)]$,
accompanied by a large monoclinic distortion
of the lattice ($\beta = 91.784(3)$ deg),
similar to $R_2$Ir$_3$Si$_5$ ($R$ = Lu, Er)
(details are given in the Supporting
Information\cite{ho2ir3si5suppmat2022a}).
The severity of the distortions induces a colossal
strain in the crystal, causing the diffraction spots
in SXRD to become
broad and elongated [Figs. \ref{fig:ho2ir3si5_unwarp}(b)
and \ref{fig:ho2ir3si5_unwarp}(c)],
resulting in many partially overlapped
reflections which cannot be used for structural analysis.
Sometimes the strain is too large for the
crystal to withstand,
such that it physically shatters, rendering
it unusable for further investigations.
Such destructive behaviour of the CDW has also
been reported for BaFe$_2$Al$_9$ \cite{meierwr2021a}.
Another important difference
to Lu$_2$Ir$_3$Si$_5$ and Er$_2$Ir$_3$Si$_5$
is that we observe in SXRD on Ho$_2$Ir$_3$Si$_5$
second-order superlattice reflections $(m = 2)$
in addition to first-order superlattice
reflections $(m = 1)$
[Figs. \ref{fig:ho2ir3si5_unwarp}(e) and
\ref{fig:ho2ir3si5_unwarp}(f)].
The orthorhombic phase disappears after the
crystal is heated to 50 K, and only the CDW
phase remains at temperatures 50--130 K
(Figs. \ref{fig:ho2ir3si5_unwarp}(c) and
\ref{fig:ho2ir3si5_lattice}).
Further heating to 150 and 200 K causes the crystal to
enter the orthorhombic phase and the CDW phase disappears
[Fig. \ref{fig:ho2ir3si5_unwarp}(a)].
Similar to Lu$_2$Ir$_3$Si$_5$ and Er$_2$Ir$_3$Si$_5$,
there is a lowering
of the point symmetry from orthorhombic
to triclinic which causes the crystal to be
twinned with four orientations \cite{parson2003a}.
\begin{figure}
\includegraphics[width=80mm]{h0lsection.png}
\vspace{2mm}\\
\includegraphics[width=100mm]{hkhsection.png}
\caption{\label{fig:ho2ir3si5_unwarp}%
(a), (b), (c) The $h\,0\,l$ plane, and
(d), (e), (f) the $h\,k\,h$ plane reconstructed
from measured SXRD data.
(a), (d) are for SXRD on the periodic phase at 200 K.
(b), (e) show the intermediate phase (undercooled state),
where threefold splitting of the reflections can be observed at 20 K,
indicating coexistence of CDW and periodic phases.
(c), (f) are for SXRD on the CDW phase at 50 K, showing
groups of two instead of groups of three split reflections.
The degree of splitting increases with the value of $h$.
Panels (e) and (f) show satellite reflections of order
$m = 2$.}
\end{figure}
Figure \ref{fig:ho2ir3si5_lattice} shows the
temperature dependence of the lattice parameters.
One can observe the clear distortion of the lattice
upon entering the CDW state,
including an expansion of $b$ and contractions
of $a$ and $c$.
Furthermore, the change in lattice type at $T_{CDW}$
appears discontinuous, in agreement with the
first-order character of the phase transition.
Excluding the data points at 20 K (an undercooled,
mixed state) and at 130 K (at the onset of $T_{CDW}$
upon warming), which are therefore unreliable,
it can be inferred from Fig. \ref{fig:ho2ir3si5_lattice}(d)
that the modulation wave vector \textbf{q} decreases
with temperature.
\begin{figure}
\includegraphics[width=80mm]{axialle.png}
\includegraphics[width=80mm]{axialag.png}
\\
\includegraphics[width=80mm]{volume.png}
\includegraphics[width=80mm]{sigma.png}
\caption{\label{fig:ho2ir3si5_lattice}%
Temperature dependence of
(a) the lattice parameters $a$, $b$ and $c$
relative to their values at $T = 200$ K:
$a(200) = 9.9023(4)$, $b(200) = 11.3747(3)$
and $c(200) = 5.7745(3)$ \AA{};
(b) the lattice parameters $\alpha$, $\beta$
and $\gamma$;
(c) the volume of the unit cell;
and
(d) components of the modulation wave vector \textbf{q}.
Orange and blue symbols refer to data obtained
during heating and cooling of the crystal, respectively.
The undercooled state at 20 K has coexisting
CDW and periodic phases.}
\end{figure}
The presence of second-order satellite reflections
required refinement of the crystal structure with
higher order harmonics for the modulation functions.
Table \ref{tab:ho2ir3si5_refmod_compare}
compares three structure models, in order to
determine which provides the best
fit to the SXRD data in the CDW phase.
\begin{table}
\caption{\label{tab:ho2ir3si5_refmod_compare}%
Quality of the fit to the SXRD data at 70 K
for three structure models for the CDW phase.
See text for models A, B and C.
The number of unique reflections is
1298/1474 for obs/all main reflections $(m = 0)$,
2779/6454 for obs/all $m = 1$ satellites,
and
714/3263 for obs/all $m = 2$ satellites.
Criterion of observability is $I>1.5\sigma(I)$.}
\centering
\begin{tabular}{cccc}
\hline
Models & A & B & C \\
No. of parameters &147 &177 &177 \\
$R_{F }$ $(m = 0)$ (obs) &0.0578 &0.0577 &0.0535 \\
$R_{F }$ $(m = 1)$ (obs) &0.0798 &0.0796 &0.0672 \\
$R_{F }$ $(m = 2)$ (obs) &0.2148 &0.2119 &0.0894 \\
$wR_{F }$ $(m = 0)$ (all) &0.0717 &0.0717 &0.0695 \\
$wR_{F }$ $(m = 1)$ (all) &0.0973 &0.0972 &0.0869 \\
$wR_{F }$ $(m = 2)$ (all) &0.3748 &0.3721 &0.1319 \\
$wR_{F }$ all (all) &0.0879 &0.0877 &0.0738\\
GoF (obs/all) &1.69/1.23 &1.69/1.23 &1.52/1.03 \\
$\Delta\rho_{min}$, $\Delta\rho_{max}$ (e \AA{}$^{-3}$) &
-14.56, 14.53 & -14.52, 14.69 &-8.73, 6.14 \\
\hline
\end{tabular}
\end{table}
\subsubsection{\label{sec:ho2ir3si5_model_a}%
Model A}
Here, up to second-order harmonics have been
applied to holmium and iridium atoms, whereas
silicon atoms have only first-order harmonics
(Eq. S6 in \cite{ho2ir3si5suppmat2022a}).
This resulted in fewer parameters
and minimal changes to the results
as compared to models B and C
(Table \ref{tab:ho2ir3si5_refmod_compare}).
This model has been selected to describe the crystal
structure in the CDW phase.
\subsubsection{\label{sec:ho2ir3si5_model_b}%
Model B}
In this model up to second-order displacement
modulation parameters have been applied to all
atoms and then refined.
The refinement resulted in a fit to the SXRD data
that is almost identical to that of Model A.
However, the second-order modulation amplitudes
of the silicon atoms appear to have large
standard uncertainties (s.u.'s).
This is probably due to the poor scattering power
of silicon as compared to holmium or iridium.
Together, these features imply that the
additional 30 parameters in model B do not
lead to a significant improvement and model
B is not selected.
\subsubsection{\label{sec:ho2ir3si5_model_c}%
Model C}
Model C builds upon models A and B
in an attempt to resolve the issue of the
high value of $R_{F}$ for the second-order
satellites.
Up to third-order harmonics are used for the
modulation functions of the holmium and iridium
atoms, while employing only first-order
harmonics for silicon atoms, as higher-order
harmonics have been discarded for Si on the
basis of models A and B.
As a rule of thumb, the highest harmonic in the
modulation wave should not exceed the highest
order of observed satellite reflections,
which is two in the present SXRD experiment.
At first glance from Table \ref{tab:ho2ir3si5_refmod_compare}
we see a huge improvement of the fit to the
second-order satellites upon introduction of
third-order harmonics.
However, all these parameters have an s.u.
that is larger than the refined value itself.
Moreover, the refined values would imply that the
system has third-order satellite reflections with
intensities larger than those of its first- and
second-order satellites.
As we did not observe third-order satellites,
the refined values of the third-order harmonics
appear to be unreliable, and model A is chosen
in favor of model C.
\subsection{\label{sec:ho2ir3si5_location_cdw}%
Location of the CDW}
Table S8 in the Supporting Information \cite{ho2ir3si5suppmat2022a}
shows the atomic coordinates at 200 K and 70 K (warming).
The six crystallographically independent atoms,
Ho1, Ir1, Ir2, Si1, Si2, and Si3,
of the $Ibam$ structure at 200 K
split into the
Ho1a, Ho1b, Ir1a, Ir1b, Ir2, Si1, Si2a, Si2b, Si3a, Si3b
atoms of the triclinic structure at 70 K.
Figure \ref{fig:ho2ir3si5_cell} shows the crystal
structure projected onto the \textbf{a-c} plane
at 200 K and 70 K (average structure at 70 K).
Figure \ref{fig:ho2ir3si5_cell}(b) appears skewed
compared to Fig. \ref{fig:ho2ir3si5_cell}(a), due
to $\beta > 90$ deg in the CDW phase.
As the modulation wave vector
is close to (1/4, 1/2, 1/4),
Fig. \ref{fig:ho2ir3si5_cell}(c) shows a
$4\times 2 \times 4$ superstructure approximation,
where one can see zigzag chains of Ir1a-Ir1b
along \textbf{c}.
Tables S9 and S10 in the
Supporting Information \cite{ho2ir3si5suppmat2022a}
show the refined modulation amplitudes.
\begin{figure}
\includegraphics[width=160mm]{cell.png}
\caption{\label{fig:ho2ir3si5_cell}%
Projection onto the $(\mathbf{a},\mathbf{c})$-plane
of the crystal structures of Ho$_{2}$Ir$_{3}$Si$_{5}$ for
(a) 200 K and (b,c) 70 K.
Panel (c) shows the $4\times 2 \times 4$
superstructure approximation.
Large purple spheres correspond to Ho atoms;
green spheres of intermediate size correspond to Ir atoms;
small yellow spheres are Si atoms.
Dashed lines give the distances in the basic structure,
with values of $3.390\,(3)$ and $3.728\,(3)$ \AA{}.}
\end{figure}
Analysis of the modulation
amplitudes and distances revealed that Ho$_2$Ir$_3$Si$_5$ follows
a similar pattern as $R_2$Ir$_3$Si$_5$ ($R$ = Lu, Er),
\cite{ramakrishnan2021a,ramakrishnan2020a}
such that the CDW resides on zigzag chains
of Ir1a-Ir1b atoms,
as they have the shortest metal-metal
distances and exhibit the largest modulation of these
distances (Fig. \ref{fig:ho2ir3si5_tplot}).
The lattice distortion results in alternating
short and long Ir1a--Ir1b distances, where the
shorter distance is the most affected by the
modulation.
The formation of dimers on the Ir1a-Ir1b
zigzag chains along \textbf{c}
is responsible for the formation of the CDW.
\begin{figure}
\includegraphics[width=80mm]{Ir1a-Ir1b.png}
\caption{\label{fig:ho2ir3si5_tplot}%
$t$-Plot of the interatomic distances between atoms
Ir1a and Ir1b $(x,y,z)$ and between Ir1a and
Ir1b at $(x, y, z-1)$ for the crystal in the CDW
phase at $T=70$ K.}
\end{figure}
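The shape of such a $t$-plot can be illustrated with a first-harmonic toy model. The amplitudes and phases below are hypothetical numbers, chosen only to mimic the ranges of the short and long Ir1a--Ir1b contacts in Table \ref{tab:lu2ir3si5_compare}; they are not the refined modulation functions, which involve the displacements of both atoms.

```python
import numpy as np

def modulated_distance(d0, amp, phase, t):
    """First-harmonic sketch of a t-plot: d(t) = d0 + amp*cos(2*pi*t + phase).

    A real t-plot projects the refined displacement modulation functions of
    both atoms onto the bond; amp and phase are hypothetical scalars here.
    """
    t = np.asarray(t, dtype=float)
    return d0 + amp * np.cos(2.0 * np.pi * t + phase)

t = np.linspace(0.0, 1.0, 201)
# Hypothetical numbers mimicking the tabulated ranges: the short contact
# varies strongly (about 3.05--3.73 A), the long one only weakly.
d_short = modulated_distance(3.390, 0.34, 0.0, t)
d_long = modulated_distance(3.763, 0.04, 0.0, t)
```

The strongly modulated short contact alternating with a nearly constant long contact is the signature of the Ir1a--Ir1b dimerization described above.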
Table \ref{tab:lu2ir3si5_compare} provides a
comparison of essential structural parameters
between the three compounds $R_2$Ir$_3$Si$_5$.
Our earlier proposal, that $T_{CDW}$ would be
inversely proportional to the atomic radius of
the rare earth element, is not confirmed by
Ho$_2$Ir$_3$Si$_5$.
The atomic radius of Ho is equal to that
of Er; however, $T_{CDW}$ of Ho$_2$Ir$_3$Si$_5$
is much lower than that of Er$_2$Ir$_3$Si$_5$.
The variation of distances Ir1a--Ir1b is more
or less similar in all three compounds.
\begin{table}
\caption{\label{tab:lu2ir3si5_compare}%
Crystal data of the CDW phases of
Lu$_{2}$Ir$_{3}$Si$_{5}$ \protect\cite{sangeetha2015a, ramakrishnan2021a},
Er$_{2}$Ir$_{3}$Si$_{5}$ \protect\cite{ramakrishnan2020a}
and Ho$_{2}$Ir$_{3}$Si$_{5}$ (present results).
}
\begin{tabular}{c c c c}
\hline
Compound & Lu$_{2}$Ir$_{3}$Si$_{5}$ & Er$_{2}$Ir$_{3}$Si$_{5}$ & Ho$_{2}$Ir$_{3}$Si$_{5}$ \\
\parbox[c]{24mm}{Atomic radius of $R$ (\AA{}) \cite{clementi1967a}}
& 2.17 & 2.26 & 2.26 \\
$T_{\mathrm{CDW}}$ (K) & 202--231 & 150--166 & 90--130 \\
$T$ (K) & 60 & 75 & 70 \\
$a$ (\AA{}) & 9.8182(3) & 9.8494(3) & 9.8356(5) \\
$b$ (\AA{}) & 11.4093(3) & 11.4863(3) & 11.4902(4) \\
$c$ (\AA{}) & 5.6835(2) & 5.7268(2) & 5.7304(3) \\
$\alpha$ (deg) & 90.001(2) & 90.079(1) & 89.983(3) \\
$\beta$ (deg) & 91.945(2) & 91.695(2)& 91.772(2) \\
$\gamma$ (deg) & 90.018(2) & 90.051(1)& 89.975(1) \\
$V$ (\AA{}$^3$)& 636.34(3) & 647.60(5) & 647.32(5) \\
$q_x$ & 0.2499(3) & 0.2495(2)& 0.2494(2) \\
$q_y$ & 0.4843(4) & 0.4973(1)& 0.4978(2) \\
$q_z$ & 0.2386(2) & 0.2483(1)& 0.2488(2) \\
\multicolumn{2}{l}{Distance Ir1a--Ir1b\textsuperscript{\emph{a}}} \\
max (\AA{}) & 3.801(1) & 3.818(2) & 3.801(3) \\
min (\AA{}) & 3.711(1) & 3.714(2) & 3.719(3) \\
avg (\AA{}) & 3.755(1) & 3.764(2) & 3.763(3) \\
\multicolumn{2}{l}{Distance Ir1a--Ir1b\textsuperscript{\emph{b}}} \\
max (\AA{}) & 3.761(1) & 3.782(2) & 3.728(3) \\
min (\AA{}) & 3.002(1) & 3.008(2) & 3.053(3) \\
avg (\AA{}) & 3.385(1) & 3.398(2) & 3.390(3) \\
\hline
\end{tabular}\\
\textsuperscript{\emph{a}}Symmetry code for Ir1b $(x,y,z)$; given are the
maximum (max), minimum (min) and average (avg) distances.
\textsuperscript{\emph{b}}Symmetry code for Ir1b $(x,y,z-1)$.
\end{table}
\subsection{\label{sec:ho2ir3si5_electronic_structure}%
Electronic structure and phonons}
The band structure of Ho$_2$Ir$_3$Si$_5$
is shown in Fig.~\ref{fig:ho2ir3si5_band}(a)
for several high-symmetry directions in the
primitive BZ given in Fig.~\ref{fig:ho2ir3si5_band}(b).
\begin{figure}
\includegraphics[width=0.95\textwidth]{Theory_Fig_bahadur_221013.pdf}
\caption{\label{fig:ho2ir3si5_band}%
(a) Bulk electronic band structure of the orthorhombic
phase of Ho$_2$Ir$_3$Si$_5$ along various high-symmetry
directions in the primitive Brillouin zone.
(b) Brillouin zone with high-symmetry points.
(c) Total (highlighted grey color) and orbital
projected density of states (green, red, and
blue lines).
(d) and (e) Electronic charge density distributions
on planes containing Ho and Ir atoms, respectively.
(f) Phonon band structure of the orthorhombic phase,
as calculated with smearing parameter $\sigma = 0.05$ eV.
(g) Phonon density of states.
The imaginary frequencies are represented by
negative values and are mainly associated with Ir atoms.
(h) Expanded view of the phonon band structure along
the $V$--$R$--$Y$ directions.
The absence of imaginary frequencies for smearing
parameter $\sigma = 0.5$ eV indicates the stability
of the orthorhombic structure towards higher temperatures.
}
\end{figure}
The compound is metallic, with both
electron and hole pockets at the Fermi level.
To resolve the contributions of electronic
states near the Fermi energy, we show the
atomic orbital projected density of states (PDOS)
in Fig.~\ref{fig:ho2ir3si5_band}(c).
The Ir states are dominant near $E_F$, whereas
the Ho and Si states have lower weight than those
of Ir atoms.
To understand the charge density distributions
of these atoms, we have shown the charge density
on two different planes in the
primitive unit cell, which contain Ho and Ir atoms,
respectively [Figs.~\ref{fig:ho2ir3si5_band}(d)
and~\ref{fig:ho2ir3si5_band}(e)].
The spherical charge distribution of these atoms
is reminiscent of a metallic-type ionic environment
and suggests that these atoms are likely to
undergo CDW modulations to find a low-energy state.\cite{Bartl1979a}
Figure~\ref{fig:ho2ir3si5_band}(f) shows the phonon
spectrum along various high-symmetry
directions in the BZ.
The system is dynamically
unstable, with Kohn-type~\cite{kohn1959image}
soft modes at the $Z$ and $R$ points and
between the $Y$ and $U$ points.
The phonon PDOS [Fig.~\ref{fig:ho2ir3si5_band}(g)]
shows that Ho and Ir atoms contribute to the
low-frequency phonon modes, whereas high-frequency
phonon modes are dominated by Si atoms.
The imaginary frequencies are inherent to the Ir atoms,
confirming that the CDW is associated mainly
with the Ir atoms.
The most negative frequency is found
near the reciprocal-space point (0.25, 0.50, 0.25),
which is consistent with the incommensurate
wave vector $\mathbf{q}$ found in the SXRD experiment.
Figure~\ref{fig:ho2ir3si5_band}(h) demonstrates
the evolution of the soft phonon mode as a function
of the smearing parameter $\sigma$,
which represents the electronic temperature in our
calculations.
The soft phonon mode disappears upon increasing
the smearing parameter $\sigma$ from 0.05 eV to 0.5 eV.
This particular dependence of the phonon frequencies
on the smearing parameter (electronic temperature)
indicates that the orthorhombic structure is stable
only at higher temperatures, as seen in our
experiments.
\subsection{\label{sec:ho2ir3si5_electrical_resistivity}%
Electrical resistivity}
\begin{figure}
\includegraphics[width=80mm]{resistivity.png}
\caption{\label{fig:ho2ir3si5_resi}%
Temperature dependence of the electrical
resistivity $\rho(T)$ of Ho$_2$Ir$_3$Si$_5$.
The pink and green regions indicate the periodic
and CDW phases, respectively.
A large hysteresis of 40 K is clearly seen.
The crystal shattered on heating to above 130 K.}
\end{figure}
From Fig. \ref{fig:ho2ir3si5_resi} we infer that,
upon cooling the crystal, there is a sharp upward
turn of the electrical resistivity at 90 K, which
signifies the opening of a gap in the electronic
density of states over a major fraction of the
Fermi surface, in agreement with the formation of
a charge density wave (CDW) at this temperature.
The metallic nature of the crystal after the
CDW transition indicates a partial opening of
the gap at the Fermi surface.
Upon heating, $\rho(T)$ shows a sharp anomaly
at 130 K, thereby establishing a huge hysteresis
of 40 K between 90 and 130 K.
This large hysteresis and the sharpness of the
transition signify that it is a first-order
transition much like the one seen
in $R_2$Ir$_3$Si$_5$ ($R$ = Lu, Er)
\cite{sangeetha2015a, ramakrishnan2020a}.
Such a large hysteresis is uncommon
for CDW transitions.
Especially for incommensurate CDWs, the transition
is usually of second order, as found, \textit{e.g.},
in the canonical CDW system
NbSe$_3$ \cite{tomi1981a}.
All measurements were made using virgin samples.
After cooling and then heating through the phase
transition,
micro-cracks develop due to the strain induced
by the transition.
This feature is visible in the electrical
resistivity through a lower resistivity of
virgin samples than of samples that have gone
through a cooling/heating cycle
(Fig. \ref{fig:ho2ir3si5_resi}).
Similar behavior has been observed for
ceramic material of $R_2$Ir$_3$Si$_5$
($R$ = Lu, Er) \cite{singhy2004a, singhy2005a}.
It was explained
by variations of pinning of the CDW in those systems.
However, it is
possible that the large lattice distortions
accompanying the CDW
transition could be responsible for the
formation of micro-cracks.
This would lead to a
reduction of the sizes of mosaic blocks and an increase in texture,
which, in turn, would cause an increase of the electrical resistance
with each thermal cycle.
The presence of micro-cracks is
supported by the observation that the transition temperatures
and hysteresis are the same in each thermal cycle.
On the other hand,
the increase of pinning centers would lead to a lowering of the CDW
transition temperature, which is not observed here.
Just as in the case of $R_2$Ir$_3$Si$_5$ ($R$ = Lu, Er)
\cite{ramakrishnan2021a, ramakrishnan2020a},
we observe that the electrical resistivity exhibits
a $T^2$ dependence below $T$ = 50 K, persisting to much
higher temperatures than the $\sim$10 K typical of a
Fermi liquid, which implies dominant contributions from
short-range magnetic fluctuations of the Ho spins in the
absence of long-range magnetic ordering.
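The $T^2$ analysis amounts to a linear least-squares regression of $\rho$ against $T^2$. A minimal sketch on synthetic data (the coefficients and units below are illustrative only, not the measured values from the resistivity figure):

```python
# Fit rho(T) = rho_0 + A*T^2 below 50 K by linear least squares in T^2.
# The data here are synthetic; the measured curve is shown in the figure.
import numpy as np

T = np.array([5.0, 10.0, 20.0, 30.0, 40.0, 50.0])   # temperature (K)
rho = 20.0 + 0.004 * T**2                            # resistivity (arb. units)

# Design matrix for rho = rho_0 * 1 + A * T^2
M = np.column_stack([np.ones_like(T), T**2])
(rho0, A), *_ = np.linalg.lstsq(M, rho, rcond=None)

print(rho0, A)   # recovers the input coefficients 20.0 and 0.004
```

A fit of this form describing the data well up to 50 K is what signals the anomalously extended $T^2$ regime.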
\subsection{\label{sec:ho2ir3si5_magnetic_susceptibility}%
Magnetic susceptibility}
The temperature-dependent magnetic susceptibility,
$\chi(T)$, of Ho$_2$Ir$_3$Si$_5$ was measured on
a single crystal from the same rod as used for the
other bulk measurements.
Measurements were performed with a commercial
superconducting quantum interference device
(SQUID) magnetometer (MPMS 7, Quantum Design, USA)
employing a magnetic field of 0.1 T
along $\mathbf{c}$, during cooling
and heating between 2 and 300 K
(Fig. \ref{fig:ho2ir3si5_suscep}).
\begin{figure}
\includegraphics[width=80mm,keepaspectratio]{suscep.jpg}
\caption{\label{fig:ho2ir3si5_suscep}%
Temperature dependence of the inverse magnetic
susceptibility ($1/\chi(T)$) of Ho$_2$Ir$_3$Si$_5$.
The upper inset shows an expanded view of $1/\chi(T)$
versus temperature.
The lower inset shows an expanded view of $\chi(T)$
versus temperature.}
\end{figure}
The most interesting feature of the
temperature-dependent
magnetic susceptibility is the noticeable increase
of $\chi(T)$ at 90(1) K upon cooling through
the CDW transition.
The observed effect cannot be explained by
a change of Pauli susceptibility at the
transition, since the estimated magnitude
of the Pauli susceptibility is at least two orders
of magnitude smaller than the observed change in
the susceptibility.
Furthermore, one would expect a smaller Pauli
susceptibility in the CDW phase, whereas we
observe an increase of the
susceptibility when cooling through the transition.
The same transition is found at 130(1) K
upon heating.
The observed hysteresis is in good agreement with
the hysteresis in the electrical resistivity and
DSC measurements.
A Curie-Weiss fit to the data at 140--300 K
results in a Curie constant of $C$ = 31.78(1) emu/mol K
and an antiferromagnetic Weiss temperature of
$\theta$ = -2.6(1) K.
A Curie-Weiss fit to the low-temperature paramagnetic
regime 40--90 K results in $C$ = 32.4(1) emu/mol K
and $\theta$ = -6.3(1) K.
The different Curie constants correspond to different
effective magnetic moments on Ho$^{3+}$
of 11.27 $\mu_B$ and 11.38 $\mu_B$, respectively.
These values are slightly higher than the free ion
magnetic moment of Ho$^{3+}$ ions.
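The Curie constant per mole of formula units translates into an effective moment per Ho ion via the standard relation $\mu_{\mathrm{eff}} = \sqrt{8C/n}\,\mu_B$ (with $C$ in emu K/mol and $n = 2$ Ho per formula unit), and the free-ion value follows from $g_J\sqrt{J(J+1)}$. A quick numerical check of the quoted numbers:

```python
# Effective moment per magnetic ion from the molar Curie constant:
# mu_eff = sqrt(8*C/n) in Bohr magnetons (C in emu K per mol of formula
# units, n magnetic ions per formula unit).
import math

def mu_eff(C, n):
    return math.sqrt(8.0 * C / n)

# Ho2Ir3Si5 has two Ho atoms per formula unit.
print(round(mu_eff(31.78, 2), 2))   # high-temperature fit -> 11.27
print(round(mu_eff(32.4, 2), 2))    # low-temperature fit  -> 11.38

# Free-ion moment of Ho3+ (J = 8, g_J = 5/4): g_J * sqrt(J*(J+1))
print(round(1.25 * math.sqrt(8 * 9), 2))   # -> 10.61
```

Both fitted moments indeed exceed the free-ion value of 10.61 $\mu_B$.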
So we see that Ho$_2$Ir$_3$Si$_5$ joins our
earlier studied Er$_2$Ir$_3$Si$_5$ \cite{ramakrishnan2020a}
as yet another exceptional case in showing
an effect of the CDW transition on the magnetic susceptibility.
Usually, compounds containing magnetic
rare-earth elements do not show any anomaly
in the paramagnetic susceptibility at the
high-temperature CDW transitions.
This is true for many of the magnetic CDW
compounds which are mentioned in the introduction.
For example, the paramagnetic susceptibility
of Ho$_5$Ir$_4$Si$_{10}$ does not show any anomalies
at its CDW transition \cite{yanghd1991c,ghoshk1993}.
The coexistence of antiferromagnetic order
($T_N$ = 2 K) and CDW in Ho$_5$Ir$_4$Si$_{10}$
might be related to the presence of weakly
coupled $4f$ electrons of Ho$^{3+}$
ions \cite{ramakrishnan2017a},
while for Ho$_2$Ir$_3$Si$_5$ the reduced
magnitude of the magnetic moments and
strong coupling suggest influence of the $4f$
electrons in the CDW transition and vice versa.
It is not clear whether
the small but distinct change of Ho$^{3+}$
moment (11.27 $\mu_B$ vs 11.38 $\mu_B$)
across the CDW transition in a single crystal
of Ho$_2$Ir$_3$Si$_5$ could be responsible for
the absence of magnetic ordering of Ho$^{3+}$
moments in the crystal down to 2 K.
\subsection{\label{sec:ho2ir3si5_specific_heat}%
Specific heat}
\begin{figure}
\includegraphics[width=80mm,keepaspectratio]{specificheat.jpg}
\caption{\label{fig:ho2ir3si5_specific_heat}%
Temperature dependence of the specific heat
$C_p$ from 2 to 250 K using PPMS.
The inset provides an enlarged view of the CDW
transition where $\Delta C_p$ = 60.5(1) J/(mol K).}
\end{figure}
\begin{figure}
\includegraphics[width=80mm]{ho2ir3si5_dsc_delta_cp.jpg}
\caption{\label{fig:ho2ir3si5_dsc}%
Temperature dependence of the excess specific heat
$\Delta C_p$,
obtained by subtracting a smooth baseline from the
DSC signal.
Blue and red circles refer to cooling and warming data.
Clear peaks are observed in both cooling and
warming with a hysteresis of about 40 K.}
\end{figure}
The specific heat ($C_p(T)$) of Ho$_2$Ir$_3$Si$_5$
was measured by the thermal relaxation method,
using a physical property measuring system
(PPMS, Quantum Design, USA).
Data obtained during heating of a
single crystal of 10.5 mg from 2 to 250 K exhibit a sharp
peak at the CDW transition temperature of 131.1(1) K
(Fig. \ref{fig:ho2ir3si5_specific_heat}).
We are unable to find a peak in $C_p(T)$ while cooling
the crystal.
Similar behavior was noted earlier for a crystal of
Lu$_2$Ir$_3$Si$_5$ \cite{sangeetha2015a}.
This feature could be the result of the specific
method of measurement employed in the PPMS instrument.
In a second experiment, differential scanning calorimetry (DSC)
was measured from 80 to 150 K on the same single crystal
of Ho$_2$Ir$_3$Si$_5$ (Fig. \ref{fig:ho2ir3si5_dsc}).
The DSC data show clear peaks at different temperatures
in the heating and cooling runs,
that appear at similar temperatures as are found
for the electrical resistivity and magnetic susceptibility,
and thus confirm the first-order character of the
phase transition.
The peaks appear of similar width as for
Lu$_2$Ir$_3$Si$_5$ \cite{sangeetha2015a},
while the DSC peaks are much sharper for
Er$_2$Ir$_3$Si$_5$ \cite{ramakrishnan2020a}.
Since other features, like $C_p(T)$ and $\rho(T)$,
exhibit very sharp anomalies at the CDW transition,
the broadened features in the DSC signal might
be related to the rate of change of temperature
in this experiment in conjunction with the sluggish
character of the transition.
The lattice contribution to the specific heat was determined
from a fit to the data far away from the transition.
Subtraction
of the lattice contribution resulted in $\Delta C_p(T)$
(inset in Fig. \ref{fig:ho2ir3si5_specific_heat}).
A similar value was obtained from the DSC measurement
(Fig. \ref{fig:ho2ir3si5_dsc}).
The change of entropy at the transition $\Delta S$
has been determined by integration of
$\tfrac{\Delta C_p(T)}{T}$ over temperature.
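Numerically, this integration is a simple quadrature of $\Delta C_p/T$ over the transition region; a sketch with a model Gaussian peak centered at the transition (the peak shape and width here are purely illustrative, not the measured DSC profile):

```python
# Entropy change across the transition, Delta S = integral of (dCp/T) dT,
# estimated by trapezoidal quadrature on a model peak at 131.1 K.
# Peak height and width are illustrative, not the measured values.
import numpy as np

T = np.linspace(120.0, 140.0, 401)                 # temperature grid (K)
dCp = 60.5 * np.exp(-(((T - 131.1) / 2.0) ** 2))   # model excess Cp, J/(mol K)

# Trapezoidal quadrature of dCp/T over T
y = dCp / T
dS = float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(T)))
print(dS)   # of order 1 J/(mol K) for this model peak
```

The measured $\Delta S$ follows from the same quadrature applied to the baseline-subtracted experimental $\Delta C_p(T)$.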
The values of $\Delta C_p$ = 60.5(1) J/(mol K)
and $\Delta S$ = 0.6 J/(mol K)
are comparable to transition entropies
for $R_5$Ir$_4$Si$_{10}$
and $R_2$Ir$_3$Si$_5$ \cite{ramakrishnan2017a,sangeetha2015a}.
However, they are much larger than obtained
for conventional CDW systems, such as
K$_{0.3}$MoO$_{3}$ ($\Delta C_p$(max) = 8 J/(mol K);
$\Delta S$ = 0.18$R$) \cite{Bartl1979a} and
NbSe$_3$ ($\Delta C_p$(max) = 9 J/(mol K);
$\Delta S$ = 0.08$R$) \cite{tomi1981a}.
The specific heat anomaly indicates a much
sharper transition for single crystals
of Ho$_2$Ir$_3$Si$_5$ than for conventional CDW
systems, which is in agreement with the first-order
character of the transition
as deduced from
resistivity and magnetic susceptibility data.
\section{\label{sec:ho2ir3si5_conclusions}%
Conclusions}
We have established an incommensurately
modulated crystal structure of the CDW phase
of Ho$_2$Ir$_3$Si$_5$.
The incommensurate modulation
is accompanied by a strong lattice distortion,
both of which are important for the
modulation of interatomic distances on
zigzag chains of iridium atoms along $\mathbf{c}$.
This is in accordance with the CDW being
supported by these zigzag chains.
Similar to the case of Er$_2$Ir$_3$Si$_5$,
the rare earth atoms
are not directly involved in the CDW formation.
The occurrence of a large lattice distortion
accounts for the sluggish character
and large hysteresis of the transition,
as apparent
in the temperature dependences of the electrical resistivity,
magnetic susceptibility and specific heat.
Another unique feature of the compounds
$R_2$Ir$_3$Si$_5$ ($R$ = Lu, Er and Ho)
is the extreme sensitivity of the phase
transitions to crystalline order.
The present single crystals of high
perfection undergo a CDW transition,
while magnetic order is suppressed
down to at least 1.5 K.
It is worthwhile to point out that the
previous experiments on polycrystalline material
did not observe the CDW transition,
while magnetic order appeared
below $T_N$ = 5.1 K \cite{singhy2004a, singhy2005a}.
The present results on Ho$_2$Ir$_3$Si$_5$
do not confirm the idea that $T_{CDW}$ would
scale with the atomic radius of the rare earth
element.
While the size of Ho is equal to
that of Er, $T_{CDW}$ is much lower for
Ho$_2$Ir$_3$Si$_5$ than for Er$_2$Ir$_3$Si$_5$.
This absence of a correlation in the
series $R_2$Ir$_3$Si$_5$ is in contrast to
the series of isostructural rare-earth compounds
$R_5$Ir$_4$Si$_{10}$ \cite{ramakrishnan2017a},
where $T_{CDW}$ increases with increasing
size of the rare earth element $R$.
The SXRD data have revealed that the
structure of Ho$_2$Ir$_3$Si$_5$ is different
from that of Lu$_2$Ir$_3$Si$_5$ and
Er$_2$Ir$_3$Si$_5$.
Although the symmetries are the same for
all three compounds,
the presence of second-order superlattice
reflections in the CDW phase of Ho$_2$Ir$_3$Si$_5$
has added more complexity to it,
requiring modulation functions with up to
second-order harmonics (model A in
Section \ref{sec:ho2ir3si5_cdw_structure}).
From physical property measurements,
a huge hysteresis is found of about 40 K,
with the transition proceeding at
90 K (cooling) and 130 K (warming).
The transition implies severe structural
distortions, such that sometimes the crystal
shatters.
This is visible in the electrical resistivity,
which is higher after cycling through the
transition than before (Fig. \ref{fig:ho2ir3si5_resi}).
The effective magnetic moment of Ho is found
to change at the CDW transition,
from 11.38 $\mu_B$ to 11.27 $\mu_B$
(Fig. \ref{fig:ho2ir3si5_suscep}).
This kind of coupling between CDW and magnetism
is rare.
It has only been observed in isostructural
Er$_2$Ir$_3$Si$_5$.
This feature can probably be attributed to the
participation of $4f$ orbitals of Ho or Er
in states near the Fermi level that are involved
in CDW ordering.
The fact that the effective moment of the rare earth
is influenced by the CDW transition suggests
a competition between CDW and magnetic order,
with both interactions presumably involving
the same part of the Fermi surface.
In order to clearly understand the mechanism
for this unusual nature one would probably need
microscopic probes of magnetism,
such as elastic and inelastic neutron scattering.
\begin{acknowledgement}
We acknowledge DESY (Hamburg, Germany),
a member of the Helmholtz Association HGF,
for the provision of experimental facilities.
Parts of this research were carried out at
PETRA III, using beamline P24.
J.-K. Bao acknowledges financial support from the
Alexander-von-Humboldt foundation.
The work at TIFR Mumbai was supported by the
Department of Atomic Energy of the Government
of India under Project No. 12-R\&D-TFR-5.10-0100.
This research has been funded by the Deutsche
Forschungsgemeinschaft
(DFG; German Research Foundation)--406658237.
\end{acknowledgement}
\begin{suppinfo}
Details of the SXRD data collection,
data processing and structural analysis.
This material is available free of charge
\textit{via} the Internet at
\texttt{http://pubs.acs.org}.
\end{suppinfo}
\providecommand{\latin}[1]{#1}
\makeatletter
\providecommand{\doi}
{\begingroup\let\do\@makeother\dospecials
\catcode`\{=1 \catcode`\}=2 \doi@aux}
\providecommand{\doi@aux}[1]{\endgroup\texttt{#1}}
\makeatother
\providecommand*\mcitethebibliography{\thebibliography}
\csname @ifundefined\endcsname{endmcitethebibliography}
{\let\endmcitethebibliography\endthebibliography}{}
\begin{mcitethebibliography}{58}
\providecommand*\natexlab[1]{#1}
\providecommand*\mciteSetBstSublistMode[1]{}
\providecommand*\mciteSetBstMaxWidthForm[2]{}
\providecommand*\mciteBstWouldAddEndPuncttrue
{\def\unskip.}{\unskip.}}
\providecommand*\mciteBstWouldAddEndPunctfalse
{\let\unskip.}\relax}
\providecommand*\mciteSetBstMidEndSepPunct[3]{}
\providecommand*\mciteSetBstSublistLabelBeginEnd[3]{}
\providecommand*\unskip.}{}
\mciteSetBstSublistMode{f}
\mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})}
\mciteSetBstSublistLabelBeginEnd
{\mcitemaxwidthsubitemform\space}
{\relax}
{\relax}
\bibitem[Gr\"{u}ner(1994)]{gruener1994a}
Gr\"{u}ner,~G. \emph{Charge Density Waves in Solids}; Addison-Wesley: Reading,
Massachusetts, 1994\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Marmeggi \latin{et~al.}(1982)Marmeggi, Delapalme, Lander, Vettier, and
Lehner]{marmeggijc1982a}
Marmeggi,~J.~C.; Delapalme,~A.; Lander,~G.~H.; Vettier,~C.; Lehner,~N. Atomic
displacements in the incommensurable charge-density wave in alpha-uranium.
\emph{Sol. Stat. Commun.} \textbf{1982}, \emph{43}, 577--581\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Fleming \latin{et~al.}(1981)Fleming, DiSalvo, Cava, and
Waszczak]{flemingrm1981a}
Fleming,~R.~M.; DiSalvo,~F.~J.; Cava,~R.~J.; Waszczak,~J.~V. Observation of
Charge-Density waves in the cubic spinel structure {CuV$_2$S$_4$}.
\emph{Phys. Rev. B} \textbf{1981}, \emph{24}, 2850--2853\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kawaguchi \latin{et~al.}(2012)Kawaguchi, Kubota, Tsuji, Kim, Kato,
Takata, and Ishibashi]{kawaguchi2012a}
Kawaguchi,~S.; Kubota,~Y.; Tsuji,~N.; Kim,~J.; Kato,~K.; Takata,~M.;
Ishibashi,~H. Structural Analysis of Spinel Compound {CuV$_2$S$_4$} with
Incommensurate Charge-Density Wave. \emph{J. Phys. Conf. Ser.} \textbf{2012},
\emph{391}, 012095\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Okada \latin{et~al.}(2004)Okada, Koyama, and Watanabe]{okadah2004a}
Okada,~H.; Koyama,~K.; Watanabe,~K. Two-Step Structural Modulations and {F}ermi
Liquid State in Spinel Compound {CuV$_2$S$_4$}. \emph{J. Phys. Soc. Jpn}
\textbf{2004}, \emph{73}, 3227--3230\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ramakrishnan \latin{et~al.}(2019)Ramakrishnan, Sch\"onleber,
H\"ubschle, Eisele, Schaller, Rekis, Bui, Feulner, van Smaalen, Bag,
Ramakrishnan, Tolkiehn, and Paulmann]{ramakrishnan2019a}
Ramakrishnan,~S.; Sch\"onleber,~A.; H\"ubschle,~C.~B.; Eisele,~C.;
Schaller,~A.~M.; Rekis,~T.; Bui,~N. H.~A.; Feulner,~F.; van Smaalen,~S.;
Bag,~B.; Ramakrishnan,~S.; Tolkiehn,~M.; Paulmann,~C. Charge density wave and
lock-in transitions of {${\mathrm{CuV}}_{2}{\mathrm{S}}_{4}$}. \emph{Phys.
Rev. B} \textbf{2019}, \emph{99}, 195140\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[\'{S}lebarski and Goraus(2013)\'{S}lebarski, and
Goraus]{slebarski2013a}
\'{S}lebarski,~A.; Goraus,~J. Electronic structure and crystallographic
properties of skutterudite-related {Ce$_{3}M_{4}$Sn$_{13}$} and
{La$_{3}M_{4}$Sn$_{13}$} {($M$ = Co, Ru, and Rh)}. \emph{Phys. Rev. B}
\textbf{2013}, \emph{88}, 155122\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Otomo \latin{et~al.}(2016)Otomo, Iwasa, Suyama, Tomiyasu, Sagayama,
Sagayama, Nakao, Kumai, and Murakami]{otomo2016a}
Otomo,~Y.; Iwasa,~K.; Suyama,~K.; Tomiyasu,~K.; Sagayama,~H.; Sagayama,~R.;
Nakao,~H.; Kumai,~R.; Murakami,~Y. Chiral crystal-structure transformation of
{$R_{3}$Co$_{4}$Sn$_{13}$} {($R$ = La and Ce)}. \emph{Phys. Rev. B}
\textbf{2016}, \emph{94}, 075109\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Welsch \latin{et~al.}(2019)Welsch, Ramakrishnan, Eisele, van Well,
Sch\"onleber, van Smaalen, Matteppanavar, Thamizhavel, Tolkiehn, Paulmann,
and Ramakrishnan]{welsch2019a}
Welsch,~J.; Ramakrishnan,~S.; Eisele,~C.; van Well,~N.; Sch\"onleber,~A.; van
Smaalen,~S.; Matteppanavar,~S.; Thamizhavel,~A.; Tolkiehn,~M.; Paulmann,~C.;
Ramakrishnan,~S. Second-order charge-density-wave transition in single
crystals of {${\mathrm{La}}_{3}{\mathrm{Co}}_{4}{\mathrm{Sn}}_{13}$}.
\emph{Phys. Rev. Materials} \textbf{2019}, \emph{3}, 125003\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[DiMasi \latin{et~al.}(1995)DiMasi, Aronson, Mansfield, Foran, and
Lee]{dimasie1995a}
DiMasi,~E.; Aronson,~M.~C.; Mansfield,~J.~F.; Foran,~B.; Lee,~S. Chemical
pressure and charge-density waves in rare-earth tritellurides. \emph{Phys.
Rev. B} \textbf{1995}, \emph{52}, 14516--14525\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ru \latin{et~al.}(2008)Ru, Condron, Margulis, Shin, Laverock, Dugdale,
Toney, and Fisher]{run2008a}
Ru,~N.; Condron,~C.~L.; Margulis,~G.~Y.; Shin,~K.~Y.; Laverock,~J.;
Dugdale,~S.~B.; Toney,~M.~F.; Fisher,~I.~R. Effect of chemical pressure on
the charge density wave transition in rare-earth tritellurides {$R$Te$_{3}$}.
\emph{Phys. Rev. B} \textbf{2008}, \emph{77}, 035114\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Banerjee \latin{et~al.}(2013)Banerjee, Feng, Silevitch, Wang, Lang,
Kuo, Fisher, and Rosenbaum]{banerjee2013a}
Banerjee,~A.; Feng,~Y.; Silevitch,~D.~M.; Wang,~J.; Lang,~J.~C.; Kuo,~H.-H.;
Fisher,~I.~R.; Rosenbaum,~T.~F. Charge transfer and multiple density waves in
the rare earth tellurides. \emph{Phys. Rev. B} \textbf{2013}, \emph{87},
155131\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[DiMasi \latin{et~al.}(1996)DiMasi, Foran, Aronson, and
Lee]{dimasie1996a}
DiMasi,~E.; Foran,~B.; Aronson,~M.~C.; Lee,~S. Stability of charge-density
waves under continuous variation of band filling in {LaTe$_{2-x}$Sb$_x$}
($0<x<1$). \emph{Phys. Rev. B.} \textbf{1996}, \emph{54}, 13587--13596\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shim \latin{et~al.}(2004)Shim, Kang, and Min]{shim2004a}
Shim,~J.~H.; Kang,~J.-S.; Min,~B.~I. Electronic Structures of {$R$Te$_2$} ({$R$
= La, Ce}): A Clue to the Pressure-Induced Superconductivity in
{CeTe$_{1.82}$}. \emph{Phys. Rev. Lett.} \textbf{2004}, \emph{93},
156406\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ramakrishnan and van Smaalen(2017)Ramakrishnan, and van
Smaalen]{ramakrishnan2017a}
Ramakrishnan,~S.; van Smaalen,~S. Unusual ground states in {$R_5T_4X_{10}$}
({$R$} = rare earth; {$T$} = {Rh, Ir}; and {$X$} = {Si, Ge, Sn):} a review.
\emph{Rep. Prog. Phys.} \textbf{2017}, \emph{80}, 116501\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kuo \latin{et~al.}(2020)Kuo, Hsu, Tseng, Chen, Lin, Liu, Kuo, and
Lue]{kuo2020a}
Kuo,~C.~N.; Hsu,~C.~J.; Tseng,~C.~W.; Chen,~W.~T.; Lin,~S.~Y.; Liu,~W.~Z.;
Kuo,~Y.~K.; Lue,~C.~S. Charge density wave like behavior with magnetic
ordering in orthorhombic {Sm$_{2}$Ru$_{3}$Ge$_{5}$}. \emph{Phys. Rev. B}
\textbf{2020}, \emph{101}, 155140\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Nakamura \latin{et~al.}(2015)Nakamura, Uejo, Honda, Takeuchi, Harima,
Yamamoto, Haga, Matsubayashi, Uwatoko, Hedo, Nakama, and
Onuki]{nakamura2015a}
Nakamura,~A.; Uejo,~T.; Honda,~F.; Takeuchi,~T.; Harima,~H.; Yamamoto,~E.;
Haga,~Y.; Matsubayashi,~K.; Uwatoko,~Y.; Hedo,~M.; Nakama,~T.; Onuki,~Y.
Transport and Magnetic Properties of {EuAl$_4$} and {EuGa$_4$}. \emph{J.
Phys. Soc. Jpn} \textbf{2015}, \emph{84}, 124711\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shimomura \latin{et~al.}(2019)Shimomura, Murao, Tsutsui, Nakao,
Nakamura, Hedo, Nakama, and Onuki]{shimomura2019a}
Shimomura,~S.; Murao,~H.; Tsutsui,~S.; Nakao,~H.; Nakamura,~A.; Hedo,~M.;
Nakama,~T.; Onuki,~Y. Lattice Modulation and Structural Phase Transition in
the Antiferromagnet {EuAl$_4$}. \emph{J. Phys. Soc. Jpn} \textbf{2019},
\emph{88}, 014602\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kaneko \latin{et~al.}(2021)Kaneko, Kawasaki, Nakamura, Munakata,
Nakao, Hanashima, Kiyanagi, Ohhara, Hedo, Nakama, and Onuki]{kaneko2021a}
Kaneko,~K.; Kawasaki,~T.; Nakamura,~A.; Munakata,~K.; Nakao,~A.; Hanashima,~T.;
Kiyanagi,~R.; Ohhara,~T.; Hedo,~M.; Nakama,~T.; Onuki,~Y. Charge-Density-Wave
Order and Multiple Magnetic Transitions in Divalent Europium Compound
{EuAl$_4$}. \emph{J. Phys. Soc. Jpn} \textbf{2021}, \emph{90}, 064704\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Ramakrishnan \latin{et~al.}(2022)Ramakrishnan, Kotla, Rekis, Bao,
Eisele, Noohinejad, Tolkiehn, Paulmann, Singh, Verma, Bag, Kulkarni,
Thamizhavel, Singh, Ramakrishnan, and van Smaalen]{ramakrishnan2022a}
Ramakrishnan,~S. \latin{et~al.} Orthorhombic charge density wave on the
tetragonal lattice of {EuAl$_4$}. \emph{IUCrJ} \textbf{2022}, \emph{9},
378--385\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Meier \latin{et~al.}(2022)Meier, Torres, Hermann, Zhao, Lavina, Sales,
and May]{meierwr2022a}
Meier,~W.~R.; Torres,~J.~R.; Hermann,~R.~P.; Zhao,~J.; Lavina,~B.;
Sales,~B.~C.; May,~A.~F. Thermodynamic insights into the intricate magnetic
phase diagram of {EuAl$_4$}. \emph{Phys. Rev. B} \textbf{2022}, \emph{106},
094421\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Zeng \latin{et~al.}(2022)Zeng, Hu, Wang, Sun, Yang, Boubeche, Luo, He,
Cheng, Yao, and Luo]{zeng2022a}
Zeng,~L.; Hu,~X.; Wang,~N.; Sun,~J.; Yang,~P.; Boubeche,~M.; Luo,~S.; He,~Y.;
Cheng,~J.; Yao,~D.-X.; Luo,~H. Interplay between Charge-Density-Wave,
Superconductivity, and Ferromagnetism in {CuIr$_{2-x}$Cr$_x$Te$_4$}
Chalcogenides. \emph{J. Phys. Chem. Lett.} \textbf{2022}, \emph{13},
2442--2451\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Roman \latin{et~al.}(2018)Roman, Strychalska-Nowak, Klimczuk, and
Kolincio]{romanm2018a}
Roman,~M.; Strychalska-Nowak,~J.; Klimczuk,~T.; Kolincio,~K.~K. Extended phase
diagram of {$R$NiC$_2$} family: Linear scaling of the {P}eierls temperature.
\emph{Phys. Rev. B} \textbf{2018}, \emph{97}, 041103(R)\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shimomura \latin{et~al.}(2009)Shimomura, Hayashi, Asaka, Wakabayashi,
Mizumaki, and Onodera]{shimomura2009a}
Shimomura,~S.; Hayashi,~C.; Asaka,~G.; Wakabayashi,~N.; Mizumaki,~M.;
Onodera,~H. Charge-Density-Wave Destruction and Ferromagnetic Order in
{SmNiC$_2$}. \emph{Phys. Rev. Lett.} \textbf{2009}, \emph{102}, 076404\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[W\"olfel \latin{et~al.}(2010)W\"olfel, Li, Shimomura, Onodera, and van
Smaalen]{wolfel2010a}
W\"olfel,~A.; Li,~L.; Shimomura,~S.; Onodera,~H.; van Smaalen,~S. Commensurate
charge-density wave with frustrated interchain coupling in {SmNiC$_{2}$}.
\emph{Phys. Rev. B} \textbf{2010}, \emph{82}, 054120\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Shimomura \latin{et~al.}(2016)Shimomura, Hayashi, Hanasaki, Ohnuma,
Kobayashi, Nakao, Mizumaki, and Onodera]{shimomuras2016a}
Shimomura,~S.; Hayashi,~C.; Hanasaki,~N.; Ohnuma,~K.; Kobayashi,~Y.; Nakao,~H.;
Mizumaki,~M.; Onodera,~H. Multiple charge density wave transitions in the
antiferromagnets {$R$NiC$_{2}$} {($R$ = Gd, Tb)}. \emph{Phys. Rev. B}
\textbf{2016}, \emph{93}, 165108\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kolincio \latin{et~al.}(2017)Kolincio, Roman, Winiarski,
Strychalska-Nowak, and Klimczuk]{kolinciokk2017a}
Kolincio,~K.~K.; Roman,~M.; Winiarski,~M.~J.; Strychalska-Nowak,~J.;
Klimczuk,~T. Magnetism and charge density waves in {$R$NiC$_2$} {($R$ = Ce,
Pr, Nd)}. \emph{Phys. Rev. B} \textbf{2017}, \emph{95}, 235156\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Maeda \latin{et~al.}(2019)Maeda, Kondo, and Nogami]{maeda2019a}
Maeda,~H.; Kondo,~R.; Nogami,~Y. Multiple charge density waves compete in
ternary rare-earth nickel carbides, {$R$NiC$_{2}$} ({$R$:} {Y}, {Dy}, {Ho},
and {Er}). \emph{Phys. Rev. B} \textbf{2019}, \emph{100}, 104107\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\bibitem[Kolincio \latin{et~al.}(2020)Kolincio, Roman, and
Klimczuk]{kolincio2020a}
Kolincio,~K.~K.; Roman,~M.; Klimczuk,~T. Enhanced Mobility and Large Linear
Nonsaturating Magnetoresistance in the Magnetically Ordered States of
{TmNiC$_{2}$}. \emph{Phys. Rev. Lett.} \textbf{2020}, \emph{125},
176601\relax
\mciteBstWouldAddEndPuncttrue
\mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct}
{\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax
\unskip.}
\end{mcitethebibliography}
\end{document}
\section{Introduction} \label{sec:intro}
Images of the solar corona obtained in extreme ultraviolet (EUV) emission lines broadly separate by brightness into dark coronal holes, quiet Sun, and bright active regions.
Within these features, emission is highly inhomogeneous with compact bright points, and extended loops and plume structures. When interpreting the image data, considerations of scattered light (often referred to as stray light) within the instrument can be important. For example, bright features have their intensities reduced due to scattering, while dark areas can be contaminated by scattered light from neighboring bright areas. EUV emission is optically thin and thus emitted photons are not scattered within the corona---the scattering is entirely within the instrument.
The present work was motivated by the need to make spectroscopic measurements
to relate coronal plasma properties with those measured in situ. A fundamental objective for Heliophysics is to determine how structures in the solar atmosphere produce structure in the solar wind,
which is particularly relevant to the Parker Solar Probe \citep{2016SSRv..204....7F} and Solar Orbiter \citep{2020A&A...642A...1M} missions launched in 2018 and 2020 that will reach heliocentric distances of 0.05~AU and 0.28~AU, respectively. By making in situ measurements in the outer solar corona and young solar wind \citep{2020JGRA..12526005V}
there is a greater possibility of measuring plasma that is relatively unevolved from its coronal origins.
Coronal holes are locations where some of the photospheric magnetic field opens directly into the heliosphere and their centers are widely acknowledged as the source regions of the fast solar wind \citep{1976SoPh...49..271S}.
The boundary regions of coronal holes are also believed to make a contribution to the more variable, slower solar wind, either by direct release of plasma along the open magnetic field lines \citep[e.g.,][]{1990ApJ...355..726W}, or indirectly through interchange reconnection of open field lines with the closed field of the nearby quiet Sun and/or active regions \citep[e.g.,][]{2001ApJ...560..425F,2011ApJ...731..112A}.
EUV spectrometers are important for measuring parameters such as velocity (via Doppler shifts), temperature and element abundances in coronal holes that can be related to the parameters measured in situ. Due to the low coronal intensities within coronal holes it is thus vital that the scattered light component to coronal hole measurements be quantified.
Ideally scattered light should be characterized before the launch of a space instrument, but technical limitations or time constraints can prevent this.
This was the case for the \textit{EUV Imaging Spectrometer} \citep[EIS;][]{culhane07} on board the \textit{Hinode}\ spacecraft \citep{2007SoPh..243....3K}, which obtains spectral images in the 170--212 and 246--292~\AA\ wavelength regions. \pry{Each emission line observed by EIS has a characteristic temperature of formation [$T_\mathrm{f}$] for which the ion has its peak abundance in equilibrium conditions. These temperatures are obtained from the CHIANTI atomic database \citep{2016JPhB...49g4009Y,2021ApJ...909...38D}.}
For observations above the solar limb, the scattered light can be estimated by measuring emission lines with $\log\,(T_\mathrm{f}/\mathrm{K})\le5.5$ as these are expected to have zero intensity at coronal heights \citep{2011ApJ...736..101H}. This method does not work for on-disk measurements, however.
The only study of on-disk scattered light is by \citet{2018ApJ...856...28W}, who measured the intensities of EIS emission lines with different values of $T_{\rm f}$, inside and outside of two on-disk coronal holes. They found that the coronal hole to quiet Sun ratios decreased with \pry{increasing} temperature until $\log\,(T_{\rm f}/\mathrm{K})=6.15$, above which the ratios were constant.
Given that coronal holes are known to be cooler than quiet Sun regions \citep[e.g.,][]{1998A&A...336L..90D}, the authors concluded that the on-disk coronal hole intensities are dominated by scattered light for ions with $\log\,(T_{\rm f}/{\rm K})\ge 6.15$.
\pry{This result does not imply that the behavior of scattered light is changing as a function of temperature. All emission lines in the coronal hole will have a scattered light component coming from the surroundings. Rather, this result arises because the coronal hole signal becomes increasingly weak at higher temperatures and thus eventually becomes overwhelmed by the scattered light component.}
\pry{The \citet{2018ApJ...856...28W} result implies, in particular,} that emission lines of \ion{Si}{x} and \ion{S}{x}, both with $\log\,(T_{\rm f}/{\rm K})=6.15$, are predominantly composed of scattered light. These two ions are used to infer the Si/S element abundance ratio \citep{2009ApJ...695...36F,2011ApJ...727L..13B}, which is used as a diagnostic of the \textit{FIP bias}, i.e., the enhancement factor of elements with low first ionization potential (FIP) often found in coronal and solar wind plasma. The FIP bias is determined by processes in the chromosphere and low corona; once a parcel of plasma is released into the heliosphere, its FIP bias is frozen in, and does not evolve as the solar wind advects outward. Thus the FIP bias is a very important measurement for connecting structures and variability in the solar wind with their coronal source \citep{1998SSRv...85..253P,2015LRSP...12....2L}. The \citet{2018ApJ...856...28W} results thus suggest that efforts to use the Si/S FIP bias ratio in coronal holes to find connections to the solar wind plasma measured by Solar Orbiter are likely to have large uncertainties.
The \citet{2018ApJ...856...28W} method requires an intensity comparison between the coronal hole and surrounding quiet Sun across a range of ions formed at different temperatures. It does not intrinsically yield an estimate of the scattered light, but instead identifies a break point in the coronal hole/quiet Sun ratio vs.\ temperature relation where the ratio becomes constant, implying scattered light is completely dominant. An additional complication is that, due to the generally small size of the EIS rasters, the quiet Sun measurement is made close to the coronal hole. If the scattered light in the coronal hole is coming from larger distances, then the local quiet Sun emission may not accurately reflect the full extent of the scattered light.
In the present work we use the 2012 transit of Venus to obtain a general purpose empirical formula for scattered light in EIS observations using \ion{Fe}{xii} emission observed by EIS and the \textit{Atmospheric Imaging Assembly} \citep[AIA:][]{2012SoPh..275...17L} on the \textit{Solar Dynamics Observatory} \citep[SDO:][]{2012SoPh..275....3P}. In comparison to the \citet{2018ApJ...856...28W} work, the formula yields a direct percentage estimate of scattered light at any location on the solar disk. By cross-calibrating quiet Sun intensities between EIS and AIA, the full-disk intensity from AIA is used to estimate long-range scattering. The formula is derived from data obtained with the 40$^{\prime\prime}$\ slit (or ``slot") of EIS, and the companion work \citet{2022arXiv220314161Y} provides technical details of EIS slot data, including a correction factor for measured intensities that is applied here.
\pry{We argue in Section~\ref{sect.ext} that the formula can be used for any line within the EIS wavelength ranges, and} it should be valuable in assessing whether a specific EIS coronal hole observation exhibits a significant degree of scattered light.
Transits of Venus across the face of the Sun are rare celestial events: they occur in pairs eight years apart, and successive pairs are separated by more than a century. The most recent pair occurred in 2004 and 2012, and the next transit will not be until 2117. The Venus shadow as it appears against the solar disk has an angular diameter of 60\arcsec, which is sufficiently large that short-range scattered light is significantly reduced in the center of the shadow, but not so large that the scattered light from the full solar disk is reduced. Hence the transit is valuable for assessing the relative contributions of short and long-range scattered light. This is in contrast to the much more frequent Mercury transits (angular diameter 10\arcsec) and partial solar eclipses by the Moon. A large portion of the Sun is blocked in the latter, and the lunar limb moves so quickly (a few arcsec per second) that it is blurred in typical EIS exposures of tens of seconds, preventing measurements of short-range scattered light.
Section~\ref{sect.scatt} discusses previous studies of scattered light in EUV instruments, and Section~\ref{sect.soft} summarizes the analysis software used in this article. Section~\ref{sect.aia} presents an analysis of the AIA data obtained during the transit, yielding a general formula for scattered light in the instrument. Section~\ref{sect.eis} describes the EIS transit observations, and the analysis of Section~\ref{sect.eis-anal} yields an empirical formula describing the scattered light. Section~\ref{sect.pres} gives our prescription for how to derive the scattered light contribution for on-disk EIS measurements of the \ion{Fe}{xii} \lam195.12 line. Examples from seven coronal hole observations are given in Section~\ref{sect.ch}. \pry{In Section~\ref{sect.ext} we consider whether the scattering formula for \ion{Fe}{xii} \lam195.12 applies to other emission lines in the EIS wavelength bands.} Our results are summarized in Section~\ref{sect.summary}, where their significance for FIP bias and Doppler shift measurements in coronal holes is also discussed.
\section{Scattered light for EUV instruments}\label{sect.scatt}
In simple terms, the scattered light for an on-disk solar observation can be considered to consist of three components: local, short-range and long-range. To illustrate the difference, consider a dark, circular coronal hole of radius 100$^{\prime\prime}$. In the center there is a tiny, intense bright point that is around a factor 100 brighter than the coronal hole. Outside the coronal hole the quiet Sun emission is completely uniform and around a factor 10 brighter than the coronal hole. The point spread function (PSF) that describes how the light from a point source is spatially dispersed across a detector typically consists of a narrow core and broad wings. If the instrument resolution is 2$^{\prime\prime}$, then the core can be considered a Gaussian of full-width at half-maximum of 2$^{\prime\prime}$. Given the contrast between the bright point and the coronal hole, the core will spread the bright point's emission out to around 4$^{\prime\prime}$. The wing emission may extend further, to 5--10$^{\prime\prime}$. This is local scattered light.
The quiet Sun emission around the coronal hole is weaker but is distributed continuously. A block of $3\times 3$ pixels will yield similar wing emission to the bright point, but the combined effect of multiple nearby blocks of pixels will create a more significant wing component than the bright point alone. This can be expected to extend tens of arcsec into the coronal hole, but diminish towards the coronal hole center. This is short-range scattered light.
Finally, the presence of huge numbers of quiet Sun pixels from the entire solar disk, but far from the coronal hole, will result in a scattered light signal in the coronal hole that is due to the very faint far wings of the PSF. This is long-range scattered light and would be expected to be constant across the coronal hole.
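The interplay of the three components can be made concrete with a toy one-dimensional calculation. The Python sketch below is purely illustrative and is not part of the analysis in this article: the scene, the PSF shape (a narrow Gaussian core plus faint Lorentzian wings) and all numerical values are assumptions chosen to mimic the coronal hole scenario described above.

```python
import numpy as np

# Toy 1D scene: quiet Sun (10), a 200"-wide coronal hole (1) and a
# compact bright point (100) at the hole center; 1" pixels.
x = np.arange(-400, 401)
scene = np.full(x.size, 10.0)
scene[np.abs(x) < 100] = 1.0        # coronal hole
scene[np.abs(x) <= 1] = 100.0       # bright point

# Assumed PSF: 2" FWHM Gaussian core plus faint Lorentzian wings.
core = np.exp(-0.5 * (x / 0.85)**2)          # FWHM = 2.355*0.85 ~ 2"
wings = 1e-3 / (1.0 + (x / 20.0)**2)
psf = core + wings
psf /= psf.sum()

observed = np.convolve(scene, psf, mode="same")

# Excess signal at a dark pixel 50" from the hole center: scattered
# light from the bright point (local), the nearby quiet Sun
# (short-range) and the rest of the "disk" (long-range).
i = np.argmin(np.abs(x - 50))
excess = observed[i] - scene[i]
```

In this toy model only a few percent of the light goes into the wings, yet the dark pixel still picks up a small positive excess from emission tens to hundreds of arcsec away.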
\pry{The PSF described above is a function that varies smoothly from the core to the wings, and radially (or close-to radially) symmetric.}
A complication for EUV instruments such as AIA and EIS is the presence of mesh filters in the optical path that are used to block visible light. A compact source such as the coronal hole bright point discussed above will produce a complex diffraction pattern that, to first order, appears as a cross or double-cross on the detector. Examples from EIS can be seen in Figure~9 of \citet{2018ApJ...856...28W} and an example from AIA is shown in Figure~1 of \citet{2011ApJ...743L..27R}. A model of the AIA PSF is shown in Figure~5 of \citet{aia-psf}. The scattered light in the coronal hole would thus be enhanced at the cross locations, but not otherwise. For the continuous quiet Sun emission, the effect of the filter diffraction pattern is to produce an enhanced, smooth, symmetric wing to the scattered light.
Scattered light in AIA data has been studied by previous authors, building on earlier work
with the predecessor \textit{Transition Region And Coronal Explorer} \citep[TRACE;][]{1999SoPh..187..229H} mission, and we briefly summarize these activities here.
\citet{2001SoPh..198..385L} used flare data obtained with TRACE to study the diffraction pattern from the mesh filter, finding that the pattern contains around 20\%\ of the incident light. They also found that the dispersion within the individual diffraction orders is sufficient to resolve spectral features, and this latter feature has been used for diagnostic purposes by \citet{2011ApJ...734...34K} and \citet{2011ApJ...743L..27R} using TRACE and AIA data, respectively. Tens of diffraction orders can be seen in large flares and they can be used to derive flare intensities in the case that the zeroth order image is saturated on the detector \citep{2014ApJ...793L..23S}.
The diffraction patterns for all of the AIA EUV channels were computed in a technical report by \citet{aia-psf}, and are implemented in IDL software available in the \href{https://sohoftp.nascom.nasa.gov/solarsoft/}{\textsf{SolarSoft}} library \citep{1998SoPh..182..497F,2012ascl.soft08013F}.
\citet{2009ApJ...690.1264D} were the first to model the complete PSF of TRACE by including the diffraction pattern, a narrow, Gaussian core, a broader Gaussian ``shoulder", and an isotropic, long-range component modeled as a Gaussian-truncated Lorentzian. The latter effectively corresponds to the PSF wing discussed earlier. This approach was extended to AIA data by \citet{2013ApJ...765..144P} and has been used by other authors to deconvolve their AIA images \citep[e.g.,][]{2021ApJ...906...62L,2021ApJ...907....1U}. We note that the \citet{2013ApJ...765..144P} PSF puts all of the scattered light within 100\arcsec\ of the source.
\citet{2016JSWSC...6A...1G} proposed a general-purpose, non-parametric blind deconvolution scheme that can be applied to any image data. They applied it to AIA data from the 2012 Venus transit and compared the results with the parametric model of \citet{2013ApJ...765..144P}, finding comparable results. An important point is that deconvolutions with both the non-parametric and parametric PSFs did not lead to zero signal in the Venus shadow and so the authors required an additional constant level of scattered light across the image. The implication is that there is an additional component of scattered light beyond 100\arcsec\ from the source. A similar finding was made by \citet{2012ApJ...749L...8S} using data from the Extreme Ultraviolet Imager (EUVI) on board STEREO. Our interpretation is that this emission comes from the very far wings of the PSF that were not modeled in these articles.
An imaging telescope with a large field-of-view such as AIA is ideal for investigating the full PSF, including the long-range component. The situation is more complex for an imaging slit spectrometer such as EIS, however. The slit isolates a narrow column of the image focused by the primary mirror. Thus information about the wider image field that gives rise to the short and long-range scattering is lost. The field can be built up by performing a raster scan but at most this will cover around 600\arcsec\ $\times$ 500\arcsec, and usually significantly less due to telemetry or cadence constraints.
Our solution in the present work is to find a simple empirical formula that yields an estimate of scattered light in EIS data at on-disk locations. A comparable formula is first derived using AIA data from the Venus transit to illustrate the method, and then applied to EIS.
\section{Analysis Software}\label{sect.soft}
The analysis performed for this article used IDL code written by the authors or contained in the \textsf{SolarSoft} IDL library. Software that may be generally useful for other EIS or AIA observations has been placed in the \textsf{GitHub} repository \href{https://github.com/pryoung/aia-eis-venus}{\textsf{aia-eis-venus}}. Software specifically created for the Venus analysis and for generating figures in the present article has been placed in the \textsf{GitHub} repository \href{https://github.com/pryoung/papers/tree/main/2022_venus}{\textsf{pryoung/papers/2022\_venus}}.
Some of the analysis performed here is done with \href{https://hesperia.gsfc.nasa.gov/rhessidatacenter/complementary_data/maps/maps.html}{IDL \textit{maps}}, which place solar images within a common heliocentric coordinate system. This makes it easy to extract co-spatial images from different instruments. AIA maps are created in the present work with the routine \textsf{sdo2map}, and EIS maps are created with \textsf{eis\_slot\_map} and \textsf{eis\_get\_fitdata}. These routines are available in \textsf{SolarSoft}.
\section{AIA Observations and Analysis}\label{sect.aia}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{plot_aia_venus}
\caption{An AIA 193~\AA\ image from 2012 June 6, with Venus located at approximate position [40,590]. The track and direction of motion are indicated by the \textit{yellow line}. A logarithmic intensity scaling has been applied to the image. The \textit{blue crosses} show the locations of Venus as measured by EIS. \pry{The differences compared to the AIA track are due to the different orbits of the \textit{Hinode}\ and SDO spacecraft (Section~\ref{sect.eis-anal})}.}
\label{fig.aia-image}
\end{figure}
\pry{AIA obtains full-disk images of the Sun in a number of filters, including seven at EUV wavelengths that are centered on strong emission lines. For the present work, we mostly use images from the 193~\AA\ channel that has dominant contributions from \ion{Fe}{xii} lines at 192.39, 193.51 and 195.12~\AA\ in most conditions. In Section~\ref{sect.ext} we also consider data from the 211~\AA\ channel that is usually dominated by the \ion{Fe}{xiv} 211.32~\AA\ line. The AIA EUV images are obtained at a 12~s cadence and the image pixel size corresponds to 0.6$^{\prime\prime}$.
}
Venus entered the AIA field of view around 20:20~UT on 2012 June 5 and exited around 06:20~UT on June 6. (For the remainder of this article we will drop the dates when specifying times during the transit.)
The leading edge of Venus was externally tangent to the solar limb (Contact I) at 22:07~UT and the trailing edge was externally tangent (Contact IV) at 04:37~UT.
Figure~\ref{fig.aia-image} shows an AIA 193~\AA\ image from 01:01~UT, with Venus---visible as a black circle of diameter 60\arcsec---close to the central meridian.
Venus passed to the north of a large active region complex comprising active regions with numbers 11493, 11496, 11498, 11499 and 11501. To the west of the complex was a large low-latitude coronal hole, and Venus clipped the northern section of the hole.
During the transit, C1- and C2-class flares peaked at 23:13~UT and 02:19~UT, respectively, and there were several B-class flares. The background GOES activity was at the B7 level. \pry{The C2-class flare, located near the west limb at ($+822,-220$), produced only a factor three increase in the AIA 193~\AA\ intensity at that location, and so the impact on the scattered light at Venus is negligible.}
The period 21:00~UT to 05:40~UT was selected, corresponding to Venus transiting the corona from 250\arcsec\ above the north-east limb to 200\arcsec\ above the north-west limb. We chose twenty-six AIA 193~\AA\ images spaced at 20-min intervals for the scattered light analysis, supplemented by six additional images at intervals of 5 or 10~min during periods when the intensity in the Venus shadow changed rapidly.
For each of the AIA images, the average intensity at the center of the Venus shadow, $D_\mathrm{V}$, and the average intensity for an annulus surrounding the shadow, $D_\mathrm{ann}$, were extracted. The IDL routine \textsf{aia\_get\_venus} was written for this purpose, and is available in the \textsf{2022\_venus} repository. For each of the image frames, the routine displays a close-up of the Venus shadow and the user manually selects the center of the shadow. A box of $33\times 33$ pixels (20$^{\prime\prime}$\ $\times$ 20$^{\prime\prime}$) centered on this location is extracted and averaged to yield $D_\mathrm{V}$. The level-1 AIA files have had the detector background removed and so there is no need to correct for this.
\pry{We use the standard deviation of the intensities of the $33\times 33$ pixel block, $\sigma_\mathrm{V}$, as the uncertainty for the Venus intensity measurement.}
The annulus surrounding the shadow has an inner radius of 30$^{\prime\prime}$\ and an outer radius of 50$^{\prime\prime}$. The inner radius is set by the size of the Venus shadow, while the outer radius is set with a view to applying the same method to EIS, which generally has small fields of view. The effects of choosing different radii are considered in Appendix~\ref{app.annulus}, where it is found that the differences would be mostly a few percent, or up to 26\%\ in the worst case. The results from \textsf{aia\_get\_venus} are stored in an IDL save file available in the \textsf{2022\_venus} repository.
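For concreteness, the box and annulus measurement can be sketched as follows. This is an illustrative Python re-implementation, not the IDL routine \textsf{aia\_get\_venus} itself; the default radii assume the AIA plate scale of 0.6$^{\prime\prime}$ per pixel, so that 30$^{\prime\prime}$\ and 50$^{\prime\prime}$\ correspond to 50 and approximately 83 pixels.

```python
import numpy as np

def box_and_annulus_means(img, xc, yc, half=16, r_in=50.0, r_out=83.3):
    """Mean intensity D_V of a (2*half+1)-pixel square box centered on
    pixel (xc, yc), its standard deviation sigma_V (taken as the
    uncertainty on D_V), and the mean intensity D_ann of a surrounding
    annulus of radii r_in..r_out pixels.  Defaults mimic the text:
    a 33x33 pixel (20"x20") box and a 30"-50" annulus on 0.6"/pixel
    AIA images."""
    box = img[yc - half:yc + half + 1, xc - half:xc + half + 1]
    yy, xx = np.indices(img.shape)
    r = np.hypot(xx - xc, yy - yc)
    ann = img[(r >= r_in) & (r <= r_out)]
    return box.mean(), box.std(), ann.mean()
```

Applied frame by frame with (xc, yc) set to the manually selected shadow center, a routine of this form yields the $(D_\mathrm{ann}, D_\mathrm{V})$ pairs used in the analysis.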
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{plot_aia_venus_ints.png}
\caption{Panel (a) plots the Venus intensity, $D_{\rm V}$ as a function of the solar-$x$ position of Venus. \textit{Circles} indicate positions above the limb, and \textit{crosses} positions on the disk. \textit{Filled circles} and \textit{diagonal} crosses are additional points that were not part of the 20-min cadence image series (see main text). The \textit{cyan line} is the Venus intensity after deconvolving the images. Panel (b) plots $D_{\rm V}$ against the annular intensity, $D_{\rm ann}$. Symbols are the same as for panel (a). The \textit{cyan line} is a straight line fit to only the crossed data points.}
\label{fig.aia-int}
\end{figure}
Figure~\ref{fig.aia-int}a plots $D_{\rm V}$ against solar-$x$ position. The six additional images mentioned above are plotted as filled circles or diagonal crosses. A distinctive horned profile is seen with the two peaks corresponding to the passage of Venus in front of the solar limb. The passage over the west limb occurred at $y=470$\arcsec, about 200\arcsec\ lower than at the east limb, and the coronal emission was brighter here, explaining the larger peak at the west limb.
Also shown in Figure~\ref{fig.aia-int}a is the Venus intensity after the AIA images have been deconvolved with an estimate of the instrument point spread function (PSF). We followed the procedure of \citet{aia-psf}, using the IDL routine \textsf{aia\_psf\_calc} to create the PSF function, and then the routine \textsf{aia\_deconvolve\_richardsonlucy} to deconvolve the images. As noted by \citet{2020SoPh..295....6S}, there is a residual signal in the Venus shadow. Here we find that this signal is approximately constant during the Venus transit so the deconvolution is effective in removing the local scattered light.
Figure~\ref{fig.aia-int}b plots $D_{\rm V}$ against $D_{\rm ann}$, which demonstrates a tight correlation between the AIA intensity immediately adjacent to the Venus shadow and the scattered light within the shadow. A straight line is fit to the points from inside the limb (indicated by crosses in Figure~\ref{fig.aia-int}). The off-limb points were omitted because the $D_{\rm V}$--$D_{\rm ann}$ relation is intended to be applied to on-disk locations, and there is a suggestion from Figure~\ref{fig.aia-int}b that the off-limb points follow a different pattern from the on-disk points. The slope of the linear fit is $0.1063\pm 0.0077$ and $D_{\rm V}=11.5\pm 1.1$~DN~s$^{-1}$~pix$^{-1}$ for $D_{\rm ann}=0$~DN~s$^{-1}$~pix$^{-1}$.
The interpretation of the $D_{\rm V}$--$D_{\rm ann}$ relation is that the scattered light within the Venus shadow scales linearly with the local emission, but with a constant background due to the long-range scattered light from the full solar disk. This background level is \pry{11.5}~DN~s$^{-1}$~pix$^{-1}$, derived from the $D_{\rm ann}=0$ value of the linear fit. The average $D_{\rm V}$ value from the PSF-deconvolved images for the on-disk data points (\textit{cyan line} in Figure~\ref{fig.aia-int}a) is 14.5~DN~s$^{-1}$~pix$^{-1}$, giving confidence that the long-range scattered light component can be estimated from the linear fit to the $D_{\rm V}$--$D_{\rm ann}$ relation.
We can express the long-range scattered light at Venus as a fraction, $f$, of the full disk 193~\AA\ intensity, $D_{\rm fd}$, by first noting that the latter is relatively stable over time-scales of around a day. For example, extracting a 5-min cadence 193~\AA\ light curve for 2012 June 4 (the day prior to the transit) following the procedure described in Sect.~7.7.4 of \citet{sdo_guide} shows a standard deviation of only 4.4\%. From full disk images at 21:04~UT on June 5 and 05:58~UT on June 6, we obtain average intensities of 285 and 293~DN~s$^{-1}$~pix$^{-1}$ for the spatial region extending to 1.05~R$_\odot$ from Sun center. Taking the average of these two values gives $D_{\rm fd}=289$~DN~s$^{-1}$~pix$^{-1}$. We then have $f=11.5/289=0.0398$.
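The linear fit and the scattered-light fraction reduce to a few lines of code. The sketch below uses synthetic points lying exactly on the fitted relation quoted above, simply to show how the slope, intercept, and $f$ are obtained; the real analysis fits the measured on-disk $(D_{\rm ann}, D_{\rm V})$ pairs.

```python
import numpy as np

# Synthetic on-disk points placed on the fitted relation quoted in
# the text (slope 0.1063, intercept 11.5 DN/s/pix), for illustration.
d_ann = np.array([20.0, 60.0, 100.0, 140.0])
d_v = 0.1063 * d_ann + 11.5

# Straight-line fit D_V = m * D_ann + c
m, c = np.polyfit(d_ann, d_v, 1)

# Long-range scattered light expressed as a fraction of the
# full-disk 193 A intensity
d_fd = 289.0
f = c / d_fd   # approximately 0.0398
```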
We therefore use the linear fit derived from the Venus transit observations to suggest a general formula for the 193~\AA\ scattered light that can be applied to \textit{any} on-disk observation:
\begin{equation}\label{eq.aia}
D_{\rm scatt} = {D_{\rm ann} \over 9.4} + {D_{\rm fd} \over 25.0}
\end{equation}
The factor 9.4 is the reciprocal of the line gradient from Figure~\ref{fig.aia-int}, and the factor 25.0 is the reciprocal of $f$. We will adopt a similar expression for the EIS scattered light in Sect.~\ref{sect.eis-anal}.
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{plot_aia_20161123.png}
\caption{An AIA 193~\AA\ image from 04:00:06~UT on 2016 November 23, shown with a logarithmic intensity scaling. The \textit{cross} denotes the location used as a check on the scattered light formula, the \textit{circles} show the annular region used to estimate the short-range scattered light component.}
\label{fig.aia-reg}
\end{figure}
As an example of the application of the formula, we choose the AIA 193~\AA\ image obtained at 04:00:06~UT on 2016 November 23 (Figure~\ref{fig.aia-reg}). A small coronal hole location at position (+68,+620) has an intensity of 12.3~DN~s$^{-1}$~pix$^{-1}$ (averaged over a 5\arcsec\ $\times$ 5\arcsec\ block); $D_{\rm ann}$ is measured as 12.6~DN~s$^{-1}$~pix$^{-1}$ and $D_{\rm fd}$=122.4~DN~s$^{-1}$~pix$^{-1}$. Thus Equation~\ref{eq.aia} gives the short-range scattered light component as 1.3~DN~s$^{-1}$~pix$^{-1}$ and the long-range component as 4.9~DN~s$^{-1}$~pix$^{-1}$, with the total scattered light component 50\%\ of the measured intensity. For comparison, the deconvolved image yields an average intensity in the same location of 11.8~DN~s$^{-1}$~pix$^{-1}$, implying a short-range scattered light component of 0.5~DN~s$^{-1}$~pix$^{-1}$ (since our interpretation of the data in Figure~\ref{fig.aia-int} is that the deconvolution only removes the short-range scattered light).
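The worked example above can be checked by evaluating Equation~\ref{eq.aia} directly (a Python sketch; the analysis in the paper used IDL routines):

```python
def aia_scattered_light(d_ann, d_fd):
    """Estimate the AIA 193 A scattered light from Equation 1:
    a short-range term from the local annulus intensity plus a
    long-range term from the full-disk intensity."""
    short = d_ann / 9.4
    long_range = d_fd / 25.0
    return short, long_range

# Coronal-hole example from the text: D_ann = 12.6, D_fd = 122.4,
# measured intensity 12.3 DN/s/pix
short, long_range = aia_scattered_light(12.6, 122.4)
total = short + long_range   # about 50% of the measured 12.3
```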
\section{EIS observations of the transit}\label{sect.eis}
\pry{EIS is one of three science instruments on board the \textit{Hinode}\ spacecraft, which was launched in 2006. It is an imaging slit spectrometer that offers a choice of four slits with different widths. Two slits have narrow widths that translate to angular widths of 1$^{\prime\prime}$\ and 2$^{\prime\prime}$\ on the Sun and are used for emission line spectroscopy. Two wide slits (or ``slots") have widths of 40$^{\prime\prime}$\ and 266$^{\prime\prime}$\ and result in images appearing on the detector at the location of each emission line. For strong lines, the 40$^{\prime\prime}$\ slit is narrow enough to yield relatively clean images not affected by overlap with neighboring lines. Further details are given in \citet{2022arXiv220314161Y}. EIS obtains spectra in the two wavelength bands 170--212~\AA\ and 246--292~\AA, referred to as short-wavelength (SW) and long-wavelength (LW), respectively. The spatial resolution is 3--4$^{\prime\prime}$\ \citep{2022arXiv220314161Y}, and spatial coverage along the slit direction is 512$^{\prime\prime}$. A scanning mechanism enables images up to 800$^{\prime\prime}$\ wide to be built up through rastering.
}
\textit{Hinode}\ observations of the Venus transit were organized through \href{http://www.isas.jaxa.jp/home/solar/hinode_op/hop.php?hop=0209}{\textit{Hinode}\ Operation Plan No.~209}, led by T.~Shimizu and A.~Sterling, and the EIS Chief Observer was K.~Aoki.
The \textit{Hinode}\ pointing system is not suitable for tracking Venus during the transit, so a set of six fixed pointings was performed, with Venus drifting through the fields of view of the three \textit{Hinode}\ instruments. The pointing changes were performed at intervals of 98.5~min, corresponding to the orbital period of \textit{Hinode}. The observations \pry{occurred} during the eclipse season, so the available observing time per orbit for EIS was about 65~min.
\begin{deluxetable}{clcccc}[t]
\tablecaption{EIS observations for the Venus transit.\label{tbl.eis-slot}}
\tablehead{
Pointing No. & Study name & Start time & End time & Position & No. rasters}
\startdata
1 & SI\_Venus\_slot\_v1 & 21:05 & 21:07 & [-940,+559] & 1 \\
& SI\_Venus\_slit & 21:10 & 21:52 & [-1003,+559] & 8 \\
2 & SI\_Venus\_slot\_v1 & 22:48 & 23:33 & [-568,+559] & 20 \\
3 & SI\_Venus\_slot\_v1 & 00:07 & 01:14 & [-212,+559] & 30 \\
4 & SI\_Venus\_slot\_v1 & 01:45 & 02:52 & [+179,+559] & 30 \\
5 & SI\_Venus\_slot\_v2 & 03:23 & 03:43 & [+498,+559] & 6 \\
& SI\_Venus\_slot\_v2 & 03:50 & 04:10 & [+598,+559] & 6 \\
6 & SI\_Venus\_slot\_v2 & 05:01 & 05:21 & [+895,+560] & 6 \\
& SI\_Venus\_slit\_v2 & 05:30 & 05:49 & [+1019,+560] & 10 \\
\enddata
\end{deluxetable}
Table~\ref{tbl.eis-slot} lists the sequence of
EIS observations obtained for the Venus transit, which began at 21:05~UT and completed by 05:49~UT, and thus fall within the period studied with AIA in the previous section. The six \textit{Hinode}\ pointings are indicated, along with the EIS study name, observation start and end time, the position of the raster centers, and the number of raster repeats. The four studies were designed by Dr.~S.~Imada, and they used either the 40\arcsec\ slot or the 2\arcsec\ slit (indicated in the titles of the studies). Only data from the slot studies are considered here.
The \textsf{SI\_Venus\_slot\_v1} study is a 6-step
raster with 20~s exposure times and a raster duration of
2~min and 10~s. The \textsf{SI\_Venus\_slot\_v2} study is a 2-step raster with
100~s exposure times and a raster duration of 3~min and
22~s. \pry{Both studies download the same 10 wavelength windows from the two EIS channels, one of which yields \ion{Fe}{xii} \lam195.12. The other windows are discussed in Section~\ref{sect.ext}.}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{plot_eis_image.png}
\caption{Panel (a): an EIS \lam195.12 image derived from a raster obtained between 00:39 and 00:41~UT, shown with a logarithmic intensity scaling. The shadow of Venus is visible at approximate position [$-170,+550$]. Panel (b): the exposure from the third raster step as it appears on the EIS detector, with the $y$-range reduced to show the Venus shadow; the vertical \textit{blue line} marks the background column used in Section~\ref{sect.eis-anal}.}
\label{fig.eis-img}
\end{figure}
Figure~\ref{fig.eis-img}a shows a raster image from the \textsf{SI\_Venus\_slot\_v1} study beginning at 00:39~UT. It consists of six vertical strips of height 512\arcsec\ assembled from the six adjacent steps of the 40\arcsec\ slot, and is built up from right to left. Figure~\ref{fig.eis-img}b shows the exposure from the third raster step as it appears on the EIS detector, with the $y$-range reduced to show the Venus shadow. Note that the image is reversed in the $x$-direction compared to Figure~\ref{fig.eis-img}a. The detector window used for \ion{Fe}{xii} \lam195.12 is 48 pixels wide, with 1 pixel corresponding to 1\arcsec.
\citet{2022arXiv220314161Y} performed a study of EIS slot data and found that the slot has a projected width on the detector of 41~pixels. Due to the line spread function of the instrument, the edges of the slot are blurred resulting in the slot image extending over 46~pixels, close to the width of the \lam195.12 wavelength window used for the transit observations.
\citet{2022arXiv220314161Y} found that intensities measured from the slot images in \ion{Fe}{xii} \lam195.12 are 14\%\ higher than those measured with the 1$^{\prime\prime}$\ slit. They also highlighted the importance of subtracting a background level from the slot image to obtain accurate intensities in quiet Sun and coronal hole regions. For EIS studies with a \lam195.12 wavelength window of 48 pixels in width they recommended using the leftmost and rightmost data columns in the window to represent the background.
\section{EIS data analysis}\label{sect.eis-anal}
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{plot_eis_venus_ints_195.png}
\caption{Panel (a) shows the \ion{Fe}{xii} \lam195.12 Venus intensity, $I_{\rm V}$, measured from EIS as a function of the solar-$x$ position of Venus. \textit{Circles} indicate positions above the limb, \textit{crosses} positions on the disk, and \textit{triangles} positions on the disk where a complete annular intensity, $I_{\rm ann}$, could not be measured. Panel (b) plots $I_{\rm V}$ vs.\ $I_{\rm ann}$. The \textit{solid line} is the fit to the points inside the limb, and the \textit{dashed line} is the fit from AIA data (Figure~\ref{fig.aia-int}), normalized to cross the EIS line at $I_{\rm ann}=0$.}
\label{fig.eis-int}
\end{figure}
In this section \ion{Fe}{xii} \lam195.12 intensities are measured from the EIS Venus data in order to derive a scattered light formula similar to that for AIA (Equation~\ref{eq.aia}). We first note that users have a choice in selecting an absolute radiometric calibration for their EIS data. The \href{http://solarb.mssl.ucl.ac.uk:8080/eiswiki/Wiki.jsp?page=EISCalibration}{EIS Wiki} recommends using either the \citet{2013A&A...555A..47D} or the \citet{2014ApJS..213...11W} calibrations, which give similar results. Both are updates to the original laboratory calibration of \citet{2006ApOpt..45.8689L}, which is the default option within the EIS software. For the present work, all intensities were computed with the \citet{2014ApJS..213...11W} calibration, and then reduced by 14\%\ as mentioned in the previous section.
Figure~\ref{fig.eis-int}a shows the Venus intensities, $I_{\rm V}$, derived from \ion{Fe}{xii} \lam195.12 during the transit, and it can be compared with the AIA plot shown earlier (Figure~\ref{fig.aia-int}a).
$I_{\rm V}$ was derived with the following procedure which is implemented in the IDL routine \textsf{eis\_venus\_select}, available in the \textsf{papers/2022\_venus} repository. An image is constructed from the individual slot exposures, such as the one shown in Figure~\ref{fig.eis-img}a which is constructed from six exposures. By manually selecting the center of Venus in this image, the code identifies the exposure that contains most of the Venus shadow. A close-up image of the shadow in this exposure is then displayed to allow the center of Venus to be manually selected. If the center is too close to the edge of the slot image, then the raster is rejected. Otherwise, a block of $21\times 21$ pixels centered on the selected Venus position (see Figure~\ref{fig.eis-img}b) is extracted and averaged to yield an intensity, $I$.
\pry{An uncertainty, $\sigma$, is obtained from the standard deviation of the intensities in the $21\times 21$ pixel block.}
From the same exposure, the section of the leftmost pixel column that corresponds to the $y$-positions of the $21\times 21$ Venus block (the vertical blue line in Figure~\ref{fig.eis-img}b) was extracted and averaged to yield $I_{\rm bg}$.
\pry{The standard deviation of the intensities of the 21 pixels yields an uncertainty $\sigma_\mathrm{bg}$.}
We then have $I_\mathrm{V}=I-I_{\rm bg}$\pry{, and $\sigma_\mathrm{V}^2=\sigma^2+\sigma_\mathrm{bg}^2$}. This procedure was performed for all 99 slot rasters (Table~\ref{tbl.eis-slot}), and 45 datasets were rejected. Most of the latter were because the center of Venus was too close to the edge of the slot (as noted above). Additional datasets were rejected because of missing exposures or because the raster was begun during orbital twilight when the EUV spectrum is partially absorbed by the Earth's atmosphere. The positions of Venus for the selected datasets are shown on Figure~\ref{fig.aia-image} as blue crosses. The track is different from that of AIA due to SDO having a geosynchronous orbit while \textit{Hinode}\ has a polar, Sun-synchronous, low-Earth orbit. The angle subtended by the two spacecraft at Venus can be different by up to around 200$^{\prime\prime}$. The results from \textsf{eis\_venus\_select} were output to the text file \textsf{results\_195.txt}, which is available in the \textsf{papers/2022\_venus} repository.
Figure~\ref{fig.eis-int}b plots $I_\mathrm{V}$ against the annulus intensity, $I_{\rm ann}$, analogous to Figure~\ref{fig.aia-int}b for the AIA data. $I_{\rm ann}$ is computed as part of the \textsf{eis\_venus\_select} procedure through a call to the routine \textsf{eis\_get\_annulus\_int}. The latter calls \textsf{eis\_slot\_map} to create an IDL map with the \textsf{/bg48} keyword set, which removes the background intensity using the \citet{2022arXiv220314161Y} prescription for 48-pixel windows. The pixels between two circles of radius 30\arcsec\ and 50\arcsec, centered on the user-selected Venus position, are identified and averaged to then yield $I_{\rm ann}$. For many of the rasters, the annulus extends beyond the slot raster image, either because the Venus shadow is close to the raster edge, or because of the narrow width of the \textsf{SI\_Venus\_slot\_v2} rasters. The software computes the number of pixels entering into the annulus intensity calculation and prints the ratio ($R_\mathrm{ann}$) relative to the maximum number of pixels (5027) to the results file. Where $R_\mathrm{ann}<0.75$, the points in Figure~\ref{fig.eis-int} are plotted as triangles. All of the points obtained above the limb have $R_\mathrm{ann} <0.75$, and they are indicated with circles in Figure~\ref{fig.eis-int}.
Figure~\ref{fig.eis-int}a shows significant differences from the equivalent AIA plot (Figure~\ref{fig.aia-int}a). In particular, EIS did not observe Venus transiting the limb, with all observed Venus locations being at least 100$^{\prime\prime}$\ from the limb. The peak seen at $x=-500$ in Figure~\ref{fig.eis-int}a arises from Venus passing close to the bright active region loops on the east side of the active region complex and a plume-like structure at around $y=600$$^{\prime\prime}$\ (Figure~\ref{fig.aia-image}). A similar peak is not seen for AIA as Venus tracked further to the north compared to the EIS data. The plot of $I_\mathrm{V}$ vs.\ $I_\mathrm{ann}$ shows a larger spread of values compared to the equivalent AIA plot (Figure~\ref{fig.aia-int}b). However, the points for which $R_\mathrm{ann}\ge 0.75$ do show a clear linear trend, giving confidence that the linear relation found for AIA also applies to EIS. A linear fit to these points is over-plotted on Figure~\ref{fig.eis-int}b as a \textit{solid line}. \pry{The gradient of this line is $0.151\pm 0.006$. Extrapolating the fit to $I_{\rm ann}=0$ gives a Venus intensity of $I_{\mathrm{V}_0}=11.7\pm 0.9$~erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$}. This is the long-range component of the scattered light within the Venus shadow.
Also shown in Figure~\ref{fig.eis-int}b is the linear fit from the AIA data, scaled to intersect the EIS linear fit at $I_{\rm ann}=0$. The EIS fit has a steeper gradient, suggesting that the local scattered light is a larger factor for EIS, which may be due to the different optical configurations of the instruments.
In analogy with the AIA analysis, we can write the EIS scattered light contribution during the Venus transit as a combination of the short-range component and the long-range component:
\begin{equation}\label{eq.eis1}
I_{\rm V} = { I_{\rm ann} \over \alpha } + {I_{\rm fd}\over \beta}.
\end{equation}
$\alpha$ is simply the inverse of the gradient of the solid line in Figure~\ref{fig.eis-int}b, and thus $\alpha=6.6$. To determine $\beta$ we need an estimate of the full-disk \lam195.12 intensity during the transit. Full-disk measurements are not available from EIS, but it is possible to make use of the AIA 193~\AA\ full-disk intensity to obtain an estimate. The procedure is as follows.
Three slot rasters beginning at 23:16, 00:30 and 02:05~UT during the transit were selected. For each, a full-disk AIA 193~\AA\ synoptic image was downloaded, with a time close to the EIS observation. (Since AIA interleaved partial frame images with full-disk images during the transit, the nearest-in-time AIA image was not necessarily a full-disk image.) Sub-images from the AIA images were extracted to match the EIS raster fields-of-view, and they were manually co-aligned by matching bright points in the images. Co-spatial blocks of size 150\arcsec\ $\times$ 150\arcsec\ to the north of Venus were extracted and the AIA and EIS intensities were averaged over these blocks to give intensities $D_\mathrm{block}$ and $I_\mathrm{block}$, respectively. The EIS intensities were derived following the procedure for the annulus intensity described earlier. The AIA images were used to obtain the full-disk intensities, $D_{\rm fd}$, averaged out to 1.05~R$_\odot$. The EIS full-disk \lam195.12 intensity \pry{[$I_\mathrm{fd}$]} is then approximated by $D_{\rm fd}I_\mathrm{block}/D_\mathrm{block}$, \pry{and $\beta=I_\mathrm{fd}/I_{\mathrm{V}_0}$}. The values of these parameters for the three datasets are given in Table~\ref{tbl.beta}. All of these numbers are generated with the IDL routine \textsf{eis\_full\_disk\_scale}, available in the \textsf{papers/2022\_venus} repository. The average value of $\beta$ is \pry{34}, and this is used in the following section.
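The $\beta$ estimate can be reproduced from the measurements in Table~\ref{tbl.beta} with a short calculation (Python for illustration; the paper's \textsf{eis\_full\_disk\_scale} routine is IDL):

```python
# Estimating beta = I_fd / I_V0 with AIA as the full-disk proxy.
# Numbers are the (D_block, D_fd, I_block) measurements from Table 2;
# I_V0 = 11.7 erg/cm2/s/sr comes from the Venus linear fit.
i_v0 = 11.7
datasets = [(222.0, 284.0, 312.0),   # 23:16 UT
            (85.0, 287.0, 119.0),    # 00:31 UT
            (78.0, 291.0, 101.0)]    # 02:06 UT

betas = []
for d_block, d_fd, i_block in datasets:
    i_fd = d_fd * i_block / d_block  # inferred full-disk 195.12 intensity
    betas.append(i_fd / i_v0)

beta = sum(betas) / len(betas)       # averages to approximately 34
```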
Appendix~\ref{app.hop130} compares the $I_\mathrm{fd}$ value derived using this method with the true value obtained from a full-disk EIS scan performed on 2012 May 30, six days prior to the transit. The true $I_\mathrm{fd}$ value was found to be 13\%\ lower than the derived value, and demonstrates that our method of inferring the full-disk \lam195.12 intensity is reasonably accurate.
\begin{deluxetable}{llllll}
\tablecaption{AIA ($D$) and EIS ($I$) intensity measurements used to derive the parameter $\beta$.\label{tbl.beta}}
\tablehead{\colhead{Time} &
\colhead{$D_\mathrm{block}$\tablenotemark{a}} &
\colhead{$D_{\rm fd}$\tablenotemark{a}} &
\colhead{$I_\mathrm{block}$\tablenotemark{b}} &
\colhead{$I_{\rm fd}$\tablenotemark{b}} &
\colhead{$\beta$}
}
\startdata
23:16 & 222 & 284 & 312 & 399 & 34.1 \\
00:31 & 85 & 287 & 119 & 403 & 34.4 \\
02:06 & 78 & 291 & 101 & 376 & 32.1 \\
\enddata
\tablenotetext{a}{Units: DN~s$^{-1}$~pix$^{-1}$.}
\tablenotetext{b}{Units: erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$.}
\end{deluxetable}
\section{Prescription for estimating scattered light in EIS data}\label{sect.pres}
The previous section gave the expression for the \ion{Fe}{xii} \lam195.12 scattered light intensity within the Venus shadow during the transit in terms of short-range and long-range components. The former is proportional to the local annulus intensity, $I_\mathrm{ann}$, and the latter is proportional to the full-disk \lam195.12 intensity, $I_\mathrm{fd}$. We now assume this expression applies to any on-disk EIS observation. If an intensity $I$ is measured from an EIS raster observation, either with the narrow slits or the slots, there is a component due to scattered light that is given by
\begin{equation}\label{eq.eis2}
I_{\rm scatt} = { I_{\rm ann} \over 6.6 } + {I_{\rm fd}\over 34},
\end{equation}
where $I_{\rm ann}$ and $I_{\rm fd}$ are measured co-temporally with $I$. The former can usually be measured directly from the same EIS raster if the field-of-view is large enough. The latter must be derived from an AIA 193~\AA\ full-disk image, by cross-calibrating the intensity in a region within the EIS raster to the same region in the AIA image, as described in the previous section.
Here we summarize the procedure to estimate the scattered light component in an EIS raster.
\begin{enumerate}
\item Measure $I_{\rm ann}$ for the location of interest from an EIS map. The IDL routine \textsf{eis\_annulus\_int} in the \textsf{aia-eis-venus} repository is provided for this purpose.
\item Derive $I_{\rm fd}$ by comparing a region observed by EIS with that observed in the AIA 193~\AA\ channel. The IDL routine \textsf{eis\_aia\_int\_compare} in the \textsf{aia-eis-venus} repository is provided for this purpose, and generally a quiet Sun region with fairly uniform emission should be selected.
\item Apply Equation~\ref{eq.eis2} to determine $I_{\rm scatt}$, and compare it with the intensity measured at the location of interest.
\end{enumerate}
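The three steps above can be condensed into a single function (a hedged Python sketch of Equation~\ref{eq.eis2}; the supporting routines \textsf{eis\_annulus\_int} and \textsf{eis\_aia\_int\_compare} are IDL):

```python
def eis_scattered_light(i_ann, i_fd):
    """Scattered-light estimate for Fe XII 195.12 from the EIS
    formula: a short-range term from the local annulus intensity
    (i_ann) and a long-range term from the full-disk intensity
    (i_fd) inferred from AIA 193 A. Units: erg/cm2/s/sr."""
    return i_ann / 6.6 + i_fd / 34.0
```

For example, with the coronal hole measurements used later in the text ($I_{\rm ann}=7.5$, $I_{\rm fd}=187$), the estimate is about 6.6~erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$, exceeding the measured intensity of 5.7.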
How accurate will the scattered light estimates be?
Some sources of uncertainty can be quantified. \pry{For example, the linear fit to the Venus intensities (Figure~\ref{fig.eis-int}b) yields uncertainties of 4\%\ and 8\%\ for the short- and long-range scattered light intensities.}
The AIA 193~\AA\ full-disk intensity may not be an accurate proxy of the \lam195.12 full-disk intensity, which would lead to uncertainties in both the parameter $\beta$ (Equation~\ref{eq.eis1}) and $I_{\rm fd}$ (Equation~\ref{eq.eis2}). Appendix~\ref{app.hop130} performs a check on the AIA--EIS calibration method using a full-disk \lam195.12 measurement from 2012 May 30, and finds that the method under-estimates the \lam195.12 intensity by 13\%. This discrepancy may vary with solar conditions as the spectral content of the AIA 193~\AA\ channel varies with changing solar activity (e.g., greater or lesser contributions from species cooler or hotter than \ion{Fe}{xii} to the channel). This variation is likely to be small, however, given that the entire corona emits strongly in \ion{Fe}{xii}. The following section demonstrates that for two coronal hole rasters the scattered light formula predicts an intensity larger than the measured intensity. The worst case suggests an uncertainty of at least 16\%. Overall, we suggest the scattered light estimate is accurate to around \pry{25}\%.
\pry{One scenario where the formula will underestimate the scattered light is when there is a bright active region close to the point of interest, but outside the 50$^{\prime\prime}$\ radius of the annulus. This is explored in Appendix~\ref{app.ar-model}, where it is found that a bright active region enhances the short-range scattered light by around 50\%\ if it is located at 100$^{\prime\prime}$\ from the point of interest. Features that may be impacted are the low-intensity patches at the edges of active regions that demonstrate outflowing plasma \citep{2007Sci...318.1585S}. The case considered in Appendix~\ref{app.ar-model} was deliberately chosen to be an extreme example, given the brightness of the active region. Applying a deconvolution algorithm to an AIA 193~\AA\ co-temporal with the EIS observation of interest, such as done in Appendix~\ref{app.ar-model}, would give an indication of the effect of the AR on the EIS data.
}
If the EIS field-of-view is too small to enable the annulus intensity to be measured, or the field-of-view is compromised by missing data, then the suggested solution is to use AIA 193~\AA\ as a proxy. \pry{The EIS location within the AIA image should be identified and the annulus intensity from the co-temporal AIA 193~\AA\ image obtained with the IDL routine \textsf{aia\_annulus\_int}}. The EIS/AIA quiet Sun calibration factor from Step (2) can then be used to convert the AIA annulus intensity to an EIS intensity. For coronal holes this will likely be an over-estimate of the EIS intensity as the 193~\AA\ channel has contamination from cooler species such as \ion{Fe}{vii} and \ion{Fe}{viii} that is more pronounced in coronal holes.
If the raster does not include a suitable patch of quiet Sun for Step (2) then another raster can be used. The stability of the AIA 193~\AA\ full-disk emission with time means that an observation within $\pm$~1~day of the raster of interest should give good results.
AIA has given almost continuous, high-cadence, full-disk coverage of the Sun since 2010~May. Prior to this time an EIT 195~\AA\ image can be substituted in order to provide the calibration necessary to yield the EIS full-disk intensity. The routine \textsf{eis\_aia\_int\_compare} automatically searches for an EIT image if an AIA image is not available.
One caveat of the prescription is that it does not account for the local scattered light coming from sources inside the inner radius of the annulus. For this reason, we recommend that our prescription only be applied if the intensity within the inner boundary is relatively uniform. Otherwise the estimated scattered light will only be a lower limit.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{plot_20161123_regions.png}
\caption{A raster image from EIS obtained between 03:08 and 05:15~UT on 2016 November 23 and showing the intensity of \ion{Fe}{xii} \lam195.12 on a logarithmic scale. The dark region to the top-right is part of a coronal hole, and the \textit{circles} indicate the annular region used for obtaining the local scattered light component. The \textit{white box} indicates the quiet Sun region used for normalizing the \lam195.12 intensity with the AIA 193~\AA\ channel.}
\label{fig.reg}
\end{figure}
As an example of applying the formula, we consider an EIS raster that began at 03:08~UT on 2016 November 23. This is a narrow slit raster obtained with the study DHB\_006\_v2, and the raster image from \ion{Fe}{xii} \lam195.12 is shown in Figure~\ref{fig.reg}. The north-west corner of the raster contains a section of a coronal hole. Due to the low signal in this region, $2\times 8$ spatial binning was applied to the data. The \lam195.12 intensity is 5.7~erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$\ at position $(+163,+532)$ within the coronal hole, indicated with a \textit{blue cross} on Figure~\ref{fig.reg}. The annulus intensity at this location (shown in Figure~\ref{fig.reg}) is measured as 7.5~erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$, giving a short range scattered light component of 1.1~erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$. To obtain the long-range scattered light component we first selected a quiet Sun region of 60$^{\prime\prime}$\ $\times$ 60$^{\prime\prime}$\ centered at $(-185,+415)$---also shown in Figure~\ref{fig.reg}---and calibrated it against an AIA 193~\AA\ image at 04:56~UT to yield a full-disk \lam195.12 intensity estimate of 187~erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$. From Equation~\ref{eq.eis2}, the long-range component is then \pry{5.5}~erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$. The estimated scattered light intensity is thus larger than the measured intensity in the coronal hole by \pry{16}\%, and so we infer that the \lam195.12 intensity is entirely due to scattered light at this position.
\section{Scattered light in coronal holes}\label{sect.ch}
Using the browse products at the \href{https://eismapper.pyoung.org}{\textit{EIS Mapper}} website \citep{young_peter_r_2022_6574455}, a number of coronal hole observations between 2010 and 2018 were identified and processed to estimate the percentage of scattered light at coronal hole locations. In each case the rasters had sufficient spatial coverage to allow the annulus intensity to be measured. The EIS-AIA calibration was performed by selecting quiet Sun regions of fairly uniform intensity in the same rasters, with the exceptions of the 2010 and 2013 datasets for which it was necessary to use another slit raster obtained on the same day. Details are given in the supplementary material provided in the \textsf{GitHub} \href{https://github.com/pryoung/papers/tree/main/2022_venus}{\textsf{pryoung/papers/2022\_venus}} repository.
\begin{deluxetable}{cclcDDc>{\bfseries}D>{\bfseries}D>{\bfseries}c}[t]
\tablecaption{Intensities for seven EIS coronal hole observations.\label{tbl.ch}}
\tablehead{
\colhead{Date} &
\colhead{Time} &
\colhead{Study} &
\colhead{Position} &
\twocolhead{$I_{\rm ch}$} &
\twocolhead{$I_{\rm ann}$} &
\colhead{$I_{\rm fd}$} &
\twocolhead{Short} &
\twocolhead{Long} &
\colhead{\%}
}
\decimals
\startdata
17-May-2010 & 12:19 & \textsf{Atlas\_60} & $(-98,-476)$ & 8.3 & 9.7 & 188
& 1.5 & 5.5 & 84\%\\
12-Jan-2011 & 00:09 & \textsf{YKK\_EqCHab\_01W} & $(+119,+312)$ & 12.7 & 14.6 & 312 & 2.2 & 9.2 & 90\%\\
31-Jan-2013 & 06:12 & \textsf{YKK\_ARabund01} & $(+52,+47)$ & 26.3 & 49.5 & 391 & 7.5 & 11.5 & 72\%\\
20-Jun-2015 & 11:18 & \textsf{GDZ\_PLUME1\_2\_300\_50s} & $(-140,-240)$ & 27.1 & 31.7 & 354 & 4.8 & 10.4 & 56\% \\
23-Nov-2016 & 03:08 & \textsf{DHB\_006\_v2} & $(+163,+532)$ & 5.7 & 7.5 & 192 & 1.1 & 5.5 & 116\% \\
13-Jun-2017 & 04:55 & \textsf{DHB\_007} & $(-138,-124)$ & 7.8 & 10.6 & 198 &
1.6 & 5.8 & 95\%\\
18-Aug-2018 & 23:21 & \textsf{HPW021\_VEL\_240x512v2} & $(-20,-445)$ &
6.8 & 18.5 & 208 & 2.8 & 4.6 & 109\% \\
\enddata
\end{deluxetable}
Table~\ref{tbl.ch} gives the measured intensities for each dataset, and the short and long-range scattered light components estimated from Equation~\ref{eq.eis2}. The final column gives the percentage contribution of scattered light to the measured coronal hole intensity ($I_\mathrm{ch}$). \pry{It can be seen that the scattered light component is dominant for all seven datasets, and makes a contribution of 90\%\ or more for four datasets.}
The long-range scattered light component is the most important in all cases. Thus, even for a coronal hole dataset in the heart of a large coronal hole where there are no nearby bright emission sources, there will always be a significant scattered light component to \ion{Fe}{xii} \lam195.12.
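The scattered light estimates in Table~\ref{tbl.ch} can be reproduced with a short script. Since Equation~\ref{eq.eis2} is not reproduced in this section, the coefficients below ($\alpha \approx 0.151$ for the short-range term and $\beta \approx 34$ for the long-range term) are values inferred from the tabulated entries, not quoted from the text, and should be treated as illustrative only.

```python
def eis_scattered_light(i_ann, i_fd, alpha=0.151, beta=34.0):
    """Sketch of the Equation eq.eis2 prescription for Fe XII 195.12.

    i_ann : annulus intensity I_ann (erg cm^-2 s^-1 sr^-1)
    i_fd  : full-disk intensity I_fd
    alpha, beta : coefficients inferred from Table tbl.ch (assumed values)
    """
    short = alpha * i_ann       # short-range component
    long_range = i_fd / beta    # long-range component
    return short, long_range

# 17-May-2010 dataset from the table: I_ann = 9.7, I_fd = 188, I_ch = 8.3
short, long_range = eis_scattered_light(9.7, 188.0)
percent = 100.0 * (short + long_range) / 8.3  # scattered fraction of I_ch
```

Applied to the first row of the table, this recovers the listed short component (1.5), long component (5.5), and 84\% scattered light contribution.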
\section{Extension to Other EIS Wavelengths}\label{sect.ext}
\pry{
The prescription described in Section~\ref{sect.pres} applies specifically to \ion{Fe}{xii} \lam195.12, which is found in the EIS SW channel. A similar formula to Equation~\ref{eq.eis2} would be expected to apply to other lines but the $\alpha$ and $\beta$ (Equation~\ref{eq.eis1}) parameters may be different. A particular concern is whether there is a wavelength dependence that may be significant for lines in the EIS LW channel, as the \ion{S}{x} and \ion{Si}{x} ions used for FIP bias measurements (Section~\ref{sec:intro}) have lines in the 258--265~\AA\ region of the EIS LW channel.
}
\pry{
Formulae for other lines can potentially be derived from the EIS Venus observations. Nine wavelength windows were used in addition to the one for \ion{Fe}{xii} \lam195.12. These were centered on the following lines: \ion{Fe}{xi} \lam180.40, \ion{O}{vi} \lam184.12, \ion{O}{v} \lam192.90, \ion{He}{ii} \lam256.32, \ion{Fe}{xvi} \lam262.98, \ion{Mg}{vi} \lam269.00, \ion{Fe}{xiv} \lam274.20, \ion{Si}{vii} \lam275.35 and \ion{O}{iv} \lam279.93. The \ion{O}{iv} and \ion{Mg}{vi} lines are too weak to be useful, while \ion{O}{v} and \ion{O}{vi} are affected by blends with strong nearby lines.
}
\pry{
\ion{Fe}{xvi} has $\log\,(T_\mathrm{f}/\mathrm{K})= 6.45$, which means that it has negligible emission from quiet Sun and coronal holes. Thus any scattered light measured in these regions can only come from nearby active regions and there can be no long-range component comparable to that for \lam195.12.
The remaining lines are those of \ion{He}{ii}, \ion{Fe}{xi} and \ion{Fe}{xiv}. Of these, the \ion{Fe}{xiv} line is of the most interest as its wavelength is furthest from 195.12~\AA\ and offers the opportunity of checking if the scattering formula shows some dependence on wavelength.
}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{plot_eis_venus_ints_274.png}
\caption{Plots of the Venus \ion{Fe}{xiv} \lam274.20 intensity during the eclipse. An explanation of the symbols is given in Figure~\ref{fig.eis-int}. The blue dashed line on Panel (b) gives the linear fit from \ion{Fe}{xii} \lam195.12, scaled to intersect the \lam274.20 fit at $I_\mathrm{ann}=0$.}
\label{fig.274}
\end{figure}
\pry{
A reduced set of rasters compared to \lam195.12 was processed using the \textsf{eis\_venus\_select} routine and the results are presented in Figure~\ref{fig.274}, which can be compared to Figure~\ref{fig.eis-int} for \lam195.12. Immediately apparent are the large uncertainties for $I_\mathrm{V}$. The EIS effective area is about a factor of five lower at 274.20~\AA\ compared to 195.12~\AA\ \citep{2014ApJS..213...11W}, and the \ion{Fe}{xiv} line generally has a lower intensity in quiet Sun and active region conditions \citep{2008ApJS..176..511B}. The linear fit to the data points gives a gradient of $0.169 \pm 0.037$ and $I_{\mathrm{V}_0}= 1.90\pm 2.77$. The latter is poorly constrained due to the very low signal near the coronal hole.
}
\pry{
The gradient of the linear fit is close to that found for \lam195.12 (shown graphically in Figure~\ref{fig.274}), which suggests that the behavior of the short-range scattered light does not vary much as a function of wavelength. No statement about the wavelength dependence of the long-range scattered light, which is partly determined by $I_{\mathrm{V}_0}$, can be made due to the large uncertainties. For reference, however, we note that the AIA 211~\AA\ channel can be used to estimate the \ion{Fe}{xiv} full-disk intensity as it is usually dominated by a strong \ion{Fe}{xiv} line at 211.32~\AA\ \citep{2010A&A...521A..21O}. The \lam274.20/\lam211.32 ratio is insensitive to plasma conditions, with a ratio around 0.5. Performing a scaling using the Venus transit data similar to that discussed for \lam195.12 in Section~\ref{sect.eis-anal} gives a full-disk \lam274.20 intensity of 196~erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$.
A comparison of an AIA 211~\AA\ image and an EIS full-disk scan (Appendix~\ref{app.hop130}) shows that the method of estimating the full-disk \lam274.20 intensity using the AIA 211~\AA\ channel over-estimates the true \lam274.20 intensity by only 14\%. If we assume the value of $\beta$ derived for \lam195.12 also applies to \lam274.20, then the long-range scattered light expected for \lam274.20 is $196/34= 5.8$~erg~cm$^{-2}$~s$^{-1}$~sr$^{-1}$. This is outside the $1\sigma$ uncertainty for $I_{\mathrm{V}_0}$ but, given the uncertainties in our method described in Section~\ref{sect.pres}, we do not find evidence for a significantly different scattered light formula for \lam274.20.
}
\pry{
Another approach to investigating the dependence of scattered light on wavelength is to consider the \citet{aia-psf} prescription for scattered light in the AIA instrument. We consider an artificial AIA image that is zero everywhere, except for an annulus of inner radius 30$^{\prime\prime}$\ and outer radius 50$^{\prime\prime}$\ that has a uniform intensity of one. We then convolve this with the PSF functions for the 193 and 304~\AA\ channels, noting that the mesh pitch is very similar for the two channels: 70.4 lines/inch and 70.2 lines/inch for 193~\AA\ and 304~\AA, respectively. Figure~\ref{fig.193-304} shows intensity cuts through the convolved images. Averaging the intensities over 5$^{\prime\prime}$\ radius circles at the center of the annulus gives a 304~\AA\ intensity that is 10\%\ higher than that for 193~\AA. If we assume similar behavior for EIS, for which there is a single mesh for all wavelengths, then we may expect the scattered light arising from the mesh to be up to 10\%\ larger for the lines in the EIS LW channel compared to \lam195.12. This is within the 25\%\ uncertainty that we quoted for the \lam195.12 scattering formula. The mesh scattered light does not explain the long-range scattered light component, which may show a different behavior with wavelength, but this cannot be explored with the current data.
}
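The synthetic-annulus test described above can be mimicked with a short script. The PSF used below is a simple stand-in (a narrow Gaussian core plus faint, broad Gaussian wings), not the actual \citet{aia-psf} mesh PSF, so it reproduces only the qualitative behavior: light is scattered into the dark center of the annulus, and the amount grows with the strength of the PSF wings.

```python
import numpy as np

def center_intensity(wing_frac, scale=0.6, r_in=30.0, r_out=50.0, n=201):
    """Intensity scattered to the center of a unit-intensity annulus.

    Equals the value of the convolved image at the annulus center, i.e.
    the sum of the (normalized, symmetric) PSF over the annulus pixels.
    The PSF is a stand-in: a 1" Gaussian core plus a wing_frac-weighted
    30" Gaussian wing, on a grid with `scale` arcsec per pixel.
    """
    ax = (np.arange(n) - n // 2) * scale
    x, y = np.meshgrid(ax, ax)
    rsq = x**2 + y**2
    psf = np.exp(-rsq / 2.0) + wing_frac * np.exp(-rsq / (2.0 * 30.0**2))
    psf /= psf.sum()
    annulus = (np.sqrt(rsq) >= r_in) & (np.sqrt(rsq) <= r_out)
    return psf[annulus].sum()

center = center_intensity(0.01)      # weak wings
center_2x = center_intensity(0.02)   # stronger wings -> more scatter
```

With a normalized PSF and unit annulus intensity, the center value is necessarily between 0 and 1, and doubling the wing weight increases it, illustrating the mechanism behind the 193 versus 304~\AA\ comparison.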
\begin{figure}[t]
\centering
\includegraphics[width=0.6\textwidth]{plot_annulus_193_304_comparison.png}
\caption{Radial slices through synthetic AIA 193~\AA\ and 304~\AA\ images derived by convolving annulus intensity distributions with the PSFs of \citet{aia-psf}.}
\label{fig.193-304}
\end{figure}
\pry{
In summary, although we expect some wavelength variation to the degree of scattered light for different wavelengths observed by EIS, our investigation of the \ion{Fe}{xiv} \lam274.20 line and the 193~\AA\ and 304~\AA\ filters of AIA suggests that this variation is within the uncertainties for our formula derived from \ion{Fe}{xii} \lam195.12. The formula can thus be applied to other EIS lines, with the following exceptions. Firstly, there is a requirement that the full-disk intensity of an emission line can be estimated using images from AIA, or another full-disk imaging instrument. The EUV AIA 94~\AA, 131~\AA, 171~\AA, 193~\AA, 211~\AA, 304~\AA\ and 335~\AA\ channels are dominated by \ion{Fe}{x}, \ion{Fe}{viii}, \ion{Fe}{ix}, \ion{Fe}{xii}, \ion{Fe}{xiv}, \ion{He}{ii} and \ion{Fe}{xvi}, respectively, in non-flaring conditions \citep{2010A&A...521A..21O}. So only full-disk intensities from emission lines of these ions, or ions formed at the same temperatures as these ions, can be estimated. A second exception is for lines with $\log\,(T_\mathrm{f}/\mathrm{K})\ge 6.3$, for which the long-range scattered light component will be inaccurate due to very low emission from the quiet Sun.
}
\section{Summary and Discussion}\label{sect.summary}
An empirical formula for estimating the scattered light in an EIS raster for the \ion{Fe}{xii} 195.12~\AA\ emission line has been presented (Equation~\ref{eq.eis2}), based on data obtained during the Venus transit of 2012 June 5--6. \pry{Evidence from EIS \ion{Fe}{xiv} \lam274.20 and a consideration of the AIA 193~\AA\ and 304~\AA\ channels (Section~\ref{sect.ext}) suggests that the \lam195.12 scattering formula can be applied to other emission lines in the EIS wavelength ranges.} A prescription is provided (Section~\ref{sect.pres}) to enable users to assess the effect of scattered light for any EIS observation that makes use of full-disk images obtained by either AIA or EIT.
The intended application of the formula is an estimate of scattered light in on-disk coronal hole observations, and the results of Table~\ref{tbl.ch} show that scattered light dominates for low \ion{Fe}{xii} intensities in coronal holes. If the coronal hole intensity is brighter, such as for the 2013 and 2015 examples, then the true coronal hole intensity can be significant. Bright structures within coronal holes such as plumes will be less affected, as should the coronal hole boundary regions that have intensities intermediate between quiet Sun and coronal hole.
\pry{Based on our conclusion that the scattered light formula should apply to other lines in the EIS wavebands, the coronal hole results for \ion{Fe}{xii} extend to the \ion{S}{x} and \ion{Si}{x} ions, which are formed at the same temperature.}
\citet{2011ApJ...727L..13B} used the \ion{S}{x} and \ion{Si}{x} lines to measure the FIP bias in eight polar coronal hole observations finding, on average, no FIP bias. The time period of the datasets was not given, but was likely to be 2007--2008. \citet{2013ApJ...778...69B} studied an active region within a low-latitude coronal hole observed in 2007 October and also found no evidence of a FIP bias in the coronal hole. The results from Table~\ref{tbl.ch} suggest that the coronal hole intensities of \ion{S}{x} and \ion{Si}{x} largely come from long-range scattered light. This component would be expected to show the average FIP bias of the entire solar disk. The true coronal hole intensity (if any) and the short-range scattered light component will have the actual coronal hole FIP bias. \citet{2015NatCo...6.5947B} derived a FIP bias map of the entire solar disk from an observation in 2013 January. Values ranged from 1.5 to 2.5 over most of the disk, with the lower values generally corresponding to lower intensity regions. Five low intensity regions that are probably coronal holes are seen on the Sun in the intensity maps. These regions do not show a FIP bias of 1, and they can barely be distinguished from quiet Sun in the FIP bias map. We consider this to be consistent with our conclusion that the coronal hole intensities have a significant component of scattered light. The earlier measurements of coronal hole FIP biases close to one may be due to the global corona having a lower FIP bias during the solar minimum period of 2007--2008, although this is speculation.
The 2015 observation from Table~\ref{tbl.ch} is an example of a relatively high intensity within the coronal hole, with the long-range scattered light only contributing 33\%\ to the measured intensity. A low-latitude coronal hole was observed and it is not clear if such holes generally have higher intensities or if coronal holes are generally brighter around solar maximum. The 2013 coronal hole also had a high intensity and was another low-latitude coronal hole observation.
An additional consequence of the scattered light contribution to \ion{Fe}{xii} \lam195.12 in coronal holes is that Doppler shifts may not reflect the true (if any) Doppler shifts in the coronal hole. \citet{2010ApJ...709L..88T} presented a Doppler map in \ion{Fe}{xii} \lam195.12 of the north coronal hole and blueshifts of around 20~km~s$^{-1}$\ are clearly seen around the coronal hole boundary. A follow-up paper \citep{2011ApJ...736..130T} clarified that these blueshifts are mostly due to quiet Sun plumes along the line-of-sight to the coronal hole. Our independent check of the darkest parts of the coronal hole, close to the limb, shows no significant blueshifts, and this can be seen in the authors' Figure~1 around coordinate (130,300). Our interpretation is that these dark areas are dominated by scattered light from the full disk, and so show no blueshift. \citet{2014ApJ...794..109F} measured Doppler shifts in plumes and compared them with nearby coronal hole and quiet Sun regions. For \ion{Fe}{xii} they found no significant difference between quiet Sun and coronal hole velocities. This result also supports our suggestion that coronal holes are dominated by scattered light and so \ion{Fe}{xii} cannot be used to measure an outflow velocity (if it exists) in coronal holes. \citet{2018ApJ...856...28W} also noted that centroid maps in \ion{Fe}{xii} \lam195.12 for two low-latitude coronal holes do not show evidence for Doppler shifts, consistent with their suggestion that scattered light is important.
Finally, we highlight that, as part of the present work, an empirical formula for scattered light in the AIA 193~\AA\ channel was also derived (Equation~\ref{eq.aia}) and this may be useful for scientists interested in assessing the effect of long-range scattered light in their data.
\begin{acknowledgements}
The authors acknowledge support from the GSFC Internal Scientist Funding Model competitive work package program ``Connecting the corona to solar wind structure and magnetospheric impact using modeling and remote and in situ observations". P.~Young also acknowledges funding from the \textit{Hinode}\ project. The authors thank I.~Ugarte-Urra for providing measurements from HOP~130 and for giving valuable comments on an early version of the manuscript. The anonymous referee is also thanked for insightful comments that improved the article.
\end{acknowledgements}
\facilities{Hinode (EIS), SDO (AIA)}
\section{Introduction}
\label{sec:intro}
The attractions of integrated atom-light interfaces lie in the control and enhancement of optical fields offered by photonics combined with the versatility of atomic systems.
With the help of such interfaces a range of new possibilities in atomic and optical physics can be explored, which include a pathway to implement scalable quantum networks \cite{Kimble:Nature:2008} and the ability to engineer light-matter interactions in quantum many-body physics \cite{Goban+:NatComm5:2014, Pichler+:PRA91:2015}.
The photonic part of the interface is commonly used to create a strong optical confinement, as is achieved for sub-wavelength fibres \cite{Balykin+:PRA70:2004,Vetsch+:PRL104:2010} or photonic crystals \cite{Goban+:NatComm5:2014}.
While this approach allows for the generation of strong coupling between light and atoms, it also requires a dedicated and often expensive production process, which stands against the requirements of a cost-effective and scalable interface and begs the question whether there might not be a simpler approach.
This is why we investigate the possibility to create a scalable light-matter interface using the evanescent field of waveguides which have been written into a dielectric medium by means of femtosecond laser pulses.
This process is fast and cost effective and allows for the creation of a variety of geometries \cite{SzameitNolte:JPB43:2010}.
Laser written waveguides have been used to study and experimentally verify numerous physical phenomena with classical light, which often have analogies with coherent quantum effects found in atomic or condensed matter physics \cite{Lederer+:PhysRep463:2008, Longhi:LPR3:2009, SzameitNolte:JPB43:2010}.
Arrays of these waveguides are usually written into glass such that the optical modes of neighboring waveguides are coupled via bulk evanescent fields, thereby realizing tight-binding models with tunable hoppings \cite{SzameitNolte:JPB43:2010}.
Laser written waveguides also present an ideal platform to explore optical nonlinearities, leading for example, to the observation of various two-dimensional soliton solutions \cite{Szameit:PRL:2007, Lederer+:PhysRep463:2008}.
In addition to this, recent experiments include demonstration of pseudomagnetic fields and photonic Landau levels \cite{Rechtsman+:NatPhot7:2012}, photonic Floquet topological insulators \cite{Rechtsman+:Nature496:2013}, and generation of high-order photonic entangled W-states \cite{Graefe+:NatPhot8:2014}.
In most experiments with laser written waveguides, the evanescent field leaking outside of the bulk medium has not been exploited.
One exception is an optofluidic sensor in microchannels etched inside silica glass structure \cite{Maselli+:OE17:2009}.
On the other hand, proposals and experiments to use evanescent fields in order to achieve coherent light-atom coupling are already well established in other photonic systems.
In particular, evanescent fields have been used for laser trapping and optically interfacing atoms around dielectric nanofibers \cite{Balykin+:PRA70:2004, Vetsch+:PRL104:2010, LeKien+:PRA70:2004, Lacroute+:NJP14:2012, Goban+:PRL109:2012}.
More recently, strong single atom and photon interactions have been achieved in evanescent field of microtoroidal resonators \cite{Alton+:NatPhys7:2010}, and in the near field of nano-photonic crystals \cite{ Goban+:NatComm5:2014, Tiecke+:Nature508:2014}.
In this paper we analyze evanescent fields of laser written waveguides, and address the possibility of their application in designing a novel light-matter interface, which is schematically depicted in Figure \ref{fig:profile}(a).
The advantages of our proposed scheme include (a) the benefit from established and developed techniques to manufacture silica glass chips with written waveguides,
(b) the robustness and simplicity of an on-chip light-matter interface, and (c) the scalability of the interface, since individual elements can be combined via waveguides or optical fibers.
An integral part of our novel light interface is the ability to position and transport atoms across the chip.
This is why we show here that, for a set of realistic experimental parameters, trapping of atoms can be implemented at distances very close to the surface of a chip with laser written waveguides.
For this we examine the idea of two-color laser trapping and show that a stable trap in two dimensions can be achieved by choosing the geometry of the waveguides and wavelengths of the light such that there are two guided modes for the blue detuned light, while the red detuned light operates on a single guided mode.
In addition to this, we optimize parameters of the structure in order to maximize the trap depth and minimize trap losses for a given input power of light.
\begin{figure}
\includegraphics[width=0.4\textwidth]{Fig1.pdf}
\includegraphics[width=0.45\textwidth]{fundamental_mode_orig.pdf}
\caption{(a) Illustration of laser written and polished waveguide at the surface.
Refractive indices are indicated: maximum $n_1+\Delta n$ for the waveguide,
$n_1$ for unmodified fused silica in the bulk, and $n_0$ for the vacuum.
(b) Characteristic field profile of the fundamental guiding mode for a laser written surface waveguide,
centered at $x=0$:
we note the exponential decay into the air, along $y$ axis (see text for details).}
\label{fig:profile}
\end{figure}
The paper is organized as follows: in Section \ref{sec:ev_modes} we introduce the geometry of the waveguide structure written close to the surface of the bulk medium, discuss the eigenmodes of the system \cite{Jukic+:SPIE9379:2015}, and present a simple, analytic model for the evanescent field.
We then construct the total atomic potential for a specific example of Cesium atoms, which consists of the optical potential for the blue and red detuned laser frequencies and an attractive surface potential.
We first demonstrate that trapping is not to be expected when working in a single-mode regime for both colors due to losses to the surface at the sides of the trap.
We then show that a small admixture of a higher blue mode resolves this problem.
Therefore, we restrict ourselves to geometries supporting two blue modes and a single red waveguide mode.
Further, we choose the geometry (within experimental limitations) so that we minimize the effective mode area at the surface (for the blue light), essentially maximizing the evanescent field intensity for a given propagating power.
In Section \ref{sec:optim} we study the trap depth and losses as a function of several parameters: distance from the surface, total power of blue and red light, and laser detunings.
In particular, for a given trap depth and total power, we find the detunings for which the trap losses are minimized.
In Section \ref{sec:concl} we summarize our discussion.
\section{Evanescent field of exposed waveguides}
\label{sec:ev_modes}
We start by describing the structure of a typical laser written waveguide manufactured in the group of Alexander Szameit at the University of Jena \cite{SzameitNolte:JPB43:2010}.
A dielectric medium is exposed to femtosecond Ti:Sapphire laser pulses, which creates a permanent refractive index change inside the bulk of the glass.
The shape of the resulting waveguide is generally elliptic in two dimensions, and the dielectric profile of the bulk can be described with a super-Gaussian function \cite{Rechtsman+:NatPhot7:2012},
\begin{equation}
n(x,y) = n_1 + \Delta n \exp \left[- \left( \frac{x^2}{r_\mathrm{x}^2} + \frac{(y - d)^2}{r_\mathrm{y}^2} \right)^3 \right].
\label{eq:supergauss}
\end{equation}
Here, $n_1$ is refractive index of unchanged medium, and $\Delta n$ is the relative change in the refractive index.
The ellipse is characterized by two radii $r_\mathrm{x}$ and $r_\mathrm{y}$ and we make no assumption on which is the major and minor radius.
We choose coordinates such that the boundary of the bulk material (containing the waveguide) and the vacuum with refractive index $n_0 =1$ is at $y=0$.
In our setup, waveguides are written in the bulk of fused silica, and then polished to remove the top layer in order to expose the waveguide to the surface.
We assume that the semi-axes are aligned with the axes of the coordinate system.
An illustration of this laser written and polished waveguide at the surface is plotted in Figure \ref{fig:profile}(a).
In the process of polishing the waveguide is also partially removed; in order to model the resulting geometry we introduce a parameter $d$, which indicates the distance of the semi-axis along $x$ from the polished surface at $y=0$.
Typical length scales for the radii $r_\mathrm{x}$ and $r_\mathrm{y}$ are several microns.
In what follows, we will specify the dielectric medium to be fused silica (SiO$_2$) and that the value of index change is $\Delta n \approx 0.005$, which corresponds to maximum change presently obtained in experiments.
For the range of frequencies we will explore, i.e., close to D1 and D2 resonances of Cesium (with wavelengths $894.59 \ \mbox{nm}$ and $852.12 \ \mbox{nm}$ \cite{Steck:Cs:2010}), we set $n_1 = 1.453$ throughout.
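As a concrete illustration, the profile of Eq. (\ref{eq:supergauss}) can be evaluated directly; the sketch below uses the parameter values quoted above ($n_1 = 1.453$, $\Delta n = 0.005$, and illustrative radii of a few microns), with all lengths in microns.

```python
import math

def refractive_index(x, y, n1=1.453, dn=0.005, rx=5.0, ry=4.0, d=0.0):
    # Super-Gaussian index profile of Eq. (eq:supergauss); lengths in um.
    arg = (x**2 / rx**2 + (y - d)**2 / ry**2) ** 3
    return n1 + dn * math.exp(-arg)

n_core = refractive_index(0.0, 0.0)     # waveguide center: n1 + dn
n_bulk = refractive_index(20.0, -20.0)  # far from the waveguide: n1
```

The sixth-power exponent makes the profile nearly flat-topped inside the ellipse and very sharp at its edge, which is why the index far from the waveguide returns to the unmodified value $n_1$ essentially exactly.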
For the numerical calculation of the waveguide eigenmodes we use a vectorial finite difference method as developed in \cite{Fallahkhair+:JLT26:2008}.
In Figure \ref{fig:profile}(b), we show a characteristic profile of the evanescent part of the fundamental HE mode (for the particular example presented, parameters used are $r_\mathrm{x}=5$ \textmu m, $r_\mathrm{y} = 4$ \textmu m, and the light is blue detuned $10 \ \mbox{nm}$ from the D2 resonance).
For a large set of parameters, the dominant field component of the fundamental waveguide solution in the evanescent region can simply be modeled as,
\begin{equation}
E_a^{(0)}(x,y) \approx A_a^{(0)} \ \rme^{-y/y_a} \rme^{-x^2/x_a^2},
\label{eq:field_approx}
\end{equation}
where $a = b,r$ refers to the ``color'' of the detuning, and $x_\mathrm{b,r}$ are the characteristic widths for the blue and red detuned waveguide modes.
The decay lengths $y_\mathrm{b,r}$ are given by
\begin{equation}
y_a = \frac{\lambda_a}{2\pi} \left( n^2_a - n_0^2 \right)^{-\frac{1}{2}},
\label{eq:decaylength}
\end{equation}
where $a = b,r$ and $n_a$ is the effective refractive index obtained from the numerical solution of the given geometry for respective blue or red detuned wavelength.
The characteristic widths $x_{b,r}$ are correlated to the radius $r_\mathrm{x}$ of the waveguide.
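For orientation, Eq. (\ref{eq:decaylength}) yields decay lengths on the order of 100 nm for the parameters considered here. In the sketch below the effective index is an assumed, illustrative value close to $n_1 = 1.453$, not a computed mode solution, and the wavelengths correspond to detunings of 10 nm from the Cs D2 and D1 lines.

```python
import math

def decay_length(lam_um, n_eff, n0=1.0):
    # 1/e evanescent decay length of Eq. (eq:decaylength); lam in um.
    return lam_um / (2.0 * math.pi) / math.sqrt(n_eff**2 - n0**2)

y_b = decay_length(0.84212, 1.454)  # blue: 10 nm below the Cs D2 line
y_r = decay_length(0.90459, 1.454)  # red: 10 nm above the Cs D1 line
```

Both decay lengths come out near 130 nm, with the red light reaching slightly further into the vacuum, which is the origin of the ratio $\tilde{y}_\mathrm{r/b} > 1$ used below.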
We will use our simple model of the evanescent field to derive general statements about the requirements on the widths $x_\mathrm{b}$ and $x_\mathrm{r}$ (and therefore on the radius $r_x$), as well as the decay lengths $y_\mathrm{b,r}$, to form a stable trap.
However, all final results are based on the full numerical calculation of the guided modes.
Varying the parameter $d$ has a similar effect on the evanescent field as changing the radius $r_\mathrm{y}$.
This is why we set $d=0$ from now on for concreteness, meaning that exactly half of the waveguide is cut.
Recently, we have presented a more detailed discussion of fundamental modes in surface laser written waveguides, together with the resulting evanescent part reaching into the vacuum \cite{Jukic+:SPIE9379:2015}.
The waveguide propagating along $z$ axis can have two quasi-degenerate hybrid solutions for the fundamental guiding mode.
The first solution (we label it as HE) has electric field with dominant polarization along $x$ axis, and the second (EH) solution along $y$ axis.
However, in the evanescent part, the second solution has a significant longitudinal $z$ component of the
electric field which is comparable to the $y$ component.
This suggests that the EH mode has in general elliptic polarization in evanescent region, and carries spin angular momentum orthogonal to the propagation direction \cite{Aiello+:PRL103:2009,DennisGoette:JOpt15:2013,Bliokh+:NatComm5:2014}.
In order to avoid possible non-scalar light shift contributions, in what follows we will assume only HE solutions (quasi-$x$ polarized) for both blue and red detuned light.
\section{Trapping potential}
The basic idea behind trapping of atoms in the evanescent field of a waveguide is to use two different lasers which are blue and red detuned from the electronic transition resonances \cite{Balykin+:PRA70:2004}.
The blue (red) detuned color leads to a repulsive (attractive) optical potential which is proportional to the intensity of light.
For these forces the total scalar contribution to the light shift is,
\begin{equation}
V_\mathrm{light}(x,y)=-\frac{1}{4} \alpha_\mathrm{b} \left| \mathbf{E}_b(x,y) \right|^2 -\frac{1}{4} \alpha_\mathrm{r} \left| \mathbf{E}_r (x,y) \right|^2 ,
\end{equation}
where $\alpha_\mathrm{b,r}$ are the frequency dependent real polarizabilities of the Cs ground state for the blue and red detuned laser fields $\mathbf{E}_{b,r}$ \cite{Grimm+:AdvAMOPhys42:2000, LeKien+:EPJD67:2013}.
We assume throughout the paper that the two lasers are blue detuned from the D2 resonance of Cs,
and red detuned from the D1 resonance.
However, in addition to the optical potential we have to include the effect of attractive surface forces due to van der Waals and Casimir-Polder interactions.
For a Cs atomic trap at distances of a few $100 \ \mbox{nm}$ above the interface, the surface potential can be approximated by the heuristic form,
\begin{equation}
V_\mathrm{surf} (y) = - \frac{C_3 C_4}{(C_3 y + C_4)y^3},
\end{equation}
with $C_3(6S_{1/2})/h=1.16$ \mbox{kHz} \textmu $\mbox{m}^3$, and $C_4(6S_{1/2})/h=0.15$ \mbox{kHz} \textmu $\mbox{m}^4$ \cite{Lacroute+:NJP14:2012, Stern+:NJP13:2011}.
The total potential in the $x,y$ plane is therefore%
\begin{equation}
V(x,y) = V_\mathrm{light}(x,y) + V_\mathrm{surf}(y).
\end{equation}
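To give a sense of scale, the surface term can be evaluated directly. The sketch below (in units of $h\times$kHz, with $y$ in microns) uses the heuristic form that interpolates between the van der Waals limit $-C_3/y^3$ at short range and the Casimir-Polder limit $-C_4/y^4$ at long range, with the $C_3$ and $C_4$ values quoted above.

```python
def v_surf_khz(y, c3=1.16, c4=0.15):
    # Surface potential for ground-state Cs in units of h*kHz; y in um.
    # Limits: -c3/y**3 for small y (van der Waals) and -c4/y**4 for
    # large y (Casimir-Polder).
    return -c3 * c4 / ((c3 * y + c4) * y**3)

v_100nm = v_surf_khz(0.1)  # roughly -650 h*kHz at 100 nm
v_200nm = v_surf_khz(0.2)
```

The steep $y^{-3}$ to $y^{-4}$ fall-off is what makes the attractive surface force relevant only for the innermost part of the trapping region.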
If we neglect the surface potential for the moment and make use of the Gaussian approximation of Eq. (\ref{eq:field_approx}), we can write the optical potential as
\begin{equation}
\fl V_\mathrm{light} (x,y) \approx - \frac{\alpha_\mathrm{r}}{4} \left( A_\mathrm{r}^{(0)} \right)^2 \ \rme^{-2 y/y_\mathrm{r}} \rme^{-2 x^2/x_\mathrm{r}^2} - \frac{\alpha_\mathrm{b}}{4} \left( A_\mathrm{b}^{(0)} \right)^2 \ \rme^{-2 y/y_\mathrm{b}} \rme^{-2 x^2/x_\mathrm{b}^2},
\end{equation}
which, on introducing scaled, dimensionless variables $\xi = x/x_\mathrm{r}$ and $\upsilon = y / y_\mathrm{r}$, can be fully characterised by the ratio of the amplitudes $\tilde{A}_\mathrm{b/r} = A_\mathrm{b}^{(0)}/A_\mathrm{r}^{(0)}$, the ratio of the polarizabilities $\tilde{\alpha}_\mathrm{b/r} = \alpha_b / \alpha_r$, the ratio of the decay lengths $\tilde{y}_\mathrm{r/b} = y_\mathrm{r} / y_\mathrm{b}$, and the ratio of the Gaussian widths $\tilde{x}_\mathrm{r/b} = x_\mathrm{r} / x_\mathrm{b}$.
The resulting form of the optical potential is then given by
\begin{equation}
\label{eq:potential_scaled}
\fl V_\mathrm{light} (\xi,\upsilon) \approx - \frac{\alpha_r}{4} \left( A_\mathrm{r}^{(0)} \right)^2 \ \rme^{-2 \upsilon} \rme^{-2 \xi^2} \left[ 1 + \tilde{\alpha}_\mathrm{b/r}\left( \tilde{A}_\mathrm{b/r} \right)^2 \rme^{-2 \upsilon ( \tilde{y}_\mathrm{r/b} - 1 )} \rme^{-2 \xi^2 ( \tilde{x}^2_\mathrm{r/b} - 1 )} \right].
\end{equation}
The shape of the trapping potential is determined by the expression in the brackets.
This becomes most obvious when first considering the conditions for a global trapping minimum at $\xi = 0$.
Determining the position $\upsilon_0$ of an extremum of Eq. (\ref{eq:potential_scaled}) along $\xi=0$ gives rise to the following condition:
\begin{equation}
\left[ 1 + \tilde{y}_\mathrm{r/b} \tilde{\alpha}_\mathrm{b/r}\left( \tilde{A}_\mathrm{b/r} \right)^2 \rme^{-2 \upsilon_0 ( \tilde{y}_\mathrm{r/b} - 1 )} \right] = 0.
\end{equation}
Demanding that this extremum is a minimum along both $\xi$ and $\upsilon$ and not a saddle point leads to the two additional conditions
\begin{eqnarray}
\left[ 1 + \tilde{y}^2_\mathrm{r/b} \tilde{\alpha}_\mathrm{b/r}\left( \tilde{A}_\mathrm{b/r} \right)^2 \rme^{-2 \upsilon_0 ( \tilde{y}_\mathrm{r/b} - 1 )} \right] & < & 0, \\
\left[ 1 + \tilde{x}^2_\mathrm{r/b} \tilde{\alpha}_\mathrm{b/r}\left( \tilde{A}_\mathrm{b/r} \right)^2 \rme^{-2 \upsilon_0 ( \tilde{y}_\mathrm{r/b} - 1 )} \right] & > & 0.
\end{eqnarray}
This in turn sets a condition on the ratios of the decay lengths and Gaussian widths, namely $\tilde{y}_\mathrm{r/b} > 1$ and $\tilde{y}_\mathrm{r/b} > \tilde{x}^2_\mathrm{r/b}$, to fulfill both equations simultaneously.
The properties of this trap are then determined by the ratios of the decay lengths $\tilde{y}_\mathrm{r/b}$ and the Gaussian widths $\tilde{x}_\mathrm{r/b}$.
As the effective refractive index $n_\mathrm{eff}$ does not change drastically for different wavelengths, according to Eq. (\ref{eq:decaylength}) the former is approximately given by the ratio of the wavelengths $\tilde{y}_\mathrm{r/b} \approx \lambda_\mathrm{r} / \lambda_\mathrm{b}$, which therefore satisfies $\tilde{y}_\mathrm{r/b} > 1$.
The condition for a stable minimum is therefore directly given by
\begin{equation}
\label{eq:condition2d}
\tilde{y}_\mathrm{r/b} > \tilde{x}^2_\mathrm{r/b}.
\end{equation}
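The stationarity and curvature conditions above can be verified numerically. The script below evaluates the scaled potential of Eq. (\ref{eq:potential_scaled}) with the prefactor $\alpha_\mathrm{r} (A_\mathrm{r}^{(0)})^2/4$ set to one; the ratios used are illustrative values chosen to satisfy or violate Eq. (\ref{eq:condition2d}), not fitted mode solutions.

```python
import numpy as np

def v_opt(xi, ups, a2, y_ratio, x_ratio):
    # Scaled optical potential of Eq. (eq:potential_scaled), prefactor
    # set to one; a2 stands for alpha_tilde * A_tilde^2 (negative,
    # since the blue-detuned polarizability is negative).
    return -np.exp(-2 * ups - 2 * xi**2) * (
        1 + a2 * np.exp(-2 * ups * (y_ratio - 1)
                        - 2 * xi**2 * (x_ratio**2 - 1)))

yr, a2 = 1.05, -2.0                  # illustrative decay-length ratio
ups = np.linspace(0.0, 60.0, 200001)
u0 = ups[v_opt(0.0, ups, a2, yr, 1.0).argmin()]  # analytically 10*ln(2.1)

def curv_xi(x_ratio, eps=1e-3):
    # Finite-difference curvature along xi at the on-axis extremum.
    return (v_opt(eps, u0, a2, yr, x_ratio)
            - 2 * v_opt(0.0, u0, a2, yr, x_ratio)
            + v_opt(-eps, u0, a2, yr, x_ratio)) / eps**2

c_stable = curv_xi(1.0)  # x_ratio^2 = 1.00 < yr: genuine minimum
c_saddle = curv_xi(1.1)  # x_ratio^2 = 1.21 > yr: saddle point
```

The on-axis extremum position agrees with solving the stationarity condition analytically, and the sign of the transverse curvature flips exactly when $\tilde{x}^2_\mathrm{r/b}$ crosses $\tilde{y}_\mathrm{r/b}$, in line with Eq. (\ref{eq:condition2d}).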
The dependence of the Gaussian widths on the geometry of the waveguide as defined by the radii $r_x$ and $r_y$ is more complicated.
However, we are aiming to create a trap close to the surface ($\sim 100 \ \mbox{nm}$ at most) with waveguide modes having a small effective area.
This is why we are concentrating on geometries for which the waveguide size defined by $r_x$ is a few microns and commensurate with the wavelength.
In this parameter range the ratio of the resulting Gaussian widths also scales approximately with the ratio of the wavelengths $\tilde{x}_\mathrm{r/b} \approx \lambda_\mathrm{r} / \lambda_\mathrm{b}$.
As a consequence, it is not possible to form a stable trap in this parameter regime.
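A quick numerical check makes this concrete. Assuming both ratios scale with the wavelength ratio, and using the 10 nm detunings from the Cs D2 and D1 lines employed later in the text, the condition of Eq. (\ref{eq:condition2d}) necessarily fails:

```python
lam_b = 852.12 - 10.0   # blue: 10 nm below the Cs D2 line (nm)
lam_r = 894.59 + 10.0   # red: 10 nm above the Cs D1 line (nm)

y_ratio = lam_r / lam_b             # ~ 1.074
x_ratio_sq = (lam_r / lam_b) ** 2   # ~ 1.154
stable = y_ratio > x_ratio_sq       # Eq. (eq:condition2d): False here
```

Indeed, whenever both ratios equal $\lambda_\mathrm{r}/\lambda_\mathrm{b} > 1$, the squared width ratio always exceeds the decay-length ratio, so no choice of detunings alone can rescue the fundamental-mode trap.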
In general, condition Eq. (\ref{eq:condition2d}) can be satisfied in certain limits, for example when the waveguide size becomes large compared to the wavelength.
In this case, the resulting widths $x_\mathrm{b}$ and $x_\mathrm{r}$ of the Gaussians along $x$ are not only given by the wavelengths, but depend also on the radii $r_\mathrm{x}$ and $r_\mathrm{y}$.
However, the resulting optical potential along the $x$ axis is then extremely shallow.
In addition, the presence of attractive surface forces makes the trapping along the $x$ axis even more constrained.
We have numerically verified that, for realistic experimental parameters and working only with the fundamental modes that do obey Eq. (\ref{eq:condition2d}), the resulting trapping potential along $x$ is at least one order of magnitude smaller than the potential along the $y$ direction. This is problematic, as it can therefore also be smaller than the characteristic harmonic oscillator energy of the trap along the $y$ direction.
A typical example of the total atomic potential with a saddle point is presented in Figure \ref{fig:saddle_point}.
Here, the saddle point is located at about $\sim 200\ \mbox{nm}$ from the surface.
For this example the parameters used are $r_\mathrm{x}=5$ \textmu m, $r_\mathrm{y} = 4$ \textmu m, and the blue and red light are detuned $10 \ \mbox{nm}$ from respective resonances with a combined input power of $8$ W.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{potential_2D_PinTotal_8W_orig.pdf}
\caption{Total 2D potential for Cs atoms created by superimposing single-mode blue and red detuned lasers, together with attractive surface force.
Here, the saddle point is at $\sim 200\ \mbox{nm}$ above the surface.
}
\label{fig:saddle_point}
\end{figure}
\subsection{Atomic trap with two blue modes}
In this subsection we show that full 2D atomic confinement can be efficiently created by introducing an upper mode contribution to the blue detuned light.
The general idea is that the two intensity maxima to the left and right of the symmetry axis $x=0$, characteristic for a higher waveguide mode, can create a repulsive potential at the sides of the stationary point that is large enough to turn the saddle point of the atomic potential into a local minimum.
A characteristic field profile of the upper mode is shown in Figure \ref{fig:2nd_mode_and_mode_area_of_low_lamdba_b}(a).
Obviously, for this trapping mechanism to be implemented, it is necessary to choose waveguide geometries that support not only the fundamental, but also a higher (second) mode for the blue detuned light.
To avoid the possibility of coupling into a higher order mode for the red detuned light, we make use of the fact that the red detuned light has a longer wavelength, rendering it possible to design a waveguide supporting a single red and two blue detuned modes.
Under the general premise of creating a working trapping device, we strive to optimize the geometry.
This is even more important now as adding another mode increases the parameter space.
One way to optimize the trapping scheme is to maximize the electric field density of the fundamental (blue) mode at the surface, $| \mathbf{E}_{b,0}^{(0)} |^2$, for a given total propagating power $P_{b}^{(0)}$ of this mode.
For this we define the effective area of the fundamental blue mode at the surface point $(x,y)=(0,0)$ as
\begin{equation}
A_\mathrm{eff}^{(0)}= \frac{P_{b}^{(0)}}{| \mathbf{E}_{b,0}^{(0)} |^2}.
\end{equation}
In other words, we require that the effective area of the fundamental mode is minimized.
In Figure \ref{fig:2nd_mode_and_mode_area_of_low_lamdba_b} we have plotted the effective area of the fundamental mode at the D2 resonance as a function of waveguide geometry.
We have also excluded geometries which do not support two-mode guiding (blue colored region).
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{upper_mode.pdf}
\includegraphics[width=0.45\columnwidth]{comb_new_lamdba_b.pdf}
\caption{(a) Upper waveguide mode of blue detuned light with two intensity maxima that can provide for 2D atomic confinement.
(Waveguide parameters are $r_\mathrm{x}=6$ \textmu m, $r_\mathrm{y} = 3.9$ \textmu m, and the light is blue detuned $10$ nm from D2 resonance).
(b) Effective area at the surface $A_\mathrm{eff}^{(0)}$ for the resonant wavelength $\lambda_{D2}$ of the fundamental guiding mode.
The blue region is omitted from the plot since those geometries do not support the upper waveguide mode, and therefore cannot be used for the design of an efficient 2D atomic trap.
}
\label{fig:2nd_mode_and_mode_area_of_low_lamdba_b}
\end{figure}
From this we notice that the optimization requires the use of geometries with larger $r_\mathrm{x}$ and smaller $r_\mathrm{y}$ in the region of the parameter space where two blue modes are supported.
This would suggest that the trap optimized this way might be fairly large in $x$-direction.
In order to avoid possible experimental difficulties with very elongated waveguides, we limit the geometry to $r_\mathrm{x}=6.0$ \textmu m.
Further, we choose $r_\mathrm{y} = 3.9$ \textmu m, thereby enabling two-mode regime for blue detuned lasers, and single-mode regime for red light.
After settling for a geometry, we construct the blue input light field as a linear superposition of the fundamental and the upper mode:
\begin{equation}
\mathbf{E_b}=\sqrt{\tau} \mathbf{E}_{b}^{(0)} + \rmi \sqrt{1-\tau} \mathbf{E}_{b}^{(1)},
\label{eq:twomode}
\end{equation}
where $\tau$ parametrizes the contributions of the two blue modes.
The phase difference of $\pi/2$ is important, as the higher mode $\mathbf{E}_{b}^{(1)}$ has a $\pi$ phase jump across the symmetry axis in its dominant component.
In this way, the phase difference between the fundamental and the higher mode is the same on both sides of the symmetry axis, albeit with a different sign, which gives rise to a symmetric intensity distribution.
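The effect of the $\pi/2$ phase can be verified with a small sketch (illustrative one-dimensional profiles, not the actual waveguide modes): an even Gaussian stands in for the fundamental mode and an odd function for the higher mode, so the quadrature superposition of Eq. (\ref{eq:twomode}) has a vanishing cross term and an intensity symmetric in $x$, while an in-phase sum would not.

```python
import math

def f0(x):   # stand-in for the fundamental mode profile (even)
    return math.exp(-x**2)

def f1(x):   # stand-in for the higher mode (odd: pi phase jump at x = 0)
    return x * math.exp(-x**2)

def intensity(x, tau, quad=True):
    """|sqrt(tau) f0 + phase*sqrt(1-tau) f1|^2 with phase = i (quadrature,
    as in the two-mode superposition) or phase = 1 (in-phase, for contrast)."""
    a = math.sqrt(tau) * f0(x)
    b = math.sqrt(1.0 - tau) * f1(x)
    return a * a + b * b if quad else (a + b) ** 2

print(abs(intensity(0.7, 0.95) - intensity(-0.7, 0.95)))   # 0: symmetric
print(intensity(0.7, 0.95, quad=False)
      - intensity(-0.7, 0.95, quad=False))                 # nonzero: lopsided
```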
The two blue modes have slightly different propagation constants or effective refractive indices $n_b^{(0)}$ and $n_b^{(1)}$.
This will create a beating of the two modes which changes the phase relation in Eq. (\ref{eq:twomode}) and therefore the shape of the trapping potential.
The beating period, given by $\lambda_b/(n_b^{(0)} - n_b^{(1)})$, is, however, very long, owing to the small difference in the effective refractive indices.
We can give a strict upper bound for the beating length by noticing that the maximum difference in the effective refractive index is given by the contrast $\Delta n$, which would correspond to a period of $200 \lambda_b$, i.e. about $170$ \textmu m.
More realistically, the difference between the two effective refractive indices is about $0.001$, which gives a beating period of $1000 \lambda_b$ or $850$ \textmu m.
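These beating-length estimates follow from a one-line formula; the sketch below reproduces them (the blue wavelength of $0.85$ \textmu m is an assumed round number close to the detuned D2 wavelength):

```python
lam_b = 0.85e-6   # assumed blue wavelength in metres (~Cs D2 minus ~10 nm)

def beating_length(delta_n, lam=lam_b):
    """Beating period lambda_b / (n_b^(0) - n_b^(1)) of the two blue modes."""
    return lam / delta_n

print(beating_length(0.005) * 1e6, "um")   # contrast-limited bound: 170 um
print(beating_length(0.001) * 1e6, "um")   # realistic index difference: 850 um
```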
To generate a full 3D confinement of the atoms it may therefore be necessary to reflect the field in Eq. (\ref{eq:twomode}) to create a standing wave modulated by the beating frequency.
Even a small presence of the upper mode, relative to total input power, can produce 2D confinement.
Therefore, in what follows, we set $\tau=0.95$.
In Figure \ref{fig:two_mode_trap} we present an example of a 2D atomic trap.
It clearly demonstrates that trap stability can be achieved by including the upper blue mode for trapping.
For this example we assume the total power propagating along the waveguide to be $8$ W,
since the laser written waveguides can easily cope with input powers of several Watts without modifying their physical properties.
In general, we can tune depth, width, and position of the trap minimum (and also losses), by changing waveguide geometry, total input power of lasers, contributions of blue and red light fields, and their detunings.
We note that the characteristic length scale of the trap in the $x$-direction is more than one order of magnitude larger than along the $y$ axis, i.e. for the corresponding oscillator frequencies of the trap we have $\omega_x \ll \omega_y$,
so that the ground state energy is $E \approx E_y = \frac{\hbar \omega_y}{2}$.
Therefore, we can focus on a study of the one-dimensional potential along the $x=0$ line illustrated in Figure \ref{fig:two_mode_trap}(b).
In particular, we use the 1D profile to define trap depth relative to both the potential barrier close to the surface and to the zero potential away from the surface.
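The harmonic ground state energy entering this analysis can be extracted numerically from any 1D potential profile. The sketch below uses a Morse-shaped stand-in for the $y$-potential, with depth and length scale chosen to resemble the traps discussed here rather than computed from the mode fields:

```python
import math

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K
M_CS = 2.206e-25         # mass of a Cs atom, kg

def ground_state_energy(V, y_min, m=M_CS, h=1e-10):
    """Harmonic approximation at a 1D minimum: E = hbar/2 * sqrt(V''(y_min)/m),
    with V'' from a central second difference."""
    vpp = (V(y_min + h) - 2.0 * V(y_min) + V(y_min - h)) / h**2
    return 0.5 * HBAR * math.sqrt(vpp / m)

# Morse-shaped stand-in for the y-potential: depth ~10 uK, scale ~100 nm,
# minimum at 300 nm (illustrative numbers, not the computed waveguide potential)
U0, L, Y0 = 10e-6 * KB, 100e-9, 300e-9
V = lambda y: U0 * (math.exp(-2.0 * (y - Y0) / L)
                    - 2.0 * math.exp(-(y - Y0) / L))

E0 = ground_state_energy(V, Y0)
print(E0 / KB * 1e6, "uK")   # ~1.35 uK, well below the ~10 uK trap depth
```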
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{potential_2D_PinTotal_8W.pdf}
\includegraphics[width=0.45\columnwidth]{potential_2D_at_x0_PinTotal_8W.pdf}
\caption{(a) Two-mode trap for total power $P_\mathrm{in} = 8$ W, with $P_\mathrm{b} = 3.66$ W for blue, and $P_\mathrm{r} = 4.34$ W, for red detuned light.
Detunings from D1 and D2 resonances are $\delta \lambda_\mathrm{r} = 10.5$ nm, and $\delta \lambda_\mathrm{b} = 10$ nm.
(b) Two-mode trap along the $x=0$ line; the black line denotes the ground state energy of the trap in the harmonic approximation.}
\label{fig:two_mode_trap}
\end{figure}
The specific example presented in Figure \ref{fig:two_mode_trap} corresponds to the maximum trap depth ($\sim 11$ \textmu K) for the given choice of total power and the detunings.
Namely, for the set of fixed system parameters (geometry, total input power and detunings),
the trap depth and position depend only on relative contributions of blue and red detuned laser intensities.
Therefore, we can explore the depth of the trap as a function of its location $y_\mathrm{min}$ as shown in Figure \ref{fig:losses_vs_ym}(a).
We notice that the trap becomes more shallow for large distances $y_\mathrm{min}$:
we explain this by the fact that the optical fields creating the potential decrease further away from the surface.
Further, in the regime where the distance from the surface is small,
the repulsive barrier towards the surface also becomes smaller due to the attraction of the surface forces,
effectively reducing the trap depth.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{V_trap_vs_ym_PinTotal_8W.pdf}
\includegraphics[width=0.45\columnwidth]{losses_vs_ym_PinTotal_8W.pdf}
\caption{
(a) Trap depth as a function of the position of the trap minimum.
Total input power and detunings are the same as in Figure \ref{fig:two_mode_trap}.
(b) Losses as a function of position of the trap minimum.
Dashed vertical line indicates the position with the maximum trapping potential, i.e., example from Fig. \ref{fig:two_mode_trap}.
Total loss rate is then $\Gamma \approx 67 \ \mbox{s}^{-1}$.
}\label{fig:losses_vs_ym}
\end{figure}
\section{Optimization of losses of the atomic trap}
\label{sec:optim}
In this section we want to estimate losses from the atomic trap. We calculate two major contributions: the tunneling to the surface, and the losses due to photon scattering.
We estimate losses to the surface within the WKB approximation along the 1D line $x=0$.
That is, given the harmonic oscillator frequency $\omega_y$ of the trap, we have%
\begin{equation}
\Gamma_\mathrm{tunn} = \frac{\omega_y}{2 \pi} \exp \left(-2 \int_{y_1}^{y_2} dy \sqrt{\frac{2 m}{\hbar^2} \left[ V(x=0,y) - E \right]} \right),
\label{gamma_tunn2}
\end{equation}
where $y_{1,2}$ are the turning points of the potential barrier, $V(x=0,y_{1,2}) = E$.
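A direct numerical evaluation of Eq. (\ref{gamma_tunn2}) is straightforward. The sketch below applies it to an illustrative parabolic barrier; its height, width and oscillator frequency are assumptions chosen to be of the order of the traps above, not the actual computed potential:

```python
import math

HBAR = 1.054571817e-34   # J s
KB = 1.380649e-23        # J / K
M_CS = 2.206e-25         # mass of a Cs atom, kg

def wkb_rate(V, E, omega_y, y1, y2, n=2000):
    """Gamma_tunn = (omega_y / 2 pi) exp(-2 S / hbar) with the WKB action
    S = int_{y1}^{y2} sqrt(2 m (V(y) - E)) dy, midpoint rule."""
    dy = (y2 - y1) / n
    S = sum(math.sqrt(2.0 * M_CS * max(V(y1 + (i + 0.5) * dy) - E, 0.0))
            for i in range(n)) * dy
    return omega_y / (2.0 * math.pi) * math.exp(-2.0 * S / HBAR)

# illustrative parabolic barrier: height 5 uK, half-width 50 nm,
# trap frequency 30 kHz, state energy at 20% of the barrier height
V_b, w = 5e-6 * KB, 50e-9
V = lambda y: V_b * (1.0 - (y / w) ** 2)
E = 0.2 * V_b
a = w * math.sqrt(0.8)                      # turning points: V(+-a) = E
rate = wkb_rate(V, E, 2.0 * math.pi * 30e3, -a, a)
print(rate, "per second")   # a few tens of s^-1, the order quoted in the text
```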
Further, we calculate the losses due to photon scattering from the blue and red detuned light fields, $\Gamma_{sc,b}$ and $\Gamma_{sc,r}$, by extracting both field densities at the center of the trap \cite{Grimm+:AdvAMOPhys42:2000}.
We now proceed to explore the possibility of minimizing the trap losses by changing the trap parameters.
In Fig. \ref{fig:losses_vs_ym}(b) we plot all losses as a function of the position of the trap minimum.
The parameters are identical to those of Figure \ref{fig:two_mode_trap}.
As expected, we notice that losses increase as the trap location approaches the surface.
For large trap distances from the surface, the tunneling loss $\Gamma_\mathrm{tunn}$ becomes negligible; however, it increases rapidly relative to the blue and red scattering losses for smaller $y_\mathrm{min}$.
Again, this can be explained by noting that for traps which are too close to the surface, the repulsive potential barrier becomes too small for efficient atom trapping.
The vertical dashed line corresponds to the maximum trap depth presented in Fig. \ref{fig:two_mode_trap}.
Total loss here is $\Gamma \approx 67 \ \mbox{s}^{-1}$, and we notice that the tunneling loss becomes comparable to the scattering contributions.
Loss considerations can therefore lead to a modified trap optimization strategy.
In other words, we can reduce the losses by pushing the trap minimum further away from the surface, at the expense of making the trap more shallow.
An example of such a trap, with $V_\mathrm{trap} = 10$ \textmu K and $\Gamma \approx 42 \ \mbox{s}^{-1}$, is shown in Fig. \ref{fig:two_mode_trap_opt}.
As can be observed, the barrier to the surface is enlarged, effectively leading to very small tunneling loss.
This example can also be considered a result of the loss optimization process described below.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{potential_2D_PinTotal_8W_opt.pdf}
\includegraphics[width=0.45\columnwidth]{potential_2D_at_x0_PinTotal_8W_opt.pdf}
\caption{(a) Two-mode trap optimized for losses for total power $P_\mathrm{in} = 8$W and $V_\mathrm{trap} = 10$ \textmu K.
Here, we find $\Gamma \approx 42 \ \mbox{s}^{-1}$.
Detunings from the D1 and D2 resonances are $\delta \lambda_\mathrm{r} = 10.5$ nm, and $\delta \lambda_\mathrm{b} = 10$ nm.
(b) Two-mode trap along $x=0$ line; black line corresponds to the ground state energy of harmonic oscillator.}
\label{fig:two_mode_trap_opt}
\end{figure}
We formulate the loss optimization of an atomic trap as follows:
we estimate the minimal total loss of a 2D atomic trap when both the total input power and the desired trap depth are fixed.
That is, given our waveguide geometry and the desired trap depth, for each value of the total input power we scan both the blue and red detuned laser wavelengths in order to find the optimal set of detunings for which the total loss is minimized.
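The scan itself is a plain brute-force search over detuning pairs; a sketch of its structure, with the full trap-and-loss computation replaced by a placeholder function, reads:

```python
def optimise_detunings(total_loss, depth, blue_detunings, red_detunings, p_in):
    """Brute-force scan at fixed input power p_in and target trap depth:
    return (loss, d_blue, d_red) minimising the total loss.  total_loss is a
    placeholder for the full trap-and-loss computation and is assumed to
    return None when the requested depth cannot be realised."""
    best = None
    for d_b in blue_detunings:
        for d_r in red_detunings:
            g = total_loss(d_b, d_r, p_in, depth)
            if g is not None and (best is None or g < best[0]):
                best = (g, d_b, d_r)
    return best

# toy stand-in: loss falls off with both detunings (the real model is richer)
toy = lambda d_b, d_r, p, depth: depth / (d_b * d_r * p)
best = optimise_detunings(toy, 10.0, range(5, 16), range(5, 16), 8.0)
print(best)   # picks the largest detunings in the scanned window
```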
Figure \ref{fig:optimized_losses} shows a characteristic optimization result, using the example of $V_\mathrm{trap} = 10$ \textmu K.
We emphasize here that optimized trap losses depend strongly on the propagating power, and can therefore be strongly reduced for laser written waveguides supporting large input powers.
In other words, for larger powers optimal detunings are further from atomic resonances, leading to smaller losses.
In Figure \ref{fig:optimized_losses}, we have limited the range of optimal detunings to start at least $5$ nm away from the atomic resonances in order to avoid possible molecular resonances for the red detuned laser \cite{Pichler:JCP:2004}.
At this point, we finally note that for total propagating power, $P_\mathrm{in} =8$ W,
the set of optimal detunings corresponds exactly to one used in Figure \ref{fig:two_mode_trap_opt},
i.e. $\delta \lambda_\mathrm{r} = 10.5$ nm, and $\delta \lambda_\mathrm{b} = 10$ nm.
\begin{figure}[ht]
\includegraphics[width=0.45\columnwidth]{power_vs_loss.pdf}
\caption{Total losses after optimization of detunings as a function of total input power for trap depth of $10$ \textmu K.
}
\label{fig:optimized_losses}
\end{figure}
A similar optimization analysis with different system parameters is straightforward, depending on the chosen atomic trap properties (not shown here):
as expected, it will result in smaller optimized losses for smaller trap depths, and vice versa.
\section{Conclusion}
\label{sec:concl}
In conclusion, we have studied a promising interface for light-matter interactions based on laser written waveguides at the surface of fused silica.
Our calculations demonstrate that trapping of atoms in evanescent light of waveguide modes is possible to achieve by superimposing two different laser frequencies in the waveguide.
Importantly, we have verified that the atom trapping mechanism is successful in the regime where the blue detuned laser supports two propagating modes, whereas the red detuned laser operates in a single-mode regime.
Unlike in the case of dielectric nanofibers \cite{Balykin+:PRA70:2004},
we have shown that the single-mode regime for both wavelengths is not expected to provide efficient trapping in the laser written waveguide setup.
We have focused on a particular example of Cs atoms with one wavelength blue detuned from atomic D2 resonance,
and the other wavelength red detuned from D1 resonance.
Numerical results are presented for reasonable experimental parameters: as an example we have investigated atomic traps with potential depth $\sim 10$ \textmu K, realized with several Watts of total propagating power.
Finally, we have provided an optimization scheme for the system parameters in order to minimize trap losses due to photon scattering and surface losses.
\section*{Acknowledgements}
We are grateful for many fruitful discussions with I. Lesanovsky and L. Hackerm\"{u}ller.
We acknowledge financial support from the FET grant 295293 (QuILMI) of the 7th framework programme of the European Commission and the UK Engineering and Physical Sciences Research Council via the grant EP/M01326X/1.
\section*{References}
\bibliographystyle{iopart-num}
\section{Conclusions}
\label{sec:conclusions}
In this paper we have reviewed gradient based algorithms for the problems of inverse reinforcement learning and apprenticeship learning. On one hand, we have discussed in detail the properties of a recently developed algorithm, the maximum likelihood IRL. We have shown that the probabilistic inference approach has connections with IRL as convex optimization. In particular, it represents an alternative cost function to the least squares criterion of the policy matching algorithm. The experimental results show that for some typical problems the behavior of the likelihood based algorithm is at least as good as the other methods.
On the other hand, one of the most expensive steps in gradient based methods is the computation of the derivative of the policy. We have analyzed an approximation of the derivative that exploits the fact that \emph{small changes in the reward function do not affect the policy}. The approximated derivative can be computed, at every iteration, in polynomial time (instead of using a fixed point recursion). Results show that the reward is obtained much faster and is as accurate as the one obtained with the full derivative.
\bibliographystyle{plain}
\section{Experiments}
\label{sec:results}
In this section we evaluate the performance of the maximum likelihood IRL, which we call GIRL for Gradient-based IRL, and compare it with other IRL algorithms: Policy Matching (PM) \cite{Neu07uai} and the Multiplicative Weights Algorithm (MWAL) \cite{Syed2007}. The experiments are divided in two different scenarios. First, we use a standard grid world as used in many IRL papers since Abbeel and Ng \cite{Abbeel04icml}. Then, we use the sailing simulator proposed by Vanderbei \cite{Vanderbei} and used for IRL by Neu and Szepesv\'{a}ri \cite{Neu07uai}.
In addition to comparing the methods, we also evaluate the impact of approximating the derivative using Eq. \ref{eq:appderiv}. Since for the reward model used in the paper the derivative is equal to the feature expectations, it is possible to compute them using the same approximation for the MWAL algorithm. In order to have a fair comparison in terms of computational time, we also report results where the feature expectations (i.e. the derivative) are estimated in a single step (horizon one). We will denote the full fixed point recursion as FP, the independence assumption of Eq. \ref{eq:appderiv} as IA, and the one-step fixed point as FP1.
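For a linear reward the derivative of the Q-function coincides with the discounted feature expectations, so the variants above differ in how far the fixed-point recursion is run. A tabular sketch of that recursion follows (illustrative; the IA approximation of Eq. \ref{eq:appderiv} itself is not reproduced here):

```python
import numpy as np

def feature_expectations(phi, P, pi, gamma, n_iter):
    """Fixed-point recursion mu = phi + gamma P_pi mu for the discounted
    feature expectations (= dQ/dtheta for a linear reward).
    phi: (S, A, K) features, P: (S, A, S) transitions, pi: (S, A) policy.
    n_iter = 1 gives the one-step 'FP1' truncation; large n_iter approaches
    the full fixed point 'FP'."""
    mu = np.zeros_like(phi, dtype=float)
    for _ in range(n_iter):
        v = np.einsum('xa,xak->xk', pi, mu)        # state-level expectations
        mu = phi + gamma * np.einsum('xay,yk->xak', P, v)
    return mu

# one state, one action, unit feature: FP converges to 1/(1 - gamma)
phi = np.ones((1, 1, 1)); P = np.ones((1, 1, 1)); pi = np.ones((1, 1))
mu_fp = feature_expectations(phi, P, pi, 0.9, 300)
mu_fp1 = feature_expectations(phi, P, pi, 0.9, 1)
print(mu_fp[0, 0, 0], mu_fp1[0, 0, 0])   # ~10.0 vs 1.0
```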
First, we describe the metrics that we use to compare the algorithms. Then, we describe the setup used for the experiments and show some results. Finally, we provide some quantitative results about the different criteria to approximate the derivative of the Q-function.
\subsection{Performance metrics}
We are interested in comparing the ability of the algorithms to a) recover the structure of the problem and b) propose policies that perform as well as the demonstrated expert.
One measure of performance is the accumulated rewards using policy evaluation \cite{Sutton98}. The value-function following policy $\pi$ from a state $x$ with a reward function $R$ is
\begin{equation}
V_R^\pi(x)=\mathbb{E}_{\pi}\left\{R_{t+1}+\gamma R_{t+2}+ \gamma^2 R_{t+3}+ \ldots \mid x_t=x\right\}
\end{equation}
The total value of a policy $\pi$ is:
\begin{equation}
V_R^\pi=\sum_{x \in \mathcal{X}} V_R^\pi(x) P(x_0=x)
\end{equation}
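For finite MDPs both quantities can be computed exactly by solving a linear system; a minimal sketch (tabular representation assumed):

```python
import numpy as np

def total_value(P, R, pi, gamma, p0):
    """Exact policy evaluation: solve V = R_pi + gamma P_pi V, then average
    over the start distribution p0.  P: (S, A, S), R: (S, A), pi: (S, A)."""
    P_pi = np.einsum('xa,xay->xy', pi, P)     # state-to-state kernel under pi
    R_pi = np.einsum('xa,xa->x', pi, R)       # expected immediate reward
    V = np.linalg.solve(np.eye(len(R_pi)) - gamma * P_pi, R_pi)
    return float(p0 @ V)

# single self-looping state with unit reward: V = 1/(1 - gamma)
val = total_value(np.ones((1, 1, 1)), np.ones((1, 1)), np.ones((1, 1)),
                  0.9, np.array([1.0]))
print(val)   # ~10.0
```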
We denote by $R_E$ the real reward which the expert is optimizing, and by $\pi_E$ the policy of the expert. The IRL algorithm converges to a reward function $R_{IRL}$ with corresponding optimal policy $\pi_{IRL}$. Then, the maximum obtainable accumulated reward is $V_{R_E}^{\pi_E}$ and the accumulated reward obtained by the IRL solution is $V_{R_E}^{\pi_{IRL}}$.
Another measure of how good the algorithm solves the problem is the comparison of the greedy policies of the expert and the learner. We measure what fraction of states of the greedy version of the learned policy $\pi_{IRL}$ match the actual optimal greedy policy $\pi_E$. However, this measure can be sometimes misleading, since taking the wrong actions in critical parts of the problem can be disastrous reward-wise. Also in some parts of the problems there is more than one optimal action. This has been called the label bias \cite{Ziebart08aaai} problem and the desired performance, whether to map the optimal policy or the distribution over paths, depends on the task.
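The policy-similarity metric itself is a simple count; a sketch allowing for several optimal actions per state, to soften the label-bias issue just mentioned:

```python
def policy_similarity(expert_optimal, learned_greedy):
    """Fraction of states whose learned greedy action lies in the set of
    expert-optimal actions (several actions may be optimal in a state)."""
    hits = sum(1 for opt, a in zip(expert_optimal, learned_greedy) if a in opt)
    return hits / len(learned_greedy)

sim = policy_similarity([{0}, {1, 2}, {3}], [0, 2, 1])
print(sim)   # 2 of 3 states match
```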
\begin{figure}
\centering
\begin{tabular}{cccc}
\includegraphics[width=0.23\textwidth]{narrow2-crop} &
\includegraphics[width=0.23\textwidth]{pinarrow2-crop} &
\includegraphics[width=0.23\textwidth]{paths2-crop} &
\includegraphics[width=0.23\textwidth]{pipaths2-crop}\\
(a) & (b) & (c) & (d)
\end{tabular}
\caption{Narrow passage problem: (a) the true reward, (b) the expert policy. Paths problem: (c) the true reward, (d) the expert policy}
\label{fig:problems}
\end{figure}
\subsection{Grid world}
The grid world is made up of grid squares with five actions available to the agent, corresponding to moves in the compass directions (N,S,E,W) plus an action for staying in the actual state. We assume that the real reward is in the linear form from equation \eqref{eq:features}. The grid is divided in macro-cells $\Psi_i$ which can span several grid squares. The reward features are the indicator function of the agent's state being inside a macro-cell, $\mathbb{I}_\Psi(x)$. Note that in this example, the real and estimated rewards depend only on the state $R(x,a) = R(x)$.
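The macro-cell features can be generated mechanically; a sketch for the square grids used below (row-major state enumeration assumed):

```python
import numpy as np

def macrocell_features(grid_size, cell_size):
    """Indicator features phi_i(x) = 1 iff grid square x lies in macro-cell i,
    for states enumerated row-major on a grid_size x grid_size board."""
    n = grid_size // cell_size                  # macro-cells per side
    phi = np.zeros((grid_size * grid_size, n * n))
    for x in range(grid_size * grid_size):
        r, c = divmod(x, grid_size)
        phi[x, (r // cell_size) * n + c // cell_size] = 1.0
    return phi

# 10x10 grid with 2x2 macro-cells: 100 states, 25 features (narrow-passage size)
phi = macrocell_features(10, 2)
print(phi.shape)   # (100, 25)
```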
After doing some preliminary tests, we found that it is important for the grid world example to have some structure, in order to be able to draw conclusions about IRL algorithms. In many papers, the reward is chosen randomly, resulting in problems with little structure. In those cases, the IRL algorithm only needs to find the goal point to perform well, thus not helping to properly evaluate the algorithms. Based on this observation, we propose two different grid world problems with structure to assess performance:
\begin{itemize}
\item A narrow passage problem, see Figures \ref{fig:problems} (a) and (b), where one corner has slightly better reward than the other states but to get to the corner the agent has to be careful not to fall in the pit and walk along the narrow passage. This problem has 25 macro-cells arranged in a 5x5 square grid. The true reward accumulated by the expert is $V_{R_E}^{\pi_E}=0.9964$.
\item A path following problem, see Figures \ref{fig:problems} (c) and (d), where most states give no reward but the goal state and the states following some paths to this goal. The agent has to follow these paths and beware of stepping outside on the way to the goal. This problem has 100 macro-cells arranged in a 10x10 square grid. The true reward accumulated by the expert is $V_{R_E}^{\pi_E}=0.9358$
\end{itemize}
\begin{figure}[!t]
\centering
\subfigure[Policy similarity, FP]{\includegraphics[width=0.3\textwidth]{policy-slow52-crop}}
\subfigure[Policy similarity, IA]{\includegraphics[width=0.3\textwidth]{policy-fast52-crop}}
\subfigure[Policy similarity, FP1]{\includegraphics[width=0.3\textwidth]{policy-fp1-52-crop}}
\subfigure[Acc. reward, FP]{\includegraphics[width=0.3\textwidth]{reward-slow52-crop}}
\subfigure[Acc. reward, FP]{\includegraphics[width=0.3\textwidth]{reward-fast52-crop}}
\subfigure[Acc. reward, FP1]{\includegraphics[width=0.3\textwidth]{reward-fp1-52-crop}}
\caption{Narrow passage. Average results over 10 runs. Top: policy similarity for (a) FP, (b) IA and (c) FP1. Bottom: Accumulated reward for (d) FP, (e) IA and (f) FP1.}
\label{fig:narrowpassage}
\end{figure}
Figure \ref{fig:narrowpassage} show the results for the narrow passage problem with a macro-cell of size $2\times2$ states\footnote{Very similar results were obtained with bigger macro-cells which are not reported due to lack of space. For instance, for $4\times 4$ macro-cells, GIRL accumulated 0.98122.}.
The results for the full derivative show that the three methods perform quite well, obtaining an accumulated reward over 0.95 (compared to 0.99 of the expert's) (Fig. \ref{fig:narrowpassage}(d)), with GIRL achieving the best results (0.987), almost identical to the expert. Similar results are obtained for the final estimated policy, where GIRL behaves better than the other methods.
The results almost do not change when using the approximate derivative (see Figs. \ref{fig:narrowpassage}(b) and (e)). Again, GIRL obtains slightly better results than PM and MWAL. In both cases, this is mainly due to some solutions getting stuck at local minima that make some states fall to the pits or fail to get out of the pit through the fastest route. Indeed, this is illustrated by the variance bars that are smaller for the GIRL method than for the others.
For the FP1 approximation, GIRL behaves almost identically. PM degrades its performance a little and gets stuck in local minima more often. MWAL, on the other hand, is not able to recover any sensible policy. This is expected since MWAL tries to match the feature expectations and, therefore, is more sensitive to approximations. However, it is surprising that it does work well with the IA approximation.
\begin{figure}[!t]
\centering
\subfigure[Policy similarity, FP]{\includegraphics[width=0.3\textwidth]{policy-slow102-crop}}
\subfigure[Policy similarity, IA]{\includegraphics[width=0.3\textwidth]{policy-fast102-crop}}
\subfigure[Policy similarity, FP1]{\includegraphics[width=0.3\textwidth]{policy-fp1-102-crop}}
\subfigure[Acc. reward, FP]{\includegraphics[width=0.3\textwidth]{reward-slow102-crop}}
\subfigure[Acc. reward, IA]{\includegraphics[width=0.3\textwidth]{reward-fast102-crop}}
\subfigure[Acc. reward, FP1]{\includegraphics[width=0.3\textwidth]{reward-fp1-102-crop}}
\caption{Path problem. Average results over 10 runs. Top: policy similarity for (a) FP, (b) IA and (c) FP1. Bottom: Accumulated reward for (d) FP, (e) IA and (f) FP1.}
\label{fig:paths}
\end{figure}
We now analyze the results for the path following problem (Fig. \ref{fig:paths}). Roughly, the results are the same as in the narrow passage. All the methods behave similarly for the FP and the IA cases, with an accumulated reward almost identical to the expert's (0.93)\footnote{The accumulated reward with $4\times 4$ macro-cells was also best for GIRL (0.82).}. In fact, the differences between them are smaller than in the narrow passage. The main reason is that PM and MWAL do not seem to converge to worse local minima so often. This is indicated by a smaller variance in the figures, although it is still bigger than GIRL's. As in the previous case, PM and GIRL almost keep the same performance with the FP1 approximation of the derivative, and the MWAL algorithm fails to find a good solution.
The difference in the computational cost varies enormously depending on the selected method to compute the derivative. For example, in the case of GIRL for the path following problem, the FP took 2.98 s per iteration on average, while the IA and the FP1 took 1.14 s and 0.48 s respectively. The results are consistent with the fact that FP has an exponential cost, while IA has a polynomial cost. On the other hand, the differences between the IRL algorithms were negligible, the number of iterations until convergence being more important.
\subsection{Sailing}
In the problem of ``sailing'', proposed by Vanderbei \cite{Vanderbei}, the task is to navigate a sailboat from a starting point to a goal point in the shortest time possible. The speed of the sailboat depends on the relative direction between the wind and the sailboat, and the wind follows a stochastic process. In the MDP context, the state contains the data about the sailboat's position in the grid, the current wind direction, and the current tack. The possible actions are to move to any of the 8 adjacent cells in the grid. The rewards are negative, and correspond to the time required to carry out the movement under the current wind, i.e. it is faster to sail away from the wind than into the wind, and changing the tack induces an additional delay. The reward function is a linear combination of six features -away, down, cross, up, into, delay-, see \cite{Vanderbei} for details. These features depend on the state and the action. Thus, the problem is more challenging than the grid worlds, where the reward depends only on the state, that is, the location on the grid. The true weights used in the experiments are the same ones used by Vanderbei \cite{Vanderbei}, namely $\theta^*=(-1,-2,-3,-4,-100000,-3)^T$.
\begin{table}
\centering
\begin{tabular}{|l|c|c|r|}
\toprule
Method & $V_{R_E}^{\pi_{IRL}}$ & $\pi_E=\pi_\theta$ & T (s) \\\midrule
GIRL-FP &-11.76 & 93.87 & 332.44 \\
MWAL-FP & -15440.31 & 83.87 & 522.59 \\
PM-FP & -1630.48 & 90.90 & 376.26 \\
\hline
GIRL-IA & -11.76 & 93.91 & 52.73 \\
MWAL-IA & -395.45 & 85.43 & 47.95 \\
PM-IA & -2035.15 & 91.05 & 47.56 \\
\hline
GIRL-FP1 &-11.76 & 94.53 &34.53 \\
MWAL-FP1 & -16814.30 & 86.29 &35.06 \\
PM-FP1 & -416.44 & 93.91 &37.16 \\
\bottomrule
\end{tabular}
\caption{Experimental results for the sailing problem for 100 iterations of the GIRL, PM and MWAL methods using the fixed point method (FP), the independence assumption (IA) and a single step of the fixed point recursion (FP1). The reward accumulated by the expert is $V_{R_E}^{\pi_E}=-11.76$.}
\label{tabla:derivatives}
\end{table}
\begin{figure}[!t]
\centering
\subfigure[Policy similarity, FP]{\includegraphics[width=0.3\textwidth]{sail-slow-policy-crop}}
\subfigure[Policy similarity, IA]{\includegraphics[width=0.3\textwidth]{sail-fast-policy-crop}}
\subfigure[Policy similarity, FP1]{\includegraphics[width=0.3\textwidth]{sail-fp1-policy-crop}}
\subfigure[Acc. reward, FP]{\includegraphics[width=0.3\textwidth]{sail-slow-reward-crop}}
\subfigure[Acc. reward, IA]{\includegraphics[width=0.3\textwidth]{sail-fast-reward-crop}}
\subfigure[Acc. reward, FP1]{\includegraphics[width=0.3\textwidth]{sail-fp1-reward-crop}}
\caption{Evolution of the performance of the different IRL methods in
the sailing problem: policy similarity and accumulated reward for (a,d) FP, (b,e) IA and (c,f) FP1.}
\label{fig:plotsail}
\end{figure}
Table \ref{tabla:derivatives} summarizes the results of solving the sailing problem with different demonstrations, averaged over 10 runs. The demonstrations consisted of 5120 expert trajectories each. We ran the algorithms for one hundred iterations, which was more than needed for convergence of the dissimilarity function $J(\pi_\theta, \mathcal{D})$ (around 40 IRL iterations). Figure \ref{fig:plotsail} shows the evolution of the two metrics, the real accumulated reward of the learned policy $V_{R_E}^{\pi_{IRL}}$ and the proportion of states $x_i$ where $\pi_E(x_i)=\pi_\theta(x_i)$. For this problem, the reward accumulated by the expert is $V_{R_E}^{\pi_E}=-11.76$.
The GIRL and PM algorithms perform reasonably well for the FP, IA and FP1 cases. GIRL had the best performance, with an accumulated reward almost identical to the expert's one independently of the derivative used. PM showed slight differences that could be due to sampling noise in the demonstrations. In general, it had a slightly smaller accumulated reward (between -2000 and -400) and also a higher variance than GIRL. Surprisingly, MWAL worked best in the IA case, where it obtained an accumulated reward of -395. However, it performed poorly in the other two cases, where it did not converge to the right solution. This effect, which also appeared in the grid world, requires further investigation.
In any case, the PM and MWAL solutions revealed that most of the loss in reward is due to a small number of wrong actions (going against the wind), which increased the final time. This was avoided by the GIRL method.
Finally, the computational times depend mainly on the approximation used (see Table \ref{tabla:derivatives}) and, as in the grid world problems, they are very similar among methods.
\section{Maximum likelihood IRL}
\label{sec:GIRL}
In this section we describe in detail the algorithm introduced in \cite{macl09airl} to solve the IRL problem using an estimate of the gradient of the likelihood function from equation \eqref{eq:rewardlik}. Then, we present new connections between that algorithm and other algorithms in the literature. First, to provide a uniform framework, we will use the reward parametrization of equation \eqref{eq:features}. As commented before, the original formulation of \cite{macl09airl} can be recovered by taking $\phi_i(x,a)=\mathbb{I}_i(x,a)$.
For simplicity, we assume that the feature functions are known in advance. Therefore, the reward function is fully determined by the vector of weights $\theta$. We define the likelihood of the data set as the product of the likelihood of the state-action pairs $ P(x_i,a_i \mid \mathcal{D}) = \ell_\theta(x_i,a_i) $ defined as in equation \eqref{eq:rewardlik},
\begin{equation}
\label{eq:likelihood}
\mathcal{L}_\theta (\mathcal{D})= \prod_{i=1}^{M}\ell_\theta(x_i,a_i)
\end{equation}
where $M$ is the number of demonstrated pairs.
Thus, a gradient ascent algorithm can be used to estimate the reward function $R$ maximizing the log-likelihood function with respect to the demonstrations $\mathcal{D}$. As described above, we are interested in computing a reward function $R^*$ such that:
\begin{equation}
\label{eq:maxlogl}
R_{\theta}^*= \arg\max_R \log \mathcal{L}_\theta (\mathcal{D})
\end{equation}
subject to the constraints on the parameter weights $\theta_i \geq 0$ and $\|{\theta}\|_1=1$, where
\begin{equation}
\log \mathcal{L}_\theta (\mathcal{D})= \sum_{i=1}^{M} \log ( \ell_\theta(x_i,a_i))
\label{eq:logl}
\end{equation}
The model in equation \eqref{eq:rewardlik} assumes that the probability of the expert choosing an action increases with the action's value, i.e., actions with higher $Q^*$ are more likely to be selected. Given an observed pair $(x,a)$, the likelihood of the pair under the reward function $R_{\theta}$ is defined as in equation \eqref{eq:rewardlik}, which is equivalent to the Boltzmann policy from equation \eqref{eq:boltpi}.
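As a concrete illustration, once the $Q^*$-values are available the likelihood of a demonstration set under the Boltzmann model can be evaluated directly. The following is a minimal sketch in Python; the Q-values, temperature and demonstrations below are hypothetical.

```python
import numpy as np

def boltzmann_policy(Q, eta=1.0):
    """Soft policy: pi(a|x) proportional to exp(Q(x,a)/eta)."""
    Z = Q / eta
    Z -= Z.max(axis=1, keepdims=True)      # shift for numerical stability
    expZ = np.exp(Z)
    return expZ / expZ.sum(axis=1, keepdims=True)

def log_likelihood(Q, demos, eta=1.0):
    """Sum of log pi(a_i|x_i) over demonstrated (state, action) pairs."""
    pi = boltzmann_policy(Q, eta)
    return sum(np.log(pi[x, a]) for x, a in demos)

# Toy example: 2 states, 2 actions, hypothetical Q-values.
Q = np.array([[1.0, 0.0],
              [0.0, 2.0]])
demos = [(0, 0), (1, 1), (0, 0)]
ll = log_likelihood(Q, demos, eta=1.0)
```

Gradient ascent on this quantity with respect to the reward weights is what the rest of the section develops.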
\subsection{Computing the gradient}
At this point, we need to determine the gradient
\begin{equation}
\label{eq:updaterule}
\mathbf{\nabla}_\mathbf{\theta} \log \mathcal{L}_\mathbf{\theta} (\mathcal{D}) = \frac{\partial}{\partial \mathbf{\theta}}\left [ \sum_{i=1}^{M} \log( \ell_\mathbf{\theta}(x_i,a_i)) \right ] = \sum_{i=1}^{M} \frac{1}{\ell_\mathbf{\theta}(x_i,a_i)}\frac{\partial \ell_\mathbf{\theta}}{\partial \mathbf{\theta}}(x_i,a_i)
\end{equation}
The partial derivative of the pair likelihood can be calculated as:
\begin{equation}
\label{eq:likpair}
\frac{\partial \ell_\theta}{\partial \theta_k}(x,a)=\frac{\ell_\theta(x,a)}{\eta} \left(\frac{\partial Q^*}{\partial \theta_k}(x,a)-\sum_{b \in \mathcal{A}} \ell_\theta(x,b)\frac{\partial Q^*}{\partial \theta_k}(x,b)\right)
\end{equation}
The likelihood of the pair $(x_i,a_i)$ can be easily calculated by solving the direct RL problem. However, we still need to find the gradient of the optimal \emph{Q-function} with respect to the reward parameters $\theta$. This derivative is not trivial, since $Q^*$ depends on $R_\theta$ in two ways: directly, through the accumulated reward, and indirectly, through the optimal policy. Nevertheless, we can estimate the derivatives of the $Q^*$ functions.
\subsubsection{Fixed-point estimate}
As shown by \cite{Neu07uai} the derivatives of the $Q^*$ functions can be computed \emph{almost everywhere} with a fixed point equation:
\begin{equation}
\label{eq:fixedpoint}
\psi_\theta(x,a) = R'_\theta(x,a) + \gamma \sum_{y\in\mathcal{X}} P(y\mid x,a) \sum_{b\in\mathcal{A}} \pi(y,b) \psi_\theta(y,b)
\end{equation}
where $R'_\theta(x,a)$ is the derivative of the reward w.r.t. $\theta$ and $\pi$ is any policy that is greedy with respect to $Q_\theta$.
If the reward is in the linear form of \eqref{eq:features}, then $R'_\theta(x,a) = \phi(x,a)$ and the solution to that equation is also the set of feature expectations $\Phi_\pi(x,a)$ defined in equation \eqref{eq:expfeat}.
Thus, the resulting equation for the derivative of $\ell_\theta$ is
\begin{equation}
\frac{\partial \ell_\mathbf{\theta}}{\partial \theta_k}(x,a)=\frac{\ell_\mathbf{\theta}(x,a)}{\eta} \left(\Phi^\pi_k(x,a)-\sum_{b \in \mathcal{A}} \ell_\mathbf{\theta}(x,b)\Phi^\pi_k(x,b)\right) \label{partialderivative}
\end{equation}
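For a fixed greedy policy, the fixed-point equation above is linear, so the feature expectations $\Phi^\pi$ can be obtained exactly with one linear solve over all state-action pairs. A minimal tabular sketch under that assumption (the array shapes and names are illustrative):

```python
import numpy as np

def feature_expectations(P, pi, phi, gamma=0.9):
    """Solve the fixed-point equation Phi = phi + gamma * M Phi for
    state-action feature expectations, where
    M[(x,a),(y,b)] = P(y|x,a) * pi(y,b).
    P:   transitions, shape (X, A, X)
    pi:  policy, shape (X, A)
    phi: features, shape (X, A, K)
    Returns Phi with shape (X, A, K)."""
    X, A, K = phi.shape
    # Build the (XA x XA) transition matrix over state-action pairs.
    M = (P[:, :, :, None] * pi[None, None, :, :]).reshape(X * A, X * A)
    Phi = np.linalg.solve(np.eye(X * A) - gamma * M,
                          phi.reshape(X * A, K))
    return Phi.reshape(X, A, K)
```

The solve has cubic cost in the number of state-action pairs, which is why the approximation of the next subsection is of interest.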
\subsubsection{Independence assumption}
The second estimate is an approximation based on the assumption that the \emph{policy remains unchanged under a small variation in the reward function} as in \cite{macl09airl}. The Bellman recursion of the value-function and Q-function from equations \eqref{eq:bellmanv} and \eqref{eq:bellmanq} using vector notation are:
\begin{align*}
V^*&= \max_{a \in \mathcal{A}} \left[ R_a+\gamma P_a V^* \right] \;\; &V^\pi= R_\pi+\gamma P_\pi V^\pi \\
Q^*_a&=R_a+\gamma P_a V^* \;\; &Q^\pi_a = R_a + \gamma P_a V^\pi
\end{align*}
then, we notice that for any optimal policy $\pi^*$:
\begin{equation}
V^*=\left ( \mathbf{I} -\gamma P_{\pi^*} \right )^{-1} R_{\pi^*}
\end{equation}
These expressions can be combined into
\begin{equation}
Q^*_a=R_a+\gamma P_a \left ( \mathbf I -\gamma P_{\pi^*} \right )^{-1} R_{\pi^*}
\end{equation}
Let us define $\mathbf{T}=\mathbf{I} -\gamma P_{\pi^*}$. Ignoring the dependency of the right-hand side of the equation on the policy, one obtains the following approximation of the derivative
\begin{equation}
\frac{\partial Q^*_a}{\partial \theta_k}=\phi_k (x,a)+\gamma P_a \mathbf T^{-1} \left [ \sum_{b \in \mathcal{A}} \pi^*(x,b) \phi_k(x,b) \right ]_x \; .
\label{eq:appderiv}
\end{equation}
In contrast with the fixed-point method of \cite{Neu07uai}, this approximation has a computational cost that is polynomial (due to the matrix inverse) in the number of states. In the experiments we show that both methods provide comparable results in terms of accuracy.
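A sketch of the approximation in equation \eqref{eq:appderiv}: with the optimal policy held fixed, the derivative of $Q^*$ reduces to one linear solve of size $|\mathcal{X}|$ rather than $|\mathcal{X}||\mathcal{A}|$. Tabular arrays, illustrative names:

```python
import numpy as np

def approx_dQ(P, pi_star, phi, gamma=0.9):
    """Approximate dQ*/dtheta under the assumption that the optimal
    policy is unchanged by a small perturbation of the reward.
    P: (X, A, X), pi_star: (X, A), phi: (X, A, K). Returns (X, A, K)."""
    X, A, K = phi.shape
    # State transitions under pi*: P_pi[x, y] = sum_a pi*(x,a) P(y|x,a)
    P_pi = np.einsum('xa,xay->xy', pi_star, P)
    T = np.eye(X) - gamma * P_pi
    # Per-state averaged features under pi*: (X, K)
    phi_pi = np.einsum('xa,xak->xk', pi_star, phi)
    inner = np.linalg.solve(T, phi_pi)        # T^{-1} [ . ]_x
    return phi + gamma * np.einsum('xay,yk->xak', P, inner)
```

When the policy really is unchanged, this coincides with the fixed-point solution; otherwise it is only an approximation, as discussed above.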
\subsection{Comparison to other IRL algorithms}
Although we have presented the maximum likelihood algorithm in terms of probabilistic inference, it can easily be reformulated in terms of convex optimization, in order to be comparable to other IRL algorithms as in \cite{Neu09ML}. For example, the update rule in \eqref{eq:updaterule} can be rewritten by replacing the sum over the dataset with a sum over all state-action pairs:
\begin{equation}
\label{eq:newupdaterule}
\begin{split}
\Delta_k &=\sum_{i=1}^{M} \frac{1}{\ell_\theta(x_i,a_i)}\frac{\partial \ell_\theta}{\partial \theta_k}(x_i,a_i)\\
&=\sum_{x,a \in \mathcal{X} \times \mathcal{A}} M \; \mu_E(x) \; \hat{\pi}_E(a\mid x)\frac{1}{\ell_\theta(x,a)}\frac{\partial \ell_\theta}{\partial \theta_k}(x,a)
\end{split}\end{equation}
where $\mu_E(x)$ is the observed state visitation frequency of the expert's behavior\footnote{Here, $\mathbb{I}(\cdot)$ denotes the indicator function, which is $1$ when its argument holds and $0$ otherwise.}
\begin{equation}
\mu_E(x)=\frac{\sum_{i=1}^{M} \mathbb{I}(x_i=x)}{M}
\end{equation}
and $\hat{\pi}_E(a\mid x)$ the policy estimated from observations of the expert's behavior
\begin{equation}
\hat{\pi}_E(a\mid x)=\frac{\sum_{i=1}^{M} \mathbb{I}(x_i=x \wedge a_i=a)}{\sum_{i=1}^{M} \mathbb{I}(x_i=x)}.
\end{equation}
As shown in \cite{Boularias2010}, the estimated expert policy can be inaccurate when there are many unvisited states or the demonstrations are scarce. When no observations are available for a state we will assume that the optimal policy is a random walk.
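The empirical quantities $\mu_E$ and $\hat{\pi}_E$ are plain counting estimates, with the random-walk fallback mentioned above for unvisited states. A minimal sketch:

```python
import numpy as np

def empirical_expert(demos, n_states, n_actions):
    """Estimate visitation frequencies mu_E and expert policy pi_E_hat
    from demonstrated (state, action) pairs; unvisited states fall back
    to a uniform random walk."""
    counts = np.zeros((n_states, n_actions))
    for x, a in demos:
        counts[x, a] += 1
    visits = counts.sum(axis=1)
    mu = visits / len(demos)
    # Uniform random walk by default; overwrite visited states.
    pi = np.full((n_states, n_actions), 1.0 / n_actions)
    seen = visits > 0
    pi[seen] = counts[seen] / visits[seen, None]
    return mu, pi
```

Since $\mu_E(x)=0$ for unvisited states, the fallback policy never contributes to the update in equation \eqref{eq:update2}; it only matters when the learned reward is later evaluated on those states.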
In terms of maximization, the constant $M$ can be dropped and we can replace the likelihood function for the Boltzmann policy since, by definition, $\ell_\theta(x,a)=\pi_\theta(a\mid x)$. Therefore, we obtain:
\begin{equation}
\Delta_k=\sum_{x,a \in \mathcal{X} \times \mathcal{A}} \mu_E(x) \hat{\pi}_E(a\mid x)\frac{1}{\pi_\theta(a\mid x)}\frac{\partial \pi_\theta}{\partial \theta_k}(a\mid x)
\label{eq:update2}
\end{equation}
which can be integrated to obtain the \emph{similarity} function being maximized:
\begin{equation}
\label{eq:dissimilaritygirl}
J(\pi_\theta,\mathcal{D}) =\sum_{x,a \in \mathcal{X} \times \mathcal{A}} \mu_E(x) \hat{\pi}_E(a\mid x) \log \pi_\theta(a\mid x)
\end{equation}
The maximum likelihood approach, therefore, describes an alternative cost function for IRL problems.
Indeed, following \cite{Neu09ML}, it belongs to the family of algorithms that aim to match the policies instead of the feature expectations, such as the policy matching algorithm described in \cite{Neu07uai}. However, the latter uses a least-squares cost function instead of a maximum likelihood approach.
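Note that the similarity of equation \eqref{eq:dissimilaritygirl} is, up to sign, a weighted cross-entropy between the estimated expert policy and the learner's policy, so it is maximized when the two policies agree on the visited states. A minimal sketch, with all arrays assumed precomputed:

```python
import numpy as np

def similarity_J(mu, pi_E, pi_theta, eps=1e-12):
    """J(pi_theta, D) = sum_{x,a} mu_E(x) pi_E_hat(a|x) log pi_theta(a|x).
    mu: (X,) visitation frequencies; pi_E, pi_theta: (X, A) policies.
    eps guards the logarithm against zero probabilities."""
    return float(np.sum(mu[:, None] * pi_E * np.log(pi_theta + eps)))
```

For a fixed expert estimate, maximizing $J$ over $\pi_\theta$ is equivalent to minimizing the KL-divergence from $\hat{\pi}_E$ to $\pi_\theta$, averaged over the visited states.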
\section{Introduction}
As proposed originally by Ng and Russell \cite{Ng00ICML}, the objective of the inverse reinforcement learning (IRL) problem is to determine the reward function that an \emph{expert} agent is optimizing in order to solve a task, based on observations of that expert's behavior while solving the task. The motivation of IRL is twofold. First, it can provide computational models for human and animal behavior. It has been demonstrated that human action understanding can be modeled as inverse planning in a Markov decision process (MDP) model \cite{Baker2009,Ullman2010}. Second, it can be used for the design of intelligent agents, where the description of the tasks might not be easy to obtain. In the latter case, it is sometimes simpler to get demonstrations from an agent that already knows how to do the tasks \cite{Ziebart08aaai,Abbeel04icml,Neu07uai,Silva06icra}. For example, sometimes we are unable to describe a complex body movement, but we can show it so that the \emph{learner} can infer how to do it by herself \cite{fmelo07unified}. Furthermore, as discovered in \cite{Ratliff06icml} and exploited in \cite{Neu09ML}, there is a strong connection between the problem of IRL and that of structured learning \cite{BakIr2007}.
Although the general IRL problem does not assume any model of the environment, the formal statement of the problem that appears in the literature assumes a MDP \cite{Puterman94} and that the \emph{expert} follows the principle of \emph{rational action}, that is, the expert agent always tries to maximize the reward function \cite{Baker2009}. This formulation is a generalization of the classical inverse optimal control (IOC)
problem in continuous domains \cite{Boyd1994,Krishnamurthy2010}. However, it is well known that, even with those restrictions, the problem of inverse reinforcement learning is ill-posed. That is, there is a virtually infinite number of rewards that accept the same demonstration as the optimal policy \cite{Ng99rewshap}.
Thus, there are two ways of tackling the problem of IRL. On the one hand, the seminal paper of Ng and Russell \cite{Ng00ICML} and later works based on it, such as \cite{Ramachandran07ijcai,Melo2010}, try to characterize the space of solutions of the reward function. On the other hand, most of the recent algorithms address the problem of \emph{apprenticeship learning}, where the learner is less concerned about the actual reward function, and the objective is to recover a policy that is \emph{close} to the demonstrated behavior \cite{Abbeel04icml,Neu07uai,Ratliff06icml,Ziebart08aaai,Syed08icml,Syed2007}. In that sense, apprenticeship learning is related to imitation learning or learning by demonstration. Therefore, IRL in the case of apprenticeship learning includes a new restriction based on the \emph{similarity} or \emph{dissimilarity} of the expert and learner behaviors \cite{Neu09ML}. However, contrary to other approaches to imitation learning, it can provide a more compact representation in terms of the reward function, it can generalize to states that have not been demonstrated, and it is more robust to changes in the environment.
This paper studies an algorithm for maximum likelihood IRL from two different points of view. First, it shows its connections to prior work on convex optimization for IRL, namely the use of gradient methods in a least-squares optimization. Second, it compares the performance of maximum likelihood against other IRL methods, as well as that of an approximation of the gradient. The results show that, for the studied problems, maximum likelihood always obtains the best results. Moreover, the solutions with the gradient approximation practically do not degrade, while achieving a considerable speed-up in computational time.
The remainder of the paper is organized as follows. After introducing the notation and inverse reinforcement learning methods in Section \ref{sec:prelim}, Section \ref{sec:GIRL} describes maximum likelihood IRL and discusses its connections to other algorithms. Section \ref{sec:results} presents the experimental results. Finally, in Section \ref{sec:conclusions} we draw the conclusions.
\section{Preliminaries}
\label{sec:prelim}
In this section we will introduce the notation used in this article, point to the basic equations we need from direct reinforcement learning, and enunciate the inverse reinforcement learning problem.
\subsection{Markov decision processes}
A Markov decision process (MDP) is a tuple $(\mathcal{X}, \mathcal{A}, P, \gamma, R)$ where
\begin{itemize}
\item $\mathcal{X}$ is a set of states,
\item $\mathcal{A}$ is a set of actions,
\item $P(x'\mid x, a) \equiv P_{x'ax}$ is the probability of transitioning to state $x' \in \mathcal{X}$ when taking action $a \in \mathcal{A}$ in state $x \in \mathcal{X}$, i.e., $P: \mathcal{X} \times \mathcal{A} \times \mathcal{X} \rightarrow [0,1]$,
\item $R$ is a reward function. $R(x,a) \equiv R_{xa}$ returns the reward for taking action $a$ in state $x$. $R: \mathcal{X} \times \mathcal{A} \rightarrow \mathbb{R}$ and
\item $\gamma \in [0,1)$ is the discount factor.
\end{itemize}
The purpose of the MDP is to find the action sequence that maximizes the expected future reward:
\begin{equation}
V(x)=\mathbb{E} \left [ \sum_{t=0}^{\infty} \gamma^t R(x_t,a_t) \middle \vert x_0=x \right ]
\label{eq:valuefunction}
\end{equation}
The sequence of actions is encoded in a \emph{policy}, which is a mapping $\pi :\mathcal{X} \times \mathcal{A} \rightarrow [0,1]$ such that $\pi(x,a)=P(a\mid x)$. In the case of a deterministic policy, the probability distribution collapses to a single action value.
We can associate a \emph{value function} with a particular policy $\pi$, $V^{\pi}(x)=\mathbb{E}_{\pi} [ \sum_{t=0}^{\infty} \gamma^t R(x_t,a_t) \vert x_0=x ]$, where the expectation also considers the stochasticity in the policy. Then, the optimal policy $\pi^*$ is defined as the policy such that the associated value function $V^{*}(x)$ is greater or equal than the value function of any other policy, that is $V^{*}(x) = \sup_\pi V^{\pi}(x)$. The optimal value function satisfies the Bellman equations:
\begin{equation}
\label{eq:bellmanv}
V^*(x)=\max_{a\in\mathcal{A}} \left [ R(x,a) + \gamma \sum_{y\in\mathcal{X}} P(y\mid x,a) V^*(y) \right ]
\end{equation}
We can also associate an action-value function, or \emph{Q-function}, with each policy
\begin{equation}
Q^{\pi}(x,a)=\mathbb{E}_{\pi} \left [ \sum_{t=0}^{\infty} \gamma^t R(x_t,a_t) \middle \vert x_0=x, a_0=a \right ]
\end{equation}
where $a_t$ is generated by following policy $\pi$ for $t>0$. The Q-function can also be updated from the Bellman equation:
\begin{equation}
\label{eq:bellmanq}
Q^*(x,a)= \left [ R(x,a) + \gamma \sum_{y\in\mathcal{X}} P(y\mid x,a) V^*(y) \right ]
\end{equation}
Therefore, the optimal policy can be computed as
\begin{equation}
\pi^*(x)= \arg \max_{a\in\mathcal{A}} Q^*(x,a)
\label{eq:greedypi}
\end{equation}
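A tabular MDP can be solved by iterating the Bellman equations above until convergence. The following is a minimal value-iteration sketch (array shapes are illustrative):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Compute V*, Q* and a greedy policy for a tabular MDP.
    P: (X, A, X) transition probabilities, R: (X, A) rewards."""
    X, A, _ = P.shape
    V = np.zeros(X)
    while True:
        # Bellman backup: Q(x,a) = R(x,a) + gamma * sum_y P(y|x,a) V(y)
        Q = R + gamma * np.einsum('xay,y->xa', P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new, Q, Q.argmax(axis=1)
```

The backup is a $\gamma$-contraction, so the iteration converges for any $\gamma \in [0,1)$.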
\subsection{Differentiable Markov decision processes}
One of the limitations of the Bellman formulation of MDPs is the non-differentiability of the maximum function. Thus, many algorithms replace the maximum with a softmax function \cite{Ziebart__2010_6590}. For example, we can replace equation \eqref{eq:greedypi} by its softmax version, the Boltzmann policy:
\begin{equation}
\label{eq:boltpi}
\pi^*(a\mid x)=\frac{e^{Q^*(x,a)/\eta}}{\sum_{b\in\mathcal{A}}e^{Q^*(x,b)/\eta}}
\end{equation}
where $\eta$ is the Boltzmann temperature. If the temperature $\eta \rightarrow 0^+$, then equation \eqref{eq:boltpi} becomes equivalent to equation \eqref{eq:greedypi}. If $\eta \rightarrow \infty$ then $\pi(x)$ becomes a uniform random walk.
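The two temperature limits are easy to check numerically. A small sketch for a single state, with hypothetical Q-values:

```python
import numpy as np

def boltzmann(Q_row, eta):
    """pi(a|x) for one state from its Q-values at temperature eta."""
    z = (Q_row - Q_row.max()) / eta   # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

q = np.array([1.0, 0.0, 0.5])
cold = boltzmann(q, 1e-3)   # eta -> 0+: concentrates on the argmax
hot = boltzmann(q, 1e3)     # eta -> infinity: close to uniform
```

Subtracting the maximum before exponentiating leaves the policy unchanged but avoids overflow at small temperatures.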
\subsection{Inverse reinforcement learning}
\label{sec:irl}
As stated before, the inverse reinforcement learning (IRL) problem
consists of learning the reward function of a reward-less MDP, that is, MDP\textbackslash $R$, given a set of trajectories from an expert or teacher. Formally, in the IRL problem we are given a dataset
\begin{equation}
\mathcal{D} =\left\{ \left( x_i , a_i \right)\right\}_{i=1}^M
\end{equation}
containing observations of an expert agent acting in the MDP
$\mathcal{M} = ( \mathcal{X}, \mathcal{A}, P, R, \gamma )$. The dataset can contain full or partial trajectories of the expert, or even some sparse action selections.
We assume the expert acts near-optimally trying to solve a certain task encoded by the reward function $R$. Both the task and the reward signal are unknown to the learning agent. Her goal is to find a reward function that explains the observed behavior of the expert.
The problem of IRL is, by definition, ill-posed \cite{Ng00ICML} because different rewards can produce the same behavior \cite{Ng99rewshap}; that is, a demonstration cannot generate a single reward signal, nor discriminate among an infinite set of reward functions. Furthermore, the set of solutions for a demonstration includes degenerate cases, such as a flat reward for every state-action pair \cite{Ng00ICML}. Also, there can be rewards that do not depend directly on the state of the system, but on some intrinsic parameters of the agent \cite{Singh04intrRL}; or the agent might not fully observe the system, in which case the direct problem becomes a partially observable MDP (POMDP) \cite{Cassandra1994,Chot2011}.
At this point it is important to note that, apart from the sums in the value function, there is no other assumption in this paper about whether the spaces are discrete or continuous. In fact, many algorithms that were designed for discrete spaces have been applied in continuous setups, provided that there is a planning algorithm to solve the direct problem, just by replacing the sums with the corresponding integrals \cite{Abbeel2008,Krishnamurthy2010,Ziebart__2010_6590}.
\subsubsection{Inverse reinforcement learning as convex optimization}
As presented in \cite{Neu09ML}, many algorithms for apprenticeship learning based on IRL share a common formulation \cite{Abbeel04icml,Neu07uai,Ratliff06icml,Syed2007,Boularias2010}. Basically, the objective is to find the reward that maximizes the similarity between the expert's and the learner's behavior:
\begin{equation}
\label{eq:dissimilarity}
R^* = \arg \max_R J(\pi_R,\mathcal{D})
\end{equation}
Those algorithms also include some extra assumptions that constrain the admissible reward set, in order to reduce the effects of the estimation being ill-posed. The most common assumption is that the reward function is a linear combination of basis functions $\phi(x,a)$, also called state features:
\begin{equation}
\label{eq:features}
R_{\theta}(x,a) = \sum_{i=1}^N\theta_i \phi_i(x,a)
\end{equation}
where $\theta$ is a vector of feature weights. Then, the accumulated reward of equation \eqref{eq:valuefunction} can be rewritten in matrix form as $Q^\pi(x,a)=\mathbf{\theta}^\top \Phi^\pi(x,a)$, where
\begin{equation}
\label{eq:expfeat}
\Phi^\pi_i(x,a) = \mathbb{E}_\pi\left[\sum_{t=0}^\infty \gamma^t \phi_i(x_t,a_t) \middle \vert x_0=x,\, a_0=a \right]
\end{equation}
The selection of the features might be tricky, depending on the problem. In some works, the IRL problem is formulated by directly trying to find an arbitrary $R(x,a)$. This is equivalent to using the indicator functions as features, $\phi_i(x,a)=\mathbb{I}_i(x,a)$. Some authors have also tried to learn the features as part of the IRL problem \cite{Krishnamurthy2010,Levine2010}.
As pointed out by \cite{Ratliff06icml} this estimation resembles the problem of structured learning \cite{BakIr2007}. The main hypothesis of \cite{Taskar2003} is that, for a certain kind of combinatorial problems in the form of equation \eqref{eq:features}, computing the likelihood function is intractable. Therefore, they propose the max-margin method, which approximates the correct solution with polynomial complexity. In fact, the work of \cite{Neu09ML} shows that under certain conditions, the problems of structured learning and IRL are equivalent.
In this paper, we show that for IRL problems we can obtain an alternative good approximation of the likelihood function with better performance than the heuristic proposed for structured learning.
\subsubsection{Inverse reinforcement learning as probabilistic inference}
The problem of IRL can also be solved using Bayesian inference. Ramachandran and Amir \cite{Ramachandran07ijcai} presented an algorithm called Bayesian inverse reinforcement learning, where they consider the unknown reward function as a stochastic variable which can be inferred from the observations of the demonstration, $ P(R \mid \mathcal{D}) \propto P(\mathcal{D} \mid R) P(R) $. Then, they introduce the following likelihood model
\begin{equation}
\label{eq:rewardlik}
P(\mathcal{D} \mid R) = \prod_i P(x_i,a_i \mid R) \propto e^{\alpha \sum_i Q^*(x_i,a_i)}
\end{equation}
where $\alpha$ is a parameter of the distribution that represents the confidence on the expert. The likelihood of each pair $(x,a)$ is equivalent to the Boltzmann policy from equation \eqref{eq:boltpi} assuming that the confidence is the inverse of the Boltzmann temperature $\alpha = 1/\eta$. Although for apprenticeship learning, the computation of a full distribution of rewards might seem excessive, it has some advantages as being able to use the uncertainty in the estimation in an active learning framework \cite{macl09airl}.
\subsubsection{Inverse reinforcement learning as density estimation}
Although they are not within the scope of this paper, it is worth mentioning a family of algorithms rooted in the computation of the KL-divergence (or relative entropy) between the optimal policy and the passive dynamics of the system \cite{Boularias2011,Krishnamurthy2010,Ziebart08aaai}. In contrast with the two approaches described above, this family of algorithms does not need to solve the direct planning problem. Instead, it requires computing the distributions of trajectories following the passive dynamics of the system and the potentially optimal dynamics of the expert. Furthermore, it is not clear whether those algorithms are comparable to other IRL algorithms, since the problems they solve are different. For example, \cite{Neu09ML} shows that the dissimilarity function that those algorithms optimize does not use optimal policies. Besides, \cite{Krishnamurthy2010} shows that those methods fall within the framework of linearly-solvable MDPs.
\section{Introduction}
In \cite{BMP}, the diffraction properties of the visible points of
$\mathbb Z^2$ and the
$k$th-power-free numbers were studied. It was shown that these sets
have positive, pure-point, translation-bounded \emph{diffraction spectra} with countable,
dense support. This is of interest because these sets
fail to be Delone sets: they are uniformly discrete (subsets of lattices, in fact)
but not relatively dense. The lack of relative denseness means that these
sets have arbitrarily large `holes'. In \cite{PH}, it was shown that
the above results remain true for the larger class of $k$th-power-free
(or $k$-free for short)
points of arbitrary lattices in $n$-space. Furthermore, it was shown
there that these sets have positive \emph{patch counting entropy} but
zero \emph{measure-theoretical entropy} with respect to a measure that is defined in terms of the
`tied' frequencies of patches in space.
Recent independent results by Sarnak~\cite{Sarnak} and by Cellarosi and
Sinai~\cite{CS} on the natural dynamical system associated with the square-free (resp.\
$k$th-power-free) integers (in particular on the ergodicity of the
above frequency measure and the dynamical spectrum, but also on the topological dynamics) go beyond what was covered
in~\cite{PH}. The aim of this short note is to generalise these
results to the setting of $k$-free lattice points.
\section{$k$-free points}
The \emph{$k$-free points} $V=V(\varLambda,k)$ of a lattice
$\varLambda\subset\mathbb R^n$ are the points with the property that the
greatest common divisor of their
coordinates in any lattice basis is not divisible by any non-trivial
$k$th power of an integer. Without restriction, we shall assume that
$\varLambda$ is
unimodular, i.e.\ $|\det(\varLambda)|=1$. One can see that $V$ is
\emph{non-periodic}, i.e.\ $V$ has no non-zero translational symmetries. As particular cases, we have
the visible points (with respect to the origin $0$) of $\varLambda$ (with $n\ge2$ and $k=1$) and the
$k$-free integers (with $\varLambda=\mathbb Z$), both treated in
\cite{BMP} and \cite{BG}. We exclude the trivial case $n=k=1$, where $V$ consists
of just the two points of $\varLambda$ closest to $0$ on either side.
\begin{center}
\begin{figure}
\centerline{\epsfysize=0.48\textwidth\epsfbox{vispoints.eps}}
\caption{A central patch of the visible points of the square lattice
$\mathbb Z^2$. Note the invariance with respect to $\operatorname{GL}(2,\mathbb Z)$.}
\label{fig:vis}
\end{figure}
\end{center}
Let $v_n=\operatorname{vol}(B_1(0))$, so that $v_nR^n$ is the volume
of the open ball $B_R(0)$ of radius $R$ about $0$. If $Y\subset\varLambda$, its `tied' \emph{density} $\operatorname{dens}(Y)$ is defined by
$$
\operatorname{dens}(Y):=\lim_{R\to\infty}\frac{|Y\cap B_R(0)|}{v_nR^n},
$$
when the limit exists. The following result is well known.
\begin{theorem}{\rm \cite[Cor.~1]{PH}}
One has $\operatorname{dens}(V)=1/\zeta(nk)$,
where $\zeta$ denotes
Riemann's $\zeta$-function.
\qed
\end{theorem}
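For $\varLambda=\mathbb Z$ and $k=2$, the theorem reduces to the classical fact that the square-free integers have density $1/\zeta(2)=6/\pi^2\approx 0.6079$, which is easy to check numerically (the truncation $N$ below is arbitrary):

```python
def is_squarefree(m):
    """True iff no prime square divides m (checking all d >= 2 suffices)."""
    d = 2
    while d * d <= m:
        if m % (d * d) == 0:
            return False
        d += 1
    return True

N = 100_000
count = sum(1 for m in range(1, N + 1) if is_squarefree(m))
density = count / N   # approaches 6/pi^2 ~ 0.6079 as N grows
```

The convergence is fast: already at $N=10^5$ the empirical density agrees with $6/\pi^2$ to four decimal places.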
An application of the Chinese Remainder Theorem immediately gives the
following result on the occurrence of `holes' in $V$.
\begin{prop}{\rm \cite[Prop. 1]{PH}}\label{holes}
$V$ is uniformly discrete, but has arbitrarily large holes. Moreover,
for any $r>0$, the set of centres of holes in $V$ of inradius at least $r$
contains a coset of $m^k\varLambda$ in $\varLambda$
for some $m\in\mathbb N$.
\qed
\end{prop}
Given a radius $\rho>0$ and a point $t\in\varLambda$,
the \emph{$\rho$-patch} of $V$ at $t$ is
\[(V-t)\cap B_\rho(0),\]
the translation to the origin of the part of $V$ within a distance
$\rho$ of $t$. We denote by $\mathcal A(\rho)$ the (finite) set of all
$\rho$-patches of $V$, and by $N(\rho)=|\mathcal A(\rho)|$ the number of
distinct $\rho$-patches of $V$. In view of the binary configuration
space interpretation, and following~\cite{PH}, the \emph{patch counting entropy} of
$V$ is defined as
$$
h_{\rm pc}(V):=\lim_{\rho\to\infty}\frac{\log_2N(\rho)}{v_n\rho^n}.
$$
It can be shown by a classic subadditivity argument that this limit exists.
Following~\cite{BMP,PH}, the `tied' \emph{frequency} $\nu(\mathcal P)$
of a $\rho$-patch $\mathcal P$ of $V$ is defined by
\begin{equation}\label{freqdef}
\nu(\mathcal
P):=\operatorname{dens}\big(\{t\in\varLambda\,\mid\,(V-t)\cap B_\rho(0)=\mathcal P\}\big),
\end{equation}
which can indeed be seen to exist. Moreover, one has
\begin{theorem}{\rm \cite[Thms.~1 and~2]{PH}}\label{freq}
Any $\rho$-patch $\mathcal P$ of $V$ occurs with positive frequency, given by
\[\nu(\mathcal P)=\sum_{\mathcal F\subset (B_{\rho}(0)\cap\varLambda)\setminus \mathcal P}(-1)^{|\mathcal F|}
\prod_p\left(1-\frac{|(\mathcal P\cup\mathcal
F)/p^k\varLambda|}{p^{nk}}\right),\]
where $p$ runs through all primes.
\qed
\end{theorem}
\section{Diffraction}
Recall that the \emph{dual}\/ or \emph{reciprocal
lattice}\/ $\varLambda^*$ of $\varLambda$ is
\[
\varLambda^*:=\{y \in\mathbb{R}^n\,\mid\, y\cdot x\in\mathbb Z
\mbox{ for all } x\in\varLambda\}.
\]
Further, the
\emph{denominator} of a point $y$ in the $\mathbb Q$-span $\mathbb
Q\varLambda^*$ of $\varLambda^*$ is defined as
$$
\operatorname{den}(y):=\min\{m\in\mathbb N\,\mid\,m y\in\varLambda^*\}.
$$
\begin{theorem}{\rm \cite[Thms.~3 and 5]{BMP} \cite[Thm.~8]{PH} \cite{BG}}\label{thdiff}
The natural diffraction measure $\widehat{\gamma}$ of the autocorrelation
$\gamma$ of\/ $V$ exists and is a positive,
translation-bounded, pure-point measure
which is concentrated on the set of points in $\mathbb Q\varLambda^*$
with $(k+1)$-free denominator, the Fourier--Bohr spectrum of $\gamma$,
and whose intensity is
\[
\Bigg(\frac{1}{\zeta(nk)}\prod_{p\mid q}\frac{1}{p^{nk}-1}\Bigg)^2
\]
at any point with such
a denominator $q$.
\qed
\end{theorem}
\begin{center}
\begin{figure}
\centerline{\epsfysize=0.48\textwidth\epsfbox{vispodiff.eps}}
\caption{Diffraction $\widehat{\gamma}$ of the visible points of $\mathbb Z^2$. Shown are
the intensities with $I(y)/I(0)\ge 10^{-6}$ and $y\in [0,2]^2$. Its lattice of periods is $\mathbb Z^2$, and $\widehat{\gamma}$ turns out to
be $\operatorname{GL}(2,\mathbb Z)$-invariant.}
\label{fig:diff}
\end{figure}
\end{center}
\section{The hull of $V$}
Endowing the power set $\{0,1\}^{\varLambda}$ of the lattice $\varLambda$ with the
product topology of the discrete topology on $\{0,1\}$, it becomes a
compact topological space (by Tychonov's theorem). This
topology is in fact generated by the metric $\rm d$ defined by
$$
{\rm d}(X,Y):=\min\Big\{1,\inf\big\{\epsilon >0\,\mid\,X\cap
B_{1/\epsilon}(0)=Y\cap
B_{1/\epsilon}(0)\big\}\Big\}
$$
for subsets $X,Y$ of $\varLambda$; cf.~\cite{Sol}. Then, $(\{0,1\}^{\varLambda},\varLambda)$ is
a \emph{topological dynamical system}, i.e.\ the natural translational
action of the group $\varLambda$
on $\{0,1\}^{\varLambda}$ is continuous.
Let $X$ now be a subset of
$\varLambda$. The closure
$$\mathbb X(X):=\overline{\{t+X\,\mid\,t\in \varLambda\}}$$ of the set of
lattice translations $t+X$ of $X$ in
$\{0,1\}^{\varLambda}$ is the \emph{$($discrete\/$)$ hull} of $X$ and gives rise to the topological dynamical system
$(\mathbb X(X),\varLambda)$, i.e.\ $\mathbb X(X)$ is a compact topological space on which
the action of $\varLambda$ is continuous.
By construction of the hull, Proposition~\ref{holes} implies
\begin{lemma}\label{holes2}
For any $r>0$ and any element $X\in\mathbb X(V)$, the set of centres of holes in $X$ of inradius at least $r$
contains a coset of $m^k\varLambda$ in $\varLambda$
for some $m\in\mathbb N$.
\qed
\end{lemma}
For a $\rho$-patch $\mathcal P$ of $V$, denote by $C_\mathcal{P}$ the set of elements of $\mathbb X(V)$ whose $\rho$-patch at
$0$ is $\mathcal P$, the so-called \emph{cylinder set} defined by the
$\rho$-patch $\mathcal P$. Note that these cylinder sets form a
basis of the topology of $\mathbb X(V)$.
It is clear from the existence of holes of unbounded inradius in $V$ that
$\mathbb X(V)$ contains the empty set (the configuration of $0$ on
every lattice point). Denote by
$\mathbb A$
the set of \emph{admissible} subsets $A$
of $\varLambda$, i.e.\ subsets $A$ of $\varLambda$ having the property that,
for every prime $p$, $A$ does \emph{not} contain a full set of
representatives modulo $p^k\varLambda$. In other words, $A$ is
admissible if and only if
$|A/p^k\varLambda|<p^{nk}$ for any prime $p$, where
$A/p^k\varLambda$ denotes the set of cosets of $p^k\varLambda$ in $\varLambda$ that are
represented in $A$. Since $V\in\mathbb A$ (otherwise some point of $V$ would lie in $p^k\varLambda$
for some prime $p$, a contradiction) and since $\mathbb A$ is a $\varLambda$-invariant and
closed subset of $\{0,1\}^{\varLambda}$, it is clear that $\mathbb X(V)$
is a subset of $\mathbb A$. By~\cite[Thm.~2]{PH}, the other inclusion is also
true. One thus obtains the
following characterisation of the hull of $V$, which was first shown by Sarnak~\cite{Sarnak} for the
special case of the square-free integers.
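In the simplest case $\varLambda=\mathbb Z$, $n=1$, $k=2$ (so that $V$ is the set of square-free integers), the admissibility condition is easy to probe numerically. The following check of ours (with an ad hoc trial-division square test) confirms for small primes that the square-free integers, and hence each of their subsets, miss exactly the coset $p^2\mathbb Z$ while realising every other coset:

```python
def is_squarefree(m):
    """True iff no square d^2 (d >= 2) divides m."""
    d = 2
    while d * d <= m:
        if m % (d * d) == 0:
            return False
        d += 1
    return True

V = [m for m in range(1, 2001) if is_squarefree(m)]

for p in (2, 3, 5, 7):
    hit = {v % (p * p) for v in V}
    assert 0 not in hit                # the coset p^2*Z is always missed
    assert len(hit) == p * p - 1       # ... and every other coset occurs
```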
\begin{theorem}{\rm \cite[Thm.~6]{PH}}
One has $\mathbb X(V)=\mathbb A$.
\qed
\end{theorem}
In particular, $\mathbb X(V)$ contains
\emph{all} subsets of $V$ (and their translates). In other words, $V$
is an interpolating set for $\mathbb X(V)$ in the sense
of~\cite{W}, i.e. $$\mathbb X(V)|^{}_V\,\,:=\{X\cap V\,\mid\,
X\in\mathbb X(V)\}=\{0,1\}^V.$$
It follows that $V$ has patch counting entropy at least
$\operatorname{dens}(V)=1/\zeta(nk)$. In fact, one has more.
\begin{theorem}{\rm \cite[Thm.~3]{PH}~\cite[Thm.~1]{BLR}}\label{hpc}
One has $h_{\rm pc}(V)=1/\zeta(nk)$. Moreover, $h_{\rm pc}(V)$ coincides with the
topological entropy of the dynamical system $(\mathbb X(V),\varLambda)$.
\qed
\end{theorem}
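For $n=1$, $k=2$, Theorem~\ref{hpc} gives $h_{\rm pc}(V)=1/\zeta(2)=6/\pi^2\approx 0.6079$, which also equals $\operatorname{dens}(V)$. The density is easy to confirm numerically with a simple square sieve (a sanity check of ours, not part of the argument):

```python
import math

N = 100000
squarefree = [True] * (N + 1)
d = 2
while d * d <= N:
    for m in range(d * d, N + 1, d * d):
        squarefree[m] = False   # m is divisible by the square d^2
    d += 1

density = sum(squarefree[1:]) / N
assert abs(density - 6 / math.pi**2) < 1e-3
```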
\section{Topological dynamics}
By construction, $(\mathbb X(V),\varLambda)$ is topologically
transitive~\cite{A,G,W}, as it is the orbit closure of one of its
elements (namely $V$). Equivalently, for any two non-empty open subsets $U$
and $W$ of $\mathbb X(V)$, there is an element $t\in\varLambda$ such
that
$$
U\cap (W+t)\neq\varnothing.
$$
In accordance with Sarnak's findings~\cite{Sarnak} for
square-free integers, one has the following results.
\begin{theorem}\label{c1}
The topological dynamical system $(\mathbb X(V),\varLambda)$ has the following properties.
\begin{itemize}
\item[\rm (a)]
$(\mathbb X(V),\varLambda)$ is topologically ergodic with positive topological
entropy equal to $1/\zeta(nk)$.
\item[\rm (b)]
$(\mathbb X(V),\varLambda)$ is proximal $($i.e., for any $X,Y\in\mathbb
X(V)$ one has $\inf_{t\in\varLambda}{\rm d}(X+t,Y+t)=0$$)$ and $\{\varnothing\}$ is
the unique $\varLambda$-minimal subset of $\mathbb
X(V)$.
\item[\rm (c)]
$(\mathbb X(V),\varLambda)$ has no non-trivial topological Kronecker
factor $($i.e., minimal equicontinuous factor\/$)$. In particular, $(\mathbb
X(V),\varLambda)$ has trivial topological point spectrum.
\item[\rm (d)]
$(\mathbb X(V),\varLambda)$ has a non-trivial joining with the Kronecker system $K=(G,\varLambda)$, where $G$ is the compact
Abelian group $\prod_p (\varLambda/p^k\varLambda)$ and $\varLambda$ acts on $G$
via addition on the diagonal, $g\mapsto
g+(\bar{x},\bar{x},\dots)$, with $g\in G$ and $x\in\varLambda$. In
particular, $(\mathbb X(V),\varLambda)$ fails to be topologically weakly mixing.
\end{itemize}
\end{theorem}
\begin{proof}
The positivity of the topological entropy follows from
Theorem~\ref{hpc} since $1/\zeta(nk)>0$. For the
topological ergodicity~\cite{A}, one has to show that, for any two non-empty
open subsets $U$ and $W$ of $\mathbb X(V)$, one has
\begin{equation}\label{tergod}
\limsup_{R\to\infty}\frac{\sum_{t\in\varLambda\cap B_R(0)}\theta\big(U\cap
(W+t)\big)}{v_nR^n}>0,
\end{equation}
where $\theta(\varnothing)=0$ and $\theta(A)=1$ for non-empty subsets
$A$ of $\mathbb X(V)$. It certainly suffices to verify~\eqref{tergod}
for cylinder sets. To this end, let $\mathcal P$ and $\mathcal Q$ be
patches of $V$. Then, a suitable translate $V+s$ is an element of $C_\mathcal P$. Since
\begin{eqnarray*}
&&\hspace{-2em}\limsup_{R\to\infty}\frac{\sum_{t\in\varLambda\cap
B_R(0)}\theta\big(C_\mathcal P\cap
(C_\mathcal Q+t)\big)}{v_nR^n}\\
&\ge&\limsup_{R\to\infty}\frac{\sum_{t\in\varLambda\cap
B_R(0)}\theta\big(\{V+s\}\cap
(C_\mathcal Q+t)\big)}{v_nR^n}\\
&=&\limsup_{R\to\infty}\frac{\sum_{t\in\varLambda\cap
B_R(0)}\theta\big(\{V\}\cap
(C_\mathcal Q+t)\big)}{v_nR^n}\,=\,\nu(\mathcal Q),
\end{eqnarray*}
the assertion follows from Theorem~\ref{freq}. This proves (a).
For part (b), one can easily derive from Lemma~\ref{holes2} that, for any $\rho>0$
and any two
elements $X,Y\in\mathbb
X(V)$, there is a translation $t\in\varLambda$ such that
$$(X+t)\cap B_\rho(0)=(Y+t)\cap B_\rho(0)=\varnothing,$$ i.e.\ both $X$ and $Y$
have the empty $\rho$-patch at $-t$. It follows that ${\rm d}(X+t,Y+t)\le
1/\rho$ and thus the proximality of the system follows. Similarly, the assertion on the unique $\varLambda$-minimal
subset $\{\varnothing\}$ follows from the fact that any element of $\mathbb
X(V)$ contains arbitrarily large `holes' and thus any non-empty
subsystem contains $\varnothing$.
Since Kronecker systems are distal, the first assertion of part (c) is an immediate consequence of the
proximality of $(\mathbb X(V),\varLambda)$. Although this immediately
implies that $(\mathbb X(V),\varLambda)$ has trivial
topological point spectrum, we add the following independent argument. Let $f\!:\, \mathbb
X(V)\longrightarrow \mathbb C$ be a continuous eigenfunction, in
particular $f\not\equiv 0$.
Let $\lambda_t\in\mathbb C$ be the eigenvalue with respect to $t\in\varLambda$,
i.e.\ $f(X-t)=\lambda_t f(X)$ for any $X\in\mathbb X(V)$, in
particular
\begin{equation}\label{emptyset}
f(\varnothing)=\lambda_tf(\varnothing).
\end{equation}
Since
$\varLambda$ acts by homeomorphisms on the compact space $\mathbb X(V)$ and since $(\mathbb
X(V),\varLambda)$ is topologically transitive, it is clear that
$|\lambda_t|=1$ and that $|f|$ is a non-zero constant. We shall now show
that even $\lambda_t=1$ for any $t$ and that $f$ itself is a non-zero constant. By
Lemma~\ref{holes2}, for any $X\in\mathbb X(V)$, one can choose a sequence $(t_n)_{n\in\mathbb N}$
in $\varLambda$ such that $\lim_{n\rightarrow
\infty}(X-t_n)=\varnothing$. Since $f$ is continuous, we have
\begin{equation}\label{emptyset2}f(\varnothing)=\lim_{n\rightarrow
\infty}f(X-t_n)=\lim_{n\rightarrow \infty}\lambda_{t_n}f(X).
\end{equation}
Assuming
that $f(\varnothing)=0$ thus implies $f\equiv 0$, a
contradiction. Hence $f(\varnothing)\neq 0$ and $\lambda_t=1$ for any
$t\in\varLambda$ by~\eqref{emptyset}. Further, by~\eqref{emptyset2},
one has
$f(X)=f(\varnothing)$ for any $X\in\mathbb X(V)$.
For part (d), one can verify that a non-trivial joining~\cite{G}
of $(\mathbb X(V),\varLambda)$ with the Kronecker system $K$ is given
by
$$
W:=\bigcup_{X\in\mathbb X(V)}\Big(\{X\}\times \prod_p
(\varLambda\setminus X) /p^k\varLambda\Big).
$$
Since the Kronecker system $K$ is minimal and distal, a well-known
disjointness theorem by Furstenberg~\cite[Thm. II.3]{F} implies that
$(\mathbb X(V),\varLambda)$ fails to be topologically weakly mixing.
\end{proof}
\section{Measure-theoretic dynamics}
The frequency function $\nu$ from~\eqref{freqdef}, regarded as a function on the
cylinder sets by setting $\nu(C_\mathcal
P):=\nu(\mathcal P)$, is finitely additive on the
cylinder sets with $$\nu(\mathbb X(V))=\sum_{\mathcal P\in\mathcal
A(\rho)}\nu(C_{\mathcal P})=|\det(\varLambda)|=1.$$ Since the
family of cylinder sets is a (countable) semi-algebra that generates the
Borel $\sigma$-algebra on $\mathbb X(V)$ (i.e.\ the smallest
$\sigma$-algebra on $\mathbb X(V)$ which contains the open subsets of
$\mathbb X(V)$), it extends uniquely to a probability measure on
$\mathbb X(V)$; cf.~\cite[\S 0.2]{Walters}. Moreover, this probability measure is
$\varLambda$-invariant by construction. For part (b) of the following
claim, note that, in the case of $V$,
the Fourier--Bohr spectrum is itself a group and
compare~\cite[Prop. 17]{BLvE}. Turning to the measure-theoretic
dynamical system $(\mathbb X(V),\varLambda,\nu)$, one has
\begin{theorem}$(\mathbb X(V),\varLambda,\nu)$ has the following properties.
\begin{itemize}
\item[\rm (a)]
The $\varLambda$-orbit of $V$ in $\mathbb X(V)$ is $\nu$-equidistributed,
i.e., for any function $f\in C(\mathbb X(V))$, one has
\[
\lim_{R\to\infty}\frac{1}{v_nR^n}\sum_{x\in\varLambda\cap
B_R(0)}f(V+x)=\int_{\mathbb X(V)}f(X)\,\,{\rm d}\nu(X).
\]
In other words, $V$ is $\nu$-generic.
\item[\rm (b)]
$(\mathbb X(V),\varLambda,\nu)$ is ergodic, deterministic
$($i.e., it is of zero measure entropy\/$)$ and has pure-point dynamical spectrum given by
the Fourier--Bohr spectrum of the autocorrelation $\gamma$, as
described in Theorem~$\ref{thdiff}$.
\item[\rm (c)]
The Kronecker system $K_{\nu}=(X_K,\varLambda,\nu)$, where $X_K$ is the
compact Abelian
group $\prod_p (\varLambda/p^k\varLambda)$, $\varLambda$ acts on $X_K$ via
addition on the diagonal $($cf.\ Theorem\/~$\ref{c1}(\rm{d}))$ and $\nu$ is
Haar measure on $X_K$, is metrically
isomorphic to $(\mathbb X(V),\varLambda,\nu)$.
\end{itemize}
\end{theorem}
\begin{proof}
For part (a), it suffices to show this for the characteristic
functions of cylinder sets of finite patches, as their span is dense
in $C(\mathbb X(V))$. But for such functions, the claim is clear as
the left hand side is the patch frequency as used for the definition
of the measure $\nu$.
For the ergodicity of $(\mathbb X(V),\varLambda,\nu)$, one has to show
that
$$
\lim_{R\rightarrow\infty}\frac{1}{v_nR^n}\sum_{x\in\varLambda\cap
B_R(0)}\nu\big((C_\mathcal P+x)\cap C_\mathcal Q\big)=\nu(C_\mathcal P)\nu(C_\mathcal Q)
$$
for arbitrary cylinder sets $C_\mathcal P$ and $C_\mathcal Q$;
compare~\cite[Thm. 1.17]{Walters}. The latter in turn follows from a straightforward calculation using
Theorem~\ref{freq} and the definition of the measure $\nu$ together
with the Chinese Remainder Theorem. In fact, for technical
reasons, it is better to work with a different semi-algebra that also
generates the Borel $\sigma$-algebra on $\mathbb X(V)$~\cite{H}.
Vanishing measure-theoretical entropy (relative to $\nu$) was shown
in~\cite[Thm.~4]{PH}, which is in line with the results
of~\cite{BLR}. As a consequence of part (a), the individual
diffraction measure of $V$ according to Theorem~\ref{thdiff} coincides
with the diffraction measure of the system $(\mathbb
X(V),\varLambda,\nu)$ in the sense of~\cite{BL}. Then, pure point
diffraction means pure point dynamical spectrum~\cite[Thm. 7]{BL},
and the latter is the group generated by the Fourier--Bohr spectrum;
compare~\cite[Thm. 8]{BL} and~\cite[Prop. 17]{BLvE}. Since the intensity
formula of Theorem~\ref{thdiff} shows that there are no extinctions,
the Fourier--Bohr spectrum here is itself a group, which completes
part (b).
The Kronecker system can now be read off from the model set
description, which provides the compact Abelian group. For the cases
$k=1$ and $d\ge 2$ as well as $k\ge 2$ and $d=1$, the construction is
given in~\cite{BMP}; see also~\cite[Ch. 5a]{Sing} for an alternative
description. The general formalism is developed in~\cite{BLM}, though
the torus parametrisation does not immediately apply. Some extra work
is required here to establish the precise properties of the
homomorphism onto the compact Abelian group.
\end{proof}
Let us mention that our approach is complementary to that
in~\cite{CS}. There, ergodicity and pure point spectrum are consequences of
determining all eigenfunctions, then concluding via $1$ being a simple
eigenvalue and via the basis property of the eigenfunctions. Here, we
establish ergodicity of the measure $\nu$ and afterwards use the equivalence
theorem between pure point dynamical and diffraction spectrum~\cite[Thm. 7]{BL},
hence employing the diffraction measure of $V$ calculated in~\cite{BMP,PH}.
\section*{Acknowledgements}
It is our pleasure to thank Peter Sarnak for valuable discussions. This work was supported by the German Research Foundation (DFG) within
the CRC~701.
\section{Introduction} \label{Intro}
{ \textit {Coherence}} is ubiquitous in quantum systems and is known to be the primary factor behind the emergence of phenomena that are dramatically different from those observed in the systems where it is absent. This fact was first realized in the context of wave optics, where the presence of coherence was observed to have some effects which were radically different from the phenomena usually observed in the domain of traditional geometrical optics \cite{Mandel,Boyd,cls}. More dramatic consequences of coherence, specifically that of quantum coherence which signifies the existence of superposition between quantum states, appeared in the last century with the advent of quantum mechanics in general and with the understanding of quantum interference in particular. The interest in quantum coherence further amplified with the advent of quantum computation and communication, where entangled states, which are nothing but non-separable superpositions in a tensor product space, play a crucial role.
Quantum coherence originates from the wave function description of quantum systems and cannot be described within the framework of classical physics. A consequence of quantum coherence is the existence of nonclassical states which have no classical analogue and can only be understood as uniquely quantum in character \cite{glaub,sudar}. These states are essential for the establishment of quantum supremacy (\cite{sup} and references therein). In fact, in the context of quantum information processing, quantum coherence {is being widely accepted as an important} resource \cite{coh_rev2,coh_rev1}, and thus it is important to quantify the amount of coherence present in a quantum state. {Recently,} Baumgratz et al. \cite{plenio} have provided a framework for the quantitative characterization of quantum coherence by {treating it} as a physical resource. The framework to quantify coherence is based on considering an incoherent basis and defining an incoherent state as one which is diagonal in that basis. Since the pioneering work of Baumgratz et al. \cite{plenio}, various measures of quantum coherence have been proposed \cite{str, rana, qir, yu, roc, yuan, winter, chitambar} and used \cite{coh_rev2,coh_rev1}. However, to date only the relative entropy of coherence, {$l1$ norm} of coherence, and a skew information based measure of coherence have been found to be satisfactory. It is worth noting here that all these measures are basis dependent quantities. In what follows, we will discuss these measures along with some other proposed measures of quantum coherence as well as a measure of first-order coherence \cite{optical_coh}, which can also be referred to as optical coherence as it resembles the measure of coherence used in optics.
Apart from the quantitative measures of coherence, the idea of quantum coherence has also been examined from various other interesting perspectives. For example, in the field of quantum thermodynamics, studies have been performed to understand the possibilities of the extraction of work from quantum coherence \cite{szilard}. Studies of low temperature thermodynamics with quantum coherence \cite{nat1,nat2} and the role of quantum coherence in energy transfer \cite{pre} have also been reported. In quantum biology \cite{bio1}, it has been shown that quantum coherence plays a major role in photosynthesis \cite{bio2}, and quantum coherence and entanglement are known to play an important role in the avian compass {in} migratory birds \cite{bio3}. Similarly, quantum coherence has various important applications in the field of quantum algorithms \cite{alg1, alg2} and quantum metrology \cite{met1,met2}.
Further, the close {relationship} among nonclassicality, entanglement, Bell nonlocality, and quantum coherence {leads} to {the} question: do the {known} limitations of the quantitative measures of nonclassicality \cite{adam,adam-non}, entanglement \cite{adam-ent}, steering \cite{adam-st}, and Bell nonlocality \cite{adam-bi} also apply to the quantitative measures of quantum coherence? In all earlier studies \cite{adam,adam-non,adam-ent,adam-st,adam-bi,adam-nm}, it has been observed that the ordering of quantum states based on the amount of a particular type of nonclassicality they contain (as measured by different quantitative measures of that nonclassical feature) is usually inconsistent. Therefore, here we attempt to answer a question: If a measure of coherence, {$p$,} indicates that state {$A$} has more coherence than state {$B$}, will that {also} mean that {$A$} will always be found to have more coherence even when coherence {is measured} using another quantitative measure, {$q$,} in the same basis? In other words, are the quantitative measures of quantum coherence monotones of each other? If not, are they connected with each other for {a subset} of a family of quantum states? Until now, no effort has been made to answer these questions satisfactorily. Motivated by these facts, here we aim to address some of the above mentioned hitherto unanswered questions related to the measures of quantum coherence and their interrelations using $X$ states \cite{xstate1, xstate2} as our example states, as these states are known to serve as a good test bed to {explore and} study the properties and applications of quantum coherence \cite{x_dynamics,xh,xp_p1, xp_p2, xp_p3, xp_p4, xp_uc1, xp_uc2, xp_nmr}.
In what follows, we will show that the quantitative measures of quantum coherence are not monotone of each other unless they are trivially dependent on each other, but are only related for a subset of a family of $X$ states, and such states define the boundary values of the measures of coherence. By obtaining the `relative' coherence (the amount of coherence using one measure relative to that from another measure) we could identify the states having maximum and minimum values of relative coherence. This in turn can help us to illuminate the connections between different measures of quantum coherence.
The rest of the paper is organized as follows. In Section \ref{measures}, we briefly review different measures of coherence. Thereafter, we introduce $X$ states and their properties in Section \ref{X states}. This is followed by our comparative study in Section \ref{our-res} to establish the relationship between different measures of coherence for the family of $X$ states. We briefly discuss first-order coherence for $X$ states in Section \ref{Op-coh} before concluding the paper in Section \ref{con}.
\section{Measures of coherence}\label{measures}
As mentioned above, Baumgratz et al. \cite{plenio} have recently provided a prescription for the quantitative characterization of quantum coherence by considering coherence as a physical resource. They proposed the following set of criteria (which we would refer to as Baumgratz et al.'s criteria) that every potential quantifier of coherence $(C)$ should satisfy:
\begin{itemize}
\item $(C1)$ Non-negativity: $C(\rho)\geq 0$ with equality if and only if $\rho$ is incoherent.
\item $(C2a)$ Monotonicity: $C$ does not increase under the application of completely positive and trace preserving incoherent operations, i.e., $C(\phi[\rho])\leq C(\rho)$, where $\phi$ is any completely positive and trace preserving incoherent operation.
\item $(C2b)$ Strong monotonicity: $C$ does not increase on an average under selective incoherent operations, i.e., $\underset{i}{\sum} q_{i} C(\rho_{i})\leq C(\rho)$, where $\rho_{i}=(K_{i}\rho K_{i}^{\dagger})/q_{i}$ are post measurement states with probabilities given by $ q_{i}= Tr[K_{i}\rho K_{i}^{\dagger}] $, and $K_{i}$ are incoherent Kraus operators.
\item $(C3)$ Convexity: $C$ does not increase under mixing, i.e., $ \sum_{i} p_{i} C(\rho_{i}) \geq C(\sum_{i} p_{i}\rho_{i})$.
\end{itemize}
Here, $\rho$ is the density operator corresponding to the quantum state. Based on the above criteria, many quantum coherence measures have been introduced. However, it is very difficult to prove the condition of strong monotonicity $(C2b)$ for them. Usually, it is sufficient to just prove the conditions $(C1)$, $(C2b)$ and $(C3)$ as $(C2a)$ is already implied by $(C2b)$ and $(C3)$. It is found that of all the existing well-known measures of coherence, only the relative entropy of coherence and $l1$ norm of coherence satisfy these criteria and hence these two measures of coherence serve as good quantifiers of coherence. Along with these, a recently introduced coherence measure based on skew information has also been shown to satisfy Baumgratz et al.'s criteria \cite{yu}. In this section, we briefly describe these three measures along with some other proposed quantifiers, with the aim of comparing them and finding out their interrelations and limitations. To begin with, we describe relative entropy of coherence.
\subsection{Relative entropy of coherence}
The relative entropy of coherence \cite{plenio} present in a quantum state represented by the density matrix $\rho$ is defined as
\begin{equation}\label{rel_ent}
C_{\rm{} rel}(\rho)= S(\rho_{\rm{} diag})- S(\rho),
\end{equation}
where $S(\rho )$ is the von Neumann entropy of $\rho$, and $\rho_{\rm{} diag} $ denotes the state obtained from $\rho $ by removing all the off-diagonal elements of $\rho$. Note that $C_{\rm{} rel}$ (\ref{rel_ent}) is a basis dependent quantity. Due to its similarity in form to that of the relative entropy of entanglement, $C_{\rm{} rel}$ has a physical meaning \cite{winter}. Specifically, it physically represents the optimal rate of the distilled maximally coherent states that can be produced by incoherent operations in the asymptotic limit of many copies of $\rho$. Interestingly, experimental measurement of this coherence quantifier can be performed without full quantum state tomography \cite{yu}.
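As an illustration (our own sketch, not taken from the original reference; entropies are in bits, i.e., base-2 logarithms), $C_{\rm rel}$ is straightforward to evaluate numerically:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                  # convention: 0 log 0 = 0
    return float(-np.sum(w * np.log2(w)))

def C_rel(rho):
    rho_diag = np.diag(np.diag(rho))  # dephased state in the chosen basis
    return von_neumann_entropy(rho_diag) - von_neumann_entropy(rho)

plus = np.full((2, 2), 0.5)           # |+><+|, maximally coherent qubit state
```

For the maximally coherent qubit state $|+\rangle$ this yields one bit of coherence, while any diagonal (incoherent) state yields zero.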
\subsection{$l1$ norm of coherence}
The $l1$ norm of coherence \cite{plenio} is given by
\begin{equation}\label{l1}
C_{l1}(\rho)= \sum _{i,j,i\neq j}|\rho_{i,j}|.
\end{equation}
This measure of coherence, which like $C_{\rm{} rel}$ (\ref{rel_ent}) is also basis dependent, is presently not known to have any analogue in the resource theory of entanglement \cite{coh_rev2}. While efforts had been made earlier to find a physical interpretation of (\ref{l1}), recently it has been reported that for a multi-slit interference setup coherence as defined by (\ref{l1}) can be experimentally measured \cite{bera,bagan, tania, biswas, sandeep}.
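A direct implementation (ours) makes the basis dependence explicit: the measure is just the absolute sum of the off-diagonal entries of $\rho$ in the chosen basis, so a change of basis changes its value:

```python
import numpy as np

def C_l1(rho):
    """Sum of |rho_ij| over all off-diagonal entries."""
    rho = np.asarray(rho, dtype=complex)
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

plus = np.full((2, 2), 0.5)                    # |+><+| in the computational basis
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard change of basis
```

In the computational basis $C_{l1}(|+\rangle\langle+|)=1$, whereas after the Hadamard rotation the same state becomes $|0\rangle\langle 0|$ and its $l1$ coherence vanishes.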
\subsection{Skew information based measure of coherence}
In 2014, Girolami \cite{qir} introduced a new experimentally accessible measure of coherence known as $k-$coherence which was based on quantum skew information. The motivation for this was the fact that quantum coherence of a state $\rho$ is rooted in unpredictability, and the state $\rho$ is incoherent in the eigenbasis of an observable $k$ if and only if it commutes with $k$. The $k-$coherence is given by:
\begin{equation}
I(\rho, k)= - \frac{1}{2} Tr \{[\sqrt{\rho},k ]^{2}\}. \label{K}
\end{equation}
Later it was found that (\ref{K}) violates the property of strong monotonicity for certain states \cite{qir_not}. Recently, motivated by the $k-$coherence, Yu \cite{yu} has proposed a new quantum coherence measure using skew information and has proven that it satisfies all the conditions required for a good quantifier (mentioned above as Baumgratz et al.'s criteria). Additionally, in contrast to a set of other measures of coherence mentioned in the following subsection, it has an analytic expression which is easy to calculate and analyze. The quantum coherence of state $\rho $ in the basis $\{ | i \rangle \}$ via skew information is given by
\begin{equation}\label{skew}
C_{\rm{} skew}(\rho)= - \frac{1}{2}\sum _{i}Tr \{[\sqrt{\rho},| i \rangle \langle i |]^{2} \}.
\end{equation}
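Unlike the two previous measures, the skew-information expression above needs the matrix square root of $\rho$; in a numerical sketch of ours it is obtained from the eigendecomposition:

```python
import numpy as np

def sqrtm_psd(rho):
    """Matrix square root of a positive semidefinite matrix."""
    w, U = np.linalg.eigh(rho)
    w = np.clip(w, 0.0, None)          # clip tiny negative round-off
    return (U * np.sqrt(w)) @ U.conj().T

def C_skew(rho):
    """C_skew = -(1/2) sum_i Tr [sqrt(rho), |i><i|]^2."""
    rho = np.asarray(rho, dtype=complex)
    s = sqrtm_psd(rho)
    total = 0.0
    for i in range(rho.shape[0]):
        E = np.zeros_like(rho)
        E[i, i] = 1.0                  # projector |i><i|
        comm = s @ E - E @ s
        total -= 0.5 * np.trace(comm @ comm).real
    return total
```

For the maximally coherent qubit state $|+\rangle$ this yields $1/2$, the maximal value for a qubit, and zero for any diagonal state.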
It is important to note that all the above measures of coherence (i.e., relative entropy of coherence, $l1$ norm of coherence, and skew information based coherence) satisfy Baumgratz et al.'s criteria and have closed form expressions, hence their computation does not involve any optimization method. Further, relative entropy of coherence and $l1$ norm of coherence are considered to be equally good measures of quantum coherence, and a relation between these two measures has been conjectured by Rana et al. \cite{rana} as
\begin{equation}\label{conj}
C_{l1}(\rho)\geq C_{\rm{} rel}(\rho).
\end{equation}
{Rana et al.} have proved the conjecture (\ref{conj}) for pure qubit states, but the validity of this conjecture for mixed states is still an open problem. Further, no such relation between $l1$ norm (or relative entropy) based measures of coherence and skew information based measure of coherence has yet been investigated. In what follows, we will try to provide some insights into these issues using the example of the class of $X$ states.
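While a proof for mixed states remains open, the conjecture (\ref{conj}) is easy to probe numerically. The following check of ours (no substitute for a proof) samples random full-rank qubit states and compares the two measures:

```python
import numpy as np

def C_l1(rho):
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

def C_rel(rho):
    def S(m):
        w = np.linalg.eigvalsh(m)
        w = w[w > 1e-12]
        return float(-np.sum(w * np.log2(w)))
    return S(np.diag(np.diag(rho))) - S(rho)

rng = np.random.default_rng(1)
for _ in range(500):
    G = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    rho = G @ G.conj().T
    rho /= np.trace(rho).real               # random mixed qubit state
    assert C_l1(rho) >= C_rel(rho) - 1e-9   # conjecture holds on the sample
```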
\subsection{Other quantitative measures of quantum coherence}
Nonclassicality measures have traditionally been studied with reference to the quantum theory of light (see Ref. \cite{adam} and references therein). Intuitively, it seems obvious that nonclassicality in light must be connected with the notion of quantum coherence. This is so because a nonclassical state is defined as a state which cannot be expressed as a mixture of coherent states. This essentially implies the existence of non-zero coherence (off-diagonal terms in the density matrix of a nonclassical state) when expressed in the coherent state basis. Naturally, a set of quantifiers of quantum coherence analogous to earlier proposed measures of nonclassicality have been proposed. For example, in the context of nonclassicality, a measure of nonclassicality in single mode fields was introduced in the past \cite{asb} as the amount of two mode entanglement generated by it (i.e., a nonclassical state) using linear optics and classical states only. In analogy, Streltsov et al. \cite{str} have tried to relate the theories of quantum coherence with that of quantum entanglement. They have proved that any degree of quantum coherence with respect to some reference basis can be converted to entanglement via application of incoherent operations. This approach is similar to Asboth et al.'s idea \cite{asb}, which provided a model for inter-convertibility of the quantifiers of nonclassicality and entanglement.
To understand the idea of Streltsov et al. \cite{str}, let us consider an example with a source (S) (which may or may not have coherence) attached to an ancilla (A) and apply a $CNOT_{SA}$ operation jointly to the source and ancilla. If the ancilla and source are in the states $|0\rangle_{A}$ and $|0\rangle_{S}$, respectively, the application of the $CNOT_{SA}$ gate leaves the joint state of source and ancilla unchanged, $|00\rangle_{SA}\rightarrow |00\rangle_{SA}$, which is clearly a separable state. When the state of the source is $\alpha|0\rangle+\beta|1\rangle$ with $\alpha\beta\neq0 $ and $|\alpha|^2+|\beta|^2=1$, i.e., a state with a finite amount of coherence, the application of the $CNOT_{SA}$ gate on the joint state of source and ancilla would entangle them, i.e., $\left(\alpha|0\rangle_{S}+\beta|1\rangle_{S}\right) \otimes |0\rangle_{A}\rightarrow \alpha|00\rangle_{SA}+\beta|11\rangle_{SA} $. Clearly, the $CNOT_{SA}$ gate acts here as an incoherent operator and produces entanglement between the source and the ancilla only if the source has some quantum coherence in it. Further, upon the application of the $CNOT_{SA}$ gate, the amount of entanglement created between the source and the ancilla comes at the cost of the amount of coherence left over in the source state.
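This bookkeeping is easy to verify in a few lines (a toy check of ours): for a pure source state the entanglement created, as measured by the concurrence, equals exactly the $l1$ coherence $2|\alpha\beta|$ of the source.

```python
import numpy as np

# CNOT on |source, ancilla>, source = control, basis |00>,|01>,|10>,|11>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def output_state(alpha, beta):
    src = np.array([alpha, beta], dtype=complex)
    anc = np.array([1, 0], dtype=complex)        # ancilla prepared in |0>
    return CNOT @ np.kron(src, anc)

def concurrence_pure(psi):
    """C = 2|ad - bc| for a pure two-qubit state (a, b, c, d)."""
    a, b, c, d = psi
    return 2 * abs(a * d - b * c)
```

An incoherent source ($\beta=0$) yields zero concurrence, while $\alpha=\beta=1/\sqrt{2}$ yields a maximally entangled Bell pair.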
It is not our purpose to elaborate on this analogy. However, this analogy hints at the possibility that the limitations of nonclassicality measures may be present in the coherence measures, too. As we will further discuss that some measures of quantum coherence require optimization over infinitely many states and are expected to encounter the same drawbacks as similar nonclassicality measures are known to face \cite{adam}.
In analogy with the above example, Streltsov et al. \cite{str} have mathematically proven that a state $\rho _{S}$ can be converted to an entangled state via incoherent operations if and only if $\rho _{S}$ is coherent (Theorem 2 of Ref. \cite{str}). Thus, this work led to a framework for inter-conversion of quantum coherence and quantum entanglement due to which, in principle, we can use any quantifier of quantum entanglement as a measure of quantum coherence. Keeping this in mind, a family of entanglement based coherence measures were defined $\{ C _{E}\}$ as follows \cite{str}
\begin{equation}
C _{E} (\rho _{S})= \lim_{d_{A} \to \infty} \{ \sup_{\wedge^{SA}}\, E^{S:A}\left( \wedge^{SA} \left[ \rho_{S}\otimes |0 \rangle \langle 0 |_{A} \right] \right) \},
\end{equation}
where $E$ is an arbitrary entanglement measure, and supremum is taken over all incoherent operations.
This leads to a new perspective towards the understanding of relation between quantum coherence and quantum entanglement. However, the non-monotonic nature of the relationship between the measures of coherence is expected to remain valid for this family of measures, too, as it is well-known that the measures of entanglement are not monotones of each other \cite{adam-ent,ent_mon}. Another major criticism of the above method of calculating the coherence is that we have to undertake the maximization over infinitely many incoherent operations that are available. So the task at hand is practically impossible and is useful only for the cases where we can get a simple analytical expression, which is possible only for a subclass of states.
Note that most of the proposed quantifiers \cite{coh_rev1,coh_rev2}, such as trace distance measure of coherence \cite{rana}, robustness of coherence \cite{roc}, convex roof measures of coherence \cite{yuan,winter}, geometric coherence \cite{str}, coherence monotones of entanglement \cite{str}, coherence of assistance \cite{chitambar}, are based on optimization and do not have any analytical form (except for some subset of states) or do not satisfy the property of strong monotonicity. Moreover, it has been shown that the trace distance measure of coherence \cite{rana} and robustness of coherence \cite{roc} reduce to the $l1$ norm of coherence for all single qubit and $X$ states.
\subsection{Measure of first-order coherence}
Beyond the framework of resource theory, there are some other types of coherence or asymmetry measures \cite{kagalwala,optical_coh,fang} which are also of great significance in optical coherence theory and condensed matter physics. A particular measure of first-order coherence \cite{optical_coh}, $ D = \sqrt{2\,Tr[\rho^2]-1},$ has been exploited recently for single qubit subsystems to introduce the concept of accessible coherence. For example, for a two-qubit state $\rho _{AB}$, the degree of first-order coherence of each subsystem is given by $D_{i} = \sqrt{2\,Tr[\rho_{i}^2]-1}\, \forall\, i\in \left\{A,B\right\} $, and the amount of coherence present in the system is defined as $ D^{2}= (D_{A}^{2}+D_{B}^{2})/2$. The state $\rho _{AB}$ can be transformed via a global unitary ($U$) to get state $\rho^{\prime}_{AB}= U\rho_{AB}U^{\dagger}$ such that the first-order coherence can vanish and the two subsystems become strongly correlated. On the other hand, for certain unitary operation ($U^{\prime}$), the maximum first-order coherence can be obtained, which is $ D^{2}_{\rm max} = (\lambda _{1}- \lambda _{4})^{2}+(\lambda _{2}- \lambda _{3})^{2}$, where $\lambda_i$s are the eigenvalues of the state $\rho _{AB}$ in a decreasing order \cite{optical_coh}. Further, $D^{2}_{\rm max}$ can be called the degree of available coherence, since it represents the maximum first-order coherence that can be extracted under a global unitary transformation.
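These quantities reduce to a few lines of linear algebra (our own sketch, for two qubits). For a Bell state, e.g., the first-order coherence of both marginals vanishes while $D^{2}_{\rm max}=1$, and for a pure product state $D^2=1$ already without any global unitary:

```python
import numpy as np

def marginal(rho, keep):
    """Reduced single-qubit state of a two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)
    if keep == 0:
        return np.trace(r, axis1=1, axis2=3)   # trace out qubit B
    return np.trace(r, axis1=0, axis2=2)       # trace out qubit A

def D(rho_1q):                       # first-order coherence of one qubit
    return np.sqrt(max(0.0, 2 * np.trace(rho_1q @ rho_1q).real - 1))

def D2(rho):                         # (D_A^2 + D_B^2)/2
    return (D(marginal(rho, 0))**2 + D(marginal(rho, 1))**2) / 2

def D2_max(rho):                     # available coherence from the eigenvalues
    lam = np.sort(np.linalg.eigvalsh(rho))[::-1]
    return (lam[0] - lam[3])**2 + (lam[1] - lam[2])**2
```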
\section{$X$ states}\label{X states}
$X$ states were introduced by Yu and Eberly \cite{xstate1, xstate2} in a study highlighting the finite time disentanglement of two qubits due to spontaneous emission resulting in entanglement sudden death \cite{esd}. Since then these states have become {a subject of extensive study as they} contain an important class of pure and mixed states, such as maximally entangled states (like Bell states), partially entangled and quantum correlated states (like the Werner states \cite{werner}), maximally nonlocal mixed states (MNMSs) \cite{mnms}, maximally entangled mixed states (MEMSs) \cite{munro,ishizaka,verst} as well as non-entangled (separable) states. The $X$ states are described in the computational basis $ \{ \vert 00 \rangle$, $ \vert 01 \rangle$, $ \vert 10 \rangle$,$ \vert 11 \rangle \}$ as
\begin{equation} \label{x_state}
\rho_{X} = \left(
\begin{array}{cccc}
\rho_{11} & 0 & 0& \rho_{14} \\
0&\rho_{22} &\rho_{23}& 0 \\
0 & \rho^{*}_{23} &\rho_{33}& 0 \\
\rho^{*}_{14} & 0 & 0 & \rho_{44} \\
\end{array}
\right).
\end{equation}
The positions of the non-zero elements of $\rho_X$ resemble the shape of the letter $X$, and hence the states having a density matrix of this form are referred to as $X$ states. For $\rho_{X} $ to represent a physical state, we must have $ \underset{i}{\sum}\rho_{ii}=1 $, $ \rho_{22}\rho_{33}\geq |\rho_{23}|^2$, and $ \rho_{11}\rho_{44}\geq |\rho_{14}|^2$ \cite{x_dynamics}. Further, $\rho_{14} $ and $\rho_{23} $ are complex numbers, but they can always be made real and non-negative by local unitary transformations. Thus, without loss of generality, we can always start with a $\rho_{X}$ whose elements are all real and non-negative. The eigenvalues of $\rho_{X}$ are given by
\begin{eqnarray}\label{eigenvalues}
\lambda _{1}= \frac{1}{2} \left\{(\rho_{11}+\rho_{44})+ \sqrt{(\rho_{11}-\rho_{44})^{2}+ 4|\rho_{14}|^{2}} \right\}, \\ \nonumber
\lambda _{2}= \frac{1}{2} \left\{(\rho_{11}+\rho_{44})- \sqrt{(\rho_{11}-\rho_{44})^{2}+ 4|\rho_{14}|^{2}} \right\}, \\ \nonumber
\lambda _{3}= \frac{1}{2} \left\{(\rho_{22}+\rho_{33})+ \sqrt{(\rho_{22}-\rho_{33})^{2}+ 4|\rho_{23}|^{2}} \right\}, \\ \nonumber
\lambda _{4}= \frac{1}{2} \left\{(\rho_{22}+\rho_{33})- \sqrt{(\rho_{22}-\rho_{33})^{2}+ 4|\rho_{23}|^{2}} \right\}.
\end{eqnarray}
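These closed-form eigenvalues can be checked against direct diagonalization. The following sketch (our addition, assuming NumPy; the helper names are ours) draws random valid $X$ states and compares both computations:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_x_state(rng):
    # Random valid X state with real, non-negative off-diagonal elements.
    p = rng.dirichlet(np.ones(4))                 # diagonal (rho11,...,rho44)
    r14 = rng.uniform(0, np.sqrt(p[0] * p[3]))    # |rho14|^2 <= rho11*rho44
    r23 = rng.uniform(0, np.sqrt(p[1] * p[2]))    # |rho23|^2 <= rho22*rho33
    rho = np.diag(p)
    rho[0, 3] = rho[3, 0] = r14
    rho[1, 2] = rho[2, 1] = r23
    return rho

def x_eigenvalues(rho):
    # Closed-form eigenvalues of an X state, as in the equations above.
    a, b, c, d = np.diag(rho)
    r14, r23 = abs(rho[0, 3]), abs(rho[1, 2])
    s1 = np.sqrt((a - d)**2 + 4 * r14**2)
    s2 = np.sqrt((b - c)**2 + 4 * r23**2)
    return np.sort([(a + d + s1) / 2, (a + d - s1) / 2,
                    (b + c + s2) / 2, (b + c - s2) / 2])

rho = random_x_state(rng)
assert np.allclose(x_eigenvalues(rho), np.sort(np.linalg.eigvalsh(rho)))
```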
As mentioned previously, $X$ states can be both separable and entangled depending upon the values of parameters describing them. It is known that $X$ states are entangled if and only if either $\rho_{22}\rho_{33} < |\rho_{14}|^{2}$ or $\rho_{11}\rho_{44} < |\rho_{23}|^{2}$ \cite{x_dynamics}, and the amount of entanglement, as measured by concurrence, is given by
\begin{equation}
\begin{array}{lcl}
{\rm Concurrence}(\rho_{X})&=& 2 \, \max \thinspace \left\{0, |\rho_{14}|- \sqrt{\rho_{22}\rho_{33}},\right. \\
& &\left.|\rho_{23}|- \sqrt{\rho_{11}\rho_{44}} \right\}.
\end{array}
\end{equation}
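The closed-form concurrence above can be cross-checked against Wootters' general two-qubit formula. The following sketch (our illustration, assuming NumPy; function names are ours) does this for a Bell state:

```python
import numpy as np

def x_concurrence(rho):
    # Closed-form concurrence of an X state.
    p = np.diag(rho).real
    r14, r23 = abs(rho[0, 3]), abs(rho[1, 2])
    return 2 * max(0.0, r14 - np.sqrt(p[1] * p[2]), r23 - np.sqrt(p[0] * p[3]))

def wootters_concurrence(rho):
    # General formula: C = max(0, s1 - s2 - s3 - s4), where s_i are the
    # decreasingly sorted square roots of the eigenvalues of
    # rho (sigma_y x sigma_y) rho* (sigma_y x sigma_y).
    sy = np.array([[0, -1j], [1j, 0]])
    K = np.kron(sy, sy)
    R = rho @ K @ rho.conj() @ K
    s = np.sort(np.sqrt(np.abs(np.linalg.eigvals(R).real)))[::-1]
    return max(0.0, s[0] - s[1] - s[2] - s[3])

bell = np.zeros((4, 4))
bell[0, 0] = bell[0, 3] = bell[3, 0] = bell[3, 3] = 0.5
assert abs(x_concurrence(bell) - 1.0) < 1e-9
assert abs(wootters_concurrence(bell) - 1.0) < 1e-9
```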
Another interesting feature of $X$ states is that they have only nonlocal coherence (i.e., the coherence pertaining to the total system). All local coherences (i.e., the coherence of the reduced density matrix of the subsystem) vanish for these states. This can be easily visualized from the reduced density matrices that can be obtained for the subsystems $A$ and $B$ as
\begin{equation}
\rho^{A}_{X} = \left(
\begin{array}{cc}
\rho_{11}+\rho_{22} & 0 \\
0 & \rho_{33}+\rho_{44}
\end{array}
\right)
\end{equation}
and
\begin{equation}
\rho^{B}_{X} = \left(
\begin{array}{cc}
\rho_{11}+\rho_{33} & 0 \\
0 & \rho_{22}+\rho_{44}
\end{array}
\right).
\end{equation}
For unitary time evolution, it has been observed that $\rho_{X}$ retains its $X$ form if and only if the Hamiltonian is $X$ shaped in the computational basis \cite{x_dynamics}. Further, it is also known that this state can retain its form during the time evolution under a restricted class of open system dynamics \cite{x_dynamics}.
Recently, numerous studies have been reported that look at the theoretical aspects of $X$ states, with specific focus on quantum correlations, as well as their production and manipulation in experimental systems. Ali et al. \cite{ali} have tried to find an analytical expression for the quantum discord of two-qubit $X$ states, which was later found not to be valid for a few states \cite{lu}. Rau \cite{rau} has studied the algebraic characterization of $X$ states in quantum information. On the experimental side, two-qubit $X$ states can be produced in a wide variety of systems, such as optical systems \cite{xp_p1,xp_p2,xp_p3,xp_p4}, ultracold atoms \cite{xp_uc1,xp_uc2}, and nuclear magnetic resonance \cite{xp_nmr}. The importance of $X$ states lies in the very sparse structure of the density matrix describing them, due to which they can be analyzed efficiently. Recently, Paulo et al. \cite{xh} have proved that for every two-qubit state, there is an $X$-counterpart, i.e., a corresponding two-qubit $X$ state having the same spectrum and entanglement, as measured by concurrence, negativity, or relative entropy of entanglement. This universality property of $X$ states allows us to map every two-qubit state to an equivalent $X$ state, and hence these $X$ states constitute an important resource for quantum communication and computation. The study of $X$ states is very important in understanding subtle concepts of quantum correlations, quantum coherence, and quantum entanglement as they form a very broad subset of two-qubit mixed states and incorporate most of the states that can be produced experimentally. In the next section, we will try to quantify the quantum coherence of $X$ states via different available measures of coherence and try to find relations between them.
\section{Relations between the resource theoretic measures of quantum coherence}\label{our-res}
We have randomly prepared $10^5$ $X$ states and have quantified the amount of coherence present in these states using the quantitative measures of coherence described in Section \ref{measures}. Specifically, for each of the randomly prepared states we have computed the relative entropy of coherence, $l1$ norm of coherence, coherence via skew information, and first-order coherence. The obtained results are plotted to reveal relationships between various measures of coherence. To begin with, in Fig. \ref{fig1}, we provide a scatter plot of random $X$ states with coherence measured by relative entropy of coherence on the abscissa and $l1$ norm of coherence on the ordinate. The relative entropy of coherence (\ref{rel_ent}) for the $X$ states can be expressed as
\begin{equation}\label{rel_ent_x}
C_{\rm{} rel}(\rho _{X})= \sum_{i} \lambda _{i} \log_{2}(\lambda_{i})- \sum_{i} \rho _{ii} \log_{2}(\rho _{ii}) ,
\end{equation}
where $\lambda _ {i} $s are the eigenvalues of the $X$ states given by Eq. (\ref{eigenvalues}), while $\rho _{ii} $ represent the diagonal values of the $X$ state (\ref{x_state}). Similarly, the amount of coherence of $X$ states (\ref{x_state}) measured by $l1$ norm of coherence (\ref{l1}) can be expressed as
\begin{equation}\label{cl1_ent_x}
C_{l1}(\rho_{X})= 2(|\rho_{14}|+|\rho_{23}|).
\end{equation}
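For concreteness (our illustration, not from the paper; function names are ours, and NumPy is assumed), both measures can be evaluated for a sample $X$ state. The bound $C_{l1}\geq C_{\rm rel}$ conjectured by Rana et al. \cite{rana} holds for it:

```python
import numpy as np

def c_rel(rho):
    # Relative entropy of coherence in the computational basis:
    # sum_i lam_i log2 lam_i - sum_i rho_ii log2 rho_ii  (= S(diag) - S(rho)).
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-15]
    diag = np.diag(rho).real
    diag = diag[diag > 1e-15]
    return float(np.sum(lam * np.log2(lam)) - np.sum(diag * np.log2(diag)))

def c_l1(rho):
    # l1 norm of coherence: sum of moduli of the off-diagonal elements.
    return float(np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho))))

rho = np.array([[0.4, 0.0, 0.0, 0.2],
                [0.0, 0.3, 0.1, 0.0],
                [0.0, 0.1, 0.1, 0.0],
                [0.2, 0.0, 0.0, 0.2]])

assert abs(c_l1(rho) - 2 * (0.2 + 0.1)) < 1e-12   # 2(|rho14| + |rho23|)
assert c_l1(rho) >= c_rel(rho) > 0                # Rana et al.'s bound holds here
```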
We can clearly see from Fig. \ref{fig1} that these two quantum coherence quantifiers are not monotone of each other. To illustrate this point specifically, we have marked two points on the plot as A and B, which correspond to two different $X$ states $\rho_A$ and $\rho_B$, respectively. Clearly, as far as the relative entropy of coherence is concerned, $\rho_B$ has more coherence than $\rho_A$. However, the opposite is observed if we measure coherence using $l1$ norm of coherence. Thus, we cannot conclude whether $\rho_A$ possesses more coherence than $\rho_B$ or not. This situation is analogous to the case of measures of nonclassicality \cite{adam,adam-non}, entanglement \cite{adam-ent}, steering \cite{adam-st}, Bell nonlocality \cite{adam-bi}, and non-Markovianity \cite{adam-nm}, where the non-monotonic nature of different measures has already been observed.
The MNMSs \cite{mnms} form a subclass of $X$ states and are described as
\begin{equation}
\rho_{\rm{} MNMS} = \left(
\begin{array}{cccc}
\frac{1}{2} & 0 & 0& \frac{\epsilon}{2} \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\frac{\epsilon}{2} & 0 & 0 & \frac{1}{2} \\
\end{array}
\right).
\end{equation}
For each value of $\epsilon\in\left[0,1 \right]$, the state $\rho_{\rm{} MNMS}$ is a Bell diagonal state and represents the state that produces a maximal violation of the CHSH inequality \cite{chsh}. In the scatter plot of quantum coherence for the random $X$ states, measured by relative entropy of coherence and $l1$ norm of coherence, we find that the MNMSs form the upper boundary, as represented by {the red squares} in Fig. \ref{fig1}. Clearly, this shows that for a given value of quantum coherence measured by the relative entropy of coherence, the MNMSs of $X$ type have the maximum quantum coherence as measured by $l1$ norm of coherence. Further, we can note that these states having maximum quantum coherence also maximally violate the CHSH inequality.
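The MNMS boundary curve can be written in closed form: $C_{l1}(\rho_{\rm MNMS})=\epsilon$ and, since the eigenvalues are $(1\pm\epsilon)/2$ and the diagonal is maximally mixed, $C_{\rm rel}(\rho_{\rm MNMS})=1-H_{2}\left(\frac{1+\epsilon}{2}\right)$ with $H_2$ the binary entropy. A short numerical check (our addition, assuming NumPy):

```python
import numpy as np

def h2(p):
    # Binary entropy in bits.
    return 0.0 if p in (0.0, 1.0) else -p * np.log2(p) - (1 - p) * np.log2(1 - p)

def mnms(eps):
    rho = np.zeros((4, 4))
    rho[0, 0] = rho[3, 3] = 0.5
    rho[0, 3] = rho[3, 0] = eps / 2
    return rho

eps = 0.6
rho = mnms(eps)
c_l1 = np.sum(np.abs(rho)) - np.sum(np.diag(rho))   # = eps
lam = np.linalg.eigvalsh(rho)
lam = lam[lam > 1e-15]
c_rel = np.sum(lam * np.log2(lam)) + 1.0            # S(diag) = 1 bit for MNMS

assert abs(c_l1 - eps) < 1e-12
assert abs(c_rel - (1.0 - h2((1 + eps) / 2))) < 1e-12
```

Sweeping $\epsilon$ over $[0,1]$ with these two expressions reproduces the red-square boundary of Fig. \ref{fig1} parametrically.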
\begin{figure}[!htb]
\includegraphics[width=8.6 cm]{Figure1.pdf}
\caption{\label{fig:epsart1} (Color online) The blue points represent scatter plots for $X$ states. The red squares represent the same for MNMSs, and the green circles correspond to the same for MEMSs. The pink triangles are obtained for the Werner states. The black (dashed) and brown (smooth) curves represent straight lines with slopes 1 and 0.5, respectively. The black (dashed) line also represents the results obtained for the state $\rho_L$. All the quantities shown in this plot and the rest of the figures in the present work are dimensionless.}
\label{fig1}
\end{figure}
The Werner states \cite{werner}, which are described as a statistical mixture of a maximally entangled state and a maximally mixed state can be written as
\begin{eqnarray}
\rho_{\rm{} W} = \epsilon | \Phi^{+}\rangle \langle \Phi^{+} | + \frac{1- \epsilon}{4} I_{2}\otimes I_{2},
\end{eqnarray}
where $ I_{2}$ is {the} identity matrix, and $| \Phi^{+}\rangle = \frac{1}{\sqrt{2}}\{ |00 \rangle + | 11 \rangle \} $. Note that the Werner state is also a subset of $X$ states as it can be written as
\begin{equation}
\rho_{\rm{} W} = \left(
\begin{array}{cccc}
\frac{1+\epsilon}{4} & 0 & 0& \frac{\epsilon}{2} \\
0 &\frac{1-\epsilon}{4} & 0 & 0 \\
0 & 0 & \frac{1-\epsilon}{4} & 0 \\
\frac{\epsilon}{2} & 0 & 0 & \frac{1+\epsilon}{4} \\
\end{array}
\right).
\end{equation}
A Werner state is entangled if $\epsilon > \frac{1}{3}$ and separable otherwise \cite{werner2}. For $\frac{1}{3} < \epsilon < \frac{1}{\sqrt{2}} $, the Werner states are entangled, but they do not violate any Bell inequality. The Werner states, sometimes referred to as decoherence-free states, have special significance in quantum information processing applications where there is a need to combat decoherence in noisy channels \cite{nc}. We can clearly see from Fig. \ref{fig1} that the amount of coherence measured by $l1$ norm of coherence for a Werner state (pink triangles) is always less than that for a MNMS (red squares) even though both states may have the same amount of coherence as measured by the relative entropy of coherence.
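The entanglement threshold quoted above can be read off from the $X$-state concurrence, which for a Werner state reduces to $\max\left\{0,\frac{3\epsilon-1}{2}\right\}$. A brief check (our addition, assuming NumPy; function names are ours):

```python
import numpy as np

def werner(eps):
    # eps |Phi+><Phi+| + (1 - eps)/4 I, written as a 4x4 X-type matrix.
    phi = np.zeros(4)
    phi[0] = phi[3] = 1 / np.sqrt(2)
    return eps * np.outer(phi, phi) + (1 - eps) / 4 * np.eye(4)

def x_concurrence(rho):
    p = np.diag(rho).real
    r14, r23 = abs(rho[0, 3]), abs(rho[1, 2])
    return 2 * max(0.0, r14 - np.sqrt(p[1] * p[2]), r23 - np.sqrt(p[0] * p[3]))

# Closed form for the Werner state: C = max(0, (3*eps - 1)/2),
# so the state is separable up to eps = 1/3 and entangled beyond it.
for eps in (0.2, 1 / 3, 0.5, 0.9):
    assert abs(x_concurrence(werner(eps)) - max(0.0, (3 * eps - 1) / 2)) < 1e-12
```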
Let us further consider the case of MEMSs \cite{munro}. These represent a class of states for which no additional entanglement can be produced by global unitary operations. These states are a generalization of the class of Bell states to mixed states and are known to have the highest degree of entanglement for a given purity of a state. While Bell states are {known to be} maximally entangled two-qubit {pure} states, all other maximally entangled states can be represented as
\begin{equation}
\rho_{\rm MEMS} = \left(
\begin{array}{cccc}
g (\epsilon) & 0 & 0& \frac{\epsilon}{2} \\
0 & 1- 2g (\epsilon) & 0 & 0 \\
0 & 0 & 0 & 0 \\
\frac{\epsilon}{2} & 0 & 0 & g (\epsilon) \\
\end{array}
\right),
\end{equation}
where
\begin{equation}
g (\epsilon)= \begin{cases}
\frac{\epsilon}{2}, & \text{if $\epsilon \geq \frac{2}{3}$ } \\
\frac{1}{3}, & \text{$\epsilon < \frac{2}{3}$}
\end{cases}.
\end{equation}
We can clearly see from Fig. \ref{fig1} that the MEMSs (represented by the green circles) have lesser quantum coherence as measured by $l1$ norm of coherence than that of the MNMSs and Werner states for the same amount of quantum coherence as measured by relative entropy of coherence.
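Since $\rho_{22}\rho_{33}=0$ for a MEMS, its concurrence reduces to $2|\rho_{14}|=\epsilon$ over the whole range of $\epsilon$, which is the sense in which $\epsilon$ parametrizes the entanglement of these states. A short check (our addition, assuming NumPy; function names are ours):

```python
import numpy as np

def mems(eps):
    # MEMS density matrix with g(eps) = eps/2 for eps >= 2/3, else 1/3.
    g = eps / 2 if eps >= 2 / 3 else 1 / 3
    rho = np.diag([g, 1 - 2 * g, 0.0, g])
    rho[0, 3] = rho[3, 0] = eps / 2
    return rho

def x_concurrence(rho):
    p = np.diag(rho).real
    r14, r23 = abs(rho[0, 3]), abs(rho[1, 2])
    return 2 * max(0.0, r14 - np.sqrt(p[1] * p[2]), r23 - np.sqrt(p[0] * p[3]))

# For a MEMS the concurrence equals eps itself (rho22 * rho33 = 0 here).
for eps in (0.2, 2 / 3, 0.8, 1.0):
    assert abs(x_concurrence(mems(eps)) - eps) < 1e-12
```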
Interestingly, Rana et al. \cite{rana} have proved that for any $d$-dimensional mixed state $ C_{l1} (\rho) \geq \frac{C_{\rm{} rel}(\rho)}{\log_{2} d}$ and conjectured that $ C_{l1} (\rho) \geq C_{\rm{} rel} (\rho)$ for all states. The smooth brown line in Fig. \ref{fig1} represents a straight line with slope $\frac{1}{2}$, while the black dashed curve represents the straight line with slope $1$. We can clearly see from Fig. \ref{fig1} that Rana et al.'s conjecture described by inequality (\ref{conj}) is valid for the case of $X$ states. Let us now focus on the states which satisfy $ C_{l1} (\rho_{X}) = C_{\rm{} rel} (\rho_{X})$, i.e., the $X$ states that have the same amount of quantum coherence as measured by $l1$ norm of coherence and relative entropy of coherence. These states are given by
\begin{equation}
\rho_{L} = \left(
\begin{array}{cccc}
\frac{\epsilon}{2} & 0 & 0& \frac{\epsilon}{2} \\
0 & 1- \epsilon & 0 & 0 \\
0 & 0 & 0 & 0 \\
\frac{\epsilon}{2} & 0 & 0 & \frac{\epsilon}{2} \\
\end{array}
\right)
\end{equation}
for $0 \leq \epsilon\leq 1$. {The amount} of coherence present in these states as measured by the two different measures of coherence is illustrated by the black dashed line in Fig. \ref{fig1}. We can clearly see that $C_{l1}(\rho_{L})= \epsilon $, and the eigenvalues of $\rho_{L}$ are $0,0,$ $ \epsilon$, and $1-\epsilon$. Using the eigenvalues of $\rho_{L}$ in Eq. (\ref{rel_ent}), the relative entropy of coherence evaluates to $C_{\rm{} rel}(\rho_{L})= \epsilon $, which is the same as the value computed using the $l1$ norm of coherence. We can see that the state $\rho_{L}$ is similar to the MEMS for $\epsilon \geq \frac{2}{3}$ and retains its form for the whole range ($0 \leq \epsilon\leq 1$).
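The equality $C_{l1}(\rho_{L})=C_{\rm rel}(\rho_{L})=\epsilon$ can be verified numerically. The following sketch (our addition, assuming NumPy) checks it for a few values of $\epsilon$:

```python
import numpy as np

def rho_l(eps):
    # The boundary state rho_L defined above.
    rho = np.zeros((4, 4))
    rho[0, 0] = rho[3, 3] = rho[0, 3] = rho[3, 0] = eps / 2
    rho[1, 1] = 1 - eps
    return rho

for eps in (0.25, 0.5, 0.9):
    rho = rho_l(eps)
    c_l1 = np.sum(np.abs(rho)) - np.trace(rho)
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    diag = np.diag(rho)
    diag = diag[diag > 1e-12]
    c_rel = np.sum(lam * np.log2(lam)) - np.sum(diag * np.log2(diag))
    assert abs(c_l1 - eps) < 1e-12
    assert abs(c_rel - eps) < 1e-9
```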
In summary, we have seen that the two well-known measures of quantum coherence, namely the relative entropy of coherence and the $l1$ norm of coherence, are not monotone of each other, and we have analytical expressions for the states forming the lower and upper bounds of the scatter plots (Fig. \ref{fig1}) for the class of $X$ states.
To further stress on the non-monotonic nature of the measures of quantum coherence, we now consider another measure of coherence, which is referred to as trace distance measure of coherence \cite{rana} and is defined as $C_{tr}(\rho)= \min ||\rho - \delta ||_{1} $, where $\delta$ belongs to the set of incoherent states $I$. The problem with this measure is that it does not satisfy the property of strong monotonicity for all the states. Rana et al. \cite{rana} have shown that for the case of single qubit and $X$ states, the trace distance measure of coherence reduces to just the $l1$ norm of coherence and hence is a valid measure of quantum coherence for these states. Thus, Fig. \ref{fig1} also establishes the fact that the trace distance measure of coherence and the relative entropy of coherence are not monotone of each other. Similarly, it is known that the quantum coherence as measured by the robustness of coherence \cite{roc} reduces to the $l1$ norm of coherence for $X$ states. Thus, Fig. \ref{fig1} also establishes that robustness of coherence and the relative entropy of coherence are not monotone of each other.
\begin{figure}[!htb]
\includegraphics[width=8.6 cm]{Figure2.pdf}
\caption{\label{fig:epsart1} (Color online) The blue points represent scatter plots of $X$ states for $l1$ norm of coherence versus the quantum coherence via skew information. The red squares, green circles, and pink triangles represent the same for MNMSs, MEMSs, and Werner states, respectively. Further, the black (dashed) line represents the results obtained for state $\rho_L$.}
\label{fig2}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=8.6 cm]{Figure3.pdf}
\caption{\label{fig:epsart1} (Color online) The blue points represent scatter plots of $X$ states for relative entropy of coherence versus the quantum coherence via skew information. The red squares, green circles, and pink triangles represent the same for MNMSs, MEMSs, and Werner states, respectively.}
\label{fig3}
\end{figure}
In Sec. \ref{measures}, we have already defined the skew information based measure of coherence (\ref{skew}). Here, we would try to compute this measure of coherence for $X$ states and check whether this measure is monotonic with the other measures, i.e., relative entropy of coherence and $l1$ norm of coherence. Figure \ref{fig2} represents the scatter plot of $l1$ norm of coherence and quantum coherence via skew information for $X$ states. We can clearly see that these two measures are also not monotone of each other. Interestingly, just like the variation of $l1$ norm of coherence with relative entropy of coherence, MNMSs ($\rho_{\rm{} MNMS}$) are found to form the upper boundary of the scatter plot, and the state ($\rho_{L}$) forms the lower boundary, while the MEMSs and Werner states are observed to lie between them. Therefore, the quantifiers of quantum coherence, namely $l1$ norm of coherence and quantum coherence via skew information, are also not monotonic of each other and will not follow each other with respect to the ordering of the states. Figure \ref{fig3} represents the scatter plot of the relative entropy of coherence versus quantum coherence via skew information, and we can clearly see that these two measures of coherence are also not monotone of each other. In the light of all our results and observations, we can conclude that the different popular quantifiers currently being used to measure the quantum coherence are not equivalent. This would imply an ambiguity with respect to the ordering of the states as can be seen from the plots in Figs. \ref{fig1}-\ref{fig3}. Further, we have already mentioned that the non-monotonic nature of the measures of coherence shown in this paper would remain valid for entanglement-based measures of coherence too, as it is well-known that the measures of entanglement are also not monotone of each other \cite{ent_mon}.
\begin{figure}[!htb]
\includegraphics[width=8.6 cm]{Figure4.pdf}
\caption{\label{fig:epsart1} (Color online) The blue points represent scatter plots of $l1$ norm of coherence versus quantum entanglement measured by concurrence for the $X$ states. The red squares, green circles, and pink triangles represent the same for MNMSs, MEMSs, and Werner states, respectively. }
\label{fig4}
\end{figure}
Let us look further at the possible relation between the amount of entanglement as measured by concurrence and the amount of coherence as measured by $l1$ norm of coherence. For the $X$ states under consideration, we can see that $C_{l1}(\rho_{X}) = 2 \left(|\rho_{14}|+|\rho_{23}|\right),$ while the concurrence is given by $ {\rm Concurrence}(\rho_{X})= 2 \, \max \thinspace \{0, |\rho_{14}|- \sqrt{\rho_{22}\rho_{33}}, |\rho_{23}|- \sqrt{\rho_{11}\rho_{44}} \}$. Thus, we can clearly see that $C_{l1}(\rho_{X})\geq {\rm Concurrence}(\rho_{X})$. We further analyze the states for which $C_{l1}(\rho_{X})= {\rm Concurrence}(\rho_{X}),$ i.e., for which the amount of coherence as measured by the $l1$ norm of coherence is equal to the amount of entanglement as measured by concurrence. We can see from Fig. \ref{fig4} that MNMSs and MEMSs form the lower boundary of the scatter plot for concurrence and $l1$ norm of coherence, respectively. Further, we know that Werner states are separable if $ \epsilon < \frac{1}{3}$ and entangled otherwise. This fact is also reflected in Fig. \ref{fig4}. {Note that here, all the coherence measures, which are basis dependent, are obtained in the computational basis. There is a simple reason behind this: the $X$ states are defined in the computational basis only. However, in principle, we can measure coherence using other bases, too. If we change the basis and compute the coherence using different measures, it is expected that the analytical form of the states forming the upper and lower boundaries would change. Specifically, we may visualize this point by noting that MNMSs are Bell diagonal states, and we have already shown that these states form the upper boundary in our Fig. \ref{fig1}. Therefore, if we choose the Bell basis as our incoherent basis, then MNMSs will not form the upper boundary. 
Here, we restrict ourselves from exploring more in this direction, as the study in the computational basis alone provides an answer to the question that this paper aims to address. Keeping this in mind, we now proceed to describe a new measure of coherence in the following section (first-order coherence), which is a basis independent measure like concurrence. }
\begin{figure}[!htb]
\includegraphics[width=8.6 cm]{Figure5.pdf}
\caption{\label{fig:epsart1} (Color online) The blue points represent scatter plots of first-order coherence ($D^{2}$) versus relative entropy of coherence. The red squares, green circles, and pink triangles represent the same for MNMSs, MEMSs, and Werner states, respectively.}
\label{fig5}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=8.6 cm]{Figure6.pdf}
\caption{\label{fig:epsart1} (Color online) The blue points represent scatter plots of hidden coherence ($ D^{2}_{\rm max}$) versus relative entropy of coherence. The red squares, green circles, and pink triangles represent the same for MNMSs, MEMSs, and Werner states, respectively.}
\label{fig6}
\end{figure}
\begin{figure}[!htb]
\includegraphics[width=8.6 cm]{Figure7.pdf}
\caption{\label{fig:epsart1} (Color online) The blue points represent scatter plots of hidden coherence ($ D^{2}_{\rm max}$) versus concurrence. The red squares, green circles, and pink triangles represent the same for MNMSs, MEMSs, and Werner states, respectively.}
\label{fig7}
\end{figure}
\section{First-order coherence}\label{Op-coh}
Let us further analyze how the first-order coherence \cite{optical_coh} and the maximum first-order coherence vary for $X$ states. This coherence measure is based on the purity of the subsystems which constitute the bipartite state. From Figs. \ref{fig5} and \ref{fig6}, we can see that there is no clear relation between the first-order coherence and the measures of coherence as described by resource theory of coherence, such as $l1$ norm of coherence, relative entropy of coherence, and coherence using the skew information. This was expected as first-order coherence was introduced with an altogether different motivation, and it does not follow Baumgratz et al.'s criteria. Moreover, it is related to the purity of the individual subsystems of which the combined bipartite system is composed. In Section \ref{measures}, we have already mentioned that the amount of hidden coherence (degree of available coherence) is known as the maximum first-order coherence. Consequently, this measure of coherence is a basis independent measure in contrast to the basis dependent measures of coherence present in the resource theory of quantum coherence discussed above. Svozil{\'\i}k et al. \cite{optical_coh} have shown a possible trade-off between the amount of hidden coherence present in the system and the amount of violation of the CHSH inequality \cite{chsh}.
Specifically, Figs. \ref{fig5} and \ref{fig6} illustrate the scatter plot of $ D^{2}$ and $ D^{2}_{\rm max} $ versus $C_{\rm rel}$ for different $X$ states under consideration. We can clearly see from Fig. \ref{fig5} that the amount of first-order coherence, i.e., $D^{2}$ is zero for the case of MNMS (red squares) and Werner (pink triangles) states, while it is nonzero for most of the MEMSs (green circles). Further, the amount of hidden coherence $ D^{2}_{\rm max}$, i.e., the coherence available after the unitary transformation for $X$ states shows that of {all the} subclasses of $X$ states considered in the study, MNMSs (red squares) have the maximum amount of hidden coherence $ D^{2}_{\rm max}$ with respect to the relative entropy of coherence. Therefore, we can see that the maximum first-order coherence is also not monotonic with the measures of quantum coherence studied here. Specifically, it is illustrated in Fig. \ref{fig6} that the maximum first-order coherence is not monotonic with the relative entropy of coherence. Further, it is checked that the maximum of first-order coherence is not monotonic with $l1$ norm of coherence and skew information based measure of coherence, too. However, corresponding plots are not shown here. We have also studied the relation between first-order coherence $( D^{2})$ and the amount of entanglement as measured by concurrence for different $X$ states. However, we have not included the corresponding plot as it is found to be similar to the scatter plot of $ D^{2}$ versus $C_{\rm rel}$ (Fig. \ref{fig5}).
Interestingly, both $ D^{2}_{\rm max}$ and concurrence are known to be basis independent quantities. We found this fact motivating enough to explore the relationship between these two basis independent quantities. Figure \ref{fig7} illustrates the scatter plot of $ D^{2}_{\rm max}$ with respect to the concurrence of the $X$ states under consideration. We can clearly see from Fig. \ref{fig7} that the MEMSs (green circles) form the lower boundary of the plots. We can see that for zero value of concurrence (i.e., for the separable states), MEMSs (green circles) have a value of $ D^{2}_{\rm max}\approx0.1$, while for MNMSs (red squares) we obtain $D^{2}_{\rm max}=0.5$. However, there are separable $X$ states (including the separable subclass of Werner states) which can be observed to have smaller values of hidden coherence than that of MEMSs. As the amount of concurrence increases, the difference in the amount of hidden coherence $ D^{2}_{\rm max}$ for the MNMSs (red squares) and the MEMSs (green circles) decreases and becomes equal to zero when the concurrence is 1 (i.e., for a maximally entangled $X$ state). In all cases, for the same amount of entanglement as measured by concurrence, the amount of hidden coherence for the MNMSs (red squares) is always greater than that of the MEMSs (green circles).
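The endpoint values of the hidden coherence at zero concurrence can be checked directly in the separable limit (our sketch, assuming NumPy): the $\epsilon\to 0$ MNMS gives $D^{2}_{\rm max}=1/2$, while the $\epsilon\to 0$ MEMS gives $1/9\approx 0.11$, i.e., the value quoted as $\approx 0.1$ above:

```python
import numpy as np

def d2_max(rho):
    # Degree of available (hidden) coherence from the sorted eigenvalues.
    lam = np.sort(np.linalg.eigvalsh(rho))[::-1]
    return (lam[0] - lam[3])**2 + (lam[1] - lam[2])**2

mnms0 = np.diag([0.5, 0.0, 0.0, 0.5])     # MNMS with eps = 0 (separable)
assert abs(d2_max(mnms0) - 0.5) < 1e-12

mems0 = np.diag([1 / 3, 1 / 3, 0.0, 1 / 3])  # MEMS limit eps -> 0 (separable)
assert abs(d2_max(mems0) - 1 / 9) < 1e-12
```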
The study of first-order coherence has provided a kind of completeness to the present study, but we could not find any concrete relation between first-order coherence and other measures of coherence studied here. However, efforts have already been made to relate the resource theories of coherence and the interferometric visibility (cf. Bera et al. \cite{bera} and Bagan et al. \cite{bagan}) by using $l1$ norm of coherence to measure the visibility in a multi-slit experiment \cite{tania, biswas, sandeep}.
\section{Conclusions}\label{con}
This paper aimed to answer the question: Can we compare the quantum coherence present in two states? A detailed analysis revealed that the answer is no. This is so because the analysis performed using the $X$ states and the measures of coherence (relative entropy of coherence, $l1$ norm of coherence, skew information based measure of {coherence,} robustness of coherence, trace distance norm of coherence, and first-order coherence) has proved that the measures of coherence studied here are not monotone of each other. {This feature (non-monotonic nature of the measures) is not only present in the nonclassicality measures reported before \cite{adam,adam-non} but also in measures of coherence (studied here). Similar results have also been observed in the context of measures of entanglement \cite{adam-ent}, steering \cite{adam-st}, Bell nonlocality \cite{adam-bi}, non-Markovianity \cite{adam-nm}, etc. Specifically, in our analysis, we see that in all these cases some of the investigated measures are not monotones of each other. Further, our analysis reveals that for a given value of quantum coherence measured by the relative entropy of coherence, the MNMSs of $X$ type have the maximum quantum coherence as measured by $l1$ norm of coherence. In addition, we observe that the amount of coherence measured by $l1$ norm of coherence for a Werner state is found to be always less than that for a MNMS even when they possess an equal amount of coherence as measured by the relative entropy of coherence. We have illustrated our main observations in graphs (Figs. \ref{fig1}-\ref{fig7}). Further, we have also found analytical expressions for the states forming the upper and lower bounds of the scatter plots of the $l1$ norm of coherence and the relative entropy of coherence for $X$ states. 
It is interesting that the same behavior was observed between the $l1$ norm of coherence and the skew information based measure of coherence, i.e., the boundary states are observed to be the same as in the previous case. However, no such relation between the relative entropy of coherence and the skew information based measure of coherence has been found, as states with both higher and lower values of $C_{\rm rel}$ were observed for the same amount of $C_{\rm skew}$. Further, we have analyzed the results for the case of first-order coherence to check for any relation between these two completely different types of coherence measures. However, first-order coherence, being connected with the purity of reduced states, has no direct relation with any measure used in the resource theory of quantum coherence. Also, considering its close analogy with entanglement measured using concurrence, we have studied the relation between first-order coherence and concurrence to reveal that they are not related. However, the maximum first-order coherence is found to be more closely related to concurrence, as both are basis independent quantities. Note that neither first-order nor hidden coherence shows monotonic behavior with concurrence. Also, during the present study, we have restricted ourselves to the computational basis as our incoherent basis, as $X$ states are defined in this basis only; a different choice of incoherent basis would have revealed the same results with different boundary states.}
Not only have we shown that the conjecture in Ref. \cite{rana} is satisfied in the present study (at least for $X$ states), but, using some of the existing discrete results on the quantum coherence measured for $X$ states, we have also extended our results to a large class of measures of quantum coherence and have shown that they too are not monotone of each other. Specifically, the trace distance and robustness of coherence have the same value as the $l1$ norm of coherence for $X$ states and are thus non-monotonic with the relative entropy of coherence and the skew information based measure of coherence as well.
Finally, we conclude that the quantum coherence measures studied here are not monotone of each other. Probably, at a deeper level they capture different manifestations of nonclassicality. Consequently, there is no way to circumvent the difficulty associated with the comparison of the amount of coherence between two states. Further, the relationship of coherence with different measures of nonclassicality, entanglement and other measures of quantum correlations, such as discord, is still an open problem and this work is expected to provide some deeper understanding of these facets of nonclassicality and their mutual relationship.
\section*{Acknowledgment}
AP thanks Department of Science and Technology (DST), India for the support provided
through the project number EMR/2015/000393. SM thanks Guru Gobind Singh Indraprastha University for STRF. KT thanks the project LO1305 of the Ministry of Education, Youth and Sports of the Czech Republic for support. AP and KT also thank A. Miranowicz, P. Panigrahi and J. Perina Jr. for fruitful discussions and their interest in this work.
\section{Introduction}
One of the most fundamental questions in quantum information concerns the amount of information that can be transmitted reliably through a quantum channel. Despite the significant progress in recent
years~\cite{Holevo06,KR01,Kin02,Kin03,FHMV,Sho02,HHH09,AHW00,Shor04,AB04,Has09,Yard08,Shor04,Brandao,Fuk10,Bra11}, as pointed out in~\cite{Bra11},
this question remained surprisingly wide open. The main reason for that is related to
the additivity nature of the classical or quantum capacities of quantum channels to transmit information~\cite{Holevo06}. Recently, it was shown that both the Holevo expression for the classical capacity~\cite{Has09} and the quantum capacity~\cite{Yard08} are not additive in general. The additivity of the Holevo expression for the classical capacity was an open problem for more than a decade and was shown by Shor~\cite{Shor04} to be equivalent to three other additivity conjectures; namely, the additivity of entanglement of formation, the strong super-additivity of entanglement of formation, and the additivity of the minimum entropy output of a quantum channel.
In~\cite{Has09} Hastings gave a counterexample to the last of
the above additivity conjectures and thereby proved that they are all false. Hastings' counterexamples (see also~\cite{Brandao}) exist in very high dimensions, and an estimate of these extremely high dimensions can be found in~\cite{Fuk10}. Earlier, in~\cite{Shor04}, Shor pointed out that if the additivity conjectures were true, perhaps the first step towards proving them would be to prove local additivity. We show here that this local additivity conjecture is indeed true, despite the existence of counterexamples to the original additivity conjectures. Our results therefore demonstrate that the counterexamples to the original additivity conjecture exhibit a global effect of quantum channels.
As we pointed out in Appendix B of~\cite{FGA},
both the local and global additivity conjectures are false over the real numbers.
This in turn implies that a straightforward argument involving only directional derivatives
cannot provide a proof of local additivity in the general complex case. Hence,
to show local additivity we make strong use of the complex structure.
In quantum information theory, quantum channels are the natural generalizations of stochastic communication channels in classical information theory. They are described in terms of completely-positive trace preserving linear maps (CPT maps).
A CPT map $\mathcal{N}: H_{d_{\rm in}}\to H_{d_{\rm out}}$ takes the set of $d_{\rm in}\times d_{\rm in}$ Hermitian matrices
$H_{d_{\rm in}}$ to a subset of the set of all $d_{\rm out}\times d_{\rm out}$ Hermitian matrices
$H_{d_{\rm out}}$. Any finite dimensional quantum channel can be characterized in terms of a unitary embedding followed by a partial trace (the Stinespring dilation theorem): for any CPT map $\mathcal{N}$ there exists an ancillary space of Hermitian matrices $H_{E}$ such that
$$
\mathcal{N}(\rho)=\hbox{\rm Tr} \, _{E}\left[U(\rho\otimes |0\rangle_{E}\langle 0|) U^{\dag}\right]
$$
where $\rho\in H_{d_{\rm in}}$ and $U$ is a unitary matrix mapping states $|\psi\rangle|0\rangle_E$ with
$|\psi\rangle\in H_{d_{\text{in}}}$ to
$H_{d_{\text{out}}}\otimes H_{E}$.
The minimum entropy output of a quantum channel $\mathcal{N}$ is defined by
$$
S_{\min}(\mathcal{N})\equiv\min_{\rho\in H_{d_{\text{in}},+,1}}S\left(\mathcal{N}(\rho)\right)\;,
$$
where $H_{d_{\text{in}},+,1}\subset H_{d_{\text{in}}}$ is the set of all $d_{\rm in}\times d_{\rm in}$ positive semi-definite matrices with trace one (i.e. density matrices), and $S(\rho)=-\hbox{\rm Tr} \, (\rho\log\rho)$ is the von-Neumann entropy.
Since the von-Neumann entropy is concave it follows that the minimization can be taken over all rank one matrices
$\rho=|\psi\rangle\langle\psi|$ in $H_{d_{\text{in}},+,1}$.
For any such rank one density matrix $\rho$ we can define a bipartite pure state
$|\Psi\rangle=U|\psi\rangle|0\rangle_{E}$ in the bipartite subspace $\mathcal{K}\equiv \{|\Psi\rangle\big|\;|\psi\rangle\in H_{d_{\rm in}}\}$. We therefore find that the minimum
entropy output of the channel $\mathcal{N}$ can be expressed in terms of the entanglement of the bipartite subspace
$\mathcal{K}$ defined by
$$
E(\mathcal{K})\equiv\min_{|\phi\rangle\in\mathcal{K}\;,\;\|\phi\|=1}E(|\phi\rangle)\;,
$$
where $E(|\phi\rangle)\equiv S\left(\hbox{\rm Tr} \, _{E}(|\phi\rangle\langle\phi|)\right)$ is the entropy of entanglement.
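As a numerical sanity check of this definition, the entropy of entanglement can be computed directly from the reduced density matrix. The following NumPy sketch (the Bell state, the product state, and the helper name are our own illustration, not part of the argument) uses the natural logarithm, as in the rest of the paper:

```python
import numpy as np

def entanglement_entropy(x):
    """E(|phi>) = S(Tr_E |phi><phi|), with the state written as a
    coefficient matrix x so that the reduced state is rho_r = x x^dagger."""
    rho = x @ x.conj().T
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                  # drop numerical zeros
    return float(-np.sum(w * np.log(w)))

# A maximally entangled two-qubit state has entropy log 2,
# while a product state has entropy 0.
bell = np.eye(2) / np.sqrt(2)
product = np.array([[1.0, 0.0], [0.0, 0.0]])
print(entanglement_entropy(bell), entanglement_entropy(product))
```

The same routine, minimized over a subspace of coefficient matrices, would give $E(\mathcal{K})$.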
In~\cite{GN} it was pointed out that $E(\mathcal{K})=0$ unless $\dim\mathcal{K}\leq (d_{\rm out}-1)(\dim H_{E}-1)$.
This claim follows directly from the fact that the number of (bipartite) states in an unextendible product basis is
at least $d_{\rm out}+\dim H_{E}-1$~\cite{Ben99}.
With these notations, the non-additivity of the minimum entropy output of a quantum channel is equivalent to
the existence of two subspaces $\mathcal{K}_1\subset\mathbb{C}^{n_1}\otimes\mathbb{C}^{m_1}$ and $\mathcal{K}_2\subset\mathbb{C}^{n_2}\otimes\mathbb{C}^{m_2}$ such that
$$
E(\mathcal{K}_1\otimes\mathcal{K}_2)<E(\mathcal{K}_1)+E(\mathcal{K}_2)\;.
$$
In what follows we will prove the local additivity of entanglement of subspaces, which is equivalent to the local additivity of the minimum entropy output.
The rest of this paper is organized as follows. In section~\ref{local} we find and simplify the first and second directional derivatives of the von-Neumann entropy of entanglement. In section~\ref{additive} we prove our main result of local additivity which is stated in Theorem~\ref{main} for the non-singular case. In section~\ref{sing} we prove Theorem~\ref{main} for the singular case. We end with a discussion in section~\ref{conc}.
\section{Local Minimum}\label{local}
Let $\mathcal{K}\subset\mathbb{C}^{n}\otimes\mathbb{C}^{m}$ be a subspace of bipartite entangled states.
Since the bipartite Hilbert space $\mathbb{C}^{n}\otimes\mathbb{C}^{m}$ is isomorphic to the Hilbert space
of all $n\times m$ complex matrices $\mathbb{C}^{n\times m}$, we can view any bipartite state
$|\psi\rangle^{AB}=\sum_{i,j}x_{ij}|i\rangle|j\rangle$ in $\mathcal{K}$ as an $n\times m$ matrix $x$.
The reduced density matrix of $|\psi\rangle^{AB}$ is then given by $\rho_r\equiv\hbox{\rm Tr} \, _{B}|\psi\rangle^{AB}\langle\psi|=xx^*$,
and the entropy of entanglement of $|\psi\rangle^{AB}$ is given by
\begin{equation}\label{entropy}
E(x)\equiv-\hbox{\rm Tr} \, \left(xx^*\log xx^*\right)\;.
\end{equation}
In our notations, instead of using a dagger, we use $x^*$ to denote the hermitian conjugate of the matrix $x$.
If $x\in\mathcal{K}$ is a local minimum of $E$ in $\mathcal{K}$, then there exists a neighbourhood of $x$ in $\mathcal{K}$
such that $x$ is the minimum in that neighbourhood. Any state in the neighbourhood of $x$ can be written as
$ax+by$, where $a,b\in\mathbb{C}$ and $y\in\mathcal{K}$ is a matrix orthogonal to $x$; i.e. $\hbox{\rm Tr} \, (xy^*)=0$.
We also assume that the state is normalized so that $|a|^2+|b|^2=1$.
Now, since the function $E(x)$ is independent of a global phase, we can assume that $a$ is a positive \emph{real} number.
We can also assume that $b$ is real since we can absorb its phase into $y$ (adding a phase to $y$ will not change its orthogonality to $x$). Thus, any normalized state in the neighbourhood of $x$ can be written as
$$
\frac{x+ty}{\sqrt{1+t^2}}\;\;\text{with}\;\;\hbox{\rm Tr} \, (xy^*)=0\;,
$$
where $t\equiv b/a$ is a small real number and $y$ is normalized (i.e. $\hbox{\rm Tr} \, (yy^*)=1$).
\begin{definition}\label{maindef}
$\;$\\
\textbf{(a)} A matrix $x\in\mathcal{K}$ is said to be a critical point of $E(x)$ in $\mathcal{K}$ if
$$
D_{y}E(x)\equiv \frac{d}{dt}E\left(\frac{x+ty}{\sqrt{1+t^2}}\right)\Big|_{t=0}=0\;\;\;\forall\;y\in x^{\perp}
$$
where the notation $D_{y}E(x)$ indicates that we are taking the directional derivative of $E$ in the direction of $y$,
and $x^\perp\subset\mathcal{K}$ denotes the subspace of all the matrices $y$ in $\mathcal{K}$ for which
$\hbox{\rm Tr} \, (xy^*)=0$.\\
\textbf{(b)} A matrix $x\in\mathcal{K}$ is said to be a non-degenerate local minimum of $E(x)$ in $\mathcal{K}$ if it is critical
and
$$
D_{y}^{2}E(x)\equiv \frac{d^2}{dt^2}E\left(\frac{x+ty}{\sqrt{1+t^2}}\right)\Big|_{t=0}>0\;\;\;\forall\;y\in x^{\perp},
$$
where we also allow $D_{y}^{2}E(x)=+\infty$.
Moreover, a critical $x\in\mathcal{K}$ is said to be degenerate if there exists at least one direction $y$ such that
$D_{y}^{2}E(x)=0$.
\end{definition}
In order to prove local additivity we will need to calculate the above directional derivatives.
This can be done by expressing the logarithm as an integral~\cite{Yard} (see also~\cite{OP04,Petz}). However, in that technique
all the quantities are expressed by integrals, and some of these integral expressions do not lead to additivity
as transparently as the divided difference method does.
We therefore apply below a new technique that is based on the \emph{divided difference}~\cite[(6.1.17)]{HJ99}.
One of the advantages of the divided difference approach, is that it enables one to calculate and express
all directional derivatives \emph{explicitly} with no integrals involved. Before introducing the divided difference approach, we will first discuss briefly the affine parametrization.
In our calculations we will assume that $x$ is diagonal (or equivalently, the bipartite state $x$ represents is given in its
Schmidt form). This assumption follows from
the singular value decomposition theorem; namely, we can always find unitary matrices $u\in\mathbb{C}^{n\times n}$ and $v\in\mathbb{C}^{m\times m}$ such that $uxv$ is an $n\times m$ diagonal matrix with non-negative real numbers (the singular values of $x$) on the diagonal. Since $E(x)=E(uxv)$ we can assume without loss of generality
that $x$ is a diagonal matrix.
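The invariance $E(x)=E(uxv)$ used in this reduction is easy to verify numerically. In the sketch below (the random state and the QR-based construction of random unitaries are our own choices), $E$ is computed from the squared singular values, i.e. the Schmidt coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

def E(x):
    p = np.linalg.svd(x, compute_uv=False) ** 2   # Schmidt coefficients p_i
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

def random_unitary(n):
    """Random unitary from the QR decomposition of a Gaussian matrix."""
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

x = rng.normal(size=(3, 4)) + 1j * rng.normal(size=(3, 4))
x /= np.linalg.norm(x)                            # Tr(x x*) = 1
u, v = random_unitary(3), random_unitary(4)
print(abs(E(x) - E(u @ x @ v)))                   # ~ 0
```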
\subsection{The Affine Parametrization}
Up to second order in $t$ we have
\begin{align}
\rho(t) \equiv\frac{(x+ty)(x^{*}+ty^{*})}{1+t^2}
&=\left(xx^{*}+t(xy^{*}+yx^{*})+t^{2}yy^{*}\right)(1-t^{2})\nonumber\\
&=xx^{*}+t(xy^{*}+yx^{*})+t^2(yy^{*}-xx^{*})
=\rho+t\gamma_0+t^2\gamma_1\;,
\end{align}
where $\rho=xx^*$, $\gamma_0\equiv xy^*+yx^*$, and $\gamma_1\equiv yy^*-xx^*$.
Note that $\hbox{\rm Tr} \, \rho=1$ and $\hbox{\rm Tr} \, \gamma_0=\hbox{\rm Tr} \, \gamma_1=0$,
where without loss of generality we assumed $\hbox{\rm Tr} \, (yy^*)=1$ since we can absorb the normalization factor of $y$ into $t$. We are interested in taking the first and second derivative of
$$
E\left(\frac{x+ty}{\sqrt{1+t^2}}\right)=S(\rho(t))=S(\rho+t\gamma_0+t^2\gamma_1)\;.
$$
In this section we assume that $\rho=xx^*$ is an $n\times n$ non-singular matrix.
Denote
$$
\sigma(t)\equiv\rho+t\gamma_0\;.
$$
In the next proposition we relate $S(\rho(t))$ with $S(\sigma(t))$.
\begin{proposition}\label{affparfor}
Let $\rho(t),\sigma(t),\rho,\gamma_0$ and $\gamma_1$ as above. Then
\begin{equation}\label{affparfor1}
S(\rho(t))=S(\sigma(t))-t^2\hbox{\rm Tr} \, \left[\gamma_1 \log\rho\right]
+O(t^3)
\end{equation}
\end{proposition}
\begin{proof}
Since $\rho$ is non-singular, also $\rho(t)$ and $\sigma(t)$ are non-singular for small enough $t$; moreover, $0<\rho(t)\leq I$, so the spectrum of $I-\rho(t)$ lies in $[0,1)$ and the series below converges. Using the Taylor expansion
$$
\log\rho(t)=\log[I-(I-\rho(t))]=-\sum_{n=1}^{\infty}\frac{\left(I-\sigma(t)-t^2\gamma_1\right)^n}{n}\;,
$$
we get
$$
-\hbox{\rm Tr} \, \left[\rho\log\rho(t)\right]=\sum_{n=1}^{\infty}\frac{1}{n}\hbox{\rm Tr} \, \left[\rho\left(I-\sigma(t)-t^2\gamma_1\right)^n\right]\;.
$$
Expanding the term in the trace above up to second order in $t$ gives
$$
\hbox{\rm Tr} \, \left[\rho\left(I-\sigma(t)-t^2\gamma_1\right)^n\right]=\hbox{\rm Tr} \, \left[\rho\left(I-\sigma(t)\right)^n\right]
-t^2n\hbox{\rm Tr} \, \left[\rho(I-\rho)^{n-1}\gamma_1\right]+O(t^3)\;.
$$
We therefore have
$$
-\hbox{\rm Tr} \, \left[\rho\log\rho(t)\right]=-\hbox{\rm Tr} \, \left[\rho\log\sigma(t)\right]-t^2\sum_{n=1}^{\infty}\hbox{\rm Tr} \, \left[\rho(I-\rho)^{n-1}\gamma_1\right]+O(t^3)\;.
$$
Since $\rho^{-1}=\sum_{n=1}^{\infty}(I-\rho)^{n-1}$ and $\hbox{\rm Tr} \, (\gamma_1)=0$ we conclude
$$
\hbox{\rm Tr} \, \left[\rho\log\rho(t)\right]=\hbox{\rm Tr} \, \left[\rho\log\sigma(t)\right]+O(t^3)\;.
$$
Thus, since $\rho(t)=\sigma(t)+t^2\gamma_1$ and $\log\rho(t)=\log\sigma(t)+O(t^2)$, the same argument applies with $\rho$ replaced by $\sigma(t)$, and we obtain $$\hbox{\rm Tr} \, [\rho(t)\log\rho(t)]=\hbox{\rm Tr} \, \left[\sigma(t)\log\sigma(t)\right]+t^2\hbox{\rm Tr} \, \left[\gamma_1 \log\rho\right]+O(t^3).$$
This completes the proof.
\end{proof}
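Proposition~\ref{affparfor} can be checked numerically: the difference between $S(\rho(t))$ and $S(\sigma(t))-t^2\hbox{\rm Tr} \, [\gamma_1\log\rho]$ should decay like $t^3$. A sketch (the particular diagonal $x$ and random orthogonal $y$ are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
x = np.diag(np.sqrt([0.5, 0.3, 0.2])).astype(complex)   # rho = x x* non-singular
y = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
y -= np.trace(x.conj().T @ y) * x                        # Tr(x y*) = 0
y /= np.linalg.norm(y)

rho = x @ x.conj().T
g0 = x @ y.conj().T + y @ x.conj().T                     # gamma_0
g1 = y @ y.conj().T - rho                                # gamma_1
log_rho = np.diag(np.log(np.diag(rho).real))             # rho is diagonal here

def S(m):
    w = np.linalg.eigvalsh(m)
    w = w[w > 1e-14]
    return float(-np.sum(w * np.log(w)))

def err(t):
    lhs = S(rho + t * g0 + t * t * g1)                   # S(rho(t))
    rhs = S(rho + t * g0) - t * t * np.real(np.trace(g1 @ log_rho))
    return abs(lhs - rhs)

print(err(1e-2), err(1e-3))   # the discrepancy decays like t^3
```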
This simple relation between $S(\rho(t))$ and $S(\sigma(t))$ is very useful since now we can focus on the Taylor expansion of the simpler function $S(\sigma(t))$.
\subsection{The method of divided difference}
To calculate the first and second derivatives of $S(\sigma(t))$, we first evaluate the Taylor expansion of
a complex valued function $f:\mathbb{C}\to\mathbb{C}$, which we later assume can be extended to act on
$n\times n$ complex matrices.
We will make use of the notion of the \emph{divided difference} for $f$, which we refer the reader to~\cite[(6.1.17)]{HJ99}
for more details. The divided difference for a function $f:\mathbb{C}\to\mathbb{C}$,
given a sequence of distinct complex points,
$\alpha_i\in\mathbb{C}, i=1,\ldots,n$, is defined for $i=0,1$ by
\begin{align}\label{defdivdif1}
& \triangle^0 f(\alpha_1):=f(\alpha_1)\\
& \triangle^1 f(\alpha_1,\alpha_2)\equiv\triangle f(\alpha_1,\alpha_2):=\frac{f(\alpha_1)-f(\alpha_2)}{\alpha_1-\alpha_2},
\end{align}
and defined inductively by
\begin{eqnarray}
\triangle^i f(\alpha_1,\ldots,\alpha_i,\alpha_{i+1})=
\frac{\triangle^{i-1} f(\alpha_1,\ldots,\alpha_{i-1},\alpha_{i}) - \triangle^{i-1} f(\alpha_1,\ldots,\alpha_{i-1},\alpha_{i+1})}
{\alpha_i-\alpha_{i+1}},
\end{eqnarray}
for $i=2,3,\ldots,n$. It is well known that $\triangle^i f(\alpha_1,\ldots,\alpha_i,\alpha_{i+1})$ is a symmetric function in $\alpha_1,\ldots,\alpha_{i+1}$,
e.g.~\cite[p.~393]{HJ99}. For points that are not distinct it is defined by an appropriate limit. For example, for $x\ne y$ we have
\begin{align}\label{g1g}
& \triangle f(x,x)=f'(x)\nonumber\\
& \triangle^2 f(x,x,y)=\frac{f'(x)}{(x-y)} -\frac{f(x)-f(y)}{(x-y)^2}\\
& \triangle^2 f(x,x,x)=\frac{1}{2} f''(x).\label{ququ}
\end{align}
Note that~\eqref{ququ} can be obtained from~\eqref{g1g} by setting $h\equiv y-x\to 0$ and expanding
$f(y)=f(x+h)=f(x)+hf'(x)+\frac{1}{2}h^2f''(x)+O(h^3)$.
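These identities are easy to verify numerically; a sketch with $f=\exp$ (the recursive implementation and the sample points are our own):

```python
import numpy as np

def dd(f, *p):
    """Divided difference of f at the points p (assumed distinct), via the recursion."""
    if len(p) == 1:
        return f(p[0])
    return (dd(f, *p[:-1]) - dd(f, *(p[:-2] + (p[-1],)))) / (p[-2] - p[-1])

f = np.exp
# Symmetry of the second divided difference in its arguments:
print(np.isclose(dd(f, 0.3, 1.1, 2.0), dd(f, 2.0, 0.3, 1.1)))                 # True
# Confluent limit: Delta^2 f(x, x, x) -> f''(x)/2
h, x0 = 1e-4, 0.7
print(np.isclose(dd(f, x0, x0 + h, x0 + 2 * h), np.exp(x0) / 2, atol=1e-3))   # True
```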
\begin{theorem}\label{taylor} Let $A=\mathop{{\rm diag}}\nolimits(\alpha_1,\ldots,\alpha_n)\in\mathbb{C}^{n\times n}$ be a diagonal square matrix, and
$B=[b_{ij}]\in\mathbb{C}^{n\times n}$ be a complex square matrix. Assume that $f(x):\mathbb{C}\to\mathbb{C}$ satisfies one of the following conditions:
\begin{enumerate}
\item\label{smoothcase1} $f(x)$ is an analytic function in some domain ${\cal D}\subset \mathbb{C}$ which contains $\alpha_1,\ldots,\alpha_n$,
and can be approximated uniformly in ${\cal D}$ by polynomials.
\item\label{smoothcase2} $\alpha_1,\ldots,\alpha_n$ are in a real open interval $(a,b)$ and $f$ has two continuous derivatives in $(a,b)$.
\end{enumerate}
Then
\begin{equation}\label{polcase1}
f(A+tB)=f(A)+tL_A(B)+t^2Q_A(B)+O(t^3)
\end{equation}
Here $L_A:\mathbb{C}^{n\times n}\to\mathbb{C}^{n\times n}$ is a linear operator, and $Q_A:\mathbb{C}^{n\times n}\to \mathbb{C}^{n\times n}$ is a quadratic homogeneous noncommutative
polynomial in $B$. For $i,j=1,\ldots,n$ we have
\begin{align}\label{LABform}
& [L_A(B)]_{ij}=\triangle f(\alpha_i,\alpha_j)b_{ij}=\frac{f(\alpha_i)-f(\alpha_j)}{\alpha_i-\alpha_j} b_{ij}\\
\label{QABform}
& [Q_A(B)]_{ij}=\sum_{k=1}^n \triangle^2f(\alpha_i,\alpha_k,\alpha_j) b_{ik}b_{kj}.
\end{align}
In particular
\begin{align}
& \hbox{\rm Tr} \, (L_A(B))=\sum_{j=1}^{n}f'(\alpha_j)b_{jj}\\
\label{tracQAB}
& \hbox{\rm Tr} \, (Q_A(B))=\sum_{i,j=1}^n \frac{f'(\alpha_i)-f'(\alpha_j)}{2(\alpha_i-\alpha_j)}b_{ij}b_{ji}.
\end{align}
\end{theorem}
\begin{remark}
The expansion above can be naturally generalized to higher than the second order, but for the purpose of this article, we will
only need to expand $f(A+tB)$ up to the second order in $t$. Moreover, for our purposes we will only need to assume that the $\alpha_i$ are real and that condition~\ref{smoothcase2} on $f$ holds. We kept condition~\ref{smoothcase1} in the theorem just to be a bit more general.
\end{remark}
Note that in all the expressions above, one must identify $\alpha_i=\alpha_j$ with the limit $\alpha_j\to\alpha_i$.
For example, the term
$$
\frac{f'(\alpha_i)-f'(\alpha_j)}{2(\alpha_i-\alpha_j)}=\frac{1}{2}f''(\alpha_i)\;\;\;\text{for}\;\;\alpha_i=\alpha_j\;.
$$
In particular, note that if $B$ is diagonal, Eq.~\eqref{tracQAB} gives the known second order term of the Taylor expansion.
\proof
From the conditions on $f$, it is enough to prove the theorem assuming $f$ is a polynomial.
By linearity, it is enough to prove all the claims for $f(x)=x^m$.
Clearly, in the expansion
$$
(A+tB)^{m}=A^{m}+tL_A(B)+t^2Q_A(B)+O(t^3)
$$
we must have
\begin{align}
& L_A(B)=\sum_{0\le p,q,\; p+q=m-1} A^pBA^q,\label{121}\\
& Q_A(B)=\sum_{0\le p,q,r,\; p+q+r=m-2} A^pBA^qBA^r,
\label{LABQABfor}
\end{align}
where we expanded $(A+tB)^m$ up to first and second order in $t$. All that is left to show is that
these matrices coincide with the ones defined in~Eqs.~(\ref{LABform},\ref{QABform}).
Indeed, since $A$ is diagonal, the matrix elements of the $L_{A}(B)$ in Eq.(\ref{121}) are given by
$$
[L_A(B)]_{ij}=\sum_{0\le p,q,\; p+q=m-1}\alpha_{i}^{p}\alpha_{j}^{q}b_{ij}=\frac{\alpha_{i}^{m}-\alpha_{j}^{m}}{\alpha_{i}-\alpha_{j}}b_{ij}\;,
$$
which coincides with the matrix elements given in Eq.(\ref{LABform}).
In the same way, since $A$ is diagonal, observe that the matrix elements of the $Q_{A}(B)$ in Eq.(\ref{LABQABfor}) are given by
$$
[Q_A(B)]_{ij}=\sum_{k=1}^{n}\sum_{0\le p,q,r,\; p+q+r=m-2}\alpha_{i}^{p}\alpha_{k}^{q}\alpha_{j}^{r}b_{ik}b_{kj}\;.
$$
On the other hand, a straightforward calculation gives for $f(x)=x^m$
$$
\triangle^2 x^m(\alpha_i,\alpha_k,\alpha_j)=\sum_{0\le p,q,r,\; p+q+r=m-2} \alpha_{i}^{p} \alpha_{k}^{q} \alpha_{j}^{r}.
$$
Thus, the expressions in Eq.~(\ref{QABform}) and Eq.~(\ref{LABQABfor}) for $Q_{A}(B)$ are the same.
We now prove Eq.~\eqref{tracQAB}. Observe first that Eq.~\eqref{QABform} yields
\begin{equation}\label{tracQAB1}
\hbox{\rm Tr} \, (Q_A(B))=\sum_{i,j=1}^n \triangle ^2 f(\alpha_i,\alpha_i, \alpha_j)b_{ij}b_{ji}\;,
\end{equation}
where we have used the symmetry $\triangle^2f(\alpha_i,\alpha_j, \alpha_i)=\triangle^2f(\alpha_i,\alpha_i, \alpha_j)$.
Now, since $b_{ij}b_{ji}$ is symmetric under an exchange between $i$ and $j$, we can replace
$\triangle ^2 f(\alpha_i,\alpha_i, \alpha_j)$ in Eq.~(\ref{tracQAB1}) with
$$
\frac{1}{2}\left[\triangle ^2 f(\alpha_i,\alpha_i, \alpha_j)+\triangle ^2 f(\alpha_j,\alpha_j, \alpha_i)\right]
=\frac{1}{2}\triangle f'(\alpha_i,\alpha_j)\;,
$$
where for the last equality we used Eq.~\eqref{g1g}. This completes the proof.\qed
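As a sanity check of Eq.~\eqref{tracQAB}, one can compare the second-order expansion of $\hbox{\rm Tr} \, f(A+tB)$ with an exact evaluation. A numerical sketch with $f=\exp$ and a Hermitian (here real symmetric) $B$, with all data our own choice:

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = np.array([0.2, 0.9, 1.7])
n = len(alpha)
A = np.diag(alpha)
M = rng.normal(size=(n, n))
B = (M + M.T) / 2                      # real symmetric perturbation

# f = exp, so f' = f'' = exp
def fmat(X):                           # f applied to a symmetric matrix
    w, V = np.linalg.eigh(X)
    return V @ np.diag(np.exp(w)) @ V.T

def dd_fp(x, y):                       # divided difference of f' with confluent limit
    return np.exp(x) if np.isclose(x, y) else (np.exp(x) - np.exp(y)) / (x - y)

trL = sum(np.exp(alpha[j]) * B[j, j] for j in range(n))
trQ = sum(0.5 * dd_fp(alpha[i], alpha[j]) * B[i, j] * B[j, i]
          for i in range(n) for j in range(n))

def err(t):
    exact = np.trace(fmat(A + t * B))
    return abs(exact - (np.trace(fmat(A)) + t * trL + t * t * trQ))

print(err(1e-2), err(1e-3))            # the remainder decays like t^3
```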
We now use the above theorem for the Taylor expansion of the function $S(\sigma(t))$ in the neighbourhood of $t=0$.
\subsection{The first and second derivatives of $E(x)$}
We first assume that $\rho$ is non-singular. The case where $\rho$ is singular will be treated separately in section~\ref{sing}.
\begin{theorem}
Let $\rho=\mathop{{\rm diag}}\nolimits\{p_1,\ldots,p_{n}\}$ with $p_j>0$
for $j=1,\ldots,n$. For this case, we get the following expressions:
\begin{align}
D_{y}^{1}E(x) & \equiv \frac{d}{dt}S(\rho(t))\Big|_{t=0}=-\hbox{\rm Tr} \, (\gamma_0\log \rho)\nonumber\\
D_{y}^{2}E(x) & \equiv\frac{d^2}{dt^2}S(\rho(t))\Big|_{t=0}
=-2\left(\hbox{\rm Tr} \, \left[\gamma_1 \log\rho\right]+\sum_{j,k} \frac{\log p_{j}-\log p_{k}}{2(p_j-p_k)}\left|(\gamma_0)_{jk}\right|^2\right)\;.
\label{se}
\end{align}
\end{theorem}
\begin{remark}
The condition for $x\in\mathcal{K}$ to be critical is $D_{y}^{1}E(x)=0$ which is equivalent to $\hbox{\rm Tr} \, [(xy^*+yx^*)\log xx^*]=0$ for all $y\in\mathcal{K}$
such that $\hbox{\rm Tr} \, (xy^*)=0$. Moreover, if $x$ is critical then we also have
$D_{iy}^{1}E(x)=0$ for all $y\in x^\perp\subset\mathcal{K}$.
Hence, if $x$ is critical we must have
\begin{equation}\label{critical}
\hbox{\rm Tr} \, (xy^*\log xx^*)=0
\end{equation}
for all $y\in x^\perp\subset\mathcal{K}$.
\end{remark}
\begin{proof}
Theorem~\ref{taylor} implies that
$$
S(\rho+t\gamma_0)=S(\rho)+tL_{\rho}(\gamma_0)+t^2Q_{\rho}(\gamma_0)+O(t^3)\;,
$$
where $L_\rho$ and $Q_\rho$ are the following linear and quadratic forms
\begin{align*}
& L_{\rho}(\gamma_0)\equiv \sum_{i=1}^{n} g'(p_i)(\gamma_0)_{ii}\\
& Q_{\rho}(\gamma_0)\equiv \sum_{i=1}^{n}\sum_{j=1}^{n}
\frac{g'(p_i)-g'(p_j)}{2(p_i-p_j)}(\gamma_0)_{ij}(\gamma_0)_{ji}\;,
\end{align*}
and $g(t)\equiv-t\log t$. Note that the expressions for $L_{\rho}(\gamma_0)$ and $Q_{\rho}(\gamma_0)$
above are the traces of the analogous expressions given in theorem~\ref{taylor}, since
$S(\rho)$ is defined as the trace of the matrix $g(\rho)=-\rho\log \rho$.
Since $\gamma_0$ is hermitian with zero trace, and $g'(t)=-1-\log t$, we get
\begin{eqnarray}
L_{\rho}(\gamma_0) &=& -\hbox{\rm Tr} \, (\gamma_0\log \rho)\nonumber\\
Q_{\rho}(\gamma_0) &=& -\sum_{j,k} \frac{\log p_{j}-\log p_{k}}{2(p_j-p_k)}\left|(\gamma_0)_{jk}\right|^2\;\label{qxy}\;.
\end{eqnarray}
Combining this with proposition~\ref{affparfor} proves the theorem.
\end{proof}
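Both formulas of the theorem can be checked against finite differences along the curve $(x+ty)/\sqrt{1+t^2}$. A numerical sketch (the specific diagonal $x$ and random orthogonal $y$ are our own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
p = np.array([0.5, 0.3, 0.2])
n = len(p)
x = np.diag(np.sqrt(p)).astype(complex)
y = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
y -= np.trace(x.conj().T @ y) * x          # Tr(x y*) = 0
y /= np.linalg.norm(y)

def E_along(t):                            # E((x + t y)/sqrt(1 + t^2))
    m = (x + t * y) / np.sqrt(1 + t * t)
    s = np.linalg.svd(m, compute_uv=False) ** 2
    s = s[s > 1e-14]
    return float(-np.sum(s * np.log(s)))

g0 = x @ y.conj().T + y @ x.conj().T       # gamma_0
g1 = y @ y.conj().T - x @ x.conj().T       # gamma_1
logp = np.log(p)

D1 = -np.real(np.sum(np.diag(g0) * logp))  # -Tr(gamma_0 log rho)

coef = np.empty((n, n))                    # (log p_j - log p_k)/(2(p_j - p_k))
for j in range(n):
    for k in range(n):
        coef[j, k] = 1 / (2 * p[j]) if j == k else (logp[j] - logp[k]) / (2 * (p[j] - p[k]))
D2 = -2 * (np.real(np.sum(np.diag(g1) * logp)) + np.sum(coef * np.abs(g0) ** 2))

h = 1e-4
D1_num = (E_along(h) - E_along(-h)) / (2 * h)
D2_num = (E_along(h) - 2 * E_along(0) + E_along(-h)) / h ** 2
print(abs(D1 - D1_num), abs(D2 - D2_num))  # both ~ 0
```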
In the following lemma, we rewrite the expression in Eq.~(\ref{se}), which will be useful for the proof of local additivity.
\begin{lemma}\label{lemsecder}
Denote $w=(y+y^*)/2$, and $z=i(y-y^*)/2$. Denote also $r_{jk}=\sqrt{p_j/p_k}$, where $\{p_j\}_{j=1}^{n}$ are the eigenvalues of $\rho=xx^*$.
Then,
the expression in Eq.(\ref{se}) for $D_{y}^{2}E(x)$ can be rewritten as
\begin{align}
D_{y}^{2}E(x) =-2E(x)
-\hbox{\rm Tr} \, \left[(yy^*+y^*y) \log\rho\right]
-2\sum_{j,k}\left(|w_{jk}|^2\Phi(r_{jk})+|z_{jk}|^2\Phi(-r_{jk})\right)\;,\label{gf}
\end{align}
where
\begin{equation}\label{Phi}
\Phi(r)\equiv\frac{1}{2}\frac{r+1}{r-1}\log r^{2}\;\;\;,\; r\in\mathbb{R},
\end{equation}
with the identification $\Phi(1)=2$.
\end{lemma}
\begin{proof}
The expression in Eq.(\ref{se}) for $D_{y}^{2}E(x)$ involves the terms $|(\gamma_0)_{jk}|^2$. Since $x=\mathop{{\rm diag}}\nolimits\{\sqrt{p_1},\ldots,\sqrt{p_n}\}$ is real and diagonal, $x^*=x$ and hence $\gamma_0=xy^{*}+yx^{*}=xy^{*}+yx$. Note that $y^*=w+iz$ and $y=w-iz$, where $w$ and $z$ are the Hermitian matrices defined in the lemma. Thus,
$$
\gamma_0=xw+wx+i(xz-zx)\;.
$$
In terms of the matrix elements $w_{jk}$ and $z_{jk}$ of $w$ and $z$, we have
$$
(\gamma_0)_{jk}=(\sqrt{p_k}+\sqrt{p_j})w_{jk}+i(\sqrt{p_j}-\sqrt{p_k})z_{jk}\;.
$$
The square of this expression can be written as
\begin{align*}
|(\gamma_0)_{jk}|^2 =(\sqrt{p_j}+\sqrt{p_k})^2\left |w_{jk}\right|^2+(\sqrt{p_j}-\sqrt{p_k})^2\left|z_{jk}\right|^2
+i(p_j-p_k)(w_{jk}^{*}z_{jk}-w_{jk}z_{jk}^{*})
\end{align*}
Moreover, expressing $w$ and $z$ back in terms of $y$ gives
$i(w_{jk}^{*}z_{jk}-w_{jk}z_{jk}^{*})=(|y_{kj}|^2-|y_{jk}|^2)/2$.
We can therefore write
\begin{align*}
|(\gamma_0)_{jk}|^2 = (\sqrt{p_j}+\sqrt{p_k})^2\left |w_{jk}\right|^2+(\sqrt{p_j}-\sqrt{p_k})^2\left|z_{jk}\right|^2
+\frac{1}{2}(p_j-p_k)(|y_{kj}|^2-|y_{jk}|^2)\;.
\end{align*}
Substituting this expression, and the value for $\gamma_1=yy^*-xx^*$, into Eq.(\ref{se}) gives
$$
-\frac{1}{2}D_{y}^{2}E(x) =E(x)+
\hbox{\rm Tr} \, \left[yy^* \log\rho\right]
+\sum_{j,k}\log \left(\frac{p_j}{p_k}\right)
\left\{\frac{(\sqrt{p_j}+\sqrt{p_k})^2}{2(p_j-p_k)}\left|w_{jk}\right|^2
+\frac{(\sqrt{p_j}-\sqrt{p_k})^2}{2(p_j-p_k)}\left|z_{jk}\right|^2+\frac{1}{4}(|y_{kj}|^2-|y_{jk}|^2)
\right\}\;.
$$
Note first that the term
$$
\frac{1}{4}\sum_{j,k}\log \left(\frac{p_j}{p_k}\right)
(|y_{kj}|^2-|y_{jk}|^2)
= \frac{1}{2}\hbox{\rm Tr} \, \left[(y^*y-yy^*)\log\rho\right]\;.
$$
Moreover, denoting $r_{jk}=\sqrt{p_j/p_k}$ we get
$$
\frac{(\sqrt{p_j}+\sqrt{p_k})^2}{2(p_j-p_k)}\log \left(\frac{p_j}{p_k}\right)
=\frac{(r_{jk}+1)^{2}}{2(r_{jk}^2-1)}\log r_{jk}^{2}
=\frac{1}{2}\frac{r_{jk}+1}{r_{jk}-1}\log r_{jk}^{2}\equiv \Phi(r_{jk})
$$
Similarly,
$$
\frac{(\sqrt{p_j}-\sqrt{p_k})^2}{2(p_j-p_k)}\log \left(\frac{p_j}{p_k}\right)
=\frac{1}{2}\frac{r_{jk}-1}{r_{jk}+1}\log r_{jk}^{2}
= \Phi(-r_{jk})
$$
With these notations we get
$$
-\frac{1}{2}D_{y}^{2}E(x) =E(x)+
\frac{1}{2}\hbox{\rm Tr} \, \left[(yy^*+y^*y) \log\rho\right]
+\sum_{j,k}\left(|w_{jk}|^2\Phi(r_{jk})+|z_{jk}|^2\Phi(-r_{jk})\right)
$$
This completes the proof.
\end{proof}
In the rest of the paper we will use the notations
\begin{align}
M_{x}(y)&\equiv\sum_{j,k=1}^{n}\left(|w_{jk}|^2\Phi(r_{jk})+|z_{jk}|^2\Phi(-r_{jk})\right)=\hbox{\rm Tr} \, \left[w\Phi_{\rho}^{+}(w)+z\Phi_{\rho}^{-}(z)\right]
\nonumber\\
\Gamma_{x}(y)& \equiv -E(x)-\frac{1}{2}\hbox{\rm Tr} \, \left[(y^*y+yy^*)\log xx^*\right]\;.\label{gam}
\end{align}
where $\Phi_{\rho}^{\pm}$ are self-adjoint linear operators defined in terms of the Hadamard product between the input matrix
and the matrix with elements $\Phi(\pm r_{jk})$. That is,
$$
\left[\Phi_{\rho}^{\pm}(w)\right]_{jk}=\Phi(\pm r_{jk})w_{jk}\;.
$$
With these notations we get that
$D_{y}^{2}E(x)> 0$ if and only if
\begin{equation}\label{delta1}
M_{x}(y)< \Gamma_{x}(y)\;.
\end{equation}
\subsection{The complex structure and additional necessary condition}
If $D_{y}^{2}E(x)> 0$ for all $y$ orthogonal to $x$, then $D_{iy}^{2}E(x)$ is also positive
since $iy$ is orthogonal to $x$. That is,
\begin{equation}\label{delta2}
M_{x}(iy)< \Gamma_{x}(iy)=\Gamma_{x}(y)\;.
\end{equation}
Therefore, we get from Eqs.~(\ref{delta1},\ref{delta2}) that if $x$ is a non-degenerate local minimum then
\begin{align}\label{221}
\frac{1}{2}\left(M_{x}(y)+M_{x}(iy)\right)
= \sum_{j,k}|y_{jk}|^2\tilde{\Phi}(r_{jk})< \Gamma_{x}(y)\;,
\end{align}
where
\begin{equation}\label{Phi0}
\tilde{\Phi}(r):=\frac{1}{2}\left(\Phi(r)+\Phi(-r)\right)=\frac{1}{2}\frac{r^2+1}{r^2-1}\log r^2\;,
\end{equation}
with the identification $\tilde{\Phi}(\pm1)=1$. Let $\tilde{\Phi}_{\rho}$ be the self-adjoint linear operator defined in terms of the Hadamard product between the input matrix and the matrix with components $\tilde{\Phi}(r_{jk})$. With this notation the necessary condition given in Eq.(\ref{221}) can be written as
\begin{eqnarray}
\hbox{\rm Tr} \, \left[y^*\tilde{\Phi}_{\rho}(y)\right]<\Gamma_{x}(y)\;.\label{222}
\end{eqnarray}
A simple analysis of the function $\tilde{\Phi}$ shows that $\tilde{\Phi}(r)\geq 1$ with equality if and only if $r=\pm 1$.
Thus, Eq.(\ref{222}) also implies the following necessary condition on a local minimum:
$$
1 \leq \Gamma_{x}(y)\;,
$$
which can be written as
\begin{equation}\label{elegant}
E(y)-E(x)\geq 1-\frac{1}{2}\left[S(yy^*\|xx^*)+S(y^*y\|xx^*)\right]
\end{equation}
where
$$
S(yy^*\|xx^*)\equiv \hbox{\rm Tr} \, (yy^*\log yy^*)-\hbox{\rm Tr} \, (yy^*\log xx^*)
$$
is the relative entropy. Since $S(yy^*\|xx^*)\geq 0$ with equality if and only if $yy^*=xx^*$, we always have
$S(yy^*\|xx^*)>0$ for $\hbox{\rm Tr} \, (xy^*)=0$. Nevertheless, it is possible that $\hbox{\rm Tr} \, (xy^*)=0$ and yet $S(yy^*\|xx^*)\leq 1$.
In such cases Eq.~\eqref{elegant} gives $E(y)\geq E(x)$ which is consistent with the fact that $x$ is a local min.
\section{Local Additivity}\label{additive}
We now state the main result of this paper.
\begin{theorem}\label{main}
Let $x^{(1)}$ and $x^{(2)}$ be two non-degenerate local minima of $E(x)$ in $\mathcal{K}^{(1)}\subset\mathbb{C}^{n_1\times m_1}$
and $\mathcal{K}^{(2)}\subset\mathbb{C}^{n_2\times m_2}$, respectively. Then, $x^{(1)}\otimes x^{(2)}$ is a non-degenerate local minimum
of $E(x)$ in $\mathcal{K}^{(1)}\otimes\mathcal{K}^{(2)}$. Moreover, if $x^{(1)}$ is degenerate local minimum and
$x^{(2)}$ is non-degenerate local minimum, then $x^{(1)}\otimes x^{(2)}$ is a degenerate local minimum.
\end{theorem}
The theorem above implies, in particular, that if $x^{(1)}$ and $x^{(2)}$ are critical points of $E(x)$ in $\mathcal{K}^{(1)}$
and $\mathcal{K}^{(2)}$, respectively, then, $x^{(1)}\otimes x^{(2)}$ is a critical point of $E(x)$
in $\mathcal{K}^{(1)}\otimes\mathcal{K}^{(2)}$. This fact was observed in~\cite{AIM08} (see also~\cite{Maxim}),
and later was stated in~\cite{FGA}.
It follows from the linearity in $y$ of the condition given in Eq.~(\ref{critical}) for critical points.
More precisely, if $x^{(1)}$ and $x^{(2)}$ are critical points, then $x^{(1)}\otimes x^{(2)}$ is also critical if
(see Eq.~(\ref{critical}))
\begin{align*}
&0 =\hbox{\rm Tr} \, \left[\left(x^{(1)}\otimes x^{(2)}\right)y^*\log \left(x^{(1)}x^{(1)*}\otimes x^{(2)}x^{(2)*}\right)\right]=\\
&\hbox{\rm Tr} \, \left[x^{(1)}y^{(1)*}\log (x^{(1)}x^{(1)*})\right]+\hbox{\rm Tr} \, \left[x^{(2)}y^{(2)*}\log (x^{(2)}x^{(2)*})\right]
\end{align*}
for all $y\in (x^{(1)}\otimes x^{(2)})^{\perp}$, where $y^{(1)*}\equiv\hbox{\rm Tr} \, _2[(I\otimes x^{(2)})y^*]$ and
$y^{(2)*}\equiv\hbox{\rm Tr} \, _1[(x^{(1)}\otimes I)y^*]$. In the equation above we used the additivity of the logarithm function
under tensor products. Moreover, since $y\in (x^{(1)}\otimes x^{(2)})^{\perp}$, we also have
$y^{(1)}\in (x^{(1)})^{\perp}$ and $y^{(2)}\in (x^{(2)})^{\perp}$. Thus, if $x^{(1)}$ and $x^{(2)}$ are critical points,
$x^{(1)}\otimes x^{(2)}$ is also critical~\footnote{In~\cite{AIM08,FGA} it was shown to be true for a large class of functions (not only for the von-Neumann entropy) including all the $p$-norm entropy functions.}.
In the following subsection we provide one of the main ingredients for
the local additivity of the von-Neumann entropy output of a quantum channel.
\subsection{The Subadditivity of $\Phi_{\rho}^{\pm}$}
\begin{lemma}\label{phi0}
Let $\Phi,\tilde{\Phi}:\mathbb{R}\to\mathbb{R}$ be defined as in Eq.(\ref{Phi}) and Eq.~\eqref{Phi0}, respectively.
Then, for any $r,s\in\mathbb{R}$ the following holds:
\begin{equation}\label{rs}
\Phi(rs)\leq\tilde{\Phi}(r)+\tilde{\Phi}(s)
\end{equation}
with equality if and only if $r=s$. In the operator language of Eqs.~(\ref{gam},\ref{222}), the inequality~\eqref{rs} can be expressed as
\begin{eqnarray}
\Phi_{\rho^{A}\otimes\rho^{B}}^{\pm}\leq \tilde{\Phi}_{\rho^A\otimes I^B}+\tilde{\Phi}_{I^A\otimes\rho^B}
\;=\;\tilde{\Phi}_{\rho^A}\otimes I^B+I^A\otimes\tilde{\Phi}_{\rho^B}\;.
\end{eqnarray}
where $O_1\leq O_2$ means that $\hbox{\rm Tr} \, [y^*O_1y]\leq\hbox{\rm Tr} \, [y^*O_2y]$ for all $y$.
\end{lemma}
\begin{proof}
We need to prove that
$$
\frac{rs+1}{rs-1}\log (r^{2}s^2)\leq \frac{r^2+1}{r^2-1}\log r^2+\frac{s^2+1}{s^2-1}\log s^2
$$
This inequality is equivalent to
$$
\left(\frac{r^2+1}{r^2-1}-\frac{rs+1}{rs-1}\right)\log r^2+\left(\frac{s^2+1}{s^2-1}-\frac{rs+1}{rs-1}\right)\log s^2\geq 0\;
$$
which is equivalent to
\begin{equation}\label{ff}
\frac{s-r}{rs-1}\left(f(r)-f(s)\right)\geq 0\;,
\end{equation}
where
$$
f(r)\equiv \frac{r}{r^2-1}\log r^2\;.
$$
That is, we need to prove that $f(r)\geq f(s)$ if $(s-r)/(rs-1)>0$ and $f(r)\leq f(s)$ if $(s-r)/(rs-1)<0$.
From symmetry under exchange of $r$ and $s$, both cases are equivalent, and therefore without loss of generality we assume
$(s-r)/(rs-1)>0$. This inequality is satisfied if \textbf{(a)} $s>r$ and $rs>1$ or \textbf{(b)} $s<r$ and $rs<1$.
A simple analysis of the function $f(r)$ shows that $f$ is odd, and it is monotonically increasing for $-1\leq r\leq 1$ and monotonically decreasing for $|r|>1$. Moreover, note that $f(1/r)=f(r)$.
Consider case $\textbf{(a)}$: If $s>r>1$ then $f(r)\geq f(s)$ since $f$ is monotonically
decreasing in this region. In the same way if $-1>s>r$ then $f(r)\geq f(s)$. Another possibility in this case is that
$0<r<1<1/r<s$. But since both $r$ and $1/s$ are positive and smaller than 1, we get $f(r)\geq f(1/s)=f(s)$, where we have used the fact that $f(r)$ is monotonically increasing for $|r|\leq 1$. The last possibility in this case is that $1/r>s>-1>r$.
For this last possibility both $s$ and $1/r$ are negative numbers bigger than $-1$ and in this region $f$ is monotonically increasing. Thus, $f(r)=f(1/r)\geq f(s)$.
Consider case $\textbf{(b)}$: First note that if $s<0<r$ then $f(s)<0<f(r)$, and if $-1<s<r<1$ then $f(r)\geq f(s)$ since $f$ is monotonically increasing in this region. Another possibility in this case is that
$s<1<r<1/s$. But since both $r$ and $1/s$ are positive and bigger than 1, we get $f(r)\geq f(1/s)=f(s)$, where we have used the fact that $f(r)$ is monotonically decreasing for $r\geq 1$.
Finally, the last possibility in this case is that $1/r<s<-1<r$.
For this last possibility both $s$ and $1/r$ are negative numbers smaller than $-1$ and in this region $f$ is monotonically decreasing. Thus, $f(r)=f(1/r)\geq f(s)$.
In order to prove the equality conditions, we need to show that the expression in Eq.(\ref{ff}) equals zero if and only if
$s=r$. Before proceeding to prove that, we check the case $r=1/s$. In this case, $\Phi(rs)=\Phi(1)=2$ and $\tilde{\Phi}(s)=\tilde{\Phi}(1/r)=\tilde{\Phi}(r)$. That is, if $r=1/s$ then the equality in Eq.~\eqref{rs} holds if and only if $\tilde{\Phi}(r)=1$. As pointed out earlier,
$\tilde{\Phi}(r)=1$ if and only if $r=\pm 1$. We therefore conclude that if $r=1/s$ then the equality in Eq.~(\ref{rs}) holds if and only if
$r=s=\pm 1$. Assume now $rs\neq 1$. In this case, the expression in Eq.(\ref{ff}) equals zero if and only if $f(r)=f(s)$. However,
a simple analysis of the function $f(r)$ implies that $f(r)=f(s)$ if and only if $r=s$ or $r=1/s$. Since we assumed $rs\neq 1$, we get
that $r=s$. This completes the proof.
\end{proof}
\subsection{Proof of Theorem~\ref{main}}
We can assume without loss of generality that $n_1=m_1$, $n_2=m_2$. This can always be done by adding zero rows or columns. However, in this part of the proof we also assume
that both $x^{(1)}$ and $x^{(2)}$ are non-singular. The singular case is treated separately in section~\ref{sing}.
From the singular value decomposition (see the argument below definition~\ref{maindef}) we can assume
without loss of generality that $x^{(1)}=\mathop{{\rm diag}}\nolimits\{\sqrt{p_1},\ldots,\sqrt{p_{n_1}}\} $
and $x^{(2)}=\mathop{{\rm diag}}\nolimits\{\sqrt{q_1},\ldots,\sqrt{q_{n_2}}\}$,
where $p_i$ and $q_j$ are positive and $\sum_{i=1}^{n_1}p_i=\sum_{j=1}^{n_2}q_{j}=1$.
We first assume that both $x^{(1)}$ and $x^{(2)}$ are non-degenerate local minima.
We need to show that $D_{y}^{2}E(x)>0$ for all $y\in x^\perp$, where $x\equiv x^{(1)}\otimes x^{(2)}$.
The most general
$y\in \left(x^{(1)}\otimes x^{(2)}\right)^\perp$ can be written as
\begin{equation}\label{generalform}
y=c_1x^{(1)}\otimes y^{(2)}+c_2y^{(1)}\otimes x^{(2)}+c_3y'\;,
\end{equation}
where $y^{(1)}\in (x^{(1)})^\perp$, $y^{(2)}\in (x^{(2)})^\perp$, and $y'\in\left(x^{(1)}\right)^\perp\otimes \left(x^{(2)}\right)^\perp$
are all normalized. The numbers $c_j$ can be chosen to be real because we can absorb their phases in $y^{(1)}$, $y^{(2)}$, and $y'$. They also
satisfy $c_{1}^{2}+c_{2}^{2}+c_{3}^{2}=1$, so that $y$ is normalized.
Consider first the simple case where $y=x^{(1)}\otimes y^{(2)}$. In this case,
\begin{align}
E\left(\frac{x+ty}{\sqrt{1+t^2}}\right)& =E\left(x^{(1)}\otimes \frac{x^{(2)}+ty^{(2)}}{\sqrt{1+t^2}}\right)\nonumber\\
&=E\left(x^{(1)}\right)+E\left(\frac{x^{(2)}+ty^{(2)}}{\sqrt{1+t^2}}\right)\;.
\end{align}
Since $x^{(2)}$ is a non-degenerate local minimum, we must have $D_{y}^{2}E(x)>0$.
The case $y=y^{(1)}\otimes x^{(2)}$ is similar.
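The additivity $E(u\otimes v)=E(u)+E(v)$ used in the display above holds because the squared Schmidt coefficients of a tensor product are the products $p_iq_j$. As a numerical sanity check (not part of the proof; all variable names below are ours):

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a probability vector (with 0*log 0 = 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(0)
p = rng.random(3); p /= p.sum()   # squared Schmidt coefficients of the first factor
q = rng.random(4); q /= q.sum()   # squared Schmidt coefficients of the second factor

# Squared Schmidt coefficients of the tensor product are all products p_i * q_j.
pq = np.outer(p, q).ravel()

lhs = entropy(pq)                 # entropy of the product state
rhs = entropy(p) + entropy(q)     # sum of the local entropies
```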
Consider now the case in which $y\in \left(x^{(1)}\right)^\perp\otimes \left(x^{(2)}\right)^\perp$.
Using its Schmidt decomposition, we can write it as
\begin{equation}\label{yprime}
y=\sum_{l}c_l y^{(1)}_{l}\otimes y^{(2)}_{l}\;,
\end{equation}
where
\begin{equation}\label{ort}
\hbox{\rm Tr} \, [y^{(1)}_{l}y^{(1)*}_{l'}]= \hbox{\rm Tr} \, [y^{(2)}_{l}y^{(2)*}_{l'}]=\delta_{ll'}\;,
\end{equation}
and $c_l$ are real numbers such that $\sum_{l}c_{l}^{2}=1$.
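The decomposition above is the operator Schmidt decomposition, which can be computed numerically via an SVD of the realigned matrix $y[(i_1,i_2),(j_1,j_2)]\mapsto R[(i_1,j_1),(i_2,j_2)]$. The following sketch (illustrative only; variable names are ours) reconstructs a random normalized $y$ from its Schmidt terms:

```python
import numpy as np

n1, n2 = 2, 3
rng = np.random.default_rng(1)
y = rng.standard_normal((n1*n2, n1*n2)) + 1j*rng.standard_normal((n1*n2, n1*n2))
y /= np.linalg.norm(y)                     # Tr(y y^*) = 1

# Realign y[(i1,i2),(j1,j2)] -> R[(i1,j1),(i2,j2)] and take an SVD.
R = y.reshape(n1, n2, n1, n2).transpose(0, 2, 1, 3).reshape(n1*n1, n2*n2)
U, s, Vh = np.linalg.svd(R)

# Schmidt terms: y = sum_l c_l y1_l (x) y2_l with Hilbert-Schmidt-orthonormal factors.
y1 = [U[:, l].reshape(n1, n1) for l in range(len(s))]
y2 = [Vh[l].reshape(n2, n2) for l in range(len(s))]
y_rec = sum(s[l] * np.kron(y1[l], y2[l]) for l in range(len(s)))
```

The singular values $s_l$ play the role of the real coefficients $c_l$, and the realignment preserves the Frobenius norm, so $\sum_l c_l^2=1$.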
By definition we have
\begin{align}
M_{x}(y)
=\hbox{\rm Tr} \, \left[w^{AB}\Phi_{\rho^A\otimes\rho^B}^{+}(w^{AB})+z^{AB}\Phi_{\rho^A\otimes\rho^B}^{-}(z^{AB})\right]
\;,
\label{tendelta}
\end{align}
where $w^{AB}=(y^*+y)/2$,
$z^{AB}=i(y^*-y)/2$, $\rho^A\equiv x^{(1)}x^{(1)*}$ and $\rho^B\equiv x^{(2)}x^{(2)*}$.
Applying Lemma~\ref{phi0} to both $\Phi_{\rho^A\otimes\rho^B}^{\pm}$ gives:
\begin{align*}
M_{x}(y) & \leq\hbox{\rm Tr} \, \left[
w^{AB}\tilde{\Phi}_{\rho^A\otimes I^{B}}(w^{AB})+w^{AB}\tilde{\Phi}_{I^{A}\otimes\rho^B}(w^{AB})+z^{AB}\tilde{\Phi}_{\rho^A\otimes I^B}(z^{AB})+z^{AB}\tilde{\Phi}_{I^A\otimes \rho^B}(z^{AB})\right]\\
& =\hbox{\rm Tr} \, \left[
y^{*}\tilde{\Phi}_{\rho^A\otimes I^B}(y)+y^{*}\tilde{\Phi}_{I^{A}\otimes\rho^B}(y)\right]\;,
\end{align*}
where $I^A$ and $I^B$ are the identity matrices in the respective spaces, and in the last equality we have used the definitions $w^{AB}=(y^*+y)/2$
and $z^{AB}=i(y^*-y)/2$. Now, by substituting~\eqref{yprime} into the above equation we get
\begin{align*}
M_{x}(y)\leq\sum_{l}c_{l}^{2}\hbox{\rm Tr} \, \left[
y_{l}^{(1)*}\tilde{\Phi}_{\rho^A}(y_{l}^{(1)})+y_{l}^{(2)*}\tilde{\Phi}_{\rho^B}(y_{l}^{(2)})\right]\;,
\end{align*}
where we have used the orthogonality relations in Eq.~\eqref{ort}.
Combining this with Eq.~\eqref{222} gives
\begin{equation}\label{laststep}
M_{x}(y)<\sum_{l}c_{l}^{2}\left(\Gamma_{x^{(1)}}(y^{(1)}_{l})+\Gamma_{x^{(2)}}(y^{(2)}_{l})\right)=\Gamma_{x}(y)
\end{equation}
where the last equality can be verified from the orthogonality relations given in Eq.~\eqref{ort}, and the fact that
\begin{equation}\label{log}
\log xx^*=\log x^{(1)}x^{(1)*}\otimes I^B+I^A\otimes\log x^{(2)}x^{(2)*}\;.
\end{equation}
This completes the proof for $y\in\left(x^{(1)}\right)^\perp\otimes \left(x^{(2)}\right)^\perp$.
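The identity in Eq.~(\ref{log}) is the standard fact that $\log(\rho^A\otimes\rho^B)=\log\rho^A\otimes I^B+I^A\otimes\log\rho^B$ for positive definite matrices. A numerical sanity check (not part of the proof; names below are ours):

```python
import numpy as np
from scipy.linalg import logm

rng = np.random.default_rng(2)

def random_density(n):
    a = rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
    rho = a @ a.conj().T              # positive definite (almost surely)
    return rho / np.trace(rho)

rhoA, rhoB = random_density(2), random_density(3)
IA, IB = np.eye(2), np.eye(3)

lhs = logm(np.kron(rhoA, rhoB))
rhs = np.kron(logm(rhoA), IB) + np.kron(IA, logm(rhoB))
```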
Consider now the most general case where $y\in x^\perp$ has the form given in Eq.~(\ref{generalform}).
Denote
\begin{equation}\label{hhh}
w^{AB}=\frac{1}{2}(y^{*}+y)=c_1x^{(1)}\otimes w^{(2)}+c_2w^{(1)}\otimes x^{(2)}+c_3w'\;,
\end{equation}
where $w'=(y'{}^{*}+y')/2$ and we have used
\begin{align*}
& \frac{1}{2}\left(x^{(1)*}\otimes y^{(2)*}+x^{(1)}\otimes y^{(2)}\right)
=x^{(1)}\otimes\frac{1}{2}\left(y^{(2)*}+y^{(2)}\right)\equiv x^{(1)}\otimes w^{(2)}\\
&\frac{1}{2}\left(y^{(1)*}\otimes x^{(2)*}+y^{(1)}\otimes x^{(2)}\right)
=\frac{1}{2}\left(y^{(1)*}+y^{(1)}\right)\otimes x^{(2)}\equiv w^{(1)}\otimes x^{(2)}\;.
\end{align*}
In the above equation we used the fact that $x^{(1)}$ and $x^{(2)}$ are square diagonal matrices with their singular values on the diagonal. We would like to substitute the expression in Eq.~\eqref{hhh} for $w^{AB}$, into the expression for $M_{x}(y)$ given in Eq.~\eqref{tendelta}.
By doing that we will get expressions with several cross terms. We argue that these cross terms vanish.
To see that consider for example the cross term
$$
c_1c_3\hbox{\rm Tr} \, \left[x^{(1)}\otimes w^{(2)}\Phi_{\rho^A\otimes\rho^B}^{+}(w^{\prime})\right]\;,
$$
and recall that $\rho^A\equiv x^{(1)}x^{(1)*}$ and $\rho^B\equiv x^{(2)}x^{(2)*}$.
Since $\Phi_{\rho^A\otimes\rho^B}^{+}$ is self-adjoint, the above expression can be written as
$$
c_1c_3\hbox{\rm Tr} \, \left[x^{(1)}\otimes w^{(2)}\Phi_{\rho^A\otimes\rho^B}^{+}(w^{\prime})\right]=c_1c_3\hbox{\rm Tr} \, \left[w^{\prime}\Phi_{\rho^A\otimes\rho^B}^{+}(x^{(1)}\otimes w^{(2)})\right]=
c_1c_3\hbox{\rm Tr} \, \left[w^{\prime}\left(x^{(1)}\otimes \Phi_{\rho^B}^{+}(w^{(2)})\right)\right]
$$
where in the last equality we used the identity $\Phi_{\rho^A\otimes\rho^B}^{+}(x^{(1)}\otimes w^{(2)})
= x^{(1)}\otimes \Phi_{\rho^B}^{+}(w^{(2)})$. This identity follows from the definition of $\Phi_{\rho}^{+}$, when working with a basis in which $x^{(1)}$ is diagonal. Now, since the partial trace $\hbox{\rm Tr} \, _1[w' (x^{(1)}\otimes B)]=0$
for all matrices $B$, we have
$$
c_1c_3\hbox{\rm Tr} \, \left[x^{(1)}\otimes w^{(2)}\Phi_{\rho^A\otimes\rho^B}^{+}(w^{\prime})\right]=0\;.
$$
In the same way, we see that all the other cross terms vanish.
Moreover, denote
$$
z^{AB}=\frac{i}{2}(y^{*}-y)=c_1x^{(1)}\otimes z^{(2)}+c_2z^{(1)}\otimes x^{(2)}+c_3z'\;,
$$
where $z^{(1)}$, $z^{(2)}$, and $z'$ are defined similarly to $w^{(1)}$, $w^{(2)}$, and $w'$.
Substituting this expression for $z^{AB}$ in Eq.~\eqref{tendelta} will also lead to vanishing cross terms.
To summarize, by substituting the above expressions for $z^{AB}$ and $w^{AB}$ in Eq.~\eqref{tendelta} we get
$$
M_{x}(y)=c_{1}^{2}M_{x}(x^{(1)}\otimes y^{(2)})+c_{2}^{2}M_{x}(y^{(1)}\otimes x^{(2)})
+c_{3}^{2}M_{x}(y')
$$
However, since we already proved that $x$ is a non-degenerate local minimum in the directions $x^{(1)}\otimes y^{(2)}$,
$y^{(1)}\otimes x^{(2)}$, and $y'$, we get
\begin{equation}\label{gmm}
M_{x}(y)<c_{1}^{2}\Gamma_{x}(x^{(1)}\otimes y^{(2)})+c_{2}^{2}\Gamma_{x}(y^{(1)}\otimes x^{(2)})
+c_{3}^{2}\Gamma_{x}(y')
\end{equation}
Now, note the orthogonality relations in the partial traces: $\hbox{\rm Tr} \, _1[(x^{(1)}\otimes y^{(2)})(y')^*]=\hbox{\rm Tr} \, _2[(x^{(1)}\otimes y^{(2)})(y')^*]=0$ and
$\hbox{\rm Tr} \, _1[(y^{(1)}\otimes x^{(2)})(y')^*]=\hbox{\rm Tr} \, _2[(y^{(1)}\otimes x^{(2)})(y')^*]=0$.
With these relations and from Eq.~\eqref{log}
we get that the expression in the RHS of Eq.(\ref{gmm}) is equal to
$\Gamma_{x}(y)$.
This completes the proof of the main part of the theorem.
To prove the second part of the theorem, assume that $x^{(1)}$ is a degenerate local minimum and $x^{(2)}$ is a non-degenerate local minimum. Following the exact same lines of the proof above, we get that $M_{x}(y')<\Gamma_{x}(y')$ for
$y'\in\left(x^{(1)}\right)^\perp\otimes \left(x^{(2)}\right)^\perp$. This is clear from Eq.~(\ref{laststep}) and the one above it,
where we use the fact that
$$
\hbox{\rm Tr} \, \left[y_{l}^{(2)*}\tilde{\Phi}_{\rho^B}(y_{l}^{(2)})\right]<\Gamma_{x^{(2)}}(y_{l}^{(2)})
$$
since $x^{(2)}$ is a non-degenerate local minimum. Similarly, if $y=x^{(1)}\otimes y^{(2)}$ we get
$M_{x}(y)<\Gamma_{x}(y)$. The only $y\in x^\perp$ for which it is possible to have
$M_{x}(y)=\Gamma_{x}(y)$ is $y=y^{(1)}\otimes x^{(2)}$. However, in this case
\begin{equation}\label{xy}
E\left(\frac{x+ty}{\sqrt{1+t^2}}\right) =
E\left(\frac{x^{(1)}+ty^{(1)}}{\sqrt{1+t^2}}\right)+E\left(x^{(2)}\right)\;,
\end{equation}
so $x$ is a local minimum in this direction as well. Hence, $x$ is a degenerate local minimum.
This completes the proof of the second part of the theorem.
\section{The Singular Case}\label{sing}
In the previous section, we were able to derive the first and second directional derivatives $D_{y}^{1}E(x)$ and $D_{y}^{2}E(x)$ assuming $x$ is non-singular.
In this section we consider the case where $x$ is singular. While the expression for $D_{y}^{1}E(x)$ is the same as in the previous section, the expression for the second derivative is not the same in the singular case. In particular, in the singular case it is possible that $D_{y}^{2}E(x)\equiv\frac{d^2}{dt^2 }S(\rho(t))\big|_{t=0}$ diverges.
Nevertheless, we will see in this section that even if $x$ is singular, $E(x)$ is additive.
For simplicity of the exposition, we will consider here subspaces $\mathcal{K}\subset\mathbb{C}^{n}\otimes\mathbb{C}^{m}$, where $n=m$, since we can always embed $\mathcal{K}$ in $\mathbb{C}^{\max{\{n,m\}}}\otimes\mathbb{C}^{\max{\{n,m\}}}$. The following theorem provides the criterion for the divergence of the second derivative.
\begin{theorem}\label{secdersin} Let $x,y\in\mathcal{K}\subset \mathbb{C}^{n\times n}$, $\hbox{\rm Tr} \, xx^*=\hbox{\rm Tr} \, yy^*=1$ and $\hbox{\rm Tr} \, (xy^*)=0$.
Change the standard orthonormal basis in $\mathbb{C}^n$ to a new orthonormal basis such that $x$ and $y$ have the forms
\begin{equation}\label{xypartition}
x=\left[\begin{array}{cc} x_{11}&0_{r,n-r}\\ 0_{n-r,r}&0_{n-r,n-r}\end{array}\right]\;\text{ and }\;\;
y=\left[\begin{array}{cc} y_{11}&y_{12}\\ y_{21}&y_{22}\end{array}\right],
\end{equation}
where $r$ is the rank of $x$, $0_{i,j}$ are $i\times j$ zero matrices, and $x_{11}, y_{11}\in \mathbb{C}^{r\times r}$.
Then
\begin{equation}\label{srhotexpan}
S(\rho(t))=f(t) -(K+tg(t))t^2\log t^2, \quad K=\hbox{\rm Tr} \, (y_{22}y_{22}^*),
\end{equation}
where $f(t),g(t)$ are analytic functions in a neighbourhood of $0$.
Hence $D^2_yE(x)=+\infty$ if and only if $y_{22}\ne 0$. Furthermore, if $y_{22}=0$ then either $g(t)\equiv 0$ or $g(t)=at^{2k-1}(1+O(t))$, where
$a>0$ and $k$ is a positive integer.
\end{theorem}
A much weaker version of the theorem above can be found in~\cite{FGA}. For the clarity of the exposition in this section,
we leave the proof of Theorem~\ref{secdersin} to appendix~\ref{appsec}.
From the theorem above it follows that w.l.o.g.\ we can set $y_{22}=0$, since otherwise the second derivative is $+\infty$.
This will be useful when proving local additivity for the singular case. However, in the tensor product space, $y$ can be written as in Eq.~(\ref{yprime}). Hence, while we assume that the $(2,2)$ block of the bipartite state $y$ is zero, it is not immediately obvious that the $(2,2)$ blocks of the one-party states $y_{l}^{(1)}$ and $y_{l}^{(2)}$ are also zero. Nevertheless, this is indeed the case, as we show now.
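A small numerical illustration of Theorem~\ref{secdersin} (not part of the proof; the example is ours): for a singular $x$ of rank $r=1$ and a direction $y$ with a nonzero $(2,2)$ block, the smallest eigenvalue of $\rho(t)$ grows like $Kt^2$ with $K=\hbox{\rm Tr} \, (y_{22}y_{22}^*)$, which is the source of the $-Kt^2\log t^2$ term in Eq.~\eqref{srhotexpan}.

```python
import numpy as np

# Rank r = 1 singular x padded with zeros, and a direction y whose
# (2,2) block is the single entry y[1,1]; note Tr(x y^*) = 0.
x = np.diag([1.0, 0.0])
y = np.array([[0.0, 0.3],
              [0.4, 0.5]])
y /= np.linalg.norm(y)                # Tr(y y^*) = 1
K = abs(y[1, 1])**2                   # Tr(y22 y22^*)

ratios = []
for t in [1e-2, 1e-3, 1e-4]:
    xt = (x + t*y) / np.sqrt(1 + t**2)
    lam = np.sort(np.linalg.eigvalsh(xt @ xt.conj().T))
    ratios.append(lam[0] / t**2)      # smallest eigenvalue of rho(t), over t^2
```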
\subsection{Tensor product structure in the singular case}
Let $\mathcal{K}\subset \mathbb{C}^{n\times n}$ be a subspace of matrices that are partitioned as in Eq.~\eqref{xypartition}.
We assume that $\mathcal{K}$ contains a matrix
\begin{equation}\label{xform}
x=\left[\begin{array}{cc} x_{11}&0\\ 0&0\end{array}\right], \quad \hbox{\rm Tr} \, (x_{11}^*x_{11})=1.
\end{equation}
We now choose the following orthonormal basis $x_1,\ldots,x_p, y_1,\ldots,y_q,z_1,\ldots,z_r,w_1,\ldots,w_s\in\mathcal{K}$.
First, $x_1=x$. Then
\begin{enumerate}
\item
$x_1,\ldots,x_p$ is an orthonormal basis of the subspace of $\mathcal{K}$ of matrices of the form $\left[\begin{array}{cc} *&0\\ 0&0\end{array}\right]$.
(It is possible that $p=1$.)
\item
$x_1,\ldots,x_p,y_1,\ldots,y_q$ is an orthonormal basis of the subspace of $\mathcal{K}$ of matrices of the form $\left[\begin{array}{cc} *&*\\ 0&0\end{array}\right]$.
(It is possible that $q=0$.)
\item
$x_1,\ldots,x_p,y_1,\ldots,y_q,z_1,\ldots,z_r$ is an orthonormal basis of the subspace of $\mathcal{K}$ of matrices of the form
$\left[\begin{array}{cc} *&*\\ *&0\end{array}\right]$.
(It is possible that $r=0$.)
\item
$x_1,\ldots,x_p,y_1,\ldots,y_q,z_1,\ldots,z_r,w_1,\ldots,w_s$ is an orthonormal basis of $\mathcal{K}$.
(It is possible that $s=0$.)
\end{enumerate}
We observe the following
\begin{enumerate}
\item The projections of $x_1,\ldots,x_p$ on the block $(1,1)$ are linearly independent.
\item The projections of $y_1,\ldots,y_q$ on the block $(1,2)$ are linearly independent if $q\ge 1$.
\item The projections of $z_1,\ldots,z_r$ on the block $(2,1)$ are linearly independent if $r\ge 1$.
\item The projections of $w_1,\ldots,w_s$ on the block $(2,2)$ are linearly independent if $s\ge 1$.
\end{enumerate}
We now consider two subspaces $\mathcal{K}_i\subset \mathbb{C}^{n_i\times n_i}$ for $i=1,2$. We consider here the most complicated case
in which both matrices $x^{(1)}\in\mathcal{K}_1$ and $x^{(2)}\in\mathcal{K}_2$ are singular. So we assume that each $x^{(i)}$ has the form \eqref{xform}.
For $i=1,2$ we form orthonormal bases
\begin{equation}\label{bases}
x_1^{(i)},\ldots,x_{p_i}^{(i)}, y_1^{(i)},\ldots,y_{q_i}^{(i)},z_1^{(i)},\ldots,z_{r_i}^{(i)},w_1^{(i)},\ldots,w_{s_i}^{(i)}\in\mathcal{K}_i
\end{equation}
exactly as above.
We now form a tensor product of $\mathcal{K}_1\otimes\mathcal{K}_2$ with respect to the partitions of $\mathcal{K}_1,\mathcal{K}_2$ as above.
Let
\begin{equation}\label{ABpart}
A=\left[\begin{array}{cc} A_{11}&A_{12}\\ A_{21}&A_{22}\end{array}\right]\in\mathcal{K}_1,\;
B=\left[\begin{array}{cc} B_{11}&B_{12}\\ B_{21}&B_{22}\end{array}\right]\in\mathcal{K}_2.
\end{equation}
We then agree that the partition in $\mathcal{K}_1\otimes\mathcal{K}_2$ is of the form as the following partition of $A\otimes B$:
\begin{equation}\label{AtensBpart}
A\otimes B=\left[\begin{array}{cccc} A_{11}\otimes B_{11}&A_{11}\otimes B_{12}& A_{12}\otimes B_{11}&A_{12}\otimes B_{12}\\
A_{11}\otimes B_{21}&A_{11}\otimes B_{22}& A_{12}\otimes B_{21}&A_{12}\otimes B_{22}\\
A_{21}\otimes B_{11}&A_{21}\otimes B_{12}& A_{22}\otimes B_{11}&A_{22}\otimes B_{12}\\
A_{21}\otimes B_{21}&A_{21}\otimes B_{22}& A_{22}\otimes B_{21}&A_{22}\otimes B_{22}
\end{array}\right].
\end{equation}
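The partition in Eq.~\eqref{AtensBpart} is simply a row/column permutation of the standard Kronecker ordering, grouping the combined indices $(i_1,i_2)$ by which of the two blocks each factor index falls into. The sketch below (illustrative only; the permutation construction is ours) extracts the top-left block of the reordered product and checks that it equals $A_{11}\otimes B_{11}$:

```python
import numpy as np

n1, r1 = 3, 2        # A is n1 x n1 with an r1 x r1 block A11
n2, r2 = 3, 1        # B is n2 x n2 with an r2 x r2 block B11
rng = np.random.default_rng(3)
A = rng.standard_normal((n1, n1))
B = rng.standard_normal((n2, n2))

# Group the row index (i1, i2) of kron(A, B) by the four cases
# (i1 < r1, i2 < r2), (i1 < r1, i2 >= r2), (i1 >= r1, i2 < r2), the rest.
def perm(n1, r1, n2, r2):
    idx = lambda I1, I2: [i1*n2 + i2 for i1 in I1 for i2 in I2]
    lo1, hi1, lo2, hi2 = range(r1), range(r1, n1), range(r2), range(r2, n2)
    return idx(lo1, lo2) + idx(lo1, hi2) + idx(hi1, lo2) + idx(hi1, hi2)

p = perm(n1, r1, n2, r2)
K = np.kron(A, B)[np.ix_(p, p)]       # reordered Kronecker product

blk11 = K[:r1*r2, :r1*r2]             # first diagonal block of the partition
target = np.kron(A[:r1, :r1], B[:r2, :r2])
```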
\begin{lemma}\label{tensorlem1} Let $\mathcal{K}_1,\mathcal{K}_2$ be two subspaces in $\mathbb{C}^{n_{1}\times n_{1}}$ and $\mathbb{C}^{n_{2}\times n_{2}}$, respectively. Let $C=[C_{ij}]_{i,j=1}^4\in\mathcal{K}_1\otimes\mathcal{K}_2$ be partitioned as
in \eqref{AtensBpart}. Suppose that $C\ne 0$ and $C_{ij}=0$ for $i,j\ge 2$. Write $C$ as a linear combination of the tensor products of the bases of $\mathcal{K}_1$ and $\mathcal{K}_2$, chosen as in~\eqref{bases}. Then each term in this linear combination of $C$ is of the form $\alpha f\otimes g$,
where $\alpha\in\mathbb{C}$, $f\in\mathcal{K}_1$, $g\in\mathcal{K}_2$, and both $f$ and $g$ have the form
$\left[\begin{array}{cc} *&*\\ *&0\end{array}\right]$.
\end{lemma}
\begin{remark}
It is also possible to show that at least one of the matrices $f$ and $g$ must have the form $\left[\begin{array}{cc} *&*\\ 0&0\end{array}\right]$ or $\left[\begin{array}{cc} *&0\\ *&0\end{array}\right]$. However, we will not be using it here.
\end{remark}
\begin{proof}
Suppose the expansion of $C$ contains a term of the form $w^{(1)}_i\otimes w^{(2)}_j$. Look at the block $(4,4)$. The contribution of the
expansion of $C$ to this block comes only from the tensor products of the projections of $w^{(1)}_i$ and $w^{(2)}_j$ on the block $(2,2)$.
Since all these projections are linearly independent we must have that $C_{44}\ne 0$ contrary to our assumption.
Assume now that the expansion of $C$ contains $w_{i}^{(1)}\otimes z_j^{(2)}$. Since the expansion of $C$ does not have terms $w^{(1)}_i\otimes w^{(2)}_j$, the contribution to the block
$C_{43}$ comes only from the projection of $w_{i}^{(1)}$ on the block $(2,2)$ and the projection of $z_j^{(2)}$ on the block $(2,1)$.
Again as all these projections are linearly independent we deduce that $C_{43}\ne 0$, contrary to our assumptions.
Similarly, there are no terms in the expansion of $C$ of the form $w_i^{(1)}\otimes y_j^{(2)}$, since $C_{34}=0$, and there are no terms in the expansion of $C$ of the form $w_i^{(1)}\otimes x_j^{(2)}$ since $C_{33}=0$.
That is, we have shown that the matrices $w_{i}^{(1)}$ do not appear in the expansion of $C$.
In exactly the same way, there are no terms in the expansion of $C$ of the form $z_{i}^{(1)}\otimes w_j^{(2)}$,
$y_{i}^{(1)}\otimes w_j^{(2)}$, and $x_{i}^{(1)}\otimes w_j^{(2)}$, since $C_{42}=0$, $C_{24}=0$, and
$C_{22}=0$, respectively. This completes the proof.
\end{proof}
\subsection{Local additivity in the singular case}
In this subsection we prove Theorem~\ref{main} for the case in which $x^{(1)}$ and $x^{(2)}$ are \emph{singular} local minima of $\mathcal{K}^{(1)}$ and $\mathcal{K}^{(2)}$, respectively.
We therefore choose bases such that $x^{(1)}$ and $x^{(2)}$ are of the form given in Eq.~\eqref{xypartition}, and denote by $r_1$ and $r_2$ their respective ranks.
Assume first that both $x^{(1)}$ and $x^{(2)}$ are \emph{non-degenerate} local minima.
We need to show that $D_{y}^{2}E(x)>0$ for all $y\in x^\perp$, where
$x\equiv x^{(1)}\otimes x^{(2)}$. Note that the partition of $x=[x_{ij}]_{i,j=1}^{4}$ as in Eq.~(\ref{AtensBpart})
gives $x_{ij}=0$ for all $i,j=1,2,3,4$ except for $x_{11}=x^{(1)}_{11}\otimes x^{(2)}_{11}$.
The most general
$y\in \left(x^{(1)}\otimes x^{(2)}\right)^\perp$ can be written as in Eq.~(\ref{generalform}), where $y'$ is of the form given in Eq.~(\ref{yprime}). Consider now
the partition of $y=[y_{ij}]_{i,j=1}^4$ as in Eq.~(\ref{AtensBpart}).
From Theorem~\ref{secdersin} we know that $D_{y}^{2}E(x)=+\infty$ unless
$y_{ij}=0$ for all $i,j=2,3,4$. We therefore assume now that $y_{ij}=0$ for all $i,j=2,3,4$.
In this case, Lemma~\ref{tensorlem1} implies that all the matrices $y_{l}^{(1)}$ and $y_{l}^{(2)}$
in Eq.~(\ref{yprime}) have the form
$$
\left[\begin{array}{cc} *&*\\ *&0\end{array}\right].
$$
That is, their $(2,2)$ block is zero. For this reason,
we replace each subspace $\mathcal{K}^{(i)}\subset\mathbb{C}^{n_i\times n_i}$ $(i=1,2)$ with a smaller subspace
$\mathcal{U}^{(i)}\subset \mathcal{K}^{(i)}$ such that each matrix in the basis of $\mathcal{U}^{(i)}$
has zeros on the $(2,2)$ block. It is left to prove that $x\equiv x^{(1)}\otimes x^{(2)}$ is a local
minimum in $\mathcal{U}^{(1)}\otimes \mathcal{U}^{(2)}$.
Consider the new subspace $\mathcal{U}^{(i)}_{\epsilon}$, for $\epsilon>0$,
where in the orthonormal basis of $\mathcal{U}^{(i)}$ we replace only the first
matrix $x^{(i)}$, i.e.\ the local minimum matrix, with the normalized diagonal matrix
$$
x^{(i)}_{\epsilon}\equiv\frac{1}{\sqrt{1+(n_i-r_i)\epsilon^2}}
\left[\begin{array}{cc} x_{11}^{(i)}&0_{r_i,n_i-r_i}\\ 0_{n_i-r_i,r_i}& \epsilon I_{n_i-r_i}\end{array}\right]\;\;\;i=1,2
$$
where $0_{i,j}$ are $i\times j$ zero matrices and $I_{n_i-r_i}$ are $(n_i-r_i)\times(n_i-r_i)$ identity matrices.
\begin{lemma}\label{yxeps}
Assume $x^{(i)}$ is a non-degenerate local minimum in $\mathcal{U}^{(i)}$, then
$x^{(i)}_{\epsilon}$ is a non-degenerate local minimum in $\mathcal{U}^{(i)}_{\epsilon}$. Moreover, there exists $\delta>0$ and $\epsilon_0>0$ such that if $\epsilon<\epsilon_0$ then $D_{y^{(i)}}^{2}E(x^{(i)}_{\epsilon})>\delta$
for all $y^{(i)}\in\left(x^{(i)}_{\epsilon}\right)^\perp$.
\end{lemma}
\begin{proof}
For simplicity of the exposition we remove the superscript $(i)$ from $x^{(i)}$ and denote $d\equiv n-r$. That is, consider
$$
x=
\left[\begin{array}{cc} x_{11}&0_{r,d}\\ 0_{d,r}& 0_{d,d}\end{array}\right]\;\text{and}\;
x_{\epsilon}\equiv\frac{1}{\sqrt{1+d\epsilon^2}}
\left[\begin{array}{cc} x_{11}&0_{r,d}\\ 0_{d,r}& \epsilon I_{d}\end{array}\right]\;.
$$
We need to show that if $x$ is a non-degenerate local minimum in $\mathcal{U}$ then $x_\epsilon$ is a non-degenerate local minimum in $\mathcal{U}_\epsilon$ for small enough $\epsilon$.
First, we need to show that $x_\epsilon$ remains critical. Indeed, since the condition~\eqref{critical} for criticality
is satisfied for $x$, it is also satisfied for $x_\epsilon$. This is because $x_\epsilon$ is a diagonal matrix and all $y\in x_{\epsilon}^{\perp}\subset\mathcal{U}$ are of the form
$
\left[\begin{array}{cc} *&*\\ *&0_{d,d}\end{array}\right]\;.
$
Second, we need to show that $D_{y}^{2}E(x_\epsilon)>\delta$. In Appendix~\ref{appb} we show
that $D_{y}^{2}E(x_\epsilon)$ does \emph{not} diverge in the limit $\epsilon\to 0$ (assuming $y_{jk}=0$ when both $j>r$ and $k>r$). Now, since we assume $D_{y}^{2}E(x)>0$
for all $y\in x^\perp$, we
can also assume that there exists $\delta'>0$ such that $D_{y}^{2}E(x)>\delta'$ for all $y\in x^\perp$. This is true because the set of all normalized matrices in $x^\perp$ is compact. Hence, from the nice behaviour of $D_{y}^{2}E(x_\epsilon)$ in the limit
$\epsilon\to 0$ (see Appendix~\ref{appb}), we get that
for small enough $\epsilon$ there exists $\delta>0$
such that $D_{y}^{2}E(x_\epsilon)>\delta$ for all $y\in (x_\epsilon)^\perp$. This completes the proof of the lemma.
\end{proof}
We now apply Theorem~\ref{main} to the non-singular case of $x_\epsilon\equiv x^{(1)}_\epsilon\otimes x^{(2)}_\epsilon$.
From Lemma~\ref{yxeps}, the second derivatives $D_{y^{(i)}}^{2}E(x^{(i)}_{\epsilon})>\delta$
for all $y^{(i)}\in\left(x^{(i)}_{\epsilon}\right)^\perp$ and $i=1,2$. Thus, we get that $D_{y}^{2}E(x_\epsilon)>2\delta$ for all $y\in (x_\epsilon)^\perp$. We obtain this by following precisely the same steps as in the proof of Theorem~\ref{main} (in the non-singular case). Letting $\epsilon\to 0$ we deduce that in the direction
of $y$ the second derivative at $x^{(1)}\otimes x^{(2)}$ is strictly positive (greater than or equal to $2\delta$). This completes the proof of the main part of Theorem~\ref{main} for the singular case.
To prove the second part of the theorem, we assume now that $x^{(1)}$ is a degenerate local minimum and $x^{(2)}$ is a non-degenerate local minimum. In this case we only have $D_{y^{(2)}}^{2}E(x^{(2)}_{\epsilon})>\delta$. Nevertheless,
in Appendix~\ref{appb} we show that $D_{y^{(1)}}^{2}E(x^{(1)}_{\epsilon})-D_{y^{(1)}}^{2}E(x^{(1)})$ is of order $\epsilon\log\epsilon^2$. Therefore, since $D_{y^{(1)}}^{2}E(x^{(1)})\geq 0$, it follows that we can choose $\epsilon$ small enough such that $D_{y^{(1)}}^{2}E(x^{(1)}_{\epsilon})>-\delta/2$.
As pointed out in the proof of the non-singular case of theorem~\ref{main}, the only $y\in x^\perp$ (recall $x\equiv x^{(1)}\otimes x^{(2)}$) for which it is possible to have
$D_{y}^{2}E(x)=0$ is $y=y^{(1)}\otimes x^{(2)}$. However, the equality in Eq.~\eqref{xy} implies
that $x$ is a local minimum in this direction and this is also true even if $x^{(i)}$ are singular. We will therefore assume now that $y$ is not of the form $y^{(1)}\otimes x^{(2)}$.
By following precisely the same steps of the proof of Theorem~\ref{main} (in the non-singular case) we get that
for all other $y\in x^\perp$ we have $D_{y}^{2}E(x_\epsilon)>\delta-\delta/2=\delta/2$. We therefore get
$D_{y}^{2}E(x)>0$ in the limit $\epsilon\to 0$. This completes the proof of the second part of theorem~\ref{main}.
\section{Discussion}\label{conc}
We have shown that the minimum entropy output of a quantum channel is locally additive
(assuming at least one of the two local minima is non-degenerate).
Our proof consists of two key ingredients. The first one is
the use of the divided difference approach, which enabled us to calculate directional derivatives explicitly, and
the second one is the explicit use of the complex structure. In Appendix B of~\cite{FGA} we show that there exist counterexamples to local additivity over the real numbers. These counterexamples preclude the existence of
a more straightforward differentiation argument than the complex structure based argument given here.
The fact that the minimum entropy output is not globally additive makes local additivity of even greater interest
to quantum information theorists. It suggests that it is some global feature of the quantum channels involved that corresponds to cases of non-additivity of the minimum entropy output.
Perhaps one way to improve our understanding in this direction is to study properties of generic channels.
In particular, it seems quite possible to us that for generic channels (or generic subspaces) the entropy output has a
\emph{finite} number of isolated non-degenerate critical points.
\emph{Acknowledgments:---}
We acknowledge many fruitful discussions with A. Roy and J. Yard in the earlier stages of this work.
G.G.'s research is supported by NSERC. The authors acknowledge support from PIMS CRG MQI, MITACS, and iCore for Shmuel Friedland's visits to IQIS in Calgary.
\begin{appendix}
\section{\label{appsec}Proof of Theorem~\ref{secdersin}}
\proof Let $\lambda_1(t)\ge\ldots\ge \lambda_n(t)\ge 0$, for $t>0$, be the eigenvalues of $\rho(t)$. Rellich's theorem yields that
each $\lambda_i(t)$ is analytic in $t$ in a neighbourhood of $t=0$. So $\lambda_i(0)=\lambda_i(\rho)>0$ for $i=1,\ldots,r$ and $\lambda_i(0)=0$
for $i=r+1,\ldots,n$. Since each $\lambda_i(t)\ge 0$ it follows that the Taylor expansion of each $\lambda_i(t)\not \equiv 0$, for $i>r$,
must start with $t$ to a positive even power times a positive constant, i.e.,
$\lambda_i(t)=\lambda_{i, 2n_i}t^{2n_i}(1+O(t))$, where $\lambda_{i,2n_i}>0$ and $n_i$ is a positive integer for $i>r$. This shows that $S(\rho(t))=-\sum_{i=1}^n \lambda_i(t)\log\lambda_i(t)$ must be of the form \eqref{srhotexpan}. Furthermore, $K=0$ if and only if $n_i\ge 2$ for all $i>r$. So if $K=0$ and not all $\lambda_i(t)$ are identically zero for $i>r$, then $k=\min\{n_i-1:\ \lambda_{i, 2n_i}>0\}$.
It is left to show that $K=\hbox{\rm Tr} \, (y_{22}y_{22}^*)$.
Let $X=\left[\begin{array}{cc}0&x\\ x^*&0\end{array}\right], Y=\left[\begin{array}{cc}0&y\\ y^*&0\end{array}\right]$.
Recall that the pencil $X+tY$ has $n$ nonnegative and $n$ nonpositive eigenvalues
\[\sigma_1(t)\ge\ldots\ge\sigma_n(t)\ge 0\ge-\sigma_n(t)\ge\ldots\ge-\sigma_1(t).\]
The singular values of $x+ty$ are the $n$ nonnegative eigenvalues of $X+tY$. Hence, the eigenvalues of $\rho(t)$
are $\frac{\sigma_i(t)^2}{1+t^2}$ for $i=1,\ldots,n$. Let $\sigma_i(t)=\sigma_{i,1}t +O(t^2)$ for $t>0$ and $i>r$.
Hence the coefficient of $t^2$ in the $i$-th eigenvalue of $\rho(t)$, for $i>r$, is $\sigma_{i,1}^2$.
Thus $K=\sum_{i=r+1}^n \sigma_{i,1}^2$.\\
Let $P\in \mathbb{C}^{2n\times 2n}$ be the orthogonal projection on the zero eigenspace of $X$. Then $PYP((I-P)\mathbb{C}^{2n})=\mathbf{0}$.
The other possible nonzero eigenvalues of $PYP$ are $\sigma_{r+1,1}\ge\ldots\ge\sigma_{n,1}\ge 0\ge -\sigma_{n,1}\ge\ldots\ge -\sigma_{r+1,1}$,
which are the eigenvalues of the restriction of $PYP$ on the kernel of $\rho$ \cite{Fri78,Kato} or \cite[\S3.8]{Fri10}. The restriction of $PYP$ to the kernel
of $X$ is $\left[\begin{array}{cc}0&y_{22}\\ y_{22}^*&0\end{array}\right]$, obtained by deleting the corresponding rows and columns in $Y$.
Hence
\[2K=2\sum_{i=r+1}^n \sigma_{i,1}^2 =\hbox{\rm Tr} \, ((PYP)^2)=\hbox{\rm Tr} \, (y_{22}y_{22}^*+y_{22}^*y_{22}).\]
This completes the proof.\qed
\section{Formula for the second derivative in the singular case\label{appb}}
\begin{proposition}
Let
\begin{equation}\label{xypartition2}
x_\epsilon=\left[\begin{array}{cc} x_{11}&0_{r,n-r}\\ 0_{n-r,r}&\epsilon I_{n-r,n-r}\end{array}\right]\;\text{ and }\;\;
y=\left[\begin{array}{cc} y_{11}&y_{12}\\ y_{21}&0_{n-r,n-r}\end{array}\right],
\end{equation}
where $0_{i,j}$ are $i\times j$ zero matrices, and $x_{11}, y_{11}\in \mathbb{C}^{r\times r}$, $y_{12}\in \mathbb{C}^{r\times (n-r)}$,
$y_{21}\in \mathbb{C}^{(n-r)\times r}$. We also assume that $x_{11}=\mathop{{\rm diag}}\nolimits\{\sqrt{p_1},\ldots,\sqrt{p_r}\}$ is non-singular. Then, the limit of $D_{y}^{2}E(x_\epsilon)$ as $\epsilon$ goes to zero exists and equals
\begin{eqnarray}\label{mainappen}
\lim_{\epsilon\to 0}D_{y}^{2}E(x_\epsilon)
=D_{y_{11}}^{2}E(x_{11})-2\hbox{\rm Tr} \, \left[\left(y_{12}y_{12}^{*}+y_{21}^{*}y_{21}\right)\log\rho_{11}\right]\;,
\end{eqnarray}
where $\rho_{11}\equiv x_{11}x_{11}^{*}$.
\end{proposition}
\begin{remark}
The contribution of the normalization factor of $x_\epsilon$ is of order $O(\epsilon^2)$ and therefore ignored here.
\end{remark}
\proof The proof is based on a straightforward calculation.
The expression for the second derivative given in Eq.~(\ref{se}) can be written as:
$$
D_{y}^{2}E(x_\epsilon) \equiv\frac{d^2}{dt^2}S(\rho_\epsilon(t))\Big|_{t=0}
=-2\left(S(\rho_\epsilon)+\sum_{j=1}^{n}\sum_{k=1}^{n}G_{jk}\right)\;,
$$
where $\rho_\epsilon\equiv x_\epsilon x^{*}_{\epsilon}$, $\gamma_0\equiv x_\epsilon y^*+yx_{\epsilon}^{*}$, and
\begin{eqnarray}\label{321}
G_{jk}\equiv|y_{jk}|^2\log p_j+ \frac{\log p_{j}-\log p_{k}}{2(p_j-p_k)}\left|(\gamma_0)_{jk}\right|^2\;.
\end{eqnarray}
If both $j,k$ are smaller than or equal to $r$, then clearly those $G_{jk}$ terms contribute to $D_{y_{11}}^{2}E(x_{11})$.
Now, if both $j>r$ and $k>r$ then $y_{jk}=0$ and we have $G_{jk}=0$. Hence, we get
\begin{eqnarray}\label{gres}
D_{y}^{2}E(x_\epsilon) =D_{y_{11}}^{2}E(x_{11})-2\sum_{j=r+1}^{n}\sum_{k=1}^{r}\left(G_{jk}+G_{kj}\right)
\end{eqnarray}
We therefore focus now on the expressions for $G_{jk}$ and $G_{kj}$ in the case $j>r$ and $k\leq r$.
Writing $x_\epsilon=\mathop{{\rm diag}}\nolimits\{\sqrt{p_1},\ldots,\sqrt{p_n}\}$ with $p_j=\epsilon^2$ for $j>r$ we have
\begin{align*}
(\gamma_{0})_{jk}&=\sqrt{p_j}\bar{y}_{kj}+\sqrt{p_k}y_{jk}=\sqrt{p_k}y_{jk}+O(\epsilon)\\
(\gamma_{0})_{kj}&=\sqrt{p_k}\bar{y}_{jk}+\sqrt{p_j}y_{kj}=\sqrt{p_k}\bar{y}_{jk}+O(\epsilon)\;,
\end{align*}
where the last equality was obtained by setting $p_j=\epsilon^2$. We therefore have
$
\left|(\gamma_0)_{jk}\right|^2=\left|(\gamma_0)_{kj}\right|^2
$
up to $O(\epsilon)$. From the expressions above we get for $j>r$ and $k\leq r$ the following formulas:
\begin{align*}
G_{jk}&=|y_{jk}|^2\log p_j+ \frac{\log p_{j}-\log p_{k}}{2(p_j-p_k)}\left(p_k|y_{jk}|^2+O(\epsilon)\right)\\
G_{kj}&=|y_{kj}|^2\log p_k+ \frac{\log p_{k}-\log p_{j}}{2(p_k-p_j)}\left(p_k|y_{jk}|^2+O(\epsilon)\right)
\end{align*}
Since $p_j=\epsilon^2$ and $p_k>0$, we have
\begin{align*}
G_{jk}&=|y_{jk}|^2\log \epsilon^2+ \frac{\log \epsilon^2-\log p_{k}}{2(\epsilon^2-p_k)}\left(p_k|y_{jk}|^2+O(\epsilon)\right)=\frac{1}{2}|y_{jk}|^2\log\epsilon^2+\frac{1}{2}|y_{jk}|^2\log p_k+O(\epsilon\log\epsilon)\\
G_{kj}&=|y_{kj}|^2\log p_k+ \frac{\log p_{k}-\log\epsilon^2}{2(p_k-\epsilon^2)}\left(p_k|y_{jk}|^2+O(\epsilon)\right)
=|y_{kj}|^2\log p_k+\frac{1}{2}|y_{jk}|^2\log p_k-\frac{1}{2}|y_{jk}|^2\log\epsilon^2+O(\epsilon\log\epsilon)
\end{align*}
Hence,
$$
G_{jk}+G_{kj}=|y_{jk}|^2\log p_k+|y_{kj}|^2\log p_k+O(\epsilon\log\epsilon)\;.
$$
By substituting this expression into Eq.~(\ref{gres}) we get Eq.~(\ref{mainappen}). This completes the proof.\qed
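The cancellation of the $\log\epsilon^2$ terms above can be checked numerically. The sketch below (illustrative only; function and variable names are ours) evaluates $G_{jk}$ and $G_{kj}$ from Eq.~\eqref{321} with $p_j=\epsilon^2$ and compares their sum with the stated limit $(|y_{jk}|^2+|y_{kj}|^2)\log p_k$:

```python
import numpy as np

p_k = 0.3
y_jk, y_kj = 0.7 - 0.2j, -0.1 + 0.4j

def G(pj, pk, yjk, ykj):
    # G_{jk} from Eq. (321), with (gamma_0)_{jk} = sqrt(p_j)*conj(y_kj) + sqrt(p_k)*y_jk
    gamma = np.sqrt(pj)*np.conj(ykj) + np.sqrt(pk)*yjk
    return abs(yjk)**2*np.log(pj) + (np.log(pj) - np.log(pk))/(2*(pj - pk))*abs(gamma)**2

eps = 1e-8
total = G(eps**2, p_k, y_jk, y_kj) + G(p_k, eps**2, y_kj, y_jk)
limit = (abs(y_jk)**2 + abs(y_kj)**2) * np.log(p_k)
```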
\end{appendix}
\section{Introduction}
Statistical methodology for identifying periodicities in time series can provide meaningful information about the underlying physical process. Non-stationary behavior seems to be the norm rather than the exception for physiological time series, as time-varying periodicities and other forms of rich dynamical patterns are commonly observed in response to external perturbations and pathological states. For example, body temperature and rest activity might exhibit changes in their periodic patterns as an individual experiences a disruption in their circadian timing system \citep{krauchi1994circadian, komarzynski2018}. Heart rate variability and electroencephalography are other examples of data that are often characterized by time-changing spectral properties, the quantification of which can provide valuable information about the well-being of a subject \citep{malik1996heart, cohen2014analyzing, bruce2016adaptive}. This paper is motivated by modeling airflow trace data obtained from a sleep apnea study, where our objective is to identify and model the recurrence of different periodicities, which are indicative of apneic and hypopneic events.
\subsection{A Case Study on Sleep Apnea in Humans}
Our study focuses on \textit{sleep apnea} \citep{heinzer2015prevalence}, a chronic respiratory disorder characterized by recurrent episodes of temporary ($\geq$2 breaths) cessation of breathing during sleep (about 10 seconds in humans). Sleep apnea negatively affects several organ systems, such as the heart and kidneys in the long term. It is also associated with an increased likelihood of
hypertension, stroke, several types of dementia, cardiovascular diseases, daytime sleepiness, depression and a diminished quality of life
\citep{ancoli1991dementia,teran1999association, peker2002increased, young2002epidemiology, yaggi2005obstructive, cooke2009sustained, dewan2015intermittent}.
Instances of sleep apnea can be subclassified based on the degree of reduction in airflow to the lungs, whereby \textit{apneas} are classified as a
reduction of airflow by at least 90\% and \textit{hypopneas} require a reduction in airflow by at least 30\% (with a reduction of blood oxygen levels by at least 3\%). For example, the airflow trace shown in Figure \ref{fig:human_breathing_trace_intro} was collected from a human over a time span of 5.5 minutes of continuous
breathing. During this time, apneic and hypopneic events were simulated; apneas appear in the first and second minute and around the start of the fifth minute, while there are two instances of hypopnea in the first half of the second minute and at the start of the fourth minute, as marked in Figure \ref{fig:human_breathing_trace_intro}. Note that these events were classified by eye by an experienced experimental researcher. Detecting apneic and hypopneic events during sleep is one of the primary interests of researchers and clinicians working in the field of sleep medicine and relevant healthcare \citep{berry2017aasm}. Manual classification is a time-consuming process, and hence there is a need for a data-driven approach to the automated classification of these types of events.
\begin{figure}[htbp]
\centering
\centerline{\includegraphics[height =6.9 cm, width = 15.1cm]{Plots/data_intro_finale.png}}
\caption{Airflow trace collected over a period of five and a half minutes of continuous breathing, where instances of simulated apnea and hypopnea (highlighted on the graph) recurred over time. }
\label{fig:human_breathing_trace_intro}
\end{figure}
\subsection{Hidden Markov Models and Spectral Analysis}
Approaches to spectral analysis of nonstationary processes were first developed by \citet {priestley1965evolutionary} who introduced the concept of \textit{evolutionary spectra}, namely spectral density functions that are time-dependent as well as localized in the frequency domain. This modeling framework was formalized as a class
of nonstationary time series said to be \textit{locally stationary}
\citep{dahlhaus1997fitting}. Locally stationary processes can be well approximated by \textit{piecewise stationary} processes and several authors proposed to model the time-varying spectra of locally stationary time series through the piecewise constant spectra of the corresponding stationary segments \citep{ adak1998time, ombao2001automatic, davis2006structural}.
This framework was extended to a Bayesian setting by \citet{rosen2009local} and \citet{rosen2012adaptspec} who estimated a time-varying spectral density using a fixed number of smoothing splines and approximated the likelihood function via a product of local Whittle likelihoods \citep{whittle1957curve}.
Their methodology is based on the assumption that the time series are piecewise stationary, and the underlying spectral density for each partition is smooth over frequencies.
In order to deal with changes in spectral densities with sharp peaks which can be observed for some physiological data sets such as respiratory data,
\citet{hadj2018bayesian} proposed a change-point analysis where they
introduced a Bayesian methodology for inferring change-points along with
the number and values of the periodicities affecting each segment. While these approaches allow us to analyse the spectral changing properties of a process
from a retrospective and exploratory point of view, in order to develop a more comprehensive understanding of the process driving the data, further modeling assumptions are needed that quantify the probabilistic rules governing the
transitions as well as recurrence of different oscillatory dynamic patterns. For example, in the context of experimental sleep apnea research,
both correctly classifying apneic states and quantifying their risk of recurrence, possibly in the context of real-time monitoring of patients,
are of major interest to the development of treatments for breathing disorders.
Here, we address the switching dynamics between different oscillatory states in the framework of a hidden Markov model (HMM) that assumes a
discrete latent state sequence whose transition probabilities follow a Markovian structure (see e.g. \citet{rabiner1989tutorial, ephraim2002hidden, cappet}). Conditioned on the state sequence, the observations are
assumed to be independent and generated from a family of probability distributions, which hereafter we refer to as the \textit{emission distributions}. HMMs are arguably among the most popular statistical approaches used for modeling time series data when the observations exhibit nonstationary characteristics that can be represented by an underlying and unobserved hidden process. These modeling approaches, also known as hidden Markov processes and Markov-switching models, became notable through the work of \citet{baum1966statistical} and \citet{baum1967inequality}, and
HMMs have since been successfully used in many different applications
\citep{krogh1994hidden, yau2011bayesian, langrock2013combining, yaghouby2015quasi, huang2018hidden}.
As we are interested in modeling the recurrence of periodicities in the airflow trace data we propose a spectrum-based HMM where the discrete latent state sequence reflects the time-varying changes as well as recurrence of periodic regimes as defined by their spectral properties. Furthermore,
we pursue a flexible non-parametric specification within a Bayesian approach by assuming the infinite-state hierarchical Dirichlet process (HDP) as a building block \citep{teh2006}.
The HDP-HMM approach places a Dirichlet process (DP) prior on the Markovian transition probabilities of the system, while allowing the atoms associated
with the state-specific conditional DPs to be shared between each other, yielding an HMM with a countably infinite number of states. The HDP-HMM therefore not only provides a non-parametric specification of the transition distributions but also removes the need for specifying a priori the number of states.
We focus on the
\textit{sticky} HDP-HMM by \citet{fox2011sticky}, where an additional parameter is introduced to promote self-transition with the effect that the sticky HDP-HMM more
realistically explains the switching dynamics between states that exhibit some temporal mode persistence.
We hence extend the Bayesian methodology for the sticky HDP-HMM
to a spectral representation of the states where
inference for the variable dimensionality regarding the number of periodicities that characterize the emission distributions of the states is achieved
by developing an appropriate trans-dimensional MCMC sampling step \citep{green1995reversible}. To the best of our knowledge, this article presents the first statistical methodology that exploits an HMM for analyzing the spectral properties of a time series while quantifying the probabilistic mechanism governing the transitions and recurrence of distinct dynamic patterns.
The rest of the paper is organized as follows. Section \ref{sec:model} presents the model and the general framework of our Bayesian approach.
Sections \ref{sec:inference} and \ref{sec:simulation_studies} provide the inference scheme and simulation studies to show the performance
of the proposed method. In Section \ref{sec:case_study}, we illustrate the use of our approach to detect instances of apnea in human breathing traces.
\section{A Sticky HDP-HMM with Oscillatory Emissions} \label{sec:model}
Let $\bm{y} = (y_1, \dots, y_T){'}$ be a realization of a time series whose oscillatory behavior may switch
dynamically over time and let $\bm{z} = (z_1, \dots, z_T)'$ denote the hidden discrete-valued states of the Markov chain that
characterize the different periodic regimes, where $z_t$ denotes the state of the Markov chain at time $t$.
Any observation $y_t$, given the state $z_t$, is assumed to be conditionally independent of the observations and
states at other time steps \citep{rabiner1989tutorial}. Here, a highly flexible nonparametric approach is postulated by
assuming that the state space is unbounded, i.e. has infinitely many states as in \citep{beal2002infinite, teh2006}.
Thus, the Markovian structure on the state sequence $\bm{z}$ is given by
\begin{equation} \label{eq:HMM}
z_{\,t} \, | \, z_{\, t-1}, \, ( \,\bm{\pi}_{\,j} \,)_{\,j=1}^{\infty} \, \sim \, \bm{\pi}_{\,z_{\, t-1}}, \quad t = 1, \dots, T,
\end{equation}
where $\bm{\pi}_{\,j} = (\pi_{j1}, \pi_{j2}, \dots)$ represents the (countably infinite) state-specific vector of transition probabilities,
and in particular $\pi_{\,jk} = p \, (z_{\, t} = k \, | \, z_{\, t-1} = j \,)$, where $p \, (\, \cdot \, )$ is used as a generic notation for
probability density or mass function, whichever appropriate.
We assume that the initial state has distribution $\bm{\pi}_{\,0} = (\pi_{01}, \pi_{02}, \dots)$, namely $z_{\, 0} \sim \bm{\pi}_{\,0}$.
Next, assume that each state $j$ represents a periodic regime that is characterized by $d_j$ relevant periodicities whose frequencies
are denoted by $\bm{\omega}_{\, j} = (\omega_{j 1}, \dots, \omega_{j \,d_j})^{'}$, recalling that periodicity is the inverse of frequency.
Let $\bm{\beta}_{\,j } = ( \, \bm{\beta}_{j 1}^{\,'}, \dots, \, \bm{\beta}_{j \,d_{j}}^{\,'})^{\,'}$ be the vector of linear coefficients that can
be associated with the amplitude and phase corresponding to each frequency $\omega_{jl}$ that is of relevance to state $j$,
where $\bm{\beta}_{j l} = (\, \beta_{j l}^{\, (1)}, \, \beta_{j l}^{\, (2)} \, )^{'}$ and $l = 1, \dots, d_j$.
Furthermore, let us define $ \bm{\theta}_{\, j} = (\,d_j, \, \bm{\omega}_{j}^{'} , \, \bm{\beta}_{j}^{\,'}, \sigma^2_j)^{\, '} $, where $\sigma^2_j$
accounts for a state-specific variance. Then, each observation is assumed to be generated from the following emission distribution
\begin{equation} \label{eq:emission_distribution}
y_t \, \big| \, z_{\,t} = j, \, \big( \, \bm{\theta}_{\, j} \,)_{j=1}^{\infty} \, \sim \, \mathcal{N} \, \Big(\, f_{\, t j\,}, \, \sigma_{j}^{2} \, \Big), \quad t = 1, \dots, T,
\end{equation}
where the mean function $f_{\, t j\,}$ for state $j$ at time $t$ is specified to be oscillatory \citep{andrieu1999joint, hadj2018bayesian}, i.e.,
\begin{equation} \label{eq:oscillatory}
f_{\, t j} \, = \bm{x}_t \, \big( \bm{\omega}_{ j} \big)^{\, '} \, \bm{\beta}_{\, j},
\end{equation}
and the vector of basis functions $\bm{x}_t \, \big( \, \bm{\omega}_j \, \big)$ is defined as
\begin{equation}
\label{basis_functions}
\bm{x}_t \, \big( \bm{\omega}_{j} \big) = \big(\cos(2\pi\omega_{j 1}t), \, \sin(2\pi\omega_{j 1}t), \dots, \cos(2\pi\omega_{j \, d_j}t), \, \sin(2\pi\omega_{j \, d_j}t) \big)^{'}.
\end{equation}
The dimension of each oscillatory function depends on the unknown number $d_j$ of periodicities relevant to state $j$.
Given a pre-fixed upper bound for the number of relevant periodicities per state,
$d_{\text{max}}$, the parameter space
$\bm{\Theta}_j$ for the vector of emission parameters $\bm{\theta_j}$ can be written as
$ \bm{\Theta}_j = \bigcup_{d_{j} = 1}^{d_{\text{max}}} \Big\{ \, d_j \, \Big\} \times \Big\{ {\rm I\!R}^{2 d_j} \times \bm{\Omega}_{d_j} \times {\rm I\!R}^{+}\Big\},$
where $ \bm{\Omega}_{d_j} = (0, 0.5)^{\, d_j}$ denotes the sample space for the frequencies of the $j$-th state.
We notice that \citet{hadj2018bayesian} introduced such a cosinor modeling approach for oscillatory data that show regime shifts in periodicity, amplitude and phase, where they assume that, conditional on an (unknown)
number of change-points and their (unknown) positions, the time series process can be approximated by a sequence of segments,
each with mean functions specified by cosinor models of the general form given in Equation \eqref{eq:oscillatory}. Here this approach will be integrated within a nonparametric sticky HDP-HMM model.
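To make the emission model concrete, the following Python sketch (illustrative only and not part of the original methodology; the names `basis_row` and `simulate_state` are ours) simulates a series from a single state by building the cosinor design of Equation \eqref{basis_functions} and adding Gaussian noise with the state-specific variance:

```python
import numpy as np

def basis_row(t, omegas):
    """Row x_t(omega) of cosine/sine regressors, as in Eq. (basis_functions)."""
    cols = []
    for w in omegas:
        cols += [np.cos(2 * np.pi * w * t), np.sin(2 * np.pi * w * t)]
    return np.array(cols)

def simulate_state(T, omegas, beta, sigma2, rng):
    """Draw y_1, ..., y_T from the emission distribution of one state."""
    X = np.stack([basis_row(t, omegas) for t in range(1, T + 1)])  # T x 2d design
    f = X @ beta                                                   # oscillatory mean f_{tj}
    return f + rng.normal(scale=np.sqrt(sigma2), size=T)

rng = np.random.default_rng(1)
# Two periodicities (frequencies 0.04 and 0.11), hence 4 linear coefficients.
y = simulate_state(200, omegas=[0.04, 0.11],
                   beta=np.array([1.0, 0.5, -0.3, 0.2]), sigma2=0.25, rng=rng)
```

Conditioning on the hidden state sequence partitions an observed series into exactly such state-specific oscillatory blocks.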
\subsection{A Bayesian Nonparametric Framework for Unbounded Markov States} \label{sec:BNP_markov}
Dirichlet processes provide a simple description of clustering processes where the number of clusters is not fixed a priori.
Suitably extended to a hierarchical DP, this form of stochastic process provides a foundation for the design of state-space models in which
the number of modes is random and inferred from the data.
In contrast to classic methods that assume a parametric prior on the number of states, or use model selection
techniques to determine the number of regimes in an HMM, here we follow \citet{beal2002infinite, teh2006} and \citet{fox2011sticky},
and assume the number of states to be unknown. We therefore do not need to pre-specify the number of hidden states, which provides a more flexible modeling framework.
The DP may be used in frameworks where an element of the model is a discrete random variable of unknown cardinality \citep{hjort2010bayesian}.
The unbounded HMM (i.e., where the number of possible states is unknown) can be seen as an infinite mixture model,
where the mixing proportions are modelled as DPs \citep{beal2002infinite, rasmussen2002infinite, teh2006}.
The current state $z_{\,t}$ indexes a specific transition distribution $\bm{\pi}_{\,z_{\,t}}$ over the positive integers,
whose probabilities are the mixing proportions for the choice of the next state $z_{t+1}$. To allow the same set of next states to
be reachable from each of the current states, we introduce a set of state-specific DPs, whose atoms are shared between each
other \citep{teh2006}. As in \citet{fox2011sticky} we implement the sticky version by increasing the expected probability of self-transitions.
In particular, the state-specific transition
distribution $\bm{\pi}_j$ follows the HDP
\begin{equation} \label{eq:pi_DP}
\bm{\pi}_j \, \big| \, \eta, \, \kappa, \, \bm{\alpha} \sim \text{DP} \, \Bigg( \, \eta + \kappa, \, \dfrac{\eta \, \bm{\alpha} + \kappa \, \delta_j}{\eta + \kappa} \, \Bigg),
\end{equation}
where
\begin{equation*}\label{eq:GEM}
\bm{\alpha} \, \big| \, \gamma \sim \text{GEM} \, ( \, \gamma \, ).
\end{equation*}
Here, the sequence $\bm{\alpha} = (\, \alpha_{\, k} \,)_{k=1}^{\infty}$ can be seen as a \textit{global} probability distribution over
the positive integers that ties together the transition distributions $\bm{\pi}_j$ and guarantees that they have the same support.
We denote by $\text{GEM}(\gamma)$\footnote{GEM is an abbreviation for Griffiths, Engen and McCloskey, see \citet{ignatov1982constant, perman1992size, pitman1996blackwell} for background.} the \textit{stick-breaking construction} \citep{sethuraman1994constructive, pitman2002poisson} of $\bm{\alpha}$ as
\begin{equation} \label{eq:stick_breaking_1}
\alpha_k = \nu_k \, \prod_{l=1}^{k-1} \, ( 1 - \nu_l),
\end{equation}
where
\begin{equation} \label{eq:stick_breaking_2}
\nu_k \, | \, \gamma \sim \text{Beta} \, (\, 1, \, \gamma \, ),
\end{equation}
for $ k = 1, 2, \dots $, and $\gamma$ is a positive real number that controls the expected value of the number of elements in $\bm{\alpha}$
with significant probability mass. Equations \eqref{eq:stick_breaking_1} and \eqref{eq:stick_breaking_2} can be motivated by the
equivalent process where a stick of length one is split into lengths specified by the weights $\alpha_k$, where the $k^{\, th}$ proportion
is a random fraction $\nu_k$ of the remaining stick after the preceding $(\, k -1 \, )$ proportions have been constructed.
The stick-breaking construction ensures that the sequence $\bm{\alpha}$ satisfies $\sum_{k=1}^{\infty} \alpha_{\, k} = 1$ with probability one.
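A truncated version of this stick-breaking construction can be sketched as follows (an illustrative sketch, not the paper's code; as an assumption of the truncation, the leftover stick mass is placed on the final atom so that the weights sum to one):

```python
import numpy as np

def stick_breaking(gamma, K, rng):
    """Truncated GEM(gamma): K-1 Beta(1, gamma) fractions, remainder on the K-th atom."""
    nu = rng.beta(1.0, gamma, size=K - 1)
    alpha = np.empty(K)
    remaining = 1.0
    for k in range(K - 1):
        alpha[k] = nu[k] * remaining      # break off a fraction of the remaining stick
        remaining *= 1.0 - nu[k]          # shorten the stick for the next break
    alpha[-1] = remaining                 # truncation: leftover mass on the last atom
    return alpha

rng = np.random.default_rng(0)
alpha = stick_breaking(gamma=3.0, K=50, rng=rng)
```

Larger values of $\gamma$ spread the mass over more atoms, matching its role as a concentration parameter.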
Conditional on $\bm{\alpha}$, the hierarchical structure given in Equation \eqref{eq:pi_DP} indicates that the state-specific
transition distribution $\bm{\pi}_{\, j}$ is distributed according to a DP with \textit{concentration parameter} $\eta + \kappa$
and \textit{base distribution} $(\eta \, \bm{\alpha} + \kappa \, \delta_j)/(\eta + \kappa)$, that is itself a DP.
Here, $\eta$ is a positive real number that controls the variability of the $\bm{\pi}_{\, j}$'s around $\bm{\alpha}$,
while $\kappa$ is a positive real number that inflates the expected probability of a self-transition \citep{fox2011sticky},
and $ \delta_j $ denotes a unit-mass measure concentrated at $j$. By setting $\kappa = 0$ in Equation \eqref{eq:pi_DP},
we obtain the non-sticky HDP-HMM framework proposed by \citet{teh2006}. It was noted that this specification could result in an
unrealistically rapid alternation between different (and often redundant) states. The
\textit{sticky} formulation of \citet{fox2011sticky} allows for more temporal state persistence by inflating the
expected probabilities of self-transitions by an amount proportional to $\kappa$, i.e.
\begin{equation*}
\mathbb{E} \, \big[ \, \pi_{jk} \, | \, \eta, \, \kappa, \, \bm{\alpha} \big] = \dfrac{\eta}{\eta + \kappa} \, \alpha_k + \dfrac{\kappa}{\eta + \kappa} \, \delta \,(j, \, k),
\end{equation*}
where $\delta \,(j, \, k) = 1$ if $k = j$ and zero otherwise.
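Under a finite-$K$ Dirichlet approximation to the DP (used later for posterior computation), the sticky construction amounts to adding $\kappa$ to the self-transition entry of the Dirichlet parameter vector. The sketch below (our own illustration, with hypothetical values) verifies the self-transition inflation formula above:

```python
import numpy as np

def sticky_row_params(alpha, j, eta, kappa):
    """Finite-K Dirichlet parameters for pi_j: eta * alpha, plus kappa on entry j."""
    params = eta * np.asarray(alpha, dtype=float).copy()
    params[j] += kappa                       # extra mass on the self-transition
    return params

K, j, eta, kappa = 4, 1, 2.0, 6.0
alpha = np.full(K, 1.0 / K)                  # uniform global distribution
params = sticky_row_params(alpha, j, eta, kappa)
expected = params / params.sum()             # E[pi_jk] under Dirichlet(params)
rng = np.random.default_rng(2)
pi_j = rng.dirichlet(params)                 # one draw of the j-th transition row
```

With these values the expected self-transition probability is $\eta/(\eta+\kappa)\,\alpha_j + \kappa/(\eta+\kappa) = 0.8125$, dominating the off-diagonal entries.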
\section{Inference} \label{sec:inference}
Our inference scheme is formulated within a full Bayesian framework, where our proposed sampler alternates between updating the emission and the HMM parameters. Section \ref{sec:emission_parameters} presents a reversible jump MCMC based algorithm to obtain posterior samples of the emission parameters $\bm{\theta}_j$, where a trans-dimensional MCMC sampler is developed to explore subspaces of variable dimensionality regarding the number of periodicities that characterize state $j$. In Section \ref{sec:HMM_parameters} we address model search on the number of states by exploiting the \textit{Chinese restaurant franchise with loyal customers} \citep{fox2011sticky}, a metaphor that provides the building blocks to perform Bayesian nonparametric inference for updating the HMM parameters. The resulting Gibbs sampler is described in Section \ref{sec:gibbs_sampler}, and in Section \ref{sec:label_switching} we address the label switching problem related to our proposed algorithm.
\subsection{Emission Parameters} \label{sec:emission_parameters}
Conditional on the state sequence $\bm{z}$, the observations $\bm{y}$ are implicitly partitioned into a finite number of states,
where each state refers to at least one segment of the time series. When a type of periodic behavior recurs over time, the corresponding
state is necessarily related to more than one segment. Let $\bm{y}^{\,*}_{j} = ( \, \bm{y}^{\,'}_{j1}, \bm{y}^{\,'}_{j2}, \dots, \bm{y}^{\,'}_{j R_{j}})^{\, '} $
be the vector of (non-adjacent) segments that are assigned to state $j$, where $\bm{y}_{jr}$ denotes the $r^{\, th}$ segment of the time series for which $z_{\,t} = j$ and $R_j$
is the total number of segments assigned to that state. Then, the likelihood of the emission parameter $ \bm{\theta}_j $ given the observations in $\bm{y}^{\,*}_j$ is
\begin{equation}
\label{likelik_segment}
\mathscr{L} ( \, \bm{\theta}_{\, j} \, | \, \bm{y}^{\,*}_{j} \, ) = ( \, 2\pi \sigma^2_j \, )^{\,-T^{\,*}_j/2} \exp \Bigg[ -\dfrac{1}{2\sigma^2_j} \, \sum_{ t \, \in \, I^{\,*}_j} \Bigg\{ \, y_t - \bm{x}_t \, \big( \bm{\omega}_{ j} \big)^{\, '} \, \bm{\beta}_{\, j} \, \Bigg\}^{\, 2} \,\Bigg],
\end{equation} where $I^{\,*}_j$ and $T_j^{\,*}$ denote the set of time points and number of observations, respectively, associated with $\bm{y}_j^{\, *}$.
Following \citet{hadj2018bayesian}, we assume independent Poisson prior distributions for the number of frequencies $d_j$ for each state $j$, constrained to $1 \leq d_j \leq d_{\text{max}}$.
Conditional on $d_j$, we choose a uniform prior for the frequencies $ \omega_{jl} \sim \text{Uniform}(0, \phi_{\omega}), \, \, l = 1, \dots, d_j$, where $0 < \phi_{\omega} < 0.5$. The value of $\phi_\omega$ can be chosen to be informative in the sense that it may reflect prior information about the significant frequencies that drive the overall variation in the data; for example, $\phi_\omega$ may be assumed to lie in the low-frequency range $ 0 < \phi_\omega < 0.1 $. Analogous to a Bayesian regression \citep{Bishop:2006:PRM:1162264}, a zero-mean isotropic Gaussian prior is assumed for the coefficients of the $j^{th}$ regime,
$\bm{\beta}_{\, j} \sim \mathcal{\bm{N}}_{2d_j} (\, \bm{0}, \, \sigma^2_{\beta} \, \bm{I}\, )$, where the prior variance $\sigma^2_\beta$ is fixed at a relatively large value (e.g., in our case $10^{\,2})$. The prior on the residual variance $\sigma^2_j$ of state $j$ is specified as
$\text{Inverse-Gamma} \, \big(\frac{\xi_0}{2}, \frac{\tau_0}{2}\big)$, where $\xi_0$ and $\tau_0$ are fixed at small values, noticing that when $\xi_0 = \tau_0 = 0$
we obtain Jeffreys' uninformative prior \citep{bernardo2009bayesian}.
Bayesian inference on $\bm{\theta}_j$ is built upon the following factorization of the joint posterior distribution
\begin{equation} \label{eq:posterior_theta_j}
p \, ( \, \bm{\theta}_j \, | \, \bm{y}^{\,*}_{j} ) = p \, ( \, d_j \, | \, \bm{y}^{\,*}_{j} ) \, p \, ( \, \bm{\omega}_j \, |\, d_j, \, \bm{y}^{\,*}_{j} ) \, p \, ( \, \bm{\beta}_j \, |\, \bm{\omega}_j, \, d_j, \, \bm{y}^{\,*}_{j} ) \, p \, ( \, \sigma^2_j \, |\, \bm{\beta}_j, \, \bm{\omega}_j, \, d_j, \, \bm{y}^{\,*}_{j} ).
\end{equation} Sampling from \eqref{eq:posterior_theta_j} gives rise to a model selection problem regarding the
number of periodicities, thus requiring an inference algorithm that is able to explore subspaces of variable dimensionality.
This will be addressed by the reversible-jump sampling step introduced in the following section.
\subsubsection{Reversible-Jump Sampler}
Here we provide the details for drawing $\bm{\theta}_j$ from the posterior distribution $ p \, ( \, \bm{\theta}_j \, | \, \bm{y}^{\,*}_{j} )$ given in Equation \eqref{eq:posterior_theta_j}.
Our methodology follows \citet{andrieu1999joint} and \citet{hadj2018bayesian} and is based on the principles of reversible-jump MCMC introduced in \citet{green1995reversible}.
Notice that, conditional on the state sequence $\bm{z}$, the emission parameters $\bm{\theta}_j$ can be updated independently and in parallel for each of the current states.
Hence, for the rest of this subsection and for ease of notation, we drop the subscript corresponding to the $j^{\,th}$ state.
At each iteration of the algorithm, a random choice with probabilities given in \eqref{eq:transition_prob} based on the current number of frequencies $d$ will dictate whether to
add a frequency (\textit{birth step}) with probability $b_d$, remove a frequency (\textit{death step}) with probability $r_d$, or update the frequencies (\textit{within step}) with probability
$\mu_d = 1 - b_d - r_d$, where
\begin{equation}
\label{eq:transition_prob}
b_d = c \, \text{min} \Bigg\{ 1, \dfrac{p\, (\,d+1\,)}{p\,(\,d\,)} \Bigg\}, \qquad r_{d+1} = c \, \text{min} \Bigg\{1, \dfrac{p\,(\,d\,)}{p\,(\,d+1\,)} \Bigg\},
\end{equation} for some constant $c \in [0, \frac{1}{2}]$, where $p \, (\,d\,)$ is the prior probability. Here, as in \citet{hadj2018bayesian}, we fixed $c=0.4$, but other values are admissible
as long as $c \leq 0.5$, which guarantees that $b_d + r_d \leq 1$ for every $d$.
Naturally, $b_{d_{\text{max}}} = r_{1} = 0$. An outline of these moves is as follows (further details are provided in Supplementary Material \textcolor{blue}{B}).
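As an illustration, the move probabilities of Equation \eqref{eq:transition_prob} under a truncated Poisson prior on $d$ can be computed as follows (a sketch with an assumed Poisson rate `lam`; the Poisson normalising constant cancels in the prior ratios):

```python
import math

def move_probabilities(d, d_max, lam=1.0, c=0.4):
    """Birth/death/within probabilities for the current number of frequencies d,
    under a Poisson(lam) prior truncated to 1 <= d <= d_max."""
    def prior(k):
        # Unnormalised truncated Poisson mass; normalisation cancels in ratios.
        if k < 1 or k > d_max:
            return 0.0
        return lam ** k / math.factorial(k)
    b = c * min(1.0, prior(d + 1) / prior(d)) if d < d_max else 0.0   # birth
    r = c * min(1.0, prior(d - 1) / prior(d)) if d > 1 else 0.0       # death
    return b, r, 1.0 - b - r                                          # within = remainder

b, r, mu = move_probabilities(d=2, d_max=5, lam=1.0, c=0.4)
```

The death probability here is the $r_{d+1}$ formula of Equation \eqref{eq:transition_prob} re-indexed to the current value of $d$, and the boundary cases $b_{d_{\text{max}}} = r_1 = 0$ follow from the truncation.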
{\it Within-Model Move:} \label{segment_model_within}
Conditional on the number of frequencies $d$, the vector of frequencies $\bm{\omega}$ is sampled following a similar procedure proposed in
\cite{andrieu1999joint} and \citet{hadj2018bayesian} where we update the frequencies one-at-a-time using Metropolis-Hastings (M-H) steps, with target distribution
\begin{equation} \label{posterior_omega}
p \, (\bm{\omega} \, | \, \bm{\beta}, \, \sigma^2, \, d, \, \bm{y}^{\,*} ) \propto \exp \Bigg[ -\dfrac{1}{2\sigma^2} \sum_{ t \, \in \, I^{\,*}} \Big\{ y_t - \bm{x}_t \, \big( \, \bm{\omega} \, \big)^{\, '} \, \bm{\beta} \, \Big\}^{2} \Bigg] \mathbbm{1}_{\big[ \, \bm{\omega} \, \in \, \bm{\Omega}_{d} \big] \, }.
\end{equation}
Specifically, the proposal distribution is a combination of a Normal random walk centred around the current frequency
and a draw from the periodogram of $\hat{\bm{y}}$, where $\hat{\bm{y}}$ denotes a segment of data randomly chosen from $\bm{y}^{\, *}$
with probability proportional to the number of observations belonging to that segment. Naturally, when a state does not recur over time, i.e.
when a state refers to only one segment of the time series, that segment is chosen with probability one. Next, updating the vector of linear coefficients $\bm{\beta}$ and the residual variance $\sigma^{\,2}$ is carried out as in the usual conjugate Bayesian normal regression setting \citep{gelman2014bayesian}.
Hence, $\bm{\beta}$ is updated in a Gibbs step from
\begin{equation} \label{posterior_beta}
\bm{\beta} \, \big| \, \bm{\omega}, \, \sigma^2, \, d, \, \bm{y}^{\,*} \sim \bm{\mathcal{N}}_{2d} \, (\, \hat{\bm{\beta}}, \, \bm{V}_{\beta}),
\end{equation} where \begin{equation}
\begin{split}
\bm{V}_{\beta} &= \bigg( \sigma^{-2}_\beta \, \bm{I} + \sigma^{-2} \bm{X}^{\,*}(\bm{\omega})^{\,'} \bm{X}^{\,*}(\bm{\omega}) \bigg)^{-1}, \\
\hat{\bm{\beta}} &= \bm{V}_{\beta} \, \big( \sigma^{-2} \bm{X}^{*}(\bm{\omega})^{\,'} \bm{y}^{*} \big),
\end{split}
\end{equation} and we denote with $\bm{X}^{\,*}(\bm{\omega})$ the design matrix with rows given by $\bm{x}_t \, \big( \bm{\omega} \big)^{\,'}$ (Equation \ref{basis_functions}), for $t \in I^{\,*}$.
Finally, $\sigma^{\,2}$ is drawn in a Gibbs step directly from
\begin{equation}
\label{inverse_gamma}
\sigma^2 \, \big| \, \bm{\beta}, \, \bm{\omega}, \, d, \, \bm{y}^{\,*} \sim \text{Inverse-Gamma} \, \Bigg( \, \dfrac{T^{\,*} + \xi_0}{2}, \, \dfrac{\tau_0 + \sum_{ t \, \in \, I^{\,*}}\Big\{ \, y_t - \bm{x}_t \, \big( \, \bm{\omega} \, \big)^{\, '} \, \bm{\beta} \, \Big\}^{2}}{2} \Bigg).
\end{equation}
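The conjugate updates in Equations \eqref{posterior_beta}--\eqref{inverse_gamma} can be sketched in a few lines (an illustration, not the authors' implementation; `update_beta_sigma2` is our name, and the Inverse-Gamma draw is obtained as the reciprocal of a Gamma draw):

```python
import numpy as np

def update_beta_sigma2(y, X, sigma2, sigma2_beta, xi0, tau0, rng):
    """One Gibbs scan for (beta, sigma^2) given the design X = X*(omega)."""
    Tn, p = X.shape
    V = np.linalg.inv(np.eye(p) / sigma2_beta + X.T @ X / sigma2)  # V_beta
    beta_hat = V @ (X.T @ y / sigma2)                              # posterior mean
    beta = rng.multivariate_normal(beta_hat, V)                    # Gibbs draw of beta
    resid = y - X @ beta
    # Inverse-Gamma((Tn + xi0)/2, (tau0 + SSR)/2) via 1 / Gamma(shape, scale).
    sigma2_new = 1.0 / rng.gamma((Tn + xi0) / 2.0, 2.0 / (tau0 + resid @ resid))
    return beta, sigma2_new, beta_hat

rng = np.random.default_rng(3)
t = np.arange(1, 401)
X = np.column_stack([np.cos(2 * np.pi * 0.05 * t), np.sin(2 * np.pi * 0.05 * t)])
beta_true = np.array([2.0, -1.0])
y = X @ beta_true + rng.normal(scale=0.1, size=t.size)
beta, s2, beta_hat = update_beta_sigma2(y, X, sigma2=0.01, sigma2_beta=100.0,
                                        xi0=0.0, tau0=0.0, rng=rng)
```

With a long, low-noise segment the posterior mean essentially recovers the generating coefficients, since the isotropic prior contributes negligible shrinkage.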
{\it Trans-Dimensional Moves: } \label{segment_model_between}
For these types of move, the number of periodicities is either proposed to increase by one (birth) or decrease by one (death) \citep{green1995reversible}.
If a birth move is attempted, we have that $d^{\,p} = d^{\,c} + 1$, where we denote with superscripts \textit{c} and \textit{p}, the current and proposed values, respectively.
The proposed vector of frequencies is obtained by drawing an additional frequency to be included in the current vector. On the other hand, if a death move is chosen,
we have that $d^{\,p} = d^{\,c} - 1$ and one of the current periodicities is randomly selected to be deleted. Conditional on the proposed vector of frequencies,
the vector of linear coefficients and the residual variance are sampled as in the within-model move described above. For both birth and death moves, the updates are jointly
accepted or rejected in a M-H step.
\subsection{HMM Parameters} \label{sec:HMM_parameters}
We explain how to perform posterior inference about the probability distribution $\bm{\alpha}$, the transition probabilities $\bm{\pi}_j$
and the state sequence $\bm{z}$. The \textit{Chinese restaurant franchise with loyal customers} presented by \citet{fox2011sticky},
which extends the \textit{Chinese restaurant franchise} introduced by \citet{teh2006}, is a metaphor that can be used to express the generative
process behind the sticky version of the HDP and provides a general framework for performing inference.
A high level summary of the metaphor is as follows: in a \textit{Chinese restaurant franchise} the analogy of a \textit{Chinese restaurant process}
\citep{aldous1985exchangeability} is extended to a set of restaurants, where an infinite global menu of dishes is shared across these restaurants.
The process of seating customers at tables happens in a similar way as for the Chinese restaurant process, but is restaurant-specific.
The process of choosing dishes at a specific table happens franchise-wide, namely the dishes are selected with probability proportional
to the number of tables (in the entire franchise) that have previously served that dish. However, in the Chinese restaurant franchise with loyal
customers, each restaurant in the franchise has a speciality dish which may keep many generations of customers eating in the same restaurant.
Let $y_{j1}, \dots, y_{j N_j} $ denote the set of customers in restaurant $j$, where $N_j$ is the number of customers in restaurant $j$ and each customer is pre-allocated to a specific restaurant designated by that customer's group $j$. Let us also define indicator random variables $t_{ji}$ and $k_{jt}$, such that $t_{ji}$ indicates the table assignment for customer $i$ in restaurant $j$, and $k_{jt}$ the dish assignment for table $t$ in restaurant $j$. In the Chinese restaurant franchise with loyal customers, customer $i$ in restaurant $j$ chooses a table via $t_{ji} \sim \bm{\tilde{\pi}}_j,$ where $\bm{\tilde{\pi}}_j \sim \text{GEM} \, (\eta + \kappa)$, and $\eta$ and $\kappa$ are as in Section \ref{sec:BNP_markov}. Each table is assigned a dish via $k_{jt} \sim (\eta \, \bm{\alpha} + \kappa \, \delta_j)/(\eta + \kappa) $, so that there is more weight on the house speciality dish, namely the dish that has the same index as the restaurant. Here, $\bm{\alpha}$ follows a DP with concentration parameter $\gamma$ and can be seen as a collection of ratings for the dishes served in the global menu. Note that in the HMM formulated in Equation \eqref{eq:HMM}, the value of the hidden state $z_{\,t}$ corresponds to the dish index, i.e. $ k_{j t_{\!ji}} = z_{ji} = z_{\,t}$, where we suppose there exists a bijection $f : t \rightarrow ji$ of time indexes $t$ to restaurant-customer indexes $ji$. Furthermore, as suggested in \citet{fox2011sticky}, we augment the space and introduce \textit{considered} dishes $\bar{k}_{jt}$ and \textit{override} variables $o_{jt}$ so that we have the following generative process
\begin{equation*} \label{eq:served_considered}
\begin{split}
\bar{k}_{jt} \, | \, \bm{\alpha} &\sim \bm{\alpha} \\
o_{jt} \, | \, \eta, \, \kappa \, &\sim \text{Bernoulli}\,\bigg( \, \dfrac{\kappa}{\eta + \kappa} \, \bigg) \\
k_{jt} \, | \, \bar{k}_{jt}, \, o_{jt} \, &= \begin{cases} \,\bar{k}_{jt}, & o_{jt} = 0, \\
\,j, & o_{jt} = 1.
\end{cases}
\end{split}
\end{equation*}
Thus, a table first considers a dish $\bar{k}_{jt}$ without taking into account the dish of the house, i.e. $\bar{k}_{jt}$ is chosen from the infinite buffet line according to the ratings provided by $\bm{\alpha}$. Then, the dish $k_{jt}$ that is actually being served can be the house-speciality dish $j$, with probability $\rho = \kappa / (\eta + \kappa)$, or the initially considered dish $\bar{k}_{jt}$, with probability $ 1- \rho$. As shown in \citet{fox2011sticky}, table counts $\bar{m}_{jk}$ of considered dishes are sufficient statistics for updating the collection of dish ratings $\bm{\alpha}$, where $\bar{m}_{jk}$ denotes how many of the tables in restaurant $j$ considered dish $k$. The sampling of $\bar{m}_{jk}$ is additionally simplified by introducing the table counts $m_{jk}$ of served dishes and override variables $o_{jt}$. In the next section we describe a Gibbs sampler which alternates between updating the hidden states $\bm{z}$, dish ratings $\bm{\alpha}$, transition probabilities $\bm{\pi}_j$, newly introduced random variables $m_{jk}$, $o_{jt}$ and $\bar{m}_{jk}$, emission parameters $\bm{\theta_j}$, as well as the hyperparameters $\gamma$, $\eta$ and $\kappa$.
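The considered-dish/override mechanism above can be sketched as follows (a toy illustration with our own function name `serve_dish` and hypothetical parameter values; with $\rho = \kappa/(\eta+\kappa)$ large, most tables end up being served the house dish):

```python
import numpy as np

def serve_dish(j, alpha, eta, kappa, rng):
    """Dish served at a new table of restaurant j: first consider a dish from the
    global ratings alpha, then override with the house dish w.p. kappa/(eta+kappa)."""
    k_bar = rng.choice(len(alpha), p=alpha)           # considered dish  k_bar ~ alpha
    override = rng.random() < kappa / (eta + kappa)   # o_jt ~ Bernoulli(rho)
    return (j if override else k_bar), k_bar, override

rng = np.random.default_rng(4)
alpha = np.array([0.5, 0.3, 0.2])                     # toy global dish ratings
# rho = 9/10, so P(served dish = house dish 1) = rho + (1 - rho) * alpha[1] = 0.93.
served = [serve_dish(1, alpha, eta=1.0, kappa=9.0, rng=rng)[0] for _ in range(2000)]
frac_house = np.mean(np.array(served) == 1)
```

Setting $\kappa = 0$ recovers the plain Chinese restaurant franchise, where dishes are chosen from $\bm{\alpha}$ alone.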
\subsubsection{Gibbs Sampler} \label{sec:gibbs_sampler}
We follow \citet{kivinen2007learning} and \citet{fox2011sticky} and consider a Gibbs sampler which uses finite approximations to the DP to allow sampling in blocks of the state sequence $\bm{z}$. In particular, conditioned on observations $\bm{y}$, transition probabilities $\bm{\pi}_j$ and emission parameters $\bm{\theta}_j$, the hidden states $\bm{z}$ are sampled using a variant of the well-known HMM forward-backward procedure (see Supplementary Material \textcolor{blue}{C.1}) presented in \citet{rabiner1989tutorial}. In order to use this scheme, we must truncate the countably infinite transition distributions $\bm{\pi}_j$ (and global menu $\bm{\alpha}$), and this is achieved using the $\,K_{\text{max}}$-limit approximation to a DP \citep{ishwaran2002exact}, i.e. $\text{GEM}_{\,K_{\text{max}}} \, ( \gamma ) := \text{Dir} \, \big( \gamma/K_{\text{max}}, \dots, \gamma/K_{\text{max}} \big),$ where the truncation level $K_{\text{max}}$ is a number that exceeds the total number of expected HMM states, and $\text{Dir} \, ( \cdot ) $ denotes the Dirichlet distribution. Following \citet{fox2011sticky}, conditioned on the state sequence $\bm{z}$ and collection of dish ratings $\bm{\alpha}$, we sample the auxiliary variables $m_{jk}$, $o_{jt}$ and $\bar{m}_{jk}$ as described in Supplementary Material \textcolor{blue}{C.2}. Dish ratings $\bm{\alpha}$ and transition distributions $\bm{\pi}_j$ are then updated from the following posterior distributions
\begin{equation*}
\label{eq:posterior_alpha_1}
\begin{split}
\bm{\alpha} \, | \, \bm{\bar{m}}, \gamma &\sim \, \text{Dir} \, \big(\gamma/K_{\text{max}} + \bar{m}_{\cdot 1}, \dots, \gamma/K_{\text{max}} + \bar{m}_{\cdot K_{\text{max}}}\big)\\
\bm{\pi}_j \, | \, \bm{z}, \, \bm{\alpha},\, \eta, \, \kappa \, &\sim \, \text{Dir} \, \big( \eta \, \alpha_1 + n_{j1}, \dots, \eta \, \alpha_j + \kappa + n_{jj}, \dots, \eta \, \alpha_{K_{\text{max}}} + n_{j K_{\text{max}}} \big),
\end{split}
\end{equation*}
for each state $j = 1, \dots, K_{\text{max}}$. Here, $\bm{\bar{m}}$ is the vector of table counts of considered dishes for the whole franchise, and marginal counts are described with dots, so that $ \bar{m}_{\cdot k} = \sum_{j=1}^{K_{\text{max}}} \bar{m}_{jk} $ is the number of tables in the whole franchise considering dish $k$. We denote with $n_{jk}$ the number of Markov chain transitions from state $j$ to state $k$ in the hidden sequence $\bm{z}$. Next, given the state sequence $\bm{z}$ and transition probabilities $\bm{\pi}_j$, we draw the emission parameters $\bm{\theta}_j$ for each of the currently instantiated states as described in Section \ref{sec:emission_parameters}, where each reversible-jump MCMC update is run for several iterations. We also need to update the emission parameters for states which are not instantiated (namely, those states among $\{1, \dots, K_{\text{max}} \}$ that are not represented during a particular iteration of the sampler), and hence we draw the corresponding emission parameters from their priors. For computational or modeling reasons, the latter may also be performed for those instantiated states that do not contain a minimum number of observations. Finally, we sample the hyperparameters $\gamma$, $\eta$ and $\kappa$ in a Gibbs step (see Supplementary Material \textcolor{blue}{C.3}).
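The two conjugate Dirichlet updates above translate directly into code. The following is a minimal numpy sketch under the $K_{\text{max}}$-truncation (illustrative only; array layouts and names are our own, not those of the authors' Julia implementation):

```python
import numpy as np

def update_alpha_and_pi(m_bar, n, gamma, eta, kappa, rng):
    """Conjugate Dirichlet updates for dish ratings and transition rows.

    m_bar : (K, K) table counts of considered dishes, m_bar[j, k].
    n     : (K, K) transition counts n[j, k] from state j to state k.
    Returns alpha (K,) and pi (K, K), where pi[j] is the j-th row of the
    transition matrix, with the sticky bonus kappa on the self-transition.
    """
    K = m_bar.shape[0]
    # alpha | m_bar, gamma ~ Dir(gamma/K + m_bar_{.1}, ..., gamma/K + m_bar_{.K})
    alpha = rng.dirichlet(gamma / K + m_bar.sum(axis=0))
    pi = np.empty((K, K))
    for j in range(K):
        conc = eta * alpha + n[j]
        conc[j] += kappa              # extra mass on the self-transition
        pi[j] = rng.dirichlet(conc)
    return alpha, pi
```

Large values of $\kappa$ concentrate each row $\bm{\pi}_j$ near its diagonal entry, which is exactly the mode-persistence mechanism of the sticky prior.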
For the HDP-HMM, different procedures have been applied for sampling the hidden state sequence $\bm{z}$. \citet{teh2006} originally introduced an approach based on a Gibbs sampler which has been shown to suffer from slow mixing behavior due to strong correlations that are frequently observed in the data at nearby time points. \citet{van2008beam} presented a \textit{beam sampling} algorithm that combines a slice sampler \citep{neal2003slice} with dynamic programming. This allows one to constrain the number of reachable states at each MCMC iteration to a finite number, where the entire hidden sequence $\bm{z}$ is drawn in a single block
using a form of forward-backward filtering scheme. However, \citet{fox2011sticky} showed that applications of the beam sampler to the HDP-HMM resulted in slower mixing rates compared to the forward-backward procedure that we use in our truncated model. Recently, \citet{tripuraneni2015particle} developed a particle Gibbs MCMC algorithm \citep{andrieu2010particle} which uses an efficient proposal and makes use of ancestor sampling to enhance the mixing rate.
\subsubsection{Label Switching } \label{sec:label_switching} The proposed approach may suffer from \textit{label switching} (see e.g. \citet{redner1984mixture, stephens2000dealing, jasra2005markov}) since the likelihood is invariant under permutations of the labelling of the mixture components, for both hidden state labels $\{1, \dots, K_{\text{max}}\}$ and frequency labels $\{1, \dots, d_{\text{max}}\}$ in each state. The label switching problem occurs when using Bayesian mixture models and needs to be addressed in order to draw meaningful inference about the posterior model parameters. In our multiple model search, the frequencies (and their corresponding linear coefficients) are identified by keeping them in ascending order for every iteration of the sampler. Posterior samples of the model parameters corresponding to different hidden states are post-processed (after the full estimation run) using the relabelling algorithm developed by \citet{stephens2000dealing}. The basic idea behind this algorithm is to find permutations of the MCMC samples in such a way that the Kullback-Leibler (KL) divergence \citep{kullback1951information} between the `true' distribution on clusterings, say $P \, ( \bm{\theta} )$, and a matrix of classification probabilities, say $\bm{Q}$, is minimized. The KL distance is given by $ d( \bm{Q}, \, P \, ( \bm{\theta} ) )_{\, KL} = \sum_{t} \sum_{j} p_{tj} (\bm{\theta}) \log \frac{p_{tj} (\bm{\theta})}{q_{tj}}$, where $p_{tj} (\bm{\theta}) = p \, (z_t = j \, | \, z_{t-1}, \, \bm{y}, \bm{\pi}, \bm{\theta} \, )$ is part of the MCMC output obtained as in Supplementary Material \textcolor{blue}{C.1}, and $q_{tj}$ is the probability that observation $t$ is assigned to class $j$. The algorithm iterates between estimating $\bm{Q}$ and the most likely permutation of the hidden labels for each MCMC iteration. We chose the strategy of \citet{stephens2000dealing} since it has been shown to perform very efficiently in terms of finding the correct relabelling (see e.g. 
\citet{rodriguez2014label}). However, it may be quite memory-intensive, since it requires the storage of a matrix of probabilities of dimension $N \times T \times K_{\text{max}}$, where $N$ is the number of MCMC samples. Furthermore, at each iterative step, the algorithm requires iterating over the $K_{\text{max}}!$ permutations of the labels for each MCMC iteration, which might significantly slow down the computation when using large values of $K_{\text{max}}$. Related approaches to the label switching issue include pivotal reordering algorithms \citep{marin2005bayesian}, label invariant loss functions \citep{celeux2000computational, hurn2003estimating} and equivalence class representative methods \citep{papastamoulis2010artificial}; an overview of these strategies can be found in \citet{rodriguez2014label}.
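The inner step of the relabelling algorithm, finding the permutation of one MCMC draw's classification probabilities that minimises the KL distance to the reference matrix $\bm{Q}$, can be sketched by brute force over the $K!$ permutations (illustrative Python, not the \textit{label.switching} R package used for the actual post-processing):

```python
import numpy as np
from itertools import permutations

def kl_to_Q(P, Q, perm):
    """KL-type distance between one draw's classification probabilities
    P (T x K), relabelled according to perm, and the reference matrix Q."""
    Pp = P[:, list(perm)]
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(Pp > 0, Pp * np.log(Pp / Q), 0.0)
    return terms.sum()

def best_relabelling(P, Q):
    """Permutation of the K state labels minimising the KL distance."""
    K = P.shape[1]
    return min(permutations(range(K)), key=lambda s: kl_to_Q(P, Q, s))
```

This brute-force search makes the $K_{\text{max}}!$ cost noted above explicit: each MCMC draw requires evaluating the distance for every permutation of the labels.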
\section{Simulation Studies} \label{sec:simulation_studies}
This section presents results of simulation studies to explore the performance of our proposed methodology in two different settings. In the first scenario the data are generated from the model described in Section \ref{sec:model} and thus this simulation study provides a ``sanity'' check that the algorithm is indeed retrieving the correct pre-fixed parameters. We also investigate signal extraction for the case that the innovations come from a heavy-tailed t-distribution instead of a Gaussian. Our second study deals with artificial data from an HMM whose emission distributions are characterized by oscillatory dynamics generated by state-specific autoregressive (AR) time series models. Julia code that implements our procedure is available at \href{https://github.com/Beniamino92/SpectralHMM}{\texttt{https://github.com/Beniamino92/SpectralHMM}}.
\subsection{Illustrative Example} \label{sec:illustrative_example_simul}
We generated a time series consisting of $T = 1450$ data points from a three-state HMM with Gaussian oscillatory emissions as specified in Equation \eqref{eq:emission_distribution}, whose transition probability matrix shows high probabilities of self-transition along the diagonal; the parameters of each of the three regimes and the transition probability matrix are given in Supplementary Material \textcolor{blue}{A}. A realization from this model is displayed in Figure \ref{fig:illustrative_example}. The prior mean on the number of frequencies $d_j$ is set equal to 1 and we place a Gamma $(1, 0.01)$ prior on the concentration parameters $\gamma$ and $(\eta + \kappa)$, and a Beta $(100, 1)$ prior on the self-transition proportion $\rho$ as in \citet{fox2011sticky}. The maximum number of periodicities per regime $d_{\text{max}}$ is set to 5, while the truncation level $K_{\text{max}}$ for the DP approximation is set equal to 7. Also, we set $\phi_{\omega} = 0.25$ as a threshold for the uniform prior. The proposed estimation algorithm is run for 15,000 iterations, 3,000 of which were discarded as burn-in. At each iteration, for each instantiated set of emission parameters, 2 reversible-jump MCMC updates were performed. The full estimation algorithm took 31 minutes with a program written in Julia 0.62 on an Intel\textsuperscript{\textregistered} Core\textsuperscript{TM} i7 2.2 GHz Processor with 8 GB RAM. For our experiments, we used the R package \textit{label.switching} of \citet{papast2016} to post-process the MCMC output with the relabelling algorithm of \citet{stephens2000dealing}.
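A simulation of this kind can be sketched as follows. The parameter values below are placeholders (the ones actually used in the study are given in Supplementary Material A), and the Python code is purely illustrative of an HMM with state-specific oscillatory Gaussian emissions:

```python
import numpy as np

def simulate_osc_hmm(T, Pi, omegas, betas, sigmas, rng):
    """Simulate a Gaussian HMM whose state-specific mean is a sum of sinusoids,
    y_t = sum_l [b1 cos(2*pi*w_l*t) + b2 sin(2*pi*w_l*t)] + noise.

    Pi : (K, K) transition matrix; omegas[j] and betas[j] hold the state-j
    frequencies and their (cos, sin) coefficient pairs; sigmas[j] is the
    state-j noise standard deviation.
    """
    K = Pi.shape[0]
    z = np.empty(T, dtype=int)
    y = np.empty(T)
    z[0] = rng.integers(K)
    for t in range(T):
        if t > 0:
            z[t] = rng.choice(K, p=Pi[z[t - 1]])
        j = z[t]
        mean = sum(b1 * np.cos(2 * np.pi * w * t) + b2 * np.sin(2 * np.pi * w * t)
                   for w, (b1, b2) in zip(omegas[j], betas[j]))
        y[t] = mean + sigmas[j] * rng.normal()
    return y, z

# Placeholder two-state parameterization with sticky diagonal.
Pi = np.array([[0.99, 0.01], [0.01, 0.99]])
y, z = simulate_osc_hmm(200, Pi, omegas=[[0.04], [0.10]],
                        betas=[[(1.0, 1.0)], [(2.0, 0.0)]],
                        sigmas=[0.1, 0.1], rng=np.random.default_rng(0))
```

The high diagonal entries of `Pi` produce the long runs of a single periodic pattern that characterise the simulated series in Figure \ref{fig:illustrative_example}.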
\begin{figure}[htbp]
\centering
\centerline{\includegraphics[scale = 0.40]{Plots/illustrative_example_cool_plot.pdf}}
\caption{Illustrative Example. Dots represent the simulated time series, where the different colors correspond to the (true) different regimes. The state-specific estimated oscillatory mean function is displayed as a solid curve, and the estimated state sequence as a piecewise horizontal line at the top part of the graph. }
\label{fig:illustrative_example}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Illustrative Example. (Left panel) Posterior probabilities for the number of distinct states $k$; (right panel) posterior probabilities for the number of frequencies in each state, conditional on $k = 3$. }
\label{table:posterior_k_m_illustrative_1}
\begin{tabular}{cc}
\hline \\[-0.9em]
\hline \\[-0.9em]
$k $ & $\hat{\pi} \, (\, k \, | \, \bm{y})$ \\[.1em] \hline
1 & 0.00\\
2 & 0.00\\
3 & 0.99 \\
4 & 0.01 \\
5 & 0.00 \\
6 & 0.00 \\
7 & 0.00 \\
\hline
\end{tabular} \quad \quad
\begin{tabular}{cccc}
\hline \\[-0.9em]
\hline \\[-0.9em]
$d$ & $\hat{\pi} \, (\, d_1 \, | \, k = 3, \, \bm{y})$ & $\hat{\pi} \, (\, d_2 \, | \, k = 3, \, \bm{y})$ & $\hat{\pi} \, (\, d_3 \, | \, k = 3, \, \bm{y})$ \\[.1em] \hline
1 & 0.99 & 1.00 & 0.01 \\
2 & 0.01 & 0.00 & 0.99 \\
3 & 0.00 & 0.00 & 0.00 \\
4 & 0.00 & 0.00 & 0.00 \\
5 & 0.00 & 0.00 & 0.00 \\ \hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{Illustrative Example. Estimated posterior mean (and standard deviation) of frequencies and square root of the power of the corresponding frequencies. }
\label{table:estimate_parameter_illustrative}
\begin{tabular}{lcccc}
\hline \\[-0.9em]
\hline
& $\omega_{\, 11}$ & $\omega_{\, 21}$ & $\omega_{\, 31}$ & $\omega_{\, 32}$ \\ \cmidrule{2-5}
True & 0.0400 & 0.0526 & 0.0833 & 0.1250 \\ [.2em]
Estimated & \begin{tabular}[c]{@{}c@{}}0.0399\\ \footnotesize (8.8 $\cdot 10^{-6}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.0526\\ \footnotesize (6.3 $\cdot 10^{-6}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.0833\\ \footnotesize (9.6 $\cdot 10^{-6}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.1249\\ \footnotesize ($9.4 \cdot 10^{-6}$)\end{tabular} \\
& & & & \\
& $A_{\, 11}$ & $A_{\, 21}$ & $A_{\, 31}$ & $A_{\, 32}$ \\ \cmidrule{2-5}
True & 1.131 & 0.283 & 1.414 & 1.414 \\ [.2em]
Estimated & \begin{tabular}[c]{@{}c@{}}1.069 \\ \footnotesize (0.029)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.281\\ \footnotesize (0.004)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.380\\ \footnotesize (0.022)\end{tabular} & \begin{tabular}[c]{@{}c@{}}1.367\\ \footnotesize (0.022)\end{tabular} \\[.8em]
\hline
\end{tabular}
\end{table}
Table \ref{table:posterior_k_m_illustrative_1} (left panel) shows that our estimation algorithm successfully detects the correct number of states in the sense that a model with $k = 3$ regimes has the highest posterior probability. In addition, our approach correctly identifies the right number of frequencies in each regime, as shown in Table \ref{table:posterior_k_m_illustrative_1} (right panel). Table \ref{table:estimate_parameter_illustrative} displays the estimated posterior mean and standard deviation of the frequencies along with the square root of the power of the corresponding frequencies, where the results are conditional on three estimated states and the modal number of frequencies within each state. Here, the power of each frequency $\omega_{\, jl}$ is summarized by the amplitude $A_{\, jl} = \sqrt{\beta_{j l}^{\, (1)^{\,2}} + \beta_{j l}^{\, (2)^{\,2}}}$, namely the square root of the sum of squares of the corresponding linear coefficients (see, e.g., \citet{shumway2017time}). Our proposed method seems to provide a good match between true and estimated values for both frequencies and their power, for this example. We also show in Figure \ref{fig:illustrative_example} the state-specific estimated signal (Equation \eqref{eq:oscillatory}), and the estimated state sequence using the method of \citet{stephens2000dealing} (as a piecewise horizontal line). The rows of the estimated transition probability matrix were $\bm{\hat{\pi}}_1 = (0.9921, \, 0.0073, \, 0.0006), \, \bm{\hat{\pi}}_2 = (0.0005, \, 0.9956, \,0.0040)$ and $\bm{\hat{\pi}}_3 = (0.0051, \,0.0006, \, 0.9942)$. The high probabilities along the diagonal reflect the estimated posterior mean of the self transition parameter $\hat{\rho} = 0.9860$, which is indeed centered around the true probability of self-transition.
\begin{figure}[htbp]
\centering
\centerline{\includegraphics[scale = 0.30]{Plots/final_traces_likelihood.png}}
\caption{Illustrative Example. (a) Trace plots (after burn-in) for posterior sample of frequencies, conditional on modal number of states and number of frequencies in each state; red lines correspond to true values of the frequencies. (b) Trace plots (including burn-in) of the likelihood for three Markov chains initialized at different starting values. }
\label{fig:traces_illustrativce}
\end{figure}
Diagnostics for verifying convergence were performed in several ways. For example, we observed that the MCMC samples of the likelihood of the HMM reached a stable regime, while initializing the Markov chains from overdispersed starting values (see Figure \ref{fig:traces_illustrativce} (b)). This diagnostic might be very useful, for example, in determining the burn-in period. However, we note that it does not guarantee convergence since steady values of the log likelihood might be the result of a Markov chain being stuck in some local mode of the target posterior distribution. The likelihood of an HMM with Gaussian emissions can be expressed as
\begin{equation*}
\mathcal{L} \, (\bm{z}, \bm{\pi}, \, \bm{\theta} \, | \, \bm{y} ) = p (z_1 \, | \, \bm{y}, \bm{\pi}, \bm{\theta}) \mathcal{N} ( y_1 \, ; f_{\, 1\, z_{\,1}}, \, \sigma^{\, 2}_{z_{\,1}}) \prod_{t=2}^{T} p \, (z_{\, t} \, | \, z_{\, t-1}, \, \bm{y}, \bm{\pi}, \bm{\theta} \,) \mathcal{N} ( y_t \, ; f_{\,t \,z_{\,t}}, \, \sigma^{\, 2}_{z_{\, t}}),
\end{equation*}
where $\mathcal{N} \, ( \,y_t; \, f_{j \, t}, \, \sigma^2_j )$ denotes the density of a Gaussian distribution with mean $ f_{jt} = \bm{x}_t \, \big( \bm{\omega}_{ j} \big)^{\, '} \, \bm{\beta}_{\, j} $ (as in Equation \ref{eq:oscillatory}) and variance $\sigma^2_j$, evaluated at $y_t$. Conditioned on the modal number of states, we also validated convergence for the state-specific emission parameters by analyzing trace plots and running averages of the corresponding MCMC samples, with acceptable results as each trace reached a stable regime. As an example, we show in Figure \ref{fig:traces_illustrativce} (a) trace plots (after burn-in) for the posterior values of the frequencies.
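As a check of this factorisation, the log likelihood can be computed directly from a sampled state sequence. The sketch below (illustrative Python, not the authors' Julia code) uses the prior transition probabilities $\pi_{z_{t-1} z_t}$ in place of the conditional probabilities $p(z_t \mid z_{t-1}, \bm{y}, \bm{\pi}, \bm{\theta})$ produced by the forward-backward pass, which gives the simpler complete-data version of the same expression:

```python
import numpy as np

def hmm_gaussian_loglik(y, z, means, sigmas, Pi, init):
    """Complete-data log likelihood of a Gaussian HMM:
    log p(z_1) + log N(y_1; f_{1 z_1}, s^2) + sum_t [log Pi[z_{t-1}, z_t]
    + log N(y_t; f_{t z_t}, s^2)].

    means : (T, K) array with means[t, j] = f_{t j};  sigmas : (K,) noise sds;
    Pi : (K, K) transition matrix;  init : (K,) initial state distribution.
    """
    def lognorm(x, m, s):
        return -0.5 * np.log(2 * np.pi * s ** 2) - (x - m) ** 2 / (2 * s ** 2)

    ll = np.log(init[z[0]]) + lognorm(y[0], means[0, z[0]], sigmas[z[0]])
    for t in range(1, len(y)):
        ll += np.log(Pi[z[t - 1], z[t]]) + lognorm(y[t], means[t, z[t]], sigmas[z[t]])
    return ll
```

Monitoring this quantity across iterations is precisely the convergence diagnostic shown in Figure \ref{fig:traces_illustrativce} (b).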
\vspace{0.1cm}
\textit{Signal Extraction with Non-Gaussian Innovations:} In many scientific experiments it may be of interest to extract the underlying signal that generates the observed time series, and HMMs can be used to this end. Here, we study the performance of our proposed approach in estimating the time-varying oscillatory signal $f_{\, j t \, }$ (Equation \ref{eq:oscillatory}) when the Gaussian assumption of $\varepsilon_t$ in Equation \eqref{eq:emission_distribution} is violated. In particular, we generated 20 time series, each consisting of 1024 observations from the same simulation setting introduced above, where the innovations were simulated from heavy-tailed $t$-distributions with 2, 3 and 2 degrees of freedom for states 1, 2 and 3, respectively. The linear basis coefficients were chosen to be $\bm{\beta}_{\, 1 1} = (3, 2)^{'}, \, \bm{\beta}_{\, 2 1} = (1.2, 4.0)^{'}, \, \bm{\beta}_{\, 3 1} = (1.0, 5.0)^{'}, \, \bm{\beta}_{\, 3 2} = (4.0, 3.0)^{'}$. As a measure of performance we computed the mean squared error MSE = $ \frac{1}{1024}\sum_{t = 1}^{1024} ( f_{t \, z_{\,t}} - \hat{f}_{t \, z_{\,t}})^2$ between the true and estimated signal and compared the proposed approach with the method of \citet{hadj2018bayesian}, referred to as AutoNOM (Automatic Nonstationary Oscillatory Modelling), which we believe is the state-of-the-art in extracting the signal of nonstationary periodic processes. Our proposed estimation algorithm was run with the same parameterization as above, while AutoNOM was performed for 15,000 updates, 3,000 of which were discarded as burn-in, where we fixed the maximum number of change-points at 15 and the maximum number of frequencies per segment at 5. The prior means for the number of change-points and frequencies per segment are fixed at 2 and 1, respectively, and the minimum distance between change-points is set at 10. For both methodologies, the estimated signal was obtained by averaging across MCMC iterations. 
AutoNOM was run using the Julia software provided by the authors at \href{https://github.com/Beniamino92/AutoNOM}{\texttt{https://github.com/Beniamino92/AutoNOM}}.
\begin{figure}[htbp]
\centering
\includegraphics[height =7.0cm, width = 12.5cm]{Plots/MSE_t_all.png}
\caption{ Signal extraction with non-Gaussian innovations. Boxplots of the MSE values for AutoNOM and our oscillatory HDP-HMM when (a) the data exhibit recurrent patterns (b) the data do not exhibit recurrent patterns. }
\label{fig:t_dist_simul}
\end{figure}
Figure \ref{fig:t_dist_simul} (a) presents boxplots of the MSE values for AutoNOM and our proposed approach. It becomes clear that the estimates of the signal obtained using our proposed methodology are superior to those obtained using AutoNOM. However, this result is not surprising, as the two approaches make different assumptions. In particular, AutoNOM does not assume recurrence of a periodic behavior and hence needs to estimate the regime-specific modeling parameters each time it detects a new segment, while our oscillatory HDP-HMM has the advantage of using the same set of parameters whenever a particular periodic pattern recurs in the time series. Hence, we also compared the performance of the two approaches in extracting the signal (under non-Gaussian innovations) in a scenario where the time series do not exhibit recurrence. Specifically, we generated 10 time series manifesting two change-points (where the oscillatory behavior corresponding to the three different partitions is parameterized as above) and computed the MSE between the true and estimated signal as we did in the previous scenario. The corresponding boxplots displayed in Figure \ref{fig:t_dist_simul} (b) show that the two approaches seem to perform in a similar way, with AutoNOM being slightly more accurate than our oscillatory HDP-HMM. We conclude that both methodologies have their own strengths. Our proposed procedure is superior to AutoNOM in the sense that the additional HMM provides a framework for modeling and explicitly quantifying the switching dynamics and connectivity between different states. On the other hand, AutoNOM is better suited to scenarios where there are nonstationarities arising from singular change-points and the observed oscillatory processes evolve without exhibiting recurrent patterns.
\subsection{Markov Switching Autoregressive Process}
We now investigate the performance of our approach in detecting time-changing periodicities in a scenario where the data generating process shows large departures from our modeling assumptions. The HMM assumption of conditionally independent observations given the hidden state sequence, such as the one formulated in Equation \eqref{eq:emission_distribution}, may sometimes be inadequate for expressing the temporal dependencies occurring in some phenomena. A different class of HMMs that relaxes this assumption is given by the \textit{Markov switching autoregressive process}, also referred to as the AR-HMM \citep{juang1985mixture, albert1993bayes, fruhwirth2006finite}, where an AR process is associated with each state. This model equips the emission distributions with autoregressive dynamics while allowing the state transition mechanism to follow a discrete-state Markov chain.
We generated $T = 900$ observations from an AR-HMM with two hidden states and autoregressive order fixed at $p = 2$, that is
\begin{equation} \label{eq:AR_HMM}
\begin{split}
z_{\,t} \, \sim& \, \, \bm{\pi}_{z_{\,t-1}}, \\
y_{\,t} =& \sum_{l=1}^{p} \psi^{\, (z_t)}_{\,l} y_{\, t-l} + \varepsilon^{\, (z_t)}_{\,t},
\end{split}
\end{equation}
where $\bm{\pi}_1 = (0.99, 0.01)$ and $\bm{\pi}_2 = (0.01, 0.99) $. The AR parameterization $\bm{\psi}^{\, (1)} = (1.91, \, -0.991)$ and $\bm{\psi}^{\, (2)} = (1.71, \, -0.995 )$ is chosen in such a way that the state-specific spectral density functions display a pronounced peakedness. Furthermore, $\varepsilon_t^{(1)} \stackrel{iid}{\sim} \mathcal{N}(0, 0.1^{\,2})$ and $\varepsilon_t^{(2)} \stackrel{iid}{\sim} \mathcal{N}(0, 0.05^{\,2})$. A realization from this model is shown in Figure \ref{fig:AR_HMM_study} (top) as a blue solid line. Our proposed estimation algorithm was run for 15,000 iterations, 5,000 of which were used as burn-in. At each iteration, we performed 2 reversible-jump MCMC updates for each instantiated set of emission parameters. The rate of the Poisson prior for the number of periodicities is fixed at $10^{-1}$ and the corresponding truncation level $d_{\text{max}}$ was fixed to 3. The maximum number of states $K_{\text{max}}$ was set equal to 10 whereas the rest of the hyperparameters are specified as in Section \ref{sec:illustrative_example_simul}. Our procedure seems to overestimate the number of states, as a model with 8 regimes had the highest posterior probability $\hat{\pi} \, (\, k = 8 \, | \, \bm{y}) = 97\%$. However, this is not entirely unexpected, as a visual inspection of the realization displayed in Figure \ref{fig:AR_HMM_study} (top) suggests more than two distinct spectral patterns, in the sense that the phases, amplitudes and frequencies appear to vary stochastically within a regime.
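For reference, a realization from this AR(2)-HMM with the stated parameterization can be generated as follows (illustrative Python; the function name and array layout are our own):

```python
import numpy as np

def simulate_ar_hmm(T, Pi, psi, noise_sd, rng, p=2):
    """Simulate a Markov switching autoregressive process: the hidden state
    follows a Markov chain with transition matrix Pi, and each state drives
    an AR(p) recursion with its own coefficients and noise level."""
    K = Pi.shape[0]
    z = np.empty(T, dtype=int)
    y = np.zeros(T)
    z[0] = 0
    for t in range(T):
        if t > 0:
            z[t] = rng.choice(K, p=Pi[z[t - 1]])
        ar = sum(psi[z[t]][l] * y[t - l - 1] for l in range(min(p, t)))
        y[t] = ar + noise_sd[z[t]] * rng.normal()
    return y, z

# Parameterisation from the simulation study.
Pi = np.array([[0.99, 0.01], [0.01, 0.99]])
psi = [(1.91, -0.991), (1.71, -0.995)]
noise_sd = [0.1, 0.05]
y, z = simulate_ar_hmm(900, Pi, psi, noise_sd, np.random.default_rng(0))
```

Both coefficient pairs satisfy the AR(2) stationarity conditions while placing the roots close to the unit circle, which is what produces the sharply peaked state-specific spectra.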
Figure \ref{fig:AR_HMM_study} (bottom) shows the estimated time-varying frequency peak along with a 95\% credible interval obtained from the posterior sample. The estimate was determined by first selecting the dominant frequency (i.e. the frequency with the highest posterior power) corresponding to each observation and then averaging the frequency estimates over MCMC iterations. While our approach identifies a larger number of states when the data were generated from an AR-HMM, we note that the data generating process is very different from the assumptions of our model, and the proposed procedure still provides a reasonable summary of the underlying time changing spectral properties observed in the data. Furthermore, by setting the truncation level $K_{\text{max}}$ equal to 2, we retrieve the true transition probability matrix that generates the switching dynamics between the two different autoregressive patterns, as the vectors of transition probabilities obtained using our estimation algorithm are $\hat{\bm{\pi}}_1 = (0.99, 0.01)$ and $\hat{\bm{\pi}}_2 = (0.98, 0.02).$
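The dominant-frequency summary described above can be sketched as follows, assuming a hypothetical array layout for the MCMC output (this is an illustration of the post-processing step, not the authors' code):

```python
import numpy as np

def dominant_frequency_estimate(z_samps, omega_samps, power_samps):
    """Posterior estimate of the time-varying frequency peak.

    z_samps     : (N, T) sampled state sequences over N MCMC draws.
    omega_samps : (N, K, d) sampled frequencies per state;
    power_samps : (N, K, d) their posterior powers.
    For each draw, each state contributes its highest-power frequency;
    mapping through the state sequence and averaging over draws gives the
    time-varying peak estimate.
    """
    N, T = z_samps.shape
    peaks = np.empty((N, T))
    for n in range(N):
        K = omega_samps.shape[1]
        best = omega_samps[n][np.arange(K), np.argmax(power_samps[n], axis=1)]
        peaks[n] = best[z_samps[n]]
    return peaks.mean(axis=0)
```

Pointwise quantiles of `peaks` over the draw axis give the 95\% credible band shown in the figure.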
\begin{figure}[htbp]
\centering
\includegraphics[height =8.8cm, width = 7.8cm]{Plots/data_and_freq_ARHMM.pdf}
\includegraphics[height =8.7cm, width = 5.6cm]{Plots/comparison_ARHMM.pdf}
\caption{(Top) A realization from model \eqref{eq:AR_HMM}, where the piecewise horizontal line represents the true state sequence. (Bottom) True time varying frequency peak (dotted red line) and the estimate provided by our proposed approach (solid blue line) where we highlight a 95\% credible interval obtained from the posterior sample. (Right) Boxplots of the MSE values for AutoNOM, our oscillatory HDP-HMM and AdaptSPEC.}
\label{fig:AR_HMM_study}
\end{figure}
In addition, we simulated 10 time series from model \eqref{eq:AR_HMM} and computed the mean squared error MSE = $\frac{1}{900} \sum_{t=1}^{900} ( \omega_t - \hat{\omega}_t)^{\,2}$ between the true time-varying frequency peak $\omega_t$ and its estimate $\hat{\omega}_t$ for the proposed approach, AutoNOM and the procedure of \citet{rosen2012adaptspec}, referred to as AdaptSPEC (Adaptive Spectral Estimation). For both AutoNOM and AdaptSPEC, we ran the algorithm for 15,000 MCMC iterations (5,000 of which were used as burn-in), fixed the maximum number of change-points at 15 and set the minimum distance between change-points to 30. The number of spline basis functions for AdaptSPEC is set to 10. AutoNOM is performed using a Poisson prior with rate $10^{-1}$ for both the number of frequencies and the number of change-points. AdaptSPEC was performed using the R package \textit{BayesSpec} provided by the authors. Boxplots of the MSE values for the three different methodologies are displayed in Figure \ref{fig:AR_HMM_study} (right), showing that our oscillatory HDP-HMM seems to outperform the other two approaches in detecting the time-varying frequency peak, for this example. However, our procedure finds some very short sequences (such as in Figure \ref{fig:AR_HMM_study} (bottom) for $t \approx 200, 500, 700$), demonstrating that the sticky parameter might not always be adequate for capturing the correct temporal mode persistence of the latent state sequence. AutoNOM and AdaptSPEC are less prone to this problem as both methodologies are able to specify a minimum time distance between change-points; though, we acknowledge that this constraint might not be optimal when the observed data exhibit relatively rapid changes. 
We also notice that, not surprisingly, the estimates of the time-varying frequency peak obtained using AutoNOM and our oscillatory HDP-HMM, which are based on a line-spectrum model, are both superior to those obtained via the smoothing spline basis of AdaptSPEC, which is built upon a continuous-spectrum setting; this is consistent with the findings in \citet{hadj2018bayesian}. However, it is important to keep in mind that, while AutoNOM and AdaptSPEC allow one to retrospectively analyse the changing spectral properties of a process from an exploratory angle, unlike our spectral HDP-HMM they do not quantify a probabilistic mechanism for the recurrence of periodic dynamic patterns.
\section{Analysis of the Airflow Trace Data} \label{sec:case_study}
The airflow trace shown in Figure \ref{fig:human_breathing_trace_intro} was collected from a human over a time span of 5.5 minutes of continuous
breathing and measured via a facemask attached to a pressure transducer. Airflow pressure signals were amplified using the NeuroLog system connected to a 1401 interface and acquired on a computer using \textit{Spike2} software (Cambridge Electronic Design). The data are sampled at a rate of 4 Hertz, i.e., 4 observations per second,
for a total of 1314 data points.
We fitted our oscillatory HDP-HMM to the time series displayed in Figure \ref{fig:human_breathing_trace_intro} for 100,000 iterations, 60,000 of which were discarded as burn-in, where at each iteration, we carried out 10 reversible-jump MCMC updates for each instantiated set of emission parameters. The truncation level $K_{\text{max}}$ was set to 10, whereas the maximum number of frequencies per state $d_{\text{max}}$ was fixed to 3. The rate of the Poisson prior for the number of frequencies is set equal to $10^{-2}$, the prior on the self-transition proportion $\rho$ is specified as Beta $(10^{\,3}, 1)$ and the rest of the parameterization is chosen as in Section \ref{sec:illustrative_example_simul}. The posterior distribution over the number of states had a mode at 7, with posterior probabilities $\hat{\pi} \, (\, k = 6 \, | \, \bm{y}) = 21\%$, $\hat{\pi} \, (\, k = 7 \, | \, \bm{y}) = 77\%$ and $\hat{\pi} \, (\, k = 8 \, | \, \bm{y}) = 2\%$. Indeed, it is conceivable that the state corresponding to \textit{normal} breathing (i.e. neither apnea nor hypopnea) may exhibit more than one distinct periodic pattern, which justifies the need to use a nonparametric HMM. \cite{paz2013acute} reported at least 13 forms of breathing patterns including forms of apnea. Figure \ref{fig:human_breathing_trace} shows the fitted signal (yellow line) along with a 95\% credible interval obtained from the posterior sample and the estimated hidden state sequence (piecewise horizontal line), where we highlight our model estimate for the apnea state (red) and the hypopnea state (blue) while reporting the ground truth at the top of the plot. Conditional on the modal number of regimes, the number of periodicities belonging to apnea and hypopnea had a posterior mode at 2 and 3, respectively. 
Conditional on the modal number of frequencies, Table \ref{table:estimate_parameter_apnea_hypopnea} displays the posterior mean and standard deviation of periodicities (in seconds) and powers that characterize the two states classified as apnea and hypopnea, showing that apnea instances seem to be characterized by larger periods and lower amplitude than hypopnea.
\begin{figure}[htbp]
\centering
\centerline{\includegraphics[height =9cm, width = 17cm]{Plots/final_simul_human.png}}
\caption{Case Study. Dots represent the airflow trace collected over a period of five and half minutes of continuous breathing. The estimated signal (solid line) is shown along with its 95\% credible interval. The piecewise horizontal line corresponds to the estimated state sequence where we highlight the states corresponding to estimated apnea (red) and hypopnea (blue), while reporting the ground truth at the top of the plot. }
\label{fig:human_breathing_trace}
\end{figure}
\begin{table}[htbp]
\centering
\caption{Case study. Posterior mean and standard deviation of periodicities (in seconds) and powers that characterize the two states classified as apnea and hypopnea. }
\label{table:estimate_parameter_apnea_hypopnea}
\begin{tabular}{cclcc}
\hline \\[-0.9em]
\hline
\multicolumn{2}{c}{\textbf{Apnea}} & & \multicolumn{2}{c}{\textbf{Hypopnea}} \\
Period & Power & & Period & Power \\ [.1em] \cmidrule{1-2} \cmidrule{4-5}
\begin{tabular}[c]{@{}c@{}}9.862\\ \footnotesize (2.20$\cdot 10^{-2}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.1581\\ \footnotesize (1.05$\cdot 10^{-2}$)\end{tabular} & & \begin{tabular}[c]{@{}c@{}}6.084\\ \footnotesize (6.95$\cdot 10^{-5}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.370\\ \footnotesize (2.2$\cdot 10^{-2}$)\end{tabular} \\[1.2em]
\begin{tabular}[c]{@{}c@{}}6.823\\ \footnotesize (1.54$\cdot 10^{-5}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.2161\\ \footnotesize (1.06$\cdot 10^{-2}$)\end{tabular} & & \begin{tabular}[c]{@{}c@{}}5.252\\ \footnotesize (3.48$\cdot 10^{-5}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.683\\ \footnotesize (2.1$\cdot 10^{-2}$)\end{tabular} \\ [1.2em]
- & - & & \begin{tabular}[c]{@{}c@{}}3.984\\ \footnotesize (7.910$\cdot 10^{-5}$)\end{tabular} & \begin{tabular}[c]{@{}c@{}}0.178\\ \footnotesize (1.8$\cdot 10^{-2}$) \end{tabular}
\\ [.2em] \hline
\end{tabular}
\end{table}
Our estimation algorithm detected all known apnea and hypopnea instances. In order to qualify as a clinically relevant obstructive event, an instance must have a minimum length of 10 seconds \citep{berry2017aasm}. Thus, we only highlight the clinically relevant instances in Figure \ref{fig:human_breathing_trace}, discarding sequences of duration less than 10 seconds. We also detected a \textit{post sigh} apnea (after the third minute), which is a normal phenomenon to observe in a breathing trace and hence should not count as a disordered breathing event. Such an event can be identified because a sigh is characterized by an amplitude that is higher than that of any other respiratory event, and hence is easily detected. Subtracting the number of sighs from the total number of apneas/hypopneas results in a measure of all apneas of interest without the confounding data from post-sigh apneas. A common score to indicate the severity of sleep apnea is the Apnea-Hypopnea Index (AHI), which is the number of apneas and hypopneas per hour of sleep \citep{ruehland2009new}. Our proposed approach provides a realistic estimate of the total number of apnea and hypopnea instances recurring in this case study.
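As a small illustration of the score just described (the function name and the optional post-sigh correction argument are ours, not part of the estimation algorithm), the AHI amounts to:

```python
def apnea_hypopnea_index(n_apneas, n_hypopneas, hours_of_sleep, n_post_sigh=0):
    """AHI: disordered-breathing events per hour of sleep, optionally
    discounting post-sigh apneas, which are not pathological."""
    return (n_apneas + n_hypopneas - n_post_sigh) / hours_of_sleep
```

For instance, 10 apneas and 5 hypopneas over 3 hours of sleep give an AHI of 5 events per hour; removing 3 post-sigh apneas lowers it to 4.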
\section{Summary and Discussion} \label{sec:conclusions}
In this paper we developed a novel HMM approach that can address the challenges of modeling periodic phenomena whose behavior switches and recurs dynamically over time. The number of states is assumed unknown as well as their relevant periodicities which may
differ over the different regimes since each regime is represented by a different periodic pattern. To address flexibility in the number of states, we assumed a sticky HDP-HMM that penalises rapidly changing dynamics of the process and provides effective control over the switching rate. The variable dimensionality with respect to the number of frequencies that specifies the different states is tackled by a reversible-jump MCMC algorithm.
We illustrated the use of our approach in a case study relevant to respiratory research, where our methodology was able to identify recurring instances of sleep apnea in human
breathing traces. Despite the fact that here we have focused on the detection of apnea instances, our proposed methodology provides a very flexible and general framework to analyze different breathing patterns. A question of interest is whether similar dynamical patterns can be identified across a heterogeneous patient cohort, and be used for the prognosis of patients' health and progress. The growth of information and communication technologies permits new advancements in the health care system to facilitate support in the homes of patients in order to proactively enhance their health and well-being. We believe that our proposed HMM approach has the potential to aid the iterative feedback between clinical investigations in sleep apnea research and practice with computational, statistical and mathematical analysis.
Although both parametric and nonparametric HMMs have been shown to be good models in addressing learning challenges in time series data, they have the drawback of limiting the \textit{state duration distribution}, i.e., the distribution for the number of consecutive time points that the Markov chain spends in a given state, to a geometric form \citep{ephraim2002hidden}. In addition, the self-transition bias of the sticky HDP-HMM used to increase temporal state persistence is shared among all states and thus does not allow for inferring state-specific duration features. In our application, learning the duration structure of a specific state may be of interest to health care providers, for example, in assessing the severity of sleep apnea. Future work will address extending our approach to a hidden semi-Markov model (HSMM) setting \citep{guedon2003estimating,yu2010hidden,johnson2013bayesian}, where the generative process of an HMM is augmented by introducing a random state duration time which is drawn from some state-specific distribution when the state is entered. However, this increased flexibility in modeling the state duration has the cost of increasing substantially the computational effort to compute the likelihood: the message-passing procedure for HSMMs requires $\mathcal{O} (T^2 K + T K^2)$ basic computations for a time series of length $T$ and number of states $K$, whereas the corresponding forward-backward algorithm for HMMs requires only $\mathcal{O}(T K^2)$.
\section*{Acknowledgements}
We wish to thank Maxwell Renna, Paul Jenkins and Jim Griffin for their insightful and valuable comments. The work presented in this article was developed as part of the first author's Ph.D. thesis at the University of Warwick and he is currently affiliated with the Division of Biostatistics at the University of Minnesota. B. Hadj-Amar was supported by the Oxford-Warwick Statistics Programme (OxWaSP) and the Engineering and Physical Sciences Research Council (EPSRC), Grant Number EP/L016710/1. R. Huckstepp was supported by the Medical Research Council (MRC), Grant Number MC/PC/15070.
\vspace{1.0cm}
\begin{center}
SUPPLEMENTARY MATERIAL
\end{center}
We provide supplemental material to the manuscript. Section \textcolor{blue}{A} provides the parameterization of the simulation setting presented in Section \textcolor{blue}{4.1} of the manuscript. Section \textcolor{blue}{B} contains further details about the sampling scheme for updating the emission parameters via reversible-jump MCMC steps, and in Section \textcolor{blue}{C} we present the sampling scheme for drawing the HMM parameters within the Chinese restaurant franchise framework.
\renewcommand{\thesection}{A}
\section{Illustrative Example - Parameterization}\label{sec:supp_illustrative} $\,$
\vspace{0.2cm}
\begin{equation*}
\bm{\pi} =
\begin{bmatrix}
0.9900 & 0.0097 & 0.0003\\
0.0001 & 0.9900 & 0.0099 \\
0.0097 & 0.0003 & 0.9900
\end{bmatrix},
\end{equation*}
\begin{table}[htbp]
\centering
\caption{Illustrative Example. Frequencies $\bm{\omega}_{\,j}$ and linear coefficients $\bm{\beta}_{\,j}$ for the three different regimes. The number of periodicities $d_j$ in each regime is 1, 1 and 2, respectively. The innovations $\sigma^2_j$ are set to $(0.4)^2, (0.08)^2$ and $(0.3)^2$, respectively. }
\label{table:parameter_illustrative_1}
\begin{tabular}{ccc}
\hline \\[-0.9em]
\hline
Frequencies & & Linear Coefficients \\[.2em] \hline \\[-1.0em]
$\omega_{\,1 1}$ = $\, \, \, \, 1/25$ & & $\bm{\beta}_{\, 1 1}$ = $\, \, \, \, (0.8, 0.8)^{\,'}$ \\
$\omega_{\,2 1}$ = $\, \, \, \, 1/19$ & & $\bm{\beta}_{\, 2 1}$ = $\, \, \, \, (0.2, 0.2)^{\,'}$ \\
$\omega_{\,3 1}$ = $\, \, \, \, 1/12$ & & $\bm{\beta}_{\, 3 1}$ = $\, \, \, \, (1.0, 1.0)^{\,'}$ \\
$\omega_{\,3 2}$ = $\, \, \, \, 1/8 \, \, $ & & $\bm{\beta}_{\, 3 2}$ = $\, \, \, \, (1.0, 1.0)^{\,'}$ \\[.2em]
\hline
\end{tabular}
\end{table}
\renewcommand{\thesection}{B}
\section{Updating Emission Parameters}\label{sec:supp_emission}
\subsection{Within-Model Move}
\hfill\\
{\textit{Updating} $\bm{\omega}$}:
Samples from the conditional posterior distribution $ p \, ( \bm{\omega} \, | \, \bm{\beta}, \, \sigma^2, \, d, \, \bm{y}^{\,*})$ (see Equation \textcolor{blue}{11}) are obtained by drawing the frequencies one at a time using a mixture of M-H steps. We follow \citet{andrieu1999joint} and \citet{hadj2018bayesian} and design a mixture proposal distribution of the form \begin{equation}
\label{mixture_proposal_freq}
q \, ( \, \omega_l^{\, p} \, | \, \omega_l^{\, c} \,) = \xi_{\omega} \, q_1 \, (\, \omega_l^{\, p} \, | \, \omega_l^{\, c} \,) + (1 - \xi_{\omega} ) \, q_2 \, (\, \omega_l^{\, p} \, | \, \omega_l^{\, c} \,), \qquad l = 1, \dots, d,
\end{equation} where $q_1$ is defined in Equation \eqref{q1_freq} below, $q_2$ is the density of a Normal $\mathcal{N}\, (\omega_l^{\, c}, \sigma^2_{\omega})$, $\xi_{\omega}$ is a positive value such that $ 0 \leq \xi_{\omega} \leq 1$, and the superscripts $c$ and $p$ refer to current and proposed values, respectively. Equation \eqref{mixture_proposal_freq} states that a M-H step with proposal distribution $q_1 \, (\, \omega_l^{\, p} \, | \, \omega_l^{\, c} \,)$ \begin{equation} \label{q1_freq}
q_1 \, (\, \omega_l^{\, p} \, | \, \omega_l^{\, c} \,) \propto \sum_{h \, = \, 0}^{\tilde{T} - 1} I_h \, \mathbbm{1}_{\big[ \, h/T \, \, \leq \, \, \omega_l^{\, p} \, < \, \, (h+1)/T \, \big] \, },
\end{equation} is performed with probability $\xi_{\omega} $. Here $\tilde{T} = \floor{\hat{T}/2}$, $\hat{T}$ is the number of observations in $\hat{\bm{y}}$, and $I_h$ is the value of the periodogram of $\hat{\bm{y}}$, i.e., the squared modulus of the Discrete Fourier transform evaluated at frequency $h/T$
$$ I_h = \Big| \, \sum_{t \, \in \, I^{\,*}} y_t \, \exp{\Big(-i \, 2 \pi \, \frac{h}{T} \, t \Big)}\, \Big|^{\, 2}, $$ where we recall that $\hat{\bm{y}}$ is a segment of data that is randomly selected from $\bm{y}^{*}$ with probability proportional to the number of observations belonging to that segment. The acceptance probability for this move is
\begin{equation*}
\alpha = \min \Bigg\{1, \dfrac{p \, (\bm{\omega}^{\, p} \, | \, \bm{\beta}, \, \sigma^2, \, d, \, \bm{y}^{\,*}) }{p \, (\bm{\omega}^{\,c} \, | \, \bm{\beta}, \, \sigma^2, \, d, \, \bm{y}^{\,*}) } \times \dfrac{q_1 \, (\, \omega_l^{\, c} \, )}{q_1 \, (\, \omega_l^{\, p} \, )} \Bigg\},
\end{equation*}
where $\bm{\omega}^{\,p} = (\omega_1^{\, c}, \dots, \omega_{l-1}^{\,c}, \omega_{l}^{\,p}, \omega_{l+1}^{\,c}, \dots, \omega_{p}^{\,c})^{'}$.
With probability $1 - \xi_{\omega}$, we carry out a random walk M-H step with proposal distribution $q_2 \, (\, \omega_l^{\, p} \, | \, \omega_l^{\, c} \,)$, whose density is Normal with mean $\omega_l^{\, c}$ and variance $\sigma^2_{{\omega} }$, i.e.
$\omega_l^{\, p} \, | \, \omega_l^{\, c} \, \sim \mathcal{N}(\,\omega_l^{\, c}, \, \sigma^2_{{\omega} }\,)$. This move is accepted with probability
\begin{equation*}
\alpha = \min \Bigg\{1, \dfrac{p \, (\bm{\omega}^{\,p} \, | \, \bm{\beta}, \, \sigma^2, \, d, \, \bm{y}^{\,*}) }{p \, (\bm{\omega}^{\, c} \, | \, \bm{\beta}, \, \sigma^2, \, d, \, \bm{y}^{\,*}) } \Bigg\}.
\end{equation*}
Low values of $\xi_{\omega}$ yield a high acceptance rate combined with an efficient exploration of the parameter space. For our experiments, we set $\sigma^2_{{\omega} } = 1/(50T)$ and $\xi_{\omega} = 0.2$.
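A minimal Python sketch of this mixture proposal is given below. This is our own illustration, not the paper's implementation: the array handling and the normalization of the periodogram weights are assumptions, and a proposed frequency outside the admissible range would simply be rejected by the M-H step.

```python
import numpy as np

def periodogram_weights(y_hat, T):
    """Normalized periodogram ordinates I_h of the segment y_hat, used as
    bin weights for the coarse proposal q1 over [h/T, (h+1)/T)."""
    T_tilde = len(y_hat) // 2
    t = np.arange(len(y_hat))
    I = np.array([np.abs(np.sum(y_hat * np.exp(-2j * np.pi * (h / T) * t))) ** 2
                  for h in range(T_tilde)])
    return I / I.sum()

def propose_frequency(omega_c, y_hat, T, xi=0.2, sigma2_omega=None, rng=None):
    """One draw from the mixture q = xi * q1 + (1 - xi) * q2 for a single
    frequency, with q1 periodogram-driven and q2 a Gaussian random walk."""
    rng = np.random.default_rng() if rng is None else rng
    sigma2_omega = 1.0 / (50 * T) if sigma2_omega is None else sigma2_omega
    if rng.random() < xi:                              # q1: periodogram move
        w = periodogram_weights(y_hat, T)
        h = rng.choice(len(w), p=w)                    # pick a frequency bin
        return rng.uniform(h / T, (h + 1) / T)         # uniform within the bin
    return rng.normal(omega_c, np.sqrt(sigma2_omega))  # q2: random-walk move
```

The periodogram-driven component concentrates proposals near spectral peaks, which is what makes the mixture efficient for multimodal frequency posteriors.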
\vspace{0.1cm}
\subsection{Trans-Dimensional Moves}
\hfill\\
\label{appendix_segment_between}
\textit{Birth move:} If a birth move is attempted, the number of frequencies is proposed to increase by one, namely $d^{\,p} = d^{\,c} + 1$. The proposed frequency vector $\bm{\omega}^{\, p}$ is constructed as follows $$ \bm{\omega}^{\, p} = (\omega_1^{c}, \dots, \, \omega_{d^{\, c}}^{c}, \, \omega_{d^{\, p}}^{\,p})^{'}, $$ where the proposed additional frequency $\omega_{d^{\, p}}^{\,p}$ is sampled following \citet{hadj2018bayesian}, namely by drawing a candidate value $\omega_{d^p}^{p}$ uniformly from the union of intervals of the form $[\omega_l^{c} + \psi_\omega, \, \omega_{l+1}^{c} - \psi_\omega]$, for $l = 0, \dots, d_{c}$ and denoting $\omega_0^{\, c} = 0$ and $\omega_{\, d^c +1}^c = \phi_\omega$. Here, $\psi_\omega$ is a fixed minimum distance between frequencies larger than $\frac{1}{n}$ \citep{dou1995bayesian, hadj2018bayesian}. Also, we sort the proposed vector of frequencies $\bm{\omega}^{\, p}$ to ensure identifiability when performing estimation, as suggested by \citet{andrieu1999joint}. For proposed $\bm{\omega}^{\, p}$ and given $\sigma^{\,2c}$, the proposed vector of linear coefficients $\bm{\beta}^{\, p}$ is drawn from its conjugate Gaussian posterior (as in Equation \textcolor{blue}{12}). The proposed state $(\, d^{\,p}, \, \bm{\omega}^{\, p}, \, \bm{\beta}^{\, p} \,)$ is jointly accepted or rejected with probability
\begin{equation}
\label{eq:acceptance_birth}
\alpha = \min \Bigg\{ 1, \, \dfrac{\mathscr{L}(\, \bm{\theta}^{\, p} \, | \, \bm{y}^{\,*})}{\mathscr{L}( \, \bm{\theta}^{\, c} \, | \, \bm{y}^{\,*})} \times \dfrac{p \,(\,d^{\, p}) \, p \,(\,\bm{\omega}^{\, p} \, | \, d^{\,p}) \, p \,(\, \bm{\beta}^{\, p} \, | \, \bm{\omega}^{\, p}, \, d^{\,p})}{p \,(\,d^{\, c}) \, p \,(\,\bm{\omega}^{\, c} \, | \, d^{\,c}) \, p \,(\, \bm{\beta}^{\, c} \, | \, \bm{\omega}^{\, c}, \, d^{\,c})} \times \dfrac{r_{d^{\, p}} \cdot \big(\frac{1}{d^{\,p}}\big) \cdot q \, ( \, \bm{\beta}^{\, c}\,)}{b_{d^{\, c}} \cdot q \, (\omega_{d^p}^{p}) \cdot q \, ( \, \bm{\beta}^{p} \, ) } \Bigg\},
\end{equation} where the likelihood $\mathscr{L} ( \cdot \, | \, \bm{y}^{\,*}) $ is provided in Equation (\textcolor{blue}{8}), $p (\, d \, ) $ is the Poisson density truncated at $d_{\text{max}}$, $b_{d^{\, c}}$ and $ r_{d^{\, p}}$ are the probabilities specified in Equation (\textcolor{blue}{10}), $q \, (\omega_{d^p}^{p})$ is the density of the uniform proposal for sampling the additional frequency, $q \, ( \bm{\beta}^{\, c} )$ and $q \, ( \bm{\beta}^{\,p} )$ are the Normal densities $\bm{\mathcal{N}}_{2d^{c}} \, (\, \hat{\bm{\beta}}^{\,c}, \,\bm{V}_{\beta}^{\, c})$ and $\bm{\mathcal{N}}_{2d^{p}} \, (\, \hat{\bm{\beta}}^{\,p}, \, \bm{V}_{\beta}^{\, p})$, respectively (Equation \textcolor{blue}{12}). Finally, the residual variance $\sigma^2$ is updated in a Gibbs step from Equation (\textcolor{blue}{14}).
\vspace{0.1cm}
\textit{Death move:} If a death move is attempted, the number of frequencies is proposed to decrease by one, namely $d^{\, p} = d^{\, c} - 1$. The proposed vector of frequencies $\bm{\omega}^{\, p}$ is constructed by choosing with probability $\frac{1}{d^{\, c}}$ one of the current frequencies as the candidate frequency to be removed. Conditioned on $\bm{\omega}^{\, p}$ and $\sigma^{\,2c}$, a vector of linear coefficients $\bm{\beta}^{\, p}$ is sampled from its Gaussian posterior conditional distribution (Equation \textcolor{blue}{12}). The proposed state $(\, d^{\,p}, \, \bm{\omega}^{\, p}, \, \bm{\beta}^{\, p} \,)$ is jointly accepted or rejected with probability given in Equation \eqref{eq:acceptance_birth} above, with the correct adjustment of labelling of the variables, and the terms in the ratio inverted. The residual variance is then updated in a Gibbs step (Equation \textcolor{blue}{14}).
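The uniform draw of the additional frequency in the birth move can be sketched as follows (our own illustration; the default values for $\phi_\omega$ and $\psi_\omega$ are placeholders, not the paper's settings):

```python
import numpy as np

def propose_birth_frequency(omega_current, phi_omega=0.5, psi_omega=0.01, rng=None):
    """Draw the candidate extra frequency uniformly from the union of gaps
    [omega_l + psi, omega_{l+1} - psi], with omega_0 = 0 and
    omega_{d+1} = phi_omega, so the candidate keeps a minimum distance
    psi_omega from every current frequency."""
    rng = np.random.default_rng() if rng is None else rng
    edges = np.concatenate(([0.0], np.sort(omega_current), [phi_omega]))
    lo = edges[:-1] + psi_omega
    hi = edges[1:] - psi_omega
    lengths = np.clip(hi - lo, 0.0, None)   # gaps narrower than 2*psi vanish
    k = rng.choice(len(lengths), p=lengths / lengths.sum())  # pick a gap
    return rng.uniform(lo[k], hi[k])        # uniform within the chosen gap
```

Selecting a gap with probability proportional to its length and then drawing uniformly within it is equivalent to a single uniform draw over the union of intervals.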
\renewcommand{\thesection}{C}
\section{Updating HMM Parameters} \label{sec:appendix_HMM}
\subsection{Sampling the Hidden State Sequence} \label{sec:sampling_z}
Given observations $\bm{y}$, transition probabilities $\bm{\pi}$ and emission parameters $\bm{\theta}$, we sample the state sequence $\bm{z}$ using the forward-backward procedure introduced by \citet{rabiner1989tutorial} and presented in a Bayesian setting in \citet{fox2011sticky}. Let us define, recursively, backward messages $b_{ t, \, t-1} \,( k )$ as
\begin{equation}
b_{\, T+1, \, T} \,( k ) = 1, \qquad \qquad
b_{t, \, t-1} \,( k ) \propto \sum_{j\, = \, 1}^{K_{max}} \pi_{kj} \, \mathcal{N} \, ( \,y_t; \, f_{j \, t}, \, \sigma^2_j ) \, b_{t+1, \, t} \, ( j ), \quad t = T, \dots, 2,
\end{equation} for each state $k = 1, \dots, K_{max}$, where we recall that $\pi_{kj} = p \, ( z_{t} = j \, | \, z_{t-1} = k )$. Here, $\mathcal{N} \, ( \,y_t; \, f_{j \, t}, \, \sigma^2_j )$ denotes the density of a Gaussian distribution with mean $ f_{jt} = \bm{x}_t \, \big( \bm{\omega}_{ j} \big)^{\, '} \, \bm{\beta}_{\, j} $ (as in Equation \textcolor{blue}{3}) and variance $\sigma^2_j$, evaluated at $y_t$. Note that $b_{ t, \, t-1} \,( k ) \propto p \, ( y_t, \dots, y_T \, | \, z_{t-1} = k, \, \bm{\pi}, \, \bm{\theta} ) $, namely a backward message passed from $z_t$ to $z_{t-1}$ is proportional to the probability of the partial observation sequence from $t$ to the end, given the state $z_{t-1} = k$. We then observe that we may recursively sample each state $z_{\,t}$ conditioned on $z_{\, t-1}$ since
\begin{equation*}
p \, ( \bm{z} \, | \, \bm{y}, \bm{\pi}, \bm{\theta} \, ) = p \, (z_1 \, | \, \bm{y}, \bm{\pi}, \bm{\theta} \, ) \, \prod_{t\, = \,2}^{T} p \, (z_{\, t} \, | \, z_{\, t-1}, \, \bm{y}, \bm{\pi}, \bm{\theta} \,).
\end{equation*}
The first state $z_1$ is sampled from the following posterior conditional distribution \begin{equation*}
\begin{split}
p \, (z_1 = k \, | \, \bm{y}, \bm{\pi}, \bm{\theta} \, ) \propto \pi_{\,0 k}\, \mathcal{N} \, ( \,y_1; \, f_{\,k 1}, \, \sigma^2_k ) \, b_{ 2, \, 1} \,( k ),
\end{split}
\end{equation*} where we recall that $\pi_{0k}$ is the initial transition distribution $p \, (z_{\, 1} = k)$. The rest of the sequence $z_{\,2}, \dots, z_{T}$ is then sampled, recursively, from
\begin{equation*}
p \, (z_t = k \, | \, z_{t-1} = j, \, \bm{y}, \bm{\pi}, \bm{\theta} \, ) \propto \pi_{\, jk}\, \mathcal{N} \, ( \, y_t; \, f_{k \, t}, \, \sigma^2_k ) \, b_{ t+1, \, t} \,( k ).
\end{equation*}
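The backward-message and forward-sampling recursions above can be sketched as follows. This is a simplified illustration with constant Gaussian means standing in for the harmonic regression means $f_{jt}$; the per-step normalization of the messages is a numerical-stability choice of ours, permitted because the messages are only defined up to proportionality.

```python
import numpy as np

def sample_state_sequence(y, pi0, Pi, means, variances, rng=None):
    """Backward messages, then forward sampling of z_{1:T}, for an HMM
    with Gaussian emissions N(y_t; mean_k, var_k)."""
    rng = np.random.default_rng() if rng is None else rng
    T, K = len(y), len(pi0)
    lik = (np.exp(-(y[:, None] - means[None, :]) ** 2 / (2 * variances))
           / np.sqrt(2 * np.pi * variances))       # emission densities
    b = np.ones((T + 1, K))                        # b[T] plays b_{T+1,T} = 1
    for t in range(T - 1, 0, -1):                  # message passed to z_{t-1}
        b[t] = Pi @ (lik[t] * b[t + 1])
        b[t] /= b[t].sum()                         # normalize for stability
    z = np.empty(T, dtype=int)
    p = pi0 * lik[0] * b[1]                        # first state
    z[0] = rng.choice(K, p=p / p.sum())
    for t in range(1, T):                          # remaining states
        p = Pi[z[t - 1]] * lik[t] * b[t + 1]
        z[t] = rng.choice(K, p=p / p.sum())
    return z
```

With two well-separated emission means, the sampler recovers the generating state sequence essentially deterministically.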
\subsection{Sampling Table Counts and Override Variables} \label{sec:sampling_table_counts}
Conditioned on the state sequence $\bm{z}$ and collection of dish ratings $\bm{\alpha}$, as well as hyperparameters $\eta$ and $\kappa$, we sample $m_{jk}$, $o_{jt}$ and $\bar{m}_{jk}$ as in \citet{fox2011sticky} from
\begin{equation*}
\begin{split}
p \, ( \bm{m}, \, \bm{o}, \, \bm{\bar{m}} \, | \, \bm{z}, \, \bm{\alpha}, \, \eta, \kappa ) &= p \, ( \bm{m} \, | \, \bm{z}, \, \bm{\alpha}, \, \eta, \, \kappa) \times p \, ( \bm{o} \, | \, \bm{m}, \, \bm{z}, \, \bm{\alpha}, \, \eta, \, \kappa) \\
& \times p \, ( \bm{\bar{m}} \, | \, \bm{o}, \, \bm{m}, \, \bm{z}, \, \bm{\alpha}, \, \eta, \, \kappa),
\end{split}
\end{equation*}
where $\bm{m}$ and $\bm{\bar{m}}$ denote the vectors of table counts of served and considered dishes, respectively, and $\bm{o}$ is the vector of override variables. Hence, we first draw $\bm{m}$, we then sample $\bm{o}$, and finally determine $\bm{\bar{m}}$.
\textit{Updating $m_{jk}$ : } Let us consider sampling from $p \, ( \bm{m} \, | \, \bm{z}, \, \bm{\alpha}, \, \eta, \, \kappa)$. Conditional on the value of the states $\bm{z}$, the customers $\bm{y}$ are partitioned according to both restaurants and dishes, but the table assignments are unknown because multiple tables can be served the same dish. Table assignments may be sampled from the following conditional distribution
\begin{equation}
\label{eq:table_assignments}
p \, ( t_{ji} = t \, | \, \bm{t}^{-ji}, \, k_{jt} = k, \, \bm{k}^{-jt}, \, \bm{\alpha}, \, \eta, \, \kappa ) \, \propto \, \begin{cases}
\,\tilde{n}_{jt}^{-ji}, & t \in \{1, \dots, T_j\} \\
\, \eta \, \alpha_k + \kappa \, \delta (k, j), & t = T_{j} + 1,
\end{cases}
\end{equation}
where $\tilde{n}_{jt}^{-ji}$ is the number of customers in restaurant $j$ that sit at table $t$, not including customer $y_{ji}$, $\bm{t}^{-ji}$ are the table assignments for all customers in restaurant $j$ except for $y_{ji}$, and similarly $\bm{k}^{-jt}$ denotes the dish assignments for all tables without counting table $t$ in restaurant $j$. Equation \eqref{eq:table_assignments} implies that we can sample table assignments once the dish assignments are known, and also states that a customer enters the restaurant and chooses an already occupied table with probability proportional to $\tilde{n}_{jt}^{-ji}$, or starts a new table that is served dish $k$ with probability proportional to $\eta \, \alpha_k + \kappa \, \delta (k, j)$. Note that, when joining a new table, a mass proportional to $\kappa$ is added if the dish assigned to that table was the house speciality dish. Moreover, the form of Equation \eqref{eq:table_assignments} also implies that a customer table assignment $t_{ji}$, conditioned on the dish assignment $k$, follows a DP with concentration parameter $\eta \, \alpha_k + \kappa \, \delta \, (j, k)$. Hence, the inference algorithm may be performed by introducing a set of auxiliary random variables $t^{\,(i)}_{jk}$ which indicate whether or not customer $i$ in restaurant $j$ has joined a new table serving dish $k$. These variables can be sampled in the following way
\begin{equation*}
t^{\,(i)}_{\, jk} \, \big| \, \, n_{jk},\, \alpha_{k}, \, \eta, \kappa \, \sim \text{Bernoulli} \, \Bigg( \, \dfrac{\eta \,\alpha_k + \kappa \, \delta (k, j) }{\eta \,\alpha_k + \kappa \, \delta (k, j) + i} \, \Bigg), \qquad i = 1, \dots, n_{jk},
\end{equation*} recalling that $n_{jk}$ is the number of transitions from state $j$ to state $k$. Table counts $m_{jk}$ are then determined by summing over these auxiliary random variables, i.e.
\begin{equation*} \label{eq:table_counts_eval}
m_{jk} = \sum_{i = 1}^{n_{jk}} \, t^{\,(i)}_{\, jk}.
\end{equation*}
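This auxiliary-variable recipe can be transcribed directly (our own sketch; the Bernoulli success probability is taken verbatim from the display above, with $i$ running from 1 to $n_{jk}$):

```python
import numpy as np

def sample_table_counts(n, alpha, eta, kappa, rng=None):
    """Sample m_{jk} as a sum of Bernoulli indicators t_{jk}^(i),
    i = 1, ..., n_{jk}, each with success probability
    (eta*alpha_k + kappa*1{k=j}) / (eta*alpha_k + kappa*1{k=j} + i)."""
    rng = np.random.default_rng() if rng is None else rng
    K = n.shape[0]
    m = np.zeros_like(n)
    for j in range(K):
        for k in range(K):
            a = eta * alpha[k] + kappa * (j == k)   # extra mass kappa if k = j
            for i in range(1, n[j, k] + 1):
                m[j, k] += rng.random() < a / (a + i)
    return m
```

By construction $0 \le m_{jk} \le n_{jk}$, and the sticky mass $\kappa$ inflates the new-table probability only for the self-transition (house speciality) counts.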
\textit{Updating $o_{jt} : $} Let us now consider sampling from $p \, ( \bm{o} \, | \, \bm{m}, \, \bm{z}, \, \bm{\alpha}, \, \eta, \, \kappa)$. First, we observe that, when $j \neq k$, there are $m_{jk}$ tables for which $o_{jt} = 0$, since the corresponding considered dishes were not overridden with probability one. On the other hand, when $j = k$, the served dish $k_{jt} = j$, which is the house speciality dish, can arise from either an override decision (i.e. $o_{jt} = 1$) or a considered dish $\bar{k}_{jt} = j$ which has not been overridden (i.e. $o_{jt} = 0$). The resulting conditional distribution is given below
\begin{equation} \label{eq:override_posterior}
p \, ( o_{jt} \, | \, k_{jt} = j, \, \bm{\alpha}, \, \rho ) \propto
\begin{cases}
\, \rho & o_{jt} = 1, \\
\, \alpha_j \, (1 - \rho) & o_{jt} = 0.
\end{cases}
\end{equation}
Hence, we may sample $m_{jj}$ Bernoulli random variables from Equation \eqref{eq:override_posterior} or, equivalently, sample $o_{j \, \cdot } = \sum_{t} o_{jt}$, i.e. the total number of overridden dishes in restaurant $j$, from the following Binomial distribution \begin{equation*}
o_{j \, \cdot } \sim \text{Binomial } \, \bigg( m_{jj}, \, \dfrac{\rho}{\rho + \alpha_j \, ( 1 - \rho) }\bigg).
\end{equation*}
\textit{Updating $\bar{m}_{jk}:$ } Conditioned on table counts $m_{jk}$ of served dishes for all $j$ and $k$, and override variables $o_{jt}$ for each of these instantiated tables, the number of tables $\bar{m}_{jk}$ that considered dish $k$ in restaurant $j$ is computed deterministically in the following way
\begin{equation*}
\bar{m}_{jk} = \begin{cases}
m_{jk} & j \neq k, \\
m_{jj} - o_{j \, \cdot} &j = k,
\end{cases}
\end{equation*}
noticing that, for house speciality dishes, we subtract the total number $o_{j \cdot}$ of override tables within restaurant $j$, since tables with $o_{jt} = 1$ were served the house speciality dish $j$ through an override rather than through a considered dish.
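The override step and the deterministic computation of $\bar{m}_{jk}$ can be combined in one short routine. This is our own sketch: the override probability is obtained by normalizing the two unnormalized cases of Equation \eqref{eq:override_posterior}, so that $p \, (o_{jt} = 1) = \rho / \big(\rho + \alpha_j (1 - \rho)\big)$.

```python
import numpy as np

def considered_table_counts(m, alpha, rho, rng=None):
    """Draw override totals o_{j.} ~ Binomial(m_jj, p_j) with
    p_j = rho / (rho + alpha_j*(1 - rho)), the normalized probability of
    o_{jt} = 1; then set mbar_{jk} = m_{jk} off the diagonal and
    mbar_{jj} = m_{jj} - o_{j.} on the diagonal."""
    rng = np.random.default_rng() if rng is None else rng
    mbar = np.array(m, dtype=int).copy()
    for j in range(m.shape[0]):
        p_override = rho / (rho + alpha[j] * (1.0 - rho))
        o = rng.binomial(m[j, j], p_override)   # overridden speciality tables
        mbar[j, j] = m[j, j] - o                # tables that considered dish j
    return mbar
```

In the limiting cases, $\rho = 0$ leaves the counts unchanged, while $\rho = 1$ attributes every speciality-dish table to an override.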
\subsection{Sampling Hyperparameters} \label{sec:app_hyper}
We follow \citet{fox2011sticky} and parameterize the model by $(\eta + \kappa)$ and $\rho = \kappa / (\eta + \kappa)$. The previous parameterization can be restored using the equalities $\eta = ( 1 - \rho) (\eta + \kappa)$ and $\kappa = \rho (\eta + \kappa)$. We place a Beta ($c_{\rho}, d_{\rho}$) prior on the expected value $\rho$ of the override variable $o_{jt}$ and a vague Gamma ($a_{\eta + \kappa}, b_{\eta + \kappa}$) prior on $(\eta + \kappa)$. A Gamma ($a_{\gamma}, b_{\gamma}$) prior is placed on the concentration parameter $\gamma$. The posterior distributions of these hyperparameters are given below. Here, we omit the full derivations of these results, which follow closely \citet{escobar1995bayesian} and \citet{teh2006} and are provided in detail in \citet{fox2011sticky}.
\textit{Updating $ (\eta + \kappa)$ : } Let $J$ be the number of instantiated restaurants in the franchise at a given iteration of the sampler. We introduce auxiliary random variables $\bm{r} = \{ r_1, \dots, r_J\}$, where each $r_j \in [0,1]$, and $\bm{s} = \{s_1, \dots, s_J\}$, with each $s_j \in \{0,1\}.$ Then, the posterior distribution of $ (\eta + \kappa)$, conditioned on this newly introduced set of parameters, is given by
\begin{equation*}
(\eta + \kappa) \, \big| \, \bm{r}, \, \bm{s}, \, \bm{y} \, \sim \, \text{Gamma\,} \bigg( a_{\eta + \kappa} + m_{\cdot\cdot} - \sum_{j=1}^{J} s_j, \, b_{\eta + \kappa} - \sum_{j=1}^{J} \log r_j \bigg),
\end{equation*}
where the auxiliary variables $\bm{r}$ and $\bm{s}$ are updated in a Gibbs step from
\begin{equation*}
\begin{split}
r_j \, \big| \, \eta + \kappa, \, \bm{r}^{-j}, \, \bm{s}, \, \bm{y} \, &\sim \, \text{Beta\,} \big( \eta + \kappa + 1, n_{j \cdot} \big), \\
s_j \, \big| \, \eta + \kappa, \, \bm{s}^{-j}, \, \bm{r}, \, \bm{y} \, &\sim \, \text{Bernoulli\,} \bigg( \dfrac{n_{j\cdot}}{n_{j\cdot} + \eta + \kappa} \bigg),
\end{split}
\end{equation*}
and we recall that marginal counts are denoted by dots, so that $m_{\cdot \cdot}$ is the total number of tables serving dishes in the franchise, and $n_{jk}$ is the number of transitions from state $j$ to $k$ in the state sequence $\bm{z}$.
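One sweep of this auxiliary-variable update can be sketched as follows (our own illustration; the helper name is ours, and note that numpy's gamma sampler takes a scale argument, i.e. the inverse of the rate in the full conditional):

```python
import numpy as np

def gibbs_eta_plus_kappa(eta_kappa, n_jdot, m_dotdot, a, b, rng=None):
    """One sweep: draw r_j ~ Beta(eta+kappa+1, n_j.) and
    s_j ~ Bernoulli(n_j. / (n_j. + eta+kappa)) given the current value,
    then resample (eta + kappa) from its Gamma full conditional."""
    rng = np.random.default_rng() if rng is None else rng
    n_jdot = np.asarray(n_jdot, dtype=float)
    r = rng.beta(eta_kappa + 1.0, n_jdot)                      # r_j | ...
    s = rng.random(len(n_jdot)) < n_jdot / (n_jdot + eta_kappa)  # s_j | ...
    shape = a + m_dotdot - s.sum()
    rate = b - np.log(r).sum()          # log r_j < 0, so the rate stays positive
    return rng.gamma(shape, 1.0 / rate)  # numpy parameterizes by scale = 1/rate
```

The update for $\gamma$ below has exactly the same structure, with $\bar{m}_{\cdot \cdot}$ and $\bar{K}$ in place of $n_{j \cdot}$ and $m_{\cdot \cdot}$.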
\textit{Updating $ \gamma$ :} The posterior distribution of $\gamma$ can be updated in a similar way. We introduce auxiliary random variables $\psi \in [0, 1]$ and $\xi \in \{0, 1\}$ and draw $\gamma$ from the full conditional
\begin{equation*}
\gamma \, \big| \, \psi, \, \xi, \, \bm{y} \, \sim \, \text{Gamma\,} \bigg( a_{\gamma} + \bar{K} - \xi, \, b_{\gamma} - \log \psi \bigg),
\end{equation*}
where the auxiliary variables $\psi$ and $\xi$ are drawn in a Gibbs step from
\begin{equation*}
\begin{split}
\psi \, \big| \, \gamma, \, \xi, \, \bm{y} \, &\sim \, \text{Beta\,} \big( \gamma + 1, \, \bar{m}_{\cdot \cdot} \big), \\
\xi \, \big| \, \gamma, \, \psi, \, \bm{y} \, &\sim \, \text{Bernoulli\,} \bigg( \dfrac{\bar{m}_{\cdot \cdot}}{\bar{m}_{\cdot \cdot} + \gamma} \bigg),
\end{split}
\end{equation*}
where we recall that $\bar{K}$ is the number of unique considered dishes in the franchise, and $\bar{m}_{\cdot \cdot}$ is the total number of tables, across all restaurants, whose considered dishes were not overridden.
\textit{Updating $ \rho$: } Finally, we sample the self-transition proportion $\rho$ from its conditional posterior distribution which is given by
\begin{equation*}
\rho \, | \, \bm{o} \, \sim \text{Beta } \bigg( \sum_{j=1}^{J} o_{j \cdot} + c_{\rho}, \, m_{\cdot \cdot} - \sum_{j=1}^{J} o_{j \cdot} + d_{\rho} \bigg).
\end{equation*}
\bibliographystyle{imsart-nameyear}
\section{Introduction}\label{sec:intro}
Many schemes for classifying galaxies have been presented over the years,
focusing on somewhat ephemeral properties such as morphology and color.
Alternatively, one may consider three fundamental physical parameters:
mass $M$, energy $E$, and angular momentum $J$.
Qualitatively, these are related to the amount of material in a galaxy,
to the linear size, and to the rotation velocity.
An important advantage of these parameters is that they may be related back to
the earlier states of galaxies without having to unravel all of the
messy intervening details
such as baryonic dissipation, star formation, and morphological transformation.
As an example, the simple assumption that $J$ is approximately conserved during the
collapse of gas within hierarchically-forming dark matter halos
naturally explains the
observed basic scaling relations of disk galaxies
\citep{1980MNRAS.193..189F,1997ApJ...482..659D,1998MNRAS.295..319M}.
Here ``conserved'' means that the initial $J_\star$ is retained at a factor of $\sim$~2 level,
unlike $E$, which can be readily lost by factors of $\sim$~10 through dissipative collapse and radiation.
Note that the ``weak'' conservation
of {\it total} $J$ is less restrictive and more plausible than the ``strong''
conservation of
the {\it internal} distribution of $J$ with radius, which
could be readily altered
by secular processes within disks while still preserving total $J$
(e.g., \citealt{2004ARA&A..42..603K};
see \citealt{2002ASPC..275..389F}
and \citealt{2002ARA&A..40..487F} for further discussion).
In this vein,
\citet[hereafter F83]{1983IAUS..100..391F} introduced a general diagram of
$j_\star$\ versus stellar mass $M_\star$, where $j_\star \equiv J_\star/M_\star$ is the
stellar specific angular momentum.
This diagram has the important advantages that it deals
with conservable physical quantities, and that the axes represent independent
variables. The $M_\star$\ axis embodies a {\it mass} scale, while the $j_\star$\ axis
represents a {\it length} scale times a {\it rotation-velocity} scale.
By contrast, the standard relations between $M_\star$\ and circular velocity $v_{\rm c}$
(e.g., \citealt{1977A&A....54..661T,2010MNRAS.407....2D,2011ApJ...742...16T})
involve correlated variables,
since $v_{\rm c}$ may be directly connected to $M_\star$.
Another related parameter is the {\it spin} ($\lambda$), which is useful for
characterizing dark matter halo rotation, and which we will discuss later in this paper.
The simple $j_\star$--$M_\star$\ diagram is still charged with useful
information for understanding galaxies, and to orient the remainder of our
discussion, we begin by reproducing the original version from F83
here in Figure~\ref{fig:JMM00}.
The only change is to rescale the data for
a Hubble constant of $h=0.7$ rather than $h=0.5$.
These data were for late-type spirals (Sb and Sc) based on extended optical
rotation curves, and for elliptical galaxies based on observations from their
inner half-light radii, as feasible in that era.
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f1.ps}
}
\caption{The total intrinsic stellar specific angular momentum of galaxies
plotted against their total stellar mass, reproduced from
\citet{1983IAUS..100..391F},
with corrections from a Hubble constant of $h=0.5$ to $0.7$.
The symbols show galaxy types according to the legend at the upper left;
for the ellipticals (E), open circles show galaxies with an upper-limit estimate of $j_\star$.
The dotted line shows a trend of $j_\star \propto M_\star^{2/3}$.
The logarithms plotted here and used throughout the paper are in base~10.
These $j_\star$--$M_\star$\ scaling relations are the focus of this paper, and will
eventually be updated
in Figure~\ref{fig:JMM0}.
\label{fig:JMM00}
}
\end{figure}
The first key feature to note from Figure~\ref{fig:JMM00} is that
the spirals follow a fairly tight scaling relation of $j_\star \propto M_\star^\alpha$,
where $\alpha\sim0.7$
(see also \citealt{1967PASJ...19..409T,1969ApL.....3..153H,1970ApJ...160..811F,1973ApJ...184..735N}),
which is a phenomenology that is now understood
to provide a remarkable link between visible galaxies
and their invisible dark matter halos.
F83 provided a simple theoretical framework in which the gaseous baryons of galaxies
are initially mixed with the dark matter and share in the same $j$.
The baryons then cool and decouple from the dark matter, collapsing into star-forming disks.
If the baryonic $j$ is approximately conserved in this process,
both the {\it zeropoint} and the {\it slope} of the observed spiral-galaxy $j_\star$--$M_\star$\ relation
are reproduced.
The formation of disk galaxies can thus be explained at a basic level through this
long-standing picture of (weak) $j$ conservation.
To provide further understanding, hydrodynamical simulations of galaxy
formation have been pursued for decades, with the $j_\star$--$M_\star$\ observational diagram
from F83 as a key benchmark for theory.
Attaining that benchmark has turned out to be a major challenge, with early
studies finding catastrophic $j$ loss (e.g., \citealt{1991ApJ...377..365K,1991ApJ...380..320N,1995MNRAS.275...56N,1997ApJ...478...13N}).
This angular momentum ``catastrophe'' can be attributed partially to numerical limitations,
and partially to uncertainties in modeling baryonic processes such as feedback
following star formation, as reviewed by \citet{2002ASPC..275..389F}.
Over the years, the simulations have improved and
can now come close to reproducing the $j_\star$--$M_\star$\ observations
(e.g., \citealt{2007MNRAS.374.1479G,2011MNRAS.410.1391A,2011ApJ...742...76G}),
although much work still remains in understanding both the numerics and the physics.
Besides the angular momentum benchmark from F83 which has become a standard ingredient in
modeling the formation of disk galaxies, there is another aspect
of the original $j_\star$--$M_\star$\ diagram
that has received relatively little attention:
the inclusion of elliptical galaxies along with the spirals.
The diagram thereby provides
a fundamental diagnostic of scaling relations for {\it all} galaxies, which
is important because
there is still not a full explanation for such a basic property
as the \citet{1926ApJ....64..321H} sequence of galaxy morphologies.
Star formation considerations aside,
there is an obvious {\it dynamical} distinction between galaxy
disks and spheroids, which are characterized
by cold, ordered rotation versus random motions with fairly low net rotation,
respectively.
Differences in the conservation and distribution of $j$ may very well
be pivotal to explaining these differences and to governing the fates of
galaxies.
As shown in Figure~\ref{fig:JMM00},
F83 found that ellipticals followed a $j_\star$--$M_\star$\ trend roughly parallel to the spirals,
but lower by a factor of $\sim$~6, and with more apparent scatter
(see also \citealt{1975ApJ...200..439B}).
There are several potential explanations for such a
difference between spirals and ellipticals, but the most plausible one is
traced to a violent, clumpy genesis for spheroids.
For example, mergers
could naturally redistribute angular momentum from the central regions of a galaxy to its
outer parts by dynamical friction
(e.g., \citealt{1980ApJ...236...43A,1981MNRAS.197..179G,1987ApJ...319..575B,1988ApJ...330..519Z,1992ApJ...393..484B,1992ApJ...400..460H,1994MNRAS.267..401N,1996ApJ...463...69H,2007MNRAS.380L..58D,2008MNRAS.387..364Z}).
Thus, $j$ should be basically conserved but inconveniently locked up in unobservable components such
as the dark halo and the faint outer stars.
With this theoretical sketch in hand, the $j_\star$\ disparity between spirals
and ellipticals has received little further attention over the years.
However, the scenario of angular momentum redistribution has not yet been directly
tested by observations---a
situation that may now finally be remedied via the advent of new
techniques for optical spectroscopy in galaxy halos
(with preliminary results along these lines
reported in \citealt{2004IAUS..220..165R}).
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f2.ps}
}
\caption{
Physically-motivated classification diagram of galaxies, using the parameter
space of stellar mass and specific angular momentum.
The solid blue and red lines show parallel scaling relations for disks and bulges,
which are based loosely on our observational results to be
presented in Section~\ref{sec:obsres}.
Approximate positions are also shown for different galaxy types:
Sc, Sb, Sa, S0, fE, and sE (the latter two
being fast- and slow-rotating ellipticals).
}
\label{fig:schem1}
\end{figure}
In this paper we re-open various questions about angular momentum in
all types of bright galaxies,
following and extending the treatment of F83.
Are the $j_\star$--$M_\star$\ slopes, zeropoints, and scatter in Figure~\ref{fig:JMM00}
supported upon re-examination?
Does the ``missing'' $j_\star$\ in ellipticals emerge in large-radius data?
Can the $j_\star$\ variations be associated with the natural dispersion in spin expected for
standard dark matter halos, or is it necessary to invoke additional baryonic $j$ evolution?
F83 also proposed that the Hubble sequence may be understood as a systematic variation
in $j_\star$\ at a fixed $M_\star$\
(or equivalently, variation in $M_\star$\ at fixed $j_\star$),
but could not test this idea owing to the lack of
adequate data for the crucial, intermediate cases of Sa and S0 galaxies.
Here we will pursue this theme, and advance a framework where
every galaxy can be considered basically as a linear combination of a disk and a bulge,
with each of these components following a characteristic $j_\star$--$M_\star$\ scaling relation.
In this idealized model, the $j_\star$--$M_\star$\ parameter space maps uniquely to a space
of $M_\star$\ and bulge fraction $B/T$.
Figure~\ref{fig:schem1} provides a schematic overview of this framework, showing
decompositions of the Hubble sequence in $j_\star$--$M_\star$\ parameter space.
One of our goals in this paper will be to include observational results for
Sa and S0 galaxies in this diagram for the first time, to see if such
systems fill in the gap (if any) between earlier and later types,
and if bulges and disks are homologous enough to explain the
$j_\star$--$M_\star$\ trends as primarily reflecting a $B/T$ sequence.
The $j_\star$--$M_\star$\ diagram does not simply provide a basic {\it description} of galaxies
and their subcomponents, but also permits a novel approach to modeling the
{\it evolution} of galaxies which is complementary to numerical simulations.
As mentioned previously, there are simple models for the formation of disk galaxies
that relate their $j_\star$\ and $M_\star$\ values to the initial conditions of their host halos.
More generally, {\it any} stage in the evolution of a galaxy will involve
a vector of change in the $j$-$M$ diagram
that is not arbitrary, since in real physical processes, changes in
$j$ and $M$ will be linked in characteristic ways.
Therefore the empirical offsets between the $j_\star$--$M_\star$\ sequences of different galaxy types,
and of their subcomponents including bulges, disks, and dark matter halos,
can reveal the evolutionary connections among them.
We set out to explore the preceding questions and issues as follows.
In Section~\ref{sec:gen} we present a methodology for careful estimation of $j_\star$\ in
various types of galaxies and observations, with most of the details of its derivation
given in Appendix~\ref{sec:form}.
Section~\ref{sec:examp} uses detailed models of a handful of real galaxies
to examine a simplified procedure for $j_\star$\ estimation.
Our updated analysis
of the observed $j_\star$\ trends in a large sample of galaxies
follows, with the observational ingredients and their inter-correlations
described in Section~\ref{sec:data},
and the full results presented in Section~\ref{sec:obsres}
including a definitive confirmation of the large offset between spirals and ellipticals.
These empirical $j_\star$\ trends can be considered as fundamental, enduring
tools for constraining theories of galaxy evolution.
In Section~\ref{sec:theory} we go on to connect the observations to generalized theoretical
predictions for angular momentum
in a modern cosmological context.
We summarize in Section~\ref{sec:concl}.
In addition, Appendix~\ref{sec:form} is an important part of this paper, providing an extended presentation
of new content relating to the derivation of $j_\star$, which has been split off from
the main text for the sake of readability.
Appendices~\ref{sec:obsapp}--\ref{sec:decomp} provide data tables of $j_\star$\ and other properties of observed galaxies,
along with detailed discussion of the observations and data analysis for a subsample
of these galaxies.
The reader looking for immediate answers to the questions above may wish to skip
ahead to the results of Section~\ref{sec:obsres} and onwards.
\section{Basic formulae: disks and spheroids}\label{sec:gen}
The foundation for this paper is a revised, general observational analysis of
specific stellar angular momentum $j_\star$\ for bright galaxies in the nearby universe.
This quantity is most generally calculated
by the following expression:
\begin{equation}\label{eqn:j3d}
{\bf j}_{\rm t} \equiv \frac{{\bf J}_{\rm t}}{M_\star} = \frac{\int_{\bf r} {\bf r}\times \bar{\bf v} \rho \, d^3{\bf r}}{\int_{\bf r} \rho \, d^3{\bf r}} ,
\end{equation}
where the subscript ``t'' denotes the ``true'' angular momentum in three-dimensional space,
${\bf r}$ and $\bar{\bf v}({\bf r})$ are the position and mean-velocity vectors (with respect to the center of mass of the galaxy),
and $\rho({\bf r})$ is the three-dimensional density of the population under study
(generally assumed to be stars in this project).
For spiral galaxies, we approximate the density distribution as an infinitely-thin,
axisymmetric disk with an exponential surface density profile.
Assuming also a radially-constant rotation curve,
Equation~(\ref{eqn:j3d}) yields the simple expression:
\begin{equation}\label{eqn:F83eq1}
j_{\rm t} = 2 \, v_{\rm c} \, R_{\rm d}\ ,
\end{equation}
where $v_{\rm c}$ is the intrinsic circular rotation velocity,
and $R_{\rm d}$ is the intrinsic exponential-disk scale length.
These deprojected quantities are relatively easy to infer from observations
because it is straightforward to estimate disk galaxy inclinations.
Equation~(\ref{eqn:F83eq1}) is widely used in the literature (including in F83),
but we will demonstrate explicitly that it provides an excellent approximation
to real galaxies whose rotation curves vary with radius.
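As a concrete numerical illustration (ours, with hypothetical parameter values rather than data from any galaxy in this paper), Equation~(\ref{eqn:F83eq1}) can be checked directly against the general definition of Equation~(\ref{eqn:j3d}) for a razor-thin exponential disk with a flat rotation curve, including the thin-disk deprojection $v_{\rm c}=v_{\rm obs}/\sin i$:

```python
import math

# Hypothetical disk parameters (illustrative only, not from the paper)
v_obs = 180.0   # observed line-of-sight rotation amplitude [km/s]
inc   = 60.0    # inclination [deg]; thin-disk deprojection is 1/sin(i)
R_d   = 3.0     # exponential-disk scale length [kpc]

v_c = v_obs / math.sin(math.radians(inc))  # intrinsic circular velocity

# Closed form: j_t = 2 v_c R_d for a flat rotation curve
j_closed = 2.0 * v_c * R_d

# Direct numerical evaluation of the ratio of integrals
#   j_t = int v_c r Sigma(r) r dr / int Sigma(r) r dr,
# with Sigma(r) ~ exp(-r/R_d), using a midpoint rule out to 30 R_d
dr = 1e-3
rs = [(k + 0.5) * dr for k in range(int(30 * R_d / dr))]
num = sum(v_c * r * math.exp(-r / R_d) * r * dr for r in rs)
den = sum(math.exp(-r / R_d) * r * dr for r in rs)
j_numeric = num / den

print(j_closed, j_numeric)  # the two agree to much better than 0.1%
```

The azimuthal $2\pi$ factors cancel in the ratio, which is why one-dimensional radial integrals suffice here.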
For more general cases including elliptical galaxies,\footnote{We use the term
``spheroid'' to mean a pressure-dominated stellar system
(which may also rotate).
A ``bulge'' is the spheroidal component of a spiral galaxy.
An ``elliptical'' is a galaxy with only a spheroidal component,
although many galaxies commonly classified as ellipticals probably have
embedded disklike components, similar to those in lenticulars but less obvious.
We consider jointly the ellipticals and lenticulars under the general rubric of
``early-type'' galaxies.}
there is no established recipe equivalent to Equation~(\ref{eqn:F83eq1}).
For multiple reasons, estimating $j_{\rm t}$\ for these galaxies is much harder
than for spirals. Not only are their inclinations and intrinsic shapes
uncertain, but large-radius rotation measurements are both more difficult and
more critical.
We illustrate the last point with some basic galaxy models.
Adopting the simple assumption of an axisymmetric system with cylindrical
rotation that is constant with respect to the intrinsic radius $R$, we consider both
a disk galaxy with an exponential surface density profile, and an elliptical
galaxy with a standard \citet{1948AnAp...11..247D} $R^{1/4}$ profile.
Although ellipticals are in general triaxial systems,
the axisymmetric model is sufficiently accurate for our purposes.
Figure~\ref{fig:fR} then shows the cumulative distribution of angular momentum
(both total and specific) with radius.
For the disk galaxy, the specific angular momentum reaches roughly half of its
total value at the effective radius $R_{\rm e}$\ that encloses half of the stellar light.
This implies that observational estimates of $j_{\rm t}$\ will be relatively easy for disk galaxies.
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f3.ps}
}
\caption{Fraction of enclosed cumulative quantities vs.\
cylindrical galactocentric radius (normalized by
the effective radius $R_{\rm e}$) for model galaxies with an exponential profile
($n=1$ disk, {\it top}) and a de Vaucouleurs profile ($n=4$ spheroid, {\it bottom}).
A constant, cylindrical rotation field is assumed.
The quantities are projected stellar mass $M_\star$\ ({\it dotted curve}),
angular momentum $J_\star$ ({\it dashed}), and specific angular momentum $j_\star$\
({\it solid}).
The latter quantity is computed using the cumulative values of both $J_\star$
and $M_\star$\ within the radius $R$.
The vertical dashed line marks 1~$R_{\rm e}$.
To capture half of $j_\star$, the observations must extend to $\sim$~1~$R_{\rm e}$\ in a disk
galaxy, and to $\sim$~(4--5)~$R_{\rm e}$\ in a spheroid.
\label{fig:fR}
}
\end{figure}
For the elliptical galaxy on the other hand, the halfway mark for $j_{\rm t}$\ is reached
at 4.5~$R_{\rm e}$.
This is because ellipticals contain a fairly large fraction of their light in their
outer regions where the radius lever-arm in ${\bf r}\times\bar{\bf v}$ is large.
The implication is that observations of elliptical galaxies need to
extend to much larger radii than for spirals, in order to be confident of
capturing the total $j_{\rm t}$.
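This contrast can be reproduced with a short numerical sketch (our own simplified model, not the paper's calculation): for a S\'ersic surface density with a constant cylindrical rotation field, the substitution $t=(R/R_{\rm e})^{1/n}$ turns the mass and angular momentum integrals into incomplete-gamma forms, and the radius where the cumulative $j(<R)=J(<R)/M(<R)$ reaches half of its total value can be located directly. The constants $b_1\approx1.678$ and $b_4\approx7.669$ are the standard solutions of $\gamma(2n,b_n)=\Gamma(2n)/2$.

```python
import math

def sersic_half_j_radius(n, b_n, t_max=40.0, steps=100_000):
    """Radius (in units of R_e) at which the cumulative specific angular
    momentum j(<R) = J(<R)/M(<R) reaches half of its total value, for a
    Sersic-n surface density and a constant cylindrical rotation field.
    Uses the substitution t = (R/R_e)**(1/n), so that
      M(<R) ~ int n t**(2n-1) exp(-b_n t) dt
      J(<R) ~ int n t**(3n-1) exp(-b_n t) dt   (extra factor of R).
    """
    dt = t_max / steps
    M_tot = J_tot = 0.0
    for k in range(steps):  # first pass: total integrals
        t = (k + 0.5) * dt
        w = math.exp(-b_n * t) * dt
        M_tot += n * t ** (2 * n - 1) * w
        J_tot += n * t ** (3 * n - 1) * w
    j_tot = J_tot / M_tot
    Mc = Jc = 0.0
    for k in range(steps):  # second pass: find the half-j crossing
        t = (k + 0.5) * dt
        w = math.exp(-b_n * t) * dt
        Mc += n * t ** (2 * n - 1) * w
        Jc += n * t ** (3 * n - 1) * w
        if Jc / Mc >= 0.5 * j_tot:
            return t ** n   # convert back to R/R_e
    return float('inf')

r_half_disk = sersic_half_j_radius(1, 1.6783)
r_half_sph  = sersic_half_j_radius(4, 7.6692)
print(r_half_disk, r_half_sph)  # ~1.07 R_e (disk) vs ~4.4 R_e (spheroid)
```

These values match the $\sim$~1~$R_{\rm e}$\ and $\sim$~(4--5)~$R_{\rm e}$\ figures quoted above for the two profile types.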
Typical stellar kinematics observations in 1983 extended to $\sim$~1~$R_{\rm e}$,
and even today, only a small handful of galaxies have been observed
kinematically out to $\sim$~5~$R_{\rm e}$,
which means the positions of the ellipticals in the original $j_\star$--$M_\star$\ diagram
(Figure~\ref{fig:JMM00}) were highly uncertain, and continue to be challenging
to determine with surety.
Fortunately, after a great deal of experimentation, which we will discuss below,
we find that there is a heuristic approach where
observations around $\sim$~2~$R_{\rm e}$\ can be used to
estimate the total $j_{\rm t}$\ of ellipticals with reasonable accuracy.
Returning to a general framework for estimating $j_{\rm t}$\ from observations,
there is not only the challenge of extending the data to large radii, but
also of having only three of the six phase-space quantities in
Equation~(\ref{eqn:j3d}) accessible (i.e., the projected positions and line-of-sight velocity).
Even the projection of ${\bf j}_{\rm t}$ on the sky involves unobservable
velocity components tangential to the line of sight, and requires
additional modeling assumptions.
To cope with these issues, we will model the observed rotation and luminosity profiles of
galaxies and convert these to $j_{\rm t}$\ estimates using approximate deprojection factors.
Although these factors are based on highly simplified models,
the dominant source of uncertainty is still the limited extent of the data to
large radii.
We derive in Appendix~\ref{sec:form} two alternative expressions for estimating $j_{\rm t}$\
from observations, both of them based again on the simplifying assumption of cylindrical rotation.
The first expression starts with a detailed calculation of a ``projected''
specific angular momentum proxy that can be estimated directly from observations:
\begin{equation}
j_{\rm p} = \frac{\int v_{\rm rot,p}(x) \, \Sigma(x) \, x^2 \, dx}{\int \Sigma(x) \, x \, dx} \, .
\label{eqn:JMp}
\end{equation}
Here $v_{\rm rot,p}(x)$\ is the observed profile of rotation velocity along the projected
semi-major axis $x$, and
$\Sigma(x)$ is the surface density profile, again along the semi-major axis.
The quantity $j_{\rm p}$\ is related to $j_{\rm t}$\ through a ``deprojection'' factor $C_i$:
\begin{equation}\label{eqn:proxy}
j_{\rm t} = C_i \, j_{\rm p} .
\end{equation}
Therefore the problem of estimating $j_{\rm t}$\ separates into two parts:
the calculation of $j_{\rm p}$\ from observations, and the factor $C_i$ which can be
calibrated from theoretical models.
As we describe in Appendix~\ref{sec:form},
this latter factor has some dependence on the detailed density--velocity structure
of the galaxy, but is primarily a function of the inclination $i$ relative to the
line of sight. For thin-disk galaxies, it is simply $C_i=(\sin i)^{-1}$.
With spheroidal galaxies, there is an additional dilution effect that comes from
the line-of-sight intersecting the rotation field at non-tangent points.
In principle, this effect is dependent on the detailed shape of the
rotation profile, but we have found with simplified test models that such
variations can be neglected in practice.
We also find that as long as the major-axis radius $x$, rather than a circularized
radius $R$, is used in Equation~(\ref{eqn:JMp}), then $C_i$ is insensitive to galaxy flattening.
A general approximation to $C_i$ as a function of inclination
is provided by Equation~(\ref{eqn:Cform}).
It is normally difficult to determine $i$ for spheroidal galaxies,
and we will adopt inclination-averaged values when needed.
Equation~(\ref{eqn:JMp}) yields accurate results that are commensurate with the quality
of modern observations, but involves numerical integration,
and careful compilation of $\Sigma(x)$ and $v_{\rm rot,p}(x)$\ profiles along with
extrapolation beyond the bounds of the data.
We could in principle simplify the problem further by using parametric
models for $v_{\rm rot,p}(x)$\ and $\Sigma(x)$. Unfortunately, the diversity of
observed rotation profiles (when non-spiral galaxies are considered)
defies parametrization.
We can at least adopt for the surface density
the general \citet{1968adga.book.....S} law
which accurately represents a wide range of galaxy types:
\begin{equation}\label{eqn:sersic}
\Sigma(x) \propto \exp\left[-b_n(x/a_{\rm e})^{1/n}\right],
\end{equation}
where $a_{\rm e}$\ is the effective radius along the {\it semi-major axis}, and
the shape index $n$ determines the steepness of the outer density profile
(higher values are shallower: e.g., an exponential disk profile has $n=1$
and the de Vaucouleurs law for ellipticals has $n=4$),
while $b_n$ is a numerical function of $n$
[Equation~(\ref{eqn:bn})].
We use this $\Sigma(x)$ simplification in practice when deriving $j_{\rm p}$\ from
a detailed $v_{\rm rot,p}(x)$\ profile in expression (\ref{eqn:JMp}).
We also generally base our $\Sigma(x)$ profiles on observations of stellar
surface brightness profiles $I(x)$, assuming for simplicity
that there are no variations of stellar mass-to-light ratio with radius
(e.g., due to dust).
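For reference, $b_n$ can be obtained numerically from its defining condition $\gamma(2n,b_n)=\Gamma(2n)/2$, which ensures that $a_{\rm e}$\ encloses half of the projected light. The sketch below is ours and need not match the exact form of Equation~(\ref{eqn:bn}) in the appendix; it also compares against the widely used asymptotic approximation $b_n\approx 2n-1/3+4/(405n)$:

```python
import math

def reg_lower_gamma(s, x):
    """Regularized lower incomplete gamma P(s, x), via the standard
    power series (adequate for the moderate arguments used here)."""
    term = 1.0 / s
    total = term
    k = 1
    while True:
        term *= x / (s + k)
        total += term
        if term < 1e-14 * total:
            break
        k += 1
    return total * x ** s * math.exp(-x) / math.gamma(s)

def sersic_bn(n):
    """Solve P(2n, b_n) = 1/2 by bisection, so that the radius a_e
    encloses half of the projected Sersic light profile."""
    lo, hi = 1e-6, 4.0 * n + 10.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if reg_lower_gamma(2 * n, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for n in (1, 2, 4, 8):
    exact = sersic_bn(n)
    approx = 2 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)  # asymptotic form
    print(n, round(exact, 4), round(approx, 4))
```

For the two profiles used most often here, $b_1\simeq1.678$ and $b_4\simeq7.669$.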
Our second method
is a quick-and-dirty shortcut for estimating $j_{\rm t}$,
as needed to generate an initial overview of the trends for a large
sample of galaxies.
We simply calculate the following linear scalar expression
[derived in Appendix~\ref{sec:form} from Equation~(\ref{eqn:JMp})]:
\begin{equation}\label{eqn:jCK0}
\tilde{j_{\rm p}} = k_n \, v_s \, a_{\rm e} ,
\end{equation}
where $\tilde{j_{\rm p}}$\ means an approximation for $j_{\rm p}$,
$v_s$ is the {\it observed} rotation velocity at some arbitrary measurement
location $x_s$, and $k_n\sim$~1--5 is a numerical coefficient that depends on the
S\'ersic index $n$ of the galaxy
[see Equation~(\ref{eqn:kn})].
As in Equation~(\ref{eqn:proxy}), $\tilde{j_{\rm p}}$\ is multiplied by $C_i$ to provide an approximate $\tilde{j_{\rm t}}$.
Here the basic idea is that a galaxy can be represented by a characteristic
observed rotation velocity scale $v_s$, a length scale $a_{\rm e}$,
and a factor $k_n$ that relates to the moment of inertia (discussed further below).
The heuristic approximation that we make here is to
select $v_s$ at $x_s\sim$~2~$a_{\rm e}$\ for
all galaxies. We will show in the next section that this choice
allows us to estimate $j_{\rm p}$\ with an accuracy of $\sim \pm$~0.1~dex,
which is good enough to start making some interesting inferences about
trends in $j_{\rm t}$.
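To make the role of $k_n$ concrete, consider the idealized case (our sketch, not the calibrated Equation~(\ref{eqn:kn})) of a rotation velocity that is constant with radius. Equation~(\ref{eqn:JMp}) then evaluates analytically to $j_{\rm p}=v\,a_{\rm e}\,\Gamma(3n)/[\Gamma(2n)\,b_n^n]$, so the coefficient in front of $v\,a_{\rm e}$\ plays the role of $k_n$; realistic rotation profiles shift these values somewhat. The measurement values below are hypothetical:

```python
import math

def k_constant_rotation(n, b_n):
    """Idealized k_n for a rotation velocity that is constant with
    radius: the j_p integral ratio reduces analytically to
      j_p = v * a_e * Gamma(3n) / (Gamma(2n) * b_n**n).
    """
    return math.gamma(3 * n) / (math.gamma(2 * n) * b_n ** n)

k1 = k_constant_rotation(1, 1.6783)   # exponential disk
k4 = k_constant_rotation(4, 7.6692)   # de Vaucouleurs spheroid

# Hypothetical measurement: v_s = 150 km/s at x_s ~ 2 a_e, a_e = 4 kpc
v_s, a_e = 150.0, 4.0
print(k1, k4, k4 / k1)   # ~1.19, ~2.29, ratio ~1.9
print(k4 * v_s * a_e)    # approximate j_p in kpc km/s
```

Even in this idealization, $k_4/k_1\approx1.9$, anticipating the factor of $\sim$~2 difference in rotation velocities discussed below for early versus late types at fixed $j_{\rm p}$\ and size.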
For $n=4$ spheroids, the expression equivalent to
Equation~(\ref{eqn:F83eq1}) for spirals is:
\begin{equation}\label{eqn:eqn4}
\tilde{j_{\rm t}} = 3.03 \, v_s \, R_{\rm e} ,
\end{equation}
for a median, unknown inclination [Equation~(\ref{eqn:j1})].
An important concept with the more general expression (\ref{eqn:jCK0})
is that $k_n$ increases strongly with $n$;
for fixed galaxy size and rotation velocity, a
more extended luminosity profile implies a higher $j_{\rm p}$\
owing to the large fraction of mass residing at large radii.
This also means that a spheroidal ($n\sim4$) galaxy
with the {\it same} observed rotation $v_s$ and size $a_{\rm e}$\ as a spiral
has a {\it larger} specific angular momentum.
Late-type and early-type galaxies near the $L^*$ characteristic luminosity
{\it do} have similar sizes for the same stellar mass
(e.g., \citealt{2003MNRAS.343..978S}).
Therefore we can already make the basic prediction that if
$j_{\rm p}$\ at a fixed mass is independent of morphology,
then the early-types should have $v_s$ values relative to late-types of
$\sim k_1/k_4$, i.e., lower by a factor of $\sim$~2.
The $j_\star$\ formalism that we have outlined here represents a modest extension of the simpler
methods in F83. The improvements introduced here
include allowance for a range of luminosity profiles (not only $n=1$ and $n=4$),
and better treatment of elliptical galaxies where rotation at large radii is
critically important.
It also becomes more straightforward to understand the interplay between
observations and uncertainties in the $j_\star$\ estimates, as explored in the
next section.
\section{Observations: analysis methods}\label{sec:examp}
Before we move on to $j_\star$--$M_\star$\ analyses of a large sample of galaxies,
we examine a small sample in more detail.
The goals here are to illustrate the nature of the available data,
to demonstrate that the simplified Equations~(\ref{eqn:F83eq1}) and (\ref{eqn:jCK0})
are good approximations to a full treatment with Equation~(\ref{eqn:JMp}),
and to understand some systematic effects in the $j_\star$\ and $M_\star$\ determinations.
Because this paper is concerned with the angular momentum bound up in the stellar
components of galaxies,
the preferred kinematic tracer comes from integrated-light absorption-line spectroscopy.
In many cases, such data do not extend to large enough radii,
so we make use of additional tracers as proxies for the field stars:
cold and warm gas, planetary nebulae (PNe), and metal-rich globular clusters (GCs).
We consider disk- and bulge-dominated galaxies in Sections~\ref{sec:disk}
and \ref{sec:bulge}, respectively.
We evaluate our simplified $\tilde{j_{\rm p}}$\ estimate (\ref{eqn:jCK0}) in Section~\ref{sec:simp},
describe our mass estimates in Section~\ref{sec:mass}, and then consider
systematic uncertainties in Section~\ref{sec:sys}.
\subsection{Disk-dominated galaxies}\label{sec:disk}
\vspace{0.2cm}
\noindent
The most straightforward galaxies for estimating angular momentum are the
gas-rich spirals, since the stellar rotation profile, which cannot
always be measured directly, follows the gas rotation profile to a good approximation.
Also, the observed rotation can easily be
corrected for projection effects in order to recover the intrinsic value
(see Appendix~\ref{sec:thin}).
The detailed analysis below is overkill for these galaxies, whose $j_{\rm t}$\ can be
readily estimated through Equation~(\ref{eqn:F83eq1}), but we wish to illustrate
how our more general treatment works for them, before moving on to the spheroids.
We consider two real galaxies:
NGC~3054 and NGC~3200, which are well-studied disk-dominated spirals from
the classic optical rotation curve analyses of \citet{1982ApJ...261..439R}.
These cases are chosen to bracket the typical range of inner
rotation profile shapes for spirals (slowly and rapidly rising, respectively).
We take the long-slit major-axis ionized-gas kinematics data from \citet{2004A&A...424..447P},
shown in Figure~\ref{fig:spirals} after a modest amount of re-binning.
These rotation profiles have
high-frequency bumps and wiggles that
are presumably caused by local perturbations such as spiral arms.
Fortunately, these features tend to average out when calculating a cumulative $j$
and are not important in this context.
To calculate the projected specific angular momentum $j_{\rm p}$,
we carry out a piecewise integration of Equation~(\ref{eqn:JMp}),
using the major-axis rotation-velocity data $v_{\rm rot,p}(x)$\ up to $\sim$~2~$a_{\rm e}$,
along with simple power-law extrapolations at larger radii,
as shown in Figure~\ref{fig:spirals}.
For $\Sigma(x)$, we use an exponential model
[$n=1$ in Equation~(\ref{eqn:sersic})], with the disk scale-lengths $R_{\rm d}$
taken from $r$-band photometry as we will discuss in the next section.
Note that $a_{\rm e} = 1.68 R_{\rm d}$ for a pure exponential disk.
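The piecewise procedure can be sketched in a few lines of code (our illustration with a mock flat rotation curve, not the NGC~3054/NGC~3200 data): tabulated velocities are used inside some $x_{\rm max}$, a power-law extrapolation outside, and an exponential (or general S\'ersic) surface density supplies the weighting. A useful sanity check is that a flat rotation curve must recover $j_{\rm p}=2\,v\,R_{\rm d}\approx1.19\,v\,a_{\rm e}$, which is Equation~(\ref{eqn:F83eq1}) with the observed rotation velocity in place of $v_{\rm c}$:

```python
import math

def j_p_piecewise(v_data, x_max, alpha, n=1, b_n=1.6783,
                  x_out=30.0, dx=1e-3):
    """Evaluate the j_p integral piecewise: tabulated rotation
    v_data(x) inside x_max, a power law v ~ x**alpha outside,
    with a Sersic-n surface density (exponential disk by default).
    Radii are in units of a_e; the result is in units of v * a_e.
    Illustrative sketch only.
    """
    v_edge = v_data(x_max)
    num = den = 0.0
    x = 0.5 * dx
    while x < x_out:
        sigma = math.exp(-b_n * x ** (1.0 / n))
        v = v_data(x) if x <= x_max else v_edge * (x / x_max) ** alpha
        num += v * sigma * x * x * dx
        den += sigma * x * dx
        x += dx
    return num / den

# Sanity check: flat mock data out to 2 a_e, flat (alpha = 0) extrapolation
j_flat = j_p_piecewise(lambda x: 200.0, 2.0, 0.0)
print(j_flat)  # ~238, i.e. ~1.19 * 200 in units of a_e
```

For the real galaxies, `v_data` would be an interpolation of the binned kinematic measurements, and `alpha` a fitted outer slope with its uncertainty propagated to the shaded bands in Figure~\ref{fig:JMcum1}.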
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f4.ps}
}
\caption{Observed rotation-velocity profiles of two spiral galaxies (NGC~3054 and NGC~3200)
vs.\ semi-major axis radius
(renormalized by the effective radius).
Each galaxy is labeled with its Hubble type.
The data are ionized gas velocities from \citet{2004A&A...424..447P}.
The solid curves with shaded regions show power-law fits (with uncertainties)
used to extrapolate the rotation velocity to larger radii.
See main text and Appendices~\ref{sec:form}~and~\ref{sec:obsapp} for further details.
Dotted horizontal lines show the characteristic rotation velocity $v_s$ for each galaxy;
the approximate intersection with the corresponding rotation-velocity profile is
marked with a \$\ symbol and defines the radius $x_s$
(see Section~\ref{sec:simp}).
\vskip 0.1cm
\label{fig:spirals}
}
\end{figure}
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f5.ps}
}
\caption{The cumulative projected specific angular momentum, $j_{\rm p}(<x)$,
of several nearby galaxies as a function of semi-major axis radius
(with log axes), based on modeling of kinematic observations.
Solid curves show the best-fit models, with shaded regions illustrating the
uncertainties (including those due to extrapolations at large radii).
See Table~B\ref{tab:gal} for the distances and $a_{\rm e}$\ values adopted.
For most of the galaxies, $j_{\rm p}$\ has nearly reached its asymptotic value
by $x \sim$~5~$a_{\rm e}$.
\label{fig:JMcum1}
}
\end{figure}
The resulting cumulative $j_{\rm p}(\leq x)$ profiles with radius for these galaxies are shown in
Figure~\ref{fig:JMcum1}. Here it would be trivial to convert $j_{\rm p}(\leq x)$ immediately to
$j_{\rm t}(\leq R)$ using the known inclinations of these galaxies, but our general strategy
is to focus first on the direct modeling of the observations for all galaxies,
and later apply the deprojection factors $C_i$, which involve different systematics.
It can be seen that $j_{\rm p}$\ hardly changes outside $\sim$~3~$a_{\rm e}$,
and that the large-radius extrapolations
make very little difference:
the regions outside $\sim$~2--2.5~$a_{\rm e}$\ ($\sim$~3--4~$R_{\rm d}$)
contain only $\sim$~8\%--15\%
of the total luminosity, and contribute only $\sim$~15\%--25\% of the total $j_{\rm p}$\
(half of $j_{\rm p}$\ is enclosed within $\sim$~1.2~$a_{\rm e}$~$\sim$~$2R_{\rm d}$;
Figure~\ref{fig:fR}).
Given reasonable extrapolations of the data, the total $j_{\rm p}$\ for these
two galaxies, using our basic modeling assumptions, is constrained to $\sim$~5\%
($\sim$~0.02~dex).
Thus the kinematics is not a major source of uncertainty for $j_{\rm t}$\ estimation
in disk-dominated galaxies.
Additional complications that we have not considered here are
deviations of the disk surface density profile from
a simple constant mass-to-light ratio exponential model,
and inclusion of a bulge (to be discussed later).
We will examine more general systematic uncertainties
in Section~\ref{sec:sys}.
\subsection{Bulge-dominated galaxies}\label{sec:bulge}
We now turn to the novel component of this paper, which is the careful treatment
of $j_{\rm t}$\ in early-type, bulge-dominated galaxies.
Figure~\ref{fig:fR} demonstrated that traditional observations within 1~$a_{\rm e}$\
provide little assurance about the total angular momentum content of these
systems, while even current cutting-edge observations out to
$\sim$~5~$a_{\rm e}$\ might in principle not be adequate.
\begin{figure*}
\centering{
\includegraphics[width=5in]{f6.ps}
}
\caption{Rotation-velocity profiles for eight early-type galaxies.
See Figure~\ref{fig:spirals} for further details, including
an explanation of the shaded uncertainty regions.
For ease of inter-comparisons, the vertical axis
of each panel has been scaled according to the velocity
dispersion of the galaxy at 2~$a_{\rm e}$, which is marked in each panel by a $*$ symbol.
Note the dashed lines at zero rotation velocity in some cases.
The galaxies show a diversity of rotation-velocity trends with radius.
\label{fig:etg}
}
\end{figure*}
Here we analyze a sample of eight real galaxies in detail in
order to characterize the accuracy of $j_{\rm t}$\ estimations.
Seven of these galaxies were chosen because of the availability of
high-quality extended kinematic data
using integrated stellar light spectroscopy
from two recent papers \citep{2009MNRAS.394.1249C,2009MNRAS.398...91P}.
Both papers represent the first installments of systematic surveys of
early-type galaxies in the local universe, and there is no obvious selection bias for
the seven galaxies.
Five of them are ``ordinary'' near-$L^*$ early-types
with central ``fast-rotator''
kinematics as is typical for such galaxies \citep{1996ApJ...464L.119K,2011MNRAS.414..888E}.
The other two (NGC~1407 and NGC~4374 $=$ M84) are examples of round, bright ``slow rotators''
that are common in high-density environments \citep{2011MNRAS.416.1680C}.
Five of these galaxies also have PN or GC kinematics data available
\citep{2009MNRAS.394.1249C,2009AJ....137.4956R},
which we incorporate into our analysis
in order to extend the range of galactocentric radii probed.
We include an eighth galaxy in our sample, NGC~5128 (Cen~A), because it has the most
extended (PN) kinematics data of any early-type galaxy in the literature
\citep{2004ApJ...602..685P}. It may also be the remnant of a recent
major merger (e.g., \citealt{2006MNRAS.370.1737B}),
which as discussed in Section~\ref{sec:intro} is expected to
generally transfer angular momentum into the outer regions.
Analysis of this galaxy thus provides a golden opportunity to search for
the ``missing'' angular momentum, and to see if any clear $j_{\rm t}$\ difference
emerges with respect to the other galaxies in the sample.
The use of PNe and GCs to provide proxies for stellar kinematics
may seem risky, given the considerable uncertainties that remain about the
parent stellar populations of these tracers.
However, in most galaxies studied to date,
both the density and kinematical profiles of PN and metal-rich GC
systems have been found to correspond well
to those of the full stellar population in the regions of overlap
(e.g., \citealt{2009MNRAS.394.1249C,2010A&A...518A..44M,2011MNRAS.415.1244D,2012A&A...539A..11M,Cortesi12,Pota12}).
We have also verified that this is generally the case for the galaxies in our sample.
Further details of the observations as well as of the kinematical modeling are
provided in Appendix~\ref{sec:obsapp}, along with the resulting rotation and angular momentum
profiles. It should be emphasized that the careful, homogeneous construction of these
profiles is laborious, which is why the current sample of galaxies that we consider
in detail is relatively small.
The rotation-velocity profiles of these eight galaxies are summarized
in Figure~\ref{fig:etg}.
Unlike the spirals (Figure~\ref{fig:spirals}), the early-types show great diversity
in the characteristic shapes of their profiles.
Some are fairly constant with radius,
others plummet rapidly from a central high value, and
one continues increasing to the limits of the data.
This diversity is {\it not} simply a matter of inclination, as can be seen
by the divergent cases of NGC~821 and NGC~2768, which are both highly flattened
and probably close to edge-on.
We thus find that the central rotation properties of early-type galaxies cannot be used
to reliably estimate the total angular momentum content, and there is probably no
simple function that universally characterizes their full rotation-velocity profiles.
As with the spirals, we fit power laws to the outer
regions of the rotation data in order to extrapolate to larger radii
(see Appendix~\ref{sec:obsapp} for further details).
We then use Equation~(\ref{eqn:JMp}) to calculate profiles of cumulative
$j_{\rm p}$\ with radius, which we plot in Figure~\ref{fig:JMcum1}.
Even though the data do not reach the total asymptotic value for $j_{\rm p}$, the requirement
of a smooth power-law extrapolation for the rotation-velocity profile does in most cases
strongly limit the total $j_{\rm p}$, which is typically determined at the $\pm$~15\% level
($\pm$~0.06~dex).
The radius enclosing half of the total $j_{\rm p}$\ varies from galaxy to galaxy
depending on the shape of its rotation-velocity profile: 0.7--3~$a_{\rm e}$\
(for the two spirals, it is 1~$a_{\rm e}$).
The exceptions to these findings are the
two bright, round ellipticals NGC~1407 and NGC~4374.
Figure~\ref{fig:JMcum1} shows that
much of the angular momentum in these galaxies is found
at very large radii (half of $j_{\rm p}$\ within 9~$a_{\rm e}$\ and 4~$a_{\rm e}$, respectively),
as expected from their fairly high S\'ersic indices of $n\sim$~4--8
(the ordinary early-types have $n\sim$~2--4).
However, beyond the usual uncertainties introduced by extrapolating the rotation velocity,
there are a couple of other practical considerations.
One issue is that although these particular galaxies have relatively well studied
surface brightness profiles, many such massive ellipticals do not,
with their $n$ and $a_{\rm e}$\ values poorly known. This situation produces
``double jeopardy'' for angular momentum estimation, since both the luminosity and
the rotation-velocity profiles at very large radii are important yet poorly constrained.
The other issue demonstrated with NGC~4374 is that its cumulative $j_{\rm p}$\ has
not yet converged at the (estimated total) virial radius of
$\sim$~35~$a_{\rm e}$, so it is not clear how its angular momentum should even be defined.
This class of high-$n$ galaxies is clearly problematic, and we will consider
any $j_{\rm t}$\ results on them to be tentative for now.
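The cumulative calculation just described is straightforward to sketch numerically. The short example below assumes Equation~(\ref{eqn:JMp}) takes the standard projected form $j_{\rm p}(<R)=\int_0^R \Sigma\,v_{\rm rot}\,x^2\,dx \,/ \int_0^R \Sigma\,x\,dx$, and uses a toy S\'ersic surface-density profile with a power-law rotation-velocity extrapolation; the profile parameters are purely illustrative, not fits to any of the sample galaxies.

```python
import numpy as np

def sigma_sersic(x, n=3.0, a_e=1.0):
    """Toy Sersic surface-density profile (arbitrary normalization),
    using the standard asymptotic approximation for b_n."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return np.exp(-b_n * ((x / a_e) ** (1.0 / n) - 1.0))

def v_rot(x, v0=150.0, x_out=2.0, slope=-0.2):
    """Toy rotation-velocity profile: flat inside x_out, then a
    power-law extrapolation v ~ x**slope (slope is illustrative)."""
    return np.where(x < x_out, v0, v0 * (x / x_out) ** slope)

def jp_cumulative(R, n=3.0, a_e=1.0, npts=5000):
    """Cumulative projected specific angular momentum j_p(<R),
    assuming j_p = int(Sigma v x^2 dx) / int(Sigma x dx)."""
    x = np.linspace(1e-4, R, npts)
    w = sigma_sersic(x, n, a_e)
    return np.trapz(w * v_rot(x) * x**2, x) / np.trapz(w * x, x)
```

Because the local specific angular momentum $v(x)\,x$ rises with radius even for a gently declining rotation curve, $j_{\rm p}(<R)$ increases monotonically and converges only as the S\'ersic profile truncates the outer weight.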
Figure~\ref{fig:JMcum1} also reveals a first glimpse of the basic result of this paper.
For most of the early-types in the sample, there is relatively little angular momentum
hidden beyond $\sim$~1--2~$a_{\rm e}$, and their total values of $j_{\rm p}$\ are lower than those of
the spirals. We will make more detailed comparisons later in this paper.
\subsection{Simple $J/M$ approximations}\label{sec:simp}
We now arrive at a question that is critical for the wider survey
of angular momentum in the rest of this paper:
how accurate is the simplified Equation (\ref{eqn:jCK0})?
As a reminder, this $\tilde{j_{\rm p}}$-estimator would replace the detailed calculations
based on Equation~(\ref{eqn:JMp}) that we have carried out in the preceding subsections;
those calculations are time-consuming for a larger sample of galaxies, and
are not even possible for cases without very extended kinematic data.
In Appendix~\ref{sec:form},
we have motivated the construction of Equation~(\ref{eqn:jCK0})
via toy models of galaxies, and calculated the corresponding coefficient $k_n$.
We will now apply this formula to the set of ten real galaxies just discussed
(both late- and early-types), and find an optimum radial location $x_s$ for measuring the
characteristic rotation velocity $v_s$.
For each galaxy, it is straightforward to find the constant value of $v_s$
which when substituted in Equation~(\ref{eqn:JMp}) yields the same $j_{\rm p}$\
as with the full observed rotation-velocity profile.
These results are listed in Table~B\ref{tab:gal} and
shown in Figures~\ref{fig:spirals} and \ref{fig:etg}, where the intersection of
$v_s$ with the rotation-velocity profile determines the characteristic measurement
radius $x_s$. As an example,
for NGC~821, it is clear that $x_s\sim$~2$a_{\rm e}$.
For NGC~4494 on the other hand, a broad range of choices for $x_s$ would work,
owing to its nearly constant rotation-velocity profile.
Considering this issue in more detail, we
calculate $\tilde{j_{\rm p}}$\ using Equation~(\ref{eqn:jCK0})
with an arbitrary choice for $x_s$ (which in turn
determines a guess for $v_s$ from the observed rotation velocity at this radius).
The results for $x_s/a_{\rm e} = (1,2,3)$
are shown in Figure~\ref{fig:guess}, plotted against $j_{\rm p}$\ calculated in full
from Equation~(\ref{eqn:JMp}).
It can be seen that $x_s/a_{\rm e}=2$ provides a reasonably good match between
$\tilde{j_{\rm p}}$\ and $j_{\rm p}$\ for all of the galaxies in this sample.
The other radius choices fare worse, owing to
galaxies like NGC~821 that have rotation-velocity profiles
with a distinct transition between the inner and outer regions near 2~$a_{\rm e}$,
so that a $v_s$ measurement elsewhere would be biased.
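In code, the estimator amounts to a one-liner once the coefficient $k_n$ is in hand. The sketch below assumes the form $\tilde{j}_{\rm p}=k_n\,v_s\,a_{\rm e}$; the polynomial for $k_n$ is an approximation of the kind derived in Appendix~\ref{sec:form}, with illustrative coefficients (the exact values should be taken from the appendix).

```python
def jp_tilde(v_s, a_e, n):
    """Simple estimator of the form j~_p = k_n * v_s * a_e.
    The k_n polynomial approximates the Sersic-dependent coefficient;
    its coefficients here are illustrative, not authoritative."""
    k_n = 1.15 + 0.029 * n + 0.062 * n ** 2
    return k_n * v_s * a_e

# e.g. an n = 4 early-type with v_s = 100 km/s measured near 2 a_e
# and a_e = 5 kpc:
# jp_tilde(100.0, 5.0, 4.0)   # j~_p in km/s kpc
```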
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f7.ps}
}
\caption{Comparison of a simple projected specific angular momentum estimate
[$\tilde{j_{\rm p}}$; Equation~(\ref{eqn:jCK0})] with the more accurate value ($j_{\rm p}$).
Results are shown for ten different galaxies, each with a choice of
three reference radii: $x_s/a_{\rm e}=$~1 (red crosses),
2 (green filled circles), and 3 (purple open circles).
Some of the points are given a 0.02~dex horizontal offset for visibility.
The dashed and dotted lines mark the one-to-one relation with a $\pm$~0.1~dex scatter.
The optimal choice here for $x_s$ is $2$~$a_{\rm e}$.
}\label{fig:guess}
\end{figure}
Now to home in more finely on a choice for $x_s$, in Figure~\ref{fig:jrat}
we present the ratio of estimated and ``correct'' $j_{\rm p}$, as a function of
the chosen $x_s$, for each galaxy.
Some of the galaxies permit a broad range of choices for $x_s$, while
others do not. Especially noteworthy again are the galaxies like NGC~821 and NGC~3377
which have sharp drops in their rotation-velocity profiles, so $v_s$ measured
at small radii would overestimate $j_{\rm p}$\ by factors of $\sim$~2--3.
We do not find the strong correlation between $n$ and optimal $x_s$ that would be
expected from the simple models we constructed in Appendix~\ref{sec:simpapp};
the dominant effect on $x_s$ with the real galaxy sample is the scatter in
the shapes of the rotation-velocity profiles.
Future detailed analyses of a larger sample of galaxies may reveal systematic
trends with $n$ that motivate improved $j_{\rm p}$\ estimation methods, but for now
we stick with our simple $\tilde{j_{\rm p}}$\ approach.
Because the real galaxies so far do not show strongly rising outer rotation-velocity profiles,
and if anything the reverse,
$x_s\sim$~2~$a_{\rm e}$\ appears to be a good overall choice for the rotation-velocity
measurement radius. This minimizes
the galaxy-to-galaxy scatter in the $\tilde{j_{\rm p}}$\ approximation ($\sim \pm$~0.1~dex),
and appears to produce little systematic bias ($\la$~0.1~dex).
Such ``errors'' are comparable to the uncertainties from carrying out the
full $j_{\rm p}$\ calculations, and are therefore acceptable
for our purposes in this paper.
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f8.ps}
}
\caption{The logarithmic ratio between
simple estimates of projected specific angular momentum [Equation~(\ref{eqn:jCK0})]
and more accurate values [Equation~(\ref{eqn:JMp})],
vs.\ the rotation-measurement radius $x_s$ in units of the effective radius.
Each point indicates a sample ratio for an individual galaxy,
with error bars indicating the kinematics-driven uncertainties in total $j_{\rm p}$\
from the detailed models.
Results are plotted for ten galaxies: two spirals (orange profiles with open circles),
six ordinary early-types (blue profiles with filled circles),
and two giant ellipticals (red profiles with filled squares).
As in Figure~\ref{fig:guess}, $x_s \sim 2 a_{\rm e}$ provides a good
measurement location, resulting in minimal scatter and bias
for the angular momentum estimates.
\label{fig:jrat}
}
\end{figure}
One caveat here is that this sample of galaxies is still small, and we cannot
yet be sure of the universal validity of our approximation, e.g., for the
larger sample of galaxies that we will study in the remainder of this paper.
However, we will show that there is no apparent systematic bias, i.e.,
the overall scientific conclusions are consistent with the
subset of detailed $j_{\rm p}$\ profiles.
\subsection{Stellar mass estimates}\label{sec:mass}
So far we have focused on estimating $j_\star$, but the other key component
in constructing the $j_\star$--$M_\star$\ diagram is of course the stellar mass $M_\star$.
Assuming that we have a well-determined surface brightness profile $I(x)$
or total luminosity,
we then need to know the stellar mass-to-light ratio $\Upsilon_\star$.
In this paper, we assume for simplicity that $\Upsilon_\star$\ is constant
throughout each galaxy, which also means that its value is not relevant
in our $j_\star$\ calculations.
Estimating $\Upsilon_\star$\ in galaxies is a classic and not fully resolved problem.
One standard approach is to use theoretical models for stellar
populations in combination with observations of the stellar light
(e.g., broad-band colors, or spectroscopic line indices).
Although there are well-known degeneracies between the ages and metallicities
inferred for the stars, fortunately $\Upsilon_\star$\ can be estimated with
more certainty (e.g., \citealt{2009MNRAS.396.1132T}), modulo
the initial mass function (IMF) of the stellar populations.
The IMF affects the overall normalization of $\Upsilon_\star$\
via the mass contributions of late-type dwarf stars or compact stellar remnants,
which are observationally difficult to tally.
If all galaxies have the same IMF, then our analyses of the
{\it relative} differences between galaxies in the $j_\star$--$M_\star$\ plane will be secure.
There are also recent, indirect claims for possible galaxy-to-galaxy IMF variations
(e.g., \citealt{2008MNRAS.385..147D,2010ApJ...709.1195T,2010ApJ...721L...1T,2011ApJ...735L..13V,2012MNRAS.422L..33D,2012arXiv1206.1594F,2012arXiv1206.4311S}).
However, even in this case we do not expect a major impact on our conclusions.
As an example, the recent analysis of \citet{2012Natur.484..485C}
implies that strong IMF variations tend to occur in only the most
massive, and relatively rare, early-type galaxies,
which would have $\log\,(M_\star/M_\odot) \ga 11.3$ in our plots
(based on a standard IMF).
Such galaxies might have masses larger than our estimates by factors of
$\sim$~2, but given the relatively small numbers of such galaxies and the
weak constraints on their $j_\star$\ values, they will have little effect on
our estimated $j_\star$--$M_\star$\ trends.
It is outside the scope of this paper to estimate $\Upsilon_\star$\ for each galaxy in detail.
Instead, we adopt the simplification that all galaxies have $\Upsilon_{\star,K}=1.0$.
The near infrared (NIR) $K$-band
is only mildly affected by internal and foreground extinction, is
thought to be fairly insensitive to variations in stellar populations,
and has uniform photometry available from the 2MASS survey \citep{2006AJ....131.1163S}.
The systematic variation in $\Upsilon_{\star,K}$ across our entire sample of late-
and early-type galaxies is conventionally expected to
be no more than $\sim$~0.1~dex, based on $B-V$ colors \citep{2003ApJS..149..289B},
although there are recent suggestions of variations at the level of $\sim$~0.4 dex
(\citealt{2009MNRAS.400.1181Z,2011ApJ...739L..47B}).
Our adopted value of $\Upsilon_{\star,K}=1.0$ is motivated by dynamical estimates
in both spirals and lenticulars from
\citet[figure~9]{2009MNRAS.400.1665W}, and corresponds to an IMF midway
between \citet{2001MNRAS.322..231K} and \citet{1955ApJ...121..161S}.
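The conversion from an aperture-corrected apparent magnitude to $M_\star$\ is then a standard calculation. A minimal sketch, assuming a solar $K$-band absolute magnitude of $M_{K,\odot}\simeq 3.28$ (an adopted value, not specified in the text):

```python
import numpy as np

M_K_SUN = 3.28  # adopted K-band absolute magnitude of the Sun (assumption)

def stellar_mass(m_K, distance_Mpc, upsilon_K=1.0):
    """Stellar mass (in solar masses) from an aperture-corrected
    apparent K magnitude, assuming a constant mass-to-light ratio
    Upsilon_*,K (default 1.0, as adopted in the text)."""
    # distance modulus: 5 log10(D / 10 pc)
    M_K = m_K - 5.0 * np.log10(distance_Mpc * 1.0e6 / 10.0)
    L_K = 10.0 ** (-0.4 * (M_K - M_K_SUN))  # solar K-band luminosities
    return upsilon_K * L_K
```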
Our calculations of $M_\star$\ also require estimates of total luminosity, $L_K$.
However, we do {\it not}
simply adopt the total magnitudes provided by the 2MASS archive.
These values are not reliable for early-type galaxies
(e.g., \citealt{2007MNRAS.381.1463N,2009ApJ...702..955D,2009MNRAS.400.1665W,2011arXiv1107.1728S}),
particularly the variety with extended high-$n$ envelopes,
where the 2MASS values could be too faint by as much as 1 mag.
Instead, we construct our own ``aperture corrections''.
We adopt the 2MASS magnitudes within the 20$^{\rm th}$-mag isophote, $K_{20}$,
and use the best available optical photometry for each galaxy along with a
S\'ersic model fit to estimate the fraction of the galaxy light residing
beyond $K_{20}$.
This procedure neglects any bandpass-dependence in the light profiles $I(x)$,
which are often more radially extended in bluer bands (e.g.,
\citealt{1961ApJS....5..233D,1990AJ....100.1091P,2011MNRAS.416.1983R}).
Such differences imply $\Upsilon_\star$\ variations with radius \citep{2011MNRAS.418.1557T},
which is a reminder of the limitations of our constant-$\Upsilon_\star$\ approximation.
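The aperture-correction step described above can be sketched with the standard S\'ersic curve of growth, $L(<R)/L_{\rm tot}=\gamma\!\left(2n,\,b_n(R/a_{\rm e})^{1/n}\right)/\Gamma(2n)$, using the usual asymptotic approximation for $b_n$. This is a generic sketch of the idea, not the exact fitting procedure used for the sample.

```python
import numpy as np
from math import gamma

def reg_lower_gamma(a, x, npts=20000):
    """Regularized lower incomplete gamma P(a, x) via simple quadrature
    (scipy.special.gammainc would serve equally well)."""
    t = np.linspace(1e-9, x, npts)
    return np.trapz(t ** (a - 1.0) * np.exp(-t), t) / gamma(a)

def sersic_light_fraction(R, a_e, n):
    """Fraction of total Sersic-model light projected within radius R:
    L(<R)/L_tot = P(2n, b_n (R/a_e)^(1/n)),
    with the usual asymptotic approximation for b_n."""
    b_n = 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n)
    return reg_lower_gamma(2.0 * n, b_n * (R / a_e) ** (1.0 / n))

# light missed beyond an isophotal radius R_iso:
# missing_fraction = 1.0 - sersic_light_fraction(R_iso, a_e, n)
```

By construction, half the light falls within $a_{\rm e}$; for high-$n$ profiles the fraction beyond a given isophotal radius can be substantial, which is exactly why 2MASS total magnitudes underestimate the luminosity of extended envelopes.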
Given our reliance on optical profiles $I(x)$ to derive $\Sigma(x)$ and estimate $j_{\rm p}$,
as in Equation~(\ref{eqn:JMp}), for consistency we do need to use
the optical data to extrapolate the $K$-band photometry in estimating $M_\star$.
However, the scale-lengths $a_{\rm e}$\ of the stellar {\it mass} distributions
are probably smaller on average
than the $a_{\rm e}$\ values that we use based on optical {\it luminosity} distributions,
leading us to overestimate both $j_{\rm p}$\ and $M_\star$. Improvement on this point
could be made in the future by analysis of deep $I(x)$ data at NIR wavelengths.
NIR spectroscopy would then also be needed for full consistency of both
$j_{\rm p}$\ and $M_\star$\ estimates (e.g.,
\citealt{2003AJ....125.2809S,2008ApJ...674..194S,2011MNRAS.412.2017V}).
\subsection{The $j_\star$--$M_\star$\ diagram}\label{sec:sys}
Here we focus on the $j_\star$--$M_\star$\ plane, our ultimate destination in this paper,
but for now considering the projected specific angular momentum $j_{\rm p}$\
rather than the true $j_{\rm t}$\ in
order to isolate various effects that are disjoint from inclination uncertainties.
Figure~\ref{fig:JMM1} shows our detailed galaxy sample where cumulative
$j_{\rm p}(<R) = J_{\rm p}(<R)/M_\star(<R)$ is plotted
not as a function of radius (as in Figure~\ref{fig:JMcum1})
but of {\it enclosed projected stellar mass}, $M_\star$.
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f9.ps}
}
\caption{The
cumulative projected specific angular momentum of nearby galaxies
(as in Figure~\ref{fig:JMcum1}), now plotted
vs.\ cumulative projected stellar mass.
The curves are {\it solid} where constrained by the data,
and {\it dotted} for extrapolations.
Circles show intervals of 1~$a_{\rm e}$, up to 4~$a_{\rm e}$.
Error bars at the end of the NGC~3054 curve
illustrate the effects of systematic uncertainties (see text for details):
diagonal for the distance, vertical for scale-length, and horizontal for $\Upsilon_\star$.
Diagonal dashed lines show tracks of $j_{\rm p} \propto M_\star^{2/3}$,
which represent constant halo spin.
\label{fig:JMM1}
}
\end{figure}
For reference, we show dashed lines corresponding to
$j_{\rm p} \propto M_\star^{\alpha}$, with $\alpha=2/3$.
This value for $\alpha$ is motivated by previous observations
(Section~\ref{sec:intro}), and by theoretical predictions for $j_{\rm t}$-$M_\star$,
given
constant values of an initial halo spin parameter $\lambda$, as we will see
in Section~\ref{sec:theory1}.
We are most concerned with the locations of galaxies relative to these tracks,
and with any systematic effects that could shift the data in a direction
{\it perpendicular} to them.
The shaded regions of the curves in Figure~\ref{fig:JMM1}
indicate the uncertainties due to the kinematic data,
including the extrapolations to large radii. For most of the galaxies,
the asymptotic position in the $j_{\rm p}$-$M_\star$\ diagram is relatively well determined.
The main exceptions are NGC~1407 and NGC~4374,
which as discussed before are extended giant ellipticals
whose total $j_{\rm p}$\ is very difficult to determine.
The early-type galaxy NGC~2768 is also a concern even though the formal $j_{\rm p}$\ uncertainties are small,
since there are large contributions to the total $j_{\rm p}$\ estimate from the region
of extrapolation.
An offset in total $j_{\rm p}$\ between the late-types and most of the early-types as in
Figure~\ref{fig:JMcum1} is also apparent in Figure~\ref{fig:JMM1}.
However, the mass dimension brings the relative positions into sharper focus.
For example, NGC~4374 and NGC~5128 have similar $j_{\rm p}$\ values to NGC~3054, but also have
larger stellar masses, which means that their
inferred halo spins will be
{\it lower} (considering distances perpendicular to the dashed tracks).
We next consider some systematic uncertainties that apply even if the
rotation-velocity profiles are perfectly measured.
First, there is a typical distance uncertainty of $\sim$~10\%.
This affects $j_{\rm p}$\ linearly and $M_\star$\ quadratically, moving
the position of the data by a very small amount
nearly parallel to the $\lambda$
tracks (see sample error bars marked for NGC~3054 in the Figure).
Next we consider an uncertainty of $\sim$~30\% ($\sim$~0.11~dex) in the scale lengths $a_{\rm e}$,
which translates into a similar uncertainty in $j_{\rm p}$\ [see Equation~(\ref{eqn:F83eq1})].\footnote{
In practice, the $a_{\rm e}$\ uncertainty is correlated with an
uncertainty in the galaxy luminosity and thus in $M_\star$, but this is a relatively
weak effect.}
Also, in some cases the surface brightness profile is well constrained
and the associated $j_{\rm p}$\ uncertainty is very small (e.g., $\sim$~5\% or $\sim$~0.02~dex
in the case of the $n\sim3$ elliptical NGC~4494).
Finally there is $\Upsilon_\star$, which may be uncertain by a factor of $\sim$~50\%
($\sim$~0.2~dex) and would affect $M_\star$\ by the same amount.
For spiral galaxies, this is probably the limiting factor for inferring
their $\lambda$ values.
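The geometric effect of these systematics can be made explicit. A minimal sketch, computing how a fractional distance error moves a point in the $(\log M_\star,\,\log j_{\rm p})$ plane and its component perpendicular to a slope-2/3 track, which is the quantity relevant for inferred spins:

```python
import numpy as np

def track_offsets(frac_dist_err=0.10, slope=2.0 / 3.0):
    """Shift from a fractional distance error: j_p scales linearly
    with distance D, M_star quadratically (via L ~ D^2).
    Returns (dlog_j, dlog_M, perpendicular offset) in dex, where the
    perpendicular offset is measured from a log-log track of the
    given slope."""
    dlog_j = np.log10(1.0 + frac_dist_err)
    dlog_M = 2.0 * dlog_j
    perp = (dlog_j - slope * dlog_M) / np.hypot(1.0, slope)
    return dlog_j, dlog_M, perp
```

For a 10\% distance error the perpendicular component is only $\sim$~0.01~dex, confirming that distance uncertainties move galaxies nearly parallel to the constant-$\lambda$ tracks.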
For the early-types, the inclination is generally unknown and may be a
significant source of uncertainty for estimating $j_{\rm t}$, even when $j_{\rm p}$\ is well constrained.
We will return to this theme in Section~\ref{sec:less}.
\section{Observations: scaling relations and derivations of $J/M$ for the full sample}\label{sec:data}
Having carried out detailed analyses of $j_\star$\ for a handful of galaxies in the previous section,
we now derive $j_\star$\ for a much larger galaxy sample, using simpler methods.
Besides these derivations, in this section we also examine some basic scaling
relations for galaxies, in order to understand the observational underpinnings of
the $j_\star$--$M_\star$\ results in the next section, and to verify that our results
are consistent with some well-known properties of galaxies.
We also introduce a novel, generalized version of the Tully-Fisher relation
for galaxies of all types.
Those who are keen to get straight to the angular momentum results
may wish to skip to Section~\ref{sec:obsresults}.
In order to populate the observational $j_\star$--$M_\star$\ diagram,
we will use the $\tilde{j_{\rm p}}$\ approximation of Equation~(\ref{eqn:jCK0})
which we have found to be generally accurate at the $\sim$~0.1~dex
($\sim$~25\%) level.
The basic parameters that we then need for all of the galaxies are:
the total stellar mass ($M_\star$) and its
scale-length ($R_{\rm d}$ or $a_{\rm e}$), the S\'ersic index $n$, and
the characteristic rotation velocity $v_s$.
The distances to the galaxies are estimated from redshifts and surface brightness fluctuations.
As discussed in Section~\ref{sec:mass},
$M_\star$\ is derived from aperture-corrected 2MASS magnitudes $m_K$, assuming
$\Upsilon_{\star,K}=1.0$.
The other parameters are derived differently for the late-type and early-type samples,
as we will discuss in Sections~\ref{sec:ltgdata} and \ref{sec:etgdata}, respectively.
Section~\ref{sec:scale} brings the data together in an examination of
basic scaling relations, before proceeding to the final $j_\star$--$M_\star$\ analyses of
Section~\ref{sec:obsres}.
\subsection{Late-types}\label{sec:ltgdata}
Because spiral galaxies are dominated by their disk components, whose
photometric and kinematic properties are relatively straightforward to measure,
past studies of their angular momenta have generally treated them as
pure disks, e.g., using Equation~(\ref{eqn:F83eq1}) to calculate $j_{\rm t}$.
However, this approximation may be inadequate for the spirals with relatively
large bulges (Sa and some Sb), and it is one of the goals of this paper to
consider these components.
With Equation~(\ref{eqn:jCK0}) in mind, we could use values for the parameters
$n$, $a_{\rm e}$, and $v_{\rm s}$ that characterize the composite bulge--disk systems
(e.g., with an overall $n$ somewhat larger than $1$).
However, the required stellar photometry and kinematic data are not available
for a large sample of galaxies.
Instead, we analyze disk and bulge components separately, make some
simple assumptions for the bulges to compensate for the missing data,
and then combine the disks and bulges into global $j_\star$\ analyses.
We focus on the classic spiral galaxy data set assembled
by \citet{1986AJ.....91.1301K,1987AJ.....93..816K,1988AJ.....96..514K},
comprising 64 galaxies from type Sa to Sm, at distances ranging from 1 to 100~Mpc.
These data include $r$-band CCD photometry along with
bulge/disk decompositions, and inclination-corrected gas-disk rotation curves
from both optical emission-lines
(e.g., \citealt{1980ApJ...238..471R,1982ApJ...261..439R,1985ApJ...289...81R})
and HI radio emission (based on various sources in the literature).
Most of Kent's sample comes from the Rubin et~al.\ surveys, which selected for
spiral galaxies with high inclinations,
spanning a wide range of luminosities, scale-lengths, and Hubble types,
and without strong bars.
Despite advances in observational resources in the intervening decades,
we know of no comparable, publicly-available sample that includes both rotation curves and
photometry with detailed bulge/disk decompositions for a wide range of disk-galaxy types.
We estimate the disk and bulge scale-lengths ($R_{\rm d}$ and $a_{\rm e, b}$)
by modeling the nonparametric Kent decompositions
with simple exponential and de Vaucouleurs profiles ($n=1$ and $n=4$,
respectively).
Our models thereby treat all bulges as ``classical'', with $n\sim4$,
neglecting some variations in their detailed properties, such as
the $n \sim$~1--2 indices of ``pseudo'' bulges \citep{2004ARA&A..42..603K}.
The latter bulges tend to be much less massive, and make only minor
contributions to the total $j_\star$\ for spirals, which is insensitive to the
details of the adopted bulge density and rotation profiles.\footnote{More
extensive observations and modeling in the future could
be used to establish the $j_\star$--$M_\star$\ trends for morphologically different bulges,
and thereby provide physically-based information as to whether or not
there are genuinely distinct subtypes.}
For 34 of these sample galaxies (type Sb to Sc), independent decompositions
were carried out on the {\it same data set}
by \citet{1994MNRAS.267..283A}, using parametric fits to the raw surface
brightness profiles.
Our $R_{\rm d}$ values agree with theirs at the $\sim$~10\% level,
while the bulge results are highly variable, both between our
analyses and theirs, and between different model fits by these authors.
Most of these galaxies are very disk dominated ($B/T \la 0.1$),
so it is not surprising that the bulge parameters would be very uncertain.
Fortunately the bulges in such cases turn out to be only very minor
contributors to the total $j_\star$\ of their host galaxies.
Other parameters and their sources are listed in Table~C\ref{tab:spirals}.
For $v_s$ of the stellar disk components of these galaxies, we assume that they rotate
with the same velocities as their gas disks.
We derive $v_{\rm c}$ based on the rotation curves
over the range (2--3)~$R_{\rm d}$, re-projecting this intrinsic value to
the observed $v_s$ according to the inclination
($v_s = v_{\rm c} \sin i$).
The final and most challenging parameter to estimate is
the characteristic rotation velocity $v_s$ for the bulges.
Direct estimates of bulge rotation-velocity profiles over a large
range in radius require extensive spectroscopic data combined with careful bulge--disk
kinematic decomposition.
As far as we know, this has only been done for {\it one} spiral galaxy to date
\citep{2012ApJ...752..147D}.
Thus we are much worse off with estimating $j_\star$\ for spiral bulges than for
early-type galaxies, and must make even stronger simplifying assumptions than in the
original F83 analysis of ellipticals. Fortunately, because the spirals are disk-dominated,
we will find that their total $j_\star$\ estimates
are only mildly sensitive to the assumptions about bulge kinematics.
Our strategy for the bulge $v_s$ values is to estimate these
indirectly, based on other observables:
the ellipticity $\epsilon\equiv 1-q$
and the central velocity dispersion $\sigma_0$.
These three parameters may be related together through the following model:
\begin{equation}\label{eqn:binney}
v_s = \left(\frac{v}{\sigma}\right)^* \sigma_0 \left(\frac{\epsilon}{1-\epsilon}\right)^{1/2} ,
\end{equation}
where $(v/\sigma)^*$ is a parameter describing the relative dynamical importance of rotation
and pressure.
In an edge-on galaxy, $(v/\sigma)^* \simeq 1$ represents an oblate isotropic system where the
observed ellipticity is supported by rotation, and this model also turns out to work
well at other inclinations
\citep{1982modg.proc..113K}.
The standard lore is that spiral bulges and low-luminosity ellipticals are
near oblate-isotropic, with typical $(v/\sigma)^*\sim$~0.9
\citep{1982ApJ...256..460K,1983ApJ...266...41D,1998gaas.book.....B,2008gady.book.....B}.
However, some concerns about these conclusions were raised early on
\citep{1984ApJ...287...66W,1986ApJ...302..208F}
and modern integral-field analysis of early-types has revealed that their
rotation velocities tend to be significantly {\it lower} than in the oblate isotropic model
\citep{2007MNRAS.379..418C,2011MNRAS.414..888E}.
The rotation of spiral bulges, on the other hand,
has not seen systematic investigation in decades
(some new work has just appeared in \citealt{2012ApJ...754...67F}),
and here we attempt only a quick look at the implications of recent papers that have
reported bulge kinematics for a handful of cases.
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f10.ps}
}
\caption{Relation between bulge rotation velocity and velocity dispersion
as a function of ellipticity.
The points show data for 26 spiral galaxies from the literature,
with symbol shapes and colors corresponding to different Hubble types as in the legend.
The curves show Equation~(\ref{eqn:binney}) with
$(v/\sigma)^* = 1$ and $(v/\sigma)^*=0.7$ for the dotted and solid curves, respectively.
We adopt $(v/\sigma)^*=0.7$ as our default model.
}
\label{fig:vsigb}
\end{figure}
We take results on $(v/\sigma)$ and $\epsilon$ from
\citet{2007MNRAS.381..401L}, \citet{2008MNRAS.389..341M}, and \citet{2009MNRAS.395...28M},
and plot them in Figure~\ref{fig:vsigb}.
We see that the oblate isotropic model is {\it not} a good representation of most of the data,
nor is any other simple value of $(v/\sigma)^*$.
However, in order to have a simplified framework for bulge
rotation, we characterize this data set as having $(v/\sigma)^*=0.7\pm0.4$
(median and 68\% scatter).
We therefore adopt the following procedure for estimating bulge $j_\star$.
We use the observational values for $\epsilon$ and $\sigma_0$, and then estimate
$v_s$ using Equation~(\ref{eqn:binney}) with $(v/\sigma)^*=0.7$ representing
a typical value for bulges.
We test the impact of the latter assumption on the results by also using
$(v/\sigma)^*=0.3$ and $1.1$ to bracket the possible range of average
bulge rotation. We thereby explore the systematic uncertainty in bulge rotation
but not the intrinsic scatter, keeping in mind also that this bulge model
is based on the central regions and does not account for the uncertainties
in extrapolating the rotation to large radii,
as discussed in detail for the early-type galaxies.
The $\epsilon$ values are taken from the Kent derivations.
We take the $\sigma_0$ measurements in most cases from HyperLeda \citep{2003A&A...412...45P},
and also from \citet{1999A&A...342..671C} and \citet{2004A&A...424..447P}.
For some of the later-type galaxies, there are no $\sigma_0$
measurements available, and for these we use an empirical relation
(which we infer from other galaxies in these studies)
that $\sigma_0$ is approximately equal to the gas-disk rotation velocity.
Such cases all have $B/T < 0.15$, so this approximation is not of major importance for
the total $j_\star$\ estimates, but any inferences for these particular bulges
will be relatively uncertain.
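This bulge procedure is compact enough to write down directly. The function below implements Equation~(\ref{eqn:binney}) with the adopted default $(v/\sigma)^*=0.7$ and the bracketing values 0.3 and 1.1 available as an option:

```python
import numpy as np

def bulge_vs(sigma0, epsilon, v_over_sigma_star=0.7):
    """Characteristic bulge rotation velocity from Eq. (binney):
    v_s = (v/sigma)* sigma_0 sqrt(eps / (1 - eps)).
    Pass v_over_sigma_star = 0.3 or 1.1 to bracket the systematic
    uncertainty in the average bulge rotation."""
    return v_over_sigma_star * sigma0 * np.sqrt(epsilon / (1.0 - epsilon))

# e.g. a bulge with sigma_0 = 150 km/s and eps = 0.2:
# bulge_vs(150.0, 0.2)        # default (v/sigma)* = 0.7
# bulge_vs(150.0, 0.2, 1.1)   # upper bracket
```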
We now have enough information to proceed with the specific angular momentum
calculations for the spiral galaxies.
Again, our basic approach is to
estimate separately the bulge and disk angular momenta $j_{\rm b}$ and $j_{\rm d}$.
Given a bulge stellar mass fraction quantified as $f_{\rm b}$, we can then estimate the
total specific angular momentum by:
\begin{equation}\label{eqn:jtot}
j = f_{\rm b} j_{\rm b} + (1-f_{\rm b}) j_{\rm d} .
\end{equation}
In practice, we use the bulge-to-total $r$-band luminosity ratio $B/T$ (from
the series of Kent papers) as a proxy for $f_{\rm b}$.
To calculate the projected values of $j_{\rm b}$ and $j_{\rm d}$, we use
Equation~(\ref{eqn:jCK0}). For the intrinsic values,
we assume that both the bulge and the disk in a given galaxy have the same inclination $i$,
which is estimated from the observed disk ellipticity.
We then use the deprojection factor $C_i$ to convert projected to intrinsic values
[see Equation~(\ref{eqn:proxy})].
For the disk, this is a simple factor of $(\sin i)^{-1}$, and the calculation reduces
to Equation~(\ref{eqn:F83eq1}).
For the bulge, we calculate $C_i$ from Equation~(\ref{eqn:Cform}).
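Combining the components then amounts to a mass-weighted sum with deprojection, following Equations~(\ref{eqn:jtot}) and (\ref{eqn:proxy}). A minimal sketch is given below; the bulge deprojection factor $C_i$ of Equation~(\ref{eqn:Cform}) is left as an input, since its form depends on the appendix model.

```python
import numpy as np

def total_j(f_b, j_bulge, j_disk):
    """Eq. (jtot): mass-weighted total specific angular momentum,
    j = f_b j_b + (1 - f_b) j_d, with B/T used as a proxy for f_b."""
    return f_b * j_bulge + (1.0 - f_b) * j_disk

def deproject_disk(j_p, inclination_deg):
    """Intrinsic disk j from its projected value:
    for a thin disk, C_i = 1 / sin(i)."""
    return j_p / np.sin(np.radians(inclination_deg))
```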
Using these procedures,
we construct a catalog of spiral galaxies with
characteristic masses, scale-lengths, and rotation velocities for both their bulge
and disk components. We report these values in Table~C\ref{tab:spirals}, along with
the total galactic specific angular momenta (bulge and disk combined),
both projected and intrinsic.
When we vary the assumed bulge rotation systematically across the bracketing range,
the total $j_\star$\ is changed by no more than $\sim$~0.03~dex ($\sim$~7\%) for the vast majority
of the galaxies, and up to $\sim$~0.1~dex ($\sim$~25\%) for a few of the Sa--Sab galaxies.
Therefore the details of the bulge modeling are of only very mild importance
to the overall $j_\star$\ results for the spirals.
These data will be used in later sections to examine various scaling relations for
these galaxies and for their subcomponents.
\subsection{Early-types}\label{sec:etgdata}
For the gas-poor early-type galaxies (lenticulars and ellipticals),
the challenge is to assemble a large sample with all of the ingredients
that we need to calculate $j_\star$\ (i.e., $v_s$, $a_{\rm e}$, $n$).
The information is scarcest for $v_s$, and therefore we have scoured
the literature for kinematic data sets extending to radii of at least
$\sim$~2~$a_{\rm e}$, assembling a sample that, although not exhaustive,
is unprecedented in its size and scope.
The sources include integrated-starlight absorption-line spectroscopy,
and velocities of GCs and PNe.
To estimate approximate values for $v_s$,
we simply read off the major-axis rotation velocity at 2~$a_{\rm e}$\
(as explained in Section~\ref{sec:simp}).
We thereby assemble a total sample of 40 early-type galaxies, including the 8 galaxies
that we modeled in detail in Section~\ref{sec:examp}.
Table~C\ref{tab:etg} provides a summary of our sample,
along with the sources of kinematic data.
Given that the data are drawn from a variety of literature sources with
complex selection effects, it is important to check whether or not the sample
is a fair representation of early-types in the nearby universe.
We have done so in Appendix~\ref{sec:obsfull}, using the ATLAS$^{\rm 3D}$\ volume-limited sample
of nearby galaxies as a reference,
and focusing on the
masses $M_\star$\ and central rotation parameters $(v/\sigma)^*$.
We find that the distribution of our sample galaxies in the $(v/\sigma)^*$--$M_\star$\
parameter space is fairly similar to that of an unbiased sample over a similar mass range.
The median galaxy mass in our sample is $\log\,(M_\star/M_\odot)=10.8$, which is near the
characteristic mass $M_\star^*$ of nearby galaxies \citep{2010MNRAS.404.1111G}.
We thus conclude that our observational results should be representative of low-redshift
ordinary early-type galaxies.
The only caveat here is that our sample is biased toward ellipticals at the
expense of lenticulars, which we must take into account later when drawing
conclusions about the overall population of early-type galaxies.
An alternative scheme for classifying early-types is as ``fast rotators''
(including almost all lenticulars) and ``slow rotators'',
based on their central kinematics \citep{2007MNRAS.379..401E}.
The central rotation is known to correlate with many other galaxy properties
\citep{1983ApJ...266...41D,1996ApJ...464L.119K},
and the fast and slow rotators have been interpreted as having
different formation histories.
Therefore it is important that we investigate to what extent
the {\it global} specific angular momentum $j_\star$\
correlates with the central rotation classification.
Our sample includes three slow rotators, which is consistent with the
fraction of such galaxies in the nearby universe \citep{2011MNRAS.414..888E},
and will provide a rough initial idea of any systematic differences between
fast and slow rotators.
Returning to the remaining observational parameters,
for each early-type density profile, we need both the S\'ersic index $n$ and
the corresponding scale-length $a_{\rm e}$\ (which can differ significantly from
the value obtained with a classic $n=4$ fit,
e.g., in the RC3 catalog of \citealt{1991trcb.book.....D}).
Unfortunately there is no comprehensive source available for such measurements,
and we resort to a medley of literature data.
For 34 of the galaxies in our sample, there are published S\'ersic fits,
and we take the $(a_{\rm e}, n)$ values according to the following priority:
detailed photometric analysis in individual galaxy papers
(e.g., \citealt{2009MNRAS.393..329N});
the \citet{2009ApJS..182..216K} tabulation for Virgo galaxies;
\citet{2009ApJS..181..135H,2009ApJS..181..486H,2001MNRAS.326.1517D}.
For the remaining 6 galaxies,
we have as a starting point the RC3 value for the effective radius.
Then we use the well-established observation that there are strong correlations between
early-type galaxy size and luminosity, and the S\'ersic index $n$
(e.g., \citealt{1993MNRAS.265.1013C,1997A&A...321..111P,2003AJ....125.2936G,2003ApJ...594..186B,2009ApJS..182..216K}).
This allows us to estimate a most-probable $n$ value for each galaxy
(see Appendix~\ref{sec:obsfull} for details).
Note that if we were simply to approximate all of the early-types as $n=4$ spheroids,
the $k_n$ values in Equation~(\ref{eqn:jCK0}) would be too high on average by
$\sim$~30\% ($\sim$~0.15~dex, given a median index value of $n\sim2.5$).
This would translate to an equivalent systematic error on $j_\star$.
We could adjust for this effect by adopting $n=2.5$ in all cases, but $n$ also
has a systematic dependence on galaxy mass, and ignoring this fact would produce
a spurious mass-dependent trend in $j_\star$\ of $\sim$~50\% ($\sim$~0.2~dex) over the
full range in mass.
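The size of this systematic is easy to check numerically. The quadratic below is an assumed approximation to the $k_n$ coefficient of Equation~(\ref{eqn:jCK0}), of the form $k_n \simeq 1.15 + 0.029\,n + 0.062\,n^2$; it is illustrative only, not the exact definition:

```python
import math

# Assumed polynomial approximation to the k_n coefficient in
# j_p ~= k_n * v_s * a_e (illustrative; the exact k_n follows from the
# Sersic-profile integration behind Equation (eqn:jCK0)).
def k_n(n):
    return 1.15 + 0.029 * n + 0.062 * n ** 2

# Systematic offset if all early-types were forced to n = 4 while the
# sample's median index is really n ~ 2.5:
offset_dex = math.log10(k_n(4.0) / k_n(2.5))
print(f"k_4 = {k_n(4.0):.2f}, k_2.5 = {k_n(2.5):.2f}, "
      f"offset = {offset_dex:.2f} dex")
```

With these assumed coefficients the offset comes out near 0.15~dex, consistent with the figure quoted above, and the same function gives $k_1/k_4$ close to the value of 0.5 used in Section~\ref{sec:scale}.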
In Table~C\ref{tab:etg}, we compile the observed parameters $v_s$, $a_{\rm e}$, and $n$ for
our full early-type galaxy sample.
We use these to calculate $j_{\rm p}$\ approximately from Equation~(\ref{eqn:jCK0}),
and tabulate these values as well.
For some of the very extended galaxies like NGC~4374,
the total luminosity and angular momentum
(via the factor $k_n$) are integrated out only to the estimated virial radius.
In order to convert projected $j_{\rm p}$\ to intrinsic $j_{\rm t}$\ for analysis in later sections,
we must apply a deprojection factor $C_i$ which depends on the inclination $i$.
Unfortunately,
the individual inclinations are not generally known, but neither are they completely random,
because of an inclination-bias in galaxy classification.
As discussed in Appendix~\ref{sec:spheroid}, we therefore apply median deprojection factors of
$C_{\rm med}=1.21$ ($+0.08$~dex) to the lenticulars,
and $C_{\rm med}=1.65$ ($+0.22$~dex) to the ellipticals.
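A minimal sketch of this statistical deprojection step, using the median factors quoted above (the dictionary keys are our own shorthand for the two morphological classes):

```python
import math

# Median deprojection factors C_med = j_t / j_p adopted in the text;
# "S0" and "E" are shorthand for lenticulars and ellipticals.
C_MED = {"S0": 1.21, "E": 1.65}

def deproject(j_p, gtype):
    """Statistically deproject a projected specific angular momentum."""
    return C_MED[gtype] * j_p

# Equivalent logarithmic offsets quoted in the text:
for gtype, c in C_MED.items():
    print(f"{gtype}: C_med = {c:.2f}  ->  +{math.log10(c):.2f} dex")
```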
Since one of our eventual goals will be to quantify the intrinsic scatter in the observed
$j_\star$--$M_\star$\ relations, it is important to be clear about the error budget
in our analyses. Again, the basic parameters that go into our $j_\star$\ calculations are
$C_i$, $a_{\rm e}$, $n$, and $v_s$. For early-type galaxies with an assumed $n=4$ profile,
the typical uncertainties in $a_{\rm e}$\ are $\sim$~25\% ($\sim$~0.1~dex;
\citealt{2011MNRAS.413..813C}). If we allow for a more general $n$, which
for some galaxies is measured directly and in other cases is derived statistically
(Appendix~\ref{sec:obsfull}), then we estimate a combined uncertainty on $j_\star$\ from
$a_{\rm e}$\ and $n$ of $\sim$~40\% ($\sim$~0.15~dex).
The uncertainty on $v_s$ from our simplified measurement and extrapolation approach is
$\sim$~25\% ($\sim$~0.1~dex; Section~\ref{sec:simp}).
Table~\ref{tab:err} summarizes the uncertainties introduced by a
number of different ingredients in the $j_\star$--$M_\star$\ calculations.
The separate uncertainties for $j_\star$\ and $M_\star$\ are
mapped to the direction
perpendicular to a $j_\star \propto M_\star^{2/3}$
trend, as discussed in Section~\ref{sec:sys}.
This net uncertainty is designated $\Delta \lambda$, owing to the
connection with spin-based theoretical models.
The total uncertainty in $\lambda$ for late-type galaxies is
typically $\sim$~30\% ($\sim$~0.1 dex), and is driven by the
estimate of $M_\star$\ (via $\Upsilon_\star$) rather than $j_\star$.
For the vast majority of the early-types (apart from the special
class of massive, extended ellipticals), the uncertainty is $\sim$~60\%
($\sim$~0.2~dex), and is driven by the four parameters
mentioned above that enter into the $j_\star$\ calculation.
\begin{table}
\begin{center}
\caption{Uncertainty budget}\label{tab:err}
\noindent{\smallskip}\\
\begin{tabular}{l c c c c c c c c}
\hline
Galaxy & \multicolumn{8}{c}{$\Delta \lambda$ (dex)} \\
type & $D$ & $C_i$ & $v_s$ & $\tilde{v}_s$ & $n,a_{\rm e}$ & bulge & $\Upsilon_\star$\ & total\\
\hline
Sb--Sm & 0.01 & 0.01 & 0.02 & 0.03 & 0.05 & 0.03 & 0.07 & 0.09 \\
Sa--Sab & 0.01 & 0.01 & 0.02 & 0.03 & 0.05 & 0.1 & 0.07 & 0.13 \\
S0 & 0.01 & 0.05 & 0.06 & 0.1 & 0.15 & 0 & 0.07 & 0.18 \\
fE & 0.01 & 0.15 & 0.06 & 0.1 & 0.15 & 0 & 0.07 & 0.22 \\
sE & 0.01 & 0.12 & 0.35 & 0.35 & 0.2 & 0 & 0.2 & 0.40 \\
\hline
\hline
\end{tabular}
\\
\tablecomments{
The uncertainties on $j_\star$\ and $M_\star$\ have been converted into equivalent
uncertainties on $\lambda$.
The different galaxy types include
fast- and slow-rotating ellipticals (fE and sE).
The listed sources of potential error are
distance ($D$),
corrections for projection effects including inclination ($C_i$),
the rotation velocity scale calculated in detail ($v_s$),
the alternative approximate rotation velocity scale ($\tilde{v}_s$),
the stellar density profile S\'ersic index ($n$) and
scale radius ($a_{\rm e}$),
the incorporation of bulge contributions,
and the stellar mass-to-light ratio including IMF variations ($\Upsilon_\star$).
}
\end{center}
\end{table}
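The ``total'' column of Table~\ref{tab:err} appears consistent with a quadrature sum of the individual terms. A minimal sketch of that bookkeeping, assuming (as an illustration) that the detailed $v_s$ term rather than the approximate $\tilde{v}_s$ term enters the total:

```python
import math

# Combine independent uncertainties (in dex) in quadrature.
def total_dex(terms):
    return math.sqrt(sum(t ** 2 for t in terms))

# Sb--Sm row of Table tab:err: D, C_i, v_s, (n, a_e), bulge, Upsilon_star.
# Assumes the detailed v_s term (0.02 dex) enters the total, rather than
# the approximate tilde-v_s term (0.03 dex).
sb_sm = [0.01, 0.01, 0.02, 0.05, 0.03, 0.07]
print(f"total = {total_dex(sb_sm):.2f} dex")  # matches the tabulated 0.09
```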
This full $j_\star$--$M_\star$\ dataset is assembled from
a generally unbiased $\sim M_\star^*$ galaxy sample that we can use to
investigate differences in angular momentum not only between early-types and spirals,
but also between ellipticals and lenticulars, and between fast and slow rotators.
\subsection{Size and rotation-velocity scaling relations}\label{sec:scale}
Before considering specific angular momenta
and their correlations in the next section, we examine some trends among the
raw ingredients that go into these analyses, $a_{\rm e}$, $v_s$, and $M_\star$.
Doing so provides a check that our results are consistent with the familiar size--mass
and mass--rotation~velocity (Tully-Fisher)
relations that have been established for nearby galaxies.
We also introduce novel relations involving rotation, and explore some
preliminary indications about angular momentum.
We first consider the standard scaling relation of galaxy size versus mass,
or $a_{\rm e}$ versus $M_\star$\ in our notation, showing the results in Figure~\ref{fig:size},
where we again compare our results to the
volume-limited ATLAS$^{\rm 3D}$\ sample as a baseline check.
We find that in both samples, late- and early-type galaxies have {\it roughly}
the same sizes
at a given mass
(cf.\ \citealt{2003MNRAS.343..978S,2007MNRAS.379..400S}), but
there is a clear systematic trend for the more bulge-dominated galaxies to be more compact
(see also \citealt{2004MNRAS.355.1155D,2009MNRAS.393.1531G,2010MNRAS.402..282M,2011MNRAS.414.2055M,2011MNRAS.416..322D}).
Given the many different assumptions and data sources that went into our
sizes and masses, these parameters match the ATLAS$^{\rm 3D}$\ results remarkably well overall
(with some nuances discussed further in Appendix~\ref{sec:obsfull}).
This suggests that our size and mass data are representative and reliable at the $\sim$~0.1~dex level.
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f11.ps}
}
\caption{Relation between size and stellar mass for our galaxy sample.
The former is the semi-major axis effective radius, and the latter
is based on $K$-band total luminosities with an adopted mass-to-light ratio of
$M_\star/L_K=1$ in solar units.
Different symbols denote different galaxy types as shown in the legend;
for the spirals,
the disk and bulge (``B'') components are shown separately.
The range of the plot is restricted in order to better see the
main trends in the data; the bulge data extend
to radii as small as $a_{\rm e}\sim 0.01$~kpc (note also that the most compact elliptical
shown is NGC~4486B, which is considered a rare, highly-stripped galaxy).
For comparison, diagonal lines show
power-law model fits to the data from the ATLAS$^{\rm 3D}$\ survey
(i.e., {\it independent} from our data set):
lenticulars and fast-rotator ellipticals (dot-dashed),
Sa--Sb spirals (dashed), and Sc--Irr spirals (dotted).
For both data sets, the late-type galaxies are systematically larger than the early-types
at a given stellar mass. The absolute normalizations of the trends are similar
between the ATLAS$^{\rm 3D}$\ sample and ours,
with some small differences as discussed in the text.
} \label{fig:size}
\end{figure}
We can also consider separately the spiral {\it bulges}, plotting their sizes and masses
for our sample
in Figure~\ref{fig:size}.
Although the full range of sizes is not visible in this plot, the bulges
follow a roughly parallel size--mass relation to the elliptical galaxies,
but smaller on average by a factor of $\sim$~4 ($\sim$~0.6~dex)
and with a great deal of scatter
(possibly because of the approximate nature of these size measurements).
Other studies have also found that bulges are more compact than ellipticals
\citep{2008MNRAS.388.1708G,2009MNRAS.393.1531G,2010MNRAS.405.1089L,2011MNRAS.416..322D},
but the quantitative details vary considerably, and we therefore regard
our bulge scaling relations as provisional.
\begin{figure*}
\centering{
\includegraphics[width=6.6in]{f12.eps}
}
\caption{Relations between characteristic rotation velocity $C_i\,v_s$, stellar mass
(left-hand panel), and size (right-hand panel) for our
full galaxy sample, using the same data sources and symbols as in
Figure~\ref{fig:size}.
For the spiral disks, $v_s$ is the outer gas-disk rotation velocity.
For the lenticulars and ellipticals, $v_s$ is the stellar rotation velocity measured along
the semi-major axis at 2~$a_{\rm e}$, except for the points with error bars,
which are the eight cases studied in detail in Section~\ref{sec:examp}, with
$v_s$ derived from full modeling of the rotation-velocity profiles.
For the bulges, $v_s$ is estimated indirectly using flattening and velocity
dispersion observations (Section~\ref{sec:ltgdata}).
In all cases, the rotation velocity has been deprojected for both inclination and ``dilution''
effects, using the factor $C_i$ (see text for details).
In the left-hand panel, the dotted blue line shows a least-squares fit
to the Sb--Sc disks,
a dashed red line shows a proposed inverse trend for a subset of the E/S0s, and
the blue dot-dashed line shows the baryonic Tully-Fisher relation
for late-type galaxies from \citet{2011ApJ...742...16T} for comparison.
In the right-hand panel, the diagonal line shows a prediction for the spiral
disks based on $\Lambda$CDM models
(see Section~\ref{sec:theory2}).
Overall, the spiral and elliptical galaxies follow mass--rotation~velocity and
size--rotation~velocity trends that have remarkably opposite slopes.
The trends for the lenticulars are between the spirals and ellipticals.
} \label{fig:rot}
\end{figure*}
The next scaling relation that we consider is rotation velocity
versus mass. For spiral
galaxies, this is the Tully-Fisher relation, but it has to
our knowledge never been constructed previously for all galaxy types.
We can already generate a broad expectation for what we will find,
given the observed size--mass relations along with the assumption that $j_\star$\ is independent
of galaxy type.
As mentioned in Section~\ref{sec:gen}, we can then use
Equation~(\ref{eqn:jCK0}) to predict the ratio of characteristic rotation velocities for
ellipticals and spirals:
\begin{equation}
\frac{v_{s,{\rm E}}}{v_{s,{\rm Sp}}} \sim \frac{k_1}{k_4} \frac{a_{\rm e,Sp}}{a_{\rm e,E}} ,
\end{equation}
where we are approximating the spiral galaxy parameters as dominated by the disk component.
With $k_1/k_4=0.5$, and $a_{\rm e,Sp}/a_{\rm e,E} \sim 2$ for our sample,
we therefore predict $v_{s,{\rm E}}/v_{s,{\rm Sp}} \sim 1$. Thus,
{\it ellipticals should rotate at roughly the same velocity as spirals if they have
the same specific angular momenta at a given mass.}
Without proceeding any further, this scaling analysis already suggests that
ellipticals have lower $j_\star$\ than spirals, or else they would be extremely flattened
by rotation, similarly to the spiral disks which have near-maximal rotational support
(modulo possible differences in dynamical mass between
spiral and elliptical galaxies at the same stellar mass).
The same argument applies even more strongly to the spiral bulges, since they
are far more compact than the disks at a given mass.
If the bulges had the same $j_\star$\ as the disks, then
they would have to rotate much {\it faster}, which is impossible.
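The scaling argument of the last two paragraphs amounts to a few lines of arithmetic (the numerical inputs are the values quoted above; applying the same $k$-ratio to the bulge case is our own illustrative simplification):

```python
# From the equation above: if two galaxy types share the same j_star at
# fixed mass, then v_A / v_B ~ (k_B / k_A) * (a_B / a_A).
def velocity_ratio(k_ratio, size_ratio):
    return k_ratio * size_ratio

# Ellipticals vs. spiral disks: k_1/k_4 = 0.5 and a_Sp/a_E ~ 2 for
# this sample, so equal j_star would require equal rotation speeds.
print(velocity_ratio(0.5, 2.0))  # ~1

# Spiral bulges vs. disks: bulges are ~4x more compact at fixed mass,
# so matching the disks' j_star would require roughly twice the disk
# rotation velocity (taking the same k-ratio as an illustration).
print(velocity_ratio(0.5, 4.0))  # ~2
```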
We now examine what our new collection of observations tells us directly about the
rotation scaling relations. The left-hand panel of
Figure~\ref{fig:rot} shows the characteristic rotation velocity $v_s$
for the elliptical and lenticular galaxies, and the spiral disk and bulge subcomponents,
in our sample.
Here we are plotting the {\it intrinsic} rotation velocity,
multiplying by the deprojection factor $C_i$, which is just $(\sin i)^{-1}$
for disks (see Appendix~\ref{sec:thin}),
and Equation (\ref{eqn:Cform}) for bulges.
For the early-type galaxies, the inclinations are unknown, and we have adopted
median factors for $C_i$ as discussed in Section~\ref{sec:etgdata}.
We see that the disks follow a fairly tight relation of approximately
$C_i\,v_{\rm s} \propto M_\star^{0.25}$,
with a residual trend for the later-type disks to rotate more slowly.
This is equivalent to the familiar Tully-Fisher relation, and in the Figure
we include a recent result from the literature \citep{2011ApJ...742...16T},
which matches our data very well
(cf.\ the type-dependence among spirals found by \citealt{2008AJ....135.1738M}).
We also show in the right-hand panel of Figure~\ref{fig:rot} the relation between
size and rotation velocity, which are strongly correlated parameters for disk galaxies.
The elliptical galaxies are completely different,
showing an {\it anti-correlation} between rotation velocity and mass,\footnote{This
echoes a similar trend in the {\it central} rotation properties of
early-type galaxies in general (shown in Figure~\ref{fig:atlas1}).
The eight galaxies studied in detail (points with error bars in Figure~\ref{fig:rot})
are consistent with this trend but do not include enough lower-luminosity ellipticals
to distinguish between $v_{\rm s}$ being constant or decreasing with mass.}
with $C_i\,v_{\rm s} \propto M_\star^{-0.1}$.
This result also contrasts markedly with standard relations
for ellipticals involving the velocity dispersion $\sigma_0$ or the dynamical mass
(e.g., $\sigma_0 \propto M_\star^{0.25}$;
\citealt{1976ApJ...204..668F,2011ApJ...742...16T}).
In galaxy disks, the rotation velocity traces the dynamical mass,
so the Tully-Fisher relation is a measure of both mass and angular momentum.
In elliptical galaxies, on the other hand,
the mass and angular momentum relations are decoupled.
We also find an anti-correlation between rotation velocity and size (right-hand panel)
that we will discuss later in this paper.
The behavior of the lenticulars in the mass--rotation~velocity diagram is difficult to
discern in detail owing to the small sample size, but in general it appears
intermediate to the other galaxy types. We also notice
an interesting pattern when considering the lenticulars and ellipticals together:
there may be a {\it bimodal} mass--rotation~velocity relation,\footnote{This
pattern may be partially an artifact of inclination effects.
In particular, some of the edge-on lenticulars
were observed with long-slit spectroscopy
directly along their embedded disks, which may not provide an accurate measurement
of the overall rotation. However, for the ellipticals we find no correlation
between apparent rotation velocity and ellipticity.
An additional issue is that the occasional extremely low-inclination
galaxy will not be treated well by our median-deprojection method
(cf.\ the right-hand panel of Figure~\ref{fig:Cr}), so in any fits to the data,
we will discard outliers with very low $v_s$ or $j_\star$\ (e.g., NGC~1419).
}
with some galaxies following
the trend for spirals, and others following a steep reverse relation,
$C_i\,v_{\rm s} \propto M_\star^{-0.3}$.
The implication is that there may be two distinct populations of early-type galaxies,
one of which is closely related to spirals; these populations are not equivalent to
the standard E and S0 classifications.
The bulge rotation velocities appear to follow a similar trend to the spirals,
at about half the amplitude.
Here it should be remembered that the bulge ``data'' points are {\it indirect}
estimates constructed in order to provide plausible adjustments to the total
angular momenta of the spiral galaxies (Section~\ref{sec:ltgdata}).
The results so far suggest that bulges are different from
ellipticals in their mass--size--rotation~velocity relations, and we will see in
the next section how their angular momenta compare.
Since both the sizes and the rotation velocities of elliptical galaxies are systematically
lower than for spiral disks, we can already predict that the ellipticals will
on average have much lower $j_\star$.
Note that although this conclusion has already been widely adopted for decades,
only now have the kinematic data reached large enough radii to confirm it
with confidence.
To see that the low characteristic rotation velocities for ellipticals
are not a mathematical sleight of hand,
one may consider the specific cases of NGC~821 and NGC~3377 in Figure~\ref{fig:etg}.
The rotation-velocity profiles of these galaxies decline dramatically outside $x\sim$~(1--2)~$a_{\rm e}$,
which may be contrasted with the spiral galaxies in Figure~\ref{fig:spirals}.
Preliminary analysis of additional {\it edge-on} cases, where the deprojection
uncertainties are minimized, indicates that such declines are a {\it generic
feature} of $\sim M^*$ early-type galaxies (A. Romanowsky et al., in preparation).
This conclusion includes NGC~2768, which from the current data
appears consistent with a constant
or rising outer rotation velocity, but which with more extensive new PN data may have
a declining outer profile.
Even the cases of strongly rising rotation-velocity profiles out to $x\sim$~2~$a_{\rm e}$\
found by \citet{1999ApJ...513L..25R}
appear upon closer inspection to turn over at larger radii.
These results all contrast with early claims of high outer rotation
in some early-types, which were recently overturned with improved observations
(e.g., \citealt{1994Msngr..76...40A,1998AJ....116.2237K,2006pnbm.conf..294R,2010A&A...518A..44M,2011ApJS..197...33S}).
We can also begin making some interesting inferences
about the relations among other galaxy types, based on both
size and rotation-velocity trends (Figures~\ref{fig:size} and \ref{fig:rot}).
As discussed, the lenticulars share similar properties to spirals
in some cases, and to ellipticals in others.
The distinction between ``fast'' and ``slow'' rotator ellipticals
based on their inner regions does not appear to hold up when considering
their global rotation properties.
This overview of the observable scaling relations between mass, size, and rotation velocity
gives us a preview of some of our overall conclusions about angular momentum, and
provides more confidence in the solidity of those conclusions.
We construct a novel mass--rotation~velocity relation for ellipticals, which is the
analogue of the Tully-Fisher relation for spirals, but with the remarkable difference
of having a negative slope.
The data also imply that both elliptical galaxies and spiral bulges must have
lower specific angular momenta than spiral disks of the same mass.
We address this issue more quantitatively in the next section, incorporating
the additional mass-dependent factor $k_n$ in calculating $j_\star$.
\section{Observations: angular momenta of the full sample}\label{sec:obsres}
Having derived estimates of the $j_\star$\ and $M_\star$\ parameters for our
full galaxy sample, we now examine the resulting observational trends,
which constitute the key results of this paper.
We begin by focusing on the late-type galaxies in Section~\ref{sec:less}, and
combine these with the early-types in Section~\ref{sec:obsresults}.
We discuss our proposed replacement for the Hubble sequence in Section~\ref{sec:replace},
which we test by examining systematic residuals from the $j_\star$--$M_\star$\ trends in Section~\ref{sec:resid}.
We further convert the $j_\star$--$M_\star$\ data into one-dimensional histograms in Section~\ref{sec:hist}.
\subsection{Lessons from spirals}\label{sec:less}
Although the main novelty of this paper is our careful consideration of
early-type galaxies, we also include the oft-studied category of spirals in order
to provide an integrated analysis of bright galaxies of all types.
Furthermore,
the well-constrained angular momenta of the spirals also permit us to better understand
systematic issues such as inclination corrections that are trickier to handle
for early-types.
\begin{figure}
\centering{
\includegraphics[width=3.3in]{f13.eps}
}
\caption{The total (disk plus bulge) stellar specific angular momentum of nearby spiral galaxies
plotted against total stellar mass.
The top and bottom panels show estimates of projected and intrinsic $j_\star$,
respectively;
the uncertainty in $j_\star$\ for each galaxy is in almost all cases
smaller than the plotted symbols.
Different symbols denote galaxy sub-types as specified in the legends.
The dotted lines show fits to the data in each panel,
while the dashed lines show fits to the disk components alone (data not shown).
The spiral galaxies follow a universal $j_\star$--$M_\star$\ relation, with some
dependence on Hubble type. The projected relation is very similar to the
intrinsic relation, but with a small offset, and slightly increased scatter, in $j_\star$.
\label{fig:JMsp}
}
\end{figure}
We plot the total (disk+bulge) $j_\star$--$M_\star$\ data for the spirals from
Table~C\ref{tab:spirals} in
Figure~\ref{fig:JMsp}. In the top panel, we show the projected value, $j_{\rm p}$,
and in the bottom panel, the intrinsic value, $j_{\rm t}$.
These are related trivially by the disk inclination, but we wish to investigate
how well the trends in projection reflect the intrinsic trends, since
deprojection for the early-type galaxies will be more difficult.
\begin{table}
\begin{center}
\caption{Mass--angular momentum fits to data}\label{tab:loglog}
\noindent{\smallskip}\\
\begin{tabular}{l c c c}
\hline
Sample & $\log j_0$ & $\alpha$ & $\sigma_{\log j_\star}$ \\
\hline
All spirals, total, projected & $3.11\pm0.03$ & $0.53\pm0.05$ & 0.22 \\
All spirals, total, intrinsic & $3.18\pm0.03$ & $0.52\pm0.04$ & 0.19 \\
Sa--Sab, total, projected & $2.93\pm0.05$ & $0.60\pm0.06$ & 0.17 \\
Sa--Sab, total, intrinsic & $3.02\pm0.04$ & $0.64\pm0.07$ & 0.12 \\
Sb--Sbc, total, projected & $3.15\pm0.03$ & $0.65\pm0.14$ & 0.16 \\
Sb--Sbc, total, intrinsic & $3.21\pm0.03$ & $0.68\pm0.13$ & 0.15 \\
Sc--Sm, total, projected & $3.25\pm0.04$ & $0.58\pm0.06$ & 0.20 \\
Sc--Sm, total, intrinsic & $3.29\pm0.04$ & $0.55\pm0.05$ & 0.18 \\
\hline
All spirals, disks, projected & $3.25\pm0.02$ & $0.62\pm0.05$ & 0.20 \\
All spirals, disks, intrinsic & $3.31\pm0.02$ & $0.61\pm0.04$ & 0.17 \\
Sa--Sab, disks, projected & $3.25\pm0.05$ & $0.76\pm0.09$ & 0.21 \\
Sa--Sab, disks, intrinsic & $3.34\pm0.04$ & $0.82\pm0.08$ & 0.17 \\
Sb--Sbc, disks, projected & $3.24\pm0.03$ & $0.71\pm0.14$ & 0.16 \\
Sb--Sbc, disks, intrinsic & $3.30\pm0.03$ & $0.75\pm0.12$ & 0.13 \\
Sc--Sm, disks, projected & $3.29\pm0.05$ & $0.61\pm0.07$ & 0.21 \\
Sc--Sm, disks, intrinsic & $3.33\pm0.05$ & $0.57\pm0.05$ & 0.19 \\
\hline
All spirals, bulges, projected & $2.20\pm0.31$ & $0.69\pm0.11$ & 0.58 \\
All spirals, bulges, intrinsic & $2.32\pm0.31$ & $0.69\pm0.10$ & 0.57 \\
Sa--Sab, bulges, projected & $2.30\pm0.32$ & $0.99\pm0.15$ & 0.47 \\
Sa--Sab, bulges, intrinsic & $2.44\pm0.32$ & $0.99\pm0.15$ & 0.46 \\
Sb--Sbc, bulges, projected & $1.89\pm0.34$ & $0.34\pm0.20$ & 0.58 \\
Sb--Sbc, bulges, intrinsic & $2.01\pm0.33$ & $0.34\pm0.19$ & 0.56 \\
Sc--Sm, bulges, projected & $2.21\pm0.57$ & $0.64\pm0.27$ & 0.60 \\
Sc--Sm, bulges, intrinsic & $2.30\pm0.58$ & $0.63\pm0.28$ & 0.60 \\
\hline
Lenticulars, projected & $2.97\pm0.08$ & $0.80\pm0.14$ & 0.29 \\
Lenticulars, intrinsic & $3.05\pm0.08$ & $0.80\pm0.14$ & 0.29 \\
Ellipticals, projected & $2.52\pm0.05$ & $0.60\pm0.09$ & 0.24 \\
Ellipticals, intrinsic & $2.73\pm0.05$ & $0.60\pm0.09$ & 0.24 \\
\hline
Sb--Sm, intrinsic, fixed $\alpha=2/3$ & $3.28\pm0.03$ & 0.67 & 0.19 \\
Ellipticals, intrinsic, fixed $\alpha=2/3$ & $2.75\pm0.05$ & 0.67 & 0.24 \\
{\it $\Lambda$CDM halos} & {\it 2.50} & {\it 0.67} & {\it 0.23} \\
\hline
\end{tabular}
\\
\end{center}
\end{table}
Overall, the spiral galaxies appear to follow fairly tight
$j_\star$--$M_\star$\ trends, with similar slopes, regardless of Hubble sub-type.
In more detail, we carry out least-squares fits to $j_\star$\ as a function of $M_\star$\
in log-log space:
\begin{equation}\label{eqn:loglog}
\log j_{\rm mod} = \log j_0 + \alpha \left[ \log (M_\star/M_\odot) -11 \right] ,
\end{equation}
with a residual rms scatter that we parameterize as $\sigma_{\log j_\star}$.
The uncertainties in the fit parameters $j_0$ and $\alpha$ are
estimated by bootstrap resampling.
Our fitting results for various spiral subsamples are reported in Table~\ref{tab:loglog}.
For total $j_\star$, the systematic uncertainties from the bulge rotation
(see Section~\ref{sec:ltgdata}) turn out to be smaller than or equal to
the statistical fitting uncertainties, even for the Sa--Sab galaxies,
and in the Table we have combined both uncertainties in quadrature.
The data are basically consistent with a universal $j_\star$--$M_\star$\ slope for spiral
galaxies of all types, with $\alpha$~$\sim$~0.6 and an rms scatter of
$\sigma_{\log j}$~$\sim$~0.2~dex.
There is also a clear residual trend with Hubble type:
the Sa--Sab galaxies have systematically lower $j_\star$\ than the Sb--Sm galaxies.
These conclusions hold for both $j_{\rm p}$\ and $j_{\rm t}$, although the uncertainties and
the scatter are smaller for $j_{\rm t}$, as expected if there are genuine, underlying
physical correlations that become clearer after deprojection.
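As an illustration of the fitting procedure, here is a minimal, dependency-free sketch of the log-log fit of Equation~(\ref{eqn:loglog}) with bootstrap parameter errors; the function names and the synthetic demonstration data are our own, and the published fits of course use the actual sample values:

```python
import math
import random

def fit_loglog(masses, js):
    """Least-squares fit of log j = log j0 + alpha * (log(M/Msun) - 11)."""
    xs = [math.log10(m) - 11.0 for m in masses]
    ys = [math.log10(j) for j in js]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    alpha = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - alpha * mx, alpha   # (log j0, alpha)

def bootstrap_errors(masses, js, n_boot=1000, seed=1):
    """rms scatter of the fit parameters under bootstrap resampling."""
    rng = random.Random(seed)
    pairs = list(zip(masses, js))
    fits = []
    while len(fits) < n_boot:
        sample = [rng.choice(pairs) for _ in pairs]
        if len({s[0] for s in sample}) > 1:   # skip degenerate resamples
            fits.append(fit_loglog(*zip(*sample)))
    def rms(vals):
        mu = sum(vals) / len(vals)
        return math.sqrt(sum((v - mu) ** 2 for v in vals) / len(vals))
    return rms([f[0] for f in fits]), rms([f[1] for f in fits])

# Synthetic demo: galaxies placed exactly on log j = 3.2 + 0.6 (log M - 11).
masses = [10.0 ** x for x in (10.0, 10.3, 10.6, 10.9, 11.2, 11.5)]
js = [10.0 ** (3.2 + 0.6 * (math.log10(m) - 11.0)) for m in masses]
logj0, alpha = fit_loglog(masses, js)
print(f"log j0 = {logj0:.2f}, alpha = {alpha:.2f}")
```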
The multi-component nature of our model galaxies allows
us to look further at disk and bulge properties separately.
We will take up this issue in Section~\ref{sec:obsresults}, and for now provide the fits to
the $j_{\rm d}$--$M_{\rm d}$ and $j_{\rm b}$--$M_{\rm b}$ relations
in Table~\ref{tab:loglog}.
It should be remembered that the bulge results depend on
model assumptions, although as discussed, we have plausibly bracketed their
upper and lower limits for $j_\star$.
\begin{figure*}
\centering{
\includegraphics[width=3.5in]{f14a.ps}
\includegraphics[width=3.5in]{f14b.ps}
}
\caption{{\it Left-hand panel:} The total intrinsic specific angular momentum of galaxies
plotted against their total stellar mass.
Symbols show galaxy types according to the legend at the upper left.
The points with error bars shown are based on the
more detailed $j_\star$\ estimator [Equation~(\ref{eqn:JMp})];
for the remainder of the galaxies,
the approximate $j_\star$\ estimator [Equation~(\ref{eqn:jCK0})] was used.
The uncertainties are similar in both cases.
The deprojection from observed $j_{\rm p}$\ to intrinsic $j_{\rm t}$\ was accomplished using
individual inclinations for the spirals, and median deprojection factors
for the lenticulars and ellipticals (see main text).
The least massive early-type galaxy in the sample is
the compact elliptical NGC~4486B, which is probably
in the process of being tidally stripped by the giant galaxy M87;
the other low-$j_\star$\ outlier is NGC~1419. Both are
marked with black $\times$ symbols and excluded from all fits in this paper.
Dotted lines show the best fits for the Sb--Sm and elliptical galaxies:
these two galaxy types follow $j_\star$--$M_\star$\ trends that are parallel but separated in
$j_\star$\ by $\sim$~0.5~dex.
{\it Right-hand panel:}
As left-hand panel, but now plotting spiral disks and bulges alone,
along with elliptical galaxies, as indicated by the legend.
The upper line is now the fit to the disks (for all spiral types)
rather than to the whole galaxies.
Note that the slopes of the lines in this panel and the left-hand one
should not be compared by eye,
owing to the different axis ranges.
The uncertainties in $j_\star$\ for the disks are typically $\sim$~0.04 dex,
and for the bulges at least $\sim$~0.2~dex;
the $M_\star$\ uncertainties are systematic (see main text).
Many of the most massive spiral bulges appear to follow a similar $j_\star$--$M_\star$\
relation to the ellipticals.
\label{fig:JMM0}
}
\end{figure*}
As anticipated, the bulges turn out to have little impact on the total $j_\star$\ trends for
the Sb--Sm galaxies,
which are dominated by the disk components.
For the Sa--Sab galaxies, the bulges are responsible for the systematic
offset with respect to the later types;
this offset changes slightly but persists when
adopting the upper or lower limits to the bulge rotation.
The {\it disks} of all the galaxy types
turn out to follow nearly the same $j_\star$--$M_\star$\ relations.
This analysis demonstrates that inclination effects are not expected to have a major
impact on our overall results, since for both disks and bulges,
the intrinsic and projected $j_\star$--$M_\star$\ trends as well as their scatter are very similar.
There is an overall offset between disk $j_{\rm t}$\ and $j_{\rm p}$\ of $\sim$~0.07~dex,
which is comparable to the range of 0.04--0.06~dex that we would expect,
given the median inclination $i=67^\circ$ of our sample, and
depending on whether the $j_\star$--$M_\star$\ trend represents a median or an average fit
(see Appendix~\ref{sec:thin} for further discussion).
For our ensuing study of early-type galaxies, we will therefore simply adopt median
deprojection values for all of the galaxies, which we estimated in Section~\ref{sec:etgdata}
to mean adding offsets of 0.08~dex and 0.22~dex to $j_{\rm p}$\ to derive $j_{\rm t}$,
for lenticulars and ellipticals, respectively.
We can also in general drop the usage of $j_{\rm p}$\ in the rest of this paper,
in favor of the more physically meaningful $j_{\rm t}$\ which we now adopt as
our estimate for $j_\star$.
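The projection consistency check above is easy to reproduce from the thin-disk geometry of Appendix~\ref{sec:thin}; a one-function sketch:

```python
import math

# For a thin disk, v_p = v sin(i), so j_p = j_t * sin(i); the deprojection
# offset in dex is therefore -log10(sin i).
def offset_dex(i_deg):
    return -math.log10(math.sin(math.radians(i_deg)))

# At the sample's median inclination of i = 67 degrees, the expected
# offset is ~0.04 dex, the lower end of the 0.04--0.06 dex range quoted.
print(f"expected offset at i = 67 deg: {offset_dex(67.0):.2f} dex")
```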
\subsection{Combined observational results}\label{sec:obsresults}
We are now ready to include the early-type galaxies in our analysis,
and thereby address most of the key science questions raised in Section~\ref{sec:intro}.
As a reminder, our starting point is the $j_\star$--$M_\star$\ diagram from F83 that
we have reproduced in Figure~\ref{fig:JMM00}.
Do we find the same $j_\star$--$M_\star$\ trends with an updated and expanded dataset,
and more detailed analysis?
Do ellipticals still appear to have systematically low $j_\star$\ relative to spirals,
or do we discover large reservoirs of additional $j_\star$\ at large galactocentric radii,
using modern data?
Do Sa and S0 galaxies fill in any ``gap'' between spirals and ellipticals,
and can we then connect the Hubble sequence to a sequence in $j_\star$?
Can we characterize all galaxies as combinations of disks and bulges that follow
universal scaling relations?
(The main remaining question that connects to galaxy formation theory will be pursued
in the next section.)
Taking our early-type galaxy $j_\star$\ and $M_\star$\ estimates from Table~C\ref{tab:etg}
(after statistically correcting projected to intrinsic quantities;
see Table~\ref{tab:err} for an error analysis),
we plot them in Figure~\ref{fig:JMM0} (left), along with the spirals results discussed
in Section~\ref{sec:less}.
This new Figure is the centerpiece of our paper.
{\it Focusing first on the elliptical galaxies, our basic finding is that
they follow a $j_\star$--$M_\star$\ trend which is
roughly parallel to the spirals but with a large systematic offset to lower $j_\star$.}
We thereby confirm the conclusions of F83, finding from
a new synthesis of modern photometric and kinematic data
that the ``missing'' angular momentum in ellipticals does {\it not} emerge at large radii,
as had been expected from some theoretical studies.
As discussed in Section~\ref{sec:scale}, the new observations tend to show outer rotation
profiles that {\it decline} rather than rise.
Even the nearby galaxy NGC~5128 (Cen~A),
which is often considered to be an elliptical formed through a recent major merger,
shows a relatively low $j_\star$\ when compared to spirals of the same stellar mass.
Whether or not these observations pose a genuine problem to major-merger explanations
for forming ellipticals will require renewed theoretical analysis, but
as discussed in Section~\ref{sec:scale}, there seems to be
a pattern in the literature
of misdiagnoses of high outer rotation from early, sparse data -- which led to
premature claims of evidence for major mergers.\footnote{\citet{2012MNRAS.421.1485N}
also recently noted an emerging trend for low rotation in elliptical-galaxy halos,
at odds with major-merger expectations.
One possible counter-example is the S0 galaxy NGC~1316, which is generally thought to be
a major-merger remnant. Based on the new PN kinematics results from
\citet{2012A&A...539A..11M},
we confirm the finding of \citet{1998ApJ...507..759A} that the $j_\star$--$M_\star$\ values
for this galaxy are close to the mean trend for spirals.
However, we caution that our photometric parameters and $\Upsilon_\star$\ value
are particularly insecure for this galaxy.
}
The specific angular momentum difference between spirals and ellipticals is
also apparent from a simple,
direct consideration of the data in Section~\ref{sec:scale},
where the smaller sizes and rotation velocities for ellipticals suggested that they
have lower $j_\star$.
As an arbitrary benchmark, we use the median $j_\star$\ at the $L^*$
characteristic luminosity, which is $\log\,(L^*_K/L_{K,\odot}) \sim 11$,
corresponding to $\log\,(M_\star/M_\odot) \sim 11$.
For ellipticals and Sb--Sm spirals,
we find projected values of $j_{\rm p} \sim 330$~km~s$^{-1}$~kpc
and $\sim 1600$~km~s$^{-1}$~kpc, respectively, and true values
of $j_\star = j_{\rm t} \sim 540$~km~s$^{-1}$~kpc and $\sim 1800$~km~s$^{-1}$~kpc.
In more detail, we report fits to the $j_\star$--$M_\star$\ data
toward the end of Table~\ref{tab:loglog}.
The fitted slope for the ellipticals is consistent with that for the Sb--Sm spirals,
but is significantly offset to lower $j_\star$\ by a factor of $\sim$~3.4 ($\sim$~0.5 dex).
These findings are consistent with F83, except that the gap has narrowed from
a factor of $\sim$~6 ($\sim$~0.8~dex).\footnote{Our
revised Sb--Sm relation is
$\sim$~0.1~dex lower than in F83, partly owing to the inclusion of bulges, and partly to
new estimates for disk sizes and mass-to-light ratios.
Our revised ellipticals relation is
$\sim$~0.2~dex higher than in F83;
this difference appears to arise not so much from the rotation
data (the extrapolations to large radius by F83 turn out to be quite accurate on average),
but from a refined treatment of the total angular momentum calculation for spheroids.
Our slopes of $\alpha=0.53\pm0.04$ and $0.60\pm0.09$ for the
Sb--Sm and elliptical galaxies are shallower than the $\alpha=0.75$ slope
suggested by F83; for the Sb--Sm galaxies, this difference is driven mostly by
our inclusion of bulges and of lower-mass galaxies
[$\log\,(M_\star/M_\odot)\sim$~9];
while for the ellipticals, a shallower slope was already apparent in F83.
}
Note that if the $K$-band $\Upsilon_\star$\ for the ellipticals were systematically higher than
for the spirals by a factor of $\sim$~2 (perhaps owing to age or IMF differences;
cf.\ Section~\ref{sec:mass}),
then the $j_\star$\ offset would increase to a factor of $\sim$~5 ($\sim$~0.7~dex).
The scatter of $\sigma_{\log j_\star}=$~0.24~dex for the ellipticals
is similar to the $j_{\rm p}$\ scatter for the spirals.
We also note that the general trends for the ellipticals
are supported by the small sample of galaxies that we modeled in detail
(see points with error bars in Figure~\ref{fig:JMM0}, left).
Although one might still have concerns that large formal
uncertainties in $j_\star$\ remain for most of the sample after extrapolating
their rotation-velocity profiles beyond 2~$R_{\rm e}$, in order to
close the $j_\star$\ gap between spirals and ellipticals,
the rotation velocity would have to rise rapidly by a factor of $\sim$~4 outside these radii,
which seems implausible (cf.\ Figure~\ref{fig:etg}).
The parallel nature of the spiral and elliptical trends
is an interesting and non-trivial result, since Figure~\ref{fig:rot} showed
that the slopes of the rotation-velocity scaling relations for these galaxies
have opposite signs.
Some mass-dependent conspiracy of size, rotation velocity, and S\'ersic index must be at work
in order for the $j_\star$--$M_\star$\ slopes to turn out the same.
The few ``slow rotator'' ellipticals in our sample show no indication of deviating
systematically from the overall $j_\star$--$M_\star$\ trend for ellipticals, which
disagrees with earlier findings of much lower $j_\star$\ for such galaxies
\citep{1990A&A...239...97B}.
Although their outer regions, like their central parts, rotate slowly relative
to most of the fast rotators (Figure~\ref{fig:rot}), we find that
this is compensated for by their larger scale radii and S\'ersic indices
(keeping in mind that the results for these galaxies are the most uncertain).
Thus the global $j_\star$\ measurements suggest that the slow and fast rotators may
have more in common than was previously suspected.
Having confirmed the basic observational findings of F83, we now move on to
fresh territory, beginning with the
inclusion of Sa and S0 galaxies in Figure~\ref{fig:JMM0} (left).
F83 suggested that these would fill the gap in $j_\star$--$M_\star$\ space between ellipticals
and late-type spirals, which is confirmed by our sample.
Both of these galaxy types are on average offset to lower $j_\star$\ from the Sb--Sm spirals trend
by a factor of $\sim$~1.8 ($\sim$~0.25 dex; we will discuss variations about
the average in Section~\ref{sec:resid}).
One natural interpretation of this new finding is that the Hubble classifications are related to
an underlying physical structure, where all galaxies are composed of
some combination of two basic components: a disk and a spheroid
(as illustrated schematically in Figure~\ref{fig:schem1} of Section~\ref{sec:intro}).
These components would define two distinct sequences in the $j_\star$--$M_\star$\ plane, which
in combination would move the total values of galaxies to intermediate regions in this plane,
depending on the bulge-to-total mass ratios, $B/T$.
To explore this idea, we plot the $j_\star$--$M_\star$\ data
separately for elliptical galaxies, and for spiral disk and bulge subcomponents,
in the right-hand panel of Figure~\ref{fig:JMM0}.
The disks follow nearly the same relation as the spiral galaxies overall, since the
latter are dominated by their disks.
More remarkably, the $j_\star$--$M_\star$\ trend for {\it bulges}
is fairly similar to the trend for ellipticals over the mass range where they
overlap.\footnote{At lower bulge masses, the apparent tendency to
relatively low $j_\star$\ values
should be viewed as speculative,
since it is based on classical bulges rather than the pseudo-bulges
that may predominate in this regime.}
This is a surprising result, because as shown in Figure~\ref{fig:size}, the bulge
{\it sizes} are systematically smaller than those of the ellipticals, and thus their
rotation velocities (Figure~\ref{fig:rot}) must be higher, in an apparent
conspiracy to produce roughly the same $j_\star$.
A similar analysis could in principle be carried out for the fast-rotator ellipticals,
since they
are widely considered to host hidden, embedded disk-like components.
Do the disk and bulge subcomponents of ellipticals
follow the same $j_\star$--$M_\star$\ relations as those of the spirals?
We have investigated this question
in Appendix~\ref{sec:decomp} using decompositions from
the literature, but the results are somewhat ambiguous.
Thus, although we have been able to address all of the major questions
raised initially about empirical $j_\star$--$M_\star$\ trends, we flag the trends
for the subcomponents in ellipticals (and lenticulars) as an important
aspect remaining in need of clarification.
\subsection{Replacing the Hubble diagram}\label{sec:replace}
The foregoing discussion brings us to the diagram that
we have already introduced schematically with Figure~\ref{fig:schem1}, which
constitutes our own, physically-motivated, substitute for the classic Hubble tuning fork,
and which could provide the underlying explanation for the observational trends
found in Figure~\ref{fig:JMM0}.
In this scheme, all galaxies are composed of a disk and a bulge, each adhering to a
distinct and parallel $j_\star$--$M_\star$\ scaling relation.
If the disk and bulge relations are universal
(which we will further test in Section~\ref{sec:resid}), then the
location of a galaxy in $j_\star$--$M_\star$\ space can immediately be used to infer its $B/T$ value uniquely,
and vice-versa
(i.e., there is a coordinate transformation between the two parameter spaces).
Elliptical galaxies would then be the cases with $B/T \sim 1$,
and bulges could be thought of as mini-ellipticals.
\begin{figure*}
\centering{
\includegraphics[width=3.5in]{f15a.ps}
\includegraphics[width=3.5in]{f15b.ps}
}
\caption{Specific angular momentum relative to the best-fit
trend for spiral disks.
In the {\it left-hand panel}, these residuals are plotted vs.\ Hubble stage.
For clarity, small random offsets have been added in the horizontal direction
for the early-type galaxies.
In the {\it right-hand panel}, the residuals are plotted vs.\ bulge-to-total mass ratio.
The curved line shows a sample model prediction
(not a fit to the data; see text for details).
There are strong systematic trends of the $j_\star$\ residuals with respect to both
Hubble type and to bulge-fraction, and the relative smoothness of
this trend (particularly for the E/S0s)
suggests that bulge-fraction is the more fundamental
driving parameter.
\label{fig:diffj}
}
\end{figure*}
As with the original Hubble diagram, our $j_\star$--$M_\star$\ diagram
provides a simple {\it description}
of galaxies, along with the temptation to interpret it as some kind of
{\it evolutionary sequence}.
However, our diagram differs, since the parameters used are physical
quantities that may in principle be conserved, and thus it is actually justified to begin
using the diagram directly as a tool to motivate and test some evolutionary scenarios
for galaxies. This will be the objective of Section~\ref{sec:theory}.
A key feature of our diagram is that it views galaxies as fundamentally populating
a space of {\it two parameters}, angular momentum and mass, which are nearly
equivalent to the more observationally accessible properties of bulge fraction and luminosity.
In this framework, galaxies {\it cannot} be fruitfully reduced to a one-dimensional family
controlled by a single parameter (e.g., \citealt{2008Natur.455.1082D}).
Our diagram may also be contrasted with another currently fashionable way to understand
galaxies: as color-magnitude sequences that are generally related to
star formation histories (e.g., \citealt{2004ApJ...600..681B,2007ApJ...665..265F}).
These properties are loosely related to $j_\star$--$M_\star$\ space if star formation generally
occurs in high-$j_\star$\ disks. However, our framework is
less astronomical and more astrophysical in nature, and
we expect it to provide novel insights into galaxy formation that are complementary to
other classifications, and perhaps more fundamental.
Another recently-introduced classification for galaxies is also based loosely
on specific angular momentum concepts:
$\lambda_R$ \citep{2007MNRAS.379..401E},
which measures the rotational dominance in the central regions
(typically inside $\sim R_{\rm e}/2$) and is similar to a $v/\sigma$ metric.
When applied to early-type galaxies, this metric has revealed a host of
interesting patterns and correlations \citep{2011MNRAS.414..888E}.
However, this metric in practice is not only very scale dependent,
but also misses exactly those scales that are
most important for measuring true, physical angular momentum (recall Figure~\ref{fig:fR}).
In fact, we have seen evidence that $j_\star$\ and the central $\lambda_R$
are disjoint properties:
the slow rotators (low-$\lambda_R$ galaxies) do not appear to deviate from
the $j_\star$--$M_\star$\ trend for fast rotators.
A final related diagram to mention is $j_\star$--$v_{\rm c}$, where $v_{\rm c}$ is the
circular velocity, tracing the dynamical mass of a galaxy within some characteristic radius
(e.g., \citealt{2000ApJ...538..477N,2012MNRAS.424..502K}).
There are complications with using this parameter space, since for spiral galaxies
both $j_\star$\ and $v_{\rm c}$ are normally based on the same rotation-velocity measurements,
which causes a built-in correlation.
Unlike $M_\star$, $v_{\rm c}$ is not a physical quantity subject to
straightforward conservation laws.
In addition, a critical point for our goal of analyzing all types of galaxies
in a unified manner is that
it is very hard to estimate $v_{\rm c}$ for a large sample of early-types
since they rarely host extended gas disks. Instead, extensive data are
required from other tracers such as stellar kinematics
(as needed for $j_\star$\ estimation), as well as
grueling dynamical modeling, which even with state-of-the-art techniques
can still leave considerable uncertainties \citep{2009MNRAS.395...76D}.
Similar problems apply to a $j_\star$--$M_{\rm vir}$\ (virial mass) diagram, where
the masses can be estimated only on a statistical rather than on an individual basis
(e.g., \citealt{2012MNRAS.421..608D}).
\subsection{Examining the residuals}\label{sec:resid}
Our bulge--disk framework, although rather compelling, is not a unique
explanation for the systematic trends in the left-hand panel of Figure~\ref{fig:JMM0}.
It is possible that the vertical displacements of $j_\star$\ in this diagram are
somehow more directly related to Hubble morphology than to $B/T$
(although one should keep in mind that $B/T$ is one of the main factors
in the morphological classifications, along with spiral arm winding and
clumpiness).
To consider this point more clearly, and to
better see the relative trends in the data, we flatten the $j_\star$--$M_\star$\
relations into one dimension, dividing by the mean
trend for the spiral disks and thus generating the quantity:
\begin{equation}\label{eqn:Dj}
\Delta \log j_\star \equiv \log j_\star - \log\,j_{\rm mod}(M_\star) ,
\end{equation}
where $j_{\rm mod}$ is given by Equation~(\ref{eqn:loglog}).
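For concreteness, the residual of Equation~(\ref{eqn:Dj}) can be computed as follows; the slope matches the Sb--Sm value quoted in the text, but the normalization here is only illustrative and stands in for the actual fit parameters of Table~\ref{tab:loglog}:

```python
def log_j_mod(log_mstar, alpha=0.53, log_j0=3.26, log_m0=11.0):
    """Power-law fit: log j_mod = log_j0 + alpha * (log M* - log_m0).
    alpha = 0.53 is the Sb--Sm slope quoted in the text; log_j0 is an
    illustrative normalization (j ~ 1800 km/s kpc at log M* = 11)."""
    return log_j0 + alpha * (log_mstar - log_m0)

def delta_log_j(log_jstar, log_mstar):
    """Residual of Equation (eqn:Dj): Delta log j* = log j* - log j_mod(M*)."""
    return log_jstar - log_j_mod(log_mstar)
```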
We plot $\Delta \log j_\star$\ versus the Hubble stage parameter $T_{\rm Hubble}$ in Figure~\ref{fig:diffj}
(left-hand panel).
There is clearly a strong positive correlation between $T_{\rm Hubble}$ and the $j_\star$--$M_\star$\ residuals.
Among the spirals, this trend is clearest when considering the Sa--Sab versus Sb--Sc galaxies.
The Scd--Sm galaxies appear to continue the trend, but they inhabit the lowest-mass
area of the $j_\star$--$M_\star$\ diagram, where the mean relation is not defined well enough
to be certain of the residuals.
The S0s break the smooth trend of
$\Delta \log j_\star$\ decreasing for smaller $T_{\rm Hubble}$.
Many of them appear to have comparable specific angular momenta to typical
Sb--Sc galaxies, which was foreshadowed by the rotation scaling relations
of Figure~\ref{fig:rot}.
The implication is that lenticulars and spirals are overall dynamically similar,
differing more in their finer morphological features which
may be related to star formation activity.
We can thus think of these lenticulars as faded spirals, or of
the spirals as rejuvenated lenticulars, although
they differ in average $B/T$ values, and
more nuanced comparisons will require analysis of $\Upsilon_\star$\
(cf.\ \citealt{2010MNRAS.409.1330W}).
As for the subset of lenticulars with low $\Delta \log j_\star$, they may either be
very close to face-on, or else belong to a different family of objects
that are related to the ellipticals.
Returning to our original hypothesis that $B/T$ is the key parameter
affecting the $j_\star$--$M_\star$\ trends, we consider its correlation with
the residuals $\Delta \log j_\star$.
Since we do not actually
have bulge/disk decompositions for the early-type galaxies in our sample,
we introduce a novel technique that uses
the degree of central rotational support as a rough proxy for $B/T$.
The idea here is that the bulge is to a first approximation non-rotating,
so any observed rotation is from the disk:
objects with higher $(v/\sigma)$ imply higher disk fractions and lower $B/T$.
Appendix~\ref{sec:decomp} describes our methods for early-type $B/T$ estimation in
more detail.
For the late-types, we already have $B/T$ estimates based on decompositions in
the literature, as discussed earlier.
We show the results in the right-hand panel of Figure~\ref{fig:diffj}.
The residuals {\it do} correlate clearly with $B/T$, in a fairly smooth trend
that is followed equally well by all of the galaxy types, and
which contrasts with the $T_{\rm Hubble}$ trend.
We have marked a simple expectation for the $B/T$ trend with the curved line, given
the summation of Equation~(\ref{eqn:jtot}), along with an arbitrarily assumed
$j_{\rm b}=0.1 \times j_{\rm d}$.
This model mimics the data remarkably well, although it should be remembered
that the agreement is somewhat built-in already, since
correlated rotational properties
were used both to estimate $B/T$ and to calculate $j_\star$.
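The curved line can be reproduced directly from this prescription; a minimal sketch, assuming (as in the text) a mass-weighted sum of disk and bulge contributions with $j_{\rm b}=0.1\,j_{\rm d}$:

```python
import math

def delta_log_j_model(bt, jb_over_jd=0.1):
    """Predicted residual from the disk relation for a galaxy with
    bulge-to-total mass ratio bt, from the mass-weighted sum
    j_total = (1 - bt) * j_d + bt * j_b, with j_b = 0.1 * j_d."""
    return math.log10((1.0 - bt) + jb_over_jd * bt)

# A pure disk (bt = 0) sits on the relation; a pure bulge (bt = 1)
# falls a full dex below it; bt ~ 0.25 gives roughly -0.11 dex.
```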
Recalling that we also had to make strong modeling assumptions for
the spiral bulges when calculating $j_\star$,
the better connection of the residuals to $B/T$
rather than $T_{\rm Hubble}$
should be considered preliminary.
It is also difficult to tell how much of the scatter in $j_\star$\ at
fixed $B/T$ is due to observational error, and how much is due to intrinsic
variations, i.e., with bulges and/or disks not following perfectly standardized
$j_\star$--$M_\star$\ relations.
Definitive resolution of these issues
will require more detailed bulge--disk decompositions of all
types of galaxies, including spectroscopic information
(cf.\ \citealt{2011MNRAS.414..642C,2012MNRAS.422.2590J,2012ApJ...752..147D,Forbes12}),
and allowances for $\Upsilon_\star$\ variations.
We would however like to advance the proposition that bulge fraction is the
fundamental driving parameter behind $j_\star$\ variations, and
is responsible for many of the observed variations in galaxy properties
(see discussion in previous subsection).
Not only does this make sense from a physical standpoint, but the
agreements between ellipticals and spiral bulges in Figure~\ref{fig:JMM0} (right),
and between model and data in Figure~\ref{fig:diffj} (right),
provide provisional but strongly suggestive observational support.
The radially-declining rotation-velocity profiles of galaxies like NGC~821 and NGC~3377
in Figure~\ref{fig:etg} could also be naturally explained by central disk components
embedded in non-rotating bulges.
Furthermore, we will see from consideration of a cosmological context in Section~\ref{sec:theory2}
that the distribution of $j_\star$\ is more naturally reconciled with distinct disk and spheroid
subpopulations than with a simple continuum of galaxy $j_\star$.
\subsection{Histograms of stellar $j$ residuals}\label{sec:hist}
Before moving on to theoretical analyses,
we construct one more representation of the data
whose relevance will become particularly clear in the next section.
We compress
the preceding $j_\star$--$M_\star$\ information into a histogram of residuals from the
spiral disk relation, showing the results
in Figure~\ref{fig:histdiff} (upper panel).
Here it is apparent that the spiral galaxy data comprise
a roughly lognormal distribution in $\Delta j_\star$, with an rms dispersion of $\sim$~0.2 dex.
The ellipticals have a less well-defined distribution that
partially overlaps the spirals but is offset $\sim$~0.5 dex lower,
while the small sample of lenticulars spans almost the full range of residuals.
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f16.ps}
}
\caption{Histogram of specific angular momentum relative to the
mean observed trend for spiral disks.
In two of the panels, curves show example lognormal distributions for
comparison to the data.
In the upper panel, the red, green, and blue
histograms show data from Figure~\ref{fig:diffj} for spirals, lenticulars,
and ellipticals, respectively.
The middle panel shows the bulge and disk subcomponents of spiral galaxies,
with red and blue histograms, respectively.
The lower panel is a summation of the data from the upper panel,
after renormalizing each galaxy sub-type by its frequency in the nearby universe
(see main text).
The specific angular momentum does not appear to have a simple lognormal distribution,
and may even be bimodal.
\label{fig:histdiff}
}
\end{figure}
In the middle panel of Figure~\ref{fig:histdiff}, we look instead at the
disk and bulge subcomponents of the spiral galaxies,
where we have also
overplotted a Gaussian with a width of $\sigma_{\log j_\star} = 0.17$~dex for reference.
Given the uncertainties and possible selection bias in our analysis,
we consider the disks to be reasonably consistent with a lognormal distribution.
The $\Delta \log j_\star$\ distribution for the spiral bulges resembles that of the ellipticals
in the sense that both are systematically offset to lower values, as we have previously seen.
The bulges apparently extend to much lower $\Delta \log j_\star$\ than the ellipticals, but as discussed in
Section~\ref{sec:obsresults},
this is not a secure result, given the uncertainties in the
bulge calculations.
Returning to the overall results, we would like to know whether or not galaxies follow
a bimodal distribution in $\Delta \log j_\star$\ as the top panel of Figure~\ref{fig:histdiff}
suggests. The complication here is possible bias in the galaxy sample:
if we were to study {\it all} bright galaxies in a volume-limited sample,
the $\Delta \log j_\star$\ distribution might look very different.
To investigate this issue, we must
re-weight the distribution of $j_\star$\ in our sample by galaxy type.
The simplest approach is to renormalize by frequency or number density.
We use the ATLAS$^{\rm 3D}$\ results that 70\%, 22\%, and 8\% of the
galaxies in the nearby universe are spirals, lenticulars, and ellipticals
(over a stellar mass range similar to our observational sample;
\citealt{2011MNRAS.413..813C}).
The fractions in our sample are 63\%, 14\%, and 23\%, demonstrating a strong bias
toward ellipticals at the expense of lenticulars.
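One simple implementation of this renormalization is to weight each galaxy's histogram contribution by the ratio of its type's volume-limited frequency to its frequency in our sample; a sketch using the percentages quoted above:

```python
# ATLAS3D volume-limited type fractions versus the fractions in our sample.
VOLUME_FRAC = {"spiral": 0.70, "lenticular": 0.22, "elliptical": 0.08}
SAMPLE_FRAC = {"spiral": 0.63, "lenticular": 0.14, "elliptical": 0.23}

# Per-galaxy histogram weight: the ratio up-weights the under-represented
# lenticulars (~1.6) and down-weights the over-represented ellipticals (~0.35).
weights = {t: VOLUME_FRAC[t] / SAMPLE_FRAC[t] for t in VOLUME_FRAC}
```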
We plot the re-weighted results in the lower panel of Figure~\ref{fig:histdiff},
showing also for reference a lognormal
curve with $\sigma_{\log j_\star}=0.27$~dex
(a width that will be motivated in Section~\ref{sec:theory2}).
The total distribution of $\log\,j_\star$ residuals appears slightly non-Gaussian,
with a tail extending to low values.
This feature may not be significant if one allows for systematic
uncertainties in the selection effects,
but the skewness will become clearer when compared to
theory in Section~\ref{sec:theory2}.
An alternative scheme would be to re-weight by the stellar mass density of the
different galaxy types. This would bring us closer to a total distribution function
for stellar $j$ in the universe, rather than a distribution of galaxies with given $j_\star$.
It is beyond the scope of this paper to carry out such an exercise in detail, but the
basic outcome is clear.
The high end of the mass distribution is dominated by early-types (cf.\ lower panel
of Figure~\ref{fig:atlas1}), which means that the mass weighting would enhance the
contributions of these galaxies relative to number weighting.
The universal distribution of $j_\star$\ would then appear {\it more non-Gaussian} than in the
lower panel of Figure~\ref{fig:histdiff}.
We therefore find evidence that the residuals of the specific angular momenta of
galaxies from the mean relation are not simply lognormal.
The best match to a lognormal model is provided by the disk components of spirals,
while the bulges and the ellipticals may comprise a distinct second
population.\footnote{\citet{2007MNRAS.375..163H} used a large photometric survey
to estimate $j_\star$\ indirectly, with results that are less accurate than those
presented here, but which similarly imply a bimodal distribution
for ellipticals and spirals.}
Again, a natural interpretation of this finding is that
all galaxies are composed of some combination of high-
and low-$j_\star$\ material, which may be identified with disks and bulges, respectively.
Some implications of these results for galaxy formation in a modern cosmological
context will be discussed in the next section.
It should be remembered, however, that our empirical findings---of specific,
strong correlations between galactic angular momentum, mass, morphology, and bulge
fraction---stand on their
own and must be explicable by any successful theory of galaxy formation,
whether now or in the future.
\section{Connecting to theory}\label{sec:theory}
We are now ready to present a fresh theoretical way of looking at galaxies,
using the $j_\star$--$M_\star$\ diagram, which was introduced in F83, and which may now be
reinvigorated by populating it with observational data for galaxies of all types.
Our general approach is to take a step back from galactic {\it details},
whether these be spiral arms and dust lanes in observations, or unresolved
gas physics and star formation recipes in simulations,
and return to some simple physical parameters and conservation rules
that may provide robust constraints and insights to galaxy formation.
We have shown in Sections~\ref{sec:obsresults} and \ref{sec:resid}
that the specific stellar angular momenta
of observed galaxies follow remarkably tight correlations with their masses and bulge fractions.
Such patterns in Nature demand theoretical explanations, as they could
be tracing fundamental physical processes.
Indeed, the $j_\star$--$M_\star$\ relation for spiral galaxies is well known in some circles,
and provides a crucial benchmark for models of galaxy formation.
However, the correlation for elliptical galaxies (already shown in a preliminary version
by F83) is less well known and has seldom been addressed with theoretical models.
Our goal is to advance a general, physical framework for integrating these
observational constraints into models of galaxy formation and evolution.
Our approach here is different from, and complementary to, the
active field of hydrodynamical simulations of galaxy formation.
Although such simulations have made notable progress toward the ultimate goal
of reproducing realistic galaxies,
they still have a long way to go,
with recent work highlighting large differences in the basic properties of
simulated galaxies, depending on what code, resolution, and physical recipes are used
\citep{2012MNRAS.423.1726S,2011arXiv1110.5635T}.
Historically, such methods missed reproducing observed
$j_\star$\ trends by factors of up to $\sim$~30, and even the most recent work shows
variations at the factor of $\sim$~2 level.
The general concern is that many of the large-scale properties of galaxies could
well depend strongly on transport processes at the scales of molecular clouds,
which are not yet modeled satisfactorily in cosmological simulations.
Therefore some caution is still needed in assuming that the simulations
are providing an adequate representation of reality.
In this context, simplified ``toy'' models continue to play a key role in
defining the broad but solid outlines of the galaxy formation theory that
is required to match the observational constraints.
These models may also prove useful in physical understanding of the output
of numerical hydrodynamical simulations.
We frame our analysis in the context
of the current standard cosmological model for structure formation:
cold dark matter with a cosmological constant ($\Lambda$CDM; \citealt{2011ApJS..192...18K}).
This model makes specific, robust predictions for the angular momenta of DM halos.
Because the visible galaxies, consisting of stars and gas,
are presumed to reside in these DM halos, we may then
ask whether or not the observed stellar angular momenta bear any resemblance to
the predictions for DM halos.
We begin with the properties of $\Lambda$CDM halos as our ``initial conditions''
for galaxy formation, which we map to our observable space: $j_\star$--$M_\star$\ for
the stellar components of galaxies.
We do this by parameterizing the retention
of mass and angular momentum during galaxy formation,
and
then by introducing a menu of $j_\star$--$M_\star$\ vectors of change that correspond
to plausible physical processes (outflows, mergers, etc.).
We emphasize that the primary aim of this paper is {\it not} to concoct a new theory
of galaxy formation, nor to weigh in on competing models
by vetting specific simulation outputs against the $j_\star$--$M_\star$\ diagram.
Instead, we wish to lay out a generalized framework that can
both constrain and explain the models.
The methodology and merits of this approach should become clearer as we develop the
ideas throughout this section,
and as we eventually work through some practical examples.
We develop general theoretical predictions and make basic inferences
about $j$ retention in Section~\ref{sec:theory1}.
In Section~\ref{sec:theory2} we investigate two possible explanations for
the observed $j_\star$ dichotomy between spirals and ellipticals.
In Section~\ref{sec:theory3} we consider coupling between changes in mass and
angular momentum,
and connect these to evolutionary scenarios for galaxies.
\subsection{Basic constraints}\label{sec:theory1}
The overdense regions in an expanding universe are not spherically symmetric
and exert tidal torques on each other, inducing a net angular
momentum in each collapsing galaxy \citep{1951pca..conf..195H}.
This rotational behavior is usually specified in terms of a dimensionless spin
parameter that
quantifies the dynamical importance of rotation, and
is a combination of fundamental physical quantities:
\begin{equation}\label{eqn:lambda}
\lambda \equiv \frac{J |E|^{1/2}}{G M^{5/2}} ,
\end{equation}
where $J$ is the angular momentum, $E$ is the energy (kinetic and potential),
$G$ is the gravitational constant, and $M$ is the mass
\citep{1969ApJ...155..393P}.\footnote{Recall
that the parameters $(J,E,M)$ can be translated roughly into a more observationally oriented
basis set of rotation velocity, effective radius, and luminosity $(v_{\rm rot}, R_{\rm e}, L)$,
where in approximate terms:
$M \propto L$, $E \propto L^2 R_{\rm e}^{-1}$, and $J \propto v_{\rm rot} L R_{\rm e}$.}
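As a concrete check that Equation~(\ref{eqn:lambda}) is dimensionless, consider a minimal numerical sketch (our own, not from the paper; the input values of $J$, $E$, and $M$ are purely illustrative assumptions):

```python
# Minimal sketch of the spin parameter: lambda = J * |E|^(1/2) / (G * M^(5/2)).
# Units chosen so lambda is dimensionless: M in M_sun, J in M_sun kpc km/s,
# E in M_sun (km/s)^2, and G in kpc (km/s)^2 / M_sun.
G = 4.30091e-6  # gravitational constant, kpc (km/s)^2 / M_sun

def spin_parameter(J, E, M):
    return J * abs(E) ** 0.5 / (G * M ** 2.5)

# Illustrative halo (these numbers are assumptions, not data from the text):
lam = spin_parameter(J=4.76e14, E=-1e17, M=1e12)
print(round(lam, 3))  # -> 0.035, the characteristic value adopted below
```

Any consistent unit system works here; the combination $J|E|^{1/2}/(GM^{5/2})$ cancels to a pure number.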
Whether analyzed through linear tidal torque theory, or through $N$-body simulations
of galaxy assembly, $\lambda$ is predicted to follow an almost lognormal distribution
that is relatively insensitive to cosmological parameters, time, galaxy mass,
and environment (e.g., \citealt{1987ApJ...319..575B,1988ApJ...330..519Z,1995MNRAS.272..570S,1996MNRAS.281..716C,2007MNRAS.378...55M}).
The spin parameter provides a convenient way to characterize DM halos, but
it is not straightforward to connect
$\lambda$ to baryonic galaxies
because it is not a physically conserved quantity (as energy is dissipated).
We instead conduct our theoretical analysis in terms of
the specific angular momentum parameter $j$, as we have done
with the observations.
Along with the mass $M$, $j$ is a quantity that is potentially conserved
at some approximate level during the evolutionary history of a galaxy.
\begin{figure*}
\centering{
\includegraphics[width=7in]{f17.eps}
}
\caption{
Schematic evolution of galaxies in the space of specific angular momentum and mass.
Each point shows a galaxy randomly selected from a simple model (see main text).
Panel (a) shows the initial galactic halos of gas and DM.
Panel (b) shows the gas component only, adopting a baryon fraction of $f_{\rm b}=0.17$,
with an arrow illustrating the direction that a single galaxy takes in this diagram.
Panel (c) shows the stellar component after forming from the gas
with an average relative fraction of $\langle f_\star \rangle=0.1$.
Panels (d) and (e) show the stars of spiral and elliptical galaxies, respectively, after
adopting more realistic variations of $\langle f_\star \rangle$ with mass.
Panel (f) shows the effect of angular momentum loss, with a factor of
$\langle f_j \rangle=0.1$.
Note that these are simple, idealized models, and not every aspect should be
taken literally; e.g., spiral galaxies probably do not exist at masses of
$M_\star \ga 10^{12} M_\odot$.
}
\label{fig:schem2}
\end{figure*}
To re-cast $\lambda$ to $j$, we adopt a $\Lambda$CDM-based
spherically-symmetric halo profile from \citet{1996ApJ...462..563N},
truncated at the virial radius.\footnote{The virial radius is defined
as bounding a region
inside which the mean halo density is a factor of $\Delta_{\rm vir}$
times the critical density $\rho_{\rm crit} \equiv 3 H^2 / (8 \pi G)$.
We adopt a WMAP5 cosmology, with
$H=$~72~km~s$^{-1}$~Mpc$^{-1}$ and $\Delta_{\rm vir}=95.3$ at $z=0$
\citep{2008MNRAS.391.1940M}.
To calculate $E$ for this halo, we use an expression from
\citet{1998MNRAS.295..319M} with a fixed concentration of $c_{\rm vir}=9.7$;
and we ignore variations due to concentration which affect $\lambda$
at the $\sim$~5\% level.
A related spin-proxy parameter, $\lambda^\prime$, is based on a singular
isothermal sphere \citep{2001ApJ...555..240B},
and is $\simeq$~11\% smaller than $\lambda$.}
We then obtain:
\begin{equation}\label{eqn:jlam}
j_{\rm vir} = 4.23\times10^4 \mbox{ } \lambda \left(\frac{M_{\rm vir}}{10^{12} M_\odot}\right)^{2/3} \mbox{km~s$^{-1}$~kpc} .
\end{equation}
We adopt a characteristic value\footnote{This is
based on the average value of $\log\,\lambda$, but throughout this
paper we use shorthand such as $\langle \lambda \rangle$
and $\langle j \rangle$ for log-averages.}
of $\langle\lambda\rangle=0.035$,
along with a 1~$\sigma$ log dispersion of 0.23~dex,
based on a study of relaxed halos in a cosmological simulation
with WMAP5 parameters, by \citet{2008MNRAS.391.1940M}.
The log-averaged numerical coefficient in Equation~(\ref{eqn:jlam}) then becomes
1460~km~s$^{-1}$~kpc.
Other recent studies are generally consistent with these results
at the level of $\sim$~10\%.
The $\alpha=2/3$ exponent is also an explicit prediction of tidal torque theory
\citep{1984ApJ...281...56S,1988MNRAS.232..339H}, and
provides a reasonable approximation to the trends from {\it direct} calculations
of $j_{\rm vir}$\ and $M_{\rm vir}$\ in $N$-body simulations \citep{2010MNRAS.407.1338A}.
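Equation~(\ref{eqn:jlam}) is easy to sketch numerically (our own illustration; note that $4.23\times10^4 \times 0.035 \simeq 1480$, which differs from the quoted log-averaged 1460~km~s$^{-1}$~kpc only through rounding of $\langle\lambda\rangle$):

```python
# Eq. (jlam): j_vir in km/s kpc, for spin lambda and M_vir in solar masses.
def j_vir(lam, M_vir):
    return 4.23e4 * lam * (M_vir / 1e12) ** (2.0 / 3.0)

# The alpha = 2/3 scaling: an 8x larger halo mass gives 8^(2/3) = 4x more j.
assert abs(j_vir(0.035, 8e12) / j_vir(0.035, 1e12) - 4.0) < 1e-9

print(round(j_vir(0.035, 1e12), 1))  # -> 1480.5, cf. the quoted 1460
```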
Equation~(\ref{eqn:jlam}) can be considered as setting firm ``initial conditions''
for galaxies, characterizing their angular momenta near the
time of virialization.
This is shown schematically in panel (a) of Figure~\ref{fig:schem2},
which we have populated with toy-model ``galaxies'' consisting of primordial halos of gas and
DM. Their masses are drawn from a uniform logarithmic distribution, and their
angular momenta from a lognormal distribution using $\langle j_{\rm vir} \rangle$
and $\sigma_{\log j_{\rm vir}}$ as above.
We next consider a series of idealized evolutionary steps that allow us to
parameterize evolution in the $j$--$M$ diagram.
We assume that the baryons consist initially of gas
that is well mixed with the dark matter of its parent halo, and that does not
collapse within the halo until after the linear and translinear
regimes of tidal torque when most of the angular momentum is acquired.
The gas may then be
assumed to have the same value of $j$ as the halo, which
we show in panel (b) as a simple shift of the points to the left,
according to a cosmological baryon fraction of $f_{\rm b}=0.17$
\citep{2011ApJS..192...18K}.
In panel (c) we show what happens in a simple case
where a fraction of the baryons form into stars, with a particular value
of $\langle f_\star\rangle=0.1$, and a dispersion of $\sigma_{\log f_\star}=0.15$~dex.
Again, $j$ is assumed to be conserved, and the galaxies shift to the left.
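A toy reconstruction of panels (a)--(c) might look as follows (a sketch under the stated assumptions, not the authors' actual code; the mass range and sample size are our own choices):

```python
# Panels (a)-(c): masses log-uniform, spins lognormal with <lambda> = 0.035
# and 0.23 dex dispersion; the baryon and star-formation steps rescale M
# while conserving j (horizontal, leftward shifts in the j--M plane).
import random
random.seed(1)

F_B, F_STAR = 0.17, 0.1  # baryon fraction and stellar fraction

halos = []
for _ in range(500):
    logM = random.uniform(11.0, 13.5)      # log10 M_vir/M_sun (assumed range)
    loglam = random.gauss(-1.456, 0.23)    # log10(0.035) ~ -1.456
    M = 10 ** logM
    j = 4.23e4 * 10 ** loglam * (M / 1e12) ** (2 / 3)  # Eq. (jlam)
    halos.append((M, j))

# Panel (b): gas only; panel (c): stars only.  j is unchanged at each step.
gas = [(F_B * M, j) for M, j in halos]
stars = [(F_STAR * Mg, j) for Mg, j in gas]
```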
It is also usually assumed, though not required by the diagram,
that this process involves the formation of a thin
stellar disk whose collapse was halted by the balance between gravity and centrifugal
force.
Our analysis does however assume that the baryon collapse extends all the way out
to the halo virial radius. This conventional assumption is at some level
implausible since DM collapse and gas cooling are governed by different physical
scales in space and time. A more generalized approach where the baryon collapse
radius is allowed to vary
will be considered in Section~\ref{sec:bias}.
Note that the $f_\star$ parameter can take on a more general meaning
of {\it net} stellar mass fraction relative to initial gas mass, which
allows for stars that are accreted by or ejected from the galaxy.
We will shortly discuss a more refined model where $f_\star$ varies systematically
with mass, but for now we continue with our very simplified constant-$f_\star$
model in order to consider its basic implications.
Our next model ingredient is an idealized process of angular momentum loss, with
no concomitant change in mass, which we quantify by a fractional $j$ net retention
factor of $f_j$. An example of such a process would be internal $j$ transfer from
the stars to the DM halo.
Given the parameters $f_\star$ and $f_j$, we may then translate
the $j$--$M$ relation~(\ref{eqn:jlam}) for DM halos to an equivalent one for the
stellar components of galaxies:
\begin{equation}\label{eqn:jslam}
j_\star = 2.92\times10^4 \, f_j \, f_\star^{-2/3} \, \lambda \left(\frac{M_\star}{10^{11} M_\odot}\right)^{2/3} \mbox{km~s$^{-1}$~kpc} ,
\end{equation}
where again using the prediction for
$\langle\lambda\rangle$,
the numerical coefficient for $\langle j_\star \rangle$
becomes 1010~km~s$^{-1}$~kpc.
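The coefficient in Equation~(\ref{eqn:jslam}) follows from Equation~(\ref{eqn:jlam}) by substituting $M_{\rm vir}=M_\star/(f_{\rm b}\,f_\star)$ and rescaling to $10^{11}\,M_\odot$. A quick arithmetic check (ours) reproduces it to within a couple of percent, the residual presumably reflecting rounding of the inputs:

```python
# Coefficient of Eq. (jslam): 4.23e4 * f_b^(-2/3) * (1e11/1e12)^(2/3),
# from substituting M_vir = M_star / (f_b * f_star) into Eq. (jlam).
f_b = 0.17
coeff = 4.23e4 * f_b ** (-2.0 / 3.0) * 0.1 ** (2.0 / 3.0)
print(round(coeff, -2))  # -> 29700.0, within ~2% of the quoted 2.92e4
```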
This relation is identical to our parameterized fit to the
observational data with Equation~(\ref{eqn:loglog}),
modulo the numerical factors and the value for the exponent $\alpha$.
Since the observed $j_\star$--$M_\star$\ relation can be approximated with $\alpha=2/3$
and a normalization $j_0$, then we
can express the difference between observation and theory through
a simple combination of the parameters $f_j$ and $f_\star$:
\begin{equation}\label{eqn:comb}
j_0 = 1010 \, \langle f_j \, f_\star^{-2/3} \rangle \, {\rm km \, s^{-1} \, kpc} .
\end{equation}
Equations~(\ref{eqn:jlam})--(\ref{eqn:comb})
are simple but powerful, allowing us
to connect the visible properties of galaxies
to their invisible DM halos, using some simple parameters and assumptions.
They also provide robust observational constraints on some basic
characteristics of galaxy formation that are still far beyond the ability
of raw theory to predict reliably.
{\it The average value of $f_j \, f_\star^{-2/3}$ for a population of galaxies
can be determined by observations as a strict constraint on theory.}
We can immediately use Equation~(\ref{eqn:comb}) in combination with the observational
results for $j_0$ from Table~\ref{tab:loglog} for fixed $\alpha=2/3$.
We find that $\langle f_j \, f_\star^{-2/3}\rangle \simeq$~1.9 for Sb--Sm spirals
and $\simeq$~0.5 for ellipticals.
For example, if we assumed an arbitrary $\langle f_\star \rangle=$~0.2 for both
types of galaxies, then we would infer $\langle f_j\rangle\simeq$~0.65 for spirals
and $\simeq$~0.17 for ellipticals.
This means a systematic difference in net angular momentum retention between
the two galaxy types which, although there are many further details to work through
below, will hold up as a basic result of this paper.
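The back-of-envelope inversion in this paragraph can be sketched as follows (our own illustration of Equation~(\ref{eqn:comb}), using the $\simeq$~1.9 and $\simeq$~0.5 combinations quoted above):

```python
# Invert Eq. (comb): given the fitted combination <f_j f_star^(-2/3)> and an
# assumed f_star, the retention factor is f_j = combo * f_star^(2/3).
def infer_fj(combo, f_star):
    return combo * f_star ** (2.0 / 3.0)

# Assume <f_star> = 0.2 for both galaxy types, as in the worked example:
print(round(infer_fj(1.9, 0.2), 2))  # spirals     -> 0.65
print(round(infer_fj(0.5, 0.2), 2))  # ellipticals -> 0.17
```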
To derive firmer constraints on $f_j$, we need to break the $f_\star$--$f_j$
degeneracy by introducing well-motivated values for $f_\star$,
for both spirals and ellipticals.
We also need to consider the complication that $f_\star$ cannot in reality
have a simple, constant value, even on average.
This is because the observed luminosity function of galaxies has
a dramatically different shape from the predicted mass function of DM halos
(e.g., \citealt{1978MNRAS.183..341W,1991ApJ...379...52W,2002ApJ...569..101M,2003MNRAS.339.1057Y,2010ApJ...710..903M}).
Below the characteristic ``knee'' luminosity $L^*$,
the galaxies are observed to follow a shallower slope than the DM mass function
$dN/dM \propto M^{-2}$, while at higher luminosities, the observations are
{\it steeper} than the predictions.
The implication is that the fraction of luminous-to-dark matter
declines rapidly for galaxies fainter and brighter than $L^*$;
i.e., assuming a constant $f_{\rm b}$, the function $\langle f_\star\rangle(M_{\rm vir})$ has a
characteristic inverted U shape.
This empirical trend is thought to be caused physically
by various feedback effects that inhibit
star formation and become increasingly important in the low- and high-mass regimes
(such as stellar and supermassive black hole feedback, respectively; e.g.,
\citealt{1993ApJ...402...15L,1994MNRAS.271..781C,1999MNRAS.310.1087S,2006MNRAS.370..645B,2006MNRAS.365...11C}).
Regardless of the explanation, any self-consistent
$\Lambda$CDM-based model must incorporate a strong,
systematic dependence of the star formation efficiency on mass, $\langle f_\star\rangle(M_{\rm vir})$.
One might be concerned that such a mass dependence would transform an underlying
$j \propto M^{2/3}$
relation for DM halos into something very different for
the stellar components of galaxies, and quite unlike our observational results.
To check this, we will modify our simple model above to allow for a varying
function $\langle f_\star\rangle(M_{\rm vir})$.
Since this function is a tracer of undetermined baryonic physics during galaxy
evolution, there is not yet any robust theoretical prediction for it,
but fortunately it can be estimated empirically.
This is done in an {\it average} sense through various techniques such
as weak gravitational lensing, stacked satellite kinematics, and
matching up the mass and luminosity functions mentioned above.
There have been many studies that estimated $\langle f_\star\rangle(M_{\rm vir})$, but
few that did so separately for different galaxy types, which is important for our analysis.
We therefore adopt the relations for $\langle f_\star\rangle(M_\star)$ derived
by \citet{2010MNRAS.407....2D}.
For the spiral galaxies, we use their relation for ``late-type'' galaxies:
\begin{equation}\label{eqn:fsMltg}
\langle f_\star\rangle(M_\star) = \frac{f_0 \left(M_\star/M_0\right)^{1/2}}{\left[1+\left(M_\star/M_0\right)\right]^{1/2}} .
\end{equation}
Below a characteristic mass $\log\,(M_0/M_\odot) \simeq 10.8$,
this relation has a dependence
$\langle f_\star\rangle \propto M_\star^{1/2}$. At higher masses, it
approaches a constant, $f_0 \simeq 0.33$.
Here we have converted the Dutton~et~al.\ results to our definition of the virial mass
and to our adopted stellar IMF, while using $h=0.72$.
For elliptical galaxies, we adopt the Dutton et al.\ relation
for ``early-type'' galaxies:\footnote{There has been very little work along these
lines for elliptical and lenticular galaxies separately, but there is some
recent evidence that the halo masses for these types are the same
\citep{2011ApJ...742...16T}.
Note also that the Dutton~et~al.\ relations were derived for somewhat smaller mass ranges
than covered by our data,
and that their stellar mass determinations
may not be fully consistent with our methods.}
\begin{equation}\label{eqn:fsMetg}
\langle f_\star \rangle (M_\star) = \frac{f_0 \left(M_\star/M_0\right)^{0.15}}{\left[1+\left(M_\star/M_0\right)^2\right]^{1/2}} ,
\end{equation}
where $\log\,(M_0/M_\odot)\simeq11.2$, $f_0 \simeq 0.14$,
and the asymptotic behaviors at low and high masses are
$\langle f_\star\rangle \sim M_\star^{\,0.15}$ and
$\langle f_\star\rangle \sim M_\star^{-0.85}$, respectively.
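The two relations and their quoted asymptotic slopes can be verified numerically (a sketch; the function names are ours):

```python
import math

# Eqs. (fsMltg) and (fsMetg) with the parameter values quoted in the text.
def fstar_ltg(M_star):  # late types: f0 = 0.33, log10(M0/M_sun) = 10.8
    x = M_star / 10 ** 10.8
    return 0.33 * x ** 0.5 / (1 + x) ** 0.5

def fstar_etg(M_star):  # early types: f0 = 0.14, log10(M0/M_sun) = 11.2
    x = M_star / 10 ** 11.2
    return 0.14 * x ** 0.15 / (1 + x ** 2) ** 0.5

def logslope(f, M, eps=1e-3):
    """Local logarithmic slope d(log f)/d(log M) by finite differences."""
    return (math.log10(f(M * (1 + eps))) - math.log10(f(M))) / math.log10(1 + eps)

print(round(logslope(fstar_ltg, 1e8), 2))   # -> 0.5   (low-mass late types)
print(round(logslope(fstar_etg, 1e8), 2))   # -> 0.15  (low-mass early types)
print(round(logslope(fstar_etg, 1e14), 2))  # -> -0.85 (high-mass early types)
# Ellipticals have lower f_star than spirals at the same stellar mass:
assert fstar_etg(1e11) < fstar_ltg(1e11)
```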
One of the key features to notice here is that an elliptical galaxy typically
has a much lower value of $f_\star$ than a spiral with the same stellar mass:
i.e., ellipticals inhabit systematically more massive DM halos, which in many
cases extend up to ``group'' masses of $M_{\rm vir}\sim10^{13} M_\odot$ and beyond
(see also \citealt{2011A&A...534A..14V}).
These $\langle f_\star\rangle(M_\star)$ relations can be uniquely transformed to
$\langle f_\star\rangle(M_{\rm vir})$,
and taken together define an inverted U-shaped trend as discussed above.
The relations were constructed using a compilation of different literature
results,
which showed an encouraging degree of mutual consistency,
so we conclude that the average trends above are probably reliable at the
$\sim$~50\% ($\sim$~0.2~dex) level.
There may also be non-zero galaxy-to-galaxy variations in $f_\star$
at a fixed mass and type; the value of this scatter is less well established, but
recent analyses suggest that it may be $\sim$~0.15~dex
\citep{2010ApJ...717..379B,2011MNRAS.410..210M}.
We adopt this as our default value, which fortunately is smaller than
the expected dispersion in halo spin of $\simeq$~0.23~dex and so will not
have much impact on our conclusions.
Using these variable $\langle f_\star\rangle(M_{\rm vir})$ relations to construct mock $j_\star$--$M_\star$\
data sets as before, we plot the results in panels (d) and (e) of Figure~\ref{fig:schem2}.
For both spirals and ellipticals, we can see that the curvature in
$\langle f_\star\rangle (M_{\rm vir})$
translates to systematic deviations in the $j_\star$--$M_\star$\ relation from a simple
$\alpha=2/3$ power-law.
We will investigate how these deviations compare to real observations
in the next subsection.
Panels (d) and (e) of Figure~\ref{fig:schem2}
also demonstrate that at masses of $M_\star \ga 10^{11} M_\odot$,
the ellipticals are predicted to have {\it higher} $j_\star$\ than the spirals of the same mass,
owing to their differences in $f_\star$. The more massive DM halos of
ellipticals ought to provide larger virial-radius lever arms that lead
to larger $j_{\rm vir}$, and therefore larger $j_\star$---{\it if}
they retain as much fractional angular momentum as spiral galaxies do.
Therefore the observed offset in $j_\star$--$M_\star$\ between spirals and ellipticals
implies an even larger difference in $\langle f_j\rangle$ than in the simple example above
with fixed $\langle f_\star\rangle=0.2$.
We will examine this apparent $f_j$ dichotomy further in the next
subsection.
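The size of the predicted offset is easy to estimate (our own arithmetic): at fixed $M_\star$, and for equal $f_j$ and $\lambda$, Equation~(\ref{eqn:jslam}) gives $j_{\rm ell}/j_{\rm sp}=(f_{\star,{\rm sp}}/f_{\star,{\rm ell}})^{2/3}$, which we can evaluate using the empirical $\langle f_\star\rangle(M_\star)$ relations:

```python
# Predicted j_star offset between ellipticals and spirals at fixed M_star,
# assuming equal f_j and lambda (Eq. (jslam)); f_star from Eqs. (fsMltg/fsMetg).
def fstar_ltg(M_star):
    x = M_star / 10 ** 10.8
    return 0.33 * (x / (1 + x)) ** 0.5

def fstar_etg(M_star):
    x = M_star / 10 ** 11.2
    return 0.14 * x ** 0.15 / (1 + x ** 2) ** 0.5

M = 3e11  # M_sun, in the high-mass regime where ellipticals dominate
ratio = (fstar_ltg(M) / fstar_etg(M)) ** (2.0 / 3.0)
print(round(ratio, 1))  # -> 2.6: ellipticals predicted ~2.6x higher j_star
```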
As a final illustrative exercise,
we generate a mock data set for elliptical galaxies as in panel (e), then
adopt $\langle f_j\rangle=0.1$,
with an assumed dispersion of $\sigma_{\log f_j}=0.15$~dex.
The results are plotted in panel (f), where we see that the galaxies
have coincidentally returned to nearly the original
$j$--$M$ sequence for halos, modulo a little curvature and increased scatter.
Figure~\ref{fig:schem2} thus shows how one could map the observed
$j_\star$--$M_\star$\ properties of a population of galaxies (panel f) to a theoretical
prediction for their halos (panel a), and recover some basic parameters describing
galaxy formation (see Equation~(\ref{eqn:comb})).
This formulation is closely related to a classic theoretical framework
for the formation of spiral galaxy disks, whose observed sizes and rotation velocities
are generally
consistent with the approximate conservation of primordial specific angular momentum
($f_j \sim 1$; e.g., \citealt{1980MNRAS.193..189F,1997ApJ...482..659D,1998MNRAS.295..319M}).
However, our formulation is more general by including also the early-type galaxies,
as well as the bulge components within spiral galaxies (which we will discuss below).
\subsection{Investigating the spread in $j_\star$}\label{sec:theory2}
As just discussed, the observed dichotomy between the $j_\star$--$M_\star$\
relations of spirals and ellipticals may imply differences in
their specific angular momentum retention, expressed here by the factor $f_j$.
This interpretation is based on an implicit assumption that the parent
halos of both galaxy types had the same average $\lambda$.
However, a natural halo-to-halo scatter in $\lambda$ is expected,
and one could instead imagine the other extreme case, in which
$f_j$ is the same for the two galaxy types, while
their halo $\lambda$ values are systematically different
(e.g., \citealt{1982MNRAS.200..585K,1984Natur.311..517B,1996MNRAS.282..436C}).
In other words, spirals and ellipticals are drawn from the high-
and low-spin tails of the $\lambda$ distribution, respectively.
We call these two alternatives the ``variable $f_j$''
and ``spin bias'' scenarios.
In reality, a mixture of both scenarios may be present, which would be
difficult to disentangle,
but we can begin by investigating these two limiting cases in detail.
Thus the aim of this section is to test how consistent each of
these cases is with the data.
The reason we can make headway on this issue is that
there are predictions from $\Lambda$CDM not only for the average value of $\lambda$,
but also for its probability distribution, i.e., a lognormal
with a characteristic dispersion as discussed in Section~\ref{sec:theory1}.
We continue to focus on the spirals and ellipticals as
the two interesting extremes of the observed $j_\star$\ range (at fixed $M_\star$),
and consider the lenticulars as intermediate either in
$f_j$ or in $\lambda$.
We begin with the spin-bias scenario. If correct, adopting a constant $f_j$ value for
a complete, unbiased galaxy sample would allow us to work backwards to infer
the underlying $\lambda$ distribution, which could then be
compared to the theoretical prediction.
One might think that we have already implicitly carried out this test
by examining the residuals from the observed $j_\star$--$M_\star$\ relation in
Section~\ref{sec:hist} and Figure~\ref{fig:histdiff}.
However, that analysis did not account for the differences in $f_\star$
between different galaxy types.
We therefore proceed with a more direct comparison to theory by
generating $j_\star$--$M_\star$\ model predictions for each galaxy type, and calculating
the observed residuals with respect to these models.
We use Equation~(\ref{eqn:jslam}) with $\lambda=\langle \lambda \rangle=0.035$,
along with the empirical
$\langle f_\star\rangle(M_\star)$ relations (\ref{eqn:fsMltg}) and (\ref{eqn:fsMetg}),
and an ad-hoc $\langle f_j\rangle=0.55$,
to predict a mean $j_\star$--$M_\star$\ relation for each galaxy type.
We then derive the residuals $\Delta \log j_\star$ by subtracting the model from
the observations as in Equation~(\ref{eqn:Dj}).
If the spin bias scenario is correct, then the properly reweighted distribution
of these residuals ought to follow a lognormal with dispersion
$\sigma_{\log j_\star}\simeq0.27$ (which accounts for observational
errors and the intrinsic scatter in $f_\star$).
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f18.ps}
}
\caption{Distributions of residuals in the
observed stellar specific angular momentum,
with respect to the mean theoretical prediction for $\Lambda$CDM halos,
after assuming a fixed $j$-retention parameter, $f_j=0.55$.
As in Figure~\ref{fig:histdiff}, red, green, and blue histograms in the top
panels show the residuals for elliptical, lenticular, and spiral galaxies,
respectively. The bottom panel shows the same distribution, renormalized
for the relative frequencies of galaxies in the nearby universe.
The curve shows a predicted lognormal distribution for comparison.
The distribution of residuals for spiral galaxies is narrower than
expected from the distribution of halo spins, while the overall galaxy distribution
shows clear departures from the lognormal model
(with an excess at low $j_\star$\ and a deficit at high $j_\star$).
}\label{fig:histdiff3}
\end{figure}
\begin{figure*}
\centering{
\includegraphics[width=7.0in]{f19.eps}
}
\caption{Stellar specific angular momentum vs.\ stellar mass, comparing
mock data generated from $\Lambda$CDM-based models (left- and right-hand panels)
to real data (middle panel).
The model on the left includes halo spin bias, while the model on the right
assumes systematic differences in angular momentum retention
between spirals and ellipticals.
Blue open squares and red filled circles show spirals and ellipticals, respectively,
with the solid blue and dotted red lines showing the best-fit power-laws
for the real data.
The relation for halos is also shown for reference as a gray dot-dashed line.
The mock data sets include intrinsic scatter in the parameters $\lambda$
and $f_\star$ at a given mass, but {\it not} observational errors.
The simple variable-$f_j$ mock data on the right resemble the real data,
while the spin-biased model does not.
}\label{fig:mock2}
\end{figure*}
Figure~\ref{fig:histdiff3} presents histograms of these residuals, both
by separate galaxy types (top panel), and in combination (bottom panel), which
uses a renormalization
by frequency of galaxy types from the
ATLAS$^{\rm 3D}$\ survey, as in Section~\ref{sec:hist}.
We find that overall, the total distribution of $\Delta j_\star$
has approximately the predicted width.
However, the distribution in detail appears significantly different from a lognormal:
there is an excess of low-$\Delta j_\star$ galaxies,
and a missing tail at high-$\Delta j_\star$.
In particular, there are too many elliptical galaxies in the nearby universe to
be explained by the tail of low-spin halos.\footnote{\citet{2007MNRAS.375..163H}
also found in attempting to infer halo $\lambda$ values for spirals and ellipticals
that an ad-hoc rescaling of the elliptical values was required in order
to avoid a double-peaked $\lambda$ distribution.}
This histogram analysis appears to exclude a simple spin-bias scenario,
but there are some caveats,
such as small sample sizes and the assumption of perfect lognormality
for the distribution of halo spins.
We can make further progress by recognizing that the scenario makes predictions
for the $j_\star$\ residuals not only for all galaxies combined, but also as a function
of mass. This is because $\lambda$ is not predicted to depend on halo mass,
while the relative frequencies of different galaxy types are
observed to vary strongly.
One can then immediately see a serious problem with the spin-bias scenario:
at high masses, almost all of the galaxies are ellipticals, which should thus
be an unbiased population representing the full range of halo spins
(\citealt{2012MNRAS.421..608D} made a similar point for low-mass disk galaxies).
We investigate this issue in more detail by constructing
a mock data set as in
Figure~\ref{fig:schem2}, while this time incorporating a schematic model for spin bias.
We now assume that all galaxies have $f_j=0.55$, with
the late-types inhabiting the high-spin halos, and the early-types the low-spin ones.
Using the number densities of early- and late-types as a function of $M_\star$\ from ATLAS$^{\rm 3D}$,
we use the $\langle f_\star\rangle(M_\star)$ relations to
translate this to the relative fractions at fixed halo mass
(which can be quite different from the fractions at fixed $M_\star$).
We then randomly draw a distribution of biased spin parameters for each galaxy type;
e.g., if spirals comprise 25\% of galaxies at a given mass,
we draw mock spirals from the top quarter of the spin distribution.
We also adopt a similar mass range and total number of galaxies as in our
real data sets.
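A stripped-down version of this biased drawing (ours; it fixes the spiral fraction at 25\% rather than varying it with mass) could look like:

```python
# Spin-bias toy: at fixed mass, spirals (fraction p) draw spins from the top
# p quantile of the lognormal lambda distribution, ellipticals from the rest.
import random
from statistics import NormalDist

random.seed(2)
logspin = NormalDist(mu=-1.456, sigma=0.23)  # log10 lambda distribution

def biased_spin(is_spiral, p_spiral):
    r = random.random()
    u = 1 - p_spiral + p_spiral * r if is_spiral else (1 - p_spiral) * r
    u = min(max(u, 1e-9), 1 - 1e-9)  # keep strictly inside (0,1) for inv_cdf
    return 10 ** logspin.inv_cdf(u)

p = 0.25  # spiral fraction at this mass (illustrative)
spirals = [biased_spin(True, p) for _ in range(200)]
ellipticals = [biased_spin(False, p) for _ in range(200)]

# By construction the two populations do not overlap in spin:
assert min(spirals) >= max(ellipticals)
```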
We show the resulting $j_\star$--$M_\star$\ mock data set in the left-hand panel
of Figure~\ref{fig:mock2}, which can be compared to the real data in the middle panel.
We see that the low-mass ellipticals could indeed be drawn from only the low-spin tail
because of their rarity. However, at high masses the ellipticals are common
and their predicted $j_\star$\ values are similar to the spirals.
To salvage the spin-bias scenario would thus seem to require a mass-dependent
bias, which seems epicyclic and therefore not appealing.\footnote{There
may be reasons of stability for ellipticals to be dominant at high masses (e.g.,
\citealt{1997ApJ...482..659D,1998ApJ...507..601V,2012MNRAS.421..608D}),
but this ostensibly changes the {\it morphology} and not $j_\star$.}
The biasing idea can also be discredited by environmental considerations:
there are strong observational correlations between environmental
density and galaxy morphology, but as mentioned earlier,
halo spins in theory depend only weakly on environment
(which has some observational support in the case of disk galaxies;
\citealt{2008MNRAS.388..863C,2008MNRAS.391..197B}).
In addition, if we consider disks and bulges to be manifestations of the same
$j_\star$--$M_\star$\ trends as spiral and elliptical galaxies, then the coexistence of
these subcomponents within the same galaxies provides a clear argument against halo
spin bias.
We next turn to the variable-$f_j$ scenario, where spirals and ellipticals
are drawn from the same underlying distribution of halo spins, but their
baryonic components have systematic differences in retaining $j$.
Given that we know $\langle f_\star\rangle$ for each galaxy type, Equation~(\ref{eqn:comb})
suggests that we can immediately use the observed $j_0$ normalization to infer $\langle f_j\rangle$.
However, the situation is more complicated since $\langle f_\star\rangle$ varies with mass
and therefore one does not expect an exact
$\alpha=2/3$ for fixed $f_j$ (recall Figure~\ref{fig:schem2}(d,e)).
As we did for the spin-bias scenario, we again construct mean $j_\star$--$M_\star$\
relations for each galaxy type, while now leaving $f_j$ as a free parameter.
Carrying out least-squares fits to the data, we find
values of $\langle f_j\rangle=0.56\pm0.03$ and $\langle f_j\rangle=0.12\pm0.01$ for
the spiral and elliptical galaxies, respectively.
The difference in $\langle f_j\rangle$ of a factor of $4.7\pm0.8$ is slightly larger than
the observed $j_\star$--$M_\star$\ relative offset, as anticipated in the
previous section because of the differences in $\langle f_\star\rangle$
(e.g., Equation~(\ref{eqn:comb})).\footnote{Given the degeneracy
between $f_j$ and $f_\star$, in principle the inferred $f_j$ dichotomy
could be an artifact of errors in our adopted values for $\langle f_\star\rangle$.
However, these errors would have to amount to a combined factor of $\sim$~5:
e.g., with true $\langle f_\star\rangle \sim$~0.1 for the spirals
along with $\sim$~0.2 for the ellipticals,
rather than $\sim$~0.25 and $\sim$~0.1.}
The next step is to verify that these best-fit models
provide reasonable representations of the data.
We again construct mock data sets, using the new $f_j$ models
(with 0.15~dex of scatter in $f_\star$),
and show the results in the right-hand panel of Figure~\ref{fig:mock2}.
Here we see that, unlike the spin bias model, these variable-$f_j$ models provide
a remarkably good match to the data. The curvature of the predicted $j_\star$--$M_\star$\
relation turns out to be imperceptible, once we account for
observational errors, small-number statistics, and a limited mass range.\footnote{
Future empirical estimates of $j_\star$\ and $M_\star$\ over a larger dynamic range
could provide a strong test of constant-$f_j$ scenarios.
Given the observational difficulty of measuring $j_\star$\ at high masses where the
underlying halos pertain to entire galaxy groups and clusters, the
best prospect for improvement would be to study lower-mass galaxies,
with $\log\,(M_\star/M_\odot) \la$~9.}
Furthermore, the observed slope for the spirals is shallower than
$\alpha=2/3$, a deviation that the model itself predicts.
This comparison does not entirely succeed in accounting for
the {\it scatter} about the $j_\star$--$M_\star$\ relations.
As can be seen in Figure~\ref{fig:mock2},
the real observations appear to follow {\it tighter} trends than
predicted by our simple model, for both spirals and ellipticals.
The model fits give rms scatters of $\sigma_{\log j_\star}=$~0.18~dex and
0.25~dex for the spirals and ellipticals,
which is already {\it less} than the expected scatter
of 0.27~dex from $\lambda$ and $f_\star$,
even without allowing for measurement errors, and scatter in $f_j$
(see also the histogram of spirals in the
top panel of Figure~\ref{fig:histdiff3}, compared to the curve in the lower panel).
One possible explanation for this reduced scatter
is that the baryonic processes responsible for $j$-loss could act
as some kind of ``attractor'' to specific values of $f_j$
(cf.\ \citealt{2000ApJ...545..781D}).
Alternatively, halo spin bias could be at work in a secondary role,
even while $f_j$ variation is the primary effect.\footnote{It has
been suggested that later type galaxies are biased to {\it lower}
spin halos \citep{2004ApJ...612L..13D}. If correct,
the net impact on the $j_\star$\ scatter is unclear, but one implication
is that the $f_j$ dichotomy between spirals and ellipticals would
be even larger than in our no-bias scenario.}
Our overall conclusion is that the variable-$f_j$ model
reproduces the $j_\star$--$M_\star$\ observations well in general,
is fairly insensitive to the exact trend of $\langle f_\star\rangle$ with mass,
and does not require any additional variation of $\langle f_j\rangle$ with mass.
The spirals appear to
have been fairly efficient in preserving the
specific angular momentum imprints of their parent halos, while
ellipticals have lost the vast majority of theirs.
This is a plausible scenario from a physical standpoint
if we return to our proposed framework where all galaxies are composed of bulges and disks
(Figure~\ref{fig:schem1} and Section~\ref{sec:replace}).
Unfortunately, we do not have $\langle f_\star\rangle(M_\star)$ relations for the bulges and disks
themselves in order to
directly derive their $\langle f_j\rangle$ trends. However, given the similarities in $j_\star$--$M_\star$\
that we found between these subcomponents and the galaxies overall,
it seems reasonable to suppose that bulges and disks have $\langle f_j\rangle\sim$~0.1
and $\sim$~0.6, respectively, and that these values are characteristic
of two distinct modes of galaxy evolution.\footnote{One concern here is that for
more bulge-dominated galaxies, one might expect the disk-only $\langle f_\star\rangle$ to be
relatively low, and thus the disk $j_\star$\ to appear relatively high.
However, the observations are somewhat suggestive of the {\it opposite} trend,
i.e., disk $j_\star$\ anti-correlating with $B/T$.}
We will return to this topic in the next section.
Our conclusions about {\it spiral} galaxies echo similar findings in the literature,
which have typically inferred $\langle f_j\rangle \sim$~0.5--0.6 overall
(e.g., \citealt{2000ApJ...538..477N,2007ApJ...654...27D,2009ASPC..419....3B,2012MNRAS.421..608D,2012MNRAS.424..502K}).
In particular, \citet{2012MNRAS.421..608D} used a
model parametrization similar to our $(f_\star,f_j)$,
and found that $\langle f_j\rangle$ is fairly constant over a wide mass range.
Note that these authors used a
parametrized mass model
to fit the Tully-Fisher relation, which was then converted to
an average $j_{\rm vir}$--$M_{\rm vir}$ relation.
Our approach works instead in the space of observables, $j_\star$--$M_\star$, which is more direct
and transparent while also allowing us to analyze galaxy-to-galaxy
variations.\footnote{As
a consistency check, we also take a slightly different approach
and make a model prediction for the mean relation
between size and rotation~velocity for spirals (cf.\ \citealt{1998MNRAS.295..319M,2004ASSL..319..341B}).
We adopt a value of $\langle f_\star \rangle=0.56$, and rather than assuming
some function $\langle f_\star\rangle(M_\star)$, we relate the disk rotation and the virial
circular velocity by $v_s \simeq 1.2 v_{\rm vir}$. Given $\langle \lambda \rangle=0.035$,
a linear relation is then predicted between $v_s$ and $a_{\rm e}$, which we show in the
right-hand panel of Figure~\ref{fig:rot}. To zeroth order, this prediction agrees
well with the spiral data.}
Our finding for the {\it ellipticals} is novel,
as neither the predictions for $j_\star$--$M_\star$\ of ellipticals, nor their
subsequent $f_j$ inferences, have been well-studied before now.
We have not carried out a comparable analysis on {\it lenticulars}
since the constraints on them are less certain.
Qualitatively speaking, their observed $\log j_\star$ normalization is between the
other two galaxy types, which for plausible values of $\langle f_\star\rangle$
implies $\langle f_j \rangle$ values that are intermediate to
those for the spirals and ellipticals.
In addition, there may be two subpopulations of lenticulars
as discussed in Section~\ref{sec:resid}, with low and high $\langle f_j\rangle$.
There are two interesting implications about these findings.
One is that we now have a remarkably simple and successful framework for
describing and connecting some of the most fundamental
properties of galaxies.
The observable galaxies may be connected to their unobservable host halos
using $j_\star$\ and $M_\star$\ along with some relatively basic parameters $f_j$ and $f_\star$.
Such a model may appear implausibly oversimplified in the light of our
ever-expanding awareness of the complexities of galaxy formation physics,
but for some reason it seems to work.
The other implication is that these parameters may give us insight into the
formation of disks and bulges, and into the origins of the Hubble sequence.
To illustrate this point, we use our modeling procedures as described
above to work backwards and estimate
$f_\star$ and $f_j$ values for {\it individual} galaxies.
The outcome is shown in Figure~\ref{fig:fvsf}, where one should focus on
the {\it average} results for each galaxy type, since
no attempt was made to model the scatter in $f_\star$ and $\lambda$.
The general picture that we obtain is that
spiral and elliptical galaxies are clumped
around two regions of parameter space:
$(f_\star,f_j) \sim (0.25,0.55)$, and $\sim (0.1,0.1)$, respectively.
{\it Whatever processes formed and shaped these galaxies were
efficient at both forming stars and retaining total specific angular momentum
for the spirals, and inefficient for the ellipticals.}
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f20.ps}
}
\caption{Specific angular momentum retention fraction plotted against
stellar mass fraction,
as inferred for individual galaxies, with symbols as in
Figure~\ref{fig:mock2}.
The dotted diagonal line is the one-to-one relation,
and the gray double-arrow shows the direction of the
uncertainties as driven by the $f_j \propto f_\star^{2/3}$ degeneracy.
The width of the shaded region around $f_j=1$ corresponds to the
scatter in spin expected for $\Lambda$CDM halos.
The black arrows show schematic vectors from 1:1 and 1:10 mergers,
as discussed in Section~\ref{sec:theory3}.
The spiral and elliptical galaxies occupy distinct regions of the diagram,
while a simple model implies that converting spirals into ellipticals would
require a very large amount of growth through $\sim$~1:3 mergers.
}\label{fig:fvsf}
\end{figure}
As discussed in Section~\ref{sec:intro}, early cosmologically-based simulations
struggled to reproduce such high $f_j$ values for spirals, finding typically
$f_j \sim$~0.01--0.1, which
was later realized to be due in part to numerical artifacts,
and in part to inadequate feedback recipes.
Feedback could be particularly important for slowing down gas collapse and star formation
so that the baryons are not affected by torque-driven $j$ transfer during early mergers
\citep{1998MNRAS.300..773W,2003ApJ...596...47S,2012ApJ...749..140H,2012MNRAS.423.1726S}.
However, whatever physical processes are now invoked to explain the $f_j$ values
of spirals must simultaneously allow for much lower $f_j$ in ellipticals
(e.g., by having less efficient feedback; \citealt{2008MNRAS.387..364Z,2008MNRAS.389.1137S}).
\subsection{Physically-motivated models for galaxy evolution}\label{sec:theory3}
\begin{figure*}
\centering{
\includegraphics[width=7.1in]{f21.eps}
}
\caption{Schematic evolution of galaxies in specific angular momentum and mass,
as in Figure~\ref{fig:schem2}, but now considering evolution through gas outflows,
stripping, biased baryon collapse, and galaxy mergers.
Panel (a) shows initial conditions for pre-collapse gas (dots), and
possible evolutionary vectors from outflows and stripping
(arrows; see text for details).
Panel (b) shows the collapse of gas and formation of
stars at some initial redshift $z_{\rm i}$, preserving the $j_\star$--$M_\star$\
values until a final redshift $z_0$ (black arrow to the left, with dots illustrating
a population of galaxies).
The halo grows until redshift $z_0$ with no further star formation (black arrow to
upper right). At $z_0$, the expected trend with perfect $j$ conservation is the
dotted line, and net values for $f_\star$ and $f_j$ would be inferred using the
leftward and downward gray arrows, respectively.
Panel (c) shows initial conditions for DM halos as gray dots, and
schematic vectors of evolution through mergers (gray arrows):
mass growth (to the right), specific angular momentum decrease through
cancellation of the spin components (downwards), and increase through the
orbital component (upwards). The net evolution is a black diagonal arrow to the upper
right.
The upper dotted track marks the initial conditions for stellar disks, and
the blue dots show disks
after having undergone four 1:1 mergers each.
The upper black curved vector illustrates the typical evolution of a galaxy,
with each black dot marking the beginning of a discrete merger event.
The lower black curved vector shows the same for a series of 1:10 mergers
(note that for clarity, the curved vectors are arbitrarily shifted relative to the $f_j=1$
starting point for the DM vector).
In both cases, after the mass has grown by a factor of $\sim$~2, the orbital $j_\star$\
dominates the evolution, moving merger remnants along a $j_\star$--$M_\star$\ track parallel to,
but lower than, the initial disk trend.
}
\label{fig:schem3}
\end{figure*}
Now that we have derived a comprehensive framework for connecting $j_\star$--$M_\star$\ observations
with simulated $\Lambda$CDM halos, and thereby derived generic constraints on specific
angular momentum retention, $f_j$ (Figure~\ref{fig:fvsf}),
we will work through some
case studies of plausible physical
processes in galaxy formation and evolution.
These cases are not meant to be exhaustive, nor to provide immediate
ammunition for current debates about galaxy formation,
but to serve as practical examples of how the $j$--$M$ diagram can be
used as a tool to furnish physical insight.
The models involved will treat $f_j$ and $f_\star$ as covariant parameters,
unlike in the previous sections where for simplicity they were independent.
A general constraint to keep in mind is
that for each galaxy type, $f_j$ is approximately constant
as a function of mass, with little additional scatter,
which accounts for the observed $j$--$M$
relations appearing so similar to those for theoretical DM halos.
{\it Any model for angular momentum evolution should
explain why galaxies appear to remember so faithfully the overall initial conditions of
their parent halos.}
The challenge of this $f_j$ constancy has been recognized previously for disk galaxies.
There are a variety of physical mechanisms during galaxy evolution that could involve $j$
transfer (e.g., gas cooling and feedback), but
unlike gravitational clustering,
these baryonic processes (and the resulting $f_j$ values)
are expected to depend strongly on mass,
which appears to require some degree of fine-tuning
to reconcile with the observations
(e.g., \citealt{2012MNRAS.421..608D}).
Our inclusion of early-type galaxies in this framework, with near-constant $f_j$,
deepens the mystery: there are now {\it two} fine-tuning conspiracies to explain.
Here we again emphasize a distinction from comparisons of the
{\it internal} radial distributions of $j$ for stars and DM halos
(e.g., \citealt{2001ApJ...555..240B,2001MNRAS.326.1205V,2002MNRAS.329..423M,2005ApJ...628...21S}).
As mentioned in Section~\ref{sec:intro}, there is ample reason to expect
redistribution of $j_\star$\ to occur within the baryonic component of a galaxy
and thereby violate strong $j$ conservation.
However, this does not affect our examination of weak conservation, where
the overall value of $j$ may remain roughly the same
(assuming negligible transfer of $j$ between baryons and DM).
We may reduce the potential explanations for the systematic difference in $f_j$
between spirals and ellipticals to two basic scenarios, which we will examine
before summarizing the overall picture.
One general scenario is an {\it internal} angular momentum bias, where
high- and low-$j_\star$\ galaxies were formed from parts of their available gas supply
that had preferentially high or low $j$.
The other is that these galaxies experienced systematic differences in
angular momentum transport {\it after} star formation, and during
subsequent galaxy assembly phases.
Below,
Section~\ref{sec:outflow} discusses outflow and stripping scenarios,
Section~\ref{sec:bias} considers biased collapse, and
Section~\ref{sec:merg} examines mergers.
Section~\ref{sec:eval} surveys
the plausibility of these evolutionary modes in the light of the $j_\star$--$M_\star$\ observations.
\subsubsection{Outflows and stripping}\label{sec:outflow}
One example of the first scenario involves {\it gas outflows}, whether caused by
galactic winds or by some other mechanism.
Let us assume that the baryons in a galaxy collapse into a thin disk while
preserving the total specific angular momentum, i.e., $f_j=1$ (recall Figure~\ref{fig:schem2}(b)).
The local specific angular momentum within the disk,
$j_{\rm g}(R) \propto R\,v_{\rm rot}(R)$,
is assumed to increase monotonically with galactocentric radius,
which is unavoidable if the gas follows
co-rotating circular orbits (the rotation-velocity profile
cannot decrease any more rapidly than Keplerian, while the lever arm $R$ in the $j$ calculation
increases linearly).
Before many stars form, an outflow begins which we parameterize by a mass
loss that is proportional to the gas surface density to some unknown power $\beta$:
\begin{equation}
\Delta M_{\rm g} \propto \Sigma_{\rm g}^\beta.
\end{equation}
Because the gas is presumed to settle into a configuration where the density
increases toward the center (e.g., an exponential profile), the parameter
$\beta$ translates into a biased removal of gas from different disk {\it radii},
which in turn means depletion of gas parcels with systematically different $j_{\rm g}$.
To analyze this scenario further, we now introduce Figure~\ref{fig:schem3},
which like Figure~\ref{fig:schem2} illustrates schematic vectors
of mass and angular momentum evolution, but now extends to more specific,
physically-motivated processes.
In Figure~\ref{fig:schem3}(a), the horizontal arrow to the left illustrates
an outflow with $\beta=0$:
the gas everywhere in the disk is depleted by an equal fraction,
and its initial specific angular momentum is preserved, while its mass decreases.
If $\beta>0$, then the outflows occur preferentially in the high-density, central
regions that have relatively low $j_{\rm g}$, and so the overall $j_{\rm g}$ for the galaxy
increases (diagonal arrow toward upper left;
cf.\ \citealt{2001MNRAS.321..471B,2012ApJ...750..107S}).
If $\beta<0$, then the mass loss is preferentially from the outer regions, and the overall $j_{\rm g}$
decreases (diagonal arrows toward lower left).
Thus, outflows could in principle produce either a net increase or decrease in $f_j$.
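The sign of this effect can be checked with a small numerical sketch (our own illustration, not a calculation from the literature): an exponential gas disk with a flat rotation curve, from which a fraction of the gas proportional to $\Sigma_{\rm g}^\beta$ is removed at each radius. The removal normalization (the parameter `eps` below) and the radial grid are arbitrary choices.

```python
import numpy as np

def outflow_j_change(beta, eps=0.5, n=4000):
    """Toy exponential gas disk (scale length R_d = 1) with a flat
    rotation curve (v_rot = 1).  A fraction eps*(Sigma/Sigma_0)**beta
    of the gas is removed at each radius (clipped to [0, 1]).
    Returns (j_after/j_before, M_after/M_before)."""
    r = np.linspace(1e-4, 20.0, n)        # radii in units of R_d
    dr = r[1] - r[0]
    sigma = np.exp(-r)                    # normalized surface density
    dm = sigma * 2.0 * np.pi * r * dr     # gas mass in each annulus
    j_local = r                           # j(R) = R * v_rot with v_rot = 1
    keep = np.clip(1.0 - eps * (sigma / sigma[0]) ** beta, 0.0, 1.0)
    j_before = (dm * j_local).sum() / dm.sum()
    j_after = (keep * dm * j_local).sum() / (keep * dm).sum()
    return j_after / j_before, (keep * dm).sum() / dm.sum()
```

A run with $\beta=0$ leaves $j_{\rm g}$ unchanged while reducing the mass; $\beta>0$ raises the remaining $j_{\rm g}$ and $\beta<0$ lowers it, matching the three arrows described above. The clipping is an ad hoc regularization for $\beta<0$, where the removed fraction would otherwise diverge at large radii.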
It should be kept in mind that these outflows represent only material that is
launched completely out of the galaxy, never to return.
Other types of outflows may also occur, where gas is expelled outward but remains
bound and falls inward again, as in a galactic fountain
(e.g., \citealt{2012MNRAS.419..771B}).
However, such internal processes might alter only the detailed distribution with radius
of $j$, and not affect the overall value which concerns us here
(see the discussion above of weak and strong $j$ conservation).
More complex scenarios could also be considered, where fountain material interacts with
halo gas and exchanges angular momentum
(e.g., \citealt{2009MNRAS.399.1089M,2011MNRAS.415.1534M}),
leading to shifts in $j_\star$\ for the stellar disk that eventually forms.
A mechanism related to gas outflows is galaxy {\it stripping} through gravitational
interactions with other galaxies in a dense environment.
Here the effects on $j_\star$\ and $M_\star$\ depend on whether the tidal stripping
occurs before or after the gas collapses.
If a galactic halo is tidally stripped {\it before} the gas collapses
(e.g., \citealt{1980ApJ...237..692L}),
then the reservoir of $M_{\rm g}$ and $j_{\rm g}$ available for collapse is depleted
in a manner that depends on the internal distribution of these quantities.
F83 adopted some plausible distributions and worked out the resulting $j$--$M$ changes:
we will not repeat the analysis here, but merely show the equivalent evolutionary vectors
as the three arrows in Figure~\ref{fig:schem3}(a) pointing downwards to the left.
There are two key features to notice with the gaseous stripping arrows. One is that unlike
outflows, this stripping can only {\it decrease} $f_j$ ($\beta<0$) since it acts solely
on the outer regions.
The second is that plausible $j$-loss vectors are accompanied by substantial mass loss,
which means that it is fairly difficult to move galaxies away from the initial
$j$--$M$ sequence.
This conclusion is supported by $N$-body simulations of $\Lambda$CDM
halos, which find that the environmental dependencies of halo $\lambda$ are
fairly weak \citep{1988ApJ...330..519Z,1999MNRAS.302..111L,2005MNRAS.359.1537R}.
If instead the stripping occurs {\it after} the gas collapse, then $j$ and $M$
decrease for the DM but not for the baryons. This leads to elevated
values of $f_j$ and $f_\star$, which could be investigated through observational
constraints on $M_{\rm vir}$ for field galaxies
in comparison to satellite galaxies in massive groups.
\subsubsection{Biased collapse}\label{sec:bias}
There is another scenario that is functionally equivalent in the $j$--$M$ diagram
to outflow or stripping, but which merits special attention.
Here we consider a {\it spatially-biased} subcomponent of the initial gas
which collapses and forms stars.
Rather than our default assumption
of uniform efficiencies $f_\star$ and $f_j$ throughout the virial region,
we assume that stars form preferentially in the {\it inner regions} of the halo,
while the outer regions remain largely gaseous and form relatively few stars.
This scenario was introduced by \citet{2002ASPC..275..389F} and
is motivated by the higher densities, and thus overall gas dissipation rates
(through cooling and cloud collisions),
in the inner regions.
The consequent spatial bias in star formation can also be understood as a {\it temporal}
bias, if one considers an idealized onion-shell model wherein galaxies form
by inside-out collapse,
with virialization and star formation occurring first in the central regions
(cf.\ \citealt{1998ApJ...507..601V,1999ApJ...520...59K}).
Even in more realistic, hierarchical galaxy models, it is
uncontroversial that a large fraction of the baryons within a galaxy halo at any given
time will not yet have formed stars, and are
located preferentially at larger radii.
The stars observed in a galaxy at $z=0$ will have formed on average
at higher redshifts, and from gas that was more centrally confined than the
$z=0$ virial volume.
Because $j$ for a $\Lambda$CDM halo is expected to increase systematically with
both internal radius and time, the above biasing scenario implies that $j_\star$ for
a galaxy will be lower than its total $j$ (including DM).
Such a biasing framework was used by \citet{2012MNRAS.424..502K} to connect
observed disk galaxies with simulated $\Lambda$CDM halos, and
thereby infer a radius of baryonic collapse.
Here we outline a generic toy model of collapse bias,
to understand its implications in the context of $j$--$M$ evolution vectors.
For simplicity, we adopt a step-function model where at an initial redshift $z_{\rm i}$,
all of the gas within the virial radius instantaneously collapses and forms stars
with perfect efficiency and angular momentum conservation ($f_\star=f_j=1$), and
subsequently no star formation occurs ($f_\star=0$).
This scenario is illustrated by Figure~\ref{fig:schem3}(b), where $z_{\rm i}$
marks the initial halo parameters. The leftward arrow shows the formation of the
stars, with $j_\star$--$M_\star$\ parameters that are preserved until $z_0=0$.
The diagonal arrow to the upper-right shows the subsequent evolution of the halo.
Because the halo continues to grow in $M$ and $j$, the net values of $f_\star$ and
$f_j$ for the stars will decrease with time, as illustrated by the gray
arrows, which represent the inferences made by connecting the final states of the
halo and stars.
This biasing scenario might seem to provide a tidy alternative for understanding
galaxies that have {\it apparently} experienced baryonic angular momentum loss.
However, it is important to realize that such biasing cannot explain just any
arbitrary set of $j_\star$--$M_\star$\ observations.
For example, the vectors in Figure~\ref{fig:schem3}(b) were constructed to represent a
typical early-type galaxy with a net $f_\star=0.1$ at $z=0$, which turns out to
have a net $f_j=0.22$, i.e., not reproducing the apparent $\langle f_j\rangle\sim0.1$
from observations.
Note that this model had an initial $f_\star=1$, but in reality,
we expect an initial $f_\star < 1$, which would increase the discrepancy.
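The net $f_j=0.22$ quoted above follows from a one-line scaling argument, under the simplifying assumption that the halo spin stays fixed so that $j_{\rm vir}\propto M_{\rm vir}^{2/3}$ while the stellar $j$ and $M$ are frozen at their $z_{\rm i}$ values; a minimal check:

```python
# Net f_j in the step-function biased-collapse model of Figure
# \ref{fig:schem3}(b): stars form with f_star = f_j = 1 at z_i, then the
# halo grows in mass by a factor g with j_vir ~ M_vir^(2/3) at fixed spin
# (a simplifying assumption of this sketch).
g = 1.0 / 0.1                # growth factor implied by a net f_star = 0.1
f_j_net = g ** (-2.0 / 3.0)  # stellar j is frozen while j_vir grows
print(round(f_j_net, 2))     # 0.22, matching the value quoted above
```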
We will discuss this scenario further in Section~\ref{sec:eval};
for now, it serves as an important illustration of how
constructing physically-motivated vectors in the $j_\star$--$M_\star$\ diagram can provide
tight constraints on possible evolutionary scenarios.
\subsubsection{Mergers}\label{sec:merg}
We next consider galaxy {\it merging} following star formation,
which is likely to be more important for ellipticals than for spirals.
The mass of a galaxy increases through a merger, while
its final $j$ is determined by the vector sum of three initial $j$ components
(the internal $j$ for the two progenitor galaxies, and their relative orbital $j$),
as well as by any exchange of $j$ with the environment (e.g., between the stars
and their surrounding DM halos).
The random relative orientations of the first two components will cause them
to partially cancel out, which contributes a net {\it decrease} to $j$.
That is, after $N$ equal-mass mergers, there will be average trends for the remnant of
$J \propto N^{1/2}$ and $M \propto N$, and therefore $j \propto N^{-1/2}$
\citep{1979Natur.281..200F,1980ApJ...236...43A}.
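The $j \propto N^{-1/2}$ scaling is easy to verify with a small Monte Carlo experiment (our own sketch; orbital angular momentum and the virial-mass subtleties discussed below are ignored here):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_remnant_j(n_mergers, n_trials=20000):
    """Sum N randomly oriented unit spin vectors (N equal-mass
    progenitors, orbital angular momentum ignored) and return the
    mean remnant specific angular momentum, |sum of J_i| / N."""
    v = rng.normal(size=(n_trials, n_mergers, 3))
    v /= np.linalg.norm(v, axis=2, keepdims=True)  # isotropic unit vectors
    J = v.sum(axis=1)                              # total J per trial
    return np.linalg.norm(J, axis=1).mean() / n_mergers
```

Quadrupling the number of equal-mass progenitors halves the mean remnant $j$, as expected from the random-walk argument.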
The orbital $j$ and the $j$ exchange processes are more difficult
to model a priori.
The effects of mergers on DM halos have been studied extensively through
numerical simulations, resulting in a general picture where major mergers tend
to ``spin up'' the halos, while minor mergers and smooth accretion
tend to spin them down
(e.g., \citealt{2001ApJ...557..616G,2002MNRAS.329..423M,2002ApJ...581..799V,2004MNRAS.348..921P,2004ApJ...612L..13D,2006MNRAS.370.1905H}).
Given that the $j_{\rm vir}$--$M_{\rm vir}$ relation is scale-free and has a normalization
that is expected to change only gradually, if at all,
with time (e.g., \citealt{1997ApJ...478...13N}),
we conclude that for individual halos, the co-addition of the above processes must
amount to a random-walk that takes them on average {\it along} the $j_{\rm vir}$--$M_{\rm vir}$
sequence.
We illustrate this process in Figure~\ref{fig:schem3}(c) with a schematic evolutionary
vector for galaxy halos, broken down into subcomponents of $j_{\rm vir}$ and $M_{\rm vir}$
changes.\footnote{In the merging of DM halos, the resulting angular momentum and mass
are {\it not} the simple sum of those properties from the progenitors.
The combination of the two virial regions in a merger
increases the {\it density} within a fixed physical radius, but also increases the {\it volume}
of the virial region, so that more of the surrounding material falls under the gravitational
sway of the two galaxies together.
A 1:1 merger typically increases $M_{\rm vir}$ by a factor of $\sim$~2.3;
similar effects apply to $j_{\rm vir}$.}
Doubling the mass should typically increase $j_{\rm vir}$\ by a factor of $2^{2/3}=1.6$.
The effects of mergers on the stellar components of galaxies, which have collapsed
by large factors within their DM halos, are somewhat different.
Qualitatively speaking, it is a generic dynamical requirement that the stars
shed some of their orbital angular momentum, via tidal torques or dynamical friction,
in order to coalesce into a bound merger remnant
(e.g., \citealt{1985Natur.317..595F,1988ApJ...330..519Z,1988ApJ...331..699B,2006MNRAS.372.1525D}).
More quantitatively, we may make an initial, plausible guess that the ``final pass'' of the
merger before coalescence involves an impact parameter and relative velocity that are similar
to the stellar scalelength and circular velocity of the larger progenitor.
This would mean that the smaller progenitor would bring in an orbital $j_{\star,2}$
of a similar magnitude to the internal $j_{\star,1}$ of the larger progenitor
(i.e., $\Delta J_\star = j_{\star,2} \, M_{\star,2} \sim j_{\star,1} \, M_{\star, 2}$).
We sketch out some implications of this kind of merger evolution in Figure~\ref{fig:schem3}(c).
Starting with galaxy disks randomly selected along the median $j_\star$--$M_\star$\ trend as
in Figure~\ref{fig:schem2}(c) (adopting a simple $f_\star=0.1$ model
with scatter included for halo $\lambda$),
we apply a sequence of four mergers to each disk.
Each merger has a 1:1 mass ratio, and the relative vectors of internal $j_\star$\
and orbital $j_\star$\ are selected randomly
(this is similar in spirit to the orbital-merger model of
\citealt{2002MNRAS.329..423M}).
The blue dots show the end result after the merger sequence, and the
upper arrow shows the median trend for a single galaxy, with black dots marking the
discrete merger events.
Note that at this point, the series of four 1:1 events is meant
as a thought experiment and not necessarily as a likely merger history.
After an initial decrease of $j_\star$\ in the first merger from cancellation of the internal spin vectors,
the orbital $j_\star$\ dominates the evolution of the merger remnant
(e.g., \citealt{1980ApJ...236...43A,2006MNRAS.370.1905H};
this also means that the results hardly change if the ``accreted'' galaxies
are low-$j_\star$\ spheroids rather than disks as we have assumed here).
Because the orbital $j_\star$\ term is assumed to be similar to the disk $j_\star$--$M_\star$\ trend,
the final trend for the merger remnants parallels the disk trend, while being
offset to lower $j_\star$\ by a factor of $\sim$~2 ($\sim$~$-0.3$~dex).
Referring back to Figure~\ref{fig:schem2}, this corresponds to an effective
angular momentum loss term of $f_j \sim 0.5$.
The distribution of the offset is also shown by a histogram in Figure~\ref{fig:histdiff2}.
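A minimal Monte Carlo version of this merger sequence can be sketched as follows. The detailed prescriptions (partners drawn from the initial trend $j \propto M^{2/3}$, an orbital term set by the larger progenitor's internal $j$ times the partner mass, isotropic orientations) are our own simplified guesses at the model described above, not its exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_dir():
    """Random unit vector, isotropic on the sphere."""
    v = rng.normal(size=3)
    return v / np.linalg.norm(v)

def merger_track(mass_ratio, growth_factor=16.0):
    """Follow one disk through repeated mergers of fixed mass ratio
    until its mass has grown by growth_factor.  Partners lie on the
    initial disk trend j = M**(2/3); the orbital term has magnitude
    ~ (j of the larger progenitor) * (partner mass), randomly oriented.
    Returns the final j divided by the disk-trend value at the final mass."""
    M = 1.0
    J = rand_dir()                        # start on the trend: j(M=1) = 1
    while M < growth_factor:
        m2 = mass_ratio * M               # partner mass
        j2 = m2 ** (2.0 / 3.0)            # partner internal j (on the trend)
        j_orb = max(np.linalg.norm(J) / M, j2) * m2
        J = J + j2 * m2 * rand_dir() + j_orb * rand_dir()
        M += m2
    return (np.linalg.norm(J) / M) / M ** (2.0 / 3.0)
```

In runs of this sketch, 1:1 growth by a factor of 16 leaves remnants roughly a factor of $\sim$~2 below the disk trend, while long sequences of 1:10 mergers fall further below it, qualitatively as described above; the exact offsets depend on the guessed prescriptions.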
\begin{figure}
\centering{
\includegraphics[width=3.5in]{f22.ps}
}
\caption{Distributions of specific angular momentum residuals, relative to
the mean trend for spiral disks, using the same analysis as in Figure~\ref{fig:schem3}(c).
The right histogram shows the disk initial conditions.
The middle and left histograms show merger remnants after having grown by
a factor of 16 in mass, for 1:1 and 1:10 mergers, respectively.
The $j_\star$\ distribution has a smaller mean and dispersion for the
1:10 mergers than for the 1:1 mergers.
}
\label{fig:histdiff2}
\end{figure}
We have carried out the same exercise for a series of 1:10 mergers, with a median
trend shown by the lower vector in Figure~\ref{fig:schem3}(c). The result is
similar to the 1:1 case, with orbital $j_\star$\ dominating the evolution after the
galaxy grows in mass by a factor of $\sim$~2.
However, the final $j_\star$\ trend is now lower
than the disks by a factor of $\sim$~6 ($\sim$~$-0.8$~dex; $f_j \sim$~0.15),
with less scatter than in the 1:1 case
(see Figure~\ref{fig:histdiff2} again).
These differences arise because there is less stochasticity with the 1:10 mergers, where
random walk effects tend both to wash out variations and
to dilute the orbital contributions to $j_\star$.\footnote{This scenario has
some parallels to discussions in the literature about the systematic relations
between angular momentum and merger histories, and the implications for
the observed properties of galaxies
(e.g., \citealt{2004ApJ...612L..13D,2002ApJ...581..799V,2005NewAR..49...25P,2007MNRAS.380L..58D,2012ApJ...750..107S}).
However, those studies did not always make a clear distinction between
the differing merger dynamics of DM halos and of their embedded stellar components.}
A more realistic mixture of multiple mergers with varying mass ratios
would presumably produce a $j_\star$\ distribution with a peak intermediate to our 1:1 and
1:10 scenarios, and with a larger scatter.
These calculations are laden with simplifying assumptions and could easily be
wrong by a factor of 2 in $j_\star$. However, they are meant to illustrate some
possible implications of merger activity in a hierarchical context.
First of all, it is plausible that spheroids with a merger origin
would follow a $j_\star$--$M_\star$\ relation that is parallel to that of spiral disks,
but offset to lower $j_\star$\ by a factor of a few.\footnote{More generally,
a similar slope would presumably be driven by any merger history that
involves a scale-free mass spectrum of progenitors.
This is a basic property of $\Lambda$CDM halos, but is incorrect at
some level for stellar galaxies, owing to the strong break in their luminosity
function.}
Second, the scatter in $j_\star$\ introduced by random merging may be relatively small.
These two results in our toy model are both driven by the dominant
contributions of orbital $j_\star$.
Similar points were made by \citet{1979Natur.281..200F} and by \citet{1988ApJ...330..519Z},
in the latter case based on the prediction
that $\lambda$ would be fairly constant with radius inside DM halos.
The stars that condense at
the center of a halo, and then participate collisionlessly in its merger history,
would naturally follow the same $j$--$M$ scaling relations as the overall halos,
modulo a smaller scale-length in converting from $\lambda$ to $j$
(in Equation~(\ref{eqn:lambda}), $|E|$ is inversely proportional to the radius).
\subsubsection{Evaluating the possibilities}\label{sec:eval}
We now step back and consider how well the preceding evolutionary scenarios
(outflows, stripping, collapse bias, and mergers)
mesh with the observational constraints
(Figures~\ref{fig:JMM0} and \ref{fig:fvsf}).
The idea is to find a vector (or combination of vectors) that
connects up the well-established endpoints in the $j$--$M$ diagram:
the $\Lambda$CDM halo initial conditions and the $z=0$ galaxy observations.
It should however be remembered that the focus of this paper is not to
solve long-standing questions about galaxy evolution which may require
a detailed understanding of the physics involved.
Instead, our more modest goals are
to illustrate how the $j$--$M$ diagram can be used
in practical terms as a constraint on theory,
while looking for any hints as to the viability of various scenarios.
Recent work in numerical simulations of {\it disk} galaxy
formation has emphasized how outflows might remove low-$j_{\rm g}$ material,
which counteracts $j$-loss through tidal torques during galaxy collapse,
and maintains a high net level of $f_j$
(e.g., \citealt{2011MNRAS.415.1051B,2011ApJ...742...76G}).
We could then imagine that the differences between spiral and elliptical galaxies
originate from the spirals having much stronger outflows at early times.
This outflow scenario implies more mass loss in spirals and so
would initially seem to work the wrong way in explaining the $f_\star$ differences---but
there could be other factors besides gas-depletion that affect $f_\star$.
It is beyond the scope of this paper to explore this
scenario in detail, but we emphasize that the focus on reproducing $f_j$
and $f_\star$ for spirals needs to expand to include simultaneously
the constraints from ellipticals, rather than treating them as nuisance factors that
represent failed disks.
We have already discussed how stripping before baryonic collapse is not
expected to produce large changes in the observable $j_\star$--$M_\star$\ relations,
which may indeed be part of the reason that there is not more scatter
in these relations.\footnote{There is one case where
severe stripping has apparently led to a large reduction in $j_\star$:
NGC~4486B, which is a low-$j_\star$\ outlier in Figure~\ref{fig:JMM0}, and is discussed
in \citet{2012ApJ...748...29R}. This ``compact elliptical'' is a fairly rare
type of galaxy.}
There is also
a more obvious constraint that both spirals and
ellipticals exist in the field and in clusters,
so present-day environment cannot be the unique driver of morphology and $j$ evolution.
Collapse bias is an appealing possibility because it would provide a natural explanation
for the positive correlation between $f_\star$ and $f_j$ as in Figure~\ref{fig:fvsf}.
In this scenario, elliptical galaxies would cease to build up both $M_\star$\ and $j_\star$\
at relatively early times, with
the remaining baryonic $M$ and $j$ at late times
either residing in a hot gas halo or having been blown out into intergalactic space.
Spiral galaxies would have more protracted star formation histories that
increase $M_\star$\ and $j_\star$\ monotonically with time.
Besides explaining the relative positions of ellipticals and spirals in the $j_\star$--$M_\star$\
diagram, this scenario also fits in naturally with the observation that the stars in spirals
are on average much younger than those in ellipticals.
There may be additional implications if one connects the {\it baryon} collapse to
the {\it overall halo} collapse, which has a well-understood theoretical underpinning.
At a given $z=0$ mass, some halos should have collapsed earlier than others,
leading to their DM distributions being more centrally concentrated.
Given a fixed $\lambda$, the central DM and associated stars would then have
relatively low $j$ values. Since halo collapse time is correlated strongly with
environmental density, one would then expect the low-$j_\star$\ galaxies to reside preferentially
in high-density environments -- which is indeed what is found observationally
(through the traditional morphology-density relation).
A potential problem with this scenario is that it does not appear by itself
to be capable of explaining the apparent deficit of $j_\star$\ in ellipticals, as discussed in
Section~\ref{sec:bias}.
More detailed analysis would be needed to see if halo concentration makes a difference,
and to understand the baryonic physics of
why early-collapsing galaxies would also shut down
their star formation more drastically than late collapsers.
In addition to collapse bias, other effects may also need to be involved,
such as a bias to low spin for their halos, or a component of real $j$ loss.
The merger scenario is a common explanation for ellipticals, since it
accounts for spheroidal morphologies through violent relaxation
\citep{1977egsp.conf..401T}, and because there is strong observational evidence
for some elliptical galaxies actively forming through mergers
(e.g., \citealt{2006AJ....132..976R}).
Our toy model analysis suggests that the overall effect of mergers is
to {\it reduce} the $j_\star$\ of the remnant relative to an initial $j_\star$--$M_\star$\ trend for disks,
while the combination of {\it multiple} mergers may move the remnants parallel to that trend
(Figure~\ref{fig:schem3}(c)). This might provide a natural explanation for the
observed $j_\star$--$M_\star$\ trend for ellipticals: the slope, scatter, and offset relative to disks.
Note that it is not entirely clear in this context
why the spiral bulges and the ellipticals would follow the same $j_\star$--$M_\star$\ trends.
A more quantitative comparison of our model to the observations allows us not only
to constrain the typical mass ratios in mergers (as Figure~\ref{fig:histdiff2}),
but also to infer the amount of mass growth in ellipticals since their
assumed primordial disk phase.
We do so by mapping our toy model vectors for mergers in the key $f_j$--$f_\star$
diagram (Figure~\ref{fig:fvsf}), starting from initial conditions similar to
present-day spirals ($f_\star=0.25,f_j=0.6$),
and requiring that they terminate at ($f_\star=0.1,f_j=0.1$).
Recalling that $M_{\rm vir}$ growth slightly outpaces $M_\star$\ growth,
we find that reducing $f_\star$ by a factor of 2.5 requires a very long series
of mergers, with a final growth factor of $\sim$~100 in $M_\star$\ and $\sim$~300 in
$M_{\rm vir}$.
Consideration of the $f_j$ constraint then
suggests a typical merger mass ratio of $\sim$~1:3.
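As a back-of-the-envelope check of this bookkeeping (our own illustrative sketch, not a calculation from the full model): a sequence of identical 1:3 mergers multiplies the mass by $4/3$ per event, so the number of events needed for a given total growth factor follows from a simple logarithm:

```python
import math

def n_mergers(total_growth, mass_ratio):
    """Number of equal-ratio mergers needed to grow the mass by
    `total_growth`, where each merger of ratio 1:r multiplies the
    mass by (1 + 1/r).  Toy estimate only."""
    per_merger = 1.0 + 1.0 / mass_ratio
    return math.log(total_growth) / math.log(per_merger)

# ~100x stellar-mass growth via 1:3 mergers -> roughly 16 events
print(round(n_mergers(100.0, 3.0)))  # -> 16
```

About 16 such events give a factor of $\sim$~100 in mass, consistent with the "very long series of mergers" inferred above.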
Such ``major mergers'' seem like a reasonable pathway to forming
elliptical galaxies, although recent work suggests a more
dominant role for {\it minor} mergers (e.g., $\sim$~1:10;
\citealt{2009ApJ...699L.178N,2009ApJ...697.1290B,2011MNRAS.417..845K,2012ApJ...744...63O,2012arXiv1202.3441J}),
which is motivated in part by explaining trends in size evolution, and
is also supported by the observed {\it shapes} of rotation-velocity profiles
(see Section~\ref{sec:obsresults} and
\citealt{2011ApJ...736L..26A}).\footnote{In more detail, the fast- and slow-rotator
subcategories of ellipticals (Section~\ref{sec:etgdata})
are often thought to originate in different merger histories,
such as binary versus multiple mergers
(e.g., \citealt{2008ApJ...685..897B,2011MNRAS.416.1654B}).
Our discussion concerns primarily the fast-rotators, since these represent the vast
majority of ellipticals, and in addition, our $j_\star$\ constraints for the slow-rotators
are less certain.
However, as discussed in Sections~\ref{sec:obsresults} and \ref{sec:replace},
we detect no systematic difference in $j_\star$--$M_\star$\ space between the two galaxy types,
suggesting that they may have relatively similar merger histories after all.
}
This apparent tension is not of great concern since our current results
involve significant observational uncertainties and a crude model for the merging
vectors in Figure~\ref{fig:schem3}(c),
while not taking proper account of the redshift-dependence of virial quantities.
However, they are intended to illustrate conceptually the kinds of constraints that are possible with more careful modeling.
A merger scenario may successfully explain the $j_\star$--$M_\star$\ properties
of ellipticals, but it should be remembered that in a cosmological context, all
galaxies including spirals should experience a continuous rain of accreting objects.
Even if spiral galaxies have systematically avoided the most extreme merger events,
they will have still experienced events in the $\sim$~1:10 range
(e.g., \citealt{1993MNRAS.261..921K,2008ApJ...683..597S,2010MNRAS.406.2267F}),
which as shown in our toy models could significantly reduce $j_\star$.
A more detailed analysis of $j_\star$--$M_\star$\ evolution within a cosmological framework is
needed in order to investigate the quantitative differences that might arise between
spirals and ellipticals owing to varying merger histories.
In particular, an explanation for the observed bulge--disk $j_\star$\ bimodality is needed,
since a spectrum of merger histories is more suggestive of a smooth distribution
of $j_\star$.
It should also be kept in mind that $\langle f_\star(M_\star)\rangle$ is observationally
constrained not only for present-day galaxies, but also at earlier
times (e.g., \citealt{2009ApJ...696..620C,2012arXiv1205.5807M}),
which introduces additional
``boundary conditions'' to $j$--$M$ evolution.
Synthesizing the scenarios above, it seems plausible that ellipticals might be explained
through a combination of collapse bias and multiple mergers--which
bears a notable resemblance to recent discussions of two-phase
galaxy formation \citep{2010ApJ...725.2312O}. In this context,
an early burst of star formation would both imprint a relatively low initial $j_\star$,
and allow more opportunity for subsequent mergers to reduce $j_\star$\ further.
Spirals would be those systems where late gas infall both brings in higher $j$,
and avoids the most active merging period.
There are of course other considerations besides angular momentum when
constructing models of galaxy evolution, which are beyond the scope
of this paper to evaluate.
We have also been able to cover only a subset of possible scenarios.
One significant omission is the disk-instability pathway for bulge formation
(e.g., \citealt{1964ApJ...139.1217T,1997ApJ...482..659D,1998ApJ...507..601V,2009MNRAS.396.1972P}), which is an internal
process where the bulge and disk either form from high- and low-$j$ material,
or else exchange $j$ through gravitational torques.
While this pathway is usually considered in connection with pseudo-bulges,
there are recent proposals that the special conditions in high-redshift
galaxy disks can lead to the massive, classical bulges of present-day spirals, lenticulars,
and ellipticals
(e.g., \citealt{1999ApJ...514...77N,2004ApJ...611...20I,2008ApJ...688...67E,2009Natur.457..451D,2009ApJ...703..785D,2010MNRAS.404.2151C}).
The filamentary nature of mass and $j$ inflows at high redshift may also
require significant revisions to standard spherical models
\citep{2012MNRAS.422.1732D,2012MNRAS.423.1544S,2012MNRAS.423.3616D,2011arXiv1106.0538K}.
Our overarching emphasis here is that whatever the mechanisms for galaxy formation,
they must reproduce the basic $j_\star$--$M_\star$\ scaling relations {\it observed for both
spiral and elliptical galaxies}. A combination of all the processes mentioned above,
and more, could be operational in real galaxies, where each process must be associated
with a vector of $j_\star$--$M_\star$\ evolution that is not arbitrary but physically-motivated,
as we have sketched in Figures~\ref{fig:fvsf}
and \ref{fig:schem2}.
The sum of these vectors over the lifetime of the galaxy
must preserve the halo-like
scaling relations, {\it along with a relatively small scatter}.
These may be very challenging constraints to match in practice,
particularly if one includes boundary conditions
on $f_\star(M_\star)$ evolution with redshift,
and requires that the $j_\star$--$M_\star$\ relations hold for both bulge and disk
components simultaneously within the same galaxies.
Thus, a fresh approach to $j$--$M$ analysis appears to hold promise
for providing new, powerful constraints on galaxy evolution.
We would encourage numerical simulators to keep this
approach in mind as part of their toolkit,
tracking the evolution of their simulated
galaxies in the $j$--$M$ diagram,
while refining our schematic estimates of $\Delta j$--$\Delta M$ vectors,
and thereby gaining more insights
into the underlying physical processes in the simulations.
\section{Summary and conclusions}\label{sec:concl}
We have revisited the pioneering study of F83 which derived observational
estimates for the fundamental quantities $M_\star$\ and $j_\star$\ (stellar mass
and specific angular momentum) of spiral and elliptical galaxies,
and compared these to theoretical expectations based on hierarchical assembly.
Although the amount and distribution of $j_\star$\ in late-type galaxies
has been an intensively-studied
topic in the intervening years, even the most basic trends for early-types
have not been satisfactorily established.
We have capitalized on the advent of radially-extended kinematic data for a large sample
of early-type galaxies, to update and extend the analyses of F83.
We focus first on detailed analysis of a small sample of galaxies with data
extending to typically five effective radii, which is the distance one must reach
for a high degree of confidence in the $j_\star$\ estimates.
We derive various formulae for use in quantifying $j_\star$\ for pressure supported
systems, including deprojection effects.
In order to estimate $j_\star$\ for a larger sample of galaxies without
requiring detailed modeling and data to very large radii, we test
a simple, heuristic $j_\star$-estimator.
Based on the shapes of observed rotation-velocity profiles for
the detailed sample of galaxies, we find that a convenient metric for
the characteristic rotation velocity $v_{\rm s}$ of a galaxy is provided by the observed
rotation at a semi-major-axis distance of two effective radii.
This approximation is accurate at the level of
$\sim$~0.1~dex, which is suitable for studying galaxy-to-galaxy variations in $j_\star$.
We next assemble a large sample of galaxies in the nearby universe with
adequate photometric and kinematic data for estimating $j_\star$\ and $M_\star$.
This sample covers the full spectrum of bright galaxy types from bulgeless-spiral to
diskless elliptical, as well as a wide range in $M_\star$,
centered approximately at the characteristic mass $M_\star^*$.
We use our simple formula for estimating $j_\star$, while adopting simple bulge+disk
models for the spiral galaxies.
Along the way, we also introduce an important new observational scaling relation for
galaxies of all types: $v_{\rm s}$ versus $M_\star$. This relation is analogous
to the well-known Tully-Fisher relation for disk galaxies, but is more closely related to
angular momentum than to dynamical mass.
Unlike the generalized Tully-Fisher relation, the mass--rotation~velocity relation shows
{\it near-perpendicular} rather than parallel trends for spiral and elliptical galaxies.
These rotation-velocity trends combine with size--mass trends to
trace the more fundamental $j_\star$--$M_\star$\ trends.
Our combined $j_\star$--$M_\star$\ estimates confirm the basic result of F83 that late-type spiral and
elliptical galaxies follow parallel sequences of roughly $\alpha \sim 2/3$ log-slope,
but with a large zeropoint difference (in our analysis,
the ellipticals have a factor of $\sim$~3--4 lower $j_\star$\ at a fixed $M_\star$).
Although this conclusion has already been used in some theoretical analyses,
now it has a much firmer observational basis.
In particular, the data do not support
previous suggestions that major mergers have transported large
amounts of angular momentum into the outer regions of ellipticals.
We confirm for the first time that lenticular galaxies on average
lie intermediate to ellipticals and late-type spirals in the $j_\star$--$M_\star$\ plane,
with tentative indications for two families of lenticulars
characterized by low and high $j_\star$.
We see no indication of systematic, overall
differences between centrally fast- and slow-rotator ellipticals.
We also find that spiral bulges
are consistent with following the $j_\star$--$M_\star$\ sequence for ellipticals, despite
having very different relations between mass, size, and rotation.
Thus, as far as the fundamental parameters $j_\star$\ and $M_\star$\ are concerned,
spiral bulges are essentially like mini-ellipticals.
We examine the residuals of the combined galaxy $j_\star$--$M_\star$\ data with respect to the
disk-only trend, and find that these correlate better with disk-to-bulge ratio
than with Hubble type.
They also deviate from a lognormal distribution, possibly suggesting instead
a bimodality in $j_\star$. Considering all of these results together, we propose an
alternative framework to the Hubble sequence, based on more physically motivated parameters.
In this picture, all galaxies are a combination of a bulge and a disk, which are
distinct subcomponents with different characteristic amounts of $j_\star$.
Galaxy morphology may then be seen as a secondary manifestation of the
mix of high- and low-$j$ material, or equivalently, the position of a galaxy in
$j_\star$--$M_\star$\ parameter space is a reflection of its bulge-to-disk ratio.
We next connect our observational results to a theoretical framework based on
the hierarchical assembly of galaxy halos in a $\Lambda$CDM cosmology.
We use numerically-informed analytic methods that are much simpler than
hydrodynamical simulations,
but less susceptible to the large, lingering uncertainties about
baryonic recipes, resolution effects, and other numerical issues.
We find that the predictions for universal mean values of halo spin translate into
$j_{\rm vir}$--$M_{\rm vir}$ relations with an $\alpha=2/3$ log-slope, which
is remarkably similar to the observed $j_\star$--$M_\star$\ relations.
The zeropoint differences among these relations provide valuable clues
to the formation processes of different galaxy types.
Mapping between halo and stellar quantities involves two basic parameters:
the net fraction of baryons turned into stars, $f_\star$, and the fraction of specific $j$
retained, $f_j$. We find that realistic variations of $f_\star$ with mass
produce surprisingly mild deviations of the $j_\star$--$M_\star$\ relation from a simple $\alpha=2/3$
power-law. The most noticeable correction is a slightly shallower predicted slope for
the spirals, which turns out to agree well with the observations.
We explore two simplified alternative scenarios for explaining the spiral-elliptical
dichotomy in the $j_\star$--$M_\star$\ plane: the formation of spiral and elliptical
galaxies in low- and high-spin halos, respectively (spin-bias scenario);
and a difference in $j$ retention (variable-$f_j$ scenario).
We find that spin-bias does not explain the tails of the observed
$j_\star$\ distribution, nor does it agree with the observed trend as a function of mass
for the elliptical galaxies.
The variable-$f_j$ scenario, on the other hand, matches the data well and
suggests universal values of $f_j\sim0.55$ and $f_j\sim0.1$ for spirals and ellipticals,
or for disks and bulges, respectively.
The near-constancy of these values is intriguing,
and means that
all the complexities of galaxy evolution
somehow effectively reduce to a simple model, where
galactic stars have
preserved the ``initial'' conditions of their host halos, including
the $j_{\rm vir}$--$M_{\rm vir}$ slope and scatter.
This interpretation may be useful for semi-analytically
populating DM halos with both spiral and elliptical galaxies
(cf.\ \citealt{1998MNRAS.295..319M}).
Our $f_j$ result for spirals confirms similar conclusions going back decades,
that these galaxies have retained most of their primordial specific angular momentum.
This is an unavoidable conclusion from basic comparisons between observational constraints
and theoretical expectations, which for many years presented a major challenge
to numerical simulations of galaxy formation, as these apparently predicted very low values
for $f_j$. It has gradually been realized that such simulations
overpredicted
angular momentum transport (e.g., \citealt{2011arXiv1109.4638K}),
with major uncertainties still lingering in the baryonic physics included in the
simulations.
Our consolidation of the $f_j$ result for elliptical galaxies provides a new benchmark
for models of galaxy evolution, which to be fully credible must
reproduce the observed $f_j$ (and $f_\star$) trends for both spirals and ellipticals
{\it in the same simulation}. For example, any feedback processes that are invoked
to prevent overcooling and $j$ loss in spirals should be much less effective for
ellipticals.
We have explored a few toy models for galaxy evolution that exploit the basic
constraints provided by the parameter space of $j_\star$\ and $M_\star$, or equivalently of
$f_j$ and $f_\star$. Galaxies cannot evolve arbitrarily in this parameter space,
requiring physically-motivated diagonal vectors of change
($\Delta j_\star, \Delta M_\star$).
Thus we suggest the $j$--$M$ diagram as a key tool for assessing and interpreting
any model of galaxy formation.
Our simplified models suggest that
a combination of early baryonic collapse and
repeated galaxy merging (major or minor)
might account for the parallel-but-offset
trend of ellipticals relative to spirals. We have provided illustrative constraints
on the numbers and mass-ratios of the mergers, which after refinement with more
detailed modeling could be compared with cosmologically-based
predictions for mass growth and merging.
In summary, we have established a new synthesis of $j_\star$--$M_\star$\ trends from observations,
whose general resemblance to halos in $\Lambda$CDM cosmology
provides important support for that theory, and in turn
furnishes a valuable framework for
constraining baryonic processes as discussed above.
Of course, the observations presented here must be relevant to any model
of galaxy formation, even if $\Lambda$CDM theory eventually needs revision.
More generally, we propose that the morphologies of galaxies are
closely tied to the evolution of angular momentum during their assembly,
with late-types being very efficient at retaining $j$, and
early-types proficient at shedding $j$.
In this context, there are several areas ripe for additional progress.
First, clear predictions from high-resolution cosmological simulations
are needed for $j_\star$\ versus $M_\star$\ as a function of galaxy type, to
explore whether the dichotomy between spirals and ellipticals, or
disks and bulges, can be explained by differences in their assembly histories.
Second, the observational work on nearby galaxies should be extended to the
next level, via a volume-limited, homogeneous survey of all types of galaxies
including full two-dimensional, wide-field photometric-kinematic bulge--disk
decompositions,
with attention to stellar mass-to-light ratio variations.
This would allow for more robust analysis of the deviations of the $j_\star$\ residuals
from lognormality, and for more secure treatments of S0/Sa galaxies and of
bulge and disk subcomponents.
Third, the extensions of our study to lower-mass
(e.g., \citealt{2012ApJS..198....2K})
as well as to higher-redshift galaxies
(cf.\ \citealt{2007A&A...466...83P,2010ApJ...725.2324B}),
to freshly accreted material within galaxies
(cf.\ \citealt{2011ApJ...738...39S}),
and to the orientations of the ${\bf j}_\star$ vectors
(e.g., \citealt{2012arXiv1201.5794C,2012arXiv1207.0068T}),
would provide additional, valuable diagnostics.
\vspace{0.3cm}
\noindent
\acknowledgements
We thank Brad Whitmore for assistance with the spiral galaxy data compilation,
and the referee for a constructive review.
We thank Frank van den Bosch, Andi Burkert, Roger Davies, Aaron Dutton, George Efstathiou,
Eric Emsellem, Ken Freeman, Marcel Haas, Phil Hopkins, Koen Kuijken, Surhud More,
Joel Primack, and Mike Williams for helpful comments and discussions.
This research was supported in part by the National Science Foundation under
Grants No. AST-0507729, AST-0808099, AST-0909237,
and PHY05-51164.
This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
\newpage
\section{Appendix A: Feynman Rules}
The tree-level Feynman diagrams for two particles in the initial
state and three particles in the final state are classified into three
topologies, according to whether the two initial particles interact
(or a particle--antiparticle pair annihilates) through no propagator,
one propagator, or two propagators.
\textbf{External Lines (Incoming and Outgoing Particles)}
\subsection{Fermions}
\begin{center}
\begin{tabular}{ccc}
& Incoming & Outgoing \\
Particle & $u(p,s)$ & $\overline{u}(p,s)$ \\
Antiparticle & $\overline{v}(p,s)$ & $v(p,s)$%
\end{tabular}
\end{center}
\subsection{Vector Bosons}
\begin{center}
\begin{tabular}{cc}
Incoming & Outgoing \\
$\varepsilon _{\sigma }(p,s)$ & $\varepsilon _{\sigma }^{\ast }(p,s)$%
\end{tabular}
\end{center}
\subsection{Scalar (Higgs) Bosons}
Scalar bosons as external lines take the value \textbf{1} in general.
\textbf{Internal Lines (Propagators)}
\subsection{Fermions}
\[
i\frac{\not{p}+m}{p^{2}-m^{2}+i\epsilon }
\]
\subsection{Massive Vector Bosons}
\[
-i\frac{g^{\mu \nu }-p^{\mu }p^{\nu }/M^{2}}{p^{2}-M^{2}+i\epsilon }
\]
\subsection{Massless Vector Bosons (Photons)}
\[
-i\frac{g^{\mu \nu }}{p^{2}}
\]
\subsection{Scalar (Higgs) Bosons}
\[
i\frac{1}{p^{2}-m^{2}+i\epsilon }
\]
\textbf{Vertices}
The vertex factors are model-dependent constants; their numerical values
are collected in Appendix B.
\newpage
\section{Appendix B: Numerical Values of Constants}
The various vertex factors are model-dependent constants. Their
numerical values are given by
\[
A=e
\]
\[
B^{L}=g^{`}\left( \frac{\frac{1}{2}-\sin ^{2}\theta _{w}}{\sin \theta _{w}}%
\right)
\]
\[
B^{R}=-g^{`}\sin \theta _{w}
\]%
\[
C=-\frac{g}{\sqrt{2}}
\]%
\[
D=e
\]%
\[
E=g\cos \theta _{w}
\]%
\[
F_{1}^{L}=-g\left( \frac{m_{e}\cos \alpha }{2M_{w}\cos \beta }\right)
\]%
\[
F_{2}^{L}=g\left( \frac{m_{e}\cos \alpha }{2M_{w}\cos \beta }\right)
\]%
\[
F_{3}^{L}=0
\]%
\[
F_{1}^{R}=0
\]%
\[
F_{2}^{R}=0
\]%
\[
F_{3}^{R}=ig\left( \frac{m_{e}\tan \beta }{2M_{w}}\right)
\]%
\[
G_{1}=ig\left[ \frac{\sin (\beta -\alpha )}{2\cos \theta _{w}}\right]
\]%
\[
G_{2}=-ig\left[ \frac{\cos (\beta -\alpha )}{2\cos \theta _{w}}\right]
\]%
\[
H_{1}=gM_{w}\cos (\beta -\alpha )
\]%
\[
H_{2}=gM_{w}\sin (\beta -\alpha )
\]%
\[
I_{1}=\frac{g}{2}\sin (\alpha -\beta )
\]%
\[
I_{2}=\frac{g}{2}\cos (\alpha -\beta )
\]%
\[
I_{3}=-i\frac{g}{2}
\]%
\[
J_{1}=g\left[ \frac{M_{Z}\cos (\beta -\alpha )}{\cos \theta _{w}}\right]
\]%
\[
J_{2}=g\left[ \frac{M_{Z}\sin (\beta -\alpha )}{\cos \theta _{w}}\right]
\]%
\[
K_{L}=g\left( \frac{\frac{1}{2}-\sin ^{2}\theta _{w}}{\cos \theta _{w}}%
\right)
\]%
\[
K_{R}=-g\left( \frac{\sin ^{2}\theta _{w}}{\cos \theta _{w}}\right)
\]%
\[
L=\frac{g}{\sqrt{2}}
\]%
\[
M=\frac{g}{2\cos \theta _{w}}
\]%
\[
N_{Li}=\frac{-g}{2\sqrt{2}}\left( \varepsilon _{i}\frac{m_{e}Z_{i3}}{%
M_{w}\cos \beta }-Z_{i2}-Z_{i1}\tan \theta _{w}\right)
\]%
\[
N_{Ri}=\frac{-g}{2\sqrt{2}}\left( \frac{m_{e}Z_{i3}}{M_{w}\cos \beta }%
+2\varepsilon _{i}Z_{i1}\tan \theta _{w}\right)
\]%
\[
N_{Li}^{`}=\frac{-g}{2\sqrt{2}}\left( -\varepsilon _{i}\frac{m_{e}Z_{i3}}{%
M_{w}\cos \beta }-Z_{i2}-Z_{i1}\tan \theta _{w}\right)
\]%
\[
N_{Ri}^{`}=\frac{-g}{2\sqrt{2}}\left( \frac{m_{e}Z_{i3}}{M_{w}\cos \beta }%
-2\varepsilon _{i}Z_{i1}\tan \theta _{w}\right)
\]%
\[
O=\frac{g}{\sqrt{2}}\left( Z_{i1}\tan \theta _{w}-Z_{i2}\right)
\]%
\[
P_{ij}^{L}=\varepsilon _{i}g\left( Z_{i2}U_{j1}+\frac{Z_{i4}V_{j2}}{\sqrt{2}}%
\right)
\]%
\[
P_{ij}^{R}=g\left( Z_{i2}U_{j1}+\frac{Z_{i3}U_{j2}}{\sqrt{2}}\right)
\]%
\[
Q_{ij}^{R}=\varepsilon _{i}g\sin \beta \left[ Z_{i3}U_{j1}-\frac{%
(Z_{i2}+Z_{i1}\tan \theta _{w})U_{i2}}{\sqrt{2}}\right]
\]
\[
R_{lij}=-\frac{g}{2\sin \beta }\left\{ \frac{m_{\widetilde{\chi }%
_{i}^{o}}\delta _{ij}\sin \alpha }{M_{w}}+\left( \varepsilon
_{i}+\varepsilon _{j}\right) \left[ Q_{ij}^{``}\sin (\beta -\alpha
)-R_{ij}^{``}\sin \alpha \right] \right\}
\]%
\[
R_{2ij}=-\frac{g}{2\sin \beta }\left\{ \frac{m_{\widetilde{\chi }%
_{i}^{o}}\delta _{ij}\cos \alpha }{M_{w}}-\left( \varepsilon
_{i}+\varepsilon _{j}\right) \left[ Q_{ij}^{``}\cos (\beta -\alpha
)+R_{ij}^{``}\cos \alpha \right] \right\}
\]%
\[
R_{3ij}=\frac{ig}{2\sin \beta }\left( \varepsilon _{i}-\varepsilon
_{j}\right) \left[ Q_{ij}^{``}\cos 2\beta -R_{ij}^{``}\cos \beta \right]
\]%
\[
R_{1ij}^{`}=-\frac{g}{2\sin \beta }\left[ \left( \varepsilon
_{i}-\varepsilon _{j}\right) Q_{ij}^{``}\sin (\beta -\alpha )-(\varepsilon
_{i}-\varepsilon _{j})R_{ij}^{``}\sin \alpha \right]
\]%
\[
R_{2ij}^{`}=\frac{g}{2\sin \beta }\left( \varepsilon _{i}-\varepsilon
_{j}\right) \left[ Q_{ij}^{``}\cos (\beta -\alpha )+R_{ij}^{``}\cos \alpha %
\right]
\]%
\[
R_{3ij}^{`}=-\frac{ig}{2\sin \beta }\left\{ \frac{m_{\widetilde{\chi }%
_{i}^{o}}\delta _{ij}\cos \alpha }{M_{w}}-\left( \varepsilon
_{i}+\varepsilon _{j}\right) \left[ Q_{ij}^{``}\cos 2\beta +R_{ij}^{``}\cos
\beta \right] \right\}
\]
where
\[
Q_{ij}^{``}=\frac{1}{2}\left[ Z_{i3}\left( gZ_{j2}-g^{`}Z_{j1}\right)
+Z_{j3}(gZ_{i2}-g^{`}Z_{i1})\right]
\]
and
\[
R_{ij}^{``}=\frac{1}{2M_{w}}\left[ MZ_{i2}Z_{j2}+M^{`}Z_{i1}Z_{j1}-\mu
\left( Z_{i3}Z_{j4}+Z_{i4}Z_{j3}\right) \right]
\]
\[
S_{ij}=\frac{g}{4\cos \theta _{w}}\left( 1-\varepsilon _{i}\varepsilon
_{j}\right) \left( Z_{i3}Z_{j3}-Z_{i4}Z_{j4}\right)
\]
\[
S_{ij}^{`}=\frac{g}{4\cos \theta _{w}}\left( 1+\varepsilon _{i}\varepsilon
_{j}\right) \left( Z_{i3}Z_{j3}-Z_{i4}Z_{j4}\right)
\]
\[
T_{i}^{L}=g\left( \frac{m_{e}U_{i2}}{\sqrt{2}M_{w}\cos \beta }\right)
\]%
\[
T_{i}^{R}=-gV_{i1}
\]%
\[
U_{i}=-gU_{i1}
\]
\[
V=e
\]
\[
W_{ij}^{L}=\frac{g}{\cos \theta _{w}}\left( \delta _{ij}\sin ^{2}\theta
_{w}-V_{i1}V_{j1}-\frac{1}{2}V_{i2}V_{j2}\right)
\]
\[
W_{ij}^{R}=\frac{g}{\cos \theta _{w}}\left( \delta _{ij}\sin ^{2}\theta
_{w}-U_{i1}U_{j1}-\frac{1}{2}U_{i2}U_{j2}\right)
\]%
\[
X_{1ij}=-\frac{g}{2\sin \beta }\left\{ \frac{m_{\widetilde{\chi }%
_{i}^{o}}\delta _{ij}\sin \alpha }{M_{w}}+2\left[ Q_{ij}\sin (\beta -\alpha
)-R_{ij}\sin \alpha \right] \right\}
\]%
\[
X_{2ij}=-\frac{g}{2\sin \beta }\left\{ \frac{m_{\widetilde{\chi }%
_{i}^{o}}\delta _{ij}\cos \alpha }{M_{w}}-2\left[ Q_{ij}\cos (\beta -\alpha
)+R_{ij}\cos \alpha \right] \right\}
\]
\[
X_{3ij}=0
\]
\[
X_{1ij}^{`}=0
\]
\[
X_{2ij}^{`}=0
\]
\[
X_{3ij}^{`}=-\frac{ig}{2\sin \beta }\left\{ \frac{m_{\widetilde{\chi }%
_{i}^{o}}\delta _{ij}\cos \beta }{M_{w}}-2\left[ Q_{ij}\cos 2\beta
+R_{ij}\cos \beta \right] \right\}
\]
where
\[
Q_{ij}=\frac{U_{i2}V_{j1}}{\sqrt{2}}
\]
\[
R_{ij}=\frac{MU_{i1}V_{j1}+\mu U_{i2}V_{j2}}{2M_{w}}
\]
The values of the constants $e$, $\theta _{w}$, $g$, and $g^{`}$, given in
natural units ($\hbar =c=1$) and with $\theta _{w}$ in radians, are
\[
e=0.302822
\]
\[
\theta _{w}=0.495293
\]
\[
g=\frac{e}{\sin \theta _{w}}=0.637132
\]
\[
g^{`}=\frac{e}{\cos \theta _{w}}=0.344183
\]
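These numbers can be verified directly; a quick numerical cross-check (assuming, as the quoted values imply, that $\theta _{w}$ is given in radians):

```python
import math

e = 0.302822        # electromagnetic coupling in natural units
theta_w = 0.495293  # weak mixing angle, in radians

g = e / math.sin(theta_w)        # SU(2) coupling
g_prime = e / math.cos(theta_w)  # U(1) coupling (g` in the text)

print(f"g  = {g:.6f}")        # ~ 0.637132
print(f"g' = {g_prime:.6f}")  # ~ 0.344183
```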
The masses of the electron and of the $W$ and $Z$ bosons are
\[
m_{e}=0.51099906\ \mathrm{MeV}/c^{2}
\]
\[
M_{W}=80.33\ \mathrm{GeV}/c^{2}
\]
\[
M_{Z}=91.187\ \mathrm{GeV}/c^{2}
\]
The widths of the $W$ and $Z$ bosons are
\[
\Gamma _{W}=2.07\ \mathrm{GeV}/c^{2}
\]
\[
\Gamma _{Z}=2.490\ \mathrm{GeV}/c^{2}
\]
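The widths enter the calculation through the $s$-channel propagators of the unstable bosons, where the usual Breit--Wigner prescription replaces $p^{2}-M^{2}$ by $p^{2}-M^{2}+iM\Gamma $. The precise scheme used in this work is not spelled out above, so the following snippet is only an illustration of that standard prescription:

```python
# Illustration of the Breit-Wigner prescription (an assumption, not a
# formula quoted from this appendix): p^2 - M^2 -> p^2 - M^2 + i*M*Gamma.
M_Z, Gamma_Z = 91.187, 2.490  # GeV

def z_propagator_factor(s):
    """Scalar factor of an s-channel Z propagator with a fixed width."""
    return 1.0 / complex(s - M_Z**2, M_Z * Gamma_Z)

on_peak = abs(z_propagator_factor(M_Z**2))       # = 1/(M_Z*Gamma_Z)
off_peak = abs(z_propagator_factor((2.0 * M_Z)**2))
print(on_peak, off_peak)  # sharply enhanced at s = M_Z^2
```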
The mixing angles $\alpha $ and $\beta $, the masses $M$ and $M^{`}$, and $%
\mu $ are model-dependent free parameters; they are defined in Chapter 3,
together with the sign factors $\eta _{i}$ and $\varepsilon _{j}$, the masses $%
m_{\widetilde{\chi }_{i}^{\pm }}$ and $m_{\widetilde{\chi }_{i}^{o}}$, and
the mixing matrices $U$, $V$ and $Z$ of the charginos and neutralinos.
\textbf{Three Body Kinematics}
\textbf{Introduction}
The energy $E$ and the 3-momentum $\mathbf{p}$ of a particle of
mass $m$ form a 4-momentum $p\equiv (E,\mathbf{p})$ whose square is $%
p^{2}\equiv E^{2}-\left\vert \mathbf{p}\right\vert ^{2}=m^{2}$. The scalar
dot product of two 4-momenta, $p_{1}.p_{2}=E_{1}E_{2}-\mathbf{p}_{1}.\mathbf{p%
}_{2}$, is invariant (frame independent).
In this appendix we describe three-body reaction kinematics and
cross-sections in terms of invariant dot products of 4-momenta [39], [40].
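The invariant dot product defined above is straightforward to encode; the following helper (our own illustration, not code from this work) verifies the mass-shell condition $p^{2}=m^{2}$:

```python
def mdot(p, q):
    """Minkowski dot product of 4-momenta p = (E, px, py, pz),
    with metric signature (+, -, -, -)."""
    return p[0] * q[0] - (p[1] * q[1] + p[2] * q[2] + p[3] * q[3])

# A particle with E = 5 and |p| = 3 has p^2 = 25 - 9 = 16, i.e. m = 4.
p = (5.0, 3.0, 0.0, 0.0)
print(mdot(p, p))  # -> 16.0
```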
\textbf{Dot Products of Momenta}
Three-body reactions consist of two particles with 4-momenta $p_{1}$ and $%
p_{2}$ and masses $m_{1}$ and $m_{2}$ in the initial state, which scatter into
three particles with 4-momenta $p_{3},p_{4},$ and $p_{5}$ and masses $m_{3}$, $%
m_{4}$ and $m_{5}$ in the final state. When the particles in the initial
state are light compared with their energies (e.g.\ electrons and
positrons), the masses $m_{1}$ and $m_{2}$ can be ignored, and we take $m_{1}=%
m_{2}\simeq 0.$
The Lorentz covariant variables, i.e.\ the dot products, for three-body
reactions with light particles in the initial state are defined as
\begin{eqnarray*}
p_{1}.p_{1} &=&m_{1}^{2}\approx 0 \\
p_{1}.p_{2} &=&\frac{1}{2}(s-m_{1}^{2}-m_{2}^{2})\simeq \frac{s}{2} \\
p_{1}.p_{3} &\equiv &\zeta \\
p_{1}.p_{4} &=&\frac{1}{2}(s+m_{1}^{2}-m_{2}^{2}-2x-2\zeta )\approx \frac{1}{2}%
(s-2x-2\zeta ) \\
p_{1}.p_{5} &\equiv &x \\
p_{2}.p_{2} &=&m_{2}^{2}\approx 0 \\
p_{2}.p_{3} &=&\frac{1}{2}(s^{\prime }+2y-2\zeta +m_{3}^{2}-m_{4}^{2}) \\
p_{2}.p_{4} &=&\frac{1}{2}(2\zeta
+2x-2y-m_{1}^{2}+m_{2}^{2}-m_{3}^{2}+m_{4}^{2}-m_{5}^{2}) \\
&\approx &\frac{1}{2}(2\zeta +2x-2y-m_{3}^{2}+m_{4}^{2}-m_{5}^{2}) \\
p_{2}.p_{5} &=&\frac{1}{2}(s-s^{\prime }-2x+m_{5}^{2}) \\
p_{3}.p_{3} &=&m_{3}^{2} \\
p_{3}.p_{4} &=&\frac{1}{2}(s^{\prime }-m_{3}^{2}-m_{4}^{2}) \\
p_{3}.p_{5} &\equiv &y \\
p_{4}.p_{4} &=&m_{4}^{2} \\
p_{4}.p_{5} &=&\frac{1}{2}(s-s^{\prime }-2y-m_{5}^{2}) \\
p_{5}.p_{5} &=&m_{5}^{2}
\end{eqnarray*}
where the variables $s$ (the square of the center-of-mass energy) and $s^{\prime
}$ are defined by
\[
s=(p_{1}+p_{2})^{2}
\]
\[
s^{^{\prime }}=(p_{3}+p_{4})^{2}
\]
The function $\zeta $ is defined as
\[
\zeta =\frac{1}{\Lambda \left( \sqrt{s},\sqrt{s^{^{\prime }}}%
,m_{5}^{2}\right) }\left[
\begin{array}{c}
-m_{5}^{2}(s+m_{1}^{2}-m_{2}^{2}) \\
+x(s^{^{\prime }}+m_{3}^{2}-m_{4}^{2})(s-s^{^{\prime }}+m_{5}^{2}) \\
+y(s+m_{1}^{2}-m_{2}^{2})(s-s^{^{\prime }}-m_{5}^{2}) \\
-2xy(s+s^{^{\prime }})%
\end{array}%
\right]
\]
For massless or light particles in the initial state, $\zeta $ is
approximated as
\[
\zeta =\frac{1}{\Lambda \left( \sqrt{s},\sqrt{s^{^{\prime }}}%
,m_{5}^{2}\right) }\left[
\begin{array}{c}
-m_{5}^{2}s(s^{^{\prime }}+m_{3}^{2}-m_{4}^{2}) \\
+x(s^{^{\prime }}+m_{3}^{2}-m_{4}^{2})(s-s^{^{\prime }}+m_{5}^{2}) \\
+ys(s-s^{^{\prime }}-m_{5}^{2}) \\
-2xy(s+s^{^{\prime }})%
\end{array}%
\right]
\]
where
\[
\Lambda (a,b,c)=\sqrt{a^{4}+b^{4}+c^{4}-2a^{2}b^{2}-2a^{2}c^{2}-2b^{2}c^{2}}
\]
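$\Lambda $ is the square root of the standard K\"all\'en (triangle) function evaluated at squared arguments; a direct transcription is:

```python
import math

def Lam(a, b, c):
    """Lambda(a, b, c) = sqrt(a^4 + b^4 + c^4
                              - 2a^2 b^2 - 2a^2 c^2 - 2b^2 c^2)."""
    val = (a**4 + b**4 + c**4
           - 2.0 * (a**2 * b**2 + a**2 * c**2 + b**2 * c**2))
    return math.sqrt(val)

# With massless second and third arguments, Lambda(sqrt(s), 0, 0) = s,
# which is the simplification used below for light initial-state particles.
print(Lam(10.0, 0.0, 0.0))  # -> 100.0
```

The function is symmetric in its three arguments, which the phase-space formulas implicitly rely on.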
\textbf{Differential Cross Section}
The covariant phase space differential cross-section for three body
reactions can be written as
\[
d\sigma =\frac{(2\pi )^{4}\left\vert M\right\vert ^{2}}{4\sqrt{%
(p_{1}.p_{2})^{2}-m_{1}^{2}m_{2}^{2}}}d\Phi _{3}
\]
where $M$ is the matrix element (amplitude) for the reaction or scattering
process and $d\Phi _{3}$ is an element of the three body phase space given by
\[
d\Phi _{3}=\frac{1}{(2\pi )^{9}}d\rho _{3}
\]
where $d\rho _{3}$ is the Lorentz invariant phase space volume element
\[
d\rho _{3}=\delta ^{4}(p_{1}+p_{2}-p_{3}-p_{4}-p_{5})\frac{d^{3}\left\vert
\mathbf{p}_{3}\right\vert }{2E_{3}}\frac{d^{3}\left\vert \mathbf{p}%
_{4}\right\vert }{2E_{4}}\frac{d^{3}\left\vert \mathbf{p}_{5}\right\vert }{%
2E_{5}}
\]
The energies and momenta are related by
\begin{eqnarray*}
p_{i} &=&(E_{i},\mathbf{p}_{i}) \\
p_{i}^{2} &=&E_{i}^{2}-\mathbf{p}_{i}^{2}=m_{i}^{2}
\end{eqnarray*}
In the center-of-mass frame
\[
\sqrt{(p_{1}.p_{2})^{2}-m_{1}^{2}m_{2}^{2}}=\left\vert \mathbf{p}%
_{lcn}\right\vert \sqrt{s}
\]
For light particles in the initial state, this expression is approximated as
\[
\sqrt{(p_{1}.p_{2})^{2}-m_{1}^{2}m_{2}^{2}}\approx \sqrt{(p_{1}.p_{2})^{2}}%
=p_{1}.p_{2}=\frac{1}{2}(s-m_{1}^{2}-m_{2}^{2})\approx \frac{s}{2}
\]
\textbf{Total Cross Section}
The Lorentz invariant three body phase space volume element in terms of
covariant dot products is given by
\[
d\rho _{3}=\frac{\pi }{2}\frac{ds^{^{\prime }}dxdyd\phi }{\Lambda \left(
\sqrt{s},m_{1},m_{2}\right) \Lambda \left( \sqrt{s},\sqrt{s^{^{\prime }}}%
,m_{5}\right) }
\]
For light particles in the initial state
\[
d\rho _{3}\approx \frac{\pi }{2}\frac{ds^{^{\prime }}dxdyd\phi }{\Lambda
\left( \sqrt{s},0,0\right) \Lambda \left( \sqrt{s},\sqrt{s^{^{\prime }}}%
,m_{5}\right) }=\frac{\pi }{2}\frac{ds^{^{\prime }}dxdyd\phi }{s\Lambda
\left( \sqrt{s},\sqrt{s^{^{\prime }}},m_{5}\right) }
\]
where $\phi $ is the azimuthal angle of $\mathbf{p}_{1}$ about $%
\mathbf{p}_{5}$.
The total cross-section as a function of the center-of-mass energy $\sqrt{s}$
is given by
\[
\sigma (\sqrt{s})=\int\limits_{s_{-}^{^{\prime }}}^{s_{+}^{^{\prime
}}}ds^{^{\prime
}}\int\limits_{x_{-}}^{x_{+}}dx\int\limits_{y_{-}}^{y_{+}}dy\int\limits_{0}^{2\pi }d\phi \left[ \frac{1}{256\pi
^{4}}\frac{\left\vert M\right\vert ^{2}}{\left\vert \mathbf{p}%
_{lcn}\right\vert \sqrt{s}\Lambda \left( \sqrt{s},m_{1},m_{2}\right) \Lambda
\left( \sqrt{s},\sqrt{s^{^{\prime }}},m_{5}\right) }\right]
\]%
and for light particles in the initial state
\[
\sigma (\sqrt{s})=\int\limits_{s_{-}^{^{\prime }}}^{s_{+}^{^{\prime
}}}ds^{^{\prime
}}\int\limits_{x_{-}}^{x_{+}}dx\int\limits_{y_{-}}^{y_{+}}dy\int\limits_{0}^{2\pi }d\phi \left[ \frac{1}{128\pi
^{4}}\frac{\left\vert M\right\vert ^{2}}{s^{2}\Lambda \left( \sqrt{s},\sqrt{%
s^{^{\prime }}},m_{5}\right) }\right]
\]
If the variable angle $\phi $ is cyclic, the integration over $\phi $ just
contributes a factor of $2\pi $ to the phase space. \ The expression then
becomes
\[
\sigma (\sqrt{s})=\int\limits_{s_{-}^{^{\prime }}}^{s_{+}^{^{\prime
}}}ds^{^{\prime }}\int\limits_{x_{-}}^{x_{+}}dx\int\limits_{y_{-}}^{y_{+}}dy%
\left[ \frac{1}{64\pi ^{3}}\frac{\left\vert M\right\vert ^{2}}{s^{2}\Lambda
\left( \sqrt{s},\sqrt{s^{^{\prime }}},m_{5}\right) }\right]
\]
The domains of integration for $s^{^{\prime }},x,$ and $y$ are given by
$s_{-}^{^{\prime }}=(m_{3}+m_{4})^{2}$
$s_{+}^{^{\prime }}=(\sqrt{s}-m_{5})^{2}$
$x_{\pm }=\frac{1}{4s}\left[ \left( s+m_{1}^{2}-m_{2}^{2}\right) \left(
s-s^{^{\prime }}+m_{5}^{2}\right) \pm \Lambda \left( \sqrt{s}%
,m_{1},m_{2}\right) \Lambda \left( \sqrt{s},\sqrt{s^{^{\prime }}}%
,m_{5}\right) \right] $
\qquad $\approx \frac{1}{4s}\left[ s\left( s-s^{^{\prime
}}+m_{5}^{2}\right) \pm s\Lambda \left( \sqrt{s},\sqrt{s^{^{\prime }}}%
,m_{5}\right) \right] $
\qquad $=\frac{1}{4}\left[ \left( s-s^{^{\prime }}+m_{5}^{2}\right)
\pm \Lambda \left( \sqrt{s},\sqrt{s^{^{\prime }}},m_{5}\right) \right] ,$
$y_{\pm }=\frac{1}{4s^{^{\prime }}}\left[ \left( s^{^{\prime
}}+m_{3}^{2}-m_{4}^{2}\right) \left( s-s^{^{\prime }}-m_{5}^{2}\right) \pm
\Lambda \left( \sqrt{s^{^{\prime }}},m_{3},m_{4}\right) \Lambda \left( \sqrt{%
s},\sqrt{s^{^{\prime }}},m_{5}\right) \right] .$
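As an illustrative numerical check, the total cross section above can be evaluated for a toy case. The sketch assumes massless initial particles and a constant matrix element $\left\vert M\right\vert ^{2}=1$ (our own simplifying choices); the $x$ and $y$ integrations then reduce to the interval widths, which we evaluate as $x_{+}-x_{-}=\Lambda (\sqrt{s},\sqrt{s^{\prime }},m_{5})/2$ and $y_{+}-y_{-}=\Lambda (\sqrt{s^{\prime }},m_{3},m_{4})\Lambda (\sqrt{s},\sqrt{s^{\prime }},m_{5})/(2s^{\prime })$, and only the $s^{\prime }$ integral is done numerically with a midpoint rule.

```python
import math

def kallen(a, b, c):
    """Triangle function used in the phase-space formulas."""
    v = a**4 + b**4 + c**4 - 2 * ((a * b)**2 + (a * c)**2 + (b * c)**2)
    return math.sqrt(max(v, 0.0))

def sigma_toy(sqrt_s, m3, m4, m5, n=400):
    """Toy total cross section (natural units, GeV^-2): massless initial
    particles, |M|^2 = 1, x and y ranges integrated analytically,
    s' integrated by the midpoint rule."""
    s = sqrt_s**2
    sp_lo = (m3 + m4)**2           # s'_- threshold
    sp_hi = (sqrt_s - m5)**2       # s'_+ endpoint
    h = (sp_hi - sp_lo) / n
    total = 0.0
    for i in range(n):
        sp = sp_lo + (i + 0.5) * h
        L2 = kallen(sqrt_s, math.sqrt(sp), m5)   # Lambda(sqrt(s), sqrt(s'), m5)
        L34 = kallen(math.sqrt(sp), m3, m4)      # Lambda(sqrt(s'), m3, m4)
        if L2 == 0.0:
            continue                              # guard the kinematic edge
        dx = L2 / 2.0                             # x_+ - x_-
        dy = L34 * L2 / (2.0 * sp)                # y_+ - y_-
        total += dx * dy / (64 * math.pi**3 * s**2 * L2) * h
    return total

print(sigma_toy(100.0, 1.0, 1.0, 5.0))  # a small positive number in GeV^-2
```

The factor $\Lambda (\sqrt{s},\sqrt{s^{\prime }},m_{5})$ in the width of the $x$ range cancels against the same factor in the denominator, so the integrand stays finite at the upper $s^{\prime }$ endpoint.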
The resulting total cross section is in units of $GeV^{-2}$. To convert it
into pbarn, we multiply it by the conversion constant
\[
(\hbar c)^{2}=0.38937966\times 10^{9}\;GeV^{2}\text{ pbarn.}
\]
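In code the conversion is a one-liner; the constant name below is our own.

```python
HBARC2_GEV2_PB = 0.38937966e9  # (hbar c)^2 expressed as GeV^2 * pbarn

def gev2_to_pbarn(sigma_gev2):
    """Convert a cross section from natural units (GeV^-2) to picobarns."""
    return sigma_gev2 * HBARC2_GEV2_PB

print(gev2_to_pbarn(1.0e-9))  # a 10^-9 GeV^-2 cross section ≈ 0.389 pbarn
```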
\part{Theoretical Framework}
\begin{center}
{\huge Introduction}
\end{center}
Progress in theoretical physics, as in all sciences, has almost
always been based on an interplay of two quite different
approaches to nature.
One starts by collecting and ordering observational, or
experimental, data (\textquotedblleft Tycho\textquotedblright ),
then describes these data by a small number of empirical laws
(\textquotedblleft Kepler\textquotedblright ), and finally
\textit{explains} these laws by a \textbf{theory} based on a
few principles (\textbf{\textquotedblleft Newton\textquotedblright }).
Theoretical predictions of the outcome of further, more
refined, observations and experiments can then be made
(\textquotedblleft discovery of Neptune\textquotedblright ).
The other approach starts from an idea, formulates it in terms of
a theory, and proceeds to make predictions which then act as a
test of the theory and of its original idea. The latter approach --
in its pure form -- has been most dramatically and singularly
successful in Einstein's development of the Theory of General
Relativity (TGR). \textbf{Su}per\textbf{sy}mmetry (\textbf{SUSY})
has started from an idea, and at the moment a huge amount of work is
going on to confirm this idea.
Modern particle physics, in seeking a single unified theory of
all elementary particles and their fundamental interactions,
appears to be reaching the limits of this process and finds itself
forced, in part and often reluctantly, to revert for guidance to
the medieval principles of symmetry and beauty.
Supersymmetric theories are highly symmetric and very beautiful.
They are remarkable in that they unify fermions (matter) with
bosons (force carriers), either in flat space (SUSY) or in curved
space, supergravity (SUGRA). \textbf{Su}per\textbf{gra}vity
\textbf{(SUGRA)} naturally unifies gravity with the other
interactions. None of the present model theories is in any sense
complete; the hurdles on the way to experimental predictions -- and
thus to acceptance or rejection -- have not yet been cleared. What
naive immediate predictions can be made seem to be in disagreement
with nature. Yet this particular field of research appears to
promise solutions of so many outstanding problems that it has
excited enthusiasm in large parts of the physics community (and
equally large doubts in others). In a truly philosophical spirit
it has even been said of the theory \textbf{that it is so
beautiful that it is hard to believe it incorrect.}
In high energy physics, or as it is sometimes called, elementary
particle physics, the hope is that we will eventually achieve a
unified scheme which combines all particles and all interactions
into one consistent theory. We wish to make further progress on
the way which started with Maxwell's unification of magnetism
and electrostatics, and which has more recently led to unified
gauge theories (UGT) of the electromagnetic and the weak, and perhaps
also of the strong interactions.
\textsl{Supersymmetry }is, by definition, a symmetry between
fermions and bosons. A supersymmetric field theoretical model
consists of a set of quantum fields and of a Lagrangian for them
which exhibit such a symmetry. The Lagrangian determines, through
the Action Principle, the equations of motion and hence the
dynamic behavior of the particles. A supersymmetric model which is
covariant under general coordinate transformations, or equally, a
model which possesses a local (\textquotedblleft
gauged\textquotedblright ) supersymmetry, is called a supergravity
model. Supersymmetric theories describe model worlds of particles,
created from the vacuum by the fields, and the interactions
between these particles. The supersymmetry manifests itself in the
particle spectrum and in the stringent relationship between
different interaction processes even if these processes involve
particles of different spin and statistics.
Both supersymmetry and supergravity aim at a unified description of \textit{%
fermions} and \textit{bosons,} and hence of matter and
interactions. Supergravity is particularly promising in its
attempt to \textit{unify gravity with the other interactions. }All
supersymmetric models succeed to some extent in these aims, but
they fail in describing the real world as we live in and
experience it, and thus are models, not theories. We still
struggle to find some contact between one of the models and the
physical world, reality, so that the model could become an
underlying theory for nature at its \textquotedblleft most
fundamental level\textquotedblright .
By \textquotedblleft\ most fundamental level\textquotedblright\ we
mean at present the decomposition of matter into quarks and
leptons (fermions) and the understanding of all forces between
them as arising out of four types of basic interactions, namely,
\textbf{gravitational, weak, electromagnetic, and strong}. These
are described in terms of exchanged particles (bosons ). The
framework within which these building blocks make up a physical
theory is \textit{Relativistic Quantum Field Theory. }Seen at this
level, \textquotedblleft\ unification\textquotedblright\ ought to
include all four interactions. There is, however, a quantitative
and a qualitative difference between the gravitational interaction
and the others which has had profound consequences on both the
structure of the universe and on our understanding of it.
Unification of gravity with the other forces is an elusive goal.
Gravity is always attractive and of long range, and it
makes all sufficiently large physical objects collapse under
their own weight; at short distances, however, it is not strong
enough to have any remarkable effect with respect to the other
interactions.
This difference in strength, necessary to the universe as we see,
has in turn set gravity so far apart from the rest of physics that
it is difficult to think of any experiment which could actually
test predictions of a unified field theory of all interactions,
and even less of one that could provide experimental input into
the construction of such a theory. The natural domain of Newtonian
gravity and of its modern replacement, Einstein's TGR, is the
world of large distances and massive accumulations of matter; that
of the other forces is the world of atoms, nuclei, and elementary
particles. Very many orders of magnitude separate them.
The \textit{strong, electromagnetic and weak interactions }are
fairly well understood today. It has been found that their exchange
particles arise naturally in a quantum field theory if that field
theory is required to be locally gauge invariant.
The theories for both gravitation and elementary particle
interactions are well established within their respective domains.
On the submicroscopic level, for the masses and distances
involved, the deviations introduced by gravity from the flat
Minkowskian metric are so minute that elementary particles, or
atoms, can safely be treated as if gravitation did not exist. Any
\textquotedblleft \textbf{true}\textquotedblright , that is,
generally covariant, theory should thus be closely approximated by
Lorentz-covariant, non-gravitational theories. We must, however,
demand that the \textit{true} theory be mathematically
consistent and that it predict the correct flat limit. Every
quantum theory of gravitation so far fails to do so.
The energy at which gravity and quantum effects become of
comparable strength can be estimated from the only expression with
the dimension of an energy that can be formed from the constants of
nature $\hbar ,c,G$:
\begin{equation*}
E_{Planck}=c^{2}\sqrt{\frac{\hbar c}{G}}\simeq 10^{19}GeV
\end{equation*}
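This estimate is easy to reproduce numerically; the sketch below uses CODATA-style SI values for the constants (quoted here for illustration) and converts the result to GeV.

```python
import math

# SI values of the constants (CODATA-style, quoted for illustration)
hbar = 1.054571817e-34   # J s
c    = 2.99792458e8      # m / s
G    = 6.67430e-11       # m^3 kg^-1 s^-2
GeV  = 1.602176634e-10   # joules per GeV

E_planck = c**2 * math.sqrt(hbar * c / G)   # Planck energy in joules
print(E_planck / GeV)                        # ≈ 1.22e19 GeV
```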
It is in the region of this energy, \textit{Planck's energy},
where our present theories for the gravitational and other
interactions become incompatible with each other and where we
expect a possible unification of the interactions to become
manifest. A point particle with Planck mass would have a
Schwarzschild radius equal to twice its Compton wavelength. The
very remoteness of such an energy region eliminates all hope for a
direct experimental proof. Perhaps, if we are lucky enough, some
isolated prediction of such a unified theory could be testable on
a system that integrates a minute effect over a vast range (proton
decay experiments are of this type, where large numbers of
available protons can make the very small decay probabilities
observable and measurable). We can, however, not expect
experimental physics to give us much reliable guidance into the
\textit{Planck region.}
The $SU(3)\times SU(2)\times U(1)$ picture of the
non-gravitational forces is not yet a \textit{unified} one. The
only property which unites them is that they are each described by
a gauge theory. The fact that the direct product structure of
$SU(2)\times U(1)$ is \textquotedblleft skew\textquotedblright\
(by the weak mixing angle) against the natural distinction between
the weak and electromagnetic force may suggest some underlying
unified scheme which can predict that angle. A lot of work has,
over many years, gone into finding a larger gauge group which
would describe all three interactions at some high energy. If such
a grand unification occurred, it is known that it must happen at
energies of about $10^{15}GeV,$ only four orders of magnitude less
than $E_{Planck}.$ \textit{Grand Unified Theories} (GUTs) have had
some successes (such as the prediction of the mixing angle) and some
failures (such as proton lifetime experiments). In any case, even
a GUT would at most unify different kinds of interactions (strong
and electroweak) with each other and different kinds of matter
(quarks and leptons) with each other. The unification of matter
with interactions is not one of the aims of GUTs.
What is it then that points in the direction of supersymmetric
theories for a \textit{solution to the unification problem?}
Already the most obvious difference between gravity and, say,
electrodynamics, namely the absence of negative gravitational
charges, can be shown to imply that only a supersymmetric theory
can unify them. As long as we do not dramatically deviate from
standard quantum field theory, and we hope that this will not be
necessary, a purely attractive force must be carried by a field
with even integer spin. The electromagnetic force, on the other
hand, which is of course not always attractive, is carried by a
field of spin one. A number of no-go theorems, about which we will
have to know more later, forbid any direct symmetry
transformations between fields of different integer spin and
actually leave supersymmetric theories as the only field
theoretical models which achieve unification of all forces of
nature. Supersymmetry transformations do not directly relate
fields of different integer spin; rather they relate a graviton (a
quantum of gravity with spin 2) to a photon (spin 1) via a spin
$\frac{3}{2}$ intermediary, the \textit{gravitino}.
A partial unification of matter (fermions) with interactions
(bosons) thus arises naturally out of the attempt to unite gravity
with the other interactions.
The no-go theorems imply that \textit{supersymmetry} and \textit{supergravity%
} are the only possibilities for unification within the framework of
quantum field theory. Failure of these theories to ultimately give
results which are compatible with the real world would force us to
give up either unification or quantum field theory.
Apart from the no-go theorems there is a further, more technical
point that singles out supersymmetric theories; they may resolve
the nonrenormalizability problem of quantized gravity. In
perturbative quantum field theory fermions and bosons tend to
contribute with opposite signs to higher order corrections.
Supersymmetry almost always achieves a fine-tuning
between these contributions which makes some a priori present and \textit{%
divergent terms vanish}. For a long time now there has been great
optimism that one of the supergravity models may be entirely free of
infinities and thus be a consistent model of quantized gravitation
with no need for renormalization. This hope is slowly fading now,
but only to the extent that no obvious proof can be found for the
conjectured finiteness. What remains is a remarkably friendly
behavior of all supersymmetric theories when it comes to quantum
divergences, and the conviction of many theorists that it would be
surprising if nature did not make use of it. We will have more to
say later about \textit{cancellations of divergences} and the
enormous interest which they have aroused, not only for gravity
but also for the \textit{hierarchy problem} of GUTs, the thirteen
orders of magnitude between the GUT mass of $10^{15}GeV/c^{2}$ and
the $W$-boson mass. Normally, a gap of this size is not stable in
perturbation theory because of the considerable admixture of the
large mass to the small one through vacuum polarization. The gap
can only be maintained by repeated fine-tuning up to high orders
in the perturbation expansion. In supersymmetric versions of GUTs
new particles are exchanged and pair-created, and these new
processes cancel some of the effects of the old ones. Mass mixing
and consequently fine-tuning can usually be avoided, and the
hierarchy, once established, is stabilized.
\begin{center}
\textbf{\Large Outline of Part One:}
\end{center}
This part consists of four chapters, of which the fourth chapter is
the main one.
In the \textbf{first chapter, }Introduction, an introduction to
the subjects presented in this part is given (this one).
In the \textbf{second chapter, }the \textit{Standard Model of
Electroweak Interactions }is presented, with brief background
examples of gauge theories and the Higgs mechanism.
In the \textbf{third chapter, }the \textit{Supersymmetry concepts,
Supersymmetry Algebra, and Supersymmetric Models }are introduced.
And in the \textbf{fourth chapter, }the \textsl{Minimal
Supersymmetric
Standard Model }is studied, including\textsl{\ the extended Higgs model }and%
\textsl{\ the particle spectrum of the model.}
\chapter{Standard Model of Electroweak Interactions}
\section{Introduction}
THE invention of unified renormalizable theories of electroweak
interactions is actually one of the outstanding successes of
elementary particle physics. The first of these theories was the
theory of Glashow, Weinberg and Salam ($\mathbf{GWS}$) and is
known as the standard electroweak theory.
In 1961 Glashow constructed a model for the weak and
electromagnetic interactions of leptons which was based on the
assumption that, together with the photon, there exist also charged
$W$ and neutral $Z$ intermediate bosons. The masses of the
$W$ and $Z$ bosons were introduced \textquotedblleft by
hand\textquotedblright , \textit{ad hoc.} As a result, the model
was unrenormalizable. In 1967-1968 Weinberg and Salam constructed
the $SU(2)\times U(1)$ model of electroweak interactions of
leptons including a spontaneous breakdown of the gauge symmetry.
In 1971-1972 it was proved by 't Hooft and others that models of
this type were renormalizable. The model was generalized to quarks
using the mechanism proposed by Glashow, Iliopoulos, and Maiani.
The $GWS$ theory is based on the assumption of the existence of
charged and neutral intermediate vector bosons and it is
constructed so that, for massless fundamental fermions (leptons
and quarks), a local $SU(2)\times U(1)$ gauge invariance takes
place. Then the interaction (again locally gauge invariant) of
Higgs scalar fields with both gauge vector bosons and fermions is
introduced. As a consequence of the spontaneous breakdown of the
underlying symmetry, leptons, quarks, and intermediate bosons all
acquire masses.
The only free parameter which enters in the definition of the
neutral current in the $GWS$ theory is $\sin ^{2}\theta _{W}$
(where $\theta _{W}$ is the Weinberg angle).
Neutral currents were discovered at CERN in 1973 in an experiment
using the large bubble chamber \textquotedblleft
Gargamelle\textquotedblright . In
this experiment the process $\overline{\nu }_{\mu }+e\rightarrow \overline{%
\nu }_{\mu }+e$ was observed. After the pioneering work of the
\textquotedblleft Gargamelle\textquotedblright\ collaboration, a
large number of experiments were done investigating various
neutral current induced processes. After this work it became
possible to perform a complete phenomenological analysis of all
the neutral current data. As a result one could uniquely determine
all the coefficients appearing in the most general
phenomenological $V,$ $A$ expressions written for hadron and
lepton neutral
currents. It was shown that this unique solution is in agreement with the $%
GWS$ theory.
In 1980-1981, in experiments on the $e^{+}e^{-}$ colliding beams,
information was obtained on the contribution of neutral
currents to the cross sections of the process \
$e^{+}+e^{-}\rightarrow l^{+}+l^{-}(l=e,\mu ,\tau ).$ These data
also agreed with the \textsl{standard electroweak model.}
The $GWS$ theory predicts the values of the charged $(W)$ and
neutral $(Z)$ intermediate boson masses, namely, $\ m_{W}\sim
80GeV$ and $m_{Z}\sim 91GeV.$
The discovery in 1983 of the $W$ and $Z$ bosons at the $CERN$
$p\overline{p}$ collider, with exactly the predicted masses, was a
dramatic confirmation of the $GWS$ theory.
In the $GWS$ theory $\sin ^{2}\theta _{W}$ is a free parameter. It
is related
to the values of the $W$ and $Z$ masses as $\sin ^{2}\theta _{W}=1-(\frac{%
m_{W}^{2}}{m_{Z}^{2}})\sim 0.23$.
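The quoted value is easy to check from the measured masses; the numbers below are present-day measured values, inserted here for illustration.

```python
# Measured boson masses in GeV (illustrative present-day values)
m_W, m_Z = 80.379, 91.1876

sin2_theta_W = 1.0 - (m_W / m_Z)**2
print(round(sin2_theta_W, 3))  # → 0.223, consistent with the quoted 0.23
```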
\section{Gauge Invariance}
The concept of gauge invariance [1] grew out of the observation that if a
\textquotedblleft charge\textquotedblright\ (e.g. electric
charge, total energy, isospin, etc.) is conserved in a dynamical
system, then the Lagrangian for the system is invariant under a
\textquotedblleft global gauge transformation\textquotedblright\
of the fields. For example, the electric charge is related to
invariance under
phase transformations $\psi \rightarrow e^{iq\theta }\psi $ for all fields $%
\psi $ which describe particles of charge $q$. Similarly, the
energy is related to time translations $\psi (t,x)\rightarrow \psi
(t+\Delta t,x).$ The converse is also true (Noether's theorem); if
the Lagrangian is invariant under some infinitesimal
transformation $\psi \rightarrow \psi +\delta \psi , $ then there
is a conserved current and a conserved charge associated with this
gauge invariance. (\textquotedblleft Gauge\textquotedblright\ is
an unfortunate naming, originating in an attempt by H. Weyl in
1918 to relate the electric charge to the re-scaling transformation
$\psi \rightarrow e^{\lambda }\psi $.) We call transformations
global if their parameters do not depend on the space-time
coordinates, i.e. if $\theta =\mathrm{const}.$ This relationship between
conserved quantum numbers and global symmetries of the Lagrangian
led, in the 1960's, to a search for globally-invariant field
theories capable of describing and classifying all elementary
particles. The \textbf{\textquotedblleft 8-fold
way\textquotedblright } was very much in this vein, and it was in this
context that quarks were first postulated as building blocks of
strongly interacting particles.
The requirement of \textsl{local gauge invariance} (also known
as \textquotedblleft gauge invariance of the 2$^{nd}$
kind\textquotedblright ) goes beyond that which can be inferred
from charge conservation. We now demand invariance of the
Lagrangian under transformations with a space-time dependent
parameter $\theta =\theta (x)$. This requires the introduction of
a compensating gauge field, whose interaction, resulting in
the exchange of the field quanta, will generate forces between the
particles. \textquotedblleft Gauging\textquotedblright\ the phase
transformations associated with electric charge (i.e. making them $%
x$-dependent) forces us to introduce the electromagnetic
four-vector potential and, as its quanta, the photons. The result
is quantum electrodynamics. Requiring other gauge invariances
requires additional gauge potentials which give rise to more
exchange particles and to the other
interactions. These exchanged particles are the discovered $W^{\pm }$ and $%
Z^{0}$ for the weak force and the gluons for the strong
interactions. The latter have only been indirectly seen, in their
effects on the distribution of the debris in high energy particle
collisions (\textsl{jets}). To sum up, \textquotedblleft
gauging\textquotedblright\ an invariance of the Lagrangian will
always give rise to interactions and to forces.
The gauge transformation under which the Lagrangian is invariant
forms a group as it fulfills the axioms of a group (in the
mathematical sense):
\ \ \ 1- Two subsequent invariance transformations are again an invariance,
\ \ \ 2- \textquotedblleft No transformation\textquotedblright\
is the identity element,
\ \ \ 3- There is exactly one inverse transformation for each
invariance transformation, and
\ \ \ 4- Subsequent transformations are associative.
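For the abelian phase transformations discussed above, the group axioms can be verified in a few lines of code; the field value and parameters below are arbitrary sample choices.

```python
import cmath

def gauge(theta):
    """Global U(1) phase transformation acting on a complex field value."""
    return lambda psi: cmath.exp(1j * theta) * psi

psi0 = 0.8 + 0.6j          # a sample field value
t1, t2 = 0.4, 1.1          # two sample transformation parameters

closure  = gauge(t1)(gauge(t2)(psi0))   # two subsequent transformations ...
combined = gauge(t1 + t2)(psi0)         # ... equal a single one (axiom 1)
identity = gauge(0.0)(psi0)             # theta = 0 changes nothing (axiom 2)
undone   = gauge(-t1)(gauge(t1)(psi0))  # theta -> -theta is the inverse (axiom 3)
# axiom 4 (associativity) holds because complex multiplication is associative
```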
Using the standard terminology of groups, the respective gauge groups are $%
SU(3)$ for the strong interactions and $SU(2)\times U(1)$ for the
electroweak interactions. The $SU(3)$ transformations act on
triplets of quarks whose properties are very similar. They are
said to differ only in \textquotedblleft\
\textbf{color\textquotedblright }, hence the name
quantum-chromodynamics $(QCD)$ \ for the $SU(3)$ gauge theory of
the strong interactions. The success of gauge theories in
describing a variety of elementary particle phenomena eclipsed
the role played by global invariance, and nowadays such global
symmetries are thought of as more or less accidental -- if indeed
they are present. In this context it is already important to
mention that local (gauged) supersymmetry will imply supergravity.
\section{Renormalization}
\textsl{Renormalization} is required in all quantum field theories
in order to make sense of divergent integrals which appear in the
perturbation expansions for physical processes. Such expansions
are unfortunately the only calculational tools currently available
for solving the equations of motion of the theory; they are
usually conceptualized in terms of vacuum polarizations and
virtual particle interactions and are illustrated by Feynman
diagrams. In renormalizable theories, the divergences which appear
can be absorbed by redefining, in each order of the perturbation
expansion, a finite number of theoretical parameters in such a way
that the results of \textquotedblleft test
experiments\textquotedblright\ are reproduced. Other processes can
then be calculated uniquely to the same order. In the lowest order, the
parameters which must be so renormalized typically represent vacuum
energies, masses, coupling constants and factors which multiply
the wave functions. Correspondingly, one speaks of
\textquotedblleft vacuum, mass, coupling constant, and wave
function renormalization\textquotedblright . One of the strongest
motivations for gauge theories is their renormalizability.
A theory is called non-renormalizable if infinitely many
parameters must be redefined. Such a \textquotedblleft
theory\textquotedblright\ can make no predictions and is therefore
not a theory in the sense of exact science. In general, coupling
constants with negative mass dimensions (for $\hbar =c=1$) lead to
non-renormalizable theories. No matter how we attempt to quantize
gravity, we end up with a field theory whose coupling constant, \textbf{%
Newton's} \textsl{universal gravitational constant} $G$, has dimensions of $%
\frac{1}{mass^{2}}$ in these units, and quantum gravity is
therefore non-renormalizable.
\section{Quantum Electrodynamics}
Quantum Electrodynamics ($QED$) is the gauge invariant theory
which describes all relevant experimental data. As an example,
consider the electron field $\psi (x).$ The free Lagrangian of
this field has the
standard form [2],
\begin{equation}
\mathfrak{L}=-\overline{\psi }(\gamma _{\alpha }\partial _{\alpha
}+m)\psi
\end{equation}
where $m$ is the mass of the electron, $\partial _{\alpha }\equiv \frac{%
\partial }{\partial x_{\alpha }}.$ The Lagrangian (1.1) is invariant with
respect to the global gauge transformation%
\begin{equation}
\psi (x)\rightarrow \psi ^{^{\prime }}(x)=e^{i\lambda }\psi (x),
\end{equation}
where $\lambda $ is an arbitrary real constant. It is obvious that
the Lagrangian $(1.1)$ is not invariant with respect to the local
gauge transformation
\begin{equation}
\psi (x)\rightarrow \psi ^{^{\prime }}(x)=U(x)\psi (x)
\end{equation}
where
\begin{equation*}
U(x)=\exp \{i\lambda (x)\}
\end{equation*}
and where $\lambda (x)$ is an arbitrary real function of $x$. The
derivative $\partial _{\alpha }\psi (x)$ is not transformed
under (1.3) in the same way as the field $\psi (x)$ itself. Indeed, we have
\begin{equation*}
\partial _{\alpha }\psi ^{^{\prime }}(x)=U(x)\left( \partial _{\alpha
}+i\partial _{\alpha }\lambda (x)\right) \psi (x)
\end{equation*}
As is well known, the local gauge invariance (1.3) can be
maintained provided that the interaction of the field $\psi $ with the
electromagnetic field $A_{\alpha }$ is introduced. Considering the
quantity $(\partial _{\alpha }-ieA_{\alpha })\psi $ ($e$ is the
electron charge), we will have
\begin{equation}
(\partial _{\alpha }-ieA_{\alpha }(x))\psi (x)=U^{-1}(x)(\partial
_{\alpha }-ieA_{\alpha }^{^{\prime }}(x))\psi ^{\prime }(x)
\end{equation}
where
\begin{equation}
A_{\alpha }^{\prime }(x)=A_{\alpha }(x)+\frac{1}{e}\partial _{\alpha
}\lambda (x)
\end{equation}
From (1.4) it is obvious that the Lagrangian, which follows from
(1.1) by the substitution
\begin{equation}
\partial _{\alpha }\psi \rightarrow (\partial _{\alpha }-ieA_{\alpha })\psi
\end{equation}
is now invariant with respect to the gauge transformation (1.3)
and (1.5).
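This invariance can be checked numerically in a one-dimensional toy setting: under the combined transformations (1.3) and (1.5) the covariant derivative of the field transforms with the same phase factor as the field itself. All sample functions and the charge value below are illustrative choices of ours, and derivatives are taken by central finite differences.

```python
import cmath

e   = 0.5                                           # illustrative charge value
lam = lambda x: 0.3 * x**2                          # gauge function lambda(x)
psi = lambda x: (1 + 0.2 * x) * cmath.exp(1j * x)   # sample matter field
A   = lambda x: 0.7 * cmath.sin(x)                  # sample gauge potential

def d(f, x, h=1e-6):
    """Central finite-difference derivative."""
    return (f(x + h) - f(x - h)) / (2 * h)

U      = lambda x: cmath.exp(1j * lam(x))     # U(x) = exp(i lambda(x))
psi_tr = lambda x: U(x) * psi(x)              # transformed field, as in (1.3)
A_tr   = lambda x: A(x) + d(lam, x) / e       # transformed potential, as in (1.5)

x0  = 1.3
lhs = d(psi_tr, x0) - 1j * e * A_tr(x0) * psi_tr(x0)   # (d - ieA')psi'
rhs = U(x0) * (d(psi, x0) - 1j * e * A(x0) * psi(x0))  # U (d - ieA)psi
print(abs(lhs - rhs))  # vanishes up to finite-difference error
```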
To construct the complete Lagrangian of the system under
consideration, we have to add also the gauge invariant Lagrangian
of the electromagnetic field. The tensor of the electromagnetic
field is given as
\begin{equation}
F_{\alpha \beta }=\partial _{\alpha }A_{\beta }-\partial _{\beta
}A_{\alpha }
\end{equation}
Clearly, $F_{\alpha \beta }^{\prime }=F_{\alpha \beta }\,$,
consequently, the gauge invariant Lagrangian of the fields of
electrons and photons takes the form
\begin{equation}
\mathfrak{L}=-\overline{\psi }[\gamma _{\alpha }(\partial _{\alpha
}-ieA_{\alpha })+m]\psi -\frac{1}{4}F_{\alpha \beta }F_{\alpha
\beta }
\end{equation}
The substitution of the derivative $\partial _{\alpha }\psi $ by
the covariant derivative $(\partial _{\alpha }-ieA_{\alpha })\psi
$ in the free Lagrangian of the field $\psi $ leads to the
following interaction Lagrangian for electrons and photons;
\begin{equation}
\mathfrak{L}_{i}=iej_{\alpha }A_{\alpha }
\end{equation}
where $j_{\alpha }=\overline{\psi }\gamma _{\alpha }\psi $ is the
electromagnetic current. Thus the substitution (1.6) fixes
uniquely the form of the interaction Lagrangian. Such an
interaction is called minimal electromagnetic interaction. Let us
note however that the principle of gauge invariance alone does not
fix the interaction Lagrangian uniquely. For example, the addition
of the Pauli term $\mu \overline{\psi }\sigma _{\alpha \beta }\psi
F_{\alpha \beta }$ to the Lagrangian (1.8) does not spoil the
gauge invariance of the theory, $(\mu $ is the anomalous magnetic
moment$)$.
All available experimental data confirm that the Lagrangian (1.9)
is the true Lagrangian which governs the interactions of electrons
and photons. It is also well known that electrodynamics, with the
minimal interaction (1.9), is a renormalizable theory.
\section{Yang-Mills Theory}
The modern theory of weak interactions is constructed in analogy
with quantum electrodynamics. We know from experiment that the
Hamiltonian of weak interactions contains charged currents.
Therefore, to construct a theory of weak interactions we have to
start with a gauge theory containing fields of charged vector
particles. Such a theory does exist. It is the \textbf{Yang-Mills}
theory which we will now briefly present.
consider the doublet
\begin{equation*}
\psi =\binom{\psi ^{(1)}}{\psi ^{(-1)}}
\end{equation*}
of the group $SU(2)$ ($\psi ^{(1)},\psi ^{(-1)}$ are spinor fields). The Lagrangian of the field $\psi $ is
written as
\begin{equation}
\mathfrak{L}_{0}=-\overline{\psi }(\gamma _{\alpha }\partial
_{\alpha }+m)\psi
\end{equation}
where $m$ is the common mass of particles, which corresponds to the fields $%
\psi ^{(1)},\psi ^{(-1)}.$ Obviously, the Lagrangian (1.10) is
left invariant with respect to the global $SU(2)$ transformation
\begin{equation}
\psi (x)\rightarrow \psi ^{\prime }(x)=\exp \{i\frac{1}{2}\tau
\lambda \}\psi (x)
\end{equation}
Here $\tau _{i}$ are Pauli matrices and $\lambda _{i}$ are real
constants.
We are now interested in the conditions under which the Lagrangian
of the system is invariant with respect to the local $SU(2)$
transformations
\begin{equation}
\psi (x)\rightarrow \psi ^{\prime }(x)=U(x)\psi (x)
\end{equation}
where
\begin{equation*}
U(x)=\exp \{i\frac{1}{2}\tau \lambda (x)\}
\end{equation*}
and where $\lambda (x)$ are arbitrary real functions of $x$. It is
sufficient
to consider only the infinitesimal transformations (1.12). The parameters $%
\lambda _{i}$ will be taken as infinitesimal and in all expansions
in powers of $\lambda _{i}$ we shall keep only the linear terms.
Thus we have
\begin{equation}
U(x)\cong 1+i\frac{1}{2}\tau \lambda (x)
\end{equation}
Next, we get
\begin{equation}
\partial _{\alpha }\psi (x)=U^{-1}(x)\left( \partial _{\alpha }-i\frac{1}{2}%
\tau \partial _{\alpha }\lambda (x)\right) \psi ^{\prime }(x)
\end{equation}
It is clear from equation (1.14) that the Lagrangian (1.10) is not
invariant under the transformation of (1.12). To construct a gauge
invariant theory in
analogy with electrodynamics, we thus introduce, besides the field $\psi $%
,the vector field $A_{\alpha }$. consider the quantity
\begin{equation}
\left( \partial _{\alpha }-ig\frac{1}{2}\tau A_{\alpha }\right)
\psi (x)
\end{equation}
where $g$ is a dimensionless constant. Using equation (1.13) and
the
commutation relations $\left[ \frac{1}{2}\tau _{i},\frac{1}{2}\tau _{j}%
\right] =i\epsilon _{ijk}\frac{1}{2}\tau _{k},\,$\ we find
\begin{eqnarray}
\left( \partial _{\alpha }-ig\frac{1}{2}\tau A_{\alpha }\right)
\psi (x) &=&U^{-1}(x)U(x)\left( \partial _{\alpha
}-ig\frac{1}{2}\tau A_{\alpha
}\right) U^{-1}(x)\psi ^{\prime }(x) \notag \\
&=&U^{-1}(x)\left( \partial _{\alpha }-ig\frac{1}{2}\tau A_{\alpha
}^{\prime }(x)\right) \psi ^{\prime }(x),
\end{eqnarray}
with
\begin{equation}
A_{\alpha }^{\prime }(x)=A_{\alpha }(x)+\frac{1}{g}\partial
_{\alpha }\lambda (x)-\lambda (x)\times A_{\alpha }(x)
\end{equation}
The field $A_{\alpha }$ is called a $Yang-Mills$ $field.$ It is
seen from
equation (1.17), that under the global $SU(2)$ transformations the fields $%
A_{\alpha }$ transforms as a triplet.
Thus, as it is seen from the expression (1.16), the covariant derivative $%
\partial _{\alpha }-ig\frac{1}{2}\tau A_{\alpha }$ applied to the field $%
\psi ,$ transforms under the gauge transformations (1.12) and
(1.17) as the field $\psi $ itself (a primed quantity is obtained
from the unprimed one through its multiplication by the matrix $U$).
This means that the substitution for the derivative $\partial
_{\alpha }\psi $ in the free Lagrangian by the covariant
derivative (1.15) leads to a Lagrangian, which is invariant with
respect to the gauge transformations (1.12) and (1.17).
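Explicitly, equation (1.16) states that
\begin{equation*}
\left( \partial _{\alpha }-ig\frac{1}{2}\tau A_{\alpha }^{\prime }\right)
\psi ^{\prime }(x)=U(x)\left( \partial _{\alpha }-ig\frac{1}{2}\tau
A_{\alpha }\right) \psi (x),
\end{equation*}
which is the precise sense in which the covariant derivative of $\psi $ transforms as the field $\psi $ itself.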
To construct the gauge invariant Lagrangian of the field $A$,
consider the quantity
\begin{equation}
F_{\alpha \beta }=\partial _{\alpha }A_{\beta }-\partial _{\beta
}A_{\alpha }+gA_{\alpha }\times A_{\beta }
\end{equation}
With the help of equation (1.17) it is easy to check that
\begin{equation}
F_{\alpha \beta }^{\prime }=F_{\alpha \beta }-\lambda \times
F_{\alpha \beta }
\end{equation}
It is immediately seen that the quantity $F_{\alpha \beta
}F_{\alpha \beta }$ is a group scalar. In analogy with
electrodynamics we take the Lagrangian of the field $A_{\alpha
}$in the form
\begin{equation}
\mathfrak{L}_{0}^{\prime }=-\frac{1}{4}F_{\alpha \beta }F_{\alpha
\beta }
\end{equation}
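The invariance of (1.20) can be checked directly to first order in $\lambda $: using (1.19),
\begin{equation*}
F_{\alpha \beta }^{\prime }F_{\alpha \beta }^{\prime }=F_{\alpha \beta
}F_{\alpha \beta }-2F_{\alpha \beta }\cdot \left( \lambda \times F_{\alpha
\beta }\right) +O(\lambda ^{2})=F_{\alpha \beta }F_{\alpha \beta
}+O(\lambda ^{2}),
\end{equation*}
since the mixed product $F_{\alpha \beta }\cdot \left( \lambda \times
F_{\alpha \beta }\right) $ vanishes identically.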
Thus, if the interaction of the fields $\psi $ and $A_{\alpha }$
is introduced through the \textquotedblleft\
minimal\textquotedblright\ substitution $\ \partial _{\alpha }\psi
\rightarrow \left( \partial _{\alpha }-ig\frac{1}{2}\tau A_{\alpha
}\right) \psi ,$ the total Lagrangian of the system under
consideration has the form
\begin{equation}
\mathfrak{L}=-\overline{\psi }\left[ \gamma _{\alpha }\left(
\partial
_{\alpha }-ig\frac{1}{2}\tau A_{\alpha }\right) +m\right] \psi -\frac{1}{4}%
F_{\alpha \beta }F_{\alpha \beta }
\end{equation}
Consequently, the interaction Lagrangian of the fields $\psi $
and $A_{\alpha }$ is as follows:
\begin{equation}
\mathfrak{L}_{i}=ig\overline{\psi }\gamma _{\alpha
}\frac{1}{2}\tau \psi A_{\alpha }
\end{equation}
The constant $g$ introduced before becomes the interaction
constant. Therefore the \textquotedblleft\
minimal\textquotedblright\ substitution $\
\partial _{\alpha }\psi \rightarrow \left( \partial _{\alpha }-ig\frac{1}{2}%
\tau A_{\alpha }\right) \psi $ fixes uniquely the interaction
Lagrangian of the fields $\psi $ and $A_{\alpha }$. We have
arrived at the \ \ \textquotedblleft\ minimal\textquotedblright\
interaction Lagrangian for the fields $\psi $ and $A_{\alpha }$,
which is compatible with gauge invariance. One should notice also
that owing to the non-linear term $gA_{\alpha }\times A_{\beta }$
which appears in the expression (1.18) written for the field
tensor $F_{\alpha \beta },$ the Lagrangian (1.21) contains terms
which are responsible for the self-interaction of the field
$A_{\alpha }$.
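Indeed, substituting (1.18) into $-\frac{1}{4}F_{\alpha \beta }F_{\alpha \beta }$ gives, besides the free quadratic part, cubic and quartic self-interaction terms:
\begin{equation*}
-\frac{1}{4}F_{\alpha \beta }F_{\alpha \beta }=-\frac{1}{4}\left( \partial
_{\alpha }A_{\beta }-\partial _{\beta }A_{\alpha }\right) \left( \partial
_{\alpha }A_{\beta }-\partial _{\beta }A_{\alpha }\right) -\frac{g}{2}\left(
\partial _{\alpha }A_{\beta }-\partial _{\beta }A_{\alpha }\right) \cdot
\left( A_{\alpha }\times A_{\beta }\right) -\frac{g^{2}}{4}\left( A_{\alpha
}\times A_{\beta }\right) \cdot \left( A_{\alpha }\times A_{\beta }\right) .
\end{equation*}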
Notice that a mass term $-\frac{1}{2}m_{\gamma }^{2}A_{\alpha
}A_{\alpha }$ for the gauge field can not be added to the
Lagrangian of the fields of electrons and photons because its
presence would destroy the gauge invariance of the theory. This
means that the mass of the photon is equal to zero. In the case
of the Yang-Mills theory, the imposed gauge invariance likewise does
not allow a mass term of the form $-\frac{1}{2}m^{2}A_{\alpha
}A_{\alpha }$. Consequently, the particles of the fields
$A_{\alpha }$ are all massless.
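This can be seen directly: under $A_{\alpha }^{\prime }=A_{\alpha }+\frac{1}{e}\partial _{\alpha }\lambda $ one finds, for example,
\begin{equation*}
-\frac{1}{2}m_{\gamma }^{2}A_{\alpha }^{\prime }A_{\alpha }^{\prime }=-\frac{%
1}{2}m_{\gamma }^{2}A_{\alpha }A_{\alpha }-\frac{m_{\gamma }^{2}}{e}%
A_{\alpha }\partial _{\alpha }\lambda -\frac{m_{\gamma }^{2}}{2e^{2}}%
\partial _{\alpha }\lambda \partial _{\alpha }\lambda ,
\end{equation*}
so that the mass term is not invariant under the gauge transformations.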
We conclude this section with the following remark. Consider several fields $%
\psi _{i}(i=1,2,3,...,n)$ interacting with the gauge field
$A_{\alpha }$. We can write
\begin{equation}
\psi _{i}^{\prime }(x)=\exp \{i\lambda _{i}(x)\}\psi _{i}(x)
\end{equation}
and
\begin{equation}
\left( \partial _{\alpha }-ie_{i}A_{\alpha }(x)\right) \psi _{i}(x)=\exp
\{-i\lambda _{i}(x)\}\left( \partial _{\alpha }-ie_{i}A_{\alpha
}^{\prime }(x)\right) \psi _{i}^{\prime }(x)
\end{equation}
where
\begin{equation}
A_{\alpha }^{\prime }(x)=A_{\alpha }(x)+\frac{1}{e_{i}}\partial
_{\alpha }\lambda _{i}(x)
\end{equation}
$e_{i}$ are the coupling constants between the fields $\psi
_{i}$ and the gauge field $A_{\alpha }$. It is clear from
equation (1.25) that the local gauge invariance is guaranteed
provided that
\begin{equation}
\lambda _{i}(x)=e_{i}\lambda (x)
\end{equation}
($\lambda (x)$ is an arbitrary real function of $x$). Gauge
invariance does not impose any restriction on the coupling
constants $e_{i}.$
In a non-Abelian $Yang-Mills$ theory the situation is completely
different. If there are several field multiplets interacting with
one $\ Yang-Mills$ gauge field, the coupling constants of all the
fields with the gauge field are the same. This follows immediately
from the fact that the coupling constant enters into the
expression for the field tensor $F_{\alpha \beta }$ (eq. (1.18))
because of the non-Abelian character of the Yang-Mills group.
\section{The Higgs Mechanism}
The Lagrangian mass terms are introduced into the $GWS$ theory via
the so-called $Higgs$ mechanism for the spontaneous breakdown of
the gauge symmetry. To illustrate how this mechanism works, we
shall consider in this section some classical examples of
spontaneous symmetry breakdown in relativistic field theory.
Consider for instance the complex scalar field $\phi (x)$ with
the Lagrangian density [3]
\begin{equation}
\mathfrak{L}=-\partial _{\alpha }\phi ^{\ast }\partial _{\alpha
}\phi -V\left( \phi ^{\ast }\phi \right)
\end{equation}
where
\begin{equation}
V(\phi ^{\ast }\phi )=-\mu ^{2}\phi ^{\ast }\phi +\lambda \left(
\phi ^{\ast }\phi \right) ^{2}
\end{equation}
and where $\mu ^{2}$ and $\lambda $ are positive constants. The
Hamiltonian density obtained from equation (1.27) reads:
\begin{equation}
\mathfrak{H}=\partial _{0}\phi ^{\ast }\partial _{0}\phi +\bigtriangledown \phi
^{\ast }\bigtriangledown \phi +V(\phi ^{\ast }\phi )
\end{equation}
We now look for the minimum of the energy of the system.
Obviously, the Hamiltonian (1.29) is minimal at $\phi =\mathrm{const}.,$ a
value obtained from the condition
\begin{equation*}
\frac{\partial V}{\partial \phi }=\phi ^{\ast }\left( -\mu
^{2}+2\lambda \phi ^{\ast }\phi \right) =0
\end{equation*}
Then we find that the energy of the field is minimal at
\begin{equation}
\left\vert \phi _{0}\right\vert ^{2}=\frac{\mu ^{2}}{2\lambda }=\frac{%
\upsilon ^{2}}{2}
\end{equation}
i.e.
\begin{equation}
\phi _{0}=\frac{\upsilon }{\sqrt{2}}e^{i\alpha }
\end{equation}
where $\alpha $ is an arbitrary real parameter. Thus the minimum
of the Hamiltonian (1.29) is infinitely degenerate. The degeneracy
is obviously connected with the fact that the Lagrangian (1.27) is
invariant with respect to the global $U(1)$ transformations
\begin{equation}
\phi (x)\rightarrow \phi ^{\prime }(x)=e^{i\lambda }\phi (x)
\end{equation}
The energy minimum of the system under consideration corresponds
to an arbitrary value of $\alpha $ in equation (1.31). Due to the
gauge invariance of equation (1.32) it is always possible to take
\begin{equation}
\phi _{0}=\frac{\upsilon }{\sqrt{2}}
\end{equation}
This is the typical example of a spontaneously broken symmetry:
the Lagrangian of the field $\phi $ is invariant with respect to the global $%
U(1)$ transformations, while the value of the field $\phi $,
corresponding to the minimal energy, is just one of many possible
choices.
We further introduce two real fields $\chi _{1}$ and $\chi _{2}$
as
\begin{equation}
\phi =\frac{\upsilon }{\sqrt{2}}+\frac{1}{\sqrt{2}}(\chi
_{1}+i\chi _{2})
\end{equation}
It follows from equation (1.33) that the energy of the system
reaches its minimum value when the fields $\chi _{1},$ $\chi _{2}$
have vanishing values. Substituting (1.34) into equation (1.27),
and omitting the unimportant constant $\frac{\lambda \upsilon
^{4}}{4},$ we get the Lagrangian of the system in the following
form:
\begin{equation}
\mathfrak{L}=-\frac{1}{2}\partial _{\alpha }\chi _{1}\partial
_{\alpha }\chi
_{1}-\frac{1}{2}\partial _{\alpha }\chi _{2}\partial _{\alpha }\chi _{2}-%
\frac{1}{4}\lambda (4\upsilon ^{2}\chi _{1}^{2}+4\upsilon \chi
_{1}^{3}+\chi _{1}^{4}+4\upsilon \chi _{1}\chi _{2}^{2}+2\chi
_{1}^{2}\chi _{2}^{2}+\chi _{2}^{4})
\end{equation}
It now describes the interactions of two neutral scalar fields.
The mass term of the field $\chi _{1}$ is
\begin{equation}
-\lambda \upsilon ^{2}\chi _{1}^{2}=-\frac{1}{2}m_{\chi _{1}}^{2}\chi _{1}^{2}
\end{equation}
Consequently, in the case of quantized fields, the mass of the field quantum $%
\chi _{1}$ equals $m_{\chi _{1}}=\sqrt{2\lambda }\upsilon =\sqrt{2}\mu
.$ There is no term quadratic in the field $\chi _{2}.$ This means
that the particle corresponding to the quantum of the field $\chi
_{2}$ is massless.
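The masslessness of $\chi _{2}$ can be understood directly from the form of the potential. Writing $V=\lambda \left( \phi ^{\ast }\phi -\frac{\upsilon ^{2}}{2}\right) ^{2}-\frac{\lambda \upsilon ^{4}}{4}$ and using (1.34), one has
\begin{equation*}
\phi ^{\ast }\phi -\frac{\upsilon ^{2}}{2}=\upsilon \chi _{1}+\frac{1}{2}%
\left( \chi _{1}^{2}+\chi _{2}^{2}\right) ,
\end{equation*}
so that to second order in the fields $V\cong \lambda \upsilon ^{2}\chi _{1}^{2}-\frac{\lambda \upsilon ^{4}}{4}$: only the radial excitation $\chi _{1}$ appears quadratically, while the angular excitation $\chi _{2}$, which moves along the degenerate minimum, does not.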
We have assumed that the values of the constants $\lambda $ and
$\mu ^{2}$ in the Lagrangian (1.27) are positive. Consequently,
the term quadratic in the field $\phi $ appears in equation (1.27)
with a positive, i.e. \textquotedblleft wrong\textquotedblright , sign.
This leads to the spontaneous breaking of the symmetry. The
degeneracy of the ground state is a characteristic of this
phenomenon. We however introduced new real fields $\chi _{1}$ and
$\chi _{2}$ for which the ground state is not degenerate. This
leads to the spontaneous breakdown of the original $U(1)$ global
symmetry of the Lagrangian. As a result, the quanta of one field
are massive, while the mass of the second field is zero.
With spontaneous breakdown of a continuous symmetry, massless
spinless (spin zero) particles always appear. This statement is
quite general, and it comprises the content of the Goldstone
theorem. However, the corresponding massless particles are not observed.
This might imply that the ideas of spontaneous symmetry breakdown
are useless in constructing realistic physical theories in
elementary particle physics. However, it will be shown in the
following, how the spontaneous breakdown of local $gauge$ \
symmetry results in massive gauge quanta due to the disappearance
of Goldstone bosons.
Let us assume that the complex field $\phi $ with the Lagrangian
(1.27) interacts minimally with the gauge field $A_{\alpha }.$
This interaction is introduced by the substitution $\partial
_{\alpha }\phi \rightarrow \left(
\partial _{\alpha }-igA_{\alpha }\right) \phi $ in equation (1.27).
The complete Lagrangian of the system is
\begin{equation}
\mathfrak{L}=-\left( \partial _{\alpha }+igA_{\alpha }\right) \phi
^{\ast
}\left( \partial _{\alpha }-igA_{\alpha }\right) \phi -V(\phi ^{\ast }\phi )-%
\frac{1}{4}F_{\alpha \beta }F_{\alpha \beta }
\end{equation}
where
\begin{equation}
F_{\alpha \beta }=\partial _{\alpha }A_{\beta }-\partial _{\beta
}A_{\alpha }
\end{equation}
The Lagrangian (1.37) is invariant with respect to the local gauge
transformations
\begin{eqnarray}
\phi (x) &\rightarrow &\phi ^{\prime }(x)=e^{i\lambda (x)}\phi (x), \notag \\
A_{\alpha }(x) &\rightarrow &A_{\alpha }^{\prime }=A_{\alpha }(x)+\frac{1}{g}%
\partial _{\alpha }\lambda (x),
\end{eqnarray}
where $\lambda (x)$ is an arbitrary function of $x$ .
As in the previous example, the minimum of the energy corresponds
to a value of the field $\phi $ equal to $\left( \frac{\upsilon
}{\sqrt{2}}\right)
e^{i\alpha }$ (where $\alpha $ is an arbitrary parameter and $\frac{%
\upsilon }{\sqrt{2}}=\sqrt{\frac{\mu ^{2}}{2\lambda }}$).
Due to the gauge invariance of the Lagrangian (1.37) the
\textquotedblleft\ vacuum\textquotedblright\ value of the field
$\phi $ can always be taken as
\begin{equation}
\phi _{0}=\frac{\upsilon }{\sqrt{2}}
\end{equation}
We shall write the field $\phi $ in the form
\begin{equation}
\phi \left( x\right) =\frac{1}{\sqrt{2}}\left( \upsilon +\chi (x)\right) e^{i%
\frac{\theta (x)}{\upsilon }}
\end{equation}
where $\chi (x)$ and $\theta (x)$ are real functions of $x$
defined so that zero values correspond to the minimum of $V.$
It is clear that due to the local gauge invariance of the theory
the function $\theta (x)$ appearing in equation (1.41) has no
physical meaning. It can always be eliminated by an appropriate
gauge transformation. Thus, we have
\begin{equation}
\phi (x)=\frac{\left( \upsilon +\chi (x)\right) }{\sqrt{2}}
\end{equation}
Substituting (1.42) into equation (1.37) and omitting the
unimportant constant, we get the Lagrangian of the system under
consideration in the following form
\begin{equation}
\mathfrak{L}=-\frac{1}{2}\partial _{\alpha }\chi \partial _{\alpha }\chi -%
\frac{1}{2}g^{2}\left( \upsilon +\chi \right) ^{2}A_{\alpha }A_{\alpha }-%
\frac{1}{4}\lambda \left( \chi +2\upsilon \right) ^{2}\chi ^{2}-\frac{1}{4}%
F_{\alpha \beta }F_{\alpha \beta }
\end{equation}
The Lagrangian (1.43) contains the mass term of the vector field
$A_{\alpha }$, $\left( -\frac{1}{2}g^{2}\upsilon ^{2}A_{\alpha
}A_{\alpha }\right) $, and the mass term of the scalar field $\chi
$, $\left( -\frac{1}{2}2\lambda \upsilon ^{2}\chi ^{2}\right) .$
Consequently, the masses of the fields $A_{\alpha }$ and $\chi $
are equal to $m_{A}=g\upsilon $ and $m_{\chi }=\sqrt{2\lambda
\upsilon ^{2}}=\sqrt{2}\mu ,$ respectively.
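The intermediate step leading to (1.43) is instructive: with the real field (1.42), the covariant kinetic term in (1.37) becomes
\begin{equation*}
-\left( \partial _{\alpha }+igA_{\alpha }\right) \phi ^{\ast }\left(
\partial _{\alpha }-igA_{\alpha }\right) \phi =-\frac{1}{2}\partial _{\alpha
}\chi \partial _{\alpha }\chi -\frac{1}{2}g^{2}\left( \upsilon +\chi \right)
^{2}A_{\alpha }A_{\alpha },
\end{equation*}
so that the vector-boson mass term $-\frac{1}{2}g^{2}\upsilon ^{2}A_{\alpha }A_{\alpha }$ originates entirely from the vacuum value $\upsilon $ of the Higgs field.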
Before spontaneous symmetry breakdown the Lagrangian of the system
contained a complex scalar field ( two real functions) and a
massless gauge field (two independent real functions). After
spontaneous breakdown of the local symmetry we arrived at the
Lagrangian of an interacting real massive scalar field (one real
function) and a massive vector field (three real functions). The
degree of freedom, which would correspond to the massless
Goldstone boson (in the absence of the gauge field $A_{\alpha }$),
has been transformed through the spontaneous breakdown of the
local gauge symmetry of the Lagrangian (1.37), into the additional
degree of freedom (the longitudinal polarization) of the massive vector field.
The mechanism thus discussed is called the Higgs mechanism. The scalar
particle, corresponding to the quantum of the field $\chi ,$ is
called the Higgs particle.
We have explained the basic principles which are used in
constructing models of electroweak interactions. Now we turn to
the detailed discussion of the standard $SU(2)\times U(1)$ theory
of Glashow, Weinberg, and Salam.
\section{Glashow, Weinberg $and$ Salam Theory}
The phenomenological V-A ($current\times current$) theory was
capable of describing the vast amount of existing experimental
data. Consequently, any new theory of weak interactions has to be
built up so as to reproduce the results of this theory.
The GWS
theory [4], [5], [6] is based on the assumption that there exist intermediate vector
bosons. To reproduce the results of the V-A theory at low energies
it is therefore necessary to assume that at least part of the
\textquotedblleft\
true\textquotedblright\ weak interaction Lagrangian is of the form [7], [8]
\begin{equation}
\mathfrak{L}=\frac{ig}{2\sqrt{2}}j_{\alpha }^{(+)}W_{\alpha }+h.c
\end{equation}
where $W_{\alpha }$ is the field of the charged vector bosons and
$j_{\alpha }^{(+)}$ is the charged weak current. The
dimensionless coupling constant $g$ is related to the Fermi
constant by
\begin{equation}
\frac{g^{2}}{8m_{W}^{2}}=\frac{G_{F}}{\sqrt{2}}
\end{equation}
where $m_{W}$ is the mass of the charged intermediate boson. The
charged current is the sum of lepton and hadron (quark) current.
In this study we
shall consider the $GWS$ theory of leptons\footnote{%
The study for the case of quarks can be found in reference [2] and
the references therein}. Consequently, we will be interested only
in the lepton current. It follows from all available data that the
charged lepton current is
\begin{equation}
j_{\alpha }^{(+)}=\overline{v}_{e}\gamma _{\alpha }\left( 1+\gamma
_{5}\right) e+\overline{v}_{\mu }\gamma _{\alpha }\left( 1+\gamma
_{5}\right) \mu +\overline{v}_{\tau }\gamma _{\alpha }\left(
1+\gamma _{5}\right) \tau
\end{equation}
where $e,\mu ,\tau $ are the field operators of the electron, muon
and $\tau $-lepton, respectively; $v_{e},v_{\mu },v_{\tau }$ are
the field operators of the electron-, muon-, and tau-neutrinos,
respectively.
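The relation (1.45) expresses the low-energy matching to the phenomenological theory: at momentum transfers $q^{2}\ll m_{W}^{2}$ the exchange of a $W$ boson between two charged currents generated by (1.44) reduces, schematically, to the effective $current\times current$ interaction
\begin{equation*}
\left( \frac{g}{2\sqrt{2}}\right) ^{2}\frac{1}{m_{W}^{2}}\,j_{\alpha
}^{(+)}j_{\alpha }^{(+)\dagger }=\frac{g^{2}}{8m_{W}^{2}}\,j_{\alpha
}^{(+)}j_{\alpha }^{(+)\dagger },
\end{equation*}
and comparison with the V-A theory identifies this coefficient with $\frac{G_{F}}{\sqrt{2}}$.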
At the beginning we shall consider the case of massless fields. In
order to get the term (1.44) in the interaction Lagrangian of the
leptons and vector bosons we assume that
\begin{equation}
\psi _{lL}=\binom{v_{lL}^{^{\prime }}}{l_{L}^{^{\prime }}},\qquad (l=e,\mu ,\tau )
\end{equation}
forms a doublet of the $SU(2)$ group and
\begin{equation}
l_{R}^{^{\prime }},v_{lR}^{^{\prime }}
\end{equation}
are singlets of this group\footnote{%
the primes put on lepton fields indicate that these fields do not
necessarily correspond to lepton fields with well-defined masses,
which will be generated later through spontaneous breakdown of the
underlying symmetry.}. Here
\begin{eqnarray}
\psi _{lL} &=&\frac{1}{2}\left( 1+\gamma _{5}\right) \binom{v_{l}^{\prime }}{%
l^{\prime }} \notag \\
l_{R}^{\prime } &=&\frac{1}{2}(1-\gamma _{5})l^{\prime }, \notag \\
v_{lR}^{\prime } &=&\frac{1}{2}(1-\gamma _{5})v_{l}^{\prime }
\end{eqnarray}
are the left-handed (L) and the right-handed (R) components of the
corresponding fields.
The free field lepton Lagrangian
\begin{equation}
\mathfrak{L}=-\dsum\limits_{l=e,\mu ,\tau }\left( \overline{\psi
}_{lL}\gamma _{\alpha }\partial _{\alpha }\psi
_{lL}+\overline{l}_{R}^{\prime }\gamma _{\alpha }\partial _{\alpha
}l_{R}^{\prime }+\overline{v}_{lR}^{\prime }\gamma _{\alpha
}\partial _{\alpha }v_{lR}^{\prime }\right)
\end{equation}
is clearly invariant with respect to the global $SU(2)$ group. We
demand now for massless fields the local $Yang-Mills$ invariance
with respect to
\begin{subequations}
\begin{eqnarray}
\psi _{lL}(x) &\rightarrow &\psi _{lL}^{\prime }(x)=\exp
\{i\frac{1}{2}\tau
\lambda (x)\}\psi _{lL}(x), \\
l_{R}^{\prime }(x) &\rightarrow &(l_{R}^{\prime }(x))^{\prime
}=l_{R}^{\prime }(x), \\
v_{lR}^{\prime }(x) &\rightarrow &(v_{lR}^{\prime }(x))^{\prime
}=v_{lR}^{\prime }(x), \\
A_{\alpha }(x) &\rightarrow &A_{\alpha }^{\prime }(x)=%
A_{\alpha }(x)+\frac{1}{g}\partial _{\alpha }\lambda
(x)-\lambda (x)\times A_{\alpha }(x)
\end{eqnarray}
(where the $\lambda _{i}(x)$ are arbitrary real functions of $x$
($i=1,2,3$), and where $A_{\alpha }^{i}$ is a triplet of vector
fields). We assume the interaction of leptons and vector bosons to be minimal.
Such an interaction is introduced via the substitution (see
sec.(1.5))
\end{subequations}
\begin{equation}
\partial _{\alpha }\psi _{lL}\rightarrow \left( \partial _{\alpha }-ig\frac{1%
}{2}\tau A_{\alpha }\right) \psi _{lL}
\end{equation}
($g$ is the dimensionless constant). From equations (1.50) and
(1.52) we get the interaction Lagrangian of leptons and vector
bosons as
\begin{equation}
\mathfrak{L}_{i}=ig\mathbf{j}_{\alpha }A_{\alpha }
\end{equation}
where
\begin{equation}
\mathbf{j}_{\alpha }=\dsum\limits_{l}\overline{\psi }_{lL}\gamma _{\alpha }%
\frac{1}{2}\tau \psi _{lL}
\end{equation}
From (1.53) we can single out the interaction of leptons with
charged vector bosons:
\begin{equation}
\mathfrak{L}_{i}=\left( \frac{ig}{2\sqrt{2}}j_{\alpha
}^{(+)}W_{\alpha }+h.c.\right) +igj_{\alpha }^{3}A_{\alpha }^{3}
\end{equation}
where
\begin{equation}
W_{\alpha }=\frac{1}{\sqrt{2}}\left( A_{\alpha }^{1}-iA_{\alpha
}^{2}\right) =\frac{1}{\sqrt{2}}A_{\alpha }^{1-i2}
\end{equation}
is the field of charged vector bosons and
\begin{equation}
j_{\alpha }^{(+)}=2j_{\alpha }^{1+i2}=2\dsum\limits_{l}\overline{\psi }%
_{lL}\gamma _{\alpha }\tau _{+}\psi _{lL}=\dsum\limits_{l}\overline{%
v_{l}^{\prime }}\gamma _{\alpha }\left( 1+\gamma _{5}\right)
l^{\prime }
\end{equation}
is the charged current. Therefore, the interaction Lagrangian
(1.55) which follows from the local gauge invariance does contain
the term (1.44) describing the interaction of leptons with a
charged intermediate boson.
The second term in the Lagrangian (1.55) describes the interaction
of neutrinos and charged leptons with the neutral vector boson:
\begin{equation}
\mathfrak{L}_{i}^{\prime }=ig\frac{1}{4}\dsum\limits_{l}\left( \overline{%
v_{l}^{\prime }}\gamma _{\alpha }\left( 1+\gamma _{5}\right)
v_{l}^{\prime }-\overline{l^{\prime }}\gamma _{\alpha }\left(
1+\gamma _{5}\right) l^{\prime }\right) A_{\alpha }^{3}
\end{equation}
The $GWS$ theory is a unified theory of weak and electromagnetic
interaction. Obviously, the interaction (1.58) is not
electromagnetic interaction. For a unification of weak and
electromagnetic interactions it is necessary, therefore, to
require the invariance of the Lagrangian of the system with
respect to a larger group than the local $SU(2).$ The simplest
possibility is the group $SU(2)\times U(1)$ which makes the basis of the $%
GWS $ theory.
To construct the locally $SU(2)\times U(1)$ invariant Lagrangian
we perform in equation (1.50) the minimal substitution ( section
(1.5))
\begin{subequations}
\begin{eqnarray}
\partial _{\alpha }\psi _{lL} &\rightarrow &\left( \partial _{\alpha }-ig%
\frac{1}{2}\tau A_{\alpha }-ig^{\prime }\frac{1}{2}y_{L}B_{\alpha
}\right)
\psi _{lL}, \\
\partial _{\alpha }l_{R}^{\prime } &\rightarrow &\left( \partial _{\alpha
}-ig^{\prime }\frac{1}{2}y_{R}^{(-1)}B_{\alpha }\right) l_{R}^{\prime }, \\
\partial _{\alpha }v_{lR}^{\prime } &\rightarrow &\left( \partial _{\alpha
}-ig^{\prime }\frac{1}{2}y_{R}^{(0)}B_{\alpha }\right)
v_{lR}^{\prime },
\end{eqnarray}
where $A_{\alpha }$ is a triplet of gauge fields with respect to the group $%
SU(2)$, $B_{\alpha }$ is the gauge field associated with the symmetry group $%
U(1)$, and the $y$ constants are the corresponding hypercharges.
The complete gauge invariant Lagrangian of leptons and vector
bosons consequently becomes
\end{subequations}
\begin{eqnarray}
\mathfrak{L} &=&-\sum\limits_{l}\overline{\psi }_{lL}\gamma
_{\alpha }\left(
\partial _{\alpha }-ig\frac{1}{2}\tau A_{\alpha }-ig^{\prime }\frac{1}{2}%
y_{L}B_{\alpha }\right) \psi _{lL}-\sum\limits_{l}\overline{l}_{R}^{\prime
}\gamma _{\alpha }\left( \partial _{\alpha }-ig^{\prime }\frac{1}{2}%
y_{R}^{(-1)}B_{\alpha }\right) l_{R}^{\prime } \notag \\
&&-\sum\limits_{l}\overline{v}_{lR}^{\prime }\gamma _{\alpha
}\left(
\partial _{\alpha }-ig^{\prime }\frac{1}{2}y_{R}^{(0)}B_{\alpha }\right)
v_{lR}^{\prime }-\frac{1}{4}\mathbf{F}_{\alpha \beta
}\mathbf{F}_{\alpha \beta }-\frac{1}{4}F_{\alpha \beta }F_{\alpha
\beta }
\end{eqnarray}
where
\begin{subequations}
\begin{eqnarray}
\mathbf{F}_{\alpha \beta } &=&\partial _{\alpha }A_{\beta
}-\partial _{\beta
}A_{\alpha }+gA_{\alpha }\times A_{\beta } \\
F_{\alpha \beta } &=&\partial _{\alpha }B_{\beta }-\partial
_{\beta }B_{\alpha }
\end{eqnarray}
The interaction Lagrangian of leptons and vector bosons, which
follows from equation (1.60), can be written as
\end{subequations}
\begin{equation}
\mathfrak{L}_{i}=ig\mathbf{j}_{\alpha }\mathbf{A}_{\alpha }+ig^{\prime }%
\frac{1}{2}j_{\alpha }^{y}B_{\alpha }
\end{equation}
The current $\mathbf{j}_{\alpha }$ is given by (1.54), and
\begin{equation}
j_{\alpha }^{y}=\sum\limits_{l}y_{L}\overline{\psi }_{lL}\gamma
_{\alpha }\psi
_{lL}+\sum\limits_{l}y_{R}^{(-1)}\overline{l}_{R}^{\prime }\gamma
_{\alpha }l_{R}^{\prime
}+\sum\limits_{l}y_{R}^{(0)}\overline{v}_{lR}^{\prime }\gamma
_{\alpha }v_{lR}^{\prime }
\end{equation}
The $U(1)$ invariance does not impose any constraints on the
coupling constants between the leptons and the field $B_{\alpha }$
(see the discussion at the end of sec. (1.5)). This freedom in the
choice of the coupling constants for the $U(1)$ gauge group can
then be used to unify weak and electromagnetic interactions.
We will choose $y_{L},$ $y_{R}^{(-1)}$ and $y_{R}^{(0)}$ so as to
satisfy the Gell-Mann--Nishijima relation
\begin{equation}
Q=I_{3}+\frac{1}{2}y
\end{equation}
Here $Q$ is the electric charge in units of the proton charge,
$I_{3}$ is the third component of the weak isospin. It follows
that $y_{L}$ equals the sum of the charges of the
\textquotedblleft\ upper\textquotedblright\ and \textquotedblleft\
lower\textquotedblright\ components of the doublet $\psi _{lL}$
\begin{equation}
y_{L}=-1
\end{equation}
Correspondingly, the weak hypercharges of the right-handed singlets $%
l_{R}^{\prime }$ and $v_{R}^{\prime }$ are equal to:
\begin{equation}
y_{R}^{(-1)}=-2,\qquad y_{R}^{(0)}=0
\end{equation}
respectively. With the help of equations (1.64)-(1.66) it is easy
to check that
\begin{equation}
\frac{1}{2}j_{\alpha }^{(y)}=j_{\alpha }^{em}-j_{\alpha }^{3}
\end{equation}
where
\begin{equation}
j_{\alpha }^{em}=\sum\limits_{l}\left( -1\right)
\overline{l}^{\prime }\gamma _{\alpha }l^{\prime }
\end{equation}
is the electromagnetic current of leptons and where $j_{\alpha
}^{3}$ is the third component of the isovector
$\mathbf{j}_{\alpha }.$
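The relation (1.67) can be verified component by component. With $y_{L}=-1$, $y_{R}^{(-1)}=-2$ and $y_{R}^{(0)}=0$, the hypercharge current (1.63) reads
\begin{equation*}
\frac{1}{2}j_{\alpha }^{y}=\sum\limits_{l}\left( -\frac{1}{2}\overline{v}%
_{lL}^{\prime }\gamma _{\alpha }v_{lL}^{\prime }-\frac{1}{2}\overline{l}%
_{L}^{\prime }\gamma _{\alpha }l_{L}^{\prime }-\overline{l}_{R}^{\prime
}\gamma _{\alpha }l_{R}^{\prime }\right) ,
\end{equation*}
which is precisely $j_{\alpha }^{em}-j_{\alpha }^{3}$, since $j_{\alpha }^{em}=-\sum_{l}(\overline{l}_{L}^{\prime }\gamma _{\alpha }l_{L}^{\prime }+\overline{l}_{R}^{\prime }\gamma _{\alpha }l_{R}^{\prime })$ and $j_{\alpha }^{3}=\frac{1}{2}\sum_{l}(\overline{v}_{lL}^{\prime }\gamma _{\alpha }v_{lL}^{\prime }-\overline{l}_{L}^{\prime }\gamma _{\alpha }l_{L}^{\prime })$.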
Using equation (1.67) the interaction Lagrangian (1.62) can be
written as
\begin{equation}
\mathfrak{L=}\left( \frac{ig}{2\sqrt{2}}j_{\alpha }^{(+)}W_{\alpha
}+h.c.\right) +\mathfrak{L}_{i}^{0}
\end{equation}
where
\begin{equation}
\mathfrak{L}_{i}^{0}=igj_{\alpha }^{3}A_{\alpha }^{3}+ig^{\prime
}\left( j_{\alpha }^{em}-j_{\alpha }^{3}\right) B_{\alpha }
\end{equation}
is the interaction Lagrangian of the leptons and the neutral
vector bosons.
To single out the electromagnetic interaction from equation
(1.70), we rewrite this expression as
\begin{equation}
\mathfrak{L}_{i}^{0}\mathfrak{=}i\sqrt{g^{2}+g^{\prime
2}}j_{\alpha
}^{3}\left( \frac{g}{\sqrt{g^{2}+g^{\prime 2}}}A_{\alpha }^{3}-\frac{%
g^{\prime }}{\sqrt{g^{2}+g^{\prime 2}}}B_{\alpha }\right)
+ig^{\prime }j_{\alpha }^{em}B_{\alpha }
\end{equation}
Instead of the fields $A_{\alpha }^{3}$ and $B_{\alpha }$ we
introduce the field
\begin{equation}
Z_{\alpha }=\frac{g}{\sqrt{g^{2}+g^{\prime 2}}}A_{\alpha }^{3}-\frac{g^{\prime }}{%
\sqrt{g^{2}+g^{\prime 2}}}B_{\alpha }
\end{equation}
and the field
\begin{equation}
A_{\alpha }=\frac{g^{\prime }}{\sqrt{g^{2}+g^{\prime 2}}}A_{\alpha }^{3}+%
\frac{g}{\sqrt{g^{2}+g^{\prime 2}}}B_{\alpha }
\end{equation}
orthogonal to $Z_{\alpha }.$ Elementary algebra implies that the field $%
A_{\alpha }$ is coupled only to $j_{\alpha }^{em},$ while the field $%
Z_{\alpha }$ is coupled to both the currents $j_{\alpha }^{3}$ and $%
j_{\alpha }^{em}.$ This means that the expression (1.71) contains
the Lagrangian of the electromagnetic interactions and that
$A_{\alpha }$ is the electromagnetic field.
Indeed, we have
\begin{equation}
\mathfrak{L}_{i}^{0}=i\frac{1}{2}\sqrt{g^{2}+g^{\prime
2}}j_{\alpha }^{0}Z_{\alpha }+i\frac{gg^{\prime }}{\sqrt{g^{2}+g^{\prime
2}}}j_{\alpha }^{em}A_{\alpha }
\end{equation}
where
\begin{equation}
j_{\alpha }^{0}=2\left( j_{\alpha }^{3}-\frac{g^{\prime 2}}{g^{2}+g^{\prime 2}%
}j_{\alpha }^{em}\right)
\end{equation}
If the coupling constants $g$ and $g^{\prime }$ are related to the
charge of the proton as
\begin{equation}
\frac{gg^{\prime }}{\sqrt{g^{2}+g^{\prime 2}}}=e
\end{equation}
the second term $iej_{\alpha }^{em}A_{\alpha }$ in expression
(1.74) becomes the interaction Lagrangian between leptons and
photons.
Thus there are four vector bosons fields associated with the gauge $%
SU(2)\times U(1)$ group. Two fields correspond to charged vector bosons ($%
W^{+}$ and $W^{-}$) and two fields correspond to neutral ones. One
neutral field is identified with the electromagnetic field, the
other is the field of the neutral intermediate boson.
Consequently, the unification of weak and electromagnetic
interactions based on the group $SU(2)\times U(1)$ is possible
provided that not only charged vector bosons and charged currents
but also neutral vector bosons and neutral currents, do exist.
Now we will continue the construction of the unified electroweak theory of $%
GWS$. The Weinberg angle $\theta _{W}$ is introduced as follows:
\begin{equation}
\tan \theta _{W}=\frac{g^{\prime }}{g}
\end{equation}
For the neutral current $j_{\alpha }^{0}$ we get
\begin{equation}
j_{\alpha }^{0}=2j_{\alpha }^{3}-2\sin ^{2}\theta _{W}j_{\alpha
}^{em}
\end{equation}
and the relation (1.76) turns into
\begin{equation}
g=\frac{e}{\sin \theta _{W}}
\end{equation}
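Indeed, with $\tan \theta _{W}=g^{\prime }/g$ one has $\sqrt{g^{2}+g^{\prime 2}}=g/\cos \theta _{W}$, so that
\begin{equation*}
\frac{gg^{\prime }}{\sqrt{g^{2}+g^{\prime 2}}}=g^{\prime }\cos \theta
_{W}=g\sin \theta _{W}=e,
\end{equation*}
which gives $g=e/\sin \theta _{W}$ and, equivalently, $g^{\prime }=e/\cos \theta _{W}$.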
The complete interaction Lagrangian of leptons and gauge bosons
can be rewritten with the help of equations (1.69), (1.74) and
(1.77) as
\begin{equation}
\mathfrak{L}_{i}\mathfrak{=}\left( \frac{ig}{2\sqrt{2}}j_{\alpha
}^{(+)}W_{\alpha }+h.c\right) +i\frac{g}{2\cos \theta
_{W}}j_{\alpha }^{0}Z_{\alpha }+iej_{\alpha }^{em}A_{\alpha }
\end{equation}
The structure of the neutral current in the $GWS$ theory is
determined by the unification of weak and electromagnetic interactions.
The first term in equation (1.78) is the third component of the
isovector, whose \textquotedblleft\
plus-component\textquotedblright\ is identified with the charged
weak current. The parameter $\sin ^{2}\theta _{W}$ is thus the
only parameter which enters the expression for the neutral
current. Its value can be determined from the data on the neutral
current induced processes.
The theory we have considered so far satisfies the requirements of a local $%
SU(2)\times U(1)$ gauge invariance. Mass terms of the vector boson
fields can not be introduced into the Lagrangian of such a theory.
It is also obvious that the $SU(2)\times U(1)$ invariance with
left-handed fields in doublets $\psi _{lL}$ and right-handed
fields \thinspace $l_{R}$ in singlets also forbids the
introduction of lepton mass terms into the Lagrangian.
In the standard electroweak theory the Lagrangian mass terms of
both the vector boson and fermion fields are introduced by the
Higgs mechanism of spontaneous breakdown of the gauge symmetry (
see sec. (1.6) ). The theory is built up so that, at the
beginning, the complete Lagrangian, including the Higgs sector, is
locally $SU(2)\times U(1)$ invariant. It is then necessary to
assume that the Higgs fields transform according to a definite
(non-trivial) representation of the gauge group. Further, due to
the spontaneous breakdown of the gauge invariance the charged ($W^{+}$ and $W^{-}$%
) as well as the neutral $Z^{0}$ intermediate bosons have to
acquire masses. That is, three Goldstone degrees of freedom of the
Higgs field can transform at the spontaneous breakdown of the
gauge invariance into additional degrees of freedom of the vector
fields (three masses). Thus, we are forced to assume that the
Higgs fields form at least a doublet. It is this \textquotedblleft\
minimal\textquotedblright\ assumption which is at the basis of the
$GWS$ theory.
Hence we assume that the Higgs fields form a doublet of the $SU(2)$
group
\begin{equation}
\phi =\binom{\phi _{+}}{\phi _{0}}
\end{equation}
where the complex functions $\phi _{+}$ and $\phi _{0}$ are the fields
of the charged and neutral bosons, respectively. Weak hypercharge
of the doublet (1.81) is defined so as to fulfill the
Gell-Mann--Nishijima \ relation (1.64).\ We have
\begin{equation}
y_{\phi }=1
\end{equation}
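As a quick consistency check (a sketch outside the main derivation, with the helper function below purely illustrative), the Gell-Mann--Nishijima relation (1.64) with $y_{\phi }=1$ indeed assigns charge $+1$ to the upper and charge $0$ to the lower component of the doublet:

```python
# Gell-Mann--Nishijima relation Q = T3 + Y/2 applied to the Higgs doublet.
# With weak hypercharge y_phi = 1, the upper component comes out charged (+1)
# and the lower component neutral (0), matching phi_+ and phi_0.
def electric_charge(t3, y):
    """Electric charge from the Gell-Mann--Nishijima relation."""
    return t3 + y / 2

y_phi = 1                                # weak hypercharge of the doublet
q_upper = electric_charge(+0.5, y_phi)   # phi_+  ->  charge +1
q_lower = electric_charge(-0.5, y_phi)   # phi_0  ->  charge  0
```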
The Lagrangian of the Higgs field $\phi (x)$ is given as (see
sec.(1.6))
\begin{equation}
\mathfrak{L}^{0}=-\partial _{\alpha }\phi ^{+}\partial _{\alpha
}\phi -V(\phi ^{+}\phi ).
\end{equation}
Here
\begin{equation}
V(\phi ^{+}\phi )=-\mu ^{2}\phi ^{+}\phi +\lambda (\phi ^{+}\phi
)^{2}=\lambda \left( \phi ^{+}\phi -\frac{\mu ^{2}}{2\lambda }\right) ^{2}-%
\frac{\mu ^{4}}{4\lambda }
\end{equation}
where $\mu ^{2}$ and $\lambda $ are positive constants.
Taking into account (1.82), we get from (1.83) by the standard
substitution
\begin{equation*}
\partial _{\alpha }\phi \rightarrow \left( \partial _{\alpha }-ig\frac{1}{2}%
\tau A_{\alpha }-ig^{\prime }\frac{1}{2}B_{\alpha }\right) \phi
\end{equation*}
the Lagrangian
\begin{equation}
\mathfrak{L}=-\left[ \partial _{\alpha }\phi ^{+}+\phi ^{+}\left( ig\frac{1}{%
2}\tau A_{\alpha }+ig^{\prime }\frac{1}{2}B_{\alpha }\right)
\right] \left[
\partial _{\alpha }\phi +\left( ig\frac{1}{2}\tau A_{\alpha }+ig^{\prime }%
\frac{1}{2}B_{\alpha }\right) \phi \right] -V(\phi ^{+}\phi )
\end{equation}
is invariant with respect to the gauge group $SU(2)\times U(1).$
It is obvious from (1.84) that the potential $V(\phi ^{+}\phi )$
is minimal for
\begin{equation}
(\phi ^{+}\phi )_{0}=\frac{\mu ^{2}}{2\lambda }=\frac{\upsilon
^{2}}{2}
\end{equation}
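That the potential (1.84) is minimized at $(\phi ^{+}\phi )_{0}=\mu ^{2}/2\lambda $ with depth $-\mu ^{4}/4\lambda $ can be verified numerically. The Python sketch below uses arbitrary illustrative values of $\mu ^{2}$ and $\lambda $ (not values taken from the text) and scans $V$ as a function of $x=\phi ^{+}\phi $:

```python
import numpy as np

# V(x) = -mu^2 x + lambda x^2 with x = phi^+ phi >= 0.
# The minimum should sit at x0 = mu^2 / (2 lambda), depth -mu^4 / (4 lambda).
mu2, lam = 2.0, 0.5                      # illustrative values of mu^2, lambda
x = np.linspace(0.0, 10.0, 100001)       # scan of the invariant phi^+ phi
V = -mu2 * x + lam * x**2
x_min = x[np.argmin(V)]                  # numerical location of the minimum
x_expected = mu2 / (2 * lam)             # analytic prediction, = v^2 / 2
```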
For the minimal (vacuum) value of $\phi $ we choose
\begin{equation}
\phi _{vac.}=\binom{0}{\frac{\upsilon }{\sqrt{2}}}
\end{equation}
Further, the doublet $\phi $ can always be written in the form
\begin{equation}
\phi (x)=\exp \left\{ i\frac{1}{2}\tau \frac{\theta (x)}{\upsilon
}\right\} \binom{0}{\frac{\left\{ \upsilon +\chi (x)\right\}
}{\sqrt{2}}}
\end{equation}
where $\theta (x)$ and $\chi (x)$ are real functions. Finally, the
functions $\theta (x)$, which correspond to the \textquotedblleft
would-be\textquotedblright\ Goldstone bosons, can always be
eliminated owing to the gauge invariance of the Lagrangian (1.85)
by appropriately fixing the gauge (the so-called unitary gauge).
Thus we have
\begin{equation}
\phi (x)=\binom{0}{\frac{\left\{ \upsilon +\chi (x)\right\}
}{\sqrt{2}}}
\end{equation}
Let us substitute (1.89) into (1.85). Taking into account that
\begin{equation*}
\left( \mathbf{\tau A}_{\alpha }\right) \left( \mathbf{\tau
A}_{\alpha }\right) =2W_{\alpha }\overline{W}_{\alpha }+A_{\alpha
}^{3}A_{\alpha }^{3},
\end{equation*}
\begin{equation*}
\phi ^{+}\left( \mathbf{\tau A}_{\alpha }\right) B_{\alpha }\phi
=-A_{\alpha }^{3}B_{\alpha }\frac{1}{2}\left( \upsilon +\chi
\right) ^{2}
\end{equation*}
we get
\begin{equation}
\mathfrak{L}=\frac{1}{2}\partial _{\alpha }\chi \partial _{\alpha }\chi -%
\frac{1}{2}\left( \upsilon +\chi \right) ^{2}\left[ \frac{1}{4}%
g^{2}2W_{\alpha }\overline{W}_{\alpha }+\frac{1}{4}\left(
g^{2}+g^{\prime 2}\right) Z_{\alpha }Z_{\alpha }\right]
-\frac{1}{4}\lambda \chi ^{2}\left( \chi +2\upsilon \right) ^{2}
\end{equation}
Here
\begin{equation*}
W_{\alpha }=\frac{A_{\alpha }^{1-i2}}{\sqrt{2}}\text{ \ \ \ \ \ \
\ \ \ \ \ \ and }\overline{W}_{\alpha }=\frac{A_{\alpha
}^{1+i2}}{\sqrt{2}}
\end{equation*}
are the fields of the charged vector bosons and $Z_{\alpha }$ is
the field of the neutral vector boson.
As a result of the spontaneous breakdown of the symmetry, mass
terms for the intermediate bosons have emerged in the
Lagrangian
\begin{equation}
\mathfrak{L}_{m}=-m_{W}^{2}W_{\alpha }\overline{W}_{\alpha }-\frac{1}{2}%
m_{Z}^{2}Z_{\alpha }Z_{\alpha },
\end{equation}
where
\begin{equation}
m_{W}^{2}=\frac{1}{4}g^{2}\upsilon ^{2},\text{ \ \ \ \ \ \ \ \ }m_{Z}^{2}=%
\frac{1}{4}(g^{2}+g^{\prime 2})\upsilon ^{2}.
\end{equation}
Symmetry was broken in such a way that the photon remained a
massless particle.
The function $\chi \left( x\right) $ is a field of neutral scalar
particles (the so called Higgs particles). It follows from (1.90)
that their mass is equal to
\begin{equation}
m_{\chi }=\sqrt{2\lambda }\upsilon =\sqrt{2}\mu
\end{equation}
Note that the Lagrangian (1.90) also contains a term describing
the interaction of the Higgs particles with the intermediate
bosons.
We find from (1.77) and (1.92) that the mass squared of the $Z$
boson is related to that of the $W$ boson and the parameter $\cos
^{2}\theta _{W}$ by
\begin{equation}
m_{Z}^{2}=\frac{m_{W}^{2}}{\cos ^{2}\theta _{W}}.
\end{equation}
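This relation follows identically from (1.92) together with the definition of $\theta _{W}$, as the short sketch below confirms numerically (the coupling values are illustrative placeholders, not fitted parameters):

```python
import math

# m_W = g v / 2 and m_Z = sqrt(g^2 + g'^2) v / 2 together with
# tan(theta_W) = g'/g imply m_Z = m_W / cos(theta_W) identically.
g, gp, v = 0.65, 0.36, 246.2      # illustrative couplings and vev (GeV)
m_w = 0.5 * g * v
m_z = 0.5 * math.sqrt(g**2 + gp**2) * v
theta_w = math.atan(gp / g)       # Weinberg angle from tan(theta_W) = g'/g
```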
It should be stressed that this relation is satisfied only if the
Higgs fields form doublets. In the case of higher Higgs multiplets
no relation between the masses of the neutral and charged
intermediate bosons exists.
It follows from (1.45) and (1.92) that
\begin{equation}
\upsilon =\frac{1}{\left( \sqrt{2}G_{F}\right) ^{\frac{1}{2}}}
\end{equation}
Therefore, the theory enables us to calculate the parameter
$\upsilon .$ Substituting the numerical value of $G_{F}$:
\begin{equation*}
G_{F}=1.1664\times 10^{-5}GeV^{-2},
\end{equation*}
we find
\begin{equation}
\upsilon =246.2GeV.
\end{equation}
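This number can be reproduced in one line; the sketch below evaluates $\upsilon =(\sqrt{2}G_{F})^{-1/2}$ with $G_{F}$ in $GeV^{-2}$:

```python
import math

# Electroweak vacuum expectation value from the Fermi constant,
# v = (sqrt(2) G_F)^(-1/2), with G_F in GeV^-2 so that v is in GeV.
G_F = 1.1664e-5                      # GeV^-2
v = (math.sqrt(2) * G_F) ** -0.5     # comes out close to 246 GeV
```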
Also, it follows from (1.45) and (1.79) that
\begin{equation}
m_{W}=\left( \frac{\pi \alpha }{\sqrt{2}G_{F}}\right) ^{\frac{1}{2}}\frac{1}{%
\sin \theta _{W}}.
\end{equation}
The value of the parameter $\sin \theta _{W}$ is determined from
experimental data on neutral-current induced processes.
Therefore, the theory enables us to predict the value of the $W$
boson mass.
From the analysis of the world data on deep inelastic processes,
one could deduce the value
\begin{equation}
\sin ^{2}\theta _{W}=0.2315.
\end{equation}
With the above value of $\sin ^{2}\theta _{W},$ the masses of the
charged and neutral intermediate bosons, obtained from (1.97) and
(1.94), turn out to be
\begin{eqnarray}
m_{W} &=&80.330GeV \\
m_{Z} &=&91.187GeV
\end{eqnarray}
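The tree-level formula (1.97) can be evaluated directly. In the sketch below the running fine-structure constant $\alpha (m_{Z})\approx 1/128$ is an assumed input; with $\alpha (0)=1/137$ the tree-level masses come out a few GeV lower than the quoted values, which include radiative corrections:

```python
import math

# Tree-level W and Z masses from sin^2(theta_W) = 0.2315, using (1.97)
# and m_Z = m_W / cos(theta_W).  alpha(m_Z) ~ 1/128 is an assumed input.
G_F = 1.1664e-5               # Fermi constant, GeV^-2
alpha_mz = 1.0 / 128.0        # assumed running coupling at the Z scale
sin2_w = 0.2315
sin_w, cos_w = math.sqrt(sin2_w), math.sqrt(1.0 - sin2_w)
m_w = math.sqrt(math.pi * alpha_mz / (math.sqrt(2) * G_F)) / sin_w
m_z = m_w / cos_w             # both in GeV, close to 80 and 91 GeV
```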
\chapter{Supersymmetry}
\section{Motivation for Supersymmetry}
Ever since its discovery in the early seventies, supersymmetry has
been the focus of considerable attention. Although no compelling
supersymmetric model has yet emerged, and in spite of the fact
that there is no experimental evidence for supersymmetry, its
remarkable theoretical properties have provided sufficient
motivation for its study.
Supersymmetry is a novel symmetry that interrelates bosons and
fermions, thereby providing a new level of synthesis. It is the
most general (known) symmetry of the $S$-matrix consistent with
Poincar\'{e} invariance. Supersymmetry leads to an improvement of,
and sometimes even to the elimination of, divergencies that occur
in Quantum Field Theory (QFT); in particular, quadratic
divergencies are absent. These features will play an important
role in our subsequent discussions. Since two successive
supersymmetry transformations involve a space-time translation,
local supersymmetry theories (and past experience shows that
nature prefers local supersymmetry!) necessarily include
gravitation, with the gauge fermion (the \textsl{gravitino}) being
the supersymmetric partner of the \textsl{graviton}. Further,
because supersymmetric QFT's exhibit a better ultraviolet
behavior, they may provide a hope of eventually obtaining a
consistent quantum theory that includes gravitation.
Finally, supersymmetry is an essential ingredient in the
construction of the most recent candidate for a \textsl{theory of
everything} (TOE), the \textsl{superstring}.
At this point, one may ask why any extension of the Standard Model
(SM) needs to be considered [7]. After all, the $GWS$ theory seems
to account for all known electromagnetic and weak phenomena, and quantum
chromodynamics $(QCD)$ \ is generally accepted to be the theory of
strong interactions. Indeed, it
appears that all experiments are consistent \ with a gauge theory based on $%
SU(3)_{c}\times SU(2)_{L}\times U(1)_{Y},$ with the
$SU(2)_{L}\times U(1)_{Y} $ being spontaneously broken to
$U(1)_{em}.$
In the $GWS$ model, the spontaneous breakdown is brought about by
the introduction of an elementary scalar field. This leads to the
prediction of (at least) an additional spin-zero particle, the
Higgs boson. With the discovery of the $W^{\pm }$ and $Z^{0}$
bosons at the CERN $p\overline{p}$ collider, and the top quark at
FERMILAB, only the Higgs boson remains to be discovered to
complete the particle content of the $GWS$ electroweak theory. It
should be pointed out that no elementary spin-zero particles have
ever been found. In fact, the \textit{ad hoc} introduction of such
fields is considered by many theorists to be an unpleasant feature
of the standard model.
The problem is the instability of the scalar particle masses
under radiative corrections. For example, one-loop radiative
corrections to these masses diverge quadratically, leading to
corrections of the form:
\begin{equation}
\delta m^{2}=O(\alpha /\pi )\Lambda ^{2}
\end{equation}
where $\Lambda $ is a cut-off parameter representing the scale of
the theory and $\alpha =e^{2}/\hbar c\approx \frac{1}{137}$ is the
fine structure
constant. The parameter $\Lambda $ may be the \textsl{Grand Unified Theory} (%
$GUT$) scale $O(10^{15}GeV/c^{2}),$ or the \textsl{Planck scale }$%
O(10^{19}GeV/c^{2})$ if we believe there is no new physics all the
way up to the scale associated with quantum gravity. On the other
hand, we know that for the scalar self-couplings to be sensibly
treated within perturbation theory, the scalar mass must satisfy
\begin{equation}
m^{2}\leqslant O(m_{W}^{2}/\alpha )\sim 1TeV/c^{2}
\end{equation}
In other words, either the Higgs sector is strongly interacting,
or $\delta m^{2}\gg m^{2}.$ Such theories (where $\delta m^{2}\gg
m^{2}$) have been technically referred to as \textquotedblleft\
unnatural\textquotedblright , because the parameters have to be
tuned with unusual precision in order to preserve the lightness of
the Higgs mass, $m\sim O(1TeV/c^{2})$, compared to the GUT scale
$\Lambda $.
One possible solution to the problem of naturalness is to imagine
that the
quarks, leptons and gauge bosons are all composites with an associated scale $%
\Lambda \sim O(1TeV/c^{2}).$ While this solves the problem
at present energies, it does not really represent a solution, as we
could ask the same questions of any underlying theory. Moreover,
it would be difficult to understand why the gauge principle seems
to work so well at least up to this point.
A different approach would be to eliminate the fundamental scalars
and
imagine that the Higgs is a composite of new fermions bound by a new force - %
\textsl{the technicolor force} - that becomes strong at a scale of $%
O(1TeV/c^{2})$. While this is an appealing idea, and while the
composite Higgs boson can indeed lead to masses for the $W^{\pm
}$ and $Z^{0},$ it does not account for quark and lepton masses.
This led to the introduction of yet another interaction, the
extended technicolor. This appears to have phenomenological
problems, particularly with flavor-changing neutral currents
(FCNC). Although it appears there is no reason for the
technicolor idea not to work, it seems fair to say that no viable
model has yet emerged.
Supersymmetry provides yet another answer to the naturalness
question. This may be simply understood by recalling that fermion
masses are protected from large radiative corrections by chiral
symmetry. The analogue of equation (2.1) is
\begin{equation}
\delta m_{f}=O\left( \alpha /\pi \right) m_{f}\ln \left( \Lambda
/m_{f}\right)
\end{equation}
Thus, massless fermions do not acquire masses via radiative
corrections. This is a manifestation of chiral symmetry. The
naturalness problem arises because, unlike the case of fermions,
there is no symmetry to keep massless scalars from acquiring large
masses via radiative corrections.
In practice, at the one-loop level, this works because boson and
fermion loops both enter the scalar correction, but with a
relative minus sign. For supersymmetric theories, equation (2.1)
takes the form
\begin{equation}
\delta m^{2}\approx O(\alpha /\pi )\Lambda ^{2}-O(\alpha /\pi
)\Lambda ^{2}=0
\end{equation}
Exact cancellation requires that the bosons and fermions enter
with the same quantum numbers (this is ensured by supersymmetry)
so that their couplings are the same except for supersymmetry
Clebsch-Gordan factors, and also that they have the same masses.
Spontaneous breaking of supersymmetry, as for any other symmetry,
maintains a relation between the couplings but breaks the mass
relations, and equation (2.4) takes the form
\begin{equation}
\delta m^{2}\approx O(\alpha /\pi )\left\vert
m_{B}^{2}-m_{f}^{2}\right\vert
\end{equation}
We see from expressions (2.2) and (2.5) that for supersymmetry to
solve the naturalness problem,
\begin{equation}
\left\vert m_{B}^{2}-m_{f}^{2}\right\vert \leq O(1TeV/c^{2})^{2}
\end{equation}
where $m_{B}$ ($m_{f}$) is the boson (fermion) mass. We thus
expect the superpartners to have masses $\leq O(1TeV/c^{2})$ and
hope that these may show up at future LEP energies.
We emphasize that not one of all these reasons for considering
supersymmetry, no matter how compelling it may appear, requires
any particular mass scale for supersymmetric particles
(sparticles). It is only if we require supersymmetry to address
the naturalness question that the mass scale is fixed. We note
that supersymmetry does not address the question \textquotedblleft
why is the scalar mass 12 (16) orders of magnitude smaller than the $GUT$ ($%
Planck$) scale in the first place?\textquotedblright\ But in supersymmetric
theories, once this value has been set, either \textquotedblleft
by hand\textquotedblright\ or by any other mechanism, radiative
corrections preserve this hierarchy of scales.
\section{Rules of Supersymmetry}
We are here to establish some of the rules of supersymmetry
treatment [8].
First and foremost we postulate the \textit{existence} of a
supersymmetry between fermions and bosons which should underlie
the laws of physics. No experimental observation has yet revealed
particles or forces which manifestly show such a symmetry, except
for some rare events [9]. Therefore the development of a theory based on
supersymmetry requires an understanding not only of how the
various symmetry transformations affect each other ( the algebra )
but also of all possible systems ( \textit{multiplets }of
particles or quantum fields ) on which the supersymmetry
transformations can act. The symmetry operations will transform
different members of a multiplet into each other. More precisely,
the transformations are to be represented
by linear operators acting on the vector space (the
\textquotedblleft representation space\textquotedblright )
spanned by the multiplet. Finally the theory must predict the time
development of
interacting physical
systems. This is usually achieved by finding appropriate \textsl{%
Hamiltonians or Lagrangians. }The supersymmetry present in the
physical system will manifest itself in the invariance of this
\textsl{Lagrangian} - or rather its integral over all time,
\textbf{the action} - if all the fields undergo their respective
supersymmetry transformation. Because of the lack of experimental
input, a large fraction of the research effort of supersymmetry
theorists has, in fact, been devoted to the finding of, and
exploration of, possible supersymmetry respecting interactions.
The theoretical framework in which to construct supersymmetric
models in flat space-time is quantum field theory, and it must be
pointed out that \textsl{the standard concepts of quantum field
theory allow for supersymmetry without any further assumptions.
}The introduction of supersymmetry is not a revolution in the way
one views physics. It is an additional symmetry that an otherwise
\textquotedblleft normal\textquotedblright\ field-theoretical
model can have. As we shall see, all that is required for a field
theory to be supersymmetric is that it contains specified types
and numbers of fields in interactions with each other and that the
various interaction strengths and particle masses have properly
related values. As an example, consider the $SU(3)$ gauge theory
of gluons, which can be made supersymmetric by including a
massless neutral color octet of spin $\frac{1}{2}$ particles which
are their own anti-particles. Such spin $\frac{1}{2}$ partners of
the gluons are called \textquotedblleft
\textsl{gluinos}\textquotedblright . If our model contains not
only gluons but also quarks, we must also add corresponding
partners for them. These have spin $0$ and are commonly called
\textquotedblleft \textsl{squarks }\textquotedblright .
(Procedures like these are employed particularly in the
construction of supersymmetric Grand Unified Theories, SUSY GUTs
or superGUTs.)
Before we proceed to discuss the ingredients of supersymmetric
models, we must address the question of the $Fermi-Bose$
matter-force duality. After all, the wave particle duality of
quantum mechanics and the subsequent question of the
\textquotedblleft exchange particle\textquotedblright\ in
perturbative quantum field theory seemed to have abolished that
distinction for good. The recent triumphant progress of gauge
theories has, however, reintroduced it: forces are mediated by
gauge potentials, i.e., by spin-1 bosons, while matter is built from
spin-$\frac{1}{2}$ fermions. The Higgs particles, necessary to mediate
the needed spontaneous breakdown of the gauge invariances (more
about this later), play a somewhat intermediate role. They must
have zero spin and are thus bosons, but they are not directly
related to any of the forces. Purists hope to see them arise as
bound states of the fermions (condensates). Supersymmetric
theories, and particularly supergravity theories,
\textquotedblleft unite\textquotedblright\ fermions and bosons
into multiplets and lift the basic distinction between matter and
interaction. The gluinos, for example, are thought of as carriers
of the strong force as much as the gluons, except that as fermions
they must obey an exclusion principle and thus will never conspire
to form a coherent, measurable potential. The distinction between
forces and matter becomes phenomenological: bosons - and
particularly massless ones - manifest themselves as forces because
they can build up coherent classical fields; fermions are seen as
matter because no two identical ones can occupy the same point in
space.
For some time it was thought that supersymmetry, which would
naturally relate forces and fermionic matter, would be in conflict
with field theory. The progress in understanding elementary
particles through the $SU(3)$ classification of the
\textquotedblleft eight-fold way\textquotedblright\ (a global
symmetry) had led to attempts to find a unifying symmetry which
would directly relate to each other several of the $SU(3)$
multiplets (baryon octet, decuplet,etc.) even if these had
different spins. The failure of attempts to make these
\textquotedblleft spin symmetries\textquotedblright\
relativistically covariant led to the formulation of a series of
no-go theorems, culminating in 1967 in a paper by Coleman and
Mandula which was widely understood to show that it is
impossible, within the theoretical framework of relativistic field
theory, to unify space-time symmetry with internal symmetries.
More precisely, it says that the charge operators whose eigenvalues
represent the \textquotedblleft internal\textquotedblright\
quantum numbers such as electric charge, isospin, hypercharge,
etc., must be translationally invariant. This means that these
operators commute with the energy and momentum operators. Indeed,
the only symmetry generators which
transform at all under both translations and rotations are those of the $%
Lorentz$ transformations themselves ( rotations and
transformations to coordinate systems which move with constant
velocity). The generators of
internal symmetries can not relate eigenstates with different eigenvalues $%
m^{2}$ and $l(l+1)\hbar ^{2}$ of the mass and spin operators. This
means that irreducible multiplets of symmetry groups can not
contain particles of different mass or of different spin. This
no-go theorem seemed to rule out exactly the sort of unity which
was sought. One of the assumptions made in Coleman and Mandula's
proof did, however, turn out to be unnecessary: they had admitted
only those symmetry transformations which form $Lie$ group with
real parameters. Examples of such symmetries are space rotations with the $%
Euler$ angles as parameters and the phase transformations of
electrodynamics with a real phase angle $\theta $ about which we
talked earlier. The charge operators associated with such Lie
groups of symmetry transformations (their generators) obey
well-defined \textit{commutation relations} with each other.
Perhaps the best-known example is the set of commutators $%
L_{x}L_{y}-L_{y}L_{x}\equiv \left[ L_{x},L_{y}\right] =i\hbar
L_{z},$ for the angular momentum operators which generate spatial
rotations.
Different spins in the same multiplet are allowed if one includes
symmetry
operations whose generators obey \textit{anticommutation relations} of the form $%
AB+BA\equiv \{A,B\}=C.$ This was first proposed in 1971 by
Gol'fand and Likhtman, and followed up by Volkov and Akulov who
arrived at what we now call a non-linear realization of
supersymmetry. Their model was not renormalizable. In 1973, Wess
and Zumino presented a renormalizable theoretical model of a spin
$\frac{1}{2}$ particle in interaction with two spin $0$ particles
where the particles are related by symmetry transformations, and
therefore \textquotedblleft sit\textquotedblright\ in the same
multiplet. The limitation of the Coleman-Mandula no-go theorem had
been avoided by the introduction of a fermionic symmetry operator
which carried a spin $\frac{1}{2},$ and thus when acting on a
state of spin $j$
resulted in a linear combination of states with spin $j+\frac{1}{2}$ and $j-%
\frac{1}{2}.$ Such operators must and do observe anticommutator
relations with each other. They do not generate Lie groups and are
therefore not ruled out by the Coleman-Mandula no-go theorem. In
the light of this discovery, Haag, Lopuszanski, and Sohnius
extended the results of Coleman-Mandula to include symmetry
operations which obey Fermi statistics. They proved that in the
context of relativistic field theory the only models which can lead
to a solution of the unification problem are supersymmetric
theories, and space-time and internal symmetries can only be
related to each other by
fermionic symmetry operators $Q$ of spin $\frac{1}{2}$ (not $\frac{3}{2}$ or
higher) whose properties are either exactly those of the
Wess-Zumino model or are at least closely related to them. Only in
the presence of supersymmetry can multiplets contain particles of
different spin, such as the graviton and the photon.
\section{Essentials of Supersymmetry Algebra}
Supersymmetry transformations are generated by quantum operators
$Q$ which change fermionic states to bosonic ones and vice versa,
\begin{equation}
Q\left\vert fermion\right\rangle =\left\vert boson\right\rangle
;\text{ \ \ \ \ \ \ \ \ \ }Q\left\vert boson\right\rangle
=\left\vert fermion\right\rangle .
\end{equation}
Which particular bosons and fermions are related to each other by
the operation of some such $Q$, how many $Q^{\prime }s$ there
are, and which properties other than the statistics of the states
are changed by that operation depend on the supersymmetric model
under study. There are, however, a number of properties which are
common to the $Q^{\prime }s$ in all supersymmetric models.
By definition, the $Q^{\prime }s$ change the statistics and hence
the spin of the state. Spin is related to behavior under spatial
rotations, and thus supersymmetry is - in some sense - a space-time
symmetry. Normally, and particularly so in models of
\textquotedblleft extended
supersymmetry\textquotedblright\ (supergravity being one example), the $%
Q^{\prime }s$ also affect the internal quantum numbers of the
states. It is this property of combining internal with space-time
behavior that makes supersymmetric field theories interesting in
the attempts to unify all fundamental interactions.
As a simple illustration of the non-trivial space-time properties of the $%
Q^{\prime }s,$ consider the following. Because fermions and bosons
behave differently under rotations, the $Q$ can not be invariant
under such rotations. We can, for example, apply the unitary
operator $U$ ($U^{-1}U=1$) which, in Hilbert space, represents a
rotation in configuration space by $360^{\circ }$ around some axis.
Since fermionic states pick up a minus sign when rotated through
$360^{\circ }$ and bosonic states do not, we have
\begin{equation}
U\left\vert fermion\right\rangle =-\left\vert fermion\right\rangle
,\text{ \ \ \ \ \ \ \ \ \ \ }U\left\vert boson\right\rangle
=\left\vert boson\right\rangle ,
\end{equation}
then from equations (2.7) and (2.8) we get
\begin{eqnarray}
Q\left\vert boson\right\rangle &=&\left\vert fermion\right\rangle
=-U\left\vert fermion\right\rangle =-UQ\left\vert
boson\right\rangle
=-UQU^{-1}U\left\vert boson\right\rangle \notag \\
&=&-UQU^{-1}\left\vert boson\right\rangle ,
\end{eqnarray}
\begin{eqnarray}
Q\left\vert fermion\right\rangle &=&\left\vert boson\right\rangle
=U\left\vert boson\right\rangle =UQ\left\vert fermion\right\rangle
=UQU^{-1}U\left\vert fermion\right\rangle \notag \\
&=&-UQU^{-1}\left\vert fermion\right\rangle ,
\end{eqnarray}
and since all fermionic and bosonic states, taken together, form a
basis of the Hilbert space, we easily see that we \textit{must} have
\begin{equation}
UQU^{-1}=-Q
\end{equation}
The rotated supersymmetry generator picks up a minus sign, just as
a fermionic state does. One can extend this analysis and show that
the behavior of the $Q^{\prime }s$ under any Lorentz
transformation -not only under rotations by 360$^{0}$- is
precisely that of a \textsl{spinor operator. \ }More technically
speaking, the $Q^{\prime }s$ transform like tensor operators of
spin $\frac{1}{2}$ and, in particular, they do not commute with
Lorentz transformations: the result of a Lorentz transformation
followed by a supersymmetry transformation is different from that
obtained when the order of the transformations is reversed.
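The sign flip $UQU^{-1}=-Q$ can be made concrete in a two-state toy model. The matrices below are a minimal illustration only (just the fermion-to-boson part of $Q$ is represented, not a realization of the full algebra):

```python
import numpy as np

# Basis (|boson>, |fermion>).  U represents a 360-degree rotation:
# bosonic states are unchanged, fermionic states pick up a minus sign.
U = np.diag([1.0, -1.0])

# Toy generator with Q|fermion> = |boson> (the fermion->boson part only).
Q = np.array([[0.0, 1.0],
              [0.0, 0.0]])

# Conjugating Q by the rotation flips its sign: U Q U^{-1} = -Q.
rotated_Q = U @ Q @ np.linalg.inv(U)
```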
It is not easy to illustrate, but it is nevertheless true that, on
the other hand, the $Q^{\prime }s$ are \textit{invariant under
translations}. It does not matter whether we translate the
coordinate system before or after we perform a supersymmetry
transformation. In technical terms, this means that
we have a vanishing commutator of $Q$ with the energy and momentum operators $%
E $ and $\mathbf{P}$, which generate space-time translations,
\begin{equation}
\left[ Q,E\right] =\left[ Q,\mathbf{P}\right] =0
\end{equation}
The structure of a set of symmetry operations is determined by the
result of two subsequent operations. For continuous symmetries
like space rotations or supersymmetry, this structure is best
described by the commutators of the generators, such as the ones
given above for the angular momentum operators. The commutator
structure of the $Q^{\prime }s$ with themselves can best be
examined if they are viewed as products of operators which
annihilate fermions and create bosons, or vice versa. It
can be shown that the canonical quantization rules for creation
and annihilation operators of particles (and in particular the
$anti$commutator rules for fermions, which reflect Pauli's
exclusion principle) lead to the result that it is the
anticommutator of two $Q^{\prime }s$, not the commutator, which is
again a supersymmetry generator, albeit one of bosonic nature.
Let us consider the anticommutator of some $Q$ with its
Hermitian adjoint $Q^{+}.$ As spinor components, the $Q^{\prime
}s$ are in general not Hermitian, but $\{Q,Q^{+}\}\equiv
QQ^{+}+Q^{+}Q$ is a Hermitian operator with positive definite
eigenvalues;
\begin{equation}
\left\langle ...\left\vert QQ^{+}\right\vert ...\right\rangle
+\left\langle ...\left\vert Q^{+}Q\right\vert ...\right\rangle
=\left\vert Q^{+}\left\vert ...\right\rangle \right\vert
^{2}+\left\vert Q\left\vert ...\right\rangle \right\vert
^{2}\geq 0.
\end{equation}
This can only be zero for all states $\left\vert ...\right\rangle
$ if $Q=0.$ A more detailed investigation will show that
$\{Q,Q^{+}\}$ must be a linear combination of the energy and
momentum operators;
\begin{equation}
\left\{ Q,Q^{+}\right\} =\alpha E+\beta \mathbf{P}
\end{equation}
This relation between the anticommutator of two generators of
supersymmetry
transformations on the one hand and the generators of \textit{space-time
translations} (namely energy and momentum) on the other, is
central to the entire field of supersymmetry and supergravity. It
means that the subsequent operations of two finite supersymmetry
transformations will include translations in space and time of the
states on which they operate.
There is a further important consequence of the form of equation
(2.14).
When summing this equation over all supersymmetry generators, we find that $%
\beta \mathbf{P}$ terms cancel while the $\alpha E$ terms add up,
so that
\begin{equation}
\sum\limits_{\text{all }Q}\left\{ Q,Q^{+}\right\} \propto E.
\end{equation}
Depending on the sign of the proportionality factor, the spectrum
for the energy would have to be either $\geq 0$ or $\leq 0$
because of the inequality (2.13). For a physically sensible theory
with energies bounded from below but not from above, the
proportionality factor will therefore be positive.
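The positivity encoded in (2.13) and (2.15) is a purely algebraic fact: for any operator $Q$, the anticommutator $\{Q,Q^{+}\}$ is Hermitian with non-negative eigenvalues. A random-matrix sketch (illustrative only, not a specific supersymmetric model):

```python
import numpy as np

# For an arbitrary complex matrix Q, {Q, Q^+} = Q Q^+ + Q^+ Q is
# Hermitian and positive semi-definite -- the matrix analogue of (2.13).
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
anti = Q @ Q.conj().T + Q.conj().T @ Q     # the anticommutator {Q, Q^+}
eigenvalues = np.linalg.eigvalsh(anti)     # real, since anti is Hermitian
```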
Equations (2.12) to (2.15) express crucial properties of the
supersymmetry generators, and many of the most important features
of supersymmetric theories, whether in flat or curved space-time,
can be derived from them.
One such feature is the \textit{positivity of energy}, as can be
seen from equation (2.15) in conjunction with (2.13): \emph{the
spectrum of the energy operator $E$ (the Hamiltonian) in a theory
with supersymmetry contains no negative eigenvalues.} We denote
the state (or the family of states) with
the lowest energy by $\left\vert 0\right\rangle $ and call it the \textit{%
vacuum. }The vacuum will have zero energy
\begin{subequations}
\begin{equation}
E\left\vert 0\right\rangle =0
\end{equation}
if and only if
\begin{equation}
Q\left\vert 0\right\rangle =0\text{ \ \ \ \ \ \ \ \ \ and \ \ \ \ \ \ }%
Q^{+}\left\vert 0\right\rangle =0\text{ \ \ \ \ \ \ \ for all }Q
\end{equation}
Any state whose energy is not zero, e.g. any one-particle state,
can not be invariant under supersymmetry transformations. This
means that there must be
one (or more) superpartner state $Q\left\vert 1\right\rangle $ or $%
Q^{+}\left\vert 1\right\rangle $ for every one-particle state
$\left\vert 1\right\rangle .$ Thus: \emph{each supermultiplet must
contain at least one boson and one fermion whose spins differ by
}$\frac{1}{2}.$ A supermultiplet is a set of quantum states (or,
in different context, of quantum fields) which can be transformed
into one another by one or more supersymmetry transformations.
This is exactly analogous to the concept of \textquotedblleft
multiplet\textquotedblright\ known from atomic, nuclear and
elementary particle physics where e.g., the proton and the neutron
form an isospin doublet and can be transformed into each other by
an isospin rotation.
The translational invariance of $Q,$ expressed by equation (2.12)
implies that $Q$ does not change energy and momentum and that
therefore:\emph{\ all states in a multiplet of unbroken symmetry
have the same mass. }Experiments do not show elementary particles
to be accompanied by superpartners with different spins but
identical masses. Thus, if supersymmetry is fundamental to nature,
it can only be realized as \emph{a spontaneously broken symmetry}.
The term \textquotedblleft spontaneously broken
symmetry\textquotedblright\ is used when the interaction
potentials in a theory, and therefore the basic dynamics, are
symmetric but the states with lowest energy, the ground state or
vacuum, is not. If a generator of such a symmetry acts on the
vacuum, the result will not be zero. Perhaps the most familiar
example of a spontaneously broken symmetry is the occurrence of
ferromagnetism in spite of the spherical symmetry of the laws of
electrodynamics. Because the dynamics retain the essential symmetry of
the theory, states with very high energy tend to lose the memory
of the asymmetry of the ground state and the \textquotedblleft
spontaneously broken symmetry\textquotedblright\ gets
re-established. High energy may mean high temperature,
essentially because the statistics of the occupation of states is
different for fermions and bosons.
If supersymmetry is spontaneously broken, the ground state will
not be invariant under all supersymmetric operations: $Q\left\vert
0\right\rangle \neq 0$ or $Q^{+}\left\vert 0\right\rangle \neq 0$
for some $Q$. From what we said above in equation (2.16), we
conclude that:\emph{\ supersymmetry is spontaneously broken if and
only if the energy of the lowest lying state (the vacuum) is not
exactly zero. }Whereas spontaneous supersymmetry breaking may lift
the mass degeneracy of the supermultiplets by giving different
masses to different members of a multiplet, the multiplet structure
itself will remain intact. In particular, we still need \textquotedblleft
superpartners\textquotedblright\ for all known elementary
particles, although these may now be superheavy or otherwise
experimentally unobservable. The superpartners carry a new quantum
number (called R-charge). It has been shown that the highly
desirable property of super GUTs model, mentioned in the
introduction, namely that they stabilize the GUT hierarchy, is
closely associated with a strict conservation law for this quantum
number. If nature works that way the lightest particle with a
non-zero R-charge must be stable. Whereas this particle may be so
weakly interacting that it has not yet been observed, its presence
in the universe could crucially and measurably influence
cosmology.
As a matter of convention, fermionic superpartners of known bosons
are denoted by the suffix -ino (hence \textquotedblleft
gravitino, photino, gluino\textquotedblright ); the bosonic
superpartners of fermions are denoted by the prefix s- (
\textquotedblleft squark, slepton\textquotedblright ). The
discovery of any such bosinos or sfermions would confirm the
important prediction of superpartners which is common to all
supersymmetric models. It would be a major breakthrough and would
establish supersymmetry as an important property of the physics of
nature rather than just an attractive hypothesis.
We have not yet specified \textquotedblleft how
much\textquotedblright\ supersymmetry there should be. Do we
propose one spin $\frac{1}{2}$ photino as a partner of a physical
photon, or two, or how many? Different supersymmetric models give
different answers, depending on how many supersymmetric generators
$Q$ are present, as conserved charges, in the model. As already
said, the $Qs$ are spinor operators, and a spinor in four
space-time dimensions must have at least four real components. The
total number of $Q$s must therefore be a multiple of
four. A theory with minimal supersymmetry would be invariant under
the transformations generated
by just the four independent components of a single spinor operator $%
Q_{\alpha }$ with $\alpha =1,...,4$. We call this \emph{a theory
with N=1 supersymmetry, }\ and it would give rise to, e.g., a
single uncharged massless spin $\frac{1}{2}$ photino which is its
own antiparticle ( a \textquotedblleft Majorana
neutrino\textquotedblright ). If there is more supersymmetry, then
there will be several spinor generators with four
components each, $Q_{\alpha i}$ with $i=1,...N,$ and we speak of a \emph{%
theory with N-extended supersymmetry, }\ which will give rise to
N-photinos. The fundamental relationship (2.14) between the
generators of supersymmetry is now replaced by
\end{subequations}
\begin{equation}
\{Q_{i},Q_{j}^{+}\}=\delta _{ij}\left( \alpha E+\beta
\mathbf{P}\right)
\end{equation}
\section{The Algebra of N=1 supersymmetry}
Coleman and Mandula (1967) showed that under very general
assumptions, a Lie group that contains both the Poincar\'{e}
group \textit{P} and an
internal symmetry group \textit{G} must be just a direct product of $P$ and $%
G$ [10]. The generators of the Poincar\'{e} group are the
four-momentum $P_{\mu }(P_{0}=E=H,$ $P_{1}=P_{x},$ $P_{2}=P_{y},$ $%
P_{3}=P_{z}),$ which produces space-time translations, and the
antisymmetric tensor $M_{\mu \nu },$ which generates
space-time rotations, that is
\begin{equation}
\mathbf{J\equiv }\left( M_{23},M_{31},M_{12}\right) ;\text{ \ \ \ \ \ \ \ \ }%
\mathbf{K\equiv (}M_{01},M_{02},M_{03}),
\end{equation}
where the angular momentum operator $J_{i}$ generates space
rotations about the $i$-axis and $K_{i}$ generates Lorentz boosts
along the $i$-axis. So, if
the generators of the internal symmetry group $G$ are denoted by $%
T_{a}, $ the Coleman-Mandula theorem requires that
\begin{equation}
\left[ P_{\mu },T_{a}\right] =\left[ M_{\mu \nu
},T_{a}\right] =0
\end{equation}
This no-go theorem shows that it is impossible to mix internal and
Lorentz space-time symmetries in a non-trivial way. Supersymmetry
escapes this \textquotedblleft no-go\textquotedblright\ theorem
because, in addition to the generators $P_{\mu },$ $M_{\mu
\nu },$ $T_{a}$, which satisfy commutation relations, it
involves fermionic generators $Q$ that satisfy anti-commutation
relations. If we call the generators $P_{\mu },$ $M_{\mu \nu
},$ $T_{a}$ \textquotedblleft even\textquotedblright , and $Q$
\textquotedblleft odd\textquotedblright , then the supersymmetry
algebra has the general structure
\begin{eqnarray}
\left[ even,even\right] &=&even, \notag \\
\left[ odd,odd\right] &=&even, \notag \\
\left[ even,odd\right] &=&odd,
\end{eqnarray}
The above is called a \emph{graded Lie algebra}.
We now present the simplest form of supersymmetry algebra (N=1
supersymmetry). We introduce four generators $Q_{\alpha }(\alpha
=1,...,4),$ which form a four-component Majorana spinor. Majorana
spinors are the simplest possible type of spinor. They are
self-conjugate, i.e.
\begin{equation}
Q=Q^{c}=C\overline{Q}^{T},
\end{equation}
and hence have only half as many degrees of freedom as a Dirac
spinor. Indeed, any Dirac spinor $\psi $ may be written
\begin{equation}
\psi =\frac{1}{\sqrt{2}}\left( \psi _{1}+i\psi _{2}\right) ,
\end{equation}
where
\begin{equation}
\psi _{1}=\frac{1}{\sqrt{2}}(\psi +\psi ^{c}),\text{ \ \ \ \
and \ \ \ }\psi _{2}=-\frac{i}{\sqrt{2}}(\psi -\psi ^{c})
\end{equation}
are two independent Majorana spinors that satisfy $\psi _{i}=\psi
_{i}^{c}.$
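This decomposition can be checked numerically. The sketch below (an
illustration, not part of the original derivation) assumes the chiral
representation introduced later in (2.28), where charge conjugation acts
componentwise as $\psi ^{c}=C\gamma _{0}^{T}\psi ^{\ast }$:

```python
import numpy as np

# C gamma_0^T in the chiral representation quoted later in eq. (2.28)
# (an assumption of this sketch); s2 is the second Pauli matrix
s2 = np.array([[0, -1j], [1j, 0]])
Z2 = np.zeros((2, 2))
Cg0T = np.block([[Z2, 1j * s2], [-1j * s2, Z2]])

def conj(psi):
    """Charge conjugation psi^c = C gamma_0^T psi^*."""
    return Cg0T @ psi.conj()

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)   # a generic Dirac spinor

# conjugation is an involution: (psi^c)^c = psi
assert np.allclose(conj(conj(psi)), psi)

# the combinations of eq. (2.23) are self-conjugate, i.e. Majorana
psi1 = (psi + conj(psi)) / np.sqrt(2)
psi2 = -1j * (psi - conj(psi)) / np.sqrt(2)
assert np.allclose(conj(psi1), psi1)
assert np.allclose(conj(psi2), psi2)

# and they rebuild the Dirac spinor as in eq. (2.22)
assert np.allclose((psi1 + 1j * psi2) / np.sqrt(2), psi)
```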
Since $Q_{\alpha }$ is a spinor, it must satisfy
\begin{equation}
\left[ Q_{\alpha },M_{\mu \nu }\right] =\frac{1}{2}\left(
\sigma _{\mu \nu }\right) _{\alpha \beta }Q_{\beta }
\end{equation}
This relation expresses the fact that the $Q_{\alpha }$ transform
as a spinor under the rotations generated by $M_{\mu \nu }$
(recall that $\sigma _{\mu \nu }$, when sandwiched between
spinors, transforms as an antisymmetric tensor). The Jacobi
identity of commutators
\begin{equation}
\left[ \left[ Q_{\alpha },P_{\mu }\right] ,P_{\nu }\right] +\left[
\left[
P_{\nu },Q_{\alpha }\right] ,P_{\mu }\right] +\left[ \left[ P_{\mu },P_{\nu }%
\right] ,Q_{\alpha }\right] =0
\end{equation}
requires that $Q_{\alpha }$ must be translationally invariant, that
is
\begin{equation}
\left[ Q_{\alpha },P_{\mu }\right] =0.
\end{equation}
It is the remaining anticommutation relation
\begin{equation}
\left\{ Q_{\alpha },\overline{Q_{\beta }}\right\} =2(\gamma ^{\mu
})_{\alpha \beta }P_{\mu }
\end{equation}
which we shall derive later, and which closes the algebra, that
has the most interesting consequences. Clearly the anticommutator
has to yield an even generator, which might be either $P_{\mu }$
or $M_{\mu \nu }.$ A term of the form $\sigma ^{\mu \nu }M_{\mu
\nu }$ on the right-hand side would violate a
generalized Jacobi identity involving $Q_{\alpha },$ $Q_{\beta }$ and $%
P_{\mu },$ and the algebra would not close. Indeed, if we go back to
the \textquotedblleft\ no-go\textquotedblright\ theorem and allow
for anticommutators as well as commutators, we find that the only
allowed supersymmetries (apart from trivial extension) are those
based on the graded Lie algebra defined by eqs. (2.24)-(2.27).
We chose $Q_{\alpha }$ to be a Majorana spinor with four
independent (real) parameters, but we could have used a Weyl
spinor with two complex components equally well. In fact, we shall
find it more convenient to work with a left-handed Weyl spinor
$\psi _{\alpha }$ with $\alpha =1,2,$ and the chiral
representation of the Dirac matrices in which
\begin{equation}
\gamma =%
\begin{pmatrix}
0 & \sigma \\
-\sigma & 0%
\end{pmatrix}%
,\gamma _{0}=%
\begin{pmatrix}
0 & I \\
I & 0%
\end{pmatrix}%
,\gamma _{5}=%
\begin{pmatrix}
-I & 0 \\
0 & I%
\end{pmatrix}%
,C\gamma _{0}^{T}=%
\begin{pmatrix}
0 & i\sigma _{2} \\
-i\sigma _{2} & 0%
\end{pmatrix}%
.
\end{equation}
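These matrices can be checked numerically: a minimal numpy sketch (the
metric signature $(+,-,-,-)$ is an assumption, chosen consistently with
(2.27)) verifies that this chiral representation satisfies the Clifford
algebra $\{\gamma ^{\mu },\gamma ^{\nu }\}=2\eta ^{\mu \nu }$ and that
$\gamma _{5}$ anticommutes with all four $\gamma ^{\mu }$.

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

# the chiral representation of eq. (2.28)
gamma0 = np.block([[Z2, I2], [I2, Z2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
gamma5 = np.block([[-I2, Z2], [Z2, I2]])

eta = np.diag([1, -1, -1, -1])   # assumed metric signature (+,-,-,-)

# Clifford algebra {gamma^mu, gamma^nu} = 2 eta^{mu nu}
for mu in range(4):
    for nu in range(4):
        acomm = gammas[mu] @ gammas[nu] + gammas[nu] @ gammas[mu]
        assert np.allclose(acomm, 2 * eta[mu, nu] * np.eye(4))

# gamma_5 anticommutes with every gamma^mu and squares to the identity
for gmat in gammas:
    assert np.allclose(gamma5 @ gmat + gmat @ gamma5, 0)
assert np.allclose(gamma5 @ gamma5, np.eye(4))
```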
Using the two-component Weyl spinor $\psi _{\alpha },$ we can
construct a Majorana spinor; in this chiral representation we find
\begin{equation}
Q=Q^{c}=%
\begin{pmatrix}
\psi \\
0%
\end{pmatrix}%
+C\gamma _{0}^{T}%
\begin{pmatrix}
\psi ^{\ast } \\
0%
\end{pmatrix}%
=%
\begin{pmatrix}
\psi \\
-i\sigma _{2}\psi ^{\ast }%
\end{pmatrix}%
.
\end{equation}
We then look for possible supersymmetric representations that
contain massless particles. These should be the relevant
multiplets, since the particles we observe are thought to acquire their
masses only as a result of spontaneous symmetry breaking. The
procedure we employ is to evaluate the
anticommutator (2.27) for a massless particle moving along the z-axis with $%
P_{\mu }=(E,0,0,E).$ On substituting (2.29) into equation (2.27),
we find
\begin{equation}
\{\psi _{a},\psi _{b}^{+}\}=2E(1-\sigma _{3})_{ab}
\end{equation}
with $a,b=1,2,$ giving
\begin{equation}
\left\{ \psi _{1},\psi _{2}^{+}\right\} =0,\left\{ \psi _{1},\psi
_{1}^{+}\right\} =0,\left\{ \psi _{2},\psi _{2}^{+}\right\} =4E.
\end{equation}
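Equation (2.30) can be reproduced numerically. Writing $\{Q_{\alpha
},Q_{\beta }^{+}\}=2(\gamma ^{\mu }\gamma ^{0})_{\alpha \beta }P_{\mu }$
and restricting to the upper (Weyl) components of (2.29), a numpy sketch
(assuming the metric $(+,-,-,-)$, so that $P_{\mu }=(E,0,0,-E)$) gives:

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)

# gamma^0 and gamma^3 in the chiral representation of eq. (2.28)
gamma0 = np.block([[Z2, I2], [I2, Z2]])
gamma3 = np.block([[Z2, sigma3], [-sigma3, Z2]])

E = 1.0                        # energy of the massless particle (any E > 0)
P_lower = [E, 0.0, 0.0, -E]    # P_mu for P^mu = (E,0,0,E), metric (+,-,-,-)

# {Q_alpha, Q_beta^+} = 2 (gamma^mu gamma^0)_{alpha beta} P_mu
anticomm = 2 * (P_lower[0] * gamma0 + P_lower[3] * gamma3) @ gamma0

# the upper-left 2x2 block acts on the Weyl components psi_a of eq. (2.29)
block = anticomm[:2, :2]
assert np.allclose(block, 2 * E * (I2 - sigma3))
# hence {psi_1, psi_1^+} = 0 and {psi_2, psi_2^+} = 4E, as in eq. (2.31)
assert np.isclose(block[0, 0], 0) and np.isclose(block[1, 1], 4 * E)
```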
We see that $\psi _{2}^{+}$ and $\psi _{2}$ act as creation and
annihilation operators.
Now, a massless particle of spin $s$ can only have helicities
$\lambda =\pm s,$ so, starting from the left-handed state
$\left\vert s,\lambda =-s\right\rangle ,$ which is annihilated by
$\psi _{2},$ only one new state can be formed, i.e., $\psi
_{2}^{+}\left\vert s,-s\right\rangle .$ This describes a particle
of spin $s+\frac{1}{2}$ and helicity $-(s+\frac{1}{2}),$ and by
virtue of (2.26) it is also massless. Then, acting again with
$\psi _{1}^{+}$ or with $\psi _{2}^{+}$ gives states with zero
norm by virtue of (2.31) and $\psi _{2}^{+}\psi _{2}^{+}=0$ (which
follows from the fermionic nature of $\psi _{2}$). So the
resulting massless irreducible representation consists of just two
states. Hence, the possible supersymmetric multiplets of interest
to us are
\begin{center}
\begin{tabular}{|c|c|}
\hline \textbf{Chiral multiplet} & \textbf{Vector (or gauge)
multiplet} \\ \hline
fermion $\left\vert \frac{1}{2},\frac{1}{2}\right\rangle $ & gauge boson $%
\left\vert 1,1\right\rangle $ \\ \hline
sfermion $\left\vert 0,0\right\rangle $ & gaugino $\left\vert \frac{1}{2},%
\frac{1}{2}\right\rangle $ \\ \hline
\end{tabular}
\end{center}
To maintain \emph{CPT }invariance we must add the antiparticle
states that have opposite helicity, thus giving a total of four
states, $\left\vert s\pm \frac{1}{2},\pm
(s+\frac{1}{2})\right\rangle ,$ $\left\vert s,\pm s\right\rangle
,$ in each multiplet.
All the particles in such multiplets carry the same gauge quantum
numbers. For this reason, the known fermions (i.e., the quarks and
leptons) must be partnered by spin 0 particles (called
"sfermions"), not spin 1 bosons. This is because the only spin 1
bosons allowed in a renormalizable theory are the gauge bosons, and
they have to belong to the adjoint representation of the gauge
group, but the quarks and leptons do not. Instead, the gauge
bosons are partnered by new spin-$\frac{1}{2}$ "gauginos"
(spin-$\frac{3}{2}$ being ruled out by the requirement of
renormalizability). The need to introduce new supersymmetry
partners, rather than interrelate the known bosons and fermions,
is undoubtedly a major setback for supersymmetry.
For completeness, we briefly consider also supermultiplets of
particles with nonzero mass $M$. In the particle's rest
frame, $P_{\mu }=(M,0,0,0),$ so the anticommutator (2.27) becomes
\begin{equation}
\{\psi _{a},\psi _{b}^{+}\}=2M\delta _{ab}.
\end{equation}
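The rest-frame evaluation can be checked the same way (a numpy sketch,
again assuming the metric $(+,-,-,-)$ and the chiral representation of
(2.28); the specific value of $M$ is arbitrary):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
gamma0 = np.block([[Z2, I2], [I2, Z2]])   # chiral representation, eq. (2.28)

M = 1.5   # rest mass (any M > 0); P_mu = (M, 0, 0, 0) in the rest frame

# {Q_alpha, Q_beta^+} = 2 (gamma^mu gamma^0)_{alpha beta} P_mu = 2M (gamma^0)^2
anticomm = 2 * M * gamma0 @ gamma0
assert np.allclose(anticomm, 2 * M * np.eye(4))
# restricted to the Weyl components: {psi_a, psi_b^+} = 2M delta_ab, eq. (2.32)
assert np.allclose(anticomm[:2, :2], 2 * M * np.eye(2))
```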
We see that $\frac{\psi _{a}^{+}}{\sqrt{2M}}$ and $\frac{\psi
_{a}}{\sqrt{2M}}$ act as creation and annihilation operators,
respectively, for both $a=1$ and $2.$ Starting from a spin state
$\left\vert s,s_{3}\right\rangle ,$ which is annihilated by the
$\psi _{a},$ we reach three other states by the action of $\psi
_{1}^{+},\psi _{2}^{+},$ and $\psi _{1}^{+}\psi _{2}^{+}=-\psi
_{2}^{+}\psi _{1}^{+}.$
\section{The Wess-Zumino Model}
We are now going to consider the construction of supersymmetric
field theories. We shall begin with the model of Wess and Zumino
(1974) of the massless spin 0, spin-$\frac{1}{2}$ multiplet.
Indeed, probably the most intuitive way of introducing
supersymmetry is to explore, through this simple model, possible
\textit{Fermi-Bose }symmetries of the Lagrangian. It could
therefore equally well have been the starting point for our
discussion of supersymmetry.
The simplest multiplet in which to search for supersymmetry
consists of a two-component \textit{Weyl spinor} (or equivalently
a four-component of \textit{Majorana spinor}) together with two
real scalar fields. To be specific, we take a massless Majorana
spinor field $\psi ,$and massless scalar and pseudoscalar fields
$A$ and $B$, respectively. The kinetic energy term is
\begin{equation}
\mathfrak{L}=\frac{1}{2}(\partial ^{\mu }A)(\partial _{\mu }A)+\frac{1}{2}%
(\partial ^{\mu }B)(\partial _{\mu }B)+\frac{1}{2}i\overline{\psi
}\gamma _{\mu }\partial ^{\mu }\psi .
\end{equation}
The unfamiliar factor of $\frac{1}{2}$ in the fermion term arises because $%
\psi $ is a Majorana spinor; a Dirac spinor is a linear
combination of two Majorana spinors (see(2.22)).
The following bilinear identities are particularly useful when
exploring supersymmetry. For any two Majorana\textit{\ }spinors
$\psi _{1},\psi _{2}$ we have
\begin{equation}
\overline{\psi }_{1}\mathit{\Gamma }\psi _{2}=\eta \overline{\psi }_{2}%
\mathit{\Gamma }\psi _{1}
\end{equation}
where $\eta =(1,1,-1,1,-1)$ for $\Gamma =(1,\gamma _{5},\gamma
_{\mu },\gamma _{\mu }\gamma _{5},\sigma _{\mu \nu }).$
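These signs can be traced to the conjugation-matrix identity
$C^{-1}\Gamma ^{T}C=\eta \Gamma $ (the extra minus sign from
anticommuting the two Majorana spinors is already absorbed into $\eta $).
In the chiral representation of (2.28) one has $C=\mathrm{diag}(i\sigma
_{2},-i\sigma _{2})$, inferred from the $C\gamma _{0}^{T}$ quoted there
(an assumption of this sketch), and the whole pattern can be verified
numerically:

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]], dtype=complex)]

gamma0 = np.block([[Z2, I2], [I2, Z2]])
gammas = [gamma0] + [np.block([[Z2, s], [-s, Z2]]) for s in sigma]
gamma5 = np.block([[-I2, Z2], [Z2, I2]])

# charge-conjugation matrix with C^T = -C and C^2 = -1, inferred from the
# C gamma_0^T given in eq. (2.28) (an assumption of this sketch)
C = np.block([[1j * sigma[1], Z2], [Z2, -1j * sigma[1]]])
Cinv = -C

def eta_of(Gamma):
    """Sign eta in the flip identity C^{-1} Gamma^T C = eta * Gamma."""
    lhs = Cinv @ Gamma.T @ C
    if np.allclose(lhs, Gamma):
        return 1
    if np.allclose(lhs, -Gamma):
        return -1
    raise ValueError("unexpected flip behaviour")

def sigma_munu(mu, nu):
    return 0.5j * (gammas[mu] @ gammas[nu] - gammas[nu] @ gammas[mu])

assert eta_of(np.eye(4)) == 1                               # Gamma = 1
assert eta_of(gamma5) == 1                                  # Gamma = gamma_5
assert all(eta_of(g) == -1 for g in gammas)                 # Gamma = gamma_mu
assert all(eta_of(g @ gamma5) == 1 for g in gammas)         # gamma_mu gamma_5
assert all(eta_of(sigma_munu(mu, nu)) == -1
           for mu in range(4) for nu in range(mu + 1, 4))   # sigma_{mu nu}
```

The five assertions reproduce exactly the sign list $\eta =(1,1,-1,1,-1)$
quoted above.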
To discover the Fermi-Bose symmetries of $\mathfrak{L}$, we make
the following infinitesimal transformations: $A\rightarrow
A^{\prime }=A+\delta A,$ etc., where
\begin{subequations}
\begin{eqnarray}
\delta A &=&\overline{\varepsilon }\psi , \\
\delta B &=&i\overline{\varepsilon }\gamma _{5}\psi , \\
\delta \psi &=&-i\gamma ^{\mu }\partial _{\mu }(A+i\gamma
_{5}B)\varepsilon .
\end{eqnarray}
Here $\varepsilon $ is a constant infinitesimal Majorana spinor that
anticommutes with $\psi $ and commutes with $A$ and $B.$ These
transformations are clearly Lorentz-covariant but otherwise $\delta A$ and $%
\delta B$ are just fairly obvious first guesses. The possibility
of just
constructing two independent invariant quantities $\overline{\varepsilon }%
\psi $ and $\overline{\varepsilon }\gamma _{5}\psi $ is the reason
for introducing both scalar and pseudoscalar fields. Since $A$ and
$\psi $ have mass dimensions $1$ and $\frac{3}{2},$ respectively,
$\varepsilon $ must have dimension $-\frac{1}{2}.$ The derivative
in $\delta \psi $ is therefore required to match these dimensions.
We have assumed that the transformations have to be linear in the
fields.
Under (2.35) the change in $\mathfrak{L}$ can be written in the
form
\end{subequations}
\begin{eqnarray}
\delta \mathfrak{L} &\mathfrak{=}&\partial ^{\mu }A\partial _{\mu
}(\delta
A)+\partial ^{\mu }B\partial _{\mu }(\delta B)+\frac{1}{2}i(\delta \overline{%
\psi })\gamma ^{\mu }\partial _{\mu }\psi +\frac{1}{2}i\overline{\psi }%
\gamma ^{\mu }\partial _{\mu }(\delta \psi ) \notag \\
&=&\partial ^{\mu }A\overline{\varepsilon }\partial _{\mu }\psi
+i\partial
^{\mu }B\overline{\varepsilon }\gamma _{5}\partial _{\mu }\psi -\frac{1}{2}%
\overline{\varepsilon }\gamma ^{\nu }\gamma ^{\mu }\partial _{\nu
}(A+i\gamma _{5}B)\partial _{\mu }\psi +\frac{1}{2}\overline{\psi
}\gamma ^{\mu }\partial _{\mu }[\gamma ^{\nu }\partial _{\nu
}(A+i\gamma
_{5}B)\varepsilon ] \notag \\
&=&\partial _{\mu }\left[ \overline{\varepsilon }\{\partial ^{\mu
}(A+i\gamma _{5}B)-\frac{1}{2}\gamma ^{\nu }\gamma ^{\mu }\partial
_{\nu
}(A+i\gamma _{5}B)\}\psi \right] \notag \\
&=&\partial _{\mu }[\frac{1}{2}\overline{\varepsilon }\gamma ^{\mu
}\{\eth (A+i\gamma _{5}B)\}\psi ],
\end{eqnarray}
where we have used the identities
\begin{equation}
\overline{\varepsilon }\psi =\overline{\psi }\varepsilon ,\text{ \ and \ }%
\overline{\varepsilon }\gamma ^{5}\psi =\overline{\psi }\gamma
^{5}\varepsilon ,
\end{equation}
of (2.34). Since $\delta \mathfrak{L}$ is a total derivative, it
integrates to zero when we form the action. Hence, the action is
invariant under the combined global supersymmetric transformations
(2.35) that mix the fermion
and the boson fields. As usual, "global" is used to indicate that $%
\varepsilon $ is independent of space-time.
We have remarked that the $\delta \psi $ transformation (2.35c)
contains a derivative. It thus relates the Fermi-Bose symmetry to
the Poincar\'{e} group. In particular, the appearance of
the time derivative gives an \textit{absolute }significance to the
total energy, which is normally absent in theories that do not
involve gravity.
Returning attention again to the global transformations (2.35), we
recall that the commutator of two successive transformations of a
symmetry group must itself be a symmetry transformation. In this
way we identify the algebra of the generators of the group. To
obtain the corresponding result for supersymmetry, we must
therefore consider two successive supersymmetric transformations
like (2.35). For example, if for the scalar field \textit{A} we
may take a transformation (2.35a) associated with parameter
$\varepsilon _{1},$ followed by another with parameter $\varepsilon
_{2},$ then we obtain from (2.35c)
\begin{equation}
\delta _{2}(\delta _{1}A)=\delta _{2}(\overline{\varepsilon _{1}}\psi )=-i%
\overline{\varepsilon _{1}}\gamma ^{\mu }\partial _{\mu
}(A+i\gamma _{5}B)\varepsilon _{2}.
\end{equation}
Hence, the commutator
\begin{eqnarray}
(\delta _{2}\delta _{1}-\delta _{1}\delta _{2})A &=&-i\overline{%
\varepsilon _{1}}\gamma ^{\mu }\partial _{\mu }(A+i\gamma
_{5}B)\varepsilon _{2}+i\overline{\varepsilon _{2}}\gamma ^{\mu
}\partial _{\mu }\left(
A+i\gamma _{5}B\right) \varepsilon _{1} \notag \\
&=&-2i\overline{\varepsilon _{1}}\gamma ^{\mu }\varepsilon
_{2}\partial
_{\mu }A \notag \\
&=&-2\overline{\varepsilon _{1}}\gamma ^{\mu }\varepsilon
_{2}P_{\mu }A
\end{eqnarray}
as the terms involving \textit{B} cancel when we use the
identities for Majorana spinors (2.34) and $i\partial _{\mu
}=P_{\mu }.$
Now the generator of supersymmetric transformations $Q_{\alpha }$
is a four-component Majorana spinor, which we define by the
requirement that
\begin{equation}
\delta A=\overline{\varepsilon }QA.
\end{equation}
To make this consistent with (2.39), we form the commutator
\begin{eqnarray}
\left[ \delta _{2},\delta _{1}\right] A &=&\left[
\overline{\varepsilon _{2}}Q,\overline{\varepsilon _{1}}Q\right]
A=\left[ \overline{Q}\varepsilon
_{2},\overline{\varepsilon _{1}}Q\right] A \notag \\
&=&(\overline{Q_{\beta }}\varepsilon _{2\beta }\overline{\varepsilon }%
_{1\alpha }Q_{\alpha }-\overline{\varepsilon }_{1\alpha }Q_{\alpha }%
\overline{Q}_{\beta }\varepsilon _{2\beta })A \notag \\
&=&-\overline{\varepsilon }_{1\alpha }\varepsilon _{2\beta }\{Q_{\alpha },%
\overline{Q}_{\beta }\}A,
\end{eqnarray}
using (2.37). Writing (2.39) in component form and equating it
with (2.41) reveals the requirement
\begin{equation}
\{Q_{\alpha },\overline{Q}_{\beta }\}=2(\gamma ^{\mu })_{\alpha
\beta }P_{\mu }
\end{equation}
which is indeed part of the supersymmetry algebra (2.27). The same
commutator is found on applying successive supersymmetry
transformations to field $B$.
Finally, we must check that the algebra closes when acting on the
spinor field $\psi .$ We find from (2.35c)
\begin{eqnarray}
\left[ \delta _{2},\delta _{1}\right] \psi &=&-i\gamma ^{\mu
}\partial _{\mu
}\delta _{2}(A+i\gamma _{5}B)\varepsilon _{1}-(1\leftrightarrow 2) \notag \\
&=&-i\eth (\overline{\varepsilon }_{2}\psi +i\gamma _{5}%
\overline{\varepsilon }_{2}i\gamma _{5}\psi )\varepsilon
_{1}-(1\leftrightarrow 2) \notag \\
&=&-2i\overline{\varepsilon }_{1}\gamma ^{\mu }\varepsilon
_{2}\partial _{\mu }\psi +i\overline{\varepsilon }_{1}\gamma ^{\nu
}\varepsilon _{2}\gamma _{\nu }\eth \psi ,
\end{eqnarray}
where the last equality uses a Fierz rearrangement to bring the two $%
\varepsilon $'s together, as well as the Majorana identities (2.37).
If we use the field equation $\eth \psi =0$ for a free massless
fermion, the last term vanishes identically and (2.43) has exactly
the same form as (2.39) for the field $A$, and hence we obtain
(2.42) as before.
However, there is a problem with (2.43) because it gives the
required closure only when $\psi $ satisfies the Dirac equation,
but not for interacting fermions that are \textquotedblleft
off the mass shell\textquotedblright . The reason is that for
off-mass-shell particles, the numbers of fermion and boson degrees
of freedom no longer match up. $A$ and $B$ still have two bosonic
degrees of freedom, whereas the Majorana spinor $\psi $ has four.
We can restore the symmetry by adding two extra bosonic fields,
$\mathfrak{F}$ and $\mathfrak{G}$ (called auxiliary fields), whose
free Lagrangian takes the form
\begin{equation}
\mathfrak{L}=\frac{1}{2}\mathfrak{F}^{2}+\frac{1}{2}\mathfrak{G}^{2}.
\end{equation}
This gives the field equations $\mathfrak{F=G=}0\mathfrak{,}$ so
these new fields have no on-mass-shell states. From (2.44) they
clearly must have mass dimension 2, so on dimensional grounds
their supersymmetry transformations can only take the forms
\begin{subequations}
\begin{equation}
\delta \mathfrak{F}=-i\overline{\varepsilon }\gamma ^{\mu
}\partial _{\mu
}\psi ,\text{ \ \ \ \ \ \ \ \ \ \ \ \ }\delta \mathfrak{G}=\overline{%
\varepsilon }\gamma ^{5}\gamma ^{\mu }\partial _{\mu }\psi
\end{equation}
and (2.35c) becomes
\begin{equation}
\delta \psi =-i\gamma ^{\mu }\partial _{\mu }\left( A+i\gamma
_{5}B\right) \varepsilon +\left( \mathfrak{F}+i\gamma
_{5}\mathfrak{G}\right) \varepsilon .
\end{equation}
The mass dimensions prevent $\mathfrak{F}$ and $\mathfrak{G}$ from
occurring in
\begin{equation}
\delta A=\overline{\varepsilon }\psi ,\ \ \ \ \ \ \ \ \ \ \delta B=i%
\overline{\varepsilon }\gamma _{5}\psi .
\end{equation}
Under these modified supersymmetry transformations (2.45), we can
show that the unwanted term in (2.43) cancels and, moreover, that
\end{subequations}
\begin{equation}
\left[ \delta _{1},\delta _{2}\right] \mathfrak{F}=-2i\overline{\varepsilon }%
_{1}\gamma ^{\mu }\varepsilon _{2}\partial _{\mu
}\mathfrak{F,}
\end{equation}
and similarly for $\mathfrak{G}$, as required by (2.42).
In this way, we have obtained the spin 0, spin-$\frac{1}{2}$
realization of supersymmetry originally found by Wess and Zumino
(1974).
\section{Mass and Interaction Terms in the Lagrangian}
We have found that the free Lagrangian
\begin{equation}
\mathfrak{L}=\frac{1}{2}\partial ^{\mu }A\partial _{\mu }A+\frac{1}{2}%
\partial ^{\mu }B\partial _{\mu }B+\frac{1}{2}i\overline{\psi }\eth \psi +%
\frac{1}{2}\mathfrak{F}^{2}+\frac{1}{2}\mathfrak{G}^{2},
\end{equation}
that describes the multiplet ($A,B,\psi
,\mathfrak{F},\mathfrak{G}$), is invariant (up to a total
derivative) under the supersymmetry transformations (2.45).
However, it is easy to check that supersymmetry invariance is
still preserved if the Lagrangian is extended to include a
quadratic mass term of the form
\begin{equation}
\mathfrak{L}_{m}=m(\mathfrak{F}A+\mathfrak{G}B-\frac{1}{2}\overline{\psi }%
\psi )
\end{equation}
and a cubic interaction term
\begin{equation}
\mathfrak{L}_{i}=\frac{g}{\sqrt{2}}[\mathfrak{F}A^{2}-\mathfrak{F}B^{2}+2%
\mathfrak{G}AB-\overline{\psi }(A-i\gamma _{5}B)\psi ]
\end{equation}
Higher-order terms must be excluded because they are
nonrenormalizable. When we use the classical equations of motion,
\begin{equation}
\frac{\partial \mathfrak{L}}{\partial \mathfrak{F}}\mathfrak{=}\frac{%
\partial \mathfrak{L}}{\partial \mathfrak{G}}=0,
\end{equation}
for the complete Lagrangian, $\mathfrak{L=L}_{0}+\mathfrak{L}_{m}+\mathfrak{L%
}_{i},$ we find
\begin{subequations}
\begin{eqnarray}
\mathfrak{F}+mA+\frac{g}{\sqrt{2}}(A^{2}-B^{2}) &=&0, \\
\mathfrak{G}+mB+\sqrt{2}gAB &=&0.
\end{eqnarray}
These equations of motion are purely algebraic and so the dynamics
is
unchanged if we use them to eliminate the auxiliary fields $\mathfrak{F\ }$%
and $\mathfrak{G}$ from the Lagrangian. We obtain
\end{subequations}
\begin{eqnarray}
\mathfrak{L}\mathfrak{=\ } &&\frac{1}{2}\partial _{\mu }A\partial ^{\mu }A+%
\frac{1}{2}\partial _{\mu }B\partial ^{\mu }B+\frac{1}{2}i\overline{\psi }%
\eth \psi -\frac{1}{2}m\overline{\psi }\psi -\frac{1}{2}m^{2}(A^{2}+B^{2})-%
\frac{1}{\sqrt{2}}mgA(A^{2}+B^{2}) \notag \\
&&-\frac{1}{4}g^{2}(A^{2}+B^{2})^{2}-\frac{1}{\sqrt{2}}g\overline{\psi }%
(A-i\gamma ^{5}B)\psi
\end{eqnarray}
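The elimination of the auxiliary fields in the bosonic sector can be
reproduced with a short symbolic computation (a sympy sketch; only the
non-derivative scalar terms of (2.44), (2.49) and (2.50) are included,
since the kinetic and fermion terms pass through unchanged). It confirms
(2.51a,b) and, in particular, the quartic coupling
$-\frac{1}{4}g^{2}(A^{2}+B^{2})^{2}$:

```python
import sympy as sp

A, B, F, G, m, g = sp.symbols('A B F G m g', real=True)

# non-derivative bosonic terms of L_0 + L_m + L_i, eqs. (2.44), (2.49), (2.50)
L = (sp.Rational(1, 2) * F**2 + sp.Rational(1, 2) * G**2
     + m * (F * A + G * B)
     + g / sp.sqrt(2) * (F * A**2 - F * B**2 + 2 * G * A * B))

# the algebraic equations of motion dL/dF = dL/dG = 0 reproduce eqs. (2.51a,b)
sol = sp.solve([sp.diff(L, F), sp.diff(L, G)], [F, G])
assert sp.simplify(sol[F] + m * A + g / sp.sqrt(2) * (A**2 - B**2)) == 0
assert sp.simplify(sol[G] + m * B + sp.sqrt(2) * g * A * B) == 0

# substituting back gives the scalar potential terms of eq. (2.52)
Lsub = sp.expand(L.subs(sol))
expected = sp.expand(-m**2 * (A**2 + B**2) / 2
                     - m * g * A * (A**2 + B**2) / sp.sqrt(2)
                     - g**2 * (A**2 + B**2)**2 / 4)
assert sp.simplify(Lsub - expected) == 0
```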
Several features of this Lagrangian, which are characteristic of
supersymmetric theories, are worth noting. The masses of the
scalars and the fermions are all equal. There are cubic and
quartic couplings between
the scalar fields, and also a Yukawa-type interaction between the fermion $%
\psi $ and the scalars $A$ and $B,$ yet in total there are only
two free parameters: $m$ and $g.$ This interrelation between
fermion and boson masses and couplings is the essence of
supersymmetry.
The model can also be shown to have some remarkable
renormalization properties in that, despite the presence of the
scalar fields, there is no renormalization of the mass and
coupling constant (although wave function renormalization is still
necessary). The divergences arising from boson loops are cancelled
by those from fermion loops, which have the opposite sign. This is
just the type of cancellation we need to stabilize the gauge
hierarchy. These powerful nonrenormalization theorems make
supersymmetry particularly compelling. However, when we break
supersymmetry, as we must given the absence of fermion-boson mass
degeneracy in nature, we have to be careful to preserve the
relations between the couplings of particles of different spin
implied in (2.52).
\section{The Superpotential}
To see how these results generalize to higher symmetries, it is
convenient to work entirely with left-handed fermion fields.
Recall from (2.29) that a Majorana spinor $\psi $ can be formed
entirely from a left-handed Weyl spinor
\begin{equation}
\psi =\psi _{L}+C\overline{\psi }_{L}^{T}
\end{equation}
and that the mass term is
\begin{eqnarray}
\mathfrak{L} &=&m\overline{\psi }\psi =m\overline{\psi
}_{R}^{c}\psi
_{L}+herm.conj. \notag \\
&=&m\psi _{L}^{T}C\psi _{L}+herm.conj.
\end{eqnarray}
using the relation
\begin{equation}
\overline{\psi }_{R}^{c}=\psi _{R}^{c+}\gamma ^{0}=\psi _{L}^{\ast
+}\gamma _{0}^{T+}C^{+}\gamma ^{0}=-\psi _{L}^{T}C^{-1}=\psi
_{L}^{T}C.
\end{equation}
For simplicity we have set $-C^{-1}=C,$ which is valid in all the
familiar representations of the Dirac matrices. We can rewrite the
supersymmetry Lagrangian of section (2.6) using just a left handed
field $\psi _{L},$ and complex fields $\phi $ and $F$ for its
scalar partner, viz.,
\begin{equation}
\phi \equiv \frac{1}{\sqrt{2}}(A+iB),\ \ \ \ \ \ \ \ \ \ \ \ \ \ \
and\ \ \
\ \ \ \ \ \ \ \ \ \ \ \ \ \ F\equiv \frac{1}{\sqrt{2}}(\mathfrak{F}-i%
\mathfrak{G})
\end{equation}
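That these complex combinations repackage the real fields correctly can
be checked symbolically (a sympy sketch; the symbols \texttt{F\_r},
\texttt{G\_r} stand for the real auxiliary fields $\mathfrak{F}$,
$\mathfrak{G}$ of the text):

```python
import sympy as sp

# F_r, G_r stand for the real auxiliary fields F and G (fraktur) of the text
A, B, F_r, G_r, m, g = sp.symbols('A B F_r G_r m g', real=True)
phi = (A + sp.I * B) / sp.sqrt(2)
F = (F_r - sp.I * G_r) / sp.sqrt(2)

# |F|^2 reproduces the auxiliary terms (1/2)F^2 + (1/2)G^2 of eq. (2.48)
assert sp.simplify(F * sp.conjugate(F) - (F_r**2 + G_r**2) / 2) == 0

# m(phi F + c.c.) reproduces the bosonic mass terms m(F A + G B) of eq. (2.49)
mass = m * (phi * F + sp.conjugate(phi * F))
assert sp.simplify(mass - m * (F_r * A + G_r * B)) == 0

# g(phi^2 F + c.c.) reproduces the bosonic cubic terms of eq. (2.50)
cubic = g * (phi**2 * F + sp.conjugate(phi**2 * F))
assert sp.simplify(cubic
                   - g / sp.sqrt(2) * (F_r * (A**2 - B**2)
                                       + 2 * G_r * A * B)) == 0
```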
From $\mathfrak{L}=\mathfrak{L}_{0}+\mathfrak{L}_{m}+\mathfrak{L}_{i}$
we obtain
\begin{equation}
\mathfrak{L}=\partial _{\mu }\phi ^{\ast }\partial ^{\mu }\phi +i\overline{%
\psi }_{L}\eth \psi _{L}+FF^{\ast }+m(\phi F-\frac{1}{2}\psi
_{L}^{T}C\psi _{L})+herm.conj.+g(\phi ^{2}F-\phi \psi
_{L}^{T}C\psi _{L})+herm.conj.
\end{equation}
Then using the equation of motion $\frac{\partial
\mathfrak{L}}{\partial F}=0,$ which gives
\begin{equation}
F^{\ast }=-m\phi -g\phi ^{2},
\end{equation}
we can eliminate the auxiliary field $F^{\ast }$ and so the
Lagrangian becomes
\begin{equation}
\mathfrak{L}=\partial _{\mu }\phi ^{\ast }\partial ^{\mu }\phi +i\overline{%
\psi }_{L}\eth \psi _{L}-\left\vert m\phi +g\phi ^{2}\right\vert ^{2}-(\frac{1}{2%
}m\psi _{L}^{T}C\psi _{L}+g\phi \psi _{L}^{T}C\psi
_{L}+herm.conj.)
\end{equation}
It is useful to re-examine the Lagrangian (2.57) in terms of an
analytic function $W(\phi )$, known as the \textquotedblleft
superpotential\textquotedblright , viz.,
\begin{equation}
\mathfrak{L}=\mathfrak{L}_{K.E.}+FF^{\ast }+F^{\ast }\frac{\partial W}{%
\partial \phi }+F\frac{\partial W^{\ast }}{\partial \phi ^{\ast }}-%
\frac{1}{2}(\frac{\partial ^{2}W}{\partial \phi ^{2}}\psi
_{L}^{T}C\psi _{L}+h.c.),
\end{equation}
where $\mathfrak{L}_{K.E.}$ denotes the sum of the kinetic energy
terms of the $\phi $ and $\psi _{L}$ fields. Note that $W$, which
is of dimension 3,
depends only on $\phi $ and not on $\phi ^{\ast }.$ Upon using $\frac{%
\partial \mathfrak{L}}{\partial F}=\frac{\partial \mathfrak{L}}{%
\partial F^{\ast }}=0$ to eliminate the auxiliary fields, we find
\begin{equation}
\mathfrak{L}=\mathfrak{L}_{K.E.}-\left\vert \frac{\partial W}{\partial \phi }%
\right\vert ^{2}-\frac{1}{2}(\frac{\partial ^{2}W}{\partial \phi
^{2}}\psi _{L}^{T}C\psi _{L}+h.c.).
\end{equation}
For a renormalizable theory $W$ can be, at most, a cubic function
of $\phi ,$ since otherwise the Lagrangian would contain couplings
with dimension less than 0. Substituting
\begin{equation}
W=\frac{1}{2}m\phi ^{2}+\frac{1}{3}g\phi ^{3}
\end{equation}
into (2.61) immediately reproduces the Lagrangian of (2.59). The
superpotential is the only free function in the supersymmetry
Lagrangian and determines both the potential of the scalar fields,
and the masses and couplings of fermions and bosons.
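This claim is quickly verified symbolically (a sympy sketch): the first
derivative of (2.62) reproduces $-F^{\ast }$ of (2.58), whose modulus
squared is the scalar potential in (2.59), while the second derivative
reproduces the fermion mass and Yukawa couplings there.

```python
import sympy as sp

phi, m, g = sp.symbols('phi m g')

W = m * phi**2 / 2 + g * phi**3 / 3          # the superpotential of eq. (2.62)

# W' = m phi + g phi^2: minus the auxiliary field F* of eq. (2.58)
assert sp.expand(sp.diff(W, phi)) == sp.expand(m * phi + g * phi**2)

# W'' = m + 2 g phi: the fermion mass and Yukawa couplings of eq. (2.59)
assert sp.expand(sp.diff(W, phi, 2)) == sp.expand(m + 2 * g * phi)
```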
In general there may be several chiral multiplets to consider. For
example, if $\psi ^{i}$ belongs to a representation of an
$SU(N)$ symmetry group, we will have the supermultiplets
\begin{equation}
(\phi ^{i},\psi _{L}^{i})
\end{equation}
where in the fundamental representation $i=1,2,...,N.$ From (2.61)
we readily obtain a Lagrangian that is invariant under the
additional symmetry and incorporates the new supermultiplets. It
is
\begin{equation}
\mathfrak{L}_{chiral}=\sum_{i}\left\vert \partial _{\mu }\phi
^{i}\right\vert ^{2}+i\sum_{i}\overline{\psi _{L}^{i}}\eth \psi
_{L}^{i}-\sum_{i}\left\vert \frac{\partial W}{\partial \phi
^{i}}\right\vert ^{2}-\frac{1}{2}\left( \sum_{i,j}\frac{\partial
^{2}W}{\partial \phi ^{i}\partial \phi ^{j}}\psi _{L}^{iT}C\psi
_{L}^{j}+herm.conj.\right) ,
\end{equation}
and the most general form of the superpotential $W$ is
\begin{equation}
W=\lambda _{i}\phi ^{i}+\frac{1}{2}m_{ij}\phi ^{i}\phi ^{j}+\frac{1}{3}%
g_{ijk}\phi ^{i}\phi ^{j}\phi ^{k},
\end{equation}
where the coefficients $m$ and $g$ are completely
symmetric under interchange of indices. The relevance of the term
that is linear in the fields will be discussed below. Since $W$
must be invariant under $SU(N)$ symmetry transformations this term
can only occur if a gauge-singlet field exists.
\section{Supersymmetric Gauge Theory}
A combination of supersymmetry with gauge theory is clearly
necessary if these ideas are to make any contact with the real
world. In addition to the chiral multiplets of (2.63), we must
include the \textquotedblleft gauge\textquotedblright\
supermultiplets
\begin{equation}
\left( A_{\mu }^{a},\chi ^{a}\right) ,\text{ \ \ \ \ \ \ \ }%
a=1,2,3,...,N^{2}-1,
\end{equation}
where $A_{\mu }^{a}$ are the spin 1 gauge bosons of the gauge
group $G$ (taken to be $SU(N)$) and $\chi ^{a}$ are their Majorana
fermion superpartners (the so-called \textquotedblleft
gauginos\textquotedblright ). These boson-fermion pairs, which in
the absence of symmetry breaking are assumed to be massless,
belong to the adjoint representation of the gauge group. Our task
is to find a supersymmetric and gauge-invariant Lagrangian
containing all these chiral and gauge supermultiplets.
The gauge multiplets are described by the Lagrangian
\begin{equation}
\mathfrak{L}_{G}=\frac{-1}{4}F_{\mu \nu }^{a}F_{a}^{\mu \nu }+\frac{1}{2}i%
\overline{\chi ^{a}}(D\chi )_{a}+\frac{1}{2}(D^{a})^{2},
\end{equation}
where the gauge field-strength tensor is
\begin{equation}
F_{a}^{\mu \nu }=\partial ^{\mu }A_{a}^{\nu }-\partial ^{\nu
}A_{a}^{\mu }-g_{G}f_{abc}A_{b}^{\mu }A_{c}^{\nu },
\end{equation}
$D^{\mu }$ is the covariant derivative satisfying
\begin{equation}
(D^{\mu }\chi )_{a}=\partial ^{\mu }\chi
_{a}-g_{G}f_{abc}A_{b}^{\mu }\chi _{c}
\end{equation}
and $D^{a}$ are auxiliary scalar fields (similar to $F_{i}$ of
the chiral multiplet).
Actually, for this pure gauge Lagrangian the equation of motion
$\partial \mathfrak{L}_{G}/\partial D^{a}=0$ implies $D^{a}=0;$
however, it will become nonzero when the chiral fields are coupled
in. The notation will be familiar: $g_{G}$ and $f_{abc}$ are the
coupling and the structure constants of the gauge group, and in
(2.69) the matrices $T^{b}$ representing the generators in the
adjoint representation have been replaced by $(T^{b})_{ac}=if_{abc}.$
It is straightforward to show that the Lagrangian $\mathfrak{L}%
_{G}$ is invariant, and that the algebra closes, under the
supersymmetry transformation
\begin{eqnarray}
\delta A_{a}^{\mu } &=&-\overline{\varepsilon }\gamma ^{\mu
}\gamma ^{5}\chi
_{a}, \notag \\
\delta \chi ^{a} &=&-\frac{1}{2}\sigma ^{\mu \nu }F_{\mu \nu
}^{a}\gamma
^{5}\varepsilon +D^{a}\varepsilon , \\
\delta D^{a} &=&-i\overline{\varepsilon }(D\chi )^{a}, \notag
\end{eqnarray}
where $\varepsilon $ is a constant infinitesimal Majorana spinor.
This transformation is analogous to (2.35) for the chiral
multiplet.
To include the chiral fields ($\phi ^{i},\psi _{L}^{i}),$ we add $\mathfrak{L%
}_{chiral}$ of (2.64), substituting the covariant derivative
$D_{\mu }$ for $\partial _{\mu }$ in the kinetic energy terms,
viz.,
\begin{equation}
\partial _{\mu }\rightarrow D_{\mu }=\partial _{\mu }+ig_{G}T^{a}A_{\mu
}^{a},
\end{equation}
where $T^{a}$ are the matrices representing the generators of the
gauge group in the representation to which $(\phi _{i},\psi _{i})$
belong. To ensure the supersymmetry of the combined
\textquotedblleft chiral + gauge\textquotedblright\ Lagrangian, \
we must include two further terms, and write
\begin{equation}
\mathfrak{L=L}_{chiral}+\mathfrak{L}_{G}-g_{G}\phi _{i}^{\ast
}(T^{a})_{ij}\phi _{j}D^{a}+[\sqrt{2}g_{G}\phi _{i}^{\ast
}\overline{\chi ^{a}}(T^{a})_{ij}P_{L}\psi _{j}+herm.conj.]
\end{equation}
where $P_{L}\equiv \frac{1}{2}(1-\gamma ^{5}),$ and replace
$\partial _{\mu }$ in (2.45) by $D_{\mu }.$ Using $\partial
\mathfrak{L}_{G}/\partial D^{a}=0$ to eliminate the auxiliary
field gives
\begin{equation}
D^{a}=g_{G}\phi _{i}^{\ast }(T^{a})_{ij}\phi _{j}.
\end{equation}
The terms in the Lagrangian that contribute to the potential for
the scalar fields are evident by inspection of (2.61) and (2.67).
They are
\begin{eqnarray}
V(\phi ,\phi ^{\ast }) &=&\left\vert F_{i}\right\vert ^{2}+\frac{1}{2}%
D_{a}^{2} \notag \\
&=&\sum_{i}\left\vert \frac{\partial W}{\partial \phi _{i}}\right\vert ^{2}+%
\frac{1}{2}\sum_{a}[g_{G}\sum_{i,j}\phi _{i}^{\ast
}(T^{a})_{ij}\phi _{j}]^{2},
\end{eqnarray}
which are known as $F$ \ and $D$ terms, respectively. This
potential will play a central role in the spontaneous breaking of
supersymmetry and the gauge symmetry.
Model building begins with the supersymmetry Lagrangian (2.72).
Apart from the choice of gauge group and the representations
formed by the chiral
multiplets, the only freedom lies in the choice of the superpotential $%
W(\phi _{i})$ which must, of course, be a singlet of the gauge
group.
\section{Spontaneous Breaking of Supersymmetry}
The particles observed in nature show no sign whatsoever of
degeneracy between fermions and bosons. Even the photon and
neutrino, which appear to be degenerate in mass, cannot be
supersymmetric partners. Hence supersymmetry, if it is to be
relevant in nature, must be broken.
The breaking could be either explicit or spontaneous. Explicit
breaking would be quite \textsl{ad hoc. }The supersymmetry
generators would no longer commute with the Hamiltonian,
\begin{equation}
\left[ Q_{\alpha },H\right] \neq 0
\end{equation}
and so the violation would have to be small enough to preserve the
good features of supersymmetry and yet large enough to push the
supersymmetric partners out of reach of current experiments.
However, we would inevitably lose the nice nonrenormalization
theorems and, even worse, any attempt to embrace gravity via local
supersymmetry would be prohibited. So, instead, we prefer to
consider the spontaneous breaking of supersymmetry, not least
because this has proved so successful previously for breaking
gauge symmetries. Hence, we assume that the Lagrangian is
supersymmetric but that the vacuum state is not, that is
\begin{equation}
\left[ Q_{\alpha },H\right] =0\text{, \ \ \ \ but \ \ \ \ \ \ \ \
}Q_{\alpha }\left\vert 0\right\rangle \neq 0
\end{equation}
A new feature arises here, however. The Higgs mechanism of
spontaneous symmetry breaking is not available in supersymmetry
because, if we were to introduce a spin 0 field with negative
mass-squared, its fermionic superpartner would have an imaginary
mass. Also, using the anticommutator (2.27),
\begin{equation}
\left\{ Q_{\alpha },Q_{\delta }^{+}\right\} \gamma _{\delta \beta
}^{0}=2\gamma _{\alpha \beta }^{\mu }P_{\mu },
\end{equation}
we can directly establish a general and important theorem. If we
multiply (2.77) by $\gamma _{\beta \alpha }^{0}$ and sum over
$\beta $ and $\alpha $, we obtain
\begin{equation}
\sum_{\alpha }\{Q_{\alpha },Q_{\alpha }^{+}\}=8P_{0}=8H
\end{equation}
and hence
\begin{equation}
8\left\langle 0\right\vert H\left\vert 0\right\rangle
=\sum_{\alpha }\left\vert Q_{\alpha }\left\vert 0\right\rangle
\right\vert ^{2}+\sum_{\alpha }\left\vert Q_{\alpha }^{+}\left\vert
0\right\rangle \right\vert ^{2}
\end{equation}
It follows immediately that
1- The vacuum energy must be greater than or equal to zero;
2- If the vacuum is supersymmetric, that is, if $Q_{\alpha
}\left\vert 0\right\rangle =Q_{\alpha }^{+}\left\vert
0\right\rangle =0$ for all $\alpha ,$ the vacuum energy is zero;
and
3- Conversely, if supersymmetry is spontaneously broken,
$Q_{\alpha }\left\vert 0\right\rangle \neq 0,$ then the vacuum
energy is positive.
\bigskip
These results have disappointing consequences. Conclusion (1)
gives an absolute meaning to the zero of energy, a fact that it
was hoped to use to explain why the vacuum energy of the universe
is zero or very close to zero. But now from (3) we see that broken
supersymmetry implies a positive vacuum energy.
Leaving this aside we can see from (3) that supersymmetry breaking
is rather special because it requires the ground-state energy to
be positive.
In the classical approximation, the energy of the ground state is
given by the minimum of the potential (2.74):
\begin{eqnarray}
V(\phi ,\phi ^{\ast }) &=&\left\vert F_{i}\right\vert ^{2}+\frac{1}{2}%
D_{a}^{2} \notag \\
&=&\sum_{i}\left\vert \frac{\partial W}{\partial \phi _{i}}\right\vert ^{2}+%
\frac{1}{2}\sum_{\beta ,a}\left[ g_{\beta }\phi _{i}^{\ast
}(T_{\beta }^{a})_{ij}\phi _{j}+\eta \delta _{\beta 1}\right]
^{2},
\end{eqnarray}
with
\begin{equation}
W=\lambda _{i}\phi _{i}+\frac{1}{2}m_{ij}\phi _{i}\phi _{j}+\frac{1}{3}%
g_{ijk}\phi _{i}\phi _{j}\phi _{k}
\end{equation}
The sum over $\beta $ has been included to allow for the
possibility of different gauge groups with different couplings,
and the constant term $\eta $ can only occur if $\beta $ labels a
$U(1)$ factor.
It is evidently hard to break supersymmetry. The minimum $V=0$
will occur when $\phi _{i}=0$ for all $i$ (and so supersymmetry
will be unbroken) unless one of the following conditions applies.
1- $\lambda _{i}\neq 0,$ that is, there exists a gauge singlet
field $\phi _{i},$ so $W$ can contain a linear term yet still be
gauge invariant ($F$-type breaking);
2- $\eta \neq 0,$ so the gauge group contains an abelian $U(1)$
factor ($D$-type breaking). This is a necessary but not a
sufficient requirement. This mechanism cannot occur in $GUTs$
because they are based on simple gauge groups that do not have a
$U(1)$ factor.
There is an alternative way of seeing that the spontaneous
symmetry breaking of supersymmetry can only be accomplished by
$\left\langle F\right\rangle \neq 0$ and/or $\left\langle
D\right\rangle \neq 0.$ If we look back at the general structure
of supersymmetric transformations (2.45) for the chiral multiplet
($\phi ,\psi ,F$), which takes the form
\begin{equation}
\delta \phi \sim \psi ,\text{ \ \ \ \ \ \ }\delta \psi \sim \eth \phi +F,%
\text{ \ \ \ \ \ \ }\delta F\sim \eth \psi
\end{equation}
and at (2.70) for the gauge multiplet ($A_{\mu },\chi ,D$), in
which
\begin{equation}
\delta A_{\mu }\sim \gamma _{\mu }\chi ,\text{ \ \ \ \ \ \ }\delta
\chi \sim \sigma ^{\mu \nu }F_{\mu \nu }+D,\text{ \ \ \ \ \ \
}\delta D\sim \eth \chi ,
\end{equation}
and note that the vacuum expectation values of the spinor and
tensor fields and $\partial _{\mu }\phi $ must be zero to preserve
the Lorentz invariance of the vacuum, then the only way to break
the symmetry is through the non-zero VEVs of the auxiliary fields
$F$ and $D.$
The spontaneous breaking of supersymmetry requires
\begin{equation}
Q_{\alpha }\left\vert 0\right\rangle \neq 0
\end{equation}
and $Q_{\alpha }\left\vert 0\right\rangle $ is necessarily a
fermionic state, which we denote by $\left\vert \psi
_{G}\right\rangle .$
Since the $Q_{\alpha }$ commute with $H$, the state $\left\vert
\psi _{G}\right\rangle $ must be degenerate with the vacuum. It
must therefore describe a massless fermion (with zero momentum).
The situation is thus exactly analogous to the spontaneous
breaking of ordinary global symmetry in which massless Goldstone
bosons are created out of the vacuum. Here the spontaneous
breaking of global supersymmetry implies the existence of a
massless fermion, which is called \textquotedblleft
Goldstino.\textquotedblright
We next consider examples of these two types of symmetry breaking
$F$-type and $D$-type, introduced in (1) and (2) above.
\section{F-type Breaking (O'Raifeartaigh Model)}
A simple example of supersymmetry breaking arising from the
presence of a linear term in the superpotential $W$ is provided by
\begin{equation}
W=-\lambda A+mBC+gAB^{2},
\end{equation}
which contains three complex scalar fields $A,B,$ and $C.$ In this
example the scalar potential (2.80) becomes
\begin{eqnarray}
V &=&\left\vert \frac{\partial W}{\partial A}\right\vert
^{2}+\left\vert
\frac{\partial W}{\partial B}\right\vert ^{2}+\left\vert \frac{\partial W}{%
\partial C}\right\vert ^{2} \notag \\
&=&\left\vert -\lambda +gB^{2}\right\vert ^{2}+\left\vert
mC+2gAB\right\vert ^{2}+\left\vert mB\right\vert ^{2}
\end{eqnarray}
and we see that $V=0$ is excluded because the last term is only
zero if $B=0,$ but then the first term is positive-definite. We
conclude that $V>0$ and that supersymmetry is broken. Provided
that $m^{2}>2\lambda g$ the potential $V$ has a minimum when
$B=C=0,$ independently of the value of $A.$ For simplicity, we set
$A=0$ at the minimum.
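This minimization can be cross-checked numerically. The sketch below (an illustrative check with sample values $m=1$, $g=0.2$, $\lambda =0.5$, so that $m^{2}>2\lambda g$; the fields are taken real for simplicity) scans the potential and confirms that the minimum sits at $B=C=0$ with $V_{\min }=\lambda ^{2}>0$.

```python
# Scan V = |-lam + g*B^2|^2 + |m*C + 2*g*A*B|^2 + |m*B|^2 over real
# B and C (with A = 0), and locate the minimum of the potential.
m, g, lam = 1.0, 0.2, 0.5            # sample values with m^2 > 2*lam*g

def V(A, B, C):
    return (abs(-lam + g * B**2)**2
            + abs(m * C + 2 * g * A * B)**2
            + abs(m * B)**2)

grid = [i * 0.05 - 2.0 for i in range(81)]       # B, C in [-2, 2]
Vmin, Bmin, Cmin = min((V(0.0, B, C), B, C) for B in grid for C in grid)
print(Vmin, Bmin, Cmin)
```

The minimum value $\lambda ^{2}$ is strictly positive, so supersymmetry is broken, in agreement with the argument above.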
As usual, the scalar masses are determined by evaluating
\begin{equation}
V_{AB}\equiv \frac{\partial ^{2}V}{\partial A\partial B},\text{ \ \ \ \ \ }%
etc.,
\end{equation}
at the minimum. The only non-zero elements are
\begin{equation}
\left\langle V_{BB}\right\rangle B^{2}+2\left\langle
V_{BB^{\ast }}\right\rangle BB^{\ast }+\left\langle V_{B^{\ast
}B^{\ast }}\right\rangle B^{\ast 2}=(m^{2}-2g\lambda
)B_{1}^{2}+(m^{2}+2g\lambda )B_{2}^{2},
\end{equation}
where $B=(B_{1}+iB_{2})/\sqrt{2},$ and so the real scalar fields
$B_{1}$ and $B_{2}$ have $(mass)^{2}=m^{2}\mp 2g\lambda ,$
respectively.
The fermion masses are obtained by evaluating $\partial
^{2}W/\partial A\partial B,$ etc., at the minimum (see (2.64)).
From (2.85) we find that the only non-zero term is
\begin{equation}
\frac{\partial ^{2}W}{\partial B\partial C}=m
\end{equation}
and so the fermion mass matrix takes the form
\begin{equation}
M_{F}=%
\begin{pmatrix}
0 & 0 & 0 \\
0 & 0 & m \\
0 & m & 0%
\end{pmatrix}%
\end{equation}
in the basis of Majorana spinors $\psi _{A},\psi _{B},\psi _{C}.$
The massless Goldstino state $\psi _{A}$ is evident, and the
off-diagonal structure signals that the two Majorana spinors $\psi
_{B},$ $\psi _{C}$ will combine to give a single Dirac fermion
of mass $m$. Despite the supersymmetry breaking, there is still
an equality between the sum of the (mass)$^{2}$ of the bosons
and that of the fermions. Explicitly, for each degree of
freedom we have the masses given in table (2.1)
\begin{equation*}
\underset{\text{\textbf{Table (2.1): Masses of bosons and their
fermionic
partners}}}{%
\begin{tabular}{|c|c|c|c|c|}
\hline \multicolumn{3}{|c|}{$Bosons$} &
\multicolumn{2}{|c|}{$Fermions$} \\ \hline $A$ & $B$ & $C$ & $\psi
_{A}$ & $\psi _{B},$ $\ \psi _{C}$ \\ \hline
$0,0$ & $m^{2}\pm 2\lambda g$ & $m^{2},m^{2}$ & $0,0$ & $%
m^{2},m^{2},m^{2},m^{2}$ \\ \hline
\end{tabular}%
}
\end{equation*}
\bigskip
Only $B$ suffers supersymmetry breaking. The reason is that it is
the only
field that couples to the Goldstino; its coupling $gB\overline{\psi }%
_{B}\psi _{A}$ appears when (2.85) is inserted into (2.64). The
value of the potential at the minimum can be written
\begin{equation}
\left\langle V\right\rangle =\lambda ^{2}\equiv M_{S}^{4}
\end{equation}
where the mass splittings within the supermultiplets of table (2.1)
are therefore
\begin{equation}
\Delta m^{2}\approx gM_{S}^{2},
\end{equation}
where $g$ is the coupling to the Goldstino.
This simple model illustrates several more general results. The
mass relation is a particular example of the \textquotedblleft
super-trace relation\textquotedblright ,
\begin{equation}
STr(M^{2})\equiv \sum_{J}(2J+1)(-1)^{2J}Tr(M_{J}^{2})=0,
\end{equation}
which holds whether supersymmetry is spontaneously broken or not. Here $%
M_{J}$ is the mass matrix for the fields of spin-$J$, and the sum
is over all the physical particles of the theory. Relation (2.93)
holds in lowest-order perturbation theory. We say that it is a
\textquotedblleft tree-level\textquotedblright\ result because it
neglects corrections due to the diagrams containing loops. This
super-trace mass relation is important because it ensures that the
scalars are not subject to quadratically divergent
renormalization.
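The relation can be verified directly for the per-degree-of-freedom masses of table (2.1). The sketch below (illustrative sample values for $m$, $g$, $\lambda $) sums the bosonic $(mass)^{2}$ entries and subtracts the fermionic ones, which the table already lists per degree of freedom:

```python
# Check STr(M^2) = 0 for the O'Raifeartaigh spectrum of table (2.1),
# counting per real degree of freedom (sample parameter values).
m, g, lam = 1.0, 0.2, 0.5

# Bosons: A (2 massless dof), B1, B2 (m^2 -/+ 2*g*lam), C (2 dof of m^2)
boson_mass_sq = [0.0, 0.0, m**2 - 2*g*lam, m**2 + 2*g*lam, m**2, m**2]

# Fermions: psi_A massless (2 dof); psi_B and psi_C combine into a
# Dirac fermion of mass m (4 dof)
fermion_mass_sq = [0.0, 0.0, m**2, m**2, m**2, m**2]

# The (2J+1)(-1)^{2J} weights are absorbed by counting degrees of freedom
supertrace = sum(boson_mass_sq) - sum(fermion_mass_sq)
print(supertrace)
```

The splitting terms $\pm 2g\lambda $ cancel in the sum, so the result vanishes for any parameter choice.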
We may readily verify that (2.93) holds for an arbitrary multiplet
structure. If there are several chiral multiplets $(\phi _{i},\psi
_{i}),$ then it is convenient to arrange the scalar fields and
their complex conjugates as a column vector so that the boson mass
terms have the matrix structure
\begin{equation}
\begin{pmatrix}
\phi ^{\ast } & \phi%
\end{pmatrix}%
\begin{pmatrix}
X & Y \\
Y^{+} & X%
\end{pmatrix}%
\begin{pmatrix}
\phi \\
\phi ^{\ast }%
\end{pmatrix}%
.
\end{equation}
The block diagonal parts of the boson $(mass)^{2}$ matrix,
$M_{B}^{2},$ have elements
\begin{align}
X_{ij}& =\frac{\partial ^{2}V}{\partial \phi _{i}\partial \phi _{j}^{\ast }}%
=\sum_{k}\left( \frac{\partial ^{2}W}{\partial \phi _{i}\partial \phi _{k}}%
\right) \left( \frac{\partial ^{2}W^{\ast }}{\partial \phi _{j}^{\ast
}\partial \phi _{k}^{\ast }}\right) \notag \\
& =\sum_{k}\left( M_{F}\right) _{ik}\left( M_{F}^{\ast }\right)
_{kj}=(M_{F}M_{F}^{\ast })_{ij}
\end{align}
where $M_{F}$ is the fermion mass matrix and so it follows that
\begin{equation}
Tr(M_{B}^{2})=2Tr(M_{F}^{2})
\end{equation}
at tree level.
We can also show that the fermion mass matrix has a zero
eigenvalue and hence identify the Goldstino. At the minimum of the
potential,
\begin{equation}
0=\frac{\partial V}{\partial \phi _{i}}=\frac{\partial }{\partial \phi _{i}}%
\left( \sum_{j}\left\vert \frac{\partial W}{\partial \phi
_{j}}\right\vert ^{2}\right) =\sum_{j}\left( \frac{\partial
^{2}W}{\partial \phi _{i}\partial \phi _{j}}\right) \left(
\frac{\partial W}{\partial \phi _{j}}\right) ^{\ast
}=\sum_{j}\left( M_{F}\right) _{ij}\left\langle F_{j}\right\rangle
^{\ast }.
\end{equation}
Thus, the mass matrix $M_{F}$ annihilates the fermion state
\begin{equation}
\psi _{G}=\sum_{j}\left\langle F_{j}\right\rangle ^{\ast }\psi
_{j}
\end{equation}
which is thus identified as the massless Goldstino. In our
example, $\psi _{G}=\psi _{A}$ since $\left\langle
F_{B}\right\rangle =\left\langle F_{C}\right\rangle =0.$
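This zero mode is easy to exhibit for the O'Raifeartaigh matrix (2.90), where only $\left\langle F_{A}\right\rangle $ is non-vanishing. A minimal numeric sketch (sample values; the overall sign of the VEV is immaterial):

```python
# Check that the fermion mass matrix of (2.90) annihilates the
# Goldstino direction <F_j>* psi_j; only <F_A> is non-zero here.
m, lam = 1.0, 0.5

M_F = [[0.0, 0.0, 0.0],
       [0.0, 0.0, m],
       [0.0, m, 0.0]]

F_vev = [-lam, 0.0, 0.0]            # (<F_A>*, <F_B>*, <F_C>*)

# M_F acting on the Goldstino direction must give the zero vector
image = [sum(M_F[i][j] * F_vev[j] for j in range(3)) for i in range(3)]
print(image)
```

The first row and column of $M_{F}$ vanish, so the $\psi _{A}$ direction is annihilated, as (2.97) requires.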
However, the equality (2.96), which is so desirable to ensure the
boson-fermion loop cancellations, is not supported experimentally.
The difficulty is that in these simple models, the relation
applies to each supermultiplet separately. Hence, for the
electron, for example, we require
\begin{equation}
2m_{e}^{2}=m_{A}^{2}+m_{B}^{2},
\end{equation}
which implies that one of the two scalar electrons ($A,B$) must
have a mass less than or equal to that of the electron. Such a
particle would have been detected long time ago if it existed.
\section{D-type Breaking (The Fayet-Iliopoulos Model)}
As a simple example of supersymmetry breaking caused by the presence of a $%
U(1)$ factor in the gauge group, we take a supersymmetric version
of $QED$ with two chiral multiplets $(\phi _{+},\psi _{+})$ and
$(\phi _{-},\psi _{-}),$ where the subscripts give the sign of the
charge. The $U(1)$ gauge-invariant superpotential is
\begin{equation}
W=m\phi _{+}\phi _{-}
\end{equation}
and so the scalar potential (2.80) becomes
\begin{eqnarray}
V &=&m^{2}\left\vert \phi _{+}\right\vert ^{2}+m^{2}\left\vert
\phi _{-}\right\vert ^{2}+\frac{1}{2}\left[ e\left( \left\vert
\phi
_{+}\right\vert ^{2}-\left\vert \phi _{-}\right\vert ^{2}\right) +\eta %
\right] ^{2} \\
&=&\frac{1}{2}e^{2}\left( \left\vert \phi _{+}\right\vert
^{2}-\left\vert \phi _{-}\right\vert ^{2}\right) ^{2}+\left(
m^{2}+e\eta \right) \left\vert \phi _{+}\right\vert ^{2}+\left(
m^{2}-e\eta \right) \left\vert \phi _{-}\right\vert
^{2}+\frac{1}{2}\eta ^{2} \notag
\end{eqnarray}
Various possible forms for $V$ are shown in Figure (2.1).
Provided $m^{2}>e\eta $ (where $e\eta >0$), the minimum occurs at
\begin{equation}
\phi _{+}=\phi _{-}=0,
\end{equation}
so $U(1)$ gauge invariance is not spontaneously broken, but
supersymmetry is broken since $V\neq 0.$ The boson masses are
split, $m_{\pm }^{2}=m^{2}\pm e\eta ,$ whereas the fermion
masses are unaffected by the breakdown of supersymmetry. Like
(2.90) the (off-diagonal) form of the fermion mass matrix in the
$\psi _{+},$ $\psi _{-}$ Majorana basis implies that these two
states combine together to give a Dirac fermion of mass $m$. The
fermion-boson mass splitting signals the breakdown of
supersymmetry but the $(mass)^{2}$ equality still holds, since
\begin{equation}
m_{+}^{2}+m_{-}^{2}=2m^{2}
\end{equation}
For $m^{2}>e\eta ,$ the $U(1)$ symmetry is unbroken and the gauge multiplet $%
\left( A_{\mu },\chi \right) $ remains massless. The fermion $\chi
$ is the \textquotedblleft Goldstino\textquotedblright\ arising
from the spontaneous supersymmetry breaking.
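These statements for the regime $m^{2}>e\eta $ can be checked numerically. The sketch below (sample values, real fields for simplicity) minimizes the potential (2.101), confirming the minimum at $\phi _{+}=\phi _{-}=0$, and verifies the $(mass)^{2}$ sum rule (2.103).

```python
# Minimize the D-type potential over real phi_+ and phi_- in the
# regime m^2 > e*eta, and check the boson (mass)^2 sum rule.
m2, e, eta = 2.0, 1.0, 1.0           # sample values with m2 > e*eta

def V(p, q):                         # p = phi_+, q = phi_- (real)
    return m2 * (p**2 + q**2) + 0.5 * (e * (p**2 - q**2) + eta)**2

grid = [i * 0.01 - 1.5 for i in range(301)]
Vmin, pmin, qmin = min((V(p, q), p, q) for p in grid for q in grid)

m_plus_sq = m2 + e * eta             # split boson masses m^2 +/- e*eta
m_minus_sq = m2 - e * eta
print(pmin, qmin, m_plus_sq + m_minus_sq)
```

The minimum sits at the origin with $V_{\min }=\frac{1}{2}\eta ^{2}\neq 0$ (broken supersymmetry, unbroken gauge symmetry), and $m_{+}^{2}+m_{-}^{2}=2m^{2}$ as in (2.103).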
The case $m^{2}<e\eta $ is more interesting. The minimum of the
potential now occurs at
\begin{equation}
\phi _{+}=0\text{ \ \ \ \ \ \ \ \ \ \ \ \ \ }\phi _{-}=\upsilon
\end{equation}
where $e^{2}\upsilon ^{2}=\left( e\eta -m^{2}\right) .$ Now, both
the $U(1)$ gauge symmetry and supersymmetry are spontaneously
broken; see
Figure (2.1.c). We find that the complex field $\phi _{+}$ has
$\left( mass\right) ^{2}=2m^{2},$ while one component of $\phi
_{-}$ is \textquotedblleft eaten\textquotedblright\ \ by the usual
Higgs mechanism to give $\left( mass\right) ^{2}=2e^{2}\upsilon
^{2}$ \ to the vector gauge field $A_{\mu },$ and the remaining
component also acquires $\left( mass\right) ^{2}=2e^{2}\upsilon
^{2}$. A linear combination of the $\psi _{+}$ and$\ \chi $
Majorana fields forms the massless \textquotedblleft
Goldstino\textquotedblright , whereas the two remaining combinations of $%
\psi _{+}$,$\psi _{-},$ and $\chi $ both have $\left( mass\right)
^{2}=m^{2}+2e^{2}\upsilon ^{2}.$ Despite the symmetry breaking,
the super-trace mass relation (2.93) remains true, that is
\begin{equation}
2\left( 2m^{2}\right) +2e^{2}\upsilon ^{2}-\left( 2+2\right)
\left( m^{2}+2e^{2}\upsilon ^{2}\right) +3\left( 2e^{2}\upsilon
^{2}\right) =0
\end{equation}
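The cancellation above holds identically in $m^{2}$ and $e^{2}\upsilon ^{2}$; a quick numeric check with arbitrary sample values:

```python
# Check the super-trace cancellation term by term:
# 2(2m^2) + 2e^2v^2 - (2+2)(m^2 + 2e^2v^2) + 3(2e^2v^2) = 0
m2, e2v2 = 0.7, 1.3                  # arbitrary sample values

supertrace = (2 * (2 * m2)                   # complex phi_+: 2 dof
              + 2 * e2v2                     # remaining real Higgs dof
              - (2 + 2) * (m2 + 2 * e2v2)    # two massive Majorana combos
              + 3 * (2 * e2v2))              # massive gauge boson: 3 dof
print(supertrace)
```

Both the $m^{2}$ and $e^{2}\upsilon ^{2}$ coefficients cancel separately, so the result is zero for any values.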
It is straightforward to show that, as in sec. (2.9), the
super-trace relation $STr\left( M^{2}\right) =0$ holds in general,
with just one
exception, and that the Goldstino can be identified as the combination%
\begin{equation}
\psi _{G}=\left\langle F_{j}\right\rangle \psi _{j}-\frac{1}{\sqrt{2}}%
\left\langle D^{a}\right\rangle \chi _{a}.
\end{equation}
Since the super-trace relation leads to problems, as we found in
(2.99), it is desirable to explore the exception. In the presence
of a $U(1)$ factor of the gauge group, we find that (2.93) becomes
\begin{equation}
STr\left( M^{2}\right) =2\left\langle D\right\rangle Tr\text{ }Q,
\end{equation}
where $Q$ is the $U(1)$ charge matrix of the chiral multiplet and
$D$ is the auxiliary field. Perhaps this extra contribution will
permit the superpartners to be sufficiently massive to escape
detection. Unfortunately, the $U(1)$ of the standard model is not
suitable, as the weak hypercharge $Y$ must satisfy $Tr$ $Y=0.$
This extra contribution is also absent in $GUTs,$
which have no $U\left( 1\right) $ factor. To have an additional $U(1)$ with $%
Tr$ $Q\neq 0$ would create new problems with triangle anomalies,
which can only be avoided by introducing new chiral multiplets.
Thus far, no satisfactory model of $D-type$ breaking has been
found.
\section{The Supersymmetric Standard Model}
The standard model $\left( SM\right) $ has 28 bosonic degrees of
freedom (12 massless gauge bosons and 2 complex scalars) together
with 90 fermionic degrees of freedom (3 families each with 15
two-component Weyl fermions). To make the model supersymmetric we
must clearly introduce additional particles. In fact, since none
of the observed particles pair off,
we have to double the number. In section 2.3 we saw that the gauge
bosons are partnered by spin-$\frac{1}{2}$ gauginos, and these
cannot be identified with any of the quarks and leptons. So the
latter have to be partnered by new spin 0 squarks and sleptons.
We also have to complete the Higgs supermultiplet. Now the $Y=-1$
Higgs doublet has the same quantum numbers as the $\left( \nu
,e^{-}\right) _{L}$ doublet, so one might try to identify the
Higgs with a spin 0 slepton. Unfortunately even this is not
possible, because any attempt to partner a lepton with a Higgs, by
giving the latter a non-zero lepton number $L,$ leads to
$L$-violating processes and large $\bigtriangleup L=2$ Majorana
mass terms. Even worse, in the standard model the Higgs $\left( \phi \right)
$ generates masses for the down-type quarks and the charged
leptons, while its charge conjugate $\left( \phi _{c}=i\tau
_{2}\phi ^{\ast }\right) $ gives masses to the up-type quarks.
Now the superpotential $W$ is a function only of $\phi $ and not
$\phi ^{\ast },$ and so in supersymmetry we need to introduce a
second unrelated Higgs doublet. There is an alternative way to see
this. Under charge conjugation, the helicity of the
spin-$\frac{1}{2}$ partner of the Higgs (\textquotedblleft the
Higgsino\textquotedblright ) is reversed, and so it proves
impossible to use a single Higgs to give masses
to both up-type and down-type quarks. The second (complex)
doublet is also needed to cancel the anomalies that
would arise if there were only one Higgsino. As in the standard
model, three of the Higgs fields are absorbed to make the $W^{\pm
}$ and $Z$ bosons massive, and we are therefore left with two
charged and three neutral massive Higgs particles.
The particle content of the supersymmetric standard model is shown in table $%
\left( 2.2\right) .$
\begin{eqnarray*}
&&%
\begin{tabular}{|c|c|c|c|}
\hline \multicolumn{2}{|c|}{$Chiral$\textbf{\ \ }$Multiplets$} &
\multicolumn{2}{|c|}{$Gauge$\textbf{\ }$\ Multiplets$} \\ \hline
\textbf{Spin-}$\frac{1}{2}$ & \textbf{Spin 0} & \textbf{Spin 1} & \textbf{%
Spin-}$\frac{1}{2}$ \\ \hline
Quark $q_{L},q_{R}$ & Squark $\widetilde{q_{L}},\widetilde{q_{R}}$ & photon $%
\gamma $ & photino $\widetilde{\gamma }$ \\ \hline Lepton
$l_{L},l_{R}$ & Slepton $\widetilde{l_{L}},\widetilde{l_{R}}$ &
$W,Z$ \ bosons & Wino $\widetilde{W},$ Zino $\widetilde{Z}$ \\
\hline Higgsino $\widetilde{\phi }$ , $\widetilde{\phi ^{\prime
}}$ & Higgs $\phi ,\phi ^{\prime }$ & Gluon $g$ & Gluino
$\widetilde{g}$ \\ \hline
\end{tabular}
\\
&&\text{Table (2.2) Particle Multiplets in the Supersymmetric
Standard Model}
\end{eqnarray*}
There is no doubt that this table is a setback for supersymmetry.
To be economical, supersymmetry ought to unite the known fermionic
\textquotedblleft matter\textquotedblright\ $quarks$ and $\
leptons$ with the vector \textquotedblleft
forces\textquotedblright\ $\gamma ,g,W,Z,$ but we have been
compelled to keep them separate and to introduce a new
superpartner for each particle. A great deal of effort has gone
into the search for these superpartners but so far none has been
found.
\chapter{Minimal Supersymmetric Standard Model}
\section{Introduction}
The minimal supersymmetric extension of the standard model (MSSM)
[11] is obtained by taking the standard model (SM) and adding
the corresponding
supersymmetric partners. In addition, the MSSM contains two hypercharge $%
Y=\pm 1$ Higgs doublets, which is the minimal structure for the
Higgs sector of an anomaly-free supersymmetric extension of the
standard model. The supersymmetric structure of the theory also
requires (at least) two Higgs doublets to generate mass for both
up-type and down-type quarks (and charged leptons). All
renormalizable supersymmetric interactions consistent with
(global) $B-L$ conservation ($B=baryon$ $number$ and $L=lepton$
$number$) are included. Finally, the most general
soft-supersymmetric-breaking terms are added.
If supersymmetry is relevant for explaining the scale of
electroweak interactions, then lower bounds on the mass parameters
exist due to the absence of supersymmetric-particle production at
current accelerators.
Additional constraints arise from limits on the contributions of
virtual supersymmetric particle exchange to a variety of SM
processes.
As a consequence of $B-L$ invariance, the MSSM possesses a discrete $%
R-parity$ invariance, where $R=(-1)^{3(B-L)-2S}$ for a particle of
spin $S.$
This formula implies that all the ordinary SM particles have even $%
R-parity,$ whereas the corresponding supersymmetric partners have odd $%
R-parity.$ The conservation of $R-parity$ in scattering and decay
processes has a crucial impact on supersymmetric phenomenology.
For example, starting from an initial state involving ordinary
($R-even$) particles, it follows that supersymmetric particles
must be produced in pairs. In general, these particles are highly
unstable and decay quickly into lighter states. However, R-parity
invariance also implies that the lightest supersymmetric particle
(LSP) is absolutely stable, and must eventually be produced at the
end of the decay chain of a heavy unstable supersymmetric particle.
In order to be consistent with cosmological constraints, the LSP
is almost certainly electrically and color neutral. Consequently,
the LSP is weakly interacting with ordinary matter, \textsl{i.e.}\ it
behaves like a stable heavy neutrino and will escape detectors
without being directly observed. Thus, the canonical signature for
($R-parity$ conserving) supersymmetric theories is missing
(transverse) energy, due to the escape of the LSP.
Some model builders attempt to relax the assumption of $R-parity$
conservation. Models of this type must break $B-L$ conservation
and are therefore constrained by experiment. Nevertheless, it is
still important to allow the possibility of $R-parity$ violation
processes in the search for supersymmetry. In such models the LSP
is unstable and supersymmetric
particles can be singly produced and destroyed in association with $B$ and $%
L $ violation. These features lead to a phenomenology of broken
$R-parity$ models that is very different from that of the MSSM.
In the MSSM, supersymmetry breaking is accomplished by including
the soft-supersymmetry breaking terms. These terms parameterize
our ignorance of the fundamental mechanism of supersymmetry
breaking. If this breaking occurs spontaneously, then (in the
absence of supergravity) a massless Goldstone fermion called
the $\mathit{goldstino}$ ($\widetilde{G}$) must exist. The
goldstino would be the LSP and could play an important role in
supersymmetric phenomenology. In models that incorporate
supergravity (SUGRA), this picture changes. If supergravity is
spontaneously broken, the goldstino is absorbed (eaten) by the
\textsl{gravitino}, the spin-$\frac{3}{2}$ partner of the
graviton. By this super-Higgs mechanism, the gravitino acquires a
mass. In many models, the gravitino mass is of the order of the
electroweak-breaking scale, while its couplings are gravitational
in strength. Such a gravitino would
play no role in supersymmetric phenomenology at colliders. The
parameters of the MSSM are conveniently described by considering
separately the supersymmetry-conserving sector and the
supersymmetry-breaking sector. Among the parameters of the
supersymmetry-conserving sector are:
1- gauge couplings: $g^{\prime },$ $g,$ and $g_{s},$ corresponding
to U(1), SU(2), and SU(3) subgroups of the SM respectively;
2- Higgs-Yukawa couplings: $\lambda _{e},$ $\lambda _{u},$ and
$\lambda _{d}$ (which are 3$\times $3 matrices in flavor space);
and
3- a supersymmetry-conserving Higgs mass parameter $\mu .$
The supersymmetric-breaking sector contains the following set of
parameters:
i- gaugino Majorana masses $M_{1},$ $M_{2}$ and $M_{3}$ associated
with the U(1), SU(2), and SU(3) subgroups of the SM;
ii- scalar mass matrices for the squarks and sleptons;
iii- Higgs-squark trilinear interaction terms (the so-called
$A-parameters$) and corresponding terms involving the sleptons;
and
iv- three scalar Higgs mass parameters; two diagonal and one
off-diagonal mass terms for the two Higgs doublets. These three
mass parameters can be
re-expressed in terms of the two Higgs vacuum expectation values (VEVs),
$\upsilon _{1}$ and $\upsilon _{2},$ and one physical Higgs mass (usually $m_{H_{3}^{0}}$).
Here, $\upsilon _{1}$ ($\upsilon _{2}$) is the vacuum
expectation value of the Higgs field which couples exclusively to
down-type (up-type) quarks and leptons. The value $\upsilon
_{1}^{2}+\upsilon _{2}^{2}$ is fixed by the $W$ mass (or
equivalently by the Fermi constant $G_{F}$),
\begin{equation}
\upsilon _{1}^{2}+\upsilon _{2}^{2}\approx (246\text{ }GeV)^{2}
\end{equation}
while the ratio $\upsilon _{2}/\upsilon _{1}$ is a free parameter
of the model in terms of the angle $\beta ;$
\begin{equation}
\tan \beta =\upsilon _{2}/\upsilon _{1}
\end{equation}
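For reference, the numerical value quoted in eq. (3.1) follows directly from the measured Fermi constant, since $\upsilon _{1}^{2}+\upsilon _{2}^{2}=(\sqrt{2}G_{F})^{-1}$:
\[
\upsilon _{1}^{2}+\upsilon _{2}^{2}=\frac{1}{\sqrt{2}\times 1.166\times
10^{-5}\text{ }GeV^{-2}}\approx (246\text{ }GeV)^{2}.
\]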
The supersymmetric constraints imply that the MSSM Higgs sector is
automatically $CP-conserving$ (at tree level). Thus, $\tan \beta $
is a real parameter (conventionally taken to be positive), and the
physical neutral Higgs scalars are $CP-eigenstates.$ Nevertheless,
the MSSM does contain a number of possible new sources of
$CP-violation.$ For example, gaugino-mass parameters, the
$A-parameters,$ and $\mu $ may be complex. Certain combinations of
these complex phases must be smaller than of order
$10^{-2}-10^{-3}$ (for a supersymmetry-breaking scale of $100$
$GeV$) to avoid generating electric dipole moments for the
neutron, electron and atoms in conflict with observed data.
However, these complex phases have little impact on the direct
searches for supersymmetric particles, and are usually ignored in
experimental analyses.
\section{Extended Higgs Sectors}
The Higgs mechanism has solved the problem of having massive gauge
bosons in a locally gauge-invariant theory without spoiling
renormalizability and unitarity. This is achieved by means of a
spontaneous breaking of the gauge symmetry, in which the ground
state (vacuum) loses part of the symmetry whereas the Lagrangian
itself remains fully symmetric.
In the SU(2)$\times $U(1) standard Glashow-Weinberg-Salam model ($GWS$ or $%
SM $) the spontaneous symmetry breaking is induced by the presence
of a doublet (under SU(2)) of complex scalar fields [12]
\begin{equation}
\phi =\binom{\phi ^{+}}{\phi ^{0}}.
\end{equation}
The new fields have Yukawa type interactions with matter fermion
fields and
also have self-interactions of the form%
\begin{equation}
V(\phi )\equiv -\mu ^{2}|\phi |^{2}+\lambda |\phi |^{4},
\end{equation}
where $\mu ^{2}$ and $\lambda $ are positive constants. After the
Higgs mechanism, the theory contains (apart from the fermion
fields) 3 massive gauge bosons $(W^{+},$ $W^{-},$ $Z),$ 1 massless
photon and 1 physical scalar $(H),$ the \textquotedblleft\ Higgs
boson\textquotedblright . The other three real scalars of the
doublet (the \textquotedblleft\ Goldstone bosons\textquotedblright )
have become the longitudinal components of the three massive gauge
bosons.
Although the minimal Higgs sector of the SM is sufficient to
explain the generation of the fermion and gauge boson masses, more
complicated structures in the scalar sector cannot be excluded
and are even unavoidable in many unifying extensions of the SM.
These extended Higgs sectors have potentially richer phenomenology
but are also subject to phenomenological constraints, for
example, the electroweak $\rho -$parameter, and the presence of
tree level couplings of the type $W^{-}Z^{0}H^{-}.$
\subsection{The $\protect\rho -$Parameter Constraint}
The most important phenomenological constraint on the Higgs sector
is the value of the electroweak $\rho -$parameter which,
experimentally, is very close to 1
\begin{equation}
\rho \equiv \frac{m_{W}^{2}}{m_{Z}^{2}\cos ^{2}\theta _{W}}\approx
1
\end{equation}
With an arbitrary Higgs sector consisting of several scalar
multiplets $\phi
_{i}$ of weak isospin $T_{(i)}$ and weak hypercharge $Y_{(i)},$ the $\rho -$%
parameter is given by
\begin{equation}
\rho =\frac{\dsum\limits_{i}\left[ T_{(i)}\left( T_{(i)}+1\right)
-\left( Y_{(i)}/2\right) ^{2}\right] \upsilon
_{i}^{2}c_{i}}{2\dsum\limits_{i}\left( \left( Y_{(i)}/2\right)
^{2}\right) \upsilon _{i}^{2}c_{i}}
\end{equation}
where $\upsilon _{i}$ is the VEV of the multiplet $\phi _{i}$, and
$c_{i}=1$ for $Y_{(i)}\neq 0$ ($c_{i}=\frac{1}{2}$ for $Y_{(i)}=0$).
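As an illustrative check, a single Higgs doublet ($T=\frac{1}{2},$ $Y=\pm 1,$ $c=1$) gives
\[
\rho =\frac{\left[ \frac{1}{2}\left( \frac{1}{2}+1\right) -\frac{1}{4}\right]
\upsilon ^{2}}{2\left( \frac{1}{4}\right) \upsilon ^{2}}=\frac{\frac{1}{2}%
\upsilon ^{2}}{\frac{1}{2}\upsilon ^{2}}=1,
\]
independently of the value of $\upsilon $, which is the sense in which doublets are \textquotedblleft natural\textquotedblright .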
It is easy to check that below $T=10$ only the representations
$(T,Y)=(0,0),(\frac{1}{2},\pm 1)$ and $(3,\pm 4)$ lead
\textquotedblleft naturally\textquotedblright\ ($i.e.$ independently
of the values of $\upsilon _{i}$) to $\rho =1.$ Even if we allow
$\rho $ to deviate from 1 by 1\%, no new representations appear.
Leaving aside the case of $(T,Y)=(3,\pm 4),$ which involves scalars
of electric charge $Q=5,$ only doublets and singlets are acceptable.
Of course other representations are allowed if we only require
$\rho \approx 1$ for \textsl{some} values of the $\upsilon _{i}$.
The simplest cases are models with a doublet and a (real or
complex) triplet, for which $\rho $ differs slightly from 1 unless
$\upsilon _{i}=0$. The simplest \textquotedblleft
unnatural\textquotedblright\ case with $\rho =1$ is a model with
one doublet and two triplets (one real and one complex) with equal
VEVs.
\subsection{The $W^{-}Z^{0}H^{-}$ Couplings}
A general feature of the extended Higgs sectors is the presence of
$\mathit{physical}$ charged scalar fields ($H^{\pm }$). This fact
implies a potentially rich phenomenology. In particular one expects
tree level couplings of the type $W^{-}Z^{0}H^{-}$ in analogy to
$W^{-}W^{+}H$ and $Z^{0}Z^{0}H$ in the SM. However, it turns out
that this coupling is absent (at the tree level) in the simplest
\textquotedblleft natural\textquotedblright\ extensions of the
Higgs sector ($i.e.$ with doublets and singlets) and it is small
(proportional to $\sqrt{\left\vert 1-\rho \right\vert }$) in the
simplest \textquotedblleft unnatural\textquotedblright\
extensions. It is also easy to prove (using the electromagnetic
gauge invariance) that the $W^{-}\gamma H^{+}$ couplings vanish at
the tree level in all models.
\section{The Two Higgs Doublet Model}
A Higgs sector consisting of two scalar doublets
\begin{equation}
\phi _{1}=\binom{\phi _{1}^{+}}{\phi _{1}^{0}},\text{ \ \ \ \ \ \ and \ \ \ \ \ \ }\phi _{2}=\binom{\phi _{2}^{+}}{\phi _{2}^{0}}
\end{equation}
is the simplest \textquotedblleft\ natural\textquotedblright\
extension of the SM. As we have seen, in such a case the
$W^{-}Z^{0}H^{+}$ coupling is automatically absent at the tree
level. The most general $CP-$conserving potential involving two
doublets is [13]
\begin{eqnarray}
V\left( \phi _{1},\phi _{2}\right) &=&\lambda _{1}(\phi
_{1}^{+}\phi _{1}-\upsilon _{1}^{2})^{2}+\lambda _{2}(\phi
_{2}^{+}\phi _{2}-\upsilon
_{2}^{2})^{2} \notag \\
&&+\lambda _{3}\left[ (\phi _{1}^{+}\phi _{1}-\upsilon
_{1}^{2})+(\phi _{2}^{+}\phi _{2}-\upsilon _{2}^{2})\right] ^{2}  \notag \\
&&+\lambda _{4}\left[ (\phi _{1}^{+}\phi _{1})(\phi _{2}^{+}\phi
_{2})-(\phi
_{1}^{+}\phi _{2})(\phi _{2}^{+}\phi _{1})\right] \\
&&+\lambda _{5}\left[ \func{Re}(\phi _{1}^{+}\phi _{2})-\upsilon
_{1}\upsilon _{2}\right] ^{2}+\lambda _{6}\left[ \func{Im}(\phi
_{1}^{+}\phi _{2})\right] ^{2}, \notag
\end{eqnarray}
where the $\lambda _{i}$ are 6 arbitrary real parameters. If $\lambda _{i}>0,$
the minimum of the potential corresponds to
\begin{equation}
\left\langle \phi _{1}\right\rangle =\binom{0}{\upsilon _{1}},\text{ \ \ \ \
\ \ and \ \ \ \ \ \ \ }\left\langle \phi _{2}\right\rangle =\binom{0}{%
\upsilon _{2}}.
\end{equation}
Indeed, at this configuration every term in the potential vanishes, while
for $\lambda _{i}>0$ each term is non-negative.
In general the neutral components of the doublets can have flavor
changing couplings to the fermions. These flavor changing neutral
current (FCNC) interactions can be suppressed (as required
phenomenologically) either by giving large masses to these scalars
or by arranging their Yukawa couplings to the fermions.
\textit{Glashow} and \textit{Weinberg} proved a theorem stating a
sufficient condition for avoiding these FCNC effects:
\textquotedblleft\ FCNC interactions induced by neutral Higgs
scalars are absent if all fermions of a given charge receive their
masses from a single doublet\textquotedblright . This theorem is
trivially satisfied in the SM since there is only one doublet
available. It is also satisfied in the \textquotedblleft Minimal
supersymmetric model\textquotedblright .
Of the 8 available degrees of freedom (4 complex scalar fields,
each with two real components), 3 are the Goldstone bosons which
become the longitudinal components of the $W^{\pm }$ and $Z^{0}$
bosons that acquire mass by the \textquotedblleft Higgs
mechanism\textquotedblright . The remaining five scalars are
\textit{physical} states (two charged and three neutral). They
are
\begin{subequations}
\begin{eqnarray}
H^{\pm } &=&-\sin \beta \phi _{1}^{\pm }+\cos \beta \phi _{2}^{\pm }, \\
H_{1}^{0} &=&\sqrt{2}\left[ \left( \func{Re}\phi _{1}^{0}-\upsilon
_{1}\right) \cos \alpha +\left( \func{Re}\phi _{2}^{0}-\upsilon _{2}\right)
\sin \alpha \right] , \\
H_{2}^{0} &=&\sqrt{2}\left[ -\left( \func{Re}\phi _{1}^{0}-\upsilon
_{1}\right) \sin \alpha +\left( \func{Re}\phi _{2}^{0}-\upsilon _{2}\right)
\cos \alpha \right] , \\
H_{3}^{0} &=&\sqrt{2}\left[ -\sin \beta \func{Im}\phi _{1}^{0}+\cos \beta
\func{Im}\phi _{2}^{0}\right] ,
\end{eqnarray}
with masses $m_{H^{\pm }},$ $m_{H_{1}^{0}},$ $m_{H_{2}^{0}},$ and $%
m_{H_{3}^{0}}$, respectively. The angle $\alpha $, and the masses
are functions of the parameters $\lambda _{i}$ and the VEV's
$\upsilon _{1,2}$
\end{subequations}
\begin{subequations}
\begin{eqnarray}
m_{H^{\pm }}^{2} &=&\lambda _{4}(\upsilon _{1}^{2}+\upsilon _{2}^{2}), \\
m_{H_{3}^{0}}^{2} &=&\lambda _{6}(\upsilon _{1}^{2}+\upsilon _{2}^{2}), \\
m_{H_{1}^{0},H_{2}^{0}}^{2} &=&\frac{1}{2}\left[ A+C\pm D\right] , \\
\sin 2\alpha &=&\frac{2B}{D}, \\
\cos 2\alpha &=&\frac{A-C}{D},
\end{eqnarray}
where
\end{subequations}
\begin{subequations}
\begin{eqnarray}
A &=&4\upsilon _{1}^{2}(\lambda _{1}+\lambda _{3})+\upsilon
_{2}^{2}\lambda
_{5}, \\
B &=&(4\lambda _{3}+\lambda _{5})\upsilon _{1}\upsilon _{2}, \\
C &=&4\upsilon _{2}^{2}(\lambda _{2}+\lambda _{3})+\upsilon
_{1}^{2}\lambda
_{5}, \\
D &=&\sqrt{(A-C)^{2}+4B^{2}}.
\end{eqnarray}
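As a purely numerical cross-check of the diagonalization above, the values $\frac{1}{2}\left[ A+C\mp D\right] $ must coincide with the eigenvalues of the symmetric matrix with entries $A,$ $B,$ $C$, and hence reproduce its trace $A+C$ and determinant $AC-B^{2}$. A minimal sketch (the numerical values of $A,$ $B,$ $C$ are arbitrary illustrative inputs, not derived from any fit):

```python
import math

def cp_even_masses_squared(A, B, C):
    """Eigenvalues (1/2)[A + C -/+ D] of the symmetric 2x2 matrix
    [[A, B], [B, C]], with D = sqrt((A - C)^2 + 4 B^2)."""
    D = math.sqrt((A - C) ** 2 + 4.0 * B ** 2)
    return 0.5 * (A + C - D), 0.5 * (A + C + D)

# Arbitrary illustrative values of A, B, C (units of mass squared).
A, B, C = 1.8e4, 5.0e3, 2.2e4
m1sq, m2sq = cp_even_masses_squared(A, B, C)

# The two roots reproduce the trace and the determinant of the matrix.
assert abs((m1sq + m2sq) - (A + C)) < 1e-6
assert abs(m1sq * m2sq - (A * C - B ** 2)) < 1e-6 * abs(A * C - B ** 2)
```

The same identities hold at any parameter point, which makes them a convenient sanity check when implementing these mass formulas numerically.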
This model is completely specified by six parameters: $m_{H^{\pm }},$ $%
m_{H_{1}^{0}},$ $m_{H_{2}^{0}},$ $m_{H_{3}^{0}},$ $\alpha $, and
$\tan \beta \left( \equiv \upsilon _{2}/\upsilon _{1}\right) .$
In the absence of fermions, the Lagrangian (involving only gauge
bosons and scalars) is $C-$ and $P-$conserving and the gauge
bosons have the following quantum numbers:
\end{subequations}
\begin{equation}
J^{PC}=1^{-\text{ }-}(\gamma ),\text{ }1^{-\text{ }-}(Z),\text{ and }J^{P}=1^{-}(W).
\end{equation}
Similarly, the physical scalars are fixed to be
\begin{equation}
J^{PC}=0^{+\text{ }+}(H_{1}^{0}),\text{ }0^{+\text{ }+}(H_{2}^{0}),\text{ }%
0^{+\text{ }-}(H_{3}^{0}),\text{ and }J^{P}=0^{+}(H^{\pm }).
\end{equation}
As a consequence, the couplings $ZH_{1}^{0}H_{1}^{0}$ and $%
ZH_{2}^{0}H_{2}^{0}$ are zero since they would violate Bose
symmetry. The coupling $ZH_{1}^{0}H_{2}^{0}$ also vanishes due to
$CP-$conservation and
couplings $ZZH_{3}^{0}$ and $W^{-}W^{+}H_{3}^{0}$ are forbidden by $C-$%
conservation. These last three results hold to all orders of
perturbation theory before fermions are introduced.
Other couplings are absent only at the tree level:
$\gamma H_{i}^{0}H_{j}^{0},$ $\gamma \gamma H_{i}^{0},$ $ggH_{i}^{0},$ $%
W^{\pm }\gamma H^{\mp },$ and $W^{\pm }ZH^{\mp },$ but they can be
generated in higher orders of perturbation theory and can lead to
interesting rare decays. All other couplings are in principle
allowed. In particular, the couplings $W^{-}W^{+}H_{1,2}^{0}$ exist
and satisfy the sum rule
\begin{equation}
g_{VVH_{1}^{0}}^{2}+g_{VVH_{2}^{0}}^{2}=g_{VVH}^{2}(SM),\text{ \ \ }V=(W,Z),
\end{equation}
$i.e.$ they are somewhat suppressed compared to the analogous SM
couplings.
When fermions are introduced, since the couplings to fermions are
not $C-$ and $P-$conserving (although $CP$ is still approximately
conserved), scalar and gauge bosons are regarded by fermions as
mixtures of $J^{PC}$ and $J^{(-P)(-C)}$ states. In particular,
since a fermion-antifermion pair with zero total angular momentum
always has $C=+,$ in the $H_{i}^{0}f\overline{f}$ couplings the
Higgs fields $H_{1}^{0},$ $H_{2}^{0},$ and $H_{3}^{0}$ act
respectively as $0^{+\text{ }+},$ $0^{+\text{ }+},$ and
$0^{-\text{ }+}$ states.
\section{The Higgs Sector of the MSSM}
One peculiar fact of all supersymmetric gauge theories is that at
least two doublets are required. The simplest supersymmetric
extension of the SM is the MSSM where the Higgs sector consists of
just two doublets and it is a particular example of the
two-doublet models. In this case, SUSY imposes constraints and the
number of independent parameters is reduced from six to two:
$m_{H_{3}^{0}}$ and $\tan \beta .$ The Higgs sector is then
completely specified by the values of these two parameters.
The Higgs potential of the MSSM can be written as [14], [15]:
\begin{eqnarray}
V\left( \phi _{1},\phi _{2}\right) &=&m_{1}^{2}\phi _{1}^{+}\phi
_{1}+m_{2}^{2}\phi _{2}^{+}\phi _{2}-m_{1,2}^{2}(\phi _{1}^{+}\phi
_{2}+\phi
_{2}^{+}\phi _{1}) \notag \\
&&+\frac{1}{8}(g^{\prime 2}+g^{2})\left[ \left( \phi _{1}^{+}\phi
_{1}\right) ^{2}+(\phi _{2}^{+}\phi _{2})^{2}\right] \\
&&+\frac{1}{4}(g^{\prime 2}-g^{2})(\phi _{1}^{+}\phi _{1})\left(
\phi _{2}^{+}\phi _{2}\right) -\frac{1}{2}g^{2}(\phi _{1}^{+}\phi
_{2})(\phi _{2}^{+}\phi _{1}), \notag
\end{eqnarray}
where $g^{\prime }$ and $g$ are the U(1) and SU(2) gauge
couplings, respectively. Assuming that the symmetry is broken
spontaneously, then by comparing eq.(3.8) with eq.(3.16) we obtain
the following results
\begin{subequations}
\begin{eqnarray}
\lambda _{2} &=&\lambda _{1} \\
\lambda _{3} &=&\frac{1}{8}(g^{2}+g^{\prime 2})-\lambda _{1} \\
\lambda _{4} &=&2\lambda _{1}-\frac{1}{2}g^{\prime 2} \\
\lambda _{5} &=&\lambda _{6}=2\lambda _{1}-\frac{1}{2}(g^{2}+g^{\prime 2}) \\
\lambda _{7} &=&-\frac{1}{8}(\upsilon _{1}^{2}-\upsilon
_{2}^{2})^{2}(g^{2}+g^{\prime 2}) \\
m_{1}^{2} &=&2\lambda _{1}\upsilon
_{2}^{2}-\frac{1}{4}(g^{2}+g^{\prime
2})(\upsilon _{1}^{2}+\upsilon _{2}^{2}) \\
m_{2}^{2} &=&2\lambda _{1}\upsilon
_{1}^{2}-\frac{1}{4}(g^{2}+g^{\prime
2})(\upsilon _{1}^{2}+\upsilon _{2}^{2}) \\
m_{1,2}^{2} &=&\frac{1}{2}\upsilon _{1}\upsilon _{2}(4\lambda
_{1}-g^{2}-g^{\prime 2})
\end{eqnarray}
Using eq. (3.17h) to eliminate $\lambda _{1}$ in eqs. (3.17f) and
(3.17g) we get
\end{subequations}
\begin{equation}
m_{1}^{2}=m_{1,2}^{2}\frac{\upsilon _{2}}{\upsilon _{1}}-\frac{1}{4}%
(g^{2}+g^{\prime 2})(\upsilon _{1}^{2}-\upsilon _{2}^{2})
\end{equation}
\begin{equation}
m_{2}^{2}=m_{1,2}^{2}\frac{\upsilon _{1}}{\upsilon _{2}}-\frac{1}{4}%
(g^{2}+g^{\prime 2})(\upsilon _{2}^{2}-\upsilon _{1}^{2})
\end{equation}
Hence
\begin{equation}
m_{1}^{2}+m_{2}^{2}=m_{1,2}^{2}(\tan \beta +\cot \beta )
\end{equation}
\begin{equation}
\upsilon _{1}^{2}+\upsilon _{2}^{2}=\frac{-4m_{1}^{2}\cos
^{2}\beta +4m_{2}^{2}\sin ^{2}\beta }{(g^{2}+g^{\prime 2})(\cos
^{2}\beta -\sin ^{2}\beta )}
\end{equation}
Using eqs. $\left( 3.12\right) $, and (3.17)-(3.21), we
immediately get the spectrum of physical Higgs particles. The
results are
\begin{eqnarray}
m_{H_{3}^{0}}^{2} &=&m_{1}^{2}+m_{2}^{2},  \notag \\
m_{H^{\pm }}^{2} &=&m_{W}^{2}+m_{H_{3}^{0}}^{2}, \\
m_{_{H_{1}^{0},H_{2}^{0}}}^{2} &=&\frac{1}{2}\left[
m_{Z}^{2}+m_{_{H_{3}^{0}}}^{2}\pm \sqrt{%
(m_{Z}^{2}+m_{_{H_{3}^{0}}}^{2})^{2}-4m_{Z}^{2}m_{_{H_{3}^{0}}}^{2}\cos
^{2}2\beta }\right] ,  \notag
\end{eqnarray}
where
\begin{eqnarray}
m_{W}^{2} &=&\frac{1}{2}g^{2}(\upsilon _{1}^{2}+\upsilon _{2}^{2}),  \notag \\
m_{Z}^{2} &=&\frac{1}{2}(g^{2}+g^{\prime 2})(\upsilon _{1}^{2}+\upsilon
_{2}^{2})
\end{eqnarray}
are the squares of the masses of both $W$ and $Z$ bosons.
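Two exact tree-level consequences follow from these mass relations. Since the two $CP-$even masses satisfy
\[
m_{H_{1}^{0}}^{2}+m_{H_{2}^{0}}^{2}=m_{Z}^{2}+m_{H_{3}^{0}}^{2},\qquad
m_{H_{1}^{0}}^{2}\,m_{H_{2}^{0}}^{2}=m_{Z}^{2}\,m_{H_{3}^{0}}^{2}\cos
^{2}2\beta ,
\]
and $m_{H_{1}^{0}}^{2}\geq \max (m_{Z}^{2},m_{H_{3}^{0}}^{2})$, one obtains the well-known tree-level bound
\[
m_{H_{2}^{0}}^{2}\leq \min (m_{Z}^{2},m_{H_{3}^{0}}^{2})\cos ^{2}2\beta \leq
m_{Z}^{2},
\]
which is the quantitative form of the statement that one of the neutral scalars is lighter than the $Z$ boson.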
In this case the relations between $\alpha $ and $\beta $ are
\begin{eqnarray}
\sin 2\alpha &=&-\sin 2\beta \left( \frac{m_{H_{1}^{0}}^{2}+m_{H_{2}^{0}}^{2}%
}{m_{H_{1}^{0}}^{2}-m_{H_{2}^{0}}^{2}}\right) , \notag \\
\cos 2\alpha &=&-\cos 2\beta \left( \frac{m_{H_{3}^{0}}^{2}-m_{Z}^{2}}{%
m_{H_{1}^{0}}^{2}-m_{H_{2}^{0}}^{2}}\right) .
\end{eqnarray}
From these expressions, the following inequalities follow,
\begin{eqnarray}
m_{W} &<&m_{H^{\pm }}, \notag \\
m_{H_{2}^{0}} &<&m_{H_{3}^{0}}, \\
m_{H_{2}^{0}} &<&m_{Z}\,<m_{H_{1}^{0}}. \notag
\end{eqnarray}
The fact that one of the neutral scalars, $H_{2}^{0},$ is lighter than the $%
Z $ boson, is an interesting result of the MSSM. The heavy scalar $%
H_{1}^{0},$ on the other hand, has a $W^{-}W^{+}H_{1}^{0}$
coupling which is suppressed with respect to the corresponding one in
the SM by a factor
\begin{equation}
\left[ \frac{m_{H_{2}^{0}}^{2}\left( m_{Z}^{2}-m_{H_{2}^{0}}^{2}\right) }{%
\left( m_{H_{1}^{0}}^{2}+m_{H_{2}^{0}}^{2}\right) \left(
m_{H_{1}^{0}}^{2}+m_{H_{2}^{0}}^{2}-m_{Z}^{2}\right) }\right]
^{2}.
\end{equation}
This factor decreases as $1/m_{H_{1}^{0}}^{2}$ as
$m_{H_{1}^{0}}$ increases, and it is lower than $\sim 0.15$
for $m_{H_{1}^{0}}>2m_{W}$. The heavy scalar then behaves
differently from the SM Higgs, since the latter interacts more and
more strongly with the $W$ boson as $m_{H}$ increases. This
different behavior of SUSY theories is consistent with the fact
that the ultraviolet cut-off of the latter is far above $\sim
1$ $TeV.$
These two results, namely, the existence of a light Higgs boson
and the decoupling of the heavy one from the gauge bosons are
general results which survive in the more general supersymmetric
models, including the \textquotedblleft superstring
inspired\textquotedblright\ ones.
Recent results from LEP experiments have restricted the allowed
region in the $m_{H_{1}^{0}},$ $\tan \beta $ plane and on
$m_{H^{\pm }}.$
To summarize, there are five physical Higgs particles in the MSSM,
a charged
Higgs pair ($H^{\pm }$), two $CP-$even neutral Higgs bosons (denoted by $%
H_{1}^{0}$ and $H_{2}^{0}$ where $m_{H_{1}^{0}}>m_{H_{2}^{0}}$) and one $CP-$%
odd neutral Higgs boson\footnote{%
In recent reviews of particle properties, the symbol $A^{0}$ replaces $%
H_{3}^{0}$, to denote the $CP-$odd neutral Higgs boson.
\par
{}} $H_{3}^{0}$. The properties of the Higgs sector of the MSSM
are determined by the Higgs potential, which is made of quadratic
terms and quartic interaction terms. The strengths of the
interaction terms are directly related to the gauge couplings by
supersymmetry (and are not affected at tree-level by supersymmetry
breaking). As a result, $\tan \beta $ and one Higgs mass
($m_{H_{3}^{0}}$) determine: the Higgs spectrum, the angle $\alpha
$ (which indicates the amount of mixing of the original $Y=\pm 1$
Higgs doublet states in the physical $CP-$even scalars), and the
Higgs boson couplings.
\section{The Supersymmetric Particle Sector of the MSSM}
The supersymmetric partner of the gauge and Higgs bosons are
fermions, whose names are obtained by appending \textquotedblleft
$ino$\textquotedblright\ at the end of the SM particle name. The
\textit{gluino }is the color octet
Majorana fermion partner of the gluon with mass $m_{\widetilde{g}%
}=\left\vert M_{3}\right\vert $. The supersymmetric partner of the
electroweak gauge and Higgs bosons (the \textit{gauginos} and \textit{%
Higgsinos}) can mix. As a result, the physical mass eigenstates
are
model-dependent linear combinations of these states, called \emph{charginos }%
and\emph{\ neutralinos}, which are obtained by diagonalizing the
corresponding mass matrix [16].
The chargino-mass matrix depends on $M_{2}$, $\mu ,$ $\tan \beta ,$ and $%
m_{W}$. The chargino mass eigenstates are denoted by $\widetilde{\chi }%
_{1}^{+},$ $\widetilde{\chi }_{2}^{+}$ according to the convention that $%
m_{\widetilde{\chi }_{1}^{+}}\leq m_{\widetilde{\chi }_{2}^{+}}$.
The neutralino mass matrix depends on $M_{1},$ $M_{2}$, $\mu $, $\tan \beta $%
, $m_{Z}$, and the weak mixing angle $\theta _{W}$. The
corresponding neutralino eigenstates are denoted by
$\widetilde{\chi }_{i}^{0}$
($i=1,...,4$), according to the convention that $m_{\widetilde{\chi }%
_{1}^{0}}\leq m_{\widetilde{\chi }_{2}^{0}}\leq m_{\widetilde{\chi
}_{3}^{0}}\leq m_{\widetilde{\chi }_{4}^{0}}$.
If a chargino or a neutralino eigenstate approximates a particular
gaugino or Higgsino state, it may be convenient to use the
corresponding
nomenclature. For example, if $M_{1}$ and $M_{2}$ are small compared to $%
m_{Z}$ (and $\mu $), then the lightest neutralino $\widetilde{\chi
}_{1}^{0}$ will be nearly a pure photino, $\widetilde{\gamma }$ (
the supersymmetric partner of the photon).
It is common to reduce the supersymmetric parameter freedom by
requiring that all three gaugino-mass parameters are equal at some
grand unification scale. Then, at the electroweak scale the
gaugino-mass parameters can be expressed in terms of one of them,
which we choose to be $M_{2}\equiv M$. The other two gaugino-mass
parameters are given by
\begin{eqnarray}
M_{1} &=&\frac{3}{5}M^{\prime }=\left( \frac{g^{\prime
2}}{g^{2}}\right) M,
\notag \\
M_{3} &\equiv &m_{\widetilde{g}}=\left(
\frac{g_{s}^{2}}{g^{2}}\right) M,
\end{eqnarray}
where $M^{\prime }$, $M$ and $m_{\widetilde{g}}$ are the bino,
wino and gluino masses, respectively. Having made this assumption,
the chargino and neutralino masses and mixing angles depend only
on three unknown parameters: the \textit{wino} mass $M$, the Higgs
mass parameter $\mu $, and $\tan \beta $.
The supersymmetric partners of the quarks and leptons are spin
zero bosons: the \textit{squarks}, charged\textit{\ sleptons},
and\textit{\ sneutrinos.}
\subsection{The charginos}
The charginos, $\widetilde{\chi }_{i}^{\pm }$ (\thinspace
$i=1,2$), are four
component Dirac fermions which arise due to mixing of winos, $\widetilde{W}%
^{-}$ , $\widetilde{W}^{+}$ and the charged Higgsinos,
$\widetilde{H}^{-},$ and $\widetilde{H}^{+}$ [17], [18]. Because
there are actually two independent mixings, $\left(
\widetilde{W}^{-},\widetilde{H}^{-}\right) $ and $\left(
\widetilde{W}^{+},\widetilde{H}^{+}\right) ,$ we shall need to
define two unitary mixing matrices. We define, in two-component
spinor notation:
\begin{equation}
\left( \psi _{j}^{\pm }\right) ^{T}=\left( -i\widetilde{W}^{\pm },\widetilde{%
H}^{\pm }\right) ,\text{ \ \ \ \ \ \ \ \ where }j=1,2.
\end{equation}
The mass term in the Lagrangian is:
\begin{equation}
\mathfrak{L}_{m}=\left( \psi ^{-}\right) ^{T}\mathbf{X}\psi
^{+}+h.c.,
\end{equation}
where
\begin{equation}
\mathbf{X=}%
\begin{pmatrix}
M & \sqrt{2}m_{W}\sin \beta \\
\sqrt{2}m_{W}\cos \beta & \mu%
\end{pmatrix}%
.
\end{equation}
The mass matrix \textbf{X }is diagonalized by the unitary $2\times
2$ matrices \textbf{U }and \textbf{V}:
\begin{equation}
\mathbf{U}^{\ast }\mathbf{XV}^{-1}=\mathbf{M}_{D},
\end{equation}
where \textbf{M}$_{D}$ is the diagonal chargino mass matrix. In
particular, \textbf{U} and \textbf{V} can be chosen so that the
elements of the diagonal matrix \textbf{M}$_{D}$ are real and
\textit{non-negative}. We define two-component mass eigenstates
via:
\begin{eqnarray}
\chi _{i}^{+} &=&V_{ij}\psi _{j}^{+}, \notag \\
\chi _{i}^{-} &=&U_{ij}\psi _{j}^{-},\text{ \ \ \ \ \ \ \ \ \ \ \
\ \ \ \ \ \ \ \ \ \ where }i,j=1,2.
\end{eqnarray}
The proper four component mass eigenstates are the charginos which
are defined in terms of two component mass eigenstates as:
\begin{equation}
\widetilde{\chi }_{1}^{+}=\binom{\chi _{1}^{+}}{\overline{\chi }_{1}^{-}},%
\text{ \ \ \ \ \ \ \ }\widetilde{\chi }_{2}^{+}=\binom{\chi _{2}^{+}}{%
\overline{\chi }_{2}^{-}}.
\end{equation}
The mass eigenvalues $M_{D\text{ }i}$ (the two elements of the
diagonal) are given by
\begin{equation}
\mathbf{M}_{D\text{ }1,2}^{2}=\frac{1}{2}\left\{ \left\vert \mu \right\vert
^{2}+\left\vert M\right\vert ^{2}+2m_{W}^{2}\mp \sqrt{\left( \left\vert \mu
\right\vert ^{2}+\left\vert M\right\vert ^{2}+2m_{W}^{2}\right)
^{2}-4\left\vert \mu \right\vert ^{2}\left\vert M\right\vert
^{2}-4m_{W}^{4}\sin ^{2}2\beta +8m_{W}^{2}\sin 2\beta \func{Re}\left( \mu
M\right) }\right\}
\end{equation}
If $CP-$violation effects are ignored (in such a case, $M$ and $\mu
$ are real parameters), then one can choose a convention where
$\tan \beta $ and $M$ are positive; note that the relative sign of
$M$ and $\mu $ is meaningful. The sign of $\mu $ is convention
dependent.\footnote{%
Notice that both sign conventions appear in the literature.} Now eq.
(3.34) becomes
\begin{equation}
\mathbf{M}_{D\text{ }1,2}^{2}=\frac{1}{2}\left\{ \mu ^{2}+M^{2}+2m_{W}^{2}\mp
\sqrt{\left( M^{2}-\mu ^{2}\right) ^{2}+4m_{W}^{4}\cos ^{2}2\beta
+4m_{W}^{2}\left( M^{2}+\mu ^{2}+2M\mu \sin 2\beta \right) }\right\}
\end{equation}
and it has the roots
\begin{equation}
M_{D\text{ }1,2}=\frac{1}{2}\left( \sqrt{\left( M-\mu \right)
^{2}+2m_{W}^{2}\left( 1+\sin 2\beta \right) }\mp \sqrt{\left(
M+\mu \right) ^{2}+2m_{W}^{2}\left( 1-\sin 2\beta \right) }\right)
\end{equation}
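The closed-form roots above can be checked numerically against two invariants of the chargino mass matrix $\mathbf{X}$: the eigenvalues must satisfy $M_{D\text{ }1}^{2}+M_{D\text{ }2}^{2}=M^{2}+\mu ^{2}+2m_{W}^{2}$ and $M_{D\text{ }1}M_{D\text{ }2}=m_{W}^{2}\sin 2\beta -M\mu $ (equal to $-\det \mathbf{X}$). A minimal sketch (the parameter point is hypothetical, chosen only for illustration):

```python
import math

def chargino_mass_eigenvalues(M, mu, mW, tan_beta):
    """Chargino mass eigenvalues M_D1, M_D2 from the closed form above,
    assuming real (CP-conserving) M and mu; M_D1 may come out negative."""
    beta = math.atan(tan_beta)
    s2b = math.sin(2.0 * beta)
    a = math.sqrt((M - mu) ** 2 + 2.0 * mW ** 2 * (1.0 + s2b))
    b = math.sqrt((M + mu) ** 2 + 2.0 * mW ** 2 * (1.0 - s2b))
    return 0.5 * (a - b), 0.5 * (a + b)

# Hypothetical parameter point (GeV): M = 200, mu = 300, mW = 80.4, tan(beta) = 10.
M, mu, mW, tb = 200.0, 300.0, 80.4, 10.0
MD1, MD2 = chargino_mass_eigenvalues(M, mu, mW, tb)
s2b = math.sin(2.0 * math.atan(tb))

# Invariant checks: sum of squares and product of the eigenvalues.
assert abs(MD1 ** 2 + MD2 ** 2 - (M ** 2 + mu ** 2 + 2.0 * mW ** 2)) < 1e-6
assert abs(MD1 * MD2 - (mW ** 2 * s2b - M * mu)) < 1e-6
```

A negative $M_{D\text{ }1}$ at such a point simply corresponds to $\eta _{1}=-1$ in the sign convention introduced next.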
We write the chargino mass eigenvalues in the form $M_{D\text{ }i}=\eta
_{i}m_{\widetilde{\chi }_{i}^{\pm }},$ $i=1,2,$ with $m_{\widetilde{\chi }%
_{i}^{\pm }}=\left\vert M_{D\text{ }i}\right\vert ,$ and $\eta
_{i}=sign\left( M_{D\text{ }i}\right) =\pm 1.$
Assuming $CP-$conservation, we choose the matrices \textbf{U} and
\textbf{V} real. The matrix elements $U_{ij}$ and $V_{ij}$ are given by
\begin{subequations}
\begin{eqnarray}
U_{1,2} &=&U_{2,1}=\frac{\theta
_{1}}{\sqrt{2}}\sqrt{1+\frac{M^{2}-\mu
^{2}-2m_{W}^{2}\cos 2\beta }{W}} \\
U_{2,2} &=&-U_{1,1}=\frac{\theta
_{2}}{\sqrt{2}}\sqrt{1-\frac{M^{2}-\mu
^{2}-2m_{W}^{2}\cos 2\beta }{W}} \\
V_{2,1} &=&-V_{1,2}=\frac{\theta
_{3}}{\sqrt{2}}\sqrt{1+\frac{M^{2}-\mu
^{2}+2m_{W}^{2}\cos 2\beta }{W}} \\
V_{2,2} &=&V_{1,1}=\frac{\theta
_{4}}{\sqrt{2}}\sqrt{1-\frac{M^{2}-\mu ^{2}+2m_{W}^{2}\cos 2\beta
}{W}}
\end{eqnarray}
where the sign factors $\theta _{i}$, $i=1,...,4$, are given in
Table 3.1, and
\end{subequations}
\begin{equation}
W=\sqrt{\left( M^{2}+\mu ^{2}+2m_{W}^{2}\right) ^{2}-4\left( M\mu
-m_{W}^{2}\sin 2\beta \right) ^{2}}
\end{equation}
\begin{center}
\bigskip
\begin{tabular}[t]{|c|c|c|}
\hline\hline \multicolumn{1}{||c|}{$\theta _{i}$ \ \ \ \ \ \ \ \ \
\ } & \multicolumn{1}{||c|}{$\tan \beta >1$} &
\multicolumn{1}{||c||}{$\tan \beta <1$} \\ \hline\hline $\theta
_{1}$ & 1 & $\varepsilon _{B}$ \\ \hline $\theta _{2}$ &
$\varepsilon _{B}$ & 1 \\ \hline $\theta _{3}$ & $\varepsilon
_{A}$ & 1 \\ \hline $\theta _{4}$ & 1 & $\varepsilon _{A}$ \\
\hline
\end{tabular}
Table 3.1. Sign factors $\theta _{i},$ $i=1,...,4$, where $\varepsilon
_{A}=sign\left( M\sin \beta +\mu \cos \beta \right) $
and $\varepsilon _{B}=sign\left( M\cos \beta +\mu \sin \beta
\right) .$
\end{center}
\subsection{The Neutralinos}
The neutralinos, $\widetilde{\chi }_{i}^{0}$ $\left( i=1,...,4\right)
,$ are four-component Majorana fermions which arise due to mixing
of the two neutral gauginos $\widetilde{B}$ (bino) and
$\widetilde{W}^{3}$ (neutral wino), and the two neutral Higgsinos,
$\widetilde{H}_{1}^{0}$ and $\widetilde{H}_{2}^{0}$ [19], [20],
[21]. As basis of the neutral gaugino-Higgsino system we
conveniently take
\begin{equation}
\left( \psi ^{0}\right) ^{T}=\left( -i\widetilde{B},-i\widetilde{W}^{3},%
\widetilde{H}_{1}^{0},\widetilde{H}_{2}^{0}\right) .
\end{equation}
The mass term in the Lagrangian is:
\begin{equation}
\mathfrak{L}_{m}=\frac{1}{2}\left( \psi ^{0}\right)
^{T}\mathbf{Y}\psi ^{0}+h.c.,
\end{equation}
where
\begin{equation}
\mathbf{Y}=%
\begin{pmatrix}
M^{\prime } & 0 & -m_{Z}\sin \theta _{W}\cos \beta & m_{Z}\sin \theta
_{W}\sin \beta \\
0 & M & m_{Z}\cos \theta _{W}\cos \beta & -m_{Z}\cos \theta _{W}\sin \beta
\\
-m_{Z}\sin \theta _{W}\cos \beta & m_{Z}\cos \theta _{W}\cos \beta & 0 &
-\mu \\
m_{Z}\sin \theta _{W}\sin \beta & -m_{Z}\cos \theta _{W}\sin \beta & -\mu
& 0%
\end{pmatrix}%
\end{equation}
on the $\left( -i\widetilde{B},-i\widetilde{W}^{3},\widetilde{H}_{1}^{0},%
\widetilde{H}_{2}^{0}\right) $ basis. The two-component mass
eigenstates can be obtained by diagonalizing the mass matrix
$\mathbf{Y}$
\begin{equation}
\chi _{i}^{0}=N_{ij}\psi _{j}^{0},
\end{equation}
where $\mathbf{N}$ is a complex unitary matrix
$(\mathbf{N}^{+}\mathbf{N}=\mathbf{1})$ satisfying
\begin{equation}
\mathbf{N}^{\ast }\mathbf{YN}^{-1}=\mathbf{N}_{D},
\end{equation}
and $\mathbf{N}_{D}$ is the diagonal neutralino mass matrix. The
four-component mass eigenstates are the neutralinos, which are
defined in terms of the two-component mass eigenstates:
\begin{equation}
\widetilde{\chi }_{i}^{0}=\binom{\chi _{i}^{0}}{\overline{\chi }_{i}^{0}},%
\text{ \ \ \ \ \ \ }i=1,...,4.
\end{equation}
Assuming $CP-$invariance, $\mathbf{N}$ is replaced by another matrix $\mathbf{%
Z}.$ This changes eq. $\left( 3.43\right) $ to
\begin{equation}
\mathbf{Z}^{\ast }\mathbf{YZ}^{-1}=\mathbf{M}_{D},
\end{equation}
and we write the neutralino mass eigenvalues in the form $M_{D\text{ }%
i}=\varepsilon _{i}m_{\widetilde{\chi }_{i}^{0}},$ $i=1,...,4,$ with $m_{%
\widetilde{\chi }_{i}^{0}}=\left\vert M_{D\text{ }i}\right\vert ,$ and $%
\varepsilon _{i}=sign\left( M_{D\text{ }i}\right) =\pm 1.$ The
relation between the $\mathbf{N}$ and $\mathbf{Z}$ matrices is
\begin{equation}
N_{ij}=\sqrt{\varepsilon _{i}}Z_{ij},\text{ \ \ \ \ \ }\left( \text{no sum
over }i\right) .
\end{equation}
Using the theory of equations, the expressions for the $M_{D\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }i}$ $%
\left( \RIfM@\expandafter\text@\else\expandafter\mbox\fi{the four components of the diagonal}\right) $ are
given by
\begin{subequations}
\begin{eqnarray}
M_{D\,1} &=&-A+B_{+}+\frac{1}{4}C_{+}, \\
M_{D\,2} &=&+A-B_{-}+\frac{1}{4}C_{+}, \\
M_{D\,3} &=&-A-B_{+}+\frac{1}{4}C_{+}, \\
M_{D\,4} &=&+A+B_{-}+\frac{1}{4}C_{+},
\end{eqnarray}
where
\end{subequations}
\begin{subequations}
\begin{eqnarray}
A &=&\sqrt{\frac{1}{2}a-\frac{1}{6}b}, \\
B_{\pm } &=&\sqrt{-\frac{1}{2}a-\frac{1}{3}b\pm \frac{c}{\sqrt{8a-\frac{8}{3}%
b}}}, \\
C_{\pm } &=&M^{\prime }\pm M,
\end{eqnarray}
and
\end{subequations}
\begin{subequations}
\begin{align}
a& =\frac{1}{\sqrt[3]{2}}\operatorname{Re}\left( c^{2}+\frac{2}{27}b^{3}-\frac{8}{3}bd+i\sqrt{\frac{D}{27}}\right) ^{\frac{1}{3}}, \\
b& =E-\frac{3}{8}C_{+}^{2}, \\
c& =\frac{1}{8}C_{+}^{3}+\frac{1}{2}C_{+}E+C_{+}\mu
^{2}+Fm_{Z}^{2}-\mu
m_{Z}^{2}\sin 2\beta , \\
d& =F\mu m_{Z}^{2}\sin 2\beta -M^{\prime }M\mu ^{2}+\frac{1}{16}EC_{+}^{2}-%
\frac{3}{256}C_{+}^{4}+\frac{1}{4}C_{+}(C_{+}\mu
^{2}+Fm_{Z}^{2}-\mu m_{Z}^{2}\sin 2\beta ),
\end{align}
where
\end{subequations}
\begin{subequations}
\begin{eqnarray}
D &=&-4\left( -\frac{1}{3}b^{3}-4d\right) ^{3}-27\left( -c^{2}-\frac{2}{27}%
b^{3}+\frac{8}{3}bd\right) ^{2}, \\
E &=&M^{\prime }M-m_{Z}^{2}-\mu ^{2}, \\
F &=&M^{\prime }\cos ^{2}\theta _{W}+M\sin ^{2}\theta _{W}.
\end{eqnarray}
The elements of the mixing matrix $\mathbf{Z}$ are given by
\end{subequations}
\begin{subequations}
\begin{eqnarray}
Z_{i1} &=&\frac{1}{\sqrt{1+G_{i}^{2}+H_{i}^{2}+I_{i}^{2}}} \\
Z_{i2} &=&G_{i}Z_{i1}, \\
Z_{i3} &=&H_{i}Z_{i1}, \\
Z_{i4} &=&I_{i}Z_{i1},\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ \ \ \ \ \ \ \ }i=1,...,4,
\end{eqnarray}
where
\end{subequations}
\begin{subequations}
\begin{eqnarray}
G_{i} &=&-\frac{J_{i}^{\prime }}{J_{i}\tan \theta _{W}}, \\
H_{i} &=&\frac{\mu J_{i}J_{i}^{\prime }-\frac{1}{2}K_{i}}{L_{i}}, \\
I_{i} &=&\frac{-M_{D\,i}J_{i}J_{i}^{\prime }-K_{i}}{L_{i}}, \\
J_{i} &=&M-M_{D\,i},\qquad J_{i}^{\prime }=M^{\prime }-M_{D\,i}, \\
K_{i} &=&m_{Z}^{2}\sin 2\beta \left( C_{-}\cos ^{2}\theta _{W}+J_{i}\right) , \\
L_{i} &=&m_{Z}J_{i}\sin \theta _{W}\left( \mu \cos \beta +M_{D\,i}\sin \beta \right) .
\end{eqnarray}
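As a numerical cross-check of the diagonalization above, the following Python sketch builds a CP-conserving neutralino mass matrix $\mathbf{Y}$ on the $\left( -i\widetilde{B},-i\widetilde{W}^{3},\widetilde{H}_{1}^{0},\widetilde{H}_{2}^{0}\right) $ basis for illustrative input values of $M^{\prime }$, $M$, $\mu $ and $\tan \beta $ (the sign conventions of the off-diagonal entries vary in the literature), extracts the signed eigenvalues $M_{D\,i}$, the masses $m_{\widetilde{\chi }_{i}^{0}}=|M_{D\,i}|$ and the signs $\varepsilon _{i}$, and verifies the relation $N_{ij}=\sqrt{\varepsilon _{i}}Z_{ij}$:

```python
import numpy as np

# Numerical diagonalization of a CP-conserving neutralino mass matrix Y on
# the (-i B~, -i W~3, H~1^0, H~2^0) basis.  All input values (M', M, mu,
# tan beta) are illustrative assumptions; off-diagonal sign conventions vary.
mZ, sw2 = 91.19, 0.23                       # m_Z in GeV, sin^2(theta_W)
sw, cw = np.sqrt(sw2), np.sqrt(1.0 - sw2)
Mp, M, mu, tb = 100.0, 200.0, 300.0, 5.0    # assumed M', M, mu (GeV), tan(beta)
sb, cb = tb / np.hypot(1.0, tb), 1.0 / np.hypot(1.0, tb)

Y = np.array([
    [Mp,             0.0,           -mZ * sw * cb,  mZ * sw * sb],
    [0.0,            M,              mZ * cw * cb, -mZ * cw * sb],
    [-mZ * sw * cb,  mZ * cw * cb,   0.0,          -mu],
    [ mZ * sw * sb, -mZ * cw * sb,  -mu,            0.0],
])

# Y is real symmetric, so a real orthogonal Z with Z Y Z^T = diag(M_D) exists.
eigvals, eigvecs = np.linalg.eigh(Y)
Z = eigvecs.T                   # rows of Z are the two-component eigenstates
MD = eigvals                    # signed eigenvalues M_{D i}
masses = np.abs(MD)             # m_{chi^0_i} = |M_{D i}|
eps = np.sign(MD)               # epsilon_i = sign(M_{D i}) = +-1

# N_{ij} = sqrt(eps_i) Z_{ij} (no sum over i) gives positive diagonal masses.
N = np.diag(np.sqrt(eps.astype(complex))) @ Z
assert np.allclose(Z @ Y @ Z.T, np.diag(MD), atol=1e-7)
assert np.allclose(np.conj(N) @ Y @ np.linalg.inv(N), np.diag(masses), atol=1e-7)
```

The closed-form expressions for the $M_{D\,i}$ given above can be checked against the eigenvalues returned by such a routine.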
\subsection{The Sfermions}
For a given fermion $f$, there are two supersymmetric partners $\widetilde{f}_{L}$ and $\widetilde{f}_{R}$ (sfermions), which are the scalar
partners of the corresponding left- and right-handed fermion. There is no $\widetilde{\nu }_{R}$. However, in general, $\widetilde{f}_{L}$ and
$\widetilde{f}_{R}$ are not mass eigenstates, since there is
$\widetilde{f}_{L}-\widetilde{f}_{R}$ mixing, whose strength is proportional
to the corresponding off-diagonal element of the scalar
mass-squared matrix [22]:
\end{subequations}
\begin{equation}
M_{LR}^{2}=\begin{cases} m_{d}\left( A_{d}-\mu \tan \beta \right) , & \text{for "down"-type }f, \\ m_{u}\left( A_{u}-\mu \cot \beta \right) , & \text{for "up"-type }f, \end{cases}
\end{equation}
where $m_{d}$ $\left( m_{u}\right) $ is the mass of the appropriate "down" ("up") type quark or lepton. Here, $A_{d}$ and $A_{u}$
are (unknown) soft-supersymmetry-breaking $A$-parameters, and $\mu $ and $\tan \beta $ have been defined earlier. The signs of the
$A$-parameters are also convention-dependent. Due to the appearance
of the \textsl{fermion} mass in eq.(3.53), one expects $M_{LR}$ to
be small compared to the diagonal squark and slepton masses, with
the possible exception of the top-squark, since $m_{t}$ is large,
and of the bottom-squark and tau-slepton if $\tan \beta \gg 1.$
The (diagonal) $L$- and $R$-type squark and slepton masses are
given by
\begin{subequations}
\begin{eqnarray}
m_{\widetilde{u}_{L}}^{2}
&=&M_{\widetilde{Q}}^{2}+m_{u}^{2}+m_{Z}^{2}\cos
2\beta \left( \frac{1}{2}-\frac{2}{3}\sin ^{2}\theta _{W}\right) , \\
m_{\widetilde{u}_{R}}^{2} &=&M_{\widetilde{U}}^{2}+m_{u}^{2}+\frac{2}{3}%
m_{Z}^{2}\cos 2\beta \sin ^{2}\theta _{W}, \\
m_{\widetilde{d}_{L}}^{2}
&=&M_{\widetilde{Q}}^{2}+m_{d}^{2}-m_{Z}^{2}\cos
2\beta \left( \frac{1}{2}-\frac{1}{3}\sin ^{2}\theta _{W}\right) , \\
m_{\widetilde{d}_{R}}^{2} &=&M_{\widetilde{D}}^{2}+m_{d}^{2}-\frac{1}{3}%
m_{Z}^{2}\cos 2\beta \sin ^{2}\theta _{W}, \\
m_{\widetilde{\nu }}^{2}
&=&M_{\widetilde{L}}^{2}+\frac{1}{2}m_{Z}^{2}\cos
2\beta , \\
m_{\widetilde{e}_{L}}^{2}
&=&M_{\widetilde{L}}^{2}+m_{e}^{2}-m_{Z}^{2}\cos
2\beta \left( \frac{1}{2}-\sin ^{2}\theta _{W}\right) , \\
m_{\widetilde{e}_{R}}^{2}
&=&M_{\widetilde{E}}^{2}+m_{e}^{2}-m_{Z}^{2}\cos 2\beta \sin
^{2}\theta _{W}.
\end{eqnarray}
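The pattern of the equations above is the general D-term formula $m^{2}=M_{\mathrm{soft}}^{2}+m_{f}^{2}+m_{Z}^{2}\cos 2\beta \left( T_{3}-Q\sin ^{2}\theta _{W}\right) $, with the conjugate charge $-Q$ (and $T_{3}=0$) used for the $R$-type states. A short Python sketch makes this explicit; the soft masses and $\tan \beta $ below are illustrative assumptions, not fitted values:

```python
import numpy as np

# Sfermion masses from m^2 = M_soft^2 + m_f^2 + m_Z^2 cos(2 beta)(T3 - Q sW^2),
# which reproduces the equations above; R-type states use the conjugate
# charge -Q (and T3 = 0).  Soft masses and tan(beta) are illustrative only.
mZ2, sw2, tb = 91.19**2, 0.23, 10.0
c2b = (1.0 - tb**2) / (1.0 + tb**2)          # cos(2 beta) for tan(beta) = 10

def sfermion_mass(M_soft, m_f, T3, Q):
    """Mass in GeV of a sfermion with weak isospin T3 and electric charge Q."""
    return np.sqrt(M_soft**2 + m_f**2 + mZ2 * c2b * (T3 - Q * sw2))

MQ = MU = MD = 500.0    # assumed squark soft masses (GeV)
ML = ME = 200.0         # assumed slepton soft masses (GeV)
m_uL = sfermion_mass(MQ, 0.0, +0.5, +2.0 / 3.0)
m_uR = sfermion_mass(MU, 0.0,  0.0, -2.0 / 3.0)   # conjugate charge
m_dL = sfermion_mass(MQ, 0.0, -0.5, -1.0 / 3.0)
m_dR = sfermion_mass(MD, 0.0,  0.0, +1.0 / 3.0)   # conjugate charge
m_nu = sfermion_mass(ML, 0.0, +0.5,  0.0)
m_eL = sfermion_mass(ML, 0.0, -0.5, -1.0)
m_eR = sfermion_mass(ME, 0.0,  0.0, +1.0)         # conjugate charge
```

(The fermion masses are set to zero here, as appropriate for the first generation.)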
The soft-supersymmetry-breaking parameters $M_{\widetilde{Q}}^{2}$, $M_{\widetilde{U}}^{2}$, $M_{\widetilde{D}}^{2}$, $M_{\widetilde{L}}^{2}$ and $M_{\widetilde{E}}^{2}$ are unknown. In the equations
above, the notation of the first-generation fermions has been used
and generational indices have been suppressed. Further
complications such as intergenerational mixing are possible, although
there are some constraints from the non-observation of
flavor-changing neutral currents (FCNC).
\section{Reducing the MSSM Parameter Freedom}
One way to guarantee the absence of FCNC's mediated by virtual
supersymmetric-particle exchange is to posit that the diagonal
soft-supersymmetry-breaking scalar squared-masses are universal in
flavor space at some energy scale (normally taken to be at or near
the Planck scale). Renormalization group evolution is used to
determine the low energy values for the scalar mass parameters
listed above. This assumption reduces the MSSM parameter freedom.
For example, the supersymmetric grand unified
models with universal scalar masses at the Planck scale typically give $M_{%
\widetilde{L}}\approx M_{\widetilde{E}}<M_{\widetilde{Q}}\approx M_{%
\widetilde{U}}\approx M_{\widetilde{D}}$ with the squark masses
somewhere between a factor of 1-3 larger than the slepton masses
(neglecting generational distinction). More specifically, the
first two generations are
thought to be degenerate in mass, while $M_{\widetilde{Q}_{3}}$ and $M_{\widetilde{U}_{3}}$ are typically reduced by a factor of 1-3 from
the other soft supersymmetry-breaking masses because of
renormalization effects due to the heavy top quark mass.
As a result, four flavors of squarks (with two squark
eigenstates per flavor) and $\widetilde{b}_{R}$ will be nearly
mass-degenerate and somewhat heavier than six flavors of nearly
mass-degenerate sleptons (with two per flavor for the charged
sleptons and one for the sneutrinos). On the other
hand, $\widetilde{b}_{L}$ mass and the diagonal $\widetilde{t}_{L}$ and $%
\widetilde{t}_{R}$ masses are reduced compared to the common
squark mass of the first two generations. In addition, third
generation squark masses and
tau-slepton masses are sensitive to the respective $\widetilde{f}_{L}-\widetilde{f}_{R}$ mixing, as discussed before.
Two additional theoretical frameworks are often introduced to
reduce further the MSSM parameter freedom. The first involves
grand unified theories (GUTs) and the desert hypothesis ($i.e.$, no new physics between the TeV scale and the GUT scale). Perhaps
one of the most compelling hints for low-energy supersymmetry is
the unification of the $SU(3)\times SU(2)\times U(1)$ gauge couplings
predicted by supersymmetric GUT models (with the supersymmetry
breaking scale of order 1 TeV or below).
The unification, which takes place at an energy scale of order
$10^{16}$ GeV, is quite robust (and depends weakly on the details of
the GUT-scale theory). For example, a recent analysis finds that
supersymmetric GUT unification implies $\alpha _{s}(m_{Z})=0.129\pm 0.010$, not including threshold
corrections due to GUT-scale particles (which could diminish the value of $\alpha _{s}(m_{Z})$). This result is compatible with the world average $\alpha _{s}(m_{Z})=0.118\pm 0.003$. In contrast, gauge coupling
unification in the simplest non-supersymmetric GUT models fails by
many standard deviations.
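The unification statement can be illustrated with the one-loop running $\alpha _{i}^{-1}(Q)=\alpha _{i}^{-1}(m_{Z})-\left( b_{i}/2\pi \right) \ln \left( Q/m_{Z}\right) $ and the MSSM coefficients $b=(33/5,1,-3)$ in GUT normalization. In the sketch below the starting values at $m_{Z}$ are rough experimental inputs used only for illustration:

```python
import math

# One-loop running of the GUT-normalized gauge couplings,
#   alpha_i^{-1}(Q) = alpha_i^{-1}(m_Z) - (b_i / 2 pi) ln(Q / m_Z),
# with MSSM beta coefficients b = (33/5, 1, -3).  Starting values at m_Z
# are rough experimental inputs used for illustration only.
mZ = 91.19                                   # GeV
alpha_inv_mZ = [59.0, 29.6, 8.5]             # alpha_1^-1, alpha_2^-1, alpha_3^-1
b = [33.0 / 5.0, 1.0, -3.0]

def alpha_inv(i, Q):
    return alpha_inv_mZ[i] - b[i] / (2.0 * math.pi) * math.log(Q / mZ)

# Scale at which alpha_1 and alpha_2 meet (analytic crossing of two lines):
Q_gut = mZ * math.exp(2.0 * math.pi * (alpha_inv_mZ[0] - alpha_inv_mZ[1])
                      / (b[0] - b[1]))
# How close all three inverse couplings come at that scale:
spread = (max(alpha_inv(i, Q_gut) for i in range(3))
          - min(alpha_inv(i, Q_gut) for i in range(3)))
```

With these inputs the crossing scale comes out near $2\times 10^{16}$ GeV and all three inverse couplings agree there to better than one unit, consistent with the statement in the text.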
\subsection{Minimization of the Higgs Potential}
In the MSSM, the Higgs sector has two unknown parameters, usually
taken to be $\tan \beta \equiv \upsilon _{2}/\upsilon _{1}$ and
$m_{H_{3}^{0}}$, the mass of its one physical pseudoscalar
particle. Numerous phenomenological studies have been made using
these parameters as continuous variables. However, there is an
argument for $m_{H_{3}^{0}}=m_{Z}$ at the tree level and perhaps
also $\tan \beta >\sqrt{3}$, obtained by minimizing the minimum of the
Higgs potential along a certain direction in parameter space [23].
The part of $V$ (see eq.(3.16)) involving only neutral fields
depends on four
parameters: $m_{1}^{2}$, $m_{2}^{2}$, $m_{1,2}^{2}$, and $%
g_{1}^{2}+g_{2}^{2} $. At its minimum $V_{0}$, we can choose to keep $%
m_{1}^{2}$ and $m_{2}^{2}$, but replace $m_{1,2}^{2}$ with $\tan
\beta $ through eq. (3.20) and $g_{1}^{2}+g_{2}^{2}$ with
$\upsilon _{1}^{2}+\upsilon _{2}^{2}$ through eq.(3.21). In that
case,
\end{subequations}
\begin{equation}
V_{0}=\frac{1}{2}(\upsilon _{1}^{2}+\upsilon _{2}^{2})\left( \cos
^{2}\beta -\sin ^{2}\beta \right) \left( m_{1}^{2}\cos ^{2}\beta -m_{2}^{2}\sin
^{2}\beta \right) .
\end{equation}
We now seek to minimize $V_{0}$ in parameter space. This is based
on the assumption that the dynamic mechanism responsible for the
soft breaking of supersymmetry may be such that the lowest
possible value of $V_{0}$ is automatically chosen. It is also
clear that $\upsilon _{1}^{2}+\upsilon _{2}^{2}$, $m_{1}^{2}$,
$m_{2}^{2}$ set the energy scale of the symmetry breaking and
$V_{0}$ has no lower bound as a function of these parameters. We
should therefore consider them as fixed and vary $\sin ^{2}\beta $
to minimize $V_{0}.$ Let $x\equiv \sin ^{2}\beta $, then
\begin{equation}
\frac{\partial V_{0}}{\partial x}=\frac{1}{2}\left( \upsilon
_{1}^{2}+\upsilon _{2}^{2}\right) [-\left(
3m_{1}^{2}+m_{2}^{2}\right) +4\left( m_{1}^{2}+m_{2}^{2}\right)
x],
\end{equation}
and
\begin{equation}
\frac{\partial ^{2}V_{0}}{\partial x^{2}}=2\left( \upsilon
_{1}^{2}+\upsilon _{2}^{2}\right) \left(
m_{1}^{2}+m_{2}^{2}\right) .
\end{equation}
Hence, the minimization of $V_{0}$ is achieved if
\begin{equation}
x=\frac{3m_{1}^{2}+m_{2}^{2}}{4\left( m_{1}^{2}+m_{2}^{2}\right) }
\end{equation}
and $m_{1}^{2}+m_{2}^{2}>0$, which is consistent with eq.$\left(
3.22\right) .$ Using eq. $\left( 3.21\right) $ we then find
\begin{equation}
m_{1}^{2}+m_{2}^{2}=\frac{1}{2}\left( g_{1}^{2}+g_{2}^{2}\right)
\left( \upsilon _{1}^{2}+\upsilon _{2}^{2}\right) ,
\end{equation}
or equivalently,%
\begin{equation}
m_{H_{3}^{0}}=m_{Z}\RIfM@\expandafter\text@\else\expandafter\mbox\fi{.}
\end{equation}
This implies
\begin{equation}
m_{H^{\pm }}=\sqrt{m_{W}^{2}+m_{Z}^{2}}\approx 121GeV
\end{equation}
and
\begin{equation}
m_{H_{1}^{0},H_{2}^{0}}=m_{Z}\sqrt{1\mp \sin 2\beta }.
\end{equation}
From eq.$\left( 3.58\right) $, we find
\begin{equation}
\tan ^{2}\beta =\frac{3m_{1}^{2}+m_{2}^{2}}{m_{1}^{2}+3m_{2}^{2}},
\end{equation}
which shows that if $m_{1}^{2}>0$ and $m_{2}^{2}>0$, then $\frac{1}{\sqrt{3}}%
<\tan \beta <\sqrt{3}$. However, because $\phi _{2}$ couples to the \textit{%
top} quark with a large Yukawa coupling, $m_{2}^{2}$ is expected
to differ from $m_{1}^{2}$ by a large negative contribution from
the
renormalization-group equations, hence the case $m_{1}^{2}>0$ and $%
m_{2}^{2}<0$ should be considered. We then obtain
\begin{equation}
\tan \beta >\sqrt{3}\approx 1.732,
\end{equation}
where $m_{1}^{2}>3\left\vert m_{2}^{2}\right\vert $ has also been
assumed, or else $V_{0}$ would have been minimized at $\sin
^{2}\beta >1$, which is impossible. Using eq.$\left( 3.62\right) $
we find $m_{H_{1}^{0}}>33$ GeV and
$m_{H_{2}^{0}}<125$ GeV, with the constraint that $m_{H_{1}^{0}}^{2}+m_{H_{2}^{0}}^{2}=2m_{Z}^{2}$. Experimentally,
there is no evidence for the existence of any of the five scalar
particles of the MSSM from $Z$ decay or in any other process.
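The minimization argument of this subsection can be checked numerically. Dropping the overall factor $(\upsilon _{1}^{2}+\upsilon _{2}^{2})/2$ and writing $x=\sin ^{2}\beta $, one has $V_{0}(x)=(1-2x)\left( m_{1}^{2}-(m_{1}^{2}+m_{2}^{2})x\right) $. The sketch below uses illustrative values of $m_{1}^{2}$ and $m_{2}^{2}$ and confirms both the minimizer and the $\tan ^{2}\beta $ relation:

```python
# Check of the minimization above: with x = sin^2(beta) and the overall
# factor (v1^2 + v2^2)/2 dropped, V0(x) = (1 - 2x)(m1^2 - (m1^2 + m2^2) x).
# The mass-squared values are illustrative; only m1^2 + m2^2 > 0 is required.
m1sq, m2sq = 4.0, 1.0

def V0(x):
    return (1.0 - 2.0 * x) * (m1sq - (m1sq + m2sq) * x)

x_star = (3.0 * m1sq + m2sq) / (4.0 * (m1sq + m2sq))       # predicted minimizer
x_scan = min((i / 10000.0 for i in range(10001)), key=V0)  # brute-force scan
tan2beta = (3.0 * m1sq + m2sq) / (m1sq + 3.0 * m2sq)       # = x*/(1 - x*)
assert abs(x_scan - x_star) < 1e-3
assert abs(tan2beta - x_star / (1.0 - x_star)) < 1e-12
```

For these values $x^{\ast }=13/20$, and since $m_{1}^{2}>0$ and $m_{2}^{2}>0$, $\tan \beta =\sqrt{13/7}$ indeed falls between $1/\sqrt{3}$ and $\sqrt{3}$, as stated in the text.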
\subsection{Diophantine Analysis of the Higgs Boson}
Diophantine quantization involves treating mass relations as
Diophantine equations and seeking solutions in integers, analogous
to the sets $\left( 3,4,5\right) $, $\left( 5,12,13\right) $, etc., for the Pythagorean equation $x^{2}+y^{2}=z^{2}$. It was first applied to the Gell-Mann-Okubo
meson-mass relation [24]
\begin{equation}
m_{\pi }^{2}+3m_{\eta }^{2}=4m_{K}^{2},
\end{equation}
for which the simplest nontrivial solution is the set $\left( 2,8,7\right) $. The procedure not only gave integers proportional to the
experimental masses $m_{\pi }=135-140$ MeV, $m_{\eta }=547$ MeV,
$m_{K}=494-498$ MeV, but
also set a unit mass, the GCF of the three meson masses, of $70$ MeV $\left( =\left( \hbar c/e^{2}\right) m_{e}c^{2}\right) $, as
originally proposed by Nambu.
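The integer solution quoted above is easy to verify by brute force; the following sketch searches small integers for nontrivial solutions of the Gell-Mann-Okubo relation:

```python
# Brute-force search for small nontrivial integer solutions of the
# Gell-Mann-Okubo relation x^2 + 3 y^2 = 4 z^2 (x ~ m_pi, y ~ m_eta, z ~ m_K),
# excluding the trivial family x = y = z.
solutions = [
    (x, y, z)
    for z in range(1, 10)
    for y in range(1, 10)
    for x in range(1, 10)
    if x * x + 3 * y * y == 4 * z * z and not (x == y == z)
]
# Within this range the only solution is (2, 8, 7): 4 + 192 = 196 = 4 * 49,
# reproducing the meson mass ratios in units of roughly 70 MeV.
```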
There is an analogy to this in the standard model. If one looks
carefully at the values of the $W$ and $Z$ masses
\begin{equation}
m_{W}=80.22\pm 0.26GeV,
\end{equation}
\begin{equation}
m_{Z}=91.173\pm 0.020GeV,
\end{equation}
one finds that, within experimental limits,
\begin{equation}
\cos \theta _{W}=m_{W}/m_{Z}=15/17.
\end{equation}
Not only is this the ratio of two integers, but these two
particular integers $\left( 15,17\right) $ happen to be two
sides of a right-angled triangle, so that $\theta _{W}$ is, quite
interestingly, a rational angle. A simple consequence of this,
which we shall use below, is that the sine and cosine of such an
angle must be rational.
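This numerical coincidence can be checked directly; the sketch below confirms that $15/17$ lies within the quoted experimental band for $m_{W}/m_{Z}$ and that $\left( 8,15,17\right) $ is a Pythagorean triple:

```python
# Checking that cos(theta_W) = m_W / m_Z = 15/17 is consistent with the
# quoted measurements, and that (8, 15, 17) is a Pythagorean triple, so the
# sine (8/17) and cosine (15/17) of theta_W are both rational.
mW, dmW = 80.22, 0.26        # GeV
mZ, dmZ = 91.173, 0.020      # GeV

ratio_lo = (mW - dmW) / (mZ + dmZ)   # smallest ratio allowed by the errors
ratio_hi = (mW + dmW) / (mZ - dmZ)   # largest ratio allowed by the errors
assert 8**2 + 15**2 == 17**2         # right-angled triangle
assert ratio_lo <= 15.0 / 17.0 <= ratio_hi
```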
A lengthy Diophantine analysis [25] applied to the scalar Higgs
gauge boson mass relations of eq.$\left( 3.22\right) $ of minimal
supersymmetry results in the following relations
\begin{equation}
\sin \beta =\begin{cases} \frac{8}{17}, & \beta <\pi /4, \\ \frac{15}{17}, & \beta >\pi /4, \end{cases}
\end{equation}
\begin{subequations}
\begin{eqnarray}
m_{H_{3}^{0}} &=&m_{Z}=91.2GeV, \\
m_{H_{1}^{0}} &=&37.5GeV, \\
m_{H_{2}^{0}} &=&123GeV, \\
m_{H^{\pm }} &=&121.5GeV.
\end{eqnarray}
Furthermore, and most surprisingly of all, if we recall that the Weinberg angle $\theta _{W}$ is characterized by eq.$\left( 3.68\right) $, we have
a relation between the Higgs and Weinberg angles
\end{subequations}
\begin{equation}
\beta =\begin{cases} \theta _{W}, & \beta <\pi /4, \\ \frac{\pi }{2}-\theta _{W}, & \beta >\pi /4. \end{cases}
\end{equation}
Finally, if $\beta >\pi /4$, then $\sin \beta =\frac{15}{17}$, and the
infrared quasi-fixed-point solution yields a
\textit{top}-quark mass
\begin{equation}
m_{t}\cong \left( 190-210\right) \sin \beta \ \mathrm{GeV}\approx 168-175\ \mathrm{GeV},
\end{equation}
within the range of recent data [26].
\section{Conclusion}
The minimal supersymmetric standard model (MSSM) is
the simplest extension of the standard model (SM)
that includes softly broken supersymmetry (SUSY). The
word \textquotedblleft minimal\textquotedblright\ refers to the
minimal content of particles and gauge groups.
The MSSM contains a great number of free parameters, which
considerably limits the predictive power of the model. There are
some commonly used ways to reduce the number of free constants and
parameters in this theory. The most often employed method is to
obtain the values of the parameters at a scale of order $m_{W}$
by renormalization-group equations from the coupling constants of
the supergravity theories investigated at the Planck mass. Usually
such theories are much more unified and typically contain only a few
free numbers. Of course, there are also many constraints
originating from the experimental data; first of all, the masses of
the superpartners are bounded from below by their absence in
present experiments, but one can also find many more subtle
limits.
The usual MSSM framework assumptions are:
\begin{enumerate}
\item \textit{R}-parity is conserved. This assumption is commonly
made in supersymmetry studies, as it prevents protons from
decaying too quickly.
\item The lightest supersymmetric particle (LSP) is the lightest
neutralino.
\item The intergenerational mixing in the squark, slepton, and
quark sectors is small and can be neglected.
\item The four \textit{left-handed} squarks of the first two
generations are nearly degenerate at low energy with mass
$m_{\widetilde{q}}$, as are all six left-handed sleptons
with mass $m_{\widetilde{l}}$:
\end{enumerate}
\begin{eqnarray}
m_{\widetilde{u}_{L}} &\approx &m_{\widetilde{d}_{L}}\approx m_{\widetilde{c}%
_{L}}\approx m_{\widetilde{s}_{L}}\approx m_{\widetilde{q}_{L}}, \notag \\
m_{\widetilde{\nu }_{e_{L}}} &\approx &m_{\widetilde{e}_{L}}\approx m_{%
\widetilde{\nu }_{\mu _{L}}}\approx m_{\widetilde{\mu }_{L}}\approx m_{%
\widetilde{\nu }_{\tau _{L}}}\approx m_{\widetilde{\tau }_{L}}\approx m_{%
\widetilde{l}}.
\end{eqnarray}
\begin{enumerate}
\item[5.] The gaugino masses $M_{i}$, the parameter $\mu $ and $\tan
\beta $ may be taken to be real, so that $CP$ violation plays no
role.

\item[6.] Gaugino masses unify, $i.e.$, $M_{1}$ and $M_{2}$ are not
independent.
\end{enumerate}
\part{Phenomenological Calculations}
\chapter{Introduction}
\bigskip The phenomenological predictions of supersymmetry (SUSY) may be
divided into three categories [27], [28], [29]:
\begin{enumerate}
\item Reflections of the supersymmetric Lagrangian in Standard Model
(SM) phenomenology, including relations among the gauge coupling
constants from supersymmetric grand unification and the presence
of a heavy top quark and a light Higgs scalar;
\item The prediction of new particles with the correct spin and
quantum-number assignments to be superpartners of the standard
model particles; and
\item Well-defined quantitative relations among the couplings and
masses of these new particles.
\end{enumerate}
While the predictions of (1) are of great interest, their
verification is clearly no substitute for direct evidence. The
discovery of a large number of particles in category (2) would be
strong support for supersymmetry. On the other hand, the most
compelling confirmation of supersymmetry would likely be the
precise verification of the relations of category (3). This would
be especially true if, initially, only a small set of candidate
supersymmetric partners were observed.
Most discussions of supersymmetry at future high energy colliders
have concentrated on the question of particle searches. From one
point of view, this is reasonable, because the existence of
supersymmetric partners is unproven and this is a prerequisite for
any further analysis. On the other hand, the discovery of the
first evidence for supersymmetry, or for any other theoretical
extension of the standard model, will begin a program of detailed
experimental investigation of the new sector of particles required
by this extension.
Supersymmetry provides a particularly interesting subject for
studies of the detailed analysis of physics beyond the standard
model. Supersymmetric interactions are weakly coupled, and so their consequences
can be worked out straightforwardly using perturbative
computations. At the same time, supersymmetric models depend on a
large number of unknown parameters, and different choices for
these parameters yield qualitatively different realizations of
possible new physics. Thus the phenomenology of supersymmetry is
quite complex. Eventually, if supersymmetry does give a correct
model of nature, the colliders of the next generation will be
expected to determine the supersymmetric parameters, and their
values will become clues that take us a step closer to a
fundamental theory.
In the minimal supersymmetric extension of the standard model,
MSSM, among the lightest supersymmetric particles there are four
neutralinos (the supersymmetric partners of the neutral electroweak
gauge and Higgs bosons). In most scenarios, apart from the
lightest supersymmetric particle (LSP),
which is in general assumed to be the lightest neutralino ($\widetilde{\chi }%
_{1}^{o}$) (stable and invisible), the particles that could be
first
observed at future colliders are the next-to-lightest neutralino ($%
\widetilde{\chi }_{2}^{o}$) and the light chargino ($\widetilde{\chi }%
_{1}^{\pm }$) [30]. Therefore, any reasonably large supersymmetric
signal
must involve either the second lightest neutralino $\widetilde{\chi }%
_{2}^{o} $ or the lighter charginos $\widetilde{\chi }_{1}^{\pm
}$. In general, we can not assume that the second lightest
neutralino is heavier than the lighter chargino, since,
$m_{\widetilde{\chi }_{2}^{o}}$ is not
independent of $m_{\widetilde{\chi }_{1}^{o}}$ and $m_{\widetilde{\chi }%
_{1}^{\pm }}$. In fact, in the region of parameter space in which
charginos
production is accessible to the future $e^{-}e^{+}$ colliders, $m_{\widetilde{%
\chi }_{2}^{o}}$ and $m_{\widetilde{\chi }_{1}^{\pm }}$ are very
roughly degenerate, with the mass difference typically in the
range
\begin{equation*}
-10GeV\leq m_{\widetilde{\chi }_{2}^{o}}-m_{\widetilde{\chi
}_{1}^{\pm }}\leq 20GeV.
\end{equation*}
When $m_{\widetilde{\chi }_{2}^{o}}<m_{\widetilde{\chi }_{1}^{\pm }}$, it is
possible for the lighter chargino to decay through a cascade of decays to a $\widetilde{\chi }_{2}^{o}$, which in turn decays to the LSP.
The $e^{-}e^{+}$ colliders have been playing complementary roles to
the hadron colliders in supersymmetry searches. In general,
$e^{-}e^{+}$ colliders have reasonable signal rates in a clean
environment with a definite center-of-mass energy, enabling us to
perform precision measurements of particle masses, lifetimes,
and various cross-sections, while hadron colliders
provide opportunities to quickly survey the high-energy
frontier. In particular, the production of $\widetilde{\chi }_{1}^{o}\widetilde{\chi }_{2}^{o}$ pairs at $e^{-}e^{+}$ colliders could
allow the study of a wide region of the supersymmetry parameter
space.
Owing to the relatively large cross-section of two body final
state reactions, they can be used to search for supersymmetric
particles with masses up to the beam energy. In this study, the
production of certain three body final state reactions are
calculated to improve the sensitivity in searching for
supersymmetric particles.
\newpage
\begin{center}
{\huge Outline of Part Two}
\end{center}
In this part, the calculation part, the reactions' cross sections are
calculated and the corresponding curves are plotted, with the
results tabulated for convenient reference. Work goes as follows:
\begin{enumerate}
\item In chapter five, the reaction $e^- e^+ \rightarrow
H^- \widetilde{\chi }_{1}^+ \widetilde{\chi }_{1}^o$
is considered.
\item In chapter six, the reaction $e^{-}e^{+}\rightarrow h\widetilde{\chi }%
_{1}^{+}\widetilde{\chi }_{1}^{-}$ is considered.
\item In chapter seven, the reaction $e^{-}e^{+}\rightarrow
h\widetilde{\chi }_{1}^{o}\widetilde{\chi }_{1}^{o}$ is
considered$.$
\item In chapter eight, the reaction $e^{-}e^{+}\rightarrow
hH^{+}H^{-}$ is considered.
\end{enumerate}
\chapter{Production of a charged Higgs boson with a chargino and a neutralino%
}
\section{Introduction}
In this chapter, the total cross section for the process $e^-(p_1)
e^+(p_2) \rightarrow H^-(p_3) \widetilde{\chi }_{1}^+(p_4)
\widetilde{\chi }_{1}^o(p_5)$ for different topologies and
propagators (see Appendix A) is calculated and represented
graphically. There are 28 different Feynman diagrams (tree-level
approximation), and we give the matrix element corresponding
to each diagram. Diagrams with the same topology that can be
obtained by changing the indices are represented once.
Work will go as follows:
\begin{enumerate}
\item Feynman diagrams are given,
\item Diagrams with the same topology are represented once, but
have been taken into consideration when calculating the cross
section.
\item Matrix elements are written, with all squared four-momenta
defined to be the corresponding squared masses $(>0)$,
\item Matrix elements are squared,
\item An average over the initial spin polarizations of the
electron and positron pair and a sum over the final spin states of
the outgoing particles arising from each initial spin state is
carried out.
\item Results are represented graphically, and summarized in
subsequent tables.
\end{enumerate}
\section{Feynman Diagrams}
The following is the set of Feynman diagrams used to
calculate the cross section for the associated production of a
charged Higgs boson with
a chargino and a neutralino. Our momentum notation is: $e^-(p_1)$, $e^+(p_2)$, $H^-(p_3)$, $\widetilde{\chi}^+_1(p_4)$
and $\widetilde{\chi}^o_1(p_5)$.
\begin{figure}[tph]
\begin{center}
\vskip-5.5cm
\mbox{\hskip-3.5cm\centerline{\epsfig{file=feyn1,width=17cm}}}
\end{center}
\caption{Feynman diagrams for the reaction: $e^-(p_1) e^+(p_2)
\rightarrow H^-(p_3) \widetilde{\chi }_{1}^+(p_4) \widetilde{\chi
}_{1}^o(p_5)$} \label{feyn1}
\end{figure}
\begin{figure}[tph]
\begin{center}
\vskip-5.5cm
\mbox{\hskip-3.5cm\centerline{\epsfig{file=feyn2,width=17cm}}}
\caption{Cont. Feynman diagrams for the reaction: $e^-(p_1) e^+(p_2)
\rightarrow H^-(p_3) \widetilde{\chi }_{1}^+(p_4) \widetilde{\chi
}_{1}^o(p_5)$}
\end{center}
\label{feyn2}
\end{figure}
\newpage
\section{Matrix Elements}
The following is the set of matrix elements corresponding to the
Feynman diagrams in figures \ref{feyn1} and \ref{feyn2} used in
our calculations:
\begin{eqnarray*}
\mathcal{M}_{1} &=&\overline{v}(p_{2})A\gamma _{\mu
}u(p_{1})P_{\gamma
}^{\mu \nu }(p_{1}+p_{2})J(p_{3}+p_{4}-p_{5})_{\nu }D_{H^{+}}(p_{3}+p_{4})%
\overline{u}(p_{4}) \notag \\
&&(Q_{ij}^{L}P_{L}+Q_{ij}^{R}P_{R})v(p_{3})
\end{eqnarray*}
\begin{center}
\begin{eqnarray*}
\mathcal{M}_{2} &\mathbf{=}&\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }\overline{v}(p_{2})A\gamma
_{\mu }u(p_{1})P_{\gamma }^{\mu \nu
}(p_{1}+p_{2})\overline{u}(p_{4})V\gamma _{\nu
}D_{\overline{\chi }_{1}^{+}}(p_{3}+p_{4})(\not{p}_{3}+\not{p}_{4}+m_{%
\overline{\chi }_{1}^{+}}) \notag \\
&&(Q_{ij}^{L}P_{L}+Q_{ij}^{R}P_{R})v(p_{3})
\end{eqnarray*}
\end{center}
\begin{eqnarray*}
\mathcal{M}_{3} &=&\overline{v}(p_{2})\gamma _{\mu
}(B^{L}P_{L}+B^{R}P_{R})u(p_{1})P_{Z}^{\mu \nu
}(p_{1}+p_{2})H(p_{1}+p_{2})_{\nu }D_{H^{+}}(p_{3}+p_{4}) \notag \\
&&\overline{u}(p_{4})(Q_{ij}^{L}P_{L}+Q_{ij}^{R}P_{R})v(p_{3})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{4,5} &=&\RIfM@\expandafter\text@\else\expandafter\mbox\fi{ }\overline{v}(p_{2})\gamma _{\mu
}(B^{L}P_{L}+B^{R}P_{R})u(p_{1})P_{Z}^{\mu \nu }(p_{1}+p_{2})\overline{u}%
(p_{4})\gamma _{\nu }(W_{ij}^{L}P_{L}+W_{ij}^{R}P_{R}) \notag \\
&&D_{\overline{\chi }_{k}^{+}}(p_{3}+p_{4})(\not{p}_{3}+\not{p}_{4}+m_{\overline{\chi }_{k}^{+}})(Q_{ij}^{L}P_{L}+Q_{ij}^{R}P_{R})v(p_{3})
\end{eqnarray*}
where $k=1,2$.
\begin{eqnarray*}
\mathcal{M}_{6,7,8,9} &=&\overline{v}(p_{2})\gamma _{\mu
}(B^{L}P_{L}+B^{R}P_{R})u(p_{1})P_{Z}^{\mu \nu }(p_{1}+p_{2})\overline{u}%
(p_{3})\gamma _{\nu }(S_{ij}+S_{ij}^{^{\prime }}\gamma _{5}) \notag \\
&&D_{\overline{\chi }_{k}^{o}}(p_{4}+p_{5})(\not{p}_{4}+\not{p}_{5}+m_{\overline{\chi }_{k}^{o}})(Q_{ij}^{L}P_{L}+Q_{ij}^{R}P_{R})\overline{u}(p_{4})
\end{eqnarray*}
where $k=1,2,3,4$.
\begin{eqnarray*}
\mathcal{M}_{10,16,17,18,19,24,25,26} &=&\overline{v}%
(p_{2})(N_{hi}+N_{hi}^{^{\prime }}\gamma _{5})v(p_{3})D_{\widetilde{e}%
_{h}}(p_{1}-p_{3})\overline{u}(p_{3})(Q_{ij}^{L}P_{L}+Q_{ij}^{R}P_{R})
\notag \\
&&D_{\overline{\chi }_{k}^{o}}(p_{3}+p_{5})(\not{p}_{3}+\not{p}_{5}+m_{\overline{\chi }_{k}^{o}})(N_{hi}-N_{hi}^{^{\prime }}\gamma _{5})u(p_{1})
\end{eqnarray*}
where $k=1,2,3,4$ and $h=L,R$.
\begin{eqnarray*}
\mathcal{M}_{11,12,13,14,20,21,22,23} &=&\overline{v}%
(p_{2})(N_{hi}+N_{hi}^{^{\prime }}\gamma _{5})(\not{p}_{4}+\not{p}_{5}+m_{\overline{\chi }_{k}^{o}})D_{\overline{\chi }_{k}^{o}}(p_{4}+p_{5})\overline{u}(p_{4})(Q_{ij}^{L}P_{L}+Q_{ij}^{R}P_{R})
\notag \\
&&D_{\widetilde{e}_{h}}(p_{1}-p_{3})\overline{u}(p_{3})(N_{hi}-N_{hi}^{^{%
\prime }}\gamma _{5})u(p_{1})
\end{eqnarray*}
Again, $k=1,2,3,4$ and $h=L,R$.
\begin{eqnarray*}
\mathcal{M}_{27,28} &=&\overline{v}%
(p_{2})(T_{i}^{L}P_{L}+T_{i}^{R}P_{R})v(p_{4})D_{\widetilde{\nu }%
_{e}}(p_{2}-p_{4})\overline{u}(p_{3})(Q_{ij}^{L}P_{L}+Q_{ij}^{R}P_{R})(\not{p}_{3}+\not{p}_{5}+m_{\overline{\chi }_{k}^{+}}) \notag \\
&&D_{\widetilde{\chi }%
_{k}^{+}}(p_{3}+p_{5})(T_{i}^{L}P_{L}+T_{i}^{R}P_{R})u(p_{1})
\end{eqnarray*}
Here $k=1,2$.
\begin{eqnarray*}
\mathcal{M}_{15} &=&\overline{v}(p_{2})T_{i}^{R}P_{R}v(p_{4})D_{\widetilde{e}%
_{L}}(p_{2}-p_{4})L(p_{1}+p_{2}-p_{3}-p_{4})D_{\widetilde{\nu }%
_{e}}(p_{1}-p_{3}) \notag \\
&&\overline{v}(p_{3})(T_{i}^{L}P_{L}+T_{i}^{R}P_{R})u(p_{1})
\end{eqnarray*}
where $D_{X}(q)=\frac{1}{q^{2}-m_{X}^{2}}$ and
$P_{Z,\gamma }^{\mu \nu }(q)=\frac{-g^{\mu \nu }+q^{\mu }q^{\nu }/m_{Z,\gamma }^{2}}{q^{2}-m_{Z,\gamma }^{2}+im_{Z,\gamma }\Gamma _{Z}}.$
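The propagator factors just defined can be sketched numerically; in the fragment below the width $\Gamma _{Z}\approx 2.5$ GeV is an illustrative input, and for the photon the denominator reduces to $q^{2}$ (the $q^{\mu }q^{\nu }$ piece does not contribute between conserved currents):

```python
# Sketch of the propagator factors defined above.  Gamma_Z is an assumed
# illustrative width; momenta squared q2 are in GeV^2.
def D_scalar(q2, m):
    """Scalar propagator factor 1/(q^2 - m^2)."""
    return 1.0 / (q2 - m * m)

def breit_wigner(q2, m, gamma):
    """Complex factor 1/(q^2 - m^2 + i m Gamma) from the Z denominator."""
    return 1.0 / complex(q2 - m * m, m * gamma)

mZ, gammaZ = 91.19, 2.495            # GeV (width is an assumed input)
peak = abs(breit_wigner(mZ**2, mZ, gammaZ))      # on resonance: 1/(m Gamma)
off = abs(breit_wigner((2.0 * mZ) ** 2, mZ, gammaZ))
assert peak > off                    # the factor is largest on resonance
```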
\noindent For the definitions of the constants used here, the
reader is referred to Appendix A.
\section{Cross Sections}
To be able to calculate the differential cross sections, and
hence, the total cross section, we need first to obtain the
squared matrix element for each Feynman diagram, where use of the
trace theorems was made. Later an average over the initial spin
polarizations of the electron and the positron pair and the sum
over the final spin states of the outgoing particles arising from
each initial spin state is carried out. The total cross section as
a function of the center of mass energy (see Appendix
B) is then calculated. The calculations were done for the following cases:\\
$\tan \beta = 05$, $\tan \beta = 15$ and $\tan \beta = 35$, where $M_2 =
150$ or $M_2 = 300$ for each case of $\tan \beta $. All results are
given in the following figures.
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig01tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig01tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig01tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 1 in figure \ref{feyn1}} \label{fig.1}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig02tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig02tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig02tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 2 in figure \ref{feyn1}} \label{fig.2}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig03tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig03tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig03tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 3 in figure \ref{feyn1}} \label{fig.3}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig04tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig04tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig04tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 4 in figure \ref{feyn1}} \label{fig.4}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig05tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig05tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig05tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 5 in figure \ref{feyn1}} \label{fig.5}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig06tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig06tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig06tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 6 in figure \ref{feyn1}} \label{fig.6}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig07tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig07tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig07tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 7 in figure \ref{feyn1}} \label{fig.7}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig08tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig08tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig08tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 8 in figure \ref{feyn1}} \label{fig.8}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig09tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig09tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig09tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 9 in figure \ref{feyn1}} \label{fig.9}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig10tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig10tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig10tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 10 in figure \ref{feyn1}} \label{fig.10}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig11tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig11tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig11tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 11 in figure \ref{feyn1}} \label{fig.11}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig12tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig12tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig12tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 12 in figure \ref{feyn1}} \label{fig.12}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig13tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig13tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig13tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 13 in figure \ref{feyn1}} \label{fig.13}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig14tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig14tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig14tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 14 in figure \ref{feyn1}} \label{fig.14}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig15tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig15tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig15tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 15 in figure \ref{feyn2}} \label{fig.15}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig16tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig16tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig16tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 16 in figure \ref{feyn2}} \label{fig.16}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig17tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig17tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig17tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 17 in figure \ref{feyn2}} \label{fig.17}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig18tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig18tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig18tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 18 in figure \ref{feyn2}} \label{fig.18}
\end{figure}
\clearpage
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig19tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig19tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig19tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 19 in figure \ref{feyn2}} \label{fig.19}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig20tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig20tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig20tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 20 in figure \ref{feyn2}} \label{fig.20}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig21tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig21tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig21tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 21 in figure \ref{feyn2}} \label{fig.21}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig22tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig22tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig22tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 22 in figure \ref{feyn2}} \label{fig.22}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig23tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig23tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig23tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 23 in figure \ref{feyn2}} \label{fig.23}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig24tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig24tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig24tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 24 in figure \ref{feyn2}} \label{fig.24}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig25tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig25tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig25tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 25 in figure \ref{feyn2}} \label{fig.25}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig26tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig26tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig26tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 26 in figure \ref{feyn2}} \label{fig.26}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig27tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig27tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig27tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 27 in figure \ref{feyn2}} \label{fig.27}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig28tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig28tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_fig28tb35.eps}}
\vspace{0.5cm} \caption{ \small Cross sections for
diagram no. 28 in figure \ref{feyn2}} \label{fig.28}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_tot_tb05.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_tot_tb15.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=4.3truein\epsfbox{ch5_tot_tb35.eps}}
\vspace{0.5cm} \caption{ \small Total cross section for the
reaction $e^-(p1) e^+(p2) \rightarrow H^-(p3) \widetilde{\chi
}_{1}^+(p4) \widetilde{\chi }_{1}^o(p5)$} \label{total}
\end{figure}
\clearpage
\section{Conclusion}
Results of the previous section are summarized in tables
\ref{table1} and \ref{table2} for $M_2 = 150$ GeV, and tables
\ref{table3} and \ref{table4} for $M_2 = 300$ GeV.\\
From these results, it is clear that the reaction most probably proceeds
through diagram no. 5, i.e.\ through the exchange of a $Z$ boson. The maximum cross section reached the value $3.4611\times 10^{-4}$ [pb] for $\tan\beta = 5$ and $M_2 = 150$ GeV at a center of mass energy $E_{CM} = 560$ GeV. A maximum of $3.0094\times 10^{-4}$ [pb] was obtained for $\tan\beta = 15$ and $M_2 = 150$ GeV at a center of mass energy $E_{CM} = 560$ GeV, again for diagram no. 5. For $\tan\beta = 35$ and $M_2 = 150$ GeV, the cross section reached the value $2.5145\times 10^{-4}$ [pb] at a center of mass energy $E_{CM} = 580$ GeV.\\
For $M_2 = 300$ GeV, the maximum cross section reached the value $1.6853\times 10^{-5}$ [pb] for $\tan\beta = 5$ at a center of mass energy $E_{CM} = 880$ GeV, for diagram no. 2, which proceeds through the photon propagator, $\gamma$. A maximum of $1.0836\times 10^{-4}$ [pb] was obtained for $\tan\beta = 15$ and $M_2 = 300$ GeV at a center of mass energy $E_{CM} = 860$ GeV, for diagram no. 28, which proceeds through the exchange of the scalar neutrino propagator, $\widetilde{\nu}_e$.
For $\tan\beta = 35$ and $M_2 = 300$ GeV, the cross section reached the value $1.8623\times 10^{-4}$ [pb] at a center of mass energy $E_{CM} = 900$ GeV, again for diagram no. 28.\\
The total cross section achieved the following maximum values:
\begin{enumerate}
\item For $\tan\beta = 5$ and $M_2 = 150$ GeV, $\sigma_{max}$ = $7.8383\times 10^{-4}$ [pb] at $E_{CM}$ = 720 GeV.
\item For $\tan\beta = 15$ and $M_2 = 150$ GeV, $\sigma_{max}$ = $5.2422\times 10^{-4}$ [pb] at $E_{CM}$ = 820 GeV.
\item For $\tan\beta = 35$ and $M_2 = 150$ GeV, $\sigma_{max}$ = $4.3095\times 10^{-4}$ [pb] at $E_{CM}$ = 800 GeV.
\item For $\tan\beta = 5$ and $M_2 = 300$ GeV, $\sigma_{max}$ = $3.3550\times 10^{-4}$ [pb] at $E_{CM}$ = 1080 GeV.
\item For $\tan\beta = 15$ and $M_2 = 300$ GeV, $\sigma_{max}$ = $8.8536\times 10^{-5}$ [pb] at $E_{CM}$ = 960 GeV.
\item For $\tan\beta = 35$ and $M_2 = 300$ GeV, $\sigma_{max}$ = $1.3291\times 10^{-4}$ [pb] at $E_{CM}$ = 940 GeV.
\end{enumerate}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c||c|c||c|c||c|c|}
\hline
\hline
Figure No. & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 5}$} & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 15}$} & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 35}$}\\
\cline{2-7}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
1 & 1040 & 1.5779e-06 & 1140& 1.5906e-06& 1160& 1.6421e-06\\
2 & 900 & 2.0289e-06 & 880& 2.6311e-06& 920& 2.7928e-06\\
3 & 1080 & 2.2854e-07 & 1140& 2.3085e-07& 1120& 2.3836e-07\\
4 & 860 & 1.8844e-06 & 840& 2.4976e-06& 860& 2.6618e-06\\
5 & 560 & 0.00034611 & 560& 0.00030094& 580& 0.00025145\\
6 & 1000 & 5.3209e-09 & 1060& 7.9594e-09& 1060& 8.5888e-09\\
7 & 1000 & 2.2084e-10 & 1060& 5.1532e-10& 1060& 6.4742e-10\\
8 & 520 & 0.000346 & 560& 6.9141e-05& 600& 3.2496e-05\\
9 & 720 & 1.9348e-05 & 720& 3.2102e-05& 740& 3.167e-05\\
10 & 1480 & 8.8544e-08 & 1500& 6.5752e-08& 1480& 5.2112e-08\\
11 & 1420 & 1.2839e-06 & 1420& 9.6548e-07& 1460& 7.9427e-07\\
12 & 780 & 6.4527e-07 & 880& 2.0701e-07& 940& 1.1992e-07\\
13 & 800 & 2.6592e-05 & 800& 2.125e-05& 780& 1.6044e-05\\
14 & 1160 & 1.0001e-08 & 1160& 9.3249e-10& 1160& 1.4909e-10\\
15 & 1200 & 3.5258e-09 & 1020& 3.4863e-09& 1040& 3.04e-09\\
16 & 1080 & 7.6575e-08 & 1040& 6.8391e-08& 1040& 5.8846e-08\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction,
$e^-(p1) e^+(p2) \rightarrow H^-(p3) \widetilde{\chi }_{1}^+(p4)
\widetilde{\chi }_{1}^o(p5)$ for $M_2 = 150$ GeV} \label{table1}
\end{center}
\end{table}
\begin{table}[thbp]
\begin{center}
\begin{tabular}{|c||c|c||c|c||c|c|}
\hline
Figure No. & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 5}$} & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 15}$} & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 35}$}\\
\cline{2-7}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
\hline
17 & 760 & 6.1116e-07 & 800& 1.6734e-07& 820& 8.3768e-08\\
18 & 800 & 2.6135e-05 & 780& 2.0918e-05& 800& 1.5747e-05\\
19 & 1180 & 1.3204e-08 & 1060& 3.0367e-08& 1040& 4.1023e-08\\
20 & 1060 & 1.1262e-07 & 1060& 1.3572e-07& 1020& 1.3639e-07\\
21 & 780 & 9.6615e-07 & 800& 4.0522e-07& 820& 2.5265e-07\\
22 & 780 & 3.5148e-05 & 760& 4.295e-05& 780& 4.0162e-05\\
23 & 1440 & 3.3033e-07 & 1420& 5.7192e-07& 1460& 7.0432e-07\\
24 & 1440 & 1.8847e-06 & 1420& 1.9127e-06& 1460& 1.8439e-06\\
25 & 780 & 1.023e-06 & 880& 5.0101e-07& 960& 3.6067e-07\\
26 & 820 & 3.5797e-05 & 820& 4.3613e-05& 820& 4.0866e-05\\
27 & 1120 & 4.4032e-07 & 1040& 7.3516e-07& 1080& 8.5312e-07\\
28 & 800 & 0.00033039 & 780& 0.00023661& 800& 0.00018492\\
\hline
\end{tabular}
\caption{Cont. Summary of the results obtained for the reaction,
$e^-(p1) e^+(p2) \rightarrow H^-(p3) \widetilde{\chi }_{1}^+(p4)
\widetilde{\chi }_{1}^o(p5)$ for $M_2 = 150$ GeV} \label{table2}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c||c|c||c|c||c|c|}
\hline
\hline
Figure No. & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 5}$} & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 15}$} & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 35}$}\\
\cline{2-7}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
1 & 1460 & 2.8416e-06 & 1580& 2.6736e-06& 1640& 2.6225e-06\\
2 & 880 & 1.6853e-05 & 880& 2.2289e-05& 860& 2.2811e-05\\
3 & 1560 & 4.1064e-07 & 1560& 3.8557e-07& 1560& 3.7849e-07\\
4 & 860 & 8.5308e-06 & 860& 1.1303e-05& 840& 1.1522e-05\\
5 & 720 & 6.843e-06 & 720& 4.6316e-05& 700& 7.9708e-05\\
6 & 1280 & 2.0786e-09 & 1300& 2.293e-09& 1360& 2.25e-09\\
7 & 1280 & 8.5288e-10 & 1300& 9.6192e-10& 1260& 9.378e-10\\
8 & 980 & 5.045e-07 & 980& 4.6375e-07& 960& 4.2704e-07\\
9 & 1060 & 2.0822e-08 & 1100& 2.7708e-08& 1120& 2.7887e-08\\
10 & 1660 & 3.6361e-07 & 1560& 5.1425e-07& 1680& 5.5852e-07\\
11 & 1520 & 1.6809e-07 & 1460& 1.5064e-07& 1540& 1.4015e-07\\
12 & 1460 & 6.7452e-09 & 1400& 1.1721e-08& 1400& 1.3222e-08\\
13 & 1220 & 4.4132e-06 & 1280& 5.285e-06& 1280& 5.3928e-06\\
14 & 1260 & 5.8849e-09 & 1280& 8.0866e-10& 1280& 1.5408e-10\\
15 & 1620 & 9.3888e-09 & 1220& 1.6336e-08& 1240& 2.0865e-08\\
16 & 1200 & 1.8497e-08 & 1120& 2.1328e-08& 1140& 2.061e-08\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction,
$e^-(p1) e^+(p2) \rightarrow H^-(p3) \widetilde{\chi }_{1}^+(p4)
\widetilde{\chi }_{1}^o(p5)$ for $M_2 = 300$ GeV} \label{table3}
\end{center}
\end{table}
\begin{table}[thbp]
\begin{center}
\begin{tabular}{|c||c|c||c|c||c|c|}
\hline
Figure No. & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 5}$} & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 15}$} & \multicolumn{2}{c|}{$\sigma_{\tan\beta = 35}$}\\
\cline{2-7}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
\hline
17 & 1180 & 1.1424e-09 & 1120& 2.5658e-09& 1100& 3.0713e-09\\
18 & 1020 & 1.7106e-06 & 1060& 2.12e-06& 1080& 2.1515e-06\\
19 & 1640 & 3.0247e-07 & 1240& 4.1924e-07& 1200& 5.0226e-07\\
20 & 1160 & 1.0304e-08 & 1140& 8.1575e-09& 1140& 7.0367e-09\\
21 & 1160 & 1.2888e-08 & 1120& 2.5779e-08& 1160& 2.9906e-08\\
22 & 1020 & 4.1835e-07 & 1040& 3.9874e-07& 1060& 3.7029e-07\\
23 & 1600 & 1.1700e-05 & 1660& 1.3191e-05& 1640& 1.3441e-05\\
24 & 1540 & 9.3783e-08 & 1520& 5.7649e-08& 1580& 4.7951e-08\\
25 & 1480 & 7.6122e-08 & 1400& 1.1802e-07& 1480& 1.2841e-07\\
26 & 1300 & 1.0781e-06 & 1300& 9.9456e-07& 1320& 9.3037e-07\\
27 & 1040 & 3.9654e-06 & 1040& 6.3122e-06& 1060& 6.788e-06\\
28 & 880 & 1.5994e-05 & 860 & 0.00010836& 900 & 0.00018623\\
\hline
\end{tabular}
\caption{Cont. Summary of the results obtained for the reaction,
$e^-(p1) e^+(p2) \rightarrow H^-(p3) \widetilde{\chi }_{1}^+(p4)
\widetilde{\chi }_{1}^o(p5)$ for $M_2 = 300$ GeV} \label{table4}
\end{center}
\end{table}
\chapter{Production of a light neutral Higgs boson with a chargino and a neutralino}
\section{Introduction}
In this chapter, the production of a light neutral Higgs boson is
considered through the reaction, $e^{-}(p1)e^{+}(p2)\rightarrow
h(p3) \widetilde{\chi }_{1}^+(p4) \widetilde{\chi }_{1}^-(p5)$,
for different topologies and different propagators (see Appendix
A). There are a total of 13 Feynman diagrams for this reaction
(tree-level approximation), for which we give the matrix element
corresponding to each diagram. Again, diagrams with the same
topology which can be obtained by interchanging the indices are
represented once. Our work will proceed as before:
\begin{enumerate}
\item Feynman diagrams are given,
\item Diagrams with the same topology are represented once, but
have been taken into consideration when calculating the cross
section.
\item Matrix elements are written, where all squared four-momenta
are defined to be the mass squared $(>0)$,
\item Matrix elements are squared,
\item An average over the initial spin polarizations of the
electron and positron pair and a sum over the final spin states of
the outgoing particles arising from each initial spin state is
carried out.
\item Results are represented graphically, and summarized in
subsequent tables.
\end{enumerate}
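The total cross section in the last step is obtained by integrating the spin-averaged squared matrix element over the three-body Lorentz-invariant phase space, which in the standard convention (not specific to this work) reads:
\begin{eqnarray*}
d\Phi _{3}=(2\pi )^{4}\delta ^{4}\left( p_{1}+p_{2}-p_{3}-p_{4}-p_{5}\right) \prod_{i=3}^{5}\frac{d^{3}p_{i}}{(2\pi )^{3}2E_{i}}
\end{eqnarray*}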
\section{Feynman Diagrams}
The following is the set of Feynman diagrams which were used to
calculate the cross section of the associated production of a
light neutral Higgs boson with a chargino pair. Our momentum
notation is: $e^{-}(p1)$, $e^{+}(p2)$, $h(p3)$, $\widetilde{\chi
}_{1}^+(p4)$ and $\widetilde{\chi }_{1}^-(p5)$.
\begin{figure}[tph]
\begin{center}
\vskip-5.5cm
\mbox{\hskip-3.5cm\centerline{\epsfig{file=feyn3,width=17cm}}}
\end{center}
\caption{Feynman diagrams for the reaction $e^{-}(p1)
e^{+}(p2) \rightarrow h(p3) \widetilde{\chi }_{1}^+(p4)
\widetilde{\chi }_{1}^-(p5)$} \label{feyn3}
\end{figure}
\newpage
\section{Matrix Elements}
The following is the set of matrix elements corresponding to
diagrams in figure \ref{feyn3} used in our calculations:
\begin{eqnarray*}
\mathcal{M}_{1}=\overline{v}(p_{2})\gamma _{\mu }u(p_{1})\frac{ige^{2}}{%
\left( p_{1}+p_{2}\right) ^{2}+i\epsilon }\overline{u}(p4)\left(
C_{11}^{L}P_{L}+C_{11}^{R}P_{R}\right) \frac{\left( \NEG{p}_{3}+\NEG%
{p}_{4}+m_{\widetilde{\chi }_{1}^{+}}\right) }{\left(
p_{3}+p_{4}\right) ^{2}+m_{\widetilde{\chi
}_{1}^{+}}^{2}+i\epsilon }\gamma ^{\mu }v(p_{5})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{2}=\overline{v}(p_{2})\gamma _{\mu }u(p_{1})\frac{ige^{2}}{%
\left( p_{1}+p_{2}\right) ^{2}+i\epsilon }\overline{u}(p5)\gamma ^{\mu }%
\frac{\left( \NEG{p}_{3}+\NEG{p}_{5}+m_{\widetilde{\chi }_{1}^{-}}\right) }{%
\left( p_{3}+p_{5}\right) ^{2}+m_{\widetilde{\chi }_{1}^{-}}^{2}+i\epsilon }%
\left( C_{11}^{L}P_{L}+C_{11}^{R}P_{R}\right) v(p_{4})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{3} &=&\overline{v}(p_{2})\gamma _{\sigma
}(g_{V}-\gamma
_{5})u(p_{1})\frac{ig^{3}M_{Z}\sin (\beta -\alpha )}{4\cos ^{3}\theta _{w}}%
\overline{u}(p_{5})\gamma ^{\mu }\left(
O_{11}^{L}P_{L}+O_{11}^{R}P_{R}\right) v(p_{4}) \\
&&\times \frac{(g_{\mu \nu }-q_{\mu }q_{\nu }/m_{Z}^{2})(g^{\nu
\sigma
}-k^{\nu }k^{\sigma }/M_{Z}^{2})}{((p_{1}+p_{2}-p_{5})^{2}-M_{Z}^{2}+i%
\epsilon )}
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{4} &=&\overline{v}(p_{2})\gamma ^{\nu }(g_{V}-\gamma
_{5})u(p_{1})\frac{ig^{3}\cos (\alpha -\beta )}{8\cos ^{2}\theta _{w}}%
\overline{u}(p_{5})\left(
C_{11}^{A,L}P_{L}+C_{11}^{A,R}P_{R}\right) v(p_{4})
\\
&&\times \frac{(q_{\mu }-h_{\mu })}{q^{2}-M_{H_{3}}^{2}+i\epsilon }\frac{%
(g^{\mu \nu }-k^{\mu }k^{\nu }/M_{Z}^{2})}{((p_{1}+p_{2})^{2}-M_{Z}^{2}+i%
\epsilon )}
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{5} &=&\overline{v}(p_{2})\gamma ^{\nu }(g_{V}-\gamma
_{5})u(p_{1})\frac{-ig^{3}}{4\cos ^{2}\theta
_{w}((p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon
)}\overline{u}(p_{4})\left(
C_{11}^{L}P_{L}+C_{11}^{R}P_{R}\right) \\
&&\times \frac{\left( \NEG{p}_{3}+\NEG{p}_{4}+m_{\widetilde{\chi }%
_{1}^{+}}\right) }{\left( p_{3}+p_{4}\right) ^{2}+m_{\widetilde{\chi }%
_{1}^{+}}^{2}+i\epsilon }\gamma ^{\mu }\left(
O_{11}^{L}P_{L}+O_{11}^{R}P_{R}\right) v(p_{5})\left( g_{\mu \nu }-\frac{%
k_{\mu }k_{\nu }}{M_{Z}^{2}}\right)
\end{eqnarray*}
\bigskip
\begin{eqnarray*}
\mathcal{M}_{6} &=&\overline{v}(p_{2})\gamma ^{\nu }(g_{V}-\gamma
_{5})u(p_{1})\frac{-ig^{3}}{4\cos ^{2}\theta
_{w}((p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon
)}\overline{u}(p_{5})\gamma ^{\mu
}\left( O_{11}^{L}P_{L}+O_{11}^{R}P_{R}\right) \\
&&\times \frac{-(\NEG{p}_{3}+\NEG{p}_{5}+m_{\chi
_{1}^{+}})}{\left( p_{3}+p_{5}\right) ^{2}+m_{\widetilde{\chi
}_{1}^{+}}^{2}+i\epsilon }\left(
C_{11}^{L}P_{L}+C_{11}^{R}P_{R}\right) v(p_{4})\left( g_{\mu \nu }-\frac{%
k_{\mu }k_{\nu }}{M_{Z}^{2}}\right)
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{7} &=&\overline{v}(p_{2})\gamma ^{\nu }(g_{V}-\gamma
_{5})u(p_{1})\frac{-ig^{3}}{4\cos ^{2}\theta
_{w}((p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon
)}\overline{u}(p_{4})\left(
C_{12}^{L}P_{L}+C_{12}^{R}P_{R}\right) \\
&&\times \frac{\NEG{p}_{3}+\NEG{p}_{4}+m_{\chi _{2}^{+}}}{\left(
p_{3}+p_{4}\right) ^{2}+m_{\widetilde{\chi
}_{2}^{+}}^{2}+i\epsilon }\gamma
^{\mu }\left( O_{21}^{L}P_{L}+O_{21}^{R}P_{R}\right) v(p_{5})\left( g_{\mu \nu }-%
\frac{k_{\mu }k_{\nu }}{M_{Z}^{2}}\right)
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{8} &=&\overline{v}(p_{2})\gamma ^{\nu }(g_{V}-\gamma
_{5})u(p_{1})\frac{-ig^{3}}{4\cos ^{2}\theta
_{w}((p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon
)}\overline{u}(p_{5})\gamma ^{\mu
}\left( O_{12}^{L}P_{L}+O_{12}^{R}P_{R}\right) \\
&&\times \frac{\NEG{p}_{3}+\NEG{p}_{5}+m_{\chi _{2}^{+}}}{\left(
p_{3}+p_{5}\right) ^{2}+m_{\widetilde{\chi
}_{2}^{+}}^{2}+i\epsilon }\left(
C_{21}^{L}P_{L}+C_{21}^{R}P_{R}\right) v(p_{4})\left( g_{\mu \nu }-\frac{%
k_{\mu }k_{\nu }}{M_{Z}^{2}}\right)
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{9}=\frac{ig^{3}\left\vert V_{11}\right\vert ^{2}}{%
(p_{1}-p_{5})^{2}-m_{\widetilde{\nu }}^{2}}\overline{v}(p_{2})P_{L}\frac{\NEG%
{p}_{3}+\NEG{p}_{4}+m_{\widetilde{\chi }_{1}^{+}}}{((p_{3}+p_{4})^{2}-m_{%
\widetilde{\chi }_{1}^{+}}^{2}+i\epsilon )}\left(
C_{11}^{L}P_{L}+C_{11}^{R}P_{R}\right) u(p_{5})\overline{v}(p_{4})P_{R}%
\overline{u}(p_{1})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{10}=\frac{ig^{3}\left\vert V_{11}\right\vert
\left\vert
V_{21}\right\vert }{(p_{1}-p_{5})^{2}-m_{\widetilde{\nu }}^{2}}\overline{v}%
(p_{2})P_{L}\frac{\NEG{p}_{3}+\NEG{p}_{4}+m_{\widetilde{\chi }_{1}^{+}}}{%
((p_{3}+p_{4})^{2}-m_{\widetilde{\chi }_{1}^{+}}^{2}+i\epsilon )
}\left(
C_{21}^{L}P_{L}+C_{21}^{R}P_{R}\right) u(p_{5})\overline{v}(p_{4})P_{R}%
\overline{u}(p_{1})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{11}=\frac{ig^{3}M_{W}\sin (\alpha +\beta )\left\vert
V_{11}\right\vert ^{2}}{2\cos ^{2}\theta _{w}((p_{1}-p_{5})^{2}-m_{%
\widetilde{\nu }}^{2})((p_{1}-p_{4})^{2}-m_{\widetilde{\nu }}^{2})}\overline{%
v}(p_{2})P_{L}u(p_{5})\overline{v}(p_{4})P_{R}\overline{u}(p_{1})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{12}=\frac{ig^{3}\left\vert V_{11}\right\vert
\left\vert
V_{21}\right\vert }{((p_{1}-p_{4})^{2}-m_{\widetilde{\nu }}^{2})}\overline{v}%
(p_{2})P_{L}u(p_{5})\overline{v}(p_{4})\left(
C_{21}^{L}P_{L}+C_{21}^{R}P_{R}\right) \frac{-(\NEG{p}_{3}+\NEG{p}_{5})+m_{%
\widetilde{\chi }_{2}^{+}}}{((p_{3}+p_{5})^{2}-m_{\widetilde{\chi }%
_{2}^{+}}^{2}+i\epsilon )}P_{R}\overline{u}(p_{1})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{13}=\frac{ig^{3}\left\vert V_{11}\right\vert ^{2}}{%
((p_{1}-p_{4})^{2}-m_{\widetilde{\nu }}^{2})}\overline{v}(p_{2})P_{L}u(p_{5})%
\overline{v}(p_{4})\left( C_{11}^{L}P_{L}+C_{11}^{R}P_{R}\right) \frac{-(\NEG%
{p}_{3}+\NEG{p}_{5})+m_{\widetilde{\chi }_{1}^{+}}}{((p_{3}+p_{5})^{2}-m_{%
\widetilde{\chi }_{1}^{+}}^{2}+i\epsilon )}P_{R}\overline{u}(p_{1})
\end{eqnarray*}
\noindent For the definitions of the constants used here, the
reader is referred to Appendix A.
\section{Cross Sections}
As before, to calculate the differential cross sections, and
hence, the total cross section, we need first to obtain the
squared matrix element for each Feynman diagram, where use of the
trace theorems was made. Later an average over the initial spin
polarizations of the electron and the positron pair and the sum
over the final spin states of the outgoing particles arising from
each initial spin state is carried out. The total cross section as
a function of the center-of-mass energy (see Appendix
B) is then calculated.\\
Calculations were done with the following set of parameters:\\
$\tan\beta = 10$ and $\tan\beta = 15$, with $M_2 = 150$ GeV or $M_2 = 300$ GeV in each case.\\
All results are given in the following figures.
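Note that each curve in the figures below vanishes at the kinematic threshold, where the center-of-mass energy equals the sum of the final-state masses:
\begin{eqnarray*}
E_{CM}=\sqrt{\left( p_{1}+p_{2}\right) ^{2}}\geq m_{h}+2m_{\widetilde{\chi }_{1}^{+}}
\end{eqnarray*}
\noindent so the onset of each cross section reflects the light Higgs and chargino masses implied by the chosen parameters.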
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig01tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig01tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 1 in figure \ref{feyn3}} \label{hXX1}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig02tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig02tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 2 in figure \ref{feyn3}} \label{hXX2}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig03tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig03tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 3 in figure \ref{feyn3}} \label{hXX3}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig04tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig04tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 4 in figure \ref{feyn3}} \label{hXX4}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig05tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig05tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 5 in figure \ref{feyn3}} \label{hXX5}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig06tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig06tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 6 in figure \ref{feyn3}} \label{hXX6}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig07tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig07tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 7 in figure \ref{feyn3}} \label{hXX7}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig08tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig08tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 8 in figure \ref{feyn3}} \label{hXX8}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig09tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig09tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 9 in figure \ref{feyn3}} \label{hXX09}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig10tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig10tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 10 in figure \ref{feyn3}} \label{hXX10}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig11tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig11tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 11 in figure \ref{feyn3}} \label{hXX11}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig12tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig12tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 12 in figure \ref{feyn3}} \label{hXX12}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig13tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_fig13tb15.eps}}
\vspace{0.5cm} \caption{\small Cross sections for
diagram no. 13 in figure \ref{feyn3}} \label{hXX13}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_tot_tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch6_tot_tb15.eps}}
\vspace{0.5cm} \caption{\small Total cross sections for the
reaction $e^{-}e^{+}\rightarrow h \widetilde{\chi }_{1}^+
\widetilde{\chi }_{1}^-$} \label{hXXtot}
\end{figure}
\clearpage
\section{Conclusion}
Results of the previous section are summarized in tables
\ref{table5} and \ref{table6} for $M_2 = 150$ GeV and $M_2 = 300$
GeV respectively.\\
From these results, it was found that the maximum cross section obtained is $2.0371\times 10^{-3}$ [pb] for $\tan\beta = 10$ and $M_2 = 150$ GeV, for diagram no. 7, in which the reaction proceeds through the $Z$ boson propagator. It takes place at a center of mass energy $E_{CM}$ = 560 GeV. For $\tan\beta = 15$, the maximum cross section is $1.8340\times 10^{-3}$ [pb] for $M_2 = 150$ GeV, for diagram no. 8, in which the reaction proceeds, again, through the $Z$ boson propagator. It occurs at a center of mass energy $E_{CM}$ = 560 GeV.\\
The maximum cross section obtained for $\tan\beta = 10$ and $M_2 = 300$ GeV is $1.8605\times 10^{-3}$ [pb] at a center of mass energy $E_{CM}$ = 580 GeV, for Feynman diagram no. 7, which proceeds through the $Z$ boson propagator, while for $\tan\beta = 15$ and $M_2 = 300$ GeV the maximum cross section takes the value $5.4781\times 10^{-5}$ [pb] at a center of mass energy $E_{CM}$ = 960 GeV, for diagram no. 1, which proceeds through the photon propagator.\\
The total cross section of this reaction is $3.0430\times 10^{-3}$ [pb] for $\tan\beta = 10$ and $M_2 = 150$ GeV at a center of mass energy $E_{CM}$ = 780 GeV, and is $1.4490\times 10^{-4}$ [pb] for $\tan\beta = 15$ and $M_2 = 150$ GeV at a center of mass energy $E_{CM}$ = 920 GeV.\\
The total cross section also assumes the value $2.7681\times 10^{-3}$ [pb] for $\tan\beta = 10$ and $M_2 = 300$ GeV at a center of mass energy $E_{CM}$ = 840 GeV, and $1.0993\times 10^{-4}$ [pb] for $\tan\beta = 15$ and $M_2 = 300$ GeV at a center of mass energy $E_{CM}$ = 920 GeV.
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|c||c|c||c|c|}
\hline
\hline
Figure No.&\multicolumn{2}{c|}{$\sigma_{\tan\beta = 10}$}&\multicolumn{2}{c|}{$\sigma_{\tan\beta = 15}$}\\
\cline{2-5}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
1 & 660 & 2.8134e-05 & 960& 6.5301e-05\\
2 & 660 & 2.819e-05 & 960 & 6.5243e-05\\
3 & 600 & 1.1079e-05 & 1240& 1.1034e-06\\
4 & 1000 & 3.7658e-07 & 1720& 1.8818e-07\\
5 & 660 & 2.8988e-05 & 960& 3.4915e-05\\
6 & 660 & 2.9043e-05 & 960& 3.4883e-05\\
7 & 560 & 0.0020371 & 720& 1.1311e-05\\
8 & 560 & 0.0020166 & 560& 0.001834\\
9 & 1180 & 3.7073e-05 & 1160& 2.8931e-05\\
10 & 860 & 0.0017111 & 800& 0.0014741\\
11 & 1100 & 5.2035e-07 & 1120& 5.8349e-07\\
12 & 1140 & 3.6974e-05 & 1100& 2.885e-05\\
13 & 780 & 0.0016977 & 860& 0.0014679\\
total& 780& 0.003043& 920& 0.0001493\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction,
$e^-(p1) e^+(p2) \rightarrow h(p3) \widetilde{\chi }_{1}^+(p4)
\widetilde{\chi }_{1}^-(p5)$ for $M_2 = 150$ GeV} \label{table5}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htbp]{|c||c|c||c|c|}
\hline
\hline
Figure No.&\multicolumn{2}{c|}{$\sigma_{tan\beta = 10}$}&\multicolumn{2}{c|}{$\sigma_{tan\beta =15}$}\\
\cline{2-5}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
1 & 680 & 2.1275e-05 & 960& 5.4781e-05\\
2 & 660 & 2.1232e-05 & 960& 5.4754e-05\\
3 & 600 & 1.0909e-05 & 1340& 1.1712e-06\\
4 & 1060 & 2.1442e-07 & 1820& 1.0861e-07\\
5 & 680 & 2.1987e-05 & 960& 2.9262e-05\\
6 & 660 & 2.1988e-05 & 960& 2.9257e-05\\
7 & 580 & 0.0018605 & 780& 2.6224e-06\\
8 & 720 & 1.118e-05 & 800& 2.6398e-06\\
9 & 1280 & 4.6013e-05 & 1260& 3.9595e-05\\
10 & 840 & 2.8019e-05 & 1100& 8.4166e-06\\
11 & 1280 & 1.6293e-07 & 1340& 1.8258e-07\\
12 & 1280 & 4.5969e-05 & 1260& 3.962e-05\\
13 & 920 & 2.8152e-05 & 1160& 8.646e-06\\
total & 840 & 0.0027681 & 920 & 0.00010993\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction,
$e^-(p1) e^+(p2) \rightarrow h(p3) \widetilde{\chi }_{1}^+(p4)
\widetilde{\chi }_{1}^-(p5)$ for $M_2 = 300$ GeV} \label{table6}
\end{center}
\end{table}
\chapter{Production of a light neutral Higgs boson with a pair of neutralinos}
\section{Introduction}
In this chapter, the production of a light neutral Higgs boson in association with a pair of neutralinos is considered through the reaction, $e^{-}(p1)e^{+}(p2)\rightarrow h(p3) \widetilde{\chi }_{1}^o(p4) \widetilde{\chi }_{1}^o(p5)$, for different topologies and different propagators (see Appendix A). There are a total of 24 Feynman diagrams for this reaction (tree-level approximation), for which we give the matrix element corresponding to each diagram. Again, diagrams with the same topology, which can be obtained by interchanging the indices, are represented once.
Our work will proceed as before,
\begin{enumerate}
\item Feynman diagrams are given,
\item Diagrams with the same topology are represented once, but have been taken into consideration when calculating the cross section.
\item Matrix elements are written, all the four momenta squares are
defined to be mass squared $(>0)$,
\item Matrix elements are squared,
\item An average over the initial spin polarizations of the electron and
positron pair and a sum over the final spin states of the outgoing particles
arising from each initial spin state is carried out.
\item Results are represented graphically, and summarized in subsequent tables.
\end{enumerate}
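Step 5 above can be sketched in miniature. The amplitudes below are random complex stand-ins (purely illustrative, not the matrix elements of this chapter); only the bookkeeping is the point: sum $|\mathcal{M}|^2$ over final states and divide by the four initial $e^-e^+$ spin combinations.

```python
import random

random.seed(1)  # reproducible toy numbers

# Four initial-state spin combinations (e- helicity, e+ helicity)
initial_spins = [(-1, -1), (-1, +1), (+1, -1), (+1, +1)]
final_states = range(3)  # stand-in labels for the final spin states

# Toy complex amplitudes, one per (initial, final) combination
amps = {(i, f): complex(random.gauss(0, 1), random.gauss(0, 1))
        for i in initial_spins for f in final_states}

# Sum |M|^2 over all final states, then average over initial spins (factor 1/4)
summed = sum(abs(a) ** 2 for a in amps.values())
averaged = summed / len(initial_spins)
print(averaged)
```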
\section{Feynman Diagrams}
The following is the set of Feynman diagrams which were used to calculate
the cross section of the associated production of a light neutral Higgs boson
with a pair of neutralinos. Our momentum notation is:
$e^{-}(p1)$, $e^{+}(p2)$, $h(p3)$ $\widetilde{\chi }_{1}^o(p4)$ and $\widetilde{\chi }_{1}^o(p5)$.
\begin{figure}[tph]
\begin{center}
\vskip-5.5cm \mbox{\hskip-3.5cm\centerline{\epsfig{file=feyn4,width=17cm}}}
\end{center}
\caption{Feynman diagrams for the reaction: $e^{-}(p1)e^{+}(p2)\rightarrow h(p3) \widetilde{\chi }_{1}^o(p4) \widetilde{\chi }_{1}^o(p5)$}
\label{feyn4}
\end{figure}
\begin{figure}[tph]
\begin{center}
\vskip-5.5cm \mbox{\hskip-3.5cm\centerline{\epsfig{file=feyn5,width=17cm}}}
\caption{Cont. Feynman diagrams for the reaction: $e^{-}(p1)e^{+}(p2)\rightarrow h(p3) \widetilde{\chi }_{1}^o(p4) \widetilde{\chi }_{1}^o(p5)$}
\end{center}
\label{feyn5}
\end{figure}
\newpage
\section{Matrix Elements}
The following is the set of matrix elements corresponding to diagrams in figures \ref{feyn4} and \ref{feyn5} used in our calculations:
\begin{eqnarray*}
\mathcal{M}_{1} &=&-i\overline{v}(p_{2})\gamma ^{\mu
}(B^{L}P_{L}+B^{R}P_{R})u(p_{1})\frac{g^{\mu \nu }-k_{\mu }k_{\nu }/M_{Z}^{2}%
}{(p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon } \\&&J_{1}g^{\nu \rho }\frac{g^{\rho
\sigma }-k_{\rho }k_{\sigma }/M_{Z}^{2}}{(p_{4}+p_{5})^{2}-M_{Z}^{2}+i%
\epsilon }\times \overline{u}(p_{5})\gamma _{\sigma }(S_{11}+S_{11}^{`}\gamma
_{5})v(p_{4})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{2} &=&i\overline{v}(p_{2})\gamma ^{\mu
}(B^{L}P_{L}+B^{R}P_{R})u(p_{1})\frac{g^{\mu \nu }-k_{\mu }k_{\nu }/M_{Z}^{2}%
}{(p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon } \times \\ && G_{1}(p_{4}+p_{5}-p_{3})^{\nu}
\frac{1}{(p_{4}+p_{5})^{2}-m_{H_{3}}^{2}}\overline{u}(p_{4})(R_{311}+R_{311}^{`}\gamma _{5})v(p_{5})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{3,4,5,6} &=&\overline{v}(p_{2})\gamma ^{\mu
}(B^{L}P_{L}+B^{R}P_{R})u(p_{1})\frac{g^{\mu \nu }-p_{1}^{\mu }p_{2}^{\nu
}/M_{Z}^{2}}{(p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon }\overline{u}%
(p_{5})(R_{111}+R_{111}^{`}\gamma _{5}) \\
&&\times \frac{\NEG{p}_{3}+\NEG{p}_{5}+m_{\widetilde{\chi }_{k}^{o}}}{%
(p_{3}+p_{5})^{2}-m_{\widetilde{\chi }_{k}^{o}}^{2}+i\epsilon }\gamma _{\nu
}(S_{11}+S_{11}^{`}\gamma _{5})v(p_{4})
\end{eqnarray*}
where $k$ assumes the values $1,2,3,4$.
\begin{eqnarray*}
\mathcal{M}_{7,8,9,10} &=&-i\overline{v}(p_{2})(N_{L1}+N_{L1}^{`}\gamma _{5})%
\frac{\NEG{p}_{3}+\NEG{p}_{4}+m_{\widetilde{\chi }_{k}^{o}}}{%
(p_{3}+p_{4})^{2}-m_{\widetilde{\chi }_{k}^{o}}^{2}+i\epsilon }%
(R_{111}+R_{111}^{`}\gamma _{5})u(p_{4}) \\ && \times \frac{\NEG{p}_{1}-\NEG{p}_{5}+m_{%
\widetilde{e}}}{(p_{1}-p_{5})^{2}-m_{\widetilde{e}}^{2}+i\epsilon } \overline{u}(p_{5})(N_{L1}+N_{L1}^{`}\gamma _{5})u(p_{1})
\end{eqnarray*}
Again, here $k$ assumes the values $1,2,3,4$.
\begin{eqnarray*}
\mathcal{M}_{11} &=&-\overline{v}(p_{2})(N_{L1}+N_{L1}^{`}\gamma
_{5})u(p_{5})\frac{\NEG{p}_{2}-\NEG{p}_{5}+m_{\widetilde{e}}}{%
(p_{2}-p_{5})^{2}-m_{\widetilde{e}}^{2}+i\epsilon }\psi \frac{\NEG%
{p}_{1}-\NEG{p}_{4}+m_{\widetilde{e}}}{(p_{1}-p_{4})^{2}-m_{\widetilde{e}%
}^{2}+i\epsilon } \\
&&\times \overline{u}(p_{4})(N_{L1}+N_{L1}^{`}\gamma _{5})u(p_{1})
\end{eqnarray*}
Here, $\psi =\frac{igM_{Z}\cos (\alpha +\beta )}{\cos \theta _{w}}(\frac{1}{2%
}+\sin ^{2}\theta _{w})$.
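As an illustration, the magnitude of $\psi$ (without the overall factor of $i$) can be evaluated numerically. All inputs below are assumed round values for $g$, $M_Z$ and $\sin^2\theta_w$, together with the decoupling-limit relation $\alpha \approx \beta - \pi/2$; these are assumptions made for the example only, not values taken from the thesis.

```python
import math

def psi_coupling(g, m_z, sin2_theta_w, alpha, beta):
    """Magnitude of psi = g M_Z cos(alpha+beta)/cos(theta_w) * (1/2 + sin^2 theta_w),
    i.e. the definition in the text with the overall factor of i dropped."""
    cos_theta_w = math.sqrt(1.0 - sin2_theta_w)
    return g * m_z * math.cos(alpha + beta) / cos_theta_w * (0.5 + sin2_theta_w)

# Illustrative inputs (assumed, not thesis parameters):
g = 0.652                      # SU(2) gauge coupling
m_z = 91.19                    # GeV
sin2_theta_w = 0.231
beta = math.atan(10.0)         # tan(beta) = 10
alpha = beta - math.pi / 2.0   # decoupling-limit relation (assumption)

print(psi_coupling(g, m_z, sin2_theta_w, alpha, beta))  # in GeV
```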
\begin{eqnarray*}
\mathcal{M}_{12,13,14,15} &=&-i\overline{v}(p_{2})(N_{L1}+N_{L1}^{`}\gamma
_{5})u(p_{4})\frac{\NEG{p}_{2}-\NEG{p}_{4}+m_{\widetilde{e}}}{%
(p_{2}-p_{4})^{2}-m_{\widetilde{e}}^{2}+i\epsilon }\overline{u}%
(p_{5})(R_{111}+R_{111}^{`}\gamma _{5}) \\
&&\times \frac{\NEG{p}_{3}+\NEG{p}_{5}+m_{\widetilde{\chi }_{k}^{o}}}{%
(p_{3}+p_{5})^{2}-m_{\widetilde{\chi }_{k}^{o}}^{2}+i\epsilon }%
(N_{L1}+N_{L1}^{`}\gamma _{5})u(p_{1})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{16,17,18,19} &=&-i\overline{v}(p_{2})(N_{R1}+N_{R1}^{`}\gamma
_{5})\frac{\NEG{p}_{1}-\NEG{p}_{5}+m_{\widetilde{\chi }_{k}^{o}}}{%
(p_{1}-p_{5})^{2}-m_{\widetilde{\chi }_{k}^{o}}^{2}+i\epsilon }%
(R_{111}+R_{111}^{`}\gamma _{5})u(p_{4}) \\
&& \times \frac{\NEG{p}_{1}-\NEG{p}_{5}+m_{\widetilde{e}}}{(p_{1}-p_{5})^{2}-m_{%
\widetilde{e}}^{2}+i\epsilon }\overline{u}(p_{5})(N_{R1}+N_{R1}^{`}%
\gamma _{5})u(p_{1})
\end{eqnarray*}
As above, $k$ takes the values $1,2,3,4$.
\begin{eqnarray*}
\mathcal{M}_{20} &=&-\overline{v}(p_{2})(N_{R1}+N_{R1}^{`}\gamma
_{5})u(p_{4})\frac{\NEG{p}_{2}-\NEG{p}_{4}+m_{\widetilde{e}}}{%
(p_{2}-p_{4})^{2}-m_{\widetilde{e}}^{2}+i\epsilon }\psi \frac{\NEG%
{p}_{1}-\NEG{p}_{5}+m_{\widetilde{e}}}{(p_{1}-p_{5})^{2}-m_{\widetilde{e}%
}^{2}+i\epsilon } \\
&&\times \overline{u}(p_{5})(N_{R1}+N_{R1}^{`}\gamma _{5})u(p_{1})
\end{eqnarray*}
where, as before, $\psi =\frac{igM_{Z}\cos (\alpha +\beta )}{\cos \theta _{w}}(\frac{1}{2}+\sin ^{2}\theta _{w})$.
\begin{eqnarray*}
\mathcal{M}_{21,22,23,24} &=&\overline{v}(p_{2})(N_{R1}+N_{R1}^{`}\gamma
_{5})u(p_{5})\frac{\NEG{p}_{2}-\NEG{p}_{5}+m_{\widetilde{e}}}{%
(p_{2}-p_{5})^{2}-m_{\widetilde{e}}^{2}+i\epsilon }\overline{u}%
(p_{4})(R_{111}+R_{111}^{`}\gamma _{5}) \\
&&\times (R_{111}+R_{111}^{`}\gamma _{5})\frac{\NEG{p}_{3}+\NEG{p}_{4}+m_{%
\widetilde{\chi }_{k}^{o}}}{(p_{3}+p_{4})^{2}-m_{\widetilde{\chi }%
_{k}^{o}}^{2}+i\epsilon }(N_{R1}+N_{R1}^{`}\gamma _{5})u(p_{1})
\end{eqnarray*}
and $k=1,2,3,4$.
\noindent For the definitions of the constants used here, the
reader is referred to Appendix A.
\section{Cross Sections}
As before, to calculate the differential cross sections, and
hence, the total cross section, we need first to obtain the
squared matrix element for each Feynman diagram, where use of the
trace theorems was made. Later an average over the initial
spin polarizations of the electron and the positron pair and the
sum over the final spin states of the outgoing particles arising
from each initial spin state is carried out. The total cross
section as a function of the center of the mass energy (see Appendix
B) is then calculated.\\
Calculations were done with the following set of parameters:\\
$tan\beta = 10$, and $tan\beta = 35$ where $M_2 = 150$ GeV or $M_2 = 300$ GeV.\\
All results are given in the following figures.
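The entries in the summary tables correspond to the peak of each $\sigma(E_{CM})$ curve. A minimal sketch of how such a peak is read off a scan (the grid below is made-up data, not thesis results):

```python
def peak(scan):
    """Return the (E_CM, sigma) pair with the largest cross section."""
    return max(scan, key=lambda point: point[1])

# Toy scan: (E_CM in GeV, sigma in pb) -- illustrative numbers only
scan = [(520, 1.2e-5), (560, 2.0e-3), (600, 1.5e-3), (780, 9.0e-4)]
best_e, best_sigma = peak(scan)
print(best_e, best_sigma)  # -> 560 0.002
```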
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig01tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig01tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 1 in figure \ref{feyn4}}
\label{hxx1}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig02tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig02tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 2 in figure \ref{feyn4}}
\label{hxx2}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig03tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig03tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 3 in figure \ref{feyn4}}
\label{hxx3}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig04tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig04tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 4 in figure \ref{feyn4}}
\label{hxx4}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig05tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig05tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 5 in figure \ref{feyn4}}
\label{hxx5}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig06tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig06tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 6 in figure \ref{feyn4}}
\label{hxx6}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig07tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig07tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 7 in figure \ref{feyn4}}
\label{hxx7}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig08tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig08tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 8 in figure \ref{feyn4}}
\label{hxx8}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig09tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig09tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 9 in figure \ref{feyn4}}
\label{hxx9}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig10tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig10tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 10 in figure \ref{feyn4}}
\label{hxx10}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig11tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig11tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 11 in figure \ref{feyn4}}
\label{hxx11}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig12tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig12tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 12 in figure \ref{feyn4}}
\label{hxx12}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig13tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig13tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 13 in figure \ref{feyn4}}
\label{hxx13}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig14tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig14tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 14 in figure \ref{feyn4}}
\label{hxx14}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig15tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig15tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 15 in figure \ref{feyn5}}
\label{hxx15}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig16tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig16tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 16 in figure \ref{feyn5}}
\label{hxx16}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig17tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig17tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 17 in figure \ref{feyn5}}
\label{hxx17}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig18tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig18tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 18 in figure \ref{feyn5}}
\label{hxx18}
\end{figure}
\clearpage
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig19tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig19tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 19 in figure \ref{feyn5}}
\label{hxx19}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig20tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig20tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 20 in figure \ref{feyn5}}
\label{hxx20}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig21tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig21tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 21 in figure \ref{feyn5}}
\label{hxx21}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig22tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig22tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 22 in figure \ref{feyn5}}
\label{hxx22}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig23tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig23tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 23 in figure \ref{feyn5}}
\label{hxx23}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig24tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_fig24tb35.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 24 in figure \ref{feyn5}}
\label{hxx24}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_tot_tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch7_tot_tb35.eps}}
\vspace{0.5cm}
\caption{\small Total cross sections for
the reaction $e^{-}e^{+}\rightarrow h \widetilde{\chi }_{1}^o \widetilde{\chi }_{1}^o$}
\label{hxxtot}
\end{figure}
\clearpage
\section{Conclusion}
Results of the previous section are summarized in tables
\ref{table7} and \ref{table8} for $M_2 = 150$ GeV and in tables \ref{table9} and \ref{table10} for $M_2 = 300$ GeV respectively.
From these results, it is clear that the reaction most probably proceeds through Feynman diagram no. 5. The cross section of this diagram reaches a value of $1.5336\times 10^{-4}$ [pb] at $tan\beta$ = 10 for $M_2$ = 150 GeV, and takes the value $2.3852\times 10^{-4}$ [pb] at $tan\beta$ = 35 for $M_2$ = 150 GeV.
For $M_2$ = 300 GeV, the cross section reaches the value $3.0865\times 10^{-4}$ [pb] at $tan\beta$ = 10, and the value $4.0505\times 10^{-4}$ [pb] at $tan\beta$ = 35.
The total cross section of this reaction reaches its maximum value, $8.3901\times 10^{-4}$ [pb], at $tan\beta$ = 35 for $M_2$ = 150 GeV.
Interference effects, which are not studied here, should be taken into account when dealing with the total cross section. For example, the interference terms emerging from diagrams 20 \& 5 ($M_2 = 150$ GeV) would effectively decrease the value of the total cross section.
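That remark can be made concrete with two toy complex amplitudes (arbitrary numbers, not the actual diagram amplitudes): the coherent sum differs from the incoherent one exactly by the cross term $2\,\mathrm{Re}(\mathcal{M}_1\mathcal{M}_2^*)$, which can be negative.

```python
# Toy amplitudes (arbitrary complex numbers, purely illustrative):
m1 = 1.0 + 0.4j
m2 = -0.8 + 0.1j

incoherent = abs(m1) ** 2 + abs(m2) ** 2          # sum of individual |M|^2
interference = 2.0 * (m1 * m2.conjugate()).real   # cross term
coherent = abs(m1 + m2) ** 2                      # what actually enters sigma

assert abs(coherent - (incoherent + interference)) < 1e-12
print(coherent, incoherent)  # here the cross term is destructive
```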
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htbp]{|c||c|c||c|c|}
\hline
\hline
Figure No.&\multicolumn{2}{c|}{$\sigma_{tan\beta = 10}$}&\multicolumn{2}{c|}{$\sigma_{tan\beta =35}$}\\
\cline{2-5}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
1 & 1000 & 4.2976e-07 & 1060 & 4.8279e-07\\
2 & 920 & 3.0571e-07 & 980 & 4.8221e-08\\
3 & 820 & 8.7088e-08 & 840 & 4.3811e-08\\
4 & 760 & 8.4636e-11 & 800 & 1.1105e-10\\
5 & 520 & 0.00015336 & 540 & 0.00023852\\
6 & 720 & 9.8329e-05 & 740 & 0.00010254\\
7 & 1300 & 1.7996e-07 & 1340 & 5.4578e-08\\
8 & 1260 & 4.3651e-08 & 1340 & 2.7638e-08\\
9 & 800 & 3.3079e-07 & 800 & 5.2466e-07\\
10 & 780 & 7.8608e-05 & 780 & 5.0584e-05\\
11 & 1260 & 8.1842e-10 & 1280 & 6.0505e-10\\
12 & 1300 & 1.7990e-07 & 1340 & 5.45e-08\\
13 & 1240 & 4.3619e-08 & 1340& 2.7632e-08\\
14 & 780 & 3.3081e-07 & 800 & 5.2398e-07\\
15 & 800 & 7.8692e-05 & 800 & 5.0708e-05\\
16 & 1280 & 1.176e-06 & 1360 & 7.3614e-07\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction, $e^-(p1) e^+(p2) \rightarrow h(p3) \widetilde{\chi }_{1}^o(p4) \widetilde{\chi }_{1}^o(p5)$ for $M_2 = 150$ GeV}
\label{table7}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htbp]{|c||c|c||c|c|}
\hline
\hline
Figure No.&\multicolumn{2}{c|}{$\sigma_{tan\beta = 10}$}&\multicolumn{2}{c|}{$\sigma_{tan\beta =35}$}\\
\cline{2-5}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
17 & 1280 & 7.8242e-08 & 1320& 6.4104e-08\\
18 & 780 & 6.9236e-07 & 800 & 1.581e-06\\
19 & 780 & 0.00013973 & 800 & 0.0001293\\
20 & 1260 & 3.9622e-09 & 1280 & 6.0674e-09\\
21 & 1340 & 1.1762e-06 & 1340 & 7.3803e-07\\
22 & 1240 & 7.8012e-08 & 1340 & 6.4162e-08\\
23 & 780 & 6.9132e-07 & 780 & 1.576e-06\\
24 & 780 & 0.00013979 & 800 & 0.00012911\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction, $e^-(p1) e^+(p2) \rightarrow h(p3) \widetilde{\chi }_{1}^o(p4) \widetilde{\chi }_{1}^o(p5)$ for $M_2 = 150$ GeV}
\label{table8}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htbp]{|c||c|c||c|c|}
\hline
\hline
Figure No.&\multicolumn{2}{c|}{$\sigma_{tan\beta = 10}$}&\multicolumn{2}{c|}{$\sigma_{tan\beta =35}$}\\
\cline{2-5}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
1 & 1080 & 5.3869e-08 & 1120& 5.5643e-08\\
2 & 1040 & 3.0443e-08 & 1020& 4.506e-09\\
3 & 860 & 1.4074e-09 & 880& 6.0213e-10\\
4 & 680 & 2.7783e-07 & 680& 1.6657e-07\\
5 & 540 & 0.00011284 & 540& 0.00017088\\
6 & 800& 3.8825e-06 & 800& 5.3689e-06\\
7 & 1380 & 4.5216e-08 & 1380& 2.3421e-08\\
8 & 880 & 7.155e-06 & 940& 3.9801e-06\\
9 & 780 & 3.5995e-07 & 800& 8.0505e-07\\
10 & 840 & 0.00030865 & 840& 0.00040393\\
11 & 1260 & 1.4936e-09 & 1300& 2.0679e-09\\
12 & 1340 & 4.5271e-08 & 1360& 2.3366e-08\\
13 & 880 & 7.1686e-06 & 940& 3.9779e-06\\
14 & 800 & 3.5916e-07 & 800& 8.0607e-07\\
15 & 860 & 0.00030897 & 860& 0.00040505\\
16 & 1300 & 1.2266e-06 & 1340& 5.621e-07\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction, $e^-(p1) e^+(p2) \rightarrow h(p3) \widetilde{\chi }_{1}^o(p4) \widetilde{\chi }_{1}^o(p5)$ for $M_2 = 300$ GeV}
\label{table9}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htbp]{|c||c|c||c|c|}
\hline
\hline
Figure No.&\multicolumn{2}{c|}{$\sigma_{tan\beta = 10}$}&\multicolumn{2}{c|}{$\sigma_{tan\beta =35}$}\\
\cline{2-5}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
17 & 940 & 3.0376e-06 & 940& 1.3584e-06\\
18 & 800 & 3.7196e-06 & 800& 7.8338e-06\\
19 & 820 & 6.2326e-05 & 840& 6.9613e-05\\
20 & 1260 & 3.0118e-08 & 1300& 3.6928e-08\\
21 & 1360 & 1.2275e-06 & 1340& 5.627e-07\\
22 & 880 & 3.0427e-06 & 940& 1.3604e-06\\
23 & 780 & 3.7302e-06 & 800& 7.8293e-06\\
24 & 840 & 6.2285e-05 & 860& 6.9524e-05\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction, $e^-(p1) e^+(p2) \rightarrow h(p3) \widetilde{\chi }_{1}^o(p4) \widetilde{\chi }_{1}^o(p5)$ for $M_2 = 300$ GeV}
\label{table10}
\end{center}
\end{table}
\chapter{Production of a light neutral Higgs boson with a pair of charged Higgs bosons}
\section{Introduction}
In this chapter, the production of a light neutral Higgs boson in association with a pair of charged Higgs bosons is considered through the reaction, $e^{-}(p1)e^{+}(p2)\rightarrow h(p3) H^+(p4) H^-(p5)$, for different topologies and different propagators (see Appendix A). There are a total of 5 Feynman diagrams for this reaction (tree-level approximation), for which we give the matrix element corresponding to each diagram. Again, diagrams with the same topology, which can be obtained by interchanging the indices, are represented once.
Our work will proceed as usual,
\begin{enumerate}
\item Feynman diagrams are given,
\item Diagrams with the same topology are represented once, but have been taken into consideration when calculating the cross section.
\item Matrix elements are written, all the four momenta squares are
defined to be mass squared $(>0)$,
\item Matrix elements are squared,
\item An average over the initial spin polarizations of the electron and
positron pair and a sum over the final spin states of the outgoing particles
arising from each initial spin state is carried out.
\item Results are represented graphically, and summarized in subsequent tables.
\end{enumerate}
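Step 3 above can be checked numerically in miniature: with metric signature $(+,-,-,-)$, an on-shell four-momentum satisfies $p\cdot p = m^2 > 0$. The momenta below are arbitrary toy values chosen for illustration.

```python
import math

def minkowski_sq(p):
    """Minkowski square E^2 - |p|^2 with metric signature (+,-,-,-)."""
    e, px, py, pz = p
    return e * e - px * px - py * py - pz * pz

# A "Higgs-like" mass with arbitrary three-momentum (toy numbers, GeV)
m, px, py, pz = 125.0, 30.0, 40.0, 50.0
e = math.sqrt(m * m + px * px + py * py + pz * pz)  # on-shell energy
p = (e, px, py, pz)
print(minkowski_sq(p))  # recovers m^2 = 15625.0 up to rounding
```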
\section{Feynman Diagrams}
The following is the set of Feynman diagrams which were used to calculate
the cross section of the associated production of a light neutral Higgs boson
with a pair of charged Higgs bosons. Our momentum notation is:
$e^{-}(p1)$, $e^{+}(p2)$, $h(p3)$, $H^+(p4)$, and $H^-(p5)$.
\begin{figure}[h]
\begin{center}
\vskip-2.5cm \mbox{\hskip-3.5cm\centerline{\epsfig{file=feyn6,width=17cm}}}
\end{center}
\vskip-12cm
\caption{Feynman diagrams for the reaction: $e^{-}(p1)e^{+}(p2)\rightarrow h(p3) H^+(p4) H^-(p5)$}
\label{feyn6}
\end{figure}
\newpage
\section{Matrix Elements}
The following is the set of matrix elements corresponding to the diagrams in figure \ref{feyn6} used in our calculations:
\begin{eqnarray*}
\mathcal{M}_{1} &=&\overline{v}(\vec{p}_{2})A^{\mu }\gamma _{\mu }\frac{%
g^{\mu \nu }}{(p_{1}+p_{2})^{2}}e(p_{1}+p_{2})^{\mu }\frac{1}{%
(p_{4}+p_{5})^{2}-m_{H_{3}}^{2}} \\
&&\times g\left[ M_{w}\cos (\beta -\alpha )-\frac{M_{Z}}{2\cos \theta _{w}}%
\cos 2\beta \cos (\beta +\alpha )\right] u(p_{1})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{2} &=&\overline{v}(\vec{p}_{2})A^{\mu }\gamma _{\mu }\frac{%
g^{\mu \nu }}{(p_{1}+p_{2})^{2}}e(p_{1}+p_{2})^{\mu }\frac{1}{%
(p_{3}+p_{5})^{2}-m_{H_{3}}^{2}} \\
&&\times g\left[ M_{w}\cos (\beta -\alpha )-\frac{M_{Z}}{2\cos \theta _{w}}%
\cos 2\beta \cos (\beta +\alpha )\right] u(p_{1})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{3} &=&\overline{v}(\vec{p}_{2})\gamma _{\mu
}(B^{L}P_{L}+B^{R}P_{R})^{\mu }\frac{g^{\mu \nu }-k_{\mu }k_{\nu }/M_{Z}}{%
(p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon }J_{1}g^{\nu \rho } \\
&&\times \frac{g^{\rho \sigma }-k_{\rho }k_{\sigma }/M_{Z}}{%
(p_{3}+p_{4})^{2}-M_{Z}^{2}+i\epsilon }\frac{g\cos 2\theta _{w}}{2\cos
\theta _{w}}(p_{3}+p_{4})^{\sigma }u(p_{1})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{4} &=&-\overline{v}(\vec{p}_{2})\gamma _{\mu
}(B^{L}P_{L}+B^{R}P_{R})^{\mu }\frac{g^{\mu \nu }-k_{\mu }k_{\nu }/M_{Z}}{%
(p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon }\frac{g\cos 2\theta _{w}}{2\cos
\theta _{w}}(p_{3}+p_{4}+p_{5})^{\nu } \\
&&\times \frac{1}{(p_{4}+p_{5})^{2}-m_{H_{3}}^{2}}g\left[ M_{w}\cos (\beta
-\alpha )-\frac{M_{Z}}{2\cos \theta _{w}}\cos 2\beta \cos (\beta +\alpha )%
\right] u(p_{1})
\end{eqnarray*}
\begin{eqnarray*}
\mathcal{M}_{5} &=&-\overline{v}(\vec{p}_{2})\gamma _{\mu
}(B^{L}P_{L}+B^{R}P_{R})^{\mu }\frac{g^{\mu \nu }-k_{\mu }k_{\nu }/M_{Z}}{%
(p_{1}+p_{2})^{2}-M_{Z}^{2}+i\epsilon }\frac{g\cos 2\theta _{w}}{2\cos
\theta _{w}}(p_{3}+p_{4}+p_{5})^{\nu } \\
&&\times \frac{1}{(p_{3}+p_{5})^{2}-m_{H_{3}}^{2}}g\left[ M_{w}\cos (\beta
-\alpha )-\frac{M_{Z}}{2\cos \theta _{w}}\cos 2\beta \cos (\beta +\alpha )%
\right] u(p_{1})
\end{eqnarray*}
\noindent For the definitions of the constants used here, the
reader is referred to Appendix A.
\section{Cross Sections}
As before, to calculate the differential cross sections, and
hence, the total cross section, we need first to obtain the
squared matrix element for each Feynman diagram, where use of the
trace theorems was made. Later an average over the initial
spin polarizations of the electron and the positron pair and the
sum over the final spin states of the outgoing particles arising
from each initial spin state is carried out. The total cross
section as a function of the center of the mass energy (see Appendix
B) is then calculated.\\
Calculations were done with the following set of parameters:\\
$tan\beta = 10$, and $tan\beta = 15$ where $M_2 = 150$ GeV or $M_2 = 300$ GeV.\\
All results are given in the following figures.
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig01tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig01tb15.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 1 in figure \ref{feyn6}}
\label{hHH1}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig02tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig02tb15.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 2 in figure \ref{feyn6}}
\label{hHH2}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig03tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig03tb15.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 3 in figure \ref{feyn6}}
\label{hHH3}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig04tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig04tb15.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 4 in figure \ref{feyn6}}
\label{hHH4}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig05tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_fig05tb15.eps}}
\vspace{0.5cm}
\caption{\small Cross sections for
diagram no. 5 in figure \ref{feyn6}}
\label{hHH5}
\end{figure}
\begin{figure}[th]
\vspace{-4.5cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_Tot_tb10.eps}}
\vspace{-0.1cm}
\centerline{\epsfxsize=5.5truein\epsfbox{ch8_Tot_tb15.eps}}
\vspace{0.5cm}
\caption{\small Total cross section for the reaction $e^{-}(p1)e^{+}(p2)\rightarrow h(p3) H^+(p4) H^-(p5)$}
\label{tothHH}
\end{figure}
\clearpage
\section{Conclusion}
Results of the previous section are summarized in table
\ref{table11} for $M_2 = 150$ GeV and in table \ref{table12} for $M_2 = 300$
GeV, respectively. From these results, it is clear that the reaction
most probably proceeds through diagram no. 3 at both $tan\beta$ = 10 and $tan\beta$ = 15 for $M_2$ = 150 GeV. At these values the maximum cross sections achieved are $5.7627\times 10^{-8}$ [pb] and $5.8246\times 10^{-8}$ [pb] respectively.\\
The total cross section achieved by this reaction is $1.9263\times 10^{-7}$ [pb] at $E_{CM}$ = 820 GeV for $tan\beta$ = 10 and $M_2$ = 150 GeV, and is $5.9956\times 10^{-8}$ [pb] at $E_{CM}$ = 840 GeV for $tan\beta$ = 15 and $M_2$ = 150 GeV.\\
The maximum cross section at $M_2$ = 300 GeV, again for diagram no. 3 and for both values of $tan\beta$, that is 10 \& 15, takes the values $5.8047\times 10^{-8}$ [pb] at $E_{CM}$ = 800 GeV and $5.8883\times 10^{-8}$ [pb] at $E_{CM}$ = 820 GeV.\\
The corresponding total cross section is $2.0128\times 10^{-7}$ [pb] at $E_{CM}$ = 860 GeV for $tan\beta$ = 10 and $M_2$ = 300 GeV, and is $6.6248\times 10^{-8}$ [pb] at $E_{CM}$ = 820 GeV for $tan\beta$ = 15 and $M_2$ = 300 GeV.
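For scale, these cross sections can be translated into expected event counts. The integrated luminosity below (1 ab$^{-1}$) is an assumption chosen for illustration, not a value used in the thesis; even for the largest total cross section quoted above, fewer than one event would be expected.

```python
# Largest total cross section quoted above, in pb
sigma_pb = 2.0128e-7

# Assumed integrated luminosity: 1 ab^-1 = 1e6 pb^-1 (illustrative choice)
integrated_lumi_pb_inv = 1.0e6

expected_events = sigma_pb * integrated_lumi_pb_inv
print(expected_events)  # well below one event
```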
\vskip2cm
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htbp]{|c||c|c||c|c|}
\hline
\hline
Figure No.&\multicolumn{2}{c|}{$\sigma_{tan\beta = 10}$}&\multicolumn{2}{c|}{$\sigma_{tan\beta =15}$}\\
\cline{2-5}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
1 & 940 & 2.6354e-08 & 960& 2.9167e-11\\
2 & 940 & 2.6350e-08 & 960& 2.9131e-11\\
3 & 800 & 5.7627e-08 & 800& 5.8246e-08\\
4 & 940 & 3.8517e-09 & 960& 4.2596e-12\\
5 & 940 & 3.8512e-09 & 940& 4.2561e-12\\
total & 820 & 1.9263e-07& 840 & 5.9956e-08\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction,
$e^{-}(p1)e^{+}(p2)\rightarrow h(p3) H^+(p4) H^-(p5)$ for $M_2 = 150$ GeV} \label{table11}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{tabular}[htbp]{|c||c|c||c|c|}
\hline
\hline
Figure No.&\multicolumn{2}{c|}{$\sigma_{tan\beta = 10}$}&\multicolumn{2}{c|}{$\sigma_{tan\beta =15}$}\\
\cline{2-5}
&$E_{CM}$ (GeV)& $\sigma$ (pb)&$E_{CM}$ (GeV)& $\sigma$ (pb)\\
\hline
1 & 940 & 2.8116e-08 & 960 & 3.9597e-10\\
2 & 940 & 2.8066e-08 & 940 & 3.9717e-10\\
3 & 800 & 5.8047e-08 & 820& 5.8883e-08\\
4 & 900 & 4.1112e-09 & 960& 5.7828e-11\\
5 & 940 & 4.102e-09 & 940& 5.8048e-11\\
total & 860 & 2.0128e-07& 820 & 6.6248e-08\\
\hline
\end{tabular}
\caption{Summary of the results obtained for the reaction,
$e^{-}(p1)e^{+}(p2)\rightarrow h(p3) H^+(p4) H^-(p5)$ for $M_2 = 300$ GeV} \label{table12}
\end{center}
\end{table}
\section*{Abstract (Not appropriate in this style!)}%
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}%
\quotation
\fi
}%
}{%
}%
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}%
\@ifundefined{maketitle}{\def\maketitle#1{}}{}%
\@ifundefined{affiliation}{\def\affiliation#1{}}{}%
\@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{}%
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}%
\@ifundefined{newfield}{\def\newfield#1#2{}}{}%
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }%
\newcount\c@chapter}{}%
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}%
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}%
\@ifundefined{subsection}{\def\subsection#1%
{\par(Subsection head:)#1\par }}{}%
\@ifundefined{subsubsection}{\def\subsubsection#1%
{\par(Subsubsection head:)#1\par }}{}%
\@ifundefined{paragraph}{\def\paragraph#1%
{\par(Subsubsubsection head:)#1\par }}{}%
\@ifundefined{subparagraph}{\def\subparagraph#1%
{\par(Subsubsubsubsection head:)#1\par }}{}%
\@ifundefined{therefore}{\def\therefore{}}{}%
\@ifundefined{backepsilon}{\def\backepsilon{}}{}%
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}%
\@ifundefined{registered}{%
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}%
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\RIfM@\expandafter\text@\else\expandafter\mbox\fi{R}$}\hfil\crcr
\mathhexbox20D}}}}{}%
\@ifundefined{Eth}{\def\Eth{}}{}%
\@ifundefined{eth}{\def\eth{}}{}%
\@ifundefined{Thorn}{\def\Thorn{}}{}%
\@ifundefined{thorn}{\def\thorn{}}{}%
\def\TEXTsymbol#1{\mbox{$#1$}}%
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}%
\newdimen\theight
\@ifundefined{Column}{\def\Column{%
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}%
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{%
\rightline{\rlap{\box\z@}}%
\vss
}%
}%
}}{}%
\@ifundefined{qed}{\def\qed{%
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}%
}}{}%
\@ifundefined{cents}{\def\cents{\hbox{\rm\rlap c/}}}{}%
\@ifundefined{tciLaplace}{\def\tciLaplace{\ensuremath{\mathcal{L}}}}{}%
\@ifundefined{tciFourier}{\def\tciFourier{\ensuremath{\mathcal{F}}}}{}%
\@ifundefined{textcurrency}{\def\textcurrency{\hbox{\rm\rlap xo}}}{}%
\@ifundefined{texteuro}{\def\texteuro{\hbox{\rm\rlap C=}}}{}%
\@ifundefined{euro}{\def\euro{\hbox{\rm\rlap C=}}}{}%
\@ifundefined{textfranc}{\def\textfranc{\hbox{\rm\rlap-F}}}{}%
\@ifundefined{textlira}{\def\textlira{\hbox{\rm\rlap L=}}}{}%
\@ifundefined{textpeseta}{\def\textpeseta{\hbox{\rm P\negthinspace s}}}{}%
\@ifundefined{miss}{\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}}{}%
\@ifundefined{vvert}{\def\vvert{\Vert}}{}%
\@ifundefined{tcol}{\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column}}{}%
\@ifundefined{dB}{\def\dB{\hbox{{}}}}{}%
\@ifundefined{mB}{\def\mB#1{\hbox{$#1$}}}{}%
\@ifundefined{nB}{\def\nB#1{\hbox{#1}}}{}%
\@ifundefined{note}{\def\note{$^{\dag}}}{}%
\def\tci@fmtname{LaTeX2e}
\ifx\fmtname\tci@fmtname
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\fi
\def\alpha{{\Greekmath 010B}}%
\def\beta{{\Greekmath 010C}}%
\def\gamma{{\Greekmath 010D}}%
\def\delta{{\Greekmath 010E}}%
\def\epsilon{{\Greekmath 010F}}%
\def\zeta{{\Greekmath 0110}}%
\def\eta{{\Greekmath 0111}}%
\def\theta{{\Greekmath 0112}}%
\def\iota{{\Greekmath 0113}}%
\def\kappa{{\Greekmath 0114}}%
\def\lambda{{\Greekmath 0115}}%
\def\mu{{\Greekmath 0116}}%
\def\nu{{\Greekmath 0117}}%
\def\xi{{\Greekmath 0118}}%
\def\pi{{\Greekmath 0119}}%
\def\rho{{\Greekmath 011A}}%
\def\sigma{{\Greekmath 011B}}%
\def\tau{{\Greekmath 011C}}%
\def\upsilon{{\Greekmath 011D}}%
\def\phi{{\Greekmath 011E}}%
\def\chi{{\Greekmath 011F}}%
\def\psi{{\Greekmath 0120}}%
\def\omega{{\Greekmath 0121}}%
\def\varepsilon{{\Greekmath 0122}}%
\def\vartheta{{\Greekmath 0123}}%
\def\varpi{{\Greekmath 0124}}%
\def\varrho{{\Greekmath 0125}}%
\def\varsigma{{\Greekmath 0126}}%
\def\varphi{{\Greekmath 0127}}%
\def\nabla{{\Greekmath 0272}}
\def\FindBoldGroup{%
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}%
}
\def\Greekmath#1#2#3#4{%
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}%
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{%
\newcounter{equationnumber}
\def\mathletters{%
\addtocounter{equation}{1}
\edef\@currentlabel{\arabic{equation}}%
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}%
\edef\arabic{equation}{\@currentlabel\noexpand\alph{equation}}%
}
\def\endmathletters{%
\setcounter{equation}{\value{equationnumber}}%
}
}{}
\@ifundefined{BibTeX}{%
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}%
\@ifundefined{AmS}%
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}%
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}%
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}%
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}%
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}%
\fi
\fi
\global\tag@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\tag{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@TCItagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}}
\def\QATOP#1#2{{#1 \atop #2}}%
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}%
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}%
\def\QABOVE#1#2#3{{#2 \above#1 #3}}%
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}%
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}%
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}%
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}%
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}%
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}%
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}%
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\tint{\mathop{\textstyle \int}}%
\def\tiint{\mathop{\textstyle \iint }}%
\def\tiiint{\mathop{\textstyle \iiint }}%
\def\tiiiint{\mathop{\textstyle \iiiint }}%
\def\tidotsint{\mathop{\textstyle \idotsint }}%
\def\toint{\mathop{\textstyle \oint}}%
\def\tsum{\mathop{\textstyle \sum }}%
\def\tprod{\mathop{\textstyle \prod }}%
\def\tbigcap{\mathop{\textstyle \bigcap }}%
\def\tbigwedge{\mathop{\textstyle \bigwedge }}%
\def\tbigoplus{\mathop{\textstyle \bigoplus }}%
\def\tbigodot{\mathop{\textstyle \bigodot }}%
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}%
\def\tcoprod{\mathop{\textstyle \coprod }}%
\def\tbigcup{\mathop{\textstyle \bigcup }}%
\def\tbigvee{\mathop{\textstyle \bigvee }}%
\def\tbigotimes{\mathop{\textstyle \bigotimes }}%
\def\tbiguplus{\mathop{\textstyle \biguplus }}%
\def\dint{\mathop{\displaystyle \int}}%
\def\diint{\mathop{\displaystyle \iint}}%
\def\diiint{\mathop{\displaystyle \iiint}}%
\def\diiiint{\mathop{\displaystyle \iiiint }}%
\def\didotsint{\mathop{\displaystyle \idotsint }}%
\def\doint{\mathop{\displaystyle \oint}}%
\def\dsum{\mathop{\displaystyle \sum }}%
\def\dprod{\mathop{\displaystyle \prod }}%
\def\dbigcap{\mathop{\displaystyle \bigcap }}%
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}%
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}%
\def\dbigodot{\mathop{\displaystyle \bigodot }}%
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}%
\def\dcoprod{\mathop{\displaystyle \coprod }}%
\def\dbigcup{\mathop{\displaystyle \bigcup }}%
\def\dbigvee{\mathop{\displaystyle \bigvee }}%
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}%
\def\dbiguplus{\mathop{\displaystyle \biguplus }}%
\if@compatibility\else
\RequirePackage{amsmath}
\fi
\def\tci@exit{\makeatother\endinput}
\bgroup
\ifx\ds@amstex\relax
\message{amstex already loaded}\aftergroup\tci@exit
\else
\@ifpackageloaded{amsmath}%
{\if@compatibility\message{amsmath already loaded}\fi\aftergroup\tci@exit}
{}
\@ifpackageloaded{amstex}%
{\if@compatibility\message{amstex already loaded}\fi\aftergroup\tci@exit}
{}
\@ifpackageloaded{amsgen}%
{\if@compatibility\message{amsgen already loaded}\fi\aftergroup\tci@exit}
{}
\fi
\egroup
\typeout{TCILATEX defining AMS-like constructs in LaTeX 2.09 COMPATIBILITY MODE}
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}%
\def\FN@{\futurelet\next}%
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}%
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}%
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}%
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}%
\def\ints@{\findlimits@\ints@@}%
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}%
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int}%
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}%
\def\intic@{%
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}%
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}%
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi
\eat@}%
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}%
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}%
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}%
\def\intdots@{\mathchoice{\plaincdots@}%
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}%
\def\RIfM@{\relax\protect\ifmmode}
\def\text{\RIfM@\expandafter\text@\else\expandafter\mbox\fi}
\let\nfss@text\text
\def\text@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}%
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}%
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}%
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}%
\glb@settings}
\def\textdef@#1#2#3{\hbox{{%
\everymath{#1}%
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}%
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}%
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}%
\def\Sb{_\multilimits@}%
\def\endSb{\crcr\egroup\egroup\egroup}%
\def\Sp{^\multilimits@}%
\let\endSp\endSb
\newdimen\ex@
\[email protected]
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}%
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\overrightarrow{\mathpalette\overrightarrow@}%
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}%
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}%
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\underrightarrow{\mathpalette\underrightarrow@}%
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}%
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}%
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}%
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}%
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}%
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\projlim{\qopnamewl@{proj\,lim}}
\def\injlim{\qopnamewl@{inj\,lim}}
\def\varinjlim{\mathpalette\varlim@\rightarrowfill@}
\def\varprojlim{\mathpalette\varlim@\leftarrowfill@}
\def\varliminf{\mathpalette\varliminf@{}}
\def\varliminf@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\varlimsup{\mathpalette\varlimsup@{}}
\def\varlimsup@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}%
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\endequation{%
\ifmmode\ifinner
\iftag@
\addtocounter{equation}{-1}
$\hfil
\displaywidth\linewidth\@taggnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\else
$\hfil
\displaywidth\linewidth\@eqnnum\egroup \endtrivlist
\global\tag@false
\global\@ignoretrue
\fi
\else
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\tag@false%
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum}
$$\global\@ignoretrue
\fi
\fi\fi
}
\newif\iftag@ \tag@false
\def\tag{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@TCItagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}}
\@ifundefined{tag}{
\def\tag{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{%
\global\tag@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{%
\global\tag@true
\global\def\@taggnum{#1}}
}{}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}%
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}%
\def\binom#1#2{{#1 \choose #2}}%
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}%
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}%
\makeatother
\endinput
In this paper we give a proof of existence of normally hyperbolic invariant manifolds for maps. The construction is performed in the state space of the map. The assumptions needed for the proof are of a twofold nature. First, we require topological conditions, which follow from a suitable alignment of the coordinates (the so-called covering relations). Second, we require that our map satisfies cone conditions. The aim of the paper, though, is not to produce yet another proof of the normally hyperbolic invariant manifold theorem. Our aim is to produce a tool that can be applied in rigorous, computer-assisted proofs. To show the strength of our approach we apply our theorem to a driven logistic map introduced in \cite{BSV}. The considered map is such that standard numerical simulation gives evidence of a chaotic attractor. The example demonstrates that one has to be careful with the arithmetic in simulations, since the numerical evidence of an attractor is false. The map in fact possesses a normally hyperbolic invariant curve. This becomes apparent when simulations are performed using multiple precision computations. The strength of our method lies in the fact that even for such an example, which defeats standard numerical simulations, we are able to produce a rigorous proof of the existence of a normally hyperbolic invariant curve.
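The precision issue described above can be reproduced in a few lines of code. The sketch below is only an illustration: it uses the classical logistic map $x\mapsto 4x(1-x)$, not the driven map of \cite{BSV} introduced in Section \ref{sec:examples}, and compares a double-precision orbit with one computed to 60 significant digits using Python's standard \texttt{decimal} module.

```python
from decimal import Decimal, getcontext

def logistic_orbit_float(x0, a, n):
    """Iterate x -> a*x*(1-x) in IEEE double precision."""
    xs = [x0]
    for _ in range(n):
        xs.append(a * xs[-1] * (1.0 - xs[-1]))
    return xs

def logistic_orbit_decimal(x0, a, n, digits=60):
    """The same iteration, carried out with `digits` significant digits."""
    getcontext().prec = digits
    x, a = Decimal(x0), Decimal(a)
    xs = [x]
    for _ in range(n):
        x = a * x * (1 - x)
        xs.append(x)
    return xs

# In the chaotic regime a = 4 the rounding error of double precision is
# roughly doubled at every iterate, so the two orbits decorrelate
# completely after a few dozen steps.
N = 200
lo = logistic_orbit_float(0.1, 4.0, N)
hi = logistic_orbit_decimal("0.1", "4", N)
early = abs(lo[5] - float(hi[5]))
late = max(abs(lo[k] - float(hi[k])) for k in range(100, N + 1))
print("deviation at step 5:    ", early)
print("max deviation, 100..200:", late)
```

Since the local expansion amplifies rounding errors exponentially, after roughly sixty iterates the double-precision orbit carries no information about the true orbit; this is why conclusions drawn from standard simulations of such maps can be misleading.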
The approach to normally hyperbolic manifolds presented here is in the spirit of \cite{Ca} and \cite{CZ2}. In \cite{Ca} a topological proof of existence of invariant sets with normally hyperbolic type properties is given. In \cite{CZ2} the result is extended to prove existence of normally hyperbolic invariant manifolds. In both cases the proofs relied on the assumption that the first iterate of the map is well aligned with the stable and unstable manifolds. A similar approach was also used in \cite{CR} to give a proof of existence of a center manifold. The result in \cite{CR} is for ODEs and relies also on the fact that the hyperbolic dynamics is uniform. The main difference between our paper and the results mentioned above is that we assume that hyperbolic expansion and contraction align with the tangent spaces of the invariant manifolds only after a suitable (possibly large) number of iterates of the map. This setting is more general, and also more typical for normal hyperbolicity.
The paper is organized as follows. Section \ref{sec:setup} introduces basic notations used throughout the paper and provides the setup and an outline of our problem. Section \ref{sec:geometric} contains a geometric construction of a normally hyperbolic manifold. We first give a construction of a ``center-stable'' manifold (the term ``center-stable'' refers to the normally hyperbolic invariant manifold together with its associated
stable manifold; analogous terminology is used for the ``center-unstable''
manifold). The center-unstable manifold is obtained by a mirror construction applied to the center-stable manifold, by considering the inverse map. The intersection of the center-stable and center-unstable manifolds gives us the normally hyperbolic invariant manifold. In Section \ref{sec:ver-cond} we show how to verify the assumptions of our theorems using local bounds on the derivatives of the map. In Section \ref{sec:examples} we present our example of the driven logistic map and apply our method to it.
\section{Setup}\label{sec:setup}
We start by fixing some basic notations which we shall use throughout the
paper. The notation $B_{i}(q,r)$ will stand for a ball of radius $r$ centered
at $q$ in $\mathbb{R}^{i}.$ We will also use the notation $B_{i}=B_{i}(0,1)$.
For a set $A$ we will denote by $\overline{A}$ its closure, by
$\mathrm{int\,}A$ its interior and by $\partial A$ its boundary. For a
function $f$ we will write $\mathrm{dom}(f)$ for its domain. For points
$p=(x,y)$ we shall use the notations $\pi_{x}(p)$ and $\pi_{y}(p)$ to denote
the projections onto the $x$ and $y$ coordinates respectively.
We now introduce the setup of our problem. Let $D$ and $\mathcal{U}$ be open subsets in $\mathbb{R}^{n}$ such that
$D\subset\mathcal{U}$. Let%
\[
f:\mathcal{U}\rightarrow\mathcal{U},
\]
be a diffeomorphism. Let $u,s,c\in\mathbb{N}$ be such that $u+s+c=n.$ We
assume that there exists a diffeomorphism%
\[
\phi:\mathcal{U}\rightarrow\phi(\mathcal{U})\subset\mathbb{R}^{u}%
\times\mathbb{R}^{s}\times\Lambda
\]
such that $\phi(\overline{D})=D_{\phi}:=\overline{B}_{u}\times\overline{B}%
_{s}\times\Lambda$, and $\Lambda$ is a compact $c$-dimensional manifold
without boundary. We define $f_{\phi}:D_{\phi}\rightarrow\mathbb{R}^{u}%
\times\mathbb{R}^{s}\times\Lambda$ as
\[
f_{\phi}=\phi\circ f\circ\phi^{-1}.
\]
We assume that there exists a finite covering $\{U_{i}\}_{i\in I}$ of
$\Lambda$ and an atlas%
\[
\eta_{i}:\overline{U}_{i}\rightarrow\overline{B}_{c}.
\]
Throughout the work we will use the notation%
\[
\mathbf{B}=\overline{B}_{u}\times\overline{B}_{s}\times\overline{B}_{c}.
\]
For $i,j\in I$ we consider local maps $f_{ji}:\mathbf{B}\supset\mathrm{dom}%
(f_{ji})\rightarrow\mathbb{R}^{u}\times\mathbb{R}^{s}\times\overline{B}_{c}$
defined as%
\begin{eqnarray*}
f_{ji} & := &\tilde{\eta}_{j}\circ f_{\phi}\circ\tilde{\eta}_{i}^{-1}, \\
\tilde{\eta}_{i} & := &(\mathrm{id},\mathrm{id},\eta_{i})\qquad\text{for }i\in
I.
\end{eqnarray*}
Note that the domain of $f_{ji}$ can be empty, and will usually be smaller
than $\mathbf{B}.$ The following diagram depicts the above defined functions
and their mutual relations.
\[%
\begin{array}
[c]{ccc}%
D & \overset{f}{\rightarrow} & \mathcal{U}\\
\downarrow\phi & & \downarrow\phi\\
\overline{B}_{u}\times\overline{B}_{s}\times\Lambda & \overset{f_{\phi}%
}{\rightarrow} & \mathbb{R}^{u}\times\mathbb{R}^{s}\times\Lambda\\
\quad\downarrow\tilde{\eta}_{i} & & \quad\downarrow\tilde{\eta}_{j}\\
\mathbf{B} & \overset{f_{ji}}{\rightarrow} & \mathbb{R}^{u}\times
\mathbb{R}^{s}\times\overline{B}_{c}%
\end{array}
\]
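To make the chart construction concrete, the following sketch implements the diagram above for an illustrative choice of our own (not taken from any reference): $u=s=c=1$, $\Lambda=S^{1}$ covered by two overlapping angular charts, and a toy $f_{\phi}$ which expands in $x$, contracts in $y$ and rotates the angle rigidly.

```python
import math

# Illustrative setup (u = s = c = 1): Lambda = S^1 with two overlapping
# angular charts eta_0, eta_1 mapping arcs onto [-1, 1].  The toy map
# f_phi expands in x, contracts in y and rotates the angle by OMEGA.
OMEGA = 0.3

def f_phi(x, y, lam):
    return 2.0 * x, 0.5 * y, (lam + OMEGA) % (2.0 * math.pi)

CENTERS = [0.0, math.pi]      # chart centers on the circle
HALF = 0.6 * math.pi          # half-width of each chart; the charts overlap

def eta(i, lam):
    """Chart i: signed angular distance from the center, rescaled to [-1, 1]."""
    d = (lam - CENTERS[i] + math.pi) % (2.0 * math.pi) - math.pi
    return d / HALF

def eta_inv(i, theta):
    return (CENTERS[i] + theta * HALF) % (2.0 * math.pi)

def f_ji(j, i, x, y, theta):
    """Local representation f_ji = eta~_j o f_phi o eta~_i^{-1}; the charts
    only touch the third coordinate.  The result is meaningful only when
    the returned theta lies in [-1, 1]; otherwise the argument is outside
    dom(f_ji), which may be smaller than B."""
    X, Y, lam = f_phi(x, y, eta_inv(i, theta))
    return X, Y, eta(j, lam)

print(f_ji(0, 0, 0.25, 0.5, 0.0))
```

As in the text, a full implementation would additionally check that the returned angular coordinate satisfies $|\theta|\leq1$ and report an empty domain otherwise.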
Our task in this paper will be to find a normally hyperbolic invariant
manifold, together with its stable and unstable manifolds, within the set $D$.
We will use the following notations for our coordinates: $x\in\mathbb{R}^{u},$
$y\in\mathbb{R}^{s},$ $\theta\in\overline{B}_{c},$ $\lambda\in\Lambda.$ The
coordinate $x$ will play the role of a globally unstable direction, and the
coordinate $y$ will play the role of a stable direction for the map $f_{\phi}$
(hence the superscripts $u$ and $s$, which stand for "unstable" and "stable"
respectively). The coordinate $\lambda$ will play the role of the central
direction, in which the global dynamics is weaker than in the stable and unstable
coordinates. The notation $\theta$ will also be used for the central
direction, but it will be reserved to denote the central coordinate in the
local coordinates; i.e. $\theta=\eta_{i}(\lambda)$ for some $\lambda\in
\Lambda$ and $i\in I$.
\section{Geometric approach to invariant manifolds}\label{sec:geometric}
In this section we give the construction of a normally hyperbolic invariant
manifold. The construction is performed in the state space of our map. It is
based on the assumptions of covering relations and cone conditions. We first
give an introduction to these tools in Section \ref{sec:cov-cc}. In Section
\ref{sec:cover-cc-norm-hyp} we formulate our assumptions on the map in terms
of covering relations and cone conditions, which will imply the existence of a
normally hyperbolic manifold. In Section \ref{sec:main-res} we show how to
construct a center-stable manifold of our map. The construction of a
center-unstable manifold follows from a mirror argument. The intersection of
center-stable and center-unstable manifolds gives us a $C^{0}$ normally
hyperbolic invariant manifold. Let us state explicitly that for a normally
hyperbolic manifold which does not have an associated stable manifold, the
center-stable manifold will be the normally hyperbolic manifold itself.
An analogous statement holds also for center-unstable manifolds.
\subsection{Covering relations and cones}
\label{sec:cov-cc} Covering relations are topological tools used for proofs of
nontrivial symbolic dynamics of dynamical systems. The method is based on the
Brouwer fixed point index, and the setting is such that it allows for rigorous
numerical verification. The method has been applied in computer assisted
proofs for the H\'{e}non map and R\"{o}ssler equations \cite{Z}, \cite{CZ2},
the Lorenz equations \cite{GaZ}, the Chua circuit \cite{G} and the
Kuramoto-Sivashinsky ODE \cite{W}, amongst others. The method is based on
singling out a number of
regions, called h-sets, which have hyperbolic type properties. Using these
properties one can find orbits of the system, which shadow the h-sets along
their trajectories. The method of covering relations relies on the system
having expanding and contracting coordinates. In this section we generalize
covering relations to include also a central direction. The setup is similar
to that of \cite{Ca}, \cite{CZ}, but has been simplified. Our proofs are now
simpler and based only on continuity arguments. They no longer require the use
of degree theory, with little loss of generality.
For any $p=(x,y,\theta)\in\mathbf{B}$ and $r_{u},r_{s},r_{c}>0$ we introduce
the notation
\[
N(p,r_{u},r_{s},r_{c}):=\overline{B}_{u}(x,r_{u})\times\overline{B}%
_{s}(y,r_{s})\times\overline{B}_{c}(\theta,r_{c}).
\]
We define%
\begin{eqnarray*}
N^{-} & = & N^{-}(p,r_{u},r_{s},r_{c}):=\partial\overline{B}_{u}(x,r_{u}%
)\times\overline{B}_{s}(y,r_{s})\times\overline{B}_{c}(\theta,r_{c})\\
N^{+} & = & N^{+}(p,r_{u},r_{s},r_{c}) \\
& := &\overline{B}_{u}(x,r_{u})\times\left( (\mathbb{R}^{s}\times
\mathbb{R}^{c})\setminus(B_{s}(y,r_{s})\times B_{c}(\theta,r_{c}))\right) .
\end{eqnarray*}
We assume that all boxes $N$ which we are going to consider here are contained
in $\mathbf{B}$. We will refer to a box $N$ as a \emph{ch-set}
(center-hyperbolic set) centered at $p$.
In the following arguments we shall often consider different ch-sets. To keep
better track of our notations and to make our arguments more transparent we
shall stick to the convention that for two ch-sets $N_{1},N_{2}$ centered
respectively at $p_{1}=(x_{1},y_{1},\theta_{1})$ and $p_{2}=(x_{2}%
,y_{2},\theta_{2})$ we shall write%
\[
N_{i}=N_{i}(p_{i},r_{u}^{i},r_{s}^{i},r_{c}^{i}):=\overline{B}_{u}^{i}%
(x_{i},r_{u}^{i})\times\overline{B}_{s}^{i}(y_{i},r_{s}^{i})\times\overline
{B}_{c}^{i}(\theta_{i},r_{c}^{i})\quad\text{for }i=1,2.
\]
\begin{figure}[ptb]
\begin{center}
\includegraphics[
width=3in
]{figures/fig1.eps}
\end{center}
\caption{A ch-set $N_{1}$ covering a ch-set $N_{2}.$}%
\label{fig:covering1}%
\end{figure}
\begin{definition}
\label{def:f-covers}Let $g:\mathbf{B}\rightarrow\mathbb{R}^{u}\times
\mathbb{R}^{s}\times\overline{B}_{c}$ be a continuous function. Let
$p_{i}=(x_{i},y_{i},\theta_{i})$ for $i=1,2$ and let $N_{1}$, $N_{2}$ be two
ch-sets in $\mathbf{B}$ centered at $p_{1}$ and $p_{2}$ respectively. We say
that $N_{1}$ $g$-\emph{covers} $N_{2}$ if%
\begin{eqnarray}
g(p_{1}) & \in & \mathrm{int}(N_{2}),\label{eq:covering-cond-1}\\
\pi_{x}(g(N_{1}^{-}))\cap\overline{B}_{u}^{2}(x_{2},r_{u}^{2}) & = & \emptyset,\label{eq:covering-cond-2}\\
g(N_{1})\cap N_{2}^{+} & = & \emptyset. \label{eq:covering-cond-3}%
\end{eqnarray}
In such a case we shall write $N_{1}\overset{g}{\Longrightarrow}N_{2}.$
\end{definition}
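Conditions (\ref{eq:covering-cond-1})--(\ref{eq:covering-cond-3}) are inequalities on images of compact sets, which is what makes them suitable for computer-assisted verification. The sketch below checks them for a toy linear map $g(x,y,\theta)=(2x,y/2,\theta/2)$ of our own choosing, with $u=s=c=1$ and $N_{1}=N_{2}=[-1,1]^{3}$; the point sampling used here would, in a rigorous proof, be replaced by interval arithmetic enclosures.

```python
import itertools

# Toy map: one expanding (x), one contracting (y) and one central
# (theta, here mildly contracted) direction; N1 = N2 = [-1,1]^3,
# both centered at p1 = p2 = 0 with r_u = r_s = r_c = 1.
def g(p):
    x, y, th = p
    return (2.0 * x, 0.5 * y, 0.5 * th)

def in_interior_N2(p):
    return all(abs(c) < 1.0 for c in p)

def in_N2_plus(p):
    # N2^+ : |x| <= 1 and (y, theta) outside the open product B_s x B_c
    x, y, th = p
    return abs(x) <= 1.0 and not (abs(y) < 1.0 and abs(th) < 1.0)

grid = [k / 5.0 for k in range(-5, 6)]    # 11 sample values in [-1, 1]

# (1) the center of N1 is mapped into the interior of N2
cond1 = in_interior_N2(g((0.0, 0.0, 0.0)))

# (2) the exit faces x = +-1 of N1^- leave |x| <= 1 after applying g
cond2 = all(abs(g((sx, y, th))[0]) > 1.0
            for sx in (-1.0, 1.0) for y in grid for th in grid)

# (3) no sample point of N1 is mapped into N2^+
cond3 = all(not in_N2_plus(g(p))
            for p in itertools.product(grid, grid, grid))

print(cond1, cond2, cond3)
```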
\begin{remark}
Definition \ref{def:f-covers} is a simplified definition of a covering
relation. More general versions can be found in \cite{GaZ}, \cite{GiZ},
\cite{Z} in the setting of hyperbolicity, or in \cite{Ca}, \cite{CZ} in a
setting when additionally a central direction is included.
\end{remark}
For $\gamma=(a,b,c)\in\mathbb{R}^{3},$ and $q=(x,y,\theta)\in\mathbb{R}%
^{u}\times\mathbb{R}^{s}\times\mathbb{R}^{c}$ we define%
\[
Q_{\gamma}:\mathbb{R}^{u}\times\mathbb{R}^{s}\times\mathbb{R}^{c}%
\rightarrow\mathbb{R}%
\]%
\begin{equation}
Q_{\gamma}(q):=a\left\Vert x\right\Vert ^{2}+b\left\Vert y\right\Vert
^{2}+c\left\Vert \theta\right\Vert ^{2}. \label{eq:Q-formula}%
\end{equation}
If $a>0$ and $b,c<0,$ then for $p\in\mathbb{R}^{u}\times\mathbb{R}^{s}%
\times\mathbb{R}^{c}$ we will refer to%
\[
C(p,\gamma):=\{q:Q_{\gamma}(p-q)\geq0\}
\]
as a \emph{horizontal cone} centered at $p$ (see Figure
\ref{fig:covering-steps}).
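Since membership in $C(p,\gamma)$ is a single quadratic inequality, cone conditions are inexpensive to evaluate; a minimal sketch with $u=s=c=1$:

```python
def Q(gamma, q):
    """Q_gamma(q) = a*|x|^2 + b*|y|^2 + c*|theta|^2 for q = (x, y, theta)."""
    a, b, c = gamma
    x, y, th = q
    return a * x * x + b * y * y + c * th * th

def in_horizontal_cone(gamma, p, q):
    """q belongs to C(p, gamma) iff Q_gamma(p - q) >= 0."""
    d = tuple(pi - qi for pi, qi in zip(p, q))
    return Q(gamma, d) >= 0.0

gamma = (1.0, -1.0, -1.0)          # a > 0 and b, c < 0
p = (0.0, 0.0, 0.0)
print(in_horizontal_cone(gamma, p, (1.0, 0.5, 0.5)))   # x-dominated point
print(in_horizontal_cone(gamma, p, (0.1, 1.0, 0.0)))   # y-dominated point
```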
\begin{definition}
Let $N$ be a ch-set and $\gamma=(a,b,c)$ be such that $a>0$ and $b,c<0.$ We will
refer to a pair $(N,\gamma)$ as a \emph{ch-set with cones}.
\end{definition}
\begin{definition}
\label{def:horizontal-disc}Let $(N,\gamma)=(N((x,y,\theta),r_{u},r_{s}%
,r_{c}),\gamma)$ be a ch-set with cones. A continuous function $\mathbf{h}%
:\overline{B}_{u}(x,r_{u})\rightarrow N$ is called a \emph{horizontal disc} in
$(N,\gamma),$ iff $\pi_{x}\mathbf{h}(x)=x$ and for any $x^{\ast},x^{\ast\ast
}\in\overline{B}_{u}(x,r_{u}),$%
\begin{equation}
Q_{\gamma}(\mathbf{h}(x^{\ast})-\mathbf{h}(x^{\ast\ast}))\geq0.
\label{eq:b-horizontal-disc-ineq}%
\end{equation}
\end{definition}
\begin{lemma}
\label{lem:hor-disc-image}Let $N_{i}=N_{i}((x_{i},y_{i},\theta_{i}),r_{u}%
^{i},r_{s}^{i},r_{c}^{i})$ for $i=1,2$ and let $(N_{1},\gamma_{1})$,
$(N_{2},\gamma_{2})$ be two ch-sets with cones. Assume that%
\begin{equation}
N_{1}\overset{g}{\Longrightarrow}N_{2} \label{eq:N1-N2-covering}%
\end{equation}
and that for any $q^{\ast},q^{\ast\ast}\in N_{1}$ such that $q^{\ast}\neq
q^{\ast\ast}$ and $Q_{\gamma_{1}}(q^{\ast}-q^{\ast\ast})\geq0$ we have%
\begin{equation}
Q_{\gamma_{2}}(g(q^{\ast})-g(q^{\ast\ast}))>0. \label{eq:cone-cond-N1-N2}%
\end{equation}
If $\mathbf{h}_{1}$ is a horizontal disc in $(N_{1},\gamma_{1})$ then there
exists a horizontal disc $\mathbf{h}_{2}$ in $(N_{2},\gamma_{2})$ such that
$g(\mathbf{h}_{1}(\overline{B}_{u}^{1}(x_{1},r_{u}^{1})))\cap N_{2}%
=\mathbf{h}_{2}(\overline{B}_{u}^{2}(x_{2},r_{u}^{2})).$
\end{lemma}
\begin{proof}
Without loss of generality we assume that $p_{1}=p_{2}=0$ and that $r_{\kappa
}^{i}=1$ for $i=1,2$ and $\kappa\in\{u,s,c\}$. In other words we assume that
for $i=1,2$%
\[
N_{i}=\overline{B}_{u}^{i}\times\overline{B}_{s}^{i}\times\overline{B}_{c}%
^{i}=\overline{B}_{u}(0,1)\times\overline{B}_{s}(0,1)\times\overline{B}%
_{c}(0,1).
\]
Let $\gamma_{i}=(a_{i},b_{i},c_{i})$ for $i=1,2$ and let $\mathbf{h}$ be any
horizontal disc in $(N_{1},\gamma_{1}).$ Then by (\ref{eq:Q-formula}),
(\ref{eq:b-horizontal-disc-ineq}) and (\ref{eq:cone-cond-N1-N2}) for $x^{\ast
},x^{\ast\ast}\in\overline{B}_{u}^{1},$ $x^{\ast}\neq x^{\ast\ast}$%
\begin{equation}
a_{2}\left\Vert \pi_{x}g(\mathbf{h}(x^{\ast}))-\pi_{x}g(\mathbf{h}(x^{\ast
\ast}))\right\Vert ^{2}\geq Q_{\gamma_{2}}(g(\mathbf{h}(x^{\ast}%
))-g(\mathbf{h}(x^{\ast\ast})))>0,\label{eq:cone-cond-proof1}%
\end{equation}
which means that $\pi_{x}\circ g\circ\mathbf{h}$ is a monomorphism.
Let us write $\mathbf{h}_{1}:=\mathbf{h}$, with $\mathbf{h}_{1}(x)=(x,h_{1}%
(x))\in\overline{B}_{u}^{1}\times(\overline{B}_{s}^{1}\times\overline{B}%
_{c}^{1}),$ and for $\alpha\in\lbrack0,1]$ define a family of horizontal
discs $\mathbf{h}_{\alpha}(x)=(x,\alpha h_{1}(x)).$ Let $F_{\alpha}:\overline{B}_{u}^{1}\rightarrow
\mathbb{R}^{u}$ be a continuous family of functions defined as%
\[
F_{\alpha}(x):=\pi_{x}\circ g\circ\mathbf{h}_{\alpha}(x).
\]
We shall show that $\overline{B}_{u}^{2}\subset F_{1}(B_{u}^{1}).$ Each
$\mathbf{h}_{\alpha}$ is a horizontal disc, hence the argument leading to
(\ref{eq:cone-cond-proof1}) shows that the functions $F_{\alpha}$ are
monomorphisms, and the sets $A_{\alpha}:=F_{\alpha}(B_{u}^{1})$
are homeomorphic to balls in $\mathbb{R}^{u}$; moreover $\partial A_{\alpha
}=F_{\alpha}(\partial B_{u}^{1})$. By Definition \ref{def:horizontal-disc} of
a horizontal disc, $\mathbf{h}_{\alpha}(\partial B_{u}^{1})\subset N_{1}^{-}$.
From assumption (\ref{eq:N1-N2-covering}), by conditions
(\ref{eq:covering-cond-1}), (\ref{eq:covering-cond-2})%
\begin{eqnarray}
\pi_{x}g(0) \in \overline{B}_{u}^{2},\label{eq:g-zero-B2}\\
\partial A_{\alpha}\cap\overline{B}_{u}^{2} \subset \pi_{x}(g(N_{1}
^{-}))\cap\overline{B}_{u}^{2}=\emptyset.\label{eq:A-alpha-bd}%
\end{eqnarray}
From the fact that $0\in B_{u}^{1}$
\begin{equation}
F_{0}(0)\in F_{0}(B_{u}^{1})=A_{0}.\label{eq:inter-ne1}%
\end{equation}
Since $\mathbf{h}_{0}(0)=0,$ by (\ref{eq:g-zero-B2})
\begin{equation}
F_{0}(0)=\pi_{x}\circ g\circ\mathbf{h}_{0}(0)=\pi_{x}g(0)\in\overline{B}%
_{u}^{2}.\label{eq:inter-ne2}%
\end{equation}
From (\ref{eq:inter-ne1}) and (\ref{eq:inter-ne2}) it follows that $A_{0}%
\cap\overline{B}_{u}^{2}\neq\emptyset$. Since $\overline{B}_{u}^{2}$ is
connected and, by (\ref{eq:A-alpha-bd}), disjoint from $\partial A_{0}$, this
implies that $\overline{B}_{u}^{2}\subset A_{0}$. By continuity of $F_{\alpha}$
with respect to $\alpha$ this means that $\overline{B}_{u}^{2}\subset A_{\alpha}$
for all $\alpha\in\lbrack0,1].$ In particular $\overline{B}_{u}^{2}\subset
A_{1}=F_{1}(B_{u}^{1}).$
Since $F_{1}$ is a monomorphism and $\overline{B}_{u}^{2}\subset F_{1}%
(B_{u}^{1}),$ for any $v\in\overline{B}_{u}^{2}$ there exists a unique
$x=x(v)\in B_{u}^{1}$ such that $F_{1}(x)=v.$ We define $\mathbf{h}%
_{2}(v)=(v,h_{2}(v)):=(v,\pi_{y,\theta}\circ g\circ\mathbf{h}_{1}(x(v))).$ For
any $v^{\ast}\neq v^{\ast\ast}$, $v^{\ast},v^{\ast\ast}\in\overline{B}_{u}%
^{2}$, by (\ref{eq:b-horizontal-disc-ineq}) and (\ref{eq:cone-cond-N1-N2}) we
have%
\begin{eqnarray*}
Q_{\gamma_{2}}\left( \mathbf{h}_{2}(v^{\ast})-\mathbf{h}_{2}(v^{\ast\ast
})\right) & = & Q_{\gamma_{2}}(g\circ\mathbf{h}_{1}(x(v^{\ast}))-g\circ
\mathbf{h}_{1}(x(v^{\ast\ast})))\\
& > & 0,
\end{eqnarray*}
since $Q_{\gamma_{1}}(\mathbf{h}_{1}(x(v^{\ast}))-\mathbf{h}_{1}(x(v^{\ast
\ast})))\geq0$ by (\ref{eq:b-horizontal-disc-ineq}).
Since $Q_{\gamma_{2}}\left( \mathbf{h}_{2}(v^{\ast})-\mathbf{h}_{2}%
(v^{\ast\ast})\right) >0$%
\begin{eqnarray*}
a_{2}\left\Vert v^{\ast}-v^{\ast\ast}\right\Vert ^{2}\\
> -b_{2}\left\Vert \pi_{y}\left( h_{2}(v^{\ast})-h_{2}(v^{\ast\ast})\right) \right\Vert ^{2}%
-c_{2}\left\Vert \pi_{\theta}\left( h_{2}(v^{\ast})-h_{2}(v^{\ast\ast
})\right) \right\Vert ^{2}\\
\geq \min(-b_{2},-c_{2})\left\Vert h_{2}(v^{\ast})-h_{2}(v^{\ast\ast
})\right\Vert ^{2},
\end{eqnarray*}
and therefore $\mathbf{h}_{2}$ is continuous.
\end{proof}
\begin{figure}[ptb]
\begin{center}
\includegraphics[
height=2.6in
]{figures/Fig2.eps}
\end{center}
\caption{Covering relations for two iterates of a map $f$. For the second
iterate of the map the coordinate $x$ is expanding and $y$ is contracting (for
the first iterate of $f$ they are not). The fact that expansion in $x$ is
stronger than expansion in $\theta$ is visible from the fact that the cones
$C(f^{2}(p),\gamma_{3})$ are ``tighter'' than the cones $C(p,\gamma_{1})$.}%
\label{fig:covering-steps}%
\end{figure}
\begin{remark}
Let us note that since we have freedom of choice of the radii $r_{u},r_{s}$
and $r_{c}$ it is not necessary for $x$ to be expanding, $y$ to be contracting
and $\theta$ to have weaker dynamics for each single iterate of the map. In
Figure \ref{fig:covering-steps} we have a sketch of a situation in which $x$
becomes expanding and $y$ contracting after a second iterate. In Figure
\ref{fig:covering-steps} the coordinate $\theta$ is expanding. It will turn
out that such a scenario is acceptable for us and can be dealt with by
increasing $r_{c}$ for successive iterates.
\end{remark}
\subsection{Covering relations and cone conditions for normal hyperbolicity}
\label{sec:cover-cc-norm-hyp}
In this section we formulate our assumptions which will imply the existence of
a normally hyperbolic manifold. The assumptions are in terms of covering
relations and cones and are in the spirit of \cite{CZ}. There are two major
differences though. The first is that assumptions used in \cite{CZ} required
the system to have uniform expansion and uniform contraction for the first
iterate of the map. Here we set up our coordinates in the directions of
\emph{global} contraction and \emph{global} expansion. In the setting of
normal hyperbolicity the coordinates of global contraction and expansion need
not be contracting and expanding for the first iterates of the map. What is
important is that they dominate after a sufficiently large number of
iterates, in other words, that the Lyapunov exponents are negative or
positive, respectively. We set up our assumptions so that they allow for such
a setting. The second difference is that our setup has been significantly
simplified in comparison with \cite{CZ}. This resulted in a slight loss of
generality (we do not formulate our assumptions in terms of vector bundles as
in \cite{CZ}) but we need to consider fewer assumptions.
Let $1>R>\rho>0$ and $r>0.$ Assume that there exists a finite sequence of points
$\boldsymbol{\lambda}_{k}\in\Lambda,$ $k\in\mathbb{N}$ such that for any $k$
the set $I(k)=\{i:B_{c}(\eta_{i}(\boldsymbol{\lambda}_{k}),\rho)\subset
B_{c}(0,R)\}$ is not empty. Moreover, assume that there exists a set
$J\subset\{(i,k)|i\in I(k)\}$ such that $\Lambda\subset\bigcup_{(i,k)\in
J}\eta_{i}^{-1}(B_{c}(\eta_{i}(\boldsymbol{\lambda}_{k}),\rho)).$ For points
$(i,k)\in J$ we define sets
\[
M_{i,k}:=\overline{B}_{u}(0,r)\times\overline{B}_{s}(0,r)\times\overline
{B}_{c}(\eta_{i}(\boldsymbol{\lambda}_{k}),\rho).
\]
We will need to assume that the points $\boldsymbol{\lambda}_{k}$ are
sufficiently close to each other. We will also need to assume that $R$ and
$\rho$ are sufficiently large in comparison to $r$. This is summarized in
Assumption \ref{as:cones-setup}. The idea behind it is demonstrated in Figure
\ref{fig:M-assumptions}, which might provide some intuition.
\begin{assumption}
\label{as:cones-setup} Let $\mathbf{m}>1$ and let $\boldsymbol{\gamma}%
_{0}=(\mathbf{a}_{0},\mathbf{b}_{0},\mathbf{c}_{0})\in\mathbb{R}^{3},$
$\boldsymbol{\gamma}_{1}=(\mathbf{a}_{1},\mathbf{b}_{1},\mathbf{c}_{1}%
)\in\mathbb{R}^{3}$ satisfy $\mathbf{a}_{m}>0,$ $\mathbf{b}_{m},\mathbf{c}%
_{m}<0$ for $m=0,1.$ Let us also define a set $M\subset\mathbf{B}$ as%
\begin{equation}
M:=\overline{B}_{u}(0,r)\times\overline{B}_{s}(0,r)\times\overline{B}%
_{c}.\label{eq:Mi-set}%
\end{equation}
We assume that for any horizontal disc $\mathbf{h}$ in a ch-set with cones
$(M,\boldsymbol{\gamma}_{1})$ and for any $i\in I$ there exists $\left(
\iota,\kappa\right) \in J$ such that $\mathbf{h}(B_{u}(0,r))\subset
\mathrm{dom}(\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}^{-1}).$ In addition we
assume that for any $q^{\ast},q^{\ast\ast}$ in $\mathrm{dom}(\tilde{\eta
}_{\iota}\circ\tilde{\eta}_{i}^{-1})$ such that $Q_{\boldsymbol{\gamma}_{1}%
}(q^{\ast}-q^{\ast\ast})>0$ we have
\begin{equation}
Q_{\boldsymbol{\gamma}_{0}}(\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}%
^{-1}(q^{\ast})-\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}^{-1}(q^{\ast\ast
}))>\mathbf{m}Q_{\boldsymbol{\gamma}_{1}}(q^{\ast}-q^{\ast\ast}%
),\label{eq:from-g1-to-g0}%
\end{equation}
and
\begin{equation}
\mathbf{h}^{\prime}:=\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}^{-1}%
\circ\mathbf{h}_{|B_{u}(0,r)}\quad\text{is a horizontal disc in }%
(M_{\iota,\kappa},\boldsymbol{\gamma}_{0}).\label{eq:h'-hor-disc}%
\end{equation}
\end{assumption}
Assumption \ref{as:cones-setup} ensures that for $\mathbf{h}$ given in some local
coordinates $\tilde{\eta}_{i}$ we can change to coordinates $\tilde{\eta
}_{\iota}$ so that $\mathbf{h}^{\prime}:=\tilde{\eta}_{\iota}\circ\tilde{\eta
}_{i}^{-1}\circ\mathbf{h}$ lies close to the middle of the set $M_{\iota,\kappa}$. Assumption
\ref{as:cones-setup} is also discussed in Section \ref{sec:local-maps}, where
conditions which imply it are given.
\begin{remark}
Above we use bold font for $\boldsymbol{\gamma}_{i}=(\mathbf{a}_{i}%
,\mathbf{b}_{i},\mathbf{c}_{i})$, $i=0,1$ to emphasize that these are fixed
constants, and to distinguish them from other $\gamma=(a,b,c)$ in our proofs.
\end{remark}
\begin{figure}[ptb]
\begin{center}
\includegraphics[
height=2in
]{figures/Fig3.eps}
\end{center}
\caption{The change of coordinates $\tilde{\eta}_{\iota}\circ\tilde{\eta}%
_{i}^{-1}$, a horizontal disc $\mathbf{h}$, and the cones given by
$\boldsymbol{\gamma}_{0}$ and $\boldsymbol{\gamma}_{1}$ in different local
coordinates. Here, for simplicity, the stable coordinate is neglected ($s=0$). }%
\label{fig:M-assumptions}%
\end{figure}
\begin{definition}
\label{def:cone-conditions}If for any $(i,k)\in J$ there exists a sequence of
ch-sets with cones $(N_{0},\gamma_{0}),(N_{1},\gamma_{1}),\ldots,(N_{n},\gamma_{n})$ ($n$ can
depend on $(i,k)$) and a sequence $i_{0}=i,i_{1},\ldots,i_{n}\in I$ such that%
\begin{equation}
M_{i,k}=:N_{0}\overset{f_{i_{1}i_{0}}}{\Longrightarrow}N_{1}\overset
{f_{i_{2}i_{1}}}{\Longrightarrow}N_{2}\overset{f_{i_{3}i_{2}}}{\Longrightarrow
}\ldots\overset{f_{i_{n}i_{n-1}}}{\Longrightarrow}N_{n}\overset{\mathrm{id}%
}{\Longrightarrow}M,\label{eq:cover-sequence}%
\end{equation}
then we say that $f$ \emph{satisfies covering conditions}.
If in addition for any $q_{1},q_{2}\in N_{l},$ $q_{1}\neq q_{2},$%
\begin{equation}
Q_{\gamma_{l+1}}(f_{i_{l+1}i_{l}}(q_{1})-f_{i_{l+1}i_{l}}(q_{2}))>Q_{\gamma
_{l}}(q_{1}-q_{2})\label{eq:cc-local-iterates}%
\end{equation}
for $l=0,\ldots,n-1,$ and for $\gamma_{n}=(a,b,c)$ we have%
\begin{equation}
\mathbf{a}_{1}>a,\quad\frac{\mathbf{b}_{1}}{\mathbf{a}_{1}}>\frac{b}{a}%
,\quad\frac{\mathbf{c}_{1}}{\mathbf{a}_{1}}>\frac{c}{a},\label{eq:to-gamma0}%
\end{equation}
then we say that $f$ \emph{satisfies cone conditions}.
\end{definition}
\begin{figure}[ptb]
\begin{center}
\includegraphics[
width=5in
]{figures/Fig4.eps}
\caption{(see Example \ref{ex:cone-cond-def}) For the first iterates of the
map the ch-sets and cones are contracted in the $x$ direction. After a number
of steps the expansion in $x$ starts to dominate. Note that the coordinate
$\theta$ is expanding. Since expansion in $x$ is stronger than expansion in
$\theta$, though, the cones eventually become flatter and their level sets
$Q_{\gamma_{i}}=c$ are pulled away from the origin.}%
\label{fig:example-cone}%
\end{center}
\end{figure}
\begin{example}
\label{ex:cone-cond-def}This example stands behind the pictures from Figure
\ref{fig:example-cone}. Consider $u=c=1$ and $s=0.$ Assume that $f_{i_{1}%
i_{0}}=(A_{ij}^{1})_{i,j=1,2}=\mathrm{diag}(\frac{1}{2},2),$ $f_{i_{2}i_{1}}=(A_{ij}%
^{2})_{i,j=1,2}=\mathrm{diag}(2,1),$ $f_{i_{3}i_{2}}=(A_{ij}^{3})_{i,j=1,2}=\mathrm{diag}(5,2).$
Let $\boldsymbol{\gamma}_{0}=(1,-1)$ and $\boldsymbol{\gamma}_{1}=(\frac{1}%
{4},-\frac{3}{8}).$ We take ch-sets with cones $(N_{l}((0,0),r_{u}^{l}%
,r_{c}^{l}),\gamma_{l}),$ for $l=0,1,2,3$ with
\begin{eqnarray*}
r_{u}^{0} & = &r_{c}^{0}=r,\\
r_{u}^{l} & = &r_{u}^{l-1}A_{11}^{l}-\varepsilon,\\
r_{c}^{l} & = &r_{c}^{l-1}A_{22}^{l}+\varepsilon,
\end{eqnarray*}
$\gamma_{0}=\boldsymbol{\gamma}_{0},$ $\gamma_{1}=(4\delta,-\frac{1}{4}%
\delta^{-1}),$ $\gamma_{2}=(\delta^{2},-\frac{1}{4}\delta^{-2}),$ $\gamma
_{3}=(\frac{1}{25}\delta^{3},-\frac{1}{16}\delta^{-3}),$ with $\delta
=1+\varepsilon.$ For sufficiently small $r$ and $\varepsilon$ we will have
(\ref{eq:cover-sequence}) and (\ref{eq:cc-local-iterates}). For sufficiently
small $\varepsilon$ we also have (\ref{eq:to-gamma0}). Assume now that
$\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i_{3}}^{-1}=\mathrm{diag}(1,\frac{\sqrt{5}}{2}).$
This $\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i_{3}}^{-1}$ is taken just as a
hypothetical example, in order to show that even when a switch to new
coordinates involves an expansion in the central coordinate, Assumption
\ref{as:cones-setup} can easily be satisfied. We have
\begin{eqnarray*}
Q_{\boldsymbol{\gamma}_{0}}(\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i_{3}}%
^{-1}(x,\theta)) & = & x^{2}-\frac{5}{4}\theta^{2}\\
& = & 4\left( \frac{1}{4}x^{2}-\frac{5}{16}\theta^{2}\right) \\
& \geq & 4\left( \frac{1}{4}x^{2}-\frac{3}{8}\theta^{2}\right) \\
& = & 4Q_{\boldsymbol{\gamma}_{1}}((x,\theta))
\end{eqnarray*}
\end{eqnarray*}
which means that (\ref{eq:from-g1-to-g0}) holds for $\mathbf{m}<4$.
\end{example}
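The arithmetic behind Example \ref{ex:cone-cond-def} can be sanity-checked numerically. The sketch below (a non-rigorous illustration with a hypothetical grid of test vectors; a computer assisted proof would instead use interval arithmetic) verifies the inequalities $Q_{\gamma_{l+1}}(A^{l+1}v)>Q_{\gamma_{l}}(v)$ for the diagonal maps and cone coefficients of the example:

```python
# Non-rigorous numerical check of the cone condition inequalities
# Q_{gamma_{l+1}}(A^{l+1} v) > Q_{gamma_l}(v) from the example (s = 0,
# so Q_gamma(v) = a*x^2 + c*theta^2). The grid of test vectors is arbitrary.

import itertools

def Q(gamma, v):
    a, c = gamma
    x, th = v
    return a * x**2 + c * th**2

eps = 1e-3
delta = 1 + eps

A = [(0.5, 2.0), (2.0, 1.0), (5.0, 2.0)]       # diagonal maps f_{i_{l+1} i_l}
gammas = [(1.0, -1.0),                         # gamma_0
          (4 * delta, -0.25 / delta),          # gamma_1
          (delta**2, -0.25 / delta**2),        # gamma_2
          (delta**3 / 25, -delta**(-3) / 16)]  # gamma_3

for l, (ax, ath) in enumerate(A):
    for x, th in itertools.product([-1.0, -0.5, 0.1, 1.0], repeat=2):
        v, Av = (x, th), (ax * x, ath * th)
        assert Q(gammas[l + 1], Av) > Q(gammas[l], v)
print("cone condition inequalities hold on the sample grid")
```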
We now introduce the notation $U\subset D_{\phi}$ for the set
\begin{equation}
U:=B_{u}(0,r)\times B_{s}(0,r)\times\Lambda. \label{eq:U-set-bound}%
\end{equation}
The set $U$ will be the region in which we construct the manifold of points
which stay within the set $D_{\phi}$ under forward iterations of the map
$f_{\phi}.$
\begin{figure}[ptb]
\begin{center}
\includegraphics[
width=4.8in
]{figures/Fig5.eps}
\end{center}
\caption{The sequence of covering relations from Definition
\ref{def:cone-conditions}, together with the sets $M_{\iota_{0},\kappa_{0}}$
and $M_{\iota_{1},\kappa_{1}}$, which are the first step of the inductive
construction from the proof of Theorem \ref{th:nhims-from-cc}.}%
\label{fig:covering-iteration}%
\end{figure}
\subsection{Existence of a normally hyperbolic manifold - Main
result\label{sec:main-res}}
In this section we use the assumptions from Section
\ref{sec:cover-cc-norm-hyp} to obtain the existence of a normally hyperbolic
invariant manifold inside of the set $U$ defined in (\ref{eq:U-set-bound}). We
start with a construction of the center-stable manifold. This is given in
Theorem \ref{th:nhims-from-cc}. The existence of a center-unstable manifold
follows from mirror arguments for the inverse map. The normally hyperbolic
manifold is obtained by intersecting the center-stable and center-unstable
manifolds. This is done in Theorem \ref{th:main}.
\begin{theorem}
\label{th:nhims-from-cc}If $f$ satisfies cone conditions then there exists a
continuous monomorphism $V:B_{s}(0,r)\times\Lambda\rightarrow U$ such that
\begin{enumerate}
\item \label{it:th-nhims-cc1}$\pi_{y}V(y,\lambda)=y$, $\pi_{\lambda
}V(y,\lambda)=\lambda,$
\item \label{it:th-nhims-cc2}for any $(y,\lambda)\in B_{s}(0,r)\times\Lambda$
and any $n\in\mathbb{N}$
\[
f_{\phi}^{n}(V(y,\lambda))\in D_{\phi}.
\]
\item \label{it:th-nhims-cc3}for any $q\in U$ such that $f_{\phi}^{n}(q)\in
D_{\phi}$ for all $n\in\mathbb{N},$ there exists $(y,\lambda)\in
B_{s}(0,r)\times\Lambda$ such that $q=V(y,\lambda),$
\item \label{it:th-nhims-cc4}if $\lambda^{\ast},\lambda^{\ast\ast}\in\eta
_{i}^{-1}(B_{c}(\eta_{i}(\boldsymbol{\lambda}_{k}),\rho))$ for some $(i,k)\in
J$ then for any $y^{\ast},y^{\ast\ast}\in\overline{B}_{s}(0,r)$ such that
$(\lambda^{\ast},y^{\ast})\neq(\lambda^{\ast\ast},y^{\ast\ast})$%
\begin{equation}
Q_{\gamma_{0}}\left( \tilde{\eta}_{i}\circ V(y^{\ast},\lambda^{\ast}%
)-\tilde{\eta}_{i}\circ V(y^{\ast\ast},\lambda^{\ast\ast})\right)
<0.\label{eq:V-vert-cones}%
\end{equation}
\end{enumerate}
\end{theorem}
\begin{proof}
We take any $y_{0}\in B_{s}(0,r),$ $\lambda_{0}\in\Lambda$ and $(\iota
_{0},\kappa_{0})\in J$ such that $\lambda_{0}\in\eta_{\iota_{0}}^{-1}%
(B_{c}(\eta_{\iota_{0}}(\boldsymbol{\lambda}_{\kappa_{0}}),\rho))$ and define
a horizontal disc $\mathbf{h}_{0}$ in $M_{\iota_{0},\kappa_{0}}$ as%
\[
\mathbf{h}_{0}(x):=(x,y_{0},\eta_{\iota_{0}}(\lambda_{0})).
\]
Since $f$ satisfies cone conditions, using assumption (\ref{eq:cover-sequence}%
) and applying Lemma \ref{lem:hor-disc-image} inductively, we obtain the
existence of indices $i_{1},\ldots,i_{n_{1}}\in I$ and of a horizontal disc
$\mathbf{h}_{1}$ in $(M,\gamma_{n_{1}})$ such that
\begin{eqnarray*}
\mathbf{h}_{1}(\overline{B}_{u}) & = & \{\tilde{\eta}_{i_{n_{1}}}\circ f_{\phi
}^{n_{1}}\circ\tilde{\eta}_{\iota_{0}}^{-1}(\mathbf{h}_{0}(x))\in M:x\in
B_{u}(0,r),\text{ and}\\
& & \quad\tilde{\eta}_{i_{l}}\circ f_{\phi}^{l}\circ\tilde{\eta}_{\iota_{0}%
}^{-1}(\mathbf{h}_{0}(x))\in N_{l}\text{ for }l=1,\ldots,n_{1}\}.
\end{eqnarray*}
By (\ref{eq:to-gamma0})
\[
Q_{\boldsymbol{\gamma}_{1}}(\mathbf{h}_{1}(x^{\ast})-\mathbf{h}_{1}(x^{\ast\ast
}))>Q_{\gamma_{n_{1}}}(\mathbf{h}_{1}(x^{\ast})-\mathbf{h}_{1}(x^{\ast\ast
}))>0,
\]
which means that $\mathbf{h}_{1}$ is a horizontal disc in
$(M,\boldsymbol{\gamma}_{1}).$ From (\ref{eq:from-g1-to-g0}) and
(\ref{eq:h'-hor-disc}) we know that there exists $(\iota_{1},\kappa_{1})\in J$
such that $\mathbf{h}^{\prime}:=\tilde{\eta}_{\iota_{1}}\circ\tilde{\eta
}_{i_{n_{1}}}^{-1}\circ\mathbf{h}_{1}$ is a horizontal disc in $(M_{\iota
_{1},\kappa_{1}},\boldsymbol{\gamma}_{0}).$ This in particular means that for
$\mathbf{f}_{1}:=\tilde{\eta}_{\iota_{1}}\circ f_{\phi}^{n_{1}}\circ
\tilde{\eta}_{\iota_{0}}^{-1},$ there exists an $x\in B_{u}(0,r)$ for which
$\mathbf{f}_{1}(\mathbf{h}_{0}(x))\in M_{\iota_{1},\kappa_{1}}.$
By (\ref{eq:cc-local-iterates}) and (\ref{eq:from-g1-to-g0}), for any
$x^{\ast}\neq x^{\ast\ast}$ such that $\mathbf{h}_{0}(x^{\ast}),\mathbf{h}%
_{0}(x^{\ast\ast})\in\mathrm{dom}(\mathbf{f}_{1})$
\begin{equation}
Q_{\gamma_{0}}\left( \mathbf{f}_{1}(\mathbf{h}_{0}(x^{\ast}))-\mathbf{f}%
_{1}(\mathbf{h}_{0}(x^{\ast\ast}))\right) >\mathbf{m}Q_{\gamma_{0}}\left(
\mathbf{h}_{0}(x^{\ast})-\mathbf{h}_{0}(x^{\ast\ast})\right)
>0.\label{eq:cc-inductive}%
\end{equation}
Repeating the above procedure inductively (starting the second step with the
horizontal disc $\mathbf{h}^{\prime}$ and local coordinates given by
$\tilde{\eta}_{\iota_{1}}$) we obtain a sequence of points $x_{s}\in
B_{u}(0,r)$ and indexes $(\iota_{s},\kappa_{s})$ for $s\in\mathbb{N}$ such
that for
\[
\mathbf{f}_{s}:=\tilde{\eta}_{\iota_{s}}\circ f_{\phi}^{n_{s}+\ldots+n_{1}%
}\circ\tilde{\eta}_{\iota_{0}}^{-1}%
\]
we have
\[
\mathbf{f}_{w}(\mathbf{h}_{0}(x_{s}))\in M_{\iota_{w},\kappa_{w}}%
\quad\text{for }w\leq s.
\]
Since $\overline{B}_{u}(0,r)$ is compact, there exists an $x_{0}=x_{0}%
(y_{0},\lambda_{0})\in B_{u}(0,r)$ such that $\tilde{\eta}_{\iota_{s}}%
^{-1}\circ\mathbf{f}_{s}(\mathbf{h}_{0}(x_{0}))\in U$ for all $s\in
\mathbb{N}.$ We define $V(y_{0},\lambda_{0}):=\tilde{\eta}_{\iota_{0}}%
^{-1}(x_{0}(y_{0},\lambda_{0}),y_{0},\eta_{\iota_{0}}(\lambda_{0})).$ To see
that $V$ is properly defined suppose that we have two points $x_{0}^{\ast
}\neq x_{0}^{\ast\ast}$ such that $\tilde{\eta}_{\iota_{s}}^{-1}\circ
\mathbf{f}_{s}(\mathbf{h}_{0}(x_{0}^{\ast})),\tilde{\eta}_{\iota_{s}}%
^{-1}\circ\mathbf{f}_{s}(\mathbf{h}_{0}(x_{0}^{\ast\ast}))\in U$ for all
$s\in\mathbb{N}.$ Then by
(\ref{eq:cc-inductive}) we obtain%
\begin{eqnarray}
Q_{\gamma_{0}}\left( \mathbf{f}_{s}(\mathbf{h}_{0}(x_{0}^{\ast}%
))-\mathbf{f}_{s}(\mathbf{h}_{0}(x_{0}^{\ast\ast}))\right) & > &\mathbf{m}%
Q_{\gamma_{0}}\left( \mathbf{f}_{s-1}(\mathbf{h}_{0}(x_{0}^{\ast
}))-\mathbf{f}_{s-1}(\mathbf{h}_{0}(x_{0}^{\ast\ast}))\right) \nonumber\\
& >&\ldots\label{eq:h-pulled-ap}\\
& >&\mathbf{m}^{s}Q_{\gamma_{0}}\left( \mathbf{h}_{0}(x_{0}^{\ast
})-\mathbf{h}_{0}(x_{0}^{\ast\ast})\right) \nonumber\\
& >&0.\nonumber
\end{eqnarray}
Since $\mathbf{m}>1,$ (\ref{eq:h-pulled-ap}) implies in particular that
\[
\left\Vert \pi_{x}\left( \mathbf{f}_{s}(\mathbf{h}_{0}(x_{0}^{\ast}%
))-\mathbf{f}_{s}(\mathbf{h}_{0}(x_{0}^{\ast\ast}))\right) \right\Vert
\rightarrow\infty\quad\text{as}\quad s\rightarrow\infty.
\]
This is impossible since the points $\mathbf{f}_{s}(\mathbf{h}_{0}(x_{0}%
^{\ast})),\mathbf{f}_{s}(\mathbf{h}_{0}(x_{0}^{\ast\ast}))$ lie in
$M_{\iota_{s},\kappa_{s}},$ which is a subset of the bounded set $\mathbf{B}.$
We now need to show (\ref{eq:V-vert-cones}). Suppose that for $(y^{\ast
},\lambda^{\ast})\neq(y^{\ast\ast},\lambda^{\ast\ast})$ we have $V(y^{\ast}%
,\lambda^{\ast}),V(y^{\ast\ast},\lambda^{\ast\ast})\in M_{i,k}$ and
$Q_{\gamma_{0}}\left( \tilde{\eta}_{i}\circ V(y^{\ast},\lambda^{\ast}%
)-\tilde{\eta}_{i}\circ V(y^{\ast\ast},\lambda^{\ast\ast})\right) \geq0.$
Applying estimates analogous to (\ref{eq:h-pulled-ap}) we obtain a contradiction.
Continuity of $V$ will follow from the fact that
\begin{equation}
Q_{\gamma_{0}}\left( \tilde{\eta}_{\iota_{0}}\circ V(y^{\ast},\lambda^{\ast
})-\tilde{\eta}_{\iota_{0}}\circ V(y^{\ast\ast},\lambda^{\ast\ast})\right)
<0.\label{ea:cont-V-cone}%
\end{equation}
Since $\boldsymbol{\gamma}_{0}=(\mathbf{a}_{0},\mathbf{b}_{0},\mathbf{c}_{0})$
with $\mathbf{a}_{0}>0$ and $\mathbf{b}_{0},\mathbf{c}_{0}<0$,
(\ref{ea:cont-V-cone}) gives
\begin{eqnarray*}
0 & >&Q_{\gamma_{0}}\left( \tilde{\eta}_{\iota_{0}}\circ V(y^{\ast}%
,\lambda^{\ast})-\tilde{\eta}_{\iota_{0}}\circ V(y^{\ast\ast},\lambda
^{\ast\ast})\right) \\
& = &\mathbf{a}_{0}\left\Vert \pi_{x}V(y^{\ast},\lambda^{\ast})-\pi
_{x}V(y^{\ast\ast},\lambda^{\ast\ast})\right\Vert ^{2}+\mathbf{b}%
_{0}\left\Vert y^{\ast}-y^{\ast\ast}\right\Vert ^{2} \\
& & +\mathbf{c}_{0}\left\Vert
\eta_{\iota_{0}}(\lambda^{\ast})-\eta_{\iota_{0}}(\lambda^{\ast\ast
})\right\Vert ^{2},
\end{eqnarray*}
and therefore%
\begin{eqnarray*}
\mathbf{a}_{0}\left\Vert \pi_{x}V(y^{\ast},\lambda^{\ast})-\pi_{x}%
V(y^{\ast\ast},\lambda^{\ast\ast})\right\Vert ^{2} \\
<\max(-\mathbf{b}%
_{0},-\mathbf{c}_{0})\left\Vert \left( y^{\ast},\eta_{\iota_{0}}%
(\lambda^{\ast})\right) -\left( y^{\ast\ast},\eta_{\iota_{0}}(\lambda
^{\ast\ast})\right) \right\Vert ^{2},
\end{eqnarray*}
which gives the continuity of $V$.
\end{proof}
Now we move to proving the existence of the normally hyperbolic invariant
manifold. First we need a definition.
\begin{definition}
We say that $f$ \emph{satisfies backward cone conditions} if $f^{-1}$
satisfies cone conditions, with reversed roles of $x$ and $y$ coordinates.
\end{definition}
We assume that for $f$ Assumption \ref{as:cones-setup} holds with
$\boldsymbol{\gamma}_{0}=\boldsymbol{\gamma}_{0}^{\mathrm{forw}}.$ We assume
also that for $f^{-1}$ Assumption \ref{as:cones-setup} holds with
$\boldsymbol{\gamma}_{0}=\boldsymbol{\gamma}_{0}^{\mathrm{back}}$ (with
reversed roles of the $x$ and $y$ coordinates).
\begin{theorem}
\label{th:main}(Main Theorem) Assume that $f$ satisfies cone conditions for
$\boldsymbol{\gamma}_{0}^{\mathrm{forw}}=\left( \mathbf{a}_{0}^{\mathrm{f}%
},\mathbf{b}_{0}^{\mathrm{f}},\mathbf{c}_{0}^{\mathrm{f}}\right) $ and
backward cone conditions with $\boldsymbol{\gamma}_{0}^{\mathrm{back}}=\left(
\mathbf{a}_{0}^{\mathrm{b}},\mathbf{b}_{0}^{\mathrm{b}},\mathbf{c}%
_{0}^{\mathrm{b}}\right) .$ If%
\begin{equation}
\left\vert \mathbf{a}_{0}^{\mathrm{f}}\right\vert >\left\vert \mathbf{a}%
_{0}^{\mathrm{b}}\right\vert \text{\quad and\quad}\left\vert \mathbf{b}%
_{0}^{\mathrm{f}}\right\vert <\left\vert \mathbf{b}_{0}^{\mathrm{b}%
}\right\vert \label{eq:ab-ineq}%
\end{equation}
then there exist continuous monomorphisms $W^{s}:B_{s}(0,r)\times
\Lambda\rightarrow U,$ $W^{u}:B_{u}(0,r)\times\Lambda\rightarrow U$ and
$\chi:\Lambda\rightarrow U,$ such that
\begin{equation}
\pi_{y,\lambda}W^{s}(y,\lambda)=(y,\lambda),\quad\pi_{x,\lambda}%
W^{u}(x,\lambda)=(x,\lambda),\quad\pi_{\lambda}\chi(\lambda)=\lambda
,\label{eq:W-projections}%
\end{equation}
\end{equation}
and $\Lambda_{\phi}:=\chi(\Lambda)$ is an invariant manifold for $f_{\phi},$
with stable manifold $W^{s}(B_{s}(0,r)\times\Lambda)$ and unstable manifold
$W^{u}(B_{u}(0,r)\times\Lambda).$
\end{theorem}
\begin{proof}
Since $f$ satisfies cone conditions, applying Theorem \ref{th:nhims-from-cc}
we obtain $W^{s}(y,\lambda)$ as $V$. Since $f$ satisfies backward cone
conditions, applying Theorem \ref{th:nhims-from-cc} to $f^{-1}$ we
obtain $W^{u}(x,\lambda)$ as the function $V$ for the inverse map. From point
\ref{it:th-nhims-cc1} in Theorem \ref{th:nhims-from-cc} it follows that
(\ref{eq:W-projections}) holds for $W^{s}$ and $W^{u}$.
We shall show that for any $\lambda\in\Lambda$ the sets $W^{s}(B_{s}%
(0,r),\lambda)$ and $W^{u}(B_{u}(0,r),\lambda)$ intersect. Let us define
$F:B_{u}(0,r)\times B_{s}(0,r)\rightarrow B_{u}(0,r)\times B_{s}(0,r)$ as%
\[
F(x,y):=\left( \pi_{x}W^{s}(y,\lambda),\pi_{y}W^{u}(x,\lambda)\right) .
\]
Since $F$ is continuous, it follows from the Brouwer fixed point theorem that
there exists an $(x_{0},y_{0})$ such that $F(x_{0},y_{0})=\left( x_{0}%
,y_{0}\right) .$ By (\ref{eq:W-projections}) this means that%
\[
W^{s}(y_{0},\lambda)=\left( \pi_{x}W^{s}(y_{0},\lambda),y_{0},\lambda\right)
=\left( x_{0},\pi_{y}W^{u}(x_{0},\lambda),\lambda\right) =W^{u}%
(x_{0},\lambda).
\]
Now we shall show that for any given $\lambda\in\Lambda$ there exists only a
single point of such intersection. Suppose that for some $\lambda\in\Lambda$
there exist $\left( x^{\ast},y^{\ast}\right) ,\left( x^{\ast\ast}%
,y^{\ast\ast}\right) \in B_{u}(0,r)\times B_{s}(0,r)$, $\left( x^{\ast
},y^{\ast}\right) \neq\left( x^{\ast\ast},y^{\ast\ast}\right) $ such that
\[
W^{s}(y^{\ast},\lambda)=W^{u}(x^{\ast},\lambda)\quad\text{and\quad}%
W^{s}(y^{\ast\ast},\lambda)=W^{u}(x^{\ast\ast},\lambda).
\]
From (\ref{eq:W-projections}) we have $W^{s}(y^{\ast},\lambda)=W^{u}%
(x^{\ast},\lambda)=(x^{\ast},y^{\ast},\lambda)$ and $W^{s}(y^{\ast\ast
},\lambda)=W^{u}(x^{\ast\ast},\lambda)=(x^{\ast\ast},y^{\ast\ast},\lambda).$
From point \ref{it:th-nhims-cc4} in Theorem \ref{th:nhims-from-cc} it follows that%
\begin{eqnarray*}
Q_{\gamma_{0}^{\mathrm{forw}}}\left( \tilde{\eta}_{i}\circ W^{s}(y^{\ast
},\lambda)-\tilde{\eta}_{i}\circ W^{s}(y^{\ast\ast},\lambda)\right) =\\
\qquad Q_{\gamma_{0}^{\mathrm{forw}}}\left( (x^{\ast},y^{\ast},\eta_{i}%
(\lambda))-(x^{\ast\ast},y^{\ast\ast},\eta_{i}(\lambda))\right) <0,
\end{eqnarray*}%
\begin{eqnarray*}
Q_{\gamma_{0}^{\mathrm{back}}}\left( \tilde{\eta}_{i}\circ W^{u}(x^{\ast
},\lambda)-\tilde{\eta}_{i}\circ W^{u}(x^{\ast\ast},\lambda)\right) =\\
\qquad Q_{\gamma_{0}^{\mathrm{back}}}\left( (x^{\ast},y^{\ast},\eta_{i}%
(\lambda))-(x^{\ast\ast},y^{\ast\ast},\eta_{i}(\lambda))\right) <0,
\end{eqnarray*}
which implies that
\begin{eqnarray}
a_{0}^{\mathrm{f}}\left\Vert x^{\ast}-x^{\ast\ast}\right\Vert ^{2}%
+b_{0}^{\mathrm{f}}\left\Vert y^{\ast}-y^{\ast\ast}\right\Vert ^{2} &
< & 0,\label{eq:temp-contr-1}\\
a_{0}^{\mathrm{b}}\left\Vert x^{\ast}-x^{\ast\ast}\right\Vert ^{2}%
+b_{0}^{\mathrm{b}}\left\Vert y^{\ast}-y^{\ast\ast}\right\Vert ^{2} &
< & 0.\label{eq:temp-contr-2}%
\end{eqnarray}
From (\ref{eq:ab-ineq}) and (\ref{eq:temp-contr-2}) (keeping in mind that
$a_{0}^{\mathrm{f}}>0,$ $b_{0}^{\mathrm{f}}<0$ and that $a_{0}^{\mathrm{b}%
}<0,$ $b_{0}^{\mathrm{b}}>0$ due to the reversal of the roles of $x$ and $y$
for the inverse map) it follows that
\[
a_{0}^{\mathrm{f}}\left\Vert x^{\ast}-x^{\ast\ast}\right\Vert ^{2}%
>-a_{0}^{\mathrm{b}}\left\Vert x^{\ast}-x^{\ast\ast}\right\Vert ^{2}%
>b_{0}^{\mathrm{b}}\left\Vert y^{\ast}-y^{\ast\ast}\right\Vert ^{2}%
>-b_{0}^{\mathrm{f}}\left\Vert y^{\ast}-y^{\ast\ast}\right\Vert ^{2},
\]
which contradicts (\ref{eq:temp-contr-1}).
We now define $\chi(\lambda):=(x_{0},y_{0},\lambda)$ for $x_{0},$ $y_{0}$ such
that $W^{s}(y_{0},\lambda)=W^{u}(x_{0},\lambda).$ By the above arguments we know
that $\chi$ is a properly defined function. We need to show that this function
is continuous. Let us take any $\lambda^{\ast},\lambda^{\ast\ast}\in\eta
_{i}^{-1}(B_{c}(\eta_{i}(\boldsymbol{\lambda}_{k}),\rho))$ for some $(i,k)\in
J.$ From point \ref{it:th-nhims-cc4} in Theorem \ref{th:nhims-from-cc} it follows that%
\begin{eqnarray}
Q_{\gamma_{0}^{\mathrm{forw}}}\left( \tilde{\eta}_{i}\circ\chi(\lambda^{\ast
})-\tilde{\eta}_{i}\circ\chi(\lambda^{\ast\ast})\right) &
<&0,\label{eq:cc-for-chi-2}\\
Q_{\gamma_{0}^{\mathrm{back}}}\left( \tilde{\eta}_{i}\circ\chi(\lambda^{\ast
})-\tilde{\eta}_{i}\circ\chi(\lambda^{\ast\ast})\right) & <&0.\nonumber
\end{eqnarray}
Let us adopt the notation $\tilde{\eta}_{i}\circ\chi(\lambda^{\ast})=\left(
x^{\ast},y^{\ast},\theta^{\ast}\right) $ and $\tilde{\eta}_{i}\circ
\chi(\lambda^{\ast\ast})=\left( x^{\ast\ast},y^{\ast\ast},\theta^{\ast\ast
}\right) .$ Note that from the construction of $\chi$ follows that $\eta
_{i}(\lambda^{\ast})=\theta^{\ast}$ and $\eta_{i}(\lambda^{\ast\ast}%
)=\theta^{\ast\ast}.$ From (\ref{eq:cc-for-chi-2}) it follows that
\begin{eqnarray}
&& \left( a_{0}^{\mathrm{f}}+a_{0}^{\mathrm{b}}\right) \left\Vert x^{\ast
}-x^{\ast\ast}\right\Vert ^{2}+\left( b_{0}^{\mathrm{f}}+b_{0}^{\mathrm{b}%
}\right) \left\Vert y^{\ast}-y^{\ast\ast}\right\Vert ^{2}%
\label{eq:cc-for-chi}\\
&& <-\left( c_{0}^{\mathrm{f}}+c_{0}^{\mathrm{b}}\right) \left\Vert
\theta^{\ast}-\theta^{\ast\ast}\right\Vert ^{2}\nonumber\\
&& =-\left( c_{0}^{\mathrm{f}}+c_{0}^{\mathrm{b}}\right) \left\Vert \eta
_{i}(\lambda^{\ast})-\eta_{i}(\lambda^{\ast\ast})\right\Vert ^{2}.\nonumber
\end{eqnarray}
From (\ref{eq:ab-ineq}) it follows that $a_{0}^{\mathrm{f}}+a_{0}^{\mathrm{b}%
}=\left\vert a_{0}^{\mathrm{f}}\right\vert -\left\vert a_{0}^{\mathrm{b}%
}\right\vert >0$ and $b_{0}^{\mathrm{f}}+b_{0}^{\mathrm{b}}=-\left\vert
b_{0}^{\mathrm{f}}\right\vert +\left\vert b_{0}^{\mathrm{b}}\right\vert >0.$
By the continuity of $\eta_{i}$ and the fact that $c_{0}%
^{\mathrm{f}}<0$ and $c_{0}^{\mathrm{b}}<0$, the continuity of $\chi$
follows from (\ref{eq:cc-for-chi}).
We will now show that for any $p\in W^{s}(B_{s}(0,r)\times\Lambda),$ $f_{\phi
}^{n}(p)$ converges to $\chi(\Lambda)$ as $n$ goes to infinity. Let us
consider the $\omega$-limit set of the point $p$%
\[
\omega(f_{\phi},p)=\{q|\lim_{k\rightarrow\infty}f_{\phi}^{n_{k}}(p)=q\text{
for some }n_{k}\rightarrow\infty\}.
\]
If we can show that $\omega(f_{\phi},p)$ is contained in $W^{u}\cap W^{s}%
=\chi(\Lambda),$ then this will conclude our proof. We take any $q=\lim
_{k\rightarrow\infty}f_{\phi}^{n_{k}}(p)$ from $\omega(f_{\phi},p).$ By
continuity of $W^{s}$ we know that $q\in W^{s}.$ Suppose now that $q\notin
W^{u}.$ This would mean that there exists an $n>0$ for which $f_{\phi}%
^{-n}(q)\notin B_{u}(0,r)\times B_{s}(0,r)\times\Lambda.$ Since%
\[
\lim_{k\rightarrow\infty}f_{\phi}^{n_{k}-n}(p)=f_{\phi}^{-n}(q),
\]
we have that $f_{\phi}^{-n}(q)\in\omega(f_{\phi},p),$ but this contradicts the
fact that $\omega(f_{\phi},p)\subset B_{u}(0,r)\times B_{s}(0,r)\times
\Lambda.$
Showing that all backward iterations of points in $W^{u}(B_{u}(0,r)\times
\Lambda)$ converge to $\chi(\Lambda)$ is analogous.
\end{proof}
\begin{remark}
Let us note that during the course of the proof of Theorem \ref{th:main} we
have established more than just continuity of $W^{u},$ $W^{s}$ and $\chi.$
From our construction we know that for $i\in I$%
\begin{eqnarray*}
\tilde{\eta}_{i}\circ W^{u}(x,\eta_{i}^{-1}(\theta)) & = &\left( x,w_{i}%
^{u}(x,\theta),\theta\right) ,\\
\tilde{\eta}_{i}\circ W^{s}(y,\eta_{i}^{-1}(\theta)) & = &\left( w_{i}%
^{s}(y,\theta),y,\theta\right) ,\\
\tilde{\eta}_{i}\circ\chi(\eta_{i}^{-1}(\theta)) & = & \left( \varkappa
_{i}(\theta),\theta\right) ,
\end{eqnarray*}
for continuous $w_{i}^{u}:B_{u}(0,r)\times B_{c}\rightarrow B_{s}(0,r),$
$w_{i}^{s}:B_{s}(0,r)\times B_{c}\rightarrow B_{u}(0,r)$ and $\varkappa
_{i}:B_{c}\rightarrow B_{u}(0,r)\times B_{s}(0,r).$ The inequality
(\ref{eq:V-vert-cones}) from Theorem \ref{th:nhims-from-cc} can be used to
obtain explicit Lipschitz bounds for functions $w_{i}^{u},$ $w_{i}^{s}.$ Also
estimates (\ref{eq:cc-for-chi}) can be used to obtain Lipschitz bounds for
$\varkappa_{i}.$ This means that we can get Lipschitz estimates for the
invariant manifold $\chi(\Lambda)$ together with Lipschitz estimates for its
stable and unstable manifold.
\end{remark}
\section{Verification of covering and cone conditions}
\label{sec:ver-cond}
In this section we show how covering relations and cone conditions can be
verified with the use of local bounds on derivatives. The idea is to develop a
simple automatised scheme which could be applied in computer assisted proofs.
In our approach we set up our verification so that we do not need to compute
images of large sets (which in the case of rigorous numerics is always
troublesome). The scheme is based on iterates of a number of single points,
combined with estimates on derivatives around their neighbourhoods.
For any set $V\subset\mathbb{R}^{n}$ we define the interval enclosure of the
derivative of $f$ on $V$ as%
\begin{eqnarray*}
\lbrack df(V)]:= \\
\left\{ A\in\mathbb{R}^{n\times n}|A_{ij}\in\left[
\inf_{x\in V}\frac{df_{i}}{dx_{j}}(x),\sup_{x\in V}\frac{df_{i}}{dx_{j}%
}(x)\right] \text{ for all }i,j=1,\ldots,n\right\} .
\end{eqnarray*}
Let $U_{i_{1}},U_{i_{2}}\subset\Lambda$ be such that $\mathrm{dom}%
f_{i_{2}i_{1}}$ is nonempty. Assume that for any $(c+u+s)\times(c+u+s)$
matrix
\begin{equation}
A\in\lbrack df_{i_{2}i_{1}}(\mathrm{dom}f_{i_{2}i_{1}})]
\label{eq:A-for-i1-i2}%
\end{equation}
we have the following bounds%
\begin{eqnarray}
\sup\left\{ \left\Vert A_{ij}v_{j}\right\Vert :\left\Vert v_{j}\right\Vert
=1\right\} & \leq &\overline{A}_{ij}\label{eq:der-bounds}\\
\inf\left\{ \left\Vert A_{ij}v_{j}\right\Vert :\left\Vert v_{j}\right\Vert
=1\right\} & \geq & \underline{A}_{ij},\nonumber
\end{eqnarray}
with $i,j\in\{1,2,3\}$ and $v_{1},v_{2},v_{3}$ representing the variables
$x,y,\theta$ respectively (note that $\overline{A}_{ij},\underline{A}_{ij}$
depend on the choice of $i_{2}i_{1}$). In this section we shall use the bounds
(\ref{eq:der-bounds}) for verification of covering and cone conditions.
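The interval enclosure $[df(V)]$ and the bounds $\overline{A}_{ij},\underline{A}_{ij}$ can be approximated for concrete maps. The following Python sketch is a non-rigorous stand-in: it samples the Jacobian of the driven logistic map of Section \ref{sec:examples} on a grid over a box (a rigorous proof would use interval arithmetic instead; the box and grid size are arbitrary choices).

```python
import math

def dT(theta, x, a0=1.31, eps=0.3):
    """Jacobian of the driven map T(theta, x) = (theta + alpha, 1 - a(theta) x^2),
    with a(theta) = a0 + eps*sin(2*pi*theta); the theta-row is (1, 0)."""
    a = a0 + eps * math.sin(2 * math.pi * theta)
    da = 2 * math.pi * eps * math.cos(2 * math.pi * theta)
    return [[1.0, 0.0],
            [-da * x * x, -2.0 * a * x]]

def sampled_enclosure(box_theta, box_x, n=200):
    """Entrywise min/max of the Jacobian over an (n+1) x (n+1) grid on the box.
    Only an illustration of [df(V)]; NOT a rigorous enclosure."""
    lo = [[float("inf")] * 2 for _ in range(2)]
    hi = [[-float("inf")] * 2 for _ in range(2)]
    for i in range(n + 1):
        th = box_theta[0] + (box_theta[1] - box_theta[0]) * i / n
        for j in range(n + 1):
            x = box_x[0] + (box_x[1] - box_x[0]) * j / n
            A = dT(th, x)
            for r in range(2):
                for c in range(2):
                    lo[r][c] = min(lo[r][c], A[r][c])
                    hi[r][c] = max(hi[r][c], A[r][c])
    return lo, hi

lo, hi = sampled_enclosure((0.0, 0.1), (0.5, 0.6))
```

In the rigorous setting the bounds $\overline{A}_{ij}$ and $\underline{A}_{ij}$ would then be read off from such an enclosure.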
\subsection{Verifying covering conditions\label{sec:ver-cover}}
We define a $3\times3$ matrix $T_{i_{2}i_{1}}$ as%
\[
T_{i_{2}i_{1}}:=\left( t_{ij}\right) _{i,j=1,\ldots,3}%
\]%
\begin{equation}%
\begin{array}
[c]{lll}%
t_{11}=\underline{A}_{11},\quad & t_{12}=-\overline{A}_{12},\quad &
t_{13}=-\overline{A}_{13},\\
t_{21}=\overline{A}_{21}, & t_{22}=\overline{A}_{22}, & t_{23}=\overline
{A}_{23},\\
t_{31}=\overline{A}_{31}, & t_{32}=\overline{A}_{32}, & t_{33}=\overline
{A}_{33}.
\end{array}
\label{eq:Tmatrix-bounds}%
\end{equation}
We will use the notation $R=(r_{u},r_{s},r_{c})\in\mathbb{R}^{3}$ and for
$q=(x,y,\theta)\in\mathbb{R}^{u}\times\mathbb{R}^{s}\times\mathbb{R}^{c}$ we
will write
\[
N(q,R):=N(q,r_{u},r_{s},r_{c}).
\]
We give a lemma, which can be used in order to verify that $N_{1}%
\overset{f_{i_{2}i_{1}}}{\Longrightarrow}N_{2}$.
\begin{lemma}
\label{lem:matrix-bounds-covering}Let $\varepsilon>0$ be a small number. Let
$N_{1}=N(q_{1},R_{1})\subset\mathrm{dom}f_{i_{2}i_{1}}$ be a ch-set. If for
$R_{2}=(r_{u}^{2},r_{s}^{2},r_{c}^{2}):=T_{i_{2}i_{1}}R_{1}+(-\varepsilon
,\varepsilon,\varepsilon)$ we have $r_{u}^{2},r_{s}^{2},r_{c}^{2}>0$ and for
$q_{2}:=f_{i_{2}i_{1}}(q_{1})$%
\begin{equation}
\left\Vert \pi_{x}q_{2}\right\Vert +r_{u}^{2}\leq1,\qquad\left\Vert \pi
_{y}q_{2}\right\Vert +r_{s}^{2}\leq1,\qquad\left\Vert \pi_{\theta}%
q_{2}\right\Vert +r_{c}^{2}\leq1, \label{eq:N2-in-B}%
\end{equation}
then for $N_{2}:=N(q_{2},R_{2})$ we have $N_{1}\overset{f_{i_{2}i_{1}}%
}{\Longrightarrow}N_{2}.$
\end{lemma}
\begin{proof}
Condition (\ref{eq:covering-cond-1}) holds by the choice of $q_{2}$ and
$N_{2}.$ Let $q\in N_{1}^{-},$ then for%
\[
A:=\int_{0}^{1}Df_{i_{2}i_{1}}(q_{1}+t(q-q_{1}))dt\in\lbrack df_{i_{2}i_{1}%
}(\mathrm{dom}f_{i_{2}i_{1}})],
\]
we have estimates%
\begin{eqnarray*}
\left\Vert \pi_{x}(f_{i_{2}i_{1}}(q)-q_{2})\right\Vert & = &\left\Vert \pi
_{x}(f_{i_{2}i_{1}}(q)-f_{i_{2}i_{1}}(q_{1}))\right\Vert \\
& =&\left\Vert \pi_{x}\left( \int_{0}^{1}Df_{i_{2}i_{1}}(q_{1}+t(q-q_{1}%
))dt\cdot(q-q_{1})\right) \right\Vert \\
& =&\left\Vert \pi_{x}A(q-q_{1})\right\Vert \\
& =&\left\Vert A_{11}\pi_{x}(q-q_{1})+A_{12}\pi_{y}(q-q_{1})+A_{13}\pi
_{\theta}(q-q_{1})\right\Vert \\
& \geq&\underline{A}_{11}r_{u}^{1}-\overline{A}_{12}r_{s}^{1}-\overline
{A}_{13}r_{c}^{1}\\
& >&r_{u}^{2},
\end{eqnarray*}
hence (\ref{eq:covering-cond-2}) holds. Analogous computations for $q\in
N_{1}$ give%
\begin{eqnarray*}
\left\Vert \pi_{y}(f_{i_{2}i_{1}}(q)-q_{2})\right\Vert & = &\left\Vert \pi
_{y}A(q-q_{1})\right\Vert \leq\overline{A}_{21}r_{u}^{1}+\overline{A}%
_{22}r_{s}^{1}+\overline{A}_{23}r_{c}^{1}<r_{s}^{2},\\
\left\Vert \pi_{\theta}(f_{i_{2}i_{1}}(q)-q_{2})\right\Vert & =& \left\Vert
\pi_{\theta}A(q-q_{1})\right\Vert \leq\overline{A}_{31}r_{u}^{1}+\overline
{A}_{32}r_{s}^{1}+\overline{A}_{33}r_{c}^{1}<r_{c}^{2},
\end{eqnarray*}
which proves (\ref{eq:covering-cond-3}). Conditions (\ref{eq:N2-in-B}) ensure
that $N_{2}\subset\mathbf{B}$.
\end{proof}
\begin{example}
We return to our Example \ref{ex:cone-cond-def}. The ch-sets from the example
follow from Lemma \ref{lem:matrix-bounds-covering} as $N_{l}=N(0,R_{l})$ where
$R_{0}=(r,r)$ and $R_{l+1}=T_{i_{l+1}i_{l}}R_{l}+(-\varepsilon,\varepsilon)$
with $T_{i_{l+1}i_{l}}=\mathrm{diag}(A_{11}^{l+1},A_{22}^{l+1}).$
\end{example}
\begin{remark}
When the $x$ coordinate is strongly expanding, for practical reasons it might
be beneficial to set $r_{u}^{2}$ significantly smaller than $\pi
_{1}T_{i_{2}i_{1}}R_{1}.$ In such a case the covering $N_{1}\overset
{f_{i_{2}i_{1}}}{\Longrightarrow}N_{2}$ will still take place, but $N_{2}$
will be a smaller set. This might give better bounds for next iterations of
the map $f$ and also keep the later constructed $N_{i}$ within $\mathbf{B}$.
Without reducing $r_{u}$, in the case when $x$ is expanding, it might turn out
that the sets $N_{i}$ blow up quickly.
\end{remark}
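The radius propagation of Lemma \ref{lem:matrix-bounds-covering} amounts to a single matrix-vector product. A minimal Python sketch, with purely illustrative bounds (expansion in $x$, contraction in $y$, near-neutral $\theta$):

```python
def propagate_radii(T, R1, eps=1e-6):
    """R2 = T R1 + (-eps, eps, eps): the exit radius r_u is under-approximated,
    the entry radii r_s, r_c are over-approximated."""
    signs = (-1.0, 1.0, 1.0)
    return tuple(sum(T[i][j] * R1[j] for j in range(3)) + signs[i] * eps
                 for i in range(3))

# illustrative bounds: t11 = Alow_11; t12, t13 = -Abar_12, -Abar_13;
# the remaining entries are Abar_ij (expansion in x, contraction in y)
T = [[1.8, -0.05, -0.05],
     [0.05, 0.4, 0.05],
     [0.05, 0.05, 1.01]]
R1 = (0.1, 0.1, 0.1)
R2 = propagate_radii(T, R1)
```

As noted in the remark above, in the strongly expanding case one may replace the first component of $R_{2}$ by a smaller positive value and the covering still holds.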
\subsection{Verifying cone conditions\label{sec:ver-cc}}
Now we shall present some lemmas, which will show how one can obtain condition
(\ref{eq:cc-local-iterates}) from bounds on derivatives (\ref{eq:der-bounds}%
). The aim is to present a simple mechanism in which successive $\gamma_{l}$
are constructed.
Let $C=(c_{ij})_{i,j=1,...,3}$ be a $3\times3$ matrix with coefficients%
\begin{equation}%
\begin{array}
[c]{lll}%
c_{11}=\underline{A}_{11}^{2}-\sum_{k\neq1}\overline{A}_{11}\overline{A}%
_{1k}\quad & c_{12}=\sum_{k=1}^{3}\overline{A}_{21}\overline{A}_{2k}\quad &
c_{13}=\sum_{k=1}^{3}\overline{A}_{31}\overline{A}_{3k}\\
c_{21}=\underline{A}_{12}^{2}-\sum_{k\neq2}\overline{A}_{12}\overline{A}%
_{1k}\quad & c_{22}=\sum_{k=1}^{3}\overline{A}_{22}\overline{A}_{2k}\quad &
c_{23}=\sum_{k=1}^{3}\overline{A}_{32}\overline{A}_{3k}\\
c_{31}=\underline{A}_{13}^{2}-\sum_{k\neq3}\overline{A}_{13}\overline{A}%
_{1k}\quad & c_{32}=\sum_{k=1}^{3}\overline{A}_{23}\overline{A}_{2k}\quad &
c_{33}=\sum_{k=1}^{3}\overline{A}_{33}\overline{A}_{3k}%
\end{array}
\label{eq:C-def}%
\end{equation}
(note that $C$ depends on the choice of $i_{2},i_{1}$).
We start with a technical lemma.
\begin{lemma}
\label{lem:matrix-bounds}Let $\gamma=(a,b,c)\in\mathbb{R}^{3}$ and let
$A:\mathbb{R}^{u+s+c}\rightarrow\mathbb{R}^{u+s+c}$ be a matrix for which the
bounds (\ref{eq:der-bounds}) hold. If $a\geq0,b\leq0,c\leq0$ then for any
$p=\left( p_{1},p_{2},p_{3}\right) \in\mathbb{R}^{u}\times\mathbb{R}%
^{s}\times\mathbb{R}^{c}$%
\[
Q_{\gamma}(Ap)\geq Q_{C\gamma}\left( p\right) .
\]
\end{lemma}
\begin{proof}
Using the estimate%
\[
\pm2\left\langle A_{ki}p_{i},A_{kj}p_{j}\right\rangle \geq-\overline{A}%
_{ki}\overline{A}_{kj}\left( \left\Vert p_{i}\right\Vert ^{2}+\left\Vert
p_{j}\right\Vert ^{2}\right)
\]
we obtain
\begin{eqnarray*}
Q_{\gamma}(Ap) \\
=a\sum_{i,j=1}^{3}\left\langle A_{1i}p_{i},A_{1j}%
p_{j}\right\rangle +b\sum_{i,j=1}^{3}\left\langle A_{2i}p_{i},A_{2j}%
p_{j}\right\rangle +c\sum_{i,j=1}^{3}\left\langle A_{3i}p_{i},A_{3j}%
p_{j}\right\rangle \\
= a\sum_{i=1}^{3}||A_{1i}p_{i}||^{2}+b\sum_{i=1}^{3}||A_{2i}p_{i}%
||^{2}+c\sum_{i=1}^{3}||A_{3i}p_{i}||^{2}\\
\quad+2\sum_{i<j}a\left\langle A_{1i}p_{i},A_{1j}p_{j}\right\rangle
+2\sum_{i<j}b\left\langle A_{2i}p_{i},A_{2j}p_{j}\right\rangle +2\sum
_{i<j}c\left\langle A_{3i}p_{i},A_{3j}p_{j}\right\rangle \\
\geq\left\Vert p_{1}\right\Vert ^{2}(a\underline{A}_{11}^{2}+b\overline
{A}_{21}^{2}+c\overline{A}_{31}^{2})+\left\Vert p_{2}\right\Vert ^{2}(a\underline{A}_{12}^{2}%
+b\overline{A}_{22}^{2}+c\overline{A}_{32}^{2})\\
\quad+\left\Vert p_{3}\right\Vert ^{2}(a\underline{A}_{13}^{2}%
+b\overline{A}_{23}^{2}+c\overline{A}_{33}^{2})\\
\quad-a\sum_{i<j}\overline{A}_{1i}\overline{A}_{1j}\left( ||p_{i}%
||^{2}+||p_{j}||^{2}\right) +b\sum_{i<j}\overline{A}_{2i}\overline{A}%
_{2j}\left( ||p_{i}||^{2}+||p_{j}||^{2}\right) \\
\quad+c\sum_{i<j}\overline{A}_{3i}\overline{A}_{3j}\left( ||p_{i}%
||^{2}+||p_{j}||^{2}\right) \\
=\left( C\gamma\right) _{1}\left\Vert p_{1}\right\Vert ^{2}+(C\gamma
)_{2}\left\Vert p_{2}\right\Vert ^{2}+\left( C\gamma\right) _{3}\left\Vert
p_{3}\right\Vert ^{2}.
\end{eqnarray*}
\end{proof}
Now we give a lemma which will be the main tool in the construction of
$\gamma_{l}$ from Definition \ref{def:cone-conditions}.
\begin{lemma}
\label{lem:matrix-bounds-cc}Let $U_{i_{1}},U_{i_{2}}\subset\Lambda$ and let
$N\subset\mathrm{dom}(f_{i_{2}i_{1}})$ be a ch-set. Let $\varepsilon>0$ be
a small number and let $C$ be defined by (\ref{eq:C-def}).
Assume that $C$ is invertible and define
\begin{equation}
G_{i_{2}i_{1}}=C^{-1}.\label{eq:Gmatrix}%
\end{equation}
If for $\gamma^{\prime}=(a,b,c):=G_{i_{2}i_{1}}\gamma+(\varepsilon
,\varepsilon,\varepsilon)$ we have $a>0$ and $b,c<0$, then for any
$q_{1},q_{2}\in N$ with $q_{1}\neq q_{2}$%
\[
Q_{\gamma^{\prime}}(f_{i_{2}i_{1}}(q_{1})-f_{i_{2}i_{1}}(q_{2}))>Q_{\gamma
}(q_{1}-q_{2}).
\]
\end{lemma}
\begin{proof}
For
\[
A:=\int_{0}^{1}Df_{i_{2}i_{1}}(q_{2}+t(q_{1}-q_{2}))dt\in\lbrack
df_{i_{2}i_{1}}(\mathrm{dom}f_{i_{2}i_{1}})]
\]
applying Lemma \ref{lem:matrix-bounds} gives%
\begin{eqnarray*}
Q_{\gamma^{\prime}}(f_{i_{2}i_{1}}(q_{1})-f_{i_{2}i_{1}}(q_{2})) &
>&Q_{G_{i_{2}i_{1}}\gamma}(f_{i_{2}i_{1}}(q_{1})-f_{i_{2}i_{1}}(q_{2}))\\
& \geq& Q_{CG_{i_{2}i_{1}}\gamma}(q_{1}-q_{2})\\
& =&Q_{\gamma}(q_{1}-q_{2}).
\end{eqnarray*}
\end{proof}
\begin{example}
We return to Example \ref{ex:cone-cond-def}. The cones $\gamma_{l}$ follow
from Lemma \ref{lem:matrix-bounds-cc} as $\gamma_{0}=(1,-1)$ and $\gamma
_{l+1}=(1+\varepsilon,(1+\varepsilon)^{-1})\cdot G_{i_{l+1}i_{l}}\gamma_{l}$
with $G_{i_{l+1}i_{l}}=\mathrm{diag}\left( \frac{1}{\left( A_{11}%
^{l+1}\right) ^{2}},\frac{1}{\left( A_{22}^{l+1}\right) ^{2}}\right) ,$
where $\cdot$ stands for the coordinate-wise product.
\end{example}
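In the diagonal setting of the example above, the propagation of the cones and the conclusion of Lemma \ref{lem:matrix-bounds-cc} can be checked numerically. A minimal Python sketch with illustrative rates $A_{11}=2$, $A_{22}=1/2$ (a linear hyperbolic toy map, not one of the maps treated in the paper):

```python
def Q(gamma, p):
    """Planar quadratic form Q_gamma(p) = a*||p1||^2 + b*||p2||^2."""
    a, b = gamma
    return a * p[0] ** 2 + b * p[1] ** 2

A11, A22, eps = 2.0, 0.5, 1e-3         # illustrative expansion/contraction rates
gamma0 = (1.0, -1.0)
# gamma1 = (1+eps, (1+eps)^{-1}) . G gamma0 with G = diag(1/A11^2, 1/A22^2)
gamma1 = ((1.0 + eps) * gamma0[0] / A11 ** 2,
          gamma0[1] / (A22 ** 2 * (1.0 + eps)))

def f(p):                               # linear hyperbolic toy map
    return (A11 * p[0], A22 * p[1])

# cone condition: Q_{gamma1}(f(p)) > Q_{gamma0}(p) for p != 0
samples = [(1.0, 0.0), (0.3, 0.7), (1.0, 1.0), (-0.2, 0.9)]
checks = [Q(gamma1, f(p)) > Q(gamma0, p) for p in samples]
```

Here $Q_{\gamma_1}(f(p)) - Q_{\gamma_0}(p) = \varepsilon p_1^2 + (1-(1+\varepsilon)^{-1})p_2^2 > 0$ for $p\neq 0$, so the checks pass with a strict margin.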
\subsection{Setting up local maps\label{sec:local-maps}}
In this section we shall introduce conditions, which would ensure that the
assumptions from Section \ref{sec:cover-cc-norm-hyp} hold. Below we give a
lemma which will ensure conditions (\ref{eq:from-g1-to-g0}) and
(\ref{eq:h'-hor-disc}).
Let us note that in some cases conditions (\ref{eq:from-g1-to-g0}) and
(\ref{eq:h'-hor-disc}) will follow from easier arguments or directly from the
setup of the problem. Such is the case in our example from Section
\ref{sec:examples}.
\begin{lemma}
Let $\mathbf{m}>1,$ $\Delta>0$ and $\rho>\sqrt{\frac{\mathbf{a}_{0}%
}{-\mathbf{c}_{0}}}r+\Delta$. Assume that
\begin{enumerate}
\item \label{itm:lem-setup1}for any $\iota\in I$ and any $\lambda\in U_{\iota
}$ there exists a $\boldsymbol{\lambda}_{\kappa}$ such that $(\iota,\kappa)\in
J$ and
\begin{equation}
\left\Vert \eta_{\iota}(\lambda)-\eta_{\iota}(\boldsymbol{\lambda}_{\kappa
})\right\Vert <\Delta,\label{eq:lambda-Delta}%
\end{equation}
\item \label{itm:lem-setup2}for any $\theta\in B_{c}$ and any $i\in I$ there
exists an $\iota\in I$ such that
\begin{eqnarray}
\overline{B}_{c}\left( \theta,\sqrt{\frac{\mathbf{a}_{1}}{-\mathbf{c}_{1}}%
}r\right) \cap\overline{B}_{c} \subset\mathrm{dom}\left( \eta_{\iota
}\circ\eta_{i}^{-1}\right) ,\label{eq:dom-cond}\\
\eta_{\iota}\circ\eta_{i}^{-1}(\theta) \in B_{c}\left( 0,R-\rho
-\Delta\right) .\label{eq:image-cond}%
\end{eqnarray}
For $C_{\iota\,i}$ defined as in (\ref{eq:C-def}), constructed for
$[d(\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}^{-1})(\mathrm{dom}(\eta_{\iota
}\circ\eta_{i}^{-1}))],$ we assume that $C_{\iota\,i}$ is invertible and that
for $\gamma=(a,b,c)=C_{\iota\,i}^{-1}\boldsymbol{\gamma}_{1}$ we have%
\begin{equation}
\mathbf{a}_{0}>\mathbf{m}a,\quad\mathbf{b}_{0}>\mathbf{m}b,\quad\mathbf{c}%
_{0}>\mathbf{m}c.\label{eq:to-gamma0-bounds}%
\end{equation}
\end{enumerate}
If assumptions \ref{itm:lem-setup1}, \ref{itm:lem-setup2} hold, then for any
horizontal disc $\mathbf{h}$ in a ch-set with cones $(M,\boldsymbol{\gamma
}_{1})$ and for any $i\in I$ there exists $\left( \iota,\kappa\right) \in J$
such that $\mathbf{h}(\overline{B}_{u}(0,r))\subset\mathrm{dom}(\tilde{\eta
}_{\iota}\circ\tilde{\eta}_{i}^{-1}).$ Also for any $q_{1},q_{2}$ in
$\mathrm{dom}(\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}^{-1})$ such that
$Q_{\boldsymbol{\gamma}_{1}}(q_{1}-q_{2})>0$ we have (\ref{eq:from-g1-to-g0}).
Furthermore condition (\ref{eq:h'-hor-disc}) holds.
\end{lemma}
\begin{proof}
Let $\mathbf{h}$ be a horizontal disc in a ch-set with cones
$(M,\boldsymbol{\gamma}_{1}).$ Take $\theta_{0}=\pi_{\theta}(\mathbf{h}(0)).$
For any $x\in\overline{B}_{u}(0,r)$ we have $Q_{\gamma_{1}}(\mathbf{h}%
(x)-\mathbf{h}(0))\geq0,$ which implies that%
\[
\mathbf{a}_{1}r^{2}\geq\mathbf{a}_{1}\left\Vert \pi_{x}(\mathbf{h}%
(x)-\mathbf{h}(0))\right\Vert ^{2}\geq-\mathbf{c}_{1}\left\Vert \pi_{\theta
}(\mathbf{h}(x))-\theta_{0}\right\Vert ^{2},
\]
hence $\pi_{\theta}(\mathbf{h}(\overline{B}_{u}(0,r)))\subset\overline{B}%
_{c}(\theta_{0},\sqrt{\frac{\mathbf{a}_{1}}{-\mathbf{c}_{1}}}r)\cap
\overline{B}_{c}.$ Taking $\iota$ from assumption \ref{itm:lem-setup2} for $\theta=\theta_{0},$
condition (\ref{eq:dom-cond}) implies that $\mathbf{h}(\overline{B}%
_{u}(0,r))\subset\mathrm{dom}(\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}^{-1})$
and also
\begin{equation}
\left\Vert \eta_{\iota}\circ\eta_{i}^{-1}(\theta_{0})\right\Vert
<R-\rho-\Delta.\label{eq:theta0-bound}%
\end{equation}
Take now any $q_{1},q_{2}$ in $\mathrm{dom}(\tilde{\eta}_{\iota}\circ
\tilde{\eta}_{i}^{-1})$ such that $Q_{\boldsymbol{\gamma}_{1}}(q_{1}%
-q_{2})>0.$ Applying (\ref{eq:to-gamma0-bounds}) and Lemma
\ref{lem:matrix-bounds-cc} gives%
\begin{eqnarray}
Q_{\boldsymbol{\gamma}_{0}}(\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}%
^{-1}(q_{1})-\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}^{-1}(q_{2})) &
>&\mathbf{m}Q_{\gamma}(\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}^{-1}%
(q_{1})-\tilde{\eta}_{\iota}\circ\tilde{\eta}_{i}^{-1}(q_{2}))\nonumber\\
& \geq&\mathbf{m}Q_{C_{\iota\,i}C_{\iota\,i}^{-1}\boldsymbol{\gamma}_{1}%
}(q_{1}-q_{2})\nonumber\\
& =&\mathbf{m}Q_{\boldsymbol{\gamma}_{1}}(q_{1}-q_{2})\label{eq:temp-cc-1}\\
& >&0,\nonumber
\end{eqnarray}
which proves (\ref{eq:from-g1-to-g0}). Applying the bound in
(\ref{eq:temp-cc-1}) for $q_{1}=\mathbf{h}(x_{1}),$ $q_{2}=\mathbf{h}(x_{2})$
gives
\begin{equation}
Q_{\boldsymbol{\gamma}_{0}}(\mathbf{h}^{\prime}(x_{1})-\mathbf{h}^{\prime
}(x_{2}))\geq0,\label{eq:h'-cone-cond}%
\end{equation}
which means that to prove (\ref{eq:h'-hor-disc}) it is sufficient to show that
$\mathbf{h}^{\prime}(\overline{B}_{u}(0,r))\subset M_{\iota,\kappa}$ for some
$\kappa.$ Let $\lambda=\eta_{i}^{-1}(\theta_{0}).$ We now take $\kappa$ from
assumption \ref{itm:lem-setup1}. For any $x\in\overline{B}_{u}(0,r),$ by (\ref{eq:h'-cone-cond})
we have%
\begin{eqnarray*}
\mathbf{a}_{0}r^{2} & \geq &\mathbf{a}_{0}\left\Vert \pi_{x}(\mathbf{h}%
^{\prime}(x)-\mathbf{h}^{\prime}(0))\right\Vert ^{2}\\
& \geq&-\mathbf{c}_{0}\left\Vert \pi_{\theta}(\mathbf{h}^{\prime
}(x)-\mathbf{h}^{\prime}(0))\right\Vert ^{2}\\
& =&-\mathbf{c}_{0}\left\Vert \pi_{\theta}(\mathbf{h}^{\prime}(x))-\eta
_{\iota}(\lambda)\right\Vert ^{2}.
\end{eqnarray*}
This means that
\[
\pi_{\theta}(\mathbf{h}^{\prime}(\overline{B}_{u}(0,r)))\subset\overline
{B}_{c}(\eta_{\iota}(\lambda),r\sqrt{\frac{\mathbf{a}_{0}}{-\mathbf{c}_{0}}})
\]
hence%
\begin{eqnarray*}
\left\Vert \pi_{\theta}(\mathbf{h}^{\prime}(x))-\eta_{\iota}%
(\boldsymbol{\lambda}_{\kappa})\right\Vert & \leq &\left\Vert \pi_{\theta
}(\mathbf{h}^{\prime}(x))-\eta_{\iota}(\lambda)\right\Vert +\left\Vert
\eta_{\iota}(\lambda)-\eta_{\iota}(\boldsymbol{\lambda}_{\kappa})\right\Vert
\\
& <&r\sqrt{\frac{\mathbf{a}_{0}}{-\mathbf{c}_{0}}}+\Delta\\
& <&\rho,
\end{eqnarray*}
which gives $\mathbf{h}^{\prime}(\overline{B}_{u}(0,r))\subset M_{\iota
,\kappa}.$ What needs to be verified last is whether $M_{\iota,\kappa}%
\subset\mathbf{B.}$ From our construction $\pi_{\theta}M_{\iota,\kappa
}=\overline{B}_{c}(\eta_{\iota}(\boldsymbol{\lambda}_{\kappa}),\rho).$ For
$\theta\in\overline{B}_{c}(\eta_{\iota}(\boldsymbol{\lambda}_{\kappa}),\rho),$
using (\ref{eq:lambda-Delta}) and (\ref{eq:theta0-bound})%
\begin{eqnarray*}
\left\Vert \theta\right\Vert & \leq &\left\Vert \theta-\eta_{\iota
}(\boldsymbol{\lambda}_{\kappa})\right\Vert +\left\Vert \eta_{\iota
}(\boldsymbol{\lambda}_{\kappa})-\eta_{\iota}(\lambda)\right\Vert +\left\Vert
\eta_{\iota}(\lambda)\right\Vert \\
& =&\left\Vert \theta-\eta_{\iota}(\boldsymbol{\lambda}_{\kappa})\right\Vert
+\left\Vert \eta_{\iota}(\boldsymbol{\lambda}_{\kappa})-\eta_{\iota}%
(\lambda)\right\Vert +\left\Vert \eta_{\iota}\circ\eta_{i}^{-1}(\theta
_{0})\right\Vert \\
& <&\rho+\Delta+(R-\rho-\Delta),
\end{eqnarray*}
hence $\pi_{\theta}M_{\iota,\kappa}\subset B_{c}.$
\end{proof}
\subsection{Normally hyperbolic manifolds from bounds on derivatives}
In Section \ref{sec:ver-cover} we have shown how covering relations from the
chain (\ref{eq:cover-sequence}) can be constructed using bounds on derivatives
of local maps. In Section \ref{sec:ver-cc} we have shown how the cones can be
set up, using bounds on derivatives of local maps, so that the condition
(\ref{eq:cc-local-iterates}) holds. Here we shall combine these results
together in Theorem \ref{th:main}.
We shall use the notations $T_{i_{2}i_{1}}$ and $G_{i_{2}i_{1}}$ introduced in
Sections \ref{sec:ver-cover}, \ref{sec:ver-cc} through equations
(\ref{eq:Tmatrix-bounds}), (\ref{eq:C-def}) and (\ref{eq:Gmatrix}). We will
also assume that the assumptions from Section \ref{sec:cover-cc-norm-hyp}
hold. Here we introduce a definition containing conditions that can be
verified using computer assistance. We will later show that the conditions
imply cone conditions.
\begin{definition}
\label{def:Forward-bounds}Assume that for any $(\iota_{0},\kappa_{0})\in J$
there exists an $n\in\mathbb{N},$ a sequence $\iota_{0}=i_{0},i_{1}%
,\ldots,i_{n}=\iota_{1}$ and $\kappa_{1}$ such that $(\iota_{1},\kappa_{1})\in
J$ and for
\begin{eqnarray*}
q^{m} & = &(x^{m},y^{m},\theta^{m}):=f_{i_{m}i_{m-1}}\circ\ldots\circ
f_{i_{1}i_{0}}(0,0,\eta_{i_{0}}(\boldsymbol{\lambda}_{\kappa_{0}})),\\
R^{m} & = &(r_{u}^{m},r_{s}^{m},r_{c}^{m}):=T_{i_{m}i_{m-1}}\circ\ldots\circ
T_{i_{1}i_{0}}(r,r,\rho),\\
\gamma^{m} & := &(a^{m},b^{m},c^{m}):=G_{i_{m}i_{m-1}}\circ\ldots\circ
G_{i_{1}i_{0}}\gamma_{0}%
\end{eqnarray*}
with $m\leq n$ we have%
\begin{eqnarray}
r_{u}^{m}+\left\Vert x^{m}\right\Vert < 1,\quad\quad r_{s}^{m}+\left\Vert
y^{m}\right\Vert <1,\quad\quad r_{c}^{m}+\left\Vert \theta^{m}\right\Vert
<1,\nonumber\\
r_{u}^{n} > r+\left\Vert x^{n}\right\Vert ,\quad\quad r_{s}^{n}+\left\Vert
y^{n}\right\Vert <r, \label{eq:radius-bounds}%
\end{eqnarray}
and
\[%
\begin{array}
[c]{lll}%
a^{m}>0, & 0>b^{m}, & 0>c^{m},\\
a^{n}>\mathbf{a}_{1},\quad & b^{n}>\mathbf{b}_{1},\quad & c^{n}>\mathbf{c}%
_{1}.
\end{array}
\]
Then we say that $f$ \emph{satisfies forward bounds.}
\end{definition}
\begin{remark}
To verify that $f$ satisfies forward bounds one needs to compute $q^{m},$
$R^{m}$ and $\gamma^{m}.$ Let us note that in the case of $q^{m}$ it is enough
to obtain bounds on a finite number of successive iterates of a single point.
We therefore do not need to obtain bounds on images of large sets, which in
practice would accumulate large errors. The $R^{m}$ and $\gamma^{m}$ are
constructed using local bounds on derivatives and are easily computable with
computer assistance. Let us also note that to verify forward bounds we do not
need to compute the composition function $f^{n}$ or its derivative (this would
most likely cause serious difficulties for large $n$ due to the complexity of
such computations and also because errors would accumulate quickly).
\end{remark}
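The data of Definition \ref{def:Forward-bounds} is indeed cheap to compute. The following Python sketch mimics the computation of $q^{m}$ and $R^{m}$ for a planar linear toy map with constant Jacobian (so $\overline{A}_{ij}=\underline{A}_{ij}=|A_{ij}|$ and there is no $\theta$ direction; all numbers are illustrative):

```python
def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(M)))

# constant Jacobian A (so Abar_ij = |A_ij| = Alow_ij); x expands, y contracts,
# and there is no theta direction in this planar toy
A = [[2.0, 0.01],
     [0.01, 0.5]]
T = [[A[0][0], -A[0][1]],     # t11 = Alow_11, t12 = -Abar_12
     [A[1][0], A[1][1]]]      # t2j = Abar_2j

r = 0.1
q = (0.0, 0.0)                # the origin is a fixed point, so q^m = 0 for all m
R = (r, r)
history = [R]
for m in range(3):            # R^m = T^m (r, r)
    R = matvec(T, R)
    history.append(R)
```

For $n=3$ the last pair satisfies $r_{u}^{n}>r$ and $r_{s}^{n}<r$, which (with $q^{m}=0$) is exactly the exit/entry part of (\ref{eq:radius-bounds}).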
\begin{lemma}
\label{lem:cc-from-forw-bounds}If $f$ satisfies forward bounds then $f$
satisfies cone conditions.
\end{lemma}
\begin{proof}
We take any $(\iota_{0},\kappa_{0})\in J$, a sequence $\iota_{0}=i_{0}%
,i_{1},\ldots,i_{n}=\iota_{1}$ and an index $\kappa_{1}$ such that $(\iota
_{1},\kappa_{1})\in J$ from Definition \ref{def:Forward-bounds}. We define
$R^{0}=R_{\varepsilon}^{0}:=(r,r,\rho)$ and%
\begin{eqnarray*}
R_{\varepsilon}^{m} & := &T_{i_{m}i_{m-1}}R_{\varepsilon}^{m-1}+(-\varepsilon
,\varepsilon,\varepsilon)\\
N_{m} & :=&N(q^{m},R_{\varepsilon}^{m}).
\end{eqnarray*}
By (\ref{eq:radius-bounds}), taking $\varepsilon$ sufficiently small, we will
ensure that $N_{m}\subset\mathbf{B}.$ By Lemma
\ref{lem:matrix-bounds-covering} we obtain $N_{m-1}\overset{f_{i_{m}i_{m-1}}%
}{\Longrightarrow}N_{m}$ for $m=1,\ldots,n$ and $N_{n}\overset{\mathrm{id}%
}{\Longrightarrow}M.$
Now we define $\gamma^{0}=\gamma_{\varepsilon}^{0}:=\boldsymbol{\gamma}_{0}$
and
\[
\gamma_{\varepsilon}^{m}:=G_{i_{m}i_{m-1}}\gamma_{\varepsilon}^{m-1}%
+(\varepsilon,\varepsilon,\varepsilon).
\]
Taking $\varepsilon>0$ small enough and applying Lemma
\ref{lem:matrix-bounds-cc} we obtain (\ref{eq:cc-local-iterates}).
\end{proof}
From now on let us assume that $f$ satisfies forward bounds with
$\boldsymbol{\gamma}_{0}=\boldsymbol{\gamma}_{0}^{\mathrm{forw}}.$
\begin{definition}
Let $\boldsymbol{\gamma}_{0}^{\mathrm{back}}=\left( \mathbf{a}_{0}%
^{\mathrm{b}},\mathbf{b}_{0}^{\mathrm{b}},\mathbf{c}_{0}^{\mathrm{b}}\right)
\in\mathbb{R}^{3}$ be such that $\mathbf{a}_{0}^{\mathrm{b}},\mathbf{c}%
_{0}^{\mathrm{b}}<0$ and $\mathbf{b}_{0}^{\mathrm{b}}>0.$ We say that $f$
\emph{satisfies backward bounds} if $f^{-1}$ satisfies forward bounds with
$\boldsymbol{\gamma}_{0}=\boldsymbol{\gamma}_{0}^{\mathrm{back}}$ and with
reversed roles of the $x$ and $y$ coordinates.
\end{definition}
\begin{theorem}
Assume that $f$ satisfies forward bounds for $\boldsymbol{\gamma}%
_{0}^{\mathrm{forw}}=\left( \mathbf{a}_{0}^{\mathrm{f}},\mathbf{b}%
_{0}^{\mathrm{f}},\mathbf{c}_{0}^{\mathrm{f}}\right) $ and backward bounds
for $\boldsymbol{\gamma}_{0}^{\mathrm{back}}=\left( \mathbf{a}_{0}%
^{\mathrm{b}},\mathbf{b}_{0}^{\mathrm{b}},\mathbf{c}_{0}^{\mathrm{b}}\right)
.$ If in addition inequality (\ref{eq:ab-ineq}) holds then there exists a
normally hyperbolic invariant manifold in $\mathcal{U},$ together with its
stable and unstable manifolds $W^{s},$ $W^{u}.$
\end{theorem}
\begin{proof}
This follows directly from Lemma \ref{lem:cc-from-forw-bounds} and Theorem
\ref{th:main}.
\end{proof}
\section{Example of applications\label{sec:examples}}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.45in]{figures/coord.eps}\includegraphics[width=3in]{figures/Fig6.eps}
\end{center}
\caption{Misleading numerical plot of the attractor for $T$, obtained using
double precision (grey), and the true invariant curve computed with 128-bit
accuracy (black). }%
\label{fig:misleading200}%
\end{figure}
Consider a driven logistic map
\begin{eqnarray} \label{eq:driven}
& T :S^{1}\times\mathbb{R}\rightarrow S^{1}\times\mathbb{R}, \nonumber \\
& T(\theta,x) =(\theta+\alpha,1-a(\theta)x^2),\quad a(\theta)=
a_0+\varepsilon\sin(2\pi\theta)
\end{eqnarray}
which differs from the well-known logistic map in the fact that the parameter
$a$ has been replaced by $a_0+\varepsilon\sin(2\pi\theta)$ and $\theta$ has
quasiperiodic dynamics. Concretely we consider the parameter values
$a_0=1.31,$ $\varepsilon=0.3$ and $\alpha=\frac{g}{200},$ where $g$ is the golden mean
$g=\frac{\sqrt{5}-1}{2}$, hence the dynamics on the base of the skew-product
is slow. Numerical simulations in double precision (that is, with a mantissa of
52 stored binary digits) suggest that the map possesses a chaotic global attractor
(see Figure \ref{fig:misleading200}, grey). We will prove that this guess is not
correct. When the same simulations are done with multiple precision, one can
guess that the attractor consists of two invariant curves (see Figure \ref{fig:misleading200}, black). We will use the
method introduced in the previous sections to prove that $T$ possesses a
contracting invariant manifold and, in particular, that the grey plot from Figure
\ref{fig:misleading200} does not show the true dynamics. The same example was
considered for other values of $\alpha$ and in a non-rigorous way in \cite{BSV}
to illustrate that one has to be careful with the arithmetics in simulations.
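For illustration, the map (\ref{eq:driven}) is straightforward to iterate in ordinary double precision; the following Python sketch (initial point and iterate count chosen arbitrarily) generates the kind of data behind the grey set of Figure \ref{fig:misleading200}.

```python
import math

def T(theta, x, a0=1.31, eps=0.3, alpha=(math.sqrt(5.0) - 1.0) / 2.0 / 200.0):
    """Driven logistic map: theta advances by alpha (mod 1), x -> 1 - a(theta) x^2."""
    a = a0 + eps * math.sin(2.0 * math.pi * theta)
    return (theta + alpha) % 1.0, 1.0 - a * x * x

theta, x = 0.0, 0.3          # arbitrary initial point
orbit = []
for _ in range(20000):
    theta, x = T(theta, x)
    orbit.append((theta, x))
```

The orbit stays bounded (in fact $|x|\leq 1$ after the first step), but in double precision its apparent chaotic structure is a numerical artifact.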
\subsection{Explaining the observed behavior}
To explain the reasons for the observed behavior it is worth mentioning that in
the example the parameter $a$ of the logistic map ranges in $[a_0-\varepsilon,a_0+\varepsilon]
=[1.01,1.61]$. On that range the attractor starts as a recently created (at
$a=1$) period-2 sink, followed by the full period-doubling cascade. Then one
finds strange attractors, first consisting of several pieces and later of a
single piece, interrupted by some periodic sinks and their corresponding
cascades. When $a$ moves with $\theta$ one can ask what the ``averaged''
behavior is. In particular the period-2 orbit is only attracting until $a=5/4.$
\begin{figure}[htbp]
\begin{center}
\epsfig{file=figures/lyap.eps,width=7cm}
\end{center}
\caption{The integrand $h(\theta)$ in (\ref{eq:averlyap}) for the parameter
values: $a_0=1.31,\,\varepsilon=0.30.$}
\label{fig:lyapinteg}
\end{figure}
To this end we can consider what happens for ``frozen'' values of $a$, denoting
as $T_a$ the corresponding logistic map. The orbit of period two, $x_1(a),
x_2(a),$ is given by the solutions of $x^2-x/a+(1-a)/a^2=0$. In particular
\begin{equation} \label{eq:xminus}
x_1(a)=(1-\sqrt{4a-3})/(2a).
\end{equation}
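As a quick non-rigorous sanity check of this formula in plain Python (the value $a=1.31$ is just a sample), one can verify that $x_1(a)$ solves the quadratic above and generates a genuine 2-cycle:

```python
import math

def Ta(x, a):
    """Frozen logistic map T_a(x) = 1 - a x^2."""
    return 1.0 - a * x * x

a = 1.31                                  # sample parameter value
x1 = (1.0 - math.sqrt(4.0 * a - 3.0)) / (2.0 * a)
x2 = Ta(x1, a)

# x1 solves x^2 - x/a + (1 - a)/a^2 = 0 and {x1, x2} is a genuine 2-cycle
residual = x1 ** 2 - x1 / a + (1.0 - a) / a ** 2
multiplier = (-2.0 * a * x1) * (-2.0 * a * x2)   # DT_a^2 along the cycle
```

The computed cycle multiplier agrees with the value $4(1-a)$ discussed next.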
The differential of $T_a^2$ on it is $4(1-a).$ To average with respect to
$\theta$ along the range, and noting that $a-1>0$ for the full range, we have to
consider the average of the Lyapunov exponent given as
\begin{equation} \label{eq:averlyap}
\frac{1}{2} \int_0^1 \log(4(a_0-1+\varepsilon\sin(2\pi\theta)))d\theta=
\frac{1}{2}\log(2(a_0\!-\!1\!+\!\sqrt{(a_0\!-\!1)^2\!-\!\varepsilon^2}))
\end{equation}
which for $a_0=1.31,\varepsilon=0.3$ gives $\Lambda_\infty\approx -0.12666931.$ The
integrand is shown in Figure \ref{fig:lyapinteg}. For the skew product, assuming
$\alpha\notin\mathbb{Q}$ and sufficiently small, the two curves which form the
attractor (as will be proved later) are very close to the curves $x_1(a),x_2(a)$
of the frozen system. Figure \ref{fig:attrac.lower} displays the lower one. Also
the Lyapunov exponent of the driven map with $\alpha=g/N,\,N=200$, computed using
$10^5$ iterates after a transient of another $10^5$ iterates, is $\Lambda_{200}
\approx -0.12680$. Using other values of $N$, like $100, 400, 800, 1600,$ the
respective values $\Lambda_N$ obtained are $-0.12725,\,-0.12670,\,-0.126696,\,
-0.126689$, tending to the limit $\Lambda_\infty$.
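Both sides of (\ref{eq:averlyap}) are easy to check numerically. A Python sketch (a midpoint rule stands in for exact integration; this is a plain floating-point check, not part of the rigorous proof):

```python
import math

a0, eps = 1.31, 0.3

def h(theta):
    """Integrand of the averaged Lyapunov exponent."""
    return 0.5 * math.log(4.0 * (a0 - 1.0 + eps * math.sin(2.0 * math.pi * theta)))

# right-hand side: the closed form of the averaged Lyapunov exponent
closed = 0.5 * math.log(2.0 * (a0 - 1.0 + math.sqrt((a0 - 1.0) ** 2 - eps ** 2)))

# left-hand side: midpoint-rule approximation over one period
n = 200000
numeric = sum(h((k + 0.5) / n) for k in range(n)) / n
```

For a smooth periodic integrand the midpoint rule converges very fast, so the two values agree to high accuracy and match the quoted $\Lambda_\infty\approx -0.12666931$.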
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4in]{figures/coord.eps}\quad
\epsfig{file=figures/fig6b.eps,width=7cm}
\end{center}
\caption{The lower part of the attractor, the graph of $x_1(a(\theta))$, for the
parameter values: $a_0=1.31,\,\varepsilon=0.30.$}
\label{fig:attrac.lower}
\end{figure}
The numerical difficulties are easy to understand. To compute the Lyapunov
exponents, starting at a point $x_0$ with an initial vector $v_0=1$ and setting
$S_0=0$, we compute recursively
\[ \hat{v}_{j+1}=DT_a(x_j)(v_j),\quad x_{j+1}=T_a(x_j),\quad
n_{j+1}=|\hat{v}_{j+1}|, \quad v_{j+1}=\hat{v}_{j+1}/n_{j+1},\]
\[ S_{j+1}=S_j+\log(n_{j+1}). \]
The values $S_j$ are denoted as Lyapunov sums and the average slope as a
function of $j$ (if it exists) gives the Lyapunov exponent $\Lambda.$ For
details and generalisations see, e.g., \cite{S01} and \cite{LSSW} and references
therein.
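The recurrence above can be implemented in a few lines. The following Python sketch applies it to a frozen logistic map at $a=1.2$, where the period-2 sink is attracting and the exact exponent $\frac{1}{2}\log(4(a-1))$ is known (the initial point and iterate counts are arbitrary choices):

```python
import math

a = 1.2                       # frozen parameter: the period-2 sink is attracting
x, v, S = 0.3, 1.0, 0.0
n_transient, n_iter = 1000, 100000
for j in range(n_transient + n_iter):
    v_hat = -2.0 * a * x * v          # DT_a(x) v for T_a(x) = 1 - a x^2
    x = 1.0 - a * x * x
    norm = abs(v_hat)
    v = v_hat / norm                  # renormalize to avoid under/overflow
    if j >= n_transient:
        S += math.log(norm)           # Lyapunov sums S_j

lyap = S / n_iter                     # should approach 0.5*log(4*(a-1))
```

The renormalization is exactly what produces the Lyapunov sums $S_j$ whose oscillations are discussed next.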
Even when $\Lambda$ is negative it can happen that partial sums have strong
oscillations. Given the values of $S_j,\,j=0,\ldots,k,$ let $(S_k)_{\text{min}}$
be the minimum of these values and introduce $O_k=S_k-(S_k)_{\text{min}}$. We define the
maximal oscillation of the Lyapunov sums as $OS=\max_k\{O_k\}$. Figure
\ref{fig:oscillation} shows the behavior of $S_j$ for $\alpha=g/200$ and also
some of the initial oscillations for $\alpha=g/1600$. A non-rigorous computation
of $OS$ for $N=100,200,400,800,1600$ with $10^5$ iterates after a transient
gives the values $28.845,\,56.761,\,112.632,\,224.379,\,447.874,$ respectively.
This implies a loss in the number of decimal digits equal to these values
divided by $\log(10)$. In particular, between 24 and 25 digits for $N=200$,
which explains the failure seen in Figure \ref{fig:misleading200}. For small
$\alpha$ the maximal oscillation tends to be
\begin{equation} \label{eq:oscill}
\frac{1}{\alpha}\int_{\theta_2-1}^{\theta_1} h(\theta)d\theta,
\end{equation}
where $h(\theta)$ is the function which appears as integrand in
(\ref{eq:averlyap}), extended by periodicity outside $[0,1]$, while
$\theta_1=\frac{3}{4}-\frac{1}{2\pi}\cos^{-1}(0.2),$
$\theta_2=\frac{3}{4}+\frac{1}{2\pi}\cos^{-1}(0.2)$ are the values at which $h$
becomes equal to zero (see Figure \ref{fig:lyapinteg}). The value of the
maximal oscillation in (\ref{eq:oscill}) is $\approx 0.172660185/\alpha$ for
small $\alpha$, which for $\alpha=g/N$ becomes $\approx 0.27937N$, in good
agreement with the previous results.
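The window integral in (\ref{eq:oscill}) can be reproduced numerically; a Python sketch (midpoint rule, non-rigorous):

```python
import math

a0, eps = 1.31, 0.3

def h(theta):
    """Integrand of (eq:averlyap), i.e. half the log of the 2-cycle multiplier."""
    return 0.5 * math.log(4.0 * (a0 - 1.0 + eps * math.sin(2.0 * math.pi * theta)))

c = math.acos(0.2) / (2.0 * math.pi)
theta1, theta2 = 0.75 - c, 0.75 + c    # zeros of h

# midpoint rule for the integral of h over [theta2 - 1, theta1], where h >= 0
lo_end, hi_end = theta2 - 1.0, theta1
n = 200000
I = sum(h(lo_end + (hi_end - lo_end) * (k + 0.5) / n) for k in range(n)) \
    * (hi_end - lo_end) / n

g = (math.sqrt(5.0) - 1.0) / 2.0
os_per_N = I / g      # maximal oscillation ~ os_per_N * N for alpha = g/N
```

The computed integral matches the quoted $\approx 0.172660185$, and dividing by $g$ recovers the slope $\approx 0.27937$ per unit of $N$.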
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
\epsfig{file=figures/fig6c.eps,width=5.5cm} &
\epsfig{file=figures/fig6d.eps,width=5.5cm}
\end{tabular}
\end{center}
\caption{Oscillations of the Lyapunov sums. Left: the Lyapunov sums for $N=200$.
Right: some initial sums for $N=1600.$ Parameter values: $a_0=1.31,\,\varepsilon=0.30$
and $\alpha=g/N.$}
\label{fig:oscillation}
\end{figure}
Using these ideas one can even predict when simulations with too few digits
will display an attractor that does not look like a period-2 curve. Assume that
we do computations with $d$ decimal digits and that in a plot like the one in
Figure \ref{fig:misleading200} one can distinguish pixels which are at a
distance of $10^{-p}$. In our example
reasonable values of $d,p$ are 16 and 4. This means that from $\theta_2-1$, when
$h$ becomes positive, till some unknown $\theta_d$ when the ``departure'' of the
iterates from the curve becomes visible, the factor of amplification of errors is
$10^{d-p}$ or, in logarithmic scale $(d-p)\log(10)$. This requires
\[\frac{1}{\alpha}\int_{\theta_2-1}^{\theta_d} h(\theta)d\theta=(d-p)\log(10).\]
In our example one finds $\theta_d\approx 0.258$ in good agreement with the
observed numerics in Figure \ref{fig:misleading200}. In a similar way one can
predict the ``landing'' value $\theta_l$ at which the points seen as chaotic in
Figure \ref{fig:misleading200} are close enough to the real invariant curves.
As the distance from the chaotic points to the true attractor is of the order
of 1, the condition is now
\[\frac{1}{\alpha}\int_{\theta_1}^{\theta_l} h(\theta)d\theta=p\log(10).\]
For the example one obtains $\theta_l\approx 0.629,$ again in good agreement
with the observed numerics.
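The two integral conditions above are easy to evaluate numerically. The sketch below is a reconstruction, not code from the paper: it assumes the drive $a(\theta)=a_0+\varepsilon\sin(2\pi\theta)$ and takes for $h$ the per-iterate rate $h(\theta)=\frac{1}{2}\log|4(a(\theta)-1)|$ along the period-2 orbit of the quadratic family; both guesses reproduce the zeros $\theta_{1,2}$ quoted above exactly.

```python
import math

# Reconstructed ingredients (assumptions, not taken from the text):
# drive a(theta) = a0 + eps*sin(2*pi*theta) and averaged rate
# h(theta) = (1/2) * log|4(a(theta) - 1)|, the T^2-multiplier along the
# period-2 orbit; both vanish exactly at the quoted theta_{1,2}.
a0, eps = 1.31, 0.30
g = (math.sqrt(5.0) - 1.0) / 2.0
alpha = g / 200.0            # N = 200
d, p = 16, 4                 # decimal digits used / pixel resolution 10^-p

def h(theta):
    a = a0 + eps * math.sin(2.0 * math.pi * theta)
    return 0.5 * math.log(abs(4.0 * (a - 1.0)))

def integral(lo, hi, n=2000):
    """Composite Simpson rule for the integral of h over [lo, hi]."""
    w = (hi - lo) / n
    s = h(lo) + h(hi)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * h(lo + i * w)
    return s * w / 3.0

theta1 = 0.75 - math.acos(0.2) / (2.0 * math.pi)   # h changes + -> -
theta2 = 0.75 + math.acos(0.2) / (2.0 * math.pi)   # h changes - -> +

# Solve (1/alpha) * int_{theta2-1}^{theta_d} h = (d - p) log 10 by bisection;
# h > 0 on (theta2 - 1, theta1), so the left-hand side is monotone in theta_d.
target = (d - p) * math.log(10.0) * alpha
lo, hi = theta2 - 1.0, theta1
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if integral(theta2 - 1.0, mid) < target:
        lo = mid
    else:
        hi = mid
theta_d = 0.5 * (lo + hi)
print(theta_d)    # about 0.25
```

With $d=16$, $p=4$ and $\alpha=g/200$ this yields $\theta_d\approx 0.25$, of the same order as the observed $\theta_d\approx 0.258$; a small discrepancy is expected, since the true $h$ averages over the actual attractor rather than the limiting period-2 orbit.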
This ``delayed'' observation of the expanding and compressing regimes is
similar, but now due to purely numerical reasons, to the delay of bifurcation
that can be observed in systems depending on a parameter which has slow dynamics
(see \cite{NST} and references therein).
\subsection{Some limit cases}
Now we discuss two limit cases. The first is the case in which $a(\theta)$
covers a wide range. The second aims at describing the differences between
the union of the curves $x_1(a(\theta))$ and $x_2(a(\theta))$ and the true
attractor for $\alpha$ small enough.
According to (\ref{eq:averlyap}) and assuming that for $\alpha$ sufficiently
small the attractor is close to the union of the curves $x_{1,2}(a(\theta))$
it is enough to take $a_0=1.5-\delta_1,\,\varepsilon=0.5-\delta_1-\delta_2$ with
$0<\delta_2\leq \delta_1^2$ to have a negative limit averaged Lyapunov exponent
$\Lambda_\infty.$ If $\delta_1$ is small the values of $a$ almost cover the
full range $(1,2)$. Figure \ref{fig:simbifdia} displays the results of the
observed behavior using {\tt double precision} for the values $\delta_1=0.005,\,
\delta_2=10^{-6},\,\alpha=g/60000$.
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=0.4in]{figures/coord.eps}\quad
\epsfig{file=figures/mapsimbd.eps,width=8cm}
\end{center}
\caption{Simulations in {\tt double precision} for values of $a_0,\varepsilon$ such
that $a(\theta)$ almost covers the range $(1,2)$ and $\alpha$ very small. See
the text for the numerical values used.}
\label{fig:simbifdia}
\end{figure}
The figure is reminiscent of the ``bifurcation diagram'' of the logistic map.
In fact, a typical way to compute the diagram consists of taking a sample of
values of $a$, do some transient iterates and display some of the next iterates.
Now the value of $a$ is changed at every step according to (\ref{eq:driven})
but very slowly, and the transient is discarded.
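This sweep is easy to reproduce. The sketch below is a reconstruction rather than the authors' code: the expressions $x_{1,2}(a)$ involving $\sqrt{4a-3}$ suggest the quadratic family $x\mapsto a-x^2$, and the drive is assumed to be $a(\theta)=a_0+\varepsilon\sin(2\pi\theta)$, $\theta\mapsto\theta+\alpha \pmod 1$.

```python
import math

# Assumed model (a reconstruction): quadratic family x -> a - x^2, slowly
# driven by a(theta) = a0 + eps*sin(2*pi*theta), theta -> theta + alpha (mod 1).
a0, eps = 1.31, 0.30
g = (math.sqrt(5.0) - 1.0) / 2.0
alpha = g / 200.0

def sweep(theta0=0.0, x0=0.5, transient=1000, keep=4000):
    """Iterate the driven map, discard a transient, keep the next iterates."""
    theta, x = theta0, x0
    pts = []
    for n in range(transient + keep):
        a = a0 + eps * math.sin(2.0 * math.pi * theta)
        x = a - x * x
        theta = (theta + alpha) % 1.0
        if n >= transient:
            pts.append((theta, x))
    return pts

pts = sweep()
# For these parameters a(theta) lies in [1.01, 1.61]; the strip |x| <= 1.61 is
# forward invariant (a - x^2 <= 1.61 and a - x^2 >= 1.01 - 1.61^2 > -1.61).
```

Plotting the pairs $(\theta_n,x_n)$ reproduces a picture of the same qualitative type as Figure \ref{fig:simbifdia}; for the actual figure one would use the near-critical parameters $\delta_1=0.005$, $\delta_2=10^{-6}$ and $\alpha=g/60000$.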
From $\theta=3/4$ (for which the minimum of $a(\theta)$ is achieved) to
$\theta=5/4$ (mod 1) (for which one achieves the maximum) the plot looks like
that diagram, except for the bifurcation delays at the period doublings from
period 2 to period 4 and successive ones. In the range $\theta\in[1/4,3/4]$ the
reverse situation is seen, but now with much smaller bifurcation delays. The
authors do not know if, for the present values of the parameters, the attractor
will become close to the union of $x_1(a(\theta))$ and $x_2(a(\theta))$ for
computations done with a huge number of digits.
To look for the expression of the attractor as the union of two smooth curves,
assuming it is of that type, we restrict our attention to the lower part of it,
close to $x_1(a(\theta))$ as given in (\ref{eq:xminus}). In principle it is
convenient to work with $T^2$ but, as the eigenvalues of $T^2$ along the
period-2 points are negative, we prefer to work with $T^4$. We look for the attractor
as the graph of a function expanded in powers of $\alpha$
\begin{equation} \label{eq:graph}
G(\theta)=G_0(\theta)+\alpha G_1(\theta)+\alpha^2G_2(\theta)+\ldots,
\end{equation}
where $G_0(\theta)=x_1(a(\theta))$ is the zeroth order approximation. The map
$T^4(\theta,G(\theta))$ is $\mathcal{O}(\alpha)$ close to the identity. Hence, it can
be approximated by a smooth flow (see \cite{BRS} for proofs, an example of
application and additional references, as well as \cite{N} for general results)
and the curve we are looking for is a periodic solution of this flow. But we
shall proceed by imposing directly the invariance condition.
Starting at a point of the form $(\theta,G(\theta))$ and doing four iterations
using the values $a(\theta),a(\theta+\alpha),\,a(\theta+2\alpha),\,
a(\theta+3\alpha)$ we should have
\begin{equation} \label{eq:invariance}
T^4(\theta,G(\theta))-(\theta+4\alpha,G(\theta+4\alpha)) =0.
\end{equation}
Given values of $a_0,\,\varepsilon$ it is a cumbersome but elementary task to obtain in
a recurrent way the expressions of $G_1,\,G_2,\ldots$ from
(\ref{eq:invariance}). It is essential to reduce the dependence on $G_0(\theta)$,
using the equation satisfied by $x_1(a)$, so that only the first power of
$G_0$ appears. We note also that in the computation
of all the terms $G_j$ the factor $16a^2-32a+15=(4a-5)(4a-3)$ appears in the
denominator; it vanishes for $a=5/4$, but a careful examination shows
that the factor $4a-5$ is also present in the numerator.
In this way one obtains
\begin{equation} \label{eq:order1}
G_1(\theta)=\frac{3-2a-(8a-9)/\sqrt{4a-3}}{2a^2(4a-3)}2\pi\varepsilon\cos(2\pi\theta),
\end{equation}
where $a$ stands for $a(\theta)$ as introduced in (\ref{eq:driven}).
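For concreteness, the first-order term can be evaluated directly. The only assumption in the sketch below is the form of the drive, $a(\theta)=a_0+\varepsilon\sin(2\pi\theta)$ (so that $2\pi\varepsilon\cos(2\pi\theta)$ is exactly $a'(\theta)$); the rest is a literal transcription of (\ref{eq:order1}).

```python
import math

# Literal transcription of (eq:order1); the only assumption is the drive
# a(theta) = a0 + eps*sin(2*pi*theta), for which 2*pi*eps*cos(2*pi*theta)
# equals a'(theta).
a0, eps = 1.31, 0.30

def a(theta):
    return a0 + eps * math.sin(2.0 * math.pi * theta)

def G1(theta):
    av = a(theta)
    num = 3.0 - 2.0 * av - (8.0 * av - 9.0) / math.sqrt(4.0 * av - 3.0)
    den = 2.0 * av * av * (4.0 * av - 3.0)
    return num / den * 2.0 * math.pi * eps * math.cos(2.0 * math.pi * theta)

# G1 vanishes (up to rounding) at the extrema of a(theta), theta = 1/4, 3/4,
# because of the cos(2*pi*theta) factor.
print(G1(0.0), G1(0.25), G1(0.75))
```

Note that the evaluation is well defined for all $\theta$: the cancelled factor $4a-5$ no longer appears, and $4a(\theta)-3>0$ throughout the range of the drive.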
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
\epsfig{file=figures/graphorder1.eps,width=5.5cm} &
\epsfig{file=figures/graphorder2.eps,width=5.5cm}
\end{tabular}
\end{center}
\caption{Graphs of $G_1(\theta)$ (left) and $G_2(\theta)$ (right) for
$a_0=1.31,\,\varepsilon=0.30$.}
\label{fig:order1_2}
\end{figure}
The computation of $G_2$ is much more involved. The simplest expression is
given as a rational function depending on $a(\theta),G_0(\theta),G_1(\theta)$
and up to the second derivatives of these functions with respect to $\theta$.
Instead, Figure \ref{fig:order1_2} displays the graph of $G_1$ and $G_2$ for
$a_0=1.31,\,\varepsilon=0.30$. The graph of $G_0(\theta)$ is very close to the attractor
shown in Figure \ref{fig:attrac.lower}.
To see tiny details of the attractor, Figure \ref{fig:remainder} displays the
differences between the lower part of the attractor, computed with enough
digits, and the approximation in (\ref{eq:graph}) up to order 2 in $\alpha$.
\begin{figure}[htbp]
\begin{center}
\begin{tabular}{cc}
\epsfig{file=figures/remainder200.eps,width=5.5cm} &
\epsfig{file=figures/remainder1000.eps,width=5.5cm}
\end{tabular}
\end{center}
\caption{Differences between the attractor and the second order approximation
for $a_0=1.31,\,\varepsilon=0.30$ and $\alpha=g/N$. Left: $N=200$. Right: $N=1000$.}
\label{fig:remainder}
\end{figure}
The left part shows tiny oscillations which were not visible in Figure
\ref{fig:attrac.lower}. They reach a maximum at the value $\theta=\theta_1$ for
which $h(\theta)$ in (\ref{eq:oscill}) changes from positive to negative. As one
can expect, the shape of these oscillations is a bump function multiplied by
a periodic function (close to a sine) with period $4\alpha$. A similar behavior
is observed for many other values of $a_0,\varepsilon$ and $\alpha$. When the
oscillations start at a larger distance from $\theta_1$ they can amplify in such
a way that the attractor is no longer the union of the two curves. One can
suspect that it becomes a non-chaotic strange attractor (see, e.g., \cite{Greb}
and \cite{Ja}). In contrast, with the same values of $a_0,\varepsilon$ but for $N=1000$
the oscillations are not observed and the very small differences in the plot on
the right hand side of Figure \ref{fig:remainder} are mainly due to the third
order term in (\ref{eq:graph}).
\subsection{Computer assisted proof of existence of invariant curves}
In this section we apply our method from Sections \ref{sec:geometric}, \ref{sec:covering-ver-ex} to prove that for parameters $a_0=1.31,$ $\varepsilon=0.3$ and $\alpha=\frac{g}{200},$ with $g=\frac{\sqrt{5}-1}{2}$ the map $T$ has an invariant curve.
In a neighborhood of the numerical guess for the attractor, the map
$T^{2}$ is locally invertible. This is due to the fact that our curve is
separated from the $x$-axis. For our proof we consider
\[
f=T^{-2}.
\]
The attractor is first computed (nonrigorously) by iterating $T$ forwards in
time. We then choose a set $\mathcal{V}$ around the attractor (see Figure
\ref{fig:attr-bounds}, gray). For most $\theta$ the set is a neighborhood of
the attractor of radius $0.001$. Close to the angle $\theta=\frac{3}{4}$ we
choose $\mathcal{V}$ to be tighter, so that we are sure that it lies within
the domain of $f$ (see Figure \ref{fig:attr-bounds}). Our aim is to prove that
inside of $\mathcal{V}$ we have an invariant normally hyperbolic curve of $f.$
The map $f$ is not uniformly expanding in the $x$ direction. Over one part of
the set $\mathcal{V}$ the map $f$ is strongly expanding, elsewhere it is
contracting. A part of the expansion region, which we denote by
$\mathcal{U}\subset\mathcal{V}$, is depicted in red and green (the green region is at
the left tip of the red region and is pointing towards the attractor) in Figure
\ref{fig:attr-bounds}. On this set we place ch-sets $N_{1},\ldots,N_{168}$ of
width $\frac{\alpha}{2}$, starting with $N_{1}$ on the left and finishing with
$N_{168}$ on the right. We shall use the notation%
\[
U_{k,l}=\bigcup_{i=k}^{l}N_{i}.
\]
Our ch-sets are parallelograms. The coordinate $x$ is globally expanding for
$f$ and the coordinate $\theta$ is normal (our map does not possess a globally
contracting coordinate $y$). The exit sets $N_{i}^{-}$ for the ch-sets are
the top and bottom edges of the parallelograms. The map $f$ moves the ch-sets
to the left. We distinguish two parts of the set $\mathcal{U}$: the set
$U_{1,4}$ in our plots is denoted in green colour, $U_{5,168}$ is denoted in
red. Since the width of the ch-sets is $\frac{\alpha}{2},$ for $k\in
\{5,\ldots,168\}$ we have%
\[
\pi_{\theta}f(N_{k})\subset\pi_{\theta}N_{k-4}.
\]
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.45in]{figures/coord.eps}\hspace{1cm}%
\includegraphics[height = 1.8in]{figures/Fig7.eps}
\end{center}
\caption{Positioning of our ch-sets (green and red) and the set $\mathcal{V}$
(gray) relative to the attractor (on this plot on the $\theta$-axis in black).}%
\label{fig:attr-bounds}%
\end{figure}
In Section \ref{sec:covering-ver-ex} we shall show that (see Figures
\ref{fig:ch-sets-covering}, \ref{fig:covering2})%
\begin{equation}
N_{k}\overset{f}{\Longrightarrow}N_{k-4}\quad\text{for }k\in\{5,\ldots,168\},
\label{eq:covering-single}%
\end{equation}
and also that for $i=1,...,4$ (see Figure
\ref{fig:ch-sets-covering})%
\[
N_{i}\overset{f^{128}}{\Longrightarrow}U_{5,168}.
\]
In Section \ref{sec:cc-ver-ex} we show how to verify cone conditions. In
Section \ref{sec:tech-notes} we briefly describe the tools that were used to
conduct the proof.
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.4in]{figures/coord.eps}
\includegraphics[height = 1.5in]{figures/Fig8a.eps}
\includegraphics[height = 1.5in]{figures/Fig8b.eps}
\end{center}
\caption{The ch-sets: $N_{1}^{-},\ldots,N_{4}^{-}$ in green and $N_{5}%
^{-},...,N_{168}^{-}$ in red (plotted relative to the attractor), together
with $f(N_{5}^{-}),...,f(N_{168}^{-})$ in dark blue and $f^{128}(N_{1}%
^{-}),...,f^{128}(N_{4}^{-})$ in black. }%
\label{fig:ch-sets-covering}%
\end{figure}\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.4in]{figures/coord.eps}\hspace{1cm}
\includegraphics[ height=1.5in]{figures/Fig9.eps}
\end{center}
\caption{Closeup of the covering $N_{i}\overset{f}{\Longrightarrow}N_{i-4}$
for $i=5,6,7,8$ (plotted relative to the attractor).}%
\label{fig:covering2}%
\end{figure}
\subsubsection{Verification of covering conditions\label{sec:covering-ver-ex}}
To describe how covering conditions are verified we start with a seemingly
unrelated discussion. Consider a polynomial $p:[0,r]\rightarrow\mathbb{R}$ of
degree $n$%
\[
p(\theta)=\sum_{j=0}^{n}a_{j}\theta^{j},
\]
and a function $g:\mathbb{R}\rightarrow\mathbb{R}.$ Using Taylor expansion and
defining two polynomials $\overline{p}$ and $\underline{p}$, of degree $n$%
\begin{eqnarray}
\overline{p}(\theta) & = &g\circ p(0)+\sum_{j=1}^{n-1}\left( \frac{1}%
{j!}\frac{d^{j}\left( g\circ p\right) }{d\theta^{j}}(0)\right) \theta
^{j}\label{eq:p-upper}\\
&& +\frac{1}{n!}\left( \frac{d^{n}\left( g\circ p\right) }{d\theta^{n}%
}(0)+\frac{1}{n+1}\sup_{v,w\in\lbrack0,r]}\frac{d^{n+1}\left( g\circ
p\right) }{d\theta^{n+1}}(v)w\right) \theta^{n},\nonumber\\
\underline{p}(\theta) & =&g\circ p(0)+\sum_{j=1}^{n-1}\left( \frac{1}%
{j!}\frac{d^{j}\left( g\circ p\right) }{d\theta^{j}}(0)\right) \theta
^{j}\label{eq:p-lower}\\
&& +\frac{1}{n!}\left( \frac{d^{n}\left( g\circ p\right) }{d\theta^{n}%
}(0)+\frac{1}{n+1}\inf_{v,w\in\lbrack0,r]}\frac{d^{n+1}\left( g\circ
p\right) }{d\theta^{n+1}}(v)w\right) \theta^{n},\nonumber
\end{eqnarray}
for any $\theta\in\lbrack0,r]$ we have%
\begin{equation}
\underline{p}(\theta)\leq g(p(\theta))\leq\overline{p}(\theta).
\label{eq:edge-bounds}%
\end{equation}
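The bound (\ref{eq:edge-bounds}) can be tested on a toy case. In the sketch below (our illustration, not part of the proof) we take $g=\sin$ and an affine $p$, so that all derivatives of $g\circ p$ are available in closed form; the $\sup/\inf$ terms in (\ref{eq:p-upper})--(\ref{eq:p-lower}) are replaced by the cruder two-sided bound $\pm c_1^{n+1}r$, which can only widen the enclosure.

```python
import math

# Toy check of p_lower <= g(p(theta)) <= p_upper on [0, r] for g = sin and
# affine p(theta) = c0 + c1*theta, so that
#   d^j (g o p) / dtheta^j (t) = c1**j * sin(c0 + c1*t + j*pi/2).
c0, c1, r, n = 0.3, 2.0, 0.05, 2

def deriv(j, t):
    return c1 ** j * math.sin(c0 + c1 * t + j * math.pi / 2.0)

def enclosure():
    # Taylor coefficients of orders 0 .. n-1 at theta = 0, shared by both bounds.
    coeffs = [deriv(j, 0.0) / math.factorial(j) for j in range(n)]
    # |d^{n+1}(g o p)| <= c1**(n+1), so sup/inf_{v,w} d^{n+1}(v)*w lies inside
    # [-c1**(n+1)*r, c1**(n+1)*r]; widening the interval keeps the bounds valid.
    rem = c1 ** (n + 1) * r / (n + 1)
    upper = coeffs + [(deriv(n, 0.0) + rem) / math.factorial(n)]
    lower = coeffs + [(deriv(n, 0.0) - rem) / math.factorial(n)]
    return lower, upper

def horner(poly, t):
    # poly holds ascending coefficients
    s = 0.0
    for c in reversed(poly):
        s = s * t + c
    return s

lower, upper = enclosure()
ok = all(
    horner(lower, t) - 1e-12 <= math.sin(c0 + c1 * t) <= horner(upper, t) + 1e-12
    for t in (r * k / 200.0 for k in range(201))
)
print(ok)    # True
```

The two polynomials agree up to order $n-1$ and differ only in the top coefficient, exactly as in (\ref{eq:p-upper})--(\ref{eq:p-lower}).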
For any $i=1,...,168,$ the exit set $N_{i}^{-}$ consists of two lines and can
be expressed using two polynomials (in fact these are affine functions)
$p_{i}^{u},p_{i}^{d}:[0,\frac{\alpha}{2}]\rightarrow\mathbb{R,}$ $p_{i}%
^{d}(\theta)=a_{i,0}^{d}+a_{i,1}^{d}\theta,$ $p_{i}^{u}(\theta)=a_{i,0}%
^{u}+a_{i,1}^{u}\theta$ and a point $q_{i}\in\lbrack0,1),$%
\begin{eqnarray*}
N_{i}^{-} & = &N_{i,d}^{-}\cup N_{i,u}^{-},\\
N_{i,d}^{-} & =&\{(p_{i}^{d}(\theta),q_{i}+\theta)|\theta\in\lbrack
0,\frac{\alpha}{2}]\},\\
N_{i,u}^{-} & =&\{(p_{i}^{u}(\theta),q_{i}+\theta)|\theta\in\lbrack
0,\frac{\alpha}{2}]\},
\end{eqnarray*}%
\[
p_{i}^{d}(\theta)<p_{i}^{u}(\theta)\quad\text{for }\theta\in\lbrack
0,\frac{\alpha}{2}].
\]
We will now show how to construct a ch-set $M$ such that%
\begin{equation}
N_{i}\overset{f}{\Longrightarrow}M. \label{eq:ni-covering-1step}%
\end{equation}
We first verify that for any point $(\theta,x)\in N_{i}$ we have
$\frac{\partial f}{\partial x}(\theta,x)<0$. We then take
\begin{equation}
g^{u}(\theta):=f(q_{i}+\theta,p_{i}^{d}(\theta)),\quad g^{d}(\theta
):=f(q_{i}+\theta,p_{i}^{u}(\theta)), \label{eq:g-maps}%
\end{equation}
and construct $p^{u}(\theta)=\overline{p}(\theta)$ using (\ref{eq:p-upper})
and $p^{d}(\theta)=\underline{p}(\theta)$ using (\ref{eq:p-lower}), taking $g$
as functions $g^{u}$ and $g^{d}$ respectively. Formula (\ref{eq:edge-bounds})
guarantees that $f(N_{i,d}^{-})$ lies above the graph of $p^{u}(\theta)$ and
that $f(N_{i,u}^{-})$ lies below the graph of $p^{d}(\theta)$. If we now set%
\begin{eqnarray}
M^{-} & =&M_{d}^{-}\cup M_{u}^{-},\nonumber\\
M_{d}^{-} & =&\{(p^{d}(\theta),q_{i}-2\alpha+\theta)|\theta\in\lbrack
0,\frac{\alpha}{2}]\},\label{eq:Mi-procedure}\\
M_{u}^{-} & =&\{(p^{u}(\theta),q_{i}-2\alpha+\theta)|\theta\in\lbrack
0,\frac{\alpha}{2}]\},\nonumber
\end{eqnarray}
and take $M$ to be the set of points which lie above $M_{d}^{-}$ and below
$M_{u}^{-}$ then (\ref{eq:ni-covering-1step}) holds.
For $i=5,...,168,$ after applying the above procedure to obtain $M$ which is
covered by $N_{i},$ we compute bounds on the images of sets
\begin{equation}
p^{u}\left( \left[ \frac{j\alpha}{20},\frac{(j+1)\alpha}{20}\right]
\right) ,\quad p^{d}\left( \left[ \frac{j\alpha}{20},\frac{(j+1)\alpha}%
{20}\right] \right) \quad\text{for }j=0,...,9, \label{eq:boundary-bound}%
\end{equation}
in local coordinates of ch-sets $N_{i-4},$ to verify that we have
(\ref{eq:covering-single}) (subdividing $[0,\frac{\alpha}{2}]$ into ten
intervals turns out to be sufficient for all $i\in\{5,...,168\}$).
For $i=1,...,4$ we need to iterate the procedure (\ref{eq:Mi-procedure}) many
times to obtain a sequence of covering relations%
\[
N_{i}\overset{f}{\Longrightarrow}M_{1}\overset{f}{\Longrightarrow}%
M_{2}\overset{f}{\Longrightarrow}\ldots\overset{f}{\Longrightarrow}%
M_{127}\overset{f}{\Longrightarrow}U_{5,168}.
\]
During our construction we make sure that all sets $M_{k}$ for $k\in
\{1,...,127\}$ lie in $\mathcal{V}$, which readily holds since the sets are
very strongly contracted. Each covering $M_{k}\overset{f}{\Longrightarrow
}M_{k+1}$ holds by construction. Verifying that $M_{127}\overset
{f}{\Longrightarrow}U_{5,168}$ is done analogously to (\ref{eq:boundary-bound}).
In our computer assisted proof we take the degree of the polynomials for the
edges of the sets $M_{k}$ to be nine, which means that we need to perform
$C^{10}$ computations. Let us note that computationally this is not as heavy
as it might seem, since the $C^{10}$ computations are performed for one
dimensional functions $g^{u}(\theta)$ and $g^{d}(\theta)$ (see
(\ref{eq:g-maps})). The reduction of dimension truly pays off, since the
difference between $C^{10}$ computations in one and two dimensions is substantial.
The estimates obtained by us are very accurate. In Figure
\ref{fig:num-difference} we give a plot of $M_{128,u}^{-}$, which is the lower
bound estimate of the image of $N_{4,u}^{-}$ after the final step in our
procedure (in black), and compare it with ten points from $N_{4,u}^{-},$
iterated non-rigorously with high precision computations (in red). The curve
lies below the points, as it should, but this is impossible to distinguish in
the graph. The right hand side of Figure \ref{fig:num-difference} gives the
plot of the difference of the rigorous lower bound and non-rigorous
computation. They turn out to be very close.
\begin{remark}
The high order computations and multi-precision arithmetic in the current
approach seem essential. The sets $M_{k}$ constructed with our procedure are very strongly
contracted. The distance between the two curves of $M_{k}^{-}$ at the tightest
spot is of order $1.125\times10^{-25}$, which is extremely thin when compared
to the width of the curves $\frac{\alpha}{2}\approx1.545\times10^{-3}$; and
yet, with our $C^{10}$ approach, with little effort we are able to rigorously
keep them apart. Any standard approach, such as performing $C^{0}$
computations on sets or careful linearization with $C^{1}$ techniques through
local coordinates, is likely to fail.
\end{remark}
\begin{remark}
We believe that using a ``parallel shooting'' type approach it should be
possible to conduct the proof using double precision and $C^{1}$ computations
only (for this we would need a good a priori guess for the position of the
curve). Such an approach could produce a rigorous computer-assisted proof, using
double precision, of an invariant curve which is not detectable numerically in
{\tt double} precision computations. This shall be the subject of forthcoming work.
\end{remark}
\begin{figure}[ptb]
\begin{center}
\includegraphics[width=0.4in]{figures/coord.eps}
\includegraphics[width=2in]{figures/Fig10a.eps}
\includegraphics[width=2in]{figures/Fig10b.eps}
\end{center}
\caption{Left: Rigorous bound on the image of an edge of one of ch-sets after
128th iterate of the map (in black), together with non-rigorous computations
using multi-precision (red). Right: The difference between rigorous lower
bound and non-rigorous computations. }%
\label{fig:num-difference}%
\end{figure}
\subsubsection{Verification of cone conditions\label{sec:cc-ver-ex}}
To verify cone conditions let us first rescale our coordinates by%
\[
\gamma_{\beta}(\theta,x)=(\beta\theta,x).
\]
Taking $\beta$ sufficiently large, choosing sufficiently many points
$\mathbf{\lambda}_{i}\in\lbrack0,\beta)$ and taking $h_{i}:=\frac{1}{2}\left(
c^{u}\left( \mathbf{\lambda}_{i}\right) -c^{d}\left( \mathbf{\lambda}%
_{i}\right) \right) ,$ $q_{i}:=(\mathbf{\lambda}_{i},c^{d}\left(
\mathbf{\lambda}_{i}\right) +h_{i})$ and $V_{i}:=\mathcal{V}\cap\left(
\lbrack\mathbf{\lambda}_{i}-h_{i},\mathbf{\lambda}_{i}+h_{i}]\times
\mathbb{R}\right) $ we can construct local maps%
\[
\tilde{\eta}_{i}:V_{i}\rightarrow B_{c}\times B_{u},
\]
for which $\tilde{\eta}_{i}(V_{i}\cap c^{u})=B_{c}\times\{1\},$ $\tilde{\eta
}_{i}(V_{i}\cap c^{d})=B_{c}\times\{-1\}$ and which are arbitrarily close to a
linear map $q\rightarrow\frac{1}{h_{i}}(q-q_{i})$. In these local coordinates,
by taking sufficiently large $\beta$, we have the following bound on
derivatives of local maps (assuming that we choose $i,j$ and $p$ such that
$p\in dom(f_{ij})\neq\emptyset$)%
\begin{eqnarray*}
Df_{ij} & =&D\left( \tilde{\eta}_{i}\circ\gamma_{\beta}\circ f\circ
\gamma_{\beta}^{-1}\circ\tilde{\eta}_{j}^{-1}\right) (p)\\
&& \approx\left(
\begin{array}[c]{cc}%
\frac{1}{h_{i}} & 0\\
0 & \frac{1}{h_{i}}%
\end{array}
\right) \left(
\begin{array}[c]{cc}%
\beta & 0\\
0 & 1
\end{array}
\right) \left(
\begin{array}[c]{cc}%
\frac{df_{1}}{d\theta}(\gamma_{\beta}^{-1}(\tilde{\eta}_{j}^{-1}(p)) & 0\\
\frac{df_{2}}{d\theta}(\gamma_{\beta}^{-1}(\tilde{\eta}_{j}^{-1}(p)) &
\frac{df_{2}}{dx}(\gamma_{\beta}^{-1}(\tilde{\eta}_{j}^{-1}(p))
\end{array}
\right) \\
&& \left(
\begin{array}
[c]{cc}%
\beta^{-1} & 0\\
0 & 1
\end{array}
\right) \left(
\begin{array}
[c]{cc}%
h_{j} & 0\\
0 & h_{j}%
\end{array}
\right) \\
&& =\frac{h_{j}}{h_{i}}\left(
\begin{array}
[c]{cc}%
\frac{df_{1}}{d\theta}(\gamma_{\beta}^{-1}(\tilde{\eta}_{j}^{-1}(p)) & 0\\
\frac{1}{\beta}\frac{df_{2}}{d\theta}(\gamma_{\beta}^{-1}(\tilde{\eta}%
_{j}^{-1}(p)) & \frac{df_{2}}{dx}(\gamma_{\beta}^{-1}(\tilde{\eta}_{j}%
^{-1}(p))
\end{array}
\right) ,
\end{eqnarray*}
which in turn is arbitrarily close to $\frac{h_{j}}{h_{i}}diag(\frac{df_{1}%
}{d\theta},\frac{df_{2}}{dx}).$ This means that by using the artificial
rescaling $\gamma_{\beta}$ (without the actual need to apply it in practice
for our computer assisted proof), we can divide the region $\mathcal{V}$ into
a finite number of sets $U_{1},\ldots,U_{N}$ ($\mathcal{V}\subset\bigcup
_{i=1}^{N}U_{i}$), and verify cone conditions using interval matrices
$diag(\left[ \frac{df_{1}}{d\theta}(U_{i})\right] ,\left[ \frac{df_{2}}%
{dx}(U_{i})\right] )$ and applying Lemma \ref{lem:matrix-bounds-cc}. For our
proof we take $\mathbf{\gamma}_{0}=(a,b)=\left( 1,-1\right) ,$ which means
that the quadratic form for our cones is simply%
\[
Q_{\mathbf{\gamma}_{0}}(\theta,x)=x^{2}-\theta^{2}.
\]
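For such diagonal interval matrices the verification is elementary: with $Q=\mathrm{diag}(-1,1)$ and $D=\mathrm{diag}(d_1,d_2)$ one has $D^TQD-Q=\mathrm{diag}(1-d_1^2,\,d_2^2-1)$, which is positive definite iff $|d_1|<1<|d_2|$. The sketch below is our reconstruction of the resulting interval test, not code from the proof.

```python
# Reconstructed cone-condition test for Q(theta, x) = x^2 - theta^2 and a
# diagonal interval derivative Df = diag([d1], [d2]):
# Df^T Q Df - Q = diag(1 - d1^2, d2^2 - 1) is positive definite for every
# selection from the intervals iff max|d1| < 1 < min|d2|.

def max_abs(lo, hi):
    return max(abs(lo), abs(hi))

def min_abs(lo, hi):
    return 0.0 if lo <= 0.0 <= hi else min(abs(lo), abs(hi))

def cone_condition(d1, d2):
    """d1, d2: (lo, hi) interval entries; theta is normal, x is expanding."""
    return max_abs(*d1) < 1.0 < min_abs(*d2)

# The x-derivative interval is negative, consistent with the earlier
# verification that the partial derivative of f with respect to x is negative.
print(cone_condition((0.2, 0.4), (-2.0, -1.5)))   # True
print(cone_condition((0.2, 0.4), (-1.1, -0.9)))   # False
```

The sample interval values are illustrative only; in the proof the intervals are the rigorous enclosures $\left[\frac{df_1}{d\theta}(U_i)\right]$ and $\left[\frac{df_2}{dx}(U_i)\right]$.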
If we take $\mathbf{\gamma}_{1}=\left( (1-\varepsilon),1\right) $ for any
small parameter $\varepsilon>0$ then by choosing sufficiently large $\beta$
Assumption \ref{as:cones-setup} is satisfied (since any switch to new
coordinates is arbitrarily close to identity). This means that we can take
$\mathbf{\gamma}_{1}=\mathbf{\gamma}_{0}$, provided that all the inequalities
in our verification of cone conditions in the computer assisted proof are strict.
\subsubsection{Tools used for the proof\label{sec:tech-notes}}
Our proof has been conducted with the use of the CAPD library
(http://capd.ii.uj.edu.pl) developed by the Computer Assisted Proofs
in Dynamics group. We have used the multi-precision version of the library
running with 128 mantissa bits of accuracy (which is approximately equivalent to
tracking 40 decimal digits). The $C^{10}$ computations have been performed with
assistance of the Flexible Automatic Differentiation Package FADBAD++
(www.fadbad.com). The proof takes 16 seconds running on a 2.53 GHz laptop with
4GB of RAM.
\section{Final comments}
In this paper we have presented a version of a normally hyperbolic invariant manifold theorem,
which can be applied for rigorous-computer-assisted proofs. We have successfully applied our
method to an example in which standard {\tt double} precision simulations break down and produce
false results. This demonstrates the strength of our method: it can handle numerically difficult cases. It needs
to be noted that to apply our method we have used multiple precision for our computer assisted computations.
For our proof we also needed to apply a high order method which relied on $C^{10}$ computations.
We believe that it should be possible to devise a method similar in spirit which would give proofs without multiple precision
and using $C^1$ computations only. This will be the subject of our future work.
\section{Acknowledgements}
We would like to thank Tomasz Kapela for his assistance and comments regarding
the implementation of multi-precision in the CAPD library. Our special thanks go
to Daniel Wilczak for his suggestions, frequent discussions and for his
assistance with the implementation of higher order computations in the CAPD library. The research of MC has been supported by the Polish State Ministry of Science and Information Technology grant N201 543238.
The research of CS has been supported by grants MTM2006-05849/Consolider (Spain),
and CIRIT 2008SGR--67 (Catalonia).
\section*{References}
\section{Introduction} \label{sec:Introduction}
A crucial feature of most important real-world operations research
problems is that they tend to be computationally hard: It is highly
unlikely that there exists an ``ideal'' algorithm that will construct
a provably optimal solution in relatively short time, no matter what
instance it is faced with. Fast and easy heuristics may return
solutions that are quite poor for ``difficult'' instances; even
sophisticated methods that are guaranteed to find an optimum may
return a solution only after prohibitively long time. Nevertheless,
such difficult problems need to be dealt with, so algorithms have to
be constructed, tested, improved and compared.
A natural approach for evaluating the practical performance of
solution methods is to run experiments on test instances. This is even
true for problems that allow a theoretically ``good'', i.e.,
polynomial, algorithm, as this does not guarantee a useful running
time in practical applications. Obviously, the choice of test
instances may have a crucial impact on the results of such
experiments, and comparisons between alternative approaches are only
possible when similar test instances are used. Finally, beyond the
performance of individual algorithmic implementations, keeping track
of scientific progress over time is of vital interest for a research
community. This makes it also desirable to maintain a canon of open
challenge problems that can serve as catalysts for future
developments.
\section{Benchmark Libraries}
A well-established answer to the demands described above is
establishing and maintaining benchmark libraries for a large variety
of problems. In operations research, one of the first such efforts was
undertaken by Beasley with ORLIB~\cite{b-orldt-90}, a collection of
instances for 98 different classes of combinatorial optimization
problems, ranging from airport capacity allocation problems to vehicle
routing problems, and including many different cutting and packing
instances.
Arguably the most prominent of benchmark libraries for combinatorial
optimization is the TSPLIB \cite{r-tspli-91} by Reinelt. As the name
indicates, this collection is comprised of instances of the traveling
salesman problem (TSP), even though there are also some instances of
the capacitated vehicle routing problem. The TSPLIB has been extremely
successful in various ways. Its instances have been used for a large
variety of problems, e.g., for matching \cite{cr-cmwpm-99}
or for finding long tours \cite{fmrt-gfhlgmmmtsp-01}.
Moreover, solving a large, previously
unsolved TSPLIB instance has become a major scientific achievement,
requiring years of work by outstanding researchers and decades of CPU
time, announced in newspaper headlines and
in one case~\cite{abcc-stsp-98} recognized as work worthy of a
Beale-Orchard-Hays Prize for Excellence in Computational
Mathematical Programming.
Clearly, the TSPLIB
demonstrates that a benchmark library can be more than just a basis
for runtime comparison of algorithmic code.
Two other examples of benchmark libraries are the MIPLIB for mixed
integer programming (presented by Bixby et al.~\cite{bbdi-mipli-93})
that serves as a benchmark for all modern integer linear programming solvers,
and the SteinLib by Koch et al.~\cite{kmv-sluls-00} that clearly demonstrates
the advances in preprocessing and exact solution methods for Steiner tree
problems.
\section{Packing Problems}
Problems of cutting and packing are among the most important problems
in both mathematical programming and real-world operations research.
Even the basic one-dimensional versions of problems like bin packing
and knapsack (discussed and used as examples in any introductory
course in optimization) are NP-hard, but are more or less
well-understood by means of linear and integer programming.
Multi-dimensional generalizations face
additional difficulties, as a straightforward modeling as a compact
integer linear program is no longer available (see Fekete and Schepers
\cite{fs-cchdop-04} for a discussion). As demonstrated in this
special issue (and its predecessors \cite{dw-cp-90}, \cite{bw-cp-95},
and \cite{yw-cp-02}), this gives rise to a multitude of algorithmic
approaches, dealing with a variety of problem variants. But unlike
the progress made for the TSP, the Steiner tree problem, and for
one-dimensional packing problems, solution methods for
multi-dimensional packing problems have failed to provide
breakthroughs, where the size of solved benchmark instances has grown
by several orders of magnitude, e.g., reaching the 24978 cities of Sweden
for the TSP. At
this point, the two-dimensional knapsack instance {\tt gcut13},
consisting of 32 rectangles, is beyond the reach of the best solution
methods by Fekete and Schepers~\cite{fs-eahdo-04} and Caprara and
Monaci \cite{cm-tdkp-04}. At the same time, multi-dimensional packing
instances are among the most popular types of puzzles, sometimes even
surrounded by quite a bit of hype, e.g., Monckton's ``Eternity'' tiling
puzzle that was the subject of a $\pounds$ 1,000,000
prize contest \cite{eternity}.
Over the years, a number of benchmark instances for cutting and
packing have been presented and used in the scientific literature.
Beasley's ORLIB had a limited number of two-dimensional instances.
Wottawa's PACKLIB~\cite{w-padp-96} was a first attempt at establishing
a general benchmark library for multi-dimensional packing.
ESICUP~\cite{esicup} started to collect data sets from different
sources; unfortunately, these instances differ in file format, making
it harder than necessary for researchers to use them in their
research.
Other instances were created and used in the context of a variety of
research papers; see Section~\ref{sec:datasets} for an overview. It
should be noted that most of the larger instances were originally
designed as test instances for other, more restricted problems like
guillotine cutting. This indicates that even though the capability of
algorithms for solving instances of multi-dimensional packing problems
has grown only moderately compared to those for other problems, the
development of benchmark instances has not kept up. This makes it
desirable to establish a collection of harder instances, allowing a
basis for comparison and an ongoing challenge for further progress.
\section{PackLib$^2$} \label{sec:packlib2}
As indicated above, there has been more than one attempt at
establishing a library of cutting and packing problems. Each of these
libraries had their advantages and their shortcomings. From the
perspective of input, the strong point of the ORLIB has been its very
simple file format: the description of a packing instance is reduced
to numbers. On the other hand, this also constitutes a disadvantage,
as a correct parsing of an instance requires reading the corresponding
article; moreover, despite its compact representation, the ORLIB
encoding still contains some redundant information. If a cost is
given for a box, it is almost always equal to its volume. Other
important information is omitted from the files. This has led to
modifications of instances: in \cite{cm-tdkp-04} Caprara and Monaci
accidentally solved a slight modification of the gcut instances. The
gcut instances do not contain information on how many boxes of a given
type can be packed, so Caprara and Monaci assumed that there was only
one box of each type.
The most sophisticated attempt at setting up a cutting and packing
library in terms of file format has been Wottawa's PACKLIB
\cite{w-padp-96}. Wottawa tried to promote a file format that was
self-describing. Its drawback was that, at that time, it required a
rather sophisticated and error-prone parser to read such a file.
As indicated by the name, PackLib$^2$ is a successor of PACKLIB. Like
PACKLIB it employs state-of-the-art technologies for representing
cutting and packing instances. Unlike all previous attempts,
PackLib$^2$ files not only capture instances but also references to
creators, references to attempts at solving the instances,
bibliographic information, and solution data. Because PackLib$^2$ is
XML-based, the parser for our file format is based on standard
technology. As distribution has progressed from electronic mail
\cite{b-orldt-90} via FTP servers (possibly dressed in a web
interface) \cite{w-padp-96}, we make full use of the current
cross-referencing possibilities of websites: PackLib$^2$ is hosted at
\begin{verbatim}
http://www.math.tu-bs.de/packlib2
\end{verbatim}
As mentioned above, the core of PackLib$^2$ is a set of XML-based
files, one for each article listed on the website. Each of these files
is subdivided into three sections:
\begin{enumerate}
\item The description section gives general information about the
article. This is the only mandatory section of a PackLib$^2$ XML
file. This section is essentially an extension of the BibTeX format.
\item If new problem instances or modifications of known instances are
described in the article, they are listed in the problem section.
\item Finally the results section lists the computational results of
the article. Whenever we are able to obtain a complete description
of the solution, the solution itself and resulting images are
available on PackLib$^2$.
\end{enumerate}
A detailed and up-to-date description (including future updates and
extensions) of the file format is available on the PackLib$^2$
website.
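As a concrete illustration of how standard XML tooling suffices to read such files, the following Python fragment parses a hypothetical PackLib$^2$-style file with the three sections described above. The element and attribute names, as well as the numbers in the sample, are made up for illustration; the authoritative schema is the one documented on the PackLib$^2$ website.

```python
import xml.etree.ElementTree as ET

# A hypothetical PackLib^2-style file: element names and numbers are
# illustrative only; the real schema is documented on the website.
SAMPLE = """<article key="cw-atdcp-77">
  <description>
    <author>N. Christofides</author>
    <author>C. Whitlock</author>
    <year>1977</year>
  </description>
  <problems>
    <problem id="cgcut1" container-length="15" container-width="10">
      <box length="8" width="4" value="66" count="1"/>
      <box length="3" width="7" value="35" count="3"/>
    </problem>
  </problems>
  <results>
    <result problem="cgcut1" value="244"/>
  </results>
</article>"""

def read_instance(xml_text, problem_id):
    """Return (container, boxes) for one problem in such a file."""
    root = ET.fromstring(xml_text)
    for prob in root.iter("problem"):
        if prob.get("id") == problem_id:
            container = (int(prob.get("container-length")),
                         int(prob.get("container-width")))
            boxes = [tuple(int(b.get(k)) for k in
                           ("length", "width", "value", "count"))
                     for b in prob.iter("box")]
            return container, boxes
    raise KeyError(problem_id)

container, boxes = read_instance(SAMPLE, "cgcut1")
```

Because the parser is based on the standard `xml.etree` module, no hand-written tokenizer of the kind needed for the old PACKLIB format is required.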
Based on these XML files, the PackLib$^2$ website is rebuilt
(half-)automatically whenever a new file is added. This automatism
ensures the integrity and comparability of the results obtained by
different researchers.
Besides listing instances and results, PackLib$^2$ also hosts cutting
and packing software. At this point, a parser for the XML files is
available, as well as converters to an ORLIB-like format and to the
old PACKLIB format. Furthermore, a program that generates zero--one
ILP formulations based on~\cite{b-autdg-85} is provided.
\section{Description of Data Sets} \label{sec:datasets}
In this section we describe the instances that are currently part of
PackLib$^2$, listed in chronological order. So far, all instances
are two-dimensional. Most of the instances presented were
originally posed as guillotine cutting stock problems. They have been
reused in other settings as well. We have classified all results using
the new typology presented in \cite{whs-itcpp-06}.
The oldest and smallest instance was defined by Herz in
\cite{h-rcptd-72}, presenting a recursive procedure for the
two-dimensional guillotine cutting problem. The algorithm was
implemented in PL/1 and one hand-crafted instance was solved.
In 1977 Christofides and Whitlock presented a tree-search algorithm
for the same problem \cite{cw-atdcp-77}. They tested their algorithm
on 7 instances of the two-dimensional guillotine cutting problem.
Three of these instances were described explicitly and are part of
PackLib$^2$. The test problems were randomly generated. Given the
container $A_0$ with area $\alpha_0 = L_0 W_0$, $m$ boxes with area
$\alpha_i$ were drawn uniformly at random from the interval $[0, 0.25
\alpha_0]$. Given these areas, the length $l_i$ of a box was drawn
from the interval $[0, \alpha_i]$ and then rounded up to the nearest
integer. The width of the boxes was calculated by $w_i = \lceil
\alpha_i / l_i \rceil$. The boxes were then weighted by $\upsilon_i =
r_i \alpha_i$, where $r_i$ is a uniformly distributed random number in
the range from 1 to 3.
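As a sketch of this procedure (the rounding details and the guard against a degenerate zero-length draw are our assumptions; the original paper may differ), the generation can be written as:

```python
import math
import random

def cw_random_instance(L0, W0, m, rng=random):
    """Sketch of the random-instance procedure of Christofides and
    Whitlock as described above; feasibility checks against the
    container dimensions in the original paper may differ."""
    alpha0 = L0 * W0
    boxes = []
    for _ in range(m):
        alpha = rng.uniform(0, 0.25 * alpha0)         # box area alpha_i
        l = max(math.ceil(rng.uniform(0, alpha)), 1)  # length, rounded up
        w = max(math.ceil(alpha / l), 1)              # w_i = ceil(alpha_i / l_i)
        v = rng.uniform(1, 3) * alpha                 # upsilon_i = r_i * alpha_i
        boxes.append((l, w, v))
    return boxes
```

Note that since $w_i = \lceil \alpha_i / l_i \rceil$, every generated box satisfies $l_i w_i \geq \alpha_i$, so the weight is bounded by $\upsilon_i \leq 3\, l_i w_i$.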
Another set of instances is due to Bengtsson. In \cite{b-prpha-82} he
gave 10 two-dimensional bin-packing problems. For each of the
instances he generated 200 boxes with length $\lfloor 12 r + 1
\rfloor$ and width $\lfloor 8 r + 1 \rfloor$. Here $r$ is drawn from a
uniform distribution in the range $(0,1)$. Two different containers
were considered: one of width 25 and length 10, and one of width 10
and length 25.
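A minimal sketch of this generator (whether a single $r$ is shared by both dimensions is not stated above; this sketch draws it independently for each dimension):

```python
import random

def bengtsson_boxes(n=200, rng=random):
    """Boxes as described above: length floor(12r + 1) and width
    floor(8r + 1), with r uniform in (0, 1)."""
    return [(int(12 * rng.random() + 1), int(8 * rng.random() + 1))
            for _ in range(n)]
```

With $r \in (0,1)$ the lengths fall in $\{1,\dots,12\}$ and the widths in $\{1,\dots,8\}$.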
Two practical instances of the constrained two-dimensional cutting
stock problem taken from applications in the lumber industry were given
by Wang in \cite{w-tactd-83}.
In \cite{b-autdg-85}, Beasley introduced 13 randomly generated problems
of different sizes. Here the length $l_i$ of each box
was generated by sampling an integer from the uniform distribution
$[L_0/4, 3L_0/4]$. The width $w_i$ was drawn from the interval
$[W_0/4, 3W_0/4]$, where $L_0$ and $W_0$ denote the length and width
of the container. The value of each box was set to its area.
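A sketch of this scheme (how the interval endpoints are rounded when $L_0$ and $W_0$ are not divisible by 4 is our assumption):

```python
import math
import random

def beasley_boxes(L0, W0, m, rng=random):
    """Sketch of the generation scheme described above: integer
    dimensions uniform in [L0/4, 3L0/4] x [W0/4, 3W0/4]."""
    boxes = []
    for _ in range(m):
        l = rng.randint(math.ceil(L0 / 4), 3 * L0 // 4)
        w = rng.randint(math.ceil(W0 / 4), 3 * W0 // 4)
        boxes.append((l, w, l * w))   # the value of a box is its area
    return boxes
```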
In \cite{b-etdng-85}, Beasley introduced 12 more packing instances. The
procedure for generating instances is essentially the same as in
\cite{cw-atdcp-77}. In addition, each box may be packed more than
once. The maximal count was generated by sampling an integer from the
uniform distribution $[1, 3]$.
Hadjiconstantinou and Christofides introduced 12 new data sets for the
general, orthogonal, two-dimensional knapsack problem. They were
generated as follows. The dimensions $l_i$ and $w_i$ of the boxes
$R_i$ are integers sampled from the uniform distributions $[1, 0.75
W_0]$ and $[0.1, 0.75W_0]$, respectively. The integer value
$\upsilon_i$ was generated by multiplying $l_iw_i$ by a real random
number drawn from a uniform distribution and rounding up the result to
the nearest integer.
PackLib$^2$ also hosts two randomly generated instances of the
two-dimensional cutting-stock problem by Tsch\"{o}ke and Holth\"{o}fer
\cite{th-npact-95}.
Five more instances were listed explicitly in Hifi's article
\cite{h-ivbea-97}.
In \cite{fs-neagoddkp-97} five new two-dimensional knapsack instances
are defined. They were produced by a method described in
\cite{mv-estdf-98, mpv-tdbpp-00}.
For the (un)weighted constrained two-dimensional cutting stock problem
seven problems were given by Cung, Hifi, and Le Cun in
\cite{chc-ctdcs-00}. The box sizes $l_i$ and $w_i$ are chosen uniformly
at random from the intervals $[0.1L_0, 0.75L_0]$ and $[0.1W_0, 0.75W_0]$. The
weight assigned to the boxes is computed by $\upsilon_i=\lceil\rho
l_i w_i\rceil$, where $\rho=1$ for the unweighted case and
$\rho\in[0.25, 0.75]$ for the weighted case.
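A sketch of this (un)weighted scheme (whether the drawn dimensions are rounded to integers is not stated; this sketch keeps them real-valued):

```python
import math
import random

def chc_boxes(L0, W0, m, weighted=False, rng=random):
    """Sketch of the (un)weighted generation scheme described above:
    dimensions uniform in [0.1 L0, 0.75 L0] x [0.1 W0, 0.75 W0] and
    weight upsilon_i = ceil(rho * l_i * w_i)."""
    boxes = []
    for _ in range(m):
        l = rng.uniform(0.1 * L0, 0.75 * L0)
        w = rng.uniform(0.1 * W0, 0.75 * W0)
        rho = rng.uniform(0.25, 0.75) if weighted else 1.0
        boxes.append((l, w, math.ceil(rho * l * w)))
    return boxes
```

In the unweighted case ($\rho = 1$) the weight reduces to the rounded-up box area, which matches the convention used by most of the earlier generators above.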
Finally there are the C-instances by Hopper and Turton
\cite{ht-eimhh-01}. These instances have 17 to 197 items. Three
instances were generated for each problem category. Width and height
of the boxes are produced randomly with a maximum aspect ratio of 7.
The problems were constructed such that the optimal solution is
known in advance. The ratio of the two dimensions of
the container varies between 1 and 3.
Many of these instances have been used again and again by other
researchers to demonstrate the effectiveness of their algorithms. This
fact is now traceable through PackLib$^2$.
\section{Conclusions} \label{sec:conc}
There are various possible expansions of PackLib$^2$. An obvious one is to
add more instances; contributions are always welcome, especially
if they are provided with accompanying data, ideally in XML format.
Other enhancements include two- and three-dimensional problem
variations (like including order constraints,
as suggested in \cite{fkt-mdpoc-01}) and the possibility to consider
other types of objective functions.
We are also in the process of soliciting practical problem instances
from the technical computer science community dealing with
reconfigurable computing, where
the objective is to place reconfigurable modules in two-dimensional space
(on a reconfigurable device like an FPGA) and time (as space may be re-used
when a module is no longer needed). This means that instances
are three-dimensional. See \cite{tfs-ohrt-01} for a more detailed description.
Another upcoming step will be to upgrade PackLib$^2$ to host more
algorithms. In the very near future we plan to post the
implementation of our leading-edge packing code
\cite{fs-neagoddkp-97,fs-eahdo-04}, as well as meta-heuristic based
algorithms to tackle even larger instances. Again, contributions are
welcome.
\section*{Acknowledgements}
We thank Andreas Ahrens, Peter Degenkolbe, and Christopher Tessars
for their help with setting up and maintaining PackLib$^2$, and
two anonymous referees for suggestions that helped to improve
the presentation of this paper.
\bibliographystyle{plain}
\section{Introduction} \label{sec:introduction}
\IEEEPARstart{I}{mage} segmentation, defined as the partition of an entire image into a set of regions, plays a vital role in a wide range of applications. Medical image segmentation is a crucial example of this domain and offers numerous benefits for clinical use. Automated segmentation shortens data processing time and guides clinicians by providing task-specific visualizations and measurements. In almost all clinical applications, the visualization algorithm not only provides insight into abnormal regions in human tissue but also helps practitioners monitor cancer progression. Semantic segmentation, as a preparatory step in automatic image processing, can further enhance visualization quality by detecting the specific regions that are most relevant to the task at hand (e.g., heart segmentation) \cite{antonelli2022medical}.
Image segmentation tasks can be classified into two categories: semantic segmentation and instance segmentation \cite{asgari2021deep,minaee2021image}. Semantic segmentation is a pixel-level classification that assigns corresponding categories to all the pixels in an image, whereas instance segmentation also needs to identify different objects within the same category based on semantic segmentation.
Designing segmentation methods to distinguish organ or lesion pixels requires task-specific image data to provide the appropriate critical details. Common medical imaging modalities for acquiring data are X-ray, Positron Emission Tomography (PET), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Ultrasound (US) \cite{wu2021medical}.
Early traditional approaches to medical image segmentation mainly relied on edge detection, template matching, region growing, graph cuts, active contours, machine learning, and other mathematical methods.
In recent years, deep learning has matured in diverse fields, solving many edge cases specific to the medical domain. Convolutional neural networks (CNNs) learn feature representations directly from images, eliminating the need for hand-crafted features in image segmentation; their superior performance and accuracy make them the main choice in this field.
An initial attempt to model semantic segmentation using a deep neural network was proposed in \cite{ciresan2012deep}. This approach passes the input images through a convolutional encoder to produce a latent representation. Fully connected layers are then applied on top of the generated feature maps to produce a pixel-level prediction. The main limitation of this architecture was the use of fully connected layers, which discarded the spatial information and consequently degraded the overall performance. Long et al. \cite{2014FCN} proposed Fully Convolutional Networks (FCNs) to address this limitation. The FCN structure applies several convolutional blocks consisting of convolution, activation, and pooling layers on the encoder path to capture semantic representation, and similarly uses convolutional layers along with up-sampling operations in the decoding path to provide a pixel-level prediction. The main motivation underlying the successive up-sampling process on the decoding path was to gradually increase the spatial dimension for a fine-grained segmentation result.
Inspired by the architecture of FCNs and the encoder-decoder models, Ronneberger et al. developed the U-Net \cite{ronneberger2015u} model for biomedical image segmentation. It is tailored to practical use in medical image analysis and can be applied in a variety of modalities, including CT \cite{huang2018robust,yu2019liver,kazerouni2022diffusion,zhou2018unet++,zhang2019net}, MRI \cite{chen2018s3d,zhang2020dense,ibtehaz2020multiresunet,liu2019cu,baldeon2020adaresu}, US \cite{abraham2019novel,zhao2017improved,zhang2017image}, X-ray \cite{frid2018improving,waiker2020effective}, Optical Coherence Tomography (OCT) \cite{orlando2019u2,asgari2019u}, and PET \cite{zhong20183d,wang2022ica}.
FCN networks, specifically the U-Net, can efficiently exploit a limited number of annotated samples by leveraging data augmentation (e.g., random elastic deformation) to extract detailed image features without the need for new training data, resulting in good segmentation performance \cite{azad2022medical}. This superiority has made it a great success and has led to the extensive use of the U-Net model in the field of medical segmentation. The U-Net network is composed of two parts. The first part is the contracting path, which employs a downsampling module consisting of several convolutional blocks to extract semantic and contextual features. The second part is the expansive path, which applies a set of convolutional blocks equipped with upsampling operations to gradually increase the spatial resolution of the feature maps, usually by a factor of two, while reducing the feature dimensions to produce the pixel-wise classification score. The most important part of U-Net is its skip connections, which copy the outputs of each stage of the contracting path to the corresponding stages of the expansive path. This novel design propagates essential high-resolution contextual information along the network, encouraging the network to re-use the low-level representations along with the high-context representations for accurate localization.
This novel structure has been the backbone of medical image segmentation since 2015, and several variants of the model have been derived from it to advance the state of the art.
The auto-encoder design of U-Net also makes it a unique tool whose structure carries over to other significant applications, e.g., image synthesis \cite{costa2017towards,wu2019uc,sun2022double}, image denoising \cite{reymann2019u,nasrin2019medical,lee2020mu}, image reconstruction \cite{guan2021dense,feng2020end}, and image super-resolution \cite{qiu2021progressive}. To provide more insight into the importance of the U-Net model in the medical domain, \Cref{fig:unetinchallenges} presents statistical information on methods that utilize the U-Net model in their pipelines to address medical image analysis challenges. From \Cref{fig:unetinchallenges}, it is evident that U-Net has influenced most of the diverse segmentation tasks in medical image analysis, with extreme growth in publication numbers during the past decade, and it remains a promising basis for future work.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Figures/intro/fig1_unet_2022modified.pdf}
\caption{The number of research works published in the past decade using the U-Net model as a baseline to address various medical image analysis challenges. The visualization shows the substantial attention this architecture has received from the research and industry communities, particularly for the segmentation task, which is the main objective of this review paper.}
\label{fig:unetinchallenges}
\end{figure}
Our review covers the most recent U-Net-based medical image segmentation literature and discusses more than a hundred methods proposed until September 2022. We deliver a broad review of and clear perspective on different aspects of these methods, including network architecture enhancements with respect to the vanilla U-Net, medical image data modalities, loss functions, evaluation metrics, and their critical contributions. Given the rapid developments in U-Net and its variants, we include a summary of highly cited approaches in our taxonomy. We group the U-Net variants into the following categories:
\begin{enumerate}[leftmargin=*]
\item \hyperref[sec:skip-connection]{Skip Connection Enhancements}
\item \hyperref[sec:backbone-design]{Backbone Design Enhancements}
\item \hyperref[sec:bottleneck]{Bottleneck Enhancements}
\item \hyperref[sec:transformers]{Transformers}
\item \hyperref[sec:rich-representation]{Rich Representation Enhancements}
\item \hyperref[sec:probablistic]{Probabilistic Design}
\end{enumerate}
Some of the key contributions of this review paper can be outlined as follows:
\begin{itemize}[leftmargin=*]
\item This review covers the most recent literature on U-Net and its variants for medical image segmentation problems and overviews more than 100 segmentation algorithms proposed till September 2022, grouped into six categories.
\item We provide a comprehensive review and insightful analysis of different aspects of U-Net-based algorithms, including the refinement of base U-Net architectures, training data modality, loss functions, evaluation metrics, and their critical contributions.
\item We provide comparative experiments of some reviewed methods on popular datasets and offer codes and pre-trained weights on \href{https://github.com/NITR098/Awesome-U-Net}{GitHub}.
\end{itemize}
As a result, the remainder of the paper is organized as follows: \Cref{sec:taxonomy} includes the taxonomy of the reviewed methods. \Cref{subsec:2dunet} and \Cref{subsec:3dunet} provide a detailed insight into the basic 2D U-Net and 3D U-Net architectures, respectively. In \Cref{sec:unetextensions}, we cover U-Net extensions, overview at least five top-cited methods in each taxonomical branch, and highlight their key contributions. \Cref{sec:experiments} provides comprehensive practical information such as the experimental datasets, training process, loss functions, evaluation metrics, comparative results, and ablation studies. \Cref{sec:challenges} discusses the current challenges in the literature and future directions. Finally, the last section provides the conclusion.
\section{Taxonomy} \label{sec:taxonomy}
This section suggests a taxonomy that organizes the different approaches presented in the literature to modify the U-Net architecture for medical image segmentation.
Due to the modular design of U-Net, we propose a taxonomy that follows the inheritance design of U-Net rather than the conceptual taxonomies offered in \cite{siddique2021u}. This property makes it difficult to fit each study into only one group, so a method may belong to several divisions. \Cref{fig:unet-taxonomy} depicts our taxonomy structure, and we believe this taxonomy helps organize the field and may even motivate future research. In \Cref{sec:unetextensions}, we will go through each concept of the taxonomy. In the remainder of this section, we will first explain the naive 2D U-Net and then introduce the 3D U-Net. Finally, we will elaborate on the importance of the U-Net model from a clinical perspective.
\begin{figure*}[t]
\centering
\includegraphics[width=\textwidth]{Figures/taxonomy/UNet_Taxonomy.pdf}
\caption{The proposed U-Net taxonomy categorizes different extensions of the U-Net model based on their underlying design idea. More specifically, our taxonomy takes into account the modular design of the U-Net model and shows where the improvement happens (e.g., skip connection). Due to the clarification and unity in the studies' denomination, we may utilize some brevities. In this case, each prefix number denotes 1. \cite{chen2021transunet}, 2. \cite{wang2021transbts}, 3. \cite{li2021gt}, 4. \cite{xie2021cotr}, 5. \cite{hatamizadeh2022swin}, 6. \cite{wang2022mixed}, 7. \cite{reza2022contextual}, 8. \cite{wang2022uctransnet}, 9. \cite{huang2022scaleformer}, 10. \cite{valanarasu2021medical}, 11. \cite{cao2021swin}, 12. \cite{huang2021missformer}, 13. \cite{wu2022d}, 14. \cite{brudfors2021mrf}, 15. \cite{klug2020bayesian}, 16. \cite{myronenko20183d}, 17. \cite{kohl2018probabilistic}, 18. \cite{abraham2019novel}, 19. \cite{fu2018joint}, 20. \cite{moradi2019mfp}, 21. \cite{dolz2018dense}, 22. \cite{lachinov2018glioma}, 23. \cite{islam2019brain}, 24. \cite{drozdzal2016importance}, 25. \cite{milletari2016v}, 26. \cite{li2018h}, 27. \cite{ibtehaz2020multiresunet}, 28. \cite{karaali2022dr}, 29. \cite{jha2020doubleu}, 30. \cite{jin2019dunet}, 31. \cite{chen2018s3d}, 32. \cite{kou2019microaneurysms}, 33. \cite{nasrin2019medical}, 34. \cite{zhou2019unet++}, 35. \cite{huang2020unet}, 36. \cite{xiang2020bio}, 37. \cite{oktay2018attention}, 38. \cite{jin2020ra}, 39. \cite{li2020attention}, 40. \cite{lachinov2021projective}, 41. \cite{azad2019bi}, 42. \cite{li2019cr}, 43. \cite{fan2020ma}, 44. \cite{guo2021sa}, 45. \cite{zeng2019ric}, 46. \cite{azad2021deep}, 47. \cite{azad2022smu}, 48. \cite{hai2019fully}, 49. \cite{wang2020noise}, 50. \cite{wu2021jcs}.}
\label{fig:unet-taxonomy}
\end{figure*}
\subsection{ 2D U-Net} \label{subsec:2dunet}
Before recapitulating the U-Net structure in more detail, we first consider the path that led to the U-Net architecture. The story begins with the EM segmentation challenge in 2012, where Ciresan et al. \cite{ciresan2012deep} were the first researchers to outperform the previous biomedical image segmentation methods using convolutional layers. The key factor that enabled them to win the challenge was the availability of a large amount of annotated data (CNNs learn comparatively better than classical machine learning approaches on large datasets \cite{o2019deep,hedjazi2017identifying}). However, access to large amounts of annotated data in biomedical tasks is inherently challenging due to privacy concerns, the complexity of the annotation process, expert skill requirements, and the high cost of acquiring images with biomedical imaging systems. A first step toward alleviating the need for large annotated datasets was proposed in \cite{ciresan2012deep}. This method used an image patching technique not only to increase the number of samples but also to model the data distribution with small patches. Using this technique, the CNN learns the visual concept by simply deploying a sliding window. However, the sliding window usually brings more computational burden than performance gain; hence, there is always a trade-off between performance and computational complexity.
In 2015, Ronneberger et al. \cite{ronneberger2015u} proposed a new architecture building on Long et al.'s \cite{2014FCN} FCN framework in conjunction with the \textit{ISBI cell tracking challenge}, where they won the competition by a large margin. \Cref{fig:2dunet} shows the structure of the U-Net model. Their proposed method was a cornerstone in several respects at the time. First, it is based on a fully convolutional network in an encoder-decoder design that can be trained with far less data than deep networks typically require, aided by some intuitive data augmentation techniques. Second, their model was reasonably fast and outperformed the other methods in the challenge.
The model architecture can be divided into two parts. The first part is the contracting path, also known as the encoder path, whose purpose is to capture contextual information. This path consists of repeated blocks, each containing two successive $3\times3$ convolutions followed by a ReLU activation function and a max-pooling layer. The max-pooling layer gradually increases the receptive field of the network without imposing an additional computational burden.
The second part is the expanding path, also called the decoder path, which aims to gradually up-sample the feature maps to the desired resolution. Each of its blocks consists of one $2\times2$ transposed convolution layer (up-sampling), followed by two consecutive $3\times3$ convolutions and a ReLU activation. The connection between the encoder and decoder paths (also known as the bottleneck) includes two successive $3\times3$ convolutions followed by a ReLU activation. The successive convolutional operations in the U-Net model enable the network's receptive field size to increase linearly. This process makes the network gradually learn coarse contextual and semantic representations in deep layers compared to shallow layers. Learning high-level semantic features, however, makes the network slowly lose the localization of the extracted features, which is essential for reconstructing segmentation results. Ronneberger et al. presented skip connections from the encoder path to the decoder path at the same scales to overcome this challenge. The reason for these skip connections is to reintroduce the localization information of the extracted semantic features from the same stage of the encoder. To this end, the connection module concatenates low-level features coming from the encoder path with the high-level representation derived from the decoding path to enrich the localization information. Eventually, the network uses a $1\times1$ convolution to map the final representation to the desired number of classes. To mitigate the loss of contextual information for pixels at the image borders, the U-Net model uses an overlap-tile strategy. In addition, to deal with insufficient training data, typical data augmentation techniques such as rotation, gray-level intensity shifts, and elastic deformation are utilized. It should be noted that elastic deformation is a common strategy to make the model resistant to deformations, a common variation in tissues.
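Because the $3\times3$ convolutions in the original U-Net are unpadded, every convolution shrinks the spatial extent by two pixels, which is why the $572\times572$ input tile of \cite{ronneberger2015u} yields a $388\times388$ output map and why the overlap-tile strategy is needed at image borders. The following pure-Python sketch traces this size bookkeeping through a four-level U-Net:

```python
def unet_output_size(n, depth=4, convs_per_block=2):
    """Trace the spatial size of a square input through a U-Net with
    unpadded 3x3 convolutions (each conv: n -> n - 2), 2x2 max pooling
    in the encoder, and 2x2 transposed convolutions in the decoder."""
    skips = []
    for _ in range(depth):                 # contracting path
        n -= 2 * convs_per_block           # two valid 3x3 convolutions
        skips.append(n)                    # size of the skip feature map
        n //= 2                            # 2x2 max pooling
    n -= 2 * convs_per_block               # bottleneck convolutions
    for _ in range(depth):                 # expansive path
        n *= 2                             # 2x2 up-convolution
        # encoder features are center-cropped to n before concatenation
        n -= 2 * convs_per_block           # two valid 3x3 convolutions
    return n
```

Running `unet_output_size(572)` reproduces the $388$-pixel output side of the original architecture, making concrete why the encoder feature maps must be cropped before concatenation.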
From a practical perspective, the original U-Net model outperformed a sliding-window convolutional network \cite{ciresan2012deep} in terms of warping error on the \textit{EM segmentation challenge} dataset \cite{ronneberger2015u}. The network also became the new state of the art on two other cell segmentation datasets, \textit{PhC-U373} and \textit{DIC-HeLa}, surpassing the previous best methods in the \textit{ISBI Cell Tracking Challenge 2015} by large margins of approximately $9\%$ and $31\%$ in the Intersection over Union (IoU) metric \cite{ronneberger2015u}.
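The IoU score used in these comparisons is simply the ratio of the overlap between a predicted and a ground-truth mask to their union; a minimal sketch for binary masks (represented here as nested lists of 0/1, purely for illustration):

```python
def iou(pred, target):
    """Intersection over Union for two binary masks of equal shape."""
    inter = sum(p and t for row_p, row_t in zip(pred, target)
                        for p, t in zip(row_p, row_t))
    union = sum(p or t for row_p, row_t in zip(pred, target)
                       for p, t in zip(row_p, row_t))
    return inter / union if union else 1.0  # two empty masks: define IoU as 1
```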
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/taxonomy/2dunet.pdf}
\caption{The original 2D U-Net architecture designed for the semantic segmentation task. Figure from \cite{ronneberger2015u-arx}.}
\label{fig:2dunet}
\end{figure}
\begin{figure*}[!ht]
\centering
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=\linewidth]{./Figures/datasets/3d/cervicalSpine/axial}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=\linewidth]{./Figures/datasets/3d/lung/axial}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=\linewidth]{./Figures/datasets/3d/nineOrgs/axial}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=\linewidth]{./Figures/datasets/3d/challenge/brain_tumour/478/axial}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=\linewidth]{./Figures/datasets/3d/challenge/heart/029/axial}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=\linewidth]{./Figures/datasets/3d/challenge/hepatic_vessel/052/axial}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=\linewidth]{./Figures/datasets/3d/challenge/liver/66/axial}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=\linewidth]{./Figures/datasets/3d/challenge/pancreas/385/axial}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=\linewidth]{./Figures/datasets/3d/challenge/prostate/32/axial}
\end{subfigure}
\\
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=1.2\linewidth]{./Figures/datasets/3d/cervicalSpine/3d_3}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=1.2\linewidth]{./Figures/datasets/3d/lung/3d_1}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=1.2\linewidth]{./Figures/datasets/3d/nineOrgs/3d}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=1.2\linewidth]{./Figures/datasets/3d/challenge/brain_tumour/478/3d_3}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=1.2\linewidth]{./Figures/datasets/3d/challenge/heart/029/3d}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=1.2\linewidth]{./Figures/datasets/3d/challenge/hepatic_vessel/052/3d}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=1.2\linewidth]{./Figures/datasets/3d/challenge/liver/66/3d}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=1.2\linewidth]{./Figures/datasets/3d/challenge/pancreas/385/3d}
\end{subfigure}
\hfill
\begin{subfigure}{0.106\textwidth} \centering
\includegraphics[width=\linewidth, height=1.2\linewidth]{./Figures/datasets/3d/challenge/prostate/32/3d}
\end{subfigure}
\caption{
Sample of the 3D medical dataset and a single selected 2D frame, where the target area (e.g., organ) is highlighted using the annotation mask.
c.1) Cervical spine \cite{kaggle-cervical-spine-fracture-detection},
c.2) Lung \cite{mader_2017},
c.3) Fourteen abdominal organs \cite{landman2015miccai},
c.4) Brain \cite{simpson2019large, menze2014multimodal},
c.5) Heart \cite{simpson2019large},
c.6) Hepatic vessel \cite{simpson2019large},
c.7) Liver \cite{simpson2019large},
c.8) Pancreas \cite{simpson2019large},
c.9) Prostate \cite{simpson2019large}.
}
\label{fig:dataset-3d}
\end{figure*}
\subsection{3D U-Net} \label{subsec:3dunet}
Due to the abundance and representational power of volumetric data, most medical image modalities are three-dimensional. {\c{C}}i{\c{c}}ek et al. \cite{cciccek20163d} therefore proposed a volumetric 3D U-Net, not only to address this need but also to alleviate the time-consuming slice-by-slice annotation process; since neighboring slices share largely the same information, there is no need for such data redundancy in annotation. In \cite{cciccek20163d}, they replaced all 2D operations in the U-Net architecture with their 3D counterparts and embedded a batch normalization layer after each 3D convolution layer for faster convergence. The 3D U-Net was successfully applied to three sparsely annotated samples of Xenopus kidney embryos, reporting IoU comparisons between the 2D and 3D U-Net. As further support, nine of the top ten participants of the Kidney Tumor Segmentation (KiTS) 2021 challenge, hosted by the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021 society \cite{heller2019kits19,heller2021state}, utilized a 3D U-Net, while one utilized a 2D variant\footnote{\url{https://kits21.kits-challenge.org/public/html/kits21_results.html\#}}.
\Cref{fig:dataset-3d} shows samples of 2D and 3D medical image segmentation data designed for different tasks. It can be seen that 3D data provides more comprehensive information regarding tissue and tumors; however, it incurs a higher computational cost than 2D data.
\subsection{Clinical Importance and Effect of U-Net} \label{subsec:clinicalimportance}
The start of the COVID-19 pandemic and the inevitable loss of healthcare staff underscored the importance of utilizing artificial intelligence in image and test analysis. According to the WHO, between January 2020 and May 2021, roughly 80,000 to 180,000 healthcare workers may have died from COVID-19 infection worldwide \cite{who2021healthcare}. Each country incurs a significant economic cost to compensate for these skilled workforce losses, and transferring experience among medical staff is a time-consuming process. In this direction, Michael et al. \cite{fitzke2021oncopetnet} applied a large-scale segmentation network to count specific cells in pathological images. They explicitly indicate that detecting cancerous cells in histopathological images is a challenging task that relies on the experience of an expert pathologist, whereas workflow efficiency can be increased with an automated system. Indeed, with the recent success of deep-learning-based segmentation methods, the expansion and easy accessibility of medical datasets, and facilitated access to modern and efficient GPUs, applying these methods to the specific image analysis problems of end users has become easier. Semantic segmentation transforms a plain biomedical image into a meaningful and spatially well-organized pattern that can aid medical discoveries and diagnoses \cite{falk2019u,isensee2021nnu}, and it sometimes benefits patients directly, as they may be able to avoid an invasive medical procedure \cite{aerts2014decoding}. Medical image segmentation is a vital component and a cornerstone in most clinical applications, including diagnostic support systems \cite{de2018clinically,bernard2018deep}, therapy planning support \cite{nikolov2018deep,mehrtash2017deepinfer}, intra-operative assistance \cite{hollon2020near}, tumor growth monitoring \cite{kickingereder2019automated,esmaeili2018direction}, and image-guided clinical surgery \cite{tonutti2017machine}.
\Cref{fig:unet-pipeline} shows a general pipeline in which the U-Net can be utilized in a clinical application to reduce experts' burden and accelerate the disease detection process. The entire end-to-end paradigm for using deep-learning-based methods, especially the U-Net, is an empirical struggle to fit this concept into everyday clinical practice \cite{isensee2021nnu}. A Computer-Aided Diagnosis (CAD) system can be built from four main components: input, network, output, and application. The input block can leverage analyses of various available data, such as documented transcripts, diverse human body signals (EEG/ECG), and medical images. The multi-modal fusion of different data types can boost the performance of a pipeline toward a higher-accuracy diagnosis. Based on specific criteria such as image modality and data distribution, the network module chooses the U-Net extension that best fits the setting. The output is a task-specific component whose form is determined by the final application block.
On the other hand, international image analysis competitions show a high demand for automatic segmentation methods, which account for $70\%$ of tasks in the biomedical section \cite{maier2018rankings}; these competitions are primarily hosted by, or in collaboration with, universities of medical sciences.
One of the advantages of deep learning competitions over conventional hypothesis-driven research is the innate distinction in the approach to problem-solving \cite{prevedello2019challenges}. Data competitions, by nature, encourage multiple individuals or groups to address a specific problem independently or collaboratively. According to Maier-Hein et al. \cite{maier2018rankings}, of the 150 medical segmentation competitions held before 2016, the majority used U-Net-based models.
Based on the points above and the broad family of U-Net-based architectures, medical and clinical facilities can utilize these models in real-world and commercial settings; nnU-Net \cite{isensee2021nnu} is one such successful end-to-end design.
\begin{figure*}[!t]
\centering
\includegraphics[width=\textwidth]{./Figures/etc/unet-gp.pdf}
\caption{A detailed overview of the U-Net's core involvement in medical image analysis and clinical use, illustrating how U-Nets are involved in clinical decisions as discussed in research papers. The first block deals with image acquisition, preparation, and pre-processing steps to provide the data in a common format for the deep neural network. The second step uses a neural architecture search algorithm to find an efficient architecture for the task at hand, while the third block performs post operations to further refine the network output. Finally, the application block uses the software output to assist specialists with a certain action (e.g., tumor growth monitoring).}
\label{fig:unet-pipeline}
\end{figure*}
\section{U-Net Extensions} \label{sec:unetextensions}
The U-Net is a ubiquitous network, as evidenced by its approximately 48 thousand citations since its first release in 2015. This shows that it can handle diverse image modalities in broad domains, not only in medical fields. In our view, the core advantage of the U-Net is its modular and symmetric design, which makes it a suitable choice for broad modification and for collaboration with diverse plug-and-play modules to increase performance. Pursuing this cue, we decompose the network of Ronneberger et al. \cite{ronneberger2015u} into modular, improvable counterparts alongside solid auxiliary modifications for achieving SOTA or on-par segmentation performance. In this respect, we offer our taxonomy (\Cref{fig:unet-taxonomy}) and divide the diverse variants of U-Net modifications into systematic categories as follows:
\begin{enumerate}[leftmargin=*]
\item \hyperref[sec:skip-connection]{Skip Connection Enhancements}
\item \hyperref[sec:backbone-design]{Backbone Design Enhancements}
\item \hyperref[sec:bottleneck]{Bottleneck Enhancements}
\item \hyperref[sec:transformers]{Transformers}
\item \hyperref[sec:rich-representation]{Rich Representation Enhancements}
\item \hyperref[sec:probablistic]{Probabilistic Design}
\end{enumerate}
This taxonomy aims to provide comprehensive and practical information for both vendors and researchers. In the following parts of this section, each category will be extensively discussed along with relevant papers.
\subsection{Skip Connection Enhancements} \label{sec:skip-connection}
Skip connections are an essential part of the U-Net architecture as they combine the semantic information of a deep, low-resolution layer with the local information of a shallow, high-resolution layer.
This section provides a definition of skip connections and explains their role in the U-Net architecture before introducing extensions and variants of the classic skip connection used in the original U-Net.
Skip connections are defined as connections in a neural network that do not connect two consecutive layers but instead skip over at least one layer. In a two-layer network, a skip connection would connect the input directly to the output, skipping over the hidden layer \cite{bishop2006pattern}.
In image segmentation, skip connections were first used by Long et al. in \cite{2014FCN}. At the time, the most common use of convolutional networks was for image classification tasks which only have a single label as output. In a segmentation task, however, a label should be assigned to each pixel in the image adding a localization task to the classification task.
Long et al. \cite{2014FCN} added additional layers to a usual contracting network, using upsampling instead of pooling layers to increase the resolution of the output and obtain a label for every pixel. Since local, high-resolution information gets lost in the contracting part of the network, it cannot be completely recovered when upsampling these volumes. To combine the deep, coarse semantic information with the shallow, fine appearance information, they add skip connections that fuse up-sampled lower layers with finer stride into the final prediction layer.
In the original U-Net architecture by Ronneberger et al. \cite{ronneberger2015u}, each level in the encoder path is connected to the corresponding same-resolution level in the decoder path by a skip connection to combine the global information describing \textit{what} with the local information resolving \textit{where}.
The difference from the above approach lies not only in the higher number of skip connections but also in the way the features are combined. Long et al. \cite{2014FCN} up-sampled feature maps from earlier layers to the output resolution and added them to the output of the final layer. Ronneberger et al. \cite{ronneberger2015u} instead concatenated the features of the corresponding encoder and decoder levels and processed them together by passing them through two convolutional layers and an up-sampling layer.
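The two fusion strategies can be contrasted in a minimal NumPy sketch (illustrative only; the function names and toy shapes are our own, and real networks operate on learned feature maps):

```python
import numpy as np

def fuse_by_addition(enc, dec):
    """FCN-style fusion: element-wise sum (requires identical shapes)."""
    assert enc.shape == dec.shape
    return enc + dec

def fuse_by_concatenation(enc, dec):
    """U-Net-style fusion: stack along the channel axis; the following
    convolutions then learn how to combine the two feature sets."""
    return np.concatenate([enc, dec], axis=0)  # (C_enc + C_dec, H, W)

enc = np.random.rand(64, 32, 32)   # encoder features (C, H, W)
dec = np.random.rand(64, 32, 32)   # upsampled decoder features

added = fuse_by_addition(enc, dec)        # shape stays (64, 32, 32)
concat = fuse_by_concatenation(enc, dec)  # shape becomes (128, 32, 32)
```

Concatenation keeps both feature sets intact at the cost of doubled channel count, which is why the U-Net follows it with convolutions that reduce the channels again.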
Li et al. \cite{li2018h} conducted an ablation study on skip connections by training a dense U-Net with and without skip connections. The results clearly show that the network with skip connections generalizes better than the network without them.
Over the following years, many variants and extensions of the original U-Net architecture were developed concerning the skip connections \cite{banerjee2022ultrasound,asadi2020multi}. Different types of extensions dealing with processing the encoder feature maps passed through the skip connections, combining the two sets of feature maps, and extending the number of skip connections will be presented in the following sections.
\subsubsection{Increasing the Number of Skip Connections}
Zhou et al. \cite{zhou2019unet++} introduced the U-Net++, in which they redesign skip connections to be more flexible and thereby exploit multiscale features more effectively. Instead of restricting skip connections to aggregate only features that have the same scale in the encoder and decoder paths, they redesign them such that features of different semantic scales can be aggregated \cite{zhou2019unet++}.
They argue that there has been no proof so far that encoder and decoder feature maps at the same scale are the best match for feature fusion and therefore design a more flexible setup.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/unetpp.pdf}
\caption{Architecture of the U-Net++ \cite{zhou2019unet++}. The U-Net++ uses a nested structure of convolutional layers $X^{i,j}$ (indicate $j$-th convolution layer in the $i$-th scale) connected through the skip connection paths to model multi-scale representation.}
\label{fig:unet++}
\end{figure}
In their approach, they tackle two problems simultaneously. Since the optimal depth of a U-Net is unknown a priori and usually has to be determined through an exhaustive search, they incorporate U-Nets of different depths into one architecture. As can be seen in \Cref{fig:unet++}, all the U-Nets share the same encoder but have their own decoders. Instead of only passing the same-scale encoder feature maps through the skip connections, each node in the decoder is also presented with the feature maps of the same-level decoders of the shallower U-Nets. The network can then learn during training which of the presented feature maps should ideally be used for the segmentation.
Huang et al. \cite{huang2020unet} take the dense skip connections introduced in the U-Net++ one step further by introducing full-scale skip connections in their architecture, the U-Net3+. They argue that both the original U-Net with plain skip connections between same-level encoder and decoder nodes and the U-Net++ with its dense and nested skip connections do not sufficiently explore features from full scales, making it challenging for the network to learn the position and boundary of an organ explicitly.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/unet3p.pdf}
\caption{Architecture showing the full-scale skip connections of the UNet3+ \cite{huang2020unet}. $X_{Ee}^i$ and $X_{De}^i$ denote the encoder and decoder path feature maps at $i$-th scale, respectively. In addition, each subscript under each feature map symbol represents the number of feature maps to the corresponding block.}
\label{fig:unet3+}
\end{figure}
To overcome this limitation, they connect each decoder level with all encoder levels and all preceding decoder levels, as can be seen in \Cref{fig:unet3+}. Since not all feature maps arriving at a decoder node through skip connections have the same scale, higher-resolution encoder feature maps are downscaled using a max-pooling operation, and lower-resolution feature maps coming from intra-decoder skip connections are upsampled using bilinear upsampling. Additionally, apart from the up- or down-sampling operation, each skip connection is equipped with a $3\times 3$ convolutional layer calculating 64 output maps. The 64 feature maps arriving through each skip connection are stacked, and the stack of feature maps is passed through another convolutional layer, followed by batch normalization and a ReLU activation, before being further processed in the respective decoder node.
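How the U-Net3+ brings feature maps from different scales onto a common grid before stacking them can be sketched in NumPy (an assumption-laden toy: nearest-neighbour upsampling stands in for the paper's bilinear upsampling, and the $3\times3$ convolutions on each skip connection are omitted):

```python
import numpy as np

def maxpool2d(x, k):
    """Non-overlapping k x k max pooling on a (C, H, W) feature map."""
    c, h, w = x.shape
    return x.reshape(c, h // k, k, w // k, k).max(axis=(2, 4))

def upsample2d(x, k):
    """Nearest-neighbour upsampling by factor k (the paper uses bilinear)."""
    return x.repeat(k, axis=1).repeat(k, axis=2)

# Feature maps from three scales arriving at a decoder node on a 16x16 grid.
enc_hi = np.random.rand(32, 64, 64)   # higher-resolution encoder map
enc_eq = np.random.rand(64, 16, 16)   # same-scale encoder map
dec_lo = np.random.rand(128, 8, 8)    # lower-resolution decoder map

unified = [maxpool2d(enc_hi, 4), enc_eq, upsample2d(dec_lo, 2)]
stack = np.concatenate(unified, axis=0)   # all maps now share the 16x16 grid
```

After this spatial unification, every map can be stacked channel-wise and processed jointly, which is what makes the full-scale aggregation possible.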
Instead of increasing the number of forward skip connections, Xiang et al. \cite{xiang2020bio} add additional backward skip connections: Their Bi-directional O-Shape network (BiO-Net) is a U-Net architecture with bi-directional skip connections.
This means that there are two types of skip connections:
\begin{enumerate}
\item The forward skip connections are known from the original U-Net architecture, combining encoder and decoder layers at the same level. These skip connections preserve the low-level visual features from the encoder and combine them with the semantic decoder information.
\item The backward skip connections pass decoded high-level features from the decoder back to the same level encoder.
The encoder can then combine the semantic decoder features with its original input and flexibly aggregate the two types of features.
\end{enumerate}
Together, these two types of skip connections build an O-shaped recursive architecture that can be traversed multiple times to achieve improved performance (see \Cref{fig:bio}).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/BIO-Net.pdf}
\caption{BiO-Net Architecture. Figure from \cite{xiang2020bio-arx}.}
\label{fig:bio}
\end{figure}
The recursive output of the encoder and decoder can be defined as follows:
\begin{align}
\mathbf{x}_{out}^i&= \text{DOWN}(\text{ENC}([\text{DEC}([\mathbf{f}_{enc}^{i-1}, \mathbf{\hat{x}}_{in}^{i-1}]), \mathbf{x}_{in}^i])), \nonumber \\
\mathbf{\hat{x}}_{out}^i&= \text{UP}(\text{DEC}([\text{ENC}([\mathbf{f}_{dec}^{i}, \mathbf{x}_{in}^{i}]), \mathbf{\hat{x}}_{in}^i])),
\end{align}
Here, $i$ represents the current inference iteration, $\text{UP}$ stands for an upsampling operation, $\text{DOWN}$ for a downsampling operation, $\text{DEC}$ and $\text{ENC}$ stand for a decoder and encoder level, respectively. An additional improvement was achieved when collecting decoded features from all iterations and feeding them to the last decoder stage together to calculate the final output. Although the recurrent training scheme might increase training time, this extension of the U-Net has the advantage that it does not introduce any additional parameters as claimed by the authors.
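The recursive update above can be sketched structurally in NumPy, with trivial stand-ins for the encoder, decoder, and resampling operators (purely illustrative: real BiO-Net blocks are learned convolution stacks, and only one pass is shown):

```python
import numpy as np

# Toy stand-ins for one encoder/decoder level; real blocks are conv stacks.
def enc_block(x): return np.maximum(x, 0)                   # placeholder ENC
def dec_block(x): return np.tanh(x)                         # placeholder DEC
def down(x): return x[:, ::2, ::2]                          # placeholder DOWN
def up(x):   return x.repeat(2, axis=1).repeat(2, axis=2)   # placeholder UP

def bio_iteration(x_in, x_hat_in, f_enc, f_dec):
    """One recursive BiO-Net pass following the paper's update equations,
    with [.,.] realized as channel concatenation."""
    cat = lambda a, b: np.concatenate([a, b], axis=0)
    x_out = down(enc_block(cat(dec_block(cat(f_enc, x_hat_in)), x_in)))
    x_hat_out = up(dec_block(cat(enc_block(cat(f_dec, x_in)), x_hat_in)))
    return x_out, x_hat_out

f = np.ones((4, 16, 16))                    # toy same-shape feature maps
x_out, x_hat_out = bio_iteration(f, f, f, f)
```

The sketch makes the structure visible: the forward path consumes decoded features ($\mathbf{f}_{enc}$ via DEC) before encoding, while the backward path consumes encoded features before decoding, closing the O-shaped loop.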
\subsubsection{Processing Feature Maps within the Skip Connections}
In the attention U-Net established by Oktay et al. \cite{oktay2018attention}, attention gates (AGs) are added to the skip connections to implicitly learn to suppress irrelevant regions in the input image while highlighting the regions of interest for the segmentation task at hand.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/attention_unet1.pdf}
\caption{Attention U-Net Architecture. Figure from \cite{oktay2018attention}.}
\label{fig:attention-unet}
\end{figure}
In biomedical imaging, when the organs to be segmented show high inter-patient variation in shape and size, a common approach is to use a cascaded network: the first network extracts a rough region of interest (ROI) containing the organ to be segmented, and the second network predicts the exact organ segmentation within this ROI. These approaches, however, suffer from redundant model parameters and high computational cost. Adding attention gates to the skip connections maintains a high prediction accuracy without the need for an external organ localization model; the network is therefore trainable from scratch and introduces no significant computational overhead and only a few additional model parameters. The output of an AG is the elementwise multiplication of the input feature maps with attention coefficients $\alpha_i \in [0, 1]$ as $\mathbf{\hat{x}}_{i, c}^l=\mathbf{x}_{i, c}^l\cdot \alpha_i^l$. For the computation of the attention coefficients, both the input feature maps $x$, which have been passed through the skip connection from the encoder, and the gating signal $g$ are analyzed. Here, the gating signal is collected from a coarser scale, as can be seen in \Cref{fig:attention-unet}, to add contextual information. The applied additive attention is formulated as follows:
\begin{align}
q_{\text{att}}^l&= \psi^T(\sigma_1(W_x^T\mathbf{x}_i^l+W_g^Tg_i+b_g))+b_\psi \nonumber \\
\alpha_i^l&=\sigma_2(q_{\text{att}}^l(\mathbf{x_i^l}, g_i; \Theta_{\text{att}})),
\end{align}
where $\sigma_1$ and $\sigma_2$ are ReLU and sigmoid activations respectively, $W_x\in \mathbb{R}^{F_l\times F_{int}}$, $W_g\in \mathbb{R}^{F_g\times F_{int}}$ and $\psi \in \mathbb{R}^{F_{int}\times 1}$ are linear transforms and $b_g$ and $b_\psi$ are bias terms. Adding an AG to a skip connection, therefore, highlights the ROIs in the feature maps from the encoder path before they are concatenated with the feature maps of the decoder path. So in addition to adding higher resolution information, additional information on the location of the object(s) to be segmented is added, eliminating the need for cascaded multi-network approaches.
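A minimal NumPy sketch of the additive attention gate on flattened feature vectors (an illustrative simplification of the convolutional formulation; all names and toy shapes are our own):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, Wx, Wg, psi, bg, bpsi):
    """Additive attention gate on flattened features.
    x: (N, Fl) skip-connection features, g: (N, Fg) gating signal.
    Returns gated features x * alpha with alpha in [0, 1]."""
    q = np.maximum(x @ Wx + g @ Wg + bg, 0.0)   # sigma_1 = ReLU
    alpha = sigmoid(q @ psi + bpsi)             # sigma_2 = sigmoid, (N, 1)
    return x * alpha, alpha

rng = np.random.default_rng(0)
N, Fl, Fg, Fint = 5, 8, 4, 6
x, g = rng.normal(size=(N, Fl)), rng.normal(size=(N, Fg))
out, alpha = attention_gate(x, g,
                            rng.normal(size=(Fl, Fint)),
                            rng.normal(size=(Fg, Fint)),
                            rng.normal(size=(Fint, 1)),
                            np.zeros(Fint), 0.0)
```

Because $\alpha$ is bounded in $[0,1]$, the gate can only attenuate, never amplify, the skip-connection features, which is what suppresses irrelevant regions.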
The attention U-Net++ by Li et al. combines the attention U-Net with the U-Net++ \cite{li2020attention}. Attention gates as described in \cite{oktay2018attention} are added to all the skip connections of the U-Net++ with its nested U-Nets and dense skip connections.
With similar motivation, Jin et al. \cite{jin2020ra} introduced a 3D U-Net with attention residual modules in the skip connections, called the RA-UNet.
The network was developed for the task of segmenting tumors in the liver. The main difficulties of this task lie in the large spatial and structural variability, low contrast between liver and tumor, and similarity to nearby organs.
The attention residual learning mechanism added to the skip connections improves performance by focusing on specific parts of the image, as claimed by the authors. The output of the attention module ($\mathbf{OA}$) in the RA-UNet structure is formulated as:
\begin{equation}
\mathbf{OA}(\mathbf{x})=(1+\mathbf{S}(\mathbf{x}))\mathbf{F}(\mathbf{x}),
\end{equation}
where $\mathbf{S}(\mathbf{x})$ originates from the soft mask branch and has values in [0,1] to highlight important features and suppress noise and redundant features in the original feature maps $\mathbf{F}(\mathbf{x})$ passed through the trunk branch. The soft mask branch itself uses a residual encoder-decoder architecture to calculate its output.
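The attention residual output is simple enough to verify numerically; a small NumPy sketch with hand-picked toy values shows how the $1+\mathbf{S}(\mathbf{x})$ term preserves the trunk features even where the mask is zero:

```python
import numpy as np

def attention_residual(trunk, soft_mask):
    """RA-UNet attention residual: OA(x) = (1 + S(x)) * F(x).
    soft_mask values lie in [0, 1]; the '1 +' keeps the identity path,
    so useful trunk features are never fully suppressed."""
    return (1.0 + soft_mask) * trunk

F = np.array([[2.0, -1.0], [0.5, 3.0]])   # trunk branch features
S = np.array([[1.0, 0.0], [0.5, 1.0]])    # mask from the soft-mask branch
OA = attention_residual(F, S)             # amplified where S is large
```

With a plain multiplicative mask ($\mathbf{S}\cdot\mathbf{F}$), a zero mask value would erase the feature entirely; the residual form instead leaves it unchanged.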
To improve performance on the difficult task of ovary and follicle segmentation from ultrasound images, Li et al. \cite{li2019cr} added spatial recurrent neural networks (RNNs) to the skip connections of a U-Net. Since there are usually many small follicles in an image, it is very likely that neighboring follicles are spatially correlated. In addition, there might be a spatial correlation between the follicles and the ovary. As the max-pooling operation in the original U-Net incurs a loss of relative spatial information, the spatial RNNs should improve the segmentation results by learning multi-scale and long-range spatial contexts.
Li et al. \cite{li2019cr} built the spatial RNNs from plain RNNs with a ReLU activation. Each spatial RNN module takes feature maps as input and produces spatial RNN features as output. It uses four independent data translations to integrate local spatial information in up, down, left, and right directions. The maps from each direction are concatenated and passed through a $1\times 1$ convolutional layer to produce feature maps where each point contains information from all four directions. The process is then repeated to extend the local spatial information to global contextual information. As can be seen in \Cref{fig:crUnet}, the final feature maps passed through the skip connection are a combination of the original encoder feature maps and the RNN features extracted from these maps. The authors claim that the architecture is especially strong at avoiding the segmentation of false positives and detecting and segmenting very small follicles. A limitation of the RNN modules is that they make training more difficult and computationally expensive. To compensate for this, Li et al. added deep supervision.
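The four-direction data translation can be imitated with plain-RNN sweeps in NumPy (a toy sketch with unit weights; the learned recurrent weights and the $1\times1$ fusion convolution of the CR-Unet are omitted):

```python
import numpy as np

def directional_sweep(x, direction):
    """Plain-RNN style sweep with ReLU: each position accumulates context
    from its predecessor along one direction (toy weights fixed to 1)."""
    h = x.copy()
    if direction == "down":
        for i in range(1, h.shape[0]):
            h[i] = np.maximum(h[i] + h[i - 1], 0)
    elif direction == "up":
        for i in range(h.shape[0] - 2, -1, -1):
            h[i] = np.maximum(h[i] + h[i + 1], 0)
    elif direction == "right":
        for j in range(1, h.shape[1]):
            h[:, j] = np.maximum(h[:, j] + h[:, j - 1], 0)
    elif direction == "left":
        for j in range(h.shape[1] - 2, -1, -1):
            h[:, j] = np.maximum(h[:, j] + h[:, j + 1], 0)
    return h

x = np.ones((4, 4))
# Concatenate the four sweeps; a 1x1 convolution would then mix them so
# that every position sees context from all four directions.
context = np.stack([directional_sweep(x, d)
                    for d in ("up", "down", "left", "right")])
```

Repeating the sweep-and-mix step a second time, as the authors do, extends the local context captured here to a global one.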
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/cr_unet.pdf}
\caption{The spatial RNN module used in the skip connections of the CR-Unet by Li et al. in \cite{li2019cr}.}
\label{fig:crUnet}
\end{figure}
While most medical applications demand segmentations in the same dimension as the input image, some medical protocols require segmentation of an image projection; e.g., Liefers et al. \cite{liefers2019dense} studied retinal vessel segmentation as a $2$D $\rightarrow$ $1$D retinal OCT segmentation task. This adds the problem of dimensionality reduction to the segmentation. Lachinov et al. \cite{lachinov2021projective} introduced a U-Net with projective skip connections to handle $N$D $\rightarrow$ $M$D segmentations, where $M<N$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/projectiveSkips.pdf}
\caption{Architecture of U-Net with 3D input and 2D output segmentation. Figure from \cite{lachinov2021projective-arx}.}
\label{fig:projective}
\end{figure}
The encoder is a classic U-Net encoder with residual blocks.
The decoder however only restores the input resolution for the $M$ dimensions of the segmentation.
The remaining reducible dimensions $M< d \leq N$ are left compressed. This means that the sizes of the encoder and decoder feature maps no longer match which is why Lachinov et al. \cite{lachinov2021projective} introduce the projective skip connections.
The encoder feature maps passed along the projective skip connections are processed by an average pooling layer with varying kernel size so that the dimensions which are not present in the segmentation are reduced to the size they have in the bottleneck.
This way they can be concatenated with the corresponding decoder feature maps. Global Average Pooling (GAP) and a convolutional layer are added after the last decoder level to calculate the final $M$D segmentation.
The overall architecture for $N = 3$ and $M = 2$ can be seen in \Cref{fig:projective}.
The third dimension is not upsampled to its original resolution in the decoder path and is finally reduced to one by the GAP.
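A compact NumPy sketch of the projective pooling idea for $N=3$, $M=2$ (illustrative only; the shapes, bottleneck depth, and final GAP placement are simplifying assumptions):

```python
import numpy as np

def projective_skip(enc_feat, z_target):
    """Average-pool the reducible z-dimension of a (C, Z, H, W) encoder
    map down to the bottleneck depth z_target, keeping H and W intact."""
    c, z, h, w = enc_feat.shape
    k = z // z_target
    return enc_feat.reshape(c, z_target, k, h, w).mean(axis=2)

enc = np.random.rand(32, 16, 64, 64)      # 3D encoder features (C, Z, H, W)
proj = projective_skip(enc, z_target=2)   # -> (32, 2, 64, 64)
seg2d = proj.mean(axis=1)                 # GAP over z yields a 2D map
```

After this pooling, the encoder map matches the decoder map, whose z-dimension was never upsampled, so the usual concatenation can proceed.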
\subsubsection{Combination of Encoder and Decoder Feature Maps}
Another extension of the classic skip connections is introduced in the BCDU-Net by Azad et al. \cite{azad2019bi}, where a bi-directional convolutional long short-term memory (LSTM) module is added to the skip connections. Azad et al. argue that a simple concatenation of the high-resolution feature maps from the encoder with the more semantic feature maps extracted from the previous up-convolutional layer might not lead to the most precise segmentation output. Instead, they combine the two sets of feature maps with non-linear functions in the bi-directional convolutional LSTM module. Ideally, this leads to a set of feature maps rich in both local and semantic information. The architecture of the bi-directional convolutional LSTM module used to combine the feature maps at the end of the skip connection can be seen in \Cref{fig:bcd-unet}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/bcd_unet.pdf}
\caption{Bi-directional convolutional LSTM module for the combination of encoder feature maps passed through the skip connection and decoder feature maps from the previous up-convolutional layer. $\mathcal{X}_e$ and $\mathcal{X}_d$ represent the set of feature maps copied from the encoding part and the output of the previous scale's convolutional layer, respectively. $\mathbf{Y}_i$ indicates the output of the BConvLSTM module in a $i$-th block of a single skip connection. Figure from \cite{azad2019bi-arx}.}
\label{fig:bcd-unet}
\end{figure}
It uses two ConvLSTMs, processing the input data in two directions in the forward and backward paths. The output will be determined by taking into consideration the data dependencies in both directions. In contrast to the approach by Li et al. \cite{li2019cr}, where only the encoder feature maps are processed by the RNN and then concatenated with the decoder features, this approach processes both sets of feature maps with the RNN.
\subsection{Backbone Design Enhancements} \label{sec:backbone-design}
Apart from adapting the skip connections of a U-Net, it is also common to use different types of backbones in newer U-Net extensions. The backbone defines how the layers in the encoder are arranged; its mirrored counterpart describes the decoder architecture.
In the original U-Net by Ronneberger et al. \cite{ronneberger2015u}, each level in the encoder consists of two $3\times 3$ convolutional layers with ReLU activation followed by a max pooling operation, and the number of feature maps doubles at each level. Any 2D or 3D CNN image classifier can be used as the encoder of a U-Net by adding its mirrored counterpart as the decoder. Dozens of studies modified the main blocks of the vanilla U-Net to broaden the receptive fields of convolution operations and extract rich, fine-grained semantic representations for challenging multi-class problems, e.g., \cite{jha2020doubleu,li2022mfaunet,nguyen20213d,weng2021inet,lalonde2021capsules}. This section presents several prominent backbones used in the U-Net architecture and explains their benefits and downsides.
\subsubsection{Residual Backbone}
A very common backbone for the U-Net architecture is the ResNet initially developed by He et al. \cite{he2016deep}. Residual networks enable deeper architectures by tackling the vanishing gradient problem that often occurs when stacking many layers in deep neural networks, as well as a degradation problem in which accuracy first saturates and then degrades as more layers are added. Residual building blocks explicitly fit a residual mapping by adding skip connections that perform an identity mapping, which is added to the output of the stacked layers.
In their implementation of a residual U-Net, Drozdzal et al. \cite{drozdzal2016importance} refer to the standard skip connections in the U-Net as long skip connections and the residual skip connections as short skip connections, as the latter only skip ahead over two convolutional layers. Using residual blocks as the backbone of a U-Net, Drozdzal et al. \cite{drozdzal2016importance} can build deeper architectures and find that network training converges faster compared to the original U-Net. Milletari et al. \cite{milletari2016v} report the same findings for their 3D U-Net architecture using 3D residual blocks as the backbone.
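The short skip connection of a residual block can be sketched with toy linear layers in NumPy (illustrative; real residual blocks use convolutions and batch normalization):

```python
import numpy as np

def conv_stack(x, w1, w2):
    """Stand-in for two stacked layers (here: two linear maps with ReLU)."""
    return np.maximum(x @ w1, 0) @ w2

def residual_block(x, w1, w2):
    """Short skip connection: the stacked layers learn a residual F(x) and
    the identity is added back, easing gradient flow in deep encoders."""
    return np.maximum(x + conv_stack(x, w1, w2), 0)

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 16))
w1 = rng.normal(size=(16, 16)) * 0.1
w2 = rng.normal(size=(16, 16)) * 0.1
y = residual_block(x, w1, w2)   # same shape as the input
```

With all weights at zero, the block reduces to (a rectified) identity, which is exactly why adding more residual blocks cannot easily hurt a trained network.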
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/V-Net.png}
\caption{Schematic representation of V-Net structure for volumetric biomedical image segmentation that comprises residual connections in each scale. In addition, V-Net utilizes the strided convolutional kernels rather than max-pooling layers to lessen the memory footprint in the training stage. Figure from \cite{milletari2016v-arx}.}
\label{fig:v-net}
\end{figure}
A prominent adaptation of the backbone is to exchange all 2D convolutions with 3D convolutions to process an entire image volume, as is often required in medical applications. When processing a 3D image slice-wise using 2D convolutions, the context along the z-axis cannot be captured and learned by the network. Using a fully convolutional architecture with 3D convolutions alleviates this drawback and can fully leverage the spatial information along all three dimensions.
A drawback of using 3D convolutional layers as the backbone of a U-Net is the high computational cost and GPU memory consumption, which limits the depth of the network and the filter size, i.e., its field-of-view. The fully convolutional volumetric V-Net architecture of Milletari et al. \cite{milletari2016v} uses 3D residual blocks (\Cref{fig:v-net}) as a backbone, thereby enabling fast and accurate segmentation of 3D images. The H-DenseUNet by Li et al. \cite{li2018h} uses two U-Nets, one with 2D dense blocks as the backbone and the other with 3D dense blocks as the backbone. This enables them to first extract deep intra-slice features and then learn inter-slice features in a shallower volumetric architecture with a lower computational burden.
\subsubsection{Multi-Resolution blocks}
To tackle the difficulty of analyzing objects at different scales, Ibtehaz et al. introduced the MultiResUNet with inception-like blocks as a backbone \cite{ibtehaz2020multiresunet}. Inception blocks, introduced by Szegedy et al. \cite{szegedy2016rethinking}, apply convolutional layers with different kernel sizes in parallel to the same input and combine the perceptions from different scales before passing them deeper into the network. The two successive convolutions with $3\times3$ kernels in the classical U-Net resemble one convolution with a $5\times5$ kernel. To incorporate a multi-resolution analysis into the network, $3\times3$ and $7\times7$ convolutions should be added in parallel to the $5\times 5$ convolution. This can be achieved by replacing the convolutional layers with inception-like blocks. Adding the additional convolutional layers increases the memory requirement and computational burden. Ibtehaz et al. therefore formulate the more expensive $5\times5$ and $7\times7$ convolutions as consecutive $3\times3$ convolutions. The final \textit{MultiRes block} is created by adding a residual connection. The evolution from the original inception block to the MultiRes block can be seen in \Cref{fig:multiresunet}. Instead of keeping an equal number of filters for all consecutive convolutions, the number of filters is gradually increased to further reduce the memory requirements.
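The receptive-field and parameter arithmetic behind this factorization can be checked with a few lines of Python (the channel counts are arbitrary examples of our own):

```python
def effective_rf(kernel_sizes):
    """Receptive field of stacked stride-1 convolutions."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1
    return rf

def conv_params(c_in, c_out, k):
    """Parameter count of a k x k convolution (bias omitted)."""
    return c_in * c_out * k * k

# Two 3x3 convolutions cover the same 5x5 receptive field ...
assert effective_rf([3, 3]) == effective_rf([5])
# ... and three 3x3 convolutions cover a 7x7 field, with fewer parameters.
assert effective_rf([3, 3, 3]) == effective_rf([7])
cheap = conv_params(32, 32, 3) * 3   # three stacked 3x3 layers
costly = conv_params(32, 32, 7)      # one 7x7 layer
```

For 32 channels, the stacked formulation needs $3\cdot 32\cdot 32\cdot 9$ parameters versus $32\cdot 32\cdot 49$ for the single $7\times7$ layer, which is the saving the MultiRes block exploits.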
\begin{figure}
\centering
\begin{subfigure}{0.48\columnwidth}
\includegraphics[width=\textwidth]{./Figures/unetextensions/incpt1.png}
\caption{}
\label{fig:multires_1}
\end{subfigure}
\hfill
\begin{subfigure}{0.48\columnwidth}
\includegraphics[width=\textwidth]{./Figures/unetextensions/incpt2.png}
\caption{}
\label{fig:multires_2}
\end{subfigure}
\hfill
\begin{subfigure}{\columnwidth}
\includegraphics[width=\textwidth]{./Figures/unetextensions/incpt3.png}
\caption{}
\label{fig:multires_3}
\end{subfigure}
\caption{The MultiRes Block (c) is developed from the original inception Block (a) with three parallel convolutions with $3\times3$, $5\times5$, and $7\times7$ kernels by expressing the $5\times5$ and $7\times7$ convolutions as two and three consecutive $3\times3$ convolutions as can be seen in (b) and adding a residual skip connection. Figure from \cite{ibtehaz2020multiresunet-arx}.}
\label{fig:multiresunet}
\end{figure}
In the final architecture, the two consecutive $3\times3$ convolutions from the original U-Net are replaced by one MultiRes block, leading to faster convergence, improved delineation of faint boundaries, and higher robustness against outliers and perturbations.
Another well-known backbone for U-Net extensions is the DenseNet introduced by Huang et al. \cite{huang2017densely}. Similar to residual networks, the DenseNet also aims at fighting the vanishing gradient problem by creating skip connections from early layers to later layers. The DenseNet maximizes the information flow by connecting all layers with the same feature map size with each other, which means that every layer obtains the concatenated inputs of all preceding layers. Contrary to what one might expect, a DenseNet actually requires fewer parameters than a traditional CNN because it does not have to relearn redundant feature maps; it can therefore work with very narrow layers of, e.g., only 12 filters while learning multi-resolution features. The direct connection from each layer to the loss function implements implicit deep supervision, which helps train deeper network architectures without vanishing gradients.
Karaali et al. \cite{karaali2022dr} utilized dense residual blocks in a U-Net-like architecture for retinal vessel segmentation. To this end, they were inspired by DenseNet \cite{huang2017densely} and ResNet \cite{he2016deep} to design a Residual Dense-Net (RDN) block. In their architecture, the first sub-block comprises successive batch normalization, ReLU, convolution, and dropout components and employs the dense connectivity pattern of \cite{huang2017densely}. The following sub-block applies a residual connectivity pattern. Using a DenseNet-like backbone helps the U-Net architecture learn more relevant features with fewer parameters, while the residual connectivity smooths the information flow across the layers to facilitate the optimization step.
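The dense connectivity pattern, where each layer consumes the concatenation of all preceding feature maps and contributes a fixed number of new channels (the growth rate), can be sketched in NumPy (toy layers; real DenseNet layers are convolutions with batch normalization):

```python
import numpy as np

def dense_block(x, layers, growth=12):
    """DenseNet-style block: every layer receives the concatenation of all
    preceding feature maps and adds `growth` new channels (toy 1x1 layers)."""
    features = [x]
    for w in layers:
        inp = np.concatenate(features, axis=0)                # (C_total, H, W)
        out = np.maximum(np.tensordot(w, inp, axes=([1], [0])), 0)
        features.append(out)                                  # (growth, H, W)
    return np.concatenate(features, axis=0)

rng = np.random.default_rng(2)
x = rng.normal(size=(16, 8, 8))
# Each layer's weights map the running channel count to `growth` channels.
layers = [rng.normal(size=(12, 16 + 12 * i)) * 0.1 for i in range(3)]
y = dense_block(x, layers)   # 16 + 3 * 12 = 52 output channels
```

The channel count grows only linearly with depth at the growth rate, which is why the layers can stay narrow.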
\subsubsection{Re-considering Convolution}
This direction aims to reduce the computational burden of the naive convolution operation by considering alternative convolutional operations. Jin et al. \cite{jin2019dunet} exchange each $3\times3$ convolutional layer in the original U-Net with a deformable convolutional block for the accurate segmentation of retinal vessels; their architecture is named DUNet. The deformable convolutional blocks are inspired by the work on deformable convolutional networks by Dai et al. \cite{dai2017deformable} and adapt the receptive fields to adjust optimally to the different shapes and scales of complicated vessel structures in the input features. In deformable convolutions, offsets are learned and added to the grid sampling locations normally used in the standard convolution. An exemplary illustration of adjusted sampling locations for a $5\times 5$ kernel can be seen in \Cref{fig:dunet}.
\begin{figure}
\centering
\includegraphics[scale=0.3]{./Figures/unetextensions/DUnet.pdf}
\caption{An exemplary deformable convolution compared with its corresponding standard convolution with a $5\times5$ kernel.}
\label{fig:dunet}
\end{figure}
In a classic convolution the kernel sampling grid $G$ would be defined as:
\begin{equation}
G = \{(-2, -2), (-2, -1), \dots, (2, 1), (2, 2)\}.
\end{equation}
Considering this grid, every pixel $m_0$ in the output feature map $\mathbf{y}$ can be calculated as:
\begin{equation}
\mathbf{y}(m_0)=\sum_{m_i\in G}\mathbf{w}(m_i)\cdot \mathbf{x}(m_0+m_i)
\end{equation}
from the input $\mathbf{x}$. In the deformable convolution, an offset $\Delta m_i$ is added to the grid locations.
\begin{equation}
\mathbf{y}(m_0)=\sum_{m_i\in G}\mathbf{w}(m_i)\cdot \mathbf{x}(m_0+m_i+\Delta m_i)
\end{equation}
Every deformable convolutional block consists of a convolutional layer that learns the ideal offsets from the input, followed by a deformable convolution layer that applies the convolution at the adapted sampling points, batch normalization, and ReLU activation. Since the calculated offset $\Delta m_i$ is usually not an integer, the input value at the sampling point is determined using bilinear interpolation. Exchanging the simple convolutions with deformable convolutions helps the network adapt to different shapes, scales, and orientations, but comes at a higher computational burden because an additional convolutional layer per block is needed to determine the offsets of the sampling grid.
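The deformable sampling rule above can be sketched in NumPy as follows. This is a toy single-output-pixel illustration, not the block of Jin et al.: the offsets are passed in by hand rather than predicted by a convolutional layer, and a $3\times 3$ grid is used for brevity.

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly interpolate img (H, W) at a fractional location (y, x)."""
    H, W = img.shape
    y0 = min(max(int(np.floor(y)), 0), H - 1)
    x0 = min(max(int(np.floor(x)), 0), W - 1)
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * img[y0, x0] + (1 - wy) * wx * img[y0, x1]
            + wy * (1 - wx) * img[y1, x0] + wy * wx * img[y1, x1])

def deformable_conv_at(img, w, m0, offsets):
    """y(m0) = sum_i w(m_i) * x(m0 + m_i + dm_i) over a 3x3 grid G."""
    G = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
    out = 0.0
    for (dy, dx), wi, (oy, ox) in zip(G, w.ravel(), offsets):
        out += wi * bilinear_sample(img, m0[0] + dy + oy, m0[1] + dx + ox)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
w = np.full((3, 3), 1.0 / 9.0)                   # averaging kernel
zero = [(0.0, 0.0)] * 9                          # no offsets -> standard conv
print(deformable_conv_at(img, w, (2, 2), zero))  # equals img[2, 2] = 12.0
```

With all offsets set to zero the operation reduces to a standard convolution; fractional offsets are handled by the bilinear interpolation, exactly as in the formula for $\mathbf{y}(m_0)$.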
When segmenting 3D images, it is important to make use of the full spatial information of the volumetric data. However, this is not possible with 2D convolutions, and 3D convolutions are computationally very expensive. To address this problem, Chen et al. \cite{chen2018s3d} used separable 3D convolutions as the backbone of the U-Net. Each 3D convolutional block in the original U-Net is replaced by an S3D block, which can be seen in \Cref{fig:separable}.
\begin{figure}
\centering
\includegraphics[scale=0.5]{./Figures/unetextensions/S3D_unet.pdf}
\caption{Architecture of one separable 3D convolutional block \cite{chen2018s3d}. Using a parallel design, the architecture performs the 3D convolution in three paths with a lower computational burden.}
\label{fig:separable}
\end{figure}
The 3D convolution is divided into three branches where each branch represents a different orthogonal view so that the input is processed in axial, sagittal, and coronal views. Additionally, a residual skip connection is added to the separated 3D convolution. Using separable 3D convolutions as the backbone of the U-Net, Chen et al. \cite{chen2018s3d} can take into consideration the full spatial information from the volumetric data in the U-Net architecture without the extremely high computational burden of standard 3D convolution.
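A back-of-the-envelope parameter count illustrates the saving. The sketch below assumes a toy setting in which each branch uses one plane-wise kernel ($1\times k\times k$, $k\times 1\times k$, $k\times k\times 1$) with $C$ input and $C$ output channels; this is an illustrative simplification, not the exact S3D block of Chen et al.

```python
# Parameter count: dense 3D convolution vs. a three-branch separable design.
def conv3d_params(c_in, c_out, kernel):
    kd, kh, kw = kernel
    return c_in * c_out * kd * kh * kw

C, k = 32, 5
dense = conv3d_params(C, C, (k, k, k))            # k^3 * C^2 parameters
axial    = conv3d_params(C, C, (1, k, k))         # axial-view branch
sagittal = conv3d_params(C, C, (k, 1, k))         # sagittal-view branch
coronal  = conv3d_params(C, C, (k, k, 1))         # coronal-view branch
separable = axial + sagittal + coronal            # 3 * k^2 * C^2 parameters
print(dense, separable)                           # 128000 76800
```

For a $5\times5\times5$ kernel the three-branch design needs $3k^2/k^3 = 3/k$ of the dense parameters, and the gap widens further with larger kernels.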
\subsubsection{Recurrent Architecture}
Recurrent neural networks (RNN) are used frequently to process sequential data, such as in speech recognition. Liang et al. \cite{liang2015recurrent} were among the first groups to design a recurrent convolutional neural network (RCNN) for image recognition. Although the input image, in contrast to sequential data, is static, the activity of each unit is modulated by the activities of its neighboring units because the activities of RCNNs evolve over time. By unfolding the RCNN through time, they can obtain arbitrarily deep networks with a fixed number of parameters.
Using these RCNN blocks as the backbone of the U-Net architecture enhances the ability of the model to integrate contextual information. Alom et al. \cite{alom2019recurrent} used RCNN blocks as a backbone in their RU-Net architecture, ensuring better feature representation for segmentation tasks.
\begin{figure}[ht!]
\centering
\begin{subfigure}[][][c]{0.48\columnwidth}
\centering
\includegraphics[scale=0.5]{Figures/unetextensions/RUnet1.pdf}
\caption{Architecture of double recurrent convolutional block.}
\label{fig:runet1}
\end{subfigure}
\hfill
\begin{subfigure}[][][c]{0.50\columnwidth}
\centering
\includegraphics[scale=0.5]{Figures/unetextensions/RUnet2.pdf}
\caption{A single unfolded recurrent convolutional block for $t=2$.}
\label{fig:runet2}
\end{subfigure}
\caption{Recurrent Convolutional Mechanism introduced by Alom et al. \cite{alom2019recurrent}, namely RU-Net.}
\end{figure}
\Cref{fig:runet1} shows the recurrent convolutional unit they used as a backbone.
\Cref{fig:runet2} shows one of the two sub-blocks in \Cref{fig:runet1} unfolded for $t=2$, which is also the unfolding parameter chosen in their experiments. Adding additional residual connections to the separate recurrent convolutional blocks enables deeper networks and results in their R2U-Net architecture.
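The unfolding idea can be sketched as follows: a feed-forward and a recurrent weight set are reused at every step, so depth grows with the unfolding parameter $t$ while the parameter count stays fixed. In this hedged NumPy illustration the $3\times 3$ convolutions are replaced by random $1\times 1$ channel mixes; all names are hypothetical.

```python
import numpy as np

def conv1x1(w, x):
    # 1x1 convolution as a channel-mixing stand-in for the 3x3 convolutions
    return np.einsum('oc,chw->ohw', w, x)

def recurrent_conv_unit(x, w_f, w_r, t):
    """Unfold a recurrent convolutional unit for t steps with shared weights:
    z_0 = relu(w_f * x);  z_k = relu(w_f * x + w_r * z_{k-1})."""
    z = np.maximum(conv1x1(w_f, x), 0.0)
    for _ in range(t):
        z = np.maximum(conv1x1(w_f, x) + conv1x1(w_r, z), 0.0)
    return z

rng = np.random.default_rng(1)
x = rng.standard_normal((8, 16, 16))
w_f = rng.standard_normal((8, 8)) * 0.1   # feed-forward weights (shared)
w_r = rng.standard_normal((8, 8)) * 0.1   # recurrent weights (shared)
out = recurrent_conv_unit(x, w_f, w_r, t=2)   # t=2 as in Alom et al.
```

Increasing `t` deepens the effective computation without adding any parameters, which is the core appeal of the recurrent backbone.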
\subsection{Bottleneck Enhancements} \label{sec:bottleneck}
The U-Net architecture can be separated into three main parts: the encoder (contracting path), the decoder (expanding path), and the bottleneck which lies between the encoder and decoder. The bottleneck is used to force the model to learn a compressed representation of the input data which should only contain the important and useful information needed to restore the input in the decoder. To this end, various modules are designed in multiple studies \cite{zhang2022multi,azad2021deep} to recalibrate and highlight the most discriminant features. In the original U-Net, the bottleneck consists of two $3\times 3$ convolutional layers with ReLU activation. More recent approaches however have extended the classic bottleneck architecture to improve performance.
\subsubsection{Attention Modules}
Several works apply attention modules in the bottleneck of their U-Net architecture.
Fan et al. used a position-wise attention block (PAB) in their MA-Net to model spatial dependencies between pixels in the bottleneck feature maps with self-attention \cite{fan2020ma}.
The architecture of the PAB can be seen in \Cref{fig:MAnet}.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/MA-Net.pdf}
\caption{Architecture of the position-wise attention block introduced in \cite{fan2020ma}.}
\label{fig:MAnet}
\end{figure}
The feature maps passed into the bottleneck at the end of the encoder path are first processed by a $3\times 3$ convolutional layer.
The resulting outputs are then processed by three individual $1\times 1$ convolutional layers producing $A$, $B$, and $C$.
$A$ and $B$ are reshaped to form two vectors.
A matrix multiplication of these two vectors passed through a softmax function yields the spatial feature attention map $P \in \mathbb{R}^{N\times N}$ in which the positions $p_{i, j}$ encode the influence of the $i^{th}$ position on the $j^{th}$ position in the feature map.
Subsequently, a matrix multiplication is performed between the reshaped $C$ and the spatial feature attention map $P$, and the resulting feature maps are multiplied with the input $I'$ before being passed through a final $3\times 3$ convolutional layer.
The final output $O$ is therefore defined as follows:
\begin{equation}
O_j = \alpha \sum_{i=1}^N(P_{ji}C_i)+I'_j
\end{equation}
The parameter $\alpha$ is initialized to zero and learned during training, gradually assigning more weight to the attention term.
Considering that the final output is the weighted sum of the feature maps across all positions and the original feature maps, it has a global contextual view and can selectively aggregate rich contextual information.
Intra-class correlation and semantic consistency are improved because the PAB can consider long-range spatial dependency between features in a global view.
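The position-wise attention computation can be sketched in NumPy as below. This is a simplified illustration: the $1\times 1$ convolutions producing $A$, $B$, and $C$ are replaced by identity mappings, and a fixed `alpha` stands in for the learned weighting factor.

```python
import numpy as np

def softmax_rows(s):
    s = s - s.max(axis=1, keepdims=True)
    e = np.exp(s)
    return e / e.sum(axis=1, keepdims=True)

def position_attention(feat, alpha=0.5):
    """Position-wise attention sketch on a (C, H, W) feature map."""
    Cc, H, W = feat.shape
    N = H * W
    A = B = Cv = feat.reshape(Cc, N)          # one C-dim vector per position
    P = softmax_rows(A.T @ B)                 # (N, N); row j holds weights P_ji
    # O_j = alpha * sum_i P_ji C_i + I'_j
    out = alpha * (Cv @ P.T) + feat.reshape(Cc, N)
    return out.reshape(Cc, H, W)

feat = np.random.default_rng(2).standard_normal((4, 3, 3))
out = position_attention(feat, alpha=0.5)
```

With `alpha = 0` the module reduces to the identity, mirroring the behavior at the start of training.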
Guo et al. also add a spatial attention module to the bottleneck of their SA-UNet architecture \cite{guo2021sa}. The spatial attention module should enhance relevant features and suppress unimportant features in the bottleneck. In their approach, the input feature maps are passed through an average pooling and a max pooling layer in parallel. Both pooling operations are applied along the channel dimension to produce efficient feature descriptors. The outputs are then concatenated and passed through a $7 \times 7$ convolutional layer and sigmoid activation to obtain a spatial attention map.
By multiplying the spatial attention map with the original input features, the inputs can be weighted based on their importance for the segmentation task at hand.
The attention module only adds 98 parameters to the original U-Net and is therefore computationally very lightweight.
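The spatial attention module lends itself to a compact NumPy sketch: channel-wise average and max pooling, a $7\times 7$ convolution, a sigmoid gate, and an element-wise rescaling. The convolution weights here are random stand-ins for the trained kernel.

```python
import numpy as np

def conv2d_same(x, w):
    """'Same'-padded 2D cross-correlation of x (C, H, W) with w (C, k, k)."""
    C, H, W = x.shape
    k = w.shape[-1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(xp[:, i:i + k, j:j + k] * w)
    return out

def spatial_attention(feat, w):
    """SA-UNet-style spatial attention sketch on a (C, H, W) feature map."""
    avg = feat.mean(axis=0, keepdims=True)           # (1, H, W)
    mx = feat.max(axis=0, keepdims=True)             # (1, H, W)
    desc = np.concatenate([avg, mx], axis=0)         # (2, H, W) descriptors
    att = 1.0 / (1.0 + np.exp(-conv2d_same(desc, w)))  # sigmoid gate in (0, 1)
    return feat * att                                # broadcast over channels

rng = np.random.default_rng(3)
feat = rng.standard_normal((16, 10, 10))
w = rng.standard_normal((2, 7, 7)) * 0.05            # 2*7*7 = 98 parameters
out = spatial_attention(feat, w)
```

The $7\times 7$ kernel over the two pooled descriptors accounts exactly for the 98 additional parameters mentioned above.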
In another work, Azad et al. \cite{azad2022smu} utilized the idea of a texture/style matching mechanism in the U-Net bottleneck for brain tumor segmentation. In their design, an attention agent distills informative features from a full-modality network (four MRI modalities: T1, T2, FLAIR, and T1c) into a missing-modality network (only FLAIR). Further information regarding the missing-modality task can be found in \cite{azad2022medical}. A deep frequency attention module is proposed in \cite{azad2021deep} to perform a frequency recalibration process on the U-Net bottleneck. This attention block aims to recalibrate the feature representation based on structure and shape information rather than texture representation to alleviate the texture bias in object recognition.
\subsubsection{Multi-Scale Representation}
The aim of this direction is to enhance the bottleneck design by including multi-scale feature representation, e.g., through atrous convolution. Atrous convolutions are performed like standard convolutions, but with holes inserted into the convolutional kernels.
The holes are defined by setting the weight of the convolutional kernel to zero at the corresponding locations and the pattern for doing so is defined by the atrous sampling rate $r$.
Considering a sampling rate $r$, this introduces $r-1$ zeros between consecutive filter values.
A $k\times k$ convolutional kernel is thereby enlarged to a $\left(k+(k-1)(r-1)\right)\times \left(k+(k-1)(r-1)\right)$ filter.
This way the receptive field of the layer is expanded without introducing any additional network parameters to be learned.
\Cref{fig:atrous} shows a $3\times 3$ kernel with atrous sampling rates $r$ of $r=1$, $r=2$ and $r=4$.
\begin{figure}
\centering
\begin{subfigure}[t][][t]{0.22\columnwidth}
\centering
\includegraphics[scale=0.5]{./Figures/unetextensions/atrous1.pdf}
\caption{} \label{fig:atrous1}
\end{subfigure}
\hfill
\begin{subfigure}[t][][t]{0.42\columnwidth}
\centering
\includegraphics[scale=0.5]{./Figures/unetextensions/atrous2.pdf}
\caption{} \label{fig:atrous2}
\end{subfigure}
\hfill
\begin{subfigure}[t][][t]{0.32\columnwidth}
\centering
\includegraphics[width=\textwidth]{./Figures/unetextensions/atrous3.pdf}
\caption{} \label{fig:atrous3}
\end{subfigure}
\caption{$3\times 3$ convolutional kernel with different atrous sampling rates $r$ of (a) $r=1$, (b) $r=2$ and (c) $r=4$.}
\label{fig:atrous}
\end{figure}
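The kernel enlargement rule above can be verified with a short NumPy sketch that inserts $r-1$ zeros between consecutive filter values:

```python
import numpy as np

def dilate_kernel(w, r):
    """Insert r-1 zeros between consecutive values of a k x k kernel,
    enlarging it to (k + (k-1)(r-1)) x (k + (k-1)(r-1))."""
    k = w.shape[0]
    K = k + (k - 1) * (r - 1)
    out = np.zeros((K, K), dtype=w.dtype)
    out[::r, ::r] = w                  # original weights on a stride-r grid
    return out

w = np.arange(1, 10, dtype=float).reshape(3, 3)
print(dilate_kernel(w, 2).shape)       # (5, 5): a 3x3 kernel at rate r=2
```

The number of non-zero weights stays at $k^2$ for any rate $r$, which is why the receptive field grows without any additional parameters.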
When the objects to be segmented are of very different sizes, it is important for the network to extract multi-scale information.
Combining the ideas of spatial pyramid pooling and atrous convolutions, the feature maps in the bottleneck of the U-Net can be resampled in parallel by atrous convolutions with different sampling rates and then combined to obtain rich multi-scale features.
Hai et al. \cite{hai2019fully} use atrous spatial pyramid pooling (ASPP) in the bottleneck of a U-Net architecture for the segmentation of breast lesions. The final feature maps of the encoder are passed in parallel through a $1\times 1$ convolutional layer and three atrous $3\times 3$ convolutional layers with atrous sampling rates of 6, 12 and 18 respectively. These four processed groups of feature maps are concatenated together with the original feature maps passed to the bottleneck and processed by a final $1\times 1$ convolution before being passed to the decoder.
Wang et al. make use of ASPP in the bottleneck as well in their COPLE-Net for the segmentation of pneumonia lesions from CT scans of COVID-19 patients \cite{wang2020noise}. Here, four atrous convolutional layers with dilation rates of 1, 2, 4, and 6 respectively are used to process the bottleneck feature maps to capture multi-scale features for the segmentation of small and large lesions.
Similarly, Wu et al. \cite{wu2021jcs} proposed a multi-task learning paradigm, JCS, for COVID-19 CT image classification and segmentation. JCS \cite{wu2021jcs} is a two-branch architecture that utilizes a Group Atrous (GA) module in the bottleneck of its segmentation branch for feature modification. GA first applies a $1\times 1$ convolution to expand the channels of the feature map. The feature map is then divided into four equal sets. Applying atrous convolutions with different rates to these sets results in more global feature maps with diverse receptive fields. To extract more discriminant features from the final feature map, JCS adopts a Squeeze-and-Excitation (SE) \cite{hu2018squeeze} block as an attention mechanism for recalibrating channel-wise convolution features.
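The SE recalibration step can be sketched as follows: a global average pooling "squeeze", a two-layer bottleneck MLP with a sigmoid "excitation", and a channel-wise rescaling of the input. The weight matrices and the reduction ratio below are illustrative stand-ins, not the trained JCS parameters.

```python
import numpy as np

def se_block(feat, w1, w2):
    """Squeeze-and-Excitation sketch on a (C, H, W) feature map."""
    s = feat.mean(axis=(1, 2))                 # squeeze: (C,) channel stats
    e = np.maximum(w1 @ s, 0.0)                # reduce: (C / ratio,) with ReLU
    g = 1.0 / (1.0 + np.exp(-(w2 @ e)))        # gate in (0, 1): (C,)
    return feat * g[:, None, None]             # recalibrate each channel

rng = np.random.default_rng(4)
C, ratio = 16, 4
feat = rng.standard_normal((C, 8, 8))
w1 = rng.standard_normal((C // ratio, C)) * 0.3
w2 = rng.standard_normal((C, C // ratio)) * 0.3
out = se_block(feat, w1, w2)
```

Since the gate lies in $(0, 1)$, each channel is attenuated according to its learned importance rather than being hard-selected.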
\begin{figure*}[th!]
\centering
\begin{subfigure}[t][][c]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/unetextensions/ViT.pdf}
\caption{Overview of the preliminary Vision Transformer (ViT) structure proposed by Dosovitskiy et al. \cite{dosovitskiy2020image}. ViT splits an image into fixed-size patches in a non-overlapping regime, linearly embeds each of them into a 1D vector space, then adds position embeddings and feeds the resulting sequence of vectors to a standard Transformer encoder. In order to perform classification, the naive ViT uses the standard approach of adding an extra learnable \textit{classification token} to the sequence. However, this token is not practical in the segmentation literature; hence the MLP head (above the dashed line in the figure) is omitted in segmentation tasks.}
\label{fig:vit}
\end{subfigure}
\hfill
\begin{subfigure}[t][][c]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/unetextensions/TransformerEncoder.pdf}
\includegraphics[width=\textwidth,height=0.7\textwidth]{Figures/unetextensions/Multihead-AttentionBlock.pdf}
\caption{Up: Transformer Encoder block visualization, down-left: Multi-Head Self-Attention (MHSA) block, down-right: Attention mechanism block fed by the query (Q), key (K), and value (V) representations learned from the input embedded patches. LN and MLP denote the Layer Normalization \cite{ba2016layer} and Multi-Layer Perceptron operations, respectively.}
\label{fig:transformer-encoder}
\end{subfigure}
\hfill
\caption{The figure portrays the ViT \cite{dosovitskiy2020image} pipeline with all of its components from a top-down view.}
\label{fig:transformer}
\end{figure*}
\subsection{Transformers} \label{sec:transformers}
Inspired by the recent success of the Transformer models in Natural Language Processing (NLP), these models were further extended to perform vision recognition tasks. More specifically, the Vision Transformer model was introduced by Dosovitskiy et al. \cite{dosovitskiy2020image} to alleviate the deficiency of CNNs in capturing the long-range semantic dependencies.
Before going deeper into transformer-based methods, it might be practical to review the concept of vision transformers and the mechanism of self-attention utilized in these networks.
Contrary to the Transformers in NLP tasks \cite{vaswani2017attention}, computer vision tasks usually involve data with more than one dimension (e.g., 2D images, 3D videos), which needs to be prepared for the Transformer model. Hence, ViT's pipeline starts with an image sequentialization process (see \Cref{fig:vit}) to prepare the tokenized sequence for the encoder module. From now on, the words \textbf{patch} and \textbf{token} will be used interchangeably.
If $x\in\mathbb{R}^{H\times W \times D \times C}$ is a volumetric 3D image with spatial resolution $\left(H,W,D\right)$ and $C$ input channels, $x$ is first divided into $N=\frac{H \times W \times D}{P^3}$ flattened, uniform, non-overlapping patches $x_{p}\in \mathbb{R}^{N\times(P^3\cdot C)}$ with spatial resolution $(P,P,P)$ per patch; each patch $x_p^{i}$, $i \in \{1,\cdots,N\}$, is therefore represented by a 1D sequence of length $P^3\cdot C$. Afterward, a linear layer is applied on top of the sequence to map it to a $K$-dimensional embedding space. In order to retain the positional information of the patches, a 1D learnable positional encoding $\mathbf{E}_{\text{pos}} \in \mathbb{R}^{N\times K}$ is added to the patch embeddings as follows:
\begin{align}
\mathbf{z}_0=\left[x_{\text{class}}; x_p^1\mathbf{E};x_p^2\mathbf{E};\cdots;x_p^N\mathbf{E}\right] + \mathbf{E}_{\text{pos}}, \label{eq:patch-embedding}
\end{align}
where $\mathbf{E}$ denotes the patch embedding operation and the class token, $x_{\text{class}}$, is omitted in segmentation tasks. In the next step, the embedded patches are fed to a stack of $L$ Transformer encoder blocks containing Multi-Head Self-Attention (MHSA), Multi-Layer Perceptron (MLP), and Layer Normalization \cite{ba2016layer} sub-blocks to generate the latent representation. The following formulations show the mathematical process in the Transformer encoder:
\begin{align}
\hat{\mathbf{z}}^{l}&=\text{MHSA}(\text{LN}(\mathbf{z}^{l-1}))+\mathbf{z}^{l-1} \notag \\
\mathbf{z}^{l} & = \text{MLP}(\text{LN}(\hat{\mathbf{z}}^{l})) + \hat{\mathbf{z}}^{l}, \label{eq:multi-head-attention-block}
\end{align}
where $l \in \{1,\cdots,L\}$; $\hat{\mathbf{z}}^{l}$ and $\mathbf{z}^{l}$ denote the outputs of the MHSA operation and the MLP function, respectively.
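The sequentialization and patch-embedding step of \Cref{eq:patch-embedding} can be sketched in NumPy as follows. This is a toy illustration: the patch size, embedding dimension, and random projection are arbitrary choices, and the learnable positional encoding is simply initialized to zero.

```python
import numpy as np

def patchify3d(x, P):
    """Split a volume x of shape (H, W, D, C) into N = HWD / P^3
    non-overlapping patches, each flattened to a vector of length P^3 * C."""
    H, W, D, C = x.shape
    x = x.reshape(H // P, P, W // P, P, D // P, P, C)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)          # patch-grid axes first
    return x.reshape(-1, P ** 3 * C)              # (N, P^3 * C)

rng = np.random.default_rng(5)
P, K = 4, 32
x = rng.standard_normal((16, 16, 16, 2))          # (H, W, D, C)
tokens = patchify3d(x, P)                         # N = 16^3 / 4^3 = 64 tokens
E = rng.standard_normal((tokens.shape[1], K)) * 0.1   # linear patch embedding
E_pos = np.zeros((tokens.shape[0], K))            # learnable positions, init 0
z0 = tokens @ E + E_pos                           # z_0, without a class token
```

Each row of `tokens` is one flattened $P^3\cdot C$ patch, and `z0` corresponds to $\mathbf{z}_0$ as fed into the first encoder block.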
From \Cref{fig:transformer-encoder}, the MHSA block comprises $h$ parallel self-attention sub-blocks that perform the attention (scaled dot-product attention) $h$ times with different \textbf{Q}uery ($Q$), \textbf{K}ey ($K$), and \textbf{V}alue ($V$) matrices derived from the input 1D sequence $\mathbf{z}^{l} \in \mathbb{R}^{N\times K}$. The attention function is a mapping operation between a query and key-value pairs to an output that measures the similarity between two components in $\mathbf{z}$:
\begin{align}
\text{SA}(\mathbf{z})=\text{SoftMax}(\frac{QK^{T}}{\sqrt{K_h}})V,\label{eq:attention-eq}
\end{align}
where $\sqrt{K_h}$ denotes a normalization factor that protects the attention matrix (\Cref{eq:attention-eq}) from possible vanishing or exploding gradients during training. Furthermore, the output of MHSA is derived from the concatenation of multiple heads:
\begin{align}
\text{MHSA}=\text{Linear}([\text{SA}_1(\mathbf{z});\text{SA}_2(\mathbf{z});\cdots;\text{SA}_h(\mathbf{z})]).
\end{align}
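The scaled dot-product attention and multi-head aggregation above can be illustrated with a small NumPy sketch. Random weight matrices stand in for the learned projections, and all names are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def mhsa(z, Wq, Wk, Wv, Wo, h):
    """Multi-head self-attention on a token sequence z (N, K):
    SA(z) = softmax(Q K^T / sqrt(K_h)) V per head, heads concatenated
    and linearly projected."""
    N, K = z.shape
    Kh = K // h                                   # per-head dimension K_h
    Q, Km, V = z @ Wq, z @ Wk, z @ Wv             # (N, K) each
    heads = []
    for i in range(h):
        sl = slice(i * Kh, (i + 1) * Kh)
        A = softmax(Q[:, sl] @ Km[:, sl].T / np.sqrt(Kh))   # (N, N) weights
        heads.append(A @ V[:, sl])                # (N, Kh) per head
    return np.concatenate(heads, axis=1) @ Wo     # linear merge to (N, K)

rng = np.random.default_rng(7)
N, K, h = 10, 16, 4
z = rng.standard_normal((N, K))
Wq, Wk, Wv, Wo = (rng.standard_normal((K, K)) * 0.2 for _ in range(4))
out = mhsa(z, Wq, Wk, Wv, Wo, h)
```

Each row of the per-head attention matrix sums to one, so every output token is a convex combination of value vectors, followed by the residual and MLP stages described in \Cref{eq:multi-head-attention-block}.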
So far, we have briefly introduced the ViT pipeline and the related mathematics. In the following sections, we discuss the integration of the Transformer into the U-Net structure for medical image segmentation. We categorize the presence of Transformers in U-shaped networks into two sub-categories: (a) the Transformer as a complement to CNN-based U-Net-like structures and (b) U-shaped standalone Transformer architectures.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{Figures/unetextensions/TransUNet.pdf}
\caption{The overview architecture of the TransUNet. The Transformer layers are employed in the encoder part. The schematic of the Transformer is shown on the left. Figure from \cite{chen2021transunet}.}
\label{fig:transunet}
\end{figure}
\subsubsection{Transformer Complement to CNN-based U-Net} \label{sec:hybrid-transformer}
The success of convolutional neural networks (CNNs) in diverse dense prediction tasks in the vision domain, e.g., segmentation, is noticeable. Their performance is underlined by their multi-scale representation and ability to capture local semantic and texture information. However, the local representation deriving from the CNN architecture might not be robust enough to capture the geometrical and structural information present in medical data. Therefore, there is a need for a mechanism that captures long-range inter-pixel relations to extend the performance of the existing CNN-based U-Net variants, which suffer from the limited receptive field of convolutional operations. Chen et al. \cite{chen2021transunet} proposed one of the first studies that utilized the Vision Transformer (ViT) in the U-Net structure to compensate for the U-Net's inability to model long-range dependencies, namely TransUNet (see \Cref{fig:transunet}). The stacked Transformers in the encoder path are fed with the tokenized patches from abstract features extracted from the raw input to capture global contexts. The decoder path upsamples the encoded features and combines them with the high-resolution CNN feature maps to enable precise localization. Chen et al. clarified that the naive Transformer is not well suited for downstream tasks like segmentation due to its 1D functionality for capturing the interaction of the tokenized information. Therefore, they proposed this complementary Transformer design with U-Net and conducted several ablation studies to demonstrate its superiority over conventional attention-augmented networks such as Attention U-Net \cite{oktay2018attention} on the Synapse \cite{synapse2015ct} and ACDC \cite{bernard2018deep} datasets.
TransUNet is a 2D network that processes volumetric 3D medical images slice-by-slice and, due to its seminal ViT adaptation for its building blocks, relies on ViT models pre-trained on large-scale image datasets. Pointing out these restrictions, Wang et al. \cite{wang2021transbts} proposed TransBTS, a U-Net-like architecture modeling local and global information in the spatial and slice/depth dimensions. Since the Transformer's computational complexity is quadratic, volumetric 3D data is large, and ViT's fixed-size tokenization process \cite{dosovitskiy2020image} discards local structural information, TransBTS utilizes a 3D CNN backbone for its encoder and decoder paths to capture local representations across the spatial and depth dimensions and to reduce the overall computational burden of the Transformer counterpart. The essential key points in the amalgamation of the Transformer with the encoded low-resolution, high-level representation flow coming from the CNN blocks are the linear projection and feature mapping blocks, where the input/output signals are reshaped and downsampled to be compatible with their usage. This hybrid network captures local and global information from 3D data and demonstrates improved performance over previous CNN U-Net structures on the two Brain Tumor Segmentation (BraTS) 2019-2020 \cite{menze2014multimodal,bakas2017advancing,bakas2018identifying} datasets.
Li et al. \cite{li2021gt} proposed the GT U-Net structure to address the low performance of previous segmentation methods on fuzzy boundaries while keeping the computational complexity low within a hybrid CNN-Transformer structure in a U-Net-like paradigm. Their method was applied to private orthodontic tooth X-ray images and the DRIVE dataset \cite{staal2004ridge}. All the main components of the U-Net are based on Group Transformer (GT) blocks that dispense with the quadratic computational complexity through successive parallel convolution, Multi-Head Self-Attention (MHSA), and convolution modules in each stage, gradually increasing the receptive field and extracting long local dependencies. The presence of a Transformer in segmentation tasks is crucial because, to provide an efficient prediction mask, a network should minimize the misclassification of background and foreground pixels, which leads to a reduction in False Positives (FP). Learning long-range contextual features is therefore as essential as handling the fuzzy boundaries resulting from object overlaps or variation in the exposure of medical imaging devices. To mitigate such mispredictions at the boundary level, GT U-Net utilizes a Fourier descriptor loss term alongside binary cross-entropy to impose prior shape knowledge.
Xie et al. \cite{xie2021cotr} addressed the computational complexity that restrains the multi-scale functionality of conventional Self-Attention (SA) and proposed the hybrid CoTr architecture for volumetric medical image segmentation. The whole network is a U-Net-like structure with CNN-based 3D residual blocks for the encoder and decoder paths, combined with a Deformable Transformer (DeTrans) for multi-scale fusion, besides the conventional skip connections from the encoder to the decoder for better localization information and faster convergence. TransUNet suffers from a parameter overload within MHSA, which treats all image token positions equally. Therefore, CoTr instantiates the deformation concept from \cite{dai2017deformable,zhu2021deformable} in a deformable self-attention mechanism in the Transformer to decrease the computational complexity and prepare the ground for using the Transformer to process multi-scale and high-resolution feature maps. The MS-DMSA layer, used instead of MHSA, is a deformable Transformer layer that attends to only a small set of key sampling locations around a reference point. CoTr demonstrates competitive results in a score-parameter trade-off on the Multi-Atlas Labeling Beyond the Cranial Vault (BCV) \cite{landman2015miccai} dataset.
UNETR \cite{hatamizadeh2022unetr} is a 3D segmentation network that directly utilizes volumetric data, incorporating the ViT solely in the encoder stage to capture global multi-scale contextual information in a 3D volumetric style, which is usually of paramount importance in the medical image segmentation domain. The architecture follows the U-shaped structure of \cite{ronneberger2015u}, with skip connections carrying successive 3D convolution operations to the 3D CNN-based decoder. A CNN-based decoder is used because Transformers cannot capture spatial localization information well despite their excellent capability of learning global information. Analogous to U-Net, Hatamizadeh et al. \cite{hatamizadeh2022unetr} use the different stages of the Transformer in the encoder to pass the flow from the contracting path to the expanding path, and the multi-resolution contextual information (after reshaping the embedded sequence to a proper tensor shape and applying convolution operations) is merged with the CNN-based decoder to improve the segmentation mask prediction. UNETR produces uniform, non-overlapping patches from the volumetric data and applies a linear projection to project the patches into a constant embedding space that is maintained throughout the Transformer layers. Their ablation studies show that they outperform TransUNet \cite{chen2021transunet}, TransBTS \cite{wang2021transbts}, and CoTr \cite{xie2021cotr} on the BCV \cite{landman2015miccai} and MSD \cite{simpson2019large} datasets by an average margin of 1\% in the Dice score metric.
In computer vision tasks, neighboring regions tend to be more correlated than distant ones. To this end, Wang et al. \cite{wang2022mixed} proposed the MT-UNet network, equipped with a Mixed Transformer Module (MTM) to capture long-range dependencies while giving appropriate weight to the most relevant neighboring contextual information. Another critical point is that the ViT with Self-Attention (SA) calculates intra-token affinities while ignoring the inter-token connections distributed across other dimensions, which are especially relevant in medical images. Therefore, MTM includes an External Attention (EA) component to address this concern. MTM is used in conjunction with a U-Net-like structure accompanied by CNN blocks. The CNN blocks not only reduce the computational overhead by downsampling the input feature maps but also introduce a structural prior to the model in the case of small medical datasets. MT-UNet performs well on the Synapse \cite{landman2015miccai} and ACDC \cite{bernard2018deep} datasets in comparison with TransUNet \cite{chen2021transunet}.
Azad et al. \cite{reza2022contextual} proposed a contextual attention network, namely TMU, for adaptively synthesizing the local features produced by the U-Net with the ViT's global information to enhance overlapping boundary areas in medical images. TMU is a two-branch pipeline, wherein the first stream utilizes a U-Net-like block without a segmentation head (ResNet backbone \cite{he2016deep}) to extract high-level semantic features and an object-level boundary heatmap interaction representation. In the second branch, a ViT-based Transformer module is applied to non-overlapping input patches to extract long-range dependencies. Since the objective of segmentation differs from one subject and dataset to another, as mentioned before, TMU aims to merge the local and global information adaptively. To do so, Azad et al. proposed a contextual attention mechanism that produces image-level contextual information and highlights the most discriminative regions via importance coefficients delivered by the attention weights from the Transformer. This paradigm not only reveals the efficiency of using boundary information as a prior and of the adaptive collaboration of local and long-range dependencies but also outperforms conventional hybrid and purely CNN-based methods on the SegPC challenge dataset \cite{gupta2018pcseg,gehlot2020ednfc,gupta2020gcti} and skin lesion segmentation datasets \cite{codella2018skin,codella2019skin,mendoncca2013ph}.
\begin{figure}[!htb]
\includegraphics[width=\columnwidth]{Figures/unetextensions/UCTransNet.pdf}
\caption{Overview of the UCTransNet architecture. The original skip connections of U-Net architecture are replaced with the proposed CTrans Module including Channel-wise Cross fusion Transformer (CCT) and Channel-wise Cross Attention (CCA). CCT with the Multi-head Cross-Attention module is illustrated on the left. Figure from \cite{wang2022uctransnet-arx}.}
\label{fig:uctransnet}
\end{figure}
Skip connections in U-Net-based models are used to transfer high-resolution spatial information from the encoder to the decoder for accurate localization, since the successive downsampling operations suffer from a loss of spatial information. However, Wang et al. \cite{wang2022uctransnet} studied the effectiveness of the original U-Net skip connections and stated that the naive skip connections suffer from large semantic gaps, both among the multi-scale encoder features and between the encoder and decoder stages. They proposed UCTransNet \cite{wang2022uctransnet}, which alleviates these issues from the channel perspective with an attention mechanism, namely the Channel Transformer (CTrans). CTrans is a modification of the skip connections in a U-Net-based pipeline and consists of two sub-components, Channel Cross fusion with Transformer (CCT) and Channel-wise Cross-Attention (CCA), for adaptively aggregating multi-scale features and effectively guiding the fused multi-scale channel-wise features to the decoder, respectively (see \Cref{fig:uctransnet}). CCT aims to fuse the multi-scale encoder features to adaptively compensate for the semantic gap between different scales, taking advantage of the long-range dependency modeling of the Transformer. CCT tokenizes the feature maps at each stage with patch sizes reduced by a factor of $\frac{1}{2}$ from stage to stage, preserving the channel dimensions. As shown in \Cref{fig:uctransnet}, the proposed CCT module takes the tokenized feature maps of one stage as the query and the concatenation of the four stages' tokens as the key and value matrices. Besides the use of instance normalization \cite{ulyanov2016instance} for gradient smoothing, the primary distinction between CCT and Self-Attention (SA) is that the attention operation is applied along the channel axis rather than the patch axis.
Afterward, to rectify the gap between the inconsistent feature representations of the encoder and decoder, the output tokens of CCT pass through CCA to perform a better fusion step and lessen the ambiguity with the decoder features. UCTransNet achieves state-of-the-art Dice scores on the GlaS \cite{sirinukunwattana2017gland}, MoNuSeg \cite{kumar2017dataset,kumar2019multi}, and Synapse \cite{landman2015miccai} datasets in comparison with TransUNet \cite{chen2021transunet}.
\begin{figure}[!h]
\includegraphics[width=\columnwidth]{Figures/unetextensions/ScaleFormer.pdf}
\caption{Overview of the ScaleFormer \cite{huang2022scaleformer} architecture. The input image is first passed through the CNN Block to extract the local details. The output features of the last several layers in the CNN block are then fed into the Intra-Scale Transformers to model the global information for each scale. Spatial-Aware Inter-Scale Transformer fuses the outputs of Inter-scale Transformer modules to enable the interaction among different scales. Finally, the decoder block performs upsampling and concatenation with features of corresponding scales to produce the segmentation prediction.}
\label{fig:scaleformer}
\end{figure}
Similar to UCTransNet, Huang et al. \cite{huang2022scaleformer} addressed the inconsistency between local and global features at inter- and intra-scale levels in conventional (hybrid or standalone) architectures \cite{chen2021transunet,cao2021swin,huang2021missformer,yan2022after} and proposed ScaleFormer, a backbone based on a U-Net-like structure which, at the time of this study, is the SOTA method for the 2D modality. Their design couples CNN-based features with long-range contextual features at each scale through a lightweight Dual-Axis MSA that captures attention in a row/column-wise manner. In addition, ScaleFormer \cite{huang2022scaleformer} builds a bridge with a spatial-aware inter-scale Transformer that lets the multi-scale features of the target regions interact, overcoming the limitations posed by the shape, location, and variability of organs. ScaleFormer adopts a ResNet \cite{he2016deep} variant, basic ResNet-34 blocks, as its CNN feature extractor, and in each stage a scale-wise intra-scale Transformer (Dual-Axis MSA) is coupled with the local features to highlight both detailed local-spatial and long-range spatial affinities within that scale. To alleviate the deficiency of previous hierarchical encoders in capturing sufficient information from multiple scales, spatial-aware inter-scale Transformers merge these features adaptively, strengthening ScaleFormer in effectively segmenting organs of varying scales. As shown in \Cref{fig:scaleformer}, the intra-scale Transformer is a computation-efficient design that applies successive point-wise convolutions followed by average pooling to the row/column-wise query and key matrices of the Transformer. This pipeline embraces all operations in a single block rather than separate row and column blocks on the input data \cite{ho2019axial}. The spatial-aware inter-scale Transformer is a conventional Transformer that differs in how the interactions between token cues are calculated.
More precisely, each input token to this Transformer is first reshaped to its 2D representation. The 2D representation of each token at a given scale is concatenated with its corresponding 2D patch feature maps at the following scales and then flattened back to a 1D representation, producing a master token for that specific token, to which the standard Transformer computation is applied. Afterward, the enhanced representations are aggregated in the decoder path with the local-level CNN features of the same stage through the skip connection to each decoder block. ScaleFormer demonstrates its effectiveness on multiple datasets \cite{landman2015miccai,kumar2017dataset,kumar2019multi,bernard2018deep} by outperforming TransUNet \cite{chen2021transunet}, Swin-Unet \cite{cao2021swin}, MISSFormer \cite{huang2021missformer} and AFTer-UNet \cite{yan2022after}.
\Cref{fig:transunet,fig:scaleformer} show sample CNN-Transformer U-shaped structures in which the Transformer is an add-on to a U-Net-like network, modeling long-range contextual information at diverse stages of U-Net, from the encoder-decoder to the skip connections and the bottleneck. Swin UNETR \cite{hatamizadeh2022swin} is a modification of the original UNETR \cite{hatamizadeh2022unetr} in which the 3D Vision Transformer in the encoder path is replaced by the Swin Transformer. Other studies, such as \cite{azad2022transnorm,chen2021transattunet}, are also noteworthy, but we exclude them due to space limitations.
\begin{figure*}[th!]
\centering
\begin{subfigure}[][][c]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/unetextensions/MedT.pdf}
\caption{
Overview of the MedT architecture. The network uses the LoGo strategy for training. The upper global branch utilizes the first few blocks of the transformer layers to encode the long-range dependencies of the original image. In the local branch, the image is split into small patches that are fed into the network to model the local details within each patch. The output of the local branch is re-sampled based on the location information. Finally, a $1 \times 1$ convolution layer fuses the output feature maps from the two branches to generate the final segmentation mask. Figure from \cite{valanarasu2021medical-arx}.
}
\label{fig:medt}
\end{subfigure}
\hfill
\begin{subfigure}[][][c]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/unetextensions/SwinUNet.png}
\caption{
The architecture of the Swin-Unet follows the U-Shape structure. It contains the encoder, the bottleneck, and the decoder part which are built based on the Swin transformer block. The encoder and the decoder are connected with skip connections. Figure from \cite{cao2021swin}.
}
\label{fig:swin-unet}
\end{subfigure}
\hfill
\begin{subfigure}[][][c]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/unetextensions/MISSFormer.pdf}
\caption{
Overview of the MISSFormer architecture.
The network is composed of a hierarchical encoder, a decoder, and an Enhanced Transformer Context Bridge. The encoder and decoder are constructed based on the enhanced transformer blocks and modules for patch processing. The outputs of each stage within the encoder are fused and passed through the bridge to model the local and global dependencies of different scales. Figure from \cite{huang2021missformer}.
}
\label{fig:missformer}
\end{subfigure}
\hfill
\begin{subfigure}[][][c]{0.48\textwidth}
\includegraphics[width=\textwidth]{Figures/unetextensions/D-Former.pdf}
\caption{Overview of the D-Former architecture. One dynamic position encoding block (DPE), multiple local scope modules (LSMs), and global scope modules (GSMs) constitute each D-Former block. Figure from \cite{wu2022d}.}
\label{fig:d-former}
\end{subfigure}
\caption{Overview of architectures mentioned in \Cref{sec:standalone-transformer}}
\label{fig:standalone-transformer}
\end{figure*}
\begin{figure*}[th!]
\centering
\begin{subfigure}[t][][c]{0.3\textwidth}
\includegraphics[width=\textwidth]{Figures/unetextensions/SwinUnetBlock.pdf}
\caption{The calculations through the Swin block are:\\ $\hat{\mathbf{z}}^l = \text{W-MSA}(\text{LN}(\mathbf{z}^{l-1}))+\mathbf{z}^{l-1}$,\\$\mathbf{z}^l=\text{MLP}(\text{LN}(\hat{\mathbf{z}}^l))+\hat{\mathbf{z}}^l$,\\ $\hat{\mathbf{z}}^{l+1}=\text{SW-MSA}(\text{LN}(\mathbf{z}^l))+\mathbf{z}^l$,\\$\mathbf{z}^{l+1}=\text{MLP}(\text{LN}(\hat{\mathbf{z}}^{l+1}))+\hat{\mathbf{z}}^{l+1}$, $\hat{\mathbf{z}}^{l}$ and $\hat{\mathbf{z}}^{l+1}$ denote the output features of the W-MSA and SW-MSA modules, respectively.}
\label{fig:swin-block}
\end{subfigure}
\hfill
\begin{subfigure}[t][][c]{0.3\textwidth}
\includegraphics[width=\textwidth]{Figures/unetextensions/EnhancedTransformerBlock.pdf}
\caption{The Efficient Mix-FFN block applies the following operations: \\ {\tiny $y_1 = \text{LN}(\text{Conv}_{3\times 3}(\text{FC}(x_{in}))+\text{FC}(x_{in}))$}, \\ $x_{out}=\text{FC}(\text{GELU}(y_1)) + x_{in}$.}
\label{fig:missformer-block}
\end{subfigure}
\hfill
\begin{subfigure}[t][][c]{0.3\textwidth}
\includegraphics[width=\textwidth]{Figures/unetextensions/D-FormerBlock.pdf}
\caption{The LSM and GSM modules (main counterparts of the D-Former block) apply the following formulations:\\ $\hat{\mathbf{z}}^l = \text{LS-MSA}(\text{LN}(\mathbf{z}^{l-1}))+\mathbf{z}^{l-1}$,\\$\mathbf{z}^l=\text{MLP}(\text{LN}(\hat{\mathbf{z}}^l))+\hat{\mathbf{z}}^l$,\\ $\hat{\mathbf{z}}^{l+1}=\text{GS-MSA}(\text{LN}(\mathbf{z}^l))+\mathbf{z}^l$,\\$\mathbf{z}^{l+1}=\text{MLP}(\text{LN}(\hat{\mathbf{z}}^{l+1}))+\hat{\mathbf{z}}^{l+1}$, $\hat{\mathbf{z}}^{l}$ and $\hat{\mathbf{z}}^{l+1}$ denote the output features of the LS-MSA and GS-MSA modules, respectively.}
\label{fig:dformer-block}
\end{subfigure}
\hfill
\caption{Overview of main Transformer counterpart blocks mentioned in \Cref{sec:standalone-transformer}. (a) Swin Transformer block \cite{liu2021swin,cao2021swin}, (b) MISSFormer Enhanced Transformer block \cite{huang2021missformer}, (c) D-Former Transformer block \cite{wu2022d}. MLP, FC, and LN operations indicate the Multi-Layer Perceptron, Fully Connected, and Layer Normalization \cite{ba2016layer}, respectively.}
\label{fig:attention-standalone-transformer}
\end{figure*}
\subsubsection{Standalone Transformer Backbone for U-Net Designs} \label{sec:standalone-transformer}
So far, we have reviewed multiple studies incorporating the Transformer concept alongside conventional CNN modules in \Cref{sec:hybrid-transformer}. In this section, we investigate the use of the Transformer as a standalone main counterpart for designing backbones of U-Net-like structures. One of the pioneering structures in this domain, MedT, was proposed by Valanarasu et al. \cite{valanarasu2021medical}. Like most other networks, MedT aims to capture long-range spatial context with a pure Transformer rather than with CNN-based methods that only partially broaden the limited receptive field of CNNs, e.g., D-UNet \cite{jin2019dunet} with deformable convolutions \cite{dai2017deformable}, ASPP-FC-DenseNet \cite{hai2019fully} with atrous convolutions \cite{chen2017deeplab}, and H-DenseUNet \cite{li2018h} with successive convolutions. However, the performance of Transformers (including ViTs) is strongly tied to the scale of the data they are fed \cite{dosovitskiy2020image}, and at the medical scale, where large amounts of data are often unavailable, it can degrade further. This lack of data is a critical obstacle to learning the positional encodings that have shown their capacity to model the spatial structure of images. Therefore, MedT \cite{valanarasu2021medical} proposes a gated axial-attention mechanism that controls the information flow of the positional embeddings added to the query, key, and value \cite{wang2020axial} in a multi-axis attention operation \cite{ho2019axial}. In \cite{wang2020axial}, accurate relative positional encodings learned on large-scale datasets, rather than small-scale ones, improve performance; MedT therefore introduces a gating parameter that controls the amount of positional bias used in capturing non-local information, limiting the impact of inaccurate positional embeddings.
In addition, to extract information effectively, MedT utilizes a Local-Global (LoGo) training strategy to compensate for the weakness of the Transformer's patch-wise technique in capturing inter-patch pixel dependencies. To do so, MedT employs two branches in its network (see \Cref{fig:medt}): a global branch that works on the original resolution of the image, and a local branch that operates on patches of the image. Overall, MedT demonstrated leading Dice and IoU results on the Brain US \cite{valanarasu2020learning,wang2018automatic}, GlaS \cite{sirinukunwattana2017gland}, and MoNuSeg \cite{kumar2017dataset,kumar2019multi} datasets.
Transformers are well capable of capturing long-range dependencies through data; however, they suffer from severe handicaps that impede their versatile use in vision tasks, and these shortcomings are commonly chained to one another. For example, the computational complexity of the Transformer is quadratic \cite{dosovitskiy2020image,aghdam2022attention}, which restrains its usability in dense vision tasks such as segmentation and detection, where neighboring pixel dependencies are needed in a multi-scale, hierarchical pattern. Due to the fixed-size, non-overlapping tokenization step of the naive Transformer, adopted instead of pixel-by-pixel attention calculation to diminish this computational burden, the Transformer is also ill-suited to extracting local contextual dependencies among intra-patch pixels. These constraints have motivated efficient, linear Transformers with significant reductions in parameters and computational complexity \cite{tay2020efficient}. In vision tasks, the Swin Transformer \cite{liu2021swin} plays a critical role as an efficient, linear Transformer capable of supporting hierarchical architectures. A key design element of the Swin Transformer is its shifted windowing scheme, which makes the Transformer calculate affinities only for patches within the same window. Afterward, the windows are shifted over the patches, and attention is calculated among the patches of the new windows. This successive shifting operation, capturing local contextual information within windowed patches, can be stacked multiple times. Finally, a patch merging layer builds CNN-like hierarchical feature maps by merging image patches in deeper layers. This intuition, together with the success of U-Net-like structures, gave rise to the Swin-Unet \cite{cao2021swin} architecture in the medical segmentation field. As shown in \Cref{fig:swin-unet}, Cao et al.
\cite{cao2021swin} used the Swin Transformer block as the main counterpart of their U-shaped network. 2D medical images are split into non-overlapping patches, and each patch is fed into the encoder path composed of Swin blocks. The contextual features output by the bottleneck are upsampled in the decoder path with patch expanding layers (the counterpart of the patch merging layer) and coupled with the multi-stage features from the encoder via skip connections to restore the spatial information. Swin-Unet presented SOTA results over CNN-Transformer hybrid structures like TransUNet \cite{chen2021transunet} and demonstrated robust generalization ability on the multi-organ (Synapse) \cite{landman2015miccai} and cardiac (ACDC) \cite{bernard2018deep} segmentation datasets.
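The windowing scheme underlying the Swin block can be sketched as follows (a minimal NumPy illustration of window partitioning and the cyclic shift; the attention computation and the shift mask are omitted, and the sizes are illustrative):

```python
import numpy as np

def window_partition(x, w):
    """Split an (H, W, C) feature map into non-overlapping (w*w, C) windows."""
    H, W, C = x.shape
    x = x.reshape(H // w, w, W // w, w, C)
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, w * w, C)

def shifted_windows(x, w):
    """Cyclically shift by w//2 before partitioning (the SW-MSA step)."""
    return window_partition(np.roll(x, shift=(-w // 2, -w // 2), axis=(0, 1)), w)

# An 8x8 single-channel map split into 4x4 windows
x = np.arange(8 * 8 * 1).reshape(8, 8, 1).astype(float)
wins = window_partition(x, 4)    # attention would run inside each window
s_wins = shifted_windows(x, 4)   # shifted windows connect neighboring windows
print(wins.shape, s_wins.shape)  # (4, 16, 1) (4, 16, 1)
```

Attention within each `(w*w, C)` group costs $O(w^4)$ per window, so the total cost grows linearly with the number of windows instead of quadratically with the full token count.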
Motivated by intrinsic properties of medical images, or more specifically of human body organs, e.g., their multi-scale nature and deformations \cite{ronneberger2015u}, by the necessity of capturing long-range dependencies for accurate segmentation boundaries \cite{valanarasu2021medical}, and by the need for models that generalize even without pre-training strategies and remain applicable in low-data regimes \cite{maier2018rankings,prevedello2019challenges,varoquaux2022machine}, Huang et al. \cite{huang2021missformer} proposed MISSFormer, a pure U-shaped Transformer network addressing these concerns. \Cref{fig:missformer} displays the MISSFormer network with the Enhanced Transformer block as its primary entity. One of the Transformer drawbacks mentioned earlier is its unsuitability for capturing local context \cite{chu2021conditional,li2021localvit}, a side effect of the patching operation introduced to reduce computational complexity. Local contextual information, however, plays a pivotal role in high-resolution vision tasks, so some studies in the vision Transformer domain tackle this problem by embedding convolution operations in their attention modules, e.g., PVTv1 \cite{wang2021pyramid}, PVTv2 \cite{wang2022pvt}, and Uformer \cite{wang2022uformer}. Huang et al. \cite{huang2021missformer} argue against this methodology, making the point that the direct usage of convolution layers in Transformer blocks limits the discrimination of features. For an input image, MISSFormer builds patches with $4\times 4$ convolutions whose stride is smaller than the kernel size (overlapping windows), preserving local continuity. The encoder path molds a hierarchical representation (see \Cref{fig:missformer}) with the help of the Enhanced Transformer Block, which incorporates the Transformer module of \Cref{fig:missformer-block}.
The Enhanced Transformer block comprises an Efficient Self-Attention module that decreases the cost of the traditional attention calculation by downsampling the corresponding key and value matrices. Afterward, the Enhanced Mix-Feed Forward Network (FFN) sub-module (a modified clone of Mix-FFN from \cite{xie2021segformer}) aligns features and produces discriminant representations, using $3 \times 3$ depth-wise convolutions to capture local context efficiently. Analogous to \cite{wang2022uctransnet}, MISSFormer rethinks the skip connection design and utilizes the Enhanced Transformer Context Bridge module for multi-scale information fusion, narrowing the gap between encoder and decoder feature maps. This module captures the local and global correlations between features of different scales. Briefly, the context bridge (\Cref{fig:missformer}) flattens the output attention matrices from different scales in the spatial dimension, reshapes them to a consistent channel dimension, concatenates the representations along the flattened spatial dimension, and feeds them to an Enhanced Transformer block to produce long-range dependencies and local context correlations. Finally, the output of the Enhanced Mix-FFN is split and restored to its original shapes to obtain the discriminant, fused, hierarchical multi-scale representation. MISSFormer reported superior performance compared to TransUNet \cite{chen2021transunet} and Swin-Unet \cite{cao2021swin} on the Synapse \cite{landman2015miccai} and ACDC \cite{bernard2018deep} datasets.
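The spatial-reduction idea behind such efficient attention can be sketched as follows (a toy NumPy version under our own simplifications: mean pooling stands in for the learned strided projection, and the shapes are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def efficient_self_attention(x, r):
    """Key/value are spatially reduced by ratio r, shrinking the attention
    map from (N x N) to (N x N/r). Mean pooling is a stand-in for the
    learned reduction used in practice."""
    n, c = x.shape
    kv = x.reshape(n // r, r, c).mean(axis=1)       # (N/r, C)
    attn = softmax(x @ kv.T / np.sqrt(c), axis=-1)  # (N, N/r)
    return attn @ kv                                # (N, C)

x = np.random.rand(64, 8)            # 64 toy tokens, 8 channels
out = efficient_self_attention(x, 4)
print(out.shape)  # (64, 8)
```

The output keeps the full token count, so the module is a drop-in replacement for standard self-attention at roughly $1/r$ of its attention-map cost.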
Since most U-Net-shaped standalone Transformer models handle 2D inputs, Wu et al. \cite{wu2022d} proposed the Dilated Transformer (D-Former) for 3D medical image segmentation, with an eye to reducing computational complexity. D-Former \cite{wu2022d}, \Cref{fig:d-former}, is a U-shaped hierarchical architecture with specialized D-Former blocks consisting of Local Scope Modules (LSMs) and Global Scope Modules (GSMs) in alternating order to capture local and global contextual information. Each D-Former block can repeat the successive LSM and GSM counterparts; the original D-Former \cite{wu2022d} uses three successive LSM-GSM sequences in the third and sixth D-Former blocks. The LSM captures local self-attention by dividing a 3D feature map into non-overlapping 3D volumetric units consisting of 3D patches and calculating self-attention within these units. As a complement to this locally fine-grained attention, the GSM module attains interactions across different units in a dilated manner to enlarge the attention's receptive range. This scheme captures local and global interactions with fixed-size patches within sets of 3D volumetric units without additional computational overhead. D-Former also exploits positional information through Dynamic Positional Encoding (DPE), which can be a vital cue in dense prediction tasks such as segmentation. Due to its 3D functionality and use of 3D contextual information, D-Former is a SOTA method on various 3D datasets such as Synapse \cite{landman2015miccai} and ACDC \cite{bernard2018deep}, surpassing CNN-hybrid and standalone Transformer designs by large margins in Dice score.
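The complementary LSM/GSM grouping can be illustrated in one dimension (a toy sketch under our own simplifications; D-Former actually groups 3D patches into volumetric units):

```python
import numpy as np

def local_units(tokens, unit):
    """LSM-style grouping: attention runs inside contiguous units."""
    return tokens.reshape(-1, unit, tokens.shape[-1])

def global_units(tokens, unit):
    """GSM-style grouping: each unit gathers tokens at a fixed dilation
    interval, so attention inside a unit spans the whole sequence."""
    n, c = tokens.shape
    dilation = n // unit
    return tokens.reshape(unit, dilation, c).transpose(1, 0, 2)

t = np.arange(12, dtype=float).reshape(12, 1)  # 12 toy tokens, 1 channel
print(local_units(t, 4)[:, :, 0])   # adjacent units: 0-3, 4-7, 8-11
print(global_units(t, 4)[:, :, 0])  # dilated units: (0,3,6,9), (1,4,7,10), (2,5,8,11)
```

Both groupings keep units at the same fixed size, which is why alternating them widens the receptive range without increasing the per-unit attention cost.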
Numerous further studies exist; e.g., DS-TransUNet \cite{lin2021ds} utilizes the Swin Transformer in a dual-path U-shaped structure that models multiple patch sizes to lessen the deficiency of Transformers in capturing local context. We have reviewed multiple studies that utilize the Transformer in their U-Net pipelines in various ways. Given this evidence and the stunning growth of the Transformer field, the fruitful collaboration between Transformers and U-Net-like networks is likely to continue.
\subsection{Rich Representation Enhancements} \label{sec:rich-representation}
To obtain a rich representation, the common approaches applied to medical image segmentation are multi-scale and multi-modal methods, e.g. \cite{wang2021sar,zhao2020scau,zhang2018road}. The key objective is to enhance the performance of the trained models by utilizing all available information from multi-modal or multi-scale images while retaining the most desirable and relevant features.
The multi-scale method, also referred to as the pyramid method, originated from the Laplacian pyramid proposed by Burt et al. \cite{burt1987laplacian}. The approach converts the source input image, by resizing, into a series of images with decreasing spatial resolutions. This scheme allows the encoders of models to directly access images of different sizes and thus learn the respective features.
The study of the organs of interest requires their specific imaging modality to provide targeted information. However, each imaging technique has its limitations and can only reveal partial details about the organ, which may lead to inaccurate clinical analysis. Therefore, a fusion of images from various imaging modalities can be conducted to supplement each other's information by integrating complementary information retrieved from several input images.
The powerful structural design of the UNet network with the encoder and decoder allows the network to mine salient features at multiple input levels and enables effective feature fusion of different modalities.
Lachinov et al. evaluate the performance of the Cascaded U-Net \cite{lachinov2018glioma} with multiple encoders, each processing one modality, to demonstrate the improvement due to the extraction of multi-modal representations. The results indicate that the architecture taking multiple modalities into account outperforms the network relying on only a single modality.
The following categories illustrate the fusion approaches proposed to learn richer representations.
\subsubsection{Multi-scale Fusion}
Image pyramid inputs or side-output layers are aggregated into U-Net structures to fuse multi-scale information in the encoder or decoder stage.
Abraham et al. \cite{abraham2019novel} propose the Focal Tversky Attention U-Net with a generalized focal loss function that modulates the Tversky index \cite{hashemi2018asymmetric} to address the issue of data imbalance and improve the precision-recall balance in medical image segmentation. Furthermore, they incorporate multi-scale image inputs into the attention U-Net model with deeply supervised output layers \cite{oktay2018attention}. The novel architecture facilitates the extraction of richer feature representations and yields a 3\% Dice score improvement on a multi-class CT abdominal segmentation task. Compared to the commonly used Dice loss, the Tversky similarity index introduces a specific weight for each class, inversely proportional to the label occurrences. This index, shown below, alleviates the precision-recall imbalance caused by the equal weighting of false positives (FP) and false negatives (FN) in the Dice loss.
\begin{equation}
\begin{aligned}
TI_c &= \frac{\sum^N_{i=1}p_{ic}g_{ic}+\epsilon}{\sum^N_{i=1}p_{ic}g_{ic}+\alpha\mathbf{A}+\beta\mathbf{B}+\epsilon}
\\
\mathbf{A} &= \sum^N_{i=1}p_{i\bar{c}}g_{ic},\qquad
\mathbf{B} =\sum^N_{i=1}p_{ic}g_{i\bar{c}}
\end{aligned}
\end{equation}
where $\mathbf{A}$ and $\mathbf{B}$ refer to FN and FP, respectively. In the terms $\mathbf{A}$ and $\mathbf{B}$, $p_{ic}$ denotes the probability that pixel $i$ is classified to the lesion class $c$, and $p_{i\bar{c}}$ is the probability that pixel $i$ belongs to the non-lesion class $\bar{c}$. $g_{ic}$ and $g_{i\bar{c}}$ have the same form for the ground truth pixels. Adjusting the weights $\alpha$ and $\beta$ to fine-tune the emphasis can enhance recall in cases of significant class imbalance.
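The index above can be computed directly for one class (a minimal NumPy sketch; the weight values and toy maps are illustrative, and $\alpha = \beta = 0.5$ recovers the Dice coefficient):

```python
import numpy as np

def tversky_index(p, g, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky index for one class; p, g are flat probability / binary maps.

    alpha weights the FN term A, beta weights the FP term B, following the
    notation of the equation above (binary case: p_{i,c-bar} = 1 - p_{i,c}).
    """
    tp = np.sum(p * g)
    fn = np.sum((1 - p) * g)   # A: lesion pixels predicted as background
    fp = np.sum(p * (1 - g))   # B: background pixels predicted as lesion
    return (tp + eps) / (tp + alpha * fn + beta * fp + eps)

p = np.array([0.9, 0.8, 0.2, 0.1])  # toy predicted probabilities
g = np.array([1.0, 1.0, 0.0, 0.0])  # toy binary ground truth
print(round(tversky_index(p, g), 4))  # 0.85
```

A perfect prediction yields an index of 1, so the corresponding loss $1 - TI_c$ vanishes.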
The authors further develop a focal Tversky loss function (FTL) for better segmentation of small regions of interest (ROIs) by forcing the function to shift its focus to less accurate, misclassified predictions.
\begin{equation}
FTL_c = \sum(1-TI_c)^{1/\gamma}
\end{equation}
where $\gamma$ is set in the interval $[1,3]$ to make the loss function concentrate more on incorrectly classified, less accurate predictions. The reason is that when $\gamma > 1$, the FTL is almost unaffected if a pixel is misclassified with a high Tversky index, whereas if a pixel is incorrectly classified with a small Tversky index, the FTL is high and forces the model to focus on hard samples.
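The focusing behavior is straightforward to verify numerically (a short sketch; the $\gamma$ value and the per-class Tversky indices are illustrative):

```python
import numpy as np

def focal_tversky_loss(ti_per_class, gamma=4 / 3):
    """FTL = sum_c (1 - TI_c)^(1/gamma), with gamma in [1, 3]."""
    return np.sum((1 - np.asarray(ti_per_class)) ** (1 / gamma))

# A hard class (low Tversky index) dominates the loss,
# while a well-segmented class contributes comparatively little.
print(focal_tversky_loss([0.3]), focal_tversky_loss([0.9]))
```

Note that a completely missed class ($TI_c = 0$) contributes exactly 1 regardless of $\gamma$.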
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/FocalAttentionUNet.pdf}
\caption{Architecture of the Focal Tversky Attention U-Net with multi-scale input and deep supervised output layers. AGs indicate soft attention gates which combine spatial information from low-level feature maps with coarser contextual information from skip connections. Figure from \cite{abraham2019novelFig}.}
\label{fig:Focal_Tversky}
\end{figure}
As shown in \Cref{fig:Focal_Tversky}, Soft Attention Gates (AGs) prune features and propagate only the relevant spatial information to the decoding layers, enhancing the balance between precision and recall at a structural level. In addition, an input image pyramid injected into each of the max pooling layers of the encoder, together with a deep supervision module, enriches feature learning at different scales.
To better segment the Optic Disc (OD) and Optic Cup (OC) for accurate diagnosis of glaucoma from fundus images, Fu et al. \cite{fu2018joint} introduce the polar transformation into a U-shaped convolutional network with multi-scale input layers to build the Polar Transformation M-Net, aiming to extract a richer context representation of the original image in the polar coordinate system. Compared to prior work, which treats OD and OC individually while ignoring their interdependency and overlap, the proposed Polar Transformation M-Net considers OD and OC simultaneously and formulates the segmentation as a multi-label task. Moreover, a novel loss function built on the Dice coefficient is employed to address the data imbalance between OD and OC in fundus images. The model aggregates side-output layers serving as early classifiers that generate prediction maps at different scales.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/PolarMNet.pdf}
\caption{Illustration of the Polar Transformation M-Net method. The upper part of the figure shows the overall pipeline. The lower part demonstrates the architecture of M-Net. The method first localizes the disc center to extract the ROI, then applies the polar transformation to the detected patches of ROI. The transferred results are passed through the M-Net to obtain multi-label maps. The inverse polar transformation restored the final segmentation prediction to the Cartesian coordinate. The M-Net architecture is composed of an input image pyramid, U-shape convolutional network, and side-output layers. Figure from \cite{fu2018jointFig}.}
\label{fig:PolarMUnet}
\end{figure}
\Cref{fig:PolarMUnet} demonstrates the overall structure of M-Net. It consists of a multi-scale input layer, a U-shaped convolutional network, side-output layers, and a multi-label loss function. The initial multi-scale layer constructs an image input with descending resolutions for hierarchical representation learning. The processed image pyramid is passed through a U-shaped network similar to the original U-Net architecture \cite{ronneberger2015u}, whose encoder and decoder paths are connected by skip connections. The output of each stage within the decoder path is fed to a side-output layer that produces a local output map. The advantage of the side-output layer is that the issue of vanishing gradients is mitigated by backpropagating the side-output losses together with the final layer loss. Since the disc region overlays the cup pixels, the authors evaluate segmentation performance with the proposed multi-label loss function based on the Dice coefficient, which is defined as:
\begin{equation}
L_s = 1 - \sum^{K}_{k} \frac{2w_k\sum^N_{i}p_{(k,i)}g_{(k,i)}}{\sum^N_{i}p^2_{(k,i)}+\sum^N_{i}g^2_{(k,i)}}
\end{equation}
where $K$ is the total number of classes and $N$ denotes the number of pixels. $p_{(k,i)}$ and $g_{(k,i)}$ are the predicted probability and binary ground truth, respectively. The class weights $w_k$ control the contribution of each class to the final result. Another contribution of the network is the pixel-wise polar transformation operated on the fundus image plane. The polar transformation maps the radial relationship (the OC lying within the OD) to a layer-like spatial structure, which makes the features easier to identify. Additionally, the interpolation during the polar transformation, centered on the OD, expands the cup region, which helps relieve the heavily biased distribution of OC/background pixels. Experiments on the ORIGA dataset demonstrate an increase in segmentation performance.
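The multi-label loss above can be sketched directly in NumPy (toy maps and uniform class weights are our own illustrative choices; note that the OD map overlaps the OC map, so the labels are not mutually exclusive):

```python
import numpy as np

def multilabel_dice_loss(p, g, w):
    """L_s = 1 - sum_k 2*w_k*sum_i(p*g) / (sum_i p^2 + sum_i g^2).

    p, g: (K, N) predicted probabilities and binary ground truth,
    one (possibly overlapping) binary map per label (e.g. OD and OC).
    """
    num = 2 * np.sum(p * g, axis=1)
    den = np.sum(p ** 2, axis=1) + np.sum(g ** 2, axis=1)
    return 1 - np.sum(w * num / den)

g = np.array([[1, 1, 1, 0],   # OD mask (covers the OC pixels)
              [1, 0, 0, 0]],  # OC mask
             dtype=float)
p = g.copy()                  # perfect prediction
print(multilabel_dice_loss(p, g, w=np.array([0.5, 0.5])))  # 0.0
```

With weights summing to one, a perfect prediction gives zero loss, and any deviation on either label increases it.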
The original U-Net may not fully exploit the semantic strength of all stages, since it only generates output segmentation maps from the final layer of the decoder path.
Moreover, the outputs of layers at different stages are not connected to one another, which blocks feature sharing and leads to redundant parameters. To address this issue, Moradi et al. proposed MFP-UNet, which feeds the output of all the blocks at different stages to the last layer \cite{moradi2019mfpFig}.
Their architecture is composed of two pathways, the ``bottom-up pathway'' and the ``top-down pathway''.
The encoder of the U-Net with dilated convolution filters serves as the ``bottom-up pathway''. The dilated convolutional kernel increases the receptive field of the module according to the dilation factor. Besides, the expansive path of U-Net acts as the FPN top-down pathway. Each step of the top-down pathway provides prediction maps in which lower-resolution, semantically stronger features can be processed for transfer to higher resolutions. Compared to the decoder path of the original U-Net, additional convolution layers process the feature maps at different scales to one fixed resolution, which boosts the accuracy and improves the resolution of each stage. According to the experimental results, the novel model provides a robust and powerful architecture in terms of pyramidal feature representation, and remains robust given large and rich training sets.
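The receptive-field gain from dilation is easy to quantify: a $k \times k$ kernel with dilation $d$ spans $k + (k-1)(d-1)$ pixels along each axis while keeping the same number of weights (a small sketch with illustrative kernel sizes):

```python
def dilated_kernel_extent(k, d):
    """Effective spatial extent (per axis) of a k x k kernel with dilation d.

    A dilated kernel inserts d-1 gaps between taps, so a 3x3 kernel with
    d = 4 covers the same extent as a dense 9x9 kernel at 1/9 the weights.
    """
    return k + (k - 1) * (d - 1)

print([dilated_kernel_extent(3, d) for d in (1, 2, 4)])  # [3, 5, 9]
```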
\subsubsection{Multi-modality Fusion}
In this section, we summarize the U-Net variant models with multimodal fusion modules, where a single encoder of U-Net is extended to multiple encoders to receive medical images in different modalities. The branches of encoders are connected by their respective strategies of aggregation, thus sharing information in different modalities, extracting richer representations, and complementing each other.
The architecture that Dolz et al. \cite{dolz2018dense} propose in the Dense Multi-path U-Net enhances traditional U-Net models with respect to rich representation learning in two key aspects: modality fusion and an Inception module extension. Two typical strategies are commonly employed for multi-modal image segmentation tasks. Early fusion merges the low-level features of the inputs from multiple imaging modalities at a very early stage, whereas in the late fusion strategy the CNN outputs of the different modalities are fused at a later point. Nevertheless, these strategies cannot thoroughly model the highly complex relations of the image information across the different modality paths. To alleviate this limitation, the proposed network adopts the HyperDenseNet strategy, in which each stream receives the image data of one modality and the layers within the same path and across different paths are densely connected.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/MultipathDenseUnet.png}
\caption{The architecture of network for segmenting ischemic stroke lesions across various imaging modalities. In the encoding path, each imaging modality is input into a single stream. All layers are directly connected to one another in a single stream, which facilitates the information flow of the network. The dashed lines depict the dense connections between the layer outputs in the streams of different modalities. Figure from \cite{dolz2018denseFig}.}
\label{fig:DenseUnet}
\end{figure}
As shown in \Cref{fig:DenseUnet}, the encoding path contains N streams, each responsible for one imaging modality. The Dense Multi-path U-Net supports Hyper-dense connections both within a single path and between several paths. In a densely-connected network, the output of the $l^{th}$ layer is produced by the mapping $H_l$ of the concatenation of all the feature layers.
\begin{equation}
x_l = H_l([x_{l-1},x_{l-2},...,x_0])
\end{equation}
HyperDenseNet integrates the outputs among different paths based on the densely-connected network to obtain richer feature representation from combined modalities. In addition, the permuting and interleaving operations are applied to the concatenation to improve performance. Considering the case of two modalities with streams 1 and 2 denoted as $x_l^1$ and $x_l^2$ respectively, the output of the $l^{th}$ layer of HyperDenseNet can be expressed as follows:
\begin{equation}
x_l^s = H_l^s(\pi_l^s([x_{l-1}^1,x_{l-1}^2,x_{l-2}^1,x_{l-2}^2,...,x_0^1,x_0^2])),
\end{equation}
where $\pi^s_l$ represents the shuffling function acting on the feature maps.
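A minimal two-stream sketch of one such hyper-dense step, assuming PyTorch: each stream's mapping sees the concatenation of all previous feature maps from both streams. The convolutions stand in for $H_l^1$ and $H_l^2$, the shuffling $\pi_l^s$ is taken as the identity, and the module names are ours:

```python
import torch
import torch.nn as nn

class HyperDenseLayer(nn.Module):
    """One hyper-dense step for two modality streams: H_l^1 and H_l^2 are
    applied to the joint history of BOTH streams (sketch, not the authors'
    exact block)."""
    def __init__(self, in_total, growth):
        super().__init__()
        self.h1 = nn.Conv2d(in_total, growth, 3, padding=1)  # H_l^1
        self.h2 = nn.Conv2d(in_total, growth, 3, padding=1)  # H_l^2

    def forward(self, feats1, feats2):
        # pi_l^s would permute/interleave the channels; identity here
        cat = torch.cat(feats1 + feats2, dim=1)
        return self.h1(cat), self.h2(cat)
```

In a full network, the returned maps would be appended to the per-stream histories before the next layer, so the concatenated input grows with depth.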
Inspired by the Inception module in \cite{szegedy2016rethinking}, which employs convolutions with multiple kernel sizes on the same level to capture both local and global information, the authors further expand the Inception convolutional module with two additional convolutional blocks to facilitate the learning of multi-scale features. The two blocks exploit different dilation rates to enable receptive fields larger than those of the original Inception module. The $n \times n$ convolutions are replaced with consecutive $1 \times n$ and $n \times 1$ convolutions to improve efficiency.
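The factorized convolution with a dilation rate can be sketched as follows, assuming PyTorch; this is a generic stand-in for the idea, not the authors' exact block:

```python
import torch
import torch.nn as nn

def factorized_conv(c_in, c_out, n, dilation=1):
    """Replace an n x n convolution by consecutive 1 x n and n x 1
    convolutions at a chosen dilation rate (n assumed odd); padding is set
    so the spatial size is preserved."""
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, (1, n),
                  padding=(0, dilation * (n // 2)), dilation=(1, dilation)),
        nn.Conv2d(c_out, c_out, (n, 1),
                  padding=(dilation * (n // 2), 0), dilation=(dilation, 1)),
    )
```

The factorization reduces the parameter count from $n^2$ to $2n$ per channel pair while the dilation enlarges the effective receptive field.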
Lachinov et al. \cite{lachinov2018glioma} propose a deep cascaded variant of U-Net, Cascaded Unet, to process multi-modal input for better performance regarding brain tumor segmentation. Although the original U-Net can handle multi-modal MRI input, it fuses the feature information of all modalities and processes them in an identical manner. Based on the original U-Net, the proposed Cascaded Unet employs multiple encoders in parallel to better exploit feature representations for each specific modality.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/CascadedUNet.pdf}
\caption{Architecture of Cascaded Unet with multiple encoders. T1, T2, T1CE, and FLAIR indicate several MRI modalities. N denotes the current number of filters. K is a number of filters in the context feature map produced from lower-scale models. At each stage of the encoding path across modalities, the resulting features are calculated as a maximum of the feature maps processed by encoders. In the decoder of the model, the output at each scale is obtained from the skip connections from the encoder, the decoder output from the previous stage, and the context connections. Figure from \cite{lachinov2018gliomaFig}.}
\label{fig:CascadedUNet}
\end{figure}
The overall architecture of the Cascaded Unet model can be seen in \Cref{fig:CascadedUNet}.
The encoder path contains separate subpaths where every subpath utilizes a convolution group to process one input modality and generate feature maps. Then elementwise maximum operation acts on the multiple feature maps per stage to obtain the resulting features. The output of the feature map is afterward joined with the corresponding feature map of the larger-scale block, which boosts the information flow between the feature maps at different scales. The decoder of Cascaded UNet produces output at each level depending on the output at the same scale and the output of the decoder block at the previous stage. This strategy encourages the model to iteratively improve the results from earlier iterations.
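The per-stage fusion across modality subpaths can be sketched as a simple elementwise maximum over the per-modality feature maps; a PyTorch sketch:

```python
import torch

def fuse_modalities(features):
    """Elementwise maximum over per-modality encoder feature maps, as used at
    each encoder stage of Cascaded Unet (sketch); `features` is a list of
    same-shaped tensors, one per modality."""
    fused = features[0]
    for f in features[1:]:
        fused = torch.maximum(fused, f)
    return fused
```

The maximum keeps, at each spatial position and channel, the strongest response among the modality streams.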
\subsubsection{Leveraging Depth Information}
Some methods modify the U-Net into a 3D model and design modules to extract the information across channels in order to fully exploit the structural information of three-dimensional medical images. For improving automatic brain tumor prognosis, Islam et al. adapt the U-Net architecture to a 3D model and integrate a 3D attention strategy to perform image segmentation \cite{islam2019brain}. The introduced 3D attention model is integrated into the decoder part of U-Net and applies channel and spatial attention in parallel with the skip connections. The additional 3D attention layers encourage the module to encode richer spatial features from the original images.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/3dAttentionUNet.pdf}
\caption{Architecture of 3D Attention U-Net with a channel and a spatial attention parallel to skip connections. Figure from \cite{islam2019brainFig}.}
\label{fig:3d-attention-UNet}
\end{figure}
As shown in \Cref{fig:3d-attention-UNet}, the 3D attention U-Net is composed of a 3D encoder, the decoder, and skip connections combined with the channel and spatial attention mechanism.
In the path for 3D spatial attention, the authors perform a $1 \times 1 \times C$ convolution on the input feature maps to obtain a result of dimension $H \times W \times 1$. In parallel, the input feature maps are passed through an average pooling and then fed to a fully-connected layer to get the $1 \times 1 \times C$ sequential channel correlation. Since the two paths capture features in parallel, the inconsistency and sparsity caused by the two excitations can be alleviated by fusing the skip connections. Furthermore, the integration of skip connections enhances the performance of segmentation prediction, which can be inferred from the experiments on the BraTS 2019 dataset.
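A simplified 2D stand-in for this parallel attention scheme, assuming PyTorch (the 3D case is analogous; the module name and fusion are our own simplification of the authors' design):

```python
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    """Sketch: a spatial excitation (H x W x 1) and a channel excitation
    (1 x 1 x C) computed in parallel, fused with the skip-connection
    features."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = nn.Conv2d(channels, 1, kernel_size=1)  # H x W x 1 map
        self.channel = nn.Sequential(                          # 1 x 1 x C weights
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels, kernel_size=1),
            nn.Sigmoid())

    def forward(self, x, skip):
        s = torch.sigmoid(self.spatial(x))   # spatial excitation
        c = self.channel(x)                  # channel excitation
        return x * s + x * c + skip          # fuse both paths with the skip
```

Adding the unmodified skip features compensates for the sparsity the two excitations may introduce, mirroring the fusion described above.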
\subsection{Probabilistic Design} \label{sec:probablistic}
Another type of U-Net extension combines the classic U-Net with different types of probabilistic extensions. Depending on the task that should be achieved or the process that should be enhanced, different types of extensions from bayesian skip connections, over variational auto-encoders to Markov random fields are used, which are introduced in the following.
\subsubsection{Variational Auto Encoder (VAE) Regularization}
In medical image segmentation tasks, different graders often produce different segmentations. Most of these different segmentations are plausible as many medical images contain ambiguities that cannot be resolved considering only the image at hand. Taking this into consideration, Kohl et al. learn a distribution over segmentations from an ambiguous input to produce an unlimited number of possible segmentations instead of just providing the most likely hypothesis \cite{kohl2018probabilistic}. In their approach, they combine a U-Net, for producing reliable segmentations, with a conditional variational autoencoder (CVAE), which can model complex distributions and encodes the segmentation variants in a low-dimensional latent space. The best results were obtained for a latent space of dimension 6.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/probabilistic.pdf}
\caption{\textbf{(a)} Illustration of the sampling process of M segmentations with probabilistic U-Net. \textbf{(b)} Illustration of the training process of the probabilistic U-Net for one training sample. The green arrows represent loss functions. Figure from \cite{kohl2018probabilistic-arx}.}
\label{fig:probabilisticUnet}
\end{figure}
\Cref{fig:probabilisticUnet} (a) shows the sampling process given a trained prior net and U-Net as well as a low-dimensional latent space.
Each position in the latent space encodes a different segmentation variant. Passing the input image through the prior net, it will determine the probability of the encoded variants for the given input image.
For each possible segmentation to be predicted, the network is applied to the same input image. A random sample from the prior probability distribution is drawn and broadcast to an N-channel feature map with the same shape as the segmentation map. It is then concatenated with the final feature maps of the U-Net and processed with successive $1\times1$ convolutions to produce the segmentation map corresponding to the drawn point from the latent space. Only the combination needs to be recalculated in each iteration, as the last feature maps of the U-Net and the output of the prior net can be reused for each hypothesis.\\
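The sampling step can be sketched as follows, assuming PyTorch; \texttt{head} stands in for the successive $1\times1$ convolutions, and the channel and latent dimensions are hypothetical:

```python
import torch
import torch.nn as nn

def combine_sample(unet_feats, z, head):
    """Sketch of the probabilistic U-Net sampling step: broadcast a latent
    sample z (B x D) to the spatial grid, concatenate it with the last U-Net
    feature maps, and map to a segmentation with 1x1 convolutions."""
    b, _, h, w = unet_feats.shape
    z_map = z[:, :, None, None].expand(b, z.shape[1], h, w)  # broadcast sample
    return head(torch.cat([unet_feats, z_map], dim=1))

# hypothetical shapes: 32 U-Net channels, 6-dim latent space, 2 classes
head = nn.Conv2d(32 + 6, 2, kernel_size=1)
```

Drawing a new \texttt{z} and re-running only \texttt{combine\_sample} yields another segmentation hypothesis without recomputing the U-Net or prior net.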
\Cref{fig:probabilisticUnet} (b) shows the training procedure of the probabilistic U-Net.
Apart from the standard training procedures for conditional VAEs and deterministic segmentation models, it has to be learned how to embed the segmentation variants in the latent space in a useful way. This is solved by the posterior net. It learns to recognize a segmentation variant and map it to a certain position in the latent space. A sample from its output posterior distribution combined with the activation map of the u-net must result in a segmentation identical to the ground truth segmentation. From this, it follows that the training data set must include a set of different but plausible segmentations for each input image.
Myronenko \cite{myronenko20183d} adds a VAE branch to a 3D U-Net architecture to address the problem of limited training data for brain tumor segmentation. In this architecture, the U-Net is used for the segmentation of the tumor, and the VAE is used for the reconstruction of the image, sharing the same encoder. For the VAE, the output of the encoder is reduced to a lower-dimensional space and a sample is drawn from the Gaussian distribution with the given mean and standard deviation (std). The sample is then reconstructed to the input image using an architecture similar to that of the U-Net decoder but without any skip connections. The total loss to be minimized during training is made up of three terms:
\begin{equation}
\mathbf{L} = \mathbf{L}_\text{dice} + 0.1 \cdot \mathbf{L}_\text{L2} + 0.1 \cdot \mathbf{L}_\text{KL}
\end{equation}
$\mathbf{L}_\text{dice}$ is a soft dice loss between the predicted segmentation of the U-Net and the GT segmentation.\\
$\mathbf{L}_\text{L2}$ and $\mathbf{L}_\text{KL}$ are the losses for the VAE, where $\mathbf{L}_\text{L2}$ describes how well the reconstructed image matches the input image and $\mathbf{L}_\text{KL}$ is the Kullback-Leibler (KL) divergence between the estimated normal distribution and a prior distribution $\mathcal{N}(0,1)$. Using the VAE branch helps to better cluster the features at the end of the encoder. This helps to guide and regularize the shared encoder for small training set sizes. Adding the additional VAE branch therefore improved the performance and led to stable results for different random initializations of the network.
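The composite loss above can be sketched in PyTorch as follows; the soft dice and KL formulations used here are common choices, not necessarily the authors' exact ones:

```python
import torch
import torch.nn.functional as F

def vae_regularized_loss(pred, target, recon, image, mu, logvar):
    """Three-term training loss: soft dice on the segmentation, L2 on the
    reconstruction, and KL divergence of N(mu, sigma) from N(0, 1), with the
    0.1 weights from the paper."""
    inter = (pred * target).sum()
    dice = 1 - 2 * inter / (pred.sum() + target.sum() + 1e-8)  # soft dice loss
    l2 = F.mse_loss(recon, image)                              # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return dice + 0.1 * l2 + 0.1 * kl
```

With a perfect segmentation and reconstruction and a latent distribution matching the prior, all three terms vanish.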
\subsubsection{Graphical Model Algorithm}
While the classic U-Net performs well on data from the same distribution as the training data, its accuracy decreases on out-of-distribution data.
To address this problem, Brudfors et al. \cite{brudfors2021mrf} combine a U-Net with Markov random fields (MRFs) to form the MRF-UNet. The low-parameter, first-order MRFs are better at generalization because they encode simpler distributions, an important quality for fitting out-of-distribution data. The very accurate U-Net predictions make up for the fact that the MRFs are less flexible. The architecture of the proposed model can be seen in \Cref{fig:mrfUnet}. As the combination of the U-Net and MRF distributions is intractable when calculated as the product of the two, an iterative mean-field approach is used to estimate the closest factorized distribution under the Kullback-Leibler divergence. A detailed mathematical derivation of the process can be found in the work by Brudfors et al. \cite{brudfors2021mrf}. Experiments showed that the combination of MRF and U-Net improved performance on in- and out-of-distribution data.
The lightweight MRF component, which does not add any additional parameters to the architecture, serves as a simple prior and therefore learns abstract label-specific features.
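As a toy illustration of such a mean-field scheme, the following NumPy sketch iterates between per-pixel CNN logits and messages from chain neighbors under a label-compatibility matrix; it only conveys the flavor of the iteration, not the exact derivation in \cite{brudfors2021mrf}:

```python
import numpy as np

def mean_field(unary, compat, n_iter=10):
    """Minimal mean-field sketch: `unary` (N x K) are per-pixel CNN logits,
    `compat` (K x K) is a first-order label-compatibility matrix, neighbors
    form a 1D chain. Returns a factorized distribution q (N x K)."""
    def softmax(x):
        e = np.exp(x - x.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)
    q = softmax(unary)
    for _ in range(n_iter):
        # messages from the left and right neighbors under the MRF prior
        pad = np.zeros_like(q[:1])
        msg = (np.vstack([pad, q[:-1]]) + np.vstack([q[1:], pad])) @ compat
        q = softmax(unary + msg)
    return q
```

Each update refines the factorized posterior toward consistency between the network's unary beliefs and the simple neighborhood prior.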
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/unetextensions/MRFunet.pdf}
\caption{Schematic illustration of the MRF-UNet product. Figure from \cite{brudfors2021mrf-arx}.}
\label{fig:mrfUnet}
\end{figure}
Klug et al. \cite{klug2020bayesian} use a Bayesian skip connection in an attention-gated 3D U-Net to allow a prior to bypass most of the network and be reintegrated at the final layer in their work to segment stroke lesions in perfusion CT images. The skip connection provides the prior to the final network layer and should reduce false-positive rates for small and patchy segmentations of varying shapes. As a prior, the segmentation of the ischemic core obtained by a standard thresholding method is used. Klug et al. \cite{klug2020bayesian} evaluated two ways to combine the prior and the output of the U-Net to calculate the final output segmentation: Addition and convolution of the two maps. Superior results were achieved by using convolution for combination in all experiments.
The input to the U-Net is the concatenation of the 3D perfusion CT image and the prior. When comparing a 3D attention gate U-Net to the same architecture with the Bayesian skip connection additionally reintegrating the prior at the end of the network, the latter achieves better performance in terms of dice score with faster convergence. It is worth mentioning that we found numerous further papers in which a probabilistic design is integrated into the U-Net, in applications such as brain tumor \cite{savadikar2020brain} and skin lesion segmentation \cite{chen2022medical}.
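The better-performing convolutional combination of the prior and the network output can be sketched as follows, assuming PyTorch; the kernel size and the single-channel prior are our own assumptions, not the authors' exact layer:

```python
import torch
import torch.nn as nn

class BayesianSkipHead(nn.Module):
    """Sketch of reintegrating a thresholding-based prior at the final layer:
    the prior bypasses the network and is combined with the U-Net output by a
    convolution (the variant that outperformed simple addition)."""
    def __init__(self, classes):
        super().__init__()
        # U-Net output (classes channels) + single-channel prior map
        self.combine = nn.Conv3d(classes + 1, classes,
                                 kernel_size=3, padding=1)

    def forward(self, unet_out, prior):
        return self.combine(torch.cat([unet_out, prior], dim=1))
```

Because the combination is learned, the network can decide per voxel how much to trust the thresholding prior versus its own prediction.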
\begin{figure*}[!thb]
\centering
\includegraphics[width=\textwidth]{Figures/unetextensions/timeline.pdf}
\caption{The timeline of prominent U-Net-based methods proposed in medical semantic segmentation literature, from 2015 to 2022. The superscripts in ascending order denote the 1. \cite{ronneberger2015u}, 2. \cite{cciccek20163d}, 3. \cite{milletari2016v}, 4. \cite{drozdzal2016importance}, 5. \cite{dubost2017gp}, 6. \cite{alakwaa2017lung}, 7. \cite{oktay2018attention}, 8. \cite{zhou2018unet++}, 9. \cite{chen2018s3d}, 10. \cite{fu2018joint}, 11. \cite{dolz2018dense}, 12. \cite{lachinov2018glioma}, 13. \cite{myronenko20183d}, 14. \cite{kohl2018probabilistic}, 15. \cite{zeng2019ric}, 16. \cite{azad2019bi}, 17. \cite{guan2019fully}, 18. \cite{jin2019dunet}, 19. \cite{hai2019fully}, 20. \cite{abraham2019novel}, 21. \cite{moradi2019mfp}, 22. \cite{islam2019brain}, 23. \cite{huang2020unet}, 24. \cite{wang2020noise}, 25. \cite{li2020attention}, 26. \cite{jin2020ra}, 27. \cite{ibtehaz2020multiresunet}, 28. \cite{fan2020ma}, 29. \cite{klug2020bayesian}, 30. \cite{lachinov2021projective}, 31. \cite{wu2021jcs}, 32. \cite{guo2021sa}, 33. \cite{chen2021transunet}, 34. \cite{valanarasu2021medical}, 35. \cite{cao2021swin}, 36. \cite{wang2021transbts}, 37. \cite{brudfors2021mrf}, 38. \cite{xiang2020bio}, 39. \cite{hatamizadeh2022unetr}, 40. \cite{reza2022contextual}, 41. \cite{wang2022uctransnet}, 42. \cite{huang2022scaleformer}, 43. \cite{huang2021missformer}, 44. \cite{wu2022d}, 45. \cite{wang2022mixed}, respectively.}
\label{fig:timeline}
\end{figure*}
\begin{table*}[!thb]
\centering
\caption{The review of U-Net-like models for medical image segmentation based on the proposed taxonomy, \Cref{fig:unet-taxonomy}. \{\textit{SCE}, \textit{BDE}, \textit{BE}, \textit{T}, \textit{RRE} and \textit{PD}\} stands for \textit{Skip Connection Enhancement}, \textit{Backbone Design Enhancement}, \textit{Bottleneck Enhancement}, \textit{Transformer}, \textit{Rich Representation Enhancement} and \textit{Probabilistic Design}, respectively.} \label{tab:paperhighlights}
\resizebox{\textwidth}{!}{
\begin{tblr}{
colspec={Q[l]Q[l,3cm]Q[l,7cm]Q[l,5.5cm]},
rowspec={*{15}{Q[t]}},
row{1}={font=\bfseries},
cell{2-40}{1}={font=\itshape},
cell{2-40}{1-4}={font=\tiny}
}
\toprule
Strategy & Networks & Core Ideas & Practical Use Cases \\ \toprule
SCE & {Attention U-Net \cite{oktay2018attention} \\ UNet++ \cite{zhou2018unet++,zhou2019unet++} \\ RA-Unet \cite{jin2020ra} \\BCDU-Net \cite{azad2019bi} \\ U-Net3+ \cite{huang2020unet} \\ BiO-Net \cite{xiang2020bio}} & {Skip connections are defined as connections in a neural network that do not connect two following layers but instead skip over at least one layer. This strategy initially aimed to encourage feature reusability and compensate for gradient vanishing in a deeper network. This modification introduces the feasibility of transferring high spatial localization features from the encoder to the decoder for better segmentation maps. In addition, some use cases used skip connections as hierarchical multi-scale fusion paths for feature enrichment in diverse U-Net stages. Furthermore, skip connection could efficiently decrease the semantic feature gaps between different layers and scales.} & {$\bullet$ Exploit multiscale features \cite{zhou2019unet++} \\$\bullet$ Robust boundary representation \cite{huang2020unet} \\$\bullet$ Bi-directional feature representation \cite{xiang2020bio} \\$\bullet$ Feature recalibration \cite{oktay2018attention} \\$\bullet$ Suppress irrelevant regions besides feature reusability \cite{jin2020ra} \\$\bullet$ Enrich semantic representation \cite{azad2019bi}} \\ \midrule
BDE & {Residual U-Net \cite{he2016deep,drozdzal2016importance,milletari2016v} \\ Multi-Res U-Net \cite{ibtehaz2020multiresunet,szegedy2016rethinking} \\ Dense U-Net \cite{huang2017densely,guan2019fully} \\ H-DenseUNet \cite{li2018h} \\DUNet \cite{jin2019dunet,dai2017deformable} \\ S3D U-Net \cite{chen2018s3d}} & {The backbone defines how the layers in the encoder are arranged and its counterpart is therefore used to describe the decoder architecture. Ideally, the strong backbone design (e.g., inception model) with pre-trained weight can further improve the model generalization capability.} & {$\bullet$ Converges to lower loss \cite{li2018h} \\$\bullet$ Addressing gradient vanishing \cite{he2016deep} \\$\bullet$ Faster convergence rate \cite{drozdzal2016importance} \\$\bullet$ Feature reusability \cite{milletari2016v} \\$\bullet$ Multi-scale encoding \cite{ibtehaz2020multiresunet} \\$\bullet$ Better boundary representation \cite{szegedy2016rethinking} \\$\bullet$ Fine-grained feature set \cite{huang2017densely} \\$\bullet$ Cross-modality representation \cite{li2018h} \\$\bullet$ Reducing computation burden of multi-scale representation \cite{jin2019dunet} \\$\bullet$ Efficient multi-scale computation \cite{chen2018s3d}}\\ \midrule
BE & {ASPP \cite{hai2019fully} \\ MA-Net \cite{fan2020ma} \\ COPLE-Net \cite{wang2020noise} \\ SA-UNet \cite{guo2021sa} \\ FRCU-Net \cite{azad2021deep} \\ JCS \cite{wu2021jcs} \\ MS-Net \cite{zhang2022multi}} & {The network bottleneck contains the compressed representation of the input data and provides necessary information (e.g., semantic, texture, shape features) to reconstruct the segmentation map. Any improvement in the bottleneck design can further improve the prediction result.} & {$\bullet$ Frequency recalibration \cite{azad2021deep} \\$\bullet$ Spatial attention \cite{guo2021sa} \\$\bullet$ Feature pyramid \cite{hai2019fully,wang2020noise} \\$\bullet$ Imposing attention mechanism \cite{wu2021jcs}} \\ \midrule
T & {TransUNet \cite{chen2021transunet} \\ TransBTS \cite{wang2021transbts} \\ Swin-Unet \cite{cao2021swin} \\ UNETR \cite{hatamizadeh2022unetr} \\ TMU \cite{reza2022contextual} \\ UCTransNet \cite{wang2022uctransnet}} & {Transformers compensate for the CNN's limited receptive field by extracting long-range contextual information, with the intuition of looking at the whole image at once. Due to their 1D sequence mapping functionality, Transformers can be used as a plug-and-play module in different parts of U-Net-like structures. However, due to their quadratic computational complexity, utilizing efficient Transformer designs, even from the NLP field, within vision architectures is beneficial. Since Transformers calculate the affinities between different parts of the input data adaptively, they are a wise choice for multi-scale feature amalgamation.} & {$\bullet$ Improving CNN bottleneck's feature discriminancy \cite{chen2021transunet} \\$\bullet$ Capture inter-slice affinities from 3D data \cite{wang2021transbts} \\$\bullet$ Hierarchical Efficient Transformer-based design \cite{cao2021swin} \\$\bullet$ Modeling 3D volumetric data with global multi-scale information \cite{hatamizadeh2022unetr} \\$\bullet$ Feature re-calibration / degrading the erroneous boundary maps \cite{reza2022contextual} \\$\bullet$ Decreasing the gap between multi-scale semantic features \cite{wang2022uctransnet}} \\ \midrule
RRE & {Focal Tversky Attention U-Net \cite{abraham2019novel} \\ PT M-Net \cite{fu2018joint}\\Dense Multi-path U-Net \cite{dolz2018dense} \\MFP-Unet \cite{moradi2019mfp} \\ Cascaded Unet \cite{lachinov2018glioma}} &
{The key objective is to enhance the performance of the trained models by utilizing all available information from multi-modal or multi-scale images while retaining the most desirable and relevant features. Some methods also operate directly on volumetric images to take full advantage of depth information.} & {$\bullet$ Improved precision and recall balance \cite{abraham2019novel} \\$\bullet$ Hierarchical representation learning \cite{fu2018joint} \\$\bullet$ Richer feature representation from combined modalities \cite{dolz2018dense} \\$\bullet$ Robust architecture regarding the capabilities of feature representation in a pyramid \cite{moradi2019mfp} \\$\bullet$ Boosted information flow between the different scales \cite{lachinov2018glioma}} \\
\midrule
PD & {Probabilistic U-Net \cite{kohl2018probabilistic} \\ MRF U-Net \cite{brudfors2021mrf} \\ VAE Regularization \cite{myronenko20183d} \\ Bayesian Skip \cite{klug2020bayesian}} & {In medical image segmentation tasks, different graders often produce different segmentations. Most of these different segmentations are plausible as many medical images contain ambiguities that cannot be resolved considering only the image at hand. Probabilistic models aim to model the nature of uncertainty in medical data.} & {$\bullet$ Modeling annotation uncertainty \cite{kohl2018probabilistic} \\$\bullet$ Addressing out-of-distribution \cite{brudfors2021mrf} \\$\bullet$ Imposing regularization to address data limitation \cite{myronenko20183d} \\$\bullet$ Encouraging feature reusability and reducing the FP rate \cite{klug2020bayesian}} \\ \bottomrule
\end{tblr}
}
\end{table*}
\subsection{Comparative Overview}
In this section, we briefly review the recent works regarding the U-Net variants presented in \Cref{sec:skip-connection} to \ref{sec:probablistic} for medical image segmentation in \Cref{tab:paperhighlights}. It lists the related networks for each direction along with information about the core ideas and the practical use cases. As detailed in \Cref{tab:paperhighlights}, modifying skip connections is one of the directions for extending the U-Net structure. Some works redesign skip connections by increasing the number of forward skip connections or aggregating modules within the skip connections for processing feature maps. Some methods also apply bi-directional LSTMs to combine the feature maps from encoder and decoder. The novel skip connections in these methods enable the models to be more flexible and therefore explore local and semantic features more efficiently and from different scales. However, this also means a more complex network architecture, which leads to a larger number of parameters and more expensive computation.
Instead of altering skip connections, some proposed methods use other types of backbones apart from the original U-Net by Ronneberger et al. \cite{ronneberger2015u}. Residual mapping structures, inception modules, and dense connections are incorporated into the architectures, respectively. Such designs alleviate the vanishing gradient and degradation problems, facilitating faster convergence of the training process. Several approaches focus on adapting convolution operations, such as using deformable convolutional kernels, to make the network more general and robust. Nevertheless, a higher computational cost is incurred due to the additional convolution layers.
In the third strategy, some approaches utilize other mechanisms in the bottleneck aiming at enhancing feature extraction and compressing more useful spatial information. Attention modules are employed to model long-range spatial dependencies between pixels in the bottleneck feature maps. Atrous spatial pyramid pooling (ASPP) with different sampling rates can resample the compressed feature maps in the bottleneck separately, which helps the bottleneck output capture receptive fields of different sizes.
The rise of the Transformer, which is prevalent in the field of NLP, has inspired developments in computer vision. The proposed ViT facilitates the new trend of combining U-Net and Transformer. The tokenized input images are passed through the Transformer to extract global information that supplements U-Net. Despite the state-of-the-art (SOTA) performance that Transformer-based U-Net-like models have achieved, the large number of model parameters sometimes causes slow convergence. Some models are also highly dependent on pretrained weights.
The last direction combines the original U-Net with various types of probabilistic extension modules, resulting in new variants. The Probabilistic U-Net integrates the U-Net structure with a conditional variational autoencoder (CVAE) to generate an unlimited number of plausible prediction results when the inputs are ambiguous. Furthermore, combining the network with Markov random fields (MRFs) helps prevent the model from overfitting to precise training segmentations, which significantly improves the performance on out-of-distribution data. The methods in this direction demonstrate more or less robustness to defective input datasets, such as those of limited size or containing ambiguous images.
\Cref{fig:timeline} demonstrates the timeline of typical U-Net-based methods proposed in medical semantic segmentation literature from 2015 to 2022. As shown in the timeline, the U-Net structure has continued to be appealing in recent years regarding the task of medical image segmentation. The direction for the extension of U-Net is prominently influenced by Transformer after 2021 as a result of the emergence of ViT \cite{dosovitskiy2020image}.
\section{Quantitative Comparison} \label{sec:experiments}
In this section, we evaluate the performance of several of the
previously discussed U-Net variants on favored medical segmentation benchmarks. It is worth noting that although most models reported their performance on standard datasets and used standard metrics, some failed to do so, making across-the-board comparisons difficult. Furthermore, only a small percentage of publications provide additional information, such as hyper-parameters, execution time, and memory footprint, in a reproducible way, which is essential for industrial and real-world applications. To bridge this gap and build a solid foundation for comparing the architectures, we conducted several studies on them under fair conditions to provide balanced insights into their performance.
\subsection{Implementation details}\label{sec:implementation-detail}
In this section, we discuss our experiments and the criteria for the selected networks. Before diving in deep, we should mention that all our implementation code is written in Python with the PyTorch library \cite{NEURIPS2019_9015}. We used a single Nvidia RTX 3090 GPU for training the networks. As a baseline network, we select the 2D U-Net \cite{ronneberger2015u} to build our comparison upon the naive U-Net and explore the effect of modifications to it. Next is the Attention U-Net \cite{oktay2018attention}, the pioneering method integrating the attention mechanism with U-Net to enhance feature reusability and add a feature selection measure that makes the features more discriminant. U-Net++ \cite{zhou2019unet++} brings a highly dense skip connection schema to U-Net to decrease the semantic gap between the down-sampling and up-sampling paths. Some methods tried to overcome the local receptive fields of convolution operations by attaching dense backbones to U-Net; however, this procedure not only still lacks real global context but also adds more parameters to the model, which is not desirable. To this end, Residual U-Net \cite{zhang2018road} brings a neighboring affinity recipe to mitigate locality problems via RCNN blocks. Although this model was originally applied to non-medical data, it demonstrates robust results on medical images too. MultiResUNet \cite{ibtehaz2020multiresunet}, which models multi-scale information in the backbone, is another design whose contribution over the base U-Net we examine, in particular regarding how well it compensates for the locality of CNN-based methods. Lastly, we selected three Transformer-based U-shaped networks, TransUNet \cite{chen2021transunet}, UCTransNet \cite{wang2021uctransnet} and MISSFormer \cite{huang2021missformer}, to demonstrate the evolution of Transformers in the medical segmentation field by effectively capturing global contextual information.
We should also note that no pretrained weights were utilized during the training process for any of the networks. Even though pretrained ImageNet weights might bring some advantages, we dropped them to provide a fair evaluation criterion.
\subsection{Datasets}\label{sec:datasets}
To demonstrate a fair and productive comparison of the networks in \cref{sec:implementation-detail}, we selected several datasets from diverse modalities for the semantic segmentation task. In this respect, we consider \textbf{S}egmentation of \textbf{M}ultiple \textbf{M}yeloma \textbf{P}lasma \textbf{C}ells, \textit{SegPC} \cite{gupta2018pcseg,gupta2020gcti,gehlot2020ednfc}, which is a collection of 2D microscopic images for cancer screening to aid hematologists in better diagnosis. \textit{ISIC 2018} \cite{codella2019skin} is another 2D dermoscopic dataset of skin lesions for assisting dermatologists with early-stage cancer diagnosis. Next is the multi-organ 3D CT \textit{Synapse} \cite{landman2015miccai} dataset covering the annotations of 13 organs, which is published through the Synapse website \cite{synapse2015ct}. Here we want to clarify that the Synapse dataset is also known as \textbf{B}eyond the \textbf{C}ranial \textbf{V}ault (\textit{BCV} or \textit{BTCV}), so these two names are used interchangeably, but there is a slight difference between the Synapse and the BCV datasets used in studies. Most studies use the Synapse dataset name in their works \cite{chen2021transunet} with the eight-organ class annotation; the rest report on a number of classes varying from eleven \cite{xie2021cotr} to twelve \cite{hatamizadeh2022unetr}.
\subsubsection{SegPC}
For decades, automatic cell segmentation in microscopy images has been studied, and various methods have been developed \cite{bozorgpour2021multi,gupta2022segpc}. Multiple Myeloma (MM) is a type of blood cancer, specifically a plasma cell cancer. The first stage of building an automated diagnostic tool for MM is the robust segmentation of cells. Segmenting plasma cells in these microscopic stained images is quite complex due to the diversity of situations. For instance, cells may appear in clusters or isolated, with varying nuclei and cytoplasm sizes, some of which touch each other with overlapping boundaries. SegPC includes images captured from the bone marrow aspirate slides of MM patients. The current dataset has 775 images of MM plasma cells. In this study, we sorted the training set according to the single entity's name and chose 70\% of the set as the train set, 10\% as the validation set, and 20\% as the test set. We also resized all images to a fixed size of $224 \times 224$. The SegPC slides were stained using Jenner-Giemsa stain and contain annotations for the nucleus and cytoplasm of plasma cells. It should be noted that for this dataset, we only apply the networks to the cytoplasm segmentation task after cropping the nucleus samples in each image.
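The 70/10/20 split protocol over the name-sorted set can be sketched in plain Python (file handling omitted; \texttt{paths} is a hypothetical list of image names):

```python
def split_dataset(paths):
    """Deterministic 70/10/20 train/val/test split over a name-sorted list
    (a sketch of the protocol described above)."""
    paths = sorted(paths)
    n = len(paths)
    n_train, n_val = int(0.7 * n), int(0.1 * n)
    return (paths[:n_train],
            paths[n_train:n_train + n_val],
            paths[n_train + n_val:])
```

Sorting before splitting keeps the partition reproducible across runs, unlike a random split without a fixed seed.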
\subsubsection{ISIC 2018}
Human skin consists of three tissue layers, i.e., dermis, epidermis, and hypodermis. The epidermis is a susceptible tissue which, under severe solar radiation, can trigger the embedded melanocytes to produce melanin at a significant level \cite{azad2020attention}. Fatal skin cancer resulting from melanocyte growth is known as melanoma. In 2022, the American Cancer Society reported approximately 99,780 melanoma skin cancer cases with 7,650 deaths, i.e., 7.66\% of the cases \cite{siegel2022cancer}. Early disease recognition plays a crucial role in medical diagnosis; it has been reported that detecting melanoma in its early phases can increase the relative survival rate to 92\%. However, robust skin lesion segmentation is a challenging task due to diverse lesion sizes, illumination changes, differences in texture, position, and color, and the presence of unwanted objects like air bubbles, hair, or ruler markers. The ISIC 2018 \cite{codella2019skin} dataset was published by the \textit{International Skin Imaging Collaboration (ISIC)} as a large-scale dataset of dermoscopy images. It includes 2,594 RGB images of $700 \times 900$ pixels. We first resized all images to $224 \times 224$ pixels and then, like \cite{alom2019recurrent}, used 1,815 images for training, 259 for validation, and 520 for testing. The dataset provides two-class annotations: a heat map of cancerous versus non-cancerous lesion pixels.
\subsubsection{Synapse}
The Synapse dataset is a multi-organ segmentation dataset \cite{landman2015miccai} presented with 30 abdominal CT scans in conjunction with the \textit{MICCAI 2015 Conference}, comprising 3,779 axial contrast-enhanced abdominal clinical CT images. Each CT volume consists of $85 \sim 198$ slices of a consistent $512 \times 512$ size across all samples. The spatial resolution of each voxel is $\left[0.54 \sim 0.54\right] \times \left[0.98 \sim 0.98\right] \times \left[2.5 \sim 5.0\right]$ mm$^{3}$ along the three axes. In this report, we used the same data preparation preferences as \cite{chen2021transunet,azad2022transdeeplab,heidari2022hiformer}: we randomly allocated 18 training cases and 12 validation cases, and during the testing phase we report the final scores on the validation set. In addition, all slices are resized to $224 \times 224$, following the same setting as \cite{chen2021transunet,cao2021swin}. We used eight class annotations in our experiments: Aorta, Gallbladder, Kidney (L/R), Liver, Pancreas, Spleen, and Stomach. Our training process uses 2D data (similar to \cite{chen2021transunet,cao2021swin}), and we report the test results on the 3D volumes.
\subsection{Loss Functions}\label{sec:loss-funcs}
Selecting a proper loss/objective function when designing a complex segmentation network is extremely important. There is a variety of loss functions to choose from, but in this study we select two well-known and traditional loss functions of the medical image segmentation domain: the \textbf{C}ross \textbf{E}ntropy (\textit{CE}) and \textbf{D}ice \textbf{S}ørensen \textbf{C}oefficient (\textit{DSC}) loss functions.
\subsubsection{CE Loss}
CE \cite{zhang2018generalized} derives from the Kullback-Leibler (KL) divergence, a measure of dissimilarity between two distributions. In the segmentation task, CE is formulated as:
\begin{align}
{L_{CE}} = - \frac{1}{N}\sum\limits_{i = 1}^C {\sum\limits_{j = 1}^N {p_j^i\log q_j^i}},
\end{align}
where $p_j^i$ denotes the ground truth binary indicator of class label $i$ of voxel $j$, and $q_j^i$ is the corresponding predicted segmentation probability.
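For illustration only (the training implementation is framework-specific), the CE loss above can be sketched in NumPy; the array shapes, variable names, and the `eps` guard below are our assumptions:

```python
import numpy as np

def cross_entropy_loss(p, q, eps=1e-12):
    """Multi-class cross-entropy.

    p: one-hot ground truth indicators, shape (N, C)
    q: predicted class probabilities, shape (N, C)
    """
    q = np.clip(q, eps, 1.0)  # guard against log(0)
    # inner sum over classes, outer mean over the N pixels/voxels
    return -np.mean(np.sum(p * np.log(q), axis=1))

# toy example: two pixels, two classes
p = np.array([[1.0, 0.0], [0.0, 1.0]])
q = np.array([[0.9, 0.1], [0.2, 0.8]])
print(cross_entropy_loss(p, q))
```

With one-hot ground truth, each pixel contributes $-\log q$ of its true class, so the loss rewards confident, correct probabilities.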
\subsubsection{Dice Loss}
The \textbf{S}ørensen \textbf{D}ice \textbf{C}oefficient (\textit{DSC}) is typically used to evaluate the similarity between two samples and will be discussed in \Cref{sec:eval-metrics}. Based on this coefficient, \Cref{eq:dicecoeff}, the Dice loss \cite{sudre2017generalised} is formulated as:
\begin{align}
L_{Dice}(y,\hat{y})=1-\frac{2y\hat{y}+\alpha}{y+\hat{y}+\alpha},
\end{align}
where $y$ and $\hat{y}$ are the actual and model-predicted values, respectively, and the smoothing term $\alpha$ prevents the function from being undefined in edge cases, e.g., $y=\hat{y}=0$.
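The Dice loss above can be sketched as follows (a minimal NumPy version, with the sums over all pixels made explicit; not the exact implementation used in our experiments):

```python
import numpy as np

def dice_loss(y_true, y_pred, alpha=1.0):
    """Soft Dice loss; y_true and y_pred hold per-pixel values in [0, 1].

    alpha is the smoothing term that keeps the ratio defined when
    both masks are empty (y = y_hat = 0).
    """
    intersection = np.sum(y_true * y_pred)
    return 1.0 - (2.0 * intersection + alpha) / (
        np.sum(y_true) + np.sum(y_pred) + alpha
    )

# toy example: half of the foreground pixels are predicted correctly
y = np.array([1.0, 1.0, 0.0, 0.0])
y_hat = np.array([1.0, 0.0, 1.0, 0.0])
print(dice_loss(y, y_hat))  # -> 0.4
```

Note that for two empty masks the loss evaluates to $1 - \alpha/\alpha = 0$, i.e., the edge case mentioned above is handled gracefully.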
\subsection{Evaluation Metrics}\label{sec:eval-metrics}
Unquestionably, a model should be examined from numerous aspects, such as several accuracy metrics, speed, and memory efficiency. Nevertheless, most studies evaluate their models based only on accuracy metrics. This section provides a brief overview of the various accuracy metrics used in the literature. Even though quantitative accuracy metrics are used to evaluate segmentation algorithms on diverse benchmark datasets, since the ultimate goal of these approaches in computer vision is application to real-world problems, the model's visual quality should also be considered.
\textbf{$\bullet$~Precision / Recall / F1 score ---}
These are the most popular metrics for evaluating the accuracy of segmentation models, classical or deep-learning-based, and are derived from the confusion matrix \cite{minaee2021image}. Precision and recall can be defined per class or at the aggregate level as follows:
\begin{align}
\text{Precision}= \frac{\text{TP}}{\text{TP} + \text{FP}}, \quad \text{Recall} = \frac{\text{TP}}{\text{TP}+ \text{FN}},
\end{align}
where TP, FP, and FN stand for the true-positive, false-positive, and false-negative fractions, respectively. In the segmentation context, recall is also known as sensitivity; it calculates the proportion of correctly labeled foreground pixels, disregarding background pixels. Usually, we are interested in a combination of the precision and recall rates; a popular such metric is the F1 score, defined as their harmonic mean:
\begin{align}
F_1 =\frac{2 \text{Prec.}\: \text{Rec.}}{\text{Prec.}+\text{Rec.}}= \frac{2\text{TP}}{2\text{TP} + \text{FP} + \text{FN}} \label{eq:f1-score}
\end{align}
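The relation between the two forms of \Cref{eq:f1-score} can be checked with a short sketch (toy counts of our own choosing):

```python
def precision_recall_f1(tp, fp, fn):
    """Compute precision, recall, and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # harmonic mean of precision and recall, simplified to raw counts
    f1 = 2 * tp / (2 * tp + fp + fn)
    return precision, recall, f1

# toy counts: 8 true positives, 2 false positives, 2 false negatives
print(precision_recall_f1(8, 2, 2))  # -> (0.8, 0.8, 0.8)
```

Both the harmonic-mean form and the count form of the F1 score give identical values, as the assertion below verifies.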
\textbf{$\bullet$~Accuracy ---}
Also referred to as \textbf{Class Average Accuracy}, this metric calculates, per class, the ratio of correctly classified pixels to the ground truth mask. It is an updated version of pixel accuracy, which describes the percentage of pixels in a predicted segmentation mask that are classified correctly. Accuracy is defined as:
\begin{align}
Acc = \frac{1}{k} \sum^{k}_{j=1}\frac{p_{jj}}{g_j},
\end{align}
where $p_{jj}$ is the number of pixels correctly classified as class $j$, and $g_j$ is the total number of ground truth pixels of class $j$. However, under the class imbalance prevalent in medical datasets, this metric can be misleading.
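A minimal sketch of class average accuracy (our own toy labels; classes absent from the ground truth are skipped, which is one common convention):

```python
import numpy as np

def class_average_accuracy(pred, gt, num_classes):
    """Mean over classes of (correct pixels of class j) / (GT pixels of class j)."""
    per_class = []
    for j in range(num_classes):
        gt_j = gt == j
        if gt_j.sum() == 0:
            continue  # class absent from the ground truth
        per_class.append(np.mean(pred[gt_j] == j))
    return float(np.mean(per_class))

gt = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
print(class_average_accuracy(pred, gt, num_classes=2))  # -> 0.75
```

Here class 0 is half correct and class 1 fully correct, so the class average is $0.75$ even though the raw pixel accuracy is also $0.75$; with imbalanced classes the two values diverge.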
\textbf{$\bullet$~Intersection over Union (IoU) ---}
Also known as the \textbf{Jaccard Index}, this is a measure of the extent of overlap between a predicted segmentation mask and the ground truth. It is defined as the intersection area between the predicted segmentation and the ground truth, divided by the area of their union:
\begin{align}
\text{IoU} = J(A, B) = \frac{\lvert A \cap B\rvert}{\lvert A \cup B\rvert} = \frac{\text{TP}}{\text{TP} + \text{FP} + \text{FN}},
\end{align}
where $A$ and $B$ denote the ground truth and the predicted segmentation, respectively; the IoU ranges between $0$ and $1$.
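For boolean masks, the IoU reduces to two set operations (a minimal sketch assuming a non-empty union):

```python
import numpy as np

def iou(a, b):
    """Jaccard index between two boolean masks with a non-empty union."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

a = np.array([True, True, False])
b = np.array([True, False, True])
print(iou(a, b))  # 1 intersecting pixel over 3 in the union
```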
\textbf{$\bullet$~Dice Coefficient ---}
This metric, commonly applied in medical image analysis, is defined as the ratio of twice the overlapping region of the ground truth ($G$) and prediction ($P$) maps to the total pixels of the ground truth and predicted areas. The Dice coefficient for a certain class can be formulated as:
\begin{align}
Dice = \frac{2\vert G \cap P\vert}{\vert G\vert+\vert P\vert}, \label{eq:dicecoeff}
\end{align}
When the Dice score is computed on binary segmentation maps, it is equivalent to the F1 score. From \Cref{eq:f1-score}, it is evident that this metric focuses on foreground-pixel accuracy and penalizes wrong prediction labels.
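The claimed equivalence between the Dice score and the F1 score on binary maps can be checked numerically (toy masks of our own choosing):

```python
import numpy as np

def dice_coefficient(g, p):
    """Dice score between two boolean masks g (ground truth) and p (prediction)."""
    return 2 * np.logical_and(g, p).sum() / (g.sum() + p.sum())

g = np.array([1, 1, 1, 0, 0], dtype=bool)
p = np.array([1, 1, 0, 1, 0], dtype=bool)

tp = np.logical_and(g, p).sum()   # 2
fp = np.logical_and(~g, p).sum()  # 1
fn = np.logical_and(g, ~p).sum()  # 1
f1 = 2 * tp / (2 * tp + fp + fn)

print(dice_coefficient(g, p), f1)  # both 2/3
```

The identity holds because $\lvert G \cap P\rvert = \mathrm{TP}$, $\lvert G\rvert = \mathrm{TP} + \mathrm{FN}$, and $\lvert P\rvert = \mathrm{TP} + \mathrm{FP}$.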
\textbf{$\bullet$~Hausdorff Distance ---}
This metric is one of the extensively used metrics to indicate segmentation error. It computes the maximum Euclidean distance from a point on the ground truth contour $\delta q$ to its nearest point on the segmented contour $\delta p$, and vice versa, as follows:
\begin{align}
HD(X,Y)= \max\left( hd(\delta p,\delta q) , hd(\delta q,\delta p)\right),
\end{align}
where $hd(\delta p,\delta q)$ and $hd(\delta q,\delta p)$ stand for the one-sided HD from $\delta p$ to $\delta q$ and from $\delta q$ to $\delta p$ respectively as follows:
\begin{align}
hd(\delta q,\delta p) &= \max_{q \in \delta q}\min_{p \in \delta p}\lVert q-p\rVert_2, \nonumber \\
hd(\delta p,\delta q) &= \max_{p \in \delta p}\min_{q \in \delta q}\lVert q-p\rVert_2.
\end{align}
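The two one-sided distances and their maximum can be sketched with a brute-force pairwise computation (fine for small contours; $O(nm)$ memory, and not the optimized routine typically used in practice):

```python
import numpy as np

def one_sided_hd(a, b):
    """max over points of a of the distance to the nearest point of b.

    a, b: arrays of 2D contour points, shapes (n, 2) and (m, 2).
    """
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (n, m) pairwise distances
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance: the larger of the two one-sided distances."""
    return max(one_sided_hd(a, b), one_sided_hd(b, a))

# toy contours on a line: the farthest mismatch is (3,0) vs (1,0)
a = np.array([[0.0, 0.0], [1.0, 0.0]])
b = np.array([[0.0, 0.0], [3.0, 0.0]])
print(hausdorff(a, b))  # -> 2.0
```

The asymmetry of the one-sided distances is why both directions must be computed before taking the maximum.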
\subsection{Experimental Results}
\label{sec:experimental-results}
\subsubsection{Skin lesion (ISIC 2018)}
Skin lesion segmentation is challenging due to various artifacts in dermoscopic images, such as ruler markers, water bubbles, dark corners, and strands of hair \cite{hasan2020dsnet}. \Cref{tab:er-isic2018} presents the comparison results for the ISIC 2018 \cite{codella2019skin} dataset. It can be seen that the Dice score, as a prominent metric, is almost the same among CNN- and Transformer-based methods. A skin lesion usually appears as texture and does not follow a specific shape or geometrical pattern, which might explain why Transformer-based networks do not bring a clear advantage on texture-related patterns. It can also be seen that networks with multi-scale feature description capacity, e.g., U-Net++ \cite{zhou2018unet++} and UCTransNet \cite{wang2022uctransnet}, are highly capable of localizing abnormal regions compared to the other extensions. Overall, the models with multi-scale representation surpass the other extensions in both the CNN- and Transformer-based approaches.
For further investigation, we provided some segmentation results in \Cref{fig:er-isic2018}.
Although the same Dice trend endorses the quantitative results among the CNN and Transformer models (\Cref{tab:er-isic2018}), the multi-scale representation derived from the global contextual modeling of the Transformer models enables these architectures to produce smoother segmentation results, even when the background pixels overlap heavily with the skin lesion class. In addition, the CNN-based networks are generally more likely to over-segment or under-segment the lesions compared to the Transformer-based models. Since UCTransNet \cite{wang2022uctransnet} combines the advantages of a hierarchical CNN design with the multi-scale feature fusion of its CTrans module, it provides the SOTA results among the selected methods. The critical difference between the multi-scale representation learning of MISSFormer \cite{huang2021missformer} and UCTransNet \cite{wang2022uctransnet} lies in the down-sampling factor used within the Efficient Transformer block, which degrades the feature representation (see \Cref{sec:standalone-transformer} and \Cref{fig:missformer-block} for more detail). UCTransNet instead utilizes a CNN-based feature representation rather than a Transformer-based backbone in a U-shaped structure; this representation makes the features more discriminant, and the semantic gap between the encoder and decoder paths is successfully lessened by the CTrans module.
\begin{table*}[tb]
\caption{Performance comparison on \textit{ISIC 2018}, \textit{SegPC 2021} and \textit{Synapse} datasets (best results are bolded).}
\vspace{0.2\baselineskip}
\begin{subtable}[t]{0.48\textwidth}
\centering
\caption{\textit{ISIC 2018}} \label{tab:er-isic2018}
\setlength{\tabcolsep}{4pt} %
\renewcommand*{\arraystretch}{1.11}
\vspace{-0.3\baselineskip}
\scriptsize{
\begin{tabular}{l | c c c c c c }
\hline
\textbf{Methods} &
\textbf{AC} &
\textbf{PR} &
\textbf{SE} &
\textbf{SP} &
\textbf{Dice} &
\textbf{IoU} \\
\hline
U-Net \cite{ronneberger2015u} & 0.9446 & 0.8746 & 0.8603 & 0.9671 & 0.8674 & 0.8491 \\
Att-UNet \cite{oktay2018attention} & 0.9516 & 0.9075 & 0.8579 & 0.9766 & 0.8820 & 0.8649 \\
U-Net++ \cite{zhou2018unet++} & 0.9517 & 0.9067 & 0.8590 & 0.9764 & 0.8822 & 0.8651 \\
MultiResUNet \cite{ibtehaz2020multiresunet} & 0.9473 & 0.8765 & 0.8689 & 0.9704 & 0.8694 & 0.8537 \\
Residual U-Net \cite{zhang2018road} & 0.9468 & 0.8753 & 0.8659 & 0.9688 & 0.8689 & 0.8509 \\
TransUNet \cite{chen2021transunet} & 0.9452 & 0.8823& 0.8578 & 0.9653 & 0.8499 & 0.8365\\
UCTransNet \cite{wang2022uctransnet} & \textbf{0.9546} & \textbf{0.9100} & \textbf{0.8704} & \textbf{0.9770} & \textbf{0.8898} & \textbf{0.8729} \\
MISSFormer \cite{huang2021missformer} & 0.9453 & 0.8964 & 0.8371& 0.9742 & 0.8657 & 0.8484 \\
\hline
\end{tabular}
}
\end{subtable}
\hfill
\begin{subtable}[t]{0.48\textwidth}
\centering
\caption{\textit{SegPC 2021}} \label{tab:er-segpc2021}
\setlength{\tabcolsep}{4pt} %
\renewcommand*{\arraystretch}{1.11}
\vspace{-0.3\baselineskip}
\scriptsize{
\begin{tabular}{l | c c c c c c c }
\hline
\textbf{Methods} &
\textbf{AC} &
\textbf{PR} &
\textbf{SE} &
\textbf{SP} &
\textbf{Dice} &
\textbf{IoU} \\
\hline
U-Net \cite{ronneberger2015u} & 0.9795 & 0.9084 & 0.8548 & 0.9916 & 0.8808 & 0.8824 \\
Att-UNet \cite{oktay2018attention} & 0.9854 & 0.9360 & 0.8964 & 0.9940 & 0.9158 & 0.9144 \\
U-Net++ \cite{zhou2018unet++} & 0.9845 & 0.9328 & 0.8887 & 0.9938 & 0.9102 & 0.9092 \\
MultiResUNet \cite{ibtehaz2020multiresunet} & 0.9753 & 0.8391 & 0.8925 & 0.9834 & 0.8649 & 0.8676 \\
Residual U-Net \cite{zhang2018road} & 0.9743 & 0.8920 & 0.8080 & 0.9905 & 0.8479 & 0.8541 \\
TransUNet \cite{chen2021transunet} & 0.9702 & 0.8678 & 0.7831 & 0.9884 & 0.8233 & 0.8338 \\
UCTransNet \cite{wang2022uctransnet} & \textbf{0.9857} & \textbf{0.9365} & \textbf{0.8991} & \textbf{0.9941} & \textbf{0.9174} & \textbf{0.9159} \\
MISSFormer \cite{huang2021missformer} & 0.9663 & 0.8152 & 0.8014 & 0.9823 & 0.8082 & 0.8209\\
\hline
\end{tabular}
}
\end{subtable}
\hfill
\begin{subtable}[t]{\textwidth}
\caption{\textit{Synapse}} \label{tab:er-synapse}
\setlength{\tabcolsep}{8.7pt} %
\renewcommand*{\arraystretch}{1.11}
\vspace{-0.2\baselineskip}
\scriptsize{
\begin{tabular}{l|cc|*{8}c}
\hline \textbf{Methods} & \textbf{DSC~$\uparrow$} & \textbf{HD~$\downarrow$} & \textbf{Aorta} & \textbf{Gallbladder} & \textbf{Kidney(L)} & \textbf{Kidney(R)} & \textbf{Liver} & \textbf{Pancreas} & \textbf{Spleen} & \textbf{Stomach} \\
\hline
U-Net \cite{ronneberger2015u} & 76.85 & 39.70 & 89.07 & \textbf{69.72} & 77.77 & 68.60 & 93.43 & 53.98 & 86.67 & 75.58 \\
Att-UNet \cite{oktay2018attention} & 77.77 & 36.02 & \textbf{89.55} & 68.88 & 77.98 & 71.11 & 93.57 & 58.04 & 87.30 & 75.75 \\
U-Net++ \cite{zhou2018unet++} & 76.91 & 36.93 & 88.19 & 65.89 & 81.76 & 74.27 & 93.01 & 58.20 & 83.44 & 70.52 \\
MultiResUNet \cite{ibtehaz2020multiresunet} & 77.42 & 36.84 & 87.73 & 65.67 & 82.08 & 70.43 & 93.49 & 60.09 & 85.23 & 74.66 \\
Residual U-Net \cite{zhang2018road} & 76.95 & 38.44 & 87.06 & 66.05 & 83.43 & 76.83 & 93.99 & 51.86 & 85.25 & 70.13 \\
TransUNet \cite{chen2021transunet} & 77.48 & 31.69 & 87.23 & 63.13 & 81.87 & 77.02 & 94.08 & 55.86 & 85.08 & 75.62 \\
UCTransNet \cite{wang2022uctransnet} & 78.23 & 26.75 & 84.25 & 64.65 & 82.35 & 77.65 & 94.36 & 58.18 & 84.74 & 79.66\\
MISSFormer \cite{huang2021missformer} & \textbf{81.96} & \textbf{18.20} & 86.99 & 68.65 & \textbf{85.21} & \textbf{82.00} & \textbf{94.41} & \textbf{65.67} & \textbf{91.92} & \textbf{80.81}\\
\hline
\end{tabular}
}
\end{subtable}
\end{table*}
\begin{figure*}[ht]
\captionsetup[subfigure]{labelformat=empty,}
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/img}
\caption{}
\label{fig:er-isic-img-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/gt}
\caption{}
\label{fig:er-isic-gt-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/unet}
\caption{}
\label{fig:er-isic-unet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/unetpp}
\caption{}
\label{fig:er-isic-unetpp-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/attunet}
\caption{}
\label{fig:er-isic-attunet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/resunet}
\caption{}
\label{fig:er-isic-resunet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/multiresunet}
\caption{}
\label{fig:er-isic-multiresunet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/transunet}
\caption{}
\label{fig:er-isic-transunet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/missformer}
\caption{}
\label{fig:er-isic-missformer-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/7/uctransnet}
\caption{}
\label{fig:er-isic-uctransnet-1}
\end{subfigure}\hfill
\\[-20.5pt]
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/img}
\caption{}
\label{fig:er-isic-img-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/gt}
\caption{}
\label{fig:er-isic-gt-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/unet}
\caption{}
\label{fig:er-isic-unet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/unetpp}
\caption{}
\label{fig:er-isic-unetpp-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/attunet}
\caption{}
\label{fig:er-isic-attunet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/resunet}
\caption{}
\label{fig:er-isic-resunet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/multiresunet}
\caption{}
\label{fig:er-isic-multiresunet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/transunet}
\caption{}
\label{fig:er-isic-transunet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/missformer}
\caption{}
\label{fig:er-isic-missformer-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/11/uctransnet}
\caption{}
\label{fig:er-isic-uctransnet-2}
\end{subfigure}\hfill
\\[-20.5pt]
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/img}
\caption{\scriptsize{Input Image}}
\label{fig:er-isic-img-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/gt}
\caption{\scriptsize{Target (GT)}}
\label{fig:er-isic-gt-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/unet}
\caption{\scriptsize{UNet}}
\label{fig:er-isic-unet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/unetpp}
\caption{\scriptsize{UNet++}}
\label{fig:er-isic-unetpp-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/attunet}
\caption{\scriptsize{AttUNet}}
\label{fig:er-isic-attunet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/resunet}
\caption{\scriptsize{ResUNet}}
\label{fig:er-isic-resunet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/multiresunet}
\caption{\scriptsize{MultiResUNet}}
\label{fig:er-isic-multiresunet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/transunet}
\caption{\scriptsize{TransUNet}}
\label{fig:er-isic-transunet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/missformer}
\caption{\scriptsize{MISSFormer}}
\label{fig:er-isic-missformer-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/isic/47/uctransnet}
\caption{\scriptsize{UCTransNet}}
\label{fig:er-isic-uctransnet-3}
\end{subfigure}\hfill
\caption{Visual comparisons of different methods on the \textit{ISIC 2018} skin lesion segmentation dataset. Ground truth boundaries are shown in \textcolor{green}{green}, and predicted boundaries are shown in \textcolor{blue}{blue}.}
\label{fig:er-isic2018}
\end{figure*}
\subsubsection{Cell (SegPC 2021)}
Visual segmentation results for the SegPC dataset are presented in \Cref{fig:er-segpc2021}. Due to the overlapping nature of multiple myeloma cells and the spiky ground truth labels, segmentation is laborious. To this end, besides the Dice score comparison in \Cref{tab:er-segpc2021}, the mIoU metric plays a critical role in judging the networks' performance. It is worth mentioning that the Transformer-based methods produce smoother segmentation contours than the CNN-based ones, which, in our opinion, reflects that global long-range contextual information helps the network perceive the actual shape of cells. On the contrary, due to the locality of their convolutional counterparts, the CNN-based modules are more error-prone at boundaries. The multi-scale representation power of UCTransNet \cite{wang2022uctransnet} once again shows the effectiveness of this approach in generating precise segmentation maps for cells with varying scales and backgrounds.
\begin{figure*}[ht]
\captionsetup[subfigure]{labelformat=empty}
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/img}
\caption{}
\label{fig:er-segpc-img-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/gt}
\caption{}
\label{fig:er-segpc-gt-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/unet}
\caption{}
\label{fig:er-segpc-unet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/unetpp}
\caption{}
\label{fig:er-segpc-unetpp-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/attunet}
\caption{}
\label{fig:er-segpc-attunet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/resunet}
\caption{}
\label{fig:er-segpc-resunet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/multiresunet}
\caption{}
\label{fig:er-segpc-multiresunet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/transunet}
\caption{}
\label{fig:er-segpc-transunet-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/missformer}
\caption{}
\label{fig:er-segpc-missformer-1}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/0/uctransnet}
\caption{}
\label{fig:er-segpc-uctransnet-1}
\end{subfigure}\hfill
\\[-20.5pt]
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/img}
\caption{}
\label{fig:er-segpc-img-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/gt}
\caption{}
\label{fig:er-segpc-gt-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/unet}
\caption{}
\label{fig:er-segpc-unet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/unetpp}
\caption{}
\label{fig:er-segpc-unetpp-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/attunet}
\caption{}
\label{fig:er-segpc-attunet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/resunet}
\caption{}
\label{fig:er-segpc-resunet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/multiresunet}
\caption{}
\label{fig:er-segpc-multiresunet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/transunet}
\caption{}
\label{fig:er-segpc-transunet-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/missformer}
\caption{}
\label{fig:er-segpc-missformer-2}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/7/uctransnet}
\caption{}
\label{fig:er-segpc-uctransnet-2}
\end{subfigure}\hfill
\\[-20.5pt]
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/img}
\caption{\scriptsize{Input Image}}
\label{fig:er-segpc-img-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/gt}
\caption{\scriptsize{Target (GT)}}
\label{fig:er-segpc-gt-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/unet}
\caption{\scriptsize{UNet}}
\label{fig:er-segpc-unet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/unetpp}
\caption{\scriptsize{UNet++}}
\label{fig:er-segpc-unetpp-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/attunet}
\caption{\scriptsize{AttUNet}}
\label{fig:er-segpc-attunet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/resunet}
\caption{\scriptsize{ResUNet}}
\label{fig:er-segpc-resunet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/multiresunet}
\caption{\scriptsize{MultiResUNet}}
\label{fig:er-segpc-multiresunet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/transunet}
\caption{\scriptsize{TransUNet}}
\label{fig:er-segpc-transunet-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/missformer}
\caption{\scriptsize{MISSFormer}}
\label{fig:er-segpc-missformer-3}
\end{subfigure}\hfill
\begin{subfigure}{.099\textwidth}
\centering
\includegraphics[width=\linewidth]{Figures/benchmark_imgs/segpc/45/uctransnet}
\caption{\scriptsize{UCTransNet}}
\label{fig:er-segpc-uctransnet-3}
\end{subfigure}\hfill
\caption{Visual comparisons of different methods on the \textit{SegPC 2021} cell segmentation dataset. \textcolor{red}{Red} region indicates the Cytoplasm and \textcolor{blue}{blue} denotes the Nucleus area of cell.}
\label{fig:er-segpc2021}
\end{figure*}
\subsubsection{Multi organ (Synapse)}
In \Cref{tab:er-synapse}, we present the quantitative results based on the Dice score and Hausdorff distance metrics; in \Cref{fig:er-synapse}, we depict qualitative visual results on the Synapse multi-organ dataset. Due to the increased number of classes and the multi-scale nature of the Synapse dataset, this is a challenging task. Overall, as a baseline method, U-Net \cite{ronneberger2015u} performs well given its small parameter count and basic design but suffers from several visual defects, e.g., mislabeling, over-segmentation, and under-segmentation. U-Net++ \cite{zhou2019unet++}, MultiResUNet \cite{ibtehaz2020multiresunet}, and Residual U-Net \cite{zhang2018road} principally follow the same intuition of creating rich semantic representations with dense connections and bigger convolution kernels to capture contextual information, but they still fall short, showing only a minimal Dice-score boost with respect to U-Net. Attention U-Net \cite{oktay2018attention} utilizes an attention mechanism within the skip connections to recalibrate feature reusability, demonstrates an almost 1\% Dice-score improvement over U-Net, and presents smoother segmentation boundaries than the aforementioned methods. Finally, the Transformer-based methods achieve better mean Dice scores than the CNN-based ones. This is predominantly tied to ViT's advantage in capturing global dependencies, which are very important in multi-scale segmentation tasks. MISSFormer \cite{huang2021missformer}, due to its hierarchical and multi-scale design, performs as the SOTA strategy among our selected models. This result suggests that in multi-scale segmentation tasks, the determinative approach should follow a hierarchical design besides leveraging contextual information for superior performance.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Figures/benchmark_imgs/synapse/synapsevisualization_crop.pdf}
\caption{Visual comparisons of different methods on the \textit{Synapse} multi-organ segmentation dataset.}
\label{fig:er-synapse}
\end{figure*}
\begin{table}
\centering
\caption{Comparison of the number of parameters in Million (M) scale and
Giga Floating-point Operations Per Second (GFLOPS).} \label{tab:parameter-flops}
\begin{tabular}{l| *2{c}}
\hline
\textbf{Methods} & \textbf{\# Parameters (M)} & \textbf{GFLOPS} \\
\hline
MISSFormer \cite{huang2021missformer} & 42.46 & 7.25 \\
U-Net \cite{ronneberger2015u} & 1.95 & 8.2 \\
MultiResUNet \cite{ibtehaz2020multiresunet} & 7.82 & 15.74 \\
U-Net++ \cite{zhou2018unet++} & 9.16 & 26.72 \\
UCTransNet \cite{wang2022uctransnet} & 66.43 & 32.99 \\
Att-UNet \cite{oktay2018attention} & 34.88 & 51.02 \\
Residual U-Net \cite{zhang2018road} & 13.04 & 62.06 \\
TransUNet \cite{chen2021transunet} & 105.28 & 80.68 \\
\hline
\end{tabular}
\end{table}
\subsection{Discussion}\label{sec:discussion}
In this section, we analyze the models' performances from \Cref{sec:experimental-results} in more detail. Observing the experimental results, we find that a performance gain over the original U-Net is achieved by all of its extensions, which reveals the effectiveness of their contributions. Although U-Net acts as a baseline method, in a binary segmentation task such as skin lesion segmentation it demonstrates good performance compared to its extensions. However, its performance in multi-organ segmentation tasks, specifically with overlapped objects, is less promising, as evidenced by the over-segmentation and under-segmentation results presented in the qualitative figures (see \Cref{fig:er-isic2018,fig:er-segpc2021,fig:er-synapse}).
On the contrary, attention-based strategies like Attention U-Net \cite{oktay2018attention} perform well in the case of overlapped objects. Furthermore, the hierarchy of skip connections included in the U-Net++ \cite{zhou2019unet++} model seems successful in boosting performance, which might explain the contribution of the nested structure. In addition, the hierarchical attention mechanism in Attention U-Net \cite{oktay2018attention} helps the naive U-Net compensate for the locality issue by providing a wider capturing view in feature extraction, enabling accurate segmentation of multi-scale objects such as those in multi-organ segmentation. Even though each CNN-based extension of the U-Net model aims to enhance the feature representation, the local nature of the convolution operation usually limits the representation power for modeling global anatomical and structural information. Residual U-Net \cite{zhang2018road}, even with its residual enhancements, is no exception to this rule. On the other hand, Transformer models, e.g., MISSFormer \cite{huang2021missformer} (as a hybrid model) and UCTransNet \cite{wang2022uctransnet}, seem highly capable of modeling the global representation, as can be witnessed from the quantitative results on the Synapse dataset.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{./Figures/benchmark_imgs/ComparisonOfModels_crop.pdf}
\caption{Comparison of experimental models with respect to their Dice score over studied datasets, memory, and complexity factors.}
\label{fig:comparison-methods}
\end{figure}
The computational complexity, more specifically the Giga Floating-Point Operations (GFLOPS) and the number of trainable parameters, is another critical aspect that needs to be considered in model assessment. In this respect, we provide \Cref{tab:parameter-flops} to show the number of GFLOPS and trainable parameters for each U-Net extension. Observing \Cref{tab:parameter-flops}, we realize that modifying the naive U-Net with CNN-based plug-and-play modules increases the parameter count and GFLOPS remarkably. In addition, the extra FLOPS rate further increases with the Transformer-based models due to their quadratic computational complexity. On the contrary, MISSFormer, as a standalone Transformer-based U-shaped network, benefits from an efficient Transformer design and is less affected by the parameter count in comparison with TransUNet \cite{chen2021transunet}. Overall, the underlying trade-off between performance and efficiency needs to be considered for the practical use of these methods. To this end, we first define the normalized computational efficiency of a particular network as follows:
\begin{align}
\text{Complexity}_{\text{net}}&=1-\left(\frac{\text{GFLOPS}(\text{net})}{\max(\text{GFLOPS}(\text{net}))}\right), \nonumber \\
& s.t. \; \text{net} \in \text{Nets}
\end{align}
\noindent where the max function calculates the maximum number of GFLOPS over all models to find the upper bound. Similarly, we define the normalized memory efficiency as:
\begin{align}
\text{Memory}_{\text{net}}&=1-\left(\frac{\#\text{Parameters}(\text{net})}{\max(\# \text{Parameters}(\text{net}))}\right), \nonumber \\
& s.t. \; \text{net} \in \text{Nets}
\end{align}
\noindent Finally, we show each model's performance on a particular dataset using the Dice score in \Cref{fig:comparison-methods}. The computational efficiency along with the performance gain demonstrated in \Cref{fig:comparison-methods} can be a good source for analyzing the trade-off between performance and efficiency and, finally, choosing an appropriate clinically-applicable model.
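As an illustrative sketch (using only the GFLOPS and parameter counts reported in \Cref{tab:parameter-flops}; the function and variable names are ours), the two normalized efficiency scores can be computed as follows:

```python
# Parameter counts (millions) and GFLOPS taken from the comparison table.
MODELS = {
    # name: (params_M, gflops)
    "MISSFormer": (42.46, 7.25),
    "U-Net": (1.95, 8.2),
    "MultiResUNet": (7.82, 15.74),
    "U-Net++": (9.16, 26.72),
    "UCTransNet": (66.43, 32.99),
    "Att-UNet": (34.88, 51.02),
    "Residual U-Net": (13.04, 62.06),
    "TransUNet": (105.28, 80.68),
}

def normalized_efficiency(models):
    """Return {name: (complexity_eff, memory_eff)}, where each score is
    1 - value / max(value), so a higher score means a more efficient model."""
    max_params = max(p for p, _ in models.values())
    max_gflops = max(g for _, g in models.values())
    return {
        name: (1 - g / max_gflops, 1 - p / max_params)
        for name, (p, g) in models.items()
    }

scores = normalized_efficiency(MODELS)
for name, (c, m) in sorted(scores.items(), key=lambda kv: -kv[1][0]):
    print(f"{name:15s} complexity={c:.3f} memory={m:.3f}")
```

By construction, the model with the largest GFLOPS (TransUNet) gets a computational-efficiency score of 0, while MISSFormer, with the smallest GFLOPS, scores highest.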
\section{Challenges and Opportunities} \label{sec:challenges}
Over the past few years, deep-learning-based methods, especially the U-Net family, have been utilized considerably in medical image fields to address clinical demands. In the following, future perspectives associated with medical image segmentation using the U-Net family will be introduced in favor of improvement in this field.
\subsection{Memory Efficient Models}
The primary purpose of almost all the approaches mentioned in this paper was to identify the limitations of the original U-Net model and design add-on modules that enhance feature reusability and enrich the feature representation for a further performance boost. However, including more parameters in a model usually results in large memory requirements, which makes the model unsuitable for clinical applications with limited computational devices \cite{fitzke2021oncopetnet}. To overcome this issue, one might consider a more efficient network design or a more efficient way of removing unnecessary parameters. Deep model compression techniques such as pruning \cite{han2015deep}, quantization \cite{han2015deep}, low-rank approximation \cite{yu2017compressing}, knowledge distillation \cite{gou2021knowledge}, and neural architecture search \cite{wistuba2019survey}, to name a few, are areas for advancement that researchers might consider in their approaches.
\subsection{Balance Between Accuracy and Efficiency}
It is always appealing for a deep model to be both accurate and efficient. However, model efficiency usually requires meeting some restrictions (e.g., a low computational budget). Such a scenario potentially leads to a poor network design and consequently to less accurate predictions. Hence, there is always a trade-off between a model's accuracy and its efficiency. From a clinical perspective, it is highly desirable to take this balance into account. Therefore, one might consider the trade-off between accuracy and efficiency when designing a U-Net extension.
\subsection{Model scaling and complexity}
As mentioned in the previous sections, it is evident that parameters such as running time, computational complexity, and memory are important for clinics with limited computing resources. In addition to memory shortage, inference time and consequently computational complexity also play key roles in real-time applications and need to be considered. To do so, well-known metrics that evaluate a model's complexity, such as the number of model parameters, Floating-Point Operations (FLOPS), runtime, and Frames Per Second (FPS), should be included in the model assessment. The first two metrics, model parameters and FLOPS, are independent of the implementation environment; a larger value implies lower implementation efficiency. Metrics such as runtime and FPS may seem less desirable compared to model parameters and FLOPS due to their reliance on hardware and the implementation environment. However, these metrics are also imperative for real-world applications and need to be taken into account as well.
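The hardware-dependent metrics above can be measured with a simple timing loop. The sketch below is illustrative only; `model_fn` is our stand-in for any model's inference call, not an API of the reviewed methods:

```python
import time

def benchmark(model_fn, n_iters=50):
    """Measure the average runtime per forward pass and frames per second.
    model_fn is a zero-argument callable standing in for model inference."""
    start = time.perf_counter()
    for _ in range(n_iters):
        model_fn()
    elapsed = time.perf_counter() - start
    runtime = elapsed / n_iters  # seconds per frame
    fps = n_iters / elapsed      # frames per second
    return runtime, fps

# Example with a dummy "model" that sleeps 5 ms per call.
runtime, fps = benchmark(lambda: time.sleep(0.005), n_iters=20)
print(f"runtime = {runtime * 1000:.1f} ms/frame, fps = {fps:.0f}")
```

Because the same code yields different numbers on different hardware, such measurements should always be reported together with the execution environment.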
\subsection{Interpretable Models}
It is always a question for clinicians how deep learning models learn specific patterns to recognize a certain cancer. From a clinical point of view, understanding the object recognition process inside deep models is highly valuable. With interpretation tools (e.g., feature visualization), radiologists can find out how the diagnosis process happens inside a deep model and how they can include prior knowledge (e.g., pathology assumptions) to further improve model performance. Hence, model interpretation not only assists the radiologist in understanding the hidden markers in medical data but also provides a way to further scrutinize the network architecture for possible improvements by modeling clinical assumptions. In this respect, future directions for U-Net extensions might consider the importance of interpretability in their network design.
\subsection{Federated Learning}
In the medical domain, data privacy imposes a set of regulations to ensure data confidentiality and security. In most clinical applications, data acquisition happens entirely in one center, which limits the data diversity and results in less generalizable deep neural networks. In contrast, a multi-center dataset with authorized data privacy can provide a more realistic dataset for clinical purposes \cite{rieke2020future}. Future network designs might incorporate a federated learning strategy in their optimization process to provide a more generalizable model for clinical use.
\subsection{Software Ecosystems}
With the rapid development of Artificial Intelligence (AI) systems, these methods are integrated and adapted in almost all domains to facilitate data processing and provide assistive tools that meet consumer demands. A software ecosystem coordinates this process by defining a set of actors (e.g., developers, users) together with an appropriate interaction strategy to secure consumers' demands. The software ecosystem can be considered a three-step pipeline (see \Cref{fig:unet-pipeline}). The first step is data acquisition and preparation: it recognizes the hardware requirements (e.g., MRI scans, metadata processing) for the task-specific goal and prepares the data in a standard format for the next step. The deep learning model design and optimization process happens in the second step, which learns the task-specific objective. Finally, the last step provides comprehensive tools to analyze and evaluate the extracted results for clinical applications. Several software prototypes already exist in the literature, e.g., nnUNet \cite{isensee2021nnu} and Ivadomed \cite{gros2020ivadomed}, which provide open-access code for both clinical and research communities. Similarly, our open-source software on GitHub\footnote{\url{https://github.com/NITR098/Awesome-U-Net}} provides an implementation of the U-Net family for several public datasets. Future research might consider contributing to an open-source library to further support software development.
\subsection{Data Driven Decisions}
In a supervised learning task, the neural network uses the ground truth mask to learn a discriminative space, which enables the network to distinguish the target class from the background regions (e.g., segmenting tumor regions in brain images). All the extensions of the U-Net model presented in this review were based on the supervised learning paradigm. Such a training strategy imposes a mask bias on the training process and prevents the network from learning data-driven features. This might explain why supervised networks have less capacity to generalize to unseen objects. To address this issue, one might include unsupervised data in the training process for a richer data representation, as in \cite{feyjie2020semi}. As the name suggests, an unsupervised technique does not require any annotation and gives the model more freedom to infer from the data itself without imposing prior knowledge (e.g., mask information). Although training and reasoning from the image itself is a complex task, modeling such behavior might open up new potential for the U-Net family to learn more generic and data-representative features. The future direction of the U-Net family might consider this point in their potential design for a further performance boost.
\section{Conclusion}
In this study, we presented a thorough review of the literature on U-Net and its variants, the emergence of which has continued to rise in the medical image segmentation field over the years.
We examined the area from its main taxonomy, the extensions to the variants, and a benchmark of performance and speed.
To structure the wide variety of U-Net extensions, the different extensions and variants are grouped depending on which type of change was made to the architecture.
Adaptions to the skip connections are presented in \Cref{sec:skip-connection} and comprise methods that increase the number of skip connections, apply additional processing to the feature maps in the skip connections to focus on the areas of interest and improve the fusion of the encoder and decoder feature maps combined through the skip connections.
\Cref{sec:backbone-design} introduces the different types of backbones used in the U-Net architecture, e.g., deeper network architectures, the processing of 3D images, or multi-resolution feature extraction for high inter-patient variability in the size of the object(s) of interest.
A variety of extensions of the bottleneck of the original U-Net is examined in \Cref{sec:bottleneck}.
The different approaches adapt the bottleneck for multi-scale representation of the bottleneck feature maps or position-wise attention to model spatial dependencies between pixels in the bottleneck.
The transformer variants of the U-Net architecture, introduced in \Cref{sec:transformers}, enable the networks to capture inter-pixel long-range dependencies and to compensate for the otherwise limited receptive field of the convolutions in the original U-Net.
Approaches presented in \Cref{sec:rich-representation} adapt the U-Net architecture to use information from multiple modalities and/or multiple scales for a rich feature representation. Finally, \Cref{sec:probablistic} presents approaches that model the uncertainty in annotations of medical data or out-of-distribution samples.
Furthermore, a detailed evaluation of several U-Net variants on different medical datasets was conducted in \Cref{sec:experimental-results}. In our accuracy and complexity experiments, the Transformer variants UCTransNet \cite{wang2022uctransnet} and MISSFormer \cite{huang2021missformer} achieved superior performance (a 2\% increase in terms of Dice score) compared to the CNN extensions on all datasets. Eventually, our experimental results, along with the computational and memory complexity, provide a picture for the reader to weigh the trade-off between performance and efficiency (e.g., MISSFormer \cite{huang2021missformer} with 42M parameters and a 0.81 DSC score on the Synapse dataset against U-Net with 1.9M parameters but a 0.79 DSC score) when choosing the desired network for the problem at hand. We hope our study can assist researchers in further extending the U-Net model for both clinical and industry applications.
\bibliographystyle{IEEEtran}
\section{Introduction}
The well-known railroad crossing problem has been used as an example
for comparing various specification and validation methodologies; see
for example~\cite{HL,ORS} and the relevant references there. The
evolving algebras (EA) methodology has been used extensively for
specification and validation for real-world software and hardware
systems; see the EA guide~\cite{Guide} and the EA
bibliography~\cite{Boerger}. The merits of using ``toy'' problems as
benchmarks are debatable; not every methodology scales well to
real-world problems. Still, toy problems are appropriate for
experimentation. Here we present an evolving algebra solution for the
railway crossing problem and use the opportunity for experimentation
with instantaneous actions and reactions in real time.
In Sect.~2, we describe a version of the railroad crossing problem.
It is not difficult to generalize the problem (e.g., by
relaxing our assumptions on trains) and to generalize the solution
accordingly. An interested reader may view that as an exercise.
In Sect.~3, we give a brief introduction to evolving algebras (in short,
ealgebras), in order to make this paper self-contained. We omit
many important aspects of ealgebras and refer the interested reader to a
fuller definition in the EA guide~\cite{Guide}. In Sect.~4,
experimenting with instantaneous actions in real time, we define special
distributed real-time ealgebras appropriate to situations like that of the
railroad crossing problem.
In Sect.~5 and Sect.~6, we give a solution for the railroad crossing
problem which is formalized as an ealgebra. The program for the
ealgebra is given in Sect.~5. The reader may wish to look at
Sect.~5 right away; the notation is self-explanatory to a large
extent. In Sect.~6, we define regular runs (the only relevant runs)
of our ealgebra and analyze those runs. Formally speaking, we have to
prove the existence of regular runs for every possible pattern of
trains; for technical reasons, we delay the existence theorem until
later.
In Sect.~7, we prove the safety and liveness properties of our
solution. In Sect.~8 we prove a couple of additional properties of
our ealgebra. In Sect.~9, we take advantage of the additional
properties and prove the existence theorem for regular runs and
analyze the variety of regular runs.
The ealgebra formalization is natural and this allows us to use
intuitive terms in our proofs. One may have an impression that no
formalization is really needed. However, a formalization is needed if
one wants a mathematical verification of an algorithm: mathematical
proofs are about mathematical objects. Of course, we could avoid
intuitive terms and make the proofs more formal and pedantic, but this
paper is addressed to humans and it is so much harder to read pedantic
proofs. It is a long standing tradition of applied mathematics to use
intuitive terms in proofs. Let us notice though that more formal and
pedantic proofs have their own merits; if one wants to check the
details of our proofs by machine, it is useful to rewrite the proofs
in a pedantic way. In any case, we see a great value in the
naturality of formalization. No semantical approach makes inherent
difficulties of a given problem go away. At best, the approach does
not introduce more complications and allows one to deal with the
inherent complexity of the given problem.
\paragraph{Acknowledgments.}
Raghu Mani participated in an initial stage of the work \cite{tech1}.
During the final stage of the work, the first author was
a CNRS\footnote{Centre National de la Recherche Scientifique} visitor
in the Laboratoire Informatique Theoretique et Programmation, Paris,
France \cite{tech2}.
\section{The Railroad Crossing Problem}
Imagine a railroad crossing with several train tracks and a common
gate, such as the one depicted in Fig.~\ref{trainpic}. Sensors along
every track detect oncoming and departing trains. Let us consider one
of the tracks, shown in Fig.~\ref{senspic}. It has four sensors at
points L1, L2, R1 and R2. Sensor L1 detects trains coming from the
left, and sensor L2 detects when those trains leave the crossing.
Similarly sensor R1 detects trains coming from the right, and sensor
R2 detects when those trains leave the crossing. Based on signals
from these sensors, an automatic controller signals the gate to open
or close.
\begin{figure}[htbp]
\setlength{\unitlength}{0.00083300in}%
\begingroup\makeatletter\ifx\SetFigFont\undefined
\def\x#1#2#3#4#5#6#7\relax{\def\x{#1#2#3#4#5#6}}%
\expandafter\x\fmtname xxxxxx\relax \def\y{splain}%
\ifx\x\y
\gdef\SetFigFont#1#2#3{%
\ifnum #1<17\tiny\else \ifnum #1<20\small\else
\ifnum #1<24\normalsize\else \ifnum #1<29\large\else
\ifnum #1<34\Large\else \ifnum #1<41\LARGE\else
\huge\fi\fi\fi\fi\fi\fi
\csname #3\endcsname}%
\else
\gdef\SetFigFont#1#2#3{\begingroup
\count@#1\relax \ifnum 25<\count@\count@25\fi
\def\x{\endgroup\@setsize\SetFigFont{#2pt}}%
\expandafter\x
\csname \romannumeral\the\count@ pt\expandafter\endcsname
\csname @\romannumeral\the\count@ pt\endcsname
\csname #3\endcsname}%
\fi
\fi\endgroup
\begin{center}\framebox{
\begin{picture}(5024,2200)(66,-1417)
\thicklines
\put( 78,-63){\line( 1, 0){914}}
\put( 1764,-63){\line(1,0){3314}}
\put( 78,-213){\line( 1, 0){914}}
\put( 1764,-213){\line( 1, 0){3314}}
\put(153, 12){\line( 0,-1){300}}
\put(228, 12){\line( 0,-1){300}}
\put(303, 12){\line( 0,-1){300}}
\put(378, 12){\line( 0,-1){300}}
\put(453, 12){\line( 0,-1){300}}
\put(528, 12){\line( 0,-1){300}}
\put(603, 12){\line( 0,-1){300}}
\put(678, 12){\line( 0,-1){300}}
\put(753, 12){\line( 0,-1){300}}
\put(828, 12){\line( 0,-1){300}}
\put(903, 12){\line( 0,-1){300}}
\put(978, 12){\line( 0,-1){300}}
\put(1803, 12){\line( 0,-1){300}}
\put(1878, 12){\line( 0,-1){300}}
\put(1878, 12){\line( 0,-1){300}}
\put(2028, 12){\line( 0,-1){300}}
\put(1953, 12){\line( 0,-1){300}}
\put(2103, 12){\line( 0,-1){300}}
\put(2103, 12){\line( 0,-1){300}}
\put(2178, 12){\line( 0,-1){300}}
\put(2178, 12){\line( 0,-1){300}}
\put(2253, 12){\line( 0,-1){300}}
\put(2328, 12){\line( 0,-1){300}}
\put(2403, 12){\line( 0,-1){300}}
\put(2478, 12){\line( 0,-1){300}}
\put(2628, 12){\line( 0,-1){300}}
\put(2553, 12){\line( 0,-1){300}}
\put(2703, 12){\line( 0,-1){300}}
\put(2778, 12){\line( 0,-1){300}}
\put(2853, 12){\line( 0,-1){300}}
\put(3003, 12){\line( 0,-1){300}}
\put(2928, 12){\line( 0,-1){300}}
\put(3078, 12){\line( 0,-1){300}}
\put(3153, 12){\line( 0,-1){300}}
\put(3228, 12){\line( 0,-1){300}}
\put(3303, 12){\line( 0,-1){300}}
\put(3378, 12){\line( 0,-1){300}}
\put(3453, 12){\line( 0,-1){300}}
\put(3528, 12){\line( 0,-1){300}}
\put(3603, 12){\line( 0,-1){300}}
\put(3678, 12){\line( 0,-1){300}}
\put(3828, 12){\line( 0,-1){300}}
\put(3753, 12){\line( 0,-1){300}}
\put(3903, 12){\line( 0,-1){300}}
\put(3978, 12){\line( 0,-1){300}}
\put(4053, 12){\line( 0,-1){300}}
\put(4128, 12){\line( 0,-1){300}}
\put(4203, 12){\line( 0,-1){300}}
\put(4278, 12){\line( 0,-1){300}}
\put(4353, 12){\line( 0,-1){300}}
\put(4353, 12){\line( 0,-1){300}}
\put(4428, 12){\line( 0,-1){300}}
\put(4503, 12){\line( 0,-1){300}}
\put(4578, 12){\line( 0,-1){300}}
\put(4653, 12){\line( 0,-1){300}}
\put(4728, 12){\line( 0,-1){300}}
\put(4803, 12){\line( 0,-1){300}}
\put(4878, 12){\line( 0,-1){300}}
\put(4878, 12){\line( 0,-1){300}}
\put(4953, 12){\line( 0,-1){300}}
\put( 78,-588){\line( 1, 0){914}}
\put( 1764,-588){\line( 1, 0){3314}}
\put( 78,-738){\line( 1, 0){914}}
\put( 1764,-738){\line( 1, 0){3314}}
\put(153,-513){\line( 0,-1){300}}
\put(228,-513){\line( 0,-1){300}}
\put(303,-513){\line( 0,-1){300}}
\put(378,-513){\line( 0,-1){300}}
\put(453,-513){\line( 0,-1){300}}
\put(528,-513){\line( 0,-1){300}}
\put(603,-513){\line( 0,-1){300}}
\put(678,-513){\line( 0,-1){300}}
\put(753,-513){\line( 0,-1){300}}
\put(828,-513){\line( 0,-1){300}}
\put(903,-513){\line( 0,-1){300}}
\put(978,-513){\line( 0,-1){300}}
\put(1803,-513){\line( 0,-1){300}}
\put(1878,-513){\line( 0,-1){300}}
\put(1878,-513){\line( 0,-1){300}}
\put(2028,-513){\line( 0,-1){300}}
\put(1953,-513){\line( 0,-1){300}}
\put(2103,-513){\line( 0,-1){300}}
\put(2103,-513){\line( 0,-1){300}}
\put(2178,-513){\line( 0,-1){300}}
\put(2178,-513){\line( 0,-1){300}}
\put(2253,-513){\line( 0,-1){300}}
\put(2328,-513){\line( 0,-1){300}}
\put(2403,-513){\line( 0,-1){300}}
\put(2478,-513){\line( 0,-1){300}}
\put(2628,-513){\line( 0,-1){300}}
\put(2553,-513){\line( 0,-1){300}}
\put(2703,-513){\line( 0,-1){300}}
\put(2778,-513){\line( 0,-1){300}}
\put(2853,-513){\line( 0,-1){300}}
\put(3003,-513){\line( 0,-1){300}}
\put(2928,-513){\line( 0,-1){300}}
\put(3078,-513){\line( 0,-1){300}}
\put(3153,-513){\line( 0,-1){300}}
\put(3228,-513){\line( 0,-1){300}}
\put(3303,-513){\line( 0,-1){300}}
\put(3378,-513){\line( 0,-1){300}}
\put(3453,-513){\line( 0,-1){300}}
\put(3528,-513){\line( 0,-1){300}}
\put(3603,-513){\line( 0,-1){300}}
\put(3678,-513){\line( 0,-1){300}}
\put(3828,-513){\line( 0,-1){300}}
\put(3753,-513){\line( 0,-1){300}}
\put(3903,-513){\line( 0,-1){300}}
\put(3978,-513){\line( 0,-1){300}}
\put(4053,-513){\line( 0,-1){300}}
\put(4128,-513){\line( 0,-1){300}}
\put(4203,-513){\line( 0,-1){300}}
\put(4278,-513){\line( 0,-1){300}}
\put(4353,-513){\line( 0,-1){300}}
\put(4353,-513){\line( 0,-1){300}}
\put(4428,-513){\line( 0,-1){300}}
\put(4503,-513){\line( 0,-1){300}}
\put(4578,-513){\line( 0,-1){300}}
\put(4653,-513){\line( 0,-1){300}}
\put(4728,-513){\line( 0,-1){300}}
\put(4803,-513){\line( 0,-1){300}}
\put(4878,-513){\line( 0,-1){300}}
\put(4878,-513){\line( 0,-1){300}}
\put(4953,-513){\line( 0,-1){300}}
\put(1008,-1337){\framebox(750,2063){}}
\multiput(1377,-1337)(0,100){21}{\line( 0, 1){50}}
\linethickness{75\unitlength}
\put(630,236){\line(1,0){375}}
\put(630,665){\line(1,0){375}}
\put(810,236){\line(0,1){450}}
\linethickness{91\unitlength}
\put(845,455){\line(1,0){826}}
\linethickness{75\unitlength}
\put(1757,-1310){\line(1,0){375}}
\put(1757,-880){\line(1,0){375}}
\put(1935,-1310){\line(0,1){450}}
\linethickness{91\unitlength}
\put(1081,-1100){\line(1,0){826}}
\end{picture}}
\end{center}
\caption{\label{trainpic}A railroad crossing.}
\end{figure}
\begin{figure}[htbp]
\setlength{\unitlength}{0.00083300in}%
\begingroup\makeatletter\ifx\SetFigFont\undefined
\def\x#1#2#3#4#5#6#7\relax{\def\x{#1#2#3#4#5#6}}%
\expandafter\x\fmtname xxxxxx\relax \def\y{splain}%
\ifx\x\y
\gdef\SetFigFont#1#2#3{%
\ifnum #1<17\tiny\else \ifnum #1<20\small\else
\ifnum #1<24\normalsize\else \ifnum #1<29\large\else
\ifnum #1<34\Large\else \ifnum #1<41\LARGE\else
\huge\fi\fi\fi\fi\fi\fi
\csname #3\endcsname}%
\else
\gdef\SetFigFont#1#2#3{\begingroup
\count@#1\relax \ifnum 25<\count@\count@25\fi
\def\x{\endgroup\@setsize\SetFigFont{#2pt}}%
\expandafter\x
\csname \romannumeral\the\count@ pt\expandafter\endcsname
\csname @\romannumeral\the\count@ pt\endcsname
\csname #3\endcsname}%
\fi
\fi\endgroup
\begin{center}\framebox{
\begin{picture}(4374,1641)(64,-1198)
\thicklines
\put(1742,-1186){\framebox(847,1032){}}
\multiput(2165,-1186)(0,100){10}{\line( 0, 1){50}}
\put(1502,238){\circle{396}}
\put(1427,163){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{rm}R2}}}
\put(2852,238){\circle{396}}
\put(2777,163){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{rm}L2}}}
\put(377,238){\circle{396}}
\put(302,163){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{rm}L1}}}
\put(4127,238){\circle{396}}
\put(4052,163){\makebox(0,0)[lb]{\smash{\SetFigFont{12}{14.4}{rm}R1}}}
\put(153,-513){\line( 0,-1){300}}
\put(228,-513){\line( 0,-1){300}}
\put(303,-513){\line( 0,-1){300}}
\put(378,-513){\line( 0,-1){300}}
\put(453,-513){\line( 0,-1){300}}
\put(528,-513){\line( 0,-1){300}}
\put(603,-513){\line( 0,-1){300}}
\put(678,-513){\line( 0,-1){300}}
\put(753,-513){\line( 0,-1){300}}
\put(828,-513){\line( 0,-1){300}}
\put(903,-513){\line( 0,-1){300}}
\put(978,-513){\line( 0,-1){300}}
\put(1053,-513){\line( 0,-1){300}}
\put(1128,-513){\line( 0,-1){300}}
\put(1128,-513){\line( 0,-1){300}}
\put(1203,-513){\line( 0,-1){300}}
\put(1278,-513){\line( 0,-1){300}}
\put(1353,-513){\line( 0,-1){300}}
\put(1428,-513){\line( 0,-1){300}}
\put(1503,-513){\line( 0,-1){300}}
\put(1503,-513){\line( 0,-1){300}}
\put(1578,-513){\line( 0,-1){300}}
\put(2703,-513){\line( 0,-1){300}}
\put(2778,-513){\line( 0,-1){300}}
\put(2853,-513){\line( 0,-1){300}}
\put(3003,-513){\line( 0,-1){300}}
\put(2928,-513){\line( 0,-1){300}}
\put(3078,-513){\line( 0,-1){300}}
\put(3153,-513){\line( 0,-1){300}}
\put(3228,-513){\line( 0,-1){300}}
\put(3303,-513){\line( 0,-1){300}}
\put(3378,-513){\line( 0,-1){300}}
\put(3453,-513){\line( 0,-1){300}}
\put(3528,-513){\line( 0,-1){300}}
\put(3603,-513){\line( 0,-1){300}}
\put(3678,-513){\line( 0,-1){300}}
\put(3828,-513){\line( 0,-1){300}}
\put(3753,-513){\line( 0,-1){300}}
\put(3903,-513){\line( 0,-1){300}}
\put(3978,-513){\line( 0,-1){300}}
\put(4053,-513){\line( 0,-1){300}}
\put(4128,-513){\line( 0,-1){300}}
\put(4203,-513){\line( 0,-1){300}}
\put(4278,-513){\line( 0,-1){300}}
\put(4353,-513){\line( 0,-1){300}}
\put(4353,-513){\line( 0,-1){300}}
\put(2626,-586){\line( 1, 0){1800}}
\put( 76,-586){\line( 1, 0){1650}}
\put( 76,-736){\line( 1, 0){1650}}
\put(2626,-736){\line( 1, 0){1800}}
\put(1651,-511){\line( 0,-1){300}}
\put(301, 14){\vector(-1,-3){225}}
\put(1501, 14){\vector( 1,-3){225}}
\put(2851, 14){\vector(-1,-3){225}}
\put(4201, 14){\vector( 1,-3){225}}
\end{picture}}
\end{center}
\caption{\label{senspic}Placement of sensors along a railroad track.}
\end{figure}
The problem is to design a controller that guarantees the following
requirements.
\begin{description}
\item[Safety] If a train is in the crossing, the gate is closed.
\item[Liveness] The gate is open as much as possible.
\end{description}
Several assumptions are made about the pattern of train movement. For
example, if a train appears from the left, it leaves the crossing to
the right. It is easiest to express those assumptions as a
restriction on possible histories of train motion on any given track.
\paragraph{Assumptions Regarding Train Motion.}
For any given track, there is a finite or infinite sequence of moments
\[ t_0 < t_1 < t_2 < t_3 < \ldots \]
\noindent
satisfying the following conditions.
\begin{description}
\item[Initial State] The moment $t_0$ is the initial moment. The
observed part $[L1,R1]$ of the track is empty at $t_0$.
\item[Train Pattern]
If $t_{3i+1}$ appears in the sequence then $t_{3i+3}$ appears
in the sequence and we have that
\begin{itemize}
\item at $t_{3i+1}$, one oncoming train is detected at L1 or R1,
\item at $t_{3i+2}$ the train reaches the crossing, and
\item at $t_{3i+3}$ the train is detected to have left the crossing
at L2 or R2 respectively.
\end{itemize}
\item[Completeness]
There are no other trains.
\end{description}
\paragraph{Additional Assumptions.} From the moment that an oncoming
train is detected, it takes time between $\dmin$ and $\dmax$ for the
train to reach the crossing. In terms of the sequence $\langle t_0 <
t_1 < t_2 < t_3 <\ldots\rangle$ above, this assumption can be stated
as follows:
\begin{description}
\item[1] Every difference $t_{3i+2} - t_{3i+1}$ belongs to the
interval $[\dmin,\dmax]$.
\end{description}
\noindent Further, the gate closes within time $\dclose$ and opens
within time $\dopen$. This does not necessarily mean that if the
controller signals the gate to close (respectively open) at moment $t$
then the gate closes (respectively opens) by time $t +\dclose$
(respectively $t +\dopen$). Let us state the assumption more precisely
as a restriction on possible histories.
\begin{description}
\item[2] There is no interval
$I=(t,t+\dclose)$ (respectively $I=(t,t+\dopen)$) during which the
signal to close (respectively to open) is in force but the gate is not
closed (respectively opened) at any moment in $I$.
\end{description}
\noindent It is easy to see that the controller cannot guarantee the
safety requirement is satisfied if $\dmin<\dclose$. We ignore the
case $\dmin=\dclose$ and assume that
\begin{description}
\item[3] $\dclose <\dmin$.
\end{description}
\noindent
Finally, we will assume that actions are performed instantaneously. Of
course, real actions take time and the use of instantaneous actions is
an abstraction. But this may be a useful abstraction. For example,
in our case, it is natural to ignore the time taken by the
controller's actions. It is not natural at all to view closing and
opening of the gate as instantaneous actions, and we will not do that.
Let us stress that the evolving algebra methodology does not require
that actions are necessarily instantaneous. See for
example~\cite{BGR} where an instantaneous action ealgebra is refined
to a prolonged-action ealgebra.
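To make the timing assumptions concrete, the following sketch checks a proposed detection/crossing/leaving history for one track against the train-pattern assumptions, and records why Assumption~3 lets a controller that signals \emph{close} immediately upon detection meet the safety deadline. This is only an illustration with numeric values of our own choosing; it is not the evolving algebra program given in Sect.~5.

```python
DMIN, DMAX = 5.0, 10.0    # assumed bounds on travel time from sensor to crossing
DCLOSE, DOPEN = 3.0, 2.0  # assumed gate closing/opening times; note DCLOSE < DMIN

def valid_history(events):
    """events: list of (t_detect, t_reach, t_leave) triples for one track,
    in increasing time order.  Check the train-pattern assumptions:
    moments strictly increase, and each travel time t_reach - t_detect
    lies in [DMIN, DMAX] (Assumption 1)."""
    last_leave = float("-inf")
    for t1, t2, t3 in events:
        if not (last_leave < t1 < t2 < t3):
            return False
        if not (DMIN <= t2 - t1 <= DMAX):
            return False
        last_leave = t3
    return True

def close_deadline_met(t_detect):
    """If the controller signals 'close' at the moment of detection, the
    gate is closed by t_detect + DCLOSE, which precedes the earliest
    possible arrival t_detect + DMIN (Assumption 3)."""
    return t_detect + DCLOSE < t_detect + DMIN

print(valid_history([(0.0, 7.0, 9.0), (12.0, 20.0, 23.0)]))  # True
print(close_deadline_met(0.0))                               # True
```

The check mirrors the discussion above: if $\dmin < \dclose$ held instead, `close_deadline_met` would be false and no controller could guarantee safety.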
The design part of the railway crossing problem is not difficult,
especially because the problem has been addressed in a number of papers.
What remains is to formalize the design in a specification language, in our
case as an evolving algebra, and prove the safety and liveness requirements
are satisfied.
\section{Evolving Algebras Reminder}
We give a brief reminder on evolving algebras based on the EA
guide~\cite{Guide}. We present only what is necessary here and ignore many
important features.
\subsection{Static Algebras}
Static algebras are essentially logicians' structures except that a
tiny bit of meta-mathematics is built into it. They are indeed algebras in
the sense of the science of universal algebra.
A {\em vocabulary\/} is a collection of function symbols; each symbol
has a fixed arity. Some function symbols are tagged as relation
symbols (or predicates). It is supposed that every vocabulary
contains the following {\em logic symbols\/}: nullary symbols \true,
\false, \undef, a binary symbol =, and the symbols of the standard
propositional connectives.
A {\em static algebra\/} (or a {\em state\/}) $A$ {\em of
vocabulary\/} $\U$ is a nonempty set $X$ (the {\em basic set\/} or
{\em superuniverse\/} of $A$), together with interpretations of all
function symbols in $\U$ over $X$ (the {\em basic functions\/} of
$A$). A function symbol $f$ of arity $r$ is interpreted as an $r$-ary
operation over $X$ (if $r=0$, it is interpreted as an element of $X$).
The interpretations of predicates ({\em basic relations\/}) and the
logic symbols satisfy some obvious requirements stated below.
Remark on notations and denotations. A symbol in $\U$ is a name or
notation for the operation that interprets it in $A$, and the
operation is the meaning or denotation of the symbol in $A$. In English, a
word ``spoon'' is a name of a familiar table utensil, and one says ``I
like that spoon'' rather than the more cumbersome ``I like that utensil
named `spoon'\,''. Similarly, when a state is fixed, we may say that $f$
maps a tuple $\bar{a}$ to an element $b$ rather than that the
interpretation of $f$ maps a tuple $\bar{a}$ to an element $b$.
On the interpretations of logic symbols and predicates. Intuitively, (the
interpretations of) \true\ and \false\ represent truth and falsity
respectively. Accordingly, the symbols \true\ and \false\ are
interpreted by different elements. These two elements are the only
possible values of any basic relation. The Boolean connectives behave
in the expected way over these two elements, and the equality function
behaves in the expected way over all elements.
Universes and typing. Formally speaking, a static algebra is
one-sorted. However, it may be convenient to view it as many-sorted;
here we describe a standard way to do this. Some unary basic
relations are designated as universes (or sorts) and their names
may be called universe symbols. One thinks about a universe $U$ as a
set $\{x: U(x)=\true\}$. Basic functions are assigned universes as
domains. For example, the domain of a binary function $f$ may be
given as $U_1\times U_2$ where $U_1$ and $U_2$ are universes. If $f$
is a relation, this means that $f(a_1,a_2) =\false$ whenever
$a_1\not\in U_1$ or $a_2\not\in U_2$. Otherwise this means that
$f(a_1,a_2) =\undef$ whenever $a_1\not\in U_1$ or $a_2\not\in U_2$, so
that $f$ is intuitively a partial function.
Remark on the built-in piece of meta-mathematics. In first-order
logic, an assertion about a given structure does not evaluate to any
element of the structure. For technical convenience, in evolving
algebras truth and falsity are represented internally and many
assertions can be treated as terms. This technical modification does
not prevent us from dealing with assertions directly. For example,
let $f,g$ be nullary function symbols and $P$ a binary
predicate. Instead of saying that $P(f,g)$ evaluates to \true\
(respectively \false) at a state $A$, we may say $P(f,g)$ holds
(respectively fails) at $A$. In some cases, we may even omit ``holds'';
for example, we may assert simply that $f\neq g$. Admittedly, this
is not very pedantic, but we write for humans, not machines.
\subsection{Updates}
Alternatively, a state can be viewed as a kind of memory.
A {\em location\/} $\ell$ of a state $A$ of vocabulary $\U$ is a pair
$\ell = (f,\bar{a})$ where $f$ is a symbol in $\U$ of some arity $r$
and $\bar{a}$ is an $r$-tuple of elements of $A$ (that is, of the
superuniverse of $A$). The element $f(\bar{a})$ is the {\em content\/}
of location $\ell$ in $A$.
An {\em update} of state $A$ is a pair $(\ell,b)$, where $\ell$ is some
location $(f,\bar{a})$ of $A$ and $b$ is an element of $A$; it is
supposed that $b$ is (the interpretation of) \true\ or \false\ if $f$
is a predicate. This update
is {\em trivial\/} if $b$ is the content of $\ell$ in $A$. An update can be
performed: just replace the content of location $\ell$ with $b$. The
vocabulary, the superuniverse and the contents of other locations remain
unchanged. The state changes only if the update is nontrivial.
Call a set $S =\{(\ell_1,b_1),\ldots,(\ell_n,b_n)\}$ of updates of a state $A$
{\em consistent\/} if the locations are distinct. In other words, $S$ is {\em
inconsistent\/} if there are $i,j$ such that $\ell_i =\ell_j$ but $b_i\neq b_j$.
In the case that $S$ is consistent it is performed as follows: replace the
content of $\ell_1$ with $b_1$, the content of $\ell_2$ with $b_2$ and so on.
To perform an inconsistent update set, do nothing.
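The semantics of performing an update set can be made concrete in a short Python sketch. The encoding is ours: a state is a dictionary from locations (pairs of a function symbol and an argument tuple) to contents, and the name \texttt{perform} is not from the text.

```python
def perform(state, updates):
    """Perform an update set on a state in place.

    updates is a set of (location, value) pairs.  If the set is
    inconsistent -- two updates at the same location with different
    values -- do nothing, as in the text.
    """
    values = {}
    for loc, b in updates:
        if loc in values and values[loc] != b:
            return state          # inconsistent: do nothing
        values[loc] = b
    state.update(values)          # consistent: perform all updates at once
    return state
```

Note that a consistent set is performed simultaneously: all new contents are collected first and then written, so no update can observe the effect of another.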
A pedantic remark. The equality used in the previous paragraph is
not the built-in equality of $A$ but rather the equality of the meta
language. One could use another symbol for the built-in equality, but
this is not necessary.
A remark to theoreticians. At the point that updates are introduced,
some people, in particular Robin Milner~\cite{Milner}, raise an
objection that an update may destroy algebraic properties. For
example, an operation may lose associativity.
That is true. So, in what sense are
static algebras algebraic? They are algebraic in the sense that the
nature of elements does not matter and one does not distinguish
between isomorphic algebras. A standard way to access a particular
element is to write a term that evaluates to that element. Coming
back to algebraic properties like associativity (and going beyond the
scope of this paper), let us note that, when necessary, one can
guarantee that such a property survives updating by declaring
some functions static or by imposing appropriate integrity constraints
or just by careful programming.
\subsection{Basic Rules}
In this subsection we present the syntax and semantics of basic rules.
Each rule $R$ has a vocabulary, namely the collection of function symbols
that occur in $R$. A rule $R$ is applicable to a state $A$ only if the
vocabulary of $A$ includes that of $R$. At each state $A$ of sufficiently
rich vocabulary, $R$ gives rise to a set of updates. To execute $R$ at
such a state $A$, perform the update set at $A$.
A {\em basic update rule\/} $R$ has the form
\begin{tabbing}mmmmm\=\kill
\> $f(e_1,\ldots,e_r) := e_0$\
\end{tabbing}
\noindent
where $f$ is an $r$-ary function symbol (the {\em head\/} of $R$) and
each $e_i$ is a ground term, that is, a term without any variables.
(In programming languages, terms are usually called expressions; that
motivates the use of the letter $e$ for terms.) To execute $R$ at a state
$A$ of sufficiently rich vocabulary, evaluate all terms $e_i$ at $A$
and then change $f$ accordingly. In other words, the update set
generated by $R$ at $A$ consists of one update $(\ell,a_0)$ where
$\ell = (f,(a_1,\ldots,a_r))$ and each $a_i$ is the value of $e_i$ at
$A$.
For example, consider an update rule $f(c_1 + c_2) := c_0$ and a state
$A$ where $+$ is interpreted as the standard addition function on
natural numbers and where $c_1,c_2,c_0$ have values $3, 5, 7$
respectively. To execute the rule at $A$, set $f(8)$ to $7$.
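The example above can be replayed in Python. The representation of ground terms as nested \texttt{(symbol, subterms)} pairs, and the names \texttt{evaluate} and \texttt{state}, are our choices for illustration only.

```python
# Executing the basic update rule f(c1 + c2) := c0 at the state of the
# example: c1, c2, c0 have values 3, 5, 7 and + is standard addition.

def evaluate(state, term):
    """Evaluate a ground term, given as a nested pair (symbol, subterms)."""
    sym, args = term
    return state[(sym, tuple(evaluate(state, a) for a in args))]

state = {
    ('c1', ()): 3, ('c2', ()): 5, ('c0', ()): 7,
    ('+', (3, 5)): 8,               # the relevant entry of the addition table
}

# The rule generates the single update (location, value) = ((f, (8,)), 7).
lhs_args = (evaluate(state, ('+', (('c1', ()), ('c2', ())))),)
update = (('f', lhs_args), evaluate(state, ('c0', ())))
```

Performing this one-element update set indeed sets $f(8)$ to $7$, as stated.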
There are only two basic rule constructors.
One is the {\em conditional constructor\/} which
produces rules of the form:
\begin{tabbing}mmmmm\=\kill
\>{\bf if} $g$ {\bf then} $R_1$ {\bf else} $R_2$ {\bf endif}
\end{tabbing}
\noindent where $g$ is a ground term (the {\em guard\/} of the new rule) and
$R_1,R_2$ are rules. To execute the new rule in a state $A$ of
sufficiently rich vocabulary, evaluate the guard. If it is true, then
execute $R_1$; otherwise execute $R_2$. (The ``{\bf else}'' clause
may be omitted if desired.)
The other constructor is the {\em block constructor\/} which produces
rules of the form:
\begin{tabbing}mmmmm\=mmmmm\=\kill
\>{\bf block}\\
\> \> $R_1$\\
\> \> $\vdots$\\
\> \> $R_k$\\
\>{\bf endblock}
\end{tabbing}
\noindent where $R_1,\ldots,R_k$ are rules. (We often omit the
keywords ``{\bf block}'' and ``{\bf endblock}'' for brevity and use
indentation to eliminate ambiguity.) To execute the new rule in a
state $A$ of sufficiently rich vocabulary, execute rules
$R_1,\ldots,R_k$ simultaneously. More precisely, the update set
generated by the new rule at $A$ is the union of the update sets
generated by the rules $R_i$ at $A$.
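The two constructors have a direct functional reading: a rule can be represented as a function from a state to an update set. The representation and the names \texttt{if\_rule} and \texttt{block\_rule} are ours.

```python
# Rules as functions from a state to an update set (our encoding).

def if_rule(guard, r1, r2=lambda state: set()):
    """Conditional constructor: pick a branch by the guard's value."""
    def rule(state):
        return r1(state) if guard(state) else r2(state)
    return rule

def block_rule(*rules):
    """Block constructor: the union of the constituents' update sets."""
    def rule(state):
        updates = set()
        for r in rules:            # executed simultaneously
            updates |= r(state)
        return updates
    return rule
```

The union in \texttt{block\_rule} reflects the simultaneity of the block constructor: every constituent rule is evaluated against the same given state.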
A {\em basic program\/} is simply a basic rule.
In this paper we say that a rule $R$ is {\em enabled\/} at a state $A$ of
sufficiently rich vocabulary if the update set generated by $R$ at $A$ is
consistent and contains a non-trivial update; otherwise $R$ is {\em
disabled\/} at $A$. (The notion of being enabled has not been formalized in
the EA guide.) Rules will be executed only if they are enabled, so that
the execution changes a given state. This seems to be a very pedantic
point. What harm is done by executing a rule that does not change a given
state? It turns out that the stricter notion of being enabled is
convenient in real-time computational theory; see Lemma~\ref{lem4.4} in this
connection.
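The notion of being enabled can be stated directly in code. The sketch below is our paraphrase (state as a dictionary from locations to contents, update set as a set of pairs); the name \texttt{enabled} matches the text.

```python
def enabled(state, updates):
    """A rule with the given update set is enabled at state iff the set
    is consistent and contains a non-trivial update."""
    seen = {}
    for loc, b in updates:
        if loc in seen and seen[loc] != b:
            return False                       # inconsistent
        seen[loc] = b
    # non-trivial: at least one location gets a new content
    return any(state.get(loc) != b for loc, b in seen.items())
```

In particular, a rule whose update set only rewrites locations with their current contents counts as disabled, which is the stricter notion used in Lemma~\ref{lem4.4}.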
\subsection{Parallel Synchronous Rules}
Generalize the previous framework in two directions. First, permit
terms with variables and generalize the notion of state: in addition
to interpreting some function names, a generalized state may assign
values to some variables. (Notice that a variable cannot be the head
of an update rule.)
Second, generalize the notion of guards by allowing bounded
quantification. More formally, we define {\em guards\/} as a new
syntactical category. Every term $P(e_1,\ldots,e_r)$, where $P$
is a predicate, is a guard. A Boolean combination of guards is a
guard. If $g(x)$ is a guard with a variable $x$ and $U$ is a universe
symbol then the expression $(\forall x\in U) g(x)$ is also a guard.
The semantics of guards is quite obvious. A guard $g(\by)$ with free
variables $\by$ holds or fails at a (generalized) state $A$ that
assigns values to all free variables of $g$. The least trivial case
is that of a guard $g(\by) = (\forall x\in U) g'(x,\by)$. For every
element $b$ of $U$ in $A$, let $A_b$ be the expansion of $A$ obtained
by assigning the value $b$ to $x$. Then $g(\by)$ holds at $A$ if
$g'(x,\by)$ holds at every $A_b$; otherwise it fails at $A$.
Now consider a generalized basic rule $R(x)$ with a variable $x$ and
let $U$ be a universe symbol. Form the following rule $R^*$:
\begin{tabbing}mmmmm\=mmmmm\=\kill
\>{\bf var} $x$ {\bf ranges over} $U$\\
\> \>$R(x)$\\
\>{\bf endvar}
\end{tabbing}
Intuitively, to execute $R^*$, one executes $R(x)$ for every $x\in U$.
To make this more precise, let $A$ be a (generalized) state that
interprets all function names in the vocabulary of $R(x)$ and assigns
values to all free variables of $R(x)$ except for $x$. For each
element $b$ of the universe $U$ in $A$, let $A_b$ be the expansion of
$A$ obtained by assigning the value $b$ to $x$, and let $E_b$ be the
update set generated by $R(x)$ at $A_b$. Since $x$ does not appear as
the head of any update instruction in $R(x)$, each $E_b$ is also a set
of updates of $A$. The update set generated by $R^*$ at $A$ is the
union of the update sets $E_b$.
Call the new rule a {\em parallel synchronous rule\/} (or a {\em
declaration rule\/}, as in the EA guide). A {\em parallel synchronous
program\/} is simply a parallel synchronous rule without free variables.
Every occurrence of a variable should be bound by a declaration or a
quantifier.
\subsection{Special Distributed Programs}
For our purposes here, a {\em distributed program\/} $\Pi$ is given by a
vocabulary and a finite set of basic or parallel synchronous programs
with function symbols from the vocabulary of $\Pi$. The constituent
programs are the {\em modules\/} of $\Pi$. A {\em state\/} of $\Pi$ is a
state of the vocabulary of $\Pi$.
Intuitively, each module is executed by a separate agent.
This is a very restricted definition. For example, the EA guide allows
the creation of new agents during the evolution.
It is nevertheless convenient to distinguish between a module (a
piece of syntax) and its executor, and even to think about agents in
anthropomorphic terms. But since in this case agents are uniquely determined by
their programs, there is no real need to have agents at all, and we may
identify an agent with the name of its program.
\section{Special Distributed Real-Time Ealgebras}
A program does not specify a (distributed) ealgebra completely.
We need to define what constitutes a computation (or a run) and
then to indicate initial states and maybe a relevant class of runs.
In this section, we define a restricted class of distributed
real-time evolving algebras by restricting attention to static algebras of
a particular kind and defining a particular notion of run.
We are interested in computations in real time that satisfy the following
assumptions.
\begin{description}
\item[I1] Agents execute instantaneously.
\item[I2] Environmental changes take place instantaneously.
\item[I3] The global state of the given distributed ealgebra is well defined
at every moment.
\end{description}
Let us stress again that the three assumptions above are not a part of the
evolving algebra definition. The prolonged-action ealgebra~\cite{BGR},
mentioned in Sect.~2, satisfies none of these three assumptions.
\paragraph{Vocabularies and Static Structures.}
Fix some vocabulary $\U$ with a universe symbol Reals and let $\U^+$ be the
extension of $\U$ with a nullary function symbol CT; it is supposed of
course that $\U$ does not contain CT. Restrict attention to $\U^+$-states
where the universe Reals is the set of real numbers and CT evaluates to a
real number. Intuitively, CT gives the current time.
\subsection{Pre-runs}
\begin{definition}\label{Definition of pre-runs}
A {\em pre-run\/} $R$ of vocabulary $\U^+$ is a mapping from the interval
$[0,\infty)$ of the real line to states of vocabulary $\U^+$ satisfying the
following requirements, where $\r(t)$ is the reduct of $R(t)$ to $\U$.
\begin{description}
\item[Superuniverse Invariability]
The superuniverse does not change during the evolution; that is, the
superuniverse of every $R(t)$ is that of $R(0)$.
\item[Current Time]
At every $R(t)$, $\CT$ evaluates to $t$.
\item[Discreteness]
For every $\t>0$, there is a finite sequence $0 = t_0 < t_1 <\ldots < t_n
=\t$ such that if $t_i < \a < \b <t_{i+1}$ then $\r(\a) =\r(\b)$. \qed
\end{description}
\end{definition}
Remarks. Of course, we could start with an initial moment different from
$0$, but without loss of generality we can assume that the initial moment is
$0$. Our discreteness requirement is rather simplistic (but sufficient for
our purposes in this paper). One may have continuous time-dependent basic
functions around (in addition to CT); in such cases, the discreteness
requirement becomes more subtle.
In the rest of this section, $R$ is a pre-run of vocabulary $\U^+$ and
$\r(t)$ is the reduct of $R(t)$ to $\U$.
The notation $\r(t+)$ and $\r(t-)$ is self-explanatory; still, let
us define it precisely. $\r(t+)$ is any state $\r(t+\e)$ such that
$\e>0$ and $\r(t+\d) = \r(t+\e)$ for all positive $\d<\e$.
Similarly, if $t>0$ then $\r(t-)$ is any state $\r(t-\e)$ such that
$0<\e\leq t$ and $\r(t-\d) = \r(t-\e)$ for all positive $\d<\e$.
Call a moment $t$ {\em significant\/} for $R$ if (i)~$t=0$
or (ii)~$t>0$ and either $\r(t)\neq\r(t-)$ or $\r(t)\neq\r(t+)$.
\begin{lemma}\label{Lemma 4.1}
For any moment $t$, $\r(t+)$ is well defined. For any moment $t>0$,
$\r(t-)$ is well defined. If there are infinitely many significant
moments then their supremum equals $\infty$.
\end{lemma}
\begin{proof}
Obvious.
\qed\end{proof}
Recall that a set $S$ of nonnegative reals is {\em discrete\/} if it has no
limit points. In other words, $S$ is discrete if and only if, for every
nonnegative real $\t$, the set $\{t\in S: t <\t\}$ is finite. The
discreteness requirement in the definition of pre-runs means exactly that
the collection of the significant points of $R$ is discrete.
We finish this subsection with a number of essentially self-evident
definitions related to a given pre-run $R$. Let $e$ be a term of
vocabulary $\U^+$. If $e$ has free variables then fix the values of
those variables, so that $e$ evaluates to a definite value in every
state of vocabulary $\U^+$. (Formally speaking $e$ is a pair of the
form $(e',\xi)$ where $e'$ is a term and $\xi$ assigns elements of
$R(0)$ to free variables of $e'$.)
{\em The value $e_t$ of $e$ at moment $t$\/} is the value of $e$ in $R(t)$.
Accordingly, $e$ {\em holds\/} (respectively {\em fails\/}) at $t$ if it does
so in $R(t)$. Likewise, a module is {\em enabled\/} (respectively {\em
disabled\/}) at $t$ if it is so in $R(t)$. In a similar vein, we speak about
a time interval $I$. For example, $e$ {\em holds over\/} $I$ if it holds at
every $t\in I$.
If $e$ has the same value over some nonempty interval $(t,t+\e)$, then this
value is {\em the value $e_{t+}$ of $e$ at $t+$\/}.
Similarly, if $t>0$ and $e$ has the same value over some nonempty
interval $(t-\e,t)$, then this value is {\em the value $e_{t-}$ of $e$ at
$t-$\/}. Define accordingly when $e$ holds, fails at $t+, t-$ and when an
agent is enabled, disabled at $t+, t-$.
Further, $e$ {\em is set to\/} a value $a$ (or simply {\em becomes\/} $a$)
at $t$ if either (i)~$e_{t-}\neq a$ and $e_t = a$,
or else (ii)~$e_t\neq a$ and $e_{t+}=a$.
Define accordingly when an agent becomes enabled, disabled at $t$.
\subsection{Runs}
Now consider a distributed program $\Pi$ with function symbols from
vocabulary $\U^+$. Runs of $\Pi$ are pre-runs with some restrictions
on how the basic functions evolve. Depending upon their use, the
basic functions of $\Pi$ fall into the
following three disjoint categories.
\begin{description}
\item[Static] These functions do not change during any run.
The names of these functions do not appear as the heads of update
rules in $\Pi$.
\item[Internal Dynamic] These
functions may be changed only by agents.
The names of these functions appear as the heads of update rules and
the functions are changed by executing the modules of $\Pi$.
For brevity, we abbreviate ``internal dynamic'' to ``internal''.
\item[External Dynamic] These functions may be changed only by the
environment. The names of these functions do not appear as the heads
of update rules; nevertheless the functions can change from
one state to another. Who changes them? The environment.
Some
restrictions may be imposed on how these functions can change.
For brevity, we abbreviate ``external dynamic'' to ``external''.
\end{description}
Remark. It may be convenient to have functions that can be changed
both by agents and the environment. The EA guide allows that, but we
do not need that generality here.
Before we give the definition of runs, let us explain informally that
one should be cautious with instantaneous actions. In particular, it
may not be possible to assume that agents always fire at the moment
they become enabled. Consider the following two interactive
scenarios.
\begin{description}
\item[Scenario 1] The environment changes a nullary external function
$f$ at moment $t$. This new value of $f$ enables an agent $X$. The
agent fires immediately and changes another nullary function $g$.
\end{description}
What are the values of $f$ and $g$ at time $t$, and at what time does
$X$ fire? If $f$ has its old value at $t$ then $X$ is disabled at $t$
and fires at some time after $t$; thus $X$ does not fire immediately.
If $g$ has its new value already at $t$ then $X$ had to fire at some
time before $t$; that firing could not be triggered by the change of
$f$. We arrive at the following conclusions: $f$ has its new value at
$t$ (and thus $f_t$ differs from $f_{t-}$), $X$
fires at $t$, and $g$ has its old value at $t$ (and thus
$g_t$ differs from $g_{t+}$).
\begin{description}
\item[Scenario 2] At time $t$, an agent $X$ changes a function $g$ and
in so doing enables another agent $Y$ while disabling himself.
\end{description}
When does $Y$ fire? Since $X$ fires at $t$, it is enabled at $t$ and
thus $g$ has its old value at $t$. Hence $Y$ is disabled at $t$ and
fires at some time after $t$. Thus $Y$ cannot react immediately.
The following definition is designed to allow immediate agents.
\begin{definition}\label{Definition of runs}
A pre-run $R$ of vocabulary $\U^+$ is a {\em run\/} of $\Pi$ if it
satisfies the following conditions where $\r(t)$ is the reduct of $R(t)$
to $\U$.
\begin{enumerate}
\item
If $\r(t+)$ differs from $\r(t)$ then $\r(t+)$ is the $\U$-reduct of
the state resulting from executing some modules $M_1,\ldots,M_k$
at $R(t)$. In
such a case we say $t$ is {\em internally significant\/}
and the executors of $M_1,\ldots,M_k$ {\em fire\/} at $t$. All
external functions with names in $\U$
have the same values in $\r(t)$ and $\r(t+)$.
\item
If $t > 0$ and $\r(t)$ differs from $\r(t-)$ then they differ only
in the values of external functions. In such a case we say
$t$ is {\em externally significant\/}. All internal functions
have the same values in $\r(t-)$ and $\r(t)$. \qed
\end{enumerate}
\end{definition}
Remark. Notice the global character of the definition of firing. An agent
fires at a moment $t$ if $\r(t+)\neq\r(t)$. This somewhat simplified
definition of firing is sufficient for our purposes in this paper.
In the rest of this section, $R$ is a run of $\Pi$ and $\r(t)$ the reduct
of $R(t)$ to $\U$. Let $e$ be a term with fixed values of all its free
variables. A
moment $t$ is {\em significant for $e$\/} if, for every $\e>0$, there exists a
moment $\a$ such that $|\a - t| <\e$ and $e_\a\neq e_t$. Call $e$ {\em
discrete\/} (in the given run $R$) if the collection of significant moments
of $e$ is discrete. In other words, $e$ is discrete if and only if, for every
$t>0$, there is a finite sequence
\[ 0 = t_0 < t_1 < \ldots < t_n = t\]
\noindent
such that if $t_i <\a <\b < t_{i+1}$ then $e_\a = e_\b$.
\begin{lemma}[(Discrete Term Lemma)]
If a term $e$ is discrete then
\begin{enumerate}
\item For every $t$, $e$ has a value at $t+$.
\item For every $t>0$, $e$ has a value at $t-$.
\end{enumerate}
\end{lemma}
\begin{proof}
Obvious.
\qed\end{proof}
\begin{lemma}[(Preservation Lemma)]
Suppose that a term $e$ with fixed values of its free variables does not
contain CT. Then $e$ is discrete. Furthermore,
\begin{enumerate}
\item If $e$ contains no external functions and $t>0$ then $e_t = e_{t-}$.
\item If $e$ contains no internal functions then $e_{t+} = e_t$.
\end{enumerate}
\end{lemma}
\begin{proof}
This is an obvious consequence of the definition of runs.
\qed\end{proof}
It may be natural to have agents that fire the instant they are
enabled.
\begin{definition}
An agent is {\em immediate\/} if it fires at every state where it is
enabled.
\qed\end{definition}
\begin{lemma}[(Immediate Agent Lemma)]\label{lem4.4}\
\begin{enumerate}
\item The set of moments when an immediate agent is enabled is discrete.
\item If the agent is enabled at some moment $t$ then it is disabled at
$t+$ and, if $t>0$, at $t-$.
\end{enumerate}
\end{lemma}
\begin{proof}\
\begin{enumerate}
\item
If the agent is enabled at a moment $t$, it fires at $t$ and therefore
(according to our notion of being enabled) changes the state; it follows
that $t$ is a significant moment of the run. By the discreteness condition
on pre-runs, the collection of significant moments of a run is discrete.
It remains to notice that every subset of a discrete set is discrete.
\item
Follows from 1.
\qed
\end{enumerate}
\end{proof}
Recall Scenario 2. There agent $Y$ cannot be immediate.
Nevertheless, it may make sense to require that some agents cannot
delay firing forever.
\begin{definition}\label{Definition of bounded agent}
An agent X is {\em bounded\/} if it is immediate or there exists a
bound
$b>0$ such that there is no interval $(t,t+b)$ during which $X$ is
continuously enabled but does not fire.
\qed\end{definition}
Notice that it is not required that if a bounded agent $X$ becomes
enabled at some moment $\a$, then it fires at some moment $\b < \a +
b$. It is possible a priori that $X$ becomes disabled and does not fire in that
interval.
\section{The Ealgebra for the Railroad Crossing Problem}
We present our solution for the railroad crossing problem formalized
as an evolving algebra $\cA$ of a vocabulary $\U^+ =\U\cup\{\CT\}$.
In this section, we describe the program and initial states of $\cA$;
this will describe the vocabulary as well. The relevant runs of $\cA$
will be described in the next section.
The program of $\cal A$ has two modules \Gate\ and \Controller,
shown in Fig.~\ref{rules}.
\begin{figure}[htbp]
\begin{center}
\begin{minipage}{3.5in}
\begin{tabbing}mmm\=mmm\=mmm\=mmm\=mmm\=\kill
\Gate\\
\>{\bf if} Dir = open {\bf then} GateStatus := opened {\bf endif}\\
\>{\bf if} Dir = close {\bf then} GateStatus := closed {\bf endif}\\
\\
\Controller\\
\>{\bf var} $x$ {\bf ranges over} Tracks\\
\>\>{\bf if} TrackStatus$(x)$ = coming {\bf and} Deadline$(x) = \infty$ {\bf then}\\
\>\>\>Deadline$(x)$ := \CT + WaitTime\\
\>\>{\bf endif}\\
\>\>{\bf if} $\CT = $Deadline$(x)$ {\bf then} Dir := close {\bf endif}\\
\>\>{\bf if} TrackStatus$(x)$ = empty {\bf and} Deadline$(x)< \infty$ {\bf then}\\
\>\>\>Deadline$(x)$ := $\infty$\\
\>\>{\bf endif}\\
\>{\bf endvar}\\
\>{\bf if} Dir=close {\bf and} SafeToOpen {\bf then} Dir := open
{\bf endif}\\
\end{tabbing}
\end{minipage}
\end{center}
\caption{\label{rules}Rules for \Gate\ and \Controller.}
\end{figure}
Here \WT\ abbreviates the term $\dmin - \dclose$, and SafeToOpen
abbreviates the term
\[(\forall x\in\Tracks)
[\TS(x) = \empty \mbox{\ \ or\ \ } \CT + \dopen < \DL(x)].\]
We will refer to the two constituent rules of \Gate\ as OpenGate,
CloseGate respectively. We will refer to the three constituent rules
of \Controller's parallel synchronous rule as $\SD(x)$, $\SC(x)$,
$\CD(x)$, respectively, and the remaining conditional rule as $\SO$.
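One firing of the \Controller\ module can be sketched in Python. The encoding of the state as plain Python values is ours, all guards read the old state as evolving algebra semantics requires, and for brevity the sketch elides the update-set consistency bookkeeping on Dir.

```python
INF = float('inf')   # plays the role of the element infinity

def controller_step(ct, status, deadline, direction, wait_time, d_open):
    """One synchronous execution of the Controller rules (a sketch).

    ct         -- current time CT
    status     -- TrackStatus, a dict from tracks to 'empty'/'coming'/...
    deadline   -- Deadline, a dict from tracks to extended reals
    wait_time  -- WaitTime = dmin - dclose, passed in for simplicity
    """
    new_deadline, new_dir = dict(deadline), direction
    for x in status:
        if status[x] == 'coming' and deadline[x] == INF:
            new_deadline[x] = ct + wait_time       # set the deadline
        if ct == deadline[x]:
            new_dir = 'close'                      # signal closing
        if status[x] == 'empty' and deadline[x] < INF:
            new_deadline[x] = INF                  # clear the deadline
    safe_to_open = all(status[x] == 'empty' or ct + d_open < deadline[x]
                       for x in status)
    if direction == 'close' and safe_to_open:
        new_dir = 'open'                           # signal opening
    return new_deadline, new_dir
```

For example, with one track that starts coming at time $0$ and WaitTime $2$, the deadline is set to $2$ at once, Dir becomes close when CT reaches the deadline, and it becomes open again once the track is empty and SafeToOpen holds.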
Our \GS\ has only two values: opened and closed. This is of course a
simplification. The position of a real gate could be anywhere between
fully closed and fully opened. (In \cite{HL}, the position of the
gate ranges between $0^\circ$ and $90^\circ$.) But this simplification is
meaningful. The problem is posed on a level of abstraction where it
does not matter whether the gate swings, slides, snaps or does
something else; it is even possible that there is no physical gate,
just traffic lights. Furthermore, suppose that the gate is opening
and consider its position as it swings from $0^\circ$ to $90^\circ$. Is it
still closed or already open at $75^\circ$? One may say that it is
neither, that it is opening. But for the waiting cars, it is still
closed. Accordingly \GS\ is intended to be equal to closed at this
moment. It may change to opened when the gate reaches $90^\circ$.
Alternatively, in the case when the crossing is equipped with traffic
lights, it may change to opened when the light becomes green.
Similarly, it may change from opened to closed when the light becomes
red. If one is interested in specifying the gate in greater detail,
our ealgebra can be refined by means of another ealgebra.
The program does not define our evolving algebra $\cA$ completely. In
addition, we need to specify a collection of {\em initial states\/} and
relevant runs.
Initial states of $\cA$ satisfy the following conditions:
\begin{enumerate}
\item
The universe Tracks is finite. The universe ExtendedReals
is an extension of the universe Reals with an additional element
$\infty$. The binary relation $<$ and the binary operation $+$ are
standard; in particular $\infty$ is the largest element of
ExtendedReals.
\item The nullary functions close and open are interpreted by different
elements of the universe Directions. The nullary functions closed and
opened are interpreted by different elements of the universe
GateStatuses. The nullary functions empty, coming, \incrossing\ are
different elements of the universe TrackStatuses.
\item The nullary functions
$\dclose, \dopen, \dmax, \dmin$ are positive reals such that
\[ \dclose < \dmin \leq \dmax . \]
One may assume for simplicity of understanding that these four
reals are predefined: that is, they have the same value in all initial
states. This assumption is not necessary.
\item The unary function \TS\ assigns (the element called) empty to
every track (that is, to every element of the universe Tracks). The unary
function Deadline assigns $\infty$ to every track.
\end{enumerate}
It is easy to see that, in any run, every value of the internal function \DL\
belongs to ExtendedReals.
\section{Regular Runs}
The following definition takes into account the assumptions of
Sect.~2.
\subsection{Definitions}
\begin{definition}
A run $R$ of our evolving algebra is {\em regular\/} if it satisfies
the following three conditions.
\begin{description}
\item[Train Motion] For any track $x$, there is a finite or infinite
sequence
\[ 0=t_0 < t_1 < t_2 < t_3 < \ldots \]
\noindent of so-called {\em significant moments of track $x$\/} such that
\begin{itemize}
\item
$\TS(x)$ = empty holds over every interval $[t_{3i}, t_{3i+1})$;
\item
$\TS(x)$ = coming holds over every interval $[t_{3i+1},t_{3i+2})$,
and\\ $\dmin \leq (t_{3i+2} - t_{3i+1}) \leq \dmax$;
\item
$\TS(x) = \incrossing$ holds over every interval $[t_{3i+2},
t_{3i+3})$; and
\item if $t_k$ is the final significant moment in the sequence, then
$k$ is divisible by $3$ and $\TS(x)=\empty$ over $[t_k,\infty)$.
\end{itemize}
\item[Controller Timing] Agent \Controller\ is immediate.
\item[Gate Timing] Agent \Gate\ is bounded. Moreover, there is no
time interval $I=(t,t+\dclose)$ such that [Dir=close and $\GS
=\opened$] holds over $I$. Similarly there is no interval
$I=(t,t+\dopen)$ such that [Dir=open and $\GS =\closed$] holds over
$I$. \qed
\end{description}
\end{definition}
In the rest of this paper, we restrict attention to regular runs of
$\cA$. Let $R$ be a regular run and $\r$ be the reduct of $R$ to
$\U$.
\subsection{Single Track Analysis}
Fix a track $x$ and let $0=t_0<t_1<t_2<\ldots$ be the significant moments
of $x$.
\begin{lemma}[(Deadline Lemma)]\
\begin{enumerate}
\item $\DL(x) =\infty$ over $(t_{3i},t_{3i+1}]$, and $\DL(x) = t_{3i+1} +\WT$
over $(t_{3i+1},t_{3i+3}]$.
\item Let $\Dclose = \dclose + (\dmax - \dmin) = \dmax - \WT$.
If $\TS(x)\neq\incrossing$ over an interval $(\a,\b)$, then
Dead\-line$(x)\geq\b -\Dclose$ over $(\a,\b)$.
\end{enumerate}
\end{lemma}
\begin{proof}\
\begin{enumerate}
\item
A quite obvious induction along the sequence
\[ (t_0,t_1], (t_1,t_3], (t_3,t_4], (t_4,t_6], \ldots. \]
\noindent
The basis of induction. We prove that $\DL(x) =\infty$ over $I =
(t_0,t_1)$; it will follow by Preservation Lemma that $\DL(x) =\infty$
at $t_1$. Initially, $\DL(x) =\infty$. Only $\SD(x)$ can alter that
value of $\DL(x)$, but $\SD(x)$ is disabled over $(t_0,t_1)$.
The induction step splits into two cases.
\paragraph{Case 1.} Given that $\DL(x) =\infty$ at $t_{3i+1}$, we
prove that $\DL(x) = t_{3i+1} +\WT$ over $I = (t_{3i+1},t_{3i+3})$; it
will follow by Preservation Lemma that $\DL(x) = t_{3i+1} +\WT$ at
$t_{3i+3}$. $\SD(x)$ is enabled and therefore fires at $t_{3i+1}$
setting $\DL(x)$ to $t_{3i+1} +\WT$. $\CD(x)$ is the only rule that
can alter that value of $\DL(x)$ but it is disabled over $I$ because
$\TS(x)\neq\empty$ over $I$.
\paragraph{Case 2.} Given that $\DL(x) <\infty$ at $t_{3i}$ where
$i>0$, we prove that $\DL(x) =\infty$ over $I = (t_{3i},t_{3i+1})$; it
will follow by Preservation Lemma that $\DL(x) =\infty$ at $t_{3i+1}$.
$\CD(x)$ is enabled and therefore fires at $t_{3i}$ setting $\DL(x)$
to $\infty$. Only $\SD(x)$ can alter that value of $\DL(x)$ but it is
disabled over $I$ because $\TS(x) =\empty\neq\coming$ over $I$.
\item By contradiction suppose that $\DL(x) <\b -\Dclose$ at some
$t\in(\a, \b)$. By 1, there is an $i$ such that $t_{3i+1}< t\leq
t_{3i+3}$ and $\DL(x) = t_{3i+1} +\WT$ at $t$. Since $(\a,\b)$ and
the \incrossing\ interval $[t_{3i+2},t_{3i+3})$ are disjoint, we have
that $t_{3i+1} < t < \b\leq t_{3i+2}$. By the definition of regular
runs, $\dmax\geq t_{3i+2} - t_{3i+1}\geq\b - t_{3i+1}$, so that
$t_{3i+1}\geq\b -\dmax$. We have
\[\begin{array}{rclcl}
\b-\Dclose & > & \DL(x) \mbox{ at } t & = & t_{3i+1}+\WT \\
& \geq & \b-\dmax + \WT & = & \b-\Dclose
\end{array}\]
\noindent which is impossible. \qed
\end{enumerate}
\end{proof}
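The identity $\Dclose = \dclose + (\dmax - \dmin) = \dmax - \WT$ and the final chain of inequalities are plain arithmetic; a quick numeric check with sample values of our own choosing:

```python
# Sample values (ours) satisfying dclose < dmin <= dmax.
dclose, dmin, dmax = 3.0, 5.0, 7.0
wt = dmin - dclose                   # WaitTime = dmin - dclose
Dclose = dclose + (dmax - dmin)
assert Dclose == dmax - wt           # the identity used in part 2

# The chain in part 2: from t_{3i+1} >= b - dmax it follows that
# DL(x) = t_{3i+1} + WT >= b - dmax + WT = b - Dclose.
b = 10.0
t1 = b - dmax                        # the worst case for t_{3i+1}
assert t1 + wt >= b - Dclose
```

With these values $\WT = 2$ and $\Dclose = 5$, and the worst-case deadline $t_{3i+1} + \WT$ lands exactly on $\b - \Dclose$, confirming that the bound in part 2 is tight.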
\begin{corollary}[(Three Rules Corollary)]\
\begin{enumerate}
\item $\SD(x)$ fires exactly at moments $t_{3i+1}$, that is exactly when
Track\-Status$(x)$ becomes coming.
\item $\SC(x)$ fires exactly at moments $t_{3i+1} +\WT$.
\item $\CD(x)$ fires exactly at moments $t_{3i}$ with
$i>0$, that is exactly when $\TS(x)$ becomes empty.
\end{enumerate}
\end{corollary}
\begin{proof}
Obvious.
\qed\end{proof}
Let $s(x)$ be the quantifier-free part
\[ \TS(x) = \empty \mbox{\ \ or\ \ } \CT + \dopen < \DL(x) \]
of the term SafeToOpen with the fixed value of $x$.
\begin{lemma}[(Local SafeToOpen Lemma)]\
\begin{enumerate}
\item Suppose that $\WT >\dopen$. Then $s(x)$ holds over intervals
$[t_{3i},t_{3i+1} +\WT -\dopen)$ (the {\em maximal positive intervals of
$s(x)$\/}) and fails over intervals $[t_{3i+1} +\WT -\dopen, t_{3i+3})$.
\item Suppose that $\WT\leq\dopen$. Then $s(x)$ holds over intervals
$[t_{3i},t_{3i+1}]$ (the {\em maximal positive intervals of
$s(x)$\/}) and fails over intervals $(t_{3i+1}, t_{3i+3})$.
\item The term $s(x)$ is discrete.
\item $s(x)$ becomes true exactly at moments $t_{3i}$ with $i>0$, that is
exactly when $\TS(x)$ becomes empty.
\item If $[\a,\b)$ or $[\a,\b]$ is a maximal positive interval of
$s(x)$, then $\SC(x)$ is disabled over $[\a,\b]$ and at $\b+$.
\end{enumerate}
\end{lemma}
\begin{proof}\
\begin{enumerate}
\item
Over $[t_{3i},t_{3i+1})$, $\TS(x) =\empty$ and
therefore $s(x)$ holds. At $t_{3i+1}$, $\DL(x) =\infty$ and therefore
$s(x)$ holds. $\SD(x)$ fires at $t_{3i+1}$ and sets $\DL(x)$ to
$t_{3i+1} +\WT$. Over $(t_{3i+1},t_{3i+1} +\WT -\dopen)$,
\begin{eqnarray*}
\CT+\dopen &<& (t_{3i+1}+\WT-\dopen)+\dopen\\
&=& t_{3i+1}+\WT = \DL(x)
\end{eqnarray*}
\noindent
and therefore $s(x)$ holds. Over the interval $[t_{3i+1} +\WT
-\dopen, t_{3i+3})$, $\TS(x)\neq\empty$ and $\CT +\dopen\geq t_{3i+1}
+\WT =\DL(x)$ and therefore $s(x)$ fails.
\item The proof is similar to that of 1.
\item This follows from 1 and 2.
\item This follows from 1 and 2.
\item We consider the case when $\WT >\dopen$; the case when
$\WT\leq\dopen$ is similar. By 1, the maximal positive interval of $s(x)$
has the form $[\a,\b) = [t_{3i},t_{3i+1} +\WT -\dopen)$ for some $i$. By
Three Rules Corollary, $\SC(x)$ fires at moments $t_{3j+1} +\WT$. Now the
claim is obvious.
\qed
\end{enumerate}
\end{proof}
\subsection{Multiple Track Analysis}
\begin{lemma}[(Global SafeToOpen Lemma)]\
\begin{enumerate}
\item The term SafeToOpen is discrete.
\item If SafeToOpen holds at $t+$ then it holds at $t$.
\item If SafeToOpen becomes true at $t$ then some $\TS(x)$ becomes empty at
$t$.
\item If SafeToOpen holds at $t$ then $t$ belongs to an interval
$[\a,\b)$ (a {\em maximal positive interval of\/} SafeToOpen) such that
SafeToOpen fails at $\a-$, holds over $[\a,\b)$ and fails at $\b$.
\end{enumerate}
\end{lemma}
\begin{proof}\
\begin{enumerate}
\item
Use part 3 of Local SafeToOpen Lemma and the fact that there are only
finitely many tracks.
\item Use parts 1 and 2 of Local SafeToOpen Lemma.
\item Use parts 1 and 2 of Local SafeToOpen Lemma.
\item Suppose that SafeToOpen holds at $t$. By parts 1 and 2 of Local
SafeToOpen Lemma, for every track $x$, $t$ belongs to an interval
$[\a_x,\b_x)$ such that $s(x)$ fails at $\a_x-$, holds over $[\a_x,\b_x)$ and
fails at $\b_x$. The desired $\a =\max_x\a_x$, and the desired $\b
=\min_x\b_x$.\qed
\end{enumerate}
\end{proof}
\begin{lemma}[(Dir Lemma)]
Suppose that $[\a,\b)$ is a maximal positive interval of SafeToOpen.
\begin{enumerate}
\item Dir = close at $\a$.
\item Dir = open over $(\a,\b]$ and at $\b+$.
\end{enumerate}
\end{lemma}
\begin{proof}\
\begin{enumerate}
\item By Global SafeToOpen Lemma, some $\TS(x)$ becomes empty at $\a$. Fix
such an $x$ and let $0 = t_0 < t_1 < t_2 <\ldots$ be the significant
moments of $\TS(x)$. Then $\a = t_{3i+3}$ for some $i$. By Three Rules
Corollary, $\SC(x)$ fires at $t_{3i+1} +\WT$ setting Dir to close. By
Local SafeToOpen Lemma, $s(x)$ fails over $I = (t_{3i+1} +\WT,
t_{3i+3})$. Hence SafeToOpen fails over $I$ and therefore SignalOpen
is disabled over $I$. Thus Dir remains close over $I$ and, by
Preservation Lemma, Dir = close at $\a$.
\item
By 1, SignalOpen fires at $\a$ setting Dir to open. By part 5 of
Local SafeToOpen Lemma, every $\SC(x)$ is disabled over $[\a,\b]$ and at
$\b+$. Hence Dir remains open over $(\a,\b]$ and at $\b+$.
\qed
\end{enumerate}
\end{proof}
\begin{corollary}[(SignalOpen Corollary)]
SignalOpen fires exactly when SafeToOpen becomes true.
SignalOpen fires only when some Track\-Status$(x)$ becomes empty.
\end{corollary}
\begin{proof}
Obvious.
\qed\end{proof}
We have proved some properties of regular runs of our ealgebra $\cA$,
but the question arises whether there are any regular runs. Moreover, are
there any regular runs consistent with a given pattern of trains? The
answer is positive. In Sect.~8, we will prove that every pattern of
trains gives rise to a regular run and will describe all regular runs
consistent with a given pattern of trains.
\section{Safety and Liveness}
Recall that we restrict attention to regular runs of our ealgebra $\cA$.
\begin{theorem}[(Safety Theorem)]
The gate is closed whenever a train is in the crossing. More
formally, $\GS=\closed$ whenever $\TS(x)=\incrossing$ for any $x$.
\end{theorem}
\begin{proof}
Let $t_0 < t_1 <\ldots$ be the significant moments of some track $x$.
Thus, during periods $[t_{3i+2}, t_{3i+3})$, $\TS(x)=\incrossing$. We
show that $\GS =\closed$ over $[t_{3i+2}, t_{3i+3}]$ and even over
$[t_{3i+1} +\dmin, t_{3i+3}]$. (Recall that $\dmin\leq t_{3i+2} -
t_{3i+1}\leq\dmax$ and therefore $t_{3i+1} +\dmin\leq t_{3i+2}$.)
By Three Rules Corollary, $\SD(x)$ fires at $t_{3i+1}$ setting
$\DL(x)$ to $\a = t_{3i+1} +\WT$. If $\Dir_\a = \open$ then $\SC(x)$
fires at $\a$ setting Dir to close; regardless, $\Dir_{\a+} = \close$.
By Local SafeToOpen Lemma, $s(x)$ fails over $I = (\a, t_{3i+3})$.
Hence, over $I$, SafeToOpen fails, SignalOpen is disabled, Dir =
close, and OpenGate is disabled.
By the definition of regular runs, $\GS =\closed$ at some moment $t$
such that $\a < t <\a +\dclose = t_{3i+1} +\WT +\dclose = t_{3i+1}
+\dmin$. Since OpenGate is disabled over $I$, \GS\ remains closed
over $I$ and therefore over the interval $[t_{3i+1} +\dmin,
t_{3i+3})$. By Preservation Lemma, $\GS =\closed$ at $t_{3i+3}$.
\qed\end{proof}
Let $\Dclose = \dclose + (\dmax - \dmin) = \dmax - \WT$.
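The second equality is immediate from the definition $\WT =\dmin -\dclose$ (used explicitly in the proof of Uninterrupted Closing Theorem below):

```latex
\begin{eqnarray*}
\dmax - \WT &=& \dmax - (\dmin - \dclose) \\
            &=& \dclose + (\dmax - \dmin) \ =\ \Dclose.
\end{eqnarray*}
```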
\begin{theorem}[(Liveness Theorem)]
Assume $\a +\dopen < \b -\Dclose$. If the crossing is empty in the
open time interval $(\a,\b)$, then the gate is open in $[\a +\dopen,\b
-\Dclose]$. More formally, if every $\TS(x)\neq\incrossing$ over
$(\a,\b)$, then $\GS =\opened$ over $[\a +\dopen,\b -\Dclose]$.
\end{theorem}
\begin{proof}
By Deadline Lemma, every $\DL(x)\geq\b -\Dclose >\a +\dopen$ over $(\a,\b)$.
By the definition of SafeToOpen, it holds at $\a$. If $\Dir_\a =
\close$ then SignalOpen fires at $\a$; in any case $\Dir_{\a+}=\open$.
By Deadline Lemma, every $\DL(x)\geq\b -\Dclose >\CT$ over $(\a,\b
-\Dclose)$. Hence, over $(\a,\b -\Dclose)$, every $\SC(x)$ is
disabled, Dir remains open, and StartClose is disabled.
By the definition of regular runs, $\GS =\opened$ at some moment $t\in(\a,\a
+\dopen)$. Since StartClose is disabled over $(\a,\b -\Dclose)$, \GS\ remains
opened over $(t,\b -\Dclose)$ and therefore is opened over $[\a +\dopen,\b
-\Dclose)$. By Preservation Lemma, $\GS =\opened$ at $\b -\Dclose$.
\qed\end{proof}
The next claim shows that, in a sense, Liveness Theorem cannot be
improved.
\begin{Claim}\
\begin{enumerate}
\item Liveness Theorem fails if $\dopen$ is replaced with a smaller
constant.
\item Liveness Theorem fails if $\Dclose$ is replaced with a smaller
constant.
\end{enumerate}
\end{Claim}
\begin{proof}
The first statement holds because the gate can take time arbitrarily
close to $\dopen$ to open. The second statement holds for two
reasons. Recall that $\Dclose = \dclose + (\dmax - \dmin)$. The term
$(\dmax - \dmin)$ cannot be reduced; to be on the safe side, the
controller must act as if every oncoming train is moving as fast as
possible, even if it is moving as slow as possible. The term
$\dclose$ cannot be reduced either; the gate can take arbitrarily
short periods of time to close. Now we give a more detailed proof.
\paragraph{Part 1.}
Given some constant $\copen <\dopen$, we construct a regular run of our
ealgebra $\cA$ and exhibit an open interval $I = (\a,\b)$ such that the
crossing is empty during $I$ but the gate is not opened during a part of
interval $(\a +\copen, \b -\Dclose)$.
We assume that $\dopen,\Dclose < 1$ (just choose the unit of time
appropriately) and that there is only one track.
The traffic. Only one train goes through the crossing. It appears at time
$100$, reaches the crossing at time $100 +\dmax$ and leaves the crossing at
time $110 +\dmax$, so that Dir should be changed only twice: set to
close at $100 +\WT$ and set to open at $110 +\dmax$.
The run. We don't care how quickly the gate closes, but we stipulate that
the time $\Delta$ that the gate takes to open belongs to
$(\copen,\dopen)$.
The interval $I$: $(110 +\dmax, 110 +\dmax +\dopen)$.
Since the only train leaves the crossing at $110 +\dmax$, the crossing
is empty during $I$. However the gate takes time $\Delta >\copen$ to
open and thus is not opened during the part $(110 +\dmax +\copen, 110
+\dmax +\Delta)$ of $I$.
\paragraph{Part 2.}
Given some constant $\Cclose <\Dclose$, we construct a regular run of our
ealgebra $\cA$ and exhibit an open interval $I = (\a,\b)$ such that the
crossing is empty during $I$ but the gate is not opened (even closed)
during a part of interval $(\a +\dopen, \b -\Cclose)$.
We assume that $\dopen,\Cclose < 1$, and that there is only one track with
the same traffic pattern as in part 1.
The run. This time we don't care how quickly the gate opens, but we
stipulate that the time $\Delta$ that the gate takes to close
satisfies the following condition:
\[ 0 < \Delta < \min\{ \dclose, \Dclose -\Cclose \}. \]
The interval $I$ is $(0,100+\dmax)$, so that $\a = 0$ and $\b = 100
+\dmax$.
Since the only train reaches the crossing at $100+\dmax$, the crossing is
empty during $I$. The gate is closed by $100 +\WT +\Delta$ and is
closed during the part $(100 +\WT +\Delta, 100 +\WT +(\Dclose -\Cclose))$ of
interval $(\a +\dopen,\b -\Cclose)$. Let us check that $(100 +\WT +\Delta,
100 +\WT +(\Dclose -\Cclose))$ is indeed a part of $(\a +\dopen,\b -\Cclose)$.
Clearly, $\a + \dopen < 0 + 1 < 100 + \WT + \Delta$. Further:
\begin{eqnarray*}
&& 100 +\WT + \Delta \\
&<& 100 + \WT + (\Dclose - \Cclose)\\
&=& 100 + (\dmin - \dclose) + [(\dclose + \dmax - \dmin) - \Cclose] =
\b - \Cclose.
\end{eqnarray*}
\qed\end{proof}
\section{Some Additional Properties}
\begin{theorem}[(Uninterrupted Closing Theorem)]
The closing of the gate is never interrupted. More formally, if Dir
is set to close at some moment $\a$, then Dir = close over the
interval $I = (\a,\a +\dclose)$.
\end{theorem}
Recall that, by the definition of regular runs, GateStatus = closed
somewhere in $I$ if Dir = close over $I$.
\begin{proof}
Since Dir is set to close at $\a$, some $\SC(x)$ fires at $\a$. Fix
such an $x$ and let $t_0 < t_1 <\ldots$ be the significant moments of
track $x$. By Three Rules Corollary, there is an $i$ such that $\a =
t_{3i+1} +\WT = t_{3i+1} +\dmin -\dclose$. Then $\a +\dclose =
t_{3i+1} +\dmin\leq t_{3i+2}$. By the definition of regular runs,
$\TS(x) =\coming$ over $I$. By Deadline Lemma, $\DL(x) =\a$ over
$I$, so that $\CT +\dopen >\CT >\DL(x)$ over $I$. Because of this
$x$, SafeToOpen fails over $I$ and therefore SignalOpen is disabled
over $I$. Thus Dir = close over $I$.
\qed\end{proof}
\begin{theorem}[(Uninterrupted Opening Theorem)]\
Suppose $\WT\geq\dopen$; that is, $\dmin\geq\dclose +\dopen$. Then
the opening of the gate is not interrupted; in other words, if Dir is
set to open at some moment $\a$, then Dir = open over the interval $I
= (\a,\a +\dopen)$.
\end{theorem}
Recall that, by the definition of regular runs, GateStatus = opened
somewhere in $I$ if Dir = open over $I$.
\begin{proof}
It suffices to prove that every $\SC(x)$ is disabled over $I$. Pick
any $x$ and let $t_0 < t_1 <\ldots$ be the significant moments of
track $x$. Since Dir is set to open at $\a$, SignalOpen fires at
$\a$, SafeToOpen holds at $\a$, and $s(x)$ holds at $\a$. We have two
cases.
\paragraph{Case 1.} $\a +\dopen <\DL(x)_\a <\infty$. Since $\DL(x)_\a
<\infty$, $t_{3i+1}<\a\leq t_{3i+3}$ and $\DL(x)_\a = t_{3i+1} +\WT$
for some $i$ (by Deadline Lemma). We have
\[ \a +\dopen < \DL(x)_\a = t_{3i+1} +\WT <
t_{3i+1} +\dmin \leq t_{3i+2} < t_{3i+3}. \]
\noindent
By Deadline Lemma, $\DL(x)$ does not change in $I$, so that CT remains
$<\DL(x)$ in $I$ and therefore $\SC(x)$ is disabled over $I$.
\paragraph{Case 2.} $\a +\dopen\geq\DL_\a(x)$ or $\DL_\a(x) =\infty$.
We check that $t_{3i}\leq\a\leq t_{3i+1}$ for some $i$. Indeed, if
$\TS(x)_\a =\empty$ then $t_{3i}\leq\a<t_{3i+1}$ for some $i$.
Suppose that $\TS(x)_\a\neq\empty$. Since $s(x)$ holds at $\a$, $\a
+\dopen <\DL_\a(x)$. By the condition of Case 2, $\DL(x)_\a =\infty$.
Recall that $\TS(x)\neq\empty$ exactly in intervals $[t_{3i+1},
t_{3i+3})$ and $\DL(x) =\infty$ exactly in periods $(t_{3i},
t_{3i+1}]$. Thus $\a = t_{3i+1}$ for some $i$.
The first moment after $\a$ that $\SC(x)$ is enabled is $t_{3i+1}
+\WT$. Thus it suffices to check that $\a +\dopen\leq t_{3i+1} +\WT$.
Since $\dmin \geq\dclose +\dopen$, we have
\[ \a + \dopen \leq t_{3i+1} + \dopen \leq
t_{3i+1} + (\dmin -\dclose) = t_{3i+1} + \WT. \qed\]
\end{proof}
\begin{corollary}[(Dir and GateStatus Corollary)]
Assume $\dmin\geq\dclose +\dopen$.
\begin{enumerate}
\item If the sequence $\g_1 <\g_2 <\g_3 <\ldots$ of positive significant moments
of Dir is infinite, then the sequence $\d_1 <\d_2 <\d_3 <\ldots$ of positive
significant moments of \GS\ is infinite and each $\d_i\in(\g_i,\g_{i+1})$.
\item If the positive significant moments of Dir form a finite sequence $\g_1
<\g_2 <\ldots <\g_n$, then the positive significant moments of \GS\ form a
sequence $\d_1 <\d_2 <\ldots <\d_n$ such that $\d_i\in(\g_i,\g_{i+1})$ for all
$i<n$ and $\d_n >\g_n$.
\end{enumerate}
\end{corollary}
\begin{proof}
We prove only the first claim; the second claim is proved similarly.
Since Dir = open and $\GS =\opened$ initially, \GS\ does not change in
$(0,\g_1)$. Suppose that we have proved that if $\g_1 <\ldots <\g_j$
are the first $j$ positive significant moments of Dir, then there are
exactly $j-1$ significant moments $\d_1 <\ldots <\d_{j-1}$ of \GS\ in
$(0,\g_j]$ and each $\d_i\in(\g_i,\g_{i+1})$. We restrict attention to
the case when $j$ is even; the case of odd $j$ is similar. Since $j$
is even, Dir is set to open at $\g_j$. If $\g_j$ is the last
significant moment of Dir, then the gate will open at some time in
$(\g_j,\g_j +\dopen)$ and will stay open forever after that.
Otherwise, let $k = j + 1$. By Uninterrupted Opening Theorem, the
gate opens at some moment $\d_j\in(\g_j,\g_k)$. Since Dir remains
open in $(\d_j,\g_k)$, $\GS =\opened$ holds over $(\d_j,\g_k)$. By
Preservation Lemma, $\GS =\opened$ at $\g_k$. \qed\end{proof}
\section{Existence of Regular Runs}
We delayed the existence issue in order to take advantage of Sect.~8.
For simplicity, we restrict attention to an easier but seemingly more
important case when $\dmin \geq\dclose + \dopen$. The Existence
Theorem and the two Claims proved in this section remain true in the
case $\dmin<\dclose +\dopen$; we provide remarks explaining the
necessary changes.
Let $\U_1 =\U -\{\GS\}$, and $\U_0 =\U_1 -\{\DL,\Dir\}$. For $i = 0,1$, let
$\U_i^+ =\U_i\cup\{\CT\}$.
\begin{theorem}[(Existence Theorem)]
Let $P$ be a pre-run of vocabulary $\U_0$ satisfying the train motion
requirement in the definition of regular runs, and let $A$ be an
initial state of $\cA$ consistent with $P(0)$. There is a regular run
$R$ of $\cA$ which starts with $A$ and agrees with $P$.
\end{theorem}
\begin{proof}
Let the significant moments of $P$ be $0=\a_0 <\a_1 <\ldots$. For
simplicity, we consider only the case where this sequence is infinite.
The case when the sequence is finite is similar. Our construction
proceeds in two phases. In the first phase, we construct a run $Q$ of
module\ \Controller\ (that is of the corresponding one-module evolving
algebra of vocabulary $\U_1^+$) consistent with $A$ and $P$. In the
second phase, we construct the desired $R$ by extending $Q$ to include the
execution of module\ \Gate. \smallskip
\paragraph{Phase 1: Constructing $Q$ from $P$.} Let $\b_0 <\b_1
<\ldots$ be the sequence that comprises the moments $\a_i$ and the
moments of the form $t +\WT$ where $t$ is a moment when some $\TS(x)$
becomes coming. By Three Rules and SignalOpen Corollaries, these are
exactly the significant moments of the desired $Q$. We define the
desired $Q$ by induction on $\b_i$. It is easy to see that $Q(t)$ is
uniquely defined by its reduct $q(t)$ to $\U_1$.
$Q(0)$ is the appropriate reduct of $A$. Suppose that $Q$ is defined
over $[0,\b_j]$ and $k=j+1$. Let $\g$ range over $(\b_j,\b_k)$. If
\Controller\ does not execute at $\b_j$, define $q(\g) =
q(\b_j)$; otherwise let $q(\g)$ be the state resulting from
executing \Controller\ at $q(\b_j)$. Define $q(\b_k)$ to agree
with $q(\g)$ at all functions except \TS, where it agrees with
$P(\b_k)$.
Clearly $Q$ is a pre-run. It is easy to check that $Q$ is a run of
\Controller\ and that \Controller\ is immediate in $Q$.
\smallskip
\paragraph{Phase 2: Constructing $R$ from $Q$.} We construct $R$ by
expanding $Q$ to include \GS. Let $\g_1 <\g_2 <\ldots$ be the
sequence of significant moments of $Q$ at which Dir changes. Thus Dir
becomes close at moments $\g_i$ where $i$ is odd, and becomes open at
moments $\g_i$ where $i$ is even.
There are many possible ways of extending $Q$ depending on how long it
takes to perform a given change in \GS. Choose a sequence
$a_1,a_2,\ldots$ of reals such that (i)~$a_i <\g_{i+1} -\g_i$ and
(ii)~$a_i <\dclose$ if $i$ is odd and $a_i <\dopen$ if $i$ is even.
The idea is that \Gate\ will delay executing OpenGate or CloseGate for time
$a_i$.
The construction proceeds by induction on $\g_i$. After $i$ steps,
\GS\ will be defined over $[0,\g_i]$, and $\GS_{\g_i}$ will equal opened
if $i$ is odd and will equal closed otherwise.
Set $\GS =\opened$ over $[0,\g_1]$. Suppose that \GS\ is defined over
$[0,\g_i]$ and let $j=i+1$. We consider only the case when $i$ is
even. The case of odd $i$ is similar.
By the induction hypothesis, $\GS =\closed$ at $\g_i$. Since $i$ is even, Dir is
set to open at $\g_i$. Define $\GS =\closed$ over $(\g_i,\g_i + a_i]$ and
opened over $(\g_i + a_i,\g_j]$.
It is easy to see that $R$ is a regular run of $\cA$.
\qed\end{proof}
Remark. If the assumption $\dmin\geq\dclose +\dopen$ is removed, Phase 1 of the
construction does not change but Phase 2 becomes more complicated. After $i$
steps, \GS\ is defined over $[0,\g_i]$, and $\GS_{\g_i} =\closed$ if $i$ is even;
it cannot be guaranteed that $\GS_{\g_i} =\opened$ if $i$ is odd. The first step
is as above. Let $j=i+1$. For an even $i$, we have three cases.
Case 1: $a_i <\g_j -\g_i$. Define \GS\ over $(\g_i,\g_j]$ as in the Existence
Theorem Proof.
Case 2: $a_i >\g_j -\g_i$. Define $\GS =\closed$ over $(\g_i,\g_j]$.
Case 3: $a_i =\g_j -\g_i$. Define $\GS =\closed$ over $(\g_i,\g_j]$ as in Case 2
but also mark $\g_j$ (to indicate that OpenGate should fire at $\g_j$).
For an odd $i$, we have two cases.
Case 1: Either $\GS =\opened$ at $\g_i$ or else $\GS =\closed$ at $\g_i$ but $\g_i$
is marked. Define \GS\ over $(\g_i,\g_j]$ as in the Existence Theorem Proof.
Case 2: $\GS =\closed$ at $\g_i$ and $\g_i$ is not marked. Ignore $a_i$ and
define $\GS =\closed$ over $(\g_i,\g_j]$.
\begin{Claim}[(Uniqueness of Control)]
There is only one run of \Controller\ consistent with $A$ and $P$.
\end{Claim}
\begin{proof}
Intuitively, the claim is true because the construction of $Q$ was
deterministic: we had no choice in determining the significant moments of
$Q$. More formally, assume by reductio ad absurdum that $Q_1, Q_2$
are runs of \Controller\ consistent with $A$ and $P$ and the set $D = \{t:
Q_1(t)\neq Q_2(t)\}$ is non-empty. Let $\t=\inf(D)$. Since both $Q_1$ and
$Q_2$ agree with $A$, $\t>0$. By the choice of $\t$, $Q_1$ and $Q_2$ agree
over $[0,\t)$. Since both $Q_1$ and $Q_2$ agree with $A$ and $P$, they can
differ only at internal functions; let $q_1, q_2$ be reductions of $Q_1,
Q_2$ respectively to the internal part of the vocabulary. By Preservation
Lemma, $q_1$ and $q_2$ coincide at $\t$. But the values of internal
functions at $\t+$ are completely defined by the state at $\t$. Thus $q_1$
and $q_2$ coincide at $\t+$ and therefore $Q_1, Q_2$ coincide over some
nonempty interval $[\t,\t +\e)$. This contradicts the definition of $\t$.
\qed\end{proof}
\begin{Claim}[(Universality of Construction)]
Let $R'$ be any regular run of the ealgebra consistent with $A$ and $P$. In the
proof of Existence Theorem, the sequence $a_1,a_2,\ldots$ can be chosen in such a
way that the regular run $R$ constructed there coincides with $R'$.
\end{Claim}
\begin{proof}
By Uniqueness of Control Claim, the reducts of $R$ and $R'$ to $\U_1^+$
coincide. The moments $\g_1<\g_2<\ldots$ when Dir changes in $R$ are exactly the
same moments when Dir changes in $R'$. We have only to construct appropriate
constants $a_i$.
Let $\d_1 <\d_2 <\dots$ be the significant moments of \GS\ in $R'$. In view of
Dir and GateStatus Corollary, define $a_i =\d_i -\g_i$. It is easy to check
that $R = R'$.
\qed\end{proof}
Remark. If the assumption $\dmin\geq\dclose +\dopen$ is removed, the proof of
Uniqueness of Control Claim does not change but the proof of Universality of
Construction Claim becomes slightly complicated. Let $j=i+1$. For an even $i$,
we have two cases.
Case 1: $\d_i\leq\g_j$. Define $a_i =\d_i -\g_i$.
Case 2: $\d_i >\g_j$. In this case $\g_j -\g_i <\dopen$. The exact value of
$a_i$ is irrelevant; it is only important that $a_i\in(\g_j -\g_i,\dopen)$.
Choose such an $a_i$ arbitrarily.
For an odd $i$, we also have two cases.
Case 1: In $R'$, either $\GS =\opened$ at $\g_i$ or else $\GS =\closed$ at
$\g_i$ but OpenGate fires at $\g_i$. Define $a_i =\d_i -\g_i$.
Case 2: In $R'$, $\GS =\closed$ at $\g_i$. The exact value of $a_i$ is
irrelevant; it is only important that $a_i<\dclose$. Choose such an $a_i$
arbitrarily.
\section{Introduction}
The hard X-ray imaging by the {\em INTEGRAL} satellite \citep{winkler03} has been
uncovering a large number of hard X-ray sources. Since {\em INTEGRAL}'s launch
in 2002 October, 550 sources have been detected by the IBIS instrument in the
$\sim$20--50 keV band (based on version 29 of the ``General Reference Catalog'').
Included in these sources are 236 ``IGR'' sources that were unknown or at least
not well-studied prior to {\em INTEGRAL}. An important result from the
{\em INTEGRAL} mission has been the discovery of a relatively large number of
High-Mass X-ray Binaries (HMXBs). There are 37 IGR sources that have been
classified as HMXBs (and it should be noted that about 1/3 of the IGR sources
are still unclassified). The IGR HMXBs are interesting both for the large
number of new systems as well as the specific properties of these systems.
These include a new class of ``Supergiant Fast X-ray Transients''
\citep{negueruela06} that are HMXBs that can exhibit hard X-ray flares that only
last for a few hours while the X-ray flux changes by orders of magnitude
\citep{intzand05,sguera06}. Many of the IGR HMXBs are also extreme in having
a high level of obscuration ($N_{\rm H}$$\sim$$10^{23}$--$10^{24}$ cm$^{-2}$) due to
material local to the source \citep{walter06,chaty08}. For both the SFXTs and
the obscured HMXBs, it is thought that a strong stellar wind is at least partially
responsible for their extreme X-ray properties \citep{fc04,walter06,wz07}.
The 37 IGR HMXBs presumably contain either neutron stars or black holes, but
the nature of the compact object is only clear for the 12 systems for which
X-ray pulsations from their neutron stars have been detected. Eleven of these
systems have pulse periods ranging from 5 to 1300~s, and the 12th system has an
unusually long period of $\sim$5900~s \citep{patel04}. Another X-ray property
that can be taken as evidence for the presence of a neutron star is a very hard
X-ray spectrum. Typically, neutron star HMXBs have X-ray spectra that can be
modeled with a power-law with a photon index of $\Gamma$$\sim$1 that is
exponentially cutoff near 10--20~keV \citep{nagase89,lutovinov05a}. Although
there are only a few known black hole HMXBs (e.g., Cygnus~X-1, LMC~X-1,
LMC~X-3, M33~X-7), their X-ray spectral properties are similar to the general
class of black hole X-ray binaries with power-law spectra with
$\Gamma$$\sim$1.4--2.1 in their hardest spectral state \citep{mr06}. If black
hole spectra show a cutoff, it is usually close to 100~keV \citep{grove98},
significantly higher than seen for neutron star HMXBs. However, it should
be noted that evidence from the shape of the spectral continuum alone is
usually taken only as an indication of the nature of the compact object.
For example, HMXBs 4U~1700--37 and 4U~2206+54 both have continuum X-ray
spectra similar to neutron star HMXBs, but they are considered to be only
probable neutron star systems because pulsations have not been detected.
In this study, we focus on X-ray observations of IGR~J16207--5129, which was
discovered in the Norma region of the Galaxy relatively early on in the
{\em INTEGRAL} mission \citep{walter04a,tomsick04_munich}. Although it is
a relatively faint hard X-ray source at $3.3\pm 0.1$ millicrab in the
20--40~keV band \citep{bird07}, it has been consistently detected by
{\em INTEGRAL} as well as in X-ray follow-up observations by {\em Chandra}
and {\em XMM-Newton} (this work), indicating that it is a persistent source.
The {\em Chandra} observation provided a sub-arcsecond position that allowed
for the identification of an optical counterpart with $R = 15.38\pm 0.03$
and an IR counterpart with $K_{\rm s} = 9.13\pm 0.02$ \citep{tomsick06}.
Based on the optical/IR Spectral Energy Distribution, \cite{tomsick06} found
that the system has a massive O- or B-type optical companion. Optical and
IR observations confirmed this and indicate a supergiant nature for the
companion \citep{masetti06v,ns07,rahoui08}, and \cite{rahoui08} estimated
a source distance of $\sim$4.1~kpc. With further IR spectroscopy, the
spectral type of the companion was narrowed down to B1~Ia and a source
distance of $6.1^{+8.9}_{-3.5}$~kpc was estimated \citep{nespoli08}.
Our current knowledge about the soft X-ray properties of IGR~J16207--5129
comes from the {\em Chandra} observation that was made in 2005. This showed
that the source has a hard X-ray spectrum with a power-law photon index of
$\Gamma = 0.5^{+0.6}_{-0.5}$, and the source also exhibited significant
variability over the 5~ks {\em Chandra} observation \citep{tomsick06}.
Based on these properties, we selected this target for follow-up
{\em XMM-Newton} observations to obtain an improved X-ray spectrum and
to search for pulsations that would provide information about the nature of
the compact object. Here, we present the results of the {\em XMM-Newton}
observation.
\section{{\em XMM-Newton} Observations and Light Curve}
We observed IGR~J16207--5129 with {\em XMM-Newton} during satellite revolution 1329.
The observation (ObsID 0402920201) started on 2007 March 13, 8.27~hr UT and lasted
for 44~ks. The EPIC/pn instrument \citep{struder01} accumulated $\sim$0.4--15~keV
photons in ``small window'' mode, giving a 4.4-by-4.4 arcminute$^{2}$ field-of-view
(FOV) and a time resolution of 5.6718~ms. The mode used for the 2 EPIC/MOS units
\citep{turner01} is also called ``small window'' mode, and for MOS, its features
are a 1.8-by-1.8 arcminute$^{2}$ FOV and a time resolution of 0.3~s.
In addition to the {\em XMM-Newton} data, we downloaded the ``current calibration
files'' indicated as necessary for this observation by the on-line software tool
{\ttfamily cifbuild}. For further analysis of the data, we used the {\em XMM-Newton}
Science Analysis Software (SAS-8.0.0) as well as the XSPEC package for spectral
analysis and IDL for timing analysis.
We began by using SAS to produce pn and MOS images and found a strong X-ray source
consistent with the {\em Chandra} position of IGR~J16207--5129 \citep{tomsick06}.
This is the only source seen in the pn image, but for MOS, five of the outer CCD
detectors are active, and a few other faint sources are detected. To produce a pn
light curve, we used SAS to read the event list produced by the standard data
pipeline. We extracted source counts from a circular region with a radius of
$\sim$$35^{\prime\prime}$, which includes nearly all of the counts from the source.
For subtracting the background contribution, we used the counts from a rectangular
region with an area 3.5 times larger than the source region located so that no
point in the background region comes within $2^{\prime}$ of the source. In producing
the light curves, we applied the standard filtering (``FLAG=0'' and ``PATTERN$\leq$4'')
as well as restricting the energy range to 0.4--15~keV.
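The scaling step in this background subtraction amounts to simple arithmetic; the following is an illustrative sketch (ours, not part of the SAS tools) for a background region 3.5 times the area of the source region, as described above.

```python
def net_source_rate(src_counts, bkg_counts, exposure, area_ratio=3.5):
    """Background-subtracted source count rate (counts/s).

    bkg_counts come from a region `area_ratio` times larger than the
    source region, so they are scaled down before subtraction.
    """
    return (src_counts - bkg_counts / area_ratio) / exposure

# e.g., 1000 source-region counts and 350 background-region counts in 100 s:
# net rate = (1000 - 350 / 3.5) / 100 = 9.0 c/s
```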
The pn light curves with 50~s time resolution are shown in Figure~\ref{fig:lc}.
While the average count rate after deadtime correction and background subtraction
is 1.64~c/s, there is a very high level of variability with count rates in the
50~s time bins ranging from $0.03\pm 0.15$ c/s to $7.6\pm 0.5$ c/s. The background
light curve after deadtime correction and scaling to the size of the source region
is shown in Figure~\ref{fig:lc}b. The background rate also shows a high level of
variability during the observation. The average background rate for the entire
44~ks observation is 0.40~c/s, but it is much higher at the end of the observation.
For the first 39~ks, the average background rate is 0.18~c/s, while it is 1.9~c/s
for the last 5~ks. We verified that the higher count rate is due to proton flares
by producing a pn light curve for the full pn FOV in the $>$10~keV energy band.
The average full-FOV count rate in this energy band is 0.33~c/s for the first 39~ks
while it is 4.9~c/s for the last 5~ks. Thus, for the spectral analysis described
below, we only included the portion of the exposure indicated in Figure~\ref{fig:lc}.
We also produced MOS light curves using a circular source region with a radius of
$\sim$$35^{\prime\prime}$, and they show the same variability and flaring as the pn
light curves. We used the MOS1 and MOS2 pipeline event lists, and applied the
standard filtering (``FLAG=0'' and ``PATTERN$\leq$12''). In small window mode,
the active area of the central CCD is too small to use the data from this CCD for
background subtraction; thus, we used a source-free rectangular region from one of
the outer CCDs as our background region. We also made MOS light curves using
data from the full FOVs in the $>$10~keV energy band. They show that a large
increase in the background level occurred simultaneously with the increase seen
in the pn, and for the analysis described below, we used the MOS data from the
first 39~ks of the observation (i.e., the low-background time indicated in
Figure~\ref{fig:lc}b). After background subtraction, the 0.4--12~keV MOS1 and
MOS2 count rates in the source region during the low-background time are 0.50
and 0.51 c/s, respectively.
The third X-ray instrument on {\em XMM-Newton} is the Reflection Grating
Spectrometer (RGS). We inspected the RGS dispersion images that are used to
extract spectra, but there is no evidence that IGR~J16207--5129 is detected.
To determine if an RGS upper limit is constraining, we used the spectra
obtained using the EPIC instruments (see \S$3.2$ below) along with an RGS
response matrix. We find that, for the entire 44~ks observation, we would
expect the RGS to collect $\sim$22 counts and $\sim$13 counts in the first
and second grating orders, respectively, which, given the instrumental
background, is consistent with the non-detection.
\begin{figure}
\plotone{fig1.ps}
\caption{(a) {\em XMM-Newton} pn 0.4--15~keV light curve for IGR~J16207--5129.
The time resolution is 50~s, and we have subtracted the background contribution.
(b) The light curve from the background region after scaling to the size of the
source extraction region. The solid line marks the low-background time segment
used for spectral analysis.\label{fig:lc}}
\end{figure}
\clearpage
\section{Analysis and Results}
\subsection{Timing}
We used the {\em XMM-Newton} instrument with the highest effective area, the pn, for
timing analysis and started with the standard pipeline event list. We used the SAS
tool {\ttfamily barycen} to correct the timestamps to the Earth's barycenter for
each event. As IGR~J16207--5129 is a HMXB that may contain a pulsar, a primary
goal of this analysis is to search for periodic signals. We used the IDL software
package to read in the event list and make a 0.4--15~keV light curve at the highest
possible time resolution, $\Delta t = 5.6718$~ms. We note that it is important to
use this exact value for $\Delta t$ to avoid producing artifacts in the power
spectrum. Once the high time resolution light curve was produced, we used IDL's
Fast Fourier Transform (FFT) algorithm to produce a Leahy-normalized power spectrum
\citep{leahy83}. The power spectrum extends from $2.3\times 10^{-5}$ Hz (based on
the 44~ks duration of the observation) to 88~Hz (the Nyquist frequency).
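The Leahy normalization used here is standard; as an illustration (a minimal Python/NumPy sketch of the same construction, not the authors' IDL code), the power spectrum of an evenly binned counts series can be computed as:

```python
import numpy as np

def leahy_power_spectrum(counts, dt):
    """Leahy-normalized power spectrum of an evenly binned counts series.

    counts : counts per time bin (bin width dt, in seconds).
    Returns (frequencies, powers); for pure Poisson noise the powers
    follow a chi^2 distribution with 2 degrees of freedom (mean 2).
    """
    n = len(counts)
    n_photons = counts.sum()
    ft = np.fft.rfft(counts)
    # Drop the zero-frequency (DC) term; Leahy normalization is 2|a_j|^2 / N_ph.
    powers = 2.0 * np.abs(ft[1:]) ** 2 / n_photons
    freqs = np.fft.rfftfreq(n, d=dt)[1:]
    return freqs, powers
```

With $\Delta t = 5.6718$~ms, the highest returned frequency is the Nyquist frequency $1/(2\Delta t) \approx 88$~Hz, as quoted above.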
Part of this power spectrum is shown in Figure~\ref{fig:timing1}. The power is
distributed as a $\chi^{2}$ probability distribution with 2 degrees of freedom (dof),
and we used this fact along with the total number of trials, which is equal to the
number of frequency bins ($N_{trials}$ = 3,839,999 in this case) to determine a detection
threshold. Here, the 90\% confidence detection limit is at a Leahy Power of 34.9, and
this is shown in Figure~\ref{fig:timing1}. This limit is not exceeded at frequencies
above 0.005~Hz, but there are a large number of frequency bins below 0.005~Hz that
exceed the limit. We suspect that this may be related to low-frequency continuum
noise; thus, we treat the $>$0.005~Hz case first.
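The quoted detection limit of 34.9 follows from the $\chi^{2}$ (2 dof) statistics of the noise powers together with the number of trials. A hedged sketch, using the common small-probability approximation $N_{trials}\,e^{-P/2} = 1 - {\rm confidence}$ (our assumption about the exact formula used; the exact trials correction differs only in the third significant figure):

```python
import numpy as np

def detection_threshold(n_trials, confidence=0.90):
    """Leahy power above which a noise bin (chi^2, 2 dof) constitutes a
    detection at the given global confidence over n_trials frequency bins.
    Uses the approximation n_trials * exp(-P/2) = 1 - confidence."""
    return -2.0 * np.log((1.0 - confidence) / n_trials)
```

For $N_{trials} = 3{,}839{,}999$ this gives a threshold of about 34.9, matching the value quoted above.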
The highest power that we measure in the 0.005--88~Hz range is 33.4, which is just below
the 90\% confidence detection threshold. We do not consider this as even a marginal
detection, but we use this value ($P_{max}$) to calculate an upper limit on the strength
of a periodic signal. As described in \cite{vdk89}, the 90\% confidence upper
limit is given by $P_{UL} = P_{max} - P_{exceed}$, where $P_{exceed}$ is the power level
that is exceeded in 90\% of the frequency bins. In our case, $P_{exceed} = 0.2$ so that
$P_{UL} = 33.2$, which, after converting to fractional rms units using the average
source and background count rates, corresponds to an upper limit on the rms noise
level for a periodic signal of $<$2.3\%. We also produced a power spectrum with
$4.6\times 10^{-5}$~Hz frequency bins, but we still did not find any bins in the
0.005--88~Hz range with powers exceeding the 90\% confidence detection limit.
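The upper-limit construction of \cite{vdk89} reduces to a short calculation; in the sketch below, $P_{exceed}$ is computed analytically from the $\chi^{2}$ (2 dof) survival function rather than measured from the frequency bins (a simplification on our part):

```python
import numpy as np

confidence = 0.90
# For chi^2(2 dof) noise, the power exceeded in 90% of bins satisfies
# exp(-P/2) = 0.9, i.e. P_exceed = -2 ln(0.9) ~ 0.21 (quoted as 0.2).
p_exceed = -2.0 * np.log(confidence)
p_max = 33.4              # highest measured power in the 0.005-88 Hz range
p_ul = p_max - p_exceed   # ~33.2, the 90% confidence upper limit
```

Converting $P_{UL}$ to a fractional rms amplitude additionally uses the source and background count rates and the number of photons, which we do not reproduce here.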
\begin{figure}
\plotone{fig2.ps}
\caption{The lowest frequency portion of the Leahy-normalized power spectrum
for IGR~J16207--5129 with a frequency binsize of $2.3\times 10^{-5}$~Hz. The
horizontal dashed line marks the 90\% confidence detection limit (after
accounting for trials). The full power spectrum extends to the Nyquist
frequency of 88 Hz, based on the pn time resolution of 5.6718~ms, and only
exceeds the 90\% confidence detection limit in the region below 0.05~Hz.
\label{fig:timing1}}
\end{figure}
\begin{figure}
\plotone{fig3.ps}
\caption{Red noise rms-normalized power spectrum for IGR~J16207--5129.
The power-spectrum is fitted with a power-law (solid line).
\label{fig:timing2}}
\end{figure}
\begin{figure}
\plotone{fig4.ps}
\caption{(a) Leahy-normalized power spectrum for IGR~J16207--5129 with
a frequency binsize of $2.3\times 10^{-5}$~Hz. The solid line is the
power-law model from the fit shown in Figure~\ref{fig:timing2}. (b)
Two times the data-to-model ratio using the data and model shown in
panel a. The horizontal dashed line shows the 90\% confidence detection
limit (after accounting for the number of trials). The strongest
signal is at 0.0089~Hz (112.56~s), but it is not statistically significant.
\label{fig:timing3}}
\end{figure}
To characterize the low-frequency noise, we produced a 0.4--15~keV pn light curve with
10~s time resolution, and made a new power spectrum with a Nyquist frequency of 0.05~Hz.
The new power spectrum consists of the average of power spectra from four $\sim$11~ks
segments of the light curve, giving a minimum frequency of $9.2\times 10^{-5}$~Hz.
In addition, we converted the power spectrum to rms normalization and rebinned the
power spectrum so that the power in each bin follows a Gaussian distribution,
allowing us to fit a model to the power spectrum with $\chi^{2}$-minimization. The resulting
power spectrum is shown in Figure~\ref{fig:timing2}, and the figure also shows that
the power spectrum is well-described by a power-law model ($P = A (\nu/{\rm 1~Hz})^{-\alpha}$).
We obtain a power-law index of $\alpha = 1.76\pm 0.05$, and the integrated rms noise level
is 64\%$\pm$21\% (0.000092--0.05~Hz). For the fit, the $\chi^{2} = 43.6$ for 39 dof,
indicating that the power-law provides an acceptable description of the power spectrum.
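The power-law fit $P = A(\nu/{\rm 1~Hz})^{-\alpha}$ can be approximated by a linear fit in log-log space; the sketch below is a simple least-squares stand-in that ignores the per-bin measurement errors used in the actual $\chi^{2}$ fit:

```python
import numpy as np

def fit_power_law(freqs, powers):
    """Least-squares fit of P = A * nu**(-alpha) in log-log space
    (log P = log A - alpha * log nu). Returns (A, alpha)."""
    slope, intercept = np.polyfit(np.log(freqs), np.log(powers), 1)
    return np.exp(intercept), -slope
```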
To search for a periodic signal in the presence of red (i.e., power-law) noise requires
an extra step in the analysis. Specifically, after producing another Leahy-normalized
power spectrum, this time covering the 0.000023--0.05~Hz frequency range, multiplying
this power spectrum by 2 and dividing by the (re-normalized) power-law model leads to
a power spectrum where the power in each frequency bin follows a $\chi^{2}$ distribution
with 2 dof \citep{vdk89}. This is illustrated by showing these power spectra
in Figure~\ref{fig:timing3}, and we can now search the powers in the bottom panel for
periodic signals. After accounting for the number of trials (i.e., 2,176 frequency bins),
the 90\% confidence detection limit is shown. Although there are no signals that
reach this detection limit, the maximum signal at 0.0089~Hz (112.56~s) is only slightly
below. We do not consider this to be even a marginal detection, but we use the power
in this bin, $P_{max} = 19.2$ to determine the upper limit. Using the same procedure
as described above results in an upper limit on the rms noise level for a periodic
signal of $<$1.7\%.
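The whitening step, multiplying the Leahy powers by 2 and dividing by the fitted continuum, can be sketched as follows (a Python stand-in for the procedure, with $A$ and $\alpha$ taken from the power-law fit):

```python
import numpy as np

def whiten_against_red_noise(freqs, leahy_powers, A, alpha):
    """Divide a Leahy power spectrum by a fitted power-law continuum
    P_model(nu) = A * nu**(-alpha) and multiply by 2, so that the result
    is chi^2(2 dof) distributed and can be searched for periodic signals
    with the usual trials-corrected threshold (van der Klis 1989)."""
    model = A * freqs ** (-alpha)
    return 2.0 * leahy_powers / model
```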
One caveat to the non-detection of a periodic signal is that it is possible for
the power from a periodic signal to be spread out in frequency by orbital motion.
This can happen if the duration of the observation is a substantial fraction of the
orbital period. For IGR~J16207--5129, we do not know the orbital period; however,
we do know that it is an HMXB and is expected to have an orbital period of between
several days and several weeks. Thus, our $\sim$12~hr {\em XMM-Newton} observation
should cover only a small fraction of the orbit.
\subsection{Energy Spectrum}
\subsubsection{Time-Averaged Spectrum}
We extracted pn, MOS1, and MOS2 energy spectra using the source and background regions
and filtering criteria described above, and we produced response matrices using the
SAS tools {\ttfamily rmfgen} and {\ttfamily arfgen}. We obtained a total exposure time
of 27~ks for the pn and 37~ks for each MOS unit. The exposure time is lower for the
pn due to a higher level of deadtime. We rebinned the energy spectra by requiring at
least 50 counts for each energy bin, leaving a pn spectrum with 722 bins, a MOS1 spectrum
with 290 bins, and a MOS2 spectrum with 298 bins. Although we use the 0.4--15~keV
energy band for pn light curves, we restrict the spectral analysis to 0.4--12~keV for
the pn and MOS units as the calibration\footnote{A document on the pn and EPIC calibration
dated 2008 April by M.~Guainazzi can be found at
http://xmm2.esac.esa.int/docs/documents/CAL-TN-0018.pdf.} has focused on this energy range.
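The minimum-counts rebinning can be sketched as a greedy grouping pass (a grppha-style illustration; trailing channels that never reach the threshold are simply dropped here, whereas real tools typically merge them into the last bin, a detail we gloss over):

```python
def group_min_counts(counts, min_counts=50):
    """Group adjacent spectral channels until each output bin holds at
    least min_counts counts. Returns (first_channel, last_channel, total)
    tuples for each output bin."""
    groups, acc, start = [], 0, 0
    for i, c in enumerate(counts):
        acc += c
        if acc >= min_counts:
            groups.append((start, i, acc))
            acc, start = 0, i + 1
    return groups
```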
We used the XSPEC software package to jointly fit the 3 spectra with an absorbed
power-law model. To account for absorption, we used the photoelectric absorption
cross sections from \cite{bm92} and elemental abundances from \cite{wam00}, which
correspond to the estimated abundances for the interstellar medium. We left the
relative normalization between the instruments as free parameters, and a
$\chi^{2}$-minimization fit indicates that the MOS normalizations are slightly
higher (3\%$\pm$1\%) than the pn normalization. The absorbed power-law model,
with the best fit values of $N_{\rm H} = 9\times 10^{22}$~cm$^{-2}$ for the column
density and $\Gamma = 0.97$ for the power-law photon index, appears to provide a
good description of the spectral continuum above 2~keV; however, the fit is not
statistically acceptable overall with $\chi^{2}/\nu = 1778/1305$. The counts
spectrum and residuals for the absorbed power-law fit are shown in
Figure~\ref{fig:spectrum_counts}. Large residuals are seen below 2~keV, suggesting
the presence of a soft excess. We also produced pn and MOS spectra with a higher
level of binning to study the iron K$\alpha$ region of the spectrum. The
spectrum and residuals shown in Figure~\ref{fig:counts_iron} indicate the presence
of an iron emission line near 6.4~keV.
\begin{figure}
\plotone{fig5.ps}
\vspace{-0.8cm}
\caption{{\em XMM-Newton} pn and MOS energy spectrum for IGR~J16207--5129 fitted
with an absorbed power-law. Residuals (lower panel) illustrate the presence of
a soft excess. The pn data are marked with points while the
MOS1 and MOS2 data are not (upper and lower panels), and the pn model
line is dashed while the MOS model lines are solid (upper panel).
\label{fig:spectrum_counts}}
\end{figure}
\begin{figure}
\plotone{fig6.ps}
\vspace{-0.8cm}
\caption{{\em XMM-Newton} pn and MOS energy spectrum in the iron line region
for IGR~J16207--5129 fitted with an absorbed power-law. Residuals (lower panel)
illustrate the presence of an iron line. The pn data are marked with points
while the MOS1 and MOS2 data are not (upper and lower panels), and the pn
model line is dashed while the MOS model lines are solid (upper panel).
\label{fig:counts_iron}}
\end{figure}
We refit the spectrum (going back to the lower level of binning) after adding a
Gaussian to model the iron line and a second spectral component to account for
the soft excess. We tried different models for the second spectral component,
including a Bremsstrahlung model, a black-body, and a power-law. In each case,
the second component was absorbed, but we fixed $N_{\rm H}$ for this component
to the Galactic value, $1.7\times 10^{22}$ cm$^{-2}$ \citep{dl90}. We also used
this level of absorption for the iron line. Adding the iron line and using a
Bremsstrahlung model for the second component yields a much improved fit (over
the power-law alone) of $\chi^{2}/\nu = 1402/1300$. The best fit Bremsstrahlung
temperature is 189~keV, which is approaching the upper end of the range allowed
by the XSPEC model {\tt bremss}. We derive a 90\% confidence lower limit on the
temperature of 50~keV, indicating that if the soft excess is due to Bremsstrahlung
emission, it is rather hot. While it is tempting to take the high Bremsstrahlung
temperature as an indication that the soft excess is non-thermal, using a
black-body model for the second component yields a fit of comparable quality,
$\chi^{2}/\nu = 1403/1300$, and the black-body temperature is $\sim$0.6~keV.
Finally, if a power-law is used for the second component, the quality of the
fit is identical to the Bremsstrahlung fit, $\chi^{2}/\nu = 1402/1300$, and
the power-law photon index is $\Gamma = 0.9^{+0.5}_{-0.4}$. Although the
statistical quality of the fits does not allow us to determine which model is
best for the second (soft excess) component, the fact that the
power-law has a photon index that is consistent with the value of $\Gamma$
found for the primary power-law component, $\Gamma = 1.15^{+0.07}_{-0.05}$
(90\% confidence errors), suggests that it may be possible to interpret
the soft excess as an unabsorbed portion of the primary component, and this
possibility is explored further in \S$3.2.2$. Thus, we proceed by focusing
on the model that uses the power-law for the second component.
The different components of the model with an absorbed power-law, a power-law
with just interstellar absorption, and an iron line are shown in
Figure~\ref{fig:spectrum_efe}, and the model parameters are shown in
Table~\ref{tab:spectra}. The primary power-law component has the value of
$\Gamma$ given above absorbed by a column density of $N_{\rm H} =
(1.19^{+0.06}_{-0.05})\times 10^{23}$ cm$^{-2}$, and the 0.5--10~keV unabsorbed
flux of the power-law component is $3.7\times 10^{-11}$ ergs~cm$^{-2}$~s$^{-1}$,
which is a factor of $\sim$20 higher than the soft excess component. The
parameters of the narrow iron K$\alpha$ line are well-constrained. The energy
of the line is $6.39\pm 0.03$~keV, which is consistent with iron ionization states
between neutral (FeI) and $\sim$FeX \citep{nagase86}. The 90\% confidence upper
limit on the width of the line is $<$0.12~keV. The line is clearly detected,
but it is not extremely strong with an equivalent width of $42\pm 12$~eV after
correcting the line and power-law components for absorption.
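For reference, an equivalent width is the line's integrated photon flux divided by the continuum photon flux density at the line energy; a one-line sketch, with hypothetical flux values chosen purely for illustration:

```python
def equivalent_width_ev(line_flux, continuum_at_line):
    """Equivalent width in eV, given the line's integrated photon flux
    (ph cm^-2 s^-1) and the continuum photon flux density at the line
    energy (ph cm^-2 s^-1 keV^-1); 1 keV = 1000 eV."""
    return 1000.0 * line_flux / continuum_at_line
```

For example, a hypothetical line flux of $4.2\times 10^{-5}$ ph~cm$^{-2}$~s$^{-1}$ on a continuum of $10^{-3}$ ph~cm$^{-2}$~s$^{-1}$~keV$^{-1}$ gives an equivalent width of 42~eV.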
\begin{figure}
\plotone{fig7.ps}
\caption{{\em XMM-Newton} pn and MOS unfolded energy spectrum for IGR~J16207--5129
with a model consisting of an absorbed power-law (dashed line), a less absorbed power-law
to account for the soft excess (dotted line), and an iron K$\alpha$ emission line (solid
line).
\label{fig:spectrum_efe}}
\end{figure}
\subsubsection{Spectral Properties vs.~Intensity}
A caveat on the above spectral analysis is that the parameters represent average
spectral properties for a highly variable source. As an initial look at how the
spectrum changes with flux level, we extracted deadtime corrected and background
subtracted pn light curves with 100~s time resolution in the 0.4--2~keV, 2--5~keV,
and 5--15~keV energy bands. Figure~\ref{fig:ht} shows these light curves (panels
a--c) along with the hardness vs.~time (panel d). The hardness is calculated
using the rates in the 2--5~keV and 5--15~keV energy bands (defined as $C_{soft}$
and $C_{hard}$, respectively) according to $(C_{hard}-C_{soft})/(C_{hard}+C_{soft})$.
The most striking aspect of Figure~\ref{fig:ht} is how similar the 2--5~keV and
5--15~keV light curves appear, suggesting that the variability is relatively
independent of energy. Although the statistics are poorer for the 0.4--2~keV
light curve, many of the features in this light curve can be associated with
variability in the higher energy light curves. A close inspection of the hardness
(Figure~\ref{fig:ht}d) shows that the light curves do have some energy dependence.
While there are some exceptions, for most of the deepest dips, the spectra are
softer, with the hardness being less than zero for many dips. In contrast, most
flares are harder, with typical hardness values of $\sim$0.2--0.3. While the
sharp flare that occurs at a time of $\sim$16,000~s follows the typical behavior,
a notable exception is found for the flare seen in the first $\sim$3,000~s of the
observation, which has hardness values between --1 and $\sim$0.
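The hardness ratio defined above is a simple normalized difference of the band rates; as a sketch:

```python
import numpy as np

def hardness(c_hard, c_soft):
    """Hardness ratio (C_hard - C_soft)/(C_hard + C_soft) for
    background-subtracted count rates in the hard and soft bands.
    Ranges from -1 (all soft counts) to +1 (all hard counts)."""
    c_hard = np.asarray(c_hard, dtype=float)
    c_soft = np.asarray(c_soft, dtype=float)
    return (c_hard - c_soft) / (c_hard + c_soft)
```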
\begin{figure}
\plotone{fig8.ps}
\caption{IGR~J16207--5129 light curves in 0.4--2~keV (a), 2--5~keV (b), and
5--15~keV (c) energy bands and the hardness vs. time (d). Defining the
5--15~keV rate as $C_{hard}$ and the 2--5~keV rate as $C_{soft}$, the hardness
is $(C_{hard}-C_{soft})/(C_{hard}+C_{soft})$. The time resolution in each
case is 100~s, and we have subtracted the background contribution.\label{fig:ht}}
\end{figure}
The same hardness values are plotted vs.~the 2--15~keV count rate (i.e., intensity)
in Figure~\ref{fig:hi}, with each point corresponding to a 100~s time bin. In
addition, we calculated weighted averages in seven intensity bins, and these are also
shown in the figure. At the high intensity end, the average hardness is dominated
by the softer flare at the beginning of the observation. Then, the spectrum is
harder at intermediate intensities before softening again below 1 c/s. We
calculated the expected hardness change if the variability is due only to a
change in $N_{\rm H}$, and the prediction as $N_{\rm H}$ changes from
$1.7\times 10^{22}$ cm$^{-2}$ to $10^{24}$ cm$^{-2}$ is shown in Figure~\ref{fig:hi}.
The predicted change in hardness is drastically larger than the change we observe,
indicating that changing $N_{\rm H}$ cannot be the sole cause of the variability.
\begin{figure}
\plotone{fig9.ps}
\caption{Hardness-Intensity diagram for IGR~J16207--5129. We used the 100~s time
resolution light curves for the counts in the 5--15~keV band ($C_{hard}$) and the
2--5~keV band ($C_{soft}$). The hardness is defined as
$(C_{hard}-C_{soft})/(C_{hard}+C_{soft})$. In addition to the individual
100~s points, which are grey in color, the averages in seven intensity bins are
shown (the black histogram). The dashed line shows the hardness-intensity
relationship expected if the change in count rate was due only to a change in
column density.\label{fig:hi}}
\end{figure}
To investigate which spectral parameters do change with intensity, we used the
0.4--15~keV pn light curve to divide the pn and MOS data into spectra at four
intensity levels: 4--8 pn c/s, 2--4 pn c/s, 1--2 pn c/s, and 0--1 pn c/s. We
used all of the data from the first 39~ks of the observation and obtained pn
exposure times of 2,485~s, 6,177~s, 8,520~s, and 9,940~s and MOS exposure times
of 3,412~s, 8,482~s, 11,700~s, and 13,650~s for the four intensity levels,
respectively. In fitting these spectra, we simplified the two-power-law model
by requiring the same $\Gamma$ for both power-laws, which is equivalent to using
a model where a fraction, $f$, of the power-law component is absorbed both by
interstellar material and by the extra material local to the source, while a
fraction, $1-f$, is absorbed only by interstellar material. Table~\ref{tab:intensity}
shows the parameters obtained for the total spectrum and also for each of the
four intensity levels, and Figure~\ref{fig:efe_intensity} shows the unfolded
spectra for the four levels. While we performed all of the fits with pn and
MOS spectra, Figure~\ref{fig:efe_intensity} shows only the pn spectra for
clarity.
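The partial covering continuum just described can be written schematically as follows; the $\tau$ arguments stand in for $N_{\rm H}\,\sigma(E)$, which in the real fits comes from the tabulated photoelectric cross sections of \cite{bm92} rather than from a simple analytic form:

```python
import numpy as np

def partial_covering(energy_kev, norm, gamma, f, tau_local, tau_ism):
    """Schematic partial-covering continuum: a fraction f of the power-law
    passes through both the local and interstellar columns, while 1 - f
    sees only the interstellar column. tau_local and tau_ism are the
    energy-dependent optical depths N_H * sigma(E) of the two absorbers."""
    powerlaw = norm * energy_kev ** (-gamma)
    return powerlaw * np.exp(-tau_ism) * (f * np.exp(-tau_local) + (1.0 - f))
```

For $f = 0$ the model reduces to a power-law with interstellar absorption only, and for $f = 1$ to a fully absorbed power-law, recovering the limiting cases of the two-component fits above.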
One interesting result that we obtain from the spectral parameters shown in
Table~\ref{tab:intensity} is that, at $N_{\rm H} = (6.4\pm 0.4)\times 10^{22}$
cm$^{-2}$, the column density for the highest intensity level is significantly
less than that for the lower levels. This spectrum is dominated by the flare that
occurs during the first 3,000~s of the observation, and this indicates that
lower $N_{\rm H}$ is the reason that this flare is softer. The parameters also
indicate that there are two reasons why the spectrum is softer during the
deepest dips. First, the power-law index indicates that the source gradually
softens toward the lower levels, going from $\Gamma = 1.06\pm 0.05$ at 4--8 c/s
to $1.30\pm 0.12$ at 0--1 c/s. Second, the parameters and the spectra shown
in Figure~\ref{fig:efe_intensity} indicate that the flux of the soft excess
does not decrease as much as the flux of the primary power-law component.
Between 2--4 c/s and 0--1 c/s, the flux of the primary power-law component
changes by a factor of $5.1\pm 0.5$ while the flux of the soft excess changes
by a factor of $2.0\pm 0.5$. The iron line parameters show that the equivalent
width of the iron line is consistent with being constant while the flux of the
line changes with the continuum flux.
\begin{figure}
\plotone{fig10.ps}
\caption{Unfolded pn energy spectra after dividing the data into the following
different intensity levels: (a) 4--8 pn c/s; (b) 2--4 pn c/s; (c) 1--2 pn c/s;
and (d) 0--1 pn c/s. In each panel, the best fit partial covering fraction
models are plotted for all four intensity levels.
\label{fig:efe_intensity}}
\end{figure}
\section{Discussion}
\subsection{Is IGR J16207--5129 an Obscured HMXB?}
While the {\em Chandra} position and the optical/IR spectroscopy leave essentially
no doubt that IGR~J16207--5129 is an HMXB, it has not been entirely clear whether
the column density is high enough to require that part of the absorbing material
is local to the source or not. The only other soft X-ray measurement besides the
one reported here is the {\em Chandra} observation, and \cite{tomsick06} reported
a value of $N_{\rm H} = (3.7^{+1.4}_{-1.2})\times 10^{22}$ cm$^{-2}$ (90\% confidence
errors), while many of the obscured HMXBs have column densities in excess of
$10^{23}$ cm$^{-2}$. When one considers atomic and molecular hydrogen, the total
Galactic column density is near $2.4\times 10^{22}$ cm$^{-2}$ \citep{dl90,dht01,tomsick08a},
so the {\em Chandra} measurement is only marginally higher than the Galactic value.
However, the value that we measure with {\em XMM-Newton} is substantially higher,
$N_{\rm H} = (1.19^{+0.06}_{-0.05})\times 10^{23}$ cm$^{-2}$, which is significantly in
excess of the Galactic value. To check on whether the difference between the
{\em Chandra} and {\em XMM-Newton} measurements is related to variability in the column
density, we re-analyzed the {\em Chandra} spectrum. In our previous {\em Chandra}
analysis \citep{tomsick06}, we used the \cite{ag89} rather than \cite{wam00} abundances,
and we fitted the spectrum with only a single power-law. First, we find that when
we use the \cite{wam00} abundances and re-fit the {\em Chandra} spectrum, we
obtain a value of $N_{\rm H} = (5.4^{+2.1}_{-1.7})\times 10^{22}$ cm$^{-2}$, which is
already somewhat higher than found in the previous analysis. Second, the quality
of the {\em Chandra} spectrum does not allow for the detection of the soft excess,
but we re-fitted the {\em Chandra} spectrum after including a soft excess with
the values of $N_{\rm H}$ and $\Gamma$ measured with {\em XMM-Newton} (see
Table~\ref{tab:spectra}). This fit gives values for the primary power-law component
of $\Gamma = 0.7\pm 0.7$ and $N_{\rm H} = (7.9^{+4.4}_{-3.2})\times 10^{22}$ cm$^{-2}$,
which are both consistent with the values measured with {\em XMM-Newton}.
Thus, the {\em Chandra} spectrum allows for the possibility that IGR~J16207--5129 is
an obscured HMXB while the {\em XMM-Newton} spectrum constrains $N_{\rm H}$ to be
significantly higher than the Galactic value, requiring that the source is an obscured
HMXB. With {\em XMM-Newton}, we see that the $N_{\rm H}$ can change, with the first
3,000~s of the observation having $N_{\rm H} = (6.4\pm 0.4)\times 10^{22}$ cm$^{-2}$,
while the $N_{\rm H}$ was about a factor of two higher for the rest of the observation.
The {\em Chandra} spectrum is consistent with either level. Even at the highest
$N_{\rm H}$ seen by {\em XMM-Newton}, the amount of local absorbing material is
certainly not as much as seen in some of the most extreme systems like
IGR~J16318--4848, which has $N_{\rm H} = 2\times 10^{24}$ cm$^{-2}$. This is consistent
with optical and IR observations which have shown P Cygni profiles, forbidden emission
lines (indicating a supergiant B[e] spectral type), and IR excesses from local material
for IGR~J16318--4848 \citep{fc04,moon07,rahoui08} but not for IGR~J16207--5129
\citep{ns07,nespoli08,rahoui08}.
\subsection{Comparison to other HMXBs Lacking Pulsations}
With the results of our timing study, IGR~J16207--5129 joins a relatively short
list of accreting HMXBs that show very hard energy spectra, resembling known
neutron star HMXBs, but that do not seem to exhibit pulsations (at least in the
expected frequency range). One example is 4U~1700--377, which has an extremely
massive and luminous O6.5~Iaf+ companion in a 3.41~day orbit with the compact
object. The properties of this source as seen by {\em XMM-Newton} are remarkably
similar to those of IGR~J16207--5129 \citep{vandermeer05}. Light curves with
10~s time resolution show flares and dips where the flux changes by factors of
at least 5 on ks time scales. While the spectral analysis was limited to the
fainter time periods because of photon pile-up, spectral fits show a direct
component with a hard power-law index ($\Gamma = 1.08$--1.87 for the ``low-flux''
spectrum, depending on the exact model used) and a high level of absorption
($N_{\rm H} = 6.8\times 10^{22}$~cm$^{-2}$ to $N_{\rm H} = 2.0\times 10^{23}$~cm$^{-2}$)
along with a soft excess. While the light curves for 4U~1700--377 and
IGR~J16207--5129 are somewhat unusual in their extreme variability on long
time scales, it should be noted that many pulsating HMXBs (e.g., Vela~X-1
and GX~301--2) also show very similar energy spectra to those described here,
with hard primary power-law components and soft excesses \citep{nagase89}.
A comparison of the power spectra of 4U~1700--377 and IGR~J16207--5129 shows
similarities as well. The typical power spectrum of an HMXB pulsar has three parts:
a relatively flat portion at frequencies below the pulsation frequency;
a region where the pulsations, harmonics, and sometimes quasi-periodic
oscillations (QPOs) are observed; and a higher frequency region where
the power spectrum can be described as a power-law with a slope of 1.4--2.0
\citep{bh90,ht93,chakrabarty01}. However, a uniform study of 12 HMXBs
with {\em EXOSAT} indicated that 4U~1700--377 (and, interestingly, Cyg X-3)
deviated from this pattern by showing only a steep power-law in the
0.002--1~Hz frequency range \citep{bh90}, like IGR~J16207--5129. \cite{bh90}
quote 0.008--29~Hz integrated rms noise levels for 4U~1700--377 between 6 and
12\%, and if we recalculate the IGR~J16207--5129 noise level for this
frequency range, we obtain 11.7\%$\pm$2.7\%.
While there are many similarities between these two sources, one possible
difference is that the average luminosity of 4U~1700--377 is probably
higher. To avoid model-dependent flux measurements, we compare the average
20--40~keV flux measurements made by {\em INTEGRAL}. For 4U~1700--377,
\cite{bird07} quote a value of $208.1\pm 0.1$ millicrab compared to
$3.3\pm 0.1$ millicrab for IGR~J16207--5129. The distance to 4U~1700--377
is estimated at 1.9~kpc \citep{ankay01}, so the ratio of fluxes for the
two sources would imply a distance of $\sim$15~kpc for IGR~J16207--5129 if
their 20--40~keV luminosities were the same, which is at the very upper
limit of the distance range derived by \cite{nespoli08}. At the best
estimate for the IGR~J16207--5129 distance, 6.1~kpc, the luminosity of
IGR~J16207--5129 would be about 6 times lower than 4U~1700--377. At a
distance of 6.1~kpc, the average IGR~J16207--5129 20--40 keV luminosity is
$1.1\times 10^{35}$ ergs~s$^{-1}$, and the average 0.5--10 keV unabsorbed
luminosity is $1.6\times 10^{35}$ ergs~s$^{-1}$.
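These flux-to-distance and flux-to-luminosity conversions are simple inverse-square scalings, which can be checked as follows:

```python
import numpy as np

KPC_CM = 3.086e21  # centimeters per kiloparsec

# Inverse-square scaling: if 4U 1700-377 (208.1 mCrab at 1.9 kpc) and
# IGR J16207-5129 (3.3 mCrab) had the same 20-40 keV luminosity, the
# implied distance would be d = 1.9 kpc * sqrt(208.1 / 3.3) ~ 15 kpc.
d_equal_lum = 1.9 * np.sqrt(208.1 / 3.3)

# Unabsorbed 0.5-10 keV luminosity at the best-estimate distance of
# 6.1 kpc, using the measured flux of 3.7e-11 erg cm^-2 s^-1:
# L = 4 pi d^2 F ~ 1.6e35 erg/s.
d_cm = 6.1 * KPC_CM
L = 4.0 * np.pi * d_cm ** 2 * 3.7e-11
```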
Despite soft X-ray spectra that are similar to neutron star HMXBs, the
nature of the compact object is still unclear for both 4U~1700--377 and
IGR~J16207--5129 because of the lack of X-ray pulsations. One reason that
the black hole possibility has been taken seriously for 4U~1700--377 is
that the compact object mass has been measured to be
$2.44\pm 0.27$~\Msun~\citep{clark02}, which is significantly higher than
the values close to 1.4~\Msun~measured for most neutron stars \citep{tc99},
and could indicate that 4U~1700--377 harbors a low mass black hole.
Obtaining a compact object mass measurement for IGR~J16207--5129 would
certainly be interesting and may be feasible as it is relatively bright
in the optical and IR (see \S$1$). However, the mass measurement would be
challenging because accurate spectroscopy would be required to measure the
massive B1~Ia companion's radial velocity curve. Rather than black holes,
these sources may harbor very slowly rotating neutron stars. Several such
X-ray binaries are known: IGR~J16358--4726 with its 1.64~hr pulsations
\citep{patel04}; 2S~0114+650 with its 2.7~hr pulsations and B1~Ia spectral
type \citep[][and references therein]{farrell08}, which is the same spectral
type as IGR~J16207--5129; 4U~1954+319 with its likely 5~hr pulsations
\citep{mattana06}; and 1E 161348--5055.1 with its likely 6.7~hr pulsations
\citep{deluca06}. With our $\sim$12~hr {\em XMM-Newton} observation of
IGR~J16207--5129, we can rule out pulsation periods in the $\sim$1--2 hr
range but probably not in the $\gsim$4 hr range.
Another HMXB with a neutron star-like spectrum that has been well-studied
without detecting pulsations is 4U~2206+54. This system has an O9.5~V or
O9.5~III companion \citep{ribo06} and a 9.6 or 19.25 day orbital period
\citep{corbet07}. Like IGR~J16207--5129, its X-ray emission is highly
variable \citep{masetti04}, but its energy spectrum is significantly
different, showing no evidence for local absorption. Its $10^{-3}$--1 Hz
power spectrum is dominated by strong red (i.e., power-law) noise
\citep{nr01,torrejon04,blay05}, which is similar to IGR~J16207--5129 and
4U~1700--377. However, for 4U~2206+54, the possible detection of a
cyclotron line at $\sim$30~keV provides some additional evidence that
the compact object is a neutron star \citep{torrejon04,blay05}. If
confirmed, this would indicate that the lack of pulsations and the red
noise power spectrum should not be taken as evidence for a black hole.
Rather, the non-detections of pulsations could be related to a system
geometry where, e.g., the neutron star magnetic and spin axes are nearly
aligned, or the non-detections could be due to spin periods that are
longer than the duration of the observations.
Finally, it is worth noting that several IGR HMXBs besides IGR~J16207--5129
have not yet shown X-ray pulsations. IGR~J19140+0951 has similarities to
IGR~J16207--5129, with a B0.5~I spectral type, a high level of obscuration
(although the level depends on orbital phase), and a soft excess
\citep[see][and references therein]{prat08}. IGR~J16318--4848 (see
\S$4.1$) also has not exhibited pulsations despite a high level of X-ray
variability that has been seen in several soft X-ray observations.
Although the apparent lack of pulsations may be due to the fact that the
timing properties of these sources have not been very well-studied yet,
it is possible that the IGR HMXBs include a population of systems with
black holes or very slowly rotating neutron stars.
\subsection{Soft Excess and Iron Line}
The presence of a soft excess and emission lines are very common in both pulsating
and non-pulsating obscured HMXBs. In addition to iron lines, some sources (e.g.,
Vela~X-1 and 4U~1700--377) have emission lines from Si, Ne, etc.~in the 0.5--3~keV
range, making up at least part of the soft excess \citep{watanabe06,boroson03,vandermeer05}.
Although our spectral analysis does not allow us to definitively rule out other
models besides a power-law (e.g., Bremsstrahlung or black-body) for the soft excess,
the fact that using a power-law leads to a power-law index that is consistent with
the value of $\Gamma$ for the primary power-law component allows for two possible
physical interpretations. One possibility is the partial covering of the power-law
source by absorbing material (likely the stellar wind), which is the scenario envisioned
for the spectral fits described in \S$3.2.2$. Another possible explanation is that
the soft excess emission originates as part of the primary power-law component, but
it is scattered in the stellar wind. In this picture, the fact that the soft
excess has a much lower column density than the primary component would indicate
that the photons that are part of the soft excess come from the edge (i.e., within
one optical depth of the edge) of the stellar wind.
Although it is not clear whether any of the IGR~J16207--5129 soft excess comes
from emission lines, the fact that the flux of the Fe~K$\alpha$ line at 6.4~keV
correlates with the overall source flux (i.e., the equivalent width is consistent
with being constant) is different from Vela~X-1 and 4U~1700--377, which both show
lines with very large equivalent widths at low luminosities. Regardless of the
presence (or not) of lower energy lines in IGR~J16207--5129, the energy of the
iron line indicates a low ionization state, implying that the line originates in
cool material. One possible location for the cool material is an accretion disk
around the IGR~J16207--5129 compact object; however, many HMXBs with strong stellar
winds have compact objects that accrete directly from the wind with little or no
disk accretion. The equivalent width of $42\pm 12$~eV that we measure for the
IGR~J16207--5129 iron line is consistent with simulations in which a spherical
distribution of cold matter with solar abundances and
$N_{\rm H}\sim 10^{23}$~cm$^{-2}$ is illuminated by a central X-ray source
\citep{matt02}, and this is also true for several other IGR HMXBs \citep{walter06}.
Although it is not obvious that the matter in a stellar wind will be cold
(i.e., neutral), it has been suggested that the wind might be a clumpy two-phase
medium with cool, dense regions that are responsible for the emission lines along
with hot, highly-ionized regions \citep[][and references therein]{vandermeer05}.
The partial covering model that we use to fit the IGR~J16207--5129 continuum
would also be consistent with absorption of the X-ray source by a clumpy wind.
\section{Summary and Conclusions}
The {\em XMM-Newton} observation of IGR~J16207--5129 confirms that it is a
member of the group of obscured IGR HMXBs. We measure a column density of
$N_{\rm H} = (1.19^{+0.06}_{-0.05})\times 10^{23}$ cm$^{-2}$ for the average
spectrum. We find that the column density could have been that high during
the only previous soft X-ray observation of this source with {\em Chandra},
but a detailed spectral analysis of the {\em XMM-Newton} data shows that
$N_{\rm H}$ can vary by a factor of $\sim$2. We detect an iron line in
the energy spectrum, and its strength is consistent with what is expected
for a spherical distribution of material with the measured $N_{\rm H}$ around
the compact object. Although the X-ray spectrum is similar to those seen
from other neutron star HMXBs with a hard primary power-law component and
a soft excess, we do not detect the pulsations that might be expected if
the compact object is a neutron star.
A detailed comparison between IGR~J16207--5129 and another apparently
non-pulsating HMXB, 4U~1700--377, shows strong similarities in spectral
and timing properties. Most notably, the power spectra of both sources
can be described as a single power-law down to $10^{-3}$-$10^{-4}$~Hz.
Since pulsating HMXBs show power spectra that break near the pulsation
frequency, it is possible that both of these sources harbor very slowly
rotating neutron stars (although the possibility that the compact object
is a black hole cannot be ruled out entirely). We note that several
of the IGR HMXBs are either known to harbor slowly rotating neutron stars
or may harbor slowly rotating neutron stars in cases where pulsations have
not been detected. It is possible that in addition to uncovering obscured
HMXBs, the HMXBs that {\em INTEGRAL} is uncovering tend to contain slow
rotators.
\acknowledgments
JAT acknowledges partial support from the National Aeronautics and Space Administration
(NASA) {\em XMM-Newton} Guest Observer award number NNX07AQ11G. JAT thanks Nora
Loiseau of the {\em XMM-Newton} User Support Group, Joern Wilms, and Peter Woods
for helpful information concerning the {\em XMM-Newton} data analysis. We thank
an anonymous referee for useful suggestions on the spectral analysis.
\section{Introduction and motivation}
\emph{Dominating set problems} are among the most important class of combinatorial problems in graph optimization,
from a theoretical as well as from a practical point of view.
For a given graph $G = G(V,E)$, a subset $D\subset V$ of vertices is referred to as a \textit{dominating set} if
the remaining vertices, i.e., $V\setminus D$, are \textit{dominated} by $D$ according to a given topological relation (e.g., they are all adjacent to at least one vertex from $D$).
Dominating set problems (also often called \emph{domination problems} in graphs) have attracted the attention of computer scientists and applied mathematicians
since the early 1950s, and their close relation to covering and independent set problems has led to the development of a whole research area (see, e.g.,~\cite{Ore1962} and~\cite{doi:10.1002/net.3230070305} for early references on domination problems).
There are many applications where set domination and related concepts play a central role, including school bus routing~\cite{7502983}, communication networks~\cite{wan2002distributed}, radio station location~\cite{erwin2004dominating}, social networks analysis~\cite{SunMa2017}, biological networks analysis~\cite{NACHER201657}, and also chess-problems like the five-queens problem~\cite{rolfes2014copositive};
see e.g., the book~\cite{haynes2013fundamentals} for a comprehensive overview of domination problems.
Variants of dominating set problems include e.g., the \emph{connected dominating set problem}~\cite{DuWan2013}, the \emph{(weighted) independent dominating set problem}~\cite{GODDARD2013839,PINACHODAVIDSON2018860}, among others (see,~e.g.,~\cite{Kang2013} for further variants of the dominating set problems).
In this paper, we address the recently introduced \emph{(minimum) weighted total domination problem (WTDP)} which is defined as follows.
\begin{definition}Let $\mathbf{w}: V\rightarrow \mathbb{R}_{\geq 0}$ be a vertex weight function,
and let $\mathbf{c}: E\rightarrow \mathbb{R}_{\geq 0}$ be an edge weight function.
The weighted total domination problem is the problem of finding a set $D\subset V$,
such that every vertex in $V$ (including the vertices from $D$) has at least one neighbor in $D$
and the function
\begin{align}
w(D) = \sum_{i\in D} w_i + \sum_{e\in E(D)} c_e + \sum_{i \in V\setminus D} \min \{c_e\mid e:\{i,j\}\in E\;\mbox{and}\;j\in N(i)\cap D\} \notag
\end{align}is minimized, where $E(D)\subseteq E$ corresponds to the set of edges \emph{inside} $D$,
and $N(i)\subset V$ corresponds to the set of neighboring vertices of vertex $i\in V$.
\end{definition}
For referring to the different components of the objective function, we denote $\sum_{i\in D} w_i$ as the \emph{vertex selection costs}, $\sum_{e\in E(D)} c_e$ as the \emph{internal edge costs} and $\sum_{i \in V\setminus D} \min \{c_e\mid e:\{i,j\}\in E\;\mbox{and}\;j\in N(i)\cap D\}$ as the \emph{external edge costs}.
Figure~\ref{fig:example} gives an exemplary instance of the WTDP, together with its optimal solution.
\tikzstyle{vertex}=[circle,fill=black!15,minimum size=20pt,inner sep=0pt]
\tikzstyle{edge} = [draw,thick,-]
\tikzstyle{weight} = [draw=none,fill=white,inner sep=1pt, font=\small]
\tikzstyle{selected edge} = [draw,line width=2pt,-,black!100]
\tikzstyle{unselected vertex}=[circle,fill=black!5,minimum size=20pt,inner sep=0pt]
\begin{figure}[h!tb]
\begin{subfigure}[b]{.5\linewidth}
\centering
\begin{tikzpicture}[scale=1.9]
\foreach \pos/\name/\type/\weight in {{(0,2)/A/vertex/1}, {(1,2)/B/vertex/8}, {(2,2)/C/vertex/1},{(0,1)/D/vertex/5},{(1,1)/E/vertex/1},{(2,1)/F/vertex/7},{(0,0)/G/vertex/1},{(1,0)/H/vertex/9},{(2,0)/I/vertex/1}}
\node[\type] (\name) at \pos {$\weight$};
\foreach \source/ \dest/\weight in
{A/B/6,B/C/7,A/D/2,D/E/5,B/E/3,C/F/3,E/F/3,D/G/3,G/H/2,E/H/6,H/I/2,F/I/4}
\path[edge] (\source) -- node[weight] {$\weight$} (\dest);
\end{tikzpicture}
\caption{Instance}\label{fig:instance}
\end{subfigure}%
\begin{subfigure}[b]{.5\linewidth}
\centering
\begin{tikzpicture}[scale=1.9]
\foreach \pos/\name/\type/\weight in {{(0,2)/A/unselected vertex/}, {(1,2)/B/unselected vertex/}, {(2,2)/C/vertex/1},{(0,1)/D/vertex/5},{(1,1)/E/unselected vertex/},{(2,1)/F/vertex/7},{(0,0)/G/vertex/1},{(1,0)/H/unselected vertex/},{(2,0)/I/unselected vertex/}}
\node[\type] (\name) at \pos {$\weight$};
\foreach \source/ \dest/\weight in
{A/D/2,B/C/7,C/F/3,E/F/3,D/G/3,F/I/4,G/H/2}
\path[selected edge] (\source) -- node[weight] {$\weight$} (\dest);
\end{tikzpicture}
\caption{Optimal solution}\label{fig:solution}
\end{subfigure}
\caption{Instance $I=(G=(V,E),\mathbf{c},\mathbf{w})$ and optimal solution with weight $14+6+18=38$ (\emph{vertex selection costs}+\emph{internal edge costs}+\emph{external edge costs}). We note that a solution consisting of all the vertices with weight one would not be feasible, as it is not a total dominating set, but only a dominating set. \label{fig:example}}
\end{figure}
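As a concrete check of the objective function, the following Python snippet (an illustration, not part of the original formulation; the vertex labels are assumptions matching the grid layout of Figure~\ref{fig:example}) recomputes the three cost components for the optimal solution shown above.

```python
# Weighted total domination objective for the instance of Figure 1.
# Vertex labels A..I are assumptions matching the 3x3 grid layout.

V_W = {"A": 1, "B": 8, "C": 1, "D": 5, "E": 1, "F": 7, "G": 1, "H": 9, "I": 1}
E_W = {("A","B"): 6, ("B","C"): 7, ("A","D"): 2, ("D","E"): 5, ("B","E"): 3,
       ("C","F"): 3, ("E","F"): 3, ("D","G"): 3, ("G","H"): 2, ("E","H"): 6,
       ("H","I"): 2, ("F","I"): 4}

def neighbors(i):
    return {j for a, b in E_W for j in ((b,) if a == i else (a,) if b == i else ())}

def edge_cost(i, j):
    return E_W.get((i, j), E_W.get((j, i)))

def is_total_dominating(D):
    # every vertex of V (including the vertices of D) needs a neighbor in D
    return all(neighbors(i) & D for i in V_W)

def wtd_objective(D):
    assert is_total_dominating(D)
    vertex_cost = sum(V_W[i] for i in D)                       # vertex selection costs
    internal = sum(c for (i, j), c in E_W.items() if i in D and j in D)
    external = sum(min(edge_cost(i, j) for j in neighbors(i) & D)
                   for i in V_W if i not in D)
    return vertex_cost + internal + external

print(wtd_objective({"C", "D", "F", "G"}))  # 14 + 6 + 18 = 38
```

The set of all weight-one vertices fails the check in \texttt{is\_total\_dominating}, matching the remark in the caption.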
We note that in the WTDP, we are not just concerned with the concept of domination, but with the stronger concept of \emph{total domination}, which imposes that for each vertex $v \in D$, there is also a neighbor of $v$ in $D$ (i.e., the vertices of $D$ also need to be dominated by $D$).
The WTDP\ was introduced in~\cite{MaEtAl2019} and is an extension of the (unweighted) total domination problem (TDP), resp., the vertex-weighted total domination problem.
In the TDP, the objective function has $w_i=1$ for all $i \in V$, and $c_e=0$ for all $e \in E$.
The optimal solution of the TDP\ for a given graph is called its \emph{total domination number}.
The TDP\ was introduced in the 1980s (see~\cite{cockayne1980total})
and is NP-hard in general graphs (see~\cite{laskar1984algorithmic}, for further details).
The TDP\ has a rich history of research focusing on theoretical results, e.g., computational complexity and bounds on the total domination number for certain graph classes; we refer the reader to the survey~\cite{henning2009survey} and the book~\cite{henning2013total} for more details.
Applications of total domination include the design of communication networks and the forming of committees~\cite{haynes2013fundamentals, henning2004restricted}.
\paragraph{Contribution and Outline}
The WTDP\ was recently introduced in~\cite{MaEtAl2019}, where three Mixed-Integer Programming (MIP) formulations to solve the problem were presented and evaluated in a computational study.
In this paper, we present two new MIP models for the problem, and design solution frameworks based on them.
These solution frameworks also include valid inequalities,
starting heuristics and primal heuristics. A genetic algorithm (GA) is also developed,
which is based on a greedy randomized adaptive search procedure (GRASP) version of our starting heuristic.
We carry out a computational study to assess the performance of our new approaches in comparison to the previous work by~\cite{MaEtAl2019}.
The study reveals that our algorithms are up to 500 times faster and instances with up to 125 vertices can be solved to optimality within a time limit of 1800 seconds. Moreover, the presented heuristics (i.e., the GA, and just using the GRASP on its own) also work well and often find an optimal or near-optimal solution within a short runtime.
Furthermore, we also analyze the influence of instance-characteristics on the performance of our algorithms.
The paper is organized as follows. In the remainder of this section, we give a short overview of the models introduced in~\cite{MaEtAl2019}.
In Section~\ref{sec:form} we present our two new MIP models, together with valid inequalities.
In Section~\ref{sec:bc} we discuss implementation details of the branch-and-cut algorithms we designed based on our new models, including a description of the starting and primal heuristics.
In Section~\ref{sec:genalg}, we describe our genetic algorithm.
Section~\ref{sec:compres} contains our computational study, and concluding remarks are provided in Section~\ref{sec:con}.
\subsection{Revisiting the models of~\cite{MaEtAl2019}}
\label{subsec:prevwork}
In the following, we give a brief overview of the three formulations for the WTDP\ presented in~\cite{MaEtAl2019} (we denote them as (MA1), (MA2), (MA3)). We re-implemented these models and included them in our computational study, see Section~\ref{sec:compres}.
Firstly, consider the following set of variables and constraints which are common to all formulations of~\cite{MaEtAl2019} and that will also be part of our formulations.
Let $\mathbf{x}\in\{0,1\}^{|V|}$ be a vector of binary variables, such that $x_i = 1$ if vertex $i\in V$ is taken as part of the (total) dominating set, and $x_i =0$ otherwise.
Constraints
\begin{equation}
\sum_{j\in N(i)} x_j \geq 1,\;\forall i\in V,\tag{TDOM}\label{eq:dom}
\end{equation}
ensure that the vertices with $x_i=1$ form a total dominating set. We observe that these constraints are already enough to define the set of feasible solutions. The remaining constraints in the presented models are used to correctly measure the objective function.
Let $\mathbf{y}\in\{0,1\}^{|E|}$ be a vector of binary variables
associated with the edges $E$.
These variables will be used in all formulations except for (MA3), and they are used differently depending on the considered formulation.
In (MA1) and (MA2), they are used to measure the contribution of any edge $e=\{i,j\}$ to the objective function, for both the internal edge costs and the external edge costs. In contrast, in the new formulations presented in Section~\ref{sec:form}, these variables are only used for the internal edge costs, and the external edge costs are modeled in different ways.
\paragraph{Formulation (MA1)}
Let $\mathbf{z}\in\{0,1\}^{|E|}$ be a vector of binary variables,
such that $z_{e=\{i,j\}} = \min\{x_i,x_j\}$ for every edge $e:\{i,j\}\in E$.
For a given vertex $i\in V$, let $\delta(i)\subset E$ be the set of edges incident to $i$.
Formulation (MA1)~ is given by:
\begin{align}
\mbox{(MA1)}\quad\quad\quad w^* = \min &\sum_{i\in V} w_i x_i+ \sum_{e\in E} c_e y_e \tag{WTD1.1}\label{eq:wtd11}\\
\mbox{s.t.}\quad\quad & \eqref{eq:dom} \notag \\
&x_i + x_j \geq y_e,\;\forall e:\{i,j\}\in E\tag{WTD1.3}\label{eq:wtd13}\\
&x_i \geq z_e\;\mbox{and}\; x_j\geq z_e,\;\forall e:\{i,j\}\in E\tag{WTD1.4}\label{eq:wtd14}\\
&z_e \geq x_i + x_j - 1,\;\forall e:\{i,j\}\in E\tag{WTD1.5}\label{eq:wtd15}\\
&y_e \geq z_e,\;\forall e\in E\tag{WTD1.6}\label{eq:wtd16}\\
&x_i + \sum_{e\in\delta(i)} y_e \geq 1,\;\forall i\in V\tag{WTD1.7}\label{eq:wtd17}\\
&\mathbf{x}\in\{0,1\}^{|V|},\;\mathbf{y}\in\{0,1\}^{|E|}\;\mbox{and}\;\mathbf{z}\in\{0,1\}^{|E|} \notag
\end{align}
\paragraph{Formulation (MA2)}
Let $M$ be a large constant (e.g., the maximum vertex-degree of a given instance).
Compared to formulation (MA1), (MA2)\ gets rid of the $\mathbf{z}$-variables with the help of big-M-type constraints.
Formulation (MA2)~ is given by:
\begin{align}
\mbox{(MA2)}\quad\quad\quad w^* = &\min \sum_{i\in V} w_i x_i+ \sum_{e\in E} c_e y_e \notag\\
\mbox{s.t.}\quad\quad &\mbox{\eqref{eq:dom},~\eqref{eq:wtd13} and~\eqref{eq:wtd17}}\tag{WTD2.2}\label{eq:wtd22}\\
&y_e \leq x_i + x_j - 1,\;\forall e:\{i,j\}\in E\tag{WTD2.3}\label{eq:wtd23}\\
&\sum_{e\in\delta(i)} y_e \leq 1 + M x_i,\;\forall i\in V\tag{WTD2.4}\label{eq:wtd24}\\
&\mathbf{x}\in\{0,1\}^{|V|}\;\mbox{and}\;\mathbf{y}\in\{0,1\}^{|E|} \notag
\end{align}
\paragraph{Formulation (MA3)} Finally, formulation (MA3)\ also gets rid of the binary $\mathbf{y}$-variables, with the help of integer variables $\mathbf{q}\in\{0,1,\ldots, |V|L\}^{|V|}$, where $L$ is a large constant, e.g., the maximum edge weight of a given instance. For each $i \in V$, these variables measure twice the contribution to the objective of all the edge weights of edges adjacent to $i$ (again, for both the internal and external edge costs).
Formulation (MA3)~ is given by:
\begin{align}
\mbox{(MA3)}\quad\quad\quad w^* = &\min \sum_{i\in V} \left(w_i x_i + \frac{1}{2}\cdot q_i\right)\tag{WTD3.1}\label{eq:wtd31}\\
\mbox{s.t.}\quad\quad & \eqref{eq:dom} \notag \\
&q_i \geq 2\left(c_e x_i - L x_i - \sum_{e':\{i,j'\}\in E\mid_{c_{e'}\leq c_e}} Lx_{j'} \right),\;\forall e:\{i,j\}\in E,\;\forall i\in V\tag{WTD3.2}\label{eq:wtd32}\\
&q_i \geq \sum_{e:\{i,j\}\in E}c_e\left(x_i + x_j -1\right),\;\forall i\in V\tag{WTD3.3}\label{eq:wtd33}\\
&\mathbf{x}\in\{0,1\}^{|V|}\;\mbox{and}\;\mathbf{q}\in\{0,1,\ldots, |V|L\}^{|V|} \notag
\end{align}
\section{Two new Mixed-Integer Programming formulations for the WTDP}
\label{sec:form}
In this section we present two alternative formulations that, as we show in Section \ref{sec:compres},
allow the design of algorithmic strategies that outperform the results presented in~\cite{MaEtAl2019}.
\subsection{Formulation (F1)\ and valid inequalities}
Let $\mathbf{x}\in\{0,1\}^{|V|}$ be defined as before,
and $\mathbf{y}\in\{0,1\}^{|E|}$ be such that $y_{e:\{i,j\}}=1$ if $x_i = 1$ and $x_j=1$,
and $y_{e:\{i,j\}}=0$, otherwise, for every edge $e:\{i,j\}\in E$.
Let $A = \{(i,j),(j,i)\mid e:\{i,j\}\in E \}$
be the set of bi-directed arcs associated with $E$,
and let $c_{ij} = c_{ji} = c_{e}$ for all $e\in E$.
In contrast to (MA1), we associate $\mathbf{z}$ with these directed arcs, instead of the undirected edges. Let $z_{ij} = 1$ if vertex $j\in V$ is adjacent to the dominating set vertex $i$ through arc $(i,j)\in A$, and $z_{ij} =0$ otherwise. These variables are used to measure the external edge costs.
By using such a strategy, the resulting formulation resembles the formulations of the well-known uncapacitated facility location problem (UFL);
we can interpret the vertices $j \in D$ as open facilities, and we want the vertices $i \in V\setminus D$ to be assigned to the facility with the cheapest assignment cost (see, e.g., \cite{fischetti2016redesigning,laporte2015location} for recent references on the UFL). Let $\delta^-(i)$ and $\delta^+(i)$ correspond to the set of \textit{incoming} and \textit{outgoing} arcs from and to vertex $i\in V$, respectively.
Using this notation, the WTDP can be formulated as follows:
\begin{align}
\mbox{(F1)}\quad\quad\quad w^* = &\min \sum_{i\in V} w_i x_ i + \sum_{e\in E} c_e y_e + \sum_{(i,j)\in A} c_{ij} z_{ij}\notag\\
\mbox{s.t.}\quad\quad & \eqref{eq:dom} \notag \\
& x_i + \sum_{(j,i)\in\delta^-(i)} z_{ji} = 1,\;\forall i\in V\tag{XZLINK1}\label{eq:wtd42}\\
& z_{ij} \leq x_i,\;\forall (i,j)\in A \tag{XZLINK2}\label{eq:wtd43}\\
&y_e\geq x_i + x_j - 1,\;\forall e:\{i,j\}\in E\tag{YZLINK}\label{eq:wtd44}\\
&\mathbf{x}\in\{0,1\}^{|V|},\;\mathbf{y}\in\{0,1\}^{|E|}\;\mbox{and}\;\mathbf{z}\in\{0,1\}^{|A|}. \notag
\end{align}Constraints \eqref{eq:wtd42} ensure, for each $i \in V$, that either $i \in D$ or that it is covered by some $j \in D$. Constraints \eqref{eq:wtd43} link the $\mathbf{z}$-variables and $\mathbf{x}$-variables. Together with the $\sum_{(i,j)\in A} c_{ij} z_{ij}$-part of the objective function, they ensure that the contribution of vertices $i \in V \setminus D$ is measured correctly (i.e., these are the external edge costs). Finally, constraints
\eqref{eq:wtd44} and the $\sum_{e\in E} c_e y_e$-part in the objective function make sure that the contribution of edges $e:\{i,j\}$, where both $i,j \in D$, is measured correctly (i.e., these are the internal edge costs). We note that both variable-sets $\mathbf{y}$ and $\mathbf{z}$ can be relaxed to be continuous, as for binary $\mathbf{x}$, these variables are automatically binary.
\paragraph{Valid inequalities} Next, we present three families of valid inequalities for (F1). Separation of these inequalities is discussed in Section \ref{sec:sep}.
\begin{theorem}
Inequalities
\begin{equation}
y_{e:\{i,j\}} + z_{ij} \leq x_i,\quad \forall (i,j) \in A \label{eq:wtd43lifted} \tag{XZLINK2L}
\end{equation}
are valid for (F1).
\end{theorem}
\begin{proof}
These inequalities are a lifted version of inequalities \eqref{eq:wtd43}. Validity follows from the fact that in any feasible solution, at most one of the edge $e:\{i,j\}$ and the arc $(i,j)$ can be contained, and in either case this implies that $i \in D$.
\end{proof}
\begin{theorem}
Inequalities
\begin{equation}
\sum_{e \in\delta(i)} y_e \geq x_i,\;\forall i\in V \label{eq:domobj} \tag{TDOMY}
\end{equation}
are valid for (F1).
\end{theorem}
\begin{proof}
By definition of total domination, for each $i \in V$, at least one adjacent vertex $j \in N(i)$ must be in $D$. Thus, if $x_i=1$, which means $i \in D$, at least one of the $y_e$-variables for $e \in \delta(i)$ must be one.
\end{proof}
For the next
family
of valid inequalities, we observe that constraints \eqref{eq:wtd44} together with inequalities $y_{e:\{i,j\}}\leq x_i$ (which are valid, but redundant in our case due to the minimization objective) and the binary constraints on $(\mathbf{x,y})$ give the \emph{boolean quadric polytope (BQP)} (see, e.g.,~\cite{padberg1989boolean}). Thus all inequalities valid for the BQP are also valid for our formulation. We note that there are many graph problems which can be either directly formulated using the BQP, or using the BQP and additional constraints (see, e.g.,~\cite{billionnet2005different,bonomo2012polyhedral,macambira2000edge}). There is a huge number of families of valid inequalities known for the BQP, however, most of them are not useful within a branch-and-cut algorithm as there are no efficient separation procedures known for them (see, e.g.,~\cite{letchford2014new}).
We thus only use the following inequalities, known as \emph{clique inequalities}, in our algorithm.
\begin{theorem}
Let $C \subset V$, such that $E(C)\subset E$ form a \emph{clique}. The \emph{clique inequalities}
\begin{equation}
\sum_{e \in E(C)} y_e \geq \sum_{i \in C} x_i -1 \tag{CLIQUE} \label{eq:clique}
\end{equation}
are valid for (F1).
\end{theorem}
Section \ref{sec:sep} details how these valid inequalities are incorporated into our solution framework.
\subsection{Formulation (F2)\ and valid inequalities}
In formulation (F2), we use continuous variables $q_i \geq 0$, $i \in V$, to measure the external edge costs. This is done by exploiting a Benders decomposition scheme that allows projecting out the $\mathbf{z}$-variables, similarly to what is done for the UFL~(see, e.g., \cite{fischetti2016redesigning}).
By doing so, we obtain a polynomial set of optimality cuts, which are detailed next (note that by adding a ``dummy-arc''-variable $z_{ii}$ with weight zero to formulation (F1), replacing $x_i$ in \eqref{eq:wtd42} with $z_{ii}$ and adding a constraint $z_{ii}\leq x_i$, the connection to the UFL\ becomes directly evident; in the following, we also provide a combinatorial argument for the correctness of these cuts without the need for Benders decomposition).
For ease of exposition, for a given vertex $i$, let $N'(i) = \{j_1,\ldots,j_k,\ldots, j_{|N(i)|} \}$ be the ordered set of adjacent vertices such that $c_{j_1 i}\leq \ldots \leq c_{j_k i}\leq\ldots\leq c_{j_{|N(i)|}i}$. Then the cuts for a given $i \in V$ are given by
\begin{align}
q_i \geq c_{ki} - \sum_{k' = 1}^{k-1} (c_{ki} - c_{k'i})x_{k'} - c_{ki}x_i,\;\forall k\in \{1,\ldots,|N'(i)|\}. \label{eq:f2q} \tag{EXTCOSTS-$i$}
\end{align}
When $x_i=0$, i.e., $i \in V \setminus D$,~\eqref{eq:f2q} is similar to the Benders optimality cuts for the UFL~and, therefore,
these inequalities measure the external edge cost for vertex $i$.
When $x_i=1$, i.e., $i \in D$ (and thus $i$ incurs in no external edge cost),
the right hand side of the cuts is at most zero, due to $-c_{ki}x_i$ and, therefore, they are also correct.
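The tightness argument above can be checked numerically for a single vertex $i$: for every 0/1 assignment, the strongest cut in \eqref{eq:f2q} recovers the cheapest selected-neighbor cost when $x_i=0$, and is non-positive when $x_i=1$. The sorted cost vector in the sketch below is made-up illustrative data.

```python
# Numerical check of the (EXTCOSTS-i) cuts for one vertex i with a made-up
# sorted neighbor cost vector: the tightest cut equals the cheapest selected
# neighbor cost when x_i = 0, and is non-positive when x_i = 1.
from itertools import product

costs = [2, 3, 3, 5, 6]          # c_{j_1 i} <= ... <= c_{j_5 i} (assumed data)

def cut_rhs(k, x_nb, x_i):
    # right-hand side of (EXTCOSTS-i) for 1-based index k
    c_k = costs[k - 1]
    return (c_k
            - sum((c_k - costs[kp - 1]) * x_nb[kp - 1] for kp in range(1, k))
            - c_k * x_i)

for x_i in (0, 1):
    for x_nb in product((0, 1), repeat=len(costs)):
        lb = max(cut_rhs(k, x_nb, x_i) for k in range(1, len(costs) + 1))
        if x_i == 1:
            assert lb <= 0           # q_i need not pay anything when i is in D
        elif any(x_nb):              # (TDOM) forces a selected neighbor if x_i = 0
            assert lb == min(c for c, x in zip(costs, x_nb) if x)
print("EXTCOSTS cuts are tight")
```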
By replacing~\eqref{eq:wtd42} and~\eqref{eq:wtd43} with~\eqref{eq:f2q}, the WTDP\ can be formulated as
\begin{align}
\mbox{(F2)}\quad\quad\quad w^* = \min &\sum_{i\in V} \left(w_i x_ i + q_i\right) + \sum_{e\in E} c_e y_e \notag\\
\mbox{s.t.}\quad\quad & \eqref{eq:dom}, \eqref{eq:f2q}, \eqref{eq:wtd44} \notag \\
&\mathbf{x}\in\{0,1\}^{|V|},\;\mathbf{y}\in\{0,1\}^{|E|}\;\mbox{and}\; q_i \geq 0, \forall i \in V. \notag
\end{align}
We note that $\mathbf{y}$-variables could also be projected out, however, the resulting optimality cuts would not have the same effective structure as \eqref{eq:f2q}.
Namely, as each $y_{e=\{i,j\}}$ links two vertices $i,j \in V$, the corresponding Benders subproblem for a fixed $\mathbf{x}$ would not decompose for each vertex.
\paragraph{Valid inequalities} Inequalities \eqref{eq:f2q} can be lifted by using the $\mathbf{y}$-variables.
\begin{theorem}
Let $i \in V$ and $k \in \{1,\ldots,|N'(i)|\}$. Then inequalities
\begin{align}
q_i \geq c_{ki} - \sum_{k' = 1}^{k-1} (c_{ki} - c_{k'i})x_{k'} - c_{ki}x_i+\sum_{k' = 1}^{k-1} (c_{ki} - c_{k'i})y_{e=\{k',i\}} \label{eq:f2qlifted} \tag{EXTCOSTS-$i$-L}
\end{align}
are valid for (F2).
\end{theorem}
\begin{proof}
When the $\mathbf{y}$-variables are zero, the inequalities coincide with \eqref{eq:f2q} and are thus clearly valid. Now suppose $y_{e=\{l,i\}}=1$ for some $1 \leq l \leq k-1$. By definition of the variables, this means that both $x_i$ and $x_l$ are one, and thus on the right-hand side (rhs) of the cut we have $c_{ki}-(c_{ki} - c_{li}) -c_{ki}=-(c_{ki} - c_{li})\leq 0$. Thus, $(c_{ki} - c_{li})$ (which is the coefficient of $y_e$) can be added to the rhs, which then will be zero, and the inequality still remains valid. The same reasoning also applies if more $y_e$-variables are one.
\end{proof}
Finally, we observe that inequalities~\eqref{eq:domobj} and~\eqref{eq:clique} presented for (F1)\ are also valid for (F2), as they are in the $(\mathbf{x,y})$-space.
\section{Implementation details of the branch-and-cut algorithms}
\label{sec:bc}
In this section, we give implementation details of the branch-and-cut algorithms we designed based on (F1)\ and (F2).
\subsection{Initialization and separation of cuts \label{sec:sep}}
We first describe how the valid inequalities are incorporated in our frameworks. We note that to design a successful branch-and-cut scheme, it is often crucial to carefully select which cuts to add: even if in theory the cuts improve the lower bound, they may lead to slow linear programming (LP)-relaxation solution times due to their density or numerical instability, which is detrimental to the node-throughput and thus to the overall performance of the branch-and-cut. We refer to~\citep{Dey2018,WesselmannStuhl2012, rahmaniani2017benders} for recent works on theoretical and computational studies on the challenges of cutting-plane selection. In Section~\ref{sec:ingredients} we also provide computational results obtained when just adding individual families of valid inequalities to the formulations.
The lifted inequalities~\eqref{eq:wtd43lifted} are added at the initialization, by simply replacing their non-lifted counterpart~\eqref{eq:wtd43}. The objective-cuts~\eqref{eq:f2q}, resp., their lifted version~\eqref{eq:f2qlifted} in (F2)\ are added for the five smallest values of $c_{ki}$ for each $i\in V$ at initialization, and the remaining ones are then separated on-the-fly by enumeration. Inequalities~\eqref{eq:domobj} are also separated by enumeration.
Clique inequalities~\eqref{eq:clique} are separated heuristically.
We observe that inequalities~\eqref{eq:wtd44} are a special case of~\eqref{eq:clique} for $|C|=2$.
For each edge $e=\{i,j\}\in E$, we try to construct a violated inequality~\eqref{eq:clique} by greedily constructing a clique containing $e$.
Thus, initially, let $C=\{i,j\}$. Let $(\mathbf{\tilde x, \tilde y})$ be the LP-values at the current branch-and-cut node.
We sort all vertices $k \in \cap_{i \in C} N(i)$ (i.e., all candidate vertices to grow the clique $C$) in descending order according to $|N(k)|\cdot (\tilde x_k+\epsilon)$, for $\epsilon=0.0001$. Note that by adding any vertex $k$ to $C$, the (potential) violation of the constructed clique inequality changes by $\tilde x_k-\sum_{i \in C} \tilde y_{e=\{i,k\}}$. Thus, we iterate through the sorted list of candidate vertices to grow $C$, and whenever this value is greater than $\epsilon$ for a given $k$, we add it to $C$, and repeat the procedure for this $C$. This is done until no more vertices can be added to $C$. We then add the clique inequality for this $C$ if it is violated. To speed up separation, if an edge $e$ is already contained in a clique inequality added during the current round of separation at a branch-and-cut node, we do not consider it in constructing additional clique inequalities.
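The greedy clique-growing separation described above can be sketched as follows; the LP values in the toy triangle are made up for illustration only, and the routine handles a single seed edge.

```python
# Sketch of the heuristic separation of the clique inequalities (CLIQUE):
# grow a clique greedily from a seed edge as described in the text. The LP
# values in the toy triangle below are made-up illustrative data.
from itertools import combinations

EPS = 1e-4

def grow_clique(seed_edge, adj, x_lp, y_lp):
    C = set(seed_edge)
    while True:
        candidates = set.intersection(*(adj[v] for v in C)) - C
        # candidates sorted by |N(k)| * (x_k + eps), descending, as in the text
        for k in sorted(candidates,
                        key=lambda v: len(adj[v]) * (x_lp[v] + EPS),
                        reverse=True):
            # adding k changes the potential violation by x_k - sum_{v in C} y_{vk}
            if x_lp[k] - sum(y_lp[frozenset((v, k))] for v in C) > EPS:
                C.add(k)
                break
        else:
            return C

def clique_violation(C, x_lp, y_lp):
    # violation of  sum_{e in E(C)} y_e >= sum_{i in C} x_i - 1
    lhs = sum(y_lp[frozenset(e)] for e in combinations(sorted(C), 2))
    return sum(x_lp[v] for v in C) - 1 - lhs

# toy triangle: the fractional point satisfies (TDOM) but violates (CLIQUE)
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
x_lp = {1: 0.6, 2: 0.6, 3: 0.6}
y_lp = {frozenset(e): 0.0 for e in [(1, 2), (1, 3), (2, 3)]}
C = grow_clique((1, 2), adj, x_lp, y_lp)
print(C, clique_violation(C, x_lp, y_lp))
```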
In order to avoid overloading the LP-relaxation with cuts and to allow for a fast node-throughput in the branch-and-cut, we only separate inequalities at the root node and limit separation to ten rounds.
Naturally, to ensure correctness when using (F2), violation of the objective-cuts~\eqref{eq:f2q}, resp., their lifted version~\eqref{eq:f2qlifted}, is also checked whenever an integer solution is obtained during the branch-and-cut.
As inequalities~\eqref{eq:domobj} and in particular~\eqref{eq:clique} can become quite dense, especially if the instance graph has many edges, we use the option \texttt{UseCutFilter} provided by CPLEX (the chosen MIP-solver),
when adding these cuts. With this option, CPLEX checks the cut with the same criteria (e.g., density) as it checks its own general purpose cuts, and adds it only if it determines that it is beneficial.
\subsection{Starting and primal heuristic and local search\label{sec:heur}}
We implemented both a starting heuristic and a primal heuristic; the former gets called at the initialization, while the latter gets called during the execution of the corresponding branch-and-cut algorithms. Both of these heuristics construct feasible solutions, which we then try to improve by applying a local search procedure.
The starting heuristic starts out with the solution $D^H=V$ consisting of the set of all vertices (which clearly is a feasible solution).
We then greedily remove vertices from $D^H$ as long as the solution remains feasible.
Algorithm~\ref{alg:starting} details our starting heuristic.
At each iteration, we use a score $score_i$ for choosing the vertex to remove;
this score gives for each vertex in $D^H$ the improvement in objective solution value it would bring if it is removed.
When removing a vertex, say $i$, its vertex weight $w_i$ and the internal edge costs $w_{ij}$ for $j \in D^H$ are not applicable anymore.
On the other hand, we need to consider the new external edge cost for covering $i$ and, moreover,
we have to consider that all the vertices $j' \in V \setminus D^H$ that are covered by $i$ up to that iteration now need to be covered by another vertex in $D^H$
(thus for covering these vertices we will get new, similar or higher, external edge costs).
We note that removing a vertex only causes local changes in the solution structure,
thus we do not need to calculate $score_i$ for each vertex in $D^H$ from scratch in every iteration.
In particular, when node $i$ gets removed, the score needs to be re-calculated only for neighboring nodes
$j \in N(i)$ and for the corresponding neighbors $j' \in N(j)$
(removing $i$ may change the external edge costs associated with such a $j'$ as both $i$ and $j'$ share $j$ as neighbor).
We observe that verifying if $D^H$ is still a total dominating set after removing $i$
(line~\ref{alg:check} of Algorithm~\ref{alg:starting})
can be done efficiently by storing the number $N^H_j=|N(j) \cap D^H|$ for each $j \in V$,
i.e., the number of neighbors of $j$ contained in $D^H$.
At the beginning of the algorithm execution, it holds that $N^H_j=|N(j)|$, and whenever a vertex $i'$ gets removed in the course of the algorithm, $N^H_j$ gets decreased by one for each $j \in N(i')$.
Therefore, $D^H \setminus \{i\}$ is still a total dominating set if and only if $N^H_j>1$ for each $j \in N(i)$.
\SetKwRepeat{Do}{do}{while}
\SetKw{Continue}{continue}
\begin{algorithm}[h!tb]
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{instance $(G=(V,E), \mathbf{(c,w)})$ of the WTDP}
\Output{total dominating set $D^H$}
$D^H\gets V$ \label{alg:start}\;
$score_i \gets -\infty$ \tcp*[f]{store score function for faster evaluation, $-\infty$ indicates re-calculation needed} \;
$improvingMoveExists \gets false$ \;
\Do{$improvingMoveExists$}
{
$improvingMoveExists \gets false$ \;
$vertexToRemove \gets null$\;
$bestScore=0$\;
\For{$i \in D^H$}
{
\If{$D^H \setminus \{i\}$ is not a total dominating set\label{alg:check}}{\Continue}
\If(\tcp*[f]{re-calculation of score needed}){$score_i=-\infty$}
{
$score_i\gets w_i$\tcp*[f]{vertex costs saved}\;
\For{$j \in N(i)\cap D^H$}
{
$score_i\gets score_i+w_{ij}$\tcp*[f]{internal edge costs saved}\;
}
$w^* \gets \min_{j \in D^H \setminus \{i\}} w_{ij}$\tcp*[f]{external edge cost for covering $i$ updated}\;
$score_i\gets score_i -w^*$\;
\For(\tcp*[f]{external edge costs for vertices currently covered by $i$ updated}){$j \in N(i)\cap (V \setminus D^H)$}
{
\If{$i \in \argmin_{j'\in D^H} w_{jj'}$}
{
$w_j^*\gets \min_{j' \in (D^H \setminus \{i\})} w_{jj'}$\tcp*[f]{external edge cost for covering $j$ updated}\;
$score_i\gets score_i-w^*_{j}$\;
}
}
}
\If{$score_i>bestScore$}
{
$bestScore=score_i$ \label{alg:upstart}\;
$vertexToRemove \gets i$\;
$improvingMoveExists \gets true$ \label{alg:upend}\;
}
}
\If{$improvingMoveExists$}
{
$D^H \gets D^H \setminus \{vertexToRemove\}$\;
\For(\tcp*[f]{scores for the neighbors of $vertexToRemove$ and their neighbors need to be updated}){$\forall j \in N(vertexToRemove)$}
{
\For{$\forall j' \in N(j)$}
{
$score_{j'}\gets -\infty$\;
}
}
}
}
\caption{Starting heuristic\label{alg:starting} }
\end{algorithm}
The primal heuristic is guided by the $\mathbf{(\tilde x)}$-values of the LP-relaxation at the current branch-and-cut node. First, we sort the vertices $i \in V$ in descending order according to $\tilde x_i$.
Afterwards, ties are broken first by degree of the vertices (again in descending order), and if there remain ties, they are broken by vertex-index. Let $sorted$ be the list of sorted vertices, $D^H=\emptyset$ (the solution to be constructed) and $covered=\emptyset$ (the list of vertices covered by $D^H$).
To construct a heuristic solution, we iterate through $sorted$ and whenever $|N(i) \cap (V\setminus covered)|>0$ for the currently considered vertex $i$, i.e., $i$ covers a vertex not yet covered by the current partial solution $D^H$, we add $i$ to $D^H$ and update $covered$ by $covered \cup N(i)$. We stop when $covered=V$, i.e., $D^H$ is a total dominating set and thus a feasible solution.
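The construction above can be sketched compactly in Python (an illustrative sketch with hypothetical identifiers, assuming adjacency lists, vertex degrees, and the LP values $\tilde x$ given as dictionaries; for total domination we assume the graph has no isolated vertices):

```python
def lp_guided_heuristic(adj, degree, x_tilde):
    """Greedy construction guided by LP values: vertices are sorted by
    LP value (descending), then degree (descending), then index."""
    vertices = sorted(adj, key=lambda v: (-x_tilde[v], -degree[v], v))
    DH, covered = set(), set()
    for i in vertices:
        if set(adj[i]) - covered:      # i covers a not-yet-covered vertex
            DH.add(i)
            covered |= set(adj[i])
        if len(covered) == len(adj):   # every vertex covered -> feasible
            break
    return DH
```

Since every vertex has a neighbor, the loop always terminates with a total dominating set.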
The local search procedure is shown in Algorithm~\ref{alg:local}. It uses two local search operators, namely adding a vertex $i$ to the current solution $D^H$ and removing a vertex $i$ from the current solution $D^H$. The procedures \texttt{testAddVertex($D^H$,$i$)} and \texttt{testRemoveVertex($D^H$,$i$)} return the change in the objective function value caused by adding/removing a vertex $i$. This can be done efficiently, as the changes caused by these moves are of a local nature, as described above (e.g., the test for the change caused by removing $i$ is exactly the calculation of the score-function in Algorithm~\ref{alg:starting}). We first try the add-move, and when this move cannot improve the current solution anymore, we try the remove-move. If it is successful, we go back to trying the add-move; if not, the local search terminates.
We iterate through the vertices by their indexes, and if a move is possible, we apply it, and then restart (i.e., we use a \emph{first improvement} strategy).
\SetKwRepeat{Do}{do}{while}
\SetKw{Continue}{continue}
\begin{algorithm}[h!tb]
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{total dominating set $D^H$}
\Output{total dominating set $D^H$ (with potentially better objective function value)}
$improvingMoveExists \gets false$ \;
\Do{$improvingMoveExists$}
{
$improvingMoveExists \gets false$ \;
\For{$i \not \in D^H$}
{
\If{\texttt{testAddVertex($D^H$,$i$)}$>0$}
{
$D^H=D^H \cup\{i\}$\;
$improvingMoveExists \gets true$ \;
\bf{break}\;
}
}
\If{$improvingMoveExists==false$}
{
\For{$i \in D^H$}
{
\If{\texttt{testRemoveVertex($D^H$,$i$)}$>0$}
{
$D^H=D^H \setminus \{i\}$\;
$improvingMoveExists \gets true$ \;
\bf{break}\;
}
}
}
}
\caption{Local search\label{alg:local} }
\end{algorithm}
\subsection{Branching priorities \label{sec:bra}}
For both (F1)\ and (F2), once the $\mathbf{x}$-variables are fixed to binary, the values of all the other variables (i.e., $\mathbf{(y,z)}$, resp., $\mathbf{(y,q)}$) automatically follow. We thus give branching priorities $100\cdot|N(i)|$ to the $\mathbf{x}$-variables in the MIP-solver, CPLEX in our case (while the branching priorities of the other variables were left at their default value, i.e., zero).
\section{A genetic algorithm \label{sec:genalg}}
Genetic algorithms (GAs) are among the most prominent metaheuristic approaches for solving (combinatorial) optimization problems; we refer the reader to the book~\cite{Kramer2017} for an overview on essential elements of this class of
procedures.
GAs have also been developed for tackling dominating set problems.
For instance, a hybrid GA has been developed in~\cite{hedar2010hybrid} for the \emph{minimum dominating set problem},
where the GA methodology is combined with local search and intensification schemes;
likewise, in~\cite{giap2014parallel} a parallelized GA is presented for the same problem.
Further examples on GA-based approaches for related problems can be found
in~\cite{sundar2014steady} for the \emph{dominating tree problem},
and in~\cite{RengaswamyEtAl2017} for the \emph{minimum weight minimum connected dominating set problem}.
In their general setting, GAs explore the solution space by keeping a set of feasible solutions, denoted as \emph{population}.
Starting from an initial population, the algorithm iteratively creates a new population (i.e., new solutions) by typically using the following three (randomized) bio-inspired operators: \emph{selection}, \emph{mutation} and \emph{crossover}.
The \emph{selection} operator selects a subset of the current population (according to a \emph{fitness} value of each solution), from which (usually) pairs of solutions are taken; a \emph{crossover} operator is then applied to combine these pairs into new solutions.
To these new solutions a \emph{mutation} operator is applied, which randomly modifies the solution in order to keep the population diverse.
Algorithm~\ref{alg:genetic} gives an outline of the genetic algorithm we developed for the WTDP.
The initial population is constructed by using a greedy randomized adaptive search procedure (GRASP) version of our starting heuristic.
GRASP is a general technique to generate (diverse) heuristic solutions by randomizing the construction phase.
For further details on GRASP, the reader is referred to the recent textbook~\cite{ResendeRibeiro2016}.
In order to turn our starting heuristic into a GRASP, we add randomization to the choice of the vertex with the best score (i.e., lines \ref{alg:upstart}-\ref{alg:upend}):
If a vertex $i$ has $score_i>bestScore$, we generate a random integer in $[0,99]$ and only apply lines \ref{alg:upstart}-\ref{alg:upend} if this integer is larger than a given value $cutoff$.
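This randomized acceptance can be rendered as follows (a Python sketch under our assumptions; the function name is ours):

```python
import random

def accept_improving_move(score_i, best_score, cutoff):
    """Randomized acceptance used to turn the greedy choice into a GRASP:
    an improving candidate is only accepted if a random integer in [0, 99]
    is larger than the cutoff parameter (cutoff = 30 in our setting)."""
    if score_i <= best_score:
        return False               # not an improving candidate at all
    return random.randint(0, 99) > cutoff
```

A larger cutoff thus makes the construction more random, as improving candidates are skipped more often.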
As \emph{crossover} operator, we also use modifications of Algorithm~\ref{alg:starting}, resp., the GRASP.
In particular, for the crossover between two solutions $D^1$, $D^2$, we use the GRASP, and set $D^H \gets D^1 \cup D^2$ as the initial solution in line~\ref{alg:start}.
For \emph{mutation}, we generate a random integer $m$ in a given range $[m_l,m_u]$ and then randomly remove $m$ vertices from the current solution $D^H$.
After removing these vertices, $D^H$ may be infeasible; in order to make it feasible, we apply the same heuristic as our primal heuristic (with just the degree of the vertices as sorting criterion, as of course no LP-values are available here).
After mutation, we also apply the local search procedure described in Algorithm~\ref{alg:local}.
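A compact sketch of the mutation step (random removal followed by a degree-guided greedy repair, mirroring the primal heuristic without LP values; Python, with illustrative identifiers only):

```python
import random

def mutate(adj, degree, DH, m_l, m_u):
    """Remove a random number m in [m_l, m_u] of vertices from D^H,
    then greedily repair feasibility (vertices sorted by degree)."""
    DH = set(DH)
    m = random.randint(m_l, m_u)
    for i in random.sample(sorted(DH), min(m, len(DH))):
        DH.discard(i)
    covered = {j for i in DH for j in adj[i]}   # vertices dominated by D^H
    for i in sorted(adj, key=lambda v: (-degree[v], v)):
        if set(adj[i]) - covered:               # i covers an uncovered vertex
            DH.add(i)
            covered |= set(adj[i])
        if len(covered) == len(adj):            # feasibility restored
            break
    return DH
```

The repair loop visits every vertex in the worst case, so (absent isolated vertices) the returned set is always a total dominating set.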
The newly obtained solutions are merged with the current population, and then the $populationSize$ best are selected as the next generation, for a given value of $populationSize$.
As a fitness value for selection, we use the objective function values of the solutions. In order to keep the population diverse, we keep at most one solution for each fitness value and size $|D^H|$ in the population (this is done by checking if the current population already contains a solution with the fitness value and size of the currently created solution, and if yes, the solution is discarded).
To create the population, we run the GRASP $initialPopulationSize$ times, for a given value of parameter $initialPopulationSize$ and then select the $populationSize$ best solutions.
We used the following parameter values in our implementation; they were determined by preliminary computational experiments: $initialPopulationSize=100$, $populationSize=40$, $cutoff=30$, $[m_l,m_u]=[1,4]$, and $nIterations=20$.
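The diversity-preserving selection described above can be sketched as follows (Python; representing solutions as pairs of objective value and vertex set is an assumption of this sketch):

```python
def select(population, k):
    """Keep at most one solution per (objective value, |D^H|) pair,
    then return the k best by objective value (minimization)."""
    unique = {}
    for obj, sol in population:
        key = (obj, len(sol))
        if key not in unique:       # discard duplicates w.r.t. fitness and size
            unique[key] = (obj, sol)
    return sorted(unique.values(), key=lambda p: p[0])[:k]
```

Discarding same-fitness, same-size solutions keeps the population diverse at negligible cost.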
\SetKwRepeat{Do}{do}{while}
\SetKw{Continue}{continue}
\begin{algorithm}[h!tb]
\DontPrintSemicolon
\SetKwInOut{Input}{input}\SetKwInOut{Output}{output}
\Input{instance $I=(G=(V,E), \mathbf{(c,w)})$ of the WTDP, parameters $initialPopulationSize, populationSize, cutoff, [m_l,m_u], nIterations$}
\Output{total dominating set $D^H$}
$population \gets \emptyset$\label{alg:grasp1} \;
\For{$i=1,\ldots, initialPopulationSize$}
{
$newD \gets \texttt{GRASP}(I, cutoff)$\;
\If{there is no solution with the same objective value and size as $newD$ in $population$}
{
$population \gets population \cup newD$\label{alg:grasp2} \;
}
}
$population \gets \texttt{select}(population,populationSize)$ \;
\For{$i=1,\ldots, nIterations$}
{
\For{all pairs $D^1,D^2$ from $population$}
{
$newD \gets \texttt{crossover}(D^1,D^2, cutoff)$\;
$newD \gets \texttt{mutation}(newD,[m_l,m_u])$\;
$newD \gets \texttt{localSearch}(newD)$\;
\If{there is no solution with the same objective value and size as $newD$ in $population$}
{
$population \gets population \cup newD$ \;
}
}
$population \gets \texttt{select}(population,populationSize)$\;
}
$D^H\gets \texttt{select}(population,1)$\;
\caption{Genetic algorithm\label{alg:genetic} }
\end{algorithm}
\section{Computational results \label{sec:compres}}
The branch-and-cut framework was implemented in C++ using CPLEX 12.9 as MIP solver and the genetic algorithm was also implemented in C++.
The computational study was carried out on an Intel Xeon E5 v4 CPU with 2.5 GHz and 6GB memory using a single thread. All CPLEX parameters were left at default values (except branching priorities, see Section~\ref{sec:bra}), and we set the timelimit for a run to 1800 seconds (similar to the timelimit in~\cite{MaEtAl2019}).
\subsection{Instance description}
\label{subsec:instdesc}
\paragraph{Instances similar to the instances of~\cite{MaEtAl2019}}
In~\cite{MaEtAl2019}, the authors created instances to test their formulations.
Unfortunately, the instances are not available online, thus we generated our own, following the same procedure as described in~\cite{MaEtAl2019}.
These instances are generated according to the
Erd\"os-R\'enyi model, where one fixes the number of nodes, $|V|$ and a probability $p\in[01]$ that allows to control the edge density of the resulting graph. Edge and vertex weights,
$\mathbf{c}$ and $\mathbf{w}$, respectively, are random integers between one and five.
As in~\cite{MaEtAl2019}, we considered $|V|\in\{20,50,100\}$ and $p\in\{0.2,0.5,0.8\}$,
which leads to instances ranging from $20$ nodes and $31$ edges to $100$ nodes and $3943$ edges.
For each pair $(n,p)$ we generated five instances (instead of one as done in~\cite{MaEtAl2019}), using the \texttt{gnp\_random\_graph(n,p)}-method from the \texttt{networkx}-package~\cite{hagberg2008exploring} to obtain the Erd\"os-R\'enyi graphs. This set has $3 \cdot 3 \cdot 5 = 45$ instances.
We denote this set of instances as \texttt{MA}, individual instances are addressed as \texttt{MA}$-|V|-p-id$, where $id \in \{1,\ldots, 5\}$.
\paragraph{New instances}
To analyze the influence of different weight structures, we generated an additional set of instances, denoted as \texttt{NEW}\ (in the \texttt{MA}\ instances, both the vertex and the edge weights are drawn from the same small range).
We again used the Erd\"os-R\'enyi model, and considered $|V|\in\{75,100,125\}$ and $p\in\{0.2,0.5,0.8\}$.
We used the following range-combinations for $(\mathbf{c},\mathbf{w})$: $([1,50],[1,10])$, $([1,25],[1,25])$,$([1,10],[1,50])$.
For each combination of $(|V|,p)$ and $(\mathbf{c},\mathbf{w})$ we created five instances.
Thus, this set has $3 \cdot 3 \cdot 3 \cdot 5 = 135$ instances.
The instance set is denoted as \texttt{NEW}, individual instances are addressed as \texttt{NEW}$-|V|-p-c_u-id$,
where $id \in \{1,\ldots, 5\}$ and $c_u$ is the upper bound of the considered range for $\mathbf{c}$ (i.e., $c_u \in \{10,25,50\}$).
Both sets of instances we created are available online at \url{https://msinnl.github.io/pages/instancescodes.html}\sloppy.
\subsection{Assessing the effect of the valid inequalities \label{sec:ingredients}}
We now analyze the effect of the families of valid inequalities presented in Section~\ref{sec:form}.
In order to test this, we added them individually to the corresponding model of the LP-relaxation of (F1)\ and (F2), and then also added all of them together.
There is an exponential number of cliques; therefore, in order to get an impression of the effect of clique inequalities~\eqref{eq:clique}, we heuristically calculate edge clique covers (i.e., a set of cliques such that every edge occurs in at least one of the cliques) using our separation heuristic described in Section~\ref{sec:sep} (using the vertex-degree as sorting criterion), and add the corresponding inequalities induced by the cliques in this cover.
In Figure~\ref{fig:lpplot3}, we give a plot of the obtained LP gaps (over both instance sets), calculated as $100 \cdot (w^B-w^{LP})/w^B$, where $w^B$ is the best solution value we obtained using our approaches, and $w^{LP}$ is the value of the considered LP relaxation. Note that the figure does not give a plot for (F1)+\eqref{eq:wtd43lifted}, (F2), (F2)+\eqref{eq:f2qlifted}, (F2)+\eqref{eq:domobj}, and (F2)+\eqref{eq:clique}. This is because using the liftings~\eqref{eq:wtd43lifted}, resp.,~\eqref{eq:f2qlifted} on their own had no effect on the value of the LP relaxation. Moreover, the values obtained by (F2), (F2)+\eqref{eq:domobj}, and (F2)+\eqref{eq:clique} are just the same as their counterparts with (F1), as in (F2), the $\mathbf{z}$-variables are projected out in a Benders way, and~\eqref{eq:domobj} and~\eqref{eq:clique} operate in the $(\mathbf{x},\mathbf{y})$-space.
On the other hand, there is a slight difference between (F1)+all and (F2)+all, with (F1)+all giving slightly lower gaps. An explanation for this is that once inequalities~\eqref{eq:domobj} and~\eqref{eq:clique} are present in the model, the liftings~\eqref{eq:wtd43lifted}, resp.,~\eqref{eq:f2qlifted} also start to have an effect on the bounds (as both~\eqref{eq:domobj} and~\eqref{eq:clique} ``push'' the $\mathbf{y}$-variables, which are the variables added in the lifting).
Overall, we see that both~\eqref{eq:domobj} and~\eqref{eq:clique} on their own result in a considerable improvement of the LP gap, e.g., without any inequalities only around 20\% of the instances have an LP gap of 20\% or less, while adding~\eqref{eq:domobj} or~\eqref{eq:clique} increases the number of instances with such a gap to about 30\%.
Adding all families together gives again a considerable improvement, now for about 50\% of instances, the LP gap is 20\% or less. When adding all inequalities, the largest LP gap is around 50\%, while without adding any inequalities, the largest gap is over 70\%.
\begin{figure}[h!tb]
\centering \includegraphics[width=.80\linewidth]{lp-gap-plot-3.pdf}
\caption{LP-gap plot for adding families of valid inequalities to (F1)\ and (F2).}\label{fig:lpplot3}
\end{figure}
\subsection{Detailed results \label{sec:detailed}}
In this section, we provide detailed results obtained by the branch-and-cut frameworks based on the formulations (F1)\ and (F2)\ described in Section~\ref{sec:bc}, and also the genetic algorithm.
A comparison with the models of~\cite{MaEtAl2019} is also done.
In Table~\ref{ta:comp} we report the following results for instance set \texttt{MA}.
In columns (F1)+ and (F2)+, we report the results attained by our branch-and-cut algorithms.
In columns (F1)\ and (F2)\, we report the results attained
when solving (F1)\ and (F2)\ directly with CPLEX, without any of the branch-and-cut ingredients presented in Section~\ref{sec:bc} (note that in case of (F2), constraints~\eqref{eq:f2q} are separated as they are needed to ensure correctness).
In columns (MA1), (MA2), and (MA3)\,
we report the results attained when solving directly by CPLEX
the formulations provided in~\cite{MaEtAl2019}.
In this table, we report for each approach the runtime (column $t[s]$), the objective value of the best obtained solution (column $w^B$) and the optimality gap (column $g[\%]$, calculated as $100\cdot(w^B-LB)/w^B$, where $LB$ is the obtained lower bound).
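For concreteness, the two gap measures used throughout this section correspond to the following trivial computations (a direct Python rendering of the formulas above, for a minimization problem):

```python
def optimality_gap(w_best, lower_bound):
    """g[%] = 100 * (w^B - LB) / w^B."""
    return 100.0 * (w_best - lower_bound) / w_best

def lp_gap(w_best, w_lp):
    """LP gap = 100 * (w^B - w^LP) / w^B."""
    return 100.0 * (w_best - w_lp) / w_best
```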
The results reported in Table~\ref{ta:comp} show
that the approaches proposed in this paper (i.e., (F1), (F2), (F1)+ and (F2)+)
are considerably more effective than those proposed in~\cite{MaEtAl2019}:
All of our approaches, with the exception of (F1), managed to solve all the instances within the timelimit, while none of the approaches (MA1), (MA2), and (MA3)\ managed to solve the instances with 100 nodes to optimality (and (MA1), (MA2)\ also fail for some of the instances with 50 nodes).
Moreover, for the instances where our approaches as well as the approaches of~\cite{MaEtAl2019} can solve the problem,
our approaches are up to 500 times faster; see, e.g., instance \texttt{MA-50-0.5-4}, where (F2)+ takes 2 seconds, while (MA3)\ takes 1201 seconds (and (MA1), (MA2)\ reach the timelimit).
Likewise, the gaps attained by our approaches (when the time limit is reached)
are considerably smaller than those attained by the approaches of~\cite{MaEtAl2019}
(for instance, while the maximum gap attained by (F1)\ is 9.06\%, the maximum gap attained by (MA3)\ is 72.53\%).
When comparing among our approaches, we see that for all but two instances (\texttt{MA-100-0.8-3} and \texttt{MA-100-0.8-4}), (F2)+ is the fastest, and for these two instances (F2)\ is the fastest.
Not surprisingly, the instances become harder with a larger number of vertices; a higher density ($p$) also seems to make the instances harder.
\setlength{\tabcolsep}{3pt}
\begin{table}[t!]
\centering
\caption{Comparison with previous approaches (MA1), (MA2), (MA3)\ from literature on instance set \texttt{MA} \label{ta:comp}}
\begingroup\footnotesize
\begin{tabular}{rrr|rrrrrrrrrrrr|rrrrrrrrr}
\toprule
\multicolumn{3}{c}{instance} & \multicolumn{3}{c}{(F1)} & \multicolumn{3}{c}{(F1)+} & \multicolumn{3}{c}{(F2)} & \multicolumn{3}{c}{(F2)+} & \multicolumn{3}{c}{(MA1)} & \multicolumn{3}{c}{(MA2)} & \multicolumn{3}{c}{(MA3)} \\ $|V|$ & $p$ & $id$ & $t[s]$ & $w^B$ & g[\%] & $t[s]$ & $w^B$ & g[\%]& $t[s]$ & $w^B$ & g[\%]& $t[s]$ & $w^B$ & g[\%]& $t[s]$ & $w^B$ & g[\%]& $t[s]$ & $w^B$ & g[\%]& $t[s]$ & $w^B$ & g[\%] \\ \midrule
20 & 0.2 & 1 & \textbf{1} & 63 & 0.00 & \textbf{1} & 63 & 0.00 & \textbf{1} & 63 & 0.00 & \textbf{1} & 63 & 0.00 & 2 & 63 & 0.00 & 3 & 63 & 0.00 & 2 & 63 & 0.00 \\
20 & 0.2 & 2 & \textbf{1} & 58 & 0.00 & \textbf{1} & 58 & 0.00 & \textbf{1} & 58 & 0.00 & \textbf{1} & 58 & 0.00 & 3 & 58 & 0.00 & 3 & 58 & 0.00 & \textbf{1} & 58 & 0.00 \\
20 & 0.2 & 3 & 3 & 58 & 0.00 & \textbf{1} & 58 & 0.00 & \textbf{1} & 58 & 0.00 & \textbf{1} & 58 & 0.00 & 3 & 58 & 0.00 & 3 & 58 & 0.00 & \textbf{1} & 58 & 0.00 \\
20 & 0.2 & 4 & \textbf{1} & 51 & 0.00 & \textbf{1} & 51 & 0.00 & \textbf{1} & 51 & 0.00 & \textbf{1} & 51 & 0.00 & 4 & 51 & 0.00 & 3 & 51 & 0.00 & 2 & 51 & 0.00 \\
20 & 0.2 & 5 & \textbf{1} & 55 & 0.00 & \textbf{1} & 55 & 0.00 & \textbf{1} & 55 & 0.00 & \textbf{1} & 55 & 0.00 & 3 & 55 & 0.00 & 4 & 55 & 0.00 & \textbf{1} & 55 & 0.00 \\
20 & 0.5 & 1 & \textbf{1} & 44 & 0.00 & \textbf{1} & 44 & 0.00 & \textbf{1} & 44 & 0.00 & \textbf{1} & 44 & 0.00 & 5 & 44 & 0.00 & 4 & 44 & 0.00 & \textbf{1} & 44 & 0.00 \\
20 & 0.5 & 2 & \textbf{1} & 47 & 0.00 & \textbf{1} & 47 & 0.00 & \textbf{1} & 47 & 0.00 & \textbf{1} & 47 & 0.00 & 4 & 47 & 0.00 & 5 & 47 & 0.00 & \textbf{1} & 47 & 0.00 \\
20 & 0.5 & 3 & \textbf{1} & 46 & 0.00 & \textbf{1} & 46 & 0.00 & \textbf{1} & 46 & 0.00 & \textbf{1} & 46 & 0.00 & 6 & 46 & 0.00 & 4 & 46 & 0.00 & \textbf{1} & 46 & 0.00 \\
20 & 0.5 & 4 & \textbf{1} & 40 & 0.00 & \textbf{1} & 40 & 0.00 & \textbf{1} & 40 & 0.00 & \textbf{1} & 40 & 0.00 & 4 & 40 & 0.00 & 3 & 40 & 0.00 & \textbf{1} & 40 & 0.00 \\
20 & 0.5 & 5 & \textbf{1} & 41 & 0.00 & \textbf{1} & 41 & 0.00 & \textbf{1} & 41 & 0.00 & \textbf{1} & 41 & 0.00 & 4 & 41 & 0.00 & 3 & 41 & 0.00 & \textbf{1} & 41 & 0.00 \\
20 & 0.8 & 1 & \textbf{1} & 37 & 0.00 & \textbf{1} & 37 & 0.00 & \textbf{1} & 37 & 0.00 & \textbf{1} & 37 & 0.00 & 3 & 37 & 0.00 & 4 & 37 & 0.00 & 2 & 37 & 0.00 \\
20 & 0.8 & 2 & 3 & 35 & 0.00 & \textbf{1} & 35 & 0.00 & \textbf{1} & 35 & 0.00 & \textbf{1} & 35 & 0.00 & 4 & 35 & 0.00 & 4 & 35 & 0.00 & \textbf{1} & 35 & 0.00 \\
20 & 0.8 & 3 & \textbf{1} & 40 & 0.00 & \textbf{1} & 40 & 0.00 & \textbf{1} & 40 & 0.00 & \textbf{1} & 40 & 0.00 & 5 & 40 & 0.00 & 4 & 40 & 0.00 & \textbf{1} & 40 & 0.00 \\
20 & 0.8 & 4 & \textbf{1} & 34 & 0.00 & \textbf{1} & 34 & 0.00 & \textbf{1} & 34 & 0.00 & \textbf{1} & 34 & 0.00 & 4 & 34 & 0.00 & 5 & 34 & 0.00 & \textbf{1} & 34 & 0.00 \\
20 & 0.8 & 5 & \textbf{1} & 34 & 0.00 & \textbf{1} & 34 & 0.00 & \textbf{1} & 34 & 0.00 & \textbf{1} & 34 & 0.00 & 5 & 34 & 0.00 & 3 & 34 & 0.00 & \textbf{1} & 34 & 0.00 \\
\midrule
50 & 0.2 & 1 & 2 & 111 & 0.00 & \textbf{1} & 111 & 0.00 & 4 & 111 & 0.00 & \textbf{1} & 111 & 0.00 & 991 & 111 & 0.00 & 216 & 111 & 0.00 & 229 & 111 & 0.00 \\
50 & 0.2 & 2 & 3 & 106 & 0.00 & \textbf{1} & 106 & 0.00 & 3 & 106 & 0.00 & \textbf{1} & 106 & 0.00 & 1380 & 106 & 0.00 & 438 & 106 & 0.00 & 309 & 106 & 0.00 \\
50 & 0.2 & 3 & 8 & 111 & 0.00 & \textbf{1} & 111 & 0.00 & 4 & 111 & 0.00 & \textbf{1} & 111 & 0.00 & TL & 114 & 10.40 & 755 & 111 & 0.00 & 552 & 111 & 0.00 \\
50 & 0.2 & 4 & 4 & 101 & 0.00 & \textbf{1} & 101 & 0.00 & 4 & 101 & 0.00 & \textbf{1} & 101 & 0.00 & 796 & 101 & 0.00 & 322 & 101 & 0.00 & 409 & 101 & 0.00 \\
50 & 0.2 & 5 & 13 & 108 & 0.00 & \textbf{1} & 108 & 0.00 & 6 & 108 & 0.00 & \textbf{1} & 108 & 0.00 & TL & 108 & 12.19 & 1565 & 108 & 0.00 & 1129 & 108 & 0.00 \\
50 & 0.5 & 1 & 4 & 82 & 0.00 & 3 & 82 & 0.00 & 3 & 82 & 0.00 & \textbf{2} & 82 & 0.00 & TL & 82 & 19.75 & TL & 82 & 9.11 & 631 & 82 & 0.00 \\
50 & 0.5 & 2 & 5 & 85 & 0.00 & \textbf{2} & 85 & 0.00 & 3 & 85 & 0.00 & \textbf{2} & 85 & 0.00 & TL & 88 & 19.39 & 1579 & 85 & 0.00 & 794 & 85 & 0.00 \\
50 & 0.5 & 3 & 22 & 84 & 0.00 & 3 & 84 & 0.00 & 4 & 84 & 0.00 & \textbf{2} & 84 & 0.00 & TL & 87 & 18.31 & TL & 85 & 9.37 & 1082 & 84 & 0.00 \\
50 & 0.5 & 4 & 16 & 82 & 0.00 & 3 & 82 & 0.00 & 4 & 82 & 0.00 & \textbf{2} & 82 & 0.00 & TL & 82 & 17.55 & TL & 82 & 14.90 & 1201 & 82 & 0.00 \\
50 & 0.5 & 5 & 15 & 82 & 0.00 & 4 & 82 & 0.00 & 4 & 82 & 0.00 & \textbf{2} & 82 & 0.00 & TL & 83 & 20.30 & TL & 82 & 12.69 & 1062 & 82 & 0.00 \\
50 & 0.8 & 1 & 6 & 77 & 0.00 & 7 & 77 & 0.00 & 5 & 77 & 0.00 & \textbf{4} & 77 & 0.00 & TL & 77 & 8.39 & 1063 & 77 & 0.00 & 876 & 77 & 0.00 \\
50 & 0.8 & 2 & 3 & 72 & 0.00 & 3 & 72 & 0.00 & 3 & 72 & 0.00 & \textbf{2} & 72 & 0.00 & 452 & 72 & 0.00 & 307 & 72 & 0.00 & 299 & 72 & 0.00 \\
50 & 0.8 & 3 & 3 & 74 & 0.00 & 3 & 74 & 0.00 & 4 & 74 & 0.00 & \textbf{2} & 74 & 0.00 & 1100 & 74 & 0.00 & 587 & 74 & 0.00 & 446 & 74 & 0.00 \\
50 & 0.8 & 4 & 6 & 76 & 0.00 & 5 & 76 & 0.00 & 4 & 76 & 0.00 & \textbf{3} & 76 & 0.00 & TL & 76 & 10.23 & 1242 & 76 & 0.00 & 736 & 76 & 0.00 \\
50 & 0.8 & 5 & 13 & 79 & 0.00 & 15 & 79 & 0.00 & \textbf{7} & 79 & 0.00 & \textbf{7} & 79 & 0.00 & TL & 79 & 16.13 & 1720 & 79 & 0.00 & 1310 & 79 & 0.00 \\
\midrule
100 & 0.2 & 1 & 898 & 175 & 0.00 & 92 & 175 & 0.00 & 566 & 175 & 0.00 & \textbf{50} & 175 & 0.00 & TL & 183 & 38.79 & TL & 178 & 34.82 & TL & 175 & 56.48 \\
100 & 0.2 & 2 & 251 & 174 & 0.00 & 14 & 174 & 0.00 & 276 & 174 & 0.00 & \textbf{12} & 174 & 0.00 & TL & 174 & 34.17 & TL & 175 & 34.60 & TL & 188 & 61.21 \\
100 & 0.2 & 3 & TL & 178 & 6.48 & 239 & 177 & 0.00 & 1434 & 177 & 0.00 & \textbf{121} & 177 & 0.00 & TL & 195 & 43.27 & TL & 189 & 41.47 & TL & 183 & 60.95 \\
100 & 0.2 & 4 & TL & 169 & 2.17 & 81 & 169 & 0.00 & 562 & 169 & 0.00 & \textbf{37} & 169 & 0.00 & TL & 172 & 37.79 & TL & 175 & 36.77 & TL & 172 & 59.06 \\
100 & 0.2 & 5 & TL & 171 & 5.39 & 97 & 167 & 0.00 & 1473 & 167 & 0.00 & \textbf{47} & 167 & 0.00 & TL & 170 & 39.18 & TL & 172 & 38.20 & TL & 173 & 60.06 \\
100 & 0.5 & 1 & TL & 147 & 2.69 & 304 & 147 & 0.00 & 292 & 147 & 0.00 & \textbf{108} & 147 & 0.00 & TL & 166 & 51.92 & TL & 160 & 49.55 & TL & 149 & 62.37 \\
100 & 0.5 & 2 & 707 & 144 & 0.00 & 158 & 144 & 0.00 & 152 & 144 & 0.00 & \textbf{51} & 144 & 0.00 & TL & 154 & 44.81 & TL & 148 & 42.81 & TL & 150 & 66.31 \\
100 & 0.5 & 3 & 995 & 147 & 0.00 & 401 & 147 & 0.00 & 186 & 147 & 0.00 & \textbf{128} & 147 & 0.00 & TL & 157 & 48.58 & TL & 149 & 43.22 & TL & 160 & 62.88 \\
100 & 0.5 & 4 & TL & 149 & 9.06 & 725 & 146 & 0.00 & 289 & 146 & 0.00 & \textbf{214} & 146 & 0.00 & TL & 156 & 50.99 & TL & 150 & 46.05 & TL & 160 & 63.98 \\
100 & 0.5 & 5 & TL & 139 & 3.18 & 466 & 139 & 0.00 & 242 & 139 & 0.00 & \textbf{155} & 139 & 0.00 & TL & 148 & 49.27 & TL & 145 & 44.48 & TL & 152 & 71.38 \\
100 & 0.8 & 1 & 1655 & 136 & 0.00 & 346 & 136 & 0.00 & 172 & 136 & 0.00 & \textbf{97} & 136 & 0.00 & TL & 150 & 55.44 & TL & 141 & 51.52 & TL & 136 & 62.64 \\
100 & 0.8 & 2 & 759 & 140 & 0.00 & 894 & 140 & 0.00 & 283 & 140 & 0.00 & \textbf{249} & 140 & 0.00 & TL & 146 & 49.55 & TL & 141 & 44.35 & TL & 147 & 65.31 \\
100 & 0.8 & 3 & 1212 & 141 & 0.00 & 1032 & 141 & 0.00 & \textbf{236} & 141 & 0.00 & 325 & 141 & 0.00 & TL & 144 & 53.13 & TL & 149 & 50.87 & TL & 153 & 71.51 \\
100 & 0.8 & 4 & TL & 141 & 7.45 & 1652 & 141 & 0.00 & \textbf{334} & 141 & 0.00 & 495 & 141 & 0.00 & TL & 148 & 52.31 & TL & 147 & 50.83 & TL & 142 & 71.17 \\
100 & 0.8 & 5 & 990 & 134 & 0.00 & 509 & 134 & 0.00 & 231 & 134 & 0.00 & \textbf{160} & 134 & 0.00 & TL & 152 & 55.60 & TL & 148 & 51.19 & TL & 156 & 72.53 \\
\bottomrule
\end{tabular}
\endgroup
\end{table}
\pagebreak
In the following, we focus on the instance set \texttt{NEW}\ to gain more insight into the performance of our solution algorithms, in particular, their behavior with respect to different weight structures.
In Figure~\ref{fig:ilpplot}, we present plots of runtimes to optimality, and optimality gaps (for the unsolved instances of the respective approaches) for (F1)\, (F2), (F1)+ and (F2)+.
Figures~\ref{fig:ilpplot1} and~\ref{fig:ilpplot2} give runtimes and optimality gaps, respectively,
for the complete set of instances \texttt{NEW}, while Figures~\ref{fig:ilpplot3}-\ref{fig:ilpplot7}
show results for the different weight structures, i.e.,
Figures~\ref{fig:ilpplot3} and~\ref{fig:ilpplot4} are for the instances with $c_u=10$,
Figures~\ref{fig:ilpplot5} and~\ref{fig:ilpplot6} are for the instances with $c_u=25$ and
Figure~\ref{fig:ilpplot7} is for the instances with $c_u=50$ (since all instances are solved to optimality we do not provide an optimality gap plot).
Note the different scales on the x-axis of Figure~\ref{fig:ilpplot7} compared to the other runtime-plots.
\begin{figure}[h!]
\begin{subfigure}[b]{.5\linewidth}
\centering \includegraphics[width=.99\linewidth]{runtime-plot-1.pdf}
\caption{Runtime plot for all instances of the set}\label{fig:ilpplot1}
\end{subfigure}%
\begin{subfigure}[b]{.5\linewidth}
\centering
\centering \includegraphics[width=.99\linewidth]{gap-plot-1.pdf}
\caption{Optimality gap plot for all instances of the set}\label{fig:ilpplot2}
\end{subfigure}
\newline
\begin{subfigure}[b]{.5\linewidth}
\centering
\includegraphics[width=.99\linewidth]{runtime-plot-2.pdf}
\caption{Runtime plot for instances with $c_u=10$}\label{fig:ilpplot3}
\end{subfigure}%
\begin{subfigure}[b]{.5\linewidth}
\centering
\includegraphics[width=.99\linewidth]{gap-plot-2.pdf}
\caption{Optimality gap plot for instances with $c_u=10$}\label{fig:ilpplot4}
\end{subfigure}
\newline
\begin{subfigure}[b]{.5\linewidth}
\centering
\includegraphics[width=.99\linewidth]{runtime-plot-3.pdf}
\caption{Runtime plot for instances with $c_u=25$}\label{fig:ilpplot5}
\end{subfigure}%
\begin{subfigure}[b]{.5\linewidth}
\centering
\includegraphics[width=.99\linewidth]{gap-plot-3.pdf}
\caption{Optimality gap plot for instances with $c_u=25$}\label{fig:ilpplot6}
\end{subfigure}
\newline
\centering
\begin{subfigure}[b]{.5\linewidth}
\centering
\includegraphics[width=.99\linewidth]{runtime-plot-4.pdf}
\caption{Runtime plot for instances with $c_u=50$ (all solved to optimality with all approaches, but x-axis cut off at 150 seconds for better readability.)}\label{fig:ilpplot7}
\end{subfigure}%
\caption{Plots of runtimes and optimality gap of our MIP-approaches on instance set \texttt{NEW}\ and different subgroups of these instances}\label{fig:ilpplot}
\end{figure}
In the plots shown in Figure~\ref{fig:ilpplot} we see a strong connection between the weight structure and the computational difficulty of the instances.
All instances with $c_u=50$ can be solved by all approaches within the timelimit;
furthermore, (F2)\ and (F2)+ only need at most 100 seconds.
However, for both $c_u=25$ and $c_u=10$, the situation is strikingly different, in particular, for $c_u=10$;
we can observe that only (F1)+ and (F2)+\ manage to solve over 50\% of the instances within the timelimit.
This behavior might be explained by the fact that, for these instances, the edges play a more important role (due to their larger weight range), and thus the problem becomes more similar to a BQP problem;
as has been shown previously in the literature,
problems where a BQP structure plays an important role are often very hard to solve (see, e.g.,~\cite{billionnet2005different}).
This hypothesis is supported by the fact that, for these instances, the (F1)+ approach,
which includes the valid inequalities (in particular the BQP-like inequalities~\eqref{eq:clique}), is the second-best-performing approach (not only with respect to the runtime, but also with respect to the optimality gap on the unsolved instances).
Moreover, for instances with $c_u=50$ (where edge weights are less influential),
the second best approach is (F2), which does not contain any valid inequalities.
Despite these differences, when considering all instances, we see that (F2)+ works best, managing to solve around 80\% of the instances within the timelimit, followed by (F2)\ and (F1)+, which both manage to solve around 75\% of the instances.
The results reported in Figure~\ref{fig:ilpplot} are complemented by Tables~\ref{ta:our75}-\ref{ta:our125},
where we give detailed results of our approaches, including the genetic algorithm (indicated by GA).
Moreover, we also report the results obtained when running only the GRASP-part of the GA (i.e., lines~\ref{alg:grasp1}-\ref{alg:grasp2} in Algorithm~\ref{alg:genetic}).
There is one table for each value of $|V|$ in order to allow an analysis from another point of view.
In these tables, we present the runtime (column $t[s]$;
TL indicates timelimit reached, and ML memorylimit), and the objective value of the best obtained solution (column $w^B$);
for the MIP-approaches also the optimality gap (column $g[\%]$), and the number of branch-and-cut nodes (column $\#nBN$);
and for the GA and GRASP also the primal gap compared to the best solution found by the MIP-approaches (column $pg[\%]$, calculated as $100 \cdot (w^H-w^{MIP})/w^{MIP}$, where $w^{MIP}$ is the value of the best solution found by the MIP-approaches and $w^H$ the value of the best solution found by the GRASP, resp., GA).
In the tables, we can see that all our approaches manage to solve all instances with $|V|=75$ to optimality.
For instances with $|V|=100$, (F2)+ solves all but two instances, and for instances with $|V|=125$, (F2)+ solves 24 out of 45 instances to optimality within the timelimit.
In general, for nearly all instances (F2)+ works best, i.e., either it has the smallest runtime, or, for unsolved instances, it has the smallest optimality gap.
For the instances, where (F2)+ is not the best performing approach, (F2)\ gives the best results.
With respect to this, we can see that (F2)\ only performs better for instances with $p=0.8$ (i.e., denser instances).
A possible explanation for this could be that for denser instances, the LPs with added valid inequalities become denser, in particular due to the clique inequalities, as there are many more cliques in denser graphs.
Thus, while adding the valid inequalities improves the bound, the drawback of longer LP solution times and thus the slower node-throughput in the branch-and-cut becomes burdensome.
This is also reflected in the number of branch-and-cut nodes enumerated: (F2)\ often enumerates around ten times as many nodes as (F2)+,
while the runtime of both approaches is quite similar (and over all instances, adding the valid inequalities pays off, as only for the dense instances with $p=0.8$ the described drawback is having an effect).
With respect to (F1)\ and (F1)+, the situation is similar, i.e., (F1)\ has a considerably higher node-throughput, but in general (F1)+ performs better.
Moreover, when comparing (F1)+\ and (F2)+, it can be seen that (F1)+\ usually needs fewer branch-and-cut nodes to prove optimality (when it manages to do so), but is slower than (F2)+, as the ``slimmer'' formulation of (F2)+ allows for a faster node-throughput (while still being ``strong enough'' for proving optimality).
The largest optimality gap is 25.59\% and is obtained for instance \texttt{125-0.8-10-2}.
From the results reported in the tables, we can also conclude that both heuristics perform quite well.
The GRASP takes at most nine seconds (for some of the instances with $|V|=125$), and the largest primal gap is around 20.21\% (instance \texttt{NEW-125-0.2-10-3}), while most of the primal gaps are smaller than 10\% and for slightly less than half of the instances, the gap is zero. The largest primal gaps are obtained for instances with $p=0.2$.
Likewise, the GA takes at most 86 seconds (for instance \texttt{NEW-125-0.8-50-5}) and only for 30 out of 135 instances, there is a positive primal gap (the largest is 5.26\% for instance \texttt{NEW-75-0.2-10-4}, and for most instances with positive primal gap, the gap is under 1\%).
Interestingly, for none of the unsolved instances could the GA find an improved solution compared to the best solution found by the MIP approaches.
\begin{landscape}
\begin{table}[ht]
\centering
\caption{Comparison of our approaches on instance set \texttt{NEW}\ with $|V|$=75. \label{ta:our75}}
\begingroup\footnotesize
\begin{tabular}{rrr|rrrrrrrrrrrrrrrr|rrrrrr}
\toprule
\multicolumn{3}{c}{instance} & \multicolumn{4}{c}{(F1)} & \multicolumn{4}{c}{(F1)+} & \multicolumn{4}{c}{(F2)} & \multicolumn{4}{c}{(F2)+} & \multicolumn{3}{c}{GRASP} & \multicolumn{3}{c}{GA} \\ $p$ & $c_u$ & $id$ & $t[s]$ & $w^B$ & g[\%] &$\#nN$ & $t[s]$ & $w^B$ & g[\%]&$\#nN$ & $t[s]$ & $w^B$ & g[\%]& $\#nN$ & $t[s]$ & $w^B$ & g[\%]& $\#nN$ & $t[s]$ & $w^B$ & pg[\%] & $t[s]$ & $w^B$ & pg[\%] \\ \midrule
0.2 & 10 & 1 & 576 & 686 & 0.00 & 105831 & 32 & 686 & 0.00 & 1832 & 251 & 686 & 0.00 & 123502 & \textbf{15} & 686 & 0.00 & 1921 & 1 & 769 & 12.10 & 5 & 686 & 0.00 \\
0.2 & 10 & 2 & 617 & 770 & 0.00 & 135320 & 25 & 770 & 0.00 & 1167 & 224 & 770 & 0.00 & 121349 & \textbf{8} & 770 & 0.00 & 1147 & 1 & 871 & 13.12 & 6 & 794 & 3.12 \\
0.2 & 10 & 3 & 319 & 661 & 0.00 & 48274 & 23 & 661 & 0.00 & 665 & 85 & 661 & 0.00 & 46205 & \textbf{8} & 661 & 0.00 & 796 & 1 & 765 & 15.73 & 6 & 661 & 0.00 \\
0.2 & 10 & 4 & 595 & 703 & 0.00 & 105611 & 42 & 703 & 0.00 & 2456 & 443 & 703 & 0.00 & 216827 & \textbf{26} & 703 & 0.00 & 2938 & 1 & 762 & 8.39 & 7 & 740 & 5.26 \\
0.2 & 10 & 5 & 168 & 758 & 0.00 & 23620 & 24 & 758 & 0.00 & 1092 & 160 & 758 & 0.00 & 82778 & \textbf{11} & 758 & 0.00 & 1340 & 1 & 857 & 13.06 & 6 & 779 & 2.77 \\
0.2 & 25 & 1 & 51 & 498 & 0.00 & 6617 & 16 & 498 & 0.00 & 263 & 37 & 498 & 0.00 & 14486 & \textbf{3} & 498 & 0.00 & 285 & 1 & 556 & 11.65 & 6 & 504 & 1.20 \\
0.2 & 25 & 2 & 49 & 546 & 0.00 & 8616 & 16 & 546 & 0.00 & 170 & 37 & 546 & 0.00 & 18598 & \textbf{3} & 546 & 0.00 & 179 & 1 & 607 & 11.17 & 6 & 546 & 0.00 \\
0.2 & 25 & 3 & 40 & 518 & 0.00 & 6505 & 15 & 518 & 0.00 & 157 & 27 & 518 & 0.00 & 10784 & \textbf{4} & 518 & 0.00 & 224 & 1 & 603 & 16.41 & 5 & 518 & 0.00 \\
0.2 & 25 & 4 & 649 & 498 & 0.00 & 115335 & 36 & 498 & 0.00 & 3276 & 226 & 498 & 0.00 & 116161 & \textbf{19} & 498 & 0.00 & 3772 & 1 & 521 & 4.62 & 6 & 498 & 0.00 \\
0.2 & 25 & 5 & 97 & 513 & 0.00 & 15696 & 15 & 513 & 0.00 & 245 & 54 & 513 & 0.00 & 23682 & \textbf{4} & 513 & 0.00 & 288 & 1 & 526 & 2.53 & 6 & 513 & 0.00 \\
0.2 & 50 & 1 & 2 & 339 & 0.00 & 124 & 10 & 339 & 0.00 & 21 & 2 & 339 & 0.00 & 162 & \textbf{1} & 339 & 0.00 & 28 & 1 & 340 & 0.29 & 6 & 339 & 0.00 \\
0.2 & 50 & 2 & 2 & 382 & 0.00 & 133 & 13 & 382 & 0.00 & 43 & 2 & 382 & 0.00 & 237 & \textbf{1} & 382 & 0.00 & 40 & 1 & 414 & 8.38 & 5 & 382 & 0.00 \\
0.2 & 50 & 3 & \textbf{1} & 335 & 0.00 & 64 & 14 & 335 & 0.00 & 6 & \textbf{1} & 335 & 0.00 & 54 & \textbf{1} & 335 & 0.00 & 10 & 1 & 341 & 1.79 & 5 & 341 & 1.79 \\
0.2 & 50 & 4 & 3 & 333 & 0.00 & 317 & 14 & 333 & 0.00 & 59 & 3 & 333 & 0.00 & 400 & \textbf{2} & 333 & 0.00 & 59 & 1 & 338 & 1.50 & 6 & 333 & 0.00 \\
0.2 & 50 & 5 & 3 & 347 & 0.00 & 374 & 15 & 347 & 0.00 & 82 & 3 & 347 & 0.00 & 423 & \textbf{2} & 347 & 0.00 & 94 & 1 & 353 & 1.73 & 6 & 347 & 0.00 \\
0.5 & 10 & 1 & 1297 & 581 & 0.00 & 99392 & 159 & 581 & 0.00 & 4820 & 213 & 581 & 0.00 & 62202 & \textbf{109} & 581 & 0.00 & 8222 & 1 & 590 & 1.55 & 13 & 581 & 0.00 \\
0.5 & 10 & 2 & 299 & 602 & 0.00 & 30769 & 134 & 602 & 0.00 & 3840 & 152 & 602 & 0.00 & 41014 & \textbf{84} & 602 & 0.00 & 6719 & 1 & 641 & 6.48 & 11 & 602 & 0.00 \\
0.5 & 10 & 3 & 226 & 545 & 0.00 & 19174 & 100 & 545 & 0.00 & 2739 & 141 & 545 & 0.00 & 35146 & \textbf{61} & 545 & 0.00 & 4960 & 1 & 545 & 0.00 & 10 & 545 & 0.00 \\
0.5 & 10 & 4 & 264 & 540 & 0.00 & 25262 & 84 & 540 & 0.00 & 1960 & 109 & 540 & 0.00 & 28090 & \textbf{55} & 540 & 0.00 & 3797 & 1 & 580 & 7.41 & 10 & 540 & 0.00 \\
0.5 & 10 & 5 & 165 & 519 & 0.00 & 13004 & 85 & 519 & 0.00 & 1803 & 119 & 519 & 0.00 & 28200 & \textbf{52} & 519 & 0.00 & 3291 & 1 & 551 & 6.17 & 10 & 519 & 0.00 \\
0.5 & 25 & 1 & 106 & 387 & 0.00 & 7161 & 52 & 387 & 0.00 & 1269 & 31 & 387 & 0.00 & 7626 & \textbf{23} & 387 & 0.00 & 1598 & 1 & 402 & 3.88 & 10 & 387 & 0.00 \\
0.5 & 25 & 2 & 71 & 384 & 0.00 & 5083 & 44 & 384 & 0.00 & 1194 & 24 & 384 & 0.00 & 5674 & \textbf{20} & 384 & 0.00 & 1458 & 1 & 413 & 7.55 & 10 & 384 & 0.00 \\
0.5 & 25 & 3 & 83 & 362 & 0.00 & 4769 & 26 & 362 & 0.00 & 343 & 31 & 362 & 0.00 & 5898 & \textbf{13} & 362 & 0.00 & 442 & 1 & 380 & 4.97 & 10 & 362 & 0.00 \\
0.5 & 25 & 4 & 100 & 366 & 0.00 & 7073 & 63 & 366 & 0.00 & 1833 & 52 & 366 & 0.00 & 14482 & \textbf{31} & 366 & 0.00 & 2551 & 1 & 371 & 1.37 & 9 & 371 & 1.37 \\
0.5 & 25 & 5 & 84 & 331 & 0.00 & 4938 & 31 & 331 & 0.00 & 389 & 24 & 331 & 0.00 & 4688 & \textbf{14} & 331 & 0.00 & 470 & 1 & 331 & 0.00 & 10 & 331 & 0.00 \\
0.5 & 50 & 1 & 5 & 240 & 0.00 & 348 & 16 & 240 & 0.00 & 77 & 4 & 240 & 0.00 & 424 & \textbf{3} & 240 & 0.00 & 97 & 1 & 244 & 1.67 & 9 & 240 & 0.00 \\
0.5 & 50 & 2 & \textbf{2} & 238 & 0.00 & 43 & 11 & 238 & 0.00 & 28 & \textbf{2} & 238 & 0.00 & 68 & \textbf{2} & 238 & 0.00 & 30 & 1 & 245 & 2.94 & 9 & 238 & 0.00 \\
0.5 & 50 & 3 & 2 & 215 & 0.00 & 0 & 8 & 215 & 0.00 & 0 & 2 & 215 & 0.00 & 47 & \textbf{1} & 215 & 0.00 & 0 & 1 & 215 & 0.00 & 9 & 215 & 0.00 \\
0.5 & 50 & 4 & 8 & 235 & 0.00 & 553 & 14 & 235 & 0.00 & 141 & 5 & 235 & 0.00 & 703 & \textbf{4} & 235 & 0.00 & 134 & 1 & 235 & 0.00 & 9 & 235 & 0.00 \\
0.5 & 50 & 5 & 4 & 206 & 0.00 & 198 & 11 & 206 & 0.00 & 47 & 3 & 206 & 0.00 & 198 & \textbf{2} & 206 & 0.00 & 47 & 1 & 206 & 0.00 & 8 & 206 & 0.00 \\
0.8 & 10 & 1 & 343 & 571 & 0.00 & 24196 & 241 & 571 & 0.00 & 3009 & \textbf{137} & 571 & 0.00 & 23527 & \textbf{137} & 571 & 0.00 & 4798 & 2 & 613 & 7.36 & 16 & 571 & 0.00 \\
0.8 & 10 & 2 & 208 & 520 & 0.00 & 12433 & 144 & 520 & 0.00 & 1481 & \textbf{86} & 520 & 0.00 & 14721 & 102 & 520 & 0.00 & 2583 & 2 & 520 & 0.00 & 15 & 520 & 0.00 \\
0.8 & 10 & 3 & 245 & 543 & 0.00 & 13720 & 165 & 543 & 0.00 & 2105 & 122 & 543 & 0.00 & 19460 & \textbf{92} & 543 & 0.00 & 3475 & 2 & 543 & 0.00 & 15 & 543 & 0.00 \\
0.8 & 10 & 4 & 308 & 571 & 0.00 & 17925 & 208 & 571 & 0.00 & 2722 & 142 & 571 & 0.00 & 27914 & \textbf{113} & 571 & 0.00 & 4340 & 2 & 571 & 0.00 & 15 & 571 & 0.00 \\
0.8 & 10 & 5 & 225 & 509 & 0.00 & 10311 & 137 & 509 & 0.00 & 1374 & 94 & 509 & 0.00 & 15107 & \textbf{77} & 509 & 0.00 & 2412 & 2 & 509 & 0.00 & 17 & 509 & 0.00 \\
0.8 & 25 & 1 & 196 & 357 & 0.00 & 6516 & 119 & 357 & 0.00 & 1527 & 53 & 357 & 0.00 & 7532 & \textbf{51} & 357 & 0.00 & 1957 & 2 & 360 & 0.84 & 15 & 357 & 0.00 \\
0.8 & 25 & 2 & 151 & 338 & 0.00 & 4736 & 89 & 338 & 0.00 & 1119 & \textbf{34} & 338 & 0.00 & 4454 & \textbf{34} & 338 & 0.00 & 1201 & 2 & 356 & 5.33 & 15 & 338 & 0.00 \\
0.8 & 25 & 3 & 28 & 323 & 0.00 & 1495 & 44 & 323 & 0.00 & 439 & \textbf{15} & 323 & 0.00 & 1568 & 22 & 323 & 0.00 & 549 & 2 & 323 & 0.00 & 13 & 323 & 0.00 \\
0.8 & 25 & 4 & 113 & 345 & 0.00 & 4240 & 73 & 345 & 0.00 & 670 & 47 & 345 & 0.00 & 6870 & \textbf{37} & 345 & 0.00 & 947 & 2 & 345 & 0.00 & 13 & 345 & 0.00 \\
0.8 & 25 & 5 & 112 & 311 & 0.00 & 3977 & 53 & 311 & 0.00 & 570 & 32 & 311 & 0.00 & 3900 & \textbf{25} & 311 & 0.00 & 747 & 2 & 311 & 0.00 & 15 & 311 & 0.00 \\
0.8 & 50 & 1 & \textbf{2} & 182 & 0.00 & 29 & 8 & 182 & 0.00 & 13 & 4 & 182 & 0.00 & 136 & \textbf{2} & 182 & 0.00 & 16 & 2 & 182 & 0.00 & 14 & 182 & 0.00 \\
0.8 & 50 & 2 & 3 & 188 & 0.00 & 61 & 9 & 188 & 0.00 & 33 & 3 & 188 & 0.00 & 86 & \textbf{2} & 188 & 0.00 & 30 & 2 & 188 & 0.00 & 11 & 188 & 0.00 \\
0.8 & 50 & 3 & 3 & 191 & 0.00 & 64 & 9 & 191 & 0.00 & 16 & \textbf{2} & 191 & 0.00 & 35 & \textbf{2} & 191 & 0.00 & 19 & 2 & 191 & 0.00 & 11 & 191 & 0.00 \\
0.8 & 50 & 4 & 5 & 196 & 0.00 & 127 & 13 & 196 & 0.00 & 67 & \textbf{4} & 196 & 0.00 & 222 & \textbf{4} & 196 & 0.00 & 62 & 2 & 196 & 0.00 & 12 & 196 & 0.00 \\
0.8 & 50 & 5 & \textbf{5} & 192 & 0.00 & 176 & 15 & 192 & 0.00 & 81 & 6 & 192 & 0.00 & 402 & 7 & 192 & 0.00 & 77 & 2 & 192 & 0.00 & 15 & 192 & 0.00 \\
\bottomrule
\end{tabular}
\endgroup
\end{table}
\begin{table}[ht]
\centering
\caption{Comparison of our approaches on instance set \texttt{NEW}\ with $|V|$=100. \label{ta:our100}}
\begingroup\footnotesize
\begin{tabular}{rrr|rrrrrrrrrrrrrrrr|rrrrrr}
\toprule
\multicolumn{3}{c}{instance} & \multicolumn{4}{c}{(F1)} & \multicolumn{4}{c}{(F1)+} & \multicolumn{4}{c}{(F2)} & \multicolumn{4}{c}{(F2)+} & \multicolumn{3}{c}{GRASP} & \multicolumn{3}{c}{GA} \\ $p$ & $c_u$ & $id$ & $t[s]$ & $w^B$ & g[\%] &$\#nN$ & $t[s]$ & $w^B$ & g[\%]&$\#nN$ & $t[s]$ & $w^B$ & g[\%]& $\#nN$ & $t[s]$ & $w^B$ & g[\%]& $\#nN$ & $t[s]$ & $w^B$ & pg[\%] & $t[s]$ & $w^B$ & pg[\%] \\ \midrule
0.2 & 10 & 1 & TL & 931 & 23.09 & 150968 & 373 & 873 & 0.00 & 17695 & TL & 914 & 16.03 & 488396 & \textbf{319} & 873 & 0.00 & 28266 & 1 & 930 & 6.53 & 12 & 873 & 0.00 \\
0.2 & 10 & 2 & TL & 991 & 20.67 & 195361 & 348 & 944 & 0.00 & 16097 & TL & 966 & 13.96 & 537600 & \textbf{261} & 944 & 0.00 & 21216 & 1 & 983 & 4.13 & 13 & 944 & 0.00 \\
0.2 & 10 & 3 & TL & 933 & 21.97 & 175869 & 509 & 878 & 0.00 & 24488 & TL & 937 & 17.98 & 488300 & \textbf{389} & 878 & 0.00 & 32623 & 1 & 905 & 3.08 & 11 & 878 & 0.00 \\
0.2 & 10 & 4 & TL & 837 & 19.00 & 160900 & 775 & 837 & 0.00 & 36727 & TL & 850 & 14.18 & 533800 & \textbf{546} & 837 & 0.00 & 44398 & 1 & 879 & 5.02 & 11 & 837 & 0.00 \\
0.2 & 10 & 5 & TL & 913 & 23.99 & 166974 & 412 & 840 & 0.00 & 19910 & TL & 847 & 12.08 & 455300 & \textbf{332} & 840 & 0.00 & 31130 & 1 & 907 & 7.98 & 12 & 870 & 3.57 \\
0.2 & 25 & 1 & 660 & 591 & 0.00 & 52158 & 82 & 591 & 0.00 & 3688 & 401 & 591 & 0.00 & 97432 & \textbf{55} & 591 & 0.00 & 5421 & 1 & 591 & 0.00 & 12 & 591 & 0.00 \\
0.2 & 25 & 2 & TL & 655 & 3.93 & 154809 & 94 & 653 & 0.00 & 4420 & 1126 & 653 & 0.00 & 279737 & \textbf{67} & 653 & 0.00 & 7342 & 1 & 687 & 5.21 & 11 & 655 & 0.31 \\
0.2 & 25 & 3 & 769 & 612 & 0.00 & 64664 & 61 & 612 & 0.00 & 2355 & 251 & 612 & 0.00 & 56158 & \textbf{34} & 612 & 0.00 & 3142 & 1 & 648 & 5.88 & 12 & 616 & 0.65 \\
0.2 & 25 & 4 & TL & 558 & 2.87 & 122658 & 38 & 552 & 0.00 & 917 & 224 & 552 & 0.00 & 53608 & \textbf{22} & 552 & 0.00 & 1716 & 1 & 602 & 9.06 & 11 & 552 & 0.00 \\
0.2 & 25 & 5 & TL & 606 & 6.27 & 172508 & 506 & 606 & 0.00 & 31561 & TL & 609 & 4.40 & 401219 & \textbf{345} & 606 & 0.00 & 40428 & 1 & 646 & 6.60 & 12 & 607 & 0.17 \\
0.2 & 50 & 1 & 11 & 418 & 0.00 & 929 & 19 & 418 & 0.00 & 193 & 8 & 418 & 0.00 & 1247 & \textbf{5} & 418 & 0.00 & 249 & 1 & 422 & 0.96 & 12 & 420 & 0.48 \\
0.2 & 50 & 2 & 9 & 447 & 0.00 & 774 & 18 & 447 & 0.00 & 177 & 8 & 447 & 0.00 & 1154 & \textbf{7} & 447 & 0.00 & 261 & 1 & 472 & 5.59 & 11 & 456 & 2.01 \\
0.2 & 50 & 3 & 5 & 419 & 0.00 & 339 & 17 & 419 & 0.00 & 106 & 6 & 419 & 0.00 & 590 & \textbf{4} & 419 & 0.00 & 124 & 1 & 427 & 1.91 & 11 & 419 & 0.00 \\
0.2 & 50 & 4 & 74 & 403 & 0.00 & 4679 & 21 & 403 & 0.00 & 329 & 19 & 403 & 0.00 & 3709 & \textbf{10} & 403 & 0.00 & 395 & 1 & 418 & 3.72 & 12 & 410 & 1.74 \\
0.2 & 50 & 5 & 12 & 375 & 0.00 & 800 & 20 & 375 & 0.00 & 290 & 10 & 375 & 0.00 & 1663 & \textbf{5} & 375 & 0.00 & 328 & 1 & 379 & 1.07 & 13 & 379 & 1.07 \\
0.5 & 10 & 1 & TL & 763 & 18.61 & 104422 & TL & 743 & 7.09 & 26400 & TL & 743 & 7.32 & 240277 & \textbf{1421} & 743 & 0.00 & 61206 & 2 & 749 & 0.81 & 26 & 749 & 0.81 \\
0.5 & 10 & 2 & TL & 708 & 28.16 & 112961 & 1207 & 698 & 0.00 & 15696 & 1357 & 698 & 0.00 & 226826 & \textbf{670} & 698 & 0.00 & 25318 & 3 & 705 & 1.00 & 25 & 700 & 0.29 \\
0.5 & 10 & 3 & TL & 728 & 16.48 & 103135 & 1323 & 699 & 0.00 & 19211 & TL & 699 & 3.00 & 291119 & \textbf{742} & 699 & 0.00 & 29592 & 3 & 730 & 4.43 & 24 & 718 & 2.72 \\
0.5 & 10 & 4 & TL & 726 & 13.57 & 107121 & 1088 & 726 & 0.00 & 13761 & 1324 & 726 & 0.00 & 218790 & \textbf{609} & 726 & 0.00 & 22324 & 2 & 775 & 6.75 & 26 & 726 & 0.00 \\
0.5 & 10 & 5 & TL & 761 & 24.11 & 124466 & TL & 702 & 1.37 & 24691 & TL & 744 & 17.25 & 240100 & \textbf{1275} & 702 & 0.00 & 51404 & 2 & 743 & 5.84 & 25 & 702 & 0.00 \\
0.5 & 25 & 1 & 670 & 461 & 0.00 & 18182 & 235 & 461 & 0.00 & 2640 & 182 & 461 & 0.00 & 22792 & \textbf{99} & 461 & 0.00 & 3913 & 3 & 461 & 0.00 & 25 & 461 & 0.00 \\
0.5 & 25 & 2 & 230 & 437 & 0.00 & 6776 & 178 & 437 & 0.00 & 2454 & 115 & 437 & 0.00 & 12140 & \textbf{76} & 437 & 0.00 & 3372 & 2 & 448 & 2.52 & 19 & 437 & 0.00 \\
0.5 & 25 & 3 & 404 & 434 & 0.00 & 10685 & 263 & 434 & 0.00 & 3821 & 155 & 434 & 0.00 & 16969 & \textbf{111} & 434 & 0.00 & 4425 & 3 & 443 & 2.07 & 22 & 434 & 0.00 \\
0.5 & 25 & 4 & TL & 494 & 8.20 & 63896 & 921 & 482 & 0.00 & 16949 & 621 & 482 & 0.00 & 90411 & \textbf{533} & 482 & 0.00 & 22037 & 2 & 489 & 1.45 & 25 & 482 & 0.00 \\
0.5 & 25 & 5 & 1395 & 456 & 0.00 & 40173 & 829 & 456 & 0.00 & 12615 & 430 & 456 & 0.00 & 59506 & \textbf{358} & 456 & 0.00 & 16885 & 3 & 470 & 3.07 & 23 & 457 & 0.22 \\
0.5 & 50 & 1 & 4 & 260 & 0.00 & 27 & 17 & 260 & 0.00 & 5 & 5 & 260 & 0.00 & 131 & \textbf{3} & 260 & 0.00 & 5 & 2 & 260 & 0.00 & 22 & 260 & 0.00 \\
0.5 & 50 & 2 & 3 & 271 & 0.00 & 27 & 17 & 271 & 0.00 & 15 & 3 & 271 & 0.00 & 55 & \textbf{2} & 271 & 0.00 & 14 & 2 & 271 & 0.00 & 21 & 271 & 0.00 \\
0.5 & 50 & 3 & 9 & 283 & 0.00 & 282 & 21 & 283 & 0.00 & 119 & 7 & 283 & 0.00 & 404 & \textbf{4} & 283 & 0.00 & 135 & 3 & 283 & 0.00 & 21 & 283 & 0.00 \\
0.5 & 50 & 4 & 27 & 291 & 0.00 & 914 & 39 & 291 & 0.00 & 353 & 11 & 291 & 0.00 & 1070 & \textbf{10} & 291 & 0.00 & 355 & 2 & 296 & 1.72 & 22 & 291 & 0.00 \\
0.5 & 50 & 5 & 12 & 269 & 0.00 & 347 & 29 & 269 & 0.00 & 228 & 14 & 269 & 0.00 & 1254 & \textbf{9} & 269 & 0.00 & 251 & 2 & 269 & 0.00 & 21 & 269 & 0.00 \\
0.8 & 10 & 1 & TL & 730 & 20.71 & 55600 & TL & 730 & 15.57 & 11878 & TL & 730 & \textbf{ 3.22} & 176962 & TL & 730 & 8.77 & 30496 & 4 & 730 & 0.00 & 39 & 730 & 0.00 \\
0.8 & 10 & 2 & TL & 697 & 15.18 & 40114 & TL & 683 & 11.61 & 5773 & 1064 & 683 & 0.00 & 103180 & \textbf{1025} & 683 & 0.00 & 17483 & 4 & 688 & 0.73 & 37 & 683 & 0.00 \\
0.8 & 10 & 3 & TL & 721 & 19.24 & 47870 & TL & 718 & 11.53 & 10506 & \textbf{1636} & 718 & 0.00 & 154346 & 1769 & 718 & 0.00 & 31269 & 4 & 718 & 0.00 & 37 & 718 & 0.00 \\
0.8 & 10 & 4 & TL & 726 & 36.02 & 52165 & TL & 709 & 8.75 & 9566 & TL & 712 & 6.60 & 153824 & \textbf{1452} & 709 & 0.00 & 28487 & 4 & 709 & 0.00 & 41 & 709 & 0.00 \\
0.8 & 10 & 5 & TL & 703 & 18.92 & 43898 & TL & 700 & 18.08 & 7205 & \textbf{1221} & 700 & 0.00 & 118429 & TL & 700 & 3.55 & 28770 & 4 & 710 & 1.43 & 39 & 704 & 0.57 \\
0.8 & 25 & 1 & 1138 & 442 & 0.00 & 15789 & 1125 & 442 & 0.00 & 7017 & \textbf{396} & 442 & 0.00 & 34791 & 459 & 442 & 0.00 & 9633 & 5 & 452 & 2.26 & 40 & 442 & 0.00 \\
0.8 & 25 & 2 & 1068 & 430 & 0.00 & 15155 & 693 & 430 & 0.00 & 4218 & \textbf{277} & 430 & 0.00 & 27907 & 285 & 430 & 0.00 & 5284 & 4 & 430 & 0.00 & 32 & 430 & 0.00 \\
0.8 & 25 & 3 & 984 & 426 & 0.00 & 15999 & 669 & 426 & 0.00 & 4180 & \textbf{251} & 426 & 0.00 & 19709 & 269 & 426 & 0.00 & 5152 & 4 & 426 & 0.00 & 36 & 426 & 0.00 \\
0.8 & 25 & 4 & 1045 & 428 & 0.00 & 17287 & 891 & 428 & 0.00 & 5285 & \textbf{277} & 428 & 0.00 & 27607 & 390 & 428 & 0.00 & 7049 & 4 & 428 & 0.00 & 35 & 428 & 0.00 \\
0.8 & 25 & 5 & TL & 447 & 9.51 & 26364 & 1375 & 432 & 0.00 & 8047 & \textbf{520} & 432 & 0.00 & 51363 & 578 & 432 & 0.00 & 10357 & 4 & 432 & 0.00 & 42 & 432 & 0.00 \\
0.8 & 50 & 1 & 33 & 259 & 0.00 & 993 & 75 & 259 & 0.00 & 396 & \textbf{16} & 259 & 0.00 & 1415 & 21 & 259 & 0.00 & 427 & 4 & 259 & 0.00 & 32 & 259 & 0.00 \\
0.8 & 50 & 2 & 6 & 246 & 0.00 & 41 & 25 & 246 & 0.00 & 33 & \textbf{5} & 246 & 0.00 & 96 & 6 & 246 & 0.00 & 44 & 4 & 246 & 0.00 & 9 & 246 & 0.00 \\
0.8 & 50 & 3 & 9 & 238 & 0.00 & 106 & 26 & 238 & 0.00 & 31 & \textbf{5} & 238 & 0.00 & 154 & \textbf{5} & 238 & 0.00 & 39 & 4 & 238 & 0.00 & 34 & 238 & 0.00 \\
0.8 & 50 & 4 & 28 & 253 & 0.00 & 673 & 56 & 253 & 0.00 & 210 & \textbf{14} & 253 & 0.00 & 757 & 16 & 253 & 0.00 & 232 & 4 & 258 & 1.98 & 34 & 253 & 0.00 \\
0.8 & 50 & 5 & 39 & 248 & 0.00 & 1042 & 81 & 248 & 0.00 & 414 & \textbf{18} & 248 & 0.00 & 1428 & 25 & 248 & 0.00 & 451 & 5 & 250 & 0.81 & 31 & 248 & 0.00 \\
\bottomrule
\end{tabular}
\endgroup
\end{table}
\begin{table}[ht]
\centering
\caption{Comparison of our approaches on instance set \texttt{NEW}\ with $|V|$=125. \label{ta:our125}}
\begingroup\footnotesize
\begin{tabular}{rrr|rrrrrrrrrrrrrrrr|rrrrrr}
\toprule
\multicolumn{3}{c}{instance} & \multicolumn{4}{c}{(F1)} & \multicolumn{4}{c}{(F1)+} & \multicolumn{4}{c}{(F2)} & \multicolumn{4}{c}{(F2)+} & \multicolumn{3}{c}{GRASP} & \multicolumn{3}{c}{GA} \\ $p$ & $c_u$ & $id$ & $t[s]$ & $w^B$ & g[\%] &$\#nN$ & $t[s]$ & $w^B$ & g[\%]&$\#nN$ & $t[s]$ & $w^B$ & g[\%]& $\#nN$ & $t[s]$ & $w^B$ & g[\%]& $\#nN$ & $t[s]$ & $w^B$ & pg[\%] & $t[s]$ & $w^B$ & pg[\%] \\ \midrule
0.2 & 10 & 1 & TL & 1031 & 29.33 & 111233 & TL & 1031 & 13.08 & 38900 & TL & 1122 & 33.07 & 289800 & TL & 1026 & \textbf{11.31} & 66500 & 2 & 1112 & 8.38 & 24 & 1026 & 0.00 \\
0.2 & 10 & 2 & TL & 1038 & 25.41 & 103199 & TL & 1038 & 5.43 & 39364 & TL & 1136 & 29.30 & 323400 & TL & 1038 & \textbf{ 4.11} & 70700 & 2 & 1069 & 2.99 & 22 & 1038 & 0.00 \\
0.2 & 10 & 3 & TL & 935 & 21.55 & 112721 & 1065 & 935 & 0.00 & 18545 & TL & 1006 & 23.01 & 307900 & \textbf{610} & 935 & 0.00 & 23794 & 2 & 1124 & 20.21 & 23 & 947 & 1.28 \\
0.2 & 10 & 4 & TL & 1087 & 32.38 & 162826 & TL & 1050 & 11.10 & 35800 & TL & 1102 & 30.22 & 307500 & TL & 1052 & \textbf{10.55} & 65400 & 2 & 1121 & 6.76 & 21 & 1051 & 0.10 \\
0.2 & 10 & 5 & TL & 1067 & 38.06 & 98022 & TL & 978 & 12.12 & 46300 & TL & 1069 & 32.41 & 293644 & TL & 974 & \textbf{10.88} & 72100 & 2 & 1112 & 14.17 & 25 & 975 & 0.10 \\
0.2 & 25 & 1 & TL & 752 & 14.43 & 79324 & 727 & 720 & 0.00 & 20101 & TL & 777 & 15.13 & 249000 & \textbf{484} & 720 & 0.00 & 30681 & 2 & 803 & 11.53 & 26 & 720 & 0.00 \\
0.2 & 25 & 2 & TL & 748 & 9.42 & 115536 & 1690 & 746 & 0.00 & 49679 & TL & 755 & 9.34 & 250400 & \textbf{1038} & 746 & 0.00 & 66326 & 2 & 768 & 2.95 & 24 & 748 & 0.27 \\
0.2 & 25 & 3 & TL & 758 & 17.24 & 76593 & 1308 & 715 & 0.00 & 32387 & TL & 756 & 13.82 & 262200 & \textbf{802} & 715 & 0.00 & 58391 & 2 & 752 & 5.17 & 21 & 717 & 0.28 \\
0.2 & 25 & 4 & TL & 725 & 13.52 & 125557 & TL & 701 & 1.13 & 45548 & TL & 726 & 11.79 & 277800 & \textbf{1195} & 701 & 0.00 & 68666 & 2 & 726 & 3.57 & 22 & 705 & 0.57 \\
0.2 & 25 & 5 & TL & 690 & 12.15 & 125451 & TL & 684 & 3.49 & 48278 & TL & 714 & 14.62 & 284000 & \textbf{1548} & 684 & 0.00 & 94996 & 2 & 747 & 9.21 & 23 & 697 & 1.90 \\
0.2 & 50 & 1 & 22 & 455 & 0.00 & 914 & 19 & 455 & 0.00 & 94 & 14 & 455 & 0.00 & 1809 & \textbf{3} & 455 & 0.00 & 112 & 2 & 457 & 0.44 & 21 & 455 & 0.00 \\
0.2 & 50 & 2 & 15 & 477 & 0.00 & 552 & 22 & 477 & 0.00 & 163 & 11 & 477 & 0.00 & 1216 & \textbf{4} & 477 & 0.00 & 153 & 2 & 493 & 3.35 & 23 & 477 & 0.00 \\
0.2 & 50 & 3 & 150 & 490 & 0.00 & 5438 & 33 & 490 & 0.00 & 379 & 32 & 490 & 0.00 & 4963 & \textbf{9} & 490 & 0.00 & 446 & 2 & 501 & 2.24 & 21 & 490 & 0.00 \\
0.2 & 50 & 4 & 307 & 467 & 0.00 & 10476 & 36 & 467 & 0.00 & 678 & 63 & 467 & 0.00 & 11763 & \textbf{14} & 467 & 0.00 & 903 & 2 & 504 & 7.92 & 23 & 467 & 0.00 \\
0.2 & 50 & 5 & 680 & 457 & 0.00 & 27859 & 71 & 457 & 0.00 & 1974 & 74 & 457 & 0.00 & 12890 & \textbf{29} & 457 & 0.00 & 2719 & 2 & 468 & 2.41 & 24 & 459 & 0.44 \\
0.5 & 10 & 1 & TL & 888 & 35.77 & 74241 & TL & 817 & 19.35 & 11969 & TL & 920 & 32.99 & 189400 & TL & 817 & \textbf{15.90} & 32310 & 4 & 817 & 0.00 & 41 & 817 & 0.00 \\
0.5 & 10 & 2 & TL & 838 & 27.80 & 71722 & TL & 815 & 18.60 & 11600 & TL & 902 & 35.97 & 165500 & TL & 815 & \textbf{14.33} & 28242 & 5 & 827 & 1.47 & 45 & 815 & 0.00 \\
0.5 & 10 & 3 & TL & 931 & 48.44 & 71111 & TL & 836 & 21.04 & 12000 & TL & 915 & 32.01 & 183200 & TL & 836 & \textbf{18.68} & 31000 & 4 & 880 & 5.26 & 45 & 872 & 4.31 \\
0.5 & 10 & 4 & TL & 912 & 36.32 & 81756 & TL & 867 & 23.60 & 12100 & ML & 947 & 34.16 & 194451 & TL & 867 & \textbf{20.84} & 28328 & 4 & 914 & 5.42 & 55 & 867 & 0.00 \\
0.5 & 10 & 5 & TL & 949 & 39.97 & 76935 & TL & 867 & 25.04 & 12998 & TL & 995 & 41.73 & 188520 & TL & 867 & \textbf{22.20} & 30407 & 5 & 906 & 4.50 & 55 & 867 & 0.00 \\
0.5 & 25 & 1 & TL & 613 & 21.13 & 37273 & TL & 566 & 9.75 & 13891 & TL & 566 & \textbf{ 4.27} & 160389 & TL & 566 & 4.91 & 37868 & 5 & 566 & 0.00 & 48 & 566 & 0.00 \\
0.5 & 25 & 2 & TL & 542 & 9.87 & 30162 & TL & 533 & 2.02 & 14217 & 915 & 533 & 0.00 & 78306 & \textbf{900} & 533 & 0.00 & 24650 & 5 & 561 & 5.25 & 48 & 533 & 0.00 \\
0.5 & 25 & 3 & TL & 563 & 13.77 & 30148 & TL & 538 & 2.16 & 12186 & 1417 & 538 & 0.00 & 111362 & \textbf{835} & 538 & 0.00 & 19259 & 5 & 567 & 5.39 & 49 & 538 & 0.00 \\
0.5 & 25 & 4 & TL & 567 & 18.54 & 37033 & TL & 552 & 16.40 & 10610 & TL & 576 & 15.71 & 149600 & TL & 552 & \textbf{10.66} & 37400 & 4 & 565 & 2.36 & 53 & 552 & 0.00 \\
0.5 & 25 & 5 & TL & 572 & 19.52 & 38695 & TL & 545 & 12.36 & 15193 & TL & 552 & \textbf{ 8.51} & 148100 & TL & 545 & 8.67 & 40091 & 5 & 548 & 0.55 & 48 & 548 & 0.55 \\
0.5 & 50 & 1 & 40 & 334 & 0.00 & 785 & 64 & 334 & 0.00 & 473 & 19 & 334 & 0.00 & 1495 & \textbf{16} & 334 & 0.00 & 481 & 4 & 336 & 0.60 & 40 & 334 & 0.00 \\
0.5 & 50 & 2 & 19 & 330 & 0.00 & 500 & 41 & 330 & 0.00 & 251 & 13 & 330 & 0.00 & 631 & \textbf{12} & 330 & 0.00 & 255 & 4 & 330 & 0.00 & 38 & 330 & 0.00 \\
0.5 & 50 & 3 & 20 & 315 & 0.00 & 247 & 39 & 315 & 0.00 & 80 & 9 & 315 & 0.00 & 219 & \textbf{7} & 315 & 0.00 & 77 & 5 & 315 & 0.00 & 49 & 315 & 0.00 \\
0.5 & 50 & 4 & 57 & 316 & 0.00 & 834 & 88 & 316 & 0.00 & 488 & 33 & 316 & 0.00 & 2657 & \textbf{21} & 316 & 0.00 & 504 & 5 & 316 & 0.00 & 51 & 316 & 0.00 \\
0.5 & 50 & 5 & 104 & 311 & 0.00 & 2479 & 113 & 311 & 0.00 & 1099 & 33 & 311 & 0.00 & 3111 & \textbf{32} & 311 & 0.00 & 1107 & 4 & 311 & 0.00 & 40 & 311 & 0.00 \\
0.8 & 10 & 1 & TL & 855 & 53.04 & 33869 & TL & 793 & 18.91 & 4892 & TL & 855 & 32.69 & 123500 & TL & 793 & \textbf{17.38} & 15086 & 9 & 793 & 0.00 & 78 & 793 & 0.00 \\
0.8 & 10 & 2 & TL & 913 & 44.80 & 35253 & TL & 853 & 26.18 & 6445 & TL & 899 & 36.83 & 117000 & TL & 845 & \textbf{25.59} & 17975 & 8 & 854 & 1.07 & 72 & 845 & 0.00 \\
0.8 & 10 & 3 & TL & 885 & 42.29 & 34230 & TL & 787 & 18.71 & 4916 & TL & 841 & 26.83 & 107600 & TL & 787 & \textbf{17.29} & 16300 & 9 & 829 & 5.34 & 74 & 787 & 0.00 \\
0.8 & 10 & 4 & TL & 853 & 55.10 & 34257 & TL & 777 & 17.46 & 4700 & TL & 830 & 31.51 & 109100 & TL & 777 & \textbf{16.52} & 15100 & 9 & 829 & 6.69 & 83 & 777 & 0.00 \\
0.8 & 10 & 5 & TL & 865 & 58.98 & 34289 & TL & 820 & 23.57 & 5188 & TL & 904 & 39.57 & 114502 & TL & 813 & \textbf{23.00} & 16200 & 8 & 827 & 1.72 & 77 & 813 & 0.00 \\
0.8 & 25 & 1 & TL & 514 & 18.09 & 18822 & TL & 508 & 12.79 & 6100 & \textbf{1555} & 508 & 0.00 & 100191 & TL & 508 & 7.23 & 16220 & 9 & 521 & 2.56 & 69 & 510 & 0.39 \\
0.8 & 25 & 2 & TL & 504 & 17.32 & 13406 & TL & 498 & 10.76 & 6000 & \textbf{1158} & 498 & 0.00 & 75456 & 1656 & 498 & 0.00 & 16200 & 9 & 499 & 0.20 & 65 & 498 & 0.00 \\
0.8 & 25 & 3 & TL & 533 & 20.87 & 17604 & TL & 513 & 12.87 & 5558 & TL & 550 & 15.73 & 107800 & TL & 513 & \textbf{ 5.83} & 16743 & 9 & 523 & 1.95 & 77 & 513 & 0.00 \\
0.8 & 25 & 4 & TL & 505 & 17.77 & 19977 & TL & 493 & 11.21 & 5241 & \textbf{1424} & 493 & 0.00 & 92315 & TL & 493 & 0.39 & 17238 & 8 & 506 & 2.64 & 75 & 493 & 0.00 \\
0.8 & 25 & 5 & TL & 515 & 22.38 & 16373 & TL & 504 & 16.65 & 7000 & TL & 528 & 18.21 & 114500 & TL & 504 & \textbf{14.02} & 19469 & 8 & 519 & 2.98 & 76 & 504 & 0.00 \\
0.8 & 50 & 1 & 511 & 307 & 0.00 & 4416 & 355 & 307 & 0.00 & 1499 & \textbf{49} & 307 & 0.00 & 2868 & 92 & 307 & 0.00 & 1568 & 8 & 307 & 0.00 & 64 & 307 & 0.00 \\
0.8 & 50 & 2 & 58 & 296 & 0.00 & 897 & 130 & 296 & 0.00 & 360 & \textbf{24} & 296 & 0.00 & 1484 & 32 & 296 & 0.00 & 397 & 8 & 296 & 0.00 & 57 & 296 & 0.00 \\
0.8 & 50 & 3 & 125 & 294 & 0.00 & 1888 & 141 & 294 & 0.00 & 316 & \textbf{30} & 294 & 0.00 & 1655 & 33 & 294 & 0.00 & 351 & 8 & 294 & 0.00 & 71 & 294 & 0.00 \\
0.8 & 50 & 4 & 82 & 270 & 0.00 & 974 & 72 & 270 & 0.00 & 105 & 35 & 270 & 0.00 & 1747 & \textbf{15} & 270 & 0.00 & 116 & 9 & 270 & 0.00 & 86 & 270 & 0.00 \\
0.8 & 50 & 5 & 89 & 278 & 0.00 & 1326 & 206 & 278 & 0.00 & 659 & \textbf{46} & 278 & 0.00 & 2611 & 58 & 278 & 0.00 & 744 & 9 & 278 & 0.00 & 77 & 278 & 0.00 \\
\bottomrule
\end{tabular}
\endgroup
\end{table}
\end{landscape}
\clearpage
\section{Conclusions and future work \label{sec:con}}
In this paper, we presented exact and heuristic solution algorithms for the recently introduced \emph{(minimum) weighted total domination problem (WTDP)} (see~\cite{MaEtAl2019}).
The WTDP\ is a problem from the family of domination problems, which are among the most basic combinatorial problems in graph optimization.
In the WTDP\ we are not just concerned with the concept of domination (i.e., finding a vertex-set $D\subset V$ for a given graph $G=(V,E)$, such that each vertex is either in $D$ or adjacent to a vertex in $D$),
but with the stronger concept of \emph{total domination}, which imposes that for each vertex $v \in D$, there is also a neighbor of $v$ in $D$ (i.e., the vertices of $D$ also need to be dominated by $D$).
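As a minimal sketch (assuming an adjacency-list graph representation; this is an illustration, not the notation of our formulations), the difference between domination and total domination can be checked as follows:

```python
def is_total_dominating(adj, D):
    """True if every vertex of the graph (including those in D) has at
    least one neighbor in D."""
    Dset = set(D)
    return all(any(u in Dset for u in adj[v]) for v in adj)

# On a 4-cycle 0-1-2-3, {0, 2} is a dominating set (1 and 3 are adjacent
# to 0), but not a total dominating set: 0 and 2 themselves have no
# neighbor inside the set.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
print(is_total_dominating(adj, {0, 2}))  # → False
print(is_total_dominating(adj, {0, 1}))  # → True
```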
In the WTDP, we have a weight function associated with the vertices and edges of the graph.
The goal is to find a total dominating set $D$ with minimal weight.
The weight counted in the objective is the weight of the vertices selected for $D$, the weight of the edges between vertices in $D$, and for each vertex in $V\setminus D$,
the smallest weight of an edge between it and a vertex in $D$.
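The objective just described can be sketched as follows (a hypothetical helper that assumes edge weights stored in a dict keyed by vertex pairs; a sketch under these assumptions, not our implementation):

```python
def wtdp_objective(adj, w_v, w_e, D):
    """Weight of a total dominating set D: vertex weights of D, plus the
    weights of edges with both endpoints in D, plus, for each vertex
    outside D, the cheapest edge connecting it to D."""
    Dset = set(D)
    total = sum(w_v[v] for v in Dset)
    total += sum(w for pair, w in w_e.items() if pair <= Dset)
    for v in adj:
        if v not in Dset:
            total += min(w_e[frozenset((v, u))] for u in adj[v] if u in Dset)
    return total

# 4-cycle example: D = {0, 1} costs 2 + 3 (vertex weights) + 5 (edge 0-1)
# + 1 (cheapest edge from vertex 2 to D) + 3 (from vertex 3 to D) = 14.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
w_v = {0: 2, 1: 3, 2: 1, 3: 4}
w_e = {frozenset(p): w for p, w in [((0, 1), 5), ((1, 2), 1), ((2, 3), 2), ((3, 0), 3)]}
print(wtdp_objective(adj, w_v, w_e, {0, 1}))  # → 14
```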
We introduced two new Mixed-Integer Programming models for the problem, and designed solution frameworks based on them. These solution frameworks include valid inequalities, starting heuristics and primal heuristics.
In addition, we also developed a genetic algorithm (GA), which is based on a greedy randomized adaptive search procedure (GRASP) version of our starting heuristic.
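As a generic illustration of the construction phase of such a procedure (a textbook GRASP sketch using a restricted candidate list based on vertex weights only, assuming a graph without isolated vertices; this is not the exact heuristic of Algorithm~\ref{alg:genetic}):

```python
import random

def grasp_construct(adj, w_v, alpha=0.3, rng=random):
    """Grow a total dominating set D by repeatedly adding a vertex drawn
    at random from the restricted candidate list (RCL) of the cheapest
    remaining vertices."""
    def is_total_dominating(D):
        return all(any(u in D for u in adj[v]) for v in adj)

    D = set()
    while not is_total_dominating(D):
        candidates = [v for v in adj if v not in D]
        lo = min(w_v[v] for v in candidates)
        hi = max(w_v[v] for v in candidates)
        rcl = [v for v in candidates if w_v[v] <= lo + alpha * (hi - lo)]
        D.add(rng.choice(rcl))
    return D

# On this toy 4-cycle the RCL is a singleton at every step, so the
# construction is deterministic and returns {0, 1, 2}.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
w_v = {0: 2, 1: 3, 2: 1, 3: 4}
D = grasp_construct(adj, w_v, rng=random.Random(0))
print(D)  # → {0, 1, 2}
```

Randomizing the greedy choice over the RCL (controlled by `alpha`) is what distinguishes a GRASP construction from a plain greedy one: repeated runs produce diverse starting solutions for the subsequent local search or, as here, for seeding a GA population.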
In a computational study, we compared our new exact approaches to the previous MIP approaches presented in~\cite{MaEtAl2019} and also analyzed the performance of the GRASP and GA.
The study revealed that our exact solution algorithms are up to 500 times faster than the exact approaches of~\cite{MaEtAl2019}, and that instances with up to 125 vertices can be solved to optimality within a timelimit of 1800 seconds.
Moreover, the GRASP and GA also work well and often find the optimal or a near-optimal solution within a short runtime.
In the study, we also investigated the influence of different instance characteristics, e.g., density and weight-structure, on the performance of our approaches. Instances where the edge weights span a larger range than the vertex weights turned out to be the most difficult for our algorithms, while high density also plays a role in making instances difficult.
The attained results confirm that domination problems are computationally challenging and, therefore, require the combined effort of MIP-based and heuristic approaches in order to tackle more difficult instances.
Therefore, we believe that the development of further modeling and algorithmic advances for domination problem variants is an interesting avenue for future work, as these problems are relevant both from the methodological and the practical point of view.
\paragraph{Acknowledgments} E. \'Alvarez-Miranda acknowledges the support of the National Commission for Scientific and Technological Research CONICYT, Chile, through the grant FONDECYT N.1180670 and through the Complex Engineering Systems Institute
PIA/BASAL AFB180003.
\section*{References}
\bibliographystyle{plainnat}
\section{Introduction} \label{sec:intro}
The study of the stellar content of galaxies, both in the local and early Universe, is fundamental to understand how they shape up over cosmic time, as it provides key constraints on star formation rates, total stellar masses, chemical enrichment, and the stellar Initial Mass Function (IMF). Since Stellar Population Synthesis (SPS) models are an essential tool to constrain the stellar content of galaxies, a detailed assessment of their validity and limitations is crucial to determine the physical and evolutionary properties of these systems. While spectral synthesis modelling at optical wavelengths is now a mature field of research and optical galaxy spectra can be accurately matched with SPS models, there is still a long road ahead for Near-Infrared (NIR) SPS models to consistently agree with observations~\citep{riffel2019, eftekhari2021}. For example, only in the last decade, the problem of matching strong sodium absorption lines of massive Early-Type Galaxies (ETGs) has been scrutinized in the NIR. The NIR sodium features in massive ETGs are much stronger than would be expected from an old stellar population with a Milky Way (MW)-like IMF and with solar elemental abundance ratios. A combination of a bottom-heavy IMF and a highly-enhanced sodium abundance can reconcile the tension between observations and NIR models \citep{smith2015, labarbera2017, rock2017}; however, the finding of massive ETGs with MW-like IMFs (as derived by strong gravitational lensing analyses) and strong sodium line-strengths at 1.14~$\mu$m calls for caution in interpreting the NIR sodium line-strengths \citep{smith2013, smith2015}. Another disagreement between observations and stellar population models in the NIR arises from CO absorption features, which are prominent in the \textit{H}- and \textit{K}-band spectral regions, and have always been a puzzle.
The appearance of the first overtone of CO in \textit{K} band, at $\sim2.3~\mu$m, in the spectra of galaxies was discussed by many authors in the 1970s and 1990s \citep{baldwin1973b, frogel1975, frogel1978, aaronson1978, frogel1980, oliva1995, mobasher1996, james1999}. CO absorption originates in the atmospheres of red giants and supergiants, which tend to have deeper CO absorptions than dwarf stars \citep{baldwin1973a}. \citet{faber1972} opened up the discussion that optical data could not be used to uniquely determine the proportion of M dwarfs and M giants in the galactic nucleus of M31, showing that while a synthesised model enhanced in M dwarfs would match the Na doublet at $8190$~\AA, a model dominated by M giants is required to explain the \textit{K}-band CO strength. Since then, several authors have analyzed the \textit{K}-band CO absorption of galaxies, alone or in combination with other indices, by comparing the observed strengths with those of stellar spectra~\citep{baldwin1973a, baldwin1973b, frogel1975, frogel1978, oliva1995, mobasher1996, james1999}. All of these studies found that line-strengths of the \textit{K}-band CO absorption lie in the range of giant stars, concluding that most of the light emitted from galaxies in the CO spectral region comes from these stars.
Using NIR observations of globular clusters, \citet{aaronson1978} showed that the $2.3~\mu$m CO index strength is strongly correlated with metallicity. They constructed SPS models and compared them with observations of the central regions of ETGs, claiming that metal-rich models with a Salpeter IMF adequately fit the CO index of the brightest ellipticals. \citet{frogel1978} also found that any significant increase in the number of late-type dwarfs beyond those already contained in the SPS models drives the \textit{K}-band CO index to unacceptably low values\footnote{\citet{kroupa1994} obtained a similar result by simulating the \textit{K}-band spectrum of cooling-flow ellipticals, i.e. elliptical galaxies with a population of low-mass stars formed from the accreted cooling gas. They used the spectral library of low-mass stars from \citet{arnaud1989} and showed that an overabundance of low-mass stars in these galaxies can be constrained through spectroscopy around the \textit{K}-band CO feature.}, concluding that the changes observed in the CO indices of galaxies are due to variations in the mean metallicity of their stellar populations. However, \citet{frogel1980} attributed the differences between various colours and the \textit{K}-band CO index of ETGs with respect to those of globular clusters and stellar synthesis models to a population of low-temperature luminous stars present neither in the clusters nor in the models. They hypothesized giant branch stars with higher metallicity than the Sun and/or asymptotic giant-branch (AGB) stars above the first red giant tip as two candidates for such a population.
Separation of the relative contributions to the \textit{K}-band CO line-strength from young supergiants in a burst population (a few Myr) and giants in an older stellar system ($\sim1$~Gyr) has also been a subject of debate; the NIR CO features are mainly sensitive to effective temperature, but are also somewhat shallower in giants than in supergiants of similar temperatures \citep{kleinmann1986, origlia1993, oliva1995}. However, metal-rich red giant stars can have CO absorptions that are as strong as those of the red supergiants found in starbursts; in other words, cold giants and slightly warmer supergiants can have equally strong CO line-strengths \citep{origlia1993, oliva1995}. This hampers the interpretation of CO absorptions in galaxies, in the absence of an independent measurement of the stellar temperature.
ETGs are known to host old stellar populations with little contribution, if any, from recently-formed stellar populations. Indeed, since~\citet{frogel1980}, the CO (2.3 $\mu$m) absorption has been used to infer the possible presence of young stars (red giants and supergiants) in ETGs. In particular, \citet{mobasher2000} found that the CO line-strength is stronger for ellipticals in the outskirts of the Coma cluster than in the core, interpreting this as evidence for younger populations in galaxies inhabiting low-density environments (see also~\citealt{james1999}). \citet{mobasher1996} and \citet{marmol2009} also interpreted the observed higher value of the $2.3~\mu$m CO line-strength of galaxies in the field with respect to those in the denser environments of clusters as due to relatively more recent episodes of star formation in field galaxies. Unfortunately, most of these analyses have hitherto been based on a direct comparison of CO indices in galaxies to those of stellar spectra, with no detailed comparison to predictions of SPS models. Recently, \citet{baldwin2018} measured the $2.3~\mu$m CO line-strength for 12 nearby ETGs, comparing it with predictions from different sets of SPS models. They found that all models systematically underpredict the strength of the \textit{K}-band CO.
While the CO bandhead in the \textit{K} band has been extensively analyzed in the literature, little effort has so far been devoted to other NIR CO lines, which are prominent in galaxy spectra, especially in the \textit{H} band. This is because low-temperature and heavily obscured stars are brighter in the \textit{K} band than in the \textit{H} band, and, perhaps more importantly, severe contamination of the \textit{H}-band spectral range by sky emission lines prevented its exploitation in the studies of the late twentieth century. However, nowadays, thanks to the high resolution of NIR spectrographs and new sky-subtraction techniques, the \textit{H} band is fully accessible to detailed stellar population studies.
CO absorptions in the \textit{K} band arise from transitions between two vibrational states with a difference ($\Delta\nu$) in quantum number $\nu$ of 2, whilst the $\Delta\nu$ of CO absorptions in the \textit{H} band is equal to 3 (see table 10 of \citealt{rayner2009}). This results in a lower ($\sim$2 orders of magnitude) optical depth of CO lines in the \textit{H} band than in the \textit{K} band; therefore, the CO lines in the \textit{H} band saturate in cooler stars than those in the \textit{K} band. Hence the strengths of the CO lines in the \textit{H} and \textit{K} bands are expected to behave differently with spectral type and luminosity \citep{origlia1993}. Indeed, performing a simultaneous analysis of different features arising from the same chemical species is of paramount importance, as it helps in breaking degeneracies among relevant stellar population parameters (e.g.~\citealt{conroy2012a}). The only effort in this direction has been made by~\citet{riffel2019}, who analyzed CO line-strengths in both the \textit{H} and \textit{K} band for a sample of nearby ETGs, covering a range of galaxy mass, as well as star-forming galaxies. They found that while some CO lines are matched by the models, others seem to exhibit a significant disagreement.
In this paper, we perform a detailed analysis of a whole battery of CO absorption features that are found in the NIR spectra of ETGs, focusing on a homogeneous, high-quality, sample of very massive nearby galaxies, with a velocity dispersion of $\sim300$~\kms\ (i.e. the high-mass end of the galaxy population), as well as other galaxy samples collected from previous works. We show that, indeed, all CO features, besides the well-studied $2.3~\mu$m CO bandhead, are systematically underestimated by the models for the very massive ETGs. We scrutinize several possible explanations of this ``CO mismatch'' problem, including the effect of a non-universal IMF, a contribution from young and intermediate-age populations, high-metallicity stars, as well as the effect of non-solar abundance ratios. We also present ad-hoc empirical SPS models that might help to solve the problem, by taking advantage of the scatter of stars in the available stellar libraries used to construct the models.
The paper is organised as follows: In Sections~\ref{sec:samples} and \ref{sec:libraries_models}, we describe our samples of ETGs, as well as the stellar libraries and synthesis models used in the present work. In Section~\ref{sec:spectral_indices} we show the CO mismatch problem for NIR absorption features, by comparing models and observations. Different experiments are presented in Section~\ref{sec:abundance_agb} in order to search for a possible solution of the CO puzzle. Our empirical approach to address the tension is explained in Section~\ref{sec:empirical_approach}. Section~\ref{sec:discussion} provides a discussion of our results. The overall conclusions are summarized in Section~\ref{sec:conclusions}.
\section{Samples} \label{sec:samples}
We used different samples of ETGs drawn from the literature. Our main galaxy sample is that of \citet{labarbera2019} (hereafter LB19), consisting of exquisite-quality, high-resolution, optical and NIR spectra for a sample of very massive ETGs at z $\sim0$, collected with the X-SHOOTER spectrograph \citep{vernet2011} at the ESO Very Large Telescope (VLT). Other samples of ETGs were used wherever the quality of data and wavelength coverage were suitable for our analysis. Although these samples are far from being homogeneous (as they were observed with different instruments), they encompass a wide range in galaxy stellar mass and stellar population parameters, allowing for a comprehensive study of NIR CO features. In particular, we have included two samples of ETGs residing in varying environments (see below), namely in the field \citep{marmol2009}, and in the Fornax cluster \citep{silva2008}, whose \textit{K}-band spectra were obtained with the same observational setup. These two samples allow us to explore the effect of the environment on the CO strengths.
The main properties of our galaxy samples are summarized as follows:
\begin{itemize}
\item\citet{labarbera2019} (hereafter XSGs): The seven massive ETGs of this sample are at redshift $z\sim0.05$, and span a velocity dispersion ($\sigma$) range of $\sim 300 - 360$~\kms. Five galaxies are centrals of galaxy groups, while two systems are satellites (see LB19 for details). The galaxies were observed using the X-SHOOTER three-arm echelle spectrograph, mounted on UT2 at the ESO VLT. The wavelength coverage of the spectra in the NIR arm ranges from $9800$ to $25000$~\AA \space with a final resolving power of $\sim5500$ (FWHM). The spectra used for the present work were extracted for all galaxies within an aperture of radius $1.5$", and have a high signal-to-noise ratio (SNR), above $170$~\AA$^{-1}$. In addition to individual spectra, we also made use of a stacked spectrum in our analysis, to characterize the average behaviour of the sample. Using optical (H${\rm \beta o}$, H$_{\rm \gamma F}$, TiO1, TiO2$_{\rm SDSS}$, aTiO, Mg4780, [MgFe]', NaD, \ion{Na}{i}8190) and NIR (\ion{Na}{i}1.14, \ion{Na}{i}2.21) spectral indices, LB19 showed that the stellar populations in the centre of these galaxies are old ($\sim11$~Gyr), metal-rich ($\sim+0.26$~dex), enhanced in $\alpha$-elements (\mbox{$\mbox{[$\alpha$/Fe]}$}$\sim+0.4$~dex), and have a bottom-heavy IMF (with logarithmic slope $\Gamma_{b}\sim3$ for the upper segment of a low-mass tapered IMF, often regarded as ``bimodal'').
\item\citet{francois2019}: The twelve nearby ($z < 0.016$) and bright (B$_{t}\sim 11-13$~mag) galaxies of this sample span $\sim 35 - 335$~\kms \space in velocity dispersion and are distributed along the Hubble sequence from ellipticals to spirals. They have been observed with the X-SHOOTER spectrograph at the VLT. NIR spectra were obtained with a $1.2$" slit, providing a resolving power of $4300$. The one-dimensional spectra were extracted by sampling the same spatial region for all galaxies. Using optical indices (<Fe>, [MgFe]', Mg$_{2}$, Mg$_{b}$, and H$_{\rm \beta}$), \citet{francois2019} showed that the stellar populations in the centre of these galaxies span a wide range of values in age and metallicity ($0.8 \leq {\rm age} \leq 15$~Gyr and $-0.4 \leq \rm \mbox{$\mbox{[Z/H]}$} \leq 0.5$).
\item\citet{baldwin2018}: They obtained \textit{JHK}-band spectra for twelve nearby ETGs from the ATLAS$\rm^{3D}$ sample \citep{cappellari2011}, with a velocity dispersion range of $\sigma = 80 - 120$~\kms, using GNIRS, the Gemini Near-Infrared Spectrograph at the 8m Gemini North telescope in Hawaii. The galaxies span a broad range in age from $1$ to $15$~Gyr (optically-derived SSP-equivalent ages) at approximately solar metallicity. They used a $0.3$" slit, with a spectral resolution of R $\sim 1700$. One-dimensional spectra were extracted within an aperture of $\pm\frac{1}{8}$R$_{\rm eff}$ except for one galaxy (whose spectrum was extracted within $\pm\frac{1}{12}$R$_{\rm eff}$). The SNR of the spectra is in the range $50-200$.
\item\citet{marmol2009}: They observed twelve ETGs in the field in the velocity dispersion range $59$~\kms \space < $\sigma$ < $305$~\kms \space with the ISAAC NIR imaging spectrometer, mounted on UT1 at the VLT. NIR spectra were obtained with a $1$" slit, providing a spectral resolution of $7.1$~\AA \space at $2.3~\mu$m. The wavelength coverage of the spectra is narrow (from $2.20$ to $2.29~\mu$m), including \ion{Na}{i}, \ion{Fe}{i}, \ion{Ca}{i} and the first CO absorptions at the red end of the \textit{K} band (see below). They extracted galaxy spectra within a radius corresponding to $\frac{1}{8}$R$_{\rm eff}$. The ages of these galaxies were determined by \citet{sanchez2006b}, using the H$_{\rm\beta}$ index and a preliminary version of the SPS models of \citet{vazdekis2010}.
\item\citet{silva2008}: This sample consists of eight ETGs in the Fornax cluster with $\sigma = 70 - 360$~\kms, all observed with the ISAAC NIR imaging spectrometer. A $1$" slit was used during the observations, giving a spectral resolution of R $\approx$ $2900$ ($7.7$~\AA\ FWHM). The central spectra were extracted within a radius corresponding to $\frac{1}{8}$R$_{\rm eff}$, with an SNR of $48 - 280$, and covering a wavelength range of $2.12 - 2.37~\mu$m. The ages of these galaxies are drawn from \citet{kuntschner1998}, and were computed with the \citet{worthey1994} SPS models.
\end{itemize}
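The samples above quote spectral resolutions in three different conventions: resolving power $R=\lambda/{\rm FWHM}$, FWHM in \AA, and velocity dispersion in \kms. As a quick reference (an illustrative snippet, not part of the original analysis; the example values are only indicative), the conversions between them can be sketched as follows:

```python
import math

# Illustrative conversion between resolution conventions: resolving power
# R = lambda/FWHM, FWHM in Angstrom, and the equivalent Gaussian velocity
# dispersion sigma in km/s. Not part of the original analysis pipeline.
C_KMS = 299_792.458                                   # speed of light [km/s]
FWHM_TO_SIGMA = 2.0 * math.sqrt(2.0 * math.log(2.0))  # ~2.355 for a Gaussian

def sigma_from_R(R):
    """Instrumental velocity dispersion (km/s) for resolving power R."""
    return C_KMS / (R * FWHM_TO_SIGMA)

def sigma_from_fwhm(fwhm_aa, lam_aa):
    """Instrumental velocity dispersion (km/s) from a FWHM (Angstrom) at wavelength lam (Angstrom)."""
    return (fwhm_aa / lam_aa) * C_KMS / FWHM_TO_SIGMA

# e.g. the ISAAC K-band setup above (R ~ 2900, i.e. 7.7 A FWHM near 2.3 um)
sigma_isaac = sigma_from_R(2900)               # roughly 44 km/s
sigma_check = sigma_from_fwhm(7.7, 23000.0)    # comparable value
```

All of these instrumental dispersions are small compared with the common broadening of $360$~\kms\ applied before measuring the indices (Sec.~\ref{sec:spectral_indices}), so differences in spectral resolution among the setups have little impact on the comparison.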
Note that, for all spectra, we measured CO spectral indices (see Sec.~\ref{sec:spectral_indices}), when they lie within the available spectral range and are considered to be safe for the stellar population analysis, according to the criteria given in \citet{eftekhari2021} (see their section~4.8).
\section{Stellar libraries and stellar population models}\label{sec:libraries_models}
We compare observed CO index strengths with predictions of various SPS models which differ in a number of ingredients, such as the adopted isochrones, stellar libraries, IMF shape, age and metallicity coverage, as well as the prescription to implement the AGB phase. The latter is one of the most important aspects when studying the NIR spectral range, as it is the main source of the differences seen among models. We also compare CO indices observed in our galaxy samples with individual stars, from theoretical and empirical stellar libraries. The main features of the models and libraries are summarized below.
\subsection{Stellar population models}\label{sec:models}
\begin{itemize}
\item E-MILES: We used two model versions: base E-MILES \citep{vazdekis2016} and $\alpha$-enhanced E-MILES (an updated version of the Na-enhanced models described in \citealt{labarbera2017}). The models are available for two sets of isochrones: BaSTI \citep{pietrinferni2004} and Padova00 \citep{girardi2000}. These isochrones provide templates for a wide range of ages ($1-14$~Gyr for BaSTI and $1-17.78$~Gyr for Padova00) and metallicities (from $-0.35$ to $+0.26$~dex for BaSTI and from $-0.4$ to $+0.22$~dex for Padova00). The base models are computed for different IMF shapes (Kroupa universal, revised Kroupa, Chabrier, unimodal, and bimodal), while the $\alpha$-enhanced models are only available for bimodal IMF distributions (see \citealt{vazdekis1996, vazdekis1997} for a description of the different IMF parametrizations). E-MILES models cover a wide wavelength range, from the ultraviolet ($1680$~\AA) to the infrared ($50000$~\AA), and are based on the NGSL stellar library \citep{gregg2006} in the ultraviolet, the MILES \citep{sanchez2006a}, Indo-US \citep{valdes2004} and CaT \citep{cenarro2001} stellar libraries in the optical, and 180 stars of the IRTF stellar library (see below) in the infrared. For the current study, we utilised the BaSTI-based model predictions with a bimodal IMF with slopes $\Gamma_{b}=1.3$ (representative of a Kroupa-like IMF) and $\Gamma_{b}=3.0$ (representative of a bottom-heavy IMF), and two metallicity values, around solar ($+0.06$) and metal-rich ($+0.26$). Note that the BaSTI isochrones have cooler temperatures for low-mass stars and use simple synthetic prescriptions to include the AGB regime.
\item Conroy et al.: We used two model versions: \citet{conroy2012a} (hereafter CvD12) and \citet{conroy2018} (hereafter C18). The CvD12 models rely on 91 stars from the IRTF stellar library in the NIR, using different isochrones to cover different phases of stellar evolution, from the hydrogen burning limit to the end of the AGB phase, namely the Dartmouth isochrones \citep{dotter2008}, Padova isochrones \citep{marigo2008} and Lyon isochrones \citep{chabrier1997, baraffe1998}. The publicly-available models of CvD12 cover either non-solar abundance templates at $13.5$~Gyr, or younger ages (from $3$ to $13.5$~Gyr) at solar abundance ratios and metallicity. The $13.5$~Gyr model of solar metallicity and solar abundance is available for a Salpeter IMF, a Chabrier IMF, two bottom-heavy IMFs with logarithmic slopes of x=$3$ and x=$3.5$, and a bottom-light IMF. In this paper, we utilised C-, O-, and $\alpha$-enhanced models of age $13.5$~Gyr, in addition to solar metallicity models of ages $3$ to $13.5$~Gyr with a Chabrier IMF. The updated version of the CvD12 models is based on the MIST isochrones \citep{choi2016} and utilises the Extended IRTF Stellar Library \citep{villaume2017}, which provides continuous wavelength coverage from 0.35 to 2.4~$\mu$m. C18 models cover a wide range in metallicity ($-1.5 \lesssim {\rm[Fe/H]} \lesssim 0.3$) and include new metallicity- and age-dependent response functions for 18 elements. In this paper, we used C18 models with overall metallicity [Z/H] of 0.0 and 0.2~dex, with a Kroupa-like IMF, for ages between 1 and 13~Gyr. We also used C-enhanced models ([C/Fe] = +0.15) of age 13~Gyr, solar metallicity and a Kroupa-like IMF.
\item\citet{maraston2005} (hereafter M05): This model is based on the fuel consumption theorem of \citet{renzini1981} for the post main sequence stages of stellar evolution (in particular TP-AGBs) and makes use of two stellar libraries, the BaSeL theoretical library \citep{lejeune1998} in the optical and NIR, and the \citet{lancon2000} empirical library of TP-AGB stars. The models are available for Salpeter and Kroupa IMFs and for two horizontal branch (HB) morphologies: blue and red. They cover a wide range in metallicity (from $-2.35$ to $+0.67$~dex) and age (from $10^{3}$~yr to $15$~Gyr). For our analysis we have used models with solar metallicity, a Kroupa IMF, ages between $1$ and $14$~Gyr, and a red HB morphology.
\end{itemize}
\subsection{Stellar libraries}\label{sec:libraries}
\begin{itemize}
\item IRTF \citep{cushing2005, rayner2009}: This is an empirical spectral library of $210$ cool stars covering the \textit{J}, \textit{H}, and \textit{K} bands ($0.8-5~\mu$m) at a resolving power of R $\sim2000$. The library includes late-type stars, AGB, carbon and S stars, mostly with solar metallicity\footnote{Note that, although the IRTF library has been recently extended to a wider range in metallicity by \citet{villaume2017}, in the present paper we used the original version of the IRTF library, as this library is actually used to build up our reference SSP models (E-MILES). Moreover, the extended library shows a significant improvement mostly in the low-metallicity regime (see figure~1 of \citealt{villaume2017}), which is not relevant for our samples of massive ETGs.}, and provides spectra with absolute flux calibration. For this study, we have used the subsample of $180$ IRTF stars that are also used to construct the E-MILES models in the NIR.
\item Theoretical stars from \citet{knowles19_Thesis}: We included a small set of theoretical stellar models, generated using the same method presented in \cite{knowles21}. In summary, these models are computed using ATLAS9 \citep{kurucz1993}, with the publicly available\footnote{\url{http://research.iac.es/proyecto/ATLAS-APOGEE//}} opacity distribution functions described in \citet{mezaros2012}. We used the 1{\small D} and LTE mode of ASS$\epsilon$T (Advanced Spectrum SynthEsis Tool, \citealt{koesterke2009}) with input ATLAS9 atmospheres to produce fully consistent synthetic spectra at air wavelengths, with abundances varied in the same way in both the model atmosphere and spectral synthesis components. The models here adopt \citet{asplund2005} solar abundances and a microturbulent velocity of $2$~\kms. We direct interested readers to \cite{knowles19_Thesis} and \cite{knowles21} for further details. The stellar models used in this work have effective temperatures of $3500$, $3750$ and $4000$~{\small K}, for $\log g$=$1.5$, \mbox{$\mbox{[M/H]}$}=\mbox{$\mbox{[$\alpha$/M]}$}=$0.0$ and two different carbon abundances: scaled-solar (\mbox{$\mbox{[C/Fe]}$}=$0.0$) and enhanced (\mbox{$\mbox{[C/Fe]}$}=$0.25$). \mbox{$\mbox{[M/H]}$}\ here is defined as a scaled metallicity in which all metals, apart from the $\alpha$-elements and carbon if they are non-solar, are scaled by the same factor from the solar mixture (e.g. \mbox{$\mbox{[M/H]}$}=$0.2$=\mbox{$\mbox{[Fe/H]}$}=\mbox{$\mbox{[Li/H]}$}). This definition results in \mbox{$\mbox{[$\alpha$/M]}$}=\mbox{$\mbox{[$\alpha$/Fe]}$}\ and \mbox{$\mbox{[C/M]}$}=\mbox{$\mbox{[C/Fe]}$}. The models were generated specifically for this work, with a wavelength range of $1675.1$-$24002.1$~\AA\ and a resolution of R $\approx100000$.
\item APOGEE \citep{majewski2017}: APOGEE is an \textit{H}-band ($1.59 - 1.69~\mu$m) spectral library of stars in the Milky Way from the SDSS, with a resolving power of $\sim22500$. The $\rm16^{th}$ data release provides reduced spectra, as well as pipeline-derived stellar parameters and elemental abundances (ASPCAP; \citealt{garcia2016}), for more than $430,000$ stars. For our study, we selected a subset of stars from the ASPCAP catalog\footnote{We removed stars with bits STAR\_BAD, TEFF\_BAD, LOGG\_BAD, and COLORTE\_WARN set in the APOGEE\_ASPCAPFLAG bitmask, and also those with bits BRIGHT\_NEIGHBOR, VERY\_BRIGHT\_NEIGHBOR, PERSIST\_HIGH, PERSIST\_MED, PERSIST\_LOW, SUSPECT\_RV\_COMBINATION, and SUSPECT\_BROAD\_LINES set in the APOGEE\_STARFLAG bitmask.}. We also removed stars with SNR < $100$ (per pixel), those with a radial velocity scatter greater than $1.5$~\kms, and those with a radial velocity error greater than $3$~\kms.
\end{itemize}
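The APOGEE quality cuts described above amount to a simple threshold-plus-bitmask filter. The following sketch illustrates the mechanics on a toy catalogue (the column names follow the ASPCAP summary table, but the bit positions below are placeholders; the real values must be taken from the APOGEE bitmask documentation):

```python
import numpy as np

# PLACEHOLDER bit positions, standing in for e.g. STAR_BAD/TEFF_BAD/LOGG_BAD
# (ASPCAPFLAG) and the PERSIST_*/SUSPECT_* bits (STARFLAG). The actual bit
# numbers are defined in the APOGEE bitmask documentation.
BAD_ASPCAP_BITS = [0, 1, 2]
BAD_STAR_BITS = [3, 4, 5]

def bitmask(bits):
    """Combine a list of bit positions into a single integer mask."""
    m = 0
    for b in bits:
        m |= 1 << b
    return m

def good_stars(snr, vscatter, verr, aspcapflag, starflag):
    """Boolean mask of stars passing cuts analogous to those described above:
    SNR > 100, RV scatter < 1.5 km/s, RV error < 3 km/s, and no bad flag bits."""
    ok = (snr > 100) & (vscatter < 1.5) & (verr < 3.0)
    ok &= (aspcapflag & bitmask(BAD_ASPCAP_BITS)) == 0
    ok &= (starflag & bitmask(BAD_STAR_BITS)) == 0
    return ok
```

Applied to the ASPCAP columns (SNR, VSCATTER, VERR, ASPCAPFLAG, STARFLAG) with the documented bit values, this reproduces the selection in a single vectorised pass.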
\section{CO spectral indices}\label{sec:spectral_indices}
As described in Sec.~\ref{sec:intro}, fitting SPS models to CO absorption features of galaxies in the \textit{K} band has proven to be challenging. To gain further insights into the problem, we consider here not only the already studied CO feature at $\sim2.30~\mu$m, but also other CO features at $2.32~\mu$m and $2.35~\mu$m. These are the first-overtone CO bandheads at the red end of the \textit{K} band; their depth increases with decreasing stellar temperature \citep{kleinmann1986, rayner2009} and luminosity \citep{origlia1993} and becomes progressively weaker with decreasing metallicity \citep{frogel1975, aaronson1978, doyon1994, davidge2018, davidge2020}, being stronger in red giant and supergiant stars than in dwarf stars\footnote{The first overtone bands of $^{12}$CO show bandheads at 22929.03, 23220.50, 23518.16, and 23822.97~\AA\ (in air) and those of $^{13}$CO at 23441.84, 23732.94, and 24030.49~\AA. The feature at $2.35~\mu$m is dominated by the $^{12}$CO bandhead at 23518.16~\AA\ but disturbed by the $^{13}$CO line at 23441.84~\AA. The strong saturation of the $^{12}$CO feature makes the $^{13}$CO line clearly visible even for low $^{12}$CO/$^{13}$CO abundance ratios (see the shoulder bluewards of the central feature in panel CO2.35 of Fig.~\ref{fig:fig1}). Consequently, the first overtone bands of CO can be used to estimate the $^{12}$CO/$^{13}$CO isotope ratio providing, in principle, important clues about stellar evolution and nucleosynthesis.}. In addition, we also include in our analysis six second-overtone CO bandheads in the \textit{H} band, namely at $1.56$, $1.58$, $1.60$, $1.64$, $1.66$, and $1.68~\mu$m. In the following, we first show observed and model spectra around each CO absorption (Sec.~\ref{sec:obs_vs_mod}), and then we discuss observed and model line-strengths in the \textit{K} and \textit{H} bands (see Secs.~\ref{sec:K_band} and~\ref{sec:H_band}), respectively.
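For reference, except for D$_{\rm CO}$, which is defined through ratios of mean fluxes (see \citealt{marmol2008} and Sec.~\ref{sec:K_band}), the CO indices adopted here are equivalent-width-like measurements of the schematic form
\begin{equation}
{\rm EW} = \int_{\lambda_1}^{\lambda_2} \left[ 1 - \frac{F(\lambda)}{F_{\rm c}(\lambda)} \right] {\rm d}\lambda \, ,
\end{equation}
where $\lambda_1$ and $\lambda_2$ delimit the central bandpass and $F_{\rm c}(\lambda)$ is the local pseudo-continuum, obtained by interpolating the mean fluxes within the blue and red pseudo-continua bands, so that deeper absorption yields a larger index. The exact bandpass definitions are those of \citet{eftekhari2021}.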
\subsection{CO indices: observed vs model spectra} \label{sec:obs_vs_mod}
In Fig.~\ref{fig:fig1}, we show the spectra of XSGs from LB19, around CO absorptions from the \textit{H} through the \textit{K} band, and compare them with model spectra. From light to dark, the shifted red spectra correspond to the individual XSGs, while the median-stacked spectrum is shown in black. The wavelength definitions of the CO indices, from \citet{eftekhari2021}, are shown with shaded grey and orange areas, corresponding to the index bandpasses and pseudo-continua bands, respectively. In the same figure, we also show a fiducial model (pink) corresponding to an E-MILES simple stellar population (SSP) with an age of $11$~Gyr (the mean age of the XSGs as derived from optical indices), solar chemical abundance pattern, and a bimodal IMF of logarithmic slope $1.3$ (corresponding to a MW-like IMF). Note that although the spectra of XSGs do not cover the CO index at $2.35~\mu$m, we also show this index as it is covered by other galaxy samples in our analysis (see Fig.~\ref{fig:fig2}). For clarity and ease of comparison, the normalised spectra of XSGs are shifted upwards, while the stacked spectrum and model spectra are only normalised to the mean flux within the pseudo-continua bands. Clearly, XSG2 is the one with the largest scatter amongst the spectra, likely because of the lower quality of the data for this galaxy, which was observed with a different observational setup (see LB19 for details). A clear mismatch between observations and models is seen in all panels. The CO indices of the XSGs are much stronger than those of the fiducial SSP model.
As a first step, we investigate whether a young population, a non-solar metallicity, a dwarf-rich stellar population, and/or a non-solar chemical abundance pattern could significantly affect the CO lines, and explain the deep CO absorptions in the data:
\begin{description}
\item[-] As mentioned in Sec.~\ref{sec:intro}, the disagreement between \textit{K}-band CO observations and models has been attributed to the presence of intermediate-age stellar components, dominated by stars in the AGB evolutionary phase. To test this scenario, in Fig.~\ref{fig:fig1}, we show an E-MILES SSP with the same parameters as the fiducial model but with an age of $2$~Gyr (see cyan curves). Except for the CO1.64 index, a variation in age does not significantly change the depth of the CO absorption features. We assess this issue in more detail in Sec.~\ref{sec:AGB}.
\item[-] Since XSGs have metal-rich stellar populations, as shown by the analysis of optical spectral indices (see LB19), in the figure we also show the effect of increasing metallicity, with an SSP having the same parameters as the fiducial model but \mbox{$\mbox{[M/H]}$} = $+0.26$ (violet). According to E-MILES models, the increase in metallicity does not significantly affect the depth of (all) CO features.
\item[-] The strong CO absorptions cannot be explained by IMF variations either; an SSP model with the same fiducial model parameters but with a steeper IMF slope (green) has shallower CO absorptions, hence worsening the fitting. This was first pointed out by \citet{faber1972}, who showed that an increase in the number of dwarf stars drives the CO index at $22800$~\AA\ to unacceptably low values. Figure~\ref{fig:fig1} shows that, indeed, all CO features exhibit a similar behaviour.
\item[-] In Fig.~\ref{fig:fig1}, we also investigate non-solar \mbox{$\mbox{[$\alpha$/Fe]}$}\ abundance effects. An SSP that differs from the fiducial model only in $\alpha$-enhancement is shown in brown. As for the IMF, the enhancement in $\alpha$ weakens the strength of the CO absorptions. Notice that a different result seems to hold for the A-LIST SPS models \citep{ashok2021}, which suggest an increase of CO strength with \mbox{$\mbox{[$\alpha$/Fe]}$}\ (see their figure 6.a).
\end{description}
\begin{figure*}
\centering
\includegraphics[width=.97\linewidth]{figs/Figure1.pdf}
\caption{Spectra of XSGs and E-MILES models in the regions of CO features: individual XSG spectra (light to dark red); the stacked spectrum of XSGs (black); a ``fiducial model'' with solar abundance and metallicity, an age of $11$~Gyr, and MW-like IMF (pink); a model with Age = $2$~Gyr (cyan); a model with \mbox{$\mbox{[M/H]}$}\ = $+0.26$ (violet); a model with a bottom-heavy IMF of $\Gamma_{b} = 3.0$ (green); and a model with \mbox{$\mbox{[$\alpha$/Fe]}$}\ = $+0.4$ (brown). All these models have the same stellar population parameters as the fiducial model except for a given parameter (see labels on the top). The model and observed spectra have been convolved to a common resolution of $\sigma = 360$~\kms, as this is the highest velocity dispersion in the sample of \citet{labarbera2019}. The central bandpasses of CO indices, as well as the blue and red pseudo-continua, are from \citet{eftekhari2021} and are shown as grey and orange areas, respectively. All spectra have been normalised to the mean flux within pseudo-continua bands. The XSG individual spectra have also been arbitrarily shifted to display galaxy-to-galaxy variations in the depth of the COs. Note that the spectral range of XSGs does not cover the CO2.35 index but this index is included in the figure as it is used for other galaxy samples. Remarkably, for all CO features, galaxies show stronger absorption than the models, regardless of the adopted model parameters.
}
\label{fig:fig1}
\end{figure*}
\subsection{CO line-strengths in \textit{K} band} \label{sec:K_band}
Figure~\ref{fig:fig2} shows a quantitative comparison
of line-strengths of CO indices in \textit{K} band between data and different SPS models (see Sec.~\ref{sec:models}), i.e. E-MILES (solid/dotted pink and purple lines), CvD12 (dashed violet line), C18 (solid and dotted violet lines), and M05 (solid black line) models. We plot model line-strengths of the COs as a function of age, while for galaxies we plot observed line-strengths as a function of age, as estimated in previous works (see Sec.~\ref{sec:samples}). For each index, the measurements on the individual XSG spectra are plotted with open red circles, while the filled red circle corresponds to the measurement on their stacked spectrum. Individual galaxies from the \citet{francois2019}, \citet{baldwin2018}, \citet{marmol2009}, and \citet{silva2008} samples (hereafter F19, B18, M09, and S08, respectively) are shown with open lime squares, cyan triangles, blue pentagons, and orange diamonds, respectively. For each index, the median CO line-strength of each sample is shown with a filled symbol of the same colour. The indices were measured after smoothing all spectra to a common velocity dispersion of $\sigma = 360$~\kms.
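The smoothing to a common velocity dispersion can be sketched as follows (an illustrative numpy snippet, not the actual measurement pipeline used here): on a uniform $\ln\lambda$ grid, velocity broadening is a convolution with a Gaussian kernel, whose width is the quadrature difference between the target dispersion and the dispersion already imprinted on the spectrum.

```python
import numpy as np

C_KMS = 299_792.458  # speed of light [km/s]

def broaden(loglam, flux, sigma_target, sigma_obs=0.0):
    """Broaden a spectrum, sampled on a UNIFORM ln(lambda) grid, from an
    intrinsic dispersion sigma_obs up to sigma_target (both in km/s) by
    Gaussian convolution with the quadrature-difference kernel width."""
    sigma_kms = np.sqrt(sigma_target**2 - sigma_obs**2)
    dlnlam = loglam[1] - loglam[0]              # uniform step assumed
    sigma_pix = (sigma_kms / C_KMS) / dlnlam
    half = int(np.ceil(5.0 * sigma_pix))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()                      # flux-conserving normalisation
    return np.convolve(flux, kernel, mode="same")

# toy example: a delta-like absorption line becomes shallower and wider
loglam = np.log(22000.0) + 1.0e-5 * np.arange(2001)  # ~3 km/s per pixel
flux = np.ones_like(loglam)
flux[1000] = 0.0
smoothed = broaden(loglam, flux, 360.0, sigma_obs=150.0)
```

Since the indices are measured after all spectra are brought to $\sigma = 360$~\kms, the comparison between models and the heterogeneous galaxy samples is performed at a single, common broadening.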
\subsubsection{D$_{\rm CO}$ vs. age}
In the upper-left panel of Fig.~\ref{fig:fig2}, we consider the D$_{\rm CO}$ index, i.e. the definition of the first CO bandhead in \textit{K} band from \citet{marmol2008}. This index is defined with two blue pseudo-continua and the absorption bandpass (see \citealt{marmol2008} for details). In the same panel, we also included measurements for the spectra of S08 and M09 (open/filled orange diamonds and blue pentagons). For ages greater than $3$~Gyr, different models show similar trends, with D$_{\rm CO}$ showing no significant variation with age. Only for ages younger than $3$~Gyr is there a significant difference between E-MILES and M05 models, since the contribution of AGB stars is more emphasized in the young populations of M05. We further discuss this issue in Sec.~\ref{sec:AGB}. All models underpredict the median value of D$_{\rm CO}$ for the samples except for that of F19. However, since the scatter of D$_{\rm CO}$ in this sample is far larger than that for the other samples, no firm conclusions can be drawn. In general, Fig.~\ref{fig:fig2} shows that the models can barely reproduce only the galaxies with the smallest D$_{\rm CO}$ values. For instance, two galaxies of B18 at $13$~Gyr are well matched with E-MILES and M05 models, and the same applies to the two galaxies in the S08 sample with the weakest CO absorption, which are well matched with E-MILES models (see the solid pink line and the orange diamonds for an age of $\sim 8$~Gyr). Since the M05 model differs significantly from the E-MILES model in the predictions for young populations, it can match the youngest galaxies of M09 and B18. Overall, for the D$_{\rm CO}$ index, the mismatch between observations and models applies to all models. C18 models predict the lowest values for D$_{\rm CO}$ and cannot match any data points.
\subsubsection{D$_{\rm CO}$ vs. metallicity}
Another parameter that can be considered is the variation of metallicity among our galaxies, which span a wide range, from \mbox{$\mbox{[M/H]}$} = $-0.4$ to \mbox{$\mbox{[M/H]}$} = $+0.5$~dex. We investigate the effect of metallicity by showing the predictions of E-MILES models with a MW-like IMF ($\Gamma_{b}=1.3$) and total metallicity of $+0.26$~dex (dotted-pink line), and predictions of C18 models with a Kroupa IMF and [Z/H] = 0.2~dex (dotted-violet line). Hence, the effect of metallicity can be seen by comparing dotted and solid lines in Fig.~\ref{fig:fig2}. The figure shows that the discrepancy between the dotted-pink line and the filled blue pentagon and red circle (orange diamond) is almost 8 (2) times larger than the increase in D$_{\rm CO}$ caused by variations in \mbox{$\mbox{[M/H]}$}\ from $+0.06$ to $+0.26$ dex.
However, it is worth noticing that the IRTF stellar library, that is used to construct E-MILES models in the NIR, consists of stars in the solar neighbourhood, which are unavoidably biased towards solar metallicity. In fact, according to \citet{rock2015}, the quality of E-MILES models decreases at supersolar metallicities. Also, we cannot exclude a non-linear behaviour of CO absorptions in the very high-metallicity regime. Therefore, we looked at C18 models, which are based on the Extended IRTF Stellar Library with better coverage in metallicity (although this advantage mainly applies to the metal-poor regime). Comparing the difference between dotted and solid pink lines with the difference between dotted and solid violet lines shows that the effect of metallicity in C18 models is larger than in E-MILES models, but it is not enough to match the high values of XSGs or M09 data. Also, notice that although C18 models are based on a stellar library with better coverage in metallicity, they predict the lowest values for D$_{\rm CO}$, worsening the discrepancy with the observed data points. We conclude that the mismatch between data and models cannot be explained with metallicity variations alone.
\subsubsection{D$_{\rm CO}$ vs. IMF}
\label{sec:DCO_IMF}
We also investigated the effects of a bottom-heavy IMF ($\Gamma_{b}=3.0$) on the CO features as shown by solid- and dotted-purple lines for solar and metal-rich populations, respectively. The IMF slope of XSGs has been determined by LB19, using a combination of optical and NIR (Na) indices, finding that all galaxies have a bottom-heavy IMF in the centre. However, Fig.~\ref{fig:fig2} shows that a dwarf-rich population does significantly increase the discrepancy between models and observed CO indices. While this result seems to be consistent with \citet{alton2018}, who claimed a MW-like IMF for massive galaxies in their sample, based on \textit{J}- and \textit{K}-band spectral indices (including two CO bandheads in \textit{K} band), it has to be noted that for a MW-like IMF the models do not match the observations either. In other words, any claim from CO lines on the IMF should be taken with caution.
\subsubsection{D$_{\rm CO}$ vs. environment}
The samples of S08 and M09 allow us to assess the effect of galactic environment on the strength of D$_{\rm CO}$, as galaxies in the former sample reside in a high-density environment (Fornax cluster) compared to the latter, which consists of field galaxies.
It is worth mentioning that the Fornax cluster is, after Virgo, the closest and most massive galaxy cluster within 20~Mpc. It has a virial mass of 10$^{13}$\ms \space \citep{drinkwater2001} and, while most of its bright members are ETGs, mainly located in its core \citep{grillmair1994}, its mass assembly is still ongoing \citep{scharf2005}, and therefore it is not fully virialized. The Fornax cluster is an evolved, yet active environment, as well as a rich reservoir for studying the evolution of galaxies in a cluster environment, particularly within its virial radius.
The two samples of S08 and M09 have been observed with the same telescope and observational setup (see Sec.~\ref{sec:samples}), allowing for a direct comparison. Note that these two samples are also comparable with respect to velocity dispersion (see Sec.~\ref{sec:samples}). As shown in the plot, M09 galaxies, located in the field, tend to have larger values of D$_{\rm CO}$ strength than S08 galaxies in the Fornax cluster. We see that the median value of these two samples cannot be matched with the current models. We can speculate that the origin of the difference between D$_{\rm CO}$ values of ETGs in low- and high-density environments might be due to a difference in the carbon abundance of field and cluster galaxies. Since star formation in dense environments takes place more rapidly than in isolated galaxies, carbon, which is expelled into the interstellar medium by dying stars of intermediate masses, cannot be incorporated into newer stellar generations. Therefore, the resulting stars in dense environments, like cluster galaxies, exhibit smaller carbon abundance with respect to their counterparts of similar mass in low-density environments. As the CO molecule has high binding energy, carbon mostly forms CO molecules. Thus, CO indices in field galaxies are stronger than in cluster galaxies, as was suggested by M09. Moreover, \citet{rock2017} found a dichotomy between the \ion{Na}{i}2.21 values of the ETGs in the S08 and M09 samples. Indeed, they showed that one possible driver of \ion{Na}{i}2.21 might be \mbox{$\mbox{[C/Fe]}$}\ abundance.
\subsubsection{Other \textit{K}-band CO indices}
The upper-right panel of Fig.~\ref{fig:fig2} shows measurements for the same CO feature as for D$_{\rm CO}$, but using the index definition, named CO2.30, from \citet{eftekhari2021}. While D$_{\rm CO}$ measures the absorption at $\sim2.30~\mu$m as a generic discontinuity, defined as the ratio between the average fluxes in the pseudo-continua and in the absorption bands, the CO2.30 index follows a Lick-style definition, with blue and red pseudo-continua and the absorption bandpass (see figure A3 in \citealt{eftekhari2021} for a comparison of the two definitions). Note that the red bandpass of CO2.30 is not covered by the S08 and M09 spectra and, therefore, these samples are not included in the upper-right panel of Fig.~\ref{fig:fig2}. For ages of 3 and $5$~Gyr, the predictions of CvD12 and E-MILES models for CO2.30 are very similar, while for older ages, CvD12 models are closer to the M05 predictions, leading to a larger mismatch. Also, C18 models with solar metallicity (solid violet line) show a smaller difference from M05 models than E-MILES ones do for ages older than 3 Gyr, and their trend is very similar to E-MILES and M05 models, in contrast to CvD12. The difference between E-MILES, CvD12, C18 and M05 models for old ages (>$5$~Gyr) is quite significant, and comparable to the effect of varying the IMF slope (see pink and purple lines in the figure). However, similarly to D$_{\rm CO}$, all models fail to match the observed strong CO2.30 line-strengths, and changing \mbox{$\mbox{[M/H]}$}\ of E-MILES models from $+0.06$ to $+0.26$, increases CO2.30 by only $\sim0.2$~\AA, while the discrepancy between the pink line and the filled lime square and cyan triangle (red circle) is $\sim1 (2)$~\AA. Moreover, although increasing the overall metallicity of C18 by 0.2~dex leads to an increase of $\sim1$~\AA, the C18 models predict the lowest values for CO2.30 and cannot match the data.
Even by considering only the relative changes and adding the effect of metallicity predicted by C18 to the E-MILES models, the models are still not able to match the high value of the stacked spectrum of XSGs. Hence, using the CO2.30 index, we end up with the same conclusions as for D$_{\rm CO}$, i.e. an intrinsic offset exists between models and data, which is independent of the index definition.
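For reference, the two definitions compared above can be written schematically (following the descriptions of \citealt{marmol2008} and \citealt{eftekhari2021}) as
\begin{equation}
\mathrm{D_{CO}} = \frac{\langle F_{\lambda,\rm c}\rangle}{\langle F_{\lambda,\rm a}\rangle}, \qquad
\mathrm{CO2.30} = \int_{\lambda_1}^{\lambda_2} \left[1 - \frac{F(\lambda)}{F_{\rm c}(\lambda)}\right] \mathrm{d}\lambda ,
\end{equation}
where $\langle F_{\lambda,\rm c}\rangle$ and $\langle F_{\lambda,\rm a}\rangle$ are the average fluxes in the pseudo-continua and in the absorption bandpass, respectively, and $F_{\rm c}(\lambda)$ is the local pseudo-continuum interpolated between the blue and red bandpasses of the Lick-style index, integrated over the central bandpass $[\lambda_1,\lambda_2]$.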
In the lower panels of Fig.~\ref{fig:fig2}, we also show measurements for two other CO bandheads in \textit{K} band, namely CO2.32 and CO2.35, which have been far less studied compared to the CO feature at $\sim2.30~\mu$m. Note that the XSGs spectra do not cover the wavelength limits of the CO2.35 index, and thus this sample is not included in the panel showing this index. Also, as can be seen in Fig.~\ref{fig:fig1}, the spectra of one XSG, and for the XSG stack, do not encompass the red bandpass of CO2.32. Hence, the corresponding measurements are not seen in the CO2.32 vs. age panel. Unlike CO2.30, for CO2.32, the M05 models predict the highest line-strengths among all SPS models with solar metallicity ($\sim0.5$~\AA\ higher than E-MILES and C18 models with a Kroupa IMF). E-MILES models with solar metallicity and MW-like IMF match well the median CO2.32 value of the F19 and B18 samples, and C18 models with solar metallicity are very close to F19 and match B18 well, while the scatter of XSGs is large, likely because the feature is at the edge of the available spectral range for these galaxies, making all models compatible with them. However, it should be noted that galaxies in these samples span a wide range in velocity dispersion, and at the highest $\sigma$, galaxies should be better described by a bottom-heavy IMF (see, e.g., \citealt{labarbera2013}). The latter is particularly true for the XSGs, with $\sigma \gtrsim 300$~\kms. It is expected that MW-like IMF models (pink lines) describe low-$\sigma$ galaxies, while bottom-heavy IMF models (purple lines) match high-$\sigma$ galaxies, in particular the XSGs (but see \citealt{alton2018}). However, predictions for a bottom-heavy IMF (purple lines) fall below the observed line-strengths for CO2.32, for all galaxies (but for one XSG). Thus, the mismatch between observations and models seems to be in place also for the CO2.32 index.
Also, the trend of C18 models is more similar to that of E-MILES and M05 models than to CvD12 ones. Another interesting point is that C18 models with [Z/H] = 0.2~dex overpredict the mean CO2.32 line-strengths of F19 and B18 samples. Since the IMF of the most massive galaxies in F19 should be bottom-heavy, one might consider the effect of supersolar metallicity (from C18) and bottom-heavy IMF (from E-MILES) at the same time. In this case, the discrepancy between the C18 models and F19 sample gets even worse as the effect of IMF is larger than the effect of metallicity. The comparison for CO2.35 is similar to that for CO2.30, with solar and MW-like IMF models underpredicting the median values for F19 and B18 samples. However, the metal-rich E-MILES models (dotted-pink line) can reproduce the median value of the F19 galaxies (filled lime square), but as already pointed out, the most massive galaxies in this sample are expected to have a steeper IMF, while bottom-heavy models fall below all the observed data-points for CO2.35 (the bottom-heavy E-MILES model predicts a CO2.35 of $\sim11$~\AA, while the lowest value of CO2.35 in the F19 sample is $\sim11.5$~\AA). We conclude that, in general, the disagreement between observations and models is present in all the \textit{K}-band CO indices.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figs/Figure2.pdf}
\caption{CO indices measured from different galaxy samples, with open (filled) lime squares, cyan triangles, blue pentagons, and orange diamonds corresponding to individual (median) values for galaxies from \citet{francois2019}, \citet{baldwin2018}, \citet{marmol2009}, and \citet{silva2008}, respectively (see labels on the top). Red open (filled) circles are line-strengths for individual spectra (median-stacked spectrum) of the XSGs \citep{labarbera2019}. Observed line-strengths are compared to CO indices measured on different SPS spectra: two sets of E-MILES models with solar metallicity and bimodal IMF of slopes $1.3$, corresponding to a MW-like IMF, and $3.0$, representative of the bottom-heavy IMF of massive galaxies (solid pink and purple lines, respectively); two sets of E-MILES models with super-solar metallicity and bimodal IMF slopes of $1.3$ and $3.0$ (dotted pink and purple lines); one set of CvD12 models with \mbox{$\mbox{[Fe/H]}$}\ = $0$ and Chabrier IMF (dashed violet line); and one set of M05 models with solar metallicity and Kroupa-like IMF (solid black line). The indices are measured on data and models all corrected to the same velocity dispersion of 360~\kms. }
\label{fig:fig2}
\end{figure*}
\subsection{\textit{H}-band CO indices} \label{sec:H_band}
In order to assess whether the mismatch of observed and model CO lines is intrinsic to the \textit{K} band, or whether it is a general issue in the NIR, we measured a whole battery of CO absorptions that populate the \textit{H}-band spectral range (see Fig.~\ref{fig:fig1}). Figure~\ref{fig:fig3} shows the same comparison as in Fig.~\ref{fig:fig2} but for the \textit{H}-band lines. Note that for CO1.58 and CO1.68, the spectra of F19 and B18 are severely contaminated by sky, and thus we do not show the corresponding line-strengths. For the same reason, only two XSGs are shown for CO1.68, and only a few galaxies are plotted in the panels for CO1.60, CO1.64, and CO1.66.
Remarkably, Fig.~\ref{fig:fig3} shows that (i) LB19 galaxies have lower scatter in all plots compared to the CO indices in \textit{K} band, most likely due to the high quality of these data in \textit{H} band, and (ii) these very massive ETGs show very high CO values with respect to the model predictions for all indices. The discrepancy between models and observations for \textit{H}-band CO indices is similar to that found in \textit{K} band. In particular, the median stacked spectrum of the XSGs shows \textit{H}-band CO values $\sim1.3$ times larger than E-MILES models with MW-like IMF and solar metallicity.
For CO1.56, the offsets between the median values of F19 and B18 samples and the reference E-MILES model (pink line) are $\sim0.7$ and $\sim0.6$~\AA. E-MILES models can reproduce the median value of F19 and B18 galaxies for CO1.60 and CO1.66, respectively. However, these indices have been computed only for two galaxies in either sample, and thus we are not able to draw any firm conclusion.
Although the updated version of CvD12 models with solar metallicity (solid violet lines) tend to increase slightly for ages older than 3~Gyr (except for CO1.64), the behaviour of supersolar C18 models (dotted violet lines) is more similar to the E-MILES models, and in the case of CO1.58 it even matches the E-MILES predictions (dotted pink and violet lines). For CO1.56 and CO1.64, C18 models with [Z/H] = 0.2~dex predict the highest values among all models but still they cannot match the median values of galaxies. The mean value of CO1.66 index in the B18 sample is well fitted by a solar metallicity C18 SSP and, interestingly, a C18 SSP with supersolar metallicity matches the CO1.66 index value of the stacked XSGs well. This result is surprising at first glance; however, one should bear in mind that XSGs have bottom-heavy IMFs and, according to E-MILES models, this lowers the predicted CO values. In the CO1.68 panel, surprisingly, C18 models overpredict the line-strengths of the XSGs. In general, E-MILES and CvD12 models are more self-consistent than C18 and M05 models as their deviation with respect to data is similar for all CO indices.
In all panels of Fig.~\ref{fig:fig3}, the M05 models predict the lowest CO strengths, compared to other models. For CO1.56, CO1.58, and CO1.64, M05 models predict a strong increase at ages younger than $3$~Gyr, similar to what is found for CO2.30 in \textit{K} band, while for other indices the opposite behaviour is seen (e.g. CO1.60 and CO1.68). On the contrary, for all CO indices, the XSGs data show, consistently, line-strengths significantly above the model predictions for old ages (except for CO1.66, for which the supersolar-metallicity C18 model matches the stacked spectrum, and for CO1.68, for which C18 models overpredict the line-strengths of the XSGs). Again, this points against a scenario whereby the CO line-strengths are accounted for by young stellar populations with an AGB-enhanced contribution such as in M05 models. As for the \textit{K} band, CvD12 models show a trend for all CO indices to decrease with increasing age, while E-MILES models exhibit a nearly flat behaviour and C18 models a slightly increasing behaviour. For instance, in the case of CO1.58, CO1.66, and CO1.68, for ages younger than $5$~Gyr, CvD12 models predict $\sim0.4$~\AA\ stronger line-strengths than E-MILES models (pink line), while the two models agree for populations with an age of $\sim7$~Gyr. For older ages, CvD12 models always predict lower CO index values than E-MILES models.
The effect of a bottom-heavy IMF in Fig.~\ref{fig:fig3} is shown by the purple lines, corresponding to E-MILES models with a bimodal IMF slope of $\rm \Gamma_b=3.0$. Similarly to the \textit{K} band, a bottom-heavy IMF leads to significantly shallower CO line-strengths than those for a standard stellar distribution. Note also that for most indices, the discrepancy between MW-like IMF models and the XSG stack (filled red circle) is larger than the variations due to a change in IMF slope. In the case of CO1.56 and CO1.58, the discrepancy is about twice the difference between models with different IMF. As already quoted in Sec.~\ref{sec:DCO_IMF}, we emphasize that while a bottom-heavy IMF widens the offset between models and observations, we are not able to match the CO line-strengths with a standard IMF either (except for CO1.66 with the supersolar-metallicity C18 model). In other words, the strong CO features should not be interpreted as evidence against a varying IMF in (massive) galaxies.
From Figs~\ref{fig:fig2} and~\ref{fig:fig3}, a very consistent picture emerges. CO lines throughout the \textit{H} and \textit{K} bands are stronger than the predictions of all the considered state-of-the-art SPS models with varying age, metallicity, and IMF\footnote{At this point, we remind the reader that it is yet unclear why some indices, e.g. CO2.30, increase significantly for young populations in M05 models while they decrease in other models.}. Indeed, in order to reconcile models and observations, additional stellar population parameters should be taken into account, as we detail in the following sections.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figs/Figure3.pdf}
\caption{ The same as in Fig.~\ref{fig:fig2} but for \textit{H}-band CO indices.}
\label{fig:fig3}
\end{figure*}
\section{Effects of varying other stellar population parameters}\label{sec:abundance_agb}
To gain insight into the origin of the discrepancy between observed NIR CO features and model predictions, we scrutinise the effect of varying abundance ratios in the models (Sec.~\ref{sec:abundances}), as well as that of an enhanced contribution from AGB stars (Sec.~\ref{sec:AGB}).
The main results of this analysis are shown in Figs.~\ref{fig:fig4} and \ref{fig:fig5}, for \textit{K}- and \textit{H}-band indices, respectively. The figures are the same as Figs.~\ref{fig:fig2} and \ref{fig:fig3}, but showing only median values of line-strengths for different samples, as well as CO indices for the XSG stack. To avoid crowding the figure, only E-MILES models are plotted, with different arrows showing the effect of varying different parameters in the models, as detailed below. Note, also, that in Fig.~\ref{fig:fig4}, we do not include the panel for D$_{\rm CO}$ (as in Fig.~\ref{fig:fig2}), as it does not add any further information with respect to the CO2.30 index, whose Lick-style definition is more similar to that of the other CO indices.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figs/Figure4.pdf}
\caption{CO indices in \textit{K} band, as measured from different galaxy samples. Filled lime squares and cyan triangles correspond to median values for galaxies from F19 and B18, respectively. Red circles are measurements for the XSG stack (see the text). Observed COs are
compared to predictions of E-MILES SPS models: solid (dotted) pink and purple lines are models with solar (super-solar) metallicity and bimodal IMF of slopes $1.3$ and $3.0$, corresponding to a MW-like and bottom-heavy IMF, respectively. The (small) green arrows indicate the effect of adding a fraction of an intermediate-age E-MILES SSP on top of an old SSP (see the text); the blue arrow is the same but with an enhanced contribution of AGB stars for the intermediate-age component. The black arrow is the equivalent of the green arrow for the M05 models. The solid and dotted khaki (orange) arrows mark the effect of the ``empirical corrections'' to E-MILES SSPs with solar and super-solar metallicities, respectively, for a MW-like (bottom-heavy) IMF. The brown and violet arrows show the effect of \mbox{$\mbox{[$\alpha$/Fe]}$}\ and \mbox{$\mbox{[C/Fe]}$}\ abundance ratios on CO indices. The indices are measured after smoothing all data and models to a velocity dispersion of $360$~\kms. }
\label{fig:fig4}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figs/Figure5.pdf}
\caption{Same as Fig.~\ref{fig:fig4} but for \textit{H}-band indices.}
\label{fig:fig5}
\end{figure*}
\subsection{Abundance ratios}
\label{sec:abundances}
So far, we have considered only models constructed with stars following the chemical pattern of the solar neighbourhood. However, differences in the depth of CO features between models and data might also arise because of variations in \mbox{$\mbox{[$\alpha$/Fe]}$}, or other elemental abundances, with respect to a scaled-solar composition. According to \citet{eftekhari2021} (see column "d" of their figures~4 and~6), the maximum change in the strength of CO indices due to elemental abundance variations comes from carbon and $\alpha$-enhancements. Therefore, in Figs.~\ref{fig:fig4} and \ref{fig:fig5}, we show the effect of $\alpha$-enhancement, based on $\alpha$-enhanced E-MILES models with an age of $13$~Gyr (brown arrows), and that of an enhancement in carbon abundance, based on CvD12 models with an age of $13.5$~Gyr and a Chabrier IMF (see the violet arrows). The violet and brown arrows correspond to variations of $\delta$\mbox{$\mbox{[C/Fe]}$}\ = $+0.15$~dex and $\delta$\mbox{$\mbox{[$\alpha$/Fe]}$}\ = $+0.4$, respectively, which can be representative of massive ETGs, such as those of the XSG sample (see, e.g. \citealt{labarbera2017}). Note that we have also used the updated version of CvD12 models to see the effect of C-enhancement. Since the size of the violet arrow did not change with respect to that of CvD12, we do not include it in the plots to avoid crowding the figure. Moreover, since we are interested in the relative response of models to variations of \mbox{$\mbox{[C/Fe]}$}\ and \mbox{$\mbox{[$\alpha$/Fe]}$}, we shifted the starting point of the violet arrow to the end point of the brown arrow.
As expected, increasing \mbox{$\mbox{[C/Fe]}$}\ results in stronger CO lines in all cases, pointing in the right direction to match the data. However, the effect is counteracted by that of $\alpha$-enhancement (except for CO1.56, where the effect of $\alpha$-enhancement is negligible), so that, for all the \textit{K}-band CO indices, the brown and violet arrows tend to cancel each other for the adopted abundance variations in the CvD12 models. Hence, the picture emerging from Fig.~\ref{fig:fig4} is similar to that of Fig.~\ref{fig:fig2}. For CO2.30 and CO2.35, even considering the effect of varying abundance ratios, the models are not able to match the observations, especially in the case of a bottom-heavy IMF. For CO2.32, the median index values for the F19 and B18 samples can be matched, but only with models having a MW-like IMF.
For the \textit{H}-band CO line-strengths (Fig.~\ref{fig:fig5}), carbon abundance has a larger effect than \mbox{$\mbox{[$\alpha$/Fe]}$}, compared to \textit{K}-band indices, i.e. the relative size of violet vs. brown arrows in Fig.~\ref{fig:fig5} is larger than in Fig.~\ref{fig:fig4}. This shows, once again, the importance of studying lines from the same chemical species (CO) at different wavelengths (\textit{H} and \textit{K}). Even so, summing up the violet and brown arrows in Fig.~\ref{fig:fig5} does not allow us to reach the high CO values of massive XSGs. For instance, in the case of CO1.58 and CO1.60, summing up the effect of \mbox{$\mbox{[C/Fe]}$}\ and \mbox{$\mbox{[$\alpha$/Fe]}$}\ would result in a (modest) increase of $\delta$(CO1.58)~$\sim0.1$~\AA\ and $\delta$(CO1.60)~$\sim0.2$~\AA, respectively, these variations being far smaller than the deviations between MW-like IMF E-MILES models and the XSGs stack, corresponding to $\sim0.9$~\AA\ and $\sim0.5$~\AA\ for CO1.58 and CO1.60, respectively. Note also that indices, such as CO1.56, for which the effect of abundance ratios is smaller (compared to, e.g., the effect of varying the IMF), do not show smaller deviations of data compared to models. In other words, even a qualitative comparison of data and model predictions seems to point against the effect of abundance ratios as the main culprit of the CO mismatch problem. However, one should bear in mind that the effect of abundance ratios on SSP models relies completely on theoretical stellar spectra, as well as molecular/atomic line lists, that are notoriously affected by a number of uncertainties, particularly in scarcely explored spectral regions, such as the NIR. Hence, we cannot exclude that the effect of \mbox{$\mbox{[$\alpha$/Fe]}$}\ and \mbox{$\mbox{[C/Fe]}$}\ is underestimated by current SPS models. Alternatively, one should seek other possible explanations, as discussed in the following section.
\subsection{Intermediate-age stellar populations}\label{sec:AGB}
Since stars in the AGB phase contribute most to the \textit{K}-band luminosity of stellar populations with ages between 0.3 and 2~Gyr ($\sim$30\% in E-MILES models, with \citet{maraston2005} models having the largest contribution from AGB stars among all models, i.e. $>$70\%), it has been suggested that the deep CO band-heads of ETGs in the \textit{K} band are due to the presence of AGB stars, from intermediate-age stellar populations \citep[e.g.][]{mobasher1996, mobasher2000, james1999, davidge2008, marmol2009}. It is important to assess, in a quantitative manner, if this hypothesis can account for the observed high CO line-strength values. To this effect, we first fit the observed XSG stacked spectrum with different SSP models, assuming a non-parametric star formation history (SFH), and then simulate the effect of intermediate-age populations by constructing ad-hoc two-component models.
To fit the XSG stack, we use the software {\sc STARLIGHT} \citep{cid2005}, a full spectral fitting code that allows us to fit a galaxy spectrum with a generic linear combination of input model spectra, i.e. the so-called ``base'' spectra. First, we use scaled-solar E-MILES SSPs as a base, including models with different ages and metallicities, and a Kroupa-like IMF~\footnote{Including bottom-heavy SSPs does not improve the {\sc STARLIGHT} fits significantly, as expected by the fact that CO line-strengths get weaker for $\rm \Gamma_b=3$ (see Sec.~\ref{sec:spectral_indices}).}. Note that this approach does not make any assumption about the SFH, which is treated in a non-parametric way. Hence, the effect of young populations is taken into account in the most general manner, without any restriction from the optical range. The {\sc STARLIGHT} fitting was carried out in the \textit{H} band, as CO absorptions dominate this spectral range. Figure~\ref{fig:fig6} compares the stacked spectrum of XSGs (black line), with the best-fitting composite stellar population model of E-MILES SSPs (pink line). The best-fitting model shows deviations at a level of $\sim2$\% from the stacked spectrum in the region of the CO bandpasses. Note that this is similar to what was found when comparing individual E-MILES SSPs to the XSGs' stack (see the fiducial model, plotted as a pink line, in Fig.~\ref{fig:fig1}). Since in the {\sc STARLIGHT} fits, there is no constraint on the age of the best-fitting SSPs, these results show that young populations do not help to resolve the tension between observations of CO lines and model predictions.
Based on near-ultraviolet (NUV) photometric data, \citet{yi2005} found that roughly $15$\% of bright ETGs at z $<0.13$ show signs of young ($\lesssim1$~Gyr) populations at the level of $1$\%--$2$\% mass fractions. Also, \citet{schiavon2007} generated two-component models, showing that a mass fraction of the young component of $\sim0.5$\%--$1$\% provides a reasonably good match to the blue indices of nearby ETGs. This result has been recently confirmed, based on a combination of NUV and optical absorption lines for the XSGs, by \citet{salvador2021}, who found that the centre of massive ETGs are populated by a $0.7$\% mass fraction of stars formed within the last $1$~Gyr.
To test this scenario, we contaminated the light of an old ($10$~Gyr) E-MILES SSP by a small fraction ($3$\% in mass) of an intermediate-age ($1.5$~Gyr) E-MILES SSP. The effect is shown for a solar metallicity and MW-like IMF population by the green arrows at $\sim10$~Gyr in Figs.~\ref{fig:fig4} and~\ref{fig:fig5}. Indeed, the arrows are small for all CO indices and, for CO1.58, CO1.60, and CO1.66, they also point in the ``wrong'' direction, i.e. that of decreasing (rather than increasing) model line-strengths. However, when considering an E-MILES model with age of $1.5$~Gyr, solar metallicity, and a MW-like IMF, AGB stars contribute $\sim1/3$ of its bolometric luminosity, while such fraction is larger for M05 models. Hence, one might attribute the small effect of the intermediate-age population to the weaker contribution of AGB stars in E-MILES young SSP models, compared to M05. To address this issue, we used the AGB-enhanced version of E-MILES SSPs constructed by \citet{rock2017}. They calculated an AGB-enhanced E-MILES model of $1.5$~Gyr, solar metallicity and Kroupa-like IMF by using ``partial SSPs'', i.e. computing two SSPs, one by integrating stars along the isochrone without the AGB phase, and the other one by integrating only AGB stars, and combining the two models by assigning $70$\% luminosity-weight to the model made up of AGB stars only. This synthesised AGB-enhanced stellar population is added on top of an old population of $10$~Gyr assuming a $3\%$ mass fraction. The effect on CO line-strengths is shown by the blue arrows at $10$~Gyr in Figs~\ref{fig:fig4} and \ref{fig:fig5}. Although the blue arrows are larger than the green ones, they are not large enough to fit the median values for the F19 and B18 samples, except for CO2.35, CO1.60, and CO1.66.
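As a minimal sketch of the two-component construction described above (the $M/L$ values below are illustrative placeholders, not actual E-MILES predictions), the mass fraction of the young component must first be converted into a light weight before co-adding the spectra:

```python
import numpy as np

def two_component_spectrum(f_old, f_young, mass_frac_young,
                           ml_old=2.5, ml_young=0.8):
    """Co-add an old and an intermediate-age SSP spectrum by MASS fraction.

    f_old, f_young : normalised SSP spectra (flux per unit luminosity)
    mass_frac_young: mass fraction of the young component (e.g. 0.03)
    ml_old, ml_young: illustrative M/L ratios (placeholders, solar units)
    """
    # luminosity contributed per unit total mass by each component
    l_young = mass_frac_young / ml_young
    l_old = (1.0 - mass_frac_young) / ml_old
    w_young = l_young / (l_young + l_old)  # light weight of the young SSP
    return (1.0 - w_young) * f_old + w_young * f_young, w_young

# A 3% mass fraction translates into a considerably larger light weight,
# because the young population has a lower M/L.
_, w = two_component_spectrum(np.ones(10), np.ones(10), 0.03)
```

For the assumed placeholder $M/L$ values, a 3\% mass fraction corresponds to a light weight of roughly 9\%, which is why even a small young component can, in principle, leave a measurable imprint on the integrated spectrum.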
In the case of CO2.35 (see Fig~\ref{fig:fig4}), the median value of the F19 sample could be matched with a metal-rich and MW-like IMF E-MILES model, if one assumes that the effects of the blue, violet, and brown arrows (i.e. AGBs + \mbox{$\mbox{[C/Fe]}$}\ + \mbox{$\mbox{[$\alpha$/Fe]}$}) sum up to $\sim0$. For CO1.60 and CO1.66, the comparison is limited by the small number of galaxies available for the F19 and B18 samples. As a further test, we added a $3$\% mass fraction of a $1.5$~Gyr M05 SSP with solar metallicity and Kroupa-like IMF to an old ($10$ Gyr) M05 SSP. The effect is shown by the black arrows at $\sim10$~Gyr in Figs~\ref{fig:fig4} and \ref{fig:fig5}. The effect of adding an emphasized-AGB intermediate-age population on top of an old one turns out to be negligible, and for the CO1.60 and CO1.66 indices, it also goes in the opposite direction compared to the data. Note also that there is no way the AGB-enhanced models can consistently match the strong CO line-strengths of the XSGs, in the \textit{H} and \textit{K} band.
As a final remark, we emphasize that our analysis does not rule out the presence of intermediate-age populations in ETGs, but it points against a scenario where the observed strong CO absorptions are mainly due to the presence of intermediate-age populations (i.e. AGBs) in these galaxies.
\section{An empirical modelling approach}\label{sec:empirical_approach}
\subsection{Searching for stars that match the strong CO lines}\label{sec:fitting}
In order to identify the stars that might be responsible for the strong CO absorption observed in massive ETGs, we fitted the stacked spectrum of XSGs with {\sc STARLIGHT} (see above), using as an input base all 180 individual stars of the IRTF library that are used to construct E-MILES models in the NIR. As for the fitting with E-MILES SSPs (see Sec.~\ref{sec:AGB}), we fitted only the \textit{H}-band region, where the signal from CO features is prominent with respect to that of other absorptions. The best-fit mixture of IRTF stars is shown in Fig.~\ref{fig:fig6}, as a lime-coloured curve. The relative residuals between the observed and model spectrum are shown in the bottom panel of the same figure. By comparing the residuals for the best fit of IRTF stars with those for E-MILES SSPs (pink curve), it can be seen that using the stars significantly improves the fit to the observed spectrum, with residuals in the CO lines at the level of $\sim1$\%, i.e. about half of those for the E-MILES best-fitting model. Although some improvement in the fitting may actually be expected when employing the IRTF stars, given that this band is populated with so many CO absorptions, it is still remarkable how much smaller the obtained residuals are. Indeed, no improvement at all would be achieved if the input stellar library completely lacked the stars responsible for CO absorption. {\sc STARLIGHT} also returns the weight (in light) of each star in the best-fit mixture. Surprisingly, we found that only $4$ (out of $180$) stars received a significant weight ($>0.5\%$) in the best-fit spectrum, namely HD~219734, HD~10465, HD~36003, and HD~187238. The light-weighted contribution of these stars from {\sc STARLIGHT} and their main stellar parameters from \citet{rock2016} are summarized in Tab.~\ref{tab:tab1}. HD~36003 is a dwarf (low-mass) star, while the other three are giants (evolved stars). 
Hereafter, we refer to these stars as the \textit{H}-band best-fitting stars.
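At its core, a fit of this kind searches for a non-negative mixture of template spectra that best reproduces the observed stack. {\sc STARLIGHT}'s actual algorithm is far more elaborate (it also fits kinematics and extinction, using simulated annealing); the sketch below only illustrates the underlying idea with non-negative least squares on hypothetical, noiseless toy data:

```python
import numpy as np
from scipy.optimize import nnls

# hypothetical template matrix: each column is one stellar spectrum,
# sampled on the same pixel grid as the "observed" spectrum
rng = np.random.default_rng(0)
n_pix, n_stars = 500, 6
A = rng.uniform(0.5, 1.5, size=(n_pix, n_stars))
true_w = np.array([0.7, 0.0, 0.3, 0.0, 0.0, 0.0])
b = A @ true_w                        # noiseless mock observation

weights, resid = nnls(A, b)           # non-negative least squares
light_frac = weights / weights.sum()  # fractional light contributions
```

In this idealised case only the templates that actually contribute light receive non-zero weights, mirroring how only four IRTF stars received significant weight in the real fit.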
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figs/Figure6.pdf}
\caption{The upper panel shows the stacked spectrum of XSGs (black) in \textit{H} band, together with the best-fitting spectra obtained with E-MILES SSPs (pink) and IRTF stars (lime), running the spectral fitting code {\sc STARLIGHT} (see the text). CO features, with central absorptions and their pseudo-continua, are plotted as grey and orange shaded areas. All spectra have been normalised to their median flux. The lower panel shows the relative residuals of the stack with respect to each best-fitting spectrum. Notice the improvement in the matching of CO features when using IRTF stars, despite the fact that only four stars received a non-negligible weight in the {\sc STARLIGHT} best-fit mixture (see the text).}
\label{fig:fig6}
\end{figure*}
\strutlongstacks{T}
\begin{table}
\centering
\caption{Properties of the four IRTF stars that best fit the XSG stacked spectrum in \textit{H} band. Columns~1 and~2 give the name of each star and its weight in light in the best-fit {\sc STARLIGHT} model (see the text). The effective temperature, surface gravity, and metallicity of the stars are given in Columns~3, 4, and 5, respectively.}
\label{tab:tab1}
\begin{tabular}{lcccr}
\hline\hline
Star & Weight & ${\rm T}_{\mbox{\scriptsize eff}}$ & $\log g$ & \mbox{$\mbox{[Fe/H]}$} \\
& ($\%$) & ({\small K}) & (dex) & (dex) \\
(1) & (2) & (3) & (4) & (5) \\
\hline
HD 219734 & 43 & 3730 & 0.9 & 0.27 \\
HD 36003 & 18 & 4465 & 4.61 & 0.09\\
HD 187238 & 17 & 4487 & 0.8 & 0.177 \\
HD 10465 & 14 & 3781 & 0.5 & -0.458 \\
\hline
\end{tabular}
\end{table}
Since the XSG stack is best fit by only four stars, these stars have to be ``special'' somehow, and their properties might help us to shed light on the nature of the CO absorptions. To address this point, we measured the line-strengths of CO indices for the spectra of all IRTF stars, and marked the position of the \textit{H}-band best-fitting stars in the CO vs. effective temperature (${\rm T}_{\mbox{\scriptsize eff}}$) plots in Fig.~\ref{fig:fig7}. In this figure, different colours show different types of stars, according to the classification provided in table~2 of \citet{rock2015}, with blue and orange colours corresponding to AGB and M-dwarf stars, respectively. The five carbon stars in the IRTF library are shown in pink. Note that this classification is only available for stars cooler than $3900$~{\small K}. The remaining IRTF stars are plotted in grey. Figure~\ref{fig:fig7} suggests that those stars with ${\rm T}_{\mbox{\scriptsize eff}}$\ $< 5500$~{\small K} that are not classified as carbon stars or M-dwarfs seem to split into two sequences. Most of the stars trace a well-defined, narrow sequence, which we call the ``normal'' CO sequence throughout the paper, where the star HD~219734 (one of the \textit{H}-band best-fitting stars, see above) can actually be found, in {\it all} CO plots. Along this sequence, the CO line-strengths increase with decreasing ${\rm T}_{\mbox{\scriptsize eff}}$. However, some stars do not fall onto this sequence, but form a sort of ``CO-strong'' sequence (where two of the \textit{H}-band best-fitting stars, namely HD~10465 and HD~187238, can be found in {\it all} CO plots)\footnote{Note that for CO1.58 and CO1.64, a double-branch sequence is not so clear. However, this might result from some sky residuals in the wavelength range of the CO lines, or some contamination of the CO lines from different absorbers. 
According to panel (a) in figure~A2 of \citet{eftekhari2021}, the central bandpass of CO1.58 is severely contaminated by telluric absorption lines and its red bandpass is contaminated by a strong emission line at $\sim15844$~\AA. Moreover, a magnesium line contributes to this absorption feature. In the same figure, in panel (b), the presence of two strong emission lines can be seen in both blue and red bandpasses of the CO1.64 index. The central feature also has some contribution from atomic silicon lines.}. To guide the eye, we performed a linear fit to the CO-strong sequence (see App.~\ref{sec:appendixA} for details), and marked such a sequence with black segments in Fig.~\ref{fig:fig7}. In all panels, we show the CO line-strengths for the XSG stack as horizontal red-dashed lines. These lines intersect the locus of stars at an effective temperature of about $4000$~{\small K}. This is the temperature where stars in the CO-strong sequence deviate the most from those in the normal sequence.
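The CO line-strengths used throughout are classical index measurements: a pseudo-continuum is drawn through the mean fluxes of a blue and a red bandpass, and the flux deficit is integrated over the central bandpass. A minimal sketch follows; the bandpass limits and toy spectrum are arbitrary illustrations, not the actual CO index definitions:

```python
import numpy as np

def index_ew(wave, flux, blue, centre, red):
    """Equivalent width of an absorption index (same units as wave).

    blue, centre, red : (lo, hi) bandpass limits. The pseudo-continuum is a
    straight line through the mean fluxes of the blue and red bandpasses,
    evaluated at their mean wavelengths; the EW integrates the flux deficit
    over the central bandpass.
    """
    def band_mean(lo, hi):
        m = (wave >= lo) & (wave <= hi)
        return wave[m].mean(), flux[m].mean()
    (wb, fb), (wr, fr) = band_mean(*blue), band_mean(*red)
    slope = (fr - fb) / (wr - wb)
    m = (wave >= centre[0]) & (wave <= centre[1])
    cont = fb + slope * (wave[m] - wb)
    dw = np.gradient(wave[m])
    return np.sum((1.0 - flux[m] / cont) * dw)

# toy check: a box-shaped, 50%-deep line over 10 wavelength units
wave = np.linspace(0.0, 100.0, 1001)
flux = np.where((wave >= 45.0) & (wave <= 55.0), 0.5, 1.0)
ew = index_ew(wave, flux, blue=(10.0, 20.0), centre=(45.0, 55.0), red=(80.0, 90.0))
```

For the box-shaped toy line, the measured EW is close to depth times width, as expected.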
We also attempted to fit the spectra of the \textit{H}-band best-fitting stars with the MARCS \citep{gustafsson2008} library of very cool stellar spectra, trying to extract additional information on their $\alpha$ abundance ratios. However, the best-fitting models do a poor job of reproducing the spectral shape and CO indices of such cool stars; therefore, the derived parameters are less reliable. Here, we only mention that the results point to a lower $\alpha$ abundance for stars in the CO-strong sequence compared to the one in the normal sequence (see App.~\ref{sec:appendixB} for details of this experiment).
\begin{figure*}
\centering
\includegraphics[width=0.87\linewidth]{figs/Figure7.pdf}
\caption{Line-strengths of CO indices for IRTF stars as a function of effective temperature. Indices have been measured on spectra smoothed to the common resolution of $\sigma=360$~\kms. Blue, orange, and pink colours correspond to AGB, M-dwarf and carbon stars at ${\rm T}_{\mbox{\scriptsize eff}}$\ $< 3900$~{\small K}, respectively. Remaining stars are shown in grey. The four stars that contribute most to the net flux of the {\sc STARLIGHT} best-fitting model (see the text) are marked with filled star symbols of different colours (see labels in the top--right panel). Black lines are the linear fit to the CO-strong sequence of giant stars, with ${\rm T}_{\mbox{\scriptsize eff}}$ $< 5500$~{\small K} (see the text for details). In each panel, the CO line-strength for the XSG stack is marked with a red-dashed line. The light green arrows mark the increase caused by a \mbox{$\mbox{[C/Fe]}$}\ enhancement of $0.25$~dex, from the theoretical stars of \citet{knowles19_Thesis}, all having solar metallicity and $\log g$\ = $1.5$.}
\label{fig:fig7}
\end{figure*}
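As noted in the figure caption, all indices are measured on spectra broadened to a common resolution of $\sigma=360$~\kms. A minimal sketch of such velocity broadening follows, assuming a Gaussian kernel applied on a uniform log-wavelength grid (a simplification; dedicated tools handle instrumental resolution and edge effects more carefully):

```python
import numpy as np

def smooth_to_sigma(wave, flux, sigma_kms):
    """Broaden a spectrum with a Gaussian of width sigma_kms (km/s).

    On a uniform log-lambda grid, a velocity broadening is a convolution
    with a fixed-width Gaussian kernel.
    """
    c = 299792.458                       # speed of light, km/s
    loglam = np.log(wave)
    step = np.median(np.diff(loglam))
    # resample to a uniform log-lambda grid
    loglam_u = np.arange(loglam[0], loglam[-1], step)
    flux_u = np.interp(loglam_u, loglam, flux)
    sigma_pix = (sigma_kms / c) / step   # kernel width in pixels
    half = int(np.ceil(4 * sigma_pix))
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_pix) ** 2)
    kernel /= kernel.sum()
    smoothed = np.convolve(flux_u, kernel, mode='same')
    return np.exp(loglam_u), smoothed
```

A featureless (constant) spectrum is left unchanged away from the edges, which is a quick sanity check on the kernel normalisation.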
As noted above, two stars that were assigned the highest weight in the {\sc STARLIGHT} best-fitting spectrum (HD~10465 and HD~187238; see Tab.~\ref{tab:tab1}) occupy the CO-strong sequence, while the other two (HD~219734 and HD~36003) follow the normal sequence. This suggests that in order to match the observed spectrum of ETGs, a significant contribution from the CO-strong sequence might be required. However, as a general caveat, one should notice that massive ETGs might contain different stars than the few CO-strong-sequence stars in the IRTF library. An interesting point is that the two stars in the CO-strong sequence have almost the same ${\rm T}_{\mbox{\scriptsize eff}}$\ as the two other stars. This may explain why SPS models actually fail to reproduce CO features. A nominal SSP model averages the available stellar spectra along the isochrones. Since stars in the normal sequence are more numerous than those in the CO-strong sequence, the contribution from the latter is diluted in the synthesised models. The {\sc STARLIGHT} fitting results suggest, instead, that it should be the other way around, with a large weight on the CO-strong sequence. In order to remedy this situation, we constructed ad-hoc SPS models, as detailed below.
\subsection{Empirical corrections to E-MILES models}\label{sec:empirical_model}
We modified E-MILES stellar population models by shifting stars in the normal CO sequence to those in the upper (CO-strong) one. The procedure is described in detail in App.~\ref{sec:appendixA}. In short, we systematically separated giant stars into the two sequences (according to all the available CO indices), and for stars that share similar stellar parameters, we divided the mean spectrum for the CO-strong sequence by that for the normal sequence, to obtain a (multiplicative) differential response of ``CO-enhancement'', as illustrated in Fig.~\ref{fig:fig8} (see light through dark green spectra). We point out that this procedure is possible because a number of stars are in the upper sequence for all the CO indices, i.e. there is actually a population of stars with strong lines that can be singled out from the normal sequence, and which extends over a range of temperatures. The responses obtained in this way were interpolated at different temperatures, and applied to the spectra of giant stars in the normal sequence. New SSP models were synthesised accordingly, using the ``empirically corrected'' giant stars, for an age of $11$~Gyr, \mbox{$\mbox{[M/H]}$}\ = $+0.06$ and $+0.26$, and $\Gamma_{b} = 1.3$ and $3.0$, respectively, over a wavelength range from $15400$ to $23800$~\AA.
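The correction step above can be sketched as follows: build the multiplicative response as a ratio of mean spectra, interpolate it in temperature, and apply it to a normal-sequence giant. The arrays and the two-node temperature grid are hypothetical; the actual procedure (App.~\ref{sec:appendixA}) works on real IRTF spectra:

```python
import numpy as np

def co_response(flux_strong, flux_normal):
    """Multiplicative CO-enhancement response: ratio of the mean spectrum of
    CO-strong stars to that of normal-sequence stars (common pixel grid)."""
    return np.mean(flux_strong, axis=0) / np.mean(flux_normal, axis=0)

def apply_response(flux_star, teff_star, teff_nodes, responses):
    """Interpolate tabulated responses to the star's Teff, pixel by pixel,
    and apply them multiplicatively to a normal-sequence giant spectrum."""
    responses = np.asarray(responses)          # shape: (n_nodes, n_pix)
    resp = np.array([np.interp(teff_star, teff_nodes, responses[:, i])
                     for i in range(responses.shape[1])])
    return flux_star * resp

# toy example: responses tabulated at two temperatures, star in between
teff_nodes = [3500.0, 4500.0]                  # must be increasing for np.interp
responses = [np.full(4, 1.2), np.full(4, 1.0)]
corrected = apply_response(np.ones(4), 4000.0, teff_nodes, responses)
```

The corrected giant spectra then replace the original ones when the SSP integral along the isochrone is re-evaluated.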
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{figs/Figure8.pdf}
\caption{Comparison of empirical CO responses (light through dark green curves) with \mbox{$\mbox{[C/Fe]}$}\ (violet), \mbox{$\mbox{[O/Fe]}$}\ (blue), and \mbox{$\mbox{[$\alpha$/Fe]}$}\ (brown) responses from CvD12 models. The light to dark green spectra are defined as the ratio between the mean spectra of stars in the CO-strong and normal CO sequences, for stars of different temperatures. The index definitions are overplotted as grey (central bandpasses) and orange (pseudo-continua bandpasses) areas.
The left and right panels refer to \textit{H} and \textit{K} band, respectively. All spectra have been smoothed to $\sigma=360$~\kms. }
\label{fig:fig8}
\end{figure*}
We measured the CO line-strengths on the empirically corrected models. In Figs.~\ref{fig:fig4} and \ref{fig:fig5}, we show the variation of CO indices, compared to the reference E-MILES models, as khaki and orange arrows, for IMF slopes of $\Gamma_{b} = 1.3$ and $3.0$, respectively. Solid and dotted arrows correspond to solar and super-solar metallicity models, respectively. The empirically-corrected SSPs have significantly larger CO indices. In the case of CO2.30, CO1.60, and CO1.66, the khaki arrows would allow one to fit the stacked spectrum of XSGs. However, since the IMF has been shown to be bottom-heavy for these galaxies, one should look at the orange arrows, whose size is not large enough to match the data. For some \textit{H}-band indices, i.e. CO1.56, CO1.60, and CO1.66, the khaki arrows predict even larger CO values than the median line-strengths for the samples of B18 and F19. In the case of CO1.58, CO1.64, and CO1.68, although the empirically corrected models improve the predictions of CO indices, they cannot match those of massive ETGs (even for a MW-like IMF).
We note that the dotted arrows have approximately the same size as the solid arrows, i.e. the effect of the empirical correction does not depend on metallicity. Perhaps this is not surprising, as stars of the IRTF stellar library are biased towards solar metallicity~(see \citealt{rock2017} and references therein). Also, khaki and orange arrows have approximately the same size, implying that the empirical CO response is not coupled to that of the IMF, as is the case for Na-enhanced E-MILES models (see~\citealt{labarbera2017} for details). This stems from the fact that the CO correction is only performed on giant stars, while a bottom-heavy IMF increases the number of dwarf, relative to giant, stars.
\subsection{What is driving the empirical corrections?}\label{sec:drivers}
The empirically corrected models provide closer predictions to observed CO indices compared to E-MILES models, although still far from a perfect match. In order to make further progress, we tried to understand the physical drivers behind the empirical corrections, to possibly tune the models further.
Figure~\ref{fig:fig8} shows the CO-response functions of five selected stars, with different ${\rm T}_{\mbox{\scriptsize eff}}$\ but otherwise similar stellar parameters, that we used to construct the empirically corrected models (see App.~\ref{sec:appendixA} for details). The violet, blue, and brown lines in the figure plot responses corresponding to a carbon enhancement of $0.15$~dex, an oxygen enhancement of $0.2$~dex, and an $\alpha$ enhancement of $0.2$~dex, based on CvD12 SSP models (with an age of $13.5$~Gyr, \mbox{$\mbox{[Fe/H]}$}\ = $0.0$, and Chabrier IMF), respectively. Indeed, the empirical responses look more similar to those for carbon enhancement, although the differences in the depth of CO indices are quite significant. On the contrary, the response to oxygen enhancement is very different from the empirical responses, being almost flat in the regions of CO absorptions. The effect of $\alpha$ enhancement is even more dissimilar, as it shows bumps in the regions of the CO central passbands, consistent with the fact that CO line-strengths anti-correlate with \mbox{$\mbox{[$\alpha$/Fe]}$}\ (see Sec.~\ref{sec:abundances}).
Therefore, we speculate that the empirical corrections might be reflecting the effect of carbon enhancement on (cool) giant stars. To further test this hypothesis, the \mbox{$\mbox{[C/Fe]}$}\ of stars in the CO-strong and normal sequences should be compared. Unfortunately, carbon abundances for giant stars in the IRTF library have not been measured yet. Hence, we relied on theoretical C-enhanced stars from \citet{knowles19_Thesis} to see if an enhancement in carbon abundance might explain the difference between CO line-strengths in the two CO sequences. In Fig.~\ref{fig:fig7}, we show the effect of a \mbox{$\mbox{[C/Fe]}$}\ enhancement of $0.25$~dex on theoretical cool giant stars with light-green arrows. Interestingly, the size of the arrows increases with decreasing ${\rm T}_{\mbox{\scriptsize eff}}$. However, one should bear in mind that theoretical stellar spectra are rather uncertain for very cool stars, and these models stop at $3500$~{\small K}. Indeed, we may expect that this trend continues at lower ${\rm T}_{\mbox{\scriptsize eff}}$, with CO absorptions getting even stronger. Focusing only on the difference due to enhancement and neglecting the starting point, we see that the arrows are comparable to the CO line-strengths difference between normal and CO-strong sequence stars. For CO1.58, CO1.60 and CO2.32 indices, the arrows can bring the stars from the normal to the CO-strong sequence, while for other indices, the arrows can explain only part of the difference between the two sequences, with a larger gap for cooler stars. For instance, the arrow at $3500$~{\small K} for CO1.68 is too small, and it is unable to reach a group of three stars, with ${\rm T}_{\mbox{\scriptsize eff}}$\ $\sim3500$~{\small K} and CO1.68 as high as $\sim4.4$~\AA.
As shown in Figs.~\ref{fig:fig4} and~\ref{fig:fig5}, increasing \mbox{$\mbox{[$\alpha$/Fe]}$}\ abundance causes the CO absorptions to weaken. However, this prediction, which is qualitatively similar in both E-MILES $\alpha$-enhanced and CvD12 models, is in contrast with predictions of A-LIST SSP models \citep{ashok2021}. According to their figure~6, CO absorptions get stronger by increasing \mbox{$\mbox{[$\alpha$/Fe]}$}. A-LIST provides fully empirical SSP model predictions, based on the APOGEE stellar library, while $\alpha$-enhanced E-MILES SSPs are semi-empirical models (i.e. the relative effect of $\alpha$-enhancement is estimated through the aid of theoretical star spectra). Unfortunately, \mbox{$\mbox{[$\alpha$/Fe]}$}\ abundance ratios have not been measured for all IRTF stars. However, we further assessed the effect of \mbox{$\mbox{[$\alpha$/Fe]}$}\ on CO indices by looking at elemental abundance ratios for the APOGEE stellar library, as computed with ASPCAP. To this effect, we selected a set of APOGEE stars as described in Sec.~\ref{sec:libraries}. We attempted to single out the effect of surface gravity, metallicity, and carbon enhancement by only selecting stars within a narrow range of stellar parameters ($\rm 0.35 <$ $\log g$\ $< 0.55$, $-0.1 <$ \mbox{$\mbox{[M/H]}$}\ $< 0.1$, and $\rm 0 <$ \mbox{$\mbox{[C/Fe]}$}\ $< 0.05$). Since the wavelength coverage of APOGEE spectra is divided across three chips with relatively narrow ranges (blue chip from $1.51$ to $1.581~\mu$m, green chip from $1.585$ to $1.644~\mu$m, and red chip from $1.647$ to $1.7~\mu$m), we were able to measure line-strengths for only three CO indices (i.e. CO1.56, CO1.60, and CO1.66, respectively). Figure~\ref{fig:fig9} shows CO line-strengths for APOGEE stars as a function of ${\rm T}_{\mbox{\scriptsize eff}}$, $\log g$, and \mbox{$\mbox{[M/H]}$}\ (left, middle, and right panels, respectively), with stars being colour-coded according to their \mbox{$\mbox{[$\alpha$/Fe]}$}. 
According to the figure, stars with higher CO do not show a higher value of \mbox{$\mbox{[$\alpha$/Fe]}$}. In many cases, stars with high CO seem to have lower (rather than higher) \mbox{$\mbox{[$\alpha$/Fe]}$}. In the plot of CO1.60 vs. ${\rm T}_{\mbox{\scriptsize eff}}$, only one star (in red) has high $\alpha$-enhancement and high CO1.60. For CO1.66, a correlation (anti-correlation) of the index with surface gravity (metallicity) is actually observed. Note that these results are in disagreement with predictions from the A-LIST models of \citet{ashok2021}, with the origin of such disagreement remaining unclear.
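The narrow parameter cuts used above amount to simple boolean selection on the ASPCAP catalogue. A sketch with a hypothetical three-star catalogue (the field names and values are illustrative, not the actual APOGEE data model):

```python
import numpy as np

# hypothetical ASPCAP parameter catalogue: (log g, [M/H], [C/Fe]) per star
cat = np.array([(0.40, 0.00, 0.02),   # passes all cuts
                (1.20, 0.00, 0.02),   # fails the log g cut
                (0.50, 0.30, 0.02)],  # fails the [M/H] cut
               dtype=[('logg', 'f8'), ('m_h', 'f8'), ('c_fe', 'f8')])

# narrow cuts used to isolate the effect of [alpha/Fe] on the CO indices
sel = ((cat['logg'] > 0.35) & (cat['logg'] < 0.55) &
       (cat['m_h'] > -0.1) & (cat['m_h'] < 0.1) &
       (cat['c_fe'] > 0.0) & (cat['c_fe'] < 0.05))
selected = cat[sel]
```

Only stars surviving all three cuts enter the CO vs. stellar-parameter plots of Fig.~\ref{fig:fig9}.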
\begin{figure*}
\centering
\includegraphics[width=0.89\linewidth]{figs/Figure9.pdf}
\caption{CO indices measured on the spectra of APOGEE stars as a function of stellar parameters, namely ${\rm T}_{\mbox{\scriptsize eff}}$\ (left), $\log g$\ (middle), and \mbox{$\mbox{[M/H]}$}\ (right). The stars are coloured according to \mbox{$\mbox{[$\alpha$/Fe]}$}, as shown from the colourbar on the top. }
\label{fig:fig9}
\end{figure*}
Overall, our analysis shows that it is very unlikely that $\alpha$ enhancement is the missing piece of the CO puzzle. On the other hand, the effect of carbon on low-temperature giant stars seems to be the most likely candidate to explain the strength of CO lines. However, the predictions from theoretical models should be improved, and extended to stars with lower temperatures ($\lesssim3500$~{\small K}) at high metallicity, in order to draw firm conclusions.
\section{Discussion}\label{sec:discussion}
It is instructive to look at the mismatch between models and data using CO-CO diagrams, i.e. plotting one CO index against CO line-strengths for other features. In Fig.~\ref{fig:fig10}, we show two such diagrams, based on three CO indices (CO1.58, CO1.60, and CO2.30, respectively) as measured for IRTF stars (star symbols), the stacked spectrum of XSGs (red point), and E-MILES models with ages from $1$ to $14$~Gyr (see the pink and purple lines, corresponding to solar-metallicity models for MW-like and bottom-heavy IMF, respectively). In the CO vs. CO plots, the locus of stars is well defined, forming a relatively narrow strip. Interestingly, the point of massive galaxies falls off the main strip: for values of CO1.60 $\sim2$~\AA\ and CO2.30 $\sim12$~\AA, stars have CO1.60 $\sim 2.4$~\AA, while the XSG stack has CO1.60 $\sim3.2$~\AA. As expected, E-MILES model predictions follow the locus of stars, predicting lower CO values compared to the data. However, to be able to match the XSG stack, the models should not only increase the CO line-strengths but, also, move away from the main strip of IRTF stars.
Note that in both panels of Fig.~\ref{fig:fig10}, some dwarf stars (see the orange stars in the figure) do not share the same locus as the rest of the IRTF stars, but they are actually shifted to higher values of CO1.58 ($\sim1.4$~\AA), at given values of CO1.60 ($\sim0.2$~\AA) and CO2.30 ($\sim6$~\AA), respectively. The result is that predictions of models with a bottom-heavy IMF (orange arrows in the figure) are slightly above the main star locus. However, at the same time, these models predict lower values of CO ($\Delta$CO1.58 $\approx-0.2$~\AA, $\Delta$CO1.60 $\approx-0.5$~\AA, and $\Delta$CO2.30 $\approx-2$~\AA, comparing the tips of the orange and khaki arrows in the figure). Therefore, while bottom-heavy models widen the gap between observed and model line-strengths, the CO-CO plots suggest that IMF variations might help to reconcile models and data by increasing CO1.58 relative to CO2.30.
In Fig.~\ref{fig:fig10}, we also show, with blue arrows, the effect of an AGB-enhanced population (based on AGB-enhanced E-MILES SSPs; see Sec.~\ref{sec:AGB}), trying to mimic the presence of an intermediate-age population. The AGB-enhanced models increase CO1.58 by only $0.1$~\AA, while the discrepancy between the red point (i.e. the XSG stack) and the pink line (fiducial E-MILES model) is about $8$ times larger. Moreover, the change in CO1.60 (CO2.30) line-strength due to the blue arrow is $\sim0.1$ ($\sim0.5$)~\AA, i.e. one-fifth (one-fourth) of the offset between the models and data. We conclude, as already discussed in Sec.~\ref{sec:AGB}, that while AGB stars might have a relevant contribution to the NIR light of (massive) galaxies, they are likely not responsible for the strong CO line-strengths in the \textit{H} and \textit{K} bands.
Khaki and orange arrows in Fig.~\ref{fig:fig10} are the same as in Figs.~\ref{fig:fig4} and~\ref{fig:fig5}, plotting the increase of CO line-strengths caused by the empirical correction on giant stars, for both MW-like and bottom-heavy IMF models, respectively. Both arrows increase the CO1.58, CO1.60, and CO2.30 line-strengths by $\sim0.4$, $0.4$, and $1.5$~\AA, respectively. While the khaki arrow (compared to the orange one) brings the model indices closer to the XSG stack, the orange arrow is not able to reach the data, but it is slightly off the star sequence, similar to the XSG stack. This suggests that in order to match the CO line-strengths, one would need an effect similar to that of the empirical corrections, plus the slight offset due to a bottom-heavy IMF.
The effect of carbon enhancement from CvD12 models is also shown in Fig.~\ref{fig:fig10} (violet arrows), to be compared with the empirical responses. In the CO1.58 vs. CO1.60 diagram, the violet arrow, although it increases the CO indices (by $\sim0.4$~\AA\ for both CO1.58 and CO1.60), lies along the stellar locus, while in the CO1.58 vs. CO2.30 plot, the arrow seems to point in the correct direction to match the XSG stack (though with a small overall variation of only $\Delta$CO2.30 $\approx0.5$~\AA).
Similar to the effect of carbon enhancement, it seems that carbon stars (pink stars in Fig.~\ref{fig:fig10}) might also be able to bring the models out of the stellar locus in the CO1.58 vs. CO2.30 diagram, while this is not the case for the CO1.58--CO1.60 plot, as in the latter case, pink stars are somewhat aligned with the sequence of blue stars. However, according to Fig.~\ref{fig:fig7}, for most CO vs. ${\rm T}_{\mbox{\scriptsize eff}}$\ panels, carbon stars are not in the CO-strong sequence, i.e. they would not help in matching all the \textit{H}- and \textit{K}-band CO line-strengths. Again, this shows the importance of combining the largest available set of CO features, as we do in our analysis, and gives further support to our conclusion that adding an intermediate-age stellar population (that would also include carbon stars) to an underlying old component does not solve the issue with NIR CO spectral features.
Using optical and \textit{J}- and \textit{K}-band absorption features, including the first two CO bandheads in \textit{K} band, \citet{alton2017, alton2018} studied stellar population gradients in the spectra of eight massive ETGs. They showed that models that do not account for the effect of \mbox{$\mbox{[C/Fe]}$}\ variations underpredict the CO bandheads in the \textit{K} band. Moreover, they showed that to fit H$_{\beta}$, in the optical, a large enhancement in carbon abundance is also required. In other words, an over-abundance of carbon seems to have a prominent role in matching CO lines, in agreement with the suggestions from our analysis. However, \citet{alton2017, alton2018} also used CO features in the \textit{K} band to conclude in favour of a MW-like IMF in the center of (some) massive ETGs, in contrast to studies based on (optical) spectral features. Indeed, our analysis shows that current stellar population models in the NIR are still not accurate enough to allow for a quantitative matching of CO lines, let alone the constraint of an even smaller effect, such as that of a varying IMF. The results presented here demand a new generation of NIR stellar population models, after a significant effort is made to move beyond the current limitations of theoretical star spectra, particularly for the predictions of abundance-ratio effects in low-temperature (giant) stars. Along the same lines, we point out that while our ad-hoc empirically-corrected SPS models do not match the observations yet, they tend to significantly reduce the discrepancy with respect to the observed CO strengths. Admittedly, the interpretation of our empirical corrections as an effect of \mbox{$\mbox{[C/Fe]}$}\ for low-temperature giants remains rather speculative, but it urges further study of such stars and opens up new avenues for improving SPS models in the NIR spectral range.
\begin{figure*}
\centering
\includegraphics[width=16cm]{figs/Figure10.pdf}
\caption{ A selected pair of CO vs. CO plots (CO1.58 vs. CO1.60 on the left; CO1.58 vs. CO2.30 on the right), showing IRTF stars (star symbols), the XSG stacked spectrum (filled red circle), and predictions for solar-metallicity E-MILES SSP models having ages from 1 to $14$~Gyr, for a MW-like and bottom-heavy IMF (see pink and purple lines), respectively. The violet arrows show the effect of increasing carbon abundance for CvD12 models, while the (small) blue arrows plot the increase of CO indices when accounting for AGB-enhanced intermediate-age population on top of an old stellar component. The effect of our empirical corrections on SSP model predictions is shown with khaki and orange arrows, for a MW-like and a bottom-heavy IMF, respectively.
}
\label{fig:fig10}
\end{figure*}
\section{Summary and Conclusions}\label{sec:conclusions}
We have shown, in a comprehensive manner, that a whole set of CO lines in the spectra of massive ETGs, from \textit{H} through to \textit{K} band, lie significantly above the predictions of widely-used state-of-the-art SPS models. We have explored different possible reasons for this ``CO-mismatch'' problem, finding that an individual enhancement of carbon abundance might be the most likely explanation, compared to other scenarios (such as an enhanced contribution of AGB stars from intermediate-age populations). In general, our study highlights the importance of improving SPS models in the NIR, in the following aspects:
\begin{itemize}
\item \textit{non-solar chemical abundances:} we need substantial progress in the modelling of the response of stellar spectra to elemental abundance variations. In particular, the effect of varying abundance ratios for cool stars (${\rm T}_{\mbox{\scriptsize eff}}$\ $< 4000$~{\small K}) is far from being well understood, and might be crucial to explain the current issues with NIR spectral lines. In addition, since current SPS modelling for varying abundance ratios is based on scaled-solar isochrones, we need a significant improvement on isochrones with non-solar abundances to create fully consistent SSP models with varying abundances. Moreover, the interplay between C, N, and O elements and their effects on CO indices are not yet fully understood. As some of the current models consider O as one of the $\alpha$-elements, treating O separately from the other $\alpha$-elements may be an interesting avenue for further investigation.
\item \textit{very cool stars:} current (theoretical) models struggle to reproduce atomic and molecular bands for stars with ${\rm T}_{\mbox{\scriptsize eff}}$\ $< 3500$~{\small K}. Moreover, SPS would benefit from an improved treatment of evolved phases of stellar evolution, such as the red giant, supergiant, and AGB phases, which contribute prominently to the NIR light of a stellar population in various age regimes.
\item \textit{high-metallicity stars:} empirical stellar libraries, used to construct SPS models, are based on MW stars, having an unavoidable bias towards solar metallicity. Based on current SPS model predictions, CO indices do not depend significantly on metallicity, but from the study of individual stars and clusters \citep{aaronson1978, oliva1995}, CO indices are found to increase with increasing metallicity. Therefore, models with a good coverage of stars in the supersolar metallicity regime might yield further important clues to understand the CO-mismatch problem.
\end{itemize}
As a final remark, we would like to emphasize that a revision of SPS models in the directions suggested by the NIR CO indices should carefully take into account the constraints provided by other spectral ranges, such as the optical and the UV. For example, fitting just a single CO line, or a number of them, in isolation could lead to misleading derivations of stellar population properties.
\section*{Acknowledgements}
We are thankful to the reviewer, Dr. Russell Smith, for his careful reading of the manuscript and valuable comments. The authors acknowledge support from grant PID2019-107427GB-C32 from the Spanish Ministry of Science, Innovation and Universities (MCIU). This work has also been supported through the IAC project TRACES which is partially supported through the state budget and the regional budget of the Consejer\'\i a de Econom\'\i a, Industria, Comercio y Conocimiento of the Canary Islands Autonomous Community.
\section*{Data Availability}
The E-MILES SSP models are publicly available at the MILES website (\url{http://miles.iac.es}). The updated version of the Na-enhanced models of \citet{labarbera2017} is also available from the same website (under ``Other predictions/data''). The \citet{conroy2012a} SSP models are available upon request to the authors (see \url{https://scholar.harvard.edu/cconroy/projects}). The \citet{conroy2018} models are available for download at \url{https://scholar.harvard.edu/cconroy/sps-models}. The \citet{maraston2005} models can be downloaded from \url{http://www.icg.port.ac.uk/~maraston/Claudia's_Stellar_Population_Model.html}. Observations of the \citet{labarbera2019} sample were made with the ESO Telescope at the Paranal Observatory under programmes ID 092.B-0378, 094.B-0747, and 097.B-0229 (PI: FLB). The central spectra and the stacked central spectrum are available from FLB upon request. The spectra of the \citet{baldwin2018} sample were taken with the Gemini Near-Infrared Spectrograph on the Gemini North telescope in Hawaii through observing program GN-2012A-Q-22. The reduced spectra (FITS files) are available via \url{https://github.com/cbaldwin1/Reduced-GNIRS-Spectra}. Observations of the \citet{francois2019} sample were made with ESO Telescopes at the La Silla Paranal Observatory under programme ID 086.B-0900(A). The reduced spectra (FITS files) are available via \url{http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/621/A60}. Observations of the \citet{silva2008} sample were performed at the European Southern Observatory, Cerro Paranal, Chile, under ESO programs 68.B-0674A and 70.B-0669A. Observations of the \citet{marmol2008} sample were also performed at the European Southern Observatory, Cerro Paranal, Chile. \citet{rock2017} corrected the data of these two samples to rest frame and convolved them to a resolution of $\sigma=360$~\kms; these spectra are available from EE upon request.
The IRTF Spectral Library was observed with the SpeX spectrograph at the NASA Infrared Telescope Facility on Mauna Kea, and the spectra are publicly available at \url{http://irtfweb.ifa.hawaii.edu/~spex/IRTF_Spectral_Library/}. Theoretical stars were computed by ATK specifically for this project and are available from ATK upon request. APOGEE stars can be downloaded through the \url{https://www.sdss.org/dr16/} website.
\bibliographystyle{mnras}
\section{Introduction}\label{sec:introduction}
In their early months, human infants start experimenting with their own bodies, moving hands, touching objects, and interacting with the people around them. Such activities are part of an overall process of autonomous development (a.k.a. self-development), which lets them gradually develop cognitive and behavioral capabilities~\cite{rochat1998self}. These skills include the capability to recognize situations around them, the sense of self, the sense of agency (i.e., understanding the effect of one's own actions in an environment), the capability to act purposefully towards a goal, and some primitive social capabilities (i.e., knowing how to act in the presence of others).
Moving from humans to machines, the possibility of building ICT systems capable of autonomously developing their own mental and social models and of acting purposefully in an environment is increasingly recognized as a key challenge in many areas of artificial intelligence (AI), such as robotics~\cite{lake2017building}, intelligent IoT~\cite{Mar21}, and autonomous vehicles management~\cite{Yang21}.
Indeed, for small-scale and static scenarios, and for simple goal-oriented tasks, it is possible to ``hardwire'' a model of the environment within a system, alongside some pre-designed plans of action. However, for larger and dynamic scenarios, and for complex tasks, individual components of ICT systems should be able to autonomously (i.e., without human supervision and with little or no innate knowledge): \emph{(i)} build environmental models and continuously update them as situations evolve; \emph{(ii)} develop the capability of recognizing and modeling the effect of their own actions on the context (which variables of the environment can or cannot be directly affected by which actuators, which variables and actuators relate to each other); \emph{(iii)} learn to achieve goals on this basis and depending on the current situation; \emph{(iv)} learn how to organize and coordinate actions among multiple distributed components whenever necessary.
The ambitious idea of building systems capable of autonomous development is not new, and its opportunity has been already advocated since several years~\cite{Weng599}, sometime under the notion of ``self-aware'' computing systems~\cite{selfaware}.
However, the topic is now even more timely. Many recent research results in areas such as machine learning, causal analysis, multi-agent systems, and collective behaviors, have started shedding light on the various mechanisms that have to be involved in the process of autonomous development, hinting at the fact that the vision (at least in specific application areas) is close to become reality. Furthermore, unfolding the key concepts and mechanisms underlying autonomous development can also somewhat contribute to understand the many mental mechanisms behind artificial general intelligence~\cite{Marcus21}.
Against this background, the contribution of this paper is to frame the key concepts of autonomous development in ICT systems and to identify challenges and promising research directions. Specifically, Section~\ref{sec:framework} introduces a general conceptual framework for the (continuous and adaptive) process of autonomous development, both at the individual and at the collective level; Section~\ref{sec:applications} sketches key application scenarios; Section~\ref{sec:approaches} analyzes the most promising approaches in the area of machine learning, causal analysis, multi-agent systems, and collective adaptive systems that can contribute with fundamental building blocks towards realizing the vision of autonomous development, each \emph{per se} challenging; Section~\ref{sec:challenges} identifies additional horizontal challenges to be attacked.
\section{Conceptual Framework} \label{sec:framework}
Autonomous development, besides being the process that infants carry out during the early stages of their life~\cite{rochat1998self}, also involves any ``agent'' whenever it is incarnated in a new body and immersed in a new environment.
As an example to quickly and intuitively introduce our general framework (Figure~\ref{fig:framework}), let us consider what we do whenever we start playing a new video-game. At first, we observe the game environment on the screen and the commands available on the joystick; that is, we become acquainted with our \emph{embodiment and perception} in the video-game. We spend a few seconds trying the commands to assess their effects in the game environment; that is, we try to acquire a \emph{sense of agency}. Then, we understand what the goal of the game is and how we can use the commands to achieve it; that is, we start acting in a \emph{goal-oriented} way.
Typically, we recognize in the video-game the presence of other ``agents'', virtual characters that are not under our control; that is, we distinguish between \emph{self and non-self}. Acquiring such a skill implies acknowledging that we should tune our actions also depending on the actions of these other agents (\emph{strategic thinking}). All this process is typically repeated in a cyclic way (e.g., when reaching a new level in the game) to adapt to new environments, new situations, new tools available to play with, new goals, and new enemies appearing.
In the case of multiplayer games, besides recognizing the presence of players different from ourselves, and recognizing the need to act also accounting for them, we should understand: whether we have \emph{communication} tools available; how to use these tools to affect and influence the actions of others, i.e., to \emph{coordinate} with them, so that eventually \emph{institutional} ways to act together towards a goal can be established. Again, this process may be cyclically repeated as the game advances.
Truly intelligent and adaptive ICT systems should undergo a similar process and autonomously develop through similar phases.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{FrameworkBis.png}
\caption{The conceptual framework of autonomous development.}
\label{fig:framework}
\end{figure}
\subsection{The Individual Level}
Let us firstly consider a single agent $X$ (purely software or physically embodied) immersed in a (virtual or physical) environment. The agent can observe a set of environmental variables $\mathcal{V} = \{v_1, v_2, \ldots, v_m\}$. As the agent is part of the environment, internal variables of the agent itself (i.e., its current status and configuration) are included in the set. In addition, the agent has a set of actions from which it can choose, $\mathcal{A} = \{a_0, \ldots, a_{n-1}, null \}$, including the $null$ action.
\vspace{3mm}
\noindent
\emph{Embodiment and Perception}. In this very early phase, the agent should autonomously recognize the existence of $\mathcal{A}$ and $\mathcal{V}$, that is, it should become acquainted with its actuation and sensing capabilities. Without resorting to complex AI techniques, methods from reflective and self-adaptive programming systems can effectively apply in this phase~\cite{sal09} to let the agent dynamically self-inspect its capabilities and start analyzing the observed variables. Still in this phase, the agent can also start acquiring some understanding of the relations between the observed variables over time, as well as some simple prediction capabilities.
\vspace{3mm}
\noindent
\emph{Sense of Agency}. In this exploratory phase, the agent starts trying to understand what the effects of $\mathcal{A}$ on $\mathcal{V}$ are, by trying out actions (even without any goal in mind) to see their effects. That is, it will eventually recognize that, given the current state $\mathcal{V}_{current}$, the application of an action $a_i$ (or of a sequence of actions) will eventually lead (with some probability) to state $\mathcal{V}_{next}$. This mechanism enables the construction of the basic sense of agency~\cite{rochat1998self}, and of the sense of causality.
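This exploratory phase can be sketched in a few lines of Python. The environment, variable, and action names below are purely illustrative (a one-light toy world, not a real system); the point is that random exploration alone yields an estimate of $P(\mathcal{V}_{next} \mid \mathcal{V}_{current}, a)$:

```python
import random
from collections import defaultdict

# Toy world: one binary variable ("light") and three actions.
ACTIONS = ["switch_on", "switch_off", "null"]

def step(state, action):
    """Ground-truth dynamics, hidden from the agent."""
    if action == "switch_on":
        return {"light": 1}
    if action == "switch_off":
        return {"light": 0}
    return dict(state)            # the null action changes nothing

# Exploratory phase: apply random actions, count observed transitions.
counts = defaultdict(lambda: defaultdict(int))
state = {"light": 0}
random.seed(0)
for _ in range(500):
    action = random.choice(ACTIONS)
    nxt = step(state, action)
    counts[(state["light"], action)][nxt["light"]] += 1
    state = nxt

# The resulting sense of agency: P(V_next | V_current, a).
def transition_prob(v, a, v_next):
    row = counts[(v, a)]
    total = sum(row.values())
    return row[v_next] / total if total else 0.0

print(transition_prob(0, "switch_on", 1))   # 1.0 in this deterministic toy
print(transition_prob(1, "null", 1))        # 1.0: null preserves the state
```

In this deterministic toy the learned probabilities converge to exactly 1; in a stochastic environment the same counts would yield a probabilistic model of agency.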
\vspace{3mm}
\noindent
\emph{Goal-orientedness}. In this exploitation phase, the agent starts applying $\mathcal{A}$ with goals in mind. That is, given the current state $\mathcal{V}_{current}$ and a desired future state $\mathcal{V}_g$ (the goal, a.k.a. the desired ``state of the affairs''), the agent resorts to the acquired sense of agency by applying the action that can possibly lead to $\mathcal{V}_g$. This also involves achieving the capability of planning the sequence of actions required to achieve a specific goal.
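Once such a model is available, goal-oriented behavior reduces to search over it. The sketch below assumes a previously learned deterministic transition table over two illustrative binary variables (all names are ours) and plans, by breadth-first search, the shortest action sequence from the current state to the goal state:

```python
from collections import deque

# Assume the exploratory phase produced this (deterministic) transition
# model over two binary variables; state = (light, window).
MODEL = {}  # (state, action) -> next state
for light in (0, 1):
    for window in (0, 1):
        s = (light, window)
        MODEL[(s, "switch_on")] = (1, window)
        MODEL[(s, "switch_off")] = (0, window)
        MODEL[(s, "open_window")] = (light, 1)
        MODEL[(s, "close_window")] = (light, 0)

def plan(start, goal):
    """Breadth-first search over the learned model: shortest action
    sequence turning the current state into the desired one."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for (s, a), nxt in MODEL.items():
            if s == state and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [a]))
    return None  # goal unreachable with the known actions

print(plan((0, 0), (1, 1)))
```

Here `plan((0, 0), (1, 1))` returns a two-action sequence (switching the light on and opening the window); with a stochastic model, the same idea would generalize to planning under uncertainty.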
\vspace{3mm}
\noindent
\emph{Self and Non-Self}. As soon as an individual agent starts exploring its own actions $\mathcal{A}$, and recognizes that such actions have effect on the environment, it also understands that there are effects that are not under its own control. That is, there are ``non-self'' entities acting in the environment, too. By learning how to apply $\mathcal{A}$, the agent also learns the limits of such actions because of the non-self entities affecting some variable $v_i$.
\vspace{3mm}
\noindent
\emph{Strategic Thinking}. The agent has built a model of the world, that is, of how $\mathcal{A}$ affects $\mathcal{V}$, and it starts including the mental models of others (non-self)~\cite{subagdja2019beyond} while acting, as well as while designing strategies.
That is, it can recognize that there are goals that it can possibly (or hopefully) attain only by accounting for the actions of others.
\vspace{3mm}
As in the video-game example, autonomous development is not to be conceived as a ``once-and-for-all'' process. Rather, it is a continuous, never-ending process: environmental conditions can change, new sensors may become available to enable more detailed observations, and new actions become feasible (or vice versa, some sensors and actions may no longer be available). This requires the agents to re-tune their learned sense of agency, and re-think how to achieve goals in isolation and in the presence of non-self entities.
\subsection{The Collective Level}
In the presence of multiple agents acting in the same environment, an agent recognizes that there are goals that cannot be achieved in isolation or by simply applying strategic thinking. Thus, as part of their autonomous development, agents should collectively develop some form of ``autonomous social engagement''.
Formally, this corresponds to considering a set of $K$ agents $X_0, \ldots, X_{K-1}$, where \emph{(i)} each agent can choose the actions to perform from its own set (possibly disjoint or only partially overlapping with those of the other agents); \emph{(ii)} not necessarily all the agents can observe the whole set of environmental variables; more likely, each agent $X_j$ has the capability to perceive and/or control a subset of them. Thus, for specific goals $\mathcal{V}_g$ to be achieved, actions by different agents need to be properly combined and sequenced, e.g., $X_i$ executes $a^i_w$ whereas $X_j$ executes $a^j_z$, and so on.
\vspace{3mm}
\noindent
\emph{Communication}. To overcome the limitations of strategic thinking, agents should be provided with a specific set of \emph{communication actions}, i.e., actions that are devoted to influencing the actions of other agents. These could take the form of explicit communication acts, i.e., messages, that the agent should learn how to receive and send as an additional -- social -- form of perception and action. However, they could also take the form of more implicit actions aimed at affecting the behavior of others, i.e., leaving signs in the environment (stigmergy) or adopting peculiar behaviors aimed at being noticed by others (behavioral implicit communication)~\cite{10.1007/978-3-319-24309-2_8}. All these scenarios can be formalized by augmenting the $\mathcal{A}$ set to include communication, and possibly $\mathcal{V}$, to include observable signs in the environment.
\vspace{3mm}
\noindent
\emph{Coordination}. By exploring their available communication actions, agents start understanding how such acts can be used to access and affect some of the variables of the environment, and in particular those that are not observable and controllable by themselves. For instance, they can learn how to use communication acts to get access to the value of some non-observable variable $v_i$, or to direct other agents in executing the actions that can affect its value as required for a goal to be achieved. In other words, such explorations enable learning basic forms of coordination, which can be thought of as a social form of the sense of agency.
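As a toy sketch of this social sense of agency (all agent, message, and variable names are ours and purely illustrative), agent A below can neither observe nor control a variable v owned by agent B; by exploring its communication actions, it learns which messages reliably affect v, and can then use them to achieve goals on v just as it would use a local action:

```python
class AgentB:
    """Owns variable v: only B can observe and control it."""
    def __init__(self):
        self.v = 0
    def handle(self, message):
        if message == "set_v_to_1":
            self.v = 1
        elif message == "report_v":
            return self.v

class AgentA:
    """Cannot touch v directly; explores which messages affect it."""
    def __init__(self, peer):
        self.peer = peer
        self.effects = {}          # learned effect of each message on v
    def explore(self):
        for msg in ("noise", "set_v_to_1"):
            self.peer.v = 0        # toy-only reset between trials
            self.peer.handle(msg)
            self.effects[msg] = self.peer.handle("report_v")
    def achieve(self, goal):
        # pick the message whose learned effect matches the goal on v
        for msg, outcome in self.effects.items():
            if outcome == goal:
                self.peer.handle(msg)
                return msg

b = AgentB()
a = AgentA(b)
a.explore()
b.v = 0
print(a.achieve(1), b.v)   # A reaches its goal on v only through B
```

The learned `effects` table plays exactly the role the transition model plays at the individual level, with messages in place of actuator commands.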
\vspace{3mm}
\noindent
\emph{Institution}. Eventually, after exploring coordination protocols, the agents can ``institutionalise'' their patterns of interaction towards collective actions. That is, they will learn those acceptable social patterns of coordination, and the set of social norms and social incentives, that enables them to systematically achieve goals together~\cite{Morris-Martin:2019aa}. Formally, this corresponds to having agents in the collective recognize and adhere to a set of constraints $\mathcal{C(A,V)}$ ruling the way (communication) actions $\mathcal{A}$ can be performed in specific conditions $\mathcal{V}$, as well as the commitments and expectations each communication action sets on the agents participating in the protocol.
\vspace{3mm}
As for the case of the individual level, the dynamics of the environment or of the agent population may require the above collective process to assume a continuous cyclic nature.
We emphasize that communication, coordination, and institutions are not strictly necessary to promote complex goal-oriented collective actions~\cite{leibo2019autocurricula}. Nevertheless, whenever communication mechanisms are available, learning to exploit them is a natural part of the autonomous development process, and can facilitate the effectiveness of the social development.
\section{Application Scenarios}\label{sec:applications}
There are diverse application scenarios that can potentially take advantage of systems capable of autonomous development, at the individual and/or collective level.
\subsection{Robotics}
Robotics is the area which first identified the profitability of building systems capable of autonomously building ``by experience'' a model of their own capabilities, and consequently of learning to achieve specific goals.
In general, developers can define a model of their robots at design time, and can easily wire such a model directly in embedded software without necessarily having the robot learn it in autonomy. However, autonomous learning may become necessary when the robot gets damaged while in operation, thus (partially) losing its original capabilities (e.g., a four-legged robot with one leg damaged). In that case, the robot should autonomously learn a new model to understand what it can do according to its residual operational capabilities (e.g., how to walk with three legs only), and how it can re-learn to achieve goals with them \cite{Bongard1118}.
A different situation is that of modular robots~\cite{yim2002modular}, where the robot can re-arrange its shape to serve different tasks. In this case, having the developers foresee all the possible shapes that the robot can assume to serve different tasks can be time-consuming. Also, it can prevent emergent (not previously envisioned) shapes -- functional to peculiar tasks -- from being autonomously identified by the robot itself. In this case, having the modular robot try to assume a variety of forms and understand its action capabilities for the different shapes could be very useful before deployment (other than after deployment, to recover from injuries and from the loss of some of its modules).
At the level of collective robotics (i.e., groups of robots living together in an environment and having to cooperate to achieve collective tasks), most current approaches rely on coordination schemes defined at design time. However, also in this case it has been argued that the autonomous evolution of communication and coordination capabilities can be of fundamental importance for the collective to acquire the capability to act in unknown and dynamically changing scenarios \cite{Cambier20}.
\subsection{Smart factories}
Similarly to a collective robotic system, a complex manufacturing system can be seen as an aggregated group of components that act together in order to achieve a production goal. Beyond its basic scheme of functioning, defined at design time, if one component of the manufacturing system breaks or shows some unexpected behavior, the system should ideally adapt to the new situation, so as to overcome the problem without undermining production.
Exploring in advance all possible contingencies and hard-wiring solutions to them in the system is almost impossible. The system should rather learn autonomously how to act upon changing conditions (whether temporary or permanent) to maintain its overall functioning. For example, the system may explore the possibility of diverting the material flow from the broken component to another one, to learn the effect of such actions and their overall impact on production. In doing so, a local or global production re-scheduling might be necessary, with the initiative autonomously coming from the different components of the manufacturing system, without involving the production planning office. Clearly, such capabilities of autonomous adaptation can also apply at the level of individual components of the system, whenever these do not impact other components.
The need for integrating adaptability and flexibility in manufacturing systems is explicitly recognized as a key challenge in Industry 4.0 initiatives, and some examples of agent-based production control systems -- exhibiting some limited forms of adaptivity -- can be already found, e.g., in the automotive industry~\cite{bussman2017ICMAS-Daimler}. However, these are still far from the adaptivity level that could be reached by fully-fledged autonomous development approaches.
\subsection{Smart homes}
Buildings and homes are increasingly being enriched with sensors and actuators (i.e., IoT devices, in general), to facilitate our interactions with the environment and, by monitoring our activities and habits, to increase our safety and comfort. However, such systems are typically based on design-time decisions w.r.t. deployment of devices, their interactions, and the types of services to be provided. Learning capabilities are typically limited to monitoring user activities and adapting the parameters of services (e.g., the levels of light and heating) accordingly \cite{YeDZ19}.
From the perspective of autonomous development, we envision that once IoT devices are deployed in an environment, they should be activated to autonomously explore their own individual and collective capabilities (i.e., the individual sense of agency and the impact of inter-device interactions), so as to eventually learn how they can affect the home environment, and to apply such capabilities once users start populating the environment. Then, the overall smart home/building system will continue to dynamically and continuously modify its functioning to adapt to the presence of different users with different profiles, or of users whose habits tend to evolve over time, or simply to react to contingencies (e.g., modifications in the number and type of available devices, or structural modifications to the environment).
We have conducted some preliminary experiments showing the potential feasibility of autonomous development in a simple two-room smart home test-bed~\cite{Mar21}. In particular, we have shown that IoT devices in a room, when left free to explore the effects of their actions, can eventually build a sound causal model of the room, and can use such a model to actuate specific environmental conditions in that room (e.g., closing windows and turning off lights to achieve darkness). Also, by merging the models built for the two rooms, devices can learn to cooperatively actuate house-level environmental conditions (e.g., when a curtained window connects the two adjacent rooms, they have to agree on keeping the curtain closed so as to have light in one room and darkness in the other). Exploring larger and more complex environments and different learning techniques, also in the presence of users, will give us better clues on the general applicability of the approach and its possible limitations.
\subsection{Smart cities}
Most of the considerations above for smart homes and buildings scenario can be, in theory, transferred to the larger scenario of smart cities. That is, to all those ICT and IoT systems that can get deployed to automate and regulate the activities of modern cities (e.g., mobility, energy management, garbage collection). Indeed, the need to deploy robust systems capable -- with limited design efforts and limited human intervention -- to dynamically adapt their behavior to continuously changing urban conditions is widely recognized as a key challenge for harmonic and sustainable urban development~\cite{Ullah20}.
The substantial difference between smart homes/buildings and smart cities is that cities already exist and are already inhabited. Thus, one cannot simply leave a smart city system free to explore the effects of its actions and interactions until it eventually becomes capable of acting in a goal-oriented way (which one can instead do before a new building becomes inhabited).
However, one can exploit a simulation-based approach for this purpose. Given that accurate and reliable simulators exist to study different urban aspects, system components can be made to explore and learn in a simulated environment towards full mental development, before being eventually deployed in the real world \cite{Yang21}. Indeed, exploiting simulators to support autonomous development, just as children are given toys and playgrounds to explore and develop their capabilities, can be key for any application area.
\section{Research Approaches}\label{sec:approaches}
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\textwidth]{Approaches2.png}
\caption{The galaxy of autonomous development.}
\label{fig:approaches}
\end{figure*}
The idea of autonomous development, at both the individual and collective level, has been widely investigated in areas such as cognitive psychology, neuroscience, philosophy, and ethics~\cite{Weng599}.
We hereby focus on the computational perspective, and in particular on the galaxy of different approaches that can contribute to unfold the mechanisms involved in autonomous development and to eventually realize the vision (Figure~\ref{fig:approaches}). Although most of these approaches can play or are already playing a fundamental role, they still have to attack several challenging problems to become practical tools for future systems engineering.
We do not focus here on the basic levels of individual autonomous development, i.e., perception and embodiment, in that tools already exist to give agents sophisticated sensing abilities (e.g., convolutional neural networks to recognize objects, scenes, and activities) and the capability of controlling their own actuators purposefully.
\subsection{Goal-oriented Learning}
The broad area of reinforcement learning shares with our vision the objective of training machines to act in a goal-oriented way in a specific context. However, despite the amazing recent results in the area, in particular with deep Q-learning~\cite{mnih2015human}, most current approaches do not aim at building systems with a sense of agency and capable of developing an interpretable world model, but rather at achieving goals based on explicit, domain-based rewards, named \textit{extrinsic}. This makes most approaches highly ineffective in scaling up to learning tasks in complex contexts, across domains, or in the presence of ever-changing environment dynamics.
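As a concrete, deliberately minimal illustration of extrinsic-reward learning, the following tabular Q-learning sketch (a far simpler relative of the deep variants cited above; the toy chain setup is ours, not taken from any cited work) trains an agent that is rewarded only upon reaching the rightmost of five states:

```python
import random

# Minimal tabular Q-learning on a 5-state chain: the only (extrinsic)
# reward is earned upon reaching the rightmost state, so everything
# the agent learns is tied to this domain-specific signal.
N = 5
ACTIONS = [1, -1]                      # move right / move left
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2      # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

random.seed(1)
for _ in range(300):                   # training episodes
    s = 0
    while s != N - 1:
        a = (random.choice(ACTIONS) if random.random() < EPS
             else max(ACTIONS, key=lambda x: Q[(s, x)]))
        nxt = min(max(s + a, 0), N - 1)
        r = 1.0 if nxt == N - 1 else 0.0
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy moves right in every non-terminal state.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N - 1)]
print(policy)
```

Nothing in the learned table transfers to a different chain or a different reward: exactly the scaling limitation discussed above.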
Curriculum-based approaches to machine learning go somewhat in the direction of gradually developing the capability to act in complex scenarios~\cite{bengio2009curriculum}. The agent is first trained on simple tasks, and the gained knowledge is accumulated and exploited in increasingly complex scenarios, where further skills can thus be effectively learnt. Yet again, most of these approaches do not focus on the development of a world model and of an explicit sense of agency.
Reinforcement learning approaches based on \emph{intrinsic} rewards~\cite{Sch10}, instead, more closely exploit the idea of exploring the world to develop a sense of agency. In fact, while extrinsic rewards are typically designed by a ``teacher'' (e.g., the score in a videogame), intrinsic rewards are developed by the agent itself to satisfy its curiosity (i.e., when it discovers how to achieve specific tasks). For example, in~\cite{burda2018large} intrinsic rewards are computed as the error in forecasting the consequence of the action performed by the agent given its current state.
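A simplified sketch of this curiosity mechanism, loosely inspired by the prediction-error idea and not a reproduction of any cited implementation: the intrinsic bonus is the error of a learned forward model, so poorly explored (state, action) pairs pay well, and the bonus fades as the dynamics become known:

```python
import random
from collections import defaultdict

# Curiosity-style intrinsic reward on a toy 1-D world: the bonus is
# the error of a learned forward model predicting the next state.
random.seed(0)
pred = defaultdict(float)          # forward model: (s, a) -> predicted s'
LR = 0.5

def true_next(s, a):               # hidden dynamics (states 0..4)
    return max(0, min(4, s + a))

s, bonuses = 2, []
for _ in range(200):
    a = random.choice([-1, 1])
    nxt = true_next(s, a)
    error = abs(nxt - pred[(s, a)])             # surprise = prediction error
    bonuses.append(error)
    pred[(s, a)] += LR * (nxt - pred[(s, a)])   # improve the forward model
    s = nxt

# Early on everything is surprising; once the dynamics are learned,
# the intrinsic reward fades and exploration can move elsewhere.
print(sum(bonuses[:20]) > sum(bonuses[-20:]))
```

In full-scale versions the forward model is a neural network over high-dimensional observations, but the shape of the signal is the same.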
Recent approaches based on the theory of affordances~\cite{Khet20} propose to have agents gradually learn the effects of their actions. By having them act in constrained environments where only a limited set of actions apply, they eventually develop an explicit sense of agency, i.e., a model of how their actions affect the environment.
All these approaches face the key challenge of building general conceptual and practical tools to: (i) learn to effectively act in an environment by exploiting the power of model-free sub-symbolic (deep learning) approaches; and, at the same time, (ii) learn incremental and reusable, possibly causal, models of the world. The latter is increasingly recognized as a key ingredient for intelligence and autonomous development, yet most current efforts still fall short of building \emph{explicit, actionable models} of the world, including both the environment and the society of agents.
\subsection{Learning causality}
Understanding and leveraging \textit{causality} is recognized as a key general challenge for AI in the coming years~\cite{scholkopf2021toward}. In particular, Judea Pearl~\cite{pearl2019}
has proposed the idea of a ``causal hierarchy'' (also named ``ladder of causation'') to define different levels of causality recognition and exploitation by an intelligent agent.
The first level of the ladder consists in simply detecting relations as associations, whereas the second one assumes the possibility of intervening in the environment and observing the (causal) effects of the taken actions. Finally, the third level enables reasoning and planning on the basis of counterfactual analysis. Such layers correspond to some of the phases of the autonomous development loop we defined: the first one is mostly involved in the perception phase, whereas the second one is associated with the development of a sense of agency and with the recognition of self and non-self. The final layer clearly enables goal-oriented behavior, strategic thinking, and collective coordination.
Bayesian and causal networks are among the models that are most widely exploited in order to build interpretable causal models of the world. A recent contribution that is in line with the ideas we envision for autonomous development is the application of curriculum learning to the problem of learning the structure of Bayesian networks~\cite{zhao2017learning}. On a pure sub-symbolic level, on the other hand, another recent work proposes to learn causal models in an online setting~\cite{javed2020learning}, with the aim to find (and strengthen) causal links between input and output variables.
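The gap between the first two rungs can be made concrete with a toy example (hypothetical variables, no causal-discovery library assumed): a hidden confounder makes two observed variables perfectly correlated, yet intervening on one of them, in the spirit of Pearl's do-operator, reveals that it has no causal effect on the other:

```python
import random

# A hidden confounder z drives both observed variables x and y.
random.seed(0)

def observe():
    """Observational regime (first rung): z drives both x and y."""
    z = random.random() < 0.5
    x = z                      # x copies the confounder
    y = z                      # so does y, hence x and y correlate
    return x, y

def do_x(value):
    """Interventional regime (second rung): the agent sets x itself,
    cutting the link from z; y still follows z alone."""
    z = random.random() < 0.5
    return value, z

obs = [observe() for _ in range(2000)]
corr = sum(x == y for x, y in obs) / len(obs)      # perfect agreement

ints = [do_x(True) for _ in range(2000)]
effect = sum(y for _, y in ints) / len(ints)       # y ignores the intervention

print(round(corr, 2), round(effect, 2))
```

Association alone (the first rung) would suggest that acting on x changes y; only the intervention exposes the confounding, which is why the sense-of-agency phase requires acting in the environment rather than merely observing it.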
We argue that key challenges in this area concern, again, understanding how to synergistically exploit symbolic and sub-symbolic approaches to learn, represent, and evolve causal models in autonomous development scenarios, and how to use them to adaptively achieve goals.
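To make the distinction between the first two rungs of the ladder concrete, the following self-contained Python sketch (our own illustration, not code from the cited works; all variable names and numbers are invented) contrasts an observational regime, where a hidden confounder makes two variables correlate, with an interventional one, where actually setting the variable reveals the absence of a causal effect:

```python
import random

def observe(n, rng):
    # Observational regime (first rung): a hidden confounder Z drives
    # both X and Y, so X and Y correlate although X does not cause Y.
    samples = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)
        x = z + rng.gauss(0.0, 0.1)
        y = 2.0 * z + rng.gauss(0.0, 0.1)
        samples.append((x, y))
    return samples

def intervene(n, rng, x_value):
    # Interventional regime (second rung): the agent *sets* X itself,
    # cutting the Z -> X edge, so Y no longer depends on the chosen X.
    return [(x_value, 2.0 * rng.gauss(0.0, 1.0) + rng.gauss(0.0, 0.1))
            for _ in range(n)]

def mean_y(samples):
    return sum(y for _, y in samples) / len(samples)

rng = random.Random(0)
obs = observe(5000, rng)
# Association: Y looks very different for high-X vs. low-X observations.
assoc_gap = (mean_y([s for s in obs if s[0] > 0.5])
             - mean_y([s for s in obs if s[0] < -0.5]))
# Causation: actually setting X barely moves Y at all.
causal_gap = (mean_y(intervene(5000, rng, 1.0))
              - mean_y(intervene(5000, rng, -1.0)))
print(f"association gap: {assoc_gap:.2f}, interventional gap: {causal_gap:.2f}")
```

The association gap is large (it is entirely driven by the confounder), while the interventional gap is close to zero: distinguishing the two is exactly what climbing from the first to the second rung requires.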
\subsection{Autocurricula}
When multiple agents act in a shared environment, their actions and their effectiveness in achieving goals are affected by what others do. Game-theoretic approaches to strategic thinking have deeply investigated this problem and the decision-making processes behind it~\cite{myerson2013game}. In this context, it has also been shown that agents can effectively learn in autonomy to improve their performance in dealing with others~\cite{nowe2012game}.
However, when moving from theoretical settings (e.g., the prisoner's dilemma) to complex and realistic scenarios where agents have complex goals, peculiar phenomena arise. The more one agent learns, the more it challenges others, triggering a continuous increase in the complexity of behaviour and ultimately enabling agents to incrementally learn more sophisticated means of acting. This somewhat resembles the increase in complexity that agents face in curricula approaches to reinforcement learning. The key difference is that, in the presence of multiple agents, the increase in complexity and capabilities is promoted and self-sustained by the system itself, hence the term \emph{autocurricula}~\cite{leibo2019autocurricula}.
Recently, autocurricula-based approaches have produced stunning results in multiagent environments, both cooperative and competitive. For instance, in a hide and seek scenario~\cite{baker2020emergent}, agents moving in a complex simulated environment have learned how to effectively compete (hiders against seekers) and cooperate (coalitions of cooperating seekers/hiders) in very elaborate ways, in a continuous self-sustained learning process. Indeed, we consider such approaches fundamental towards the autonomous development of complex agent societies. Yet, a deep understanding of the process that drives the evolution of individual and collective behaviors is still missing, and is a key challenge for the next few years. To this end, providing agents with an \emph{explicit} model (possibly in \emph{causal} terms) of the others' behavior and of the overall societal behavior may be necessary~\cite{subagdja2019beyond}. Also, autocurricula approaches do not currently account for the possibility of explicitly interacting (e.g., through speech acts) with other agents, which may indeed be fundamental to improve collective learning.
\subsection{Learning to communicate and coordinate}
Agents may communicate and coordinate by explicit messages,
by leaving traces in the environment,
or implicitly~\cite{10.1007/978-3-319-24309-2_8}.
These forms of communication are already exploited in multiagent learning, mostly to improve the individual learning process by letting agents share information (e.g., for merging their individual causal models of the world~\cite{meganck2005distributed}) and coordinate actions.
However, these communication approaches are usually assumed as an \emph{innate} capability of agents, rather than one to be learnt. That is, agents have an \emph{a-priori} sense of agency with respect to communication actions, whereas in our vision it should be developed by learning.
For example, with reference to explicit communication acts, \cite{DBLP:conf/atal/GuptaD20} proposes a voting game to let agents learn to share a communication language and to develop a strategy to communicate. In~\cite{Foer16}, it is shown that reinforcement learning can be effectively applied to let agents learn how to communicate in order to achieve a specific effect.
In the case of \emph{implicit} communication, instead, forms of implicit behavioral communications have been shown to emerge in simple system components that purposefully move in an environment~\cite{Grupen20}, as they learn to affect others with \emph{ad-hoc} actions.
Learning to use stigmergy to effectively coordinate is under-explored in the literature, which instead focuses on the opposite -- using stigmergy to boost learning.
In any case, the development of general approaches to let agents develop fully-fledged forms of communication and coordination is still an open challenge, which may call for agents to develop not only a model of the world, but an overall model of the society, i.e., a social sense of agency explicitly modeling how communication and coordination actions affect other agents in the shared environment.
\subsection{Emergence of Institutions}
Whereas learning to communicate is about understanding how to use communication to coordinate actions with others, enabling and sustaining the global collective achievement of goals requires ``institutionalized'' means of acting at the collective level, i.e., a set of shared beliefs, social conventions, and norms aimed at ruling collective actions~\cite{Esteva01}.
The mechanisms leading to the spontaneous emergence of institutions in human society,
including the mechanisms that promote and sustain altruistic and cooperative behavior (e.g., reputation and shared rewards),
have been widely investigated~\cite{Now06}. However, most approaches to building multiagent systems assume such mechanisms to be \emph{explicitly designed}~\cite{Esteva01}.
Yet, some promising studies related to the emergence of institutionalized behaviors in multiagent systems have been undertaken (see~\cite{Morris-Martin:2019aa} for a recent survey).
For instance, \cite{DBLP:journals/tcyb/YuZR14} proposes a collective learning framework where agents learn to adopt norms in repeated coordination, i.e., agents eventually \emph{learn} that a social norm has emerged, and ``institutionalize'' their behaviour in their (social) decision making processes \emph{implicitly}, by behaving so as to comply with the norm.
Another interesting work~\cite{10.5555/2615731.2616158} integrates rational thought, reinforcement learning, and social interactions to model norms emergence in a society: agents incrementally develop a social behaviour (a social norm) while \emph{internalising} it within their cognitive model.
However, general models and tools to support the proper learning and evolution of institutionalized mechanisms of coordination, through the construction of explicit norm representations and their adoption within agents' cognitive models, are still missing, and so are solutions to the many problems involved in this process. For instance: how to prevent an agent from learning that free-riding is better than abiding by norms; or how to avoid inconsistencies and misunderstandings in their interpretation.
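As a minimal, purely illustrative complement (our own toy sketch, not a model from the works cited above), the following Python snippet shows how a shared convention can emerge from repeated local interactions alone, with agents simply conforming to the majority behaviour they observe, and with no norm being explicitly designed:

```python
import random

def emerge_convention(n_agents=50, rounds=1000, sample_size=5, seed=1):
    """Toy norm-emergence dynamics: agents hold a binary convention
    ('L' or 'R'); each round a random agent observes a few peers and
    conforms to the majority action among them."""
    rng = random.Random(seed)
    actions = [rng.choice("LR") for _ in range(n_agents)]
    for _ in range(rounds):
        i = rng.randrange(n_agents)
        peers = rng.sample(range(n_agents), sample_size)
        n_left = sum(1 for p in peers if actions[p] == "L")
        # Conforming to the locally observed majority is the whole rule.
        actions[i] = "L" if 2 * n_left > sample_size else "R"
    # Fraction of the population following the dominant convention.
    return max(actions.count("L"), actions.count("R")) / n_agents

share = emerge_convention()
print(f"share of agents following the emergent convention: {share:.2f}")
```

Even this crude conformity rule typically drives the population towards (near-)unanimity on one of the two conventions; the challenge discussed above is to make such emergent norms \emph{explicit} in the agents' cognitive models, rather than leaving them implicit in behaviour.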
\section{Horizontal Challenges}\label{sec:challenges}
The presented approaches and techniques are still at the research stage, and many research challenges have been identified for each of them. In addition, it is possible to identify several ``horizontal'' challenges, i.e., challenges of a general nature that hold independently of the specific approach.
The nature of such challenges, in our opinion, makes them particularly suited to being pursued by the most diverse research communities in the areas of machine learning, reinforcement learning, and autonomous and multi-agent systems.
\vspace{3mm}
\noindent
\emph{Engineering}. Many of the presented approaches are grounded in the areas of machine learning and multi-agent systems, disciplines with many years of research behind them, but in which traditional software engineering problems are sometimes considered mundane. Systems are often developed \emph{ad-hoc} for a specific task or problem domain, with little attention to modularity, re-usability, and dependability, thus missing the flexibility to adopt them across different domains, tasks, and data-sets~\cite{porter2019distributed}. In addition, given that the diverse approaches presented can each contribute important pieces to the overall vision of autonomous development, sound engineering approaches are needed to integrate such a heterogeneous plethora into a coherent whole. These represent multi-faceted and horizontal research challenges that, in our opinion, could and should be profitably attacked by promoting synergies and collaboration with the software engineering research community.
\vspace{3mm}
\noindent
\emph{Controlling evolution}. Autonomous development raises the issue of somewhat controlling how behaviors evolve, as individuals learn new skills and tasks, and as the collective learns new ways of coordinating and acting together. How can we \emph{steer} a learning process towards desired outcomes without putting bias in it? How can we \emph{constrain} the boundaries within which individual and collective behaviors should stay (e.g., in terms of safety)? What \emph{interventions} can we make to re-direct an agent or a collective that has taken an unpredictable or unsafe autonomous development path? Experience in self-adaptive components based on feedback, as well as in the study of emergent behaviors in self-organizing systems, can definitely help in finding proper technical answers, and -- why not -- \emph{ethical} ones~\cite{DBLP:conf/hicss/YapoW18}.
\vspace{3mm}
\noindent
\emph{Humans in the Loop}. The more technologies based on autonomous development advance, the more humans will have to actively interact with them. This interaction will raise technical issues (will we have ``handles'' to control or block such systems in some ways and to some extent?) and ethical problems (will we instead be ``handled'' by these systems and subject to their decisions?). Some of these problems have already emerged, as in the \textit{moral machine} experiment~\cite{awad2018moral} or in AI-based hiring technology.
Technical challenges will fall to the HCI and distributed systems communities (including the self-organizing systems one). Ethical and moral ones will fall to politicians and lawyers, although deep joint work with technical experts will always be necessary.
A key ingredient involves institutions, since they represent humans as a group: laws and regulations need to be developed to regulate global actors in day-to-day technology usage. Moreover, a deeper interaction between researchers in science and technology and public institutions is needed to support the regulation design phase.
\vspace{3mm}
\noindent
\emph{Sustainability}. Algorithms for autonomous development will most likely require extensive computational resources. For example, the mentioned ``hide and seek'' experiment by OpenAI involved a distributed infrastructure of 128,000 pre-emptible CPU cores and 256 GPUs on GCP~\cite{baker2020emergent}: the default model, optimized over 1.6 million parameters, took 34 hours to reach the fourth of six stages of the agents' skill progression.
This example is a best-in-class project; in any case, it is clear that if autonomously developing systems are based on similar learning approaches, they will require massive amounts of computational resources.
Therefore, a key challenge for the community will be to devise algorithmic and system-level means to make autonomous development systems sustainable, and affordable by actors other than the big technology players.
\vspace{3mm}
\noindent \emph{Explainability}. Being able to inspect and explain the decision making process of AI systems is already a hot topic, so much that an entire research field (XAI, from eXplainable AI) has born. We already commented several times how such problems should be compulsory accounted for also for autonomous development, possibly with the help of causal models or other symbolic models. This is indeed a key challenge for the several research research community, too, where explaining individual and social behaviors, and patterns and global configurations emerging from local interactions is mostly still considered a ``holy grail''.
\section{Conclusions}\label{sec:conclusions}
The general vision of autonomous development is still far from being a reality. However, several ideas in the areas of machine learning, causality, and multi-agent systems are already showing its potential feasibility and applicability, at least in specific application areas. We argue that researchers in these areas have plenty of room for further exploring the topic and contributing to advancing the vision, possibly addressing the many open issues that we have tried to identify.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
|
\section*{Abstract (Not appropriate in this style!)}%
\else \small
\begin{center}{\bf Abstract\vspace{-.5em}\vspace{\z@}}\end{center}%
\quotation
\fi
}%
}{%
}%
\@ifundefined{endabstract}{\def\endabstract
{\if@twocolumn\else\endquotation\fi}}{}%
\@ifundefined{maketitle}{\def\maketitle#1{}}{}%
\@ifundefined{affiliation}{\def\affiliation#1{}}{}%
\@ifundefined{proof}{\def\proof{\noindent{\bfseries Proof. }}}{}%
\@ifundefined{endproof}{\def\endproof{\mbox{\ \rule{.1in}{.1in}}}}{}%
\@ifundefined{newfield}{\def\newfield#1#2{}}{}%
\@ifundefined{chapter}{\def\chapter#1{\par(Chapter head:)#1\par }%
\newcount\c@chapter}{}%
\@ifundefined{part}{\def\part#1{\par(Part head:)#1\par }}{}%
\@ifundefined{section}{\def\section#1{\par(Section head:)#1\par }}{}%
\@ifundefined{subsection}{\def\subsection#1%
{\par(Subsection head:)#1\par }}{}%
\@ifundefined{subsubsection}{\def\subsubsection#1%
{\par(Subsubsection head:)#1\par }}{}%
\@ifundefined{paragraph}{\def\paragraph#1%
{\par(Subsubsubsection head:)#1\par }}{}%
\@ifundefined{subparagraph}{\def\subparagraph#1%
{\par(Subsubsubsubsection head:)#1\par }}{}%
\@ifundefined{therefore}{\def\therefore{}}{}%
\@ifundefined{backepsilon}{\def\backepsilon{}}{}%
\@ifundefined{yen}{\def\yen{\hbox{\rm\rlap=Y}}}{}%
\@ifundefined{registered}{%
\def\registered{\relax\ifmmode{}\r@gistered
\else$\m@th\r@gistered$\fi}%
\def\r@gistered{^{\ooalign
{\hfil\raise.07ex\hbox{$\scriptstyle\rm\RIfM@\expandafter\text@\else\expandafter\mbox\fi{R}$}\hfil\crcr
\mathhexbox20D}}}}{}%
\@ifundefined{Eth}{\def\Eth{}}{}%
\@ifundefined{eth}{\def\eth{}}{}%
\@ifundefined{Thorn}{\def\Thorn{}}{}%
\@ifundefined{thorn}{\def\thorn{}}{}%
\def\TEXTsymbol#1{\mbox{$#1$}}%
\@ifundefined{degree}{\def\degree{{}^{\circ}}}{}%
\newdimen\theight
\def\Column{%
\vadjust{\setbox\z@=\hbox{\scriptsize\quad\quad tcol}%
\theight=\ht\z@\advance\theight by \dp\z@\advance\theight by \lineskip
\kern -\theight \vbox to \theight{%
\rightline{\rlap{\box\z@}}%
\vss
}%
}%
}%
\def\qed{%
\ifhmode\unskip\nobreak\fi\ifmmode\ifinner\else\hskip5\p@\fi\fi
\hbox{\hskip5\p@\vrule width4\p@ height6\p@ depth1.5\p@\hskip\p@}%
}%
\def\cents{\hbox{\rm\rlap/c}}%
\def\miss{\hbox{\vrule height2\p@ width 2\p@ depth\z@}}%
\def\vvert{\Vert}%
\def\tcol#1{{\baselineskip=6\p@ \vcenter{#1}} \Column} %
\def\dB{\hbox{{}}}%
\def\mB#1{\hbox{$#1$}}%
\def\nB#1{\hbox{#1}}%
\@ifundefined{note}{\def\note{$^{\dag}}}{}%
\defLaTeX2e{LaTeX2e}
\ifx\fmtnameLaTeX2e
\DeclareOldFontCommand{\rm}{\normalfont\rmfamily}{\mathrm}
\DeclareOldFontCommand{\sf}{\normalfont\sffamily}{\mathsf}
\DeclareOldFontCommand{\tt}{\normalfont\ttfamily}{\mathtt}
\DeclareOldFontCommand{\bf}{\normalfont\bfseries}{\mathbf}
\DeclareOldFontCommand{\it}{\normalfont\itshape}{\mathit}
\DeclareOldFontCommand{\sl}{\normalfont\slshape}{\@nomath\sl}
\DeclareOldFontCommand{\sc}{\normalfont\scshape}{\@nomath\sc}
\fi
\def\alpha{{\Greekmath 010B}}%
\def\beta{{\Greekmath 010C}}%
\def\gamma{{\Greekmath 010D}}%
\def\delta{{\Greekmath 010E}}%
\def\epsilon{{\Greekmath 010F}}%
\def\zeta{{\Greekmath 0110}}%
\def\eta{{\Greekmath 0111}}%
\def\theta{{\Greekmath 0112}}%
\def\iota{{\Greekmath 0113}}%
\def\kappa{{\Greekmath 0114}}%
\def\lambda{{\Greekmath 0115}}%
\def\mu{{\Greekmath 0116}}%
\def\nu{{\Greekmath 0117}}%
\def\xi{{\Greekmath 0118}}%
\def\pi{{\Greekmath 0119}}%
\def\rho{{\Greekmath 011A}}%
\def\sigma{{\Greekmath 011B}}%
\def\tau{{\Greekmath 011C}}%
\def\upsilon{{\Greekmath 011D}}%
\def\phi{{\Greekmath 011E}}%
\def\chi{{\Greekmath 011F}}%
\def\psi{{\Greekmath 0120}}%
\def\omega{{\Greekmath 0121}}%
\def\varepsilon{{\Greekmath 0122}}%
\def\vartheta{{\Greekmath 0123}}%
\def\varpi{{\Greekmath 0124}}%
\def\varrho{{\Greekmath 0125}}%
\def\varsigma{{\Greekmath 0126}}%
\def\varphi{{\Greekmath 0127}}%
\def{\Greekmath 0272}{{\Greekmath 0272}}
\def\FindBoldGroup{%
{\setbox0=\hbox{$\mathbf{x\global\edef\theboldgroup{\the\mathgroup}}$}}%
}
\def\Greekmath#1#2#3#4{%
\if@compatibility
\ifnum\mathgroup=\symbold
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\else
\FindBoldGroup
\ifnum\mathgroup=\theboldgroup
\mathchoice{\mbox{\boldmath$\displaystyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\textstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptstyle\mathchar"#1#2#3#4$}}%
{\mbox{\boldmath$\scriptscriptstyle\mathchar"#1#2#3#4$}}%
\else
\mathchar"#1#2#3#4%
\fi
\fi}
\newif\ifGreekBold \GreekBoldfalse
\let\SAVEPBF=\pbf
\def\pbf{\GreekBoldtrue\SAVEPBF}%
\@ifundefined{theorem}{\newtheorem{theorem}{Theorem}}{}
\@ifundefined{lemma}{\newtheorem{lemma}[theorem]{Lemma}}{}
\@ifundefined{corollary}{\newtheorem{corollary}[theorem]{Corollary}}{}
\@ifundefined{conjecture}{\newtheorem{conjecture}[theorem]{Conjecture}}{}
\@ifundefined{proposition}{\newtheorem{proposition}[theorem]{Proposition}}{}
\@ifundefined{axiom}{\newtheorem{axiom}{Axiom}}{}
\@ifundefined{remark}{\newtheorem{remark}{Remark}}{}
\@ifundefined{example}{\newtheorem{example}{Example}}{}
\@ifundefined{exercise}{\newtheorem{exercise}{Exercise}}{}
\@ifundefined{definition}{\newtheorem{definition}{Definition}}{}
\@ifundefined{mathletters}{%
\newcounter{equationnumber}
\def\mathletters{%
\addtocounter{equation}{1}
\edef\@currentlabel{\arabic{equation}}%
\setcounter{equationnumber}{\c@equation}
\setcounter{equation}{0}%
\edef\arabic{equation}{\@currentlabel\noexpand\alph{equation}}%
}
\def\endmathletters{%
\setcounter{equation}{\value{equationnumber}}%
}
}{}
\@ifundefined{BibTeX}{%
\def\BibTeX{{\rm B\kern-.05em{\sc i\kern-.025em b}\kern-.08em
T\kern-.1667em\lower.7ex\hbox{E}\kern-.125emX}}}{}%
\@ifundefined{AmS}%
{\def\AmS{{\protect\usefont{OMS}{cmsy}{m}{n}%
A\kern-.1667em\lower.5ex\hbox{M}\kern-.125emS}}}{}%
\@ifundefined{AmSTeX}{\def\AmSTeX{\protect\AmS-\protect\TeX\@}}{}%
\def\@@eqncr{\let\@tempa\relax
\ifcase\@eqcnt \def\@tempa{& & &}\or \def\@tempa{& &}%
\else \def\@tempa{&}\fi
\@tempa
\if@eqnsw
\iftag@
\@taggnum
\else
\@eqnnum\stepcounter{equation}%
\fi
\fi
\global\@ifnextchar*{\@tagstar}{\@tag}@false
\global\@eqnswtrue
\global\@eqcnt\z@\cr}
\def\@ifnextchar*{\@TCItagstar}{\@TCItag}{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{(#1)}}
\def\@TCItagstar*#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{#1}}
\def\tfrac#1#2{{\textstyle {#1 \over #2}}}%
\def\dfrac#1#2{{\displaystyle {#1 \over #2}}}%
\def\binom#1#2{{#1 \choose #2}}%
\def\tbinom#1#2{{\textstyle {#1 \choose #2}}}%
\def\dbinom#1#2{{\displaystyle {#1 \choose #2}}}%
\def\QATOP#1#2{{#1 \atop #2}}%
\def\QTATOP#1#2{{\textstyle {#1 \atop #2}}}%
\def\QDATOP#1#2{{\displaystyle {#1 \atop #2}}}%
\def\QABOVE#1#2#3{{#2 \above#1 #3}}%
\def\QTABOVE#1#2#3{{\textstyle {#2 \above#1 #3}}}%
\def\QDABOVE#1#2#3{{\displaystyle {#2 \above#1 #3}}}%
\def\QOVERD#1#2#3#4{{#3 \overwithdelims#1#2 #4}}%
\def\QTOVERD#1#2#3#4{{\textstyle {#3 \overwithdelims#1#2 #4}}}%
\def\QDOVERD#1#2#3#4{{\displaystyle {#3 \overwithdelims#1#2 #4}}}%
\def\QATOPD#1#2#3#4{{#3 \atopwithdelims#1#2 #4}}%
\def\QTATOPD#1#2#3#4{{\textstyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QDATOPD#1#2#3#4{{\displaystyle {#3 \atopwithdelims#1#2 #4}}}%
\def\QABOVED#1#2#3#4#5{{#4 \abovewithdelims#1#2#3 #5}}%
\def\QTABOVED#1#2#3#4#5{{\textstyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\QDABOVED#1#2#3#4#5{{\displaystyle
{#4 \abovewithdelims#1#2#3 #5}}}%
\def\tint{\mathop{\textstyle \int}}%
\def\tiint{\mathop{\textstyle \iint }}%
\def\tiiint{\mathop{\textstyle \iiint }}%
\def\tiiiint{\mathop{\textstyle \iiiint }}%
\def\tidotsint{\mathop{\textstyle \idotsint }}%
\def\toint{\mathop{\textstyle \oint}}%
\def\tsum{\mathop{\textstyle \sum }}%
\def\tprod{\mathop{\textstyle \prod }}%
\def\tbigcap{\mathop{\textstyle \bigcap }}%
\def\tbigwedge{\mathop{\textstyle \bigwedge }}%
\def\tbigoplus{\mathop{\textstyle \bigoplus }}%
\def\tbigodot{\mathop{\textstyle \bigodot }}%
\def\tbigsqcup{\mathop{\textstyle \bigsqcup }}%
\def\tcoprod{\mathop{\textstyle \coprod }}%
\def\tbigcup{\mathop{\textstyle \bigcup }}%
\def\tbigvee{\mathop{\textstyle \bigvee }}%
\def\tbigotimes{\mathop{\textstyle \bigotimes }}%
\def\tbiguplus{\mathop{\textstyle \biguplus }}%
\def\dint{\mathop{\displaystyle \int}}%
\def\diint{\mathop{\displaystyle \iint }}%
\def\diiint{\mathop{\displaystyle \iiint }}%
\def\diiiint{\mathop{\displaystyle \iiiint }}%
\def\didotsint{\mathop{\displaystyle \idotsint }}%
\def\doint{\mathop{\displaystyle \oint}}%
\def\dsum{\mathop{\displaystyle \sum }}%
\def\dprod{\mathop{\displaystyle \prod }}%
\def\dbigcap{\mathop{\displaystyle \bigcap }}%
\def\dbigwedge{\mathop{\displaystyle \bigwedge }}%
\def\dbigoplus{\mathop{\displaystyle \bigoplus }}%
\def\dbigodot{\mathop{\displaystyle \bigodot }}%
\def\dbigsqcup{\mathop{\displaystyle \bigsqcup }}%
\def\dcoprod{\mathop{\displaystyle \coprod }}%
\def\dbigcup{\mathop{\displaystyle \bigcup }}%
\def\dbigvee{\mathop{\displaystyle \bigvee }}%
\def\dbigotimes{\mathop{\displaystyle \bigotimes }}%
\def\dbiguplus{\mathop{\displaystyle \biguplus }}%
\ifx\ds@amstex\relax
\message{amstex already loaded}\makeatother\endinput
\else
\@ifpackageloaded{amsmath}%
{\message{amsmath already loaded}\makeatother\endinput}
{}
\@ifpackageloaded{amstex}%
{\message{amstex already loaded}\makeatother\endinput}
{}
\@ifpackageloaded{amsgen}%
{\message{amsgen already loaded}\makeatother\endinput}
{}
\fi
\let\DOTSI\relax
\def\RIfM@{\relax\ifmmode}%
\def\FN@{\futurelet\next}%
\newcount\intno@
\def\iint{\DOTSI\intno@\tw@\FN@\ints@}%
\def\iiint{\DOTSI\intno@\thr@@\FN@\ints@}%
\def\iiiint{\DOTSI\intno@4 \FN@\ints@}%
\def\idotsint{\DOTSI\intno@\z@\FN@\ints@}%
\def\ints@{\findlimits@\ints@@}%
\newif\iflimtoken@
\newif\iflimits@
\def\findlimits@{\limtoken@true\ifx\next\limits\limits@true
\else\ifx\next\nolimits\limits@false\else
\limtoken@false\ifx\ilimits@\nolimits\limits@false\else
\ifinner\limits@false\else\limits@true\fi\fi\fi\fi}%
\def\multint@{\int\ifnum\intno@=\z@\intdots@
\else\intkern@\fi
\ifnum\intno@>\tw@\int\intkern@\fi
\ifnum\intno@>\thr@@\int\intkern@\fi
\int}%
\def\multintlimits@{\intop\ifnum\intno@=\z@\intdots@\else\intkern@\fi
\ifnum\intno@>\tw@\intop\intkern@\fi
\ifnum\intno@>\thr@@\intop\intkern@\fi\intop}%
\def\intic@{%
\mathchoice{\hskip.5em}{\hskip.4em}{\hskip.4em}{\hskip.4em}}%
\def\negintic@{\mathchoice
{\hskip-.5em}{\hskip-.4em}{\hskip-.4em}{\hskip-.4em}}%
\def\ints@@{\iflimtoken@
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits
\else\multint@\nolimits\fi}%
\eat@
\else
\def\ints@@@{\iflimits@\negintic@
\mathop{\intic@\multintlimits@}\limits\else
\multint@\nolimits\fi}\fi\ints@@@}%
\def\intkern@{\mathchoice{\!\!\!}{\!\!}{\!\!}{\!\!}}%
\def\plaincdots@{\mathinner{\cdotp\cdotp\cdotp}}%
\def\intdots@{\mathchoice{\plaincdots@}%
{{\cdotp}\mkern1.5mu{\cdotp}\mkern1.5mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}%
{{\cdotp}\mkern1mu{\cdotp}\mkern1mu{\cdotp}}}%
\def\RIfM@{\relax\protect\ifmmode}
\def\RIfM@\expandafter\text@\else\expandafter\mbox\fi{\RIfM@\expandafter\RIfM@\expandafter\text@\else\expandafter\mbox\fi@\else\expandafter\mbox\fi}
\let\nfss@text\RIfM@\expandafter\text@\else\expandafter\mbox\fi
\def\RIfM@\expandafter\text@\else\expandafter\mbox\fi@#1{\mathchoice
{\textdef@\displaystyle\f@size{#1}}%
{\textdef@\textstyle\tf@size{\firstchoice@false #1}}%
{\textdef@\textstyle\sf@size{\firstchoice@false #1}}%
{\textdef@\textstyle \ssf@size{\firstchoice@false #1}}%
\glb@settings}
\def\textdef@#1#2#3{\hbox{{%
\everymath{#1}%
\let\f@size#2\selectfont
#3}}}
\newif\iffirstchoice@
\firstchoice@true
\def\Let@{\relax\iffalse{\fi\let\\=\cr\iffalse}\fi}%
\def\vspace@{\def\vspace##1{\crcr\noalign{\vskip##1\relax}}}%
\def\multilimits@{\bgroup\vspace@\Let@
\baselineskip\fontdimen10 \scriptfont\tw@
\advance\baselineskip\fontdimen12 \scriptfont\tw@
\lineskip\thr@@\fontdimen8 \scriptfont\thr@@
\lineskiplimit\lineskip
\vbox\bgroup\ialign\bgroup\hfil$\m@th\scriptstyle{##}$\hfil\crcr}%
\def\Sb{_\multilimits@}%
\def\endSb{\crcr\egroup\egroup\egroup}%
\def\Sp{^\multilimits@}%
\let\endSp\endSb
\newdimen\ex@
\[email protected]
\def\rightarrowfill@#1{$#1\m@th\mathord-\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\leftarrowfill@#1{$#1\m@th\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill\mkern-6mu\mathord-$}%
\def\leftrightarrowfill@#1{$#1\m@th\mathord\leftarrow
\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\mathord-\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}%
\def\overrightarrow{\mathpalette\overrightarrow@}%
\def\overrightarrow@#1#2{\vbox{\ialign{##\crcr\rightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\let\overarrow\overrightarrow
\def\overleftarrow{\mathpalette\overleftarrow@}%
\def\overleftarrow@#1#2{\vbox{\ialign{##\crcr\leftarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\overleftrightarrow{\mathpalette\overleftrightarrow@}%
\def\overleftrightarrow@#1#2{\vbox{\ialign{##\crcr
\leftrightarrowfill@#1\crcr
\noalign{\kern-\ex@\nointerlineskip}$\m@th\hfil#1#2\hfil$\crcr}}}%
\def\underrightarrow{\mathpalette\underrightarrow@}%
\def\underrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\rightarrowfill@#1\crcr}}}%
\let\underarrow\underrightarrow
\def\underleftarrow{\mathpalette\underleftarrow@}%
\def\underleftarrow@#1#2{\vtop{\ialign{##\crcr$\m@th\hfil#1#2\hfil
$\crcr\noalign{\nointerlineskip}\leftarrowfill@#1\crcr}}}%
\def\underleftrightarrow{\mathpalette\underleftrightarrow@}%
\def\underleftrightarrow@#1#2{\vtop{\ialign{##\crcr$\m@th
\hfil#1#2\hfil$\crcr
\noalign{\nointerlineskip}\leftrightarrowfill@#1\crcr}}}%
\def\qopnamewl@#1{\mathop{\operator@font#1}\nlimits@}
\let\nlimits@\displaylimits
\def\setboxz@h{\setbox\z@\hbox}
\def\varlim@#1#2{\mathop{\vtop{\ialign{##\crcr
\hfil$#1\m@th\operator@font lim$\hfil\crcr
\noalign{\nointerlineskip}#2#1\crcr
\noalign{\nointerlineskip\kern-\ex@}\crcr}}}}
\def\rightarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\copy\z@\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\box\z@\mkern-2mu$}\hfill
\mkern-6mu\mathord\rightarrow$}
\def\leftarrowfill@#1{\m@th\setboxz@h{$#1-$}\ht\z@\z@
$#1\mathord\leftarrow\mkern-6mu\cleaders
\hbox{$#1\mkern-2mu\copy\z@\mkern-2mu$}\hfill
\mkern-6mu\box\z@$}
\def\qopnamewl@{proj\,lim}{\qopnamewl@{proj\,lim}}
\def\qopnamewl@{inj\,lim}{\qopnamewl@{inj\,lim}}
\def\mathpalette\varlim@\rightarrowfill@{\mathpalette\varlim@\rightarrowfill@}
\def\mathpalette\varlim@\leftarrowfill@{\mathpalette\varlim@\leftarrowfill@}
\def\mathpalette\varliminf@{}{\mathpalette\mathpalette\varliminf@{}@{}}
\def\mathpalette\varliminf@{}@#1{\mathop{\underline{\vrule\@depth.2\ex@\@width\z@
\hbox{$#1\m@th\operator@font lim$}}}}
\def\mathpalette\varlimsup@{}{\mathpalette\mathpalette\varlimsup@{}@{}}
\def\mathpalette\varlimsup@{}@#1{\mathop{\overline
{\hbox{$#1\m@th\operator@font lim$}}}}
\def\stackunder#1#2{\mathrel{\mathop{#2}\limits_{#1}}}%
\begingroup \catcode `|=0 \catcode `[= 1
\catcode`]=2 \catcode `\{=12 \catcode `\}=12
\catcode`\\=12
|gdef|@alignverbatim#1\end{align}[#1|end[align]]
|gdef|@salignverbatim#1\end{align*}[#1|end[align*]]
|gdef|@alignatverbatim#1\end{alignat}[#1|end[alignat]]
|gdef|@salignatverbatim#1\end{alignat*}[#1|end[alignat*]]
|gdef|@xalignatverbatim#1\end{xalignat}[#1|end[xalignat]]
|gdef|@sxalignatverbatim#1\end{xalignat*}[#1|end[xalignat*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@gatherverbatim#1\end{gather}[#1|end[gather]]
|gdef|@sgatherverbatim#1\end{gather*}[#1|end[gather*]]
|gdef|@multilineverbatim#1\end{multiline}[#1|end[multiline]]
|gdef|@smultilineverbatim#1\end{multiline*}[#1|end[multiline*]]
|gdef|@arraxverbatim#1\end{arrax}[#1|end[arrax]]
|gdef|@sarraxverbatim#1\end{arrax*}[#1|end[arrax*]]
|gdef|@tabulaxverbatim#1\end{tabulax}[#1|end[tabulax]]
|gdef|@stabulaxverbatim#1\end{tabulax*}[#1|end[tabulax*]]
|endgroup
\def\align{\@verbatim \frenchspacing\@vobeyspaces \@alignverbatim
You are using the "align" environment in a style in which it is not defined.}
\let\endalign=\endtrivlist
\@namedef{align*}{\@verbatim\@salignverbatim
You are using the "align*" environment in a style in which it is not defined.}
\expandafter\let\csname endalign*\endcsname =\endtrivlist
\def\alignat{\@verbatim \frenchspacing\@vobeyspaces \@alignatverbatim
You are using the "alignat" environment in a style in which it is not defined.}
\let\endalignat=\endtrivlist
\@namedef{alignat*}{\@verbatim\@salignatverbatim
You are using the "alignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endalignat*\endcsname =\endtrivlist
\def\xalignat{\@verbatim \frenchspacing\@vobeyspaces \@xalignatverbatim
You are using the "xalignat" environment in a style in which it is not defined.}
\let\endxalignat=\endtrivlist
\@namedef{xalignat*}{\@verbatim\@sxalignatverbatim
You are using the "xalignat*" environment in a style in which it is not defined.}
\expandafter\let\csname endxalignat*\endcsname =\endtrivlist
\def\gather{\@verbatim \frenchspacing\@vobeyspaces \@gatherverbatim
You are using the "gather" environment in a style in which it is not defined.}
\let\endgather=\endtrivlist
\@namedef{gather*}{\@verbatim\@sgatherverbatim
You are using the "gather*" environment in a style in which it is not defined.}
\expandafter\let\csname endgather*\endcsname =\endtrivlist
\def\multiline{\@verbatim \frenchspacing\@vobeyspaces \@multilineverbatim
You are using the "multiline" environment in a style in which it is not defined.}
\let\endmultiline=\endtrivlist
\@namedef{multiline*}{\@verbatim\@smultilineverbatim
You are using the "multiline*" environment in a style in which it is not defined.}
\expandafter\let\csname endmultiline*\endcsname =\endtrivlist
\def\arrax{\@verbatim \frenchspacing\@vobeyspaces \@arraxverbatim
You are using a type of "array" construct that is only allowed in AmS-LaTeX.}
\let\endarrax=\endtrivlist
\def\tabulax{\@verbatim \frenchspacing\@vobeyspaces \@tabulaxverbatim
You are using a type of "tabular" construct that is only allowed in AmS-LaTeX.}
\let\endtabulax=\endtrivlist
\@namedef{arrax*}{\@verbatim\@sarraxverbatim
You are using a type of "array*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endarrax*\endcsname =\endtrivlist
\@namedef{tabulax*}{\@verbatim\@stabulaxverbatim
You are using a type of "tabular*" construct that is only allowed in AmS-LaTeX.}
\expandafter\let\csname endtabulax*\endcsname =\endtrivlist
\def\endequation{%
\ifmmode\ifinner
\iftag@
\addtocounter{equation}{-1}
$\hfil
\displaywidth\linewidth\@taggnum\egroup \endtrivlist
\global\@ifnextchar*{\@tagstar}{\@tag}@false
\global\@ignoretrue
\else
$\hfil
\displaywidth\linewidth\@eqnnum\egroup \endtrivlist
\global\@ifnextchar*{\@tagstar}{\@tag}@false
\global\@ignoretrue
\fi
\else
\iftag@
\addtocounter{equation}{-1}
\eqno \hbox{\@taggnum}
\global\@ifnextchar*{\@tagstar}{\@tag}@false%
$$\global\@ignoretrue
\else
\eqno \hbox{\@eqnnum}
$$\global\@ignoretrue
\fi
\fi\fi
}
\newif\iftag@ \@ifnextchar*{\@tagstar}{\@tag}@false
\def\@ifnextchar*{\@TCItagstar}{\@TCItag}{\@ifnextchar*{\@TCItagstar}{\@TCItag}}
\def\@TCItag#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{(#1)}}
\def\@TCItagstar*#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{#1}}
\@ifundefined{tag}{
\def\@ifnextchar*{\@tagstar}{\@tag}{\@ifnextchar*{\@tagstar}{\@tag}}
\def\@tag#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{(#1)}}
\def\@tagstar*#1{%
\global\@ifnextchar*{\@tagstar}{\@tag}@true
\global\def\@taggnum{#1}}
}{}
\makeatother
\endinput
\section{Introduction}
\emph{The mean-value theorem} (MVT), or Lagrange MVT, is considered
one of the most useful and fundamental results in analysis. It is
named after the French mathematician Joseph Lagrange, who presented
it in his book Th\'{e}orie des fonctions analytiques in 1797. It
states: ``If $f$ is continuous on $[a,b]$ and differentiable on
$(a,b)$, then there is a point $c$, $a<c<b$, such that
\begin{align}
f'(c)=\frac{f(b)-f(a)}{b-a}.\text{''}
\end{align}
We call such a point $c$ a mean-value point of $f$. Further, note
that the mean-value theorem of differentiation can be used to
prove that if $f' \ge 0$ and every sub-interval contains a point at
which $f' > 0$ (in particular, if $f' \ge 0$ with $f'(x)=0$ at only a
finite number of points), then $f$ is strictly increasing.
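As a quick numerical illustration (ours, not part of the original text), a mean-value point $c$ can be located by bisection; the function $f(x)=x^3$, the interval $[0,2]$, and the tolerance below are illustrative choices.

```python
# Hedged illustration: numerically locate a Lagrange mean-value point c
# for f(x) = x**3 on [0, 2]. Here f'(c) must equal (f(2)-f(0))/2 = 4.
def mean_value_point(f, df, a, b, tol=1e-12):
    """Bisect on df(c) - (f(b)-f(a))/(b-a); assumes df is increasing here."""
    target = (f(b) - f(a)) / (b - a)
    lo, hi = a, b
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if df(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f = lambda x: x ** 3
df = lambda x: 3 * x ** 2
c = mean_value_point(f, df, 0.0, 2.0)
print(c)  # ~1.1547 = 2/sqrt(3), since 3c^2 = 4
```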
In recent years several authors have modified and generalized various
types of mean value theorems in different ways and with interesting
approaches. For recent works the reader may refer, for example, to
\cite{Abela}, \cite{Tan}, \cite{Trokhimchuk}; for general reading
the interested researcher may find hundreds of works cited in
the book \cite{Sahoo}.
In 1946, Pompeiu established another MVT for real functions
defined on a real interval not containing `$0$'; it is nowadays
known as Pompeiu's mean value theorem, and states
\cite{Pompeiu} (see also, p.~83, \cite{Sahoo}):
\begin{theorem}
\label{pomp}For every real valued function $f$ differentiable on
an interval $[a,b]$ not containing $0$ and for all pairs $x_1\ne
x_2$ in $[a,b]$ there exists a point $\xi\in (x_1,x_2)$ such that
\begin{align}
\frac{{x_1 f\left( {x_2 } \right) - x_2 f\left( {x_1 }
\right)}}{{x_1 - x_2 }} = f\left( \xi \right) - \xi f'\left( \xi
\right).
\end{align}
\end{theorem}
The geometrical interpretation of this theorem, as given in
\cite{Sahoo}, is that the tangent at the point $\left( {\xi ,f\left( \xi
\right)} \right)$ intersects the $y$-axis at the same point as
the secant line connecting the points $ \left( {x_1 ,f\left( {x_1
} \right)} \right)$ and $ \left( {x_2 ,f\left( {x_2 } \right)}
\right)$. \\
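Theorem \ref{pomp} is easy to check numerically; in the illustrative sketch below (our own choices of $f$ and interval), $f(x)=x^2$ on $[1,2]$ gives the mean-value point $\xi=\sqrt{2}$.

```python
# Hedged sketch: solve f(xi) - xi*f'(xi) = (x1*f(x2) - x2*f(x1))/(x1 - x2)
# for xi by bisection. f(x) = x**2 on [1, 2] is an illustrative choice.
def pompeiu_point(f, df, x1, x2, tol=1e-12):
    lhs = (x1 * f(x2) - x2 * f(x1)) / (x1 - x2)
    g = lambda x: f(x) - x * df(x) - lhs
    lo, hi = x1, x2
    sign_lo = g(lo) > 0          # bracket assumed: g changes sign on [x1, x2]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (g(mid) > 0) == sign_lo:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

f = lambda x: x ** 2
df = lambda x: 2 * x
xi = pompeiu_point(f, df, 1.0, 2.0)
print(xi)  # ~1.41421 = sqrt(2): tangent and secant meet the y-axis together
```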
In 1947, Boggio \cite{Boggio} (see also, p.~92, \cite{Sahoo})
established the following generalization of Pompeiu's mean value
theorem (Theorem \ref{pomp}):
\begin{theorem}
\label{Boggio}For every real valued functions $f$ and $g$
differentiable on an interval $[a,b]$ not containing $0$ and for
all pairs $x_1\ne x_2$ in $[a,b]$ there exists a point $\xi\in
(x_1,x_2)$ such that
\begin{align}
\label{eq.Boggio}\frac{{g\left( {x_1 } \right)f\left( {x_2 }
\right) - g\left( {x_2 } \right)f\left( {x_1 } \right)}}{{g\left(
{x_1 } \right) - g\left( {x_2 } \right)}} = f\left( \xi \right) -
\frac{{g\left( \xi \right)}}{{g'\left( \xi \right)}}f'\left( \xi
\right).
\end{align}
\end{theorem}
In their famous book \cite{Sahoo}, Sahoo and Riedel discuss
various types of mean value theorems for functions of one or more
variables. Among others, they state at the end of Chapter four
(p.~145, \cite{Sahoo}) that: ``\emph{We have not been able to
generalize Pompeiu's mean value theorem for functions in two
variables}...".
The aim of this work is to answer Sahoo and Riedel's problem about
the characterization of Pompeiu's MVT for functions of two
variables. Namely, for functions of two variables we prove three
mean value theorems: the Cauchy, Pompeiu, and
Cauchy--Pompeiu mean value theorems.\\
Two useful results concerning the Rectangular MVT for functions of two
variables have recently been obtained in (p.~93,
\cite{Ghorpade}):
\begin{theorem}(Rectangular Rolle's Theorem)
\label{RRT}Let $a,b,c,d \in \mathbb{R}$ with $a<b$ and $c<d$, and
let $f:\Delta:= [a,b] \times[c,d] \to \mathbb{R}$ satisfy the following:
\begin{enumerate}
\item For each fixed $y_0 \in [c,d]$, the function given by $x
\mapsto f\left( {x,y_0 } \right)$ is continuous on $[a,b]$ and
differentiable on $(a,b)$.
\item For each fixed $x_0 \in [a,b]$, the function given by $y
\mapsto f_x\left( {x_0,y} \right)$ is continuous on $[c,d]$ and
differentiable on $(c,d)$.
\item $f(a,c)+ f(b,d)=f(a,d)+f(b,c)$.
\end{enumerate}
Then, there exists $(x_0,y_0) \in (a,b)\times (c,d)$ such that
$f_{xy}(x_0,y_0)=0$.
\end{theorem}
\begin{theorem}(Rectangular Mean Value Theorem)
\label{RMVT}Let $a,b,c,d \in \mathbb{R}$ with $a<b$ and $c<d$, and
let $f:\Delta:= [a,b] \times[c,d] \to \mathbb{R}$ satisfy the following:
\begin{enumerate}
\item For each fixed $y_0 \in [c,d]$, the function given by $x
\mapsto f\left( {x,y_0 } \right)$ is continuous on $[a,b]$ and
differentiable on $(a,b)$.
\item For each fixed $x_0 \in [a,b]$, the function given by $y
\mapsto f_x\left( {x_0,y} \right)$ is continuous on $[c,d]$ and
differentiable on $(c,d)$.
\end{enumerate}
Then, there exists $(x_0,y_0) \in (a,b)\times (c,d)$ such that
\begin{eqnarray*}
f\left( {b,d} \right) - f\left( {b,c} \right) - f\left( {a,d}
\right) + f\left( {a,c} \right) = (b-a)(d-c)f_{xy}(x_0,y_0).
\end{eqnarray*}
\end{theorem}
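Theorem \ref{RMVT} can be illustrated numerically (an illustrative sketch of ours; the function and rectangle are arbitrary choices): one checks that $f_{xy}$ attains the mixed second difference quotient at some interior grid point.

```python
import math

# Hedged sketch: for f(x,y) = sin(x)sin(y) on [0,1]x[0,1], verify that
# f_xy comes arbitrarily close to the mixed difference quotient somewhere
# inside the rectangle, as the Rectangular MVT asserts.
def rect_mvt_residual(f, fxy, a, b, c, d, n=400):
    """Smallest |f_xy - mixed difference quotient| over an interior grid."""
    target = (f(b, d) - f(b, c) - f(a, d) + f(a, c)) / ((b - a) * (d - c))
    return min(
        abs(fxy(a + (b - a) * i / n, c + (d - c) * j / n) - target)
        for i in range(1, n)
        for j in range(1, n)
    )

f = lambda x, y: math.sin(x) * math.sin(y)
fxy = lambda x, y: math.cos(x) * math.cos(y)
res = rect_mvt_residual(f, fxy, 0.0, 1.0, 0.0, 1.0)
print(res)  # close to 0: some interior point attains the mean value
```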
\section{The Result}
We begin with the following generalization of Theorem \ref{RRT}
and Theorem \ref{RMVT}:
\begin{theorem}(Rectangular Cauchy Mean Value Theorem)
\label{RCMVT}Let $a,b,c,d \in \mathbb{R}$ with $a<b$ and $c<d$,
and let $f,g:\Delta:= [a,b] \times[c,d] \to \mathbb{R}$ satisfy the following:
\begin{enumerate}
\item For each fixed $y_0 \in [c,d]$, the functions given by $x
\mapsto f\left( {x,y_0 } \right)$ and $x \mapsto g\left( {x,y_0 }
\right)$ are continuous on $[a,b]$ and differentiable on $(a,b)$.
\item For each fixed $x_0 \in [a,b]$, the functions given by $y
\mapsto f_x\left( {x_0,y} \right)$ and $y \mapsto g_x\left( {x_0,y
} \right)$ are continuous on $[c,d]$ and differentiable on
$(c,d)$.
\end{enumerate}
Then, there exists $(x_0,y_0) \in (a,b)\times (c,d)$ such that
\begin{eqnarray}
\label{eq.RCMVT}\frac{{f_{xy} \left( {x_0 ,y_0 } \right)}}{{g_{xy}
\left( {x_0 ,y_0 } \right)}} = \frac{{f\left( {b,d} \right) -
f\left( {b,c} \right) - f\left( {a,d} \right) + f\left( {a,c}
\right)}}{{g\left( {b,d} \right) - g\left( {b,c} \right) - g\left(
{a,d} \right) + g\left( {a,c} \right)}}.
\end{eqnarray}
\end{theorem}
\begin{proof}
Define the function $H:\Delta \to \mathbb{R}$, given by
\begin{multline*}
H\left( {x,y} \right) = \left[ {f\left( {b,d} \right) - f\left( {b,c} \right) - f\left( {a,d} \right) + f\left( {a,c} \right)} \right]g\left({x,y}\right)
\\
- \left[ {g\left( {b,d} \right) - g\left( {b,c} \right) - g\left(
{a,d} \right) + g\left( {a,c} \right)} \right]f\left( {x,y}
\right).
\end{multline*}
It is easy to see that $H$ is continuous and differentiable on
$\Delta$, and
\begin{multline*}
H\left( {a,c} \right) = \left[ {f\left( {b,d} \right) - f\left(
{b,c} \right) - f\left( {a,d} \right) + f\left( {a,c} \right)}
\right]g\left( {a,c} \right)
\\
- \left[ {g\left( {b,d} \right) - g\left( {b,c} \right) - g\left(
{a,d} \right) + g\left( {a,c} \right)} \right]f\left( {a,c}
\right),
\end{multline*}
\begin{multline*}
H\left( {a,d} \right) = \left[ {f\left( {b,d} \right) - f\left(
{b,c} \right) - f\left( {a,d} \right) + f\left( {a,c} \right)}
\right]g\left( {a,d} \right)
\\
- \left[ {g\left( {b,d} \right) - g\left( {b,c} \right) - g\left(
{a,d} \right) + g\left( {a,c} \right)} \right]f\left( {a,d}
\right),
\end{multline*}
\begin{multline*}
H\left( {b,c} \right) = \left[ {f\left( {b,d} \right) - f\left(
{b,c} \right) - f\left( {a,d} \right) + f\left( {a,c} \right)}
\right]g\left( {b,c} \right)
\\
- \left[ {g\left( {b,d} \right) - g\left( {b,c} \right) - g\left(
{a,d} \right) + g\left( {a,c} \right)} \right]f\left( {b,c}
\right),
\end{multline*}
and
\begin{multline*}
H\left( {b,d} \right) = \left[ {f\left( {b,d} \right) - f\left(
{b,c} \right) - f\left( {a,d} \right) + f\left( {a,c} \right)}
\right]g\left( {b,d} \right)
\\
- \left[ {g\left( {b,d} \right) - g\left( {b,c} \right) - g\left(
{a,d} \right) + g\left( {a,c} \right)} \right]f\left( {b,d}
\right),
\end{multline*}
then we have
\begin{align*}
&H\left( {a,c} \right) - H\left( {a,d} \right) - H\left( {b,c}
\right) + H\left( {b,d} \right)
\\
&= \left[ {f\left( {b,d} \right) - f\left( {b,c} \right) - f\left(
{a,d} \right) + f\left( {a,c} \right)} \right]\left[ {g\left(
{a,c} \right) - g\left( {a,d} \right) - g\left( {b,c} \right) +
g\left( {b,d} \right)} \right]
\\
&\qquad- \left[ {g\left( {a,c} \right) - g\left( {a,d} \right) -
g\left( {b,c} \right) + g\left( {b,d} \right)} \right]\left[
{f\left( {b,d} \right) - f\left( {b,c} \right) - f\left( {a,d}
\right) + f\left( {a,c} \right)} \right]
\\
&= 0 ,
\end{align*}
which gives that $H\left( {a,d} \right) + H\left( {b,c}
\right)=H\left( {a,c} \right) + H\left( {b,d} \right)$. So by the
Rectangular Rolle's Theorem \ref{RRT}, there is $(x_0, y_0) \in
(a,b) \times (c,d)$ such that $H_{xy}(x_0,y_0) = 0$, therefore
\begin{multline*}
H_{xy} \left( {x_0 ,y_0 } \right) = \left[ {f\left( {b,d} \right)
- f\left( {b,c} \right) - f\left( {a,d} \right) + f\left( {a,c}
\right)} \right]g_{xy} \left( {x_0 ,y_0 } \right)
\\
- \left[ {g\left( {b,d} \right) - g\left( {b,c} \right) - g\left(
{a,d} \right) + g\left( {a,c} \right)} \right]f_{xy} \left( {x_0
,y_0 } \right) = 0,
\end{multline*}
which gives that
\begin{eqnarray*}
\frac{{f_{xy} \left( {x_0 ,y_0 } \right)}}{{g_{xy} \left( {x_0
,y_0 } \right)}} = \frac{{f\left( {b,d} \right) - f\left( {b,c}
\right) - f\left( {a,d} \right) + f\left( {a,c} \right)}}{{g\left(
{b,d} \right) - g\left( {b,c} \right) - g\left( {a,d} \right) +
g\left( {a,c} \right)}}.
\end{eqnarray*}
This yields the desired result.
\end{proof}
\begin{remark}
In Theorem \ref{RCMVT}, if one chooses $g(t,s)=ts$, then we
recapture the Rectangular Mean Value Theorem \ref{RMVT}.
\end{remark}
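A quick numerical check of (\ref{eq.RCMVT}) (our own illustrative sketch, with an arbitrary pair of functions): a grid search finds an interior point where $f_{xy}/g_{xy}$ attains the ratio of the mixed differences.

```python
import math

# Hedged sketch: check the Rectangular Cauchy MVT for the illustrative pair
# f(x,y) = sin(x)sin(y), g(x,y) = (x*y)**2 on [0,1]x[0,1].
def rect_cauchy_residual(f, fxy, g, gxy, a, b, c, d, n=400):
    mixed = lambda h: h(b, d) - h(b, c) - h(a, d) + h(a, c)
    target = mixed(f) / mixed(g)
    return min(
        abs(fxy(a + (b - a) * i / n, c + (d - c) * j / n)
            / gxy(a + (b - a) * i / n, c + (d - c) * j / n) - target)
        for i in range(1, n)
        for j in range(1, n)
    )

f = lambda x, y: math.sin(x) * math.sin(y)
fxy = lambda x, y: math.cos(x) * math.cos(y)
g = lambda x, y: (x * y) ** 2
gxy = lambda x, y: 4 * x * y
res = rect_cauchy_residual(f, fxy, g, gxy, 0.0, 1.0, 0.0, 1.0)
print(res)  # close to 0: an interior point satisfies the Cauchy-type identity
```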
Our first main result, a Pompeiu mean value theorem for
functions of two variables, reads as follows:
\begin{theorem}(Pompeiu's Mean Value Theorem)
\label{thm4}Let $a,b,c,d \in \mathbb{R}$ with $a<b$ and $c<d$, and
let $f:\Delta:= [a,b] \times[c,d]\to \mathbb{R}$ satisfy the
following:
\begin{enumerate}
\item $\Delta$ does not contain the points $(0,\cdot)$, $(\cdot,0)$,
$(0,0)$.
\item For each fixed $y_0 \in [c,d]$, the function given by $x
\mapsto f\left( {x,y_0 } \right)$ is continuous on $[a,b]$ and
differentiable on $(a,b)$.
\item For each fixed $x_0 \in [a,b]$, the function given by $y
\mapsto f_x\left( {x_0,y} \right)$ is continuous on $[c,d]$ and
differentiable on $(c,d)$.
\end{enumerate}
Then, for all pairs $x_1,x_2 \in(a,b)$ with $x_1 \ne x_2$ and
$y_1,y_2 \in (c,d)$ with $y_1 \ne y_2$, there exists
$(\xi_1,\xi_2) \in (x_1,x_2)\times (y_1,y_2)$ such that
\begin{multline}
\label{eq2.2}\xi _1 \xi _2 \frac{{\partial ^2 f}}{{\partial
t\partial s}}\left( {\xi _1 ,\xi _2 } \right) - \xi _1
\frac{{\partial f}}{{\partial t}}\left( {\xi _1 ,\xi _2 } \right)
- \xi _2 \frac{{\partial f}}{{\partial s}}\left( {\xi _1 ,\xi _2 }
\right) + f\left( {\xi _1 ,\xi _2 } \right)
\\
= \frac{{x_2 y_2 f\left( {x_1 ,y_1 } \right) - x_2 y_1 f\left(
{x_1 ,y_2 } \right) - x_1 y_2 f\left( {x_2 ,y_1 } \right) + x_1
y_1 f\left( {x_2 ,y_2 } \right)}}{{\left( {x_2 - x_1 }
\right)\left( {y_2 - y_1 } \right)}}.
\end{multline}
\end{theorem}
\begin{proof}
Define a real valued function $F:\left[ {\frac{1}{b},\frac{1}{a}}
\right] \times \left[ {\frac{1}{d},\frac{1}{c}} \right] \to
\mathbb{R}$, given by
\begin{align}
\label{1}F\left( {t,s} \right) = tsf\left(
{\frac{1}{t},\frac{1}{s}} \right).
\end{align}
By the assumptions (1)-(3), it is easy to see that
\begin{enumerate}
\item $F$ is defined on $\left[ {\frac{1}{b},\frac{1}{a}} \right]
\times \left[ {\frac{1}{d},\frac{1}{c}} \right]$, since $\Delta$
does not contain the points $(0,\cdot)$, $(\cdot,0)$, $(0,0)$.
\item For each fixed $y_0 \in \left[ {\frac{1}{d},\frac{1}{c}}
\right]$, the function given by $x \mapsto F\left( {x,y_0 }
\right)$ is continuous on $\left[ {\frac{1}{b},\frac{1}{a}}
\right] $ and differentiable on $\left( {\frac{1}{b},\frac{1}{a}}
\right)$.
\item For each fixed $x_0 \in \left[ {\frac{1}{b},\frac{1}{a}}
\right]$, the function given by $y \mapsto F_x\left( {x_0,y}
\right)$ is continuous on $\left[ {\frac{1}{d},\frac{1}{c}}
\right]$ and differentiable on $\left( {\frac{1}{d},\frac{1}{c}}
\right)$.
\end{enumerate}
Therefore, simple calculations yield that
\begin{align*}
F_t \left( {t,s} \right) = sf\left( {\frac{1}{t},\frac{1}{s}}
\right) - \frac{s}{t}f_t \left( {\frac{1}{t},\frac{1}{s}} \right),
\end{align*}
\begin{align*}
F_s \left( {t,s} \right) = tf\left( {\frac{1}{t},\frac{1}{s}}
\right) - \frac{t}{s}f_s \left( {\frac{1}{t},\frac{1}{s}} \right),
\end{align*}
and
\begin{align}
\label{2}F_{ts} \left( {t,s} \right) &= \frac{1}{{ts}}f_{st}
\left( {\frac{1}{t},\frac{1}{s}} \right) - \frac{1}{t}f_t \left(
{\frac{1}{t},\frac{1}{s}} \right) - \frac{1}{s}f_s \left(
{\frac{1}{t},\frac{1}{s}} \right) + f\left(
{\frac{1}{t},\frac{1}{s}} \right)
\nonumber\\
&= F_{st} \left( {t,s} \right).
\end{align}
Applying the Rectangular Mean Value Theorem \ref{RMVT} to $F$ on
the interval $\left[ {u,v} \right] \times \left[ {z,w} \right]
\subset \left[ {\frac{1}{b},\frac{1}{a}} \right] \times \left[
{\frac{1}{d},\frac{1}{c}} \right]$, we get
\begin{align}
\label{3}\left( {v - u} \right)\left( {w - z} \right)F_{ts} \left(
{\eta _1 ,\eta _2 } \right) = F\left( {v,w} \right) - F\left(
{v,z} \right) - F\left( {u,w} \right) + F\left( {u,z} \right)
\end{align}
for some $\left( {\eta _1 ,\eta _2 } \right) \in \left( {u,v}
\right) \times \left( {z,w} \right)$. Let $x_1 = \frac{1}{v}$,
$x_2 = \frac{1}{u}$, $y_1 = \frac{1}{w}$, $y_2 = \frac{1}{z}$,
$\xi _1 = \frac{1}{{\eta _1 }}$, and $\xi _2 = \frac{1}{{\eta _2
}}$. Then, since $\left( {\eta _1 ,\eta _2 } \right) \in \left(
{u,v} \right) \times \left( {z,w} \right)$, we have
$$x_1 \le \xi _1 \le x_2, \,\,\, \text{and}\,\,\,y_1 \le \xi _2 \le y_2.$$
Now, using (\ref{1}) and (\ref{2}) on (\ref{3}), we have
\begin{multline*}
\frac{1}{{\eta _1 \eta _2 }}f_{st} \left( {\frac{1}{{\eta _1
}},\frac{1}{{\eta _2 }}} \right) - \frac{1}{{\eta _1 }}f_t \left(
{\frac{1}{{\eta _1 }},\frac{1}{{\eta _2 }}} \right) -
\frac{1}{{\eta _2 }}f_s \left( {\frac{1}{{\eta _1
}},\frac{1}{{\eta _2 }}} \right) + f\left( {\frac{1}{{\eta _1
}},\frac{1}{{\eta _2 }}} \right)
\\
= \frac{1}{{\left( {v - u} \right)\left( {w - z} \right)}}\left[
{vwf\left( {\frac{1}{v},\frac{1}{w}} \right) - vzf\left(
{\frac{1}{v},\frac{1}{z}} \right) - uwf\left(
{\frac{1}{u},\frac{1}{w}} \right) + uzf\left(
{\frac{1}{u},\frac{1}{z}} \right)} \right],
\end{multline*}
that is,
\begin{multline*}
\xi _1 \xi _2 \frac{{\partial ^2 f}}{{\partial t\partial s}}\left(
{\xi _1 ,\xi _2 } \right) - \xi _1 \frac{{\partial f}}{{\partial
t}}\left( {\xi _1 ,\xi _2 } \right) - \xi _2 \frac{{\partial
f}}{{\partial s}}\left( {\xi _1 ,\xi _2 } \right) + f\left( {\xi
_1 ,\xi _2 } \right)
\\
= \frac{{x_2 y_2 f\left( {x_1 ,y_1 } \right) - x_2 y_1 f\left(
{x_1 ,y_2 } \right) - x_1 y_2 f\left( {x_2 ,y_1 } \right) + x_1
y_1 f\left( {x_2 ,y_2 } \right)}}{{\left( {x_2 - x_1 }
\right)\left( {y_2 - y_1 } \right)}}.
\end{multline*}
This completes the proof of the theorem.
\end{proof}
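The identity (\ref{eq2.2}) can also be checked numerically. In this illustrative sketch of ours, $f(t,s)=(ts)^2$ on $[1,2]\times[1,2]$; the left-hand side then reduces to $\xi_1^2\xi_2^2$ and the right-hand side equals $4$, so any $(\xi_1,\xi_2)$ with $\xi_1\xi_2=2$ works.

```python
# Hedged sketch: grid-check identity (2.2) for f(t,s) = (t*s)**2 with
# x1 = y1 = 1 and x2 = y2 = 2 (illustrative choices).
def pompeiu2d_residual(f, ft, fs, fts, x1, x2, y1, y2, n=400):
    rhs = (x2 * y2 * f(x1, y1) - x2 * y1 * f(x1, y2)
           - x1 * y2 * f(x2, y1) + x1 * y1 * f(x2, y2)) / ((x2 - x1) * (y2 - y1))
    op = lambda u, v: u * v * fts(u, v) - u * ft(u, v) - v * fs(u, v) + f(u, v)
    return min(
        abs(op(x1 + (x2 - x1) * i / n, y1 + (y2 - y1) * j / n) - rhs)
        for i in range(1, n)
        for j in range(1, n)
    )

f = lambda t, s: (t * s) ** 2
ft = lambda t, s: 2 * t * s * s    # partial derivative in t
fs = lambda t, s: 2 * t * t * s    # partial derivative in s
fts = lambda t, s: 4 * t * s       # mixed second derivative
res = pompeiu2d_residual(f, ft, fs, fts, 1.0, 2.0, 1.0, 2.0)
print(res)  # close to 0: some (xi1, xi2) in (1,2)x(1,2) satisfies the identity
```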
Next, a characterization of Boggio's MVT, which is of Cauchy type,
for functions of two variables is stated as follows:
Let $a,b,c,d \in \mathbb{R}$ with $a<b$ and $c<d$, and let
$f,g:\Delta:= [a,b] \times[c,d]\to \mathbb{R}$ satisfy the
following:
\begin{enumerate}
\item $\Delta$ does not contain the points $(0,\cdot)$, $(\cdot,0)$,
$(0,0)$.
\item For each fixed $y_0 \in [c,d]$, the functions given by $x
\mapsto f\left( {x,y_0 } \right)$ and $x \mapsto g\left( {x,y_0 }
\right)$ are continuous on $[a,b]$ and differentiable on $(a,b)$.
\item For each fixed $x_0 \in [a,b]$, the functions given by $y
\mapsto f_x\left( {x_0,y} \right)$ and $y \mapsto g_x\left(
{x_0,y} \right)$ are continuous on $[c,d]$ and differentiable on
$(c,d)$.
\end{enumerate}
Then, for all pairs $x_1,x_2 \in(a,b)$ with $x_1 \ne x_2$ and
$y_1,y_2 \in (c,d)$ with $y_1 \ne y_2$, there exists
$(\xi_1,\xi_2) \in (x_1,x_2)\times (y_1,y_2)$ such that
\begin{multline}
\label{eq2.6} \frac{{\xi _1 \xi _2 \frac{{\partial ^2
g}}{{\partial t\partial s}}\left( {\xi _1 ,\xi _2 } \right) - \xi
_1 \frac{{\partial g}}{{\partial t}}\left( {\xi _1 ,\xi _2 }
\right) - \xi _2 \frac{{\partial g}}{{\partial s}}\left( {\xi _1
,\xi _2 } \right) + g\left( {\xi _1 ,\xi _2 } \right)}}{{g\left(
{x_2 ,y_2 } \right) - g\left( {x_2 ,y_1 } \right) - g\left( {x_1
,y_2 } \right) + g\left( {x_1 ,y_1 } \right)}}
\\
- \frac{{\xi _1 \xi _2 \frac{{\partial ^2 f}}{{\partial t\partial
s}}\left( {\xi _1 ,\xi _2 } \right) - \xi _1 \frac{{\partial
f}}{{\partial t}}\left( {\xi _1 ,\xi _2 } \right) - \xi _2
\frac{{\partial f}}{{\partial s}}\left( {\xi _1 ,\xi _2 } \right)
+ f\left( {\xi _1 ,\xi _2 } \right)}}{{f\left( {x_2 ,y_2 } \right)
- f\left( {x_2 ,y_1 } \right) - f\left( {x_1 ,y_2 } \right) +
f\left( {x_1 ,y_1 } \right)}}
\\
= \frac{{x_2 y_2 g\left( {x_1 ,y_1 } \right) - x_1 y_2 g\left(
{x_2 ,y_1 } \right) - x_2 y_1 g\left( {x_1 ,y_2 } \right) + x_1
y_1 g\left( {x_2 ,y_2 } \right)}}{{\left( {x_2 - x_1 }
\right)\left( {y_2 - y_1 } \right)\left[ {g\left( {x_2 ,y_2 }
\right) - g\left( {x_2 ,y_1 } \right) - g\left( {x_1 ,y_2 }
\right) + g\left( {x_1 ,y_1 } \right)} \right]}}
\\
- \frac{{x_2 y_2 f\left( {x_1 ,y_1 } \right) - x_2 y_1 f\left(
{x_1 ,y_2 } \right) - x_1 y_2 f\left( {x_2 ,y_1 } \right) + x_1
y_1 f\left( {x_2 ,y_2 } \right)}}{{\left( {x_2 - x_1 }
\right)\left( {y_2 - y_1 } \right)\left[ {f\left( {x_2 ,y_2 }
\right) - f\left( {x_2 ,y_1 } \right) - f\left( {x_1 ,y_2 }
\right) + f\left( {x_1 ,y_1 } \right)} \right]}}.
\end{multline}
\end{theorem}
\begin{proof}
As in the proof of Theorem \ref{thm4}, let $\left[ {u,v} \right]
\times \left[ {z,w} \right] \subset \left[
{\frac{1}{b},\frac{1}{a}} \right] \times \left[
{\frac{1}{d},\frac{1}{c}} \right]$, and setting $x_1 =
\frac{1}{v}$, $x_2 = \frac{1}{u}$, $y_1 = \frac{1}{w}$, $y_2 =
\frac{1}{z}$. Define the following two real valued functions
$H:\Delta \to \mathbb{R}$, given by
\begin{multline}
\label{4} H\left( {t,s} \right) = \left[ {f\left( {x_1,y_1}
\right) - f\left( {x_1,y_2} \right) - f\left( {x_2,y_1} \right) +
f\left( {x_2,y_2} \right)} \right]g\left({t,s}\right)
\\
- \left[ {g\left( {x_1,y_1} \right) - g\left( {x_1,y_2} \right) -
g\left( {x_2,y_1} \right) + g\left( {x_2,y_2} \right)}
\right]f\left( {t,s} \right),
\end{multline}
and $F:\left[ {\frac{1}{b},\frac{1}{a}} \right] \times \left[
{\frac{1}{d},\frac{1}{c}} \right] \to \mathbb{R}$, given by
\begin{align}
\label{5}F\left( {t,s} \right) = ts H\left(
{\frac{1}{t},\frac{1}{s}} \right).
\end{align}
By the assumptions (1)-(3), it is easy to see that
\begin{enumerate}
\item $F$ is defined on $\left[ {\frac{1}{b},\frac{1}{a}} \right]
\times \left[ {\frac{1}{d},\frac{1}{c}} \right]$, since $\Delta$
does not contain the points $(0,\cdot)$, $(\cdot,0)$, $(0,0)$.
\item For each fixed $y_0 \in \left[ {\frac{1}{d},\frac{1}{c}}
\right]$, the function given by $x \mapsto F\left( {x,y_0 }
\right)$ is continuous on $\left[ {\frac{1}{b},\frac{1}{a}}
\right] $ and differentiable on $\left( {\frac{1}{b},\frac{1}{a}}
\right)$.
\item For each fixed $x_0 \in \left[ {\frac{1}{b},\frac{1}{a}}
\right]$, the function given by $y \mapsto F_x\left( {x_0,y}
\right)$ is continuous on $\left[ {\frac{1}{d},\frac{1}{c}}
\right]$ and differentiable on $\left( {\frac{1}{d},\frac{1}{c}}
\right)$.
\end{enumerate}
Therefore, simple calculations yield that
\begin{align}
\label{6}F_{ts} \left( {t,s} \right) &= \frac{1}{{ts}}H_{st}
\left( {\frac{1}{t},\frac{1}{s}} \right) - \frac{1}{t}H_t \left(
{\frac{1}{t},\frac{1}{s}} \right) - \frac{1}{s}H_s \left(
{\frac{1}{t},\frac{1}{s}} \right) + H\left(
{\frac{1}{t},\frac{1}{s}} \right)
\nonumber\\
&= F_{st} \left( {t,s} \right).
\end{align}
Applying the Rectangular Mean Value Theorem \ref{RMVT} to $F$ on
the interval $\left[ {u,v} \right] \times \left[ {z,w} \right]
\subset \left[ {\frac{1}{b},\frac{1}{a}} \right] \times \left[
{\frac{1}{d},\frac{1}{c}} \right]$, we get
\begin{align}
\label{7}\left( {v - u} \right)\left( {w - z} \right)F_{ts} \left(
{\eta _1 ,\eta _2 } \right) = F\left( {v,w} \right) - F\left(
{v,z} \right) - F\left( {u,w} \right) + F\left( {u,z} \right)
\end{align}
for some $\left( {\eta _1 ,\eta _2 } \right) \in \left( {u,v}
\right) \times \left( {z,w} \right)$.
Recall that $x_1 = \frac{1}{v}$, $x_2 =
\frac{1}{u}$, $y_1 = \frac{1}{w}$, $y_2 = \frac{1}{z}$, and set $\xi _1
= \frac{1}{{\eta _1 }}$ and $\xi _2 = \frac{1}{{\eta _2 }}$.
Then, since $\left( {\eta _1 ,\eta _2 } \right) \in \left( {u,v}
\right) \times \left( {z,w} \right)$, we have
$$x_1 \le \xi _1 \le x_2, \,\,\, \text{and}\,\,\,y_1 \le \xi _2 \le y_2.$$
Now, using (\ref{4})--(\ref{6}) on (\ref{7}), we have
\begin{multline*}
\frac{{\xi _1 \xi _2 \frac{{\partial ^2 g}}{{\partial t\partial
s}}\left( {\xi _1 ,\xi _2 } \right) - \xi _1 \frac{{\partial
g}}{{\partial t}}\left( {\xi _1 ,\xi _2 } \right) - \xi _2
\frac{{\partial g}}{{\partial s}}\left( {\xi _1 ,\xi _2 } \right)
+ g\left( {\xi _1 ,\xi _2 } \right)}}{{g\left( {x_2 ,y_2 } \right)
- g\left( {x_2 ,y_1 } \right) - g\left( {x_1 ,y_2 } \right) +
g\left( {x_1 ,y_1 } \right)}}
\\
- \frac{{\xi _1 \xi _2 \frac{{\partial ^2 f}}{{\partial t\partial
s}}\left( {\xi _1 ,\xi _2 } \right) - \xi _1 \frac{{\partial
f}}{{\partial t}}\left( {\xi _1 ,\xi _2 } \right) - \xi _2
\frac{{\partial f}}{{\partial s}}\left( {\xi _1 ,\xi _2 } \right)
+ f\left( {\xi _1 ,\xi _2 } \right)}}{{f\left( {x_2 ,y_2 } \right)
- f\left( {x_2 ,y_1 } \right) - f\left( {x_1 ,y_2 } \right) +
f\left( {x_1 ,y_1 } \right)}}
\\
= \frac{{x_2 y_2 g\left( {x_1 ,y_1 } \right) - x_1 y_2 g\left(
{x_2 ,y_1 } \right) - x_2 y_1 g\left( {x_1 ,y_2 } \right) + x_1
y_1 g\left( {x_2 ,y_2 } \right)}}{{\left( {x_2 - x_1 }
\right)\left( {y_2 - y_1 } \right)\left[ {g\left( {x_2 ,y_2 }
\right) - g\left( {x_2 ,y_1 } \right) - g\left( {x_1 ,y_2 }
\right) + g\left( {x_1 ,y_1 } \right)} \right]}}
\\
- \frac{{x_2 y_2 f\left( {x_1 ,y_1 } \right) - x_2 y_1 f\left(
{x_1 ,y_2 } \right) - x_1 y_2 f\left( {x_2 ,y_1 } \right) + x_1
y_1 f\left( {x_2 ,y_2 } \right)}}{{\left( {x_2 - x_1 }
\right)\left( {y_2 - y_1 } \right)\left[ {f\left( {x_2 ,y_2 }
\right) - f\left( {x_2 ,y_1 } \right) - f\left( {x_1 ,y_2 }
\right) + f\left( {x_1 ,y_1 } \right)} \right]}}.
\end{multline*}
This completes the proof of the theorem.
\end{proof}
\begin{remark}
In (\ref{eq2.6}), if one chooses $g(t,s)=ts$, then we recapture
Pompeiu's Mean Value Theorem \ref{thm4}.
\end{remark}
\section{Introduction}
\IEEEPARstart{S}tate-of-the-art robotic systems are less power efficient and robust than their natural counterparts. Indeed, a bee is capable of robust flight, obstacle avoidance, and cognition with a brain that consumes only 10$\mu W$ of power. On the other hand, vehicles in the DARPA Desert and Urban Challenges consumed around 1$kW$ of power~\cite{Liu2010}. Using nature as inspiration, neuromorphic engineers have attempted to bridge the power-consumption gap through hardware solutions~\cite{Liu2010}. Neuromorphic processors allow for the hardware implementation of spiking neural networks (SNNs)~\cite{chicca2014neuromorphic,indiveri2015neuromorphic}. These mixed-signal analog/digital chips are low power and provide an attractive alternative to current digital hardware used in mobile applications such as robotics.
Another successful neuromorphic solution is the Dynamic Vision Sensor (DVS)~\cite{Lichtsteiner_etal08,Serrano-Gotarredona2013}. The DVS is analogous to a camera, except instead of integrating light in a pixel array for a period of time and then converting it to an image, it detects local changes in luminance at each pixel and transmits these as events pixel by pixel, as they are produced, and with microsecond latency~\cite{delbruck2008frame}. This leads to a reduction in power, bandwidth, and overhead in post processing.
Typically, high-speed agile manoeuvres, such as juggling, pole acrobatics, or flying through thrown hoops, use external motion sensors and high-powered CPUs to control the UAVs~\cite{Muller2011,brescianini2013quadrocopter,mellinger2011minimum}. A system with sensors and image processing in-situ on the UAV is an essential step for autonomous UAV systems in GPS-restricted environments. Due to its high temporal precision, the DVS also does not suffer from blurring as a standard frame-based camera does when conducting high-speed manoeuvres on an unmanned aerial vehicle (UAV)~\cite{Mueggler2014}. This makes it ideal as an on-board sensor for high-speed agile manoeuvres.
A model that has shown promise for collision avoidance in robotics is the locust lobula giant motion detector (LGMD). The locust uses the LGMD to escape from predators by detecting whether a stimulus is looming (increasing in size in the field of view) or not~\cite{Santer2004}. The response is robust to translation, which is why it is an ideal candidate for obstacle avoidance. Previous implementations of this model used frame-based cameras and simplified neural models for embedded robotic applications~\cite{Santer2004,yue2010reactive,stafford2007bio}.
Salt et al.~\cite{Salt2017} modified the LGMD model to use Adaptive Exponential Integrate and Fire (AEIF) neuron equations, which have been shown to be biologically plausible~\cite{brette2005adaptive} and readily implementable in hardware neuromorphic processors~\cite{chicca2014neuromorphic}. The LGMD Neural Network (LGMDNN) was also modified to make it compatible with the Reconfigurable On-Line Learning Spiking (ROLLS) neuromorphic processor~\cite{Qiao2015}. Coupling the LGMDNN with the AEIF neural equations yields 11 user-defined parameters after making simplifying assumptions based on the constraints of the neuromorphic processor. Identifying promising parameter sets for robust functional operation of this model is the focus of this work.
Optimising this parameter space is challenging as it contains up to 18 hyper-parameters that have complex inter-dependencies. Due to the computational resources and time requirements involved in evaluation (approximately 1 to 4 minutes per LGMDNN), an exhaustive search is infeasible. We are therefore motivated to investigate the use of efficient stochastic optimisation algorithms.
Differential Evolution (DE)~\cite{storn1997differential} is particularly suited to our application. DE is a simple and efficient stochastic vector-based real-parameter optimisation algorithm with performance (at least) comparable to other leading optimisation algorithms~\cite{de-vs-pso,de-ga-compare}. DE has only two user-defined rates~\cite{das2011differential,storn1997differential,pedersen2010good}, however their optimal values are problem specific and can drastically affect algorithmic performance~\cite{brest2006self}. This has prompted research into Self-Adaptation (SA), which allows the rates to vary autonomously in a context-sensitive manner throughout an optimisation run. Self-Adaptive DE (SADE) has been shown to perform at least as well as DE on benchmarking problems~\cite{brest2006self,qin2009differential}. Importantly, SA has been shown to reduce the number of evaluations required per optimisation in resource-constrained scenarios with protracted evaluation times~\cite{howard2017gecco}, compared to non-adaptive solutions~\cite{howard2015platform}. Here, we compare DE and SADE to Bayesian Optimisation (BO), which is also well suited to this task.
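To make the DE baseline concrete, a minimal DE/rand/1/bin sketch is given below. It minimises a simple sphere function and is our own illustrative implementation, not the optimiser used in the experiments; $F$ and $CR$ are the two user-defined DE rates mentioned above.

```python
import random

# Minimal DE/rand/1/bin sketch (illustrative, not the experimental optimiser).
# F is the mutation scale factor and CR the crossover rate.
def differential_evolution(obj, bounds, pop_size=20, F=0.8, CR=0.9,
                           gens=300, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)   # forced crossover dimension
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if (rng.random() < CR or d == jrand) else pop[i][d]
                     for d in range(dim)]
            f_trial = obj(trial)
            if f_trial <= fit[i]:        # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
x_best, f_best = differential_evolution(sphere, [(-5.0, 5.0)] * 3)
print(f_best)  # near 0 after 300 generations
```

A self-adaptive variant would additionally encode $F$ and $CR$ in each individual and let them evolve alongside the solution vector.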
Spiking networks are particularly amenable to a form of unsupervised learning called Spike-Time Dependent Plasticity (STDP)~\cite{bi-poo}, which allows synaptic weights to change autonomously in response to environmental inputs. STDP has been shown to provide faster responses compared to non-plastic networks in dynamic environments~\cite{howard2012evolution}, which motivates our investigations into its use in our LGMD networks.
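For reference, the pair-based STDP weight update commonly used with such networks can be sketched as follows; the window amplitudes and time constants are illustrative placeholders, not the values used in this work.

```python
import math

# Illustrative pair-based STDP window: dt = t_post - t_pre in ms.
# A pre-before-post pairing (dt > 0) potentiates; post-before-pre depresses.
def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    if dt >= 0:
        return a_plus * math.exp(-dt / tau_plus)
    return -a_minus * math.exp(dt / tau_minus)

print(stdp_dw(5.0), stdp_dw(-5.0))  # small potentiation, slightly larger depression
```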
Our hypothesis is that these adaptivity mechanisms are beneficial to the optimisation process. To test this hypothesis, we evaluate the performance of our algorithms (DE, SADE, and BO, with and without STDP) when optimising looming responses in LGMD networks which are stimulated by (i) simple and (ii) complex DVS recordings on the UAV.
The original contributions of this work are (i) development of an objective function that accurately describes the desired LGMD behaviour, (ii) statistical comparisons of three leading algorithms in optimising LGMD response, and (iii) the first use of STDP and adaptation in spiking neuromorphic LGMD networks.
\section{Model}
This section describes the background for the model set-up and the specific equations used in the experiments.
\subsection{LGMD}
We implement the model as described by Salt et al.~\cite{Salt2017}. The LGMD model consists of a photoreceptor (P), a summing layer (S), an intermediate photoreceptor (IP), an intermediate summing layer (IS), and an LGMD neuron layer. The intermediate layers can be seen as analogous to sum-pooling layers in deep convolutional neural networks~\cite{liu2015treasure,babenko2015aggregating,fernando2017rank}. These layers are connected by excitatory (E), inhibitory (I), and feed-forward (F) connections, which are modelled as AEIF neurons. \fig{fig:LGMDMe} shows the topology of the network \cite{Salt2017}.
The feed-forward neurons (F) are intended to inhibit translational motion. The inhibitory connections (I) from the photoreceptor to the summing layer inhibit non-looming stimuli. The weights of the inhibitory connections are assigned based on their distance from the central excitatory neuron. This connection configuration spans the P layer like a kernel.
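The distance-based inhibitory weighting can be pictured as a small convolution-style kernel. The sketch below is a hypothetical 3x3 example; the kernel size and weight values are our own illustrative choices, not the model's actual parameters.

```python
# Hypothetical 3x3 inhibition kernel centred on the excitatory P->S tap.
# Nearest (edge) neighbours inhibit more strongly than diagonal ones,
# mirroring the distance-dependent weights described in the text.
def inhibition_kernel(w_edge=0.6, w_diag=0.3):
    kernel = [[0.0] * 3 for _ in range(3)]
    for i in range(3):
        for j in range(3):
            d2 = (i - 1) ** 2 + (j - 1) ** 2   # squared distance to centre
            if d2 == 0:
                kernel[i][j] = 1.0             # central excitatory connection
            elif d2 == 1:
                kernel[i][j] = -w_edge         # adjacent inhibitory taps
            else:
                kernel[i][j] = -w_diag         # diagonal inhibitory taps
    return kernel

for row in inhibition_kernel():
    print(row)
```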
\begin{figure}
\begin{tikzpicture}
\tikzstyle{excite}=[shorten >=1pt,->,draw=black, node distance=2.5cm]
\tikzstyle{inha}=[shorten >=1pt,->,draw=red,dotted, node distance=2.5cm]
\tikzstyle{inhb}=[shorten >=1pt,->,draw=red,dashed, node distance=2.5cm]
\tikzstyle{every pin edge}=[<-,shorten <=1pt]
\tikzstyle{neuron}=[circle,fill=black,minimum size=20pt,inner sep=0pt]
\tikzstyle{P neuron}=[neuron, fill=green];
\tikzstyle{S neuron}=[neuron, fill=red];
\tikzstyle{IP neuron}=[neuron, fill=magenta];
\tikzstyle{IS neuron}=[neuron, fill=olive];
\tikzstyle{LGMD neuron}=[neuron, fill=blue];
\tikzstyle{annot} = [text width=6em, text centered]
\foreach \name / \y in {1,...,9}
\node[P neuron, pin=left:] (I-\name) at (0,-\y) {};
\node[S neuron] (H) at (1.5*2.5cm, -3.5) {};
\node[IS neuron] (IS) at (2*2.5cm,-4) {};
\node[IP neuron] (IP) at (1.5*2.5cm,-6.5) {};
\node[LGMD neuron] (O) at (2.6*2.5cm,-5) {};
\path (I-5) edge[excite] (H) {};
\foreach \name / \y in {2,4,6,8}
\path (I-\y) edge[inha] (H) {};
\foreach \name / \y in {1,3,7,9}
\path (I-\y) edge[inhb] (H) {};
\path (H) edge[excite] (IS);
\foreach \name / \y in {1,...,9}
\path (I-\y) edge[excite] (IP);
\path (IS) edge[excite] (O);
\path (IP) edge[inha] (O);
\node[annot,above of=H, node distance=0.7cm] (hl) {S};
\node[annot,above of=I-1, node distance=0.7cm] (Il) {P};
\node[annot,above of=O, node distance=0.7cm] {LGMD};
\node[annot,above of=IS, node distance=0.7cm] {IS};
\node[annot,above of=IP, node distance=0.7cm] {IP};
\end{tikzpicture}
\caption{The neuromorphic LGMD model. Solid lines: excitatory connections; dashed lines: slower inhibitory connections; dotted lines: faster inhibitory connections.}
\label{fig:LGMDMe}
\end{figure}
The intermediate layers were added to make the model compatible with the CXQuad neuromorphic processor described in~\cite{indiveri2015neuromorphic}. However, Salt et al.~\cite{Salt2017} found that the addition of the intermediate (sum-pooling) layer before the LGMD neuron increased the performance of the network on all but slow circular stimuli.
\subsubsection{Adaptive Exponential Integrate and Fire Spiking Networks}
We use Adaptive Exponential Integrate and Fire (AEIF) networks; the respective neuron equations follow (\ref{eq:gerstnerAEIF}) and (\ref{eq:current}):
\begin{equation}
\label{eq:gerstnerAEIF}
\frac{dV}{dt} = \frac{-g_L(V-E_L)+g_L\Delta_T\exp(\frac{V-V_T}{\Delta_T})+I}{C},
\end{equation}
\begin{equation}
\label{eq:current}
I=I_e-I_{iA}-I_{iB}-I_{ad},
\end{equation}
where $C$ is the membrane capacitance, $g_L$ is the leak conductance, $E_L$ is the leak reversal potential, $V_T$ is the spike threshold, $\Delta_T$ is the slope factor, $V$ is the membrane potential, $I_{e}$ is an excitatory current, $I_{ad}$ is the adaptation current, and $I_{iA}$ and $I_{iB}$ are the fast and slow inhibitory currents~\cite{brette2005adaptive}. When a spike is detected ($V>V_T$), the voltage resets ($V=V_r$) and the post-synaptic neuron receives a current injection from the pre-synaptic neuron's firing, given by:
\begin{align}
\label{eq:inject}
I_{al} &+= q_{al},\\
I_{ad} &+= b,
\end{align}
where the subscript $l$ corresponds to the post-synaptic layer, $q_{al}$ is the injected current, $b$ is the spike-triggered adaptation, and the subscript $a$ refers to either excitation or inhibition. To simplify the model for embedded implementation, the inhibitory currents were bounded as a ratio of the excitatory current:
\begin{equation}
q_{il(A|B)} = inh(A|B)_l\cdot q_{el},
\end{equation}
where the $(A|B)$ notation indicates either A or B type inhibition. The decay of the excitatory or inhibitory currents is described by:
\begin{equation}
\frac{dI_a}{dt}=\frac{-I_a}{\tau_a},\label{eq:currentDecay}
\end{equation}
where $I_a$ is the current and $\tau_a$ is the time constant for the decay. The subscript $a$ refers to either inhibition or excitation. Finally, the decay of the adaptation current is described by:
\begin{equation}
\frac{dI_{ad}}{dt}=\frac{a(V-E_L)-I_{ad}}{\tau_{ad}},
\end{equation}
where $a$ is the sub-threshold adaptation and $\tau_{ad}$ is the time constant for the decay.
Initially, the adaptation current is set to 0, which serves as a comparative baseline when investigating the use of adaptation.
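As an illustrative sketch of how (\ref{eq:gerstnerAEIF}) and (\ref{eq:current}) evolve in time (the experiments themselves use Brian2), the AEIF dynamics can be integrated with a simple forward-Euler scheme. The constants are those fixed later in Subsubsection~\ref{sssec:hpc}; the single excitatory input, its charge \texttt{q\_e}, and the omission of the two inhibitory currents are simplifying assumptions made for readability:

```python
import math

def simulate_aeif(spike_times_in, q_e, dt=1e-4, t_end=0.05,
                  C=124.2e-12, g_L=60.05e-9, E_L=-73.12e-3,
                  V_T=-3.98e-3, Delta_T=6.71e-3, V_r=-73.12e-3,
                  tau_e=5e-3, a=0.0, b=0.0, tau_ad=30e-3):
    """Forward-Euler integration of the AEIF neuron with one excitatory
    input; inhibitory currents are omitted for brevity."""
    V, I_e, I_ad = E_L, 0.0, 0.0
    out_spikes = []
    incoming = {round(t / dt) for t in spike_times_in}
    for step in range(int(t_end / dt)):
        if step in incoming:
            I_e += q_e                # pre-synaptic spike: current injection
        I = I_e - I_ad                # current equation, without I_iA, I_iB
        dV = (-g_L * (V - E_L)
              + g_L * Delta_T * math.exp((V - V_T) / Delta_T) + I) / C
        V += dV * dt
        I_e += (-I_e / tau_e) * dt                      # current decay
        I_ad += ((a * (V - E_L) - I_ad) / tau_ad) * dt  # adaptation decay
        if V > V_T:                   # spike detected
            out_spikes.append(step * dt)
            V = V_r                   # voltage reset
            I_ad += b                 # spike-triggered adaptation
    return out_spikes
```

A sufficiently strong injection elicits output spikes, while an unstimulated neuron stays near its resting potential.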
\subsection{Spike Time Dependent Plasticity}
Spike Time Dependent Plasticity (STDP) is a realisation of Hebbian learning based on the temporal correlations between pre- and post-synaptic spikes. This synaptic plasticity is thought to be fundamental to adaptation, learning, and information storage in the brain~\cite{song2000competitive,sjostrom2010spike}.
Considering an arbitrary neuron, receipt of a pre-synaptic spike shortly before a post-synaptic spike increases the efficacy of the synapse, while the efficacy decreases if the post-synaptic spike precedes the pre-synaptic spike. Long-term potentiation (LTP, synaptic weight increase) occurs in the former case; long-term depression (LTD, synaptic weight decrease) occurs in the latter case. \fig{fig:STDP} shows the effect of the difference between post- and pre-synaptic spike times on the synaptic weight.
\begin{figure}[htbp]
\centering
\includegraphics[width=1\columnwidth]{stdp}
\caption{The impact of STDP on the synaptic weights. If the pre-synaptic spike arrives before the post-synaptic spike, then the strength of the synapse is increased. If the post-synaptic spike arrives first, then the strength of the synapse is weakened.}\label{fig:STDP}
\end{figure}
STDP modifies the synaptic current injection given in (\ref{eq:inject}) by multiplying it by a weight $w$. If a pre-synaptic spike occurs then:
\begin{align}
I_{al} &+= w q_{al},\\
A_{pre} &+= \Delta_{pre},\\
w &+= A_{post}.
\end{align}
If a post-synaptic spike occurs then:
\begin{align}
A_{post} &+= \Delta_{post},\\
w &+= A_{pre}.
\end{align}
$A_{pre|post}$ are the amounts by which the weight $w$ is strengthened or weakened, and $\Delta_{pre|post}$ is a user-defined value by which $A_{pre|post}$ is increased each time a spike occurs.
Between spike events, the traces decay exponentially:
\begin{align}
\frac{dA_{pre}}{dt} &= -\frac{A_{pre}}{\tau_{pre}},\\
\frac{dA_{post}}{dt} &= -\frac{A_{post}}{\tau_{post}}.
\end{align}
so that $A_{pre|post}$ decays towards zero with time constants $\tau_{pre}$ and $\tau_{post}$.
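The update rules above can be transcribed into a small event-driven sketch. The parameter values and the negative post-trace increment (so that post-before-pre pairings depress the weight, as described above) are assumptions made here; the text leaves the signs of $\Delta_{pre|post}$ implicit:

```python
import math

def run_stdp(events, w0=1.0, delta_pre=0.01, delta_post=-0.012,
             tau_pre=1.56e-3, tau_post=10.03e-3):
    """`events` is a time-sorted list of (t, 'pre' | 'post') spikes.
    Traces decay exponentially between events; each spike nudges the
    weight by the opposite trace, as in the update rules above."""
    w, a_pre, a_post, t_last = w0, 0.0, 0.0, 0.0
    for t, kind in events:
        gap = t - t_last
        a_pre *= math.exp(-gap / tau_pre)    # trace decay between events
        a_post *= math.exp(-gap / tau_post)
        if kind == 'pre':
            a_pre += delta_pre
            w += a_post   # negative trace: depression if post fired recently
        else:
            a_post += delta_post
            w += a_pre    # positive trace: potentiation if pre fired recently
        t_last = t
    return w
```

Pre-then-post ordering potentiates the weight, while post-then-pre depresses it.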
\section{Optimisation Techniques}
In this Section, we describe the three optimisation techniques that we compare: DE, SADE, and BO, and how they are applied to optimising the LGMDNN parameter space. Each individual is a parametrisation of the LGMDNN, given by:
$x =$[$\mathbf{\tau_e }$, $\mathbf{\tau_{iA} }$, $\mathbf{\tau_{iB} }$, $\mathbf{q_{eP} }$, $\mathbf{q_{eS} }$, $\mathbf{q_{eIP} }$, $\mathbf{q_{eIS} }$, $\mathbf{q_{eL} }$, $\mathbf{inhA_S}$, $\mathbf{inhB_S}$, $\mathbf{inhA_L}$, [[$\mathbf{a}$, $\mathbf{b}$, $\mathbf{\tau_{w_{adapt}} }$]], (($\mathbf{\tau_{pre} }$, $\mathbf{\tau_{post }}$, $\mathbf{\Delta_{pre}}$, $\mathbf{\Delta_{post}}$))], where the parameters in double square brackets are present only in models with adaptation and those in double parentheses only in models with plasticity.
\subsection{Differential Evolution}
DE is an efficient and high performing optimiser for real-valued parameters~\cite{das2011differential,storn1997differential}. As it is based on evolutionary computing, it performs well on multi-modal, discontinuous optimisation landscapes. DE performs a parallel direct search over a population of size $NP$, where each population member $x$ is a $D$-dimensional vector; for each generation $G$, the population is:
\begin{equation}
x_{i,G}, \quad i=1,2,\ldots,NP.
\end{equation}
We use the canonical DE/rand/1/bin to describe the algorithmic process. The initial population is generated from random samples drawn from a uniform probability distribution of the parameter space, bounded to the range of the respective variable. These bounds are shown in Subsubsection~\ref{sssec:hpc}. The fitness of each vector in the population is then calculated by the objective function, as described in Section~\ref{sec:OF}.
In each generation, each parent generates one offspring by way of a `donor' vector, created following Eq.~(\ref{eq:DErand1bin}):
\begin{equation}
\label{eq:DErand1bin}
v_{i,G+1}=x_{r_1,G}+F\cdot (x_{r_2,G}-x_{r_3,G}),
\end{equation}
where $r_1\neq r_2\neq r_3\neq i \in [1,NP]$ index random unique population members, and differential weight $F\in [0,2]$ determines the magnitude of the mutation. The final offspring is generated by probabilistically merging elements of the parent with elements of the donor vector. The new vector $u_{i,G+1} = (u_{1i,G+1},\ldots , u_{Di,G+1})$ is found by:
\begin{equation}
u_{ji,G+1}=\begin{cases}
v_{ji,G+1},& \text{if }rand(j)\leq CR \text{ or } j = R,\\
x_{ji,G},& \text{otherwise},
\end{cases}
\end{equation}
where $j \in (1,\ldots,D)$, $CR\in [0,1]$ is the crossover rate, $rand(j)\in [0,1]$ is a uniform random number generator, and $R\in (1,\ldots,D)$ is a randomly chosen index to ensure that at least one parameter changes. The vector carried into the next generation is then selected greedily:
\begin{equation}
x_{i,G+1} = \begin{cases}
u_{i,G+1},& \text{if }f(u_{i,G+1})>f(x_{i,G}),\\
x_{i,G},&\text{otherwise}.
\end{cases}
\end{equation}
Once all offspring are generated, they are evaluated on the objective function, and selected into the next generation if they score better than their parent. Otherwise, the parent remains in the population.
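One generation of DE/rand/1/bin, as described above, can be sketched as follows. This is a minimal illustration rather than the implementation used in the experiments; the default $F$ and $CR$ are the values listed later for the optimiser comparison:

```python
import random

def de_generation(pop, fitness, f_weight=0.6607, cr=0.9426, bounds=None):
    """Advance a DE/rand/1/bin population by one generation, maximising
    `fitness`. `pop` is a list of D-dimensional parameter lists."""
    dim = len(pop[0])
    new_pop = []
    for i, parent in enumerate(pop):
        r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
        donor = [pop[r1][d] + f_weight * (pop[r2][d] - pop[r3][d])
                 for d in range(dim)]                 # donor (mutant) vector
        r_idx = random.randrange(dim)   # guarantees one element from the donor
        child = [donor[d] if (random.random() <= cr or d == r_idx)
                 else parent[d] for d in range(dim)]  # binomial crossover
        if bounds:                                    # clip to search bounds
            child = [min(max(v, lo), hi)
                     for v, (lo, hi) in zip(child, bounds)]
        # greedy selection: offspring replaces the parent only if better
        new_pop.append(child if fitness(child) > fitness(parent) else parent)
    return new_pop
```

Because selection is greedy, the best fitness in the population never degrades from one generation to the next.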
\subsection{Self-Adaptive DE}
Storn and Price~\cite{storn1997differential} showed that DE/rand/1/bin outperformed several other stochastic minimisation techniques in benchmarking tests whilst requiring the setting of only two parameters, $CR$ and $F$. Many different mutation schemes were subsequently suggested for DE, named following the convention $DE/x/y/z$, where $x$ denotes the vector to be mutated (in this case a random vector), $y$ denotes the number of vectors used, and $z$ denotes the crossover method (bin corresponds to binomial).
Brest et al.~\cite{brest2006self} present the first widely-used self-adaptive rate-varying DE, which is expanded by Qin et al., to allow the mutation scheme to be selected (from four predetermined schemes) alongside the rates~\cite{qin2009differential}, based on previously-successful settings. Different rates/schemes are shown to work better on different problems, or in different stages of a single optimisation run. The strategy for a given candidate is selected based on a probability distribution determined by the success rate of a given strategy over a learning period $LP$. A strategy is considered successful when it improves the candidate's value. In the interest of brevity, we refer the interested reader to~\cite{qin2009differential} for a full algorithmic description.
Rates are adapted as follows. Before $G>LP$, $CR$ is calculated by randomly selecting a number from a normal distribution, $N(0.5,0.3)$, with a mean of 0.5 and a standard deviation of 0.3. Afterwards, it is drawn from $N(CR_{mk},0.1)$, where $CR_{mk}$ is the median of the successful $CR$ values for each strategy $K$. $F$ is simply selected from a normal distribution $N(0.5,0.3)$, which causes it to fall in the interval $[-0.4,1.4]$ with a probability of 0.997~\cite{qin2009differential}.
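The rate adaptation just described amounts to the following sampling scheme. This is a sketch; clipping $CR$ to $[0,1]$ is an assumption about the implementation:

```python
import random
import statistics

def sade_rates(successful_cr, learning_done):
    """Draw (F, CR) for one candidate. During the learning period both come
    from N(0.5, 0.3); afterwards CR is centred on the median of the CR
    values that previously produced improvements."""
    f = random.gauss(0.5, 0.3)        # falls in [-0.4, 1.4] w.p. ~0.997
    if learning_done and successful_cr:
        cr = random.gauss(statistics.median(successful_cr), 0.1)
    else:
        cr = random.gauss(0.5, 0.3)
    return f, min(max(cr, 0.0), 1.0)  # keep CR a valid crossover rate
```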
\subsection{Bayesian Optimisation}
Bayesian optimisation (BO), e.g.~\cite{brochu2010tutorial}, is a probabilistic optimisation process that typically requires relatively few evaluations~\cite{mockus1994application, jones2001taxonomy, sasena2002flexibility}, although the evaluations themselves are computationally expensive. When parallelised, BO is shown to locate hyper-parameters within set error bounds significantly faster than other state-of-the-art methods on four challenging ML problems\cite{snoek2012practical}, in one case displaying 3\% improved performance over state-of-the-art expert results. As such, BO can be considered an extremely challenging optimiser as a comparator for DE and SADE, and as SNNs have many hyper-parameters, they are ideal candidates for optimisation.
BO assumes the network hyper-parameters are sampled from a Gaussian process (GP), and updates a prior distribution of the parameterisation based on observations. For LGMDNN, observations are the measure of generalization performance under different settings of the hyper-parameters we wish to optimise. BO exploits the prior model to decide the next set of hyper-parameters to sample.
BO comprises three parts: (i) a prior distribution, (ii) an acquisition function, and (iii) a covariance function.
\subsubsection{Prior}
We use a Gaussian Process (GP) prior, as it is particularly suited to optimisation tasks~\cite{mockus1994application}. A GP is a distribution over functions specified by its mean, $m$, and covariance, $k$, which are updated as hyper-parameter sets are evaluated. The GP returns $m$ and $k$ in place of the standard function $f$:
\begin{equation}
f(x)\sim GP(m(x),k(x,x')).
\end{equation}
\subsubsection{Covariance Function}
The covariance function determines the distribution of samples drawn from the GP~\cite{brochu2010tutorial,snoek2012practical}. Following \cite{snoek2012practical}, we select the 5/2 ARD Mat{\' e}rn kernel (\ref{eq:matern}), where $\theta$ is the covariance amplitude.
\begin{equation}
\label{eq:matern}
k_{m52}(x_i,x_j) = a\exp(-\sqrt{5r^2(x_i,x_j)}),
\end{equation}
where:
\begin{equation}
a=\theta(1+\sqrt{5r^2(x_i,x_j)}+\frac{5}{3}r^2(x_i,x_j)),
\end{equation}
where:
\begin{equation}
r^2(x_i,x_j)=\sum_{d=1}^{D}\frac{(x_{i,d}-x_{j,d})^2}{\theta_d^2}.
\end{equation}
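Numerically, the kernel of (\ref{eq:matern}) can be written as below; for readability this sketch uses a single lengthscale, whereas the ARD variant keeps one $\theta_d$ per dimension:

```python
import math

def matern52(xi, xj, theta=1.0, lengthscale=1.0):
    """Matern 5/2 covariance between two points given as sequences."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj)) / lengthscale ** 2
    s = math.sqrt(5.0 * r2)
    return theta * (1.0 + s + (5.0 / 3.0) * r2) * math.exp(-s)
```

The kernel equals $\theta$ at zero distance and decays monotonically as the points move apart.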
\subsubsection{Acquisition Function}
An acquisition function selects which point in the optimisation space to evaluate next. We evaluate three acquisition functions, each of which selects the hyper-parameters for the next experiment: Probability of Improvement (PI), Expected Improvement (EI)~\cite{mockus1994application}, and Upper Confidence Bound (UCB)~\cite{srinivas2009gaussian} --- see~\cite{brochu2010tutorial} for full implementation details. Briefly, {\bf PI} can be calculated, given our current maximum observation of the GP, $x^+$, by:
\begin{align}
PI(x) &= P(f(x)\geq f(x^+)+\zeta )\nonumber \\ &= \Phi(\frac{\mu(x)-f(x^+)-\zeta}{\sigma(x)}).\label{eq:PIZ}
\end{align}
Here, $\zeta\geq 0$ is a user-defined trade-off parameter that balances exploration and exploitation~\cite{lizotte2008practical}.
{\bf EI} maximises improvement with respect to $f(x^+)$:
\begin{equation}
I(x)=\max\{ 0, f(x)-f(x^+)\}.
\end{equation}
The new sample is found by maximising the expectation of $I(x)$:
\begin{equation}
x = \argmax_x\mathbb{E}(I(x)|\{ \mathbf{X},\mathbf{F}\}).
\end{equation}
Following ~\cite{jones1998efficient}, EI is evaluated by:
\begin{align}
EI(x) &= \begin{cases}
a+\sigma(x)\phi(Z), &\text{if } \sigma(x)>0,\\
0, & \text{otherwise};
\end{cases}\label{eq:EI}\\
a&=(\mu(x)-f(x^+)-\zeta)\Phi(Z);\\\nonumber
Z &= \begin{cases}
\frac{\mu-f(x^+)-\zeta}{\sigma(x)},&\text{if } \sigma(x)>0,\\
0, & \text{otherwise},
\end{cases} \nonumber
\end{align}
where $\phi$ and $\Phi$ correspond to the probability and cumulative distribution functions of the normal distribution, respectively.
{\bf UCB} maximises the upper confidence bound:
\begin{equation}
UCB(x) = \mu(x) + \kappa \sigma(x),
\end{equation}
where $\kappa \geq 0$ balances exploration and exploitation~\cite{snoek2012practical}, and is calculated per evaluation as:
\begin{equation}
\kappa = \sqrt{\mathit{\nu \tau_t}},
\end{equation}
where $\nu$ is a user-tunable variable and:
\begin{equation}
\tau_t = 2\log(\frac{t^{\frac{d}{2}+2}\pi^2}{3\delta}).
\end{equation}
$\delta \in (0,1)$, $d$ is the number of dimensions of the function, and $t$ is the iteration number.
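For a single candidate point with GP posterior mean $\mu$ and standard deviation $\sigma$, the three acquisition functions reduce to a few lines. This is a sketch of the closed forms above, adopting the $\sigma=0$ convention from (\ref{eq:EI}):

```python
import math

def norm_pdf(z):
    """Standard normal probability density."""
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    """Standard normal cumulative distribution."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def acquisitions(mu, sigma, f_best, zeta=0.01, kappa=2.576):
    """Return (PI, EI, UCB) at one point; f_best is f(x+)."""
    if sigma <= 0.0:
        return 0.0, 0.0, mu           # degenerate case: no uncertainty
    z = (mu - f_best - zeta) / sigma
    pi = norm_cdf(z)
    ei = (mu - f_best - zeta) * norm_cdf(z) + sigma * norm_pdf(z)
    ucb = mu + kappa * sigma
    return pi, ei, ucb
```

The default $\zeta$ and $\kappa$ are the values used in the experimental set-up below.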
\section{Test Problem}
This section will outline the rationale of the objective function, the experimental set-up, and assumptions. It is important to note that the motivation behind the model simplifications and objective function is for the work to be directly transferable to the neuromorphic processors described in~\cite{Qiao2015} once they are readily available.
\subsection{Objective Function}
\label{sec:OF}
The function to optimise was formulated as a weighted multi-objective function~\cite{deb2001multi}.
We direct the interested reader to~\cite{Salt2016} for a detailed formulation of the objective function, $F_{Acc}(\lambda)$, which is calculated by:
\begin{equation}
F_{Acc}(\lambda) = \begin{cases}
2\times F(\lambda), & \text{if } F(\lambda)>0\text{ and } Acc=1,\\
Acc\times F(\lambda), & \text{if } F(\lambda)>0,\\
0, & \text{if } Acc=1\text{ and }F(\lambda)<0,\\
F(\lambda), & \text{otherwise}.
\end{cases}
\end{equation}
Here, $Acc$ is the accuracy of the LGMDNN output and $F(\lambda)$ is the fitness function. The LGMD network is said to have detected a looming stimulus if the output neuron's spike rate exceeds a threshold $SL$. This can be formalised by:
\begin{equation}
Looming = \begin{cases}
\text{True}, &\text{if } SR>SL,\\
\text{False}, &\text{Otherwise},
\end{cases}
\end{equation}
where $SR$ can be calculated by:
\begin{equation}
SR = \sum^{t+\Delta T}_{i=t} S_i,
\end{equation}
where $\Delta T$ is the time over which the rate is calculated and $S_i$ is whether or not there is a spike at time $i$; a spike is defined to occur if at time $i$ the membrane potential exceeds $V_T$.
The looming outputs are categorised into true positives ($TP$), false positives ($FP$), true negatives ($TN$), and false negatives ($FN$). Output accuracy is then:
\begin{equation}
Acc = \frac{TP+TN}{TP+TN+FP+FN}.
\end{equation}
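Putting the thresholding and the confusion-matrix counts together, the accuracy computation is straightforward. This is a sketch; the window boundaries and ground-truth labels are assumed to be given:

```python
def classify_and_score(spike_rates, labels, sl):
    """Classify each window as looming when SR > SL, then return the
    accuracy (TP + TN) / (TP + TN + FP + FN)."""
    tp = fp = tn = fn = 0
    for sr, looming in zip(spike_rates, labels):
        predicted = sr > sl           # looming iff spike rate over threshold
        if predicted and looming:
            tp += 1
        elif predicted and not looming:
            fp += 1
        elif not predicted and not looming:
            tn += 1
        else:
            fn += 1
    return (tp + tn) / (tp + tn + fp + fn)
```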
$F(\lambda)$ can be calculated by:
\begin{equation}
F(\lambda) = \frac{Score(\lambda)+SSEOS(\lambda)}{2},
\end{equation}
where {\em Score} is a scoring function based on the timing of spiking outputs and $SSEOS$ is the sum squared error of the output signal.
The score is calculated as the difference between the summed rewards and the summed penalties over the simulation:
\begin{equation}
Score(\lambda) = \sum^N_{i=1}R_i - \sum^N_{i=1}P_i.
\end{equation}
The reward at a given time can be calculated by:
\begin{equation}
R(t) = \begin{cases}k\exp(\frac{t}{\Delta t})+1, & \text{if looming and spike},\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
The punishment can be calculated by:
\begin{equation}
P(t) =
\begin{cases}
(l-c)\frac{t}{\Delta t}+c,& \text{if not looming and} \\
\ & \text{spike and } t<\frac{\Delta t}{2};\\
(l-c)\frac{1-(t-\frac{\Delta t}{2})}{\frac{\Delta t}{2}}+c, & \text{if not looming} \\
\ & \text{and spike and } t>\frac{\Delta t}{2};\\
0, & \text{otherwise}.
\end{cases}
\end{equation}
In these equations $t$ and $\Delta t$ remain consistent with the other objective functions and $k$, $l$, and $c$ are all adjustable constants to change the level of punishment or reward.
To calculate $SSEOS(\lambda)$, the signal was first processed so that every spike had the same value. This was done so that the ideal voltage and the actual voltage would match in looming regions, as the voltage can vary for a given spike. Ultimately, the only criterion is that the voltage has crossed the spiking threshold. In the non-looming region the ideal signal was taken to be the resting potential, which was negative for the AEIF model equation. The signal error was calculated at every time step as:
\begin{equation}
SSEOS(\lambda) = -\sum^N_{i=1}(V_{actual}^i-V_{ideal}^i)^2.
\end{equation}
$V_{actual}$ is obtained directly from the state monitor object of the LGMD output neuron in the SNN simulator (Brian2). $N$ in this case is the length of the simulation and $i$ indexes each recorded data point at each time step of the simulation. $V_{ideal}$ is given by:
\begin{equation}
V_{ideal} = \begin{cases}
V_{spk}, & \text{if looming},\\
V_{r},& \text{otherwise},
\end{cases}
\end{equation}
where $V_{spk}$ is the normalised value given to each spike and $V_r$ is the resting potential.
Overall, this gives an objective function that takes into account the expected spiking behaviour, whilst penalising the system for deviating from plausible voltage values and rewarding it for accurately categorising looming and non-looming stimuli.
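The case analysis defining $F_{Acc}(\lambda)$ translates directly into code:

```python
def f_acc(f_lambda, acc):
    """Weighted multi-objective combination F_Acc(lambda) defined above."""
    if f_lambda > 0 and acc == 1:
        return 2 * f_lambda           # perfect accuracy, positive fitness
    if f_lambda > 0:
        return acc * f_lambda         # scale positive fitness by accuracy
    if acc == 1 and f_lambda < 0:
        return 0                      # perfect accuracy masks negative fitness
    return f_lambda
```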
\subsection{Experimental Set-up}
The model was set-up using Brian2 spiking neural network simulator~\cite{goodman2014brian}.
\subsubsection{Data Collection}
Data was collected using a Dynamic Vision Sensor (DVS) in-situ on a quadrotor UAV (QUAV). Two types of data were collected: simple and real world. The simple data was synthesised using PyGame to generate black shapes on a white background that increased in area in the field of view of the DVS. This included: a fast and slow circle, a fast and slow square, and a circle that loomed then translated while increasing in speed (composite). The laptop playing the stimuli was placed in front of the hovering QUAV and the stimuli were recorded. This was done to maintain any noise that might be generated by the propellers of the QUAV.
To challenge the model, real stimuli were also recorded: a white ball on a black slope was rolled towards the DVS from 3 different directions; a cup was suspended in the air and the QUAV flew towards and away from it; and a hand was moved towards and away from the DVS on the hovering QUAV. These stimuli increase in complexity in terms of the shapes presented.
Two looming and two non-looming events ($\sim$25\,s) from the composite stimulus were used to optimise the model, and the optimised model was then evaluated on the other stimuli. The stimuli were chosen to show that the generated model is both shape and speed invariant.
\subsubsection{Hyper-parameter Constraints}
\label{sssec:hpc}
The hyper-parameters were all continuous and could range from zero to infinity. Many regions of the parameter space were not computable, even when using a cluster with 368GB of RAM. To mitigate some of the computational difficulties, the temporal resolution of the simulation was set to 100$\mu s$. Bayesian optimisation using the expected improvement utility function (BO-EI) was used over 20 eight-hour runs to find feasible regions of the optimisation space.
$C$, $g_L$, $E_L$, $V_T$, and $\Delta_T$ were set as constants as they appeared to have little to no co-dependencies and model performance was not impacted by setting these values and appropriately optimising the other parameters~\cite{Salt2016}: $C=124.2pF$, $g_L=60.05nS$, $E_L=-73.12mV$, $V_T=-3.98mV$, and $\Delta_T=6.71mV$.
\tab{tab:Bounds} shows the constraints found for the rest of the hyper-parameters.
\begin{table}[htbp]
\centering
\caption{The constraints of the optimisation space.}\label{tab:Bounds}
\begin{tabular}{|>{\raggedright\arraybackslash}p{90pt}||>{\centering\arraybackslash}p{50pt}|>{\centering\arraybackslash}p{50pt}||}
\hline
\textbf{Parameter}& \textbf{Min} & \textbf{Max} \\
\hline\hline
$\mathbf{\tau_e}$ & 1 & 10\\\hline
$\mathbf{\tau_{iA}}$ & 1 & 20\\\hline
$\mathbf{\tau_{iB}}$ & 1 & 25\\\hline
$\mathbf{q_{eP}}$ & 1014 & 1363\\\hline
$\mathbf{q_{eS}}$ & 2000 & 5000\\\hline
$\mathbf{q_{eIP}}$ & 84 & 230\\\hline
$\mathbf{q_{eIS}}$ & 119 & 270\\\hline
$\mathbf{q_{eL}}$ & 29 & 472\\\hline
$\mathbf{inhA_S}$ &0.04&1.22 \\\hline
$\mathbf{inhB_S}$ &0.24&1.5 \\\hline
$\mathbf{inhA_L}$ &0.019&1.3 \\\hline
$\mathbf{a}$ &1&8\\\hline
$\mathbf{b}$ &40&141 \\\hline
$\mathbf{\tau_{w_{adapt}}}$ &1&150 \\\hline
$\mathbf{\tau_{pre}}$ &1&25 \\\hline
$\mathbf{\tau_{post}}$ &1&25\\\hline
$\mathbf{\Delta_{pre}}$ &1e-16&0.05\\\hline
$\mathbf{\Delta_{post}}$ &1e-16&0.05\\\hline
\hline
\end{tabular}
\end{table}
\subsubsection{Comparing Optimisers}
SADE, DE, BO-EI, BO-PI, and BO-UCB were evaluated thirty times on the same input stimulus, so that they could be statistically compared using a Mann-Whitney U test.
The input stimulus included a black circle on a white background performing a short translation to the right, followed by a half loom, a full recession, and then a full loom (the first two non-loom to loom transitions of the composite stimulus). The stimulus was selected because it consisted of a 50:50 looming to non-looming ratio. The values of the user-defined parameters were selected as:
\begin{itemize}
\item BO-EI and BO-PI: $\zeta=0.01$;
\item BO-UCB: $\kappa=2.576$;
\item DE: $NP=\frac{10dim}{3}$, $F=0.6607$, $CR=0.9426$;
\item SADE: $LP=3$, $NP=\frac{10dim}{3}$, where $dim$ is the number of hyper-parameters.
\end{itemize}\par
The tests were run using the non-adaptive and non-plastic model with the bounds from \tab{tab:Bounds}. They were defined as having converged if they had not improved for $3\times NP$ evaluations. This meant three generations for the DE algorithms and the same number of BO evaluations. The population size was two more than what is recommended by~\cite{pedersen2010good} for the DE algorithm. This size was chosen as it is relatively small and time was an issue. The short convergence meant that the SADE algorithm needed to have a short LP. The processor time was not included as a metric for this as the tests were run on three different computers so the results would not have been comparable.
\subsubsection{Comparing Models}
Once the best optimiser was found (a comparison of optimisers can be found in \ssec{ssec:optcomp}), the best performing optimiser, SADE, was used to optimise the following models:
\begin{description}
\item[\textbf{LGMD:}] Neuromorphic LGMD;
\item[\textbf{A:}] LGMD with adaptation;
\item[\textbf{P:}] LGMD with plasticity;
\item[\textbf{AP:}] LGMD with adaptation and plasticity.
\end{description}
The SADE variables were set to: $LP=3$ and $NP=10dim$. The optimisation process was run 10 times and the best parametrisation from these ten runs was selected. The model was then tested on each input case for ten looming to non-looming or non-looming to looming transitions. The performance of each model is reported in \ssec{ssec:modcomp}.
Plasticity was sometimes found to degrade the performance, so we experimented with clamping it between 0\% and 100\% of the original synaptic strength. At 100\% the weights could range from zero to double their original values; at 0\% no variation was allowed.
\section{Results and Discussion}
The results are split into two subsections. First, we will compare the optimisers and then we will compare the addition of adaptation, plasticity, and adaptation and plasticity combined to the baseline model.
The models are evaluated on their accuracy (Acc), sensitivity (Sen), precision (Pre), and specificity (Spe). Acc is defined in \ssec{sec:OF}. The other metrics can be found in~\cite{alpaydin}.
\subsection{Optimiser Comparison and Statistical Analysis}
\label{ssec:optcomp}
\tab{tab:OptimiserResults} shows that the SADE algorithm achieved the best fitness, accuracy, precision, and specificity. The BO-PI algorithm converged on its solution in the least number of objective function evaluations and the DE algorithm achieved the best sensitivity but the worst fitness, precision, and specificity.
\begin{table}[htbp]
\centering
\caption{Optimisation algorithm metrics.\label{tab:OptimiserResults}}
\begin{tabular}{|l||c|c|c|c|c|c||}
\hline
\textbf{Meth} & \textbf{Fit} & \textbf{Eva} & \textbf{Acc} & \textbf{Sen} & \textbf{Pre} & \textbf{Spe} \\
\hline\hline
\textbf{BPI} & -197.1 & \textbf{162.5} & 0.64 & 0.47 & 0.78 & 0.81 \\
\textbf{DE} & -675.4 & 238.8 & 0.62 & \textbf{0.65} & 0.76 & 0.59 \\
\textbf{BEI} & -454.0 & 181.3 & 0.63 & 0.57 & 0.80 & 0.69 \\
\textbf{SADE} & \textbf{-84.9} & 253.2 & \textbf{0.66} & 0.45 & \textbf{0.88} & \textbf{0.87} \\
\textbf{BUCB} & -533.3 & 180.0 & 0.62 & 0.61 & 0.78 & 0.64 \\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\centering
\caption{Comparison of the statistical significance of the results. }\label{tab:statistics}
\begin{tabular}{|l|l||c|c|c|c|c|c||}
\hline
\textbf{} & \textbf{Meth} & \textbf{Fit} &\textbf{ Eva}& \textbf{Acc} & \textbf{Sen} & \textbf{Pre} & \textbf{Spe} \\
\hline
\hline
\multirow{4}{*}{\textbf{BUCB}} &BPI & + & .& . & + & .& . \\\cline{2-8}
&DE & . & + & . & . & . & . \\\cline{2-8}
&BEI & . & . & . & . & . & . \\\cline{2-8}
&SADE & + & + & . & + & + & + \\\cline{2-8}
\hline
\hline
\multirow{4}{*}{\textbf{DE}} &BPI & + & + & . & + & . & . \\\cline{2-8}
&BEI & . & + & . & . & . & . \\\cline{2-8}
&SADE & + & . & + & + & + & + \\\cline{2-8}
&BUCB & . & + & . & . & . & . \\\cline{2-8}
\hline
\hline
\multirow{4}{*}{\textbf{BEI}} &BPI & + & . & . & + & . & . \\\cline{2-8}
&DE & . & + & . & . & . & . \\\cline{2-8}
&SADE & + & + & . & + & . & . \\\cline{2-8}
&BUCB & . & . & . & . & . & . \\\cline{2-8}
\hline
\hline
\multirow{4}{*}{\textbf{SADE}}&BPI & . & + & . & . & + & + \\\cline{2-8}
&DE & + & . & + & + & + & + \\\cline{2-8}
&BEI & + & + & . & + & . & . \\\cline{2-8}
&BUCB & + & + & . & + & + & + \\\cline{2-8}
\hline
\hline
\multirow{4}{*}{\textbf{BPI}} &DE & + & + & . & + & . & . \\\cline{2-8}
&BEI & . & . & . & + & . & . \\\cline{2-8}
&SADE & . & + & . & . & + & + \\\cline{2-8}
&BUCB & + & . & . & + & . & . \\\cline{2-8}
\hline
\hline
\end{tabular}
\end{table}
\tab{tab:statistics} shows the statistical significance of the results from \tab{tab:OptimiserResults}. The method in the comparison column is compared to each method in the subsequent column. A `+' indicates a statistically significant difference and a `.' indicates no statistical significance. Statistical significance was defined as $p\leq 0.05$. The Mann-Whitney U test was used to determine statistical significance because it does not require normally distributed samples.
SADE's better fitness is statistically significant compared to all optimisers other than BO-PI. However, to achieve this fitness it also performed the most evaluations when compared to the others. This difference is significant compared to all the optimisers except for DE, which has almost the same number of evaluations. SADE also has significantly worse sensitivity than all but the BO-PI algorithm. Both SADE and BO-PI scored the best fitness values whilst exhibiting the significantly lowest sensitivity values when compared to the other algorithms.
BO-PI was significantly better than DE and BO-UCB for fitness. It also required significantly fewer evaluations than DE and SADE. Its precision and specificity are significantly less than those of the SADE algorithm.
BO-EI has significantly worse fitness and sensitivity when compared to BO-PI and SADE.
DE took significantly more evaluations to converge when compared to all algorithms but SADE. It also had significantly worse fitness, accuracy, precision and specificity than SADE but significantly higher sensitivity. It had significantly worse fitness but significantly better sensitivity than BO-PI.
BO-UCB had significantly worse fitness but better sensitivity than SADE and BO-PI. It also had significantly worse precision and specificity than SADE.
A possible reason that DE underperformed is that the $F$ values provided in \cite{pedersen2010good} are not appropriate for this problem. The population size may have also been too small. Before the SADE algorithm was implemented, doubling the recommended population size made DE find better results than when it had a smaller population. When the population size is too small, whole regions of the parameter space can be missed resulting in poor performance.
SADE removes the need to find control parameters and has been shown to perform as well or better than DE even when the control parameters are well selected~\cite{qin2009differential}. The generalisability that comes with finding the right control parameters on-the-fly is also appealing.
The addition of the various mutation functions to SADE also seems to help it find better results. This is probably due to the desirable properties of each mutation function cancelling out the undesirable properties of other mutation functions.
A surprising result was that of the BO algorithms BO-PI seemed to perform the best. This is contrary to what the authors in~\cite{brochu2010tutorial} found. They suggested that it tended to have the worst performance of the three.
\subsection{SADE Averages}
The SADE algorithm performed the best out of all of the algorithms. \fig{fig:sAVG} shows the average $Fitness_{mod}$ of the population over 19 generations. The average $Fitness_{mod}$ converged within five generations. The max $Fitness_{mod}$ starts at 0, indicating that a 100\% accuracy candidate was found in the initialisation period. The max $Fitness_{mod}$ then rises to 400, which is not visible in the figure because the average score spans -50000 to -1500.\par
\begin{figure}[h!tbp]
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{scoreAVGpng}
\caption{}\label{fig:sAVG}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{FAVGpng}
\caption{}\label{fig:fAVG}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{CRAVGpng}
\caption{}\label{fig:crAVG}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{pAVGpng}
\caption{}\label{fig:pAVG}
\end{subfigure}
\caption{Averages $F_{acc}(\lambda)$, $F$, $CR$, and $p$ for the SADE population over 19 generations. The dotted vertical line indicates that the learning period has ended.}
\end{figure}
The $F$ average results in \fig{fig:fAVG} are quite interesting. They start at 1, as they are selected from $U([0,2])$, and then drop to 0.5 as they are selected from $U([0,1])$ after the first generation. Once the learning period has finished, all of the $F$ values have converged to less than 0.1. This indicates that the most successful $F$ values are small, favouring exploitation rather than exploration. It was unexpected that the algorithm would find a minimum or maximum within so few generations. This could be why the authors of~\cite{qin2009differential} select $F$ from $N(0.5,0.3)$, allowing $F$ to range over $[-0.4,1.4]$. With $F$ this small the algorithm would effectively be performing gradient descent. However, this could be because the objective function on the restricted space does not have many local maxima. Indeed, these results do come from the best performing LGMD model found.
\fig{fig:crAVG} shows how the $CR$ value for each strategy changes over time. For the first nine generations, the $CR$ values are selected from $U([0,1])$, so the mean stays at 0.5. However, as with the mean $F$ values, once the learning period is over all of the $CR$ values drop below 0.1. This means that fewer than 10\% of the donor elements will generally be inherited. With a set of 11 hyper-parameters, this means that probabilistically one value will change in addition to the random index that is chosen. $CR$ is generally associated with convergence.
\begin{table}[h!tbp]
\centering
\caption{Parameters used by each model. \label{tab:PARAMS}}
\begin{tabular}{|l||c|c|c|c||}
\hline
\textbf{Parameter} & \textbf{LGMD} &\textbf{A}&\textbf{P}&\textbf{AP}\\
\hline\hline
$\mathbf{\tau_e (ms)}$ & 5.87&5.87&5.87&5.87 \\\hline
$\mathbf{\tau_{iA} (ms)}$ & 3.57&3.57&3.57&3.57 \\\hline
$\mathbf{\tau_{iB} (ms)}$ & 4.20&4.20&4.20&4.20 \\\hline
$\mathbf{q_{eP} (pA)}$ & 1014.00&1014.00&1014.00&1014.00 \\\hline
$\mathbf{q_{eS} (pA)}$ &4635.30&4635.30&4635.30&4635.30 \\\hline
$\mathbf{q_{eIP} (pA)}$&84.26&84.26 & 84.26& 84.26 \\\hline
$\mathbf{q_{eIS} (pA)}$ & 168.11&168.11&168.11&168.11 \\\hline
$\mathbf{q_{eL} (pA)}$ & 80.00& 100.00 &80.00 &100.00 \\\hline
$\mathbf{inhA_S (1)}$ &1.19&1.19&1.19&1.19\\\hline
$\mathbf{inhB_S (1)}$ &1.50&1.50&1.50&1.50\\\hline
$\mathbf{inhA_L (1)}$ &0.14&0.14&0.14&0.14 \\\hline
$\mathbf{a (1)}$ &-&0.79&-&0.79\\\hline
$\mathbf{b (1)}$ &-&14.51&-&14.51\\\hline
$\mathbf{\tau_{w_{adapt}} (ms)}$ &-&30.00&-&30.00 \\\hline
$\mathbf{\tau_{pre} (ms)}$ &-&-&1.56&1.56 \\\hline
$\mathbf{\tau_{post (ms)}}$ &-&-&10.03&10.03\\\hline
$\mathbf{\Delta_{pre} (1)}$ &-&-&0.031&0.031\\\hline
$\mathbf{\Delta_{post} (1)}$ &-&-&0.027&0.027\\\hline
$\mathbf{c (1)}$ &-&-&0.05&0.05\\\hline
\hline
\end{tabular}
\end{table}
The probability of each function being chosen is shown in \fig{fig:pAVG}. The probabilities are fixed at 0.25 for the first 9 generations and then they vary based on their success. It is interesting to see that in spite of the $F$ and $CR$ values suggesting that the algorithm is converging on a solution, the DE/Rand-to-Best/2/Bin algorithm is the least successful. The DE/Curr-to-Rand/1 algorithm performs relatively well until about 16 generations where it tapers off. The DE/Rand/2/bin algorithm dips initially but then increases as DE/Curr-to-Rand/1 starts to drop off. The DE/Rand/1/bin remains relatively high during the entire algorithm only to be overtaken by The DE/Rand/2/bin in the last generation.
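The probability update behind \fig{fig:pAVG} can be sketched following the usual SaDE scheme: after the learning period, each of the four strategies is selected with a probability proportional to its success rate, with a small floor so no strategy is abandoned entirely. The exact bookkeeping used here is our assumption:

```python
def update_strategy_probs(successes, failures, eps=0.01):
    """Recompute the selection probability of each DE mutation strategy
    from counts of trial vectors that were accepted (successes) or
    rejected (failures) over the learning period. Each probability is
    proportional to the strategy's success rate plus a small floor eps,
    so that no strategy's probability ever reaches zero."""
    rates = [s / (s + f + 1e-12) + eps for s, f in zip(successes, failures)]
    total = sum(rates)
    return [r / total for r in rates]
```

A strategy that stops producing surviving offspring, as DE/Curr-to-Rand/1 does late in the run, therefore sees its probability decay while the others absorb the difference.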
\subsection{Comparison of Models}
\label{ssec:modcomp}
\tab{tab:PARAMS} shows the selected final parameters of each model. These values were all found by the SADE algorithm, due to the superior quality of its results. The (1) tag in the parameter column indicates that the variable is unit-less.
In both models with plasticity, the clamping value $c$ was set to 0.05, or 5\%.
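The role of the clamping value can be sketched as follows; the convention that plastic weights start at a baseline of 1 and are bounded symmetrically to within a fraction $c$ of that baseline is our assumption for illustration:

```python
def apply_plasticity(w, dw, c=0.05):
    """Apply a plasticity update dw to a synaptic weight w whose
    baseline is 1.0, clamping the result to the interval
    [1 - c, 1 + c], e.g. [0.95, 1.05] for c = 0.05."""
    return min(1.0 + c, max(1.0 - c, w + dw))
```

With $c=0.05$ even a large depression or potentiation step can move a weight by at most 5\% from its initial value.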
\begin{table*}[h!tbp]
\centering
\caption{Quality metrics of the performance of different LGMD models for different simulated looming stimuli. }\label{tab:LGMDComp}
\begin{tabular}{|>{\raggedright\arraybackslash}p{85pt}|>{\centering\arraybackslash}p{50pt}||>{\centering\arraybackslash}p{52pt}|>{\centering\arraybackslash}p{58pt}|>{\centering\arraybackslash}p{52pt}|>{\centering\arraybackslash}p{58pt}||}\hline
\textbf{Stimulus}&\textbf{Model}&\textbf{Accuracy}&\textbf{Sensitivity}&\textbf{Precision}&\textbf{Specificity}\\\hline\hline
\multirow{5}{*}{\textbf{composite}}
&\textbf{LGMD}&0.90&1.00&0.83&0.80\\\cline{2-6}
&\textbf{A}&0.90&1.00&0.83&0.80\\\cline{2-6}
&\textbf{P}&0.90&1.00&0.83&0.80\\\cline{2-6}
&\textbf{AP}&0.90&1.00&0.83&0.80\\\hline
\multirow{5}{*}{\textbf{circleSlow}}
&\textbf{LGMD}&0.80&0.60&1.00&1.00\\\cline{2-6}
&\textbf{A}&0.80&0.60&1.00&1.00\\\cline{2-6}
&\textbf{P}&0.90&0.80&1.00&1.00\\\cline{2-6}
&\textbf{AP}&1.00&1.00&1.00&1.00\\\hline
\multirow{5}{*}{\textbf{circleFast}}
&\textbf{LGMD}&1.00&1.00&1.00&1.00\\\cline{2-6}
&\textbf{A}&1.00&1.00&1.00&1.00\\\cline{2-6}
&\textbf{P}&1.00&1.00&1.00&1.00\\\cline{2-6}
&\textbf{AP}&1.00&1.00&1.00&1.00\\\hline
\multirow{5}{*}{\textbf{squareSlow}}
&\textbf{LGMD}&1.00&1.00&1.00&1.00\\\cline{2-6}
&\textbf{A}&1.00&1.00&1.00&1.00\\\cline{2-6}
&\textbf{P}&1.00&1.00&1.00&1.00\\\cline{2-6}
&\textbf{AP}&1.00&1.00&1.00&1.00\\\hline
\multirow{5}{*}{\textbf{squareFast}}
&\textbf{LGMD}&1.00&1.00&1.00&1.00\\\cline{2-6}
&\textbf{A}&1.00&1.00&1.00&1.00\\\cline{2-6}
&\textbf{P}&1.00&1.00&1.00&1.00\\\cline{2-6}
&\textbf{AP}&1.00&1.00&1.00&1.00\\\hline
\end{tabular}
\end{table*}
\begin{figure}[t!p]
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{comP}
\caption{Filtered Composite Input (P Layer Raster Plot).}\label{fig:comP}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{circSlowP}
\caption{Filtered circleSlow Input (P Layer Raster Plot).}\label{fig:cSP}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{squarFastP}
\caption{Filtered squareFast Input (P Layer Raster Plot).}\label{fig:sFP}
\end{subfigure}
\caption{The input layer for the simple stimuli. The white and coloured backgrounds indicate non-looming and looming respectively.}\label{simpleStim}
\end{figure}
As expected, all of the models have $\tau_{iA} < \tau_{iB}$, which means that the $B$ inhibitions will persist for longer and have slower dynamics relative to the $A$ inhibitions. What is unexpected is that the $B$ inhibitions also have stronger current injection than the $A$ inhibitions. On top of this, both of the inhibitory current injections are actually stronger than the excitatory connections. In contrast, the model in~\cite{yue2010reactive} with discrete dynamics had relatively low inhibitory current injections, with $inhA_S = 0.25$ and $inhB_S = 0.125$ of the excitation strength. Clearly, there is a difference between the neuron models that are used, but this is an interesting outcome nonetheless.
\begin{figure}[h!tpb]
\centering
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{ballRollP}
\caption{Filtered ballRoll2 Input (P Layer Raster Plot).}\label{fig:BRP}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{loomingCup}
\caption{Filtered cupQUAV Input (P Layer Raster Plot).}\label{fig:lCP}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{handP}
\caption{Filtered Hand Input (P Layer Raster Plot).}\label{fig:hP}
\end{subfigure}
\caption{Complex real stimuli. The white and coloured backgrounds indicate non-looming and looming respectively.}
\end{figure}
\tab{tab:LGMDComp} shows the accuracy, sensitivity, precision, and specificity for each LGMD model for a given simple stimulus. The stimuli can be described as follows:
\begin{description}[topsep=0.2ex]
\item[\textbf{composite:}] A standard test bench stimulus that consists of a black circle on a white background that translates and looms at increasing speeds. \fig{fig:comP} shows the composite input.
\item[\textbf{circleFast/Slow:}] A purely looming black circle on a white background at high or low speeds, collected on a hovering QUAV. \fig{fig:cSP} shows the circleFast/Slow stimulus.
\item[\textbf{squareFast/Slow:}] A purely looming black square on a white background at high/low speeds. \fig{fig:sFP} shows the squareFast/Slow stimulus.
\end{description}
The results in \tab{tab:LGMDComp} show that the models performed well ($Accuracy\geq 0.8$) on most of the stimuli. LGMD and \textbf{A} perform poorly on the circleSlow test, missing two out of five of the looming stimuli. \textbf{P} misses one looming stimulus, and \textbf{AP} detects all stimuli accurately. The plasticity increases the weights of important connections and the adaptation filters out over excited neurons.
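The four metrics in \tab{tab:LGMDComp} follow the standard confusion-matrix definitions; as a self-contained sketch:

```python
def quality_metrics(tp, fp, tn, fn):
    """Confusion-matrix metrics as used in the comparison tables:
    accuracy, sensitivity (true-positive rate), precision
    (positive predictive value), and specificity (true-negative rate)."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return accuracy, sensitivity, precision, specificity
```

For example, reading the circleSlow LGMD row as two of five looms missed with no false alarms on five non-looming sections (the counts are our inference), `quality_metrics(3, 0, 5, 2)` reproduces the tabulated 0.80, 0.60, 1.00, and 1.00.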
These results show that the models are capable of detecting looming stimuli of varying speeds and of differentiating between translation and looming stimuli for the most part. \textbf{AP} scored 100\% in every test besides the composite stimulus where it misclassified the first short translation as a loom. This can probably be attributed to the network not starting in its resting/equilibrium state.
After performing the simulated experiments of computer generated shapes, real objects moving towards and away from the camera were recorded. These stimuli can be described as:
\begin{description}
\item[\textbf{ballRoll[1-3]:}] Three different runs of a white ball rolling towards the camera on a black platform at different angles and speeds. This is a purely looming stimulus. \fig{fig:BRP} shows one of the three ball rolls.
\item[\textbf{cupQUAV:}] A QUAV flying towards a cup suspended in front of it with a white wall behind it. This is a self-induced (ego-motion) stimulus. \fig{fig:lCP} shows the cupQUAV stimulus.
\item[\textbf{Hand:}] A hand moving towards and away from the hovering QUAV. \fig{fig:hP} shows the looming hand stimulus.
\end{description}\par
\fig{fig:BRP}, \fig{fig:lCP}, and \fig{fig:hP} show that the real stimuli tend to have more noise and do not adhere to a strong pattern when compared to \fig{fig:comP}, \fig{fig:cSP}, and \fig{fig:sFP}. \tab{tab:LGMDComp2} shows that the models do not perform as well on real world stimuli. ballRoll[1-3] is the simplest real stimulus, and as such \textbf{P} and \textbf{AP} achieved full accuracy. LGMD and \textbf{A} missed one roll.
\begin{table*}[h!tbp]
\centering
\caption{Quality metrics of the performance of different LGMD models for different real looming stimuli.}\label{tab:LGMDComp2}
\begin{tabular}{|>{\raggedright\arraybackslash}p{85pt}|>{\centering\arraybackslash}p{50pt}||>{\centering\arraybackslash}p{52pt}|>{\centering\arraybackslash}p{58pt}|>{\centering\arraybackslash}p{52pt}|>{\centering\arraybackslash}p{58pt}||}\hline
\textbf{Stimulus}&\textbf{Model}&\textbf{Accuracy}&\textbf{Sensitivity}&\textbf{Precision}&\textbf{Specificity}\\\hline\hline
\multirow{5}{*}{\textbf{ballRoll[1-3]}}
&\textbf{LGMD}&0.66&0.66&1.00&0.00\\\cline{2-6}
&\textbf{A}&0.66&0.66&1.00&0.00\\\cline{2-6}
&\textbf{P}&1.00&1.00&1.00&0.00\\\cline{2-6}
&\textbf{AP}&1.00&1.00&1.00&0.00\\\hline
\multirow{5}{*}{\textbf{cupQUAV}}
&\textbf{LGMD}&0.70&1.00&0.62&0.40\\\cline{2-6}
&\textbf{A}&0.70&1.00&0.62&0.40\\\cline{2-6}
&\textbf{P}&0.70&1.00&0.62&0.40\\\cline{2-6}
&\textbf{AP}&0.80&1.00&0.71&0.60\\\hline
\multirow{5}{*}{\textbf{hand}}
&\textbf{LGMD}&0.50&1.00&0.50&0.00\\\cline{2-6}
&\textbf{A}&0.50&1.00&0.50&0.00\\\cline{2-6}
&\textbf{P}&0.50&1.00&0.50&0.00\\\cline{2-6}
&\textbf{AP}&0.50&1.00&0.50&0.00\\\hline
\end{tabular}
\end{table*}
Surprisingly good results come from the cupQUAV stimulus: 70\% accuracy for all models except for \textbf{AP}, which had 80\%. It is worth noting that \textbf{AP} performed consistently well when compared with the other models.
The possibility of detecting the hand by stochastically dropping pixel-events was investigated. Dropping 50\% of the DVS events and re-optimising the network gave 100\% accuracy for the hand and cupQUAV stimuli. However, in doing this, the network was no longer robust to the speed changes in the composite benchmark test. Indeed, even using all of the pixels, the network could be optimised to work on the real-world stimuli. The inhibition values went up and the gain values went down, meaning that the network struggled to spike on stimuli that were not noisy or event-heavy. Some form of additional pre-filtering could be useful in making the looming network fully robust in all situations.
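The event-dropping experiment can be sketched as a simple stochastic filter over the DVS event stream; representing the stream as a plain Python list is an assumption for illustration:

```python
import random


def drop_events(events, p_drop=0.5, rng=random):
    """Return a copy of a DVS event stream with each event independently
    discarded with probability p_drop, as in the 50% event-dropping
    experiment on the hand and cupQUAV stimuli."""
    return [e for e in events if rng.random() >= p_drop]
```

Thinning the stream this way lowers the overall event rate without biasing any particular image region, which is why it mimics a less event-heavy stimulus.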
\subsubsection{The Effect of Changing $c$ on Plasticity}
\label{ssec:plasC}
\begin{figure}[h!tbp]
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{plasticityComposite}
\caption{Effect of changing the $c$ clamping value on the learning weight $w$ for the composite stimulus}\label{fig:pCom}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{plasticitySlow}
\caption{Effect of changing the $c$ clamping value on the learning weight $w$ for the circleSlow stimulus}\label{fig:psloC}
\end{subfigure}
\begin{subfigure}{1\columnwidth}
\centering
\includegraphics[width=1\columnwidth]{plasticityHand}
\caption{Effect of changing the $c$ clamping value on the learning weight $w$ for the hand stimulus}\label{fig:pH}
\end{subfigure}
\caption{The effect of changing the clamping value on various stimuli}
\end{figure}
\fig{fig:pCom}, \fig{fig:psloC}, and \fig{fig:pH} show how changing the bounds of the plasticity clamping changes the LGMD (\textbf{P} model) accuracy for the composite, circleSlow, and hand stimuli respectively.
Interestingly, for the two simulated stimuli, increasing the clamping beyond 25\% caused the accuracy to drop to 50\%. The sensitivity dropped to 0\%, indicating that looms were no longer being detected and that the synaptic weights were no longer causing the LGMD neuron to fire.
Increasing the clamping to 45\% increases the accuracy for both the \textbf{P} and \textbf{AP} models on the hand stimulus. This shows that plasticity is a double-edged sword that can both improve and degrade the performance of the model. Knowledge about the nature of the input can help to determine the appropriate level of plasticity. In all cases, a small contribution of plasticity improved the performance. This could be because the amount of noise in the simulated stimuli was far less than in the real stimuli.
\subsubsection{Weight Visualisation}
\begin{figure}[h!tbp]
\centering
\includegraphics[width=1\columnwidth]{PtoIP}
\caption{The weights of the synapses from the P to the IP layer at the end of a looming or non-looming sequence.}\label{fig:PIP}
\end{figure}
\begin{figure}[h!tbp]
\centering
\includegraphics[width=1\columnwidth]{IP2LGMD}
\caption{The weights of the synapses from the IP to the LGMD layer at the end of a looming or non-looming sequence.}\label{fig:IPL}
\end{figure}
\begin{figure}[h!tbp]
\centering
\includegraphics[width=1\columnwidth]{IS2LGMD}
\caption{The weights of the synapses from the IS to the LGMD layer at the end of a looming or non-looming sequence.}\label{fig:ISL}
\end{figure}
\fig{fig:PIP}, \fig{fig:IPL}, and \fig{fig:ISL} show snapshots of the weights at the end of each looming or non-looming sequence. We used the \textbf{P} model with $c=0.25$ on the composite stimulus. This was done because it achieved 100\% accuracy and 25\% clamping has greater weight variation than 10\%.
\fig{fig:PIP} is interesting as it most obviously correlates to the input. We can see in the first non-looming snapshot that the P-IP layer is strongly inhibiting a circle translating from right to left. In the looming section, the circle is moving outwards and the central weights have the highest density of low values. This shows that the centre of the circle is not associated with the output. In the second non-looming snapshot, the density of high values is in the centre of the circle showing that it has higher inhibitions.
\fig{fig:IPL} and \fig{fig:ISL} show the IP and IS connections to the LGMD layer. The IP-LGMD snapshots tend to have higher weights during looming than non-looming stimuli. Interestingly, in both figures, the highest value is one, meaning that the weights have only become weaker than they initially were, at least for these selected times.
\section{Conclusions}
We implemented a neuromorphic model of the locust LGMD network using recordings from a UAV equipped with a DVS sensor as inputs. The neuromorphic LGMDNN was capable of differentiating between looming and non-looming stimuli, and it detected the simple black and white stimuli correctly regardless of speed and shape. The model performed relatively well on real-world stimuli using the parameters found by the optimiser for synthesised stimuli; when re-optimised on the real-world stimuli, its performance became comparable to that on the synthesised ones. This was mainly because real-world stimuli tend to contain a higher number of luminance changes, and therefore the magnitude parameters needed to be reduced.
We showed that BO, DE, and SADE are capable of finding parameter values that give the desired performance in the LGMDNN model. It can be seen that SADE statistically significantly outperformed DE on all metrics besides sensitivity and the number of evaluations, although the only metrics that formed part of the objective function were fitness and accuracy. Once a suitable objective function was found that accurately described the desired output of the LGMDNN, BO, DE and SADE outperformed hand-crafted attempts. The algorithms were able to achieve 100\% accuracy on black and white simple stimuli of varying shapes and speeds. SADE performed well in this task and we have shown that it is suitable for the optimisation of a multi-layered LGMD spiking neural network. This could save time when developing biologically plausible SNNs in related applications.
In the future, we would like to apply the optimisation algorithms directly to tuning the neuromorphic processors using the neuromorphic model, with the end goal being a closed loop control system on a UAV. Showing that optimisation is effective for selecting parameters on neuromorphic hardware will increase their usability.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
We are grateful to Prof. Claire Rind, who provided valuable comments and feedback on the definition of the neuromorphic model, and acknowledge the CapoCaccia Cognitive Neuromorphic Engineering workshop, where these discussions and model developments took place.
We would also like to thank INILABs for use of the DVS sensor and the Institute of Neuroinformatics (INI), University of Zurich and ETH Zurich for its neuromorphic processor developments.
Part of this work was funded by the EU ERC Grant ``neuroP'' (257219) and EU H2020-MSCA-IF-2015 grant ``ECogNet'' (707373).
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
1,116,691,498,289 | arxiv | \section{Introduction}
Spin dynamics of hot electrons in solids is currently studied intensively \cite{1}. One expects on general grounds that the magnetism in excited non--equilibrium solids, ferromagnets, magnetic semiconductors, tunnel junctions, quantum dots etc., is controlled by energy and angular momentum conservation. The ultrafast fs--time response occurs due to correspondingly fast electronic transitions, strong electron interactions, and fast angular momentum transfer, which may also involve the external electromagnetic field exciting the system, see the Maxwell equations \cite{2}. Clearly, the spin
dynamics of the hot electrons is set by the strength of the molecular fields acting on them and by the angular momentum transfer, and thus varies for different electronic states.
At non--equilibrium a temperature $T_{el}$ is quickly established and acts as a control parameter for the non--equilibrium state; this is also largely the case for strongly itinerant magnetism. In ferromagnets with dominant local moments, Heisenberg--like ferromagnets, the temperature $T_{spin}$ referring to the magnetic moment disorder is the control parameter. An interesting case occurs when the dynamics involves changes of both the magnitude and the direction of the magnetic moments.
Note, during ultrafast dynamics (on the fs time scale, 10 to 20 fs or less) no electron temperature etc. is established.
Here, we discuss in particular the time dependent response of hot electrons in exchange split states and bands: different energy shifts all resulting from spin flip transitions between spin split
states. Note, such transitions may also involve polarized light emission. On general grounds one expects that, for energetic reasons, spin flip transitions of hot minority electrons occur first and more frequently. The spin flip transitions act like a hybridization of the spin split states; for illustration see Figs. 1 and 2. Obviously, the transition $\downarrow\rightarrow\uparrow$ causes a shift to larger binding energies. For hot majority electrons a corresponding shift occurs to lower binding energies \cite{3}.
Note, the asymmetry of $\downarrow\rightarrow\uparrow$ and $\uparrow\rightarrow\downarrow$ spin flip transitions
is also reflected by the lifetimes of hot electrons in ferromagnetic metals\cite{1,4}.
Of course, ultrafast photoemission spectroscopy should generally exhibit such behavior; see in particular the recent experiments by Weinelt et al., who observed such shifts in Gd \cite{3}.
The electron self--energy $\Sigma_\sigma (\varepsilon, t, F, ...)$, ($\Sigma_\sigma = \sum_{\sigma^{'}}\int G_{\sigma^{'}} T_{\sigma \sigma^{'}}$, where T is the spin dependent t--scattering matrix and F is the light fluence), of the spin flipping electrons describes the dynamic response,
its real part gives the spin dependent electron energy shifts ($ \Delta \varepsilon_{\sigma} \propto Re \Sigma_{\sigma}$) and its imaginary part the corresponding lifetimes ($\tau^{-1}_{\sigma} \propto Im \Sigma_{\sigma}$). For discussion, note approximately $\Sigma_\sigma \propto \mid V_{\sigma,\sigma^{'}}\mid^2 N N$, where $V$ denotes the spin flipping potential and N the averaged density of states \cite{1,4}.
In view of the spin split density of states in Fe, Co, Ni, for example, one expects at non--equilibrium much smaller asymmetry effects for Co and Ni than for Fe. However, for spin split states in thin films, quantum well states, quantum dots, valence states in Gd (spin split states due to exchange coupling of the 5d--valence electrons to the 4f--electron magnetization), in general for narrow spin split bands such asymmetry regarding spin flip transitions may be large.
Then, for large exchange splitting the minority electrons respond faster upon magnetic disorder. The asymmetry increases with increasing molecular field and disappears when the molecular field and the long range magnetic order vanish. Also for large electron temperatures ($T_{el}\rightarrow T_c$) this asymmetry should become smaller and disappear. In case of laser light excitations the asymmetry should become smaller for increasing fluence $F$.
Aside from density of states effects, the energy shifts of minority and majority electrons are expected to reach the same magnitude at larger times.
\begin{figure}
\centerline{\includegraphics[width=.5\textwidth]{1neu.eps}}
\caption{Spin split (states) bands of itinerant electrons. The splitting $\protect\Delta_{ex}(t)=\varepsilon_\downarrow-\varepsilon_\uparrow$ may result from exchange coupling or generally a time dependent molecular field $H_{eff}(t)$ (and possibly including an external magnetic field) acting on the spins of the minority $\protect(\downarrow)$ and majority $\protect(\uparrow)$ electrons. At non--equilibrium spin flip transitions cause a spin dependent shift $\Delta \varepsilon_\sigma$ varying in time t. The magnetization $\protect\overrightarrow{M(t)}$ causes asymmetric behavior of spin flip transitions $\protect\downarrow\rightarrow\uparrow$ and $\protect\uparrow\rightarrow\downarrow$.}
\end{figure}
For times t of the order of the spin lifetime $\tau_\sigma$ of the hot electrons the time dependence of the band shifts $\Delta \varepsilon_\sigma (t)$ should be reflected in electron spectroscopy (of course not for shorter times).
Note, in ferromagnetic transition metals like Ni, Fe, Co, etc., the lifetimes of the hot electrons resulting, for example, from laser field excitations are different for minority and majority electrons \cite{1,3,4}. As a consequence, different time dependent band shifts $\Delta \varepsilon_\sigma (t)$, and thus shifts of the centers of the exchange split bands, occur. The shifts of the electronic states, or of the centers of gravity of the bands $\varepsilon_\sigma$, are revealed in the time dependence of the exchange splitting $\Delta_{ex}(t) = \varepsilon_\downarrow(t) - \varepsilon_\uparrow(t)$; see Fig.1 for illustration.
Such behavior depends of course on the fluence $F(t)$ of the exciting field and resulting electronic temperature $T_{el}(t)$ or spin disorder temperature $T_{spin}$ in case of local magnetic moments. In case of dominantly itinerant magnetism $\Delta_{ex}$ is proportional to the global magnetization $M(t)$.
Note, the shifts may be calculated using the Hubbard hamiltonian, Fermi--Liquid theory or Green's function methods, see Fig.2, and the Landau--Lifshitz equation may also be used to calculate $\Delta_{ex}(t) \sim M(t)$.
Regarding spin dynamics and magnetism dynamics at non--equilibrium
this time dependence is of fundamental interest, since the molecular field felt by the valence electrons varies with electronic states, s, p, d ones etc. \cite{5}. For example the dynamics reflects itinerant vs Heisenberg type magnetism and the behavior of conservation laws for ultrafast responses (reflects the dominant coupling, electron--electron exchange, electron spin--orbit, etc. controlling the angular momentum transfer).
The different shifts of minority and majority electrons result (likely) mostly from the fact that,
in the presence of the molecular field, spin flip transitions $\downarrow \rightarrow \uparrow$ may for energetic reasons
occur first and more frequently than transitions $\uparrow \rightarrow \downarrow$ \cite{4}.
\begin{figure}
\centerline{\includegraphics[width=.9\textwidth]{2neu.eps}}
\caption{Illustration of Dyson equation for itinerant hot electrons with spin $\sigma$. Spin flip transitions of hot electrons amount physically to spin mixing and quasi hybridization of the exchange split states.
Spin reversal against the molecular field is more difficult and occurs less frequently and is delayed in time at non--equilibrium. The delay time $\delta$ may be estimated from $\delta \propto \tau_\uparrow - \tau_\downarrow$, where
$\tau_\sigma$ are the hot electron lifetimes.
This is expected quite generally, for example in transition metals and rare--earth metals.
The transition matrix element $V_{\sigma,\sigma^{'}}$ describes transitions between states for spin $\sigma$
and $\sigma^{'}$. In compact form one gets from Dyson eq. the self--energy $\Sigma_\sigma\sim\sum_{\sigma^{'}}\int G_{\sigma^{'}} T_{\sigma \sigma^{'}}$}
\end{figure}
Fig.3 illustrates the asymmetric response (intensity) expected for spin $\downarrow\rightarrow\uparrow$ and
$\uparrow\rightarrow\downarrow$ transitions. Level shifts of minority electrons are expected to occur during times of the order of the demagnetization time
$\tau_M$ and hence faster than those of majority electrons.
\begin{figure}
\centerline{\includegraphics[width=.5\textwidth]{4neu.eps}}
\caption{Spin flip transitions of the (hot) excited electrons. The dots refer to the scattering potential V, or more exactly to the t--matrix T, causing spin flip transitions and yielding the spin dependent lifetimes of the hot electrons. Note, the lifetimes of the excited electrons follow from $\tau^{-1}_\sigma \propto \mid V\mid^2$. The molecular field $H_{eff}(t)$ causes generally $\tau_\uparrow \neq \tau_\downarrow$. For hot electrons
transition a) is favoured relative to transition b). For vanishing molecular field one has $\tau_\uparrow = \tau_\downarrow$.}
\end{figure}
Note, the shifts $\Delta\varepsilon_\sigma$ of the electronic states due to spin flip scattering of the hot electrons are physically quasi spin hybridization, mixing effects, hence $\varepsilon_\uparrow \rightarrow \varepsilon^0_\uparrow +
\alpha ( \varepsilon_\downarrow - \varepsilon_\uparrow )$ (Note, one may attempt to use Kramers--Kronig like analysis to relate such shifts to electron lifetimes). We illustrate the general physical behavior in Fig.4.
\begin{figure}
\centerline{\includegraphics[width=.9\textwidth]{3neu.eps}}
\caption{Illustration of the time dependence of the exchange splitting of magnetic systems at non--equilibrium. Excited minority electrons respond faster than majority ones in ferromagnets. The time delay of a few hundred fs reflects the different occurrence, for energetic reasons, of spin flip transitions $\downarrow \rightarrow \uparrow$ and $\uparrow \rightarrow \downarrow$. One estimates a spin flip transition energy difference of about $\Delta_{ex}$ or less. The spin dependent response occurs during at most a few hundred fs ($t \sim \tau_M \sim \frac{1}{T_c}$) and the delay time is correspondingly shorter than the demagnetization time. The molecular field $H_{mol}(t)$ determines the dynamics, the angular momentum transfer, and the time scales. Note, the spin splitting $\Delta_{ex}(t) = \Delta_{ex}(t, H_{mol}(t))$ and its reduction due to hot electrons may saturate if a rest molecular field $H_{mol, rest}$, resulting from the magnetization of different electrons of another band or from an external magnetic field, is present. For example, for Gd the exchange splitting reduces within 1 ps from 0.74 eV to 0.6 eV; see the experiment by Weinelt et al. The recovery of the equilibrium magnetization occurs during times controlled by angular momentum transfer and may involve hysteresis due to
magnetic anisotropy. Note, recovery (relaxation) may be relatively slow if angular momentum transfer is slow. For recent experimental results obtained for Gd see Weinelt et al.}
\end{figure}
The demagnetization response as exhibited by the shifts is relatively fast in transition and rare--earth metals, on a time scale of a few to a hundred fs \cite{6}.
Note, one estimates a demagnetization time $\tau_M \propto (1/T_c)$ \cite{1}. Thus one expects for ferromagnetic Gd, with
ordered 4f--electron spins, exchange coupled 5d,6s valence electrons in exchange split states, and $T_c = 293K$, a slower demagnetization than for Ni with $T_c = 631K$, Fe, etc.
The delay between minority and majority electron response at non--equilibrium is expected to increase with increasing molecular field. For example, the response delay is expected to be larger in Fe than in Ni \cite{4}. Delay times may also result from an external magnetic field.
Of course, the delay depends on the non--equilibrium state and on the density of excited electrons, thus on the fluence of the exciting laser field, on $T_{el}(t)$, and on the excitation energies. The occurrence of majority electron spin flip transitions determines the onset of the majority hot electron response. The response of the minority (hot) electrons is expected to occur
during times of the order of the demagnetization time; its dependence on $M(T_{el})$, and thus on $T_{el}$, needs to be studied and is of interest for the comparison of theory and experiment \cite{6,7}.
Note, viewing the demagnetization as a fluctuation in magnetic energy ($\Delta E_M$) one is tempted to use for an estimate
of the electronic temperature dependence of the demagnetization time $\tau_M \Delta E_M \sim h$. Possibly this is in accordance with $\tau_M \propto a(T_{el})/T_c$ \cite{1}. Here, the coefficient $a$ depends on density of states changes near $\varepsilon_F$ etc. upon varying $T_{el}$ and needs to be calculated carefully using an electronic theory or, for example, the Landau--Lifshitz equation. Note, $\frac{dM}{dt}\approx \frac{dM}{dT_{el}}\frac{dT_{el}}{dt}$, where one may use the magnetization $M(T_{el})$. The Landau--Lifshitz equation needs to be extended in general to include both directional and amplitude changes of $\overrightarrow{M(T_{el},..)}$ \cite{8}. Of course, the dependence of the demagnetization time on fluence also sheds light on its temperature dependence.
Writing for the energies of the valence electrons
\begin{equation}
\varepsilon_{i\sigma} = \varepsilon^0_i - \sigma J_{eff}M(T_{el},..) + \mbox{spin--flip--scattering--terms},
\end{equation}
suggests that the spin flip transitions cause the asymmetric response of hot minority and majority electrons. This asymmetry is expected to disappear for times larger than the spin dependent lifetimes of the hot electrons (see Fermi--liquid theory and exchange type coupling) and for vanishing molecular field (as $T_{el}\rightarrow T_c$). Thus, as the exchange splitting $\Delta_{ex}(t)$ closes,
the magnitudes of the level shifts of $\varepsilon_\downarrow$ and $\varepsilon_\uparrow$ become equal.
As suggested by the Hubbard hamiltonian
\begin{equation}
H = H_0 + \sum_{i,j} U_{ij}n_{i\sigma}n_{j\sigma^{'}}+ ...,
\end{equation}
at strong non--equilibrium with many excited electrons, time and spin dependent energy shifts result from level occupation changes $\Delta n_{i\sigma}(t)$ \cite{1}, $\Delta \varepsilon_{i\sigma} \propto \sum_j U\Delta n_{j-\sigma} (t) + ...$.
Physically the spin flips amount to a probing by the hot electrons of the spin split states with energies $\varepsilon_{i\uparrow}$ and
$\varepsilon_{i\downarrow}$. Since $\tau_\uparrow > \tau_\downarrow$, first $\Delta \varepsilon_{i\downarrow}(t)$ increases on a time scale $\tau_\downarrow \sim \tau_M$; later, during times $t\sim \tau_\uparrow$, $\Delta \varepsilon_{i\uparrow}(t)$ increases. For times $t>\tau_\uparrow$ one gets $\Delta\varepsilon_{i\uparrow}=\Delta\varepsilon_{i\downarrow}$.
The used physical model suggests that the shifts are proportional to $\Delta_{ex}$ and depend on the lifetimes $\tau_\sigma$ and thus occur first for minority and somewhat later for majority hot electrons. In magnitude both shifts should become equal at times larger than the lifetimes of the hot electrons.
Furthermore, similar behavior and level shifts occur also due to reversal transitions of magnetic moments. Thus level shift dynamics should also reveal itinerant vs. local moment (Heisenberg type) behavior. For example, the energy levels of d--electrons in ferromagnetic transition metals, Ni, Fe, etc., shift upon reversing the magnetic moment with respect to the (global) magnetization \cite{8}.
Possibly interesting spin dynamics involving level shifts may also occur upon applying a temperature gradient (spatial variation of hot electron density), due to the corresponding spatial variation of the molecular field, and for spin currents flowing through a tunnel junction consisting of (two) different ferromagnets, FM$_{1}$ / FM$_{2}$, or a FM / AF tunnel junction.
Note, the driving force $\frac{\Delta \mu_\sigma}{T_{el}}$ may cause interesting magnetoelectronics effects, see Bennemann
\cite{9}.
In the following we briefly describe the theory for the spin dependent response due to hot electrons. For the analysis one may use, for example, Keldysh type
Green's functions \cite{10}. Also, the Landau--Lifshitz (Gilbert) equation is a general basis for magnetization dynamics \cite{1}. Note, the Landau--Lifshitz equation can be extended to include both time dependent directional and amplitude changes of the magnetization \cite{11}. This may then be related to the electronic theory treating
magnetization dynamics in metals with mixed itinerant and local moment behavior, like in Fe and other transition metals or rare earths like Gd \cite{1,8}.
\section{Theory}
We briefly discuss the calculation of the electron spin dynamics and energy shifts at non--equilibrium using various methods, namely Fermi--liquid theory, Green's function theory and the Hubbard Hamiltonian, taking into account changes of both the direction and the amplitude of the magnetization at non--equilibrium.
\subsection{Fermi--Liquid Theory, Hubbard Hamiltonian}
Using Fermi--liquid theory one gets for the system of Fermions at non--equilibrium approximately
\begin{equation}
\varepsilon_{p,\sigma}(t) = \varepsilon_{p,\sigma}^0 (0) + tr \int \frac{d^3p^{'}}{(2\pi)^3} f_{p\sigma,p^{'}\sigma^{'}}(t) \delta n_{p^{'}\sigma^{'}}(t) +\ldots \quad .
\end{equation}
Note, the last term of this Eq. can be rewritten using molecular field theory ($f=f_1 + \sigma_i \cdot \sigma_j f_2 + \ldots$). In general it is for energetic reasons that transitions $\downarrow \rightarrow \uparrow$ occur faster than $\uparrow \rightarrow \downarrow$ ones (see the lifetimes of hot electrons: $\tau_\uparrow > \tau_\downarrow$), and for the spin flip transitions of hot electrons
\begin{equation}
f(t)_{p\uparrow,p{'}\downarrow}\delta n_{p^{'}\downarrow} \neq f(t)_{p\downarrow,p{'}\uparrow}\delta n_{p^{'}\uparrow},
\end{equation}
as long as the molecular field $H_{eff}$ acting on the electron spins is present, but for $H_{eff}=0$ both scattering amplitudes are equal. Note, $\uparrow$
refers to direction parallel to the magnetization and $\downarrow$ to opposite direction. Spin flip transitions $\downarrow \rightarrow \uparrow$
cause lowering of the energy levels and $\uparrow \rightarrow \downarrow$ cause an increase of the levels, see Fig. 1. The energy shifts are given by
\begin{equation}
\Delta \varepsilon_\sigma (t) = \int \frac{d^3p^{'}}{(2\pi)^3} f_{p\sigma,p^{'}-\sigma}\, \delta n_{p^{'}-\sigma}(t)+\ldots \quad .
\end{equation}
Here, the dynamics of $\delta n_{p\sigma}(t)$ may be determined using the Boltzmann or Langevin equation \cite{1}. One expects generally in the presence of a molecular field
\begin{equation}
\Delta \varepsilon_\uparrow (t) \neq \Delta \varepsilon_\downarrow (t)\quad .
\end{equation}
Note the dependence $f(t,M(t),T_{el},..)\, \delta n(t, F, ...)$ of the shifts. For times $t \gg \tau_\uparrow,\tau_\downarrow$ one expects $f(t)_{p\uparrow,p^{'}\downarrow}\delta n_{p^{'}\downarrow}=f(t)_{p\downarrow,p^{'}\uparrow}\delta n_{p^{'}\uparrow}$ and equal resulting level shifts. First, for times $t\sim \tau_\downarrow\sim \tau_M$, the minority electron levels $\varepsilon_\downarrow$ shift, and then for times
$t\sim \tau_\uparrow$ the levels $\varepsilon_\uparrow$ of the majority electrons shift. It is $\tau_\downarrow < \tau_\uparrow$ in the presence of a molecular field \cite{4}. The situation is illustrated in Fig.4. Approximately, one may find $\tau_\downarrow \simeq \tau_M$, the demagnetization time. Then the rate of the minority electron level shifts is given by $\tau_M$.
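The sequential response described above can be sketched numerically. Here the shifts are modeled as saturating exponentials $\Delta\varepsilon_\sigma(t)=\Delta_\infty(1-e^{-t/\tau_\sigma})$ with $\tau_\uparrow = 2\tau_\downarrow$; this exponential form is an illustrative ansatz only, the actual dynamics follows from the Boltzmann or Langevin equation.

```python
import math

tau_down = 1.0            # minority lifetime, in units of tau_M (tau_down ~ tau_M)
tau_up = 2.0 * tau_down   # majority lifetime, tau_up ~ 2 tau_down (Ref. [4])
delta_inf = 1.0           # common asymptotic shift, in units of Delta_ex

def shift(t, tau):
    """Saturating-exponential model of the level shift (illustrative form only)."""
    return delta_inf * (1.0 - math.exp(-t / tau))

for t in [0.5, 1.0, 2.0, 5.0, 10.0]:
    print(f"t = {t:4.1f} tau_M:  d_eps_down = {shift(t, tau_down):.3f},"
          f"  d_eps_up = {shift(t, tau_up):.3f}")
```

The minority shift leads at early times, while both shifts approach the same value $\Delta_\infty$ for $t \gg \tau_\uparrow$, as stated in the text.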
Applying the theory also to ferromagnetic Gd, whose 5d, 6s valence electrons have spin split states due to exchange coupling by the 4f electron magnetization, one may compare with the experiment by Weinelt et al. \cite{3}. Note, experiment observes a fast
minority electron level shift and a delayed (by about 500 fs) response of the majority electron states.
Of course, the delay time $\delta(T_{el}, F,..)$ between level response of minority and majority electrons depends on the molecular field $H_{eff}(t)$, fluence and in general the strength of the magnetic interactions. Approximately, one expects \begin{equation}
\delta \propto (\tau_\uparrow - \tau_\downarrow).
\end{equation}
Hence, the delay time varies in ferromagnetic metals \cite{4}.
Such a delay $\delta(H_{eff}, F,..)$ may also result already in a paramagnet in the presence of a strong external magnetic field.
Note, the spin dependent level shifts
\begin{equation}
\varepsilon_{p\sigma}(t) - \varepsilon_{p\sigma}(0)
\end{equation}
are controlled by the fluence F and electronic temperature $T_{el}$ and are expected to be clearly reflected in the time dependence of narrow spin split bands and exchange splitting,
for example in transition and rare--earth metals and ferromagnetic semiconductors. The onset of the minority electron level shifts is expected at $\tau_\downarrow \sim \tau_M(T_{el},...)$ and thus changes in general with $T_{el}$. The response of the majority electrons should occur during times where frequent spin flip transitions $\uparrow \rightarrow \downarrow$
against the molecular field become possible.
In summary, these shifts are due to the spin flip transitions of the hot electrons. The spin flip transitions cause in effect a hybridization of the states for spins $\sigma$ and $-\sigma$.
Using the Hubbard Hamiltonian ($H=H_0 + \sum U n_{i\sigma}n_{i-\sigma} + ...$, including intersite exchange coupling) one
may write $\Delta \varepsilon_{i\sigma} \sim U_{eff}\Delta n_{i-\sigma}$ and then in accordance with Fig.1 at non--equilibrium for the energies ($\varepsilon_{i\downarrow} = \varepsilon^0 + U_{eff}\Delta n_{i\uparrow} +...$)
\begin{equation}
\varepsilon_{i\downarrow} = \varepsilon_{i}^0 + \Delta_{ex}(t)-\Delta \varepsilon_{i\downarrow}(t), \quad \varepsilon_{i\uparrow} = \varepsilon_{i}^0 + \Delta \varepsilon_{i\uparrow}(t),
\end{equation}
where the shifts occurring at non--equilibrium, from $t=0$ on, result from spin flips of the hot electrons. Note,
$\Delta n_{i\sigma}(t, T_{el})$ could also be calculated using the von Neumann, Boltzmann or Langevin equation \cite{12}. It is
\begin{equation}
\Delta \varepsilon_{i\downarrow(\uparrow)} = U_{eff}\Delta n_{i\downarrow(\uparrow)}+... .
\end{equation}
In view of the mixing of the level for spin $\downarrow$ and $\uparrow$ due to spin flip transitions one expects
\begin{equation}
\Delta \varepsilon_\downarrow=a(t)\Delta_{ex}(t)+..., \quad \Delta \varepsilon_\uparrow=b(t)\Delta_{ex}(t)+...,
\end{equation}
and $a(t)\neq b(t)$ for $t<\tau_\downarrow, \tau_\uparrow$. For $t>\tau_\downarrow, \tau_\uparrow$ it is $a=b$. Note,
of course the shifts should include all those due to changes in $H_{eff}(t)$ and $M(t)$.
\subsection{Green's Function Theory}
The energy shifts $\Delta \varepsilon_\sigma (t)$ for minority and majority electrons may also be calculated using the Dyson equation for the electron non--equilibrium Green's function $G(t,..)$ or Fourier transform $G(\omega_n)$ \cite{1,4,10}
\begin{equation}
G(\omega_n)_\sigma = G_{0,\sigma}(\omega_n) + \sum_{n^{'}} G_{0,\sigma}(\omega_n)|V|^2 G_{\sigma^{'}}(\omega_n^{'}) G_\sigma(\omega_n) + \ldots \quad ,
\end{equation}
where $\omega_n = (2n+1)\pi T$ and $V$ is the matrix element for electron transitions involving spin flips.
This equation is illustrated in
Fig.(2). (Note, Eq. holds also in Wannier representation.) The energy shifts follow from the real part of the self--energy $\Sigma_\sigma$ and are
given by \cite{4,10}
\begin{equation}
\Delta \varepsilon_\sigma (t) = Re \sum_{n^{'}} |V|^2 G_{\sigma}(\omega_n^{'}, t, F, T_{el}) + ... \quad .
\end{equation}
Using the Poisson summation formula, see Schrieffer \cite{10}, one gets
\begin{equation}
\Delta \varepsilon_\sigma (t) = Re \frac{1}{2\pi i k T_{el}} \int_c d\omega^{'}\frac{1}{\exp(-\omega^{'}/kT_{el})+1}
|V|^2 G_\sigma(\omega^{'})+\ldots \quad .
\end{equation}
Here, we assume that in the non--equilibrium state one may take already, at least approximately, for the temperature
$T=T_{el}(t)$. Also, all shifts due to changes in $H_{eff}(t)$ and $M(t)$ must be included, see shifts due to term
$-\sigma J_{eff}M(t)$.
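For reference, the fermionic contour--summation identity underlying this step may be written in the standard form (see e.g. Schrieffer \cite{10}; the overall sign depends on the orientation of the contour $C$, taken here counterclockwise around the imaginary axis)
\begin{equation}
kT_{el}\sum_{n} g(i\omega_n) = -\frac{1}{2\pi i}\oint_C dz\, \frac{g(z)}{e^{z/kT_{el}}+1}\, , \qquad \omega_n=(2n+1)\pi kT_{el}\, ,
\end{equation}
with here $g(z)=|V|^2 G_\sigma(z,t,F,T_{el})$; deforming $C$ onto the real axis yields the integral representation used above.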
For further analysis one may use the spectral representation of the Green's function and write
$G_\sigma(z)= \int dz^{'} f(z^{'}) A_\sigma (z^{'},...)/(z-z^{'})$ \cite{10}.
For narrow bands with spectral density $A_\sigma(\omega) \sim \delta (\omega- \varepsilon_\sigma)$ one gets then approximately for minority electrons a level shift
\begin{equation}
\Delta \varepsilon_\downarrow(t) = - a(t,F, T_{el}(t),..) \Delta_{ex}(t)+..., \quad a \propto |V|^2
\end{equation}
due to scattering $\downarrow \rightarrow \uparrow$. Similarly one gets for majority electrons due to $\uparrow \rightarrow \downarrow$ transitions
a level shift
\begin{equation}
\Delta \varepsilon_\uparrow(t) = b(t,F,...) \Delta_{ex}(t,F,..)+..., b \propto |V|^2 \quad .
\end{equation}
In general, as indicated by the different lifetimes $\tau_\downarrow$ and $\tau_\uparrow$ for (hot) minority and majority electrons \cite{1,4}, which follow from the imaginary part of the self--energy, $Im\, \Sigma_\sigma (\omega)$, one gets
$a(t) \neq b(t)$: since the molecular field, if strong enough, suppresses the transitions $\uparrow \rightarrow \downarrow$, first the minority levels shift and then, as the molecular field decreases, the majority ones. As $\Delta_{ex}(t,T_{el},...) \rightarrow 0$, for vanishing molecular field, one gets $a\rightarrow b$.
\subsection{Theory for Spin--Dynamics involving both directional disorder of the magnetic moments and amplitude changes}
In general one expects that local magnetic moment ferromagnetism (Heisenberg type) will exhibit a somewhat different dynamical behavior than
itinerant magnetism, since energy scales and thus relaxation times may differ. Note, itinerant magnetism is controlled by intersite electron hopping
and by spatially different electron correlations than local magnetic moment magnetism, which results from strong onsite electronic correlations and intersite exchange coupling.
One needs in general a theory which describes demagnetization at non--equilibrium (presence of hot electrons) due to both magnetic moment directional disorder and a decrease of the magnitude of the magnetic moments. Thus, in general, not only the temperature $T_{el}(t)$ of the hot electrons but also the temperature $T_{spin}$ referring to the moment disorder controls the dynamics. Hence, one has to consider for the exchange splitting $\Delta_{ex}=\Delta_{ex}(T_{el},T_{spin})$.
Using the Hubbard hamiltonian
\begin{equation}
H = \sum_{i,j} t c^{+}_ic_j + \sum_i U \langle n_{i,-\sigma}\rangle n_{i,\sigma} - J\sum_{i,j}\sigma_i\sigma_j +...\quad ,
\end{equation}
which describes electrons hopping between atomic sites $i$, $j$ and feeling a spin dependent effective on--site coupling $U$ and spin--flip exchange scattering ($J$), etc., one gets for the relative average magnetic moment
\begin{equation}
\mu(t) = (p^+\mu_+ + p^-|\mu_-|)/ \mu(T=0)\quad .
\end{equation}
Here, + and - refer to magnetic moments pointing parallel and antiparallel to the global magnetization M(t), and $p^{+,-}$ are the probabilities
to find such moments. Furthermore, the relative (global) magnetization is
\begin{equation}
M (t) = (p^+\mu_+ + p^-\mu_-)/ \mu(T=0) \quad,
\end{equation}
and the long range order parameter is given by
\begin{equation}
\eta(t) = p^{+} - p^- \quad .
\end{equation}
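The relations for $\mu(t)$, $M(t)$ and $\eta(t)$ above can be checked with a few lines of Python; the sample values of $p^{+}$ and $\mu_{\pm}$ below are hypothetical.

```python
def moments(p_plus, mu_plus, mu_minus, mu0):
    """Relative average moment mu, global magnetization M, and order
    parameter eta from the site probabilities p+ and p- = 1 - p+."""
    p_minus = 1.0 - p_plus
    mu = (p_plus * mu_plus + p_minus * abs(mu_minus)) / mu0
    M = (p_plus * mu_plus + p_minus * mu_minus) / mu0
    eta = p_plus - p_minus
    return mu, M, eta

# Fully ordered lattice: only "+" moments of full size.
print(moments(1.0, 1.0, -1.0, 1.0))   # (1.0, 1.0, 1.0)
# Fully disordered directions with intact moment amplitude:
print(moments(0.5, 1.0, -1.0, 1.0))   # (1.0, 0.0, 0.0)
```

The second case illustrates the distinction drawn in the text: the local moments persist ($\mu = 1$) while the global magnetization and order parameter vanish.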
Of course, the electron energies change also at non--equilibrium and for sites with moment $\mu_+$ one has
\begin{equation}
\varepsilon^{+}_\uparrow (t) = \frac{n - \mu_+}{2}\, U ,\quad \varepsilon^{+}_\downarrow (t) = \frac{n + \mu_+}{2}\, U .
\end{equation}
Similarly, levels $\varepsilon^{-}_\sigma$ are given, see Moran--Lopez et al., Avignon, Bennemann \cite{8}. The hopping of the electrons in the magnetic moment disordered lattice causes a hybridization of the levels $\varepsilon^{+}_\sigma$ and
$\varepsilon^{-}_\sigma$. The resulting level shifts get more pronounced as the moment disorder increases.
Note, $U$ could include on--site spin flip effects. The spin splitting is then
\begin{equation}
\Delta^{+,-} = U \mu_{+,-}+... \quad .
\end{equation}
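A quick numerical check of the level formulas and of the splitting $\Delta^{+} = U\mu_{+}$ follows; the filling $n$, moment $\mu_{+}$ and coupling $U$ are hypothetical sample values.

```python
def levels(n, mu, U):
    """On-site energies for a site with moment mu (mean-field Hubbard form):
    eps_up = (n - mu) U / 2,  eps_down = (n + mu) U / 2."""
    eps_up = 0.5 * (n - mu) * U
    eps_down = 0.5 * (n + mu) * U
    return eps_up, eps_down

n, mu_plus, U = 1.0, 0.6, 2.0   # hypothetical filling, moment and coupling
eps_up, eps_down = levels(n, mu_plus, U)
print(eps_up, eps_down, eps_down - eps_up)   # the splitting equals U * mu_plus
```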
This is proportional to $M(T_{el},...)$ for itinerant magnetism, but more generally proportional to $M(T_{el},T_{spin},...)$ if a mixed behavior, both itinerant and local moment, occurs. Note, $U$ is the effective field acting on the moments and plays the role of the molecular field. The
center of gravity of the spin split bands is given by $\varepsilon_\sigma (t) = \frac{1}{W} \int d\varepsilon \varepsilon N_\sigma (\varepsilon)$,
where W denotes the band width and $N_\sigma (\varepsilon)$ the density of states yielding approximately above levels $\varepsilon^{+,-}_\sigma$.
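The center of gravity formula can be checked numerically; a flat model DOS $N(\varepsilon)=1$ inside the band, normalized so that $\int d\varepsilon\, N = W$, is an illustrative choice.

```python
# Numerical check of eps_sigma = (1/W) * Integral deps eps N(eps)
# for a flat DOS N(eps) = 1 on [e0 - W/2, e0 + W/2] (illustrative model).
def center_of_gravity(e0, W, n_steps=100000):
    de = W / n_steps
    total = 0.0
    for i in range(n_steps):
        eps = (e0 - 0.5 * W) + (i + 0.5) * de   # midpoint rule
        total += eps * 1.0 * de                 # N(eps) = 1 inside the band
    return total / W

print(center_of_gravity(0.3, 2.0))   # recovers the band center, ~0.3
```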
The various properties of the non--equilibrium state may be calculated using Keldysh type non--equilibrium Green's functions \cite{10}. Using the Dyson equation (in tight--binding approximation) one gets
\begin{equation}
G^i_\sigma = G^i_{0,\sigma} + \sum_j G^i_{0,\sigma} t G^j_\sigma + \sum_j G^i_{0,\sigma} J G^j_{-\sigma} + ...
\end{equation}
where the upper indices i, j refer to the direction of the magnetic moment at the corresponding lattice site, t to the hopping integral
and J to an effective spin flip potential. The last term describes the effective hybridization of the spin split states. Lower Wannier type indices referring to lattice sites are not given explicitly, nor are the summations over the lattice sites. One may use the Bethe ansatz and related methods (t--J model etc.) to determine the Green's functions. Note, one may rewrite this Eq. as
\begin{equation}
G^i_\sigma = G^i_{0,\sigma} + \sum_j G^i_{0,\sigma} t G^j_\sigma +
\sum_j G^i_{0,\sigma} (J G^j_{-\sigma} \chi J) G^i_{0,\sigma} + \ldots \quad .
\end{equation}
In Wannier representation the Green's functions $G^i_{00,\sigma}$, referring to lattice site 0, and $G^i_{01,\sigma}$, referring to lattice sites 0, 1 etc., may be calculated applying the usual techniques, see Avignon, Bennemann, to be published \cite{8}. The above
Dyson eq. extends the previous theory of Avignon, Moran--Lopez by including spin--flip transitions.
This theory gives the density of states $N_\sigma(t,\varepsilon)$ and the center of gravity $\varepsilon_\sigma(t)$ of the spin split bands ($\varepsilon_\sigma \sim \int d\varepsilon N_\sigma(\varepsilon)$) \cite{13}.
The free energy at temperature $T_{el}(t)$ is given by $F = E - T S$, where S denotes the entropy approximately given by $S = - k N (p^+ \ln p^+ + p^- \ln p^-)$ \cite{5}.
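A short check of the entropy expression: it vanishes for full order ($p^+ = 1$) and is maximal, $kN\ln 2$, for full directional disorder ($p^\pm = 1/2$).

```python
import math

def entropy(p_plus, k=1.0, N=1.0):
    """Mixing entropy S = -k N (p+ ln p+ + p- ln p-) of the moment directions."""
    p_minus = 1.0 - p_plus
    s = 0.0
    for p in (p_plus, p_minus):
        if p > 0.0:          # the limit p ln p -> 0 for p -> 0
            s -= p * math.log(p)
    return k * N * s

print(entropy(1.0))   # 0.0: fully ordered, eta = 1
print(entropy(0.5))   # ln 2: fully disordered directions
```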
A detailed study should exhibit differences between the demagnetization of dominantly itinerant ferromagnetism and Heisenberg one, see Avignon, Bennemann \cite{8}.
\section{Results and Discussion}
The main conclusion of our physical model is that an asymmetric response in time of minority and majority spins at non--equilibrium in ferromagnets is in general expected,
since spin flip processes $\downarrow \rightarrow \uparrow$ and $\uparrow \rightarrow \downarrow$ exhibit different behavior due to the molecular field acting on them. Note, recent experiments in Gd by Weinelt et al. \cite{3} and theory \cite{1,4} suggest such a behavior. The response of the minority electrons $\varepsilon_\downarrow$ and, in the case of magnetic moment disorder, the reversal of the magnetic moments $\mu_{-}$ occur first in time. Approximately, for given temperatures the delay time $\delta$ between the minority and majority electron response may be proportional to the molecular field $H_{mol}$. Also, for general reasons we estimate $\delta \sim \tau_\uparrow - \tau_\downarrow$ and
$\delta \longrightarrow 0$ as $\Delta_{ex}$ vanishes. Note, detailed results for the $T_{el}$ dependence of the spin flip transitions are presently not available and would be desirable.
One expects that this asymmetric response and the time delay of the majority spins is small for Ni, but may play a role already for Fe and other ferromagnets and rare--earth metals like Gd. For increasing laser fluence, $T_{el}$, and density of excitations the delay may get smaller, while it gets larger for increasing molecular field.
Of course, detailed calculations are necessary to determine definitely the asymmetric response in time of hot electrons.
Also, more experiments in various ferromagnets are needed to identify the origin of the fast asymmetric response at non--equilibrium.
In the following we present some preliminary results, see Figs.5 and 7 (see also Avignon and Bennemann \cite{8}).
In Fig.5 we sketch the time dependent behavior expected for exchange split states due to spin--flip scattering in ferromagnetic transition metals and rare--earth.
In response to hot electrons the minority electron states shift first within a time $\tau_{\downarrow}$ and
then later the majority electron states at a time $\tau_\uparrow$. Approximately, it is $\tau_\downarrow \sim \tau_M$. From calculations of the spin dependent lifetimes one estimates
$\tau_\uparrow \approx 2 \tau_\downarrow$ \cite{4}. Hence, one estimates majority electrons respond at times of the order of
$2\tau_\downarrow\approx 2\tau_M$.
For comparison with experiment see recent results by Weinelt et al. for Gd \cite{3}. One then estimates that minority electrons
\begin{figure}
\centerline{\includegraphics[width=.9\textwidth]{6neu.eps}}
\caption{Estimated time dependence of the response of the exchange split states in non--equilibrium ferromagnetic metals with hot electrons. Spin flip transitions cause level shifts $\varepsilon_{\sigma}(t) \propto \Delta_{ex}$. Due to transitions $\uparrow \rightleftarrows \downarrow$ the levels $\varepsilon_{\downarrow}$ and $\varepsilon_{\uparrow}$ hybridize, get mixed. For energetic reasons the level $\varepsilon_{\downarrow}$ shifts approximately within the demagnetization time $\tau_{M}$. Then, delayed by the time
$\delta \approx \tau_{\uparrow}- \tau_{\downarrow}$, the majority electron level $\varepsilon_{\uparrow}$ shifts, approximately also within a time of the order of $\tau_{M}$. The resulting shifts of $\varepsilon_{\downarrow}$ and
$\varepsilon_{\uparrow}$ are expected to be equal in magnitude. Note, for ferromagnetic transition metals we estimate $\tau_{M}$
of the order of a few hundred fs and for Gd etc. of the order of 500 fs ($\tau_{M} \sim \frac{1}{T_c}$). Thus, for Gd
with hot electrons we estimate a decrease of the exchange splitting first due to minority electron level shifts within times of the order of 500 fs and then a decrease due to majority electron level shifts at times of the order of twice $\tau_M$, at about 1 ps. Note, spin relaxation may be slow due to angular momentum transfer.}
\end{figure}
respond at $t\sim 400$ to $500$ fs and majority ones at times of the order of 1 ps.
Note, the delay time $\delta(t,F,..)$ between the minority and majority electron response is larger for rare--earth metals like Gd than for transition metals, $\tau_M \propto \frac{1}{T_c}$ \cite{1}. We assume $\tau_\uparrow \approx 2 \tau_\downarrow$, see the calculations of the spin dependent lifetimes of the hot electrons by Zhukov, Knorren et al. \cite{4}.
The decrease of the exchange splitting depends, of course, on the molecular field $H_{mol}$ remaining after times larger than $\tau_\sigma$. Also the onset of the minority electron response and the delay time $\delta$ depend on the light fluence, the concentration of hot electrons and the electron temperature $T_{el}$.
Note, our estimate for Gd yielding that minority electron levels shift within about 500 fs and majority ones
during about 1 ps is in fair agreement with experiment \cite{3}. In Gd the valence electron states (5d,6s) are spin split
due to exchange coupling $J_{eff}$ by the 4f--electron magnetization. Also, $\tau_\uparrow /\tau_\downarrow$ of about 1 to 2 is
observed approximately for transition metals \cite{4}.
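Putting the Gd numbers quoted above together gives the following simple timeline; $\tau_M \approx 500$ fs and $\tau_\uparrow \approx 2\tau_\downarrow$ are the estimates from the text, not new inputs.

```python
tau_M = 500.0              # Gd demagnetization time in fs (order of magnitude, Ref. [3])
tau_down = tau_M           # minority-electron response time, tau_down ~ tau_M
tau_up = 2.0 * tau_down    # majority response, using tau_up ~ 2 tau_down (Ref. [4])
delta = tau_up - tau_down  # delay between minority and majority response

print(f"minority onset ~ {tau_down:.0f} fs, "
      f"majority onset ~ {tau_up:.0f} fs, delay ~ {delta:.0f} fs")
```

This reproduces the estimate of a minority response at about 500 fs, a majority response at about 1 ps and a delay of about 500 fs.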
Using the Hubbard hamiltonian we estimate for the exchange splitting in transition metals
\begin{equation}
\Delta_{ex} \sim U_{eff}(t, T_{el}) \mu_{av}(t)+\ldots\quad .
\end{equation}
This permits a general test of local moment vs. itinerant magnetism behavior. For example, for Ni one gets after averaging over the directional fluctuations and spatial moment disorder of $\mu(t)$ that
\begin{equation}
U_{eff}(t,T_{el})\mu_{av}(t) \rightarrow 0,
\end{equation}
as $U_{eff} \sim M(t)$ and for many hot electrons such that $T_{el}\rightarrow T_c$. Note, in general, as $T_{el}$ and the molecular field change, both the magnitude of the magnetic moments and the magnetization vary \cite{8}.
Likely sub--fs spectroscopy will also detect the existence of short range magnetic order and of (local) magnetic moments above $T_c$ (or beyond the global demagnetization time).
In nearly ferromagnetic metals a time resolved analysis of the magnetic fluctuations might be of interest.
It would be interesting to study also the spin dynamics in antiferromagnets, at surfaces and interfaces of ferromagnetic metals, in alloys and at impurity sites. This would shed more light on how angular momentum transfer controls the spin dynamics.
Fig. 7 shows results for the DOS in a magnetic moment disordered ferromagnet due to mixing of the electron states $\varepsilon_{+,\sigma}$ and $\varepsilon_{-,\sigma}$ at magnetic moment sites (+) and (-). (Here, (+) refers to magnetic moments pointing into the direction of the magnetization, and (-) to those pointing in the opposite direction.)
Note, the electrons move in an ``alloy'' lattice with lattice sites +, - referring to magnetic moments pointing into the direction of the magnetization and opposite; see the illustration in Fig.6 of electrons in a magnetic moment
disordered lattice.
\begin{figure}
\centerline{\includegraphics[width=.3\textwidth]{step1.eps}}
\caption{Illustration of magnetic--moment disordered lattice. This amounts to hybridization of the electron states at
atomic sites with magnetic moment pointing into the direction of the magnetization and opposite direction.}
\end{figure}
The above Dyson eq. is used for calculating the electron DOS \cite{8,13}. The probability for a lattice site with +, - depends of course on temperature and magnetization, see Avignon, Moran--Lopez \cite{8}. The magnetic moment disorder causes state shifts as in an alloy, amounting to a hybridization of $\varepsilon_{\sigma}^+$ and $\varepsilon_{\sigma}^-$, which is reflected in the electron DOS at the sites + and -. Note, the results do not take into account direct spin--flip transitions due to scattering by the exchange coupling J.
\begin{figure}
\centerline{\includegraphics[width=.5\textwidth]{5.eps}}
\caption{Results for the density of states of majority and minority electrons for different values of the magnetization and magnetic moments. Note, $\rho_\sigma^+$ and $\rho_\sigma^-$ refer to electrons with spin $\sigma$ at atomic sites with magnetic moment pointing in the direction of the magnetization (+) and in the opposite direction (-). $\eta(t)$ refers to the order parameter, the magnetization, and $\mu_{+,-}(t)$ to magnetic moments pointing in the direction of or opposite to the magnetization. Approximately, the time dependence is given by $\eta (t)$: thus $\eta=1$ corresponds to $t=0$; $t_1$ corresponds to a time of a few hundred fs during which a demagnetization yielding $\eta=0.4$ occurred ($t_1 \sim 0.5 \tau_M$); and $t_2$ with magnetization $\eta=0$ corresponds to the demagnetization time. This clearly shows the ``alloy'' behavior of ferromagnets exhibiting both itinerant and local moment character, a mixing of the $\varepsilon_{+,\sigma}$ and $\varepsilon_{-,\sigma}$ energy levels.
This implies similar ``alloy'' effects regarding the mixing of exchange split states due to spin--flip scattering.} \end{figure}
Dynamics of the hot electrons and
level shifts due to transitions $\mu_{+} \rightleftarrows \mu_{-}$ should occur during a few fs or sub--fs times. Note, the transitions $\mu_{+} \rightleftarrows \mu_{-}$ occur during a characteristic time expected to be $t < \tau_M$.
Summary:
It is necessary to confirm the physics and approximations used in this discussion by careful
electronic structure calculations
yielding more quantitatively the shifts $\Delta \varepsilon_\sigma (t)$ and the asymmetry of spin flip matrix elements $M_{\downarrow,\uparrow}$ and
$M_{\uparrow, \downarrow}$, see Avignon, Bennemann, to be publ. 2013 \cite{1,4,8}.
Of course, spin dependent level shifts in ferromagnets at non--equi\-librium play a role in many spin dynamics problems, in particular transport ones. Besides electronic theory, a central role for all this in magnetoelectronics \cite{9} is played by the Landau--Lifshitz (Landau--Lifshitz--Gilbert) equation, in particular when damping is important and both directional and amplitude changes occur at non--equilibrium \cite{11,14}, since in general for level shifts
$\Delta\varepsilon_\sigma \propto \frac{dM(t)}{dt}$. A solution in compact form of the Landau--Lifshitz equation was derived by F.Nogueira, K.Bennemann (to be published, FU Berlin, 2013 \cite{14}).
Non--equilibrium dynamics might offer interesting effects, in particular regarding spin currents in tunnel junctions \cite{15}, photoemission at ferromagnetic surfaces,
spin dependent population dynamics (see already earlier studies by C.Siegmann, W.Eberhard, Bennemann and others). Note,
$n_{\sigma}(\varepsilon,M_\varepsilon(t), t,..)$ might even increase temporarily due to $\tau_\uparrow > \tau_\downarrow$. Note, population dynamics with $n_{\uparrow} (t, \varepsilon,..)\neq
n_{\downarrow}(t,\varepsilon,...)$ will in general be the case in the presence of hot electrons. During very short times (likely $t < 1$ fs) it might be possible to observe
induced spin dependent population dynamics of molecular bonds, bond dynamics and of inner core levels of atoms.
Regarding tunnel junctions, the spin dependent level shifts of the minority and majority electron bands characteristically affect the tunnel currents.
For example, no current flows if on the left side (L) of the tunnel junction the majority band is filled and
the Fermi-level lies in the exchange split minority electron band and on the right side (R) no minority band states are available. Then, in the
presence of hot electrons on the right side of the tunnel junction the resultant shifts of the exchange split bands may permit a minority electron spin
current. For illustration see Fig.8.
\begin{figure}
\centerline{\includegraphics[width=.5\textwidth]{letzte.eps}}
\caption{Induced tunnel current (from L towards R) due to band shifts resulting from hot electrons on the right side R of the tunnel junction.}
\end{figure}
The band shifts due to hot electrons may also affect the Josephson like spin currents, see Nogueira, Bennemann \cite{15}. Regarding such currents, spin damping and spin lifetimes seem important.
Magnetization reversal will speed up as angular momentum transfer
becomes easily possible, possibly at interfaces, impurity sites and in antiferromagnets, or more generally in magnets with inhomogeneous magnetization. This needs more study.
As discussed using Onsager theory \cite{9}, at non--equilibrium magnetoelectronics results for hot electrons from the driving force
$\frac{\Delta \mu_\sigma}{T_{el}}$. Then, in particular, tunnel junctions may exhibit interesting behavior.
Due to spin and magnetization dynamics, electromagnetically induced surface effects occur in topological insulators etc., see Nogueira, Eremin, Bennemann, Meeting DPG, Berlin 2012 and to be published \cite{11,14}.
Also, spin dependent behaviour, i.e. spin currents, in particular the Josephson like spin current driven by a phase difference,
may be studied using spins of atoms or molecules in an optical lattice. This may help to understand and test various many body theories, the approximations used regarding the Hubbard Hamiltonian etc., and the separation of charge and spin currents.
\section{Acknowledgement}
I thank C.Bennemann for help in preparing this article.
\section{Introduction}
\label{s.intro}
The existence of dark matter (DM) is firmly established by a myriad of astrophysical and cosmological observations \cite{Ade:2013zuv}. Nevertheless, the exact characteristics of dark matter particles remain almost completely mysterious. Weakly Interacting Massive Particles (WIMPs) are the most popular DM candidate since they arise in supersymmetry and can naturally occur with the correct relic abundance \cite{Jungman:1995df}, but many other scenarios are possible.
Direct detection via DM-nucleus scattering \cite{Goodman:1984dc} has made tremendous strides, with experiments like LUX \cite{Akerib:2013tjd}, Super-CDMS \cite{Agnese:2013rvf} and XENON100 \cite{Aprile:2012nq} achieving sensitivities to WIMP-nucleon scattering cross sections of $\sigma^\mathrm{SI}_n \sim 10^{-45} \ \mathrm{cm}^2$ for a $\mathcal{O}(100 \, {\rm GeV}$) WIMP. There have also been several anomalies in the $\mathcal{O}(10 \, {\rm GeV})$ mass range \cite{Bernabei:2013cfa, Aalseth:2012if, Angloher:2011uu, Agnese:2013rvf} that seem to conflict with each other, as well as with various exclusion bounds by the above experiments when assuming a WIMP-like scattering. It remains possible that some or all of these hints will be explained by something other than dark matter, especially given how challenging these measurements and their background suppression are in that mass range. Even so, past and current anomalies naturally stimulate a great deal of work by the theory community in an attempt to reconcile conflicting experimental results. The myriad of plausible models demonstrates the necessity to explore as many different dark matter scenarios as possible, lest a crucial signal be overlooked.
In most models of the dark sector, dark matter is charged under some new symmetry to make it stable. However, in light of the complex structure of the Standard Model (SM) there is no particularly strong reason to assume the dark sector to be so simple. We explore the possibility that not just dark matter, but also the force carrier connecting it to the visible sector is charged under this symmetry. This dark mediator then acts as a Double-Dark Portal.
In \cite{Curtin:2013qsa} we introduced a model to realize this scenario: \emph{Dark Mediator Dark Matter} (dmDM). It features a fermionic dark matter candidate $\chi$ with Yukawa couplings to one or more light scalars $\phi_i$. These scalars carry dark charge and can only couple to the SM in pairs, realized as a nonrenormalizable coupling to quarks, $\bar q q \phi \phi/\Lambda$. For sufficiently light $\phi$ this can lead to direct detection via a $2\to3$ nuclear scattering process, shown in \fref{feynmandiagram}.
Bounds from direct detection experiments are usually analyzed assuming a contact operator interaction $\bar \chi \chi \bar q q/\tilde \Lambda^2$. The shape of the resulting nuclear recoil spectrum is entirely determined by the nuclear form factor and dark matter velocity distribution. Many past models feature different nuclear recoil spectra. Examples include the introduction of a mass splitting \cite{TuckerSmith:2001hy, Graham:2010ca, Essig:2010ye}; considering matrix elements $|\mathcal{M}|^2$ with additional velocity- or momentum transfer suppressions (for a complete list see e.g. \cite{MarchRussell:2012hi}), especially at low DM masses close to a GeV \cite{Chang:2009yt}; light scalar or `dark photon' mediators (see e.g. \cite{Essig:2013lka, Essig:2010ye}) which give large enhancements at low nuclear recoil; various forms of composite dark matter \cite{Alves:2009nf, Kribs:2009fy, Lisanti:2009am, Cline:2012bz, Feldstein:2009tr} which may introduce additional form factors; and DM-nucleus scattering with intermediate bound states \cite{Bai:2009cd} which enhances scattering in a narrow range of DM velocities.
Notably missing from this list are alternative process topologies for DM-nucleus scattering.
This omission is remedied by the dmDM scenario, which generates a functionally unique recoil suppression and overall cross section dependence on DM and nucleus mass. Direct detection constraints on dmDM are explored in this paper in detail, and we show that a $\sim 100 \, {\rm GeV}$ dmDM candidate fakes different $\mathcal{O}(10 \, {\rm GeV})$ standard WIMPs at different experiments.
Dark Mediator Dark Matter has important consequences outside of direct detection. Coupling dark matter to a light scalar can ameliorate inconsistencies between simulations and observations of dwarf galaxies \cite{Carlson:1992fn,Spergel:1999mh,Tulin:2013teo} while being compatible with a thermal relic. Perhaps more drastic, however, is the unique pairwise coupling of light scalars to SM quarks.
We conduct the first systematic survey to constrain operators of the form $\bar q q \phi_i \phi_j^*/
\Lambda_{ij}$ where $\phi_i$ is a very light scalar, checking a large variety of cosmological, astrophysical and collider bounds.
The heaviest stable dark mediator has to be lighter than $\sim \, {\rm eV}$ to avoid overclosing the universe. This makes emission during direct detection plausible. The most stringent bounds on its coupling come from observations of neutron star cooling, which require $\Lambda \gtrsim 10^8 \, {\rm TeV}$ for a single dark mediator. However, all constraints are easily circumvented in a model with two mediators, which can generate a strong direct detection signal. The constraints we derive are important outside of the dmDM context as well, applying to any light scalars with the above coupling to the SM.
The pairwise dark mediator coupling to quarks is not gauge invariant above the electroweak breaking scale, necessitating a UV completion. We present one such possibility featuring dark vector quarks, leading to discoverable TeV scale LHC signatures.
This paper is organized as follows. In \sref{dmdm} we define the dark mediator Dark Matter model and outline how dmDM could be realized in a UV-complete theory with its own set of LHC signatures. \sref{yXbounds} summarizes bounds on the dark matter Yukawa coupling to dark mediators.
In \sref{stellarconstraints} we derive stellar astrophysics bounds on dark mediators coupled to SM quarks, which give the most powerful constraints on our scenario. Cosmology and LHC experiments also yield important bounds, which are discussed in \sref{constraints}. A realistic model of dmDM, which avoids all constraints, is defined in \sref{phisummary}. \sref{directdetection} reviews the direct detection phenomenology of dmDM, and we conclude in \sref{conclusion}. Some technical details and additional calculations are presented in the Appendices.
\vspace{3mm}
\section{Dark Mediator Dark Matter}
\label{s.dmdm}
In this section we define the Dark Mediator Dark Matter model and discuss a possible UV-completion involving heavy vector-like quarks that could be discoverable at the LHC.
\subsection{Model Definition}
Given the apparently long lifetime of DM, most models include some symmetry under which the DM candidate is charged to make it stable. An interesting possibility is that not only the DM candidate, but also the mediator connecting it to the visible sector is charged under this dark symmetry. Such a `dark mediator'
$\phi$ could only couple to the SM fields in pairs, at leading order.
There are several possibilities for writing down a dark-mediator model. However, if the mediator couples via additional derivatives or through loops, direct detection is suppressed below observable levels. This limits the choice of dark mediator couplings to the simple construction introduced in \cite{Curtin:2013qsa}, which we repeat here.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{dd_process_prl}
\end{center}
\caption{
The quark-level Feynman diagrams responsible for DM-nucleus scattering in \emph{Dark Mediator Dark Matter} (dmDM). Left: the $2\rightarrow3$ process at tree-level. Right: the loop-induced $2\rightarrow2$ process. The arrows indicate flow of dark charge.
}
\label{f.feynmandiagram}
\end{figure}
Consider real or complex SM singlet scalars $\phi_i$ coupled to quarks,
along with Yukawa couplings to a Dirac fermion DM $\chi$.
The relevant terms in the effective Lagrangian are
\begin{equation}
\label{e.dmdm}
\mathcal{L}_\mathrm{DM} \supset
\displaystyle{\sum_{i,j}^{n_{\phi}}}\, \frac{1}{\Lambda_{ij}} \bar q\,q \,\phi_i \phi_j^* + \displaystyle{\sum_{i}^{n_{\phi}}}\left ( y^i_\chi \overline{\chi^c}\chi \phi_i + h.c. \right)
+ \sum_{i,j,k,l} \lambda_{ijkl} \phi_i \phi_j^* \phi_k \phi_l^* + \cdots,
\end{equation}
where $\ldots$ stands for $\phi, \chi$ mass terms, as well as the rest of the dark sector, which may be more complicated than this minimal setup. This interaction structure can be enforced by a $\mathbb{Z}_4$ symmetry. The first two terms dictate the dark sector's interaction with the SM, while the quartics are only important in the early universe (see \sref{constraints}).\footnote{The $\mathbb{Z}_4$ symmetry also allows Higgs portal couplings of the form $|H|^2 \phi_i \phi_j^*$, but they will have a very subdominant effect on phenomenology compared to the first term in \eref{dmdm}.}
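As a quick bookkeeping sketch (our own illustration, not part of the original analysis), one can verify that each term in \eref{dmdm} is neutral under the $\mathbb{Z}_4$, writing the charges of \tref{particlecontent} as integers mod 4 (phase divided by $\pi/2$): quarks $0$, $\phi$ $2$, $\chi$ $1$, with conjugate fields carrying the opposite charge. The same check shows why a single-$\phi$ coupling to quarks is forbidden:

```python
# Z4 charges as integers mod 4 (phase / (pi/2)); conjugates carry opposite charge.
# Assumed assignment based on the particle-content table of the text.
Z4 = {"q": 0, "qbar": 0, "phi": 2, "phistar": -2, "chi": 1, "chibar_c": 1}

def neutral(*fields):
    """A term is Z4-invariant iff its total charge vanishes mod 4."""
    return sum(Z4[f] for f in fields) % 4 == 0

assert neutral("qbar", "q", "phi", "phistar")       # qbar q phi phi* / Lambda
assert neutral("chibar_c", "chi", "phi")            # y_chi chi^c-bar chi phi (1+1+2 = 4)
assert neutral("phi", "phistar", "phi", "phistar")  # quartic
assert not neutral("qbar", "q", "phi")              # single-phi coupling is forbidden
```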
The leading order process for DM-nucleus scattering is $\chi N \to \bar \chi N \phi$ if $m_\phi \lesssim \mathcal{O}(10 \, {\rm keV})$. However, an elastic scattering $\chi N \to \chi N$ is always present at loop-level since it satisfies all possible symmetries, see \fref{feynmandiagram}. This low-energy $2\to2$ loop process is equivalent to the operator
\begin{equation}
\label{e.2to2operator}
\frac{\,y_{\chi}^2}{2\,\pi^2}\,\frac{1}{\Lambda\,q} \ (\bar{\chi}\,\chi\,\bar{N}\,N),
\end{equation}
(for $n_\phi = 1$) in the massless $\phi$ limit, where $q=\sqrt{2\,m_N\,E_r}$ is the momentum transfer in the scattering.\footnote{Note that in this limit, the process has an IR pole similar to tree-level $t$-channel exchange, hence the $q^{-1}$ dependence.} Effectively, this is identical to a standard WIMP with a $\bar \chi \chi \bar N N$ contact operator, but with an additional $1/E_r$ suppression in the cross section. This gives a phenomenology similar to that of a light mediator exchanged at tree-level with a derivative coupling.
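To get a feel for the scales involved, one can evaluate $q=\sqrt{2\,m_N\,E_r}$ for typical recoils (a numerical sketch; the xenon nucleus mass of $\approx 122 \, {\rm GeV}$ for $A \sim 131$ is our input). The momentum transfer is tens of MeV, far above the eV--keV dark mediator masses considered below, which is why the massless-$\phi$ limit is a good approximation:

```python
import math

m_Xe = 122.0  # GeV, approximate mass of a xenon nucleus (A ~ 131)

for E_r_keV in (1.0, 10.0, 50.0):      # typical direct-detection recoil energies
    E_r = E_r_keV * 1e-6               # convert keV -> GeV
    q = math.sqrt(2 * m_Xe * E_r)      # momentum transfer, GeV
    # q ranges from ~16 MeV to ~110 MeV: far above keV-scale m_phi
    assert 1e-3 < q < 0.2
```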
The main new features of this model for direct detection in \sref{directdetection} are captured by the minimal case with a single mediator $n_\phi = 1$. However, the actual number of dark mediators is important for interpreting indirect constraints in Sections \ref{s.yXbounds}, \ref{s.stellarconstraints} and \ref{s.constraints}. It also affects the relative importance of the two nuclear scattering processes. When $n_{\phi}=1$, the $2\rightarrow3$ process will dominate direct detection for Yukawa coupling $y_\chi$ below some threshold as long as $m_\phi \lesssim \, {\rm keV}$. If $n_\phi = 2$, however, the dominant scalar-DM coupling could be $\bar q q \phi_1 \phi_2^*/\Lambda_{12}$. In that case, the $2\to2$ operator above is $\propto y_\chi^{\phi_1} y_\chi^{\phi_2}$ and can be suppressed without reducing the $2\to3$ rate by taking $y_\chi^{\phi_1} \ll y_\chi^{\phi_2}$. Both processes will be considered for direct detection in \sref{directdetection}.
The effect of strong differences between proton and neutron couplings to DM has been explored in \cite{Feng:2011vu}. To concentrate on the kinematics we shall therefore assume the operator $\bar q q \phi \phi^*/\Lambda$ is flavor-blind in the quark mass basis.
We point out that depending on the UV completion of the model, a leptonic coupling via $\bar \ell \ell \phi \phi^*$ is also possible. We do not consider it here, since direct detection would be very difficult, but indirect constraints, in particular from white dwarf cooling, could be sensitive to such a scenario.
\subsection{A possible UV-completion}
\label{ss.uvcompletion}
\begin{table}[t]
\begin{center}
\begin{tabular*}{0.45\textwidth}{@{\extracolsep{\fill}}c|cccc}
\hline
\\[-7pt]
$\quad$ & $SU(3)_c$ & $SU(2)_L$ & $U(1)_Y$ & $\mathbb{Z}_4$ \\[2pt]
\hline\hline
\\[-6pt]
$\bar Q$ & $\bar 3$ & $\bar 2$ & $-1/6$ & $0$ \\[2pt]
\\[-6pt]
$u$ & $3$ & $1$ & $2/3$ & $0$ \\[2pt]
\\[-6pt]
$d$ & $3$ & $1$ & $-1/3$ & $0$ \\[2pt]
\\[-6pt]
$H$ &$1$ & $ 2$ & $1/2$ & $0$ \\[2pt]
\hline
\\[-6pt]
$\phi$ &$1$& $1$ & $0$ & $\pi$\\[2pt]
\hline
\\[-6pt]
$\psi_{Q_{1,2}}$ &$3$ & $ 2$ & $1/6$ & $\pi$\\[2pt]
\\[-6pt]
$\psi_{u_{1,2}}$ &$3$ & $1$ & $2/3$ & $\pi$\\[2pt]
\\[-6pt]
$\psi_{d_{1,2}}$ &$3$ & $1$ & $-1/3$ & $\pi$\\[2pt]
\hline
\\[-6pt]
$\chi$ & $1$ & $1$ & $\!\!\!\!0$ & $\pi/2$\\[2pt]
\hline
\end{tabular*}
\caption{Particle content of the dark vector quark UV completion of dmDM: complex scalar $\phi$, Dirac fermions $\psi$ (with index $1,\,2$ for the two Weyl fermion components) and $\chi$. $\tilde{H}=i\,\sigma^2H^*$.}\label{t.particlecontent}
\end{center}
\end{table}
\begin{figure}
\begin{center}
\begin{tikzpicture}[line width=1.5 pt, scale=1.2]
\draw[fermion] (-1.5,1.25) -- (-0.5,0.6);
\draw[fermion] (2.3,1.25) -- (-0.5,0.6);
\draw[scalar] (-0.5,0.6) -- (-0.5,-0.8);
\draw[fermionnoarrow] (-1.5,-1.4) -- (-0.5,-0.8);
\draw[fermion][line width=2pt] (-0.5,-0.8) -- (0.5,-0.8);
\draw[scalarnoarrow] (0.5,-0.8) -- (0.5,0.04);
\draw[fermion][line width=2pt] (0.5,-0.8) -- (1.5,-0.8);
\draw[fermionnoarrow] (1.5,-0.8) -- (2.3,-1.45);
\draw[scalar] (1.5,-0.8) -- (2.3,0); %
\begin{scope}[rotate=90]
\begin{scope}[shift={(0,-.5)}]
\clip (0,0) circle (.175cm);
\draw[fermionnoarrow] (-1,1) -- (1,-1);
\draw[fermionnoarrow] (1,1) -- (-1,-1);
\end{scope}
\end{scope}
\node at (-0.8,0) {$\phi$};
\node at (2.5,0) {$\phi$};
\node at (-1.8,-1.45) {$u$};
\node at (-1.8,1.4) {$\chi$};
\node at (1,0) {$\langle v\rangle$};
\node at (0.8,-1) {$\psi_{Q_1}$};
\node at (1.3,-1) {$\psi_{Q_2}$};
\node at (-0.25,-1) {$\psi_{u_1}$};
\node at (0.25,-1) {$\psi_{u_2}$};
\node at (2.5,-1.45) {$Q$};
\node at (2.5,1.4) {$\chi^c$};
\end{tikzpicture}
\end{center}
\caption{The $2\to 3$ direct detection scattering process within the UV completion of dmDM. When treating Higgs vev as a mass insertion, the propagator of heavy Dirac quark is dominated by the chirality-flipping piece, $\frac{M_Q}{p^2-M_Q^2}$, at low energy. This gives the suppression scale in \eref{effcoupling}.
}
\label{f.2to3process}
\end{figure}
Above the electroweak symmetry breaking scale the $\bar q q \phi \phi^*/\Lambda$ operator is realized as $\bar Q_L H q_R \phi\phi^*/M^2$. This is suggestive of a particular UV completion involving heavy vector-like fermions coupling to $\phi$ and SM quarks via Yukawa couplings. The minimal particle content to realize dmDM is therefore a light scalar mediator $\phi$, heavy vector-like quarks $\psi_{Q,\,q}$ in the same gauge representations as the SM $Q_L, u_R, d_R$, respectively, and a Dirac fermion dark matter candidate $\chi$. Their charges are shown in \tref{particlecontent}. The Lagrangian\footnote{We show the $n_\phi = 1$ complex scalar case; the generalization to more mediators, or to real dark mediators, is trivial.} contains Yukawa couplings
\begin{eqnarray}
\nonumber \mathcal{L} &\subset& y_Q\,\phi^*\,\bar Q \,{\psi_{Q_2}}+ y_{h}\left(\bar{\psi}_{Q_{1,2}} H\psi_{d_{2,1}}+\bar{\psi}_{Q_{1,2}}\tilde{H}\psi_{u_{2,1}}\right)\\
&& + y_q\left(\phi\,\bar{\psi}_{d_1}\,d+\phi\,\bar{\psi}_{u_1}\,u\right)+ h.c.,
\label{e.HQyukawa}
\end{eqnarray}
where the indices $1,\,2$ label the two chirality components of the Dirac fermions $\psi$. The $\psi_{Q,u,d}$ have Dirac masses
\begin{equation}
M_Q\,\bar \psi_{Q}\, {\psi}_{Q}+M_u\,\bar \psi_{u}\,{\psi}_{u}+M_d\,\bar\psi_{d}\,{\psi}_{d}.
\end{equation}
The DM mass and its coupling to $\phi$ are given by
\begin{equation}\label{eq:dmcoupling}
m_{\chi}\,\bar \chi\,\chi + y_{\chi}\, \overline{\chi^c}\,\chi\,\phi + h.c.\,\,.
\end{equation}
We assume all the couplings are flavor universal and
$M_Q=M_q$, $y_Q = y_q$
for simplicity.
The direct detection $2\rightarrow3$ scattering process is shown in \fref{2to3process}.
When the momentum transfer through heavy quarks is much smaller than $M_Q$, we can integrate out the lower part of the diagram to generate the dimension 6 operator
\begin{equation}\label{eq:phiphiQq}
\frac{y_Q^2 y_{h}}{M_Q^2} \left(\bar{Q}\,H\,d+\bar{Q}\,\tilde{H}\,u\right)\phi \phi^*.
\end{equation}
Below the scale of electroweak symmetry breaking this becomes the operator of \eref{dmdm} with
\begin{equation}\label{e.effcoupling}
\Lambda = \frac{M_Q^2}{y_Q^2 y_h v}
\end{equation}
where $v = 246 \, {\rm GeV}$ is the SM Higgs VEV.
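As a rough numerical illustration of \eref{effcoupling} (a sketch; the order-one benchmark Yukawas are our assumption), TeV-scale vector-like quarks yield $\Lambda$ of a few TeV, and even $M_Q = 10 \, {\rm TeV}$ stays well below the $\Lambda \lesssim 10^4 \, {\rm TeV}$ needed later for an observable direct detection signal:

```python
v = 246.0  # GeV, SM Higgs vev

def Lambda(M_Q, y_Q=1.0, y_h=1.0):
    """Effective suppression scale Lambda = M_Q^2 / (y_Q^2 y_h v), in GeV."""
    return M_Q**2 / (y_Q**2 * y_h * v)

assert 3e3 < Lambda(1000.0) < 5e3    # M_Q = 1 TeV  -> Lambda ~ 4 TeV
assert Lambda(10000.0) < 1e7         # M_Q = 10 TeV -> Lambda ~ 4e5 GeV < 1e4 TeV
```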
As we will show, $M_Q$ could easily be TeV scale, allowing for discovery of these heavy vector-like quarks at the LHC. As long as the LHC at $\sqrt{s} = 8 \, {\rm TeV}$ has not produced them on-shell, they are not trivially excluded despite being new colored states that couple to the Higgs. Since they do not receive their mass primarily from the Higgs vev, their contribution to the $h\gamma\gamma$ loop coupling is strongly suppressed. As we discuss in \sref{constraints}, the collider constraints on additional vector-like quark generations can be satisfied for $M_Q \gtrsim \, {\rm TeV}$. The quark Yukawa couplings do receive a flavor-universal correction which may lead to the light quark Yukawa couplings being tuned to the order of $0.1\%$,
but like the origin of the light scalar $\phi$ we put these naturalness issues aside to concentrate on the phenomenology of dmDM.
\vspace{3mm}
\section{Constraining the DM Yukawa Coupling}
\label{s.yXbounds}
The dark matter Yukawa coupling $y_\chi \overline {\chi^c} \chi \phi$ can be constrained by various astrophysical and cosmological observations, the most important of which we summarize here. For simplicity these bounds are formulated for $n_\phi = 1$, but can also be applied directly to $n_\phi > 1$ scenarios if one Yukawa coupling dominates.
The dark matter relic density $\Omega_\mathrm{CDM} = 0.1196 \pm 0.0031$ has been accurately measured by the Planck Satellite \cite{Ade:2013zuv}. Under the assumptions of a simple thermal relic this fixes $y_\chi$ to a \emph{specific value} (which depends on $m_\chi$). The lowest-order annihilation cross section for the process $\chi\,\bar{\chi}\to\phi\phi^*$ is
\begin{equation}
\sigma_{\chi\,\bar{\chi}\to\phi\phi^*}=\frac{y_{\chi}^4}{64\pi m_{\chi}^2}\label{eq:XsecRelic},
\end{equation}
assuming no sizable $\phi^3$ couplings. Performing the standard WIMP freeze-out calculation \cite{Kolb:1990vq} we find that the $\phi \phi^* \leftrightarrow \bar \chi \chi$ process freezes out at the usual $T \sim m_\chi/20$. Requiring that $\Omega_\chi = \Omega_\mathrm{CDM}$ gives
\begin{equation}
\label{e.DMrelicyX}
y_\chi \approx 0.0027 \sqrt{\frac{m_\chi}{\, {\rm GeV}}}.
\end{equation}
This is generically very small, of order $0.01$ for $\sim 10 \, {\rm GeV}$ DM, and is compared to the other $y_\chi$ bounds in \fref{yXboundplot} (magenta line). We emphasize that this constraint will be shifted if $\chi$ is non-thermally produced.
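Evaluating \eref{DMrelicyX} numerically (a trivial sketch; the sample masses are illustrative) confirms the quoted magnitudes:

```python
import math

def y_relic(m_chi_GeV):
    """Thermal-relic Yukawa fit, y_chi ~ 0.0027 * sqrt(m_chi / GeV)."""
    return 0.0027 * math.sqrt(m_chi_GeV)

# "of order 0.01 for ~10 GeV DM", as stated in the text
assert 0.005 < y_relic(10.0) < 0.02
# still below 0.1 even at a TeV
assert 0.05 < y_relic(1000.0) < 0.1
```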
Although DM interactions are mediated by light scalars, the Sommerfeld enhancement, which is proportional to \cite{Feng:2010zp}
\begin{equation}
S\simeq\frac{\pi\alpha_{\chi}/v}{1-e^{-\pi\alpha_{\chi}/v}},
\end{equation}
is negligible due to the small coupling $\alpha_{\chi}=y_{\chi}^2/4\pi$, as well as the relatively large velocity $v\simeq 0.3$ during freeze-out.
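This is easy to check numerically (a sketch; we take $\alpha_\chi = y_\chi^2/4\pi$ and a representative Yukawa $y_\chi = 0.01$ as inputs):

```python
import math

def sommerfeld(alpha, v):
    """S = x / (1 - exp(-x)) with x = pi*alpha/v; S -> 1 for x << 1."""
    x = math.pi * alpha / v
    return x / (1.0 - math.exp(-x))

y_chi = 0.01                            # representative Yukawa (assumed benchmark)
alpha_chi = y_chi**2 / (4 * math.pi)    # ~ 8e-6
S = sommerfeld(alpha_chi, v=0.3)        # freeze-out velocity

assert abs(S - 1.0) < 1e-3              # enhancement is indeed negligible
```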
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{allyXbounds}
\end{center}
\caption{
Bounds on the Yukawa coupling $y_\chi \overline {\chi^c} \chi \phi$ for $n_\phi = 1$. (These bounds can also be applied directly to $n_\phi > 1$ scenarios if one Yukawa coupling dominates)
Magenta: required value of $y_\chi$ for $\chi$ to be a thermal relic. Cyan and Orange: upper bounds on $y_\chi$ from bullet cluster and ellipticity observations. The green shaded region implements the SIDM solution to the core-cusp and too-big-to-fail problems of dwarf galaxies \cite{Carlson:1992fn,Spergel:1999mh,Tulin:2013teo}, while the pink region can modify the halo of milky way size galaxies. See text for details. Black curve: $2\to3$ dominated direct detection requires $y_\chi$ to lie below this curve if $n_\phi = 1$, see \ssref{ddconstraints}.
}
\label{f.yXboundplot}
\end{figure}
An \emph{upper bound} on the dark matter self-interaction may be obtained from observations of the Bullet Cluster and galactic ellipticities. This was done by the authors of \cite{Feng:2009mn} for a massless mediator. We can apply those bounds directly to our model as long as $m_\phi$ is much smaller than the momentum transfer of a characteristic DM-DM collision ($q \gtrsim \, {\rm MeV}$ for $m_\chi \gtrsim \, {\rm GeV}$). The bullet cluster bound
\begin{equation}
\label{e.bulletclusterbound}
y_\chi \lesssim 0.13 \left(\frac{m_\chi}{\, {\rm GeV}}\right)^{3/4}
\end{equation}
is considered quite reliable, but concerns have been raised about the ellipticity bound, the strength of which may have been overestimated \cite{Peter:2012jh}. Both upper bounds are shown in \fref{yXboundplot} (cyan and orange lines).
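Comparing the thermal-relic value \eref{DMrelicyX} against the bullet cluster bound \eref{bulletclusterbound} (a quick numerical sketch) confirms what \fref{yXboundplot} shows: a thermal relic sits below the self-interaction bounds throughout the GeV--TeV mass range:

```python
import math

for m in (1.0, 10.0, 100.0, 1000.0):     # m_chi in GeV
    y_relic  = 0.0027 * math.sqrt(m)     # thermal-relic Yukawa
    y_bullet = 0.13 * m**0.75            # bullet cluster upper bound
    assert y_relic < y_bullet            # relic value always allowed
```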
Rather than merely requiring the light mediator to not spoil well-understood aspects of galaxy formation and interaction, one could go one step further and use the dark matter self-interaction to address existing inconsistencies between prediction and observation.
Current $N$-body simulations of Cold Dark Matter halos predict an overabundance of dwarf spheroidals, as well as dwarf galaxy halos that are more cusped than observed. These inconsistencies are called the too-big-to-fail and core-cusp problems. It has been shown that the disagreement between simulations and observation can be ameliorated by introducing a sizable dark matter self-interaction, dubbed the Self-Interacting Dark Matter (SIDM) scenario \cite{Carlson:1992fn,Spergel:1999mh,Tulin:2013teo}.
The presence of a light scalar in the $m_\phi \lesssim \, {\rm MeV}$ mass range allows dmDM to act as a realization of SIDM. To derive the \emph{preferred range of $y_\chi$} we follow the procedure in \cite{Tulin:2013teo}.
The small ratio between the potential energy of $\phi$ mediation and the kinetic energy of DM in galactic halos, $2\alpha_{\chi}m_{\phi}/(m_{\chi}v^2)\ll 1$, shows that DM self-interaction should be described in the classical limit. The transfer cross section for DM scattering,
\begin{equation}\label{eq:sigmaTdmDM}
\frac{\sigma_T}{m_{\chi}} \simeq \frac{y_{\chi}^4}{\pi\,m_{\chi}^3\,v^4}\,\ln\left(\frac{4\pi\,m_{\chi}\,v^2}{2\,y_{\chi}^2\,m_{\phi}}\right),
\end{equation}
is just the total cross section weighted by fractional longitudinal momentum transfer. A value of
\begin{equation}
\label{e.SIDMxsec}
\frac{\sigma_T}{m_{\chi}} = 0.5-30\,\rm{cm}^{2}/\rm{g}
\end{equation}
could reconcile the inconsistencies between $N$-body simulations and observations. The required coupling depends on the ambient dark matter velocity, which is $\sim 30$ km/s for dwarf galaxies and $\sim 300$ km/s in larger milky way size galaxies. \fref{yXboundplot} shows the preferred bands of $y_\chi$ to achieve the cross section \eref{SIDMxsec} in these two systems. In this plot, $m_\phi = \, {\rm MeV}$, but the change for $m_\phi = \, {\rm eV}$ is not substantial.\footnote{The heavier $\phi$ is chosen to evade neutron star bounds, see \sssref{neutronstarcooling}. $2\to3$ direct detection with emission of a $\lesssim \, {\rm keV}$ dark scalar can still occur in an $n_\phi = 2$ model, see \sref{phisummary}.} As we can see, the dmDM model with a thermal relic DM does provide a potential solution to the core-cusp and too-big-to-fail problems of dwarf galaxies.
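Evaluating \eref{sigmaTdmDM} at a benchmark point (a numerical sketch; the $10 \, {\rm GeV}$ thermal-relic benchmark, $m_\phi = 1 \, {\rm MeV}$, and the unit-conversion constants $(\hbar c)^2 = 3.894\times10^{-28}\,{\rm GeV^2\,cm^2}$, $1 \, {\rm GeV} = 1.783\times10^{-24}$ g are our inputs) indeed lands in the preferred band \eref{SIDMxsec} at dwarf-galaxy velocities:

```python
import math

# Benchmark (assumed): 10 GeV thermal-relic DM, 1 MeV mediator, dwarf-galaxy velocity
m_chi = 10.0                          # GeV
y     = 0.0027 * math.sqrt(m_chi)     # thermal-relic Yukawa
m_phi = 1e-3                          # GeV (1 MeV)
v     = 30e5 / 3e10                   # 30 km/s in units of c

log_arg = 4 * math.pi * m_chi * v**2 / (2 * y**2 * m_phi)
sigmaT_over_m = y**4 / (math.pi * m_chi**3 * v**4) * math.log(log_arg)  # GeV^-3

# convert GeV^-3 -> cm^2/g
sigmaT_cgs = sigmaT_over_m * 3.894e-28 / 1.783e-24

assert 0.5 < sigmaT_cgs < 30.0        # inside the preferred SIDM band
```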
Finally, as we discuss in \ssref{ddconstraints}, there is an \emph{upper bound} on $y_\chi$ for the $2\to3$ process to dominate direct detection when $n_\phi = 1$. If $y_\chi$ is larger, direct detection proceeds via the $2\to2$ loop process. This is shown for the LUX experiment as the black line in \fref{yXboundplot}. (The corresponding upper bound for other experiments is somewhat weaker.) Note that this boundary between the two direct detection regimes is arbitrarily shifted for $n_\phi = 2$.
In summary, \fref{yXboundplot} shows both the \emph{preferred values} of $y_\chi$ for a thermal relic and to resolve inconsistencies between observations and simulations for dwarf galaxies and the milky way; it also shows the \emph{upper bounds} on $y_\chi$ to satisfy bullet cluster and self-interaction bounds, and to ensure $2\to3$ dominated direct detection. Roughly speaking, the most relevant values of $y_\chi$ are $\sim 10^{-3} - 10^{-2}$.
\vspace{3mm}
\section{Constraining the Dark Mediator $\phi$ through Stellar Astrophysics}
\label{s.stellarconstraints}
A light dark mediator like $\phi$ coupling to the SM via
\begin{equation}
\label{e.qqphiphi}
\frac{1}{\Lambda}\bar q q \phi \phi^*
\end{equation}
is produced in the early universe, as well as in stellar cores and at high-energy colliders.
In this section we compute $m_\phi$-dependent bounds on $\Lambda$ from stellar astrophysics. The light scalar $\phi$ is produced in stellar cores if $m_\phi \lesssim T$. This can affect the length of the neutrino burst in supernova explosions, radiative heat transfer and energy loss in the sun, and the cooling of stellar relics. We assume $n_\phi = 1$, but the constraints are easily applied to the more general case.
The derivation of these bounds differs from the corresponding calculations for axions, since light scalars couple more strongly at low energy due to the scaling of the operator \eref{qqphiphi}. In the regimes where the respective bounds can be set, $\phi$ fully thermalizes in the sun and white dwarfs.
By far the strongest constraints are obtained from observations of neutron star cooling: $\Lambda \gtrsim 10^8 \, {\rm TeV}$ for $m_\phi \lesssim 100 \, {\rm keV}$, which excludes this scenario for direct detection completely. However, in \sref{phisummary} we construct an $n_\phi = 2$ scenario with one eV and one MeV dark mediator that evades all constraints while allowing for sizable direct detection signals.
It is useful to keep in mind the range of $\Lambda$ relevant for direct detection. As discussed in \sref{yXbounds}, the preferred range for the dominant DM Yukawa coupling is $y_\chi \sim 10^{-3} - 10^{-2}$. Direct detection bounds on dmDM were computed in \cite{Curtin:2013qsa} and are reviewed in \sref{directdetection}. For dmDM to be detectable above the irreducible neutrino background, $\Lambda \lesssim 10^4 \, {\rm TeV}$ is required for the relevant dark mediator coupling to quarks.
\subsection{$\phi$ interaction and production cross sections}
\begin{figure*}
\begin{center}
\includegraphics[width=11.4cm]{LowEPhiCrossSectionPlot}
\end{center}
\caption{
Low-energy scattering and production cross sections for computing bounds on the dmDM model, compared to some relevant SM processes. Solid lines: coherent scattering of $\phi$ off a stationary nucleus via the operator $\bar q q \phi \phi^*/\Lambda$. Dashed lines: coherent scattering of a neutrino off a stationary nucleus via $Z$-exchange (see also \cite{Brice:2013fwa}). Long-Dash-Dotted lines: Compton scattering of a photon off a stationary electron or proton. Long-Dashed lines: $p p \rightarrow p p \phi \phi^*$, $\gamma p \rightarrow p \phi \phi^*$ and $\gamma \mathrm{He} \rightarrow \mathrm{He} \ \phi \phi^*$ where one initial proton is stationary. The blue band represents a naive dimensional analysis estimate \eref{gagaphiphiestimate} of $\gamma \gamma \rightarrow \phi \phi^*$ (or the reverse annihilation process $\phi \phi^* \to \gamma \gamma$), taking $\mathcal{B}^2 = 1 - 100$. In all cross sections involving the $ \bar q q \phi \phi^*/\Lambda$ operator we used $\Lambda = 10 \, {\rm TeV}$. }
\label{f.crosssectionphilowE}
\end{figure*}
Computing stellar astrophysics and cosmological bounds requires an understanding of the $\phi$-nucleus \emph{scattering} cross sections at sub-GeV energies. This is easily computed analytically using standard methods for DM scattering and is shown in \fref{crosssectionphilowE} for $\Lambda = 10 \, {\rm TeV}$. For illustration we also compare these cross sections to some relevant SM scattering processes, $\nu N \rightarrow \nu N$ and Compton scattering. Note the different energy scaling of these cross sections, with $\sigma(\phi N \rightarrow \phi N)$ being independent of energy for $E_\phi \lesssim 100 \, {\rm MeV}$.
At tree-level, $\phi$ only couples hadronically. Therefore, the most relevant \emph{production} processes for $\phi$ in stellar cores are
\begin{equation}\label{e.phiproduction}
N \gamma \rightarrow N \phi \phi^* \ , \ \ \ \
p p \rightarrow p p \phi \phi^* \ , \ \ \ \
\gamma \gamma \rightarrow \phi \phi^*
\end{equation}
Again we are only concerned with sub-GeV energy scales. We can model the first two processes, shown in \fref{crosssectionphilowE}, in \texttt{MadGraph5} by treating the proton as a fundamental fermion and multiplying the cross section by a quark-nucleon matrix element factor, see \eref{Bsqfactor}. The Helm form factor \eref{Helm} is also included for nuclei. The one-pion exchange approximation was employed for the first process \cite{steigman}, and the obtained cross section should be seen as an $\mathcal{O}(1)$ estimate. The first process can occur off any nucleus, with $N = p, \ \mathrm{He}$ shown in \fref{crosssectionphilowE} (the cross sections for $N = \mathrm{He}, \mathrm{C}, \mathrm{O}$ are nearly identical), which is relevant in the Sun and white dwarfs. The second process proceeds identically for protons and neutrons and is relevant in neutron stars, with additional subtleties due to neutron degeneracy discussed in \sssref{neutronstarcooling}.
The photon annihilation process $\gamma \gamma \rightarrow \phi \phi^*$ is difficult to calculate due to unknown form factors connecting quarks to hadronic QCD resonances. A rough estimate of the amplitude can be obtained by treating it as a loop process mediated by constituent quarks. The same approach is used to calculate the photon-meson couplings, for example in \cite{Volkov:2009mz}. With the correct power of electric charge and one mass insertion for the correct chirality, the size of the operator $|\phi|^2F_{\mu\nu}F^{\mu\nu}$ is approximated as
\begin{equation}
\frac{\alpha}{4\,\pi}\frac{\mathcal{B}}{\Lambda\,m_q}\,|\phi|^2F_{\mu\nu}F^{\mu\nu},
\end{equation}
where $\mathcal{B}$ is the form factor between the $\phi$ and constituent quarks, and $m_{u,d}\simeq 263$ MeV \cite{Volkov:2009mz} is the mass of the constituent quarks within the NJL model. The resulting cross section is
\begin{eqnarray}
\label{e.gagaphiphiestimate}
\sigma_{\gamma\gamma\to\phi\phi}
&\sim&
\frac{1}{16\,\pi}\left(\frac{\alpha}{\pi\,m_q}\right)^2\left( \frac{\mathcal{B}}{\Lambda}\right)^2\,E_{\gamma}^2
\\ \nonumber
&\approx&(7 \times 10^{-14} \mathrm{pb} ) \ \mathcal{B}^2 \left(\frac{\, {\rm TeV}}{\Lambda}\right)^2 \left(\frac{E_\gamma}{\, {\rm keV}}\right)^2
\end{eqnarray}
The blue band in \fref{crosssectionphilowE} is a very rough estimate with $\mathcal{B}^2 = 1 - 100$. At our level of precision we also take this to be the cross section for the reverse annihilation process $\phi \phi^* \to \gamma \gamma$.
\subsection{Supernovae}
\label{sss.supernovae}
Like massless axions, production and emission of $\phi$'s can lead to rapid energy loss during a supernova explosion. This can be constrained by measuring the duration of the associated neutrino burst. There are two allowed regimes \cite{Raffelt:2006cw}. The $\phi$ are trapped in the stellar medium if they couple more strongly to the SM than neutrinos do; in that case they do not affect the neutrino burst. Alternatively, if the SM coupling is five orders of magnitude weaker than that of neutrinos, $\phi$ production is negligible and does not affect the supernova.
Rescaling $\sigma_{\phi N \to \phi N} \propto \Lambda^{-2}$ at $E_\phi \sim 10 \, {\rm MeV}$ from \fref{crosssectionphilowE}, we see that the former constraint is satisfied for $\Lambda \lesssim 10^6 \, {\rm TeV}$. Therefore, supernovae roughly supply the bound
\begin{equation}
\Lambda \gtrsim 10^{11} \, {\rm TeV} \ \ \mathrm{or} \ \ \Lambda \lesssim 10^6 \, {\rm TeV}
\end{equation}
on $\Lambda$ for scalars with mass $m_\phi \lesssim 10 \, {\rm MeV}$, the temperature of a supernova explosion.
\subsection{Solar Energy Loss and Radiative Heat Transfer}
\label{sss.solarcooling}
A stellar core at some temperature $T \ll \, {\rm GeV}$ can be seen as a fixed target experiment in which slow-moving nuclei are bombarded by photons as well as relativistic electrons and, in the case of dmDM, $\phi$ scalars. The most relevant production processes for $\phi$ are shown in \eref{phiproduction}, with cross sections as a function of energy illustrated in \fref{crosssectionphilowE}. The $\phi$ production rate per second per volume via a process $X_1 X_2 \rightarrow \phi \phi^* +$ SM particles, with cross section $\sigma_\mathrm{\phi prod}$ and parent particle number densities $n_{X_i}$, is
\begin{equation}
\label{e.rphicreate}
r_\phi^\mathrm{create} = 2 n_{X_1} n_{X_2} c \ \sigma_\mathrm{\phi prod} \propto \Lambda^{-2}.
\end{equation}
(We assume $\phi$ is so light that it is always relativistic.) On the other hand, the mean free path for $\phi$ before it scatters off nuclei in the star is
\begin{equation}
\label{e.Lphi}
L_\phi = \left( \sum_i n_{N_i} \sigma_{\phi N_i \rightarrow \phi N_i} \right)^{-1} \propto \Lambda^2
\end{equation}
for $N_i = $ \{p, He\} in the case of the Sun, with additional heavier elements in white dwarfs.
Estimating \emph{energy loss} due to $\phi$ emission from the star is greatly simplified if we can make four assumptions:
\begin{enumerate}[(a)]
\item The effect of $\phi$ production is small enough so as to not significantly influence the evolution of the star, allowing us to treat it as a background source of $\phi$'s.\footnote{This is a consistent assumption when setting conservative limits.}
\item $\phi$ particles are produced predominantly at the center of the star.
\item $L_\phi$ is short enough that $\phi$ scatters many times and thermalizes before leaving the star.
\item There is negligible $\phi$ annihilation in the star.
\end{enumerate}
If these conditions are satisfied, the created $\phi$ particles diffuse outwards from the center until they reach a layer of low enough density so that the surface of the star is within $\sim$ one scattering length, at which point they escape. Each $\phi$ carries away energy $E_\phi \sim T_\phi^\mathrm{escape}$, where $T_\phi^\mathrm{escape}$ is the temperature of the `layer of last scattering'.\footnote{This is to be compared to the free-streaming case, where the energy distribution of $\phi$'s from creation processes might have to be taken into account.} In the absence of annihilation processes, the equilibrium rate for $\phi$ emission is equal to the total rate of $\phi$ production, which together with $T_\phi^\mathrm{escape}$ gives the total energy loss from $\phi$ emission.
We now perform this computation for the case of the Sun. Radial density, temperature and mass fraction profiles for the standard solar model can be found in basic astrophysics textbooks and are reproduced in \aref{radialprofiles} for reference. The radius of the sun is $R_\mathrm{sun} \approx 6.96 \times 10^{10}$ cm, while the central density and temperature are $\rho_\mathrm{sun}(0) \approx 150 \ \mathrm{g} \ \mathrm{cm}^{-3}$ and $T_\mathrm{sun}(0) \approx 1.5 \times 10^7 \ \mathrm{K} \approx 1.3 \, {\rm keV}$. The corresponding nucleus number densities are of order $10^{25} \ \mathrm{cm}^{-3}$, while the density of photons obeying a Bose-Einstein distribution is $n_\gamma = \frac{2 \zeta(3)}{\pi^2} T^3 \sim 10^{22} \ \mathrm{cm}^{-3}$. Consulting \fref{crosssectionphilowE} and \eref{rphicreate}, it is clear that $\gamma N \to N \phi \phi^*$ is the dominant production process for $T \sim \, {\rm keV}$.
The $\phi$ creation rate per volume as a function of distance $R$ from the sun's center is
\begin{equation}
r_\phi^\mathrm{create}(R) = 2 c n_\gamma \sum_{N = \mathrm{p}, \mathrm{He}} n_N \tilde \sigma_{N \gamma \to N \phi \phi^*}
\label{e.rphicreatesun}
\end{equation}
where $n_\gamma$ and $\tilde \sigma$ are evaluated at temperature $T_\mathrm{sun}(R)$, and $\tilde \sigma$ is the thermally averaged cross section for a Maxwell-Boltzmann distribution (an excellent approximation to the Bose-Einstein distribution in the sun) of photons hitting a stationary nucleus:
\begin{equation}
\tilde \sigma_{N \gamma \to N \phi \phi^*}(T) = \int_0^\infty d E_\phi f_\mathrm{MB}(T; E_\phi) \sigma_{N \gamma \to N \phi \phi^*} (E_\phi).
\end{equation}
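The thermal average is a one-dimensional quadrature. The sketch below implements it for a normalized Maxwell-Boltzmann energy distribution and an arbitrary user-supplied cross section; as a sanity check, a constant cross section must average to itself, and the mean photon energy must come out as $3T$:

```python
import math

def f_MB(T, E):
    """Maxwell-Boltzmann energy distribution for relativistic photons:
    f(E) = E^2 exp(-E/T) / (2 T^3), normalized to unity on E in [0, inf)."""
    return E * E * math.exp(-E / T) / (2.0 * T**3)

def thermal_average(sigma, T, n_steps=20000, E_max_over_T=40.0):
    """Trapezoidal quadrature of f_MB(T; E) * sigma(E) over E."""
    dE = E_max_over_T * T / n_steps
    total = 0.0
    for i in range(n_steps + 1):
        E = i * dE
        w = 0.5 if i in (0, n_steps) else 1.0
        total += w * f_MB(T, E) * sigma(E) * dE
    return total
```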
The resulting $r_\phi^\mathrm{create}(R) \propto \Lambda^{-2}$ is shown in \fref{SUNrphicreateLambda10tev}. About $90\%$ of $\phi$ production takes place within $0.2$ solar radii, validating assumption (b) above. The \emph{total} rate of $\phi$ creation in the entire sun is
\begin{equation}
\mathcal{R}_\phi^\mathrm{create} \approx (1.0 \times 10^{42} \ \mathrm{s}^{-1}) \left(\frac{\, {\rm TeV}}{\Lambda}\right)^2.
\end{equation}
For our purposes here, define $R_\mathrm{core} = 0.2 R_\mathrm{star}$. Since most of the $\phi$ creation takes place within that radius,
\begin{equation}
\mathcal{R}_\phi^\mathrm{create} \sim r_\phi^\mathrm{create}(0) \ \times \ \frac{4}{3} \pi R_\mathrm{core}^3
\end{equation}
is satisfied up to a factor of two.
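As a quick consistency check of this estimate, one can invert it to infer the central creation rate from the quoted total rate at $\Lambda = 1 \, {\rm TeV}$, taking $R_\mathrm{sun} \approx 6.96 \times 10^{10}$ cm; the result is an order-of-magnitude figure only:

```python
import math

R_SUN_CM = 6.96e10
R_CORE_CM = 0.2 * R_SUN_CM
V_CORE_CM3 = 4.0 / 3.0 * math.pi * R_CORE_CM**3  # ~ 1.1e31 cm^3

R_CREATE_TOTAL = 1.0e42  # total phi creation rate in /s at Lambda = 1 TeV (from text)

# Inferred central creation rate per volume, r(0) ~ 1e11 /cm^3/s
r_center = R_CREATE_TOTAL / V_CORE_CM3
```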
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{radialprofiles_sunWD_rphicreateLambda10tev.pdf}
\end{center}
\caption{
Rate of $\phi$ creation in the sun (solid) and our benchmark white dwarf with 0.1 solar luminosity (dashed) for $\Lambda = 10 \, {\rm TeV}$.
}
\label{f.SUNrphicreateLambda10tev}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{radialprofiles_sunWD_Lphi10tev.pdf}
\end{center}
\caption{
Solid (dashed) line: mean free path of $\phi$ in the sun (our benchmark white dwarf with 0.1 solar luminosity) for $\Lambda = 10 \, {\rm TeV}$. The intersection with the dotted line marks the `layer of last scattering'.
}
\label{f.SUNLphiLambda10tev}
\end{figure}
We next compute the $\phi$ mean free path $L_\phi(R) \propto \Lambda^2$ via \eref{Lphi} using similarly averaged scattering cross sections. This is shown in \fref{SUNLphiLambda10tev} for $\Lambda = 10 \, {\rm TeV}$. For this benchmark value assumption (c) is certainly satisfied. The `layer of last scattering' is situated at $R \approx R_\phi^\mathrm{escape}$, where
\begin{equation}
L_\phi(R_\phi^\mathrm{escape}) = R_\mathrm{star} - R_\phi^\mathrm{escape}.
\end{equation}
This allows us to define the approximate temperature of the escaping $\phi$'s as
\begin{equation}
T_\phi^\mathrm{escape} = T_\mathrm{sun}(R_\phi^\mathrm{escape}).
\end{equation}
Both $R_\phi^\mathrm{escape}$ and $T_\phi^\mathrm{escape}$ are shown as functions of $\Lambda$ in \fref{SUNRescapeTescapetev}. Assumption (c) holds for $\Lambda \lesssim 100 \, {\rm TeV}$. On the other hand, our calculations become unreliable around $\Lambda \sim 1 \, {\rm TeV}$ since we then become sensitive to details of the sun's surface structure.
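The defining condition for the layer of last scattering is straightforward to solve numerically. The sketch below uses a toy exponential density profile with scale height $0.1\,R_\mathrm{star}$ and a placeholder cross-section normalization chosen so that $L_\phi(0)$ at $\Lambda = 10 \, {\rm TeV}$ is of the order shown in \fref{SUNLphiLambda10tev}; it is not the actual solar model:

```python
import math

R_STAR = 6.96e10   # cm
N0 = 1.0e25        # toy central nucleus density, /cm^3
SIGMA0 = 3.0e-31   # placeholder phi-N cross section at Lambda = 1 TeV, cm^2

def n_profile(R):
    """Toy exponential density profile with scale height 0.1 R_star."""
    return N0 * math.exp(-R / (0.1 * R_STAR))

def L_phi(R, Lambda_TeV):
    """Local mean free path in cm."""
    return Lambda_TeV**2 / (n_profile(R) * SIGMA0)

def R_escape(Lambda_TeV, n_grid=20000):
    """Smallest radius where the mean free path exceeds the remaining depth,
    i.e. the approximate solution of L_phi(R) = R_star - R."""
    for i in range(n_grid + 1):
        R = i * R_STAR / n_grid
        if L_phi(R, Lambda_TeV) >= R_STAR - R:
            return R
    return R_STAR
```

Larger $\Lambda$ lengthens the mean free path, so the escape layer moves inward (and $T_\phi^\mathrm{escape}$ rises), consistent with \fref{SUNRescapeTescapetev}.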
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{radialprofiles_sunWD_RphiescapevsLambdaTeV.pdf}
\includegraphics[width=7cm]{radialprofiles_sunWD_TphiescapevsLambdaTeV.pdf}
\end{center}
\caption{
Top: $R_\phi^\mathrm{escape}$ defining the `layer of last scattering' of $\phi$'s in the Sun (solid) and our benchmark white dwarf with 0.1 solar luminosity (dashed). The many-scattering assumption is valid in the Sun for $\Lambda \lesssim 100 \, {\rm TeV}$. Bottom: $T_\phi^\mathrm{escape} = T_\mathrm{star}(R_\phi^\mathrm{escape})$, the temperature of escaping $\phi$'s.
}
\label{f.SUNRescapeTescapetev}
\end{figure}
We can now estimate the fraction of the star's luminosity in the form of $\phi$ emission, making use of the (yet to be verified) assumption (d), which gives at equilibrium:
\begin{equation}
\mathcal{R}_\phi^\mathrm{escape} = \mathcal{R}_\phi^\mathrm{create}.
\end{equation}
Therefore, the power of $\phi$ emission is
\begin{equation}
\label{e.Pphisun}
P_\phi \approx \frac{3}{2} T_\phi^\mathrm{escape} \ \mathcal{R}_\phi^\mathrm{create}.
\end{equation}
The sun's measured power output is $P_\mathrm{sun} \approx 3.85 \times 10^{26}$ Watts. The ratio $P_\phi/P_\mathrm{sun}$ as a function of $\Lambda$ is shown in \fref{SUNLambdaLimit}. The $\phi$ contribution becomes negligible\footnote{Compare to neutrino emission $P_\nu/P_\mathrm{sun} \approx 2\%$. \cite{raffelt1996}} for
\begin{equation}
\Lambda \gtrsim 3 \, {\rm TeV}.
\end{equation}
However, as we will see below, this does not constitute the strongest bound obtained from the sun.
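Putting the numbers together, the estimate of \eref{Pphisun} can be checked directly. The escape temperature below is an assumed $\sim 0.2$ keV, read off qualitatively from \fref{SUNRescapeTescapetev}, so the resulting ratio is illustrative:

```python
KEV_TO_JOULE = 1.602e-16
P_SUN_W = 3.85e26
P_SUN_KEV_S = P_SUN_W / KEV_TO_JOULE  # ~ 2.4e42 keV/s

def P_phi_over_P_sun(Lambda_TeV, T_escape_keV=0.2):
    """Fractional solar power in phi emission, Eq. (P_phi ~ 3/2 T R_create)."""
    R_create = 1.0e42 / Lambda_TeV**2      # total creation rate /s (from text)
    P_phi = 1.5 * T_escape_keV * R_create  # keV/s
    return P_phi / P_SUN_KEV_S
```

With these inputs the ratio drops to the percent level around $\Lambda \sim 3 \, {\rm TeV}$, in line with the bound quoted above.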
We still need to verify that assumption (d) holds. Evaluating the rate of $\phi$ annihilation in the sun requires us to solve for the equilibrium number density $n_\phi(R)$. We can construct the associated diffusion equation with the information assembled here, but numerically solving it is beyond the scope of this work. However, we can make a ball-park estimate of the total equilibrium $\phi$ population by noting that the time taken for a single $\phi$ to escape is dominated by the time taken to diffuse from the dense core:
\begin{equation}
t_\phi^\mathrm{escape} \sim \frac{R_\mathrm{core}^2}{c L_\phi(0)} \approx (2 \times 10^4 \mathrm{\ s})\left(\frac{\, {\rm TeV}}{\Lambda}\right)^2
\end{equation}
where $L_\phi(0) \approx 5 \times 10^{-6} R_\mathrm{star} (\Lambda/\, {\rm TeV})^2$ (see \fref{SUNLphiLambda10tev}). This means $N_\phi$, the equilibrium total number of $\phi$'s in the sun, is approximately given by solving
\begin{equation}
\label{e.solardNphi}
\frac{dN_\phi}{dt} = \mathcal{R}^\mathrm{create}_\phi - \frac{N_\phi}{t_\phi^\mathrm{escape}} = 0,
\end{equation}
which gives
\begin{equation}
N_\phi \sim (2 \times 10^{46}) \ \left(\frac{\, {\rm TeV}}{\Lambda}\right)^4
\end{equation}
Assuming all these $\phi$'s live in the core, the corresponding number density is
\begin{equation}
\label{e.nphiequilibriumsun}
n_\phi \sim (2 \times 10^{15} \ \mathrm{cm}^{-3}) \ \left(\frac{\, {\rm TeV}}{\Lambda}\right)^4
\end{equation}
Consulting \fref{crosssectionphilowE} and comparing with number densities $n_\gamma \sim 10^{22} \ \mathrm{cm}^{-3}$ and $n_N \sim 10^{25} \ \mathrm{cm}^{-3}$ in the core, it is clear that the $\phi$ annihilation rate
\begin{equation}
r_\phi^\mathrm{annihilation} = 2 c n_\phi^2 \sigma_{\phi \phi \to \gamma \gamma}
\end{equation}
is completely negligible compared to the creation rate in \eref{rphicreatesun}.
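The equilibrium population estimate above amounts to two lines of arithmetic, reproduced here to make the scalings explicit (numbers from \eref{solardNphi} and the surrounding text):

```python
import math

def N_phi_eq(Lambda_TeV):
    """Equilibrium number of phi in the sun: dN/dt = 0 gives N = R_create * t_escape."""
    R_create = 1.0e42 / Lambda_TeV**2   # /s
    t_escape = 2.0e4 / Lambda_TeV**2    # s
    return R_create * t_escape          # ~ 2e46 (TeV/Lambda)^4

R_CORE_CM = 0.2 * 6.96e10
V_CORE_CM3 = 4.0 / 3.0 * math.pi * R_CORE_CM**3  # ~ 1.1e31 cm^3

def n_phi_eq(Lambda_TeV):
    """Equilibrium phi number density, assuming all phi live in the core."""
    return N_phi_eq(Lambda_TeV) / V_CORE_CM3     # ~ 2e15 (TeV/Lambda)^4 /cm^3
```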
\begin{figure}
\begin{center}
\includegraphics[width=7cm]{radialprofiles_sunWD_phiPowerFractionLambdaTeV}
\end{center}
\caption{
Power of emitted $\phi$ radiation as a fraction of total star power output for the Sun (solid) and our benchmark white dwarf with 0.1 solar luminosity (dashed).
}
\label{f.SUNLambdaLimit}
\end{figure}
To make sure the sun is not disturbed by $\phi$ production we also have to ensure that radiative heat transfer, which dominates the core and radiative zone, is relatively unaffected. The radiative heat transfer due to $\phi$ should be compared to the photon heat flux \cite{Raffelt:1988rx}:
\begin{equation}
\frac{F_\phi}{F_\gamma} \sim \frac{n_\phi L_\phi}{n_\gamma L_\gamma}.
\end{equation}
Substituting \eref{nphiequilibriumsun}, $n_\gamma(T_\mathrm{sun}(0))$, $L_\phi(0)$ as well as $L_\gamma \sim 10^{-2}$ cm (see \fref{solarprofiles} in \aref{radialprofiles}), we obtain the following heat transfer ratio in the core:
\begin{equation}
\frac{F_\phi}{F_\gamma} \sim 1 \times \left(\frac{\, {\rm TeV}}{\Lambda}\right)^2.
\end{equation}
While this estimate is crude it does yield an important constraint,
\begin{equation}
\Lambda \gtrsim 10 \, {\rm TeV},
\end{equation}
which is significantly stronger than the bound from energy loss due to $\phi$ emission.
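The heat-transfer estimate can likewise be reproduced from the quoted core quantities; it comes out within an order of magnitude of the $\sim 1 \times ({\rm TeV}/\Lambda)^2$ quoted above, as expected for such a crude estimate:

```python
R_STAR_CM = 6.96e10

def heat_flux_ratio(Lambda_TeV):
    """F_phi / F_gamma ~ (n_phi L_phi) / (n_gamma L_gamma) in the solar core,
    using the core numbers quoted in the text."""
    n_phi = 2.0e15 / Lambda_TeV**4               # /cm^3
    L_phi = 5.0e-6 * R_STAR_CM * Lambda_TeV**2   # cm
    n_gamma = 1.0e22                             # /cm^3
    L_gamma = 1.0e-2                             # cm
    return (n_phi * L_phi) / (n_gamma * L_gamma)
```

Note the net $\Lambda^{-2}$ scaling: the $\Lambda^{-4}$ of the equilibrium density is partially compensated by the $\Lambda^{2}$ of the mean free path.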
Two final remarks are in order. Firstly, as mentioned above, \fref{SUNRescapeTescapetev} shows that assumption (c) of trapped and thermalized $\phi$'s starts breaking down when $\Lambda \gtrsim 100 \, {\rm TeV}$. In that case $\phi$ no longer contributes to radiative heat transfer, while the lost power due to $\phi$ emission is roughly $P_\phi \sim T_\mathrm{core} \mathcal{R}_\phi^\mathrm{create} \sim (10^{-5} P_\mathrm{sun}) (100 \, {\rm TeV} / \Lambda)^2$. The free-streaming regime in the sun therefore sets no constraints on $\Lambda$.
Secondly, we also point out that there is a sub-population of protons and photons with $E \sim 10 \, {\rm MeV}$ produced by fusion reactions in the sun, but the total rate of fusion reactions $R_\mathrm{fusion} \approx 3.6 \times 10^{38} \ \mathrm{s}^{-1}$ is many orders of magnitude too low for this subpopulation to affect our estimates.
\subsection{White Dwarf Cooling}
\label{sss.whitedwarfcooling}
White dwarfs (WD) represent the evolutionary endpoint of stars up to several solar masses. They are supported by electron degeneracy pressure, which largely decouples their hydrostatic and thermal properties and results in a strong relationship between their mass and radius. Since white dwarfs do not support fusion processes in their cores they simply cool down after they are formed, with observable luminosities ranging from 0.5 to $\sim 10^{-4} \mathcal{L}_\mathrm{sun}$, corresponding to core temperatures of around $10$ to $0.1 \, {\rm keV}$ (about $10^8$ and $10^6$ K) \cite{raffelt1996}.
Their relative simplicity makes white dwarfs suitable for constraining new physics with light particles (see e.g. \cite{Dreiner:2013tja, raffelt1996}). Unlike the Sun, where we have a single well-studied star to compare predictions to, white dwarf cooling is constrained by the \emph{White Dwarf Luminosity Function} (WDLF), which is the number of observed WDs at different luminosities, see \fref{WDLF}. For reasonable assumptions about the star formation rate, the shape of this WDLF curve is given entirely by the WD cooling rate \cite{raffelt1996}.
The large central density of white dwarfs $\rho_{WD} \sim 10^{6} \ \mathrm{g} \ \mathrm{cm}^{-3}$ means $\phi$'s can be copiously produced, but also thermalize completely before diffusing out of the star. This makes their emission somewhat sensitive to details of WD stellar structure, unlike e.g. free-streaming axions. Comprehensively studying the constraints on the $\frac{1}{\Lambda} q \bar q \phi \phi^*$ operator set by WD cooling would therefore require modeling a representative WD population, which is beyond the scope of this work.
Fortunately, the WD population in our stellar neighborhood is strongly peaked around $0.5 - 0.7$ solar masses \cite{raffelt1996, DeGennaro:2007yw}. This means we can obtain a preliminary estimate of the bound on $\Lambda$ by studying a single star in this representative mass range.
Our benchmark dwarf (about 0.5 solar masses) started its life as a roughly one solar mass main sequence star that was evolved forward in time using the stellar evolution code \texttt{MESA} \cite{mesa}.\footnote{We thank Max Katz, who performed the simulation for us.} Most of the observational data in the WDLF is for luminosities $\lesssim 0.1 \mathcal{L}_\mathrm{sun}$, which corresponds to a bolometric magnitude $M_\mathrm{bol}> 7$. Photon cooling, well-described by Mestel's Law \cite{Mestel}, dominates for such cool white dwarfs. We therefore compute $\phi$-cooling in our benchmark dwarf for $\mathcal{L}_\mathrm{WD} < 0.1 \mathcal{L}_\mathrm{sun}$.
Since the degenerate electron gas in WD cores is an excellent conductor of heat, radiative heat transfer is unimportant. We therefore only compute the total power loss due to $\phi$ emission, in an identical manner to the previous subsection. As we will see, assumptions (a) - (d) are satisfied throughout as long as $\Lambda$ is large enough. Radial profiles of density, composition and temperature produced by \texttt{MESA} for our benchmark dwarf are shown in \aref{radialprofiles}.
The $\phi$ creation rate per volume is shown in \fref{SUNrphicreateLambda10tev} for $\Lambda = 10 \, {\rm TeV}$ when the white dwarf has 0.1 solar luminosity. Due to the similar temperature but larger density, it is 5 orders of magnitude higher than in the sun. The $p p \to p p \phi \phi^*$ process is still strongly temperature-suppressed. Figs. \ref{f.SUNLphiLambda10tev} and \ref{f.SUNRescapeTescapetev} show that $\phi$'s do not escape until they are very close to the white dwarf surface. The resulting power loss is shown in \fref{SUNLambdaLimit}, and $\phi$ emissivities are compared to known photon and neutrino emissivities in \fref{WDcoolingluminosities}. To a reasonable approximation,
\begin{equation}
\label{e.WDephi}
\epsilon_\phi \approx 1.5 \times 10^{-2} \left( \frac{\, {\rm TeV}}{\Lambda}\right)^2 \left( \frac{T}{10^7 K}\right)^{11/5} \mathrm{erg} \ \mathrm{s}^{-1} \mathrm{g}^{-1}.
\end{equation}
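For convenience, the fit of \eref{WDephi} in code form, with its $\Lambda^{-2}$ and $T^{11/5}$ scalings made explicit:

```python
def eps_phi_WD(T_K, Lambda_TeV):
    """Fitted white dwarf phi emissivity in erg / s / g."""
    return 1.5e-2 * (1.0 / Lambda_TeV)**2 * (T_K / 1.0e7)**(11.0 / 5.0)
```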
Demanding that $\phi$ emission represent a subdominant 10\% fraction of the total WD luminosity gives $\Lambda \gtrsim 40 \, {\rm TeV}$, but as it turns out the actual bound on $\Lambda$ from the white dwarf luminosity function is significantly less constraining. We now compute this bound following the procedure in \cite{raffelt1996}.
The white dwarf loses internal energy $U$ with time due to emission of photons, neutrinos and (in our case) $\phi$'s, so that $dU/dt = - (L_\gamma + L_\nu + L_\phi)$, where $L_\gamma$ is the total photon luminosity of the star.
Assuming a constant star formation rate $B$, the number density of white dwarfs in a given magnitude interval is proportional to the time it takes to cool through that interval, so
\begin{equation}
\frac{d N}{d M_\mathrm{bol}} = B \frac{dt}{d M_\mathrm{bol}} = - B \frac{dU/d M_\mathrm{bol}}{L_\gamma + L_\nu + L_\phi}.
\end{equation}
For a white dwarf, the photon emissivity can be computed using Kramers' opacity law:
\begin{equation}
\epsilon_\gamma \approx 3.3 \times 10^{-3} \left( \frac{T}{10^7 K}\right)^{7/2} \mathrm{erg} \ \mathrm{s}^{-1} \mathrm{g}^{-1}.
\end{equation}
Bolometric magnitude gives the photon luminosity relative to the sun, $\log_{10}(L_\gamma/L_\mathrm{sun}) = (4.74 - M_\mathrm{bol})/2.5$. Therefore $T \propto 10^{-4 M_\mathrm{bol}/35}$ and we get
\begin{equation}
\label{e.WDLF}
\log_{10} \left[ \frac{d N}{d M_\mathrm{bol}}\right] = \mathrm{C} + \frac{2}{7} M_\mathrm{bol} + \log_{10}\left[ \frac{\epsilon_\gamma}{\epsilon_\gamma + \epsilon_\nu + \epsilon_\phi}\right],
\end{equation}
where we have absorbed details of the star formation rate and white dwarf heat capacity into the constant $C$. (When comparing to observational data it is conventional to normalize this constant to the data point with the smallest uncertainty.) For pure photon cooling, this reduces to the well-known Mestel's cooling law~\cite{Mestel}, shown as the black dashed line in \fref{WDLF}. This already gives a reasonable fit, but full simulations (thin green curve) are needed to account for the observed data in detail.
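A minimal numerical sketch of \eref{WDLF}: the normalization $T(M_\mathrm{bol} = 8) = 10^7$ K below is an assumption made only to obtain a concrete toy temperature-magnitude mapping, and neutrino cooling is neglected:

```python
import math

def T_of_Mbol(M_bol, T8=1.0e7):
    """Toy temperature-magnitude relation, T propto 10^(-4 M_bol / 35),
    normalized (by assumption) to 1e7 K at M_bol = 8."""
    return T8 * 10.0 ** (-4.0 * (M_bol - 8.0) / 35.0)

def eps_gamma(T_K):
    """Kramers photon emissivity, erg / s / g."""
    return 3.3e-3 * (T_K / 1.0e7) ** 3.5

def eps_phi(T_K, Lambda_TeV):
    """Fitted phi emissivity, erg / s / g."""
    return 1.5e-2 / Lambda_TeV**2 * (T_K / 1.0e7) ** 2.2

def log_dN_dMbol(M_bol, Lambda_TeV, C=0.0):
    """log10 of the WDLF: Mestel's law plus the phi-cooling correction."""
    T = T_of_Mbol(M_bol)
    eg, ep = eps_gamma(T), eps_phi(T, Lambda_TeV)
    return C + (2.0 / 7.0) * M_bol + math.log10(eg / (eg + ep))
```

In the large-$\Lambda$ limit the correction vanishes and the slope $2/7$ of Mestel's law is recovered; at small $\Lambda$ the extra cooling depletes the predicted number density.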
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{WD_emissivities.pdf}
\end{center}
\caption{
Photon (short-dashed) and neutrino (long-dashed) emissivities for typical white dwarfs with $\rho_\mathrm{core} = 10^6 \ \mathrm{g} \ \mathrm{cm}^{-3}$ as a function of core temperature \cite{raffelt1996,Mestel}. Solid lines indicate $\phi$ emissivities obtained for our benchmark dwarf.
}
\label{f.WDcoolingluminosities}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{WDLF_Lambdatev_1_5_10}
\end{center}
\caption{
Green and magenta datapoints: measured white dwarf luminosity function from \cite{Harris:2005gd} and \cite{Krzesinski}, showing the number density of observed white dwarfs per bolometric magnitude interval per $\mathrm{pc}^{3}$. The gray band indicates how much the WDLF of \cite{Harris:2005gd} would change by varying the assumed scale height of the galactic disk between 200 and 350 pc.
Green line: WDLF from a full simulation, assuming constant star formation rate, taken from \cite{Dreiner:2013tja}. Black dashed line: Mestel's cooling law (pure photon cooling). Colored solid lines: modification of Mestel's law due to additional $\phi$ cooling for $\Lambda = 1, 5$ and $10 \, {\rm TeV}$. All cooling curves have been shifted to pass through the datapoint with the smallest uncertainty.
}
\label{f.WDLF}
\end{figure}
For the range of core temperature we consider, neutrino cooling can be neglected. Using the $\phi$ emissivities of our simulated benchmark dwarf in \eref{WDLF}, we obtain the modifications to Mestel's cooling law shown in \fref{WDLF} for different values of $\Lambda$. Given the crudeness of our estimate a full fit to the data is not appropriate. However, we can estimate the \emph{sensitivity} of a full stellar simulation to $\phi$ cooling by the size of the deviation from Mestel's law.
Given the scale of astrophysical uncertainty in the WDLF (illustrated by the gray band in \fref{WDLF}), a reasonable rough bound on the allowed modification to standard white dwarf cooling is
\begin{equation}
\Lambda \gtrsim 10 \, {\rm TeV}.
\end{equation}
The effect of $\phi$ cooling is more pronounced for young, hot white dwarfs (smaller $M_\mathrm{bol}$).
A more thorough study, including full simulation of $\phi$ cooling throughout the life of the white dwarf, might therefore give a somewhat more stringent bound on $\Lambda$. However, as we see in the next section, a much stronger constraint is supplied by neutron star cooling.
Finally, one might worry about $\phi$ being produced in electron collisions or plasmon decay via its loop-induced coupling to the $Z$-boson, see \eref{zphiphi}. However, this coupling is suppressed by a factor $\sim10^{-3} \, (10\, {\rm TeV}/\Lambda)$ relative to the equivalent tree-level electroweak coupling. According to the discussion in \cite{Dreiner:2013tja},
$\phi$ emission from the plasmon decay and electron Bremsstrahlung is therefore $\mathrel{\ltap} 10^{-8}(10\, {\rm TeV}/\Lambda)^2\mathrm{erg} \ \mathrm{s}^{-1} \mathrm{g}^{-1}$ at $T \sim 4 \times 10^7$ K, which is much lower than the nuclear production discussed above.
\subsection{Neutron Star Cooling}
\label{sss.neutronstarcooling}
Neutron stars are the evolutionary endpoint for heavy stars that do not collapse to a black hole. They are supported by neutron degeneracy pressure and constitute the densest form of matter in the universe. This introduces many subtleties into their cooling processes, which are not yet fully understood even in the Standard Model (see e.g. \cite{Yakovlev:2004iq,
Yakovlev:2007vs,
Page:2005fq,
Page:2009fu,
Yakovlev:2010ed,
Page:2010aw,
Shternin:2010qi,
Potekhin:2011xe,
Page:2012se,
Elshamouty:2013nfa}). However, neutron stars are such powerful ``$\phi$-factories'' in dmDM that we can still set very strong constraints despite these uncertainties.
Neutron stars are born in hot supernova explosions with $T \sim 10^{11}\mathrm{K} \simeq 10 \, {\rm MeV}$ but quickly cool down and enter the neutrino cooling phase when their internal temperature reaches about $T \sim 10^9 \mathrm{K} \sim 100$ keV (see e.g. \cite{Yakovlev:2004iq} for a review). Neutrino cooling dominates for $\sim 10^5$ years, after which photon cooling takes over. For a given equation of state, the mass of the neutron star fixes both the radius and density profile. The radius is about 10 km, while the central density is $\rho \sim 2 - 10 \times \rho_0$, where $\rho_0 \approx 2.8 \times 10^{14} \ \mathrm{g} \ \mathrm{cm}^{-3}$ is the density of nuclear matter at saturation.
The neutron star core extends to about 1km below the surface and is divided into an inner core ($\rho \gtrsim 2 \rho_0$) and an outer core ($\rho \lesssim 2 \rho_0$). (Light neutron stars with $M \lesssim 1.3 M_\mathrm{sun}$ do not have an inner core.) The characteristics of the outer core are well constrained by nuclear theory and laboratory data, while the inner core is much less well understood, with hypotheses for its composition ranging from normal nuclear matter to hyperons, pion or kaon condensates, or a pure quark fluid (called `strange quark matter' due to the presence of $s$ quarks). However, the recent observation of a 2 solar mass neutron star \cite{Demorest:2010bx} is in conflict with all core hypotheses other than normal nuclear matter, which provides the only equation of state `stiff' enough to support such large masses. We shall therefore only consider neutron stars with nucleon cores.
The neutron star is surrounded by an outer crust of thickness $\sim$ few 100 m, consisting of a \emph{non-degenerate} neutron gas with characteristic density of order $\rho_N \sim 4 \times 10^{11} \ \mathrm{g} \ \mathrm{cm}^{-3}$. During the neutrino cooling phase the outer crust acts as a heat blanket, thermally insulating the neutron star interior against radiative losses into space. For a nonmagnetic iron envelope the surface temperature of the star can be related to the interior temperature by \cite{Gundmundsson1,Gundmundsson2,Page:2004fy}
\begin{equation}
T_\mathrm{surface} = (0.87 \times 10^{6}\mathrm{K}) \left(\frac{g_s}{10^{14} \mathrm{cm/s}^2}\right)^{1/4} \left( \frac{T_\mathrm{core}}{10^8 \mathrm{K}}\right)^{0.55},
\label{e.Tsurface}
\end{equation}
where $g_s = G M / R_\mathrm{star}^2$ is the surface gravity.
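Evaluating \eref{Tsurface} for the heavy benchmark star of \tref{NSbenchmarks} ($M = 2 M_\mathrm{sun}$, $R = 11$ km) gives a surface temperature of $\approx 10^6$ K for $T_\mathrm{core} = 10^8$ K:

```python
G_CGS = 6.674e-8    # gravitational constant, cm^3 g^-1 s^-2
M_SUN_G = 1.989e33  # solar mass in g

def T_surface(T_core_K, M_over_Msun=2.0, R_km=11.0):
    """Blanket relation: surface temperature in K for a nonmagnetic iron envelope."""
    g_s = G_CGS * M_over_Msun * M_SUN_G / (R_km * 1.0e5) ** 2  # surface gravity, cm/s^2
    return 0.87e6 * (g_s / 1.0e14) ** 0.25 * (T_core_K / 1.0e8) ** 0.55
```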
The inner crust has a thickness of $\sim$ 1km and forms the transition between the heat blanket and the core. The thermal conductivity of nuclear matter is so high that the interior below the blanket is close to isothermal.
Late-time cooling is constrained by $\sim 20$ observations of neutron stars for which both surface temperature and age could be determined, see green data points in \fref{neutronstarcoolingcurves}. The mass of an individual neutron star, which is not known in the dataset, determines the cooling curve $T_\mathrm{surface}(M; t)$. Different cooling models can be excluded by the requirement that the observed data points fall into the range of allowed cooling curves. For non-superconducting neutron stars with non-magnetic iron heat blankets in the Standard Model this range is illustrated with the two blue dashed lines in \fref{neutronstarcoolingcurves} \cite{Yakovlev:2007vs}. Accretion of light elements in the crust and the presence of strong magnetic fields at the surface would increase the thermal conductivity of the outer neutron star layers, increasing $T_\mathrm{surface}$ by a factor of a few during the neutrino cooling phase. Furthermore, the core may be in different phases of neutron and/or proton superfluidity, which can affect the surface temperature by at least a similar factor. Nevertheless, quite stringent constraints on $\phi$-cooling can be obtained from light, slow-cooling neutron stars.
It is necessary to understand how the standard range of allowed cooling curves changes when $\phi$ emission is included. We will therefore estimate first the emissivity $\epsilon_\phi(T_\mathrm{core})$ and then the cooling curves $T_\mathrm{surface}(t)$ for a light, slow-cooling neutron star and a heavy, fast-cooling neutron star, which bounds the range of allowed cooling curves. The relevant parameters of our benchmark stars are given in \tref{NSbenchmarks}.
\begin{table}
\begin{center}
\begin{tabular}{| c | c | c | c | }
\hline
$M/M_\mathrm{sun}$ & $R$ (km) & $R_\mathrm{core}$ (km) & $\rho_\mathrm{core}/\rho_0$\\
\hline
1.1 & 13 & 11 & 2\\
2.0 & 11 & 10 & 10\\ \hline
\end{tabular}
\end{center}
\caption{
Parameters of light and heavy neutron stars to determine the range of allowed cooling curves in dmDM. $\rho_\mathrm{core}$ is the central density. Adapted from~\cite{Yakovlev:2004iq}, which assumed nucleon cores.
}
\label{t.NSbenchmarks}
\end{table}
We model the neutron star core as a sphere of constant temperature and density. Assume for the moment that the annihilation process $\phi \phi^* \to \gamma \gamma$ can be ignored, and that $\phi$ is free-streaming in both the core and the crust. In that case, the $\phi$ emissivity is given simply by
\begin{equation}
\epsilon_\phi \sim r_\phi^\mathrm{create} T_\mathrm{core}.
\end{equation}
$\phi$ creation proceeds via the process $n n \to n n \phi \phi^*$. Here we have to take into account Pauli-blocking: since the neutrons in the core are strongly degenerate, only the subpopulation living on the Fermi surface can participate in reactions, and furthermore the phase space of reactions is suppressed since neutrons cannot scatter into the occupied Fermi volume. The neutron Fermi energy $E_F = \hbar^2 (3 \pi^2 n_n)^{2/3}/(2 m_n)$ is 95 MeV (280 MeV) for $\rho = 2 \rho_0$ $(10 \rho_0)$. The fraction of neutrons on the Fermi surface is roughly $T/E_F$, so we define the number density of `available neutrons' (with kinetic energy $\approx E_F$) as
\begin{equation}
n_{n_F} \sim n_n \frac{T}{E_F}.
\end{equation}
This gives
\begin{equation}
r_\phi^\mathrm{create} \ \sim \ 2 \ c \ n_{n_F}^2 \ \sigma_{n n \to n n \phi \phi^*}^F.
\end{equation}
$\sigma_{n n \to n n \phi \phi^*}^F$ is much smaller than $\sigma_{n n \to n n \phi \phi^*}$ from \fref{crosssectionphilowE} due to Pauli blocking: two neutrons with kinetic energy $\sim E_F$ interact softly to produce two $\phi$'s with energy $\lesssim T$ so that their final energies remain on the Fermi surface. We can roughly estimate this phase space suppression $\zeta(E_F, T)$ using MadGraph, shown in \tref{fermixsectable}. This gives
\begin{eqnarray}
\nonumber \sigma_{n n \to n n \phi \phi^*}^F &\sim& \sigma_{n n \to n n \phi \phi^*}(E_F) \times \zeta(E_F, T)
\\
\nonumber&\sim& \sigma_\mathrm{prod}^0 \ \left(\frac{\, {\rm TeV}}{\Lambda}\right)^2 \ \left(\frac{T_\mathrm{core}}{\, {\rm keV}}\right)^2
\\
\label{e.NSsigmaprod}
\end{eqnarray}
where $\sigma_\mathrm{prod}^0 = \mathcal{B}_\mathrm{prod} \ \times \ 7 \times 10^{-9} \ \mathrm{pb}$ and $\mathcal{B}_\mathrm{prod} = 0.3 \ - \ 3$ is a parameter we vary to account for the uncertainty of this estimate. Interestingly the cross section is constant up to a factor of $\approx 2$ for $E_F$ in the range of 95 to 280 MeV, so we absorb this $\rho_\mathrm{core}$ dependence into the uncertainty.
\begin{table}
\begin{center}
\begin{tabular}{| c | c | c | c | }
\hline
$\rho_\mathrm{core}/\rho_0$ & $E_F$ (MeV) & $\sigma_{n n \to n n \phi \phi^*} $ (pb) & $\zeta(E_F, T)$\\
\hline
$2$ & 95 & $60 \left( \frac{\, {\rm TeV}}{\Lambda}\right)^2$ & $1 \times 10^{-10} \left(\frac{T}{\, {\rm keV}}\right)^2$ \\
\hline
$10$ & 280 & $600 \left( \frac{\, {\rm TeV}}{\Lambda}\right)^2$ & $5 \times 10^{-12} \left(\frac{T}{\, {\rm keV}}\right)^2 $ \\
\hline
\end{tabular}
\end{center}
\caption{
Cross section of $n n \to n n \phi \phi^*$ for two neutrons both with kinetic energy $E_F$, computed in MadGraph in the one-pion exchange approximation \cite{steigman}. The third column gives the phase space suppression when requiring both final state neutrons to have kinetic energy in the range $(E_F - T, E_F + T)$. All quantities are understood to be $\sim$ estimates.
}
\label{t.fermixsectable}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{neutronstaremissivities_lightslowcoolingNS}
\end{center}
\caption{
$\phi$ emissivity $\epsilon_\phi(T_\mathrm{core})$ for different $\Lambda$ in the light neutron star defined in \tref{NSbenchmarks}. The allowed range for each $\Lambda$ comes from the production cross section uncertainty in \eref{NSsigmaprod}.
Also shown is slow neutrino emission (dashed) via the modified Urca process, which occurs in all neutron star cores \cite{Shapiro:1983du}, and the effective emissivity from photon emission~\cite{Gundmundsson1,Gundmundsson2,Page:2004fy} (long-dashed).
}
\label{f.neutronstaremissivities}
\end{figure}
Defining $\tilde E_F \approx 95 \, {\rm MeV}$ and $n_n^0 \approx 3.3 \times 10^{38} \mathrm{cm}^{-3}$ to be the Fermi energy and neutron number density when $\rho_\mathrm{core} = 2 \rho_0$, and specifying the actual number density via the dimensionless ratio $\tilde n_n = {n_n}/{n_n^0}$,
we obtain
\begin{eqnarray}
\nonumber
\epsilon_\phi &=&
\left[ 2 c (n_n^0)^2 \left(\frac{\, {\rm keV}}{\tilde E_F}\right)^2 \right] \sigma_\mathrm{prod}^0 \tilde n_n^{2/3} \left(\frac{\, {\rm TeV}}{\Lambda}\right)^2 \frac{T_\mathrm{core}^5}{(\, {\rm keV})^4} \\
\label{e.ephiNS}
\end{eqnarray}
This is shown in \fref{neutronstaremissivities} for the light neutron star as a function of core temperature, and compared to the effective emissivity from neutrino and photon emission. Requiring that $\phi$-cooling be subdominant to standard cooling mechanisms in the light neutron star sets the strong constraint $\Lambda \gtrsim 10^8 \, {\rm TeV}$. The constraint derived from the heavy neutron star is much weaker, since the powerful direct Urca neutrino emission process is active when the central density is $\rho_\mathrm{core} \gtrsim 2 \rho_0$ \cite{Lattimer:1991ib}.
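The emissivity of \eref{ephiNS} in code form; since the absolute normalization carries the full production-cross-section uncertainty $\mathcal{B}_\mathrm{prod}$, only the $\Lambda^{-2}$ and $T^5$ scalings should be trusted:

```python
C_CM_S = 3.0e10          # cm/s
N_N0 = 3.3e38            # neutron number density at rho_core = 2 rho_0, /cm^3
E_F_TILDE_KEV = 95.0e3   # tilde E_F = 95 MeV, in keV
SIGMA0_CM2 = 7.0e-45     # sigma_prod^0 central value: 7e-9 pb in cm^2

def eps_phi_NS(T_keV, Lambda_TeV, n_tilde=1.0, B_prod=1.0):
    """Neutron star phi emissivity per volume, Eq. (ephiNS), in keV / cm^3 / s.
    B_prod = 0.3 - 3 parametrizes the production cross section uncertainty."""
    prefac = 2.0 * C_CM_S * N_N0**2 * (1.0 / E_F_TILDE_KEV) ** 2
    return (prefac * B_prod * SIGMA0_CM2 * n_tilde ** (2.0 / 3.0)
            * (1.0 / Lambda_TeV) ** 2 * T_keV**5)
```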
We have checked that for $\Lambda \gtrsim 10^4 \, {\rm TeV}$, the equilibrium $\phi$ density in the neutron star is indeed small enough to render the annihilation process $\phi \phi^* \to \gamma \gamma$ irrelevant. Furthermore, $\phi$ becomes free-streaming in the crust (core) when $\Lambda \gtrsim 5000 \, {\rm TeV}$ ($500 \, {\rm TeV}$). This validates the assumptions of our estimate, and allows us to circumvent the subtleties of $\phi$-scattering inside the neutron star core and crust (see \cite{Bertoni:2013bsa} for some of the involved issues).
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{neutronstarcoolingcurves}
\end{center}
\caption{
Green data points: surface temperature and age of observed neutron stars \cite{Yakovlev:2007vs}. Blue dotted lines: cooling curves for heavy and light non-superconducting neutron stars with non-magnetic iron heat blankets \cite{Yakovlev:2007vs}. Solid black lines: our corresponding estimate of these cooling curves using \eref{ephiNS} and \eref{coolingcurveNS}. Orange contours: estimate of the light neutron star cooling curve with $\phi$ emission for $\Lambda = 10^{6}, 10^{7}, 10^{8} \, {\rm TeV}$ and $\mathcal{B}_\mathrm{prod} = 0.3$. (For $\Lambda >10^6 \, {\rm TeV}$, the cooling curve for the heavy NS does not change perceptibly.) In all our estimates we multiplied $T_\mathrm{surface}(t)$ by 0.6 (0.2 units on vertical axis) to bring them into better agreement with the full calculation by \cite{Yakovlev:2007vs}.
}
\label{f.neutronstarcoolingcurves}
\end{figure}
We can explicitly demonstrate the effect of $\phi$ emission on neutron star cooling. Following \cite{Kouvaris:2007ay}, a reasonable estimate of the cooling curve can be obtained by solving the differential equation
\begin{equation}
\label{e.coolingcurveNS}
\frac{d T_\mathrm{core}}{d t} = - \frac{\epsilon_\nu + \epsilon_\gamma + \epsilon_\phi}{c_V},
\end{equation}
where the specific heat for a gas of non-interacting fermions is
\begin{equation}
c_V = \frac{k_B^2 T_\mathrm{core}}{3 \hbar^3 c} \sum_{i = n,p,e} p_F^i \sqrt{m_i^2 c^2 + (p_F^i)^2},
\end{equation}
and the Fermi momenta are $p_F^n = (340 \, {\rm MeV}) (2 \tilde n_n)^{1/3}$ and $p_F^{p,e} = (60 \, {\rm MeV}) (2 \tilde n_n)^{2/3}$. The surface temperature is then approximately given by \eref{Tsurface}.
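Before applying \eref{coolingcurveNS} with the full emissivities, it is useful to check the integrator on a pure power-law case (a modified-Urca-like $\epsilon \propto T^8$ with degenerate-gas $c_V \propto T$, giving $dT/dt = -k T^7$), for which the cooling curve is known in closed form; all coefficients below are placeholders:

```python
def cool(T0, k, t_end, n_steps=200000):
    """Forward-Euler integration of dT/dt = -k T^7 (dimensionless toy units)."""
    dt = t_end / n_steps
    T = T0
    for _ in range(n_steps):
        T += -k * T**7 * dt
    return T

def cool_exact(T0, k, t):
    """Closed-form solution: T(t) = T0 (1 + 6 k T0^6 t)^(-1/6)."""
    return T0 * (1.0 + 6.0 * k * T0**6 * t) ** (-1.0 / 6.0)
```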
The resulting cooling curves for the heavy and light neutron star are shown in \fref{neutronstarcoolingcurves}. (Since we are interested in the effect of introducing $\phi$-cooling \emph{compared to} the standard scenario, we multiply all our $T_\mathrm{surface}(t)$ by 0.6 to bring our estimates into better agreement with complete cooling calculations. This corresponds to a uniform downward shift of 0.2 units on the vertical axis of \fref{neutronstarcoolingcurves}.)
The heavy neutron star cooling curve is not visibly affected for $\Lambda \gtrsim 10^6 \, {\rm TeV}$. Avoiding modifications to the light neutron star cooling curve much larger than the plausible effects of surface accretion, magnetic fields, and the likely presence of a superfluid component in the core \cite{Yakovlev:2004iq} requires
\begin{equation}\label{e.NSbound}
\Lambda \gtrsim 10^8 \, {\rm TeV}.
\end{equation}
This confirms our earlier estimate of the constraint. Light neutron stars therefore supply a very strong bound on $\Lambda$ in dmDM for $n_\phi = 1$. However, as we shall see in \sref{phisummary}, this constraint is easily circumvented when $n_\phi = 2$.
\vspace{3mm}
\section{Other Constraints on $\phi$}
\label{s.constraints}
While stellar astrophysics provides the most impressive constraints on the operator $\bar q q \phi \phi^*/\Lambda$, cosmology and LHC searches bound the dmDM parameter space in complementary directions.
We find that all cosmological constraints are satisfied as long as the only \emph{stable} dark mediator is lighter than $\sim \, {\rm eV}$. LHC searches provide a constraint of $\Lambda \gtrsim$ few TeV that does not depend strongly on $n_\phi$ or $m_\phi$. Flavor physics bounds could restrict the allowed SM flavor structure of the coupling in \eref{qqphiphi}, but we avoid those constraints by making the operator diagonal in the SM quark mass basis.
Dark mediators can also be probed, in principle, using fixed target experiments, precision electroweak measurements or indirect detection of dark matter annihilation. However, as discussed in Appendix~\ref{a.otherconstraints}, these measurements yield no meaningful constraints.
\subsection{LHC Searches}
\label{ss.colliderbounds}
The LHC is sensitive not just to the effective coupling in \eref{qqphiphi} but also to the UV completion of dmDM. We therefore analyze constraints in terms of the dark vector quark model of \ssref{uvcompletion}.
The di-jet + MET search by CMS \cite{CMS:2013gea} is sensitive to on-shell production of two heavy vector-like quarks via the process $p p \rightarrow \psi_Q \bar \psi_Q \rightarrow \phi \phi^* j j$. The constraint is straightforward to apply in our model, since the vector quarks are produced by gauge interactions. The resulting bound is
\begin{equation}
M_Q > 1.5 \, {\rm TeV}.
\end{equation}
The 20/fb CMS mono-jet search \cite{ATLAS:2012zim} is sensitive to the processes
\begin{equation}
p\,p\to q^* \to \phi\,\bar{\psi}_{Q,q}\to\phi\phi^* \,j,\quad p\,p\to \phi\,\phi^*+\rm{ISR}
\end{equation}
by performing a counting experiment in different missing energy bins. We simulated the dmDM signal expectation in MG5$+$Pythia 6.420$+$PGS4 \cite{Alwall:2011uj,Sjostrand:2006za} and validated our simulations by reproducing the CMS $jZ(\nu\bar{\nu})$ background prediction with an overall scaling factor of $K = 1.4$. The same scaling factor was also applied to the dmDM signal. The resulting $95\%$ CL lower bound on the $\bar q q \phi \phi^*/\Lambda$ operator depends on whether the intermediate dark vector quark is produced on-shell:
\begin{equation}
\Lambda_\mathrm{eff} \gtrsim 2 \ (6.6) \ \, {\rm TeV} \ \ \ \mathrm{for} \ \ \ M_Q = 4 \ (1.5) \ \, {\rm TeV},
\end{equation}
where $\Lambda_\mathrm{eff} = \Lambda$ for $n_\phi = 1$. For $n_\phi > 1$,
\begin{equation}
\Lambda_\mathrm{eff} = \left(\sum_{i \geq j} \frac{1}{\Lambda_{ij}^2} \right)^{-1/2},
\end{equation}
since the total signal production cross section is given as the sum of all the $\phi_i \phi_i^*$ cross sections.
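As a sketch, the combination of scales entering the signal cross section can be evaluated as follows (example values are hypothetical):

```python
def lambda_eff(lambdas_TeV):
    """Effective scale (sum_{i>=j} Lambda_ij^-2)^(-1/2), in TeV,
    entering the total phi_i phi_j* production cross section."""
    return sum(L ** -2 for L in lambdas_TeV) ** -0.5

# a single operator just returns its own scale,
# while several operators always combine to a *lower* effective scale:
single = lambda_eff([5.0])
combined = lambda_eff([5.0, 5.0, 5.0])   # = 5/sqrt(3) TeV
```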
\subsection{Cosmological Constraints}
\label{ss.cosmoconstraints}
The mass of the dark mediator should be smaller than about an MeV to allow for sizable direct detection of $\chi$. This means $\phi$ can be thermally produced in the early universe even after $\chi$ freezes out. Such a stable light degree of freedom can overclose the universe and affect Big Bang Nucleosynthesis (BBN) as well as structure formation. In this section, we discuss the thermal history of $\phi$ and derive the relevant cosmological constraints at each step.
\subsubsection{Thermal $\phi$ production}
\label{sss.thermalphi}
The relic density of a light $\phi$ is given by \cite{Kolb:1990vq}
\begin{equation}\label{e.Omegaphi}
\Omega_{\phi}\,h^2\equiv 7.83\times 10^{-2}\,\frac{g_{\phi}}{g_{*S}}\,\frac{m_{\phi}}{\, {\rm eV}},
\end{equation}
where $g_\phi = 2$ is the number of degrees of freedom (d.o.f.) associated with a complex scalar and $g_{*S}$ is the total number of d.o.f. at the temperature at which $\phi$ decouples from the thermal bath. Given the possible size of $g_{*S}$, it is clear that dark mediators with sizable couplings to the SM will overclose the universe unless the heaviest stable scalar has a mass of $m_\phi \lesssim \, {\rm eV}$. This is effectively massless for the purpose of computing all other bounds in this section, which we shall assume from now on.
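To make the overclosure argument concrete, \eref{Omegaphi} can be evaluated directly; the sketch below takes $g_{*S} = 10.75$ (a standard SM value for decoupling near $10 \, {\rm MeV}$) and compares to the observed $\Omega_\mathrm{DM} h^2 \approx 0.12$, both inputs assumed here for illustration:

```python
def omega_phi_h2(m_phi_eV, g_star_S, g_phi=2.0):
    """Relic density of a light thermal scalar, eq. (Omegaphi)."""
    return 7.83e-2 * (g_phi / g_star_S) * m_phi_eV

# decoupling around T ~ 10 MeV (g_*S ~ 10.75):
omega_light = omega_phi_h2(1.0, 10.75)   # ~0.015: harmless subdominant component
omega_heavy = omega_phi_h2(1e3, 10.75)   # ~15: a keV-scale scalar overcloses
```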
Assessing the effect of $\phi$ on BBN requires knowing its freeze-out temperature more precisely. For values of $\Lambda$ relevant to direct detection, the hadronic coupling to the SM bath keeps $\phi$ in thermal equilibrium at least until pions decay at $T \sim 100 \, {\rm MeV}$. After pion decay, the process $\phi \phi^* \leftrightarrow \gamma \gamma$ maintains thermal equilibrium until the photon annihilation rate drops below the Hubble rate, i.e.
\begin{equation}
\sigma_{\gamma \gamma \rightarrow \phi \phi^*} \ \times \ \frac{2 \zeta(3)}{\pi^2} T^3 \lesssim g_*^{1/2}\,\frac{T^2}{M_\mathrm{pl}}.
\end{equation}
Substituting \eref{gagaphiphiestimate} and the smallest possible $g_* \approx 3$ to slightly underestimate the freeze-out temperature, we obtain
\begin{equation}
T_\phi^\mathrm{freeze-out} \approx (10 \, {\rm MeV}) \left(\frac{\Lambda}{\, {\rm TeV}} \ \frac{1}{\mathcal{B}}\right)^{2/3}.
\end{equation}
For $\Lambda \gtrsim \, {\rm TeV}$, $\phi$ will decouple from the SM bath before BBN.
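A quick numerical check of this estimate (taking $\mathcal{B} = 1$):

```python
def T_phi_freezeout_MeV(Lambda_TeV, B=1.0):
    """Freeze-out temperature estimate: ~10 MeV * (Lambda/TeV / B)^(2/3)."""
    return 10.0 * (Lambda_TeV / B) ** (2.0 / 3.0)

# already at Lambda = 1 TeV the estimate gives T ~ 10 MeV, at the upper edge
# of the BBN window, and the freeze-out temperature only grows with Lambda
T_at_1TeV = T_phi_freezeout_MeV(1.0)
T_at_10TeV = T_phi_freezeout_MeV(10.0)
```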
\subsubsection{Big Bang Nucleosynthesis}
The presence of $\phi$ during the BBN epoch ($T$ around $10$ to $0.1$ MeV) can affect the generation of light elements in two ways. First, even though $\phi$-nucleon scattering does not change the relative number of neutrons and protons, the presence of an additional light degree of freedom speeds up the expansion of the universe and makes the neutron-proton ratio freeze out at a larger value. This leads to an overproduction of $^4$He, an effect that can be constrained by measuring the effective number of neutrino flavors, $N_\mathrm{eff}$, during BBN. Current observation gives $N_\mathrm{eff}=3.3\pm0.6$ \cite{Ade:2013zuv} at 95\% CL from Planck$+$WMAP$+$HighL CMB observations. Since $\phi$ is relativistic it will contribute an effective number of light neutrino flavors,
\begin{equation}\label{eq:deltaNeff}
\delta N =\frac{8}{7}\times \left(\frac{g^*_{\rm{BBN}}}{g^*_{\phi\,\,\rm{decouple}}}\right)^{4/3},
\end{equation}
to the SM value of $N_\mathrm{eff} = N_\nu = 3$. Assuming all $\phi_i$ are in thermal contact with the SM bath during BBN, this restricts $n_\phi \leq 2$ (1) if $\phi$ is real (complex).\footnote{$\phi_i$ are colder than photons during the era of Baryon Acoustic Oscillations (BAO), so the $N_\mathrm{eff}$ measurement at that epoch provides a weaker constraint.} Note that this constraint is weaker if $\phi_i$ decouples earlier.
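For illustration, eq.~(\ref{eq:deltaNeff}) can be evaluated for representative decoupling temperatures; the $g_*$ values below are standard SM estimates, assumed here for illustration:

```python
def delta_Neff(g_star_BBN, g_star_decouple):
    """Contribution of one complex scalar to N_eff, eq. (deltaNeff)."""
    return (8.0 / 7.0) * (g_star_BBN / g_star_decouple) ** (4.0 / 3.0)

# decoupling just before BBN: the full 8/7 ~ 1.14 of a neutrino species,
# in tension with N_eff = 3.3 +/- 0.6
late = delta_Neff(10.75, 10.75)
# decoupling above the QCD scale (g* ~ 61.75) dilutes the contribution
early = delta_Neff(10.75, 61.75)
```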
When light elements are formed around $0.1$ MeV, $\phi$ can dissociate the nuclei if it imparts a recoil energy larger than the nuclear binding energy. However, due to the lightness of $\phi$, the energy that $\phi$ can give to a nucleus is rather small. For example, in the $^2$H rest-frame, the maximum recoil energy of the $^2$H nucleus being hit by a $\phi$ is $E_R^{max} = {2\,E_{\phi}^2}/{m_{^2\rm{H}}}$. This only exceeds the $^2$H binding energy of $2.2$ MeV if $E_\phi > 47 \, {\rm MeV}$, which is much higher than the $\phi$ temperature at that time. The effect on nuclear number densities is negligible.
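The quoted $47 \, {\rm MeV}$ threshold follows from elementary kinematics; a short check using the standard deuteron mass and binding energy:

```python
m_2H_MeV = 1875.6     # deuteron mass
B_2H_MeV = 2.2        # deuteron binding energy

def E_R_max_MeV(E_phi_MeV):
    """Maximum 2H recoil energy from a massless phi: 2 E_phi^2 / m_2H."""
    return 2.0 * E_phi_MeV ** 2 / m_2H_MeV

# phi energy needed to photodissociate the deuteron: E_R_max = B_2H
E_phi_threshold = (B_2H_MeV * m_2H_MeV / 2.0) ** 0.5   # ~46 MeV
```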
\subsubsection{Structure Formation}
During the structure formation era (around a temperature of $10$ eV), the scattering length between $\phi$ and $^4\mathrm{He}$ was about $3\times 10^4$ Mpc for $\Lambda = 10 \, {\rm TeV}$, so we can treat $\phi$ as a collisionless particle. $\phi$ therefore generates a Landau damping to the primordial density fluctuations, with a free-streaming length that can be estimated as \cite{Kolb:1990vq}
\begin{equation}
\lambda_{FS,\,\phi}\simeq 20\,\rm{Mpc} \left(\frac{m_{\phi}}{10\,\rm{eV}}\right)^{-1}.
\end{equation}
This is close to the free-streaming length of neutrinos, $\lambda_{FS,\,\nu}\simeq 20\,\rm{Mpc} \left(\frac{m_{\nu}}{30\,\rm{eV}}\right)^{-1}$, so $\phi$ should satisfy constraints similar to those on a thermally produced sterile neutrino, with cold dark matter still dominating the relic density. As discussed in \sssref{thermalphi}, this latter requirement of a sub-dominant hot dark matter $\phi$ component requires $m_\phi \lesssim$ eV. The scenario is then similar to the case studied by \cite{Wyman:2013lza}: the existence of sterile neutrinos at the sub-eV scale can relax the tension between the Planck results and local measurements of galaxy clusters regarding the matter perturbation amplitude and the expansion rate of the universe. Similar conclusions apply to scalars, meaning sub-eV scale $\phi$'s are compatible with structure formation bounds.
\subsubsection{Dark Acoustic Oscillations}
When DM particles $\chi$ couple to a bath of nearly massless $\phi$ scalars, we expect the DM-$\phi$ system to give rise to dark acoustic oscillations (DAO), similar to the acoustic oscillations of baryons.
The temperature and polarization spectra obtained from CMB data strongly constrain this effect, which translates to an upper bound on $\chi$-$\phi$ scattering.
In dmDM, the only tree-level $\chi$-$\phi$ interaction that is not suppressed by $m_\chi$ is the process $\chi\phi\to\chi^c\phi\phi$. This is mediated by a $t$-channel $\phi$ and is generated both by the DM Yukawa coupling to $\phi$ and the quartic coupling $\lambda \phi^4$ in eq.~(\ref{e.dmdm}).
The transverse cross section of this process is only suppressed by the energy transfer $m_{\chi}^2v_{\chi}^4$ and decouples at a very late time for light $\phi$. Therefore, for scalars with mass below about 10 eV (the temperature of structure formation), CMB data sets stringent upper bounds on the coupling combination $\lambda y_\chi$ to ensure DAO do not generate a sizable effect.
Although the coupling between four of the lightest scalars has no direct implication for direct detection signals, it is still useful to understand this constraint for completeness. A detailed analysis using CMB data is beyond the scope of this work (for an example of such an analysis, see \cite{Cyr-Racine:2013fsa}).
However, we can estimate a conservative bound on $y_\chi \lambda$ by requiring the scattering to decouple before structure formation ($T \sim 10 \, {\rm eV}$).
There are two ways for the $\chi\phi\to\chi^c\phi\phi$ process to freeze out before structure formation.
\begin{enumerate}
\item The lightest scalar could have a mass above 10 eV. To avoid overclosure, \eref{Omegaphi} then requires $\phi$ to freeze out when $g_{*S}\simeq 10^2$, i.e. before the electroweak scale. This can be the case if $\Lambda$ is very large. Indeed, for $n_\phi = 1$, this is required by neutron star cooling, see \eref{NSbound}. However, such a scenario would be sterile with respect to direct detection.
\item In the next section we will define a $n_\phi = 2$ model which avoids neutron star bounds while allowing for direct detection. In that case, the lightest scalar must have a mass below $\sim \, {\rm eV}$ to avoid overclosure. The light scalar therefore remains in thermal contact with dark matter, and remains relativistic during structure formation, which translates to a strong constraint on its quartic coupling.
The extent to which DM motion is influenced by $\phi$ scattering is given by the transverse cross section for $\chi\phi\to\chi^c\phi\phi$, which we can estimate as
\begin{equation}
\sigma_T\sim\frac{(y_{\chi}^L)^2\,\lambda^2}{16\pi^2\times 16\pi\,m_{\chi}^2v_{\chi}^4}\ln\left(\frac{4\pi m_{\chi}v^2}{y_{\chi}^L\lambda m_{\phi}}\right).
\end{equation}
The logarithmic factor comes from the Coulomb-like potential of the long-range $\phi$ interaction, and the phase space suppression of emitting an additional scalar is approximated by a factor of $(16\pi^2)^{-1}$. Requiring the scattering rate to be smaller than the Hubble rate until the temperature drops to $10$ eV translates into the upper bound on the couplings
\begin{equation}
\label{e.DAObound}
y_{\chi}^L\,\lambda\mathrel{\ltap} 10^{-12},
\end{equation}
for benchmark parameters $m_{\chi}=10$ GeV, $m_{\phi_L}=1$ eV, and DM with kinetic energy $\sim 10$ eV.
\end{enumerate}
A more detailed study may relax the rather conservative bound, but a sizable tuning with $\lambda\sim 10^{-8}$ is expected for $y_{\chi}^L \sim 10^{-4}$. However, this bound has no bearing on direct detection.
\section{A Realistic dmDM Scenario for Direct Detection}
\label{s.phisummary}
\renewcommand{\arraystretch}{2}
\begin{table*}
\begin{center}
\begin{tabular}{| m{4cm} | m{9cm} |}
\hline
Avoiding $\phi$-overclosure & Heaviest stable $\phi$ must have $m_\phi \lesssim \, {\rm eV}$\\
\hline
$N_\mathrm{eff}$ during BBN & At most two real light scalars: $n_\phi \leq 2$\\
\hline \hline
LHC direct searches & $\Lambda_\mathrm{eff} = \left(\sum_{i \geq j} \Lambda_{ij}^{-2}
\right)^{-1/2} > 2 \, {\rm TeV}$.
\\
\hline \hline
Solar Heat Transfer & $\Lambda_{ij} \gtrsim 10 \, {\rm TeV}$ if $m_{\phi_{i,j}} \lesssim \, {\rm keV}$\\
\hline
White Dwarf Cooling & $\Lambda_{ij} \gtrsim 10 \, {\rm TeV}$ if $m_{\phi_{i,j}} \lesssim$ few keV\\
\hline
Neutron Star Cooling & $\Lambda_{ij} \gtrsim 10^8 \, {\rm TeV}$ if $m_{\phi_{i,j}} \lesssim 100 \, {\rm keV}$\\
\hline
Supernovae & $\Lambda_{ij} \lesssim 10^6 \, {\rm TeV}$ or $\Lambda_{ij} \gtrsim 10^{11} \, {\rm TeV}$ if $m_{\phi_{i,j}} \lesssim 10 \, {\rm MeV}$\\
\hline
\end{tabular}
\end{center}
\caption{Bounds on light scalars coupling to SM via operators $\bar q\,q \,\phi_i \phi_j^*/\Lambda_{ij}$. $m_{\phi_{i,j}}$ refers to \emph{both} scalars, not either. Indirect detection via DM annihilation, fixed target experiments and precision measurement bounds yield no relevant constraints if the coupling is SM flavor-blind, see \aref{otherconstraints}.
}
\label{t.phiconstraints}
\end{table*}
A summary of our derived constraints on light dark mediators and their coupling to the SM, formulated for $n_\phi \geq 1$, is shown in \tref{phiconstraints}. To place these in context, recall from \sref{yXbounds} that the dominant Yukawa coupling should be $y_\chi \sim 10^{-3} - 10^{-2}$. Furthermore, as we review in \sref{directdetection}, direct detection of dmDM in the $2\to3$ regime is feasible if $\Lambda_{ij} \lesssim 10^4 \, {\rm TeV}$ with $m_{\phi_i} \lesssim \, {\rm keV}$ and $m_{\phi_j} \lesssim \, {\rm MeV}$, so that one scalar can be emitted while the other acts as a light mediator.
With this in mind it is clear that any $n_\phi = 1$ scenario of dmDM with realistic direct detection prospects is \emph{completely excluded} by neutron star bounds. In fact, the minimum value of $\Lambda$ required by neutron star cooling would truncate the length of the supernova neutrino burst, so the actual lower bound on $\Lambda$ becomes $10^{11} \, {\rm TeV}$.
However, there is a very simple $n_\phi = 2$ scenario which behaves almost identically to the minimal $n_\phi = 1$ model for purposes of direct detection, yet is not excluded by any of the bounds in \tref{phiconstraints}.
Consider a dmDM setup like \eref{dmdm} with two light dark mediators, real scalars $\phi_L$ and $\phi_H$ having masses $m_{\phi_L} \lesssim \, {\rm eV}$ and $m_{\phi_H} \sim \, {\rm MeV}$.
We also add a quartic coupling to allow $\phi_H$ to decay into $\phi_L$:
\begin{eqnarray}
\nonumber
\mathcal{L}_\mathrm{DM} & \supset &
\bar q q \left(\frac{1}{\Lambda_{HH}} \phi_H \phi_H + \frac{1}{\Lambda_{LL}} \phi_L \phi_L + \frac{1}{\Lambda_{LH}} \phi_H \phi_L \right)\\
\nonumber
&& +\ \overline{\chi^c} \chi \left( y_\chi^H \phi_H + y_\chi^L \phi_L \right) + h.c.\\
\label{e.HLmodel}
&& +\ \lambda \phi_H \phi_L^3
\end{eqnarray}
Other quartic couplings are omitted for simplicity\footnote{The quartic coupling for $\phi_L^4$ would have to obey the constraint from dark acoustic oscillations, see \eref{DAObound}.}. When $\lambda > 10^{-9}$, $\phi_H \to \phi_L \phi_L \phi_L$ is instantaneous when the temperature drops below one MeV, leaving $\phi_L$ with a similar relic density to \eref{Omegaphi}.
Now let $\Lambda_{LL} > 10^8 \, {\rm TeV}$ to comply with neutron star bounds, while $\Lambda_{HH}, \Lambda_{LH} < 10^6 \, {\rm TeV}$ avoids supernova bounds by trapping both $\phi_L$ and $\phi_H$ in the stellar medium. In that case, all the bounds in \tref{phiconstraints} are satisfied. Importantly, $\Lambda_{LH}$, which can be relatively small, now controls direct detection. This can give a large rate for the process $\bar \chi N \to \bar \chi N \phi_L$. Since $\phi_H$ is much lighter than the typical momentum exchange of $\gtrsim 10 \, {\rm MeV}$ for ambient DM scattering off nuclei, the nuclear recoil spectrum is nearly identical to the $n_\phi = 1$ case with $m_\phi \sim \, {\rm eV}$.
A small $\Lambda_{LH}$ will generate an effective $\Lambda_{LL}$ coupling through a loop of constituent quarks and $\phi_H$. The size of this effective operator is
\begin{equation}
\Lambda_{LL}^\mathrm{eff} \approx 80 \pi^2 \frac{\Lambda_{LH}^2}{m_q},
\end{equation}
where $m_q \approx 263 \, {\rm MeV}$ is the constituent quark mass. The neutron star bound on $\Lambda_{LL}$ then translates to a bound on $\Lambda_{LH}$:
\begin{equation}
\label{e.NSboundHL}
\Lambda_{LH} \gtrsim 10 \, {\rm TeV}
\end{equation}
which is the bound we adopt when discussing direct detection in the next section.
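The translation from the neutron star bound on $\Lambda_{LL}$ to \eref{NSboundHL} is a one-line inversion of the loop estimate, which can be checked numerically:

```python
import math

m_q_GeV = 0.263                   # constituent quark mass
Lambda_LL_min_GeV = 1e8 * 1e3     # neutron-star bound: 1e8 TeV, in GeV

def lambda_LL_eff_GeV(Lambda_LH_GeV):
    """Loop-induced scale: Lambda_LL_eff ~ 80 pi^2 Lambda_LH^2 / m_q."""
    return 80.0 * math.pi ** 2 * Lambda_LH_GeV ** 2 / m_q_GeV

# invert: the smallest Lambda_LH compatible with the neutron-star bound,
# which comes out at a few TeV, i.e. the ~10 TeV order quoted in the text
Lambda_LH_min_GeV = math.sqrt(Lambda_LL_min_GeV * m_q_GeV / (80.0 * math.pi ** 2))
```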
Finally, the presence of two Yukawa couplings to dark matter gives additional freedom. A very modest hierarchy $y_\chi^H/y_\chi^L \gtrsim 10$ would suppress the $\chi N \to \chi N$ loop process. This makes it possible for $y_\chi^H$ to be large enough for a thermal relic $\chi$ and ameliorate the inconsistencies between dwarf galaxy simulations and observation, all while being in the $2\to3$ regime of direct detection (see \fref{yXboundplot}).
This scenario can be realized in the UV completion of \ssref{uvcompletion} by assuming hierarchical Yukawa couplings between dark mediators, dark vector quarks and different chiralities of the SM quarks.
\vspace{3mm}
\section{Direct Detection of \lowercase{dm}DM}
\label{s.directdetection}
In this section we outline in detail our computation of nuclear recoil spectra and direct detection constraints on dmDM, first summarized in \cite{Curtin:2013qsa}. We work with the minimal $n_\phi = 1$ scenario with effectively massless $\phi$ for simplicity, with the understanding that this phenomenology can be replicated by the unexcluded $n_\phi = 2$ scenario defined by \eref{HLmodel}.
The discussion of the previous two sections derived constraints on the Yukawa couplings between dark mediators and dark matter, and the coupling between dark mediators and SM quarks. Direct detection is sensitive to a combination of the two. We predict the dmDM signal at XENON100 \cite{Aprile:2012nq}, LUX \cite{Akerib:2013tjd}, CDMS-Si \cite{Agnese:2013rvf} and CDMSlite \cite{Agnese:2013lua} and demonstrate that large regions of the direct detection plane are not yet excluded.
It is instructive to compare the dmDM interaction with nuclei to the contact operator
\begin{equation}
\label{e.contactoperator}
\frac{\bar q q \bar \chi \chi}{\tilde \Lambda^2},
\end{equation}
since it is the standard choice for showing constraints from different direct detection experiments in the same $(m_\chi, \sigma_\mathrm{SI}^n)$-plane. Referring to the above interaction model as the ``standard-WIMP'', we find that $\mathcal{O}(100 \, {\rm GeV})$ dmDM will fake a different lighter $\mathcal{O}(10 \, {\rm GeV})$ WIMP at different experiments. This is due to energy loss from the outgoing $\phi$, which leads to an underestimate of the DM energy when assuming the above contact operator. We study this interesting phenomenon by first examining the parton-level cross section to understand the parametric dependence of the recoil spectrum, and then producing the full experimental recoil prediction including form factors and the velocity distribution.
\subsection{Differential cross section calculation}
\begin{figure*}
\begin{center}
\includegraphics[width=4.9cm]{light_N28_DM10_E10}\quad\includegraphics[width=4.9cm]{light_N131_DM10_E10}\quad\includegraphics[width=4.9cm]{light_N80_DM30_E30}\\
\includegraphics[width=4.9cm]{light_N28_DM10_E200}\quad\includegraphics[width=4.9cm]{light_N28_DM100_E10}\quad\includegraphics[width=4.9cm]{light_N200_DM100_E50}
\end{center}
\caption{Examples of nuclear recoil spectra with dmDM at `parton-level' (without nuclear/nucleus form factors and coherent scattering enhancement) for different $m_N, m_{\chi}$ and a given incoming energy $E_{\chi}$. The blue datapoints are given by the MG5 simulation, and the red curves are the analytical approximation of the spectrum in \eref{Erfit2}.
}
\label{f.partonxsection}
\end{figure*}
We are interested in the differential cross section for a dark matter particle hitting a stationary nucleus which then recoils with kinetic energy $E_r$. This is given by
\begin{equation}
\frac{d \sigma_N}{d E_r} \ = \ F^2(E_r) \ A^2 \ \left( \Sigma B \right)^2 \ \frac{d \sigma_N^\mathrm{bare}}{d E_r}.
\end{equation}
$d \sigma_N^\mathrm{bare}/d E_r$ is the `parton-level' differential cross section evaluated for the process $q \chi \to \bar q \chi \phi$ or $q \chi \to q \chi$ with the substitution $m_q \to m_N$. This is because ambient dark matter is extremely non-relativistic, with velocities of order a few $100$ km/s, and interacts with the entire nucleus coherently. $d \sigma_N^\mathrm{bare}/d E_r$ is easily evaluated analytically for the $2\to2$ loop process using \eref{2to2operator}, reproducing the result of a standard WIMP with an additional suppression at high momentum transfer. For $2\to3$ scattering we adopt a Monte-Carlo approach\footnote{This was more practical than the analytical approach for evaluating the different possible models that realize $2\rightarrow3$ scattering; fully analytical cross section expressions would not have been particularly illuminating. We do discuss analytical approximations below.} by defining a \texttt{MadGraph5} \cite{Alwall:2011uj} model containing the DM Yukawa coupling and the $\bar N N \phi \phi^*/\Lambda$ effective operator using \texttt{FeynRules1.4}~\cite{Ask:2012sm}. We discuss the resulting spectrum below.
The factor $\left( \Sigma B \right)^2$ is a quark-nucleon form-factor to convert the amplitude from quark- to nucleon-level by taking into account the values of quark currents inside the proton or neutron (see \cite{Belanger:2008sj,Crivellin:2013ipa} for a review). Since the momentum transfer $q^2$ is much less than the QCD confinement scale we can take this form factor to be constant. The relevant case for dmDM is the scalar operator $\langle N | m_q \bar q q |N\rangle = f_q^N m_N$, where $f_q^N < 1$ is interpreted as the fractional contribution of quark $q$ to the nucleon mass $m_N$ and can be computed with lattice techniques. Importantly, the contributions of all sea quarks are \emph{additive}, and each sea quark contributes much more than its bare mass to the nucleon mass, giving a large matrix element enhancement. This gives matrix elements $\langle N|\bar{q}q|N\rangle\equiv B^N_q$, where
\begin{eqnarray*}
& B_u^p=8.6,\quad B_d^p=6.3,\quad B_s^p=2.4,\\
& B_u^n=6.8,\quad B_d^n=8.0,\quad B_s^n=2.4.
\end{eqnarray*}
Assuming equal coupling of $\phi$ to all SM quarks, the $|\mathcal{M}|^2$ enhancement is therefore
\begin{equation}
\label{e.Bsqfactor}
\bigg( \sum_{q = u,d,s} B_q^{n,p} \bigg)^2 \approx 300.
\end{equation}
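This enhancement factor can be verified directly from the quoted matrix elements:

```python
B_p = {"u": 8.6, "d": 6.3, "s": 2.4}   # proton matrix elements B_q^p
B_n = {"u": 6.8, "d": 8.0, "s": 2.4}   # neutron matrix elements B_q^n

# coherent (additive) sum over sea-quark matrix elements, then squared:
enhancement_p = sum(B_p.values()) ** 2   # ~299
enhancement_n = sum(B_n.values()) ** 2   # ~296
```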
Going from nucleon- to nucleus-level, the cross section is enhanced by $A^2$ (assuming equal $\phi$ coupling to protons and neutrons) and must be convolved with the \emph{Helm Form Factor} \cite{Engel:1991wq, Lewin:1995rx}. This is just the Fourier transform of the radial nuclear density distribution,
\begin{equation}
\label{e.Helm}
F^2(E_r) = \left( \frac{3 j_1(q r_0) }{q r_0}\right)^2 e^{-s^2 q^2},
\end{equation}
where $j_1$ is a spherical Bessel function, $q = \sqrt{2 m_N E_r}$, $s = 1 \ \mathrm{fm}$, $r_0 = \sqrt{r^2 - 5 s^2}$ and $r = 1.2\, A^{1/3} \ \mathrm{fm}$.
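A minimal Python implementation of \eref{Helm} (approximating the nuclear mass as $0.9315\,A \, {\rm GeV}$; valid for the $A \gtrsim 20$ targets considered here, where $r^2 > 5 s^2$):

```python
import math

def helm_F2(E_r_keV, A):
    """Helm form factor squared, eq. (Helm), for mass number A."""
    hbar_c = 0.1973                      # GeV fm
    m_N = 0.9315 * A                     # nuclear mass in GeV (approx.)
    q = math.sqrt(2.0 * m_N * E_r_keV * 1e-6) / hbar_c   # fm^-1
    s = 1.0                              # fm
    r = 1.2 * A ** (1.0 / 3.0)           # fm
    r0 = math.sqrt(r * r - 5.0 * s * s)
    x = q * r0
    j1 = math.sin(x) / x ** 2 - math.cos(x) / x   # spherical Bessel j1
    return (3.0 * j1 / x) ** 2 * math.exp(-(s * q) ** 2)
```

For xenon ($A = 131$) this gives $F^2 \approx 0.95$ at $E_r = 1 \, {\rm keV}$ and a strong suppression near $100 \, {\rm keV}$, as expected.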
It is instructive to compare $d \sigma^\mathrm{bare}_N/dE_r$ for the $2\to3$ scenario to the simple WIMP case generated by the contact operator \eref{contactoperator}. We examine the case of massless emitted $\phi$. $m_\phi \sim $ few keV could be interesting to introduce shape features into the recoil spectrum but is cosmologically disfavored, see \tref{phiconstraints}.
As shown in \fref{partonxsection}, the recoil spectrum of dmDM can be well described by the function
\begin{eqnarray}
\frac{d\,\sigma_{2\to3}^{bare}}{d\,E_r}&\simeq& \, \frac{\mathcal{C}}{E_r} \,\left(1-\sqrt{\frac{E_r}{E_r^\mathrm{max}}}\right)^2,
\label{e.Erfit2}
\end{eqnarray}
where $\mathcal{C}=1.3\times 10^{-42}\,(\, {\rm TeV}/\Lambda)^2$ cm$^2$. $E_r^\mathrm{max}\simeq 2\,\frac{\mu_{\chi N}^2}{m_N}\,v^2$ is the maximum allowed nuclear recoil energy for a given incoming DM velocity, the same as for the standard WIMP. The above approximation holds for massive dark mediators as well, provided the intermediate $t$-channel $\phi$ has mass $\lesssim \, {\rm MeV}$ and the emitted $\phi$ has mass $\lesssim \, {\rm keV}$.
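In code, the approximation \eref{Erfit2} and its kinematic endpoint read as follows (a sketch; the example point uses hypothetical but representative parameters):

```python
import math

def E_r_max_keV(m_chi_GeV, m_N_GeV, v_over_c):
    """Kinematic endpoint 2 mu^2 v^2 / m_N, converted from GeV to keV."""
    mu = m_chi_GeV * m_N_GeV / (m_chi_GeV + m_N_GeV)   # reduced mass
    return 2.0 * mu ** 2 / m_N_GeV * v_over_c ** 2 * 1e6

def dsigma_dEr_2to3(E_r_keV, E_r_max, Lambda_TeV):
    """Approximate 2->3 'bare' recoil spectrum, eq. (Erfit2), in cm^2/keV."""
    if not 0.0 < E_r_keV < E_r_max:
        return 0.0
    C = 1.3e-42 / Lambda_TeV ** 2
    return C / E_r_keV * (1.0 - math.sqrt(E_r_keV / E_r_max)) ** 2

# a 10 GeV DM particle at v ~ 1e-3 c on a xenon-like nucleus:
Emax = E_r_max_keV(10.0, 122.0, 1e-3)   # ~1.4 keV
```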
Eq.~\ref{e.Erfit2} can be decomposed into the phase space part of $\chi\,N\to\chi\,N\,\phi$ scattering via a contact interaction, times the propagator of the light mediator $\phi$. The phase space can be approximated by
\begin{equation}
\frac{d\,\sigma_{2\to3}^{contact}}{d\,E_R}\propto m_N^2\,E_R\left(1-\sqrt{\frac{E_R}{E_R^{max}}}\right)^2,
\end{equation}
which vanishes when $E_R$ reaches its maximum value or when $E_{R}\to 0$, since a relativistic $\phi$ cannot by itself compensate both the energy and momentum of a non-relativistic DM particle. The non-relativistic scattering also requires the spatial momentum exchange to be much larger than the kinetic energy, which makes the spatial momentum of the relativistic $\phi$ negligible in energy-momentum conservation. Because of this, the propagator of $\phi$ in the dmDM scattering is $(2m_N\,E_R)^{-2}$, which is dominated by the spatial momentum exchange between $N$ and $\chi$ and gives the spectrum in \eref{Erfit2}.
The contact operator \eref{contactoperator} produces a flat parton-level nuclear recoil spectrum for $E_r < E_r^\mathrm{max}$. On the other hand, \eref{Erfit2} features a suppression at large recoil.
The functional form of this recoil suppression is different than for $2\to2$ scattering with light mediators and/or derivative couplings. Furthermore, the scaling of total cross section with $m_N, m_\chi$ is unique. This necessitates a full re-interpretation of all direct detection bounds to understand how a heavy dmDM candidate fakes different light WIMPs at different detectors. We expect the recoil suppression to increase the sensitivity advantage enjoyed by low-threshold Xenon detectors over CDMS.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{fractionDMxsectionabovethreshold}
\end{center}
\caption{
The fraction of the total dmDM (solid) and WIMP DM (dashed) direct detection cross section above experimental threshold (blue for CDMS II Si $E_r > 7 \, {\rm keV}$, black for LUX $S1 > 2$). $m_\phi < \, {\rm keV}$. $S1$ light collection efficiency is taken into account but signal selection cuts have not been applied.
}
\label{f.ddefficiency}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{DDspectraCDMS}
\end{center}\vspace{-3mm}
\caption{
Nuclear recoil spectra at CDMS II Silicon ($m_N = 28 \, {\rm GeV}$) with 140.2 kg$\cdot$days exposure for dmDM (solid) and WIMP DM (dotted) of mass 5 (red), 10 (blue) and 50 (green) GeV. Experimental efficiencies are not included, and the recoil spectrum is shown only for $E_r > 3 \, {\rm keV}$ because the dmDM spectrum is so sharply peaked at the origin that no other features would be visible if it were included. The shown WIMP-nucleon cross sections for (5, 10, 50) GeV are $(4, 2, 6) \times 10^{-40} \ \mathrm{cm}^2$, while the dmDM parameters are $y_\chi = 0.02$, $\Lambda = (29, 91, 91) \, {\rm TeV}$ and $m_\phi < \, {\rm keV}$.
}
\label{f.DDspectraCDMS}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=15cm]{DDspectraLUX}
\end{center} \vspace{-3mm}
\caption{
S1 spectra at LUX ($m_N = 131 \, {\rm GeV}$) with 10065.4 kg$\cdot$days exposure for dmDM (solid) and WIMP DM (dotted) of mass 10 (red), 20 (blue) and 50 (green) GeV. The 14\% S1 light gathering efficiency is included but selection cuts are not. No DM signal below $E_r = 3 \, {\rm keV}$ is included due to limitations of the measured $\mathcal{L}_{eff}$, in accordance with the collaboration's analysis. The shown WIMP-nucleon cross sections for (10, 20, 50) GeV are $(18.5, 3.6, 4.9) \times 10^{-45} \ \mathrm{cm}^2$, while the dmDM parameters are $y_\chi = 0.02$ and $\Lambda = (1900, 9700, 13000) \, {\rm TeV}$ and $m_\phi < \, {\rm keV}$.
}
\label{f.DDspectraLUX}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[width=7.5cm]{m2to3vsm2to2}
\end{center}
\caption{
The measured nuclear recoil spectrum produced by a dmDM candidate with mass $m_\chi = m_{2\rightarrow3}$ is very similar to that of a WIMP with mass $m_{2\to2} < m_{2\to3}$, interacting with nuclei via the contact operator \eref{contactoperator}. $m_{2\to2}(m_{2\to3})$
is shown for XENON100 ($S1 > 3$ with 6\% light gathering efficiency, dashed red line), LUX ($S1 > 2$ with 14\% light gathering efficiency, dash-dotted black line), CDMS II Silicon ($E_r > 7 \, {\rm keV}$, solid blue line), and CDMSlite (Germanium, $E_r > 0.2 \, {\rm keV}$, dotted purple line) before selection cuts.
}
\label{f.compare2to2to2to3}
\end{figure}
\subsection{Nuclear Recoil Spectra}
To compute the expected nuclear recoil spectrum at a direct detection experiment, the differential scattering cross section must be convolved with the dark matter speed distribution in the earth frame,
\begin{equation}
\frac{d R}{d E_r} = N_T \frac{\rho_\chi}{m_\chi} \int dv \ v f(v) \frac{d \sigma_N}{d E_r}.
\end{equation}
The speed distribution is given in \aref{ddcomputation}.
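Schematically, this convolution can be carried out with simple quadrature. The sketch below uses a toy truncated Maxwell-Boltzmann speed distribution with illustrative halo parameters; the actual earth-frame distribution and the normalization prefactor $N_T \rho_\chi / m_\chi$ (omitted here) are given in \aref{ddcomputation}:

```python
import math

V0, V_ESC = 220.0, 544.0   # km/s; illustrative halo parameters

def f_speed(v):
    """Toy truncated Maxwell-Boltzmann speed distribution (unnormalized)."""
    return v * v * math.exp(-(v / V0) ** 2) if v < V_ESC else 0.0

def dR_dEr(E_r, dsigma_dEr, v_min, n=2000):
    """Unnormalized dR/dE_r ∝ ∫ dv v f(v) dσ/dE_r(E_r, v), trapezoidal rule."""
    dv = (V_ESC - v_min) / n
    total = 0.0
    for i in range(n + 1):
        v = v_min + i * dv
        w = 0.5 if i in (0, n) else 1.0   # trapezoid endpoint weights
        total += w * v * f_speed(v) * dsigma_dEr(E_r, v) * dv
    return total
```

Raising the minimum speed needed to produce a given recoil (larger $E_r$ or lighter $m_\chi$) monotonically reduces the rate.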
In our Monte Carlo calculation for $2\to3$ scattering, we simulate $\chi N \rightarrow \chi^c N \phi$ for different $m_N, m_\chi, m_\phi$ and incoming DM velocities $v$ in bins of $20$ km/s to build up a table of the various required $\frac{d \sigma_N}{d E_r}$ and perform this convolution numerically. For verification, we applied our pipeline to WIMP-nucleus scattering, reproducing the expected analytical results.
The spectrum of nuclear recoil events that occurred in the detector must be translated to actual experimental observables. This involves folding in efficiencies, as well as converting the nuclear recoil signal to a scintillation light signal in the case of liquid Xenon detectors. These details are also given in \aref{ddcomputation}.
The detection efficiency for $2\to3$ scattering in dmDM is about $100 - 1000$ times smaller than for the standard WIMP (and for $2\to2$ scattering in dmDM), see \fref{ddefficiency}. This is expected, given the additional $E_r$-suppression. In the next subsection we will take care to understand the parameter regions where $2\to3$ scattering dominates over the $2\to2$ process in dmDM.
\fref{DDspectraCDMS} shows some $2\to3$ nuclear recoil spectra at CDMS II Si before taking detection efficiency into account. dmDM is compared to WIMPs for different DM masses, and the principal experimental feature of our model is apparent: a $\sim 50 \, {\rm GeV}$ dmDM candidate looks like a $\sim 10 \, {\rm GeV}$ WIMP. \fref{DDspectraLUX} shows different $S1$ spectra at LUX, where a $\sim 50 \, {\rm GeV}$ dmDM candidate looks more like a $\sim 20 \, {\rm GeV}$ WIMP. This mass remapping compared to the standard contact operator interpretation is shown for different experiments in \fref{compare2to2to2to3}. This dependence of recoil suppression on the detector and DM parameters is unique to dmDM, and could be added to other DM models by including the emission of a light particle.
\subsection{Direct Detection Constraints}
\label{ss.ddconstraints}
We compute direct detection bounds on dmDM in two ways. The first is by remapping the bounds provided by the respective experimental collaborations using the remapping of dmDM to standard WIMP parameters \cite{Curtin:2013qsa}, part of which is shown in \fref{compare2to2to2to3}. These results are then reproduced, for verification, by using a full modified maximum likelihood analysis \cite{Barlow:1990vc} for each experiment. The resulting bounds in the direct detection plane for dominant $2\to3$ and $2\to2$ scattering in dmDM are shown in \fref{mappingbounds} and \fref{mappingbounds2to2}. To provide a lower boundary on the relevant parameter space we indicate where in the direct detection plane the dmDM signal gets drowned out by the irreducible neutrino background \cite{Billard:2013qya}.
For the $2\to3$ and $2\to2$ scattering regimes, direct detection probes $y_\chi/\Lambda$ and $y_\chi^2/\Lambda$ respectively. The neutron star cooling bound \eref{NSboundHL}, which for the $n_\phi = 2$ model requires $\Lambda \gtrsim 10 \, {\rm TeV}$, can be combined with the bounds on the dark matter Yukawa coupling $y_\chi$ and displayed in the direct detection planes of Figs. \ref{f.mappingbounds} and \ref{f.mappingbounds2to2}. The assumption of a thermal relic then sets bounds which supersede the liquid Xenon experiments for $m_\chi \lesssim 10 \, {\rm GeV}$.
For $n_\phi = 1$, the $2\to2$ loop process in \fref{feynmandiagram} dominates if $y_\chi \gtrsim 10^{-3}$. This is indicated, together with the neutron star bound, by the dashed orange line in \fref{mappingbounds}. However, in the $n_\phi = 2$ model of \sref{phisummary}, the vertical axis of Figs.~\ref{f.mappingbounds} and \ref{f.mappingbounds2to2} is $(y_\chi^{H}/\Lambda)^2$ and $(y_\chi^{H} y_\chi^L/\Lambda)^2$ respectively, so this orange line can be moved arbitrarily upwards. This means the $n_\phi = 2$ model can realize $2\to3$ dominated direct detection while being consistent with a thermal relic, as well as the SIDM solution to the inconsistencies between dwarf galaxy simulations and observation.
\begin{figure}
\begin{center}
\hspace*{-7mm}
\includegraphics[width=8cm]{2to3_alltransformedboundsLambdam2}
\end{center}
\caption{Direct detection bounds on the $2\rightarrow3$ regime of $n_\phi = 1$ dmDM. The vertical axis is proportional to $\sigma_{\chi N \to \bar \chi N \phi}$, and is understood to be $(y_\chi^H/\Lambda)^2$ for the $n_\phi = 2$ model of \sref{phisummary}. \emph{Solid lines}: 90\% CL bounds by XENON100 (red), LUX (black) and \mbox{CDMSlite} (purple), as well as the best-fit regions by CDMS II Si (blue, green). The large-dashed black line indicates where the dmDM signal starts being drowned out by the irreducible neutrino background \cite{Billard:2013qya}.
\emph{Small-dashed magenta line}: $y_\chi = y_\chi^\mathrm{relic}(m_\chi)$ and $\Lambda = 10 \, {\rm TeV}$. Parameters need to lie below this line for a thermal relic to be compatible with the neutron star cooling bound \eref{NSboundHL}.
\emph{Lower dotted orange line}: for $n_\phi = 1$, below this line $y_\chi$ is small enough to ensure the $2\to3$ process dominates direct detection while also satisfying the neutron star cooling bound. This line can be arbitrarily moved when $n_\phi = 2$.}
\label{f.mappingbounds}
\end{figure}
\begin{figure}
\begin{center}
\hspace*{-7mm}
\includegraphics[width=8cm]{2to2dmDM_alltransformedboundsLambdam2_no2to3dominancebound}
\end{center}
\caption{Direct detection bounds on the $2\rightarrow2$ regime of $n_\phi = 1$ dmDM. The vertical axis is proportional to $\sigma_{\chi N \to \bar \chi N}$, and is understood to be $(y_\chi^H y_\chi^L/\Lambda)^2$ for the $n_\phi = 2$ model of \sref{phisummary}.
Same labeling as \fref{mappingbounds}. }
\label{f.mappingbounds2to2}
\end{figure}
\vspace{3mm}
\section{Conclusion}
\label{s.conclusion}
Previous theoretical investigations have shown that direct detection can proceed very differently from the standard WIMP scenario. Investigating all possibilities is of vital importance. For one, the current list of experimental anomalies naively conflicting with other collaborations' bounds motivates the search for alternative interpretations of the data. Another general reason for achieving `full theoretical coverage' is the looming irreducible neutrino background \cite{Billard:2013qya} that direct detection could become sensitive to in about a decade. Hitting this neutrino floor without a clear dark matter signal is an undesirable scenario, but being left without alternative options to explore would be an even more dire situation.
\emph{Dark Mediator Dark Matter} is the first example of a slightly non-minimal dark sector where the mediators connecting dark matter to the Standard Model are themselves charged under the same symmetry that makes dark matter stable. Phenomenologically, this closes a long-standing gap in the list of investigated scenarios by realizing $2\to3$ nuclear scattering at direct detection experiments.
We carry out the first systematic exploration of light scalar mediators coupling to the SM quarks via operators of the form $\bar q q \phi \phi^*/\Lambda$. Their existence and coupling can be strongly constrained by cosmological bounds, LHC direct searches and stellar astrophysics, see \tref{phiconstraints}. Neutron star cooling excludes detectable dmDM scenarios with a single dark mediator completely, but an $n_\phi = 2$ scenario can easily evade all bounds while giving identical direct detection phenomenology.
The presence of a light mediator and additional particle emission means that the nuclear recoil spectrum of dmDM at direct detection experiments is strongly peaked towards the origin. The functional form of this recoil suppression and the overall cross section dependence on nucleus and DM mass is unique. As a consequence of this suppression, a $\sim 100 \, {\rm GeV}$ dmDM candidate fakes different $\mathcal{O}(10 \, {\rm GeV})$ standard WIMPs at different experiments. We compute direct detection bounds on dmDM for both nuclear scattering processes, $\chi N \to \chi N \phi$ and the loop suppressed $\chi N \to \chi N$ and find large regions that are not excluded but discoverable in the future. The abovementioned $n_\phi = 2$ scenario can realize $2\to3$ direct detection while being compatible with a thermal relic and the SIDM solution for the inconsistencies between dwarf galaxy simulations and observation.
Our model represents an interesting combination of light mediator and inelastic scattering ideas, since the latter is realized by having a light scalar $\phi$ from a direct-detection point of view. This allows us to smoothly map dmDM spectra to similar WIMP spectra, and the resulting map of dmDM parameters to WIMP parameters makes transparent how the direct detection bounds are re-interpreted (see also \cite{Curtin:2013qsa}). While dmDM does not reconcile the conflicting signals and constraints, it may point the way towards another model that does. For example, it might be interesting to explore how this new scattering process changes models with non-standard form factors or exothermic down-scattering.
\vspace{4mm}
\textbf{Acknowledgements}
The authors would like to gratefully acknowledge the contributions of Yue Zhao and Ze'ev Surujon during early stages of this collaboration.
We thank Patrick Meade for valuable comments on an early draft of this paper. We are very grateful to
Haipeng An,
Brian Batell,
Joseph Bramante,
Rouven Essig,
Greg Gabadadze,
Roni Harnik,
Jasper Hasenkamp,
Patrick Meade,
Matthew McCullough,
Olivier Mattelaer,
Ann Nelson,
Matthew Reece,
Philip Schuster,
Natalia Toro,
Sean Tulin,
Neal Weiner,
Itay Yavin and
Hai-Bo Yu
for valuable discussions. D.C. is supported in part by the National Science Foundation under Grant PHY-0969739. Y.T. is supported in part by the Department of Energy under Grant DE-FG02-91ER40674. The work of Y.T. was also supported in part by the National Science Foundation under Grant No. PHYS-1066293 and the hospitality of the Aspen Center for Physics. The work of D.C. and Y.T. was also supported by the hospitality of the Center for Future High Energy Physics in Beijing, China.
\section{Introduction}
Advances in photon measurements have stimulated innovation in encrypted communications and information processing, distinguished by a single datum being encoded in a single photon \cite{Tittel:01,chen:06}.
As relevant techniques mature, accuracy in photon generation and detection is growing to compete with the present standard of radiative flux.
A new standard is thus being defined in terms of photon flux, with photon counting as a central element for information processing with photons; this is distinguished from the present standard, whose absoluteness is given by foreign physical systems based on electrical current and temperature \cite{Willson:73,Martin_1985,QCandelta:07,Zwinkels:10}.
For the realization of this new standard, the accurate calibration of photon flux as well as high-purity single photon generation are prerequisite \cite{Chunnilall:14}.
Commonly used single photon counters like avalanche diodes and superconducting nanowires break down for a short period after each detection event \cite{Cova:96,Kerman:SucDetDeadT06}, and require post-corrections, based on assumed photon statistics, to represent the real radiant flux.
Recent correction techniques supposing a Poisson distribution have achieved repeatable accuracies with parameters of quantum efficiency and detection dead time, promising that the counting-based standard can soon compete \cite{Bae_2019}.
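As a minimal illustration of such a correction, a non-paralyzable dead-time model with assumed Poisson statistics can be sketched as follows; the numerical parameters are illustrative placeholders, not the calibrated values of \cite{Bae_2019}:

```python
def corrected_rate(measured_cps, dead_time_s, quantum_efficiency):
    """Recover the incident photon rate from a measured count rate,
    assuming Poisson statistics, a non-paralyzable detector dead time,
    and a known quantum efficiency."""
    live_rate = measured_cps / (1.0 - measured_cps * dead_time_s)
    return live_rate / quantum_efficiency

# e.g. 1 Mcps measured, 50 ns dead time, 60 % quantum efficiency (assumed)
rate = corrected_rate(1.0e6, 50e-9, 0.6)
```

The correction grows nonlinearly with the measured rate, which is why uniform, well-characterized photon fluxes are so valuable for validating the calibration parameters.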
To improve even further, the goal has become achieving single photon generation that deterministically furnishes a single photon only when the detector is ready to count, or at least with a certain and uniform probability \cite{Rodiek:17,Vaigu:2017,Molecule:19}.
Such an ideal source has to maintain single photon flux with low fluctuations and high repeatability.
Current sources, however, show complex behaviors of relaxations, and as a result their emission blinks \cite{Jeong:GAN,Konthasinghe:19}.
In this study, we focus on materials that emit single photon fluorescence at room temperature by investigating silicon vacancy in diamond, defects in GaN, and vacancy in hBN as spectrally narrow and accessible platforms.
We characterize photon number statistics and fluctuations since their degrees are the major factors in determining the accuracy of photon flux for radiometry applications.
Additionally, we compare the maximum count rate allowed for the bare materials under the conventional collection technique of confocal microscopy.
The reason why we stress this condition is that detection count rates are not intrinsic but rather depend on efficiencies given by refractive index geometries and detection techniques.
We note that the present experiments are limited to estimates of the internal quantum efficiency and the theoretical maximum count rates under continuous-wave operation; more specific methods to optimize collection efficiency, an important subject of photonics, remain for future, application-oriented studies.
To find general tendencies and characteristics from among the complexity and variety of our materials, the current work was based on a large amount of data collected from numerous emitters.
Our full dataset consists of two levels: the first concerns basic properties in identifying single photon emitters, and the second concerns figures of stability.
Data fields in the first level are of photon coincidence correlation $g^{(2)}(0)$ and spectra, which have been used to authenticate single photon fluorescence.
Statistical distributions of the positions of the spectral peaks were collected as a subject for later studies on defect states and their formations.
We discuss the results of these basic properties in \sref{sec:spe}.
Data fields in the second level include photon number uncertainty and the repeatability of measurements for conversion to radiometry flux.
They are examined with a dual detection system that measures photon streams with two modes of detection, namely, photon counting detectors with results in counts per second (cps) and photocurrent-generating photodiodes with results in joules per second (W).
Having two different detection mechanisms enables a comparison of outcomes for the same single photon stream as well as an examination of the uncertainty of conversion between the two measures.
Setting the system to photon counting detectors, we measured both the photon number fluctuation and the repeatability of photon flux measurements, evaluating the stability of the photon sources and of the detection system itself, respectively.
Analyzed results are discussed in sections~\ref{sec:stab}--\ref{sec:radiant}.
The main difficulty in radiant flux measurements is to find an emitter that produces a photon flux intense enough to be detected by the photocurrents of the photodiode.
To this end, we exploited an emitter of count rate $> 10^6$ per second and $g^{(2)}(0) < 1$.
We verified both the repeatability of measurement of the radiant flux generated by the single photon emitters and the equivalence of the calibration parameters of detection, as described in \sref{sec:radiant}.
We did not find additional corrections other than our given parameters \cite{Bae_2019}, which was expected from our collection efficiency of $< 6.4 \%$, which flattens any dissimilar photon number statistics into a Poisson distribution \cite{Loudon:105699}.
Increasing collection efficiency will be our main subject in future studies to achieve the advantages of sub-Poisson statistics, which is being pursued for few-photon metrology.
\section{Samples and Experimental Methods}\label{sec:method}
Materials of interest are silicon vacancy in diamond (vc-SiV), crystal defects in GaN (df-GaN), and vacancy in hBN (vc-hBN).
Their common feature is a wide band-gap of the host crystal that preserves single photon emission at room temperature.
The SiV sample in this work took the form of nano-diamonds on an iridium substrate, which were grown as a film by a silicon-free CVD process and milled to diameters of 50--100 nm.
Silicon was implanted after the clean CVD process to attain a high SiV purity \cite{Neu_2011}.
The GaN substrates are a commercially available 4 $\mu$m GaN crystal grown on sapphire \cite{GaN:Spec}.
The fluorescence center in GaN was explained to be a charge-trapped dot that locates at the intersection of a lattice dislocation stemming from the sapphire--GaN interface and a layer mismatch of crystal orientation, similar to the point-like potential wall in a two-dimensional quantum well \cite{Jeong:GAN}.
The hBN sample took the form of nano-flakes dispersed on an oxidized layer of silicon substrate. The nano-flakes are commercially available, but required a special treatment of annealing in an inert environment: 800$^{\circ}$C for 30 min in 1 Torr Ar gas \cite{hBN:ACS2016}.
Because emitters are randomly distributed, confocal microscopy has been commonly used to confine fluorescence signals.
We made use of the measurement system from our previous work, in which we set up modules of single photon collection with single mode fibers (SMFs) and analyzed their photon number statistics \cite{Lim:19}.
The setup has benefits of uninterrupted serial measurements of spatial positions, spectra and photon statistics ($g^{(2)}(0)$), and a high stability of maintained alignments that leads to repeatable measurements (see \sref{sec:radiant}).
The SMF interface gives identical beam profiles for analyzing the modules, leaving the external systems available for exploitation.
The theoretical maximum collection efficiency from the sample--air interface to the SMF output is 21 \%, assuming a Gaussian beam as in past work \cite{Lim:19}.
The collection efficiency of $< 6.4$ \% was derived by taking into account the mode coupling efficiency between electric dipole radiation and the SMF mode \cite{novotny_hecht_2012,Schneider:18}. (For more details, see \ref{sec:apenA}.)
The real collection efficiency is smaller than this prediction, however, once surface scattering and the variable nano-crystal shapes are taken into account.
Still, our photon count rate results are similar to other works that studied the same materials, implying a sufficient level of mechanical rigidity to maintain the count rate.
For the evaluation of photon number fluctuation and application to radiometry experiments, we constructed a radiometry module that includes the dual detection system described above.
This new module has two stages of detection: the first is counting photon flux with a silicon single photon avalanche detector (SPAD), and the second is measuring radiant power converted from the photodiode photocurrents.
They share the same single photon input injected via SMF, and the same incidence position at which we rotate groups of detectors to place.
This design is intended to attain consistency between the outcomes of the two detection mechanisms without any corrections for optical path loss (\sref{sec:radiant}).
The module also helped the measurement of photon number fluctuations, as shown in \sref{sec:stab}.
\section{Spectra and Photon Statistics}\label{sec:spe}
\begin{figure}[ht!]
\begin{center}
\includegraphics[width=10cm]{FIG2v2.pdf}
\end{center}
\caption{Spectra of single photon emitters of different materials: \textbf{(a)} silicon vacancy in diamond (SiV), \textbf{(b)} fluorescence defect in GaN crystal (GaN), and \textbf{(c)} vacancy in hBN (hBN). Insets are photoluminescence images scanned over the area centered on each fluorescence center.
\textbf{(d)} Histograms of spectral peaks of wavelength and linewidth for SiV (blue), GaN (red), and hBN (gray).
The gray zone is the resolution boundary limited by our spectrometer and its coupling optics.
}
\label{fig:spectra}
\end{figure}
Spectra of fluorescence centers commonly consist of a zero-phonon line (ZPL) and phonon side-bands, and show a high ratio of ZPL intensity to the phonon side-band, i.e., a high Debye--Waller factor, as shown in \fref{fig:spectra}(a--c).
The ZPL positions of vc-hBN and df-GaN depend on strain and defect formations, and are widely distributed over 600--750 nm and 690--800 nm, respectively [\fref{fig:spectra}(d)].
Because of the various kinds of defect formations and their large degrees of freedom, both df-GaN and vc-hBN have a wide linewidth on average, which can vary with the local strain of the host crystal; df-GaN in particular can be more directly affected by crystal strain owing to its formation mechanism.
On the other hand, vc-SiV has a definite ZPL position, $\sim737$ nm\cite{SiV:1996}, the formation of which is explicitly allowed by the diamond crystal.
\begin{figure}[t]
\begin{center}
\includegraphics[width=12cm]{FIG3V4raster.pdf}
\end{center}
\caption{Photon correlation ($g^{(2)}(\tau)$) acquired from a Hanbury Brown--Twiss interferometer for \textbf{(a)} silicon vacancy in diamond (blue), \textbf{(b)} defects in GaN (red), and \textbf{(c)} vacancy in hBN (gray). Solid lines show the model $g^{(2)}(\tau) = 1 - p_1 \exp (-\tau/\tau_1) + p_2 \exp (-\tau/\tau_2)$ with characteristic times of anti-bunching ($\tau_1$) and trapping in meta-stable dark states ($\tau_2$).
Fitted to this model, the zero-time correlations ($g^{(2)}(0)$) are derived to be \textbf{(a)} 0.38 $\pm$ 0.22, \textbf{(b)} 0.24 $\pm$ 0.14, and \textbf{(c)} 0.33 $\pm$ 0.05 (95 \% confidence).
From the full set of data attained from various fluorescence centers in the three materials, photon counts as acquired from the detectors are plotted for \textbf{(d)} $g^{(2)}(0)$, \textbf{(e)} $\tau_1$, and \textbf{(f)} $\tau_2$.
The gray zone in the figure (e) is the resolution boundary limited by time jitter noises of single photon detectors.
}
\label{fig:g2}
\end{figure}
For a statistical approach, we collected photon statistics from $>$ 20 fluorescence emitters.
Single photon emitters have a unique property of fluorescence, exhibiting a low coincidence of photon count.
The degree and time scale of this coincidence suppression have been commonly represented by a normalized correlation ($g^{(2)}(\tau) = \langle C(t+\tau) C(t)\rangle/\langle C(t)\rangle^2$) of photon count ($C$) as measured by a Hanbury Brown--Twiss interferometer (HBT) \cite{Loudon:105699}.
We employed two methods of deducing $g^{(2)}(\tau)$ experimentally: start-stop histogram and time-tag correlation (TTC).
The former is advantageous for real-time acquisition as the trigger intervals between signals from two SPADs are collected.
The latter, however, has a better convergence at $g^{(2)}(\infty) \rightarrow 1$ because this method stores raw time-tags and deduces a normalization factor from the total count of each detector during data processing,
which gives reliable results for $g^{(2)}(0)$.
In our study, estimations of $g^{(2)}(\tau)$ were based on raw data given by the TTC method.
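A minimal sketch of the TTC estimator (a hypothetical helper, not our acquisition software) histograms coincidences between the two detectors' sorted time tags and normalizes by the pair rate expected for uncorrelated streams:

```python
import numpy as np

def g2_from_tags(tags1, tags2, T, tau_max=100e-9, bin_w=2e-9):
    """Estimate g2(tau) from two sorted time-tag arrays (in seconds)
    recorded over total time T, normalized by the mean count rates."""
    edges = np.arange(-tau_max, tau_max + bin_w, bin_w)
    hist = np.zeros(edges.size - 1)
    for t in tags1:
        # only tags2 within the correlation window contribute pairs
        lo = np.searchsorted(tags2, t - tau_max)
        hi = np.searchsorted(tags2, t + tau_max)
        hist += np.histogram(tags2[lo:hi] - t, bins=edges)[0]
    # expected pairs per bin for uncorrelated (Poissonian) streams
    norm = len(tags1) * len(tags2) * bin_w / T
    return 0.5 * (edges[:-1] + edges[1:]), hist / norm
```

For two independent Poissonian tag streams this estimator converges to $g^{(2)}(\tau) \approx 1$ at all delays, which is the property that makes the deduced $g^{(2)}(0)$ absolute rather than relative.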
The $g^{(2)}(\tau)$ curves from our samples, however, show composite features of both anti-bunching ($g^{(2)}(\tau) < 1 $) at $|\tau| < \tau_1$ and bunching ($g^{(2)}(\tau) > 1$) at $\tau_1<|\tau| < \tau_2$ simultaneously.
Here, $\tau_1$ and $\tau_2$ are effective time constants defined by the fitting model $g^{(2)}(\tau) = 1 - p_1 e^{-|\tau|/\tau_1} + p_2 e^{-|\tau|/\tau_2}$, where $p_1$ is the depth contributing to the anti-bunching of $g^{(2)}(0)$, $p_2$ is the height above $g^{(2)}(\tau) > 1$,
$\tau_1$ is the time width of anti-bunching, and $\tau_2$ is the characteristic exponential decay time of bunching.
We collected $\tau_1$ and $\tau_2$ because they have physical origins: $\tau_1$ is the spontaneous emission time for an ideal photon source or relates to the lifetime of the radiative transition for single photon emission, while $\tau_2$ stems from non-radiative relaxations, which can cause blinking in the photon emission.\cite{Santori:2004bv}
If a single emitter is trapped in meta-stable dark states, it stops emitting until it is released to bright states, with $\tau_2$ directly representing this time scale.
With this model, we observed $g^{(2)}(0) = 0.38 \pm 0.22$ for vc-SiV, $0.24\pm 0.14$ for df-GaN, and $0.33 \pm 0.05$ for vc-hBN, as shown in \fref{fig:g2}(a--c), where the errors are the widths of the 95 \% confidence interval calculated via robust covariance estimation \cite{RCVME}.
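A fit of this kind can be sketched with a generic least-squares routine; the synthetic data and parameter values below are assumptions for illustration, and the covariance handling is simpler than the robust estimation used for our quoted intervals:

```python
import numpy as np
from scipy.optimize import curve_fit

def g2_model(tau, p1, p2, tau1, tau2):
    """g2(tau) = 1 - p1 exp(-|tau|/tau1) + p2 exp(-|tau|/tau2)."""
    at = np.abs(tau)
    return 1.0 - p1 * np.exp(-at / tau1) + p2 * np.exp(-at / tau2)

# synthetic correlation data (times in ns, parameters assumed)
rng = np.random.default_rng(1)
tau = np.linspace(-200.0, 200.0, 401)
data = g2_model(tau, 0.9, 0.3, 5.0, 60.0) + rng.normal(0.0, 0.02, tau.size)
popt, pcov = curve_fit(g2_model, tau, data, p0=(0.5, 0.1, 3.0, 30.0))
g2_zero = g2_model(0.0, *popt)            # extrapolated g2(0) = 1 - p1 + p2
ci95 = 1.96 * np.sqrt(np.diag(pcov))      # naive 95 % half-widths
```

Note that $g^{(2)}(0)$ is extrapolated from the fitted $p_1$ and $p_2$ rather than read off a single bin, which is what makes the estimate sensitive to detector time jitter when $\tau_1$ is short.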
The full $g^{(2)}(0)$ data was measured from $>$ 20 fluorescence centers of our samples as shown in \fref{fig:g2}(d).
The large errors in the $g^{(2)}(0)$ obtained from SiV emitters are due to the short $\tau_1$ of these emitters and a time jitter of the detector.
To restore the pure $g^{(2)}(0)$ before time jitter noises, we tried a deconvolution method with a noise filter $H(\tau;D) = (D\sqrt{2 \pi})^{-1}\exp(-\tau^2/2 D^2)$ assumed for a time jitter $D = 0.3$ ns \cite{Nomura:2010gx}.
Our method of deconvolution is to fit data with a convolution form of exponential function.
However, this method is redundant as its results are similar to the previous estimation of $g^{(2)}(0)$ with parameters of $p_1$ and $p_2$, and they can be biased when $D$ is overestimated.
It is a common tendency, regardless of material, that the time jitter errors of $g^{(2)}(0)$ grow as $\tau_1$ is shortened.
Since $g^{(2)}(\tau)$ measured by TTC is in absolute values, we could take the average $g^{(2)}(0)$ for some intervals of $\tau \in (-\tau_1, \tau_1)$.
These experimentally allowed values are also presented as transparent points from 18/hBN fluorescence emitter shown in \fref{fig:g2}(d).
Their differences from pure values $g^{(2)}(0)$ of the model are reflected in confidence intervals for all emitters.
The $g^{(2)}(0)$ values closest to 0 in our records were $0.24\pm 0.14$ from df-GaN and $0.31 \pm 0.05$ from vc-hBN, both of which were attained at low excitation power.
The effective excitation power differed among materials and samples due to the differences in refractive index, diameter of grains, and geometry.
However, the set of data used for \fref{fig:g2}(d--f) comprises the full range of photon count rates achieved from far below and above the saturation points of each material.
We suspect that the lowest value of $g^{(2)}(0) > 0.2$ can either be attributed to background photoluminescence, including deeper infrared, as we used a long pass filter with a 568 nm edge for purification, or the unresolved single photon emitting transitions in a ZPL \cite{Alexander:2019um}.
We discuss this in \sref{sec:radiant} with \fref{fig:conv}.
Our result shows that $\tau_1$ and $\tau_2$ decrease with an increasing excitation power for every material we observed.
These variables of the $g^{(2)}(\tau)$ model have their origin in relaxation processes of the fluorescent materials.
The power dependence of $\tau_1$ evidently shown with df-GaN and vc-hBN implies that $\tau_1^{-1}$ represents a re-excitation rate rather than a spontaneous emission rate under moderate $P$ that allows exploitable photon count rates.
Nevertheless, we can expect large spontaneous emission rates from SiV, whose $\tau_1$ is so short that it approaches the instrumental time jitter limit over the entire range of $P$.
According to the three level model, $\tau_2$ are related to a recovering relaxation from meta-stable states (deshelving), and also depends on excitation power because it gives more chances of initializing ionization \cite{Santori:2004bv,Jeong:GAN,ASTAKHOV2018211}.
We observed high count rates of $> 10^6$ cps with vc-hBN, similar to other studies \cite{C7NR08249E}, and low count rates of $< 2\times 10^5$ cps with vc-SiV.
This result is opposed to the long $\tau_{1,2}$ of hBN, as shown in \fref{fig:g2}(e) and (f), and to the intuition that fast transitions allow high photon rates.
vc-SiV showed the fastest transitions and blinking among the materials of interest, but their exact picture has yet to be unveiled well enough to predict the internal efficiency of fluorescence emission at room temperature \cite{SiV:APL:2011,Lindner_2018}.
The photon count rate of df-GaN, $< 3 \times 10^5$ cps, seems limited by total internal reflections at the GaN--air interface, which can be overcome by an immersion medium before the objective lens.
Otherwise, vc-hBN is preferable for radiometry experiments that require a wide range of photon counts on the order of $> 10^6$ cps.
\section{Photon number fluctuation}\label{sec:stab}
\begin{figure}[h]
\centerline{\includegraphics[width=7.5 cm]{fig4_PwStability_2.pdf}}
\caption{Relative number uncertainty $\sigma(N)/\langle N\rangle$ of average photon number $\langle N\rangle$ from various fluorescence centers of different materials: silicon vacancy in diamond (SiV, blue), defect in GaN (red), and vacancy in hBN (gray).
Each point was acquired from a 10 s long streaming acquisition with a time bin width of 10 ms.
Data collected with different emitters were distinguished by shape of marker specified in \fref{fig:g2}.
}
\label{fig:stability}
\end{figure}
Low photon number uncertainty is required for an ideal photon source.
From the various values of $\tau_2$ across emitters and materials, which are related to the time scales of emission blinking, we can infer the need to find an emitter with low photon number fluctuation.
We first measured the photon number uncertainty ($\sigma (N)/\langle N\rangle$), defined as the ratio of the standard deviation of photon number ($\sigma (N)$) to its average value ($\langle N\rangle$).
These statistical variables were obtained by a single shot of streaming acquisition of photon counts over 10 s.
Photon number $N$ is the accumulated photon count in a 10 ms long time bin ($\Delta t$) within the streaming acquisition.
Thus, we clarify the terminological distinction between $N$ and photon count rate $C$ by the following relation:
\begin{equation}
N_i = \int_{t_i}^{t_i + \Delta t} C(t)\, \mathrm{d}t,
\end{equation}
for an $i$-th time bin $[t_i, t_i + \Delta t]$.
In experiments, we measured $N_i$ directly from an edge counter with a fixed $\Delta t$ and then deduced $C_i$ via $C_i = N_i/\Delta t$.
For shot-noise limited $N$ in Poisson statistics, as acquired for $\Delta t > \tau_1$, the defined photon number uncertainty $\sigma (N)/\langle N\rangle$ reduces to $1/\sqrt{\langle N\rangle}$, which tends to decrease with increasing $\langle N\rangle$.
This assumption alone, however, cannot account for the observed differences in uncertainty between the materials shown in \fref{fig:stability}(a).
Measured values of the uncertainty are split into two groups: SiV with GaN, and hBN on the other side.
With a few exceptions, every hBN emitter exhibited a larger $\sigma (N)/\langle N\rangle$ than SiV and GaN.
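For reference, the plotted quantity can be computed from a streaming acquisition as sketched below; the Poisson-generated stream is a stand-in for measured data and represents the shot-noise-limited case:

```python
import numpy as np

def relative_uncertainty(N):
    """sigma(N)/<N> for a stream of per-bin photon numbers N_i."""
    N = np.asarray(N, dtype=float)
    return N.std(ddof=1) / N.mean()

# shot-noise-limited stand-in: Poisson counts for ~130 kcps in 10 ms bins
rng = np.random.default_rng(2)
mean_N = 1.3e5 * 0.01                    # <N> = C * dt = 1300 per bin
stream = rng.poisson(mean_N, size=1000)  # 10 s of 10 ms bins
ratio = relative_uncertainty(stream)
shot_limit = 1.0 / np.sqrt(mean_N)       # Poisson expectation, ~0.028
```

Any excess of the measured ratio over the $1/\sqrt{\langle N\rangle}$ limit signals noise beyond photon shot noise, such as blinking.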
\begin{figure}[h]
\centerline{\includegraphics[width=10cm]{FIG4Braster.pdf}}
\caption{Photon number variance $\langle \Delta N^2\rangle$ with respect to average photon number $\langle N \rangle$, measured from fluorescence emissions of defect in GaN (red) and vacancy in hBN (black) samples.
Photon number $N$ is an integrated variable of count rate $C$ for time bin $\Delta t$ as $N = C \Delta t$.
Varying $\Delta t$ from 1 ms to 1 s, the statistical values of $N$ were taken from streaming acquisitions for 200 s, as shown in the inset.
The gray line is the shot-noise limit: $\langle \Delta N^2\rangle_{\mathrm{shot}} = \langle N \rangle$.
The solid lines follow the model $\langle \Delta N^2\rangle = \langle N \rangle + \nu_2 \langle N \rangle^2$, with $\nu_2 = 7.9 (\pm 0.3)\times10^{-5}$ for GaN and $1.3 (\pm 0.05)\times10^{-3}$ for hBN.
}
\label{fig:fluc}
\end{figure}
Additional noises other than the shot-noise $\langle \Delta N^2\rangle_{\mathrm{shot}} = \langle N \rangle$ are clearly seen.
We examined $\langle \Delta N^2\rangle$ with integration times ($\Delta t$) from 1 ms to 1 s for df-GaN and vc-hBN;
$\langle N \rangle$ was set to a similar value ($1.3 \times 10^5$ cps) by adjusting excitation power.
In this condition, $g^{(2)}(0) = 0.41 \pm 0.04$ for the df-GaN and $0.54 \pm 0.03$ for the vc-hBN, with both emitters below saturation.
We extrapolated the noise model to include the quadratic term of $\langle N \rangle$ with the coefficient $\nu_2$:
\begin{equation}
\langle \Delta N^2 \rangle = \langle N \rangle + \nu_2 \langle N \rangle^2.
\end{equation}
This model was fitted to df-GaN with $\nu_2 = 7.9 (\pm 0.3) \times 10^{-5}$, and as shown in \fref{fig:fluc}, $\langle \Delta N^2 \rangle$ was close to $\langle \Delta N^2\rangle_{\mathrm{shot}}$ at $\Delta t <$ 20 ms and small $\langle N \rangle$.
However, the data from vc-hBN was close to the model only for large $\langle N \rangle > 10^4$ with $\nu_2 = 1.3 (\pm 0.05) \times 10^{-3}$, and did \textit{not} converge to $\langle \Delta N^2\rangle_{\mathrm{shot}}$ at short $\Delta t$.
The $\langle \Delta N^2 \rangle$ of vc-hBN was greater than that of df-GaN by an order of magnitude, as much as the difference of $\nu_2$ between them.
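Extracting $\nu_2$ from binned variance data amounts to a one-parameter least-squares fit; in the sketch below the variance data are synthetic, with 5 \% scatter and $\nu_2$ set near the df-GaN value for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def noise_model(mean_N, nu2):
    """<dN^2> = <N> + nu2 <N>^2 (shot noise plus quadratic excess)."""
    return mean_N + nu2 * mean_N**2

# synthetic (mean, variance) pairs, e.g. from varying the bin width
rng = np.random.default_rng(3)
mean_N = np.logspace(2, 5, 12)
true_nu2 = 7.9e-5
var_N = noise_model(mean_N, true_nu2) * rng.normal(1.0, 0.05, mean_N.size)
(nu2_fit,), pcov = curve_fit(noise_model, mean_N, var_N, p0=(1e-4,))
```

Because the quadratic term dominates at large $\langle N\rangle$, the fitted $\nu_2$ is set mainly by the longest bin widths, while short bins probe the approach to the shot-noise line.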
The large fluctuation of $N$ in vc-hBN can easily be seen from the real-time data in the inset of \fref{fig:fluc}.
The origin is related to the blinking phenomenon, as vc-hBN has a long $\tau_2 = 1.0 \pm 0.1$ $\mu$s.
This value is in contrast to the short $\tau_2= 53 \pm 5$ ns of df-GaN.
In our survey of $>$ 36 fluorescence centers, most vc-hBN $\tau_2$, which are related to the time dynamics of the blinking, are slower than those of GaN, as shown in \fref{fig:g2}(f).
This analysis of $\tau_2$ is, however, still insufficient to extrapolate the general blinking dynamics, since the sampling time window prevented us from measuring $\tau_2 > 1$ ms $\sim \Delta t$ in $g^{(2)}(\tau)$.
Nevertheless, this tendency of slow blinking dynamics being more clearly seen in vc-hBN, as shown in the inset of \fref{fig:fluc}, is intriguing.
On the other hand, df-GaN with the fast $\tau_2$ did \textit{not} exhibit such fluctuations, and enabled the low noise $\sim \langle \Delta N^2\rangle_{\mathrm{shot}}$ with $\Delta t < 10$ ms.
Because of the low $\langle N \rangle$ of vc-SiV, we could not perform a fair test of $\nu_2$ for the vc-SiV samples.
However, the analogous behavior of $\langle \Delta N^2 \rangle$ with df-GaN as shown in \fref{fig:fluc} leads us to expect that $\langle \Delta N^2 \rangle$ of vc-SiV would be as low as that of df-GaN if it had a similar level of $\langle N \rangle$.
The low $\langle N \rangle$ of vc-SiV also has disadvantages in obtaining repeatable $S$, as in the following section.
\section{Conversion Between Photon and Radiant Fluxes with High Repeatability}\label{sec:radiant}
Before single photon emitters can be applied in radiometry experiments, the reliability of the measurement results must be evaluated.
The central quantity of our assessment is the repeatability of the photon flux ($\Phi_q$) or photon count ($C$) measurement followed by conversion to radiant flux ($S$).
We performed two tests: one to deduce the repeatability errors of $\langle N\rangle$, and the other to confirm the validity of applying present calibration parameters for photon counts to represent radiant fluxes.
The dual detection module introduced in \sref{sec:method} is well suited for performing these tests.
With this module, we measured $C$ and $S$ from a SPAD and photodiode, respectively, and cross-checked the independent results with previously proven calibration parameters.
\begin{figure}[h]
\centerline{\includegraphics[width=7.5 cm]{fig4_Repeatability_2.pdf}}
\caption{Repeatability of $\langle N\rangle$ ($=\sigma(\langle N\rangle)/\sqrt{M}$, where $M$ is the number of repeated measurements).
Single values of $\langle N \rangle$ were acquired from a 10 s long streaming acquisition at a sampling rate of 10 ms.
$\sigma(\langle N\rangle)$ was calculated by $M$ values of $\langle N \rangle$, where $M=5$ was set to extract a quantity of repeatability.
Data in the green circles were attained from high repetitions up to $M=20$.
Data collected with different emitters were distinguished by shape of marker specified in \fref{fig:g2}.
}
\label{fig:repeat}
\end{figure}
Repeatability errors were measured via the following process.
We repeated the streaming acquisition of $N$ the same as we did for photon number uncertainty in \sref{sec:stab}.
We took the average value ($\langle N\rangle$) of each shot ($i$) and repeated $M$ times to obtain $\langle N\rangle_{1\leq i\leq M}$, thereby yielding the repeatability error as $\sigma (\langle N \rangle)/\sqrt M$ according to its theoretical definition.
In order to obtain practical values, we inserted an $S$ measurement process between each shot of $N$ acquisition to mirror the calibration sequence in radiometry experiments.
Hence, we attain results of both repeatability error and data on $C$ and $S$.
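The repeatability error $\sigma(\langle N\rangle)/\sqrt{M}$ follows directly from the $M$ shot averages. A minimal sketch with hypothetical shot data:

```python
# Repeatability error of <N>: the sample standard deviation of the M shot
# averages divided by sqrt(M). The shot values below are hypothetical.
import math

def repeatability_error(shot_averages):
    m = len(shot_averages)
    mean = sum(shot_averages) / m
    # sample standard deviation of the shot averages
    var = sum((x - mean) ** 2 for x in shot_averages) / (m - 1)
    return math.sqrt(var) / math.sqrt(m)

shot_averages = [1.30e5, 1.31e5, 1.29e5, 1.30e5, 1.30e5]  # M = 5 shots
err = repeatability_error(shot_averages)
rel_ppm = err / (sum(shot_averages) / len(shot_averages)) * 1e6
```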
As shown in \fref{fig:repeat}, df-GaN, which has low $\langle \Delta N^2 \rangle$, demonstrates a high repeatability of $\langle N\rangle$ measurement.
We extended $M$ to 20 to obtain the upper bound of repeatability with a qualified df-GaN emitter with $g^{(2)}(0) = 0.43 \pm 0.09$ and $C = 1.3 \times 10^5$ cps.
This best result reaches 30 ppm, as marked with green circles in \fref{fig:repeat}.
We note that this value is close to the present repeatability of radiometry experiments with this laser source.
\begin{figure}[h]
\centerline{\includegraphics[width=10cm]{FIG5raster_2.pdf}}
\caption{\textbf{(a)} Relation of radiant flux (red dots, $I$) with respect to photon count rate ($C$), where detections are based on different mechanisms of operation.
The radiant fluxes (y-axis) are given by currents produced in a traceable, high-sensitivity photodiode, and the photon fluxes are from a single photon counter (SPC).
The solid red line represents the theoretical relation $S = \frac{hc}{\lambda} \eta \Phi_q$ (identical to \eref{eq:identity}), where $\eta$ is the quantum efficiency of the SPC and $\lambda$ is the center wavelength.
The black squares show $g^{(2)}(0)$ varied at different levels of $C$.
\textbf{(b)} The spectrum of the single photon source presents mainly as the Lorentzian function centered on 665 nm with a FWHM of 2 nm in wavelength (1) while containing small peaks (2--5).
\textbf{(c)} Photon autocorrelation measured at an excitation power of 0.2 mW and a photon count of $2 \times 10^5$ cps.
All were measured with 18/hBN (see \fref{fig:g2}).
}
\label{fig:conv}
\end{figure}
Despite the low fluctuation and high repeatability, the maximum $C$ obtainable from df-GaN is limited to around $2 \times 10^5$ cps [\fref{fig:g2}(d--f)].
This also limits the signal-to-noise ratio (SNR) to 22 for $S$ measurements by a silicon photodiode having a noise equivalent power (NEP) of 8.4 fW/$\mathrm{Hz}^{\frac{1}{2}}$ and integrating photocurrents for 10 s \cite{Mountford:08,Park:16}.
To take advantage of the high SNR of $S$ and the wide range of flux levels shown in \fref{fig:conv}, we chose 18/hBN among vc-hBN emitters (see \fref{fig:g2}), whose maximum $C \sim 2 \times 10^6$ cps.
We adjusted the level of $C$ by controlling the excitation power ($P$).
$C$ has a behavior following the function $C =C_0 (1 - e^{-P/P_\mathrm{sat}})$ with a saturation count rate of $C_0 = 1.8\times 10^6$ cps and a saturation power of $P_\mathrm{sat} = 1.3 \pm 0.2$ mW.
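The saturation behaviour can be sketched numerically with the fitted values quoted above:

```python
# Saturation model for the photon count rate versus excitation power:
#   C(P) = C0 * (1 - exp(-P / P_sat)),
# using the values fitted for 18/hBN in the text.
import math

C0 = 1.8e6     # saturation count rate (cps)
P_sat = 1.3    # saturation power (mW)

def count_rate(P):
    return C0 * (1.0 - math.exp(-P / P_sat))

# At P = P_sat the count rate reaches (1 - 1/e) of C0,
# and for P >> P_sat it approaches C0.
c_at_sat = count_rate(P_sat)
c_high = count_rate(20 * P_sat)
```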
Over this wide range of $C$, $g^{(2)}(0)$ remained within the range 0.37--0.58.
The uncertainty of $g^{(2)}(0)$ (95 \% confidence interval) increased at high $C$, as strong excitation power shortens the anti-bunching time width $\tau_1$ into the time jitter limit of the SPAD, as shown in \fref{fig:g2}(e).
As shown in \fref{fig:conv}(c), $g^{(2)}(0) = 0.53 \pm 0.03$ when measured at a low excitation power of $0.15 \times P_\mathrm{sat}$, and it decreases to 0.35 at higher $P$.
We attribute this irregularity to the presence of other transitions that independently emit single photons.
Spectral investigations reveal that the ZPL is composed of two Lorentzian peaks, labeled \textit{1} and \textit{2} in \fref{fig:conv}(b).
Similar findings were observed in another work, and according to the previous analysis we can predict that cross-correlations between independently emitted single photons increase $g^{(2)}(0)$ \cite{Alexander:2019um}.
As the overlapped peaks \textit{1} and \textit{2} have mostly similar areas, the degree of mixture is sufficient to be the main cause of the high $g^{(2)}(0)$.
We also attribute the oscillating behavior of $g^{(2)}(0)$ with respect to $P$ to a property of mixture in which different excitation cross-sections of fluorescence transitions make them compete in contributing to a sum of photon count rate.
Thanks to the narrow linewidth ($\Delta \lambda$ $\sim 2$ nm) shown in \fref{fig:conv}(b), we took a single value of $\eta$ at the center wavelength $\lambda_c = 665.8 \pm 0.03$ nm \cite{Bae_2019}, with the justification that $\Delta \eta$ at the $\lambda$ within the narrow line is smaller than the measured uncertainty (0.5 \%).
Error caused by detection dead time and nonlinear counting was predicted to be 0.2 \% at $C=10^5$ cps and grow to 4 \% at $C=5 \times 10^5$ cps, so that error correction is critical for $C > 10^5$ cps.
For this, we used the correction function $\hat C = U(C)$ to present the corrected count rate $\hat C$, as defined in a previous work \cite{Bae_2019}.
At this stage, with given quantum efficiency $\eta$, we can extract the important $\Phi_q = \eta \times \hat C$.
In practice, we can reduce this to $\Phi_q = \mathrm{DE}(C) \times C$ by introducing an effective function $\mathrm{DE}(C)$ that includes both the effect of $\eta$ and of $U: C \rightarrow \hat C$ \cite{Bae_2019}.
Then we represent the relation between $S$ and $\Phi_q$ in a convenient form with the uncorrected variable $C$:
\begin{equation}\label{eq:identity}
S = \frac{hc}{\lambda} \times \mathrm{DE} (C) \times C.
\end{equation}
This relation with given $\mathrm{DE} (C)$ agrees well with the experimental $S$ and $C$ data shown in \fref{fig:conv}(a).
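As a numerical illustration of this relation, the sketch below converts a count rate into a radiant flux. The $\mathrm{DE}(C)$ model and its parameter values are purely hypothetical; in practice the calibrated function is taken from the reference data:

```python
# Radiant flux from an (uncorrected) photon count rate via
#   S = (h c / lambda) * DE(C) * C,
# where DE(C) folds the detector quantum efficiency and the dead-time
# correction U into one effective function. The DE model below is
# purely hypothetical; in practice it comes from calibration data.

H = 6.62607015e-34      # Planck constant (J s)
C_LIGHT = 299792458.0   # speed of light (m/s)

def radiant_flux(counts, wavelength_m, de):
    """Radiant flux in watts for a count rate in counts per second."""
    return (H * C_LIGHT / wavelength_m) * de(counts) * counts

# Hypothetical effective detection-efficiency function: a constant
# quantum efficiency with a small linear nonlinearity term.
def de_model(counts):
    eta = 0.65                             # assumed quantum efficiency
    return eta * (1.0 + 2.0e-8 * counts)   # assumed nonlinearity term

S = radiant_flux(1.0e5, 665.8e-9, de_model)  # on the order of tens of fW
```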
We first note that the parameters given in previous works were applied here identically, even though the measurement system in this work was newly fashioned.
Second, identical results were achieved even with a non-classical light source that was not fully controlled, as this new source contained uncontrolled internal dynamics related to $\tau_2$ and the blinking phenomenon.
Such a photon source may reduce the repeatability of $C$ due to a high $\langle\Delta N^2\rangle$, but it does not cause contradictions with the given parameters in $\mathrm{DE} (C)$ because the parameters are central values.
\section{Discussions}
We have studied single photon emitters based on fluorescence defects in various crystal materials at room temperature.
Silicon vacancy in diamond, defects in GaN, and vacancy in hBN platforms all have a narrow linewidth, and their spectrum centers are spread over a wide range of wavelengths.
This is, in fact, an important advantage for few-photon radiometry where we have various quantum efficiencies as variable parameters in conversion models for radiant power.
Both stability and brightness are important qualities of single photon sources in few-photon metrology.
None of the materials investigated in this study are endowed with both characteristics simultaneously.
For example, although single photon sources in hBN nano-flakes exhibit high rates of photon detection, they are interrupted by slow blinking, which causes severe fluctuations in photon count rates.
On the other hand, those in GaN show a high degree of stability and a high repeatability of emission rate, while their photon count rates are lower than those attained from hBN.
Such differences can be expected from their shapes.
Emitters in hBN are close to the surface or edges in a nano-flake, where electrostatic fields can have large effects and cause vulnerability to charge fluctuations.
The scenario of severe blinking in hBN is also supported by a recent study that revealed many internal states and frequent relaxations between them, even at low temperature.
On the other hand, emitters in GaN are embedded at a depth of a few micrometers, and this reduces the fluctuations caused by electrostatic fields.
However, the flat surface with high refractive index of GaN significantly decreases its photon collection efficiency, according to the total internal reflection effect \cite{Bowman:14}.
Various methods have been developed to alleviate total internal reflection; mitigating such surface effects has long been a subject of study for solar cells and light-emitting diodes.
The simplest solution is a solid immersion lens, a family that includes various lens techniques like micro-half spheres and meta-surfaces.
These methods do not modify the regions near the emitters beneath the surface.
We expect these methods to support high collection efficiencies, which will enable us to apply non-classical photon number statistics, such as those corresponding to a Fock state, and to suppress the photon-number uncertainty.
\section{Introduction}
Predictive business process monitoring (\ppm) techniques predict future process behaviour to improve operational business processes~\cite{maggi.2014}. %
A \ppm{} technique constructs predictive models from historical event log data~\cite{marquez.2017} to tackle different prediction tasks like predicting next activities, %
process outcomes %
or remaining time~\cite{di.2018}. %
Concerning the next activity prediction, recent \ppm{} techniques use state-of-the-art deep neural networks (DNNs) to learn predictive models for producing more accurate predictions in running process instances~\cite{weinzierl.2020}.
DNNs belong to the class of deep-learning (DL) algorithms. DL is a subarea of machine learning (ML) that identifies intricate structures in high-dimensional data through multi-representation learning~\cite{lecun.2015}.
After learning, models can predict the next most likely activity of running process instances.
However, providing the next most likely activity does not necessarily support process stakeholders in process executions~\cite{teinemaa.2018}.
Organisations measure the performance of processes through key performance indicators (KPIs) in regard to three dimensions: time, cost and quality~\cite{vanderAalst.2016}.
Recent \ppm{} techniques rely on state-of-the-art DNNs that can only learn predictive models from event log data.
Even though an event log can include KPI information, it does not directly affect such an algorithm's learning procedure unless a KPI is the (single) learning target itself.
As a consequence, the learned models can output next activity predictions, which are less beneficial for process stakeholders.
Some works tackled this problem with prescriptive business process monitoring (\magic) approaches.
\magic{} approaches assess predictions regarding their impact on the process performance -- typically measured by KPIs -- to prevent undesired activities~\cite{teinemaa.2018}.
To achieve that, existing approaches generate alarms~\cite{teinemaa.2018,fahrenkrog.2019,metzger2017predictive,metzger2019dl} or recommend actions~\cite{conforti.2013,groger.2014}.
However, none of these approaches recommends next best actions in the form of process activities that are optimised regarding a given KPI for running processes.
In our case, \emph{best} refers to a KPI's optimal value regarding the future course of a process instance.
Additionally, the next best actions, which depend on next activity predictions and the prediction of a particular KPI, might deviate from the actual business process. Therefore, transformation methods should check whether a recommended action conforms to a process description.
\vspace{-0.5cm}
\begin{figure}[htb]
\centering
\includegraphics[width=7.5cm]{res/action_prediction.pdf}
\caption{A next activity prediction vs. a next best action recommendation.}
\label{fig:predact}
\end{figure}
\vspace{-0.7cm}
Given a running process instance of a purchase order handling process in which the first two activities have finished, a DNN model predicts the next most likely activity D (cf. (a) in Fig.~\ref{fig:predact}). Additionally, the KPI \textit{time} is of interest: the first two activities A and B each take $1$ hour, the predicted activity D takes $2$ hours and the last activity E takes $2$ hours. In sum, the complete process instance takes $6$ hours, and a deadline of $5$ hours -- which exists due to a general agreement -- is exceeded.
In contrast, with optimisation alone, the recommended action can be the activity E (cf. (b) in Fig.~\ref{fig:predact}). Although the complete process instance then takes only $4$ hours, the mandatory activity D is skipped.
With an additional simulation, the activity ``Remove payment block" can be recommended (cf. (c) in Fig.~\ref{fig:predact}), taking $1$ hour. Afterwards, the activities D and E follow with durations of $2$ hours and $1$ hour, respectively; here, the activity E takes $1$ hour instead of $2$ hours since the payment block is already removed. Thus, the complete process instance takes $5$ hours, and the deadline is met.
In this paper, we provide a \magic{} technique for recommending the next best actions depending on a KPI. Thereby, it conducts a business process simulation (BPS) to remain within the allowed control-flow.
To reach our research objective, we develop a \magic{} technique and evaluate its actions regarding the optimisation of a KPI and the distance from ground truth process instances with two real-life event logs.
This paper is an extended and revised version of a research-and-progress paper~\cite{weinzierl.2020c}. Additionally, it includes a BPS and an evaluation with two real-life logs.
The paper is structured as follows:
Sec.~\ref{sec:background} presents the required background for our~\magic{}~technique.
Sec.~\ref{sec:artifact} introduces the design of our \magic{} technique for recommending next best actions.
Further, we evaluate our technique in Sec.~\ref{sec:eval}.
Sec.~\ref{sec:discussion}~provides a discussion.
The paper concludes with a summary and an outlook on future work in Sec.~\ref{sec:conclusion}.
\section{Background}
\label{sec:background}
\subsection{Preliminaries}
\ppm{} and \magic{} techniques require event log data. We adapt definitions by Polato et al.~\cite{polato.2014} to formally describe the terms \textit{event}, \textit{trace}, \textit{event log}, \textit{prefix} and \textit{suffix}.
In the following, $\mathcal{A}$ is the set of process activities, $\mathcal{C}$ is the set of process instances~(cases), and $C$ is the set of case ids with the bijective projection $id : C \to \mathcal{C}$, and $\mathcal{T}$ is the set of timestamps.
To address time, a process instance $c \in \mathcal{C}$ contains all past and future events, while a trace $\sigma_c$ of $c$ contains all events up to the currently available time instant.
$\mathcal{E}= \mathcal{A} \times C \times \mathcal{T}$ is the event universe.
\begin{definition}[Event] An event $e \in \mathcal{E}$
is a tuple $e=(a,c,t)$,
where $a \in \mathcal{A}$
is the process activity,
$c \in C$ is the case id,
and $t \in \mathcal{T}$ is its timestamp.
Given an event $e$,
we define the projection functions
$F_{p}=\{f_{a}, f_{c}, f_{t}\}$: $f_{a}: e \to a, f_{c}: e \to c, \text{and}\ f_{t}: e \to t$.
\end{definition}
\begin{definition}[Trace] A trace is a %
sequence $\sigma_{c} = \langle e_{1}, \dots, e_{\vert \sigma_{c} \vert} \rangle \in \mathcal{E}^*$ of events, such that $f_{c}(e_{i}) = f_{c}(e_{j}) \wedge f_{t}(e_i) \leq f_{t}(e_j)$ for $1 \leq i < j \leq \vert \sigma_{c} \vert$. Note a trace $\sigma_{c}$ of process instance $c$ can be considered as a process instance $\sigma_{c}$.
\end{definition}
\begin{definition}[Event log]
An event log $\mathcal{L}_\tau$
for a time instant $\tau$
is a set of traces,
such that
$\forall \sigma{}_c \in \mathcal{L}_\tau\,.\,\exists c \in \mathcal{C}\,.\, (\forall e \in \sigma_c\,.\,id(f_c(e)) = c) \wedge (\forall e \in \sigma_{c}\,.\,f_t(e) \leq \tau)$,
\ie all events of the observed cases that already happened.
\end{definition}
\begin{definition}[Prefix, suffix of a trace]
Given a trace $\sigma_{c}= \langle e_{1},..,e_{k},.., e_{n} \rangle$,
the prefix of length $k$
is $hd^k(\sigma_{c})= \langle e_{1},..,e_{k} \rangle$, and
the suffix of length $k$ is $tl^k(\sigma_{c})=\langle e_{k+1},.., e_{n} \rangle$, with $1 \leq k < n$.
\end{definition}
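The prefix and suffix projections $hd^k$ and $tl^k$ translate directly into code. A minimal sketch, with events represented as (activity, case id, timestamp) tuples:

```python
# Prefix hd^k and suffix tl^k of a trace, following the definitions above.
# Events are represented as (activity, case_id, timestamp) tuples.

def hd(trace, k):
    """Prefix of length k: the first k events."""
    return trace[:k]

def tl(trace, k):
    """Suffix for a given k (as defined above): events k+1 .. n."""
    return trace[k:]

trace = [("A", 1, 10), ("B", 1, 20), ("C", 1, 30), ("D", 1, 40)]
prefix = hd(trace, 2)   # the first two events
suffix = tl(trace, 2)   # the remaining events
```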
\subsection{Long short-term memory neural networks}
Our \magic{} technique transforms next activity predictions into the next best actions. Thus, next activity predictions are the basis for our \magic{} technique.
To predict next activities, we use a ``vanilla", i.e. basic, long short-term memory network (LSTM)~\cite{hochreiter.1997} because most of the PBPM techniques for predicting next activities rely on this DNN architecture~\cite{weinzierl.2020b}. LSTMs belong to the class of recurrent neural networks (RNNs)~\cite{lecun.2015} and are designed to handle temporal dependencies in sequential prediction problems~\cite{bengio.1994}.
In general, an LSTM consists of three layers: an input layer (receiving data input), a hidden layer (i.e. an LSTM layer with an LSTM cell) and an output layer (providing predictions).
An LSTM cell uses four gates to manage its memory over time to avoid the problem of gradient exploding/vanishing in the case of longer sequences~\cite{bengio.1994}.
First, a forget gate determines how much of the previous memory is kept. Second, an input gate controls how much new information enters the memory. Third, a gate gate (candidate memory) defines which new information is stored into the memory. Fourth, an output gate determines how much information is read out of the memory.
To learn an LSTM's parameters, a loss function (e.g. the cross-entropy loss for classification) is defined on a data point (i.e. prediction and label) and measures the penalty. Additionally, a cost function in its basic form calculates the sum of loss functions over the training set.
The LSTM's parameters are updated iteratively via a gradient descent algorithm (e.g. stochastic gradient descent), in that, the gradient of the cost function is computed by backpropagation through time~\cite{rumelhart.1986}. After learning the parameters, an LSTM model with adjusted parameter values exists.
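The gate structure described above can be sketched as a single forward step of a scalar LSTM cell. The weights below are hand-picked and untrained; this illustrates only the update rule, not a learned model:

```python
# One forward step of a "vanilla" LSTM cell with its four gates:
# forget (f), input (i), candidate memory (g) and output (o).
# Scalar state and hand-picked weights -- for illustration only.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """Scalar LSTM step; w maps gate name -> (w_x, w_h, bias)."""
    def pre(gate):
        wx, wh, b = w[gate]
        return wx * x + wh * h_prev + b
    f = sigmoid(pre("f"))      # how much old memory is kept
    i = sigmoid(pre("i"))      # how much new information enters
    g = math.tanh(pre("g"))    # candidate memory content
    o = sigmoid(pre("o"))      # how much memory is read out
    c = f * c_prev + i * g     # memory update
    h = o * math.tanh(c)       # new hidden state
    return h, c

w = {"f": (0.5, 0.1, 0.0), "i": (0.4, 0.2, 0.0),
     "g": (0.9, 0.3, 0.0), "o": (0.6, 0.1, 0.0)}
h, c = 0.0, 0.0
for x_t in [1.0, 0.0, 1.0]:    # a short input sequence
    h, c = lstm_step(x_t, h, c, w)
```

The bounded activations keep the hidden state in $(-1, 1)$, which is what makes the cell numerically stable over long sequences.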
\subsection{Business process simulation}
Actions optimised according to a KPI may not conform to the process control-flow.
Thus, suggesting process-conform actions to process stakeholders is essential.
Consequently, we add control-flow knowledge to our \magic{} technique with formal process models.
A well-known approach to assess the quality of process executions is business process simulation (BPS).
Several approaches examine processes and their variants regarding compliance or performance with BPS~\cite{centobelli.2015,redlich.2012}.
We refer to discrete-event-driven BPS~\cite{tumay.1996}. Here, simulation models formally contain discrete events which are interrelated via process semantics.
BPS usually delivers its insights to users~\cite{rosenthal.2018}.
Unlike existing approaches, such as~\cite{wynn.2008,rozinat.2009}, we use the simulation results to process the predictions of an LSTM.
Thus, we use discrete-event-driven~\cite{tumay.1996} short-term simulation~\cite{rozinat.2009} as a boundary measure to ensure that the DNN-based next best action makes sense from a control-flow perspective. Alike Rozinat et al.~\cite{rozinat.2009}, our simulation starts from a non-empty process state to aid in recommending the next best action from the current state on.
\section{A PrBPM technique for recommending next best actions}
\label{sec:artifact}
Our PrBPM technique transforms next activity predictions into the next best actions. %
The technique consists of an \textit{offline} and an \textit{online component}.
In the offline component, it learns a DNN for predicting next activities and values of a KPI. %
In the online component, the next best actions are recommended based on the next activity and KPI value predictions.
\subsection{Offline component}
The offline component receives as input an event log $\mathcal{L}_\tau$, and outputs the two ML models $m_{pp}$ and $m_{cs}$.
While $m_{pp}$ (process prediction model) predicts next activities and a KPI value related to next activities,
$m_{cs}$ (candidate selection model) selects a fixed set of suffix candidates.
The technique learns both models from individually pre-processed versions of $\mathcal{L}_\tau$.
Fig.~\ref{fig:rec_offline}~visualises the steps of the offline component.
\vspace{-0.5cm}
\begin{figure}[htb]
\centering
\includegraphics[width=11cm]{res/offline_component.pdf}
\caption{Four-step offline component scheme with the two models $m_{pp}$ and $m_{cs}$.}
\label{fig:rec_offline}
\end{figure}
\vspace{-0.5cm}
In the following, we describe the steps of the offline component based on the exemplary finished process instance $\sigma_{1}^f$, as represented in (\ref{eq:example}). The last attribute per event is the KPI; here the defined costs for executing an activity.
\begin{equation}
\begin{split}
\sigma_{1}^f=\langle&\text{(1, ``Create Application", 2011-09-30 16:20:00, 0),}\\
&\text{(1, ``Concept", 2011-09-30 17:30:00, 10),}\\
&\text{(1, ``Accepted", 2011-09-30 18:50:00, 20),}\\
&\text{(1, ``Validating", 2011-09-30 19:10:00, 40)}
\rangle.
\end{split}
\label{eq:example}
\end{equation}
\textbf{Pre-process event log for process prediction.}
$m_{pp}$ is a multi-task DNN for predicting next activities and a KPI value at each time step of a running process instance. For $m_{pp}$, the pre-processing of $\mathcal{L}_\tau$ comprises four steps.
First, to determine the end of each process instance in $\mathcal{L}_\tau$, it adds a termination event to the end of each process instance. So, for $\sigma_{1}^{f}$, as represented in (\ref{eq:example}), we add the event $\text{(1, ``End", 2011-09-30 19:10:00, 0)}$ after the fourth event with the activity name ``Validating". For termination events, we always carry over the timestamp value of the previous event and set the value of the KPI to $0$.
Second, it onehot-encodes all activity
names in the process instances as numerical vectors (cf. (\ref{eq:example_2}) for $\sigma_{1}^f$ including the termination event's activity).
\begin{equation}
\begin{split}
\sigma_{1}^f=\langle
(0, 0, 0, 0, 1),
(0, 0, 0, 1, 0),
(\dots),
(0, 1, 0, 0, 0),
(1, 0, 0, 0, 0)
\rangle.
\end{split}
\label{eq:example_2}
\end{equation}
This step is necessary since LSTMs, as used in this paper, use a gradient descent optimisation algorithm to learn the network's parameters.
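The onehot encoding step can be sketched as follows, reproducing the encoding of $\sigma_1^f$ above (the vocabulary order is chosen to match the vectors shown in the example):

```python
# Onehot-encode activity names over a fixed vocabulary, as done for m_pp.
# The activity names and the vocabulary order match the running example.

def onehot_encode(trace_activities, vocabulary):
    index = {a: i for i, a in enumerate(vocabulary)}
    encoded = []
    for activity in trace_activities:
        vec = [0] * len(vocabulary)
        vec[index[activity]] = 1
        encoded.append(tuple(vec))
    return encoded

vocab = ["End", "Validating", "Accepted", "Concept", "Create Application"]
trace = ["Create Application", "Concept", "Accepted", "Validating", "End"]
encoded = onehot_encode(trace, vocab)
# "Create Application" -> (0, 0, 0, 0, 1), matching the encoding above.
```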
Third, it crops prefixes out of process instances by using the function $hd^k()$. For instance, a prefix with size three of $\sigma_{1}^f$ is:
\begin{equation}
\begin{split}
hd^3(\sigma_{1}^f)=\langle
\text{(0, 0, 0, 0, 1),}&\text{(0, 0, 0, 1, 0),}
\text{(0, 0, 1, 0, 0)}
\rangle.
\end{split}
\label{eq:example_3}
\end{equation}
Lastly, it transforms the cropped input data into a third-order tensor (prefixes, time steps and attributes). Additionally, $m_{pp}$ needs two label structures for parameter learning. First, for the onehot-encoded next activities, a two-dimensional label matrix is required. Second, if the KPI values related to the next activities are scaled numerically, a one-dimensional label vector is required. If the values are scaled categorically, a two-dimensional label matrix is needed.
\textbf{Create process prediction model.}
$m_{pp}$ is a multi-task DNN.
The model's architecture follows the work of Tax et al.~\cite{tax.2017}.
The input layer of $m_{pp}$ receives the data and transfers it to the first hidden layer. The first hidden layer is followed by two branches. Each branch refers to a prediction task and consists of two layers, a hidden layer and an output layer. The output layer of the upper branch realises next activity predictions, whereas the lower creates KPI value predictions.
Depending on the KPI value's scaling (i.e. numerical or categorical), the lower branch solves either a regression or classification problem.
Each hidden layer of $m_{pp}$ is an LSTM layer with an LSTM cell.
\textbf{Pre-process event log for candidate selection.}
$m_{cs}$ is a nearest-neighbour-based ML algorithm for finding suffixes ``similar" to predicted suffixes.
For $m_{cs}$, the pre-processing of $\mathcal{L}_\tau$ consists of three steps.
First, it ordinal-encodes all activity names as numerical values. For example, the ordinal-encoded representation of $\sigma_{1}^f$, as depicted in (\ref{eq:example}) including the termination event's activity, is $\langle (1), (2), (3), (4), (5) \rangle$.
Second, it crops suffixes out of process instances through the function $tl^k(\cdot)$. For instance, the suffix of $\sigma_{1}^f$ for $k=3$, $tl^3(\sigma_{1}^f)$, is $\langle (4), (5) \rangle$.
Lastly, the cropped input data is transformed into a two-dimensional matrix (suffixes and attributes).
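The ordinal encoding and suffix cropping can be sketched as follows, reproducing the running example ($\sigma_1^f \rightarrow \langle (1), (2), (3), (4), (5) \rangle$ and its suffix $\langle (4), (5) \rangle$):

```python
# Ordinal-encode activity names and crop suffixes for m_cs,
# reproducing the running example from the text.

def ordinal_encode(trace_activities, vocabulary):
    # map each activity name to its (1-based) ordinal value
    index = {a: i + 1 for i, a in enumerate(vocabulary)}
    return [index[a] for a in trace_activities]

vocab = ["Create Application", "Concept", "Accepted", "Validating", "End"]
trace = ["Create Application", "Concept", "Accepted", "Validating", "End"]
encoded = ordinal_encode(trace, vocab)   # [1, 2, 3, 4, 5]
suffix = encoded[3:]                     # tl^3: events 4 .. n -> [4, 5]
```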
\textbf{Create candidate selection model.}
$m_{cs}$ is a nearest-neighbour-based ML algorithm.
It retrieves $k$ suffixes ``nearest" to a suffix predicted for a given prefix (i.e. a running process instance at a certain time step).
The technique learns the model $m_{cs}$ based on all suffixes cropped out of $\mathcal{L}_\tau$.
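A nearest-neighbour retrieval over the ordinal-encoded suffixes can be sketched as follows. The zero-padding and the Euclidean distance are illustrative choices here, not necessarily the metric used by the technique:

```python
# Retrieve the k suffixes "nearest" to a predicted suffix (m_cs).
# Suffixes are ordinal-encoded and zero-padded to equal length;
# Euclidean distance is an illustrative choice of metric.
import math

def pad(seq, length):
    return seq + [0] * (length - len(seq))

def k_nearest(predicted, candidates, k):
    n = max(len(predicted), max(len(c) for c in candidates))
    p = pad(list(predicted), n)
    def dist(c):
        q = pad(list(c), n)
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))
    return sorted(candidates, key=dist)[:k]

# Historical suffixes (ordinal-encoded) and a predicted suffix.
history = [[5], [3, 4, 5], [4, 3, 4, 5], [2, 3, 4, 5]]
predicted = [3, 4, 5]
candidates = k_nearest(predicted, history, k=3)
```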
\subsection{Online component}
The online component receives as input a new process instance, and the two trained predictive models $m_{pp}$ and $m_{cs}$. It consists of five steps~(see~Fig.~\ref{fig:rec_phase}), and outputs next best actions.
After pre-processing (first step) of the running process instance, a suffix of next activities and its KPI values are predicted (second step) by applying $m_{pp}$.
The second step is followed by a condition: whether the sum of the KPI values of the suffix and the respective prefix exceeds a threshold. If the threshold is exceeded, the predicted suffix of activities is transferred from the second to the third step (i.e. find candidates) and the procedure for generating next best actions starts. Otherwise, it provides the next most likely activity.
To find a set of suffix candidates, the technique loads $m_{cs}$ from the offline component. Subsequently, it selects the best candidate from this set depending on the KPI and concerning BPS.
Finally, the first activity of the selected suffix represents the best action and is concatenated to the prefix of activities (i.e. running process instance at a certain time step).
If the best action is the end of the process instance, the procedure ends. Otherwise, the procedure continues and predicts the suffix of the new prefix.
\vspace{-0.5cm}
\begin{figure}[htb]
\centering
\includegraphics[width=\textwidth]{res/online_component2.pdf}
\caption{Five-step online component scheme with the two models $m_{pp}$ and $m_{cs}$.}
\label{fig:rec_phase}
\end{figure}
\vspace{-0.5cm}
In the following, we detail the online component's five steps. Thereby, we refer to the running process instance~$\sigma_{2}^r$, for that the second event has just finished.
\begin{equation}
\begin{split}
\sigma_{2}^r=\langle&\text{(2, ``Create Application", 2012-09-30 18:00:00, 20),}\\
&\text{(2, ``Concept", 2012-09-30 18:30:00, 20)}\rangle.
\end{split}
\label{eq:example_4}
\end{equation}
\textbf{Pre-process process instance.} To predict the suffix of the running process instance with $m_{pp}$, we onehot-encode all activity names as numerical vectors and transform the output into a third-order tensor.
\textbf{Predict suffix.}
Based on a running process instance (prefix), $m_{pp}$ predicts the next sequence of activities and the KPI values.
To get the complete suffix, we apply $m_{pp}$ repeatedly.
Afterwards, the technique calculates the sum of the KPI values over the activities of the complete process instance consisting of the prefix and its predicted suffix.
For instance, if the prefix of a process instance is $\sigma_{2}^r$, one potential suffix is:
\begin{equation}
\begin{split}
s_{\sigma_{2}^r}=\langle
\text{(``Accepted", 20),}
\text{(``Validating", 40),}
\text{(``End", 10)}
\rangle.
\end{split}
\label{eq:example_5}
\end{equation}
For a better intuition, we omit the suffixes' encoding in the online component. The values $20$, $40$ and $10$ assigned to the events are KPI values (e.g. cost values) predicted by $m_{pp}$.
In line with Tax et al.~\cite{tax.2017}, we do not perform the suffix prediction for prefixes with size $\leq 1$ since the amount of activity values is insufficient.
After predicting the suffix, the total costs of $\sigma_{2}^r$ are $110$.
To start the procedure for recommending the next best actions, the total KPI value of an instance has to exceed a threshold value $t$. The value of $t$ can be defined by domain experts or derived from the event log (e.g. average costs of process instances).
Regarding $\sigma_{2}^r$, the procedure starts because we assume $t=100$ ($110>t$).
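The threshold condition on the running example can be sketched directly (values from $\sigma_2^r$ and its predicted suffix, with $t = 100$):

```python
# Decide whether to start the next-best-action procedure: the summed KPI
# value of prefix + predicted suffix must exceed a threshold t.
# Values follow the running example sigma_2^r with t = 100.

def total_kpi(prefix_kpis, suffix_kpis):
    return sum(prefix_kpis) + sum(suffix_kpis)

def needs_intervention(prefix_kpis, suffix_kpis, threshold):
    return total_kpi(prefix_kpis, suffix_kpis) > threshold

prefix_kpis = [20, 20]       # "Create Application", "Concept"
suffix_kpis = [20, 40, 10]   # predicted "Accepted", "Validating", "End"
t = 100
start_procedure = needs_intervention(prefix_kpis, suffix_kpis, t)
```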
\textbf{Find candidates.}
In this step, for the predicted suffix, $m_{cs}$ from the offline component reveals a set of alternatives with a meaningful control-flow.
For example, $m_{cs}$ ($k=3$) selects based on $s_{\sigma_{2}^r}$ the following three suffix alternatives:
\begin{equation}
\begin{split}
m_{cs}(s_{\sigma_{2}^r})=[
&\langle
\text{(``End", 10)}
\rangle,\\
&\langle
\text{(``Accepted", 20),}
\text{(``Validating", 40),}
\text{(``End", 10)}
\rangle,\\
&\langle
\text{(``Validating", 20),}
\text{(``Accepted", 10),}
\text{(``Validating", 10),}\\
&\text{(``End", 10)}
\rangle].
\end{split}
\label{eq:example_6}
\end{equation}
In (\ref{eq:example_6}), the first and the third suffix result in total costs ($50$ and $90$) falling below $t$.
\textbf{Select the best candidate.}
In the third step, we select the next best action from the set of possible suffix candidates.
We sort the suffixes by the KPI value. Thus, the first suffix is the best one in regard to the KPI.
To incorporate control-flow knowledge, a simulation model checks the resulting instance.
Thereby, we reduce the risk of prescribing nonsensical actions.
The simulation uses a formal process model to retrieve specific process semantics.
The simulation produces the current process state from the prefix and the process model.
If the prefix does not comply with the process model, the simulation aborts the suffix selection for the prefix and immediately recommends an intervention.
Otherwise, we check whether the $k$ selected suffixes comply with the process model in the simulation, starting from the current process state.
If a candidate suffix fails the simulation,
our technique omits it from the selection.
However, when all suffix candidates fail the simulation, the technique assumes the predicted next activity as the best action candidate.
Concerning the candidate set from (\ref{eq:example_6}), the best candidate is suffix three since, of the candidates complying with the simulation model, it has the lowest total costs.
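This selection logic can be sketched compactly. All names are illustrative; \texttt{simulate} stands in for replaying the instance on the formal process model (e.g. a DCR graph), and in the test below we assume, for illustration only, that the first candidate violates the model.

```python
def select_best_action(prefix, candidates, simulate, predicted_next):
    """Choose the next best action from candidate suffixes, respecting the simulation.

    `simulate(prefix, suffix) -> bool` stands in for replaying the instance on a
    formal process model (e.g. a DCR graph); all names here are illustrative.
    """
    if not simulate(prefix, []):          # prefix itself violates the model
        return "intervene"                # -> immediately recommend an intervention
    ranked = sorted(candidates, key=lambda s: sum(kpi for _, kpi in s))
    for suffix in ranked:                 # best KPI value first
        if simulate(prefix, suffix):
            return suffix[0]              # first event of the suffix = next best action
    return predicted_next                 # all candidates fail -> plain prediction
```

The function returns the first event of the cheapest compliant suffix, mirroring how the recommended activity and KPI value are later concatenated to the prefix.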
\textbf{Update process instance.}
To evaluate our technique, we assume that a process stakeholder performs the recommended action in each case. Thus, if a best suffix candidate exists, the activity (representing the next best action) and the KPI value of its first event are concatenated to the running process instance (i.e. prefix).
After the update,~$\sigma_{2}^r$ comprises three events, as depicted in~(\ref{eq:example_7}).
\begin{equation}
\begin{split}
\sigma_{2}^r=\langle
&\text{(2, ``Create Application", 2012-09-30 18:00:00, 20),}\\
&\text{(2, ``Concept", 2012-09-30 18:30:00, 20),}\\
&\text{(2, ``\textbf{Validating}", --, \textbf{20})}\rangle.
\end{split}
\label{eq:example_7}
\end{equation}
The technique repeats the complete procedure until the termination event is reached.
\section{Evaluation}
\label{sec:eval}
We evaluate our PrPBM technique regarding the optimisation of a KPI and the distance from ground truth process instances. For that, we developed a prototype that recommends next best actions depending on the KPI \textit{throughput time} and with respect to a process simulation realised with DCR graphs. We compare our results to a representative baseline~\cite{tax.2017} for two event logs.
\subsection{Event logs}
First, we use the~\textit{helpdesk}\footnote{https://data.mendeley.com/datasets/39bp3vv62t/1.}~event log containing data from an Italian software company's ticketing management process.
It includes $21,348$ events, $4,580$ process instances, $226$ process instance variants and $14$ activities.
Second, we include the \textit{bpi2019}\footnote{https://data.4tu.nl/repository/uuid:a7ce5c55-03a7-4583-b855-98b86e1a2b07.} event log from the BPI challenge 2019, provided by a company for paints and coatings.
It depicts a purchase order handling process. For this event log, we only considered a random 10\% sample with sequences of $30$ events or shorter, due to the high computational effort.
It includes $101,714$ events, $24,900$ process instances, $3,255$ process instance variants and $32$ activities.
\subsection{Process models}
We used DCR graphs as models for the BPS in our technique's best candidate selection. In Fig.~\ref{fig:dcr_hd}, we present the DCR graph for the \textit{helpdesk} event log. The three most important constraints are the following. First, after ``Closed" the other activities should not happen. Second, if ``Assign seriousness" occurs, someone must take over the responsibility. Third, before a ticket is closed, ``Resolve ticket" must occur.
Fig.~\ref{fig:dcr_bpi} shows the DCR graph for the \textit{bpi2019} event log.
The three most essential constraints are the following. First, ``Create Purchase Order Item" may only happen once per order. Second, after the goods were received, ``Change Quantity" and ``Change price" should not occur. Third, ``Record Goods Receipt", ``Record Invoice Receipt" and ``Clear Invoice" must eventually follow each other.
\begin{figure}[htb!]
\centering
\includegraphics[width=.55\textwidth]{res/helpdesk.pdf}
\caption{DCR graph for the \textit{helpdesk} event log.}
\label{fig:dcr_hd}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{res/dcr_bpi2019.pdf}
\caption{DCR graph for the \textit{bpi2019} event log.}
\label{fig:dcr_bpi}
\end{figure}
\subsection{Procedure}
We split both event logs in a 2/3 training and 1/3 test set with a random process-instance-based sampling. As a baseline, we use the most cited next event PBPM technique from Tax et al.~\cite{tax.2017}.
We evaluate the technique in two ways.
First, we evaluate the optimisation of the KPI \textit{throughput time} (\textit{in-time} value) by the percentage of process instances that could comply with the temporal threshold for different prefix sizes. The temporal threshold is the average throughput time of a process instance in an event log.
Second, we evaluate the \textit{distance} from the ground truth process instance through the average Damerau-Levenshtein distance.
This metric determines the distance between two strings or sequences through the minimum number of operations (consisting of insertions, deletions or substitutions of a single character, or transposition of two adjacent characters), i.e. the lower the value, the more similar the strings are.
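Both evaluation metrics can be sketched as follows. The function names are illustrative assumptions; for the distance we implement the optimal-string-alignment variant of the Damerau-Levenshtein distance, which covers exactly the four operations named above (we do not claim this is the exact variant used in the cited evaluation).

```python
def in_time_value(throughput_times, threshold):
    """Percentage of instances whose throughput time complies with the threshold."""
    ok = sum(1 for t in throughput_times if t <= threshold)
    return 100.0 * ok / len(throughput_times)


def dl_distance(a, b):
    """Damerau-Levenshtein distance (optimal-string-alignment variant) between
    two sequences, e.g. activity traces; lower means more similar."""
    d = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        d[i][0] = i                      # delete all of a[:i]
    for j in range(len(b) + 1):
        d[0][j] = j                      # insert all of b[:j]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)  # adjacent transposition
    return d[len(a)][len(b)]
```

Since the implementation only indexes and compares elements, it works on strings and on lists of activity labels alike.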
To train the multi-task LSTM $m_{pp}$, we apply the \emph{Nadam} optimisation algorithm %
with a \emph{categorical cross-entropy loss} for next activity predictions and a \emph{mean squared error} for \textit{throughput time} (KPI) predictions.
Moreover, we set the batch size to 256,
\ie gradients update after every 256\textsuperscript{th} sample of the training set. We set the default values for the other optimisation parameters.
For training the candidate selection model $m_{cs}$, we apply the nearest-neighbour-based ML algorithm ball tree~\cite{omohundro.1989}. Ball tree utilises a binary tree data structure for maintaining spatial data hierarchically. We choose a spatial-based algorithm to consider the semantic similarity between the suffixes of activities and KPI values.
Moreover, we set the hyperparameter $k$ (number of ``nearest" neighbours) of $m_{cs}$ to $5$, $10$ and $15$. Thereby, we check different sizes of the suffix candidate set.
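For intuition, the nearest-neighbour retrieval behind $m_{cs}$ can be sketched by brute force; the encoding of suffixes as fixed-length numeric vectors is an assumption here, and a ball tree returns the same neighbours with better scaling on large candidate sets.

```python
def k_nearest_suffixes(query, encoded_suffixes, k):
    """Return the k encoded suffixes closest to the predicted one (brute force).

    Suffixes are assumed to be encoded as fixed-length numeric vectors; a
    ball tree (as used for m_cs) yields the same neighbours more efficiently.
    """
    def euclidean(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    return sorted(encoded_suffixes, key=lambda s: euclidean(query, s))[:k]
```

Varying $k$ here corresponds directly to the candidate-set sizes $5$, $10$ and $15$ examined in the evaluation.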
Finally, technical details and the source code are available on GitHub\footnote{https://github.com/fau-is/next-best-action.}.
\subsection{Results}
Fig.~\ref{fig:res_hd} shows the results for the \textit{helpdesk} event log.
\vspace{-0.3cm}
\begin{figure}[htbp]
\centering
\includegraphics[width=\textwidth]{res/res_hd.pdf}
\caption{Results for the \textit{helpdesk} event log.}
\label{fig:res_hd}
\end{figure}
\vspace{-0.5cm}
For most of the prefixes in the \textit{helpdesk} event log, our technique's next best actions are more \textit{in time} than next activity predictions.
While for $k=10$ next best actions have the lowest \textit{in-time} values compared to next activity predictions, \textit{in-time} values of next best actions with $k=5$ and $k=15$ are rather similar to each other.
Furthermore, the higher the $k$, the lower is the \textit{distance} of the next best actions from the actual process instances.
Up to prefix size $4$, the \textit{distance} of the next best actions is lower compared to next activity predictions.
\begin{figure}[htbp!]
\centering
\includegraphics[width=\textwidth]{res/res_bpi.pdf}
\caption{Results for the bpi2019 event log.}
\label{fig:res_bpi}
\end{figure}
Fig.~\ref{fig:res_bpi} shows the results for the \textit{bpi2019} event log.
For most of the prefixes with a size $\geq$ $8$, the next best actions of our technique are more \textit{in time} than next activity predictions.
For $k=15$ and prefixes $\geq$ $8$, next best actions have an \textit{in-time} value of $0$.
With an increasing $k$, the \textit{in-time} values of the next best actions vary less from prefix size $2$ to $12$.
In contrast to next activity predictions, the \textit{distance} of next best actions is lower for prefixes with a size $>3$ and $<15$.
Over the three $k$ values, the \textit{distance} of next best actions is rather similar.
\section{Discussion}
\label{sec:discussion}
Our contribution to academia and practice is a \magic{} technique that recommends next best actions depending on a KPI while taking BPS into account.
Moreover, the evaluation presents an instantiation of our \magic{} technique. The KPI is the \textit{throughput time}, and a DCR graph realises BPS via the event log.
Based on our results, our \magic{} technique can provide actions with lower \textit{in-time} values and less \textit{distance} from the ground truth compared to the next most likely activities for both event logs.
However, the \textit{in-time} values (i.e. the percentage of process instances that could comply with the temporal threshold) of next best actions differ more from the baseline's next activity prediction for the \textit{bpi2019} event log than for the \textit{helpdesk} event log.
The \textit{helpdesk} event log has a lower instance variability than the \textit{bpi2019} event log. Therefore, fewer process paths exist from which our technique can recommend actions with lower \textit{in-time} values.
Further, the number of candidates $k$ has an effect on the KPI's optimisation.
While we get the actions with the lowest \textit{in-time} values with $k=10$ for the \textit{helpdesk} event log, the KPI values with $k=5$ and $k=15$ are similar to each other. For the \textit{bpi2019} event log, our technique provides actions with the lowest \textit{in-time} value if $k$ is set to $15$. The results with $k=5$ are similar to those of $k=10$.
A higher $k$ value leads to lower \textit{in-time} values in the \textit{bpi2019} event log because of a higher instance variability.
On the contrary, the \textit{helpdesk} event log needs a lower $k$ value.
Regarding the \textit{distance} from ground truth process instances, most of the next best actions (especially those for the \textit{bpi2019} event log) reach a better result than next activity predictions.
A reason for that could be the limited predictive quality of the underlying DNN model for predicting next activities. However, our technique integrates control-flow knowledge and therefore overcomes this deficit to a certain degree.
Moreover, for the \textit{bpi2019} event log, our technique provides actions with lower \textit{in-time} values for prefixes with a size $\geq$ $8$.
In terms of the \textit{helpdesk} event log, we get actions with lower \textit{in-time} values for shorter prefixes.
We suppose that our technique requires a longer prefix for event logs with higher instance variability to recommend next best actions.
Finally, even though our technique provides actions with lower \textit{in-time} values, it seems that it does not terminate before the baseline.
Our results show the aggregated values over different prefix sizes.
Thus, we assume that a few sequences, for which the termination cannot be determined, distort the results. %
Despite all our efforts, our technique bears three shortcomings.
First, we did not optimise the hyperparameters of the DNN model $m_{pp}$, e.g. via random search. Instead, we set the hyperparameters for $m_{pp}$ according to the work of Tax et al.~\cite{tax.2017}. We used the same setting since we compare our technique's next best actions to their next activity predictions.
Second, even though our technique is process-modelling-notation agnostic, we argue that declarative modelling is an appropriate approach for the process simulation. Due to its freedoms, declarative modelling facilitates the partial definition of the control-flow.
As a consequence, we have a more flexible definition of a process's control-flow than by using a restricted procedural process model.
While our DNN-based technique copes well with rather flexible processes, other techniques using \textit{traditional} ML algorithms (e.g. a decision tree) might handle restricted processes faster and with a higher predictive quality.
Third, for our \magic{} technique's design, we neither consider cross-instance nor cross-business-process dependencies. In an organisational environment, additional effects like direct and indirect rebound effects can hinder our technique.
\section{Related Work}
\label{sec:relatedWork}
A variety of PBPM techniques were proposed by researchers as summarised by, e.g. Márquez-Chamorro et al.~\cite{marquez.2017} or Di Francescomarino et al.~\cite{di.2018}.
Many of these techniques are geared to address the next activity prediction task. For that, most of the recent techniques rely on LSTMs~\cite{weinzierl.2020b} such as Weinzierl et al.~\cite{weinzierl.2020}.
To predict not only the next activities with a single predictive model, Tax et al.~\cite{tax.2017} suggest a multi-task LSTM-based DNN architecture. With this architecture, they predict the next activities and their timestamps. Metzger et al.~\cite{metzger.2019} extend their architecture by another LSTM layer to additionally predict the binary process outcome whether a delay occurs in the process or not.
These techniques output predictions and do not %
recommend next best actions.
Furthermore, researchers suggested \magic{} approaches that raise alarms or recommend actions to prevent undesired activities.
Metzger et al.~\cite{metzger2017predictive} investigate the effect of reliability estimates on (1) intervention costs (called adaption cost) and (2) the rate of non-violation of process instances by performing a simulation of a parameterised cost model. Thereby, they determine the reliability estimates based on the predictions of an ensemble of multi-layer perceptron classifiers at a pre-defined point in the process. In a later work~\cite{metzger2019dl}, reliability estimates were determined based on an ensemble of LSTM classifiers at different points in the process. The recommendation of actions is not part of these works.
Teinemaa et al.~\cite{teinemaa.2018} propose a concept of an alarm-based \magic{} framework. They suggest a cost function for generating alarms that trigger interventions to prevent an undesired outcome or mitigate its effect. In a later work~\cite{fahrenkrog.2019}, a multi-perspective extension of this framework was presented. In both versions, the framework focuses on alarms.
Gr{\"o}ger et al.~\cite{groger.2014} present a \magic{} technique that provides action recommendations for the next process step during the execution of a business process to avoid a predicted performance deviation. Performance deviation is interpreted as a binary outcome prediction, i.e. exists a deviation or not. In detail, an action recommendation comprises several action items and is represented by a rule extracted from a learned decision tree. An action item consists of the name and value of a process attribute. Even though this approach recommends actions in the form of process attribute values of the next process step which are optimised according to a KPI (e.g. lead time), process steps as next best actions are not recommended.
Conforti et al.~\cite{conforti.2013} propose a \magic{} technique that predicts risks depending on the deviation of metrics during process execution. The technique's purpose is to provide decision support for certain actions such as the next process activity which minimises process risks. However, this technique can only recommend actions which are optimised regarding the KPI risk.
Thus, to the best of our knowledge, there is no \magic{} approach that transforms next most likely activity predictions into next best actions (represented by activities) depending on a given KPI.
\section{Conclusion}
\label{sec:conclusion}
Next activity predictions provided by PBPM techniques can be less beneficial for process stakeholders.
Based on our motivation and the identified research gap, we argue that there is a crucial need for a \magic{} technique that recommends the next best actions in running processes.
We reached our research goal with the evaluation of our developed \magic{} technique in Sec.~\ref{sec:eval}.
Thereby, we show that our technique can outperform the baseline regarding KPI fulfilment and distance from ground truth process instances.
Further research might concern different directions.
First, we plan to adapt existing loss functions for LSTMs predicting next most likely activities. Such a loss function can enable an LSTM to directly consider information on KPIs in the learning procedure.
Second, future research should further develop existing \magic{} approaches. More advanced multi-tasking DNN architectures can facilitate the recommendation of more sophisticated next best actions, for instance, actions that optimise more than one KPI.
Finally, we call for \magic{} techniques that are aware of concept evolution. Our technique is not able to recommend an activity as the best action if it was not observed in the training phase of the ML models. %
\section*{Acknowledgments}
This project is funded by the German Federal Ministry of Education and Research (BMBF) within the framework programme \textit{Software Campus} under the number 01IS17045.
\section{\label{sec:level1}Introduction}
Heteroarenes containing a subset of B, N, O, P, and S atoms are very versatile organic compounds exhibiting useful mechanical, optoelectronic, chemisorption and catalytic properties\cite{wong2006modulation,marcon2007tuning,campbell2010hydrogen,jiang2013heteroarenes,al2014water,hashimoto2014triplet,gong2015boron,stepien2016heterocyclic,ito2017annulative,wang2017ladder}. The most extensively studied heteroarenes are either those based
on polycyclic aromatic hydrocarbon (PAH) molecules or their two-/three-dimensional (2D/3D) periodic forms: graphene and graphite.
Borazine and hexagonal boron-nitride ($h$-BN) are fully heteroatom-substituted arenes; while their popular names, \emph{inorganic benzene} and \emph{inorganic graphite}, are inspired by their isoelectronicity with
organic counterparts, their properties stand in stark contrast.
The latter compound, due to its suitable band structure properties, plays a vital role in the design of graphene-based heterostructures\cite{dean2010boron}; while in its partially hydrogenated form shows visible-light activity for photocatalyzed water splitting\cite{li2013semihydrogenated}.
Both at the molecular level and in the extended 2D domain, B,~N-arenes exhibit unique physical and chemical properties\cite{dutta2008half,ci2010atomic,yamijala2013structural}.
Due to the isoelectronic and isosteric relationships between C-C and B-N fragments, these compounds do exhibit similarities in chemistry to their parent hydrocarbon compounds--a fact that has motivated all the related synthesis endeavors during the past decades\cite{dewar1958624,dewar1959546,chissick1960new,dewar1964new,davies1967new,fang2006syntheses,jaska2006triphenylene,bosdet2007blue,yamamoto2007facile,bosdet2009bn,matsui2017one}.
On the other hand, these heteroarenes have also been reported to exhibit mechanical and electronic properties differing from those of pure carbon and fully heteroatomic compounds\cite{miyamoto1994chiral,watanabe1996visible}.
Combinatorial diversity arising from site-specific atomistic substitutions in the arene framework combined with the local polarity
introduced by heteroatoms gives rise to continuously distributed molecular properties in ranges desirable for a multitude of applications\cite{ghosh2011density,morgan2016efficient}.
Yet another combinatorial scenario arises in extended arenes; for instance, nanotubules made of hexagonal BC$_2$N sheets show a wide degree of anisotropic conductivity stemming from distinct ways of rolling the heteroaromatic sheet into tubules\cite{miyamoto1994chiral}.
From a molecular perspective, introduction of heteroatoms in the arene framework serves as an invaluable alternative to coupling C atoms with functional groups which in the case of aromatic arenes is thermodynamically amenable only at peripheral sites\cite{muller2014boron}.
Mathematical methods for enumerating molecular datasets have been thoroughly reviewed by Faulon\cite{faulon1992using}. Historically, isomer counting of cyclic compounds has been based on the enumeration theorem named after George P{\'o}lya\cite{polya1937kombinatorische,freudenstein1967basic,polya2012combinatorial}. Exhaustive applications of this technique have been carried out by Lindsey \textit{et al.} to enumerate macrocyclic compound libraries~\cite{taniguchi2011virtual,taniguchi2013enumeration} suitable for light-harvesting\cite{yuen2018origin}. Baraldi \emph{et al.}\cite{baraldi2000regarding,baraldi1999cycle} have applied P{\'o}lya's theorem and enumerated the stereoisomers of highly symmetric icosahedral topologies. Graph theoretical methods came to light with comprehensive enumerations of polycyclic hydrocarbons through the pioneering works of Dias\cite{dias1985periodic,dias1986periodic,dias2007mathematics}, Balaban\cite{balaban1985applications}, and others\cite{lukvis2007growth}. Paton \emph{et al.} have enumerated saturated acyclic alkanes by explicitly accounting for the instability of strained carbon skeletons \cite{paton2007exploration}. Shao \emph{et al.} have applied subgroup decomposition of isomer permutation groups and enumerated a few fullerene cages\cite{shao1996enumeration2} and applied the same technique to list the isomers of B$_{24-m}$N$_m$\cite{shao1996enumeration1}.
All the aforementioned works have been based on a {\it non-constructive} strategy, {\it i.e.}, enumeration is based on closed-form algebraic expressions rather than explicit generation of molecular structures. As far as benzenoid hydrocarbons are concerned, a {\it constructive} strategy has been found to be of wider applicability, wherein explicit generation of structures or the corresponding
graphs is required. For example, benzenoid hydrocarbons have been enumerated using the perimeter or the area of self-avoiding polygons on a hexagonal lattice\cite{gutman1989introduction,cyvin1992enumeration,voge2002number}.
Using this approach the benzenoid compounds with up to 50 hexagonal cells have been enumerated with a parallel
algorithm\cite{jensen2009parallel}.
To date, one of the fastest approaches to constructively enumerate Kekulean benzenoid compounds---which is of interest to chemists---has been that of Brinkmann {\it et al.}\cite{brinkmann2007fusenes} which utilizes the idea of dual-graphs\cite{brinkmann2002constructive}.
A detailed account of the Kekulean structure as an important descriptor for benzenoid hydrocarbons can be found in the work of Cyvin \emph{et al.}\cite{cyvin2013kekule}.
B,~N-arenes have been the subject of a diverse range of theoretical as well as combined theoretical/experimental studies. To begin with, already in the late 1960s, Hoffmann \emph{et al.} had applied the extended-H\"uckel molecular orbital theory to a dozen or so heteroaromatic compounds, discussed the role of intramolecular non-bonded electrostatic interactions in these compounds and commented on their stability \cite{hoffmann1964extended}. Prior to that, Dewar had reported the synthesis of a hetero-phenanthrene with the C atoms at positions 9 and 10 replaced by a B,~N pair \cite{dewar1958624}. Eventually, a number of investigations have studied larger compounds ranging from PAHs\cite{neue2013bn,wang2014straightforward,long2016doping,sanyal2014bn,wang2015b2n2,ishibashi2017bn,fias2018alchemical} to fullerene cages\cite{zhu1997alternant,jensen1993stability,zhu1995bn,seifert1997boron,evangelisti1997ab,pattanayak2002boron,balawender2018exploring} partly or completely enriched by B, N pairs.
\begin{figure}[htbp!]
\centering
\includegraphics[width=8cm]{fig01.pdf}
\caption{Generation of polycyclic aromatic hydrocarbon frameworks by connecting and fusing three hexagons. {\it B}, {\it K}, {\it nB}, and {\it nK} stand for benzenoid,
Kekul{\'e}, non-benzenoid, and non-Kekul{\'e} frameworks, respectively.}
\label{fig01}
\end{figure}
The purpose of this paper is to non-constructively enumerate all possible unique compounds formed by substituting pairs of C atoms in PAHs comprising up to six cycles with B and N atoms. We utilize the nuclear permutation groups that are isomorphic to the rotation groups of the PAH scaffolds and generate the corresponding {\it pattern inventory}. One of the aims of this study is to provide consolidated tabulations of B,~N-substituted PAH (BN-PAH) compound frequencies---as a function of stoichiometries, symmetry and sites---to enable identification of statistical and combinatorial trends facilitating chemical space design strategies. As exemplars, we discuss (i) deviation of substitution patterns from commonly expected binomial/multinomial distributions; (ii) in the case of two or more PAHs
of similar size ({\it i.e.} made of the same number of benzene rings) and symmetry (same point group), non-trivial frequency selectivities giving rise to distinct product distributions. We clarify the origin of both these effects using nuclear permutation groups.
Furthermore, for a subset of 33,059 BN-PAH compounds, we present accurate geometrical features, energetics, optoelectronic properties, and inter-property correlations based on high-throughput density functional theory (DFT) calculations.
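As a minimal illustration of the non-constructive strategy, the following Python sketch counts B,~N-substitution patterns by Burnside averaging over a nuclear permutation group; generating-function evaluation of the cycle index gives the same numbers. The site labelling and the explicit permutations below are illustrative assumptions; with the identity plus one product of three transpositions (cycle structure $P_2^3$, as listed for chrysene, $I=7$, in Table~\ref{tab:innercarbons}), it reproduces the counts 15, 48 and 10 for B$_1$N$_1$, B$_2$N$_2$ and B$_3$N$_3$.

```python
from itertools import product


def cycle_lengths(perm):
    """Cycle lengths of a permutation given as a tuple: site i maps to perm[i]."""
    seen, lengths = set(), []
    for i in range(len(perm)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = perm[j]
                length += 1
            lengths.append(length)
    return lengths


def fixed_colourings(lengths, n_b, n_n):
    """Colourings constant on each cycle with exactly n_b B sites and n_n N sites;
    the remaining sites stay C."""
    count = 0
    for colours in product("BNC", repeat=len(lengths)):
        b = sum(l for l, c in zip(lengths, colours) if c == "B")
        n = sum(l for l, c in zip(lengths, colours) if c == "N")
        if b == n_b and n == n_n:
            count += 1
    return count


def count_isomers(group, n_b, n_n):
    """Burnside count of distinct B/N substitution patterns under the group."""
    return sum(fixed_colourings(cycle_lengths(g), n_b, n_n) for g in group) // len(group)
```

For six substitutable sites with the group $\{e, (0\,5)(1\,4)(2\,3)\}$, the identity fixes $6!/(1!\,1!\,4!)=30$ B$_1$N$_1$ colourings while the two-fold element fixes none, giving $(30+0)/2=15$, in line with the corresponding table entry.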
\begin{figure*}[hpbt!]
\centering
\includegraphics[width=14.5cm]{77_4_2.pdf}
\caption{
Clar structures of the seventy-seven polycyclic aromatic hydrocarbons comprising up to six benzene rings. Coronene,
formally a seven-ring system, is shown as a special case (see discussions for more details):
1) Naphthalene, 2) Anthracene, 3) Phenanthrene, 4) Isochrysene, 5) Tetracene, 6) Pyrene, 7) Chrysene, 8) Tetrahelicene, 9) Tetraphene, 10) Pentacene, 11) Perylene, 12) Picene, 13) Pentahelicene, 14) Benzo[k]tetraphene, 15) Benzo[f]tetraphene, 16) Benzo[m]tetraphene, 17) Benzo[e]pyrene, 18) Pentaphene, 19) Benzo[c]tetraphene, 20) Benzo[g]chrysene, 21) Benzo[a]pyrene, 22) Benzo[a]tetracene, 23) Benzo[c]chrysene, 24) Benzo[a]tetraphene, 25) Hexacene, 26) Dibenzo[g,p]chrysene, 27) Dibenzo[fg,op]tetracene, 28) Benzo[rst]pentaphene, 29) Dibenzo[c,pqr]tetraphene, 30) Dibenzo[def,mno]chrysene, 31) 1,12-benzoperylene, 32) Benzo[s]picene, 33) Dibenzo[c,l]chrysene, 34) Dibenzo[de,mn]tetracene, 35) Naphtho[1,2-g]chrysene, 36) Dibenzo[de,qr]tetracene, 37) Benzo[h]pentaphene, 38) Dibenzo[a,c]tetracene, 39) Benzo[c]picene, 40) Naphtho[2,3-c]tetraphene, 41) Dibenzo[a,j]tetracene, 42) Naphtho[2,3-a]tetraphene, 43) Naphtho[1,2-c]chrysene, 44) Dibenzo[a,l]tetracene, 45) Hexahelicene, 46) Benzo[pqr]picene, 47) Dibenzo[m,pqr]tetraphene, 48) Dibenzo[ij,no]tetraphene, 49) Dibenzo[f,pqr]tetraphene, 50) Naphtho[2,1-c]tetraphene, 51) Naphtho[2,3-c]chrysene, 52) Dibenzo[a,c]tetraphene, 53) Benzo[a]pentacene, 54) Dibenzo[c,f]tetraphene, 55) Dibenzo[a,f]tetraphene, 56) Dibenzo[a,k]tetraphene, 57) Dibenzo[c,m]tetraphene, 58) Benzo[a]picene, 59) Dibenzo[f,k]tetraphene, 60) Benzo[f]picene,
61) Benzo[f]pentahelicene, 62) Naphtho[1,2,3,4-pqr]tetraphene, 63) Dibenzo[c,p]chrysene, 64) Benzo[c]pentahelicene, 65) Benzo[b]perylene, 66) Benzo[a]perylene, 67) Benzo[b]pentahelicene,
68) Naphtho[1,2-a]tetracene, 69) Benzo[a]pentaphene, 70) Dibenzo[a,m]tetraphene, 71) Dibenzo[c,k]tetraphene, 72) Benzo[c]pentaphene, 73) Hexaphene, 74) Naphtho[2,1-a]tetracene,
75) Benzo[b]picene, 76) Naphtho[2,1,8-qra]tetracene \& 77) Coronene. Names of PAHs are based on \Refs{NIST,sander1997polycyclic,ChemSpider}. For all molecules, inner sites are marked with red dots.
\label{PAH}
\end{figure*}
\begin{figure*}[htbp!]
\centering
\includegraphics[width=12cm]{cir_hist3_7.pdf}
\caption{Circular histogram depicting frequencies of all possible B,~N-substituted compounds of all seventy-seven PAHs. See Table \ref{tab:allcarbons} for more details.}
\label{fig:circular}
\end{figure*}
\begin{table*}[htbp!]
\caption{Number of constitutional isomers formed by substituting pairs of inner C atoms with B and N atoms. For every parent hydrocarbon, the number of products is collected for various stoichiometries along with their total: $n$ is the number of rings in the parent hydrocarbon, $I$ is the index of the hydrocarbon in Fig.~\ref{PAH}, $X$ is the number of inner C atoms in the parent hydrocarbon available for substitution, and $\lbrace Z \rbrace$ lists the cycle indices of the generator subgroup. Also given for every parent hydrocarbon is the isomorphic point group of the molecular symmetry group ($\mathcal{G}^{\rm iso.}$).}
\centering
\begin{tabular}{l l l r r r r r r r r r}
\hline
$n$ & $I$ & $\mathcal{G}^{\rm iso.}$ & $X$ & C$_{X-2}$ & C$_{X-4}$& C$_{X-6}$& C$_{X-8}$& C$_{X-10}$ & C$_{X-12}$& Total\textsuperscript{a} & $\lbrace Z \rbrace$\\
& & & & B$_1$N$_1$ & B$_2$N$_2$& B$_3$N$_3$& B$_4$N$_4$& B$_5$N$_5$ & B$_6$N$_6$& & \\
\hline
2 & 1 & $C_2$ & 2 & 1 & & & & & & 1 & \{$P_2^1$\}\\
3 & 2 & $D_2$ & 4& 3 & 3 & & & & & 6 & \{$2P_2^2$\}\\
3 & 3 & $C_2$ & 4& 6 & 4 & & & & & 10& \{$P_2^2$\}\\
4 & 4 & $D_3$ & 6& 5 & 18 & 4 & & & & 27& \{$P_3^2, P_2^3$\}\\
4 & 5,6 & $D_2$ & 6& 8 & 27 & 6 & & & & 82 & \{$P_2^3, P_2^2$\}\\
4 & 7 & $C_2$ & 6& 15 & 48 & 10 & & & & 73 & \{$P_2^3$\}\\
4 & 8 & $C_2$ & 6& 16 & 48 & 12 & & & & 76 & \{$P_2^2$\} \\
4 & 9 & $C_1$ & 6& 30 & 90 & 20 & & & & 140 & \{$P_1^6$\}\\
5 & 10 & $D_2$ & 8& 14 & 114 & 140 &22 & & & 290 & \{$2P_2^4$\}\\
5 & 11 & $D_2$ & 8& 17 & 119 & 150 &24 & & & 310 & \{$P_2^4,P_2^2$\} \\
5 & 12--18 & $C_2$ & 8& 28 & 216 & 280&38 & & & 3,934 & \{$P_2^4$ \} \\
5 & 19--24 & $C_1$ & 8& 56 & 420 & 560 &70 & & & 6,636 & \{$P_1^8$\}\\
6 & 25--27 & $D_2$ &10& 23 & 330 & 1056 & 810 & 66 & & 6,855 & \{$P_2^5,P_2^4$\} \\
6 & 28--41 & $C_2$ &10& 45 & 640 & 2100 & 1590 & 126 & & 63,014 & \{$P_2^5$\} \\
6 & 42--45 & $C_2$ &10& 46 & 640 & 2112 & 1590 & 132 & & 18,080 & \{$P_2^4$\} \\
6 & 46--76 & $C_1$ &10& 90 &1,260& 4,200 & 3,150 & 252 & & 277,512 & \{$P_1^{10}$\}\\
7 & 77 & $D_6$ &12& 14 & 274 & 1,586 & 2,976 & 1428 & 96 & 6,374 & \{$P_6^2, P_2^6, P_2^4$\} \\
\hline
\end{tabular}
\label{tab:innercarbons}
\newline
\textsuperscript{a} For a given $n$, values are reported as sum over all $I$.
\end{table*}
\begin{table*}[htbp!]
\caption{
Number of constitutional isomers formed by substituting pairs of
peripheral C atoms with B and N atoms. For every parent hydrocarbon, the number of products is collected for various stoichiometries along with their total: $n$ is the number of rings in the parent hydrocarbon, $I$ is the index of the hydrocarbon in Fig.~\ref{PAH}, $X$ is the number of peripheral C atoms in the parent hydrocarbon available for substitution, and $\lbrace Z \rbrace$ lists the cycle indices of the generator subgroup. Also given for every parent hydrocarbon is the isomorphic point group of the molecular symmetry group ($\mathcal{G}^{\rm iso.}$).
}
\begin{tabular}{l l l r r r r r r r r r r r l }
\hline
$n$ & $I$ & $\mathcal{G}^{\rm iso.}$ & $X$ & C$_{X-2}$ & C$_{X-4}$ & C$_{X-6}$ & C$_{X-8}$ & C$_{X-10}$ & C$_{X-12}$ & C$_{X-14}$ & C$_{X-16}$ &Total\textsuperscript{a} & $\left\lbrace Z\right\rbrace$\\
& & & & B$_1$N$_1$ & B$_2$N$_2$& B$_3$N$_3$& B$_4$N$_4$& B$_5$N$_5$ & B$_6$N$_6$& B$_7$N$_7$ & B$_8$N$_8$ & & $\left\lbrace p_x^y,\,p_n^m p_l^k \right\rbrace$\\
\hline
2 & 1 & $D_2$ & 8 & 14 & 114 & 140 & 22 & & & & & 290 & \{$2P_2^4$\}\\
3 & 2 & $D_2$ & 10 & 23 & 330 & 1,056 & 810 & 66 & & & & 2285 & \{$P_2^5,P_2^4$\} \\
3 & 3 & $C_2$ & 10 & 45 & 640 & 2,100 & 1,590 & 126 & & & & 4501 & \{$P_2^5$\} \\
4 & 4 & $D_3$ & 12 & 22 & 510 & 3,084 & 5,820 & 2,772 & 166 & & & 12,374 & \{$P_3^4, 2P_2^6$\} \\
4 & 5 & $D_2$ & 12 & 33 & 765 & 4,620 & 8,730 & 4,158 & 246 & & & 18,552 & \{$2P_2^6$\} \\
4 & 6 & $D_2$ & 10 & 23 & 330 & 1,056 & 810 & 66 & & & & 2,285 & \{$P_2^5, P_2^4$\} \\
4 & 7,8 & $C_2$ & 12 & 66 & 1,500 & 9,240 & 17,370 & 8,316 & 472 & & & 73,928 & \{$P_2^6$\}\\
4 & 9 & $C_1$ & 12 & 132 & 2,970 & 18,480 & 34,650 & 16,632 & 924 & & & 73,788 & \{$P_1^{12}$\} \\
5 & 10 & $D_2$ & 14 & 46 & 1,533 & 15,030 & 52,710 & 63,108 & 21,126 & 868 & & 154,421 & \{$P_2^7, P_2^6$\} \\
5 & 11 & $D_2$ & 12 & 33 & 765 & 4,620 & 8,730 & 4,158 & 246 & & & 18,552 & \{$2P_2^6$\} \\
5 & 12--15, 18 & $C_2$ & 14 & 91 & 3,024 & 30,030 & 105,210 & 126,126 & 42,112 & 1,716 & & 1,541,545 & \{$P_2^7$\} \\
5 & 16 & $C_2$ & 14 & 92 & 3,024 & 30,060 & 105,210 & 126,216 & 42,112 & 1,736 & & 308,450 & \{$P_2^6$\} \\
5 & 17 & $C_2$ & 12 & 66 & 1,500 & 9,240 & 17,370 & 8,316 & 472 & & & 73,928 & \{$P_2^6$\} \\
5 & 19,20\& & $C_1$ & 14 & 182 & 6,006 & 60,060 & 210,210 & 252,252 & 84,084 & 3,432 & & 3,081,130 & \{$P_1^{14}$\} \\
& 22--24 & & & & & & & & & & & \\
5 & 21 & $C_1$ & 12 & 132 & 2,970 & 18,480 & 34,650 & 16,632 & 924 & & & 73,788 & \{$P_1^{12}$\} \\
6 & 25,26 & $D_2$ & 16 & 60 & 2,772 & 40,040 & 225,540 & 504,504 & 420,840 & 102,960 & 3,270 & 2,599,972 & \{$2P_2^8$\} \\
6 & 27 & $D_2$ & 14 & 46 & 1,533 & 15,030 & 52,710 & 63,108 & 21,126 & 868 & & 154,421 & \{$P_2^7,P_2^6$\} \\
6 & 28,29 \& & $C_2$ & 14 & 91 & 3,024 & 30,030 & 105,210 & 126,126 & 42,112 & 1,716 & & 1,233,236 & \{$P_2^7$\} \\
& 34,36 & & & & & & & & & & & & \\
6 & 30,31 & $C_2$ & 12 & 66 & 1,500 & 9,240 & 17,370 & 8,316 & 472 & & & 73,928 & \{$P_2^6$\} \\
6 & 32,33 \& & $C_2$ & 16 & 120 & 5,488 & 80,080 & 450,660 & 1,009,008 & 841,120 & 205,920 & 6,470 & 31,186,392 & \{$P_2^8$\} \\
& 35,37--45 & & & & & & & & & & & \\
6 & 46--49,62 \& & $C_1$ & 14 & 182 & 6,006 & 60,060 & 210,210 & 252,252 & 84,084 & 3,432 & & 4,929,808 & \{$P_1^{14}$\} \\
& 65,66,76 & & & & & & & & & & & \\
6 & 50--61,63 \& & $C_1$ & 16 & 240 & 10,920 & 160,160 & 900,900 & 2,018,016 & 1,681,680 & 411,840 & 12,870 & 119,522,398 & \{$P_1^{16}$\} \\
& 64,67--75 & & & & & & & & & & & \\
7 & 77 & $D_6$ & 12 & 11 & 265 & 1,542 & 2,940 & 1,386 & 90 & & & 6,234 & \{$P_6^2, 2P_2^6$\} \\
\hline
\end{tabular}
\label{tab:PeripheralCarbons}
\newline
\textsuperscript{a} For a given $n$, values are reported as sum over all $I$.
\end{table*}
\begin{turnpage}
\begin{table*}[htbp!]
\caption{
Number of constitutional isomers formed by substituting pairs of
all C atoms with B and N atoms.
For every parent hydrocarbon, number of products are collected for various stoichiometries
along with their total: $n$ is the number of rings in the parent hydrocarbon,
$I$ is the index of the hydrocarbon in Fig.~\ref{PAH}, and
$X$ is the number of C atoms in the parent hydrocarbon
available for substitution.
Also given for every parent hydrocarbon,
is the isomorphic point group of the molecular symmetry group ($\mathcal{G}^{\rm iso.}$).
For clarity, the cycle indices of the group generators, $\left\lbrace Z\right\rbrace$, are collected in the Table footnote\textsuperscript{a}.}
\resizebox{\linewidth}{!}{%
\begin{tabular}{l l l r r r r r r r r r r r r r r r}
\hline
$n$ & $I$ & $\mathcal{G}^{\rm iso.}$ & $X$ & C$_{X-2}$ & C$_{X-4}$ & C$_{X-6}$ & C$_{X-8}$ & C$_{X-10}$ & C$_{X-12}$ & C$_{X-14}$ & C$_{X-16}$ & C$_{X-18}$ & C$_{X-20}$ &C$_{X-22}$ &C$_{X-24}$& C$_{X-26}$&Total\textsuperscript{b}\\
& & & & B$_1$N$_1$ & B$_2$N$_2$& B$_3$N$_3$& B$_4$N$_4$& B$_5$N$_5$ & B$_6$N$_6$& B$_7$N$_7$ & B$_8$N$_8$ & B$_9$N$_9$ & B$_{10}$N$_{10}$ & B$_{11}$N$_{11}$ & B$_{12}$N$_{12}$ &B$_{13}$N$_{13}$&\\
\hline
2 & 1 & $D_2$ & 10 & 23 & 330 & 1,056 & 810 & 66 & & & & & & &&& 2,285\\
3 & 2 & $D_2$ & 14 & 46 & 1,533 & 15,030 & 52,710 & 63,108 & 21,126 & 868 & & & & &&& 154,421 \\
3 & 3 & $C_2$ & 14 & 91 & 3,024 & 30,030 & 105,210 & 126,126 & 42,112 & 1,716 & & & & &&& 308,309\\
4 & 4 & $D_3$ & 18 & 51 & 3,096 & 61,890 & 510,888 & 1,837,836 & 2,859,726 & 1,750,320 & 328,500 & 8,110 & & &&& 7,360,417\\
4 & 5 & $D_2$ & 18 & 77 & 4,644 & 92,848 & 766,332 & 2,756,964 & 4,289,544 & 2,625,760 & 492,750 & 12,190 & & &&& 11,041,109\\
4 & 6 & $D_2$ & 16 & 63 & 2,785 & 40,142 & 225,690 & 504,894 & 421,050 & 103,140 & 3,290 & & & &&& 1,301,054\\
4 & 7 & $C_2$ & 18 & 153& 9,216 & 185,640 & 1,531,908 & 5,513,508 & 8,577,408 & 5,250,960 & 984,870 & 24,310 & & &&& 22,077,973 \\
4 & 8 & $C_2$ & 18 & 154& 9,216 & 185,696 & 1,531,908 & 5,513,928 & 8,577,408 & 5,251,520 & 984,870 & 24,380 & & &&& 22,079,080 \\
4 & 9 & $C_1$ & 18 & 306& 18,360 & 371,280 & 3,063,060 & 11,027,016 & 17,153,136 & 10,501,920 & 1,969,110 & 48,620 & & &&& 44,152,808\\
5 & 10& $D_2$ & 22 & 116& 11,055 & 373,110 & 5,597,460 & 40,739,328 & 149,382,156 & 274,364,760 & 240,075,990 & 88,915,400 & 10,671,738 & 176,484 &&& 810,307,597 \\
5 & 11& $D_2$ & 20 & 98 & 7,352 & 193,984 & 2,205,812 & 11,641,224 & 29,103,760 & 33,258,880 & 15,592,270 & 2,310,220 & 46,448 & &&& 94,360,048 \\
5 & 12--15,18& $C_2$ & 22 & 231 & 22,000 & 746,130 & 11,192,940 & 81,477,396 & 298,755,072 & 548,725,320 & 480,140,430 & 177,827,650 & 21,340,704 & 352,716 &&& 8,102,902,945\\
5 & 16& $C_2$ & 22 & 232 & 22,000 & 746,220 & 11,192,940 & 81,478,656 & 298,755,072 & 548,729,520 & 480,140,430 & 177,830,800 & 21,340,704 & 352,968 &&& 1,620,589,542\\
5 & 17& $C_2$ & 20 & 190 & 14,580 & 387,600 & 4,409,580 & 23,279,256 & 58,200,240 & 66,512,160 & 31,179,150 & 4,618,900 & 92,504 & &&& 188,694,160 \\
5 & 19,20,22--24& $C_1$ &22 & 462 & 43,890 & 1,492,260 & 22,383,900 & 162,954,792 & 597,500,904 & 1,097,450,640 & 960,269,310 & 355,655,300 & 42,678,636 & 705,432 &&& 16,205,677,630\\
5 & 21& $C_1$ &20 & 380 & 29,070 & 775,200 & 8,817,900 & 46,558,512 & 116,396,280 & 133,024,320 & 62,355,150 & 9,237,800 & 184,756 & &&& 377,379,368\\
6 & 25,26& $D_2$ & 26 & 163 & 22,542 & 1,151,216 & 27,343,030 & 334,640,790 & 2,230,954,440 & 8,286,315,840 & 17,090,574,930 & 18,989,469,950 & 10,634,147,524 & 2,636,560,416 & 219,721,684 & 2,600,612 & 120,907,006,274 \\
6 & 27 & $D_2$ & 24 & 141 & 16,059 & 673,270 & 12,873,780 & 123,563,628 & 624,680,196 & 1,682,775,288 & 2,366,416,530 & 1,636,032,230 & 490,822,458 & 48,678,084 & 676,984 & & 6,987,208,648 \\
6 & 28,29,34 \& 36& $C_2$ &24 & 276 & 31,944 & 1,345,960 & 25,742,970 & 247,118,256 & 1,249,329,312 & 3,365,515,296 & 4,732,773,210 & 3,272,028,760 & 981,616,944 & 97,349,616 & 1,352,540 && 55,896,820,336 \\
6 & 30,31& $C_2$ &22 & 231 & 22,000 & 746,130 & 11,192,940 & 81,477,396 & 298,755,072 & 548,725,320 & 480,140,430 & 177,827,650 & 21,340,704 & 352,716 &&& 3,241,161,178\\
6 & 32,33,35,37--41& $C_2$ & 26 & 325 & 44,928 & 2,302,300 & 54,681,770 & 669,278,610 & 4,461,874,560 & 16,572,613,200 & 34,181,059,770 & 37,978,905,250 & 21,268,222,976 & 5,273,104,200 & 439,431,356 & 5,200,300 & 967,253,756,360 \\
6 & 42--45 & $C_2$ & 26 & 326 & 44,928 & 2,302,432 & 54,681,770 & 669,281,580 & 4,461,874,560 & 16,572,631,680 & 34,181,059,770 & 37,978,939,900 & 21,268,222,976 & 5,273,120,832 & 439,431,356 & 5,201,224 & 483,627,173,336 \\
6 & 46--49,62,65,66 \& 76 & $C_1$ & 24 & 552 & 63,756 & 2,691,920 & 51,482,970 & 494,236,512 & 2,498,640,144 & 6,731,030,592 & 9,465,511,770 & 6,544,057,520 & 1,963,217,256 & 194,699,232 & 2,704,156 & & 223,586,691,040 \\
6 & 50--61,63,64,67--75 & $C_1$ & 26 & 650 & 89,700 & 4,604,600 & 109,359,250 & 1,338,557,220 & 8,923,714,800 & 33,145,226,400 & 68,362,029,450 & 75,957,810,500 & 42,536,373,880 & 10,546,208,400 & 878,850,700 & 10,400,600 & 5,561,704,201,450 \\
7 & 77 & $D_6$ & 24 & 49 & 5,411 & 224,626 & 4,292,790 & 41,190,876 & 208,237,164 & 560,936,856 & 788,825,460 & 545,356,070 & 163,616,810 & 16,228,212 & 226,150 & & 2,329,140,474 \\
\hline
\end{tabular}
}
\footnotesize \textsuperscript{a} $\left\lbrace Z\right\rbrace: \left\lbrace p_x^y,\,p_n^m p_l^k \right\rbrace$ 1) \{ $P_2^5, P_2^4$ \}, 2) \{ $P_2^7, P_2^6$ \}, 3) \{ $P_2^7$ \}, 4) \{ $P_3^6, 2P_2^9$ \}, 5) \{ $P_2^9, P_2^8$ \}, 6) \{ $P_2^8, P_2^6$ \}, 7) \{ $P_2^9$ \}, 8) \{ $P_2^8$ \}, 9) \{ $P_1^{18}$ \}, 10) \{ $P_2^{11}, P_2^{10}$ \}, 11) \{ $P_2^{10}, P_2^8$ \}, 12--15,18) \{ $P_2^{11}$ \}, 16) \{ $P_2^{10}$ \}, 17) \{ $P_2^{10}$ \}, 19,20,22--24) \{ $P_1^{22}$ \}, 21) \{ $P_1^{20}$ \}, 25,26) \{ $P_2^{13}, P_2^{12}$ \}, 27) \{ $P_2^{12}, P_2^{10}$ \}, 28,29,34) \{ $P_2^{12}$ \}, 30,31) \{ $P_2^{11}$ \}, 32,33,35,37--41) \{ $P_2^{13}$ \}, 36) \{ $P_2^{12}$ \}, 42--45) \{ $P_2^{12}$ \}, 46--49,62,65,66 \& 76) \{ $P_1^{24}$ \}, 50--61,63,64,67--75) \{ $P_1^{26}$ \}, 77) \{ $P_6^4, P_2^{12}, P_2^{10}$ \};\,
\footnotesize \textsuperscript{b} For a given $n$, values are reported as sum over all $I$.
\label{tab:allcarbons}
\end{table*}
\end{turnpage}
\section{\label{sec:level2}Results and Discussion}
\subsection{Clar structures of Benzenoid Hydrocarbons}
Benzenoid hydrocarbons (BHs) are a class of fusenes with two or more
six-membered rings that are mutually-condensed, {\it i.e.}
each ring sharing an edge with at least one other ring\cite{balaban1968chemical}.
The structures of BHs are typically isomorphic to that of hexagonal sub-lattices, however, helicenoid compounds---a class of BHs---adopt structures that are non-isomorphic to hexagonal sub-lattices\cite{brinkmann2007fusenes}.
The valence electronic properties of BHs can be explained by Clar aromatic sextet theory\cite{clar1964c,sola2013forty}.
The essence of this theory is to derive a graphical representation of the electronic structure, which encodes bonding information from all the Kekul{\'e} structures (KSs) as well as the resonant hybrid one. For a given BH, every KS can be represented as a Clar diagram composed of one or more aromatic $\pi$-sextets (hexagons inscribed with circles) and non-sextet carbon-carbon double bonds. A Clar diagram with the largest number of aromatic $\pi$-sextets is the so-called {\it Clar structure} (CS). The CS is a versatile chemical descriptor for benzenoid-Kekulean structures. Another such descriptor is the Wiener index\cite{nikolic1995wiener,vukivcevic2004wiener}, a topological score mapping the molecular structure to various chemical properties.
Algebraic representations of CSs have indeed been shown to capture more chemical information than a single KS\cite{randic2004algebraic,randic2018local}.
Multiple Clar diagrams, where the $\pi$-sextets are located in adjacent rings, can be collectively represented by a single {\it migrating} CS\cite{sola2013forty}.
The total number of $\pi$-sextets in the CS has been shown to correlate with bond-length variations, aromaticity and also HOMO-LUMO gap of the BH\cite{sola2013forty}.
Throughout this paper, all molecular cartoons are presented in the Clar form.
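As a minimal illustration of the Wiener index mentioned above, the sketch below encodes the naphthalene carbon skeleton as a simple graph and sums the shortest-path distances over all atom pairs; the atom numbering is ours and arbitrary.

```python
import networkx as nx

# Wiener index: sum of shortest-path distances over all pairs of atoms.
# Naphthalene's 10-carbon skeleton: two hexagons fused at the bond (4, 5).
ring_a = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)]
ring_b = [(5, 6), (6, 7), (7, 8), (8, 9), (9, 4)]
G = nx.Graph(ring_a + ring_b)

print(nx.wiener_index(G))  # 109.0
```

The value 109 matches the known Wiener index of naphthalene.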
\subsection{Constructive Enumeration of Benzenoid Hydrocarbons}
PAHs with up to six benzene rings have been constructively enumerated using an in-house program. For a given number of rings, the program exhaustively generates all possible molecular structures by tiling ({\it i.e.} plane-filling) and discards all disconnected structures such as non-bonded molecular dimers. As an exemplary case, in Fig.~\ref{fig01} we have collected all the generated structures containing three rings. The resulting structures include benzenoid ($B$) as well as non-benzenoid ($nB$) compounds. While the latter are excluded in this study, among benzenoids we have considered only those with a valid Kekul{\'e} formula ($K$). We note in passing that compounds which do not have a valid Kekul{\'e} formula ($nK$), such as the one shown in Fig.~\ref{fig01}, are non-aromatic as long as the system is charge neutral. It is worthwhile to note that the aforementioned procedure allows certain benzenoid structures with more rings to qualify; for instance, isochrysene, a formally 4-ring system (see Fig.~\ref{fig01}, blue labels), can be generated using three benzene rings mutually connected at 1,2 sites in a non-benzenoid fashion ({\it i.e.} with sigma bonds).
For a given number of rings, we exclusively collect all the benzenoid molecules with valid Kekul{\'e} formulae ($B+K$) (see Fig.~\ref{fig01}, green labels).
The structures of all the resulting hydrocarbons comprising up to six benzene rings are displayed in Fig.~\ref{PAH}. For 2-, 3-, 4-, 5-, and 6-ring compounds, our procedure has resulted in 1, 2, 6, 15, and 52 hydrocarbons, respectively. These numbers agree with those given by Brinkmann {\it et al.}\cite{brinkmann2007fusenes}, who further enumerated the hydrocarbons with more benzene rings and reported 195, 807, 3,513, and 16,025 entries for 7-, 8-, 9-, and 10-ring compounds, respectively. We note that our plane-filling algorithm does not distinguish between the 6-ring compound hexahelicene (structure 45 in Fig.~\ref{PAH}) and the peri-condensed 7-ring compound coronene, a.k.a. superbenzene (structure 77 in Fig.~\ref{PAH})---both 3D structures are surjectively mapped to the same plane-filling 2D pattern. For this reason, we have included coronene as the sole 7-ring system in this work. It is important to note that helical compounds such as tetrahelicene (8 in Fig.~\ref{PAH}), pentahelicene (13 in Fig.~\ref{PAH}) and hexahelicene (45 in Fig.~\ref{PAH}) are chiral, with two enantiomers having the same substitution pattern. For a given number of rings, we found the number of hydrocarbon compounds in Fig.~\ref{PAH} to fully agree with a constructive enumeration performed using the program {\tt CaGe} \cite{brinkmann2010cage}. The main objective of this study is to group the seventy-seven PAHs shown in Fig.~\ref{PAH} according to their molecular symmetry group as well as available substitution sites, and to enumerate all possible unique compounds that can be formed by replacing pairs of C atoms in PAHs with B and N atoms.
\subsection{\label{sec:level3}Chemical Space Enumeration: Mathematical Formulation and Computation}
Combinatorial enumeration of isomers, topologies, nuclear spin statistical weights, etc., can be efficiently performed using the generalized character cycle index (GCCI)\cite{balasubramanian1985applications,balasubramanian1992combinatorics} of the molecular symmetry group, which is a subgroup of the complete nuclear permutation inversion (CNPI) group\cite{bunker2006molecular}.
The GCCI ($\mathcal{Z}^{\Gamma}_\mathcal{G}$) of a group $\mathcal{G}$ corresponding to a particular irreducible representation $\Gamma$ is defined as
\begin{eqnarray}
\mathcal{Z}^{\Gamma}_\mathcal{G} & = & \frac{1}{ |\mathcal{G}| }
\sum_{g \in \mathcal{G}} \chi^{\Gamma}(g)
\left[ \Pi_{k} \mathcal{F}_k \right]
\label{eq:gcci}
\end{eqnarray}
where $g$ goes over all elements of the permutation group $\mathcal{G}$, $\chi^{\Gamma}(g)$ is the character of $g$'s matrix representation for the given irreducible representation $\Gamma$ and
$\Pi_{k} \mathcal{F}_k$ corresponds to the cycle representation of element $g$ with $k$ running over all the factor-cycles of $g$.
In Eq.~\ref{eq:gcci},
$|\mathcal{G}|$ denotes the group order and
the figure counting series $(\mathcal{F})$ for cycle length $n$ is given by $\mathcal{F} = \mathcal{A}^n + \mathcal{B}^n + \ldots$ with $\mathcal{A}$, $\mathcal{B}$, etc. being the objects to be permuted. For the special case where $\Gamma$ is the (fully) symmetric representation of $\mathcal{G}$, one arrives at the P{\'o}lya enumeration formula\cite{polya1937kombinatorische,polya2012combinatorial} that has been so successfully employed for enumerating various chemical subspaces\cite{faulon2005enumerating,balaban1991enumeration}.
\begin{eqnarray}
\mathcal{Z}_\mathcal{G} & = & \frac{1}{ |\mathcal{G}| }
\sum_{g \in \mathcal{G}}
\left[ \Pi_{k} \mathcal{F}_k \right]
\label{eq:ci}
\end{eqnarray}
For the enumeration of BN-PAH chemical space, $\mathcal{F}$ takes the form $C^n+B^n+N^n$, where $n$ is the cycle length. Application of Eq.~\ref{eq:ci}, for instance, to naphthalene (${\rm C}_{10}{\rm H}_{8}$) with $\mathcal{G}$ isomorphic to the ${D}_2$ point group results in the following pattern inventory
\begin{widetext}
\begin{eqnarray}
\mathcal{Z}_\mathcal{G} & = & C_{10}+3C_9N+3C_9B+15C_8N_2+23C_8NB+15C_8B_2+32C_7N_3+92C_7N_2B+ \nonumber \\
& & 92C_7NB_2+32C_7B_3+60C_6N_4+212C_6N_3B+330C_6N_2B_2+212C_6NB_3+ \nonumber \\
& & 60C_6B_4+66C_5N_5+318C_5N_4B+636C_5N_3B_2+636C_5N_2B_3+318C_5NB_4+ \nonumber \\
& & 66C_5B_5+60C_4N_6+318C_4N_5B+810C_4N_4B_2+1056C_4N_3B_3+810C_4N_2B_4+\nonumber \\
& & 318C_4NB_5+60C_4B_6+32C_3N_7+212C_3N_6B+636C_3N_5B_2+1056C_3N_4B_3+\nonumber \\
& & 1056C_3N_3B_4+636C_3N_2B_5+212C_3NB_6+32C_3B_7+15C_2N_8+92C_2N_7B+\nonumber \\
& & 330C_2N_6B_2+636C_2N_5B_3+810C_2N_4B_4+636C_2N_3B_5+330C_2N_2B_6+\nonumber \\
& & 92C_2NB_7+15C_2B_8+3CN_9+23CN_8B+92CN_7B_2+212CN_6B_3+318CN_5B_4+\nonumber \\
& & 318CN_4B_5+212CN_3B_6+92CN_2B_7+23CNB_8+3CB_9+N_{10}+3N_9B+15N_8B_2+\nonumber \\
& &32N_7B_3+60N_6B_4+66N_5B_5+60N_4B_6+32N_3B_7+15N_2B_8+3NB_9+B_{10}
\label{eq:patterninventory}
\end{eqnarray}
\end{widetext}
where each term corresponds to the stoichiometry ${\rm C}_{x}{\rm B}_{y}{\rm N}_{z}$ ($x,y,z\in\{0,\ldots,10\}$ and $x+y+z=10$).
In all cases, the number of ${\rm H}$ atoms, not shown in Eq.~\ref{eq:patterninventory},
is 8 as in the parent hydrocarbon naphthalene.
The coefficient of each term in Eq.~\ref{eq:patterninventory} gives the number of unique compounds for that stoichiometry ({\it i.e.} the number of constitutional isomers). For example, the total number of naphthalene derivatives with stoichiometry ${\rm C}_{8}{\rm B}{\rm N}{\rm H}_{8}$ is 23 as given by the fifth term on the right side of Eq.~\ref{eq:patterninventory}.
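The pattern inventory of Eq.~\ref{eq:patterninventory} can be reproduced symbolically. Below is a minimal {\tt sympy} sketch; the cycle types $1^{10}$, $1^2\,2^4$, and twice $2^5$ follow from the naphthalene carbon-permutation generators discussed later in this section.

```python
from sympy import symbols, expand

C, B, N = symbols('C B N')

def f(n):
    # Figure counting series for a cycle of length n
    return C**n + B**n + N**n

# Cycle index of the order-4 permutation group acting on naphthalene's ten
# C atoms: identity (1^10), one element of type 1^2 2^4, two of type 2^5.
Z = expand((f(1)**10 + f(1)**2 * f(2)**4 + 2 * f(2)**5) / 4)

print(Z.coeff(C, 8).coeff(B, 1).coeff(N, 1))  # 23, the C8BN coefficient
print(Z.coeff(C, 8).coeff(N, 2))              # 15, the C8N2 coefficient
```

Both printed coefficients match the corresponding terms of Eq.~\ref{eq:patterninventory}.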
For asymmetric molecules, the permutation group is isomorphic to the $C_1$ point group and the GCCI reduces to the multinomial expansion
\begin{equation}
\mathcal{Z}_{C_{1}}= \left( C+B+N \right)^m = \underset{x+y+z=m}{\sum}\left(\begin{array}{c}
m\\
x,y,z
\end{array}\right)C_{x}B_{y}N_{z},
\label{eq:c1}
\end{equation}
where $m$ is the number of sites available for substitution and the summation goes over all combinations of non-negative $x$, $y$ and $z$ with the constraint $x+y+z=m$.
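For instance, Eq.~\ref{eq:c1} can be checked numerically with a short sketch (the helper name is ours):

```python
from math import factorial

def multinomial(m, x, y, z):
    """Number of ways to place x C, y B and z N atoms on m labeled sites."""
    assert x + y + z == m
    return factorial(m) // (factorial(x) * factorial(y) * factorial(z))

# Asymmetric (C1) framework with 12 sites, one B,N pair: C10 B1 N1
print(multinomial(12, 10, 1, 1))  # 132
# Same framework with four B,N pairs: C4 B4 N4
print(multinomial(12, 4, 4, 4))   # 34650
```

Both values match the $n=4$, $I=9$ row of Table~\ref{tab:PeripheralCarbons}, whose 12-site framework is asymmetric.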
In the present work, we are solely interested in the enumeration of isosteric/isoelectronic compounds with equal number of B and N atoms. For example, all naphthalene based compounds are of the form
${\rm C}_{10-2y}{\rm B}_{y}{\rm N}_{y}{\rm H}_{8}$, where $y=0,\ldots,5$. For all seventy-seven hydrocarbons listed in Fig.~\ref{PAH}, we have separately collected the pattern inventory for such substitutions of the inner C atoms (Table~\ref{tab:innercarbons}), outer C atoms (Table~\ref{tab:PeripheralCarbons}), and both inner and outer C sites without restrictions (Table~\ref{tab:allcarbons}).
In these tables, we have also listed the cycle indices of the group generators denoted by $\left\lbrace Z\right\rbrace$.
For naphthalene, in the case of substitution of the two inner C sites, the relevant group is $\mathcal{G}=\left\lbrace (1)(2), (1,2) \right\rbrace$ and its generator is $\left\lbrace (1,2) \right\rbrace$. The cycle index set is then denoted by $\lbrace P_2^1 \rbrace$, {\it i.e.}, one cycle of length two.
For the substitution pattern of the peripheral C atoms of naphthalene, the appropriate permutation group $\mathcal{G}$ can be formed using the generator set $\left\lbrace (1,9)(2,8)(3,7)(4,6), (1,4)(2,3)(6,9)(7,8) \right\rbrace$ with $\left\lbrace Z\right\rbrace=\lbrace 2P_2^4 \rbrace$ (see Table~\ref{tab:PeripheralCarbons}).
The permutation group used for collecting the substitution pattern of all ten carbons of naphthalene is derived using the generator set $\left\lbrace (1,9)(2,8)(3,7)(4,6), (1,4)(2,3)(5,10)(6,9)(7,8) \right\rbrace$ with the corresponding cycle indices $\lbrace P_2^5, P_2^4 \rbrace$, {\it i.e.}, one generator with five cycles of length two, and another with four cycles of length two.
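These cycle structures can be verified mechanically; a sketch using {\tt sympy}'s permutation groups, with the atom labels of the text shifted to 0-based indexing:

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Naphthalene all-carbon generators from the text, written 0-indexed:
# (1,9)(2,8)(3,7)(4,6)       -> [[0,8],[1,7],[2,6],[3,5]]
# (1,4)(2,3)(5,10)(6,9)(7,8) -> [[0,3],[1,2],[4,9],[5,8],[6,7]]
g1 = Permutation([[0, 8], [1, 7], [2, 6], [3, 5]], size=10)
g2 = Permutation([[0, 3], [1, 2], [4, 9], [5, 8], [6, 7]], size=10)
G = PermutationGroup([g1, g2])

print(G.order())                           # 4, isomorphic to D2
print(sorted(g1.cycle_structure.items()))  # [(1, 2), (2, 4)]: two fixed atoms
print(sorted(g2.cycle_structure.items()))  # [(2, 5)]: five 2-cycles
```

The group of order 4 and the cycle types $1^2 2^4$ and $2^5$ reproduce the $\lbrace P_2^5, P_2^4 \rbrace$ bookkeeping used in the tables.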
For all seventy-seven PAHs displayed in Fig.~\ref{PAH}, the total number of compounds obtained by substituting all available C atoms (both inner and outer sites without restrictions) is graphically presented in Fig.~\ref{fig:circular}. The overall trend is that, for a given number of substitution sites, the maximal number of products is obtained for asymmetric hydrocarbon frameworks, which follow the multinomial distribution of Eq.~\ref{eq:c1}.
\subsection{Symmetry-controlled yield-pattern selectivity}
For a given PAH, deviations from a multinomial distribution of B, N-substituted compounds increase with the order of the symmetry group of the hydrocarbon. Such a trend has been discussed in the past for the enumeration of gallium arsenide (Ga$_x$As$_y$) clusters\cite{balasubramanian1988enumeration,balasubramanian1992combinatorics}. In the case of symmetric PAHs, it is interesting to note that even when two (or more)
PAHs share the same rotational group and comprise the same number of substitution sites ($X$),
these compounds can lead to distinct GCCIs ($\mathcal{Z}_\mathcal{G}$) and therefore to distinct product distributions. For example, see the two rows corresponding to $n=4,\,I=7$ and $n=4,\,I=8$ in Table~\ref{tab:innercarbons}.
While on one hand the symmetry groups of the respective compounds, chrysene and tetrahelicene (7 and 8 in Fig.~\ref{PAH}), correspond to the same isomorphic point group $C_2$; on the other, their inner-substituted analogues result in different constitutional isomer distributions (see Table~\ref{tab:innercarbons}). Similar trends are also noted in the cases of larger PAHs; a few illustrative examples are on display in Fig.~\ref{fig:isomerdist}, while all such instances are collected in Table~\ref{tab:dist_table}.
Qualitatively, such a selectivity can be understood by considering the fact that, in the case of tetrahelicene (a {\it cisoid} fused system, structure 8 in Fig.~\ref{PAH}), two C atoms lie on the principal axis whose positions are invariant with respect to the $C_2$ rotation. In other words, the corresponding labels form an {\it invariant subspace} of the isomorphic permutation operation.
In the case of chrysene (a {\it transoid} fused system, structure 7 in Fig.~\ref{PAH}), the $C_2$ principal axis is normal to the molecular plane and
passes through the geometric center of the molecule without coinciding with any of the C atoms, {\it i.e.}, zero-invariant subspace.
So tetrahelicene, which has a larger invariant subspace, can be thought of as a permutationally less symmetric molecule than chrysene, as far as the inner C atom framework is considered.
\begin{table}[htbp!]
\centering
\caption{
Symmetry-controlled yield-pattern selectivities
across all seventy-seven PAHs:
$n$ is the number of rings in the parent hydrocarbon,
$\mathcal{G}$ is the point group isomorphic to the symmetry group,
$X$ is the number of substitution sites in the parent hydrocarbon,
$I$ is the index of the hydrocarbon in Fig.~\ref{PAH},
and $\lbrace Z \rbrace$ denotes the cycle indices of the generator subgroup.}
\begin{tabular}{l l l l l l l l}
\hline
$n$~~~& $\mathcal{G}$~~~ & $X$~~~ & \multicolumn{2}{l}{$\mathcal{A}$} &~~~& \multicolumn{2}{l}{$\mathcal{B}$}\\
\cline{4-5}\cline{7-8}
& & & $I$ &$\lbrace Z \rbrace$& & $I$ &$\lbrace Z \rbrace$ \\
\hline
\multicolumn{8}{l}{Inner sites} \\
4 & $C_2$ & 6 & 7 &\{$P_2^3$\} && 8 & \{$P_2^2$\} \\
5 & $D_2$ & 8 & 10 &\{2$P_2^4$\} && 11 & \{$P_2^4,P_2^2$\} \\
6 & $C_2$ & 10 & 28--41 &\{$P_2^5$\} && 42--45 & \{$P_2^4$\} \\
\multicolumn{8}{l}{} \\
\multicolumn{8}{l}{Peripheral sites} \\
5 & $C_2$ & 14 & 12--15, 18 &\{$P_2^7$\} && 16 & \{$P_2^6$\} \\
\multicolumn{8}{l}{} \\
\multicolumn{8}{l}{All sites} \\
4 & $C_2$ & 18 & 7 &\{$P_2^9$\} & &8 &\{$P_2^8$\} \\
5 & $C_2$ & 22 & 12--15, 18 &\{$P_2^{11}$\} & &16 & \{$P_2^{10}$\} \\
6 & $C_2$ & 26 & 32, 33, 35, 37--41 &\{$P_2^{13}$\} & & 42--45 &\{$P_2^{12}$\} \\
\hline
\end{tabular}
\label{tab:dist_table}
\end{table}
\begin{figure}[htbp!]
\centering
\includegraphics[width=8.5cm]{distribution.pdf}
\caption{
Selected cases of symmetry-controlled yield-pattern selectivity across
BN-PAH compounds. Molecules on left and right sides have the same point group symmetry and same number of inner C sites available for substitution.
}
\label{fig:isomerdist}
\end{figure}
To gain a more quantitative appreciation of the
aforementioned selectivity and more importantly, to compute the actual
distribution, one has to consider the corresponding nuclear permutation groups and inspect the cycle length structure of the group generators. Let us consider the GCCIs for the case
of inner-site substitutions of the two PAHs chrysene
($\mathcal{Z}_\mathcal{G,A}$) and tetrahelicene ($\mathcal{Z}_\mathcal{G,B}$):
\begin{eqnarray}
\mathcal{Z}_\mathcal{G,A} & = & \frac{1}{2} \lbrace (B+N+C)^6 + (B^2+N^2+C^2)^3 \rbrace \nonumber \\
\mathcal{Z}_\mathcal{G,B} & = & \frac{1}{2} \lbrace (B+N+C)^6 + (B+N+C)^2(B^2+N^2+C^2)^2 \rbrace
\label{eq.symm}
\end{eqnarray}
While expanding the expressions on the right side yields the pattern inventory, it is evident that the difference in the number
of substituted products between the two PAHs arises from the generator cycle length structure. In Eq.~\ref{eq.symm}, the first factor on the second term, $(B+N+C)^2$, indicates that two C atoms are invariant with respect to the corresponding symmetry element $C_2\equiv(1)(2)(3,4)(5,6)$. Typically, BN-PAH molecules are synthesized by starting with suitable precursor compounds, see for example \Ref{bosdet200710a}.
In order to realize the symmetry-controlled yield-pattern selectivity proposed here, for any given PAH, synthesis strategies must statistically account for all possible substituted compounds.
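Expanding the two cycle indices of Eq.~\ref{eq.symm} quantifies this selectivity directly; a minimal {\tt sympy} sketch for the single B,~N pair stoichiometry:

```python
from sympy import symbols, expand, Rational

B, N, C = symbols('B N C')
f1 = B + N + C
f2 = B**2 + N**2 + C**2

ZA = expand(Rational(1, 2) * (f1**6 + f2**3))          # chrysene, inner sites
ZB = expand(Rational(1, 2) * (f1**6 + f1**2 * f2**2))  # tetrahelicene, inner sites

# Same C2-isomorphic point group, different number of C4B1N1 products:
print(ZA.coeff(C, 4).coeff(B, 1).coeff(N, 1))  # 15
print(ZB.coeff(C, 4).coeff(B, 1).coeff(N, 1))  # 16
```

The one-product difference traces back entirely to the $(B+N+C)^2$ factor, {\it i.e.}, to the two invariant inner C atoms of tetrahelicene.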
\subsection{High-throughput first-principles modeling of BN-PAH compounds}
All the 7,453,041,547,842, {\it i.e.} $7.4$ tera, BN-PAH molecules enumerated in the previous
sections feature an even number of electrons and are of closed-shell type.
Past computational investigations of some of these compounds have demonstrated the reliability of density functional approximations (DFAs)
for semi-quantitative prediction of their structures and
electronic properties\cite{marcon2007tuning,al2014water,ghosh2011density}. It is important to note that while first-principles modeling of any single constituent of the BN-PAH dataset presents no conceptual challenge, the sheer size of this dataset will render any brute-force high-throughput computational endeavor aiming towards complete coverage impractical---even when depending on petascale computer facilities.
For instance, geometry relaxation of a typical medium-sized organic molecule using even a semi-empirical method such as PM7 requires about 10 CPU seconds. Carrying out such calculations for all the $7.4$ tera BN-PAH molecules would require over two million CPU years. Deploying thousands of CPU cores for this purpose can at most
decrease this time by three orders of magnitude.
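The cost estimate above amounts to simple arithmetic; a sketch, where the 10~s per geometry optimization is the assumed average cost quoted in the text:

```python
SECONDS_PER_CPU_YEAR = 365 * 24 * 3600

n_molecules = 7_453_041_547_842  # total number of enumerated BN-PAH molecules
sec_per_molecule = 10            # assumed PM7 geometry-optimization cost

cpu_years = n_molecules * sec_per_molecule / SECONDS_PER_CPU_YEAR
print(f"{cpu_years:.2e} CPU years")  # about 2.4e+06, i.e., over two million
```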
In this first study, to gain rational insights into the stability and optoelectronic properties of BN-PAH compounds, we have performed high-throughput DFT modeling for only a representative subset. To this end, we restrict the number of molecules to a feasible size by considering all possible substitutions in naphthalene (row 1 of Table \ref{tab:allcarbons}) and only single B,~N pair substitutions in the remaining 76 PAHs (column 5 of Table \ref{tab:allcarbons}), overall amounting to 33,059 (33k) compounds.
For all these compounds we have performed geometry optimizations and collected various properties for minimum energy structures; details of the DFT calculations are provided in Computational Methods.
In the following, we report on the geometric features of the BN-PAH molecules and discuss deviations of the predicted structures from those expected solely based on formal hybridization scenario of the unsubstituted PAH compounds. Then we present the distribution of HOMO-LUMO gap of the 33k compounds in the context of solar spectrum. Finally we comment on inter-property correlations between dipole moment, electronic gap and atomization energy
which are relevant to rational compound design strategies.
\subsubsection*{E.1. Deviations from ideal $sp^2$ geometry}
Formally, we understand the aromatic character of benzenoid
species by Clar's $\pi$-sextet rule \cite{clar1972aromatic,feixas2008performance,sola2013forty},
which predicts hydrocarbons with migrating sextets to contain bonds of equivalent lengths and a planar geometry, characteristic of an ideal $sp^2$ hybridization. On the other hand, compounds with fewer $\pi$-sextets show local aromaticity resulting in deviations from ideal $sp^2$ structures.
Further, trends in thermodynamic stability are also expected to correlate with the number of aromatic $\pi$-sextets---the larger its value, greater should be the stability. However, it is important to note that all these assumptions that are expected to hold for PAHs need not hold for the hetero-compounds. To gain a first-principles understanding of key geometric features of both the PAHs as well as their hetero analogues, we have collected bond length distributions and out-of-plane deviations (OD) from DFT calculations in Fig.~\ref{DFTdistoop}.
The OD was obtained by finding a best-fit molecular plane through rigid-body rotations, and then calculating the root-mean-square deviation along the plane normal ($z$-axis), $Z=\sqrt{\frac{1}{N}\sum_{i=1}^N (\Delta z_i)^2}$, where $N$ is the number of atoms and $\Delta z_i$ is the out-of-plane displacement of atom $i$.
In order to compare these structural features across various PAHs, we consider only those substituted compounds containing a single B,~N pair, resulting in 30,797 (31k) BN-PAH compounds.
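The OD measure can be sketched as follows, assuming an SVD-based plane fit (the function name is ours):

```python
import numpy as np

def out_of_plane_rms(coords):
    """RMS deviation Z of atoms from their best-fit molecular plane."""
    centered = coords - coords.mean(axis=0)
    # The plane normal is the right singular vector belonging to the smallest
    # singular value; projecting onto it gives the displacements Delta z_i.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    dz = centered @ vt[2]
    return float(np.sqrt(np.mean(dz**2)))

# A perfectly planar four-atom arrangement gives Z = 0
flat = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0], [1.0, 1.0, 0.0]])
print(round(out_of_plane_rms(flat), 6))  # 0.0
```

Lifting any one atom out of the plane yields a strictly positive $Z$.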
First of all, let us inspect the CC bond distances of the hydrocarbons in Fig.~\ref{DFTdistoop}A and note the bond lengths to vary between the values of a conventional CC double bond ($\sim$ 1.34~\AA) and a CC single bond ($\sim$ 1.5~\AA). In the case of BN-PAH compounds (see Fig.~\ref{DFTdistoop}B), we note the CN double bond to display the widest distribution followed by BN and CB double bonds.
This variation of bond lengths implies that the heteroatoms introduce structural inhomogeneity, thereby weakening electron delocalization across the molecule. Such an effect may
be understood via the electronegativity criterion alone, where one can classify the CN moiety to be electron-rich and CB moiety to be electron-deficient compared to the CC fragment resulting in a net gain in electron density on the CN fragments.
Moreover, even though the BN fragment is isoelectronic and isosteric to the CC one, due to the larger ionic character, we find the average BN bond length to be larger than the average CC bond length (Fig.~\ref{DFTdistoop}B).
\begin{figure}[htb!]
\includegraphics[width=8.5cm]{finalhist_4_1.pdf}
\caption{ Comparison of trends in bond lengths ($R$) and out-of-plane deviation measures ($Z$) between seventy-seven PAHs and their B, N-substituted counterparts. Bond length distributions are shown in panels A and B, while that of the out-of-plane deviations ($Z$) are shown in panels C and D. Out of 33,059 BN-PAH compounds studied using DFT, only 30,797 compounds with a single B,N pair have been considered
for a comparison across PAHs, the remaining 2,262 molecules are naphthalene derivatives with multiple B,~N pairs. See text for more details.}
\label{DFTdistoop}
\end{figure}
Moving on to the OD of the hydrocarbons and their hetero counterparts, it is
useful to note that such distortions signify the presence of strain. For better clarity, it is worthwhile to recall the formal classification of benzenoid compounds based on the presence of various topological features: {\it fissure}, {\it bay}, {\it cove} and {\it fjord}\cite{dias2005perimeter,pogodin2002overcrowding}; see Fig.~\ref{topfeat1}. The bay, cove and fjord regions have H atoms in close proximity, introducing strain in a strictly planar geometry. A cove may be thought of as comprising two proximate bays, and a fjord three. For a given number of C atoms, a strictly peri-condensed compound---approaching a more circular topology---must have the minimum number of external C atoms, and hence fewer bays\cite{dias1990periodic}. Evidently, structures with a fjord region are the most susceptible to OD, followed by those with coves and lastly by those with bays.
\begin{figure}[htbp!]
\centering
\includegraphics[width=8.0cm]{top_features2-1.pdf}
\caption{Various topological features as encountered in
the smallest polycyclic aromatic hydrocarbons exhibiting them. For clarity only one occurrence of {\it fissure} is shown.}
\label{topfeat1}
\end{figure}
The most striking feature in Fig.~\ref{DFTdistoop}C
is that 90\% of the parent PAHs (70 out of 77) are perfectly planar, while a few do show moderate OD; such structures are those with fjords (13, 35, 45, 61, 64 \& 67 in Fig. \ref{PAH}) and multiple coves (26 in Fig. \ref{PAH}).
In contrast, in Fig.~\ref{DFTdistoop}D, we note that only about 66\% of the hetero structures (about 20k out of 31k) are planar. To quantify further, let us consider a threshold of $Z=0.01$~\AA.
While only 9.1\% of the PAHs have a larger $Z$ with respect to this threshold, 19.3\% of the B,~N-substituted ones have $Z>0.01$ \AA.
These results show that B,~N-substitution induces the molecules to distort from planar configurations.
Loss of planarity also compromises the efficiency of these molecules as chromophores or as components of singlet-fission systems. Singlet fission is a process by which an organic chromophore in an excited singlet state transfers its excess energy to a neighbouring chromophore in the ground state, resulting in two triplet states and thereby doubling the number of charge carriers\cite{smith2010singlet}. In inter-molecular singlet fission, especially in molecular solids, planarity of the constituent molecules is crucial to stabilize the crystal via $\pi$-stacking\cite{bhattacharyya2017polymorphism}. Out of the 31k BN-PAH compounds, 80.7\% satisfy this prerequisite. However, in actual
singlet-fission applications, the singlet-triplet energetics also play a vital role. Therefore, more efforts are needed for a better evaluation of the structure-energetics trade-off across the BN-PAH dataset.
\subsubsection*{E.2. Trends in electronic structure}
Clar structures also provide
information about the electronic energy level separations of BHs. In general, with increasing number of $\pi$-sextets in the Clar structure, the HOMO-LUMO transition is blue-shifted\cite{sola2013forty}. It is interesting to note that this
formal empirical rule is corroborated by TPSSh results as seen in Fig.~\ref{DFTgap}A.
Out of seventy-seven hydrocarbons, most have HOMO--LUMO gaps ($\Delta\varepsilon$)
in the visible region of the solar spectrum (see Fig.~\ref{DFTgap}A).
In the longer-wavelength region near the spectral maximum, only about half a dozen elongated hydrocarbons are active.
For the linear PAHs---naphthalene, anthracene, tetracene, pentacene and hexacene---the TPSSh values deviate from the experimental counterparts\cite{george1968intensity,malloci2007time,malloci2011electronic}
({\it i.e.}, experimental$-$TPSSh) by 0.24, $-$0.69, 0.47, 0.60, and 0.57 eV, respectively, amounting to an average prediction error of 0.24 eV and a root-mean-square error of 0.54 eV. These error measures imply that the above-discussed trends in HOMO--LUMO gaps retain their semi-quantitative accuracy at least for other PAH molecules.
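As a quick arithmetic cross-check (a minimal sketch; the only inputs are the five deviations quoted above), the stated error measures follow directly:

```python
# Deviations (experimental - TPSSh) of the HOMO-LUMO gap, in eV, for
# naphthalene, anthracene, tetracene, pentacene and hexacene (values as in the text).
deviations = [0.24, -0.69, 0.47, 0.60, 0.57]

mean_signed_error = sum(deviations) / len(deviations)
rmse = (sum(d * d for d in deviations) / len(deviations)) ** 0.5

print(f"mean signed error: {mean_signed_error:.2f} eV")  # 0.24 eV
print(f"root mean square error: {rmse:.2f} eV")          # 0.54 eV
```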
It may be noted that a number of solar cell applications have been based on the organic dye coumarin because of its very desirable electronic gap at 2.25 eV (552 nm) \cite{mishra2009metal}. However, none of the PAHs exhibit $\Delta\varepsilon$ in the yellow region of the spectrum, where the spectral energy density of the solar black-body radiation is maximum.
It is important to note that for these transitions to be allowed, the corresponding transition dipole moment integrals must also be non-vanishing. Computation of such integrals must be done using time-dependent DFT; such efforts are beyond the scope of the present study.
\begin{table}[!htbp]
\centering
\caption{Percentage distribution of HOMO--LUMO gap ($\Delta\varepsilon$) of 33,059 BN-PAH compounds across UV, visible and IR regions of the solar spectrum.}
\begin{tabular}{l c c c }
\hline
Compounds & \multicolumn{3}{c}{\%($\Delta\varepsilon$)} \\
\cline{2-4}
& IR & ~~~~~~~~~~~~visible~~~~~~~~~~~~ & UV \\
& $<$ 1.77 eV & 1.77--3.09 eV & $>$ 3.09 eV \\
\hline
\multicolumn{4}{c}{naphthalene} \\
C$_8$B$_1$N$_1$H$_8$ & 0.00 & 30.43 & 69.57 \\
C$_6$B$_2$N$_2$H$_8$ & 4.85 & 56.97 & 38.18 \\
C$_4$B$_3$N$_3$H$_8$ & 16.95 & 59.19 & 23.86 \\
C$_2$B$_4$N$_4$H$_8$ & 29.14 & 49.01 & 21.85 \\
B$_5$N$_5$H$_8$ & 19.70 & 37.88 & 42.42 \\
\multicolumn{4}{c}{larger rings} \\
3 rings & 7.30 & 62.04 & 30.66 \\
4 rings & 28.86 & 55.60 & 15.55 \\
5 rings & 43.14 & 49.36 & 7.50 \\
6 rings & 53.59 & 42.88 & 3.53 \\
coronene & 4.08 & 87.76 & 8.16 \\
\hline
\end{tabular}
\label{tab:dist_table1}
\end{table}
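The spectral boundaries used in Table~\ref{tab:dist_table1} amount to a simple three-way classifier. The sketch below (the function name and the example gap values are our own, not taken from the dataset) applies the 1.77 eV and 3.09 eV thresholds:

```python
def spectral_region(gap_ev):
    """Assign a HOMO-LUMO gap (in eV) to the solar-spectrum regions used in
    the table: IR (< 1.77 eV), visible (1.77-3.09 eV) or UV (> 3.09 eV)."""
    if gap_ev < 1.77:
        return "IR"
    if gap_ev <= 3.09:
        return "visible"
    return "UV"

for gap in (1.2, 2.25, 4.0):          # hypothetical example values
    print(gap, spectral_region(gap))  # IR, visible, UV respectively
```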
\begin{figure}[htb!]
\includegraphics[width=8cm]{finalhistBNPAH13.pdf}
\caption{Spectral distribution of HOMO--LUMO gap ($\Delta\varepsilon$) of 33,059 BN-PAH compounds. A) $\Delta\varepsilon$ of unsubstituted PAHs (77) mapped with selected Clar structures.
B) $\Delta\varepsilon$ of 2,285 (B,N)$_x$-substituted ($x=1,\ldots5$) isomers of naphthalene. C) $\Delta\varepsilon$ of 30,797 (B,N)$_1$-substituted isomers of all PAHs; the inset zooms into under-represented compounds. The ideal solar spectrum with colors in the visible region is shown in panel-A for comparison.
}
\label{DFTgap}
\end{figure}
Fig.~\ref{DFTgap}B presents the $\Delta\varepsilon$ of 2,285 (B,~N)$_x$-substituted isomers of naphthalene and Fig.~\ref{DFTgap}C shows the $\Delta\varepsilon$ of 30,797 (B,~N)$_1$-substituted isomers for all PAHs. In both figures,
we observe that the property distributions follow a roughly Gaussian-type trend. Such statistical trends often imply that the character of the electronic excitation is preserved across all the constitutional isomers with the same stoichiometry\cite{ramakrishnan2015electronic}.
In Table~\ref{tab:dist_table1} we have collected the gap distribution across different regions of the solar spectrum for (B,~N)$_1$-substituted compounds and all possible isomers of naphthalene. We observe that for most of the classes, the majority of molecules ($>50\%$) lie in the visible region of the solar spectrum.
An interesting correlation can be drawn based on works by Hoffmann \textit{et al.}\cite{hoffmann1964extended,alkaabi2012ionic,niedenzu2012boron,zeng2014seeking}, where the authors explain the properties of a PAH substituted with B$_x$N$_y$ units
as the consequence of a perturbation of the parent hydrocarbon's properties by the heteroatoms.
In Table~\ref{tab:dist_table1} we note that for naphthalene, the gap distribution shifts from predominantly UV towards the visible with an increasing number of heteroatoms; the maximum shift into the visible region is noted for the stoichiometry C$_4$B$_3$N$_3$H$_8$.
When comparing the modulation of $\Delta\varepsilon$ by (B,~N)$_1$ substitution in all the PAHs (Fig.~\ref{DFTgap}C)
with that of (B,~N)$_x$ substitution in naphthalene (Fig.~\ref{DFTgap}B), one notes a similar spread of 0--4 eV in both cases, with very few examples with $\Delta\varepsilon>4$ eV.
Overall, when comparing the PAH molecules with B,~N-substituted ones, red-shifting of $\Delta\varepsilon$
arises due to quasi-degenerate valence MOs characteristic of a diradical-type system, while blue-shifting of $\Delta\varepsilon$ arises due to an increase in the $\sigma$ character (decrease in $\pi$ character) of the excitation.
It may be worthwhile to relate this trend to the observation
that the 2D BN sheet is a wide-gap insulator due to the lack of extended $\pi$-conjugation\cite{nagashima1995electronic}.
\subsubsection*{E.3. Inter-property correlations}
Rational chemical compound design based on high-throughput DFT computations
often requires multi-property optimization. To this end, we
explore some of the static ground state properties that are typically computed in a single-point calculation.
Following $\Delta\varepsilon$, the next key property of interest is the thermodynamic stability of the molecules. For this purpose, we use the atomization energy per electron ($E$) as a measure. In addition, the ground state dipole moment ($\mu$)---routinely computed during single-point calculations---contains information about the spatial separation of partial charges. In the following, we briefly discuss the correlations between these three properties.
\begin{figure*}[!htb]
\includegraphics[width=16.5cm]{plot_E_mu_gap.pdf}
\caption{Inter-property correlations across 30,797 BN-PAH compounds:
$E$ is the atomization energy per electron in kcal/mol,
$\mu$ is the dipole moment in debye,
$\Delta\varepsilon$ is the HOMO-LUMO gap in eV.}
\label{DFTgap1}
\end{figure*}
Pairwise inter-property correlations---$E$ vs.\ $\mu$, $E$ vs.\ $\Delta\varepsilon$, and $\Delta\varepsilon$ vs.\ $\mu$---are on display in Fig.~\ref{DFTgap1}. For all properties, the
range is largest for the 6-ring compounds; there are 25k such molecules in the 31k set. The spread in the property values decreases gradually with the number of rings. A noticeable feature in Fig.~\ref{DFTgap1}A is that molecules with large $E$ exhibit small $\mu$. We ascribe this relation to the fact that these molecules have the shortest B--N separations, leading to strong bonding. We note in Fig.~\ref{DFTgap1}B that smaller $E$ values typically correlate with smaller $\Delta\varepsilon$, as expected in the case of diradical-type molecules such as those with well-separated B and N centers. Fully conjugated aromatic molecules show an electronic gap in the typical region of about 2--5 eV (see Fig.~\ref{DFTgap1}B), and these molecules are also found to be more stable, with larger $E$. Such an interpretation is also supported by the trends shown in Fig.~\ref{DFTgap1}C;
molecules with longer diradical-type bonds show larger $\mu$ and smaller $\Delta\varepsilon$, and {\it vice versa}. These trends are reminiscent of those noted in a previous high-throughput DFT study of 134k small organic molecules\cite{ramakrishnan2014quantum}.
\section{Computational Methods}
For all seventy-seven PAHs listed in the previous section, we have generated Cartesian coordinates using the program Avogadro\cite{hanwell2012avogadro}. With the same program, minimum-energy structures were obtained
by employing universal force-field (UFF) parameters\cite{rappe1992uff}. The resulting structures
were used as templates for combinatorially generating the atomic coordinates of all the B,~N-substituted molecules; permutationally redundant structures were eliminated by comparing the principal moments of inertia. In the case of naphthalene, we have generated all possible molecules where pairs of C atoms were substituted by isoelectronic B,~N atom pairs, resulting in 2,285 compounds; for the larger PAHs comprising more than two benzene rings, we have restricted the substitution to a single pair of C atoms, which gave rise to 30,774 compounds. These numbers tally perfectly with those from Polya enumeration, as long as the Cartesian coordinates encode the molecular symmetry (see Table~\ref{tab:allcarbons}). For all 33,059 molecules, we have performed geometry optimization and electronic structure calculations at the Kohn--Sham density functional theory (DFT) level using the ORCA (version 4.0.1.2) suite of programs\cite{neese2012orca}. To reach a high degree of quantitative accuracy in modeling the electronic excitation spectrum, one must perform linear-response time-dependent (LR-TD)-DFT calculations, preferably based on long-range corrected hybrid DFs and large basis sets. However, in the present study we only wish to provide qualitative and semi-quantitative insights into the stability and HOMO--LUMO gaps of the B,~N-substituted PAHs. For this purpose, we limited our DFT explorations to the TPSSh\cite{perdew1999accurate,perdew2004meta} hyper-GGA functional---which has been shown to be applicable to model the electronic properties of organic and inorganic molecules\cite{jensen2008bioinorganic,irfan2012quantum,zhang2013cyano,el2014molecular,liu2014novel}---in combination with the split-valence basis set def2-SVP\cite{weigend2005balanced}. In all calculations, we have used the
resolution-of-identity technique to approximate the two-electron Coulomb and exchange integrals (RI-JK approximation) with the corresponding def2/JK \cite{weigend2008hartree} auxiliary basis sets along with {\tt Grid5}-level integration grids to estimate the exchange-correlation energies using numerical quadrature.
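As a cross-check of the combinatorial bookkeeping, the naphthalene counts can be reproduced by a brute-force orbit enumeration. The sketch below is illustrative only (it is not the moment-of-inertia workflow used in this work): it hardcodes the order-4 in-plane symmetry group of the naphthalene carbon skeleton, with our own site ordering 1, 2, 3, 4, 4a, 5, 6, 7, 8, 8a, and recovers the 2,285 symmetry-distinct (B,~N)$_x$-substituted structures, 23 of which carry a single B,~N pair.

```python
from itertools import product

# Naphthalene carbon skeleton; site order 1,2,3,4,4a,5,6,7,8,8a (indices 0-9).
# In-plane symmetry operations that permute the sites form a group of order 4:
# identity plus three two-fold rotations; tuple p sends site i to site p[i].
E  = (0, 1, 2, 3, 4, 5, 6, 7, 8, 9)
S1 = (3, 2, 1, 0, 9, 8, 7, 6, 5, 4)  # long axis: 1-4, 2-3, 5-8, 6-7, 4a-8a
S2 = (8, 7, 6, 5, 4, 3, 2, 1, 0, 9)  # short axis: 1-8, 2-7, 3-6, 4-5
S3 = (5, 6, 7, 8, 9, 0, 1, 2, 3, 4)  # C2 perpendicular to the molecular plane
GROUP = (E, S1, S2, S3)

def count_substitutions(pairs):
    """Symmetry-distinct colorings of the ten sites with exactly `pairs`
    B atoms and `pairs` N atoms (all remaining sites stay carbon)."""
    seen = set()
    for coloring in product("CBN", repeat=10):
        if coloring.count("B") != pairs or coloring.count("N") != pairs:
            continue
        # Canonical representative: lexicographic minimum over the orbit
        # (every group element is an involution, so composing with p maps
        # a coloring to its symmetry image).
        seen.add(min(tuple(coloring[p[i]] for i in range(10)) for p in GROUP))
    return len(seen)

counts = {x: count_substitutions(x) for x in range(1, 6)}
print(counts)                # {1: 23, 2: 330, 3: 1056, 4: 810, 5: 66}
print(sum(counts.values()))  # 2285
```

Counting orbits via canonical representatives is equivalent to Burnside/Polya counting over the same permutation group; the per-stoichiometry counts sum to the 2,285 naphthalene derivatives and reproduce the 23 single-pair isomers quoted above.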
\section{Conclusions}
We have applied a combinatorial algorithm to enumerate all possible compounds obtained by substituting pairs of carbon atoms in the smallest seventy-seven polycyclic aromatic hydrocarbons containing 2--6 benzene rings with isoelectronic/isosteric B,~N atom pairs. For a hydrocarbon with $N$ carbon atoms, the maximal number of compounds is obtained when exactly $N/2$ carbon atoms are substituted by heteroatom pairs. The grand set of all the resulting compounds is eleven orders of magnitude (7,453,041,547,842 $= 7.4$ tera) larger than the set of parent hydrocarbons. To facilitate large-scale data mining and the discovery of combinatorial trends across the BN-PAH dataset, we have provided consolidated tabulations of molecular distributions according to symmetry, stoichiometry and sites. Furthermore, we show that more than one hydrocarbon with the same number of carbon atoms and the same point group symmetry can lead to distinct yield-patterns, revealing a symmetry-controlled selectivity; we have rationalized this effect using generalized character cycle indices (GCCIs). Our results based on B,~N substitutions are also transferable when using other isovalent heteroatom pairs.
For a tiny fraction of the 7.4 tera set, consisting of 33,059 (33 kilo) representative molecules, we have performed DFT calculations and analyzed structural and electronic features relevant for light-harvesting applications. For the unsubstituted hydrocarbons, we provide qualitative insights into the DFT-predicted properties using Clar's valence electronic structure formulae. Replacing a pair of carbon atoms in the hydrocarbons with a heteroatom pair has been shown to perturbatively modulate all the properties while retaining the essential characteristics of the parent compounds.
More importantly, our results indicate that the combinatorial introduction of B,~N atoms into the smallest polycyclic aromatic hydrocarbons gives rise to a library of compounds with HOMO--LUMO gaps spanning the entire solar spectrum,
with a significant fraction exhibiting a HOMO--LUMO gap near
the solar spectral maximum. This prompts us to suggest the suitability of the BN-PAH dataset for various light-harvesting and singlet-fission applications\cite{paci2006singlet,greyson2010maximizing,zeng2014seeking}, or
for designing materials with desirable exciton energetics\cite{hill2000charge}.
Designing materials exhibiting low exciton binding energies, so that the absorbed photon generates maximum output voltage, has been a major theme in studies of organic photovoltaics. The HOMO-LUMO gaps reported in the present work correspond to the {\it real} gap (a.k.a.\ transport or fundamental gap), denoted in the relevant literature\cite{bredas2014mind} as $E_g$ or $E_{\rm fund.}$. There have also been studies\cite{bappler2014exciton} addressing how to directly model the so-called {\it optical} gap, $E_{\rm opt.}$, which is the least energy required for the creation of a bound electron--hole pair; this excitation corresponds to the first (narrow) peak in the absorption/photo-luminescence spectra. The difference $E_g-E_{\rm opt.}$ accounts for the exciton binding energy, $E_b$; the higher its value, the lower the charge photogeneration efficiency.
As far as the unsubstituted PAHs are concerned, $E_b$ takes the value of 1 eV for naphthalene and 0.1--0.5 eV for pentacene\cite{lanzani2006photophysics}.
We hope the BN-PAH dataset along with the presented TPSSh results for $E_g$ could be of use in future high-throughput efforts towards screening materials with small $E_b$.
Recent ventures in comprehensive chemical space design have demonstrated
their pivotal role in accelerating the discovery of novel compounds for a multitude of application domains\cite{shoichet2004virtual,tu2012exploring,balawender2013exploring,reymond2015chemical,ramakrishnan2014quantum,fias2018alchemical}. Such Big Data
efforts also enable rational benchmarking and parameterization of approximate computational chemistry methods in a data-driven fashion\cite{kranz2018generalized,li2018density}. For more elaborate design studies based on the BN-PAH compound library, the results provided in the present work may be considered as a baseline.
\section{Supplementary Material}
For all seventy-seven PAHs, complete pattern inventories for all possible B,~N substitutions are collected. For the seventy-seven PAHs and the 33,059 BN-PAH molecules, TPSSh/def2-SVP/RI-JK-level equilibrium structures and various electronic properties are also collected.
\begin{acknowledgements}
We gratefully acknowledge Prof. Gunnar Brinkmann for providing the {\tt CaGe} program, Prof. Ranjan Das and Dr. Vamsee Voora for useful discussions. PK is grateful to TIFR for Visiting Students’ Research Programme (VSRP) and junior research fellowships. RR and SC thank TIFR for financial support. All calculations have been performed using the {\tt helios} computer cluster which is an integral part of the {\tt MolDis} Big Data facility, TIFR Hyderabad ({\tt https://moldis.tifrh.res.in/}).
\end{acknowledgements}
\section*{References}
\subsection{The extension complexity of hypersimplices}
The \Defn{extension complexity} or \Defn{nonnegative rank} $\rkN(P)$ of a
convex polytope $P$ is the minimal number of facets (i.e., describing linear
inequalities) of an extension, a polytope $\ext{P}$ that linearly projects
onto $P$. The motivation for this definition comes from linear optimization:
The computational complexity of the simplex algorithm is intimately tied to
the number of linear inequalities and hence it can be advantageous to optimize
over~$\ext{P}$. As a complexity measure, the nonnegative rank is an object of
active research in combinatorial optimization; see~\cite{dagstuhl}. There are
very few families of polytopes for which the exact nonnegative rank is known.
Besides simplices, examples are cubes, crosspolytopes, Birkhoff polytopes and
bipartite matching polytopes~\cite{FKPT13} as well as all $d$-dimensional
polytopes with at most $d+4$ vertices~\cite{Padrol16}.
Determining the nonnegative rank is non-trivial even for
polygons~\cite{polygons1,polygons2,shitov2,shitov}. For important classes of
polytopes, the exponential lower bounds obtained in~\cite{FMPT,rothvoss,rothvoss2}
are celebrated results.
In the first part of the paper we explicitly
determine the nonnegative rank of the family of hypersimplices. For $0 < k <
n$, the \Defn{$\boldsymbol{(n,k)}$-hypersimplex} is the convex polytope
\begin{equation}\label{eqn:hyper}
\Delta_{n,k} \ = \ \conv\left\{ x \in \{0,1\}^n : x_1 + \cdots + x_n =
k\right\}.
\end{equation}
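The vertex description in \eqref{eqn:hyper} is easy to enumerate explicitly. The following minimal sketch (the function name is our own) lists the $\binom{n}{k}$ vertices of $\Delta_{n,k}$:

```python
from itertools import combinations
from math import comb

def hypersimplex_vertices(n, k):
    """0/1 vectors of length n with coordinate sum k, i.e. the vertices
    of the (n,k)-hypersimplex."""
    verts = []
    for support in combinations(range(n), k):
        v = [0] * n
        for i in support:
            v[i] = 1
        verts.append(tuple(v))
    return verts

V = hypersimplex_vertices(4, 2)
print(len(V), comb(4, 2))  # 6 6 -- Delta_{4,2} is an octahedron
```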
Hypersimplices were first described (and named) in connection with moment
polytopes of orbit closures in Grassmannians (see~\cite{GGMS}) but, of course,
they are prominent objects in combinatorial optimization, appearing in
connection with packing problems and matroid theory; see also below.
This marks hypersimplices as polytopes of considerable interest and naturally
prompts the question as to their extension complexity.
Note that
$\Delta_{n,k}$ is affinely isomorphic to $\Delta_{n,n-k}$. The hypersimplex
$\Delta_{n,1} = \Delta_{n-1}$ is the standard simplex of dimension $n-1$ and
$\rkN(\Delta_{n-1}) = n$. Our first result concerns the extension complexity
of the \emph{proper} hypersimplices, that is, the hypersimplices $\Delta_{n,k}$
with $2 \le k \le n-2$.
\begin{thm}\label{thm:main}
The hypersimplex $\Delta_{4,2}$ has extension complexity $6$, the
hypersimplices $\Delta_{5,2} \cong \Delta_{5,3}$ have extension complexity
$9$. For any $n\ge 6$ and $2 \le k \le n-2$, we have $\rkN(\Delta_{n,k}) =
2n$.
\end{thm}
It is straightforward to check that
\begin{equation}\label{eqn:ineq}
\Delta_{n,k} \ = \ [0,1]^n \cap \{ x \in \R^n : x_1 + \cdots + x_n = k\}
\end{equation}
and that for $1 < k < n-1$, all $2n$ inequalities of the $n$-dimensional cube
are necessary. The nonnegative rank of a polytope is trivially upper bounded
by the minimum of the number of vertices and the number of facets. We call a
polytope $P$ \Defn{extension maximal} if it attains this upper bound. Cubes as
well as their duals, the crosspolytopes, are know to be extension maximal; see
also Corollary~\ref{cor:cube}. Theorem~\ref{thm:main} states that in addition
to simplices, cubes, and crosspolytopes, all proper hypersimplices except for
$\Delta_{5,2}$ are extension maximal.
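For proper hypersimplices, the trivial upper bound $\rkN(P) \le \min\{v(P), f(P)\}$ can be tabulated directly, since $\Delta_{n,k}$ has $\binom{n}{k}$ vertices and, for $1<k<n-1$, exactly $2n$ facets. A minimal sketch (the function name is our own):

```python
from math import comb

def trivial_upper_bound(n, k):
    """min(#vertices, #facets) of the proper hypersimplex Delta_{n,k}."""
    assert 2 <= k <= n - 2, "proper hypersimplices only"
    return min(comb(n, k), 2 * n)

print(trivial_upper_bound(4, 2))  # 6: attained, Delta_{4,2} is extension maximal
print(trivial_upper_bound(5, 2))  # 10: not attained (extension complexity is 9)
print(trivial_upper_bound(6, 2))  # 12 = 2n: attained, cf. the theorem above
```

Every $\Delta_{n,k}$ with $n\ge 6$ and $2 \le k \le n-2$ attains this bound, while $\Delta_{5,2}$ is the lone proper hypersimplex with $\rkN$ strictly below it.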
\subsection{Psd rank and $2$-level matroids}
Our original motivation for studying the nonnegative rank of hypersimplices
comes from matroid theory~\cite{Oxley}. For a matroid $M$ on the ground set $[n]
:= \{1,\dots,n\}$ and bases $\mathcal{B} \subseteq 2^{[n]}$, the associated
\Defn{matroid base polytope} is the polytope
\[
P_M \ := \ \conv \{ \1_B : B \in \mathcal{B}\},
\]
where $\1_B \in \{0,1\}^n$ is the characteristic vector of $B \subseteq [n]$.
Hence, the $(n,k)$-hypersimplex is the matroid base polytope of the uniform
matroid $U_{n,k}$. In~\cite{gs14}, the first and third author studied
\mbox{\Defn{$\boldsymbol 2$-level matroids}}, which exhibit extremal behavior
with respect to various geometric and algebraic measures of complexity. In
particular, it is shown that $M$ is $2$-level if and only if $P_M$ is
\Defn{psd minimal}. The \Defn{psd rank} $\rkPSD(P)$ of a polytope $P$ is the
smallest size of a spectrahedron (an affine section of the positive definite
cone) that projects onto $P$. In~\cite{GRT} it is shown that $\rkPSD(P) \ge
\dim P + 1$ and polytopes attaining this bound are called psd minimal. Our
starting point was the natural question whether the class of $2$-level
matroids also exhibits an extremal behavior with respect to the nonnegative
rank. We recall from~\cite[Theorem~1.2]{gs14} the following synthetic
description of $2$-level matroids: A matroid $M$ is $2$-level if and only if
it can be constructed from uniform matroids by taking direct sums or $2$-sums.
So, the right starting point is the hypersimplices.
To extend Theorem~\ref{thm:main} to all $2$-level matroids, it would be
necessary to understand the effect of taking direct and $2$-sums on the
nonnegative rank. The direct sum of matroids translates into the Cartesian
product of matroid polytopes. Two out of three authors of this paper believe
in the following conjecture, first asked during a Dagstuhl seminar in
2013~\cite{dagstuhlseminar2013}.
\begin{conj}\label{conj:prod}
The nonnegative rank is additive with respect to Cartesian products, that
is,
\[
\rkN(P_1 \times P_2) \ = \ \rkN(P_1) + \rkN(P_2),
\]
for polytopes $P_1$ and $P_2$.
\end{conj}
We provide evidence in favor of Conjecture~\ref{conj:prod} by showing it to
hold whenever one of the factors is a simplex
(cf.~Corollary~\ref{cor:prod_simplex}).
By taking products of extensions it trivially follows that the nonnegative
rank is subadditive with respect to Cartesian products. As for the $2$-sum
$M_1 \oplus_2 M_2$ of two matroids $M_1$ and $M_2$, it follows
from~\cite[Lemma~3.4]{gs14} that $P_{M_1 \oplus_2 M_2}$ is a codimension-$1$
section of $P_{M_1} \times P_{M_2}$ and the extension complexity is therefore
dominated by that of the direct sum. Combined with Theorem~\ref{thm:main}
and~\cite[Theorem~1.2]{gs14} we obtain the following simple estimate.
\begin{cor}
If $M$ is a $2$-level matroid on $n$ elements, then $\rk(P_M) \le 2n$.
\end{cor}
\subsection{Extension complexity of combinatorial hypersimplices}
The extension complexity is not an invariant of the combinatorial type. That
is, two combinatorially isomorphic polytopes do not necessarily have the same
extension complexity. For example, the extension complexity of a hexagon is
either $5$ or $6$ depending on the incidences of the facet-defining
lines~\cite[Prop.~4]{polygons2}. On the other hand, the extension complexity
of any polytope combinatorially isomorphic to the $n$-dimensional cube is
always $2n$; cf.\ Corollary~\ref{cor:cube}. The close connection to simplices
and cubes and Theorem~\ref{thm:main} raises the following question for
\Defn{combinatorial} $(n,k)$-hypersimplices.
\begin{quest}\label{quest:comb}
Is $\rkN(P)=2n$ for any combinatorial $(n,k)$-hypersimplex $P$ with $n\geq
6$ and $2\leq k\leq n-2$?
\end{quest}
For $n=6$ and $k\in\{ 2,3\}$ this is true due to
Proposition~\ref{prop:small_cases} but we suspect that the answer is no for
some $n > 6$ and $k=2,n-2$. The rectangle covering number $\rc(P)$ of a
polytope~$P$ is a combinatorial invariant that gives a lower bound on
$\rkN(P)$; see Section~\ref{sec:rc}. While the rectangle covering number of
the \emph{small} hypersimplices $\Delta_{6,2}$ and $\Delta_{6,3}$ is key to
our proof of Theorem~\ref{thm:main}, it is not strong enough to resolve
Question~\ref{quest:comb} (see Proposition~\ref{prop:rcbounds}).
We introduce the notions of $F$-, $G$-, and $FG$-genericity of combinatorial
hypersimplices, that are defined in terms of the relative position of certain
facets and that play a crucial role. We show that all $FG$-generic
hypersimplices are extension maximal (Theorem~\ref{thm:main_generic}).
Unfortunately, $FG$-genericity is not a property met by all hypersimplices, as confirmed by the existence of a non-$FG$-generic realization of $\Delta_{6,2}$; see Proposition~\ref{prop:singular62}. On the other hand, we show that hypersimplices with $n \ge 6$ and $\lfloor\frac{n}{2}\rfloor \le k \le \lceil\frac{n}{2}\rceil$ are $FG$-generic, which yields the following.
\begin{cor}\label{cor:XCcombinatorial}
If $P$ is a combinatorial $(n,k)$-hypersimplex with $n\geq 6$ and $2\leq
k\leq \lceil\frac{n}{2}\rceil$, then
\[
\rkN(P) \ \geq \
\begin{cases}
n+2k+1 & \text{ if }k<\ffloor{n}{2},\\
2n & \text{ otherwise.}
\end{cases}
\]
\end{cor}
We do not know of any realization of an $(n,k)$-hypersimplex with $n \ge 6$ of
extension complexity less than $2n$, but we do not dare to conjecture that
every combinatorial $(n,k)$-hypersimplex with $n \ge 6$ and $2 \le k \le n-2$ is
extension maximal.
\subsection{Realization spaces of hypersimplices}
The \Defn{projective realization space} $\Rel_{n,k}$ of combinatorial
$(n,k)$-hypersimplices parametrizes the polytopes combinatorially isomorphic
to $\Delta_{n,k}$ up to projective transformation. (Projective) realization
spaces of polytopes are provably complicated objects. The \emph{universality
theorems} of Mn\"ev~\cite{Mnev1988} and
Richter-Gebert~\cite{RichterGebert1997} assert that realization spaces of
polytopes of dimension $\ge 4$ are as complicated as basic open semialgebraic
sets defined over the integers.
In contrast, for a $3$-dimensional polytope~$P$ with $e \ge 9$ edges, it
follows from Steinitz' theorem that the projective realization space is
homeomorphic to an open ball of dimension $e - 9$; see
also~\cite[Thm.~13.3.3]{RichterGebert1997}.
For our investigation of the extension complexity of combinatorial
hypersimplices, we study their realization spaces. The observation that every
hypersimplex is either $F$- or $G$-generic (Lemma~\ref{lem:generic}) turns out
to be instrumental in our study. For $k=2$, we are able to give a full
description.
\begin{thm}\label{thm:dim_rel_space}
For $n \geq 4$, $\Rel_{n,2}$ is {rationally} equivalent to the
interior of a $\binom{n-1}{2}$-dimensional cube. In particular,
$\Rel_{n,2}$ is homeomorphic to an open ball and hence contractible.
\end{thm}
Rationally equivalent means that the homeomorphism as well as its inverse are
given by rational functions (c.f.~\cite[Sect.~2.5]{RichterGebert1997}).
A key tool in the context of the Universality Theorem is that the
projective realization of a facet of a high-dimensional polytope can not be
prescribed in general; see, for example,~\cite[Sect.~6.5]{Ziegler1995}. In contrast,
the shape of any single facet of a $3$-polytope can be prescribed~\cite{BarnetteGruenbaum1970}.
This description of $\Rel_{n,2}$ allows us to show that facets
of $(n,2)$-hypersimplices can be prescribed (Corollary~\ref{cor:n2prescribability}), but also allows us to
construct hypersimplices that are not
$FG$-generic, which implies that facets of hypersimplices cannot be prescribed
in general (Corollary~\ref{cor:prescribability}).
For $2 < k < n-2$, the realization spaces are more involved and, in
particular, related to the algebraic variety of $n$-by-$n$ matrices with
vanishing principal $k$-minors that was studied by Wheeler~\cite{Wheeler15}.
In Theorem~\ref{thm:Rel_UB}, we show that certain facets of $\Delta_{n,k}$
completely determine the realization, which then gives an upper bound on the
dimension of the realization space. However, we currently cannot exclude
that $\Rel_{n,k}$ is disconnected and has components of different dimensions.
The extension complexity is invariant under (admissible) projective
transformations and hence $\rkN$ is well-defined on $\Rel_{n,k}$. The locus
$E_{n,k} \subseteq \Rel_{n,k}$ of extension maximal $(n,k)$-hypersimplices is
open and Theorem~\ref{thm:main} implies that $E_{n,k}$ is non-empty for $n \ge
6$ and $2 \le k \le n-2$. For $k=2$, we can say considerably more.
\begin{cor}\label{cor:dense2}
For $n \ge 5$, the combinatorial $(n,2)$-hypersimplices with extension
complexity $2n$ are dense in $\Rel_{n,2}$.
\end{cor}
Our results on $FG$-generic hypersimplices, which are characterized by the
non-vanishing of a determinantal condition on $\Rel_{n,k}$, strongly suggest
that Corollary~\ref{cor:dense2} extends to all cases.
\begin{conj}\label{conj:dense}
For $n\geq 5$ and $2\leq k\leq n-2$, the combinatorial hypersimplices of
nonnegative rank~$2n$ form a dense open subset of $\Rel_{n,k}$.
\end{conj}
\subsection{Structure of the paper}
Theorem~\ref{thm:main} is proved in Sections~\ref{sec:geom} and~\ref{sec:rc}.
In Section~\ref{sec:geom} we investigate the discrete geometry of extensions
and we set up an induction that deals with the \emph{large} hypersimplices
$\Delta_{n,k}$ with $n > 6$. In particular, we devise general tools for upper
bounding the extension complexity. For the \emph{small} hypersimplices
$\Delta_{6,2}$ and $\Delta_{6,3}$, we make use of rectangle covering numbers
in Section~\ref{sec:rc}. We show that most of the geometric tools of
Section~\ref{sec:geom} have combinatorial counterparts for rectangle covering
numbers. Section~\ref{sec:relspaces} is devoted to the study of combinatorial
hypersimplices and the associated realization spaces. In Section~\ref{sec:n2}
we focus on the combinatorial $(n,2)$-hypersimplices.
\section{The geometry of extensions and large hypersimplices}\label{sec:geom}
In this section we develop some useful tools pertaining to the geometry of
extensions. These will be used to give an inductive argument for the
\emph{large} hypersimplices $\Delta_{n,k}$ with $n > 6$ and $1 < k < n-1$. The
\emph{small} hypersimplices are treated in the next section.
For a polytope~$P$, we write~$v(P)$ for the number of vertices of~$P$
and~$f(P)$ for the number of facets. Moreover, $\ext{P}$ will typically denote
an extension of~$P$, and the linear projection that takes~$\ext{P}$ to~$P$
is denoted by~$\pi$. We start with the simple observation that the nonnegative rank is
strictly monotone with respect to taking faces.
\begin{lem}\label{lem:facet}
Let $P$ be a polytope and $F \subset P$ a facet. Then
\[
\rkN(P) \ \ge \ \rkN(F) + 1.
\]
\end{lem}
\begin{proof}
Let $\ext{P}$ be a minimal extension of $P$. The preimage $\ext{F} =
\pi^{-1}(F) \cap \ext{P}$ is an extension of~$F$. Every facet of $\ext{F}$
is the intersection of a facet of $\ext{P}$ with $\ext{F}$. Moreover,
since $\ext{F}$ is a proper face of $\ext{P}$, there are at least $c \ge
1$ facets of $\ext{P}$ that contain $\ext{F}$ and hence do not contribute
facets to $\ext{F}$. It follows that
\[
\rkN(P) \ = \ f(\ext{P}) \ \ge \ f(\ext{F}) + c \ \ge \ \rkN(F) + 1,
\]
which proves the claim.
\end{proof}
By induction, this extends to lower dimensional faces.
\begin{cor}\label{cor:face}
Let $P$ be a polytope and $F \subset P$ a face. Then
\[
\rkN(P) \ \ge \ \rkN(F) + \dim(P) - \dim(F).
\]
\end{cor}
We can strengthen this observation if we take into consideration more than one
facet.
\begin{lem}\label{lem:2distinct}
Let $P$ be a polytope and let $F_1$ and $F_2$ be two disjoint facets of
$P$. Then
\[
\rkN(P) \ \ge \ \min\left\{\rkN(F_1),\rkN(F_2)\right\} + 2.
\]
\end{lem}
\begin{proof}
If $\rkN(F_1) > \rkN(F_2)$, the claim follows from Lemma~\ref{lem:facet}.
Hence, we can assume that $\rkN(F_1) = \rkN(F_2) = k$. Extending the
argument of Lemma~\ref{lem:facet}, let $\ext{P}$ be a minimal extension of
$P$ and $\ext{F}_i$ the preimage of $F_i$ for $i=1,2$. Let $c_i$ be the
number of facets of $\ext{P}$ containing $\ext{F}_i$. Since $f(\ext{P}) \ge
k + c_i$, the relevant case is $c_1 = c_2 = 1$. Now, $\pi(\ext{F}_1
\cap \ext{F}_2) \subseteq F_1 \cap F_2 = \emptyset$ implies that
$\ext{F}_1$ and $\ext{F}_2$ are disjoint facets of $\ext{P}$. Hence,
$\rkN(P) = f(\ext{P}) \ \ge \ k + 2$.
\end{proof}
We cannot replace $\min$ with $\max$ in Lemma~\ref{lem:2distinct}: The convex
hull of the $12$ columns of the matrix
\[
\left(
\begin{array}{rrrrrrrrrrrr}
1 & -1 & -1 & 1 & 2 & 2 & -2 & -2 & 1 & -1 & 1 & -1 \\
2 & 2 & -2 & -2 & 1 & -1 & -1 & 1 & 1 & 1 & -1 & -1 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\
\hline
1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1
\end{array}
\right)
\]
gives rise to a $4$-dimensional polytope $Q$ that is combinatorially isomorphic
to the product of a triangle and a quadrilateral, and consequently has $7$ facets.
If we project onto the first three coordinates we obtain a $3$-dimensional
polytope $P$ with two parallel facets, $F_1$ and $F_2$, that are an octagon
and a square, with nonnegative ranks $6$ and $4$, respectively. Thus,
$\rkN(P) \le 7 < \max\{\rkN(F_1),\rkN(F_2)\} + 2 = 8$. Figure~\ref{fig:sqoct}
gives an idea of the geometry underlying $Q$.
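As a sanity check on this construction, the twelve columns above can be transcribed directly and projected onto the first three coordinates; the following minimal sketch (our own verification, with hypothetical variable names) does so.

```python
# The 12 columns of the matrix defining Q, read as points in R^4.
cols = [
    (1, 2, 1, 1), (-1, 2, 1, 1), (-1, -2, 1, 1), (1, -2, 1, 1),
    (2, 1, 1, -1), (2, -1, 1, -1), (-2, -1, 1, -1), (-2, 1, 1, -1),
    (1, 1, -1, -1), (-1, 1, -1, -1), (1, -1, -1, -1), (-1, -1, -1, -1),
]

# Project onto the first three coordinates; the third coordinate separates
# the octagonal facet F_1 (at height z = 1) from the square facet F_2 (at z = -1).
octagon = [(x, y) for (x, y, z, w) in cols if z == 1]
square = [(x, y) for (x, y, z, w) in cols if z == -1]
```

All eight points of the first group lie on the circle $x^2 + y^2 = 5$, so $F_1$ is indeed an octagon, while the remaining four points form the square $\{\pm 1\}^2$.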
\begin{figure}[htbp]
\includegraphics[width=.7\linewidth]{Figures/square_octagon}
\caption{The left figure gives a sketch (not a Schlegel diagram) of the
geometric idea underlying the construction of $Q$. It shows a union of three
facets that yield the projection on the right. We highlighted the structure
as a product of polygons, which makes it more visible how the two square faces
of $Q$ yield the octagonal face of $P$.}
\label{fig:sqoct}
\end{figure}
Combining Lemma~\ref{lem:facet} and Lemma~\ref{lem:2distinct} yields the
following result pertaining to Conjecture~\ref{conj:prod}.
\begin{cor}\label{cor:prod_simplex}
Let $P$ be a non-empty convex polytope and $k \ge 1$. Then
\[
\rkN(P \times \Delta_{k}) \ = \ \rkN(P) + k + 1.
\]
\end{cor}
\begin{proof}
Let $\ext{P}$ be a minimal extension of $P$ with $\rkN(P)$ facets. Since
the facet numbers of the factors of a product add up, $\ext{P} \times \Delta_k$ is an
extension of $P \times \Delta_k$ with $\rkN(P)+k+1$ facets. Thus, we need
to show that $\rkN(P)+k+1$ is also a lower bound.
For $k=1$, the polytope $P \times \Delta_1$ is a prism over $P$ with two
distinct facets isomorphic to $P$ and the claim follows from
Lemma~\ref{lem:2distinct}. If $k > 1$, note that $P \times \Delta_{k-1}$
is a facet of $P \times \Delta_{k}$, and an application of Lemma~\ref{lem:facet}
yields the claim by induction on $k$.
\end{proof}
Another byproduct is a simple proof that every combinatorial cube is extension
maximal (see \cite[Proposition 5.9]{FKPT13}).
\begin{cor}\label{cor:cube}
If $P$ is combinatorially equivalent to the $n$-dimensional cube $C_n =
[0,1]^n$, then $\rkN(P) = 2n$.
\end{cor}
\begin{proof}
Since $f(P) = f(C_n) = 2n$, we only need to prove $\rkN(P) \ge 2n$. For
$n=1$, $P$ is a $1$-dimensional simplex for which the claim is true. For
$n \ge 2$ observe that $P$ has two disjoint facets $F_1,F_2$ that are
combinatorially equivalent to $(n-1)$-cubes. By induction and
Lemma~\ref{lem:2distinct} we compute $\rkN(P) \ge \rkN(C_{n-1}) + 2 =
2n$.
\end{proof}
With these tools, we are ready to prove Theorem~\ref{thm:main} for the cases
with $n>6$. The case $n=6$ and $1 < k < n-1$ will be treated in
Proposition~\ref{prop:small_cases} in the next section. A key property,
inherited from cubes, that allows for an inductive treatment of hypersimplices
is that for $1 < k < n-1$, the presentation~\eqref{eqn:ineq} shows that
\begin{equation}\label{eqn:FG}
\begin{aligned}
F_i \ &:= \ \Delta_{n,k} \cap \{x_i = 0\} \ \cong \ \Delta_{n-1,k},
%
\text{ and } \\
G_i \ &:= \ \Delta_{n,k} \cap \{x_i = 1\} \ \cong \ \Delta_{n-1,k-1},
%
\end{aligned}
\end{equation}
are disjoint facets for any $1 \le i \le n$. We call these the
\Defn{$\boldsymbol F$-facets} and \Defn{$\boldsymbol G$-facets},
respectively.
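The identifications in~\eqref{eqn:FG} can be verified on the level of vertices. The following sketch (hypothetical helper functions, for illustration only) enumerates the $0/1$ vertices of $\Delta_{n,k}$ and checks that deleting the $i$th coordinate identifies the vertices of $F_i$ and $G_i$ with those of $\Delta_{n-1,k}$ and $\Delta_{n-1,k-1}$, respectively.

```python
from itertools import combinations

def vertices(n, k):
    """0/1 vertices of Delta_{n,k}: incidence vectors of k-subsets of [n]."""
    return [tuple(1 if i in ones else 0 for i in range(n))
            for ones in combinations(range(n), k)]

def facet_vertices(n, k, i, value):
    """Vertices of Delta_{n,k} on the facet {x_i = value}, value in {0, 1}."""
    return [v for v in vertices(n, k) if v[i] == value]

def drop_coordinate(points, i):
    """Delete coordinate i, e.g. to compare F_i with Delta_{n-1,k}."""
    return [v[:i] + v[i + 1:] for v in points]
```

For instance, for $(n,k) = (6,3)$ the facet $F_1$ carries the $\binom{5}{3} = 10$ vertices of $\Delta_{5,3}$ and $G_1$ the $\binom{5}{2} = 10$ vertices of $\Delta_{5,2}$.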
\begin{prop}\label{prop:large_cases}
Assume that $\rkN(\Delta_{6,2}) = \rkN(\Delta_{6,3}) = 12$. Then
$\rkN(\Delta_{n,k}) = 2n$ for all $n > 6$ and $1 < k < n-1$.
\end{prop}
\begin{proof}
Let $n \ge 7$. For $2 < k < n-2$, the pairs of disjoint
facets~\eqref{eqn:FG} allow us to use Lemma~\ref{lem:2distinct} together
with induction on $n$ and $k$ to establish the result. Hence, the relevant
cases are $n \ge 7$ and $k = 2$ (which is equivalent to $k=n-2$).
For $k = 2$, let
\[
\ext{P} \ = \ \{ y \in \R^m : \ell_i(y) \ge 0 \text{ for }
i=1,\dots,M \}
\]
be an extension of $\Delta_{n,k}$ given by affine linear forms
$\ell_1,\dots,\ell_M$ and $M = \rkN(\Delta_{n,k})$. For convenience, we
can regard $\Delta_{n,k}$ as a full-dimensional polytope in the affine
hyperplane $\{ x\in\R^n : x_1 + \cdots + x_n = k \} \cong \R^{n-1}$.
Let $\pi : \R^m \rightarrow \R^{n-1}$ be the linear projection that takes
$\ext{P}$ to $\Delta_{n,k}$. If for some $1 \le i \le n$, the preimage
$\ext{F}_i = \pi^{-1}(F_i) \cap \ext{P}$ is not a facet then $f(\ext{P})
\ge \rkN(F_i) + 2 = 2n$ by induction and we are done. So, we have to
assume that $\ext{F}_i = \{ y \in \ext{P} : \ell_i(y) = 0 \}$ is a facet
of $\ext{P}$ for all $i=1,\dots,n$.
It is sufficient to show that the polyhedron $\ext{Q} := \{ y \in \R^m :
\ell_i(y) \ge 0 \text{ for } i = n+1,\dots,M \}$ is bounded and hence has
$f(\ext{Q}) \ge m+1 \ge n$ facets. Since $f(\ext{P}) \ge n + f(\ext{Q})$,
this implies the result. The key observation is that the polyhedron $Q
\subset \R^{n-1}$ bounded by the hyperplanes defining the facets $G_i$ of
$\Delta_{n,k}$ is a full-dimensional simplex and hence bounded. We claim
that $\pi(\ext{Q}) \subseteq Q$. For this it is sufficient to show that
if $H_i$ is the unique hyperplane containing $G_i$, then $\pi^{-1}(H_i)$
supports a face of $\ext{Q}$. By construction, $\pi^{-1}(H_i)$ supports
the face $\ext{G}_i := \pi^{-1}(G_i) \cap \ext{P}$ of $\ext{P}$. Now, if
$\ext{G}_i \subseteq \ext{F}_j$ for some $1 \le j \le n$, this would imply
$G_i \subseteq F_j$. This, however, cannot happen as $G_i =
\pi(\ext{G}_i)$ and $F_j$ are distinct facets of $\Delta_{n,k}$. Thus,
$\ext{G}_i = \{ y \in \ext{P} : \ell_j(y) = 0 \text{ for } j \in J\}$ for
some $J \subseteq \{ n+1,\dots,M\}$ and consequently $H_i$ supports a face
of $\ext{Q}$. Moreover, $\ext{Q} \subseteq \pi^{-1}(Q)$ and hence, the
lineality space of $\ext{Q}$ is contained in $\ker \pi$. However the
hyperplanes $\{ y : \ell_i(y) = 0 \}$ with $1\leq i\leq n$ are parallel to
$\ker \pi$, because they are preimages of the hyperplanes supporting the
facets $F_i$. Therefore, $\ext{Q}$ is bounded since we assumed that
$\ext{P} = \ext{Q} \cap \{ y : \ell_i(y) \ge 0 \text{ for } i =
1,\dots,n\}$ is bounded.
\end{proof}
\section{Rectangle covering numbers and small hypersimplices}\label{sec:rc}
In this section we treat the small hypersimplices $\Delta_{n,k}$ with $4 \le n
\le 6$ and $1 < k < n-1$. We will do this by way of rectangle covering numbers. The rectangle
covering number, introduced in~\cite{FKPT13}, is a very elegant, combinatorial
approach to lower bounds on the nonnegative rank of a polytope. For a
polytope
\[
P \ = \ \{ x \in \R^d : \ell_1(x) \ge 0, \dots,\ell_M(x) \ge 0\} \ = \
\conv(v_1,\dots,v_N),
\]
where $\ell_1(x),\dots,\ell_M(x)$ are affine linear forms,
the \Defn{slack matrix} is the nonnegative matrix $S_P \in \R_{\ge0}^{M \times
N}$ with $(S_P)_{ij} = \ell_i(v_j)$. A \Defn{rectangle} of $S_P$ is an index
set $R = I \times J$ with $I \subseteq [M]$, $J \subseteq [N]$ such that
$(S_P)_{ij} > 0$ for all $(i,j) \in R$. The \Defn{rectangle covering number}
$\rc(S_P)$ is the smallest number of rectangles $R_1,\dots,R_s$ such that
$(S_P)_{ij} > 0$ if and only if $(i,j) \in \bigcup_t R_t$. As explained
in~\cite[Section~2.4]{FKPT13}
\[
\rc(S_P) \ \le \ \rkN(P).
\]
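For very small matrices, $\rc(S_P)$ can be computed by exhaustive search. The sketch below (our own illustration; the computations in this paper use a SAT solver instead) enumerates the row-maximal rectangles and solves the resulting set cover problem by brute force.

```python
from itertools import combinations

def rc(S):
    """Brute-force rectangle covering number of a small nonnegative matrix."""
    m, n = len(S), len(S[0])
    support = {(i, j) for i in range(m) for j in range(n) if S[i][j] > 0}
    # Any rectangle I x J extends to I x J_max, so restricting to
    # row-maximal rectangles loses no optimal cover.
    rects = []
    for r in range(1, m + 1):
        for I in combinations(range(m), r):
            J = [j for j in range(n) if all(S[i][j] > 0 for i in I)]
            if J:
                rects.append({(i, j) for i in I for j in J})
    for size in range(1, len(rects) + 1):
        for choice in combinations(rects, size):
            if set().union(*choice) == support:
                return size
    return 0
```

For the slack matrix of a triangle this returns $3$, and for that of a square it returns $4$; already for the hypersimplices treated below this exhaustive approach is hopeless, which is why a SAT encoding is used instead.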
There are strong ties between the geometry of extensions and rectangle
covering numbers. In particular our geometric tools from
Section~\ref{sec:geom} have independent counterparts for rectangle covering
numbers. Note that although the results are structurally similar, they do not
imply each other, and even the proofs are distinct.
\begin{lem}\label{lem:rc}
Let $P$ be a polytope and $F \subset P$ a facet. Then
\[
\rc(S_P) \ \ge \ \rc(S_F) + 1.
\]
Moreover, if there is a facet $G \subset P$ disjoint from $F$, then
\[
\rc(S_P) \ \ge \ \min\{\rc(S_F),\rc(S_G)\} + 2.
\]
\end{lem}
\begin{proof}
In the first case, part of the slack matrix $S_P$ is of the form
\[
\renewcommand\arraystretch{.85}
\left[ \begin{array}{c@{\,}c@{\,}c|c}
0& \cdots & 0 & a \\
\hline
& & & \ast \\
& S_F & & \vdotsHacked \\
& & & \ast \\
\end{array} \right],
\]
and since $F$ is a facet, $a > 0$. There are at least $\rc(S_F)$
rectangles necessary to cover $S_F$. None of these rectangles can cover
$a$ as this is obstructed by the zero row above $S_F$.
For the second case, we may assume that $r = \rc(S_F) = \rc(S_G)$.
Similarly, we can assume that parts of $S_P$ look like
\[
\renewcommand\arraystretch{.85}
\left[
\begin{array}{c@{\,}c@{\,}c|c@{\,}c@{\,}c}
0& \cdots & 0 & a_1 & \cdots & a_l \\
b_1 & \cdots & b_k & 0 & \cdots & 0 \\
\hline
& & & \ast & \cdots & \ast \\
& S_F & & \vdotsHacked & &
\vdotsHacked \\
& & & \ast & \cdots & \ast \\
\hline
\ast & \cdots & \ast & & & \\
\vdotsHacked & & \vdotsHacked & & S_G & \\
\ast & \cdots & \ast & & & \\
\end{array}
\right]
\]
with $a_1,\dots,a_l,b_1,\dots,b_k > 0$. There are $r$ rectangles necessary
to cover $S_F$. None of these rectangles can cover the first row. If the
first row is covered with $\ge 2$ rectangles, we are done. If, however, a
single rectangle covers the first row, then it cannot cover any
row of $S_G$. Indeed, every row of $S_G$ corresponds to a facet of $G$ and
contains at least one vertex of $G$. Hence, every row of $S_G$ has one
zero entry. Since $S_G$ also needs at least $r$ rectangles to be covered,
the same argument shows that the second row must be covered by a
single rectangle which does not extend to $S_F$ or $S_G$. Consequently,
at least $r+2$ rectangles are necessary.
\end{proof}
The example from Section~\ref{sec:geom} shows that, just as in
Lemma~\ref{lem:2distinct}, we cannot replace $\min$ with $\max$. It can be
checked that the rectangle covering number of an octagon is $6$.
As a direct consequence we obtain a lower bound on rectangle covering numbers (cf.~\cite[Prop.~5.2]{FKPT13}).
\begin{cor}\label{cor:lb_rc}
If $P$ is a $d$-polytope, then $\rc(S_P)\geq d+1$.
\end{cor}
It was amply demonstrated in~\cite{KaibelW15,FMPT} that the rectangle covering
number is a very powerful tool. We use it to compute the nonnegative rank of
small hypersimplices. For a given polytope $P$ with slack matrix $S = S_P$ the
decision problem of whether there is a rectangle covering with $r$ rectangles
can be phrased as a satisfiability problem: For every rectangle $R_l$ and
every $(i,j)$ with $S_{ij} >0$ we designate a Boolean variable $X^l_{ij}$. If
$X^l_{ij}$ is true, this signifies that $(i,j) \in R_l$. Every $(i,j)$ has to
occur in at least one rectangle. Moreover, for $(i,j)$ and $(i',j')$ if
$S_{ij} \cdot S_{i'j'} > 0$ and $S_{ij'} \cdot S_{i'j} = 0$, then $(i,j)$ and
$(i',j')$ cannot be in the same rectangle. The satisfiability of the resulting
Boolean formula can then be decided using a SAT solver. For the
hypersimplices $\Delta_{n,k}$ with $1<k<n-1$, the size of the slack matrix is
$2n \times \binom{n}{k}$. For $n \le 6$ these sizes are manageable and the
satisfiability problem outlined above can be decided by a computer. For
example, for $(n,k) = (6,3)$ this yields $1320$ Boolean variables and $55566$
clauses in a conjunctive normal form presentation. The attached
\texttt{python} script produces a SAT instance for all $(n,k,r)$ and we used
\texttt{lingeling}~\cite{lingeling} for the verification. This gives a
computer-aided proof for the small cases which also completes our proof for
Theorem~\ref{thm:main}.
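A bare-bones version of this encoding can be sketched as follows. This is our reconstruction, not the attached script itself: for $(n,k)=(6,3)$ and $r=11$ rectangles (the covering size one has to refute in order to certify $\rc \ge 12$) this minimal encoding has $11\cdot 120 = 1320$ Boolean variables, matching the count quoted above, while exact clause counts depend on details of the encoding.

```python
from itertools import combinations

def slack_hypersimplex(n, k):
    """Slack matrix of Delta_{n,k} for 0 <= x_i <= 1: rows are the
    F-facets (slack x_i) followed by the G-facets (slack 1 - x_i)."""
    verts = [tuple(1 if i in ones else 0 for i in range(n))
             for ones in combinations(range(n), k)]
    return ([[v[i] for v in verts] for i in range(n)] +
            [[1 - v[i] for v in verts] for i in range(n)])

def rc_cnf(S, r):
    """CNF (DIMACS-style integer literals) that is satisfiable iff the
    support of S can be covered by r rectangles."""
    supp = [(i, j) for i, row in enumerate(S)
            for j, s in enumerate(row) if s > 0]
    var = {(l, e): 1 + l * len(supp) + t
           for l in range(r) for t, e in enumerate(supp)}
    # Every positive entry lies in at least one rectangle.
    clauses = [[var[(l, e)] for l in range(r)] for e in supp]
    # Fooling pairs (S_ij' * S_i'j = 0) can never share a rectangle.
    for (i, j), (ip, jp) in combinations(supp, 2):
        if S[i][jp] == 0 or S[ip][j] == 0:
            clauses.extend([-var[(l, (i, j))], -var[(l, (ip, jp))]]
                           for l in range(r))
    return len(var), clauses
```

The clause list can be written in DIMACS format and handed to any SAT solver; unsatisfiability for $r=11$ certifies $\rc(\Delta_{6,3}) \ge 12$.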
\begin{prop}\label{prop:small_cases}
For $n \le 6$, $\rc(\Delta_{n,k}) = \rkN(\Delta_{n,k})$ for all $1 \le k
\le n$. In particular, $\rkN(\Delta_{4,2})=6$,
$\rkN(\Delta_{5,2})=\rkN(\Delta_{5,3})=9$, and
$\rkN(\Delta_{6,2})=\rkN(\Delta_{6,3})=12$.
\end{prop}
\begin{proof}
The hypersimplex $\Delta_{4,2}$ is a $3$-dimensional polytope with $6$
vertices; more precisely, it is affinely isomorphic to the octahedron. Since
the nonnegative rank is invariant under taking polars,
Corollary~\ref{cor:cube} asserts that the nonnegative rank is indeed $6$.
The polytope $\Delta_{5,2}$ is a $4$-dimensional polytope with $10$
vertices and $10$ facets. Its nonnegative rank is $9$, as computed
in~\cite[Table 3]{OVW14} under the ID 6014. Alternatively, it can be
computed with the \texttt{python} script in the appendix. Using, for
example, {\tt polymake}~\cite{polymake}, removing two non-adjacent
vertices of $\Delta_{5,2}$ yields a $4$-dimensional polytope $Q$ with $8$
vertices and $7$ facets. Taking a $2$-fold pyramid over $Q$ gives an
extension of $\Delta_{5,2}$ with $9$ facets. Finally, $\Delta_{6,2}$ and
$\Delta_{6,3}$ are $5$-polytopes with $12$ facets and the SAT approach
using the attached \texttt{python} script yields the matching lower bound
on the rectangle covering number.
\end{proof}
The hypersimplex $\Delta_{5,2}$ is special. We will examine it more closely in
Section~\ref{sec:52} and we will, in particular, show that up to a set of
measure zero all realizations have the expected nonnegative rank $10$.
It is tempting to think that Proposition~\ref{prop:large_cases} might hold on
the level of rectangle covering numbers. Indeed, such a result would imply
that all \emph{combinatorial} hypersimplices are extension maximal. As can be
checked with the \texttt{python} script in the appendix,
Proposition~\ref{prop:small_cases} extends at least to $n=8$. In fact, the
results above imply that $\rc(\Delta_{n,k})=2n$ when $\max\{2,n-6\}\leq k\leq
\min\{n-2,6\}$. However, the same script also shows that
$\rc(\Delta_{10,2})\leq 19$ and the following result (a corollary of \cite[Lemma~3.3]{FKPT13})
shows just how deceiving
the situation is in small dimensions.
\begin{prop}\label{prop:rcbounds}
The rectangle covering number of the $(n,k)$-hypersimplex satisfies
\[
n \ \leq \ \rc(\Delta_{n,k}) \ \leq \ n+\lceil\mathsf{e}\,(k+1)^2\log(n)\rceil.
\]
\end{prop}
\begin{proof}
The lower bound follows from Corollary~\ref{cor:lb_rc}. For the upper
bound, consider the matrix {$\Gnk{n}{k}$} whose columns are the $0/1$
vectors with $k$ zeros. If $2\leq k\leq n-2$, the rows of the slack
matrix of $\Delta_{n,k}$ induced by the $G_i$ facets provide a copy of
$\Gnk{n}{k}$, and the rows induced by $F_i$ facets a copy of
$\Gnk{n}{n-k}$. Thus,
\[
\rc(\Delta_{n,k}) \ \leq \ \rc(\Gnk{n}{n-k})+\rc(\Gnk{n}{k}).
\]
Observing that $\rc(\Gnk{n}{n-k})$ is trivially bounded from above by $n$
(take a rectangle for each row), it suffices to see that
$\rc(\Gnk{n}{k})\leq \lceil\mathsf{e}\,(k+1)^2\log(n)\rceil$, which is
shown in~\cite[Lemma~3.3]{FKPT13}.
We reproduce their nice argument for completeness. The rows and columns of
$\Gnk{n}{k}$ are indexed by the sets $[n]$ and $\binom{[n]}{n-k}$,
respectively. The non-zero elements are the pairs $(x,S) \in [n] \times
\binom{[n]}{n-k}$ with $x\in S$. The inclusion-maximal rectangles are of
the form
\[
R_I \ := \ \left\{(x,S)\in[n]\times \binom{[n]}{n-k}\ : \ x\in I
\text{ and } I \subseteq S\right\},
\]
for $I\subseteq [n]$. We can pick an $I$ at random by selecting every
element in $I$ independently with probability $p=\frac{1}{k+1}$. The
probability then that an entry $(x,S)$ with $x\in S$ is covered by $R_I$
is $p(1-p)^k$. Hence, if we choose $r =
\lceil\mathsf{e}\,(k+1)^2\log(n)\rceil$ such rectangles $R_I$
independently, then the probability that an entry is not covered by any of the
rectangles is $(1-p(1-p)^k)^r$.
The total number of non-zero entries of $\Gnk{n}{k}$ is $(n-k)\cdot
\binom{n}{k}<n^{k+1}$. Therefore, the
logarithm of the expected number of non-zero entries of $\Gnk{n}{k}$ that are not covered by any rectangle is at most
\begin{align*}\log\left((1-p(1-p)^k)^rn^{k+1}\right)&=r\log\left(1-p(1-p)^k\right)+(k+1)\log(n)\\
&\leq - r p(1-p)^k +(k+1)\log(n)=- r \frac{k^k}{(k+1)^{k+1}} +(k+1)\log(n).
\end{align*}
If this is negative, then there is at least one covering with $r$ rectangles;
this happens whenever
\[
r \ > \ \frac{(k+1)^{k+2}}{k^k}\log(n).
\]
Observing that
\begin{align*}
\frac{(k+1)^{k+2}}{k^k}\log(n) \ = \
(k+1)^2\left(\frac{k+1}{k}\right)^k\log(n) \ < \
\mathsf{e}\,(k+1)^2\log(n)
\end{align*}
concludes the proof.
\end{proof}
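The probabilistic computation in this proof can be checked numerically. The following sketch (our own verification, not from the paper) evaluates the expected number of uncovered positive entries of $\Gnk{n}{k}$ for $r=\lceil\mathsf{e}\,(k+1)^2\log(n)\rceil$ random rectangles and confirms that it drops below $1$.

```python
from math import ceil, comb, e, log

def expected_uncovered(n, k):
    """Expected number of positive entries of G_{n,k} left uncovered by
    r = ceil(e (k+1)^2 log n) independent random rectangles R_I, where
    each element of [n] is put into I with probability p = 1/(k+1)."""
    p = 1.0 / (k + 1)
    r = ceil(e * (k + 1) ** 2 * log(n))
    # A positive entry (x, S) is covered by one R_I with probability p(1-p)^k.
    return (1 - p * (1 - p) ** k) ** r * (n - k) * comb(n, k)
```

Whenever this expectation is below $1$, some choice of $r$ rectangles covers all positive entries, as claimed in the proof.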
\section{Combinatorial hypersimplices and realization spaces}
\label{sec:relspaces}
Typically, the extension complexity is not an invariant of the combinatorial
isomorphism class of a polytope (see, for example, the situation with
hexagons~\cite{polygons2}). However, Corollary~\ref{cor:cube} states
that every combinatorial cube, independent of its realization, has the same
extension complexity. The proximity to cubes and the results in
Sections~\ref{sec:geom} and~\ref{sec:rc} raised the hope that this extends
to all hypersimplices. A \Defn{combinatorial $(n,k)$-hypersimplex} is any
polytope whose face lattice is isomorphic to that of $\Delta_{n,k}$. One
approach would have been through rectangle covering numbers but
Proposition~\ref{prop:rcbounds} refutes this approach in the strongest possible
sense.
We extend the notions of $F$- and $G$-facets from~\eqref{eqn:FG} to
combinatorial hypersimplices. The crucial property that we used in the proof
of Proposition~\ref{prop:large_cases} was that in the standard realization of
$\Delta_{n,k}$, the polyhedron bounded by hyperplanes supporting the
$G$-facets is a full-dimensional simplex. We call a combinatorial hypersimplex
\Defn{$\mathbf{G}$-generic} if the hyperplanes supporting the $G$-facets are
not projectively concurrent, that is, if the hyperplanes supporting
combinatorial $(n-1,k-1)$-hypersimplices do not meet in a point and are not
parallel to a common line. We define the notion of
\Defn{$\mathbf{F}$-generic} hypersimplices likewise and we simply write
\Defn{$\mathbf{FG}$-generic} if a hypersimplex is $F$- and $G$-generic.
Now, if a combinatorial hypersimplex $P$ is $G$-generic, then there is an
admissible projective transformation that makes the polyhedron induced by the
$G$-facets bounded. To find such a transformation, one can proceed as follows:
translate $P$ so that it contains $0$ in the interior, then take the polar
$P^\circ$ and translate it so that the origin belongs to the interior of the
convex hull of the $G$-vertices. This is possible because $G$-genericity
implies that these vertices span a full-dimensional simplex. Taking the polar
again yields a polytope $P'$ that is projectively equivalent to $P$. Since
projective transformations leave the extension complexity invariant, the proof
of Proposition~\ref{prop:large_cases}, almost verbatim, carries over to
$FG$-generic hypersimplices.
Indeed, with the upcoming Lemma~\ref{lem:generic}, it is straightforward to
verify that the $F$-facets of an $FG$-generic $(n,k)$-hypersimplex with $2k\geq n$
are again $FG$-generic; and the same works with $G$-facets when $2k\leq n$.
Hence, one can apply the inductive reasoning of the $k=2$ case of
Proposition~\ref{prop:large_cases} and together with
Proposition~\ref{prop:small_cases}, this proves the following theorem.
\begin{thm}\label{thm:main_generic}
If $P$ is an $FG$-generic combinatorial $(n,k)$-hypersimplex with $n\geq
6$ and $2\leq k\leq n-2$, then $\rkN(P)=2n$.
\end{thm}
The following lemma states that \emph{every} combinatorial hypersimplex is
either $F$-generic or $G$-generic.
\begin{lem}\label{lem:generic}
Every combinatorial $(n,k)$-hypersimplex is
\begin{enumerate}[\rm (i)]
\item $F$-generic if $2k<n+2$, and
\item $G$-generic if $2k>n-2$.
\end{enumerate}
In particular, every combinatorial $(n,k)$-hypersimplex is $FG$-generic
for $n-2<2k<n+2$.
\end{lem}
\begin{proof}
The two statements are dual under the affine equivalence $\Delta_{n,k} \cong
\Delta_{n,n-k}$. Hence, we only prove the second statement. For this, let
$P^\circ$ be polar to a combinatorial $(n,k)$-hypersimplex with $2k >
n-2$. Thus, $P^\circ$ is a polytope of dimension $n-1$ with $2n$ vertices
$f_1,\dots,f_n$ and $g_1,\dots,g_n$ corresponding to the $F$- and
$G$-facets. In this setting, $G$-genericity means that the polytope $Q =
\conv(g_1,\dots,g_n)$ is of full dimension $n-1$.
From the combinatorics of $(n,k)$-hypersimplices, we infer that for every
$I \subseteq [n]$ with $|I| = k$, the set
\[
\conv \left( \{g_i : i \in I\} \cup \{f_i : i \not\in I \} \right)
\]
is a face of $P^\circ$ and hence $\conv(g_i : i \in I)$ is a face of $Q$.
Thus $Q$ is a \emph{$k$-neighborly} polytope with $2k \ge n-1 \ge \dim Q$.
It follows from~\cite[Thm.~7.1.4]{Grunbaum} that $Q$ is a simplex and
thus of dimension $n-1$.
\end{proof}
Although we will later see that not every combinatorial hypersimplex is
$FG$-generic (cf. Proposition~\ref{prop:singular62}), this has some
immediate consequences for the extension complexity of combinatorial
hypersimplices. The following corollary can be deduced from
Figure~\ref{fig:hypersimplex_chart}, using Corollary~\ref{cor:face} to
navigate along the arrows to the (thick) diagonal.
\begin{repcor}{cor:XCcombinatorial}
If $P$ is a combinatorial $(n,k)$-hypersimplex with $n\geq 6$, then
\[
\renewcommand\arraystretch{1.2}
\rkN(P) \ \geq \
\left\{
\begin{array}{rrr@{}c@{}l}
n+2k+1,& \text{ if} & 2 \ \le \ & k & \ < \ \ffloor{n}{2},\\
2n,& \text{ if} & \ffloor{n}{2} \ \le \ &k & \ \le \
\fceil{n}{2}, \text{ and}\\
n+2(n-k)+1,& \text{ if} & \fceil{n}{2} \ < \ &k & \ \le \ n-2.
\end{array}
\right.
\]
\end{repcor}
\begin{proof}
For $\ffloor{n}{2} \le k \le \fceil{n}{2}$, we get the result as a
combination of Theorem~\ref{thm:main_generic} with
Lemma~\ref{lem:generic}. If $P$ is a combinatorial $(n,k)$-hypersimplex
with $k<\ffloor{n}{2}$, then $P$ has a $2k$-dimensional face $Q$
isomorphic to $\Delta_{2k+1,k}$ (by successively taking $F$-facets). By
the previous case, $\rkN(Q)=2(2k+1)$. By Corollary~\ref{cor:face},
$\rkN(P)\geq n+2k+1$. The case $k>\fceil{n}{2}$ follows symmetrically.
\end{proof}
\begin{figure}[htpb]
\makeatletter
\pgfdeclareshape{rectangle with diagonal fill}
{
%
\inheritsavedanchors[from=rectangle]
\inheritanchorborder[from=rectangle]
\inheritanchor[from=rectangle]{north}
\inheritanchor[from=rectangle]{north west}
\inheritanchor[from=rectangle]{north east}
\inheritanchor[from=rectangle]{center}
\inheritanchor[from=rectangle]{west}
\inheritanchor[from=rectangle]{east}
\inheritanchor[from=rectangle]{mid}
\inheritanchor[from=rectangle]{mid west}
\inheritanchor[from=rectangle]{mid east}
\inheritanchor[from=rectangle]{base}
\inheritanchor[from=rectangle]{base west}
\inheritanchor[from=rectangle]{base east}
\inheritanchor[from=rectangle]{south}
\inheritanchor[from=rectangle]{south west}
\inheritanchor[from=rectangle]{south east}
\inheritbackgroundpath[from=rectangle]
\inheritbeforebackgroundpath[from=rectangle]
\inheritbehindforegroundpath[from=rectangle]
\inheritforegroundpath[from=rectangle]
\inheritbeforeforegroundpath[from=rectangle]
%
\behindbackgroundpath{%
%
%
%
\pgfextractx{\pgf@xa}{\southwest}%
\pgfextracty{\pgf@ya}{\southwest}%
\pgfextractx{\pgf@xb}{\northeast}%
\pgfextracty{\pgf@yb}{\northeast}%
\ifpgf@diagonal@lefttoright
\def\pgf@diagonal@point@a{\pgfpoint{\pgf@xa}{\pgf@yb}}%
\def\pgf@diagonal@point@b{\pgfpoint{\pgf@xb}{\pgf@ya}}%
\else
\def\pgf@diagonal@point@a{\southwest}%
\def\pgf@diagonal@point@b{\northeast}%
\fi
\pgfpathmoveto{\pgf@diagonal@point@a}%
\pgfpathlineto{\northeast}%
\pgfpathlineto{\pgfpoint{\pgf@xb}{\pgf@ya}}%
\pgfpathclose
\ifpgf@diagonal@lefttoright
\color{\pgf@diagonal@top@color}%
\else
\color{\pgf@diagonal@bottom@color}%
\fi
\pgfusepath{fill}%
\pgfpathmoveto{\pgfpoint{\pgf@xa}{\pgf@yb}}%
\pgfpathlineto{\southwest}%
\pgfpathlineto{\pgf@diagonal@point@b}%
\pgfpathclose
\ifpgf@diagonal@lefttoright
\color{\pgf@diagonal@bottom@color}%
\else
\color{\pgf@diagonal@top@color}%
\fi
\pgfusepath{fill}%
}
}
\newif\ifpgf@diagonal@lefttoright
\def\pgf@diagonal@top@color{white}
\def\pgf@diagonal@bottom@color{gray!30}
\def\pgfsetdiagonaltopcolor#1{\def\pgf@diagonal@top@color{#1}}%
\def\pgfsetdiagonalbottomcolor#1{\def\pgf@diagonal@bottom@color{#1}}%
\def\pgfsetdiagonallefttoright{\pgf@diagonal@lefttorighttrue}%
\def\pgfsetdiagonalrighttoleft{\pgf@diagonal@lefttorightfalse}%
\tikzoption{diagonal top color}{\pgfsetdiagonaltopcolor{#1}}
\tikzoption{diagonal bottom color}{\pgfsetdiagonalbottomcolor{#1}}
\tikzoption{diagonal from left to right}[]{\pgfsetdiagonallefttoright}
\tikzoption{diagonal from right to left}[]{\pgfsetdiagonalrighttoleft}
\makeatother
\begin{tikzpicture}[scale=1.1]
\foreach \n in {3,...,9} {
\foreach \j [evaluate=\j as \k using int(\j-1)] in {2,...,\n} {
\pgfmathparse{int(2*\k-\n)}
\ifnum\pgfmathresult>1
\node[rectangle, draw, very thick, fill=red!25, draw=red] (\n\k) at (2*\n,\n/2-\k) {$\Delta_{\n,\k}$};
\else
\ifnum\pgfmathresult<-1
\node[rectangle, draw, very thick, fill=blue!25, draw=blue] (\n\k) at (2*\n,\n/2-\k) {$\Delta_{\n,\k}$};
\else
\node[rectangle with diagonal fill, draw, very thick, diagonal top color=blue!25, diagonal bottom color=red!25,draw=black] (\n\k) at (2*\n,\n/2-\k) {$\Delta_{\n,\k}$};
\fi
\fi
}
}
\foreach \n [evaluate=\n as \m using int(\n+1)] in {3,...,8} {
\foreach \j [evaluate=\j as \k using int(\j-1)] in {2,...,\n} {
\draw [latex-,red, very thick] (\n\k) -- node[midway,fill=white] {$G$} (\m\j);
\draw [latex-,blue, very thick] (\n\k) -- node[midway,fill=white] {$F$}(\m\k);
}
}
\end{tikzpicture}
\caption{The small hypersimplices. Arrows represent the structure of the $F$- and $G$-facets. Those in the upper half are $F$-generic, those in the lower half are $G$-generic, and those in the middle are $FG$-generic.}\label{fig:hypersimplex_chart}
\end{figure}
\subsection{Realization spaces of hypersimplices}
A combinatorial $(n,k)$-hypersimplex is a polytope $P \subset \R^{n-1}$ given
by $2n$ linear inequalities $f_i(\x) = f_{i0} + \sum_j f_{ij}x_j \ge 0$ and
$g_i(\x) = g_{i0} + \sum_j g_{ij}x_j \ge 0$ for $i=1,\dots,n$ such that $P$ is
combinatorially isomorphic to $\Delta_{n,k}$ under the correspondence
\begin{align*}
F_i = \{ \x \in \Delta_{n,k} : x_i = 0\} &\quad \longrightarrow \quad \{ \x
\in P : f_i(\x) = 0\}, \text{ and} \\
G_i = \{ \x \in \Delta_{n,k} : x_i = 1\} &\quad \longrightarrow \quad \{ \x
\in P : g_i(\x) = 0\}.
\end{align*}
Of course, the inequalities are unique only up to a
positive scalar and hence the group $(\R^{2n}_{>0},\cdot)$ acts on ordered
collections of linear inequalities furnished by all combinatorial
$(n,k)$-hypersimplices in $\R^{n-1}$. We only want to consider realizations that
are genuinely distinct and it is customary to identify two realizations of
$\Delta_{n,k}$ that differ by an affine transformation or an (admissible) projective transformation; see,
for example, \cite[Sect. 2.1]{RichterGebert1997}
or~\cite[Sect.~8.1]{OrientedMatroids1993}. We do the latter. However, care has to be taken as
the projective linear group does not act on the realization space. To that
end, we identify $P$ with its \Defn{homogenization}
\[
\homog(P) \ := \ \cone( \{1 \} \times P) \ = \
\left\{ \binom{x_0}{\x} \in \R^n :
\begin{array}{r@{ \ \ge \ }l}
g_{0i}x_0 + \dots + g_{(n-1)i}x_{n-1} & 0 \\
f_{0i}x_0 + \dots + f_{(n-1)i}x_{n-1} & 0 \\
\end{array}
\text{ for } i=1,\dots,n
\right\}.
\]
Under this identification, one verifies that two $(n,k)$-hypersimplices $P$ and
$P'$ are projectively equivalent if and only if $\homog(P)$ and $\homog(P')$
are linearly isomorphic. The \Defn{projective realization space} $\Rel_{n,k}$
of combinatorial $(n,k)$-hypersimplices is the set of matrices
$(g_1,\dots,g_n,f_1,\dots,f_n) \in \R^{n \times 2n}$ that yield cones
isomorphic to the homogenization of the $(n,k)$-hypersimplex modulo the action
of $\mathrm{GL}_n \times \R^{2n}_{>0}$. We write $g_i^\perp$ and $f_i^\perp$
for the facet defining hyperplanes of $\homog(P)$ corresponding to $g_i$ and
$f_i$, respectively.
Let us fix $1 < k \le \frac{n}{2}$ and let $P$ be a combinatorial
$(n,k)$-hypersimplex. By Lemma~\ref{lem:generic}, the $F$-facets are generic
and hence bound a simplex $Q$ up to projective transformation. That is,
\[
\homog(Q) \ = \ \{ \x \in \R^n :
f_{0i}x_0 + \dots + f_{(n-1)i}x_{n-1} \ge 0 \text{ for } i=1,\dots,n \} \ \cong \
\R^n_{\ge 0}
\]
by a linear transformation. Hence, we can choose a matrix representing
$\homog(P)$ of the form
\begin{equation}\label{eqn:G_proj}
\left(
\begin{array}{cccc|ccc}
| &| & &| & 1 & & \\
g_1&g_2&\cdots&g_n & & \ddotsHacked &\\
| &| & & | & & &1\\
\end{array}
\right).
\end{equation}
Modulo positive column and row scaling, the matrix $(g_1,\dots,g_n)$ uniquely determines
$P$ up to projective transformations.
Indeed, the effect of positively scaling a row of~\eqref{eqn:G_proj} can be
undone on the identity block by a suitable positive column scaling of the
$f_i$'s, so the identity matrix in~\eqref{eqn:G_proj} stays invariant.
For example, a representative of the standard realization of $\Delta_{n,k}$ is given by the $G$-matrix:
\begin{equation}\label{eqn:standardGmatrix}
\left(
\begin{array}{c c c c}
-k+1 & 1 & \cdots & 1\\
1 & -k+1 & \cdots & 1\\
\vdots & \vdots& \ddots & \vdots \\
1 & 1& \cdots & -k+1
\end{array}\right).
\end{equation}
The $G$-matrix also gives us a condition for $G$-genericity. The hyperplanes
supporting the $G$-facets are projectively concurrent if and only if there is
a nonzero element in the kernel of the $G$-matrix. This proves the following
useful criterion.
\begin{lem}\label{lem:G_generic}
Let $P$ be a combinatorial $(n,k)$-hypersimplex with $k \le \frac{n}{2}$.
Then $P$ is $G$-generic if and only if $\det(g_1,\dots,g_n) \neq 0$.
\end{lem}
Notice that the principal $k$-minors of~\eqref{eqn:standardGmatrix} vanish.
This is common to all
combinatorial hypersimplices. Indeed, the combinatorics of hypersimplices
dictates that for any $I \subseteq [n]$ with $|I| = k$, the intersection of
the hyperplanes $\{g_i^\perp : i \in I\}$ and $\{f_i^\perp : i \not\in I\}$
with $\homog(P)$ is a face of dimension $1$. By our choice of
$f_1,\dots,f_n$, this is equivalent to
\begin{equation}\label{eqn:G_pm}
\rk(G_{II}) \ = \ k-1 \text{ for all } I \subseteq [n] \text{ with } |I| =
k.
\end{equation}
Here, $G_{II}$ denotes the principal submatrix of the $G$-matrix with rows and
columns indexed by $I$. Notice that these are the only equality constraints for
$\Rel_{n,k}$. The remaining conditions are strict inequalities coming from
vertex-facet non-incidences.
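Both the determinant criterion of Lemma~\ref{lem:G_generic} and the vanishing of the principal $k$-minors can be checked in exact rational arithmetic for the standard $G$-matrix~\eqref{eqn:standardGmatrix}; the sketch below is our own verification for small $n$.

```python
from fractions import Fraction
from itertools import combinations

def det(M):
    """Determinant by Gaussian elimination over the rationals."""
    A = [[Fraction(x) for x in row] for row in M]
    n, d = len(A), Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if A[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            A[c], A[pivot] = A[pivot], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for j in range(c, n):
                A[r][j] -= f * A[c][j]
    return d

def standard_G(n, k):
    """G-matrix of the standard Delta_{n,k}: 1-k on the diagonal, 1 off it."""
    return [[1 - k if i == j else 1 for j in range(n)] for i in range(n)]

def principal_minor(M, I):
    return det([[M[i][j] for j in I] for i in I])
```

For the standard $G$-matrix of $\Delta_{n,k}$ the determinant equals $(n-k)(-k)^{n-1}$, which is nonzero for $0 < k < n$, while every principal $k\times k$ submatrix is singular.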
The \emph{complex} variety of $n$-by-$n$ matrices with vanishing principal
$k$-minors was studied by Wheeler in~\cite{Wheeler15}; it turns out to be a
rather complicated object. For example, it is not known if the variety is
irreducible and even the dimension is only known in certain cases. The
following result yields an upper bound on the dimension of $\Rel_{n,k}$.
\begin{thm}\label{thm:Rel_UB}
For $2<k< n-2$, every $P \in \Rel_{n,k}$ is completely identified by
a realization of $\Delta_{n-1,k-1}$ unique up to affine transformation.
%
%
%
In particular, for $2\leq k\leq n-2$, the dimension of $\Rel_{n,k}$ is at
most $\binom{n-1}{2}$.
\end{thm}
\begin{proof}
Let $P$ be a combinatorial $(n,k)$-hypersimplex. By a suitable projective
transformation, we can assume that the facet-defining hyperplanes of $F_1$
and $G_1$ are parallel. This assumption fixes the intersection of
$\aff(G_1)$ with the hyperplane at infinity and hence fixes $G_1$ up to an
affine transformation.
Since $F_1$ and $G_1$ lie in parallel hyperplanes, the corresponding
facets of $F_1$ and $G_1$ are parallel (because they are induced by the
intersection of the same supporting hyperplanes of $\Delta_{n,k}$ with
these two parallel hyperplanes).
A result of Shephard (see~\cite[Thm.~15.1.3, p.321]{Grunbaum}) states that
if all the $2$-dimensional faces of a polytope $R \subset \R^d$ are
triangles, then for any representation $R = R_1 + R_2$ of $R$ as
Minkowski sum, there are $t_i \in
\R^d$ and $\lambda_i \ge 0$ such that $R_i = t_i + \lambda_i R$ for
$i=1,2$. Now, if $Q$ and $Q'$ are \emph{normally equivalent} polytopes, i.e.\
combinatorially equivalent and corresponding facets are parallel, and $Q$
has only triangular $2$-faces, then, by~\cite[Prop.~7.12]{Ziegler1995},
all $2$-faces of $Q+Q'$ are triangles as well. It follows that $Q$ and
$Q'$ are positively homothetic.
Since every face of a hypersimplex is a hypersimplex and $2$-dimensional
hypersimplices are triangles, this shows that realizations of
hypersimplices are determined up to positive homothety once their facet
directions are determined. In particular, this shows that given $G_1$,
$F_1$ is determined up to a positive homothety. Hence, given $G_1$, $P$ is
unique up to projective transformations.
The bound on the dimension follows by induction on $k$. We will see in
Theorem~\ref{thm:dim_rel_space} that $\dim\Rel_{n,2}=\binom{n-1}{2}$,
settling the base case. The affine group of $\R^d$ is a codimension~$d$
subgroup of the projective group. Hence, by induction,
\[
\dim\Rel_{n,k} \ \leq \ \dim\Rel_{n-1,k-1}+(n-2) \ \leq \
\binom{n-2}{2}+(n-2) \ = \ \binom{n-1}{2}.\qedhere
\]
\end{proof}
\section{The $(n,2)$-hypersimplices}\label{sec:n2}
Although realization spaces are notoriously difficult objects and it is
generally difficult to access different realizations of a given polytope, in
the case of $(n,2)$-hypersimplices we have a simple construction and a nice
geometrical interpretation. Let us denote by $\Delta_{n-1} =
\conv(e_1,\dots,e_n) \subset \R^n$ the \Defn{standard simplex} of dimension
$n-1$.
\begin{thm}\label{thm:simplex_edges}
For $n \ge 4$, let $p_{ij}$ be a point in the relative interior of the
edge $[e_i,e_j] \subset \Delta_{n-1}$ for $1 \le i < j \le n$. Then
\[
P \ := \ \conv\{ p_{ij} : 1 \le i < j \le n \}
\]
is a combinatorial $(n,2)$-hypersimplex. Up to projective transformation,
every combinatorial $(n,2)$-hypersimplex arises this way.
\end{thm}
\begin{proof}
Since $\Delta_{n-1}$ is a simple polytope, the polytope $P$ is the result
of truncating every vertex $e_i$ of $\Delta_{n-1}$ by the unique
hyperplane spanned by $\{p_{ij} : j \neq i\}$. Hence, $P$ has
$\binom{n}{2}$ vertices and every $p_{ij}$ is incident to exactly two
facets isomorphic to $\Delta_{n-1,1} \cong \Delta_{n-2}$. If $n=4$, then
it is easily seen that $P$ is an octahedron. For $n > 4$, we get by
induction on $n$ that the remaining facets $P \cap \{ x : x_i = 0 \}$ are
isomorphic to $\Delta_{n-1,2}$, which implies the first claim.
For the second statement, let $P$ be a combinatorial $(n,2)$-hypersimplex.
We know from Lemma~\ref{lem:generic} that the
$F$-facets bound a projective simplex. By a
suitable projective transformation, we may assume that this is exactly
$\Delta_{n-1}$. Each vertex of $P$ lies in all the $F$-facets except for
two. So every vertex of $P$ lies in the relative interior of a unique edge
of $\Delta_{n-1}$.
\end{proof}
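The construction in Theorem~\ref{thm:simplex_edges} is easy to carry out numerically. The following Python sketch (an illustration only, with hypothetical helper names; it is not part of the proof) picks a random interior point on each edge of the standard simplex; by the theorem, the convex hull of these points is a combinatorial $(n,2)$-hypersimplex.

```python
import itertools
import random

def n2_hypersimplex_vertices(n, seed=0):
    """Pick an interior point p_ij on each edge [e_i, e_j] of the
    standard simplex conv(e_1, ..., e_n); their convex hull realizes
    a combinatorial (n,2)-hypersimplex."""
    rng = random.Random(seed)
    pts = []
    for i, j in itertools.combinations(range(n), 2):
        lam = rng.uniform(0.1, 0.9)      # relative interior of the edge
        p = [0.0] * n
        p[i], p[j] = lam, 1.0 - lam
        pts.append(p)
    return pts

V = n2_hypersimplex_vertices(5)          # 10 = C(5,2) candidate vertices
```

Each generated point has exactly two positive coordinates summing to $1$, reflecting that it lies in the relative interior of the edge $[e_i,e_j]$.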
Note that the representation given in Theorem~\ref{thm:simplex_edges} is not
unique up to projective transformation. The simplest way to factor out the
projective transformations is to perform a first truncation at the vertex
$e_1$ of $\Delta_{n-1}$. Then $\conv\{p_{12},\dots, p_{1n}, e_2,\dots, e_n\}$
is a prism over a simplex, which is projectively unique
(cf.~\cite[Ex.~4.8.30]{Grunbaum}). It then only remains to choose
$\binom{n-1}{2}$ points in the interior of every edge of the base of the
prism. Each choice produces a projectively distinct $(n,2)$-hypersimplex, and
every $(n,2)$-hypersimplex arises this way, up to projective transformation.
This completes the proof of Theorem~\ref{thm:dim_rel_space}.
\begin{repthm}{thm:dim_rel_space}
For $n \geq 4$, $\Rel_{n,2}$ is {rationally} equivalent to the
interior of a $\binom{n-1}{2}$-dimensional cube. In particular,
$\Rel_{n,2}$ is homeomorphic to an open ball and hence contractible.
\end{repthm}
To recover the description as an $n\times n$ matrix, we can proceed as
follows. Set the (projective) simplex $\Delta_F$ bounded by the $F$-facets to
be the standard simplex. Now, for every (oriented) edge $[e_i,e_j]$ of
$\Delta_F$, consider the ratio
$\rho_{ij}=\frac{\|e_i-p_{ij}\|}{\|e_j-p_{ij}\|}$ for $i \neq j$. It is not
hard to see that the diagonal entries of the $G$-matrix are negative and that,
if we scale its columns so that they are $-1$, then we are left with the
matrix
\begin{equation}\label{eq:MatDeltan2}
\left(
\begin{array}{c c c c}
-1 & \rho_{12} & \cdots & \rho_{1n}\\
\rho_{21} & -1 & \cdots & \rho_{2n}\\
\vdots & \vdots& \ddots & \vdots \\
\rho_{n1} & \rho_{n2}& \cdots & -1
\end{array}\right);
\end{equation}
containing $-1$ in the diagonal and the ratios $\rho_{ij}$ in the remaining entries. Notice how the condition on the vanishing $2\times 2$ principal minors coincides with the relation $\rho_{ij}=\rho_{ji}^{-1}$. By Theorem~\ref{thm:simplex_edges}, any choice of positive ratios fulfilling this relation gives rise to a realization of an $(n,2)$-hypersimplex.
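As a quick illustration (a hypothetical helper, not taken from the text), such matrices can be generated programmatically: choosing a positive ratio $\rho_{ij}$ for each $i<j$ and setting $\rho_{ji}=\rho_{ij}^{-1}$ automatically makes every principal $2\times 2$ minor vanish.

```python
import itertools

def ratio_matrix(rho, n):
    """Matrix of the form (MatDeltan2): -1 on the diagonal, the ratio
    rho[(i, j)] above it and its reciprocal below it."""
    M = [[-1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for (i, j), r in rho.items():
        M[i][j] = r
        M[j][i] = 1.0 / r       # enforces rho_ji = 1 / rho_ij
    return M

# the ratios of the first 3x3 matrix from the example in this section
M = ratio_matrix({(0, 1): 3.0, (0, 2): 1.0, (1, 2): 3.0}, 3)
```

Every $2\times2$ principal minor of $M$ equals $1-\rho_{ij}\rho_{ji}=0$ by construction.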
All non-diagonal entries are positive by construction. Multiplying the $i$th
column by $\rho_{i1}$ and the $i$th row by $\rho_{1i}$ for $2\leq i\leq n$,
which corresponds to a projective transformation, we are left with an
equivalent realization. Relabelling the ratios, it is of the form:
\begin{equation}\label{eq:ProjDeltan2}
\left(
\begin{array}{c c c c}
-1 & \phantom{-}1 & \cdots & \phantom{-}1\\
\phantom{-}1 & -1 & \cdots & \rho_{2n}\\
\phantom{-}\vdots & \phantom{-}\vdots& \ddots & \phantom{-}\vdots \\
\phantom{-}1 & \rho_{n2}& \cdots & -1
\end{array}\right).
\end{equation}
Choosing a realization of $\Delta_{n,2}$ up to projective transformation amounts to choosing $\binom{n-1}{2}$ positive ratios and we recover the description of Theorem~\ref{thm:dim_rel_space}.
Actually, this is the transformation that we also used before, which transforms the truncated simplex into a prism over a simplex. The $\binom{n-1}{2}$ remaining ratios correspond to the edges of the basis of the prism.
Notice that, although a similar argument provides the realization space up to affine transformation, that setup is slightly more delicate. While a choice of a point on each of the
$\binom{n}{2}$ edges of a standard simplex gives a unique affine realization of $\Delta_{n,2}$, not every affine realization can be obtained this way: The $F$-simplex might be unbounded for some realizations of $\Delta_{n,2}$. Hence, there are several ``patches'' in the affine realization space, each one corresponding to a different relative position of the $F$-simplex with respect to the hyperplane at infinity. For every patch, realizations are parametrized by the position of the points in the edges of $\Delta_F$.
\begin{ex}
To show an example, we will work with $\Delta_{3,2}$, which we look at as two nested projective simplices, the second being the convex hull of a point in each edge of the first. Even though this is not strictly a hypersimplex, it provides simpler figures than $\Delta_{4,2}$. Figure~\ref{fig:Deltan2} depicts four such realizations.
\begin{figure}[htpb]
\includegraphics[width=\linewidth]{Figures/Deltan2}
\caption{Four (projectively equivalent) realizations of $\Delta_{3,2}$ as two nested (projective) simplices.}\label{fig:Deltan2}
\end{figure}
In the first three, the outer simplex is a standard simplex. Computing the ratios of~\eqref{eq:MatDeltan2} we recover the matrices
\begin{align*}
\left(
\begin{array}{c c c}
-1 & \phantom{-}3 & \phantom{-}1\\
\phantom{-}\frac13 & -1 & \phantom{-}3\\
\phantom{-}1 & \phantom{-}\frac13 & -1
\end{array}\right)\text{, } \qquad\qquad
\left(
\begin{array}{c c c}
-1 & \phantom{-}3 & \phantom{-}\frac13\\
\phantom{-}\frac13 & -1 & \phantom{-}1\\
\phantom{-}3 & \phantom{-}1 & -1
\end{array}\right)
\qquad \text{ and } \qquad
\left(
\begin{array}{c c c}
-1 & \phantom{-}3 & \phantom{-}3\\
\phantom{-}\frac13 & -1 & \phantom{-}9\\
\phantom{-}\frac13 & \phantom{-}\frac19 & -1
\end{array}\right).
\end{align*}
We can transform the first into the second (resp.\ third) by multiplying the
third row by $3$ (resp.\ $\frac13$) and the third column by $\frac13$ (resp.\
$3$), which means that they represent projectively equivalent realizations. To
fix a unique projective representative, we send $e_1$ to infinity, and impose
that $p_{12},p_{13},e_2,e_3$ form a prism over a standard simplex (in this
case a square), as in the fourth figure. The ratios that we get on the base of
the prism give the entries of the matrix corresponding to
\eqref{eq:ProjDeltan2}:
\[
\left(
\begin{array}{c c c}
-1 & \phantom{-}1 & \phantom{-}1\\
\phantom{-}1 & -1 & \phantom{-}9\\
\phantom{-}1 & \phantom{-}\frac19 & -1
\end{array}\right).
\]
In this example, this projective transformation corresponds to multiplying the second row and the third column of the first matrix by $3$, and its third row and second column by $\frac13$.
\end{ex}
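The projective equivalences in the example above can be verified mechanically. The sketch below (illustrative only) performs the row and column scalings with exact rational arithmetic and checks that the first matrix is carried to the second.

```python
from fractions import Fraction as F

def scale(M, row_f, col_f):
    """Multiply row i of M by row_f[i] and column j by col_f[j]."""
    return [[row_f[i] * M[i][j] * col_f[j] for j in range(len(M[0]))]
            for i in range(len(M))]

M1 = [[F(-1),   F(3),    F(1)],
      [F(1, 3), F(-1),   F(3)],
      [F(1),    F(1, 3), F(-1)]]

# multiply the third row by 3 and the third column by 1/3
M2 = scale(M1, [F(1), F(1), F(3)], [F(1), F(1), F(1, 3)])
```

Since row and column scalings correspond to projective transformations, equality of `M2` with the second example matrix certifies the claimed equivalence.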
With the aid of this description, we can produce our first example of a
non-$FG$-generic hypersimplex. The following is due to Francisco Santos (personal
communication), who found nicer coordinates than our original example.
\begin{prop}\label{prop:singular62}
There are $(6,2)$-hypersimplices that are not $G$-generic.
\end{prop}
\begin{proof}
The following matrix corresponds to a non-$G$-generic realization of
$\Delta_{6,2}$
\[
\left(
\begin{array}{c c c c c c}
-1 & \phantom{-}2\sqrt{6}+5 & -2 \sqrt{6} +5 & \phantom{-}1 & \phantom{-}1 & \phantom{-}1\\
-2\sqrt{6}+5 & -1 & \phantom{-}2\sqrt{6}+5 & \phantom{-}1& \phantom{-}1& \phantom{-}1\\
\phantom{-}2\sqrt{6}+5 & -2\sqrt{6}+5 & -1 & \phantom{-}1& \phantom{-}1 & \phantom{-}1 \\
\phantom{-}1& \phantom{-}1& \phantom{-}1& -1& \phantom{-}1& \phantom{-}1\\
\phantom{-}1& \phantom{-}1& \phantom{-}1& \phantom{-}1& -1& \phantom{-}1\\
\phantom{-}1& \phantom{-}1& \phantom{-}1& \phantom{-}1& \phantom{-}1& -1
\end{array}\right).
\]
It can be checked that the determinant is zero and
Lemma~\ref{lem:G_generic} finishes the proof.
\end{proof}
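The determinant computation can be checked numerically (a floating-point illustration, not a replacement for the exact claim). Writing $a=2\sqrt6+5$ and $b=-2\sqrt6+5$, note that $ab=25-24=1$, consistent with $\rho_{ij}=\rho_{ji}^{-1}$; the matrix above is singular, whereas the standard $G$-matrix of $\Delta_{6,2}$ (diagonal $-1$, off-diagonal $1$) is not.

```python
import math

def det(M):
    """Determinant by Gaussian elimination with partial pivoting."""
    A = [row[:] for row in M]
    n, d = len(A), 1.0
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        if abs(A[p][c]) < 1e-14:      # numerically zero pivot: singular
            return 0.0
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d
        d *= A[c][c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
    return d

s = 2 * math.sqrt(6) + 5
t = -2 * math.sqrt(6) + 5
G = [[-1, s, t, 1, 1, 1],
     [t, -1, s, 1, 1, 1],
     [s, t, -1, 1, 1, 1],
     [1, 1, 1, -1, 1, 1],
     [1, 1, 1, 1, -1, 1],
     [1, 1, 1, 1, 1, -1]]
std = [[-1.0 if i == j else 1.0 for j in range(6)] for i in range(6)]
```

By Lemma~\ref{lem:G_generic}, the vanishing determinant of `G` witnesses a non-$G$-generic realization, while the nonzero determinant of `std` confirms that the standard realization is $G$-generic.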
\begin{cor}\label{cor:prescribability}
Not every combinatorial $(n,k)$-hypersimplex is a facet of a combinatorial $(n+1,k+1)$-hypersimplex.
\end{cor}
\begin{proof}
Every combinatorial $(7,3)$-hypersimplex is $FG$-generic by
Lemma~\ref{lem:generic}, a property that is inherited by its $G$-facets.
Hence, the combinatorial $(6,2)$-hypersimplex of Proposition~\ref{prop:singular62} is not a facet of any combinatorial $(7,3)$-hypersimplex.
\end{proof}
In contrast, the shape of facets of $(n,2)$-hypersimplices can be prescribed. This is a direct corollary of
the construction from Theorem~\ref{thm:simplex_edges}.
\begin{cor}\label{cor:n2prescribability}
Every combinatorial $(n,2)$-hypersimplex is a facet of a combinatorial $(n+1,2)$-hypersimplex.
\end{cor}
\subsection{The $(5,2)$-hypersimplex}\label{sec:52}
The $(5,2)$-hypersimplex is special. As we argued in the proof of
Proposition~\ref{prop:small_cases}, the nonnegative rank of the hypersimplex
$\Delta_{5,2}$ in its defining realization~\eqref{eqn:hyper} is equal to $9$.
In light of Theorem~\ref{thm:main}, this deviates from the `expected'
nonnegative rank. The goal of this section is to show that \emph{generic}
realizations of $\Delta_{5,2}$ have nonnegative rank $10$.
We just described the (projective) realization space of $\Delta_{5,2}$ obtained by
choosing an interior point in every edge of the base of $\Delta_3\times \Delta_1$. That means that
$\dim\Rel_{5,2}=6$.
We now claim that the realization $P$ of $\Delta_{5,2}$ given as the
convex hull of the columns
\begin{equation}\label{eqn:52special}
\left(\begin{array}{rrrrrrrrrr}
35 & 35 & 35 & 35 & 0 & 0 & 0 & 0 & 0 & 0 \\
35 & 0 & 0 & 0 & 50 & 42 & 20 & 0 & 0 & 0 \\
0 & 35 & 0 & 0 & 20 & 0 & 0 & 56 & 60 & 0 \\
0 & 0 & 35 & 0 & 0 & 28 & 0 & 14 & 0 & 42 \\
0 & 0 & 0 & 35 & 0 & 0 & 50 & 0 & 10 & 28
\end{array}\right)
\end{equation}
has nonnegative rank $10$. To prove this, we again use a computer to compute
the refined rectangle covering number from~\cite{OVW14}. Like the ordinary
rectangle covering number, the refined rectangle covering number yields lower
bounds on the nonnegative rank of a polytope $P$ but instead of only the
support pattern of the slack matrix $S_P$, it takes into account simple
relations between the actual values. A covering of $S = S_P$ by rectangles
$R_1,\dots,R_s$ is \Defn{refined} if for any pair of indices $(i,k),(j,l)$
such that
\begin{equation}\label{eqn:rrc}
S_{ik} S_{jl} \ > \ S_{il}S_{jk},
\end{equation}
the (necessarily positive) entries $S_{ik},S_{jl}$ are contained in at least
two rectangles. Of course, if $S_{il}$ or $S_{jk}$ are zero, then this reduces
to the condition for ordinary coverings by rectangles. The \Defn{refined rectangle
covering number} $\rrc(S_P)$ is the least size of a refined covering. It is
shown in~\cite[Theorem 3.4]{OVW14} that $\rrc(S_P)$ lies between $\rc(S_P)$
and $\rkN(P)$ and thus yields a possibly better lower bound on the nonnegative
rank. We will work with the following relaxation that we call the
\Defn{generic} refined rectangle covering number. We consider coverings by
ordinary rectangles with the additional condition that for any pair of indices
$(i,k),(j,l)$ such that
\begin{equation}\label{eqn:grrc}
S_{ik}, S_{jl},S_{il}, S_{jk} > 0 \quad \text{ and } \quad
S_{ik} S_{jl} \ \neq \ S_{il}S_{jk},
\end{equation}
the four entries $S_{ik}, S_{jl},S_{il}, S_{jk}$ are covered by at least two
rectangles. The denomination `generic' is explained in the proof of
Theorem~\ref{thm:52} below.
As for the rectangle covering number, determining whether there is a generic
refined covering of a given size can be phrased as the satisfiability of a Boolean formula. For the
example given above and the number of rectangles set to $9$, a \texttt{python}
script in the appendix produces such a formula with $450$ Boolean variables
and $16796$ clauses. Any suitable SAT solver verifies that this formula is
unsatisfiable which proves that the $(5,2)$-hypersimplex given
in~\eqref{eqn:52special} has nonnegative rank $10$.
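The generic condition~\eqref{eqn:grrc} is straightforward to enumerate on a given slack matrix. The following sketch (hypothetical helper name; since the condition is symmetric in the four entries, scanning each $2\times2$ submatrix once suffices) lists the index pairs whose entries must meet at least two rectangles.

```python
import itertools

def generic_refined_pairs(S):
    """Pairs (i,k),(j,l) whose four (positive) entries must be covered
    by at least two rectangles, per the generic condition (grrc)."""
    m, n = len(S), len(S[0])
    pairs = []
    for i, j in itertools.combinations(range(m), 2):
        for k, l in itertools.combinations(range(n), 2):
            a, b, c, d = S[i][k], S[j][l], S[i][l], S[j][k]
            if min(a, b, c, d) > 0 and a * b != c * d:
                pairs.append(((i, k), (j, l)))
    return pairs
```

These pairs are precisely the extra clauses added to the Boolean formula on top of the ordinary rectangle covering constraints.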
\begin{thm}\label{thm:52}
The combinatorial $(5,2)$-hypersimplices with nonnegative rank $10$ form a
dense open subset of $\Rel_{5,2}$.
\end{thm}
\begin{proof}
Let $\mathcal{I}$ be the collection of all pairs of indices $(i,k;j,l)$
satisfying~\eqref{eqn:grrc} in the realization of $\Delta_{5,2}$ given
in~\eqref{eqn:52special}. The set of realizations $X \subseteq \Rel_{5,2}$
for which~\eqref{eqn:grrc} is not satisfied for some $(i,k;j,l) \in
\mathcal{I}$ forms an algebraic subset of $\Rel_{5,2}$. Since the
realization~\eqref{eqn:52special} is not in $X$, this shows that
$X$ is a proper subset and hence of measure zero in the open set
$\Rel_{5,2}$.
\end{proof}
The same result extends easily to the remaining $(n,2)$-hypersimplices.
\begin{repcor}{cor:dense2}
For $n \ge 5$, the combinatorial $(n,2)$-hypersimplices with extension
complexity $2n$ are dense in $\Rel_{n,2}$.
\end{repcor}
\begin{proof}
For $n=5$, this is the previous result. For $n\geq 6$, this follows from Theorem~\ref{thm:main_generic}
with the fact that $FG$-generic hypersimplices
form a dense open subset of $\Rel_{n,2}$, because $G$-genericity is equivalent to the determinant of~\eqref{eq:ProjDeltan2} being non-zero (Lemma~\ref{lem:G_generic}).
\end{proof}
As of now, we are unable to extend this result to higher values of $k$.
Nevertheless, we conjecture that except for a subset of zero measure, all
$(n,k)$-hypersimplices have nonnegative rank~$2n$ (for $n\geq 5$). (Notice
that the existence of particular instances already ensures the existence of
open neighborhoods of hypersimplices of nonnegative rank~$2n$.)
\begin{repconj}{conj:dense}
For $n\geq 5$ and $2\leq k\leq n-2$, the combinatorial hypersimplices of
nonnegative rank~$2n$ form a dense open subset of $\Rel_{n,k}$.
\end{repconj}
Theorem~\ref{thm:main_generic} implies that combinatorial hypersimplices of
nonnegative rank~$2n$ are an open subset of $\Rel_{n,k}$. However, notice that
$\Rel_{n,k}$ is a quotient of the variety of vanishing principal $k$-minors, which
has irreducible components of different dimensions~\cite{Wheeler15}.
We cannot certify that the subset of realizations of nonnegative rank~$2n$ is
dense, because it could miss a whole component.
\textbf{Acknowledgements.} We would like to thank Stefan Weltge, for help with
the computation of rectangle covering numbers and Günter Ziegler for
insightful discussions regarding realizations of hypersimplices. We are
indebted to Francisco Santos for extensive discussions regarding realization
spaces of hypersimplices. The coordinates for the example in
Proposition~\ref{prop:singular62} are due to him. Finally, we would like to
thank the anonymous reviewers for their careful reading and useful
suggestions; in particular, for pointing out the argument
from~\cite[Lemma~3.3]{FKPT13} which simplified and strengthened
Proposition~\ref{prop:rcbounds}.
\bibliographystyle{amsalpha}
We have studied multi-agent discrete-event systems that can be divided into several groups of independent and similar agents. We have employed a relabeling map to generate template structures, based on which scalable supervisors are designed whose state sizes and computational process are independent of the number of agents. We have presented a sufficient condition for the validity of the designed scalable supervisors, and shown that this condition may be verified with low computational effort. Moreover, based on the scalable supervisor we have designed scalable local controllers, one for each component agent.
Three examples have been provided to illustrate our proposed synthesis methods.
In future research, we aim to find conditions under which scalable supervisors may be designed to achieve controlled behavior identical to the monolithic supervisor. We also aim to search for new designs of scalable supervisors when the sufficient condition of Theorem~1 fails to hold.
Additionally we are interested in investigating, in the context of scalable supervisory control, the issue of partial observation.
\section{INTRODUCTION} \label{sec1_intro}
Multi-agent systems have found increasing applications in large-scale engineering practice where tasks are difficult to be accomplished by a single entity. Examples include multiple machines in factories, robots in manufacturing cells, and AGVs in logistic systems \cite{Elm05,WuZhou07,WurDanMou08}. Although not always the case, multi-agent systems typically can be divided into several groups, according to different roles, functions, or capabilities.
For instance, machines are grouped to process different types of workpieces, robots to manufacture different parts of a product, AGVs to transport items of distinct sizes, shapes and weights.
Agents in the same group often have similar or even identical state transition structures, i.e. dynamics. This we shall refer to as a {\it modular} characteristic.
In this paper we study multi-agent systems with such a modular characteristic, and consider individual agents modeled by discrete-event systems (DES). Given a control specification, one may in principle apply supervisory control theory \cite{Ramadge & Wonham (1987a), Ramadge & Wonham (1987),Wonham (2016)} to synthesize a monolithic (i.e. centralized) supervisor for the entire multi-agent system. While the supervisor computed by this method is optimal (i.e. maximally permissive) and nonblocking, there are two main problems. First, the state size of the supervisor increases (exponentially) as the number of agents increases \cite{Gohari & Wonham (2000)}; consequently, the supervisor synthesis will become computationally infeasible for large numbers of agents. Second, whenever the number of agents changes (increases when more agents are added into the system to enhance productivity or to improve redundancy for the sake of reliability; or decreases when some agents malfunction and are removed from the system), the supervisor must be recomputed or reconfigured (e.g. \cite{KumTak12,NooSch15}) in order to adapt to the change.
The first problem may be resolved by decentralized and/or hierarchical supervisory synthesis methods (e.g. \cite{Wong & Wonham (1996), WonLee02, Feng & Wonham (2008), Schmidt & Moor (2008)}). These methods, however, usually can deal only with fixed numbers of agents, and thus must also be recomputed or reconfigured if and when the agent number changes.
In this paper we solve both problems mentioned above by exploiting the modular characteristic of multi-agent systems, and thereby designing a {\it scalable} supervisor whose state number and computational process are {\it independent} of the number of agents. First, owing to similar/identical transition structures of agents in the same group, we employ a {\it relabeling map} (precise definition given in Section II.A below) to generate a ``template structure'' for each group. The template structures thus generated are independent of the agent numbers. Then we design a supervisor based on these template structures, and prove that it is a scalable supervisor for the multi-agent system under certain sufficient condition. The controlled behavior of the designed scalable supervisor need not be optimal, but is nonblocking. Moreover, we show that the sufficient condition for the scalable supervisor is efficiently checkable.
While the designed scalable supervisor serves as a {\it centralized} controller for the multi-agent system,
it may sometimes be natural, and even more desirable, to equip each individual agent with its own {\it local} controller (such that it becomes an autonomous, intelligent agent).
Hence we move on to design {\it scalable} local controllers whose state numbers and computational process are invariant with respect to the number of component agents; for this design,
we employ the method of supervisor localization \cite{Cai & Wonham (2010),Cai & Wonham (2015),Cai & Wonham (2016)}.
Directly localizing the scalable supervisor may be computationally expensive, inasmuch as the localization method requires computing the overall plant model.
To circumvent this problem, we localize the supervisor based on the template structures and thereby derive scalable local controllers without constructing the underlying plant model.
It is proved that the collective controlled behavior of these local controllers is equivalent to that achieved by the scalable supervisor.
The contributions of our work are threefold.
First, our designed centralized supervisor has scalability with respect to the number of agents in the system.
This scalability is a desired feature of a supervisor for multi-agent systems,
inasmuch as it allows the supervisor to remain invariant regardless of how many agents are added to or removed from the system (which may occur frequently due to productivity/reliability concerns or malfunction/repair).
Second, the local controllers we designed for individual agents have the same scalability feature, and are guaranteed to collectively achieve
identical controlled behavior as the centralized supervisor does. With the local controllers `built-in', the agents become autonomous and make their own local decisions;
this is particularly useful in applications like multi-robot systems.
Finally, the computation of the scalable supervisor and local controllers is based solely on template structures and is thus independent of agent numbers as well. As a result, the computation load remains the same even if the number of agents increases; this is advantageous as compared to centralized/decentralized supervisory synthesis methods.
We note that \cite{Eyzell & Cury (2001)} also studied multi-agent systems with a modular characteristic and used group-theoretic tools to characterize symmetry among agents with similar/identical structures. Exploiting symmetry, ``quotient automata'' were constructed to reduce the state size of the composed system, based on which supervisors are synthesized. Quotient automata construction was further employed in \cite{Rohloff & Lafortune (2006)} to develop decentralized synthesis and verification algorithms for multi-agent systems. While the systems considered in \cite{Eyzell & Cury (2001), Rohloff & Lafortune (2006)} are more general than ours in that agents are allowed to share events, the state size of the resulting quotient automata is {\it dependent} on the agent numbers and in the worst case exponential in the number of agents. By contrast, we use the relabeling map approach and synthesize scalable supervisors whose state sizes are independent of agent numbers.
We also note that in \cite{Su (2013)}, an automaton-based modeling framework was presented for multi-agent systems in which the agents' dynamics are instantiated from a finite number of ``templates''; a particular product operation enforcing synchronization on broadcasting or receiving events was proposed to compose the agent dynamics. Building on \cite{Su (2013)}, the work in \cite{Su & Lin (2013)} proposed a method that first decomposes the overall control specification into local ones for individual agents, and then incrementally synthesizes a supervisor based on the local specifications. The presented algorithm for incremental synthesis is (again) dependent on, and in general exponential in, the number of agents.
By extending the ideas in \cite{Su (2013)} and \cite{Su & Lin (2013)},
the work in \cite{Su & Lenna (2017)} proposed a scalable control design for a type of multi-agent systems,
where an ``agent'' was not just a plant component, but indeed a plant of its own including an imposed specification.
The ``agents'' were instantiated from a template; for the template, under certain condition, an algorithm was proposed to design a supervisor whose instantiation was shown to work for each ``agent''.
By contrast, we consider multi-agent systems where each agent is simply a plant component, in particular involving {\it no} specification.
Moreover, the centralized/local scalable supervisors we design are distinct from the supervisor given in \cite{Su & Lenna (2017)}, because our centralized supervisor works effectively for the entire system and local supervisors for individual plant components.
The work most related to ours is reported in \cite{Jiao & Gan (2015), Jiao (2017)}. Therein the same type of multi-agent systems is investigated and relabeling maps are used to generate template structures. Various properties of the relabeling map are proposed which characterize relations between the relabeled system and the original one. Moreover, a supervisor is designed that is provably independent of agent numbers, when these numbers exceed a certain threshold value. The design of the supervisor is, however, based on first computing the synchronous product of all agents, which can be computationally expensive. This can be relieved by using {\it state tree structures} \cite{Jiao (2017)}, but the computation is still dependent on the agent numbers and thus the supervisor has to be recomputed or reconfigured whenever the number of agents changes. By contrast, our synthesis is based only on the template structures and thus independent of the agent numbers; furthermore the state size of our designed supervisor is {\it always} independent of the number of agents, with no threshold value required.
The rest of this paper is organized as follows. Section~II introduces preliminaries and formulates the scalable supervisory control synthesis problem. Section~III solves the problem by designing a scalable supervisor, and shows that the sufficient condition for solving the problem is efficiently verifiable.
Section~IV designs scalable local controllers for individual agents, and Section~V presents three examples to illustrate scalable supervisors and local controllers.
Finally Section~VI states our conclusions.
\section{Preliminaries and Problem Formulation} \label{sec2_probm}
\subsection{Preliminaries}
Let the DES plant to be controlled be modeled by a {\it generator}
\begin{center}
$\textbf{G}=(Q,\Sigma,\delta,q_0,Q_m)$
\end{center}
where $\Sigma=\Sigma_c\dot{\cup} \Sigma_u$ is a finite event set that is partitioned into a controllable event subset and an uncontrollable subset, $Q$ is the finite state set, $q_0\in Q$ the initial state, $Q_m\subseteq Q$ the set of marker states, and $\delta:Q\times \Sigma\rightarrow Q$ the (partial) transition function. Extend $\delta$ in the usual way such that $\delta:Q\times \Sigma^{*}\rightarrow Q$. The {\it closed behavior} of \textbf{G} is the language
\begin{center}
$L(\textbf{G}):=\{s\in \Sigma^{*}\mid \delta(q_0,s)!\}\subseteq \Sigma^{*}$
\end{center}
in which the notation $\delta(q_0,s)!$ means that $\delta(q_0,s)$ is defined. The {\it marked behavior} of \textbf{G} is
\begin{center}
$L_m(\textbf{G}):=\{s\in L(\textbf{G})\mid \delta(q_0,s)\in Q_m\}\subseteq L(\textbf{G})$.
\end{center}
A string $s_1$ is a {\it prefix} of another string \textit{s}, written $s_1\leq s$, if there exists $s_2$ such that $s_1s_2$ = $s$. The {\it prefix closure} of $L_m(\textbf{G})$ is
\begin{center}
$\overline{L_m(\textbf{G})}:= \{s_1\in\Sigma^{*} \mid (\exists s\in L_m(\textbf{G})) s_1\leq s\}$.
\end{center}
We say that \textbf{G} is \emph{nonblocking} if $\overline{L_m(\textbf{G})}= L(\textbf{G})$.
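Nonblocking admits a simple algorithmic test: every state reachable from $q_0$ must also be coreachable, i.e.\ able to reach some marker state. A minimal sketch (the transition function is given as a Python dict; names are illustrative, not the paper's notation):

```python
def nonblocking(states, delta, q0, Qm):
    """True iff every state reachable from q0 can reach a marker state."""
    succ = {q: set() for q in states}
    pred = {q: set() for q in states}
    for (q, _evt), q2 in delta.items():
        succ[q].add(q2)
        pred[q2].add(q)

    def closure(seed, adj):                  # plain graph search
        seen, todo = set(seed), list(seed)
        while todo:
            for r in adj[todo.pop()]:
                if r not in seen:
                    seen.add(r)
                    todo.append(r)
        return seen

    reachable = closure({q0}, succ)
    coreachable = closure(set(Qm), pred)
    return reachable <= coreachable

# state 2 is a reachable deadlock, so G blocks unless 2 is marked
delta = {(0, "a"): 1, (0, "b"): 2}
```

This is exactly the reachability/coreachability ("trim") check on the transition graph of \textbf{G}.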
A language $K\subseteq L_m(\textbf{G})$ is \textit{controllable} with respect to $L(\textbf{G})$ provided $ \overline{K}\Sigma_u \cap L(\textbf{G}) \subseteq \overline{K}$ \cite{Wonham (2016)}. Let $E\subseteq L_m({\bf G})$ be a specification language for \textbf{G}, and define the set of all sublanguages of \textit{E} that are controllable with respect to $L(\textbf{G})$ by
\begin{align*}
\mathcal{C}(E) := \{K\subseteq E \mid \overline{K}\Sigma_u \cap L(\textbf{G}) \subseteq \overline{K} \}.
\end{align*}
Then $\mathcal{C}(E)$ has a unique supremal element \cite{Wonham (2016)}
\begin{align*}
\sup\mathcal{C}(E)=\cup \{K|K\in \mathcal{C}(E)\}.
\end{align*}
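To make the fixpoint character of $\sup\mathcal{C}(E)$ concrete, here is a toy computation for \emph{finite, prefix-closed} languages given as sets of strings over single-character events. This is an illustration of the definition only; the practical algorithm operates on generators \cite{Wonham (2016)}.

```python
def supcon_toy(E, Lg, Sigma_u):
    """Supremal controllable sublanguage, for finite prefix-closed
    languages over single-character events (toy illustration)."""
    K = set(E)
    while True:
        # controllability violations: an uncontrollable event possible
        # in the plant would leave K
        bad = {s for s in K
               if any(s + u in Lg and s + u not in K for u in Sigma_u)}
        # removing a string also removes all of its extensions
        bad |= {s for s in K if s and s[:-1] not in K}
        if not bad:
            return K
        K -= bad
```

For example, with plant behavior $\{\varepsilon, a, ab\}$, uncontrollable $b$, and specification $\{\varepsilon, a\}$, the string $a$ must be pruned because the plant can uncontrollably extend it to the forbidden $ab$.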
For describing a modular structure of plant \textbf{G}, we first introduce a relabeling map.
Let $T$ be a set of new events, i.e. $\Sigma \cap T=\emptyset$. Define a \textit{relabeling} map $R: \Sigma\rightarrow T$ such that for every $\sigma\in\Sigma$,
\begin{align*}
R(\sigma) = \tau, \ \ \ \tau \in T.
\end{align*}
In general $R$ is surjective but need not be injective.
For $\sigma\in\Sigma$, let $[\sigma]$ be the set of events in $\Sigma$ that have the same $R$-image as $\sigma$, i.e.
\begin{align*}
[\sigma] := \{\sigma'\in \Sigma | R(\sigma')=R(\sigma)\}.
\end{align*}
Then $\Sigma = [\sigma_1]\dot{\cup}[\sigma_2]\dot{\cup}\cdots\dot{\cup}[\sigma_k]$, for some $k\geq 1$, and $T$ can be written as $T=\{R(\sigma_1),R(\sigma_2),\ldots,R(\sigma_k)\}$.
We require that $R$ preserve controllable/uncontrollable status of events in $\Sigma$; namely $R(\sigma)$ is a controllable event if and only if $\sigma\in\Sigma_c$.
Thus $T_c:=\{R(\sigma)| \sigma \in \Sigma_c\}$, $T_u:=\{R(\sigma)| \sigma \in \Sigma_u\}$, and $T=T_c \dot{\cup} T_u$.
We extend $R$ such that $R: \Sigma^{*}\rightarrow T^{*}$ according to
(i) $R(\varepsilon) = \varepsilon$, where $\varepsilon$ denotes the empty string;
(ii) $R(\sigma) = \tau$, $\sigma \in \Sigma$ and $\tau \in T$;
(iii) $R(s\sigma) = R(s)R(\sigma)$, $\sigma\in\Sigma$ and $s\in \Sigma^*$.
\noindent Note that $R(s) \neq \varepsilon$ for all $s \in \Sigma^* \setminus \{\varepsilon\}$.
Further extend $R$ for languages, i.e. $R:Pwr(\Sigma^{*})\rightarrow Pwr(T^{*})$, and define
\begin{align*}
R(L)=\{R(s) \in T^* | s\in L\},\ \ L\subseteq \Sigma^{*}.
\end{align*}
The \textit{inverse-image function} $R^{-1}$ of $R$ is given by $R^{-1}:$ $Pwr(T^{*})\rightarrow Pwr(\Sigma^{*})$:
\begin{center}
$R^{-1}(H)=\{s\in \Sigma^{*}| R(s)\in H\}$, \ $H\subseteq T^{*}$.
\end{center}
Note that $RR^{-1}(H)=H$ for all $H\subseteq T^{*}$, while $R^{-1}R(L)\supseteq L$ for all $L\subseteq \Sigma^{*}$.
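Since $R$ maps events to events it is length-preserving, so the preimage of a string under $R$ is the event-wise product of event preimages. The Python sketch below (encoding and names are ours) illustrates $R$ and $R^{-1}$ on finite languages, including the two identities just noted; the map in the test mirrors the example of Fig.~\ref{fig:relabel_generator}.

```python
from itertools import product

def relabel(L, R):
    # image of a finite language under the event-wise map R (a dict)
    return {tuple(R[e] for e in s) for s in L}

def inverse_relabel(H, R):
    # R is length-preserving, so the preimage of a string is the
    # event-wise product of the event preimages
    pre = {}
    for sigma, tau in R.items():
        pre.setdefault(tau, []).append(sigma)
    return {combo for t in H for combo in product(*(pre[tau] for tau in t))}
```

Note that `inverse_relabel` returns the full preimage of each string in `H`, which is why $R^{-1}R(L)$ is in general a strict superset of $L$.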
We say that $L \subseteq \Sigma^*$ is {\it (${\bf G}, R$)-normal} if $R^{-1}R(L) \cap L_m({\bf G}) \subseteq L$; this property will turn out to be important in Section~III below. Several useful properties of $R$ and $R^{-1}$ are presented in the following lemma, whose proof is given in the Appendix.
\begin{Lemma} \label{lem:cr}
For $R:Pwr(\Sigma^{*})\rightarrow Pwr(T^{*})$ and $R^{-1}:$ $Pwr(T^{*})\rightarrow Pwr(\Sigma^{*})$, the following statements are true.
(i) $R(\overline{L})= \overline{R(L)}$, $L\subseteq \Sigma^{*}$.
(ii) $R(L_1 \cap L_2)\subseteq R(L_1)\cap R(L_2)$, $L_1, L_2\subseteq \Sigma^{*}$.
(iii) $R^{-1}(\overline{H})= \overline{R^{-1}(H)}$, $H\subseteq T^{*}$.
(iv) $R^{-1}(H_1 \cap H_2)= R^{-1}(H_1)\cap R^{-1}(H_2)$, $H_1, H_2\subseteq T^{*}$.
\end{Lemma}
\smallskip
We now discuss computation of $R$, $R^{-1}$ by generators.
Let $R: \Sigma^{*}\rightarrow T^{*}$ be a relabeling map and $\textbf{G}=(Q,\Sigma,\delta,q_0,Q_m)$ a generator. First, relabel each transition of {\bf G} to obtain ${\bf G}_T = (Q,T,\delta_T,q_0,Q_m)$, where $\delta_T : Q \times T \rightarrow Q$ is defined by
\begin{align*}
\delta_T(q_1, \tau) = q_2 \mbox{ iff } (\exists \sigma \in \Sigma) R(\sigma)=\tau \ \&\ \delta(q_1,\sigma)=q_2.
\end{align*}
Hence $L_m(\textbf{G}_T)=R(L_m(\textbf{G}))$ and $L(\textbf{G}_T)=R(L(\textbf{G}))$. However, ${\bf G}_T$ as given above may be {\it nondeterministic} \cite{Wonham (2016)}. Thus apply {\it subset construction} \cite{Wonham (2016)} to convert ${\bf G}_T$ into a deterministic generator $\textbf{H}=(Z,T,\zeta,z_0,Z_m)$, with $L_m(\textbf{H})=L_m(\textbf{G}_T)$ and $L(\textbf{H})=L(\textbf{G}_T)$.\footnote{The worst-case complexity of subset construction is exponential. In the problem considered in this paper, nevertheless, the generators that need to be relabeled typically have small state sizes, and hence their relabeled models may be easily computed. This point will be illustrated by examples given below.}
See Fig.~\ref{fig:relabel_generator} for an illustrative example.
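The relabel-then-determinize computation can be sketched as follows; this is our own Python encoding (transition functions as dictionaries), and the test mirrors the example of Fig.~\ref{fig:relabel_generator}.

```python
def relabel_generator(delta, q0, Qm, R):
    """Relabel a deterministic generator (delta: dict (state, event) -> state)
    under the event map R, then determinize by subset construction."""
    # relabeled transitions are in general nondeterministic
    ndelta = {}
    for (q, sigma), q2 in delta.items():
        ndelta.setdefault((q, R[sigma]), set()).add(q2)
    # subset construction: states of H are sets of states of G
    z0 = frozenset([q0])
    zeta, Z, stack = {}, {z0}, [z0]
    while stack:
        z = stack.pop()
        for tau in {t for (q, t) in ndelta if q in z}:
            z2 = frozenset(q2 for q in z for q2 in ndelta.get((q, tau), ()))
            zeta[(z, tau)] = z2
            if z2 not in Z:
                Z.add(z2)
                stack.append(z2)
    # a subset-state is marked iff it contains some marked state of G
    Zm = {z for z in Z if z & Qm}
    return zeta, z0, Zm
```

As noted above, the subset construction is exponential in the worst case, but the generators relabeled in this paper are small, so the computation is cheap in practice.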
\begin{Lemma} \label{lem:nonb}
If $\textbf{G}$ is nonblocking, then the relabeled generator $\textbf{H}$ is also nonblocking.
\end{Lemma}
{\it Proof.} Suppose that $\textbf{G}$ is nonblocking, i.e. $\overline{L_m(\textbf{G})}=L(\textbf{G})$. Then
\begin{align*}
&R(\overline{L_m(\textbf{G})})=R(L(\textbf{G}))\\
\Rightarrow &\overline{R(L_m(\textbf{G}))}=R(L(\textbf{G})) \ \ \mbox{ (by Lemma~\ref{lem:cr}(i))}\\
\Rightarrow &\overline{L_m(\textbf{H})}=L(\textbf{H})
\end{align*}
namely ${\bf H}$ is nonblocking. \hfill $\Box$
Conversely, to inverse-relabel {\bf H}, simply replace each transition $\tau (\in T)$ of {\bf H} by those $\sigma (\in \Sigma)$ with $R(\sigma)=\tau$; thus one obtains ${\bf G}'=(Z,\Sigma,\zeta',z_0,Z_m)$, where $\zeta' : Z \times \Sigma \rightarrow Z$ is defined by
\begin{align*}
\zeta'(z_1, \sigma) = z_2 \mbox{ iff } (\exists \tau \in T) R(\sigma)=\tau \ \&\ \zeta(z_1,\tau)=z_2.
\end{align*}
It is easily verified that $L_m(\textbf{G}')=R^{-1}L_m(\textbf{H})$ and $L(\textbf{G}')=R^{-1}L(\textbf{H})$.
Note that ${\bf G}'$ as given above is deterministic (since {\bf H} is), and has the same number of states as {\bf H}; namely inverse-relabeling does not change state numbers. Note that $L_m(\textbf{G}') \supseteq L_m(\textbf{G})$ and $L(\textbf{G}') \supseteq L(\textbf{G})$. Refer again to Fig.~\ref{fig:relabel_generator} for illustration.
Henceforth we shall write $R(\textbf{G}) := \textbf{H}$ and $R^{-1}({\bf H}) := {\bf G}'$.
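Inverse relabeling is purely transition-wise, which is why it preserves the state set and hence the state count. A minimal Python sketch (our own encoding, matching the dictionary representation used above):

```python
def inverse_relabel_generator(zeta, R):
    # replace each transition on tau by every sigma with R(sigma) = tau;
    # states are untouched, so the state count is preserved
    return {(z, sigma): z2
            for (z, tau), z2 in zeta.items()
            for sigma in R
            if R[sigma] == tau}
```

Applied to the relabeled generator {\bf H} of Fig.~\ref{fig:relabel_generator}, this reproduces ${\bf G}'$: transition 1 is replaced by 11, 21 and transition 2 by 12, 22.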
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{EX.pdf}
\caption{Consider the generator \textbf{G} as displayed and a relabeling map $R: \Sigma^{*}\rightarrow T^{*}$ with $\Sigma=\{11,21,12,22\}$, $T=\{1,2\}$, $R(11)=R(21)=1$ and $R(12)=R(22)=2$. First, relabel each transition of {\bf G} to obtain $\textbf{G}_T$. Evidently $\textbf{G}_T$ is nondeterministic. Thus apply subset construction on ${\bf G}_T$ to derive a deterministic generator $\textbf{H}$. It is easily checked that $L_m(\textbf{H})=R(L_m(\textbf{G}))$ and $L(\textbf{H})=R(L(\textbf{G}))$. To inverse-relabel $\textbf{H}$, replace transition 1 by 11,21 and 2 by 12, 22; thereby one obtains the generator $\textbf{G}'$. It is verified that $L_m(\textbf{G}')=R^{-1}(L_m(\textbf{H}))$ and $L(\textbf{G}')=R^{-1}(L(\textbf{H}))$. Note that $\textbf{G}'$ and \textbf{H} have the same number of states. Convention: the initial state of a generator is labeled by a circle with an entering arrow, while a marker state is labeled by a circle with an exiting arrow. The same notation will be used in subsequent figures.}
\label{fig:relabel_generator}
\end{figure}
\subsection{Problem Formulation}
Let $R: \Sigma^{*}\rightarrow T^{*}$ be a relabeling map, and $\mathcal{G}=\{\textbf{G}_{1},\ldots,\textbf{G}_{k}\}$ be a set of generators. We say that $\mathcal{G}$ is a \textit{similar set} under $R$ if there is a generator \textbf{H} such that
\begin{align} \label{eq:similarset}
(\forall i\in \{1,\ldots,k\}) R(\textbf{G}_i)=\textbf{H}.
\end{align}
One may view {\bf H} as a ``template'' for $\mathcal{G}$ in that each generator ${\bf G}_i$ in the set may be relabeled to {\bf H}.\footnote{More generally, one may consider {\it DES isomorphism} (e.g. \cite{Cai & Wonham (2010)}) and say that $\mathcal{G}=\{\textbf{G}_{1},\ldots,\textbf{G}_{k}\}$ is a similar set if $R(\textbf{G}_i)$ and $R(\textbf{G}_j)$ are isomorphic for all $i,j\in \{1,\ldots,k\}$. For simplicity of presentation we use the definition in (\ref{eq:similarset}), and subsequent development may be readily extended to the more general case using DES isomorphism.}
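Whether a given set of generators is a similar set can be checked by relabeling each member and comparing the results. The sketch below (Python, our own encoding) performs only a necessary bounded-depth comparison of the relabeled closed behaviors; a complete check would compare the relabeled generators up to DES isomorphism as in (\ref{eq:similarset}).

```python
def bounded_lang(delta, q0, n):
    # all strings of length <= n generated by the generator from q0
    lang, frontier = {()}, {((), q0)}
    for _ in range(n):
        frontier = {(s + (e,), q2)
                    for s, q in frontier
                    for (p, e), q2 in delta.items() if p == q}
        lang |= {s for s, _ in frontier}
    return lang

def similar_under(R, gens, depth=6):
    # gens: list of (delta, q0); compare relabeled closed behaviors
    # truncated at the given depth (a necessary condition only)
    images = [{tuple(R[e] for e in s) for s in bounded_lang(d, q0, depth)}
              for d, q0 in gens]
    return all(img == images[0] for img in images)
```

The test uses two input machines in the style of Fig.~\ref{fig:SmallFact22}, both relabeling to the same template, and a third machine that does not.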
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{MACH.pdf}
\caption{Consider a small factory consisting of 3 input machines ${\bf G_{11}}, {\bf G_{12}}, {\bf G_{13}}$ and 2 output machines ${\bf G_{21}}, {\bf G_{22}}$, linked by a buffer in the middle. Events $1j1$ ($ j \in \{1,2,3\}$) and $2j1$ ($ j \in \{1,2\}$) mean that machine ${\bf G}_{ij}$ starts to work by taking in a workpiece; events $1j2$ and $2j2$ mean that ${\bf G}_{ij}$ finishes work and outputs a workpiece.
Let $\Sigma = \Sigma_c \dot\cup \Sigma_u = \{111,121,131,211,221\} \dot\cup \{112,122,132,212,222\}$ and $T=\{i1,i2 \,|\, i \in \{1,2\}\}$, and let $R: \Sigma^* \rightarrow T^*$ be the relabeling map with $R(ij1)=i1 \in T_c$ and $R(ij2)=i2 \in T_u$ for all $i\in \{1,2\}$. Hence, under $R$, the plant is divided into 2 similar groups $\{\textbf{G}_{11},\textbf{G}_{12},\textbf{G}_{13}\}$ and $\{\textbf{G}_{21},\textbf{G}_{22}\}$, with template generators $\textbf{H}_1$ and $\textbf{H}_2$ respectively. It is evident that Assumptions (A1) and (A2) hold.}
\label{fig:SmallFact22}
\end{figure}
In this paper, the plant {\bf G} is divided into $l (>1)$ groups of component agents, each group $\mathcal{G}_i \,(i \in \{1,\ldots,l\})$ being a similar set of generators under a given relabeling map $R$, i.e. $\mathcal{G}_i = \{ \textbf{G}_{i1},\ldots,\textbf{G}_{i \, n_i} \}$ ($n_i \geq 1$) and there is a generator $\textbf{H}_i$ such that
\begin{align} \label{eq:Hi}
(\forall j \in\{1,\dots,n_i\}) R(\textbf{G}_{ij}) = {\bf H}_i.
\end{align}
Let $\textbf{G}_{ij}$ be defined on $\Sigma_{ij}$ and $\textbf{H}_i$ on $T_i$. Then $R(\Sigma_{ij})=T_i$ for all $j \in \{1,...,n_i\}$.
Note that we do not consider the case where {\bf G} is divided into only one group (i.e. $l=1$),
because the control specifications considered in this paper are imposed {\it between} different groups.
Also we shall demonstrate in Section~V.C how to transform the problem where {\bf G} (naturally) contains only one group of agents into our setup.
Now we make the following assumptions.
\noindent (A1) All component agents are nonblocking and independent, i.e. their event sets are pairwise disjoint.\footnote{Under (A1), ${\bf H}_i$ ($i \in \{1,\ldots,l\}$) computed from (\ref{eq:Hi}) are nonblocking by Lemma~\ref{lem:nonb}.}
\noindent (A2) The template generators ${\bf H}_i$ ($i \in \{1,\ldots,l\}$) have pairwise-disjoint event sets. (This assumption can be regarded as being imposed on the relabeling map $R$, since the event set $T_i$ of ${\bf H}_i$ is obtained by relabeling those $\Sigma_{ij}$ of $\textbf{G}_{ij}$, $j \in \{1,\ldots,n_i\}$.)
As described above, the plant {\bf G} represents a multi-agent DES with a {\it modular} structure, i.e. containing multiple groups of similar and independent agents.
Although it would be more general to consider event sharing among agents, this modular structure is not uncommon in practical multi-agent systems
(e.g. machines in factories, robots in warehouses, and vehicles at intersections). One example of this type of modular plant is given in Fig.~\ref{fig:SmallFact22}; more examples will be illustrated in Section~V below.
Let $\Sigma$ ($=\Sigma_c \dot\cup \Sigma_u$) be the event set of plant ${\bf G}$, and $E \subseteq \Sigma^*$ a specification language that imposes behavioral constraints on ${\bf G}$ (thus the specification with respect to the plant is $E \cap L_m(\textbf{G})$).
We make the following assumption on the specification.
\noindent (A3) The specification language $E$ can be represented by a (nonblocking) generator ${\bf E}$ (i.e. $L_m({\bf E}) = E$) that satisfies $R^{-1}(R({\bf E}))= {\bf E}$.
This assumption implies that $E$ is $({\bf G}, R)$-normal, i.e. $R^{-1}R(E)\cap L_m({\bf G}) \subseteq E$.
To check whether (A3) holds, first compute $R^{-1}(R({\bf E}))$ as described in Section~II.A, and then verify that the result is DES isomorphic (e.g. \cite{Cai & Wonham (2010)}) to ${\bf E}$.
Now with plant ${\bf G}$ and specification $E$, the standard supervisory control design \cite{Wonham (2016)} proceeds as follows. First compute the plant ${\bf G}$ by {\it synchronous product} \cite{Wonham (2016)} of all component agents:
\begin{align*}
{\bf G} = ||_{i \in \{1,\ldots,l\}} {\bf G}_i,\ \mbox{ where } {\bf G}_{i} = ||_{j \in \{1,\ldots,n_i\}} {\bf G}_{ij}.
\end{align*}
Under Assumption (A1), {\bf G} is nonblocking.
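For completeness, the synchronous product used here can be sketched as follows (Python, our own encoding; marked states, which would be the pairs with both components marked, are omitted for brevity). Shared events must occur jointly, while an event outside a component's alphabet leaves that component's state unchanged.

```python
def sync_product(d1, q01, S1, d2, q02, S2):
    """Synchronous product of two deterministic generators with
    event sets S1, S2 (transitions: dict (state, event) -> state)."""
    def step(d, q, e, S):
        if e not in S:
            return q          # event outside this component's alphabet: no move
        return d.get((q, e))  # None if undefined: the event is blocked
    q0 = (q01, q02)
    delta, Q, stack = {}, {q0}, [q0]
    while stack:
        q1, q2 = stack.pop()
        for e in S1 | S2:
            n1, n2 = step(d1, q1, e, S1), step(d2, q2, e, S2)
            if n1 is None or n2 is None:
                continue
            delta[((q1, q2), e)] = (n1, n2)
            if (n1, n2) not in Q:
                Q.add((n1, n2))
                stack.append((n1, n2))
    return delta, q0
```

With pairwise-disjoint alphabets, as under Assumption (A1), the product degenerates to the pure interleaving (shuffle) of the components, as the test illustrates.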
Then synthesize a supervisor {\bf SUP} (a nonblocking generator) such that\footnote{A supervisor is formally defined as a map associating each string in the closed behavior of {\bf G} with a {\it control pattern}, i.e. a subset of enabled events. The generator supervisor {\bf SUP} we use is an {\it implementation} of such a map.}
\begin{align*}
L_m({\bf SUP}) = \sup\mathcal{C}(E \cap L_m(\textbf{G})).
\end{align*}
To rule out the trivial case, we assume the following.
\smallskip
\noindent (A4) $L_m({\bf SUP}) \neq \emptyset$ for $n_i = 1$, $i \in \{1,\ldots,l\}$. Denote this special {\bf SUP} by {\bf SUP1} henceforth, which is the monolithic supervisor when plant {\bf G} contains exactly one agent in each group.
\smallskip
By this synthesis method, the number of states of {\bf SUP} increases (exponentially) as the number of agents ($n_i$, $i \in \{1,\ldots,l\}$) increases, and consequently the supervisor synthesis becomes computationally difficult (if not impossible). In addition, whenever the number $n_i$ of agents changes (e.g. an operating agent malfunctions and is removed from the system, or a new agent/machine is added to increase productivity), the supervisor {\bf SUP} has to be recomputed or reconfigured.
These two problems may be resolved if one can synthesize a supervisor whose state size, as well as the computational effort involved in its synthesis, is {\it independent} of the number $n_i$ of agents, by exploiting the modular structure of the plant ${\bf G}$. We will call such a supervisor {\it scalable}, where scalability is with respect to the number of agents in the plant.
With this motivation, we formulate the following Scalable Supervisory Control Synthesis Problem (SSCSP):
\smallskip
{\it
Design a scalable supervisor {\bf SSUP} (a nonblocking generator) such that
\noindent (i) The number of states of {\bf SSUP} and its computation are independent of the number $n_i$ of agents for all $i \in \{1,\ldots,l\}$;
\noindent (ii) $L_m({\bf SSUP}) \cap L_m({\bf G})$ satisfies $L_m({\bf SUP1}) \subseteq$ $L_m({\bf SSUP}) \cap L_m({\bf G}) \subseteq L_m({\bf SUP})$.
}
Condition (ii) requires that $L_m({\bf SSUP}) \cap L_m({\bf G})$ be controllable with respect to $L({\bf G})$, and be lower-bounded by the marked behavior of {\bf SUP1}.
It would be ideal to have $L_m({\bf SSUP}) \cap L_m({\bf G}) = L_m({\bf SUP})$. Inasmuch as this requirement might be too strong to admit any solution to the problem, we shall consider (ii) above.
\smallskip
\section{Scalable Supervisory Control}
In this section we design a scalable supervisor to solve the Scalable Supervisory Control Synthesis Problem (SSCSP), under an easily verifiable condition.
Consider the plant {\bf G} as described in Section~II.B. Let $\Sigma (=\Sigma_c \dot\cup \Sigma_u)$ be the event set of {\bf G}, and $R : \Sigma \rightarrow T$ a relabeling map. The procedure for designing a scalable supervisor consists of steps (P1)-(P4) below: first synthesize a supervisor for the `relabeled system' under $R$, and then inverse-relabel that supervisor.
\smallskip
\noindent (P1) Let $k_i \in \{1,...,n_i\}$ denote the number of agents in group $i$ allowed to work in parallel, and compute ${\bf M}_i := R(||_{j=1,\dots,k_i} {\bf G}_{ij})$. Then compute the relabeled plant ${\bf M}$ as the synchronous product of the generators ${\bf M}_i$, i.e.
\begin{align} \label{eq:M}
{\bf M} := ||_{i \in \{1,...,l\}} {\bf M}_i.
\end{align}
We call ${\bf M}$ the {\it relabeled plant} under $R$; it is nonblocking by Assumptions (A1), (A2). The event set of ${\bf M}$ is $T =T_c \dot\cup T_u$, where $T_c = R(\Sigma_c)$ and $T_u=R(\Sigma_u)$. For computational tractability, one would choose $k_i$ to be (much) smaller than $n_i$. When all $k_i=1$, we have the special case addressed in \cite{Yingying(2018)}. Note that once the $k_i$ are fixed, the state sizes of ${\bf M}_i$ and ${\bf M}$ are fixed as well, and independent of the number $n_i$ of agents in group $i$.
\noindent (P2) Compute $F := R(E)$, where $E \subseteq \Sigma^*$ is the specification imposed on {\bf G}. We call $F \subseteq T^*$ the {\it relabeled specification} imposed on {\bf M}.
\noindent (P3) Synthesize a {\it relabeled supervisor} {\bf RSUP} (a nonblocking generator) such that
\begin{align*}
L_m({\bf RSUP}) = \sup\mathcal{C}(L_m(\textbf{M}) \cap F) \subseteq T^*.
\end{align*}
The number of states of {\bf RSUP} is independent of the number of agents, since the state size of ${\bf M}$ is.
\noindent (P4) Inverse-relabel {\bf RSUP} to derive {\bf SSUP}, i.e.
\begin{align}\label{eq:SSUP}
{\bf SSUP} := R^{-1} ({\bf RSUP})
\end{align}
with the marked behavior
\begin{align*}
L_m({\bf SSUP}) = R^{-1} L_m({\bf RSUP}) \subseteq \Sigma^*.
\end{align*}
By the inverse-relabeling computation introduced in Section~II.A, {\bf SSUP} computed in (\ref{eq:SSUP}) has the same number of states as {\bf RSUP}. It then follows that the state size of {\bf SSUP} is independent of the number of agents in plant {\bf G}.\footnote{Note that the state size of {\bf SSUP} is related to the number of groups that the plant is divided into, as well as the state size of the generator representing the relabeled specification $F$. In this paper we focus on the scalability of supervisor with respect to the number of agents, and thus assume the above two factors fixed for each problem we consider. In applications where these factors may be relevant, different approaches will need to be developed.} Moreover, it is easily observed that {\bf SSUP} is nonblocking (since {\bf RSUP} is), and its computation does not depend on the number $n_i$ of agents in each group $i$ ($\in \{1,\ldots,l\}$). The above design procedure is demonstrated with an example displayed in Fig.~\ref{fig:SmallFact22_ScaSup}.
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{SFB2.pdf}\\
\caption{Consider the small factory example in Fig.~\ref{fig:SmallFact22}, and a specification that protects the buffer (with two slots) against overflow and underflow. This specification is represented by {\bf E}, which satisfies (A3). In (P1), compute the relabeled plant ${\bf M} = {\bf M}_1 \| {\bf M}_2$, where ${\bf M}_1=R(\textbf{G}_{11}|| \textbf{G}_{12}),\ {\bf M}_2=R(\textbf{G}_{21})$ for $k_1=2$ and $k_2=1$. In (P2), compute the relabeled specification $R({\bf E})$. In (P3), compute the relabeled supervisor {\bf RSUP} for {\bf M} and $R({\bf E})$. Finally in (P4), compute $R^{-1}({\bf RSUP})$ to derive the scalable supervisor {\bf SSUP}. Note that the relabeled plant {\bf M} and the scalable supervisor {\bf SSUP} allow at most two machines in the input group to work in parallel as $k_1=2$.}
\label{fig:SmallFact22_ScaSup}
\end{figure}
\footnotetext[6]{The scalable supervisor {\bf SSUP} has the same number of states and structure as {\bf SUP1} (the monolithic supervisor when plant {\bf G} contains exactly one agent in each group). Note, however, that {\bf SSUP} is more permissive than
{\bf SUP1} (having strictly larger closed and marked behaviors), and is more flexible in that {\bf SSUP} imposes {\it no} restriction on the order in which the agents in the same group (input machines or output machines) operate.}
Our main result is the following.
\begin{Theorem} \label{thm:main}
Consider the plant {\bf G} as described in Section~II.B and suppose that Assumptions (A1), (A2), (A3), and (A4) hold.
If $L_m(\textbf{M})$ is controllable with respect to $R(L(\textbf{G}))$, then {\bf SSUP} in (\ref{eq:SSUP}) is a scalable supervisor that solves SSCSP.
\end{Theorem}
Theorem~\ref{thm:main} provides a sufficient condition under which {\bf SSUP} in (\ref{eq:SSUP}) is a solution to SSCSP. This condition is the controllability of $L_m({\bf M})$ with respect to $R(L({\bf G}))$, i.e.
$\overline{L_m({\bf M})} T_u \cap R(L({\bf G})) \subseteq \overline{L_m({\bf M})}$.
This means that the relabeled plant should be controllable with respect to the relabeled behavior of the original plant {\bf G}; in other words, the relabeling operation should not remove uncontrollable events that are allowed by {\bf G}. As we shall see below, this condition is essential in proving the controllability of $L_m({\bf SSUP}) \cap L_m({\bf G})$ with respect to $L({\bf G})$.
For the success of our scalable supervisory control synthesis, it is important to be able to verify this sufficient condition efficiently. At first sight, however, the condition seems to require computing {\bf G}, which would be computationally infeasible for large systems.
Nevertheless, we have the following result.
\begin{Proposition} \label{prop:check}
Consider the plant {\bf G} as described in Section~II.B and suppose that Assumptions (A1), (A2) hold.
If, for each group $i \in \{1,\ldots,l\}$, $L_m(\textbf{H}_{i})$ is controllable with respect to $R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}))$, then $L_m(\textbf{M})$ is controllable with respect to $R(L(\textbf{G}))$.
\end{Proposition}
Proposition~\ref{prop:check} asserts that the controllability of $L_m(\textbf{M})$ with respect to $R(L(\textbf{G}))$ may be checked in a modular fashion: namely it is sufficient to check the controllability of $L_m(\textbf{H}_{i})$ for each group with respect to only two component agents. As a result, the computational effort of checking the condition is low.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{RGH.pdf}\\
\caption{Consider a group of 2 machines ${\bf G_{11}}, {\bf G_{12}}$. Let $\Sigma = \Sigma_c \dot\cup \Sigma_u$, where $\Sigma_c=\{11,21\}$ and $\Sigma_u=\{10,20\}$. Let $T=\{0,1\}$, and the relabeling map $R: \Sigma \rightarrow T$ with $R(11)=R(21)=1 \in T_c$, $R(10)=R(20)=0 \in T_u$. Under $R$, $\textbf{H}_1=R({\bf G}_{11})=R({\bf G}_{12})$ and $R(\textbf{G}_{11}\|\textbf{G}_{12})$ are displayed. Observe that $L_m({\bf H}_1)$ is {\it not} controllable with respect to $R(L(\textbf{G}_{11}\|\textbf{G}_{12}))$: let $t=0\in \overline{L_m(\textbf{H}_{1})}$ and $\tau=0\in T_u$ such that $t\tau\in R(L(\textbf{G}_{11}\|\textbf{G}_{12}))$, but $t\tau\notin \overline{L_m(\textbf{H}_{1})}$.}
\label{fig:RGH}
\end{figure}
Note that the condition in Proposition~\ref{prop:check}, $L_m(\textbf{H}_{i})$ being controllable with respect to $R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}))$, does not always hold. An example where this condition fails is shown in Fig.~\ref{fig:RGH}.
To prove Proposition~\ref{prop:check}, we need the following two lemmas. For convenience it is assumed that Assumptions (A1), (A2) hold henceforth in this subsection.
\begin{Lemma} \label{lem:3.2}
Let $i \in \{1,\ldots,l\}$.
If $L_m(\textbf{H}_{i})=R(L_m({\bf G}_{i1}))$ is controllable with respect to $R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}))$, then $R(L_m(\textbf{G}_{i1}\|\textbf{G}_{i2}))$ is controllable with respect to $R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}\|\textbf{G}_{i3}))$.
\end{Lemma}
{\it Proof.} Let $i \in \{1,\ldots,l\}$, $t\in \overline{R(L_m(\textbf{G}_{i1}\|\textbf{G}_{i2}))} $, $\tau \in T_u$, and $t\tau \in R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}\|\textbf{G}_{i3}))$. We shall show that $t\tau \in \overline{R(L_m(\textbf{G}_{i1}\|\textbf{G}_{i2}))}$. By $t\in \overline{R(L_m(\textbf{G}_{i1}\|\textbf{G}_{i2}))}$ we derive
\begin{align*}
&t\in R(\overline{L_m(\textbf{G}_{i1}\|\textbf{G}_{i2})}) = R(L(\textbf{G}_{i1}\|\textbf{G}_{i2})) \\
\Rightarrow &(\exists s \in L(\textbf{G}_{i1}\|\textbf{G}_{i2})) R(s) = t.
\end{align*}
By Assumption (A1), $\textbf{G}_{i1}$, $\textbf{G}_{i2}$, $\textbf{G}_{i3}$ do not share events; hence the string $s$ falls into one of two cases:
Case 1: $s\in L(\textbf{G}_{i1})$ (resp. $s\in L(\textbf{G}_{i2})$).
Thus for each $\sigma \in \Sigma_u$ with $R(\sigma) = \tau$, if $s\sigma \in L(\textbf{G}_{i1}\|\textbf{G}_{i2}\|\textbf{G}_{i3})$, then
\begin{align*}
&\mbox{either } s\sigma \in L(\textbf{G}_{i1}) \mbox{ if $\sigma$ is an event of $\textbf{G}_{i1}$} \\
&\mbox{or } s\sigma \in L(\textbf{G}_{i1}\|\textbf{G}_{i2}) \mbox{ if $\sigma$ is an event of $\textbf{G}_{i2}$} \\
&\mbox{or } s\sigma \in L(\textbf{G}_{i1}\|\textbf{G}_{i3}) \mbox{ if $\sigma$ is an event of $\textbf{G}_{i3}$}.
\end{align*}
Hence $t\tau = R(s\sigma) \in R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}\|\textbf{G}_{i3}))$ implies
\begin{align*}
&\mbox{either } R(s\sigma) \in R(L(\textbf{G}_{i1})) \\
&\mbox{or } R(s\sigma) \in R(L(\textbf{G}_{i1}\|\textbf{G}_{i2})) \\
&\mbox{or } R(s\sigma) \in R(L(\textbf{G}_{i1}\|\textbf{G}_{i3}))=R(L(\textbf{G}_{i1}\|\textbf{G}_{i2})).
\end{align*}
In this case, $t\tau \in \overline{R(L_m(\textbf{G}_{i1}\|\textbf{G}_{i2}))}$ always holds.
Case 2: $s\notin L(\textbf{G}_{i1})$ and $s \in L(\textbf{G}_{i1}\|\textbf{G}_{i2})$.
Similarly, for each $\sigma \in \Sigma_u$ with $R(\sigma) = \tau$, if $s\sigma \in L(\textbf{G}_{i1}\|\textbf{G}_{i2}\|\textbf{G}_{i3})$, then
\begin{align*}
&\mbox{either } s\sigma \in L(\textbf{G}_{i1}\|\textbf{G}_{i2}) \mbox{ if $\sigma$ is an event of $\textbf{G}_{i1}$ or $\textbf{G}_{i2}$} \\
&\mbox{or } s\sigma \in L(\textbf{G}_{i1}\|\textbf{G}_{i2}\|\textbf{G}_{i3}) \mbox{ if $\sigma$ is an event of $\textbf{G}_{i3}$}.
\end{align*}
We have $t \notin R(L(\textbf{G}_{i1}))$ since $s\notin L(\textbf{G}_{i1})$. For the latter case,
{\small
\begin{align*}
&t \in R(L(\textbf{G}_{i1}\|\textbf{G}_{i2})),\ t\tau \notin R(L(\textbf{G}_{i1}\|\textbf{G}_{i2})), \ t\tau \in R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}\|\textbf{G}_{i3})).
\end{align*}}
Since $\textbf{G}_{i1}$, $\textbf{G}_{i2}$, and $\textbf{G}_{i3}$ have the same state transition structure and all their events are relabeled by the same relabeling map, there must exist a string $t'\in T^*$ and an event $\tau' \in T_u$ such that
\begin{align*}
&t' \in R(L(\textbf{G}_{i1})),\ t'\tau' \notin R(L(\textbf{G}_{i1})), \ t'\tau' \in R(L(\textbf{G}_{i1}\|\textbf{G}_{i2})).
\end{align*}
However, $R(L_m({\bf G}_{i1}))$ is controllable with respect to $R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}))$: for all $\tau' \in T_u$, if $t' \in R(L(\textbf{G}_{i1}))$ and $t'\tau' \in R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}))$, then $t'\tau' \in R(L(\textbf{G}_{i1}))$ must hold, which contradicts Case 2.
Therefore, in all cases, $t\tau \in \overline{R(L_m(\textbf{G}_{i1}\|\textbf{G}_{i2}))}$, as required.\hfill $\Box$
\medskip
Applying Lemma~\ref{lem:3.2} inductively, one derives that if $L_m(\textbf{H}_{i})$ is controllable with respect to $R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}))$, then $L_m(\textbf{M}_{i})$ ($i \in \{1,\ldots,l\}$) is controllable with respect to $R(L(||_{j \in \{1,\ldots,k_{i}+1\}} {\bf G}_{ij}))$.
\medskip
\begin{Lemma} \label{lem:3.3}
Let $i \in \{1,\ldots,l\}$.
If $L_m(\textbf{M}_{i})$ is controllable with respect to $R(L(||_{j \in \{1,\ldots,k_{i}+1\}} {\bf G}_{ij}))$, then $L_m(\textbf{M}_{i})$ is controllable with respect to $R(L(\textbf{G}_{i}))$.
\end{Lemma}
{\it Proof.} Let $t\in \overline{L_m({\bf M}_i)}$, $\tau \in T_u$, and $t\tau \in R(L(\textbf{G}_{i}))$. We shall show that $t\tau \in \overline{L_m({\bf M}_i)} = L({\bf M}_i)$. By
$t\in \overline{L_m({\bf M}_i)}$ we derive
\begin{align*}
&t\in L({\bf M}_i)= R(L(||_{j \in \{1,\ldots,k_{i}\}} {\bf G}_{ij})) \\
\Rightarrow &(\exists s \in L(||_{j \in \{1,\ldots,k_{i}\}} {\bf G}_{ij})) R(s) = t.
\end{align*}
By Assumption (A1), agents in the same group do not share events; hence every event of $s \in L(||_{j \in \{1,\ldots,k_{i}\}} {\bf G}_{ij})$ belongs to one of the first $k_i$ agents. Thus for each $\sigma \in \Sigma_u$ with $R(\sigma) = \tau$, if $s\sigma \in L(\textbf{G}_{i})$, then
\begin{align*}
&\mbox{either } s\sigma \in L(||_{j \in \{1,\ldots,k_{i}\}} {\bf G}_{ij}) \mbox{ if $\sigma\in \bigcup_{j \in \{1,\ldots,k_{i}\}}\Sigma_{ij}$} \\
&\mbox{or } s\sigma \in L(||_{j \in \{1,\ldots,k_{i}+1\}} {\bf G}_{ij}) \mbox{ if $\sigma$ is an event of $\textbf{G}_{i,k_{i}+1}$}.
\end{align*}
For the former case, $t\tau = R(s\sigma) \in R(L(||_{j \in \{1,\ldots,k_{i}\}} {\bf G}_{ij}))= L({\bf M}_i)$.
For the latter case, use the controllability of $L_m(\textbf{M}_{i})$ with respect to $R(L(||_{j \in \{1,\ldots,k_{i}+1\}} {\bf G}_{ij}))$ to derive $R(s\sigma) \in L({\bf M}_i)$. Therefore, $t\tau \in L({\bf M}_i)$ is proved.\hfill $\Box$
\medskip
We are now ready to present the proof of Proposition~\ref{prop:check}.
\medskip
{\it Proof of Proposition~\ref{prop:check}.} Let $t\in \overline{L_m({\bf M})}$, $\tau \in T_u$, and $t\tau \in R(L(\textbf{G}))$. We shall show that $t\tau \in \overline{L_m({\bf M})} = L({\bf M})$. By
$t\in \overline{L_m({\bf M})}$ we derive
\begin{align*}
&t\in L({\bf M})= L(||_{i \in \{1,\ldots,l\}} {\bf M}_{i})=\bigcap_{i \in \{1,\ldots,l\}}P_i^{-1}(L({\bf M}_{i})),
\end{align*}
where $P_i:T^*\rightarrow T_i^*$ is the natural projection. We thus get $P_i(t)\in L({\bf M}_i)$. We have $t\tau \in R(L(\textbf{G}))=R(L(||_{i \in \{1,\ldots,l\}} {\bf G}_{i}))=||_{i \in \{1,\ldots,l\}} R(L({\bf G}_{i}))$ (by Assumption (A1)). Hence
\begin{align*}
&t\tau\in ||_{i \in \{1,\ldots,l\}} R(L({\bf G}_{i}))=\bigcap_{i \in \{1,\ldots,l\}}P_i^{-1}(R(L({\bf G}_{i}))).
\end{align*}
Hence, $t\tau\in P_i^{-1}(R(L({\bf G}_{i})))$, i.e. $P_i(t\tau)\in R(L({\bf G}_{i}))$.
Combining Lemmas~\ref{lem:3.2} and \ref{lem:3.3}, it directly follows that if $L_m(\textbf{H}_{i})$ ($i \in \{1,\ldots,l\}$) is controllable with respect to $R(L(\textbf{G}_{i1}\|\textbf{G}_{i2}))$, then $L_m(\textbf{M}_{i})$ is controllable with respect to $R(L(\textbf{G}_{i}))$. Therefore, $P_i(t\tau)\in L({\bf M}_{i})$, i.e. $t\tau \in P_i^{-1}(L({\bf M}_{i}))$. It follows that $t\tau \in L(||_{i \in \{1,\ldots,l\}} {\bf M}_{i})=L({\bf M})$.
\hfill $\Box$
Thus under the easily checkable sufficient condition, Theorem~\ref{thm:main} asserts that {\bf SSUP} in (\ref{eq:SSUP}) is a valid scalable supervisor whose state size is independent of the number of agents in the plant. The advantages of this scalability are, (i) computation of {\bf SSUP} is independent of the number of agents and thus this method may handle systems with large numbers of agents; (ii) {\bf SSUP} does not need to be recomputed or reconfigured if and when some agents are removed due to failure or added for increasing productivity.
For the example in Fig.~\ref{fig:SmallFact22_ScaSup}, it is verified that the sufficient condition of Theorem~\ref{thm:main} is satisfied, and therefore the derived scalable supervisor {\bf SSUP} is a solution to SSCSP.
To prove Theorem~\ref{thm:main} we need the following lemmas.
\begin{Lemma} \label{lem:sub}
Consider the plant {\bf G} as described in Section~II.B and suppose that Assumptions (A1), (A2) hold. Then {\bf M} is nonblocking, and
\begin{center}
$L_m(\textbf{M})\subseteq R(L_m(\textbf{G}))$.
\end{center}
\end{Lemma}
\medskip
\begin{Lemma} \label{lem:3.1}
Consider the plant {\bf G} as described in Section~II.B and suppose that Assumptions (A1), (A2) hold.
Then ${\bf SSUP}$ and ${\bf G}$ are nonconflicting, i.e.
\begin{center}
$\overline{L_m(\textbf{SSUP})\cap L_m(\textbf{G})}= \overline{L_m(\textbf{SSUP})}\cap \overline{L_m(\textbf{G})}$.
\end{center}
\end{Lemma}
\medskip
The proofs of the above lemmas are given in the Appendix.
Now we are ready to provide the proof of Theorem~\ref{thm:main}.
{\it Proof of Theorem~\ref{thm:main}.} That the number of states of ${\bf SSUP}$ and its computation are independent of the number $n_i$ of agents for all $i\in \{1,\ldots,l\}$ has been asserted following (P4) of designing {\bf SSUP}. Hence to prove that {\bf SSUP} is a scalable supervisor that solves SSCSP, we will show that $L_m({\bf SUP1}) \subseteq L_m({\bf SSUP}) \cap L_m({\bf G}) \subseteq L_m({\bf SUP})$.
First we prove that $L_m({\bf SUP1}) \subseteq L_m({\bf SSUP}) \cap L_m({\bf G})$. Let $s \in L_m({\bf SUP1})$. Then $s \in ||_{i \in \{1,\ldots,l\}} L_m({\bf G}_{i1})\subseteq L_m({\bf G})$. Also it is observed from (P1)-(P3) of designing {\bf SSUP} that $R(L_m({\bf SUP1})) = L_m({\bf RSUP})$. Hence $R(s) \in L_m({\bf RSUP})$ and $s \in R^{-1}(L_m({\bf RSUP})) = L_m({\bf SSUP})$. Therefore $s \in L_m({\bf SSUP}) \cap L_m({\bf G})$, and $L_m({\bf SUP1}) \subseteq L_m({\bf SSUP}) \cap L_m({\bf G})$ is proved.
It remains to show that $L_m({\bf SSUP}) \cap L_m({\bf G}) \subseteq L_m({\bf SUP}) = \sup\mathcal{C}(E \cap L_m(\textbf{G}))$.
For this we will prove that (i) $L_m({\bf SSUP}) \cap L_m({\bf G})$ is controllable with respect to $L(\textbf{G})$, and (ii)
$L_m({\bf SSUP}) \cap L_m({\bf G}) \subseteq E \cap L_m({\bf G})$. For (i) let $s \in \overline{L_m({\bf SSUP}) \cap L_m({\bf G})}$, $\sigma\in \Sigma_u$, $s\sigma \in L(\textbf{G})$. Then
\begin{align*}
&s \in \overline{L_m({\bf SSUP}) \cap L_m({\bf G})} \\
\Rightarrow &(\exists t) st \in L_m({\bf SSUP}) \\
\Rightarrow &st \in R^{-1} L_m({\bf RSUP}) \ \ \ \mbox{ (by (P4))}\\
\Rightarrow &R(st) \in L_m({\bf RSUP}) \subseteq L_m({\bf M}) \\
\Rightarrow &R(s) \in \overline{L_m({\bf RSUP})} \ \&\ R(s) \in \overline{L_m({\bf M})}.
\end{align*}
Since $s \sigma \in L({\bf G})$, we have $R(s)R(\sigma) \in R(L({\bf G}))$ where $R(\sigma) \in T_u$ (since $\sigma \in \Sigma_u$). It then follows from the controllability of $L_m({\bf M})$ with respect to $R(L({\bf G}))$ that $R(s)R(\sigma) \in \overline{L_m({\bf M})} = L({\bf M})$ ({\bf M} is nonblocking by Lemma~\ref{lem:sub}). Now use the controllability of $L_m({\bf RSUP})$ with respect to $L({\bf M})$ to derive $R(s)R(\sigma) \in \overline{L_m({\bf RSUP})}$, and in turn
\begin{align*}
&s\sigma \in R^{-1}R(s\sigma) \subseteq R^{-1} \overline{L_m({\bf RSUP})} \\
\Rightarrow &s\sigma \in \overline{R^{-1} L_m({\bf RSUP})} = \overline{L_m({\bf SSUP})}.
\end{align*}
In the derivation above, we have used Lemma~\ref{lem:cr}(iii).
In addition, since $s\sigma \in L({\bf G}) = \overline{L_m({\bf G})}$ ({\bf G} is nonblocking by Assumption (A1)), we have
\begin{align*}
s\sigma \in \overline{L_m({\bf SSUP})} \cap \overline{L_m({\bf G})}.
\end{align*}
Under Assumptions~(A1), (A2), it follows from Lemma~\ref{lem:3.1} that {\bf SSUP} and {\bf G} are nonconflicting, i.e. $\overline{L_m({\bf SSUP})} \cap \overline{L_m({\bf G})}= \overline{L_m({\bf SSUP}) \cap L_m({\bf G})}$. Hence $s\sigma\in \overline{L_m({\bf SSUP}) \cap L_m({\bf G})}$, which proves (i).
For (ii) let $s\in L_m({\bf SSUP}) \cap L_m({\bf G})$. Then
\begin{align*}
&s \in R^{-1}L_m({\bf RSUP}) \cap L_m({\bf G}) \\
\Rightarrow &s \in L_m({\bf G}) \ \&\ R(s) \in L_m({\bf RSUP}) \subseteq F =R(E)\\
\Rightarrow &s \in L_m({\bf G}) \ \&\ s \in R^{-1}R(s) \subseteq R^{-1}R(E).
\end{align*}
Since $E$ is (${\bf G}, R$)-normal, i.e. $R^{-1}R(E) \cap L_m({\bf G}) \subseteq E$, we derive $s \in E \cap L_m({\bf G})$, which proves (ii).
The proof is now complete.\hfill $\Box$
From the proof above, note that if $R(L_m({\bf SUP}))\subseteq L_m({\bf RSUP})$, then we derive $L_m({\bf SUP})\subseteq R^{-1}R(L_m({\bf SUP}))\cap L_m({\bf G})\subseteq R^{-1} (L_m({\bf RSUP}))\cap L_m({\bf G}) = L_m({\bf SSUP})\cap L_m({\bf G})$. This leads to the following.
Corollary 1: Consider the plant {\bf G} as described in Section II.B and suppose that Assumptions (A1), (A2), (A3) hold. If the specification $E \subseteq \Sigma^*$ is $({\bf G},R)$-normal, $L_m({\bf M})$ is controllable with respect to $R(L({\bf G}))$, and
$R(L_m({\bf SUP})) \subseteq L_m({\bf RSUP})$, then {\bf SSUP} in (3) is the least restrictive scalable supervisor that solves SSCSP (i.e. $L_m({\bf SSUP})\cap L_m({\bf G}) = L_m({\bf SUP})$).
Although the least restrictive scalable solution in Corollary~1 is of theoretical interest, the additional condition
$R(L_m({\bf SUP}))\subseteq L_m({\bf RSUP}) $ may be too strong and its verification requires computing the monolithic supervisor {\bf SUP} which itself is infeasible for large multi-agent systems.
Alternatively, one may explore the threshold of $k_i$ (the number of agents in group $i$ that are allowed to work in parallel) to achieve the least restrictive controlled behavior. For the small factory example in Figs. 1 and 2, the threshold for both $k_1$ and $k_2$ is $2$, the buffer size. More generally, for a small factory consisting of $n_1$ input machines, $n_2$ output machines, and a buffer of size $b\ (\leq n_1, n_2)$, the threshold for both $k_1$ and $k_2$ is $b$. A thorough study of the threshold of $k_i$ that achieves the least restrictive controlled behavior will be pursued in our future work.
Remark 1: In Theorem 1, the condition that $L_m({\bf M})$ is controllable with respect to $R(L({\bf G}))$ rules out the case where agent models start with an uncontrollable event. To address this case, one approach is to replace the relabeled plant {\bf M} in (P1) by ${\bf M} := R({\bf G})$; the rest (P2)-(P4) remain the same. Suppose that the specification $E \subseteq L_m({\bf G})$ is controllable with respect to $L({\bf G})$. Then it is verified that $R(E)$ is controllable with respect to $R(L({\bf G})) = L({\bf M})$ (under Assumptions (A1), (A2), (A3), (A4)). Hence the resulting $L_m({\bf SSUP}) = R^{-1}R(E)$. Therefore, assuming $E$ is $({\bf G},R)$-normal (as in Theorem 1), we derive $L_m({\bf SSUP})\cap L_m({\bf G}) = R^{-1}R(E)\cap L_m({\bf G}) = E = L_m({\bf SUP})$. The above reasoning leads to the following.
Corollary 2: Consider the plant {\bf G} as described in Section II.B and suppose that Assumptions (A1), (A2), (A3), (A4) hold. Also suppose that the relabeled plant ${\bf M}$ in (P1) is ${\bf M} := R({\bf G})$. If the specification $E\subseteq L_m({\bf G})$ is controllable with respect to $L({\bf G})$, then {\bf SSUP} in (3) solves SSCSP (with $L_m({\bf SSUP}) \cap L_m({\bf G}) = L_m({\bf SUP})$).
Although Corollary 2 allows agents to start with an uncontrollable event, the assumption that ${\bf M}=R({\bf G})$ requires computing the plant model ${\bf G}$ which is infeasible for large multi-agent systems. A special case where the conditions in Corollary~2 hold is when the specification $E = L_m({\bf G})$. For the more general case where $E$ is not controllable with respect to $L({\bf G})$, we shall postpone the investigation to our future work.
\section{Extensions for improving permissiveness}
The preceding section presented a synthesis procedure for a scalable supervisor, and provided an efficiently checkable condition under which the scalable supervisor is a solution to SSCSP. In this section, we present an extension to the previous synthesis procedure in order to improve behavioral permissiveness while maintaining scalability. The improved permissiveness comes at the cost of increased computation, which demonstrates a tradeoff between scalability (state size of the supervisor and its computation) and permissiveness.
The extension to improve behavioral permissiveness is to `refine' the relabeling map $R$ in such a way that each group of agents is further divided into subgroups.
Recall from Section II that the relabeling map $R : \Sigma \rightarrow T$ is assumed to satisfy the following. For an agent ${\bf G}_{ij}$ in group $i \in \{1,\ldots, l\}$ (and $j \in \{1,\ldots,n_i\}$), defined over the event set $\Sigma_{ij}$, there holds $R(\Sigma_{ij}) = T_i$; the sets $T_i$ ($i \in \{1,\ldots,l\}$) are pairwise disjoint and $T = \dot\cup_{i \in \{1,\ldots,l\}} T_i$. Now consider the following `refinement' of $R$. Let $T'_i$ be a set disjoint from $T_i$, and define a new relabeling map $R'$ by
\begin{align*}
& R'(\Sigma_{ij}) = T_i,\ j \in [1, \lfloor \frac{n_i}{2} \rfloor]\\
& R'(\Sigma_{ij}) = T'_i,\ j \in [\lfloor \frac{n_i}{2} \rfloor+1, n_i].
\end{align*}
Thus $R'$ relabels the events of the first half of the agents in each group $i$ to $T_i$, and the second half to $T'_i$. Denote by $T' := \dot\cup_{i \in \{1,\ldots,l\}} T'_i$; then $R' : \Sigma \rightarrow T \dot\cup T'$ further divides the agents in each group $i$ into two subgroups corresponding to $T_i$ and $T'_i$, respectively. The extension that divides a group into three or more subgroups follows similarly. Under the new relabeling map $R'$, there are two distinct template generators for each group $i$:
\begin{align*}
& R'({\bf G}_{ij}) = {\bf H}_i,\ j \in [1, \lfloor \frac{n_i}{2} \rfloor]\\
& R'({\bf G}_{ij}) = {\bf H}'_i,\ j \in [\lfloor \frac{n_i}{2} \rfloor+1, n_i].
\end{align*}
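To make the subgroup split concrete, here is a minimal Python sketch of $R'$. This is purely illustrative and not part of the paper's framework: we assume events are encoded as three-character strings ``$gje$'' with group $g$, agent index $j$ and event suffix $e$ (mirroring the numeric event names used in the examples), and that agent indices are single digits.

```python
# Hypothetical sketch of the refined relabeling map R'. An event "gje" carries
# group g, agent index j and event suffix e (e.g. "131" is event suffix 1 of
# agent 3 in group 1). The first half of each group is relabeled into T_i, the
# second half into the primed copy T'_i.

def refined_relabel(event: str, group_sizes: dict) -> str:
    group, agent, suffix = int(event[0]), int(event[1]), event[2]
    n_i = group_sizes[group]              # number of agents in group i
    template = event[0] + suffix          # the template event in T_i
    if agent <= n_i // 2:                 # j in [1, floor(n_i/2)]      -> T_i
        return template
    return template + "'"                 # j in [floor(n_i/2)+1, n_i]  -> T'_i
```

For a group of five agents, agents 1 and 2 are relabeled into $T_i$ and agents 3--5 into $T'_i$, matching the floor-based split in the definition of $R'$.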
Consider the following extension of (P1):
(P1$'$): Compute the relabeled plant ${\bf H}'$ as the synchronous product of the generators ${\bf H}_i || {\bf H}'_i$, i.e.
\begin{flushright}
${\bf H}' := ||_{i \in \{1,...,l\}} ({\bf H}_i || {\bf H}'_i).$ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (5)
\end{flushright}
So constructed, ${\bf H}'$ allows at most two agents in the same group to work in parallel.
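For readers who wish to experiment, the synchronous product underlying (5) can be sketched as a standard reachability construction. The tuple representation of a generator used below (initial state, partial transition map, marked states, event set) is our own assumption for illustration, not the paper's implementation.

```python
# Toy synchronous product of two generators. A generator is represented as
# (initial_state, transitions, marked_states, event_set), where transitions
# maps (state, event) -> next_state. Shared events must occur jointly in both
# components; private events interleave.

def sync_product(g1, g2):
    init1, t1, m1, e1 = g1
    init2, t2, m2, e2 = g2
    shared = e1 & e2
    trans, marked = {}, set()
    seen = {(init1, init2)}
    frontier = [(init1, init2)]
    while frontier:
        s1, s2 = frontier.pop()
        if s1 in m1 and s2 in m2:
            marked.add((s1, s2))
        for ev in e1 | e2:
            if ev in shared:                      # shared events occur jointly
                if (s1, ev) in t1 and (s2, ev) in t2:
                    nxt = (t1[(s1, ev)], t2[(s2, ev)])
                else:
                    continue
            elif ev in e1:                        # private events interleave
                if (s1, ev) not in t1:
                    continue
                nxt = (t1[(s1, ev)], s2)
            else:
                if (s2, ev) not in t2:
                    continue
                nxt = (s1, t2[(s2, ev)])
            trans[((s1, s2), ev)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return ((init1, init2), trans, marked, e1 | e2)
```

Composing two two-state templates with disjoint event sets (as ${\bf H}_i \| {\bf H}'_i$ in (5)) yields a four-state product in which the two copies run in parallel.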
Proceed with the same (P2)-(P4) as in Section III, and denote the resulting supervisor by ${\bf SSUP}'$. The state size of ${\bf SSUP}'$ and its computation do not depend on the number of component agents, but depend on (5) and the number of groups. We have the following result.
\begin{Proposition} \label{prop:rest2}
Consider the plant {\bf G} as described in Section~II.B and suppose that Assumptions (A1), (A2), (A3), and (A4) hold.
If $L_m(\textbf{H}')$ is controllable with respect to $R'(L(\textbf{G}))$, then ${\bf SSUP}'$ is a scalable supervisor that solves SSCSP.
\end{Proposition}
The proof of Proposition~\ref{prop:rest2} is similar to that of Theorem 1, with $R$, {\bf H} and {\bf SSUP} replaced by $R'$, ${\bf H}'$ and ${\bf SSUP'}$ throughout. The condition of Proposition~\ref{prop:rest2} is efficiently checkable, by arguments analogous to those of Propositions~1 and 2.
Note that the improved permissiveness comes at the cost of increased computational effort.
The more subgroups the relabeling map is `refined' into,
the more agents in the same group are allowed to work in parallel in the relabeled plant ${\bf H}'$, and the higher the computational cost of deriving ${\bf SSUP'}$. In Section~VI.A below, we shall illustrate the method presented in this section by an example.
\section{Scalable Distributed Control} \label{sec4_sloc}
So far we have synthesized a scalable supervisor {\bf SSUP} that effectively controls the entire multi-agent system, i.e. {\bf SSUP} is a {\it centralized} controller. For the type of system considered in this paper which consists of many independent agents, however, it is also natural to design a {\it distributed} control architecture where each individual agent acquires its own local controller (thereby becoming autonomous)\footnote{In the centralized architecture, the communication from {\bf SSUP} to the agents is typically done via event broadcasting. On the other hand, in a distributed architecture, the communication between local controllers of the agents is naturally pairwise.}.
Generally speaking, a distributed control architecture is advantageous in reducing (global) communication load, since local controllers typically need to interact only with their (nearest) neighbors. A distributed architecture may also be more fault-tolerant, as partial failure of local controllers or the corresponding agents is unlikely to bring down the whole system.
For these potential benefits, we aim in this section to design for the multi-agent system a distributed control architecture.
In particular, we aim to design local controllers that have the same {\it scalability} as the centralized {\bf SSUP}; namely their state sizes and computation are independent of the number of agents in the system. Thus when some agents break down and/or new agents are added in, there is no need of recomputing or reconfiguring these local controllers.
Let us now formulate the following Scalable Distributed Control Synthesis Problem (SDCSP):
\smallskip
{\it
Design a set of scalable local controllers ${\bf SLOC}_{ij}$ (each a nonblocking generator), one for each agent ${\bf G}_{ij}$ ($i\in \{1,...,l\}$, $j \in \{1,...,n_i\}$) such that
\noindent (i) the number of states and computation of ${\bf SLOC}_{ij}$ are independent of the number $n_i$ of agents for all $i \in \{1,\ldots,l\}$;
\noindent (ii) the set of ${\bf SLOC}_{ij}$ is (collectively) {\it control equivalent} to the scalable supervisor ${\bf SSUP}$ with respect to plant ${\bf G}$, i.e.
{\small \begin{align} \label{eq:sloc_problem}
\left( \bigcap_{\substack{i\in \{1,...,l\} \\ j \in \{1,...,n_i\}}} L_m({\bf SLOC}_{ij}) \right) \cap L_m({\bf G})= L_m({\bf SSUP}) \cap L_m({\bf G}).
\end{align}
}}
\smallskip
To solve SDCSP, we employ a known technique called {\it supervisor localization} \cite{Cai & Wonham (2010),Cai & Wonham (2015),Cai & Wonham (2016)}, which decomposes an arbitrary supervisor into a set of local controllers whose collective behavior is equivalent to that supervisor. Since we have synthesized {\bf SSUP}, the scalable supervisor, a straightforward approach would be to apply supervisor localization to decompose the associated controlled behavior $L_m(\textbf{SSUP})\cap L_m(\textbf{G})$.\protect\footnote[7]{Note that it is incorrect to localize $L_m(\textbf{SSUP})$, because $L_m(\textbf{SSUP})$ is in general not controllable with respect to $L({\bf G})$.} This approach would require, however, the computation of {\bf G}, which is infeasible for large systems and would render the resulting local controllers non-scalable.
Instead we propose the following procedure for designing scalable local controllers $\textbf{SLOC}_{ij}$, for $i\in \{1,...,l\}$ and $j \in \{1,...,n_i\}$.
\noindent (Q1) Apply supervisor localization to decompose the relabeled supervisor {\bf RSUP} into {\it relabeled local controllers} ${\bf RLOC}_{i}$, $i\in \{1,...,l\}$, such that \cite{Cai & Wonham (2016)}
\begin{align*}
\left( \bigcap_{\substack{i\in \{1,...,l\}}} L_m({\bf RLOC}_{i}) \right) \cap L_m({\bf H})= L_m({\bf RSUP}).
\end{align*}
\noindent (Q2) Compute $\mbox{trim}( \textbf{RLOC}_i \| \textbf{H}_i )$, where trim($\cdot$) operation removes blocking states (if any) of the argument generator.
\noindent (Q3) Inverse-relabel $\mbox{trim}( \textbf{RLOC}_i \| \textbf{H}_i )$ to obtain $\textbf{SLOC}_{ij}$ ($j \in \{1,...,n_i\}$), i.e.
\begin{align} \label{eq:sloc}
\textbf{SLOC}_{ij} := R^{-1}( \mbox{trim}( (\textbf{RLOC}_i \| \textbf{H}_i) ) ).
\end{align}
\medskip
Notice that the computations involved in the above procedure are independent of the number $n_i$ ($i\in \{1,...,l\}$) of agents.
In (Q1), computing ${\bf RLOC}_{i}$ by localization requires computing {\bf RSUP} and {\bf H} (in (P1) and (P3) respectively), both of which are independent of $n_i$.
In (Q2), for the synchronous product both ${\bf RLOC}_{i}$ and ${\bf H}_i$ are independent of $n_i$, while trim may only reduce some states.
Finally in (Q3), inverse-relabeling does not change the number of states. Therefore the state number of the resulting scalable local controller $\textbf{SLOC}_{ij}$ and its computation are independent of the number $n_i$ ($i\in \{1,...,l\}$) of agents.
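As a toy illustration of the inverse relabeling in (Q3) (using an assumed tuple representation of a generator: initial state, transition dictionary, marked states, event set), each transition on a template event is duplicated over all of its preimage events, which leaves the state set, and hence the state count, unchanged:

```python
# Toy sketch of applying R^{-1} to a generator: every transition on a template
# event t is replaced by parallel transitions on all original events that R
# maps to t. The states themselves are untouched, which is why (Q3) preserves
# the number of states.

def inverse_relabel(gen, preimages):
    init, trans, marked, _events = gen
    new_trans, new_events = {}, set()
    for (state, template_ev), nxt in trans.items():
        for orig_ev in preimages[template_ev]:
            new_trans[(state, orig_ev)] = nxt
            new_events.add(orig_ev)
    return (init, new_trans, marked, new_events)
```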
The synchronous product in (Q2) is indeed crucial to ensure the correctness of the resulting local controllers.
If we did not compute this synchronous product and instead set the local controllers to be $R^{-1}( \mbox{trim}( \textbf{RLOC}_i ) )$, then
such local controllers could not even guarantee that the controlled behavior satisfies the imposed specification, as will be demonstrated in Section~VI.A below.
On the other hand, the synchronous product in (Q2) may produce blocking states; such an example is provided in Section~VI.B.
Thus the trim operation is needed to ensure that the resulting ${\bf SLOC}_{ij}$ is a nonblocking generator.
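The trim($\cdot$) operation in (Q2) can likewise be sketched as reachability plus coreachability pruning, again over an assumed tuple representation of a generator (initial state, transition dictionary, marked states, event set); this is a toy version, not the paper's tool:

```python
# Toy trim(.): keep only states that are both reachable from the initial state
# and coreachable to some marked state; all other states (and their incident
# transitions) are the blocking part and get removed.

def trim(gen):
    init, trans, marked, events = gen
    # forward reachability from the initial state
    reach, stack = {init}, [init]
    while stack:
        s = stack.pop()
        for (p, _e), q in trans.items():
            if p == s and q not in reach:
                reach.add(q)
                stack.append(q)
    # backward reachability from marked states (coreachability)
    coreach, stack = set(marked), list(marked)
    while stack:
        s = stack.pop()
        for (p, _e), q in trans.items():
            if q == s and p not in coreach:
                coreach.add(p)
                stack.append(p)
    keep = reach & coreach
    new_trans = {(p, e): q for (p, e), q in trans.items()
                 if p in keep and q in keep}
    return (init, new_trans, marked & keep, events)
```

In the toy generator below, state 2 has no path to a marked state; trim removes the transition leading into it and keeps the rest of the generator intact.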
In addition, note that $\textbf{SLOC}_{ij}$ are the same for all $j \in \{1,...,n_i\}$. This means that every agent ${\bf G}_{ij}$ in the same group $\mathcal{G}_i$ obtains the same local controller, although each local controller will be dedicated to enabling/disabling only the controllable events originating from its associated agent.
The main result of this section is the following.
\begin{Theorem} \label{thm:local}
The set of ${\bf SLOC}_{ij}$ ($i\in \{1,...,l\}$, $j \in \{1,...,n_i\}$) as in (\ref{eq:sloc}) is a set of scalable local controllers that solves SDCSP.
\end{Theorem}
{\it Proof:} That the number of states of ${\bf SLOC}_{ij}$ and its computation are independent of the number $n_i$ of agents for all $i\in \{1,\ldots,l\}$, $j \in \{1,...,n_i\}$ has been asserted following (Q3) of designing ${\bf SLOC}_{ij}$. Hence to prove that the set of ${\bf SLOC}_{ij}$ is a set of scalable local controllers that solves SDCSP, we will show (\ref{eq:sloc_problem}).
From (Q1) we have
{\small \begin{align*}
&\left( \bigcap_{\substack{i\in \{1,...,l\}}} L_m({\bf RLOC}_{i}) \right) \cap L_m({\bf H})= L_m({\bf RSUP}) \\
\Rightarrow &\left( \bigcap_{\substack{i\in \{1,...,l\}}} L_m({\bf RLOC}_{i}) \right) \cap \left( \|_{\substack{i\in \{1,...,l\}}} L_m({\bf H}_{i}) \right)= L_m({\bf RSUP}) \\
\Rightarrow &\bigcap_{\substack{i\in \{1,...,l\}}} \left( L_m({\bf RLOC}_{i}) \| L_m({\bf H}_{i}) \right)= L_m({\bf RSUP}) \\
\Rightarrow &\bigcap_{\substack{i\in \{1,...,l\}}} L_m( \mbox{trim}( {\bf RLOC}_{i} \| {\bf H}_{i}) ) = L_m({\bf RSUP})
\end{align*} }
Inverse-relabeling both sides and applying Lemma~\ref{lem:cr}(iv), we derive
{\small \begin{align*}
&R^{-1}\left( \bigcap_{\substack{i\in \{1,...,l\}}} L_m( \mbox{trim}( {\bf RLOC}_{i} \| {\bf H}_{i}) ) \right)= R^{-1}(L_m({\bf RSUP})) \\
\Rightarrow &\bigcap_{\substack{i\in \{1,...,l\}}} R^{-1}( L_m( \mbox{trim}( {\bf RLOC}_{i} \| {\bf H}_{i}) ) )= R^{-1}(L_m({\bf RSUP})) \\
\Rightarrow &\bigcap_{\substack{i\in \{1,...,l\}}} L_m( R^{-1}(\mbox{trim}( {\bf RLOC}_{i} \| {\bf H}_{i}) ) )= R^{-1}(L_m({\bf RSUP}))
\end{align*} }
Finally it follows from (\ref{eq:sloc}) and (P4) that
{\small \begin{align*}
&\left( \bigcap_{\substack{i\in \{1,...,l\} \\ j \in \{1,...,n_i\}}} L_m({\bf SLOC}_{ij}) \right)= L_m({\bf SSUP})\\
\Rightarrow &\left( \bigcap_{\substack{i\in \{1,...,l\} \\ j \in \{1,...,n_i\}}} L_m({\bf SLOC}_{ij}) \right) \cap L_m({\bf G})= L_m({\bf SSUP}) \cap L_m({\bf G}).
\end{align*} }
That is, (\ref{eq:sloc_problem}) is established. \hfill $\Box$
\section{Illustrating Examples}
In this section, we provide three examples to illustrate our proposed scalable supervisory synthesis as well as distributed control.
The first example extends the small factory example (studied in Figs.~\ref{fig:SmallFact22}, \ref{fig:SmallFact22_ScaSup}) to arbitrary numbers of input and output machines. The second example is a transfer line system, where we illustrate how to deal with more than one specification. The last example is called mutual exclusion, where the plant naturally contains only one group of agents; we demonstrate how to fit this type of multi-agent system into our setting and apply our method to derive scalable supervisors and local controllers.
\subsection{Small Factory}
This example has already been presented in Figs.~\ref{fig:SmallFact22} and \ref{fig:SmallFact22_ScaSup}, with 3 input machines and 2 output machines. Here we consider the general case where there are $n$ input machines and $m$ output machines, for arbitrary $n,m \geq 1$.
\begin{figure}[t!]
\centering
\includegraphics[width=0.28\textwidth]{HN.pdf}
\caption{${\bf H}'_i$ using the method in Section IV.}
\label{fig:HN}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{SUPHN.pdf}
\caption{Small Factory: maximally permissive and scalable supervisor ${\bf SSUP'}$ derived using methods in Section IV. }
\label{fig:SUPHN}
\end{figure}
To improve the permissiveness of {\bf SSUP}, we employ the method presented in Section IV.
Define a new relabeling map as follows:
\begin{align*}
& R'(1j1) = 11, R'(1j2) = 12,\ \mbox{for}\ j \in [1, \lfloor \frac{n}{2} \rfloor]\\
& R'(1j1) = 11', R'(1j2) = 12',\ \mbox{for}\ j \in [\lfloor \frac{n}{2} \rfloor +1, n]\\
& R'(2j1) = 21, R'(2j2) = 22,\ \mbox{for}\ j \in [1, \lfloor \frac{m}{2} \rfloor]\\
& R'(2j1) = 21', R'(2j2) = 22',\ \mbox{for}\ j \in [\lfloor \frac{m}{2} \rfloor +1, m].
\end{align*}
Accordingly there are two distinct template generators for each of the two groups:
\begin{align*}
& R'({\bf G}_{1j}) = {\bf H}_1,\ j \in [1, \lfloor \frac{n}{2} \rfloor]\\
& R'({\bf G}_{1j}) = {\bf H}'_1,\ j \in [\lfloor \frac{n}{2} \rfloor+1, n]\\
& R'({\bf G}_{2j}) = {\bf H}_2,\ j \in [1, \lfloor \frac{m}{2} \rfloor]\\
& R'({\bf G}_{2j}) = {\bf H}'_2,\ j \in [\lfloor \frac{m}{2} \rfloor+1, m].
\end{align*}
Compute the relabeled plant ${\bf H}' := ({\bf H}_1 || {\bf H}'_1) || ({\bf H}_2 || {\bf H}'_2)$. Proceeding with the same (P2)-(P4) as in Section III, we again derive the ${\bf SSUP}'$ in Fig.~\ref{fig:SUPHN}. Therefore, the method in Section IV leads to a more permissive scalable supervisor (at the cost of increased computational effort), which in this particular example is maximally permissive.
\subsection{Transfer Line}
\begin{figure}[t!]
\centering
\includegraphics[width=0.5\textwidth]{TL.pdf}\\
\caption{Transfer line: system configuration and component agents. Event $1i1$ ($i \in \{1,...,n\}$) means that ${\bf G}_{1i}$ starts to work by taking in a workpiece, and $1i2$ means that ${\bf G}_{1i}$ finishes work and deposits a workpiece to buffer {\bf B1}; event $2j1$ ($j \in \{1,...,m\}$) means that ${\bf G}_{2j}$ starts to work by taking in a workpiece, and $2j2$ means that ${\bf G}_{2j}$ finishes work and deposits a workpiece to buffer {\bf B2}; event $3l1$ ($l \in \{1,...,k\}$) means that ${\bf G}_{3l}$ starts to work by testing a workpiece, $3l0$ means that ${\bf G}_{3l}$ detects a fault and sends the faulty workpiece back to buffer {\bf B1}, and $3l2$ means that ${\bf G}_{3l}$ detects no fault and outputs the successfully processed workpiece.}
\label{fig:TL}
\end{figure}
The second example we present is a transfer line system, adapted from \cite{Wonham (2016)}. In this example, we demonstrate how to deal with the case where the overall specification is composed of two independent ones. As displayed in Fig.~\ref{fig:TL}, the transfer line consists of machines (${\bf G}_{11},\ldots,{\bf G}_{1n}$; ${\bf G}_{21},\ldots,{\bf G}_{2m}$) and test units (${\bf G}_{31},\ldots,{\bf G}_{3k}$), linked by two buffers {\bf B1} and {\bf B2}, both with capacity 1.
The generators of the agents are shown in Fig.~\ref{fig:TL}. Based on their different roles, the agents are divided into 3 groups:
\begin{center}
$\mathcal{G}_1=\{\textbf{G}_{11},\ldots,\textbf{G}_{1n}\}$
$\mathcal{G}_2=\{\textbf{G}_{21},\ldots,\textbf{G}_{2m}\}$
$\mathcal{G}_3=\{\textbf{G}_{31},\ldots,\textbf{G}_{3k}\}$.
\end{center}
Let the relabeling map $R$ be given by
\begin{align*}
& R(1i1)=11,\ R(1i2)=12,\ i \in \{1,\ldots,n\} \\
& R(2j1)=21,\ R(2j2)=22,\ j \in \{1,\ldots,m\} \\
& R(3l0)=30,\ R(3l1)=31,\ R(3l2)=32,\ l \in \{1,\ldots,k\}
\end{align*}
where odd-numbered events are controllable and even-numbered events are uncontrollable.
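As an illustrative aside (the string encoding of events here is our own assumption, with an event written as a three-character string of group, agent index, and suffix), the map $R$ and the parity-based controllability convention can be written directly:

```python
# Illustrative sketch: R drops the agent index from an event, e.g. R("231") = "21",
# and an event is controllable iff its number is odd (equivalently, its last
# digit is odd), matching the convention stated in the text.

def R(event: str) -> str:
    group, _agent, suffix = event[0], event[1], event[2]
    return group + suffix

def is_controllable(event: str) -> bool:
    return int(event[-1]) % 2 == 1
```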
It is easily observed that Assumptions (A1), (A2) hold.
The specification is to avoid underflow and overflow of buffers {\bf B1} and {\bf B2}, which is enforced by the two generators {\bf E1} and {\bf E2} in Fig.~\ref{fig:TL_SSUP}. Thus the overall specification $E$ is $E = L_m({\bf E1}) \cap L_m({\bf E2})$, which can be verified to satisfy Assumption (A3). It is also verified that Assumption (A4) holds.
In addition, it is checked that $L_m({\bf H}_i) := L_m(R(\textbf{G}_{i1}))$ ($i=1,2,3$) is controllable with respect to $R(L(\textbf{G}_{i1}||\textbf{G}_{i2}))$. By Proposition~\ref{prop:check}, we have that $L_m({\bf M})$ is controllable with respect to $R(L({\bf G}))$. Therefore the sufficient condition of Theorem~1 is satisfied.
\begin{figure}[t!]
\centering
\includegraphics[width=0.35\textwidth]{TLKSMN.pdf}\\
\caption{Transfer line: specification generators {\bf E1}, {\bf E2}, and scalable supervisor {\bf SSUP}}
\label{fig:TL_SSUP}
\end{figure}
By the procedure (P1)-(P4) with $k_1=2,\ k_2=3,\ k_3=1$, we design a scalable supervisor {\bf SSUP}, displayed in Fig.~\ref{fig:TL_SSUP}. The state size of {\bf SSUP} and its computation are independent of the agent numbers $n,m,k$. Moreover, the controlled behavior of {\bf SSUP} is in fact equivalent to that of the monolithic supervisor {\bf SUP}, i.e. $L_m({\bf SSUP}) \cap L_m(\textbf{G}) = L_m({\bf SUP})$, for arbitrary fixed values of $n,m,k$. This is because both buffers have only one slot, and thus the restriction due to relabeling is already enforced by the monolithic supervisor in order to satisfy the specification.
\begin{figure}[t!]
\centering
\includegraphics[width=0.48\textwidth]{TLSLOC.pdf}\\
\caption{Transfer line: scalable local controllers ($\textbf{SLOC}_{1i}$ for machine ${\bf G}_{1i}$, $i \in \{1,...,n\}$; $\textbf{SLOC}_{2j}$ for machine ${\bf G}_{2j}$, $j \in \{1,...,m\}$; $\textbf{SLOC}_{3l}$ for test unit ${\bf G}_{3l}$, $l \in \{1,...,k\}$)}
\label{fig:TLSLOC}
\end{figure}
{\bf Scalable distributed control.} Following the procedure (Q1)-(Q3) in Section~V, we compute the scalable local controllers for the individual agents.
In (Q2), certain synchronous products turn out to be blocking, as displayed in Fig.~\ref{fig:TLSLOC} (upper part). Hence the trim operation in (Q2) is important to ensure that the resulting local controllers are nonblocking.
In Fig.~\ref{fig:TLSLOC} (lower part), $\textbf{SLOC}_{1i}$ (6 states) is for the machine ${\bf G}_{1i}$, $i \in \{1,...,n\}$; $\textbf{SLOC}_{2j}$ (4 states) for the machine ${\bf G}_{2j}$, $j \in \{1,...,m\}$; and $\textbf{SLOC}_{3l}$ (6 states) for the test unit ${\bf G}_{3l}$, $l \in \{1,...,k\}$. It is verified that the desired control equivalence between the set of local controllers and the supervisor {\bf SSUP} in Fig.~\ref{fig:TL_SSUP} is satisfied, i.e. the condition (ii) of SDCSP holds.
The control logic of the scalable local controllers is as follows. First for $\textbf{SLOC}_{1i}$ ($i \in \{1,...,n\}$), which controls only the event $1i1$ of machine ${\bf G}_{1i}$, observe that event $1i1$ is disabled at states 1, 2, and 4 to protect buffer {\bf B1} against overflow, while it is disabled at state 5 due to the restriction of relabeling. As mentioned above, relabeling allows parallel operations of two machines in group one.
Next for $\textbf{SLOC}_{2j}$ ($j \in \{1,...,m\}$), which is responsible only for event $2j1$ of machine ${\bf G}_{2j}$, observe that event $2j1$ is disabled at states 0, 2 and 3. This is to protect buffer {\bf B1} against underflow and buffer {\bf B2} against overflow.
Finally for $\textbf{SLOC}_{3l}$ ($l \in \{1,...,k\}$), which is responsible only for event $3l1$ of test unit ${\bf G}_{3l}$, observe that event $3l1$ is disabled at states 0, 1, 3, 4 and 5. This is to protect buffer {\bf B2} against underflow and buffer {\bf B1} against overflow.
\subsection{Mutual Exclusion}
In this last example, mutual exclusion, we demonstrate how to transform the problem into our setup and apply our scalable supervisory synthesis. There are $n (>1)$ agents that compete to use a single resource; the specification is to prevent the resource being simultaneously used by more than one agent.
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{P.pdf}\\
\caption{Mutual exclusion: component agents and specification. Event $1i1$ ($i \in \{1,...,m\}$) means that ${\bf G}_{1i}$ starts using the resource, $1i2$ means that ${\bf G}_{1i}$ finishes using the resource; event $2j1$ ($j \in \{1,...,k\}$) means that ${\bf G}_{2j}$ starts using the resource, $2j2$ means that ${\bf G}_{2j}$ finishes using the resource.}
\label{fig:MX}
\end{figure}
For this problem, it is natural to treat all agents as just one group. However, our approach would then relabel every agent to a single template model, to which the mutual exclusion specification could not be imposed (mutual exclusion specifies requirement {\it between} different agents).
Thus in order to apply our synthesis method, we (artificially) separate the agents into two groups, with $m$ and $k$ agents respectively, such that $n=m+k$. Namely
\begin{center}
$\mathcal{G}_1=\{\textbf{G}_{11},\ldots,\textbf{G}_{1m}\}$
$\mathcal{G}_2=\{\textbf{G}_{21},\ldots,\textbf{G}_{2k}\}$.
\end{center}
The generators of the agents separated into two groups and the specification are displayed in Fig.~\ref{fig:MX}.
Let the relabeling map $R$ be given by
\begin{align*}
& R(1i1)=11,\ R(1i2)=12,\ i \in \{1,\ldots,m\} \\
& R(2j1)=21,\ R(2j2)=22,\ j \in \{1,\ldots,k\}
\end{align*}
where odd-numbered events are controllable and even-numbered events are uncontrollable.
It is readily checked that Assumptions (A1), (A2) hold. Moreover, it is verified that $L_m({\bf H}_i) := L_m(R(\textbf{G}_{i1}))$ ($i=1,2$) is controllable with respect to $R(L(\textbf{G}_{i1}||\textbf{G}_{i2}))$, and $R^{-1}R(E)=E$; hence the sufficient condition of Theorem~1 is satisfied.
By the procedure (P1)-(P4) with $k_1=1,\ k_2=1$, we design a scalable supervisor {\bf SSUP}, displayed in Fig.~\ref{fig:MX_SSUP}. Note that {\bf SSUP} is identical to the specification {\bf E}, and the state size of {\bf SSUP} and its computation are independent of the agent numbers $m,k$ (hence the total number $n$). Moreover, the controlled behavior of {\bf SSUP} is equivalent to that of the monolithic supervisor {\bf SUP}, i.e. $L_m({\bf SSUP}) \cap L_m(\textbf{G}) = L_m({\bf SUP})$, for any fixed value of $n$. This is because there is only a single resource, and no matter how many agents are in the system, the resource can be used by only one agent at any given time. Thus the restriction due to relabeling has already been imposed by the mutual exclusion specification and enforced by the monolithic supervisor {\bf SUP}.
\begin{figure}[t!]
\centering
\includegraphics[width=0.25\textwidth]{MEKS.pdf}
\caption{Mutual exclusion: scalable supervisor {\bf SSUP}}
\label{fig:MX_SSUP}
\end{figure}
{\bf Scalable distributed control.} Following the procedure (Q1)-(Q3) in Section~V, we compute the scalable local controllers for the individual agents. Specifically, as displayed in Fig.~\ref{fig:MESLOC}, $\textbf{SLOC}_{1i}$ (4 states) is for the first-group agent ${\bf G}_{1i}$, $i \in \{1,...,m\}$; while $\textbf{SLOC}_{2j}$ (4 states) is for the second-group agent ${\bf G}_{2j}$, $j \in \{1,...,k\}$. It is verified that the desired control equivalence between the set of local controllers and the supervisor {\bf SSUP} in Fig.~\ref{fig:MX_SSUP} is satisfied, i.e. (\ref{eq:sloc_problem}) holds.
The control logic of the scalable local controllers is as follows. First for $\textbf{SLOC}_{1i}$ ($i \in \{1,...,m\}$), which controls only the event $1i1$ of the first-group agent ${\bf G}_{1i}$, observe that event $1i1$ is disabled at states 1, 2, and 3. At all these states, the resource is being used by some agent; hence by mutual exclusion event $1i1$ must be disabled.
It is worth noting that if the sequence $1i1.2j1$ ($j \in \{1,...,k\}$) occurred, which is allowed by $\textbf{SLOC}_{1i}$, the mutual exclusion specification would be violated. Indeed $2j1$ must be disabled after the occurrence of $1i1$. However, since the local controller $\textbf{SLOC}_{1i}$ is responsible only for event $1i1$, the correct disablement of $2j1$ ($j \in \{1,...,k\}$) is left for another dedicated local controller $\textbf{SLOC}_{2j}$. As we can see in $\textbf{SLOC}_{2j}$, event $2j1$ is disabled at states 1, 2, and 3. In particular, at state 1 (i.e. after $1i1$ occurs) event $2j1$ is correctly disabled to guarantee mutual exclusion (as expected). Therefore, while each local controller enables/disables only its locally-owned events, together they achieve correct global controlled behavior.
\begin{figure}[t!]
\centering
\includegraphics[width=0.35\textwidth]{MESLOC.pdf}\\
\caption{Scalable local controllers for mutual exclusion: $\textbf{SLOC}_{1i1}$ and $\textbf{SLOC}_{2j1}$ are the scalable local controllers of events $1i1$ and $2j1$ respectively; $R^{-1}(\textbf{RLOC}_{11})$ and $R^{-1}(\textbf{RLOC}_{21})$ are obtained by inverse relabeling the local controllers of events 11 and 21; $\textbf{LOC}_{1i1}$ and $\textbf{LOC}_{2j1}$ are calculated from the scalable supervisor ${\bf SSUP}$ and the plant ${\bf G}$.}
\label{fig:MESLOC}
\end{figure}
\section{Introduction}
Fine-grained entity typing (FET) aims to classify entity mentions into a fine-grained semantic label set, e.g., classify ``\textit{FBI agents}" in ``\textit{They were arrested by \textbf{FBI agents}.}" as \{\textit{organization, administration, force, agent, police}\}. By providing fine-grained semantic labels, FET is critical for entity recognition~\citep{DBLP:conf/acl/LinLHS19_icip1, DBLP:conf/emnlp/LinLHSDJ19_icip2, DBLP:conf/emnlp/LinLTHSWY20_icip3, DBLP:conf/aaai/ZhangLH0LWY21_icip4, zhang-etal-2021-de} and can benefit many NLP tasks, such as relation extraction~\cite{DBLP:conf/eacl/SchutzeYA17_RE_downstream1, DBLP:conf/acl/ZhangHLJSL19_RE_downstream2}, entity linking~\cite{DBLP:conf/aaai/OnoeD20_EL_downstream4} and question answering~\cite{DBLP:conf/emnlp/YavuzGSSY16_QA_downstream3}.
\begin{figure}[htb!]
\setlength{\belowcaptionskip}{-0.5cm}
\centering
\subfigure[Extrinsic dependency]{
\resizebox{0.22\textwidth}{!}{
\includegraphics{figure/introduction_a.pdf}
\label{Fig.introduction(a)}
}}
\subfigure[Intrinsic dependency]{
\resizebox{0.22\textwidth}{!}{
\includegraphics{figure/introduction_b.pdf}
\label{Fig.introduction(b)}
}}
\subfigure[Label reasoning process]{
\resizebox{0.48\textwidth}{!}{
\includegraphics{figure/introduction_c.pdf}
\label{Fig.introduction(c)}
}}
\caption{Examples of deductive reasoning based on the extrinsic dependency and inductive reasoning based on the intrinsic dependency, where the labels \textit{person}, \textit{theorist} and \textit{commander} are deduced respectively and the label \textit{scientist} is induced from the attributes \{\texttt{expert}, \texttt{scholar}\}.}
\label{Fig.introduction}
\end{figure}
The fundamental challenge of FET comes from its large-scale and fine-grained entity label set, which leads to significant differences between FET and conventional entity typing. First, due to the massive label set, it is impossible to independently recognize each entity label without considering their dependencies. For this, existing approaches use predefined label hierarchies~\citep{ren2016afet_hier1, DBLP:conf/eacl/InuiRSS17_shimaoka_hier7, DBLP:conf/eacl/AbhishekAA17_hier5, DBLP:conf/eacl/SchutzeWK17_hier8, XuandBarbosa2018neural_hier2, DBLP:conf/ijcai/WuZMGH19_hier3, DBLP:conf/acl/ChenCD20_onto_hier4, ren2020fine_inference_hier6} or label co-occurrence statistics from training data~\citep{DBLP:conf/acl/RabinovichK17_core1, xiong2019imposing_core2, linandJi2019attentive_core3} as external constraints. Unfortunately, these label structures or statistics are difficult to obtain when transferring to new scenarios. Second, because of the fine-grained and large-scale label set, many long tail labels are provided with only a few or even no training instances. For example, in the Ultra-Fine dataset~\citep{choi2018ultra_data1}, $>$80\% of entity labels have $<$5 instances, and more severely, 25\% of labels never appear in the training data. As a result, the training data provides very limited direct information for these labels, and previous methods commonly fail to recognize these long-tailed labels.
Fortunately, the label dependencies implicitly entailed in the data provide critical knowledge to tackle the above challenges. Specifically, the dependencies between labels exist extrinsically or intrinsically. On the one hand, extrinsic dependencies reflect the \emph{direct} connections between labels, which partially appear in the form of label hierarchies and co-occurrence. For example, in Figure~\ref{Fig.introduction(a)} the labels \textit{person}, \textit{musician} and \textit{composer} have extrinsic dependencies because they form a three-level taxonomy. Furthermore, \textit{singer} and \textit{composer} also have an extrinsic dependency because they often co-occur with each other. On the other hand, intrinsic dependencies entail the \emph{indirect} connections between labels through their underlying attributes. For the example in Figure~\ref{Fig.introduction(b)}, the labels \textit{theorist} and \textit{scientist} share the same underlying attribute \texttt{scholar}. Such intrinsic dependencies provide an effective way to tackle long tail labels, because many long tail labels are actually composed of non-long tail attributes which can be summarized from non-long tail labels.
To this end, this paper proposes the \textit{Label Reasoning Network (LRN)}, which uniformly models, learns and reasons over both extrinsic and intrinsic label dependencies without requiring any predefined label structures. Specifically, LRN utilizes an auto-regressive network to conduct deductive reasoning and a bipartite attribute graph to conduct inductive reasoning between labels. Both mechanisms are jointly applied to sequentially generate fine-grained labels in an end-to-end, sequence-to-set manner. Figure~\ref{Fig.introduction(c)} shows several examples. To capture extrinsic dependencies, LRN introduces deductive reasoning (i.e., drawing a conclusion based on premises) between labels, and formulates it using an auto-regressive network to predict labels based on both the context and previous labels. For example, given the previously-generated label \textit{person} of the mention \textit{they}, as well as the context \textit{they theorize}, LRN will deduce its new label \textit{theorist} based on the extrinsic dependency between \textit{person} and \textit{theorist} derived from data. For intrinsic dependencies, LRN introduces inductive reasoning (i.e., gathering generalized information into a conclusion), and utilizes a bipartite attribute graph to reason labels based on the currently activated attributes of previous labels. For example, if the attributes \{\texttt{expert}, \texttt{scholar}\} have been activated, LRN will induce a new label \textit{scientist} based on the attribute-label relations. Consequently, by decomposing labels into attributes and associating long tail labels with frequent labels, LRN can also effectively resolve the long tail label problem by leveraging their non-long tail attributes. Through jointly leveraging the extrinsic and intrinsic dependencies via deductive and inductive reasoning, LRN can effectively handle the massive label set of FET.
Generally, our main contributions are:
\begin{itemize}[leftmargin=0.6cm,topsep=0.1cm]
\setlength{\itemsep}{0cm}
\setlength{\parskip}{0.1cm}
\item We propose \textit{Label Reasoning Network}, which uniformly models, automatically learns and effectively reasons the complex dependencies between labels in an end-to-end manner.
\item To capture extrinsic dependencies, LRN utilizes deductive reasoning to sequentially reason labels via an auto-regressive network. In this way, extrinsic dependencies are discovered and exploited without predefined label structures.
\item To capture intrinsic dependencies, LRN utilizes inductive reasoning to reason labels via a bipartite attribute graph. By decomposing labels into attributes and associating long-tailed labels with frequent attributes, LRN can effectively reason long-tailed and even zero-shot labels.
\end{itemize}
We conduct experiments on standard Ultra-Fine~\citep{choi2018ultra_data1} and OntoNotes~\citep{ontonotes_data2} dataset. Experiments show that our method achieves new state-of-the-art performance: a 13\% overall F1 improvement and a 44\% F1 improvement in the ultra-fine granularity.\footnote{Our source codes are openly available at \href{https://github.com/loriqing/Label-Reasoning-Network}{https://github.com/loriqing/Label-Reasoning-Network}}
\begin{figure*}[tbh!]
\setlength{\belowcaptionskip}{-0cm}
\centering
\includegraphics[width=0.88\textwidth]{figure/framework.pdf}
\caption{Overview of the process for LRN which contains an encoder, a deductive reasoning-based decoder and an inductive reasoning-based decoder. The figure shows: at step 1, the label \textit{person} is predicted by deductive reasoning, and the attribute \texttt{human} is activated; at step 3, the label \textit{scientist} is generated by inductive reasoning.}
\label{Fig.framework}
\end{figure*}
\section{Related Work}
One main challenge for FET is how to exploit complex label dependencies in the large-scale label set. Previous studies typically use a predefined label hierarchy and co-occurrence structures estimated from data to enhance their models. To this end, \citet{ren2016afet_hier1, XuandBarbosa2018neural_hier2, DBLP:conf/ijcai/WuZMGH19_hier3, DBLP:conf/acl/ChenCD20_onto_hier4} design new loss functions to exploit label hierarchies. \citet{DBLP:conf/eacl/AbhishekAA17_hier5} enhance the label representation by sharing parameters. \citet{DBLP:conf/eacl/InuiRSS17_shimaoka_hier7, McCallumVVMR18_space1, DBLP:conf/emnlp/Lopez020_hyper} embed labels into a high-dimensional or a new space. Other studies exploit co-occurrence structures by limiting the label range during label set prediction~\citep{DBLP:conf/acl/RabinovichK17_core1}, enriching the label representation by introducing associated labels~\citep{xiong2019imposing_core2}, or requiring the latent label representation to reconstruct the co-occurrence structure~\citep{linandJi2019attentive_core3}. However, these methods require predefined label structures or statistics from training data, and are therefore difficult to extend to new entity types or domains.
The ultra fine-grained label set also leads to a data bottleneck and the long tail problem. In recent years, previous approaches have tried to tackle this problem by introducing zero/few-shot learning methods~\citep{ma2016label_few1, HuangMPJ16_open1, ZhouKTR18_open2, YuanD18_few2, obeidat2019description_few3, zhang2020mzet_few4, DBLP:conf/www/RenLZ20_zero}, using data augmentation with denoising strategies~\citep{DBLP:conf/kdd/RenHQVJH16_denoise3, DBLP:conf/naacl/OnoeD19_elmo, DBLP:conf/ijcai/ZhangLXZXHW20_denoise1, DBLP:conf/aaai/AliSLW20_denoise2}, or utilizing external knowledge bases~\citep{DBLP:conf/emnlp/CorroAGW15_KB, DBLP:conf/emnlp/DaiDLS19_el}.
In this paper, we propose the Label Reasoning Network, which is significantly different from previous methods because 1) by introducing deductive reasoning, LRN can capture extrinsic dependencies between labels in an end-to-end manner without predefined structures; 2) by introducing inductive reasoning, LRN can leverage intrinsic dependencies to predict long tail labels; and 3) through the sequence-to-set framework, LRN can consider both kinds of label dependencies simultaneously to jointly reason frequent and long tail labels.
\section{Label Reasoning Network for FET}
Figure~\ref{Fig.framework} illustrates the framework of \textit{Label Reasoning Network}. First, we encode entity mentions through a context-sensitive encoder, then sequentially generate entity labels via two label reasoning mechanisms: deductive reasoning for exploiting extrinsic dependencies and inductive reasoning for exploiting intrinsic dependencies. In our Seq2Set framework, the label dependency knowledge can be effectively modeled in the parameters of LRN, automatically learned from training data, and naturally exploited during the sequential label decoding process. In the following we describe these components in detail.
\subsection{Encoding}
For encoding, we form the input instance $\mathcal{X}$ as ``[CLS], $x_1$, ..., [$E_1$], $m_1$, ..., $m_k$, [$E_2$], ..., $x_n$" where [$E_1$] and [$E_2$] are entity markers, $m_i$ are mention words and $x_i$ are context words. We then feed $\mathcal{X}$ to BERT and obtain the source hidden states $\mathcal{H}=\{h_1,...,h_n\}$. Finally, the hidden vector of the [CLS] token is used as the sentence embedding $\bm{g}$.
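As a concrete illustration of this input format (the helper name and token strings below are our own illustration, not the released implementation), the marker insertion can be sketched as:

```python
def build_input(context_left, mention, context_right):
    """Assemble the FET input sequence:
    [CLS], left context, [E1], mention words, [E2], right context."""
    return (["[CLS]"] + context_left
            + ["[E1]"] + mention + ["[E2]"]
            + context_right)

# The mention "FBI agents" is wrapped by the entity markers [E1] and [E2].
toks = build_input(["They", "were", "arrested", "by"], ["FBI", "agents"], ["."])
```

In practice the assembled sequence would additionally be run through BERT's WordPiece tokenizer before encoding.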
\subsection{Deductive Reasoning for Extrinsic Dependencies}
This section describes how to capture extrinsic dependencies for label prediction via a deductive reasoning mechanism. To this end, the deductive reasoning-based decoder sequentially generates labels based on both context and previous labels, e.g., ``\textit{for his books}" + \textit{person} $\rightarrow$ \textit{writer} and ``\textit{record an album}" + \textit{person} $\rightarrow$ \textit{musician}. In this way, a label is decoded by considering both context-based prediction and previous labels-based prediction.
Concretely, we utilize an LSTM-based auto-regressive network as the decoder and obtain the decoder hidden states $\mathcal{S}=\{s_0,...,s_{k}\}$, where $k$ is the number of predicted labels. We first initialize $s_0$ using the sentence embedding $\bm{g}$; then, at each time step, two attention mechanisms, contextual attention and premise attention, are designed to capture context and label information for the next prediction.
\paragraph{Contextual Attention} is used to capture the context evidence for label prediction. For example, the context ``\textit{they theorize}" provides rich information for \textit{theorist} label. Specifically, at each time step $t$, contextual attention identifies relevant context by assigning a weight $\alpha_{ti}$ to each $\bm{h}_i$ in the source hidden state $\mathcal{H}$:
\begin{align}
e_{ti} &= \bm{v}_c^{T}tanh(\bm{W}_c \bm{s}_t + \bm{U}_c \bm{h}_i) \label{att1_e}
\\
\alpha_{ti} &= \frac{exp(e_{ti})}{\sum_{i=1}^{n}exp(e_{ti})} \label{att1_a}
\end{align}
where $\bm{W}_c$, $\bm{U}_c$, $\bm{v}_c$ are weight parameters and $\bm{s}_t$ is the hidden state of decoder at time step $t$. Then the context representation $\bm{c}_t$ is obtained by:
\begin{align}
\bm{c}_t &= \sum_{i=1}^{n}\alpha_{ti}\bm{h}_i \label{att1_c}
\end{align}
\paragraph{Premise Attention} exploits the dependencies between labels for next label prediction. For example, if \textit{person} has been generated, its hyponym label \textit{theorist} will be highly likely to be generated in context ``\textit{they theorize}". Concretely, at each time step $t$, premise attention captures the dependencies to previous labels by assigning a weight $\alpha_{tj}$ to each $\bm{s}_j$ of previous hidden states of decoder $\mathcal{S}_{<t}$:
\begin{align}
e_{tj} &= \bm{v}_p^{T}tanh(\bm{W}_p \bm{s}_t + \bm{U}_p\bm{s}_j) \label{att2_e} \\
\alpha_{tj} &= \frac{exp(e_{tj})}{\sum_{j=0}^{t-1}exp(e_{tj})} \label{att2_a}
\end{align}
where $\bm{W}_p$, $\bm{U}_p$, $\bm{v}_p$ are weight parameters. Then the previous label information $\bm{u}_t$ is obtained by:
\begin{align}
\bm{u}_t &= \sum_{j=0}^{t-1}\alpha_{tj}\bm{s}_j \label{att1_u}
\end{align}
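Contextual attention (Eqs.~\ref{att1_e}--\ref{att1_c}) and premise attention (Eqs.~\ref{att2_e}--\ref{att1_u}) share the same additive attention form and differ only in their keys. A minimal pure-Python sketch of that shared form (toy weight matrices; illustrative only, not the paper's implementation):

```python
import math

def additive_attention(s_t, keys, W, U, v):
    """Additive attention: e_i = v . tanh(W s_t + U k_i); alpha = softmax(e).
    Returns the attention-weighted sum of the keys and the weights alpha."""
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]

    Ws = matvec(W, s_t)
    scores = []
    for k in keys:
        Uk = matvec(U, k)
        scores.append(sum(vi * math.tanh(a + b) for vi, a, b in zip(v, Ws, Uk)))
    # numerically stable softmax over the scores
    mx = max(scores)
    exps = [math.exp(e - mx) for e in scores]
    Z = sum(exps)
    alphas = [e / Z for e in exps]
    dim = len(keys[0])
    ctx = [sum(a * k[i] for a, k in zip(alphas, keys)) for i in range(dim)]
    return ctx, alphas
```

Calling it with the encoder states $\mathcal{H}$ as keys yields $\bm{c}_t$; calling it with the previous decoder states $\mathcal{S}_{<t}$ yields $\bm{u}_t$.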
\paragraph{Label Prediction. }Given the context representation $\bm{c}_t$ and the previous label information $\bm{u}_t$, we use $\bm{m}_t=[\bm{c}_t+\bm{g}; \bm{u}_t+\bm{s}_t]$ as input, and calculate the probability distribution over label set $L$:
\begin{align}
\bm{s}_t &= {\rm{LSTM}}(\bm{s}_{t-1}, \bm{W}_b\bm{y}_{t-1}) \\
\bm{o}_t &= \bm{W}_o\bm{m}_t \\
\bm{y}_t &= softmax(\bm{o}_t+\bm{I}_t)
\end{align}
where $\bm{W}_o$ and $\bm{W}_b$ are weight parameters and we use the mask vector $\bm{I}_t \in \mathbb{R}^{L+1}$ \cite{YangSLMWW18_sgm} to prevent duplicate predictions.
\begin{align}
(\bm{I}_t)_i = \begin{cases}
-\inf &, l_i \in \mathcal{Y}^*_{t-1} \\
1 &, {\rm otherwise} \\
\end{cases}
\end{align}
where $\mathcal{Y}^*_{t-1}$ is the set of labels predicted before step $t$ and $l_i$ is the $i^{th}$ label in the label set $L$. The label with the maximum value in $\bm{y}_t$ is generated and used as the input for the next time step, until $[EOS]$ is generated.
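The effect of the mask vector $\bm{I}_t$ can be sketched as follows. One simplification we make: since adding the constant 1 to every unmasked logit shifts them all equally and leaves the softmax unchanged, the sketch only applies the $-\inf$ entries (function names are ours):

```python
import math

def masked_softmax(logits, predicted_ids):
    """Softmax over o_t + I_t: labels already generated (in predicted_ids)
    receive -inf so they get exactly zero probability, preventing duplicates."""
    NEG_INF = float("-inf")
    masked = [NEG_INF if i in predicted_ids else x for i, x in enumerate(logits)]
    mx = max(x for x in masked if x != NEG_INF)  # stable softmax over unmasked entries
    exps = [math.exp(x - mx) if x != NEG_INF else 0.0 for x in masked]
    Z = sum(exps)
    return [e / Z for e in exps]
```

For example, if label 0 was generated at an earlier step, `masked_softmax(o_t, {0})` redistributes all probability mass over the remaining labels.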
\subsection{Inductive Reasoning for Intrinsic Dependencies}
Deductive reasoning can effectively capture extrinsic dependencies. However, labels can also have intrinsic dependencies if they share attributes, e.g., \textit{theorist} and \textit{scientist} share the \texttt{scholar} attribute. To leverage intrinsic dependencies, LRN conducts inductive reasoning by associating labels with attributes via a bipartite attribute graph. A label will be generated if most of its attributes are activated. Instead of heuristically setting the number of attributes to be activated, we select labels based on their overall activation score from all attributes. By capturing such label-attribute relations, many long tail labels can be effectively predicted because they are usually related to non-long tail attributes.
To this end, we first design a bipartite attribute graph to represent attribute-label relations. Based on the bipartite attribute graph, at each time step, attributes will be activated based on the hidden state of the decoder, and new labels will be induced by reasoning over the activated attributes. For example, in Figure~\ref{Fig.framework} the predicted labels \textit{person}, \textit{theorist} and \textit{commander} correspondingly activate the attributes \texttt{human}, \texttt{scholar} and \texttt{expert}, and then the \textit{scientist} label will be activated via inductive reasoning based on these attributes.
\paragraph{Bipartite Attribute Graph (BAG).}\label{AG method} BAG $\mathcal{G}=\{V,E\}$ is designed to capture the relations between attributes and labels. Specifically, the nodes $V$ contain attribute nodes $V_a$ and label nodes $V_l$, and edges $E$ only exist between attribute nodes and label nodes, with the edge weight indicating the attribute-label relatedness. Attributes are represented using natural language words in the BAG. Figure~\ref{Fig.framework} shows a BAG where $V_a$ contains the words \{\texttt{scholar}, \texttt{expert}, \texttt{historian}, ...\} and $V_l$ contains all entity labels in the label set $\textit{L}$, including \{\textit{student}, \textit{musician}, \textit{scientist}, ...\}.
\paragraph{BAG Construction.}\label{collection} Because there are many labels and many attributes, we dynamically build a local BAG during decoding for each instance. In this way the BAG is very compact and the computation is very efficient~\citep{DBLP:journals/ai/ZupanBDB99_attribute_reduce}. In the local BAG, we collect attributes in two ways: (1) We mask the entity mention in the sentence and predict the [MASK] token using a masked language model (this paper uses BERT-base-uncased), and the non-stop words whose prediction scores are greater than a confidence threshold ${\theta}_c$ are used as attributes --- we denote them as context attributes. Since PLMs usually predict high-frequency words, these attributes are usually not long-tailed, which facilitates modeling dependencies between head and tail labels. This mask-prediction strategy is also used by~\citet{DBLP:conf/emnlp/XinZH0S18_put_back} for collecting additional semantic evidence of entity labels. (2) We directly segment the entity mention into words using Stanza\footnote{https://pypi.org/project/stanza/}, and all non-stop words are used as attributes --- we denote them as entity attributes. Figure~\ref{Fig.attributes} shows several attribute examples. Given the attributes, we compute the attribute-label relatedness (i.e. $E$ in $\mathcal{G}$) using the cosine similarity between their GloVe embeddings~\citep{DBLP:conf/emnlp/PenningtonSM14_glove}.
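A minimal sketch of the edge-weight computation (toy 2-dimensional vectors stand in for the GloVe embeddings; function names are ours):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def build_bag_edges(attr_vecs, label_vecs):
    """E[i][j]: relatedness between attribute i and label j,
    computed as the cosine similarity of their word embeddings."""
    return [[cosine(a, l) for l in label_vecs] for a in attr_vecs]
```

With real GloVe vectors, an attribute like \texttt{scholar} would receive a high edge weight to semantically close labels such as \textit{scientist} and a low weight to unrelated ones.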
\begin{figure}[!t]
\vspace{-0cm}
\setlength{\belowcaptionskip}{-0.3cm}
\centering
\includegraphics[width=0.48\textwidth]{figure/attributes.pdf}
\caption{Examples of attributes.}
\label{Fig.attributes}
\end{figure}
\paragraph{Reasoning over BAG.} At each time step, we activate attributes in BAG by calculating their similarities to the current hidden state of decoder $\bm{s}_t$. For the $i^{th}$ attribute node ${V_a}^{(i)}$, its activation score is:
\begin{align}
{score}_{V_a}^{(i)} &= ReLU(sim(\bm{W}_s \bm{s}_t, \bm{W}_a {V_a}^{(i)}))
\end{align}
where $\bm{W}_s$ is a weight parameter and $\bm{W}_a$ is the attribute embedding matrix (i.e., the word embeddings of attribute words). We use cosine similarity as $sim(\cdot,\cdot)$ and employ ReLU to activate attributes. Then we induce new labels by reasoning over the activated attributes as:
\begin{align}
{score}_{V_l}^{(j)} &= \sum_{i=1}^{n_a} {score}_{V_a}^{(i)} E_{ij}
\end{align}
where $n_a$ is the number of attributes, ${V_l}^{(j)}$ is the $j^{th}$ label node and $E_{ij}$ is the edge weight between them. Finally, a label will be generated if its activation score is greater than the similarity threshold ${\theta}_s$.
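The label induction step then reduces to propagating attribute activation scores over the edge weights and thresholding; a minimal sketch (illustrative only, with toy scores and weights):

```python
def induce_labels(attr_scores, E, theta_s):
    """score(label j) = sum_i score(attr i) * E[i][j];
    every label whose total activation exceeds theta_s is generated."""
    n_labels = len(E[0])
    label_scores = [sum(attr_scores[i] * E[i][j] for i in range(len(E)))
                    for j in range(n_labels)]
    generated = [j for j, s in enumerate(label_scores) if s > theta_s]
    return generated, label_scores
```

For instance, a strongly activated attribute connected to label 0 with high edge weight pushes label 0 above the threshold, while weakly connected labels stay below it.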
Note that our inductive reasoning and deductive reasoning are jointly modeled in the same decoder, i.e., they share the same decoder hidden state but with different label prediction process. Once deductive reasoning-based decoder generates \textit{[EOS]}, the label prediction stops. Finally, we combine the predicted labels of both deductive reasoning and inductive reasoning as the final FET results.
\section{Learning}
In FET, each instance is represented as \{$\mathcal{X}$, $\mathcal{Y}$\}, where $\mathcal{X}$ is ``[CLS], $x_1$, ..., [$E_1$], $m_1$, ..., $m_k$, [$E_2$],..., $x_n$" and $\mathcal{Y}=\{y_1,...,y_m\}$ is the golden label set. To learn our model, we design two losses: a set prediction loss for deductive reasoning-based decoding and a BAG loss for inductive reasoning-based decoding.
\paragraph{Set Prediction Loss.}
In FET, cross entropy loss is not appropriate because the prediction result is a label set, i.e., \{$y^{*}_1$, $y^{*}_2$, $y^{*}_3$\} and \{$y^{*}_3$, $y^{*}_2$, $y^{*}_1$\} should have the same loss. Therefore we measure the similarity of two label sets using the bipartite matching loss~\citep{DBLP:journals/corr/abs-2011-01675_matching_loss}. Given the golden label set $\mathcal{Y}=\{y_1,...,y_m\}$ and the generated label set ${\mathcal{Y}}^{*}=\{{y^{*}_1},...,{y^{*}_m}\}$, the matching loss $\mathcal{L}(ij)_{S}$ of $y_i$ and ${y^{*}_j}$ is calculated by Eq.~\ref{single_loss}; then we use the Hungarian Algorithm~\citep{kuhn1955hungarian_loss} to find the order of the golden label set, $\widetilde{\mathcal{Y}}=\{\widetilde{y}_1,...,\widetilde{y}_m\}$, that obtains the minimum matching loss $\mathcal{L}_{S}$:
\begin{align}
\mathcal{L}(ij)_{S} &= \textup{CE}(y_i, y^{*}_j) \label{single_loss} \\
\mathcal{L}_{S} &= \textup{CE}(\widetilde{\mathcal{Y}}, \mathcal{Y}^{*}) \label{total_loss}
\end{align}
where \textup{CE} is cross-entropy.
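For intuition, the order-invariance of this loss can be sketched with a brute-force search over gold-label orderings, which finds the same optimum as the Hungarian algorithm for small label sets (stdlib-only illustrative sketch; the real implementation would use an $O(m^3)$ assignment solver):

```python
import itertools
import math

def min_matching_loss(gold, pred_dists):
    """Order-invariant set loss: try every ordering of the gold label indices
    and keep the minimum total cross-entropy against the predicted
    distributions. Equivalent to Hungarian matching for small m."""
    best = float("inf")
    for perm in itertools.permutations(gold):
        cost = sum(-math.log(dist[g]) for g, dist in zip(perm, pred_dists))
        best = min(best, cost)
    return best
```

Because every permutation of the gold set is considered, \{$y_1$, $y_2$\} and \{$y_2$, $y_1$\} receive exactly the same loss, as required.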
\paragraph{BAG Loss.}
To make the model activate the correct labels, we add a supervision loss on the bipartite attribute graph:
\begin{align}
\mathcal{L}_{A} &= -\sum_{j=1}^{|\textit{L}|}{score}_{V_l}^{(j)} * y_j \\
y_j &= \begin{cases}
1 &, v_j \in \mathcal{Y} \\
-1 &, v_j \notin \mathcal{Y}
\end{cases}
\end{align}\normalsize
\paragraph{Final Loss.} The final loss is a combination of set loss and BAG loss:
\begin{align}
\mathcal{L} = \mathcal{L}_{S} + \lambda \mathcal{L}_{A}
\end{align}
where $\lambda$ is the relative weight of these two losses\footnote{ In our auxiliary experiments, we find that its impact is minor, so this paper empirically sets it to 1.}.
\section{Experiments}
\subsection{Settings}
\paragraph{Datasets} We conduct experiments on two standard fine-grained entity typing datasets\footnote{Released in https://github.com/uwnlp/open\_type}: Ultra-Fine as the primary dataset and OntoNotes as the complementary dataset. Ultra-Fine contains 6K manually-annotated examples, 2519 categories, and 5.4 labels per sample on average. Following~\citet{choi2018ultra_data1}, we use the same 2K/2K/2K train/dev/test splits and evaluate using macro precision, recall and F-score. The original OntoNotes dataset~\citep{ontonotes_data2} contains 25K/2K/9K train/dev/test data, 89 categories and 2.7 labels per sample on average, and \citet{choi2018ultra_data1} offer augmented training data with 2.3 labels per sample on average. We evaluate on both versions using the standard metrics: accuracy, macro F-score and micro F-score.
\paragraph{Baselines} For Ultra-Fine dataset, we compare with following baselines: \citet{DBLP:conf/naacl/OnoeD19_elmo} which offers two multi-classifiers using BERT and ELMo as encoder respectively, \citet{choi2018ultra_data1} which is a multi-classifier using GloVe+LSTM as encoder, \citet{xiong2019imposing_core2} which is a multi-classifier using GloVe+LSTM as encoder and exploits label co-occurrence via introducing associated labels to enrich the label representation, \citet{DBLP:conf/emnlp/Lopez020_hyper} which is a hyperbolic multi-classifier using GloVe. For OntoNotes dataset, in addition to the baselines for Ultra-Fine, we also compare with \citet{wang2020empirical} which offers a multi-classifier using BERT as encoder, \citet{linandJi2019attentive_core3} which offers a multi-classifier using ELMo as encoder and exploits label co-occurrence via requiring the latent representation to reconstruct the co-occurrence association and \citet{DBLP:conf/acl/ChenCD20_onto_hier4} which offers a multi-classifier using ELMo as encoder and exploits label hierarchy via designing a hierarchy-aware loss function.
\paragraph{Implementation} We use BERT-Base (uncased) \citep{DBLP:conf/naacl/DevlinCLT19_bert} as the encoder and the Adam optimizer \citep{DBLP:journals/corr/KingmaB14_adam} with a learning rate of 5e-5 for BERT and 1e-3 for the other parameters. The batch size is 32, the encoder hidden size is 768, the decoder hidden size is 868, the label embedding size is 100, and the dropout rate of the decoder is 0.6. The confidence threshold ${\theta}_c$ and the similarity threshold ${\theta}_s$ are both tuned on the dev set and set to 0.1 and 0.2, respectively.
We use the GloVe embedding \citep{DBLP:conf/emnlp/PenningtonSM14_glove} to represent the nodes of BAG and fix it while training.
\subsection{Overall Results}
\begin{table}[!t]
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\resizebox{.45\textwidth}{!}{
\begin{tabular}{lccc}
\Xhline{1.2pt}
\multicolumn{1}{l|}{\textbf{Model}} & \textbf{P} & \textbf{R} & \textbf{F1} \\ \hline
\multicolumn{4}{c}{without label dependency} \\ \hline
\multicolumn{1}{l|}{*\small\citet{choi2018ultra_data1}\normalsize} & 47.1 & 24.2 & 32.0 \\
\multicolumn{1}{l|}{*ELMo\small\citep{DBLP:conf/naacl/OnoeD19_elmo}\normalsize} & 51.5 & 33.0 & 40.2 \\
\multicolumn{1}{l|}{BERT\small\citep{DBLP:conf/naacl/OnoeD19_elmo}\normalsize} & 51.6 & 33.0 & 40.2 \\
\multicolumn{1}{l|}{BERT{[}in-house{]}} & 55.9 & 33.0 & 41.5 \\ \hline
\multicolumn{4}{c}{with label dependency} \\ \hline
\multicolumn{1}{l|}{*L\small{ABEL}\normalsize{GCN} \small\citep{xiong2019imposing_core2}\normalsize} & 50.3 & 29.2 & 36.9 \\
\multicolumn{1}{l|}{LRN \small w/o IR \normalsize} & \textbf{61.2} & 33.5 & 43.3 \\
\multicolumn{1}{l|}{LRN} & 54.5 & \textbf{38.9} & \textbf{45.4} \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Macro P/R/F1 results on Ultra-Fine test set. * means using augmented data. "without label dependency" methods formulated FET as multi-label classification without considering associations between labels. "with label dependency" methods leveraged associations between labels explicitly or implicitly.}
\label{tab:Ultra-Fine main result}
\end{table}
\begin{table*}[tbh!]
\centering
\resizebox{.9\textwidth}{!}{
\begin{tabular}{l|cccccccccccc}
\Xhline{1.2pt}
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{3}{c}{\textbf{Total}} & \multicolumn{3}{c}{\textbf{General}} & \multicolumn{3}{c}{\textbf{Fine}} & \multicolumn{3}{c}{\textbf{Ultra-Fine}} \\ \cline{2-13}
\multicolumn{1}{c|}{} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \textbf{F} \\ \hline
*\citet{choi2018ultra_data1} & 48.1 & 23.2 & \multicolumn{1}{c|}{31.3} & 60.3 & 61.6 & \multicolumn{1}{c|}{61.0} & 40.4 & 38.4 & \multicolumn{1}{c|}{39.4} & 42.8 & 8.8 & 14.6 \\
$\dagger$L\small{ABEL}\normalsize{GCN} \citep{xiong2019imposing_core2} & 49.3 & 28.1 & \multicolumn{1}{c|}{35.8} & 66.2 & 68.8 & \multicolumn{1}{c|}{67.5} & 43.9 & 40.7 & \multicolumn{1}{c|}{42.2} & 42.4 & 14.2 & 21.3 \\
HY Large \citep{DBLP:conf/emnlp/Lopez020_hyper} & 43.4 & 34.2 & \multicolumn{1}{c|}{38.2} & 61.4 & 73.9 & \multicolumn{1}{c|}{67.1} & 35.7 & 46.6 & \multicolumn{1}{c|}{40.4} & 36.5 & 19.9 & 25.7 \\
*ELMo \cite{DBLP:conf/naacl/OnoeD19_elmo} & 50.7 & 33.1 & \multicolumn{1}{c|}{40.1} & 66.9 & \textbf{80.7} & \multicolumn{1}{c|}{73.2} & 41.7 & 46.2 & \multicolumn{1}{c|}{43.8} & 45.6 & 17.4 & 25.2 \\
BERT \cite{DBLP:conf/naacl/OnoeD19_elmo} & 51.6 & 32.8 & \multicolumn{1}{c|}{40.1} & 67.4 & 80.6 & \multicolumn{1}{c|}{73.4} & 41.6 & 54.7 & \multicolumn{1}{c|}{47.3} & 46.3 & 15.6 & 23.4 \\ \hline
BERT[in-house] & 54.1 & 32.1 & \multicolumn{1}{c|}{40.3} & 68.8 & 79.2 & \multicolumn{1}{c|}{73.6} & 43.8 & \textbf{57.4} & \multicolumn{1}{c|}{49.7} & \textbf{50.7} & 14.6 & 22.6 \\
LRN \small w/o IR \normalsize & \textbf{60.7} & 32.5 & \multicolumn{1}{c|}{42.3} & \textbf{79.3} & 75.5 & \multicolumn{1}{c|}{\textbf{77.4}} & \textbf{59.6} & 44.8 & \multicolumn{1}{c|}{51.2} & 45.7 & 18.7 & 26.5 \\
LRN & 53.7 & \textbf{38.6} & \multicolumn{1}{c|}{\textbf{44.9}} & 77.8 & 76.4 & \multicolumn{1}{c|}{77.1} & 55.8 & 50.6 & \multicolumn{1}{c|}{\textbf{53.0}} & 43.4 & \textbf{26.0} & \textbf{32.5} \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Macro P/R/F1 of each label granularity on Ultra-Fine dev set, and long tail labels are mostly in the ultra-fine layer. * means using augmented data. $\dagger$ We adapt the results from \citet{DBLP:conf/emnlp/Lopez020_hyper}.}
\label{tab:Ultra-Fine layer score}
\end{table*}
\begin{table*}[ht!]
\centering
\resizebox{.9\textwidth}{!}{
\begin{tabular}{l|cccccccccccc}
\Xhline{1.2pt}
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{3}{c}{\textbf{Total}} & \multicolumn{3}{c}{\textbf{General}} & \multicolumn{3}{c}{\textbf{Fine}} & \multicolumn{3}{c}{\textbf{Ultra-Fine}} \\ \cline{2-13}
\multicolumn{1}{c|}{} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \textbf{F} \\ \hline
HY XLarge~\citep{DBLP:conf/emnlp/Lopez020_hyper} & / & / & \multicolumn{1}{c|}{/} & / & / & \multicolumn{1}{c|}{69.1} & / & / & \multicolumn{1}{c|}{39.7} & / & / & 26.1 \\
BERT[in-house] & 55.9 & 33.0 & \multicolumn{1}{c|}{41.5} & 69.7 & \textbf{81.6} & \multicolumn{1}{c|}{75.2} & 43.7 & \textbf{56.0} & \multicolumn{1}{c|}{49.1} & \textbf{53.5} & 15.5 & 24.0 \\
LRN \small w/o IR \normalsize & \textbf{61.2} & 33.5 & \multicolumn{1}{c|}{43.3} & \textbf{78.3} & 76.7 & \multicolumn{1}{c|}{\textbf{77.5}} & \textbf{61.6} & 44.1 & \multicolumn{1}{c|}{51.4} & 47.8 & 19.9 & 28.1 \\
LRN & 54.5 & \textbf{38.9} & \multicolumn{1}{c|}{\textbf{45.4}} & 77.4 & 76.7 & \multicolumn{1}{c|}{77.1} & 58.4 & 50.4 & \multicolumn{1}{c|}{\textbf{54.1}} & 43.5 & \textbf{26.4} & \textbf{32.8} \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Macro P/R/F1 of different label granularity on Ultra-Fine test set.}
\label{tab:layer score on UF Test}
\end{table*}
\begin{table*}[thb!]
\setlength{\belowcaptionskip}{-0 cm}
\centering
\resizebox{.9\textwidth}{!}{
\begin{tabular}{l|c|c|ccc|ccc|ccc}
\Xhline{1.2pt}
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Number of}}} & \multirow{2}{*}{\textbf{Category}} & \multirow{2}{*}{\textbf{Prediction}} & \multicolumn{3}{c|}{\textbf{Shot$=$0}} & \multicolumn{3}{c|}{\textbf{Shot$=$1}} & \multicolumn{3}{c}{\textbf{Shot$=$2}} \\ \cline{4-12}
\multicolumn{1}{c|}{} & & & Correct & Predicted & Prec. &Correct & Predicted & Prec. & Correct & Predicted & Prec. \\ \hline
BERT[in-house] & 293 & 5683 & 0 & 0 & / & 1 & 1 & 100.0\% & 9 & 66 & 13.6\% \\ \cline{1-1}
LRN \small w/o IR \normalsize & 330 & 5740 & 0 & 0 & / & 1 & 3 & 33.3\% & 15 & 28 & 53.6\% \\ \cline{1-1}
LRN & 997 & 7808 & 110 & 218 & 50.5\% & 67 & 252 & 26.6\% & 94 & 276 & 34.1\% \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Performance of the zero-shot, shot$=$1 and shot$=$2 label prediction. "Category" means how many kinds of types are predicted. "Prediction" means how many labels are generated.}
\label{tab:Ultra-Fine each shot}
\end{table*}
Table~\ref{tab:Ultra-Fine main result} shows the main results of all baselines and our method in two settings: LRN is the full model and LRN \small w/o IR \normalsize is the model without inductive reasoning. For fair comparison, we implement a baseline with the same settings as LRN but replace the decoder with the same multi-classifier as \citet{choi2018ultra_data1} --- BERT[in-house]. We can see that:
1) \textit{By performing label reasoning, LRN can effectively resolve the fine-grained entity typing problem.} Compared with previous methods, our method achieves state-of-the-art performance, with an F1 improvement from 40.2 to 45.4 on the test set. This verifies the necessity of exploiting label dependencies for FET and the effectiveness of our two label reasoning mechanisms. We believe this is because label reasoning helps FET by making the learning more data-efficient (i.e., labels can share knowledge) and the prediction of labels globally coherent.
2) \textit{Both deductive reasoning and inductive reasoning are useful for fine-grained label prediction.} Compared with BERT[in-house], LRN \small w/o IR \normalsize can achieve 4.3\% F1 improvement by exploiting extrinsic dependencies via deductive reasoning. LRN can further improve F1 from 43.3 to 45.4 by exploiting intrinsic dependencies via inductive reasoning. We believe this is because deductive reasoning and inductive reasoning are two fundamental but different mechanisms, therefore, modeling them simultaneously will better leverage label dependencies to predict labels.
3) \textit{Seq2Set is an effective framework to model, learn and exploit label dependencies in an end-to-end manner.} Compared with L\small{ABEL}\normalsize{GCN}~\citep{xiong2019imposing_core2}, which heuristically exploits the label co-occurrence structure, LRN achieves a significant performance improvement. We believe this is because neural networks have a strong ability to represent and learn label dependencies, and the end-to-end manner allows LRN to easily generalize to new scenarios.
\subsection{Effect on Long Tail Labels}
As described above, another advantage of our method is that it can resolve the long tail problem by decomposing long tail labels into common attributes and modeling label dependencies between head and tail labels. Because the finer the label granularity, the more likely a label is to be long-tailed, we report the performance at each label granularity on the dev set and test set, following previous work, in Table~\ref{tab:Ultra-Fine layer score} and Table~\ref{tab:layer score on UF Test}. Moreover, we report the performance of labels with shot$\leq$2 in Table~\ref{tab:Ultra-Fine each shot}. Based on these results, we find that:
1) \textit{LRN can effectively resolve the long tail label problem.} Compared to BERT[in-house], LRN significantly improves the F-score of ultra-fine granularity labels by 44\% (22.6 $\rightarrow$ 32.5) and recalls more fine-grained labels (14.6 $\rightarrow$ 26.0).
2) \textit{Both deductive reasoning and inductive reasoning are helpful for long tail label prediction, but with different underlying mechanisms: deductive reasoning exploits the extrinsic dependencies between labels, while inductive reasoning exploits the intrinsic dependencies between labels.} LRN \small w/o IR \normalsize cannot predict zero-shot labels because it resolves long tail labels by relating them to head labels, and therefore cannot predict unseen labels. By contrast, LRN can predict zero-shot labels via inductive reasoning because it can decompose labels into attributes. Furthermore, we find that LRN \small w/o IR \normalsize has higher precision on few-shot (shot$=$2) labels than BERT and LRN; we believe this is because inductive reasoning focuses on recalling more labels, which inevitably introduces some incorrect labels.
\subsection{Detailed Analysis}
\paragraph{Effect of Components}
\begin{table}[htb!]
\centering
\resizebox{.45\textwidth}{!}{
\begin{tabular}{l|ccc|ccc}
\Xhline{1.2pt}
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{3}{c|}{\textbf{Dev}} & \multicolumn{3}{c}{\textbf{Test}} \\ \cline{2-7}
\multicolumn{1}{c|}{} & \textbf{P} & \textbf{R} & \textbf{F} & \textbf{P} & \textbf{R} & \textbf{F} \\ \hline
\textbf{LRN} & 53.7 & 38.6 & 44.9 & 54.5 & 38.9 & 45.4 \\
-PreAtt & 53.1 & 39.3 & 45.2 & 52.6 & 39.5 & 45.1 \\
-PreAtt-ConAtt & 56.3 & 36.3 & 44.2 & 56.4 & 36.5 & 44.3 \\
-SetLoss & 46.8 & 40.7 & 43.5 & 47.8 & 40.7 & 44.0 \\ \hline
\textbf{LRN \small w/o IR \normalsize} & 60.7 & 32.5 & 42.3 & 61.2 & 33.5 & 43.3 \\
-PreAtt & 54.5 & 34.2 & 42.1 & 55.1 & 35.0 & 42.8 \\
-PreAtt-ConAtt & 55.2 & 32.9 & 41.3 & 56.2 & 34.3 & 42.6 \\
-SetLoss & 46.0 & 37.6 & 41.4 & 46.6 & 37.5 & 41.6 \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Ablation results on Ultra-Fine dataset: PreAtt denotes premise attention, ConAtt denotes contextual attention, and -SetLoss denotes replacing set prediction loss with cross-entropy loss.}
\label{Module Ablation}
\end{table}
\begin{figure}[htb!]
\setlength{\belowcaptionskip}{-0cm}
\centering
\subfigure[]{
\resizebox{0.23\textwidth}{!}{
\includegraphics{figure/bar.pdf}
\label{Fig.attribute}
}}
\subfigure[]{
\resizebox{0.23\textwidth}{!}{
\includegraphics{figure/plot.pdf}
\label{Fig.thred}
}}
\caption{(a) Ablation experiments of context attributes and entity attributes on Ultra-Fine dataset. (b) Performances of different confidence threshold ${\theta}_c$ and similarity threshold ${\theta}_s$ on dev set.}
\end{figure}
To evaluate the effect of different components, we report the ablation results in Table~\ref{Module Ablation}. We can see that: (1) Set prediction loss is effective: replacing it with cross-entropy loss leads to a significant performance drop. (2) Both contextual and premise attention mechanisms are important for Seq2Set generation.
\paragraph{Effect of the Attribute Set}
To explore the impact of entity attributes and context attributes in the BAG, Figure~\ref{Fig.attribute} shows the results of different attribute configurations. We can see that both kinds of attributes are useful: context attributes have high coverage but may be noisy, while entity attributes are the opposite. When both are introduced, the information in entity attributes can help disambiguate the context attributes, similar to the role of contextual information in word sense disambiguation. As a result, the two kinds of attributes complement each other. Figure~\ref{Fig.thred} shows the performance under different thresholds; we set the confidence threshold ${\theta}_c=0.1$ and the similarity threshold ${\theta}_s=0.2$ by tuning on the dev set. Notice that ${\theta}_s$ is the threshold for activating labels, and when ${\theta}_s=1$, the model is equivalent to LRN \small w/o IR \normalsize.
\paragraph{Results of OntoNotes}
\begin{table}[!t]
\setlength{\belowcaptionskip}{-0cm}
\centering
\resizebox{.45\textwidth}{!}{
\begin{tabular}{llccc}
\Xhline{1.2pt}
\multicolumn{1}{c}{\textbf{Encoder}} & \multicolumn{1}{c|}{\textbf{Model}} & \textbf{Acc} & \textbf{MaF} & \textbf{MiF} \\ \Xhline{1.1pt}
\multicolumn{5}{c}{\textbf{with augmentation}} \\ \Xhline{1.1pt}
\multirow{1}{*}{HYPER} & \multicolumn{1}{l|}{\citet{DBLP:conf/emnlp/Lopez020_hyper}} & 47.4 & 75.8 & 69.4 \\ \hline
\multirow{2}{*}{LSTM} & \multicolumn{1}{l|}{\citet{choi2018ultra_data1}} & 59.5 & 76.8 & 71.8 \\
& \multicolumn{1}{l|}{\citet{xiong2019imposing_core2}} & 59.6 & 77.8 & 72.2 \\ \hline
\multirow{2}{*}{ELMo} & \multicolumn{1}{l|}{*\citet{DBLP:conf/naacl/OnoeD19_elmo}} & 64.9 & 84.5 & 79.2 \\
& \multicolumn{1}{l|}{\cite{linandJi2019attentive_core3}} & 63.8 & 82.9 & 77.3 \\ \hline
\multirow{4}{*}{BERT} & \multicolumn{1}{l|}{\citet{wang2020empirical}} & 61.1 & 81.8 & 76.3 \\
& \multicolumn{1}{l|}{BERT {[}in-house{]}} & 62.2 & 83.4 & 78.8 \\
& \multicolumn{1}{l|}{LRN \small w/o IR \normalsize} & \textbf{66.1} & \textbf{84.8} & \textbf{80.1} \\
& \multicolumn{1}{l|}{LRN} & 64.5 & 84.5 & 79.3 \\ \Xhline{1.1pt}
\multicolumn{5}{c}{\textbf{without augmentation}} \\ \Xhline{1.1pt}
\multirow{2}{*}{ELMo} & \multicolumn{1}{l|}{*\citet{DBLP:conf/naacl/OnoeD19_elmo}} & 42.7 & 72.7 & 66.7 \\
& \multicolumn{1}{l|}{\citet{DBLP:conf/acl/ChenCD20_onto_hier4}} & \textbf{58.7} & 73.0 & 68.1 \\ \hline
\multirow{4}{*}{BERT} & \multicolumn{1}{l|}{\citet{DBLP:conf/naacl/OnoeD19_elmo}} & 51.8 & 76.6 & 69.1 \\
& \multicolumn{1}{l|}{BERT[in-house]} & 51.5 & 76.6 & 69.7 \\
& \multicolumn{1}{l|}{LRN \small w/o IR \normalsize} & 55.3 & 77.3 & 70.4 \\
& \multicolumn{1}{l|}{LRN} & 56.6 & \textbf{77.6} & \textbf{71.8} \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Results on the OntoNotes test set. Augmentation refers to the augmented data created by \citet{choi2018ultra_data1}, which contains 800K instances; few-shot labels are therefore rare in this setting. * indicates using additional features to enhance the label representation.}
\label{ontonotes results}
\end{table}
To verify the generality of our method, we further conduct experiments on OntoNotes and report results with and without augmentation data in Table~\ref{ontonotes results}. To embed labels in OntoNotes, we use the embedding of the last word of a label, e.g., \textit{/person/artist/director} is represented using the embedding of \textit{director}.
We can see that: 1) LRN still achieves the best performance in both settings, which verifies the robustness of our method. 2) Compared with Ultra-Fine, our method achieves a smaller improvement on OntoNotes. This is mainly because: First, OntoNotes has weaker label dependencies, since its label set is smaller (89 vs 2519 for Ultra-Fine) and most of its labels are coarse-grained. Second, most labels in OntoNotes are frequent labels with many training instances, so the long tail label problem is not serious. This also explains why LRN \small w/o IR \normalsize achieves better performance than LRN in the setting with augmentation data: the more training instances, the less the need for long tail prediction.
\subsection{Case Study}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.45\textwidth]{figure/heatmap.pdf}
\caption{Heat map of co-occurrence matrices of different models' prediction and ground truth. LRN w/o IR and LRN learn very similar co-occurrence matrices to Ground Truth.}
\label{Fig.heatmap}
\end{figure}
\begin{figure}[htb!]
\setlength{\belowcaptionskip}{-0cm}
\centering
\includegraphics[width=0.48\textwidth]{figure/case.pdf}
\caption{Cases of prediction results.}
\label{Fig.case}
\end{figure}
To intuitively present the learned label dependencies, Figure~\ref{Fig.heatmap} shows the label co-occurrence matrices of different models' predictions and of the ground truth. We can see that both LRN and LRN \small w/o IR \normalsize accurately learn the label dependencies. Figure~\ref{Fig.case} shows some prediction cases and demonstrates that deductive and inductive reasoning have quite different underlying mechanisms and predict quite different labels.
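The co-occurrence matrices visualized in the heat map can be computed directly from label sets. A minimal, dependency-free sketch (the function name and inputs are illustrative, not the authors' analysis code):

```python
from itertools import combinations

def cooccurrence(label_sets, vocab):
    """Label co-occurrence matrix: entry (i, j) counts how often labels i
    and j appear together in one predicted (or gold) label set."""
    idx = {label: i for i, label in enumerate(vocab)}
    mat = [[0] * len(vocab) for _ in vocab]
    for labels in label_sets:
        for a, b in combinations(sorted(set(labels)), 2):
            mat[idx[a]][idx[b]] += 1
            mat[idx[b]][idx[a]] += 1
    return mat
```

Row-normalizing such a matrix and plotting it side by side for predictions and ground truth gives the qualitative comparison shown in the figure.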
\section{Conclusions}
This paper proposes the \textit{Label Reasoning Network}, which uniformly models, learns and reasons about complex label dependencies in a sequence-to-set, end-to-end manner. LRN designs two label reasoning mechanisms for effective decoding: deductive reasoning to exploit extrinsic dependencies and inductive reasoning to exploit intrinsic dependencies. Experiments show that LRN can effectively cope with the massive label set of FET. And because our method uses no predefined structures, it can be easily generalized to new datasets and applied to other multi-label classification tasks.
\section{Acknowledgments}
This work is supported by the National Key Research and Development Program of China (No. 2020AAA0106400), the National Natural Science Foundation of China under Grants no.U1936207 and 62106251, Beijing Academy of Artificial Intelligence (BAAI2019QN0502), and in part by the Youth Innovation Promotion Association CAS(2018141).
\bibliographystyle{acl_natbib}
\section{Introduction}
Fine-grained entity typing (FET) aims to classify entity mentions to a fine-grained semantic label set, e.g., classify ``\textit{FBI agents}" in ``\textit{They were arrested by \textbf{FBI agents}.}" as \{\textit{organization, administration, force, agent, police}\}. By providing fine-grained semantic labels, FET is critical for entity recognition~\citep{DBLP:conf/acl/LinLHS19_icip1, DBLP:conf/emnlp/LinLHSDJ19_icip2, DBLP:conf/emnlp/LinLTHSWY20_icip3, DBLP:conf/aaai/ZhangLH0LWY21_icip4, zhang-etal-2021-de} and can benefit many NLP tasks, such as relation extraction~\cite{DBLP:conf/eacl/SchutzeYA17_RE_downstream1, DBLP:conf/acl/ZhangHLJSL19_RE_downstream2}, entity linking~\cite{DBLP:conf/aaai/OnoeD20_EL_downstream4} and question answering~\cite{DBLP:conf/emnlp/YavuzGSSY16_QA_downstream3}.
\begin{figure}[htb!]
\setlength{\belowcaptionskip}{-0.5cm}
\centering
\subfigure[Extrinsic dependency]{
\resizebox{0.22\textwidth}{!}{
\includegraphics{figure/introduction_a.pdf}
\label{Fig.introduction(a)}
}}
\subfigure[Intrinsic dependency]{
\resizebox{0.22\textwidth}{!}{
\includegraphics{figure/introduction_b.pdf}
\label{Fig.introduction(b)}
}}
\subfigure[Label reasoning process]{
\resizebox{0.48\textwidth}{!}{
\includegraphics{figure/introduction_c.pdf}
\label{Fig.introduction(c)}
}}
\caption{Examples of deductive reasoning based on the extrinsic dependency and inductive reasoning based on the intrinsic dependency, where the labels \textit{person}, \textit{theorist} and \textit{commander} are deduced respectively, and the label \textit{scientist} is induced from the attributes \{\texttt{expert}, \texttt{scholar}\}.}
\label{Fig.introduction}
\end{figure}
The fundamental challenge of FET comes from its large-scale, fine-grained entity label set, which leads to significant differences between FET and conventional entity typing. First, due to the massive label set, it is impossible to recognize each entity label independently without considering their dependencies. For this, existing approaches use predefined label hierarchies~\citep{ren2016afet_hier1, DBLP:conf/eacl/InuiRSS17_shimaoka_hier7, DBLP:conf/eacl/AbhishekAA17_hier5, DBLP:conf/eacl/SchutzeWK17_hier8, XuandBarbosa2018neural_hier2, DBLP:conf/ijcai/WuZMGH19_hier3, DBLP:conf/acl/ChenCD20_onto_hier4, ren2020fine_inference_hier6} or label co-occurrence statistics from training data~\citep{DBLP:conf/acl/RabinovichK17_core1, xiong2019imposing_core2, linandJi2019attentive_core3} as external constraints. Unfortunately, these label structures or statistics are difficult to obtain when transferring to new scenarios. Second, because of the fine-grained, large-scale label set, many long tail labels come with only a few or even no training instances. For example, in the Ultra-Fine dataset~\citep{choi2018ultra_data1}, $>$80\% of entity labels have $<$5 instances and, more seriously, 25\% of labels never appear in the training data. The training data thus provide very limited direct information for these labels, and previous methods commonly fail to recognize such long-tailed labels.
Fortunately, the label dependencies implicitly entailed in the data provide critical knowledge for tackling the above challenges. Specifically, dependencies between labels exist extrinsically or intrinsically. On the one hand, extrinsic dependencies reflect the \emph{direct} connections between labels, which partially appear in the form of label hierarchies and co-occurrence. For example, in Figure~\ref{Fig.introduction(a)} the labels \textit{person}, \textit{musician} and \textit{composer} have extrinsic dependencies because they form a three-level taxonomy. Furthermore, \textit{singer} and \textit{composer} also have an extrinsic dependency because they often co-occur. On the other hand, intrinsic dependencies entail the \emph{indirect} connections between labels through their underlying attributes. For the example in Figure~\ref{Fig.introduction(b)}, the labels \textit{theorist} and \textit{scientist} share the same underlying attribute \texttt{scholar}. Such intrinsic dependencies provide an effective way to tackle long tail labels, because many long tail labels are actually composed of non-long tail attributes, which can be summarized from non-long tail labels.
To this end, this paper proposes the \textit{Label Reasoning Network (LRN)}, which uniformly models, learns and reasons about both extrinsic and intrinsic label dependencies without being given any predefined label structures. Specifically, LRN utilizes an auto-regressive network to conduct deductive reasoning and a bipartite attribute graph to conduct inductive reasoning between labels. Both mechanisms are jointly applied to sequentially generate fine-grained labels in an end-to-end, sequence-to-set manner. Figure~\ref{Fig.introduction(c)} shows several examples. To capture extrinsic dependencies, LRN introduces deductive reasoning (i.e., drawing a conclusion based on premises) between labels, and formulates it using an auto-regressive network that predicts labels based on both the context and previous labels. For example, given the previously-generated label \textit{person} for the mention \textit{they}, as well as the context \textit{they theorize}, LRN will deduce the new label \textit{theorist} based on the extrinsic dependency between \textit{person} and \textit{theorist} derived from data. For intrinsic dependencies, LRN introduces inductive reasoning (i.e., generalizing from gathered information to a conclusion), and utilizes a bipartite attribute graph to reason about labels based on the currently activated attributes of previous labels. For example, if the attributes \{\texttt{expert}, \texttt{scholar}\} have been activated, LRN will induce the new label \textit{scientist} based on the attribute-label relations. Consequently, by decomposing labels into attributes and associating long tail labels with frequent labels, LRN can also effectively resolve the long tail label problem by leveraging their non-long tail attributes. By jointly leveraging extrinsic and intrinsic dependencies via deductive and inductive reasoning, LRN can effectively handle the massive label set of FET.
Generally, our main contributions are:
\begin{itemize}[leftmargin=0.6cm,topsep=0.1cm]
\setlength{\itemsep}{0cm}
\setlength{\parskip}{0.1cm}
\item We propose \textit{Label Reasoning Network}, which uniformly models, automatically learns and effectively reasons the complex dependencies between labels in an end-to-end manner.
\item To capture extrinsic dependencies, LRN utilizes deductive reasoning to sequentially reason labels via an auto-regressive network. In this way, extrinsic dependencies are discovered and exploited without predefined label structures.
\item To capture intrinsic dependencies, LRN utilizes inductive reasoning to reason labels via a bipartite attribute graph. By decomposing labels into attributes and associating long-tailed labels with frequent attributes, LRN can effectively reason long-tailed and even zero-shot labels.
\end{itemize}
We conduct experiments on standard Ultra-Fine~\citep{choi2018ultra_data1} and OntoNotes~\citep{ontonotes_data2} dataset. Experiments show that our method achieves new state-of-the-art performance: a 13\% overall F1 improvement and a 44\% F1 improvement in the ultra-fine granularity.\footnote{Our source codes are openly available at \href{https://github.com/loriqing/Label-Reasoning-Network}{https://github.com/loriqing/Label-Reasoning-Network}}
\begin{figure*}[tbh!]
\setlength{\belowcaptionskip}{-0cm}
\centering
\includegraphics[width=0.88\textwidth]{figure/framework.pdf}
\caption{Overview of the process for LRN which contains an encoder, a deductive reasoning-based decoder and an inductive reasoning-based decoder. The figure shows: at step 1, the label \textit{person} is predicted by deductive reasoning, and the attribute \texttt{human} is activated; at step 3, the label \textit{scientist} is generated by inductive reasoning.}
\label{Fig.framework}
\end{figure*}
\section{Related Work}
One main challenge for FET is how to exploit the complex label dependencies in a large-scale label set. Previous studies typically use a predefined label hierarchy or co-occurrence structures estimated from data to enhance their models. To this end, \citet{ren2016afet_hier1, XuandBarbosa2018neural_hier2, DBLP:conf/ijcai/WuZMGH19_hier3, DBLP:conf/acl/ChenCD20_onto_hier4} design new loss functions to exploit label hierarchies. \citet{DBLP:conf/eacl/AbhishekAA17_hier5} enhance the label representation by sharing parameters. \citet{DBLP:conf/eacl/InuiRSS17_shimaoka_hier7, McCallumVVMR18_space1, DBLP:conf/emnlp/Lopez020_hyper} embed labels into a high-dimensional or a new space. Studies exploiting co-occurrence structures include limiting the label range during label set prediction~\citep{DBLP:conf/acl/RabinovichK17_core1}, enriching the label representation by introducing associated labels~\citep{xiong2019imposing_core2}, and requiring the latent label representation to reconstruct the co-occurrence structure~\citep{linandJi2019attentive_core3}. However, these methods require predefined label structures or statistics from training data, and are therefore difficult to extend to new entity types or domains.
The ultra fine-grained label set also leads to a data bottleneck and the long tail problem. In recent years, some approaches have tried to tackle this problem by introducing zero/few-shot learning methods~\citep{ma2016label_few1, HuangMPJ16_open1, ZhouKTR18_open2, YuanD18_few2, obeidat2019description_few3, zhang2020mzet_few4, DBLP:conf/www/RenLZ20_zero}, using data augmentation with denoising strategies~\citep{DBLP:conf/kdd/RenHQVJH16_denoise3, DBLP:conf/naacl/OnoeD19_elmo, DBLP:conf/ijcai/ZhangLXZXHW20_denoise1, DBLP:conf/aaai/AliSLW20_denoise2}, or utilizing external knowledge~\citep{DBLP:conf/emnlp/CorroAGW15_KB, DBLP:conf/emnlp/DaiDLS19_el}.
In this paper, we propose Label Reasoning Network, which is significantly different from previous methods because 1) by introducing deductive reasoning, LRN can capture extrinsic dependencies between labels in an end-to-end manner without predefined structures; 2) by introducing inductive reasoning, LRN can leverage intrinsic dependencies to predict long tail labels; 3) Through the sequence-to-set framework, LRN can consider two kinds of label dependencies simultaneously to jointly reason frequent and long tail labels.
\section{Label Reasoning Network for FET}
Figure~\ref{Fig.framework} illustrates the framework of the \textit{Label Reasoning Network}. First, we encode entity mentions through a context-sensitive encoder, then sequentially generate entity labels via two label reasoning mechanisms: deductive reasoning for exploiting extrinsic dependencies and inductive reasoning for exploiting intrinsic dependencies. In our Seq2Set framework, label dependency knowledge is effectively modeled in the parameters of LRN, automatically learned from training data, and naturally exploited during the sequential label decoding process. In the following, we describe these components in detail.
\subsection{Encoding}
For encoding, we form the input instance $\mathcal{X}$ as ``[CLS], $x_1$, ..., [$E_1$], $m_1$, ..., $m_k$, [$E_2$], ..., $x_n$", where [$E_1$] and [$E_2$] are entity markers, $m$ is a mention word and $x$ is a context word. We then feed $\mathcal{X}$ to BERT and obtain the source hidden states $\mathcal{H}=\{h_1,...,h_n\}$. Finally, the hidden vector of the [CLS] token is used as the sentence embedding $\bm{g}$.
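As an illustrative sketch (not the authors' preprocessing code), the marker-augmented input sequence can be built as follows; the helper name and whitespace tokenization are simplifying assumptions, since the real model uses BERT's subword tokenizer:

```python
def format_instance(context_left, mention, context_right):
    """Build the marker-augmented input sequence described above:
    [CLS], left context, [E1], mention words, [E2], right context."""
    tokens = ["[CLS]"]
    tokens += context_left.split()
    tokens += ["[E1]"] + mention.split() + ["[E2]"]
    tokens += context_right.split()
    return tokens
```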
\subsection{Deductive Reasoning for Extrinsic Dependencies}
This section describes how to capture extrinsic dependencies for label prediction via the deductive reasoning mechanism. To this end, the deductive reasoning-based decoder sequentially generates labels based on both the context and previous labels, e.g., ``\textit{for his books}" + \textit{person} $\rightarrow$ \textit{writer} and ``\textit{record an album}" + \textit{person} $\rightarrow$ \textit{musician}. In this way, each label is decoded by considering both context-based and previous label-based predictions.
Concretely, we utilize an LSTM-based auto-regressive network as the decoder and obtain the decoder hidden states $\mathcal{S}=\{s_0,...,s_{k}\}$, where $k$ is the number of predicted labels. We first initialize $s_0$ with the sentence embedding $\bm{g}$; then, at each time step, two attention mechanisms -- contextual attention and premise attention -- capture context and label information for the next prediction.
\paragraph{Contextual Attention} is used to capture the context evidence for label prediction. For example, the context ``\textit{they theorize}" provides rich information for \textit{theorist} label. Specifically, at each time step $t$, contextual attention identifies relevant context by assigning a weight $\alpha_{ti}$ to each $\bm{h}_i$ in the source hidden state $\mathcal{H}$:
\begin{align}
e_{ti} &= \bm{v}_c^{T}tanh(\bm{W}_c \bm{s}_t + \bm{U}_c \bm{h}_i) \label{att1_e}
\\
\alpha_{ti} &= \frac{exp(e_{ti})}{\sum_{i=1}^{n}exp(e_{ti})} \label{att1_a}
\end{align}
where $\bm{W}_c$, $\bm{U}_c$, $\bm{v}_c$ are weight parameters and $\bm{s}_t$ is the hidden state of decoder at time step $t$. Then the context representation $\bm{c}_t$ is obtained by:
\begin{align}
\bm{c}_t &= \sum_{i=1}^{n}\alpha_{ti}\bm{h}_i \label{att1_c}
\end{align}
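As a dependency-free sketch, Eqs.~(\ref{att1_e})--(\ref{att1_c}) amount to the following, with plain-Python lists standing in for tensors and illustrative parameter shapes (this is not the authors' implementation):

```python
import math

def additive_attention(s_t, H, W_c, U_c, v_c):
    """Additive attention over source states H given decoder state s_t:
    e_i = v_c . tanh(W_c s_t + U_c h_i), alpha = softmax(e),
    c_t = sum_i alpha_i h_i."""
    def matvec(M, x):
        return [sum(m * xi for m, xi in zip(row, x)) for row in M]

    Ws = matvec(W_c, s_t)
    scores = []
    for h in H:
        Uh = matvec(U_c, h)
        t = [math.tanh(a + b) for a, b in zip(Ws, Uh)]
        scores.append(sum(v * ti for v, ti in zip(v_c, t)))
    m = max(scores)                       # stabilized softmax
    exps = [math.exp(e - m) for e in scores]
    z = sum(exps)
    alphas = [e / z for e in exps]
    c_t = [sum(a * h[d] for a, h in zip(alphas, H))
           for d in range(len(H[0]))]
    return alphas, c_t
```

The premise attention of Eqs.~(\ref{att2_e})--(\ref{att1_u}) has the same shape, with the previous decoder states $\mathcal{S}_{<t}$ in place of $\mathcal{H}$.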
\paragraph{Premise Attention} exploits the dependencies between labels for next label prediction. For example, if \textit{person} has been generated, its hyponym label \textit{theorist} will be highly likely to be generated in context ``\textit{they theorize}". Concretely, at each time step $t$, premise attention captures the dependencies to previous labels by assigning a weight $\alpha_{tj}$ to each $\bm{s}_j$ of previous hidden states of decoder $\mathcal{S}_{<t}$:
\begin{align}
e_{tj} &= \bm{v}_p^{T}tanh(\bm{W}_p \bm{s}_t + \bm{U}_p\bm{s}_j) \label{att2_e} \\
\alpha_{tj} &= \frac{exp(e_{tj})}{\sum_{j=0}^{t-1}exp(e_{tj})} \label{att2_a}
\end{align}
where $\bm{W}_p$, $\bm{U}_p$, $\bm{v}_p$ are weight parameters. Then the previous label information $\bm{u}_t$ is obtained by:
\begin{align}
\bm{u}_t &= \sum_{j=0}^{t-1}\alpha_{tj}\bm{s}_j \label{att1_u}
\end{align}
\paragraph{Label Prediction. }Given the context representation $\bm{c}_t$ and the previous label information $\bm{u}_t$, we use $\bm{m}_t=[\bm{c}_t+\bm{g}; \bm{u}_t+\bm{s}_t]$ as input, and calculate the probability distribution over label set $L$:
\begin{align}
\bm{s}_t &= {\rm{LSTM}}(\bm{s}_{t-1}, \bm{W}_b\bm{y}_{t-1}) \\
\bm{o}_t &= \bm{W}_o\bm{m}_t \\
\bm{y}_t &= softmax(\bm{o}_t+\bm{I}_t)
\end{align}
where $\bm{W}_o$ and $\bm{W}_b$ are weight parameters, and we use the mask vector $\bm{I}_t \in \mathbb{R}^{|L|+1}$ \cite{YangSLMWW18_sgm} to prevent duplicate predictions.
\begin{align}
(\bm{I}_t)_i = \begin{cases}
-\inf &, l_i \in \mathcal{Y}^*_{t-1} \\
1 &, {\rm otherwise} \\
\end{cases}
\end{align}
where $\mathcal{Y}^*_{t-1}$ is the set of labels predicted before step $t$ and $l_i$ is the $i^{th}$ label in the label set $L$. The label with the maximum value in $\bm{y}_t$ is generated and used as the input for the next time step, until $[EOS]$ is generated.
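A minimal sketch of this masked prediction step (function names are illustrative, not the authors' code). It uses $-\infty$ for masked entries and $0$ for the rest; since softmax is invariant to a constant shift, this yields the same distribution as adding $1$ to unmasked entries:

```python
import math

def masked_softmax(logits, predicted_ids):
    """Duplicate-prevention mask I_t: labels already generated in
    previous steps receive -inf before the softmax, so their
    probability becomes exactly zero."""
    masked = [-math.inf if i in predicted_ids else x
              for i, x in enumerate(logits)]
    m = max(masked)
    exps = [math.exp(x - m) for x in masked]
    z = sum(exps)
    return [e / z for e in exps]

def next_label(logits, predicted_ids):
    # greedy pick over the masked distribution
    probs = masked_softmax(logits, predicted_ids)
    return max(range(len(probs)), key=probs.__getitem__)
```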
\subsection{Inductive Reasoning for Intrinsic Dependencies}
Deductive reasoning can effectively capture extrinsic dependencies. However, labels can also have intrinsic dependencies if they share attributes, e.g., \textit{theorist} and \textit{scientist} share the \texttt{scholar} attribute. To leverage intrinsic dependencies, LRN conducts inductive reasoning by associating labels with attributes via a bipartite attribute graph. A label will be generated if most of its attributes are activated. Instead of heuristically setting the number of attributes to be activated, we select labels based on their overall activation score over all attributes. By capturing such label-attribute relations, many long tail labels can be effectively predicted because they are usually related to non-long tail attributes.
To this end, we first design a bipartite attribute graph to represent attribute-label relations. Based on the bipartite attribute graph, at each time step, attributes will be activated based on the hidden state of decoder, and new labels will be inducted by reasoning over the activated attributes. For example, in Figure~\ref{Fig.framework} the predicted labels \textit{person}, \textit{theorist} and \textit{commander} will correspondingly activate the attributes \texttt{human}, \texttt{scholar} and \texttt{expert}, and then the \textit{scientist} label will be activated via inductive reasoning based on these attributes.
\paragraph{Bipartite Attribute Graph (BAG).}\label{AG method} The BAG $\mathcal{G}=\{V,E\}$ is designed to capture the relations between attributes and labels. Specifically, the nodes $V$ consist of attribute nodes $V_a$ and label nodes $V_l$, and edges $E$ only exist between attribute nodes and label nodes, with the edge weight indicating the attribute-label relatedness. Attributes are represented as natural language words in the BAG. Figure~\ref{Fig.framework} shows a BAG where $V_a$ contains the words \{\texttt{scholar}, \texttt{expert}, \texttt{historian}, ...\} and $V_l$ contains all entity labels in the label set $\textit{L}$, including \{\textit{student}, \textit{musician}, \textit{scientist}, ...\}.
\paragraph{BAG Construction.}\label{collection} Because there are many labels and many attributes, we dynamically build a local BAG during decoding for each instance. In this way the BAG is very compact and the computation is very efficient~\citep{DBLP:journals/ai/ZupanBDB99_attribute_reduce}. For the local BAG, we collect attributes in two ways: (1) We mask the entity mention in the sentence and predict the [MASK] token using a masked language model (this paper uses BERT-base-uncased); the non-stop words whose prediction scores are greater than a confidence threshold ${\theta}_c$ are used as attributes --- we denote them as context attributes. Since the PLM usually predicts high-frequency words, these attributes are usually not long-tailed, which facilitates modeling dependencies between head and tail labels. This mask-prediction strategy is also used by~\citet{DBLP:conf/emnlp/XinZH0S18_put_back} for collecting additional semantic evidence of entity labels. (2) We directly segment the entity mention into words using Stanza\footnote{https://pypi.org/project/stanza/}, and all non-stop words are used as attributes --- we denote them as entity attributes. Figure~\ref{Fig.attributes} shows several attribute examples. Given the attributes, we compute the attribute-label relatedness (i.e., $E$ in $\mathcal{G}$) as the cosine similarity between their GloVe embeddings~\citep{DBLP:conf/emnlp/PenningtonSM14_glove}.
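The two collection steps above can be sketched as follows. This is a toy stand-in, not the authors' pipeline: \texttt{mlm\_scores} is a word-to-score dictionary standing in for real masked-LM output, and the function name and inputs are illustrative:

```python
def collect_attributes(mlm_scores, stop_words, theta_c, mention_words):
    """Toy sketch of local-BAG attribute collection."""
    # (1) context attributes: confident, non-stop MLM predictions
    #     for the [MASK]-ed entity mention
    context_attrs = [w for w, s in mlm_scores.items()
                     if s > theta_c and w not in stop_words]
    # (2) entity attributes: non-stop words of the mention itself
    entity_attrs = [w for w in mention_words if w not in stop_words]
    return context_attrs, entity_attrs
```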
\begin{figure}[!t]
\vspace{-0cm}
\setlength{\belowcaptionskip}{-0.3cm}
\centering
\includegraphics[width=0.48\textwidth]{figure/attributes.pdf}
\caption{Examples of attributes.}
\label{Fig.attributes}
\end{figure}
\paragraph{Reasoning over BAG.} At each time step, we activate attributes in BAG by calculating their similarities to the current hidden state of decoder $\bm{s}_t$. For the $i^{th}$ attribute node ${V_a}^{(i)}$, its activation score is:
\begin{align}
{score}_{V_a}^{(i)} &= ReLU(sim(\bm{W}_s \bm{s}_t, \bm{W}_a {V_a}^{(i)}))
\end{align}
where $\bm{W}_s$ is the weight parameter, $\bm{W}_a$ is the attribute embedding (i.e., word embedding of attribute words). We use cosine distance to measure similarity and employ ReLU to activate attributes. Then we induce new labels by reasoning over the activated attributes as:
\begin{align}
{score}_{V_l}^{(j)} &= \sum_{i=1}^{n_a} {score}_{V_a}^{(i)} E_{ij}
\end{align}
where $n_a$ is the number of attributes, ${V_l}^{(j)}$ is the $j^{th}$ label node and $E_{ij}$ is the edge weight between them. Finally, a label is generated if its activation score is greater than a similarity threshold ${\theta}_s$.
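The reasoning step over the BAG can be sketched as follows; toy vectors stand in for the projected decoder state $\bm{W}_s\bm{s}_t$ and the attribute embeddings, and the function names are illustrative rather than the authors' code:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def induce_labels(state, attr_vecs, edge_w, labels, theta_s):
    """Reasoning over the BAG: activate each attribute by
    ReLU(cos(state, attr)), propagate the activations over edge weights
    to score each label, and emit labels above threshold theta_s."""
    act = [max(0.0, cosine(state, a)) for a in attr_vecs]
    induced = []
    for j, label in enumerate(labels):
        score = sum(act[i] * edge_w[i][j] for i in range(len(attr_vecs)))
        if score > theta_s:
            induced.append(label)
    return induced
```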
Note that our inductive reasoning and deductive reasoning are jointly modeled in the same decoder, i.e., they share the same decoder hidden state but have different label prediction processes. Once the deductive reasoning-based decoder generates \textit{[EOS]}, label prediction stops. Finally, we combine the labels predicted by deductive reasoning and inductive reasoning as the final FET result.
\section{Learning}
In FET, each instance is represented as \{$\mathcal{X}$, $\mathcal{Y}$\}, where $\mathcal{X}$ is ``[CLS], $x_1$, ..., [$E_1$], $m_1$, ..., $m_k$, [$E_2$],..., $x_n$" and $\mathcal{Y}=\{y_1,...,y_m\}$ is the gold label set. To learn our model, we design two losses: a set prediction loss for deductive reasoning-based decoding and a BAG loss for inductive reasoning-based decoding.
\paragraph{Set Prediction Loss.}
In FET, cross-entropy loss is not appropriate because the prediction result is a label set, i.e., \{$y^{*}_1$, $y^{*}_2$, $y^{*}_3$\} and \{$y^{*}_3$, $y^{*}_2$, $y^{*}_1$\} should have the same loss. Therefore we measure the similarity of two label sets using the bipartite matching loss~\citep{DBLP:journals/corr/abs-2011-01675_matching_loss}. Given the gold label set $\mathcal{Y}=\{y_1,...,y_m\}$ and the generated label set ${\mathcal{Y}}^{*}=\{{y^{*}_1},...,{y^{*}_m}\}$, the matching loss $\mathcal{L}(ij)_{S}$ of $y_i$ and ${y^{*}_j}$ is calculated by Eq.~\ref{single_loss}; we then use the Hungarian algorithm~\citep{kuhn1955hungarian_loss} to reorder the gold label set as $\widetilde{\mathcal{Y}}=\{\widetilde{y}_1,...,\widetilde{y}_m\}$ so as to obtain the minimum matching loss $\mathcal{L}_{S}$:
\begin{align}
\mathcal{L}(ij)_{S} &= \textup{CE}(y_i, y^{*}_j) \label{single_loss} \\
\mathcal{L}_{S} &= \textup{CE}(\widetilde{\mathcal{Y}}, \mathcal{Y}^{*}) \label{total_loss}
\end{align}
where \textup{CE} is cross-entropy.
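The order-invariance of this loss can be sketched as follows. Brute-force search over permutations stands in for the Hungarian algorithm here (feasible only for tiny label sets, but it returns the same minimum); the function name and inputs are illustrative:

```python
import itertools
import math

def set_prediction_loss(gold_labels, pred_dists):
    """Minimum bipartite-matching set loss: the loss is invariant to the
    order of gold labels, so take the cheapest alignment of gold labels
    to decoding steps."""
    def ce(gold, probs):
        # cross-entropy of one gold label id under a predicted distribution
        return -math.log(probs[gold])

    return min(sum(ce(g, d) for g, d in zip(perm, pred_dists))
               for perm in itertools.permutations(gold_labels))
```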
\paragraph{BAG Loss.}
To make the model activate labels correctly, we add a supervisory loss over the bipartite attribute graph:
\begin{align}
\mathcal{L}_{A} &= -\sum_{j=1}^{|\textit{L}|}{score}_{V_l}^{(j)} \cdot y_j \\
y_j &= \begin{cases}
1, & v_j \in \mathcal{Y} \\
-1, & v_j \notin \mathcal{Y}
\end{cases}
\end{align}\normalsize
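A minimal sketch of the BAG loss above, assuming a hypothetical dictionary interface from labels to activation scores (the actual model operates on score vectors over the label vocabulary):

```python
def bag_loss(label_scores, gold_labels):
    """Supervisory loss on the bipartite attribute graph: rewards high
    activation scores for gold labels (y_j = +1) and penalizes high
    scores for all other labels (y_j = -1).

    label_scores: mapping label -> activation score score_{V_l}.
    gold_labels:  set of gold labels for the instance.
    """
    return -sum(score if label in gold_labels else -score
                for label, score in label_scores.items())
```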
\paragraph{Final Loss.} The final loss is a combination of set loss and BAG loss:
\begin{align}
\mathcal{L} = \mathcal{L}_{S} + \lambda \mathcal{L}_{A}
\end{align}
where $\lambda$ is the relative weight of the two losses\footnote{In our auxiliary experiments, we find that its impact is minor, so we empirically set it to 1.}.
\section{Experiments}
\subsection{Settings}
\paragraph{Datasets} We conduct experiments on two standard fine-grained entity typing datasets\footnote{Released in https://github.com/uwnlp/open\_type}: Ultra-Fine as the primary dataset and OntoNotes as a complementary dataset. Ultra-Fine contains 6K manually-annotated examples, 2519 categories, and 5.4 labels per sample on average. Following~\citet{choi2018ultra_data1}, we use the same 2K/2K/2K train/dev/test splits and evaluate using macro precision, recall and F-score. The original OntoNotes dataset~\citep{ontonotes_data2} contains 25K/2K/9K train/dev/test instances, 89 categories and 2.7 labels per sample on average; \citet{choi2018ultra_data1} also provides augmented training data with 2.3 labels per sample on average. We evaluate on both versions using the standard metrics: accuracy, macro F-score and micro F-score.
\paragraph{Baselines} For the Ultra-Fine dataset, we compare with the following baselines: \citet{DBLP:conf/naacl/OnoeD19_elmo}, which offers two multi-classifiers using BERT and ELMo as encoders respectively; \citet{choi2018ultra_data1}, a multi-classifier using GloVe+LSTM as encoder; \citet{xiong2019imposing_core2}, a multi-classifier using GloVe+LSTM as encoder that exploits label co-occurrence by introducing associated labels to enrich the label representation; and \citet{DBLP:conf/emnlp/Lopez020_hyper}, a hyperbolic multi-classifier using GloVe. For the OntoNotes dataset, in addition to the baselines for Ultra-Fine, we also compare with \citet{wang2020empirical}, a multi-classifier using BERT as encoder; \citet{linandJi2019attentive_core3}, a multi-classifier using ELMo as encoder that exploits label co-occurrence by requiring the latent representation to reconstruct the co-occurrence association; and \citet{DBLP:conf/acl/ChenCD20_onto_hier4}, a multi-classifier using ELMo as encoder that exploits the label hierarchy via a hierarchy-aware loss function.
\paragraph{Implementation} We use BERT-Base (uncased) \citep{DBLP:conf/naacl/DevlinCLT19_bert} as the encoder and the Adam optimizer \citep{DBLP:journals/corr/KingmaB14_adam} with a learning rate of 5e-5 for BERT and 1e-3 for other parameters. The batch size is 32, the encoder hidden size is 768, the decoder hidden size is 868, the label embedding size is 100, and the decoder dropout rate is 0.6. The confidence threshold ${\theta}_c$ and the similarity threshold ${\theta}_s$ are both tuned on the dev set and set to 0.1 and 0.2, respectively.
We use the GloVe embedding \citep{DBLP:conf/emnlp/PenningtonSM14_glove} to represent the nodes of BAG and fix it while training.
\subsection{Overall Results}
\begin{table}[!t]
\setlength{\belowcaptionskip}{-0.4cm}
\centering
\resizebox{.45\textwidth}{!}{
\begin{tabular}{lccc}
\Xhline{1.2pt}
\multicolumn{1}{l|}{\textbf{Model}} & \textbf{P} & \textbf{R} & \textbf{F1} \\ \hline
\multicolumn{4}{c}{without label dependency} \\ \hline
\multicolumn{1}{l|}{*\small\citet{choi2018ultra_data1}\normalsize} & 47.1 & 24.2 & 32.0 \\
\multicolumn{1}{l|}{*ELMo\small\citep{DBLP:conf/naacl/OnoeD19_elmo}\normalsize} & 51.5 & 33.0 & 40.2 \\
\multicolumn{1}{l|}{BERT\small\citep{DBLP:conf/naacl/OnoeD19_elmo}\normalsize} & 51.6 & 33.0 & 40.2 \\
\multicolumn{1}{l|}{BERT{[}in-house{]}} & 55.9 & 33.0 & 41.5 \\ \hline
\multicolumn{4}{c}{with label dependency} \\ \hline
\multicolumn{1}{l|}{*L\small{ABEL}\normalsize{GCN} \small\citep{xiong2019imposing_core2}\normalsize} & 50.3 & 29.2 & 36.9 \\
\multicolumn{1}{l|}{LRN \small w/o IR \normalsize} & \textbf{61.2} & 33.5 & 43.3 \\
\multicolumn{1}{l|}{LRN} & 54.5 & \textbf{38.9} & \textbf{45.4} \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Macro P/R/F1 results on the Ultra-Fine test set. * means using augmented data. ``Without label dependency'' methods formulate FET as multi-label classification without considering associations between labels; ``with label dependency'' methods leverage associations between labels explicitly or implicitly.}
\label{tab:Ultra-Fine main result}
\end{table}
\begin{table*}[tbh!]
\centering
\resizebox{.9\textwidth}{!}{
\begin{tabular}{l|cccccccccccc}
\Xhline{1.2pt}
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{3}{c}{\textbf{Total}} & \multicolumn{3}{c}{\textbf{General}} & \multicolumn{3}{c}{\textbf{Fine}} & \multicolumn{3}{c}{\textbf{Ultra-Fine}} \\ \cline{2-13}
\multicolumn{1}{c|}{} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \textbf{F} \\ \hline
*\citet{choi2018ultra_data1} & 48.1 & 23.2 & \multicolumn{1}{c|}{31.3} & 60.3 & 61.6 & \multicolumn{1}{c|}{61.0} & 40.4 & 38.4 & \multicolumn{1}{c|}{39.4} & 42.8 & 8.8 & 14.6 \\
$\dagger$L\small{ABEL}\normalsize{GCN} \citep{xiong2019imposing_core2} & 49.3 & 28.1 & \multicolumn{1}{c|}{35.8} & 66.2 & 68.8 & \multicolumn{1}{c|}{67.5} & 43.9 & 40.7 & \multicolumn{1}{c|}{42.2} & 42.4 & 14.2 & 21.3 \\
HY Large \citep{DBLP:conf/emnlp/Lopez020_hyper} & 43.4 & 34.2 & \multicolumn{1}{c|}{38.2} & 61.4 & 73.9 & \multicolumn{1}{c|}{67.1} & 35.7 & 46.6 & \multicolumn{1}{c|}{40.4} & 36.5 & 19.9 & 25.7 \\
*ELMo \cite{DBLP:conf/naacl/OnoeD19_elmo} & 50.7 & 33.1 & \multicolumn{1}{c|}{40.1} & 66.9 & \textbf{80.7} & \multicolumn{1}{c|}{73.2} & 41.7 & 46.2 & \multicolumn{1}{c|}{43.8} & 45.6 & 17.4 & 25.2 \\
BERT \cite{DBLP:conf/naacl/OnoeD19_elmo} & 51.6 & 32.8 & \multicolumn{1}{c|}{40.1} & 67.4 & 80.6 & \multicolumn{1}{c|}{73.4} & 41.6 & 54.7 & \multicolumn{1}{c|}{47.3} & 46.3 & 15.6 & 23.4 \\ \hline
BERT[in-house] & 54.1 & 32.1 & \multicolumn{1}{c|}{40.3} & 68.8 & 79.2 & \multicolumn{1}{c|}{73.6} & 43.8 & \textbf{57.4} & \multicolumn{1}{c|}{49.7} & \textbf{50.7} & 14.6 & 22.6 \\
LRN \small w/o IR \normalsize & \textbf{60.7} & 32.5 & \multicolumn{1}{c|}{42.3} & \textbf{79.3} & 75.5 & \multicolumn{1}{c|}{\textbf{77.4}} & \textbf{59.6} & 44.8 & \multicolumn{1}{c|}{51.2} & 45.7 & 18.7 & 26.5 \\
LRN & 53.7 & \textbf{38.6} & \multicolumn{1}{c|}{\textbf{44.9}} & 77.8 & 76.4 & \multicolumn{1}{c|}{77.1} & 55.8 & 50.6 & \multicolumn{1}{c|}{\textbf{53.0}} & 43.4 & \textbf{26.0} & \textbf{32.5} \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Macro P/R/F1 of each label granularity on Ultra-Fine dev set, and long tail labels are mostly in the ultra-fine layer. * means using augmented data. $\dagger$ We adapt the results from \citet{DBLP:conf/emnlp/Lopez020_hyper}.}
\label{tab:Ultra-Fine layer score}
\end{table*}
\begin{table*}[ht!]
\centering
\resizebox{.9\textwidth}{!}{
\begin{tabular}{l|cccccccccccc}
\Xhline{1.2pt}
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{3}{c}{\textbf{Total}} & \multicolumn{3}{c}{\textbf{General}} & \multicolumn{3}{c}{\textbf{Fine}} & \multicolumn{3}{c}{\textbf{Ultra-Fine}} \\ \cline{2-13}
\multicolumn{1}{c|}{} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \multicolumn{1}{c|}{\textbf{F}} & \textbf{P} & \textbf{R} & \textbf{F} \\ \hline
HY XLarge~\citep{DBLP:conf/emnlp/Lopez020_hyper} & / & / & \multicolumn{1}{c|}{/} & / & / & \multicolumn{1}{c|}{69.1} & / & / & \multicolumn{1}{c|}{39.7} & / & / & 26.1 \\
BERT[in-house] & 55.9 & 33.0 & \multicolumn{1}{c|}{41.5} & 69.7 & \textbf{81.6} & \multicolumn{1}{c|}{75.2} & 43.7 & \textbf{56.0} & \multicolumn{1}{c|}{49.1} & \textbf{53.5} & 15.5 & 24.0 \\
LRN \small w/o IR \normalsize & \textbf{61.2} & 33.5 & \multicolumn{1}{c|}{43.3} & \textbf{78.3} & 76.7 & \multicolumn{1}{c|}{\textbf{77.5}} & \textbf{61.6} & 44.1 & \multicolumn{1}{c|}{51.4} & 47.8 & 19.9 & 28.1 \\
LRN & 54.5 & \textbf{38.9} & \multicolumn{1}{c|}{\textbf{45.4}} & 77.4 & 76.7 & \multicolumn{1}{c|}{77.1} & 58.4 & 50.4 & \multicolumn{1}{c|}{\textbf{54.1}} & 43.5 & \textbf{26.4} & \textbf{32.8} \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Macro P/R/F1 of different label granularity on Ultra-Fine test set.}
\label{tab:layer score on UF Test}
\end{table*}
\begin{table*}[thb!]
\setlength{\belowcaptionskip}{-0 cm}
\centering
\resizebox{.9\textwidth}{!}{
\begin{tabular}{l|c|c|ccc|ccc|ccc}
\Xhline{1.2pt}
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Number of}}} & \multirow{2}{*}{\textbf{Category}} & \multirow{2}{*}{\textbf{Prediction}} & \multicolumn{3}{c|}{\textbf{Shot$=$0}} & \multicolumn{3}{c|}{\textbf{Shot$=$1}} & \multicolumn{3}{c}{\textbf{Shot$=$2}} \\ \cline{4-12}
\multicolumn{1}{c|}{} & & & Correct & Predicted & Prec. &Correct & Predicted & Prec. & Correct & Predicted & Prec. \\ \hline
BERT[in-house] & 293 & 5683 & 0 & 0 & / & 1 & 1 & 100.0\% & 9 & 66 & 13.6\% \\ \cline{1-1}
LRN \small w/o IR \normalsize & 330 & 5740 & 0 & 0 & / & 1 & 3 & 33.3\% & 15 & 28 & 53.6\% \\ \cline{1-1}
LRN & 997 & 7808 & 110 & 218 & 50.5\% & 67 & 252 & 26.6\% & 94 & 276 & 34.1\% \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Performance of zero-shot, shot$=$1 and shot$=$2 label prediction. ``Category'' means how many distinct types are predicted; ``Prediction'' means how many labels are generated.}
\label{tab:Ultra-Fine each shot}
\end{table*}
Table~\ref{tab:Ultra-Fine main result} shows the main results of all baselines and our method in two settings: LRN is the full model and LRN \small w/o IR \normalsize is the model without inductive reasoning. For a fair comparison, we implement a baseline with the same settings as LRN but with the decoder replaced by a multi-classifier as in \citet{choi2018ultra_data1} --- BERT[in-house]. We can see that:
1) \textit{By performing label reasoning, LRN can effectively resolve the fine-grained entity typing problem.} Compared with previous methods, our method achieves state-of-the-art performance, with an F1 improvement from 40.2 to 45.4 on the test set. This verifies the necessity of exploiting label dependencies for FET and the effectiveness of our two label reasoning mechanisms. We believe this is because label reasoning makes learning more data-efficient (i.e., labels can share knowledge) and makes label prediction globally coherent.
2) \textit{Both deductive reasoning and inductive reasoning are useful for fine-grained label prediction.} Compared with BERT[in-house], LRN \small w/o IR \normalsize can achieve 4.3\% F1 improvement by exploiting extrinsic dependencies via deductive reasoning. LRN can further improve F1 from 43.3 to 45.4 by exploiting intrinsic dependencies via inductive reasoning. We believe this is because deductive reasoning and inductive reasoning are two fundamental but different mechanisms, therefore, modeling them simultaneously will better leverage label dependencies to predict labels.
3) \textit{Seq2Set is an effective framework to model, learn and exploit label dependencies in an end-to-end manner.} Compared with L\small{ABEL}\normalsize{GCN}~\citep{xiong2019imposing_core2}, which heuristically exploits the label co-occurrence structure, LRN achieves a significant performance improvement. We believe this is because neural networks have a strong ability to represent and learn label dependencies, and the end-to-end manner allows LRN to generalize easily to new scenarios.
\subsection{Effect on Long Tail Labels}
As described above, another advantage of our method is that it can resolve the long tail problem by decomposing long tail labels into common attributes and modeling label dependencies between head and tail labels. Because the finer the label granularity, the more likely a label is to be long tail, we report the performance at each label granularity on the dev and test sets, following previous work, in Table~\ref{tab:Ultra-Fine layer score} and Table~\ref{tab:layer score on UF Test}. Moreover, we report the performance on labels with shot$\leq$2 in Table~\ref{tab:Ultra-Fine each shot}. Based on these results, we find that:
1) \textit{LRN can effectively resolve the long tail label problem.} Compared to BERT[in-house], LRN can significantly improve the F-score of ultra-fine granularity labels by 44\% (22.6 $\rightarrow$ 32.5) and recall more fine-grained labels (14.6 $\rightarrow$ 26.0).
2) \textit{Both deductive reasoning and inductive reasoning are helpful for long tail label prediction, but with different underlying mechanisms: deductive reasoning exploits the extrinsic dependencies between labels, while inductive reasoning exploits the intrinsic dependencies between labels.} LRN \small w/o IR \normalsize cannot predict zero-shot labels because it resolves long tail labels by relating head labels with long tail labels, and therefore cannot predict unseen labels. By contrast, LRN can predict zero-shot labels via inductive reasoning because it can decompose labels into attributes. Furthermore, we find that LRN \small w/o IR \normalsize has higher precision on few-shot (shot$=$2) labels than BERT and LRN. We believe this is because inductive reasoning focuses on recalling more labels, which inevitably introduces some incorrect labels.
\subsection{Detailed Analysis}
\paragraph{Effect of Components}
\begin{table}[htb!]
\centering
\resizebox{.45\textwidth}{!}{
\begin{tabular}{l|ccc|ccc}
\Xhline{1.2pt}
\multicolumn{1}{c|}{\multirow{2}{*}{\textbf{Model}}} & \multicolumn{3}{c|}{\textbf{Dev}} & \multicolumn{3}{c}{\textbf{Test}} \\ \cline{2-7}
\multicolumn{1}{c|}{} & \textbf{P} & \textbf{R} & \textbf{F} & \textbf{P} & \textbf{R} & \textbf{F} \\ \hline
\textbf{LRN} & 53.7 & 38.6 & 44.9 & 54.5 & 38.9 & 45.4 \\
-PreAtt & 53.1 & 39.3 & 45.2 & 52.6 & 39.5 & 45.1 \\
-PreAtt-ConAtt & 56.3 & 36.3 & 44.2 & 56.4 & 36.5 & 44.3 \\
-SetLoss & 46.8 & 40.7 & 43.5 & 47.8 & 40.7 & 44.0 \\ \hline
\textbf{LRN \small w/o IR \normalsize} & 60.7 & 32.5 & 42.3 & 61.2 & 33.5 & 43.3 \\
-PreAtt & 54.5 & 34.2 & 42.1 & 55.1 & 35.0 & 42.8 \\
-PreAtt-ConAtt & 55.2 & 32.9 & 41.3 & 56.2 & 34.3 & 42.6 \\
-SetLoss & 46.0 & 37.6 & 41.4 & 46.6 & 37.5 & 41.6 \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Ablation results on Ultra-Fine dataset: PreAtt denotes premise attention, ConAtt denotes contextual attention, and -SetLoss denotes replacing set prediction loss with cross-entropy loss.}
\label{Module Ablation}
\end{table}
\begin{figure}[htb!]
\setlength{\belowcaptionskip}{-0cm}
\centering
\subfigure[]{
\resizebox{0.23\textwidth}{!}{
\includegraphics{figure/bar.pdf}
\label{Fig.attribute}
}}
\subfigure[]{
\resizebox{0.23\textwidth}{!}{
\includegraphics{figure/plot.pdf}
\label{Fig.thred}
}}
\caption{(a) Ablation experiments of context attributes and entity attributes on Ultra-Fine dataset. (b) Performances of different confidence threshold ${\theta}_c$ and similarity threshold ${\theta}_s$ on dev set.}
\end{figure}
To evaluate the effect of different components, we report the ablation results in Table~\ref{Module Ablation}. We can see that: (1) Set prediction loss is effective: replacing it with cross-entropy loss will lead to a significant decrease. (2) Both context and premise attention mechanisms are important for Seq2Set generation.
\paragraph{Effect of Attributes Set}
To explore the impact of entity attributes and context attributes in the BAG, Figure~\ref{Fig.attribute} shows the results of different attribute configurations. We can see that both kinds of attributes are useful: the context attributes have high coverage but may be noisy, while the entity attributes are the opposite. However, when both are introduced, the information in the entity attributes can help disambiguate the context attributes, similar to the effectiveness of contextual information in word sense disambiguation. As a result, the two kinds of attributes complement each other. Figure~\ref{Fig.thred} shows the performance under different thresholds; we optimize the confidence threshold ${\theta}_c=0.1$ and the similarity threshold ${\theta}_s=0.2$ on the dev set. Notice that ${\theta}_s$ is the threshold for activating labels, and when ${\theta}_s=1$ the model is equivalent to LRN \small w/o IR \normalsize.
\paragraph{Results of OntoNotes}
\begin{table}[!t]
\setlength{\belowcaptionskip}{-0cm}
\centering
\resizebox{.45\textwidth}{!}{
\begin{tabular}{llccc}
\Xhline{1.2pt}
\multicolumn{1}{c}{\textbf{Encoder}} & \multicolumn{1}{c|}{\textbf{Model}} & \textbf{Acc} & \textbf{MaF} & \textbf{MiF} \\ \Xhline{1.1pt}
\multicolumn{5}{c}{\textbf{with augmentation}} \\ \Xhline{1.1pt}
\multirow{1}{*}{HYPER} & \multicolumn{1}{l|}{\citet{DBLP:conf/emnlp/Lopez020_hyper}} & 47.4 & 75.8 & 69.4 \\ \hline
\multirow{2}{*}{LSTM} & \multicolumn{1}{l|}{\citet{choi2018ultra_data1}} & 59.5 & 76.8 & 71.8 \\
& \multicolumn{1}{l|}{\citet{xiong2019imposing_core2}} & 59.6 & 77.8 & 72.2 \\ \hline
\multirow{2}{*}{ELMo} & \multicolumn{1}{l|}{*\citet{DBLP:conf/naacl/OnoeD19_elmo}} & 64.9 & 84.5 & 79.2 \\
& \multicolumn{1}{l|}{\cite{linandJi2019attentive_core3}} & 63.8 & 82.9 & 77.3 \\ \hline
\multirow{4}{*}{BERT} & \multicolumn{1}{l|}{\citet{wang2020empirical}} & 61.1 & 81.8 & 76.3 \\
& \multicolumn{1}{l|}{BERT {[}in-house{]}} & 62.2 & 83.4 & 78.8 \\
& \multicolumn{1}{l|}{LRN \small w/o IR \normalsize} & \textbf{66.1} & \textbf{84.8} & \textbf{80.1} \\
& \multicolumn{1}{l|}{LRN} & 64.5 & 84.5 & 79.3 \\ \Xhline{1.1pt}
\multicolumn{5}{c}{\textbf{without augmentation}} \\ \Xhline{1.1pt}
\multirow{2}{*}{ELMo} & \multicolumn{1}{l|}{*\citet{DBLP:conf/naacl/OnoeD19_elmo}} & 42.7 & 72.7 & 66.7 \\
& \multicolumn{1}{l|}{\citet{DBLP:conf/acl/ChenCD20_onto_hier4}} & \textbf{58.7} & 73.0 & 68.1 \\ \hline
\multirow{4}{*}{BERT} & \multicolumn{1}{l|}{\citet{DBLP:conf/naacl/OnoeD19_elmo}} & 51.8 & 76.6 & 69.1 \\
& \multicolumn{1}{l|}{BERT[in-house]} & 51.5 & 76.6 & 69.7 \\
& \multicolumn{1}{l|}{LRN \small w/o IR \normalsize} & 55.3 & 77.3 & 70.4 \\
& \multicolumn{1}{l|}{LRN} & 56.6 & \textbf{77.6} & \textbf{71.8} \\ \Xhline{1.2pt}
\end{tabular}}
\caption{Results on the OntoNotes test set. Augmentation refers to the augmented data created by \citet{choi2018ultra_data1}, which contains 800K instances; few-shot labels are therefore rare in this setting. * indicates using additional features to enhance the label representation.}
\label{ontonotes results}
\end{table}
To verify the generality of our method, we further conduct experiments on OntoNotes and report results with and without the augmented data in Table~\ref{ontonotes results}. To embed labels in OntoNotes, we use the embedding of the last word of a label, e.g., \textit{/person/artist/director} is represented using the embedding of \textit{director}.
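A sketch of this last-word embedding scheme, with a hypothetical word-vector lookup table:

```python
def label_embedding(label, word_vectors):
    """Embed a hierarchical OntoNotes label such as '/person/artist/director'
    with the vector of its last path segment ('director')."""
    last_word = label.strip("/").split("/")[-1]
    return word_vectors[last_word]
```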
We can see that: 1) LRN still achieves the best performance in both settings, which verifies the robustness of our method. 2) Compared with Ultra-Fine, our method achieves a smaller improvement on OntoNotes. We find this is mainly because: first, OntoNotes has weaker label dependencies, since its label set is smaller (89 vs 2519 for Ultra-Fine) and most of its labels are coarse-grained; second, most labels in OntoNotes are frequent labels with many training instances, so the long tail label problem is less serious. This also explains why LRN \small w/o IR \normalsize achieves better performance than LRN in the setting with augmented data: the more training instances, the less the need for long tail prediction.
\subsection{Case Study}
\begin{figure}[htb!]
\centering
\includegraphics[width=0.45\textwidth]{figure/heatmap.pdf}
\caption{Heat map of co-occurrence matrices of different models' prediction and ground truth. LRN w/o IR and LRN learn very similar co-occurrence matrices to Ground Truth.}
\label{Fig.heatmap}
\end{figure}
\begin{figure}[htb!]
\setlength{\belowcaptionskip}{-0cm}
\centering
\includegraphics[width=0.48\textwidth]{figure/case.pdf}
\caption{Cases of prediction results.}
\label{Fig.case}
\end{figure}
To intuitively present the learned label dependencies, Figure~\ref{Fig.heatmap} shows the label co-occurrence matrices of different models' predictions and of the ground truth. We can see that both LRN and LRN \small w/o IR \normalsize accurately learn the label dependencies. Figure~\ref{Fig.case} shows some prediction cases and demonstrates that deductive and inductive reasoning have quite different underlying mechanisms and predict quite different labels.
\section{Conclusions}
This paper proposes the \textit{Label Reasoning Network}, which uniformly models, learns and reasons about complex label dependencies in a sequence-to-set, end-to-end manner. LRN designs two label reasoning mechanisms for effective decoding: deductive reasoning to exploit extrinsic dependencies and inductive reasoning to exploit intrinsic dependencies. Experiments show that LRN can effectively cope with the massive label set in FET. Because our method requires no predefined structures, it can be easily generalized to new datasets and applied to other multi-label classification tasks.
\section{Acknowledgments}
This work is supported by the National Key Research and Development Program of China (No. 2020AAA0106400), the National Natural Science Foundation of China under Grants no.U1936207 and 62106251, Beijing Academy of Artificial Intelligence (BAAI2019QN0502), and in part by the Youth Innovation Promotion Association CAS(2018141).
\bibliographystyle{acl_natbib}
We thank K. Sliwa, A. Narla, and L. Sun for helpful discussions. This research was supported by the U.S. Army Research Office (W911NF-14-1-0011). A.P. was supported by the National Science Foundation (NSF) (PHY-1309996). S.M.G. acknowledges additional support from NSF DMR-1301798. Facilities use was supported by the Yale Institute for Nanoscience and Quantum Engineering (YINQE), the Yale SEAS cleanroom, and the NSF (MRSECDMR 1119826).
\subsection*{Author Contributions}
A.P. and N.O. performed the experiment and analyzed the data. N.O. designed and built the feedback architecture with help from Y.L. under the supervision of R.J.S. and M.H.D. R.H. and P.R. developed the optimal control pulses. M.M., Z.L., L.J., S.G., and B.V. provided theoretical support. R.H. and L.F. fabricated the transmon qubit. R.J.S. supervised the project. A.P., N.O., L.F., and R.J.S. wrote the manuscript with feedback from all authors.
\clearpage
\begin{figure*}[!ht]
\centering
\includegraphics[width=5.5in]{fig1_final.pdf}
\caption{\footnotesize
\textbf{The cat code cycle.}
In the logical encoding of $\ket{0}\equiv\ket{C^+_{\alpha}}=\ket{\alpha}+\ket{-\alpha}$ and $\ket{1}\equiv\ket{C^+_{i\alpha}}=\ket{i\alpha}+\ket{-i\alpha}$ (normalizations omitted), the two ``2-cats" $\ket{C^+_{\alpha}}$ and $\ket{C^+_{i\alpha}}$ are both eigenstates of even photon number parity (an ``$n$-cat" is a superposition of $n$ coherent states). For large enough $|\alpha|$ they are also effectively orthogonal to one another. In this basis, the states along $+X_c$ and $+Y_c$ are both ``4-cats" of even parity as well. The different patterns in the fringes of their cartoon Wigner functions signify the different phase relationship between the basis states. These features allow one to store a qubit in a superposition of 2-cats and at the same time monitor the parity as the error syndrome without projecting the state out of this encoding. The loss of a single photon changes not just the parity of the basis states, but the phase relationship between them by a factor of $i$ ($\ket{C^+_{\alpha}}+\ket{C^+_{i\alpha}}\rightarrow\ket{C^-_{\alpha}}+i\ket{C^-_{i\alpha}}$). Decoding after one jump, one finds the initial qubit rotated by $\pi/2$ about the $Z_c$ axis (indicated by green shading). Thus, with each application of $\hat{a}$, the encoded state cycles between the even and odd parity subspaces (shaded in red and blue), while due to each consequent factor of $i$, the encoded information rotates about the $Z_c$ axis by $\pi/2$, returning to the original state after four photon jumps. Between the stochastic applications of $\hat{a}$, the cat states deterministically decay toward vacuum: $\alpha\rightarrow\alpha e^{-\kappa t/2}$ (not depicted here). As long as the coherent states do not overlap appreciably, this effectively constitutes the identity operation on the encoded state. }
\label{fig1}
\end{figure*}
\begin{figure}[!ht]
\centering
\includegraphics[width=3in]{fig2_final.pdf}
\caption{\footnotesize
\textbf{Quantum state machine for adaptive error monitoring.}
\textbf{(a)} Schematic of the logical flow. This quantum state machine implements an adaptive parity monitoring scheme in which the parity mapping protocol is updated in real-time to maximize the probability to measure $\ket{g}$. This reduces the time the ancilla spends in $\ket{e}$ by up to a factor of $50$ per error (Methods), enhancing parity measurement fidelities and making it an essential component in the experimental workflow. Entering the state machine (double-arrow pointing inward) consists of initializing a counter ($\mathrm{cnt}=0$) and using the protocol that maps even parity to $\ket{g}$ with a simple Ramsey-style pulse sequence~\cite{Bertet:2002df,Haroche:2007uc} (red circle) followed by a projective measurement of the ancilla; prior to the pulse sequence there is an idling time $t_\mathrm{w}$. In addition, the counter is incremented. If the measurement result is $\ket{g}$, the system idles for $40\mathrm{ns}$ (denoted by the stopwatch) and then returns to the previous state. A measurement of $\ket{e}$ implies a change in parity, or the occurrence of an error. In this case, a $\pi$ pulse is applied (Gaussian envelope; $\sigma=2\mathrm{ns}$; total duration $40\mathrm{ns}$) and the system moves to using a pulse sequence that maps odd parity to $\ket{g}$ (blue circle), and again increments the counter. The controller returns to the initial state after another error, completing the state machine cycle. When the counter reaches the pre-loaded number $\mathrm{cnt}=N_\mathrm{fin}$, the system exits (double-arrows pointing out). Throughout a single measurement trajectory, counting the errors amounts to counting the number of times $\ket{e}$ occurs. Measurement infidelities are emphasized by lighter shades of red and blue between parity transitions, corresponding to a lower confidence when the meter measures $\ket{e}$. The non-adaptive protocol simply cycles between using a fixed parity mapping sequence and the short idling time (green dotted arrow). 
\textbf{(b)} Results. An example single-shot record of parity measurement results, with $t_\mathrm{w}=9\mu$s between each measurement, demonstrates the difference between the adaptive and non-adaptive protocols. In the former, the ancilla is found to be in $\ket{e}$ just once out of 20 measurements regardless of the parity. With the latter, $\ket{e}$ is measured 8 times; given $t_\mathrm{w}/T_1\approx0.3$, the odds of ancilla decay in this trace are so high that it is unclear how many errors occurred.}
\label{fig2}
\end{figure}
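The control flow of the adaptive monitoring scheme in Fig.~2a can be summarized by a toy classical simulation (idealized readout; the jump probability, pulse timing and ancilla decay are simplifying assumptions, not measured parameters):

```python
import random

def adaptive_parity_monitor(n_fin, p_jump=0.1, rng=random.random):
    """Count photon-jump errors with the adaptive scheme: the parity-mapping
    protocol is toggled after every |e> outcome, so the expected measurement
    result is always |g> and each |e> outcome signals one parity change."""
    map_even_to_g = True   # which parity-mapping pulse sequence is active
    parity_even = True     # true cavity parity (hidden in the experiment)
    errors = 0
    for _ in range(n_fin):
        if rng() < p_jump:                     # stochastic photon loss
            parity_even = not parity_even      # a jump flips the parity
        measured_g = (parity_even == map_even_to_g)   # ideal readout
        if not measured_g:                     # ancilla found in |e>
            errors += 1                        # (a pi pulse then resets it)
            map_even_to_g = not map_even_to_g  # switch mapping protocol
    return errors
```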
\begin{figure*}[!ht]
\centering
\includegraphics[width=7.2in]{fig3_final.pdf}
\caption{\footnotesize
\textbf{Example of a two-step quantum trajectory executed by the QEC state machine.}
Two steps shown for a total monitoring time of $\sim 28\mathrm{\mu s}$. \textbf{(a)} Six cardinal points on the Bloch sphere $\rho_\mathrm{init}$, initially encoded in the ancilla, are encoded onto a resonator state; green markers indicate the initial coordinate system orientation. A ``Wigner [tomography] snapshot'' is shown for an initial state $\frac{1}{\sqrt{2}}(\ket{0}-\ket{1})$ mapped onto $\frac{1}{\sqrt{2}}(\ket{C^+_{\alpha}}-\ket{C^+_{i\alpha}})$; $\bar{n}_0=|\alpha|^2=3$. \textbf{(b)} The controller employs the quantum state machine for adaptive parity monitoring protocol with delays of $t_\mathrm{w}\sim13\mu$s between each measurement. Parity measurement rectangles: pulse patterns that map even (odd: dotted line) parity onto ancilla $\ket{g}$ ($\ket{e}$); diamonds: the controller branches on the ancilla measurement result ($0\rightarrow$ no error, $\ket{g}$; $1\rightarrow$ error, $\ket{e}$); $\pi$ pulse rectangle: ancilla reset ($\ket{e}\rightarrow\ket{g}$). The controller records the time at which an error occurs $t_j$ (clock icon); purple arrows emphasize the phase rotation due to the non-commutativity of $\hat{a}$ and the Kerr Hamiltonian: $\theta_K=Kt_j$, which leads to a phase difference between trajectories $10$ and $01$. Deterministic rotations $\theta_M$ are due to cross-Kerr interactions between the readout and storage resonators during projective measurements of the ancilla. The program returns one of the four possible records $\{00,01,10,11\}$ with probabilities $\{70.4\%,13.7\%,11.8\%,4.1\%\}$, in agreement with expected statistics. The parity (origin of the Wigner tomogram) matches the controller's best estimate at any time (border color), and each tomogram matches the expected resonator state as seen in simulations. \textbf{(c)} The feedback rotates all states back to the initial reference frame, where $01$ and $10$ are averaged together after the $\theta_K$ correction. 
Ancilla state tomography after decoding (D) returns octahedrons similar to the one in (a), which exhibit the expected rotations of $\pi/2$ about $Z$ per error (green markers). The controller decides in real-time whether to apply decoding pulses for even (red D), or odd (blue D) parity pulses. \textbf{(d)} The correction to obtain final state $\rho_\mathrm{fin}$ is made in software through coordinate system rotations by $0$ (0 errors), $-\pi/2$ (1 error), and $-\pi$ (2 errors), emphasizing that correcting errors, whether active or passive, amounts to just knowing how many errors occurred. \textbf{(e)} Process tomography results for $j=0$, $1$, and $2$ errors prior to correction. Ideal $X_{j\pi/2}$ process matrices are shown in wire-outlined bars. Experimental data for $X^M_j$ is shown in solid bars; the values are complex numbers with amplitude on the vertical axis and an argument specified by the bar color. Amplitudes less than $0.01$ are not depicted. Process tomography after correction is shown to the right of the arrow.}
\label{fig3}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=7.2in]{fig4_final.pdf}
\caption{\footnotesize
\textbf{QEC program process tomography.}
\textbf{(a)} Process fidelity decay of four possible qubit storage schemes in our system: just the ancilla transmon (green squares), an uncorrected superposition of cat states with $\bar{n}_0=2$ (orange circles), the cat code QEC system with $\bar{n}_0=2$ (red triangles), and a superposition of resonator Fock states $\ket{0}_f,\ket{1}_f$ (gray empty circles). Each point in the cat code data was averaged 100,000 times, and each point in the qubit and Fock state encodings was averaged 50,000 times; error bars are smaller than marker sizes. All curves are fit to single exponentials, $F(t)=0.25+Ae^{-t/\tau}$, except for the uncorrected cat code, which is fit to $F(t)=0.25+Ae^{-\bar{n}_0(1-e^{-t/\tau_c})}$; for short times this decay is well-approximated by a single exponential with $\tau\approx\tau_c/\bar{n}_0$. Fluctuations beyond the error bounds can be explained by the effects of Kerr and are reproduced in simulation. Uncertainties are given by errors in the fit. When trajectories of low measurement confidence are excluded (purple diamonds), the qubit decay rate decreases by nearly a factor of two. The top axis shows the number of syndrome measurements used to obtain each point in the cat code data. \textbf{(b)} Qubit state tomography after the correction step of the QEC system (corresponding to the cat code data in red triangles), shown for four different times. The uniform shrinking of the Bloch sphere in time demonstrates that the residual loss of fidelity is well-represented by a depolarization channel.}
\label{fig4}
\end{figure*}
\begin{figure}[!ht]
\centering
\includegraphics[width=3.3in]{tab1_final.pdf}
\captionsetup{labelformat=empty}
\caption{\footnotesize
\textbf{The Optimal Measurement Strategy.}
The dominant modes of failure in the cat code are double errors ($\hat{a}$ followed by $\hat{a}$) between consecutive syndrome measurements separated by a time $t_\mathrm{w}$; possible errors that the cat code does not address, such as additions of a single photon ($\hat{a}^\dag$); a failed parity mapping resulting from ancilla dephasing ($T_\phi$); incorrect ancilla initialization prior to syndrome measurement resulting from unknown excitations ($\Gamma_\uparrow$) of the ancilla during $t_\mathrm{w}$; undesired couplings that result in dephasing due to Kerr ($\hat{a}^{\dag2}\hat{a}^{2}$); and finally ancilla decoherence that directly propagates to unrecoverable errors in the codeword, a result of ancilla decay or excitation ($T_1$). This table shows the individual and total ramifications of these failure modes, where expected multiplicative gains in lifetime over a Fock state encoding ($290\mathrm{\mu s}$) are shown for two different measurement strategies: as quickly as possible (predicted), and the optimal monitoring time for an initial $\bar{n}=2$ (observed). As detailed in the Methods, these channels of loss are not independent. The performance of our QEC system is most limited by the non-fault tolerance of the error syndrome measurements.}
\label{tab1}
\end{figure}
\makeatletter
\renewcommand{\thefigure}{S\@arabic\c@figure}
\makeatother
\setcounter{figure}{0}
\setcounter{equation}{0}
\clearpage
\section{Methods}\label{setup}
\subsection{Setup}\label{Hamiltonian}
We perform our experiments in a cryogen-free dilution refrigerator that operates in a temperature range of $10$--$20~\mathrm{mK}$. Our setup is identical to that described in~\cite{Vlastakis:2015tw}, aside from a $2\mathrm{mm}$ rather than $4\mathrm{mm}$ wall separating the storage and readout resonators (Fig.~S\ref{fig:setup}a), a Josephson Parametric Converter (JPC)~\cite{Bergeal:2010iu} replacing a Josephson Bifurcation Amplifier (JBA)~\cite{Vijay:2009ko} as the first stage of amplification (Fig.~S\ref{fig:setup}b), and a quantum control architecture replacing the hybrid FPGA-Tektronix AWG configuration (Fig.~S\ref{fig:setup}c; see sec.~\ref{quantum_control}).
\subsection{Hamiltonian Parameters}\label{hamiltonian}
Employing the rotating wave approximation (RWA), the Hamiltonian of our system in the strong dispersive regime~\cite{Schuster:2007ki} is approximated by:
\begin{align}
\label{eq:hamiltonian}
\hat{H}/\hbar = (\omega_{a} - \chi_{sa}\hat{a}_s^\dagger\hat{a}_s) \hat{b}^\dag\hat{b}+ (\omega_r -\chi_{ra}\hat{b}^\dag\hat{b})\hat{a}_r^\dag\hat{a}_r + (\omega_s-\chi_{sr}\hat{a}_r^\dag\hat{a}_r)\hat{a}_s^\dagger\hat{a}_s - \frac{K_s}{2}\hat{a}_s^{\dag 2}\hat{a}_s^2 - \frac{K_r}{2}\hat{a}_r^{\dag 2}\hat{a}_r^2 - \frac{K_a}{2}\hat{b}^{\dag 2}\hat{b}^2,
\end{align}
where the annihilation operator for the storage resonator from the main text is relabeled, $\hat{a}\rightarrow\hat{a}_s$, that of the readout resonator $\hat{a}_r$ is introduced, and that of the ancilla $\hat{b}$ replaces the projector $\ket{e}\bra{e}$. In a similar manner, anharmonicities for all three components are defined as $K_s$, $K_r$, and $K_a$. Higher-order corrections to dispersive shifts and anharmonicities were found to be negligible in this experiment.
The Hamiltonian is organized to highlight some of the dominant mechanisms in our system. As described in the main text, the frequency of the ancilla is dispersively shifted by a frequency $\chi_{sa}$ for every photon in the storage resonator~\cite{Schuster:2007ki}; with this mechanism we realize an effective controlled-NOT (cNOT) gate on the ancilla depending on the photon number parity of the storage (see secs.~\ref{cohstate_basis},~\ref{smart_tracking}). Similarly, the frequency of the readout is shifted by $\chi_{ra}$ depending on the ancilla state; this is the typical dispersive readout mechanism~\cite{Blais:2004tl} that is now ubiquitous in superconducting cQED systems. Finally, the frequency of the storage is shifted by $\chi_{sr}$ per photon in the readout, indicating that every time we measure the state of the ancilla we shift the phase of the storage by an amount that depends on the strength of the readout pulse.
The mode frequencies, dispersive shifts, and the ancilla anharmonicity are measured using the experimental methods described in~\cite{Vlastakis:2015tw}. The Kerr interaction of the storage resonator $K_s$ is measured by monitoring the errors of a cat code logical state (see sec.~\ref{cat_code}) and finding the difference in phase between trajectories where errors are measured to occur at different times: $\Delta\theta=K_s\Delta t_j$ (see secs.~\ref{undesired_couplings},~\ref{record_error_time}). In Fig.~3 of the main text, we show two Wigner functions~\cite{book:haroche06} for the case of a single parity jump: $01$ and $10$, where $0\equiv \mathrm{``no~error"}$ and $1\equiv \mathrm{``error."}$ We Fourier transform circular cuts at a fixed radius of these Wigner functions that show pronounced interference fringes to compare the phase of the oscillations for $01$ vs. $10$ and in so doing find $\Delta\theta$. On average, photon jumps for $01$ versus $10$ are separated in time by $t_{M}$, where $t_M$ is the total time between syndrome measurements (see sec.~\ref{exptflow}); we thus find the average difference between jump time $\overline{\Delta t_j}=t_\mathrm{M}$ to find $K_s$. Table~\ref{table_params} summarizes the measured (predicted~\cite{Nigg:2012jja}) parameters.
\subsection{Coherence Times and Measurement Fidelities}\label{fidelities}
Coherence times and thermal populations of all modes, obtained using the methods outlined in~\cite{Vlastakis:2015tw}, are summarized in Table~\ref{coherence}. The single-shot ancilla measurement fidelity is $99.3\%$. The parity measurement fidelity is $98.5\%$ for no photons in the storage resonator, $98.1\%$ for an average photon number $\bar{n}=2$, and $97.7\%$ for $\bar{n}=3$; these fidelities are also obtained using the methods in~\cite{Vlastakis:2015tw}. The average rate of thermal excitation of the ancilla from its ground state $\ket{g}$ to excited state $\ket{e}$, $\Gamma_\uparrow$, is given by $\Gamma_\uparrow=1/(T_1n^a_{th})$, where $n^a_{th}=0.04$ is the steady-state ancilla excited state occupation. A lower bound on the dephasing rate of the storage resonator, $\Gamma^s_{\phi}$, is given by $\Gamma^s_{\phi}=\Gamma_\uparrow$~\cite{Reagor:2015}, akin to the dephasing one expects of a qubit (e.g. transmon) coupled to a low-Q readout resonator with a finite thermal population~\cite{Sears:2012cm}. The storage resonator coherence time $T^s_2$ is thus given by $(T^s_2)^{-1}=(2\tau_s)^{-1}+\Gamma^s_{\phi}$, where $\tau_s$ is the average lifetime of the single photon Fock state $\ket{1}$. The coherence time $T^s_2$ is consistent with the observed time constant in the decay of the process fidelity of a qubit stored in Fock states $\ket{0}_f$, $\ket{1}_f$ (see sec.~\ref{depolarization}). Henceforth we refer to the storage resonator as just ``the resonator."
\section{Building the Cat Code}
\subsection{Encoding in a Continuous-Variable System}
Universal quantum computation with continuous-variable encoding schemes is possible~\cite{Lloyd:2003kv,Gottesman:2001jb,Menicucci:2006ir} and can in fact offer advantages over those employing collections of discrete physical qubits. Although a continuous-variable quantum computer formally has the same power as a discrete-variable system, there are regimes in which it could be more efficient. For example, a single oscillator can in principle accommodate an unlimited amount of information, owing to the infinite size of its Hilbert space. The hardware requirements can be more favorable as well, calling for linear elements and photon detectors in the optical platforms~\cite{Knill:2001is} or simple microwave resonators with long coherence times that are easy to assemble in superconducting cQED systems~\cite{Reagor:2015}. Furthermore, the natural relation between continuous variables and communication can in principle simplify the transmission of quantum information for the purposes of teleportation~\cite{Lloyd:1998ub,Braunstein:1998uo}, cryptography~\cite{Ralph:1999ex}, and dense coding~\cite{Braunstein:2000cw}, to name a few.
There are trade-offs as well, however, which include challenges resulting from possible non-orthogonality of basis states in experimental realizations, the possibility of continuous excursions from a logical sub-space, and manipulating encoded states with high fidelity. Nonetheless, several promising continuous-variable QEC protocols exist~\cite{Braunstein:1998cd,LundRalph:2008,Ralph:2011ct,Leghtas:2013ff,Mirrahimi:2014js,Michael:2016}, and substantial progress has been made in demonstrating that continuous-variable encodings can be a powerful resource for the storage, control, and measurement of quantum information~\cite{Pittman:2005du,Deleglise:2008gt, Aoki:2009ew,Hofheinz:2009ba,Jensen:2011ba,Vlastakis:2013ju,Sun:2013ha,Leghtas:2015uf,Heeres:2015,Vlastakis:2015tw}. Moreover, with recent progress demonstrating cQED architectures that offer a natural path toward scalability~\cite{Brecht:2015_1,Brecht:2015_2,Minev:2015}, we see continuous-variable systems as a promising platform for realizing a practical quantum computer.
\subsection{Single Photon Loss as the Dominant Error}
We consider the system introduced in section~\ref{setup}. The time evolution of the density matrix $\rho_s$ of a resonator field with an equilibrium thermal photon population $n^s_{th}$ and a single-photon decay rate $\kappa_s$, associated with the creation and annihilation operators $\hat{a}_s^\dag$ and $\hat{a}_s$, is well-modeled by the following Lindblad operators in the master equation formalism~\cite{book:haroche06}:
\begin{align}
L_-&=\sqrt{\kappa_s (1+n^s_{th})}\hat{a}_s\\
L_+&=\sqrt{\kappa_s n^s_{th}}\hat{a}_s^\dag,
\end{align}
where the master equation reads:
\begin{align}
\frac{d\rho_s}{dt}=-i\omega_s[\hat{a}_s^\dag\hat{a}_s,\rho_s]-\frac{\kappa_s(1+n^s_{th})}{2}(\hat{a}_s^\dag\hat{a}_s\rho_s+\rho_s \hat{a}_s^\dag\hat{a}_s-2\hat{a}_s\rho_s\hat{a}_s^\dag)-\frac{\kappa_s n^s_{th}}{2}(\hat{a}_s\hat{a}_s^\dag\rho_s+\rho_s \hat{a}_s\hat{a}_s^\dag-2\hat{a}_s^\dag\rho_s\hat{a}_s).
\end{align}
One can show that such a formulation returns the expected prediction that on average the occupation of the resonator mode $\bar{n}=\mathrm{Tr}[\rho_s\hat{a}_s^\dag\hat{a}_s]$ simply decays exponentially in time to thermal equilibrium with a characteristic time constant $\tau_s=1/\kappa_s$:
\begin{align}
\bar{n}(t)=n_0e^{-t/\tau_s}+n^s_{th}(1-e^{-t/\tau_s})\label{energy_decay}
\end{align}
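As a sanity check of eq.~\ref{energy_decay}, a minimal Monte Carlo over jump trajectories, with loss rate $\kappa_s(1+n^s_{th})\,n$ and excitation rate $\kappa_s n^s_{th}(n+1)$ for a state with $n$ photons, reproduces the exponential relaxation to thermal equilibrium. The parameter values below are illustrative, not the experimental ones.

```python
import math
import random

random.seed(1)
tau_s, n_th = 1.0, 0.05        # illustrative: resonator lifetime and thermal population
dt, steps, trials = 0.01, 300, 4000
n0 = 4                          # initial photon number

mean_n = [0.0] * steps
for _ in range(trials):
    n = n0
    for k in range(steps):
        mean_n[k] += n / trials
        # first-order jump probabilities in a small time step dt
        if random.random() < n * (1 + n_th) * dt / tau_s:
            n -= 1              # single photon loss (L_-)
        elif random.random() < (n + 1) * n_th * dt / tau_s:
            n += 1              # thermal excitation (L_+)

t = (steps - 1) * dt
predicted = n0 * math.exp(-t / tau_s) + n_th * (1 - math.exp(-t / tau_s))
print(mean_n[-1], predicted)    # both ~0.25
```

The sampled average photon number tracks the closed-form expression to within statistical error, confirming that the two Lindblad rates alone generate the expected relaxation.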
Treating $n^s_{th}$ as negligible for the remainder of this discussion, one may conclude that the evolution of the density matrix $\rho_s$ can be simply described by the stochastic application of the lowering operator $\hat{a}_s$ on the resonator field. This assertion, that there are essentially just two dominant processes within the resonator, the application of $e^{-\frac{\kappa_s}{2}\hat{a}^\dag_s\hat{a}_s\Delta t}$ (for time steps $\Delta t$) and the stochastic application of $\hat{a}_s$, is a powerful incentive to consider storing quantum information in a superconducting resonator rather than a two level system, such as a transmon, which is susceptible to both amplitude ($\sigma_-$) and phase damping ($\sigma_z$). Here $\sigma_x,\sigma_y,\sigma_z$ are the standard Pauli operators and $\sigma_\pm=\sigma_x\mp i\sigma_y$. Indeed, it has been experimentally demonstrated~\cite{Reagor:2015} that 3D superconducting resonators have no currently measurable source of inherent dephasing arising from a Lindblad operator of the form $L_\phi=\sqrt{\kappa_\phi}\hat{a}_s^\dag\hat{a}_s$, which corresponds to a frequency jitter, or higher order photon loss mechanisms such as $L_{2ph}=\sqrt{\kappa_{2ph}}\hat{a}_s^2$~\cite{Sun:2013ha}. In practice, some resonator dephasing is induced by its dispersive coupling to occupation fluctuations of other modes in the system, particularly to that of the transmon qubit used as the ancilla in the error correction; see sec.~\ref{losses}. The governing goal is to thus construct a code that can track the occurrence of single photon jumps, as this would correct for the dominant error channel in the system. In implementing a QEC system to realize this goal, we look to translate the discretized energy dissipation of the resonator field into a unitary operation on an encoded state, the occurrence of which can be deduced from an appropriate error syndrome measurement.
\subsection{A Basis of Coherent States}\label{cohstate_basis}
Coherent states $\ket{\alpha}$ are an attractive option for a logical encoding scheme as they are eigenstates of $\hat{a}_s$, where $\ket{\alpha}=e^{-|\alpha|^2/2}\sum_{n=0}^{\infty}\frac{\alpha^n}{\sqrt{n!}}\ket{n}$ for a complex amplitude $\alpha$, and $\hat{a}_s\ket{\alpha}=\alpha\ket{\alpha}$. This feature suggests that one could try encoding a qubit in a superposition of coherent states: $\ket{\psi}=c_0\ket{0}+c_1\ket{1}\rightarrow c_0\ket{\alpha}+c_1\ket{-\alpha}$. As the overlap between two coherent states falls off exponentially in the difference of their amplitudes~\cite{book:haroche06}, choosing an $|\alpha|^2=\bar{n}\gtrsim 1.5$ would be sufficient for basis states $\ket{\alpha}$ and $\ket{-\alpha}$ to be almost completely orthogonal, with an overlap of $\sim0.2\%$. The penalty we pay is that the rate of photon jumps $\gamma$ scales with the mean photon number $\bar{n}=|\alpha|^2$. When $c_0=c_1=\pm1/\sqrt{2}$, $\ket{\psi}\rightarrow1/\sqrt{2}(\ket{\alpha}\pm\ket{-\alpha})$, the logical encoding is an equal superposition of coherent states that we refer to in this work as ``2-cat" states, which are eigenstates of even ($+$) or odd ($-$) photon number parity $\hat{P}=e^{i\pi\hat{a}_s^\dag\hat{a}_s}$:
\begin{align}
\ket{C^{+}_{\alpha}}=&\frac{1}{\sqrt{2}}(\ket{\alpha}+\ket{-\alpha})=\sqrt{2}\,e^{-|\alpha|^2/2}\sum_{n=0}^{\infty}\frac{\alpha^{2n}}{\sqrt{(2n)!}}\ket{2n}\\
\ket{C^{-}_{\alpha}}=&\frac{1}{\sqrt{2}}(\ket{\alpha}-\ket{-\alpha})=\sqrt{2}\,e^{-|\alpha|^2/2}\sum_{n=0}^{\infty}\frac{\alpha^{2n+1}}{\sqrt{(2n+1)!}}\ket{2n+1},
\end{align}
where $\bra{C^{\pm}_{\alpha}}\hat{P}\ket{C^{\pm}_{\alpha}}=\pm1$. In fact, this parity is a quantity that can be measured in our system with a simple Ramsey-style pulse sequence~\cite{Bertet:2002df,Haroche:2007uc} (see sec.~\ref{smart_tracking}). Measurements of photon parity have already been used to track the loss of single photons in real-time with high fidelity, demonstrating their efficiency in extracting an error syndrome from a resonator field~\cite{Sun:2013ha}. The problem with this encoding, however, is that aside from the special case of a 2-cat, for arbitrary $c_0$ and $c_1$ there is no parity symmetry and no other measurable symmetry property that would indicate the loss of a photon.
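The quoted basis-state overlap is easy to verify numerically. The following plain-Python sketch, an illustration we add here rather than part of the experiment, builds truncated Fock expansions of $\ket{\alpha}$ and $\ket{-\alpha}$ and confirms $|\braket{\alpha|-\alpha}|^2=e^{-4\bar{n}}\approx0.2\%$ for $\bar{n}=1.5$:

```python
import cmath
import math

def coherent(alpha, N=60):
    # Fock amplitudes of |alpha>, truncated at N terms
    amps, c = [], cmath.exp(-abs(alpha) ** 2 / 2)
    for n in range(N):
        amps.append(c)
        c *= alpha / math.sqrt(n + 1)
    return amps

def overlap(u, v):
    return sum(a.conjugate() * b for a, b in zip(u, v))

n_bar = 1.5
alpha = math.sqrt(n_bar)
ov = overlap(coherent(alpha), coherent(-alpha))
print(abs(ov) ** 2)             # ~0.0025, i.e. |<a|-a>|^2 = e^{-4 n_bar}
print(math.exp(-4 * n_bar))     # closed form for comparison
```

The truncation at 60 Fock states is ample for this amplitude; the numerical overlap matches the closed form to machine precision.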
\subsection{The Cat Code: A Basis of Cat States}\label{cat_code}
We are thus led to the cat code~\cite{Leghtas:2013ff,Mirrahimi:2014js}, wherein we access a larger part of the resonator's Hilbert space in order to accommodate an encoding scheme where the individual basis states are themselves 2-cats along the real and imaginary axes in phase space: $\ket{C^\pm_{\alpha}}\equiv\mathcal{N^\pm_\alpha}(\ket{\alpha}\pm\ket{-\alpha})$ and $\ket{C^\pm_{i\alpha}}\equiv\mathcal{N^\pm_\alpha}(\ket{i\alpha}\pm\ket{-i\alpha})$, where $\mathcal{N^\pm_\alpha}\rightarrow1/\sqrt{2}$ for large $\alpha$. To prevent appreciable basis overlap (see sec.~\ref{orthogonality}), one must now have $|\alpha|^2=\bar{n}\gtrsim 2$ and thus $\gamma\gtrsim2\kappa_s$. Such a modification allows us to encode a quantum state with arbitrary $c_0$ and $c_1$ in an eigenstate of parity. Figure S\ref{fig:Circuit_a} shows an explicit example where $c_0=c_1=1/\sqrt{2}$. This in turn allows changes in parity to serve as the error syndrome for the loss of a photon in a logical qubit:
\begin{align}
\ket{\psi_{\mathrm{init}}}=&c_0\ket{0}+c_1\ket{1}\rightarrow c_0\ket{C^+_{\alpha}}+c_1\ket{C^+_{i\alpha}}\label{eq:cat_code}\\
\hat{a}_s(c_0\ket{C^+_{\alpha}}+c_1\ket{C^+_{i\alpha}})=&\mathcal{N^-_\alpha}[c_0(\ket{\alpha}-\ket{-\alpha})+ic_1(\ket{i\alpha}-\ket{-i\alpha})]\nonumber \\
=&c_0\ket{C^-_{\alpha}}+ic_1\ket{C^-_{i\alpha}}\nonumber\\
\hat{a}_s(c_0\ket{C^-_{\alpha}}+ic_1\ket{C^-_{i\alpha}})=&\mathcal{N^+_\alpha}[c_0(\ket{\alpha}+\ket{-\alpha})-c_1(\ket{i\alpha}+\ket{-i\alpha})]\nonumber \\
=&c_0\ket{C^+_{\alpha}}-c_1\ket{C^+_{i\alpha}}\nonumber
\end{align}
Equation~\ref{eq:cat_code} shows that the cat code maps a photon loss error in the resonator field into a rotation by $\pi/2$ about the logical $Z$ axis, as seen from the factor of $i$ that comes out in front of $c_1$. With each successive error the parity of the basis states cycles between even and odd, while the encoded information continues to rotate about $Z$ in increments of $\pi/2$, returning to the initial state after four errors (Fig.~1, main text). Left uncorrected, an encoded state devolves into a mixture of cat states, and equivalently, a classical mixture of coherent states. By performing single-shot parity measurements, however, we repeatedly update our knowledge as to the parity of the state and infer the occurrence of an error when the parity changes~\cite{Sun:2013ha}. We thereby follow the stochastic evolution of the resonator state through the cat code loop, maintaining the coherence of the qubit despite errors.
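The photon-loss mapping above can be checked end-to-end with a small plain-Python simulation in a truncated Fock space (an illustration we add here; the logical amplitudes $c_0$, $c_1$ are arbitrary): a single application of $\hat{a}_s$ flips the parity and advances the logical phase by exactly $\pi/2$.

```python
import cmath
import math

N = 60  # Fock-space truncation (ample for n_bar = 2)

def coherent(alpha):
    # Fock amplitudes of |alpha>
    amps, c = [], cmath.exp(-abs(alpha) ** 2 / 2)
    for n in range(N):
        amps.append(c)
        c *= alpha / math.sqrt(n + 1)
    return amps

def superpose(u, v, cu, cv):
    return [cu * a + cv * b for a, b in zip(u, v)]

def normalize(u):
    s = math.sqrt(sum(abs(a) ** 2 for a in u))
    return [a / s for a in u]

def annihilate(u):
    # a|n> = sqrt(n)|n-1>
    return [math.sqrt(n + 1) * u[n + 1] for n in range(N - 1)] + [0]

def parity(u):
    return sum((-1) ** n * abs(a) ** 2 for n, a in enumerate(u))

def fidelity(u, v):
    return abs(sum(a.conjugate() * b for a, b in zip(u, v))) ** 2

def cat(a, s):
    # normalized 2-cat: N(|a> + s|-a>), s = +1 (even) or -1 (odd)
    return normalize(superpose(coherent(a), coherent(-a), 1, s))

alpha = math.sqrt(2)                  # n_bar = 2
c0, c1 = 0.6, 0.8                     # arbitrary logical amplitudes
psi = normalize(superpose(cat(alpha, +1), cat(1j * alpha, +1), c0, c1))

psi_err = normalize(annihilate(psi))  # one photon-loss error

# expected result: parity flips and c1 picks up a factor of i
target = normalize(superpose(cat(alpha, -1), cat(1j * alpha, -1), c0, 1j * c1))
print(round(parity(psi)), round(parity(psi_err)))  # 1 -1
print(fidelity(psi_err, target))                   # ~1.0
```

The unit fidelity with the target state shows that, up to normalization, $\hat{a}_s$ acts on the codeword purely as the logical $Z_{\pi/2}$ rotation described above.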
In addition to the stochastic loss of single photons, as shown in eq.~\ref{energy_decay}, the energy of the resonator field decays deterministically to vacuum at a rate $\kappa_s$. Therefore, in the experimental implementation of the cat code, whenever we decode a resonator state back onto the ancilla transmon (see sec.~\ref{oc_pulses}) we must always take into account the decay of the cat state amplitude after a finite time of monitoring $t$:
\begin{align}
\ket{C^{\pm}_{\alpha}}\rightarrow&\ket{C^{\pm}_{\alpha e^{-\kappa_s t/2}}}=\mathcal{N^\pm_\alpha}_{(t)}(\ket{\alpha(t)}\pm\ket{-\alpha(t)})\\
\mathcal{N^\pm_\alpha}_{(t)}&=\frac{1}{\sqrt{2(1\pm e^{-2|\alpha(t)|^2})}}\nonumber\\
\alpha(t)&=\alpha e^{-\kappa_s t/2}\nonumber,
\end{align}
where again $\mathcal{N^\pm_\alpha}_{(t)}\rightarrow1/\sqrt{2}$ for large $\alpha(t)$. Of course, without any intervention, any state stored in the resonator eventually decays to vacuum, thereby erasing any stored information. This effect is not irreversible, however, as energy can be periodically repumped into the resonator using unitary gates. Moreover, it has recently been demonstrated that cat state amplitudes can be preserved indefinitely through the application of off-resonant pumps at carefully chosen frequencies~\cite{Leghtas:2015uf}. We have not yet implemented such a system in this work.
Perhaps surprisingly, the loss of a photon has no effect on the amplitude of a coherent state, as can be seen from a simple argument in~\cite{book:haroche06}, section 4.4.4. The authors explain that losing single photons simply updates one's knowledge that there must have been more photons in the resonator to begin with. By virtue of this curious property of coherent states, the amplitude of our logical states is independent of the number of photon jumps we detect.
\section{Encoding and Decoding Pulses}\label{grape}
\subsection{Optimal Control Pulses}\label{oc_pulses}
We employ optimal control pulses to encode and decode logical states in the resonator based on the Gradient Ascent Pulse Engineering (GRAPE) algorithm originally developed for pulse sequences in NMR spectroscopy~\cite{Khaneja:2005vd,deFouquieres:2011wm}. This algorithm is designed to numerically find a set of pulses that most accurately realizes a unitary operation or state transfer, taking an initial state $\ket{\psi(t=0)}$ to a final state $\ket{\psi(T)}$. We define the fidelity of the simulated state transfer $F_{oc}$ to be:
\begin{align}
F_{oc}=\frac{1}{K^2}\left|\sum_{k=1}^{K}\left<\psi_k(T)|\psi_k^\mathrm{tar}\right>\right|^2,
\end{align}
for a target state $\psi^\mathrm{tar}$, where $K$ is the total number of state transfers we wish to realize. In order to model the physical limits in output amplitude imposed by our electronics hardware, we add an amplitude constraint of the form $\lambda\sum_{t=1}^T e^{(a_t/h)^2}$, where $\lambda$ is an overall scaling; $a_t$ is the amplitude at each point in time of the pulse (discretized into $1\mathrm{ns}$ steps); and $h$ is an amplitude threshold, which we choose to be slightly below the maximum output amplitude our waveform generators can produce. This penalty term turns on sharply when $a_t$ reaches $h$. The scaling factor $\lambda$ is a proportionality constant that makes the total penalty much smaller than $1$ for pulses that have all amplitudes below $h$. We also include a derivative penalty to give preference to smoother pulses, similarly defined as $\lambda_d\sum_{t=1}^T e^{(a_t-a_{t-1})^2/h_d^2}$. With such a term included, the simulation favors pulses with changes smaller than $h_d$ between neighboring control points. The criterion we enforce is that $F_{oc}$ must exceed a value typically set to be $98\%$, although this constraint is relaxed when the overlap of basis states becomes non-negligible (see sec.~\ref{orthogonality}).
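For concreteness, the two penalty terms can be sketched as follows; the thresholds, scalings, and test pulses below are illustrative placeholders, not the values used in the experiment.

```python
import math

def amplitude_penalty(pulse, h, lam):
    # lam * sum_t exp((a_t/h)^2): negligible below threshold h, grows sharply near it
    return lam * sum(math.exp((a / h) ** 2) for a in pulse)

def derivative_penalty(pulse, h_d, lam_d):
    # penalizes changes between neighboring 1 ns control points larger than h_d
    return lam_d * sum(math.exp(((pulse[t] - pulse[t - 1]) / h_d) ** 2)
                       for t in range(1, len(pulse)))

# illustrative comparison: a smooth ramp vs. the same pulse with one near-threshold spike
h, lam = 1.0, 1e-6
smooth = [0.5 * math.sin(math.pi * t / 99) for t in range(100)]
spiky = list(smooth)
spiky[50] = 0.99 * h

print(amplitude_penalty(smooth, h, lam) < amplitude_penalty(spiky, h, lam))   # True
print(derivative_penalty(smooth, 0.1, lam) < derivative_penalty(spiky, 0.1, lam))  # True
```

Both costs stay small for pulses that respect the amplitude and smoothness bounds, so the optimizer is steered toward hardware-realizable waveforms without distorting the fidelity landscape elsewhere.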
In our implementation, we use the Hamiltonian defined in sec.~\ref{Hamiltonian} (excluding any terms involving the readout resonator) along with driving terms on the ancilla and resonator of the form $\varepsilon_a(t)\hat{b}^\dag+\varepsilon^*_a(t)\hat{b}$ and $\varepsilon_s(t)\hat{a}^\dag_s+\varepsilon^*_s(t)\hat{a}_s$. Temporal envelopes $\varepsilon_a(t)$ and $\varepsilon_s(t)$, which specify $\hat{U}(t)$, are discretized into $1\mathrm{ns}$ pieces. It is the shape and amplitude of these envelopes that we wish to numerically optimize in order to realize the desired state transfer. More explicitly, for the encoding pulses we wish to find a single unitary $\hat{U}_\mathrm{tar}$ such that for all $c_0$ and $c_1$ we have:
\begin{align}
\hat{U}_\mathrm{tar}(c_0\ket{g}+c_1\ket{e})\ket{0}\rightarrow&\ket{g}(c_0\ket{C_\alpha^+}+c_1\ket{C_{i\alpha}^+})
\end{align}
This unitary takes a quantum bit initially stored in the ancilla (with resonator in the vacuum) to a superposition of cat states in the resonator with the same amplitudes $c_0$ and $c_1$ (returning the ancilla to $\ket{g}$). Figure S\ref{fig:GRAPE}a shows a set of such encoding pulses on the ancilla and resonator.
The decoding pulses simply reverse this mapping. A single decoding pulse, however, cannot take two resonator states of different parity back to the same state of the ancilla since a unitary operation cannot bring two orthogonal states to a single state. For a given monitoring time we thus prepare two sets of decoding pulses, one for even states and one for odd states, and apply the correct one depending on the final parity of the state (see sec.~\ref{adaptive_decoding}). After monitoring errors for arbitrary lengths of time the decoding pulse must also take into account the substantial deviations of the coefficients in the Fock state expansion of the basis states from their original Poisson values. We thus use these pulses to remove any distortions in the resonator state due to the deterministic action of the Kerr Hamiltonian and deterministic amplitude damping due to energy decay. For example, after a monitoring time $T$ the decoding pulse for even parity realizes the following state transfer:
\begin{align}
\ket{g}\{e^{-i\frac{K_s}{2}\hat{a}_s^{\dag 2}\hat{a}_s^2T}\mathcal{N^+_\alpha}_{(T)}[c_0(\ket{\alpha(T)}+\ket{-\alpha(T)})+c_1(\ket{i\alpha(T)}+\ket{-i\alpha(T)})]\}\rightarrow&(c_0\ket{g}+c_1\ket{e})\ket{0}
\end{align}
For the data presented in Fig.~4 of the main text we therefore require a different pair of decoding pulses for each of the nine data points in the plot. The feedback controller stores these in memory and applies them at the appropriate time.
\subsection{Implementation}\label{grape_implementation}
Crucial to the success of finding an optimal control pulse with high fidelity is an accurate knowledge of the dominant Hamiltonian parameters. Furthermore, careful microwave hygiene at all points in the experimental chain is necessary to prevent undesired reflections and dispersions that can distort the pulse as it goes from room temperature to the setup inside the dilution refrigerator. Indeed, the fluctuations in the process fidelity decay curves can be mostly attributed to the imperfect application of the decodings. Figure S\ref{fig:GRAPE}b demonstrates a calibration sequence we use to tune the amplitudes on individual ancilla and resonator drives. Ideally, the encoding pulse returns the ancilla to the ground state and creates a cat state with mean parity $\left<\hat{P}\right>=+1$. In practice, both the parity and the final ground state occupation are slightly lower than their ideal values, and are sensitive to errors in pulse power. By sweeping the relative powers for both the ancilla and resonator drives, we find the maximum ground state occupation and parity value to occur at roughly equal scalings. The ultimate figure of merit comes down to the process fidelity of encoding and decoding with no time delay in between. As seen in Fig.~4a of the main text, the drop in process fidelity at $t=0$ is $\sim8\%$; assuming both pulses are similarly afflicted by all sources of error, this implies that each one contributes $\sim4\%$ infidelity, in line with expectations given that the ancilla coherence time $T_2=12\mathrm{\mu s}$ and the duration of both pulses combined is $\sim 1\mathrm{\mu s}$. This fixed cost of entrance, to realize the augmented codeword, afflicts any QEC implementation to some degree.
The full Wigner tomography shown in Fig.~S\ref{fig:GRAPE}c, defined as $W(\alpha) = \frac{2}{\pi}\braket{\hat{D}_\alpha \hat{P}\hat{D}_\alpha^\dagger}$ for a resonator displacement operator $\hat{D}_\alpha $~\cite{book:haroche06}, illustrates visually how we do indeed have the capability to encode any arbitrary state with the same pulses, where the only difference lies in the preparation of the initial qubit. Shown in the first two rows are examples of encoding and decoding all six cardinal points on the Bloch sphere with cat states of average photon number $\bar{n}=3$. In the third row we show the result of encoding a qubit into the Fock states $\ket{0}_f$, $\ket{1}_f$. We note that our numerical optimization can find encoding and decoding pulses for $\ket{0}_f$, $\ket{1}_f$ that are about half the length of those used for the cat code, owing to the smaller fraction of the Hilbert space that needs to be accessed. The shorter pulse length reduces the time the ancilla is entangled with the resonator and thus improves the process fidelity, accounting for the initial offset between the Fock state and cat code encodings. These results demonstrate that beyond offering the convenience of fast encoding and decoding that take into account distortions due to higher order Hamiltonian parameters, optimal control pulses provide a striking example of the levels of control possible with continuous-variable systems in a cQED framework.
\section{Sources of Loss}\label{losses}
In this section we expand upon the calculations that produce the predicted gains in lifetime of the cat code over the Fock state encoding listed in Table 1 of the main text. Each dominant source of decoherence, whether arising from errors in the codeword or in the syndrome interrogation, contributes to the probability that the cat code will fail in a single round of error correction. When isolated in a hypothetical situation as the only source of loss in the system, it can be quantified with simple estimates based on coherence times, thermal populations, and measurement fidelities. We stress, however, that the sources of loss detailed in the sections below do not act independently when considering the experimental reality. Indeed, simply adding all of the rates in parallel leads to an underestimate of the cat code performance. In section~\ref{optimal_rate} we analyze the system as a whole and show that we can analytically predict the data for the cat code decay shown in Fig.~4a of the main text.
In section~\ref{forward_prop} we arrive at a result that has substantial bearing on our understanding of fault tolerance regarding this implementation of the cat code. We find that forward propagation of errors into the codeword due to the ancilla $T_1$ constitutes the single dominant source of non-fault tolerance in the current implementation of the cat code. Indeed, changes in ancilla energy at unknown times result in unknown excursions of our encoded states out of their logical space. This is the central limiting feature of our QEC system that necessitates a lower measurement cadence, one that effectively balances the probability of dephasing due to ancilla decay with the probability of dephasing due to all other sources of error combined (quantified in sec.~\ref{optimal_rate}). We show that mitigating this form of ancilla decoherence promises to offer considerable improvements in cat code performance.
\subsection{Double-Errors}
The cat code is a first-order code, which means that the error syndrome we employ cannot detect the occurrence of multiple errors between two consecutive measurements. The probability of such events, $p_{2\varepsilon}$, can be calculated from the Poisson distribution:
\begin{align}\label{form:double_jumps}
p_{2\varepsilon}(t_M)=\frac{(\bar{n}\kappa_s t_M)^2}{2}e^{-\bar{n}\kappa_s t_M},
\end{align}
where we take the approximation that $\bar{n}$ is constant throughout the small time interval $t_M$. In this expression, $t_M\approx t_\mathrm{w}+1\mathrm{\mu s}$ is the total measurement time; $t_\mathrm{w}$ is the time delay between the end of one syndrome measurement and the beginning of the next; and the parity mapping together with ancilla readout totals $\sim1\mathrm{\mu s}$ (see sec.~\ref{exptflow} for exact timings). The average time between photon jumps is given by $1/\bar{n}\kappa_s$. A simple calculation using eq.~\ref{form:double_jumps} for measurement intervals $t_M\approx1\mathrm{\mu s}$ and $t_M\approx21\mathrm{\mu s}$ returns the predicted gains in the process fidelity lifetime over a Fock state encoding, $\tau_{f01}$, one would expect to see if missing such events were the only source of error. Defining a gain $G_{2\varepsilon}(t_M)=t_M/(p_{2\varepsilon}\tau_{f01})$, we find:
\begin{align}
G_{2\varepsilon}(1\mathrm{\mu s})\approx 125\\
G_{2\varepsilon}(21\mathrm{\mu s})\approx 6,
\end{align}
reproducing the results presented in Table 1 of the main text.
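These gains follow directly from eq.~\ref{form:double_jumps}. A short numerical check reproduces the right orders of magnitude; note that the resonator lifetime used below is an assumed, illustrative value rather than one quoted in this section.

```python
import math

tau_s = 250.0     # resonator lifetime in microseconds (assumed, for illustration)
tau_f01 = 290.0   # Fock-encoding process-fidelity lifetime in microseconds
n_bar = 2.0

def p_double(t_M):
    # Poisson probability of exactly two jumps in a window t_M
    x = n_bar * t_M / tau_s
    return 0.5 * x ** 2 * math.exp(-x)

def gain(t_M):
    return t_M / (p_double(t_M) * tau_f01)

print(round(gain(1)), round(gain(21)))  # order 100 and order 6
```

The quadratic suppression of double jumps at short $t_M$ is what makes frequent syndrome measurement attractive in the absence of other error channels.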
We also quantify how Quantum Non-Demolition (QND) our parity measurements are, or in other words, with what probability of demolition $p_d$ do we induce a photon jump by measuring parity. We use the methods studied extensively in~\cite{Sun:2013ha} to find that $p_d=0.1\%$ in our system (Fig.~S\ref{fig:QND}), comparable to the result in~\cite{Sun:2013ha}. The probability of dephasing, however, is $p_d^2$, since this is the probability of inducing two jumps in a row; this effect is negligible. The mechanism behind $p_d$ is a subject of our on-going research.
\subsection{Uncorrectable Errors}
The cat code cannot distinguish between photon loss ($\hat{a}_s$) and photon gain ($\hat{a}_s^\dag$). The probability of excitation due to $\hat{a}_s^\dag$ is given by $p_{\uparrow s}(t_M)=t_M n^s_{th}\bar{n}/\tau_s$. Given the low thermal population and high coherence of the resonator (Table~\ref{coherence}), we expect an $\hat{a}_s^\dag$ event on average every $\sim6\mathrm{ms}$ for $\bar{n}=2$, a rate of thermal excitation that is negligible compared to all other sources of loss. If this were the only source of code failure, the gain $G_{\uparrow s}=t_M/(p_{\uparrow s}\tau_{f01})$ would be independent of $t_M$ and equal to approximately $20$, as given in Table 1 of the main text.
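These numbers follow from a back-of-the-envelope calculation; the resonator lifetime and thermal population below are assumed, illustrative values chosen to be consistent with the quoted $\sim6\,\mathrm{ms}$ and gain of $\sim20$.

```python
tau_s = 250.0    # resonator lifetime in microseconds (assumed)
tau_f01 = 290.0  # Fock-encoding lifetime in microseconds
n_th_s = 0.02    # resonator thermal population (assumed)
n_bar = 2.0

# mean time between photon-gain events, and the t_M-independent gain
t_excite = tau_s / (n_th_s * n_bar)              # microseconds
gain_up = tau_s / (n_th_s * n_bar * tau_f01)     # t_M / (p_up * tau_f01); t_M cancels
print(t_excite, round(gain_up))                  # ~6 ms and ~20
```

Because $p_{\uparrow s}$ is linear in $t_M$, the gain is set entirely by the ratio of the thermal-excitation time to $\tau_{f01}$, independent of the measurement cadence.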
When these currently uncorrectable sources of error become dominant, the redundancy of the cat code can be augmented by increasing the size of the logical basis states to superpositions of three coherent states (and higher)~\cite{Leghtas:2013ff}. Although coherent states are not eigenstates of $\hat{a}_s^\dag$, for large enough amplitudes the addition of a single photon results in a distortion in the Poisson coefficients that can still be corrected by the pumping scheme described in~\cite{Leghtas:2015uf}.
\subsection{Readout Error}\label{readout_errors}
During the parity mapping sequence, ancilla dephasing due to $T_2$ is the dominant contribution to the overall drop in parity measurement fidelity. The parity of the state of course does not change upon an errant measurement, but our reaction to the result within the experimental flow does (see sec.~\ref{feedback_applications}). Were it not for the detrimental effects of ancilla back-action (see sec.~\ref{forward_prop}), the optimal approach would be to measure as quickly as possible to build up measurement statistics (see sec.~\ref{confidence}). Indeed this was the strategy implemented in~\cite{Sun:2013ha}, where the goal was to understand the dynamics of photon jumps and so error propagation was not considered. In that work, a quantum filter used Bayesian statistics to best estimate the parity of the state at any given time, and it would take about three consecutive agreeing measurements for the filter to switch from one parity to another. In this work, with improved fidelities and lifetimes we understand that with a measurement cadence of $t_M\approx1\mathrm{\mu s}$ it would take roughly $2\mathrm{\mu s}$ for an equivalent filter to converge on a parity with high probability. If a photon jump occurs within this effective bandwidth, the filter will not detect it, resulting in a readout error. With average photon jump times on the order of $120\mathrm{\mu s}$ for an $\bar{n}=2$ in the resonator, the probability of missing a jump is therefore $p_{mj}\approx2/120\approx1.5\%$. The gain is therefore approximately equal to $120\mathrm{\mu s}/(p_{mj}\tau_{f01})\approx25$, as in Table 1 of the main text.
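The effective bandwidth of such a filter can be illustrated with a toy version: a simple $k$-consecutive-agreement rule rather than the full Bayesian filter of~\cite{Sun:2013ha}. A lone errant measurement is rejected, but a true jump is only registered after $k$ agreeing results:

```python
def filter_parity(measurements, k=3, initial=+1):
    """Toy parity filter: switch the estimated parity (+1/-1) only after k
    consecutive measurements that disagree with the current estimate.
    Returns the filter's estimate after each measurement."""
    estimate, streak, out = initial, 0, []
    for m in measurements:
        if m != estimate:
            streak += 1
            if streak >= k:
                estimate, streak = m, 0
        else:
            streak = 0
        out.append(estimate)
    return out

# A lone readout error (the isolated -1) is ignored; a real jump
# (three agreeing -1 results) flips the estimate, but only k steps late.
record = [+1, +1, -1, +1, +1, -1, -1, -1, -1]
print(filter_parity(record))   # [1, 1, 1, 1, 1, 1, 1, -1, -1]
```

The delayed flip at the end of the record is exactly the convergence lag described above: jumps occurring within this window are missed.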
For the optimal measurement cadence $t_M\approx21\mathrm{\mu s}$ for $\bar{n}=2$, given that the probability to have photon jumps within $t_M$ is approximately $20\%$, the optimal strategy is to trust each result implicitly (see sec.~\ref{optimal_rate}). If ancilla dephasing were the only source of error, after the syndrome mapping time $\pi/\chi_{sa}$ the purity of the ancilla state would decrease to approximately $1-\pi/(\chi_{sa}T_2)\approx0.98$. The remaining $2\%$, which is mixture, would be measured to have the correct syndrome mapping result half of the time. The probability to dephase the qubit due to an errant syndrome measurement result becomes $p_\phi\approx1\%$. The gain $G_\phi(t_M)=t_M/(p_\phi\tau_{f01})\approx7$ returns the result presented in Table 1 of the main text.
\subsection{Ancilla Preparation}\label{anc_prep}
After every syndrome measurement, we reinitialize the ancilla to $\ket{g}$ regardless of the result (see sec.~\ref{reset}). Given its finite rate of excitation, $\Gamma_\uparrow$ (see sec.~\ref{setup}), after $t_M$ the ancilla may no longer be in the ground state with a probability $p_{\uparrow a}=\Gamma_\uparrow t_M$, which leads to an errant subsequent syndrome measurement. With a maximal syndrome measurement rate, ancilla preparation errors are negligible as some type of majority voting can be performed on groups of measurements to filter out this effect. Errors only occur on the order of $p_{\uparrow a}^2$ when majority voting in groups of three. The gain is therefore $G_{\uparrow a}(t_M)=t_M/(p_{\uparrow a}^2\tau_{f01})\approx2000$, leading to the high gain seen in Table 1 of the main text. For the optimal rate (see sec.~\ref{optimal_rate}), however, this gain is limited by $\Gamma_\uparrow$ only, and thus $G_{\uparrow a}(t_M)=1/(\Gamma_\uparrow\tau_{f01})\approx3$. This mechanism is of course also responsible for resonator dephasing, as is described in section~\ref{forward_prop}, and could be mitigated by stabilizing the ancilla ground state during $t_\mathrm{w}$.
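The benefit of majority voting is easy to quantify. Assuming independent per-measurement errors with probability $p$, a 2-out-of-3 vote fails with probability $3p^2(1-p)+p^3=O(p^2)$; a minimal sketch:

```python
def majority3_error(p):
    """Probability that a 2-out-of-3 majority vote is wrong when each
    individual measurement is wrong independently with probability p:
    either exactly two are wrong or all three are wrong."""
    return 3 * p**2 * (1 - p) + p**3

p = 1e-3   # illustrative per-step excitation probability Gamma_up * t_M
print(majority3_error(p))   # ~3e-6, i.e. O(p^2) as stated in the text
```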
\subsection{Orthogonality of Basis States}\label{orthogonality}
The non-orthogonality of the basis states in the cat code is an important consideration in deciding the initial amplitude of the encoded state. Larger cat states mean that the cat code can be employed for longer periods of time without applying any unitary gates or dissipative pumps~\cite{Leghtas:2015uf} to restore the amplitude. As this is largely a technical point that has been demonstrated not to be a fundamental limitation, the more salient question is at what point does increasing the cat state amplitude begin to adversely affect the performance of the code due to the increased rate of errors. The trade-off between non-orthogonality and average error rate is in fact very generous, however, since the overlap between two coherent states falls off exponentially with the difference between them in a resonator's phase space~\cite{book:haroche06} while the error rate increases linearly (in $\bar{n}$):
\begin{align}
\left<\alpha|\beta\right>&=e^{-|\alpha|^2/2}e^{-|\beta|^2/2}e^{\alpha^*\beta}\label{eq:overlap}\\
|\left<\alpha|\beta\right>|^2&=e^{-|\alpha-\beta|^2}
\end{align}
Using eq.~\ref{eq:overlap}, one can perform a similar calculation for the cat code basis states to obtain the following overlaps:
\begin{align}
|\left<C_\alpha^+|C_{i\alpha}^+\right>|^2&=\left(\frac{2e^{-\alpha^2}\cos(\alpha^2)}{1+e^{-2\alpha^2}}\right)^2\\
|\left<C_\alpha^-|C_{i\alpha}^-\right>|^2&=\left(\frac{2e^{-\alpha^2}\sin(\alpha^2)}{1-e^{-2\alpha^2}}\right)^2,
\end{align}
where $\alpha$ is understood to be a real number here. The trigonometric terms are a result of the interference changing with $\alpha$, or equivalently in this experiment, with time. Using a Python quantum simulation software package called QuTiP~\cite{Johansson20131234,Johansson20121760}, we simulate the effect of the increasing non-orthogonality on the efficacy of the optimized decoding pulses to faithfully transfer an encoded state in the resonator back onto the ancilla. We find that with an initial encoding size of $\bar{n}_0=2$ and after a time of $\sim100\mathrm{\mu s}$, the error in the decoding pulses for even parity states approximately equals $2\textrm{-}3\%$, while for the odd parity states it approximately equals $6\textrm{-}7\%$. Using the Poisson distribution to calculate the percentage of even and odd parity states after $\sim100\mathrm{\mu s}$, we find that the resulting infidelity due to overlapping basis states at the end of the tracking sequence amounts to roughly $4\textrm{-}5\%$. For earlier times, this error rapidly decreases toward $0$, indicating that even for small cat sizes, the approximation that the basis states are orthogonal is still quite accurate.
\subsection{Code-space Leakage}\label{leakage}
To a good approximation, for $\bar{n}\lesssim2$ the basis states of the cat code can be interpreted as superpositions of only the Fock states $\ket{0}_f\rightarrow\ket{7}_f$. In turn, this restricted space can be described in a binary representation that requires just three physical qubits (Fig.~S\ref{fig:Circuit_b}a), with coefficients given by the Poisson distribution of a coherent state of $\alpha\lesssim\sqrt{2}$ and with $d_0$ the least significant bit in $\ket{d_2d_1d_0}$ (e.g. $\ket{110}\equiv\ket{6}$). In this formulation, $d_0$ is the ``Parity Bit'': when $d_0=0$ the parity of the state is even and when $d_0=1$ the parity is odd. Note that the even and odd logical basis states are still all mutually orthogonal.
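This binary bookkeeping can be made concrete with a few lines of Python (a hypothetical helper for illustration only):

```python
def fock_to_bits(n):
    """Binary representation |d2 d1 d0> of Fock state |n>, n = 0..7;
    d0 is the parity bit (0 = even photon number, 1 = odd)."""
    assert 0 <= n <= 7
    d2, d1, d0 = (n >> 2) & 1, (n >> 1) & 1, n & 1
    return d2, d1, d0

for n in range(8):
    d2, d1, d0 = fock_to_bits(n)
    print(f"|{n}> -> |{d2}{d1}{d0}>, parity = {'odd' if d0 else 'even'}")
```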
Although in principle such an encoding scheme can be fashioned in a cQED system with three transmons, the error processes would be completely different, dominated by single transmon energy decay and dephasing rather than the correlated shift of bits arising from the action of some effective lowering operator. The utility of this representation, however, is that it emphasizes the possibility of excursions out of the code space; for increasing deviations $\epsilon_n$ of the coefficients $c_n$ from their ideal values as specified by the Poisson distribution, the overlap $(\bra{C_\alpha^+}+\sum_{n=0}^7\epsilon_n\bra{n})\ket{C_\alpha^+}\rightarrow 0$. This effect is of course continuous.
One may note that the Kerr of the resonator immediately changes these coefficients at a rate $K_s$. This effect, however, is deterministic, does not change the parity of the state, and in fact periodically brings the coefficients back to their original values (minus the effect of amplitude decay)~\cite{Kirchmair:2013gu}. It therefore does not constitute a source of dephasing since it can be taken as just a continuous change of basis in time. As long as we take this into account when decoding the encoded state back onto the ancilla at the end of our protocol, no information is lost (sec.~\ref{grape}). There are, however, several non-deterministic effects that do constitute dephasing, all arising from undesired interactions of the resonator with the ancilla, with the readout resonator, and again with itself (a second effect of Kerr to be described in the subsequent section). Some of these sources of loss are possible to partially recover from even in the current implementation of the experiment, while others are a central vehicle of non-fault-tolerance in this system.
\subsubsection{Undesired Couplings -- Self-Kerr \& Cross-Kerr}\label{undesired_couplings}
The non-commutativity of the resonator's Kerr Hamiltonian and the annihilation operator, $[\frac{K_s}{2}\hat{a}_s^{\dag2}\hat{a}^2_s,\hat{a}_s]\neq0$, results in the following relation~\cite{Leghtas:2012ff} (excluding an irrelevant global phase):
\begin{align}
\hat{a}_se^{-i\frac{K_s}{2}t\hat{a}_s^{\dag2}\hat{a}_s^2}&=e^{-i\frac{K_s}{2}t\hat{a}_s^{\dag2}\hat{a}_s^2}e^{-iK_st\hat{a}_s^\dag\hat{a}_s}\hat{a}_s.
\end{align}
This means that every time a photon jumps at a time $t_j$, the resonator state is rotated in phase space by an angle $\theta=K_st_j$. Without an infinite cadence of measurement, however, there is always some finite uncertainty in $t_j$, $\delta t_j$, and consequently in $\theta$, $\delta\theta$. A non-zero $\delta\theta$ results in an angular spread of cat code basis states in phase space. On a shot-by-shot case, it means that we lose track of the phase of the resonator state within the angular window defined by $K_st_M$. In other words, the state leaks out of the code space. We study the effects of such leakage by encoding a qubit into our codeword and then immediately thereafter intentionally decoding back at the wrong phase (Fig.~S\ref{fig:Circuit_b}b). The resulting Gaussian curve allows us to quantify the sensitivity of the cat code to uncertainties in the jump time.
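The operator identity above can be verified numerically: in a truncated Fock basis both the Kerr propagator and the number-rotation are diagonal matrices, so the identity holds exactly under truncation. A sketch with illustrative values for $K_s$ and $t$:

```python
import numpy as np

N = 12                                   # Fock truncation
n = np.arange(N)
a = np.diag(np.sqrt(n[1:]), k=1)         # annihilation operator: a|n> = sqrt(n)|n-1>
K, t = 0.025, 20.0                       # illustrative Kerr rate (rad/us) and time (us)

kerr = np.diag(np.exp(-1j * (K/2) * t * n * (n - 1)))  # exp(-i K t a†² a² / 2)
rot  = np.diag(np.exp(-1j * K * t * n))                # exp(-i K t a† a)

lhs = a @ kerr          # photon jump after Kerr evolution...
rhs = kerr @ rot @ a    # ...equals Kerr evolution, an extra rotation, then the jump
print(np.allclose(lhs, rhs))  # True
```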
If we were to measure the error syndrome very quickly, we would know $t_j$ to high accuracy and the resulting error due to uncertainty in jump time would be negligible given the second-order dependence of process fidelity on decoding angle. With a $t_M\approx1\mathrm{\mu s}$, the average angle of deflection $\bar{\theta}\approx1^\circ$. For this cadence, we can assume a uniform probability distribution that gives an uncertainty in angle of $\delta\theta\approx0.5^\circ$. Using the result of the fit in Fig.~S\ref{fig:Circuit_b}b, we average over the Gaussian distribution within a window of $\pm0.8^\circ$ to find a probability of dephasing $p_K\approx0.02\%$, an expectedly minor contribution. Weighting $p_K$ by the probability to have a jump within $t_M$, which is on the order of $1\%$, the predicted gain in such a scenario where this is the only source of error is consequently very high: $G_K=t_M/(p_K\cdot0.01\cdot\tau_{f01})\approx2000$, as in the table in the main text.
Given the necessity of spacing out parity measurements in time by $t_\mathrm{w}$ in order to maximize lifetime gain (see sec.~\ref{optimal_rate}), however, the absolute time of the jump and thus the value of $\theta$ inherit some non-negligible uncertainty, resulting in code space leakage. In other words, the coefficients $c_n$ now deviate from the Poisson distribution by $\epsilon_n$ that are unknown. For a typical $t_\mathrm{w}\approx 20\mathrm{\mu s}$ (see Table 1 in the main text) and the value of the resonator's Kerr (Table~\ref{table_params}) the uncertainty in jump angle is $\sim10^\circ$, resulting in a $\sim3\%$ loss of fidelity. This loss of course increases the more errors occur. Assuming the probability of detecting a photon jump is again $\sim 20\%$ per step, the loss in process fidelity is $p_K\approx0.2\times0.03$. The gain $G_K=t_M/(p_K\tau_{f01})\approx10$, as in the table in the main text. The rate $K_s$, however, is on the order of several kHz, and so in principle we can completely recover from this minor dephasing by interleaving the parity measurements with the dissipative pumping scheme demonstrated in~\cite{Leghtas:2015uf}, which pumps and refocuses slightly dephased cat states back to the original logical basis (restoring their amplitude as well).
Likewise, code space leakage occurs whenever a coherent state is injected into the readout resonator to measure the state of the ancilla. As coherent states have an uncertainty in their average photon number of $\sqrt{\bar{n}}$, the cross-Kerr interaction leads to a dephasing of the encoded state at a rate proportional to $\chi_{sr}$. Quantitatively, per measurement we see an average deflection of $\bar{\phi}=\bar{n}\chi_{sr}\tau_\mathrm{meas}\approx20^\circ$ of the resonator state in phase space for a readout pulse duration $\tau_\mathrm{meas}$. Given the value of $\chi_{sr}$ in Table~\ref{table_params}, we estimate that each readout pulse contains $\bar{n}\approx70$ photons. The uncertainty in the deflection scales as the square root of $\bar{n}$: $\delta\bar{\phi}=\sqrt{\bar{n}}\chi_{sr}\tau_\mathrm{meas}\approx2^\circ$. With $n$ measurements, the total uncertainty in the angle is $\sqrt{n}\delta\bar{\phi}$. For example, after ten measurements this uncertainty is still much smaller than the standard deviation of the Gaussian in Fig.~S\ref{fig:Circuit_b}b. Given the minimal effect on the process fidelity, this source of dephasing is excluded from the discussion in the main text.
We therefore treat the resonator anharmonicity and coupling to the readout resonator not as sources of non-fault tolerance, but rather as necessary technical trade-offs that can in principle be fully compensated. In fact, by applying real-time feedback to mitigate some of these effects we are able to substantially enhance the fidelities our QEC system can offer; these features are detailed in sec.~\ref{record_error_time}.
\subsubsection{Forward Propagation}\label{forward_prop}
Infidelities due to ancilla dephasing outlined in sec.~\ref{readout_errors} and the forward propagation of errors to be discussed in this section have a common denominator: ancilla decoherence. In the former, both phase flips and amplitude decay of the ancilla contribute to a decrease in parity measurement fidelity. In the latter, one can see by looking at the system Hamiltonian (see sec.~\ref{hamiltonian}) that the frequency of the resonator depends on the state of the transmon. Any random change in ancilla energy due to decay ($\sigma_-$) or excitation therefore rotates the resonator state at a rate $\chi_{sa}$, while phase flips ($\sigma_z$) have no effect. Figure S\ref{fig:Circuit_b}c depicts how one can model a parity measurement in the digitized version of the cat code. Employing a single ancilla, the parity measurement is nothing more than a cNOT gate between this ancilla and the parity bit $d_0$, which specifies the state's symmetry with respect to a $180^\circ$ rotation. The cNOT is written here equivalently as a controlled phase gate between two Hadamard gates (H)~\cite{NielsenQI}. The higher parity bits, $d_1$ and $d_2$, provide further information about the state's symmetry properties with respect to $90^\circ$ and $45^\circ$ rotations. The first panel shows that with no ancilla energy decay, the parity mapping is perfect since it leaves the coefficients $c_n$ unchanged at the end of the protocol.
The length of this mapping, $\pi/\chi_{sa}\approx 250\mathrm{ns}$, however, is a small but non-negligible fraction of the ancilla $T_1$. One can model this finite gate time by splitting the phase gate into two ``controlled-$\pi/2$'' gates and adding two phase gates to the next parity bit $d_1$. With a perfect parity mapping one obtains the exact same results as in the first panel. If the ancilla decays exactly half-way through the sequence, however, the resonator state inherits a phase of $\pi/2$ in phase space; this is a logical bit flip in our basis. One can continue splitting the gate into smaller and smaller pieces (e.g. third panel), where now the ancilla $T_1$ decay rotates the resonator state by an arbitrary angle that is known only if the time of ancilla decay is known. Of course, as we depend on the entangling interaction between the ancilla and resonator throughout the parity mapping time, in this implementation we have no way of detecting when this decay occurs. Equivalently, the photon number parity operator $\hat{P}=e^{i\pi\hat{a}^\dag_s\hat{a}_s}$ commutes with any rotation in phase space: $[\hat{P},e^{i\theta\hat{a}^\dag_s\hat{a}_s}]=0$. The environment gains information that we do not. Beyond the risk of $T_1$ decay during the parity mapping, the equally detrimental effect of $T_1$ decay of the ancilla during the readout pulse (before ancilla reset, see sec.~\ref{smart_tracking}) reduces the fidelity of all trajectories where one or more photon jumps occur. In addition, unknown ancilla excitations from $\ket{g}$ to $\ket{e}$ (or higher states), which occur at a rate proportional to $\Gamma_\uparrow$, dephase the resonator state similarly.
We can comb through the probabilities of a $T_1$ event in each of the three main steps throughout the entire duration of the sequence and calculate a gain in a manner similar to calculations in previous sections. During the parity mapping, the probability of ancilla decay is $p_{\downarrow a,1}\approx\pi/(\chi_{sa}\cdot2T_1)=0.004$; the probability of measuring ancilla $\ket{e}$ equals the probability of measuring a photon jump, and so the contribution to dephasing is $p_{\downarrow a,2}\approx(\bar{n}\kappa_s t_M)\tau_\mathrm{meas}/T_1=0.0001$, the probability of a photon jump times the probability of $T_1$ decay in a duration $\tau_\mathrm{meas}$; finally, during $t_M$ there is always the risk of ancilla excitation, with a probability $p_{\uparrow a,3}\approx t_M\Gamma_\uparrow=0.001$. The total probability of forward propagation is $p_{fp}(t_M)\approx p_{\downarrow a,1}+p_{\downarrow a,2}+p_{\uparrow a,3}$. For $\bar{n}=2$, we find that $p_{fp}(1\mathrm{\mu s})\approx0.5\%$. Defining the gain $G_{fp}(t_M)=t_M/(p_{fp}\tau_{f01})$, we find $G_{fp}(1\mathrm{\mu s})\approx0.7$. Performing the same calculation for $t_M\approx21\mathrm{\mu s}$, we find $G_{fp}(21\mathrm{\mu s})\approx2$. We thus obtain the numbers in the final row of Table 1 in the main text, and arrive at the key constraint of our system: measuring more frequently enhances the likelihood of forward propagation of errors. As seen in the preceding sections, by mitigating this decoherence channel we stand to gain substantially in all other aspects with faster syndrome measurements.
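This error budget can be tallied with a short script. The parameter values below are assumptions chosen only to be consistent with the order-of-magnitude numbers quoted in the text (e.g. an ancilla $T_1$ of $30\mathrm{\mu s}$ and $\Gamma_\uparrow^{-1}=1\mathrm{ms}$), not the measured device parameters:

```python
# Illustrative parameters (assumed, order-of-magnitude values from the text)
t_map = 0.25              # parity-mapping time pi/chi_sa in us
T1 = 30.0                 # ancilla T1 in us (assumed)
tau_meas = 0.4            # readout pulse duration in us
Gamma_up = 1e-3           # ancilla excitation rate in 1/us (assumed)
nbar, tau_s = 2.0, 250.0  # storage photons and single-photon lifetime (us)

def p_forward_prop(t_M):
    """Per-cycle probability of forward-propagating an ancilla error."""
    p1 = t_map / (2 * T1)                      # decay during the parity mapping
    p2 = (nbar / tau_s * t_M) * tau_meas / T1  # decay during readout after a jump
    p3 = Gamma_up * t_M                        # excitation during t_M
    return p1 + p2 + p3

print(f"{p_forward_prop(1.0):.4f}")   # ~0.005, i.e. ~0.5% per cycle
```

Note that the mapping-time term $p_1$ is independent of $t_M$, which is why faster syndrome measurement increases the forward-propagation rate per unit time.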
\section{Quantum Control Architecture}\label{quantum_control}
Experiments in quantum computation typically involve a continuous and carefully choreographed generation of pulses in order to implement a certain operation. This process may often require modifying the pulse generation in real-time as a response to the real-time analysis of returned signals (real-time feedback). Given that the time evolution of our quantum systems unfolds on nanosecond time scales, the efficiency of collecting, interpreting, and reacting to the returned signals is crucial to the success of the quantum experiment. The architecture demonstrated here comprises four major components: Digital-to-Analog converters (DACs) that output pulses; Analog-to-Digital converters (ADCs) that sample input signals; digital inputs/outputs (DIG-IOs) that enable inter-card communication as well as the triggering of certain digital RF components; and finally a Field Programmable Gate Array (FPGA) that dictates the flow of the experiment in time, orchestrating the three previous components to steer the quantum system to some desired state in real-time (see Fig.~\ref{fig:setup} for a schematic). This is our quantum controller.
Each hardware unit, or ``card," is an independent agent. It combines the functionality of an instrument like a commercially available Arbitrary Waveform Generator (AWG), a data sampling card, and certain data analysis functions crucial for efficient feedback all on one piece of equipment. Such a design dramatically enhances the possible levels of control, sophistication, and complexity of a quantum experiment. Furthermore, all cards run in parallel with no inherent dependency on each other. They may produce pulses that are sent to manipulate a particular aspect of the quantum system. Incoming signals from the quantum system may also be routed as inputs to the cards in some pre-defined way. Each card thus produces and analyzes different signals, and it can then distribute its findings among the other cards in real-time through a dedicated digital communication layer.
The common denominator in this scheme is the set of instructions loaded onto each card prior to the experiment, which coordinates how the cards work together. Once the experiment starts there is no one master card that must dictate the flow; each card can decide what to do independently. The result is a decentralized network of classical computation that provides a fast, efficient, and flexible platform to interface with the quantum system. Thus, by properly coordinating the signals sent and received by this network of cards, the user ultimately coordinates the interactions between distinct entities of the quantum system, all accomplished on time scales of just a few hundred nanoseconds.
\subsection{Hardware Specifications}
We use three Innovative Integration X6-1000M boards housed in a VPXI-ePC chassis with a VPX-COMEX module that produces a 1GHz clock and sharp, synchronized triggers. Each board contains two 1 GS/s ADCs, two 1 GS/s DAC channels, and digital inputs/outputs that are controlled by a Xilinx VIRTEX-6 FPGA loaded with in-house logic. The three boards are synchronized to control the storage resonator, readout resonator, and ancilla transmon. The readout signals are routed to the ADCs on the readout resonator board, whereafter the FPGA demodulates and thresholds the signal to determine the state of the ancilla ($\ket{g}$, $\ket{e}$, and higher). The feedback latency between the last sample in to the FPGA and the first sample out is approximately 200ns, providing us with a powerful tool to mitigate the effects of ancilla $T_1$ decay post-measurement (see section~\ref{feedback_applications}).
\subsection{Quantum Error Correction Process}\label{exptflow}
Here we summarize our full protocol (Fig.~S\ref{fig:timings}). Each run of the experiment cycles through the following steps:
\begin{enumerate}
\item System and ancilla reset -- using feedback, we make sure that the ancilla is in the ground state $\ket{g}$ and the resonator is in vacuum $\ket{0}_f$.
\item Qubit initialization -- in each realization of the experiment we apply a single-qubit gate on the ancilla to encode one of the six cardinal points on the qubit Bloch sphere. This over-complete set of states allows us to perform process tomography of the QEC system (see sec.~\ref{process_tomo}), and is equivalent to characterizing the action of a system on the qubit $\ket{\psi_\mathrm{init}}=c_0\ket{0}+c_1\ket{1}$.
\item Encoding -- we transfer the qubit from the ancilla into a superposition of cat states in the resonator. At the end of this step the state of the resonator is $\ket{\psi_\mathrm{init}}\rightarrow c_0\ket{C_\alpha^+}+c_1\ket{C_{i\alpha}^+}$, while the ancilla, to the best of our ability given experimental realities (see sec.~\ref{grape_implementation}), ends in $\ket{g}$, ideally completely disentangled from the resonator state.
\item Parity monitoring -- we identify photon jumps, or errors in our codeword, by monitoring the parity of the logical state in the resonator. This is done using the adaptive parity monitoring scheme (Fig.~2 of the main text, and elaborated upon in~\ref{smart_tracking}).
\item Decoding and correction -- the quantum bit of information is brought back onto the ancilla using the knowledge we gather while monitoring the error syndrome (see sec.~\ref{adaptive_decoding}). A different decoding pulse is used for each point in time due to the changing amplitude and Kerr evolution of the cat states. At the end of the decoding pulse, the resonator should ideally be completely in vacuum and with the measured error record the qubit is corrected following the cat code prescription (see sec.~\ref{cat_code}).
\item Tomography -- we perform qubit state tomography on the ancilla to compare the final qubit $\ket{\psi_\mathrm{fin}}$ with the initial state $\ket{\psi_\mathrm{init}}$. Using the results we fully characterize the QEC system process (see sec.~\ref{process_tomo}).
\end{enumerate}
Steps 1-2 prepare the system in its ground state to high accuracy; ancilla reset is more than $99.8\%$ effective and no residual thermal population is measured after resonator reset. Steps 3-5 are the error correction part of the experiment. This part does not assume any knowledge about the quantum state it is designed to protect or the decoherence mechanisms. In the final step we measure the ancilla that is ideally back in the initial state. Any deviation leads to a decay of the process fidelity in time. While the entire experiment is implemented as one big state machine, only the exact durations of the system and ancilla reset step are not predetermined.
\subsection{Feedback Details}\label{feedback_applications}
\subsubsection{System and Ancilla Reset}\label{reset}
As detailed in Table~\ref{coherence}, the thermal populations of the ancilla and the resonator are $\sim4\%$ and $<2\%$, respectively. This is enough to adversely affect not only our encoding pulses (see sec.~\ref{grape}), but also subsequent error syndrome detection. The protocol starts with the quantum controller measuring the state of the ancilla. If the result is the excited state $\ket{e}$, the controller applies a fast $\pi$ pulse (Gaussian envelope with $\sigma=2\mathrm{ns}$) to return the ancilla to $\ket{g}$ and measures again; if the pulse is not successful the loop is repeated, while if the pulse is successful the experiment continues. With feedback latencies of just $\sim200$ns (last sample in, first sample out), a readout pulse duration of $\tau_\mathrm{meas}\approx400$ns, and latencies due to cables into and out of the experimental setup totaling $\sim100\mathrm{ns}$, we are able to reset the ancilla to $>99.8\%$ in $\ket{g}$. This protocol was also demonstrated in~\cite{Riste:2012wc}.
Second, we use the now initialized ancilla to project the resonator state to the vacuum by applying long $\pi$ pulses on the ancilla (Gaussian envelope with $\sigma=600\mathrm{ns}>1/\chi_{sa}$) that address only the $\ket{0}_f$ Fock state~\cite{Schuster:2007ki}. If the result of a subsequent ancilla measurement is $\ket{e}$, with high probability the resonator is in vacuum. These pulses, however, have a lower fidelity ($\sim90-95\%$) owing to the ancilla $T_2$, and so we repeat this experiment until we measure $\ket{e}$ three times consecutively. Once this occurs, we once again employ the protocol above to reset the ancilla to $\ket{g}$ and continue to the encoding step. With a thermal population of $\sim2\%$ and a lifetime of $250\mathrm{\mu s}$, the average time between excitations of the resonator from $\ket{0}_f$ to $\ket{1}_f$ is on the order of $10\mathrm{ms}$, and so we are unable to measure any residual population in $\ket{1}_f$ after this purification procedure.
\subsubsection{Adaptive Parity Monitoring State Machine}\label{smart_tracking}
In measuring photon number parity as the error syndrome, two protocols may be used that both employ a Ramsey-style pulse sequence to map opposite parities to opposite poles of the ancilla Bloch sphere; they differ only in the sign of the second $\pi/2$ pulse. For example, when the resonator starts in an even parity cat state and the sign of the second $\pi/2$ pulse is positive, the ancilla ends up in $\ket{e}$ at the end of the protocol:
\begin{align}
\ket{\psi}=&\ket{g}\frac{1}{\sqrt{2}}(\ket{\alpha}+\ket{-\alpha})\\
\xrightarrow{+\pi/2}&\frac{1}{2}(\ket{g}+\ket{e})(\ket{\alpha}+\ket{-\alpha})\\
\xrightarrow{wait}&\frac{1}{2}\ket{g}(\ket{\alpha}+\ket{-\alpha})+\frac{1}{2}\ket{e}(\ket{\alpha e^{i\chi_{sa}t}}+\ket{-\alpha e^{i\chi_{sa}t}})\\
\xrightarrow{t=\pi/\chi_{sa}}&\frac{1}{2}\ket{g}(\ket{\alpha}+\ket{-\alpha})+\frac{1}{2}\ket{e}(\ket{\alpha e^{i\chi_{sa}(\pi/\chi_{sa})}}+\ket{-\alpha e^{i\chi_{sa}(\pi/\chi_{sa})}})\\
=&\frac{1}{2}\ket{g}(\ket{\alpha}+\ket{-\alpha})+\frac{1}{2}\ket{e}(\ket{\alpha e^{i\pi}}+\ket{-\alpha e^{i\pi}})\\
=&\frac{1}{2}\ket{g}(\ket{\alpha}+\ket{-\alpha})+\frac{1}{2}\ket{e}(\ket{-\alpha}+\ket{\alpha})\\
\xrightarrow{+\pi/2}&\frac{1}{2\sqrt{2}}\left[(\ket{g}+\ket{e})(\ket{\alpha}+\ket{-\alpha})+(\ket{e}-\ket{g})(\ket{-\alpha}+\ket{\alpha})\right]\\
=&\ket{e}\frac{1}{\sqrt{2}}(\ket{\alpha}+\ket{-\alpha})
\end{align}
Note, however, that if the parity is odd the ancilla ends up in $\ket{g}$; and likewise, if the parity is even but the sign of the second pulse is negative, the ancilla again ends up in $\ket{g}$. Thus, in implementing our QEC system, simply repeating just one of the two protocols during the error syndrome monitoring does not suffice, since with either one the ancilla spends much more time in the excited state for one of the two parities. This asymmetry provides a strong motivation for using real-time feedback.
Starting in even parity, and given the low probability ($\sim20\%$) to have an error between two consecutive syndrome measurements for the optimal measurement cadence, the controller plays the pulse sequence that maps even parity to ancilla $\ket{\psi_a}=\ket{g}$ and odd parity to ancilla $\ket{\psi_a}=\ket{e}$. When an error occurs and the parity changes, the controller pulses the ancilla from $\ket{e}$ back to $\ket{g}$ and then continues monitoring errors by employing the opposite protocol, which instead maps odd parity to $\ket{g}$ and even parity to $\ket{e}$. Therefore, throughout a single measurement trajectory, counting the number of errors amounts to just counting the number of times $\ket{\psi_a}=\ket{e}$ occurs.
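The controller's bookkeeping under this scheme can be sketched as follows, assuming error-free measurements (hypothetical helpers, not controller firmware):

```python
def simulate_outcomes(parities):
    """Ideal measurement record for a parity trajectory (list of +1/-1),
    assuming the state starts in even parity (+1): 'e' is returned exactly
    when the parity has flipped since the previous step, after which the
    controller resets the ancilla and flips the mapping."""
    mapped_to_g, record = +1, []
    for p in parities:
        if p == mapped_to_g:
            record.append('g')
        else:
            record.append('e')
            mapped_to_g = p      # controller switches protocol after reset
    return record

def count_errors_adaptive(record):
    """Under the adaptive scheme, every |e> outcome signals one error."""
    return record.count('e')

traj = [+1, +1, -1, -1, +1, +1, +1]   # two jumps occur along the way
rec = simulate_outcomes(traj)
print(rec, count_errors_adaptive(rec))   # ['g','g','e','g','e','g','g'] 2
```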
The benefits of employing the adaptive protocol, depicted in the main text (Fig.~2), cannot be overstated. Feedback latencies of just $\sim200\mathrm{ns}$ mean that the qubit spends just $\sim700\mathrm{ns}$ in $\ket{e}$ per error. Without feedback, this time can be far greater, perhaps as much as $\sim50\mathrm{\mu s}$ per error in a $100\mathrm{\mu s}$-long experiment, effectively guaranteeing cat state dephasing with our ancilla coherence times (see sec.~\ref{forward_prop}). As shown in Fig.~S\ref{fig:timings}, with the adaptive parity monitoring scheme and the ancilla reset described above, the full timeline of our measurement sequence is designed to have the ancilla in the ground state as much as possible. We therefore regard the role of the quantum controller to be crucial to our goal of realizing a QEC scheme without the use of any post-selection or corrections for measurement inefficiencies.
\subsubsection{Recording Error Time}\label{record_error_time}
As described in section~\ref{undesired_couplings}, the resonator state $\ket{\psi(t)}$ is rotated by an angle $\theta=K_st_j$ in phase space every time a photon jumps at time $t_j$: $\ket{\psi(t)}\xrightarrow{\hat{a}_s}e^{i\theta\hat{a}_s^\dag\hat{a}_s}(\hat{a}_s\ket{\psi(t)})$. When the difference in time between syndrome measurements $t_\mathrm{w}$ is non-zero, the uncertainty in jump time, which grows with increasing $t_\mathrm{w}$, leads to the aforementioned dephasing. For $n$ total syndrome measurements spaced by $t_\mathrm{w}$ there is, however, still a known average angle of rotation for a jump that is measured at step $j$: $\bar{\theta}_j=K_s(j-1/2)t_\mathrm{w}$. In other words, $K_s(j-1/2)t_\mathrm{w}$ is our best estimate of $t_j$ given the measurement cadence. We can and must take this angle into account to prevent substantially greater excursions out of the logical subspace.
In order to do so, our controller must record the step in the monitoring at which the photon jump occurs, or equivalently the time. Then, in real-time it must apply a rotation to the coordinate system of the resonator's phase space by an angle $\bar{\theta}(k)=\sum_{j=1}^k\bar{\theta}_j$ for $k$ jumps so that the decoding pulse at the end of the sequence is applied correctly. For $n$ monitoring steps there are $l=\frac{n!}{k!(n-k)!}$ different combinations of jump times for $k\leq n$ errors, and thus the controller must individually align all $l$ error trajectories that correspond to $k$ photon jumps on top of one another. For example, when $n=3$ one can have $2^n=8$ different monitoring outcomes $(0\equiv \mathrm{``no~error"}$ and $1\equiv \mathrm{``error"})$: $000,100,010,001,110,101,011,111$; in this case, the feedback rotates $100$ by $\bar{\theta}_1=K_st_\mathrm{w}/2$, $010$ by $\bar{\theta}_2=3K_s t_\mathrm{w}/2$, and $001$ by $\bar{\theta}_3=5K_st_\mathrm{w}/2$ so all three can be decoded with a single pulse. Employing this scheme increases the process fidelity of cases where jumps are detected by as much as $10-15\%$.
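As an illustration (our own sketch, not the controller's implementation), the total correction angle for a given binary measurement record follows directly from the per-step average angles $\bar{\theta}_j$:

```python
def correction_angle(record, K_s, t_w):
    """Total phase-space rotation to undo, given a binary measurement
    record (1 = error detected at that step), the Kerr rate K_s and the
    waiting time t_w between syndrome measurements.

    A jump detected at step j is assigned the average angle
    theta_j = K_s * (j - 1/2) * t_w.
    """
    return sum(K_s * (j - 0.5) * t_w
               for j, bit in enumerate(record, start=1) if bit)

# Reproducing the n = 3 example from the text (angles in units of K_s*t_w):
print(correction_angle([1, 0, 0], 1.0, 1.0))  # -> 0.5
print(correction_angle([0, 1, 0], 1.0, 1.0))  # -> 1.5
print(correction_angle([0, 0, 1], 1.0, 1.0))  # -> 2.5
```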
This aspect of the feedback highlights the complexity of the calculations that the controller performs in real-time. Furthermore, the controller can in principle handle an unlimited number of steps; since the number of combinations of jump times grows exponentially, its ability to efficiently perform and store the results of such calculations is a testament to the capability of the logic. In the future, when measurement rates become much faster, this will be an indispensable feature.
\subsubsection{Adaptive Decoding}\label{adaptive_decoding}
Finally, based on the knowledge of how many errors occurred, the controller decides in real-time to apply one of two decoding pulses, depending on even or odd parity, to map the resonator state back onto the ancilla. The necessity of these features stems from the fact that a unitary operation cannot map two initially orthogonal states onto a single final state (see sec.~\ref{grape}). Although this feature is simple to implement, it is in some sense the most crucial; applying the wrong pulse does not disentangle the resonator and ancilla at the end of the decoding, leaving the qubit in a completely incoherent mixture when tracing over the resonator state.
\section{Optimizing Cat Code Performance}\label{optimal_rate}
The presence of forward propagating errors in our system due to ancilla decoherence substantially alters the optimal strategy one normally seeks to employ in a QEC system. Typically, the goal is to suppress the occurrence of errors within the codeword to second order with measurements performed at the maximum rate permitted by the parameters of the system. With such a strategy, however, one necessarily entangles the logical states with the ancillary systems needed to extract the error syndromes for a substantial fraction of the QEC protocol's duration. Since the rate of photon jumps in our system is much lower than $1/\tau_\mathrm{meas}$, the probability that two errors occur within $\tau_\mathrm{meas}$ is considerably lower than the probability of ancilla-induced decoherence. Such a strategy thus results in a net loss; indeed, by measuring as quickly as possible, we end up dephasing the qubit more quickly than had we encoded it in the resonator's Fock states $\ket{0}_f$, $\ket{1}_f$.
We thus explore a different approach, one that instead slows down the syndrome measurement cadence to find an optimal balance between errors in the code and ancilla induced dephasing. We take the point of view that experimentally our task is to preserve a quantum bit of information for a total time $T$. The analytical model we present below then calculates the optimal measurement cadence and the predicted lifetime. It takes into consideration the basic measured parameters in our system: resonator and ancilla coherence properties, thermal populations, and measurement fidelities. The predictions we arrive at closely match the data we present in the main text. Such results indicate that not only can we successfully optimize and employ a measurement strategy that preserves a qubit beyond the break-even point, but given only a basic set of assumptions about the sources of loss in the system, namely those outlined in sec.~\ref{losses}, we capture the dominant mechanisms that set the performance metrics of our QEC system. Using this model, we can then predict the potential gains we expect to witness when certain key system parameters are enhanced. In particular, by improving ancilla coherence times to levels of $\sim100\mathrm{\mu s}$ (well within the range of current transmon technology), the cat code promises to provide gains of over an order of magnitude.
At the moment of decoding, what we have available is a record of measurements. As a function of this classical information we then act on the system and decode the state to the best of our knowledge. There are two questions we can ask:
\begin{enumerate}
\item [a.] For a given trajectory of photon jumps, what is the probability that the conclusion we obtained, based on the measurement record, is correct?
\item [b.] What is the probability distribution of possible trajectories that may produce a given measurement record?
\end{enumerate}
The first question relates to the optimization process. We wish to maximize the probability of correctly identifying the actual resonator error trajectory. Trajectories with either many errors or consecutive errors result in a lower probability of success, as those events are rare and can be polluted by measurement errors.
The second question relates to our confidence in the output (see sec.~\ref{confidence}). There are different error trajectories that can produce the same measurement record. The best strategy is to simply choose the most probable one. Our confidence in the output is then the probability that this scenario occurred conditioned on the measured record. The output is thus not only the final state, but also a measure of confidence. We can either ignore this extra classical information and treat the whole process as a black box (red curve, Fig.~4a of the main text) or we can also use this information to post-select the data relative to some required confidence constraints (purple curve, Fig.~4a of the main text).
\subsection{The Optimized Configuration}\label{opt_config}
In this section we summarize our findings from an analytical model fully derived in section~\ref{optconfig_deriv}. For a fixed, desired time $T$ for which we would like to correct a qubit, we define a configuration to be the combination of the following parameters:
\begin{enumerate}
\item The initial cat size, $\overline{n}_0=\left|\alpha\right|^2$.
\item The number of parity tracking steps $S$.
\item The step durations $\{t_1,t_2,\dots,t_S\}$, where $\sum_{k=1}^St_k=T$.
\end{enumerate}
The process fidelity, $F_{process}$, decays exponentially from 1 to 1/4 (the process fidelity of a completely mixed final state). Here we derive and optimize a scaled version, which decays from 1 to 0 at exactly the same rate and which we denote as the \emph{fidelity}:
\begin{equation}
F \equiv \frac{F_{process}-1/4}{3/4}.
\end{equation}
This is the probability that we successfully corrected the state. We can write the total fidelity as a product of four terms:
\begin{equation}
F(T,\overline{n}_0,S,\left\{t_k\right\}_{k=1}^S)=F_{\Gamma_\uparrow}(T)\cdot F_{ED}(T,\overline{n}_0)\cdot F_T(T,\overline{n}_0,S,\left\{t_k\right\}_{k=1}^S)\cdot F_{KD}(T,\overline{n}_0,S,\left\{t_k\right\}_{k=1}^S).
\end{equation}
\begin{itemize}
\item [$F_{\Gamma_\uparrow}$] Whenever the ancilla is excited to $\ket{e}$ we lose the encoded information (see sec.~\ref{forward_prop}). This affects our protocol and the $\ket{0}_f,\ket{1}_f$ Fock state encoding equally. It depends on $T$ alone and equals $e^{-T\cdot\Gamma_\uparrow}$.
\item [$F_{ED}$] The fidelities of the encoding and decoding pulses depend on the initial and final cat sizes which are $\overline{n}_0$ and $\overline{n}_0 e^{-\kappa_s\,T}$. The non-orthogonality is simulated and taken into account numerically (see sec.~\ref{orthogonality}).
\item [$F_T$] The loss of fidelity due to the monitoring itself depends on the ancilla's figures of merit throughout the time-scale of the parity mapping and projective measurement. It also depends on the cat size and $\kappa_s$ through the probability to miss photon jumps during a single step.
\item [$F_{KD} $] The uncertainty in the angle due to the Kerr deflection decreases the fidelity of the decoding pulse (see sec.~\ref{undesired_couplings}). We calculate the Kerr deflection distribution from the number of expected photon jumps and the step lengths together with the measured fidelity of the decoding pulse as a function of the angle (Fig.~S\ref{fig:Circuit_b}b).
\end{itemize}
If we ignore $F_{KD}$ we can show that the optimal fidelity can be written in the following form:
\begin{equation}
F^{OPT}(T,\overline{n}_0)=e^{-T \Gamma_\uparrow}\cdot F_{ED}(T,\overline{n}_0)\cdot e^{-\overline{n}_0[1-e^{-\kappa_s\,T}]/G},
\label{eq:opt}
\end{equation}
where $G$, the system gain, is a constant of the system that depends on the other system parameters ($\chi_{sa}, T_1,\dots$). When $\kappa_s T\ll 1$ we can approximate the optimized fidelity as:
\begin{equation}
F^{OPT}(T,\overline{n}_0)=e^{-T \Gamma_\uparrow}\cdot F_{ED}(0,\overline{n}_0)\cdot e^{-{\kappa_s\,T}\cdot \frac{\overline{n}_0}{G}},
\end{equation}
which shows that the decay rate of the quantum error corrected information is $G/\overline{n}_0$ times slower than the storage cavity decay rate $\kappa_s$. The process fidelity of the $\ket{0}_f,\ket{1}_f$ Fock state encoding decays $3/2$ times more slowly than $\kappa_s$. The break-even condition is therefore:
\begin{equation}
\frac{2G}{3\overline{n}_0}>1.
\end{equation}
As we increase $\overline{n}_0$, the gain in lifetime decreases. On the other hand, in order to have sufficient orthogonality between the logical basis states, $\overline{n}_0$ has to be high enough. Hence, an optimal $\overline{n}_0$ exists. Since eq. \ref{eq:opt} ignores $F_{KD}$, it expresses an upper bound for the fidelity, and thus $G$ has to be even larger in order to get an actual gain in lifetime. Better ancilla coherence times will increase the optimal measurement cadence and make $F_{KD}$ approach unity (see sec.~\ref{losses}). In this limit, eq. \ref{eq:opt} is exact and defines the limits of our scheme.
Figure S\ref{fig:analyticalopt}a shows how $G$ depends on $T_1$ and $T_\phi$ of the ancilla. With our ancilla's coherence times, $G$ is about 5. With $\overline{n}_0=2$, the ratio $2G/3\overline{n}_0$ equals 1.65. The actual gain is lower, since we need to take into account the effect of the Kerr deflection and the degradation of the decoding pulse due to loss of orthogonality. The decay due to $\Gamma_\uparrow$ affects both the cat code and the $\ket{0}_f,\ket{1}_f$ Fock state encoding and also lowers the gain in measured lifetimes.
We can also optimize the configuration when fixing the number of steps and taking into account both $F_{KD}$ and $F_{ED}$. Figure S\ref{fig:analyticalopt}b displays the measured fidelity and our model's prediction for the same configurations. It also displays the optimal expectations when forcing different numbers of steps. Our model predicts the measured data accurately and lets us find the optimal configuration as a function of the total duration $T$. We end up with a total predicted gain of 10\%, in line with the results in Fig.~4 of the main text.
\subsection{Deriving the Optimized Configuration}\label{optconfig_deriv}
Here we derive the optimal fidelity $F$ assuming $F_{KD}=1$. We follow a pessimistic approach: any error that may happen is regarded as a total loss of information. An error in the parity measurement could, in principle, be corrected by repeating it several times and taking a majority vote. In practice, although our measurement fidelities are very high, the errors that these extra measurements introduce are larger than those we wish to correct, so it is better to simply trust each result. This approach simplifies the analysis greatly: we can calculate the probability to successfully keep the information for each step independently and take the product to get the final total fidelity.
A successful step starts with the ancilla in $\ket{g}$, followed by either no jumps or a single photon jump during the delay, an accurate parity measurement and the ancilla back in $\ket{g}$ at the end. There are several failure mechanisms:
\begin{enumerate}
\item While waiting, the ancilla may have been excited to $\ket{e}$
\item Two or more photon jumps occurred during the step delay
\item The parity measurement returned the wrong answer
\item A successful parity measurement brought the ancilla to $\ket{e}$, but the following reset pulse failed to return it back to $\ket{g}$
\end{enumerate}
The probabilities to have zero, one, or more jumps are a function of the cat size at the beginning of the step and the step length. We express the step success probability as:
\begin{equation}
F_k = e^{-t_k \Gamma_\uparrow}\left[P_k(0)\cdot f_0+P_k(1)\cdot f_1\right],
\end{equation}
where $P_k(n)$ is the probability to have $n$ photon jumps in the $k$'th step, $f_0$ ($f_1$) is the conditional success probability when no (a single) photon jump occurred. The final success probability is then:
\begin{equation}
F/F_{ED} = \prod_{k=1}^Se^{-t_k \Gamma_\uparrow}\left[P_k(0)\cdot f_0+P_k(1)\cdot f_1\right]= \underbrace{e^{-T \Gamma_\uparrow}}_{F_{\Gamma_\uparrow}}\cdot\underbrace{\prod_{k=1}^S\left[P_k(0)\cdot f_0+P_k(1)\cdot f_1\right]}_{F_T}.
\end{equation}
From this point onward we focus only on $F_T$.
The success of the parity measurement depends primarily on the ancilla's $T_2$. On top of that, there is the readout fidelity (which is different for the ground and the excited states). When a single photon jump occurs, the ancilla ends up in $\ket{e}$. It may decay back to $\ket{g}$ before the reset pulse, in which case the reset pulse inadvertently sends it back to $\ket{e}$. This is a critical period of time during which we are vulnerable to $T_1$ decay of the ancilla:
\begin{eqnarray}
f_0&\approx&e^{-\frac{\pi}{\chi_{sa} T_2}}\cdot M_{gg}\\
f_1&\approx&e^{-\frac{\pi}{\chi_{sa} T_2}}\cdot M_{ee}\cdot e^{-\frac{\tau_{\mathrm{meas}}+T_{_{FB}}}{T_1}},
\end{eqnarray}
where $M_{gg}$ ($M_{ee}$) is the probability to measure correctly $\ket{g}$ ($\ket{e}$), $\tau_{\mathrm{meas}} = 400$ns is the readout pulse length and $T_{_{FB}}=332$ns is the feedback latency that includes delays due to the experimental setup (cables, etc.). Ancilla $T_1$ decay causes code failure no matter when it happens, which is why we take into account the whole duration until the ancilla is back in $\ket{g}$.
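These expressions are straightforward to evaluate numerically. In the sketch below, only $\tau_\mathrm{meas}$ and $T_{FB}$ are taken from the text; all other parameter values are illustrative placeholders, not the calibrated numbers of the experiment:

```python
import math

def step_success_probs(chi_sa, T2, T1, M_gg, M_ee,
                       tau_meas=400e-9, T_FB=332e-9):
    """Conditional success probabilities of a single monitoring step:
    f0 for no photon jump, f1 for a single jump (the ancilla must then
    survive in |e> until it is reset)."""
    parity = math.exp(-math.pi / (chi_sa * T2))   # parity-map dephasing
    f0 = parity * M_gg
    f1 = parity * M_ee * math.exp(-(tau_meas + T_FB) / T1)
    return f0, f1

# Illustrative values: chi_sa/2pi ~ 1.9 MHz, T1 = 35 us, T2 = 12 us
f0, f1 = step_success_probs(chi_sa=2 * math.pi * 1.9e6,
                            T2=12e-6, T1=35e-6, M_gg=0.999, M_ee=0.98)
print(round(f0, 4), round(f1, 4))
```

As expected, $f_1<f_0$: the single-jump branch pays an extra penalty for ancilla decay during readout and feedback.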
At the beginning of the $k^{th}$ step, the averaged photon number in the cavity is given by ($\overline{n}_0$ is the cat size at the beginning of the $k=1$ step):
\begin{equation}
\overline{n}_{k-1}=\overline{n}_0\cdot \exp\left(-\kappa_s \sum_{i=1}^{k-1}t_i\right).
\end{equation}
During the step delay, the number of photon jumps has a Poissonian distribution with the following mean value (the decay during the step itself is taken into account):
\begin{equation}
\lambda_k=\overline{n}_{k-1}\cdot\left[1-e^{-\kappa_s t_k}\right].
\end{equation}
We expect no photon jumps with probability $e^{-\lambda_k}$ and a single jump with probability $\lambda_k e^{-\lambda_k}$. Hence, the step fidelity is simply given by:
\begin{eqnarray}
F_T^{(k)} &=& f_0\, e^{-\lambda_k} + f_1\, \lambda_k e^{-\lambda_k}=\left(f_0+f_1\,\lambda_k\right)e^{-\lambda_k}\\
\nonumber&\downarrow&\\
F_T\left(T,\overline{n}_0,S,\left\{t_k\right\}_{k=1}^S
\right)&=&\prod_{k=1}^S\left[\left(f_0+f_1\,\lambda_k\right)e^{-\lambda_k}\right]
\label{eq:F_tilde_total}
\end{eqnarray}
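The per-step quantities above can be evaluated in a few lines (our own sketch; the parameter values are illustrative, not the measured ones):

```python
import math

def step_fidelity(n_start, kappa_s, t_step, f0, f1):
    """Mean jump number lambda_k for one step, and the step fidelity
    F_T^(k) = (f0 + f1*lambda) * exp(-lambda)."""
    lam = n_start * (1.0 - math.exp(-kappa_s * t_step))
    return lam, (f0 + f1 * lam) * math.exp(-lam)

# Illustrative numbers: 2-photon cat, tau_s = 250 us, 12.5 us step
lam, Fk = step_fidelity(n_start=2.0, kappa_s=1 / 250e-6, t_step=12.5e-6,
                        f0=0.98, f1=0.94)
print(round(lam, 4), round(Fk, 4))
```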
We can now optimize the step length for a fixed number of steps. For $S=1$, a single step, the solution is forced to be $t_1=T$. For two steps we will need to optimize the following expression:
\begin{eqnarray}
\max_{t_1} F_T\left(T,\overline{n}_0,S,\left\{t_1,t_2=T-t_1\right\}
\right) &=&\left[\left(f_0+f_1\,\lambda_1\right)e^{-\lambda_1}\right]\cdot\left[\left(f_0+f_1\,\lambda_2\right)e^{-\lambda_2}\right]\\
&=&\left(f_0+f_1\,\lambda_1\right)\cdot\left(f_0+f_1\,\lambda_2\right)e^{-(\lambda_1+\lambda_2)} \\
&=&\left(f_0+f_1\,\lambda_1\right)\cdot\left(f_0+f_1\,\lambda_2\right)e^{-\overline{n}_0(1-e^{-\kappa_s T})}
\end{eqnarray}
The expected number of photon jumps during the two steps sums up to the expected number of jumps during the whole duration, independent of how we partition the duration into two steps. We can therefore simply maximize the product of $\left(f_0+f_1\,\lambda_1\right)$ and $\left(f_0+f_1\,\lambda_2\right)$. Since the sum of these terms is constant, the maximum is achieved when they are equal, meaning $\lambda_1=\lambda_2$. In other words, we need to maintain a constant rate of photon jumps between the steps. This holds for any number of steps; for $S$ steps the mean number of photon jumps per step is given by:
\begin{equation}
\lambda_k=\frac{\overline{n}_0\left[1-e^{-\kappa_s T}\right]}{S}.
\end{equation}
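The equal-$\lambda_k$ condition also fixes the individual step lengths: writing $u_k\equiv e^{-\kappa_s\sum_{i\le k}t_i}$, a constant $\lambda_k=\overline{n}_0(u_{k-1}-u_k)$ requires $u_k=1-(k/S)\left[1-e^{-\kappa_s T}\right]$, so that $t_k=\kappa_s^{-1}\ln(u_{k-1}/u_k)$; the steps lengthen as the cat shrinks. A short numerical check (our own sketch, with illustrative parameters):

```python
import math

def optimal_steps(T, kappa_s, S):
    """Step durations t_1..t_S that keep the expected number of photon
    jumps per step constant (steps lengthen as the cat decays)."""
    decay = 1.0 - math.exp(-kappa_s * T)
    u = [1.0 - (k / S) * decay for k in range(S + 1)]
    return [math.log(u[k - 1] / u[k]) / kappa_s for k in range(1, S + 1)]

kappa_s, T, S, n0 = 1 / 250e-6, 100e-6, 5, 2.0
t = optimal_steps(T, kappa_s, S)

# Verify: each lambda_k is identical, and the durations sum to T
lams, elapsed = [], 0.0
for tk in t:
    n_start = n0 * math.exp(-kappa_s * elapsed)
    lams.append(n_start * (1 - math.exp(-kappa_s * tk)))
    elapsed += tk
print([round(l, 6) for l in lams], round(sum(t) * 1e6, 3), "us")
```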
Substituting this expression into eq. \ref{eq:F_tilde_total} and using the optimal step lengths, we obtain:
\begin{eqnarray}
F_T\left(T,\overline{n}_0,S\right) &=&\left[\left(f_0+f_1\,\frac{\overline{n}_0\left[1-e^{-\kappa_s T}\right]}{S}\right)\cdot e^{-{\overline{n}_0\left[1-e^{-\kappa_s T}\right]}/{S}}\right]^S\\
&=&\underbrace{\left(f_0+f_1\,\frac{\overline{n}_0\left[1-e^{-\kappa_s T}\right]}{S}\right)^S}_{\equiv F'_T}\cdot e^{-\overline{n}_0\left[1-e^{-\kappa_s T}\right]}
\end{eqnarray}
As an exercise, we can see how this expression behaves for $S$ much greater than the number of expected photon jumps during the whole duration, $S\gg\overline{n}_0[1-e^{-\kappa_s T}]$:
\begin{eqnarray}
\nonumber F_T(\overline{n}_0,T,S)&=&\lim_{S\rightarrow\infty}f_0^S \left(1+\frac{f_1\overline{n}_0[1-e^{-\kappa_s T}]}{f_0\,S}\right)^S\,e^{-\overline{n}_0[1-e^{-\kappa_s T}]} \\
\nonumber &\approx&f_0^S\,e^{\overline{n}_0[1-e^{-\kappa_s T}]\frac{f_1}{f_0}} \,e^{-\overline{n}_0[1-e^{-\kappa_s T}]} \\
\nonumber&=&f_0^S\,{e^{-(1-f_1/f_0)\overline{n}_0[1-e^{-\kappa_s T}]} } \\
\nonumber&\approx&f_0^S \left(\frac{f_1}{f_0}\right)^{\overline{n}_0[1-e^{-\kappa_s T}]} \\
&=&f_0^{S-\overline{n}_0[1-e^{-\kappa_s T}]}\,f_1^{\overline{n}_0[1-e^{-\kappa_s T}]}
\end{eqnarray}
What this limit means is that, when we measure frequently enough, the fidelity falls by a factor $f_0$ for each step in which no jump occurred and by $f_1$ for each step with a single jump. As the step size is so short, errors due to double photon jumps are negligible and therefore excluded.
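A quick numerical check of this limit (our own sketch, with illustrative fidelities):

```python
import math

def F_T(f0, f1, n_jumps_total, S):
    """Exact product form of F_T for S equal-lambda steps."""
    lam = n_jumps_total / S
    return ((f0 + f1 * lam) * math.exp(-lam)) ** S

f0, f1, nj = 0.98, 0.95, 0.4     # illustrative numbers
exact = F_T(f0, f1, nj, S=200)
limit = f0 ** (200 - nj) * f1 ** nj
print(round(exact, 5), round(limit, 5))  # the two agree for large S
```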
We continue with $F'_T$:
\begin{eqnarray}
n_j&\equiv&\overline{n}_0\left[1-e^{-\kappa_s T}\right]\cdot\frac{f_1}{f_0}\\
\nonumber&\downarrow&\\
F'_T\left(T,\overline{n}_0,S\right)&=&f_0^S\left(1+\frac{n_j}{S}\right)^S,
\end{eqnarray}
where $n_j$ is the number of expected photon jumps during the whole duration up to a scale factor of order 1. We will treat the number of steps as a continuous variable and find its optimum. In practice, we will use the closest integer:
\begin{eqnarray}
\nonumber \frac{d}{dS}F'_T&=&F'_T\cdot\frac{d}{dS}\left[S\cdot \log(f_0)+S\cdot \log\left(1+\frac{n_j}{S}\right)\right] \\
\nonumber &=&F'_T \cdot \left[ \log(f_0)+\log\left(1+\frac{n_j}{S}\right)+\frac{S}{\left(1+\frac{n_j}{S}\right)}\cdot\frac{-n_j}{S^2}\right] \\
&=&F'_T \cdot \left[ \log(f_0)+\log\left(1+\frac{n_j}{S}\right)-\frac{n_j}{S+n_j}\right]\mathop{=}^{want}0
\end{eqnarray}
We need to solve:
\begin{eqnarray}
\log(f_0)+\log\left(1+\frac{n_j}{S}\right)-\frac{n_j}{S+n_j}&=&0\\
\nonumber &\downarrow& \\
\log(f_0)+\log\left(1+\frac1r\right)-\frac{1}{1+r}&=&0
\label{eq:u}
\end{eqnarray}
where $r\equiv S/n_j$ (the number of steps per photon jump, up to a small correction). Although we cannot solve for $r$ explicitly, it is a function of $f_0$ alone, the success probability of the parity measurement conditioned on no photon jumps ($r(f_0)$). The optimal number of steps is then:
\begin{equation}
S^{OPT}=r(f_0)\cdot n_j=\frac{r(f_0)\cdot f_1}{f_0}\overline{n}_0\left[1-e^{-\kappa_s T}\right],
\end{equation}
leading to the optimal average number of photon jumps per step:
\begin{equation}
\lambda_k=\frac{\overline{n}_0\left[1-e^{-\kappa_s T}\right]}{S^{OPT}}=\frac{f_0}{r(f_0)\cdot f_1}.
\end{equation}
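Equation~\ref{eq:u} is readily solved numerically. The following pure-Python bisection is our own sketch, with an illustrative value of $f_0$:

```python
import math

def r_of_f0(f0):
    """Solve log(f0) + log(1 + 1/r) - 1/(1 + r) = 0 for r.

    The left-hand side decreases monotonically from +inf (r -> 0)
    to log(f0) < 0 (r -> inf), so a bisection always converges.
    """
    g = lambda r: math.log(f0) + math.log(1 + 1 / r) - 1 / (1 + r)
    lo, hi = 1e-6, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)          # bisect in log space
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return lo

f0 = 0.98                                 # illustrative parity-map fidelity
r = r_of_f0(f0)
print(round(r, 2))                        # steps per expected photon jump
print(round(math.sqrt(1 / (2 * (1 - f0))), 2))  # rough estimate, cf. below
```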
In the limit $f_0\rightarrow1$, $r(f_0)$ approaches infinity. We can expand eq. \ref{eq:u} in $1/r$:
\begin{eqnarray}
\nonumber \log(f_0)&=&\frac{1}{1+r}-\log\left(1+\frac{1}{r}\right)\\
\nonumber \log(1-[1-f_0])&=&\frac{1}{r}\cdot\frac{1}{1+\frac1r}-\left[\frac1r-\frac1{2r^2}+\cdots\right]\\
\nonumber -[1-f_0]-\frac12[1-f_0]^2-\cdots&=& \frac1r-\frac1{r^2}+\cdots -\left[\frac1r-\frac1{2r^2}+\cdots\right]\\
1-f_0&\approx&\frac{1}{2r^2}
\end{eqnarray}
For $f_0,f_1\sim1$, the optimized average number of jumps per step is simply $1/r$. Hence, $1/(2r^2)$ is the Poissonian probability to have two jumps. The last approximation states that in this limit we need to match the parity measurement infidelity with the double-jump probability. As long as $f_0$ is above 90\%, this approximation is correct to within 50\% of the exact value.
We can now substitute the optimal steps number and get the maximal success probability as a function of the total duration and initial cat size:
\begin{eqnarray}
\nonumber {F'_T}{}^{OPT}(T,\overline{n}_0)\equiv F'_T (T,\overline{n}_0,S^{OPT})&=&\left[f_0\left(1+\frac{n_j}{S^{OPT}}\right)\right]^{S^{OPT}} \\
\nonumber \log\left({F'_T}{}^{OPT}\right)&=&S^{OPT}\underbrace{\left[\log(f_0)+\log\left(1+\frac1r\right)\right]}_{=\frac1{1+r}}\\
&=&\frac{r(f_0)}{1+r(f_0)}\frac{f_1}{f_0}\cdot \overline{n}_0[1-e^{-\kappa_s T}]\\
\nonumber&\downarrow&\\
F_T{}^{OPT}(T,\overline{n}_0)= {F'_T}{}^{OPT}(T,\overline{n}_0)\cdot e^{-\overline{n}_0[1-e^{-\kappa_s T}]}&=& e^{+\overline{n}_0[1-e^{-\kappa_s T}]\frac{r(f_0)}{1+r(f_0)}\frac{f_1}{f_0}}\cdot e^{-\overline{n}_0[1-e^{-\kappa_s T}]}.
\end{eqnarray}
The second term is the uncorrected cat state decay, which is $\overline{n}_0$ times faster than $\kappa_s$. The first term counteracts this decay, and so we identify it as the action of the QEC.
We define the unit-less parameter $G$ as follows:
\begin{eqnarray}
G&\equiv& \frac{1}{1-\frac{f_1}{f_0}\cdot\frac{r(f_0)}{1+r(f_0)}}\\
\nonumber&\downarrow&\\
F_T{}^{OPT}&=& e^{-\frac{\overline{n}_0[1-e^{-\kappa_s T}]}{G}}\mathop{\approx}_{\kappa_s T\ll1}e^{-\kappa_s T\cdot\frac{\overline{n}_0 }{G}}.
\end{eqnarray}
In other words, we slow down the decay of the cat state by a factor of $G$. This number is a constant of the system, a function of the various infidelities. It approaches infinity as $f_0,f_1$ get closer to 1.
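A numerical sketch of $G$ (our own illustration; the fidelities below are placeholders, not the measured values):

```python
import math

def gain(f0, f1):
    """System gain G = 1 / (1 - (f1/f0) * r/(1+r)), where r solves
    log(f0) + log(1 + 1/r) - 1/(1 + r) = 0 (found here by bisection)."""
    g = lambda r: math.log(f0) + math.log(1 + 1 / r) - 1 / (1 + r)
    lo, hi = 1e-6, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    r = lo
    return 1 / (1 - (f1 / f0) * r / (1 + r))

f0, f1, n0 = 0.98, 0.94, 2.0              # illustrative values
G = gain(f0, f1)
print(round(G, 2), round(2 * G / (3 * n0), 2))  # gain and break-even ratio
```

Break-even requires the printed ratio $2G/3\overline{n}_0$ to exceed unity (sec.~\ref{opt_config}).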
\section{QEC System Process Tomography}\label{process_tomo}
In this section we focus on the details of the process fidelity results presented in the main text. In addition to expanding upon the calculations used to characterize the process of the QEC system, we address the consequences of post-selection and what it allows us to deduce about the confidence of a given measurement trajectory.
\subsection{Quantifying the Process Fidelity}\label{proc_fid_calc}
\subsubsection{The Process Matrix}
We seek to characterize the entire process of the QEC system on a qubit. The density matrix of the final state, $\rho_\mathrm{fin}=\ket{\psi_\mathrm{fin}}\bra{\psi_\mathrm{fin}}$, is the output of the entire QEC system process (see sec.~\ref{exptflow}) $\mathcal{E}(\rho_\mathrm{init})$: $\rho_\mathrm{fin}=\mathcal{E}(\rho_\mathrm{init})$, where $\rho_\mathrm{init}=\ket{\psi_\mathrm{init}}\bra{\psi_\mathrm{init}}$. Ideally, $\mathcal{E}(\rho_\mathrm{init})=\rho_\mathrm{init}$, where the process is simply given by the identity operator $\hat{I}$, corresponding to perfect error correction. In reality, however, due to decoherence in conjunction with experimental imperfections, $\mathcal{E}(\rho_\mathrm{init})$ is a combination of non-unitary and unitary operations on the encoded state.
In order to characterize the full process $\mathcal{E}(\rho_\mathrm{init})$, we find $\rho_\mathrm{fin}$ by performing state tomography of the ancilla following the correction step to measure the components of the final qubit Bloch vector $\vec{r}=\{r_x,r_y,r_z\}$: $\rho_\mathrm{fin}=(\hat{I}+r_x\hat{\sigma}_x+r_y\hat{\sigma}_y+r_z\hat{\sigma}_z)/2$, where $\hat{\sigma}_x,\hat{\sigma}_y,\hat{\sigma}_z$ are the single qubit Pauli operators. The results allow us to represent $\mathcal{E}(\rho_\mathrm{init})$ in the chi ($X$) matrix representation using the operator-sum notation~\cite{NielsenQI}: $\mathcal{E}(\rho_\mathrm{init})=\sum_{jk}\tilde{E}_j\rho_\mathrm{init}\tilde{E}_k^\dag X_{jk}$, where for a single qubit $\tilde{E}_0=\hat{I},\tilde{E}_1=\hat{\sigma}_x,\tilde{E}_2=-i\hat{\sigma}_y,\tilde{E}_3=\hat{\sigma}_z$ and the coefficients $X_{jk}$ comprise the process matrix $X$. This is a complex $4\times4$ matrix of trace $\mathrm{Tr}(X)=1$ that completely describes the action of our QEC system on an arbitrary input state. We define the fidelity $F$ to be the overlap of the measured chi matrix, $X^M$, with $X_0$, the ideal identity process: $F=\mathrm{Tr}(X^MX_0)$. In principle, only four cardinal points are needed to determine $X^M$, the two at the poles of the Bloch sphere ($+\vec{z}$, $-\vec{z}$) and those along $\hat{\sigma}_x$ ($+\vec{x}$) and $\hat{\sigma}_y$ ($+\vec{y}$). Following the derivation presented in~\cite{NielsenQI}, we can also find a simple formula for the $(0,0)$ entry of $X^M$, $X^M_{00}$, which is equivalent to the expression above for the fidelity to the identity process. It requires the results of qubit state tomography, $\vec{r}^{~\hat{n}}=\{r_x^{\hat{n}},r_y^{\hat{n}},r_z^{\hat{n}}\}$, for the four cardinal points ($\hat{n}=+\vec{x},+\vec{y},+\vec{z},-\vec{z}$)
\begin{align}\label{form:entanglement_fid}
X_{00}& = \frac{1}{4}(1+(r_x^{+\vec{x}}-\frac{r_x^{+\vec{z}}+r_x^{-\vec{z}}}{2})+(r_y^{+\vec{y}}-\frac{r_y^{+\vec{z}}+r_y^{-\vec{z}}}{2})+\frac{r_z^{+\vec{z}}-r_z^{-\vec{z}}}{2})
\end{align}
We nevertheless perform these calculations with both $+\vec{x}$, $+\vec{y}$ and $-\vec{x}$, $-\vec{y}$ to verify that there are no unexpected asymmetries in the cat code. Indeed, we find identical process fidelities regardless of whether or not we rotate the coordinate system by $\pi$.
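Equation~\ref{form:entanglement_fid} transcribes directly into code (a sketch of our own):

```python
def x00(r_px, r_py, r_pz, r_mz):
    """Process-matrix element X_00 (fidelity to the identity process)
    from the Bloch vectors r = (rx, ry, rz) measured after preparing
    the four cardinal states +x, +y, +z, -z."""
    return 0.25 * (1
                   + (r_px[0] - (r_pz[0] + r_mz[0]) / 2)
                   + (r_py[1] - (r_pz[1] + r_mz[1]) / 2)
                   + (r_pz[2] - r_mz[2]) / 2)

# A perfect identity process returns X_00 = 1:
print(x00((1, 0, 0), (0, 1, 0), (0, 0, 1), (0, 0, -1)))  # -> 1.0
# A fully depolarizing process (all Bloch vectors zero) returns 0.25:
print(x00((0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)))   # -> 0.25
```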
\subsubsection{Maximizing the Fidelity in Software}
Given that the optimal control encoding and decoding pulses do not realize the intended unitary perfectly at each time step, there could be some overall unintended rotation of the final qubit. Following the approach in~\cite{Schindler:2011ch}, we allow for one and the same rotation to be applied in software to all six cardinal points simultaneously that maximizes the overlap $\mathrm{Tr}(X^MX_0)$. This is a simple change of reference frame that in no way compensates for measurement infidelity or an artificial enhancement of performance. It is equivalent to applying a fixed pre-determined unitary operation on the decoded qubit to adjust its orientation that most closely matches that of the initial state. In practice, rotations do not exceed several degrees around each axis and have only a small effect on the reported results.
\subsubsection{Depolarization Errors}\label{depolarization}
As depicted in Fig.~4b of the main text and Fig.~S\ref{fig:depolarization}a, while monitoring the error syndrome of the cat code for longer times we find that all of the cardinal points on the qubit Bloch sphere shrink uniformly towards the fully mixed state $\rho_\mathrm{fin}=\hat{I}/2$. This feature is characteristic of a depolarization error channel~\cite{NielsenQI}:
\begin{align}
\mathcal{E}(\rho)&=\frac{p\hat{I}}{2}+(1-p)\rho\\
&=(1-\frac{3p}{4})\rho+\frac{p}{4}(\hat{\sigma}_x\rho\hat{\sigma}_x+\hat{\sigma}_y\rho\hat{\sigma}_y+\hat{\sigma}_z\rho\hat{\sigma}_z)\\
&=\sum_{j}\tilde{E}_j\rho\tilde{E}_j^\dag X_{jj}\\
&\rightarrow X=
\begin{pmatrix}
1-\frac{3p}{4}&0&0&0\\
0&\frac{p}{4}&0&0\\
0&0&\frac{p}{4}&0\\
0&0&0&\frac{p}{4}\\
\end{pmatrix}
\end{align}
This simple formula shows that the signature of depolarization errors is a diagonal process matrix $X$ in which the value of $X_{00}$ decreases with $p$, while the remaining diagonal components increase with $p$, for $p$ a probability between $0$ and $1$. As the data in Fig.~S\ref{fig:depolarization}a and Fig.~4a of the main text demonstrate, for the cat code of initial size $\bar{n}_0=2$ we have $X^M_{00}(t)=1-3p(t)/4=1/4+3e^{-t/\tau_{qec}}/4$, where $\tau_{qec}\approx 320\mathrm{\mu s}$. Equivalently, each term in eq.~\ref{form:entanglement_fid} decays with the time constant $\tau_{qec}$ as $X_{00}$ asymptotes to $0.25$.
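The chi-matrix form of the depolarizing channel can be verified numerically (a self-contained sketch of our own):

```python
def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dagger(A):
    """Conjugate transpose of a 2x2 matrix."""
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

# Operator basis {I, sigma_x, -i*sigma_y, sigma_z} used in the chi matrix
I_, sx, sz = [[1, 0], [0, 1]], [[0, 1], [1, 0]], [[1, 0], [0, -1]]
misy = [[0, -1], [1, 0]]                      # -i * sigma_y (real matrix)
BASIS = [I_, sx, misy, sz]

def depolarize(rho, p):
    """Apply the depolarizing channel through its diagonal chi matrix
    X = diag(1 - 3p/4, p/4, p/4, p/4)."""
    X = [1 - 3 * p / 4, p / 4, p / 4, p / 4]
    out = [[0.0, 0.0], [0.0, 0.0]]
    for x, E in zip(X, BASIS):
        term = mat_mul(mat_mul(E, rho), dagger(E))
        for a in range(2):
            for b in range(2):
                out[a][b] += x * term[a][b]
    return out

rho_g = [[1, 0], [0, 0]]                      # |g><g|
out = depolarize(rho_g, 0.5)
print(out[0][0], out[1][1])                   # -> 0.75 0.25
```

For $p=1/2$ the populations land at $\mathrm{diag}(0.75,\,0.25)$, matching $p\hat{I}/2+(1-p)\rho$ directly.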
The dominant source of depolarization stems from the incorrect application of the decoding pulse at the end of the QEC sequence due to either the forward propagation of errors or an incorrect knowledge of the number of errors that has occurred. If the decoding pulse is applied at either the wrong angle (quantified in sec.~\ref{leakage}) or attempts to decode the wrong parity (see sec.~\ref{grape_implementation}), at the end of the decoding sequence we are left with a completely mixed qubit after tracing over the resonator state. Although at first glance the basis states along the logical $Z$ axis, 2-cats, should be immune to double-errors within the waiting time between syndrome measurements, the resulting Kerr deflection after two photon jumps is sufficient to appreciably rotate the cat states out of the logical space. Thus qubit $T_1$ decay, errant syndrome mapping, as well as double jumps substantially degrade the efficacy of the decoding pulse in faithfully transferring the quantum information from the resonator back to the ancilla.
In contrast, both the Fock state and the transmon encodings are susceptible to generalized amplitude damping, which is given by the following process~\cite{NielsenQI}:
\begin{align}
\mathcal{E}(\rho) &= E_0\rho E_0^\dag + E_1\rho E_1^\dag + E_2\rho E_2^\dag + E_3\rho E_3^\dag\\
&\rightarrow E_0=\sqrt{1-n_{th}}
\begin{pmatrix}
1&0\\
0&\sqrt{1-f(t)}
\end{pmatrix}\\
&\rightarrow E_1=\sqrt{1-n_{th}}
\begin{pmatrix}
0&\sqrt{f(t)}\\
0&0
\end{pmatrix}\\
&\rightarrow E_2=\sqrt{n_{th}}
\begin{pmatrix}
\sqrt{1-f(t)}&0\\
0&1
\end{pmatrix}\\
&\rightarrow E_3=\sqrt{n_{th}}
\begin{pmatrix}
0&0\\
\sqrt{f(t)}&0
\end{pmatrix}
\end{align}
where $f(t)$ is a function of time of the form $f(t)=1-e^{-t/t_0}$; for the transmon $t_0=T_1$ and $n_{th}=n^a_{th}$; for the Fock state $t_0=\tau_s$ and $n_{th}=n^s_{th}$ (see sec.~\ref{fidelities}). As seen in Fig.~S\ref{fig:depolarization}b, all Bloch sphere vectors preferentially decay toward the ground state of the codeword. The off-diagonal elements in the density matrices of these encodings decay with a time constant that, in addition to the amplitude damping, also includes the pure dephasing in the system; these combined rates are $T_2$ for the transmon and $T_2^s$ for the Fock state components. Thus, we find that $r_x^{+\vec{x}}$ and $r_y^{+\vec{y}}$ in eq.~\ref{form:entanglement_fid} decay at $T_2^s$ ($T_2$) for the Fock state (transmon) encoding, while $(r_z^{+\vec{z}}-r_z^{-\vec{z}})/2$ decays at $\tau_s$ ($T_1$). The process fidelity as a function of time $X^M_{00}(t)$ therefore decays at two different rates, resulting in a double-exponential behavior. The single decay times reported in the main text (Fig.~4a) for each encoding strategy are simply the harmonic mean of the coherence times given in Table~\ref{coherence}, $\tau=3/(1/T_1+2/T_2)$. Visually one can only discern this trend in the decay of the transmon fidelity, as the time constant of the Fock state encoding is too long to reveal a double-exponential on the comparatively short time scales in the plot.
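One can check that these Kraus operators resolve the identity, $\sum_i E_i^\dag E_i=\hat{I}$, for any decay probability $f$ and thermal population $n_{th}$ (a minimal sketch of our own; parameter values are illustrative):

```python
import math

def gad_kraus(f, n_th):
    """Kraus operators E0..E3 of generalized amplitude damping with
    decay probability f and thermal population n_th (2x2, real)."""
    a, b = math.sqrt(1 - n_th), math.sqrt(n_th)
    E0 = [[a, 0.0], [0.0, a * math.sqrt(1 - f)]]
    E1 = [[0.0, a * math.sqrt(f)], [0.0, 0.0]]
    E2 = [[b * math.sqrt(1 - f), 0.0], [0.0, b]]
    E3 = [[0.0, 0.0], [b * math.sqrt(f), 0.0]]
    return [E0, E1, E2, E3]

def completeness(kraus):
    """sum_i E_i^T E_i for real Kraus operators; should be the identity."""
    S = [[0.0, 0.0], [0.0, 0.0]]
    for E in kraus:
        for a in range(2):
            for b in range(2):
                S[a][b] += sum(E[k][a] * E[k][b] for k in range(2))
    return S

t, t0, n_th = 50e-6, 35e-6, 0.02          # illustrative values
S = completeness(gad_kraus(1 - math.exp(-t / t0), n_th))
print(S)                                   # -> identity, up to rounding
```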
\subsection{Error Record vs. Error History}
\subsubsection{Confidence Level Estimation}\label{confidence}
With the calculations below, we expand upon why the process fidelity of the $2$-error case after two syndrome measurements, shown in Fig.~3 of the main text, differs substantially from that of the $0$- and $1$-error cases. We invoke Bayes' rule and the measured fidelity of successfully mapping parity in the presence of $\bar{n}=3$ photons in the cavity: $97.7\%$. Although very high, this number still leads to the somewhat surprising differences in the confidence of certain measurement results over others. It becomes clear that measurements of ``error'' confirmed by subsequent measurements of ``no error'' have a $\sim10\%$ higher fidelity than those without such a confirmation. Moreover, if two consecutive ``error'' measurements are recorded, the probability drops substantially, by $\sim20$--$30\%$. With these findings, many of the features in the data fall into place. One can see the effects of these conditional probabilities by looking at the 1-error cases in the Wigner tomography in Fig.~3 of the main text, where the parity and fringe contrast of the $01$ case appear to be less negative and sharp than that of $10$ ($0\equiv \mathrm{``no~error''}$ and $1\equiv \mathrm{``error''}$). Along the same lines, the case $11$ has by far the lowest fidelity, as confirmed by the process tomography in Fig.~3e. The single-shot records that come with each repetition of the monitoring sequence thus provide us with crucial information beyond simply how to bin each result. Indeed, if one can tolerate certain levels of post-selection, by removing the trajectories with the lowest confidence, we enhance the qubit lifetime by nearly a factor of two over the Fock state encoding.
We start by assuming that we have encoded a qubit in cat states of size $\bar{n}=3$ and that the ancilla is in $\ket{g}$ prior to the parity mapping. This is the system state after Fig.~3a in the main text. After a round of error monitoring where we use the parity protocol that maps even parity to $\ket{g}$ and odd parity to $\ket{e}$ (indicated by the superscript ``$-$''; see sec.~\ref{smart_tracking}), the probabilities to measure ancilla $\ket{g}$, $\ket{e}$ are given by:
\begin{align}
p(g)=&p^-(g|0\varepsilon)p(0\varepsilon)+p^-(g|1\varepsilon)p(1\varepsilon)\\
p(e)=&p^-(e|0\varepsilon)p(0\varepsilon)+p^-(e|1\varepsilon)p(1\varepsilon),
\end{align}
where $p(0\varepsilon)=e^{-(\bar{n}e^{-t/\tau_s})t_\mathrm{w}/\tau_s}$ is the probability that the resonator state had $0$ parity jumps, $p(1\varepsilon)=1-p(0\varepsilon)$, and $p^-(g|0\varepsilon)$ and $p^-(e|1\varepsilon)$ are respectively the probabilities to measure $\ket{g}$ when the resonator state had $0$ parity jumps and $\ket{e}$ when the resonator had $1$ parity jump. Likewise, when we use the parity protocol that maps odd parity to $\ket{g}$ and even parity to $\ket{e}$ (indicated by the superscript ``$+$''), the probabilities to measure ancilla $\ket{g}$, $\ket{e}$ are given by:
\begin{align}
p(g)=&p^+(g|0\varepsilon)p(0\varepsilon)+p^+(g|1\varepsilon)p(1\varepsilon)\\
p(e)=&p^+(e|0\varepsilon)p(0\varepsilon)+p^+(e|1\varepsilon)p(1\varepsilon),
\end{align}
where $p^+(g|0\varepsilon)$ and $p^+(e|1\varepsilon)$ are respectively the probabilities to measure $\ket{g}$ when the resonator had $0$ parity jumps and $\ket{e}$ when the resonator had $1$ parity jump. In our system, for average photon number $\bar{n}=3$ in the resonator, $p^-(g|0\varepsilon)=p^+(g|0\varepsilon)=0.983$ and $p^+(e|1\varepsilon)=p^-(e|1\varepsilon)=0.971$.
We seek to predict the statistics for monitoring errors for two steps and an initial cat size of average photon number $\bar{n}=3$, as presented in Fig.~3e of the main text. Following the flow of Fig.~3a-d, assuming at time $t=0$ we start with an even parity state in the resonator and perform the first round of error monitoring that maps even parity to $\ket{g}$, after $13.8\mathrm{\mu s}$ we find $p(g)=0.841$ and $p(e)=0.159$. Using Bayes' rule, we can now calculate the conditional probabilities for the resonator to be in a certain parity state given the measurement outcome:
\begin{align}
p^-(0\varepsilon|g)=&\frac{p^-(g|0\varepsilon)p(0\varepsilon)}{p(g)}=0.995\\
p^-(1\varepsilon|e)=&\frac{p^-(e|1\varepsilon)p(1\varepsilon)}{p(e)}=0.910
\end{align}
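These two conditional probabilities follow from a few lines of arithmetic. The sketch below reproduces them from the numbers quoted above, with one simplification: the photon-number decay of $\bar{n}$ during the wait time is neglected when evaluating $p(0\varepsilon)$, so the results agree with the quoted values only to a few tenths of a percent.

```python
import math

# Measured parity-mapping fidelities for nbar = 3 (quoted in the text)
p_g_given_0 = 0.983   # P(measure g | 0 parity jumps)
p_e_given_1 = 0.971   # P(measure e | 1 parity jump)

# No-jump probability over one t_w = 13.8 us step, tau_s = 250 us, nbar = 3.
# Simplification: nbar is held fixed during the step.
nbar, t_w, tau_s = 3.0, 13.8, 250.0
p0 = math.exp(-nbar * t_w / tau_s)   # P(0 jumps), ~0.847
p1 = 1.0 - p0                        # P(1 jump)

# Outcome probabilities for the first syndrome measurement
p_g = p_g_given_0 * p0 + (1.0 - p_e_given_1) * p1
p_e = 1.0 - p_g

# Bayes' rule: confidence in the error record given the outcome
p_0_given_g = p_g_given_0 * p0 / p_g   # p^-(0eps|g), ~0.995
p_1_given_e = p_e_given_1 * p1 / p_e   # p^-(1eps|e), ~0.910
```

The asymmetry between the two confidences is the key output: a $\ket{g}$ result is trustworthy at the $99.5\%$ level, while an $\ket{e}$ result only at the $91\%$ level.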
The key point here is the difference in the confidence as to the true occurrence of an error when the ancilla ends up in $\ket{e}$. The small measurement infidelities together with the relatively low probability to have an error in the first place lead to a considerable difference of $\sim8\%$ between the two conditional probabilities $p^-(0\varepsilon|g)$ and $p^-(1\varepsilon|e)$. This difference leads to a higher likelihood for the parity meter to suggest the occurrence of another error in the encoded state in the subsequent measurement, and thus to a substantially lower confidence in any trajectory that indicates consecutive errors, as we show next.
If we measure $\ket{g}$, we continue using the same protocol, but now to obtain $p^{\pm}(g)$ and $p^{\pm}(e)$ (probabilities to measure $\ket{g}$ and $\ket{e}$ for the two different parity mapping protocols) we no longer have the luxury of knowing that we start in an even state and thus must use the conditional probabilities obtained above for the following:
\begin{align}
p^-(g)=&[p^-(g|0\varepsilon)p(0\varepsilon)+p^-(g|1\varepsilon)p(1\varepsilon)]p^-(0\varepsilon|g)+[p^-(g|1\varepsilon)p(0\varepsilon)+p^-(g|0\varepsilon)p(1\varepsilon)][1-p^-(0\varepsilon|g)]=0.834,
\end{align}
and $p^-(e)=1-p^-(g)=0.166$. Similarly for the case where we instead measure $\ket{e}$ and the protocol is flipped in the second round:
\begin{align}
p^+(g)=&[p^+(g|0\varepsilon)p(0\varepsilon)+p^+(g|1\varepsilon)p(1\varepsilon)]p^-(1\varepsilon|e)+[p^+(g|1\varepsilon)p(0\varepsilon)+p^+(g|0\varepsilon)p(1\varepsilon)][1-p^-(1\varepsilon|e)]=0.777,
\end{align}
and $p^+(e)=1-p^+(g)=0.223$.
We now have the probabilities to obtain the following measurement records, which closely match those presented in Fig.~3e of the main text:
\begin{align}
p_{0\varepsilon}=&p(g)p^-(g)=0.841\times0.834\\
&=0.701\\
p_{1\varepsilon}=&p(g)p^-(e)+p(e)p^+(g)=0.841\times0.166+0.159\times0.777\\
&=0.263\\
p_{2\varepsilon}=&p(e)p^+(e)=0.159\times0.223\\
&=0.036
\end{align}
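Carrying the same simplified model (constant $\bar{n}$ during each wait interval, as in the previous sketch) through the second round reproduces these record probabilities to within a few tenths of a percent:

```python
import math

# Parity-mapping fidelities and per-step jump probabilities (nbar held fixed)
p_g0, p_e1 = 0.983, 0.971
nbar, t_w, tau_s = 3.0, 13.8, 250.0
p0 = math.exp(-nbar * t_w / tau_s)   # P(0 jumps) per step
p1 = 1.0 - p0

# Round 1: outcome probabilities and Bayesian confidences
p_g = p_g0 * p0 + (1.0 - p_e1) * p1
p_e = 1.0 - p_g
c_g = p_g0 * p0 / p_g                # p^-(0eps|g)
c_e = p_e1 * p1 / p_e                # p^-(1eps|e)

# Round 2: outcome probabilities conditioned on the first result
hit = p_g0 * p0 + (1.0 - p_e1) * p1    # P(g | parity tracking was correct)
miss = (1.0 - p_e1) * p0 + p_g0 * p1   # P(g | parity tracking was wrong)
pm_g = hit * c_g + miss * (1.0 - c_g)  # p^-(g), after first result g
pp_g = hit * c_e + miss * (1.0 - c_e)  # p^+(g), after first result e

# Probabilities of the three possible error records
p_0err = p_g * pm_g                          # record gg
p_1err = p_g * (1.0 - pm_g) + p_e * pp_g     # records ge and eg
p_2err = p_e * (1.0 - pp_g)                  # record ee
```

By construction the three record probabilities sum to one, and they land close to the quoted $0.701$, $0.263$, and $0.036$.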
Beyond telling us that we understand the statistics of our system, this calculation also provides crucial information as to the confidence of certain trajectories over others. First, one may immediately note the slight asymmetry between measuring $\ket{g}$ and then $\ket{e}$ ($0.841\times0.166=0.140$) vs. the reverse order ($0.159\times0.777=0.124$). Indeed, with the following conditional probabilities for all possible error histories ($gg,eg,ge,ee$) we see the huge benefit of a ``confirmation'' $g$ measurement on the probability that the measured trajectory faithfully reflects the error trajectory of the encoded state:
\begin{align}
p^-(0\varepsilon|gg)=&\frac{p^-(g|0\varepsilon)p(0\varepsilon)p^-(0\varepsilon|g)}{p^-(g)}=0.993\\
p^+(1\varepsilon|eg)=&\frac{p^+(g|0\varepsilon)p(0\varepsilon)p^-(1\varepsilon|e)}{p^+(g)}=0.978\\
p^-(1\varepsilon|ge)=&\frac{p^-(e|1\varepsilon)p(1\varepsilon)p^-(0\varepsilon|g)}{p^-(e)}=0.869\\
p^+(2\varepsilon|ee)=&\frac{p^+(e|1\varepsilon)p(1\varepsilon)p^-(1\varepsilon|e)}{p^+(e)}=0.592
\end{align}
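The same toy model reproduces these four confidences to within a few percent (the residual discrepancy comes from neglecting the decay of $\bar{n}$ between rounds), and in particular it captures the sharp drop in confidence for the $ee$ record:

```python
import math

# Same simplified two-round model as above (nbar held fixed each step)
p_g0, p_e1 = 0.983, 0.971
nbar, t_w, tau_s = 3.0, 13.8, 250.0
p0 = math.exp(-nbar * t_w / tau_s)
p1 = 1.0 - p0

p_g = p_g0 * p0 + (1.0 - p_e1) * p1
p_e = 1.0 - p_g
c_g = p_g0 * p0 / p_g                  # p^-(0eps|g)
c_e = p_e1 * p1 / p_e                  # p^-(1eps|e)

hit = p_g0 * p0 + (1.0 - p_e1) * p1
miss = (1.0 - p_e1) * p0 + p_g0 * p1
pm_g = hit * c_g + miss * (1.0 - c_g)  # p^-(g)
pp_g = hit * c_e + miss * (1.0 - c_e)  # p^+(g)

# Confidence in each two-measurement record (Bayes' rule)
c_gg = p_g0 * p0 * c_g / pm_g            # p^-(0eps|gg)
c_eg = p_g0 * p0 * c_e / pp_g            # p^+(1eps|eg)
c_ge = p_e1 * p1 * c_g / (1.0 - pm_g)    # p^-(1eps|ge)
c_ee = p_e1 * p1 * c_e / (1.0 - pp_g)    # p^+(2eps|ee)
```

The ordering $gg>eg>ge>ee$ emerges automatically: every ``error'' result that is followed by a confirming ``no error'' result gains confidence, while consecutive ``error'' results lose it.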
We can extend these calculations to a general simulation that handles an arbitrary number of correction steps to show that this simple approach captures many of the features we see in our data. A detailed look at tracking for $\sim68\mathrm{\mu s}$ in Fig.~S\ref{fig:bayesian}a, for example, details the individual process fidelities we expect to measure for every possible measurement record. A particularly noteworthy conclusion is that trajectories such as $1010$ have a higher expected fidelity than $0001$; although the former suggests more errors in the codeword, each measurement result $1$ is confirmed by a subsequent $0$, whereas this is not the case in the latter. In summary, with each error syndrome measurement, all measurement infidelities are pushed onto records that report higher and higher error numbers, and with time, these records become more and more common. As seen in Fig.~S\ref{fig:bayesian}b, the occurrence of high-confidence trajectories falls in time, albeit slowly. In this sense, although the post-selection substantially improves the quality of the final qubit, it nonetheless results in an exponential decay of acceptable trajectories, an expected trade-off.
\subsubsection{The Increasing Fidelity of the Two-Error Case}
In this section we present process fidelity decay curves akin to those of Fig.~4a, here shown for encodings of initial size $\bar{n}_0=3$ (Fig.~S\ref{fig:chibyjump}a). The time constant of the QEC curve is lower since by using $\bar{n}=3$ rather than $\bar{n}=2$ (main text), the rate of errors the QEC system must handle is higher and the parity measurement fidelity for $\bar{n}=3$ is $0.4\%$ lower than for $\bar{n}=2$ (see sec.~\ref{fidelities}). Taking this same data and plotting the fidelity decay conditioned on the number of errors detected, as shown in Fig.~S\ref{fig:chibyjump}b, demonstrates the contrasting trends for the $0$-error case versus the $2$-error case. As expected, the $0$-error case decays with a time constant of $\sim 630\mathrm{\mu s}$, consistent with double jumps and dephasing due to qubit excitation as the dominant sources of error. This is the longest time constant in our error correction system, and demonstrates the high fidelity we can achieve if we use the cat code only as an error indicator, whether that error be due to photon jumps or ancilla dephasing. The fidelity of the $2$-error case, however, increases with time as the occurrence of two-error trajectories that contain higher-confidence ``confirmation'' measurements within them increases as well. For example, with four monitoring steps, one has the trajectory $1010$, which has a substantially higher fidelity than anything containing a $11$ (Fig.~S\ref{fig:bayesian}). Thus the paradoxical rise in fidelity for the two-jump case with time simply amounts to an increase in the knowledge that two $\ket{e}$ results actually correspond to two errors in the encoded state. In terms of measured process matrices $X^M$, note that those shown in Fig.~S\ref{fig:chibyjump}c after $109\mathrm{\mu s}$ have the same form as those in Fig.~3e of the main text, except the fidelity of the two-error case in the former is notably higher.
The lower final process fidelity is due to the diminishing number of $0$-error cases with time.
Crucially, these results do not suggest that the cat code is ill-equipped to handle multiple errors throughout a monitoring sequence, but rather highlight the trade-off we make between mitigating the effects of ancilla back-action at the expense of statistics. Indeed, should we choose to post-select to remove any cases where we measure either anything with $11$ or any trajectory that ends in a $1$, we see a remarkable enhancement in the lifetime, and still keep a majority of the data (inset Fig.~S\ref{fig:chibyjump}b; Fig.~4b of the main text).
\newpage
\begin{table}[h]
\centering
\begin{tabular}{cc}
\hline\hline
Term & Measured \\
 & (Prediction) \\
\hline
$\omega_a/2\pi$ & 6.2815 GHz \\
$\omega_s/2\pi$ & 8.3056 GHz \\
$\omega_r/2\pi$ & 9.3149 GHz \\
\hline
$K_a/2\pi$ & 297 MHz \\
$K_s/2\pi$ & 4.5 kHz \\
$K_r/2\pi$ & (0.5 kHz)\\
\hline
$\chi_{sa}/2\pi$ & 1.97 MHz \\
$\chi_{ra}/2\pi$ & 1 MHz \\
$\chi_{sr}/2\pi$ & (2 kHz)\\
\hline\hline
\end{tabular}
\caption{\textbf{Hamiltonian parameters}}
\label{table_params}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{cccc}
\hline\hline
 & Ancilla & Storage & Readout \\ \hline
$T_1$ & $35\mu$s & - & -\\
$T_2$ & $12\mu$s & - & -\\
$\tau_s$ & - & $250\mu$s & $100$ns\\
$T^s_2$ & - & $330\mu$s & $-$\\
\hline
ground state population & $96\%$ & $>98\%$ & $>99.3\%$ \\
\hline\hline
\end{tabular}
\caption{\textbf{Coherence and thermal properties}}
\label{coherence}
\end{table}
\begin{figure*}[!ht]
\centering
\includegraphics[width=4in]{supp_setup_final.pdf}
\caption{\footnotesize
\textbf{Setup. (a)} Shown on the left is a picture of one half of the experimental device, consisting of two resonators machined out of high purity 4N aluminum and a transmon coupled to both of them. The resonators, one for storage and one for readout, are separated by a 2mm wall. The transmon is patterned on a thin piece of sapphire, and is comprised of a Josephson junction and two antennas of different lengths that extend from the junction into either cavity. All system modes (storage, ancilla, readout) are realized and coupled to one another through these antennas. Input and output couplers are used to excite each mode with microwave tones. The image on the right depicts the JPC amplifier, used as the first stage of amplification in our measurement chain. We operate it at a gain of $\sim 20\mathrm{dB}$ and bandwidth of $\sim 5\mathrm{MHz}$. \textbf{(b)} Schematic of the experimental setup, from room temperature at $290\mathrm{K}$ to the base temperature of the dilution unit ($\sim 10\mathrm{mK}$). All microwave drives are produced and collected using our feedback setup, outlined in purple. \textbf{(c)} A detailed schematic of our feedback architecture, which is comprised of three Input/Output (I/O) boards. The primary components of each board are: DACs to generate pulse envelopes, modulated at $50\mathrm{MHz}$ for the purposes of single-sideband modulation of local oscillators corresponding to each mode frequency; ADCs that digitize the incoming analog signal of the returning readout pulse, which is down-converted back to $50\mathrm{MHz}$ after exiting the fridge; digital markers that operate microwave switches and are used for inter-board communication; an FPGA that synchronizes all components to realize the full experimental flow of pulse generation, readout signal integration, and real-time feedback and calculations; and finally the PCIe that allows the FPGA to send data to the PC.}
\label{fig:setup}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=7in]{supp_circuit_final_a.pdf}
\caption{\footnotesize
\textbf{Cat states in the Fock basis.} The logical basis states $\ket{C_\alpha^+}, \ket{C_{i\alpha}^+}$ are termed ``2-cats" because they are superpositions of two coherent states. They can be expanded in the Fock basis to elucidate certain cat code features. In particular, although $\ket{C_\alpha^+}$ and $\ket{C_{i\alpha}^+}$ are both eigenstates of even parity with non-zero amplitudes only for even Fock state components $\ket{2n}$ ($n$ an integer), the phase of $\ket{1_L^+}$ results in an extra minus sign on $\ket{2+4n}$ in its Fock state expansion. When one has the equal superposition of $\ket{X_L^+}=\ket{C_\alpha^+}+\ket{C_{i\alpha}^+}$ (normalization omitted), consequent destructive interference results in a ``4-cat" state that has non-zero components every fourth Fock state, $\ket{4n}$. This is still an eigenstate of even parity, but unlike the basis states individually, which are eigenstates of $\hat{a}_s^2$, $\ket{X_L^+}$ and indeed any arbitrary superposition of $\ket{C_\alpha^+}$ and $\ket{C_{i\alpha}^+}$ are eigenstates of $\hat{a}_s^4$. In the simple example here, one can see that by applying $\hat{a}_s$ four times on $\ket{X_L^+}$ in the Fock basis, one returns to a $\ket{4n}$ expansion. Hence the modulo-four behavior of the cat code.
\label{fig:Circuit_a}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=7.2in]{supp_grape_final.pdf}
\caption{\footnotesize
\textbf{Optimal control pulses. (a)} Example encoding pulses for cat states of initial size $\bar{n}_0=3$. The pulses are played by the controller at the same time, minus a $2\mathrm{ns}$ offset to account for the different lengths of line going down to the resonator versus the ancilla. The y-axis is given in units of the voltage that needs to be applied on a particular mode (frequency $\omega_m$) for the same amount of time to insert one quantum of energy. For the resonator, this value is determined by finding the voltage needed to create a coherent state of amplitude $|\alpha|=1$ for a square pulse of duration $508\mathrm{ns}$; likewise, with the ancilla it is the voltage needed to perform a $\pi$ pulse in $508\mathrm{ns}$. Mixer quadratures $I$ and $Q$ are shown in red and blue, respectively. \textbf{(b)} The numerical optimization has no knowledge of our experimental imperfections, such as frequency-dependent reflections in our microwave lines and components. We calibrate a scaling factor on the amplitudes for both the resonator and ancilla drives by performing a 2D voltage sweep to see at which scalings we find both the maximum percentage of ancilla in its ground state $\ket{g}$ (ideally $100\%$) and the maximum parity of the resonator state (ideally $+1$). Shown here are images where we have already found what looks to be the optimal drive scaling, outlined in the black dotted circle. \textbf{(c)} Using joint Wigner tomography~\cite{Vlastakis:2015tw}, we plot $W_z(\beta)=\braket{\hat{\sigma}_z\hat{P}(\beta)}$ to demonstrate the capability of a single pair of pulses to encode an arbitrary vector on the qubit Bloch sphere. The first (third) row shows all six cardinal points encoded in the cat code basis (Fock states $\ket{0}_f,\ket{1}_f$). The only difference in the pulse sequence is the initial qubit preparation pulse. The second row shows the action of cat state decoding pulses that immediately follow the encoding shown in the first panel. 
The first and sixth tomogram demonstrate that we map the 2-cat along the real axis back to vacuum with the ancilla in $\ket{g}$ ($W(\beta)=\braket{\hat{P}(\beta)}$) and a 2-cat along the imaginary axis back to vacuum with the ancilla in $\ket{e}$ ($W(\beta)=-\braket{\hat{P}(\beta)}$). The remaining four joint tomograms should ideally have no visible features for perfect encoding and decoding since $\braket{\hat{\sigma}_z}=0$. Experimental imperfections and primarily ancilla decoherence, however, result in residual resonator-ancilla entanglement at the end of the sequence and thus slightly visible interference features in the Wigner functions.}
\label{fig:GRAPE}
\end{figure*}
\begin{figure*}[!ht]
\centering
\includegraphics[width=5in]{supp_qnd_final.pdf}
\caption{\footnotesize
\textbf{QND parity measurements.} We repeat the same experiment demonstrated in~\cite{Sun:2013ha}, which quantifies the effect of measuring parity on the effective resonator decay rate $1/\tau_{tot}$. Ideally, for perfectly QND measurements $1/\tau_{tot}$ should match the natural decay rate $1/\tau_s$, regardless of the measurement rate. In reality, there is a small probability $p_d$ that by measuring parity we induce more photon jumps. The main plot shows three decay curves of average parity $\braket{P}$ versus time for three different measurement repetition intervals: $2\mathrm{\mu s}$, $5\mathrm{\mu s}$, and $20\mathrm{\mu s}$; the initial displacement is $|\alpha|=2$. The inset shows a plot of the extracted time constants for these three and other points. The data fits well to a model in which the natural decay rate acts in parallel with an induced decay rate $p_d/\tau_{\mathrm{rep}}$, for $\tau_{\mathrm{rep}}$ the repetition interval. In this experiment we find $p_d$ to be $0.1\%$ per measurement, in agreement with the results in~\cite{Sun:2013ha}. Note that by using the adaptive parity monitoring protocol (see sec.~\ref{smart_tracking}), regardless of measurement rate each curve saturates at a measured $\braket{P}\approx0.95$. This is consistent with a thermal population of the storage resonator $n^s_{th}<2\%$, which reduces the average parity from ideally $+1$ of the vacuum to $\braket{P}\approx1-2n^s_{th}$, and a parity measurement fidelity of $98.5\%$ for no photons in the resonator. Such performance would be impossible without the crucial application of the adaptive parity monitoring protocol.}
\label{fig:QND}
\end{figure*}
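The parallel-rate model quoted in the caption above, $1/\tau_{tot}=1/\tau_s+p_d/\tau_{\mathrm{rep}}$, is easy to evaluate for the three repetition intervals shown; the sketch below uses only the numbers quoted there ($\tau_s=250~\mathrm{\mu s}$, $p_d=0.1\%$):

```python
# Parallel-rate model: natural photon loss plus measurement-induced jumps.
tau_s = 250.0   # natural single-photon decay time (us)
p_d = 0.001     # induced-jump probability per parity measurement

def tau_tot(tau_rep):
    """Effective decay time (us) for a given measurement repetition interval (us)."""
    return 1.0 / (1.0 / tau_s + p_d / tau_rep)

taus = {rep: tau_tot(rep) for rep in (2.0, 5.0, 20.0)}
```

Even at the fastest repetition interval ($2~\mathrm{\mu s}$), the induced jumps shorten the effective lifetime only from $250~\mathrm{\mu s}$ to $\approx222~\mathrm{\mu s}$, consistent with the near-QND character of the parity measurement.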
\begin{figure*}[!ht]
\centering
\includegraphics[width=5in]{supp_circuit_final_b.pdf}
\caption{\footnotesize
\textbf{Leaking out of the code space. (a)} The cat code logical basis states $\ket{C_{\alpha}^\pm}$ and $\ket{C_{i\alpha}^\pm}$ can be expanded in the Fock basis, where each component is rewritten in binary. For small cat sizes of $|\alpha|^2\lesssim2$, only three physical qubits are necessary to realize such a representation to high accuracy, as the Poisson coefficients $c_n$ for Fock states greater than $\ket{111}\equiv\ket{7}$ become vanishingly small. Should these coefficients deviate from their specified values without our knowledge, the integrity of the quantum information may start to suffer. \textbf{(b)} Measured loss of fidelity as a function of an intentional phase offset $\theta$ of a decoding pulse that immediately follows qubit encoding. With a standard deviation of $\sim24^\circ$, the broad Gaussian fit shows that for small deflections the fidelity suffers only quadratically. Hence, the Kerr-induced deflection per photon jump is not a major source of dephasing for low jump numbers even with a $t_\mathrm{w}\approx20\mathrm{\mu s}$. However, as ancilla decay during mapping can deflect the state by any angle, it causes a substantial degradation in process fidelity. \textbf{(c)} Code space leakage can be particularly acute if the ancilla $\ket{A}$ undergoes energy decay (or excitation) during the parity mapping. Shown in the first panel is an ideal parity mapping using the binary representation. The least significant bit, $d_0$, is the parity bit; the parity is even for $d_0=0$ and odd for $d_0=1$, regardless of $d_1$ and $d_2$. The parity mapping is thus a simple cNOT gate, depicted here as a controlled phase (solid black circles, $\pi$ phase shift) between two Hadamard gates (H). Such a circuit representation belies the fact that the mapping is finite in time, lasting $\pi/\chi_{sa}\approx250\mathrm{ns}$. 
To obtain a better approximation of the true dynamics, we split the cPHASE into two pieces, where the two empty circles are now controlled phase gates with a $\pi/2$ phase shift. A simple calculation in the Fock basis demonstrates that if the ancilla suddenly decays to $\ket{g}$ exactly halfway through the mapping, a logical bit flip occurs. For arbitrary decay times, we witness code space leakage, where the cat state is aligned with neither the real nor the imaginary axis. The third row shows an example of this for one more layer of granulation.}
\label{fig:Circuit_b}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics[page=1,width=\linewidth]{supp_exptflow_final.pdf}
\caption{\footnotesize
\textbf{Experimental flow. (a)} The six steps of the QEC protocol using a standard flow-chart convention. 1. System reset: we use a long selective $R_\pi^y$ pulse on the ancilla ($\sigma\sim600$ns) that addresses only the Fock state $\ket{0}_f$ of the resonator~\cite{Schuster:2007ki} to verify that it is indeed in the vacuum. In order to boost our confidence, we require three consecutive verifications to trust the results (counter ``cnt'' must be incremented from 0 to 3 for the process to continue). We then perform the ancilla reset protocol by measuring it and applying a short pulse ($\sigma=2$ns) to return it to $\ket{g}$ if it is found to be in $\ket{e}$. 2. Ancilla initialization: we apply a short pulse ($\sigma=2$ns) to encode the ancilla into one of 6 cardinal points on the qubit Bloch sphere. 3. Encoding: an optimized control pulse of length 508ns transfers the quantum information from the ancilla to the resonator, leaving the ancilla in $\ket{g}$. 4. Parity monitoring: we repeat the adaptive parity monitoring protocol. The number of steps and their durations are optimized for each total duration (e.g. three steps for $54\mathrm{\mu s}$ as in Fig.~4 of the main text), which we specify at the beginning of the experiment. Each monitoring step begins with a delay of some duration, followed by the appropriate Ramsey-like sequence that maps the ancilla back to $\ket{g}$ if there was no photon jump during the delay. We then read out the ancilla; if we find it in $\ket{e}$, we reset it as soon as possible. This happens $332$ns from the moment the readout pulse ends ($200\mathrm{ns}$ of FPGA calculation latency, plus experimental delays such as finite microwave cable lengths). 5. 
Decoding: after a short delay to finalize the estimation of the current state in the resonator, the decoding pulse is chosen in real-time and is played with a best estimate of a corrected resonator phase to account for Kerr-induced deflection for non-zero error cases (see sec.~\ref{undesired_couplings}). 6. Qubit state tomography: measuring the ancilla after a pre-rotation to find the final qubit density matrix. \textbf{(b)} The whole protocol set to scale, shown to emphasize that we interrogate the system for only a fraction of the entire sequence duration.}
\label{fig:timings}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=6in]{supp_optimization_final.pdf}
\caption{\footnotesize
\textbf{Simulation results. (a)} The gain $G$ as a function of $T_1$ and $T_\phi$ of the ancilla. With the parameters of our ancilla we get $G=4.96$, and the predicted gain over the Fock state encoding is $1.65$. This value is higher than the measured $10\%$ improvement since this plot does not include the effects of dephasing due to Kerr or the degradation of information due to overlapping logical basis states. A key point is that with ancilla coherence on the order of $100\mathrm{\mu s}$, we already expect to see gains of an order of magnitude. \textbf{(b)} The optimal predicted process fidelity for our system given ancilla and resonator coherence times, and resonator Kerr (squares). Comparing this simulation with the data (circles) shown in Fig.~4a of the main text, we find that our model faithfully predicts the measurement results at each point. We also display the expected process fidelity we would have obtained had we fixed the number of steps, in colored curves. Commensurate with the top axis of Fig.~4a of the main text, we chose the optimal configuration for each total time duration.
\label{fig:analyticalopt}
\end{figure}
\begin{figure*}[!ht]
\centering
\includegraphics[width=7.2in]{supp_depolarization_final.pdf}
\caption{\footnotesize
\textbf{Qubit depolarization. (a)} This data demonstrates that depolarization, in which every Bloch vector shrinks uniformly toward a fully mixed state at the origin, is the dominant error channel in our system. The six plots show the decay in time of the average $X$, $Y$, and $Z$ components of the qubit Bloch vector ($\braket{X}$, $\braket{Y}$, $\braket{Z}$ respectively) for each cardinal point after using the cat code QEC system. These six points are initialized by applying the identity ($I$), a $\pi$ pulse about the $Y$ axis ($R^y_\pi$), or $\pm\pi/2$ rotations about the $Y$ or $X$ axes ($R^y_{\pi/2}$,$R^y_{-\pi/2}$,$R^x_{\pi/2}$,$R^x_{-\pi/2}$) on the ancilla prior to the error monitoring (see sec.~\ref{exptflow}). This data is used to calculate the process matrix $X^M$ of the corrected qubit in Fig.~4a of the main text and produce the images in Fig.~4b. In each of the six cases, only the non-zero coordinate of the Bloch vector at zero time decays while the other two remain at $0$ throughout the entire tracking duration. We find cat states along $\ket{\pm X^+_L}$ to decay slightly more slowly, as these states are symmetric about both axes in the resonator's phase space, while $\ket{C_{\alpha}^+}$, $\ket{C_{i\alpha}^+}$ and $\ket{\pm Y^+_L}$ are symmetric about only one. Thus, rotations in phase space are somewhat less detrimental for $\ket{\pm X^+_L}$. \textbf{(b)} In contrast, the Fock state encoding shows decay curves typical of amplitude damping, or $T_1$-type decoherence, in which all coordinates preferentially decay to the energetically favorable resonator ground state $\ket{0}_f$. One can see in each plot that the value of the $\braket{Z}$ coordinate monotonically increases towards (or stays at) $+1$ regardless of the initial state, while every other coordinate decays to $0$.
\label{fig:depolarization}
\end{figure*}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{supp_bayesian_final.pdf}
\caption{\footnotesize
\textbf{Assessing measurement record confidence. (a)} Predicted statistics and confidence for the corrected cat code after four syndrome measurements over $68\mathrm{\mu s}$ of monitoring; $\bar{n}_0=2$. After four rounds of error correction there are sixteen possible result records: $0000, 1000,\dots,1111$. The left plot shows the predicted probability to measure each of these records individually (green bars, top axis), and the cumulative probability (bottom axis). In the right plot, we also show the predicted process fidelity conditioned on measuring each record (red bars are even parity, blue bars are odd parity). This conditioned fidelity corresponds to our confidence in the output. In the column separating the plots, the numbers in green correspond to the actual values of each individual green bar (left plot), and the numbers in red and blue correspond to the values of the red and blue bars (right plot). The axis of the right plot, cumulative process fidelity, is the cumulative sum of the predicted conditional probabilities weighted by the trajectory occurrence probability. It is interesting to compare the records $1010$ and $0001$. The first suggests two photon jumps (during the first and third steps) and the second suggests a single photon jump during the last step. The conditional process fidelity for $1010$ is actually higher. This is because measuring ancilla $\ket{g}$, which indicates no change in parity, has a higher probability of being correct than measuring $\ket{e}$, which does indicate a change of parity. Thus, every error ($1$) in $1010$ is ``verified'' by the subsequent measurement of no error ($0$), while an error in the last step of $0001$ has a higher likelihood to be a faulty measurement. Outlined in purple is the set of data we accept as ``high-confidence'' trajectories (Fig.~4, main text), wherein every $1$ is confirmed by a subsequent $0$. 
\textbf{(b)} This data shows the percentage of trajectories we accept in the post-selected data of Fig.~4 in the main text. The purple arrow corresponds to the predictions for $68\mathrm{\mu s}$ in (a). Although the percentage of high confidence trajectories falls off exponentially, it does so very slowly, and after $\sim100\mathrm{\mu s}$ we still keep $\sim80\%$ of the data.}
\label{fig:bayesian}
\end{figure}
\begin{figure*}[!ht]
\centering
\includegraphics[width=7.2in]{supp_chibyjump_final.pdf}
\caption{\footnotesize
\textbf{QEC program process tomography for $\bar{n}_0=3$. (a)} An identical plot to that shown in Fig.~4a of the main text, except the initial cat size here is $\bar{n}_0=3$. Every time constant involving cat code is lower than for $\bar{n}_0=2$ due to the increased error rate in the codeword for higher photon numbers, $\gamma=\bar{n}\kappa_s\approx3\kappa_s$ versus $\gamma\approx2\kappa_s$. As in the main text, on the top axis we plot the number of syndrome measurements used for each point in the corrected cat code; note that for this larger encoding we typically use more measurements for a given time step than for cat states of $\bar{n}_0=2$. \textbf{(b)} This plot shows the process fidelities conditioned on individual error trajectories for $j=0$, $1$, and $2$ errors (red circles, blue squares, and green triangles respectively). Unsurprisingly, the $0$ error case has the highest fidelity, followed by the $1$ and $2$ error cases. The (initially) surprising feature here is that the process fidelity of error cases increases with time, in particular for the $2$ error case. This is precisely a consequence of error syndrome measurement fidelity, wherein with more $2$ error trajectories we have more result records that have the aforementioned confirmation measurements. In other words, with more statistics (inset) we have greater knowledge that measured $2$ error cases in fact correspond to actual $2$ error cases in the encoded state. Non-monotonic variations in the data points throughout the entire curve are attributed to variations in the efficacy of the decoding pulses at different points in time. \textbf{(c)} Measured process matrices $X^M_j$ for $j=0$, $1$, $2$, and $3$ errors after $109\mathrm{\mu s}$ for an initial encoding size of $\bar{n}_0=3$; ideal processes are given by $X_j$ and are wire-outlined. 
As compared with the data in Fig.~3e of the main text, note that the $0$ error case has the greatest drop in fidelity, the $1$ error case goes down slightly, and the $2$ error case increases substantially. Note the $3$ error case also exhibits clear signatures of the correct $X_3$ form. The substantial drop in fidelity from $0.84$ of Fig.~3e in the main text to $0.69$ here is primarily due to the drop of $0$ error cases with time.}
\label{fig:chibyjump}
\end{figure*}
\clearpage
\footnotesize
\bibliographystyle{naturemag}
\section{Introduction}
Static and spherically symmetric solutions in General Relativity are relatively easy to construct and in many cases the equations can be reduced to quadratures, if not solved exactly. For example, the Reissner-Nordstr\"om (RN) black hole can be straightforwardly generalized to higher dimensions, with or without a cosmological constant. The situation for rotating geometries is much more complicated. While the Kerr metric \cite{Kerr:1963ud} and its neutral Plebanski extension \cite{Plebanski:1975xfb} can be generalized to higher dimensions \cite{Myers:1986un,Hawking:1998kw,Gibbons:2004uw,Chen:2006xh}, the generalization of the (charged) Kerr-Newman black hole \cite{Newman:1965my} of Einstein-Maxwell gravity to higher dimensions, on the other hand, has not been successful.
The situation improves with supergravities, the low energy effective theories of strings.
These theories typically have enhanced global symmetry, which makes it easier to construct new solutions. Charged rotating solutions in supergravities have been obtained through solution generating techniques, utilising the enhanced global symmetries in lower dimensions, see e.g.~\cite{Sen:1992ua,Cvetic:1996xz,Cvetic:1996dt}. Even though these global symmetries are broken in gauged supergravities where scalar potentials are introduced, many charged rotating AdS black holes were nevertheless constructed, e.g.~\cite{Chong:2005hr,Wu:2011gq}. These solutions involve mass, charge and angular momenta and are typically very complicated. For suitable such parameters, they describe black holes. However, the spacetime geometries for general parameters have not been thoroughly investigated.
Rotating geometry is particularly important in higher dimensions where black objects with new topologies other than spheres can arise. An important breakthrough in black hole research is the discovery of five-dimensional black rings whose horizon topology is not $S^3$, but $S^1\times S^2$ \cite{Emparan:2001wn,Pomeransky:2006bd}, and hence they are necessarily rotating. Supersymmetric black rings in five dimensional supergravities have since been constructed \cite{Elvang:2004rt}. All known black rings are asymptotically flat. There are no known examples of black rings that are asymptotic to the global anti-de Sitter (AdS) spacetimes. In particular, a no-go theorem was established that a supersymmetric AdS black ring is not possible \cite{Grover:2013hja}. The no-go theorem was further extended to the nonexistence of general extremal de Sitter black rings \cite{Khuri:2017zqg}.
In this paper, we focus on minimal supergravity in five dimensions, both gauged and ungauged, and study the global structures of the charged rotating solutions that were constructed in \cite{clpd5sol1}. The solution contains mass $M$, two equal angular momenta $J$ and electric charge $Q$. For suitable choices of these parameters, we find new spacetimes that can be best described as degenerate black rings (DBRs). These solutions are asymptotic to Minkowski or global AdS spacetimes. They are extremal black objects with the near-horizon geometry of locally AdS$_3\times S^2$. Specifically, the level surfaces of the spatial section outside the horizon are $S^3$, written as a $U(1)$ bundle over $S^2$. The $U(1)$ fibre untwists and collapses on the horizon and becomes the degenerate part of the AdS$_3$ factor. It appears that we have lost one dimension on the horizon with the radius of the $U(1)$ circle shrinking to zero. Unlike the usual black rings, our DBRs emerge in both gauged and ungauged supergravities.
The paper is organized as follows. In section \ref{sec:local}, we review the general charged rotating solutions in minimal gauged supergravity in five dimensions. We list possibilities of global structures that can emerge from these solutions. In section \ref{sec:dring}, we give the explicit solutions that describe DBRs. We analyse the near-horizon geometries of AdS$_3\times S^2$. In section \ref{sec:phase}, we observe that the original local solution has two branches of extremal black holes. They start at one end as the RN black holes of equal mass, but opposite charges, and join at the other end as the DBR. We show that there is a global discontinuity at the joining where the Gibbs free energies are not continuous. In section \ref{sec:soliton}, we notice that the soliton solutions and DBRs have the same charge/angular momentum relation. We can thus study the DBRs from the perspective of the solitons. We conclude the paper in section \ref{sec:conclusion}. In appendix \ref{app}, we present the global analysis for the black holes and time machines that emerge from the local solutions.
\section{The local solution}
\label{sec:local}
The bosonic sector of minimal (${\cal N}=2$) gauged supergravity in five dimensions consists of the metric $g_{\mu\nu}$ and the graviphoton $A_\mu$. The Lagrangian 5-form is
\begin{equation}
{\cal L} = (R-2\Lambda) {*\rlap 1\mkern4mu{\rm l}} -\fft12 {*F\wedge F} + \fft{1}{3\sqrt3} F\wedge F\wedge A\,,
\end{equation}
where $F=dA$. The theory has a negative cosmological constant $\Lambda=-6g^2$, where $g$ is the gauge coupling constant of the gravitino $\psi_\mu$, which is set to zero in this paper since we consider only the bosonic sector. It follows that $\ell=1/g$ is the radius of the AdS vacuum. Setting $g=0$ gives rise to ungauged minimal supergravity with the Minkowski vacuum. A general local solution describing charged rotating AdS black holes with two equal angular momenta was constructed in \cite{clpd5sol1}, given by
\begin{equation}
ds^2 = -\fft{f}{W} dt^2 + \fft{dr^2}{f} + \ft14 r^2 W (\sigma_3 + \omega dt)^2 +
\ft14 r^2 d\Omega_2^2\,,\qquad A = -\frac{\sqrt{3} a q}{2 r^2} \Big(\sigma_3 - \fft{2}{a} dt\Big)\,,\label{localsol}
\end{equation}
where
\setlength\arraycolsep{2pt} \begin{eqnarray}
W &=& 1+\frac{2 a^2 (\mu +q)}{r^4}-\frac{a^2 q^2}{r^6}\,,\qquad
\omega = \frac{2 a \left(q^2-q r^2-2 \mu r^2\right)}{r^6W}\,,\nonumber\\
f &=& W\left( 1- (1-a^2 g^2)\fft{r^2}{a^2}\right)+\frac{\left(r^4+a^2 q\right)^2}{a^2 r^6}\,.
\end{eqnarray}
Note that the spatial level surfaces are 3-spheres, each written as a $U(1)$ bundle $\sigma_3$ over $S^2$, with the round unit $S^3$ metric:
\begin{equation}
d\Omega_3^2 = \ft14 \sigma_3^2 + \ft14 d\Omega_2^2\,,\qquad
\sigma_3 = d\psi + \cos\theta d\phi\,,\qquad d\Omega_2^2 = d\theta^2 + \sin^2\theta d\phi^2\,.
\end{equation}
The $U(1)$ coordinate $\psi$ has a period $\Delta \psi=4\pi$. We shall also consider in some parts of the paper the more general lens spaces $S^3/{\mathbb Z}_k$, in which case we have
\begin{equation}
\Delta \psi = \fft{4\pi}{k}\,,\qquad k=1,2,3,\cdots\,.\label{psiperiod}
\end{equation}
Note that the original solution in \cite{clpd5sol1} has an additional parameter $\beta$, which was shown to be redundant \cite{Madden:2004ym}. We rename and set the parameters $(M,J,Q,\beta)$ of \cite{clpd5sol1} to be $(\mu, a, q,0)$ here.
The Riemann tensor squared is given by
\setlength\arraycolsep{2pt} \begin{eqnarray}
&&\ft14 r^{16}\, R^{\mu\nu\rho\sigma}R_{\mu\nu\rho\sigma}=\nonumber\\
&&34 a^4 q^4-12 r^6 \left(8 a^2 (2 \mu +q)^2+15 \mu q^2\right)-2 a^2 q^2 r^2 \left(192 a^2 (\mu +q)+113 q^2\right)\nonumber\\
&&+r^4 \left(384 a^4 (\mu +q)^2+96 a^2 q^2 (7 \mu +4 q)+127 q^4\right)+72 \mu ^2 r^8 + 2 g^2 r^2 \Big(113 a^4 q^4\nonumber\\
&&-2 a^2 r^6 \left(36 \mu ^2+13 q^2+36 \mu q\right)+6 a^2 r^4 \left(32 a^2 (\mu +q)^2+15 q^2 (2 \mu +q)\right)\nonumber\\
&&-a^2 q^2 r^2 \left(336 a^2 (\mu +q)+127 q^2\right)+q^2 r^8\Big) +
g^4 r^4 \Big(127 a^4 q^4-180 a^4 q^2 r^2 (\mu +q)\nonumber\\
&&+72 a^4 r^4 (\mu +q)^2-2 a^2 q^2 r^6+10 r^{12}\Big)\,.
\end{eqnarray}
The spacetime thus has a curvature power-law singularity at $r=0$. The metric approaches five-dimensional Minkowski ($g^2=0$), global AdS ($g^2>0$) or static de Sitter ($g^2<0$) spacetimes asymptotically as $r\rightarrow \infty$:
\begin{equation}
ds_5^2 = -(g^2 r^2 + 1) dt^2 + \fft{dr^2}{g^2 r^2 + 1} + \ft14 r^2 d\Omega_3^2\,.
\end{equation}
For complicated solutions such as \eqref{localsol}, the same local solution can describe different manifolds in disconnected regions. For example, over rotating black holes in one choice of radial coordinate can be mapped to under rotating black holes in a different set of coordinates \cite{Chen:2006ea}. For simplicity, we shall consider the spacetime in \eqref{localsol} with positive $r$ only and with the 3-sphere coordinates fixed.
The solution \eqref{localsol} has three integration constants $(\mu,a,q)$, parameterizing the mass, angular momentum and electric charge:
\begin{equation}
M=\ft14\pi \Big(3 \mu+ g^2 a^2 (\mu +q)\Big)\,,\qquad
J=\ft{1}{4} \pi a (2 \mu +q)\,,\qquad Q=\ft{1}{4} \sqrt{3} \pi q\,.\label{MJQ}
\end{equation}
The mass is calculated following the steps of \cite{Chen:2005zj}, based on the AMD formalism \cite{am,ad}. The angular momentum and electric charge are defined by
\setlength\arraycolsep{2pt} \begin{eqnarray}
J &=& \fft{1}{16\pi} \int_{S^3} {*dK}\,,\quad \hbox{with}\quad K=2\fft{\partial}{\partial \psi}\,,\nonumber\\
Q &=& \fft{1}{16\pi} \int_{S^3} \Big({*F} - \fft{1}{\sqrt3} A\wedge F\Big)\,.
\end{eqnarray}
Note that these quantities are presented assuming the level surfaces are $S^3$. If we consider the lens spaces $S^3/{\mathbb Z}_k$ instead, the extensive quantities \eqref{MJQ} should all be divided by the natural number $k$:
\begin{equation}
M_k=\fft{M}{k}\,,\qquad J_k = \fft{J}{k}\,,\qquad Q_k= \fft{Q}{k}\,.\label{mjqk}
\end{equation}
Depending on the parameters, the local solution \eqref{localsol} can describe a variety of objects with very different topologies and spacetime structures. If the function $f$ has no real root, the curvature singularity at $r=0$ is naked. For appropriate parameters, $f$ has real roots and we denote the largest one by $r_+$, which can be taken positive since only even powers of $r$ appear in $f$.
The metric is degenerate at $r=r_+$, and the nature of the corresponding collapsed cycles depends also on the behavior of the function $W$. While the function $f$ may have no real root, there must exist a real root of $W$ when the angular momentum is nonvanishing, since $W(\infty)=1$ and $W(0)=-\infty$. We refer to the largest root of $W$ as $r_L$. It is important to note that $r=r_L$ is not a coordinate singularity, but the velocity of light surface (VLS), a timelike hypersurface inside which the periodic $U(1)$ coordinate $\psi$ becomes timelike and hence closed timelike curves (CTCs) develop \cite{Cvetic:2005zi}. When $r_+$ exists, three possibilities were proposed in \cite{Cvetic:2005zi}:
\setlength\arraycolsep{2pt} \begin{eqnarray}
r_L<r_+:&& \hbox{black holes;}\nonumber\\
r_L>r_+:&& \hbox{time machines;}\nonumber\\
r_L=r_+:&& \hbox{solitons.}\label{first3}
\end{eqnarray}
In the appendix, we give the global analysis for the first two cases, and present some discussions on the solitons in section \ref{sec:soliton}. In this paper, we consider a fourth possibility:
\begin{equation}
f(r_+)=f'(r_+)=W(r_+)=0\,.\label{dbrcond}
\end{equation}
In other words, $r_L=r_+$ is a double root for $f$, but a single root for $W$. The solution is neither a black hole, nor a soliton but something that can be best described as an extremal degenerate black ring (DBR). We shall discuss this in the next section.
\section{Degenerate black rings}
\label{sec:dring}
In this section, we present our main result, the DBR solutions that arise from the condition
\eqref{dbrcond}. Since the solution is particularly simple when $\Lambda=-6g^2=0$, we shall analyse this case first and then turn to the more general $g\ne 0$ case.
\subsection{$\Lambda = 0$}
When $g=0$, the constraint \eqref{dbrcond} implies $\mu=-q=a^2$. The mass, charge and angular momentum reduce to
\setlength\arraycolsep{2pt} \begin{eqnarray}
M=-\sqrt3 Q = \frac{3 \sqrt[3]{\pi }}{2^{2/3}} J^{\fft23}\,,\qquad
J=\ft18 \pi a^3\,.\label{mjqring1}
\end{eqnarray}
The solution simplifies significantly
\setlength\arraycolsep{2pt} \begin{eqnarray}
f&=& \left(1-\frac{a^2}{r^2}\right)^2\,,\qquad W=1-\frac{a^6}{r^6}\,,\qquad
\omega=-\frac{2 a^3}{a^4+a^2 r^2+r^4}\,,\nonumber\\
A &=& \fft{\sqrt3 a^3}{2r^2} \tilde \sigma_3\,,\qquad \tilde \sigma_3 = \sigma_3 - \fft{2}{a} dt\,.
\end{eqnarray}
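As a cross-check (not part of the original derivation), the following sympy sketch substitutes $\mu=-q=a^2$ and $g=0$ into the general metric functions of \eqref{localsol} and confirms that they reduce to the simplified $f$, $W$ and $\omega$ above; the variable names are ours.

```python
# Cross-check: mu = -q = a^2, g = 0 in the general local solution reproduces
# the simplified metric functions of the Lambda = 0 degenerate black ring.
import sympy as sp

r, a = sp.symbols('r a', positive=True)
mu, q, g = a**2, -a**2, sp.Integer(0)

# General metric functions of the local solution (localsol)
W = 1 + 2*a**2*(mu + q)/r**4 - a**2*q**2/r**6
omega = 2*a*(q**2 - q*r**2 - 2*mu*r**2)/(r**6*W)
f = W*(1 - (1 - a**2*g**2)*r**2/a**2) + (r**4 + a**2*q)**2/(a**2*r**6)

# Compare with the simplified forms quoted in the text
ok = all(sp.simplify(x - y) == 0 for x, y in [
    (W, 1 - a**6/r**6),
    (f, (1 - a**2/r**2)**2),
    (omega, -2*a**3/(a**4 + a**2*r**2 + r**4)),
])
print(ok)
```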
In terms of $\tilde \sigma_3$, the metric becomes even simpler:
\begin{equation}
ds^2 = \fft{r^4 dr^2}{(r^2 - a^2)^2} - \Big(1-\fft{r^2}{a^2}\Big) dt^2 +
\fft{r^4-a^4}{a^2 r^2} dt \tilde \sigma_3 + \fft{r^6 - a^6}{4a^2 r^4}\tilde \sigma_3^2 +
\ft14 r^2 d\Omega_2^2\,.
\end{equation}
The metric is asymptotic to the Minkowski spacetime. The horizon is located at $r=a$, and it is instructive to define a new radial coordinate $z$ such that $r=a(1+z^2)$. As $z$ approaches zero, the solution, up to the next-to-leading order in $z$, becomes
\setlength\arraycolsep{2pt} \begin{eqnarray}
ds^2 &\sim & \fft{a^2 dz^2}{z^2(1-3z^2)} + z^2 \left(- \ft13 (2 + z^2) dt^2 +
\ft34 a^2 (2-3z^2) \Big(\tilde \sigma_3 + \frac{4 (1+z ^2)}{3 a}dt\Big)^2\right)\nonumber\\
&& + \ft14 a^2 (1 + 2z^2) d\Omega_2^2\,,\nonumber\\
A &\sim & \ft{1}{2} \sqrt{3} a \left(1-2 z ^2\right) \tilde \sigma_3\,.\label{metricrho}
\end{eqnarray}
On the horizon, the solution carries a magnetic dipole moment
\begin{equation}
{\cal D} = \fft{1}{8} \int F = \ft14 \sqrt3\,\pi a\,.
\end{equation}
The near-horizon geometry is locally AdS$_3\times S^2$, since $\tilde \sigma_3$ is a periodic 1-form. It can be magnified and extracted as a solution on its own, under the decoupling limit
\begin{equation}
r=a (1 + \lambda z^2)\,,\qquad
t=\fft{a\tau}{\sqrt{\lambda}}\,,\qquad
\tilde \psi = \fft{1}{\sqrt{\lambda}} \Big(\hat \psi + \ft23 \tau\Big)\,,\qquad
\lambda\rightarrow 0\,.\label{decoupling}
\end{equation}
In this limit, the solution becomes
\setlength\arraycolsep{2pt} \begin{eqnarray}
ds^2 &=& a^2 \Big(\fft{dz^2}{z^2} + \ft16 z^2 (-4 d\tau^2 + 9 d\hat \psi^2) +
\ft14 (d\theta^2 + \sin^2\theta d\phi^2)\Big)\,,\nonumber\\
A &=& \ft12\sqrt3 a \cos\theta d\phi\,.\label{brhorizon}
\end{eqnarray}
The AdS$_3$ factor of the metric is written in planar coordinates. The solution \eqref{brhorizon} resembles the near-horizon geometry of the magnetic string, but our solution should not be described as a string. We need to distinguish between the near-horizon geometry \eqref{metricrho} and the decoupling limit \eqref{brhorizon}. In the latter case, the $U(1)$ fibre is infinitely magnified and its global $U(1)$ property is lost. Nevertheless, it does show that the $U(1)$ fibre over the $S^2$ becomes untwisted in the AdS throat. In addition, a five-dimensional string would have a two-dimensional world volume and a three-dimensional transverse space, so that the asymptotic spacetime is either Mink$_4\times S^1$ or Mink$_4\times \mathbb R$, depending on whether the string is circular or a real line. In our solution, it is clear from the asymptotic Minkowski structure that the transverse space is four dimensional, as in the case of an electrically charged black hole. The solution is thus best described as a degenerate black ring. In a usual black ring, the horizon is $S^1\times S^2$; in the near-horizon geometry the $S^1$ has finite radius and is fibred over $\mathbb{R}^2$, or over AdS$_2$ in the extremal case, to form AdS$_3$, whereas asymptotically it is fibred over $S^2$ to form $S^3$ instead. In our solution, the radius of the $S^1$, i.e.~the ring size, shrinks to zero on the horizon, and it appears as if we have lost one spatial dimension viewed from the asymptotic region.
It is worth commenting that the metric (\ref{metricrho}) is invariant under
\begin{equation}
z\leftrightarrow -z\,.
\end{equation}
If the AdS$_3$ horizon is geodesically complete, the solution would be regular with no singularity, since the inside of the horizon is isomorphic to the outside. This would be the
case of the magnetic string, where $\psi$ describes a real line \cite{Gibbons:1994vm}. However, in our case $\psi$ is periodic, and hence the translational symmetry becomes singular on the horizon of the AdS$_3$. Thus our DBR solutions are not regular and the curvature singularity at $r=0$ can be reached geodesically. However, we can introduce an orbifold singularity on the AdS$_3$ horizon so that the coordinate is restricted to $z\ge 0$. This is a small price to pay to avoid the power-law singularity completely.
It is important to note that the charge/angular momentum relation is completely fixed:
\begin{equation}
Q = -\frac{\sqrt{3} \sqrt[3]{\pi }}{2^{2/3}} J^{2/3}\,.\label{Q(J)}
\end{equation}
As we shall see in section \ref{sec:soliton}, for the same relation there exists furthermore a smooth soliton that is also asymptotic to the Minkowski spacetime. It is sensible to compare their masses for given $Q(J)$, and we find
\begin{equation}
\fft{M_{\rm soliton}}{M_{\rm ring}} = \cos\left(\fft{\pi}{9}\right)<1\,.
\end{equation}
In other words, the soliton is more energetically favoured than the DBR.
\subsection{$\Lambda \ne 0$}
There are no known exact black ring solutions that are asymptotic to global AdS. Our AdS DBRs, however, arise naturally. The constraint \eqref{dbrcond} with nonvanishing $g$ yields
\begin{equation}
\mu=\frac{1}{2} r_+^2 \left(2+3 g^2 r_+^2+g^4 r_+^4\right)\,,\qquad
q=-r_+^2 \left(g^2 r_+^2+1\right)\,,\qquad a=\frac{r_+}{\sqrt{1+g^2 r_+^2}}\,.
\end{equation}
The solution now reduces to
\setlength\arraycolsep{2pt} \begin{eqnarray}
f&=& \Big(1 - \fft{r_+^2}{r^2}\Big)^2\Big(1+ g^2 \left(r^2+2 r_+^2\right)\Big)\,,\qquad
W=\Big(1 - \fft{r_+^2}{r^2}\Big)\Big(1 +
\fft{r_+^2}{r^2} + \frac{r_+^4 \left(g^2 r_+^2+1\right)}{r^4}\Big)\,,\nonumber\\
\omega &=& -\frac{2 r_+^3 \left(g^2 r_+^2+1\right)^{3/2}}{g^2 r_+^6+r^4+r^2 r_+^2+r_+^4}\,,\qquad A=\frac{r_+^3 \sqrt{3 g^2 r_+^2+3}}{2 r^2} \tilde \sigma_3\,.
\end{eqnarray}
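One can verify symbolically that this parameter choice indeed realises the DBR condition \eqref{dbrcond}; the following sympy sketch (our own cross-check, with our variable names) confirms that $r_+$ is a double root of $f$ and a single root of $W$, and that the factorised forms above follow from the general solution.

```python
# Cross-check: the DBR parameter choice makes r_+ a double root of f and a
# single root of W, and reproduces the factorised f and W quoted in the text.
import sympy as sp

r, rp, g = sp.symbols('r r_p g', positive=True)
mu = rp**2*(2 + 3*g**2*rp**2 + g**4*rp**4)/2
q = -rp**2*(g**2*rp**2 + 1)
a = rp/sp.sqrt(1 + g**2*rp**2)

# General metric functions of the local solution (localsol)
W = 1 + 2*a**2*(mu + q)/r**4 - a**2*q**2/r**6
f = W*(1 - (1 - a**2*g**2)*r**2/a**2) + (r**4 + a**2*q)**2/(a**2*r**6)

# DBR condition: f(r_+) = f'(r_+) = W(r_+) = 0
at_rp = [sp.simplify(f.subs(r, rp)),
         sp.simplify(sp.diff(f, r).subs(r, rp)),
         sp.simplify(W.subs(r, rp))]

# Factorised forms quoted in the text
f_claim = (1 - rp**2/r**2)**2*(1 + g**2*(r**2 + 2*rp**2))
W_claim = (1 - rp**2/r**2)*(1 + rp**2/r**2 + rp**4*(g**2*rp**2 + 1)/r**4)
match = [sp.simplify(f - f_claim), sp.simplify(W - W_claim)]
print(at_rp, match)
```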
Here $f$ has a double root $r_+$ and $W$ has the same single root. The solution is asymptotic to the global AdS$_5$. The mass, charge and angular momentum now depend on $r_+$, given by
\setlength\arraycolsep{2pt} \begin{eqnarray}
M &=&\frac{1}{8} \pi r_+^2 \left(6+9 g^2 r_+^2+4 g^4 r_+^4\right)\,,\qquad
Q = -\frac{1}{4} \sqrt{3} \pi r_+^2 \left(1+g^2 r_+^2\right)\,,\nonumber\\
J&=&\frac{1}{4} \pi r_+^3 \left(1+g^2 r_+^2\right)^{3/2}.
\end{eqnarray}
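These expressions follow from inserting the DBR parameters into the general conserved charges \eqref{MJQ}; the short sympy sketch below (our own verification, not part of the paper's derivation) confirms the reduction.

```python
# Cross-check: the DBR parameters inserted into the general (M, J, Q) of
# eq. (MJQ) reproduce the r_+-parametrised charges quoted in the text.
import sympy as sp

rp, g = sp.symbols('r_p g', positive=True)
mu = rp**2*(2 + 3*g**2*rp**2 + g**4*rp**4)/2
q = -rp**2*(g**2*rp**2 + 1)
a = rp/sp.sqrt(1 + g**2*rp**2)

# General conserved charges of the local solution
M = sp.pi*(3*mu + g**2*a**2*(mu + q))/4
J = sp.pi*a*(2*mu + q)/4
Q = sp.sqrt(3)*sp.pi*q/4

checks = [
    sp.simplify(M - sp.pi*rp**2*(6 + 9*g**2*rp**2 + 4*g**4*rp**4)/8),
    sp.simplify(J - sp.pi*rp**3*(1 + g**2*rp**2)**sp.Rational(3, 2)/4),
    sp.simplify(Q + sp.sqrt(3)*sp.pi*rp**2*(1 + g**2*rp**2)/4),
]
print(checks)
```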
The near horizon geometry is again locally AdS$_3\times S^2$, which can be blown up to be a solution on its own under the same decoupling limit $\lambda\rightarrow 0$, with
\setlength\arraycolsep{2pt} \begin{eqnarray}
r &=& r_+(1 + \lambda z^2)\,,\qquad
t=\fft{r_+ \sqrt{1 + \fft13 g^2 r_+^2}}{1 + 3 g^2 r_+^2} \fft{\tau}{\sqrt{\lambda}}\,,\nonumber\\
\psi&=&\fft{1}{ \sqrt{\lambda(1 + \fft13 g^2 r_+^2)} (1 + 3 g^2 r_+^2)}\Big(\hat\psi +
\fft{2(1 + g^2 r_+^2)^{\fft32}}{3\sqrt{1 + 3 g^2 r_+^2}} \tau\Big).
\end{eqnarray}
The solution becomes
\setlength\arraycolsep{2pt} \begin{eqnarray}
ds^2 &=& \fft{r_+^2}{1 + 3g^2 r_+^2} \Big(\fft{dz^2}{z^2} + \ft16 z^2 (-4 d\tau^2 + 9 d\hat \psi^2)\Big) +
\ft14 r_+^2 (d\theta^2 + \sin^2\theta d\phi^2)\,,\nonumber\\
A &=& \ft12r_+ \sqrt{3(1 + g^2 r_+^2)}\, \cos\theta d\phi\,.\label{brhorizon2}
\end{eqnarray}
Note that if $\hat \psi$ were a real line, the corresponding magnetic string solution would not be asymptotic to AdS, but only locally AdS. Here the coordinate $\hat\psi$ has its origin in the $U(1)$ fibre over the $S^2$, and the DBR is indeed asymptotic to the global AdS. As in the earlier $g=0$ case, the AdS DBR solution can avoid the curvature power-law singularity at $r=0$ by introducing an orbifold singularity on the horizon.
The charge/angular momentum relation $Q(J)$ is independent of $g$, given by \eqref{Q(J)}. The mass, on the other hand, becomes more complicated:
\setlength\arraycolsep{2pt} \begin{eqnarray}
M &=& \frac{\left(\sqrt{8 \sqrt[3]{2} g^2 J^{2/3}+\pi ^{2/3}}-\sqrt[3]{\pi }\right) \left(16 \sqrt[3]{2} g^2 J^{2/3}+5 \sqrt[3]{\pi } \sqrt{8 \sqrt[3]{2} g^2 J^{2/3}+\pi ^{2/3}}+7 \pi ^{2/3}\right)}{32 g^2}\nonumber\\
&=&
\left\{
\begin{array}{ll}
\frac{3 \sqrt[3]{\pi } J^{2/3}}{2^{2/3}}+\frac{3 g^2 J^{4/3}}{\sqrt[3]{2 \pi }}-\frac{4 g^4 J^2}{\pi }+O\left(J^{8/3}\right), &\qquad J\rightarrow 0\,; \\
2 g J+\frac{3 \sqrt[3]{\pi } J^{2/3}}{2\ 2^{2/3}}+\frac{3 \pi ^{2/3} \sqrt[3]{J}}{8 \sqrt[3]{2} g}-\frac{\pi }{16 g^2}+O\left(J^{-\fft13}\right), &\qquad J\rightarrow \infty\,.
\end{array}
\right.\label{mjadsring}
\end{eqnarray}
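The closed-form $M(J)$ above can be spot-checked against the parametric expressions $M(r_+)$ and $J(r_+)$ of the previous subsection. The following sympy sketch (our own numerical verification; symbolic simplification of the nested radicals is deliberately avoided) compares the two at a few sample points.

```python
# Spot-check: the closed-form M(J) of eq. (mjadsring) agrees numerically with
# the r_+-parametrised M and J of the AdS DBR.
import sympy as sp

rp, g = sp.symbols('r_p g', positive=True)
J = sp.pi*rp**3*(1 + g**2*rp**2)**sp.Rational(3, 2)/4
M_param = sp.pi*rp**2*(6 + 9*g**2*rp**2 + 4*g**4*rp**4)/8

c13, c23 = sp.Rational(1, 3), sp.Rational(2, 3)
root = sp.sqrt(8*2**c13*g**2*J**c23 + sp.pi**c23)
M_closed = (root - sp.pi**c13)*(16*2**c13*g**2*J**c23
            + 5*sp.pi**c13*root + 7*sp.pi**c23)/(32*g**2)

# Evaluate the difference at a few exact sample points to high precision
pts = [(1, sp.Rational(1, 2)), (2, 1), (sp.Rational(3, 2), sp.Rational(1, 3))]
max_err = max(abs((M_closed - M_param).subs({rp: rv, g: gv}).evalf(50))
              for rv, gv in pts)
print(max_err)
```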
For the same charge/angular momentum relation \eqref{Q(J)}, there also exist AdS solitons and we shall come back to the discussion of the mass in section \ref{sec:soliton}.
\section{DBRs as limiting extremal rotating black holes}
\label{sec:phase}
In the previous section, we gave the DBR solutions that arise from the condition \eqref{dbrcond}. We can impose this condition in two stages. The first is to set $f(r_+)=f'(r_+)=0$, which gives rise to the extremal rotating black holes, followed by requiring $W(r_+)=0$. Thus the DBRs can be viewed as the limiting case of black holes.
It turns out that there are two branches of extremal rotating black holes, and the DBR sits where the two branches meet. The situations differ sufficiently depending on whether $\Lambda=-6g^2$ vanishes or not, and we discuss them separately.
\subsection{$\Lambda =0$}
The general non-extremal black hole solutions are discussed in the appendix, where all the black hole thermodynamical variables are given and the first law is derived. We find that the extremal condition
$f'(r_+) = f(r_+) =0$ can be satisfied by two sets of parameters. For $g=0$, they are
\setlength\arraycolsep{2pt} \begin{eqnarray}
\hbox{Case 1:}&& \quad \mu=r_+^2\,,\qquad q=-r_+^2\,,\nonumber\\
\hbox{Case 2:}&&\quad \mu=r_+^2\,,\qquad q=-2a^2 + r_+^2\,.
\end{eqnarray}
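Both parameter sets can be checked directly against the $g=0$ metric function of the local solution; the sympy sketch below (our own cross-check) confirms that each yields $f=(1-r_+^2/r^2)^2$ and hence a double root at $r=r_+$.

```python
# Cross-check: both Case 1 and Case 2 parameter sets satisfy the g = 0
# extremality condition, with the common metric function f = (1-r_+^2/r^2)^2.
import sympy as sp

r, rp, a = sp.symbols('r r_p a', positive=True)

def f_g0(mu, q):
    """g = 0 metric function f of the local solution (localsol)."""
    W = 1 + 2*a**2*(mu + q)/r**4 - a**2*q**2/r**6
    return W*(1 - r**2/a**2) + (r**4 + a**2*q)**2/(a**2*r**6)

checks = []
for mu, q in [(rp**2, -rp**2),              # Case 1 (BMPV)
              (rp**2, -2*a**2 + rp**2)]:    # Case 2
    f = f_g0(mu, q)
    checks += [sp.simplify(f - (1 - rp**2/r**2)**2),
               sp.simplify(f.subs(r, rp)),
               sp.simplify(sp.diff(f, r).subs(r, rp))]
print(checks)
```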
The first case is the BMPV black hole with \cite{Breckenridge:1996is}
\begin{equation}
f=\Big(1 - \fft{r_+^2}{r^2}\Big)^2\,,\qquad
W=1 - \fft{a^2 r_+^2}{r^6}\,,\qquad \omega = -\frac{2 a r_+^2 \left(r^2-r_+^2\right)}{r^6-a^2 r_+^4}\,.
\end{equation}
The solution is supersymmetric, with
\begin{equation}
M=-\sqrt3 Q=\frac{3 \pi r_+^2}{4}\,,\qquad J=\frac{1}{4} \pi a r_+^2\,.
\end{equation}
In other words, the angular momentum does not contribute to the mass. Since the black hole has zero temperature, the Helmholtz free energy is simply the mass. We can further define three types of Gibbs free energy
\begin{equation}
G_1= M - 2\Omega_+ J\,,\qquad G_2 = M - 2\Omega_+ J - \Phi Q\,,\qquad
G_3 = M - \Phi Q\,.
\end{equation}
For the BMPV black hole, we find
\begin{equation}
G_1 = \ft34\pi r_+^2\,,\qquad G_2=G_3=0\,.
\end{equation}
Note that these free energies are completely independent of the parameter $a$, and therefore of $J$.
We now examine the second case, for which the solution is
\setlength\arraycolsep{2pt} \begin{eqnarray}
f&=&\Big(1 - \fft{r_+^2}{r^2}\Big)^2\,,\qquad
W=1-\frac{4 a^2 \left(a^2-r_+^2\right)}{r^4}-\frac{a^2\left(r_+^2-2 a^2\right)^2}{r^6}\,,\nonumber\\
\omega &=& \frac{r^2 \left(4 a^3-6 a r_+^2\right)+2 a \left(r_+^2-2 a^2\right)^2}{r^6-4 a^2 r^2 \left(a^2-r_+^2\right) -\left(a r_+^2-2 a^3\right)^2}\,.
\end{eqnarray}
The mass, angular momentum and charge are
\begin{equation}
M=\ft34 \pi r_+^2\,,\qquad J=\ft{1}{4} \pi a \left(3 r_+^2-2 a^2\right)\,,\qquad Q=\ft{1}{4} \sqrt{3} \pi \left(r_+^2-2 a^2\right)\,.
\end{equation}
The solution is non-supersymmetric and the conserved quantities satisfy
\begin{equation}
8M^3 -18 M Q^2 - 6\sqrt3 Q^3 - 27\pi J^2=0\,.
\end{equation}
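This relation holds identically in $(r_+,a)$, as the following sympy sketch (our own cross-check) confirms.

```python
# Cross-check: the Case 2 conserved charges satisfy the non-supersymmetric
# extremality relation 8M^3 - 18MQ^2 - 6 sqrt(3) Q^3 - 27 pi J^2 = 0.
import sympy as sp

rp, a = sp.symbols('r_p a', positive=True)
M = 3*sp.pi*rp**2/4
J = sp.pi*a*(3*rp**2 - 2*a**2)/4
Q = sp.sqrt(3)*sp.pi*(rp**2 - 2*a**2)/4

lhs = sp.expand(8*M**3 - 18*M*Q**2 - 6*sp.sqrt(3)*Q**3 - 27*sp.pi*J**2)
print(lhs)
```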
The Gibbs free energies for this solution are
\begin{equation}
G_1=\frac{\pi \left(8 a^4-6a^2 r_+^2 +3 r_+^4\right)}{4 \left(r_+^2 + 2 a^2\right)}\,,\qquad
G_2=\frac{\pi a^2 \left(3 r_+^2-2 a^2\right)}{2(r_+^2 + 2 a^2)}\,,\qquad G_3=\frac{3 \pi a^2 \left(3 r_+^2-2 a^2\right)}{2 \left(r_+^2 + 2 a^2\right)}\,.
\end{equation}
At this stage, it should be stated that both the Case 1 and Case 2 solutions describe rotating black holes only when $r_+>a$, for which $W(r_+)>0$. When $r_+<a$, we have negative $W(r_+)$ and the solution describes a time machine, which was referred to as a repulson in \cite{Gibbons:1999uv}. Interestingly, when $r_+=\sqrt{2/3}\, a$, the angular momentum of the Case 2 solution vanishes, and we have a rotating time machine repulson carrying no angular momentum.
When $r_+=a$, the two different solutions become the same DBR studied in the previous section. The DBR thus sits at the crossing between the black holes and the time machines. While the Gibbs free energies are continuous when the black hole transits to the time machine within each case, they are not continuous in the transition from Case 1 to Case 2 via the DBR. When $r_+=a$, the Case 2 solution has $G_{1,2,3} = \left\{\frac{5 \pi a^2}{12},\frac{\pi a^2}{6},\frac{\pi a^2}{2}\right\}$, whilst the DBR limit of the Case 1 solution has $G_{1,2,3}=\left\{\frac{3\pi a^2}{4},0,0\right\}$. The global discontinuity of black hole thermodynamics is rather peculiar and deserves further investigation.
\subsection{$\Lambda\ne 0$}
The situation is analogous when the cosmological constant is turned on with $g\ne 0$, but the details become more complicated. The condition $f(r_+)=0=f'(r_+)$ for extremal black holes again yields two sets of solutions
\begin{equation}
q_\pm=-\fft{1}{(1-a^2 g^2)^2}\left(a^2 \pm (r_+^2 \left(1-a^2 g^2\right)-a^2)
\sqrt{1+ 2 g^2 r_+^2 \left(1-a^2 g^2\right)}\right)\,,\label{qpm}
\end{equation}
with
\begin{equation}
\mu = \fft{2 a^2 g^2 q+r_+^2 \left(3 g^2 r_+^2+2\right)}{2(1-a^2 g^2)}\,.
\end{equation}
The plus and minus signs in the parentheses of the above expression for $q$ correspond respectively to the Case 1 and Case 2 solutions of the $g=0$ limit, and we shall use the same names here. As in the case of $g=0$, the metric function $f$ is the same for both solutions:
\begin{equation}
f=\Big(1- \fft{r_+^2}{r^2}\Big)^2 \Big(1 + g^2 (r^2 + 2 r_+^2)\Big)\,.
\end{equation}
The functions $W$ and $\omega$ depend linearly on $q$ and give two different branches of solutions
\setlength\arraycolsep{2pt} \begin{eqnarray}
W &=& 1 + \frac{2 a^2 q \left(a^2 + (1-a^2 g^2) r^2\right)}{r^6 \left(1-a^2 g^2\right)^2}+\frac{a^2 r_+^2}{r^6 \left(1-a^2 g^2\right)^2} \Bigg(r^2 \left(1-a^2 g^2\right) \left(3 g^2 r_+^2+2\right)\nonumber\\
&&+2 a^2 \left(g^2 r_+^2+1\right)^2-r_+^2 \left(2 g^2 r_+^2+1\right)\Bigg)\,,\nonumber\\
\omega &=& \fft{2a}{r^6 W} \Bigg(\frac{((a^4 g^4-1) r^2-2 a^2)q}{\left(1-a^2 g^2\right)^2}-
\frac{r^2 r_+^2 \left(3 g^2 r_+^2+2\right)}{1-a^2 g^2}\nonumber\\
&&+\frac{r_+^4+2 g^2 r_+^6 -2 a^2r_+^2 \left(g^2 r_+^2+1\right)^2}{\left(1-a^2 g^2\right)^2}\Bigg)\,.
\end{eqnarray}
The two generally different black holes of (\ref{qpm}) become the same DBR discussed in the previous section when $r_+$ is
\begin{equation}
r_+ = \frac{a}{\sqrt{1-a^2 g^2}}\,.\label{rpag}
\end{equation}
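At this value of $r_+$, the square-root term in \eqref{qpm} vanishes, so $q_+=q_-$ and both reduce to the DBR value $q=-r_+^2(1+g^2 r_+^2)$; the sympy sketch below (our own cross-check, assuming $a^2g^2<1$) verifies this.

```python
# Cross-check: at r_+ = a/sqrt(1 - a^2 g^2) the two branches q_pm of eq. (qpm)
# coincide and equal the DBR value -r_+^2 (1 + g^2 r_+^2).
import sympy as sp

a, g = sp.symbols('a g', positive=True)
rp2 = a**2/(1 - a**2*g**2)   # r_+^2 at the joining; assumes a^2 g^2 < 1

disc = (rp2*(1 - a**2*g**2) - a**2)*sp.sqrt(1 + 2*g**2*rp2*(1 - a**2*g**2))
q_plus = -(a**2 + disc)/(1 - a**2*g**2)**2
q_minus = -(a**2 - disc)/(1 - a**2*g**2)**2
q_dbr = -rp2*(g**2*rp2 + 1)

checks = [sp.simplify(q_plus - q_minus), sp.simplify(q_plus - q_dbr)]
print(checks)
```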
However, the Gibbs free energies of the two rotating black holes are not the same when we take
this limit. They are given by
\setlength\arraycolsep{2pt} \begin{eqnarray}
G_1^\pm &=& \frac{\pi a^2 \left(2 a^4 g^4-7 a^2 g^2+14
\pm 4 \sqrt{2 a^2 g^2+1}\right)}{8 \left(1-a^2 g^2\right)^2 \left(3-2 a^2 g^2\right)}\,,\nonumber\\
G_2^\pm &=& \frac{\pi a^2 \left(2 a^4 g^4-7 a^2 g^2+2 \mp 2 \sqrt{2 a^2 g^2+1}\right)}{8 \left(1-a^2 g^2\right)^2 \left(3-2 a^2 g^2\right)}\,,\nonumber\\
G_3^\pm &=& \frac{\pi a^2 \left(6 - a^2 g^2 (3-a^2 g^2)(3-2a^2 g^2) \mp 6 \left(1-a^2 g^2\right) \sqrt{2 a^2 g^2+1} \right)}{8 \left(1-a^2 g^2\right)^3 \left(3-2 a^2 g^2\right)}\,.
\end{eqnarray}
In particular, the quantities $G_{1,2,3}^+$ and $G_{1,2,3}^-$ are for the Case 1 and Case 2 solutions respectively.
Note that for the above solutions we must have $1-a^2 g^2 > 0$. When $a^2 g^2 -1>0$ instead, there is another value of $r_+$ where the Case 1 and Case 2 solutions become the same, namely
\begin{equation}
r_+=\frac{1}{g \sqrt{2(a^2 g^2-1)}}\,.
\end{equation}
The solution is a time machine with $W(r_+)<0$. In this case, the Gibbs free energies are all continuous in crossing from one branch of solutions to the other. There is no asymptotically flat limit of this time machine.
As was discussed earlier, when $g=0$, the Case 1 solution gives the supersymmetric BMPV black hole. When $g\ne 0$, the supersymmetric rotating black hole of Gutowski-Reall \cite{Gutowski:2004ez} arises instead from the Case 2 solution when
\begin{equation}
a=\fft{g r_+^2}{2 + g^2 r_+^2}\,.
\end{equation}
(This Gutowski-Reall black hole is illustrated in Fig.~\ref{gpsipsi}.) Setting $g=0$ will not give the BMPV black hole, but the extremal static RN black hole with $M=\sqrt3 Q$.
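The $g\rightarrow 0$ statement can be checked explicitly: imposing the Gutowski-Reall condition on the Case 2 parameters and then setting $g=0$ forces $a=0$, giving a static solution with $M=\sqrt3 Q$. The sympy sketch below (our own cross-check) verifies this.

```python
# Cross-check: the Gutowski-Reall condition on the Case 2 (minus) branch,
# followed by g -> 0, gives a = 0 = J and the static extremal RN relation
# M = sqrt(3) Q, rather than the BMPV black hole.
import sympy as sp

rp, g = sp.symbols('r_p g', positive=True)
a = g*rp**2/(2 + g**2*rp**2)

# Case 2 (minus) branch of q from eq. (qpm), and the corresponding mu
disc = (rp**2*(1 - a**2*g**2) - a**2)*sp.sqrt(1 + 2*g**2*rp**2*(1 - a**2*g**2))
q = -(a**2 - disc)/(1 - a**2*g**2)**2
mu = (2*a**2*g**2*q + rp**2*(3*g**2*rp**2 + 2))/(2*(1 - a**2*g**2))

M = sp.pi*(3*mu + g**2*a**2*(mu + q))/4
J = sp.pi*a*(2*mu + q)/4
Q = sp.sqrt(3)*sp.pi*q/4

M0, J0, Q0 = [sp.simplify(x.subs(g, 0)) for x in (M, J, Q)]
print(M0, J0, Q0)
```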
\subsection{Illustration of DBRs from extremal black holes}
It is instructive to examine the metric component $g_{\psi\psi}(r_+)$ of the two extremal rotating black holes, since the sign of this quantity indicates whether we have a black hole, time machine or DBR. As is illustrated in Fig.~\ref{gpsipsi}, both branches of rotating black holes reduce to the RN black hole when $a=0$. To be precise, the Case 1 solution reduces to the one with $M=-\sqrt3 Q$, whilst the Case 2 solution reduces to the one with $M=+\sqrt3 Q$. In other words, they have the same mass, but opposite charges. As we increase the value of $a$, the two branches meet at some critical value of $a$, where $g_{\psi\psi}$ vanishes and the DBR emerges. The discontinuity of the Gibbs free energies may arise from the sharp angle at the joining.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=220pt]{phase1.pdf}
\includegraphics[width=220pt]{phase2.pdf}
\end{center}
\caption{\small\it The two branches of the rotating black holes start from the extremal RN black holes at $a=0$ of the same mass and opposite charges and then become the same DBR with $g_{\psi\psi}=0$. They will continue to be time machines if we further increase the parameter $a$. The global discontinuity of Gibbs free energies arises from the sharp angle at the joining. We have $r_+=1$ for both graphs, with the left having $g=0$ and the right having $g=1$. The Gutowski-Reall supersymmetric solution sits at $a=1/3$.}
\label{gpsipsi}
\end{figure}
It is also worth examining how the conserved quantities $(M,Q,J)$ change in the black hole to DBR limit. For simplicity, we consider $g=0$. We can either fix $M$ or fix $Q$ for both branches of the extremal rotating black holes. In the left plot of Fig.~\ref{mqjplots}, we fix $M=\sqrt3$, in which case the extremal RN black holes can have $Q=\pm 1$. We see that the Case 1 solution passes through the $Q=-1$ RN black hole, whilst the Case 2 solution passes through the $Q=+1$ solution. The two DBR solutions sit at $J=\pm J_{\rm DBR}$ with $J_{\rm DBR}=\frac{2}{3^{3/4} \sqrt{\pi }}\sim 0.495$. For Case 1, when the solution is under rotating, namely $|J|< J_{\rm DBR}$, it describes a supersymmetric black hole. When it is over rotating with $|J|> J_{\rm DBR}$, it describes a time machine repulson. The nonsupersymmetric Case 2 solution has a much richer structure. Both black holes and time machines can be either under rotating or over rotating. In particular, there is a rotating time machine with $Q=-2$ but no angular momentum.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=220pt]{phase3.pdf}
\includegraphics[width=220pt]{phase4.pdf}
\end{center}
\caption{\small\it Both plots have $g=0$. In the left plot, we fix $M=\sqrt3$ and let $J$ take both negative and positive values. In this case, the two branches of extremal rotating black holes join as DBRs at both ends. In the right plot, we fix $Q=-1$. Case 2 gives a black hole when it is over rotating, completely opposite to Case 1.}
\label{mqjplots}
\end{figure}
In the right plot of Fig.~\ref{mqjplots}, we fix $Q=-1$, and draw the mass as a function of $J$. In this case the DBR again corresponds to $J=J_{\rm DBR}$. What is unusual is that the Case 2 solution describes a black hole when it is over rotating and a time machine when it is under rotating, completely opposite to the Case 1 solution.
\section{DBR from the soliton limit}
\label{sec:soliton}
We can also approach the DBR condition (\ref{dbrcond}) from a different angle, by first requiring $f(r_+)=W(r_+)=0$. This corresponds to the third possibility in \eqref{first3}, which was mentioned in \cite{Cvetic:2005zi} without further elaboration. That the local solution \eqref{localsol} in this case describes regular solitons when $r_L=r_+$ was elaborated in \cite{Andrews:2019hvq}. We review these solutions here, focusing on the properties that were not addressed in \cite{Andrews:2019hvq}, namely the relations between the conserved quantities $(M,J,Q)$, and how the DBRs arise from the soliton limit.
\subsection{$\Lambda \ne 0$}
The condition that $r_L=r_+\equiv r_0$ can be achieved by requiring
\begin{equation}
\mu=\frac{r_0^4 \left(a^2+r_0^2\right)}{2 a^4}\,,\qquad
q=-\frac{r_0^4}{a^2}\,.\label{muqsoliton}
\end{equation}
We first consider the case with nonvanishing $\Lambda = -6g^2$, for which the spacetime is asymptotic to global AdS. The metric functions are
\setlength\arraycolsep{2pt} \begin{eqnarray}
W &=& \Big(1 - \fft{r_0^2}{r^2}\Big) \Big(1+\frac{r_0^2}{r^2}+\frac{r_0^6}{a^2 r^4}\Big)\,,
\qquad \omega=-\frac{2 r_0^6}{a \left(a^2 r^2 \left(r^2+r_0^2\right)+r_0^6\right)}\,,\nonumber\\
f &=& r^2 W\left( \frac{r_0^2 \left(a^2+r^2\right)+r^2 \left(a^2+r^2\right)-r_0^4}{a^2 r^4+a^2 r^2 r_0^2+r_0^6}-\frac{1}{a^2}+g^2\right).
\end{eqnarray}
It is important to note that the quantity in the parentheses of $f$ must be positive for $r\in [r_0,\infty)$, which requires that
\begin{equation}
\eta\equiv 1 - (1-a^2 g^2) \fft{r_0^2}{a^2}>0\,.
\end{equation}
This condition is clearly always satisfied for $a^2 g^2\ge 1$; otherwise, it further requires a sufficiently small $r_0$.
The mass, angular momentum and electric charge are not independent, with $Q$ precisely given by \eqref{Q(J)}. The mass is
\begin{equation}
M = \frac{J \left(a^2 g^2+3\right)}{2 a}+\frac{\sqrt[3]{\pi } J^{2/3} \left(3-a^2 g^2\right)}{2\ 2^{2/3}}\,.
\end{equation}
In other words, the charge $Q$ is negative and depends on $J$, but the mass $M$ appears to be an independent parameter, because it depends on the free parameter $a$.
However, there is an additional constraint as one studies the geometry at the coordinate singularity $r=r_0$. To see this, we let $r=r_0 + \ft14 \rho^2$ and in the vicinity of $\rho=0$, the metric becomes
\setlength\arraycolsep{2pt} \begin{eqnarray}
ds^2 &=&-\eta dt^2 + \frac{a^2 r_0}{2 \eta \left(2 a^2+r_0^2\right)} \Big(d\rho^2 + \ft14\kappa^2 \rho^2 \sigma_3^2\Big) + \ft14 r_0^2 d\Omega_2^2\,,
\end{eqnarray}
where the ``Euclidean surface gravity'' is
\begin{equation}
\kappa = \frac{(2 a^2+r_0^2)\sqrt{\eta}}{a^2}=\frac{\left(2 \sqrt[3]{\pi } a+2^{2/3} \sqrt[3]{J}\right) \sqrt{2^{2/3} \sqrt[3]{J} \left(a^2 g^2-1\right)+\sqrt[3]{\pi } a}}{\sqrt{\pi } a^{3/2}}\,.
\end{equation}
Thus in order to avoid the conical singularity at $r=r_0$, we must have $\Delta \psi = 4\pi/\kappa$. On the other hand, as was discussed in section \ref{sec:local}, the period of $\psi$ is constrained by the 3-sphere or lens space topology as in \eqref{psiperiod}; we must therefore have
\begin{equation}
\kappa=k\,,\qquad k=1,2,3,\cdots\,.\label{kappacons}
\end{equation}
Consequently, the integration constant $a$, and hence the mass, is not a free parameter, but is determined by the angular momentum $J$ and the discrete topological parameter $k$. Specifically, $a$ is determined by the fourth-order polynomial equation
\begin{equation}
4 (2 \pi )^{2/3} a^4 g^2 \sqrt[3]{J}-a^3 \left(\pi \left(k^2-4\right)-8 \sqrt[3]{2 \pi } g^2 J^{2/3}\right)+4 a^2 g^2 J- 6 \sqrt[3]{2 \pi }\, a J^{2/3}-4J=0\,.
\end{equation}
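The quartic is readily solved numerically. The following sketch (our own illustrative check, not part of the original analysis) bisects for the root at $g=1$, $k=1$ and large $J$, evaluates the mass from the $M(a,J)$ formula above, and compares with the four-term large-$J$ expansion \eqref{MJsolitonk} given below:

```python
import math

g, k, J = 1.0, 1, 1000.0  # illustrative values

def quartic(a):
    # 4(2pi)^{2/3} a^4 g^2 J^{1/3} - a^3 (pi(k^2-4) - 8(2pi)^{1/3} g^2 J^{2/3})
    #   + 4 a^2 g^2 J - 6(2pi)^{1/3} a J^{2/3} - 4J
    return (4 * (2 * math.pi) ** (2 / 3) * a**4 * g**2 * J ** (1 / 3)
            - a**3 * (math.pi * (k**2 - 4)
                      - 8 * (2 * math.pi) ** (1 / 3) * g**2 * J ** (2 / 3))
            + 4 * a**2 * g**2 * J
            - 6 * (2 * math.pi) ** (1 / 3) * a * J ** (2 / 3)
            - 4 * J)

# Bisect on [0.5, 1/g]; the root approaches 1/g from below for large J
lo, hi = 0.5, 1.0 / g
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if quartic(lo) * quartic(mid) <= 0:
        hi = mid
    else:
        lo = mid
a = 0.5 * (lo + hi)

# Mass from the M(a, J) formula quoted above
M = (J * (a**2 * g**2 + 3) / (2 * a)
     + math.pi ** (1 / 3) * J ** (2 / 3) * (3 - a**2 * g**2) / (2 * 2 ** (2 / 3)))

# Four-term large-J expansion of M(J)
M_large_J = (2 * g * J
             + 3 * math.pi ** (1 / 3) * J ** (2 / 3) / (2 * 2 ** (2 / 3))
             + 3 * math.pi ** (2 / 3) * J ** (1 / 3) / (8 * 2 ** (1 / 3) * g)
             - (2 * k**2 + 1) * math.pi / (16 * g**2))

print(f"a = {a:.6f}, M = {M:.3f}")
```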
For large $J$, the function $a(J)$ approaches $1/g$, given by
\begin{equation}
a=\frac{1}{g}-\frac{\sqrt[3]{\pi } \sqrt[3]{\frac{1}{J}}}{2 \left(2^{2/3} g^2\right)}+\frac{\pi ^{2/3} \left(\frac{1}{J}\right)^{2/3}}{16 \sqrt[3]{2} g^3}+\frac{\pi k^2}{8 g^4 J}+O\left(J^{-\fft43}\right)\,.
\end{equation}
The behavior of $a(J)$ for small $J$ depends on the values of $k$:
\setlength\arraycolsep{2pt} \begin{eqnarray}
k=1:&& a= \frac{2^{2/3} c \sqrt[3]{J}}{\sqrt[3]{3 \pi }}\nonumber\\
&&\qquad -\frac{4 g^2 J \left(3 \sqrt[3]{3} c^2+4 c \left(3^{2/3}+6 \sqrt[6]{3} \cos \left(\frac{\pi }{18}\right)\right)+12+24 \sqrt{3} \cos \left(\frac{\pi }{18}\right)\right)}{27 \pi \left(\sqrt[3]{3} c^2-1\right)} + \cdots\nonumber\\
&&\phantom{a}= 1.23251 \sqrt[3]{J}-2.04349 g^2 J+\cdots\,,\qquad \left(c=\sqrt[3]{1+2 \sqrt{3} \cos \left(\frac{\pi }{18}\right)}\right)\,;\nonumber\\
k=2:&& a= \frac{\sqrt[3]{3} \sqrt[9]{J}}{2^{4/9} \sqrt[9]{\pi } g^{2/3}} - \frac{2\ 2^{2/3} \sqrt[3]{J}}{9 \sqrt[3]{\pi }} + \frac{13 g^{2/3} J^{5/9}}{81\ 2^{2/9} \sqrt[3]{3} \pi ^{5/9}}+\cdots\,;\nonumber\\
k\ge 3:&& a=\frac{\sqrt[3]{\pi } \left(k^2-4\right)}{4\ 2^{2/3} g^2 \sqrt[3]{J}} -\frac{2^{2/3} \sqrt[3]{J}}{\sqrt[3]{\pi }} - \frac{4 g^2 J \left(k^2-16\right)}{\pi \left(k^2-4\right)^2} + \cdots\,.
\end{eqnarray}
For $k=1,2$, $a(J)$ is a monotonic function increasing from 0 to $1/g$ as $J$ runs from zero to infinity. When $k\ge 3$, $a(J)$ has a positive minimum with $a(0)\rightarrow \infty$ and $a(\infty)=g^{-1}$.
The mass $M(J)$ is a monotonic function of the angular momentum for all $k$, with small $J$ expansion:
\begin{equation}
M=
\left\{
\begin{array}{ll}
2.60098 J^{2/3}+1.93329 g^2 J^{4/3}+O\left(J^2\right), &\quad k=1; \\
\frac{3 \sqrt[3]{\pi } J^{2/3}}{2\ 2^{2/3}}+\frac{3\ 3^{2/3} \sqrt[9]{\pi } g^{2/3} J^{8/9}}{4\ 2^{5/9}}+\frac{7 g^{4/3} J^{10/9}}{2\ 2^{4/9} 3^{2/3} \sqrt[9]{\pi }}+O\left(J^{4/3}\right), &\quad k=2; \\
-\frac{\pi \left(k^2-4\right)^2}{128 g^2}+\frac{3 \sqrt[3]{\pi } J^{2/3} k^2}{8\ 2^{2/3}}-\frac{3 J^{4/3} \left(g^2 \left(k^2-8\right)\right)}{2 \left(\sqrt[3]{2 \pi } \left(k^2-4\right)\right)}+O\left(J^2\right), &\quad k\ge 3\,.
\end{array}
\right.
\end{equation}
Note that the mass becomes negative for $k\ge 3$ at small $J$. In fact, the $J=0$ solutions become the negative-mass static AdS solitons constructed from Einstein metrics in \cite{Clarkson:2005qx}. For large $J$, the $M(J)$ function takes a universal form:
\begin{equation}
M=2 g J+\frac{3 \sqrt[3]{\pi } J^{2/3}}{2\ 2^{2/3}}+\frac{3 \pi ^{2/3} \sqrt[3]{J}}{8 \sqrt[3]{2} g}-\frac{(2k^2+1)\pi }{16 g^2}+O\left(J^{-\fft13}\right)\,.\label{MJsolitonk}
\end{equation}
Note that for general $k$, the mass and charge should be replaced by $M_k$ and $J_k$ defined in \eqref{mjqk}.
Intriguingly, if we take $k=0$, so that the Euclidean surface gravity $\kappa$ vanishes, we have $\eta=0$, and the local solution reduces precisely to the AdS DBR studied in section \ref{sec:dring}. Thus the DBR solution can also be viewed as a limiting case of the solitons, even though the parameter $k$ is a discrete variable. However, taking $k=0$ does not mean that $\psi$ becomes a real line. In this limit, the Killing horizon structure of the soliton becomes an event horizon, and there is no longer any further restriction on the period of $\psi$, which can be $\Delta \psi=4\pi/k$ for any $k$. We prefer to choose $k=1$, so that the solution is asymptotic to global AdS$_5$. In other words, the $k=0$ limit of the soliton is singular in that the topology changes completely; consequently, the asymptotic AdS spacetime remains global, the same as for the $k=1$ soliton.
Note that the large $J$ expression of the mass in \eqref{mjadsring} is precisely the special $(k=0)$ case of the more general expression \eqref{MJsolitonk}. In Fig.~\ref{MJplots}, we give the mass-angular momentum relation $M_k(J_k)$ for $k=0,1,2,3,4$, where $(M_k,J_k)$ are defined by \eqref{mjqk}, with the $k=0$ case corresponding to the AdS DBR. Since all these solutions have the same charge $Q(J)$, it is thus sensible to compare their mass for given $J$. However, the comparison makes sense only for solutions with the same asymptotic infinity.
Both the degenerate AdS black ring and the $k=1$ soliton have the same global AdS as their asymptotic infinity, and we find that the soliton has lower mass than the corresponding black ring. Since the period $\Delta \psi=4\pi/k$ for the black ring is allowed for any integer $k$, we can compare the mass of the degenerate AdS black ring with $S^3/\mathbb{Z}_k$, and we find that it is always higher than the mass of the corresponding $k$-soliton.
\begin{figure}[htp]
\begin{center}
\includegraphics[width=270pt]{MJplots.pdf}
\end{center}
\caption{\small\it Here are the mass-angular momentum relations of the AdS solitons ($g=1$) with $\mathbb{R}\times S^3/\mathbb{Z}_k$ boundaries. The $k=0$ case corresponds to the degenerate AdS black ring, whose asymptotic infinity is the global AdS, the same as the $k=1$ soliton. The soliton mass can be negative for $k\ge 3$.}
\label{MJplots}
\end{figure}
\subsection{$\Lambda=0$}
We now analyse the soliton solutions when $g=0$, such that the solutions are asymptotic to flat spacetime. It is straightforward to set $g=0$ in both the Case 1 and Case 2 solutions and in the formulae for their conserved quantities $(M,J,Q)$. The condition \eqref{muqsoliton} for being a soliton is independent of $g$. The subtlety lies in the regularity condition $\kappa=k$. When $g=0$, we have
\begin{equation}
\kappa^2 - k^2 = 4-k^2 -\frac{r_0^6}{a^6}-\frac{3 r_0^4}{a^4}=0\,.
\end{equation}
This condition cannot be satisfied unless $k=1$ or 2. When $k=2$, we must have $a=\infty$, which implies that $Q=0=J$. The solution becomes simply a direct product of time and the Eguchi-Hanson instanton. The more interesting case is $k=1$. We have
\begin{equation}
\fft{r_0^2}{a^2}=2 \cos \left(\frac{\pi }{9}\right)-1\,.
\end{equation}
It follows that we obtain the asymptotically Minkowskian rotating soliton with \eqref{Q(J)} and
\begin{equation}
M=\frac{3 \sqrt[3]{\pi } \cos \left(\frac{\pi }{9}\right)}{2^{2/3}} J^{\fft23}\,.\label{mjqsoliton2}
\end{equation}
Thus we see that $M\sim -Q$, but the solution does not satisfy the supersymmetric condition $M=- \sqrt3 Q$, which would give the BMPV solution. Furthermore, as we have remarked in section \ref{sec:dring}, the mass of the soliton is smaller than that of the corresponding DBR with the same $Q(J)$.
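As a quick cross-check of the $k=1$ root quoted above (a numerical sketch of ours, not from the paper): writing $x=r_0^2/a^2$, the condition $\kappa^2-k^2=4-k^2-x^3-3x^2=0$ with $k=1$ becomes $x^3+3x^2-3=0$, which is indeed solved by $x=2\cos(\pi/9)-1$.

```python
import math

# With x = r0^2/a^2 and k = 1, the regularity condition reduces to
# x^3 + 3x^2 - 3 = 0; check the closed-form root x = 2 cos(pi/9) - 1.
x = 2.0 * math.cos(math.pi / 9.0) - 1.0
residual = x**3 + 3.0 * x**2 - 3.0
print(abs(residual) < 1e-12)  # True
```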
\section{Conclusions}
\label{sec:conclusion}
In this paper, we analysed the charged rotating solutions \cite{clpd5sol1} in five-dimensional minimal supergravity, with or without the cosmological constant. The general solution has three nontrivial integration constants, parameterizing the mass $M$, electric charge $Q$ and two equal angular momenta $J$. For suitable $(M,Q,J)$, we found that new spacetime geometries can arise that are best described as degenerate black rings, or DBRs. The metric functions of the DBRs satisfy the condition \eqref{dbrcond} and thus they have one free parameter $J$; the mass and charge are functions of $J$. The relation $Q(J)$ \eqref{Q(J)} is universal and does not depend on the cosmological constant, but $M(J)$ does. The DBRs are extremal black objects that are asymptotic to either Minkowski or global AdS spacetimes, and the solutions can avoid the curvature power-law singularity by introducing an orbifold singularity on their AdS$_3\times S^2$ near-horizon geometry. Two factors help to evade the no-go theorems of \cite{Grover:2013hja,Khuri:2017zqg}: our solutions are not supersymmetric, and the ring sizes are all degenerate.
We studied two routes that lead to the DBR solutions from the general class of solutions. One is to consider extremal rotating black holes that satisfy $f(r_+)=f'(r_+)=0$. It turns out that there are two branches of such solutions and they join at the point of parameter space where the DBR emerges. At the DBR limit from the two generally different extremal black holes, the mass or the Helmholtz free energy is continuous, but the Gibbs free energies are not. The physical implication of this unusual global discontinuity requires further investigation.
We can also take the DBR limit from the rotating solitons that satisfy $f(r_+)=W(r_+)=0$. These solitons are characterized by the condition that the Euclidean surface gravity $\kappa$ is equal to the topological parameter $k$, where the level surfaces of the spatial slice are $S^3/\mathbb{Z}_k$. The DBR emerges locally as the $k=0$ limit; however, owing to the change of topology, the DBR has the same asymptotic global AdS$_5$ or Mink$_5$ as the $k=1$ soliton. The DBRs and solitons have the same $Q(J)$ relation, and the solitons have the smaller mass for given charge.
Five-dimensional gauged supergravity originates from the D3-brane in type IIB string theory, and its AdS/CFT duality with four-dimensional super Yang-Mills theory is expected to be exact. Thus our DBRs may have implications for the AdS/CFT correspondence. The DBRs and AdS solitons are dual to spinning operators whose conformal weight, $U(1)$ global charge and spin are proportional to the $(M,Q,J)$ of these bulk solutions. These operators are restricted by the negative $Q(J)$ relation \eqref{Q(J)}, and their mass for large $J$ takes the form
\begin{equation}
M=c_{\fft33} J + c_{\fft23} J^{\fft23} + c_{\fft13} J^{\fft13} + c_0 + \cdots\,,
\end{equation}
with $c_{\fft33}=2g$ being universal. (Of course, we can also give an alternative but equivalent expression as $M(Q)$.) The bulk theory gives the explicit coefficients $c$'s, with the $k=1$ soliton being the ground state. It would be of great interest to see whether this conclusion can be established in the dual CFT.
\section*{Acknowledgement}
We are grateful to Jiaju Zhang for the naming of the DBR. The paper is supported in part by
NSFC Grants No. 11875200 and No. 11935009.
\section{Introduction}
Time averaged potentials (TAP) offer a versatile tool for trapping both
charged and neutral particles. For neutral atoms the most common example in
this class of traps is the Time-averaged Orbiting Potential (TOP) which was
used in the experiments in which the first Bose-Einstein condensate (BEC) was
created \cite{Cornell,BEC1995}. The TOP trap consists of a magnetic quadrupole
trap \cite{Migdall85,Bergeman87} shifted by a uniform magnetic modulation
field rotating at a high (audio) frequency. As this rotation is slow as
compared to the Larmor precession of the atomic magnetic moments, the atoms
remain polarized with respect to the instantaneous effective magnetic field
\cite{Cornell98} as follows from the adiabatic theorem. On the other hand the
rotation is fast as compared to the orbital motion of the atoms. As a
consequence, the atomic motion consists of a fast rotating part (micromotion),
superimposed on a slow oscillating part (macromotion). In the simplest
theoretical description, the static approximation, the micromotion is
eliminated by time-averaging the instantaneous potential over a full cycle of
the modulation field.
Suppose we load a particle with given momentum $\mathbf{p}_{0}$ at position
$\mathbf{r}_{0}$ in a TOP trap using a sudden switch-on procedure. One might
naively guess that the ensuing motion is given by the dynamics in the
time-averaged potential, subject to the initial conditions\textbf{
}$\mathbf{r}=\mathbf{r}_{0}$ and $\mathbf{p}=\mathbf{p}_{0}$ but this guess
turns out to be wrong. In fact, one can show that the initial conditions for
the slow motion depend on the phase of the TOP at the time of switch-on. This
phenomenon was analyzed by Ridinger and coworkers \cite{Rid1,Rid2} for the
special case of a one-dimensional rapidly oscillating potential (ROP) with
zero average. Ridinger et al.~also showed, first for a classical particle
\cite{Rid1} and subsequently for the quantum case \cite{Rid2}, that the
amplitude and energy associated with the slow motion can be altered by
applying a suitable phase jump in the rapidly oscillating field.
In this paper we show, both theoretically and experimentally, that the
dependence on initial phase and the possibility to influence the motion by
phase jumps, is also present for a two-dimensional rotating TOP field. In
particular we show that a cloud of atoms which is initially at rest with zero
momentum acquires a sloshing motion as soon as the TOP is suddenly switched
on. This is true even if the cloud is initially at the minimum of the
effective potential. The amplitude of this slow macromotion is much larger
than that of the fast micromotion while the direction of sloshing depends on
the TOP phase at switch-on. We also demonstrate that this macromotion can be
almost entirely quenched by applying a carefully timed and sized phase jump in
the TOP field.
The motion of atoms and ultracold atomic clouds in TOP traps have been
extensively described in the literature. Following the achievement of the
first BEC \cite{BEC1995}, the use of the axially symmetric TOP was described
theoretically in \cite{Edwards,Yukalov97,Kuklov97,Minogin,Franzosi,Challis}
and explored experimentally by other groups
\cite{Heinzen,Phillips1,Kasevich,Arimondo,Foot1,Scherer} to study properties
of the BEC. The idea of the TOP was extended to an asymmetric triaxial TOP
trap developed by \cite{Phillips1} and also used by other groups
\cite{Arimondo, Scherer}. Further a number of other variations were
introduced: In many cases, it turns out to be convenient to switch on the TOP
after a preparative stage of cooling in a conventional static trap such as a
magnetic quadrupole trap (see e.g.\thinspace\cite{Phillips1}), an optically
plugged magnetic quadrupole \cite{Raman} and Ioffe-configurations
\cite{Tiecke03,Thomas,Buggle04}. Often, the transfer of the cloud from the
static to the TOP trap cannot be performed adiabatically for topological
reasons. Bearing this in mind, it becomes relevant to carefully analyze the
dynamics that may be induced by a sudden switch-on of the TOP. In addition,
applications which require manipulation of a BEC are heavily dependent on
precise control of the location of the atomic cloud and can thus benefit from
the techniques described.
In our experiments the condensate is prepared in a Ioffe-Pritchard (IP) trap
before transferring to a TOP because the use of radio-frequency (rf) induced
evaporative cooling is more efficient in a static magnetic trap, resulting in
larger condensates. Once transferred to the TOP we can create trapping
geometries that are difficult to realize using a static magnetic potential
without introducing Majorana losses associated with the presence of zero-field
points. An example is the double well potential used in \cite{Tiecke03}.
The remainder of this paper is organized as follows. In Section \ref{theory}
we calculate the motion of a cloud of atoms in a TOP which at switch-on is at
rest at the center of the trap. We discuss the motion that results and derive
the conditions under which a phase jump can lead to a substantial reduction of
the energy associated with the slow motion of the cloud. In Section
\ref{experimentaldetails} we discuss the experimental details and the
preparation of the BEC and its transfer to the TOP. In Section \ref{results}
we present the experimental results and compare with the theory of Section
\ref{theory}. Finally in Section \ref{Discussion and Outlook} we give a
summary and conclusion.
\section{Theory\label{theory}}
\subsection{Time-averaged Ioffe-Pritchard potential}
In the literature the term TOP is most often used for a spherical-quadrupole
trap combined with a rotating uniform magnetic modulation field. In this paper
we will use the term TOP in a broader context, to include the magnetic
trapping potential created by combining a IP trap with rotating modulation
field. Challis et al.~\cite{Challis} have shown that the dynamical eigenstates
of a degenerate Bose gas in a TOP are given by solutions of the usual
Gross-Pitaevskii equation but taken in a circularly translating reference
frame, that is, a reference frame the origin of which performs a rapid
circular motion but retains a constant orientation. In particular this implies
that the center of mass of a condensate in its ground state performs the same
micromotion in a TOP as a point particle with the magnetic moment of an atom.
In this spirit we use a $^{87}$Rb condensate to study the micromotion
macromotion in a TOP.
We consider a cigar-shaped Ioffe-Pritchard potential
\cite{LuitenThesis93,Bergeman87,Surkov94}
\begin{equation}
U(\boldsymbol{\varrho},z)=\mu\sqrt{\alpha^{2}\varrho^{2}+(B_{0}+\tfrac{1}{2}\beta z^{2})^{2}}, \label{instantpotential}
\end{equation}
where $\boldsymbol{\varrho}(t)$ is the radial position of a test atom with
respect to the IP symmetry axis, $\mu$ the magnetic moment of the atom, and
$\alpha$, $\beta$, $B_{0}$ the parameters for the radial gradient, the axial
curvature and offset value of the IP magnetic field. Eq.\thinspace
(\ref{instantpotential}) represents an approximate expression for the IP trap
which is valid for $\alpha^{2}\gg\beta B_{0}$ and in the limit $\varrho
\ll\alpha/\beta$ \cite{LuitenThesis93,Bergeman87,Surkov94}.
In the presence of the TOP field we transform to the circularly translating
frame \cite{Challis} and have
\begin{equation}
\boldsymbol{\varrho}(t)=\{x-\rho_{m}\cos(\omega t+\phi_{m}),y-\rho_{m}\sin(\omega t+\phi_{m})\}, \label{transformation}
\end{equation}
where $\{x,y,z\}\equiv\{\boldsymbol{\rho},z\}\equiv\mathbf{r}$ is the position
of the atom in the laboratory frame and the IP symmetry axis is displaced over
a distance $\rho_{m}=B_{m}/\alpha$ in the direction
\begin{equation}
\boldsymbol{\hat{\rho}}_{m}=\{\cos(\omega t+\phi_{m}),\sin(\omega t+\phi
_{m})\}
\end{equation}
by the uniform modulation field
\begin{equation}
\mathbf{B}_{m}=B_{m}\{\cos(\omega t+\phi_{m}),-\sin(\omega t+\phi_{m})\}
\end{equation}
applied perpendicular to the $z$ axis. The $y$ axis is taken along the
vertical direction, the $xz$ plane being horizontal. The modulation field
$\mathbf{B}_{m}$ rotates at angular frequency $-\omega$ (phase $-\phi_{m}$)
about the horizontal $z$ axis as illustrated in Fig.\thinspace
\ref{Fig:CircleOfDeath}. Notice that the sense of rotation of the
IP-field-minimum is opposite to that of the $\mathbf{B}_{m}$ field, in
contrast to the original TOP configuration \cite{Cornell}, where the
field-zero rotates in the same direction as the bias field. This reflects the
difference between the 2D-quadrupole symmetry of the IP trap and the axial
symmetry of the spherical-quadrupole trap. The rotation of the modulation
field $\mathbf{B}_{m}$ also gives rise to a fictitious field $\mathbf{B}_{\omega}$ which has to be added or subtracted from the offset field
$\mathbf{B}_{0}$, depending on the sense of rotation,
\begin{equation}
\mathbf{B}_{0}\rightarrow\mathbf{B}_{0}(1\pm\mathbf{B}_{\omega}/\mathbf{B}_{0})=\mathbf{B}_{0}(1\pm\omega/\omega_{L}),
\end{equation}
where $\omega_{L}=g_{F}\mu_{B}B_{0}/\hbar$ is the Larmor frequency of magnetic
moment of the atoms, with $g_{F}$ the hyperfine $g$ factor and $\mu_{B}$ the
Bohr magneton. In a standard TOP, the fictitious field in combination with
gradient of the quadrupole field gives rise to a shift of the equilibrium
position of the cloud in the direction of the axis around which the field
rotates \cite{Cornell98,Arimondo}. In our IP-TOP the axial field is
homogeneous near the origin and the shift is absent; the change in $B_{0}$
turns out to be small and will be neglected in this paper.
For $\beta=0$ and $B_{0}=0$ the potential $U(\boldsymbol{\varrho},z)$
corresponds to that of a two-dimensional quadrupole field with a zero-field
line that rotates at distance $\rho_{m}$ about the $z$ axis as a result of the
modulation. For $B_{0}=0$ the distance $\rho_{m}$ is known as the radius of
the `circle of death'. For $B_{0}<0$ the potential corresponds to two TOP
traps separated by $\Delta z=2(2|B_{0}|/\beta)^{1/2}$ \cite{Tiecke03}. In this
paper we will consider only the case $B_{0}\geq0$.
In the common description of the TOP one analyzes the motion in an effective
potential, obtained by time averaging the static trap over a full rotation
period of the $\mathbf{B}_{m}$ field. For Eq.\thinspace(\ref{instantpotential}) this procedure yields the effective potential
\begin{equation}
\mathcal{U}(\mathbf{r})=\frac{1}{2\pi}\int_{0}^{2\pi}U(x-\rho_{m}\cos\zeta,y-\rho_{m}\sin\zeta,z)d\zeta, \label{TOPpotential}
\end{equation}
where $\zeta=\omega t+\phi_{m}$. For the cigar-shaped IP potential we consider
the condition
\begin{equation}
\omega\gg\Omega_{\rho}\gg\Omega_{z}, \label{topcondition}
\end{equation}
where, for an atom of mass $m$, the quantity $\Omega_{z}=(\mu\ \beta/m)^{1/2}$
is the axial harmonic oscillation frequency in the effective potential
$\mathcal{U}(0,0,z)$. Analogously, the harmonic oscillation frequency in the
radial plane is given by
\begin{equation}
\Omega_{\rho}=\sqrt{\frac{\mu\alpha^{2}}{m\bar{B}_{0}}(1-\tfrac{1}{2}B_{m}^{2}/\bar{B}_{0}^{2})}\equiv\Omega, \label{Omega-Big}
\end{equation}
where $\bar{B}_{0}=(B_{0}^{2}+B_{m}^{2})^{1/2}$ is the offset value of
the effective potential at the origin \cite{Tiecke03}.
The first inequality in Eq.\thinspace(\ref{topcondition}) ensures that the
fast and slow radial motions of the atoms can be separated, which is the
well-known operating regime for a TOP trap \cite{Cornell}. The second
inequality implies that the axial motion in the effective trap is slowest and
that the motion can be treated as quasi two-dimensional in the radial plane.
To account for the acceleration due to gravity $\left( g\right) $, the
gravitational potential $mgy$ has to be added to Eqs.\thinspace
(\ref{instantpotential}) and (\ref{TOPpotential}). The main effect is to shift
the minimum of the potentials in the negative $y$ direction by the amount
\begin{equation}
\Delta y=g/\Omega^{2}.
\end{equation}
This expression holds as long as the gravitational sag $\Delta y$ is much
smaller than the harmonic radius $\rho_{h}\equiv\bar{B}_{0}/\alpha$.
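For the macromotion frequency quoted later in the text ($\Omega/2\pi=394~\mathrm{Hz}$), the sag is of order a micron; a quick numerical illustration (ours):

```python
import math

# Gravitational sag Delta y = g / Omega^2, using the trap frequency
# Omega/2pi = 394 Hz quoted in the text; g_earth is the gravitational
# acceleration.
g_earth = 9.81                  # m/s^2
Omega = 2 * math.pi * 394.0     # rad/s
sag = g_earth / Omega**2
print(f"{sag * 1e6:.1f} micron")  # about 1.6 micron
```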
Since $\rho_{h}\geq\rho_{m}$, the effective potential (\ref{TOPpotential}) may
be treated as harmonic as long as the motion is confined to a region around
the $z$ axis that is small compared to $\rho_{m}$. For our experiment the
harmonic approximation holds rather well and is sufficient for gaining
qualitative insight in the micro- and macromotion as will be shown in Section
\ref{sec:micromotion and macromotion}. Refinements associated with switch-on
transients and gravity are discussed in Appendix
\ref{app:Delayed Sudden-Step Model}. In the numerical analysis of Section
\ref{sec:Numerical analysis}, we solve the classical equations of motion in
the full time-dependent potential Eq.\thinspace(\ref{instantpotential}). In
this context we also comment on the validity of the harmonic approximation.
\subsection{Micromotion and macromotion
\label{sec:micromotion and macromotion}}
To analyze the effect of switching on the $\mathbf{B}_{m}$ field at $t=0$ we
first consider an atom `at rest' in the center of the effective trapping
potential $\mathcal{U}(\boldsymbol{\rho},z)$. Such an atom exhibits no
period-averaged dynamics (no macromotion) but only circular micromotion at a
frequency $\omega$ about the origin as illustrated in Fig.\thinspace
\ref{Fig:CircleOfDeath}. The radius of this stationary micromotion
\begin{equation}
\rho_{0}=\frac{\mu\alpha}{m\omega^{2}}\left( 1+B_{0}^{2}/B_{m}^{2}\right)
^{-1/2}, \label{rho-0}
\end{equation}
follows from the condition $F_{c}=m\omega^{2}\rho_{0}$ for the centripetal
force $\mathbf{F}_{c}=-\boldsymbol{\nabla}_{\boldsymbol{\rho}}U|_{\rho=0}=\mu\alpha(1+B_{0}^{2}/B_{m}^{2})^{-1/2}\boldsymbol{\hat{\rho}}_{m}$. The
speed of this stationary micromotion,
\begin{equation}
v_{0}=\omega\rho_{0}=\frac{\mu\alpha}{m\omega}\left( 1+B_{0}^{2}/B_{m}^{2}\right) ^{-1/2}, \label{vel-0}
\end{equation}
is directed orthogonally to the direction $\boldsymbol{\hat{\rho}}_{m}$. Such
pure micromotion only results if at $t=0$ the atom is already moving at speed
$v_{0}$ along a circle of radius $\rho_{0}$ about the origin and is located at
position $\boldsymbol{\rho}=-\rho_{0}\boldsymbol{\hat{\rho}}_{m}$ (see
Fig.\thinspace\ref{Fig:CircleOfDeath}).
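To give a feeling for the scales involved, the following estimate evaluates Eqs.\thinspace(\ref{rho-0}) and (\ref{vel-0}) for $^{87}$Rb; the gradient $\alpha$ and the ratio $B_0/B_m$ below are placeholder values for illustration only, while $\omega/2\pi=4$~kHz is taken from the text:

```python
import math

# Illustrative micromotion radius and speed for 87Rb (assumed mu ~ mu_B
# for the |F=2, mF=2> state; alpha and B0/Bm are placeholders, not the
# experimental values).
mu = 9.274e-24        # J/T, ~ Bohr magneton
m = 1.443e-25         # kg, mass of 87Rb
alpha = 3.0           # T/m, placeholder radial gradient
omega = 2 * math.pi * 4000.0   # rad/s, from the text
B0_over_Bm = 0.0      # placeholder: B0 << Bm

rho0 = mu * alpha / (m * omega**2) / math.sqrt(1 + B0_over_Bm**2)
v0 = omega * rho0
print(f"rho0 ~ {rho0 * 1e6:.2f} um, v0 ~ {v0 * 1e3:.1f} mm/s")
```

With these numbers the micromotion radius is a fraction of a micron and the micromotion speed a few mm/s.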
\begin{figure}[ptb]
\centering
\includegraphics[
trim=0.506072in 0.234148in 0.462478in 0.291894in,
height=7.5493cm,
width=7.8319cm]{./fig1.eps}
\caption{(color online) Schematic diagram of the magnetic field configuration
in relation to the orbit of stationary micromotion (solid blue circle). The
view is along the (horizontal) $z$ axis. The orbital position and velocity of
the micromotion are denoted by $\boldsymbol{\rho}=-\rho_{0}\boldsymbol{\hat{\rho}}_{m}$ and $v_{0}$. The IP symmetry axis rotates at frequency $\omega$
(with initial phase $\phi_{m}$) about the $z$ axis on the circle of radius
$\rho_{m}$ (dashed black circle). Note that the TOP field $\mathbf{B}_{m}=B_{m}\boldsymbol{\hat{\rho}}_{m}$ rotates at frequency $-\omega$ (phase
$-\phi_{m}$), reflecting the 2D-quadrupole symmetry (dashed red circle) of the
IP trap.}
\label{Fig:CircleOfDeath}
\end{figure}
Obviously an atom at $t=0$ at rest at the origin, $\boldsymbol{\rho}=\{0,0\}$
does not satisfy these initial conditions and as a consequence its macromotion
will start with a finite launch speed. We will see that the result is
elliptical motion at frequency $\Omega$, with the long axis approximately
perpendicular to the initial direction of $\boldsymbol{\hat{\rho}}_{m}$ and
with a substantial amplitude, of order $\left( \omega/\Omega\right) \rho
_{0}$. Usually this motion is undesired and our aim is to quantify it and
subsequently quench it by imparting a phase jump to the TOP-field.
To gain insight into the way in which the sudden switch-on of the TOP
influences the macromotion of an atom initially at rest at the origin, we
first consider a simple model in which it is assumed that the motion in the
radial plane can be decomposed into two harmonic components, oscillating at
the micromotion and macromotion frequencies $\omega$ and $\Omega$,
respectively. The position $\boldsymbol{\rho}(t)$ and velocity
$\boldsymbol{\dot{\rho}}(t)$ are given by
\setlength\arraycolsep{2pt} \begin{eqnarray}
\boldsymbol{\rho}(t) & =\left\{ \rho_{0}\cos(\omega t+\phi),\rho_{0}\sin(\omega t+\phi)\right\} +\nonumber\\
& \ \ \ \ +\left\{ X_{0}\cos(\Omega t+\varphi_{x}),\ Y_{0}\sin(\Omega t+\varphi_{y})\right\} \label{r_approx}\\
\boldsymbol{\dot{\rho}}(t) & =\left\{ -v_{0}\sin(\omega t+\phi),v_{0}\cos(\omega t+\phi)\right\} +\nonumber\\
& \ \ \ \ +\left\{ -V_{0,x}\sin(\Omega t+\varphi_{x}),\ V_{0,y}\cos(\Omega t+\varphi_{y})\right\} , \label{v_approx}
\end{align}
where $X_{0}$ $(Y_{0})$ is the amplitude, $V_{0,x}=\Omega X_{0}$
$(V_{0,y}=\Omega Y_{0})$ the velocity amplitude and $\varphi_{x}$
$(\varphi_{y})$ the initial phase of the macromotion in $x$ $(y)$ direction;
$\phi$ is the initial phase of the micromotion. The atom starts at rest at the
origin, hence the initial conditions are $\boldsymbol{\rho},\boldsymbol{\dot
{\rho}}=0$ at $t=0$. If the condition
\begin{equation}
\omega\gg\Omega\label{TOPcondition}
\end{equation}
is satisfied, the acceleration due to the micromotion dominates over that of
the macromotion. The total acceleration may be approximated by
$\boldsymbol{\ddot{\rho}}\simeq\mathbf{F}_{c}/m$. In other words,
$\boldsymbol{\ddot{\rho}}$ points in the direction $\boldsymbol{\hat{\rho}}_{m}$, which is opposite to the direction of $\boldsymbol{\rho}$ (as per
Fig.\thinspace\ref{Fig:CircleOfDeath}). Hence, the initial phase of the
micromotion is $\phi\simeq\phi_{m}+\pi$, where $\phi_{m}$ is fixed by the
phase of the rotating $\mathbf{B}_{m}$ field \cite{ApproxPhase}. Without loss
of generality we can set $\phi_{m}=0$, which means that $\boldsymbol{\hat
{\rho}}_{m}$ is oriented along the positive $x$ direction at $t=0$. With this
choice and setting $\phi=\phi_{m}+\pi$, we find from the initial conditions:
$\varphi_{x},\varphi_{y}=0$, $X_{0}=\rho_{0}$, and $Y_{0}=(\omega/\Omega
)\rho_{0}$. Substituting these values in Eq.\thinspace(\ref{r_approx}) we
obtain an equation for the macromotion representing an elliptical orbit with
its major axis oriented perpendicular to the instantaneous direction
$\boldsymbol{\hat{\rho}}_{m}$ of the $\mathbf{B}_{m}$ field at $t=0$. Since
the amplitude of the macromotion along its major axis is larger than the
micromotion by the factor $\omega/\Omega$, a substantial sloshing motion
results from the sudden switch-on. Note that with increasing $\omega$, the
micromotion amplitude $\rho_{0}$ decreases like $1/\omega^{2}$ whereas the
amplitude of the sloshing motion $Y_{0}$ decreases only like $1/\omega$. For
this reason the sloshing cannot be neglected in most practical cases involving
audio-frequency modulation.
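As a consistency check (ours, not part of the original analysis), the amplitude relations derived above can be evaluated directly for the trap parameters used later in this paper ($\omega/2\pi=4$~kHz, $\Omega/2\pi=394$~Hz):

```python
import math

def switch_on_amplitudes(rho0, omega, Omega):
    """Macromotion amplitudes after sudden switch-on from rest at the
    origin (phi_m = 0): X0 = rho0 and Y0 = (omega/Omega) * rho0."""
    return rho0, (omega / Omega) * rho0

omega = 2 * math.pi * 4000.0    # micromotion (TOP) frequency, rad/s
Omega = 2 * math.pi * 394.0     # macromotion (trap) frequency, rad/s
rho0 = 0.30e-6                  # micromotion radius, m

X0, Y0 = switch_on_amplitudes(rho0, omega, Omega)
print(X0 / rho0, round(Y0 / rho0, 1))   # -> 1.0 10.2

# Scaling with the modulation frequency: since rho0 falls off as 1/omega^2,
# the sloshing amplitude Y0 = (omega/Omega) * rho0 falls off only as 1/omega.
```

The printed ratio $Y_{0}/\rho_{0}\simeq10.2$ matches the value listed in Table~\ref{table:FitResultsA}.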
\begin{figure}[ptb]
\centering
\includegraphics[
height=4.228cm,
width=8.2621cm]
{./fig2.eps}
\caption{(color online) Numerically calculated trajectories in the $xy$ plane
with the $x$- and $y$-positions shown against time (left) and parametric plots
of the same trajectory in the $xy$ plane (middle and right) of a particle
initially at rest at the origin, after instant switch-on (black lines). The
dotted red curves correspond to a switch-on time of $3~\mu\mathrm{s}$
of the TOP field, a settling time for the value of $B_{0}$ as
well as the presence of gravity. The trap frequencies are $\omega/2\pi=4$ kHz
and $\Omega/2\pi=394~\mathrm{Hz}$. Units are scaled to the TOP radius
$\rho_{m}$.}
\label{theortraject}
\end{figure}
\subsection{Numerical analysis\label{sec:Numerical analysis}}
To validate the analytical model introduced in Section
\ref{sec:micromotion and macromotion}, we numerically integrate the classical
equations of motion in the full time-dependent potential given by
Eq.\thinspace(\ref{instantpotential}) for $z=0$, $v_{z}=0$ and $\phi_{m}=0$.
The result for the trajectory is given in Fig.\thinspace\ref{theortraject} and
exhibits the sloshing macromotion described above. The choice of parameters is
such that it matches the experimental conditions that will be presented in
Section \ref{experimentaldetails}.
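Eq.\thinspace(\ref{instantpotential}) is defined earlier in the paper and is not reproduced in this section. As a minimal stand-in (our own simplification, not the potential actually integrated for the figure) one can integrate a harmonic trap of frequency $\Omega$ whose center rotates at $\omega$ on a circle of radius $\rho_{m}$; this toy model has an exact steady micromotion of radius $\rho_{0}=\Omega^{2}\rho_{m}/(\omega^{2}-\Omega^{2})$ and reproduces the sloshing amplitudes $X_{0}=\rho_{0}$, $Y_{0}=(\omega/\Omega)\rho_{0}$ found above:

```python
import math

# Toy stand-in for the time-dependent TOP potential (our simplification):
# a harmonic trap of frequency Omega whose center rotates at omega on a
# circle of radius rho_m.
omega = 2 * math.pi * 4000.0   # TOP rotation frequency, rad/s
Omega = 2 * math.pi * 394.0    # effective trap frequency, rad/s
rho_m = 19.5e-6                # radius of the rotating trap center, m

def deriv(t, s):
    x, y, vx, vy = s
    cx, cy = rho_m * math.cos(omega * t), rho_m * math.sin(omega * t)
    return (vx, vy, -Omega**2 * (x - cx), -Omega**2 * (y - cy))

def integrate(t1, dt):
    """RK4 integration from rest at the origin; returns x(t), y(t) samples."""
    t, s = 0.0, (0.0, 0.0, 0.0, 0.0)
    xs, ys = [], []
    while t < t1:
        k1 = deriv(t, s)
        k2 = deriv(t + dt / 2, tuple(a + dt / 2 * b for a, b in zip(s, k1)))
        k3 = deriv(t + dt / 2, tuple(a + dt / 2 * b for a, b in zip(s, k2)))
        k4 = deriv(t + dt, tuple(a + dt * b for a, b in zip(s, k3)))
        s = tuple(a + dt / 6 * (b + 2 * c + 2 * d + e)
                  for a, b, c, d, e in zip(s, k1, k2, k3, k4))
        t += dt
        xs.append(s[0]); ys.append(s[1])
    return xs, ys

rho0 = Omega**2 * rho_m / (omega**2 - Omega**2)  # steady micromotion radius
xs, ys = integrate(2 * math.pi / Omega, 1e-7)    # one macromotion period
# Sloshing dominates along y: max|y| ~ (omega/Omega)*rho0, max|x| ~ 2*rho0.
print(max(map(abs, ys)) / rho0, max(map(abs, xs)) / rho0)
```

The $y$ excursion of order $(\omega/\Omega)\rho_{0}$ and the $x$ excursion of order $2\rho_{0}$ mirror the sloshing behavior of the full calculation; the toy model is only meant to make the mechanism reproducible, not to match Fig.\thinspace\ref{theortraject} quantitatively.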
\begin{table}[t] \centering
\caption{Comparison of numerical results (num) with the analytical model (AM); +ab - including refinements (a) and (b); +abc - all refinements included.}
\begin{tabular}
[c]{c|c|c|c|c|c|c}\hline\hline
& $\phi_{m}$ & $\theta/\pi$ & $\varphi_{x}/\pi$ & $\varphi_{y}/\pi$ &
$X_{0}/\rho_{0}$ & $Y_{0}/\rho_{0}$\\\hline
{\small num} & $0$ & $0$ & $0$ & $0$ & $1$ & $10.2$\\
{\small AM} & $0$ & $0$ & $0$ & $0$ & $1$ & $10.2$\\\hline
{\small num+ab} & $0$ & $0.024$ & $0.22$ & $0.04$ & $1.34$ & $10.2$\\
{\small AM+ab} & $0$ & $0.024$ & $0.23$ & $0.04$ & $1.34$ & $10.2$\\\hline
{\small num+abc} & $0$ & $0.017$ & $0.20$ & $0.06$ & $0.82$ & $6.5$\\
{\small AM+abc} & $0$ & $0.021$ & $0.23$ & $0.06$ & $0.85$ & $6.5$
\end{tabular}
\label{table:FitResultsA}
\vspace{3mm}
\end{table}
The solid black lines in Fig.\thinspace\ref{theortraject} correspond to sudden
switch-on of the TOP trap at $t=0$ for an atom initially at rest at the origin
in the absence of gravity. The figure clearly shows the micromotion
superimposed onto the macromotion orientated along the $y$ direction. The
amplitudes and phases of the macromotion obtained by fitting Eq.\thinspace
(\ref{r_approx}) to the results of the numerical calculation agree accurately
with the analytical model of Section \ref{sec:micromotion and macromotion}
(see Table~\ref{table:FitResultsA}). A more detailed comparison reveals that
anharmonicities play a minor role; the harmonics of both the micro- and
macromotion have amplitudes which are at least two orders of magnitude smaller
than those of the fundamentals.
In order to allow a better comparison with the experiments to be discussed
below we have also performed the numerical analysis including several
refinements that pertain to our specific experimental situation. These effects
are: (a) a difference $\left( \delta y\right) $ in gravitational sag between
the IP and the TOP trap; (b) an exponential switching transient of the current
in the TOP coils and correspondingly in the $\mathbf{B}_{m}$ field $\left(
\tau_{1/e}=3~\mu\mathrm{s}\right) $; (c) a switching transient of
$\sim0.5~\mathrm{ms}$ in the offset field from $B_{0}=9.5\times10^{-5}
~\mathrm{T}$ at $t=0$ to the final value $B_{0}=3.1\times10^{-5}
~\mathrm{T}$.
The initial gravitational sag in the IP trap is $1.2\mathrm{~\mu m.}$ When
switching on the TOP, the sag $\Delta y$ jumps in $\sim3~\mu\mathrm{s}$ to
$1.7~\mathrm{\mu m}$ and settles in $\sim0.5~\mathrm{ms}$ to its final value
$1.6~\mathrm{\mu m}$ due to the decrease of $B_{0}$. Thus the gravitational
sag increases jump-wise and settles at $\delta y=0.4~\mathrm{\mu m}$. During
the same transient the radius of the stationary micromotion grows from
$\rho_{0}=0.21~\mathrm{\mu m}$ to $\rho_{0}=0.33~\mathrm{\mu m}$ and $\Omega$
increases by about $5\%$.
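The quoted sag values are consistent with the harmonic estimate $\Delta y=g/\Omega^{2}$; the following check (ours) uses the radial trap frequencies quoted in this paper, 455~Hz for the IP trap and 394~Hz for the final TOP trap:

```python
import math

def grav_sag(trap_freq_hz, g=9.81):
    """Gravitational sag g / Omega^2 of a harmonic trap of frequency
    Omega = 2*pi*trap_freq_hz (harmonic estimate)."""
    return g / (2 * math.pi * trap_freq_hz) ** 2

sag_ip = grav_sag(455.0)    # Ioffe-Pritchard trap, radial frequency in Hz
sag_top = grav_sag(394.0)   # final TOP trap
print(round(sag_ip * 1e6, 1), round(sag_top * 1e6, 1))  # -> 1.2 1.6 (micrometer)
print(round((sag_top - sag_ip) * 1e6, 1))               # -> 0.4 = delta y
```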
The dotted red traces in Fig.\thinspace\ref{theortraject} correspond to the
numerical calculation including all the above refinements relevant to the
experiments. We have also investigated the effects of gravity, $\mathbf{B}
_{m}$-switching and $B_{0}$-switching separately. We find that the main effect
of the settling time of $B_{0}$ is to reduce the amplitude along the major
axis by $\sim35\%$. The combined effect of changing gravitational sag and
$\mathbf{B}_{m}$ transient is to slightly increase the $x$ amplitude as well
as to produce a slight tilt angle of the trajectory (see right-most panel of
Fig.\thinspace\ref{theortraject}).
The tilt angle $\theta$ of the macromotion also follows from a fit of
Eq.\thinspace(\ref{r_approx}) to the numerical results: for known values of
$X_{0}$, $Y_{0}$, $\varphi_{x}$ and $\varphi_{y}$ the angle of rotation
$\vartheta$ to align the coordinate system along the major and minor axis is
given by
\begin{equation}
\vartheta=\tfrac{1}{2}\tan^{-1}\left[ 2\sin(\varphi_{x}-\varphi_{y}
)X_{0}Y_{0}/(Y_{0}^{2}-X_{0}^{2})\right] \label{eq:rotation}
\end{equation}
For $\phi_{m}=0$ the tilt angle equals the rotation angle $(\theta=\vartheta)$.
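The rotation-angle formula above translates directly into code; with the AM+ab values of Table~\ref{table:FitResultsA} it reproduces the quoted tilt $\theta\simeq0.024\pi$ (a check of ours, using the tabulated values rather than a new fit):

```python
import math

def rotation_angle(X0, Y0, phi_x, phi_y):
    """Rotation angle aligning the axes with the macromotion ellipse
    (the arctan formula above)."""
    return 0.5 * math.atan(2 * math.sin(phi_x - phi_y) * X0 * Y0
                           / (Y0 ** 2 - X0 ** 2))

# 'AM+ab' row of the comparison table: X0, Y0 in units of rho0, phases in rad.
theta = rotation_angle(1.34, 10.2, 0.23 * math.pi, 0.04 * math.pi)
print(round(theta / math.pi, 3))            # -> 0.024

# Sudden switch-on without refinements (phi_x = phi_y = 0): no tilt.
print(rotation_angle(1.0, 10.2, 0.0, 0.0))  # -> 0.0
```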
The results of a fit of Eq.\thinspace(\ref{r_approx}) to the numerical results
including only the refinements (a) and (b), as well as a fit including all
three refinements (a), (b) and (c) are also given in
Table~\ref{table:FitResultsA}. Extending the analytical model to include the
refinements (a) and (b) is straightforward and given in detail in Appendix
\ref{app:Delayed Sudden-Step Model}. The expressions for the amplitudes and
phases depend on the model parameter $\tau_{0}$ and are given by
Eqs.\thinspace(\ref{X0})-(\ref{Phi-y}) of the appendix. The model parameter
$\tau_{0}$ is chosen by ensuring that the value of the tilt angle $\theta$ of
the model reproduces that of a fit to the numerical solution for zero settling
time, $\theta=0.024\pi$. This results in $\tau_{0}=3.5~\mu\mathrm{s}$.
Excellent agreement is obtained with the numerical model as is shown in
Table~\ref{table:FitResultsA}. Insight in the cause of the reduction of the
major-axis amplitude associated with the settling behavior of $B_{0}$ can also
be gained using the analytical model. As discussed in Appendix
\ref{app:Delayed Sudden-Step Model} the major refinement is to change the launch
speed corresponding to the initially smaller value of $\rho_{0}$. Although
this refinement captures the origin of the $35\%$ reduction of the major axis
amplitude, Table~\ref{table:FitResultsA} shows that the overall agreement with
the numerical model is less favorable.
\subsection{Phase jumps}
\label{sec:Phase jumps} Let us now analyze how the macromotion can be
quenched. For a one-dimensional, rapidly-oscillating potential it was
demonstrated in Ref.\thinspace\cite{Rid1} that the amplitude of the
macromotion can be quenched by an appropriate phase-jump of the modulation
field. For the 2D motion in a TOP, the success of such an approach is not
\emph{a priori} obvious because the phase jumps for the $x$- and $y$ motion
cannot be selected independently. Yet, as will be shown below, also for the
TOP it is possible to quench both the $X_{0}$- and $Y_{0}$ amplitudes more or
less completely by imposing a single phase jump $\Delta\phi_{m}$ to the
$\mathbf{B}_{m}$ field.
\begin{figure}[ptb]
\centering
\includegraphics[
height=4.848cm,
width=8.2642cm]
{./fig3.eps}
\caption{(color online) Explanatory diagram for the phase jump. Left: cloud
trajectory (black solid line) along with macromotion trajectory (blue dotted
line). The black dashed lines are the symmetry axes of the trap and the blue
arrows show the macromotion velocity on crossing the $x$-axis. Middle:
Expanded view of boxed region of the left panel; $\boldsymbol{\rho}\left(
t\right) $ is the position of the cloud at the time of the phase jump. The
red dashed (black dot-dashed) circle is micromotion just before (after) the
phase jump at $t=t_{a}$. Right: micromotion $(\mathbf{v})$ and
macromotion$(\mathbf{V})$ velocity vectors add up to the total velocity vector
$\boldsymbol{\dot{\rho}}\left( t\right) $.}
\label{fig:jump}
\end{figure}
For clarity we first restrict ourselves to the case $\phi_{m}=0$ and neglect
the effects of gravity and switching transients. This means that the cloud is
launched at $t=0$ in the vertical $y$ direction with a speed that is equal to
$v_{0}$, the micromotion speed. As can be seen from the trajectory depicted at
the left of Fig.\thinspace3 the macromotion speed will again be equal to
$v_{0}$ when the cloud returns close to the origin after an integer number of
macromotion half-periods. The total velocity $\boldsymbol{\dot{\rho}}\left(
t\right) $ is the vector sum of the micro- and macromotion velocities and
this quantity varies rapidly on a time scale of the micromotion period.
The essence of the quenching procedure is to apply the phase jump at a time
$t_{a}$ chosen in the interval $t_{n}-\Delta t<t<t_{n}+\Delta t$ around times
$t_{n}=n\left( \pi/\Omega\right) $ corresponding to a multiple of the
macromotion half-period. We choose $t_{a}$ such that $\boldsymbol{\dot{\rho}
}\left( t_{a}\right) $ has a magnitude equal to $v_{0}$. When the cloud
returns at the $x$ axis the micro- and macromotion speeds are both $v_{0}$ and
hence the resultant total velocity can only be equal to $v_{0}$ if the angle
between the macro- and micromotion directions is either $2\pi/3$ or $-2\pi/3$
corresponding to two distinct micromotion phases $\phi_{a}\equiv\phi
(t_{a})=\omega t_{2n-1}+\phi=\pm\pi/3$ (see Fig.\thinspace3-right). In other
words the micro- and macromotion velocity vectors form an equilateral
triangle. For each of these cases a corresponding phase jump exists,
$\Delta\phi_{m}=\pm\pi/3$ respectively, such that $\boldsymbol{\hat{\rho}}
_{m}$ is set perpendicular to $\boldsymbol{\dot{\rho}}\left( t_{a}\right) $,
which sets the macromotion velocity to zero. The result is pure micromotion if
the orbit into which the particle is kicked is centered around the origin. For
each of the two choices of $\phi_{a}$, pure micromotion results only if the
macromotion position at the time of the phase jump is equal to $(\pm\rho
_{0},0)$, where the $+$ $(-)$ sign applies for even (odd) $n$. Complete
quenching can be achieved only for specific choices of the ratio
$\omega/\Omega$. The change of orbit upon a phase jump is explained
pictorially in the middle of Fig.\thinspace3.
We now generalize to the case where the ratio $\omega/\Omega$ is not precisely
fine tuned and allow for the possibility that the macromotion speed deviates
slightly from the value $v_{0}$ assumed above. One can show that, also in this
case, the maximal reduction in macromotion energy resulting from a phase jump
is achieved when the jump is applied at a time $t_{a}$ when $\boldsymbol{\dot
{\rho}}\left( t_{a}\right) $ has a magnitude equal to $v_{0}$. The value of
$\Delta\phi_{m}$ is again selected such as to set $\boldsymbol{\hat{\rho}}
_{m}$ perpendicular to $\boldsymbol{\dot{\rho}}\left( t_{a}\right) $. By a
reasoning similar to the case described above we find that the condition of an
equilateral triangle of the three velocity vectors is now replaced by an
isosceles-triangle condition, with the micromotion velocity and
$\boldsymbol{\dot{\rho}}\left( t_{a}\right) $ both having a magnitude
$v_{0}$. This in turn means that the magnitude of the phase jump will deviate
slightly from the values $\pm\pi/3$ found above. Also, the nearest distance to
the $x$ axis at which the isosceles-triangle condition can be met is in
general not equal to zero. This means that some residual macromotion will be
present after the phase jump, with an amplitude given by the distance to the
origin of the center of the circular orbit into which the cloud is transferred
by the phase jump. One can show that there is always a choice possible where
the isosceles-triangle condition is satisfied such that this distance is
approximately $2\rho_{0}$ or less. As a consequence, even in the worst case,
the macromotion amplitude is reduced from $(\omega/\Omega)\rho_{0}$ to an
amplitude of order $\rho_{0}$.
The criterion that the acceleration be set perpendicular to the total velocity
at the time that the macromotion speed is equal to $v_{0}$ can be expressed by
the following equation:
\begin{equation}
\Delta\phi_{m}=\arctan\left[ \frac{\dot{\rho}_{y}\left( t_{a}\right) }
{\dot{\rho}_{x}(t_{a})}\right] -\phi\left( t_{a}\right) +(-1)^{k}\frac{\pi
}{2} \label{eq:phasejump}
\end{equation}
where $\dot{\rho}_{x}\left( t_{a}\right) =-v_{0}\sin\phi\left(
t_{a}\right) -V_{0,x}\sin(\Omega t_{a}+\varphi_{x})$ and $\dot{\rho}
_{y}\left( t_{a}\right) =v_{0}\cos\phi\left( t_{a}\right) +V_{0,y}
\cos(\Omega t_{a}+\varphi_{y})$ are the $x$- and $y$ components of
$\boldsymbol{\dot{\rho}}$ at time $t_{a}$ and $k=1$ for $\dot{\rho}_{x}
(t_{a})$ $>0$ and $k=0$ for $\dot{\rho}_{x}(t_{a})<0$. We return to selection
of the jump time and the use of Eq.\thinspace(\ref{eq:phasejump}) when
discussing the measurement procedure in Section
\ref{sec:Measurement procedure}.
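The prescription can be checked numerically. The sketch below (ours; the numbers are illustrative, not measured values) evaluates the phase-jump formula and verifies that the post-jump micromotion velocity $v_{0}(-\sin(\phi+\Delta\phi_{m}),\cos(\phi+\Delta\phi_{m}))$ is parallel to the total velocity, so that a macromotion component of magnitude up to $v_{0}$ is cancelled:

```python
import math

def phase_jump(vx, vy, phi_a):
    """Optimal phase jump: rotate B_m so that rho_hat_m becomes
    perpendicular to the total velocity (vx, vy); phi_a is the
    micromotion phase at the jump time. Assumes vx != 0."""
    k = 1 if vx > 0 else 0
    return math.atan(vy / vx) - phi_a + (-1) ** k * math.pi / 2

# Illustrative numbers: |v| = v0 with the total velocity at 120 degrees
# and micromotion phase pi/3 at the jump time.
v0 = 9.0e-3
phi_a = math.pi / 3
vx, vy = -0.5 * v0, (math.sqrt(3) / 2) * v0
dphi = phase_jump(vx, vy, phi_a)

# Post-jump micromotion velocity should be parallel to (vx, vy).
phi_new = phi_a + dphi
vmx, vmy = -v0 * math.sin(phi_new), v0 * math.cos(phi_new)
cos_angle = (vx * vmx + vy * vmy) / (math.hypot(vx, vy) * v0)
print(round(cos_angle, 6), round(dphi / math.pi, 3))  # -> 1.0 -0.167
```

The $k$ term compensates the quadrant ambiguity of the arctangent, so both signs of $\dot{\rho}_{x}$ yield a micromotion velocity parallel (not antiparallel) to $\boldsymbol{\dot{\rho}}(t_{a})$.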
Examples of the numerical calculations of the quenching procedure are shown in
Fig.\thinspace\ref{x-y-motiontheory}.
\begin{figure}[ptb]
\centering
\includegraphics[
height=8.4455cm,
width=8.2621cm]
{./fig4.eps}
\caption{(color online) Numerically calculated radial trajectories in the $x$-
(black) and $y$ (red) direction for the same trap parameters as used for
Fig.~\ref{theortraject}, with a quenching phase jump $\Delta\phi_{m}$ applied
at optimized $t=t_{a}$ $\simeq$ $t_{3}$ (three macromotion half-periods). (a)
instant switching, no gravity: $\Delta\phi_{m}=-\pi/3$, $t_{a}$
$=3.834\,\mathrm{ms}$; (b) including switching transients and gravity:
$\Delta\phi_{m}=-0.22\pi$, $t_{a}=3.834\,\mathrm{ms}$.}
\label{x-y-motiontheory}
\end{figure}
The near complete quenching of the macromotion shown in panel (a) is obtained
for $\delta y=0$ and $\tau=0$ with phase jump $\Delta\phi_{m}=-\pi/3$ at time
$t_{a}=3.834~\mathrm{ms}$ in the time interval around $t_{3}=3\pi/\Omega$. In
Fig.\thinspace\ref{x-y-motiontheory}b the refinements (a), (b) and (c) are
included in the simulation of the experiment. In this case the phase jump had
to be adjusted to $\Delta\phi_{m}=-0.22\pi$ for maximum quenching. Note that
the quenching is less complete. By adjusting, at constant $\Omega$, the
micromotion frequency to $\omega=4.068~\mathrm{kHz}$ and the jump time to
$t_{a}=3.769~\mathrm{ms}$, complete quenching similar to that shown in panel
(a) was obtained also when including all refinements in the numerical model.
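The quench can also be exercised end-to-end in a toy model (our simplification, not the potential used for the figures): a harmonic trap of frequency $\Omega$ whose center rotates at $\omega$ on a circle of radius $\rho_{m}$ decomposes exactly into steady micromotion of radius $\rho_{0}$ plus a macromotion ellipse, so the residual macromotion amplitude after a phase jump can be evaluated in closed form and minimized over the jump time:

```python
import math

omega = 2 * math.pi * 4000.0   # micromotion frequency, rad/s
Omega = 2 * math.pi * 394.0    # macromotion frequency, rad/s
rho0 = 1.0                     # work in units of the micromotion radius
v0 = omega * rho0              # micromotion speed

def state(t):
    """Closed-form trajectory of the toy model after sudden switch-on
    from rest at the origin (phi_m = 0)."""
    x = rho0 * (math.cos(Omega * t) - math.cos(omega * t))
    y = rho0 * ((omega / Omega) * math.sin(Omega * t) - math.sin(omega * t))
    vx = rho0 * (-Omega * math.sin(Omega * t) + omega * math.sin(omega * t))
    vy = v0 * (math.cos(Omega * t) - math.cos(omega * t))
    return x, y, vx, vy

def residual_amplitude(t_a):
    """Macromotion amplitude left after a jump at t_a that sets the new
    micromotion velocity parallel to the total velocity (the atan2 form
    of the phase-jump formula)."""
    x, y, vx, vy = state(t_a)
    a = math.atan2(vy, vx)                              # total-velocity angle
    px, py = rho0 * math.sin(a), -rho0 * math.cos(a)    # new steady micromotion
    pvx, pvy = v0 * math.cos(a), v0 * math.sin(a)
    hx, hy = x - px, y - py                             # leftover macromotion
    hvx, hvy = vx - pvx, vy - pvy
    return math.hypot(math.hypot(hx, hy), math.hypot(hvx, hvy) / Omega)

t3 = 3 * math.pi / Omega       # three macromotion half-periods
times = [t3 + i * 1e-7 for i in range(-2500, 2501)]   # +- one micro period
t_a = min(times, key=residual_amplitude)
before = (omega / Omega) * rho0                       # sloshing amplitude Y0
after = residual_amplitude(t_a)
print(before, after)   # the quench leaves an amplitude of order rho0
```

In this toy model the best jump time in the window reduces the sloshing amplitude from $(\omega/\Omega)\rho_{0}$ to below $\rho_{0}$, consistent with the near-complete quenching of panel (a).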
\section{Experimental\label{experimentaldetails}}
\subsection{Apparatus}
The experiments are done with the apparatus described in detail in
\cite{DiecTh} and \cite{BuggleTh}. We produce a BEC of $2.5\times10^{5}$
atoms of $^{87}$Rb in the $|F=2,m_{F}=2\rangle$ state in a Ioffe-Pritchard
trap using radio-frequency (rf) evaporative cooling. The symmetry axis ($z$
axis) of the trap lies horizontal with trap frequencies ($\Omega_{\rho}
/2\pi=455(5)~$\textrm{Hz}, $\Omega_{z}/2\pi=21~$\textrm{Hz}) and the magnetic
field offset $B_{0}=9.5(3)\times10^{-5}~\mathrm{T}$, $\alpha=3.53~\mathrm{T/m}
$ and $\beta=266~\mathrm{T/m}^{2}$. The Thomas-Fermi radius of the BEC is
$2.2~\mu\mathrm{m}$. The TOP field is produced by two pairs of coils, one in
the $x$ direction, the other in the $y$ direction as described previously in
\cite{Tiecke03}. The coils consist of only two windings to keep the inductance
low. The current for the TOP is generated by a TTI 4 channel arbitrary
waveform generator (TGH 1244), amplified by a standard audio-amplifier (Yamaha
AX-496). The current used is $I_{m}=3.0~\mathrm{A}$ and the field produced is
$B_{m}=6.8(2)\times10^{-5}~\mathrm{T}$. All measurements in the TOP are done
with $\Omega/2\pi=394(4)~\mathrm{Hz}$ ($B_{0}=3.1\times10^{-5}~\mathrm{T}$).
Detection is done by time-of-flight absorption imaging along the $z$ axis
using a one-to-one transfer telescope to image the $xy$ plane onto a Princeton
TE/CCD-512EFT CCD camera with $15~\mu\mathrm{m}$ pixel resolution. All
measurements are carried out with the same flight time $\Delta t_{\mathrm{TOF}
}=23~\mathrm{ms}$, giving rise to an expanded cloud radius of $\sim
140~\mu\mathrm{m}$.
\subsection{Measurement procedure\label{sec:Measurement procedure}}
Our experiments on phase-jump-controlled motion in a TOP trap are done with
the $\mathbf{B}_{m}$ field operated at $\omega/2\pi=4\mathrm{~kHz}$. This
frequency is sufficiently high $\left( \omega/\Omega\gtrsim10\right) $ to
satisfy the `TOP condition' Eq.\thinspace(\ref{TOPcondition}). The frequency
is chosen lower than in a typical TOP to ensure that the speed of the
stationary micromotion, $9~\mathrm{mm/s}$ as estimated with Eq.\thinspace
(\ref{vel-0}), is accurately measurable. In the experiments we start with an
equilibrium BEC in the IP trap described above. At $t=0$ we switch on the
$\mathbf{B}_{m}$ field, using $B_{0}$ to tune the measured trap frequency to
$\Omega/2\pi=394~\mathrm{Hz}$. As the trap minimum shifts down by $\delta
y=0.40~\mu\mathrm{m}$, the initial position of the cloud is slightly above the
trap center. The $1/e$-switching time of the $\mathbf{B}_{m}$ field was
measured to be $\tau\thickapprox3~\mu\mathrm{s}$, which corresponds to
$\omega\tau\thickapprox0.08$. When changed, the $B_{0}$ field settles to a new
value after a damped oscillation with a frequency of 650 Hz and a damping time
$\tau^{\prime}$ of 0.56 ms. This corresponds to $\Omega\tau^{\prime
}\thickapprox0.2$. The velocity $\boldsymbol{\dot{\rho}}$ of the BEC in the
radial plane at the time of release is determined by time-of-flight absorption
imaging along the $z$ axis. For the chosen flight time of $23~\mathrm{ms,}$ a
speed of $1~\mathrm{mm/s}$ corresponds to a displacement of $23~\mu\mathrm{m}$
with respect to a cloud released from the same position at zero velocity. A
cloud released at rest at time $t_{rel}$ is imaged at position $\mathbf{R}
_{0}=\boldsymbol{\rho}(t_{rel})+\frac{1}{2}\boldsymbol{\ddot{\rho}}
_{g}\,\Delta t_{\mathrm{TOF}}^{2}$, where $\boldsymbol{\ddot{\rho
}}_{g}$ is the gravitational acceleration. For a finite release velocity
$\boldsymbol{\dot{\rho}}(t_{rel})$ the cloud will be imaged at $\mathbf{R}
=\boldsymbol{\dot{\rho}}(t_{rel})\Delta t_{\mathrm{TOF}}+\mathbf{R}_{0}$.
In practice we may neglect the small variation in the release position due to
the macromotion, approximating $\boldsymbol{\rho}(t_{rel})\simeq\rho(0)$,
because this variation is smaller than the shot-to-shot reproducibility of the
cloud position. From the model analysis of Section
\ref{sec:micromotion and macromotion} the variation in release position due to
the macromotion is estimated to be $\delta\boldsymbol{\rho}(t_{rel}
)\lesssim(\omega/\Omega)\rho_{0}\approx4~\mu\mathrm{m}$. The centroid of the
image of the expanded cloud is determined using a simple Gaussian fitting
procedure and has a shot-to-shot reproducibility of $\sim8\mathrm{~\mu m}$,
small as compared to the $140~\mu\mathrm{m}$ radius of the expanded cloud. No
improvement in shot-to-shot reproducibility was found by changing to a higher
magnification. Since our measurements depend only on the position of the cloud
center they are insensitive to fluctuations in atom number or density.
\begin{figure}[ptb]
\centering
\includegraphics[
height=5.972cm,
width=8.2621cm]
{./fig5.eps}
\caption{(color online) The centroid position after $23~\mathrm{ms}$ TOF
plotted in camera pixel units against holding time in the TOP trap: upper
dataset: $R_{x}$; lower dataset: $R_{y}$. The solid lines represent the fit
of Eqs.\thinspace(\ref{velFit-x}) and (\ref{velFit-y}) to the data. Note that
by a stroboscopic measurement at $0.25~\mathrm{ms}$ intervals the micromotion
is eliminated. Each point represents a single measurement.}
\label{rawdataandfits}
\end{figure}
To reconstruct the motion of the condensate in the trap we image the cloud at
$t=t_{i}$, where $t_{i}$ is the holding time in the TOP. We obtain the release
velocity by measuring the $x$ and $y$ components of the cloud centroid
$(R_{x},R_{y})$. A typical set of data is shown in Fig.\thinspace
\ref{rawdataandfits}. The micromotion is recognized as the rapid modulation on
the slow macromotion. As the frequency of the micromotion is accurately known
we avoid aliasing by sampling the motion in steps of $0.025~\mathrm{ms}$, much
shorter than the micromotion period. If we wish to look only at the
macromotion in a stroboscopic manner, we can sample precisely at the
micromotion period of $0.25~\mathrm{ms}$, with best results obtained when
sampling on the crests of the micromotion. Fitting the expressions
\begin{align}
R_{x} & =-v_{0}\Delta t_{\mathrm{TOF}}\sin(\omega t+\phi)-\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ -V_{0,x}\Delta t_{\mathrm{TOF}}\sin(\Omega
t+\varphi_{x})+R_{0,x}\label{velFit-x}\\
R_{y} & =v_{0}\Delta t_{\mathrm{TOF}}\cos(\omega t+\phi)+\nonumber\\
& \ \ \ \ \ \ \ \ \ \ \ \ \ +V_{0,y}\Delta t_{\mathrm{TOF}}\cos(\Omega
t+\varphi_{y})+R_{0,y} \label{velFit-y}
\end{align}
to the data, using the TOP frequency $\omega$ and $\Delta t_{\mathrm{TOF}}$ as
known parameters, we obtain the amplitudes $v_{0}$, $V_{0,x}$, $V_{0,y}$ as
well as the macromotion frequency $\Omega$ and the phases $\phi,\varphi
_{x},\varphi_{y}.$ Note that the fit also yields the reference position
$\mathbf{R}_{0}=\{R_{0,x},R_{0,y}\}$ but this information is superfluous for
the reconstruction of the in-trap motion. Once these quantities are determined
the motion of the condensate in the TOP trap is readily reconstructed with
Eq.\thinspace(\ref{r_approx}).
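The fitting step can be sketched as follows (our illustration on synthetic, noise-free data; the amplitudes are the fitted values reported later, and scipy is assumed to be available):

```python
import numpy as np
from scipy.optimize import curve_fit

omega = 2 * np.pi * 4000.0      # TOP frequency (held fixed in the fit), rad/s
dtof = 23e-3                    # time of flight, s

def centroid(t, v0, V0x, V0y, Om, phi, phx, phy, R0x, R0y):
    """Eqs. (velFit-x) and (velFit-y), stacked as [R_x; R_y]."""
    Rx = -v0 * dtof * np.sin(omega * t + phi) \
         - V0x * dtof * np.sin(Om * t + phx) + R0x
    Ry = v0 * dtof * np.cos(omega * t + phi) \
         + V0y * dtof * np.cos(Om * t + phy) + R0y
    return np.concatenate([Rx, Ry])

# Synthetic 'measurement': sampled every 0.025 ms as in the text, built
# from the fitted amplitudes reported in this paper.
t = np.arange(0.0, 10e-3, 0.025e-3)
true = (7.6e-3, 0.7e-3, 5.6e-3, 2 * np.pi * 394.0,
        np.pi, 0.5 * np.pi, 0.05 * np.pi, 1e-4, -2e-4)
data = centroid(t, *true)

# A moderately detuned starting point; curve_fit refines all nine values.
p0 = (8e-3, 1e-3, 5e-3, 2 * np.pi * 390.0, 3.0, 1.5, 0.2, 0.0, 0.0)
popt, _ = curve_fit(centroid, t, data, p0=p0)
print(round(popt[0] * 1e3, 2), round(popt[3] / (2 * np.pi), 1))  # v0, Omega/2pi
```

Stacking $R_{x}$ and $R_{y}$ into one residual vector fits both components simultaneously with a shared micromotion phase, as in the procedure described above.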
\begin{figure}[ptb]
\centering
\includegraphics[
trim=0.217032in 0.216686in 0.217448in 0.363322in,
height=5.6451cm,
width=8.2621cm]
{./fig6.eps}
\caption{(color online) Illustration of how to choose the optimal phase jump
and its timing. In both panels: solid black curve - experimental conditions;
dashed red curve - analytical model of Section \ref{sec:Phase jumps} for the
case of instant switching - no gravity. (a) The total speed of the cloud in
units of the micromotion speed $v_{0}$ (optimal phase jump time $t_{a}$
corresponds to $\dot{\rho}(t_{a})=v_{0}$); the dashed blue line (scale on
right) shows the $Y$ component of the macromotion position crossing zero at
$t=t_{3}$ (stationary micromotion can be achieved by adjusting $\omega$ such
that $t_{3}=t_{a}$); (b) The optimal phase jump as a function of jump time as
calculated by Eq.\thinspace(\ref{eq:phasejump}).}
\label{timing}
\end{figure}
To investigate the effect of phase jumps, we implement the approach described
in Section \ref{sec:Phase jumps}. First we determine for given $\omega$ and
$\Delta t_{\mathrm{TOF}}$ all parameters to reconstruct the motion with the
method just described. This enables us to determine the time intervals
$t_{n}-\Delta t<t<t_{n}+\Delta t$, where the cloud returns close to the
origin, and choose within this interval the time $t_{a}$, where the total
velocity $\boldsymbol{\dot{\rho}}\left( t_{a}\right) $ has magnitude $v_{0}$
as shown in Fig.\thinspace\ref{timing}a. The red dashed lines correspond to
the analytical model of Section \ref{sec:Phase jumps} for the case of instant
switching - no gravity (the case of Fig.\thinspace\ref{x-y-motiontheory}a).
The black solid lines correspond to the calculation including all relevant
experimental constraints. The phase jump $\Delta\phi_{m}$ that sets
$\boldsymbol{\hat{\rho}}_{m}$ perpendicular to $\boldsymbol{\dot{\rho}}\left(
t_{a}\right) $ is given by Eq.\thinspace(\ref{eq:phasejump}). This optimal
phase jump $\Delta\phi_{m}$ is plotted versus $t_{a}$ in a time interval
around $t_{3}=3\pi/\Omega$ in Fig.\thinspace\ref{timing}b. For the case of
instant switching - no gravity the optimum phase jump is seen to be
$\Delta\phi_{m}=-\pi/3$. At the chosen time $t_{a}$ we vary the phase jump
$\Delta\phi_{m}$ about the value suggested by Eq.\thinspace(\ref{eq:phasejump})
in search for optimal quenching. To reconstruct the residual macromotion, we
hold the cloud for a variable additional time $t_{b}$, before TOF imaging at
time $t=t_{a}+t_{b}$.
\section{Results and discussion}
\label{results}
In this section we show the results obtained with the experimental procedure
described in the previous section. We measured the macromotion induced by
switching on the $\mathbf{B}_{m}$ field for three values of the initial TOP
phase, $\phi_{m}=0,\pi/4,\pi/2$. For $\phi_{m}=0$ part of the raw data are
shown in Fig.\thinspace\ref{rawdataandfits}. In Fig.\thinspace
\ref{figure-parametric} we show the measured velocity of the macromotion
obtained with the stroboscopic method. The upper and lower panels correspond
to $\phi_{m}=0$ and $\pi/4$ respectively. The data for $\phi_{m}=\pi/2$ are
not shown; they are similar to those for $\phi_{m}=0$ but with the roles of
$x$ and $y$ interchanged.
The solid lines in the left panels of Fig.\thinspace\ref{figure-parametric}
are obtained by fitting Eqs.\thinspace(\ref{velFit-x}) and (\ref{velFit-y}) to
the full data including micromotion and provide the input for calculating the
amplitudes. Using the known TOP frequency $\omega/2\pi=4~\mathrm{kHz}$ and
flight time $\Delta t_{\mathrm{TOF}}=23~\mathrm{ms}$, the fit yields for the
velocity amplitudes, phases, and frequency: $v_{0}=7.6(2)~\mathrm{mm/s,}$
$V_{0,x\text{ }}=0.7(2)~\mathrm{mm/s}$, $V_{0,y}=5.6(2)~\mathrm{mm/s,}$
$\phi=1.00(1)\pi$, $\varphi_{x}=0.5(2)\pi$, $\varphi_{y}=$ $0.05(2)\pi$, and
$\Omega/2\pi=394(4)~\mathrm{Hz}$. The corresponding in-trap amplitudes are
$\rho_{0}\equiv v_{0}/\omega=0.30(1)~\mu\mathrm{m,}$ $X_{0}\equiv V_{0,x\text{
}}/\Omega=0.28(7)~\mu\mathrm{m}$, and $Y_{0}\equiv V_{0,y\text{ }}
/\Omega=2.3(1)~\mu\mathrm{m}$.
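The conversion from fitted velocity amplitudes to in-trap amplitudes is a one-liner per component (our restatement of the harmonic-motion relations $\rho_{0}=v_{0}/\omega$, $X_{0}=V_{0,x}/\Omega$, $Y_{0}=V_{0,y}/\Omega$):

```python
import math

omega = 2 * math.pi * 4000.0   # micromotion frequency, rad/s
Omega = 2 * math.pi * 394.0    # macromotion frequency, rad/s

# Fitted velocity amplitudes (m/s) -> in-trap position amplitudes (m).
rho0 = 7.6e-3 / omega
X0 = 0.7e-3 / Omega
Y0 = 5.6e-3 / Omega
print([round(a * 1e6, 2) for a in (rho0, X0, Y0)])  # -> [0.3, 0.28, 2.26]
```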
\begin{figure}[ptb]
\centering
\includegraphics[
trim=0.214174in 0.257684in 0.236814in 0.204854in,
height=6.3284cm,
width=8.2642cm]
{./fig7.eps}
\caption{(color online) The panels on the left show the macromotion velocities
(taken with the stroboscopic method) of the cloud centroid $x$- (black) and
$y$- (red) versus time for $\phi_{m}=0$ and $\pi/4$. The solid curves are fits
of Eqs.\thinspace(\ref{velFit-x}) and (\ref{velFit-y}) to the data. The panels
on the right represent the reconstructed trajectories in parametric form (in
units of the TOP radius $\rho_{m}=19.5~\mu\mathrm{m}$). The difference in
aspect ratio is caused by the gravity shift.}
\label{figure-parametric}
\end{figure}
The right panels in Fig.\thinspace\ref{figure-parametric} are parametric plots
of the trajectories obtained by reconstructing the motion in the trap from the
velocity fits described above. The trajectories provide a useful way to see
the effect of the initial phase of the applied $\mathbf{B}_{m}$ field and in
addition the upper panel can be directly compared with the theoretical
prediction shown in Fig.\thinspace\ref{theortraject}. As expected, the
orientation of the major axis of the macromotion is dependent on the initial
phase $\phi_{m}$ of the $\mathbf{B}_{m}$ field. The small tilt $\theta$ away
from the direction perpendicular to $\boldsymbol{\hat{\rho}}_{m}$ is clearly
visible and consistent with the calculations for a finite switch-on time and
the presence of gravity. The value obtained for $\rho_{0}$ is slightly smaller
than the value calculated with Eq.\thinspace(\ref{rho-0}) but in view of
experimental uncertainties certainly consistent with the value of $\alpha$.
The results for $\varphi_{x}/\pi$, $\varphi_{y}/\pi$, $X_{0}/\rho_{0}
,$Y_{0}/\rho_{0}$ and the tilt angle $\theta$ obtained for $\phi_{m}=0$ and
$\pi/4$ are given in Table~\ref{table:FitResults}. For comparison, also the
numerical results are included.
\begin{table}[b] \centering
\caption{Experimental results (exp) for macromotion induced by the switch-on of the TOP field for $\phi_{m}=0,\pi/4$. The data are compared with the results of the numerical calculation of Section \ref{sec:Numerical analysis} (num). In all cases the tilt angle has been calculated with the aid of Eq.\,(\ref{eq:rotation}).}
\begin{tabular}
[c]{c|c|c|c|c|c|c}\hline\hline
& $\phi_{m}$ & $\theta/\pi$ & $\varphi_{x}/\pi$ & $\varphi_{y}/\pi$ &
$X_{0}/\rho_{0}$ & $Y_{0}/\rho_{0}$\\\hline
{\small exp} & $0$ & $0.04(2)$ & $0.5(2)$ & $0.05(2)$ & $0.9(2)$ &
$7.7(3)$\\\hline
{\small num} & $0$ & $0.016$ & $0.20$ & $0.06$ & $0.82$ & $6.5$\\\hline
{\small exp} & $\pi/4$ & $0.04(2)$ & $0.49(2)$ & $0.12(2)$ & $6.6(4)$ &
$5.2(3)$\\\hline
{\small num} & $\pi/4$ & $0.013$ & $0.47$ & $0.04$ & $4.95$ & $4.55$
\end{tabular}
\label{table:FitResults}
\vspace{3mm}
\end{table}
We now turn to the results of a quenching experiment. The time $t_{a}
\simeq3.83~\mathrm{ms}$ and magnitude $\Delta\phi_{m}=-0.22\pi$ of the phase
jump have been chosen to meet the conditions necessary to quench the
macromotion as introduced in Section \ref{sec:Measurement procedure} and
illustrated in Fig.\thinspace\ref{timing}. In Fig.\thinspace\ref{quenching} we
show velocity data taken with the stroboscopic method. For $t<3.83~\mathrm{ms}
$ the data coincide with those shown in the upper panel of Fig.\thinspace
\ref{figure-parametric}; here, however, the solid lines are not a fit but represent the
macromotion velocity predicted by the numerical calculation on the basis of
the experimental parameters. These velocity curves correspond to the
macromotion part of the position plot Fig. \ref{x-y-motiontheory}b and have no
adjustable parameters. Both experiment and theory show pronounced reduction in
the amplitude of the macromotion. Although the phases of the quenched motion
cannot be determined convincingly with our signal-to-noise ratio, the
agreement between theory and experiment is satisfactory.
\begin{figure}[ptb]
\centering
\includegraphics[
trim=0.212714in 0.199177in 0.199334in 0.199631in,
height=4.0108cm,
width=8.2621cm]
{./fig8.eps}
\caption{(color online) Measured and calculated velocity of the macromotion
before and after a phase jump of $\Delta\phi_{m}=-0.22\pi$ at $t_{a}
=3.83~\mathrm{ms}$ for initial phase $\phi_{m}=0$. The open black squares
(solid red circles) correspond to the measured $V_{x}$ $\left( V_{y}\right)
$ velocity component (for $t_{a}<3.83~\mathrm{ms}$ the data coincide with
those of Fig.\thinspace\ref{figure-parametric}). Each point represents a
single measurement. The solid lines correspond to the numerical model without
any adjustable parameter as described in the text.}
\label{quenching}
\end{figure}
In general a jump in the micromotion phase produces an abrupt change in
macromotion phase and amplitude. For the case illustrated in Fig.
\ref{quenching} we obtain a reduction of more than a factor of $5$ in the
amplitude of oscillation in the $y$ direction at the expense of only a slight
increase of the amplitude in the $x$ direction. As a result the macromotion is
reduced to the size of the micromotion. The energy associated with the
macromotion is consequently reduced by a factor of about 15, reducing it to a
small fraction of the micromotion energy. This demonstrates that the initial
sloshing motion of the cloud can be efficiently quenched by applying an
appropriate phase jump angle. As pointed out in the last paragraph of Section
\ref{sec:Phase jumps} we expect that it should be possible to suppress the
macromotion almost completely by adjusting the micromotion frequency such that
$t_{3}=t_{a}$.
\begin{figure}[ptb]
\centering
\includegraphics[
trim=0.216679in 0.199046in 0.199459in 0.199294in,
height=5.4385cm,
width=8.2621cm]
{./fig9.eps}
\caption{(color online) Ratio of macromotion energy over micromotion energy
following a phase jump plotted against $\Delta\phi_{m}$ at $1.32$~\textrm{ms}
(open black squares) and $1.33$~\textrm{ms} (red circles). Each data point is
obtained from fits as described in the text. The horizontal blue dashed line
shows the initial value of the energy before the phase jump. The solid black
lines and dotted red are the numerical calculation for $1.31$~\textrm{ms} and
$1.32$~\textrm{ms} respectively. The inset shows the dependence on jump time
for fixed value of $\Delta\phi_{m}$.
\label{vary phase jump
\end{figure}
Even a small variation in the phase jump magnitude or its timing can result in
a substantial difference in quenching efficiency. This is illustrated in
Fig.\thinspace\ref{vary phase jump}, where we plot the ratio of macro- and
micromotion energy,
\begin{equation}
\frac{E_{macro}}{E_{micro}}=\frac{V_{0,x\text{ }}^{2}+V_{0,y}^{2}}{v_{0}^{2}},
\end{equation}
as the phase jump is varied in steps of $10$ degrees, for
$t_{a}=1.32~\mathrm{ms}$ and $1.33~\mathrm{ms}$, where the position and velocity
criteria are well satisfied. For most phase jumps $\Delta\phi_{m}$ the result
is an \textit{increase} in energy. The drawn lines are the predictions from
the numerical model for the same conditions at $t_{a}=1.31~\mathrm{ms}$ and
$t_{a}=1.32~\mathrm{ms}$. The plot for $t_{a}=1.32~\mathrm{ms}$ shows a deeper
reduction than that for $t_{a}=1.31~\mathrm{ms}$, as well as a shifted optimal
$\Delta\phi_{m}$. The common shift of $\sim0.01~\mathrm{ms}$ between the data
and the numerical results remains unexplained.
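The energy ratio above follows directly from the fitted macromotion velocity amplitudes $V_{0,x}$, $V_{0,y}$ and the micromotion speed $v_{0}$. A minimal Python sketch, using illustrative numbers only (not the measured values):

```python
def energy_ratio(V0x, V0y, v0):
    """E_macro / E_micro = (V0x^2 + V0y^2) / v0^2, as in the equation above."""
    return (V0x**2 + V0y**2) / v0**2

# illustrative amplitudes only, not the measured values
print(energy_ratio(1.0, 2.0, 5.0))  # 0.2
```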
\section{Summary and conclusion}
\label{Discussion and Outlook}
We have shown that a cold atomic cloud initially at rest at the minimum of the
effective potential of a TOP trap, acquires a macroscopic sloshing motion, in
addition to near circular micromotion, when the TOP is suddenly switched on.
The energy associated with this macromotion is of the same order as the energy
of the micromotion and the amplitude of the former is larger than that of the
latter by a factor $\sim\omega/\Omega$. We have theoretically described the
phenomenon and the predictions compare well with our experimental results.
As the micromotion is shared in common mode by all trapped atoms, the
associated energy does not affect the thermodynamics of the cloud in any way.
In contrast, the macromotion energy is generally unwanted and potentially
harmful. Fortunately, as we have shown, it is possible to quench this
macromotion almost completely and instantly, by applying a suitable and
properly timed phase jump to the rotating magnetic field that defines the TOP.
We have shown theoretically that this procedure works, even for the 2D case of
the TOP, which is an extension of previous theory describing similar phenomena
in 1D \cite{Rid1, Rid2}. We have presented a framework which allows a
deterministic procedure for choosing the optimal parameters for the phase
jump. Our experiments corroborate the theoretical model for the TOP in a
quantitative manner.
The macromotion induced by the switch-on and the subsequent possibility to
alter this motion by phase jumps have several consequences, some of which we
now briefly mention. For example, the sloshing motion may affect the
time-of-flight (TOF) imaging once the fields have been switched off. When comparing
TOF-images for different holding times it is in general not sufficient to
synchronize the release time to the micromotion period. The position after TOF
can be easily polluted by the non-zero macromotion, which evolves
asynchronously with the micromotion. The time scales in this experiment are on
the order of a few macromotion periods. The physics of interest in the cloud
is usually observed on much longer time scales of hundreds of such periods. On
these longer time scales, the presence of even small anharmonicities can lead
to the conversion of macromotion energy into heat. The macromotion energy can
be of the order of the chemical potential, which can have consequences for the
stability of the condensate.
The possibility to excite or quench macromotion by phase jumps of the rotating
field is a valuable feature of the TOP trap that has received little attention
in the literature. Our work shows that this feature is well understood and can
be applied in a well-controlled manner. We have primarily focussed on
quenching with a single phase jump. However, the reverse effect in which the
macromotion is excited may prove equally useful in some experiments. Also the
consequences for multiple phase-jump applications deserve attention in this
respect. We established numerically that it should be possible to excite or
deexcite large macromotion with a series of $\pi$ phase jumps at intervals of
the macromotion half-period. At each of these phase jumps, either component of
the macromotion velocity can be increased or decreased by $\sim2v_{0}$. Being
outside the primary focus of this paper, we do not further elaborate on this
interesting topic.
\section*{Acknowledgments}
We would like to thank T.G. Tiecke for fruitful discussions and sharing his
trap calculation program. JTMW thanks J. Dalibard for a valuable discussion.
This work is part of the research program on Quantum Gases of the Stichting
voor Fundamenteel Onderzoek der Materie (FOM), which is financially supported
by the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO).
\section{Introduction}
As first claimed by Euler in 1741 \cite{Eul}, and proved by Hierholzer and Wiener in 1873 \cite{HieWie}, it is now well known that a connected graph admits an Euler tour --- that is, a closed walk traversing each edge exactly once --- if and only if it has no vertices of odd degree. In this paper, we are concerned with the analogous problem for hypergraphs. As we shall see, this extended problem is much more complex; in fact, there is more than one natural way to generalize the notion of an Euler tour to hypergraphs. In this paper we shall consider three such natural extensions.
To our knowledge, not much has been previously known about eulerian properties of hypergraphs. The most in-depth treatment to date can be found in \cite{LonNar}, where Euler tours (closed walks traversing each edge exactly once) of $k$-uniform hypergraphs are considered. In particular, the authors of \cite{LonNar} determine some necessary conditions (see our Lemma~\ref{lem:NecCond}) for existence of an Euler tour in a hypergraph, and show that these are also sufficient for certain classes of $k$-uniform hypergraphs (Theorem~\ref{the:LonNar}). They also show that the problem {\sc Euler Tour} is NP-complete on the class of $k$-uniform hypergraphs for each $k \ge 3$, and even just on the class of 3-uniform hypergraphs with a connected skeleton; see \cite[Theorem 7]{LonNar}.
Some results on Euler tours in block designs have been previously obtained under the guise of universal cycles and rank-2 universal cycles \cite{Dew}, as well as 1-overlap cycles \cite{HorHur, HorHur4}. In particular, these results imply existence of Steiner triple systems \cite[Theorem 22]{HorHur} and Steiner quadruple systems \cite[Theorem 1.2]{HorHur4} with an Euler tour for all admissible orders greater than 4, existence of twofold triple systems with an Euler tour for many congruency classes of admissible orders \cite[Theorem 5.10]{Dew}, as well as existence of an Euler tour in every cyclic twofold triple system \cite[Corollary 5.11]{Dew} and every cyclic Steiner triple system of order greater than 3 \cite[Corollary 5.10]{Dew}.
As we shall see in Subsection~\ref{sec:L(H)}, existence of an Euler tour in a hypergraph implies existence of a Hamilton cycle in its intersection graph, but not conversely. A lot of work has been done on hamiltonicity of block-intersection graphs of designs. The most comprehensive of these results, due to Alspach et al. \cite{AlsHeiMoh}, shows that the block-intersection graph of every pairwise balanced design with index 1 is hamiltonian; however, this construction does not give rise to an Euler tour in the design. Finally, we mention the article \cite{SamPas}, where certain closed walks in hypergraphs (called s-eulerian, p-eulerian, 2-eulerian, and f-eulerian) are studied; however, these concepts are only remotely related to eulerian properties of hypergraphs considered presently.
In this paper, we investigate three natural generalizations of the notion of Euler tour to hypergraphs: (1) a {\em flag-traversing tour} of a hypergraph, which corresponds to an Euler tour of the incidence graph; (2) an {\em Euler tour}, a closed walk that traverses each edge of the hypergraph exactly once; and (3) an {\em Euler family}, a family of closed walks that cannot be concatenated and that jointly traverse each edge of the hypergraph exactly once. We show that the second of these problems is NP-complete even on the very restricted class of linear 2-regular 3-uniform hypergraphs, while the other two problems are polynomial on the class of all hypergraphs. In fact, Euler's Theorem \cite{Eul} implies a complete and easy characterization of hypergraphs with flag-traversing tours, and this result will be presented in Subsection~\ref{sec:ET}. The rest of the paper is devoted to Euler tours and Euler families. As expected for an NP-complete problem, only partial results will be offered for the problem of existence of Euler tours in hypergraphs; we shall present some necessary conditions, some sufficient conditions, complete characterization for certain classes of hypergraphs, as well as characterization in terms of the intersection graph, the incidence graph, and the blocks of the hypergraph. Analogous, as well as additional results will be presented for hypergraphs with Euler families.
The paper is organized as follows. In the remainder of this section, we introduce basic hypergraph terminology and the three eulerian substructures in hypergraphs, as well as completely characterize hypergraphs with a flag-traversing Euler tour. In the second part of the paper (Section~\ref{sec:main}), we focus on hypergraphs with Euler tours and Euler families. First, we examine the necessary conditions for a hypergraph to admit an Euler tour or Euler family; we show that while these necessary conditions are sufficient for connected graphs, they are not sufficient for general hypergraphs. On the other hand, we exhibit a new class of hypergraphs for which these necessary conditions are also sufficient. In Subsection~\ref{sec:L(H)} we then give a partial characterization in terms of the intersection graph of the hypergraph, and in Subsection~\ref{sec:G}, a complete (but not easy to verify) characterization in terms of the incidence graph. Block structure with respect to eulerian properties of a hypergraph is considered in Subsection~\ref{sec:blocks}. In Subsection~\ref{sec:compl}, we determine the complexity of the problems {\sc Euler Tour} and {\sc Euler Family}, and in the next two sections we focus on the latter. We give a complete verifiable characterization of hypergraphs with an Euler family using a theorem of Lov\'{a}sz, and then show that every 3-uniform hypergraph without cut edges admits an Euler family. Finally, in Subsection~\ref{sec:CD}, we show that a hypergraph admits an Euler family if and only if it can be decomposed into cycles, and exhibit a relationship between 2-factors in a hypergraph and eulerian properties of its dual.
\subsection{Preliminaries}
For any graph-theoretic terms not defined here, the reader is referred to \cite{BonMur}, and for hypergraph definitions, to our earlier manuscript \cite{BahSaj}. Note that, since 2-uniform hypergraphs can be thought of as loopless graphs, most terms defined below also extend to graphs.
A hypergraph $H$ is an ordered pair $(V,E)$, where $V$ and $E$ are disjoint finite sets such that $V \ne \emptyset$, together with a function $\psi: E \rightarrow 2^V$, called the {\em incidence function}. The elements of $V=V(H)$ are called {\em vertices}, and the elements of $E=E(H)$ are called {\em edges}. The number of vertices $|V|$ and number of edges $|E|$ are called the {\em order} and {\em size} of the hypergraph, respectively. Often we denote $n=|V|$ and $m=|E|$. A hypergraph with a single vertex is called {\em trivial}, and a hypergraph with no edges is called {\em empty}.
Two edges $e,e' \in E$ are said to be {\em parallel} if $\psi(e)=\psi(e')$, and the number of edges parallel to edge $e$ (including $e$) is called the {\em multiplicity} of $e$. A hypergraph $H$ is called {\em simple} if no edge has multiplicity greater than 1; that is, if $\psi$ is injective.
\bigskip
As is customary for graphs, the incidence function may be omitted when no ambiguity can arise (in particular, when the hypergraph is simple, or when we do not need to distinguish between distinct parallel edges). An edge $e$ is then identified with the subset $\psi(e)$ of $V$, and for $v \in V$ and $e \in E$, we then more conveniently write $v \in e$ or $v \not\in e$ instead of $v \in \psi(e)$ or $v \not\in \psi(e)$, respectively.
\bigskip
Let $H=(V,E)$ be a hypergraph. If $v,w \in V$ are distinct vertices and there exists $e \in E$ such that $v,w \in e$, then $v$ and $w$ are said to be {\em adjacent} in $H$ (via edge $e$). Similarly, if $e,f \in E$ are distinct (but possibly parallel) edges and $v \in V$ is such that $v \in e \cap f$, then $e$ and $f$ are said to be {\em adjacent} in $H$ (via vertex $v$).
Each ordered pair $(v,e)$ such that $v \in V$, $e \in E$, and $v \in e$ is called a {\em flag} of $H$; the set of flags is denoted by $F(H)$. If $(v,e)$ is a flag of $H$, then we say that vertex $v$ is {\em incident} with edge $e$.
The {\em degree} of a vertex $v \in V$ (denoted by $\deg_H(v)$ or simply $\deg(v)$ if no ambiguity can arise) is the number of edges $e \in E$ such that $v \in e$. A vertex of degree 0 is called {\em isolated}, and a vertex of degree 1 is called {\em pendant}. A hypergraph $H$ is said to be {\em $r$-regular on $V'$}, for $V' \subseteq V$, if every vertex in $V'$ has degree $r$ in $H$, and simply {\em $r$-regular} if it is $r$-regular on $V$. Similarly, $H$ is said to be {\em even (odd) on $V'$} if every vertex of $V'$ has even (respectively, odd) degree in $H$, and simply {\em even (odd)} if it is even (respectively, odd) on $V$.
The maximum (minimum) cardinality $|e|$ of any edge $e \in E$ is called the {\em rank} ({\em corank}, respectively) of $H$. A hypergraph $H$ is {\em uniform of rank} $r$ (or {\em $r$-uniform}) if $|e|=r$ for all $e \in E$. An edge $e \in E$ is called
{\em empty} if $|e|=0$.
\bigskip
A hypergraph $H'=(V',E')$ is called a {\em hypersubgraph} of a hypergraph $H=(V,E)$ if $V' \subseteq V$ and $E' \subseteq E$. For $E' \subseteq E$, the hypergraph $(\cup_{e \in E'} e,E')$, denoted by $H[E']$, is called the {\em hypersubgraph of $H$ induced by the edge set $E'$}.
If $e \in E$, we write shortly $H-e$ for the hypersubgraph $(V,E-\{e \})$, also called an {\em edge-deleted hypersubgraph}.
A hypersubgraph $H'=(V',E')$ of $H$ is called {\em spanning} if $V'=V$.
An {\em $r$-factor} of $H$ is a spanning $r$-regular hypersubgraph of $H$.
Furthermore, if $a$ and $b$ are integers with $0 \le a \le b$, then we define an {\em $(a,b)$-factor} of $H$ as a spanning hypersubgraph of $H$ in which every vertex has degree in the interval $[a,b]$.
\bigskip
If $H=(V,E)$ is a hypergraph and $V' \subseteq V$, then $H[V']$, called the {\em subhypergraph of $H$ induced by $V'$}, is the hypergraph obtained from $H$ by deleting all vertices of $V-V'$ from $V$ and from every edge of $H$, and subsequently deleting all empty edges. (See \cite{BahSaj} for a definition of general subhypergraphs.) For $v \in V$, the hypergraph $H[V-\{ v \}]$ is also denoted by $H \b v$ and called a {\em vertex-deleted subhypergraph} of $H$.
\bigskip
A hypergraph is called {\em linear} if every pair of distinct edges intersect in at most one vertex.
\bigskip
The {\em union} of hypergraphs is a straight generalization of union of graphs.
If the edge set of a hypergraph $H$ is a disjoint union of the edge sets of its hypersubgraphs $H_1,\ldots,H_k$, then we say that $H$ {\em decomposes} into $H_1,\ldots,H_k$, and write $H=H_1 \oplus \ldots \oplus H_k$. A decomposition of a hypergraph $H$ into its $r$-factors is called an {\em $r$-factorization} of $H$.
\bigskip
The main tool used in this paper is the conversion of a problem about hypergraphs to a problem about graphs. The incidence graph, to be defined below, is particularly helpful since it contains complete information about its hypergraph.
Let $H=(V,E)$ be a hypergraph with incidence function $\psi$. The {\em incidence graph} $\G(H)$ of $H$ is the graph $\G(H)=(V_{G},E_G)$ with $V_G=V \cup E$ and $E_G=\{ ve: v\in V, e \in E, v \in \psi(e)\}$.
Thus $\G(H)$ is a bipartite simple graph with bipartition $\{ V, E \}$. We call a vertex $x$ of $\G(H)$ a {\em v-vertex} if $x \in V$, and an {\em e-vertex} if $x \in E$. Note that the edge set of $\G(H)$ can be identified with the flag set $F(H)$; that is, $E_G=\{ ve: (v,e) \in F(H)\}$.
The {\em intersection graph} (or {\em line graph}) of the hypergraph $H=(V,E)$, denoted $\L(H)$, is the graph with vertex set $E$ and edge set $\{ e e': e, e'\in E, e \ne e', e \cap e' \ne \emptyset \}$. More generally, for any positive integer $\ell$, we define the {\em $\ell$-intersection graph} of the hypergraph $H=(V,E)$, denoted $\L_{\ell}(H)$, as the graph with vertex set $E$ and edge set $\{ e e': e, e'\in E, e \ne e', |e \cap e'| = \ell \}$, and the {\em $\ell^*$-intersection graph} of $H$, denoted $\L_{\ell}^*(H)$, as the graph with vertex set $E$ and edge set $\{ e e': e, e'\in E, e \ne e', |e \cap e'| \ge \ell \}$. Note that the intersection graphs do not contain full information about the hypergraph.
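Both constructions can be made concrete in a few lines of Python. The dictionary-of-sets hypergraph encoding below is our own choice for illustration, not notation from the text:

```python
from itertools import combinations

def incidence_graph(edges):
    """Bipartite incidence graph G(H): v-vertices ('v', x) on one side,
    e-vertices ('e', name) on the other, one graph edge per flag."""
    adj = {}
    for name, verts in edges.items():
        adj.setdefault(('e', name), set())
        for v in verts:
            adj.setdefault(('v', v), set())
            adj[('e', name)].add(('v', v))
            adj[('v', v)].add(('e', name))
    return adj

def intersection_graph(edges):
    """L(H): vertex set E, with e ~ e' whenever e and e' intersect."""
    return {(a, b) for a, b in combinations(sorted(edges), 2)
            if edges[a] & edges[b]}

# H = triangle: V = {1,2,3}, E = {e1 = {1,2}, e2 = {2,3}, e3 = {1,3}}
edges = {'e1': {1, 2}, 'e2': {2, 3}, 'e3': {1, 3}}
print(len(incidence_graph(edges)))        # 6 (3 v-vertices + 3 e-vertices)
print(sorted(intersection_graph(edges)))  # every pair of edges meets
```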
\bigskip
Let $H=(V,E)$ be a hypergraph, let $u,v \in V$, and let $k \ge 0$ be an integer. A {\em $(u,v)$-walk of length $k$} in $H$ is a sequence $v_0 e_1 v_1 e_2 v_2 \ldots v_{k-1} e_k v_k$ of vertices and edges (possibly repeated) such that $v_0,v_1,\ldots,v_k \in V$, $e_1,\ldots,e_k \in E$, $v_0=u$, $v_k=v$, and for all $i=1,2,\ldots, k$, the vertices $v_{i-1}$ and $v_i$ are adjacent in $H$ via the edge $e_i$. Vertices $v_0,v_1,\ldots,v_k$ are called the {\em anchors} of $W$; $v_0$ and $v_k$ are the {\em endpoints}, and $v_1,\ldots,v_{k-1}$ are the {\em internal vertices}.
Observe that since adjacent vertices are by definition distinct, no two consecutive vertices in a walk are the same. Concatenation of walks is defined in the usual way.
A walk $W=v_0 e_1 v_1 e_2 v_2 \ldots v_{k-1} e_k v_k$ in a hypergraph $H=(V,E)$ is called (i) a {\em trail}, if the anchor flags $(v_0,e_1),(v_1, e_1),(v_1,e_2),\ldots,(v_{k-1},e_k),(v_k,e_k)$ are pairwise distinct; (ii) a {\em strict trail} if the edges $e_1,\ldots,e_k$ are pairwise distinct; and (iii) a {\em path} if both the vertices $v_0,v_1,\ldots,v_k$ and the edges $e_1,\ldots,e_k$ are pairwise distinct. (Here, ``distinct'' should be understood in the strict sense; that is, parallel edges need not be distinct.)
A walk $W=v_0 e_1 v_1 e_2 v_2 \ldots v_{k-1} e_k v_k$ in a hypergraph $H=(V,E)$ is called {\em closed} if $k \ge 2$ and $v_0=v_k$. A {\em closed trail} and {\em closed strict trail} are defined analogously. If the vertices $v_0,v_1,\ldots,v_{k-1}$ and the edges $e_1,\ldots,e_k$ are pairwise distinct, then the closed walk $W$ is called a {\em cycle} (sometimes called a {\em Berge cycle} in the literature).
The following result describes the correspondence between the various types of walks in a hypergraph and its incidence graph.
\begin{lemma}\cite{BahSaj}\label{lem:W-W_G}
Let $H=(V,E)$ be a hypergraph and $G=\G(H)$ its incidence graph. Let $v_i \in V$ for $i=0,1,\ldots,k$, and $e_i \in E$ for $i=1,\ldots,k$, and let $W=v_0 e_1 v_1 e_2 v_2 \ldots v_{k-1} e_k v_k$. Denote the corresponding sequence of vertices in $G$ by $W_G$. Then the following hold:
\begin{enumerate}
\item $W$ is a (closed) walk in $H$ if and only if $W_G$ is a (closed) walk in $G$ with no two consecutive v-vertices being the same.
\item $W$ is a trail (path, cycle) in $H$ if and only if $W_G$ is a trail (path, cycle, respectively) in $G$.
\item $W$ is a strict trail in $H$ if and only if $W_G$ is a trail in $G$ that visits every $e \in E$ at most once.
\end{enumerate}
\end{lemma}
A hypergraph $H=(V,E)$ is said to be {\em connected} if for every pair of distinct vertices $u,v \in V$ there exists a $(u,v)$-walk (or equivalently, a $(u,v)$-path) in $H$. The {\em connected components} of $H$ are the maximal connected hypersubgraphs of $H$ that have no empty edges. The number of connected components of $H$ is denoted by $\cc(H)$.
\begin{theo}\cite{BahSaj}\label{the:conn}
Let $H=(V,E)$ be a hypergraph without empty edges. Then $H$ is connected if and only if its incidence graph $G=\G(H)$ is connected.
\end{theo}
\subsection{Introduction to eulerian properties of hypergraphs}
An Euler tour in a graph is usually defined as a closed trail that traverses every edge of the graph. Equivalently, an Euler tour in a graph is a closed trail that traverses each flag exactly once. This observation suggests two natural ways to generalize Euler tours to hypergraphs. We add a third one, observing that in a connected graph, a family of closed trails that jointly traverse each edge exactly once, can always be concatenated into an Euler tour.
\begin{defn}{\rm
Let $H=(V,E)$ be a hypergraph.
\begin{enumerate}
\item A {\em flag-traversing tour} of $H$ is a closed trail of $H$ traversing every flag of $H$.
\item An {\em Euler tour} of $H$ is a closed strict trail of $H$ traversing every edge of $H$.
\item An {\em Euler family} of $H$ is a family ${\cal F}=\{ T_1,\ldots,T_k\}$ of closed strict trails of $H$ such that: \\ (i) each edge of $H$ lies in exactly one trail of the family, and \\ (ii) the trails $T_1,\ldots,T_k$ are
pairwise anchor-disjoint.
\end{enumerate}}
\end{defn}
We remark that a hypergraph admits a family of closed strict trails satisfying Property (i) if and only if it admits a family satisfying Properties (i) and (ii), since two closed strict trails with a common anchor can be concatenated into a longer closed strict trail.
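The concatenation step in this remark is easy to make explicit. The sketch below uses our own encoding of a closed strict trail as an alternating list $v_0, e_1, v_1, \ldots, e_k, v_0$, and assumes vertex and edge labels come from disjoint sets:

```python
def concatenate(T1, T2):
    """Splice two closed strict trails that share an anchor vertex.
    Trails are alternating [v0, e1, v1, ..., ek, v0] lists; vertex and
    edge labels are assumed distinct."""
    anchors2 = set(T2[0:-1:2])          # anchors sit at even positions
    for i in range(0, len(T1) - 1, 2):
        if T1[i] in anchors2:
            j = T2.index(T1[i])
            # rotate T2 so it starts (and ends) at the shared anchor
            rotated = T2[j:-1] + T2[:j] + [T2[j]]
            return T1[:i] + rotated[:-1] + T1[i:]
    return None                         # the trails are anchor-disjoint

T1 = ['a', 'e1', 'b', 'e2', 'a']
T2 = ['b', 'e3', 'c', 'e4', 'b']
print(concatenate(T1, T2))  # ['a', 'e1', 'b', 'e3', 'c', 'e4', 'b', 'e2', 'a']
```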
The main objective of this paper is to characterize hypergraphs with a flag-traversing tour, Euler tour, and Euler family, respectively. As we shall see, these three seemingly similar problems --- which are, in fact, equivalent for connected graphs --- greatly differ in their difficulty. In the next section, we completely solve the first problem. Then, in the remainder of the paper, we give partial solutions to the latter two problems. In particular, we show that the second problem is NP-complete even on a very restricted subclass of hypergraphs, while the third is polynomial on the set of all hypergraphs.
\subsection{Hypergraphs with a flag-traversing tour}\label{sec:ET}
\begin{theo}\label{the:Etour}
A connected hypergraph $H=(V,E)$ has a flag-traversing tour if and only if its incidence graph has an Euler tour, that is, if and only if $\deg_H(v)$ and $|e|$ are even for all $v \in V$ and $e \in E$.
\end{theo}
\begin{proof}
Let $H=(V,E)$ be a connected hypergraph, and $G$ its incidence graph. A flag-traversing tour of $H$ is a closed walk $W$ of $H$ that traverses each flag $(v,e)$ of $H$ exactly once. Hence, by Lemma~\ref{lem:W-W_G}, $W$ corresponds to a closed walk of $G$ that traverses each edge of $G$ exactly once, that is, an Euler tour of $G$; and vice-versa. The result follows by Euler's Theorem \cite{Eul}.
\end{proof}
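Since the characterization reduces to parity and connectivity checks, it can be verified directly. A hedged sketch (the encoding and helper below are our own; a nonempty edge set is assumed):

```python
def has_flag_traversing_tour(edges):
    """Connected H has a flag-traversing tour iff every vertex degree
    and every edge cardinality is even (the theorem above)."""
    deg = {}
    for verts in edges.values():
        if len(verts) % 2:              # some |e| is odd
            return False
        for v in verts:
            deg[v] = deg.get(v, 0) + 1
    if any(d % 2 for d in deg.values()):
        return False
    # connectivity: search alternating between edges and vertices
    names = list(edges)
    seen_e, seen_v, stack = {names[0]}, set(), [names[0]]
    while stack:
        for v in edges[stack.pop()]:
            if v not in seen_v:
                seen_v.add(v)
                new = [f for f in names if v in edges[f] and f not in seen_e]
                seen_e.update(new)
                stack.extend(new)
    return seen_e == set(names) and seen_v == set(deg)

print(has_flag_traversing_tour({'e1': {1, 2, 3, 4}, 'e2': {1, 2, 3, 4}}))  # True
print(has_flag_traversing_tour({'e1': {1, 2, 3}, 'e2': {1, 2, 3}}))        # False
```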
\begin{cor}
Let $H=(V,E)$ be a hypergraph such that $\deg_H(v)$ and $|e|$ are even for all $v \in V$ and $e \in E$. Then $H$ admits a collection of cycles such that each flag of $H$ is an anchor flag of exactly one of these cycles.
\end{cor}
\begin{proof}
With the assumptions of the corollary, the incidence graph $G=\G(H)$ is even, and hence is by Veblen's Theorem \cite{Veb} an edge-disjoint union of cycles. Each cycle $C_G$ of $G$ corresponds to a cycle $C_H$ in $H$ by Lemma~\ref{lem:W-W_G}, and the edges of $C_G$ correspond to the anchor flags of $C_H$. Hence $H$ admits a collection of cycles such that each flag of $H$ is an anchor flag of exactly one of them.
\end{proof}
Note that the above corollary does not claim that $H$ admits a decomposition into cycles; compare Theorem~\ref{the:CD}.
Since hypergraphs with a flag-traversing tour are completely characterized in Theorem~\ref{the:Etour}, we shall focus on Euler tours and Euler families for the rest of the paper. We hence define the following terms.
\begin{defn}{\rm
A hypergraph is called {\em eulerian} if it admits an Euler tour, and {\em quasi-eulerian} if it admits an Euler family. }
\end{defn}
Clearly, every eulerian hypergraph is also quasi-eulerian, but as we shall see soon, the converse does not hold.
\section{Eulerian and quasi-eulerian hypergraphs}\label{sec:main}
All hypergraphs in this section are assumed to have no empty edges.
\subsection{Examining the necessary conditions}\label{sec:NC}
Lonc and Naroski \cite{LonNar} determined the following necessary conditions for a ($k$-uniform) hypergraph to be eulerian. We extend their observation to general quasi-eulerian hypergraphs, and since the two conditions are equivalent, we shall refer to them in the singular.
\begin{lemma}\label{lem:NecCond}
Let $H=(V,E)$ be a quasi-eulerian hypergraph, and $V_{odd}$ the set of odd-degree vertices in $H$. Then
\begin{equation}\label{eq1}
|E| \le \sum_{v \in V} \lfloor \frac{\deg_H(v)}{2} \rfloor
\end{equation}
and
\begin{equation}\label{eq2}
|V_{odd}| \le \sum_{e \in E} (|e|-2).
\end{equation}
Moreover, the two inequalities are equivalent.
\end{lemma}
\begin{proof}
First, we show Inequality~(\ref{eq1}). Let $\cal F$ be an Euler family of $H$, and $T$ any closed strict trail in $\cal F$. For all $u \in V$, let $m_T(u)$ denote the number of times $u$ is traversed on $T$ as an anchor vertex. (Here, the endpoints of the trail together count as one traversal.) Since vertices and edges alternate along $T$, and each edge of $H$ is traversed exactly once by a trail in $\cal F$, we have $\sum_{T \in {\cal F}}\sum_{v \in V} m_T(v) = |E|$. Clearly, for all $v \in V$, we have $\sum_{T \in {\cal F}}m_T(v) \le \lfloor \frac{\deg_H(v)}{2} \rfloor$. Hence, as claimed,
$$\sum_{v \in V} \lfloor \frac{\deg_H(v)}{2} \rfloor \ge \sum_{v \in V} \sum_{T \in {\cal F}} m_T(v) = \sum_{T \in {\cal F}}\sum_{v \in V} m_T(v) =|E| .$$
It now suffices to show that Inequalities~(\ref{eq1}) and (\ref{eq2}) are equivalent. Observe that
\begin{eqnarray*}
\sum_{v \in V} \lfloor \frac{\deg_H(v)}{2} \rfloor &=& \sum_{v \in V_{odd}} \frac{\deg_H(v)-1}{2} + \sum_{v \in V-V_{odd}} \frac{\deg_H(v)}{2} \\
&=& \frac{1}{2} \left( \sum_{v\in V} \deg_H(v) - |V_{odd}| \right)=\frac{1}{2} \left( \sum_{e\in E} |e| - |V_{odd}| \right).
\end{eqnarray*}
Hence $\sum_{v \in V} \lfloor \frac{\deg_H(v)}{2} \rfloor \ge |E|$ if and only if
$\sum_{e \in E} |e| - |V_{odd}| \ge 2|E|$, that is, if and only if $|V_{odd}| \le \sum_{e \in E} (|e|-2)$.
\end{proof}
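Both forms of the condition are cheap to evaluate. The following Python sketch (encoding ours) checks Inequality~(\ref{eq1}) and asserts, on the given input, that it agrees with Inequality~(\ref{eq2}):

```python
def necessary_condition(edges):
    """Check |E| <= sum_v floor(deg(v)/2); equivalently,
    |V_odd| <= sum_e (|e| - 2), as in the lemma."""
    deg = {}
    for verts in edges.values():
        for v in verts:
            deg[v] = deg.get(v, 0) + 1
    cond1 = len(edges) <= sum(d // 2 for d in deg.values())
    n_odd = sum(1 for d in deg.values() if d % 2)
    cond2 = n_odd <= sum(len(verts) - 2 for verts in edges.values())
    assert cond1 == cond2               # the lemma's equivalence
    return cond1

# the triangle C_3 is 2-regular: |E| = 3 = sum_v floor(2/2)
print(necessary_condition({'e1': {1, 2}, 'e2': {2, 3}, 'e3': {1, 3}}))  # True
```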
We remark that a hypergraph with a single edge cannot be quasi-eulerian, both by the definition of a closed walk, as well as by Lemma~\ref{lem:NecCond}.
\begin{quest}\label{quest1}{\rm
Is the necessary condition in Lemma~\ref{lem:NecCond} also sufficient?
}
\end{quest}
The answer to Question~\ref{quest1} is positive for graphs: if $G=(V,E)$ is a graph, then $\sum_{e \in E} (|e|-2)=0$, and if $G$ satisfies the condition in Lemma~\ref{lem:NecCond}, then $G$ has no vertices of odd degree, and every connected component of $G$ has an Euler tour. Hence $G$ is quasi-eulerian.
For hypergraphs, however, it is easily seen that in general the answer to the above question is negative. We shall now present some counterexamples, starting with the more trivial ones. We first state the most obvious limiting property, which follows straight from the definition of a walk.
\begin{lemma}\label{lem:CE-0}
A quasi-eulerian hypergraph has no edge of cardinality less than two.
\end{lemma}
\begin{example}{\rm
Fix any $n \ge 3$, and let $V=\{ v_1, v_2,\ldots,v_n \}$ and $E=\{ e_1, e_2,\ldots,e_n \}$, where $e_1=\{ v_1 \}$, $e_2=\{ v_1, v_2, v_n \}$, and $e_i=\{ v_{i-1},v_{i} \}$ for $i=3,\dots,n$.
Then $H=(V,E)$ is 2-regular, whence $\sum_{v \in V} \lfloor \frac{\deg(v)}{2} \rfloor = n=|E|$. Thus $H$ satisfies the necessary condition in Lemma~\ref{lem:NecCond}. However, $H$ is not quasi-eulerian since $e_1$ is an edge of cardinality 1.
}
\end{example}
\begin{lemma}\label{lem:CE-1}
Every edge of a quasi-eulerian hypergraph contains at least two vertices that are not pendant.
\end{lemma}
\begin{proof}
An anchor vertex of a closed trail (necessarily traversing at least two edges) must have degree at least 2. If a hypergraph is quasi-eulerian, every edge lies in a closed trail, and hence every edge contains at least 2 vertices that are not pendant.
\end{proof}
\begin{example}{\rm
Fix any $n \ge 7$, and let $V=\{ v_1, v_2,\ldots,v_n \}$ and $E=\{ e_1, e_2,\ldots,e_n \}$, where $e_1=\{ v_1,v_2,v_3 \}$, $e_i=\{ v_{i+1},v_{i+2},v_{i+3} \}$ for $i=2,\dots,n-3$, $e_{n-2}=\{ v_3,v_{n-1},v_{n} \}$, $e_{n-1}=\{ v_4,v_{n-1},v_{n} \}$, $e_{n}=\{ v_5,v_{n-1},v_{n} \}$.
Then $H=(V,E)$ has no isolated vertices, has exactly two vertices of degree 1 (namely, $v_1$ and $v_2$), and at least two vertices of degree at least 4. It follows that $\sum_{v \in V} \lfloor \frac{\deg(v)}{2} \rfloor \ge n=|E|$. Thus $H$ satisfies the necessary condition in Lemma~\ref{lem:NecCond}. However, since $e_1$ is an edge with exactly one non-pendant vertex, $H$ is not quasi-eulerian by Lemma~\ref{lem:CE-1}.
}
\end{example}
Observe that if $V'$ is the set of pendant vertices in a hypergraph $H$, then $H$ is quasi-eulerian (eulerian) if and only if $H[V-V']$ is. Hence we shall now look for counterexamples to the sufficiency of the condition in Lemma~\ref{lem:NecCond} among hypergraphs without pendant vertices (and of course, without edges of cardinality less than 2).
\medskip
A {\em cut edge} in a hypergraph $H=(V,E)$ is an edge $e \in E$ such that $\cc(H-e) > \cc(H)$. A graph with a cut edge obviously does not admit an Euler tour. The issue is more complex for hypergraphs. First, we need to distinguish between two types of cut edges. As we showed in \cite[Lemma 3.15]{BahSaj}, if $e$ is a cut edge in a hypergraph $H=(V,E)$, then $\cc(H-e) \le \cc(H)+|e|-1$. A cut edge that achieves the upper bound in this inequality is called {\em strong}; all other cut edges are called {\em weak}. Observe that a cut edge has cardinality at least two, and that any cut edge of cardinality two (and hence any cut edge in a graph) is necessarily strong.
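The strong/weak distinction can be tested directly from component counts. A sketch using a small union-find (the encoding and helpers are ours, not from the text):

```python
def components(edges, vertices):
    """Number of connected components of H = (vertices, edges)."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x
    for verts in edges.values():
        vs = list(verts)
        for w in vs[1:]:
            parent[find(w)] = find(vs[0])
    return len({find(v) for v in vertices})

def classify_cut_edge(edges, vertices, e):
    cc = components(edges, vertices)
    cc_minus = components({f: vs for f, vs in edges.items() if f != e},
                          vertices)
    if cc_minus <= cc:
        return 'not a cut edge'
    # strong iff cc(H - e) = cc(H) + |e| - 1
    return 'strong' if cc_minus == cc + len(edges[e]) - 1 else 'weak'

# in the path e1={1,2}, e2={2,3}, e3={3,4}, removing e2 leaves 2 components;
# a cut edge of cardinality 2 is necessarily strong
print(classify_cut_edge({'e1': {1, 2}, 'e2': {2, 3}, 'e3': {3, 4}},
                        {1, 2, 3, 4}, 'e2'))  # strong
```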
\begin{theo}\label{the:cut-edges}
Let $H=(V,E)$ be a hypergraph with a cut edge $e$.
\begin{enumerate}
\item If $e$ is a strong cut edge, then $H$ is not quasi-eulerian.
\item If $H-e$ has at least two non-trivial connected components, then $H$ is not eulerian.
\end{enumerate}
\end{theo}
\begin{proof}
\begin{enumerate}
\item Assume that $e$ is a strong cut edge. If $H$ admits an Euler family, then $e$ lies in a closed strict trail, and consequently in a cycle of $H$. However, by \cite[Theorem 3.18]{BahSaj}, no strong cut edge lies in a cycle --- a contradiction. Hence $H$ is not quasi-eulerian.
\item Assume that $H-e$ has at least two non-trivial connected components. Since $e$ is a cut edge of $H$, it is a cut e-vertex in its incidence graph $G=\G(H)$ \cite[Theorem 3.23]{BahSaj}, and the connected components of $G\b e$ are the incidence graphs of the connected components of $H-e$ \cite[Lemma 2.8 and Corollary 3.12]{BahSaj}. Hence, by assumption, $G\b e$ has at least two connected components with e-vertices.
Suppose $H$ has an Euler tour. By Lemma~\ref{lem:W-W_G}, $G$ has a closed trail $T$ traversing each e-vertex (including $e$) exactly once. Hence $T\b e$ is a trail that traverses every e-vertex in $G\b e$, contradicting the above. Thus $H$ is not eulerian.
\end{enumerate}
\vspace{-10mm}
\end{proof}
\begin{center}
\begin{figure}[t]
\centerline{\includegraphics[scale=0.5]{pic4.eps}}
\caption{The incidence graph of a 3-uniform quasi-eulerian hypergraph with a cut edge. (Black dots represent the v-vertices.)}\label{fig:pic4}
\end{figure}
\end{center}
Note that if a hypergraph has no strong cut edges, then it may or may not be quasi-eulerian; an example of each kind is given in Figures~\ref{fig:pic4} and \ref{fig:pic5}, respectively. Both of these examples have weak cut edges and satisfy the necessary condition from Lemma~\ref{lem:NecCond}; they are also 3-uniform.
\begin{center}
\begin{figure}[t]
\centerline{\includegraphics[scale=0.5]{pic5.eps}}
\caption{The incidence graph of a 3-uniform hypergraph with cut edges that is not quasi-eulerian. (Black dots represent the v-vertices.)} \label{fig:pic5}
\end{figure}
\end{center}
As we saw in Theorem~\ref{the:cut-edges}, a cut edge may prevent a hypergraph from admitting an Euler family. We now give a counterexample to the sufficiency of the condition in Lemma~\ref{lem:NecCond} that has no cut edges. It can be easily generalized to give an infinite family of such hypergraphs.
\begin{example}\label{count:2}{\rm
Let $H=(V,E)$ be a hypergraph whose incidence graph is shown in Figure~\ref{fig:counter2}. Then $H$ has no cut edges and no Euler family, yet it satisfies the necessary condition from Lemma~\ref{lem:NecCond}.
}
\end{example}
\begin{center}
\begin{figure}[b]
\centerline{\includegraphics[scale=0.5]{counter2.eps}}
\caption{The incidence graph of a hypergraph without cut edges that satisfies the necessary condition from Lemma~\ref{lem:NecCond} but is not quasi-eulerian. (Black dots represent the v-vertices.)}\label{fig:counter2}
\end{figure}
\end{center}
Note that the previous counterexample contains edges of cardinality 2; Stamplecoskie \cite{Sta} recently showed that for every $c \ge 2$ and $m \ge 5$, there exists a connected hypergraph of corank $c$, size $m$, and without cut edges that satisfies the necessary condition from Lemma~\ref{lem:NecCond} but is not quasi-eulerian. However, in Theorem~\ref{the:3-uniform} we shall see that every 3-uniform hypergraph without cut edges is quasi-eulerian. Hence the following question.
\begin{quest}{\rm
Does there exist a connected $k$-uniform hypergraph (for $k \ge 4$) with no cut edges that satisfies the necessary condition from Lemma~\ref{lem:NecCond} but is not quasi-eulerian? }
\end{quest}
The following example shows that Theorem~\ref{the:3-uniform} does not extend to eulerian hypergraphs; that is, not all 3-uniform hypergraphs without cut edges are eulerian.
\begin{example}\label{count:3}{\rm
Let $H=(V,E)$ be a hypergraph whose incidence graph is shown in Figure~\ref{fig:counter3}. Observe that $H$ is 3-uniform and has no cut edges. Furthermore, it is quasi-eulerian but not eulerian, and it satisfies the necessary condition in Lemma~\ref{lem:NecCond}. Observe that its 2-intersection graph is disconnected --- see Theorem~\ref{the:LonNar} below.
}
\end{example}
\begin{center}
\begin{figure}[t]
\centerline{\includegraphics[scale=0.5]{counter3old.eps}}
\caption{The incidence graph of a 3-uniform hypergraph without cut edges that is quasi-eulerian but not eulerian. (Black dots represent the v-vertices.)}\label{fig:counter3}
\end{figure}
\end{center}
To conclude this section, we shall present a new class of hypergraphs for which the necessary condition from Lemma~\ref{lem:NecCond} is also sufficient, extending the following result by Lonc and Naroski.
\begin{theo}\cite{LonNar}\label{the:LonNar}
Let $k \ge 3$, and let $H=(V,E)$ be a $k$-uniform hypergraph with a connected $(k-1)$-intersection graph. Then $H$ is eulerian if and only if $$\sum_{v \in V} \lfloor \frac{\deg_H(v)}{2} \rfloor \ge |E|.$$
\end{theo}
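The numerical condition in Theorem~\ref{the:LonNar} is straightforward to verify; a minimal sketch (the function name and data representation are our own):

```python
from collections import Counter


def degree_condition(edges):
    """Check sum over vertices of floor(deg(v)/2) >= |E|,
    for a hypergraph given as a list of edges (iterables of vertices)."""
    deg = Counter(v for e in edges for v in e)
    return sum(d // 2 for d in deg.values()) >= len(edges)
```

For example, the 3-uniform hypergraph with edges $\{1,2,3\}$, $\{1,2,4\}$, $\{1,3,4\}$ satisfies the condition, while the one with edges $\{1,2,3\}$, $\{4,5,6\}$ does not.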
We propose the following two generalizations (Theorem~\ref{the:SuffCond} and Corollary~\ref{cor:SuffCond} below). The main idea of the proof is based on the proof of the above theorem from \cite{LonNar} for $k\ge 4$.
For any hypergraph $H=(V,E)$, define a digraph $\D_3(H)$ as follows: its vertex set is $E$ and its arc set is $\{ (e,f): e,f \in E, |f-e|=1, |e \cap f|\ge 3 \}$. Recall that an {\em arborescence} is a directed graph whose underlying undirected graph is a tree, and whose arcs are all directed towards a root.
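The arc set of $\D_3(H)$ can be generated directly from the definition; a short sketch (our own illustration, with edges represented as frozensets so that set difference and intersection are available):

```python
def d3_arcs(edges):
    """Arc set of D_3(H): pairs (e, f) with |f - e| = 1 and |e ∩ f| >= 3.
    The vertex set of D_3(H) is the list `edges` itself."""
    return [(e, f) for e in edges for f in edges
            if len(f - e) == 1 and len(e & f) >= 3]
```

Note that the case $e = f$ is excluded automatically, since then $|f - e| = 0$.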
\begin{theo}\label{the:SuffCond}
Let $H=(V,E)$ be a hypergraph such that its digraph $\D_3(H)$ has a spanning subdigraph that is a vertex-disjoint union of non-trivial arborescences. Then $H$ is quasi-eulerian.
\end{theo}
\begin{proof}
For convenience, we say that a digraph satisfies Property P if it has a spanning subdigraph that is a vertex-disjoint union of non-trivial arborescences.
We shall prove by induction on the number of edges that every hypergraph $H$ whose digraph $\D_3(H)$ satisfies Property P possesses an Euler family. Observe that such a hypergraph necessarily has at least two edges.
First, let $H=(V,E)$ be a hypergraph with $E=\{ e,f \}$ such that its digraph $\D_3(H)$ satisfies Property P. Then $\D_3(H)$ must have a spanning arborescence. Moreover, we have that $|e \cap f| \ge 3$. Take any $u,v \in e \cap f$ such that $ u \ne v$. Then $T=uevfu$ is an Euler tour of $H$. Thus $H$ possesses an Euler family as claimed.
Assume that for some $m\ge 2$, every hypergraph $H$ with at most $m$ edges whose digraph $\D_3(H)$ satisfies Property P possesses an Euler family. Let $H=(V,E)$ be a hypergraph with $|E|=m+1$ such that its digraph $\D_3(H)$ has a spanning subdigraph $D'$ that is a vertex-disjoint union of non-trivial arborescences. If each arborescence in $D'$ is of order 2, then (just as in the base case above) each gives rise to a closed strict trail of length 2 in $H$, and the union of all these trails is an Euler family in $H$.
Hence assume that $D'$ has a weakly connected component $A$ that is an arborescence of order at least 3. Let $e\in E$ be a leaf (that is, vertex of indegree 0 and outdegree 1) of $A$ and $f$ its outneighbour in $A$. Then $|f-e|=1$ and $|e \cap f| \ge 3$. Now $\D_3(H-e)$ has a spanning subdigraph $D' \setminus e$ that is a vertex-disjoint union of non-trivial arborescences. Hence by the induction hypothesis, the hypergraph $H-e$ possesses an Euler family $\cal F$. Let $T=ufvW$ --- where $u$ and $v$ are distinct vertices in $f$, and $W$ is an appropriate $(v,u)$-walk --- be a closed strict trail in $\cal F$. We now reroute $T$ to include the edge $e$, resulting in a closed strict trail $T'$ of $H$, as follows.
Since $|f-e|=1$, at least one of $u$ and $v$ --- say $v$ without loss of generality --- is also in $e$, and since $|e \cap f| \ge 3$, there exists $w \in e \cap f$ such that $w \ne u,v$. Then $T'=ufwevW$ is a closed strict trail of $H$. Finally, replace $T$ in $\cal F$ by $T'$ to obtain an Euler family of $H$.
The result follows by induction.
\end{proof}
With very minor changes to the above proof we obtain the following.
\begin{cor}\label{cor:SuffCond}
Let $H=(V,E)$ be a hypergraph such that its digraph $\D_3(H)$ has a non-trivial spanning arborescence. Then $H$ is eulerian.
\end{cor}
Observe that Corollary~\ref{cor:SuffCond} extends Theorem~\ref{the:LonNar} to a (much) larger family of hypergraphs, while Theorem~\ref{the:SuffCond} generalizes it to quasi-eulerian hypergraphs. Still, the sufficient conditions in Theorem~\ref{the:SuffCond} and Corollary~\ref{cor:SuffCond} are very strong, and the converses clearly do not hold.
\subsection{Characterization using the intersection graphs}\label{sec:L(H)}
We shall now take another look at the intersection graphs of a hypergraph $H$ to determine some necessary and some sufficient conditions for $H$ to be eulerian or quasi-eulerian. The following observation will be an essential tool to establish the necessary conditions.
\begin{lemma}\label{lem:line}
Let $H=(V,E)$ be a hypergraph and $L=\L(H)$ its intersection graph. Furthermore, let $W=v_0 e_1 v_1 e_2 v_2 \ldots v_{k-1} e_k v_k$ be a walk in $H$ (so that all $v_i \in V$ and all $e_i \in E$), and let $W_L= e_1 e_2 \ldots e_k$ and $W_L^*= e_1 e_2 \ldots e_k e_1$. Then:
\begin{enumerate}
\item $W_L$ is a walk in $L$.
\item If $W$ is a strict trail, then $W_L$ is a path.
\item If $W$ is a closed strict trail and $k \ge 3$, then $W_L^*$ is a cycle.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item As defined, $W_L$ is a sequence of vertices in $L$ such that any two consecutive vertices are adjacent; that is, $W_L$ is a walk.
\item If $W$ is a strict trail, then none of the edges in $W$ are repeated, and hence none of the vertices in $W_L$ are repeated. Thus $W_L$ is a path.
\item If $W$ is a closed strict trail, then $e_1,\ldots,e_k$ are pairwise distinct and $v_0=v_k$, implying that $e_k$ and $e_1$ are adjacent in $L$. If $k \ge 3$, we may conclude that $W_L^*$ is a cycle in $L$.
\end{enumerate}
\vspace{-10mm}
\end{proof}
\begin{theo}\label{the:L(H)a}
Let $H=(V,E)$ be a hypergraph with at least 3 edges, and $L=\L(H)$ its intersection graph.
\begin{enumerate}
\item If $H$ is eulerian, then $L$ has a Hamilton cycle.
\item If $H$ is quasi-eulerian, then $L$ has a spanning subgraph whose connected components are 1-regular or 2-regular.
\item If $H$ has an Euler family with no strict closed trail of length less than 3, then $L$ has a 2-factor.
\end{enumerate}
\end{theo}
\begin{proof}
Let $W$ be an Euler tour in $H$. By Lemma~\ref{lem:line}, since $H$ has at least 3 edges, the corresponding sequence $W_L^*$ of vertices in $L$ is a cycle, and since $W$ traverses each edge of $H$ exactly once, the cycle $W_L^*$ traverses each vertex of $L$ exactly once. Thus, $W_L^*$ is a Hamilton cycle of $L$.
Similarly, let ${\cal F}$ be an Euler family of $H$. Each closed strict trail in ${\cal F}$ of length 2 gives rise to a path of length 1 in $L$, while each closed strict trail of length at least 3 corresponds to a cycle in $L$. Since the closed strict trails in ${\cal F}$ are pairwise edge-disjoint, the corresponding subgraphs in $L$ are pairwise vertex-disjoint, and since the members of ${\cal F}$ jointly cover all the edges of $H$, the corresponding subgraphs in $L$ form a spanning subgraph whose connected components are 1-regular or 2-regular. If ${\cal F}$ contains no strict closed trails of length 2, then this spanning subgraph is in fact a 2-factor.
\end{proof}
Note that in general, the converse of Theorem~\ref{the:L(H)a} does not hold: for a walk $W$ in $\L(H)$, it may happen that every corresponding sequence of vertices and edges in $H$ contains two consecutive vertices that are the same. In Theorem~\ref{the:L(H)b} below, however, we present three families of hypergraphs for which the converse does hold. But first, we need the following lemma.
\begin{lemma}\label{lem:line2}
Let $H=(V,E)$ be a hypergraph and $L=\L(H)$ its intersection graph. Furthermore, let
$W_L= e_0 e_1 \ldots e_{k-1} e_0$ be a cycle in $L$. Assume that one of the following holds:
\begin{description}
\item[(a)] $\deg_H(v) \le 2$ for all $v \in V$; or
\item[(b)] $k$ is even and $|e_i \cap e_{i+1}| \ge 2$ for all $i\in \ZZ_k$; or
\item[(c)] $|e_i \cap e_{i+1}| \ge 2$ for all $i\in \ZZ_k$, and $|e_{k-2} \cap e_{k-1}| \ge 3$.
\end{description}
Then there exist $v_0, v_1, \ldots, v_{k-1} \in V$ such that $W=v_0 e_0 v_1 e_1 v_2 \ldots v_{k-1} e_{k-1} v_0$ is a closed strict trail in $H$.
\end{lemma}
\begin{proof}
It suffices to choose, for each $i \in \ZZ_k$, a vertex $v_i \in e_{i-1} \cap e_{i}$ such that $v_i \ne v_{i-1}$. Then $W=v_0 e_0 v_1 e_1 v_2 \ldots v_{k-1} e_{k-1} v_0$ will be a closed strict trail in $H$. Consider the following algorithm:
\begin{enumerate}
\item Choose any $v_0 \in e_{k-1} \cap e_0$.
\item For all $i=1,\ldots,k-2$, let $v_i \in e_{i-1} \cap e_{i} - \{ v_{i-1} \}$.
\item Choose $v_{k-1} \in e_{k-2} \cap e_{k-1} - \{ v_0,v_{k-2} \}$.
\end{enumerate}
Steps 1--2 of the algorithm will succeed under each of the assumptions (a), (b), and (c), since either $\deg_H(v) \le 2$ for all $v \in V$ or $|e_i \cap e_{i+1}| \ge 2$ for all $i\in \ZZ_k$. Step 3 will also succeed in Cases (a) and (c), since either $\deg_H(v) \le 2$ for all $v \in V$ or $|e_{k-2} \cap e_{k-1}| \ge 3$. Hence consider Step 3 in Case (b).
If Step 3 cannot be executed, then we must have that $e_{k-2} \cap e_{k-1}=\{ v_0,v_{k-2} \}$. Consider the subgraph of the cycle $W_L$ induced by the edges of the form $e_{i-1} e_{i}$ such that $e_{i-1} \cap e_{i}=\{ v_0,v_{k-2} \}$. This subgraph is either the (even-length) cycle $W_L$ itself, or it is a vertex-disjoint union of paths. In either case, its edges can be alternately labelled with vertices $v_0$ and $v_{k-2}$, resulting in a revised choice of vertices $v_0,v_1,\ldots,v_{k-1} \in V$ that yields a closed strict trail in $H$.
\end{proof}
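Under assumption (c), the three steps of the algorithm never require backtracking, so they can be implemented greedily. The sketch below is our own illustration (vertices are assumed to be comparable, e.g.\ integers, so that `min` gives a deterministic choice; Python's negative indexing lets `e[-1]` play the role of $e_{k-1}$):

```python
def lift_cycle(e):
    """Choose anchors v_0, ..., v_{k-1} with v_i in e_{i-1} ∩ e_i and
    v_i != v_{i-1}, assuming |e_i ∩ e_{i+1}| >= 2 for all i in Z_k
    and |e_{k-2} ∩ e_{k-1}| >= 3 (assumption (c))."""
    k = len(e)
    v = [None] * k
    v[0] = min(e[-1] & e[0])                                  # step 1
    for i in range(1, k - 1):                                 # step 2
        v[i] = min((e[i - 1] & e[i]) - {v[i - 1]})
    v[k - 1] = min((e[k - 2] & e[k - 1]) - {v[0], v[k - 2]})  # step 3
    return v
```

The resulting sequence $v_0 e_0 v_1 e_1 \ldots v_{k-1} e_{k-1} v_0$ is then a closed strict trail in $H$.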
\begin{theo}\label{the:L(H)b}
Let $H=(V,E)$ be a hypergraph. Assume $L$ is a graph satisfying one of the following:
\begin{description}
\item[(a)] $L=\L(H)$ if $\deg_H(v) \le 2$ for all $v \in V$; or
\item[(b)] $L=\L_2^*(H)$ and $L$ is bipartite; or
\item[(c)] $L=\L_3^*(H)$.
\end{description}
Then $H$ is eulerian (quasi-eulerian) whenever $L$ has a Hamilton cycle (2-factor, respectively). Moreover, in Cases (b) and (c), $H$ is quasi-eulerian whenever $L$ has a spanning subgraph whose connected components are 1-regular or 2-regular.
\end{theo}
\begin{proof}
Assuming one of the Conditions (a)--(c), by Lemma~\ref{lem:line2}, a cycle in $L$ corresponds to a closed strict trail in $H$. Moreover, a Hamilton cycle in $L$ corresponds to a closed strict trail containing all the edges of $H$ (that is, an Euler tour), and a 2-factor corresponds to a family of edge-disjoint closed strict trails that jointly traverse all the edges. Sequentially concatenating any closed strict trails in this family with a common anchor vertex, we obtain an Euler family of $H$.
In Cases (b) and (c), each 1-regular component of a spanning subgraph of $L$ gives rise to a closed strict trail of length 2 in $H$, and an Euler family is obtained as above.
\end{proof}
\subsection{Characterization in terms of the incidence graph}\label{sec:G}
The following characterization of eulerian and quasi-eulerian hypergraphs in terms of their incidence graph will be henceforth our main tool.
\begin{theo}\label{the:ET-G}
Let $H=(V,E)$ be a connected hypergraph and $G$ its incidence graph. Then
$H$ is quasi-eulerian if and only if $G$ has an even subgraph that is 2-regular on $E$, and it is eulerian if and only if $G$ has such a subgraph with a single non-trivial connected component.
\end{theo}
\begin{proof}
Assume $H$ has an Euler family ${\cal F}$. Then each edge of $H$ is traversed exactly once by a closed strict trail in ${\cal F}$. Hence ${\cal F}$ corresponds to a family ${\cal F}_G$ of closed trails of $G$ such that each $e \in E$ is traversed exactly once by a trail in ${\cal F}_G$. Let $G'$ be the subgraph of $G$ corresponding to ${\cal F}_G$. Then clearly $G'$ is even on $V$ and 2-regular on $E$.
Similarly, if $H$ has an Euler tour $T$, then $T$ is a closed strict trail that traverses each edge of $H$ exactly once, and hence corresponds to a closed trail $T_G$ of $G$ that traverses each $e \in E$ exactly once. Then $T_G$ may be viewed as the unique connected component of a subgraph $G'$ of $G$ that is even on $V$ and 2-regular on $E$.
Conversely, suppose $G'$ is an even subgraph of $G$ that is 2-regular on $E$. Then each non-trivial connected component of $G'$ has an Euler tour; let ${\cal T}$ be the family of these closed trails in $G$.
The closed trails in ${\cal T}$ are pairwise vertex-disjoint and jointly traverse each e-vertex exactly once. Hence ${\cal T}$ corresponds to a family ${\cal F}$ of closed strict trails in $H$ that are pairwise anchor-disjoint and edge-disjoint, and together traverse each edge of $H$ exactly once; that is, an Euler family of $H$. If $G'$ has a single non-trivial connected component, then ${\cal F}$ contains a single closed strict trail; that is, an Euler tour of $H$.
\end{proof}
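Theorem~\ref{the:ET-G} suggests a (purely illustrative, exponential-time) certificate search: enumerate the sets of $2|E|$ flags of the incidence graph and test whether any induces a subgraph that is even on $V$ and 2-regular on $E$. A sketch of our own, feasible only for very small hypergraphs:

```python
from itertools import combinations
from collections import Counter


def quasi_eulerian_bruteforce(edges):
    """Search the incidence graph of H for an even subgraph that is
    2-regular on E. Flags are pairs (v, i) with v a vertex of edge i."""
    flags = [(v, i) for i, e in enumerate(edges) for v in e]
    m = len(edges)
    for sub in combinations(flags, 2 * m):
        vdeg = Counter(v for v, _ in sub)
        edeg = Counter(i for _, i in sub)
        if all(d % 2 == 0 for d in vdeg.values()) and \
           all(edeg[i] == 2 for i in range(m)):
            return True
    return False
```

For example, the hypergraph with edges $\{1,2\}$, $\{2,3\}$, $\{1,3\}$ (a 3-cycle viewed as a hypergraph) is quasi-eulerian, whereas the path with edges $\{1,2\}$, $\{2,3\}$ is not.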
Using Theorem~\ref{the:ET-G}, certain families of hypergraphs can be easily seen to be eulerian or quasi-eulerian. The first of the following corollaries is immediate.
\begin{cor}\label{cor:2-factor}
Let $H$ be a hypergraph with the incidence graph $G$. If $G$ has a 2-factor, then $H$ is quasi-eulerian. If $G$ is hamiltonian, then $H$ is eulerian.
\end{cor}
\begin{cor}
Let $H$ be an $r$-regular $r$-uniform hypergraph for $r \ge 2$. Then $H$ is quasi-eulerian.
\end{cor}
\begin{proof}
The incidence graph $G$ of $H$ is an $r$-regular bipartite graph with $r \ge 2$. Therefore, as a corollary of Hall's Theorem \cite{Hal}, $G$ admits two edge-disjoint perfect matchings, and hence a 2-factor. Thus $H$ is quasi-eulerian by Corollary~\ref{cor:2-factor}.
\end{proof}
\begin{cor}
Let $H$ be a $2k$-uniform even hypergraph. Then $H$ is quasi-eulerian.
Moreover, $H$ has a collection of Euler families $\{{\cal F}_1,\ldots, {\cal F}_k\}$ such that each flag of $H$ occurs as an anchor flag of exactly one family ${\cal F}_i$ in this collection.
\end{cor}
\begin{proof}
Let $G$ be the incidence graph of $H$. In $G$, every e-vertex has degree $2k$, and every v-vertex has even degree. A result by Hilton \cite[Theorem 8]{Hil} then shows that $G$ has an {\em evenly equitable} $k$-edge colouring; that is, a $k$-edge colouring such that (i) every vertex is incident with an even number of edges of each colour, and (ii) for each vertex, the numbers of edges of any two colours that are incident with this vertex differ by at most two. Hence the $i$-th colour class, for $i=1,2,\ldots,k$, induces an even subgraph $G_i$ of $G$ that is 2-regular on $E$. By Theorem~\ref{the:ET-G}, each $G_i$ corresponds to an Euler family ${\cal F}_i$ of $H$, and $H$ is quasi-eulerian. Since every edge of $G$ lies in exactly one of $G_1,\ldots,G_k$, it follows that each flag of $H$ occurs as an anchor flag of exactly one family among ${\cal F}_1,\ldots, {\cal F}_k$.
\end{proof}
\subsection{Characterization using blocks}\label{sec:blocks}
In this section, we reduce the problem of existence of an Euler family in a hypergraph to the identical problem on its blocks (to be defined below, analogously to blocks in graphs). As expected, this reduction is a little more complicated, and perhaps not as useful, in the case of Euler tours. We refer the reader to \cite{BahSaj} for more information on blocks in a hypergraph.
\begin{defn} {\rm \cite{BahSaj}
Let $H=(V,E)$ be a connected hypergraph without empty edges. A vertex $v \in V$ is a {\em separating vertex} for $H$ if $H$ decomposes into two non-empty connected hypersubgraphs with just vertex $v$ in common. That is, $H=H_1 \oplus H_2$, where $H_1$ and $H_2$ are two non-empty connected hypersubgraphs of $H$ with $V(H_1) \cap V(H_2)=\{ v \}$.
}
\end{defn}
\begin{defn} {\rm \cite{BahSaj}
A connected hypergraph without empty edges that has no separating vertices is called {\em non-separable}. A {\em block} of a hypergraph $H$ is a maximal non-separable hypersubgraph of $H$.
}
\end{defn}
\begin{theo}\label{the:blocks}
Let $H=(V,E)$ be a hypergraph. Then:
\begin{enumerate}
\item $H$ has an Euler family if and only if each block of $H$ has an Euler family.
\item $H$ has an Euler tour (necessarily traversing every separating vertex of $H$) if and only if each block $B$ of $H$ has an Euler tour that traverses every separating vertex of $H$ contained in $B$.
\end{enumerate}
\end{theo}
\begin{proof}
Let $B$ be any block of $H$, and $T$ a closed strict trail of $H$. In $T$, delete all vertices and edges of $H$ that are not in $B$. By \cite[Theorem 3.36]{BahSaj}, every cycle of $H$ is contained within a block; consequently, each of the remaining subsequences of $T$ ends with the first vertex of the next subsequence (in cyclical order). Denote the concatenation of these remaining subsequences of $T$ by $T\vert_B$, and observe that $T\vert_B$ is a closed strict trail in $B$.
\begin{enumerate}
\item Assume $H$ has an Euler family $\cal F$, and let $B$ be any block of $H$. Then the set of all closed strict trails of the form $T\vert_B$, for all trails $T$ in ${\cal F}$ that contain edges of $B$, is an Euler family of $B$.
Conversely, suppose that each block $B$ of $H$ has an Euler family ${\cal F}_B$. Take the union of all ${\cal F}_B$ and pairwise concatenate any closed strict trails in this union that have an anchor in common, until every pair of resulting trails are anchor-disjoint. The result is an Euler family of $H$.
\item Assume $H$ has an Euler tour $T$ (necessarily traversing every separating vertex of $H$), and let $B$ be a block of $H$. Then $T\vert_B$ is an Euler tour of the block $B$ traversing every separating vertex of $H$ contained in $B$.
Conversely, if each block $B$ of $H$ has an Euler tour $T_B$ traversing every separating vertex of $H$ contained in $B$, then all the $T_B$ can be concatenated to give an Euler tour of $H$.
\end{enumerate}
\vspace{-10mm}
\end{proof}
Using our characterization in terms of the incidence graph (Theorem~\ref{the:ET-G}), the above theorem can be immediately augmented as follows.
\begin{cor}
Let $H=(V,E)$ be a hypergraph. Then:
\begin{enumerate}
\item $H$ has an Euler family if and only if for each block $B$ of $H$, the incidence graph $G_B$ of $B$ has an even subgraph $G_B'$ that is 2-regular on $E(B)$.
\item $H$ has an Euler tour if and only if for each block $B$ of $H$, the incidence graph $G_B$ of $B$ has an even subgraph $G_B'$ that is 2-regular on $E(B)$ and has a unique non-trivial connected component, which necessarily contains every separating vertex of $H$ that lies in $B$.
\end{enumerate}
\end{cor}
With the insight of Theorem~\ref{the:blocks}, one can easily construct a connected hypergraph $H$ that is quasi-eulerian but not eulerian, namely, one with a separating vertex. Let $B$ be an eulerian non-separable hypergraph that has more vertices than edges, and let $A$ be any eulerian hypergraph. For each vertex $v$ of $B$, let $A_v$ be a copy of the hypergraph $A$ such that $V(A_v) \cap V(B)=\{ v \}$ and the hypergraphs $A_v$ are all pairwise vertex-disjoint. Now let $H$ be the union of $B$ with all $A_v$. Each $v \in V(B)$ is now a separating vertex of $H$, and since $B$ cannot have an Euler tour traversing every vertex, $H$ has no Euler tour. However, the Euler tours of $B$ and all the $A_v$ (concatenating any that have common anchor vertices) will give rise to an Euler family for $H$.
One may then ask whether a connected hypergraph without separating vertices that admits an Euler family necessarily admits an Euler tour. The answer is negative, as shown by the counterexample below.
\begin{example}{\rm
Let $H$ be a hypergraph whose incidence graph is shown in Figure~\ref{fig:Count1}. Observe that $H$ is quasi-eulerian but not eulerian, has no cut edges and no separating vertices, and satisfies the necessary condition from Lemma~\ref{lem:NecCond}.
}
\end{example}
\begin{center}
\begin{figure}[h!]
\centerline{\includegraphics[scale=0.5]{counter1.eps}}
\caption{The incidence graph of a quasi-eulerian but not eulerian hypergraph, without cut edges and without separating vertices. (Black dots represent the v-vertices.)}\label{fig:Count1}
\end{figure}
\end{center}
\subsection{Complexity of the {\sc Euler Tour} and {\sc Euler Family} problems}\label{sec:compl}
We shall now turn our attention to the complexity of the two problems. First, we define the {\em bigness} of a hypergraph as the maximum of the order (number of vertices) and the size (number of edges) of the hypergraph. In this section, we show that the problem of determining whether or not a given hypergraph is eulerian is NP-complete, while --- perhaps surprisingly --- the problem of determining whether or not a given hypergraph is quasi-eulerian is polynomial in the bigness of the hypergraph.
We begin by formally defining our first decision problem.
\begin{problem}
{\sc Euler Tour}
{\sc Given:} A hypergraph $H$.
{\sc Decide:} Does $H$ have an Euler tour?
\end{problem}
Recall that a hypergraph is called {\em linear} if every pair of distinct edges intersect in at most one vertex.
Lonc and Naroski \cite{LonNar} showed that {\sc Euler Tour} is NP-complete on the set of $k$-uniform hypergraphs for any $k\ge 3$ (as well as on the set of 3-uniform hypergraphs with a connected skeleton). Their proof for $k=3$ actually shows that the problem is NP-complete on the smaller class of linear 2-regular 3-uniform hypergraphs, as stated in Theorem~\ref{the:ETproblem1} below. For completeness, we include the proof from \cite{LonNar}.
In the polynomial reduction, the following known NP-complete problem is used.
\begin{problem}
{\sc Hamilton Cycle}
{\sc Given:} A graph $G$.
{\sc Decide:} Does $G$ have a Hamilton cycle?
\end{problem}
\begin{theo}\label{the:ETproblem1}
Let ${\cal LH}_3^2$ denote the family of linear 2-regular 3-uniform hypergraphs. Then {\sc Euler Tour} is NP-complete on ${\cal LH}_3^2$.
\end{theo}
\begin{proof} \cite{LonNar}
Clearly, {\sc Euler Tour} is in the class NP since a potential solution can be verified in time that is polynomial in the number of edges, and hence also in the bigness of the hypergraph.
Let $G=(V_G,E_G)$ be a simple cubic graph. Define a hypergraph $H=(V_H,E_H)$ as follows: $V_H=E_G$ and $E_H=\{ e_v: v\in V_G \}$, where for each vertex $v \in V_G$, we let $e_v$ be the set of edges of $G$ incident with $v$. (In other words, $H$ is the dual of $G$.) Observe that $H$ is linear, 2-regular, and 3-uniform. Now, if $C=v_0 e_1 v_1 e_2 \ldots v_{n-1} e_n v_0$ is a Hamilton cycle in $G$, then $e_1 e_{v_1} e_2 \ldots e_{v_{n-1}} e_n e_{v_0} e_1$ is an Euler tour of $H$. Conversely, suppose $T=e_0 e_{v_1} e_1 \ldots e_{v_{n-1}} e_{n-1} e_{v_n} e_0$ is an Euler tour of $H$. Then $C=v_1 e_1 \ldots v_{n-1} e_{n-1} v_n e_0 v_1$ is a Hamilton cycle in $G$.
These conversions are clearly polynomial in the number of vertices of $G$, and hence also in the size and bigness of $H$. Therefore, since {\sc Hamilton Cycle} is NP-complete on the set of all cubic graphs \cite{GarJohTar}, {\sc Euler Tour} is NP-complete on the set ${\cal LH}_3^2$.
\end{proof}
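The reduction above builds the dual of a cubic graph; the construction is easily made concrete. A sketch (the function name is our own), checked on $K_4$:

```python
def dual_hypergraph(graph_edges):
    """Dual of a graph G: the hypergraph vertices are the edges of G, and
    the hyperedge e_v collects the edges of G incident with vertex v."""
    verts = {v for e in graph_edges for v in e}
    return {v: frozenset(e for e in graph_edges if v in e) for v in verts}
```

When $G$ is cubic, the resulting hypergraph is 3-uniform and 2-regular, and it is linear since two distinct hyperedges $e_u$ and $e_v$ intersect in at most the single edge $uv$ of $G$.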
Next, we define our second decision problem.
\begin{problem}
{\sc Euler Family}
{\sc Given:} A hypergraph $H$.
{\sc Decide:} Does $H$ have an Euler family?
\end{problem}
Below, we show that {\sc Euler Family} is a polynomial problem on the set of all hypergraphs. In the reduction, the following known polynomial problem \cite{Edm} will be used.
\begin{problem}
{\sc 1-Factor}
{\sc Given:} A graph $G$.
{\sc Decide:} Does $G$ have a 1-factor?
\end{problem}
For a graph $X$ and a function $f: V(X) \rightarrow \NN$, an {\em $f$-factor} of $X$ is a spanning subgraph $X'$ of $X$ such that $\deg_{X'}(v)=f(v)$ for all $v \in V(X)$. Bondy and Murty \cite{BonMur} describe a polynomial reduction, originally due to Tutte \cite{Tut}, of the problem of existence of an $f$-factor in a graph without loops to the problem of existence of a 1-factor in a graph. This reduction can be extended to graphs with loops as follows.
\begin{lemma}\label{lem:EFconversion}
Let $X=(V,E)$ be a graph obtained from a simple graph by adjoining $\ell(u)$ loops to each vertex $u$, where $\ell(u)$ is polynomial in the order of $X$. Let $f: V \rightarrow \NN$ be a function with $f(u) \le \deg_X(u)$ for all $u \in V$.
For each $v \in V$, construct a graph $Y_v$ as follows:
\begin{itemize}
\item The vertex set of $Y_v$ has a partition $\{ S_v, T_v, U_v \}$, with $|S_v|=\deg_{X}(v)-f(v)$, $|T_v|=\deg_{X}(v)-2\ell(v)$, and $|U_v|=2\ell(v)$.
\item The edge set of $Y_v$ consists of all edges of the form $uw$ for $u\in S_v$ and $w \in T_v \cup U_v$, as well as a perfect matching on the set $U_v$.
\end{itemize}
A graph $X_f$ is obtained from $X$ by taking the vertex-disjoint union of the graphs $Y_v$, for all $v \in V$, and inserting a single linking edge with one endpoint in $T_u$ and the other in $T_v$ if and only if $uv \in E$ and $u \ne v$. The endpoints of these edges are chosen so that each vertex in $T_v$, for each $v \in V$, is an endpoint of exactly one of these linking edges.
A 1-factor in $X_f$ then corresponds to an $f$-factor in $X$, and vice-versa, and this conversion is polynomial in the order of $X$.
\end{lemma}
\begin{proof}
First, observe that the graph $X_f$ is well defined: each vertex $v \in V$ has exactly $\deg_X(v)-2\ell(v)=|T_v|$ neighbours $u \ne v$ in $X$, and so it is indeed possible to join each vertex of $T_v$ to exactly one vertex in some $T_u$ such that $u \ne v$ and $uv \in E$.
Next, we show that a 1-factor in $X_f$ corresponds to an $f$-factor in $X$. Let $M_f$ be the edge set of a 1-factor (that is, a perfect matching) in $X_f$. For each $v \in V$, let $\ell_f(v)$ denote the number of edges of $M_f$ with both ends in $U_v$. Then let $E'$ be the subset of $E$ containing all edges
$uv$ such that $u, v \in V$, $u \ne v$, for which there exist $x \in T_u$ and $y \in T_v$ with $xy\in M_f$; in addition, let $E'$ contain exactly $\ell_f(v)$ loops incident with $v$ for each vertex $v \in V$.
Let $F=(V,E')$. We claim $F$ is an $f$-factor of $X$. Fix any vertex $v\in V$. Let $\tau(v)$ be the number of edges of $M_f$ with one endpoint in $T_v$ and the other in a set $T_u$ for all $u \ne v$, and observe that $\deg_F(v)=\tau(v) +2\ell_f(v)$. Furthermore, let $\sigma(v)$ be the number of edges of $M_f$ with one endpoint in $T_v$ and the other in $S_v$, and let $\nu(v)$ be the number of edges of $M_f$ with one endpoint in $S_v$ and the other in $U_v$. Then
\begin{eqnarray*}
|T_v| &=& \deg_X(v)-2\ell(v)=\tau(v)+\sigma(v), \\
|S_v| &=& \deg_X(v)-f(v) = \sigma(v)+\nu(v), \\
|U_v|&=& 2\ell(v)= \nu(v)+ 2\ell_f(v),
\end{eqnarray*}
which yields $f(v)=\tau(v)+2\ell_f(v)=\deg_F(v)$. We conclude that, indeed, $F$ is an $f$-factor of $X$.
Conversely, take any $f$-factor $F$ of $X$, and construct a subset $M_f$ of edges of $X_f$ as follows.
\begin{itemize}
\item[(1)] For each $u,v \in V$, $u \ne v$, if $uv \in E(F)$, then let $M_f$ contain the unique edge with one endpoint in $T_u$ and the other in $T_v$.
\item[(2)] For each $v \in V$, if $E(F)$ contains $\ell_f(v)$ loops incident with $v$, then let $M_f$ contain $\ell_f(v)$ edges with both ends in $U_v$.
\item[(3)] For each $v \in V$, let $M_f$ contain $|S_v|$ independent edges with one endpoint in $S_v$ and the other in $U_v \cup T_v$.
\end{itemize}
We now show that the edges in (3) can be chosen to be independent from the edges chosen in (1) and (2). Indeed, in (1), for each $v \in V$, $f(v)-2\ell_f(v)$ edges with one endpoint in $T_v$ were chosen, and in (2), $2\ell_f(v)$ edges with both ends in $U_v$ were put into $M_f$. This leaves
$$|T_v|-\left(f(v)-2\ell_f(v)\right)=\left(\deg_X(v)-2\ell(v)\right)-\left(f(v)-2\ell_f(v)\right)$$ vertices in $T_v$, and
$$|U_v|-2\ell_f(v)=2\ell(v)-2\ell_f(v)$$
vertices in $U_v$ unsaturated.
Since
$$\left(\deg_X(v)-2\ell(v)\right)-\left(f(v)-2\ell_f(v)\right)+\left(2\ell(v)-2\ell_f(v)\right)=\deg_X(v)-f(v)=|S_v|,$$
the edges in (3) can be chosen so that $M_f$ is an independent set and every vertex in $X_f$ is $M_f$-saturated. We conclude that $(V(X_f),M_f)$ is a 1-factor in $X_f$.
Since for each $v \in V$, the number of loops $\ell(v)$ incident with $v$ is polynomial in the order of $X$, these conversions are polynomial in the order of $X$.
\end{proof}
\begin{theo}\label{the:EFcomplexity}
Let ${\cal H}$ be the family of all hypergraphs. Then {\sc Euler Family} is polynomial on ${\cal H}$.
\end{theo}
\begin{proof}
Let $H=(V,E)$ be a hypergraph, and $G$ its incidence graph. By Theorem~\ref{the:ET-G}, $H$ admits an Euler family if and only if $G$ has an even subgraph $G'$ that is 2-regular on $E$; in this proof, we shall call such a subgraph $G'$ an {\em EF-factor} of $G$.
Starting from $G$, construct a graph $G^*$ by appending $\lfloor \frac{\deg_G(v)}{2} \rfloor$ loops to each $v \in V$. Then define a function $f: V \cup E \rightarrow \NN$ by $f(e)=2$ for all $e \in E$, and $f(v)=2\lfloor \frac{\deg_G(v)}{2} \rfloor$ for all $v \in V$.
We claim that $G$ has an EF-factor $G'$ if and only if $G^*$ has an $f$-factor. Indeed, take an EF-factor $G'$ of $G$. Appending $\frac{1}{2}(f(v)-\deg_{G'}(v))$ loops to each vertex $v \in V$ results in an $f$-factor of $G^*$. Conversely, removing the loops from any $f$-factor of $G^*$ will result in an EF-factor $G'$ of $G$. This conversion is clearly polynomial in the order of $G$, and hence in the bigness of $H$.
By Lemma~\ref{lem:EFconversion} and \cite{Edm}, the problem of finding an $f$-factor in the graph $G^*$ is polynomial in the order of $G^*$. Hence the problem of finding an Euler family in $H$ is polynomial in the bigness of $H$. We conclude that {\sc Euler Family} is polynomial on ${\cal H}$.
\end{proof}
Note that the reduction to the problem of a $1$-factor in a graph as described in Lemma~\ref{lem:EFconversion}, together with Edmonds' Algorithm \cite{Edm} for finding a maximum matching in an arbitrary graph, gives us a polynomial-time algorithm for constructing an Euler family in a quasi-eulerian hypergraph. We should also mention that in the proof of Theorem~\ref{the:EFcomplexity}, instead of Lemma~\ref{lem:EFconversion}, we could have used the fact that the general $f$-factor problem is polynomial \cite[Theorem 6.2]{AnsJAlg85}.
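The construction of the auxiliary graph $G^*$ and the degree function $f$ used in the proof of Theorem~\ref{the:EFcomplexity} is easily mechanized. The following Python sketch (illustrative only; the helper names are ours, and edges of $H$ are indexed by position) builds the loop counts and $f$ from a small hypergraph:

```python
# Illustrative sketch of the G* construction: append floor(deg(v)/2) loops to
# each v-vertex of the incidence graph, set f(e) = 2 on e-vertices and
# f(v) = 2*floor(deg(v)/2) on v-vertices.

def incidence_graph(V, E):
    """Edge set of the incidence graph of H = (V, E); edges indexed by position."""
    return {(("v", v), ("e", i)) for i, e in enumerate(E) for v in e}

def star_construction(V, E):
    """Return (loops, f): loops[x] = number of loops appended at v-vertex x,
    and f = target degree function of the f-factor problem."""
    G = incidence_graph(V, E)
    deg = {("v", v): 0 for v in V}
    for (x, _) in G:
        deg[x] += 1
    loops = {x: deg[x] // 2 for x in deg}            # floor(deg(v)/2) loops at v
    f = {("e", i): 2 for i in range(len(E))}         # f(e) = 2
    f.update({x: 2 * loops[x] for x in loops})       # f(v) = 2*floor(deg(v)/2)
    return loops, f

# Example: V = {1,2,3,4}, E = {{1,2,3},{2,3,4}}.
V = [1, 2, 3, 4]
E = [{1, 2, 3}, {2, 3, 4}]
loops, f = star_construction(V, E)
```

Vertex 2 lies in both edges, so it receives one loop and $f(2)=2$, while vertex 1 has degree 1 and $f(1)=0$.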
\subsection{Quasi-eulerian hypergraphs: necessary and sufficient conditions}\label{NAC}
Theorem~\ref{the:ET-G} shows that a hypergraph $H=(V,E)$ admits an Euler family if and only if its incidence graph $\G(H)$ has an even subgraph that is 2-regular on $E$. We shall now combine this observation with Lov\'asz's Theorem~\ref{the:Lovasz} (below) to give more easily verifiable necessary and sufficient conditions.
For a graph $G$ and functions $f,g: V(G) \rightarrow \NN$, a {\em $(g,f)$-factor} of $G$ is a spanning subgraph $F$ of $G$ such that $g(x) \le \deg_F(x) \le f(x)$ for all $x \in V(G)$. An $f$-factor is then simply an $(f,f)$-factor. For any subgraph $G_1$ of $G$ and any sets $V_1, V_2 \subseteq V(G)$, let $\eps_{G_1}(V_1,V_2)$ denote the number of edges of $G_1$ with one end in $V_1$ and the other in $V_2$.
\begin{theo} \cite{Lov} \label{the:Lovasz}
Let G be a graph and let $f,g : V (G) \rightarrow \NN$ be functions such that
$g(x) \le f(x)$ and $g(x) \equiv f(x) \pmod{2}$ for all $x \in V (G)$. Then G has a $(g, f)$-factor $F$ such that $\deg_F(x) \equiv f(x) \pmod{2}$ for all $x \in V(G)$ if and only if all disjoint subsets $S$ and $T$ of $V(G)$ satisfy
\begin{equation}\label{eq:Lov}
\sum_{x \in S} f(x) + \sum_{x \in T} (\deg_G(x)-g(x))- \eps_G(S,T)-q(S,T) \ge 0,
\end{equation}
where $q(S,T)$ is the number of connected components $C$ of $G \setminus (S \cup T)$ such that $$\sum_{x \in V(C)} f(x) + \eps_G(V(C),T) \quad \mbox{is odd}.$$
\end{theo}
\begin{cor}\label{cor:Lovasz}
Let $H=(V,E)$ be a hypergraph and $G=\G(H)$ its incidence graph. Then $H$ is quasi-eulerian if and only if all disjoint sets $S \subseteq E$ and $T\subseteq V \cup E$ of $V(G)$ satisfy
\begin{equation}\label{eq:LovH}
2|S| + \sum_{x \in T} \deg_G(x) - 2|T \cap E| - \eps_G(S,T \cap V)-q(S,T) \ge 0,
\end{equation}
where $q(S,T)$ is the number of connected components $C$ of $G \setminus (S \cup T)$ such that $\eps_G(V(C),T)$ is odd.
\end{cor}
\begin{proof}
By Theorem~\ref{the:ET-G}, $H$ has an Euler family if and only if $G$ has an even subgraph $G'$ that is 2-regular on $E$. Define functions $f,g: V \cup E \rightarrow \NN$ as follows:
$$g(x)=\left\{ \begin{array}{ll}
0 & \mbox{ if } x \in V \\
2 & \mbox{ if } x \in E
\end{array} \right.
\qquad \mbox{ and } \qquad
f(x)=\left\{ \begin{array}{ll}
K & \mbox{ if } x \in V \\
2 & \mbox{ if } x \in E
\end{array} \right.
,$$
where $K$ is a sufficiently large even integer.
Observe that $f$ and $g$ satisfy the assumptions of Theorem~\ref{the:Lovasz}. Moreover, a subgraph $G'$ of $G$ with the required properties is a $(g,f)$-factor $F$ of $G$ with $\deg_F(x) \equiv f(x) \pmod{2}$ for all $x \in V(G)$, and conversely.
For any subsets $S$ and $T$ of $V \cup E$, if $S \cap V \ne \emptyset$, then $\sum_{x \in S} f(x)$ is very large, and Condition~(\ref{eq:Lov}) clearly holds for $S$ and $T$. Thus Theorem~\ref{the:Lovasz} asserts that $G$ has a $(g,f)$-factor $F$ with $\deg_F(x) \equiv f(x) \pmod{2}$ for all $x \in V(G)$ if and only if Condition~(\ref{eq:Lov}) holds for all disjoint sets $S \subseteq E$ and $T\subseteq V \cup E$ of $V(G)$.
Observing that $\sum_{x \in V(C)} f(x) + \eps_G(V(C),T) \equiv \eps_G(V(C),T) \pmod{2}$, it is then straightforward to show that Condition~(\ref{eq:Lov}) in Theorem~\ref{the:Lovasz} is equivalent to Condition~(\ref{eq:LovH}) in the statement of this corollary. The result follows as claimed.
\end{proof}
To express the necessary conditions in Corollary~\ref{cor:Lovasz} in the language of the hypergraph itself, we introduce the following term. For a hypergraph $H=(V,E)$ and sets $V' \subseteq V$ and $E' \subseteq E$, the symbol $H[V',E']$ will denote the hypergraph with vertex set $V'$ and edge set $\{ e \cap V': e \in E'\}$. Observe that the incidence graph of $H[V',E']$ is then the subgraph of $\G(H)$ induced by the vertex set $V' \cup E'$.
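The restriction $H[V',E']$ is straightforward to compute; the sketch below (illustrative only, with a hypothetical helper name) returns the edge set $\{ e \cap V': e \in E'\}$ of $H[V',E']$:

```python
# Illustrative sketch: compute the edge set of H[V', E'] as the set of
# traces e ∩ V' over e in E' (frozensets, so duplicate traces collapse,
# matching the set notation {e ∩ V' : e ∈ E'} above).

def restrict(V_prime, E_prime):
    """Return the edge set {e ∩ V' : e ∈ E'} of H[V', E']."""
    Vp = set(V_prime)
    return {frozenset(e) & Vp for e in E_prime}

# Example: V' = {1, 2} and E' = {{1,2,3}, {3,4}} give traces {1,2} and {}.
edges = restrict({1, 2}, [{1, 2, 3}, {3, 4}])
```

Note that, as in the definition, an edge disjoint from $V'$ leaves an empty trace.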
\begin{cor}\label{cor:LovaszH}
A hypergraph $H=(V,E)$ is quasi-eulerian if and only if every subset $V' \subseteq V$ and all disjoint subsets $E', E''\subseteq E$ satisfy
\begin{equation}\label{eq:LovH2}
2|E''| + \sum_{v \in V'} \deg_H(v) + \sum_{e \in E'} |e| - 2|E'| - |F(H[V',E''])| - q_H(E'',V' \cup E') - q_e(V') \ge 0,
\end{equation}
where $q_H(E'',V' \cup E')$ is the number of connected components $C$ of $(H-(E' \cup E'')) \setminus V'$ such that
$|F(H[V(C),E'])|+|F(H[V',E(C)])|$ is odd, and
$q_e(V')$ is the number of edges $e \in E-(E' \cup E'')$ such that $e \subseteq V'$ and $|e|$ is odd.
\end{cor}
\begin{proof}
Let $G$ be the incidence graph of $H$.
It suffices to show that the condition in Corollary~\ref{cor:LovaszH} holds if and only if the condition in Corollary~\ref{cor:Lovasz} holds.
Take any subset $V' \subseteq V$ and disjoint subsets $E', E''\subseteq E$, and let $S=E''$ and $T=V' \cup E'$. Clearly,
$$2|S| + \sum_{x \in T} \deg_G(x) - 2|T \cap E|=
2|E''|+\sum_{v \in V'} \deg_H(v) + \sum_{e \in E'} |e| - 2|E'|.$$
Next, we have
$$ \eps_G(S,T \cap V)=\eps_G(E'',V')=|F(H[V',E''])|.$$
Observe that the incidence graph of $(H-(E' \cup E''))\setminus V'$ is obtained from
$G \setminus (S \cup T)=G\setminus (E'' \cup V' \cup E')$ by deleting any isolated e-vertices; these are precisely the edges $e \in E-(E'\cup E'')$ such that $e \subseteq V'$.
Hence, by Theorem~\ref{the:conn}, the connected components of $G \setminus (S \cup T)$ are either the incidence graphs of the connected components of $(H-(E' \cup E''))\setminus V'$, or else correspond to the edges $e \in E-(E'\cup E'')$ such that $e \subseteq V'$.
Take any connected component $C_G$ of $G \setminus (S \cup T)$. If $C_G$ is the incidence graph of a connected component $C$ of $(H-(E' \cup E''))\setminus V'$, then
$$\eps_G(V(C_G),T)=\eps_G(V(C)\cup E(C),V' \cup E')=|F(H[V(C),E'])|+
|F(H[V',E(C)])|.$$
If, however, $C_G$ corresponds to an isolated e-vertex $e$, then
$$\eps_G(V(C_G),T)=\eps_G(\{e\},V' \cup E')=|e|.$$
Thus
$$q(S,T)=q(E'',V' \cup E')=q_H(E'',V' \cup E') + q_e(V'),$$
and Conditions~(\ref{eq:LovH}) and (\ref{eq:LovH2}) are equivalent.
\end{proof}
Using Theorem~\ref{the:blocks}, we immediately obtain the following.
\begin{cor}
A hypergraph $H$ is quasi-eulerian if and only if the necessary and sufficient condition in Corollary~\ref{cor:LovaszH} holds for every block of $H$.
\end{cor}
\subsection{Quasi-eulerian 3-uniform hypergraphs}\label{sec:EF3}
In Theorem~\ref{the:cut-edges}, we saw that a hypergraph with strong cut edges cannot be quasi-eulerian, while the examples in Figures~\ref{fig:pic4} and \ref{fig:pic5} show that a hypergraph with cut edges, none of which is strong, may or may not be quasi-eulerian. The following theorem completes the picture for 3-uniform hypergraphs. The main ingredient in the proof is the following result by Fleischner.
\begin{theo}\cite{Fle}\label{the:Fle}
Every graph without cut edges and of minimum degree at least 3 has a spanning even subgraph without isolated vertices.
\end{theo}
\begin{theo}\label{the:3-uniform}
Let $H=(V,E)$ be a 3-uniform hypergraph without cut edges. Then $H$ is quasi-eulerian.
\end{theo}
\begin{proof}
Let $G=\G(H)$ be the incidence graph of $H$. We claim that, since $H$ has no cut edges, the graph $G$ has no cut edges. Suppose, to the contrary, that $ve$ is a cut edge of $G$ (where $v\in V$ and $e \in E$), and let $G_v$ and $G_e$ be the connected components of $G-ve$ containing vertex $v$ and $e$, respectively. Since $|e|>1$, the component $G_e$ must contain a v-vertex $w$, and $v$ and $w$ are disconnected in $G-ve$. Hence they are disconnected in $H-e$, showing that $e$ is a cut edge of $H$, a contradiction. Therefore $G$ has no cut edges as claimed.
Note that we may assume that $H$, and hence $G$, has no isolated vertices. Clearly, $G$ has no vertices of degree 1, since the edge incident with such a vertex would necessarily be a cut edge. Suppose $G$ has a vertex of degree 2. Then it must be a v-vertex, since $|e|=3$ for all $e \in E$. Obtain a graph $G^*$ from $G$ by replacing, for every vertex $v$ of degree 2, the 2-path $e_1ve_2$ in $G$ with an edge $e_1e_2$. Observe that in $G^*$, all e-vertices have degree 3, and all v-vertices have degree at least 3. Moreover, $G^*$ has no cut edges since $G$ does not. Therefore, by Theorem~\ref{the:Fle}, $G^*$ has a spanning even subgraph $G_1^*$ without isolated vertices. We construct a subgraph $G_1$ of $G$ as follows: for any vertex $v$ of degree 2 in $G$, and its incident edges $e_1$ and $e_2$, if $e_1e_2$ is an edge of $G_1^*$, then replace it with the 2-path $e_1ve_2$. The resulting graph $G_1$ is an even subgraph of $G$ without isolated e-vertices. Since every e-vertex of $G$ has degree 3 in $G$, it has degree 2 in $G_1$. Thus, by Theorem~\ref{the:ET-G}, $G_1$ gives rise to an Euler family of $H$.
\end{proof}
The reader may have noticed that an Euler family of $H$ constructed in the proof of Theorem~\ref{the:3-uniform} traverses every vertex of $H$ except possibly some of the vertices of degree 2. Observe that Theorem~\ref{the:3-uniform} does not hold for graphs (that is, 2-uniform hypergraphs); an example is a cycle with a chord. More generally, it does not hold for all hypergraphs in which every edge has size 2 or 3; such an example is given in Figure~\ref{fig:pic6}.
\begin{center}
\begin{figure}[h!]
\centerline{\includegraphics[scale=0.5]{pic6old.eps}}
\caption{The incidence graph of a hypergraph without cut edges that is not quasi-eulerian; observe that every edge has size 2 or 3. (Black dots represent the v-vertices.)} \label{fig:pic6}
\end{figure}
\end{center}
\begin{cor}
Let $H=(V,E)$ be a 3-uniform hypergraph with at least two edges such that each pair of vertices lie together in at least one edge. Then $H$ is quasi-eulerian.
\end{cor}
\begin{proof}
By Theorem~\ref{the:3-uniform}, it suffices to show that $H$ has no cut edges. If $|V|=3$, then clearly none of the edges are cut edges. Hence assume $|V|\ge 4$, and suppose that $e \in E$ is a cut edge of $H$. Let $u_1, u_2 \in V$ be vertices of $e$ that lie in distinct connected components of $H-e$, and consider any vertex $w \not\in e$. Then there exist edges $e_1,e_2$ such that $w,u_i \in e_i$ for $i=1,2$. Since obviously $e_1,e_2 \ne e$, vertex $w$ must lie in the same connected component of $H-e$ as both $u_1$ and $u_2$, contradicting the fact that $u_1$ and $u_2$ lie in distinct connected components of $H-e$. Hence $H$ has no cut edges, and by Theorem~\ref{the:3-uniform} it is quasi-eulerian.
\end{proof}
Recall that a triple system TS($n$,$\lambda$) is a 3-uniform hypergraph of order $n$ such that every pair of vertices lie together in exactly $\lambda$ edges.
\begin{cor}
Every triple system TS($n$,$\lambda$) with $(n,\lambda) \ne (3,1)$ is quasi-eulerian.
\end{cor}
We mention that Wagner and the second author recently proved that all triple systems TS($n$,$\lambda$), except for TS(3,1), are in fact eulerian \cite{SajWag}.
The proof of \cite[Theorem 2]{LonNar} for the case $k=3$ is very long and technical; as another corollary of our Theorem~\ref{the:3-uniform}, we show that every 3-uniform hypergraph with a connected 2-intersection graph is eulerian provided that it has no pendant vertices.
\begin{cor} Every 3-uniform hypergraph with a connected 2-intersection graph and without pendant vertices is eulerian.
\end{cor}
\begin{proof}
Let $H=(V,E)$ be a 3-uniform hypergraph with a connected 2-intersection graph $L$ and without pendant vertices. Hence $H$ has at least 2 edges. Suppose it has a cut edge $e$. Since $L$ is connected, $e$ shares exactly two of its vertices with another edge; consequently, these two vertices lie in the same connected component of $H-e$. Thus $H-e$ has exactly two connected components; let $H_1$ be the connected component containing a single vertex, $w$, of $e$. By assumption, $w$ is not a pendant vertex in $H$, so $E(H_1) \ne \emptyset$. Take any $e_1 \in E(H_1)$ and any $e_2 \in E-E(H_1)$. Then $e_1 \cap e_2 \subseteq \{ w \}$, whence $e_1e_2 \not\in E(L)$. It follows that $L$ is disconnected, a contradiction.
We conclude that $H$ has no cut edges, and hence is quasi-eulerian by Theorem~\ref{the:3-uniform}.
Let ${\cal F}=\{T_1,\ldots,T_k\}$ be an Euler family of $H$ with a minimum number of components, and suppose $k \ge 2$. Let $G$ be the incidence graph of $H$, let $G'$ be the subgraph of $G$ corresponding to ${\cal F}$, and $G_1,\ldots,G_k$ the connected components of $G'$ corresponding to the closed strict trails $T_1,\ldots,T_k$ of $H$. Since $L$ is connected, without loss of generality, there exist e-vertices $e_1$ of $G_1$ and $e_2$ of $G_2$ that are adjacent in $L$, and hence in $G$ have two common neighbours, say $v_1$ and $v_2$. Since $e_1$ and $e_2$ are of degree 3 in $G$, and of degree 2 in $G'$, each is adjacent to at least one of $v_1$ and $v_2$ in $G'$, and since they lie in distinct connected components of $G'$, we may assume without loss of generality that $v_1e_1, v_2e_2 \in E(G')$. Obtain $G''$ by replacing these two edges of $G'$ with the edges $v_1e_2$ and $v_2e_1$. Then $G''$ is an even subgraph of $G$ that is 2-regular on $E$, so it corresponds to an Euler family of $H$. But since $G''$ has one fewer connected component than $G'$, it contradicts the minimality of ${\cal F}$.
We conclude that $k=1$, that is, $H$ admits an Euler tour.
\end{proof}
We conclude this section with an alternative, more detailed characterization of quasi-eulerian 3-uniform hypergraphs that are either even or odd, in terms of their incidence graph.
\begin{theo}
Let $H=(V,E)$ be an even 3-uniform hypergraph and $G$ its incidence graph. The following are equivalent:
\begin{enumerate}
\item $H$ is quasi-eulerian.
\item $G$ has a subgraph $G''$ that is even on $V$ and 1-regular on $E$.
\item $E$ can be partitioned into pairs $\{ e, e' \}$ such that $e \cap e' \ne \emptyset$.
\end{enumerate}
\end{theo}
\begin{proof}
(1) $\Leftrightarrow$ (2): Assume $H$ is quasi-eulerian. By Theorem~\ref{the:ET-G}, $G$ has an even subgraph $G'$ that is 2-regular on $E$. Define $G''=(V(G),E(G)-E(G'))$. Since $G$ and $G'$ are both even on $V$, so is $G''$, and since $G$ is 3-regular and $G'$ is 2-regular on $E$, $G''$ is 1-regular on $E$. The converse is proved very similarly.
(2) $\Rightarrow$ (3): Assume $G$ has a subgraph $G''$ that is even on $V$ and 1-regular on $E$. For each $v \in V$, the set of all $e \in E$ such that $ve \in E(G'')$ is of even cardinality, and hence can be partitioned into pairs $\{ e, e' \}$ such that $v \in e \cap e'$.
(3) $\Rightarrow$ (2): Let $\cal P$ be such a partition of $E$. For each pair $\{ e, e' \} \in {\cal P}$, choose $v \in e \cap e'$, and let $G''$ be induced by the set of all edges of the form $ve$ and $ve'$. Then $G''$ is even on $V$ and 1-regular on $E$ as required.
\end{proof}
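Condition (3) of the theorem can be tested directly on small instances. The following brute-force Python sketch (illustrative only; exponential in general, so not an efficient algorithm) searches for a partition of $E$ into intersecting pairs:

```python
# Brute-force check of condition (3): can the edge list E be partitioned
# into pairs {e, e'} with e ∩ e' nonempty? (Assumes len(E) is even.)

def has_intersecting_pairing(E):
    """Return True iff E splits into pairs of edges sharing a vertex."""
    if not E:
        return True
    first, rest = E[0], E[1:]
    for j, e in enumerate(rest):
        if set(first) & set(e):                       # candidate partner for first
            if has_intersecting_pairing(rest[:j] + rest[j + 1:]):
                return True
    return False

# Four edges pairing up as ({1,2,3},{3,4,5}) and ({5,6,7},{7,1,2}):
ok = has_intersecting_pairing([{1, 2, 3}, {3, 4, 5}, {5, 6, 7}, {7, 1, 2}])
# Two disjoint edges admit no such pairing:
bad = has_intersecting_pairing([{1, 2, 3}, {4, 5, 6}])
```

By the theorem, the first hypergraph (if even) is quasi-eulerian, while the second is not.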
The analogous result for odd hypergraphs (below) is proved similarly.
\begin{theo}\label{the:odd3hgs}
Let $H=(V,E)$ be an odd 3-uniform hypergraph and $G$ its incidence graph. The following are equivalent:
\begin{enumerate}
\item $H$ is quasi-eulerian.
\item $G$ has an odd subgraph $G''$ that is 1-regular on $E$.
\item $E$ can be partitioned into sets $S$ of odd cardinality such that $ \bigcap_{e \in S} e \ne \emptyset$.
\end{enumerate}
\end{theo}
\begin{cor}
Let $H=(V,E)$ be an odd 3-uniform quasi-eulerian hypergraph. Then $|V| \le |E|$.
\end{cor}
\begin{proof}
By Theorem~\ref{the:odd3hgs}, $\G(H)$ has an odd subgraph $G''$ that is 1-regular on $E$. Since every $v \in V$ has degree at least 1 in $G''$, and no two v-vertices can have a common neighbour in $G''$, we must have $|V| \le |E|$.
\end{proof}
\subsection{Cycle decomposition and 2-factors of quasi-eulerian \\ hypergraphs}\label{sec:CD}
The well-known Veblen's Theorem \cite{Veb} states that a connected graph is even (and hence eulerian) if and only if it admits a decomposition into cycles. The analogous result for hypergraphs is presented below.
\begin{theo}\label{the:CD}
A hypergraph is quasi-eulerian if and only if it admits a decomposition into cycles.
\end{theo}
\begin{proof}
Let $H=(V,E)$ be a quasi-eulerian hypergraph and $G$ its incidence graph. By Theorem~\ref{the:ET-G}, $G$ has an even subgraph $G'$ that is 2-regular on $E$. Hence $G'$ admits a decomposition into cycles, ${\cal C}_G$, and every e-vertex lies in exactly one of the cycles in ${\cal C}_G$. Let ${\cal C}_H$ be the corresponding family of cycles in $H$. Then every $e \in E$ lies in exactly one of the cycles in ${\cal C}_H$, so ${\cal C}_H$ is a cycle decomposition of $H$.
Conversely, assume that $H=(V,E)$ is a hypergraph with a cycle decomposition ${\cal C}$. Sequentially concatenating pairs of cycles in ${\cal C}$ with a common anchor, until no such pairs remain, yields an Euler family for $H$.
\end{proof}
In the remainder of this section, we shall focus on the relationship between eulerian properties and existence of 2-factors in a hypergraph. In Section~\ref{sec:L(H)} we observed that an Euler family in a hypergraph corresponds to a 2-factor in the intersection graph, and an Euler tour corresponds to a Hamilton cycle, but not conversely. As we shall see below, a stronger relationship exists between eulerian properties of a hypergraph and existence of 2-factors in its dual.
Recall that the {\em dual} of a non-empty hypergraph $H=(V,E)$ is the hypergraph $H^T=(E,V^T)$, where $V^T=\{v^T:v \in V \}$ and $v^T=\{ e \in E: v \in e\}$ for all $v \in V$. Observe that $(v,e) \in F(H)$ if and only if $(e,v^T) \in F(H^T)$, whence $(H^T)^T=H$ and the incidence graphs of $H$ and $H^T$ are isomorphic.
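The dual construction can be sketched in a few lines of Python (illustrative only; since the edges $e \in E$ here are sets rather than labels, we index them by position):

```python
# Sketch of the dual H^T = (E, V^T): for each vertex v of H, the dual edge
# v^T = {e in E : v in e}, with edges of H indexed 0..|E|-1.

def dual(V, E):
    """Return the list of dual edges v^T, one per vertex v of H."""
    return [frozenset(i for i, e in enumerate(E) if v in e) for v in V]

# Example: H with V = {1,2,3} and E = ({1,2}, {2,3}).
VT = dual([1, 2, 3], [{1, 2}, {2, 3}])
```

Here vertex 2 lies in both edges, so $2^T=\{0,1\}$, illustrating that $(v,e) \in F(H)$ if and only if $(e,v^T) \in F(H^T)$.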
It is easy to see that a hypergraph is 2-regular if and only if its dual is 2-uniform. Below, we extend this observation to existence of 2-factors.
\begin{lemma}\label{lem:2-factor}
Let $H=(V,E)$ be a non-empty hypergraph and $H^T=(E,V^T)$ its dual. Let $E' \subseteq E$, $H'=(V,E')$, and $G'=\G(H')$. Then the following are equivalent:
\begin{enumerate}
\item $H'$ is a 2-factor of $H$.
\item $G'$ satisfies $\deg_{G'}(v)=2$ for all $v \in V$, $\deg_{G'}(e)=|e|$ for all $e \in E'$, and $V(G')\cap E=E'$.
\item $H^T[E']$ is 2-uniform with $|V|$ edges; that is, each edge of $H^T$ intersects $E'$ in exactly 2 vertices.
\end{enumerate}
\end{lemma}
\begin{proof}
It is clear that Statements (1) and (2) are equivalent.
Let $V'=\bigcup_{e \in E'} e$, and recall that $H[E']=(V',E')$. To see that (1) and (3) are equivalent, first observe that $\G(H[E'])$ and $\G(H^T[E'])$ are isomorphic with the isomorphism $\phi: V(\G(H[E'])) \rightarrow V(\G(H^T[E']))$ defined by $\phi(v)=v^T \cap E'$ for all $v \in V'$ and $\phi(e)=e$ for all $e \in E'$. If $(V,E')$ is a 2-factor, then $(V,E')=H[E']$, and since $\G(H[E'])$ and $\G(H^T[E'])$ are isomorphic and $H[E']$ is 2-regular with $|V|$ vertices, $H^T[E']$ is 2-uniform with $|V|$ edges. Conversely, if $H^T[E']$ is 2-uniform with $|V|$ edges, then $H[E']$ is 2-regular with $|V|$ vertices. Hence $H[E']=(V,E')$ and this subhypergraph is a 2-factor of $H$.
\end{proof}
The above lemma easily implies the following.
\begin{cor}
Let $H=(V,E)$ be a non-empty hypergraph and $H^T=(E,V^T)$ its dual. Then $H$ admits a 2-factorization if and only if there exists a partition $\{ E_1,\ldots, E_k\}$ of $E$ such that for all $i\in \{1,\ldots,k\}$, the vertex-induced subgraph $H^T[E_i]$ is 2-uniform with $|V|$ edges (that is, each edge of $H^T$ intersects $E_i$ in exactly 2 vertices).
\end{cor}
We are now ready to demonstrate the correspondence between 2-factors in a hypergraph with no odd-size edges and particular Euler families in its dual.
\begin{theo}\label{the:2-factor}
Let $H=(V,E)$ be a non-empty hypergraph without empty edges
such that $|e|$ is even for all $e \in E$, and let $H^T=(E,V^T)$ be its dual. Let $E' \subseteq E$.
Then $(V,E')$ is a 2-factor (connected 2-factor) of $H$ if and only if $H^T$ has an Euler family (Euler tour, respectively) with anchor set $E'$ that traverses every vertex $e \in E'$ exactly $\frac{|e|}{2}$ times.
\end{theo}
\begin{proof}
Observe that, with the assumptions of the theorem, the dual $H^T$ has no vertices of odd degree.
Assume $F=(V,E')$ is a 2-factor of $H$, and let $G'$ be its incidence graph. By Lemma~\ref{lem:2-factor}, $G'$ is a subgraph of the incidence graph $\G(H)$ such that $\deg_{G'}(v)=2$ for all $v \in V$, $\deg_{G'}(e)=|e|$ for all $e \in E'$, and $V(G')\cap E=E'$. Hence the incidence graph of the dual $H^T$ admits a subgraph that is 2-regular on $V^T$ and even on $E$, which implies that $H^T$ admits an Euler family. Since $\deg_{G'}(e)=|e|$ for all $e \in E'$, this Euler family of $H^T$ traverses each $e \in E'$ exactly $\frac{|e|}{2}$ times, and each $e \in E-E'$ not at all. If, in addition, the 2-factor $F$ is connected, then $G'$ is connected by \cite[Theorem 3.11]{BahSaj}, and hence corresponds to an Euler tour of $H^T$.
The converse is proved by reversing the above steps.
\end{proof}
\begin{cor}
Let $H=(V,E)$ be a non-empty hypergraph without empty edges such that $|e|$ is even for all $e \in E$, and let $H^T=(E,V^T)$ be its dual.
Then $H$ admits a 2-factorization if and only if there exists a partition $\{ E_1,\ldots, E_k\}$ of $E$ such that for each $i=1,\ldots,k$, the dual $H^T$ admits an Euler family with anchor set $E_i$ that traverses every vertex $e \in E_i$ exactly $\frac{|e|}{2}$ times.
\end{cor}
\begin{proof}
Let $\{F_1,\ldots,F_k\}$ be a 2-factorization of $H$. For each $i \in \{ 1,\ldots,k\}$, let $E_i=E(F_i)$. Then $\{ E_1,\ldots,E_k\}$ is a partition of $E$, and by Theorem~\ref{the:2-factor}, for each $i \in \{ 1,\ldots,k\}$,
the dual $H^T$ admits an Euler family with anchor set $E_i$ that traverses every vertex $e \in E_i$ exactly $\frac{|e|}{2}$ times.
Conversely, assume that $\{ E_1,\ldots,E_k\}$ is a partition of $E$ such that, for each $i \in \{ 1,\ldots,k\}$,
the dual $H^T$ admits an Euler family with anchor set $E_i$ that traverses every vertex $e \in E_i$ exactly $\frac{|e|}{2}$ times. Then by Theorem~\ref{the:2-factor}, each $F_i=(V,E_i)$ is a 2-factor of $H$, and $\{F_1,\ldots,F_k\}$ is a 2-factorization.
\end{proof}
\begin{center}
{\large \bf Acknowledgement}
\end{center}
The first author wishes to thank the Department of Mathematics and Statistics, University of Ottawa, for its hospitality during his postdoctoral fellowship, when this research was conducted. The second author gratefully acknowledges financial support by the Natural Sciences and Engineering Research Council of Canada (NSERC).
\medskip
\section{Introduction}
\label{sec:intro}
\def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
Pion-nucleon scattering is a fundamental process involving the lightest meson and baryon. Therefore, at low energies it is amenable to study within the
low-energy effective field theory of QCD, Chiral Perturbation Theory (CHPT) \cite{weinberg,gasser1}, which takes into account both the spontaneous and the explicit breaking of chiral symmetry in strong interactions. While the baryon field transforms linearly under the chiral group, the pions transform non-linearly \cite{cole}. A first step in extending CHPT to systems with baryon number one was undertaken in ref.~\cite{gasser2}. Contrary to standard CHPT, it was established that, due to the presence of the large nucleon mass, loops do not respect the chiral power counting, and the lower order counterterms are renormalized by higher order loops. The power counting was recovered by applying Heavy Baryon CHPT (HBCHPT) \cite{jenkins}, where the heavy components of the baryon fields are integrated out \cite{kambor,review} so that manifest Lorentz invariance
is lost. On the other hand, for some loop functions the expansion in inverse powers of the nucleon mass and the loop integration do not commute so that the non-relativistic expansion does not converge \cite{becher,review}. The recovery of the power counting, while keeping manifest Lorentz invariance,
was achieved by the Infrared Regularization method (IR) \cite{becher}, based on the
ideas in ref.~\cite{elli}.
IR was extended to the multi-nucleon sector \cite{goity} and to multi-loop diagrams \cite{gegen}.
Another relativistic approach to baryon CHPT is the so-called extended-on-mass-shell (EOMS) renormalization scheme \cite{eoms1,eoms2}. The latter is based on removing explicitly the power counting breaking terms appearing in the loop
integrals in dimensional regularization since they are re-absorbed by the finite set of low-energy counterterms up to the order the calculation is performed. For a recent review on baryon CHPT see ref.~\cite{bernard}.
Here we focus on the application of IR CHPT methods to low-energy pion-nucleon scattering. In HBCHPT there is already an extensive list of detailed calculations with several degrees of precision.
In refs.~\cite{mojzis,elli,fettes3} an ${\cal O}(q^3)$ calculation is performed, with the additional inclusion of the $\Delta(1232)$ in ref.~\cite{elli}.
The calculation of pion-nucleon scattering was extended up to ${\cal O}(q^4)$ in ref.~\cite{fettes4}, while isospin violation (including both strong isospin breaking terms and electromagnetism) is worked out up to ${\cal O}(q^3)$ in ref.~\cite{fettes_ep}. The same authors also studied the
influence of the $\Delta$-isobar within the small $\epsilon$-expansion \cite{hemmert} up to ${\cal O}(\epsilon^3)$ in ref.~\cite{fettes_small}.\footnote{Other chiral power-countings including the explicit $\Delta(1232)$ resonance are the $\delta$-expansion \cite{pascalutsa12} and the more recent one of ref.~\cite{kolck}.}
Isospin breaking corrections for the pion-nucleon scattering lengths to ${\cal O}(q^3)$
are calculated within IR in refs.~\cite{lipartia,hoferichter}. This is part of an on-going effort for providing high precision determinations
of the pion-nucleon scattering lengths (a recent review on this issue is ref.~\cite{gassrev}).
Within IR CHPT $\pi N$ scattering was already considered in refs.~\cite{beche2,elli2}. Ref.~\cite{beche2} performed an ${\cal O}(q^4)$ one-loop calculation.
Its main conclusion was that the one-loop representation is not precise enough to allow a sufficiently accurate extrapolation of physical data to the Cheng-Dashen point.
On the other hand, ref.~\cite{elli2} was interested in the complementary aspects of comparing IR CHPT at ${\cal O}(q^3)$ to data and to previous
HBCHPT studies \cite{elli,fettes3,fettes4}.
The conclusions were unexpected and rather pessimistic. The description obtained was restricted to very low center-of-mass (CM) pion kinetic energy (less than 40~MeV), such that the IR results badly diverge from the experimental values above that energy in several partial waves \cite{elli2}. In comparison, the resulting phase shifts in HBCHPT \cite{elli,fettes3} fit pion-nucleon phase shifts up to significantly higher energies and then start deviating smoothly from data. Last but not least, ref.~\cite{elli2} also found
an unrealistically large violation (20--30\%) of the Goldberger-Treiman (GT) relation \cite{goldberger} for the pion-nucleon coupling.
As we shall show, our results are somewhat more optimistic than those of ref.~\cite{elli2}, because we obtain that IR is able to describe low-energy pion-nucleon scattering comparably to HBCHPT at ${\cal O}(q^3)$. Nevertheless, the caveat about the large violation of the GT
relation in a full IR calculation at ${\cal O}(q^3)$ remains. When this calculation is restricted to strict ${\cal O}(q^3)$, as in HBCHPT,
more realistic values around 2\% are obtained for the violation of the GT relation.
As a consequence of unitarity, $\pi N$ partial wave amplitudes develop a right-hand or unitarity cut with a branch
point at the reaction threshold. The first derivative of the partial waves at this point is singular.
Based on this, ref.~\cite{beche2} advocates for applying the chiral expansion to the subthreshold region of the
$\pi N$ scattering amplitude where it is expected to be smoother.
This singularity can also be avoided by applying the chiral expansion to an interaction kernel which, by construction, has no right-hand cut.
This is the so-called Unitary CHPT (UCHPT) \cite{npa,nd,pin,plb}.
One of the consequences of this framework is that the calculated $\pi N$ partial waves fulfill
unitarity. We compare in this work the purely perturbative results with those obtained by UCHPT, with the latter being able to
fit data closely up to higher energies as shown below. There are other methods already employed to provide unitarized $\pi N$ amplitudes from the given chiral expansions,
e.g.~refs.~\cite{gomez,grana,gaspa}. We will also
explore the region of the $\Delta(1232)$ resonance by including a Castillejo-Dalitz-Dyson (CDD) pole \cite{cdd} in the inverse of the
amplitude. The resulting amplitude has the same discontinuities along the right- and left-hand cuts as the one without
the CDD poles. Traditionally, this was a serious drawback for the bootstrap hypothesis \cite{demo,demo2}
since the source for dynamics in this approach is given precisely by the discontinuities of the amplitudes along the cuts.
From our present knowledge based on QCD the presence of these extra solutions (including an arbitrary number of CDD poles)
can be expected on intuitive grounds since they would be required in order to accommodate pre-existing resonances due to the elementary degrees of freedom of QCD.
Alternatively,
one could also include the $\Delta(1232)$ resonance explicitly within the effective field theory
as a massive field \cite{38,bora,hemmert,elli2,pascalutsa12,kolck}.
After this introduction we give in section \ref{sec2} our conventions and Lagrangians employed,
including kinematics and equations used to project into partial waves.
The calculation at the one-loop level up to ${\cal O}(q^3)$ is performed in section \ref{sec3},
where we compare directly with the expressions given in ref.~\cite{beche2}. We also present fits to the experimental data at the perturbative level and discuss the resulting values for the
chiral counterterms, scattering lengths and volumes and the violation of the GT relation.
The resummation of the right-hand cut by means of UCHPT is undertaken in section \ref{sec4},
where we discuss the comparison with experimental data and the significant increase of the energy range
for the reproduction of data. The conclusions are given in section \ref{sec5}.
\section{Prelude: Generalities, kinematics and Lagrangians}
\label{sec2}
\def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
We consider the process $\pi^a(q) N(p,\sigma;\alpha)\to \pi^{a'}(q') N(p',\sigma';\alpha')$.
Here $a$ and $a'$ denote the Cartesian coordinates in the isospin space of the initial and final pions with
four-momentum $q$ and $q'$, respectively.
Regarding the nucleons, $\sigma$($\sigma'$) and $ \alpha(\alpha')$ correspond to the third-components
of spin and isospin of the initial (final) states, in order. The usual Mandelstam variables are
defined as $s=(p+q)^2=(p'+q')^2$, $t=(q-q')^2=(p-p')^2$
and $u=(p-q')^2=(p'-q)^2$, that fulfill $s+t+u=2M_\pi^2+2 m^2$ for on-shell scattering, with $m$ and $M_\pi$ the
nucleon and pion mass, respectively.
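The on-shell constraint $s+t+u=2M_\pi^2+2m^2$ can be checked numerically by building the Mandelstam variables in the centre-of-mass frame; the masses and kinematics below are purely illustrative inputs:

```python
# Numerical check of s + t + u = 2 M_pi^2 + 2 m^2 for elastic pi N
# scattering in the CM frame, with illustrative kinematic inputs.

import math

M_PI, M_N = 0.13957, 0.93827   # pion and nucleon masses in GeV (inputs)
p, cth = 0.2, 0.3              # CM three-momentum (GeV) and cos(theta)

E_pi = math.hypot(M_PI, p)     # pion energy, sqrt(M_pi^2 + p^2)
E_N = math.hypot(M_N, p)       # nucleon energy

s = (E_pi + E_N) ** 2                            # s = (p + q)^2
t = -2 * p ** 2 * (1 - cth)                      # t = (q - q')^2, |q| = |q'|
u = (E_N - E_pi) ** 2 - 2 * p ** 2 * (1 + cth)   # u = (p - q')^2

assert abs(s + t + u - (2 * M_PI ** 2 + 2 * M_N ** 2)) < 1e-12
```

The identity holds for any CM momentum and scattering angle, since the $p^2$-dependence cancels in the sum.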
It is convenient to consider Lorentz- and isospin-invariant amplitudes (throughout our study exact isospin symmetry is assumed).
We then decompose the scattering amplitude as \cite{hohler}
\begin{align}
T_{ a a' }&=\delta_{a'a} T^++\frac{1}{2}[\tau_a,\tau_{a'}]T^{-}~,\nonumber\\
T^{\pm}&=\bar{u}(p',\sigma')\left[A^{\pm}+\frac{1}{2}(\barr{q}+{\barr{q}}\,')B^{\pm}\right]u(p,\sigma)~.
\label{apmbpmdef}
\end{align}
Here, the Pauli matrices are indicated by $ \tau_c$.
In the next section we will proceed with the calculation of $A^{\pm}$ and $B^ {\pm}$
perturbatively up to ${\cal O}(q^3)$.
In IR the Feynman diagrams for $\pi N$ scattering follow the
standard chiral power counting \cite{wein}
\begin{align}
\nu=1+2L+\sum_{i}V_i(d_i+\frac{1}{2}n_i-2)~,
\label{counting}
\end{align}
where $L$ is the number of loops, $V_i$ is the number of vertices of type $i$
consisting of $n_i$ baryon fields (in our case $n_i=0$, 2) and $d_i$ pion derivatives or masses.
In this way, a given Feynman diagram for $\pi N$ scattering counts as $q^\nu$.
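The counting of eq.~\eqref{counting} can be sketched with a small helper function; the diagram assignments below (LO tree, NLO tree, one-loop) follow the discussion in the text, while the function itself is ours.

```python
# Sketch of the chiral power counting nu = 1 + 2L + sum_i V_i(d_i + n_i/2 - 2).
def chiral_order(L, vertices):
    """vertices: iterable of (V_i, d_i, n_i) triples."""
    return 1 + 2*L + sum(V*(d + n/2 - 2) for V, d, n in vertices)

# LO tree graphs: no loops, two lowest-order pi N vertices (d_i=1, n_i=2).
print(chiral_order(0, [(2, 1, 2)]))                    # 1.0
# NLO tree graph: one d_i=2, n_i=2 vertex.
print(chiral_order(0, [(1, 2, 2)]))                    # 2.0
# One-loop graph with lowest-order pi N (d_i=1, n_i=2) and mesonic
# (d_i=2, n_i=0) vertices only: each vertex contributes zero, the loop adds 2.
print(chiral_order(1, [(2, 1, 2), (1, 2, 0)]))         # 3.0
```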
For the calculation of pion-nucleon scattering up to ${\cal O}(q^3)$ we employ the chiral Lagrangian
\begin{align}
{\cal L}_{CHPT}&={\cal L}_{\pi\pi}^{(2)}+
{\cal L}_{\pi\pi}^{(4)}+
{\cal L}_{\pi N}^{(1)}+
{\cal L}_{\pi N}^{(2)}+
{\cal L}_{\pi N}^{(3)}~,
\end{align}
where the superscript indicates the chiral order, according to eq.~\eqref{counting}.
Here, ${\cal L}_{\pi\pi}^{(n)}$ refers to the purely mesonic Lagrangian without baryons and
${\cal L}_{\pi N}^{(n)}$ corresponds to the one bilinear in the baryon fields. We follow the same notation
as in ref.~\cite{beche2} to make easier the comparison. Then,
\begin{align}
{\cal L}_{\pi\pi}^{(2)}&=\frac{F^2}{4}\langle u_\mu u^\mu+\chi_+\rangle~,\nonumber\\
{\cal L}_{\pi\pi}^{(4)}&=\frac{1}{16}\ell_4 \left(2 \langle u_\mu u^\mu\rangle \langle \chi_+ \rangle
+\langle \chi_+\rangle^2\right)+\ldots
\label{lagpi}
\end{align}
where the ellipsis indicates terms that are not needed in the calculations given here.
Among the different symbols, $F$ is the pion weak decay constant in the chiral limit and
\begin{align}
u^2&=U~,~u_\mu=i u^\dagger \partial_\mu U\, u^\dagger~,~\chi_{\pm}=u^\dagger \chi u^\dagger\pm u \chi^\dagger u~.
\end{align}
The explicit chiral symmetry breaking due to the non-vanishing quark masses (in the isospin limit $m_u=m_d=\hat{m}$)
is introduced through $\chi=2 B_0 \hat{m}$. The constant $B_0$ is proportional to the quark condensate in the chiral limit
$\langle 0|\bar{q}^j q^i|0\rangle=-B_0 F^2 \delta^{ij}$.
In the following we employ the so-called sigma-parameterization where
\begin{align}
U(x)&=\sqrt{1-\frac{\vec{\pi}(x)^2}{F^2}}+i\frac{\vec{\pi}(x)\cdot \vec{\tau}}{F}~.
\end{align}
In eq.~\eqref{lagpi} we denote by $\langle \cdots \rangle$ the trace of the resulting $2\times 2$ matrix.
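As a consistency check of the sigma-parameterization, one can verify numerically that $U(x)$ is an $SU(2)$ matrix; the pion field configuration below is an arbitrary illustrative choice with $|\vec{\pi}|<F$.

```python
import numpy as np

# Numerical check that the sigma-parameterization yields U in SU(2),
# i.e. U^dagger U = 1 and det U = 1, for a made-up pion configuration.
F = 92.4                                   # MeV
pi = np.array([12.0, -7.0, 30.0])          # pion field components, MeV
tau = [np.array([[0, 1], [1, 0]], dtype=complex),      # Pauli matrices
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

U = np.sqrt(1 - pi @ pi / F**2) * np.eye(2) \
    + 1j / F * sum(p * t for p, t in zip(pi, tau))

print(np.allclose(U.conj().T @ U, np.eye(2)))   # unitarity -> True
print(np.isclose(np.linalg.det(U), 1.0))        # det U = 1  -> True
```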
For the pion-nucleon Lagrangian we have
\begin{align}
{\cal L}_{\pi N}^{(1)}&=\bar{\psi}(i\barr{D}-\krig{m})\psi+\frac{g}{2}\bar{\psi}\barr{u}\gamma_5 \psi~,\nonumber\\
{\cal L}_{\pi N}^{(2)}&=c_1 \langle \chi_+\rangle \bar{\psi}\psi-\frac{c_2}{4m^2}\langle u_\mu u_\nu\rangle(\bar{\psi}D^\mu D^\nu \psi+\hbox{h.c.})+\frac{c_3}{2}\langle u_\mu u^\mu\rangle \bar{\psi}\psi-\frac{c_4}{4}\bar{\psi}\gamma^\mu\gamma^\nu[u_\mu,u_\nu]\psi+\ldots~,\nonumber\\
{\cal L}_{\pi N}^{(3)}&=\bar{\psi}\Biggl(-\frac{d_1+d_2}{4m}([u_\mu,[D_\nu,u^\mu]+[D^\mu,u_\nu]]D^\nu
+\hbox{h.c.})\nonumber\\
&+\frac{d_3}{12 m^3}([u_\mu,[D_\nu,u_\lambda]](D^\mu D^\nu D^\lambda+\hbox{sym.})+\hbox{h.c.})
+i\frac{d_5}{2 m}([\chi_-,u_\mu]D^\mu+\hbox{h.c.})\nonumber\\
&+i\frac{d_{14}-d_{15}}{8 m}\left(\sigma^{\mu \nu}\langle [D_\lambda,u_\mu]u_\nu-u_\mu [D_\nu,u_\lambda]\rangle
D^\lambda+\hbox{h.c.}\right)\nonumber\\
&+\frac{d_{16}}{2}\gamma^\mu\gamma_5\langle\chi_+\rangle u_\mu+\frac{id_{18}}{2}\gamma^\mu \gamma_5 [D_\mu,\chi_-]\Biggr) \psi
+\ldots
\label{lagN}
\end{align}
In the previous equation $\krig{m}$ is the nucleon mass in the chiral limit ($m_u=m_d=0$) and the covariant derivative $D_\mu$ acting on the baryon fields is given by $\partial_\mu+\Gamma_\mu$ with $\Gamma_\mu= [u^\dagger,\partial_\mu u]/2$.
The low-energy constants (LECs) $c_i$ and $d_i$ are not fixed by chiral symmetry and we fit them to $\pi N$ scattering data.
Again only the terms needed for the present study are shown in eq.~\eqref{lagN}. For more details on the definition and derivation of the different
monomials we refer to refs.~\cite{fettes3,opv}.
The free one-particle states are normalized according to the Lorentz-invariant normalization
\begin{align}
\langle \mathbf{p}',\sigma';\gamma|\mathbf{p},\sigma;\gamma\rangle=
2 E_p (2\pi)^3\delta(\mathbf{p}'-\mathbf{p}) \delta_{\sigma\sigma'}\delta_{\gamma\gamma'}~,
\end{align}
where $E_p$ is the energy of the particle with three-momentum $\mathbf{p}$ and $\gamma$ indicates any internal quantum number. A free two-particle state
is normalized accordingly and it can be decomposed in states with well defined total spin $S$ and total angular momentum $J$. For $\pi N$ scattering $S=1/2$ and one has in the CM frame
\begin{align}
|\pi(-\mathbf{p};a)N(\mathbf{p},\sigma;\alpha)\rangle&=\sqrt{4\pi}\sum_{\ell,m} (m \sigma \mu|\ell \frac{1}{2} J) Y_\ell^m(\hat{\mathbf{p}})^*|J \mu \ell;a \alpha \rangle~,
\label{waves}
\end{align}
with $\hat{\mathbf{p}}$ the unit vector of the CM nucleon three-momentum $\mathbf{p}$, $\ell$ the orbital angular momentum, $m$ its third component and $ \mu=m+\sigma$ the third-component of the total angular momentum.
The Clebsch-Gordan coefficient is denoted by $(m_1 m_2 m_3|j_1 j_2 j_3)$, corresponding to the composition of the spins $j_1$ and $j_2$ (with third-components $m_1$ and $m_2$, in order) to give the third spin $j_3$, with third-component $m_3$.
The state with total angular momentum well-defined, $|J \mu \ell;a \alpha\rangle$, satisfies the normalization
condition
\begin{align}
\langle J' \mu' \ell';a' \alpha'|J \mu \ell;a \alpha\rangle=\delta_{J J'}\delta_{\mu'\mu}\delta_{\ell \ell'}
\frac{4\pi \sqrt{s}}{|\mathbf{p}|} \delta_{a'a}\delta_{\alpha'\alpha}~.
\label{jdef.norma}
\end{align}
The partial wave expansion of the $\pi N$ scattering amplitude can be worked out straightforwardly from eq.~\eqref{waves}.
By definition, the initial baryon three-momentum $\mathbf{p}$ gives the positive direction of the ${\mathbf{z}}$-axis. Inserting the series of eq.~\eqref{waves} one has for the scattering amplitude
\begin{align}
\langle \pi(-\mathbf{p}';a')N(\mathbf{p}',\sigma';\alpha')|T|\pi(-\mathbf{p};a)N(\mathbf{p},\sigma;\alpha)\rangle&=
4\pi\sum_{\ell,m,J}Y_\ell^0(\hat{\mathbf{z}})(m\sigma'\sigma|\ell\frac{1}{2}J)
(0\sigma\sigma|\ell\frac{1}{2}J) Y_{\ell}^m(\hat{\mathbf{p}}') T_{J\ell}(s)~,
\label{series_t}
\end{align}
where $T$ is the T-matrix operator and $T_{J\ell}$ is the partial wave amplitude with total angular momentum $J$ and orbital angular momentum $\ell$. Notice that in eq.~\eqref{series_t} we made use of the fact that $Y_\ell^m(\hat{\mathbf{z}})$ is non-zero only for $m=0$. Recall also that because of parity conservation partial wave amplitudes with different orbital angular momentum do not mix.
From eq.~\eqref{series_t} it is straightforward to isolate $T_{J\ell}$ with the result
\begin{align}
T_{J\ell}(a',\alpha';a,\alpha)&=\frac{1}{\sqrt{4\pi(2\ell+1)}(0\sigma\sigma|\ell\frac{1}{2}J)}
\sum_{m,\sigma'}\int d\hat{\mathbf{p}}'\,\langle \pi(-\mathbf{p}';a')N(\mathbf{p}',\sigma';\alpha')|T|\pi(-\mathbf{p};a)N(\mathbf{p},\sigma;\alpha)\rangle
\nonumber\\
&\times (m\sigma'\sigma|\ell\frac{1}{2}J) Y_\ell^m(\hat{\mathbf{p}}')^*~.
\label{tjl}
\end{align}
In the previous expression the resulting $T_{J\ell}$ is of course independent
of the choice of $\sigma$.
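The angular projection in eq.~\eqref{tjl} can be illustrated in a simplified, spinless setting, where the Clebsch-Gordan factors drop out and only the Legendre orthogonality remains; the toy amplitude below is our own construction, not the $\pi N$ one.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, Legendre

# Spinless analogue of a partial-wave projection: for a toy amplitude
#   T(z) = sum_l (2l+1) f_l P_l(z),   z = cos(theta),
# Legendre orthogonality gives f_l = (1/2) * integral_{-1}^{1} T(z) P_l(z) dz.
# The spin-1/2 pi N case dresses the same angular integral with
# Clebsch-Gordan coefficients and spherical harmonics.
f_true = [0.3, -1.2, 0.45]                       # toy f_0, f_1, f_2

z, w = leggauss(16)                              # Gauss-Legendre nodes/weights

def T(z):
    return sum((2*l + 1) * f * Legendre.basis(l)(z)
               for l, f in enumerate(f_true))

f_rec = [0.5 * np.sum(w * T(z) * Legendre.basis(l)(z)) for l in range(3)]
print(np.allclose(f_rec, f_true))                # True
```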
The relation between the Cartesian and charge bases is given by
\begin{align}
|\pi^+\rangle&=\frac{1}{\sqrt{2}}(|\pi^1\rangle+i|\pi^2\rangle)~,\nonumber\\
|\pi^-\rangle&=\frac{1}{\sqrt{2}}(|\pi^1\rangle-i|\pi^2\rangle)~,\nonumber\\
|\pi^0\rangle&=|\pi^3\rangle~.
\label{pion_charged}
\end{align}
According to the previous definition of states
$|\pi^+\rangle=-|1,+1\rangle$, $|\pi^-\rangle=|1,-1\rangle$ and $|\pi^0\rangle=|\pi^3\rangle=|1,0\rangle$, where the states of the isospin basis
are placed to the right of the equal sign. Notice the minus sign in the relationship for $|\pi^+\rangle$. Then, the amplitudes with well-defined isospin,
$I=3/2$ or 1/2, are denoted by $T_{IJ\ell}$ and can be obtained employing the appropriate linear
combinations of $T_ {J\ell}(a',\alpha';a,\alpha)$, eq.~\eqref{tjl}, in terms of standard Clebsch-Gordan
coefficients.
Due to the normalization of the states with well-defined total angular momentum, eq.~\eqref{jdef.norma},
the partial waves resulting from eq.~\eqref{tjl} with
well defined isospin satisfy the unitarity relation
\begin{align}
\hbox{Im}T_{IJ\ell}=\frac{|\mathbf{p}|}{8\pi \sqrt{s}}|T_{IJ\ell}|^2
\label{unita}
\end{align}
for $|\mathbf{p}|>0$ and below the inelastic threshold due to one-pion production at $|\mathbf{p}|\simeq 210$~MeV.
Given the previous equation, the $S$-matrix element with well defined $I$, $J$ and $\ell$, denoted by
$S_{I J\ell}$, corresponds to
\begin{align}
S_{I J\ell}=1+i\frac{|\mathbf{p}|}{4\pi\sqrt{s}}T_{I J\ell}~,
\label{s.def}
\end{align}
satisfying $S_{I J \ell}S_{I J \ell}^*=1$ in the elastic physical region.
In the same region we can then write
\begin{align}
S_{I J\ell}=e^{2 i\delta_{I J\ell}}~,
\label{s.def.2}
\end{align}
with $\delta_{I J\ell}$ the corresponding phase shifts.
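Eqs.~\eqref{unita}--\eqref{s.def.2} can be checked numerically; the values of $\sqrt{s}$, $|\mathbf{p}|$ and $\delta$ below are arbitrary illustrative inputs, not taken from any fit.

```python
import cmath, math

# Illustrative check: writing T = (8*pi*sqrt(s)/|p|) * sin(delta) * e^{i delta}
# one recovers Im T = |p|/(8*pi*sqrt(s)) |T|^2 and S = e^{2 i delta}, |S| = 1.
sqrt_s, p, delta = 1.10, 0.10, 0.35        # GeV, GeV, radians (arbitrary)

T = 8*math.pi*sqrt_s/p * math.sin(delta) * cmath.exp(1j*delta)
S = 1 + 1j * p/(4*math.pi*sqrt_s) * T      # eq. (s.def)

print(abs(T.imag - p/(8*math.pi*sqrt_s) * abs(T)**2) < 1e-9)   # True
print(abs(S - cmath.exp(2j*delta)) < 1e-12)                    # True
```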
\section{Perturbative calculation and its results}
\label{sec3}
\def\theequation{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
From eq.~\eqref{counting} the leading order contribution to $\pi N$-scattering has $\nu=1$ and it
consists only of the lowest order pion-nucleon vertices with $d_i=1$ and of no loops ($L=0$).
These diagrams correspond to the first two topologies shown from left to right in the first line of
fig.~\ref{fig:diag_list}, where all the diagrams up-to-and-including ${\cal O}(q^3)$ are shown.
The ${\cal O}(q^2)$ or next-to-leading order (NLO)
contribution still has no loops ($L=0$) and contains an ${\cal O}(q^2)$ vertex with $d_i=2$. It is shown by the
third diagram in the first line of fig.~\ref{fig:diag_list}. The NLO pion-nucleon vertex is depicted by
the filled square. The ${\cal O}(q^3)$ or next-to-next-to-leading order (N$^2$LO) contributions consist of tree-level diagrams
with at least one vertex of $d_i=3$ type, with the other ones with $d_i=1$. They are shown
by the diagrams in the second line of fig.~\ref{fig:diag_list}, where the diamond corresponds to the $d_i=3$ vertex.
Finally, at N$^2$LO one also has the one loop ($L=1$) diagrams involving only
vertices with $d_i=1$ from the LO pion-nucleon Lagrangian, ${\cal L}_{\pi N}^{(1)}$, and
with $d_i=2$ from the LO pure mesonic Lagrangian, ${\cal L}_{\pi\pi}^{(2)}$. The one-loop diagrams are the rest of those shown in the figure and
are labelled with a Latin letter (a)--(v). In addition, one also has the wave function renormalization of pions and nucleons affecting the LO contribution.
The calculation is finally given in terms of $m$, $F_\pi$ and $g_A$, which implies
some reshuffling of pieces once the constants $\krig{m}$, $F$ and $g$ in the chiral limit
are expressed in terms of the physical $m$, $F_\pi$ and $g_A$ making use of their expressions at ${\cal O}(q^3)$ \cite{beche2}. In this work
we employ the numerical values $F_\pi=92.4$~MeV, $M_\pi=139$~MeV, $m_N=939$~MeV, $g_A=1.267$ and $\mu=m_N$.
The set of diagrams in fig.~\ref{fig:diag_list} was evaluated within IR CHPT in ref.~\cite{beche2} and we have re-evaluated it
independently. We keep the same labelling
for the one-loop diagrams as in this reference for easier comparison. We agree with all the one-loop integrals
given in detail there. Regarding their contributions to $A^{\pm}$ and $B^{\pm}$
we also agree with all of them except for the contributions of the so-called integral $I_B^{(2)}$, that results
from the tensor one-loop integrals with one meson and two
baryon propagators, see appendix C of ref.~\cite{beche2}. We find that systematically all
its contributions as given in ref.~\cite{beche2} should be reversed in sign. These contributions appear in diagrams (c)+(d),
(g)+(h) and (i).
Apart from the direct calculation, we have checked that the expressions given in ref.~\cite{beche2} violate
perturbative unitarity. This is so because unitarity, eq.~\eqref{unita}, is a non-linear
relation that mixes up orders in a power expansion. Denoting with a superscript the chiral order so that
$T_{IJ\ell}=T_{IJ\ell}^{(1)}
+T_{IJ\ell}^{(2)}+T_{IJ\ell}^{(3)}+{\cal O}(q^4)$ the unitarity relation eq.~\eqref{unita} up to ${\cal O}(q^3)$ implies
\begin{align}
\hbox{Im}T_{IJ\ell}^{(3)}=\frac{|\mathbf{p}|}{8\pi\sqrt{s}}\left(T_{IJ\ell}^{(1)}\right)^2~.
\label{per_uni}
\end{align}
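The way eq.~\eqref{per_uni} emerges from expanding an exactly unitary amplitude can be made explicit with a symbolic toy model; the real "K-matrix" amplitude below is our own choice, with $\kappa q$ playing the role of $|\mathbf{p}|/8\pi\sqrt{s}$, which counts as ${\cal O}(q)$ since $|\mathbf{p}|$ vanishes at threshold.

```python
import sympy as sp

# Toy model: T = K/(1 - i*kappa*q*K) with real K is exactly unitary,
# Im T = kappa*q*|T|^2. Expanding in q, the imaginary part of the O(q^3)
# term is fixed to kappa*t1^2, which is the structure of eq. (per_uni).
q = sp.symbols('q', positive=True)
kappa, t1, t2, t3 = sp.symbols('kappa t1 t2 t3', real=True)

K = t1*q + t2*q**2 + t3*q**3               # real "K-matrix" amplitude
T = K / (1 - sp.I*kappa*q*K)               # exactly unitary resummation

T3 = sp.expand(sp.series(T, q, 0, 4).removeO()).coeff(q, 3)
print(sp.simplify(sp.im(T3) - kappa*t1**2) == 0)    # True
```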
\begin{figure}[H]
\centerline{\epsfig{file=diag_list.eps,width=.65\textwidth,angle=0}}
\vspace{0.2cm}
\caption[pilf]{\protect \small Set of diagrams for $\pi N$ scattering up-to-and-including ${\cal O}(q^3)$.
The ${\cal O}(q)$ diagrams are the first two in the first line, from left to right.
The ${\cal O}(q^2)$ contributions
correspond to the third one still in the first line, where the $d_i=2$, $n_2=2$ vertex is indicated
with a square. The rest of the diagrams from the second line until the bottom of the figure
are ${\cal O}(q^3)$. The $d_i=3$, $n_i=2$ vertices are indicated with a diamond. The one-loop diagrams
only have lowest order pion-nucleon vertices.
\label{fig:diag_list}}
\end{figure}
Our expressions fulfill eq.~\eqref{per_uni}, and those of ref.~\cite{beche2} also do once
the sign in front of $I_B^{(2)}$ is reversed for all its contributions.\footnote{The authors of ref.~\cite{beche2} state on page 30 that the calculated scattering amplitude obeys perturbative unitarity.
It seems then that the sign difference referred to above corresponds to a typo in \cite{beche2}.} Technically we follow the
general procedure of ref.~\cite{becher} for calculating within IR and we do not give any expression
for the different integrals calculated here because they were already given in refs.~\cite{kubis_vfm,beche2,becher}.
Now we proceed to compare our perturbative calculation with the experimental phase shifts for
the low-energy data on the $\pi N$ $S$- and $P$-waves (which are the relevant partial waves for such energies.)
Since our solution is perturbative one should evaluate
the phase shifts in a chiral expansion too. From the relation between $S_{IJ\ell}$ and $T_{IJ\ell}$, eq.~\eqref{s.def}, one has
\begin{align}
T_{IJ\ell}&=\frac{8\pi\sqrt{s}}{|\mathbf{p}|}\sin \delta_{IJ\ell} \,e^{i\delta_{IJ\ell}}~,\nonumber\\
\cos\delta_{IJ\ell} \sin\delta_{IJ\ell}&=\frac{|\mathbf{p}|}{8\pi\sqrt{s}}\hbox{Re}T_{IJ\ell}~.
\end{align}
This implies that $\delta$ starts at ${\cal O}(q^2)$, so that up to ${\cal O}(q^4)$ one can write
\begin{align}
\delta_{IJ\ell}=\frac{|\mathbf{p}|}{8\pi\sqrt{s}}\hbox{Re}T_{IJ\ell}~,
\label{delta.pert}
\end{align}
with $T_{IJ\ell}$ evaluated in the IR CHPT series (in our present case up to ${\cal O}(q^3)$.)
\begin{figure}[ht]
\psfrag{ss}{{\small $\sqrt{s}$ (GeV)}}
\psfrag{S11per}{$S_{11}$}
\psfrag{S31per}{$S_{31}$}
\psfrag{P11per}{$P_{11}$}
\psfrag{P13per}{$P_{13}$}
\psfrag{P31per}{$P_{31}$}
\psfrag{P33per}{$P_{33}$}
\centerline{\epsfig{file=IR.pert.ka85.ps,width=.7\textwidth,angle=-90}}
\vspace{0.2cm}
\caption[pilf]{\protect \small (Color online.) Fits to the KA85 pion-nucleon phase shifts \cite{ka84} as a
function of $\sqrt{s}$ (in GeV) for $\sqrt{s}_{max}=1.13~$GeV in IR CHPT at ${\cal O}(q^3)$. The KA85-1 fit corresponds to the
solid curves and the KA85-2 fit to the dashed ones. Data points: circles are KA85 and squares WI08 data.
\label{fig:res.ir.pert.ka85}}
\end{figure}
We now consider the reproduction of the $\pi N$ phase shifts of the partial wave analyses of the Karlsruhe (KA85) group \cite{ka84} and the current one of
the GWU (WI08) group \cite{wi08}.
The fits are done with the full IR CHPT calculation to ${\cal O}(q^3)$. Due to the absence of errors in these analyses \cite{ka84,wi08} there is some
ambiguity in the definition of the $\chi^2$. Here we follow a strategy similar to that of ref.~\cite{pin} and assign to every point an error defined as the sum in quadrature of a systematic and a relative error,
\begin{align}
\hbox{err}(\delta)=\sqrt{e_s^2+e_r^2 \delta^2}~,
\label{err.def}
\end{align}
where $e_s$ is the systematic error and $e_r$ the relative one. In ref.~\cite{fettes3} a relative error of $3\%$ was taken while in ref.~\cite{pin} a 5\% error was considered. In the following we take for $e_s$ just 0.1 degrees and $e_r=2\%$. Regarding these values for the errors notice that isospin breaking corrections in $\pi N$ scattering are estimated to be rather small (ref.~\cite{fettes_com}
estimates for $S$-waves an isospin breaking correction $\lesssim 1\%$.) We then consider the larger $2\%$ value as a safer estimate for isospin breaking effects not taken into account in our isospin symmetric study. Notice also that the ${\cal O}(q^4)$ contributions are expected to be suppressed compared with the leading term by a relative
factor $\sim (M_\pi/\Lambda)^3\sim (0.14/0.5)^3\sim 0.02$. Although small, a finite value for $e_s$ helps to stabilize fits. Otherwise,
with $e_s=0$, extra weight is given to the small energy region close to threshold, where the phase shifts are smaller and absolute errors decrease. Tiny differences between the calculation and the input points then become exceedingly relevant. We take $e_s=0.1$ degrees since it is much smaller than typical values of the phase shifts and is also the typical size for the difference between the phase shifts of refs.~\cite{ka84,wi08} in the low-energy region for the $P_{11}$ partial wave (compare the circles \cite{ka84} and the squares \cite{wi08} in fig.~\ref{fig:res.ir.pert.ka85}.) We have also convinced ourselves that changes in these values for $e_s$ and $e_r$ do not affect our conclusions.
The $\chi^2$ function to be minimized is defined in a standard way as
\begin{align}
\chi^2&=\sum_{i}\frac{(\delta_i-\delta_{th,i})^2}{\hbox{err}(\delta_i)^2}~,
\label{chi2.def}
\end{align}
with $\delta_{th}$ the phase shift calculated theoretically. For the minimization process we employ the program MINUIT \cite{cern}.
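A minimal sketch of the error assignment of eq.~\eqref{err.def} and the $\chi^2$ of eq.~\eqref{chi2.def} follows, with $e_s=0.1$ degrees and $e_r=2\%$ as adopted in the text; the "data" and "theory" phase shifts are made-up placeholders (the actual fits use the KA85/WI08 points and MINUIT).

```python
import math

# Point-by-point error err(delta) = sqrt(e_s^2 + (e_r*delta)^2) and the
# standard chi^2 built from it; inputs below are placeholders in degrees.
e_s, e_r = 0.1, 0.02

def err(delta):
    return math.sqrt(e_s**2 + (e_r * delta)**2)

def chi2(data, theory):
    return sum((d - th)**2 / err(d)**2 for d, th in zip(data, theory))

data   = [2.0, 5.5, 9.1]      # phase shifts in degrees (placeholder)
theory = [1.9, 5.6, 9.4]      # degrees (placeholder)
print(round(chi2(data, theory), 3))   # 3.402
```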
\begin{table}[ht]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|r|}
\hline
{\small LEC} & KA85-1 & KA85-2 & HBCHPT & HBCHPT & HBCHPT & RS \\
& & & ${\cal O}(q^3)$ \cite{fettes3} & Disp. \cite{buttiker} & ${\cal O}(q^3)$ \cite{aspects} & \cite{aspects} \\
\hline
$c_1$ & $-0.71\pm 0.49$ & $-0.79\pm 0.51$ & $(-1.71,-1.07)$ &$-0.81\pm 0.12$& $-1.02\pm 0.06$ & \\
$c_2$ & $ 4.32 \pm 0.27$& $3.49\pm 0.25$ & $(3.0,3.5)$ &$8.43\pm 56.9$ & $3.32\pm 0.03$ &3.9 \\
$c_3$ & $-6.53 \pm 0.33$& $-5.40\pm 0.13$ & $(-6.3,-5.8)$ &$-4.70\pm 1.16$& $-5.57\pm 0.05$ &$-5.3$ \\
$c_4$ & $3.87\pm 0.15$ & $3.32\pm 0.13$ & $(3.4,3.6)$ &$3.40\pm 0.04 $& &3.7 \\
\hline
$d_1+d_2$ & $2.48\pm 0.59$ & $0.94\pm 0.56$ & $(3.2,4.1)$ && & \\
$d_3$ & $-2.68 \pm 1.02$ & $-1.10\pm 1.16$ & $(-4.3,-2.6)$ && & \\
$d_5$ & $2.69 \pm 2.20$ & $1.86\pm 2.28$ & $(-1.1,0.4)$ && & \\
$d_{14}-d_{15}$ & $-1.71\pm 0.73$ & $1.03\pm 0.71$& $(-5.1,-4.3)$ && & \\
$d_{18}$ & $-0.26\pm 0.40$ & $-0.07\pm0.44$& $(-1.6,-0.5)$ && & \\
\hline
\end{tabular}
{\caption[pilf]{\protect \small Values of the low-energy constants for the KA85-1 and KA85-2 fits (second and third columns). The $c_i$ are given in
GeV$^{-1}$ and the $d_i$ (or their combinations) in GeV$^{-2}$. The renormalization scale for $d_i(\lambda)$ is $\lambda=1$~GeV. The interval of values obtained in \cite{fettes3} by fitting low-energy $\pi N$ scattering data with HBCHPT at ${\cal O}(q^3)$ is given in the fourth column. Other determinations are given in the fifth \cite{buttiker} and sixth \cite{aspects} columns. Resonance saturation estimates are collected in the last column \cite{aspects}.
\label{table.cs.ds.pert.ka85}}}
\end{center}
\end{table}
First, we discuss the reproduction of the KA85 data \cite{ka84} and later the WI08 \cite{wi08} ones. As a first strategy, we fit directly these data from threshold up to an upper value denoted by $\sqrt{s}_{max}$, and consider several values for $\sqrt{s}_{max}$. A data point is included every 4~MeV in $\sqrt{s}$. One observes that the $\chi^2$ per degree of freedom ($\chi^2_{d.o.f.}$) is below 1 for $\sqrt{s}_{max}\lesssim 1.13$~GeV, and then rises fast with energy so that for $\sqrt{s}_{max}=1.14$~GeV the $\chi^2_{d.o.f.}$ is 2.1 and for $\sqrt{s}_{max}=1.15~$GeV it becomes 3.6. In fig.~\ref{fig:res.ir.pert.ka85} we show by the solid line the result of the fit for $\sqrt{s}_{max}=1.13~$GeV.
At the level of the resulting curves the differences are small when varying $\sqrt{s}_{max}$ within the range indicated above. A good reproduction of the data is achieved up to around $\sqrt{s}\lesssim 1.14$~GeV, a similar range of energies to that
obtained in the ${\cal O}(q^3)$ HBCHPT fits of Fettes and Mei{\ss}ner \cite{fettes3}. From fig.~\ref{fig:res.ir.pert.ka85} one can readily see the origin of the rise in the $\chi^2$ with increasing $\sqrt{s}_{max}$. It stems from the last points of the partial waves $P_{33}$, $P_{31}$ and $P_{11}$, from which the resulting curves depart increasingly as the energy grows. The fast rise of the $P_{33}$ phase shifts is due to the $\Delta(1232)$ resonance. Though the tail of this resonance is mimicked in CHPT by the LECs, its energy dependence is too steep to be completely accounted for at ${\cal O}(q^3)$ due to the closeness of the $\Delta(1232)$ to the $\pi N$ threshold. Indeed, this deficiency already occurred in the ${\cal O}(q^3)$ HBCHPT calculation of ref.~\cite{fettes3}. However, at ${\cal O}(q^4)$ the fit to data improves because of the appearance of new higher order LECs \cite{fettes4}.
The resulting values for the CHPT LECs are shown in the second column of table~\ref{table.cs.ds.pert.ka85}, denoted by KA85-1, in units
of GeV$^{-1}$ and GeV$^{-2}$ for the $c_i$ and $d_i$, respectively.
Note that at ${\cal O}(q^3)$ only the combinations of counterterms $d_1+d_2$, $d_3$, $d_5$, $d_{14}-d_{15}$ and $d_{18}$ appear in $\pi N$ scattering. The first four combinations were already explicitly shown in the expression for ${\cal L}_{\pi N}^{(3)}$, eq.~\eqref{lagN}. The counterterm $d_{16}$ does not appear because it is re-absorbed in the physical value of the pion-nucleon axial-vector coupling $g_A$, once the lowest order $g$
constant is fixed in terms of the former \cite{beche2}. Under variations of $\sqrt{s}_{max}$ most of the counterterms present a rather stable behavior, with the ${\cal O}(q^3)$ ones being the most sensitive. The change in the LECs when varying $\sqrt{s}_{max}$ between 1.12 to 1.15~GeV is
a source of uncertainty that is added in quadrature with the statistical error from the fit with $\sqrt{s}_{max}=1.13$~GeV, which has a $\chi^2_{d.o.f.}$ of 0.9. The central values shown correspond to the same fit too. We also show in the table the values obtained from other approaches at ${\cal O}(q^3)$ \cite{aspects,mojzis,fettes3,buttiker}, including the
${\cal O}(q^3)$ HBCHPT fit to $\pi N$ data \cite{fettes3}, the dispersive analysis within the Mandelstam triangle of ref.~\cite{buttiker} and the results at ${\cal O}(q^3)$ from ref.~\cite{aspects}, that also includes an estimation of the ${\cal O}(q^2)$ LECs from resonance saturation (RS). Within uncertainties, our values for $c_1$, $c_3$ and $c_4$ are compatible with these other determinations. Instead, $c_2$ is somewhat
larger, which is one of the main motivations for considering other fits to $\pi N$ scattering following the so-called strategy 2, as explained below. Our values are also
compatible with those determined from the $\pi N$ parameters up to ${\cal O}(q^4)$ in ref.~\cite{akaki_gasser} that gives the intervals
$c_1=(-1.2, -0.9)$, $c_2=(2.6, 4.0)$ and $c_3=(-6.1, -4.4)$. The threshold parameters taken in this analysis are those calculated in
ref.~\cite{ka84}.
Regarding the ${\cal O}(q^3)$ counterterms the comparison with HBCHPT is not so clear due to the large uncertainties both from our side as well as from \cite{fettes3}. As discussed in more detail below, the ${\cal O}(q^3)$ contribution is typically the smallest between the different orders studied so that it is harder to pin down precise values for these counterterms. Indeed, we observe from the second column in table \ref{table.cs.ds.pert.ka85} that $d_3$, $d_5$ and $d_{14}-d_{15}$ have large errors, much larger than those of the ${\cal O}(q^2)$ counterterms (although the error estimated for $c_1$ is also large because
the fits are not very sensitive to this counterterm, which is multiplied by the small $M_\pi^2$ and carries no energy dependence.) Our values for the LECs $d_i$, again within the large uncertainties, are compatible
with those of ref.~\cite{fettes3}. Only $d_{14}-d_{15}$ is larger in our case, out of the range given in
\cite{fettes3} by around a factor 2.
The threshold parameters for the fit KA85-1 are collected in the second column of table \ref{table.ir.pert.as.ka85}. We have evaluated the different scattering lengths and volumes by performing an effective range expansion (ERE) fit to our results in the low-energy region (namely, for $|\mathbf{p}|<M_\pi^2(1-M_\pi^2/m^2)$, which sets the range of the ERE.)\footnote{Numerical problems arising for $|\mathbf{p}|\to 0$ prevent a direct calculation of the threshold parameters as $\lim_{|\mathbf{p}|\to 0} |\mathbf{p}|\,\hbox{Re}\,T/(8\pi\sqrt{s}\,|\mathbf{p}|^{1+2\ell})$.} The error given to our threshold parameters is
just statistical. It is so small because the values of the scattering lengths and volumes are rather stable under changes of $\sqrt{s}_{max}$ and LECs within their uncertainties (taking into account the correlation among them.) If treated in an uncorrelated way the error would be much larger. We also vary the numbers of terms in the ERE expansion from 3 to 5 and the slight variation in the resulting scattering lengths/volumes is also taken into account in the errors given.
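The ERE extraction of a threshold parameter can be sketched as follows for an $S$-wave; the scattering length and effective range used to generate the synthetic phase shifts are illustrative values, not our fitted $\pi N$ ones.

```python
import numpy as np

# ERE sketch: recover an S-wave scattering length from synthetic low-energy
# phase shifts via p*cot(delta) = 1/a + (r/2)*p^2 + O(p^4).
a_true, r_true = -0.10, 1.5                      # illustrative, fixed units

p = np.linspace(0.02, 0.12, 12)                  # low-energy momenta
delta = np.arctan(p / (1.0/a_true + 0.5*r_true*p**2))   # synthetic "data"

# Fit p*cot(delta) as a polynomial in p^2; the constant term is 1/a.
coeffs = np.polyfit(p**2, p / np.tan(delta), deg=2)
a_fit = 1.0 / coeffs[-1]
print(abs(a_fit - a_true) < 1e-6)                # True
```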
In the last two columns of table \ref{table.ir.pert.as.ka85}, we give the values from the partial wave analyses of refs.~\cite{ka84,wi08}. Notice that the differences between the central values from the latter two references are larger than
one standard deviation, except for the $P_{33}$ case.
The differences between the $S_{31}$ scattering lengths and the $P_{13}$ scattering volumes are especially large. Given this situation we consider that our calculated scattering lengths and volumes are consistent with the values obtained in the KA85 and WI08 partial wave analyses, except for the $P_{33}$ one, for which our result is significantly larger. It is also too large compared with the values obtained in the ${\cal O}(q^3)$ HBCHPT fits to phase shifts of ref.~\cite{fettes3}.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|r|r|r|r|r|}
\hline
{\small Partial} & KA85-1 & KA85-2 & KA85 & WI08 \\
{\small Wave} & & & & \\
\hline
$a_{S_{31}}$ & $ -0.100\pm 0.001$ & $-0.103\pm0.001$ & $-0.100\pm 0.004$ & $-0.084$\\
$a_{S_{11}}$ & $ 0.171\pm 0.001$ & $0.172\pm 0.002$ &$0.175\pm 0.003$ & $0.171$\\
$a_{0+}^+$ & $-0.010\pm 0.001$ & $-0.011\pm 0.001$ &$-0.008^a$ & $-0.0010\pm 0.0012$ \\
$a_{0+}^-$ & $0.090\pm 0.001$ & $0.092\pm 0.001$ & $0.092^a$ & $0.0883\pm 0.0005$ \\
$a_{P_{31}}$ & $-0.052\pm 0.001$ & $-0.051\pm 0.001$ & $-0.044\pm 0.002$ & $-0.038$ \\
$a_{P_{11}}$ & $-0.078 \pm 0.001$ & $-0.088\pm 0.001$ & $-0.078\pm 0.002$ & $-0.058$ \\
$a_{P_{33}}$ & $0.251 \pm 0.002$ & $0.214 \pm 0.002$ & $0.214\pm 0.002$ & $0.194$ \\
$a_{P_{13}}$ & $-0.034\pm 0.001$ & $-0.035\pm 0.001$ &$-0.030\pm 0.002$ & $-0.023$ \\
\hline
\end{tabular}
{\caption[pilf]{\protect \small $S$-wave scattering lengths and $P$-wave scattering volumes in units of $M_\pi^{-1}$ and $M_\pi^{-3}$, respectively. Our results for the fits to the KA85-1 and KA85-2 are given in the second and third columns, respectively. The fourth column corresponds to the values of the
KA85 analysis \cite{ka84}. The values for WI08 are extracted from ref.~\cite{wi08} and the errors, when given, from ref.~\cite{fa02}.\\
$^a$ These numbers are given without errors because no errors are provided in ref.~\cite{ka84}. They are deduced from the KA85 ones for $a_{S_{31}}$ and $a_{S_{11}}$.
\\
\label{table.ir.pert.as.ka85}}}
\end{center}
\end{table}
Due to the large values for $c_2$ and $a_{P_{33}}$ we consider that the fit KA85-1 is not completely satisfactory and try a second strategy (KA85-2). As commented above, the rapid increase of the phase shifts due to the tail of the $\Delta(1232)$ is not well reproduced at ${\cal O}(q^3)$. As a result, instead of fitting the $P_{33}$ phase shifts as a function of energy we now fit the function $\tan \delta_{P_{33}}/|\mathbf{p}|^3$ for three points with energy less than 1.09~GeV, where $\delta_{P_{33}}$ is the phase shift of the $P_{33}$ partial wave.
The form of this function is, of course, dictated by the ERE and at threshold it
directly gives the corresponding scattering volume. We take a 2\% error for these points because, within errors, this is the range of values spanned in table \ref{table.ir.pert.as.ka85} by the KA85 and WI08 results for $a_{P_{33}}$. A relative error of 2\% was also taken for $e_r$ in eq.~\eqref{err.def}. The resulting values for the LECs are given in the third column of Table~\ref{table.cs.ds.pert.ka85} and the curves for $\sqrt{s}_{max}=1.13$~GeV are shown in
fig.~\ref{fig:res.ir.pert.ka85} by the dashed lines, that have a $\chi^2_{d.o.f.}=0.86$. We observe that these curves are quite similar to the ones previously obtained in KA85-1. Nevertheless, for the $P_{11}$ partial wave the description is slightly worse above 1.12~GeV and it is the main contribution to the final $\chi^2$. For the $P_{33}$ phase shifts one also observes a clear difference between the two curves as the dashed line runs lower than the solid line. The former reproduces the standard values for the $P_{33}$ scattering volume, see column three of Table~\ref{table.ir.pert.as.ka85}, while for the latter it is larger. This is another confirmation that the description of the rapid rise of the $P_{33}$ phase shifts at ${\cal O}(q^3)$ enforces the fit to enlarge the value of the resulting scattering volume.
It is remarkable that now the value of the ${\cal O}(q^2)$ LEC $c_2$ is smaller and perfectly compatible with the interval of values of \cite{fettes3}. It is also interesting to note that $c_3$ is smaller too, which is a welcome feature especially for two- and few-nucleon systems, which are rather sensitive to the large subleading two-pion-exchange $NN$ potential generated by the $c_1$, $c_3$ and $c_4$ terms \cite{kaiser_peri}. See refs.~\cite{epe1,epe2,fews} for a thorough discussion of this issue for two- and few-nucleon systems. Related to this point, $c_3$ and $c_4$ have been determined from a partial wave analysis of the $pp$ and $np$ scattering data
of ref.~\cite{rent_cs} with the results
\begin{align}
c_3&=-4.78\pm 0.10~\hbox{GeV}^{-1}~,\nonumber\\
c_4&=+3.96\pm 0.22~\hbox{GeV}^{-1}~.
\end{align}
The systematic errors are not properly accounted for yet in these determinations due to the dependence on the matching point that distinguishes between the long-range part of the $NN$ potential (parameterized from CHPT) and the short-range one (with a purely phenomenological parameterization.) Namely, the same authors in ref.~\cite{remt_prl} considered this issue and when varying the matching point from 1.8~fm to 1.4~fm the LECs changed significantly: $c_3=-5.08(28)\to -4.99(21)$ and $c_4=4.70(70)\to 5.62(69)$~GeV$^{-1}$.
With respect to the ${\cal O}(q^3)$ counterterms we see that the central values have shifted considerably compared with KA85-1. This clearly indicates that these LECs cannot be properly pinned down by fitting $\pi N$ scattering data. Within uncertainties $d_3$, $d_5$ and $d_{18}$ overlap at the level of one sigma. The LECs $d_{1}+d_2$ and $d_{14}-d_{15}$ require a variation of two sigmas. In view of this situation we consider that one should be conservative and give ranges of values for these latter combinations of LECs in order to make them compatible
\begin{align}
d_1+d_2&=+0.4\,\ldots \,+3~\hbox{GeV}^{-2}~,\nonumber\\
d_{14}-d_{15}&=-2.4\,\ldots\, +1.75~\hbox{GeV}^{-2}~.
\label{dbad.ka85}
\end{align}
These values correspond to the minimum and maximum of those shown in the second and third columns of table~\ref{table.cs.ds.pert.ka85} allowing a variation of one sigma.
The scattering lengths and volumes for KA85-2 are collected in the third column of Table~\ref{table.ir.pert.as.ka85}. They are calculated
from our results in the same way as explained above for the KA85-1 fit.
It is remarkable that now the value for the $P_{33}$ scattering volume is perfectly compatible with the determinations from KA85 and WI08.
We see a good agreement between our ${\cal O}(q^3)$ IR CHPT results and the scattering lengths/volumes for KA85. Only the $P_{11}$ scattering volume is slightly different, though the difference between the KA85 and WI08 results is significantly large for this case too.
One also observes differences beyond the error estimated in KA85 for the $P_{13}$ scattering volume between the KA85 and WI08 values. Ours is closer to the KA85 one.
It is also worth emphasizing that our fits to the phase shifts of the KA85 analysis (KA85-1 and KA85-2), as shown in fig.~\ref{fig:res.ir.pert.ka85}, offer a good reproduction of the data and the worsening at higher energies sets in smoothly, as in ${\cal O}(q^3)$ HBCHPT \cite{fettes3}. This is certainly an improvement compared with the previous $\pi N$ study in IR CHPT to ${\cal O}(q^3)$ of ref.~\cite{elli2}. In this latter reference, data could only be fitted up to around 1.12~GeV and large discrepancies above that energy, rapidly increasing with energy, emerged in the $S_{31}$, $P_{13}$ and $P_{11}$ partial waves.
\begin{figure}[ht]
\psfrag{ss}{{\small $\sqrt{s}$ (GeV)}}
\psfrag{S11per}{$S_{11}$}
\psfrag{S31per}{$S_{31}$}
\psfrag{P11per}{$P_{11}$}
\psfrag{P13per}{$P_{13}$}
\psfrag{P31per}{$P_{31}$}
\psfrag{P33per}{$P_{33}$}
\centerline{\epsfig{file=IR.pert.wi08.ps,width=.7\textwidth,angle=-90}}
\vspace{0.2cm}
\caption[pilf]{\protect \small (Color online.) Fits to the WI08 pion-nucleon phase shifts \cite{wi08} as a
function of $\sqrt{s}$ (in GeV) for $\sqrt{s}_{max}=1.13~$GeV in IR CHPT at ${\cal O}(q^3)$. The WI08-1 fit corresponds to the
solid curves and the WI08-2 fit to the dashed ones. Data points: circles are KA85 and squares WI08 data.
\label{fig:res.ir.pert.wi08}}
\end{figure}
We proceed along similar lines and perform fits of type 1 and 2 to the current solution of the GWU group \cite{wi08} (WI08). These fits are denoted by WI08-1 and WI08-2, in that order. The resulting curves for $\sqrt{s}_{max}=1.13~$GeV are shown by the solid and dashed lines in fig.~\ref{fig:res.ir.pert.wi08}, respectively. One observes very similar curves to the KA85-1 and KA85-2 fits except for the $P_{11}$ phase shifts. Here, the agreement with the WI08 data is considerably worse. This has a clear translation into the values of the $\chi^2$ for this partial wave, which increases almost by a factor 3, from 20 (KA85-2) to 55 (WI08-2) (the number of fitted points is 12.) It is clear from fig.~\ref{fig:res.ir.pert.wi08} that IR CHPT at ${\cal O}(q^3)$ does not compare well with the $P_{11}$ WI08 phase shifts even at very low energies, $\sqrt{s}<1.11$~GeV. For the KA85 data the situation is much better, compare with fig.~\ref{fig:res.ir.pert.ka85}. Indeed, previous solutions of the GWU (and prior VPI) group had a behavior similar to that of KA85 for the $P_{11}$ phase-shifts, e.g. the solution SM01 employed in the analysis of ref.~\cite{elli2} also using IR CHPT at ${\cal O}(q^3)$. In view of the difficulties of our study based on
IR CHPT at ${\cal O}(q^3)$ for reproducing the $P_{11}$ phase shifts of WI08 at low energies, we consider it advisable
to revise the current solution WI08 of the GWU group and
the way the $\eta N$ data affect the low-energy $P_{11}$ phase shifts in the coupled-channel approach followed \cite{briscoe}.
The other distinctive features when comparing strategies 1 and 2 for the fits to the WI08 data are similar to those already discussed for the KA85 fits. In this way, for WI08-1 the values of $c_2$ and $c_3$ are larger in modulus by around 1~GeV$^{-1}$ than for WI08-2.
Related to this, the $P_{33}$ scattering volume is also significantly larger for WI08-1 than for WI08-2. The values of the fitted LECs for WI08-1 and WI08-2 are collected in the second and third columns of Table~\ref{table.ir.pert.cs.ds.wi08}.
One observes that the resulting LECs at ${\cal O}(q^2)$ are quite similar between KA85-1, WI08-1, on the one hand, and KA85-2, WI08-2, on the other, so that within uncertainties they are compatible in either of the two strategies.
In the third column of Table~\ref{table.ir.pert.cs.ds.wi08} we present the average of the LECs from our fits in Tables~\ref{table.cs.ds.pert.ka85} and \ref{table.ir.pert.cs.ds.wi08}.
The error given for every LEC is the sum in quadrature of the largest of the statistical errors shown in the previous tables and the one resulting from the dispersion in the central values. This is a conservative procedure which recognizes that both strategies are acceptable for studying low-energy $\pi N$ scattering and takes into account the dispersion in the LECs that results from
changes in the data set.
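The averaging prescription just described can be sketched numerically as follows; the input numbers below are purely illustrative, not the actual fit results of the tables.

```python
import math

def average_lec(values, errors):
    """Average several determinations of a LEC.

    The quoted error combines in quadrature the largest statistical
    error with the dispersion of the central values, as described in
    the text (a conservative, non-weighted average)."""
    n = len(values)
    mean = sum(values) / n
    dispersion = math.sqrt(sum((v - mean) ** 2 for v in values) / n)
    err = math.sqrt(max(errors) ** 2 + dispersion ** 2)
    return mean, err

# illustrative numbers only (not the fitted LECs of the tables)
mean, err = average_lec([1.0, 1.2, 0.8, 1.4], [0.10, 0.20, 0.15, 0.12])
print(mean, err)  # 1.1, 0.3
```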
Within errors, the values of the LECs in the last column of Table~\ref{table.ir.pert.cs.ds.wi08} are compatible
with those from HBCHPT at ${\cal O}(q^3)$ ($d_{14}-d_{15}$ is the only counterterm that differs by more than one standard deviation
from the interval of values of ref.~\cite{fettes3}.)
Regarding the threshold parameters we list in the second and third columns of Table~\ref{table.ir.pert.as.wi08} the values for the $S$-wave scattering lengths and $P$-wave scattering volumes corresponding to the fits WI08-1 and WI08-2 with $\sqrt{s}_{max}=1.13$~GeV (the same fits shown in fig.~\ref{fig:res.ir.pert.wi08}.) The procedure for their determination is the same as the one already discussed for KA85-1. The largest changes compared with the values of the KA85-1 and KA85-2 fits, respectively, occur for the $a_{S_{31}}$ and $a_{P_{11}}$ scattering length and volume, in order. The latter also shows the largest difference between the results of the fits following strategy 1 and 2 (a 12\% relative difference for the KA85 case and a 9\% for the WI08 one.) Nonetheless, neither of our results for $a_{P_{11}}$, including strategy 1 and 2 KA85 and WI08 fits, is compatible with the value of WI08 \cite{wi08}, shown in the last column of Table~\ref{table.ir.pert.as.ka85}. The largest difference occurs for the value of KA85-2, which is 50\% smaller than the one from WI08.
Coming back to the WI08 fits, we note that the isoscalar $S$-wave scattering length $a_{0+}^+$ is now a small positive number compatible with zero, while the $P_{11}$ scattering volume has decreased, and is compatible with the KA85 result within one sigma. We notice that the tiny errors estimated for the threshold parameters resulting from our fits in Tables~\ref{table.ir.pert.as.ka85} and \ref{table.ir.pert.as.wi08} are just statistical and are determined in the same way as explained above for the KA85-1 fit. Of course, systematic errors due to higher orders in the chiral expansion and the different data sets employed induce larger uncertainties than the small errors shown. In this sense, the difference between the values obtained for each partial wave in these columns provides a better estimation of uncertainties. We then calculate the average\footnote{Not the weighted average. The given errors are calculated by adding in quadrature for each LEC the largest of the errors in Tables~\ref{table.cs.ds.pert.ka85} and \ref{table.ir.pert.cs.ds.wi08} and the one resulting from the average of values.} of the four values for each scattering length/volume shown altogether in Tables~\ref{table.ir.pert.as.ka85} and \ref{table.ir.pert.as.wi08}. This is given in the last column of Table~\ref{table.ir.pert.as.wi08}. We also see that our averaged values for the $a_{0+}^+$ and $a_{0+}^-$ scattering lengths are compatible with the results obtained in ref.~\cite{raha}, $a_{0+}^+=0.0015\pm 0.0022$ and $a_{0+}^-=0.0852\pm 0.0018$ $M_\pi^{-1}$, that takes into account isospin breaking corrections in the analysis of recent experimental results on pionic hydrogen and pionic deuterium data.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|r|r|r|r|}
\hline
LEC & WI08-1 & WI08-2 & Average \\
\hline
$c_1$ & $-0.27\pm 0.51$ & $-0.30\pm 0.48$ & $-0.52\pm 0.60$\\
$c_2$ & $4.28\pm 0.27$ & $3.55\pm 0.30$ & $3.91\pm 0.54$ \\
$c_3$ & $-6.76\pm 0.27$ & $-5.77\pm 0.29$ & $-6.12\pm 0.72$\\
$c_4$ & $4.08\pm 0.13$ & $3.60\pm 0.16$ & $3.72 \pm 0.37$ \\
$d_1+d_2$ & $2.53\pm 0.60$ & $1.16\pm 0.65$ & $1.78\pm 1.1$\\
$d_3$ & $-3.65\pm 1.01$ & $-2.32\pm 1.04$ & $-2.44\pm 1.6$\\
$d_5$ & $5.38\pm 2.40$ & $4.83\pm 2.18$ & $3.69\pm 2.93$\\
$d_{14}-d_{15}$ & $-1.17\pm 1.00$ & $1.27\pm1.11$ & $-0.145\pm 1.88$ \\
$d_{18}$ & $-0.86\pm 0.43$ & $-0.72\pm 0.40$ & $-0.48\pm 0.58$\\
\hline
\end{tabular}
{\caption[pilf]{\protect \small Fitted LECs in units of GeV$^{-1}$ ($c_i$) and GeV$^{-2}$ ($d_i$) for the fits WI08-1 and WI08-2 with $\sqrt{s}_{max}=1.13$~GeV. The last column
is the average of all the fits in Tables~\ref{table.cs.ds.pert.ka85} and \ref{table.ir.pert.cs.ds.wi08}.
\label{table.ir.pert.cs.ds.wi08}}}
\end{center}
\end{table}
\begin{table}[ht]
\begin{center}
\begin{tabular}{|r|r|r|r|}
\hline
Partial & WI08-1 & WI08-2 & Average \\
Wave & & & \\
\hline
$a_{S_{31}}$ & $ -0.081\pm 0.001$ & $-0.082 \pm 0.001$ & $-0.092\pm 0.012$ \\
$a_{S_{11}}$ & $ 0.165\pm 0.002$ & $0.167 \pm 0.002$ & $0.169\pm 0.004$ \\
$a_{0+}^+$ & $0.001\pm 0.001$ & $0.001 \pm 0.001$ & $-0.005\pm 0.007$ \\
$a_{0+}^-$ & $0.082\pm 0.001$ & $0.083 \pm 0.001$ & $0.087\pm 0.005$ \\
$a_{P_{31}}$ & $-0.048\pm 0.001$ & $-0.051 \pm 0.001$ & $-0.051\pm 0.002$ \\
$a_{P_{11}}$ & $-0.073\pm 0.001$ & $-0.080 \pm 0.001$ & $-0.080\pm 0.006$ \\
$a_{P_{33}}$ & $0.252\pm 0.002$ & $0.222 \pm 0.002$ & $0.232\pm 0.017$ \\
$a_{P_{13}}$ & $-0.032\pm 0.001 $ & $-0.035 \pm 0.001$ & $-0.034\pm 0.002$ \\
\hline
\end{tabular}
{\caption[pilf]{\protect \small $S$-wave scattering lengths and $P$-wave scattering volumes in units of $M_\pi^{-1}$ and $M_\pi^{-3}$, respectively,
for the fits WI08-1 and WI08-2 with $\sqrt{s}_{max}=1.13$~GeV. The last column corresponds to the averaged values of the threshold parameters
of all the fits in Tables~\ref{table.ir.pert.as.ka85} and \ref{table.ir.pert.as.wi08}.
\label{table.ir.pert.as.wi08}}}
\end{center}
\end{table}
Finally, we show in fig.~\ref{fig:ordenes} the different chiral order contributions to the total phase shifts (depicted by the solid lines) for the fit KA85-1 (shown in fig.~\ref{fig:res.ir.pert.ka85} by the solid lines.) The dotted lines correspond to the leading result, the dashed ones to NLO and the dash-dotted ones to N$^2$LO. A general trend observed is the partial cancellation between the ${\cal O}(q^2)$ and ${\cal O}(q^3)$ contributions. For the $P$-waves, the cancellation is almost exact at low energies while at higher energies the ${\cal O}(q^2)$ contribution is larger in modulus than the ${\cal O}(q^3)$ one (except for the $P_{31}$ partial wave where the cancellation is almost exact all over the energy range shown, so that the first order describes this partial wave well.) For the $S$-waves at low energies ($\sqrt{s}\lesssim 1.11$~GeV) the first-order contribution dominates, though the second order one tends to increase rapidly with energy. For these partial waves the second order contribution is much larger than the third order one and the partial cancellation between these orders is weak (even both orders add with the same sign for $S_{31}$ at the highest energies shown.) The smallness of the third order contribution for the $S$-waves, together with the fact that it is also clearly smaller than the second order one for most of the $P$-waves, explains the difficulty in pinning down precise values for the ${\cal O}(q^3)$ LECs (the $d_i$'s), as already indicated above.
The LEC $d_{18}$ is important as it is directly involved in the violation of the GT relation \cite{goldberger}.
Up to ${\cal O}(M_\pi^3)$ one has \cite{fettes3,beche2}
\begin{align}
g_{\pi N}&=\frac{g_A m}{F_\pi}\left(1-\frac{2 M_\pi^2 d_{18}}{g_A}\right)~.
\label{goldberger}
\end{align}
We quantify the deviation from the GT relation by
\begin{align}
\Delta_{GT}&=\frac{g_{\pi N}F_\pi}{g_A m}-1~.
\label{delta.def}
\end{align}
Inserting our averaged value of $d_{18}$ in the third column of Table~\ref{table.ir.pert.cs.ds.wi08} into eq.~\eqref{goldberger},
we then find
\begin{align}
\Delta_{GT}&=0.015\pm 0.018~,
\label{gt.per}
\end{align}
which is compatible with the values around 2--3\% that are nowadays preferred
from $\pi N$ and $NN$ partial wave analyses \cite{arndtcc,schroder,rentcc}.
In terms of the $\pi N$ coupling constant, from eq.~\eqref{goldberger} our value for $d_{18}$ translates in
\begin{align}
g_{\pi N}&=13.07\pm 0.23
\end{align}
or $f^2=\left(g_{\pi N} M_\pi/4m\right)^2/\pi=0.077\pm 0.003$. Within uncertainties our result at strict ${\cal O}(M_\pi^3)$
is compatible at the level of one sigma with the determinations of refs.~\cite{arndtcc,schroder,rentcc}.
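As a numerical cross-check, the figures quoted in eq.~\eqref{gt.per} and below follow directly from the averaged $d_{18}$ through eq.~\eqref{goldberger}. The sketch below uses standard physical inputs for $g_A$, $F_\pi$, $M_\pi$ and $m$, which are assumed here and may differ slightly in the last digits from the ones used in the fits.

```python
import math

gA, Fpi, Mpi, m = 1.267, 0.0924, 0.13957, 0.9383  # assumed physical inputs (GeV)
d18, err_d18 = -0.48, 0.58                        # averaged d_18, GeV^-2

# GT discrepancy at strict O(Mpi^3): Delta_GT = -2 Mpi^2 d18 / gA
dgt = -2 * Mpi**2 * d18 / gA
dgt_err = 2 * Mpi**2 * err_d18 / gA
print(round(dgt, 3), round(dgt_err, 3))  # ~0.015 +- 0.018

gpin = gA * m / Fpi * (1 + dgt)          # pion-nucleon coupling constant
f2 = (gpin * Mpi / (4 * m))**2 / math.pi
print(round(gpin, 2), round(f2, 3))      # close to the 13.07 and 0.077 quoted
```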
However, IR CHPT at ${\cal O}(q^3)$ gives rise to a caveat concerning the GT relation.
The point is that the full calculation at this order (IR CHPT contains higher orders due to the $1/m$ relativistic resummation)
produces a huge GT relation violation of about a 20\%, similarly as in ref.~\cite{elli2}. For the evaluation of the GT relation discrepancy
in our present calculations we study $\pi^-p\to \pi^- p$ scattering. We select this particular
process in the charge basis of states because the crossed $u$-channel process, $\pi^+p\to \pi^+p$,
is purely $I=3/2$ in the isospin limit and thus has no $u$-channel nucleon pole, which would require the quantum numbers of the nucleon.
Otherwise the $s$- and $u$-channel nucleon poles
overlap for some values of the scattering angle. When the $u$-channel nucleon pole is projected onto a partial wave it produces a cut
for $m^2-2M_\pi^2+M_\pi^4/m^2<s<m^2+2M_\pi^2$, with the branch points very close to the nucleon pole at $s=m^2$. As a result, there is no clean way to
calculate the residue at the $s$-channel nucleon pole unless the $u$-channel nucleon pole is removed, as is done by considering $\pi^-p\to \pi^-p$ scattering.
The latter is finally projected in the partial wave $P_{11}$, with the same quantum numbers as the nucleon.
The ratio of the residues
at the nucleon pole of the full ${\cal O}(q^3)$ IR CHPT partial wave and the direct ($s$-channel) Born term calculated with $g_A$, $M_\pi$ and $m$ at
their physical values, gives us directly the ratio between the squares of the full pion-nucleon coupling and the one from the GT relation.\footnote{Note that there is no crossed Born term for $\pi^- p\to \pi^- p$ and that
the LO Born term in terms of physical parameters satisfies the GT relation exactly.}
Numerically we find that the full calculation gives rise to a violation of the GT relation of around 20-25\%, while its strict ${\cal O}(M_\pi^3)$ restriction
is much smaller, eq.~\eqref{gt.per}. Related to this, the GT violation shows a significant renormalization-scale dependence.\footnote{Eq.~\eqref{gt.per} is renormalization scale independent because the beta function for $d_{18}$ is zero \cite{fettes3}.} In this way, for the fit KA85-1 (second column of Table~\ref{table.cs.ds.pert.ka85}) at $\lambda=1$~GeV one finds a 22\% violation of the GT relation, while for $\lambda=0.5$~GeV a 15\% violation results. On the other hand, ref.~\cite{gasser2} performed a relativistic calculation of $\Delta_{GT}$ directly in dimensional regularization within the $\overline{MS}-1$ renormalization scheme and obtained
a natural (much smaller) and renormalization scale independent loop contribution to $\Delta_{GT}$. It seems then that the problem that we find for the calculation of $\Delta_{GT}$ with IR, obtained earlier in ref.~\cite{elli2}, is related to the peculiar way the chiral counting is restored in the IR approach \cite{pinto,gorgorito}.
We tentatively conclude that a neat advance in the field would occur once a relativistic regularization method became available that preserves the chiral counting in the evaluation of loops while, at the least, avoiding any residual renormalization-scale dependence.
\begin{figure}[ht]
\psfrag{ss}{{\small $\sqrt{s}$ (GeV)}}
\psfrag{S11}{$S_{11}$}
\psfrag{S31}{$S_{31}$}
\psfrag{P11}{$P_{11}$}
\psfrag{P13}{$P_{13}$}
\psfrag{P31}{$P_{31}$}
\psfrag{P33}{$P_{33}$}
\centerline{\epsfig{file=ordenes.fit95.ps,width=.7\textwidth,angle=-90}}
\vspace{0.2cm}
\caption[pilf]{\protect \small (Color online.) Different chiral orders contributing to the phase shifts for the KA85-1 fit.
The (black) dotted, (green) dashed and (blue) dash-dotted are the first, second and third order, respectively. The (red) solid
line is the sum of all of them.
\label{fig:ordenes}}
\end{figure}
\section{Unitarized amplitudes and higher energies}
\label{sec4}
\def\arabic{section}.\arabic{equation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
In order to resum the right-hand cut or unitarity cut we consider the unitarization method of refs.~\cite{pin,plb}, to which we refer
for further details. Notice that this method not only provides a unitary $\pi N$ amplitude but also takes care of the
analyticity properties associated with the right-hand cut. In ref.~\cite{pin} this approach was used for unitarizing
the ${\cal O}(q^3)$ HBCHPT $\pi N$ partial waves from ref.~\cite{fettes3}. However, no explicit Lorentz-invariant one-loop
calculation for $\pi N$ scattering has been
unitarized in the literature yet. This is an interesting point since by taking explicitly into account the presence of the
unitarity cut the rest of the amplitude is expected to have a softer chiral expansion. According to ref.~\cite{plb} we
express ${\cal T}_{IJ\ell}$ as
\begin{align}
T_{IJ\ell}&=\frac{1}{{\cal T}_{IJ\ell} ^{-1}+g(s)}~,
\label{basic}
\end{align}
where the unitarity pion-nucleon loop function is given by
\begin{align}
g(s)&=\frac{1}{(4\pi)^2}\biggl\{
a_1+\log\frac{m^2}{\mu^2}-\frac{M_\pi^2-m^2+s}{2s}\log\frac{m^2}{M_\pi^2}
+\frac{|\mathbf{p}|}{\sqrt{s}}\biggl[\log(s-\Delta+2\sqrt{s}|\mathbf{p}|)\nonumber\\
&+\log(s+\Delta+2\sqrt{s}|\mathbf{p}|)
-\log(-s+\Delta+2\sqrt{s}|\mathbf{p}|)-\log(-s-\Delta+2\sqrt{s}|\mathbf{p}|)
\biggr]\biggr\}\,,
\label{g.def}
\end{align}
with $\Delta=M_{\pi}^2-m^2$. The interaction kernel ${\cal T}_{IJ\ell}$ has no right-hand cut and is determined by
matching order by order with the perturbative chiral expansion of $T_{IJ\ell}$ calculated in CHPT. In this way, with $g(s)=
{\cal O}(q)$, one has \cite{plb}
\begin{align}
T_{IJ\ell}^{(1)}+T_{IJ\ell}^{(2)}+T_{IJ\ell}^{(3)}={\cal T}_{IJ\ell}^{(1)}+{\cal T}_{IJ\ell}^{(2)}
+{\cal T}_{IJ\ell}^{(3)}
-g(s)\left(T_{IJ\ell}^{(1)}\right)^2~,
\end{align}
so that
\begin{align}
{\cal T}_{IJ\ell}^{(1)}&=T_{IJ\ell}^{(1)}~,\nonumber\\
{\cal T}_{IJ\ell}^{(2)}&=T_{IJ\ell}^{(2)}~,\nonumber\\
{\cal T}_{IJ\ell}^{(3)}&=T_{IJ\ell}^{(3)}+g(s)\left(T_{IJ\ell}^{(1)}\right)^2~,
\label{det}
\end{align}
and ${\cal T}_{IJ\ell}={\cal T}_{IJ\ell}^{(1)}+{\cal T}_{IJ\ell}^{(2)}+{\cal T}_{IJ\ell}^{(3)}$ is then replaced in eq.~\eqref{basic}.
Since the resulting partial wave is now unitary, we calculate the phase shifts directly from the relation
$T_{IJ\ell}= \frac{8\pi\sqrt{s}}{|\mathbf{p}|}e^{i\delta_{IJ\ell}}\sin \delta_{IJ\ell}$ that follows from
eqs.~\eqref{s.def} and \eqref{s.def.2}.
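The order-by-order matching of eq.~\eqref{det} can be checked with a simple numerical toy model (scalar amplitudes with arbitrary coefficients; this is only a consistency sketch, not the actual partial waves): counting $T^{(k)}_{IJ\ell}\sim \epsilon^k$ and $g\sim\epsilon$, the difference between eq.~\eqref{basic} and the perturbative sum $T^{(1)}+T^{(2)}+T^{(3)}$ must scale as $\epsilon^4$.

```python
def unitarized(eps, a=1.0, b=0.7, c=-0.4, d=0.9):
    # chiral orders of the perturbative amplitude and of g(s); the
    # coefficients a, b, c, d are arbitrary toy values
    T1, T2, T3, g = a*eps, b*eps**2, c*eps**3, d*eps
    K = T1 + T2 + (T3 + g*T1**2)   # interaction kernel of eq. (det)
    return 1.0/(1.0/K + g), T1 + T2 + T3

def mismatch(eps):
    T, pert = unitarized(eps)
    return T - pert

# the mismatch scales as eps^4: halving eps reduces it by a factor ~16
r = mismatch(5e-3) / mismatch(1e-2)
print(16*r)  # ~1
```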
The subtraction constant $a_1$ is determined by requiring that $g(s)$ vanishes at the nucleon mass $s=m^2$. In this way the $P_{11}$ partial-wave
has the nucleon pole at its right position, otherwise it would disappear. This is due to the fact that for this partial wave
${\cal T}_{\frac{1}{2}\frac{1}{2}1}^{-1}$ vanishes at $s=m^2$ so it is required that $g(m^2)=0$. Otherwise $T_{\frac{1}{2}\frac{1}{2}1}$,
eq.~\eqref{basic}, would be finite at $s=m^2$.
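A direct numerical implementation of eq.~\eqref{g.def} illustrates both the subtraction prescription $g(m^2)=0$ and the unitarity of eq.~\eqref{basic}; the mass values and the scale $\mu$ below are assumed, and the principal branches of the complex logarithm and square root are used.

```python
import cmath, math

m, Mpi, mu = 0.9383, 0.13957, 1.0   # GeV (assumed values)

def g(s, a1):
    s = complex(s)
    Delta = Mpi**2 - m**2
    lam = (s - m**2 - Mpi**2)**2 - 4*m**2*Mpi**2     # Kallen function
    p = cmath.sqrt(lam) / (2*cmath.sqrt(s))          # CM three-momentum
    rs = cmath.sqrt(s)
    br = (cmath.log(s - Delta + 2*rs*p) + cmath.log(s + Delta + 2*rs*p)
          - cmath.log(-s + Delta + 2*rs*p) - cmath.log(-s - Delta + 2*rs*p))
    return (a1 + math.log(m**2/mu**2)
            - (Mpi**2 - m**2 + s)/(2*s)*math.log(m**2/Mpi**2)
            + p/rs*br) / (4*math.pi)**2

# subtraction constant fixed so that g vanishes at the nucleon pole
a1 = -(4*math.pi)**2 * g(m**2, 0.0).real
print(abs(g(m**2, a1)))                              # ~0

# above threshold Im g = -|p|/(8 pi sqrt(s)), as required by unitarity
s = 1.2**2
p = math.sqrt((s - m**2 - Mpi**2)**2 - 4*m**2*Mpi**2) / (2*math.sqrt(s))
print(g(s, a1).imag, -p/(8*math.pi*math.sqrt(s)))
```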
Due to the closeness of the $\Delta(1232)$ resonance to the $\pi N$ threshold it is expedient to implement a method to take into account its presence in order to provide a higher energy description of $\pi N$ phase-shifts beyond the purely perturbative results discussed in section~\ref{sec3}.
As commented in the introduction we can add a CDD pole \cite{cdd} in the $P_{33}$ channel so as to reach the region of the $\Delta(1232)$
resonance. The addition of the CDD pole conserves the discontinuities of the partial wave amplitude across the cuts. A CDD pole corresponds to
a zero of the partial wave-amplitude along the real axis and hence to a pole in the inverse of the amplitude. We then modify eq.~\eqref{basic}
by including such a pole in $T_{\frac{3}{2}\frac{3}{2}1}^{-1}$,
\begin{align}
T_{\frac{3}{2}\frac{3}{2}1}=\Biggl({\cal T}_{\frac{3}{2}\frac{3}{2}1}^{-1}+\frac{\gamma}{s-s_P}+g(s)\Biggr)^{-1}~,
\label{uni.cdd}
\end{align}
where $\gamma$ and $s_P$ are the residue and pole position of the CDD pole, in order, so that two new free parameters
enter. The amplitude ${\cal T}_{IJ\ell}$ is determined as in eq.~\eqref{det}. We also distinguish here between the fits to the KA85 \cite{ka84} and WI08 \cite{wi08} phase-shifts. The fits are done up to $\sqrt{s}=\sqrt{s}_{max}=1.25$~GeV for all the partial waves. One cannot afford to go to higher energies because of an intrinsic limitation of IR CHPT. Additional unphysical cuts and poles are generated by the infinite order resummation of the sub-leading $1/m$ kinetic energy terms accomplished in IR \cite{pinto,gorgorito,bernard}. In our case the limiting circumstance is the appearance of a pole when the Mandelstam variable $u=0$.\footnote{Many of the tensor integrals involved in the one-loop calculations of $\pi N$ scattering develop such a pole. In particular, it arises in the simplest scalar two-point loop function $I(u)$, following the notation of ref.~\cite{becher}.} When projecting in the different partial waves this singularity gives rise to a strong branch point at $s=2(m^2+M_\pi^2)\simeq 1.34^2~$GeV$^2$, which indicates the onset of a non-physical right-hand cut that extends to infinity and that produces strong violation of unitarity. This translates into strong rises of the phase-shifts calculated employing eq.~\eqref{uni.cdd} for energies $\sqrt{s}\gtrsim 1.26$~GeV. This is why we have taken $\sqrt{s}_{max}=1.25$~GeV because for higher
energies these effects are clearly visible in the calculated phase-shifts. The $\chi^2$ to be minimized is the same as already used for the pure perturbative study, eq.~\eqref{chi2.def}, employing also the same definition for err$(\delta)$. The resulting fits are shown in fig.~\ref{fig:ir.uni}, where the solid lines correspond to the fit of the KA85 data and the dashed ones to WI08. One can see a rather good agreement with data in the whole energy range from threshold up to 1.25~GeV, including the reproduction of
the rise in the $P_{33}$ phase shifts associated with the $\Delta(1232)$ resonance.
The improvement is manifest in the $P_{11}$ partial wave, although some discrepancy with the WI08
data remains in the lower-energy region,
the agreement with the KA85 phase shifts being better.
Compared with the perturbative treatment of section~\ref{sec3} one observes a drastic increase in the range of energies for which a globally acceptable description of the data is achieved.
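Two quick numerical illustrations of the statements above. The first checks the onset of the unphysical cut generated by the IR resummation; the second shows, in a toy amplitude with hypothetical parameter values (only the masses are physical), that a CDD pole produces a zero of $T$ at $s_P$ while the amplitude still saturates the unitarity bound at the resonance position.

```python
import math

m, Mpi = 0.9383, 0.13957                      # GeV (assumed values)

# onset of the unphysical IR cut (u = 0): s = 2(m^2 + Mpi^2)
print(math.sqrt(2*(m**2 + Mpi**2)))           # ~1.342 GeV, just above 1.25 GeV

# toy CDD pole in the inverse amplitude: T = (1/K + gamma/(s-sP) - i*rho)^-1
# K, gamma, sP, rho are hypothetical toy values, not the fitted ones
K, gamma, sP, rho = 1.0, 0.5, 1.52**2, 1.0

def T(s):
    return 1.0/(1.0/K + gamma/(s - sP) - 1j*rho)

print(abs(T(sP + 1e-9)))                      # ~0: zero of T at the CDD pole
s_res = sP - gamma*K                          # Re T^-1 vanishes -> delta = 90 deg
print(abs(T(s_res)), 1.0/rho)                 # saturates the unitarity bound
```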
\begin{figure}[ht]
\psfrag{ss}{{\small $\sqrt{s}$ (GeV)}}
\psfrag{S11per}{$S_{11}$}
\psfrag{S31per}{$S_{31}$}
\psfrag{P11per}{$P_{11}$}
\psfrag{P13per}{$P_{13}$}
\psfrag{P31per}{$P_{31}$}
\psfrag{P33per}{$P_{33}$}
\centerline{\epsfig{file=IR.uni.ps,width=.7\textwidth,angle=-90}}
\vspace{0.2cm}
\caption[pilf]{\protect \small (Color online.) Fits to the KA85 and WI08 pion-nucleon phase shifts as a
function of $\sqrt{s}$ (in GeV) employing the unitarized $\pi N$ amplitudes, eq.~\eqref{uni.cdd}.
The solid (dashed) lines correspond to the fit of the KA85 (WI08) data.
\label{fig:ir.uni}}
\end{figure}
The values of the resulting LECs are collected in Table~\ref{table.cs.ds.uni}. We consider that the pure perturbative study of section~\ref{sec3} is the proper way to determine the chiral LECs. The new values in Table~\ref{table.cs.ds.uni} do not constitute an alternative determination to those offered in Tables~\ref{table.cs.ds.pert.ka85} and \ref{table.ir.pert.cs.ds.wi08} and should be employed within UCHPT studies. Nonetheless, it is remarkable that the values for the LECs obtained are compatible with the average of values given in the fourth column of Table~\ref{table.ir.pert.cs.ds.wi08}, in particular, for the ${\cal O}(q^2)$ LECs the central values are also rather close to the fitted values in Table~\ref{table.cs.ds.uni}.
Since we have a procedure to generate the $\Delta(1232)$ resonance through the CDD pole in eq.~\eqref{uni.cdd}, such agreement is surprising since the contribution of this resonance to the LECs is very important \cite{aspects}. The point is that the typical value of $\gamma/(s-s_P)$ in the low-energy region studied in section~\ref{sec3} is only around a factor 2 larger in modulus than the subtraction constant $a_1/(4\pi)^2$ in eq.~\eqref{g.def}, the latter being a quantity of first chiral order. As a result, at low energies, the CDD pole gives a contribution that can be counted as ${\cal O}(q^3)$, since the lowest-order one comes from
$-( {T^{(1)}_{IJL}})^2 \gamma/(s-s_P)$. This explains why the values of the second order LECs are preserved, despite having included the CDD pole.
\begin{table}[ht]
\begin{center}
\begin{tabular}{|r|r|r|r|r|r|}
\hline
LEC & Fit & Fit & Partial & Fit & Fit \\
& KA85 & WI08 & Wave & KA85 & WI08 \\
\hline
$c_1$ & $-0.48\pm 0.51$ & $-0.53\pm 0.48$ & $a_{S_{31}}$ & $-0.115$ & $-0.104$ \\
$c_2$ & $4.62\pm 0.27$ & $4.73\pm 0.30$ & $a_{S_{11}}$ & $0.152$ & $0.150$ \\
$c_3$ & $-6.16\pm 0.27$ & $-6.41\pm 0.29$ & $a_{0+}^+$ & $-0.026$ & $-0.020$ \\
$c_4$ & $3.68\pm 0.13$ & $3.81\pm 0.16$ & $a_{0+}^-$ & $0.089$ & $0.085$\\
$d_1+d_2$ & $2.55\pm 0.60$ & $2.70\pm 0.65$ & $a_{P_{31}}$ & $-0.050$ & $-0.048$ \\
$d_3$ & $-1.61\pm 1.01$ & $-1.73\pm 1.04$ & $a_{P_{11}}$ & $-0.080$ & $-0.075$ \\
$d_5$ & $0.93\pm 2.40$ & $1.13\pm 2.18$ & $a_{P_{33}}$ & $0.245$ & $0.250$\\
$d_{14}-d_{15}$ & $-0.46\pm 1.00$ & $-0.61\pm1.11$ & $a_{P_{13}}$ & $-0.041$ & $-0.039$ \\
$d_{18}$ & $0.01\pm 0.21$ & $-0.03\pm 0.20$ & & & \\
\hline
\end{tabular}
{\caption[pilf]{\protect \small Fitted LECs in units GeV$^{-1}$ ($c_i$) and GeV$^{-2}$ ($d_i$)
for the fits KA85 and WI08 employing the unitarized partial waves.
We also give the scattering lengths and volumes in units of $M_\pi^{-1}$ and $M_\pi^{-3}$, respectively.
\label{table.cs.ds.uni}}}
\end{center}
\end{table}
The values of the resulting threshold parameters with the present unitarized amplitudes are collected in the last two columns
of Table~\ref{table.cs.ds.uni}. We observe that all of them are compatible with the averaged values given in the last column
of Table~\ref{table.ir.pert.as.wi08}. The $P_{33}$ scattering volume turns out to be a bit too high, along the lines
of the values obtained with the perturbative fits following strategy 1, despite the reproduction of the $\Delta(1232)$ resonance.
Finally, we also mention that similarly huge values for the GT violation are also obtained from the unitarized amplitudes as in the
pure perturbative treatment. Indeed, the same value for $\Delta_{GT}$, eq.~\eqref{delta.def}, is obtained in the unitarized case for the same values of the
LECs because $g(m^2)=0$ (there is no CDD pole in the $P_{11}$ partial wave.)
\section{Summary and conclusions}
\label{sec5}
We studied elastic pion-nucleon scattering employing covariant CHPT up to and including ${\cal O}(q^3)$ in Infrared Regularization \cite{becher}. We followed two strategies for fitting the phase shifts provided by the partial-wave
analyses of refs.~\cite{ka84,wi08}. In one of them, instead of fitting the $P_{33}$ phase-shifts, we considered the reproduction
of the function $|\mathbf{p}|^3/\tan \delta_{P_{33}}$ around the threshold region (for $\sqrt{s}\leq 1.09$~GeV.) The rationale behind this is
to reduce the impact of the $\Delta(1232)$ when performing fits to data,
avoiding the rapid rise of phase-shifts with energy that tends to increase the value of the resulting scattering volume. An accurate reproduction of
pion-nucleon phase-shifts up to around 1.14~GeV results. The main difference between both strategies has to do with the values of the ${\cal O}(q^2)$ LECs $c_2$ and $c_3$, that are smaller in absolute value for strategy 2 fits. As expected, the $P_{33}$ scattering volume is also smaller for these fits and compatible with previous determinations. We have discussed separately the fits to data of the Karlsruhe \cite{ka84} and GWU \cite{wi08} groups. We obtain a much better reproduction of the $P_{11}$ phase shifts for the former partial wave analysis. IR CHPT at ${\cal O}(q^3)$ is not able to reproduce the $P_{11}$ phase shifts of the current solution of the GWU group even at very low energies. This suggests that a revision of this solution would be in order. The averaged values for the LECs and threshold parameters resulting from the two strategies and all data sets are given in the last columns of Tables~\ref{table.ir.pert.as.wi08} and \ref{table.ir.pert.cs.ds.wi08} in good agreement with other previous determinations. The reproduction of experimental phase-shifts is similar in quality to that obtained previously with ${\cal O}(q^3)$ HBCHPT \cite{fettes3}, showing also a smooth onset of the departure from experimental data for higher energies. This is an improvement compared with previous work \cite{elli2}.
In addition, we obtain a small violation of the
Goldberger-Treiman relation at strict ${\cal O}(M_\pi^3)$, compatible with present determinations.
However, the deviation from the Goldberger-Treiman relation is still a caveat because when all the terms in the full IR CHPT calculation
at ${\cal O}(q^3)$ are kept the resulting discrepancy is much higher, around 20-30\%.
We have also employed the non-perturbative methods of Unitary CHPT \cite{plb,pin} to resum the right-hand cut of the pion-nucleon partial waves. The $\Delta(1232)$ resonance is incorporated in the approach
as a Castillejo-Dalitz-Dyson pole in the inverse of the amplitude. A good reproduction of the phase shifts is reached for $\sqrt{s}$ up to around 1.25~GeV. There is an intrinsic limitation in IR CHPT for reaching higher energies due to the presence of a branch cut at $s=2(m^2+M_\pi^2)\simeq 1.34^2~$GeV$^2$. Above that energy strong violations of unitarity occur due to the onset of an unphysical cut associated with the infinite resummation of relativistic corrections accomplished in IR. This also gives rise to a strong increase of the phase shifts, noticeable already for $\sqrt{s}\gtrsim 1.25$~GeV. The values of the LECs at ${\cal O}(q^2)$ are compatible with those obtained in the purely perturbative study.
\section*{Acknowledgements}
We thank R.~Workman for useful correspondence in connection with GWU group partial-wave analyses and
SAID program.
This work is partially funded by the grants MEC FPA2007-6277, FPA2010-17806 and the Fundaci\'on S\'eneca 11871/PI/09.
We also thank the financial support from the BMBF grant 06BN411, the EU-Research Infrastructure
Integrating Activity
``Study of Strongly Interacting Matter" (HadronPhysics2, grant n. 227431)
under the Seventh Framework Program of EU and
the Consolider-Ingenio 2010 Programme CPAN (CSD2007-00042). JMC acknowledges the MEC contract FIS2006-03438, the EU Integrated Infrastructure Initiative Hadron Physics Project contract RII3-CT-2004-506078 and the Science and Technology
Facilities Council [grant number ST/H004661/1] for support.
\section{Introduction}
Colorectal cancer is a serious threat to human health, with the third highest morbidity and mortality among all cancers~\cite{siegel2020cancer}. Polyps are among the most critical precursors of this disease, so their localization and segmentation play a key role in the early diagnosis and treatment of colorectal cancer. At present, colonoscopy is the most commonly used means of examination, but this process involves manual, and thus expensive, labor, not to mention a considerable misdiagnosis rate~\cite{van2006polyp}. Therefore, automatic and accurate polyp segmentation is of great practical significance. However, polyp segmentation has always been a challenging task due to the diversity of polyps in shape and size. Some examples of polyp segmentation are displayed in Fig.~\ref{fig1}.
In recent years, with the prevalence of deep learning technology, a series of convolutional neural network variants have been applied to polyp segmentation and have made breakthrough progress. Early fully convolutional networks~\cite{long2015fully,brandao2017fully,akbari2018polyp,li2018contrast} replaced the fully connected layers of the neural network with convolutional ones. In order to enlarge the receptive field of the neurons, such a network gradually reduces the scale of the feature map and finally generates a prediction with very low resolution, resulting in rough segmentation results that are prone to inaccurate boundaries. Later, the UNet~\cite{ronneberger2015u} structure was proposed, which adopts stepwise upsampling to restore the feature map resolution while maintaining the relatively large receptive field of the neurons. At the same time, skip connections are used to enhance the fusion of shallow and deep features, improving on the original FCN and greatly boosting the segmentation performance and boundary localization of the specific organs or diseased regions. SegNet~\cite{wickstrom2018uncertainty} is similar to UNet, but utilizes the max pooling indices to achieve the up-sample operation in the decoder branch. SFANet~\cite{fang2019selective} incorporates a shared encoder branch and two decoder branches to detect polyp regions and boundaries respectively, and includes a new boundary-sensitive loss to mutually improve both polyp region segmentation and boundary detection. In addition, by adopting upward concatenation to fuse multi-level features and embedding the selective kernel module to learn multi-scale features, the model is further enhanced and achieves competitive results. However, most of these methods take no particular measures to deal with the variance of polyps in shape and size.
In this paper, we propose the Adaptive Context Selection Network (ACSNet). Inspired by~\cite{fu2019adaptive}, we believe that global context features are helpful for the segmentation of large polyps, while local context information is crucial for the identification of small ones. Therefore, our network is designed to adaptively select context information for contrast learning and feature enhancement, according to the size of the polyp region to be segmented. Specifically, ACSNet is based on the encoder-decoder framework, with a Local Context Attention (LCA) module, a Global Context Module (GCM), and Adaptive Selection Modules (ASMs). The LCAs and the GCM are responsible for mining local and global context features and sending them to the ASM in each decoder layer. Through channel-wise attention, the ASM achieves adaptive feature fusion and selection. In summary, the contributions of this paper are mainly the following: (1) Our ACSNet can adaptively attend to different context information to better cope with the impact of the diversity of polyp size and shape on segmentation. (2) Our tailored LCA and GCM modules achieve more consistent and accurate polyp segmentation through complementary selection of local features and cross-layer enhancement of the global context. (3) ACSNet achieves new state-of-the-art results on two widely used public benchmark datasets.
\begin{figure}[htp]
\includegraphics[width=0.9\textwidth]{./examples.eps}
\centering
\caption{Two examples of polyp segmentation} \label{fig1}
\end{figure}
\section{Method}
The architecture of our ACSNet is shown in Fig.~\ref{fig2}, which can be regarded as an enhanced UNet~\cite{ronneberger2015u} or Feature Pyramid Network (FPN)~\cite{lin2017feature}. We utilize ResNet34~\cite{he2016deep} as our encoder, which contains five blocks in total. Accordingly, the decoder branch also has five blocks. Each decoder block is composed of two Conv-BN-ReLU combinations, and generates one prediction map with different resolution, which is supervised by the down-sampled ground truth respectively.
The GCM is placed on top of the encoder branch, which captures the global context information and densely concatenates to the ASM of each layer in the decoder path. At the same time, each skip-connection between the encoder and decoder paths of UNet~\cite{ronneberger2015u} is replaced by the LCA module, which gives each positional feature column of every decoding layer a local context enhancement of different receptive field and at the same time delicately leverages the prediction confidence of the previous layer as a guidance to force the current layer to focus on harder regions. Finally, we utilize the ASM modules to integrate the features output from each previous decoder block, the LCA module and the GCM, based on a channel-wise attention scheme for context selection.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{./ACSNet.eps}
\caption{Overview of our proposed ACSNet} \label{fig2}
\end{figure}
\subsection{Local Context Attention Module (LCA)}
LCA is designed as a spatial attention scheme, which incorporates hard sample mining when merging shallow features and pays more attention to uncertain and more complex areas, so as to achieve layer-wise feature complementation and prediction refinement. As shown in Fig.~\ref{fig3}, the attention map of each LCA module is determined by the prediction map generated from the upper layer of the decoder stream. Specifically, the attention map of the $i^{th}$ LCA module is denoted as ${Att}_i\in\mathbb{R}^{1\times H_i\times W_i}$, in which $H_i$ and $W_i$ are the height and width of the attention map, respectively. Its value at position $j\in\{1,2,\cdots,H_i\times W_i\}$, denoted as ${Att}_{i}^{j}$, is calculated as follows:
\begin{equation}
{Att}_{i}^{j}=1-\frac{\left|P^j_{i+1}-T\right|}{\max(T,1-T)}\ ,
\label{equ1}
\end{equation}
where ${P}_{i+1}^{j}\in(0,1)$ is the value at the $j^{th}$ location of the prediction map ${P}_{i+1}\in\mathbb{R}^{1\times H_i\times W_i}$ generated by the ${(i+1)}^{th}$ decoder block, and $T$ is the threshold that determines whether a position belongs to the foreground or the background. We compute the absolute difference between the prediction value and the threshold $T$, and limit it to the range of 0 to 1 by dividing by the maximum possible difference. The closer the predicted value is to the threshold $T$, the more uncertain the prediction at the corresponding position is, so that position should be given a larger attention weight in the current layer, and vice versa. Finally, we multiply the features by the attention values and sum the result with the original features to obtain the output of this module. For simplicity, $T$ is set to $0.5$ in our experiments.
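As a concrete illustration, the attention computation of Eq.~(\ref{equ1}) and the residual re-weighting can be sketched as follows (a minimal NumPy sketch of the logic only; the actual model is implemented in PyTorch, and the function names here are our own):

```python
import numpy as np

def lca_attention(pred, T=0.5):
    # Eq. (1): Att = 1 - |P - T| / max(T, 1 - T).
    # `pred` holds the prediction probabilities P_{i+1} in (0, 1);
    # positions whose prediction is close to the threshold T
    # (uncertain, "hard" positions) receive weights near 1, while
    # confident positions receive weights near 0.
    return 1.0 - np.abs(pred - T) / max(T, 1.0 - T)

def lca_forward(features, pred, T=0.5):
    # Multiply the skip-connection features by the attention map,
    # then add the original features back (the residual sum
    # described in the text).
    att = lca_attention(pred, T)
    return features * att + features
```

With $T=0.5$, a prediction of exactly $0.5$ receives the maximal weight $1$, while predictions of $0$ or $1$ receive weight $0$.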
\begin{figure}[htp]
\includegraphics[width=0.8\textwidth]{./LCA.eps}
\centering
\caption{Local Context Attention Module (LCA)} \label{fig3}
\end{figure}
\subsection{Global Context Module (GCM)}
We borrow the idea of pyramid pooling~\cite{zhao2017pyramid,liu2019simple,he2019non} to design our GCM, and place it as an independent module for global context inference on top of the encoder branch. Meanwhile, GCM forwards its output to each ASM module to compensate for the global context that is gradually diluted during layer-wise refinement.
As shown in Fig.~\ref{fig4}, GCM contains four branches to extract context features at different scales. Specifically, it is composed of a global average pooling branch and two adaptive local average pooling branches, which output three feature maps of spatial size $1\times1$, $3\times3$, and $5\times5$, respectively. It also contains an identity mapping branch with a non-local operation~\cite{wang2018non} to capture long-range dependencies while maintaining the original resolution. The non-local feature representation finely captures the global dependency of each positional feature and thereby enhances the output of the encoder. In the end, we up-sample the above four feature maps and concatenate them to obtain the resulting global context feature of this module, which is densely fed to each ASM module in the decoder stream.
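The pooling branches can be sketched as follows (a NumPy sketch under our own naming; the bin boundaries mimic adaptive average pooling, and the non-local identity branch is omitted for brevity):

```python
import numpy as np

def adaptive_avg_pool2d(x, out_size):
    # Average-pool a (C, H, W) feature map down to (C, out_size, out_size),
    # using floor/ceil bin boundaries in the style of adaptive average pooling.
    C, H, W = x.shape
    out = np.zeros((C, out_size, out_size))
    for i in range(out_size):
        h0, h1 = (i * H) // out_size, -((-(i + 1) * H) // out_size)
        for j in range(out_size):
            w0, w1 = (j * W) // out_size, -((-(j + 1) * W) // out_size)
            out[:, i, j] = x[:, h0:h1, w0:w1].mean(axis=(1, 2))
    return out

def gcm_pooling_branches(x):
    # The three pooling branches of GCM: global average pooling (1x1)
    # plus two adaptive local average pooling branches (3x3 and 5x5).
    return [adaptive_avg_pool2d(x, k) for k in (1, 3, 5)]
```

In the full module, each pooled map would be up-sampled back to the input resolution and concatenated with the non-local branch output.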
\begin{figure}[htp]
\centering
\includegraphics[width=0.8\textwidth]{./GCM.eps}
\caption{Global Context Module (GCM)} \label{fig4}
\end{figure}
\subsection{Adaptive Selection Module (ASM)}
We believe that local context and global context have different reference values for the segmentation of polyp regions with different appearances, sizes, and feature contrasts. Therefore, we attach an Adaptive Selection Module (ASM) to each block in the decoder stream. Taking the local context features generated by the LCA, the global context features from the GCM, and the output features of the previous decoder block as inputs, it learns to adaptively select context features for aggregation in each block.
As shown in Fig.~\ref{fig5}, we incorporate a ``Squeeze-and-Excitation'' block~\cite{hu2018squeeze} to adaptively recalibrate channel-wise feature responses for feature selection. Specifically, ASM takes the concatenated features as input and employs global average pooling to squeeze the feature map into a single vector, which is then fed to a fully connected layer to learn the weight of each channel. After a sigmoid operation, the attention weights are limited to the range of 0 to 1. By multiplying the original feature maps with the attention values, informative context features are picked out, while those not conducive to improving discrimination are suppressed. Note that we also apply the non-local operation~\cite{wang2018non} to the features output from the previous decoder block before concatenation, to enhance the decoder features with long-range dependencies.
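The channel-wise selection step can be sketched as follows (a NumPy sketch; \texttt{fc\_weight} stands in for the learned parameters of the fully connected layer, and all names are our own):

```python
import numpy as np

def asm_channel_selection(x, fc_weight):
    # x: concatenated context features of shape (C, H, W).
    # fc_weight: a (C, C) weight matrix standing in for the learned
    # fully connected layer of the "Squeeze-and-Excitation" block.
    squeezed = x.mean(axis=(1, 2))            # global average pooling -> (C,)
    logits = fc_weight @ squeezed             # fully connected layer
    weights = 1.0 / (1.0 + np.exp(-logits))   # sigmoid -> weights in (0, 1)
    return x * weights[:, None, None]         # channel-wise recalibration
```

Channels whose learned weight saturates near 1 pass through almost unchanged, while channels weighted near 0 are suppressed.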
\begin{figure}
\includegraphics[width=0.8\textwidth]{./ASM.eps}
\centering
\caption{Adaptive Selection Module (ASM)} \label{fig5}
\end{figure}
\section{Experiments}
\subsection{Datasets}
We evaluate our proposed method on two benchmark colonoscopy image datasets collected from examinations for colorectal cancer. The first is the EndoScene Dataset~\cite{vazquez2017benchmark}, which contains 912 images, each of which has at least one polyp region. It is divided into a training set, a validation set, and a test set, with 547, 183, and 182 images, respectively. For simplicity, we resize the images to $384\times288$ uniformly in our experiments. The second is the Kvasir-SEG Dataset~\cite{jha2020kvasir}, containing 1000 images with polyp regions. We randomly select 60$\%$ of the dataset as the training set, 20$\%$ as the validation set, and the remaining 20$\%$ as the test set. Since the image resolution of this dataset varies greatly, we follow the setting of~\cite{jha2020kvasir} and resize all images to a fixed size of $320\times320$.
\subsection{Implementation Details and Evaluation Metrics}
In the training stage, we use data augmentation to enlarge the training set, including random horizontal and vertical flips, rotation, zoom, and shift. All the images are randomly cropped to $224\times224$ as input. We set the batch size to 4, and use the SGD optimizer with a momentum of 0.9 and a weight decay of 0.0005 to optimize the model. A poly learning rate policy is adopted to decay the learning rate, namely $lr=init\_lr\times(1-\frac{epoch}{nEpoch})^{power}$, where $init\_lr=0.001$, $power=0.9$, and $nEpoch=150$. We utilize the combination of a binary cross entropy loss and a dice loss as the loss function. Our model is implemented in the PyTorch~\cite{paszke2019pytorch} framework.
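The poly schedule above can be written directly (function name is ours):

```python
def poly_lr(epoch, init_lr=0.001, power=0.9, n_epoch=150):
    # Poly learning rate policy used during training:
    # lr = init_lr * (1 - epoch / nEpoch) ** power
    return init_lr * (1.0 - epoch / n_epoch) ** power
```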
As in~\cite{fang2019selective}, we use eight metrics to evaluate the segmentation performance, including ``Recall'', ``Specificity'', ``Precision'', ``Dice Score'', ``Intersection-over-Union for Polyp (IoUp)'', ``IoU for Background (IoUb)'', ``Mean IoU (mIoU)'' and ``Accuracy''.
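For reference, these metrics reduce to the following pixel-level formulas over the binary confusion counts of a predicted polyp mask (standard definitions, which we assume coincide with those used in~\cite{fang2019selective}):

```python
def segmentation_metrics(tp, fp, tn, fn):
    # Pixel-level metrics from true/false positive/negative counts,
    # treating polyp pixels as the positive class.
    iou_p = tp / (tp + fp + fn)            # IoU for polyp
    iou_b = tn / (tn + fp + fn)            # IoU for background
    return {
        "Recall":      tp / (tp + fn),
        "Specificity": tn / (tn + fp),
        "Precision":   tp / (tp + fp),
        "Dice":        2 * tp / (2 * tp + fp + fn),
        "IoUp":        iou_p,
        "IoUb":        iou_b,
        "mIoU":        (iou_p + iou_b) / 2,
        "Accuracy":    (tp + tn) / (tp + tn + fp + fn),
    }
```

Note that Dice and IoUp are monotonically related via $Dice = 2\,IoU / (1 + IoU)$, so the two rankings of methods tend to agree.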
\begin{figure}[t]
\centering
\includegraphics[width=\textwidth]{./results.eps}
\caption{Visual comparison of polyp region segmentation from state-of-the-art methods. The ground truth (GT) is shown in the penultimate column. Our proposed method consistently produces segmentation results closest to the ground truth. The hard region mining result is shown in the rightmost column.} \label{fig6}
\end{figure}
\begin{table*}[h]
\centering
\caption{Comparison with other state-of-the-art methods on the EndoScene dataset}
\resizebox{\textwidth}{!}{
\begin{tabular}{p{80pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}}
\toprule
Methods & $Rec$ & $Spec$ & $Prec$ & $Dice$ & $IoUp$ & $IoUb$ & $mIoU$ & $Acc$ \\
\midrule
FCN8s~\cite{akbari2018polyp} &60.21&98.60&79.59&61.23&48.38&93.45&70.92&93.77 \\
UNet~\cite{ronneberger2015u} &85.54&98.75&83.56&80.31&70.68&95.90&83.29&96.25 \\
UNet++~\cite{zhou2018unet++} &78.90&99.15&86.17&77.38&68.00&95.48&81.74&95.78 \\
SegNet~\cite{wickstrom2018uncertainty} &86.48&99.04&86.54&82.67&74.41&96.33&85.37&96.62 \\
SFANet~\cite{fang2019selective} &85.51&98.94&86.81&82.93&75.00&96.33&85.66&96.61 \\
\textbf{Ours} &\textbf{87.96}&\textbf{99.16}&\textbf{90.99}&\textbf{86.59}&\textbf{79.73}&\textbf{96.86}&\textbf{88.29}&\textbf{97.11} \\
\bottomrule
\end{tabular}}
\label{table1}
\end{table*}
\subsection{Results on the EndoScene Dataset}
We compare our ACSNet with FCN8s~\cite{akbari2018polyp}, UNet~\cite{ronneberger2015u}, UNet++~\cite{zhou2018unet++}, SegNet~\cite{wickstrom2018uncertainty} and SFANet~\cite{fang2019selective} on the test set. As shown in Table~\ref{table1}, our method achieves the best performance over all metrics, with a $Dice$ of $86.59\%$, a $3.66\%$ improvement over the second best algorithm. Some visualization results are shown in Fig.~\ref{fig6} (Col.~1-8); as can be seen, our algorithm is very robust to complex situations such as variations in polyp region size and image brightness. At the same time, due to the introduction of the effective context selection modules and especially the hard region mining (HRM) mechanism, the algorithm localizes polyp boundaries significantly more accurately. In the rightmost column of Fig.~\ref{fig6}, it can be observed that the hard regions mined by our method are usually located in the border area of polyps, which deserves particular attention during prediction refinement.
\subsection{Results on the Kvasir-SEG Dataset}
On this dataset, we compare our ACSNet with UNet~\cite{ronneberger2015u}, UNet++~\cite{zhou2018unet++}, SegNet~\cite{wickstrom2018uncertainty}, ResUNet~\cite{jha2020kvasir} and SFANet~\cite{fang2019selective}. The results are listed in Table~\ref{table2}. Again, our method achieves the best performance and outperforms the others by large margins, further demonstrating its robustness and effectiveness.
\begin{table*}
\centering
\caption{Comparison with other state-of-the-art methods and Ablation study on the Kvasir-SEG dataset}
\resizebox{\textwidth}{!}{
\begin{tabular}{p{100pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}|p{30pt}}
\toprule
Methods & $Rec$ & $Spec$ & $Prec$ & $Dice$ & $IoUp$ & $IoUb$ & $mIoU$ & $Acc$ \\
\midrule
UNet~\cite{ronneberger2015u}&87.89&97.69&83.89&82.85&73.95&94.73&84.34&95.65 \\
UNet++~\cite{zhou2018unet++} &88.67&97.49&83.17&82.80&73.74&94.49&84.11&95.42 \\
ResUNet~\cite{jha2020kvasir}&81.25&98.31&87.88&81.14&72.23&94.00&83.11&94.90 \\
SFANet~\cite{fang2019selective} &91.99&97.05&82.95&84.68&77.06&94.83&85.94&95.71 \\
SegNet~\cite{wickstrom2018uncertainty} &90.03&98.13&87.51&86.43&79.11&95.90&87.51&96.68 \\
\hline
\textbf{Ours} &\textbf{93.14}&98.55&\textbf{91.59}&\textbf{91.30}&\textbf{85.80}&\textbf{97.00}&\textbf{91.40}&\textbf{97.64} \\
\hline
Baseline&89.53&98.63&90.32&88.21&81.59&96.27&88.93&96.99\\
Baseline+LCAs&91.79&98.39&89.15&89.00&82.47&96.41&89.44&97.15\\
Baseline+LCAs+GCM&92.18&\textbf{98.72}&90.90&90.28&84.35&96.88&90.62&97.52\\
\bottomrule
\end{tabular}}
\label{table2}
\end{table*}
\subsection{Ablation study}
To validate the effectiveness and necessity of each of the three modules in our proposed method, we compare ACSNet with its three variants in Table~\ref{table2}. Specifically, the baseline model refers to the original U-shaped encoder-decoder framework with skip connections, and we gradually add the LCAs, the GCM, and the ASMs to it, denoted as Baseline+LCAs, Baseline+LCAs+GCM, and Ours, respectively. As shown in the table, the progressive introduction of LCAs, GCM, and ASMs brings consistent performance improvements, boosting $Dice$ by $0.79\%$, $1.28\%$, and $1.02\%$, respectively.
\section{Conclusion}
In this paper, we have argued that an efficient perception of local and global context is essential to improve the performance of polyp region localization and segmentation. Based on this, we propose an adaptive context selection
based encoder-decoder framework which contains the LCA modules for hard-region-mining-based local context extraction, the GCM for global feature representation and its enhancement of each decoder block, and the ASM components for contextual information aggregation and selection. Extensive experimental results and ablation studies have demonstrated the effectiveness and superiority of the proposed method.
\section*{Acknowledgement}
This work is supported in part by the Guangdong Basic and Applied Basic Research Foundation (No.~2020B1515020048), in part by the National Natural Science Foundation of China (No.~61976250 and No.~61702565), in part by the Zhejiang Province Key Research \& Development Program (No.~2020C03073) and in part by the Key Area R\&D Program of Guangdong Province (No.~2018B030338001).
\bibliographystyle{splncs04}
\section{Introduction}
Recently, voice assistants have become a staple in the flagship products of many big technology companies such as Google, Apple, Amazon, and Microsoft. One challenge for voice assistant products is that the language that a speaker is using needs to be preset. To improve user experience on this and similar tasks such as automated speech detection or speech to text transcription, automatic language detection is a necessary first step.
The technique described in this paper, language identification for audio spectrograms (LIFAS), uses spectrograms of raw audio signals as input to a convolutional neural network (CNN) for language identification. One benefit of this process is that it requires minimal pre-processing: only the raw audio signals are input to the network, with the spectrograms generated batch-by-batch during training. Another benefit is that the technique can utilize short audio segments (approximately 4 seconds) for effective classification, which is necessary for voice assistants that need to identify a language as soon as a speaker begins to talk.
LIFAS binary language classification had an accuracy of 97\%, and multi-class classification with six languages had an accuracy of 89\%.
\section{Background}
Finding a dataset of audio clips in various languages sufficiently large for training a network was an initial challenge for this task. Many datasets of this type are not open sourced \cite{mozilla}. VoxForge \cite{voxforge}, an open-source corpus that consists of user-submitted audio clips in various languages, is the source of data used in this paper.
Previous work in this area used deep networks as feature extractors, but did not use the networks themselves to classify the languages \cite{conference, unified}. LIFAS removes any feature extraction performed outside of the network. The network is fed a raw audio signal, and the spectrogram of the data is passed to the neural network during training. The last layer of the network outputs a vector of probabilities with one prediction per language. Thus, the whole process from raw audio signal to prediction of language is performed automatically by the neural network.
In \cite{lstmpaper}, a CNN was combined with a long short-term memory (LSTM) network to classify language using spectrograms generated from audio. The network presented in \cite{lstmpaper} classified 4 languages using 10-second audio clips for training \cite{blog}, while LIFAS achieves similar performance for 6 languages using 4-second audio clips. This demonstrates the robustness of the architecture and its improvement upon earlier techniques.
\subsection{Residual and Convolutional Neural Networks}
CNNs have been shown to give state-of-the-art results for image classification and a variety of other tasks. As neural networks using back-propagation were constructed to be deeper, with more layers, they ran into the vanishing gradient problem~\cite{gradient}. A network updates its weights based on the partial derivatives of the error function propagated back from the later layers. These derivatives can become very small, making the weight updates insignificant, which can lead to a degradation in performance.
One way to mitigate this problem is the use of Residual Neural Networks (ResNets)~\cite{resnet}. ResNets utilize skip connections, which connect two non-adjacent layers. ResNets have shown state-of-the-art performance on image recognition tasks, which makes them a natural choice of network architecture for this task~\cite{imageresidual}.
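The idea of a skip connection can be sketched in a few lines (an illustrative sketch with our own naming, not the ResNet implementation):

```python
def residual_block(x, transform):
    # A skip connection: the block output is the transformed input
    # plus the input itself, so the identity path lets gradients
    # flow directly through deep stacks of layers.
    return [transform(v) + v for v in x]
```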
\subsection{Spectrogram Generation}
A spectrogram is an image representation of the frequencies present in a signal over time. The frequency spectrum of a signal can be generated from a time series signal using a Fourier Transform.
In practice, the Fast Fourier Transform (FFT) is applied to a section of the time series data to calculate the magnitude of the frequency spectrum at a fixed moment in time, which corresponds to one line in the spectrogram. The time series data is windowed, usually in overlapping chunks, and the FFT results are strung together to form the spectrogram image, which allows us to see how the frequencies change over time.
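The windowed-FFT procedure just described can be sketched as follows (a minimal NumPy sketch with illustrative parameter choices of our own; the spectrograms used in the paper are the mel-scaled variant of this):

```python
import numpy as np

def magnitude_spectrogram(signal, n_fft=1024, hop=512):
    # Slide a window of n_fft samples over the signal (overlapping by
    # n_fft - hop samples), FFT each window, and stack the magnitude
    # spectra as the columns of the spectrogram image.
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        window = signal[start:start + n_fft] * np.hanning(n_fft)
        frames.append(np.abs(np.fft.rfft(window)))
    return np.stack(frames, axis=1)   # shape: (n_fft // 2 + 1, n_frames)
```

A pure tone produces a bright horizontal line at the bin corresponding to its frequency, which is a quick sanity check for the implementation.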
Since we were generating spectrograms on audio data, the data was converted to the mel scale, generating "melspectrograms". These images will be referred to as simply "spectrograms" for the duration of this paper. The conversion from $f$ hertz to $m$ mels that we use is given by,
$$m = 2595 \log_{10} \left( 1 + \frac{f}{700} \right).$$
An example of a spectrogram generated by an English data transmission is shown in figure \ref{spec}.
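In code, the conversion above and its inverse are (function names ours):

```python
import math

def hz_to_mel(f):
    # m = 2595 * log10(1 + f / 700), the conversion used in the text.
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    # Inverse conversion back from mels to hertz.
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```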
\begin{figure}
\centering
\includegraphics[width=\textwidth]{spec.png}
\caption{Spectrogram generated from an English audio file.}
\label{spec}
\end{figure}
\section{Data Preparation}
Audio data was collected from VoxForge~\cite{voxforge}. Each audio signal was sampled at a rate of 16kHz and cut down to 60,000 samples, where a sample refers to a single data point in the audio clip. This equates to 3.75 seconds of audio. The audio files were saved as WAV files and loaded into Python using the librosa library with a sample rate of 16kHz.
Each audio file of 60,000 samples was saved separately and is referred to as a clip. The training set consisted of 5,000 clips per language, and the validation set consisted of 2,000 clips per language.
Audio clips were gathered in English, Spanish, French, German, Russian, and Italian. Speakers had various accents and were of different genders. The same speakers may be speaking in more than one clip, but there was no cross contamination in the training and validation sets.
Spectrograms were generated using parameters similar to the process discussed in \cite{audioblog} which used a frequency spectrum of 20Hz to 8,000Hz and 40 frequency bins. Each FFT was computed on a window of 1024 samples. No other pre-processing was done on the audio files. Spectrograms were generated on-the-fly on a per-batch basis while the network was running (i.e. spectrograms were not saved to disk).
\section{Network}
We utilized the fast.ai \cite{fastai} deep learning library built on PyTorch \cite{pytorch}. The network used was a pretrained Resnet50. The spectrograms were generated on a per-batch basis, with a batch size of 64 images. Each image was $432 \times 288$ pixels in size.
During training, the 1-cycle policy described in \cite{leslie} was used. In this process, the learning rate is gradually increased and then decreased in a linear fashion during one cycle \cite{onecycleblog}. The learning rate finder within the fast.ai library was first used to determine the maximum learning rate for the 1-cycle training of the network; it was set to $1 \times 10^{-2}$. The learning rate increases until it hits this maximum and then gradually decreases again. The length of the cycle was set to 8 epochs, so the network was trained for 8 epochs over the course of the cycle.
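The linear up-then-down schedule can be sketched as follows (a simplified triangular sketch of the policy described above; the fast.ai implementation has further refinements such as momentum cycling, and the names and defaults here are our own):

```python
def one_cycle_lr(step, total_steps, max_lr=1e-2, start_lr=1e-3):
    # Linear warm-up to max_lr over the first half of the cycle,
    # then linear decay back down over the second half.
    half = total_steps / 2
    if step <= half:
        return start_lr + (max_lr - start_lr) * (step / half)
    return max_lr - (max_lr - start_lr) * ((step - half) / half)
```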
\section{Experiments}
\subsection{Binary Classification with Varying Number of Samples}
Binary classification was performed on two languages using clips of 60,000 samples. English and Russian were chosen to use for training and validation. To test the impact of the number of samples on classification while keeping the sample rate constant, binary classification was also performed on clips of 100,000 samples.
\subsection{Multiple Language Classification}
For each language (English, Spanish, German, French, Russian, and Italian), 5,000 clips were placed in the training set. Each clip was 60,000 samples in length. 2,000 clips per language were placed in the validation set, and no speakers or clips appeared in both the training and validation sets.
\section{Results}
Accuracy was calculated for both binary classification and multiclass classification as: $$Accuracy = \frac{Number \; of \; Correct \; Predictions}{Total \;Number \;of \; Predictions}.$$
LIFAS binary classification accuracy for Russian and English clips of length 60,000 samples was 94\%. In comparison, LIFAS binary classification accuracy on the clips of 100,000 samples was 97\%. The accuracy totals given above are calculated over all clips in the validation set. The accuracy can also be broken down by language, where there was essentially no difference between the accuracy for English clips and that for Russian clips.
To confirm that the network performance was not dependent on English and Russian language data, binary classification was tested on other languages with little to no impact on validation accuracy.
LIFAS accuracy for the multi-class network with six languages was 89\%. These results were based on clips of 60,000 samples, since a sufficient number of longer clips was unavailable. The results from the 100,000-sample clips in the binary classification model suggest that performance could be improved in the multi-class setting with longer clips.
The confusion matrix for the multi-class classification is shown in figure \ref{confusion}.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{confusion.png}
\caption{The confusion matrix for the multiclass language identification problem.}
\label{confusion}
\end{figure}
\section{Discussion and Limitations}
Notably, the highest rate of misclassification occurred when Spanish clips were classified as Russian, and when Russian clips were classified as Spanish. Additionally, almost no other language was misclassified as Russian or Spanish. One hypothesis for this observation is the fact that Russian is the only Slavic language in the training set. The network may therefore be performing some thresholding at one layer that separates Russian from the other languages, with Spanish clips by chance lying near the threshold.
One limitation in our findings is that all of the data came from the same dataset. Since audio formats can have a wide variety of parameters such as bit rate, sampling rate, and bits per sample, we would expect clips from other datasets collected in different formats to potentially confuse the network. There is potential for this drawback to be overcome assuming appropriate pre-processing steps were taken for the audio signals so that the spectrograms did not contain artifacts from the dataset itself. This is a problem that should be explored as more data becomes available.
\section{Conclusion}
This work shows the viability of using deep network architectures commonly used for image classification in identifying languages from images generated from audio data. Robust performance can be accomplished using relatively short files with minimal pre-processing. We believe that this model can be extended to classify more languages so long as sufficient, representative training and validation data is available. A next step in testing the robustness of this model would be to include test data from additional (e.g. non-VoxForge) datasets.
Additionally, we would want the network to be performant in environments with varying levels of noise. VoxForge data consists of user-submitted audio clips, so the noise profiles of the clips vary, but more regimented tests should be done to see how robust the network is to different measured levels of noise. Simulated additive white Gaussian noise could be added to the training data to simulate low-quality audio, but this still might not fully mimic the effect of background noise such as car horns, clanging pots, or multiple speakers in a real-life environment.
Another way to potentially increase the robustness of the model would be to implement SpecAugment \cite{specaugment}, a method that deliberately distorts spectrogram images during training in order to reduce overfitting and increase network performance. This may help add scalability and robustness to the network, assuming the spectral distortions generated by SpecAugment accurately represent distortions in audio signals observed in the real world.
\printbibliography
\end{document}
\section{Introduction}
Euler systems (of rank $1$) were introduced by Kolyvagin in order to study the structure of Selmer groups.
The definition of an Euler system says nothing about $L$-values. However, once we obtain an Euler system which is related to a certain $L$-value, we get an upper bound on the order of a Selmer group in terms of this $L$-value.
While the importance of Euler systems is obvious, the structure of the module of all Euler systems has not been well studied; this structure is the subject of the present paper.
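Although the precise definition we use is postponed to Definition~\ref{def:euler system}, let us recall for orientation the shape of the classical rank-one norm relations (in one common normalization, following Rubin's book on Euler systems): an Euler system is a collection of classes $c_{F} \in H^{1}(F,T)$, indexed by suitable finite abelian extensions $F/k$, such that
\[
{\rm cor}_{F'/F}(c_{F'}) \;=\; \Bigl(\prod_{\mathfrak{q}} P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1})\Bigr)\, c_{F} \qquad (F \subseteq F'),
\]
where $\mathfrak{q}$ runs over the primes of $k$ that ramify in $F'$ but not in $F$, ${\rm Fr}_{\mathfrak{q}}$ denotes an arithmetic Frobenius at $\mathfrak{q}$, and $P_{\mathfrak{q}}(x) = \det(1-{\rm Fr}_{\mathfrak{q}}^{-1}x \mid T^{*}(1))$ is the Euler factor at $\mathfrak{q}$. It is this product of Euler factors that gives the systems their name.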
To explain the result we obtain in this paper more precisely,
we fix some notations.
Let $k$ be a totally real field of degree $r$ and
\[
\chi \colon \Gal(\overline{k}/k) \longrightarrow \overline{\mathbb{Q}}^{\times}
\]
a non-trivial finite order even character. Put $k_{\chi} := \overline{k}^{\ker(\chi)}$.
Let $p$ be an odd prime with $p \nmid h_{k} \cdot [k_{\chi} \colon k]$, where $h_{k}$ denotes the class number of $k$.
Fix an embedding $\overline{\mathbb{Q}} \lhook\joinrel\longrightarrow \overline{\mathbb{Q}}_{p}$ and put
\begin{align*}
\mathcal{O} := \mathbb{Z}_{p}[\im(\chi)] \ \textrm{ and } \ T := \mathcal{O}(1) \otimes \chi^{-1}.
\end{align*}
We write $M_{k_{\chi},\infty}$ for the maximal $p$-ramified pro-$p$ abelian extension of $k_{\chi}(\mu_{p^{\infty}})$.
Let $k_{\infty}/k$ denote the cyclotomic $\mathbb{Z}_{p}$-extension.
Note that we have the canonical isomorphism
\[
\Gal(k_{\chi}(\mu_{p^{\infty}})/k) \stackrel{\sim}{\longrightarrow} \Gal(k_{\infty}/k) \times \Gal(k_{\chi}(\mu_{p})/k)
\]
since $p \nmid [k_{\chi}(\mu_{p}) \colon k]$.
We put
\begin{align*}
X^{\chi}_{k} := e_{\chi}(\mathcal{O} \otimes_{\mathbb{Z}_{p}} \Gal(M_{k_{\chi},\infty}/k_{\chi}(\mu_{p^{\infty}})))
\end{align*}
where $e_{\chi} := [k_{\chi}(\mu_{p}) \colon k]^{-1} \sum_{\sigma \in \Gal(k_{\chi}(\mu_{p})/k)}\chi(\sigma)\sigma^{-1}$.
For each integer $s \geq 0$, we denote by ${\rm ES}_{s}(T)$ the module of Euler systems of rank $s$ for $T$ (see Definition~\ref{def:euler system}).
Let $\mathcal{K}$ denote the maximal pro-$p$ abelian extension of $k$ satisfying $S_{\rm ram}(\mathcal{K}/k) \cap (S_{p}(k) \cup S_{\rm ram}(k_{\chi}/k)) = \emptyset$, where, for an algebraic extension $K/k$, we denote by $S_{\rm ram}(K/k)$ the set of primes at which $K/k$ is ramified and $S_{p}(k)$ denotes the set of primes of $k$ above $p$. Then by definition, the module ${\rm ES}_{s}(T)$ has a natural $\mathcal{O}[[\Gal(\mathcal{K}/k)]]$-action.
Under this setting, the following is the main result of this paper.
\begin{theorem}[Theorem~\ref{thm:main}]\label{main}
Let $\mathfrak{m}$ denote the maximal ideal of $\mathcal{O}$.
Suppose that
\begin{itemize}
\item $H^{0}(\Gal(\overline{k}_{\mathfrak{p}}/k_{\mathfrak{p}}), T/\mathfrak{m} T) = H^{2}(\Gal(\overline{k}_{\mathfrak{p}}/k_{\mathfrak{p}}),T/\mathfrak{m} T) = 0$ for each prime $\mathfrak{p}$ of $k$ above $p$,
\item the module $X_{k}^{\chi}$ is $p$-torsion-free,
\item Greenberg's conjecture (Conjecture~\ref{conj:greenberg}) holds true.
\end{itemize}
Then the $\mathcal{O}[[\Gal(\mathcal{K}/k)]]$-module ${\rm ES}_{r}(T)$ is free of rank $1$.
Furthermore, a generator of ${\rm ES}_{r}(T)$ is closely related to $p$-adic $L$-functions.
\end{theorem}
When $k = \mathbb{Q}$, Theorem~\ref{main} is closely related to a conjecture about the universality of the circular distribution proposed by Coleman.
This conjecture implies that any Euler system of rank $1$ for the multiplicative group over $\mathbb{Q}$ is essentially made out of cyclotomic units.
The Coleman conjecture was studied by Seo in \cite{Seo1, Seo2} and by Burns and Seo in \cite{BS}.
These authors obtained strong evidence in support of the Coleman conjecture.
Although Theorem~\ref{main} in the case $k = \mathbb{Q}$ was essentially proved by Seo in \cite[Theorem~A]{Seo1}, the proof of the main result of \cite{Seo1} contains an error, as mentioned in \cite[Remark~3.6]{BS}. Some arguments in \cite{Seo1} can only be corrected either by assuming a certain Galois descent property of distributions or by inverting certain primes.
Recently, in \cite{BDSS}, Burns, Daoud, Sano, and Seo formulated a natural generalization of the conjecture of Coleman, which asserts that, modulo minor technical issues concerning torsion, an Euler system of an appropriate rank for the multiplicative group over a number field is basic (see \cite[Conjecture~2.5]{BDSS} for the details).
In the present article, we do not give the definition of basic Euler systems.
However, it is worth mentioning that, by using Theorem~\ref{main}, we can give new evidence in support of \cite[Conjecture~2.5]{BDSS} (Theorem~\ref{thm:main2}).
\subsection{Notation}
For a field $k$, we fix a separable closure $\overline{k}$ of $k$ and
denote by $G_{k} := \Gal(\overline{k}/k)$ the absolute Galois group of $k$.
For a profinite group $G$ and a topological $G$-module $M$, let $C^{\bullet}(G,M)$ denote the complex of inhomogeneous continuous cochains of $G$ with values in $M$.
We also denote the object in the derived category corresponding to the complex $C^{\bullet}(G,M)$ by ${\bf R}\Gamma(G,M)$.
For each integer $i \geq 0$, we write $H^{i}(G,M)$ for its $i$-th cohomology group.
For an algebraic extension $k/\mathbb{Q}$ and a place $v$ of $\mathbb{Q}$, we denote by $S_{v}(k)$ the set of places of $k$ above $v$. For a finite set $S$ of places of $k$,
we denote by $k_{S}$ the maximal extension of $k$ contained in $\overline{k}$ which is unramified outside $S$. Set
\[
G_{k,S} := \Gal(k_{S}/k).
\]
For a prime $\mathfrak{q}$ of $k$, we denote by $k_{\mathfrak{q}}$ the completion of $k$ at $\mathfrak{q}$.
For an algebraic extension $K/k$, we denote by $S_{\rm ram}(K/k)$ the set of primes at which $K/k$ is ramified.
\subsection{Acknowledgments}
The author would like to express his gratitude to his supervisor Takeshi Tsuji for many helpful discussions.
The author is also grateful to David Burns for helpful remarks on the conjecture of Coleman.
This work was supported by the Program for Leading Graduate Schools, MEXT, Japan and JSPS KAKENHI Grant Number 17J02456.
\section{Definition of Euler systems}\label{sec:euler}
In this section, we recall the definition of Euler systems. The contents of this section are based on \cite[\S2]{hres}.
First, let us fix some notation.
Throughout this paper, $p$ denotes an odd prime and $k$ denotes a number field.
We fix a finite set $S$ of places of $k$ satisfying $S_{\infty}(k) \cup S_{p}(k) \subset S$.
Let $\mathcal{O}$ be a complete discrete valuation ring with maximal ideal $\mathfrak{m}$ and $T$ a free $\mathcal{O}$-module of finite rank on which $G_{k,S}$ acts continuously.
Suppose that
\begin{itemize}
\item the odd prime $p$ is coprime to the class number of $k$,
\item the field $\mathcal{O}/\mathfrak{m}$ has characteristic $p$, and
\item the module $H^{0}(G_{k,S}, T/\mathfrak{m} T)$ vanishes.
\end{itemize}
\begin{definition}
For an abelian group $G$, we define
\[
\iota \colon \mathbb{Z}[G] \longrightarrow \mathbb{Z}[G]; \, \sum_{g \in G}a_{g}g \mapsto \sum_{g \in G}a_{g}g^{-1}.
\]
For a $G$-module $M$, we define a $G$-module $M^{\iota}$ by $M^{\iota} := M \otimes_{\mathbb{Z}[G], \iota} \mathbb{Z}[G]$.
\end{definition}
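Concretely, the module $M^{\iota}$ has the same underlying abelian group as $M$, and an element $g \in G$ acts on $M^{\iota}$ through the involution $\iota$, that is, by the action of $g^{-1}$ on $M$:
\[
g \cdot m = g^{-1}m \quad \textrm{ for } g \in G \textrm{ and } m \in M^{\iota} = M.
\]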
Let $k_{\infty}/k$ be a $\mathbb{Z}_{p}^{s}$-extension such that $s \geq 1$ and no prime of $k$ splits completely in $k_{\infty}$. We put
\[
\Lambda := \mathcal{O}[[\Gal(k_{\infty}/k)]] \,\,\, \text{ and } \,\,\, \mathbb{T} := T \otimes_{\mathcal{O}} \mathcal{O}[[\Gal(k_{\infty}/k)]]^{\iota}.
\]
We also take a pro-$p$ abelian extension $\mathcal{K} \subset \overline{k}$ of $k$ satisfying $S_{\rm ram}(\mathcal{K}/k) \cap S = \emptyset$.
Let $\Omega$ denote the set of all finite extensions of $k$ in $\mathcal{K}$:
\[
\Omega := \{K \mid k \subset K \subset \mathcal{K}, \, [K \colon k] < \infty\}.
\]
For a field $K \in \Omega$, we put
\begin{itemize}
\item $S_{K} := S \cup S_{\rm ram}(K/k)$,
\item $\Lambda_{K} := \mathcal{O}[[\Gal(k_{\infty}K/k)]]$,
\item $\mathbb{T}_{K} := T \otimes_{\mathcal{O}} \Lambda_{K}^{\iota}$.
\end{itemize}
Since $p$ is coprime to the class number of $k$, we have the canonical isomorphism
\[
\Gal(k_{\infty}K/k) \stackrel{\sim}{\longrightarrow} \Gal(k_{\infty}/k) \times \Gal(K/k).
\]
In this paper, by using this isomorphism, we identify $\Lambda_{K}$ with the group ring $\Lambda[\Gal(K/k)]$.
\begin{definition}
Let $\mathfrak{q}$ be a prime of $k$.
Throughout this paper, we fix a lift ${\rm Fr}_{\mathfrak{q}} \in G_{k}$ of the arithmetic Frobenius element at $\mathfrak{q}$.
When $\mathfrak{q} \not\in S$, we define the Frobenius characteristic polynomial at $\mathfrak{q}$ by
\[
P_{\mathfrak{q}}(x) := \det(1- x \cdot {\rm Fr}_{\mathfrak{q}} \mid T) \in \mathcal{O}[x].
\]
\end{definition}
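For example, for the rank-one module $T = \mathcal{O}(1) \otimes \chi^{-1}$ treated in \S\ref{sec:euler G_{m}}, the element ${\rm Fr}_{\mathfrak{q}}$ acts on $T$ by multiplication by $\chi_{\rm cyc}({\rm Fr}_{\mathfrak{q}})\chi^{-1}({\rm Fr}_{\mathfrak{q}}) = N(\mathfrak{q})\chi({\rm Fr}_{\mathfrak{q}})^{-1}$, where $N(\mathfrak{q})$ denotes the norm of $\mathfrak{q}$, and hence
\[
P_{\mathfrak{q}}(x) = 1 - N(\mathfrak{q})\chi({\rm Fr}_{\mathfrak{q}})^{-1}x.
\]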
\begin{definition}
For each field $K \in \Omega$, let $M_{K}$ be a $\Lambda_{K}$-module.
Suppose that $\{M_{K}\}_{K \in \Omega}$ forms an inverse system of $\mathcal{O}[[\Gal(k_{\infty}\mathcal{K}/k)]]$-modules with transition maps $\varphi_{K, L} \colon M_{K} \longrightarrow M_{L}$ for $K, L \in \Omega$ with $L \subset K$.
We then define a module ${\rm ES}(\{M_{K}\}_{K\in\Omega})$ by
\[
{\rm ES}(\{M_{K}\}_{K\in\Omega}) := \left\{ (m_{K})_{K \in \Omega} \in \prod_{K \in \Omega}M_{K} \ \middle| \
\begin{array}{l}
\varphi_{K, L}(m_{K}) = \left(\prod_{\mathfrak{q} \in S_{K} \setminus S_{L}} P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1})\right) \cdot m_{L}
\\
\textrm{ for any fields $K, L \in \Omega$ with $L \subset K$ }
\end{array}
\right\}.
\]
\end{definition}
\begin{definition}
For a commutative ring $R$ and an $R$-module $M$,
we put
\[
M^{*} := \Hom_{R}(M,R).
\]
For an integer $r \geq 0$, we define an $r$-th exterior bi-dual ${\bigcap}^{r}_{R}M$ of $M$ by
\[
{\bigcap}^{r}_{R}M := \left({\bigwedge}^{r}_{R}(M^{*})\right)^{*}.
\]
\end{definition}
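We note that the pairing ${\bigwedge}^{r}_{R}M \times {\bigwedge}^{r}_{R}(M^{*}) \longrightarrow R$ defined by $(m_{1} \wedge \cdots \wedge m_{r}, f_{1} \wedge \cdots \wedge f_{r}) \mapsto \det(f_{i}(m_{j}))_{1 \leq i,j \leq r}$ induces a canonical homomorphism
\[
{\bigwedge}^{r}_{R}M \longrightarrow {\bigcap}^{r}_{R}M,
\]
which is an isomorphism when $M$ is a finitely generated projective $R$-module.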
\begin{lemma}[{\cite[Lemma~2.5]{hres}}]\label{lem:inverse-system}
Let $r \geq 0$ be an integer.
For any fields $K, L \in \Omega$ with $L \subset K$, the canonical homomorphism $\mathbb{T}_{K} \longrightarrow \mathbb{T}_{L}$ induces an $\mathcal{O}[[\Gal(k_{\infty}\mathcal{K}/k)]]$-homomorphism
\[
{\bigcap}^{r}_{\Lambda_{K}}H^{1}(G_{k,S_{K}},\mathbb{T}_{K}) \longrightarrow {\bigcap}^{r}_{\Lambda_{L}}H^{1}(G_{k,S_{L}},\mathbb{T}_{L}).
\]
Hence we have an inverse system of $\mathcal{O}[[\Gal(k_{\infty}\mathcal{K}/k)]]$-modules
\[
\left\{{\bigcap}^{r}_{\Lambda_{K}}H^{1}(G_{k,S_{K}},\mathbb{T}_{K})\right\}_{K\in \Omega}.
\]
\end{lemma}
\begin{definition}\label{def:euler system}
For an integer $r \geq 0$,
we define the module ${\rm ES}_{r}(T)$ of Euler systems of rank $r$ (for $T$) by
\[
{\rm ES}_{r}(T) := {\rm ES}\left(\left\{{\bigcap}^{r}_{\Lambda_{K}}H^{1}(G_{k,S_{K}},\mathbb{T}_{K})\right\}_{K \in \Omega}\right).
\]
\end{definition}
\section{Divisibility property}
We use the same notation as in \S\ref{sec:euler}.
Let $K, L \in \Omega$ be fields with $L \subset K$.
To simplify the notation, we set
\begin{itemize}
\item $\mathcal{G}_{K} := \Hom(\Gal(K/k), \overline{\mathbb{Q}}^{\times})$, and
\item $\Lambda_{K, \overline{\mathbb{Q}}_{p}} := \Lambda_{K} \otimes_{\mathcal{O}}\overline{\mathbb{Q}}_{p}$.
\end{itemize}
We have the canonical injection $\mathcal{G}_{L} \lhook\joinrel\longrightarrow \mathcal{G}_{K}$.
Hence we identify $\mathcal{G}_{L}$ with the subgroup $\{\psi \in \mathcal{G}_{K} \mid \psi(\Gal(K/L)) = 1\}$ of $\mathcal{G}_{K}$ by using this injection.
For a character $\psi \in \mathcal{G}_{K}$, we set
\[
e_{K,\psi} := \frac{1}{[K \colon k]} \sum_{g \in \Gal(K/k)}\psi(g)g^{-1} \in \overline{\mathbb{Q}}_{p}[\Gal(K/k)].
\]
We note that
\[
\Lambda_{K, \overline{\mathbb{Q}}_{p}}= \prod_{\psi \in \mathcal{G}_{K}}\Lambda_{K, \overline{\mathbb{Q}}_{p}}e_{K,\psi}
\]
and $\Lambda_{K, \overline{\mathbb{Q}}_{p}}e_{K,\psi} \cong \Lambda \otimes_{\mathcal{O}} \overline{\mathbb{Q}}_{p}$ is a principal ideal domain for any character $\psi \in \mathcal{G}_{K}$.
We write $\pi_{K,L} \colon \Lambda_{K, \overline{\mathbb{Q}}_{p}} \longrightarrow \Lambda_{L, \overline{\mathbb{Q}}_{p}}$ for the canonical projection.
For any character $\psi \in \mathcal{G}_{L}$, we have
\[
\pi_{K,L}(e_{K,\psi}) = e_{L, \psi}.
\]
Hence the homomorphism $\pi_{K,L}$ induces an isomorphism
\[
\Lambda_{K, \overline{\mathbb{Q}}_{p}}e_{K,\psi} \stackrel{\sim}{\longrightarrow} \Lambda_{L, \overline{\mathbb{Q}}_{p}}e_{L,\psi}.
\]
Let $\mathfrak{q} \not\in S$ be a prime of $k$.
We write $\mathcal{I}_{K,\mathfrak{q}}$ for the inertia subgroup of $\Gal(k_{\infty}K/k)$ at $\mathfrak{q}$. We then have the arithmetic Frobenius element ${\rm Fr}_{\mathfrak{q}} \in \Gal(k_{\infty}K/k)/\mathcal{I}_{K,\mathfrak{q}}$.
The canonical homomorphism $\mathcal{I}_{K,\mathfrak{q}} \longrightarrow \Gal(K/k)$ is injective
since the extension $k_{\infty}/k$ is unramified at $\mathfrak{q}$ (note that $\mathfrak{q} \nmid p$).
Hence we identify $\mathcal{I}_{K,\mathfrak{q}}$ with the inertia subgroup of $\Gal(K/k)$ at $\mathfrak{q}$.
\begin{definition}
Let $\mathfrak{q} \not\in S$ be a prime of $k$.
We define an element $f_{K,\mathfrak{q}} \in \Lambda_{K, \overline{\mathbb{Q}}_{p}}$ by
\[
f_{K,\mathfrak{q}}e_{K,\psi} :=
\begin{cases}
e_{K,\psi} & \textrm{ if } \,\,\, \psi(\mathcal{I}_{K,\mathfrak{q}}) \neq 1,
\\
P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1})e_{K,\psi} & \textrm{ if } \,\,\, \psi(\mathcal{I}_{K,\mathfrak{q}}) = 1
\end{cases}
\]
for each character $\psi \in \mathcal{G}_{K}$.
We put
\[
f_{K} := \prod_{\mathfrak{q} \in S_{K} \setminus S}f_{K,\mathfrak{q}}.
\]
Note that $f_{K} \in \Lambda_{K}[1/p]$ and $f_{K}$ is a regular element of $\Lambda_{K}[1/p]$.
\end{definition}
\begin{lemma}\label{lemma:euler-rel}
For any fields $K, L \in \Omega$ satisfying $L \subset K$, we have
\[
\pi_{K, L}(f_{K}) = \left(\prod_{\mathfrak{q} \in S_{K} \setminus S_{L}}P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1})\right)f_{L}.
\]
Hence $(f_{K})_{K \in \Omega} \in {\rm ES}(\{\Lambda_{K}[1/p]\}_{K \in \Omega})$.
\end{lemma}
\begin{proof}
For a prime $\mathfrak{q} \not\in S_{L}$ of $k$, we note that $f_{L, \mathfrak{q}} = P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1})$ since $\mathcal{I}_{L,\mathfrak{q}} = 1$.
Hence it suffices to show that
\[
\pi_{K,L}(f_{K,\mathfrak{q}})e_{L,\psi} = f_{L,\mathfrak{q}}e_{L,\psi}
\]
for each character $\psi \in \mathcal{G}_{L}$ and prime $\mathfrak{q} \not\in S$ of $k$.
This equality follows from the fact that the canonical map $\Gal(K/k) \longrightarrow \Gal(L/k)$ induces a surjection $\mathcal{I}_{K,\mathfrak{q}} \longrightarrow \mathcal{I}_{L,\mathfrak{q}}$.
\end{proof}
\begin{proposition}\label{prop:key_div}
For a system $(c_{K})_{K \in \Omega} \in {\rm ES}(\{\Lambda_{K}[1/p]\}_{K \in \Omega})$, we have
\[
c_{K} \in f_{K} \Lambda_{K}[1/p]
\]
for any field $K \in \Omega$.
\end{proposition}
\begin{proof}
Take a field $K \in \Omega$.
Let us prove this proposition by induction on $\#S_{\rm ram}(K/k)$.
When $\#S_{\rm ram}(K/k) = 0$, we have $f_{K} = 1$, and hence $c_{K} \in f_{K} \Lambda_{K}[1/p]$.
Suppose that $\#S_{\rm ram}(K/k) > 0$.
Take a character $\psi \in \mathcal{G}_{K}$.
Since the $\Lambda_{K}[1/p]$-algebra $\Lambda_{K, \overline{\mathbb{Q}}_{p}}$ is faithfully flat,
we only need to show that
\begin{align}\label{claim1}
c_{K}e_{K,\psi} \in f_{K} \Lambda_{K, \overline{\mathbb{Q}}_{p}} e_{K,\psi}.
\end{align}
If $\psi(\mathcal{I}_{K,\mathfrak{q}}) \neq 1$ for any prime $\mathfrak{q} \in S_{\rm ram}(K/k)$, then we have $f_{K}e_{K,\psi} = e_{K,\psi}$, and hence $c_{K}e_{K,\psi} \in f_{K} \Lambda_{K, \overline{\mathbb{Q}}_{p}} e_{K,\psi}$.
Suppose that there is a prime $\mathfrak{q} \in S_{\rm ram}(K/k)$ with $\psi(\mathcal{I}_{K,\mathfrak{q}}) = 1$.
Put $L := K^{\mathcal{I}_{K,\mathfrak{q}}}$.
Since $\pi_{K,L}$ induces an isomorphism
$\Lambda_{K, \overline{\mathbb{Q}}_{p}}e_{K,\psi} \stackrel{\sim}{\longrightarrow} \Lambda_{L, \overline{\mathbb{Q}}_{p}}e_{L,\psi}$,
the claim~\eqref{claim1} is equivalent to
\[
\pi_{K, L}(c_{K})e_{L, \psi} \in \pi_{K,L}(f_{K}) \Lambda_{L,\overline{\mathbb{Q}}_{p}}e_{L, \psi}.
\]
The definition of Euler systems implies that
\[
\pi_{K, L}(c_{K}) = \left(\prod_{\mathfrak{q} \in S_{K} \setminus S_{L}}P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1}) \right) c_{L}.
\]
Furthermore, since $\mathfrak{q} \in S_{\rm ram}(K/k) \setminus S_{\rm ram}(L/k)$,
we have $\# S_{\rm ram}(L/k) \leq \# S_{\rm ram}(K/k) - 1$, and hence the induction hypothesis shows $c_{L} \in f_{L} \Lambda_{L}[1/p]$.
Hence, by Lemma~\ref{lemma:euler-rel}, we conclude that
\begin{align*}
\pi_{K, L}(c_{K})e_{L, \psi}
&= \left(\prod_{\mathfrak{q} \in S_{K} \setminus S_{L}}P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1}) \right) c_{L} e_{L,\psi}
\\
&\in \left(\prod_{\mathfrak{q} \in S_{K} \setminus S_{L}}P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1}) \right) f_{L} \Lambda_{L, \overline{\mathbb{Q}}_{p}} e_{L, \psi}
\\
&= \pi_{K,L}(f_{K}) \Lambda_{L, \overline{\mathbb{Q}}_{p}} e_{L, \psi}.
\end{align*}
\end{proof}
\begin{corollary}
The homomorphism
\begin{align}\label{hom}
\varprojlim_{K \in \Omega} \Lambda_{K}[1/p] \longrightarrow {\rm ES}(\{\Lambda_{K}[1/p]\}_{K \in \Omega}); \, (\lambda_{K})_{K \in \Omega} \mapsto (\lambda_{K}f_{K})_{K\in \Omega}
\end{align}
is an isomorphism.
\end{corollary}
\begin{proof}
By Lemma~\ref{lemma:euler-rel}, the homomorphism~\eqref{hom} is well-defined.
Since $f_{K}$ is a regular element of $\Lambda_{K}[1/p]$ for any field $K \in \Omega$, the homomorphism~\eqref{hom} is injective. Furthermore, Proposition~\ref{prop:key_div} shows that the homomorphism~\eqref{hom} is surjective.
\end{proof}
\section{Characteristic ideals}
In this section, we recall the definition of the characteristic ideal of a finitely generated module over a noetherian ring and its basic properties studied in \cite[Appendix~C]{hres}.
\begin{definition}[{\cite[Definition~2.8]{hres}}] Let $R$ be a noetherian ring and $M$ a finitely generated $R$-module.
Take an exact sequence $0 \longrightarrow N \longrightarrow R^{s} \longrightarrow M \longrightarrow 0$ of $R$-modules with $s \geq 1$.
We then define the characteristic ideal of $M$ by
\[
{\rm char}_{R}(M) := {\rm im }\left({\bigcap}_{R}^{s}N \longrightarrow {\bigcap}_{R}^{s}R^{s} = R\right).
\]
We see that the characteristic ideal ${\rm char}_{R}(M)$ does not depend on the choice of the exact sequence $0 \longrightarrow N \longrightarrow R^{s} \longrightarrow M \longrightarrow 0$ of $R$-modules (see \cite[Remark~C.5]{hres}).
\end{definition}
\begin{remark}
Since the exterior bi-dual commutes with flat base change, for a flat homomorphism $R \longrightarrow R'$ of noetherian rings and a finitely generated $R$-module $M$, we have ${\rm char}_{R}(M)R' = {\rm char}_{R'}(M \otimes_{R} R')$.
\end{remark}
\begin{lemma}[{\cite[Proposition~C.7]{hres}}]\label{lemma:fitt}
Let $R$ be a noetherian ring and $M$ a finitely generated $R$-module.
\begin{itemize}
\item[(i)] We have ${\rm Fitt}_{R}^{0}(M) \subset {\rm char}_{R}(M)$.
\item[(ii)] If the projective dimension of $M$ is at most 1, then we have ${\rm Fitt}_{R}^{0}(M) = {\rm char}_{R}(M)$.
\end{itemize}
\end{lemma}
\begin{definition}
Let $R$ be a noetherian ring and $n \geq 0$ an integer.
\begin{itemize}
\item[(i)] The ring $R$ is said to satisfy the condition (G$_{n}$) if the local ring $R_{\mathfrak{r}}$ is Gorenstein for any prime $\mathfrak{r}$ of $R$ with ${\rm ht}(\mathfrak{r}) \leq n$.
\item[(ii)] We say that the ring $R$ satisfies Serre's condition (S$_{n}$) if the inequality
\[
{\rm depth}(R_{\mathfrak{r}}) \geq \min\{n, {\rm ht}(\mathfrak{r})\}
\]
holds for all prime ideals $\mathfrak{r}$ of $R$.
\end{itemize}
\end{definition}
\begin{lemma}[{\cite[Lemma~C.1]{hres}}]\label{lemma:bidual-inj}
Let $R$ be a noetherian ring satisfying (G$_{0}$) and (S$_{1}$).
For an integer $r \geq 0$ and an injection $M \lhook\joinrel\longrightarrow N$ of finitely generated $R$-modules,
the induced homomorphism ${\bigcap}^{r}_{R}M \longrightarrow {\bigcap}^{r}_{R}N$ is injective.
\end{lemma}
Recall that $(-)^{*} := \Hom_{R}(-,R)$ denotes the $R$-dual functor.
If a noetherian ring $R$ satisfies (G$_{0}$) and (S$_{1}$), the canonical homomorphism $I^{**} \longrightarrow R^{**} = R$ is injective for an ideal $I$ of $R$ by Lemma~\ref{lemma:bidual-inj}.
Hence by identifying $I^{**}$ with the image of this injection, we regard $I^{**}$ as an ideal of $R$ when $R$ satisfies (G$_{0}$) and (S$_{1}$).
\begin{lemma}[{\cite[Proposition~C.12]{hres}}]\label{lemma:ref}
Let $R$ be a noetherian ring satisfying (G$_{0}$) and (S$_{1}$). Let $M$ be a finitely generated $R$-module.
Then the ideal ${\rm char}_{R}(M)$ is reflexive, that is, ${\rm char}_{R}(M) = {\rm char}_{R}(M)^{**}$.
\end{lemma}
\begin{lemma}[{\cite[Lemma~C.13]{hres}}]\label{lemma:bidualeq}
Let $R$ be a noetherian ring satisfying (G$_{0}$) and (S$_{2}$).
Let $I$ and $J$ be ideals of $R$. If $IR_{\mathfrak{r}} \subset JR_{\mathfrak{r}}$ for any prime ideal $\mathfrak{r}$ of $R$ with ${\rm ht}(\mathfrak{r}) \leq 1$, then $I^{**} \subset J^{**}$.
\end{lemma}
\begin{corollary}[{\cite[Remark~C.14~(i)]{hres}}]\label{cor:pseudo}
Let $R$ be a noetherian ring satisfying (G$_{0}$) and (S$_{2}$).
If a finitely generated $R$-module $M$ is pseudo-isomorphic to an $R$-module $N$, then we have ${\rm char}_{R}(M) = {\rm char}_{R}(N)$.
\end{corollary}
\begin{proof}
The characteristic ideals of $M$ and $N$ are reflexive by Lemma~\ref{lemma:ref}.
Hence by Lemma~\ref{lemma:bidualeq}, we may assume that the Krull dimension of $R$ is at most $1$.
In this case, $M$ is isomorphic to $N$, and we have ${\rm char}_{R}(M) = {\rm char}_{R}(N)$.
\end{proof}
\begin{corollary}[{\cite[Remark~C.14~(ii)]{hres}}]\label{cor:char-fitt}
Let $R$ be a normal ring and $M$ a finitely generated $R$-module.
Then we have ${\rm char}_{R}(M) = {\rm Fitt}_{R}^{0}(M)^{**}$.
Hence, in this case, the notion of the characteristic ideal coincides with the usual one.
\end{corollary}
\begin{proof}
Lemma~\ref{lemma:ref} shows that ${\rm char}_{R}(M)$ is reflexive. Hence by Lemma~\ref{lemma:bidualeq}, we may assume that the Krull dimension of $R$ is at most $1$.
Thus $R$ is a regular ring, and the projective dimension of $M$ is at most $1$.
Lemma~\ref{lemma:fitt}~(ii) shows that ${\rm Fitt}_{R}^{0}(M) = {\rm char}_{R}(M)$.
\end{proof}
\begin{lemma}\label{lemma:torsion}
Let $R$ be a noetherian ring satisfying (G$_{0}$) and (S$_{2}$). Let $M$ be a finitely generated torsion $R$-module.
Let $r \in R$ be a regular element. If $r$ is $M$-regular, then we have
\[
{\rm char}_{R}(M) \cap rR = r \cdot {\rm char}_{R}(M).
\]
Furthermore, for any prime ideal $\mathfrak{r}$ of $R$ with ${\rm ht}(\mathfrak{r}) \leq 1$ and $r \in \mathfrak{r}$, the module $M \otimes_{R} R_{\mathfrak{r}}$ vanishes.
\end{lemma}
\begin{proof}
Take an element $x \in R$ with $y := rx \in {\rm char}_{R}(M)$. Let us show $x \in {\rm char}_{R}(M)$.
By Lemma~\ref{lemma:ref}, the ideal ${\rm char}_{R}(M)$ is reflexive. Since $R$ satisfies (S$_{2}$) and localization of modules is an exact functor, by Lemma~\ref{lemma:bidualeq}, we may assume that $R$ is a Cohen-Macaulay local ring with $\dim(R) \leq 1$.
Furthermore, we may also assume that $r \in \mathfrak{m}_{R}$, where $\mathfrak{m}_{R}$ denotes the maximal ideal of $R$.
Put $I := \mathrm{Ann}_{R}(M)$. Let us show that $I = R$. Assume for contradiction that $I \subset \mathfrak{m}_{R}$.
Since $r$ is $M$-regular, the homomorphism
\[
\End_{R}(M) \longrightarrow \End_{R}(M); \, f \mapsto rf
\]
is injective, and $r$ is also $R/I$-regular since the homomorphism $R/I \longrightarrow \End_{R}(M)$ is injective.
This fact implies that
\[
\dim(R/I) = \dim(R/(I + rR)) + 1.
\]
Furthermore, since $R$ is Cohen-Macaulay, we have
\[
\dim(R) = {\rm ht}(I) + \dim(R/I) = {\rm ht}(I) + \dim(R/(I + rR)) + 1.
\]
The fact that $M$ is a torsion $R$-module implies that ${\rm ht}(I) = {\rm grade}(I) \geq 1$, and hence $\dim(R) \geq 2$. This contradicts the fact that $\dim(R) \leq 1$.
Since $I = R$, the module $M$ vanishes, which implies $x \in R = {\rm char}_{R}(0) = {\rm char}_{R}(M)$.
\end{proof}
\begin{proposition}\label{prop:char-local}
For a field $K \in \Omega$ and a prime $\mathfrak{q} \not\in S$ of $k$, we have
\[
{\rm char}_{\Lambda_{K}}(H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{T}_{K}))\Lambda_{K}[1/p] = f_{K,\mathfrak{q}} \Lambda_{K}[1/p].
\]
\end{proposition}
\begin{proof}
Put $\mathbb{V}_{K} := \mathbb{T}_{K} \otimes_{\mathcal{O}} \overline{\mathbb{Q}}_{p}$ and $H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{V}_{K}) := H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{T}_{K}) \otimes_{\mathcal{O}} \overline{\mathbb{Q}}_{p}$.
Since the ring homomorphism $\Lambda_{K}[1/p] \lhook\joinrel\longrightarrow \Lambda_{K, \overline{\mathbb{Q}}_{p}}$ is faithfully flat,
it suffices to prove that
\[
{\rm char}_{\Lambda_{K, \overline{\mathbb{Q}}_{p}}}(H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{V}_{K}))e_{K,\psi} = f_{K,\mathfrak{q}} \Lambda_{K, \overline{\mathbb{Q}}_{p}}e_{K,\psi}
\]
for any character $\psi \in \mathcal{G}_{K}$.
By \cite[Corollary~B.3.6]{R}, we have an isomorphism
\begin{align}\label{isom:unr}
\mathbb{V}_{K}^{\mathcal{I}_{K,\mathfrak{q}}}/(1-{\rm Fr}_{\mathfrak{q}})\mathbb{V}_{K}^{\mathcal{I}_{K,\mathfrak{q}}} \cong H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{V}_{K}).
\end{align}
Let $\psi \in \mathcal{G}_{K}$ be a character. When $\psi(\mathcal{I}_{K,\mathfrak{q}}) \neq 1$, the isomorphism~\eqref{isom:unr} shows that the module
$H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{V}_{K})e_{K,\psi}$ vanishes.
Hence we have
\[
{\rm char}_{\Lambda_{K, \overline{\mathbb{Q}}_{p}}}(H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{V}_{K}))e_{K,\psi} = \Lambda_{K, \overline{\mathbb{Q}}_{p}}e_{K,\psi} = f_{K,\mathfrak{q}} \Lambda_{K, \overline{\mathbb{Q}}_{p}}e_{K,\psi}.
\]
If $\psi(\mathcal{I}_{K,\mathfrak{q}}) = 1$, the isomorphism~\eqref{isom:unr} shows that we have an exact sequence
\[
0 \longrightarrow \mathbb{V}_{K}e_{K,\psi} \xrightarrow{1-{\rm Fr}_{\mathfrak{q}}} \mathbb{V}_{K}e_{K,\psi} \longrightarrow H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{V}_{K})e_{K,\psi} \longrightarrow 0.
\]
Since $\mathbb{V}_{K}e_{K,\psi}$ is a free $\Lambda_{K, \overline{\mathbb{Q}}_{p}}e_{K,\psi}$-module, we have
\[
{\rm char}_{\Lambda_{K, \overline{\mathbb{Q}}_{p}}}(H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{V}_{K}))e_{K,\psi} = \det(1-{\rm Fr}_{\mathfrak{q}} \mid \mathbb{V}_{K}e_{K,\psi}).
\]
Furthermore, since $\mathbb{T}_{K} = T \otimes_{\mathcal{O}} \Lambda_{K}^{\iota}$, we conclude that
\[
\det(1-{\rm Fr}_{\mathfrak{q}} \mid \mathbb{V}_{K}e_{K,\psi}) = P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1})e_{K,\psi},
\]
which completes the proof.
\end{proof}
\section{On the structure of the module of Euler systems}\label{sec:euler G_{m}}
Let $k$ be a totally real field with degree $r$ and $\chi \colon G_{k} \longrightarrow \overline{\mathbb{Q}}^{\times}$
a non-trivial finite order even character.
Put $k_{\chi} := \overline{k}^{\ker(\chi)}$.
Let $p$ be an odd prime coprime to the class number of $k$ and to $[k_{\chi} \colon k]$.
We put
\[
\mathcal{O} := \mathbb{Z}_{p}[\im(\chi)] \, \text{ and } \, S := S_{\infty}(k) \cup S_{p}(k) \cup S_{\rm ram}(k_{\chi} /k).
\]
Let
\[
T := \mathcal{O}(1) \otimes \chi^{-1},
\]
that is, $T \cong \mathcal{O}$ as $\mathcal{O}$-modules and the Galois group $G_{k,S}$ acts on $T$ via the character
$\chi_{\rm cyc}\chi^{-1}$, where $\chi_{\rm cyc}$ denotes the cyclotomic character of $k$.
We write $k_{\infty}/k$ for the cyclotomic $\mathbb{Z}_{p}$-extension of $k$.
We also set
\[
\Lambda := \mathcal{O}[[\Gal(k_{\infty}/k)]] \ \text{ and } \ \mathbb{T} := T \otimes_{\mathcal{O}} \Lambda^{\iota}.
\]
Let $\mathcal{K}$ denote the maximal pro-$p$ abelian extension of $k$ satisfying $S_{\rm ram}(\mathcal{K}/k) \cap S = \emptyset$.
Let $\Omega$, $\Lambda_{K}$, and $\mathbb{T}_{K}$ be as in \S\ref{sec:euler}.
For a field $K \in \Omega$ and an integer $i \geq 0$, put
\[
H^{i}(G_{k_{p}}, \mathbb{T}_{K}) := \bigoplus_{\mathfrak{p} \in S_{p}(k)}H^{i}(G_{k_{\mathfrak{p}}},\mathbb{T}_{K}).
\]
\subsection{Stickelberger elements}
In this subsection, we recall the definition of Stickelberger elements.
The contents of this subsection are based on \cite[\S3.1]{hres}.
Let $K \in \Omega$ be a field and $n \geq 0$ an integer.
Let $\mu_{p^{n}}$ denote the set of $p^{n}$-th roots of unity in $\overline{\mathbb{Q}}$ and $\mu_{p^{\infty}} := \bigcup_{n>0}\mu_{p^{n}}$.
We set
\[
G_{K,n} := \Gal(k_{\chi}K(\mu_{p^{n}})/k) = \Gal(k_{\chi}/k) \times \Gal(K/k) \times \Gal(k(\mu_{p^{n}})/k).
\]
Let
\[
\omega \colon G_{k,1} \longrightarrow \Gal(k(\mu_{p})/k) \lhook\joinrel\longrightarrow \mathbb{Z}_{p}^{\times}
\]
denote the Teichm\"uller character.
We write $\zeta_{k_{\chi}K,n,S}(s,\sigma)$ for the partial zeta function for $\sigma \in G_{K,n}$:
\[
\zeta_{k_{\chi}K, n, S}(s,\sigma) := \sum_{(\mathfrak{a}, k_{\chi}K(\mu_{p^{n}})/k) = \sigma}N(\mathfrak{a})^{-s},
\]
where $\mathfrak{a}$ runs over all integral ideals of $k$ coprime to all the primes in $S_{K}$ such that the Artin symbol $(\mathfrak{a}, k_{\chi}K(\mu_{p^{n}})/k)$ is equal to $\sigma$ and $N(\mathfrak{a})$ denotes the norm of $\mathfrak{a}$.
We put
\[
\theta_{k_{\chi}K,n,S} := \sum_{\sigma \in G_{K,n}}\zeta_{k_{\chi}K,n,S}(0,\sigma)\sigma^{-1}
\]
which is contained in $\mathbb{Q}[G_{K,n}]$ (see \cite{Sie70}).
The elements $\{\theta_{k_{\chi}K,n,S}\}_{n>0}$ are norm-coherent by \cite[Proposition~IV.1.8]{tatebook}.
In addition, Deligne--Ribet proved in \cite{DR} that the element $e_{\omega\chi^{-1}}\theta_{k_{\chi}K,n,S}$ is contained in
$\mathcal{O}[G_{K,n}]e_{\omega\chi^{-1}}$.
Here
\[
e_{\omega\chi^{-1}} := \frac{1}{[k_{\chi}(\mu_{p}) \colon k]} \sum_{\sigma \in \Gal(k_{\chi}(\mu_{p})/k)}\omega\chi^{-1}(\sigma)\sigma^{-1}.
\]
Hence we obtain an element
\[
e_{\omega\chi^{-1}}\theta_{k_{\chi}K, \infty,S} := \varprojlim_{n>0}e_{\omega\chi^{-1}}\theta_{k_{\chi}K, n, S} \in \mathcal{O}[[\Gal(k_{\chi}K(\mu_{p^{\infty}})/k)]]e_{\omega\chi^{-1}}.
\]
Let ${\rm Tw} \colon \mathcal{O}[[\Gal(k_{\chi}K(\mu_{p^{\infty}})/k)]] \longrightarrow \mathcal{O}[[\Gal(k_{\chi}K(\mu_{p^{\infty}})/k)]]$
denote the homomorphism induced by $\sigma \mapsto \chi_{\rm cyc}(\sigma)\sigma^{-1}$ for $\sigma \in \Gal(k_{\chi}K(\mu_{p^{\infty}})/k)$.
Then we get an element
\[
\tilde{L}_{p,K}^{\chi} := {\rm Tw}(e_{\omega\chi^{-1}}\theta_{k_{\chi}K,\infty,S}) \in \Lambda_{K}.
\]
For each prime $\mathfrak{q} \not\in S$, we set
\[
u_{\mathfrak{q}} := \chi({\rm Fr}_{\mathfrak{q}})^{-1}\chi_{\rm cyc}({\rm Fr}_{\mathfrak{q}}){\rm Fr}_{\mathfrak{q}}^{-1}.
\]
For a field $K \in \Omega$, we define a modified $p$-adic $L$-function $L_{p,K}^{\chi} \in \Lambda_{K}$ to be
\[
L_{p,K}^{\chi} := \left(\prod_{\mathfrak{q} \in S_{K} \setminus S}(-u_{\mathfrak{q}}) \right) \cdot \tilde{L}_{p,K}^{\chi}.
\]
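We note that, since ${\rm Fr}_{\mathfrak{q}}$ acts on $T = \mathcal{O}(1) \otimes \chi^{-1}$ by multiplication by $\chi_{\rm cyc}({\rm Fr}_{\mathfrak{q}})\chi({\rm Fr}_{\mathfrak{q}})^{-1}$, we have
\[
P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1}) = 1 - u_{\mathfrak{q}} \ \textrm{ in } \Lambda_{K}
\]
for each prime $\mathfrak{q} \not\in S$.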
\begin{lemma}[{\cite[Lemma~3.5]{hres}}]\label{lem:rel2}
We have $\{L_{p,K}^{\chi}\}_{K \in \Omega} \in \mathrm{ES}_{0}(T)$.
\end{lemma}
\subsection{Iwasawa modules and characteristic ideals}
In this subsection, we introduce several Iwasawa modules and compute their characteristic ideals.
We set
\[
e_{\chi} := \frac{1}{[k_{\chi}(\mu_{p}) \colon k]} \sum_{\sigma \in \Gal(k_{\chi}(\mu_{p})/k)}\chi(\sigma)\sigma^{-1}.
\]
\begin{definition}
Let $K \in \Omega$ be a field.
\begin{itemize}
\item[(i)] We write $M_{k_{\chi}K, \infty}$ for the maximal $p$-ramified pro-$p$ abelian extension of $k_{\chi}K(\mu_{p^{\infty}})$ and set
\begin{align*}
X_{K}^{\chi} := e_{\chi}\left(\Gal(M_{k_{\chi}K,\infty}/k_{\chi}K(\mu_{p^{\infty}})) \otimes_{\mathbb{Z}_{p}} \mathcal{O}\right).
\end{align*}
\item[(ii)] We write $M_{k_{\chi}K, S, \infty}$ for the maximal $S_{K}$-ramified pro-$p$ abelian extension of $k_{\chi}K(\mu_{p^{\infty}})$ and set
\begin{align*}
X_{K, S}^{\chi} := e_{\chi}\left(\Gal(M_{k_{\chi}K,S,\infty}/k_{\chi}K(\mu_{p^{\infty}})) \otimes_{\mathbb{Z}_{p}} \mathcal{O}\right).
\end{align*}
\item[(iii)] We write $N_{k_{\chi}K, \infty}$ for the maximal unramified pro-$p$ abelian extension of $k_{\chi}K(\mu_{p^{\infty}})$ and set
\begin{align*}
Y_{K}^{\chi} := e_{\chi}\left(\Gal(N_{k_{\chi}K,\infty}/k_{\chi}K(\mu_{p^{\infty}})) \otimes_{\mathbb{Z}_{p}} \mathcal{O}\right).
\end{align*}
\item[(iv)] We write $H_{K, p}^{\chi}$ for the $\Lambda_{K}$-submodule of $Y_{K}^{\chi}$ generated by the set of the Frobenius elements $\{{\rm Frob}_{\mathfrak{p}} \mid \mathfrak{p} \in S_{p}(k_{\chi}K(\mu_{p^{\infty}})) \}$.
\end{itemize}
\end{definition}
Let $K \in \Omega$ be a field.
By the weak Leopoldt conjecture proved by Iwasawa in \cite{Iwa73a}, the localization map
\[
H^{1}(G_{k,S_{K}},\mathbb{T}_{K}) \longrightarrow H^{1}(G_{k_{p}}, \mathbb{T}_{K})
\]
is injective. Hence by global duality, we obtain the following exact sequences of $\Lambda_{K}$-modules (see \cite[(2) and (3) in page~12]{hres}):
\begin{align}\label{exact:fundamental}
0 \longrightarrow H^{1}(G_{k,S_{K}},\mathbb{T}_{K}) \longrightarrow H^{1}(G_{k_{p}}, \mathbb{T}_{K}) \longrightarrow X_{K}^{\chi}
\longrightarrow Y_{K}^{\chi}/H_{K,p}^{\chi} \longrightarrow 0,
\end{align}
\begin{align}\label{exact:diff}
0 \longrightarrow \bigoplus_{\mathfrak{q} \in S_{K} \setminus S} H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{T}_{K}) \longrightarrow
X_{K,S}^{\chi} \longrightarrow X_{K}^{\chi} \longrightarrow 0.
\end{align}
We put
\[
C_{K} := {\rm coker}\left(H^{1}(G_{k,S_{K}},\mathbb{T}_{K}) \longrightarrow H^{1}(G_{k_{p}}, \mathbb{T}_{K}) \right).
\]
By the exact sequence~\eqref{exact:fundamental}, we get the following exact sequences
\begin{align}\label{exact:1}
0 \longrightarrow H^{1}(G_{k,S_{K}},\mathbb{T}_{K}) \longrightarrow H^{1}(G_{k_{p}}, \mathbb{T}_{K}) \longrightarrow C_{K} \longrightarrow 0,
\end{align}
\begin{align}\label{exact:2}
0 \longrightarrow C_{K} \longrightarrow X_{K}^{\chi}
\longrightarrow Y_{K}^{\chi}/H_{K,p}^{\chi} \longrightarrow 0.
\end{align}
The following lemma was proved by Iwasawa in \cite{Iwa73b}.
\begin{lemma}[{\cite{Iwa73b}}]\label{lemma:p-torsion}
If $X_{k}^{\chi}$ is $p$-torsion-free, then so is $X_{K,S}^{\chi}$ for any field $K \in \Omega$.
\end{lemma}
\begin{proposition}[{\cite[Proposition~2.2]{Kur03}}]\label{prop:char-wiles}
Suppose that $X_{k}^{\chi}$ is $p$-torsion-free.
For a field $K \in \Omega$, we have
\[
{\rm char}_{\Lambda_{K}}(X_{K,S}^{\chi}) = L_{p,K}^{\chi} \Lambda_{K}.
\]
\end{proposition}
\begin{proof}
Note that $X_{K,S}^{\chi}$ is $p$-torsion-free by Lemma~\ref{lemma:p-torsion}.
By using the Iwasawa main conjecture proved by Wiles in \cite{Wiles90}, Kurihara proved that
\[
{\rm Fitt}_{\Lambda_{K}}^{0}(X_{K,S}^{\chi}) = L_{p,K}^{\chi} \Lambda_{K}.
\]
Since $L_{p,K}^{\chi}$ is a regular element of $\Lambda_{K}$, we have $ {\rm Fitt}_{\Lambda_{K}}^{0}(X_{K,S}^{\chi}) = {\rm Fitt}_{\Lambda_{K}}^{0}(X_{K,S}^{\chi})^{**}$.
Hence by Lemmas~\ref{lemma:ref}~and~\ref{lemma:bidualeq}, it suffices to show that
\[
{\rm Fitt}_{\Lambda_{K}}^{0}(X_{K,S}^{\chi}) \Lambda_{K, \mathfrak{r}} = {\rm char}_{\Lambda_{K}}(X_{K,S}^{\chi}) \Lambda_{K,\mathfrak{r}}
\]
for any prime $\mathfrak{r}$ of $\Lambda_{K}$ with ${\rm ht}(\mathfrak{r}) \leq 1$. If $p \not\in \mathfrak{r}$, then the ring $\Lambda_{K, \mathfrak{r}}$ is regular, and hence we have
\[
{\rm Fitt}_{\Lambda_{K}}^{0}(X_{K,S}^{\chi}) \Lambda_{K, \mathfrak{r}} = L_{p,K}^{\chi}\Lambda_{K, \mathfrak{r}} =
(L_{p,K}^{\chi}\Lambda_{K, \mathfrak{r}})^{**} =
{\rm char}_{\Lambda_{K}}(X_{K,S}^{\chi}) \Lambda_{K,\mathfrak{r}}
\]
by Corollary~\ref{cor:char-fitt}.
If $p \in \mathfrak{r}$, the module $X_{K,S}^{\chi} \otimes_{\Lambda_{K}}\Lambda_{K,\mathfrak{r}}$ vanishes by Lemma~\ref{lemma:torsion}, which implies ${\rm Fitt}_{\Lambda_{K}}^{0}(X_{K,S}^{\chi}) \Lambda_{K, \mathfrak{r}} = \Lambda_{K, \mathfrak{r}} = {\rm char}_{\Lambda_{K}}(X_{K,S}^{\chi}) \Lambda_{K,\mathfrak{r}}$.
\end{proof}
We note that $f_{K}^{-1}L_{p, K}^{\chi} \in \Lambda_{K}[1/p]$ by Proposition~\ref{prop:key_div} and Lemma~\ref{lem:rel2}.
\begin{proposition}\label{prop:char}
For a field $K \in \Omega$, we have
\[
{\rm char}_{\Lambda_{K}}(X_{K}^{\chi})\Lambda_{K}[1/p] = f_{K}^{-1}L_{p, K}^{\chi} \Lambda_{K}[1/p].
\]
\end{proposition}
\begin{proof}
Since $\Lambda_{K}[1/p]$ is a finite product of principal ideal domains,
the notion of the characteristic ideal coincides with the usual one by Corollary~\ref{cor:char-fitt}.
In particular, the characteristic ideal is additive on short exact sequences of finitely generated $\Lambda_{K}[1/p]$-modules.
Hence the exact sequence~\eqref{exact:diff} shows that
\[
{\rm char}_{\Lambda_{K}}(X_{K,S}^{\chi})\Lambda_{K}[1/p]
= {\rm char}_{\Lambda_{K}}(X_{K}^{\chi}) \prod_{\mathfrak{q} \in S_{K} \setminus S} {\rm char}_{\Lambda_{K}}(H^{1}(G_{k_{\mathfrak{q}}}, \mathbb{T}_{K}))\Lambda_{K}[1/p].
\]
Thus this proposition follows from Propositions~\ref{prop:char-local}~and~\ref{prop:char-wiles}.
\end{proof}
We recall the conjecture concerning the structure of $Y_{K}^{\chi}$ proposed by Greenberg in \cite{Gre}.
\begin{conjecture}[Greenberg]\label{conj:greenberg}
The $\Lambda_{K}$-module $Y_{K}^{\chi}$ is pseudo-null for any field $K \in \Omega$.
\end{conjecture}
\subsection{On the structure of Euler systems for the multiplicative group}
Recall that $r = [k \colon \mathbb{Q}]$.
Suppose that
\[
H^{0}(G_{k_{p}},T/\mathfrak{m} T) = H^{2}(G_{k_{p}},T/\mathfrak{m} T) = 0.
\]
Then, for a field $K \in \Omega$, we have an isomorphism $H^{1}(G_{k_{p}}, \mathbb{T}_{K}) \cong \Lambda_{K}^{r}$ such that the following diagram commutes for any field $L \in \Omega$ with $L \subset K$:
\[
\xymatrix{
H^{1}(G_{k_{p}}, \mathbb{T}_{K}) \ar[r]^-{\cong} \ar[d] & \Lambda_{K}^{r} \ar[d]
\\
H^{1}(G_{k_{p}}, \mathbb{T}_{L}) \ar[r]^-{\cong} & \Lambda_{L}^{r},
}
\]
where the left vertical arrow is induced by the canonical homomorphism $\mathbb{T}_{K} \longrightarrow \mathbb{T}_{L}$ and the right vertical arrow is the canonical projection.
The localization map at $p$ induces a homomorphism
\[
{\bigcap}^{r}_{\Lambda_{K}}H^{1}(G_{k,S_{K}}, \mathbb{T}_{K}) \longrightarrow {\bigcap}^{r}_{\Lambda_{K}}H^{1}(G_{k_{p}}, \mathbb{T}_{K}) \cong {\bigcap}^{r}_{\Lambda_{K}}\Lambda_{K}^{r} = \Lambda_{K},
\]
and we obtain a homomorphism
\[
{\rm ES}_{r}(T) \longrightarrow {\rm ES}_{0}(T).
\]
\begin{proposition}[{\cite[Proposition~2.10]{hres}}]\label{prop:image}
The homomorphism ${\rm ES}_{r}(T) \longrightarrow {\rm ES}_{0}(T)$ is injective and we have
\[
{\rm im}\left({\rm ES}_{r}(T) \longrightarrow {\rm ES}_{0}(T)\right) = {\rm ES}_{0}(T) \cap \prod_{K \in \Omega}{\rm char}_{\Lambda_{K}}(C_{K}).
\]
\end{proposition}
\begin{proof}
Let $K \in \Omega$ be a field.
Recall that, by \eqref{exact:1}, we have an exact sequence of $\Lambda_{K}$-modules
\[
0 \longrightarrow H^{1}(G_{k,S_{K}},\mathbb{T}_{K}) \longrightarrow H^{1}(G_{k_{p}}, \mathbb{T}_{K}) \longrightarrow C_{K} \longrightarrow 0.
\]
Hence Lemma~\ref{lemma:bidual-inj} shows that the homomorphism ${\rm ES}_{r}(T) \longrightarrow {\rm ES}_{0}(T)$ is injective.
Furthermore, the definition of the characteristic ideal shows that
\[
{\rm im}\left({\bigcap}^{r}_{\Lambda_{K}}H^{1}(G_{k,S_{K}}, \mathbb{T}_{K}) \longrightarrow {\bigcap}^{r}_{\Lambda_{K}}H^{1}(G_{k_{p}}, \mathbb{T}_{K}) \cong {\bigcap}^{r}_{\Lambda_{K}}\Lambda_{K}^{r} = \Lambda_{K}\right) = {\rm char}_{\Lambda_{K}}(C_{K}).
\]
\end{proof}
\begin{theorem}\label{thm:main}
Suppose that
\begin{itemize}
\item both $H^{0}(G_{k_{p}},T/\mathfrak{m} T)$ and $H^{2}(G_{k_{p}},T/\mathfrak{m} T)$ vanish,
\item the module $X_{k}^{\chi}$ is $p$-torsion-free,
\item the Greenberg conjecture (Conjecture~\ref{conj:greenberg}) holds true.
\end{itemize}
Then the $\mathcal{O}[[\Gal(\mathcal{K}/k)]]$-module ${\rm ES}_{r}(T)$ is free of rank $1$.
Furthermore, there exists a basis $\{c_{K}\}_{K \in \Omega} \in {\rm ES}_{r}(T)$ such that
its image under the injection ${\rm ES}_{r}(T) \lhook\joinrel\longrightarrow {\rm ES}_{0}(T)$ is $\{L_{p,K}^{\chi}\}_{K \in \Omega}$.
\end{theorem}
\begin{proof}
Since we assume the Greenberg conjecture,
by the exact sequence~\eqref{exact:2} and Corollary~\ref{cor:pseudo},
we have
\[
{\rm char}_{\Lambda_{K}}(X_{K}^{\chi}) = {\rm char}_{\Lambda_{K}}(C_{K})
\]
for any field $K \in \Omega$.
Hence, by Proposition~\ref{prop:image}, it suffices to show that the homomorphism
\begin{align}\label{hom2}
\mathcal{O}[[\Gal(\mathcal{K}/k)]] \longrightarrow
{\rm ES}_{0}(T) \cap \prod_{K \in \Omega}{\rm char}_{\Lambda_{K}}(X_{K}^{\chi}); \, \lambda \mapsto \{\lambda L_{p,K}^{\chi}\}_{K \in \Omega}
\end{align}
is an isomorphism.
Since $L_{p,K}^{\chi}$ is a regular element of $\Lambda_{K}$ for any field $K \in \Omega$, the homomorphism~\eqref{hom2} is injective.
To show the surjectivity of the homomorphism~\eqref{hom2}, take an Euler system
\[
(c_{K})_{K \in \Omega} \in {\rm ES}_{0}(T) \cap \prod_{K \in \Omega}{\rm char}_{\Lambda_{K}}(X_{K}^{\chi}).
\]
Let $K \in \Omega$ be a field.
Proposition~\ref{prop:char} shows that there is an element $\alpha_{K} \in \Lambda_{K}[1/p]$ such that
\[
c_{K} = \alpha_{K}f_{K}^{-1}L_{p,K}^{\chi}.
\]
Lemmas~\ref{lemma:euler-rel}~and~\ref{lem:rel2} imply that $\{f_{K}^{-1}L_{p,K}^{\chi}\}_{K \in \Omega}$ are norm-coherent.
Hence for any field $L \in \Omega$ with $L \subset K$, we have
\begin{align*}
\pi_{K,L}(\alpha_{K})f_{L}^{-1}L_{p,L}^{\chi}
&= \pi_{K,L}(c_{K})
\\
&= \left(\prod_{\mathfrak{q} \in S_{K} \setminus S_{L}}P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1})\right)c_{L}
\\
&= \left(\prod_{\mathfrak{q} \in S_{K} \setminus S_{L}}P_{\mathfrak{q}}({\rm Fr}_{\mathfrak{q}}^{-1})\right)\alpha_{L}f_{L}^{-1}L_{p,L}^{\chi}.
\end{align*}
Since $f_{K}^{-1}L_{p,K}^{\chi}$ is a regular element of $\Lambda_{K}[1/p]$, we conclude that $(\alpha_{K})_{K \in \Omega} \in {\rm ES}(\{\Lambda_{K}[1/p]\}_{K \in \Omega})$.
Therefore, Proposition~\ref{prop:key_div} shows that
\[
\alpha_{K} \in f_{K} \Lambda_{K}[1/p],
\]
and we have
\[
c_{K} \in \Lambda_{K} \cap L_{p,K}^{\chi} \Lambda_{K}[1/p].
\]
We note that $ {\rm char}_{{\Lambda}_{K}}(X_{K,S}^{\chi}) = L_{p,K}^{\chi} \Lambda_{K}$ by Proposition~\ref{prop:char-wiles}.
The module $X_{K,S}^{\chi}$ is $p$-torsion-free by Lemma~\ref{lemma:p-torsion} since we assume that $X_{k}^{\chi}$ is $p$-torsion-free.
Hence Lemma~\ref{lemma:torsion} implies that
\[
\Lambda_{K} \cap L_{p,K}^{\chi} \Lambda_{K}[1/p] = \bigcup_{m>0}p^{-m}\left(p^{m}\Lambda_{K} \cap L_{p,K}^{\chi} \Lambda_{K} \right) = L_{p,K}^{\chi} \Lambda_{K}.
\]
This shows that $c_{K} \in L_{p, K}^{\chi} \Lambda_{K}$, and hence the homomorphism~\eqref{hom2} is surjective.
\end{proof}
\subsection{Vertical determinantal systems }
For an integer $n \geq 0$, let $k_{n}$ denote the subextension $k_{n}/k$ of $k_{\infty}/k$ satisfying $[k_{n} \colon k] = p^{n}$.
For a field $K\in \Omega$, put
\[
K_{n} := k_{n}K \, \textrm{ and } \, T_{K_{n}} := T \otimes_{\mathcal{O}} \mathcal{O}[\Gal(K_{n}/k)]^{\iota}.
\]
\begin{definition}[{\cite[Definition~2.9]{sbA}}]
For a field $K\in \Omega$ and an integer $n \geq 0$, we set
\[
{\bf R}\Gamma_{c}(G_{k, S_{K_{n}}}, T_{K_{n}}) := {\bf R}\Hom_{\mathcal{O}}({\bf R}\Gamma(G_{k, S_{K_{n}}}, T_{K_{n}}), \mathcal{O})[-3] \oplus \left(\bigoplus_{v \in S_{\infty}(k)} H^{0}(G_{k_{v}}, T_{K_{n}})\right)[-1].
\]
As explained in \cite[Definition~2.9]{sbA}, we have the canonical homomorphism
\[
{\det}^{-1}({\bf R}\Gamma_{c}(G_{k, S_{K'_{m}}}, T_{K'_{m}}))^{\iota} \longrightarrow {\det}^{-1}({\bf R}\Gamma_{c}(G_{k, S_{K_{n}}}, T_{K_{n}}))^{\iota}
\]
for any field $K' \in \Omega$ with $K \subset K'$ and integer $m$ with $n \leq m$.
We then define the $\Lambda[[\Gal(\mathcal{K}/k)]]$-module of vertical determinantal systems by
\[
{\rm VS}(T) := \varprojlim_{K \in \Omega, \, n \geq 0} {\det}^{-1}({\bf R}\Gamma_{c}(G_{k, S_{K_{n}}}, T_{K_{n}}))^{\iota}.
\]
\cite[Proposition~2.10]{sbA} shows that the $\Lambda[[\Gal(\mathcal{K}/k)]]$-module ${\rm VS}(T)$ is free of rank $1$.
\end{definition}
\begin{remark}
As Burns and Sano mentioned in \cite[Remark~2.11]{sbA}, the equivariant Tamagawa number conjecture predicts the existence of a unique basis of ${\det}^{-1}({\bf R}\Gamma_{c}(G_{k, S_{K_{n}}}, T_{K_{n}}))^{\iota}$ such that its image under a period-regulator isomorphism is
the leading term of an $L$-series.
\end{remark}
Recall that $\overline{T}$ denotes the residual representation of $T$ and $r = [k\colon \mathbb{Q}]$.
Let ${\rm KS}_{r}(\overline{T})$ and ${\rm SS}_{r}(\overline{T})$ denote the modules of Kolyvagin and Stark systems of rank $r$ associated with the canonical Selmer structure on $\overline{T}$, respectively (see \cite[Definition~3.2.1]{MRkoly} for the definition of the canonical Selmer structure and \cite[Definitions~3.1~and~4.1]{sbA} for the definitions of Kolyvagin and Stark systems).
In the paper \cite{sbA}, Burns and Sano proved the following
\begin{proposition}[{\cite[Theorems~3.12~and~4.16]{sbA}}]\label{prop:sbA}
We have the following commutative diagram:
\begin{align*}
\xymatrix{
{\rm VS}(T) \ar[r] \ar[d] & {\rm ES}_{r}(T) \ar[r] & {\rm KC}_{r}(\overline{T})
\\
{\rm SS}_{r}(\overline{T}) \ar[rr] & & {\rm KS}_{r}(\overline{T}). \ar@{^{(}->}[u]
}
\end{align*}
Here ${\rm KC}_{r}(\overline{T})$ denotes the module of Kolyvagin collections of rank $r$ (see \cite[Definition~4.11]{sbA}).
Furthermore, if the module $H^{2}(G_{k_{p}}, \overline{T})$ vanishes, then the homomorphism ${\rm VS}(T) \longrightarrow {\rm SS}_{r}(\overline{T})$ is surjective.
\end{proposition}
Furthermore, Burns, Sakamoto, and Sano proved in the paper \cite{bss} the following
\begin{proposition}[{\cite[Theorems~5.2~(ii)~and~6.12]{bss}}]\label{prop:bss}\
\begin{itemize}
\item[(i)] The image of the homomorphism ${\rm ES}_{r}(T) \longrightarrow {\rm KC}_{r}(\overline{T})$ is contained in ${\rm KS}_{r}(\overline{T})$.
\item[(ii)] If the module $H^{2}(G_{k_{p}}, \overline{T})$ vanishes, the homomorphism ${\rm SS}_{r}(\overline{T}) \longrightarrow {\rm KS}_{r}(\overline{T})$ is an isomorphism.
\end{itemize}
\end{proposition}
\begin{proof}
Claim~(i) follows from \cite[Theorem~6.12]{bss}.
Since claim~(ii) follows from \cite[Theorem~5.2~(ii)]{bss}, we only need to check that \cite[Hypotheses~3.2,~3.3,~and~4.2]{bss} are satisfied.
\cite[Lemma~6.1.5]{MRkoly} implies that $\overline{T}$ satisfies \cite[Hypotheses~3.2~and~3.3]{bss}.
By \cite[Lemma~3.7.1~(i)]{MRkoly}, the canonical Selmer structure $\mathcal{F}_{\rm can}$ on $\overline{T}$ is cartesian.
Hence \cite[Lemma~6.6]{MRselmer} shows that if the core rank $\chi(T, \mathcal{F}_{\rm can})$ is $[k \colon \mathbb{Q}]$, then \cite[Hypothesis~4.2]{bss} is satisfied.
By \cite[Theorem~5.2.15]{MRkoly}, we have
\[
\chi(T, \mathcal{F}_{\rm can}) = \sum_{v \in S_{\infty}(k)}{\rm rank}_{\mathcal{O}} \left( H^{0}(G_{k_{v}}, T^{*}(1)) \right) + {\rm rank}_{\mathcal{O}}\left( H^{2}(G_{k_{p}}, T)^{\vee} \right).
\]
Here $T^{*} := \Hom_{\mathcal{O}}(T, \mathcal{O})$.
Since we assume that the module $H^{2}(G_{k_{p}}, \overline{T})$ vanishes, we have $H^{2}(G_{k_{p}}, T) = 0$. Furthermore, the fact that $\chi$ is an even character implies that $H^{0}(G_{k_{v}}, T^{*}(1)) \cong \mathcal{O}$.
Hence, since $k$ is a totally real field, we conclude that $\chi(T, \mathcal{F}_{\rm can}) = \# S_{\infty}(k) = [k \colon \mathbb{Q}]$.
\end{proof}
\begin{theorem}\label{thm:main2}
Suppose that
\begin{itemize}
\item both $H^{0}(G_{k_{p}},T/\mathfrak{m} T)$ and $H^{2}(G_{k_{p}},T/\mathfrak{m} T)$ vanish,
\item the module $X_{k}^{\chi}$ is $p$-torsion-free,
\item the Greenberg conjecture (Conjecture~\ref{conj:greenberg}) holds true.
\end{itemize}
Then the canonical homomorphism
\[
{\rm VS}(T) \longrightarrow {\rm ES}_{r}(T)
\]
defined in \cite[Theorem~2.18]{sbA} is an isomorphism.
\end{theorem}
\begin{proof}
To simplify the notation, we put $\mathcal{R} := \Lambda[[\Gal(\mathcal{K}/k)]]$, and $\mathfrak{m}_{\mathcal{K}}$ denotes the maximal ideal of $\mathcal{R}$.
By Propositions~\ref{prop:sbA}~and~\ref{prop:bss}, we obtain the commutative diagram
\begin{align*}
\begin{split}
\xymatrix{
{\rm VS}(T) \ar[r] \ar@{->>}[rd] & {\rm ES}_{r}(T) \ar[d]
\\
& {\rm KS}_{r}(\overline{T}),
}
\end{split}
\end{align*}
where ${\rm VS}(T) \longrightarrow {\rm KS}_{r}(\overline{T})$ is surjective.
Since the $\mathcal{R}$-modules ${\rm VS}(T)$ and ${\rm ES}_{r}(T)$ are free of rank $1$ by Theorem~\ref{thm:main}, the facts that ${\rm VS}(T) \longrightarrow {\rm KS}_{r}(\overline{T})$ is surjective and that ${\rm KS}_{r}(\overline{T})$ is an $\mathcal{R}/\mathfrak{m}_{\mathcal{K}}$-vector space imply that ${\rm VS}(T) \longrightarrow {\rm ES}_{r}(T)$ is an isomorphism.
\end{proof}
\section{Introduction}
\label{S-Introduction}
Our need to measure the magnetic field
threading the solar corona has never been more urgent. Society
depends on electrical
infrastructure in space and on the ground to an unprecedented and ever-increasing degree. The greatest single source of electrical perturbations on the
Earth is the Sun, as has been known for at least a century and a half. Variable high energy radiation, ejection of
magnetized plasma, the interactions of streams in adjacent sectors of the solar wind, all these lead to potentially dangerous effects
on a technologically-dependent society. These
and other issues are discussed in a variety of
monographs, white papers and reviews
\citep[e.g.][]{Billings1966,
2001STIN...0227999J,Eddy2009,
2013SoPh..288..467J, 2017SSRv..210..145C,Ji+Karpen+others2020}.
The urgency of finding a reliable
method to measure coronal magnetic fields
arises from the fortunate
conjunction of
three unique observational
opportunities. The Daniel K. Inouye Solar Telescope (DKIST, formerly ATST,
see \citealp{Rimmele+others2003,rimmele+others2020}),
the Parker Solar Probe (PSP, formerly Solar Probe Plus, see
\citealp{PSP}) and the Solar Orbiter mission (SolO, \citealp{Orbiter,Orbiter2}) are all now
operational.
While the PSP and SolO orbit the Sun beyond 9 solar radii (R$_\odot$), sampling \textit{in-situ} plasmas, neutral particles and magnetic fields,
the 4 meter DKIST observatory will be able to measure
components of magnetic fields at elongations
$y \lesssim 1.5$ R$_\odot$. Such a large aperture, coronagraphic telescope
operating from the peak of Haleakala marks
a huge step up from early feasibility
efforts with far smaller telescopes
\citep[e.g. Evans Solar Facility, SOLARC, COMP;][]{Lin+Penn+Tomczyk2000,Lin+Kuhn+Coulter2004,CoMP}. We also eagerly anticipate synoptic measurements with UCOMP \citep{Tomczyk+landi2019} and the 1.5 meter
aperture coronagraph (\href{https://www2.hao.ucar.edu/cosmo/large-coronagraph}{www2.hao.ucar.edu/cosmo/large-coronagraph}) of the COSMO suite of instruments, currently under review by the community.
In the current work, we describe
a numerical method for
studying magnetic signatures imprinted in the polarized light from magnetic dipole (M1)
lines emitted at visible and infrared wavelengths by the corona. These M1 lines are formed in the saturated Hanle effect regime and are optically thin across the corona. Concerns have been
expressed regarding line-of-sight (LOS) confusion
\citep[e.g.][]{2013SoPh..288..467J,2020SoPh..295...98S}. Mathematically, null spaces exist where variations in
vector magnetic fields have no effect on the emergent spectra. Contributions to the familiar observed Stokes parameters $I,Q,U,$ and $V$ come from
different regions along the LOS.
\figsym
Single-point algorithms, like the one presented here, must be considered a first step until stereoscopic observations of the corona, involving spacecraft in orbits significantly away from the
Earth-Sun line, become available. Alternatively
the Sun's rotation might be used to
try to probe the 3D coronal structure using stereoscopy
\citep[e.g.][]{Kramar+2016}, assuming rigid coronal rotation over periods of days or longer. The corona may or may not comply with this
assumption, and stationary structures which do comply may be of
limited physical interest anyway.
Thus, our purpose is
to present a method, along with a python-based tool, to allow coronal observers with DKIST and other telescope systems to obtain
a first estimate of properties of
the emitting plasma including components of the vector magnetic field.
Our primary
simplification is
\begin{quote}
\textit{to seek solutions
for the emitting plasma assuming it is dominated
by one location along the line-of-sight.}
\end{quote}
For practical purposes,
such ``locations'' make sense if they span LOS lengths
smaller than, say, 0.1$R_\odot$. Naturally, without observations from
a very different LOS
in the solar system, the
measurements represent differently-weighted averages of physical conditions along each LOS.
%
These solutions inherently possess well-known ambiguities arising from specific symmetries
associated with the
line formation problem, as shown for example in Figure~\ref{fig:sym}. Therefore, our inversion scheme allows one to identify, but not necessarily resolve, all
ambiguities from a set of observed Stokes profiles, as revealed in Section~\ref{S-CLEDB}. Section~\ref{S-discussion} provides a summary of the findings, and discusses multiple emission
locations as suggested by multiple components in the emission lines, where solutions for each can, in principle, be obtained.
\section{Review of the Formation of Forbidden Coronal Lines}\label{S-formalism}
\subsection{Emission Coefficients in Statistical Equilibrium}
We adopt the formalism and notation of \citet{Casini+Judge1999}, that expands upon
earlier work on loosely related topics
\citep[e.g.][]{Sahal,1991A&A...244..391L,landi82}. We must solve
for the magnetic substate
populations of the radiating ions assuming statistical equilibrium (SE).
The problem is cast into
the framework of
spherical tensors to take advantage of geometrical symmetries \citep[see Chapter 3 in][]{Landi}. Magnetic Dipole (M1) coronal lines form under regimes
where Zeeman frequency splittings are of order of the classical Larmor frequency $\nu_L$, $h\nu_L =\mu_0\cdot B$ and are far smaller than the Doppler widths $\Delta\nu_D$, and where
the Einstein - A coefficients $A_{JJ_0}$ are, in turn $ \ll \nu_L$. The first inequality
permits an accurate Taylor expansion of line profiles in terms of
the small quantity $\delta
= \nu_L/\Delta\nu_D \ll 1$
\citep{Casini+Judge1999}.
The second defines the ``strong-field limit of the Hanle effect'' in which coherences between
magnetic sub-states of the decaying level are negligible in the magnetic-field reference frame.
From the solutions
to the SE equations, the emission coefficients for the Stokes
vector $(S_0,S_1,S_2,S_3)^T$ $\equiv
(I,Q,U,V)^T$ are then (see Equations~35a-35c in
\citet{Casini+Judge1999}):
\begin{eqnarray}
\varepsilon_0^{(0)}(\freq,\hat {\bf k})
&=&\epsilon_{ J J_0}\,\phi(\freq_0-\freq)
\left[1+D_{J J_0}\,\sigma^2_0( J)\,
{\cal T}^2_0(0,\hat{\bf k})\right]\;,
\label{eqn:eps00} \\
\noalign{\smallskip}
\varepsilon_i^{(0)}(\freq,\hat {\bf k})
&=&\epsilon_{ J J_0}\,\phi(\freq_0-\freq)\,D_{J J_0}\,
\sigma^2_0( J)\,
{\cal T}^2_0(i,\hat {\bf k})\;,
\qquad (i=1,2)
\label{eqn:epsi0} \\
\noalign{\smallskip}
\varepsilon_3^{(1)}(\freq,\hat {\bf k})
&=&-{\textstyle\sqrt{\frac{2}{3}}}\,\freq_{\rm L}\,
\epsilon_{ J J_0}\,\phi'(\freq_0-\freq)
\left[\bar{g}_{ J, J_0}+E_{J J_0}\,
\sigma^2_0( J)\right]
{\cal T}^1_0(3,\hat {\bf k})\;,
\label{eqn:eps31}
\end{eqnarray}
\noindent where
the M1 transition occurs from level with angular momentum
$J$ to $J_0$, and where
\begin{equation}
\epsilon_{ J J_0} ={\frac{h \freq}{4\pi}}\,N_{ J}\, A_{JJ_0}.
\end{equation}
$\phi(\freq_0-\freq)$ is the (field-free) line
profile (in units of Hz$^{-1}$),
with
$\int_0^\infty\phi(\freq_0-\freq) d\nu =1$
and $\phi'(\freq_0-\freq)$ denotes its first derivative with respect to $\nu$.
When integrated along a specific LOS, the
expressions for the emission coefficient
$\varepsilon^{(i)}_i(\nu,\hat {\bf k})$, with units of
erg~cm$^{-3}$ sr$^{-1}$s$^{-1}$,
yield the emergent Stokes vectors from the
corona. $\epsilon_{ J J_0}$ is the usual coefficient
for the frequency-integrated isotropic
emission from the line only, ignoring
stimulated emission. The $D_{ J J_0}$ and $E_{ J J_0}$ coefficients are dimensionless parameters associated with the polarizability of the two atomic levels.
The superscripts on $\varepsilon^{i}$ are the leading orders in the Taylor expansion of the line profile
\begin{equation}
\phi(x+dx) = \phi(x) + \sum_{j \geq 1} \frac{1}{j!}\,\frac{d^j}{dx^j} \phi(x) \cdot dx^j
\end{equation}
with $dx \propto \delta=\nu_L/\Delta\nu_D \ll1$. Second order terms
in $\delta$ are negligible for
weak coronal fields and broad
line profiles. Lastly, here it is assumed
that
the photospheric radiation
is spectrally flat across the corona line profiles \citep{Casini+Judge1999}.
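Schematically, truncating this expansion at first order for a profile shifted by the Larmor frequency gives
\[
\phi(\freq_0-\freq \pm \bar{g}_{J,J_0}\,\freq_{\rm L}) \simeq \phi(\freq_0-\freq) \pm \bar{g}_{J,J_0}\,\freq_{\rm L}\,\phi'(\freq_0-\freq),
\]
with corrections of order $\delta^2$. This is why Stokes $V$ in Equation~\ref{eqn:eps31} carries the factor $\freq_{\rm L}\,\phi'(\freq_0-\freq)$, while to leading order $I$, $Q$, and $U$ retain the unshifted profile $\phi(\freq_0-\freq)$.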
\subsection{Physical Interpretation}
These equations are readily
understood physically.
The leading order in the $IQU$ Stokes signals is zero; for Stokes $V$ it is one. $I$, $Q$, and $U$ arise from a combination of thermal emission and scattering of photospheric radiation; both contributions involve the populations $N_J$
and the atomic alignment
$\sigma^2_0(J)$. Both
quantify local solutions
to the SE equations, entirely
equivalent to solving for
populations of magnetic sub-states \citep{House1977,Sahal}.
Alignment
is generated entirely by the anisotropic irradiation of ions by the underlying solar photospheric radiation. Information on the magnetic field in $IQU$ is contained
implicitly in $\sigma^2_0(J)$ and is independent of
the magnetic field strength, corresponding to the mathematical statement of the
strong-field limit. In contrast, Stokes $V$ for the M1 lines is
formed entirely through the
Zeeman effect, modified by
the alignment factor \citep{Casini+Judge1999}.
When the atomic alignment
factor is zero, the expression for Stokes V
reduces to the well-known
``magnetograph formula''
of the Zeeman effect, to
first order. The leading
terms for $QU$ in the
Zeeman effect are only second order in $\delta$, so they are considered negligible in coronal cases owing to the small field strengths. Together with the first-order term in $V$, they form the basis of most solar ``vector polarimeters'' \citep[e.g.][]{Lites2000}.
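As an illustration, the magnetograph-formula limit can be turned into a least-squares estimator for the LOS field strength. The Python sketch below assumes the standard weak-field constant $4.67\times10^{-13}$\,\AA$^{-1}$\,G$^{-1}$ and an illustrative Gaussian line profile with Fe XIII 10747\,\AA-like numbers; it is a sketch under these assumptions, not part of the released CLEDB package.

```python
import numpy as np

# Weak-field ("magnetograph formula") estimator for the LOS field:
#   V(lambda) ~= -C * g_eff * lambda0**2 * B_los * dI/dlambda,
# with C = 4.67e-13 A^-1 G^-1 (standard value, assumed here for illustration).
C = 4.67e-13

def blos_magnetograph(lam, stokes_i, stokes_v, g_eff, lam0):
    """Least-squares fit of B_los from Stokes I and V on wavelength grid lam [A]."""
    dI = np.gradient(stokes_i, lam)
    w = -C * g_eff * lam0 ** 2 * dI        # model: V = w * B_los
    return np.sum(w * stokes_v) / np.sum(w * w)

# Synthetic check with an illustrative Gaussian line (Fe XIII 10747 A-like numbers)
lam0, width, g_eff, B_true = 10747.0, 1.5, 1.5, 5.0
lam = np.linspace(lam0 - 10.0, lam0 + 10.0, 801)
I_prof = np.exp(-((lam - lam0) / width) ** 2)
V_prof = -C * g_eff * lam0 ** 2 * B_true * np.gradient(I_prof, lam)

B_est = blos_magnetograph(lam, I_prof, V_prof, g_eff, lam0)
```

Fitting $V$ against $dI/d\lambda$ over the whole profile, rather than taking a single-pixel ratio, averages down photon noise in the same spirit as longitudinal magnetograph instruments.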
The coefficients $D_{JJ_0}$ and
$E_{JJ_0}$
are properties of the two atomic levels.
$D_{JJ_0}$ is fixed by the quantum numbers
$J$ and $J_0$ alone, but $E_{JJ_0}$
also depends on the ``Land\'e g-factors'' of the two levels, which are used to build
the ``effective Land\'e factor'' of the transition, $\bar{g}_{J,J_0}$.
The Land\'e g-factors also depend on
quantum numbers other than
$J$ and $J_0$, such as
the mixing of atomic states
and orbital and spin
angular momenta. These can be measured or
may be computed using an atomic structure calculation. See, for example, new calculations of special relevance to this work by \citet{Schiffman+others2020}.
The Taylor expansion of $\varepsilon_i^{(o)}(\nu,\uvec{k})$ with frequency has leading orders $o=1$ when $i=3$ and $o=0$ otherwise \citep{Casini+Judge1999}.
The terms $N_J$ and $\sigma^2_0(J)$,
the population and alignment of level with total angular momentum $J$,
are solutions of the SE equations. These solutions, which are linear combinations of the populations of magnetic substates of level $J$ \citep[e.g.,][]{Sahal},
depend on the
scattering geometry,
the magnetic unit vector
$\uvec{b}$, and the plasma
temperature and density.
The atomic alignment $\sigma^2_0(J)$
is created by the bright, anisotropic
photospheric cone of radiation seen by the coronal ions, and destroyed by collisions with plasma particles
having
isotropic distributions.
This ``atomic polarization'' is
modified by the magnetic field as
the ion's magnetic moment precesses around the local B-field.
The appearance of
$\sigma^2_0(J)$ in the SE
equations underlying
Equations~\ref{eqn:eps00}-\ref{eqn:eps31} shows that
linearly polarized light in the corona originates
from atomic polarization, and also that the intensity and Zeeman-induced
circular polarization are modified by it.
\subsection{Scattering Geometry}
Finally, the spherical tensors ${\cal T}^K_0(i,\hat {\bf k})$
define the geometry of the scattering
of solar radiation
for Stokes component $i$ from the coronal plasma. The tensors play
no role in the SE calculations since, as is
readily appreciated, the SE states cannot depend on the observer. Figure~\ref{fig:sym}(b)
shows instead how the solutions
depend on $\vartheta_B$, the angle between the local magnetic field and
the radius vector $\uvec{r}$ to the local vertical of the Sun ({\it l.v.s}).
\section{CLEDB, a Database Approach for ``Single-Point Inversions''}\label{S-CLEDB}
\subsection{The CLEDB Algorithm}
\figcledb
The Coronal Line Emission DataBase (CLEDB) inversion algorithm is created to harness all available information in polarization measurements of the corona to infer local plasma properties and vector magnetic fields. A non-commercial open-source python-based code package of CLEDB, designed for both personal computer jobs and SLURM (Simple Linux Utility for Resource Management) enabled research computing jobs, is freely available online. More information about the code, package, and method documentation along with persistent links are found in the data availability section.
The algorithm uses the equations and framework described in Section \ref{S-formalism} together with symmetries and line profile properties to extract magnetic and thermal information from measured Stokes parameters through a search of a database of computed Stokes parameters.
A single emission line does not contain sufficient information for a full inversion. This will become clear below, but see
\citet{Plowman_2014}, \citet{Dima+2020}, and \citet{Judge+Casini+Paraschiv2021} for detailed discussions. Therefore, the CLEDB approach is primarily designed for two or more coronal lines. A secondary code branch is used to derive only basic thermal parameters and the LOS magnetic field when Stokes observations of a single line, rather than two, are provided, using analytical approximations incrementally developed by \citet{Casini+Judge1999}, \citet{Plowman_2014}, and \citet{Dima+2020}.
In the CLEDB 2-line configuration, solutions that are deemed a good fit, currently by using a reduced $\chi^2$ metric, are returned along with database model magnetic, geometric and thermal parameters as acceptable solutions to the inverse problem.
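The matching step can be sketched as a reduced-$\chi^2$ ranking over precomputed Stokes vectors. The Python fragment below is a schematic illustration only, not the released CLEDB implementation; the array shapes, uniform noise model, and degrees-of-freedom bookkeeping are our assumptions.

```python
import numpy as np

def cledb_match(obs, sigma, database, n_best=4, n_params=8):
    """Rank database entries against an observed Stokes vector by reduced chi^2.

    obs      : frequency-integrated Stokes vector (e.g. IQUV for two lines -> 8 values)
    sigma    : 1-sigma uncertainty per observed quantity
    database : (n_models, len(obs)) array of synthetic Stokes vectors
    Returns the indices of the n_best entries and their reduced chi^2 values.
    """
    dof = max(len(obs) - n_params, 1)      # illustrative degrees-of-freedom bookkeeping
    chi2 = np.sum(((database - obs) / sigma) ** 2, axis=1) / dof
    order = np.argsort(chi2)[:n_best]
    return order, chi2[order]

# Toy database: 1000 random models, one of which (index 123) generated the "observation"
rng = np.random.default_rng(0)
db = rng.normal(size=(1000, 8))
obs = db[123] + 1e-3 * rng.normal(size=8)
idx, vals = cledb_match(obs, sigma=np.full(8, 0.1), database=db)
```

Returning several ranked entries, rather than only the minimum, is what allows the algorithm to report all degenerate solutions compatible with the data.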
The algorithm seeks thermal and magnetic conditions
from a single point along the LOS. This is
a gross oversimplification
in general, but it is well known that coronal images
frequently reveal discrete structures, such as polar plumes and, especially, loops over magnetically active regions. These are the regions of great interest
for space weather disturbances at the Earth \citep{Ji+Karpen+others2020}.
However, in cases such as the quiet Sun, the emission is distributed diffusely and our method will represent some
poorly-defined average
of quantities along the LOS. We explore this
assumption below.
In essence, we replace the integrals of equations \ref{eqn:eps00}-\ref{eqn:eps31}
over the LOS
with a 1-point quadrature using a length scale $\ell$. For convenience, we choose $\ell=1$ and use henceforth,
\begin{equation} \label{eq:S}
S_i(\freq, \uvec{k}) \equiv \epsilon_i^{(o)} (\freq, \uvec{k}),
\end{equation}
which is the emergent Stokes parameter of the emission line for a path length of 1 cm along $\uvec{k}$.
Even with this simplification,
there is always some ambiguity in the solutions owing to inherent
symmetries. Our algorithm
therefore returns all such
solutions deemed to be compatible with the
data.
\tabangles
\subsection{Frames of Reference}
Figures~\ref{fig:sym},~\ref{fig:spheresm}
and Table~\ref{tab:angles}
define various angles
in terms of a Cartesian
reference frame with its origin at the center of the Sun.
The axes
$\mathbf{ \hat x},
\mathbf{ \hat y},
\mathbf{ \hat z}$
point along the Sun center-observer line, the E-W direction and S-N
direction relative to the Sun's
rotational axis in the plane-of-the-sky. We adopt the reference direction for linear polarization
to be along the $z$- axis (vertical).
This corresponds to the direction of a linear polarizer measuring $\frac{1}{2} (I+Q)$ \citep[see p. 19 of][]{Landi}.
Two unit vectors $\uvec{r}$ and $\uvec{b}$ specify the direction of
the center of the cone of photospheric
radiation and magnetic field, and a third $\uvec{k}$ specifies the LOS\footnote{Strictly speaking, the $\uvec{k}$ vectors drawn at points $O$ and $P$
are not quite parallel, but here we ignore this small difference, as they
are $\lesssim0.5^\circ$
different when observing plasma with an elongation of $\lesssim 2R_\odot$. See Figure~\ref{fig:spheresm} and Table \ref{tab:discretization}.}.
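The quoted bound is simple geometry: the two lines of sight diverge by at most the arctangent of the elongation over the Sun-observer distance. A quick numerical check, assuming an observer at 1\,AU and the nominal solar radius (constants are assumed values, not from this paper):

```python
import math

R_SUN_M = 6.957e8   # nominal solar radius [m] (assumed IAU value)
AU_M = 1.496e11     # astronomical unit [m]

def los_divergence_deg(y_rsun, d_m=AU_M):
    """Angle [deg] between the LOS unit vectors at the observer and at a
    scattering point of elongation y_rsun (in solar radii), observer at d_m."""
    return math.degrees(math.atan(y_rsun * R_SUN_M / d_m))

angle = los_divergence_deg(2.0)   # elongation of 2 R_sun, observer at 1 AU
```

The result is about half a degree, consistent with treating the two $\uvec{k}$ vectors as parallel.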
We define two reference frames, the ``solar" frame and the ``observer'' frame. All angles in the solar frame
are specified as Greek lowercase letters.
Two more angles are defined
in uppercase, defined relative to the observer.
$\Theta_B$ is the angle
between the LOS vector $\uvec{k}$ and
$\uvec{b}$. The angle
$\Phi_B$ follows from our adoption of a reference direction parallel to the $\uvec{z}$- axis.
With this geometry,
\begin{equation} \label{eq:Phib}
\Phi_B = \pi - \gamma_B= + \frac{1}{2} \arctan \frac{U}{Q}
\end{equation}
for each line with measurable $Q$ and $U$.
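In code, this azimuth follows from the four-quadrant arctangent. The helper below is a sketch, not part of the released package; the result is defined modulo $\pi$ because the linear-polarization direction is a headless vector, leaving the ambiguities of Figure~\ref{fig:sym} to be resolved separately.

```python
import numpy as np

def phi_b(Q, U):
    """Linear-polarization azimuth 0.5 * atan2(U, Q), in radians.

    Defined modulo pi: Q and U are unchanged by a 180-degree rotation of
    the polarization direction, so the magnetic azimuth inherits the
    ambiguities discussed in the text.
    """
    return 0.5 * np.arctan2(U, Q)

# Example: pure +U polarization corresponds to an azimuth of 45 degrees
angle = np.degrees(phi_b(Q=0.0, U=1.0))
```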
\tabsols
\subsection{Symmetries to Minimize Numerical Work}
\figspheresm
Frequency-dependent line profiles are not
required because we know \textit{a priori} that, under the single-point contribution assumption, the
profiles for $I,Q,U$ are identical, namely the zeroth-order term in the Taylor expansion. The leading order in the $V$ profile is the first order
term $\propto dI/d\nu$.
Therefore we need only create a database of quantities
$\varepsilon^{(i)}(\nu,\uvec{k})$ appropriately integrated over frequency,
\begin{equation}\label{eq:si}
S_i=\int_{\mathrm{line}}[ I(i,\lambda)-I(i,\lambda_c) ] \;d\lambda.
\end{equation}
The integration for an observed set of Stokes $O_i$ follows the same formalism, when subtracting $\lambda_c$ continuum emission and setting any Doppler shifts to zero. The integral for $V$
requires weights of opposite sign at either side of the line center. If two or more components are
identifiable
in the $I(\nu,\uvec{k})$ profiles, for example by
multiple fits of Gaussian profiles,
the components can be extracted beforehand, and searches made for each component.
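A minimal numerical realization of this integration is sketched below, assuming an evenly sampled wavelength grid and a purely illustrative Gaussian profile; the antisymmetric weighting of $V$ about line center is the only difference from the other Stokes parameters, and the sign convention of the weights is a choice made here for illustration.

```python
import numpy as np

def integrate_stokes(lam, profile, continuum, lam0, is_V=False):
    """Wavelength-integrated signal S_i: integral of (profile - continuum).

    For Stokes V the integrand is weighted with opposite signs on either
    side of line center lam0, so the antisymmetric signal does not cancel.
    Assumes an evenly sampled wavelength grid lam.
    """
    integrand = profile - continuum
    if is_V:
        integrand = integrand * np.sign(lam0 - lam)
    return np.sum(integrand) * (lam[1] - lam[0])

# Synthetic check: Gaussian Q profile and a derivative-shaped V profile
lam0, w = 10747.0, 1.5                     # illustrative line center [A] and width
lam = np.linspace(lam0 - 10.0, lam0 + 10.0, 2001)
Q = np.exp(-((lam - lam0) / w) ** 2)
V = -np.gradient(Q, lam)                   # first-order (dI/dlambda) shape

SQ = integrate_stokes(lam, Q, 0.0, lam0)
SV = integrate_stokes(lam, V, 0.0, lam0, is_V=True)
```

Without the antisymmetric weights the integral of the derivative-shaped $V$ profile vanishes, which is precisely why they are needed.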
Even with these simplifications, minimal implementations of a search algorithm would generate
databases of impractically large
sizes. A 3D
Cartesian grid built around a
quadrant around the solar disk
(Figure~\ref{fig:spheresm}), would demand computation of
the $S_i$
parameters at each of, say, $50\times50\times50$ ``voxels''. Each such voxel requires a grid of magnetic vectors
$\mathbf{B}=(B_x,B_y,B_z)^T$, the LOS components of
velocity field $v_x$, temperature $T$, density
$n_e$, elemental abundance $\mathcal{A}$, and
a spectroscopic turbulence representing unresolved non-thermal motions $v_T$. With
over $10^5$ voxels, the
number of database entries
would be
$N_C \ge 10^{13}$,
using just 10 values for each of the
magnetic and thermodynamic
variables listed above. But
the database size can be
dramatically reduced based upon the following arguments:
\begin{enumerate}
\item Observations are subject to the geometrical rotation of the $Q$ and $U$ profiles using equations~\ref{eq:qrot} and \ref{eq:urot}.
All $QU$ data can be rotated around the $x$-axis by the azimuth angle $-\alpha$, as shown in Figure~\ref{fig:sym}(a).
Database searches can then be limited to those LOS within
the $z=0$ plane instead of the entire 3D volume; e.g., point Q differs from point P in Figure~\ref{fig:sym}(a) only by the $\alpha$-rotation of the Q and U Stokes profiles.
Afterwards, matching magnetic vectors are simply rotated back by $+\alpha$. The $I$ and $V$ Stokes parameters are invariant
to rotations about the LOS ($x$- axis).
\item We need only search along the LOS $x$-direction, using one of $n_y$ separately stored database files per observation, chosen so that the observed elongation $y$ is closest to the computed CLEDB height $y_0$,
minimizing CPU and memory requirements.
\item We suggest adopting line pairs from a single ion,
eliminating the need to account for relative abundances and differential temperatures along each LOS. However, it is possible to use different ions, even of different elements, although this is not advisable for reasons that will become clear below (see \citet{Judge+Casini+Paraschiv2021} for detailed discussions).
\item We can compute the
Stokes parameters and store them
for a single field strength
$B=|{\mathbf B}|$. We then compute the ratio between
the computed and the observed values of circular polarization. This simplification results from the strong field limit of the Hanle effect. In other words, CLEDB will solve for the geometry, thermal, and magnetic orientation, and afterwards scale the magnetic field strength using Zeeman diagnostics (equation \ref{eqn:eps31}).
\end{enumerate}
Thus, in this example, the CLEDB scheme's database will encompass $N_C\approx 10^6$
entries for each of the $n_y$ database computed elongations, as shown in Table~\ref{tab:discretization}. The numbers quoted in this example are not absolute and represent just a starting point. In CLEDB the database parameter configuration is a user editable feature when building databases within the CLEDB\_BUILD module.
The first simplification is
equivalent to a rotation of our choice of
reference direction for linear polarization. The $Q$ and $U$ parameters fed to the search algorithm are simply
\newcommand\hp{\hphantom{-}}
\begin{eqnarray} \label{eq:qrot}
Q_\alpha&=& Q\cos2\alpha - U\sin2\alpha,\\
U_\alpha&=&Q\sin2\alpha + U\cos2\alpha. \label{eq:urot}
\end{eqnarray}
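As a quick numerical check of equations~\ref{eq:qrot} and \ref{eq:urot} (a sketch, not CLEDB code): linear polarization transforms with twice the rotation angle, so rotating by $\alpha$ and then by $-\alpha$ recovers the input, and $Q^2+U^2$ is preserved.

```python
import numpy as np

def rotate_qu(q, u, alpha):
    """Rotate the linear-polarization reference direction by alpha:
    Q and U mix with twice the angle (cf. the Q_alpha, U_alpha
    relations).  Stokes I and V are unaffected by this rotation."""
    c, s = np.cos(2.0 * alpha), np.sin(2.0 * alpha)
    return q * c - u * s, q * s + u * c
```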
The preference for lines belonging to single ions described in Point 3 is not a serious restriction,
because ions of the $np^2$ and $np^4$ iso-electronic sequences with $n=2$ and $n=3$, such as Fe~XIII, possess two M1 lines in the $^3P$ ground terms whose dependencies of $\varepsilon_i^{(o)}(\freq,\hat {\bf k})$ on electron temperature are essentially identical, determined by collisional ionization equilibrium.
The Fe~XIII 1.0747 $\mu$m and 1.0798 $\mu$m line pair has served as the primary target for previous instruments
\citep{Querfeld1977,CoMP},
and remains a prime candidate for new observations with DKIST.
Point 4 entails the significant benefit of
finding solutions
which depend on higher-signal \textit{wavelength-integrated} Stokes profiles (equation \ref{eq:si}), rather than noisier differences
of Stokes $V$ profiles (cf. Equation~11 of \citealp{Dima+2020}). We note that the accuracy of database vs. observation scaling is dependent on LOS effects that are not currently fully quantified, as can be seen in the $\chi^2$ values of Table \ref{tab:table1} that indicate overfitting.
\figflow
The search over angles can then be further restricted. We use the ratio $U_\alpha/Q_\alpha$ to estimate the azimuth angle $\Phi_B$ modulo $\pi/2$ for every M1 line (see Figure~\ref{fig:sym}).
In the database we adopt grids for the magnetic field vector in spherical coordinates at the point $P$
for the angles $\phi$ and $\theta$ shown in Figure~\ref{fig:spheresm}.
For each $\Phi_B$, the $\phi$ and $\theta$ angles are
related, by their definitions, through:
\begin{equation} \label{eq:qu}
\tan \Phi_B =\tan\theta \sin \phi.
\end{equation}
Ultimately we are left only to
search a 4-dimensional discretized
hyperspace for each
elongation $y$, to identify
matching values of
$n_e$, $x$, $\phi$, and $\bar\theta(\phi)$, remembering that $B$ is scaled afterwards, as discussed above.
Here $\bar\theta(\phi)$ includes only
values of $\theta$ compatible with
equation~\ref{eq:qu}.
In our Table \ref{tab:table1} example of CLEDB sorting, we simply presort the 10 values of $\bar\theta$ in the numerical grid that are most compatible with Equation~\ref{eq:qu}. We see that solutions are degenerate in pairs in terms of supplementary $\Phi_B$ and complementary $\Theta_B$ angles. The number of presorted $\bar\theta$ solutions is configurable via CLEDB controlling parameters. Interpolation is of course possible, but it is not currently implemented due to the yet unknown effects of potential uncertainties.
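The presorting of $\bar\theta$ values against equation~\ref{eq:qu} can be sketched as follows; \texttt{presort\_theta} is a hypothetical helper name, and the absolute residual in tangent space is one simple compatibility metric among several.

```python
import numpy as np

def presort_theta(theta_grid, phi, phi_b, n_keep=10):
    """Rank database grid values of theta by compatibility with
    tan(Phi_B) = tan(theta) * sin(phi) and keep the n_keep best.
    Hypothetical helper illustrating the presorting step, not the
    CLEDB implementation."""
    resid = np.abs(np.tan(theta_grid) * np.sin(phi) - np.tan(phi_b))
    order = np.argsort(resid)
    return theta_grid[order[:n_keep]]
```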
Yet more computational savings are made noting that the electron densities $n_e$ are strong functions of $r$ because of stratification and solar wind expansion.
Thus, we can reasonably seek solutions of a fixed analytical form for $n_e(r)$ as shown in Table~\ref{tab:discretization}. The function
\begin{equation}
n_0(r) = 3\cdot 10^8 \cdot \exp \left(- \frac{r-1}{h}\right) + 10^8\cdot(0.036 r^{-3/2} + 1.55r^{-6}),
\end{equation}
has $r$ in units of $R_\odot$, and scale height $h=0.0718R_\odot$ ($\equiv$ 50 Mm), where the second term is the formula of Baumbach \citep{Allen1973}.
The grid sizes that we have used for testing, and that we consider a reasonable starting point, are given in Table~\ref{tab:discretization}. The resulting density is given by a smaller array of, say, 15 discretized values centered on the base electron density $n_0$, which span orders of magnitude from $-2$ to $2$ in logscale.
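The base density profile $n_0(r)$ above is simple to evaluate; the sketch below assumes $r$ in units of $R_\odot$ and returns densities in cm$^{-3}$.

```python
import numpy as np

def n0(r, h=0.0718):
    """Base electron density n_0(r) in cm^-3, with r in units of
    R_sun: exponential stratification with scale height h plus the
    Baumbach terms (second bracket)."""
    return (3e8 * np.exp(-(r - 1.0) / h)
            + 1e8 * (0.036 * r**-1.5 + 1.55 * r**-6))
```

At the limb, $n_0(1)=3\times10^8+1.586\times10^8\approx4.6\times10^8$ cm$^{-3}$, falling off steeply with height.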
\tabdisc
We used the reduced $\chi^2$
metric as a goodness of fit,
with Stokes
`observations' $O_i$ taken
from values on the database grid.
Then we write $\chi^2$ as the sum of
\begin{eqnarray}
\label{eq:chisq}
\chi^2_{\text{\tiny IQU}} =& \dfrac{1}{d - p}&
\left [\sum_{i=0,1,2} { \frac{(S_i - O_i)^2}{\sigma^2_i} }\right ]\text{ and}\\
\chi^2_{\text{\tiny V}} =& \dfrac{1}{d - p } &\frac{(S_3 -O_3)^2}{\sigma^2_3} \label{eq:chisq2}
\end{eqnarray}
where $O_i$ and $\sigma_i$ are for the observed Stokes $I,Q,U$ parameters, and $O_3$ and $\sigma_3$ correspond to Stokes $V$. Here $\sigma^2$ is a variance associated with noise, not to be confused with
the alignment $\sigma^2_0(J)$, which is always specified by $J$. The distribution of noise in $O_i$ is normal with standard deviation $\sigma_i$. The
rms noise is added to $\sigma_i$ as a function
of the number of photons
detected in the line. We normalize
the set of 8 Stokes parameters with respect to the Stokes $I$ parameter corresponding to the strongest line in the set, in order to bypass the need for absolute intensity calibrations.
Here, $d = 4\,n_{line}-1$ is the number of independent data points.
The number of free parameters in the model is $p=4$.
With $d=7$ for two lines, the factor
$(d-p)^{-1}$ in Equation~\ref{eq:chisq}
is $\frac{1}{3}$, and the sum would be over two lines.
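A minimal sketch of the reduced $\chi^2_{\text{\tiny IQU}}$ of Equation~\ref{eq:chisq}, assuming the integrated $I,Q,U$ signals of all lines are passed as flat arrays; this is illustrative, not the CLEDB code.

```python
import numpy as np

def chi2_iqu(s, o, sigma, p=4):
    """Reduced chi^2 over the wavelength-integrated I, Q, U signals
    of n_line lines (cf. chi^2_IQU): s are database values, o the
    observations, sigma the noise, all flattened to 3*n_line entries.
    d = 4*n_line - 1 independent data points; p model parameters."""
    n_line = len(s) // 3
    d = 4 * n_line - 1
    return float(np.sum((np.asarray(s) - np.asarray(o)) ** 2
                        / np.asarray(sigma) ** 2) / (d - p))
```

For two lines (six $IQU$ entries), $d-p=3$, matching the $\frac{1}{3}$ prefactor quoted above.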
\figamba
The reasoning behind separating the first three and last Stokes parameters in Equations~\ref{eq:chisq} and \ref{eq:chisq2} comes from the strong-field limit of the Hanle effect.
As already described, the
first three Stokes parameters
$IQU$ depend only on the direction
(e.g. unit vector $\uvec{b}$), and not
the magnitude $B$ of the magnetic field. On the other hand, Stokes $V$ parameters scale only with the magnitude $B$ of the magnetic field. Thus, the $\chi^2$ sorting needs to be separated into its two components, as shown in Equations~\ref{eq:chisq}-\ref{eq:chisq2}.
We store in the database Stokes vectors $S_i$ computed only with $B=1$ G. The first 3 Stokes parameters are determined by minimization of Equation~\ref{eq:chisq}, yielding acceptable values of $\uvec{b}$, along with the smallest normed differences in integrated Stokes $V$, as given by a database search.
Once the direction $\hat{\bf b}$ is known, the contribution of Stokes $V$ to $\chi^2$ in Equation~\ref{eq:chisq2} is identically zero only when
\begin{eqnarray}
\nonumber
S_3&=&O_3, \mathrm{\ \ hence} \\
B &=& O_3/ S_3(B=1), \label{eq:algebraic}
\end{eqnarray}
which is the analytical solution for $B$ because $S_3(B) = B \cdot S_3(B{=}1)$, i.e., Stokes $V$ scales linearly with the field strength.
Equation~\ref{eq:algebraic} then yields the magnetic field strength compatible with all the observed and computed Stokes parameters, without reference to the values $\sigma_3$ for each line.
The value of $V$ used for estimating $B$ can be taken either from the strongest line or the weighted mean of a number of observed lines via a CLEDB configuration parameter.
This procedure justifies argument number 4 listed above.
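The algebraic scaling of Equation~\ref{eq:algebraic} amounts to a single division, since Stokes $V$ is linear in $B$; a trivial sketch:

```python
def field_strength(o3, s3_unit):
    """Algebraic field strength B = O_3 / S_3(B=1): s3_unit is the
    database Stokes V signal computed at B = 1 G, and V scales
    linearly with B in the strong-field limit of the Hanle effect."""
    return o3 / s3_unit
```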
To sum up, the number of calculations needed becomes of the order of $10^6$ when using the discretization example in Table \ref{tab:discretization}, so that searches become fully tractable even on desktop computers. Figures~\ref{fig:flowcledb} and \ref{fig:flow} show the overview and detailed CLEDB scheme as flowcharts.
\subsection{Performance}
In some initial tests using Python,
solutions are obtained
in 0.2 sec. for
the parameters listed in Table~\ref{tab:discretization},
using a fairly current off-the-shelf laptop, e.g. a 64-bit MacBook Pro with a 2.3 GHz Quad-Core Intel Core i7 and 16GB RAM. By compressing database file storage to 32-bit integers,
we halve the disk space required, while
incurring about 2 sec. of overhead each time the data are
read and decompressed.
There is therefore a small advantage in finding all observations matching a given database value of $y_0$ before
searching for solutions. CLEDB implements such a pre-search in its CLEDB\_PREPINV module, where for any measured cluster of $y$- heights, CLEDB searches and selects the nearest $y_0$ database position.
\figambb
Characteristics of the typical
performance are shown in Figures~\ref{fig:amba} and \ref{fig:ambb}, applied to
the Fe XIII line pair at 1.0747 $\mu$m and 1.0798 $\mu$m.
Figure~\ref{fig:amba} shows
the derived physical parameters
for a search of synthetic
Stokes parameters drawn randomly from the database in the upper panel. The rms uncertainties
assigned to the synthetic observations are for photon-counting noise associated with a total of 6 million counts
in the brightest (1.0747 $\mu$m) line. The lower panel shows the corresponding differences between the observed and computed Stokes
parameters for these solutions.
Figure~\ref{fig:ambb} shows how the number of acceptable solutions varies with the noise levels.
As
anticipated, sufficient counts must be accumulated to constrain
the plasma and magnetic properties
of coronal plasma using forbidden
coronal lines. Unanticipated is the result that $\approx 10^7$
counts are required to
arrive at the minimally ambiguous set of solutions. There is no benefit to accumulating more counts except that the magnitude of
$B$ can be better constrained using
Equation~\ref{eq:algebraic}. Also shown are estimates of the counts
that might be accumulated with
a DKIST CRYO-NIRSP like
instrument in 1 second for a $0.5"\times0.5"$ region.
Assuming that
the instrument can achieve
photon-limited noise,
a factor of 30 more
counts should be easily
achievable with longer integrations and spatial binning. It remains to be seen how the actual noise properties of the instrument will affect the estimates given here.
\figtwos
\section{Discussion}\label{S-discussion}
The CLEDB algorithm is centered on a straightforward
least-squares match of observed and computed $I,Q$, and $U$
Stokes parameters, which determine the magnetic field unit vector $\uvec{b}$. This is
combined with
magnetic field strength $B$
given algebraically by
the ratio of observed to computed $V$ parameters (Equation~\ref{eq:algebraic}).
The algorithm uses
line profiles
integrated over
frequency, which may include multiple components separated perhaps
using multiple
Gaussian fits.
Any
number (two or more) of M1 coronal
emission lines, each formed in the
strong-field limit of the Hanle effect \citep{Casini+Judge1999,Sahal}, can be used for a full vector solution, while a LOS magnetic approximation is available for a single line.
However, the physics dictates that the use of lines of the same ion minimizes potentially damaging systematic errors.
The algorithm delivers the closest solutions to those in computed databases, including all solutions acceptable with the $\chi^2$- statistic
for each measured component.
Natural symmetries imply that
at least two solutions are found for each component, even in the limit of negligible noise. To achieve
this limit, one test calculation (Figure~\ref{fig:amba}) required $>$ 6 million counts integrated along the line profile. We
also estimated that the CRYO-NIRSP instrument at the new DKIST observatory can achieve this with a combination of exposures as short as a few seconds, with modest spatial binning.
In Figure~\ref{fig:twos} we show the results of a numerical experiment in which we force the algorithm to return solutions from a scenario
entirely incompatible with a single source. Two sources of equal intensity are placed, one at $x=-2$ in units of $R_\odot$, the other at ten points between
$x=-2$ and 0. A total of $3\times 10^6$ counts were assumed to be accumulated in Stokes $I$.
To avoid confusion, we kept the
magnetic field vector identical, seeking only to explore the
ability of the algorithm to recognize through $\chi^2$ values that there is no single match in the database. While this is a simple case, it is
in one sense a ``worst case'' scenario in that the two sources are equally bright along the $x$-direction. As expected, the algorithm shows successes and failures. The solutions at $x=-2$ have the smallest $\chi^2$, which increases almost monotonically with increasing source separation. This is good news, as the algorithm not only recovers the correct solution when the sources are in the same location, but also $\chi^2$ increases significantly when the sources are separate. Thus, the $\chi^2$ can show that there is indeed sufficient information in the spectra to reveal a poor fit. The other good news is the expected increase in mean electron density as the second source approaches $x=0$.
However, the middle panel shows that the $x$- coordinates returned do not follow
anything approaching a linear trend. The line shows the position of a single source found at the same coordinates as the second source from above.
If the algorithm were linear in its response to the $x$- coordinate,
then we would expect the points plotted to follow
a line starting from (-2,-2) with half the slope shown. Clearly, the algorithm is sufficiently non-linear to disallow the possibility of finding centers of emission along the LOS if two or more sources exist with the same or similar brightness. This is just as expected from the discussion in section 3.3 of \citet{Judge+Casini+Paraschiv2021}.
We finish by making some general observations.
Our earlier work \citep{Judge+Casini+Paraschiv2021} clarifies how the present algorithm resolves earlier problems by
solving for the scattering geometry as well as the thermal
and magnetic parameters of the emitting plasma. The companion paper of this work \citep{par2022} will focus on benchmarking CLEDB on synthetic data, while waiting for the first full Stokes coronal observations to become available.
First, these inversions are far less dependent on the
signal-to-noise ratios of the very weak Zeeman-induced Stokes $V$ profiles, a result contrasting with the earlier methods examined
\citep{Plowman_2014,Dima+2020}. While our solutions depend linearly on the ratio of observed to computed $V$ values (Equation~\ref{eq:algebraic}),
the earlier solutions depend on the observed differences between measured $V$ values (see Equation~11 in \citealp{Dima+2020} and Equation~7 in
\citealp{Judge+Casini+Paraschiv2021}), which have correspondingly larger propagated uncertainties.
This is good news because the $V$ signals are small, being first order in the small parameter $\nu_L/\Delta\nu_D$.
Secondly, it is clear that once applied, any user of this scheme is left to see which of the various solutions might make best sense when the pixel-to-pixel variations are taken into account, or if other constraints are available (e.g. independent knowledge of the geometry of the emitting plasma).
This research area should be explored in the future, and may be ripe for application of machine-learning techniques.
Thirdly, using lines from the same ion in fact has advantages. We gain accuracy by using such ions without worrying about unknown factors such as temperatures, ionization fractions and abundances, and
with this methodology we need not worry about the special degeneracies identified by \citet{Dima+2020}.
Lastly, we note that because of the physical separation underlying Equations~\ref{eq:chisq}-\ref{eq:chisq2}, any independent knowledge of $B_{LOS}$ or $|B|$ can be easily
included in a CLEDB implementation. One example might be the use of oscillation data once the density is solved for from just IQU observations, in order to determine the value of $|B|$ from the observed oscillation phase speeds
\citep[see][]{2007Sci...317.1192T,2020ScChE..63.2357Y}.
\acknowledgements
The authors thank R. Casini for discussions and the careful reading and review of the initial submission.
Furthermore, we are grateful for the anonymous reviewer's pertinent suggestions that improved this work.
\begin{dataavailability}
CLEDB and sample test data are available on Github via \\ \href{https://github.com/arparaschiv/solar-coronal-inversion}{github.com/arparaschiv/solar-coronal-inversion} or directly from the corresponding author on reasonable request. Furthermore, the CLEDB package provides detailed documentation. \href{https://github.com/arparaschiv/solar-coronal-inversion/blob/master/codedoc-latex/README-CODEDOC.pdf}{See README-CODEDOC.pdf}
\end{dataavailability}
\begin{fundinginformation}
A.R.P. was primarily funded for this work by the National Solar Observatory (NSO), a facility of the NSF, operated by the Association of Universities for Research in Astronomy (AURA), Inc., under Cooperative Support Agreement number AST-1400405. A.R.P. and P.G.J. are funded by the National Center for Atmospheric Research, sponsored by the National Science Foundation under cooperative agreement No. 1852977.
\end{fundinginformation}
\begin{conflict}
The authors declare that there is no conflict of interest.
\end{conflict}
\bibliographystyle{spr-mp-sola}
\section{Introduction}
Let $X$ be a topological space and for $x\in X$ let $\mathcal{N}_x$ denote
the family of all open neighborhoods of $x$ in $X$. For a nonempty subset $A$
of $X$ we denote by $\mathcal{U}_A$ the set of all families
$\mathcal{U}=\{U_a: a\in A, U_a\in\mathcal{N}_a\}$ and by
$\mathcal{C}_A$ the set of all families
$\mathcal{C}=\{\overline{U}_a: a\in A, U_a\in\mathcal{N}_a\}$.
The \emph{$\theta$-closure} of a set $A$ in a space $X$ is the set
$\cl_\theta(A) = \{x\in X :$ for every
$U\in \mathcal{N}_x, \overline{U}\cap A \ne \emptyset\}$. $A$ is
called \emph{$\theta$-closed} if $A=\cl_\theta(A)$ and $A$ is
$\theta$-dense if $\cl_\theta(A)=X$ (see \cite{Vel66}).
The smallest $\theta$-closed set containing $A$, i.e. the intersection
of all $\theta$-closed sets containing $A$, is denoted by $[A]_\theta$
and is called \emph{the $\theta$-closed hull of $A$} \cite{BelCam88}.
The $\theta$-density of a space $X$ is
$d_\theta(X):=\min\{|A|:A\subset X, \cl_\theta(A)=X\}$.
Recall that a space $X$ is called \emph{Urysohn} if every two distinct
points in $X$ have disjoint closed neighborhoods.
\begin{definition}[\cite{AlaKoc00}]
For a topological space $X$, $\kappa(X)$ is the smallest cardinal
number $\kappa$ such that for each point $x\in X$, there is a
collection $\mathcal{V}_x$ of closed neighborhoods of $x$ such that
$|\mathcal{V}_x|\le \kappa$ and if $W\in \mathcal{N}_x$
then $\overline{W}$ contains a member of $\mathcal{V} _x$.
\end{definition}
\begin{remark}
In \cite{AlaKoc00}, $\kappa(X)$ is defined only for Hausdorff spaces but clearly $\kappa(X)$ is well-defined for every topological space $X$. Also, an example of a Urysohn space $X$ is constructed in \cite{AlaKoc00} such that $\kappa(X)<\chi(X)$, where $\chi(X)$ is the character of the space $X$.
\end{remark}
\begin{definition}[\cite{BonCamMat11}] \label{D1}
The \emph{Urysohn number} of a topological space
$X$, denoted by $U(X)$, is the smallest cardinal $\kappa$ such that for
every $A\subset X$ with $|A|\ge \kappa$, there exists a family
$\mathcal{C}\in\mathcal{C}_A$ such that
$\cap\mathcal{C} =\emptyset$.
\end{definition}
Spaces $X$ with $U(X)=n$ for some integer $n\ge 2$ appeared first in \cite{BonCamMat07} and
\cite{BonCamMatPan08} under the name \emph{$n$-Urysohn} and were studied further in \cite{BonPan12}. In \cite{CamCatPanTsa12} such spaces were called \emph{finitely-Urysohn}.
Clearly, $U(X)\ge 2$ and $U(X)\le |X|^+$ for every topological space
$X$. If $X$ is Hausdorff then $U(X)\le |X|$ and $X$ is Urysohn if and
only if $U(X)=2$ \cite{BonCamMat11}.
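As a simple illustration (not drawn from the cited papers) of how large the Urysohn number can be for non-Hausdorff spaces, one may consider the cofinite topology:

```latex
\begin{example}
Let $X$ be an infinite set with the cofinite topology. Every nonempty
open set $U$ is cofinite, hence infinite, so the only closed set
containing $U$ is $X$ itself and $\overline{U}=X$. Consequently
$\cap\mathcal{C}=X\ne\emptyset$ for every nonempty $A\subset X$ and every
$\mathcal{C}\in\mathcal{C}_A$, and therefore $U(X)=|X|^+$.
\end{example}
```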
\section{On some questions related to the cardinality of the $\theta$-closed hull}\label{S2}
It was shown in \cite[Theorem 1]{BelCam88} that for every Urysohn space
$X$, $|[A]_\theta|\le |A|^{\chi(X)}$ and the authors asked if that
inequality holds true for every Hausdorff space (see
\cite[Question]{BelCam88}). In \cite{BonCamMat11} the authors extended
that result to all spaces with finite Urysohn number.
\begin{theorem}[{\cite[Proposition 4]{BonCamMat11}}]\label{TBCM}
For a set $A$ in a space $X$, if $U(X)$ is finite then
$|[A]_\theta|\le |A|^{\chi(X)}$.
\end{theorem}
Since the proof given in \cite{BonCamMat11} does not apply to spaces
with infinite Urysohn numbers, the authors naturally asked the following
question.
\begin{question}[{\cite[Question 5]{BonCamMat11}}]\label{QBCM1}
Is it true that for a set $A$ in a (assume Hausdorff if necessary)
space $X$, $|[A]_\theta|\le |A|^{\chi(X)}U(X)$?
\end{question}
In \cite{BonPan12} the authors improved the inequality in
Theorem \ref{TBCM} as follows and asked if even a stronger inequality
than the one in Question \ref{QBCM1} holds true.
\begin{theorem}[{\cite[Proposition 7]{BonPan12}}]\label{TBP}
For a set $A$ in a space $X$, if $U(X)$ is finite then
$|[A]_\theta|\le |A|^{\kappa(X)}$.
\end{theorem}
\begin{question}[{\cite[Question 9]{BonPan12}}]\label{QBCM2}
Is it true that for a set $A$ in a (Hausdorff) space $X$,
$|[A]_\theta|\le |A|^{\kappa(X)}U(X)$?
\end{question}
The following example shows that the answer to Question \ref{QBCM1}
(and therefore to the other two questions) is in the negative even for
Hausdorff spaces with Urysohn number $U(X)=\omega$.
Even more, our example shows that for Hausdorff spaces it is even possible that
$|\cl_\theta(A)|>|A|^{\chi(X)}U(X)$ and
$|[A]_\theta|>(|A|\cdot U(X))^{\chi(X)}$.
(For a different example see \cite[Example 3]{CamCatPanPor12}).
\begin{example}\label{E1}
Let $\mathbb{N}$ denote the set of all positive integers, for
$m\in \mathbb{N}$ let $\mathbb{N}_m:=\{n:n\in\mathbb{N}, n\ge m\}$,
$\mathbb{R}$ be the set of all real numbers, and
$\mathfrak{c}=|\mathbb{R}|$. Let
also $S:=\{1/n: n\in \mathbb{N}\}\cup\{0\}$ and $\mathbb{N}\times S$
be the subspace of $\mathbb{R}\times\mathbb{R}$
with the inherited topology from $\mathbb{R}\times\mathbb{R}$.
Let $\alpha$ be an initial ordinal and $\{B_\beta:\beta<\alpha\}$ be a
family of $\alpha$ many pairwise disjoint copies of $\mathbb{N}\times S$.
We will refer to the points in $B_\beta$ as $(n,r)_\beta$, where
$n\in \mathbb{N}$ and $r\in S$.
For each ordinal number $\beta<\alpha$,
let $M_\beta:=B_\beta\cup \{\beta\}$ be the topological space with a
topology such that $\{\beta\}$ is closed in $M_\beta$, all points in
$B_\beta$ have the topology inherited from $\mathbb{R}\times\mathbb{R}$
and the point $\beta$ has as basic neighborhoods all sets of the form
$\{\beta\}\cup\{\mathbb{N}_m\times (S\setminus\{0\})\}_\beta$,
$m\in \mathbb{N}$. Now, let $X$ be the topological space
obtained from the disjoint union of all spaces $M_\beta$,
$\beta<\alpha$ after identifying for each $n\in\mathbb{N}$ all points
of the form $(n,0)_\beta$, $\beta<\alpha$. We will denote those points
by $(n,0)$. Then it is not difficult to verify that $X$ is a Hausdorff
space (but not Urysohn) with Urysohn number $U(X)=\omega$,
$\chi(X)=\kappa(X)=\omega$, and if $A$ is the subset
$\{(n,0):n\in\mathbb{N}\}$ of $X$ then
$|\cl_\theta(A)|=|[A]_\theta|=\alpha$ and
$|A|^{\chi(X)}U(X)=|A|^{\kappa(X)}U(X)=\omega^\omega\cdot\omega=\mathfrak{c}$.
Therefore if $\alpha > \mathfrak{c}$ then we have
$|\cl_\theta(A)|>|A|^{\chi(X)}U(X)$ and
$|[A]_\theta|>(|A|\cdot U(X))^{\chi(X)}$.
\end{example}
\section{Spaces with finite versus spaces with infinite Urysohn numbers}
We begin with the following lemma.
\begin{lemma}\label{LIG0}
Let $A$ be a nonempty subset of a topological space $X$ such that $\cap\mathcal{C}\ne\emptyset$ for every
$\mathcal{C}\in\mathcal{C}_A$. Then
$A\subset\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_A\}$.
\end{lemma}
\begin{proof}
Let $\mathcal{C}_0\in\mathcal{C}_A$ and let $G=\cap\mathcal{C}_0\neq \emptyset$.
Suppose that there exists $a_0\in A$ such that
$a_0\notin \cl_\theta(G)$. Then there is $W_{a_0}\in \mathcal{N}_{a_0}$
such that $\overline{W}_{a_0}\cap G=\emptyset$. Let
$\overline{V}_{a_0}:= \overline{U}_{a_0} \cap \overline{W}_{a_0}$, where
$\overline{U}_{a_0}\in \mathcal{C}_0$ and $U_{a_0}\in \mathcal{N}_{a_0}$.
Then the family $\mathcal{C}_1:=\{\overline{V}_{a_0}\}\cup \{\overline{U}_a:\overline{U}_a\in\mathcal{C}_0,a\in A\setminus\{a_0\}\}$ has
the property that $\cap\mathcal{C}_1=\emptyset$, a
contradiction. Therefore $A\subset \cl_\theta(\cap\mathcal{C})$ for
every $\mathcal{C}\in\mathcal{C}_A$, hence
$A\subset\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_A\}$.
\end{proof}
\begin{theorem}\label{TIG1}
Let $X$ be a topological space and $1<n<\omega$. Then $U(X)=n$
if and only if there exists a set $A\subset X$ with $|A|=n-1$ such that
$A=\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_A\}$
and
$\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_B\}=\emptyset$
for every set $B$ with $|B|\ge n$.
\end{theorem}
\begin{proof}
Suppose first that there exists a subset $A$ of $X$ with
$|A|=n-1\ge 1$, such that
$A=\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_A\}$
and $\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_B\}=\emptyset$
for every set $B$ with $|B|\ge n$. Then $A\subset \cl_\theta(\cap\mathcal{C})$ for every $\mathcal{C}\in\mathcal{C}_A$. Hence $\cap\mathcal{C}\ne \emptyset$ for every
$\mathcal{C}\in\mathcal{C}_A$ and therefore $U(X)>|A|=n-1$. Thus
$U(X)\ge n$. Suppose that $U(X)>n$. Then there exists a set $B\subset X$
with $|B|= n$ such that $\cap\mathcal{C}\ne \emptyset$ for every
$\mathcal{C}\in\mathcal{C}_B$. Then it follows from Lemma \ref{LIG0}
that $B\subset\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_B\}=\emptyset$, a contradiction. Therefore $U(X)=n$.
Now let $U(X)=n>1$. Then for every set $B\subset X$ such that $|B|=n$
there exists $\mathcal{C}\in\mathcal{C}_B$ such that
$\cap\mathcal{C}=\emptyset$ and therefore $\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_B\}=\emptyset$. Also, there exists a set $A$ with $|A|=n-1$ such that
for every $\mathcal{C}\in\mathcal{C}_A$ we have
$\cap\mathcal{C}\ne\emptyset$. Then it follows from Lemma \ref{LIG0}
that $A\subset\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_A\}$. To show that $A=\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_A\}$,
suppose that there is
$x\in \cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_A\}\setminus A$.
Therefore $\overline{U}\cap(\cap\mathcal{C})\ne\emptyset$ for every
$U\in \mathcal{N}_x$ and every $\mathcal{C}\in\mathcal{C}_A$.
Then for the set $B:=A\cup\{x\}$ we have that if
$\mathcal{C'}\in\mathcal{C}_B$ then $\cap\mathcal{C'}\ne\emptyset$.
Thus $U(X)>|B|=n$, a contradiction.
\end{proof}
\begin{remark}\label{R1}
Consider the sets $A:=\{(n,0):n<\omega\}\subset X$ and $\alpha\subset X$ in Example \ref{E1} and let $B_f$ and $B_i$ be a nonempty finite subset and an infinite subset of $\alpha$, respectively. If $\mathcal{C}\in\mathcal{C}_{B_f}$ then $\cap\mathcal{C}\subset A$ and $B_f\subsetneq\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_{B_f}\}=\cl_\theta(A)=\alpha$, while there is $\mathcal{C}\in\mathcal{C}_{B_i}$ such that $\cap\mathcal{C}=\emptyset$, hence $\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_{B_i}\}=\emptyset$.
Therefore the space $X$ in Example \ref{E1} shows that Theorem \ref{TIG1} is not always valid when $U(X)$ is infinite even for Hausdorff spaces $X$, or in other words, the subsets in spaces with finite and infinite Urysohn numbers that determine the Urysohn number have different properties. Therefore we should not be surprised that theorems which are valid for spaces with finite Urysohn numbers are not necessarily valid for spaces with infinite Urysohn numbers (see Section \ref{S2}).
\end{remark}
The following two observations are valid for topological spaces with finite or infinite Urysohn numbers.
\begin{lemma}\label{LIG1}
Let $X$ be a topological space and $A$ be a nonempty subset of $X$.
If $\cap\mathcal{C}\ne\emptyset$ for every
$\mathcal{C}\in\mathcal{C}_F$ and every finite nonempty subset $F$ of $A$ then
$A\subset\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset A, |F|<\omega\}$.
\end{lemma}
\begin{proof}
Let $F$ be a nonempty finite subset of $A$, $\mathcal{C}_0\in\mathcal{C}_F$ and $G=\cap\mathcal{C}_0$. Suppose that there exists $a_0\in A$ such that
$a_0\notin \cl_\theta(G)$. Then there is $W_{a_0}\in \mathcal{N}_{a_0}$
such that $\overline{W}_{a_0}\cap G=\emptyset$. Let
$\overline{V}_{a_0}:= \overline{W}_{a_0}$ if $a_0\notin F$ and
$\overline{V}_{a_0}:= \overline{U}_{a_0} \cap \overline{W}_{a_0}$ if $a_0\in F$, where
$\overline{U}_{a_0}\in \mathcal{C}_0$ and $U_{a_0}\in \mathcal{N}_{a_0}$. Then the family $\mathcal{C}_1:=\{\overline{V}_{a_0}\}\cup \{\overline{U}_a:\overline{U}_a\in\mathcal{C}_0,a\in F\setminus\{a_0\}\}$ has
the property that $\cap\mathcal{C}_1=\emptyset$, a
contradiction. Therefore $A\subset \cl_\theta(\cap\mathcal{C})$ for
every $\mathcal{C}\in\mathcal{C}_F$ and every nonempty finite subset $F$ of $A$, hence
$A\subset\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset A, |F|<\omega\}$.
\end{proof}
\begin{theorem}\label{TIG2}
Let $X$ be a topological space and $A$ be a nonempty subset of $X$.
If $\cap\mathcal{C}\ne\emptyset$ for every
$\mathcal{C}\in\mathcal{C}_F$ and every nonempty finite subset $F$ of $A$ then there exists a subset $M$ of $X$ such that $A\subset M$ and
$M=\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset M, |F|<\omega\}$.
\end{theorem}
\begin{proof}
Let $\alpha$ be an initial ordinal such that $\alpha=|X|^+$ and let $A$
satisfy the hypotheses of our claim. Then it follows from
Lemma \ref{LIG1} that
$A\subset\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset A, |F|<\omega\}$. Suppose that there is
$x_0\in \cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset A, |F|<\omega\}\setminus A$.
Then $\overline{U}\cap(\cap\mathcal{C})\ne\emptyset$ for every
$U\in \mathcal{N}_{x_0}$, every $\mathcal{C}\in\mathcal{C}_F$ and every nonempty finite subset $F$ of $A$. Then for the set $A_1:=A\cup\{x_0\}$ we have that if $F$ is a nonempty finite
subset of $A_1$ and $\mathcal{C}\in\mathcal{C}_{F}$ then $\cap\mathcal{C}\ne\emptyset$.
Therefore, according to Lemma \ref{LIG1},
$A_1\subset\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset A_1, |F|<\omega\}$. We continue this process for every
$\beta<\alpha$ as follows. If $\beta=\gamma+1$ for some $\gamma<\alpha$ and $A_\gamma\subsetneq\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset A_\gamma, |F|<\omega\}$ then we choose $x_\gamma\in \cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset A_\gamma, |F|<\omega\}\setminus A_\gamma$ and define $A_\beta:=A_\gamma\cup\{x_\gamma\}$. If $\beta$ is a limit ordinal then
$A_\beta:=\cup\{A_\gamma:\gamma<\beta\}$. In that case it is clear that
$\cap\mathcal{C}\ne\emptyset$ for every
$\mathcal{C}\in\mathcal{C}_F$ and every nonempty finite subset $F$ of $A_\beta$ since every such $F$ is a subset of $A_\gamma$ for some $\gamma<\beta$. Therefore, according to Lemma \ref{LIG1}, we have
$A_\beta\subset\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset A_\beta, |F|<\omega\}$. If for some $\beta<\alpha$ we have $A_\beta=\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset A_\beta, |F|<\omega\}$ then we stop and take $M$ to be $A_\beta$. This process will eventually stop since for each $\gamma<\beta< \alpha$ we have $A_\gamma\subsetneq A_\beta\subseteq X$ and $\alpha > |X|$.
\end{proof}
\section{The non-Urysohn number of a space}
Motivated by the observations in the previous section we give the following definition.
\begin{definition}\label{DIG1}
A nonempty subset $A$ of a topological space $X$ is called
\emph{finitely non-Urysohn} if for every nonempty finite subset $F$ of $A$ and
every $\mathcal{C}\in\mathcal{C}_F$, $\cap\mathcal{C}\ne\emptyset$.
$A$ is called \emph{maximal finitely non-Urysohn subset of $X$} if $A$ is a
finitely non-Urysohn subset of $X$ and if $B$ is a finitely non-Urysohn subset of $X$ such
that $A\subset B$ then $A=B$.
\end{definition}
\begin{remark}\label{R2} {\rm (a)} Using Lemma \ref{LIG1} and Definition \ref{DIG1} one can easily verify that a nonempty subset $M$ of a topological space is maximal finitely non-Urysohn if and only if $M=\cap\{\cl_\theta(\cap\mathcal{C}):\mathcal{C}\in\mathcal{C}_F, \emptyset\ne F\subset M, |F|<\omega\}$.
{\rm (b)} It follows from Theorem \ref{TIG2} and Remark \ref{R2}(a) that every finitely non-Urysohn subset of a topological space is contained in a maximal one.
{\rm (c)} Using disjoint unions of spaces as those constructed in Example \ref{E1} one can construct a Hausdorff topological space with (disjoint) maximal finitely non-Urysohn subsets of different cardinalities.
{\rm (d)} In a Urysohn space $X$ the only (maximal) finitely non-Urysohn subsets of $X$ are the singletons.
\end{remark}
Now we are ready to introduce the concept of a non-Urysohn number of a topological space $X$.
\begin{definition}\label{DIG2}
Let $X$ be a topological space. We define \emph{the non-Urysohn number
$nu(X)$ of $X$} as follows: $nu(X):=1+\sup\{|M|:M$ is a (maximal)
finitely non-Urysohn subset of $X\}$.
\end{definition}
\begin{remark}
It follows from Theorem \ref{TIG1} and Definition \ref{DIG2}
that if $X$ is a topological space with a finite Urysohn number then
$nu(X)=U(X)$. Also, $nu(X)\ge 2$ and $nu(X)\ge U(X)$ for every topological
space $X$. For the space $X$ in Example \ref{E1}, $nu(X)=\alpha$ for
every $\alpha$ while $U(X)=\alpha$ only if $\alpha<\omega$ and $U(X)=\omega$ if
$\alpha\ge\omega$. Therefore there are even Hausdorff spaces $X$ for which
$nu(X) > U(X)$.
\end{remark}
\section{On the cardinality of the $\theta$-closed hull}
In Theorem \ref{TIG3}, using the cardinal invariant non-Urysohn number of a space, we give an upper bound for $|\cl_\theta(A)|$ and $|[A]_\theta|$ of a subset $A$ in a topological space $X$. That theorem generalizes simultaneously all the results included in Theorem \ref{TT}. The proof of Theorem \ref{TIG3} follows proofs given in \cite{BelCam88}, \cite{AlaKoc00}, \cite{BonCamMatPan08}, \cite{BonCamMat11} or \cite{BonPan12}.
\begin{theorem}\label{TT}
Let $X$ be a space and $A\subset X$.
\begin{itemize}
\item[(a)] If $X$ is Urysohn then
$|[A]_\theta|\le |A|^{\chi(X)}$ \cite{BelCam88};
\item[(b)] If $X$ is Urysohn then
$|\cl_\theta(A)|\le |A|^{\kappa(X)}$ \cite{AlaKoc00};
\item[(c)] If $U(X)$ is finite then
$|[A]_\theta|\le |A|^{\chi(X)}$ \cite{BonCamMatPan08}, \cite{BonCamMat11};
\item[(d)] If $U(X)$ is finite then
$|[A]_\theta|\le |A|^{\kappa(X)}$ \cite{BonPan12}.
\end{itemize}
\end{theorem}
\begin{theorem}\label{TIG3}
Let $A$ be a subset of a topological space $X$. Then
$|\cl_\theta(A)|\le |A|^{\kappa(X)}\cdot nu(X)$ and
$|[A]_\theta|\le (|A|\cdot nu(X))^{\kappa(X)}$.
\end{theorem}
\begin{proof}
Let $\kappa(X) = m$, $nu(X) = u$, and $|A| = \tau$. For each $x\in X$
let $\mathcal{V}_x$ be a collection of closed neighborhoods of $x$ with
$|\mathcal{V}_x|\le m$ and such that if $W$ is
a closed neighborhood of $x$ then $W$ contains a member of
$\mathcal{V}_x$. For every $x\in \cl_\theta(A)$ and every
$V\in \mathcal{V}_x$, fix a point $a_{x,V}\in V\cap A$, and let
$A_x := \{a_{x,V}:V\in \mathcal{V}_x\}$. Let also
$\Gamma_x := \{V\cap A_x : V\in \mathcal{V}_x\}$. Then $\Gamma_x$ is a
centered family (the intersection of any finitely many elements of
$\Gamma_x$ is nonempty). It is not difficult to see that there are
at most $\tau^m$ such centered families. Indeed $A_x\in [A]^{\le m}$
and $V\cap A_x \in [A]^{\le m}$, for every $V \in \mathcal{V}_x$.
Since each centered family $\Gamma_x$ is a subset of $[A]^{\le m}$ and
$|\Gamma_x|\le m$, the cardinality of the set of all such families is at
most $(|A|^m)^m=|A|^m=\tau^m$.
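In detail, this count uses only standard cardinal arithmetic; a minimal verification (under the usual assumptions that $|A|\ge 2$ and that $m=\kappa(X)$ is infinite) is:

```latex
\left|\left[[A]^{\le m}\right]^{\le m}\right|
  \le \left|[A]^{\le m}\right|^{m}
  \le \left(|A|^{m}\right)^{m}
  = |A|^{m\cdot m}
  = |A|^{m}
  = \tau^{m}.
```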
We claim that the mapping $x\rightarrow \Gamma_x$ is $(\le u)$-to-one. Assume the contrary. Then there is a subset $K \subset \cl_\theta(A)$ such that $|K| = u^+$ and
every $x\in K$ corresponds to the same centered family $\Gamma$. Since
$nu(X)=u$, there exists
a nonempty finite subset $F$ of $K$ and $\mathcal{C}\in \mathcal{C}_F$
such that $\cap{\mathcal{C}}=\emptyset$. Then for every $x \in F$ and
$U_x\in \mathcal{C}$ we have $U_x \cap A_x \in \Gamma$; hence
$\Gamma$ is not centered, a contradiction.
Therefore the mapping $x \rightarrow \Gamma_x$ from $\cl_\theta(A)$ to
$[[A]^{\le m}]^{\le m}$ is $(\le u)$-to-one, and thus
\begin{equation}\label{Eq1}
|\cl_\theta(A)|\le u\cdot (\tau^m)^m = u\cdot \tau^m
\end{equation}
(Note that the proof that the
mapping $x\rightarrow \Gamma_x$ is $(\le u)$-to-one does not
depend upon the cardinality of the set $A$.)
It is not difficult to see (e.g., as in the proof of Theorem 1 in
\cite{BelCam88}) that if we set $A_0 = A$ and
$A_\alpha = \cl_\theta(\bigcup_{\beta<\alpha}A_\beta)$
for all $0<\alpha\le m^+$, then $[A]_\theta = A_{m^+}$.
Let $\kappa=u\cdot \tau$. It follows from (\ref{Eq1}) that
$|A_2|\le u\cdot (u\cdot \tau^m)^m=u^m\cdot\tau^m=(u\cdot \tau)^m=\kappa^m$.
To finish the proof we will show that if $\alpha$ is such that
$2\le\alpha\le m^+$ then $|A_\alpha|\le \kappa^m$, and therefore
$|[A]_\theta|\le \kappa^m$.
Suppose that $\alpha_0\le m^+$ is the smallest ordinal such that
$|A_{\alpha_0}|> \kappa^m$. Then we have $|A_\beta|\le \kappa^m$
for each $\beta<\alpha_0$ and therefore
$|\cup_{\beta<\alpha_0}A_\beta|\le\kappa^m\cdot m^+=\kappa^m$.
Now using (\ref{Eq1}) again we get
$|A_{\alpha_0}|\le u\cdot(\kappa^m)^m =\kappa^m$, a contradiction.
\end{proof}
\begin{corollary}
If $X$ is a topological space then
$|X|\le (d_\theta(X))^{\kappa(X)}nu(X)$.
\end{corollary}
\begin{remark}\label{R3}
Example \ref{E1} shows that the inequality $|\cl_\theta(A)|\le |A|^{\kappa(X)}\cdot nu(X)$
in Theorem \ref{TIG3} is exact.
To see that, let $\alpha\ge\mathfrak{c}$ and take
$A=\{(n,0):n\in\mathbb{N}\}\subset X$ and
$M=\{\beta:\beta<\alpha\}\subset X$. Then $\cl_\theta(A)=A\cup M$ and
therefore $|\cl_\theta(A)|=\alpha\le |A|^{\kappa(X)}\cdot nu(X)=\omega^\omega\cdot\alpha=\alpha$.
To see that the inequality
$|[A]_\theta|\le (|A|\cdot nu(X))^{\kappa(X)}$ in Theorem \ref{TIG3}
is also exact, one can construct a Hausdorff (non-Urysohn) space $Y$
and a set $A\subset Y$ with $|[A]_\theta|=(|A|\cdot nu(Y))^{\kappa(Y)}$
as follows. Take the space $X_1:=X$ from Example \ref{E1} and the set
$M=\{\beta:\beta<\alpha\}\subset X_1$. Represent the set $M$ as a disjoint
union $\cup_{\beta<\alpha} M_\beta$ of countable infinite subsets of $M$.
Take $\alpha$ many disjoint copies $\{X_\beta:\beta<\alpha\}$ of the space
$X$ and for each $\beta<\alpha$ identify the points of $A_\beta$ with the
points $\{(n,0):n<\omega\}\subset X_\beta$. Call the resulting space
$X_2$. Now $X_1\subset X_2$ and in $X_2$ we have $\alpha$ many new copies
of the set $M$. For each such set repeat the previous procedure to obtain
the space $X_3$ and continue this procedure for each $n<\omega$. Call the
resulting space $Y$. It is not difficult to see that $U(Y)=\omega$,
$\chi(Y)=\kappa(Y)=\omega$, $nu(Y)=\alpha$, and if $A$ is the subset
$\{(n,0):n\in\mathbb{N}\}$ of $X_1$ then
$|[A]_\theta|=\alpha^\omega=(\omega\cdot\alpha)^\omega=(|A|\cdot nu(Y))^{\kappa(Y)}$.
Notice that if $\alpha>\mathfrak{c}$ is chosen to be a cardinal with a countable cofinality then $|[A]_\theta|=\alpha^\omega > \alpha=\omega^\omega\cdot\alpha= |A|^{\kappa(Y)}\cdot nu(Y)$ and therefore the right-hand side of the second inequality cannot be replaced by the right-hand side of the first inequality.
\end{remark}
\section{Some cardinal inequalities involving the non-Urysohn number}
We recall some definitions.
\begin{definition}[{\cite{WilDis84}, \cite{Hod06}}]
The \emph{almost Lindel\"{o}f degree of a space $X$ with respect to closed
sets} is $aL_c(X):=\sup\{aL(F,X):F$ is a closed subset of $X\}$, where
$aL(F,X)$ is the minimal cardinal number $\tau$ such that for every open
(in $X$) cover $\mathcal{U}$ of $F$ there is a subfamily
$\mathcal{U}_0\subset \mathcal{U}$ such that
$|\mathcal{U}_0|\le \tau$ and
$F\subset \cup\{\overline{U}:U\in\mathcal{U}_0\}$.
$aL(X,X)$ is called \emph{almost Lindel\"{o}f degree of $X$} and is
denoted by $aL(X)$.
\end{definition}
\begin{remark}\label{R4}
The cardinal function $aL_c(X)$ was introduced in \cite{WilDis84} under
the name Almost Lindel\"{o}f Degree and was denoted by $aL(X)$. Here we
follow the notation and terminology used in \cite{Hod06} and suggested in
\cite{BelCam88}.
\end{remark}
\begin{definition}[\cite{Ala93}]
The cardinal function $wL_c(X)$ is the smallest cardinal $\tau$
such that if $A$ is a closed subset of $X$ and $\mathcal{U}$ is an open
(in $X$) cover of $A$, then there exists $\mathcal{V}\subset \mathcal{U}$
with $|\mathcal{V}|\le \tau$ such that
$A\subset \overline{\cup\mathcal{V}}$.
\end{definition}
\begin{definition}[\cite{Arh95}]
The cardinal function $sL(X)$ is the smallest cardinal $\tau$
such that if $A\subset X$ and $\mathcal{U}$ is an open (in $X$) cover of
$\overline{A}$, then there exists $\mathcal{V}\subset \mathcal{U}$ with
$|\mathcal{V}|\le \tau$ such that $A\subset \overline{\cup\mathcal{V}}$.
\end{definition}
\begin{definition}[\cite{AlaKoc00}]
The cardinal function $sL_\theta(X)$ is the smallest cardinal $\tau$
such that if $A\subset X$ and $\mathcal{U}$ is an open (in $X$) cover of
$\cl_\theta(A)$, then there exists $\mathcal{V}\subset \mathcal{U}$ with
$|\mathcal{V}|\le \tau$ such that $A\subset \overline{\cup\mathcal{V}}$.
\end{definition}
Clearly $sL_\theta(X)\le sL(X)\le wL_c(X)\le L(X)$ and
$aL(X)\le aL_c(X)\le L(X)$, where $L(X)$ is the Lindel\"{o}f degree of
$X$. In \cite{AlaKoc00} an example of a Urysohn space $X$ is constructed
such that $sL_\theta(X)<sL(X)$. For examples of Urysohn spaces such that
$aL(X)<wL_c(X)$ or $wL_c(X)<aL(X)$ see \cite{Hod06} and for an example
of a Urysohn space for which $aL(X)<aL_c(X)<L(X)$ see \cite{WilDis84}
or \cite{Hod06}.
Here are some cardinal inequalities that involve the cardinal functions
defined above. For more related results see the survey paper \cite{Hod06}.
\begin{theorem}\label{T}
\noindent
\begin{itemize}
\item[(a)] If $X$ is a Hausdorff space, then $|X|\le 2^{\chi(X)L(X)}$
\cite{Arh69}.
\item[(b)] If $X$ is a Hausdorff space, then $|X|\le 2^{\chi(X)aL_c(X)}$
\cite{WilDis84}.
\item[(c)] If $X$ is a Urysohn space, then $|X|\le 2^{\chi(X)aL(X)}$
\cite{BelCam88}.
\item[(d)] If $X$ is a Hausdorff space with $U(X)<\omega$,
then $|X|\le 2^{\chi(X)aL(X)}$ \cite{BonCamMat11}.
\item[(e)] If $X$ is a Urysohn space, then
$|X|\le 2^{\chi(X)wL_c(X)}$ \cite{Ala93}.
\item[(f)] If $X$ is a topological space with $U(X)<\omega$, then
$|X|\le 2^{\chi(X)wL_c(X)}$ \cite{BonCamMat11}.
\item[(g)] If $X$ is a Hausdorff space, then
$|X|\le 2^{\chi(X)sL(X)}$ \cite{Arh95}.
\item[(h)] If $X$ is a Urysohn space, then
$|X|\le 2^{\kappa(X)sL_\theta(X)}$ \cite{AlaKoc00}.
\item[(i)] If $X$ is a topological space with $U(X)<\omega$, then
$|X|\le 2^{\kappa(X)sL_\theta(X)}$ \cite{BonPan12}.
\end{itemize}
\end{theorem}
Recently in \cite{BonPan12}, after proving the inequality given in Theorem
\ref{T}(i), the authors asked the following question.
\begin{question}[{\cite[Question 11]{BonPan12}}]
Can one conclude that the inequality
$$|X|\le U(X)^{\kappa(X)sL_\theta(X)}$$ is
true for every Hausdorff space $X$?
\end{question}
We show below that the answer to the above question is in the affirmative
if the Urysohn number $U(X)$ is replaced by the non-Urysohn number
$nu(X)$.
\begin{theorem}\label{TIG4}
For every topological space $X$, $|X|\le nu(X)^{\kappa(X)sL_\theta(X)}$.
\end{theorem}
\begin{proof}
Let $\kappa(X)sL_\theta(X)=m$ and $nu(X)=u$. For each $x\in X$
let $\mathcal{V}_x$ be a collection of closed neighborhoods of $x$ with
$|\mathcal{V}_x|\le m$ and such that if $W$ is
a closed neighborhood of $x$ then $W$ contains a member of
$\mathcal{V}_x$. Let $x_0$ be an arbitrary point in $X$.
Recursively we construct a family
$\{F_\alpha:\alpha<m^+\}$ of subsets of $X$ with the following
properties:
\begin{itemize}
\item[(i)] $F_0=\{x_0\}$ and
$\cl_\theta(\cup_{\beta<\alpha}F_\beta)\subset F_\alpha$ for
every $0<\alpha<m^+$;
\item[(ii)] $|F_\alpha|\le u^m$ for every $\alpha < m^+$;
\item[(iii)] for every $\alpha<m^+$, and every
$F\subset \cl_\theta(\cup_{\beta<\alpha}F_\beta)$ with $|F|\le m$ if
$X\setminus \overline{\cup\mathcal{U}}\ne\emptyset$ for some
$\mathcal{U}\in \mathcal{U}_F$, then
$F_\alpha\setminus \overline{\cup\mathcal{U}}\ne \emptyset$.
\end{itemize}
Suppose that the sets $\{F_\beta:\beta<\alpha\}$ satisfying (i)-(iii) have
already been defined. We will define $F_\alpha$. Since $|F_\beta|\le u^m$
for each $\beta < \alpha$, we have
$|\cup_{\beta<\alpha}F_\beta|\le u^m\cdot m^+=u^m$.
Then it follows from Theorem \ref{TIG3}, that
$|\cl_\theta(\cup_{\beta<\alpha}F_\beta)|\le u^m$.
Therefore there are at most $u^m$ subsets $F$ of
$\cl_\theta(\cup_{\beta<\alpha}F_\beta)$ with $|F|\le m$ and
for each such set $F$ we have $|\mathcal{U}_F|\le m^m=2^m\le u^m$.
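The equality $m^{m}=2^{m}$ invoked here is the standard consequence of $m$ being infinite (as is usual for these cardinal functions):

```latex
2^{m} \le m^{m} \le \left(2^{m}\right)^{m} = 2^{m\cdot m} = 2^{m}.
```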
For each $F\subset\cl_\theta(\cup_{\beta<\alpha}F_\beta)$ with $|F|\le m$
and each $\mathcal{U}\in\mathcal{U}_F$ for which
$X\setminus \overline{\cup\mathcal{U}}\ne\emptyset$ we choose a point in
$X\setminus \overline{\cup\mathcal{U}}$ and let $E_\alpha$ be
the set of all these points. Clearly $|E_\alpha|\le u^m$. Let
$F_\alpha=\cl_\theta(E_\alpha\cup (\cup_{\beta<\alpha}F_\beta))$. Then it follows from our
construction that $F_\alpha$ satisfies (i) and (iii) while (ii) follows
from Theorem \ref{TIG3}.
Now let $G=\cup_{\alpha<m^+}F_\alpha$. Clearly $|G|\le u^m\cdot m^+=u^m$.
We will show that $G$ is $\theta$-closed. Suppose
the contrary and let $x\in \cl_\theta(G)\setminus G$. Then for
each $U\in\mathcal{V}_x$ we have $U\cap G\ne\emptyset$ and therefore there
is $\alpha_U < m^+$ such that $U \cap F_{\alpha_U}\ne\emptyset$. Since
$|\{\alpha_U: U\in \mathcal{V}_x\}|\le
m$, there is $\beta< m^+$ such that $\beta > \alpha_U$ for every
$U\in\mathcal{V}_x$ and therefore $x\in \cl_\theta(F_\beta)\subset G$, a
contradiction.
To finish the proof it remains to check that $G = X$. Suppose that there
is $x\in X\setminus G$. Then there is $V\in \mathcal{V}_x$ such that
$V\cap G=\emptyset$. Hence, for every $y\in G$ there is
$V_y \in \mathcal{V}_y$ such that $V_y \cap \Int(V)=\emptyset$. Since
$\{\Int(V_y) : y \in G\}$ is an open cover of $G$ and $G$ is
$\theta$-closed, there is $F\subset G$ with $|F|\le m$ such
that $G \subset \overline{\cup_{y\in F}\Int(V_y)}$.
Since $|F|\le m$, there is $\beta<m^+$ such that $F\subset F_\beta$.
Then for $\mathcal{U}:= \{\Int(V_y) : y \in F\}$ we have
$\mathcal{U}\in \mathcal{U}_F$ and
$x\in X\setminus \overline{\cup\mathcal{U}}$.
Then it follows from our construction that
$F_{\beta+1}\setminus \overline{\cup\mathcal{U}}\ne\emptyset$, a
contradiction since $F_{\beta+1}\subset G\subset \overline{\cup\mathcal{U}}$.
\end{proof}
\begin{corollary}\label{CIG5}
For every topological space $X$, $|X|\le nu(X)^{\chi(X)wL_c(X)}$.
\end{corollary}
\begin{remark}
In 1979, A. V. Arhangel{\cprime}ski{\u\i} asked if the inequality $|X|\le 2^{\chi(X)wL_c(X)}$ was true for every Hausdorff topological space $X$ (see \cite[Question 2]{Hod06}). It follows immediately from Corollary \ref{CIG5} that the answer to his question is in the affirmative for all spaces with $nu(X)\le 2^\omega$. But as Example 3.10 in \cite{Got12b} shows, there are $T_0$-topological spaces for which that inequality is not true (in that example $nu(X)>2^{\omega}$).
\end{remark}
Modifying slightly the proof of Theorem \ref{TIG4} one can prove the
following result.
\begin{theorem}
For every topological space $X$, $|X|\le nu(X)^{\kappa(X)aL(X)}$.
\end{theorem}
\begin{proof}
Let $\kappa(X)aL(X)=m$ and $nu(X)=u$. For each $x\in X$
let $\mathcal{V}_x$ be a collection of closed neighborhoods of $x$ with
$|\mathcal{V}_x|\le m$ and such that if $W$ is
a closed neighborhood of $x$ then $W$ contains a member of
$\mathcal{V}_x$. Let $x_0$ be an arbitrary point in $X$.
Recursively we construct a family
$\{F_\alpha:\alpha<m^+\}$ of subsets of $X$ with the following
properties:
\begin{itemize}
\item[(i)] $F_0=\{x_0\}$ and
$\cl_\theta(\cup_{\beta<\alpha}F_\beta)\subset F_\alpha$ for
every $0<\alpha<m^+$;
\item[(ii)] $|F_\alpha|\le u^m$ for every $\alpha < m^+$;
\item[(iii)] for every $\alpha<m^+$, and every
$F\subset \cl_\theta(\cup_{\beta<\alpha}F_\beta)$ with $|F|\le m$ if
$X\setminus \cup\mathcal{C}\ne\emptyset$ for some
$\mathcal{C}\in \mathcal{C}_F$, then
$F_\alpha\setminus \cup\mathcal{C}\ne \emptyset$.
\end{itemize}
Suppose that the sets $\{F_\beta:\beta<\alpha\}$ satisfying (i)-(iii) have
already been defined. We will define $F_\alpha$. Since $|F_\beta|\le u^m$
for each $\beta < \alpha$, we have
$|\cup_{\beta<\alpha}F_\beta|\le u^m\cdot m^+=u^m$.
Then it follows from Theorem \ref{TIG3}, that
$|\cl_\theta(\cup_{\beta<\alpha}F_\beta)|\le u^m$.
Therefore there are at most $u^m$ subsets $F$ of
$\cl_\theta(\cup_{\beta<\alpha}F_\beta)$ with $|F|\le m$ and
for each such set $F$ we have $|\mathcal{C}_F|\le m^m=2^m\le u^m$.
For each $F\subset\cl_\theta(\cup_{\beta<\alpha}F_\beta)$ with $|F|\le m$
and each $\mathcal{C}\in\mathcal{C}_F$ for which
$X\setminus \cup\mathcal{C}\ne\emptyset$ we choose a point in
$X\setminus \cup\mathcal{C}$ and let $E_\alpha$ be
the set of all these points. Clearly $|E_\alpha|\le u^m$. Let
$F_\alpha=\cl_\theta(E_\alpha\cup (\cup_{\beta<\alpha}F_\beta))$. Then it follows from our
construction that $F_\alpha$ satisfies (i) and (iii) while (ii) follows
from Theorem \ref{TIG3}.
Now let $G=\cup_{\alpha<m^+}F_\alpha$. Clearly $|G|\le u^m\cdot m^+=u^m$.
We will show that $G$ is $\theta$-closed. Suppose
the contrary and let $x\in \cl_\theta(G)\setminus G$. Then for
each $U\in\mathcal{V}_x$ we have $U\cap G\ne\emptyset$ and therefore there
is $\alpha_U < m^+$ such that $U \cap F_{\alpha_U}\ne\emptyset$. Since
$|\{\alpha_U: U\in \mathcal{V}_x\}|\le
m$, there is $\beta< m^+$ such that $\beta > \alpha_U$ for every
$U\in\mathcal{V}_x$ and therefore $x\in \cl_\theta(F_\beta)\subset G$, a
contradiction.
To finish the proof it remains to check that $G = X$. Suppose that there
is $x\in X\setminus G$. Then there is $V\in \mathcal{V}_x$ such that
$V\cap G=\emptyset$. Hence for every $y\in G$ there is
$V_y \in \mathcal{V}_y$ such that $V_y \cap \Int(V)=\emptyset$ and for
every $z\in (X\setminus \{x\})\setminus G$ there is
$V_z \in \mathcal{V}_z$ such that $V_z \cap G=\emptyset$. Since
$\{\Int(V_y) : y \in G\}\cup\{\Int(V_z) : z \in (X\setminus \{x\})\setminus G\}\cup\{\Int(V)\}$ is an open cover of $X$, there is $F'\subset X$ with $|F'|\le m$ such
that $X \subset {\cup_{t\in F'}V_t}$. Let $F:=F'\cap G\neq\emptyset$.
Then $G\subset \cup\{V_y : y \in F\}$.
Since $|F|\le m$, there is $\beta<m^+$ such that $F\subset F_\beta$.
Then for $\mathcal{C}:= \{V_y : y \in F\}$ we have
$\mathcal{C}\in \mathcal{C}_F$ and
$x\in X\setminus \cup\mathcal{C}$.
Then it follows from our construction that
$F_{\beta+1}\setminus \cup\mathcal{C}\ne\emptyset$, a
contradiction since $F_{\beta+1}\subset G\subset \cup\mathcal{C}$.
\end{proof}
\begin{corollary}
For every topological space $X$ with $nu(X)<\omega$ (or, equivalently,
$U(X)<\omega$), $|X|\le 2^{\kappa(X)aL(X)}$.
\end{corollary}
\begin{corollary}
For every Urysohn space $X$, $|X|\le 2^{\kappa(X)aL(X)}$.
\end{corollary}
\begin{remark}
In parallel with Definition \ref{DIG2} one can introduce the notion of a \emph{non-Hausdorff number} of a topological space. For results related to that notion see \cite{Got12b}.
\end{remark}
\section{Introduction}
An inverse problem of signal processing
is to reconstruct the original information from its degraded
version. It is not limited to image processing, but it often
arises in image processing.
When a natural image is reconstructed, the reconstructed
image sometimes does not look natural even though it is close to
the original image under a reasonable metric, for example
the mean squared error.
When the reconstructed information is close to the original,
it is often believed that it should also look natural.
Blau and Michaeli \cite{cvpr2018}
questioned this unproven belief.
In their research \cite{cvpr2018},
they mathematically formulated the \emph{naturalness}
of the reconstructed information by a
distance of the probability distributions of
the reconstructed information and the original information.
The reasoning behind this is that the perceptional quality of
a reconstruction method is often evaluated by
how often a human observer can distinguish an output of
the reconstruction method from natural ones.
Such a subjective evaluation can mathematically be
modeled as a hypothesis testing \cite{cvpr2018}.
A reconstructed image is more easily distinguished
as the variational distance
$\sigma(P_R, P_N)$ increases \cite{cvpr2018},
where $P_R$ is the probability distribution of
the reconstructed information and
$P_N$ is that of
the natural one.
They regarded the perceptional quality of reconstruction as
a distance between $P_R$ and $P_N$.
The distance between the reconstructed information
and the original information is conventionally called distortion.
They discovered that there exists a tradeoff
between perceptional quality and distortion, and
named it the \emph{perception-distortion tradeoff}.
Claude Shannon \cite[Chapter 5]{hanbook} initiated the rate-distortion
theory in the 1950s.
It clarifies the tradeoff between
information rate and distortion in the lossy source coding
(lossy data compression).
The rate-distortion theory has served as a theoretical
foundation of image coding for the past several decades, as
drawing a rate-distortion curve is a common practice in
research articles on image coding.
Since distortion and perceptional quality are now considered
two different things,
it is natural to consider a tradeoff among
information rate, distortion and perceptional quality.
Blau and Michaeli \cite{cvpr2018}
briefly mentioned the rate-distortion theory,
but they did not clarify the tradeoff among the three.
Then the author \cite{matsumotocomex} clarified the tradeoff
among the three for fixed-length coding, but did not
clarify it for variable-length coding, where
fixed and variable refer to whether the length of a codeword is
fixed or variable, respectively \cite[Chapter 5]{hanbook}.
Variable-length lossy source coding is practically
more important than
its fixed-length counterpart because
most image and audio coding methods are variable-length.
The purpose of this letter is to mathematically
define the tradeoff of variable-length lossy source
coding for general information sources,
and to express the tradeoff in terms of
information spectral quantities introduced by Han and Verd\'u
\cite{hanbook}.
We also discuss the fixed-length coding with average distortion
criterion that was missing in the previous letter \cite{matsumotocomex}.
Since the length limitation is strict in this journal,
citations to the original papers are replaced by
those to the textbook \cite{hanbook}, and
the mathematical proofs are a bit compressed.
The author begs readers' kind understanding.
The base of $\log$ is an arbitrarily fixed real number $>1$
unless otherwise stated.
\section{Preliminaries}
The following definitions are borrowed from Han's textbook
\cite{hanbook}.
Let
\[
\mathbf{X} = \left\{ X^n = (X_1^{(n)}, \ldots, X_n^{(n)} ) \right\}_{n=1}^\infty
\]
be a general information source, where the alphabet of the random variable $X^n$
is the $n$-th Cartesian product $\mathcal{X}^n$ of some finite alphabet
$\mathcal{X}$.
For a sequence of real-valued random variables $Z_1$, $Z_2$, \ldots
we define
\[
\textrm{p-}\limsup_{n\rightarrow \infty} Z_n =\inf\left\{
\alpha \mid \lim_{n\rightarrow\infty} \mathrm{Pr}[Z_n > \alpha] = 0 \right\}.
\]
For two general information sources $\mathbf{X}$ and $\mathbf{Y}$ we define
\begin{eqnarray*}
\overline{I}(\mathbf{X}; \mathbf{Y}) &=& \textrm{p-}\limsup_{n\rightarrow \infty}
\frac{1}{n} \log \frac{P_{X^nY^n}(X^n, Y^n)}{P_{X^n}(X^n)P_{Y^n}(Y^n)},\\
H_K(\mathbf{X}) &=& \limsup_{n\rightarrow \infty}
\frac{1}{n} H_K(X^n),\\
F_\mathbf{X}(R)& =& \limsup_{n\rightarrow \infty} \mathrm{Pr}\left[
\frac{1}{n} \log \frac{1}{P_{X^n}(X^n)} \geq R \right],
\end{eqnarray*}
where $H_K(X^n)$ is the Shannon entropy of $X^n$ in $\log_K$.
For two distributions $P$ and $Q$ on an alphabet $\mathcal{X}$,
we define the variational distance $\sigma(P,Q)$ as
$\sum_{x\in \mathcal{X}} |P(x)-Q(x)|/2$. In the rate-distortion theory, we usually
assume a reconstruction alphabet different from a source alphabet.
In order to consider the distribution similarity of reconstruction,
in this letter we assume $\mathcal{X}^n$ as both source and
reconstruction alphabets.
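As a quick numerical illustration of the definition above, the variational distance of two distributions on a finite alphabet can be computed directly; the sketch below is not from the letter, and the two example distributions are arbitrary:

```python
def variational_distance(p, q):
    # sigma(P, Q) = (1/2) * sum over x of |P(x) - Q(x)|,
    # where p and q are dicts mapping symbols to probabilities.
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

# Two example distributions on the alphabet {0, 1, 2}.
P = {0: 0.5, 1: 0.3, 2: 0.2}
Q = {0: 0.2, 1: 0.3, 2: 0.5}
print(variational_distance(P, Q))  # 0.3
```

Note that $\sigma$ equals $0$ for identical distributions and $1$ for distributions with disjoint supports, which is what makes it a natural perception criterion.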
\section{Variable-length source coding}\label{sec2}
An encoder of length $n$ is a \emph{stochastic}
mapping $f_n: \mathcal{X}^n \rightarrow
\mathcal{U}^*$, where $\mathcal{U} = \{1$, \ldots, $K\}$
and $\mathcal{U}^*$ is the set of finite-length sequences over $\mathcal{U}$.
By \emph{stochastic} we mean that the encoder output $f_n(x^n)$
is probabilistic with a fixed input $x^n \in \mathcal{X}^n$.
The corresponding decoder of length $n$
is a deterministic mapping
$g_n: \mathcal{U}^* \rightarrow \mathcal{X}^n$.
We denote by $|f_n(x^n)|$ the (random variable of)
length of sequence $f_n(x^n) \in \mathcal{U}^*$
for $x^n \in \mathcal{X}^n$.
We denote by $\delta_n: \mathcal{X}^n \times \mathcal{X}^n
\rightarrow [0,\infty)$ a general distortion function.
\subsection{Average distortion criterion}
\begin{definition}
A triple $(R,D,S)$ is said to be $va$-achievable
if there exists a sequence of encoder and decoder
$(f_n$, $g_n)$ such that
\begin{eqnarray}
\limsup_{n\rightarrow \infty} \frac{\mathbf{E}[|f_n(X^n)|]}{n} & \leq & R,\label{eq3}\\
\limsup_{n\rightarrow \infty} \frac{1}{n}\mathbf{E}[\delta_n(X^n, g_n(f_n(X^n)))] & \leq & D,\label{eq4}\\
\limsup_{n\rightarrow \infty} \sigma(P_{g_n(f_n(X^n))}, P_{X^n}) & \leq & S.\label{eq5}
\end{eqnarray}
Define the function $R_{va}(D,S)$ by
\[
R_{va}(D,S) = \inf\{ R \mid (R,D,S) \mbox{ is $va$-achievable }\}.
\]
\end{definition}
\begin{theorem}\label{thm1}
\[
R_{va}(D,S) = \inf_{\mathbf{Y}} H_K(\mathbf{Y})
\]
where the infimum is taken with respect to all
general information sources $\mathbf{Y}$ satisfying
\begin{eqnarray}
\limsup_{n\rightarrow \infty} \frac{1}{n}\mathbf{E}[\delta_n(X^n, Y^n)] & \leq & D,\label{eq6}\\
\limsup_{n\rightarrow \infty} \sigma(P_{Y^n}, P_{X^n}) & \leq & S.\label{eq7}
\end{eqnarray}
\end{theorem}
\noindent\textbf{Proof:}
Suppose that a pair of encoder $f_n$ and decoder $g_n$ satisfies
Eqs.\ (\ref{eq3})--(\ref{eq5}).
Let $Y^n=g_n(f_n(X^n))$, and define the general information
source $\mathbf{Y}$ from $Y^n$.
We immediately see that $\mathbf{Y}$ satisfies Eqs.\ (\ref{eq6})
and (\ref{eq7}).
By the same argument as
\cite[p.\ 349]{hanbook}
we immediately see
\[
\limsup_{n\rightarrow \infty} \frac{\mathbf{E}[|f_n(X^n)|]}{n} \geq H_K(\mathbf{Y}).
\]
On the other hand,
suppose that a general information source
$\mathbf{Y}$ satisfies Eqs.\ (\ref{eq6}) and (\ref{eq7}).
Let $f'_n$ and $g'_n$ be a \emph{lossless} variable-length
encoder and its decoder \cite[Section 1.7]{hanbook}
for $\mathbf{Y}$
such that $Y^n = g'_n(f'_n(Y^n))$ and
\[
\limsup_{n\rightarrow \infty} \frac{\mathbf{E}[|f'_n(Y^n)|]}{n} = H_K(\mathbf{Y}).
\]
For a given information sequence $x^n \in \mathcal{X}^n$,
the encoder randomly chooses $y^n\in \mathcal{X}^n$ according to the
conditional distribution $P_{Y^n|X^n}(\cdot | x^n)$,
and defines the codeword as $f'_n(y^n)$.
The decoding result is $y^n=g'_n(f'_n(y^n))$.
Since the probability distribution of
the decoding result $g'_n(f'_n(Y^n))$ is $P_{Y^n}$,
we see that the constructed encoder and decoder
satisfy Eqs.\ (\ref{eq4}) and (\ref{eq5}).
\rule{1ex}{1ex}
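The converse construction in the proof above can be checked numerically: when the encoder first samples $y^n$ from $P_{Y^n|X^n}(\cdot\mid x^n)$ and then applies a lossless code with $y^n=g'_n(f'_n(y^n))$, the distribution of the decoding result is exactly the marginal $P_{Y^n}$. The following toy sketch (a hypothetical two-symbol example; all numbers are illustrative assumptions, not from the letter) verifies the marginalization step:

```python
# Toy source distribution P_X and conditional distribution P_{Y|X}
# on the single-letter alphabet {0, 1}.
P_X = {0: 0.6, 1: 0.4}
P_Y_given_X = {
    0: {0: 0.9, 1: 0.1},  # P_{Y|X}(y | x = 0)
    1: {0: 0.2, 1: 0.8},  # P_{Y|X}(y | x = 1)
}

# Because the lossless code satisfies y = g'(f'(y)), the decoder output
# has the marginal distribution P_Y(y) = sum_x P_X(x) * P_{Y|X}(y | x).
P_out = {y: sum(P_X[x] * P_Y_given_X[x][y] for x in P_X) for y in (0, 1)}
print(P_out)
```

Since the decoder output distribution equals $P_{Y^n}$ exactly, the perception constraint is inherited directly from the chosen $\mathbf{Y}$.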
\subsection{Maximum distortion criterion}
\begin{definition}
A triple $(R,D,S)$ is said to be $vm$-achievable
if there exists a sequence of encoder and decoder
$(f_n$, $g_n)$ such that
\begin{eqnarray*}
\limsup_{n\rightarrow \infty} \frac{\mathbf{E}[|f_n(X^n)|]}{n} & \leq & R,\\
\textrm{p-}\limsup_{n\rightarrow \infty} \frac{1}{n}\delta_n(X^n, g_n(f_n(X^n))) & \leq & D,\\
\limsup_{n\rightarrow \infty} \sigma(P_{g_n(f_n(X^n))}, P_{X^n}) & \leq & S.
\end{eqnarray*}
Define the function $R_{vm}(D,S)$ by
\[
R_{vm}(D,S) = \inf\{ R \mid (R,D,S) \mbox{ is $vm$-achievable }\}.
\]
\end{definition}
\begin{theorem}\label{thm2}
\[
R_{vm}(D,S) = \inf_{\mathbf{Y}} H_K(\mathbf{Y})
\]
where the infimum is taken with respect to all
general information sources $\mathbf{Y}$ satisfying Eq.\ (\ref{eq7}) and
\[
\textrm{p-}\limsup_{n\rightarrow \infty} \frac{1}{n}\delta_n(X^n, Y^n) \leq D.
\]
\end{theorem}
\noindent\textbf{Proof:}
The proof is almost a verbatim copy of that of Theorem \ref{thm1} and
is omitted.
\rule{1ex}{1ex}
\begin{remark}
The tradeoff for variable-length coding
with the average distortion criterion and without
the perception criterion was also determined by
using \emph{stochastic} encoders \cite[Section 5.7]{hanbook},
but with the maximum distortion criterion without the perception
criterion, only the \emph{deterministic} encoders were sufficient
to clarify the tradeoff \cite[Section 5.6]{hanbook}.
It is not clear at present whether or not
we can remove the randomness from encoders in Theorem \ref{thm2}.
\end{remark}
\section{Fixed-length coding with the average distortion criterion}
In this section we state the tradeoff for
fixed-length coding with the average distortion criterion,
because it has never been stated elsewhere. The proof is
almost the same as that in \cite{matsumotocomex}.
Note that the definition of an encoder will be
different from that in Section \ref{sec2} and that an assumption on
the distortion $\delta_n$ will be added.
An encoder of length $n$ is a \emph{deterministic}
mapping $f_n: \mathcal{X}^n \rightarrow
\{1$, \ldots, $M_n\}$, and
the corresponding decoder of length $n$
is a deterministic mapping
$g_n: \{1$, \ldots, $M_n\} \rightarrow \mathcal{X}^n$.
We require an additional assumption that
$\delta_n(x^n, x^n)=0$ for all $n$ and $x^n \in \mathcal{X}^n$.
\begin{definition}
A triple $(R,D,S)$ is said to be $fa$-achievable
if there exists a sequence of encoder-decoder pairs
$(f_n, g_n)$ such that
\begin{eqnarray*}
\limsup_{n\rightarrow \infty} \frac{\log M_n}{n} & \leq & R,\\
\limsup_{n\rightarrow \infty} \frac{1}{n}\mathbf{E}[\delta_n(X^n, g_n(f_n(X^n)))] & \leq & D,\\
\limsup_{n\rightarrow \infty} \sigma(P_{g_n(f_n(X^n))}, P_{X^n}) & \leq & S.
\end{eqnarray*}
Define the function $R_{fa}(D,S)$ by
\[
R_{fa}(D,S) = \inf\{ R \mid (R,D,S) \mbox{ is $fa$-achievable }\}.
\]
\end{definition}
\begin{theorem}
\[
R_{fa}(D,S) = \max\left\{ \inf_{\mathbf{Y}} \overline{I}(\mathbf{X}; \mathbf{Y}),
\inf\{R \mid F_{\mathbf{X}}(R) \leq S\} \right\}
\]
where the infimum is taken with respect to all
general information sources $\mathbf{Y}$ satisfying
\[
\limsup_{n\rightarrow \infty} \frac{1}{n}\mathbf{E}[\delta_n(X^n, Y^n)] \leq D.
\]
\end{theorem}
\noindent\textbf{Proof:}
The proof is almost a verbatim copy of that of \cite{matsumotocomex}.
\rule{1ex}{1ex}
\end{document}
\section{Introduction}
One salient feature of human intelligence is the ability to perform well in a single attempt at a new task instance, by recognizing critical characteristics of the instance and immediately executing appropriate behavior based on experience in similar instances.
Artificial agents must do likewise in applications where success must be achieved in one attempt and failure is irreversible.
This problem setting, \textit{single episode transfer}, imposes a challenging constraint in which an agent experiences---and is evaluated on---only \textit{one} episode of a test instance.
As a motivating example, a key challenge in precision medicine is the uniqueness of each patient's response to therapeutics
\citep{hodson2016precision,bordbar2015personalized,whirl2012pharmacogenomics}.
Adaptive therapy is a promising approach that formulates a treatment strategy as a sequential decision-making problem \citep{zhang2017integrating,West476507,petersen2019deep}.
However, heterogeneity among instances may require explicitly accounting for factors that underlie individual patient dynamics.
For example, in the case of adaptive therapy for sepsis \citep{petersen2019deep}, predicting patient response prior to treatment is not possible. However, differences in patient responses can be observed via blood measurements very early after the onset of treatment \citep{Cockrell2018responders}.
As a first step to address \textit{single episode transfer} in reinforcement learning (RL), we propose a general algorithm for near-optimal test-time performance in a family of environments where differences in dynamics can be ascertained early during an episode.
Our key idea is to train an inference model and a probe that together achieve rapid inference of latent variables---which account for variation in a family of similar dynamical systems---using a small fraction (e.g., 5\%) of the test episode, then deploy a universal policy conditioned on the estimated parameters for near-optimal control on the new instance.
Our approach combines the advantages of robust transfer and adaptation-based transfer, as we learn a single universal policy that requires no further training during test, but which is adapted to the new environment by conditioning on an unsupervised estimation of new latent dynamics.
In contrast to methods that quickly adapt or train policies via gradients during test but assume access to multiple test rollouts and/or dense rewards \citep{finn2017model,killian2017robust,rakelly2019efficient}, we explicitly optimize for performance in one test episode without accessing the reward function at test time.
Hence our method applies to real-world settings in which rewards during test are highly delayed or even completely inaccessible---e.g., a reward that depends on physiological factors that are accessible only in simulation and not from real patients.
We also consider computation time a crucial factor for real-time application, whereas some existing approaches require considerable computation during test \citep{killian2017robust}. Our algorithm builds on variational inference and RL as submodules, which ensures practical compatibility with existing RL workflows.
Our main contribution is a simple general algorithm for single episode transfer in families of environments with varying dynamics, via rapid inference of latent variables and immediate execution of a universal policy.
Our method attains significantly higher cumulative rewards, with orders of magnitude faster computation time during test, than the state-of-the-art model-based method \citep{killian2017robust}, on benchmark high-dimensional domains whose dynamics are discontinuous and continuous in latent parameters.
We also show superior performance over optimization-based meta-learning and favorable performance versus baselines for robust transfer.
\vspace{-3pt}
\section{Single episode transfer in RL: problem setup}
Our goal is to train a model that performs close to optimal within a single episode of a test instance with new unknown dynamics.
We formalize the problem as a family $( \Scal, \Acal, \Tcal, R, \gamma )$, where $( \Scal, \Acal, R, \gamma)$ are the state space, action space, reward function, and discount of an episodic Markov decision process (MDP).
Each \textit{instance} of the family is a stationary MDP with transition function $\Tcal_z(s'|s,a) \in \Tcal$.
When a set $\Zcal$ of physical parameters determines transition dynamics \citep{konidaris2014hidden}, each $\Tcal_z$ has a hidden parameter $z \in \Zcal$ that is sampled once from a distribution $P_{\Zcal}$ and held constant for that instance.
For more general stochastic systems whose modes of behavior are not easily attributed to physical parameters, $\Zcal$ is induced by a generative latent variable model that indirectly associates each $\Tcal_z$ to a latent variable $z$ learned from observed trajectory data.
We refer to ``latent variable'' for both cases, with the clear ontological difference understood.
Depending on application, $\Tcal_z$ can be continuous or discontinuous in $z$.
We strictly enforce the challenging constraint that latent variables are never observed, in contrast to methods that use known values during training \citep{Yu-RSS-17},
to ensure the framework applies to challenging cases without prior knowledge.
This formulation captures a diverse set of important problems.
Latent space $\Zcal$ has physical meaning in systems where $\Tcal_z$ is a continuous function of physical parameters (e.g., friction and stiffness) with unknown values.
In contrast, a discrete set $\Zcal$ can induce qualitatively different dynamics, such as a 2D navigation task where $z \in \lbrace 0,1 \rbrace$ decides if the same action moves in either a cardinal direction or its opposite \citep{killian2017robust}.
Such drastic impact of latent variables may arise when a single drug is effective for some patients but causes serious side effects for others \citep{Cockrell2018responders}.
\textbf{Training phase.}
Our training approach is fully compatible with
RL for episodic environments.
We sample many instances, either via a simulator with controllable change of instances or using off-policy batch data in which demarcation of instances---but not values of latent variables---is known, and train for one or more episodes on each instance.
While we focus on the case with known change of instances, the rare case of unknown demarcation can be approached either by preprocessing steps such as clustering trajectory data or by using a dynamic variant of our algorithm (\Cref{app:dynasept}).
\textbf{Single test episode.}
In contrast to prior work that depends on the luxury of multiple experience rollouts for adaptation during test time \citep{doshi2016hidden,killian2017robust,finn2017model,rakelly2019efficient}, we introduce the strict constraint that the trained model has access to---and is evaluated on---\textit{only one} episode of a new test instance.
This reflects the need to perform near-optimally as soon as possible in critical applications such as precision medicine, where an episode for a new patient with new physiological dynamics is the entirety of hospitalization.
\section{Single episode policy transfer}
We present Single Episode Policy Transfer (SEPT), a high-level algorithm for single episode transfer between MDPs with different dynamics.
The following sections discuss specific design choices in SEPT, all of which are combined in synergy for near-optimal performance in a single test episode.
\subsection{Policy transfer through latent space}
\label{subsec:transfer}
Our best theories of natural and engineered systems involve physical constants and design parameters that enter into dynamical models.
This physicalist viewpoint motivates a partition for transfer learning in families of MDPs:
1. learn a representation of latent variables with an inference model that rapidly encodes a vector $\hat{z}$ of discriminative features for a new instance; 2. train a universal policy $\pi(a|s,z)$ to perform near-optimally for dynamics corresponding to any latent variable in $\Zcal$; 3. immediately deploy both the inference model and universal policy on a given test episode.
To build on the generality of model-free RL, and for scalability to systems with complex dynamics, we do not expend computational effort to learn a model of $\Tcal_z(s'|s,a)$, in contrast to model-based approaches \citep{killian2017robust,yao2018direct}.
Instead, we leverage expressive variational inference models to represent latent variables and provide uncertainty quantification.
In domains with ground truth hidden parameters, a latent variable encoding is the most succinct representation of differences in dynamics between instances.
As the encoding $\zhat$ is held constant for all episodes of an instance, a universal policy $\pi(a|s,z)$ can either adapt to all instances when $\Zcal$ is finite, or interpolate between instances when $\Tcal_z$ is continuous in $z$ \citep{schaul2015universal}.
Estimating a discriminative encoding for a new instance enables immediate deployment of $\pi(a|s,z)$ on the single test episode, bypassing the need for further fine-tuning.
This is critical for applications where further training complex models on a test instance is not permitted due to safety concerns.
In contrast, methods that do not explicitly estimate a latent representation of varied dynamics must use precious experiences in the test episode to tune the trained policy \citep{finn2017model}.
In the training phase, we generate an optimized\footnote{In the sense of machine teaching, as explained fully in \Cref{subsec:probe}}
dataset $\Dcal := \lbrace \tau^i \rbrace_{i=1}^N$ of short trajectories, where each $\tau^i := (s^i_1,a^i_1,\dotsc,s^i_{T_p},a^i_{T_p})$
is a sequence of early state-action pairs at the start of episodes of instance $\Tcal_i \in \Tcal$ (e.g. $T_p=5$).
We train a variational auto-encoder, comprising an approximate posterior inference model $q_{\phi}(z|\tau)$ that produces a latent encoding $\hat{z}$ from $\tau$ and a parameterized generative model $p_{\psi}(\tau|z)$.
The dimension chosen for $\hat{z}$ may differ from the exact true dimension when it exists but is unknown; domain knowledge can aid the choice of dimensionality reduction.
Because dynamics of a large variety of natural systems are determined by independent parameters (e.g., coefficient of contact friction and Reynolds number can vary independently),
we consider a disentangled latent representation where latent units capture the effects of independent generative parameters.
To this end, we bring $\beta$-VAE \citep{higgins2017beta} into the context of families of dynamical systems, choosing an isotropic unit Gaussian as the prior and imposing the constraint $D_{KL}(q_{\phi}(z|\tau^i)\Vert p(z)) < \epsilon$.
The $\beta$-VAE is trained by maximizing the variational lower bound $\Lcal(\psi,\phi;\tau^i)$ for each $\tau^i$ across $\Dcal$:
\begin{align}\label{eq:variational-lowerbound}
\max_{\psi,\phi} \log p_{\psi}(\tau^i) \geq \Lcal(\psi,\phi;\tau^i) := -\beta D_{KL}(q_{\phi}(z|\tau^i) \Vert p(z)) + \Ebb_{q_{\phi}(z|\tau^i)} \bigl[ \log p_{\psi}(\tau^i|z) \bigr]
\end{align}
This subsumes the VAE \citep{kingma2013auto} as a special case ($\beta=1$), and we refer to both as VAE in the following.
Since latent variables only serve to differentiate among trajectories that arise from different transition functions, the meaning of latent variables is not affected by isometries and hence the value of $\zhat$ by itself need not have any simple relation to a physically meaningful $z$ even when one exists.
Only the partition of latent space is important for training a universal policy.
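With the isotropic unit-Gaussian prior and a diagonal Gaussian posterior, the KL term in (\ref{eq:variational-lowerbound}) has a closed form, so the lower bound is cheap to evaluate. A minimal numpy sketch; the name \texttt{recon\_log\_prob} is a hypothetical stand-in for a Monte-Carlo estimate of the reconstruction term supplied by the decoder:

```python
import numpy as np

def beta_vae_lower_bound(mu, log_var, recon_log_prob, beta=1.0):
    """Variational lower bound L(psi, phi; tau) of Eq. (1) for an
    isotropic unit-Gaussian prior and a diagonal Gaussian posterior
    q_phi(z|tau) with mean `mu` and log-variance `log_var`:
        -beta * KL(q || N(0, I)) + E_q[log p_psi(tau|z)],
    where the KL term has the closed form
        0.5 * sum_d (sigma_d^2 + mu_d^2 - 1 - log sigma_d^2)."""
    kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
    return -beta * kl + recon_log_prob
```

Setting $\beta > 1$ gives the constraint-weighted $\beta$-VAE objective; $\beta = 1$ recovers the standard VAE.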
Earlier methods for a family of similar dynamics
relied on Bayesian neural network (BNN) approximations of the entire transition function $s_{t+1} \sim \hat{\Tcal}^{(\textrm{BNN})}_z(s_t,a_t)$, which was either used to perform computationally expensive fictional rollouts during test time \citep{killian2017robust} or used indirectly to further optimize a posterior over $z$ \citep{yao2018direct}.
Our use of variational inference is more economical: the encoder $q_{\phi}(z|\tau)$ can be used immediately to infer latent variables during test, while the decoder $p_{\psi}(\tau|z)$ plays a crucial role for optimized probing in our algorithm (see \Cref{subsec:probe}).
In systems with ground truth hidden parameters, we desire two additional properties.
The encoder should produce low-variance encodings, which we implement by minimizing the entropy of $q_{\phi}(z|\tau)$:
\begin{align}\label{eq:entropy-encoder}
\min_{\phi} H(q_{\phi}(z|\tau)) &:= - \int_z q_{\phi}(z|\tau) \log q_{\phi}(z|\tau) dz = \frac{D}{2} \log(2\pi) + \frac{1}{2}\sum_{d=1}^D \bigl( 1 + \log \sigma_d^2 \bigr)
\end{align}
under a diagonal Gaussian parameterization, where $\sigma_d^2 = \mathrm{Var}(q_{\phi}(z|\tau))$ and $\dim(z) = D$.
We add $-H(q_{\phi}(z|\tau))$ as a regularizer to \eqref{eq:variational-lowerbound}.
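Under the diagonal Gaussian parameterization, (\ref{eq:entropy-encoder}) is the standard differential entropy of $q_{\phi}(z|\tau)$, which a sketch can evaluate directly from the encoder's log-variance output (the name \texttt{log\_var} is illustrative):

```python
import numpy as np

def encoder_entropy(log_var):
    """Differential entropy of the diagonal Gaussian q_phi(z|tau), Eq. (2):
        H = (D/2) * log(2*pi) + 0.5 * sum_d (1 + log sigma_d^2).
    Adding -H to the VAE objective pushes the encoder toward
    low-variance, confident encodings."""
    D = len(log_var)
    return 0.5 * D * np.log(2.0 * np.pi) + 0.5 * np.sum(1.0 + log_var)
```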
Second, we must capture the impact of $z$ on higher-order dynamics.
While previous work neglects the order of transitions $(s_t,a_t,s_{t+1})$ in a trajectory \citep{rakelly2019efficient}, we note that a single transition may be compatible with multiple instances whose differences manifest only at higher orders.
In general, partitioning the latent space requires taking the ordering of a temporally-extended trajectory into account.
Therefore, we parameterize our encoder $q_{\phi}(z|\tau)$ using a bidirectional LSTM---as both temporal directions of $(s_t,a_t)$ pairs are informative---and we use an LSTM decoder $p_{\psi}(\tau|z)$ (architecture in \Cref{app:architecture}).
In contrast to embedding trajectories from a \textit{single} MDP for hierarchical learning \citep{co2018self}, our purpose is to encode trajectories from \textit{different instances} of transition dynamics for optimal control.
\subsection{Transfer of a universal policy}
We train a single universal policy $\pi(a|s,z)$ and deploy the same policy during test (without further optimization), for two reasons: robustness against imperfection in latent variable representation and significant improvement in scalability.
Earlier methods trained multiple optimal policies $\lbrace \pi^*_i(a|s) \rbrace_{i=1}^N$ on training instances with a set $\lbrace z^i \rbrace_{i=1}^N$ of hidden parameters, then employed either behavioral cloning \citep{yao2018direct} or off-policy Q-learning \citep{arnekvist2019vpe} to train a final policy $\pi(a|s,z)$ using a dataset $\lbrace (s_t, \hat{z}^i; a_t\sim \pi^*_i(a|s_t)) \rbrace$.
However, this supervised training scheme may not be robust \citep{Yu-RSS-17}: if $\pi(a|s,z)$ were trained only using instance-specific optimal state-action pairs generated by $\pi^*_i(a|s)$ and posterior samples of $\hat{z}$ from an optimal inference model,
it may not generalize well when faced with states and encodings that were not present during training.
Moreover, it is computationally infeasible to train a collection $\lbrace \pi^*_i \rbrace_{i=1}^N$---which is thrown away during test---when faced with a large set of training instances from a continuous set $\Zcal$.
Instead, we interleave training of the VAE and a single policy $\pi(a|s,z)$, benefiting from considerable computation savings at training time, and higher robustness due to larger effective sample count.
\subsection{Optimized probing for accelerated latent variable inference}
\label{subsec:probe}
To execute near-optimal control within a single test episode, we first rapidly compute $\hat{z}$ using a short trajectory of initial experience.
This is loosely analogous to the use of preliminary medical treatment to define subsequent prescriptions that better match a patient's unique physiological response.
Our goal of rapid inference
motivates two algorithmic design choices to optimize this initial phase.
First, the trajectory $\tau$ used for inference by $q_{\phi}(z|\tau)$ must be optimized, in the sense of machine teaching \citep{zhu2018overview}, as certain trajectories are more suitable than others for inferring latent variables that underlie system dynamics.
If specific degrees of freedom are impacted the most by latent variables, an agent should probe exactly those dimensions to produce an informative trajectory for inference.
Conversely, methods that deploy a single universal policy without an initial probing phase \citep{yao2018direct} can fail in adversarial cases, such as when the initial placeholder $\zhat$ used in $\pi_{\theta}(a|s,\cdot)$ at the start of an instance causes failure to exercise dimensions of dynamics that are necessary for inference.
Second, the VAE must be specifically trained on a dataset $\Dcal$ of short trajectories consisting of initial steps of each training episode.
We cannot expend a long trajectory as encoder input during test, since enough steps must remain for control.
Hence, single episode transfer motivates the machine teaching problem of learning to distinguish among dynamics: our algorithm must have learned both to generate and to use a short initial trajectory to estimate a representation of dynamics for control.
Our key idea of optimized probing for accelerated latent variable inference is to train a dedicated probe policy $\pi_{\varphi}(a|s)$ to generate a dataset $\Dcal$ of short trajectories at the beginning of all training episodes, such that the VAE's performance on $\Dcal$ is optimized\footnote{In general, $\Dcal$ is not related to the replay buffer commonly used in off-policy RL algorithms.}.
Orthogonal to training a meta-policy for faster exploration \textit{during} standard RL training \citep{xu2018learning},
our probe and VAE are trained for the purpose of performing well on a \textit{new} test MDP.
For ease of exposition, we discuss the case with access to a simulator, but our method easily allows use of off-policy batch data.
We start each training episode using $\pi_{\varphi}$ for a \textit{probe phase} lasting $T_p$ steps, record the probe trajectory $\tau_p$ into $\Dcal$, train the VAE using minibatches from $\Dcal$, then use $\tau_p$ with the encoder to generate $\hat{z}$ for use by $\pi_{\theta}(a|s,\hat{z})$ to complete the remainder of the episode (\Cref{alg:sept}).
At test time, SEPT only requires lines 5, 8, and 9 in \Cref{alg:sept} (training step in 9 removed; see \Cref{alg:sept-test}).
The reward function for $\pi_{\varphi}$ is defined as the VAE objective, approximated by the variational lower bound (\ref{eq:variational-lowerbound}): $R_p(\tau) := \Lcal(\psi,\phi;\tau) \leq \log p_{\psi}(\tau)$.
This feedback loop between the probe and VAE directly trains the probe to help the VAE's inference of latent variables that distinguish different dynamics (\Cref{fig:architecture}).
We provide detailed justification as follows.
First we state a result derived in \Cref{app:proof-exploration}:
\begin{restatable}{proposition}{propexploration}
\label{prop:exploration-gradient}
Let $p_{\varphi}(\tau)$ denote the distribution of trajectories induced by $\pi_{\varphi}$.
Then the gradient of the entropy $H(p_{\varphi}(\tau))$ is given by
\begin{align}\label{eq:exploration-gradient}
\nabla_{\varphi} H(p_{\varphi}(\tau)) &= \Ebb_{p_{\varphi}(\tau)} \bigl[ \nabla_{\varphi} \sum_{i=0}^{T_p-1} \log(\pi_{\varphi}(a_i|s_i)) (-\log p_{\varphi}(\tau)) \bigr]
\end{align}
\end{restatable}
Noting that dataset $\Dcal$ follows distribution $p_{\varphi}$
and that the VAE is exactly trained to maximize the log probability of $\Dcal$,
we use $\Lcal(\psi,\phi;\tau)$ as a tractable lowerbound on $\log p_{\varphi}(\tau)$.
Crucially, to generate optimal probe trajectories for the VAE, we take a minimum-entropy viewpoint and \textit{descend} the gradient (\ref{eq:exploration-gradient}).
This is the opposite of a maximum entropy viewpoint that encourages the policy to generate diverse trajectories \citep{co2018self}, which would minimize $\log p_{\varphi}(\tau)$ and produce an adversarial dataset for the VAE---hence, optimal probing is not equivalent to diverse exploration.
The degenerate case of $\pi_{\varphi}$ learning to ``stay still'' for minimum entropy is precluded by any source of environmental stochasticity: trajectories from different instances will still differ, so degenerate trajectories result in low VAE performance.
Finally we observe that \eqref{eq:exploration-gradient} is the defining equation of a simple policy gradient algorithm \citep{williams1992simple} for training $\pi_{\varphi}$, with $\log p_{\varphi}(\tau)$ interpreted as the cumulative reward of a trajectory generated by $\pi_{\varphi}$.
This completes our justification for defining reward $R_p(\tau) := \Lcal(\psi,\phi;\tau)$.
We also show empirically in ablation experiments that this reward is more effective than choices that encourage high perturbation of state dimensions or high entropy (\Cref{sec:results}).
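Substituting the tractable surrogate $R_p(\tau) = \Lcal(\psi,\phi;\tau)$ for $\log p_{\varphi}(\tau)$, descending (\ref{eq:exploration-gradient}) reduces to a standard REINFORCE update with $R_p$ as trajectory reward. A numpy sketch of the Monte-Carlo estimator; \texttt{grad\_log\_probs} (per-trajectory summed score-function gradients) is a hypothetical stand-in for quantities a concrete implementation would obtain by backpropagation:

```python
import numpy as np

def probe_policy_gradient(grad_log_probs, probe_rewards):
    """REINFORCE-style estimator for training pi_varphi.
    grad_log_probs: list over trajectories; each entry is the gradient of
        sum_{i=0}^{T_p - 1} log pi_varphi(a_i|s_i), flattened to a vector.
    probe_rewards: per-trajectory VAE lower-bound values R_p(tau).
    Returns an ascent direction on E[ sum_t grad log pi * R_p(tau) ],
    i.e., the descent direction of the entropy gradient in Eq. (3)."""
    grads = [g * r for g, r in zip(grad_log_probs, probe_rewards)]
    return np.mean(grads, axis=0)  # Monte-Carlo average over trajectories
```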
\begin{wrapfigure}{r}{0.55\textwidth}
\centering
\includegraphics[width=0.55\textwidth]{figures/architecture.pdf}
\caption{$\pi_{\varphi}$ learns to generate an optimal dataset for the VAE, whose performance is the reward for $\pi_{\varphi}$. Encoding $\zhat$ by the VAE is given to control policy $\pi_{\theta}$.}
\label{fig:architecture}
\end{wrapfigure}
The VAE objective function may not perfectly evaluate a probe trajectory generated by $\pi_{\varphi}$ because the objective value increases due to VAE training regardless of $\pi_{\varphi}$.
To give a more stable reward signal to $\pi_{\varphi}$, we can use a second VAE whose parameters slowly track the main VAE according to $\psi' \leftarrow \alpha \psi + (1-\alpha) \psi'$ for $\alpha \in [0,1]$, and similarly for $\phi'$.
While analogous to target networks in DQN \citep{mnih2015human}, the difference is that our second VAE is used to compute the \textit{reward} for $\pi_{\varphi}$.
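A minimal sketch of this slow tracking, with parameters represented as plain dictionaries of scalars rather than the paper's actual network parameters:

```python
def polyak_update(target_params, online_params, alpha=0.01):
    """Slowly track the main VAE: psi' <- alpha * psi + (1 - alpha) * psi'.
    The tracked copy is used only to compute the probe reward, keeping
    that reward signal from drifting with every VAE gradient step."""
    return {k: alpha * online_params[k] + (1.0 - alpha) * target_params[k]
            for k in target_params}
```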
\begin{algorithm}[t]
\caption{Single Episode Policy Transfer: training phase}
\label{alg:sept}
\begin{algorithmic}[1]
\footnotesize
\Procedure{SEPT-train}{}
\State Initialize encoder $\phi$, decoder $\psi$, probe policy $\varphi$, control policy $\theta$, and trajectory buffer $\Dcal$
\For{each instance $\Tcal_z$ with transition function sampled from $\Tcal$}
\For{each episode on instance $\Tcal_z$}
\State Execute $\pi_{\varphi}$ for $T_p$ steps and store trajectory $\tau_p$ into $\Dcal$
\State Use variational lower bound (\ref{eq:variational-lowerbound}) as the reward to train $\pi_{\varphi}$ by descending gradient (\ref{eq:exploration-gradient})
\State Train VAE using minibatches from $\Dcal$ for gradient ascent on (\ref{eq:variational-lowerbound}) and descent on (\ref{eq:entropy-encoder})
\State Estimate $\hat{z}$ from $\tau_p$ using encoder $q_{\phi}(z|\tau)$
\State Execute $\pi_{\theta}(a|s,z)$ with $\hat{z}$ for remaining time steps and train it with suitable RL algorithm
\EndFor
\EndFor
\EndProcedure
\end{algorithmic}
\end{algorithm}
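As noted above, test-time deployment reduces to lines 5, 8, and 9 of \Cref{alg:sept} with the training steps removed. A minimal sketch; the \texttt{env}, policy, and encoder interfaces are illustrative assumptions, not the paper's implementation:

```python
def sept_test_episode(env, probe_policy, encoder, control_policy, T_p, T_max):
    """Single test episode of SEPT (Algorithm 1 lines 5, 8, 9, training
    removed). Probe-phase rewards are ignored: the method never needs
    the test-time reward function. `env.step(a)` is assumed to return
    `(next_state, reward, done)`."""
    s = env.reset()
    tau = []
    for _ in range(T_p):              # line 5: probe phase with pi_varphi
        a = probe_policy(s)
        tau.append((s, a))
        s, _, done = env.step(a)
        if done:
            break
    z_hat = encoder(tau)              # line 8: one-shot latent inference
    ret = 0.0
    for _ in range(T_max - T_p):      # line 9: control with pi_theta(a|s, z_hat)
        a = control_policy(s, z_hat)
        s, r, done = env.step(a)
        ret += r
        if done:
            break
    return ret
```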
\section{Related work}
Transfer learning in a family of MDPs with different dynamics manifests in various formulations \citep{taylor2009transfer}.
Analysis of $\epsilon$-stationary MDPs and $\epsilon$-MDPs
provides theoretical grounding by showing that an RL algorithm that learns an optimal policy in an MDP can also learn a near-optimal policy for multiple transition functions \citep{kalmar1998module,szita2002varepsilon}.
Imposing more structure, the hidden-parameter Markov decision process (HiP-MDP) formalism posits a space of hidden parameters that determine transition dynamics, and implements transfer by model-based policy training after inference of latent parameters \citep{doshi2016hidden,konidaris2014hidden}.
Our work considers HiP-MDP as a widely applicable yet special case of a general viewpoint, in which the existence of hidden parameters is not assumed but rather is induced by a latent variable inference model.
The key structural difference from POMDPs \citep{kaelbling1998planning} is that given fixed latent values, each instance from the family is an MDP with no hidden states; hence, unlike in POMDPs, tracking a history of observations provides no benefit.
In contrast to multi-task learning \citep{caruana1997multitask}, which uses the same tasks for training and test, and in contrast to parameterized-skill learning \citep{da2012learning}, where an agent learns from a collection of rewards with given task identities in one environment with fixed dynamics,
our training and test MDPs have different dynamics and identities of instances are not given.
Prior latent variable based methods for transfer in RL depend on a multitude of optimal policies during training \citep{arnekvist2019vpe},
or learn a surrogate transition model for model predictive control with real-time posterior updates during test \citep{perez2018efficient}.
Our variational model-free approach does not incur either of these high computational costs.
We encode trajectories to infer latent representation of differing dynamics, in contrast to state encodings in \citep{zhang2018decoupling}.
Rather than formulating variational inference in the space of optimal value functions \citep{tirinzoni2018transfer},
we implement transfer through variational inference in a latent space that underlies dynamics.
Previous work for transfer across dynamics with hidden parameters employ model-based RL with Gaussian process and Bayesian neural network (BNN) models of the transition function \citep{doshi2016hidden,killian2017robust}, which require computationally expensive fictional rollouts to train a policy from scratch during test time and poses difficulties for real-time test deployment.
DPT uses a fully-trained BNN to further optimize latent variable during a single test episode, but faces scalability issues as it needs one optimal policy per training instance \citep{yao2018direct}.
In contrast, our method does not need a transition function and can be deployed without optimization during test.
Methods for robust transfer either require access to multiple rounds from the test MDP during training \citep{rajeswaran2016epopt}, or require the distribution over hidden variables to be known or controllable \citep{paul2019fingerprint}.
While meta-learning \citep{finn2017model,rusu2018meta,zintgraf2018fast,rakelly2019efficient} in principle can take one gradient step during a single test episode, prior empirical evaluation were not made with this constraint enforced, and adaptation during test is impossible in settings without dense rewards.
\section{Experimental setup}
\label{sec:experimental-setup}
We conducted experiments on three benchmark domains with diverse challenges to evaluate the performance, speed of reward attainment, and computational time of SEPT versus five baselines in the single test episode\footnote{Code for all experiments is available at \url{https://github.com/011235813/SEPT}.}.
We evaluated four ablations and variants of SEPT to investigate the necessity of all algorithmic design choices.
For each method on each domain, we conducted 20 independent training runs.
For each trained model, we evaluate on $M$ independent test instances, all starting with the same model; adaptations during the single test episode, if done by any method, are not preserved across the independent test instances.
This means we evaluate on a total of $20M$ independent test instances per method per domain.
Hyperparameters were adjusted using a coarse coordinate search on validation performance.
We used DDQN with prioritized replay \citep{van2016deep,schaul2015prioritized} as the base RL component of all methods for a fair evaluation of transfer performance; other RL algorithms can be readily substituted.
\textbf{Domains. }
We use the same continuous-state, discrete-action HiP-MDPs proposed by \citet{killian2017robust} for benchmarking.
Each isolated instance from each domain is solvable by RL, but it is highly challenging, if not impossible, for na\"ive RL to perform optimally for all instances because significantly different dynamics require different optimal policies.
In \textbf{2D navigation}, dynamics are discontinuous in $z \in \lbrace 0, 1\rbrace$ as follows: location of barrier to goal region, flipped effect of actions (i.e., depending on $z$, the \textit{same} action moves in either a cardinal direction or its opposite), and direction of a nonlinear wind.
In \textbf{Acrobot} \citep{sutton1998reinforcement}, the agent applies $\lbrace +1,0,-1\rbrace$ torques to swing a two-link pendulum above a certain height.
Dynamics are determined by a vector $z = (m_1,m_2,l_1,l_2)$ of masses and lengths, centered at 1.0.
We use four unique instances in training and validation, constructed by sampling $\Delta z$ uniformly from $\lbrace -0.3,-0.1,0.1,0.3 \rbrace$ and adding it to all components of $z$.
During test, we sample $\Delta z$ from $\lbrace -0.35,-0.2,0.2,0.35 \rbrace$ to evaluate both interpolation and extrapolation.
In \textbf{HIV}, a patient's state dynamics are modeled by differential equations with high sensitivity to 12 hidden variables and separate steady-state regions of health, such that different patients require unique treatment policies \citep{adams2004dynamic}.
Four actions determine binary activation of two drugs.
We have $M = 10,5,5$ for 2D navigation, Acrobot, and HIV, respectively.
\textbf{Baselines. }
First, we evaluated two simple baselines that establish approximate bounds on test performance of methods that train a single policy:
as a lower bound, \textbf{Avg} trains a single policy $\pi(a|s)$ on all instances sampled during training and runs directly on test instances;
as an upper bound in the limit of perfect function approximation for methods that use latent variables as input, \textbf{Oracle} $\pi(a|s,z)$ receives the true hidden parameter $z$ during both training and test.
Next we adapted existing methods, detailed in \Cref{app:algorithms}, to single episode test evaluation:
1. we allow the model-based method \textbf{BNN} \citep{killian2017robust} to fine-tune a pre-trained BNN and train a policy using BNN-generated fictional episodes every 10 steps during the test episode; 2. we adapted the adversarial part of EPOpt \citep{rajeswaran2016epopt}, which we term \textbf{EPOpt-adv}, by training a policy $\pi(a|s)$ on instances with the lowest 10-percentile performance; 3.
we evaluate \textbf{MAML} as an archetype of meta-learning methods that require dense rewards or multiple rollouts \citep{finn2017model}.
We allow MAML to use a trajectory of the same length as SEPT's probe trajectory for one gradient step during test.
We used the same architecture for the RL module of all methods (\Cref{app:architecture}).
To our knowledge, these model-free baselines are evaluated on single-episode transfer for the first time in this work.
\textbf{Ablations. }
To investigate the benefit of our optimized probing method for accelerated inference, we designed an ablation called \textbf{SEPT-NP},
in which trajectories generated by the control policy are used by the encoder for inference and stored into $\Dcal$ to train the VAE.
Second, we investigated an alternative reward function for the probe, labeled \textbf{TotalVar} and defined as $R(\tau) := 1/T_p \sum_{t=1}^{T_p-1} \sum_{i=1}^{\text{dim}(\Scal)} |s_{t+1,i} - s_{t,i}|$ for probe trajectory $\tau$.
In contrast to the minimum entropy viewpoint in \Cref{subsec:probe}, this reward encourages generation of trajectories that maximize total variation across all state space dimensions.
Third, we tested the maximum entropy viewpoint on probe trajectory generation, labeled \textbf{MaxEnt}, by using the \textit{negative} of the lower bound as the probe reward: $R_p(\tau) := -\Lcal(\psi,\phi;\tau)$.
Last, we tested whether \textbf{DynaSEPT}, an extension that dynamically decides to probe or execute control (\Cref{app:dynasept}), has any benefit for stationary dynamics.
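For concreteness, the TotalVar ablation reward defined above can be computed from a probe trajectory as follows (a minimal sketch; the toy state values are placeholders):

```python
import numpy as np

def total_variation_reward(states):
    """TotalVar ablation reward:
    R(tau) = (1/T_p) * sum_{t=1}^{T_p-1} sum_i |s_{t+1,i} - s_{t,i}|,
    where `states` has shape (T_p, dim(S))."""
    T_p = states.shape[0]
    return np.abs(np.diff(states, axis=0)).sum() / T_p

# toy probe trajectory: T_p = 3 steps in a 2-D state space
tau = np.array([[0.0, 0.0],
                [1.0, -1.0],
                [1.5, -0.5]])
# transitions contribute (1 + 1) + (0.5 + 0.5) = 3, so R = 3 / 3 = 1
r = total_variation_reward(tau)
```

SEPT instead rewards the probe with the VAE's variational lower bound, so the probe is optimized for the inference model rather than for raw state perturbation.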
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.20\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures20/2D_steps_better.pdf}
\caption{2D navigation}
\label{fig:2D-steps}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.20\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures20/acrobot_steps.pdf}
\caption{Acrobot}
\label{fig:acrobot-steps}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures20/2D_better.pdf}
\caption{2D navigation}
\label{fig:2D}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures20/acrobot.pdf}
\caption{Acrobot}
\label{fig:acrobot}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures20/hiv_new.pdf}
\caption{HIV}
\label{fig:hiv}
\end{subfigure}
\begin{subfigure}[t]{0.20\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures20/2D_ablation_steps_dyna.pdf}
\caption{2D navigation}
\label{fig:2D-ablation-steps}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.20\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures20/acrobot_ablation_steps.pdf}
\caption{Acrobot}
\label{fig:acrobot-ablation-steps}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures20/2D_ablation.pdf}
\caption{2D navigation}
\label{fig:2D-ablation}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures20/acrobot_ablation.pdf}
\caption{Acrobot}
\label{fig:acrobot-ablation}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.19\linewidth}
\centering
\includegraphics[width=0.95\linewidth]{figures20/hiv_ablation.pdf}
\caption{HIV}
\label{fig:hiv-ablation}
\end{subfigure}
\caption{(a-e): Comparison against baselines. (a-b): Number of steps to solve 2D navigation and Acrobot in a single test episode; failure to solve is assigned a count of 50 in 2D nav and 200 in Acrobot. (c-e): Cumulative reward versus test episode step. BNN requires long computation time and showed low rewards on HIV, hence we report 3 seeds in \Cref{fig:hiv-with-bnn}. (f-j): Ablation results. DynaSEPT is out of range in (g), see \Cref{fig:acrobot-ablation-dyna}. Error bars show standard error of mean over all test instances over 20 training runs per method.}
\label{fig:main-comparisons}
\vspace{-0.5cm}
\end{figure}
\section{Results and discussion}
\label{sec:results}
2D navigation and Acrobot are solved upon attaining terminal reward of 1000 and 10, respectively.
SEPT outperforms all baselines in 2D navigation and takes significantly fewer steps to solve
(\Cref{fig:2D-steps,fig:2D}).
While a single instance of 2D navigation is easy for RL, handling multiple instances is highly non-trivial.
EPOpt-adv and Avg almost never solve the test instance---we set ``steps to solve'' to 50 for test episodes that were unsolved---because interpolating between instance-specific optimal policies in policy parameter space is not meaningful for any task instance.
MAML did not perform well despite having the advantage of being provided with rewards at test time, unlike SEPT. The gradient adaptation step was likely ineffective because the rewards are sparse and delayed.
BNN requires significantly more steps than SEPT, and it uses four orders of magnitude longer computation time (\Cref{table:test-times}), due to training a policy from scratch during the test episode.
Training times of all algorithms except BNN are in the same order of magnitude (\Cref{table:train-times}).
In Acrobot and HIV, where dynamics are continuous in latent variables, interpolation within policy space can produce meaningful policies, so all baselines are feasible in principle.
SEPT is statistically significantly faster than BNN and Avg, and is within error bars of MAML, while EPOpt-adv outperforms the rest by a small margin (\Cref{fig:acrobot-steps,fig:acrobot}).
\Cref{fig:percent} shows that SEPT is competitive in terms of percentage of solved instances.
As the true values of latent variables for Acrobot test instances were interpolated and extrapolated from the training values, this shows that SEPT is robust to out-of-training dynamics.
BNN requires more steps due to simultaneously learning and executing control during the test episode.
On HIV, SEPT reaches significantly higher cumulative rewards than all methods.
Oracle is within the margin of error of Avg. This may be due to insufficient examples of the high-dimensional ground truth hidden parameters.
Due to its long computational time, we run three seeds for BNN on HIV, shown in \Cref{fig:hiv-with-bnn}, and find it was unable to adapt within one test episode.
Comparing directly to reported results in DPT \citep{yao2018direct}, SEPT solves 2D Navigation at least 33\% (>10 steps) faster,
and the cumulative reward of SEPT (mean and standard error) is above DPT's mean cumulative reward in Acrobot (\Cref{table:acrobot-errorbar}).
Together, these results show that methods that explicitly distinguish different dynamics (e.g., SEPT and BNN) can significantly outperform methods that implicitly interpolate in policy parameter space (e.g., Avg and EPOpt-adv) in settings where $z$ has large discontinuous effect on dynamics, such as 2D navigation.
When dynamics are continuous in latent variables (e.g., Acrobot and HIV), interpolation-based methods fare better than BNN, which faces the difficulty of learning a model of the entire family of dynamics.
SEPT worked the best in the first case and is robust to the second case because it explicitly distinguishes dynamics and does not require learning a full transition model. Moreover, SEPT does not require rewards at test time, allowing it to be useful on a broader class of problems than optimization-based meta-learning approaches such as MAML.
\Cref{app:results} contains training curves.
\begin{figure}[t]
\centering
\begin{subfigure}[t]{0.16\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures8/2D_length.pdf}
\caption{2D navigation}
\label{fig:2D-length}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.16\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures8/acrobot_length.pdf}
\caption{Acrobot}
\label{fig:acrobot-length}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.16\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{figures8/hiv_length.pdf}
\caption{HIV}
\label{fig:hiv-length}
\end{subfigure}
\begin{subfigure}[t]{0.16\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures8/2D_latent.pdf}
\caption{2D navigation}
\label{fig:2D-r-latent}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.16\linewidth}
\centering
\includegraphics[width=1.0\linewidth]{figures8/acrobot_latent.pdf}
\caption{Acrobot}
\label{fig:acrobot-r-latent}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.16\linewidth}
\centering
\includegraphics[width=0.9\linewidth]{figures8/hiv_latent.pdf}
\caption{HIV}
\label{fig:hiv-r-latent}
\end{subfigure}
\caption{Cumulative reward on test episode for different $T_p$ (a-c) and different $\dim(z)$ (d-f). $8M$ independent test instances for each hyperparameter setting.}
\label{fig:hyperparam}
\end{figure}
\textbf{Ablation results.}
Comparing to SEPT-NP, \Cref{fig:2D-ablation-steps,fig:acrobot-ablation-steps,fig:hiv-ablation} show that the probe phase is necessary to solve 2D navigation quickly, while giving similar performance in Acrobot and significant improvement in HIV.
SEPT significantly outperformed TotalVar in 2D navigation and HIV, while TotalVar gives slight improvement in Acrobot, showing that directly using VAE performance as the reward for probing in certain environments can be more effective than a reward that deliberately encourages perturbation of state dimensions.
The clear advantage of SEPT over MaxEnt in 2D navigation and HIV supports our hypothesis in \Cref{subsec:probe} that the variational lower bound, rather than its negation in the maximum entropy viewpoint, should be used as the probe reward, while performance was not significantly differentiated in Acrobot.
SEPT outperforms DynaSEPT on all problems where dynamics are stationary during each instance.
On the other hand, DynaSEPT is the better choice in a non-stationary variant of 2D navigation where the dynamics ``switch'' abruptly at $t=10$ (\Cref{fig:2D-switch}).
\textbf{Robustness.}
\Cref{fig:hyperparam} shows that SEPT is robust to varying the probe length $T_p$ and $\dim(z)$.
Even with certain suboptimal probe length and $\dim(z)$, it can outperform all baselines on 2D navigation in both steps-to-solve and final reward; it is within error bars of all baselines on Acrobot based on final cumulative reward;
and final cumulative reward exceeds that of baselines in HIV.
Increasing $T_p$ means foregoing valuable steps of the control policy and increasing the difficulty of trajectory reconstruction for the VAE in high-dimensional state spaces;
$T_p$ is a hyperparameter that should be validated for each application.
\Cref{app:latent-dynamics} shows the effect of $\beta$ on latent variable encodings.
\section{Conclusion and future directions}
We propose a general algorithm for single episode transfer among MDPs with different stationary dynamics, which is a challenging goal with real-world significance that deserves increased effort from the transfer learning and RL community.
Our method, Single Episode Policy Transfer (SEPT), trains a probe policy and an inference model to discover a latent representation of dynamics using very few initial steps in a single test episode, such that a universal policy can execute optimal control without access to rewards at test time.
Strong performance versus baselines in domains involving both continuous and discontinuous dependence of dynamics on latent variables show the promise of SEPT for problems where different dynamics can be distinguished via a short probing phase.
The dedicated probing phase may be improved by other objectives, in addition to performance of the inference model, to mitigate the risk and opportunity cost of probing.
An open challenge is single episode transfer in domains where differences in dynamics of different instances are not detectable early during an episode, or where latent variables are fixed but dynamics are nonstationary.
Further research on dynamic probing and control, as sketched in DynaSEPT, is one path toward addressing this challenge.
Our work is one step along a broader avenue of research on general transfer learning in RL equipped with the realistic constraint of a single episode for adaptation and evaluation.
\subsubsection*{Acknowledgments}
This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under contract DE-AC52-07NA27344. Lawrence Livermore National Security, LLC. LLNL-JRNL-791194.
\bibliographystyle{natbib}
\section[short title]{Introduction}
\bigskip
Technicolor with scalars is a very simple and calculable kind of
technicolor model, in which a scalar doublet with a positive mass
squared is introduced to couple the technifermions with the ordinary
fermions. When the technicolor interaction becomes strong and
technifermions condense, this scalar develops a vacuum expectation value
(VEV), which breaks the electroweak symmetry together with the
technipions. This scalar VEV is also responsible for giving masses
to ordinary fermions. It is very interesting that this model can be
treated as a kind of low energy effective theory of strongly-coupled ETC
(SETC) models~\cite{Sekhar:Cohen}, when some degree of fine tuning is
allowed. This fine-tuning is necessary in any workable SETC model to
maintain a sufficient hierarchy between the ETC and technicolor
scales~\cite{Lane}. Some phenomenological issues have been explored
previously in the literature, and the model has proved able to withstand
the experimental tests so far~\cite{EHS}-\cite{CC:EHS:YM}.
In this article, we consider the process $B \to X_s \mu^+\mu^-$
in the hope of testing the model and constraining its
parameter space further. The present exclusive limit on this channel
from CDF,
$\mbox{BR}(B^0 \to {K^*}^0 \mu^+ \mu^-)_{\mbox{{\scriptsize CDF}}}<2.5
\times10^{-5}$~\cite{CDF},
is within an order of magnitude of the Standard Model (SM) prediction,
$\mbox{BR}(B \to X_s \mu^+ \mu^-)_{\mbox{{\scriptsize SM}}}=(5.7\pm1.2)
\times10^{-6}$~\cite{Ali}. The CLEO exclusive limit on the branching
ratio ( $3.1\times 10^{-5}$ ) is less stringent~\cite{CLEO}.
In the SM, this process can only occur at loop level, and the error in
the evaluation of $\Gamma(B \to X_s \mu^+ \mu^-)$ can not be reduced to
less than $10 - 20\%$ due to the uncertainties in quark masses and
the interference effects from excited charmonium states~\cite{Ligeti}.
Still, given the large mass of top quark, one may expect this decay
to be sensitive to new physics contributions, if these contributions
significantly overwhelm the QCD uncertainties. Therefore, the measurement of
$B \to X_s \mu^+\mu^-$ can provide a probe of the validity of critical
ingredients of the Standard Model and, possibly, of the existence of
new physics beyond. If the branching ratio does not lie significantly below
the SM prediction, positive signals are expected to be observed with
the upgraded CLEO detector and in CDF Run II\footnote{CDF Run II is
expected to observe the decay of $B^0 \to {K^*}^0 \mu^+ \mu^-$ even if
its branching ratio is as low as $3.4 \times10^{-7}$~\cite{CDF:RunII}.}.
In section 2, we present the technicolor with scalars model. We then compute
the $B \to X_s\mu^+\mu^-$ branching ratio and discuss the results in section 3.
Finally, in section 4 we give our conclusions.
\section[short title]{Technicolor with Scalars}
In the Standard Model, $B \to X_s \mu^+ \mu^-$ is dominated by
one-loop contributions involving the exchange of a virtual $W$ and
top quark (See figure \ref{smdiag}). In the technicolor with scalars
model, there exists at least one physical charged scalar, which can be
exchanged in the loop\footnote{Here, we ignore the
box diagram which can be obtained
from figure 1(b) by changing $W$ into $\pi_p$, because the $\pi_p$
coupling to the muon is small (proportional to the muon mass).},
together with a top quark (figure \ref{tcsdiag}).
Therefore, we need to know the mass of this
scalar and its interaction with quarks.
\begin{figure}
\vskip -3.5em
\centerline{\epsfig{file=diag1.eps,height=14cm,width=14cm}}
\vskip -2.5em
\caption{\label{smdiag} Penguin (a) and box (b) diagrams for
$b \to s \mu^+ \mu^-$ in the Standard Model. }
\end{figure}
\begin{figure}
\vskip -3.5em
\centerline{\epsfig{file=diag2.eps,height=12cm,width=12cm}}
\vskip -2.5em
\caption{\label{tcsdiag}
Additional Feynman diagrams contributing to $b \to s \mu^+
\mu^-$ in technicolor with scalars. $\pi_p$ is the charged physical
scalar in this model.}
\end{figure}
We will now give a brief summary of the model, focusing on the fact
that we need to determine these quantities needed in our computation.
For more details of the model, we refer the reader to \cite{EHS} and
\cite{CC:HG}.
The gauge group in the model is
$SU(N)_{TC}\times SU(3)_C \times SU(2)_L \times U(1)_Y$,
with ordinary particle content the same as in the
Standard Model. These ordinary particles are singlets under the SU(N)
technicolor group. Let us now consider the simplest case where there
is a single weak doublet of technifermions
$$\upsilon_L=\left(\begin{array}{c} p\\m
\end{array}\right)_L ,\ \ p_R,\ m_R,$$
where their hypercharges
$Y(\upsilon_L)=0,Y(p_R)=\frac{1}{2},Y(m_R)=-\frac{1}{2}$
are chosen to cancel gauge anomalies. In addition to the above
particle spectrum, there exists a scalar doublet, $\phi$,
\begin{displaymath}
\phi=\left(\begin{array}{c} \phi^+\\\phi^0
\end{array}\right),
\end{displaymath}
which has hypercharge 1/2. This scalar couples to the technifermions, as well
as ordinary fermions. After the condensation of technifermions,
because of the common scalar $\phi$, ordinary fermions obtain masses.
The isotriplet scalar bound state of $p$ and $m$ (technipions),
and the isotriplet components of $\phi$ will mix. One linear combination
becomes the longitudinal component of $W$ and $Z$, while the orthogonal
combination remains in the low energy theory as an isotriplet of
physical scalars, $\pi_p$, whose coupling to quarks is~\cite{CC:HG}
\begin{equation}
i\left(\frac{f}{v}\right)\left[\bar{D}_LV^{\dag}\pi_p^-h_UU_R
+ \bar{U}_L\pi_p^+Vh_DD_R +h.c.\right].
\end{equation}
Here $V$ is the CKM matrix, $f$ is the technipion decay constant,
and $v$ is the electroweak scale ($\approx 250$ GeV); $U$ and $D$ are
column vectors in flavor space; $h_U$ and $h_D$ are Yukawa coupling
matrices. The above looks like the interaction of Higgs doublet with
quarks in a type-I two Higgs doublet model~\cite{2HD}.
The physical scalar mass can be estimated by the chiral Lagrangian
analysis~\cite{CC:EHS,CC:HG}. At the lowest order,
\begin{equation}\label{pi:mass}
{m_{\pi_p}}^2 = 2c_1\sqrt{2}\frac{4\pi f}{f^\prime}v^2h,
\end{equation}
where $f^\prime$ is the scalar VEV, which is constrained together with $f$ by
\begin{equation}\label{f:fp:v2}
f^2+{f^\prime}^2=v^2;
\end{equation}
and $h=(h_++h_-)/2$ is the average of the technifermion Yukawa couplings
$h_+$ (coupling to $p$) and $h_-$ (coupling to $m$).
$c_1$ is an undetermined coefficient in the chiral expansion, but of
order unity by naive dimensional analysis~\cite{NDA}. We set it to 1,
absorbing its uncertainty into that of $h$, since the two always
appear together when we work at the lowest order.
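Numerically, the lowest-order mass estimate (\ref{pi:mass}) together with the constraint (\ref{f:fp:v2}) is straightforward to evaluate. A sketch (not from the paper), taking $c_1 = 1$ and $v = 250$ GeV as in the text, with all masses in GeV:

```python
import math

V_EW = 250.0  # electroweak scale v in GeV (approximate, as in the text)

def pion_mass(f, h, c1=1.0, v=V_EW):
    """Lowest-order charged-scalar mass:
    m_pi^2 = 2*sqrt(2)*c1*(4*pi*f/f')*v^2*h,
    with the scalar VEV f' fixed by f^2 + f'^2 = v^2."""
    f_prime = math.sqrt(v**2 - f**2)
    m2 = 2.0 * math.sqrt(2.0) * c1 * (4.0 * math.pi * f / f_prime) * v**2 * h
    return math.sqrt(m2)
```

The mass grows with both $f$ (a larger technipion component in the physical scalar) and the average Yukawa coupling $h$.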
The effective one-loop potential for the Higgs field $\sigma$, which
is the isoscalar component of $\phi$, has the following
form~\cite{CC:EHS,CC:HG},
\begin{equation}\label{potential}
V(\sigma)=\frac{1}{2}{M_\phi}^2\sigma^2 +\frac{\lambda}{8}\sigma^4
-\frac{1}{64\pi^2}\left[3h_t^4
+N(h_+^4+h_-^4)\right]\sigma^4\log{\left(\frac{\sigma^2}{\mu^2}\right)}
-8\sqrt{2}c_1\pi f^3h\sigma,
\end{equation}
where $h_t$ is the top quark Yukawa coupling, $N=4$,
and $\mu$ is an arbitrary renormalization scale. The first three
terms in equation (\ref{potential}) are standard one loop
Coleman--Weinberg terms~\cite{Coleman:Weinberg}.
The last term enters the potential through the technicolor interactions.
Apart from the Standard Model parameters, we have four additional
parameters in this model: $(M_\phi, \lambda, h_+, h_-)$. Two limits have
been studied in the literature~\cite{EHS},\cite{CC:EHS}:
{\it (i)} the limit in which $\lambda$ is small and can be neglected;
and {\it (ii)} the limit in which $M_\phi$ is small and can be
neglected. A nice feature of working in these two limits is that at
lowest order the phenomenology depends on the average of $h_+$ and $h_-$,
not on their difference. Let us look at these two limits of the potential:\\
{\it (i) $\lambda\approx 0 $, assuming the $\phi^4$ coupling is
small and can be neglected.}\\
We assume the Higgs field $\sigma$ has no VEV, and therefore terms
in the potential that are linear in $\sigma$ should vanish:
\begin{equation}
V^\prime(\sigma)=0,
\end{equation}
or
\begin{equation}\label{no:VEV:1}
{\widetilde{M}_\phi}^2f^\prime=8\sqrt{2}c_1\pi hf^3,
\end{equation}
where the shifted scalar mass $\widetilde{M}_\phi$ is connected to the
unshifted mass $M_\phi$ by
\begin{equation}\label{shifted:mass}
{\widetilde{M}_\phi}^2=M_\phi^2
+\left(\frac{44}{3}\right)\frac{1}{64\pi^2}\left[3h_t^4
+2Nh^4\right]{f^\prime}^2.
\end{equation}
In deriving the above two equations, we have defined the
renormalized $(\phi^{\dag}\phi)^2$ coupling as
$\lambda_r=V^{\prime\prime\prime\prime}(f^\prime)/3$ to remove the $\mu$
dependence. For simplicity, we also set $h_+=h_-$ in
eq. (\ref{shifted:mass}). By using the shifted scalar mass, we can
absorb radiative corrections which affect the phenomenology of the
charged scalar. However, these corrections still appear in the mass of
the $\sigma$ field, which is determined by $V^{\prime\prime}(f^\prime)$,
\begin{equation}
{ m_\sigma}^2= \widetilde{M}_\phi^2+
\left(\frac{64}{3}\right)\left(\frac{1}{64\pi^2}\right)\left[3h_t^4
+2Nh^4\right]{f^\prime}^2.
\end{equation}
In this limit, the phenomenology can be described in terms of
$(\widetilde{M}_\phi,h)$, since $h_t$ can be expressed in terms of $f$ and
$f^\prime$ ($h_t=\sqrt{2}m_t/f^\prime$). This parameterization was adopted
previously in the literature \cite{EHS}-\cite{CC:EHS:YM}. Alternatively, we
can trade the unphysical parameter $\widetilde{M}_\phi$ for a physical
parameter, e.g., the mass of the isoscalar field, $m_\sigma$. Then the
free parameters will be two physical quantities: $(m_\sigma, h)$. \\
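The limit-(i) relations can be combined numerically: for given $(\widetilde{M}_\phi, h)$, the no-VEV condition (\ref{no:VEV:1}) together with the constraint (\ref{f:fp:v2}) fixes $f$, after which $m_\sigma$ follows. A sketch (assuming $c_1 = 1$, $v = 250$ GeV, $m_t = 175$ GeV and $N = 4$; the bisection solver and names are illustrative):

```python
import math

V_EW, M_TOP, N_TC = 250.0, 175.0, 4

def solve_f(M_tilde, h, c1=1.0, v=V_EW):
    """Solve the limit-(i) no-VEV condition
    M~^2 * f' = 8*sqrt(2)*c1*pi*h*f^3,  with  f'^2 = v^2 - f^2,
    for the technipion decay constant f.  The residual is monotone
    increasing in f on (0, v), so bisection applies."""
    def g(f):
        return (8.0 * math.sqrt(2.0) * c1 * math.pi * h * f**3
                - M_tilde**2 * math.sqrt(v**2 - f**2))
    lo, hi = 1e-6, v - 1e-6
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0.0 else (mid, hi)
    return 0.5 * (lo + hi)

def sigma_mass(M_tilde, h, v=V_EW, m_t=M_TOP, N=N_TC):
    """m_sigma^2 = M~^2 + (64/3)*(1/(64*pi^2))*(3*h_t^4 + 2*N*h^4)*f'^2,
    with the top Yukawa h_t = sqrt(2)*m_t/f'."""
    f = solve_f(M_tilde, h, v=v)
    fp = math.sqrt(v**2 - f**2)
    h_t = math.sqrt(2.0) * m_t / fp
    corr = (64.0 / 3.0) / (64.0 * math.pi**2) * (3.0 * h_t**4 + 2.0 * N * h**4)
    return math.sqrt(M_tilde**2 + corr * fp**2)
```

Radiative corrections always push $m_\sigma$ above $\widetilde{M}_\phi$, consistent with the expression above.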
{\it(ii) $M_\phi\approx 0$, assuming the scalar mass is small
and can be neglected. }\\
As in the case of limit {\it (i)}, we assume the Higgs field has no
vacuum expectation value, in other words,
$V^\prime(\sigma)=0$
so that
\begin{equation}\label{no:VEV:2}
\frac{\tilde{\lambda}}{2}{f^\prime}^3=8\sqrt{2}c_1\pi hf^3,
\end{equation}
where the shifted coupling $\tilde{\lambda}$ is defined by
\begin{equation}
\tilde{\lambda}=\lambda+\frac{11}{24\pi^2}\left[3h_t^4 + 2Nh^4\right].
\end{equation}
The same renormalization scheme as that in limit {\it (i)} is used.
The effects of radiative corrections are absorbed
into the shifted coupling $\tilde{\lambda}$ but still manifest in the
$\sigma$ mass, which is given by
\begin{equation}
{m_\sigma}^2=\frac{3}{2}\tilde{\lambda}{f^\prime}^2
- \frac{1}{8\pi^2}\left[3h_t^4+2Nh^4\right]{f^\prime}^2.
\end{equation}
In this limit, we can choose $(\tilde{\lambda},h)$ to be our free
parameters as in refs. \cite{EHS}-\cite{CC:EHS:YM}, or again
use $(m_\sigma,h)$.\\
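In limit (ii) the relations close analytically: the no-VEV condition (\ref{no:VEV:2}) gives $(f/f^\prime)^3 = \tilde{\lambda}/(16\sqrt{2}\pi c_1 h)$, and the constraint (\ref{f:fp:v2}) then fixes both decay constants. A numerical sketch under the same illustrative assumptions as before ($c_1 = 1$, $v = 250$ GeV, $m_t = 175$ GeV, $N = 4$):

```python
import math

V_EW, M_TOP, N_TC = 250.0, 175.0, 4

def vevs_limit_ii(lam_tilde, h, c1=1.0, v=V_EW):
    """Limit (ii): (lam~/2)*f'^3 = 8*sqrt(2)*c1*pi*h*f^3 and
    f^2 + f'^2 = v^2 give the ratio r = f/f' in closed form."""
    r = (lam_tilde / (16.0 * math.sqrt(2.0) * c1 * math.pi * h)) ** (1.0 / 3.0)
    f_prime = v / math.sqrt(1.0 + r**2)
    return r * f_prime, f_prime   # (f, f')

def sigma_mass_ii(lam_tilde, h, m_t=M_TOP, N=N_TC, v=V_EW):
    """m_sigma^2 = (3/2)*lam~*f'^2 - (1/(8*pi^2))*(3*h_t^4 + 2*N*h^4)*f'^2,
    with h_t = sqrt(2)*m_t/f'."""
    f, fp = vevs_limit_ii(lam_tilde, h, v=v)
    h_t = math.sqrt(2.0) * m_t / fp
    m2 = (1.5 * lam_tilde
          - (3.0 * h_t**4 + 2.0 * N * h**4) / (8.0 * math.pi**2)) * fp**2
    return math.sqrt(m2)
```

Here the radiative corrections are absorbed into $\tilde{\lambda}$ and reappear with a negative sign in $m_\sigma^2$, as in the expression above.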
We should keep in mind that these results are only valid in the part
of the parameter space where the technifermion masses ($\approx
hf^\prime$) are much smaller than the technicolor scale ($\approx 4\pi f$).
If the technifermions are heavier than this scale, the chiral
$SU(2)_L \times SU(2)_R$ will cease to be an approximate symmetry of the
theory and consequently the effective chiral lagrangian analysis will
not make sense.
\section{$B \to X_s \mu^+ \mu^-$ in the model}
As mentioned earlier, in addition to the one-loop graphs of the
Standard Model, additional one-loop graphs with $\pi_p$ as internal
particles are present in this model (figures
\ref{smdiag},\ref{tcsdiag}).
The scalar mass can be derived in limit {\it (i)} by using equations
(\ref{f:fp:v2}) and (\ref{no:VEV:1}) to evaluate equation
(\ref{pi:mass}), and in limit {\it (ii)} by
similarly combining equations (\ref{f:fp:v2}),(\ref{no:VEV:2}) and
(\ref{pi:mass}) .
The inclusive rate for the meson level process $B \to X_s \mu^+\mu^-$
may be approximated by the rate for the free quark transition $b \to s
\mu^+ \mu^-$~\cite{Falk}, provided that the invariant mass of the
dilepton pair is not near any resonances in the charm system such as the
$\psi$. Following refs. \cite{CDF} and \cite{Cho}, we restrict our
analysis to the dilepton invariant mass regions
\begin{equation}\label{mregion}
m_{\mu^+\mu^-}\in(2m_\mu,2.9\mbox{GeV})\cup(3.3\mbox{GeV},
3.6\mbox{GeV})\cup(3.8\mbox{GeV},4.6\mbox{GeV}),
\end{equation}
to ensure the validity of the free quark approximation.
In our calculation of the nonresonant $B \to X_s \mu^+ \mu^-$
branching fraction in the above mentioned disjoint
dilepton mass intervals, we adopt the formalism from ref.~\cite{Cho}.
For the reader's convenience, details can be found in the appendix.
The branching ratio of
$b \to s \mu^+ \mu^-$ may be normalized to the semileptonic ratio,
$b \to c e\bar{\nu}$, to cancel the uncertainties arising from
the KM angles,
\begin{equation}
\mbox{BR}(b \to s \mu^+ \mu^-)
=\frac{\Gamma(b \to s \mu^+ \mu^-)}{\Gamma(b \to c e \bar{\nu})}
\mbox{BR}(b \to c e \bar {\nu}).
\end{equation}
Contours of the nonresonant branching ratio in both limits of the model
are plotted in figures \ref{mm1} to \ref{sigma2}.\\
\begin{figure}
\vskip -.5in
\centerline{\epsfig{file=fig1.eps,height=9cm,width=9cm}}
\vskip -1em
\caption{\label{mm1}
Contours of $\mbox{BR}(B \to X_s \mu^+ \mu^-)_{\mbox{NR}}$
in the $(h,\widetilde{M}_\phi)$ plane in limit {\it (i)}.
The allowed parameter space is bordered by B-line, $hf^\prime=4\pi f$,
and $m_\pi=43.5$GeV. The exclusive limit on the nonresonant
branching ratio from CDF, $1.9\times 10^{-5}$ lies outside the
allowed region.}
\end{figure}
\begin{figure}
\vskip -.5in
\centerline{\epsfig{file=fig2.eps,height=9cm,width=9cm}}
\vskip -1em
\caption{\label{mm2}
Contours of $\mbox{BR}(B \to X_s \mu^+ \mu^-)_{\mbox{NR}}$
in the $(h,\tilde{\lambda})$ plane in limit {\it (ii)}.
The allowed parameter space is confined between the B-line and
$m_\sigma=58.4$GeV. The exclusive limit on the nonresonant
branching ratio from CDF, $1.9\times 10^{-5}$ lies outside the
allowed region.}
\end{figure}
\begin{figure}
\vskip -.5in
\centerline{\epsfig{file=mm1.eps,height=9cm,width=9cm}}
\vskip -1em
\caption{\label{sigma1}
Contours of $\mbox{BR}(B \to X_s \mu^+ \mu^-)_{\mbox{NR}}$
in the $(h, m_\sigma)$ plane in limit {\it (i)}. The allowed parameter
space is bounded by B-line, $hf^\prime=4\pi f$ and $m_\pi=43.5$GeV.
The exclusive limit on the nonresonant branching ratio from CDF,
$1.9\times 10^{-5}$ lies outside the allowed region.}
\end{figure}
\begin{figure}
\vskip -.5in
\centerline{\epsfig{file=mm2.eps,height=9cm,width=9cm}}
\vskip -1em
\caption{\label{sigma2}
Contours of $\mbox{BR}(B \to X_s \mu^+ \mu^-)_{\mbox{NR}}$
in the $(h, m_\sigma)$ plane in limit {\it (ii)}.
The allowed parameter space is confined between B-line and
$m_\sigma=58.4$GeV.
The exclusive limit on the nonresonant branching ratio from CDF,
$1.9\times 10^{-5}$ lies outside the allowed region.}
\end{figure}
In the {\it ``conventional''} parameterization,
we show the nonresonant branching ratio and the allowed parameter space
in the $(h,\widetilde{M}_\phi)$ plane in figure \ref{mm1}, and in the
$(h,\tilde{\lambda})$ plane in figure \ref{mm2}. The allowed region
in figure \ref{mm1} is bounded by the ``B-line'' and $hf^\prime=4\pi f$,
while that in figure \ref{mm2} is bounded by the ``B-line'' and
$m_\sigma=58.4$GeV. The region to the right of the ``B-line'' is
excluded by the experimental constraints on
$B^0 - \bar{B^0}$ mixing~\cite{EHS}. In figure \ref{mm1},
the chiral Lagrangian analysis breaks down~\cite{CC:HG} above the
$hf^\prime = 4 \pi f$ line. In figure \ref{mm2}, the constraint
on the mass of the isoscalar from LEP~\cite{LEP:mass} excludes the
region above and to the left of the curve
$m_\sigma = 58.4$GeV~\cite{CC:EHS:YM}.
In both figures, we also show the curve $m_{\pi_p}=m_t-m_b$. Below and
to the left of this curve, $\pi_p$ is lighter than the top quark, and
the decay rate of $t\to \pi_p b$ is given by
\begin{displaymath}
\Gamma(t\to \pi_p b)=\frac{1}{4\pi}\left[\frac{f}{f^\prime}\right]^2
\frac{(m_t^2-m_{\pi_p}^2)^2}{m_tv^2}.
\end{displaymath}
The branching fraction of the top decays into $W$ and $b$ measured by
CDF is $\mbox{BR}(t
\to Wb)=0.87^{+0.13}_{-0.30}(stat.)^{+0.13}_{-0.11}(syst.)$~\cite{twb}.
The Standard Model value for $\Gamma(t\to Wb)$ is 1.6GeV. At $2\sigma$,
the CDF data indicates that $\Gamma(t\to \pi_pb)<10.42$GeV. This will
further constrain the parameter space. However, because the
uncertainties in the measured $\mbox{BR}(t \to Wb)$ are currently too
large, we do not attempt to make any definite claim on the mass of
$\pi_p$ or to constrain the parameter space further, but will wait for
more accurate data. In any case, the lower limit on
the mass of $\pi_p$ from the experimental search for charged Higgs or
technipions~\cite{lower:pion} excludes the region below the line
$m_{\pi_p}=43.5$GeV in figure \ref{mm1}. In figure \ref{mm2}, the
physical scalar is heavier than the experimental lower bound in the
entire allowed space.
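The interplay between $\Gamma(t\to\pi_p b)$ and the measured $\mbox{BR}(t\to Wb)$ can be checked numerically. A sketch, taking $m_t = 175$ GeV, $\Gamma(t\to Wb) = 1.6$ GeV as quoted above, and assuming that only the $Wb$ and $\pi_p b$ channels contribute to the top width:

```python
import math

def gamma_t_to_pib(f, f_prime, m_pi, m_t=175.0, v=250.0):
    """Tree-level width (in GeV) for t -> pi_p b with a massless b quark:
    Gamma = (1/4pi) * (f/f')^2 * (m_t^2 - m_pi^2)^2 / (m_t * v^2);
    zero if the channel is kinematically closed."""
    if m_pi >= m_t:
        return 0.0
    return ((f / f_prime)**2 * (m_t**2 - m_pi**2)**2
            / (4.0 * math.pi * m_t * v**2))

def br_t_to_wb(gamma_pib, gamma_wb=1.6):
    """BR(t -> Wb) under the two-channel assumption, with the SM value
    Gamma(t -> Wb) ~ 1.6 GeV."""
    return gamma_wb / (gamma_wb + gamma_pib)
```

For example, $\Gamma(t\to\pi_p b) = 10.42$ GeV would correspond to $\mbox{BR}(t\to Wb)\approx 0.13$ under this two-channel assumption.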
In terms of the {\it physical} parameterization, $(m_\sigma, h)$, in
both limits
of the model, we plot contours of the nonresonant branching ratio in
figures \ref{sigma1} and \ref{sigma2}. In this case, the boundaries of
the allowed parameter space are the same as in the
``conventional'' parameterization. There are some
advantages to this choice of free parameters. First, it enables us to
visualize what the parameter space looks like for a fixed not-so-small
$M_\phi$ (or $\lambda$); second, when the limit
on the isoscalar mass changes, we simply need to move the
vertical line $m_\sigma=58.4\mbox{GeV}$ to get the updated parameter
space without carrying out a lengthy computation.
In figures \ref{mm1} and \ref{sigma2}, as we move from upper left
to lower right in the allowed region of the parameter space, we find
that the nonresonant branching ratio
increases from the Standard Model value, $4.9 \times 10^{-6}$, to about
$8.0 \times 10^{-6}$. This corresponds to a maximum enhancement relative
to the value in the Standard Model about $60\%$. The exclusive limit on
the nonresonant branching fraction from CDF~\cite{CDF},
$1.9 \times 10^{-5}$, lies below the region allowed by the
$B^0 - \bar{B^0}$ mixing.
The parameter space is not constrained further by $B \to X_s \mu^+\mu^-$.
It is also not surprising that the ``B-line'' and the contours look similar,
since the $B^0 - \bar{B^0}$ mixing and $b \to s \mu^+\mu^-$ involve the same
$\pi_p - t$ loop.
The branching ratio depends on the sign and magnitude of the Wilson
coefficient $C_7$ of the electromagnetic operator in the effective
Hamiltonian for the $B \to X_s \ell^+ \ell^-$ decay~\cite{Grinstein}.
The uncertainty in the calculation of
$C_7$ (about $15\%$)~\cite{Grinstein} can shift
the lines $\mbox{BR}(B \to X_s\mu^+\mu^-)_{\mbox{\scriptsize NR}}=1.6
\times10^{-5}$ by at most $5\%$ only, which will not move this line
above the ``B-line''. The corresponding shift in $\mbox{BR}(B \to
X_s\mu^+\mu^-)_{\mbox{\scriptsize NR}}=8.0 \times10^{-6}$ is about $1\%$.
We also compute the process $B \to X_s e^+ e^-$ in the
model. The maximal enhancement of the nonresonant branching ratio
relative to its SM counterpart is about $20\%$, which is comparable to the
10-20$\%$ uncertainties in the SM calculation. Experimentally, it is hard
to distinguish this model from the SM in the $B \to X_s e^+ e^-$ decay
channel.
However, given the fact that CLEO is upgrading their detector and the
sensitivity of CDF Run II, the $60\%$ enhancement of the
$B \to X_s \mu^+ \mu^-$ nonresonant branching ratio in technicolor
with scalars will enable them to distinguish the SM from this model in
the $B \to X_s \mu^+ \mu^-$ channel.
\section[short title]{Conclusions}
To extend the phenomenology of the technicolor with scalars model, we have
computed the inclusive decay $B \to X_s \mu^+ \mu^-$ within it. The model
predicts a significant
enhancement of the branching ratio in part of its parameter space.
While current experiments cannot quite see it, CDF Run II and the
upgraded CLEO detector would be sensitive enough to detect it, provided nature
does not trifle with us by making the branching ratio much smaller than
the prediction of the Standard Model; on the other hand, if some completely
different physics makes the branching ratio too small, the experiments will
still set a new upper limit. Then, we would be able to determine the
model's viability.
\section*{Acknowledgments}
The author would like to thank E. H. Simmons for helpful
discussions and comments on the manuscript, and D. Loomba and B. Zhou
for their help with diagrams. {\em This work was supported in part by the
Department of Energy under grant DE-FG02-91ER40676.}
\renewcommand{\theequation}{A\arabic{equation}}
\setcounter{equation}{0}
\section*{Appendix}
This appendix contains details on the calculation of the $B \to X_s
\ell^+\ell^- (\ell = e,\mu)$ branching ratio. We adopt the formalism
from Ref.~\cite{Cho}.
For the reader's convenience, we give the explicit formulas below.
Define the rescaled lepton energies in the b quark rest frame,
\begin{equation}
y_+=\frac{2E_{\ell^+}}{m_b}\ \ \mbox{and}\ \ \ y_-=\frac{2E_{\ell^-}}{m_b}
\end{equation}
and the rescaled invariant dilepton mass $\hat{s}=y_++y_--1$. The
differential decay rate is,
\begin{eqnarray}
\frac{d^2\Gamma(b\to s\ell^+\ell^-)}{dy_+dy_-}&=&
\frac{G_F^2m_b^5|K_{ts}^*K_{tb}|^2}{16\pi^3}
\left(\frac{\alpha_{\scriptstyle EM}}{4\pi}
\right)^2\Bigg\{\bigg[y_+(1-y_+)
\nonumber\\
& & +y_-(1-y_-)\bigg]
(|C^{\mbox{\scriptsize eff}}_9(\hat{s})|^2
+C_{10}^2)\nonumber \\
& & +\frac{4}{\hat{s}}\left[\hat{s}(1-\hat{s})+(1-y_+)^2+(1-y_-)^2
+\frac{2m_{\ell}^2}{\hat{s}m_b^2}\right](C^{\mbox{{\scriptsize
eff}}}_7)^2\nonumber\\
& & +4(1-\hat{s})C_7^{\mbox{{\scriptsize
eff}}}\mbox{Re}(C_9^{\mbox{{\scriptsize eff}}}(\hat{s}))\nonumber\\
& & +2(y_+-y_-)C_{10}\left[2C_7^{\mbox{{\scriptsize
eff}}}+\hat{s}\mbox{Re}(C_9^{\mbox{{\scriptsize
eff}}}(\hat{s}))\right]\Bigg\},
\end{eqnarray}
where,
\begin{eqnarray}
C_7^{\mbox{{\scriptsize eff}}}&=&
C_7(m_{\scriptstyle W})\eta^{16/23}
+\frac{8}{3}C_8(m_{\scriptstyle W})(\eta^{14/23}-\eta^{16/23})
+\sum_{i=1}^8h_i\eta^{a_i},\label{C7eff}\\
C_9^{\mbox{\scriptsize eff}}(\hat{s})&=&\left(\frac{\pi}{\alpha_s(m_{\scriptstyle W})}
+\frac{\omega(\hat{s})}{\eta}\right)(-0.1875
+\sum_{i=1}^8p_i\eta^{a_i+1}) \nonumber\\
& & +\frac{Y(x_t)+Y^{\mbox{\scriptsize TCS}}}{\sin^2{\theta}}
-4(Z(x_t)+Z^{\mbox{\scriptsize TCS}})+(E(x_t)
+E^{\mbox{\scriptsize TCS}})(0.1405+\sum_{i=1}^8q_i\eta^{a_i+1})
\nonumber\\
& & +1.2468+\sum_{i=1}^8\eta^{a_i}\left[r_i
+s_i\eta+t_ih(\frac{m_c}{m_b},\hat{s})
+u_ih(1,\hat{s})+v_ih(0,\hat{s})\right].\label{C9eff}\\
C_{10}&=&-\frac{Y(x_t)+Y^{\mbox{\scriptsize TCS}}}{\sin^2{\theta}}.\label{c10}
\end{eqnarray}
In the above, $\eta$ is defined by
$\eta=\alpha_s(m_{\scriptstyle W})/\alpha_s(m_b)$, and
the eight--dimensional coefficient vectors are given by
\begin{eqnarray}
a_i&=&(0.6087, 0.6957,0.2609,-0.5217,0.4086,-0.4230,-0.8994,0.1456),
\nonumber\\
h_i&=&(2.2996,-1.0880,-0.4286,-0.0714,-0.6494,-0.0380,-0.0186,-0.0057),
\nonumber\\
p_i&=&(0,0,-0.3941,0.2424,0.0433,0.1384,0.1648,-0.0073),
\nonumber\\
q_i&=&(0,0,0,0,0.0318,0.0918,-0.2700,0.0059),
\nonumber\\
r_i&=&(0,0,0.8331,-0.1219,-0.1642,0.0793,-0.0451,-0.1638),\\
s_i&=&(0,0,-0.2009,-0.3579,0.0490,-0.3616,-0.3554,0.0072),
\nonumber\\
t_i&=&(0,0,1.7143,-0.6667,0.1658,-0.2407,-0.0717,0.0990),
\nonumber\\
u_i&=&(0,0,0.2857,0,-0.2559,0.0083,0.0180,-0.0562),
\nonumber\\
v_i&=&(0,0,0.1429,0.1667,-0.1731,-0.1120,-0.0178,-0.0067).
\nonumber
\end{eqnarray}
The functions $h(z,\hat{s})$ and $\omega(\hat{s})$ are given by
\begin{eqnarray}
h(z,\hat{s})&=&-\frac{8}{9}\log{z}+\frac{8}{27}+\frac{4}{9}x\nonumber\\
& & -\frac{2}{9}(2+x)\sqrt{|1-x|}\left\{\begin{array}{ll}
\log\left|\frac{\sqrt{1-x}+1}{\sqrt{1-x}-1}\right|-i\pi,
& \mbox{for}\ x\equiv 4z^2/\hat{s}<1,\\
2\arctan{(1/\sqrt{x-1})}, & \mbox{for}\ x\equiv
4z^2/\hat{s}>1; \end{array}\right.\nonumber\\
\omega(\hat{s})&=&-\frac{4}{3}\mbox{Li}_2(\hat{s})
-\frac{2}{3}\log{(\hat{s})}\log{(1-\hat{s})}-\frac{2}{9}\pi^2
-\frac{5+4\hat{s}}{3(1+2\hat{s})}\log{(1-\hat{s})}\nonumber\\
& & -\frac{2\hat{s}(1+\hat{s})(1-2\hat{s})}{3(1-\hat{s})^2(1
+2\hat{s})}\log{(\hat{s})}
+\frac{5+9\hat{s}-6\hat{s}^2}{6(1-\hat{s})(1+2\hat{s})}.
\end{eqnarray}
The differential rate is integrated over the dilepton invariant
mass region given by (\ref{mregion}) to get the partial width for
$b \to s \ell^+ \ell^-$, which then is normalized to the semileptonic
rate
\begin{equation}
\Gamma(b \to
ce^+\nu)=\frac{G_F^2m_b^5|K_{cb}|^2}{192\pi^3}g(\frac{m_c}{m_b})
\left\{1-\frac{2\alpha_s(m_b)}{3\pi}\left[(\pi^2-\frac{31}{4})
(1-\frac{m_c}{m_b})^2+\frac{3}{2}\right]\right\}
\end{equation}
to cancel the uncertainties in the KM angles;
here $g(z)=1-8z^2+8z^6-z^8-24z^4\log{z}$.
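The normalization can be sketched numerically; the phase-space factor $g(z)$ and the $O(\alpha_s)$ correction bracket above are as given in the text, while the value of $\alpha_s(m_b)$ used below is an assumed illustrative input:

```python
import math

def g(z):
    # Phase-space factor g(z) = 1 - 8z^2 + 8z^6 - z^8 - 24 z^4 log z
    return 1 - 8*z**2 + 8*z**6 - z**8 - 24*z**4*math.log(z)

def qcd_correction(z, alpha_s_mb):
    # The O(alpha_s) bracket {1 - (2 alpha_s/3 pi)[(pi^2 - 31/4)(1 - z)^2 + 3/2]}
    return 1 - (2*alpha_s_mb/(3*math.pi))*((math.pi**2 - 31.0/4)*(1 - z)**2 + 1.5)

z = 1.3/4.7                       # m_c/m_b from the values quoted below
print(g(z))                       # phase-space suppression, ~0.57
print(qcd_correction(z, 0.21))    # alpha_s(m_b) ~ 0.21 is an assumed value
```

Note that $g(z)\to 1$ as $z\to 0$ and $g(1)=0$, as expected of a phase-space factor.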
Adopting the values $m_c=1.3$~GeV,
$m_b=4.7$~GeV, $m_t=176$~GeV,
$\alpha_s(m_Z)=0.118$ and $|K^*_{ts}K_{tb}/K_{cb}|^2=0.95$,
the nonresonant branching ratios are given by
\begin{equation}\label{bree}
\begin{array}{l}
\mbox{BR}(B \to X_s e^+ e^-)_{\mbox{\scriptsize NR}}=
3.0 \times 10^{-7}[5.5
+ 2.3R_7^2 +17.6R_Y^2\\
\ \ \ \ \ \ \ \ \ \ \ +3.7 R_Z^2 -2.1R_7R_Y +1.4R_7R_Z -11.5R_YR_Z\\
\ \ \ \ \ \ \ \ \ \ \ +4.6R_7 +8.1R_Y -5.3R_Z],
\end{array}
\end{equation}
\begin{equation}\label{brmm}
\begin{array}{l}
\mbox{BR}(B \to X_s \mu^+ \mu^-)_{\mbox{\scriptsize NR}}=
3.0 \times 10^{-7}[2.9
+ 0.8R_7^2 +17.5R_Y^2\\
\ \ \ \ \ \ \ \ \ \ \ +3.7 R_Z^2 -2.1R_7R_Y +1.4R_7R_Z -11.4R_YR_Z\\
\ \ \ \ \ \ \ \ \ \ \ +0.7R_7 +8.1R_Y -5.3R_Z],
\end{array}
\end{equation}
where,
\begin{eqnarray}
R_7&=&\frac{C_7(m_{\scriptstyle W})_{\mbox{{\scriptsize SM}}}
+C_7(m_{\scriptstyle W})_{\mbox{{\scriptsize
TCS}}}}{C_7(m_{\scriptstyle W})_{\mbox{{\scriptsize SM}}}}
,\nonumber\\
R_Y&=&\frac{Y(x_t)+Y^{\mbox{{\scriptsize TCS}}}}{Y(x_t)},\\
R_Z&=&\frac{Z(x_t)+Z^{\mbox{{\scriptsize TCS}}}}{Z(x_t)}.\nonumber
\end{eqnarray}
Here, TCS stands for the extra contributions from technicolor with
scalars, and SM stands for the Standard Model. $R_7=R_Y=R_Z=1$ gives
the Standard Model nonresonant branching ratio $7.3\times 10^{-6}$ for
$B \to X_s e^+ e^-$ and $4.9\times 10^{-6}$ for $B \to X_s\mu^+\mu^-$.
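As a consistency check of (\ref{bree}) and (\ref{brmm}), setting $R_7=R_Y=R_Z=1$ should reproduce the quoted Standard Model values; a short numerical sketch:

```python
def br_ee(R7, RY, RZ):
    # Nonresonant BR(B -> X_s e+ e-), eq. (bree)
    return 3.0e-7*(5.5 + 2.3*R7**2 + 17.6*RY**2 + 3.7*RZ**2
                   - 2.1*R7*RY + 1.4*R7*RZ - 11.5*RY*RZ
                   + 4.6*R7 + 8.1*RY - 5.3*RZ)

def br_mumu(R7, RY, RZ):
    # Nonresonant BR(B -> X_s mu+ mu-), eq. (brmm)
    return 3.0e-7*(2.9 + 0.8*R7**2 + 17.5*RY**2 + 3.7*RZ**2
                   - 2.1*R7*RY + 1.4*R7*RZ - 11.4*RY*RZ
                   + 0.7*R7 + 8.1*RY - 5.3*RZ)

print(br_ee(1.0, 1.0, 1.0))    # Standard Model point: ~7.3e-6
print(br_mumu(1.0, 1.0, 1.0))  # Standard Model point: ~4.9e-6
```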
The extra contributions in TCS are
\begin{eqnarray}
Y^{\mbox{{\scriptsize TCS}}}&=
&-\frac{1}{8}\left(\frac{f}{f^\prime}\right)^2
x_tf_5\left(\frac{m_t^2}{m_{\pi_p^\pm}^2}\right),
\nonumber\\
Z^{\mbox{{\scriptsize TCS}}}&=
&-\frac{1}{8}\left(\frac{f}{f^\prime}\right)^2
x_t f_5\left(\frac{m_t^2}{m_{\pi_p^\pm}^2}\right)
-\frac{1}{72}\left(\frac{f}{f^\prime}\right)^2
f_6\left(\frac{m_t^2}{m_{\pi_p^\pm}^2}\right),
\label{Ztcs}
\end{eqnarray}
where $x_t$ is defined by $x_t=(m_t/m_{\scriptstyle W})^2$.
The functions $Y(x)$, $Z(x)$, and $E(x)$ appearing in $R_Y$, $R_Z$,
$C_9^{\mbox{\scriptsize eff}}$, and $C_{10}$ are determined by the following:
\begin{eqnarray}
Y(x)&=&\frac{4x-x^2}{8(1-x)}+\frac{3x^2}{8(1-x)^2}\log{x},\nonumber\\
Z(x)&=&\frac{108x-259x^2+163x^3-18x^4}{144(1-x)^3}\nonumber\\
&\ &+\frac{-8+50x-63x^2-6x^3+24x^4}{72(1-x)^4}\log{x},\\
E(x)&=&\frac{18x-11x^2-x^3}{12(1-x)^3}
-\frac{4-16x+9x^2}{6(1-x)^4}\log{x}.\nonumber
\end{eqnarray}
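These functions can be evaluated directly; the sketch below also checks that the apparent poles at $x=1$ cancel between the rational and logarithmic pieces (the $m_{\scriptstyle W}$ value used for $x_t$ is an assumed input):

```python
import math

def Y(x):
    return (4*x - x**2)/(8*(1 - x)) + 3*x**2*math.log(x)/(8*(1 - x)**2)

def Z(x):
    return ((108*x - 259*x**2 + 163*x**3 - 18*x**4)/(144*(1 - x)**3)
            + (-8 + 50*x - 63*x**2 - 6*x**3 + 24*x**4)*math.log(x)/(72*(1 - x)**4))

def E(x):
    return ((18*x - 11*x**2 - x**3)/(12*(1 - x)**3)
            - (4 - 16*x + 9*x**2)*math.log(x)/(6*(1 - x)**4))

xt = (176.0/80.33)**2             # x_t = (m_t/m_W)^2; m_W = 80.33 GeV is assumed
print(Y(xt), Z(xt), E(xt))

# The 1/(1-x)^n poles cancel against the log pieces: each function is
# smooth across x = 1.
for f in (Y, Z, E):
    assert abs(f(1 - 1e-3) - f(1 + 1e-3)) < 0.1
```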
$C_7(m_{\scriptstyle W})$ is the Wilson coefficient evaluated at the
scale $m_{\scriptstyle W}$,
\begin{eqnarray}
C_7(m_{\scriptstyle W})_{\mbox{{\scriptsize
SM}}}&=&\frac{1}{2}\left(-\frac{x_t}{2}f_1(x_t)\right),\label{C7sm}\\
C_7(m_{\scriptstyle W})_{\mbox{{\scriptsize TCS}}}&=
&\frac{1}{6}\left(\frac{f}{f^\prime}\right)^2
\left[-f_2\left(\frac{m_t^2}{m_{\pi_p^\pm}^2}\right)
+\frac{1}{2}\frac{m_t^2}{m_{\pi_p^\pm}^2}
f_1\left(\frac{m_t^2}{m_{\pi_p^\pm}^2}\right)\right].\label{C7tcs}
\end{eqnarray}
Here, $C_7(m_{\scriptstyle W})_{\mbox{{\scriptsize TCS}}}$ has the same
form as in a type-I two Higgs doublet model~\cite{Grinstein}.
Finally, the $f$ functions appearing in (\ref{Ztcs})
and (\ref{C7tcs}) are expressed as,
\begin{eqnarray}
f_1(x)&=&\frac{-7+5x+8x^2}{6(1-x)^3}-\frac{2x-3x^2}{(1-x)^4}\log{x}\nonumber\\
f_2(x)&=&\frac{3x-5x^2}{2(1-x)^2}+\frac{2x-3x^2}{(1-x)^3}\log{x}\nonumber\\
f_5(x)&=&\frac{x}{1-x}+\frac{x}{(1-x)^2}\log{x}\nonumber\\
f_6(x)&=&\frac{38x-79x^2+47x^3}{6(1-x)^3}+\frac{4x-6x^2+3x^4}{(1-x)^4}\log{x}.
\end{eqnarray}
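A short numerical sketch of the $f$ functions; the technipion mass below is an arbitrary illustrative choice, and the check at $x=1$ confirms that the apparent singularities are removable:

```python
import math

def f1(x):
    return (-7 + 5*x + 8*x**2)/(6*(1 - x)**3) - (2*x - 3*x**2)*math.log(x)/(1 - x)**4

def f2(x):
    return (3*x - 5*x**2)/(2*(1 - x)**2) + (2*x - 3*x**2)*math.log(x)/(1 - x)**3

def f5(x):
    return x/(1 - x) + x*math.log(x)/(1 - x)**2

def f6(x):
    return ((38*x - 79*x**2 + 47*x**3)/(6*(1 - x)**3)
            + (4*x - 6*x**2 + 3*x**4)*math.log(x)/(1 - x)**4)

x = (176.0/400.0)**2   # m_t^2/m_{pi_p}^2 for an assumed 400 GeV charged technipion
print(f1(x), f2(x), f5(x), f6(x))

# The apparent singularities at x = 1 are removable; e.g. f5(x) -> -1/2 there.
assert abs(f5(1 - 1e-3) + 0.5) < 0.01
```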
\section{Introduction}
The Dirac equation on Minkowski space ${\mathbb M}$ can be written in the form $i\gamma^\alpha\partial_\alpha\psi=m\psi$. The gamma matrices $\gamma^\alpha$ give a representation of the Clifford algebra on ${\mathbb M}$. These matrices act on a complex four dimensional vector space $V$, and spinors $\psi$ are $V$--valued fields $\psi:{\mathbb M}\rightarrow V$. Although $V$ and ${\mathbb M}$ are both four dimensional spaces, they are fundamentally different. Indeed, if $\Lambda$ is a Lorentz transformation, then ${\bf v}'=\Lambda{\bf v}$ is the transformation rule for four-vectors in ${\mathbb M}$. In contrast, the corresponding transformation rule for spinors in $V$ is $\psi'=S\psi$, where $S=\Sigma(\Lambda)$ is the image of $\Lambda$ under the spin representation $\Sigma$ of the Lorentz group constructed from the gamma matrices.
The standard approach to generalizing the Dirac equation to a curved spacetime $M$ is to assume that spinors are fields of a rank four complex vector bundle $E$ over $M$. The connection on $E$ is given by the spin connection $\nabla_\mu\psi=\partial_\mu\psi+\Gamma_\mu^{(s)}\psi$. The spin connection matrix $\Gamma_\mu^{(s)}$ can be expressed in terms of the metric $\eta_{ab}$ and gamma matrices $\gamma_a$ for Minkowski space, as well as the vierbein $e_\mu^a$ that determines the metric on $M$: $g_{\mu\nu}=e_\mu^ae_\nu^b\eta_{ab}$. Indeed,
\begin{equation}\label{eq:spincon}
\Gamma_\mu^{(s)}\doteq
\tfrac{1}{8}([\gamma^\nu,\partial_\mu\gamma_\nu]
-\Gamma_{\nu\mu}^\rho[\gamma^\nu,\gamma_\rho])
\end{equation}
where $\gamma_\mu=e_\mu^a\gamma_a$. The Dirac equation on $M$ is then $i\gamma^\mu\nabla_\mu\psi=m\psi$. See \cite{Pollock}.
A drawback of this approach is that the geometric and topological relationship between the spinor bundle $E$ and spacetime $M$ is not specified a priori, so it is not clear how non--zero mass solutions to the Dirac equation couple with gravity. The main idea of this article is to use the (real) Clifford bundle ${\it Cl}_\ast M$ in place of $E$, thus completely specifying the relation between spinor space and spacetime. The immediate price paid, however, is that spinors are now sixteen dimensional. We will see that an even steeper price must be paid: the spin action constructed from the gamma matrices must be replaced by the action of the Lorentz group on $M$ when extended to ${\it Cl}_\ast M$. The spin action is still present, but it is relegated to a secondary role.
The work presented here is related to the spacetime algebra formulation of the Dirac equation on Minkowski space by Hestenes \cite{Hestenes}. The spacetime algebra is the Clifford algebra ${\it Cl}({\mathbb M})$ of Minkowski space, and spinors are fields of the real eight dimensional subspace consisting of even order elements. Using the complex structure induced by the unit pseudoscalar of ${\it Cl}({\mathbb M})$, this subspace can also be identified as a four dimensional complex vector space. Under this identification, the spacetime algebra formulation is equivalent to the usual Dirac equation. In contrast, the formulation presented in this article, when restricted to Minkowski space, uses the entire Clifford algebra instead of a subspace. Although possible to do so, we do not make use of the complex structure.
{\it Article summary.} We start off by reviewing the basic constructions used with the Clifford bundle, such as extending the metric, connection, and curvature. We then indicate how these constructions transform under a change of coordinate basis, and write down some basic invariant quantities. After briefly discussing the spin action, we construct a Lagrangian from the geometric invariants and compute its variation with respect to (1) the spinor field, and (2) the metric. We end by discussing how minimal coupling can be used to incorporate other forces.
\section{Geometric framework}
Let $M$ denote spacetime. That is, $M$ is a four--dimensional Lorentz manifold with metric $g$. Throughout we let $x=x^\alpha$, for $\alpha=0,1,2,3$, denote local coordinates for $M$, and let ${\bf e}_\alpha\doteq\partial/\partial x^\alpha$ be the corresponding basis for the fiber of the tangent bundle $T_\ast M$ of $M$ at $x$. We adopt the usual conventions: $g_{\alpha\beta}\doteq g({\bf e}_\alpha,{\bf e}_\beta)$ are the components of $g$, and $g^{\alpha\beta}$ are the components of the metric inverse $g^{-1}$, so that $g_{\alpha\beta}\,g^{\beta\mu}=\delta_\alpha^\mu$. Here and throughout the summation convention is used. The canonical metric--compatible connection on $T_\ast M$ is denoted by $\nabla$: $\nabla_\alpha{\bf e}_\beta=\Gamma_{\alpha\beta}^\mu{\bf e}_\mu$, where $\Gamma_{\alpha\beta}^\mu=g^{\mu\nu}\Gamma_{\nu\alpha\beta}$ and $\Gamma_{\nu\alpha\beta}=\tfrac{1}{2}(g_{\nu\alpha\beta}-g_{\alpha\beta\nu}+g_{\beta\nu\alpha})$, with $g_{\alpha\beta\nu}\doteq\partial_\nu g_{\alpha\beta}=\partial g_{\alpha\beta}/\partial x^\nu$.
Although it has little bearing on the discussion, we take the signature of $g$ to be $(-,+,+,+)$. Note however, that when combined with the Dirac algebra sign convention $\gamma_\alpha\gamma_\beta+\gamma_\beta\gamma_\alpha=2g_{\alpha\beta}$, the Dirac equation takes the form $\gamma^\alpha\partial_\alpha\psi=m\psi$.
\medskip
\noindent
{\bf Clifford bundle.}
Let ${\it Cl}_\ast M$ denote the real Clifford bundle of $M$, which is formed by taking the Clifford algebra of the fibers of the tangent bundle of $M$. In other words, ${\it Cl}_\ast M$ is the result of imposing the relations
$${\bf e}_\alpha{\bf e}_\beta+{\bf e}_\beta{\bf e}_\alpha=2g_{\alpha\beta}$$
on the tensor algebra of $T_\ast M$. The Clifford bundle is a real vector bundle of rank sixteen over $M$. For example, we can use
$${\bf e}_\emptyset\doteq 1,\,
{\bf e}_0,\,
{\bf e}_1,\,
{\bf e}_2,\,
{\bf e}_3,\,
{\bf e}_{01},\,
{\bf e}_{02},\,
{\bf e}_{03},\,
{\bf e}_{12},\,
{\bf e}_{13},\,
{\bf e}_{23},\,
{\bf e}_{012},\,
{\bf e}_{013},\,
{\bf e}_{023},\,
{\bf e}_{123},\,
{\bf e}_{0123}
$$
as a local real vector space basis for the fibers of ${\it Cl}_\ast M$, where we have set ${\bf e}_I\doteq{\bf e}_{\alpha_1}{\bf e}_{\alpha_2}\cdots{\bf e}_{\alpha_k}$ for the multi--index $I=\alpha_1\alpha_2\cdots\alpha_k$. If $I$ has an even or odd number of indices, we say that ${\bf e}_I$ is of even or odd order, respectively.
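As a concrete check that these sixteen blades are linearly independent, one can realize the Clifford relations by $4\times4$ Dirac matrices (a representation chosen purely for illustration; with signature $(-,+,+,+)$ one may take $i$ times the standard Dirac matrices) and verify that the sixteen products span a sixteen--dimensional real space:

```python
import numpy as np
from itertools import combinations

# Dirac matrices for eta = diag(+,-,-,-); multiplying by i flips the signature:
# {g_a, g_b} = 2 diag(-1,+1,+1,+1)_ab, matching e_a e_b + e_b e_a = 2 g_ab.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
Z2, I2 = np.zeros((2, 2), dtype=complex), np.eye(2, dtype=complex)
gs = [1j*np.block([[I2, Z2], [Z2, -I2]])]
gs += [1j*np.block([[Z2, s], [-s, Z2]]) for s in (s1, s2, s3)]

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
for a in range(4):
    for b in range(4):
        acomm = gs[a] @ gs[b] + gs[b] @ gs[a]
        assert np.allclose(acomm, 2*eta[a, b]*np.eye(4))

# All sixteen blades e_I = g_{a1} ... g_{ak} with increasing multi-index I.
blades = []
for k in range(5):
    for idx in combinations(range(4), k):
        M = np.eye(4, dtype=complex)
        for a in idx:
            M = M @ gs[a]
        blades.append(M)

# Viewed as vectors over R, the sixteen blades are linearly independent.
V = np.array([np.concatenate([M.real.ravel(), M.imag.ravel()]) for M in blades])
print(np.linalg.matrix_rank(V))   # -> 16
```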
A spinor field $\psi$ on ${\it Cl}_\ast M$ is a section, and we write $\psi=\psi^I{\bf e}_I$ for some real--valued functions $\psi^I=\psi^I(x^\alpha)$ with $I=\emptyset,0,1,\dots,0123$. We say that $\psi$ is even (odd) if it can be expressed as a combination of only even (odd) order basis vectors.
\medskip
\noindent
{\bf Extended metric.}
We extend the metric on $T_\ast M$ to a metric on ${\it Cl}_\ast M$, which we denote as $\hat{g}$, by setting
$$\hat{g}(\psi,\phi)
\doteq -\tfrac{1}{2}\langle\psi^\dagger\phi
+\phi^\dagger\psi\rangle_\emptyset.
$$
Here, $\psi^\dagger$ denotes the involution on ${\it Cl}_\ast M$ obtained by the linear extension of $({\bf e}_{\alpha_1\alpha_2\cdots\alpha_k})^\dagger\doteq(-1)^k{\bf e}_{\alpha_k\cdots\alpha_2\alpha_1}$, and $\langle\psi\rangle_\emptyset$ denotes the linear projection onto the ${\bf e}_\emptyset$--component of $\psi$; i.e., $\langle\psi^I{\bf e}_I\rangle_\emptyset=\psi^\emptyset$. We see that $\hat{g}$ so defined is symmetric: $\hat{g}(\psi,\phi)=\hat{g}(\phi,\psi)$, or equivalently as a $16\times 16$ matrix, $\hat{g}^T=\hat{g}$. Moreover, we have $\hat{g}({\bf u}\psi,\phi)+\hat{g}(\psi,{\bf u}\phi)=0$ for all four--vector fields ${\bf u}$ on $T_\ast M$.
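These properties of $\hat{g}$ can be verified directly with a small symbolic implementation of the Clifford product on basis blades (a sketch, assuming a fixed diagonal metric at a point; in general $g_{\alpha\beta}$ varies over $M$):

```python
import random
from itertools import combinations

g = [-1.0, 1.0, 1.0, 1.0]   # diagonal metric at a point, signature (-,+,+,+)
BASIS = [I for k in range(5) for I in combinations(range(4), k)]

def blade_mul(I, J):
    # Clifford product of basis blades: e_I e_J = sign * e_K with K increasing
    seq, sign = list(I) + list(J), 1.0
    i = 0
    while i < len(seq) - 1:
        if seq[i] == seq[i + 1]:            # contract: e_a e_a = g_aa
            sign *= g[seq[i]]
            del seq[i:i + 2]
            i = max(i - 1, 0)
        elif seq[i] > seq[i + 1]:           # anticommute: e_a e_b = -e_b e_a
            seq[i], seq[i + 1] = seq[i + 1], seq[i]
            sign = -sign
            i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(seq)

def mul(psi, phi):
    # Product of multivectors, stored as {multi-index: coefficient}
    out = {}
    for I, a in psi.items():
        for J, b in phi.items():
            s, K = blade_mul(I, J)
            out[K] = out.get(K, 0.0) + s*a*b
    return out

def dagger(psi):
    # (e_{a1...ak})^dagger = (-1)^k e_{ak...a1} = (-1)^{k(k+1)/2} e_{a1...ak}
    return {I: (-1.0)**(len(I)*(len(I) + 1)//2)*a for I, a in psi.items()}

def ghat(psi, phi):
    # Extended metric: -(1/2) < psi^dagger phi + phi^dagger psi >_0
    s, t = mul(dagger(psi), phi), mul(dagger(phi), psi)
    return -0.5*(s.get((), 0.0) + t.get((), 0.0))

# ghat restricts to g on four-vectors ...
for a in range(4):
    assert ghat({(a,): 1.0}, {(a,): 1.0}) == g[a]

# ... and Clifford multiplication by a vector is skew with respect to ghat.
rnd = random.Random(0)
u = {(a,): rnd.uniform(-1, 1) for a in range(4)}
psi = {I: rnd.uniform(-1, 1) for I in BASIS}
phi = {I: rnd.uniform(-1, 1) for I in BASIS}
assert abs(ghat(mul(u, psi), phi) + ghat(psi, mul(u, phi))) < 1e-12
print("extended metric checks passed")
```

Note that $\hat{g}({\bf e}_\emptyset,{\bf e}_\emptyset)=-1$ in this convention, so $\hat{g}$ is not positive definite on scalars.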
\medskip
\noindent
{\bf Extended connection.}
The metric--compatible connection $\nabla$ on $T_\ast M$ can also be extended to ${\it Cl}_\ast M$, which we denote by $\hat\nabla$, via the Leibniz rule:
$$\hat\nabla_\mu({\bf e}_I{\bf e}_J)
=(\nabla_\mu{\bf e}_I){\bf e}_J
+{\bf e}_I(\nabla_\mu{\bf e}_J).
$$
Define the extended Christoffel symbol $\hat\Gamma_{\alpha I}^J$ such that $\hat\nabla_\alpha{\bf e}_I=\hat\Gamma_{\alpha I}^J{\bf e}_J$. We may write $\hat\nabla_\mu\psi=\partial_\mu\psi+\hat\Gamma_\mu\psi$, where $\hat\Gamma_\mu$ denotes the $16\times 16$ matrix with components $(\hat\Gamma_\mu)_I^J=\hat\Gamma_{\mu I}^J$. The extended metric is compatible with the extended connection: $\hat{g}(\hat\nabla_\mu\psi,\phi)+\hat{g}(\psi,\hat\nabla_\mu\phi)=\partial_\mu\hat{g}(\psi,\phi)$. Equivalently in matrix form,
\begin{equation}\label{eq:metric-compat}
\hat\Gamma_\mu^T\,\hat{g}+\hat{g}\,\hat\Gamma_\mu
=\partial_\mu\hat{g}.
\end{equation}
We remark that the extended connection is not equal to the spin connection, equation \eqref{eq:spincon}. Indeed, the spin connection is not compatible with the extended metric.
\medskip
\noindent
{\bf Gamma matrices.}
For a four--vector field ${\bf v}$ on $T_\ast M$, we define the gamma matrix $\gamma_{\bf v}$ as Clifford multiplication on the left: $\gamma_{\bf v}\psi\doteq{\bf v}\psi$. In particular, we set $\gamma_\alpha\doteq\gamma_{{\bf e}_\alpha}$. As usual, we define $\gamma^\alpha\doteq g^{\alpha\beta}\gamma_\beta$. One verifies that
\begin{equation}\label{eq:clifford}
\gamma_\alpha\gamma_\beta+\gamma_\beta\gamma_\alpha
=2g_{\alpha\beta}
\quad\text{and}\quad
\gamma^\alpha\gamma^\beta+\gamma^\beta\gamma^\alpha
=2g^{\alpha\beta}
\end{equation}
and
\begin{equation}\label{eq:gamma-xmetric}
\gamma_\alpha^T\,\hat{g}+\hat{g}\,\gamma_\alpha=0
\end{equation}
as well as the commutation relations with the extended Christoffel symbols
\begin{equation}\label{eq:gamma-xconn}
[\gamma_\alpha,\hat\Gamma_\beta]
=\partial_\beta\gamma_\alpha-\Gamma_{\alpha\beta}^\epsilon\gamma_\epsilon
\quad\text{and}\quad
[\gamma^\alpha,\hat\Gamma_\beta]
=\partial_\beta\gamma^\alpha+\Gamma_{\beta\epsilon}^\alpha\gamma^\epsilon.
\end{equation}
\medskip
\noindent
{\bf Extended curvature.}
Recall that the Riemann curvature operator $\Omega_{\alpha\beta}$ on $T_\ast M$ is given by $\Omega_{\alpha\beta}=\nabla_\alpha\nabla_\beta-\nabla_\beta\nabla_\alpha-\nabla_{[{\bf e}_\alpha,{\bf e}_\beta]}$. Note that as we are assuming that the ${\bf e}_\alpha$ are coordinate frames: ${\bf e}_\alpha=\partial/\partial x^\alpha$, we have that $[{\bf e}_\alpha,{\bf e}_\beta]=0$. By using the extended connection in place of the connection, we obtain the extended curvature operator $\hat\Omega_{\alpha\beta}$ for ${\it Cl}_\ast M$. In matrix form, we have
\begin{equation}\label{eq:xcurvature}
\hat\Omega_{\alpha\beta}
=\partial_\alpha\hat\Gamma_\beta
-\partial_\beta\hat\Gamma_\alpha
+[\hat\Gamma_\alpha,\hat\Gamma_\beta].
\end{equation}
\medskip
\noindent
{\bf Dirac operator.}
Clifford multiplication allows us to define the Dirac operator on ${\it Cl}_\ast M$ by $D\psi\doteq{\bf e}^\alpha\hat\nabla_\alpha\psi$. Equivalently,
\begin{equation}\label{eq:dirac-op}
D\psi=\gamma^\alpha\partial_\alpha\psi
+\gamma^\alpha\hat\Gamma_\alpha\psi.
\end{equation}
\medskip
\noindent
{\bf Extended group action.}
Suppose that $A$ is a (local) transformation on $T_\ast M$, so that $A{\bf e}_\alpha=A_\alpha^\beta{\bf e}_\beta$. We extend to a transformation on ${\it Cl}_\ast M$:
\begin{equation}\label{eq:xaction}
\hat{A}{\bf e}_I
\doteq(A{\bf e}_{\alpha_1})\cdots(A{\bf e}_{\alpha_k})
=A_{\alpha_1}^{\beta_1}\cdots A_{\alpha_k}^{\beta_k}
{\bf e}_{\beta_1}\cdots{\bf e}_{\beta_k}
=\hat{A}_I^J{\bf e}_J.
\end{equation}
$\hat{A}$ is defined to fix ${\bf e}_\emptyset$: $\hat{A}{\bf e}_\emptyset={\bf e}_\emptyset$. We may view $\hat{A}$ as a $16\times 16$ matrix that contains the $4\times 4$ matrix $A$: $\hat{A}_\alpha^\beta=A_\alpha^\beta$. Note that if $A$ is invertible, then $(A^{-1})^{\hat{}}=\hat{A}^{-1}$.
\medskip
Most of the definitions given here are standard. See \cite{Lawson} for instance, although the notation and conventions employed are different.
\section{Change of bases and invariants}
Let $B$ be a local change of basis for $T_\ast M$, so that we have the new fiber basis ${\bf e}_\alpha'=(B^{-1})_\alpha^\beta{\bf e}_\beta$. Using \eqref{eq:xaction}, we extend $B$ to a change of basis for ${\it Cl}_\ast M$, ${\bf e}_I'\doteq\hat{B}^{-1}{\bf e}_I=(\hat{B}^{-1})_I^J{\bf e}_J$. We indicate how the geometric quantities of the previous section are affected by such a change of basis.
For a field $\psi$ on ${\it Cl}_\ast M$, we have $\psi=\psi^I{\bf e}_I$ and $\psi=\psi'^I{\bf e}_I'$, so that $\psi'^I=\hat{B}_J^I\psi^J$ gives the transformation rule for fields on ${\it Cl}_\ast M$. In matrix form
\begin{equation}\label{eq:xfield}
\psi'=\hat{B}\psi.
\end{equation}
From the definition of the extended metric on ${\it Cl}_\ast M$, we find that $\hat{g}$ transforms according to the rule $\hat{g}_{IJ}'=(\hat{B}^{-1})_I^K(\hat{B}^{-1})_J^L\hat{g}_{KL}$. I.e.,
\begin{equation}\label{eq:xmetric}
\hat{g}'=\hat{B}^{-T}\,\hat{g}\hat{B}^{-1}.
\end{equation}
To deduce the transformation rule for the extended connection, we note that for a four--vector field ${\bf v}=v^\alpha{\bf e}_\alpha$, we have $\hat\nabla_{\bf v}=v^\alpha\hat\nabla_\alpha$. Thus $\hat\nabla_\alpha'=\hat\nabla_{{\bf e}_\alpha'}=(B^{-1})_\alpha^\beta\hat\nabla_\beta$. Setting $\tilde{B}\doteq\hat{B}^{-1}$ for notational convenience, one computes $\hat\nabla_\alpha'{\bf e}_I'=(B^{-1})_\alpha^\beta\hat{B}_J^L(\partial_\beta\tilde{B}_I^J+\hat\Gamma_{\beta K}^J\tilde{B}_I^K){\bf e}_L'$. In matrix notation, $\hat\Gamma_\alpha'=(B^{-1})_\alpha^\beta\hat{B}(\partial_\beta\hat{B}^{-1}+\hat\Gamma_\beta\hat{B}^{-1})$. Or equivalently,
\begin{equation}\label{eq:xchristoffel}
\hat\Gamma_\alpha'
=(B^{-1})_\alpha^\beta
(-\partial_\beta\hat{B}+\hat{B}\hat\Gamma_\beta)\hat{B}^{-1}.
\end{equation}
For the gamma matrix transformation rule, we compute that $\gamma_\alpha'{\bf e}_I'={\bf e}_\alpha'{\bf e}_I'=(B^{-1})_\alpha^\beta\hat{B}_K^L\gamma_{\beta J}^K\tilde{B}_I^J{\bf e}_L'$. It follows that
\begin{equation}\label{eq:xgamma}
\gamma_\alpha'
=(B^{-1})_\alpha^\beta\,\hat{B}\,\gamma_\beta\,\hat{B}^{-1}
\quad\text{and}\quad
{\gamma'}^\alpha
=B_\beta^\alpha\,\hat{B}\,\gamma^\beta\,\hat{B}^{-1}.
\end{equation}
The second equation follows from the first via the identity ${g'}^{\alpha\beta}=B_\rho^\alpha g^{\rho\sigma}B_\sigma^\beta$.
Using equations \eqref{eq:xfield}, \eqref{eq:xchristoffel}, and \eqref{eq:xgamma} one computes that $D\psi$ transforms like a field:
\begin{equation}\label{eq:dirac}
D'\psi'=\hat{B}D\psi.
\end{equation}
The curvature of any connection is well--known to be tensorial. Thus the extended curvature operator $\hat\Omega_{\alpha\beta}$ transforms as
\begin{equation}\label{eq:curvature-trans}
\hat\Omega_{\alpha\beta}'
=(B^{-1})_\alpha^\rho(B^{-1})_\beta^\sigma
\hat{B}\,\hat\Omega_{\rho\sigma}\,\hat{B}^{-1}.
\end{equation}
This equation may also be obtained directly from the definition of the extended curvature.
\begin{theorem}\label{thm:invscalars}
The following scalars are invariant under an extended change of basis:
$$\psi^T\,\hat{g}\,\psi,\quad
\psi^T\,\hat{g}\,D\psi,\quad
{\rm tr}_k(\gamma^\alpha\gamma^\beta\hat\Omega_{\alpha\beta})
\qed
$$
\end{theorem}
\noindent
Here ${\rm tr}_k(A)$ denotes the $k$--th order trace of the $n\times n$ matrix $A$, defined by $\det(I+sA)=\sum_{k=0}^n s^k\,{\rm tr}_k(A)$. In particular, ${\rm tr}_1(A)={\rm tr}(A)$ is the usual trace of a matrix, and ${\rm tr}_n(A)=\det(A)$ is the determinant of $A$.
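The $k$--th order traces are the elementary symmetric functions of the eigenvalues, and so can be read off from the characteristic polynomial; a minimal numerical sketch:

```python
import numpy as np

def tr_k(A, k):
    # np.poly(A) gives c with det(t I - A) = t^n + c[1] t^{n-1} + ... + c[n],
    # and c[k] = (-1)^k e_k(eigenvalues), so tr_k(A) = e_k = (-1)^k c[k].
    return (-1.0)**k * np.poly(A)[k]

A = np.diag([1.0, 2.0, 3.0])
# det(I + sA) = (1+s)(1+2s)(1+3s) = 1 + 6s + 11s^2 + 6s^3
print([round(tr_k(A, k)) for k in range(4)])   # -> [1, 6, 11, 6]
```

In particular ${\rm tr}_1$ returns the trace $6$ and ${\rm tr}_3$ the determinant $6$, as stated.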
It should be noted that although we have only considered a change of basis transformation on ${\it Cl}_\ast M$ obtained by extending one on $M$, we can also consider a general change of basis. That is, locally ${\bf e}_I'=(A^{-1})_I^J{\bf e}_J$. If we do so, then only transformation rules \eqref{eq:xfield} and \eqref{eq:xmetric} are valid, and only the scalar $\psi^T\hat{g}\psi$ is invariant.
\section{Spin action}
The Lorentz group $O(g)$ consists of (local) transformations $\Lambda$ on $T_\ast M$ that preserve the metric on $M$: $\Lambda^T g\Lambda=g$. We may extend the action of $O(g)$ on $M$ to ${\it Cl}_\ast M$ just as in equation \eqref{eq:xaction}. One shows that the extended action preserves the extended metric. However, not every transformation that preserves the extended metric is obtained in this manner. The standard construction with gamma matrices can be used to generate another action, the spin action on ${\it Cl}_\ast M$. This action also preserves the extended metric.
Recall that the Lorentz algebra ${\it so}(g)$ consists of (local) transformations $L$ on $T_\ast M$ such that $L^Tg+gL=0$. Moreover, ${\it so}(g)$ is generated as a (real) vector space by transformations of the form
\begin{equation}\label{eq:lorentz}
{\bf u}\wedge^g{\bf v}
\doteq({\bf u}{\bf v}^T-{\bf v}{\bf u}^T)g
\end{equation}
In fact, the six elements of the form ${\bf e}_\alpha\wedge^g{\bf e}_\beta$ with $\alpha<\beta$ form a real vector space basis of $so(g)$. We obtain the (local) spin representation $\sigma$ of ${\it so}(g)$ by making the assignment
\begin{equation}\label{eq:spinrep}
\sigma({\bf u}\wedge^g{\bf v})
\doteq\tfrac{1}{4}(\gamma_{\bf u}\gamma_{\bf v}
-\gamma_{\bf v}\gamma_{\bf u})
\end{equation}
and extending linearly over all of ${\it so}(g)$. This defines a Lie algebra representation. Moreover, equation \eqref{eq:gamma-xmetric} implies that $\sigma(L)$ is in the Lie algebra $so(\hat{g})$. That is, $\sigma(L)^T\hat{g}+\hat{g}\sigma(L)=0$.
Exponentiation in $so(g)$ gives the proper orthochronous subgroup ${\it SO}_+(g)$ of $O(g)$: if $L\in so(g)$, then $\exp(L)$ is in ${\it SO}_+(g)$. On the other hand, exponentiation of the spin Lie algebra representation gives the spin Lie group representation $\Sigma$ of ${\it SO}_+(g)$. In particular if $\Lambda=\exp(L)$, then $\Sigma(\Lambda)=\exp(\sigma(L))$. It should be noted that $\Sigma$ is actually a projective representation in that it is only defined up to sign. Specifically, if $\exp(L_1)=\exp(L_2)$, then $\exp(\sigma(L_1))=\pm\exp(\sigma(L_2))$.
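The sign ambiguity is concretely visible in a matrix representation (Dirac matrices are used here purely for illustration): a $2\pi$ rotation in the $12$--plane exponentiates to the identity on $T_\ast M$, but its spin representative exponentiates to $-I$:

```python
import numpy as np

def expm(M, terms=60):
    # Matrix exponential by Taylor series (adequate for the small norms here)
    out = np.eye(M.shape[0], dtype=complex)
    term = np.eye(M.shape[0], dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Gamma matrices for signature (-,+,+,+): i times the standard Dirac matrices.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z2 = np.zeros((2, 2), dtype=complex)
g1 = 1j*np.block([[Z2, s1], [-s1, Z2]])
g2 = 1j*np.block([[Z2, s2], [-s2, Z2]])

# L = e_1 wedge^g e_2 generates a rotation in the 1-2 plane of T*M.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
u, v = np.zeros((4, 1)), np.zeros((4, 1))
u[1], v[2] = 1.0, 1.0
L = (u @ v.T - v @ u.T) @ g
assert np.allclose(L.T @ g + g @ L, 0)            # L lies in so(g)
assert np.allclose(expm(2*np.pi*L), np.eye(4))    # 2*pi rotation = identity

sigma_L = 0.25*(g1 @ g2 - g2 @ g1)                # sigma(e_1 wedge^g e_2)
S = expm(2*np.pi*sigma_L)
print(np.allclose(S, -np.eye(4)))                 # -> True: the sign ambiguity
```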
The Lie group $SO_+(g)$ thus acts locally on ${\it Cl}_\ast M$: $\psi\mapsto\Sigma(\Lambda)\psi$. We will call this the {\bf spin action}. Technically, since $\Sigma$ is only a projective representation, we are actually acting on the (fiber--wise) projectivization ${\it PCl}_\ast M$, where two elements of ${\it Cl}_\ast M$ are identified if one is a nonzero scalar multiple of the other. This is implicit in the following. Using the techniques in \cite{me}, one obtains the well--known transformation law for gamma matrices.
\begin{theorem}
The extended metric is invariant under the spin action. Moreover, if $S\doteq\Sigma(\Lambda)$, then $S\gamma_{\bf v}S^{-1}=\gamma_{\Lambda{\bf v}}$ for all four--vectors fields ${\bf v}$.\qed
\end{theorem}
When we apply the spin action to ${\it Cl}_\ast M$, we have the transformation rules $\psi'=S\psi$ and $\hat{g}'=\hat{g}$. On the other hand, the spin action is not always compatible with the extended connection. Indeed, one computes that
$$\hat\nabla_\alpha S\psi
=(\partial_\alpha S+[\hat\Gamma_\alpha,S])\psi
+S\hat\nabla_\alpha\psi
$$
so that $\hat\nabla_\alpha\psi'=S\hat\nabla_\alpha\psi$ if and only if $\partial_\alpha S=[S,\hat\Gamma_\alpha]$. This is the case when $M$ is Minkowski space; more generally, if $g$ is constant, then $\partial_\alpha S=0$ and $\hat\Gamma_\alpha=0$.
\section{Variational formulas}
Let $\omega\doteq\sqrt{-\det(g)}$ denote the spacetime density. We may use the invariant scalars in Theorem \ref{thm:invscalars} to form the invariant Lagrangian
\begin{equation}\label{eq:lagrangian}
{\mathcal L}
\doteq\int_M(L_d+\mu L_m+\kappa L_g)\,dV
\end{equation}
where $dV$ is the spacetime volume element, $\mu$ and $\kappa$ are constants, and
$$L_m\doteq\omega\,\psi^T\,\hat{g}\,\psi
\quad\text{and}\quad
L_d\doteq\omega\,\psi^T\,\hat{g}\,D\psi
\quad\text{and}\quad
L_g\doteq\omega\,{\rm tr}(\gamma^\alpha\gamma^\beta\hat\Omega_{\alpha\beta}).
$$
We remark that we may also use $L_g=\omega R$, where $R$ is the scalar curvature of the (non--extended) connection $\Omega$ on $T_\ast M$. Both choices are equivalent, as we will see in section \ref{sec:gravity}. However, the use of the extended curvature is more amenable to minimal coupling, as discussed in section \ref{sec:mcouple}.
\subsection{Field variation}
As $L_g$ does not depend on $\psi$, we only need to compute the variations of $L_m$ and $L_d$ with respect to $\psi^I$, where $\psi=\psi^I {\bf e}_I$. Write $L_m=\omega\,\psi^I\hat{g}_{IJ}\psi^J$. The symmetry of $\hat{g}$ implies that $\delta L_m/\delta\psi^I=2\omega\,\hat{g}_{IJ}\psi^J$. We will abbreviate this as
\begin{equation}\label{eq:mass-var}
\frac{\delta L_m}{\delta\psi}
=2\omega\,\hat{g}\,\psi.
\end{equation}
We now compute the variation of $L_d$ with respect to $\psi$. We will show that $\delta L_d/\delta\psi^I=2\omega\hat{g}_{IJ}(D\psi)^J$. I.e.,
\begin{equation}\label{eq:dirac-var}
\frac{\delta L_d}{\delta\psi}
=2\omega\,\hat{g}\,D\psi.
\end{equation}
Write $L_d=\omega\psi^I\hat{g}_{IJ}\gamma_K^{\alpha J}(\psi_\alpha^K+\hat\Gamma_{\alpha L}^K\psi^L)$, where $\psi_\alpha\doteq\partial_\alpha\psi$. It is well--known that $\partial_\alpha\omega=\Gamma_{\beta\alpha}^\beta\omega$. Therefore,
\begin{align*}
\partial_\alpha\frac{\partial L_d}{\partial\psi_\alpha^I}
&=\partial_\alpha(\omega\psi^J\hat{g}_{JK}\gamma_I^{\alpha K})\\
&=\omega_{,\alpha}\psi^J\hat{g}_{JK}\gamma_I^{\alpha K}
+\omega\psi_\alpha^J\hat{g}_{JK}\gamma_I^{\alpha K}
+\omega\psi^J\hat{g}_{JK,\alpha}\gamma_I^{\alpha K}
+\omega\psi^J\hat{g}_{JK}\gamma_{I,\alpha}^{\alpha K}\\
&=\Gamma_{\epsilon\alpha}^\epsilon\omega\psi^J\hat{g}_{JK}\gamma_I^{\alpha K}
+\omega\psi_\alpha^J\hat{g}_{JK}\gamma_I^{\alpha K}
+\omega\psi^J(\hat\Gamma_{\alpha J}^L\hat{g}_{LK}
+\hat{g}_{JL}\hat\Gamma_{\alpha K}^L)\gamma_I^{\alpha K}\\
&\quad\quad
+\omega\psi^J\hat{g}_{JK}(\gamma_L^{\alpha K}\hat\Gamma_{\alpha I}^L
-\hat\Gamma_{\alpha L}^K\gamma_I^{\alpha L}
-\Gamma_{\alpha\epsilon}^\alpha\gamma_I^{\epsilon K})\\
&=\omega\psi_\alpha^J\hat{g}_{JK}\gamma_I^{\alpha K}
+\omega\psi^J\hat\Gamma_{\alpha J}^L\hat{g}_{LK}\gamma_I^{\alpha K}
+\omega\psi^J\hat{g}_{JK}\gamma_L^{\alpha K}\hat\Gamma_{\alpha I}^L
\end{align*}
Here we have made use of equations \eqref{eq:metric-compat} and \eqref{eq:gamma-xconn}. The variation of $L_d$ is then
\begin{align*}
\frac{\delta L_d}{\delta\psi^I}
&=\frac{\partial L_d}{\partial\psi^I}
-\partial_\alpha\frac{\partial L_d}{\partial\psi_\alpha^I}\\
&=\omega\hat{g}_{IJ}\gamma_K^{\alpha J}(\psi_\alpha^K+\hat\Gamma_{\alpha L}^K\psi^L)
+\omega\psi^L\hat{g}_{LJ}\gamma_K^{\alpha J}\hat\Gamma_{\alpha I}^K\\
&\quad\quad
-(\omega\psi_\alpha^J\hat{g}_{JK}\gamma_I^{\alpha K}
+\omega\psi^J\hat\Gamma_{\alpha J}^L\hat{g}_{LK}\gamma_I^{\alpha K}
+\omega\psi^J\hat{g}_{JK}\gamma_L^{\alpha K}\hat\Gamma_{\alpha I}^L)\\
&=\omega\hat{g}_{IJ}\gamma_K^{\alpha J}(\psi_\alpha^K+\hat\Gamma_{\alpha L}^K\psi^L)
-(\omega\psi_\alpha^J\hat{g}_{JK}\gamma_I^{\alpha K}
+\omega\psi^J\hat\Gamma_{\alpha J}^L\hat{g}_{LK}\gamma_I^{\alpha K})\\
&=\omega\hat{g}_{IJ}\gamma_K^{\alpha J}(\psi_\alpha^K+\hat\Gamma_{\alpha L}^K\psi^L)
+\omega\psi_\alpha^J\gamma_J^{\alpha K}\hat{g}_{KI}
+\omega\psi^J\hat\Gamma_{\alpha J}^L\gamma_L^{\alpha K}\hat{g}_{KI}\\
&=\omega\hat{g}_{IJ}\gamma_K^{\alpha J}(\psi_\alpha^K+\hat\Gamma_{\alpha L}^K\psi^L)
+\omega\gamma_J^{\alpha K}(\psi_\alpha^J
+\hat\Gamma_{\alpha L}^J\psi^L)\hat{g}_{KI}\\
&=\omega\hat{g}_{IJ}(D\psi)^J+\omega(D\psi)^K\hat{g}_{KI}
\end{align*}
where we have used equation \eqref{eq:gamma-xmetric}. As $\hat{g}$ is symmetric, equation \eqref{eq:dirac-var} follows.
The variation of ${\mathcal L}$ with respect to $\psi$ therefore leads to the (real) Dirac equation
\begin{equation}\label{eq:rdirac}
D\psi+\mu\psi=0,
\quad\text{where}\quad
D\psi=\gamma^\alpha(\partial_\alpha\psi+\hat\Gamma_\alpha\psi).
\end{equation}
\subsection{Metric variation}
We now vary the individual summands in the Lagrangian \eqref{eq:lagrangian} with respect to the metric. As expected, varying the gravity term $L_g$ will give the Einstein tensor. Varying the mass and Dirac terms $L_m$ and $L_d$ will give a source term for the Einstein equation.
\subsubsection{Gravity term}\label{sec:gravity}
To compute the metric variation of $L_g$, a computer algebra system was used. In the case when the metric is diagonal, the value of $L_g=\omega\,{\rm tr}(\gamma^\alpha\gamma^\beta\hat\Omega_{\alpha\beta})$ is found to be a scalar multiple of the scalar curvature of $M$. Specifically,
$$L_g=-8\omega R$$
where $R$ is the scalar curvature of $M$. This equality must then hold for all metrics. Standard results from general relativity then imply that
\begin{equation}\label{eq:lgvar}
\frac{\delta L_g}{\delta g_{\alpha\beta}}
=-8\omega\,G^{\alpha\beta}
\end{equation}
with $G^{\alpha\beta}=R^{\alpha\beta}-\tfrac{1}{2}g^{\alpha\beta}R$ the Einstein tensor, and $R_{\alpha\beta}$ the Ricci curvature tensor.
\subsubsection{Mass and Dirac terms}
Recall that $\partial\omega/\partial g_{\alpha\beta}=\tfrac{1}{2}g^{\alpha\beta}\omega$. As $L_m=\omega\psi^T\hat{g}\psi$, we consequently have
$$\frac{\delta L_m}{\delta g_{\alpha\beta}}
=\frac{\partial L_m}{\partial g_{\alpha\beta}}
=\omega\psi^TA^{\alpha\beta}\psi
\quad\text{where}\quad
A^{\alpha\beta}
\doteq\tfrac{1}{2}g^{\alpha\beta}\hat{g}
+\frac{\partial\hat{g}}{\partial g_{\alpha\beta}}
$$
Now consider $L_d=\omega\psi^T\hat{g}\gamma^\rho(\partial_\rho\psi+\hat\Gamma_\rho\psi)$, which is a function of the metric and its first order derivatives. Set $g_{\alpha\beta\epsilon}\doteq\partial_\epsilon\,g_{\alpha\beta}$. One computes that
$$\frac{\delta L_d}{\delta g_{\alpha\beta}}
=\frac{\partial L_d}{\partial g_{\alpha\beta}}
-\partial_\epsilon\frac{\partial L_d}{\partial g_{\alpha\beta\epsilon}}
=\omega\left(
\psi^TA^{\alpha\beta}D\psi
+\psi^TP^{\alpha\beta}\psi
+\psi^TQ^{\alpha\beta\epsilon}\nabla_\epsilon\psi\right)
$$
with $A^{\alpha\beta}$ as above, and
\begin{align*}
P^{\alpha\beta}
&\doteq
\hat{g}\gamma^\nu\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta}}
-\Gamma_{\mu\epsilon}^\mu\hat{g}\gamma^\nu
\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta\epsilon}}
-\hat{g}_{,\epsilon}\gamma^\nu
\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta\epsilon}}
-\hat{g}\gamma_{,\epsilon}^\nu
\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta\epsilon}}
-\hat{g}\gamma^\nu
\partial_\epsilon\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta\epsilon}}\\
&\quad\quad
+\hat\Gamma_\epsilon^T\hat{g}\gamma^\nu
\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta\epsilon}}
+\hat{g}\gamma^\nu\frac{\partial\hat\Gamma_\nu}
{\partial g_{\alpha\beta\epsilon}}
\hat\Gamma_\epsilon\\
Q^{\alpha\beta\epsilon}
&\doteq
\hat{g}\frac{\partial\gamma^\epsilon}{\partial g_{\alpha\beta}}
-\frac{\partial\hat\Gamma_\nu^T}{\partial g_{\alpha\beta\epsilon}}
\gamma^{\nu T}\hat{g}
-\hat{g}\gamma^\nu
\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta\epsilon}}
\end{align*}
If we use equations \eqref{eq:metric-compat} and \eqref{eq:gamma-xconn} in the above expression for $P^{\alpha\beta}$, and the fact that $\gamma^\nu$ and $\hat{g}$ do not depend on the first derivatives of the metric in the expression for $Q^{\alpha\beta\epsilon}$, we may rewrite the previous equations in the form
\begin{equation}\label{eq:var1}
\begin{aligned}
P^{\alpha\beta}&=\hat{g}\tilde{P}^{\alpha\beta}\\
\tilde{P}^{\alpha\beta}
&=\gamma^\nu\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta}}
+(\Gamma_{\epsilon\mu}^\nu\gamma^\mu-\Gamma_{\mu\epsilon}^\mu\gamma^\nu)
\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta\epsilon}}
-\gamma^\nu
\partial_\epsilon\frac{\partial\hat\Gamma_\nu}{\partial g_{\alpha\beta\epsilon}}
+\gamma^\nu\bigl[\frac{\partial\hat\Gamma_\nu}
{\partial g_{\alpha\beta\epsilon}},
\hat\Gamma_\epsilon\bigr]
\end{aligned}
\end{equation}
\begin{align}
Q^{\alpha\beta\epsilon}&=
\hat{g}\frac{\partial\gamma^\epsilon}{\partial g_{\alpha\beta}}
-\frac{\partial}{\partial g_{\alpha\beta\epsilon}}
\bigl(\hat{g}\gamma^\nu\hat\Gamma_\nu
+(\hat{g}\gamma^\nu\hat\Gamma_\nu)^T\bigr)\label{eq:var2}
\end{align}
Unfortunately, manual computation of $P^{\alpha\beta}$ and $Q^{\alpha\beta\epsilon}$ does not appear to be practical. A computer algebra system was used with a metric that is diagonal. The quantity $P^{\alpha\beta}$ is found to be identically zero:
$$P^{\alpha\beta}=0.$$
Note that if we {\em assume} that $P^{\alpha\beta}$ is a tensor, then this implies that $P^{\alpha\beta}$ is zero, regardless of whether the metric is diagonal or not.
On the other hand, the quantity $Q^{\alpha\beta\epsilon}$ is not trivial, and for a diagonal metric is given by
\begin{equation}\label{eq:Q}
Q^{\alpha\beta\epsilon}
=-\tfrac{1}{2}\hat{g}\,K(\alpha\beta\epsilon)\,\gamma^\alpha\gamma^\beta\gamma^\epsilon
\quad\text{for $\alpha\leq\beta$}
\end{equation}
and $Q^{\beta\alpha\epsilon}=Q^{\alpha\beta\epsilon}$. Here $K(\alpha\beta\epsilon)$ is the diagonal matrix defined by the rules
\begin{equation}\label{eq:K}
\begin{aligned}
K(\alpha\alpha\alpha)
&\doteq I\quad\text{($16\times 16$ identity)}\\
K(\alpha\alpha\epsilon)
&\doteq\hat{S}(\alpha\epsilon)\\
K(\alpha\beta\alpha)
&\doteq -I\\
K(\alpha\beta\beta)
&\doteq -K(\alpha\alpha\beta)\\
K(\alpha\beta\epsilon)
&\doteq -S(\alpha\beta\epsilon)
\end{aligned}
\end{equation}
for $\alpha,\beta,\epsilon$ taking distinct values with $\alpha<\beta$. The map $S(\alpha\epsilon)$ is the reflection within the 2--plane in ${\mathbb R}^4$ spanned by the vectors ${\bf e}_\alpha$ and ${\bf e}_\epsilon$. I.e.,
$$S(\alpha\epsilon){\bf e}_\alpha=-{\bf e}_\alpha
\quad\text{and}\quad
S(\alpha\epsilon){\bf e}_\epsilon=-{\bf e}_\epsilon
$$
and 4--vectors in the complementary 2--plane are fixed. For example, $S(01)$ is the $4\times 4$ matrix with $S(01){\bf e}_0=-{\bf e}_0$, $S(01){\bf e}_1=-{\bf e}_1$, $S(01){\bf e}_2={\bf e}_2$, and $S(01){\bf e}_3={\bf e}_3$. $K(001)$ is then the extension of this map in the sense of equation \eqref{eq:xaction}. For instance,
$$K(001){\bf e}_{02}
=\hat{S}(01){\bf e}_{02}
=\bigl(S(01){\bf e}_0\bigr)\bigl(S(01){\bf e}_2\bigr)
=(-{\bf e}_0)({\bf e}_2)=-{\bf e}_{02}.
$$
On the other hand, $S(\alpha\beta\epsilon)$ is the reflection within the 4--plane of ${\it Cl}({\mathbb R}^4)$ spanned by the basis vectors ${\bf e}_{\alpha\epsilon}$, ${\bf e}_{\alpha\epsilon^\ast}$, ${\bf e}_\beta$, and ${\bf e}_{\beta^\ast}$. The notation $I^\ast$ denotes the index set complementary to $I$ in $\{0,1,2,3\}$. E.g., $02^\ast=13$ and $1^\ast=023$. Explicitly,
\begin{align*}
S(\alpha\beta\epsilon){\bf e}_{\alpha\epsilon}
&=-{\bf e}_{\alpha\epsilon},
& S(\alpha\beta\epsilon){\bf e}_{\alpha\epsilon^\ast}
&=-{\bf e}_{\alpha\epsilon^\ast},\\
S(\alpha\beta\epsilon){\bf e}_\beta
&=-{\bf e}_\beta,
& S(\alpha\beta\epsilon){\bf e}_{\beta^\ast}
&=-{\bf e}_{\beta^\ast}
\end{align*}
while vectors in the complementary $12$--plane are fixed. Thus for example, $S(012)$ is the $16\times 16$ linear map such that
\begin{align*}
S(012){\bf e}_{02}=-{\bf e}_{02},\,
S(012){\bf e}_{13}=-{\bf e}_{13},\,
S(012){\bf e}_1=-{\bf e}_1,\,
S(012){\bf e}_{023}=-{\bf e}_{023}
\end{align*}
and $S(012)$ leaves all other basis vectors of ${\it Cl}({\mathbb R}^4)$ fixed.
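To make the sign conventions in equation \eqref{eq:K} concrete, note that $K(012)=-S(012)$, so the explicit description of $S(012)$ above gives, for instance,
$$K(012){\bf e}_{02}={\bf e}_{02}
\quad\text{and}\quad
K(012){\bf e}_{03}=-{\bf e}_{03},$$
since $S(012)$ negates ${\bf e}_{02}$ and fixes ${\bf e}_{03}$.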
\subsubsection{Coupled Einstein equation}
Taking the above computations together, we find that the variation of ${\mathcal L}$ with respect to $g_{\alpha\beta}$ leads to the Einstein equation with source term:
$$8\kappa G^{\alpha\beta}
=\psi^TA^{\alpha\beta}(D\psi+\mu\psi)
+\psi^TQ^{\alpha\beta\epsilon}\nabla_\epsilon\psi
$$
In particular, if $\psi$ satisfies the Dirac equation $D\psi+\mu\psi=0$, then we have
\begin{equation}\label{eq:einstein}
G_{\alpha\beta}
=\tfrac{1}{8\kappa}\psi^T{Q_{\alpha\beta}}^\epsilon\nabla_\epsilon\psi
\end{equation}
after lowering indices. We note that if we add the summand $L_c=\lambda\omega$ to the Lagrangian in \eqref{eq:lagrangian}, we will obtain a cosmological term in equation \eqref{eq:einstein}.
\section{Minimal coupling}\label{sec:mcouple}
To incorporate forces other than gravity into our framework, we use minimal coupling. That is, we replace the extended connection $\hat\Gamma_\alpha$ in equation \eqref{eq:rdirac} with the {\bf total connection}
\begin{equation}\label{eq:totcon}
C_\alpha=\hat\Gamma_\alpha+\theta_\alpha.
\end{equation}
Here the $\theta_\alpha$ are $16\times 16$ matrices that correspond to the additional force. Under a local change of basis $B$ for $T_\ast M$, they must transform according to the rule
\begin{equation}\label{eq:xforce}
\theta_\alpha'=(B^{-1})_\alpha^\beta\hat{B}\theta_\beta\hat{B}^{-1}.
\end{equation}
Indeed, the same argument that was used to deduce the transformation rule for $\hat\Gamma_\alpha$, equation \eqref{eq:xchristoffel}, implies that $C_\alpha$ transforms in exactly the same way as $\hat\Gamma_\alpha$. The transformation rule for $\theta_\alpha$ then follows.
The matrices $\theta_\alpha$ must satisfy some additional properties. We require that the field variation of the sum of Lagrangian densities $L_d+L_m$ yield the minimally coupled Dirac equation. That is, if we use the total connection $C_\alpha$ in place of the extended connection, then we should still arrive at equation \eqref{eq:dirac-var} when we vary $L_d$ with respect to the field $\psi$. By examining the computation that follows equation \eqref{eq:dirac-var}, we find that the two conditions
$$C_\alpha^T\hat{g}+\hat{g}C_\alpha
=\partial_\alpha\hat{g}
\quad\text{and}\quad
[\gamma^\alpha,C_\beta]
=\partial_\beta\gamma^\alpha+\Gamma_{\beta\epsilon}^\alpha\gamma^\epsilon
$$
are sufficient to do this. On the other hand, the extended connection already satisfies equations \eqref{eq:metric-compat} and \eqref{eq:gamma-xconn}. So the matrices $\theta_\alpha$ must satisfy
\begin{equation}\label{eq:force}
\theta_\alpha^T\hat{g}+\hat{g}\theta_\alpha=0
\quad\text{and}\quad
[\gamma^\alpha,\theta_\beta]=0.
\end{equation}
The distinct types of matrices that satisfy equation \eqref{eq:force} will be examined in a separate article.
We remark that it is not clear how to incorporate non--gravitational forces into the Lagrangian, equation \eqref{eq:lagrangian}. The ``minimal'' choice would be to set
$$L_g=\omega\,{\rm tr}(\gamma^\alpha\gamma^\beta F_{\alpha\beta})$$
where $F_{\alpha\beta}$ is the curvature of the total connection. However, in flat spacetime where the extended connection is trivial, the fact that $\theta_\alpha$ commutes with the gamma matrices will imply that $L_g=0$. To avoid this, we may make the choice
$$L_g=\omega\,{\rm tr}(F_{\alpha\beta}
\,F^{\alpha\beta}).$$
Another choice is
$$L_g=\omega\,{\rm tr}_2(\gamma^\alpha\gamma^\beta
F_{\alpha\beta})$$
where ${\rm tr}_2$ is the 2--trace mentioned after theorem \ref{thm:invscalars}. In the case of a flat spacetime, these two choices are equivalent. Also possible, but even more computationally formidable, is
\begin{align*}
L_g
&=\omega\,\det(I+\tau\gamma^\alpha\gamma^\beta F_{\alpha\beta})\\
&=\omega
+\omega\tau\,{\rm tr}(\gamma^\alpha\gamma^\beta F_{\alpha\beta})
+\omega\tau^2\,{\rm tr}_2(\gamma^\alpha\gamma^\beta F_{\alpha\beta})
+\cdots
+\omega\tau^{16}\,\det(\gamma^\alpha\gamma^\beta F_{\alpha\beta})
\end{align*}
where $\tau$ is a constant.
\section{Introduction}
Over the years, the size of language models has grown exponentially~\citep{2018Amodei,2020arXiv200705558T,bender_gebru_2021}. Additional parameters have improved quality on a variety of downstream NLP tasks, but drive up the cost of training~\citep{2014Horowitz,strubelL2019energy,patterson2021carbon} and increase the latency and memory footprint at inference time~\citep{warden2019tinyml, Samala_2018}.
Extending state-of-the-art language models to low-resource languages requires addressing what we term the \textit{low-resource double bind}. Low-resourcedness goes beyond mere data availability and reflects systemic issues in society~\cite{afocus, nekoto-etal-2020-participatory}.
Classifications of languages with respect to ``resourcedness'' have focused on the relative availability of data ~\citep{zoph-etal-2016-transfer,joshi-etal-2020-state}, and the concentration of NLP researchers from these regions or the over-fitting of model design around a small set of high resource languages~\citep{cieri-etal-2016-selection,nekoto-etal-2020-participatory}.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{images/cost_language_resources.png}
\caption{Cost of mobile data by country per language rank according to the taxonomy by~\citet{joshi-etal-2020-state}.}
\label{fig:double_resource}
\end{figure}
Less well documented and explored is the over-indexing of low-resource languages in ecosystems which simultaneously present severe computational resource constraints. In Fig.~\ref{fig:double_resource} we plot 22 languages grouped by the availability of labelled and unlabelled data as proposed by~\citet{joshi-etal-2020-state} against the cost of 1 GB of data as a percentage of monthly income. Each language is mapped to the country with the most speakers. The cost of data is a valuable proxy for the cost of access to technology in an ecosystem~\citep{oughton2021}. Here, this visibly co-occurs with the limitations in available data for different languages.
In computationally constrained environments, access to machine learning technology depends upon optimizing jointly for both model performance and compactness. Pruning and quantization are widely applied techniques for compressing deep neural networks prior to deployment, as compressed models require less memory, consume less energy and have lower inference latency \citep{2017Andre,8364435,sun_computation_sparse}. To date, evaluations of the merits and trade-offs incurred by compression have overwhelmingly centered on settings where the data is relatively abundant~\citep{2019arXiv190209574G,Li2020LearningLT,Hou2020DynaBERTDB,Chen2021EarlyBERTEB,Bai2020BinaryBERTPT,tessera2021gradients}.
In this work, we \emph{instead} ask how these design choices trade-off with performance in data-limited regimes typical of low resource languages. We conduct large scale experiments on Neural Machine Translation (NMT) models trained to translate between English and three low resource African languages (Yoruba, Igbo and Hausa) and one high resourced language (German). We compare performance across models independently trained to very different levels of sparsity --- ranging from 50 \% to 98 \% --- and evaluate performance on the original distribution, in addition to establishing sensitivity to distribution shift across multiple corpora.
Recent work restricted to the computer vision domain has found that sparse models with comparable top-line performance metrics diverge considerably in behavior on the long-tail of the distribution and are sensitive to distribution shifts ~\citep{hooker2020compressed,liebenwein}. Here, we rigorously characterize the impact of sparsity on learned decision boundaries in NMT. In addition to held-out set BLEU, we measure sub-group performance on sentences grouped by prototypicality and study generalization properties over test corpora with different out-of-vocabulary ratios. We also evaluate whether humans prefer translations from sparse or dense models.
Our contributions can be enumerated as follows:
\begin{enumerate}
\itemsep0em
\item We introduce the term \textit{low-resource double-bind} and develop an extensive experimental framework to understand the impact of compression in a data-limited regime across 4 languages and 5 different data sets.
\item We find that models are \emph{tolerant of high levels of sparsity} while retaining BLEU performance and also human-judged translation quality. This holds until extremely high levels of sparsity (95\%--99\% of all weights removed) where a severe decline in BLEU is notable.
\item There is a more pronounced degradation when evaluation includes less frequent input patterns. On closer investigation, we find that \emph{sparsity disproportionately degrades performance on the long-tail of the data distribution}.
\item Curbing memorization of the long-tail can provide unexpected benefits. In a data-limited regime, we find that \emph{sparsity benefits generalization to out-of-distribution corpora}.
\end{enumerate}
\paragraph{Implications of Our Work} Understanding the impact of compression on low-resource languages is key to making technology accessible and inclusive. Our work suggests that compression in these settings alters generalization in ways that can be beneficial and go beyond merely fulfilling deployment constraints. A challenge in low-resource NLP is that the existing publicly available corpora often come from very specific domains, such as missionary websites or translations of religious texts. These sources do not adequately reflect the reality of the potential applications of NLP technologies, and are rarely sufficient for deployment~\citep{tigrinya,tico,congolese}. Thus, a task of great interest is establishing what model design choices can lead to generalization properties that extend beyond the immediate task at hand. Our work suggests that sparsity can play an important role in aiding generalization by curbing the memorization of rare long-tail instances.
\section{Methodology}
Addressing the low-resource double bind requires a careful setup of experiments to reflect the realities of low-resource translation. In particular, we want to control the effects of (1) network sparsity, (2) training data size, (3) target language, and (4) domain shifts.
In this work we focus on pruning, a widely favored compression technique due to remarkably high levels of compression that can be achieved while retaining top-line performance \citep{tgale_shooker_2019}. Pruning typically involves three separate stages: 1) training a dense model, 2) progressively removing a subset of weights estimated to be unimportant, and 3) continuing to train the smaller sparse network for a certain number of steps to recoup performance ~\citep{248452,blalock2020state}.
Pruning is the subject of considerable research and numerous techniques have been proposed, which differ in how weights are identified for removal and the schedule for introducing sparsity/allowing recovery ~\citep{Cun90optimalbrain,1993optimalbrain,Strom97sparseconnection,2017l0_reg,2016abigail,evci2019rigging, 2017Narang}.
The development of specialized software kernels has enabled the acceleration of sparse networks on traditional hardware \citep{gale2020sparse,elsen2019fast,sparse_tensor_core} with new generations of hardware directly facilitating sparse training~\citep{sparse_tensor_core}.
State-of-the-art pruning techniques can achieve a far higher level of compression and performance than simply using a smaller dense network \citep{to-prune-or-not, rethinking_model_size_li}. In our setting, a 90\% sparse base transformer greatly outperforms a tiny dense one across all the languages despite having a fraction of the parameters (14M vs 4.6M) (Appendix Table~\ref{tab:parameter_count}).
\begin{figure*}[ht!]
\centering
\vskip 0.15in
\begin{small}
\begin{sc}
\begin{subfigure}{0.28\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/jw300_global_test_full}
\caption{Global Test \texttt{Full}}\label{subfig:global_test_full}
\end{subfigure}
\hspace{1em}%
\begin{subfigure}{0.28\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_jw300globaltestbleu}
\caption{Global Test \texttt{Limited}}\label{subfig:global_test_limited}
\end{subfigure}
\hspace{1em}%
\begin{subfigure}{0.28\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_jw300randomtestbleu}
\caption{Random Test \texttt{Limited}}
\end{subfigure}
\\
\begin{subfigure}{0.28\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/jw300_global_test_change_in_bleu_full}
\caption{Global Test \texttt{Full}}
\end{subfigure}
\hspace{1em}%
\begin{subfigure}{0.28\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/jw300_global_test_change_in_bleu_limited}
\caption{Global Test \texttt{Limited}}
\end{subfigure}
\hspace{1em}%
\begin{subfigure}{0.28\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/jw300_random_test_change_in_bleu}
\caption{Random Test \texttt{Limited}}\label{fig:randomtest}
\end{subfigure}
\end{sc}
\end{small}
\caption{
Impact of pruning on BLEU performance across languages, sparsity levels and training data regimes. We evaluate test set performance on both a \texttt{Global} test set designed around common phrases to allow comparability between data corpora, and a \texttt{Random} test set with sentences sampled at random. \textbf{Top row:} Absolute BLEU by test set and sample size. \textbf{Bottom row:} Change in BLEU relative to the dense (0\% sparse) model.
\label{fig:degradation}
\end{figure*}
\subsection{Magnitude Pruning}\label{magnitude pruning technique}
We use magnitude pruning~\citep{to-prune-or-not} to introduce sparsity across all experiment variants. It consistently achieves comparable or better results than competing state-of-the-art approaches on large-scale benchmarks of computer vision and language models~\citep{2019arXiv190209574G} and is widely used in practice due to its ease of implementation. Magnitude pruning estimates weight importance as the absolute weight value, and removes the lowest-magnitude weights according to a pre-specified schedule that determines the begin and end training steps, and the frequency of pruning updates, across which sparsity is introduced.
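The magnitude criterion itself is simple enough to sketch in a few lines. The following is an illustrative NumPy version, not the tensor2tensor implementation used in our experiments (which interleaves pruning updates with continued training according to the schedule):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    Weight importance is estimated as the absolute weight value; the
    lowest-magnitude weights are removed (set to zero).
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)  # number of weights to remove
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # The k-th smallest magnitude serves as the pruning threshold.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))           # stand-in for one weight matrix
pruned, mask = magnitude_prune(w, 0.9)  # 90% end sparsity
```

With continuous-valued weights, ties at the threshold are vanishingly unlikely, so the realized sparsity matches the target up to rounding.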
Magnitude pruning allows for the pre-specification of desired sparsity such that we can train models from random initialization to precise levels of end sparsity. We carry out extensive experiments and train networks independently for each language to end sparsities of 0--98\%, where 98\% designates a network with 98\% of the weights removed by the end of training and 0\% is a dense network (no weights removed).
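The pruning schedule of \citet{to-prune-or-not} ramps sparsity from an initial to a final value between a begin and an end training step; a sketch, assuming their standard cubic interpolation:

```python
def sparsity_at_step(step, begin_step, end_step, final_sparsity,
                     initial_sparsity=0.0):
    """Cubic schedule: sparsity rises quickly at first, then levels off,
    giving the network time to recover as the final weights are removed."""
    if step < begin_step:
        return initial_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - begin_step) / (end_step - begin_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - progress) ** 3

# Target sparsity at a few points of a 90% run pruned between steps 1k and 5k.
schedule = [round(sparsity_at_step(t, 1000, 5000, 0.9), 4)
            for t in range(0, 6001, 1000)]
```

The pruning mask is then recomputed at regular intervals so that the realized sparsity tracks this target.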
\subsection{Languages}
We validate the effectiveness of the magnitude-based pruning method in NMT models trained to translate from English into German (\texttt{de}), Yoruba (\texttt{yo}), Igbo (\texttt{ig}) and Hausa (\texttt{ha}). While German as a high-resource language serves as a point of comparison to previous works, Yoruba, Igbo and Hausa represent three of the highest-resourced African languages with (near-)sufficient resources for reliable MT experimentation, i.e. multiple publicly-available parallel corpora. \citet{joshi-etal-2020-state} classify Yoruba and Hausa as ``Hopeful'' in terms of available NLP resources and research, whereas Igbo is slightly lower-resourced and classified as ``Scraping-by''. All constitute important test beds for developing technologies that improve the treatment of low-resource languages, since they each have more than 50 million native speakers. Yoruba and Igbo belong to the Niger-Congo language family and use diacritics that pose challenges for text-based NLP~\citep{orifeDiacritics,OkwuGbe}. Hausa is a Chadic language which is part of the Afroasiatic phylum. It features complex pluralization and agglutination.
\begin{table}
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l| c |c c c c c c}
\toprule
& Training & \multicolumn{5}{c}{Distribution Shift Test} \\
& \textbf{JW300} & \textbf{Gnome} & \textbf{Ubuntu} & \textbf{Flores} &\textbf{ParaCrawl} & \textbf{Tanzil} & \textbf{Tatoeba} \\
\midrule
\texttt{de} & 1.9M & 5,963 & 11,161 & 1012 & 2,000 & 2,000 & 10,145 \\
\texttt{yo} & 414.0k & 1,467 & 120 & 1012 & - & - & - \\
\texttt{ig} & 414.9k & 3,173 & 608 & 1012 & 2,000 & - & - \\
\texttt{ha} & 211.9k & 998 & 219 & 1012 & 2,000 & 2,000 & - \\
\bottomrule
\end{tabular}%
}
\caption{Number of sentences in each parallel corpora we evaluate. For ParaCrawl and Tanzil, we sample $2000$ sentences from the full dataset.}
\label{tab:data}
\end{table}
\subsection{Training and Test Data}
\paragraph{JW300} Training data for all languages is obtained from the JW300 parallel corpus ~\citep{agic-vulic-2019-jw300}, since it is the largest source of data that covers all languages we evaluate. It comprises more than 300 languages of which 101 are African, and is collected from religious magazines by Jehovah’s Witnesses (JW) published on \url{jw.org}.
\paragraph{Pre-processing} Parallel sentences are tokenized and encoded using BPE \citep{sennrich-etal-2016-neural}, resulting in a shared vocabulary of \SI{4096}{} tokens. Sentences are batched together with a maximum sequence length of \SI{64}{}. For each training batch, the approximate number of source/target tokens is \SI{2048}{}. We compute detokenized and case-sensitive BLEU using a helper script in tensor2tensor \citep{tensor2tensor} equivalent to SacreBLEU \citep{post-2018-call}.
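For reference, subword vocabularies of this kind are learned with the BPE merge loop of \citet{sennrich-etal-2016-neural}. The following toy sketch reproduces the algorithm from that paper on a four-word corpus; the shared 4096-token vocabulary used here is learned the same way, only at scale:

```python
import collections
import re

def get_stats(vocab):
    """Count frequencies of adjacent symbol pairs over the corpus."""
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols) - 1):
            pairs[symbols[i], symbols[i + 1]] += freq
    return pairs

def merge_vocab(pair, v_in):
    """Replace every occurrence of the symbol pair with a merged symbol."""
    bigram = re.escape(' '.join(pair))
    pattern = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    return {pattern.sub(''.join(pair), word): freq for word, freq in v_in.items()}

# Words are pre-split into characters, with `</w>` marking the word end.
vocab = {'l o w </w>': 5, 'l o w e r </w>': 2,
         'n e w e s t </w>': 6, 'w i d e s t </w>': 3}
merges = []
for _ in range(10):
    pairs = get_stats(vocab)
    best = max(pairs, key=pairs.get)  # most frequent adjacent pair
    vocab = merge_vocab(best, vocab)
    merges.append(best)
```

The learned merge operations are then applied greedily, in order, to segment unseen text into subwords.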
\paragraph{Full vs limited data regime} For our experiments, we train on these datasets in two settings: First, with all data available for each of the languages, sizes listed in Table~\ref{tab:data}. In this setting, the dataset sizes range from 212k for Hausa to 1.9M for German.
Our second setting holds constant the amount of data available by sampling a uniform number of sentence pairs for each language. We randomly sample 200k sentences from the train set of each language, limited by the smallest corpus Hausa which consists of approximately 210k sentences. We refer to these settings in experiment discussion as \texttt{Full} and \texttt{Limited}.
\paragraph{Validation \& testing}
The need for multiple test sets to capture performance on a variety of downstream conditions has already been recommended by recent work~\citep{sogaard-etal-2021-need,lazaridou2021pitfalls}.
The JW300 test sets were constructed and released by~\citet{nekoto-etal-2020-participatory} to contain the most frequent sentences in the JW300 corpus across African languages and were filtered from the training corpus. This construction ensures that test sets across languages contain similar content, which leads to increased comparability. However, this cross-lingual selection may introduce a bias towards frequent sentences, and under-represent language-specific outliers.
Only measuring performance on frequent sentences across languages may be a particular concern in evaluating the impact of sparse models, as prior work has shown that the introduction of sparsity disproportionately impacts the long-tail of the data distribution ~\citep{hooker2020compressed,hooker2020characterising}.
To capture possible disparate impact on the long-tail, we also sample at random from the remainder of the data to craft a secondary test set (as has been done for validation). In the results section, we refer to the ~\citet{nekoto-etal-2020-participatory} test data as the \texttt{Global} test set and random sample as the \texttt{Random} test set. A comparison of differences in performance between \texttt{Global} and \texttt{Random} test sets provides insights into how sparsity impacts generalization performance on text which is common relative to a more typical Zipf distribution with long-tail features~\citep{zipf1999psycho}.
\subsection{Sensitivity to Distribution Shift}
We select corpora which differ from the training distribution in domain (ranging from everyday sentences to technical documentation), sentence length and OOV rate (ranging from 2.68\% to 20.42\%). Given these large deviations in statistics from the JW300 training corpus, our expectation is \emph{not} that the model preserves performance but rather to understand the sensitivity of sparse models to distribution shift \textit{relative} to dense models.
Our selection of corpora is also guided by the size of public data sets that cover Yoruba, Hausa, Igbo and German. When the test set is small, reliability in BLEU scores between models and inferred conclusions may be compromised~\citep{card-etal-2020-little}. To estimate the impact that limitation in test size can have on our results, we simulate the variability of BLEU under different amounts of test data in Figure~\ref{fig:subset}. As can be seen, a sample size of at least $100$ reasonably reduces variance in BLEU, so we only investigate out-of-distribution sensitivity with datasets of at least that size.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/bleu_subsets.png}
\caption{Mean BLEU scores (shaded: $\pm$ standard variation) for the dense en-de models on subsets of the Tatoeba data.}
\label{fig:subset}
\end{figure}
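The subset simulation behind Figure~\ref{fig:subset} can be sketched as follows. Random per-sentence scores serve as stand-ins here (corpus-level BLEU is not a mean of sentence scores, so this is illustrative only); the point is that the spread of the corpus estimate shrinks as the subset grows:

```python
import random
import statistics

def subset_score_std(scores, subset_size, n_draws=200, seed=0):
    """Std. deviation of a corpus-level estimate across random test subsets."""
    rng = random.Random(seed)
    means = [statistics.mean(rng.sample(scores, subset_size))
             for _ in range(n_draws)]
    return statistics.stdev(means)

data_rng = random.Random(1)
sentence_scores = [data_rng.uniform(0.0, 100.0) for _ in range(10_000)]
std_small = subset_score_std(sentence_scores, 25)    # tiny test set
std_large = subset_score_std(sentence_scores, 400)   # a few hundred sentences
```

A subset of a few hundred sentences already cuts the variance several-fold, which motivates the 100-sentence floor adopted above.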
\subsection{Datasets evaluated}
The domain of each dataset considered is characterized below. Additionally, we include statistics for 1) corpus size in Table~\ref{tab:data}, and 2) out-of-vocabulary rates (OOV) and average source lengths in Table~\ref{tab:oov}:
\begin{itemize}
\item \textbf{Gnome} is a dataset in the technical domain that contains
187 language pairs derived from the translation of GNOME documentation.\footnote{\url{https://www.gnome.org/}} The size of test sets for this corpus ranges between 998 (Hausa) and 5,963 (German). ``Sentences'' are often entities or phrases, with an average length of only 6--9 tokens.
\item \textbf{Ubuntu} is a dataset in the technical domain. It consists of 42 language pairs generated from translating localization files of the Ubuntu OS.\footnote{\url{https://ubuntu.com/}} The size of test sets for this corpus ranges between 120 (Yoruba) and 11,161 (German), and it shows similar length statistics to GNOME.
\item \textbf{Tanzil} is a religious dataset with 42 language pairs. It is a collection of Quran translations compiled by the Tanzil project.\footnote{\url{https://tanzil.net/}} We sample 2000 sentences for both German and Hausa, which have an average length of 23 tokens, being slightly longer than the average JW300 training sentence.
\item \textbf{ParaCrawl} is a dataset obtained from mining the web for parallel sentences~\citep{banon-etal-2020-paracrawl}. v8 covers 41 mostly European languages, but a pre-release of Igbo and Hausa allowed evaluation here.\footnote{\url{https://bit.ly/3f7WfVI}} The crawled websites for Hausa and Igbo are largely religious but some also publish news. We sample 2000 sentences for Hausa, Igbo and German with an average length of 22 tokens.
\item \textbf{Tatoeba} is a crowdsourced dataset of short sentences concerning every day life translated by users of~\url{https://tatoeba.org/}. We only report Tatoeba results for German as this is the only corpus with more than 100 sentences. Tatoeba sentences have similar length to Gnome and Ubuntu, but are full sentences.
\item \textbf{Flores} is a multi-domain dataset containing professional translations of sentences extracted from English Wikipedia in 101 languages~\citep{flores}. The size of test sets released for this corpus is 1012 sentences across all languages with similar length to Tanzil and Paracrawl.
\end{itemize}
Our choice of datasets is guided by a desire to capture datasets with different degrees of difference from the original training corpus. JW300 is a religious dataset, so one could expect more overlap with both \textbf{ParaCrawl} and \textbf{Tanzil} and far less with \textbf{Ubuntu} and \textbf{Gnome} which are both technical writing to document the use of a technology. We include \textbf{Flores} which covers a variety of different domains and finally \textbf{Tatoeba} for completeness, as a more general dataset consisting of everyday sentences.
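The OOV rates reported in Table~\ref{tab:oov} amount to the fraction of test tokens never seen in training. A minimal sketch with naive whitespace tokenization (the exact tokenization behind the table may differ):

```python
def oov_rate(train_sentences, test_sentences):
    """Percentage of test tokens that never occur in the training corpus."""
    train_vocab = {tok for sent in train_sentences for tok in sent.split()}
    test_tokens = [tok for sent in test_sentences for tok in sent.split()]
    unseen = sum(tok not in train_vocab for tok in test_tokens)
    return 100.0 * unseen / len(test_tokens)

# Hypothetical toy corpora for illustration.
train = ["the cat sat", "the dog ran"]
test = ["the cat ran fast", "a dog sat"]
rate = oov_rate(train, test)  # 'fast' and 'a' are unseen: 2 of 7 tokens
```

High OOV rates (as for Ubuntu and Flores) signal test text far outside the model's training vocabulary.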
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|ccc|ccc|ccc}
\toprule
& \multicolumn{3}{c|}{\textbf{Training}} & \multicolumn{3}{c|}{\textbf{\texttt{Global} test}} & \multicolumn{3}{c}{\textbf{\texttt{Random} test}}\\
& Avg Len & Dense & 90\% Sparse & Avg Len & Dense & 90\% Sparse & Avg Len & Dense & 90\% Sparse \\
\midrule
Low & 33.01 & 62.04 & 24.93 & 23.58 & 31.41 & 29.45 & 32.73 & 22.35 & 23.08\\
Mid & 18.26 & 80.09 & 26.82 & 14.03 & 35.37 & 34.68 & 17.53 & 23.99 & 23.58\\
High & 8.99 & 78.91 & 28.58 & 9.86 & 48.56 & 48.05 & 9.09 &25.47 & 24.86\\
\bottomrule
\end{tabular} %
}
\caption{BLEU for different sets split according to sentence typicality for German, which is defined as average token log-frequencies in the training corpus ($F_S$ in ~\citep{raunak-etal-2020-long}).}
\label{tab:typicality}
\end{table*}
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cc|cc|cc|cc|cc|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{\textbf{JW300 \texttt{Global}}} & \multicolumn{2}{c|}{\textbf{JW300 \texttt{Random}}} & \multicolumn{2}{c|}{\textbf{Tanzil}} & \multicolumn{2}{c|}{\textbf{Tatoeba}} & \multicolumn{2}{c|}{\textbf{ParaCrawl}} & \multicolumn{2}{c|}{\textbf{Gnome}} & \multicolumn{2}{c|}{\textbf{Ubuntu}} & \multicolumn{2}{c}{\textbf{Flores}}\\
& OOV & Len & OOV & Len & OOV & Len & OOV & Len & OOV & Len & OOV & Len & OOV & Len & OOV & Len\\
\midrule
\texttt{de} & 0.25 & 15.81 & 0.66 & 19.76 & 2.68 & 22.48 & 4.89 & 8.78 & 9.86 & 20.09 & 12.37 & 8.09 &16.64 & 5.56 & \multirow{4}{*}{15.53} & \multirow{4}{*}{21.64}\\
\texttt{ha} & 0.26 & 16.28 & 0.37 & 18.72 & 4.95 & 22.86 & 7.00 & 7.76 & 3.39 & 24.67 & 15.22 & 9.81 & 20.42 & 7.83 \\
\texttt{ig} & 0.30 & 15.98 & 0.50 & 18.58 & - & - & 12.10 & 6.89 & 6.95 & 20.91 & 14.19 & 6.99 & 13.99 & 6.81\\
\texttt{yo} & 0.24 & 15.98 & 0.56 & 18.77 & - & - & 9.05 & 5.69 & - &- & 16.66 & 6.36 & 13.55 & 6.46\\
\bottomrule
\end{tabular}%
}
\caption{Out-of-vocabulary rates (OOV, \%) and average source lengths (Len) for different test set sources.}
\label{tab:oov}
\vspace{-0.2cm}
\end{table*}
\subsection{Architecture and Training}\label{sparse_transformer}
We train transformer models~\citep{vaswani2017attention} for each NMT task with a modified version of the tensor2tensor library~\citep{tensor2tensor} from~\citet{tgale_shooker_2019}. The transformer base model consists of 60M parameters, with $31\%$ of the parameters in the attention layers and $41\%$ in the position-wise feed-forward layers. Training hyperparameters are detailed in Appendix Section~\ref{sec:hyper-training}. We release our code at \url{https://github.com/orevaahia/mc4lrnmt}.
Throughout training, we introduce sparsity at levels of [0, 50, 60, 70, 80, 90, 95, 98]\% using magnitude pruning~\citep{to-prune-or-not}. All fully-connected layers and embeddings, which make up 99.87\% of all parameters in the model, are considered for sparsification.
The tuning of pruning hyper-parameters is described in Appendix Section~\ref{sec:hyper}.
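For illustration, the core of magnitude pruning can be sketched in a few lines of pure Python (the actual implementation operates on TensorFlow tensors inside tensor2tensor; `magnitude_prune` is a hypothetical helper):

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude.

    Illustrative sketch: ties at the threshold may remove slightly more
    weights than requested.
    """
    k = int(len(weights) * sparsity)  # number of weights to remove
    if k == 0:
        return list(weights)
    threshold = sorted(abs(w) for w in weights)[k - 1]  # k-th smallest |w|
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

For example, pruning 40\% of `[0.1, -0.5, 0.02, 0.9, -0.3]` zeroes the two entries with the smallest absolute values.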
\subsection{Human Evaluation: Dense vs Sparse}\label{sec:human}
We complement automatic BLEU evaluation with a human evaluation study to compare the translation quality of dense and sparse models. We elicit absolute ratings on a 6-point scale for 500 pairs of differing translations of the JW300 \texttt{Global} and \texttt{Random} test set on a crowd-sourcing platform.
\begin{figure*}[th!]
\centering
\vskip 0.15in
\begin{small}
\begin{sc}
\begin{subfigure}{0.23\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_gnomebleu}
\caption{Gnome}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_ubuntubleu}
\caption{Ubuntu}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_paracrawlbleu}
\caption{ParaCrawl}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_floresbleu}
\caption{Flores}
\end{subfigure} \\
\begin{subfigure}{0.30\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_tanzilbleu}
\caption{Tanzil}
\end{subfigure}
\begin{subfigure}{0.30\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/tatoeba_limited_BLEU}
\caption{Tatoeba: \texttt{Limited} vs \texttt{Full}}
\label{fig:de_full_sample}
\end{subfigure}
\begin{subfigure}{0.30\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/sensitivitygerman_withflores_samp}
\caption{All (German)}
\end{subfigure}
\end{sc}
\end{small}
\caption{
Robustness to distribution shift at different levels of sparsity for models trained in a data-limited regime (\texttt{Limited}). For Tatoeba (German only), performance of a model trained on \texttt{Full} is added for comparison.
}
\label{fig:sensitivity}
\vspace{-0.5cm}
\end{figure*}
\section{Results}
\paragraph{Sparsity BLEU trade-off} In Figure~\ref{fig:degradation}, we can see that models tolerate moderate to high levels of sparsity (50\%--80\%) while retaining BLEU performance relative to dense. Between 50\% and 80\% sparsity, any degradation is minimal, as sparse performance remains at 95\% or more of dense performance for all languages. Hausa is even a slight exception, where pruning 70\% or 80\% of the model parameters performs on par with the baseline or better. However, for both \texttt{Global} and \texttt{Random} test sets, there is a noticeably sharp degradation in BLEU when progressing to extremely high sparsity levels of 90\% and beyond.
\paragraph{Long-tail test set} Translation quality on the \texttt{Global} and \texttt{Random} test sets differs considerably. We control for data size by comparing on the same \texttt{Limited} datasets. Languages perform within a narrow band of comparable BLEU for \texttt{Global}, with the degradation in BLEU at higher levels of sparsity occurring at a similar rate across languages. In contrast, absolute BLEU scores on \texttt{Random} are noticeably lower at both dense and all sparsity levels, coupled with a far wider spread of BLEU between languages. This suggests that \emph{a low data regime disproportionately impacts translation quality on the long-tail} and that the \texttt{Random} set is a more discriminative evaluation protocol.
When we compare relative differences of sparse models to dense, we can see that relative to \texttt{Global}, there is sharper degradation in \texttt{Random} under high sparsity (90\%+). However, with mid-level sparsity, the quality of the dense model is maintained or even slightly outperformed (German) on all test sets.
\paragraph{Learning prototypical instances is less sensitive to data size} In Figure~\ref{fig:degradation}, it is noticeable that performance on the \texttt{Global} test set does not vary much between the \texttt{Limited} (\ref{subfig:global_test_limited}) and \texttt{Full} (\ref{subfig:global_test_full}) training settings. This is surprising given the large difference in training corpus size for many of the languages (for German, 1.9M in \texttt{Full} vs 200k in \texttt{Limited}).
Additionally, even when restricting attention to the \texttt{Full} training setting, the ranking of absolute BLEU scores on the \texttt{Global} test set does not appear to be sensitive to the size of the training corpus, as Igbo (414.9K) and Yoruba (414.0K) achieve nominally higher BLEU on \texttt{Global} than German (1.9M) despite having only a fifth of training data in the \texttt{Full} setup. This suggests that learning a representation for the most frequent patterns in the dataset does not require a substantial amount of data.
\paragraph{Data size for long-tail patterns} In contrast, learning a good representation for the long-tail appears to be far more sensitive to the size of the training corpus. In Figure~\ref{fig:de_full_sample}, we compare the OOD performance on Tatoeba of the \texttt{Full} vs \texttt{Limited} model trained on German, evaluating on a dataset with a much higher OOV ratio of $2.68\%$. On a dataset with more rare instances and distribution shift, the amount of training data makes a larger difference: across all levels of sparsity, the model trained on \texttt{Full} generalizes better than the \texttt{Limited} model.
\paragraph{Do humans notice the difference between dense and sparse models?}
Human annotators rate test translations of the dense model and 90\%-sparsity model as described in Section~\ref{sec:human}.
Table~\ref{tab:ratings} reports the average ratings (1-6 scale, the higher the better) for both types of models across languages for both in-domain test sets.
The ratings reveal that there is no clear preference of dense over sparse model outputs across languages and sets. For German the sparse model scores 0.1 lower on average for both test sets. For Igbo and Hausa sparse scores slightly higher on the \texttt{Global} set, but this gain is lost on \texttt{Random}. Hausa receives the nominally highest ratings on both test sets, but we note that
raters might not be well calibrated across languages, and test sets are not completely identical. Nevertheless, the \texttt{Random} translations score consistently lower than the \texttt{Global} translations, indicating that the quality loss on long-tail examples was indeed noticeable.
All in all, \emph{the roughly 2--8\% drop in BLEU incurred by pruning at 90\% does not negatively affect human-judged translation quality in any of the studied languages}, which is a promising finding for the deployment of such models in practice. However, this evaluation is oblivious to effects like translation biases~\citep{edunov-etal-2020-evaluation} that could be caused by less memorization.
\paragraph{How sensitive are sparse models to distribution shift?} \label{sec:distribution_shift_explanation}
Figure~\ref{fig:sensitivity} shows the absolute change in BLEU when evaluating the \texttt{Limited} models on the out-of-distribution datasets.
We find that across dense and sparse models, degradation in performance is sensitive to OOV rates and differences in sentence length, with the most pronounced degradation on Tanzil (longer average sentence length) and on Ubuntu and Gnome (technical domains with far higher OOV rates of 12--20\%). The transfer to ParaCrawl was the most successful across languages. For Flores, we do not see uniform performance across all languages: transfer is comparable to that of ParaCrawl for all languages but Yoruba.
One trend that is consistent across languages and domains is an increase in quality around 70\%--95\% sparsity (even more visible when plotting relative change in Figure~\ref{fig:relative_degradation_distribution2} in the Appendix).
As a result, 90\% sparse models outperform their baselines across all languages under the \texttt{Limited} condition. This means that \emph{increased sparsity during training with limited data has a beneficial influence on out-of-distribution generalization}. With larger amounts of training data---in the case of German (\texttt{Full}) a factor of 10---this relative advantage is lost, however (Figure~\ref{fig:de_full_sample}). This finding is highly relevant for the reality of low-resource languages, where training data is limited and often available only for a narrow domain, so strong generalization is essential.
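The OOV rates that drive much of this degradation are simple to compute; a minimal sketch, assuming whitespace tokenization (our setup uses BPE subwords, so exact values differ):

```python
def oov_rate(test_sentences, train_vocab):
    """Percentage of test-set tokens absent from the training vocabulary."""
    tokens = [tok for sent in test_sentences for tok in sent.split()]
    oov = sum(1 for tok in tokens if tok not in train_vocab)
    return 100.0 * oov / len(tokens)
```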
\paragraph{Does sparsity curb memorization?} The results on OOD generalization are surprising, as they suggest that in a low-data regime \textit{less capacity rather than more can aid generalization}. It is worth placing these results in the context of recent work in computer vision that has found that sparsity curbs memorization of the long-tail of the training distribution \citep{hooker2020compressed,hooker2020characterising}. Impeding memorization constrains model learning to the most general high-level features, rather than atypical low-frequency features and noisy examples, which typically require memorization and are less likely to generalize across other tasks \citep{brown2020memorization,NEURIPS2020_1e14bfe2}. In the setting of low-resource languages, the training corpus is often highly restricted to a narrow domain such as religion. Here, it may be more likely that the long-tail contains specialized artefacts and noise that do not generalize.
To explore this further, in Table~\ref{tab:typicality} we look at training and test performance grouped by sentence typicality for German (typicality measured as per \citet{raunak-etal-2020-long}). At training time, the dense model evidences clear overfitting and outperforms the sparse model on low, mid and high typicality. The difference in relative performance on sentences of low typicality is striking (62.04 dense vs 24.93 sparse BLEU), confirming that capacity aids memorization of the long-tail. However, at test time the dense model's memorization of a specialized training set exhibits a negative knowledge transfer cost relative to sparse. On the \texttt{Random} test set, sparse in fact slightly outperforms dense on low typicality. Both Table~\ref{tab:typicality} and the OOD results in Figure~\ref{fig:sensitivity} show that sparsity has an important role to play in data-limited regimes in curbing memorization of rare artefacts that do not generalize.
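A minimal sketch of the typicality measure $F_S$, assuming raw relative frequencies (smoothing for tokens unseen in training is omitted):

```python
import math
from collections import Counter

def typicality(sentence_tokens, corpus_tokens):
    """Average log relative-frequency of a sentence's tokens (F_S)."""
    counts = Counter(corpus_tokens)
    total = len(corpus_tokens)
    return sum(math.log(counts[t] / total)
               for t in sentence_tokens) / len(sentence_tokens)
```

Sentences built from frequent tokens score higher (closer to zero), so low-typicality buckets collect the long-tail.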
\begin{table}[t]
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{l|cc|cc}
\toprule
& \multicolumn{2}{c|}{\textbf{\texttt{Global}}} & \multicolumn{2}{c}{\textbf{\texttt{Random}}} \\
& Dense & 90\% Sparse & Dense & 90\% Sparse \\
\midrule
\texttt{de} & 4.17 & 4.06 & 3.80 & 3.66 \\%& 28 \\
\texttt{yo} & 3.66 & 3.66 & 3.51 & 3.51 \\
\texttt{ig} & 3.87 & 3.96 & 3.81 & 3.85 \\
\texttt{ha} & 4.77 & 4.85 & 4.53 & 4.53\\
\bottomrule
\end{tabular}%
}
\caption{Average human ratings of 500 test set translations comparing \textbf{dense} and \textbf{90\%-sparse} models (\texttt{Limited}) on \texttt{Global} and \texttt{Random} JW300 test sets.}
\label{tab:ratings}
\vspace{-0.2cm}
\end{table}
\section{Related Work}
\paragraph{Compression techniques for NMT} There have been recent works on compressing recurrent neural networks (RNN) and transformer networks for NMT~\citep{2019arXiv190209574G,narang2017exploring, see-etal-2016-compression,zhang-etal-2017-towards,rethinking_model_size_li}. With the exception of a proof-of-concept experiment with RNNs on a small-scale English--Vietnamese translation task~\citep{see-etal-2016-compression}, all of the works above focus on compressing models trained on large datasets, and exclude African languages.
To the best of our knowledge, our work is the first to apply pruning methods to train transformer NMT models on low-resourced data, and on African languages with syntactic and morphological features distinct from English. Moreover, all of the above works rely solely on automatic evaluation metrics, and neither assess translation quality with human annotation nor measure sensitivity to different distribution shifts.
\paragraph{Optimized training for low-resource NMT}
\citet{sennrich-zhang-2019-revisiting} find that hyperparameter tuning is essential for NMT on low-resource data, such as the depth or regularization of the network. \citet{duh-etal-2020-benchmarking} highlight the importance of hyper-parameter tuning for NMT for Somali and Swahili. \citet{fadaee-etal-2017-data,sennrich-zhang-2019-revisiting} and \citet{xu-etal-2020-dynamic} explore tailored augmentation and curriculum learning strategies for data-limited regimes. \citet{sennrich-zhang-2019-revisiting} additionally assume limitations to compute at training time when modeling Somali and Gujarati. In contrast, in our work, we consider the impact of resource constraints present at inference time/deployment of a model.
\paragraph{Transformer size for low-resource NMT} More relevant to our work are studies that have evaluated transformers of different sizes for low-resource translation. \citet{Biljon2020OnOT} investigate the effect of transformer depth on low-resource translation for three South-African languages. \citet{murray-etal-2019-auto} study auto-sizing of feed-forward and attention layers in transformers for low-resource translation of Hausa, Tigrinya, and Arabic, and find BLEU and efficiency improvements with smaller models. \citet{tapo-etal-2020-neural} succeed in training a smaller transformer model for Bambara with as few as 10k examples, but find only limited generalization under distribution shift~\citep{Tapo2021}.
In contrast to these works, we study generalization at different levels of sparsity. Pruning is a more precise experimental framework for understanding the relationship between capacity and generalization because we can vary sparsity exactly, in a range between 0\% and 100\%, while controlling for the same architecture. Pruning also achieves far higher levels of compression, in terms of the number of parameters, than substitutes evaluated in these works such as the tiny transformer. In our work, we also seek not only to measure the impact of capacity but also to better understand why, counter-intuitively, higher levels of sparsity aid generalization. Finally, our experiments are extensive relative to those of \citet{Biljon2020OnOT} and \citet{tapo-etal-2020-neural}, both in terms of the number of languages and the variety of training and evaluation conditions. Furthermore, we are the first to report a human evaluation of the effects of pruning for MT.
\section{Future Work}
Our work introduces the term \textit{low-resource double-bind} and conducts extensive experiments to study the impact of pruning. In this setting, we are concerned with resource constraints present at deployment. An important area for further work is to explore a setting where resource constraints are present at both training and deployment time. For example, a consideration of the impact of pruning on pre-trained models, such as large multilingual MT models that are known to boost low-resource NMT quality~\citep{aharoni-etal-2019-massively,massiveWild}. Additionally, the minimal differences observed in our human evaluation of preferences open up a range of questions for deeper qualitative analysis of the resulting translations: Under which conditions do humans notice differences, and how do translations differ in style? There may be interesting connections to recent findings about output hallucinations occurring on memorized examples~\citep{curiousHallucinations}, or with respect to translation bias~\citep{koppel-ordan-2011-translationese}.
\section{Conclusion}\label{sec:conclusion}
We demonstrate the effectiveness of introducing sparsity when training NMT models for low-resourced languages. We show that small performance drops in extremely sparse regimes according to automatic metrics are not reflected in human-judged translation quality. Our extensive study of the impact of pruning on out-of-distribution generalization reveals that sparse models improve over dense models in a limited data regime.
Overall, these insights are promising for overcoming the low-resource double bind: Pruned models reduce resource requirements for deployment, and increase the robustness towards out-of-domain samples due to reduced memorization during training.
\section{Acknowledgements}\label{sec:acknowledgement}
We thank Colin Cherry, Rubungo Andre Niyongabo, Kelechi Ogueji and Trevor Gale for their invaluable feedback and comments on the paper draft. We also thank the anonymous reviewers for their time and comments on our paper. We thank CURe for providing compute credits, and the institutional support of Jonathan Caton and Natacha Mainville.
\newpage
\section{Introduction}
Over the years, the size of language models has grown exponentially~\citep{2018Amodei,2020arXiv200705558T,bender_gebru_2021}. Additional parameters have improved quality on a variety of downstream NLP tasks, but drive up the cost of training~\citep{2014Horowitz,strubelL2019energy,patterson2021carbon} and increase the latency and memory footprint at inference time~\citep{warden2019tinyml, Samala_2018}.
Extending state-of-the-art language models to low-resource languages requires addressing what we term the \textit{low-resource double bind}. Low-resourcedness goes beyond mere data availability and reflects systemic issues in society~\cite{afocus, nekoto-etal-2020-participatory}.
Classifications of languages with respect to ``resourcedness'' have focused on the relative availability of data~\citep{zoph-etal-2016-transfer,joshi-etal-2020-state}, the concentration of NLP researchers from these regions, or the over-fitting of model design around a small set of high-resource languages~\citep{cieri-etal-2016-selection,nekoto-etal-2020-participatory}.
\begin{figure}
\centering
\includegraphics[width=0.8\columnwidth]{images/cost_language_resources.png}
\caption{Cost of mobile data by country per language rank according to the taxonomy by~\citet{joshi-etal-2020-state}.}
\label{fig:double_resource}
\end{figure}
Less well documented and explored is the over-indexing of low-resource languages in ecosystems which simultaneously present severe constraints of computational resource. In Fig.~\ref{fig:double_resource} we plot 22 languages grouped by the availability of labelled and unlabelled data as proposed by~\citet{joshi-etal-2020-state} against the cost of 1 GB of data as a percentage of monthly income. Each language is mapped to the country with the most speakers. The cost of data is a valuable proxy for the cost of access to technology in an ecosystem~\citep{oughton2021}. Here, this visibly co-occurs with the limitations in available data for different languages.
In computationally constrained environments, access to machine learning technology depends upon optimizing jointly for both model performance and compactness. Pruning and quantization are widely applied techniques for compressing deep neural networks prior to deployment, as compressed models require less memory, energy consumption and have lower inference latency \citep{2017Andre,8364435,sun_computation_sparse}. To-date, evaluating the merits and trade-offs incurred by compression have overwhelmingly centered on settings where the data is relatively abundant~\citep{2019arXiv190209574G,Li2020LearningLT,Hou2020DynaBERTDB,Chen2021EarlyBERTEB,Bai2020BinaryBERTPT,tessera2021gradients}.
In this work, we \emph{instead} ask how these design choices trade off with performance in data-limited regimes typical of low-resource languages. We conduct large scale experiments on Neural Machine Translation (NMT) models trained to translate between English and three low-resource African languages (Yoruba, Igbo and Hausa) and one high-resource language (German). We compare performance across models independently trained to very different levels of sparsity---ranging from 50\% to 98\%---and evaluate performance on the original distribution, in addition to establishing sensitivity to distribution shift across multiple corpora.
Recent work restricted to the computer vision domain has found that sparse models with comparable top-line performance metrics diverge considerably in behavior on the long-tail of the distribution and are sensitive to distribution shifts~\citep{hooker2020compressed,liebenwein}. Here, we rigorously characterize the impact of sparsity on learned decision boundaries in NMT. In addition to held-out set BLEU, we measure sub-group performance on sentences grouped by prototypicality and study generalization properties over test corpora with different out-of-vocabulary ratios. We also evaluate whether humans prefer translations from sparse or dense models.
Our contributions can be enumerated as follows:
\begin{enumerate}
\itemsep0em
\item We introduce the term \textit{low-resource double-bind} and develop an extensive experimental framework to understand the impact of compression in a data-limited regime across 4 languages and 5 different data sets.
\item We find that models are \emph{tolerant of high levels of sparsity} while retaining BLEU performance and also human-judged translation quality. This holds until extremely high levels of sparsity (95\%--98\% of all weights removed), where a severe decline in BLEU is notable.
\item There is a more pronounced degradation when evaluation includes less frequent input patterns. On closer investigation, we find that \emph{sparsity disproportionately degrades performance on the long-tail of the data distribution}.
\item Curbing memorization of the long-tail can provide unexpected benefits. In a data-limited regime, we find that \emph{sparsity benefits generalization to out-of-distribution corpora}.
\end{enumerate}
\paragraph{Implications of Our Work} Understanding the impact of compression on low-resource languages is key to making technology accessible and inclusive. Our work suggests that compression in these settings alters generalization in ways that can be beneficial and go beyond merely fulfilling deployment constraints. A challenge in low-resource NLP is that the existing publicly available corpora often come from very specific domains, such as missionary websites or translations of religious texts. These sources do not adequately reflect the reality of the potential applications of NLP technologies, and are rarely sufficient for deployment~\citep{tigrinya,tico,congolese}. Thus, a task of great interest is establishing what model design choices can lead to generalization properties that extend beyond the immediate task at hand. Our work suggests that sparsity can play an important role in aiding generalization by curbing the memorization of rare long-tail instances.
\section{Methodology}
Addressing the low-resource double bind requires a careful setup of experiments to reflect the realities of low-resource translation. In particular, we want to control the effects of (1) network sparsity, (2) training data size, (3) target language, and (4) domain shifts.
In this work we focus on pruning, a widely favored compression technique due to the remarkably high levels of compression that can be achieved while retaining top-line performance~\citep{tgale_shooker_2019}. Pruning typically involves three separate stages: 1) training a dense model, 2) progressively removing a subset of weights estimated to be unimportant, and 3) continuing to train the smaller sparse network for a certain number of steps to recoup performance~\citep{248452,blalock2020state}.
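These three stages can be sketched as one loop; `train_step`, `prune_to` and `schedule` are hypothetical stand-ins for framework-specific calls:

```python
def train_with_pruning(num_steps, prune_begin, prune_end,
                       train_step, prune_to, schedule):
    """Sketch of the three-stage pruning recipe.

    Steps before `prune_begin` train a dense model (stage 1); between
    `prune_begin` and `prune_end` weights are progressively removed
    according to `schedule` (stage 2); the remaining steps let the
    sparse network recoup performance (stage 3).
    """
    for step in range(num_steps):
        train_step(step)
        if prune_begin <= step <= prune_end:
            prune_to(schedule(step))
```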
Pruning is the subject of considerable research and numerous techniques have been proposed, which differ in how weights are identified for removal and the schedule for introducing sparsity/allowing recovery ~\citep{Cun90optimalbrain,1993optimalbrain,Strom97sparseconnection,2017l0_reg,2016abigail,evci2019rigging, 2017Narang}.
The development of specialized software kernels has enabled the acceleration of sparse networks on traditional hardware \citep{gale2020sparse,elsen2019fast,sparse_tensor_core} with new generations of hardware directly facilitating sparse training~\citep{sparse_tensor_core}.
State-of-the-art pruning techniques can achieve far higher levels of compression and performance than simply using a smaller dense network \citep{to-prune-or-not, rethinking_model_size_li}. In our setting, a 90\% sparse base transformer greatly outperforms a tiny dense one across all the languages despite having a fraction of the parameters (14M vs 4.6M) (Appendix Table~\ref{tab:parameter_count}).
\begin{figure*}[ht!]
\centering
\vskip 0.15in
\begin{small}
\begin{sc}
\begin{subfigure}{0.28\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/jw300_global_test_full}
\caption{Global Test \texttt{Full}}\label{subfig:global_test_full}
\end{subfigure}
\hspace{1em}%
\begin{subfigure}{0.28\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_jw300globaltestbleu}
\caption{Global Test \texttt{Limited}}\label{subfig:global_test_limited}
\end{subfigure}
\hspace{1em}%
\begin{subfigure}{0.28\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_jw300randomtestbleu}
\caption{Random Test \texttt{Limited}}
\end{subfigure}
\\
\begin{subfigure}{0.28\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/jw300_global_test_change_in_bleu_full}
\caption{Global Test \texttt{Full}}
\end{subfigure}
\hspace{1em}%
\begin{subfigure}{0.28\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/jw300_global_test_change_in_bleu_limited}
\caption{Global Test \texttt{Limited}}
\end{subfigure}
\hspace{1em}%
\begin{subfigure}{0.28\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/jw300_random_test_change_in_bleu}
\caption{Random Test \texttt{Limited}}\label{fig:randomtest}
\end{subfigure}
\end{sc}
\end{small}
\caption{
Impact of pruning on BLEU performance across languages, sparsity levels and training data regimes. We evaluate test set performance on both a \texttt{Global} test set designed around common phrases to allow comparability between language corpora, and a \texttt{Random} test set with sentences sampled at random. \textbf{Top row:} Absolute BLEU by test set and sample size. \textbf{Bottom row:} Change in BLEU relative to the dense (0\% sparse) model.}
\label{fig:degradation}
\end{figure*}
\subsection{Magnitude Pruning}\label{magnitude pruning technique}
We use magnitude pruning~\citep{to-prune-or-not} to introduce sparsity across all experiment variants. It consistently achieves comparable or better results than competing state-of-the-art approaches on large-scale benchmarks of computer vision and language models~\citep{2019arXiv190209574G} and is widely used in practice due to its ease of implementation. Magnitude pruning estimates weight importance as the absolute weight value, and removes the weights with lowest magnitude according to a pre-specified schedule that determines the begin and end steps and the frequency of the training intervals across which sparsity is introduced.
Magnitude pruning allows for the pre-specification of a desired sparsity, such that we can train models from random initialization to precise levels of end sparsity. We carry out extensive experiments and train networks independently for each language to end sparsity levels of 0\%--98\%, where 98\% designates a network with 98\% of its weights removed by the end of training and 0\% a dense network (no weights removed).
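The gradual schedule of~\citet{to-prune-or-not} interpolates cubically between an initial and final sparsity; a minimal sketch (the concrete begin/end steps are the tuned hyper-parameters reported in the Appendix):

```python
def sparsity_at_step(step, begin_step, end_step,
                     final_sparsity, initial_sparsity=0.0):
    """Cubic sparsity ramp: dense before `begin_step`, `final_sparsity`
    from `end_step` onward, smooth interpolation in between."""
    if step < begin_step:
        return initial_sparsity
    if step >= end_step:
        return final_sparsity
    progress = (step - begin_step) / (end_step - begin_step)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - progress) ** 3
```

Because the ramp flattens near the end, most weights are removed early, leaving ample remaining steps for the sparse network to recover.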
\subsection{Languages}
We validate the effectiveness of the magnitude-based pruning method in NMT models trained to translate from English into German (\texttt{de}), Yoruba (\texttt{yo}), Igbo (\texttt{ig}) and Hausa (\texttt{ha}). While German as a high-resource language serves as a point of comparison to previous works, Yoruba, Igbo and Hausa represent three of the highest-resource African languages with (near-)sufficient resources for reliable MT experimentation, i.e. multiple publicly-available parallel corpora. \citet{joshi-etal-2020-state} classify Yoruba and Hausa as ``Hopeful'' in terms of available NLP resources and research, whereas Igbo is slightly lower-resourced and classified as ``Scraping-by''. All constitute important test beds for developing technologies that improve the treatment of low-resource languages, since they each have more than 50 million native speakers. Yoruba and Igbo belong to the Niger-Congo language family and use diacritics that pose challenges for text-based NLP~\citep{orifeDiacritics,OkwuGbe}. Hausa is a Chadic language which is part of the Afroasiatic phylum. It features complex pluralization and agglutination.
\begin{table}
\centering
\resizebox{1.0\columnwidth}{!}{
\begin{tabular}{l| c |c c c c c c}
\toprule
& Training & \multicolumn{5}{c}{Distribution Shift Test} \\
& \textbf{JW300} & \textbf{Gnome} & \textbf{Ubuntu} & \textbf{Flores} &\textbf{ParaCrawl} & \textbf{Tanzil} & \textbf{Tatoeba} \\
\midrule
\texttt{de} & 1.9M & 5,963 & 11,161 & 1,012 & 2,000 & 2,000 & 10,145 \\
\texttt{yo} & 414.0k & 1,467 & 120 & 1,012 & - & - & - \\
\texttt{ig} & 414.9k & 3,173 & 608 & 1,012 & 2,000 & - & - \\
\texttt{ha} & 211.9k & 998 & 219 & 1,012 & 2,000 & 2,000 & - \\
\bottomrule
\end{tabular}%
}
\caption{Number of sentences in each parallel corpora we evaluate. For ParaCrawl and Tanzil, we sample $2000$ sentences from the full dataset.}
\label{tab:data}
\end{table}
\subsection{Training and Test Data}
\paragraph{JW300} Training data for all languages is obtained from the JW300 parallel corpus ~\citep{agic-vulic-2019-jw300}, since it is the largest source of data that covers all languages we evaluate. It comprises more than 300 languages of which 101 are African, and is collected from religious magazines by Jehovah’s Witnesses (JW) published on \url{jw.org}.
\paragraph{Pre-processing} Parallel sentences are tokenized and encoded using BPE \citep{sennrich-etal-2016-neural}, resulting in a shared vocabulary of \SI{4096}{} tokens. Sentences are batched together with a maximum sequence length of \SI{64}{}. For each training batch, the approximate number of source/target tokens is \SI{2048}{}. We compute detokenized and case-sensitive BLEU using a helper script in tensor2tensor \citep{tensor2tensor} equivalent to SacreBLEU \citep{post-2018-call}.
\paragraph{Full vs limited data regime} For our experiments, we train on these datasets in two settings: First, with all data available for each of the languages, sizes listed in Table~\ref{tab:data}. In this setting, the dataset sizes range from 212k for Hausa to 1.9M for German.
Our second setting holds constant the amount of data available by sampling a uniform number of sentence pairs for each language. We randomly sample 200k sentences from the train set of each language, limited by the smallest corpus Hausa which consists of approximately 210k sentences. We refer to these settings in experiment discussion as \texttt{Full} and \texttt{Limited}.
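As an aside, the \texttt{Limited} split above amounts to a seeded uniform sample from each training corpus; a minimal sketch in Python (the corpus format and budget handling here are illustrative assumptions, not the released implementation):

```python
import random

def make_limited_split(pairs, budget=200_000, seed=0):
    """Sample a fixed-size training set (the Limited setting) from the
    full list of (source, target) pairs, without replacement."""
    budget = min(budget, len(pairs))   # e.g. Hausa barely exceeds 200k
    rng = random.Random(seed)          # fixed seed: reproducible splits
    return rng.sample(pairs, budget)

# Toy usage: a "corpus" of 10 pairs reduced to a budget of 4.
corpus = [(f"src {i}", f"tgt {i}") for i in range(10)]
limited = make_limited_split(corpus, budget=4, seed=0)
```

Seeding per language keeps the \texttt{Limited} splits comparable across reruns.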
\paragraph{Validation \& testing}
The need for multiple test sets to capture performance on a variety of downstream conditions has already been recommended by recent work~\citep{sogaard-etal-2021-need,lazaridou2021pitfalls}.
The JW300 test sets were constructed and released by ~\citet{nekoto-etal-2020-participatory} to contain the most frequent sentences in the JW300 corpus across African languages and were filtered from the training corpus. This construction ensures that test sets across languages contain similar content, which leads to increased comparability. However, this cross-lingual selection may introduce a bias towards frequent sentences, and under-represent language-specific outliers.
Only measuring performance on frequent sentences across languages may be a particular concern in evaluating the impact of sparse models, as prior work has shown that the introduction of sparsity disproportionately impacts the long-tail of the data distribution ~\citep{hooker2020compressed,hooker2020characterising}.
To capture possible disparate impact on the long-tail, we also sample at random from the remainder of the data to craft a secondary test set (as was done for validation). In the results section, we refer to the ~\citet{nekoto-etal-2020-participatory} test data as the \texttt{Global} test set and the random sample as the \texttt{Random} test set. Comparing performance between the \texttt{Global} and \texttt{Random} test sets provides insight into how sparsity impacts generalization on frequent text relative to a more typical Zipf distribution with long-tail features~\citep{zipf1999psycho}.
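The \texttt{Global}/\texttt{Random} distinction can be sketched schematically: sentences shared across the most language corpora form the frequent \texttt{Global} pool, and \texttt{Random} is drawn from the remainder. This is a simplified stand-in for the construction of \citet{nekoto-etal-2020-participatory}, not their actual procedure:

```python
import random
from collections import Counter

def split_global_random(sentences_by_lang, k, rng):
    """Schematic Global/Random test construction: source sentences shared
    across the most language corpora form the Global set; the Random set
    is drawn uniformly from the remainder."""
    counts = Counter()
    for sents in sentences_by_lang.values():
        counts.update(set(sents))       # count language coverage, not copies
    global_set = [s for s, _ in counts.most_common(k)]
    remainder = [s for s in counts if s not in set(global_set)]
    random_set = rng.sample(remainder, min(k, len(remainder)))
    return global_set, random_set

# Toy usage: "shared" occurs in both corpora, so it lands in Global.
corpora = {"yo": ["shared", "only-yo"], "ha": ["shared", "only-ha"]}
g, r = split_global_random(corpora, k=1, rng=random.Random(0))
```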
\subsection{Sensitivity to Distribution Shift}
We select corpora which differ from the training distribution in both domain (ranging from everyday sentences to technical documentation), sentence length and OOV rate (ranging from 2.68\% to 20.42\%). Given these large deviations in statistics from the JW300 training corpus, our expectation is \emph{not} that the model preserves performance but rather to understand the sensitivity of sparse models to distribution shift \textit{relative} to dense models.
Our selection of corpora is also guided by the size of public data sets that cover Yoruba, Hausa, Igbo and German. When the test set is small, reliability in BLEU scores between models and inferred conclusions may be compromised~\citep{card-etal-2020-little}. To estimate the impact that limitation in test size can have on our results, we simulate the variability of BLEU under different amounts of test data in Figure~\ref{fig:subset}. As can be seen, a sample size of at least $100$ reasonably reduces variance in BLEU, so we only investigate out-of-distribution sensitivity with datasets of at least that size.
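The subset simulation in Figure~\ref{fig:subset} follows a simple bootstrap pattern; the sketch below uses synthetic per-sentence scores and a plain mean in place of corpus-level BLEU, which is an approximation:

```python
import random
from statistics import mean, stdev

def subset_score_spread(sentence_scores, subset_size, trials=200, seed=0):
    """Standard deviation of the mean score over many random subsets of a
    fixed size -- a proxy for how unstable a small test set makes BLEU."""
    rng = random.Random(seed)
    means = [mean(rng.sample(sentence_scores, subset_size))
             for _ in range(trials)]
    return stdev(means)

# Synthetic per-sentence scores standing in for sentence-level quality.
rng = random.Random(1)
scores = [rng.uniform(0.0, 100.0) for _ in range(2000)]
```

The spread shrinks roughly like $1/\sqrt{n}$ with subset size $n$, which motivates the cut-off of at least $100$ sentences used above.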
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{images/bleu_subsets.png}
\caption{Mean BLEU scores (shaded: $\pm$ standard variation) for the dense en-de models on subsets of the Tatoeba data.}
\label{fig:subset}
\end{figure}
\subsection{Datasets evaluated}
The domains of each dataset considered are characterized below. Additionally, we include statistics for 1) corpus size in Table~\ref{tab:data}, and 2) out-of-vocabulary rates (OOV) and average source lengths in Table~\ref{tab:oov}:
\begin{itemize}
\item \textbf{Gnome} is a dataset in the technical domain that contains
187 language pairs derived from the translation of GNOME documentation.\footnote{\url{https://www.gnome.org/}} The size of test sets for this corpus ranges between 998 (Hausa) and 5,963 (German). ``Sentences'' are often entities or phrases, with an average length of only 6-9 tokens.
\item \textbf{Ubuntu} is a dataset in the technical domain. It consists of 42 language pairs generated from translating localization files of the Ubuntu OS.\footnote{\url{https://ubuntu.com/}} The size of test sets for this corpus ranges between 120 (Yoruba) and 11,161 (German), and it shows similar length statistics to GNOME.
\item \textbf{Tanzil} is a religious dataset with 42 language pairs. It is a collection of Quran translations compiled by the Tanzil project.\footnote{\url{https://tanzil.net/}} We sample 2000 sentences for both German and Hausa, which have an average length of 23 tokens, being slightly longer than the average JW300 training sentence.
\item \textbf{ParaCrawl} is a dataset obtained from mining the web for parallel sentences~\citep{banon-etal-2020-paracrawl}. v8 covers 41 mostly European languages, but a pre-release of Igbo and Hausa allowed evaluation here. \footnote{\url{https://bit.ly/3f7WfVI}} The crawled websites for Hausa and Igbo are largely religious but some also publish news. We sample 2000 sentences for Hausa, Igbo and German with an average length of 22 tokens.
\item \textbf{Tatoeba} is a crowdsourced dataset of short sentences concerning everyday life translated by users of~\url{https://tatoeba.org/}. We only report Tatoeba results for German as this is the only language pair with more than 100 sentences. Tatoeba sentences have similar length to Gnome and Ubuntu, but are full sentences.
\item \textbf{Flores} is a multi-domain dataset containing professional translations of sentences extracted from English Wikipedia in 101 languages~\citep{flores}. The size of test sets released for this corpus is 1012 sentences across all languages with similar length to Tanzil and Paracrawl.
\end{itemize}
Our choice of datasets is guided by a desire to capture datasets with different degrees of difference from the original training corpus. JW300 is a religious dataset, so one could expect more overlap with both \textbf{ParaCrawl} and \textbf{Tanzil} and far less with \textbf{Ubuntu} and \textbf{Gnome} which are both technical writing to document the use of a technology. We include \textbf{Flores} which covers a variety of different domains and finally \textbf{Tatoeba} for completeness, as a more general dataset consisting of everyday sentences.
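The OOV rates quoted above and in Table~\ref{tab:oov} can be computed as the share of test tokens unseen in training; the whitespace tokenization in this sketch is a simplifying assumption (the trained models operate on BPE subwords, for which true OOVs are rare):

```python
def oov_rate(train_sents, test_sents):
    """Percentage of test tokens whose type never occurs in training.
    Whitespace tokenization is a simplifying assumption."""
    vocab = {tok for sent in train_sents for tok in sent.split()}
    test_tokens = [tok for sent in test_sents for tok in sent.split()]
    unseen = sum(tok not in vocab for tok in test_tokens)
    return 100.0 * unseen / max(len(test_tokens), 1)

# oov_rate(["a b c"], ["a d"])  ->  50.0
```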
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|ccc|ccc|ccc}
\toprule
& \multicolumn{3}{c|}{\textbf{Training}} & \multicolumn{3}{c|}{\textbf{\texttt{Global} test}} & \multicolumn{3}{c}{\textbf{\texttt{Random} test}}\\
& Avg Len & Dense & 90\% Sparse & Avg Len & Dense & 90\% Sparse & Avg Len & Dense & 90\% Sparse \\
\midrule
Low & 33.01 & 62.04 & 24.93 & 23.58 & 31.41 & 29.45 & 32.73 & 22.35 & 23.08\\
Mid & 18.26 & 80.09 & 26.82 & 14.03 & 35.37 & 34.68 & 17.53 & 23.99 & 23.58\\
High & 8.99 & 78.91 & 28.58 & 9.86 & 48.56 & 48.05 & 9.09 &25.47 & 24.86\\
\bottomrule
\end{tabular} %
}
\caption{BLEU for different sets split according to sentence typicality for German, which is defined as average token log-frequencies in the training corpus ($F_S$ in ~\citep{raunak-etal-2020-long}).}
\label{tab:typicality}
\end{table*}
\begin{table*}[]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{l|cc|cc|cc|cc|cc|cc|cc|cc}
\toprule
& \multicolumn{2}{c|}{\textbf{JW300 \texttt{Global}}} & \multicolumn{2}{c|}{\textbf{JW300 \texttt{Random}}} & \multicolumn{2}{c|}{\textbf{Tanzil}} & \multicolumn{2}{c|}{\textbf{Tatoeba}} & \multicolumn{2}{c|}{\textbf{ParaCrawl}} & \multicolumn{2}{c|}{\textbf{Gnome}} & \multicolumn{2}{c|}{\textbf{Ubuntu}} & \multicolumn{2}{c}{\textbf{Flores}}\\
& OOV & Len & OOV & Len & OOV & Len & OOV & Len & OOV & Len & OOV & Len & OOV & Len & OOV & Len\\
\midrule
\texttt{de} & 0.25 & 15.81 & 0.66 & 19.76 & 2.68 & 22.48 & 4.89 & 8.78 & 9.86 & 20.09 & 12.37 & 8.09 &16.64 & 5.56 & \multirow{4}{*}{15.53} & \multirow{4}{*}{21.64}\\
\texttt{ha} & 0.26 & 16.28 & 0.37 & 18.72 & 4.95 & 22.86 & 7.00 & 7.76 & 3.39 & 24.67 & 15.22 & 9.81 & 20.42 & 7.83 \\
\texttt{ig} & 0.30 & 15.98 & 0.50 & 18.58 & - & - & 12.10 & 6.89 & 6.95 & 20.91 & 14.19 & 6.99 & 13.99 & 6.81\\
\texttt{yo} & 0.24 & 15.98 & 0.56 & 18.77 & - & - & 9.05 & 5.69 & - &- & 16.66 & 6.36 & 13.55 & 6.46\\
\bottomrule
\end{tabular}%
}
\caption{Out-of-vocabulary rates (OOV, \%) and average source lengths (Len) for different test set sources.}
\label{tab:oov}
\vspace{-0.2cm}
\end{table*}
\subsection{Architecture and Training}\label{sparse_transformer}
We train transformer models~\citep{vaswani2017attention} for each NMT task with a modified version of the tensor2tensor library~\citep{tensor2tensor} from~\citet{tgale_shooker_2019}. The transformer base model consists of 60M parameters, with $31\%$ of the parameters in the attention layers and $41\%$ in the position-wise feed-forward layers. Training hyperparameters are detailed in Appendix Section~\ref{sec:hyper-training}. We release our code at \url{https://github.com/orevaahia/mc4lrnmt}.
Throughout training we introduce sparsity at levels of [0, 50, 60, 70, 80, 90, 95, 98]\% using magnitude pruning~\citep{to-prune-or-not}. All fully-connected layers and embeddings, making up 99.87\% of all parameters in the model, are considered for sparsification.
The tuning of pruning hyper-parameters is described in Appendix Section~\ref{sec:hyper}.
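Magnitude pruning itself is simple to state: the weights with the smallest absolute values are set to zero until a target sparsity is reached. A one-shot sketch in plain Python (the gradual schedule of \citet{to-prune-or-not} applied during training is omitted here):

```python
def magnitude_prune(weights, sparsity):
    """Zero the smallest-magnitude entries so that a `sparsity` fraction
    of the weights is exactly zero. One-shot sketch: the paper introduces
    sparsity gradually over training steps, which is omitted here."""
    k = int(sparsity * len(weights))
    # indices of the k entries with the smallest |w|
    order = sorted(range(len(weights)), key=lambda i: abs(weights[i]))
    drop = set(order[:k])
    return [0.0 if i in drop else w for i, w in enumerate(weights)]

# 40% sparsity removes the two smallest of five weights.
pruned = magnitude_prune([0.1, -0.5, 2.0, -3.0, 0.05], 0.4)
# -> [0.0, -0.5, 2.0, -3.0, 0.0]
```

In practice the same criterion is applied per tensor with masks updated during training rather than once at the end.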
\subsection{Human Evaluation: Dense vs Sparse}\label{sec:human}
We complement automatic BLEU evaluation with a human evaluation study to compare the translation quality of dense and sparse models. We elicit absolute ratings on a 6-point scale for 500 pairs of differing translations of the JW300 \texttt{Global} and \texttt{Random} test set on a crowd-sourcing platform.
\begin{figure*}[th!]
\centering
\vskip 0.15in
\begin{small}
\begin{sc}
\begin{subfigure}{0.23\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_gnomebleu}
\caption{Gnome}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_ubuntubleu}
\caption{Ubuntu}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_paracrawlbleu}
\caption{ParaCrawl}
\end{subfigure}
\begin{subfigure}{0.23\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_floresbleu}
\caption{Flores}
\end{subfigure} \\
\begin{subfigure}{0.30\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/sampled_data_tanzilbleu}
\caption{Tanzil}
\end{subfigure}
\begin{subfigure}{0.30\linewidth}
\centering
\includegraphics[width=0.99\linewidth]{images/new_plots/tatoeba_limited_BLEU}
\caption{Tatoeba: \texttt{Limited} vs \texttt{Full}}
\label{fig:de_full_sample}
\end{subfigure}
\begin{subfigure}{0.30\linewidth}
\centering \includegraphics[width=0.99\linewidth]{images/new_plots/sensitivitygerman_withflores_samp}
\caption{All (German)}
\end{subfigure}
\end{sc}
\end{small}
\caption{
Robustness to distribution shift at different levels of sparsity for models trained in a data-limited regime (\texttt{Limited}). For Tatoeba (German only), the performance of a model trained on \texttt{Full} is added for comparison.
}
\label{fig:sensitivity}
\vspace{-0.5cm}
\end{figure*}
\section{Results}
\paragraph{Sparsity BLEU trade-off} In Figure~\ref{fig:degradation}, we can see that models are tolerant of moderate to high levels of sparsity (50\%--80\%) while retaining BLEU performance relative to dense. Between 50\% and 80\% sparsity, any degradation is minimal, as sparse performance remains at 95\% or more of dense performance for all languages. Hausa is a slight exception: pruning 70\% or 80\% of the model parameters performs on par with the baseline or even better. However, for both \texttt{Global} and \texttt{Random} test sets, there is a noticeably sharp degradation in BLEU when progressing to extremely high sparsity levels of 90\% and beyond.
\paragraph{Long-tail test set} Translation quality on the \texttt{Global} and \texttt{Random} test sets differs considerably. We control for data size by comparing on the same \texttt{Limited} datasets. Languages perform within a narrow band of comparable BLEU for \texttt{Global}, with the degradation in BLEU at higher levels of sparsity occurring at a similar rate across languages. In contrast, absolute BLEU scores on \texttt{Random} are noticeably lower at both dense and all sparsity levels, coupled with a far wider spread of BLEU between languages. This suggests that \emph{a low data regime disproportionately impacts translation quality on the long-tail} and that the \texttt{Random} set is a more discriminative evaluation protocol.
When we compare relative differences of sparse models to dense, we can see that relative to \texttt{Global}, there is sharper degradation in \texttt{Random} under high sparsity (90\%+). However, with mid-level sparsity, the quality of the dense model is maintained or even slightly outperformed (German) on all test sets.
\paragraph{Learning prototypical instances is less sensitive to data size} In Figure~\ref{fig:degradation}, it is notable that performance on the \texttt{Global} test set does not vary noticeably between the \texttt{Limited} (\ref{subfig:global_test_limited}) and \texttt{Full} (\ref{subfig:global_test_full}) training settings. This is surprising given the large difference in training corpus size for many of the languages (for German, 1.9M in \texttt{Full} vs 200,000 in \texttt{Limited}).
Additionally, even when restricting attention to the \texttt{Full} training setting, the ranking of absolute BLEU scores on the \texttt{Global} test set does not appear to be sensitive to the size of the training corpus, as Igbo (414.9K) and Yoruba (414.0K) achieve nominally higher BLEU on \texttt{Global} than German (1.9M) despite having only a fifth of training data in the \texttt{Full} setup. This suggests that learning a representation for the most frequent patterns in the dataset does not require a substantial amount of data.
\paragraph{Data size for long-tail patterns} In contrast, learning a good representation for the long-tail appears to be far more sensitive to the size of the training corpus. In Figure~\ref{fig:de_full_sample}, we compare the OOD performance on Tatoeba of the \texttt{Full} vs \texttt{Limited} model trained on German, evaluating on a dataset with a much higher OOV ratio of $2.68\%$. On such a dataset, with more rare instances and distribution shift, the amount of training data makes a larger difference: across all levels of sparsity the model trained on \texttt{Full} generalizes better than the \texttt{Limited} model.
\paragraph{Do humans notice the difference between dense and sparse models?}
Human annotators rate test translations of the dense model and 90\%-sparsity model as described in Section~\ref{sec:human}.
Table~\ref{tab:ratings} reports the average ratings (1-6 scale, the higher the better) for both types of models across languages for both in-domain test sets.
The ratings reveal that there is no clear preference of dense over sparse model outputs across languages and sets. For German the sparse model scores 0.1 lower on average for both test sets. For Igbo and Hausa sparse scores slightly higher on the \texttt{Global} set, but this gain is lost on \texttt{Random}. Hausa receives the nominally highest ratings on both test sets, but we note that
raters might not be well calibrated across languages, and test sets are not completely identical. Nevertheless, the \texttt{Random} translations score consistently lower than the \texttt{Global} translations, indicating that the quality loss on long-tail examples was indeed noticeable.
All in all, \emph{the roughly 2-8\% drop in BLEU that occurred through pruning at 90\% is not negatively affecting human-judged translation quality in any of the studied languages}, which is a promising finding for the deployment of such models in practice. However, this evaluation is oblivious of effects like translation biases~\citep{edunov-etal-2020-evaluation} that could be caused by less memorization.
\paragraph{How sensitive are sparse models to distribution shift?} \label{sec:distribution_shift_explanation}
Figure~\ref{fig:sensitivity} shows the absolute change in BLEU when evaluating the \texttt{Limited} models on the out-of-distribution datasets.
We find that across dense and sparse models, degradation in performance is sensitive to OOV rates and differences in sentence length, with the most pronounced degradation on Tanzil (longer average sentence length) and on Ubuntu and Gnome (technical domains with far higher OOV rates of 12--20\%). The transfer to ParaCrawl was the most successful across languages. For Flores, performance is not uniform across languages: transfer is similar to that of ParaCrawl for all languages but Yoruba.
One trend that is consistent across languages and domains is an increase in quality around 70\%--95\% sparsity (even more visible when plotting relative change in Figure~\ref{fig:relative_degradation_distribution2} in the Appendix).
As a result, 90\% sparse models outperform their baselines across all languages under the \texttt{Limited} condition. This means that \emph{increased sparsity during training with limited data has a beneficial influence for out-of-distribution generalization}. With larger amounts of training data---in the case of German (\texttt{Full}) a factor of 10---however, this relative advantage is lost (Figure~\ref{fig:de_full_sample}). This finding is highly relevant for the reality of low-resource languages, where training data is limited and often available only for a narrow domain, so strong generalization is essential.
\paragraph{Does sparsity curb memorization?} The results on OOD generalization are surprising, as they suggest that in a low-data regime \textit{less capacity rather than more can aid generalization}. It is worth placing these results in the context of recent work in computer vision that has found that sparsity curbs memorization of the long-tail of the training distribution \citep{hooker2020compressed,hooker2020characterising}. Impeding memorization constrains model learning to the most general high-level features, rather than atypical low-frequency features and noisy examples which typically require memorization and are less likely to generalize across other tasks \citep{brown2020memorization,NEURIPS2020_1e14bfe2}. In the setting of low-resource languages, the training corpus is often highly restricted to a narrow domain such as religion. Here, it may be more likely that the long-tail contains specialized artefacts and noise that do not generalize.
To explore this further, in Table~\ref{tab:typicality} we look at training and test performance grouped by sentence typicality for German (typicality measured as per \citep{raunak-etal-2020-long}). At training time, the dense model evidences clear overfitting and outperforms the sparse on low, mid and high typicality. The difference in relative performance on sentences of low typicality is striking (62.04 dense vs 24.93 sparse BLEU), confirming that capacity aids memorization of the long-tail. However, at test time the dense memorization of a specialized training set exhibits a negative knowledge transfer cost relative to sparse. On the \texttt{Random} test set, the sparse model in fact slightly outperforms the dense one on low typicality. Both Table~\ref{tab:typicality} and the OOD result in Figure~\ref{fig:sensitivity} show that sparsity has an important role to play in data-limited regimes at curbing memorization of rare artefacts that do not generalize.
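Sentence typicality $F_S$ used in Table~\ref{tab:typicality} is the average log-frequency of a sentence's tokens in the training corpus; a sketch, where assigning unseen tokens a count of 1 is a smoothing assumption:

```python
from collections import Counter
from math import log

def typicality(sentence, token_counts):
    """F_S: mean log-frequency of a sentence's tokens in the training
    corpus. Unseen tokens are given count 1 (a smoothing assumption)."""
    toks = sentence.split()
    return sum(log(token_counts.get(t, 1)) for t in toks) / max(len(toks), 1)

# Toy corpus: common words dominate; "ocelot" appears exactly once.
counts = Counter(("the cat sat on the mat " * 50 + "ocelot").split())
```

Sentences made of frequent tokens score high (the "High" row of the table); rare-token sentences score low and populate the long tail.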
\begin{table}[t]
\centering
\resizebox{0.9\columnwidth}{!}{
\begin{tabular}{l|cc|cc}
\toprule
& \multicolumn{2}{c|}{\textbf{\texttt{Global}}} & \multicolumn{2}{c}{\textbf{\texttt{Random}}} \\
& Dense & 90\% Sparse & Dense & 90\% Sparse \\
\midrule
\texttt{de} & 4.17 & 4.06 & 3.80 & 3.66 \\%& 28 \\
\texttt{yo} & 3.66 & 3.66 & 3.51 & 3.51 \\
\texttt{ig} & 3.87 & 3.96 & 3.81 & 3.85 \\
\texttt{ha} & 4.77 & 4.85 & 4.53 & 4.53\\
\bottomrule
\end{tabular}%
}
\caption{Average human ratings of 500 test set translations comparing \textbf{dense} and \textbf{90\%-sparse} models (\texttt{Limited}) on \texttt{Global} and \texttt{Random} JW300 test sets.}
\label{tab:ratings}
\vspace{-0.2cm}
\end{table}
\section{Related Work}
\paragraph{Compression techniques for NMT} There have been recent works on compressing recurrent neural networks (RNNs) and transformer networks for NMT ~\citep{2019arXiv190209574G,narang2017exploring, see-etal-2016-compression,zhang-etal-2017-towards,rethinking_model_size_li}. With the exception of a proof-of-concept experiment with RNNs on a small-scale English-Vietnamese translation task~\citep{see-etal-2016-compression}, all of the works above focus on compressing models trained on large data sets, and exclude African languages.
To the best of our knowledge, our work is the first to apply pruning methods to train transformer NMT models on low-resourced data, and on African languages with syntactic and morphological features distinct from English. Moreover, all of the above works rely solely on automatic evaluation metrics, and do not assess translation quality with human annotation or sensitivity to different distribution shifts.
\paragraph{Optimized training for low-resource NMT}
\citet{sennrich-zhang-2019-revisiting} find that hyperparameter tuning is essential for NMT on low-resource data, such as the depth or regularization of the network. \citet{duh-etal-2020-benchmarking} highlight the importance of hyper-parameter tuning for NMT for Somali and Swahili. \citet{fadaee-etal-2017-data,sennrich-zhang-2019-revisiting} and \citet{xu-etal-2020-dynamic} explore tailored augmentation and curriculum learning strategies for data-limited regimes. \citet{sennrich-zhang-2019-revisiting} additionally assume limitations to compute at training time when modeling Somali and Gujarati. In contrast, in our work, we consider the impact of resource constraints present at inference time/deployment of a model.
\paragraph{Transformer size for low-resource NMT} More relevant to our work are works that have evaluated transformers of different sizes in the light of low-resource translation. \citet{Biljon2020OnOT} investigate the effect of transformer depth on low-resource translation for three South-African languages. \citet{murray-etal-2019-auto} study auto-size feed-forward and attention layers in transformers for low-resource translations of Hausa, Tigrinya, and Arabic, and find BLEU and efficiency improvements with smaller models. \citet{tapo-etal-2020-neural} succeed in training a smaller transformer model for Bambara with as few as 10k examples, but find only limited generalization under distribution shift ~\citep{Tapo2021}.
In contrast to these works, we study generalization at different levels of sparsity. Pruning is a more precise experimental framework to understand the relationship between capacity and generalization because we can exactly vary the sparsity in a range between 0\% and 100\% controlling for the same architecture. Pruning also achieves far higher levels of compression in terms of the number of parameters relative to substitutes evaluated in these works such as the tiny transformer. In our work, we also seek to not only measure the impact of capacity but also to better understand why counter-intuitively higher levels of sparsity aid generalization. Finally, our experiments are extensive relative to \citep{Biljon2020OnOT,tapo-etal-2020-neural}, both in terms of number of languages and variety of training and evaluation conditions. Furthermore, we are the first to report human evaluation on the effects of pruning for MT.
\section{Future Work}
Our work introduces the term \textit{low-resource double-bind} and conducts extensive experiments to study the impact of pruning. In this setting, we are concerned with resource constraints present at deployment. An important area for further work is to explore a setting where resource constraints are present at both training and deployment time. For example, a consideration of the impact of pruning on pre-trained models, such as large multilingual MT models that are known to boost low-resource NMT quality~\citep{aharoni-etal-2019-massively,massiveWild}. Additionally, the minimal differences observed in our human evaluation of preferences open up a range of questions for deeper qualitative analysis of the resulting translations: Under which conditions do humans notice differences, and how do translations differ in style? There may be interesting connections to recent findings about output hallucinations occurring on memorized examples~\citep{curiousHallucinations}, or with respect to translation bias~\citep{koppel-ordan-2011-translationese}.
\section{Conclusion}\label{sec:conclusion}
We demonstrate the effectiveness of introducing sparsity when training NMT models for low-resourced languages. We show that small performance drops in extremely sparse regimes according to automatic metrics are not reflected in human-judged translation quality. Our extensive study of the impact of pruning on out-of-distribution generalization reveals that sparse models improve over dense models in a limited data regime.
Overall, these insights are promising for overcoming the low-resource double bind: Pruned models reduce resource requirements for deployment, and increase the robustness towards out-of-domain samples due to reduced memorization during training.
\section{Acknowledgements}\label{sec:acknowledgement}
We thank Colin Cherry, Rubungo Andre Niyongabo, Kelechi Ogueji and Trevor Gale for their invaluable feedback and comments on the paper draft. We also thank the anonymous reviewers for their time and comments on our paper. We thank CURe for providing compute credits, and the institutional support of Jonathan Caton and Natacha Mainville.
\newpage
(Abelian and non-Abelian)
Chern-Simons (CS) theory and related theories
have a relatively long history in physics. At an early stage they appeared in the
high-temperature limit of four-dimensional field models, where
Maxwell-Chern-Simons theory can be regarded as an effective theory
of QCD and the electroweak model
\cite{gpy}. Later a more striking aspect
was found: in three-dimensional space-time
the CS term can provide a topological
mass for the gauge field in a gauge-invariant way, as an alternative
to the Higgs mechanism\cite{sc,djt}. In recent years the revival of the study
of CS theory is due, on one hand, to Witten's work\cite{wit},
in which a connection between
CS theory and two-dimensional conformally invariant field theory
was found;
on the other hand, owing to the non-invariance
of the CS term under $P$ and $T$ transformations, and especially
its topological character, it can be used to describe the dynamics of
anyons, and it has therefore been favoured by physicists for some
problems in condensed
matter theory such as the fractional quantum Hall effect and
high-temperature superconductivity\cite{col}. It has also been proved that
the CS term coupled to scalar matter is useful in
the field-theoretic formulation of the Aharonov-Bohm effect\cite{fa},
and the three-dimensional analogue of the Coleman-Weinberg mechanism has been
explored up to two loops\cite{tth}.
In this letter we shall present a detailed investigation of the one-loop
quantum corrections of CS spinor electrodynamics.
We start from the action with Maxwell term
\begin{eqnarray}
S&=&\frac{\mu}{2} \int d^3x \,\epsilon^{\mu\nu\lambda} A_\mu\partial_\nu
A_\lambda - \frac{1}{4\gamma}\int d^3x\, F_{\mu\nu}F^{\mu\nu}\nonumber \\[2mm]
&+&\int d^3x\,\left[\bar{\psi}(i\hat{\partial}+e\hat{A}-m)\psi\right]
-\frac{1}{2\alpha}\int d^3x(\partial_\mu A^\mu)^2,
\label{eqac}
\end{eqnarray}
where (and in what follows) $\hat{A}{\equiv}{\gamma}^{\alpha}A_{\alpha}$,
$\mu$ is the statistical parameter and we choose the Lorentz
gauge condition ${\partial}_{\mu}A^{\mu}=0$. The notation is
the same as that in Ref.{\cite{djt}},
\begin{eqnarray}
{\gamma}_{\mu}=i{\sigma}_{\mu}, ~~{\gamma}_{\mu}{\gamma}_{\nu}=
g_{\mu\nu}-i{\epsilon}_{\mu\nu\rho}{\gamma}^{\rho},~~
g_{\mu\nu}=\mbox{diag}(1,-1,-1).
\label{eqno}
\end{eqnarray}
It should be stressed
that the introduction of the Maxwell term plays a two-fold role: on one hand,
it provides a mathematically well-defined path-integral
quantization of the CS theory in the Euclidean region, since the pure CS term
contains a non-positive-definite first-order differential operator;
on the other hand, as a higher-derivative term, it provides
a gauge-invariant regularization. However, this regularization is not
enough to make the one-loop amplitudes finite, so another regularization must be
implemented. Here we shall adopt dimensional regularization.
The model (\ref{eqac})
has been studied by many authors$\cite{kao}$, especially
in the case where the CS term is absent (i.e. pure QED${}_3$). However,
these studies mainly consider dynamical mass generation and the breaking of
chiral symmetry and parity by quantum corrections. A complete
investigation of its quantum corrections is still lacking, including an
explicit verification of the Ward identity and the question of whether there exists a
three-dimensional analogue of Schwinger's anomalous magnetic moment term,
both of which depend on an explicit analytical calculation of the
vertex correction. To our knowledge, up to now
no analytic result on this part has appeared. We are further motivated by the result of
pure non-Abelian CS theory, where a different order
in taking ${\gamma}{\to}{\infty}$
can result in a different finite renormalization of
the statistical parameter\cite{aso}. It is desirable
to see whether this case happens
in CS spinor electrodynamics too. As we know, in quantum
electrodynamics, the Ward identity means
\begin{eqnarray}
Z_1=Z_2,
\end{eqnarray}
where $Z_2$ and $Z_1$ are the electron wave
function renormalization constant and vertex renormalization
constant, respectively. So if the Ward identity is satisfied, the renormalization
of the coupling constant involves only the gauge-field
wave-function renormalization
constant $Z_3$:
\begin{eqnarray}
e_R=\sqrt{Z_3}Z_1Z_2^{-1}e=\sqrt{Z_3}e.
\end{eqnarray}
Since $Z_3$ is independent of the introduction of the Maxwell term,
the Ward identity implies that physical quantities do not depend on
the order of taking ${\gamma}{\to}{\infty}$. In particular it is very
interesting to see whether there exists an anomalous magnetic moment
term, since it can produce a new interaction between anyons
that will lead to unusual planar dynamics$\cite{ck,ks}$.
This may be helpful for understanding the mechanisms of the fractional quantum Hall effect and high-temperature
superconductivity.
The Feynman rules are listed as follows
\begin{itemize}
\item gauge field propagator
\begin{eqnarray}
\tilde{D}^{(0)}_{\mu\nu}(p)=-i\frac{\gamma}{p^2-{\mu}^2{\gamma}^2}\left[
i{\mu}{\gamma}{\epsilon}_{\mu\nu\rho}\frac{p^{\rho}}{p^2}+
g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^2}\right],
\label{eq5}
\end{eqnarray}
where we choose the Landau gauge (${\alpha}=0$) to avoid the infrared
singularity\cite{djt,rao}. In the limit ${\gamma}{\to}{\infty}$, we have
\begin{eqnarray}
D^{(0)}_{\mu\nu}(p)
=-\frac{1}{\mu}{\epsilon}_{\mu\nu\rho}\frac{p^{\rho}}{p^2}.
\label{eq6}
\end{eqnarray}
\item electron propagator
\begin{eqnarray}
S^{(0)}(p)=i\frac{\hat{p}+m}{p^2-m^2}.
\end{eqnarray}
\item the vertex
\begin{eqnarray}
-ie{\gamma}_{\mu}(2\pi)^3 {\delta}^{(3)}(p+q+r).
\end{eqnarray}
\end{itemize}
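As an independent cross-check (not part of the original text), the reduction of the gauge field propagator (\ref{eq5}) to its pure-CS form (\ref{eq6}) in the limit ${\gamma}{\to}{\infty}$ can be confirmed symbolically by tracking the scalar coefficients of the two tensor structures; a minimal sketch, in which \texttt{p2} denotes $p^2$:

```python
import sympy as sp

p2, mu, g = sp.symbols('p2 mu gamma', positive=True)

# coefficient multiplying eps_{mu nu rho} p^rho / p^2 in Eq. (5)
eps_coeff = -sp.I*g/(p2 - mu**2*g**2) * sp.I*mu*g
# coefficient multiplying (g_{mu nu} - p_mu p_nu / p^2) in Eq. (5)
gT_coeff = -sp.I*g/(p2 - mu**2*g**2)

# gamma -> infinity: the Maxwell part decouples, leaving Eq. (6)
assert sp.simplify(sp.limit(eps_coeff, g, sp.oo) + 1/mu) == 0  # -> -1/mu
assert sp.limit(gT_coeff, g, sp.oo) == 0
```

The surviving coefficient $-1/\mu$ multiplying ${\epsilon}_{\mu\nu\rho}p^{\rho}/p^2$ is exactly the propagator (\ref{eq6}).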
In Sect. II, starting from the classical action (\ref{eqac}), we
calculate the vacuum polarization tensor and electron
self-energy correction and define the finite renormalization
constants relevant to them. In Sect. III, we compare
our results obtained in dimensional regularization
with those obtained in Pauli-Villars regularization
and expressed in the spectral representation, and find
that the results are identical; this demonstrates the
independence of the gauge-invariant regularization schemes. Sect. IV is devoted to a detailed
calculation of the mass-shell vertex correction.
It is explicitly shown that the Ward identity is
satisfied on mass-shell. In particular, we find the three-dimensional
analogue of the anomalous magnetic moment term. In Sect. V we turn
to pure CS spinor electrodynamics (i.e. taking
${\gamma}{\rightarrow}{\infty}$ at tree level) and we verify that
the Ward identity is still satisfied, which shows that the physical
quantities are independent of the order of taking the large-$\gamma$ limit.
Sect. VI contains the conclusions and some
discussions on higher order results.
\section{One-loop Vacuum Polarization and Self-energy}
\subsection{Polarization tensor}
The polarization tensor receives a contribution from the electron loop, and
its amplitude is
\begin{eqnarray}
i{\Pi}_{\mu\nu}(p)
=-2e^2{\int}\frac{d^nq}{(2\pi)^n}
\frac{-im{\epsilon}_{\mu\nu\rho}p^{\rho}+2q_{\mu}q_{\nu}+q_{\mu}p_{\nu}+
q_{\nu}p_{\mu}+[m^2-q{\cdot}(q+p)]g_{\mu\nu}}{(q^2-m^2)[(q+p)^2-m^2]}.
\end{eqnarray}
The standard calculation gives
\begin{eqnarray}
{\Pi}_{\mu\nu}(p)&=&i{\epsilon}_{\mu\nu\rho}p^{\rho}{\Pi}_{\rm o}(p^2)+
(p^2g_{\mu\nu}-{p_{\mu}p_{\nu}}){\Pi}_{\rm e}(p^2)\nonumber\\[2mm]
&=&\frac{e^2}{4{\pi}}\left\{i{\epsilon}_{\mu\nu\rho}p^{\rho}
\frac{m}{p}\ln\frac{1+p/(2m)}{1-p/(2m)}\right.\nonumber\\[2mm]
&-&\left.
(p^2g_{\mu\nu}-p_{\mu}p_{\nu})\left[
-\frac{m}{p^2}+\left(\frac{1}{4p}
+\frac{m^2}{p^3}\right)\ln\frac{1+p/(2m)}{1-p/(2m)}\right]
\right\}.
\label{eq10}
\end{eqnarray}
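Two small limits of Eq.(\ref{eq10}) are worth confirming, since they feed directly into the finite renormalization below: the parity-odd part gives ${\Pi}_{\rm o}(0)=e^2/(4\pi)$, which shifts the statistical parameter, while the parity-even part survives at $p^2=0$. A symbolic sketch (an independent check, treating $p$ as the scalar $\sqrt{p^2}$):

```python
import sympy as sp

p, m, e = sp.symbols('p m e', positive=True)
L = sp.log((1 + p/(2*m))/(1 - p/(2*m)))

# parity-odd and parity-even form factors read off from Eq. (10)
Pi_o = e**2/(4*sp.pi) * (m/p) * L
Pi_e = -e**2/(4*sp.pi) * (-m/p**2 + (1/(4*p) + m**2/p**3)*L)

# Pi_o(0) = e^2/(4 pi): the shift behind mu_ph = mu - e^2/(4 pi)
assert sp.simplify(sp.limit(Pi_o, p, 0) - e**2/(4*sp.pi)) == 0
# Pi_e(0) = -e^2/(12 pi m): the parity-even part is nonzero at p^2 = 0
assert sp.simplify(sp.limit(Pi_e, p, 0) + e**2/(12*sp.pi*m)) == 0
```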
\subsection{Electron Self-energy}
The Feynman integral for the electron self-energy reads as follows
\begin{eqnarray}
-i\tilde{\Sigma}(p,m,{\gamma})
=-e^2{\gamma}{\int}\frac{d^nq}{(2\pi)^n}
\frac{-\hat{q}[m{\mu}{\gamma}+q{\cdot}(q+p)]+q^2m+{\mu}{\gamma}q{\cdot}(q+p)}
{[(q+p)^2-m^2]q^2(q^2-{\mu}^2{\gamma}^2)}.
\label{eqse}
\end{eqnarray}
Using the identities
\begin{eqnarray}
\frac{1}{q^2(q^2-{\mu}^2{\gamma}^2)}&=&\frac{1}{{\mu}^2{\gamma}^2}
(\frac{1}{q^2-{\mu}^2{\gamma}^2}-\frac{1}{q^2}),\nonumber\\[2mm]
2q{\cdot}p&=&[(q+p)^2-m^2]-q^2-(p^2-m^2)\nonumber\\[2mm]
&=&[(q+p)^2-m^2]-(q^2-{\mu}^2{\gamma}^2)-
(p^2-m^2+{\mu^2}{\gamma}^2),
\end{eqnarray}
Eq.(\ref{eqse}) can be written as
\begin{eqnarray}
-i\tilde{\Sigma}(p,m,{\gamma})&=&-2e^2{\gamma}{\int}
\frac{d^nq}{(2\pi)^n}\left\{\left(m+\frac{{\mu}{\gamma}}{2}-
\frac{p^2-m^2}{2{\mu}{\gamma}}\right)
\frac{1}{(q^2-{\mu}^2{\gamma}^2)[(q+p)^2-m^2]}\right.\nonumber\\[2mm]
&+&\frac{1}{2{\mu}{\gamma}}\frac{1}{q^2-{\mu}^2{\gamma}^2}+
\frac{p^2-m^2}{2{\mu}{\gamma}}\frac{1}{q^2[(q+p)^2-m^2]}\nonumber\\[2mm]
&-&\left(\frac{m}{\mu\gamma}+\frac{1}{2}-\frac{p^2-m^2}{2{\mu}^2{\gamma}^2}
\right)
\frac{\hat{q}}{(q^2-{\mu}^2{\gamma}^2)[(q+p)^2-m^2]}
\nonumber\\[2mm]
&-&\left.\left(\frac{p^2-m^2}{2\mu^2{\gamma}^2}-\frac{m}{\mu\gamma}\right)
\frac{\hat{q}}{q^2[(q+p)^2-m^2]}\right\}.
\end{eqnarray}
After integration and taking the limit ${\gamma}{\to}{\infty}$,
we have
\begin{eqnarray}
{\Sigma}(p)
&=&\lim_{{\gamma}{\rightarrow}\infty}\tilde{\Sigma}(p, m,
\gamma)=\frac{e^2}{4\pi}\left\{2{\gamma}
+\frac{m}{\mu}+\frac{p^2-m^2}{{\mu}p}\ln\frac{1+p/m}{1-p/m}\right.
\nonumber\\[2mm]
&-&\left.
\frac{\hat{p}}{\mu}\left[\frac{m^2}{p^2}+\frac{m}{p}(1-\frac{m^2}{p^2})
\ln\frac{1+p/m}{1-p/m}-\frac{2}{3}\right]\right\}.
\label{eq14}
\end{eqnarray}
\subsection{Finite Renormalization}
Now we discuss the finite renormalization of the one-loop
two-point functions.
From
\begin{eqnarray}
D^{(1)\,-1}_{\mu\nu}(p)=D^{(0)\,-1}_{\mu\nu}(p)-i{\Pi}_{\mu\nu}(p),
\end{eqnarray}
we can get the one-loop gauge field propagator
\begin{eqnarray}
D^{(1)}_{\mu\nu}(p)&=& -i(g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^2})
\frac{{\Pi}_{\rm e}(p^2)}{{\mu}^2[1-{\Pi}_{\rm o}(p^2)]^2
-p^2{\Pi}^2_{\rm e}(p^2)}\nonumber\\[2mm]
&-&{\epsilon}_{\mu\nu\rho}\frac{p^{\rho}}{p^2}\frac{\mu [1-\Pi_{\rm o}(p^2)]}
{{\mu}^2[1-{\Pi}_{\rm o}(p^2)]^2-p^2{\Pi}_{\rm e}^2(p^2)}.
\end{eqnarray}
The renormalized propagator should be of the following form
\begin{eqnarray}
D^{(1)}_{\mu\nu}(p)=-i(g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^2}){\Pi}_1(p^2)
-\epsilon_{\mu\nu\rho}\frac{p^{\rho}}{p^2}
\left[\frac{Z_3}{\mu_{\rm ph}}+\Pi_2(p^2)\right].
\end{eqnarray}
Choosing the renormalization point $p^2=0$, we get
\begin{eqnarray}
Z_3&=&1,\nonumber\\[2mm]
{\mu}_{\rm ph}&=&{\mu}-\frac{e^2}{4\pi}.
\end{eqnarray}
In particular, one can see (up to order $e^2$)
\begin{eqnarray}
\Pi_1(0)=-\frac{e^2}{4\pi}\frac{1}{3m}{\neq}0,
\end{eqnarray}
which means the quantum correction generates the parity-even part
of the gauge field propagator.
As for the finite renormalization of electron self-energy,
it is defined by the usual mass-shell renormalization condition
\begin{eqnarray}
{\Sigma}_R({p})|_{\hat{p}=m_{\rm ph}}=0,
~~\frac{\partial}{{\partial}\hat{p}}{\Sigma}_R(p)|_{\hat{p}=m_{\rm ph}}=0.
\end{eqnarray}
Thus the self-energy can be written as the expansion around
$\hat{p}=m_{\rm ph}$,
\begin{eqnarray}
{\Sigma}(p)={\delta}m-(Z_2^{-1}-1)(\hat{p}-m_{\rm ph})+Z_2^{-1}{\Sigma}_R(p)
\end{eqnarray}
and the one-loop electron propagator is
\begin{eqnarray}
S^{(1)}(p)=i\frac{Z_2}{\hat{p}-m_{\rm ph}-{\Sigma}_R(p)}
=i\left[\frac{Z_2}{\hat{p}-m_{\rm ph}}+\tilde{\Sigma}_R(p)\right].
\label{eq3}
\end{eqnarray}
From the one-loop correction (\ref{eq14}),
the physical mass, electron wave function
renormalization constant and the radiative correction are
(up to the order $e^2$) given by
\begin{eqnarray}
m_{\rm ph}&=&m-{\delta}m=m-\frac{e^2}{2\pi}({\gamma}+\frac{m}{3\mu}),
\nonumber\\[2mm]
Z_2&=&1+\frac{e^2}{4\pi}\frac{5}{3\mu},\nonumber\\[2mm]
{\Sigma}_R(p)&=& \frac{e^2}{4\pi}\left\{
\frac{2m_{\rm ph}}{\mu}+\frac{p^2-m_{\rm ph}^2}{{\mu}p}
\ln\frac{1+p/m_{\rm ph}}{1-p/m_{\rm ph}}\right.\nonumber\\[2mm]
&-&\left.\frac{\hat{p}}{\mu}\left[1+\frac{m^2_{\rm ph}}{p^2}
+\frac{m_{\rm ph}}{p}(1-\frac{m^2_{\rm ph}}{p^2})
\ln\frac{1+p/m_{\rm ph}}{1-p/m_{\rm ph}}\right]\right\},\nonumber\\[2mm]
\tilde{\Sigma}_R(p)&=&\frac{e^2}{4\pi\mu}\frac{\hat{p}}{p^2}\left(1-
\frac{\hat{p}+m_{\rm ph}}{2p}\ln\frac{1+p/m_{\rm ph}}{1-p/m_{\rm ph}}\right).
\label{eq25}
\end{eqnarray}
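The constants in Eq.(\ref{eq25}) can be cross-checked against Eq.(\ref{eq14}) by expanding around $\hat{p}=m$. A symbolic sketch (our check, not the original computation), using the simplifying scalar replacement $\hat{p}{\to}x$, $p{\to}x$ that is adequate on the mass shell:

```python
import sympy as sp

x, m, mu, gam, e = sp.symbols('x m mu gamma e', positive=True)
L = sp.log((1 + x/m)/(1 - x/m))

# Eq. (14) with hat{p} -> x and p -> x
Sigma = e**2/(4*sp.pi)*(2*gam + m/mu + (x**2 - m**2)/(mu*x)*L
        - (x/mu)*(m**2/x**2 + (m/x)*(1 - m**2/x**2)*L - sp.Rational(2, 3)))

dm = sp.limit(Sigma, x, m, '-')                  # delta m = Sigma(m)
Z2m1 = sp.limit(sp.diff(Sigma, x), x, m, '-')    # Z_2 - 1 = Sigma'(m)

# delta m = (e^2/2pi)(gamma + m/(3 mu)) and Z_2 = 1 + (e^2/4pi)(5/(3 mu))
assert sp.simplify(dm - e**2/(2*sp.pi)*(gam + m/(3*mu))) == 0
assert sp.simplify(Z2m1 - e**2/(4*sp.pi)*sp.Rational(5, 3)/mu) == 0
```

The potentially singular logarithms are multiplied by powers of $(x-m)$ and drop out of both limits, so $\delta m$ and $Z_2$ are finite, as stated.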
\section{Comparison with the Results in Spectral Representation}
In Ref.$\cite{djt}$, the one-loop two-point functions of CS
spinor electrodynamics have been presented in terms of the
spectral representation. Regarding the Maxwell term as a higher-covariant-derivative
term, we can view the results in Ref.$\cite{djt}$ as obtained
with Pauli-Villars
regularization. If the large topological mass limit is taken, their results
should be consistent with ours, since both regularization schemes are
gauge invariant. The aim of this section is to show this explicitly.
\subsection{Polarization Tensor}
We start from Eqs.(2.61)--(2.64b) of Ref.$\cite{djt}$.
After the renormalization, the gauge field propagator is
represented in the following spectral form (under the substitutions
${\gamma}{\mu}{\equiv}\tilde{\mu}$, $e^2{\gamma}{\equiv}\tilde{e}^2$):
\begin{eqnarray}\label{2.1}
\tilde{D}^{(1)}_{\mu\nu}(p)&=&-i\left(g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^2}\right)
\gamma\left[\frac{\tilde{Z}_3}{p^2-\tilde{\mu}^2_{\rm ph}+i\epsilon}+
\tilde{\Pi}^{(1)}(p^2)\right] \nonumber \\[2mm]
&+& \gamma\tilde{\mu}_{\rm ph}\epsilon_{\mu\nu\alpha}
\frac{p^\alpha}{p^2}
\left[\frac{\tilde{Z}_3}{p^2-\tilde{\mu}^2_{\rm ph}+i\epsilon}+
\tilde{\Pi}^{(2)}(p^2)\right].
\end{eqnarray}
The physical mass $\tilde{\mu}_{\rm ph}$ is given by
\begin{equation}\label{2.2}
\tilde{\mu}_{\rm ph} = \tilde{\mu} - \frac{\tilde{e}^2\tilde{\mu}}{8\pi}
\int_{2m}^\infty \frac{1+(4m/a^2)(m-\tilde{\mu})}{a^2-\tilde{\mu}^2} da +
O(\tilde{e}^4).
\end{equation}
The charge renormalization constant $\tilde{Z}_3$ is equal to
\begin{equation}\label{2.3}
\tilde{Z}_3 = 1 - \frac{\tilde{e}^2}{8\pi}
\int_{2m}^\infty da \,
\frac{(1/a^2)(a^2-2m\tilde{\mu})^2+(2m-\tilde{\mu})^2}{(a^2-\tilde{\mu}^2)^2}
+ O(\tilde{e}^4).
\end{equation}
The continuum contributions are
\addtocounter{equation}{1}
$$ \label{29a} \tilde{\Pi}^{(1)}(p^2) =
\frac{\tilde{e}^2}{8\pi}
\int_{2m}^\infty da\,
\frac{(1/a^2)(a^2-2m\tilde{\mu})^2+(2m-\tilde{\mu})^2}{(p^2-a^2+i\epsilon)
(a^2-\tilde{\mu}^2)^2}
+ O(\tilde{e}^4),\eqno{(27a)} $$
$$ \label{29b} \tilde{\Pi}^{(2)}(p^2) =
\frac{\tilde{e}^2}{4\pi} \left(1-\frac{2m}{\tilde{\mu}}\right)
\int_{2m}^\infty da\,
\frac{a^2-2m\tilde{\mu}}{(p^2-a^2+i\epsilon)(a^2-\tilde{\mu}^2)^2}
+ O(\tilde{e}^4).\eqno{(27b)}$$
The calculation gives:
\begin{eqnarray}
&& Z_3{\equiv}\lim_{\gamma\to\infty}\tilde{Z}_3 = 1, \nonumber \\
&& \label{2.5} \tilde{\mu}_{\rm ph}\equiv\gamma\mu_{\rm ph} =
\gamma\left(\mu-\frac{e^2}{4\pi}\right), \\
&& \label{2.6} {\Pi}^{(1)}(p^2){\equiv}
\lim_{\gamma\to\infty} \gamma\tilde{\Pi}^{(1)}(p^2) =
\frac{e^2}{8\pi\mu^2_{\rm ph}}
\int_{2m}^\infty\frac{da(1+4m^2/a^2)}{a^2-p^2-i\epsilon},
\nonumber \\
&&\lim_{\gamma\to\infty}\gamma\tilde{\Pi}^{(2)}(p^2) = 0.
\end{eqnarray}
Thus
\begin{equation}\label{2.7}
D^{(1)}_{\mu\nu}(p){\equiv}\lim_{\gamma\to\infty}\tilde{D}^{(1)}_{\mu\nu}(p)
=-\frac{1}{\mu_{\rm ph}}
\epsilon_{\mu\nu\alpha}
\frac{p^\alpha}{p^2}-i(g_{\mu\nu}-\frac{p_{\mu}p_{\nu}}{p^2}){\Pi}^{(1)}(p^2).
\end{equation}
The first term coincides with the tree approximation for ${D}^{(0)}_{\mu\nu}$
modulo a finite renormalization of the statistical parameter $\mu_{\rm ph}
=\mu-{e^2}/{4\pi}$. The crucial feature of Eq.(\ref{2.7})
is the appearance of the parity-even
term $\sim g_{\mu\nu}-p_{\mu}p_{\nu}/p^2$ in the one-loop
approximation. This term has no pole in the complex plane of $p^2$.
\subsection{Electron Self-Energy}
After the substitution $\gamma e^2\equiv \tilde{e}^2$,
the spectral form of the fermion
propagator reads (see Eqs.(2.70)--(2.71) of Ref.$\cite{djt}$):
\begin{equation}\label{2.8}
\tilde{S}^{(1)}(p) = i\left[\frac{\tilde{Z}_2}{\hat{p}-\tilde{m}_{\rm ph}}
+\tilde{\Sigma}(p) \right].
\end{equation}
The physical mass, $\tilde{m}_{\rm ph}$, is
\begin{eqnarray} \label{2.9}
&& \tilde{m}_{\rm ph} = m + \frac{\tilde{e}^2}{16\pi}
\int_{-\infty}^\infty da\biggl[
\frac{(\tilde{\mu}+2m)(\tilde{\mu}+2a)}{a^2(a-m)}
\theta(a^2-M^2) \nonumber \\
&&\quad{}+\frac{(a+m+2\tilde{\mu})(a^2-m^2)}{\tilde{\mu}^2a^2}
\theta(M^2-a^2)\theta(a^2-m^2)\biggr] + O(\tilde{e}^4).
\end{eqnarray}
The fermionic renormalization constant, $\tilde{Z}_2$, is given by
\begin{eqnarray} \label{2.10}
&& \tilde{Z}_2 = 1 - \frac{\tilde{e}^2}{16\pi}
\int_{-\infty}^\infty da\biggl[
\frac{(\tilde{\mu}+2m)(\tilde{\mu}+2a)}{a^2(a-m)^2}
\theta(a^2-M^2) \nonumber \\
&&\quad{}+\frac{(a+m+2\tilde{\mu})(a+m)}{\tilde{\mu}^2a^2}
\theta(M^2-a^2)\theta(a^2-m^2)\biggr] + O(\tilde{e}^4),
\end{eqnarray}
where $M=\tilde{\mu}+m$.
The continuum contribution in Eq.(\ref{2.8}) is
\begin{eqnarray} \label{2.11}
\tilde{\Sigma}(p) &=& \frac{\tilde{e}^2}{16\pi}
\int_{-\infty}^\infty {da\over \hat{p}-a}\biggl[
\frac{(\tilde{\mu}+2m)(\tilde{\mu}+2a)}{a^2(a-m)^2}
\theta(a^2-M^2) \nonumber \\
&+&\frac{(a+m+2\tilde{\mu})(a+m)}{\tilde{\mu}^2a^2}
\theta(M^2-a^2)\theta(a^2-m^2)\biggr] + O(\tilde{e}^4).
\end{eqnarray}
Taking the limit $\gamma\to\infty$ in
Eqs.(\ref{2.8})--(\ref{2.11}), we get
\begin{eqnarray}
&& \label{2.12} m_{\rm ph}{\equiv}\lim_{\gamma\to\infty} \tilde{m}_{\rm ph}
= m - \frac{e^2}{2\pi}
\left(\gamma+\frac{m}{3\mu}\right), \nonumber\\[2mm]
&& \label{2.13} Z_2{\equiv}\lim_{\gamma\to\infty} \tilde{Z}_2 = 1 + \frac{e^2}{4\pi}
\frac{5}{3\mu}, \nonumber\\[2mm]
&& \label{2.14} \tilde{\Sigma}_R(p){\equiv}\lim_{\gamma\to\infty}\tilde{\Sigma}(p)
= \frac{e^2}{4\pi\mu}\frac{\hat{p}}{p^2}
\left(1-\frac{\hat{p}+m_{\rm ph}}{2p}\ln\frac{1+p/m_{\rm ph}}{1-p/m_{\rm ph}}
\right).
\end{eqnarray}
One notices that $\tilde{\Sigma}(p)$ in Eq.(\ref{2.11}) can be represented as
\begin{equation}\label{2.15}
\tilde{\Sigma}(p) = \tilde{\Sigma}_1(p) + \tilde{\Sigma}_2(p),
\end{equation}
where
\addtocounter{equation}{1}
$$\tilde{\Sigma}_1(p) = \frac{\tilde{e}^2}{16\pi}\int_{-\infty}^\infty
\frac{da}{\hat{p}-a}\biggl[\left(\frac{\tilde{\mu}^2}{a^2}+
\frac{4m}{a}\right)\frac{\theta(a^2-M^2)}{(m-a)^2} $$
$$\label{2.16a}\quad{}+\frac{1}{\tilde{\mu}^2a^2}(a+m)^2\theta(M^2-a^2)
\theta(a^2-m^2)\biggr], \eqno{(37a)} $$
$$\tilde{\Sigma}_2(p) = \frac{\tilde{e}^2}{8\pi}\int_{-\infty}^\infty
\frac{da}{\hat{p}-a}\biggl[\frac{(a+m)}{(a-m)^2}\frac{\tilde{\mu}}{a^2}
\theta(a^2-M^2) $$
$$\label{2.16b} \quad{}+\frac{(a+m)}{\tilde{\mu}a^2}\theta(M^2-a^2)
\theta(a^2-m^2)\biggr], \eqno{(37b)}$$
where $\tilde{\Sigma}_1(p)$ arises from the exchange of a conventional
transverse vector part of the photon, while $\tilde{\Sigma}_2(p)$ comes from
the axial part of $\tilde{D}^{(0)}_{\mu\nu}$ in Eq.(\ref{eq5}).
It is easily shown that
\addtocounter{equation}{1}
$$\label{2.17a}
\lim_{\gamma\to\infty}\tilde{\Sigma}_1(p)=0,
\eqno{(38a)}$$
and thus
$$\label{2.17b}
\lim_{\gamma\to\infty}\tilde{\Sigma}(p)=
\lim_{\gamma\to\infty}\tilde{\Sigma}_2(p) = \tilde{\Sigma}_R(p).
\eqno{(38b)}$$
Therefore, in the limit of the pure CS
spinor electrodynamics with Pauli-Villars
regularization we get the following result
\begin{equation}\label{2.22}
S^{(1)}(p){\equiv}
\lim_{\gamma\to\infty} \tilde{S}^{(1)}(p)=\frac{iZ_2}{\hat{p}-m_{\rm ph}}+
i\tilde{\Sigma}_R(p).
\end{equation}
Comparing these results with those in Sect. II, we see
that they are identical.
\section{On-shell Vertex Correction}
The one-loop on-shell vertex correction is given by
\begin{eqnarray}
-i \bar{u}(p')\tilde{\Gamma}_\mu(p', p, m ) u(p)=
\tilde{J}_\mu^a +\tilde{J}_\mu^b +\tilde{J}_\mu^c,
\end{eqnarray}
where
\begin{eqnarray}
\label{2.29a}
\tilde{J}_\mu^a & = & - e^2{\gamma}\int \frac{d^3 q}{(2\pi)^3}
\frac{[-\hat{q}\gamma_\lambda+2(p'+q)_\lambda]\gamma_\mu
[-\gamma^\lambda\hat{q}+2(p+q)^\lambda]}
{(q^2-{\mu}^2{\gamma}^2)\left[(p'+q)^2-m^2
\right]\left[(p+q)^2-m^2\right]}, \\[2mm]
\label{2.29b}
\tilde{J}_\mu^b & = & e^2{\gamma}\int \frac{d^3 q}{(2\pi)^3}
\frac{[-q^2+2(p'+q){\cdot}q]\gamma_\mu
[-q^2 + 2(p+q){\cdot}q]}{q^2(q^2-{\mu}^2{\gamma}^2)\left[(p'+q)^2-m^2
\right]\left[(p+q)^2-m^2\right]}, \\[2mm]
\label{2.29c}
\tilde{J}_\mu^c & = & - e^2{\gamma} \int \frac{d^3 q}{(2\pi)^3}
\frac{i{\mu}{\gamma}\epsilon_{\lambda\sigma\rho}q^\rho[-\hat{q}\gamma^\sigma
+2(p'+q)^\sigma]\gamma_\mu[-\gamma^\lambda\hat{q}+2(p+q)^\lambda]}{q^2
(q^2-{\mu}^2{\gamma}^2)\left[(p'+q)^2-m^2
\right]\left[(p+q)^2-m^2\right]}.
\end{eqnarray}
In deriving Eqs.(\ref{2.29a})--(\ref{2.29c}), we have used the
on-shell condition $\hat{p}=\hat{p'}=m$.
The term $\tilde{J}_\mu^b$ is very simple,
\begin{eqnarray}\label{2.30}
\tilde{J}_\mu^b
={\gamma}e^2\gamma_\mu \int \frac{d^3q}{(2\pi)^3}
\frac{1}{(q^2-{\mu}^2{\gamma}^2)q^2}
=\frac{ie^2}{4\pi{\mu}}\gamma_\mu{\equiv}J_{\mu}^b.
\end{eqnarray}
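Behind Eq.(\ref{2.30}) lies the elementary Wick-rotated integral $\int d^3q_E/(2\pi)^3\,[q_E^2(q_E^2+M^2)]^{-1}=1/(4\pi M)$ with $M=\mu\gamma$; multiplying by ${\gamma}e^2$ and the factor $i$ from the rotation then yields $ie^2/(4\pi\mu)\gamma_\mu$. A sketch of the radial integral (our check, with the angular integration already performed):

```python
import sympy as sp

q, M = sp.symbols('q M', positive=True)

# Euclidean radial form of the integral in Eq. (2.30) after Wick rotation
I = sp.integrate(4*sp.pi*q**2/((2*sp.pi)**3 * q**2 * (q**2 + M**2)),
                 (q, 0, sp.oo))
assert sp.simplify(I - 1/(4*sp.pi*M)) == 0   # gamma * 1/(4 pi mu gamma) = 1/(4 pi mu)
```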
The term $\tilde{J}_\mu^a$ can be transformed into the following form
\begin{eqnarray}\label{2.31}
\tilde{J}_\mu^a
=-\gamma{e}^2\int \frac{d^3q}{(2\pi)^3}
\frac{\left[q^2\gamma_\mu-2\hat{q}q_\mu+4(p{\cdot}p'+p{\cdot}q+p'{\cdot}q)
\gamma_\mu+4q_\mu m - 4\hat{q}{\cal P}_\mu\right]}
{(q^2-{\mu}^2{\gamma}^2)
\left[(p'+q)^2-m^2\right]
\left[(p+q)^2-m^2\right]},
\end{eqnarray}
where ${\cal P}_\mu \equiv (p'+p)_\mu$. One cannot take the limit
${\gamma}{\to}{\infty}$ directly, except for the term $4p'{\cdot}p$, which vanishes
in the large-$\gamma$ limit. However, using the following decomposition
\begin{eqnarray}
\frac{1}{[(k+p)^2-m^2]}=
\frac{1}{k^2-m^2}-\frac{2k{\cdot}p+p^2}{(k^2-m^2)[(k+p)^2-m^2]},
\label{eqde}
\end{eqnarray}
one can see that all the terms in Eq.(\ref{2.31}) that are linear in $q$ in
the numerator vanish when $\gamma\to
\infty$. The first two terms in Eq.(\ref{2.31}) can be transformed into
\begin{eqnarray}
\label{2.32}
\tilde{J}_\mu^a &=& - \gamma {e}^2 \int \frac{d^3 q}{(2\pi)^3}\left\{
\frac{q^2\gamma_\mu-2\hat{q}q_\mu}{(q^2-{\mu}^2{\gamma}^2)(q^2-m^2)^2}\left[
1+\frac{(2p'{\cdot}q+m^2)(2p{\cdot}q+m^2)}{(2p'{\cdot}q+q^2)
(2p{\cdot}q+q^2)}\right.\right. \nonumber \\[2mm]
&-&\left.\left.\frac{2p{\cdot}q+m^2}{2p{\cdot}q+q^2}-
\frac{2p'{\cdot}q+m^2}{2p'{\cdot}q+q^2}\right]\right\}.
\end{eqnarray}
Only the first term in Eq.(\ref{2.32}) does not vanish after taking
the limit $\gamma\to\infty$. Thus,
\begin{eqnarray}\label{2.33}
J_\mu^a &\equiv& \lim_{\gamma\to\infty}\tilde{J}_\mu^a =
\lim_{\gamma\to\infty} \gamma {e}^2 \int \frac{d^3 q}{(2\pi)^3}
\frac{2\hat{q}q_\mu-q^2\gamma_\mu}{(q^2-{\mu}^2{\gamma}^2)(q^2-m^2)^2}
\nonumber \\[2mm]
&=& -\lim_{\gamma\to\infty}\frac{{\gamma}{e}^2}{3}\gamma_{\mu} \int
\frac{d^3q}{(2\pi)^3}
\frac{q^2}{(q^2-{\mu}^2{\gamma}^2)(q^2-m^2)^2} \nonumber \\[2mm]
&=&-\lim_{\gamma\to\infty}\frac{{\gamma}e^2}{3}{\gamma}_{\mu}
\frac{2(\mu\gamma)^3-3(\mu\gamma)^2m+m^3}{8\pi(m^2-{\mu}^2{\gamma}^2)^2}
=-\frac{ie^2}{4\pi\mu}\frac{1}{3}\gamma_\mu.
\end{eqnarray}
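The closed form quoted in Eq.(\ref{2.33}) can be cross-checked independently: the Wick-rotated radial integral evaluates, for example at $(m,\mu\gamma)=(1,2)$, to $5\pi/36$, in agreement with $\pi(2M^3-3M^2m+m^3)/[4(M^2-m^2)^2]$, and the quoted large-$\gamma$ limit follows symbolically. A sketch:

```python
import sympy as sp

q, m, mu, g = sp.symbols('q m mu gamma', positive=True)

# Euclidean radial integral behind Eq. (2.33), at sample values M = 2, m = 1
I = sp.integrate(q**4/((q**2 + 4)*(q**2 + 1)**2), (q, 0, sp.oo))
closed = sp.pi*(2*2**3 - 3*2**2*1 + 1)/(4*(2**2 - 1)**2)
assert sp.simplify(I - closed) == 0          # both equal 5*pi/36

# large-gamma limit of the closed form appearing in Eq. (2.33)
expr = g*(2*(mu*g)**3 - 3*(mu*g)**2*m + m**3)/(8*sp.pi*(m**2 - mu**2*g**2)**2)
assert sp.simplify(sp.limit(expr, g, sp.oo) - 1/(4*sp.pi*mu)) == 0
```

Multiplying the limit $1/(4\pi\mu)$ by $e^2/3$ reproduces the finite result $e^2/(12\pi\mu)$ quoted above.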
As for the third term $\tilde{J}_{\mu}^c$, taking into account
that in Eq.(\ref{2.29c})
$$ {\epsilon}_{\lambda\sigma\rho}q^\rho\hat{q}{\gamma}^{\sigma}
\gamma_\mu{\gamma}^{\lambda}\hat{q} = -2iq_\mu q^2,$$
and after some algebraic manipulation, we have\footnote{In the numerator of
Eq.(\ref{2.35}) we skip the term $\sim \epsilon_{\lambda\sigma\rho}
q^\rho {p'}^{\sigma} p^{\lambda}$ coming from Eq.(\ref{2.29c}),
since after integration it will become of the form
$\epsilon_{\lambda\sigma\rho}p^\rho p^\lambda {p'}^\sigma$ and
${\epsilon}_{\lambda\sigma\rho}{p'}^{\rho}p^{\lambda}{p'}^{\sigma}$,
both giving zero.}
\begin{eqnarray}\label{2.35}
\tilde{J}_\mu^c&=& 2e^2{\mu}{\gamma}^2\int \frac{d^3q}{(2\pi)^3}
\frac{[-q_\mu q^2 + (q{\cdot}p') \gamma_\mu \hat{q} + (q{\cdot}p)
\hat{q}\gamma_\mu+
2m \gamma_\mu q^2-2q^2{\cal P}_\mu]}{q^2(q^2-\tilde{\mu}^2)[(p'+q)^2-m^2]
[(p+q)^2-m^2]} \nonumber \\[2mm]
&=& 2 e^2 \mu{\gamma}^2\int \frac{d^3q}{(2\pi)^3}
\frac{-2q_\mu q^2+(2m\gamma_\mu-2{\cal P}_\mu)q^2+\left[
\hat{q}\gamma_\mu(2p{\cdot}q+q^2)/2+\gamma_\mu\hat{q} (2p'{\cdot}q+q^2)/2
\right]}{q^2(q^2-{\mu}^2{\gamma}^2)[(p'+q)^2-m^2][(p+q)^2-m^2]}
\nonumber \\[2mm]
&=& 2e^2\mu{\gamma}^2\int \frac{d^3q}{(2\pi)^3} \left\{
\frac{2m\gamma_\mu-2q_\mu-2{\cal P}_\mu}{(q^2-{\mu}^2{\gamma}^2)
(2p'q+q^2)(2pq+q^2)} +\frac{\gamma_\mu\hat{q}}{2q^2(q^2-\mu^2{\gamma}^2)
[(p+q)^2-m^2]} \right.\nonumber \\[2mm]
&+&\left.\frac{\hat{q}\gamma_\mu}{2q^2(q^2-{\mu}^2{\gamma}^2)
[(p'+q)^2-m^2]}\right\}.
\end{eqnarray}
Similar to Eq.(\ref{2.31}), for the terms linear in $q$
in Eq.(\ref{2.35}) one cannot take
the large-${\gamma}$ limit directly;
we first need to employ the decomposition (\ref{eqde}).
Considering the symmetry of the integrand, we obtain
\begin{eqnarray}
J^c_{\mu}&{\equiv}&\lim_{{\gamma}{\to}{\infty}}\tilde{J}_{\mu}^c\nonumber\\[2mm]
&=&-\frac{e^2}{\mu}{\int}\frac{d^3q}{(2\pi)^3}\left[
\frac{4(m{\gamma}_{\mu}-{\cal P}_{\mu}-q_{\mu})}{(2p{\cdot}q+q^2)
(2p'{\cdot}q+q^2)}+\frac{{\gamma}_{\mu}\hat{q}}{
q^2(2p{\cdot}q+q^2)}+\frac{{\gamma}_{\mu}\hat{q}}{q^2(2p'{\cdot}q+q^2)}
\right].
\end{eqnarray}
The standard Feynman integration gives
\begin{eqnarray}\label{2.36}
J_\mu^c
&=& \frac{ie^2}{4\pi\mu}
\left[\gamma_\mu - \frac{2m\gamma_\mu-{\cal P}_\mu}{K}
\ln\frac{1+K/(2m)}{1-K/(2m)}\right]
\nonumber \\[2mm]
&=& \frac{ie^2}{4\pi\mu}
\left[\gamma_\mu - \frac{i\epsilon_{\mu\nu\lambda}
K^\nu\gamma^\lambda}{K}
\ln\frac{1+K/(2m)}{1-K/(2m)}\right],
\end{eqnarray}
where $K_{\mu}{\equiv}p'_{\mu}-p_{\mu}$, $K{\equiv}\sqrt{K^2}$ and
we have used the three-dimensional analogue of the Gordon identity:
\begin{eqnarray}
{\gamma}_{\mu}=\frac{1}{2m}\left[{\cal P}_{\mu}
+i{\epsilon}_{\mu\nu\lambda}K^{\nu}
{\gamma}^{\lambda}\right].
\label{go}
\end{eqnarray}
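The three-dimensional Gordon identity (\ref{go}) can be verified numerically between on-shell spinors in an explicit $2{\times}2$ representation; the choices $\gamma^0=\sigma^3$, $\gamma^{1,2}=i\sigma^{1,2}$, metric $(+,-,-)$ and $\epsilon^{012}=+1$ are conventions adopted for this check, not fixed by the text:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gU = [s3, 1j*s1, 1j*s2]                          # gamma^mu, one common 2x2 choice
eta = np.diag([1.0, -1.0, -1.0])                 # metric (+,-,-)
gL = [sum(eta[a, b]*gU[b] for b in range(3)) for a in range(3)]

mass = 1.0

def eps(i, j, k):                                # totally antisymmetric, eps(0,1,2) = +1
    return (i - j)*(j - k)*(k - i)/2

def onshell(px, py):
    E = np.sqrt(mass**2 + px**2 + py**2)
    p_up = np.array([E, px, py])
    p_low = eta @ p_up
    pslash = sum(p_low[a]*gU[a] for a in range(3))
    w, v = np.linalg.eig(pslash)                 # eigenvalues are +/- mass
    return p_up, p_low, v[:, np.argmin(abs(w - mass))]

p_up, p_low, u = onshell(0.3, -0.2)
pp_up, pp_low, up = onshell(-0.5, 0.1)           # p'
ubar = up.conj() @ gU[0]                         # bar{u}(p')
P = p_up + pp_up                                 # {cal P}^mu
K = pp_low - p_low                               # K_mu = (p' - p)_mu

for a in range(3):                               # check Eq. (go) component by component
    lhs = ubar @ gU[a] @ u
    rhs = (P[a]*(ubar @ u)
           + 1j*sum(eps(a, b, c)*K[b]*(ubar @ gL[c] @ u)
                    for b in range(3) for c in range(3)))/(2*mass)
    assert np.isclose(lhs, rhs)
```

The identity holds for arbitrary normalization of the on-shell spinors, since both sides are bilinear in $\bar{u}(p')$ and $u(p)$.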
Thus, in the limit $\gamma\to\infty$, from Eqs.(\ref{2.30}), (\ref{2.33})
and (\ref{2.36}) we get
\begin{eqnarray}\label{2.37}
\lim_{\gamma\to\infty}\left(-i\tilde{\Gamma}_\mu(K)\right)&{\equiv}&
-i\Gamma_\mu(K)=J_\mu^a+J_\mu^b+J_\mu^c, \nonumber \\[2mm]
{\Gamma}_\mu(K)&=&-\frac{e^2}{4\pi\mu}\left[\frac{5}{3}{\gamma}_{\mu}-
\frac{i\epsilon_{\mu\nu\lambda}K^\nu\gamma^\lambda}{K}
\ln\frac{1+K/(2m)}{1-K/(2m)}\right]\nonumber\\[2mm]
&=&{\gamma}_{\mu}F_1(K^2)+i{\epsilon}_{\mu\nu\lambda}K^{\nu}{\gamma}^{\lambda}
F_2(K^2).
\end{eqnarray}
The vertex renormalization is defined as
\begin{eqnarray}
{\Gamma}_{\mu}(K)={\gamma}_{\mu}(Z_1^{-1}-1)+Z_1^{-1}
{\Gamma}^R_{\mu}(K)
\end{eqnarray}
and the renormalization condition is as usual
\begin{eqnarray}
{\Gamma}_{\mu}^R(K)|_{
\hat{p}=\hat{p}'=m,~
K_{\alpha}=p'_{\alpha}-p_{\alpha}=0}=0.
\end{eqnarray}
Then we get the vertex renormalization constant
\begin{eqnarray}
Z_{1}^{-1}{\gamma}_{\mu}&=&{\gamma}_{\mu}+{\gamma}_{\mu}F_1(0),
\nonumber\\[2mm]
{Z_1}^{-1}&=&1-\frac{e^2}{4\pi\mu}\frac{5}{3},
\end{eqnarray}
and the one-loop radiative correction to the vertex as
\begin{eqnarray}
{\Gamma}_{\mu}^R(K)
&=&-{\gamma}_{\mu}+Z_1({\gamma}_{\mu}+{\Gamma}_{\mu})
\nonumber\\[2mm]
&=&\frac{ie^2}{4{\pi}\mu}{\epsilon}_{\mu\nu\lambda}K^{\nu}{\gamma}^{\lambda}
\frac{1}{K}\ln\frac{1+K/(2m)}{1-K/(2m)}.
\end{eqnarray}
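The $K{\to}0$ behaviour of this radiative correction is controlled by the elementary limit $\frac{1}{K}\ln\frac{1+K/(2m)}{1-K/(2m)}\to\frac{1}{m}$, so ${\Gamma}^R_{\mu}$ stays finite and nonzero at $K^2=0$; a one-line symbolic confirmation (our check):

```python
import sympy as sp

K, m = sp.symbols('K m', positive=True)
lim = sp.limit(sp.log((1 + K/(2*m))/(1 - K/(2*m)))/K, K, 0)
assert sp.simplify(lim - 1/m) == 0   # coefficient 1/m of the K^2 -> 0 form
```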
From Eq.(\ref{eq25}) we have
\begin{eqnarray}\label{2.39}
{Z_1}=1+\frac{e^2}{4\pi\mu}\frac{5}{3}=Z_2,
\end{eqnarray}
which is just a consequence of the Ward identity
\begin{eqnarray}
K^{\mu}{\Gamma}_{\mu}(K)={\Sigma}(p')-{\Sigma}(p).
\end{eqnarray}
It is remarkable that ${\Gamma}_{\mu}^R(K^2=0)$ does not vanish, i.e.
\begin{eqnarray}
{\Gamma}_{\mu}^R(0)=\frac{ie^2}{4\pi}
\frac{1}{{\mu}m}{\epsilon}_{\mu\nu\lambda}
K^{\nu}{\gamma}^{\lambda}
=i\frac{\alpha}{{\mu}m}{\epsilon}_{\mu\nu\lambda}K^{\nu}{\gamma}^{\lambda},
\end{eqnarray}
which gives the three-dimensional analogue
of Schwinger's result for the anomalous magnetic moment
of the electron. In a slowly varying (in both space and time)
external electromagnetic field, it will
lead to a new interaction Hamiltonian\footnote{The self-energy insertion in the external
line can be disregarded since the electrons are on mass-shell.}:
\begin{eqnarray}
{\Delta}{\cal H}&=&-\frac{\alpha}{m{\mu}}{\epsilon}^{\mu\nu\lambda}
\bar{\psi}(x){\gamma}_{\lambda}{\psi}(x)
{\partial}_{\nu}A_{\mu}=-\frac{\alpha}{2m{\mu}}{\epsilon}^{\mu\nu\lambda}
\bar{\psi}(x){\gamma}_{\lambda}{\psi}(x)F_{\mu\nu}\nonumber\\[2mm]
&=&-\frac{\alpha}{2m{\mu}}\bar{\psi}(x)
{\sigma}^{\mu\nu}{\psi}(x)F_{\mu\nu},
\end{eqnarray}
where we have used that
\begin{eqnarray}
{\epsilon}_{\mu\nu\lambda}{\gamma}^{\lambda}
=\frac{i}{2}[{\gamma}_{\mu},{\gamma}_{\nu}]{\equiv}{\sigma}_{\mu\nu}.
\end{eqnarray}
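In the $2{\times}2$ representation $\gamma^0=\sigma^3$, $\gamma^{1,2}=i\sigma^{1,2}$ with $\epsilon_{012}=+1$ (a representation chosen for the check, not fixed by the text), the identity ${\epsilon}_{\mu\nu\lambda}{\gamma}^{\lambda}={\sigma}_{\mu\nu}$ can be confirmed numerically:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gU = [s3, 1j*s1, 1j*s2]                          # gamma^mu
eta = np.diag([1.0, -1.0, -1.0])                 # metric (+,-,-)
gL = [sum(eta[a, b]*gU[b] for b in range(3)) for a in range(3)]

def eps(i, j, k):                                # eps(0,1,2) = +1
    return (i - j)*(j - k)*(k - i)/2

for a in range(3):
    for b in range(3):
        sigma_ab = 0.5j*(gL[a] @ gL[b] - gL[b] @ gL[a])   # (i/2)[gamma_a, gamma_b]
        lhs = sum(eps(a, b, c)*gU[c] for c in range(3))   # eps_{a b c} gamma^c
        assert np.allclose(lhs, sigma_ab)
```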
Thus this term leads to the anomalous magnetic
moment of the electron$\cite{iz}$, which is consistent with the result
in Ref.$\cite{ks}$.
It is very interesting that this term exists in the scalar case too
$\cite{ccf}$.
\section{Pure Chern-Simons Electrodynamics}
Now we consider the case of pure CS electrodynamics, i.e.
put ${\gamma}{\to}{\infty}$ at the tree level. The vacuum polarization
tensor and $D^{(1)}_{\mu\nu}(p)$ remain the same, since this limit does not change
the electron loop.
However, the electron self-energy and the vertex correction will be different
since the gauge field propagator is replaced by Eq.(\ref{eq6}).
We first consider the electron self-energy
\begin{eqnarray}
-i{\Sigma}^{\rm pure}(p)&=&
-\frac{2e^2}{\mu}{\int}\frac{d^nq}{(2\pi)^n}\frac{q^2+(\hat{p}-m)\hat{q}}
{q^2[(q+p)^2-m^2]}\nonumber\\[2mm]
&=&\frac{ie^2}{4\pi\mu}\left\{2m-(\hat{p}-m)\frac{\hat{p}}{m}\left[
\frac{m^2}{p^2}+\frac{m^3}{2p^3}\left(1-\frac{p^2}{m^2}\right)
\ln\frac{1-p/m}{1+p/m}\right]\right\}.
\end{eqnarray}
An analysis similar to that used in obtaining Eq.(\ref{eq25}) gives
\begin{eqnarray}
m_{\rm ph}^{\rm pure}&=&m(1+\frac{e^2}{2\pi}\frac{1}{\mu}),
~~Z_2^{\rm pure}=1+\frac{e^2}{4\pi}\frac{1}{\mu},\nonumber\\[2mm]
{\Sigma}^{\rm pure}_R(p)&=&
-\frac{e^2}{4\pi\mu}(\hat{p}-m_{\rm ph})\left\{\frac{\hat{p}}{m_{\rm ph}}\left[
\frac{m_{\rm ph}^2}{p^2}
+\frac{m_{\rm ph}^3}{2p^3}\left(1-\frac{p^2}{m_{\rm ph}^2}\right)
\ln\frac{1-p/m_{\rm ph}}{1+p/m_{\rm ph}}\right]-1\right\},\nonumber\\[2mm]
\tilde{\Sigma}^{\rm pure}_R(p)
&=&\frac{e^2}{4\pi}\frac{1}{\mu}\frac{\hat{p}}{p^2}\left[
1+\frac{\hat{p}+m_{\rm ph}}{2p}\ln\frac{1-p/m_{\rm ph}}{1+p/m_{\rm ph}}\right].
\end{eqnarray}
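The constants in the last display can be checked from ${\Sigma}^{\rm pure}(p)$ by the same mass-shell expansion as before; a symbolic sketch (our check) with the scalar replacement $\hat{p}{\to}x$, $p{\to}x$:

```python
import sympy as sp

x, m, mu, e = sp.symbols('x m mu e', positive=True)
L = sp.log((1 - x/m)/(1 + x/m))

# Sigma^pure with hat{p} -> x and p -> x
Sig = -e**2/(4*sp.pi*mu)*(2*m - (x - m)*(x/m)
      *(m**2/x**2 + m**3/(2*x**3)*(1 - x**2/m**2)*L))

dm = sp.limit(Sig, x, m, '-')                    # delta m = Sigma(m)
Z2m1 = sp.limit(sp.diff(Sig, x), x, m, '-')      # Z_2^pure - 1 = Sigma'(m)

assert sp.simplify(dm + e**2*m/(2*sp.pi*mu)) == 0   # m_ph = m - dm = m(1 + e^2/(2 pi mu))
assert sp.simplify(Z2m1 - e**2/(4*sp.pi*mu)) == 0   # Z_2^pure = 1 + e^2/(4 pi mu)
```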
Using the techniques stated above, the on-shell vertex correction is given
as follows
\begin{eqnarray}
&-&i\bar{u}(p'){\Gamma}^{\rm pure}_{\mu}(p',p,m)u(p){\equiv}
-i{\Gamma}^{\rm pure}_{\mu}(K)
=\frac{ie^2}{\mu}{\int}\frac{d^nq}{(2\pi)^n}
\frac{{\gamma}_{\rho}(\hat{q}+\hat{p}'+m){\gamma}_{\mu}(\hat{q}+
\hat{p}+m){\gamma}_{\nu}{\epsilon}^{\nu\rho\lambda}q_{\lambda}}{
q^2[(q+p')^2-m^2][(q+p)^2-m^2]}\nonumber\\[2mm]
&=&-\frac{2e^2}{\mu}{\int}\frac{d^nq}{(2\pi)^n}\left[
\frac{{\gamma}_{\mu}\hat{q}}{2q^2(q^2+2p{\cdot}q)}+
\frac{\hat{q}{\gamma}_{\mu}}{2q^2(q^2+2p'{\cdot}q)}+
\frac{2m{\gamma}_{\mu}-2{\cal P}_{\mu}-2q_{\mu}}{(q^2+2p'{\cdot}q)(
q^2+2p{\cdot}q)}\right]\nonumber\\[2mm]
&=&\frac{ie^2}{4\pi\mu}\left[{\gamma}_{\mu}-i{\epsilon}_{\mu\nu\lambda}
K^{\nu}{\gamma}^{\lambda}\frac{1}{K}\ln\frac{1+K/(2m)}{1-K/(2m)}\right],
\end{eqnarray}
where the three-dimensional Gordon identity (\ref{go}) has been used.
Correspondingly, the vertex renormalization constant is
\begin{eqnarray}
Z_1^{\rm pure}=1+\frac{e^2}{4\pi}\frac{1}{\mu},
\end{eqnarray}
and we still have
\begin{eqnarray}
Z_1^{\rm pure}=Z_2^{\rm pure}.
\end{eqnarray}
In particular, we still obtain the same anomalous magnetic moment term.
\section{Conclusion and Discussion}
We have made a detailed study of the quantum corrections
to CS spinor electrodynamics. We give complete analytical
results for the one-loop quantum corrections, namely the polarization tensor,
the electron self-energy and, especially, the on-shell vertex. We find the
three-dimensional analogue of the Schwinger anomalous magnetic moment
term; although it appears at second order, it may lead to nontrivial
planar dynamics, since it provides a new interaction between
charged particles. We compare the different procedures of
taking the limit ${\gamma}{\rightarrow}{\infty}$ and verify explicitly
that in both cases the Ward identity is satisfied, and hence that the physical
quantities are independent of the order of taking the large-$\gamma$ limit.
In addition, in both cases, the results are finite and
the ${\beta}$-function vanishes identically.
If we take into account higher-order perturbative corrections,
then, according to the BPHZ renormalization procedure,
we believe that the results
remain finite, since the one-loop renormalization constants
are all finite and all propagators and the vertex part in the asymptotic
region coincide with those of the free case after renormalization.
\acknowledgments
We thank Dr. R. Ray for useful correspondence.
The financial support of the Academy of Finland is gratefully
acknowledged. W.F.C. thanks the World
Laboratory, Switzerland, for financial support, and V.Ya.F. acknowledges
financial support from RFBR Grant No. 96-01-00105.
\section{Introduction}\label{Introduction}
The number of lithium-ion BESS projects in operation, under construction, and in the planning stage grows steadily around the world due to improvements in technology \cite{Ziegler2020}, economies of scale \cite{Mauler2021}, bankability \cite{Bonomi2020}, and new regulatory initiatives \cite{FERC841}. It is projected that by 2040 there will be about 1,095GW/2,850GWh of stationary energy storage in operation, mostly in the form of batteries \cite{BNEF2019}. Although a private investor in energy storage for grid applications faces high capital costs compared with conventional solutions, deteriorating battery characteristics, and revenue risk associated with changing regulatory policies, the economic value of such projects is mostly assessed using a simplistic black-box representation of the battery operation \cite{Walawalkar2007} and an empirical relationship to characterize ageing \cite{Xu2018}. Black-box modelling of a lithium-ion battery is reasonable when a large-scale optimization problem is solved, where additional state variables characterizing the battery would further increase computational complexity. However, short-term operation and long-term planning for an individual storage owner will benefit from detailed models of BESS \cite{Pandzic2019,Jafari2020}. For example, when lithium-ion battery storage is used to provide multiple services for the electrical grid for better asset utilization and economic benefits for the owner \cite{RockyMountain2015}, the optimal market participation calculated using a simplistic model may lead to the execution of infeasible operations and an erroneous estimate of economic benefits \cite{Taylor2020}. This occurs because a simple battery model cannot reflect the physical processes inside a lithium-ion cell, which is the main component of a BESS.
This is even more pronounced when the operation and planning of BESS for the power system are performed over multiscale time horizons: the control strategy for participating in the electricity and ancillary services markets encompasses hours/minutes/seconds, degradation occurs every second and accumulates over years, and both the planning horizon and the replacement plan should be calculated over years \cite{Sorourifar2020}. The linkage of these timescales will likely be inaccurate if a simple black-box model is used, since it operates with power in MW over an hourly time interval. The scarce experimental data support the importance of considering more detailed models for operation \cite{Pandzic2019,Taylor2020} as well as for long-term performance \cite{Reniers2020}.
In a pioneering work published in 1985, Sobieski \cite{Sobieski1985} performed a techno-economic assessment of BESS applications, comparing BESS with combustion turbines for peak-shaving capacity expansion and for spinning reserve. Over the years since, strategic battery operation in various decision-making studies in power systems has been modelled using generic models without reference to a particular battery technology. Miletic and co-workers \cite{Miletic2020} summarized and structured optimization frameworks and market models for stationary energy storage used in operation and planning problems at the transmission and distribution levels. A standard power-energy model and its various modifications were discussed as well. However, the impact of including additional details in the simple battery model was not assessed.
The lithium-ion battery community has developed a variety of models with different levels of accuracy and computational complexity for simulation \cite{Ramadesigan2012} and for the characterization of ageing \cite{Reniers2019}. These models are usually employed in the battery management system (BMS) to predict battery behaviour and to estimate the state-of-charge or state-of-health of the battery \cite{Byrne2017}. Until recently, the strategic operation of stationary BESS was derived using advanced battery models by only a few researchers \cite{Reniers2020,Cao2020}. A critical review of three BESS models, namely the energy reservoir model, the charge reservoir model, and the concentration-based model, was provided to the power system research community by Rosewater \textit{et al.} \cite{Rosewater2019}, who used them to calculate the optimal schedule of a BESS for a peak-shaving application. The authors outlined the advantages and disadvantages of each model from a computational point of view, but they mostly reviewed references outside of typical system-level grid applications of BESS.
The contribution of the present review paper is to provide a detailed overview of alternative battery models and of how they have been used to represent grid-scale BESS in electrical power system studies. In particular, we focus on papers that have integrated transmission-connected BESS into grid operation and planning techno-economic studies. This paper builds on previous works; for example, it extends the work presented in \cite{Rosewater2019}, where the models were presented but their application in techno-economic optimization problems was not discussed. Compared with \cite{Miletic2020}, where different optimization models for BESS operation and planning were summarized, and with \cite{Byrne2017}, where optimization protocols and frameworks for BESS applications were addressed, in this review paper the various applications are examined from the perspective of how the battery is described and represented. The review \cite{Lawder2014} focused on battery models for the BMS and on the architecture of BESS and BMS; its case study only addressed optimal charging scheduling for a combined BESS-photovoltaic generation plant using a physics-based model, whereas in this paper a broader range of BESS applications is examined.
The remainder of this paper is structured as follows: the next section gives an overview of BESS models; the third section presents examples of economic studies with BESS providing different services; the fourth section discusses and summarizes the impact of BESS models on strategic operation and planning; the last section concludes.
\section{An overview of the lithium-ion battery modelling approaches}\label{An overview of the lithium-ion battery modelling approaches}
A battery is an electrochemical device that is able to store electrical energy in the form of chemical energy and to convert it back to electrical energy when needed. Since their invention in 1800 by Alessandro Volta, various battery technologies have emerged; in this work, however, we focus on the lithium-ion technology. This type of battery was pioneered by Whittingham \cite{Whittingham1976}, significantly improved by Goodenough \cite{Mizushima1980}, and brought to the market by Sony in the early 1990s. Today, the term battery is often used to refer to the electrochemical storage as a whole system. In fact, each battery consists of a pack of elementary electrochemical units – cells. The way the cells are connected in the battery (in parallel and in series) determines the battery's nameplate ratings. A cell is the physical unit where the conversion occurs and where the electrochemical energy is stored.
Battery models can be divided, by the extent to which they describe the physical processes and the corresponding safety constraints, into black-box, phenomenological, and physical models \cite{Plett2015, Jokar2016}. In the black-box model (Figure \ref{fig:1}), the battery is replaced with a reservoir, or bucket, into which energy flows and from which it is drawn. This model does not describe the physical phenomena inside the cell. If a phenomenological model is used (Figure \ref{fig:2}), the battery is replaced with a system that is empirically built to replicate the response of the battery to control commands \cite{Ramadesigan2012}. The electrochemical processes inside the battery and the response of the cell to external factors are accurately described by the physical model (Figure \ref{fig:3}) \cite{Plett2015}.
\begin{figure}[t!]
\centering
\subfloat[Power-Energy Model]{\includegraphics[trim=7cm 6.5cm 19cm 5.0cm, clip=true,angle=0,width=1.0in]{ERM.jpg}\label{fig:1}}
\subfloat[Voltage-Current Model]{\includegraphics[trim=8cm 3cm 14cm 5.0cm, clip=true,angle=0,width=1.6in]{ECM.jpg}\label{fig:2}}\hspace{0.2cm}
\subfloat[Concentration-Current Model]{\includegraphics[trim=8cm 3cm 14cm 4.5cm, clip=true,angle=0,width=1.6in]{SPM.jpg}\label{fig:3}}
\caption{The lithium-ion battery models used in techno-economic analyses of power systems.}
\end{figure}
The long-term performance of the battery deteriorates with time (calendar ageing) and with the number of charging/discharging cycles (cycle ageing). Several mechanical and electrochemical processes gradually deteriorate the energy capacity, the rated charging/discharging power, or both. The ageing of lithium-ion cells is affected by environmental conditions, such as high and low temperatures, and by operating conditions, such as high charging/discharging current and high/low state-of-charge, and it continues even when the battery is at rest \cite{Woody2020}. The degradation of the lithium-ion cell is usually accompanied by a loss of lithium inventory or a loss of active material of an electrode. The lithium inventory is consumed by various side reactions. The decline in the amount of cyclable lithium leads to a decrease in energy capacity, and the by-products of these parasitic reactions create additional internal resistance of the cell that reduces the available charging/discharging power capacity. Examples of such processes are solid electrolyte interphase (SEI) film growth and lithium plating. SEI formation is considered the dominant degradation mechanism \cite{Reniers2019}. The SEI is mostly formed on the surface of the negative electrode during charging, since these conditions favour the electrochemical decomposition of the electrolyte \cite{Arteaga2017}. Lithium ions can also be converted to metallic lithium deposited on the electrode through the lithium plating process. Structural changes, such as cracks in the electrode particles or dissolution of material in the electrolyte, lead to the loss of electrode active material. More detailed information on the degradation mechanisms in the lithium-ion cell can be found in \cite{Woody2020}, \cite{Birkl2017}, and \cite{Reniers2019}.
In this section, the battery models that can be found in power system operation and planning papers are reviewed.
\subsection{Power-Energy Model}\label{Power-Energy Model}
The simplest model of the battery assumes that the battery can be seen as an energy reservoir into which energy is pumped for storage and from which energy is drawn for consumption (Figure \ref{fig:1}). If such a model is used for analysis, there is no need to distinguish elementary electrochemical units or the type of electrochemistry within the battery. This is the most popular model for characterizing the operation of the battery in techno-economic studies in power systems. This modelling approach has likely come from the mature pumped hydro energy storage modelling that has been in use for a long time \cite{Bainbridge1966}. The control variables for this model are the charging, $ch_{t}$, and discharging, $dis_{t}$, powers, whereas the state-of-energy, ${SoE}_{t}$, is the only state variable. The state-of-energy indicates the present value of energy (often in MWh in the power systems literature) stored in the battery. The BESS is not an ideal system, so there will be losses during charging/discharging cycling. The loss in a Power-Energy Model is commonly considered through the introduction of an energy efficiency factor, which can be assigned either separately for the charging, $\eta^{ch}$, and discharging, $\eta^{dis}$, operations \cite{Dvorkin2017} or as a round-trip energy efficiency for the whole cycle \cite{Zhao2018,Arteaga2019}. The generic Power-Energy Model assumes fixed energy efficiencies and a constant rated charging/discharging power that do not depend on $SoE_{t}$. The evolution of the state-of-energy is the core of the Power-Energy Model, and the relationship between two consecutive observations of $SoE_{t}$ with a time step $\tau$ between them is expressed as
\begin{equation}
\label{eq:1}
SoE_{t} = SoE_{t-1}+\tau(\eta^{ch}ch_{t}-\frac{dis_{t}}{\eta^{dis}}).
\end{equation}
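As a concrete illustration, the state-of-energy recursion (\ref{eq:1}) can be sketched in a few lines of Python; the time step, efficiencies, and capacity below are illustrative assumptions rather than values from any cited study.

```python
# Discrete state-of-energy update of the Power-Energy Model.
# TAU, ETA_CH, ETA_DIS and E_CAP are illustrative assumptions.

TAU = 1.0       # time step, h
ETA_CH = 0.95   # charging efficiency
ETA_DIS = 0.95  # discharging efficiency
E_CAP = 10.0    # rated energy capacity, MWh

def soe_update(soe_prev, ch, dis):
    """SoE_t = SoE_{t-1} + tau * (eta_ch * ch - dis / eta_dis)."""
    soe = soe_prev + TAU * (ETA_CH * ch - dis / ETA_DIS)
    if not 0.0 <= soe <= E_CAP:
        raise ValueError("schedule violates the energy capacity limits")
    return soe

# Charging at 1 MW stores 0.95 MWh; delivering 1 MWh drains ~1.053 MWh.
soe = soe_update(5.0, 1.0, 0.0)   # 5.95 MWh
soe = soe_update(soe, 0.0, 1.0)   # about 4.897 MWh
```

Because the efficiency multiplies the charging power and divides the discharging power, a full round trip returns only $\eta^{ch}\eta^{dis}\approx 0.90$ of the energy drawn from the grid under these assumed values.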
The technical aspects of the BESS are considered by enforcing limits on the maximum charging and discharging power and by allowing energy to be stored only up to the rated energy capacity. Expression (\ref{eq:1}) for the state-of-energy admits schedules with simultaneous charging and discharging (e.g., such schedules may be realized in electricity market cases with negative nodal energy prices or for ideal batteries with efficiencies equal to unity \cite{Taylor2015}). In this case, binary variables are introduced to avoid simultaneous charging and discharging \cite{Go2016}. However, when the cost of BESS operation is included in the objective and the efficiencies are less than unity, simultaneous charging and discharging is suboptimal and should not appear in the optimal solution \cite{Zhao2018,Perez2016}. The Power-Energy Model can be extended with some features of lithium-ion cell operation through functional dependencies of the maximum permissible charging/discharging power on the state-of-energy, as in \cite{Vagropoulos2013,Pandzic2019}, of the energy efficiency on the state-of-energy and charging/discharging power, as in \cite{Sakti2017,Nguyen2019}, or both, as in \cite{Sakti2017,Gonzalez-Castellanos2020,Jafari2020}. A simple Power-Energy Model can also be coupled with a description of battery degradation resulting from cycling or calendar ageing. In power system economics studies, degradation is mostly modelled by enforcing operation limits \cite{Mohsenian-Rad2016,Fares2018}, by using the energy throughput model \cite{Wankmuller2017}, or by employing the cycle-counting model \cite{Xu2018}. The last two approaches can be included in the optimization framework by assigning a cost of degradation in the cost function or by limiting degradation through additional constraints in the Power-Energy Model. References for each method are summarized in Table \ref{tab1}.
In the energy throughput, or power-based, method, there is a linear dependence between the energy capacity fade and the energy throughput \cite{Wankmuller2017,Arcos-Vargas2020,Fares2018,He2018}. It is assumed that the amount of energy that can be stored and delivered by the BESS throughout its lifespan is fixed. A battery is often defined as healthy until it reaches the end-of-life (EoL) state, which occurs when the battery has lost 20\% of its original capacity. The framework can be built by incorporating the degradation cost in the objective \cite{Wankmuller2017} or by limiting the number of full charging/discharging cycles per day \cite{Mohsenian-Rad2016} or per year \cite{Fares2018}. The energy throughput technique for ageing assessment works properly for one charging/discharging cycle per day \cite{Sorourifar2020}. The cycle-counting degradation model captures the nonlinear ageing caused by cycling: cycles with a smaller depth-of-discharge (DoD) contribute less to the degradation of the battery \cite{Xu2018}. The cycles are extracted from a state-of-energy profile using the rainflow cycle-counting algorithm \cite{Xu2018a}. Each cycle with a certain DoD is assigned a fixed amount of energy capacity degradation according to the cycle depth ageing stress function, which can be obtained from experimental data. The cycle-counting method is incorporated into the optimization framework by including the cost of degradation in the objective function. This cost is calculated by benchmarking the amount of degradation against the battery replacement cost \cite{Xu2018_Factoring}. Although the cycle-based degradation model is more advanced than the energy throughput method, neither technique considers the effect of the average state-of-energy around which the charging/discharging cycle occurs \cite{Maheshwari2020}. Another limitation of the cycle-counting degradation model with the rainflow algorithm is that it only has a recursive form.
Several approximations suitable for the optimization environment were suggested \cite{Shi2018,Xu2018_Factoring,He2016}. Some authors also employed empirical nonlinear degradation models \cite{Maheshwari2020,Padmanabhan2020} where the cumulative degradation cost function was constructed for different state-of-energy and the current rates for charge/discharge.
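A minimal sketch of the two empirical degradation-cost calculations described above may help make the contrast concrete. The replacement cost, cycle life, and the power-law coefficients of the cycle depth ageing stress function below are illustrative assumptions, not fitted values from any cited reference, and the cycle depths are assumed to have already been extracted by a rainflow count.

```python
# Two empirical degradation-cost sketches; all numbers are assumptions.

REPLACEMENT_COST = 200_000.0            # $, cost of replacing the battery
E_CAP = 10.0                            # MWh, rated energy capacity
EOL_FRACTION = 0.2                      # end-of-life at 20% capacity loss
LIFETIME_THROUGHPUT = 3000 * 2 * E_CAP  # MWh, assuming 3000 full cycles

def throughput_cost(energy_mwh):
    """Energy throughput method: linear cost per MWh cycled."""
    return REPLACEMENT_COST * energy_mwh / LIFETIME_THROUGHPUT

def stress(depth, alpha=5.24e-4, beta=2.03):
    """Hypothetical power-law cycle depth ageing stress function:
    fraction of capacity lost by one cycle of the given depth (0..1)."""
    return alpha * depth ** beta

def cycle_cost(depths):
    """Cycle-counting method: cost of cycles from a rainflow count,
    benchmarked against the replacement cost at end-of-life."""
    return sum(REPLACEMENT_COST * stress(d) / EOL_FRACTION for d in depths)
```

Because the assumed exponent is greater than one, one full-depth cycle costs more than two half-depth cycles, which is exactly the nonlinearity that the linear energy throughput method cannot represent.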
The energy efficiency and maximum power capacity of the BESS also degrade with cycling and as the battery ages over time \cite{Redondo-Iglesias2019}. The effect of ageing on the energy efficiency and maximum charging/discharging power was explored in \cite{He2020} and \cite{Jafari2020}. A linear relationship between the fading capacity and the maximum charging/discharging power was assumed in \cite{Jafari2020}, and the growth of the cell internal resistance was explored in \cite{He2020}. Coupling the long-term performance with a Power-Energy Model raises computational tractability problems, which are addressed either by solving over short optimization horizons (Table \ref{tab1}) or by employing sequential approaches \cite{Schneider2020,Maheshwari2020}.
\begin{table}[t!]\centering
\caption{Literature survey on empirical degradation models.}
\begin{tabular}{@{}p{3cm}|p{2cm}|p{2cm}|p{2cm}|p{2cm}@{}}
\hline
\textbf{Decision horizon} & \textbf{Operational limits} &\textbf{Energy throughput method}& \textbf{Cycle-counting method} &\textbf{Other}\\
\hline
$<$ 1 week & \cite{Wankmuller2017}, \cite{Perez2016},\cite{Mohsenian-Rad2016} & \cite{Wankmuller2017} & \cite{Xu2018}, \cite{He2016}, \cite{Shi2018} & \cite{Maheshwari2020}, \cite{Padmanabhan2020} \\
$<$ 1 year & \cite{Jafari2020} & \cite{Fares2018}, \cite{He2018}, \cite{Arcos-Vargas2020}, \cite{Jafari2020} & \cite{Xu2018_Factoring}, \cite{Schneider2020} & \\
$<$ 20 year & & \cite{Sorourifar2020} & & \\
\hline
\end{tabular}
\label{tab1}
\end{table}
The incorporation of a generic Power-Energy Model within power system optimization frameworks usually leads to a linear programming problem \cite{Fares2018} or a linear mixed integer programming problem \cite{Sakti2017} that can be easily solved with standard commercial solvers.
\subsection{Voltage-Current Model}\label{Voltage-Current Model}
The charging/discharging schedule calculated using a Power-Energy Model may lead to the operation of the battery outside the permissible range for current and voltage \cite{Pandzic2019}. If this occurs, the ``optimized" operation will be corrected by the BMS \cite{Byrne2017}, leading to deviations from the estimated financial benefits and grid service commitments in power system studies. The formulation of the battery model can be improved if some details of the battery operation are incorporated. A phenomenological model, such as the equivalent-circuit model, is designed to replicate the battery's charging/discharging performance and is usually used in the BMS \cite{Geng2020}. The description of BESS operation based on an electric circuit presents an attractive option for modelling the cell using the Kirchhoff equations.
The equivalent-circuit model by its nature does not model the dynamics of the internal processes inside the cell but characterizes the measurable response of the cell to external influences. The charging/discharging performance curves show how the voltage across the cell (the state variable) changes with the current flowing through the cell (the control variable) while charging/discharging. With regard to the decision variables, when an equivalent-circuit model is employed for optimal control in the power system, the model can be referred to as a Voltage-Current Model. The impedance parameters of the Voltage-Current Model are obtained by fitting the governing equations of the chosen circuit to experimental data or to the manufacturer's specification.
There are various configurations of an electric circuit for a Voltage-Current Model \cite{Hu2012}; the choice depends on the accuracy requirement, the level of tolerance to computational complexity, and the cell chemistry. The first-order approximation consists of two elements: a voltage source and a resistance \cite{Plett2015}. A more advanced Voltage-Current Model usually consists of a set of resistors and capacitors in series or parallel, current sources, and special nonlinear elements. The simple electric circuit that captures the main processes in the cell is shown in Figure \ref{fig:2}. This model follows \cite{Plett2015} and is based on empirical observations. The variable voltage source changes its output as a function of state-of-charge. This voltage is sometimes referred to as the open-circuit voltage \cite{Plett2015} and is supplied by the electrode chemistry. The nonideality of the cell is modelled through the resistor $R_{0}$. This resistor also ensures that the output voltage drops relative to the open-circuit voltage when the load is connected and rises above the open-circuit voltage during charging. The lithium ions do not immediately stop flowing when the charging/discharging is interrupted. To model this diffusion process, a resistor, $R_{d}$, and capacitor, $C_{d}$, in parallel are employed. The nonlinear elements are used to simulate the so-called hysteresis effect, whereby the open-circuit voltage reaches different values for the same state-of-charge as a consequence of thermodynamic or mechanical hysteresis \cite{Ovejas2019}. The latter is a consequence of mechanical stress on the electrodes from lithiation and delithiation, and the former originates from the variation of the lithium intercalation rates between particles of active electrode material.
The control variable for the Voltage-Current Model is current through the cell $I_{t}$. It is chosen to be positive during the discharge for consistency of the description. The state variables, in this model, are the state-of-charge measured in Ah, the operating voltage $V_{t}$, and the diffusion voltage $V^{d}_{t}$, i.e., the voltage across a capacitor. The evolution of the state-of-charge is given as:
\begin{equation}
\label{eq:5}
SoC_{t} = SoC_{t-1}-\eta_{c} I_{t}\tau,
\end{equation}
where $\tau$ corresponds to the time step between two estimates of $SoC_{t}$, and $\eta_{c}$ stands for the coulombic efficiency that reflects how much charge is lost during the charging/discharging cycle.
Using Kirchhoff's Law for voltage, the following expression can be derived to relate voltages in the circuit \cite{Plett2015}:
\begin{equation}
\label{eq:6}
OCV_{t}(SoC_{t}) = I_{t}R_{0}+V^{d}_{t}+V_{t}.
\end{equation}
Using Kirchhoff's Law for current, the relationship for currents in a parallel RC branch is given as \cite{Plett2015}:
\begin{equation}
\label{eq:7}
I_{t} = \frac{V^{d}_{t}}{R_{d}}+C_{d}\frac{dV^{d}_{t}}{dt}.
\end{equation}
If the derivative $\frac{dV^{d}_{t}}{dt}$ is approximated using finite differences, the diffusion voltage ${V^{d}}_t$ can be calculated as:
\begin{equation}
\label{eq:8}
V^{d}_{t} = \frac{R_{d}C_{d}}{\tau+R_{d}C_{d}}V^{d}_{t-1}+\frac{\tau R_{d}}{\tau+R_{d}C_{d}}I_{t}.
\end{equation}
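The discrete Voltage-Current Model above (the state-of-charge update, the diffusion-voltage recursion, and Kirchhoff's voltage law) can be simulated directly. The circuit parameters and the open-circuit-voltage curve below are illustrative assumptions, not datasheet values.

```python
# One-step simulation of the Voltage-Current Model: state-of-charge update,
# diffusion-voltage recursion, and Kirchhoff's voltage law.
# Circuit parameters and the OCV curve are illustrative assumptions.

R0 = 0.015             # ohm, series resistance
RD, CD = 0.020, 2000.0 # ohm, F: diffusion branch
Q_MAX = 3.0            # Ah
ETA_C = 0.999          # coulombic efficiency
TAU = 1.0              # time step, s

def ocv(soc):
    """Toy open-circuit voltage rising linearly with state-of-charge."""
    return 3.0 + 1.2 * soc / Q_MAX

def step(soc, vd, current):
    """Advance one step; current > 0 denotes discharge."""
    soc_new = soc - ETA_C * current * TAU / 3600.0           # Ah
    vd_new = (RD * CD * vd + TAU * RD * current) / (TAU + RD * CD)
    v_term = ocv(soc_new) - current * R0 - vd_new            # terminal voltage
    return soc_new, vd_new, v_term
```

On discharge, the terminal voltage sags below the open-circuit voltage by the $I_{t}R_{0}$ drop plus the slowly relaxing diffusion voltage; on charge it sits above it, reproducing the qualitative behaviour described above.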
The Voltage-Current Model is formulated from a single-cell perspective, which assumes that all cells within the battery show the same performance \cite{Reniers2018}. Although the battery balancing scheme provided by the BMS is intended to keep the variation between the cells of the battery pack at a minimum \cite{Omariba2019}, in general this assumption requires additional investigation \cite{Fantham2020,Rosewater2019}. DC voltage and current are not typical variables in power system economic studies, but both can be employed naturally to find the power supplied or consumed by a BESS that consists of $N$ lithium-ion cells in series and in parallel, as follows:
\begin{equation}
\label{eq:9}
P_{t} = NI_{t}V_{t}.
\end{equation}
The Voltage-Current Model allows for the restrictions specified by the producer of the lithium-ion cell in the box constraint on current and voltage, as follows:
\begin{equation}
\label{eq:11}
V^{Min}\leq V_{t}\leq V^{Max}
\end{equation}
\begin{equation}
\label{eq:12}
-I^{MaxCh}\leq I_{t} \leq I^{MaxDis},
\end{equation}
where $V^{Min}$ and $V^{Max}$ identify the operational limits for voltage, and $I^{MaxCh}$ and $I^{MaxDis}$ are the maximum absolute values of the continuous charging and discharging currents. The range of $SoC_{t}$ is bounded above by the nominal capacity $Q^{Max}$ in [Ah], as follows:
\begin{equation}
\label{eq:10}
0\leq SoC_{t} \leq Q^{Max}
\end{equation}
The model presented above is discussed in more detail in \cite{Plett2015} and was used in an optimization framework in \cite{Reniers2018}. In contrast, Taylor \textit{et al.} \cite{Taylor2020} used a simpler electric circuit, composed of a voltage source and a resistor, to derive the strategic operation of a BESS. Other electric circuits that can be used in optimization frameworks can be taken from \cite{Hu2012}. Examples of other equivalent-circuit models derived from physics-based models can be found in \cite{Berrueta2018,Varini2019,Li2019}. Degradation can be included in the Voltage-Current Model using the energy throughput method, as in \cite{Reniers2018,Li2019}. The Voltage-Current Model can also be built without reference to an underlying electric circuit and operate using empirical relationships. For instance, in \cite{Aaslid2020}, the voltage of the lithium-ion cell, as a function of charging/discharging current and state-of-charge, was constructed using bi-variate cubic splines.
The incorporation of a Voltage-Current Model, which is governed by equations (\ref{eq:5}),(\ref{eq:6}),(\ref{eq:8})-(\ref{eq:12}), into the optimization framework leads to a nonlinear programming problem because of a nonlinear relationship between open-circuit voltage and state-of-charge. The final problem can be solved with various off-the-shelf nonlinear solvers such as IPOPT \cite{Wachter2006} or using linearization techniques combined with the commercial linear solvers.
\subsection{Concentration-Current Model}\label{Concentration-Current Model}
Despite having a number of advantages over the Power-Energy Model, the Voltage-Current Model does not provide information about the physical process inside the battery and can produce errors if it is used outside of the operating conditions for which it was empirically built \cite{Plett2015}. In contrast, the physics-based electrochemical model of a lithium-ion cell can achieve better accuracy \cite{Reniers2018}. The enhanced model of a lithium-ion cell is able to characterize the transport of charge carriers, interfacial reactions, thermal effects, and their mutual effects on each other \cite{Plett2015}. The most rigorous model is too complex for the optimization framework because the model contains coupled partial differential equations and nonlinear algebraic expressions \cite{Pandzic2019}.
The trade-off between accuracy and the possibility to implement a more advanced model in the optimization framework can be found in the single particle model of the lithium-ion cell (Figure \ref{fig:3}). The single particle model limits its consideration to physical principles such as the transport of lithium in the active material of the electrodes and the kinetics of the lithium intercalation/deintercalation reactions \cite{Bizeray2018}. The model originates from the porous electrode theory \cite{Newman1975}, which is used to quantify electrode processes within porous electrodes. The single particle model is built on several assumptions \cite{Ning2004}. First, the active material of both electrodes is composed of uniform spherical electrode particles of equal radius $R^i$ (the superscript $i$ is replaced by $p$ for the positive electrode and by $n$ for the negative electrode). A single electrode particle is employed to simulate the transport of lithium in the active material of the electrode. Second, the concentration of the lithium ions in the electrolyte is assumed to be uniform and constant. Finally, the rate of the electrode reaction at the electrode/electrolyte interface does not change from one electrode particle to another. The second and third assumptions are valid at low to medium current through the cell \cite{Bizeray2018}, when the impact of the electrolyte potential is negligible \cite{Schmidt2010}.
The movement of lithium under the concentration gradient in the electrode particle with radius $R^i$ is described by a one-dimensional parabolic partial differential equation in spherical coordinates, as follows \cite{Ning2004}:
\begin{equation}
\label{eq:13}
\frac{\partial c^i}{\partial t}=\frac{D^i}{{r^i}^2}\frac{\partial}{\partial r^i}({r^i}^2\frac{\partial c^i}{\partial r^i}),
\end{equation}
where $c^i$ stands for the concentration of lithium atoms in the electrode particle, $r^i$ is a radial coordinate, and $D^i$ is the diffusion coefficient of lithium in the electrode active material. The homogeneous Neumann boundary condition is applied at the center of the electrode particle to conserve symmetry, as follows \cite{Bizeray2018}:
\begin{equation}
\label{eq:14}
(D^i\frac{\partial c^i}{\partial r^i} )_{r^i=0}=0
\end{equation}
The molar flux of lithium ions, i.e., the reaction rate of the deintercalation/intercalation process, $J^i$ on the surface of the electrode sets the Neumann boundary condition for the diffusion equation, as follows \cite{Guo2011}:
\begin{equation}
\label{eq:15}
(D^i\frac{\partial c^i}{\partial r^i} )_{r^i=R^i}=-J^i.
\end{equation}
The initial condition is defined as \cite{Ning2004}:
\begin{equation}
\label{eq:16}
(c^i(r^i,t))_{t=0}=c^i_0(r^i),
\end{equation}
where $c^i_0(r^i)$ stands for the initial concentration of lithium in the electrode. This concentration depends on the initial state-of-charge of the lithium-ion cell. The diffusion equation with the boundary conditions presents a challenge for incorporating it as a constraint in the optimization framework. The ``optimization-wise" formulation can be derived through the finite differences of the original partial differential equations \cite{Plett2015} or their approximations based on the ordinary differential equations \cite{Subramanian2001, Wang1998}, or using the Chebyshev collocation method \cite{Bizeray2018}.
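As a concrete sketch of one of these discretization options, the spherical diffusion equation with the symmetry and surface-flux boundary conditions can be advanced with an explicit finite-difference scheme. The particle radius, diffusivity, grid, and flux value below are illustrative assumptions.

```python
# Explicit finite-difference sketch of lithium diffusion in a spherical
# electrode particle: symmetry at the centre, molar-flux condition at the
# surface. D, R, N, DT, and the flux value are illustrative assumptions.

D = 1.0e-14   # m^2/s, solid-phase diffusivity
R = 5.0e-6    # m, particle radius
N = 10        # radial intervals
DR = R / N
DT = 1.0      # s, satisfies the explicit stability limit for these values

def diffuse(c, j_flux):
    """One step of dc/dt = (D/r^2) d/dr (r^2 dc/dr) on nodes r_j = j*DR.
    j_flux > 0 means lithium leaving the particle (deintercalation)."""
    new = c[:]
    # centre node: symmetric limit of the spherical Laplacian
    new[0] = c[0] + DT * 6.0 * D * (c[1] - c[0]) / DR**2
    # ghost node enforcing D*dc/dr = -J at r = R
    ext = c + [c[N - 1] - 2.0 * DR * j_flux / D]
    for j in range(1, N + 1):
        r = j * DR
        rp, rm = r + DR / 2.0, r - DR / 2.0
        lap = (rp**2 * (ext[j + 1] - ext[j])
               - rm**2 * (ext[j] - ext[j - 1])) / (DR**2 * r**2)
        new[j] = c[j] + DT * D * lap
    return new

c = [20000.0] * (N + 1)     # mol/m^3, uniform initial concentration
for _ in range(100):
    c = diffuse(c, 1.0e-6)  # constant deintercalation flux, mol/(m^2 s)
```

After a period of discharge, the surface concentration falls below the centre value; it is this surface concentration $c^{\textrm{surf},i}$ that enters the kinetic expressions introduced next.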
The electrode reaction is characterized by the Butler-Volmer kinetics equation \cite{Guo2011}. This expression describes the rate of the electrode reaction, i.e., the molar flux of lithium ions, at which a lithium ion consumes an electron and converts to a neutral atom inside the electrode, or vice versa, and is expressed as:
\begin{equation}
\label{eq:20}
J^i=2j^i_0\sinh(\frac{F\eta^i}{2RT}),
\end{equation}
where $F$ is the Faraday constant, $R$ is the gas constant, and $T$ is temperature. The activation overpotential $\eta^i$ is responsible for driving the current that was generated at the electrode during the lithium intercalation/deintercalation process. The exchange current density $j_0^i$ represents the oxidation and reduction currents without external impact and is given as:
\begin{equation}
\label{eq:21}
j^{i}_{0}=k^{i}F \sqrt{(c^{Max,i}-c^{\textrm{surf},i})c^{\textrm{surf},i}c^{\textrm{el}}},
\end{equation}
where $k^{i}$ denotes the reaction rate constant, $c^{\textrm{surf},i}$ stands for the lithium concentration at the surface of the electrode particle, $c^{Max,i}$ is the maximum concentration of lithium atoms in the electrode particle, and $c^{\textrm{el}}$ is the electrolyte concentration (constant for single particle model).
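A useful property of (\ref{eq:20}) is that it can be inverted in closed form for the activation overpotential, with the exchange current taken from (\ref{eq:21}). The rate constant and concentrations in the sketch below are illustrative assumptions.

```python
import math

# Inverting the Butler-Volmer relation J = 2*j0*sinh(F*eta/(2*R*T)) for the
# activation overpotential. Rate constant and concentrations are assumptions.

F = 96485.0     # C/mol, Faraday constant
R_GAS = 8.314   # J/(mol K), gas constant
T = 298.15      # K

def exchange_current(k, c_max, c_surf, c_el):
    """j0 = k*F*sqrt((c_max - c_surf)*c_surf*c_el), as in the text."""
    return k * F * math.sqrt((c_max - c_surf) * c_surf * c_el)

def overpotential(j_flux, j0):
    """eta = (2*R*T/F) * asinh(J / (2*j0))."""
    return (2.0 * R_GAS * T / F) * math.asinh(j_flux / (2.0 * j0))

j0 = exchange_current(2.0e-11, 5.0e4, 2.5e4, 1.0e3)  # assumed values
eta = overpotential(0.5, j0)
```

This closed-form inversion is part of what makes the single particle model tractable in an optimization framework: the overpotential is a smooth, monotone function of the flux rather than an implicit equation.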
Another parameter to characterize the lithium-ion cell is the so-called equilibrium potential or the open-circuit potential of the electrode that shows how the Gibbs free energy changes when lithium ions enter/leave the electrode \cite{Liu2016}. The functional dependence between the concentration of lithium on the surface of the electrode, $c^{\textrm{surf},i}$, and open-circuit potential, $OCP^i$, is determined experimentally for each type of electrode chemistry when there is no current flowing through the cell (equilibrium state). When the cell is charging or discharging, the potential of the electrode deviates from the open-circuit potential, and is known as the solid-phase potential, $\phi^i$, and is expressed as:
\begin{equation}
\label{eq:23}
\phi^i=\eta^i+OCP^i(c^{\textrm{surf},i})+I^iZ^i,
\end{equation}
where $I^i$ is the total current through the electrode and $Z^i$ stands for the resistance of the film on the electrode surface. Finally, the single particle model relates the applied charging/discharging current to the rate of the electrode reaction through equation (\ref{eq:24}) for the positive electrode and equation (\ref{eq:25}) for the negative electrode, as follows:
\begin{equation}
\label{eq:24}
J^p=-\frac{IR^p}{3\nu^p\varepsilon^pF}
\end{equation}
\begin{equation}
\label{eq:25}
J^n=\frac{IR^n}{3\nu^n\varepsilon^nF},
\end{equation}
where $\varepsilon^p$ and $\varepsilon^n$ denote the volume fractions of active material in the corresponding electrodes, and $\nu^p$ and $\nu^n$ are the volumes of each electrode.
Finally, the voltage of the lithium-ion cell is given as:
\begin{equation}
\label{eq:27}
V_{t}=\phi^p-\phi^n.
\end{equation}
From an optimization perspective, the single particle model \cite{Reniers2018,Cao2020} introduces current and concentration as decision variables; thus, it will be referred to in this work as the Concentration-Current Model. Similar to a Voltage-Current Model, the Concentration-Current Model represents only one lithium-ion cell, and the projection of this model onto the whole battery requires the assumption that all cells behave identically. The power supplied or consumed by a battery composed of $N$ lithium-ion cells is derived from the applied current and the voltage across the cell:
\begin{equation}
\label{eq:28}
P_{t}=NI_{t}V_{t}.
\end{equation}
The Concentration-Current Model allows introducing the specifications of the cell in box constraint form, as follows:
\begin{equation}
\label{eq:30}
V^{Min}\leq V_{t}\leq V^{Max}
\end{equation}
\begin{equation}
\label{eq:31}
-I^{MaxCh}\leq I_{t} \leq I^{MaxDis},
\end{equation}
where $V^{Min}$ and $V^{Max}$ identify the operational limits for voltage, and $I^{MaxCh}$ and $I^{MaxDis}$ are the maximum continuous charging and discharging currents.
The capacity of the cell is constrained in the Concentration-Current Model through the lithium concentration in both electrodes:
\begin{equation}
\label{eq:29}
c^{Min,i}\leq c_t^{i} \leq c^{Max,i},
\end{equation}
where $c^{Min,i}$ and $c^{Max,i}$ are limits for lithium concentration in an electrode.
The state-of-charge is not employed as a state variable in the Concentration-Current Model, but it can be used for comparison with other battery models. It can be derived from the instantaneous lithium concentration in one of the electrodes; for the negative electrode \cite{Plett2015}, it is given as:
\begin{equation}
\label{eq:35}
SoC_{t}=\frac{ c_t^{\textrm{surf},n}-c^{Min,n}}{c^{Max,n}-c^{Min,n}}Q,
\end{equation}
where $Q$ is the rated capacity of the lithium-ion cell.
When the Concentration-Current Model (\ref{eq:13})-(\ref{eq:29}) is a part of the optimization framework, the whole decision-making problem is a nonlinear programming problem. This problem can be solved using a nonlinear commercial solver such as IPOPT \cite{Wachter2006}.
The Concentration-Current Model can be naturally updated to include the physical description of the degradation \cite{Plett2015}. The growth of SEI is selected as a principal contributor to the degradation process \cite{Pinsona2013}. The SEI mathematical model employed here was taken from \cite{Ramadass2004,Ning2004}. The rate of the side reaction responsible for the formation of SEI is governed by the Tafel equation, as follows:
\begin{equation}
\label{eq:36}
J^{\textrm{sei}}=\frac{j^{\textrm{sei}}_0}{F}\exp(\frac{F}{2RT}\eta^{\textrm{sei}}),
\end{equation}
where $\eta^{\textrm{sei}}$ is the overpotential of the side reaction and $j^{\textrm{sei}}_0$ stands for the exchange current for the side reaction. The overpotential $\eta^{\textrm{sei}}$ is expressed as:
\begin{equation}
\label{eq:37}
\eta^{\textrm{sei}}=\phi^n-OCP^{\textrm{sei}}-I^nZ^n,
\end{equation}
where $OCP^{\textrm{sei}}$ is the open-circuit potential of the side reaction. The total molar flux through the negative electrode is composed of the intercalation/deintercalation flux, $J^{\textrm{Li},n}$, and the side-reaction flux, and is given as:
\begin{equation}
\label{eq:38}
J^{\textrm{sei}}+J^{\textrm{Li},n}=J^n
\end{equation}
The resistance of the SEI film on the surface of the negative electrode increases as SEI forms \cite{Ning2004} and is expressed as:
\begin{equation}
\label{eq:39}
Z^n=Z^{0,n}+\frac{\delta^{\textrm{sei}}(t)}{\kappa},
\end{equation}
where $\delta^{\textrm{sei}}$ refers to the thickness of SEI film and $\kappa$ is the ionic conductivity of SEI. It can be reasonably assumed that the rate of SEI growth is proportional to the rate of the side reaction \cite{Ramadass2004} and is given as:
\begin{equation}
\label{eq:40}
\frac{d\delta^{\textrm{sei}}(t)}{dt}=-\frac{J^{\textrm{sei}}M}{\rho},
\end{equation}
where $M$ denotes the molar mass of SEI and $\rho$ is its density. Equation (\ref{eq:40}) can be converted to a discretized version when included in the optimization framework. The increase in $Z^n$ will lead to a decline in charging/discharging power. Finally, the loss in the lithium inventory due to SEI formation during charging can be estimated as:
\begin{equation}
\label{eq:42}
C_{loss}=-\int_{t_1}^{t_2}\frac{3\varepsilon^p\nu^pJ^{\textrm{sei}}}{R^p}dt,
\end{equation}
where $[t_1,t_2]$ is the time interval during which the cell was charging.
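The SEI relations (\ref{eq:36}), (\ref{eq:39}) and (\ref{eq:40}) can be stepped forward in time with a simple explicit Euler scheme, which is one way the discretized version mentioned above could look. The sketch below follows the sign conventions of the equations as written; note that the direction of film growth depends on the sign convention adopted for $J^{\textrm{sei}}$ in the cited works. All numerical parameter values are hypothetical placeholders, not taken from the paper.

```python
import math

F, R, T = 96485.0, 8.314, 298.15  # Faraday constant, gas constant, temperature (K)

def sei_euler_step(delta, eta_sei, dt,
                   j0_sei=1.0e-7,    # exchange current for the side reaction (assumed)
                   z0=1.0e-2,        # initial film resistance Z^{0,n} (assumed)
                   kappa=1.0e-5,     # ionic conductivity of SEI (assumed)
                   molar_mass=0.162, # molar mass M of SEI (assumed)
                   density=1690.0):  # density rho of SEI (assumed)
    """One explicit Euler step of SEI growth: Tafel rate (Eq. 36),
    film resistance (Eq. 39) and thickness update (Eq. 40)."""
    j_sei = (j0_sei / F) * math.exp(F * eta_sei / (2.0 * R * T))  # Eq. 36
    z_n = z0 + delta / kappa                                      # Eq. 39
    delta_next = delta - (j_sei * molar_mass / density) * dt      # Eq. 40
    return delta_next, z_n, j_sei
```

Iterating this step over a charging interval $[t_1,t_2]$ and accumulating $J^{\textrm{sei}}$ gives a discrete counterpart of the lithium inventory loss in (\ref{eq:42}).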
Equations (\ref{eq:36})-(\ref{eq:42}) can be included in nonlinear optimization frameworks. The single particle model can be improved by adding a description of lithium-ion transport in the electrolyte, as in \cite{Perez2016_2} to derive the optimal charging protocol, or as in \cite{Gailani2020} to assess revenue in the capacity market under a given operation schedule.
\subsection{Summary}\label{Summary}
The reviewed battery models are found to be employed in decision-making problems that include stationary lithium-ion battery storage for power-system-level applications; those applications are discussed in the next section. These three models can be converted into one another: the Power-Energy Model can be seen as the Voltage-Current Model with a constant voltage equal to the nominal voltage of the cell \cite{Aaslid2020}; the Voltage-Current Model can be obtained from the Concentration-Current Model by matching each physical process inside the cell with a corresponding circuit component \cite{Berrueta2018}.
Interest in models that describe the dynamics of the processes inside the battery has increased recently (Figure \ref{fig:4}). All the papers in power system economic studies employing either the Voltage-Current Model or the Concentration-Current Model were published in the last three years \cite{Reniers2018,Reniers2020,Cao2020, Taylor2020,Rosewater2019,Aaslid2020}. This can be linked with the appearance of experimental data to benchmark the results of the ``optimal'' schedule \cite{Reniers2020, Taylor2020}. Nonetheless, the Voltage-Current Model and the Concentration-Current Model are computationally expensive because of the number of constraints \cite{Rosewater2019} and the need for more time steps to improve stability \cite{Reniers2018}.
\begin{figure}[t!]
\begin{center}
\includegraphics[trim=10cm 6cm 8cm 4cm, clip=true,angle=0,width=2.5in]{Publications_comparison.jpg}
\caption[Number of publications in techno-economic studies of power system with the battery model different from a simple power-energy model.]{\label{fig:4} Number of publications in techno-economic studies of Power System with the battery model different from a simple power-energy model}
\end{center}
\end{figure}
\section{Alternative Battery Models in Power Systems Studies}\label{Alternative Battery Models in Power Systems Studies}
In this section, publications where optimal charging/discharging schedules were identified for different BESS applications are reviewed with the aim of identifying how the BESS was modelled. The objectives of both system operators and independent storage owners are examined. The list of BESS applications is limited to system-level grid applications of BESS, namely: energy arbitrage, frequency regulation, operating reserve, peak shaving, renewable integration assistance, and transmission upgrade deferral. The papers used in this section are classified by operation and degradation models in Table \ref{tab2} and Table \ref{tab3}, respectively.
\begin{table}[t!]\centering
\caption{Literature survey on the battery grid applications (operation models).}
\begin{tabular}{@{}p{4cm}|p{3cm}|p{3cm}|p{3cm}@{}}
\hline
\textbf{Application} & \textbf{Power-Energy Model} &\textbf{Voltage-Current Model} & \textbf{Concentration-Current Model} \\
\hline
Energy arbitrage & \cite{Walawalkar2007,Lamont2013,Awad2014,Wankmuller2017,Fares2018,Maheshwari2020,Arcos-Vargas2020, He2018, Sakti2017,Pandzic2019,Gonzalez-Castellanos2020,He2020} & \cite{Reniers2018} & \cite{Reniers2020,Reniers2018} \\
Frequency regulation & \cite{Zhang2018,Zhu2019,He2016,Xu2018,Shi2018,Sorourifar2020}&
&
\cite{Cao2020} \\
Operating reserve & \cite{Xu2018_Factoring,Padmanabhan2020,He2016} & &
\\
Peak shaving & \cite{Schneider2020} & \cite{Taylor2020} & \cite{Rosewater2019} \\
Renewable integration assistance & \cite{Dicorato2012,Bhattacharjee2020,Shin2020,Jafari2020} & & \\
Transmission upgrade deferral & \cite{Fernandez-Blanco2017,Falugi2018,Khani2016,Arteaga2021} & & \\
\hline
\end{tabular}
\label{tab2}
\end{table}
\begin{table}[t!]\centering
\caption{Literature survey on the battery grid applications (degradation models).}
\begin{tabular}{@{}p{4cm}|p{3cm}|p{3cm}|p{3cm}@{}}
\hline
\textbf{Application} & \textbf{Without degradation} &\textbf{Empirical model} & \textbf{Physics-based model} \\
\hline
Energy arbitrage & \cite{Walawalkar2007,Lamont2013,Awad2014,Sakti2017,Pandzic2019,Gonzalez-Castellanos2020} & \cite{Wankmuller2017,Fares2018,Maheshwari2020,Arcos-Vargas2020, He2018,He2020} & \cite{Reniers2018,Reniers2020} \\
Frequency regulation & \cite{Zhang2018,Zhu2019} &
\cite{He2016,Xu2018,Shi2018,Sorourifar2020} & \cite{Cao2020} \\
Operating reserve & \cite{Nguyen2019} &
\cite{Xu2018_Factoring,Padmanabhan2020,He2016} & \\
Peak shaving & \cite{Taylor2020}& \cite{Schneider2020} & \cite{Rosewater2019} \\
Renewable integration assistance & \cite{Dicorato2012,Bhattacharjee2020} & \cite{Shin2020,Jafari2020} & \\
Transmission upgrade deferral & \cite{Fernandez-Blanco2017,Falugi2018,Khani2016,Arteaga2021} & & \\
\hline
\end{tabular}
\label{tab3}
\end{table}
\subsection{Energy arbitrage}\label{Energy arbitrage}
Energy arbitrage is exploited by an independent BESS operator to generate revenue by charging the battery under low-price conditions and discharging when prices are higher. The economic analysis of the energy arbitrage application of BESS was accelerated by the restructuring and deregulation of the electric utility industry, and one of the first works on this topic was done by Graves \textit{et al.} \cite{Graves1999}.
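The essence of this buy-low/sell-high logic under a price-taker Power-Energy Model can be sketched as picking the best buy/sell hour pair for a single cycle; the efficiency and price values below are illustrative only, and the sketch deliberately ignores power limits and degradation.

```python
def best_single_cycle_profit(prices, energy_mwh, round_trip_eff=0.9):
    """Best profit from one charge/discharge cycle: buy the full energy
    at hour i and sell it back, net of round-trip losses, at a later
    hour j. A price-taker sketch, not a market-clearing model."""
    best = 0.0
    for i, buy in enumerate(prices):
        for sell in prices[i + 1:]:
            best = max(best, energy_mwh * (round_trip_eff * sell - buy))
    return best

# hypothetical hourly prices ($/MWh) for a 1 MWh battery
print(best_single_cycle_profit([50.0, 20.0, 60.0, 30.0], 1.0))  # approximately 34.0
```

A full arbitrage schedule adds intertemporal state-of-energy constraints and is normally posed as a linear program, but the pairwise search above already shows why round-trip efficiency shrinks the set of profitable price spreads.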
Most works that estimate the economic benefits of exploiting energy arbitrage by BESS employ a simple Power-Energy Model. In \cite{Walawalkar2007}, perfect information about price and demand and a price-taker model were used to assess the economic benefits of incorporating energy storage into the New York electricity market. The effect of large-scale energy storage on electricity price formation was examined in \cite{Lamont2013}. The strategic behavior of a BESS operator under price uncertainty in day-ahead and real-time electricity markets was addressed in \cite{Awad2014}. Several studies \cite{Wankmuller2017,Fares2018,Maheshwari2020,Arcos-Vargas2020, He2018} combined a Power-Energy Model with an empirical degradation model to estimate BESS profitability for energy arbitrage. An empirically based degradation formulation for the energy capacity, with the assumption of a linear dependence between energy throughput and the extent of capacity fading, significantly changed the cost-effectiveness of BESS investment \cite{Wankmuller2017}. In \cite{Fares2018}, the analysis of energy arbitrage was performed with so-called equivalent full cycles. In their market settings, the authors demonstrated that an extended calendar life is more profitable than an extended cycle life: the BESS operator would choose charging/discharging cycles with greater arbitrage trading profit if a longer calendar life were possible. Maheshwari and co-workers \cite{Maheshwari2020} applied their own experimental data with lithium-ion cells to derive a nonlinear degradation model. They claimed that degradation quantification based on DoD combined with cycle life and energy throughput fails to acknowledge the impact of state-of-energy and applied current, which were captured in their work through linear interpolation of their experimental data. 
In \cite{Arcos-Vargas2020}, the optimal value of the maximum charging/discharging power was selected for a fixed capacity, considering its degradation over the battery lifespan. Mohsenian-Rad \cite{Mohsenian-Rad2016} introduced charging and discharging bidding strategies in a stochastic framework for self-schedule and economic bids. Using a short-term marginal cost per unit of degradation, derived from energy throughput and the capital cost of the battery, He \textit{et al.} \cite{He2018} obtained a more accurate estimate of the energy arbitrage business case for the California day-ahead electricity market. Overall, energy arbitrage operation considering ageing of the battery gives a better estimate in the cost/benefit analysis, but the methods used to characterize ageing are completely empirical and the results were not validated against real-world experience.
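A marginal degradation cost per unit of energy throughput, of the kind used in works such as \cite{He2018}, can be sketched as the capital cost spread over the lifetime throughput. The figures below are hypothetical, and the convention of counting discharged energy only is an assumption, not necessarily the one used in the cited papers.

```python
def marginal_degradation_cost(capital_cost, rated_energy_kwh,
                              cycle_life, dod=1.0):
    """Degradation cost per kWh of discharged energy, assuming capacity
    fade is linear in energy throughput. Counts discharged energy only,
    which is one common (but not universal) convention."""
    lifetime_throughput_kwh = rated_energy_kwh * dod * cycle_life
    return capital_cost / lifetime_throughput_kwh

# e.g. a hypothetical $300/kWh cell rated for 3000 full-DoD cycles
print(marginal_degradation_cost(300.0, 1.0, 3000))  # 0.1 $/kWh
```

Adding this per-kWh cost to the dispatch objective penalizes cycling, which is exactly how such throughput-based terms change the profitability estimates described above.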
Some attempts have been made to improve the accuracy of the Power-Energy Model used for BESS representation in energy arbitrage, including functional dependencies between energy efficiency, state-of-energy, and maximum charging/discharging power. Sakti \textit{et al.} \cite{Sakti2017} updated a Power-Energy Model by considering the nonlinear dependence of the maximum charging/discharging power limits and energy efficiency on state-of-energy. Although the model was empirical in its formulation, the authors affirmed that a simple Power-Energy Model of BESS may overestimate the earnings from energy arbitrage by 10\% compared with their most sophisticated model for a more volatile price signal resolution over the 7-day decision horizon. The authors of \cite{Pandzic2019} applied their experimental findings to better define the limits of available charging power, reflecting the constant-current/constant-voltage charging operation of the lithium-ion cell; the available charging power depends on the state-of-energy. If a generic Power-Energy Model is used for BESS characterization, the optimal schedule in the energy arbitrage application may overestimate the profit by 300\% compared with the actual output obtained from a scaled laboratory BESS that executed an ``optimal'' schedule for their case study. The authors of \cite{Gonzalez-Castellanos2020} studied the optimal operation of BESS deployed in the IEEE-14 system to exploit arbitrage opportunities. Similar to \cite{Sakti2017}, their BESS model was an upgrade of a simple Power-Energy Model where fixed parameters were replaced with state-of-energy-dependent ones. The use of a nonlinear dependence between energy efficiency, charging/discharging power limits, and state-of-energy was justified by the Voltage-Current Model, as suggested by \cite{Berrueta2018}. 
Using empirical charging/discharging curves, the authors concluded that for their case study a simplistic Power-Energy Model overestimated economic opportunities by 15\% and resulted in operation of the battery beyond the recommended operating envelope. He \textit{et al.} \cite{He2020} combined a simple Power-Energy Model and the energy throughput method for degradation description to derive the economic EoL of BESS. The term economic EoL refers to the stage of the BESS state-of-health where the profit opportunities have vanished. In their optimization protocol, the energy efficiency, power, and energy capacity declined over time and as a result of cycling.
The energy arbitrage application is also a favourable choice to verify the robustness of more detailed operation models of BESS. Reniers \textit{et al.} \cite{Reniers2018} calculated the economic benefits from energy arbitrage over one year of operation for three different battery models, i.e., a Power-Energy Model, a Voltage-Current Model and a Concentration-Current Model, and three different degradation formulations. The authors compared these degradation descriptions with experimental data and concluded that the Concentration-Current Model was the most precise. They found that the energy arbitrage market participation strategy obtained for the Concentration-Current Model was considerably more profitable and caused less degradation. In a more recent publication by the same authors \cite{Reniers2020}, the optimal charging/discharging dispatch for energy arbitrage with the Power-Energy Model and the Concentration-Current Model was used to cycle lithium-ion cells under laboratory conditions. The profit and the capacity loss were more accurately predicted by the Concentration-Current Model. Moreover, the physics-based approach to battery operation and ageing characterization reduced degradation by 30\% and improved revenue by 20\% compared with the conventional Power-Energy Model with empirical degradation.
\subsection{Frequency regulation}\label{Frequency regulation}
Frequency regulation is one of the ancillary service products; it is needed to keep frequency within an acceptable range when there is a mismatch between supply and demand. The fast ramping capabilities of BESS make it a favourable choice for frequency regulation: for example, 75\% of large-scale BESS power capacity in the US is used for balancing momentary fluctuations in the system \cite{US_energy_2020}. The frequency regulation application of BESS in power system economics studies is rarely examined on its own and is usually combined with energy arbitrage.
The BESS is mostly modelled through a generic Power-Energy Model \cite{Zhang2018, Zhu2019} for charging/discharging performance, whereas the degradation is characterized by means of an empirical relationship \cite{He2016,Xu2018,Shi2018,Sorourifar2020}. In \cite{Zhang2018}, the authors investigated a two-level, planning and operation, strategy for a BESS owner to maximize profit from the frequency regulation market. Zhu \textit{et al.} \cite{Zhu2019} derived a strategic operation for an aggregator-coordinated BESS. Xu \cite{Xu2018} found the optimal control policy for BESS deployed in their case study to boost profit from the performance-based frequency regulation market using chance-constrained optimization. The author used a cycle counting algorithm to characterize the long-term performance. The optimal bidding strategy in the frequency regulation market for an electric vehicle aggregator with different participation scenarios was outlined in \cite{Vagropoulos2013}. Their model was able to simulate the transition from constant-current mode to constant-voltage mode of the charging operation. They highlighted that their representation of the lithium-ion battery resulted in a more accurate estimate of financial benefits: there was up to a 20\% difference compared with a generic Power-Energy Model associated with BESS. Shi and co-workers \cite{Shi2018} coupled a Power-Energy Model with three different degradation models, namely, a fixed cost of degradation, the energy throughput method, and a cycle-based model built on the rainflow algorithm, to explore the financial benefits of frequency regulation for the BESS owner in their case study. It was shown that the rainflow cycle-based degradation model projects up to 27.6\% growth in revenue, and thus a greater return on investment, and an almost 85\% increase in battery life expectancy. 
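The rainflow cycle counting used for degradation in works such as \cite{Shi2018,Xu2018} can be sketched with the classic three-point stack method. The simplified version below (residue counted as half cycles, no special handling of the starting point) is for illustration only and is not a reference implementation of the algorithm used in the cited papers.

```python
def rainflow_cycles(series):
    """Simplified rainflow counting on a state-of-charge profile.
    Returns (range, count) pairs: count 1.0 for extracted full cycles,
    0.5 for residual half cycles."""
    # reduce the profile to its turning points (local extrema)
    tp = [series[0]]
    for x in series[1:]:
        if len(tp) >= 2 and (tp[-1] - tp[-2]) * (x - tp[-1]) > 0:
            tp[-1] = x          # same direction: extend the excursion
        else:
            tp.append(x)
    cycles, stack = [], []
    for point in tp:
        stack.append(point)
        while len(stack) >= 3:
            x_rng = abs(stack[-1] - stack[-2])
            y_rng = abs(stack[-2] - stack[-3])
            if y_rng <= x_rng:
                cycles.append((y_rng, 1.0))   # full cycle of range y_rng
                del stack[-3:-1]              # drop the two older points
            else:
                break
    for a, b in zip(stack, stack[1:]):        # residue as half cycles
        cycles.append((abs(a - b), 0.5))
    return cycles
```

Each counted cycle range (a proxy for DoD) can then be mapped through an empirical cycle-life curve to obtain a degradation cost, which is the role rainflow counting plays in the optimization frameworks above.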
In \cite{Sorourifar2020}, a Power-Energy Model coupled with the energy throughput ageing quantification technique was used to find the optimal energy capacity of BESS, which profits from energy arbitrage and frequency regulation with compensation for capacity and energy provided in the California day-ahead and real-time energy and ancillary services markets. Assuming a constant degradation rate, the loss of capacity was directly incorporated into the operation framework as a constraint.
In \cite{Cao2020}, a Concentration-Current Model was employed to find the optimal schedule of BESS for the frequency regulation market. The strategy was compared with one obtained using a Power-Energy Model with degradation cost, which was proportional to capacity committed to frequency regulation, in the objective function and with one where degradation was calculated using the SEI method in postprocessing of the optimal schedule. The authors reported a 143\% increase in lifetime and 35\% growth in profit compared to a generic Power-Energy Model.
\subsection{Operating reserve}\label{Operating reserve}
The operating reserve is intended for the grid frequency management if a significant unpredictable deviation to the supply/demand balance occurs in the system. The operating reserve revenue stream for the BESS owner is usually considered as an additional source of revenue and it is bundled with other BESS applications in power system economic analysis \cite{Arteaga2019}.
The strategic operation of BESS that provides operating reserve services is usually derived with a simple Power-Energy Model bundled with empirical ageing models \cite{Xu2018_Factoring,Padmanabhan2020,He2016}. Xu \textit{et al.} \cite{Xu2018_Factoring} proposed a dispatch strategy for BESS considering the cost of battery degradation formulated using the rainflow cycle counting algorithm. In \cite{Padmanabhan2020}, the optimal strategy for a BESS operator that submits bids into both the energy and operating reserve markets was derived by combining the impact of DoD and the discharge rate in the degradation cost function. Reference \cite{He2016} modelled the optimal market participation of BESS in three day-ahead markets: energy, frequency regulation and operating reserve. The profit of BESS was calculated on a daily basis and prorated by the number of the battery's daily equivalent 100\%-DoD cycles. Nguyen \textit{et al.} \cite{Nguyen2019} replaced the fixed energy efficiency of a generic Power-Energy Model with one that depends nonlinearly on the charging/discharging power and state-of-energy. The parameters for their empirical model were calculated from the charging/discharging curve provided by the cell manufacturer. The proposed nonlinear model estimated almost 17\% less total revenue over one year compared with the simple Power-Energy Model if it was installed in their market environment. Perez \textit{et al.} \cite{Perez2016} examined the impact of applying practical box constraints on state-of-energy, used to limit degradation, on the financial potential of BESS providing several services including operating reserve. Although there was a drop in revenue from energy arbitrage, the net revenue increased thanks to the operating reserve and frequency regulation contributions because of the extended lifespan.
\subsection{Peak shaving}\label{Peak shaving}
A BESS can replace the need to construct new peaking generation capacity to meet the peak demand from a highly volatile load. If a BESS is installed on the load side, it can also work as a peak shaver to minimize the total electricity bill. There are several works where charging/discharging decisions were found for the peak shaving application.
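A minimal greedy dispatch illustrates the peak-shaving logic with an idealized, lossless Power-Energy Model: discharge whenever load exceeds a threshold, recharge when it is below. The threshold, battery sizes, and the choice to start fully charged are arbitrary illustration values, not taken from any of the cited studies, which use optimization rather than this greedy rule.

```python
def peak_shave(load, threshold, energy_cap, power_cap):
    """Greedy peak-shaving sketch with a lossless Power-Energy Model.
    Returns the net load seen by the grid after battery dispatch."""
    soe = energy_cap          # assumption: battery starts full
    net = []
    for l in load:
        if l > threshold:
            d = min(power_cap, l - threshold, soe)   # discharge to shave
            soe -= d
            net.append(l - d)
        else:
            c = min(power_cap, threshold - l, energy_cap - soe)  # recharge
            soe += c
            net.append(l + c)
    return net

print(peak_shave([5.0, 12.0, 6.0], 10.0, 3.0, 2.0))  # [5.0, 10.0, 8.0]
```

Adding charging/discharging efficiencies, a voltage model, or degradation terms turns this rule into the optimization problems studied in the works reviewed below.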
Schneider \textit{et al.} \cite{Schneider2020} investigated a strategic investment into BESS for a bundled peak shaving and energy arbitrage business model. Their optimization framework included a Power-Energy Model and the rainflow cycle counting paradigm for battery degradation, and it was solved heuristically in three stages: at the first stage, the daily scheduling maximizes energy arbitrage revenue corrected by a degradation penalty term; at the second stage, the total monthly revenue from peak shaving and energy arbitrage is calculated; the third stage defines the optimal schedule over the year. Taylor \cite{Taylor2020} employed a Voltage-Current Model formulation for BESS and compared it with a Power-Energy Model. The parameters of the Voltage-Current Model were measured in their own lab experiments with lithium iron phosphate battery cells. First, the accuracy of the model was demonstrated by comparison with the experiment. Second, the optimal schedule of BESS based on the Voltage-Current Model formulation outperformed the results with the Power-Energy Model when the optimal schedule of each model was processed by the battery hardware simulation tool. Although the simulation was carried out at only two fixed levels of the current rate, it was clear that a generic Power-Energy Model did not ensure reliable performance. In the review of battery models for optimal control \cite{Rosewater2019}, the control strategy for BESS installed to reduce the total electricity bill over a 24-hour decision horizon was obtained for three different battery models, namely a Power-Energy Model, a Voltage-Current Model and a Concentration-Current Model. The cost reduction in the bill was achieved using the energy arbitrage and peak shaving applications of BESS. The degradation of the lithium-ion cell was not a part of the analysis. 
Although the net reduction in the electricity bill was almost the same for all models and stood at about 8\%, the authors claimed that the control strategy for the Power-Energy Model was likely infeasible. The Voltage-Current Model and Concentration-Current Model gave similar results for the state variables such as voltage and current.
\subsection{Renewable integration assistance}\label{Renewable integration assistance}
BESS can also accelerate the integration of intermittent renewable capacity to the grid by mitigating its natural fluctuation and can increase the return on investment in a renewable generation project if it is combined with a BESS \cite{US_energy_2020}. The objective of the optimization problems with this application can be either finding the optimal operation schedule or the planning problem where the optimized BESS size is explored to maximize return on investment over the projected lifespan of a battery.
Similar to operation studies with other BESS applications, the renewable integration assistance application of BESS is mostly modelled with a simple Power-Energy Model \cite{Dicorato2012,Bhattacharjee2020}. In \cite{Dicorato2012}, the size and optimal dispatch were determined for BESS paired with a wind farm. The solution enhanced the operational stability and economic feasibility of the wind power project. Bhattacharjee \textit{et al.} \cite{Bhattacharjee2020} used a generic Power-Energy Model to optimally size energy storage and a transmission interconnector, coupled with a wind power facility, for strategic participation in the energy market. Recently, more works \cite{Shin2020,Jafari2020} have been presented with BESS models that can ensure reliable performance and characterize capacity and power fading. Using a heuristic algorithm, Shin \cite{Shin2020} investigated the impact of BESS size on degradation while searching for the optimal BESS capacity supplementing photovoltaic generation for two scenarios of battery use: a constant usable energy capacity and a fixed DoD. Jafari \cite{Jafari2020} studied the economic impact of pairing an offshore wind farm with BESS. Their model of BESS included varying energy efficiency and power limits. Calendar and cycling ageing of capacity were also incorporated in the model, through a linear decline assumption for calendar ageing and a combination of energy throughput with the number of equivalent full cycles for cycling ageing, respectively. The revenue from the optimal schedule with the enhanced Power-Energy Model without degradation was 4\% less than for the schedule obtained with a simplistic Power-Energy Model. When one of the degradation models was added to the optimization framework, the estimated revenue decreased by 35\%. 
The Voltage-Current Model was employed in \cite{Aaslid2020} to determine the optimal schedule of BESS over a 36-hour optimization horizon while minimizing the electricity bill of a user with on-site photovoltaic generation. The authors showed that this model avoids unsafe operation compared to the Power-Energy Model.
\subsection{Transmission upgrade deferral}\label{Transmission upgrade deferral}
Another BESS application that brings significant changes to how long-term planning of the power system is performed is the deferral or replacement of traditional power system infrastructure such as transmission lines. When BESS is installed downstream of a congested transmission corridor, it can relieve congestion by discharging to meet the additional demand from the load. This is why BESS is considered a virtual transmission asset for the future grid. The strategic deployment of BESS can increase the asset utilization rate in the grid if it is planned correctly \cite{Pandzic2015}.
The class of planning problems from the system operator perspective, where the optimal size of BESS and its location in the grid are determined, is usually solved with the objective to minimize the sum of the capital cost of BESS and the operating costs of the system. A simple Power-Energy Model without degradation is usually incorporated into these optimization problems. For example, in \cite{Fernandez-Blanco2017}, a static investment model was used to find siting and sizing decisions within the Western Electricity Coordinating Council interconnection by means of the stochastic programming. Falugi and co-workers \cite{Falugi2018} studied the dynamic planning problem of the joint transmission and BESS deployment in the IEEE 118-bus system for the planning horizon of 16 years. The optimal decision plan was updated every four years.
A BESS that provides transmission services should be paid through rate-based compensation. However, if the BESS also provides other services to the grid, it should be additionally compensated through the market. Such multiple-service operation of BESS and the corresponding compensation scheme currently face several regulatory barriers. The market model and corresponding policies for storage as a transmission asset are being investigated by several utilities \cite{CAISO_SATA_2018}. The operation strategy for energy storage that provides a congestion relief service and also obtains revenue from energy arbitrage is examined in \cite{Khani2016}. The optimal premium paid to the BESS owner as rate-based compensation to relieve congestion is explored in \cite{Arteaga2021}. Both papers utilized a generic Power-Energy Model without degradation.
\section{Discussion}\label{Discussion}
The system-level operational and planning studies predominantly employed generic Power-Energy Models to characterize the BESS charging/discharging performance and various empirical models to quantify degradation. The reason for this is the simplicity and linearity of Power-Energy Models. From the reviewed literature it can be concluded that advanced battery models can provide more accurate estimates for the economic potential of BESS, feasible charging/discharging schedule, and more precise projection of the capacity and charging/discharging power fading.
Several authors \cite{Sakti2017,Pandzic2019, Vagropoulos2013,Gonzalez-Castellanos2020} enhanced a simplistic Power-Energy Model with functional dependencies between energy efficiency, maximum charging/discharging power and state-of-energy to better model typical characteristics of the lithium-ion cell. A linear approximation was applied to all mentioned relationships to make them tractable for the optimization problems used in those studies. However, only in \cite{Sakti2017} was the final problem a mixed-integer linear programming problem, whereas the authors of \cite{Pandzic2019, Vagropoulos2013, Gonzalez-Castellanos2020} arrived at a linear programming problem. The optimal schedule for BESS with a simplistic Power-Energy Model, in general, overestimated the economic valuations.
The energy arbitrage application was used for the assessment of the BESS models from \cite{Sakti2017, Pandzic2019,Gonzalez-Castellanos2020}, whereas the frequency regulation service revenue stream was assessed in \cite{Vagropoulos2013}. The common drawback of these models is that they are phenomenological by nature: limited experimental data were used to fit their mathematical models for selected operating conditions. Moreover, all models were run over a narrow optimization horizon of one to seven days. The degradation of BESS was not considered, as only the charging/discharging performance was targeted for improvement. In \cite{Jafari2020}, the strategic sizing of BESS with renewable generation was not part of the problem, but the authors claimed a simplified model could exaggerate the revenue by 35\%. This is a significant error in the evaluation of economic feasibility and may lead to misleading conclusions in \cite{Bhattacharjee2020}, where a simplistic model of BESS was used for planning studies.
An increasing number of studies for different BESS applications, such as \cite{Maheshwari2020} for energy arbitrage, \cite{Schneider2020} for peak shaving, \cite{Xu2018} for frequency regulation, and \cite{Padmanabhan2020} for operating reserve, showed that the economic viability of a project with BESS is not overestimated if degradation models are used. However, the degradation formulations used in the mentioned works are empirical and therefore inherently limited.
Among the three models for simulating the charging/discharging profile, the Concentration-Current Model can be seen as the most promising, since it characterizes the dynamics of the physical processes inside the cell and can be coupled with a physics-based degradation model such as SEI formation. The cost-benefit analysis of BESS with the Concentration-Current Model was performed only for individual battery applications, such as energy arbitrage in \cite{Reniers2018}, frequency regulation \cite{Cao2020}, and peak shaving \cite{Rosewater2019}. The co-optimization of various BESS applications was not considered when this model was employed. This high-fidelity model was also not used for the optimal sizing of BESS for transmission services or renewable integration assistance. As the optimization problem with the Concentration-Current Model is computationally expensive, the authors of \cite{Rosewater2019} considered only a 24-hour interval for their case study, whereas the authors of \cite{Reniers2018} and \cite{Cao2020} utilized a model predictive control scheme with horizons of 24 hours and 1 hour, respectively.
Based on these observations, several directions are suggested for future development:
\begin{itemize}
\item The impact of a detailed model of BESS without degradation on the profit-maximizing operation of BESS for various grid applications was quantified by many researchers. However, there is no consensus on how crucial a more detailed model is for short-term operation. For example, in \cite{Pandzic2019} the high-level model overestimated the profit by 20\%, a 4\% difference was demonstrated in \cite{Jafari2020}, and no difference was stated by \cite{Reniers2020}.
\item Economic feasibility analysis with concentration-current models was performed only for energy arbitrage, frequency regulation, and peak shaving. The analysis for other services, or for several stacked applications of BESS, was out of consideration.
\item The physics-based model has never been considered for planning studies, as system-level planning studies predominantly employ a generic Power-Energy Model to characterize the BESS charging/discharging performance. As the lifespan of the lithium-ion cells in a BESS is a quarter or half of that of traditional transmission and generation assets, the integration of BESS into the grid requires a multistage planning approach, where a replacement schedule is part of the implementation plan and investment. Long-term multistage battery planning with replacement is completely out of consideration in the literature.
\item The increased number of variables and constraints in the optimization framework brought by voltage-current and concentration-current models should be tackled with parallel computing as it was performed for a longer decision-making horizon with a Power-Energy model considering degradation in \cite{Sorourifar2020}.
\item The stochastic formulation of the strategic operation of BESS is a computationally expensive problem by itself. The need for a more detailed battery model in this optimization framework should be justified.
\item A highly efficient, time-saving algorithm is needed for the nonlinear optimization problem of the planning and scheduling of the grid level applications if a sophisticated physics-based lithium-ion model is employed.
\end{itemize}
\section{Conclusion}\label{Conclusion}
In this paper, three BESS operation models with different levels of detail were reviewed, and their governing equations appropriate for an optimization framework were presented. Several degradation models were discussed and their differences were highlighted. A comprehensive literature review was performed of research papers deriving optimal operation and planning decisions for business cases with battery storage across various system-level applications. This review showed that a more sophisticated battery description ensures more accurate estimates of BESS economic benefits, a longer projected lifespan, and operation within safety limits. Currently, the number of research papers with a detailed physics-based model of a BESS is limited. The integration of such models into a broader spectrum of strategic operation and planning problems presents an attractive direction for future research.
\bibliographystyle{model1-num-names}
\section{Introduction}
\label{sec:Introduction}
The emergence of edge computing, which brings analytics, decision making, automation, and security tasks close to the source of data and applications, has raised new opportunities and challenges in the area of IoT and embedded systems. This new computing trend enables the execution of cloud-native tasks on resource-limited embedded systems. The versatile and dynamic behavior of these tasks has changed the traditional notion of an embedded system as a small system tuned to efficiently run a specific task inside a larger system. Recently, Google has introduced the tensor processing unit (TPU) to efficiently run neural-network-based machine learning algorithms on the edge \cite{Jouppi:2017:IPA:3079856.3080246}, and Amazon has announced AWS Greengrass to bring cloud computing to the edge \cite{amazon-aws-greengrass}.
New embedded systems demand new features, such as efficiently working with the Internet, providing high computational power, consuming low energy, delivering real-time responses at the scale of machinery with nanosecond latency, and working collaboratively with other similar systems to finish a shared task. Heterogeneous embedded systems are a promising way to cope with these ever-increasing demands. Toward this end, FPGAs and GPUs, the two common accelerators, have recently been integrated into embedded systems by industry to address the new requirements. However, integrating them in one embedded system to collaboratively execute a complex task, while fulfilling performance, latency, predictability, and energy consumption constraints, is still a challenge.
Fig.~\ref{fig:EmbeddedFPGAandGPUinaSystem} shows the overview of an embedded system consisting of three processing elements (PEs): a multi-core CPU, a many-core GPU, and an FPGA. The main feature of this architecture is the direct access of the PEs to the main memory using the same address space and a shared memory controller, in contrast to current desktop platforms in which FPGAs and GPUs communicate with system memory via PCIe. This feature enables the accelerators to benefit from the \textit{zero-copy} data transfer technique without the performance and energy overhead of PCIe in between, which improves memory bandwidth utilization and reduces the inter-PE communication overhead. Therefore, each PE can potentially achieve its highest performance in executing an application. However, choosing a proper PE to run a given task with maximum performance and minimum energy consumption is not an easy decision to make. To inform this decision, we study and compare the performance and energy consumption of the accelerators (i.e., the GPU and FPGA) running different tasks.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{FGC-system-overview-new.png}
\caption{Embedded FPGA and GPU in a System}
\label{fig:EmbeddedFPGAandGPUinaSystem}
\end{figure}
To this end, we need a programming model for each PE, considering the type of application. There are many academic and industrial programming models, libraries, and tools to efficiently implement different applications on embedded CPUs and GPUs. However, there is no specific design methodology for using embedded FPGAs in a system, despite the availability of high-level synthesis (HLS) tools based on the C/C++, SystemC, and OpenCL languages. This is mainly because, in FPGA-based accelerator design, designers should first provide a hardware architecture suitable for a given task and then implement the task algorithm accordingly. This makes FPGA-based accelerator design complex, and more research is needed to find systematic approaches for addressing different types of applications.
In summary, three main challenges in designing a heterogeneous FPGA+GPU platform should be studied, which are as follows.
\begin{itemize}
\item \textit{Design challenge}: implementing a given task on FPGA that can compete with that of the GPU
\item \textit{Modeling challenge}: evaluating and predicting the performance and energy consumption of FPGA and GPU
\item \textit{Scheduling challenge}: distributing parallel task between FPGA and GPU in order to optimize the overall performance and energy consumption
\end{itemize}
Focusing on embedded FPGA and GPU, this paper explains the opportunities that addressing the above challenges can bring to the edge computing platforms.
We, first, propose a systematic stream computing approach for implementing various applications on embedded FPGAs using HLS tools and then study the opportunities and challenges that a synergy among FPGA and GPU in an embedded system can provide for designers. We study a few applications that their collaborative execution on the heterogeneous system brings higher performance and lower energy consumption. We show that the collaboration between embedded FPGA and GPU can bring a renaissance to the edge computing scenario.
The rest of this paper is organized as follows. The next section explains the motivations and contributions behind this paper. The previous work is reviewed in Section~\ref{sec:Previouswork}. The proposed FPGA stream computing engine is discussed in Section~\ref{sec:DesignChallenge}. Section~\ref{sec:ModellingChallenge} studies the performance and power modeling techniques. The scheduling challenge is explained in Section~\ref{sec:SchedulingChallenge}. The experimental setup is addressed in Section~\ref{sec:ExperimentalSetups}. Section~\ref{sec:ExperimentalResults} explains the experimental results. Finally, Section~\ref{sec:Conclusions} concludes the paper.
\section{Motivations and Contributions}
\label{sec:MotivationsandContributions}
Taking the histogram operation, one of the common tasks in image processing, data mining, and big-data analysis, this section explains the motivations and contributions behind this paper. For this purpose, we have considered two embedded systems: the Nvidia Jetson TX1~\cite{Nvidia-JetsonTX1} and the Xilinx Zynq MPSoC (ZCU102 evaluation board)~\cite{XilinxZynqMPSOC-TRM}. Fig.~\ref{fig:FPGAandGPUEmbeddedSystems} shows the block diagrams of the different parts of these systems. The Zynq MPSoC, in Fig.~\ref{fig:FPGAandGPUEmbeddedSystems}(a), mainly consists of two parts: the Processing System (PS) and the Programmable Logic (PL). These two subsystems have direct access to the system DDR memory. The PL (i.e., the FPGA) performs its memory transactions through a few high-performance ports: four HP ports, two HPC ports, and an ACP port. In this paper, we focus on the four HP ports, which can collaboratively transfer data between the FPGA and memory, utilizing all the memory bandwidth available to the FPGA.
The Nvidia Jetson TX1, shown in Fig.~\ref{fig:FPGAandGPUEmbeddedSystems}(b), is a system-on-module (SoM) combining the Nvidia Tegra X1 SoC with 4GB LPDDR4 memory and some other modules~\cite{Nvidia-JetsonTX1}. The Nvidia Tegra X1 SoC consists of a Maxwell GPU with 256 CUDA cores, 1.6GHz/s, 128K L2 cache, and 4 channel x 16bit interface to access the system memory.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{zynqMPSoC-and-Jetson.png}
\caption{FPGA and GPU Embedded Systems}
\label{fig:FPGAandGPUEmbeddedSystems}
\end{figure}
Two efficient implementations of the histogram are provided for the two embedded systems. The GPU implementation is written in CUDA and uses the NVIDIA Performance Primitives (NPP) library~\cite{Nvidia-NPP}. The FPGA implementation is written in C++ using the Xilinx SDSoC toolset and is based on a streaming pipelined computing approach similar to \cite{37325cedcd8d4056bcd3c16fc9b552d9}: it reads data from the system memory and modifies the histogram bins in each clock cycle.
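For illustration, such a streaming histogram kernel can be sketched in HLS-style C++ as follows. This is a simplified sketch rather than the exact SDSoC implementation; the 256-bin width, data types, and pragma placement are assumptions, and a real HLS design would additionally need dependence handling (e.g., bin banking) to sustain one bin update per clock cycle.

```cpp
#include <cstdint>
#include <cstddef>

// Simplified sketch of a streaming histogram kernel in HLS-style C++.
// Each iteration of the pipelined loop consumes one pixel and updates
// one bin; with II=1 the synthesized loop processes one pixel per
// clock cycle. Bin count and data types are illustrative.
void histogram_stream(const uint8_t *pixels, std::size_t n,
                      uint32_t bins[256]) {
    for (int b = 0; b < 256; ++b)   // clear the bins
        bins[b] = 0;
    for (std::size_t i = 0; i < n; ++i) {
#pragma HLS PIPELINE II=1
        bins[pixels[i]]++;          // one bin update per cycle
    }
}
```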
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{motivation-performance.png}
\caption{Histogram execution time on Jetson TX1 and Xilinx ZCU102 for two different images}
\label{fig:HistogramexecutiontimeonJetsonTX1andXilinxZCU102fortwodifferentimages}
\end{figure*}
Fig.~\ref{fig:HistogramexecutiontimeonJetsonTX1andXilinxZCU102fortwodifferentimages} shows the execution time of the histogram operator running on the two embedded systems for two separate images, denoted by \textit{image1} and \textit{image2}, with different sizes ($ 512\times 512 $, $ 1024\times 1024 $, $ 2048\times 2048 $, and $ 8192\times 8192 $). Whereas \textit{image1} is based on a real picture, \textit{image2} contains only randomly generated pixels.
As can be seen, the FPGA shows better performance in most cases, and its performance does not depend on the image content, resulting in a deterministic behavior that is predictable if the image data size is known. In contrast, the performance of the histogram implementation on the GPU depends on the image content, which makes prediction difficult even if the image size is known a priori. Note that in the two cases of \textit{image1} ($ 2048\times 2048 $) and \textit{image1} ($ 8192\times 8192 $), the GPU implementation is faster than that of the FPGA.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{motivation-energy.png}
\caption{Histogram: Power and Energy Consumption}
\label{fig:HistogramPowerandEnergyConsumption}
\end{figure}
Fig.~\ref{fig:HistogramPowerandEnergyConsumption} depicts the power and energy consumption of the histogram. Fig.~\ref{fig:HistogramPowerandEnergyConsumption}(a) shows the power consumption on the two embedded systems for different image sizes. As can be seen, the embedded FPGA shows much less power consumption than that of the embedded GPU.
\begin{comment}
To increase our insight into the performance and power consumption behaviour of applications on embedded FPGA and GPU, Fig.~\ref{fig:SpMVexecutiontimeandPowerconsumption} shows the execution time and power consumption of sparse matrix vector multiplication (SpMV) on the embedded FPGA and GPU. The SpMV performance heavily depends on the sparsity and the pattern on non-zero (nnz) elements in the input matrix. The FPGA implementation of the SpMV uses the concepts of streaming pipelined computation and the GPU implementation is based on the Nvidia cuSPARSE library. Whereas Fig.~\ref{fig:SpMVexecutiontimeandPowerconsumption}(a) shows the execution time in \textit{ms}, Fig.~\ref{fig:SpMVexecutiontimeandPowerconsumption}(b) depicts the normalized power consumption to that of the FPGA implementation. These diagrams confirm that FPGA consumes less energy than GPU and can provide a performance comparable to that of the GPU for streaming applications. Note that in this example, FPGA shows better performance for small data sizes, whereas GPU is faster for large data sets.
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{motivation-spmv-performance.png}
\caption{SpMV: execution time and Power consumption}
\label{fig:SpMVexecutiontimeandPowerconsumption}
\end{figure}
\end{comment}
Now, if we equally divide \textit{image1} of size $ 8192\times 8192 $ between the embedded FPGA and GPU, the execution times on the FPGA and GPU would be about $ 3.51\,ms $ and $ 4.35\,ms $, respectively, which improves the performance by a factor of $ 6.99/4.35= 1.6 $. In this case, the FPGA and GPU energy consumptions are $ 4133.8\,\mu J$ and $ 13653.9\,\mu J $, respectively, which improves the total energy consumption by a factor of $ 1.59 $. Fig.~\ref{fig:Histogram:PerformanceandEnergytradeoff} shows the trade-off between energy consumption and performance for running the histogram on the FPGA, the GPU, and both.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{histogram-perfroamnce-energy-tradeoff.png}
\caption{Histogram: Performance and Energy trade-off}
\label{fig:Histogram:PerformanceandEnergytradeoff}
\end{figure}
This trade-off has motivated us to study the performance and energy consumption of different applications on both platforms and to propose an FPGA+GPU based embedded system that improves the total performance and energy consumption by scheduling a given task between the two accelerators.
The main contributions of this paper are as follows:
\begin{itemize}
\item Studying the challenges of design, modeling, and scheduling on FPGA+GPU embedded systems.
\item Clarifying the opportunities that addressing these challenges provides.
\item Proposing a stream computing technique on FPGAs to deal with the design challenge.
\item Modelling the FPGA performance and power consumption to cope with the modeling challenge.
\item Proposing an FPGA+GPU embedded system that improves performance and energy consumption to address the scheduling challenge.
\end{itemize}
\section{Previous work}
\label{sec:Previouswork}
There have been extensive studies on employing GPUs and FPGAs in desktop and cloud servers in the literature.
An OpenCL-based FPGA-GPU implementation of the database join operation is proposed in \cite{8124981}, using the Xilinx OpenCL SDK (i.e., SDAccel) to explore the design space. A real-time embedded heterogeneous GPU/FPGA system is proposed in \cite{7816978} for radar signal processing. An energy-efficient sparse matrix multiplication that utilizes a GPU, a Xeon Phi, and an FPGA is proposed in \cite{7482073}. An FPGA-GPU-CPU heterogeneous architecture is considered in \cite{6412108} to implement real-time cardiac physiological optical mapping. All these systems use PCIe to connect the GPU and FPGA to the host CPU. In contrast to these approaches, we assume a direct connection between the accelerators and the system memory.
A heterogeneous FPGA/GPU embedded system based on the Intel Arria 10 FPGA and the Nvidia Tegra X2 is presented in \cite{8502371} to perform ultrasound imaging tasks. In contrast to this approach, we study the challenges and opportunities that hybrid FPGA/GPU embedded systems can bring to edge computing by considering a wider range of tasks and applications.
\section{Design Challenge}
\label{sec:DesignChallenge}
This paper considers streaming applications, which can receive data, perform computation, and generate results in a pipelined fashion. Many tasks can be categorized as streaming applications, among them data-parallel, window, and block processing tasks~\cite{6574848}. There are many techniques and studies showing how to map a streaming application onto GPUs~\cite{6574848,6131835,7559535,8035150,6510489,5437735}; however, efficiently mapping these applications onto FPGAs using a systematic approach requires more research.
Traditionally, FPGA accelerators are designed with Hardware Description Languages (HDLs), which can potentially provide a high-performance implementation. However, the HDL-based design flow is tedious and time-consuming. In addition, the design is not easily adaptable to the versatile edge computing environment, which includes a variety of algorithms with different configurations and complexity. To alleviate these issues, High-Level Synthesis (HLS) has been proposed by academia and industry and is increasingly popular for accelerating algorithms on FPGA-based embedded platforms. Studies have shown that HLS can provide high-performance and energy-efficient implementations while shortening time-to-market and addressing today's system complexity~\cite{7368920}. Following the HLS design flow, we propose a streaming pipelined computing engine to implement several applications. Fig.~\ref{fig:OverviewofStreamcomputingengineonFPGA} shows the overview of the proposed stream computing engine. It consists of \textit{memory interfaces} to communicate with the system memory and \textit{computational pipelines}. There can be multiple pipelined chains in the FPGA that receive/send their data from/to memory through the ports available on the system (such as the HP ports available on the Xilinx Zynq MPSoC). Each pipeline can consist of a few stages, including \textit{read}, \textit{rearrange}, \textit{computation}, and \textit{write}. The \textit{read} stage fetches a stream of data from memory using the multiple wide-bit ports. The \textit{rearrange} stage reorganizes the data with split and concatenate operators to prepare the read data for the successor stages. The \textit{computation} stage performs the main job of the given task.
A pipelined \textit{for} loop is usually used to implement each stage, and its initiation interval (\textit{II}) defines its throughput. The \textit{II} of a pipelined loop is the minimum number of clock cycles between the starting points of two consecutive loop iterations. If $ n $ and $ l $ denote the number of iterations in a loop and the latency of one iteration, respectively, then a pipelined loop requires $( n \cdot II+l )$ clock cycles to finish. The stage with the maximum \textit{II} restricts the total throughput and determines the execution time. If $ II_{max} $ and $ n_{max} $ denote the maximum \textit{II} and the maximum number of iterations over the stages of a pipeline, respectively, then the total number of clock cycles required to finish the pipeline is given by Equ.~\ref{equ:exetime_pipeline}, where $l_{total}$ is the total latency of one iteration of all stages in the pipeline.
\begin{equation}
t_c=n_{max}II_{max}+l_{total} \label{equ:exetime_pipeline}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{stream-computing-engin.png}
\caption{Overview of Stream computing engine on FPGA}
\label{fig:OverviewofStreamcomputingengineonFPGA}
\end{figure}
\section{Modelling Challenge}
\label{sec:ModellingChallenge}
Performance and power modeling are key steps in designing a task efficiently on a heterogeneous system. Different types of modeling techniques have been proposed for GPUs in a system~\cite{7842980,6118939,7311199,7975297,6844867,6877247,8327055}. There are also a few articles proposing power and performance modeling for applications on FPGAs~\cite{6602306,8292049,6337536}. Most of these approaches are application-specific, consider only the FPGA resource utilization, or are simulation-based. In contrast, we propose a high-level power and performance model suitable for an application implemented with HLS tools.
This section addresses the power and performance modeling of streaming tasks running on an FPGA using the stream computing engine proposed in Fig.~\ref{fig:OverviewofStreamcomputingengineonFPGA}.
\subsection{Performance}
\label{subsec:Performance}
Traditionally, processing elements show their maximum performance when they can use their internal memory. For example, utilizing different levels of cache memory in CPUs is the main factor in improving the performance of several applications. GPUs utilize an L2 cache along with device memory to improve performance and provide parallel data access for the many streaming processors in their architecture. FPGAs also benefit from their internal BRAMs and distributed registers to store data temporarily during computation. The FPGA internal memories can be used as a cache memory tailored to the task implemented on the FPGA.
There have been many research activities on modifying the device and cache memory architectures to improve performance on GPUs and CPUs. For example, repetitive applications with data reuse can transfer their data once to the device or cache memory and then benefit from its low latency and high speed.
However, applications that require fresh data in each iteration, such as database processing, suffer from the high latency of accessing the system memory. Using the zero-copy methodology and pipelining data transfer with computation are techniques to alleviate the relatively high latency of the system memory. The zero-copy technique maps the system memory into the device address space so that it can be accessed directly by the processing elements. The Nvidia Jetson TX1 can utilize zero-copy through the unified memory programming technique, first introduced in CUDA 6.0.
The proposed streaming engine in Fig.~\ref{fig:OverviewofStreamcomputingengineonFPGA} also benefits from the zero-copy technique to read data from the system memory, pipelined with the computation. However, some parts of a task may not be able to benefit from this technique. For example, in the dense matrix-vector multiplication described by Equ.~\ref{equ:dense-matrix-vector-multiplication-1}, the vector $ x $ should be located in the FPGA internal memory (e.g., BRAM) to be reused for calculating each element of the output vector (i.e., $ y $).
In this case, a stream computing engine with only one stage (a pipelined \textit{for} loop) can transfer the $ x $ vector to the BRAM, and then a stream computing engine with three stages can read the elements of matrix $ A $ to generate the output. Fig.~\ref{fig:Densematrixvectormultiplicationstreamcomputing} shows this two-step stream processing. The first step is a pipelined loop with an iteration count of $ m $, where $ m $ is the size of vector $ x $. The second step can be implemented by pipelined \textit{for} loops with an iteration count of $ n\times m $, where $ n $ is the size of the output vector. Note that both steps share the same memory interface; however, they are shown separately for the sake of clarity.
\begin{equation}
y=Ax \;\; where \;\; y_i = \sum_{j=0}^{j=m-1}{a_{i,j}x_j}\;\; i=0,1,...,n \label{equ:dense-matrix-vector-multiplication-1}
\end{equation}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{mxv-stream-computing-engin.png}
\caption{Dense matrix-vector multiplication: stream computing}
\label{fig:Densematrixvectormultiplicationstreamcomputing}
\end{figure}
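The two-step scheme of Fig.~\ref{fig:Densematrixvectormultiplicationstreamcomputing} can be sketched in HLS-style C++ as follows. This is a simplified single-pipeline sketch, not the exact implementation; the buffer bound \texttt{MAX\_M}, data types, and pragma placement are illustrative assumptions.

```cpp
#include <cstddef>

// Sketch of the two-step DeMV stream computing engine.
// Step 1 buffers the x vector (modeling the copy into BRAM); step 2
// streams the n*m elements of A row by row, accumulating one output
// element per row. MAX_M bounds the on-chip buffer, as a BRAM would.
constexpr std::size_t MAX_M = 1024;

void demv_stream(const float *A, const float *x, float *y,
                 std::size_t n, std::size_t m) {
    float x_buf[MAX_M];
    for (std::size_t j = 0; j < m; ++j) {   // step 1: m iterations, II = 1
#pragma HLS PIPELINE II=1
        x_buf[j] = x[j];
    }
    for (std::size_t i = 0; i < n; ++i) {   // step 2: n*m iterations total
        float acc = 0.0f;
        for (std::size_t j = 0; j < m; ++j) {
#pragma HLS PIPELINE II=1
            acc += A[i * m + j] * x_buf[j];
        }
        y[i] = acc;
    }
}
```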
The number of clock cycles for finishing each step in Fig.~\ref{fig:Densematrixvectormultiplicationstreamcomputing} can be described by Equ.~\ref{equ:exetime_pipeline}. The $ II $ of the first step is one, as it uses burst data transfer, and its loop iteration count is $ m $; therefore, it takes $ (m+l_1) $ clock cycles to finish, where $ l_1 $ is the latency of one iteration.
The initiation interval of the second step can also be one (the optimized implementation is presented in Section~\ref{sec:ExperimentalSetups}) and its loop iteration count is $ n\times m $. Therefore, it takes $ (n\times m+l_2) $ clock cycles to finish, where $ l_2 $ is the latency of one iteration of all loops involved in the pipeline. Equ.~\ref{equ:mxv-clockcycles} represents the total number of clock cycles required to finish the whole task. If the input matrix is large enough that the $ m $, $ l_1 $, and $ l_2 $ terms can be ignored, Equ.~\ref{equ:mxv-clockcycles-approximate} represents the performance of the task running on the FPGA, which is directly determined by the data size (i.e., the input matrix).
Fig.~\ref{fig:DeMVperformanceandpowerversusdatasize}(a) shows the execution time versus data size for the dense matrix vector multiplication.
\begin{equation}
T_c=\underbrace{(m+l_1)}_{Stage 1}+\underbrace{(n\times m+l_2)}_{Stage 2} \label{equ:mxv-clockcycles}
\end{equation}
\begin{equation}
T_c\approx (n\times m) \label{equ:mxv-clockcycles-approximate}
\end{equation}
Equation~\ref{equ:mxv-clockcycles} can be generalized to Equ.~\ref{equ:task-clockcycles-model} to model the performance of a task with $ S $ stages.
\begin{equation}
T_c= \sum_{s=0}^{S}{n_s\times II_s + l_s} \label{equ:task-clockcycles-model}
\end{equation}
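For design-space exploration it is convenient to evaluate Equ.~\ref{equ:task-clockcycles-model} in software. A minimal helper could look as follows; the stage parameters passed in are placeholders, not measured values.

```cpp
#include <cstddef>
#include <vector>

// One pipelined stage: iteration count n, initiation interval II,
// and single-iteration latency l (all in clock cycles).
struct Stage {
    std::size_t n;
    std::size_t II;
    std::size_t l;
};

// Total clock cycles of a multi-stage streaming engine, per the model
// T_c = sum_s (n_s * II_s + l_s).
std::size_t total_cycles(const std::vector<Stage> &stages) {
    std::size_t t = 0;
    for (const Stage &s : stages)
        t += s.n * s.II + s.l;
    return t;
}
```

For the two-step DeMV engine, passing the stages $(m, 1, l_1)$ and $(n\times m, 1, l_2)$ reproduces Equ.~\ref{equ:mxv-clockcycles}.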
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{DeMV-perfromance-power-model.png}
\caption{DeMV: performance and power versus data size}
\label{fig:DeMVperformanceandpowerversusdatasize}
\end{figure}
\subsection{Power and Energy}
\label{subsec:PowerandEnergy}
The power consumption of a task running on an accelerator usually consists of two main parts: the accelerator and the memory power consumption.
The accelerator power consumption is determined by the number of switching activities in the underlying semiconductor fabric, caused by value changes on the input data. In this section, we propose a simple model for the average FPGA and memory power of the stream computing engine proposed in the previous section. For the sake of simplicity, let us take the dense matrix-vector multiplication shown in Fig.~\ref{fig:Densematrixvectormultiplicationstreamcomputing}. If $ p_1 $ and $ p_2 $ represent the average power of the first and second stages, respectively, then Equ.~\ref{equ:mxv-task-average-power-1}, or equivalently Equ.~\ref{equ:mxv-task-average-power-2}, gives the total average power. Note that in these formulas we have ignored the iteration latencies (i.e., $ l_1 $ and $ l_2 $) of Equ.~\ref{equ:mxv-clockcycles} for the sake of simplicity.
For large data sizes, the second term in Equ.~\ref{equ:mxv-task-average-power-2} mainly defines the power; for small data sizes, both terms are comparable and together determine the total power.
Fig.~\ref{fig:DeMVperformanceandpowerversusdatasize}(b) shows the power consumption versus data size for the dense matrix vector multiplication.
\begin{equation}
P_{ave}= (mp_1+(n\times m)p_2)/(m + n\times m) \label{equ:mxv-task-average-power-1}
\end{equation}
\begin{equation}
P_{ave} = \frac{m}{(m + n\times m)}p_1+\frac{n\times m}{m + n\times m}p_2 \label{equ:mxv-task-average-power-2}
\end{equation}
This formula can be generalized to tasks with more stages as Equ.~\ref{equ:task-average-power-model}, where $ S $ is the number of stages and $ p_s $ and $ n_s $ represent the power and data size of each stage.
\begin{equation}
P_{ave} = \sum_{s=0}^{S}{\frac{n_s}{(\sum_{i=0}^{S}{n_i})}p_s} \label{equ:task-average-power-model}
\end{equation}
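A minimal software helper implementing Equ.~\ref{equ:task-average-power-model} is shown below; the per-stage powers and data sizes used in any call are placeholders, not measurements.

```cpp
#include <cstddef>
#include <vector>

// Average power of a multi-stage streaming task, modeled as the
// data-size-weighted mean of the per-stage average powers p_s:
// P_ave = sum_s (n_s / sum_i n_i) * p_s.
double average_power(const std::vector<std::size_t> &n,
                     const std::vector<double> &p) {
    double total_n = 0.0, weighted = 0.0;
    for (std::size_t s = 0; s < n.size(); ++s) {
        total_n  += static_cast<double>(n[s]);
        weighted += static_cast<double>(n[s]) * p[s];
    }
    return weighted / total_n;
}
```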
\section{Scheduling Challenge}
\label{sec:SchedulingChallenge}
Task scheduling among multiple processors in a system is a mature subject with extensive research activity. However, existing techniques require modification and tuning to be applied to a new system such as the heterogeneous FPGA+GPU embedded system considered in this paper. For the sake of simplicity, we only consider the scheduling problem for data-parallel tasks. In this case, the data should be divided between the FPGA and GPU to achieve high performance. For this purpose, both the FPGA and GPU should utilize their maximum performance and should finish their tasks at the same time. In other words, load balancing is required for maximum performance.
Here we propose a simple task division between the FPGA and GPU for large data sizes, so that the behavior of the system is more predictable and depends mainly on the data sizes. Under this assumption, the FPGA and GPU execution times are directly proportional to the data size, as shown in Equs.~\ref{equ:fpga-per-1} and~\ref{equ:gpu-per-1}, where $ n_{fpga} $ and $ n_{gpu} $ are the data sizes on the FPGA and GPU, respectively, and $ a $ and $ b $ are constants determined by the data patterns.
In this case, task division and load balancing are described by Equs.~\ref{equ:task-division} and~\ref{equ:load-balancing}, respectively. Solving these equations results in Equ.~\ref{equ:result-1}. If $ \alpha $ represents the GPU speed-up compared to the FPGA (i.e., $ \alpha=a/b $), then Equ.~\ref{equ:fpga-per-6} gives the task scheduling solution. Section~\ref{sec:ExperimentalSetups} empirically evaluates this task scheduling solution.
\begin{equation}
t_{fpga}=a.n_{fpga} \label{equ:fpga-per-1}
\end{equation}
\begin{equation}
t_{gpu}=b.n_{gpu}\label{equ:gpu-per-1}
\end{equation}
\begin{equation}
n_{fpga}+n_{gpu} = n \label{equ:task-division}
\end{equation}
\begin{equation}
a.n_{fpga}=b.n_{gpu} \label{equ:load-balancing}
\end{equation}
\begin{equation}
n_{fpga} = \frac{b}{a+b}n\;\; and\;\; n_{gpu} = \frac{a}{a+b}n \label{equ:result-1}
\end{equation}
\begin{equation}
n_{fpga} = \frac{1}{\alpha+1}n\;\; and\;\; n_{gpu} = \frac{\alpha}{1+\alpha}n \label{equ:fpga-per-6}
\end{equation}
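The split of Equ.~\ref{equ:fpga-per-6} is straightforward to compute at run time. The following sketch assigns the remainder of the integer division to the GPU, which is an arbitrary choice made here for illustration.

```cpp
#include <cstddef>

// Data-parallel split between FPGA and GPU, given the measured GPU
// speed-up alpha = a/b relative to the FPGA:
//   n_fpga = n / (alpha + 1),  n_gpu = n - n_fpga.
// Assigning the integer-division remainder to the GPU is an arbitrary
// rounding choice for illustration.
void split_workload(std::size_t n, double alpha,
                    std::size_t &n_fpga, std::size_t &n_gpu) {
    n_fpga = static_cast<std::size_t>(n / (alpha + 1.0));
    n_gpu  = n - n_fpga;
}
```

For example, with $\alpha = 1$ (equal speeds) the data is split evenly, and with $\alpha = 3$ the GPU receives three quarters of the data.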
\section{Experimental Setups}
\label{sec:ExperimentalSetups}
One idea for a research platform that combines an FPGA and a GPU is to connect the Xilinx Zynq MPSoC and the Jetson TX1. However, since suitable system drivers are not available from the SoC vendors, we decided to connect a Xilinx Virtex-7 FPGA to the Jetson TX1 board. Table~\ref{tbl:ZynqMPSoConZCU102boardversusVirtex7onVC707board} compares the Virtex-7 FPGA with the Zynq MPSoC and shows that the two FPGAs are very close in terms of available resources. The experimental results also show the low power consumption of the Virtex-7.
\begin{table}
\caption{ZynqMPSoC on ZCU102 board versus Virtex 7 on VC707 board}
\label{tbl:ZynqMPSoConZCU102boardversusVirtex7onVC707board}
\includegraphics[width=0.8\linewidth]{Virtex-7-ZynqMPSoC.png}
\end{table}
Although the FPGA is connected to the Jetson TX1 over a PCIe bus, it can still be used to study some of the features and behaviors of heterogeneous embedded systems if we assume that the input data is available in the FPGA onboard memory, to which the FPGA has direct access over a 512-bit wide AXI bus.
Fig.~\ref{fig:TheFPGAarchitecture} illustrates the system hardware architecture, in which the FPGA is connected to the Jetson TX1 board over a 4x PCIe bus. The FPGA hardware comprises two sections.
The first section, consisting of the Xillybus IP~\cite{Xillybus-ref}, the data transfer unit (DTU), and the DDR3 interface, provides the data path between the PCIe and the onboard DDR memory. The Xillybus IP provides streaming data transfer over PCIe; the DTU receives this stream and copies it into the DDR3 memory over a master AXI bus through the DDR3 interface. Fig.~\ref{fig:DatatransferunitDTU} shows the high-level C code for the write-to-memory part of the DTU, synthesizable with Xilinx Vivado HLS. It consists of a pipelined loop that receives a unit of data and writes it to the memory in each clock cycle. The maximum memory bandwidth provided by this first path is 800\,MBytes/s, mainly because of the PCIe Gen1 interface used on the Jetson TX1, which is compatible with the Xilinx IP core located in the Virtex-7 FPGA.
\begin{figure*}
\centering
\includegraphics[width=0.8\linewidth]{Architecture-Overview.png}
\caption{The experimental FPGA+GPU architecture }
\label{fig:TheFPGAarchitecture}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{DataTransferUnit-Code.png}
\caption{Data transfer unit (DTU)}
\label{fig:DatatransferunitDTU}
\end{figure}
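The write-to-memory loop shown in Fig.~\ref{fig:DatatransferunitDTU} can be approximated in plain C as below. The function name, the array-based stand-ins for the Xillybus stream and the AXI master port, and the 32-bit word width are assumptions for illustration; in the synthesizable version the loop carries an HLS PIPELINE pragma so that one word is transferred per clock cycle.

```c
#include <stddef.h>
#include <stdint.h>

/* Behavioural sketch of the DTU write loop: each iteration consumes one
 * 32-bit word from the input stream and writes it to consecutive memory
 * addresses starting at `start`.  In the HLS version this loop is
 * pipelined (II = 1), so one word is transferred per clock cycle. */
void dtu_write_to_memory(const uint32_t *stream, uint32_t *ddr,
                         size_t start, size_t n_words)
{
    for (size_t i = 0; i < n_words; i++) {
        /* #pragma HLS PIPELINE II=1  (in the synthesizable version) */
        ddr[start + i] = stream[i];
    }
}
```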
The second section consists of the user design and the DDR3 interface, which can provide up to $6.4$\,GByte/s using a 512-bit-wide bus at a frequency of 100\,MHz.
A Xilinx MicroBlaze soft processor core is used to control the activation of the different paths in the FPGA. For this purpose, it runs a firmware that receives commands from the host processor on the Jetson TX1 through PCIe and activates the application design. This controller also informs the system when the task execution finishes. Onboard memory management and allocation are other tasks of the controller. In summary, the firmware running on the MicroBlaze performs the following functions:
\begin{itemize}
\item \textbf{initFpga}: This function performs the FPGA initialization and prepares the memory allocation tables on the MicroBlaze.
\item \textbf{fpgaMalloc}: This function gets two arguments: the ID variable and the size of the memory requested for allocation. It returns the start address of the allocated memory or -1 in the case of failure of the memory allocation process.
\item \textbf{startAccel}: Receiving this command, the MicroBlaze activates the design to perform its task.
\item \textbf{fpgaFree}: This function resets the memory allocation table corresponding to the allocated memories.
\end{itemize}
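As a rough software model of the allocation-related firmware calls above, the toy implementation below mimics the described semantics (start address on success, $-1$ on failure). The table depth, memory size and bump-pointer policy are hypothetical assumptions, not the paper's actual firmware; \textbf{startAccel} is omitted since it only activates the user design.

```c
#include <string.h>

/* Toy model of the MicroBlaze memory-allocation table.  The 256 MB
 * memory size, 16-entry table and bump-pointer policy are assumptions
 * for illustration only. */
#define FPGA_MEM_SIZE (256u * 1024u * 1024u)
#define MAX_ALLOCS    16

static unsigned alloc_addr[MAX_ALLOCS];
static unsigned alloc_size[MAX_ALLOCS];
static unsigned next_free;

/* initFpga: reset the allocation table. */
void initFpga(void)
{
    memset(alloc_size, 0, sizeof alloc_size);
    next_free = 0;
}

/* fpgaMalloc: returns the start address of the allocated region,
 * or -1 if the request cannot be satisfied. */
long fpgaMalloc(int id, unsigned size)
{
    if (id < 0 || id >= MAX_ALLOCS || next_free + size > FPGA_MEM_SIZE)
        return -1;
    alloc_addr[id] = next_free;
    alloc_size[id] = size;
    next_free += size;
    return (long)alloc_addr[id];
}

/* fpgaFree: clear the table entry for one allocation. */
void fpgaFree(int id)
{
    if (id >= 0 && id < MAX_ALLOCS)
        alloc_size[id] = 0;
}
```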
The algorithm under acceleration is described in HLS C/C++ that is synthesizable by the Xilinx Vivado-HLS which uses the AXI master protocol to send/receive data to/from DDR3 memory using the burst data transfer protocol.
\section{Experimental Results}
\label{sec:ExperimentalResults}
Three different tasks are studied as benchmarks in this section to evaluate the potential of an embedded FPGA+GPU system for providing high performance with low energy consumption. The results show that concurrent execution between the FPGA and the GPU can yield up to a 2x improvement in performance or energy consumption after efficient algorithm implementation, correct workload balancing and data transfer optimizations. These three algorithms are: \textit{histogram}, \textit{dense matrix-vector multiplication} (DeMV), and \textit{sparse matrix-vector multiplication} (SpMV). The experimental setup explained in Section~\ref{sec:ExperimentalSetups} is used for real measurements. In addition, for the sake of completeness, the two distinct Jetson TX1 and Zynq MPSoC systems are also used to generate results for comparison even though they are not connected.
\subsection{Histogram}
\label{subsec:histogram}
Fig.~\ref{fig:Histogramalgorithm}(a) shows the original histogram pseudo-code. It consists of a \textit{for} loop iterating over the entire input data, modifying the \textit{hist} array (the histogram bin holder) using the input data as the index to access the corresponding bin. This na\"ive algorithm can easily be pipelined on the FPGA using the Xilinx Vivado-HLS tool; however, because of the data dependency between two consecutive loop iterations (note that two consecutive iterations can modify the same bin in the \textit{hist} array), the obtained initiation interval is 2, which reduces the performance. Fig.~\ref{fig:Histogramalgorithm}(b) shows one hardware thread of the stream computing implementation of the histogram suitable for the FPGA. It consists of two stages. The first stage, from Line 1 to Line 3, reads data from the memory using the burst protocol, i.e., reading one data item per clock cycle (\textit{II=1}). The second stage modifies the bins. As the initiation interval of the pipelined loop for the \textit{hist} modification is 2, this loop reads two data items per iteration and modifies \textit{hist}, resolving the potential conflict using the \textit{if} condition at Line 9. Since this stage reads two data items in each iteration with $II=2$, the average number of data items read per clock cycle is $2/2=1$; that is, it consumes data at the same pace as it is generated by the first stage.
As the total memory bus width in the Zynq MPSoC and Virtex 7 is 512 bits, and each pixel in the input image is represented by an 8-bit code, $512/8=64$ hardware threads can be instantiated to implement the histogram on the FPGA.
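The conflict-resolution idea of the second stage can be modelled in plain C as follows. This is a behavioural sketch, not the synthesized HLS code: the two-items-per-iteration structure mirrors the $II=2$ loop described above, where a same-bin conflict is resolved with a single combined update.

```c
#include <stddef.h>
#include <stdint.h>

/* One hardware thread of the streaming histogram, modelled in software.
 * Two pixels are consumed per iteration; if they address the same bin,
 * the bin is bumped by 2 in one access, which is what lets the pipeline
 * sustain one pixel per clock on average despite II = 2. */
void histogram_2x(const uint8_t *in, size_t n, uint32_t hist[256])
{
    for (size_t i = 0; i + 1 < n; i += 2) {
        uint8_t a = in[i], b = in[i + 1];
        if (a == b)
            hist[a] += 2;      /* conflict: one combined update */
        else {
            hist[a] += 1;
            hist[b] += 1;
        }
    }
    if (n % 2)                 /* tail element for odd-sized inputs */
        hist[in[n - 1]] += 1;
}
```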
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{histogram-code.png}
\caption{Histogram algorithm}
\label{fig:Histogramalgorithm}
\end{figure}
Table~\ref{tbl:HistogramFPGAresourceutilization} shows the resource utilization of the 64-thread implementations of the histogram on the Zynq MPSoC and Virtex 7 FPGAs. The power consumption of the histogram task versus the data size on the three platforms is shown in Fig.~\ref{fig:Histogrampowerconsumption}. As mentioned in Subsection~\ref{subsec:PowerandEnergy}, the power consumption consists of two components: the accelerator (i.e., GPU or FPGA) and the memory. As can be seen from these diagrams, running the histogram on the Zynq MPSoC consumes the least power among the three platforms. As both the Jetson TX1 and the Zynq MPSoC utilize embedded memories, their memory power consumption is less than the Virtex 7 memory power requirement. The GPU consumes about $7.7$ and $4.8$ times more power than the Zynq MPSoC and Virtex 7, respectively. Fig.~\ref{fig:Histogramexecutiontime} compares the histogram execution time and energy consumption versus the data size on the three platforms. As can be seen, although the performance of this task is very close on the Jetson TX1 and Zynq MPSoC, its energy consumption on the Zynq MPSoC is about 10 times less than that of the Jetson TX1.
\begin{table}
\caption{Histogram FPGA resource utilization}
\label{tbl:HistogramFPGAresourceutilization}
\includegraphics[width=1\linewidth]{histogram-resource-utilization.png}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{histogram-power-results.png}
\caption{Histogram power consumption}
\label{fig:Histogrampowerconsumption}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{histogram-exe-time-results.png}
\caption{Histogram execution time}
\label{fig:Histogramexecutiontime}
\end{figure}
According to the performance diagram of Fig.~\ref{fig:Histogramexecutiontime}, the speed-up factors (i.e., $ \alpha $ in Equ.~\ref{equ:fpga-per-6}) for the Jetson relative to the Zynq MPSoC and Virtex 7 FPGAs are $ 0.85 $ and $ 2.0 $ for large data sizes. Table~\ref{tbl:HistogramFPGAJetsonscheduling} shows the results of task division between the GPU and FPGA using Equ.~\ref{equ:fpga-per-6} to divide an input data size of $8388608$ bytes between the GPU and FPGA. The table shows $ 1.79 $ and $ 2.29 $ times improvement in performance and energy consumption, respectively, if the task is divided between the Zynq and the Jetson compared to only the GPU running the application. In addition, it shows $ 1.18 $ and $ 1.45 $ times improvement in performance and reduction in energy consumption, respectively, if the task is divided between the Virtex 7 and the Jetson compared to only the GPU running the application.
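Equ.~\ref{equ:fpga-per-6} itself is not reproduced in this excerpt, but a proportional-to-throughput split is consistent with the reported numbers: if the GPU is $\alpha$ times as fast as the FPGA, assigning the GPU a fraction $\alpha/(1+\alpha)$ of the input makes both devices finish at the same time. The sketch below encodes that rule; the function name and the split policy are our assumptions, not necessarily the paper's exact formulation.

```c
/* Hypothetical workload split derived from the speed-up factor alpha
 * (GPU throughput divided by FPGA throughput): both devices finish
 * together when the GPU receives alpha/(1+alpha) of the input bytes. */
void split_workload(double alpha, unsigned long total,
                    unsigned long *gpu_part, unsigned long *fpga_part)
{
    *gpu_part  = (unsigned long)(total * alpha / (1.0 + alpha));
    *fpga_part = total - *gpu_part;
}
```
For example, with $\alpha = 0.85$ (histogram, Zynq vs. Jetson) the FPGA would receive slightly more than half of the input, matching the observation that the Zynq is the faster device for this task.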
\begin{table}
\caption{Histogram FPGA\&Jetson task division for $8388608$ bytes of data}
\label{tbl:HistogramFPGAJetsonscheduling}
\includegraphics[width=1\linewidth]{histogram-scheduling.png}
\end{table}
\subsection{Dense Matrix-Vector Multiplication (DeMV)}
\label{subsec:DenseMatrixVectorMultiplication}
Fig.~\ref{fig:DeMVPseudoCodes}(a) shows the na\"ive pseudo-code for the dense matrix-vector multiplication, which consists of two nested loops performing the accumulation statement at Line 4. Fig.~\ref{fig:DeMVPseudoCodes}(b) shows one thread of the pipelined version of this task, which consists of two stages. The first stage, from Line 1 to Line 4, reads data on each clock cycle. The pipelined loop in the second stage, from Line 6 to Line 12, shows an initiation interval of \textit{II=4} after synthesis, which reduces the total performance. To address this issue, we have unrolled this loop by a factor of 4 to read four data values in each iteration. Therefore, it consumes data at the same pace as it is generated by the first stage. This results in \textit{II=1} for the whole design. Table~\ref{tbl:DeMVFPGAresourceutilization} shows the FPGA resource utilization.
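The unroll-by-4 accumulation can be sketched behaviourally as below. Using four independent partial sums mirrors how the unrolled HLS loop breaks the loop-carried accumulation dependency; the assumption here (for brevity) is that the row length is a multiple of 4.

```c
#include <stddef.h>

/* One thread of the pipelined DeMV: the inner accumulation is unrolled
 * by 4 so that four products are summed per iteration into independent
 * accumulators, matching the II = 4 of the original accumulation loop
 * and keeping the overall rate at one element per clock.
 * n is assumed to be a multiple of 4. */
void demv_row(const float *row, const float *vec, size_t n, float *out)
{
    float acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
    for (size_t j = 0; j < n; j += 4) {
        acc0 += row[j]     * vec[j];
        acc1 += row[j + 1] * vec[j + 1];
        acc2 += row[j + 2] * vec[j + 2];
        acc3 += row[j + 3] * vec[j + 3];
    }
    *out = acc0 + acc1 + acc2 + acc3;  /* final reduction */
}
```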
Fig.~\ref{fig:DeMVpowerconsumption} shows the power consumption diagrams of running DeMV on the three embedded platforms. The GPU consumes up to 5.2 and 4.3 times more power than the Zynq MPSoC and Virtex 7 FPGAs, respectively. Fig.~\ref{fig:DeMVPerfroamnceconsumption} compares the DeMV performance and energy consumption. Similar to the histogram task, the Zynq shows much lower energy consumption than the other PEs.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{DeMV-code.png}
\caption{DeMV Pseudo-Codes}
\label{fig:DeMVPseudoCodes}
\end{figure}
\begin{table}
\caption{DeMV FPGA resource utilization}
\label{tbl:DeMVFPGAresourceutilization}
\includegraphics[width=1\linewidth]{DeMV-resource-utilization.png}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{demv-power-results.png}
\caption{DeMV power consumption}
\label{fig:DeMVpowerconsumption}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{demv-exe-time-results.png}
\caption{DeMV performance and energy consumption}
\label{fig:DeMVPerfroamnceconsumption}
\end{figure}
According to the performance diagram of Fig.~\ref{fig:DeMVPerfroamnceconsumption}, the speed-up factors (i.e., $ \alpha $ in Equ.~\ref{equ:fpga-per-6}) for the Jetson relative to the Zynq MPSoC and Virtex 7 FPGAs are $ 0.51 $ and $ 0.23 $ for large data sizes. Table~\ref{tbl:DeMVFPGAJetsonscheduling} shows the results of task division between the GPU and FPGA using Equ.~\ref{equ:fpga-per-6} to divide an input data size of $ 33554432 $ between the GPU and FPGA. The table shows $ 1.48 $ and $ 1.19 $ times improvement in performance and energy consumption, respectively, if the task is divided between the Zynq and the Jetson compared to only the GPU running the application. In addition, it shows a $ 1.22\times $ improvement in performance and a slight increase (i.e., $ 1-0.96=0.04\times $) in energy consumption if the task is divided between the Virtex 7 and the Jetson compared to only the GPU running the application.
\begin{table}
\caption{DeMV FPGA\&Jetson task division for data size of $ 33554432 $ }
\label{tbl:DeMVFPGAJetsonscheduling}
\includegraphics[width=1\linewidth]{DeMV-scheduling.png}
\end{table}
\subsection{Sparse Matrix-Vector Multiplication (SpMV)}
\label{subsec:SparseMatrixVectorMultiplication}
The pseudo-code of the sparse matrix-vector multiplication, based on the Compressed Sparse Row (CSR) representation~\cite{6375570}, is shown in Fig.~\ref{fig:SpMVPseudoCodes}(a). One thread of the corresponding streaming computation suitable for the FPGA is shown in Fig.~\ref{fig:SpMVPseudoCodes}(b). Table~\ref{tbl:SpMVFPGAresourceutilization} contains the FPGA resource utilization of this task after synthesis.
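For reference, the CSR-based computation of Fig.~\ref{fig:SpMVPseudoCodes}(a) follows the standard form sketched below. This is a plain-C behavioural version, not the FPGA streaming thread.

```c
#include <stddef.h>

/* Sparse matrix-vector multiply over the CSR representation:
 * row_ptr[i]..row_ptr[i+1] delimits the nonzeros of row i, col_idx
 * holds their column positions, and val holds their values. */
void spmv_csr(size_t n_rows, const size_t *row_ptr, const size_t *col_idx,
              const float *val, const float *x, float *y)
{
    for (size_t i = 0; i < n_rows; i++) {
        float acc = 0.0f;
        for (size_t k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            acc += val[k] * x[col_idx[k]];   /* gather from x by column */
        y[i] = acc;
    }
}
```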
The SpMV power consumptions versus data sizes for the three platforms are shown in Fig.~\ref{fig:SpMVpowerconsumption}. As can be seen, the Zynq MPSoC consumes the least power compared to the other platforms. Fig.~\ref{fig:SpMVperformanceandenergyconsumption} compares the performance and energy consumption of the SpMV on Jetson TX1, Zynq MPSoC and Virtex 7.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{SpMV-code.png}
\caption{SpMV Pseudo-Codes}
\label{fig:SpMVPseudoCodes}
\end{figure}
\begin{table}
\caption{SpMV FPGA resource utilization}
\label{tbl:SpMVFPGAresourceutilization}
\includegraphics[width=1\linewidth]{SpMV-resource-utilization.png}
\end{table}
\begin{figure*}
\centering
\includegraphics[width=1\linewidth]{spmv-power-results.png}
\caption{SpMV power consumption}
\label{fig:SpMVpowerconsumption}
\end{figure*}
\begin{figure}
\centering
\includegraphics[width=1\linewidth]{spmv-exe-time-results.png}
\caption{SpMV performance and energy consumption}
\label{fig:SpMVperformanceandenergyconsumption}
\end{figure}
According to the performance diagram of Fig.~\ref{fig:SpMVperformanceandenergyconsumption}, the speed-up factors (i.e., $ \alpha $ in Equ.~\ref{equ:fpga-per-6}) for the Jetson relative to the Zynq MPSoC and Virtex 7 FPGAs are $ 3.2 $ and $ 6.4 $ for large data sizes. Table~\ref{tbl:SpMVFPGAJetsonscheduling} shows the results of task division between the GPU and FPGA using Equ.~\ref{equ:fpga-per-6} to divide an input data size of $ 2943887 $ between the GPU and FPGA. The table shows $ 1.46 $ and $ 1.23 $ times improvement in performance and energy consumption, respectively, if the task is divided between the Zynq and the Jetson compared to only the GPU running the application. In addition, it shows $ 1.15\times $ and $ 1.1\times $ improvement in performance and reduction in energy consumption, respectively, if the task is divided between the Virtex 7 and the Jetson compared to only the GPU running the application.
\begin{table}
\caption{SpMV FPGA\&Jetson task division for data size of $ 2943887 $}
\label{tbl:SpMVFPGAJetsonscheduling}
\includegraphics[width=1\linewidth]{SpMV-scheduling.png}
\end{table}
\section{Conclusions}
\label{sec:Conclusions}
This paper has studied the challenges and opportunities that designers will face when using a heterogeneous embedded FPGA+GPU platform. The challenges are categorized into three groups: design, modeling, and scheduling. Using the image histogram operation, the paper has clarified the trade-off between performance and energy consumption by distributing the task between the GPU and FPGA. Focusing on the FPGA, then the paper has proposed a stream computing engine with the corresponding modeling technique to cope with the design and modeling challenges, respectively. A scheduling technique has been proposed to improve the performance and energy consumption by distributing a parallel task between the FPGA and GPU. Three applications including histogram, dense matrix-vector multiplication, and sparse matrix-vector multiplication are used to evaluate the proposed techniques. The experimental results have shown improvement in performance and reduction in energy consumption by factors of $ 1.79\times $ and $ 2.29\times $, respectively.
\bibliographystyle{ACM-Reference-Format}